Abstract
Background: Large-scale biological jobs on high-performance computing systems require manual intervention
if one or more of the computing cores on which they execute fail. This incurs not only the cost of
maintaining the job, but also the cost of the time taken to reinstate the job and the risk of losing the data
and computation completed by the job before it failed. Approaches that can proactively detect
computing core failures and relocate the affected core's job onto reliable cores are
a significant step towards automating fault tolerance.
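The following is a minimal sketch, not the paper's implementation, of the general idea of proactive failure detection and job relocation: a monitor presumes a core has failed when its heartbeat lapses and moves that core's job to a spare. The names `Core`, `relocate` and the timeout value are illustrative assumptions.

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds without a heartbeat before a core is presumed failed (assumed value)

class Core:
    """Illustrative stand-in for a compute core that periodically reports a heartbeat."""
    def __init__(self, core_id):
        self.core_id = core_id
        self.last_heartbeat = time.time()
        self.job = None

    def is_alive(self, now):
        return (now - self.last_heartbeat) < HEARTBEAT_TIMEOUT

def relocate(failed_core, spare_cores):
    """Move the job of a presumed-failed core onto the first available spare core."""
    if not spare_cores:
        return None
    spare = spare_cores.pop(0)
    spare.job = failed_core.job
    failed_core.job = None
    return spare

def monitor(cores, spare_cores):
    """Proactively scan cores; relocate the job of any core whose heartbeat has lapsed."""
    now = time.time()
    for core in cores:
        if core.job is not None and not core.is_alive(now):
            spare = relocate(core, spare_cores)
            if spare is not None:
                print(f"core {core.core_id} presumed failed; job moved to core {spare.core_id}")
```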
Method: This paper describes an experimental investigation into the use of multi-agent approaches
for fault tolerance. Two approaches are studied, the first operating at the job level and the second at the core level.
The approaches are investigated for single-core failure scenarios that can occur during the execution of
parallel reduction algorithms on computer clusters. A third approach is proposed that incorporates
multi-agent technology at both the job and the core level. Experiments are performed in the context of genome
searching, a popular computational biology application.
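As a rough illustration of the setting, the sketch below shows a parallel reduction (here a distributed sum) in which the partial result of one failed worker is recomputed by resubmitting its chunk to another worker, i.e. relocating the lost work. The worker pool, chunking scheme and injected failure are simplifying assumptions and do not reproduce the paper's experimental setup.

```python
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

def partial_sum(chunk):
    # Each worker computes a partial result over its own chunk of the data.
    return sum(chunk)

def reduce_with_recovery(data, n_workers=4, failed_worker=2):
    # Split the data across workers (round-robin chunking, an arbitrary choice here).
    chunks = [data[i::n_workers] for i in range(n_workers)]
    partials = [None] * n_workers
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        futures = {i: pool.submit(partial_sum, chunks[i]) for i in range(n_workers)}
        for i, fut in futures.items():
            if i == failed_worker:
                # Simulated single-core failure: ignore this worker's result and
                # resubmit its chunk to the pool, relocating the lost work.
                partials[i] = pool.submit(partial_sum, chunks[i]).result()
            else:
                partials[i] = fut.result()
    # Final reduction step combines the partial results.
    return reduce(lambda a, b: a + b, partials)

if __name__ == "__main__":
    print(reduce_with_recovery(list(range(1000))))  # 499500
```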
Result: The key conclusion is that the proposed approaches are feasible for automating fault tolerance
in high-performance computing systems with minimal human intervention. In a typical experiment in
which fault tolerance is studied, centralised and decentralised checkpointing approaches add, on
average, 90% to the actual time for executing the job, whereas in the same experiment the
multi-agent approaches add only 10% to the overall execution time.
| Original language | English |
| --- | --- |
| Pages (from-to) | 28-41 |
| Number of pages | 14 |
| Journal | Computers in Biology and Medicine |
| Volume | 48 |
| Issue number | 1 |
| Early online date | 20 Feb 2014 |
| DOIs | |
| Publication status | Published - 01 May 2014 |