Joint Analysis of Multiple Algorithms and Performance Measures

Cassio P. de Campos, Alessio Benavoli

Research output: Contribution to journal › Article › peer-review


Abstract

There has been increasing interest in the development of new methods that use Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the question becomes how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. The standard tests used for this purpose can jointly consider neither multiple performance measures nor multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that account for multiple competing measures at the same time and compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend both by discovering conditional independences among measures, which reduces the number of parameters of the models, since the number of studied cases in such comparisons is usually small. Data from a comparison among general-purpose classifiers are used to show a practical application of our tests.
Original language: English
Pages (from-to): 69-86
Number of pages: 18
Journal: New Generation Computing
Volume: 35
Issue number: 1
Early online date: 12 Dec 2016
Publication status: Published - Jan 2017
