Within-class Multimodal Classification

Huan Wan, Hui Wang, Bryan Scotney, Jun Liu, Wing W. Y. Ng

Research output: Contribution to journal › Article › peer-review

Abstract

In many real-world classification problems there exist multiple subclasses (or clusters) within a class; in other words, the underlying data distribution is within-class multimodal. One example is face recognition, where a face (i.e. a class) may be presented in frontal view or side view, corresponding to different modalities. This issue has been largely ignored, or at least understudied, in the literature, and how to address within-class multimodality remains an unsolved problem. In this paper, we present an extensive study of within-class multimodal classification. The study is guided by a number of research questions and is conducted through experimentation on artificial and real data. In addition, we establish a case for within-class multimodal classification characterised by the concurrent maximisation of between-class separation, between-subclass separation and within-class compactness. Extensive experimental results show that within-class multimodal classification consistently leads to significant performance gains when within-class multimodality is present in the data. Furthermore, we find that within-class multimodal classification offers a competitive solution to face recognition under varying lighting and face pose conditions. In our opinion, the case for within-class multimodal classification is established; extending machine learning algorithms (e.g. the Gaussian mixture model) to pursue within-class multimodal classification, in whole or in part, is therefore a milestone worth achieving.
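The core idea (modelling subclasses within each class rather than treating each class as unimodal) can be illustrated with a minimal sketch. This is not the paper's algorithm; it simply fits one Gaussian mixture per class, as suggested by the abstract's mention of the Gaussian mixture model, and classifies a point by the class whose mixture gives it the highest log-likelihood. The data, component counts, and helper names below are illustrative assumptions.

```python
# Hedged sketch of within-class multimodal classification: one GMM per class.
# All dataset parameters below are invented for illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Class 0 is within-class multimodal: two well-separated subclasses (clusters).
class0 = np.vstack([rng.normal(loc=[-4.0, 0.0], scale=0.5, size=(100, 2)),
                    rng.normal(loc=[4.0, 0.0], scale=0.5, size=(100, 2))])
# Class 1 is unimodal.
class1 = rng.normal(loc=[0.0, 4.0], scale=0.5, size=(200, 2))
X = np.vstack([class0, class1])
y = np.array([0] * 200 + [1] * 200)

# Fit one Gaussian mixture per class; two components let the model
# capture the two subclasses of class 0 instead of averaging them.
models = {c: GaussianMixture(n_components=2, random_state=0).fit(X[y == c])
          for c in (0, 1)}

def predict(points):
    # Assign each point to the class whose mixture scores it highest.
    scores = np.column_stack([models[c].score_samples(points)
                              for c in (0, 1)])
    return scores.argmax(axis=1)

acc = (predict(X) == y).mean()
```

A single Gaussian per class would place the mean of class 0 between its two clusters, in a region with no class-0 data; the per-class mixture avoids this, which is the intuition behind modelling within-class multimodality explicitly.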
Original language: English
Pages (from-to): 29327–29352
Journal: Multimedia Tools and Applications
Volume: 79
Early online date: 11 Aug 2020
DOIs
Publication status: Published - Oct 2020
Externally published: Yes
