Abstract
Linear discriminant analysis (LDA) is a classical method for discriminative dimensionality reduction. The original LDA may degrade in performance for non-Gaussian data, and may be unable to extract sufficient features to satisfactorily explain the data when the number of classes is small. Two prominent extensions that address these problems are subclass discriminant analysis (SDA) and mixture subclass discriminant analysis (MSDA). They divide every class into subclasses and re-define the within-class and between-class scatter matrices on the basis of the subclasses. In this paper we study how to obtain subclasses more effectively in order to achieve higher class separation. We observe that there is significant overlap between the models of the subclasses, which we hypothesise is undesirable. To reduce this overlap we propose an extension of LDA, separability-oriented subclass discriminant analysis (SSDA), which employs hierarchical clustering to divide a class into subclasses using a separability-oriented criterion, before applying LDA optimisation with re-defined scatter matrices. Extensive experiments have shown that SSDA performs better than LDA, SDA and MSDA in most cases. Additional experiments have further shown that SSDA can project data into an LDA space with higher class separation than LDA, SDA and MSDA in most cases.
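As a rough illustration of the subclass idea described in the abstract (not the authors' exact separability-oriented criterion or SSDA optimisation), the sketch below partitions each class into subclasses with hierarchical clustering and then fits a standard LDA on the resulting subclass labels as a stand-in for the re-defined scatter matrices; the function name, subclass count and random data are hypothetical.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def subclass_labels(X, y, n_subclasses=2):
    """Assign each sample a global subclass label by hierarchically
    clustering within each class (Ward linkage is an assumption; the
    paper's separability-oriented criterion is not reproduced here)."""
    labels = np.empty(len(y), dtype=int)
    offset = 0
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        k = min(n_subclasses, len(idx))
        clu = AgglomerativeClustering(n_clusters=k).fit(X[idx])
        labels[idx] = clu.labels_ + offset
        offset += k
    return labels

# Hypothetical usage with synthetic two-class data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)

sub_y = subclass_labels(X, y, n_subclasses=2)
# Standard LDA fitted on subclass labels approximates subclass-based
# scatter matrices; SSDA's own optimisation differs in detail.
Z = LinearDiscriminantAnalysis().fit_transform(X, sub_y)
```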
| Original language | English |
|---|---|
| Pages (from-to) | 409-422 |
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Volume | 40 |
| Issue number | 2 |
| Early online date | 22 Feb 2017 |
| DOIs | |
| Publication status | Published - Feb 2018 |
| Externally published | Yes |
Bibliographical note
Compliant in UIR; evidence uploaded in 'Other files'

Keywords
- Dimensionality reduction
- feature extraction
- linear discriminant analysis
- subclass discriminant analysis
- classification