We consider a robust version of regularized discriminant analysis (RDA) classifiers to account for potential spurious or mislabeled observations in the training data set. To build a robust discriminant rule, a robust estimation of the covariance matrix is essential. In this work, we propose to use a regularized version of Tyler’s covariance estimator, in the regime where both the number of variables and the number of training samples are large and of similar order. Building upon fundamental results from random matrix theory, we show that the robust classifier is asymptotically equivalent to traditional, non-robust classifiers when the training data is free from outliers. Simulations on synthetic and real datasets confirm our theoretical observations and further attest to the benefits brought by the robust classifier when the data is corrupted by outliers.
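As background, a regularized Tyler covariance estimator of the kind referenced above is commonly computed by a fixed-point iteration that shrinks Tyler's M-estimator toward the identity. The sketch below is illustrative only, not the paper's exact estimator: the function name, the shrinkage form (a convex combination with the identity followed by trace normalization, in the style of diagonally loaded Tyler estimators), and the stopping rule are assumptions.

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=100, tol=1e-8):
    """Illustrative regularized Tyler fixed-point iteration (not the
    paper's exact estimator).

    X   : (n, p) array of centered training samples.
    rho : shrinkage parameter in (0, 1]; larger values pull the
          estimate toward the identity matrix.
    """
    n, p = X.shape
    R = np.eye(p)  # initialize at the identity
    for _ in range(n_iter):
        Rinv = np.linalg.inv(R)
        # Quadratic forms q_i = x_i^T R^{-1} x_i for each sample.
        q = np.einsum('ij,jk,ik->i', X, Rinv, X)
        # Weighted scatter: sum_i x_i x_i^T / q_i.
        S = (X.T / q) @ X
        # Shrink toward the identity, then normalize the trace to p
        # (a common convention that fixes the estimator's scale).
        R_new = (1.0 - rho) * (p / n) * S + rho * np.eye(p)
        R_new = p * R_new / np.trace(R_new)
        if np.linalg.norm(R_new - R, 'fro') < tol:
            R = R_new
            break
        R = R_new
    return R
```

Such an estimate would then replace the sample covariance matrix inside the RDA discriminant rule; the down-weighting of samples with large quadratic forms q_i is what limits the influence of outliers.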
Title of host publication: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’19)
Place of Publication: Brighton, UK
Publication status: Published - 17 Apr 2019