Largest Matching Areas for Illumination and Occlusion Robust Face Recognition

Niall McLaughlin, Ming Ji, Danny Crookes

Research output: Contribution to journal › Article › peer-review

47 Citations (Scopus)
808 Downloads (Pure)


In this paper, we introduce a novel approach to face recognition which simultaneously tackles three challenges: 1) uneven illumination; 2) partial occlusion; and 3) limited training data. The new approach performs lighting normalization, occlusion de-emphasis, and finally face recognition based on finding the largest matching area (LMA) at each point on the face, as opposed to traditional fixed-size local area-based approaches. Robustness is achieved with novel approaches for feature extraction, LMA-based face image comparison, and unseen data modeling. For face identification on the Extended Yale B and AR face databases, our method, using only a single training image per person, outperforms other single-training-image methods and matches or exceeds methods which require multiple training images. On the Labeled Faces in the Wild (LFW) face verification database, our method outperforms comparable unsupervised methods. We also show that the new method performs competitively even when the training images are corrupted.
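The core idea of matching with the largest area at each point, rather than a fixed-size local window, can be illustrated with a minimal sketch. The details below (square windows grown symmetrically, normalized cross-correlation as the similarity measure, and the threshold `tau`) are illustrative assumptions, not the paper's actual feature extraction or comparison method:

```python
import numpy as np

def patch_similarity(a, b):
    """Normalized cross-correlation between two equal-size patches
    (an assumed similarity measure, not the paper's)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def largest_matching_area(probe, gallery, y, x,
                          min_half=2, max_half=16, tau=0.9):
    """Grow a square window centred at (y, x) while the probe and
    gallery patches remain similar; return the largest half-width
    that still matches (0 if even the smallest window fails)."""
    h, w = probe.shape
    best = 0
    for r in range(min_half, max_half + 1):
        # Stop growing once the window would leave the image.
        if y - r < 0 or x - r < 0 or y + r + 1 > h or x + r + 1 > w:
            break
        pa = probe[y - r:y + r + 1, x - r:x + r + 1]
        ga = gallery[y - r:y + r + 1, x - r:x + r + 1]
        if patch_similarity(pa, ga) < tau:
            break
        best = r
    return best
```

Under this sketch, an occluded region yields small (or zero) matching areas, so its contribution can be de-emphasized, while clean, well-matched regions support large areas and dominate the comparison.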
Original language: English
Number of pages: 13
Journal: IEEE Transactions on Cybernetics
Publication status: Early online date - 29 Feb 2016


