TY - JOUR
T1 - Attention-Based Two-Stream Convolutional Networks for Face Spoofing Detection
AU - Chen, Haonan
AU - Hu, Guosheng
AU - Lei, Zhen
AU - Chen, Yaowu
AU - Robertson, Neil M.
AU - Li, Stan Z.
PY - 2019/6/17
Y1 - 2019/6/17
N2 - Since the human face preserves the richest information for recognizing individuals, face recognition has been widely investigated and has achieved great success in various applications over the past decades. However, face spoofing attacks (e.g., face video replay attacks) remain a threat to modern face recognition systems. Though many effective methods have been proposed for anti-spoofing, we find that the performance of many existing methods is degraded by illumination. This motivates us to develop illumination-invariant methods for anti-spoofing. In this paper, we propose a two-stream convolutional neural network (TSCNN) that works on two complementary spaces: RGB space (the original imaging space) and multi-scale retinex (MSR) space (an illumination-invariant space). Specifically, RGB space contains detailed facial textures yet is sensitive to illumination; MSR space is invariant to illumination yet contains less detailed facial information. In addition, MSR images can effectively capture high-frequency information, which is discriminative for face spoofing detection. Images from the two spaces are fed to the TSCNN to learn discriminative features for anti-spoofing. To effectively fuse the features from the two sources (RGB and MSR), we propose an attention-based fusion method, which can effectively capture the complementarity of the two features. We evaluate the proposed framework on various databases, i.e., CASIA-FASD, REPLAY-ATTACK, and OULU, and achieve very competitive performance. To further verify the generalization capacity of the proposed strategies, we conduct cross-database experiments, and the results show the great effectiveness of our method.
AB - Since the human face preserves the richest information for recognizing individuals, face recognition has been widely investigated and has achieved great success in various applications over the past decades. However, face spoofing attacks (e.g., face video replay attacks) remain a threat to modern face recognition systems. Though many effective methods have been proposed for anti-spoofing, we find that the performance of many existing methods is degraded by illumination. This motivates us to develop illumination-invariant methods for anti-spoofing. In this paper, we propose a two-stream convolutional neural network (TSCNN) that works on two complementary spaces: RGB space (the original imaging space) and multi-scale retinex (MSR) space (an illumination-invariant space). Specifically, RGB space contains detailed facial textures yet is sensitive to illumination; MSR space is invariant to illumination yet contains less detailed facial information. In addition, MSR images can effectively capture high-frequency information, which is discriminative for face spoofing detection. Images from the two spaces are fed to the TSCNN to learn discriminative features for anti-spoofing. To effectively fuse the features from the two sources (RGB and MSR), we propose an attention-based fusion method, which can effectively capture the complementarity of the two features. We evaluate the proposed framework on various databases, i.e., CASIA-FASD, REPLAY-ATTACK, and OULU, and achieve very competitive performance. To further verify the generalization capacity of the proposed strategies, we conduct cross-database experiments, and the results show the great effectiveness of our method.
U2 - 10.1109/TIFS.2019.2922241
DO - 10.1109/TIFS.2019.2922241
M3 - Article
SN - 1556-6013
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
ER -