TY - JOUR
T1 - Visual prompt engineering for enhancing facial recognition systems robustness against evasion attacks
AU - Gupta, Sandeep
AU - Raja, Kiran
AU - Passerone, Roberto
PY - 2024/10/25
Y1 - 2024/10/25
N2 - Deep learning has unequivocally emerged as the backbone of systems, from simple to highly security-sensitive, that demand artificial intelligence across diverse domains. For instance, foundation models based on deep neural networks (DNNs) can play a crucial role in the design of security-sensitive systems, such as facial recognition systems (FRS). Despite achieving exceptional accuracy and human-like performance, DNNs are severely sensitive to adversarial attacks. While DNNs are deemed irreplaceable in the artificial intelligence domain, their vulnerability to adversarial examples can be detrimental to the robustness of sensitive systems. This paper presents a pilot study introducing an attack-defense framework aimed at enhancing the robustness of FRS against evasion attacks. Our generative adversarial network (GAN) based attack successfully deceives FRS, demonstrating that they are vulnerable not only to synthetic images visibly comparable to real user images (i.e., best-match scenarios) but also to partially constructed user images (i.e., average-match scenarios). Based on our analysis, we propose a novel solution that extends the visual prompt engineering (VPE) concept for detecting synthetic images, thereby securing downstream tasks in FRS. The VPE detection module achieves an accuracy of 97.92% in the average-match scenario and 87.08% in the best-match scenario on our generated dataset. Furthermore, we use the Trueface postsocial dataset to validate the efficacy of the detection module, obtaining an accuracy of 91.96%. Our experimental evaluation shows that VPE can effectively counter GAN attacks from average-match to best-match scenarios, thus enhancing the overall robustness of a security-sensitive system against evasion attacks.
U2 - 10.1109/ACCESS.2024.3479949
DO - 10.1109/ACCESS.2024.3479949
M3 - Article
SN - 2169-3536
VL - 12
SP - 152212
EP - 152223
JO - IEEE Access
JF - IEEE Access
ER -