TY - GEN
T1 - AdvART: adversarial art for camouflaged object detection attacks
AU - Guesmi, Amira
AU - Bilasco, Ioan Marius
AU - Shafique, Muhammad
AU - Alouani, Ihsen
PY - 2024/9/27
Y1 - 2024/9/27
N2 - Physical adversarial attacks pose a significant practical threat, as they deceive deep learning systems operating in the real world with prominent, maliciously designed physical perturbations. Evaluating naturalness is crucial in such attacks, as humans can easily detect unnatural manipulations. To address this, recent work has proposed leveraging generative adversarial networks (GANs) to generate naturalistic patches that appear less visually suspicious and can evade human attention. However, these approaches suffer from a limited latent space, which leads to an inevitable trade-off between naturalness and attack efficiency. In this paper, we propose a novel approach to generate naturalistic and inconspicuous adversarial patches. Specifically, we redefine the optimization problem by introducing an additional loss term into the total loss. This term acts as a semantic constraint, ensuring that the generated camouflage pattern holds semantic meaning rather than being an arbitrary pattern. It leverages a similarity-metrics-based loss that we optimize within the global adversarial objective function. Our technique directly manipulates the pixel values of the patch, which gives higher flexibility and a larger search space than GAN-based techniques, which optimize the patch indirectly by modifying the latent vector. Compared to the GAN-based approach, our attack achieves superior success rates of up to 91.19% in the digital world and 72% when deployed on smart cameras at the edge.
AB - Physical adversarial attacks pose a significant practical threat, as they deceive deep learning systems operating in the real world with prominent, maliciously designed physical perturbations. Evaluating naturalness is crucial in such attacks, as humans can easily detect unnatural manipulations. To address this, recent work has proposed leveraging generative adversarial networks (GANs) to generate naturalistic patches that appear less visually suspicious and can evade human attention. However, these approaches suffer from a limited latent space, which leads to an inevitable trade-off between naturalness and attack efficiency. In this paper, we propose a novel approach to generate naturalistic and inconspicuous adversarial patches. Specifically, we redefine the optimization problem by introducing an additional loss term into the total loss. This term acts as a semantic constraint, ensuring that the generated camouflage pattern holds semantic meaning rather than being an arbitrary pattern. It leverages a similarity-metrics-based loss that we optimize within the global adversarial objective function. Our technique directly manipulates the pixel values of the patch, which gives higher flexibility and a larger search space than GAN-based techniques, which optimize the patch indirectly by modifying the latent vector. Compared to the GAN-based approach, our attack achieves superior success rates of up to 91.19% in the digital world and 72% when deployed on smart cameras at the edge.
KW - AdvART
KW - adversarial art
KW - camouflaged object detection attacks
U2 - 10.1109/ICIP51287.2024.10648014
DO - 10.1109/ICIP51287.2024.10648014
M3 - Conference contribution
SN - 9798350349405
T3 - IEEE International Conference on Image Processing (ICIP): Proceedings
SP - 666
EP - 672
BT - 2024 IEEE International Conference on Image Processing (ICIP 2024): proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE International Conference on Image Processing (ICIP 2024)
Y2 - 27 October 2024 through 30 October 2024
ER -