BBAS: Towards large scale effective ensemble adversarial attacks against deep neural network learning

Research output: Contribution to journal · Article · peer-review

Abstract

Recent decades have witnessed the rapid development of deep neural networks (DNNs). As DNN learning becomes increasingly important to numerous intelligent systems, ranging from self-driving cars to video surveillance, significant research effort has been devoted to improving the robustness and reliability of DNN models against adversarial example attacks. In contrast to previous studies, we address the problem of adversarial training with an ensemble-based approach and propose a novel boosting-based black-box attack scheme, called BBAS, to facilitate the generation of highly diverse adversarial examples. BBAS not only separates example generation from the settings of the trained model but also enhances the diversity of perturbations over the class distribution through the seamless integration of stratified sampling and ensemble adversarial training. This leads to reliable and effective training example selection. To validate and evaluate the scheme from different perspectives, a set of comprehensive tests was carried out on two large open data sets. Experimental results demonstrate the superiority of our method in terms of effectiveness.
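The abstract does not spell out the BBAS algorithm itself, but the two ingredients it names, stratified sampling over the class distribution and ensemble-based black-box perturbation, can be sketched generically. The code below is an illustrative assumption, not the paper's actual method: `stratified_sample` picks a balanced pool of example indices per class, and `ensemble_fgsm` applies one FGSM-style step using the gradient averaged over several surrogate models (a common ensemble attack pattern; the function names and signatures are hypothetical).

```python
import numpy as np

def stratified_sample(labels, n_per_class, rng=None):
    """Draw the same number of example indices from every class so the
    selected pool stays balanced over the class distribution."""
    rng = np.random.default_rng(rng)
    picked = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        take = min(n_per_class, idx.size)
        picked.append(rng.choice(idx, size=take, replace=False))
    return np.concatenate(picked)

def ensemble_fgsm(x, grad_fns, eps):
    """One FGSM-style step using the gradient averaged over several
    surrogate models -- a generic ensemble black-box attack sketch,
    not necessarily the exact BBAS update rule."""
    g = np.mean([grad_fn(x) for grad_fn in grad_fns], axis=0)
    return x + eps * np.sign(g)

# Toy usage: an imbalanced label set and two stand-in gradient oracles.
labels = np.array([0] * 50 + [1] * 30 + [2] * 20)
sel = stratified_sample(labels, n_per_class=10, rng=0)
# sel holds 30 indices, 10 from each class, despite the imbalance.

grad_fns = [lambda x: np.ones_like(x), lambda x: -3.0 * np.ones_like(x)]
x_adv = ensemble_fgsm(np.zeros(4), grad_fns, eps=0.1)
# averaged gradient is -1 everywhere, so every pixel moves by -eps
```

The point of the balanced pool is that adversarial perturbations are spread evenly across classes rather than concentrating on majority classes, which is consistent with the abstract's claim of "diversity of perturbations over the class distribution".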
Original language: English
Pages (from-to): 469-478
Journal: Information Sciences
Volume: 569
Early online date: 25 Dec 2020
DOIs
Publication status: Early online date - 25 Dec 2020

