Abstract
Deployment of deep neural networks (DNNs) in safety-critical systems requires provable guarantees for their correct behaviours. We compute the maximal radius of a safe norm ball around a given input, within which there are no adversarial examples for a trained DNN. We define global robustness as an expectation of the maximal safe radius over a test dataset, and develop an algorithm to approximate the global robustness measure by iteratively computing its lower and upper bounds. Our algorithm is the first efficient method for the Hamming (L0) distance, and we hypothesise that this norm is a good proxy for a certain class of physical attacks. The algorithm is anytime, i.e., it returns intermediate bounds and robustness estimates that are gradually, but strictly, improved as the computation proceeds; tensor-based, i.e., the computation is conducted over a set of inputs simultaneously to enable efficient GPU computation; and has provable guarantees, i.e., both the bounds and the robustness estimates can converge to their optimal values. Finally, we demonstrate the utility of our approach by applying the algorithm to a set of challenging problems.
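The quantities described above can be written out compactly. The following is a sketch of the definitions as we read them from the abstract; the symbols N (the trained network), T (the test dataset), MSR (maximal safe radius), and the bound sequences are illustrative names, not notation taken from the paper itself.

```latex
% Maximal safe radius of input x under the Hamming (L0) norm:
% the largest r such that no perturbation of at most r input
% dimensions (e.g., pixels) changes the classification of N.
\[
  \mathrm{MSR}(N, x) \;=\; \max \bigl\{\, r \;\bigm|\; \forall x' :\;
    \lVert x' - x \rVert_0 \le r \implies N(x') = N(x) \,\bigr\}
\]
% Global robustness: the expectation of the maximal safe radius
% over the test dataset T.
\[
  R(N, T) \;=\; \mathbb{E}_{x \sim T}\bigl[\mathrm{MSR}(N, x)\bigr]
\]
% Anytime approximation: the algorithm maintains a lower and an
% upper bound that tighten strictly at each iteration k and can
% converge to the true value.
\[
  \underline{R}_k \;\le\; R(N, T) \;\le\; \overline{R}_k,
  \qquad
  \underline{R}_k \uparrow R(N, T), \quad \overline{R}_k \downarrow R(N, T)
\]
```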
| Original language | English |
|---|---|
| Title of host publication | International Joint Conferences on Artificial Intelligence Organization: Proceedings |
| Pages | 5944-5952 |
| Number of pages | 9 |
| ISBN (Electronic) | 978-0-9992411-4-1 |
| DOIs | |
| Publication status | Published - Aug 2019 |