Robust Bayesian Neural Networks by Spectral Expectation Bound Regularization

Jiaru Zhang, Yang Hua, Zhengui Xue, Tao Song, Chengyu Zheng, Ruhui Ma, Haibing Guan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)
60 Downloads (Pure)

Abstract

Bayesian neural networks have been widely used in many applications because of their distinctive probabilistic representation framework. Although Bayesian neural networks have been found to be more robust to adversarial attacks than vanilla neural networks, their ability to deal with adversarial noise in practice is still limited. In this paper, we propose Spectral Expectation Bound Regularization (SEBR) to enhance the robustness of Bayesian neural networks. Our theoretical analysis reveals that training with SEBR improves robustness to adversarial noise. We also prove that training with SEBR reduces the epistemic uncertainty of the model, making it more confident in its predictions, which verifies the robustness of the model from another point of view. Experiments on multiple Bayesian neural network structures and different adversarial attacks validate the correctness of the theoretical findings and the effectiveness of the proposed approach.
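The abstract describes penalizing an expectation bound on the spectral norm of the posterior weights during training. The sketch below is only an illustrative assumption of how such a regularizer might be wired into a Bayes-by-Backprop-style layer; the `BayesianLinear` class, the bound ||mu||_2 + ||sigma||_F, and the weighting `lam` are hypothetical and not the paper's exact SEBR formulation (the usual KL/ELBO term is also omitted for brevity).

```python
# Illustrative sketch only: a generic spectral-expectation-bound regularizer for a
# Gaussian mean-field Bayesian layer. The bound E[||W||_2] <= ||mu||_2 + ||sigma||_F
# follows from the triangle inequality, Jensen's inequality, and spectral <= Frobenius;
# it is NOT claimed to be the SEBR term derived in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianLinear(nn.Module):
    """Linear layer with a fully factorized Gaussian weight posterior."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -4.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        sigma = F.softplus(self.w_rho)          # ensure sigma > 0
        eps = torch.randn_like(sigma)           # reparameterization trick
        w = self.w_mu + sigma * eps             # sample W ~ N(mu, sigma^2)
        return F.linear(x, w, self.b_mu)

    def spectral_expectation_bound(self):
        sigma = F.softplus(self.w_rho)
        spec_mu = torch.linalg.matrix_norm(self.w_mu, ord=2)       # ||mu||_2
        fro_sigma = torch.linalg.matrix_norm(sigma, ord="fro")     # ||sigma||_F
        return spec_mu + fro_sigma


def regularized_loss(model, logits, targets, lam=1e-3):
    """Task loss plus lambda times the summed per-layer spectral expectation bounds."""
    task_loss = F.cross_entropy(logits, targets)
    reg = sum(m.spectral_expectation_bound()
              for m in model.modules() if isinstance(m, BayesianLinear))
    return task_loss + lam * reg
```

In this sketch, shrinking the bound simultaneously tightens the expected spectral norm (limiting how much a weight matrix can amplify an adversarial perturbation) and the posterior standard deviations, which is consistent with the abstract's claim that the regularizer both improves robustness and reduces epistemic uncertainty.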
Original language: English
Title of host publication: 2021 IEEE Conference on Computer Vision and Pattern Recognition (CVPR): Proceedings
DOIs
Publication status: Early online date - 13 Nov 2021

Publication series

Name: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher: IEEE
ISSN (Electronic): 2575-7075
