Reinforcing Neural Network Stability with Attractor Dynamics

Hanming Deng, Yang Hua, Tao Song, Zhengui Xue, Ruhui Ma, Neil Robertson, Haibing Guan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Recent approaches interpret deep neural networks (DNNs) as dynamical systems, drawing a connection between stability in forward propagation and the generalization of DNNs. In this paper, we take a step further and are the first to reinforce this stability of DNNs without changing their original structure, and we verify the impact of the reinforced stability on the network representation from various aspects. More specifically, we reinforce stability by modeling the attractor dynamics of a DNN and propose the relu-max attractor network (RMAN), a lightweight module that is readily deployed on state-of-the-art ResNet-like networks. RMAN is needed only during training, where it modifies a ResNet's attractor dynamics by minimizing an energy function together with the loss of the original learning task. Through extensive experiments, we show that RMAN-modified attractor dynamics bring a more structured representation space to ResNet and its variants and, more importantly, improve the generalization ability of ResNet-like networks in supervised tasks due to the reinforced stability.
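The training setup described in the abstract, an auxiliary energy term minimized jointly with the original task loss, can be illustrated with a toy sketch. Everything below is an illustrative assumption: the quadratic losses, the `energy_weight` coefficient, and the one-parameter "network" stand in for the paper's actual RMAN module and ResNet, which are not specified here.

```python
# Hypothetical sketch of the joint objective from the abstract:
# an attractor-energy term is minimized alongside the task loss
# during training. All functions below are toy stand-ins, not the
# paper's RMAN or its energy function.

def task_loss(w):
    # Toy supervised loss: squared distance of parameter w from a target.
    return (w - 3.0) ** 2

def attractor_energy(h):
    # Toy energy: low when the representation h sits near a fixed point
    # (here 0), mimicking an attractor that stabilizes forward dynamics.
    return h ** 2

def joint_loss(w, energy_weight=0.1):
    # The "representation" is a simple function of the parameter in this toy.
    h = 0.5 * w
    return task_loss(w) + energy_weight * attractor_energy(h)

def train(w=0.0, lr=0.1, steps=200, energy_weight=0.1):
    # Plain gradient descent using central finite differences,
    # so the sketch needs no autograd framework.
    eps = 1e-6
    for _ in range(steps):
        grad = (joint_loss(w + eps, energy_weight)
                - joint_loss(w - eps, energy_weight)) / (2 * eps)
        w -= lr * grad
    return w

w_star = train()
```

The energy term pulls the solution slightly away from the task-only optimum (w = 3) toward the attractor at 0, landing near w ≈ 2.93; in the paper's setting, the analogous pull is applied to a ResNet's hidden dynamics rather than to a scalar.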
Original language: English
Title of host publication: 34th AAAI Conference on Artificial Intelligence (AAAI-20): Proceedings
Publisher: Association for the Advancement of Artificial Intelligence (AAAI)
Number of pages: 8
ISBN (Print): 978-1-57735-835-0
Publication status: Accepted - 11 Nov 2019
Event: The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) - New York, United States
Duration: 07 Feb 2020 - 12 Feb 2020

Publication series

Name: Proceedings of the AAAI Conference on Artificial Intelligence
ISSN (Print): 2159-5399


Conference: The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
Abbreviated title: AAAI-20
Country: United States
City: New York


