Abstract
Recent approaches interpret deep neural networks (DNNs) as dynamical systems, drawing a connection between stability in forward propagation and the generalization of DNNs. In this paper, we take a step further and are, to the best of our knowledge, the first to reinforce this stability of DNNs without changing their original structure, and we verify the impact of the reinforced stability on the network representation from various aspects. More specifically, we reinforce stability by modeling the attractor dynamics of a DNN and propose the relu-max attractor network (RMAN), a lightweight module that can readily be deployed on state-of-the-art ResNet-like networks. RMAN is needed only during training, where it modifies a ResNet's attractor dynamics by minimizing an energy function jointly with the loss of the original learning task. Through extensive experiments, we show that RMAN-modified attractor dynamics bring a more structured representation space to ResNet and its variants and, more importantly, improve the generalization ability of ResNet-like networks in supervised tasks thanks to the reinforced stability.
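
The abstract does not define RMAN's energy function or internal structure, so the following is only a minimal PyTorch sketch of the training scheme it describes: an auxiliary module whose energy is minimized jointly with the task loss, and which is discarded after training so the backbone is unchanged at inference. The `AttractorEnergy` module, its prototype-distance energy, and the weight `lam` are hypothetical stand-ins, not the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class AttractorEnergy(nn.Module):
    """Hypothetical stand-in for an attractor-style energy term (NOT the
    paper's RMAN): keeps a small set of learnable prototypes and scores a
    batch of features by each sample's squared distance to its nearest
    prototype, so low energy means features cluster around attractors."""
    def __init__(self, feat_dim: int, num_attractors: int = 16):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_attractors, feat_dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        d2 = torch.cdist(h, self.prototypes).pow(2)   # (batch, K) pairwise distances
        return d2.min(dim=1).values.mean()            # distance to nearest prototype

backbone = resnet18(num_classes=10)
# Everything up to (and including) global average pooling; fc is applied separately.
feat_extractor = nn.Sequential(*list(backbone.children())[:-1])
energy_head = AttractorEnergy(feat_dim=512)           # 512 = resnet18 penultimate width
lam = 0.1                                             # assumed weight of the energy term
opt = torch.optim.SGD(
    list(backbone.parameters()) + list(energy_head.parameters()), lr=0.1)

def training_step(x: torch.Tensor, y: torch.Tensor) -> float:
    feats = feat_extractor(x).flatten(1)              # (batch, 512) pooled features
    logits = backbone.fc(feats)
    # Joint objective: original task loss plus the auxiliary energy.
    loss = F.cross_entropy(logits, y) + lam * energy_head(feats)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x = torch.randn(8, 3, 224, 224)                       # dummy batch
y = torch.randint(0, 10, (8,))
print(training_step(x, y))
```

After training, `energy_head` is simply dropped and `backbone` is used on its own, mirroring the abstract's claim that the module does not change the network's original structure.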
| Original language | English |
| --- | --- |
| Title of host publication | 34th AAAI Conference on Artificial Intelligence (AAAI-20): Proceedings |
| Publisher | Association for the Advancement of Artificial Intelligence (AAAI) |
| Number of pages | 8 |
| ISBN (Print) | 978-1-57735-835-0 |
| Publication status | Published - 2020 |
| Event | The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), New York, United States. Duration: 07 Feb 2020 → 12 Feb 2020. https://aaai.org/Conferences/AAAI-20/ |
Publication series

| Name | Proceedings of the AAAI Conference on Artificial Intelligence |
| --- | --- |
| Publisher | AAAI |
| ISSN (Print) | 2159-5399 |
Conference

| Conference | The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) |
| --- | --- |
| Abbreviated title | AAAI-20 |
| Country/Territory | United States |
| City | New York |
| Period | 07/02/2020 → 12/02/2020 |
| Internet address | https://aaai.org/Conferences/AAAI-20/ |