An Adaptive Memory Multi-Batch L-BFGS Algorithm for Neural Network Training

Federico Zocco*, Seán McLoone

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Motivated by the potential for parallel implementation of batch-based algorithms and the accelerated convergence achievable with approximated second-order information, the limited memory version of the BFGS algorithm (L-BFGS) has been receiving increasing attention in recent years for large neural network training problems. As the shape of the cost function is generally not quadratic, and only becomes approximately quadratic in the vicinity of a minimum, the use of second-order information by L-BFGS can be unreliable during the initial phase of training, i.e. when far from a minimum. Therefore, to control the influence of second-order information as training progresses, we propose a multi-batch L-BFGS algorithm, namely MB-AM, that gradually increases its trust in the curvature information by implementing a progressive storage and use of curvature data through a development-based increase (dev-increase) scheme. Using six discriminative modelling benchmark problems we show empirically that MB-AM converges slightly faster and, on average, achieves better solutions than the standard multi-batch L-BFGS algorithm when training MLP and CNN models.
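To make the idea in the abstract concrete, below is a minimal Python sketch of an L-BFGS training loop whose curvature memory cap grows over training. It is an illustration under stated assumptions, not the paper's implementation: the function names (two_loop_direction, mb_am_sketch), the Armijo backtracking line search, the use of the same batch for both gradients of a curvature pair, and the stall-based trigger for growing the memory are all placeholders standing in for the paper's actual dev-increase criterion and multi-batch gradient handling.

```python
import numpy as np
from collections import deque

def two_loop_direction(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns the search direction
    -H*grad, where H is the implicit inverse-Hessian approximation built
    from the stored (s, y) curvature pairs (ordered oldest first)."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:  # initial scaling gamma = s'y / y'y from the newest pair
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return -q

def backtracking(w, f, g, d, loss_and_grad, batch, c=1e-4, shrink=0.5):
    """Plain Armijo backtracking line search (an illustrative choice)."""
    t, gd = 1.0, g @ d
    for _ in range(30):
        f_new, g_new = loss_and_grad(w + t * d, batch)
        if f_new <= f + c * t * gd:
            break
        t *= shrink
    return t, g_new

def mb_am_sketch(w, loss_and_grad, batches, m_max=20, steps=200, tol=1e-3):
    """Multi-batch L-BFGS loop with a growing memory cap m. The trigger
    used here (grow m when the batch loss stalls) is a hypothetical
    stand-in for the paper's dev-increase criterion."""
    m = 1                                   # start with minimal trust in curvature
    S, Y = deque(maxlen=m), deque(maxlen=m)
    f_prev = None
    for k in range(steps):
        batch = batches[k % len(batches)]
        f, g = loss_and_grad(w, batch)
        d = two_loop_direction(g, list(S), list(Y))
        t, g_new = backtracking(w, f, g, d, loss_and_grad, batch)
        s, y = t * d, g_new - g             # (s, y) from one batch: a simplification
        if s @ y > 1e-10:                   # curvature condition before storing the pair
            S.append(s); Y.append(y)
        if f_prev is not None and abs(f_prev - f) <= tol * max(abs(f_prev), 1.0) and m < m_max:
            m += 1                          # "dev-increase": allow a longer curvature memory
            S, Y = deque(S, maxlen=m), deque(Y, maxlen=m)
        f_prev, w = f, w + t * d
    return w

if __name__ == "__main__":
    # Toy convex problem standing in for a training loss; 'batches' is a dummy.
    rng = np.random.default_rng(0)
    G = rng.standard_normal((20, 20))
    A, b = G @ G.T + np.eye(20), rng.standard_normal(20)
    loss_and_grad = lambda w, _: (0.5 * w @ A @ w - b @ w, A @ w - b)
    w = mb_am_sketch(np.zeros(20), loss_and_grad, batches=[None])
    print("final gradient norm:", np.linalg.norm(A @ w - b))
```

With m = 1 the update is close to a scaled gradient step, so early iterations lean on first-order information only; as the deque's maxlen grows, more curvature pairs enter the two-loop recursion and the step direction becomes increasingly quasi-Newton, which mirrors the gradual increase of trust in curvature that the abstract describes.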
Original language: English
Pages (from-to): 8199-8204
Number of pages: 6
Journal: IFAC-PapersOnLine
Volume: 53
Issue number: 2
DOIs
Publication status: Published - 14 Apr 2021
Event: 21st World Congress of the International Federation of Automatic Control 2020 - Berlin, Germany
Duration: 12 Jul 2020 - 17 Jul 2020
https://www.ifac2020.org/

Bibliographical note

Will be available Diamond Open Access at https://www.journals.elsevier.com/ifac-papersonline
