Improving neural network training solutions using regularisation

Research output: Contribution to journal › Article › peer-review

61 Citations (Scopus)

Abstract

This paper describes the application of regularisation to the training of feedforward neural networks, as a means of improving the quality of solutions obtained. The basic principles of regularisation theory are outlined for both linear and nonlinear training and then extended to cover a new hybrid training algorithm for feedforward neural networks recently proposed by the authors. The concept of functional regularisation is also introduced and discussed in relation to MLP and RBF networks. The tendency for the hybrid training algorithm and many linear optimisation strategies to generate large magnitude weight solutions when applied to ill-conditioned neural paradigms is illustrated graphically and reasoned analytically. While such weight solutions do not generally result in poor fits, it is argued that they could be subject to numerical instability and are therefore undesirable. Using an illustrative example it is shown that, as well as being beneficial from a generalisation perspective, regularisation also provides a means for controlling the magnitude of solutions. © 2001 Elsevier Science B.V. All rights reserved.
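As a hedged illustration of the magnitude-control point in the abstract (a sketch, not the authors' hybrid algorithm): in the linear-in-parameters case, such as the output layer of an RBF network, adding a ridge penalty λ‖w‖² to the squared-error cost turns the normal equations into (ΦᵀΦ + λI)w = Φᵀt. The Python sketch below uses hypothetical data, basis centres, and widths to show how an ill-conditioned design matrix yields very large unregularised weights, while even a small λ shrinks them at little cost in fit quality.

    import numpy as np

    # Hedged sketch (not the authors' hybrid algorithm): ridge regularisation
    # of a linear-in-parameters model, as in the output layer of an RBF
    # network. The basis centres, widths, and target here are hypothetical.

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    t = np.sin(2.0 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

    # Heavily overlapping Gaussian basis functions make Phi'Phi ill-conditioned.
    centres = np.linspace(0.0, 1.0, 12)
    Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * 0.4 ** 2))

    def ridge_weights(Phi, t, lam):
        """Solve the regularised normal equations (Phi'Phi + lam*I) w = Phi't."""
        A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
        return np.linalg.solve(A, Phi.T @ t)

    for lam in (1e-12, 1e-6, 1e-2):
        w = ridge_weights(Phi, t, lam)
        rmse = np.sqrt(np.mean((Phi @ w - t) ** 2))
        print(f"lambda={lam:.0e}  ||w|| = {np.linalg.norm(w):.3e}  rmse = {rmse:.3e}")

Increasing λ trades a slight rise in training error for a dramatically smaller weight norm, which mirrors the abstract's argument that regularisation curbs the numerically unstable large-magnitude solutions produced on ill-conditioned problems.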
Original language: English
Pages (from-to): 71-90
Number of pages: 20
Journal: Neurocomputing
Volume: 37
Publication status: Published - 2001

ASJC Scopus subject areas

  • Artificial Intelligence
  • Cellular and Molecular Neuroscience
