A new Jacobian matrix for optimal learning of single-layer neural networks

Jian Xun Peng, Kang Li, George Irwin

Research output: Contribution to journal › Article › peer-review

57 Citations (Scopus)
2 Downloads (Pure)


This paper investigates the learning of a wide class of single-hidden-layer feedforward neural networks (SLFNs) with two sets of adjustable parameters: the nonlinear parameters in the hidden nodes and the linear output weights. The main objective is both to speed up the convergence of second-order learning algorithms, such as Levenberg-Marquardt (LM), and to improve the network performance. This is achieved by reducing the dimension of the solution space and by introducing a new Jacobian matrix. Unlike conventional supervised learning methods, which optimize these two sets of parameters simultaneously, the proposed approach first converts the linear output weights into dependent parameters, thereby removing the need for their explicit computation. Consequently, neural network (NN) learning is performed over a solution space of reduced dimension. A new Jacobian matrix is then proposed for use with popular second-order learning methods in order to achieve a more accurate approximation of the cost function. The efficacy of the proposed method is demonstrated through an analysis of the computational complexity and through simulation results from four different examples.
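The idea of treating the linear output weights as dependent parameters can be sketched as follows: for each candidate set of hidden-node parameters, the output weights are recovered by linear least squares, so the second-order search runs only over the hidden-node parameters. This is a minimal illustrative sketch in NumPy, not the paper's method; in particular, it uses a finite-difference Jacobian of the reduced residual, whereas the paper derives an analytic Jacobian, and the function names (`hidden_output`, `residual`, `lm_step`) and the `tanh` activation are assumptions for illustration.

```python
import numpy as np

def hidden_output(theta, X, n_hidden):
    # theta packs hidden weights W (n_in x n_hidden) and biases b (n_hidden).
    n_in = X.shape[1]
    W = theta[:n_in * n_hidden].reshape(n_in, n_hidden)
    b = theta[n_in * n_hidden:]
    return np.tanh(X @ W + b)  # hidden activation matrix A(theta)

def residual(theta, X, y, n_hidden):
    # Output weights are dependent parameters: solved by least squares,
    # so only the hidden-node parameters remain in the search space.
    A = hidden_output(theta, X, n_hidden)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ w - y

def lm_step(theta, X, y, n_hidden, mu=1e-2, eps=1e-6):
    # Finite-difference Jacobian of the reduced residual (illustrative only).
    r0 = residual(theta, X, y, n_hidden)
    J = np.empty((len(r0), len(theta)))
    for j in range(len(theta)):
        t = theta.copy()
        t[j] += eps
        J[:, j] = (residual(t, X, y, n_hidden) - r0) / eps
    # Levenberg-Marquardt update on the reduced parameter vector.
    H = J.T @ J + mu * np.eye(len(theta))
    return theta - np.linalg.solve(H, J.T @ r0)
```

Because the output weights never appear as free variables, the LM normal equations are solved in a space whose dimension excludes them, which is the source of the claimed convergence speed-up.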
Original language: English
Pages (from-to): 119-129
Number of pages: 11
Journal: IEEE Transactions on Neural Networks
Issue number: 1
Publication status: Published - Jan 2008

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Theoretical Computer Science
  • Electrical and Electronic Engineering
  • Artificial Intelligence
  • Computational Theory and Mathematics
  • Hardware and Architecture


