Conjugate gradient learning algorithms for multilayer perceptrons

D. Goryn, M. Kaveh

Research output: Contribution to conference › Paper › peer-review


Abstract

Learning complex tasks in a multilayer perceptron is a nonlinear optimization problem that is often very difficult and painstakingly slow. The use of conjugate gradient methods to improve convergence rates is proposed. These methods entail only a modest increase in storage and computational complexity compared with the commonly used backpropagation algorithm. The algorithm used is a modified conjugate gradient method with inexact line searches, which reduces the number of function evaluations required in the line-search part of the algorithm. Simulation results demonstrating the improved convergence rate relative to the backpropagation algorithm are presented.
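The approach described in the abstract can be illustrated with a minimal sketch: a small multilayer perceptron trained by a nonlinear conjugate gradient method (Polak-Ribière update) with an inexact Armijo backtracking line search in place of an exact minimization along each search direction. The network size, the XOR task, and all hyperparameters below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Toy task (assumption: XOR, not from the paper)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(w):
    # 2-3-1 network: 13 parameters packed into one vector
    W1 = w[:6].reshape(2, 3); b1 = w[6:9]
    W2 = w[9:12].reshape(3, 1); b2 = w[12:13]
    return W1, b1, W2, b2

def loss_grad(w):
    # Forward pass, squared-error loss, and backpropagated gradient
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    e = y - T
    loss = 0.5 * np.sum(e ** 2)
    d2 = e * y * (1 - y)
    d1 = (d2 @ W2.T) * h * (1 - h)
    g = np.concatenate([(X.T @ d1).ravel(), d1.sum(0),
                        (h.T @ d2).ravel(), d2.sum(0)])
    return loss, g

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=13)
loss, g = loss_grad(w)
loss0 = loss
d = -g  # first search direction: steepest descent

for it in range(300):
    if g @ d >= 0:        # restart if d is not a descent direction
        d = -g
    # Inexact (Armijo backtracking) line search along d
    alpha, gd = 1.0, g @ d
    while True:
        new_loss, _ = loss_grad(w + alpha * d)
        if new_loss <= loss + 1e-4 * alpha * gd or alpha < 1e-10:
            break
        alpha *= 0.5
    w = w + alpha * d
    loss, g_new = loss_grad(w)
    # Polak-Ribiere coefficient, clipped at zero (restart when negative)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))
    d = -g_new + beta * d
    g = g_new

print(f"loss: {loss0:.3f} -> {loss:.4f}")
```

The backtracking loop is the "inexact" part: it accepts the first step length satisfying a sufficient-decrease condition rather than searching for the exact minimizer, trading a slightly longer search direction sequence for far fewer function evaluations per iteration.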

Original language: English (US)
Pages: 736-739
Number of pages: 4
State: Published - Dec 1 1989
Event: Proceedings of the 32nd Midwest Symposium on Circuits and Systems Part 2 (of 2) - Champaign, IL, USA
Duration: Aug 14 1989 - Aug 16 1989

