
Neural Computation

Summer 1989, Vol. 1, No. 2, Pages 161-172.
(doi: 10.1162/neco.1989.1.2.161)
© 1989 Massachusetts Institute of Technology
Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation
Abstract

Error backpropagation in feedforward neural network models is a popular learning algorithm with roots in nonlinear estimation and optimization. It is routinely used to calculate error gradients in nonlinear systems with hundreds of thousands of parameters. The classical architecture, however, is severely restricted: it applies only to feedforward networks. The extension of backpropagation to networks with recurrent connections is reviewed. It is now possible to compute error gradients efficiently for networks with temporal dynamics, which opens applications to a host of problems in system identification and control.
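The fixed-point form of recurrent backpropagation can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the article's own code: it assumes Pineda-style additive dynamics relaxing to a fixed point x* = σ(Wx* + I), a sum-of-squares error at designated output units, and an adjoint (linearized) relaxation that yields the weight gradient; all function names and the toy network are invented for illustration.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def relax(f, z0, steps=500, dt=0.1):
    # Euler-integrate dz/dt = -z + f(z); the iteration's fixed point
    # satisfies z = f(z) exactly, so dt only affects convergence speed.
    z = z0
    for _ in range(steps):
        z = z + dt * (-z + f(z))
    return z

def rbp_gradient(W, I, target, out_idx):
    """Recurrent-backprop sketch: returns the fixed point x* and the
    gradient-descent direction dW = -dE/dW for E = 0.5*||target - x*_out||^2."""
    n = len(I)
    # Forward relaxation to the fixed point x* = sigmoid(W x* + I).
    x = relax(lambda x: sigmoid(W @ x + I), np.zeros(n))
    u = W @ x + I
    g = sigmoid(u) * (1.0 - sigmoid(u))          # sigma'(u) at the fixed point
    # Error is injected only at the output units.
    e = np.zeros(n)
    e[out_idx] = target - x[out_idx]
    # Adjoint relaxation: linear fixed-point equation y = W^T (g * y) + e.
    y = relax(lambda y: W.T @ (g * y) + e, np.zeros(n))
    # dE/dw_rs = -y_r * sigma'(u_r) * x_s, so the descent direction is:
    dW = np.outer(g * y, x)
    return x, dW

# Toy 3-unit network with one output unit (index 2); values are arbitrary.
W = np.array([[0.0, 0.5, 0.0],
              [0.2, 0.0, 0.3],
              [0.1, 0.4, 0.0]])
I = np.array([0.5, -0.2, 0.1])
x_star, dW = rbp_gradient(W, I, target=np.array([0.8]), out_idx=np.array([2]))
```

Both relaxations cost only matrix-vector products per step, which is the efficiency the abstract refers to: the adjoint system is linear and shares the forward fixed point's stability, so no unrolling through time is required.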