Neural Computation

May 1993, Vol. 5, No. 3, Pages 456-462
(doi: 10.1162/neco.1993.5.3.456)
© 1993 Massachusetts Institute of Technology
A Simplified Gradient Algorithm for IIR Synapse Multilayer Perceptrons
Abstract

A network architecture with a global feedforward, locally recurrent construction was recently presented as a new means of modeling nonlinear dynamic time series (Back and Tsoi 1991a). The training rule used was based on minimizing the least mean square (LMS) error and performed well, although the amount of memory required for large networks can become significant when many feedback connections are used. In this note, a modified training algorithm based on a technique for linear filters is presented, simplifying the gradient calculations significantly. The memory requirements are reduced from O[n_a(n_a + n_b)N_s] to O[(2n_a + n_b)N_s], where n_a is the number of feedback delays, n_b is the number of feedforward delays, and N_s is the total number of synapses. The new algorithm also reduces the number of multiply-adds needed to train each synapse by n_a at each time step. Simulations indicate that the new algorithm performs almost identically to the previous one.
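
For concreteness, the sketch below shows a single IIR synapse of the kind the architecture is built from: each connection is a small ARMA filter whose output is later passed through the neuron's nonlinearity. Only the difference equation y(t) = sum_i b_i x(t-i) + sum_j a_j y(t-j) reflects the architecture referred to in the abstract; the class name, NumPy implementation, and weight initialization are illustrative assumptions, not the authors' code, and the simplified gradient rule itself is not reproduced here.

import numpy as np

class IIRSynapse:
    # One IIR (ARMA) synapse: a small linear filter on a single connection,
    # with n_b + 1 feedforward (numerator) weights and n_a feedback
    # (denominator) weights. Assumes n_a >= 1.
    def __init__(self, n_b, n_a, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        self.b = rng.normal(scale=0.1, size=n_b + 1)  # feedforward weights b_0..b_{n_b}
        self.a = rng.normal(scale=0.1, size=n_a)      # feedback weights a_1..a_{n_a}
        self.x_hist = np.zeros(n_b + 1)               # x(t), x(t-1), ..., x(t-n_b)
        self.y_hist = np.zeros(n_a)                   # y(t-1), ..., y(t-n_a)

    def step(self, x_t):
        # y(t) = sum_i b_i x(t-i) + sum_j a_j y(t-j)
        self.x_hist = np.roll(self.x_hist, 1)
        self.x_hist[0] = x_t
        y_t = self.b @ self.x_hist + self.a @ self.y_hist
        self.y_hist = np.roll(self.y_hist, 1)
        self.y_hist[0] = y_t
        return y_t

# Example: drive one synapse with a sinusoid.
syn = IIRSynapse(n_b=10, n_a=10)
out = [syn.step(x) for x in np.sin(0.1 * np.arange(100))]

The memory figures in the abstract can be read per synapse: the earlier gradient rule stores on the order of n_a(n_a + n_b) filtered sensitivity values for each synapse, whereas the simplified rule needs only 2n_a + n_b. For n_a = n_b = 10, as in the example above, that is roughly 200 versus 30 stored values per synapse.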