Neural Computation

Summer 1991, Vol. 3, No. 2, Pages 226-245
(doi: 10.1162/neco.1991.3.2.226)
© 1991 Massachusetts Institute of Technology
On the Convergence of the LMS Algorithm with Adaptive Learning Rate for Linear Feedforward Networks
Abstract

We consider the problem of training a linear feedforward neural network using a gradient-descent-like LMS learning algorithm. The objective is to find a weight matrix for the network, by repeatedly presenting a finite set of examples to it, so that the sum of the squares of the errors is minimized. Kohonen showed that with a small but fixed learning rate (or stepsize), some subsequences of the weight matrices generated by the algorithm converge to certain matrices close to the optimal weight matrix. In this paper, we show that, by dynamically decreasing the learning rate during each training cycle, the sequence of matrices generated by the algorithm converges to the optimal weight matrix. We also show that for any given ∊ > 0, the LMS algorithm with decreasing learning rates generates an ∊-optimal weight matrix (i.e., a matrix at distance at most ∊ from the optimal matrix) after O(1/∊) training cycles. This is in contrast to the Ω((1/∊) log(1/∊)) training cycles needed to generate an ∊-optimal weight matrix when the learning rate is kept fixed. We also give a general condition on the learning rates under which the LMS learning algorithm is guaranteed to converge to the optimal weight matrix.
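The setting described above — repeated presentation of a finite example set to a linear network, with a per-example LMS (delta-rule) update and a learning rate that shrinks across training cycles — can be sketched as follows. This is an illustrative sketch only: the harmonic decay schedule eta = eta0 / (cycle + 1), the initial rate, and the synthetic data are assumptions for demonstration, not the exact conditions analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite set of examples: inputs X (m x n) and targets Y (m x k),
# generated from a known weight matrix so the optimum is realizable.
X = rng.standard_normal((50, 4))
W_true = rng.standard_normal((4, 2))
Y = X @ W_true

W = np.zeros((4, 2))   # weight matrix to be learned
eta0 = 0.05            # initial learning rate (assumed value)

for cycle in range(200):          # repeated presentation of the example set
    eta = eta0 / (cycle + 1)      # learning rate decreases each training cycle
    for x, y in zip(X, Y):
        err = y - x @ W           # per-example error of the linear network
        W += eta * np.outer(x, err)  # LMS / gradient-descent step

# With decreasing learning rates, W moves toward the optimal
# (least-squares) weight matrix.
print(np.linalg.norm(W - W_true))
```

Note the distinction being made in the abstract: with a fixed eta, the per-example updates keep perturbing W within a neighborhood of the optimum, whereas decaying eta damps those perturbations and allows the whole sequence of weight matrices to converge.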