Neural Computation

January 1993, Vol. 5, No. 1, Pages 154-164
(doi: 10.1162/neco.1993.5.1.154)
© 1993 Massachusetts Institute of Technology
Learning in the Recurrent Random Neural Network
Abstract

The capacity to learn from examples is one of the most desirable features of neural network models. We present a learning algorithm for the recurrent random network model (Gelenbe 1989, 1990) using gradient descent of a quadratic error function. The analytical properties of the model lead to a "backpropagation" type algorithm that requires the solution of a system of n linear and n nonlinear equations each time the n-neuron network "learns" a new input-output pair.
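
The abstract describes the algorithm only at a high level. As a rough, hedged sketch of the pieces involved, the Python below iterates the n nonlinear steady-state equations of Gelenbe's random neural network to obtain the neuron activation probabilities q_i, then takes one gradient-descent step on a quadratic error. The weight matrices W_plus and W_minus, the exogenous rates Lambda and lam, the targets y, the learning rate eta, and the use of a finite-difference gradient are all illustrative assumptions; the paper itself obtains the exact derivatives analytically by solving a system of n linear equations rather than by numerical differentiation.

import numpy as np

def steady_state(W_plus, W_minus, Lambda, lam, tol=1e-10, max_iter=10000):
    """Iterate the n nonlinear fixed-point equations of the random neural
    network for the activation probabilities q_i:
        q_i = lambda_plus_i / (r_i + lambda_minus_i),
    where lambda_plus_i  = sum_j q_j * W_plus[j, i]  + Lambda[i],
          lambda_minus_i = sum_j q_j * W_minus[j, i] + lam[i],
          r_i = sum_j (W_plus[i, j] + W_minus[i, j])  (total firing rate).
    """
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)
    q = np.zeros_like(Lambda)
    for _ in range(max_iter):
        q_new = np.clip((q @ W_plus + Lambda) / (r + q @ W_minus + lam), 0.0, 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            break
        q = q_new
    return q_new

def quadratic_error(q, y):
    """E = 1/2 * sum_i (q_i - y_i)^2 over the output neurons."""
    return 0.5 * np.sum((q - y) ** 2)

def learning_step(W_plus, W_minus, Lambda, lam, y, eta=0.1, h=1e-6):
    """One gradient-descent update of the excitatory weights, with dE/dW_plus
    estimated by central finite differences. Illustration only: the paper's
    algorithm computes these derivatives by solving n linear equations.
    """
    grad = np.zeros_like(W_plus)
    for i in range(W_plus.shape[0]):
        for j in range(W_plus.shape[1]):
            W_hi = W_plus.copy(); W_hi[i, j] += h
            W_lo = W_plus.copy(); W_lo[i, j] -= h
            e_hi = quadratic_error(steady_state(W_hi, W_minus, Lambda, lam), y)
            e_lo = quadratic_error(steady_state(W_lo, W_minus, Lambda, lam), y)
            grad[i, j] = (e_hi - e_lo) / (2 * h)
    # Weights are signal rates, so they are kept nonnegative.
    return np.clip(W_plus - eta * grad, 0.0, None)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 3                                # toy 3-neuron recurrent network
    W_plus = rng.uniform(0.0, 0.5, (n, n))
    W_minus = rng.uniform(0.0, 0.5, (n, n))
    Lambda = rng.uniform(0.5, 1.0, n)    # exogenous excitatory arrival rates
    lam = rng.uniform(0.0, 0.2, n)       # exogenous inhibitory arrival rates
    y = np.array([0.2, 0.8, 0.5])        # desired outputs for one training pair
    for _ in range(20):
        W_plus = learning_step(W_plus, W_minus, Lambda, lam, y)
    q = steady_state(W_plus, W_minus, Lambda, lam)
    print("learned q:", q, "error:", quadratic_error(q, y))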