Neural Computation

February 15, 1998, Vol. 10, No. 2, Pages 251-276
(doi: 10.1162/089976698300017746)
© 1998 Massachusetts Institute of Technology
Natural Gradient Works Efficiently in Learning
Shun-ichi Amari
When a parameter space has a certain underlying structure, the ordinary gradient of a function does not represent its steepest direction, but the natural gradient does. Information geometry is used for calculating the natural gradients in the parameter space of perceptrons, the space of matrices (for blind source separation), and the space of linear dynamical systems (for blind source deconvolution). The dynamical behavior of natural gradient online learning is analyzed and is proved to be Fisher efficient, implying that it has asymptotically the same performance as the optimal batch estimation of parameters. This suggests that the plateau phenomenon, which appears in the backpropagation learning algorithm of multilayer perceptrons, might disappear or might not be so serious when the natural gradient is used. An adaptive method of updating the learning rate is proposed and analyzed.
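The core idea above is that in a curved parameter space the steepest-descent direction is the ordinary gradient preconditioned by the inverse of the Riemannian metric, which for statistical models is the Fisher information matrix: the natural gradient is G⁻¹(θ)∇L(θ). The following is a minimal illustrative sketch, not the paper's algorithm: it fits the mean μ and log-standard-deviation s of a one-dimensional Gaussian by maximum likelihood, using the closed-form Fisher matrix of that model (all function names and the learning rate are illustrative choices).

```python
import numpy as np

def gradients(mu, s, x):
    """Ordinary gradient of the average negative log-likelihood
    of N(mu, exp(s)^2) with respect to (mu, s = log sigma)."""
    sigma2 = np.exp(2 * s)
    g_mu = -(x - mu).mean() / sigma2
    g_s = 1.0 - ((x - mu) ** 2).mean() / sigma2
    return np.array([g_mu, g_s])

def fisher(mu, s):
    """Fisher information matrix of the Gaussian in (mu, log sigma):
    diag(1/sigma^2, 2). This plays the role of the metric G(theta)."""
    sigma2 = np.exp(2 * s)
    return np.array([[1.0 / sigma2, 0.0],
                     [0.0, 2.0]])

rng = np.random.default_rng(0)
x = rng.normal(3.0, 2.0, size=10_000)  # synthetic data

theta = np.array([0.0, 0.0])  # start far from the truth
for _ in range(200):
    g = gradients(theta[0], theta[1], x)
    # Natural-gradient step: precondition by the inverse Fisher matrix.
    theta = theta - 0.1 * np.linalg.solve(fisher(theta[0], theta[1]), g)

mu_hat, sigma_hat = theta[0], np.exp(theta[1])
```

Note how the preconditioning cancels the 1/σ² scaling of the μ-gradient, so the effective step on μ is the same whatever the current variance estimate; with the ordinary gradient, a large σ² would make that step vanishingly small. This scale invariance is one concrete face of the efficiency claim in the abstract.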