Neural Computation

April 1, 2000, Vol. 12, No. 4, Pages 881-901
(doi: 10.1162/089976600300015637)
© 2000 Massachusetts Institute of Technology
On “Natural” Learning and Pruning in Multilayered Perceptrons

Several studies have shown that natural gradient descent for on-line learning is much more efficient than standard gradient descent. In this article, we derive natural gradients in a slightly different manner and discuss implications for batch-mode learning and pruning, linking them to existing algorithms such as Levenberg-Marquardt optimization and Optimal Brain Surgeon.
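The core update behind natural gradient descent is preconditioning the ordinary gradient with the inverse Fisher matrix, w ← w − η F⁻¹∇L. The sketch below illustrates this generic rule, not the article's specific derivation; the damping term is a hypothetical addition included to show the connection to Levenberg-Marquardt-style regularization, and all names are illustrative.

```python
import numpy as np

def natural_gradient_step(w, grad, fisher, lr=0.1, damping=1e-4):
    """One natural-gradient update: w <- w - lr * F^{-1} grad.

    `damping` regularizes F before inversion, analogous to the
    Levenberg-Marquardt trust-region term (hypothetical parameter).
    """
    F = fisher + damping * np.eye(len(w))
    return w - lr * np.linalg.solve(F, grad)

# Toy quadratic loss L(w) = 0.5 * w^T A w, using the curvature A as a
# stand-in for the Fisher matrix.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
w = np.array([1.0, -2.0])
grad = A @ w
w_new = natural_gradient_step(w, grad, A, lr=1.0, damping=0.0)
print(w_new)  # one full step solves the quadratic exactly: [0. 0.]
```

On this quadratic toy problem a single undamped natural-gradient step with unit learning rate lands on the minimum, which is the intuition for why the method can be far more efficient than plain gradient descent.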

The Fisher matrix plays an important role in all these algorithms. The second half of the article discusses a layered approximation of the Fisher matrix specific to multilayered perceptrons. Using this approximation rather than the exact Fisher matrix, we arrive at much faster “natural” learning algorithms and more robust pruning procedures.
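The computational payoff of a layered approximation can be seen in a block-diagonal sketch: if the Fisher matrix is approximated with one block per layer, each natural-gradient solve factorizes layer by layer. This is a generic block-diagonal illustration under that assumption, not the article's exact approximation; function and variable names are hypothetical.

```python
import numpy as np

def layered_natural_grad(fisher_blocks, grads, damping=1e-4):
    """Solve F^{-1} g with F approximated as block-diagonal,
    one (regularized) Fisher block per layer.

    Inverting L blocks of size n costs O(L * n^3) rather than the
    O((L*n)^3) of the full matrix, which is the source of the speedup.
    """
    out = []
    for F_l, g_l in zip(fisher_blocks, grads):
        F_reg = F_l + damping * np.eye(len(g_l))
        out.append(np.linalg.solve(F_reg, g_l))
    return out

# Two "layers" with positive-definite Fisher blocks.
rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))
F1, F2 = X1 @ X1.T + np.eye(3), X2 @ X2.T + np.eye(2)
g1, g2 = rng.standard_normal(3), rng.standard_normal(2)
steps = layered_natural_grad([F1, F2], [g1, g2], damping=0.0)
```

Because each block is solved independently, the per-layer result matches what a full block-diagonal Fisher matrix would give, while never forming or inverting the full matrix.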