Neural Computation

May 15, 1998, Vol. 10, No. 4, Pages 987-1005
(doi: 10.1162/089976698300017593)
© 1998 Massachusetts Institute of Technology
Toward Optimally Distributed Computation
Abstract

This article introduces the concept of optimally distributed computation in feedforward neural networks via regularization of weight saliency. By constraining the relative importance of the parameters, computation can be spread thinly and evenly throughout the network. We propose that this improves fault tolerance and generalization in large network architectures. These theoretical predictions are verified by simulation experiments on two problems: one artificial and the other a real-world task. In summary, this article presents regularization terms for distributing neural computation optimally.
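The core idea above — penalizing uneven weight saliency so that no single parameter carries a disproportionate share of the computation — can be sketched as a regularization term added to the training loss. The saliency measure below (squared weight magnitude) and the variance-based penalty are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def saliency(weights):
    # Illustrative saliency proxy: squared magnitude of each weight.
    # (The article regularizes weight saliency; the precise measure
    # used there may differ from this simple proxy.)
    return weights ** 2

def distribution_penalty(weights, lam=0.1):
    # Penalize the variance of per-weight saliencies, pushing the
    # network toward spreading importance thinly and evenly rather
    # than concentrating it in a few parameters.
    s = saliency(weights)
    return lam * np.var(s)

# Two weight vectors with the same L2 norm: one even, one concentrated.
even = np.full(10, 0.3)
concentrated = np.zeros(10)
concentrated[0] = 0.3 * np.sqrt(10)

# The evenly distributed vector incurs zero penalty; the concentrated
# one is penalized, so gradient descent on (loss + penalty) would
# favor distributed solutions.
print(distribution_penalty(even))          # 0.0
print(distribution_penalty(concentrated))  # > 0
```

In practice such a term would be added to the task loss during training, so the optimizer trades off fit against evenness of parameter importance.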