Neural Computation

January 1995, Vol. 7, No. 1, Pages 117-143
(doi: 10.1162/neco.1995.7.1.117)
© 1995 Massachusetts Institute of Technology
Bayesian Regularization and Pruning Using a Laplace Prior
Standard techniques for improved generalization from neural networks include weight decay and pruning. Weight decay has a Bayesian interpretation with the decay function corresponding to a prior over weights. The method of transformation groups and maximum entropy suggests a Laplace rather than a gaussian prior. After training, the weights then arrange themselves into two classes: (1) those with a common sensitivity to the data error and (2) those failing to achieve this sensitivity and that therefore vanish. Since the critical value is determined adaptively during training, pruning—in the sense of setting weights to exact zeros—becomes an automatic consequence of regularization alone. The count of free parameters is also reduced automatically as weights are pruned. A comparison is made with results of MacKay using the evidence framework and a gaussian regularizer.
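The connection between a Laplace prior and automatic pruning can be illustrated with a small sketch. This is not the paper's algorithm, only a simplified analogue: a Laplace prior over weights corresponds to an L1 penalty on the data error, and applying the penalty via a soft-threshold (proximal) step sets weights whose data-error sensitivity falls below the regularization strength to exact zeros, so pruning emerges from regularization alone. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression with a sparse ground truth: only the first
# three weights carry signal, the rest should be pruned.
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.normal(size=n)

alpha = 0.5   # strength of the Laplace prior (assumed value)
lr = 0.01     # learning rate
w = rng.normal(size=d)

for _ in range(2000):
    # Gradient step on the data error (mean squared error).
    grad = X.T @ (X @ w - y) / n
    w = w - lr * grad
    # Soft-threshold step for the L1 penalty: weights whose update is
    # dominated by the penalty are set to exactly zero and stay there.
    w = np.sign(w) * np.maximum(np.abs(w) - lr * alpha, 0.0)

print("nonzero weights:", int(np.sum(w != 0.0)))
```

With a Gaussian prior (L2 penalty) the corresponding shrinkage step only scales weights toward zero without ever reaching it, which is why the Laplace prior, unlike weight decay, yields exact zeros and an automatic reduction in the count of free parameters.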