
Neural Computation

July 1, 1997, Vol. 9, No. 5, Pages 1163-1178
(doi: 10.1162/neco.1997.9.5.1163)
© 1997 Massachusetts Institute of Technology
Averaging Regularized Estimators

We compare the performance of averaged regularized estimators and show that the improvement achievable by averaging depends critically on the degree of regularization used in training the individual estimators. We compare four averaging approaches: simple averaging, bagging, variance-based weighting, and variance-based bagging. For every averaging method, the greatest improvement over the individual estimators is achieved when no or only weak regularization is used; in that regime, variance-based weighting and variance-based bagging are superior to simple averaging and bagging. Our experiments indicate, however, that the best performance, for the individual estimators and for averaging alike, is achieved in combination with regularization. With increasing degrees of regularization, the two bagging-based approaches (bagging and variance-based bagging) outperform the individual estimators, simple averaging, and variance-based weighting. Overall, bagging and variance-based bagging appear to be the best combining methods over a wide range of degrees of regularization.
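The combining rules discussed above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's experimental setup: it uses polynomial ridge regression on a toy problem as the "regularized estimator" (the regularization strength `lam` playing the role of the degree of regularization), trains a bagged ensemble, and compares simple averaging with a variance-based weighting in which each estimator's empirical error serves as an assumed proxy for its variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem; purely illustrative of the combining rules.
def make_data(n):
    x = rng.uniform(-3, 3, size=(n, 1))
    return x, np.sin(x[:, 0]) + 0.3 * rng.standard_normal(n)

def features(X):
    # Polynomial basis up to degree 5.
    return np.hstack([X ** k for k in range(6)])

def ridge_fit(X, y, lam):
    # lam is the degree of regularization discussed in the text.
    Phi = features(X)
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]),
                           Phi.T @ y)

X, y = make_data(200)     # training data
Xt, yt = make_data(500)   # test data
lam, M = 1e-3, 20         # weak regularization, M individual estimators

# Bagging: train each estimator on a bootstrap resample of the data.
preds = []
for _ in range(M):
    idx = rng.choice(len(y), size=len(y), replace=True)
    w = ridge_fit(X[idx], y[idx], lam)
    preds.append(features(Xt) @ w)
preds = np.array(preds)   # shape (M, n_test)

def mse(p):
    return np.mean((p - yt) ** 2)

# Simple averaging: uniform mean over the ensemble's predictions.
mse_avg = mse(preds.mean(axis=0))

# Variance-based weighting (illustrative proxy): weight each estimator
# inversely to its empirical error -- an assumption standing in for a
# proper variance estimate, not the paper's exact recipe.
errs = np.array([mse(p) for p in preds])
wts = (1.0 / errs) / (1.0 / errs).sum()
mse_wtd = mse(wts @ preds)

mean_individual = errs.mean()
```

By convexity of squared error, the uniformly averaged prediction can never have a higher mean squared error than the average of the individual estimators' errors, which is the baseline improvement the abstract refers to; the variance-based weights simply shift mass toward the apparently more reliable members of the ensemble.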