Neural Computation

November 1, 2000, Vol. 12, No. 11, Pages 2685-2717
(doi: 10.1162/089976600300014890)
© 2000 Massachusetts Institute of Technology
The Bayesian Evidence Scheme for Regularizing Probability-Density Estimating Neural Networks

Training probability-density estimating neural networks with the expectation-maximization (EM) algorithm maximizes the likelihood of the training set and therefore leads to overfitting when the data are sparse. In this article, a regularization method for mixture models with generalized linear kernel centers is proposed, which adopts the Bayesian evidence approach and optimizes the hyperparameters of the prior by type II maximum likelihood. This involves a marginalization over the parameters, which is carried out by Laplace approximation and requires the derivation of the Hessian of the log-likelihood function. Incorporating this approach into the standard training scheme leads to a modified form of the EM algorithm, which includes a regularization term and adapts the hyperparameters on-line after each EM cycle. The article presents applications of this scheme to classification problems, the prediction of stochastic time series, and latent space models.
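
As a sketch of the type II maximum-likelihood (evidence) calculation referred to above, in the generic notation of the standard evidence framework (the symbols $D$, $w$, and $\alpha$ for the training data, model parameters, and prior hyperparameters are illustrative, not taken from the article): the hyperparameters are chosen to maximize the marginal likelihood

\[
p(D \mid \alpha) = \int p(D \mid w)\, p(w \mid \alpha)\, dw .
\]

In the evidence framework this integral is evaluated by a Laplace approximation around the most probable parameters $w^{\ast}$,

\[
p(D \mid \alpha) \approx p(D \mid w^{\ast})\, p(w^{\ast} \mid \alpha)\, (2\pi)^{W/2}\, \lvert \mathbf{H} \rvert^{-1/2},
\qquad
\mathbf{H} = -\left.\nabla \nabla \ln \big[\, p(D \mid w)\, p(w \mid \alpha) \,\big]\right|_{w = w^{\ast}},
\]

where $W$ is the number of parameters. This is where the Hessian enters: its determinant appears in the approximate evidence, and maximizing $\ln p(D \mid \alpha)$ with respect to the hyperparameters after each EM cycle yields the on-line adaptation the abstract describes.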