
Neural Computation

January 1996, Vol. 8, No. 1, Pages 129-151
(doi: 10.1162/neco.1996.8.1.129)
© 1995 Massachusetts Institute of Technology
On Convergence Properties of the EM Algorithm for Gaussian Mixtures

We build up the mathematical connection between the “Expectation-Maximization” (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix P, and we provide an explicit expression for this matrix. We then analyze the convergence of EM in terms of special properties of P and provide new results analyzing the effect that P has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of gaussian mixture models.
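The relationship described above can be checked numerically for the mixing proportions. The sketch below is illustrative, not taken from the paper: it assumes a two-component univariate gaussian mixture with synthetic data, holds the means and variances fixed, and tests whether the EM update for the mixing proportions coincides with a gradient step transformed by a candidate projection matrix P = (1/n)(diag(π) − ππᵀ).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic draw from a two-component univariate gaussian mixture
# (all numbers here are illustrative assumptions, not from the paper).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
n = x.size

# Current parameter guess; means and variances are held fixed so that
# only the mixing-proportion update is inspected.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 2.0])
var = np.array([1.0, 1.0])

# Component densities and responsibilities h_{ik} = P(component k | x_i).
dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
mix = (pi * dens).sum(axis=1, keepdims=True)   # mixture density p(x_i)
h = pi * dens / mix

# Standard EM update for the mixing proportions.
pi_em = h.sum(axis=0) / n

# Gradient of the log-likelihood with respect to pi (unconstrained form):
# d l / d pi_k = sum_i dens_k(x_i) / p(x_i).
grad = (dens / mix).sum(axis=0)

# Candidate projection matrix (an assumed form for this sketch).
P = (np.diag(pi) - np.outer(pi, pi)) / n

# The EM step should coincide with a P-transformed gradient step.
print(np.allclose(pi_em, pi + P @ grad))  # → True
```

Algebraically, (P grad)_k = (π_k g_k − π_k Σ_j π_j g_j)/n, and since π_k g_k = Σ_i h_ik, the transformed step reduces exactly to the EM re-estimate n_k/n, which is what the numerical check confirms.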