Neural Computation

November 1, 2001, Vol. 13, No. 11, Pages 2517-2532
(doi: 10.1162/089976601753196003)
© 2001 Massachusetts Institute of Technology
A Variational Method for Learning Sparse and Overcomplete Representations
Abstract

An expectation-maximization (EM) algorithm for learning sparse and overcomplete data representations is presented. The algorithm exploits a variational approximation to a range of heavy-tailed distributions whose limit is the Laplacian. A rigorous lower bound on the sparse prior distribution is derived, which enables analytic marginalization of a corresponding lower bound on the data likelihood. This marginalized bound yields an EM algorithm for learning the overcomplete basis vectors and inferring the most probable basis coefficients.
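
For intuition, a standard Gaussian variational lower bound on a Laplacian prior has the following form; this is a sketch of the general construction rather than necessarily the paper's exact bound, and the symbols s (a single basis coefficient) and \xi (a per-coefficient variational parameter) are illustrative notation:

    \exp(-|s|) \;\ge\; \exp\!\left(-\frac{s^{2}}{2\xi} - \frac{\xi}{2}\right), \qquad \xi > 0,

with equality at \xi = |s| (by the AM-GM inequality, s^{2}/(2\xi) + \xi/2 \ge |s|). Because the right-hand side is Gaussian in s, its product with a Gaussian likelihood can be integrated over the coefficients in closed form, which is what makes an analytic, marginalized lower bound on the data likelihood possible; in a variational EM scheme of this kind, the bound is tightened by re-optimizing \xi for each coefficient alongside the E-step.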