Abstract:
We derive a learning algorithm for inferring an overcomplete
basis by viewing it as a probabilistic model of the observed data.
Overcomplete bases allow for better approximation of the underlying
statistical density. Using a Laplacian prior on the basis
coefficients removes redundancy and leads to representations that
are sparse and are a nonlinear function of the data. This can be
viewed as a generalization of the technique of independent
component analysis and provides a method for blind source
separation when there are fewer mixtures than sources. We demonstrate the
utility of overcomplete representations on natural speech and show
that, compared to the traditional Fourier basis, the inferred
representations potentially have much greater coding
efficiency.
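
As a sketch of the setup the abstract describes, assuming a standard
sparse-coding formulation (the symbols $x$, $A$, $s$, $\sigma$, and
$\theta$ below are illustrative, not notation from the original): the
data are modeled as a noisy linear combination of overcomplete basis
vectors, the Laplacian prior yields an $\ell_1$ penalty on the
coefficients, and the sparse representation is the MAP estimate, which
is a nonlinear function of the data:
\[
x = A s + \epsilon, \qquad \epsilon \sim \mathcal{N}(0,\sigma^2 I),
\qquad P(s) \propto \exp\Bigl(-\theta \textstyle\sum_i |s_i|\Bigr),
\]
\[
\hat{s} \;=\; \arg\max_s \, P(s \mid x, A)
\;=\; \arg\min_s \; \frac{1}{2\sigma^2}\,\lVert x - A s \rVert^2
\;+\; \theta \sum_i |s_i| .
\]
Because an overcomplete $A$ has more columns than rows, $\hat{s}$ has
no closed linear form, unlike the matrix inverse available in standard
independent component analysis.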