Neural Computation

December 2012, Vol. 24, No. 12, Pages 3317-3339
(doi: 10.1162/NECO_a_00372)
© 2012 Massachusetts Institute of Technology
A Common Network Architecture Efficiently Implements a Variety of Sparsity-Based Inference Problems
The sparse coding hypothesis has generated significant interest in the computational and theoretical neuroscience communities, but there remain open questions about the exact quantitative form of the sparsity penalty and the implementation of such a coding rule in neurally plausible architectures. The main contribution of this work is to show that a wide variety of sparsity-based probabilistic inference problems proposed in the signal processing and statistics literatures can be implemented exactly in the common network architecture known as the locally competitive algorithm (LCA). Among the cost functions we examine are approximate $\ell_0$ norms ($\ell_p$ norms with $0 \le p < 1$), modified $\ell_1$-norms, block-$\ell_1$ norms, and reweighted algorithms. Of particular interest is that we show significantly increased performance in reweighted algorithms by inferring all parameters jointly in a dynamical system rather than using an iterative approach native to digital computational architectures.
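To make the LCA concrete, the following is a minimal NumPy sketch of its best-known case, the $\ell_1$ (soft-threshold) variant, which solves the standard sparse coding problem $\min_a \tfrac{1}{2}\|y - \Phi a\|_2^2 + \lambda \|a\|_1$. The function name, parameter names, and Euler step size are illustrative assumptions, not code from the paper; the other penalties discussed in the abstract correspond to different threshold functions in the same architecture.

```python
import numpy as np

def lca(y, Phi, lam=0.1, dt=0.01, n_steps=2000):
    """Illustrative sketch of the locally competitive algorithm (LCA).

    Euler integration of the membrane dynamics
        du/dt = Phi^T y - u - (Phi^T Phi - I) a,
    where the output a is a soft-thresholded version of the
    internal state u (this choice of threshold yields the
    l1 sparsity penalty).
    """
    n = Phi.shape[1]
    u = np.zeros(n)                 # internal (membrane) states
    b = Phi.T @ y                   # feedforward drive
    G = Phi.T @ Phi - np.eye(n)     # lateral inhibition weights

    def soft_threshold(u):
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

    for _ in range(n_steps):
        a = soft_threshold(u)       # active coefficients
        u += dt * (b - u - G @ a)   # Euler step of the ODE
    return soft_threshold(u)
```

With an orthonormal dictionary (e.g. `Phi = np.eye(n)`), the lateral inhibition term vanishes and the dynamics converge to the soft-thresholded projection of `y`, matching the closed-form $\ell_1$ solution; swapping `soft_threshold` for another nonlinearity implements the other cost functions on the same network.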