
Neural Computation

May 1, 2003, Vol. 15, No. 5, Pages 993-1012
(doi: 10.1162/089976603765202631)
© 2003 Massachusetts Institute of Technology
Sequential Bayesian Decoding with a Population of Neurons
Abstract

Population coding is a simplified model of distributed information processing in the brain. This study investigates the performance and implementation of a sequential Bayesian decoding (SBD) paradigm within the framework of population coding. In the first decoding step, when no prior knowledge is available, maximum likelihood inference is used; its result forms the prior knowledge of the stimulus for the second step. Estimates are then propagated sequentially to apply maximum a posteriori (MAP) decoding, in which the prior knowledge for each step is taken from the estimate of the previous step. We not only analyze the performance of SBD, obtaining the optimal form of prior knowledge that achieves the best estimation result, but also investigate its possible biological realization, in the sense that all operations are performed by the dynamics of a recurrent network. A crucial point for achieving MAP is to identify a mechanism that propagates prior knowledge. We find that this can be achieved by short-term adaptation of the network weights according to a Hebbian learning rule. Simulation results on both constant and time-varying stimuli support the analysis.
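The decoding scheme described above can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: it assumes Gaussian tuning curves with Poisson spiking, a grid search over candidate stimuli, and a Gaussian prior centered on the previous estimate (the tuning parameters, prior width, and grid are all placeholder choices).

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed encoding model: 50 neurons with Gaussian tuning curves
# (preferred stimuli spread over [-5, 5]) and Poisson spike counts.
centers = np.linspace(-5.0, 5.0, 50)   # preferred stimuli
sigma_tc = 1.0                          # tuning-curve width
r_max = 20.0                            # peak expected count per window

def rates(x):
    """Expected spike counts of the population for stimulus x."""
    return r_max * np.exp(-(x - centers) ** 2 / (2 * sigma_tc ** 2))

grid = np.linspace(-5.0, 5.0, 1001)     # candidate stimulus values

def log_likelihood(counts):
    """Poisson log-likelihood of the observed counts on the grid
    (additive constants dropped)."""
    lam = r_max * np.exp(-(grid[:, None] - centers) ** 2
                         / (2 * sigma_tc ** 2))
    return (counts * np.log(lam) - lam).sum(axis=1)

def sbd(stimuli, prior_width=0.5):
    """Sequential Bayesian decoding: maximum likelihood on the first
    step, then MAP with a Gaussian prior centered on the previous
    step's estimate."""
    estimates = []
    prev = None
    for x in stimuli:
        counts = rng.poisson(rates(x))          # one population response
        score = log_likelihood(counts)
        if prev is not None:                    # add log prior from step t-1
            score = score - (grid - prev) ** 2 / (2 * prior_width ** 2)
        prev = grid[np.argmax(score)]           # ML (step 1) or MAP estimate
        estimates.append(prev)
    return estimates

# Constant stimulus x = 1.0 decoded over 10 sequential steps.
est = sbd([1.0] * 10)
```

With a constant stimulus, the sequential prior keeps successive estimates close to each other, so the trajectory of estimates is smoother than independent ML decoding on each step; the paper's recurrent-network realization replaces the explicit prior term with short-term Hebbian weight adaptation.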