
Neural Computation

January 2008, Vol. 20, No. 1, Pages 118-145
(doi: 10.1162/neco.2008.20.1.118)
© 2007 Massachusetts Institute of Technology
Bayesian Spiking Neurons II: Learning

In the companion letter in this issue (“Bayesian Spiking Neurons I: Inference”), we showed that the dynamics of spiking neurons can be interpreted as a form of Bayesian integration, accumulating evidence over time about events in the external world or the body. Here we develop a theory of Bayesian learning in spiking neural networks, in which neurons learn to recognize the temporal dynamics of their synaptic inputs while successive layers of neurons learn hierarchical causal models of the sensory input. The resulting learning rule is local, spike-time dependent, and highly nonlinear. This approach provides a principled description of spiking and plasticity rules that maximize information transfer between successive layers of neurons while limiting the number of costly spikes.
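The core idea of the inference scheme summarized above, that a neuron accumulates evidence as a log-odds ratio and fires only when the evidence not yet communicated to downstream neurons exceeds a threshold, can be illustrated with a minimal sketch. This is a hypothetical simplification for intuition only, not the letter's actual model: the function name, the binary-observation likelihoods `p_on`/`p_off`, and the threshold `theta` are all illustrative assumptions.

```python
import math

def bayesian_spiking_neuron(observations, p_on=0.8, p_off=0.2, theta=2.0):
    """Toy evidence accumulator (illustrative, not the paper's model).

    Integrates the log-likelihood ratio of binary synaptic observations
    for a hidden cause, and emits a spike each time the accumulated
    evidence exceeds what previous spikes have already communicated
    by more than a threshold theta.
    """
    log_odds = 0.0   # running log-odds for the hidden cause
    sent = 0.0       # evidence already conveyed by past spikes
    spikes = []
    for t, x in enumerate(observations):
        # Log-likelihood ratio contributed by one binary observation.
        if x:
            log_odds += math.log(p_on / p_off)
        else:
            log_odds += math.log((1 - p_on) / (1 - p_off))
        # Spike only when the *new* evidence since the last spike
        # exceeds theta -- this is what keeps spikes sparse.
        if log_odds - sent > theta:
            spikes.append(t)
            sent = log_odds
    return spikes

# A stream of mostly "on" evidence yields only a few informative spikes.
print(bayesian_spiking_neuron([1, 1, 1, 0, 1, 1, 1, 1]))  # -> [1, 5, 7]
```

With these illustrative parameters each "on" observation adds log(0.8/0.2) ≈ 1.39 to the log-odds, so roughly every second confirming input pushes the uncommunicated evidence past theta = 2 and triggers a spike, while the contradicting observation at step 3 silently discounts the accumulated evidence.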