MIT CogNet, The Brain Sciences Connection (The MIT Press)
Associative memory in realistic neuronal networks

P. Latham

Abstract:

Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at a low firing rate, typically a few Hz. Embedding attractors is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can coexist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented.
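As a point of reference for the attractor idea discussed above, the following is a minimal sketch of Hopfield's original binary construction [1]: patterns are stored in the weight matrix by a Hebbian outer-product rule, and the network dynamics then pull a corrupted input back to the nearest stored pattern. This illustrates the reduced model that the abstract contrasts with realistic networks; it is not the analog-neuron model studied here, and the function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian storage: W = (1/N) * sum_mu xi^mu (xi^mu)^T, zero diagonal."""
    N = patterns.shape[1]
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=20):
    """Iterate the sign dynamics s <- sign(W s) until (hopefully) a fixed point."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties toward +1
    return s

N = 200
patterns = rng.choice([-1.0, 1.0], size=(3, N))  # three random +/-1 patterns
W = store(patterns)

# Corrupt 20% of the bits of pattern 0, then let the dynamics clean it up.
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
probe[flip] *= -1.0
recovered = recall(W, probe)

# Overlap of 1.0 means perfect recall of the stored pattern.
overlap = recovered @ patterns[0] / N
```

Each stored pattern is an attracting fixed point of the dynamics, which is what makes the network an associative memory; the difficulty the abstract addresses is that this construction has no analogue of a stable low-rate background state.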

References

[1] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA, 79:2554-2558, 1982.

[2] N. Brunel. Persistent activity and the single-cell frequency-current curve in a cortical network model. Network: Computation in Neural Systems, 11:261-280, 2000.

[3] P. E. Latham and S. N. Nirenberg. Intrinsic dynamics in cultured neuronal networks. Soc. Neurosci. Abstr., 25:2259, 1999.



© 2010 The MIT Press