Neural Computation

May 2010, Vol. 22, No. 5, Pages 1333-1357
(doi: 10.1162/neco.2010.02-09-957)
© 2010 Massachusetts Institute of Technology
A Gaussian Attractor Network for Memory and Recognition with Experience-Dependent Learning
Abstract

Attractor networks are widely believed to underlie the memory systems of animals across different species. Existing models have succeeded in qualitatively modeling properties of attractor dynamics, but their computational abilities are often limited by poor representations of realistic complex patterns, spurious attractors, low storage capacity, and difficulty in identifying the attractive fields of attractors. We propose a simple two-layer architecture, the gaussian attractor network, which has no spurious attractors if the patterns to be stored are uncorrelated and can store as many patterns as there are neurons in the output layer. Moreover, its attractive fields can be precisely quantified and manipulated. Equipped with experience-dependent unsupervised learning strategies, the network can exhibit both discrete and continuous attractor dynamics. A testable prediction based on numerical simulations is that there exist neurons in the brain that can initially discriminate two similar stimuli but lose this ability after extensive exposure to physically intermediate stimuli. Inspired by this network, we found that adding local feedback connections to a well-known hierarchical visual recognition model, HMAX, enables it to reproduce recent experimental results related to high-level visual perception.
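
To make the two-layer architecture concrete, the following is a minimal sketch of one plausible reading, not the paper's exact equations: hidden units compute gaussian similarities between the current state and the stored patterns, and the output layer resynthesizes the state as a similarity-weighted mixture of those patterns, so each stored pattern becomes a fixed point whose attractive field is set by the gaussian width. All names and parameter values here (gaussian_attractor_step, recall, sigma) are illustrative assumptions.

import numpy as np

def gaussian_attractor_step(x, patterns, sigma=1.0):
    # Hidden layer: gaussian similarity of the state to each stored pattern.
    d2 = np.sum((patterns - x) ** 2, axis=1)
    # Subtract the minimum distance before exponentiating for numerical stability.
    w = np.exp(-(d2 - d2.min()) / (2.0 * sigma ** 2))
    w /= w.sum()
    # Output layer: similarity-weighted mixture of the stored patterns.
    return w @ patterns

def recall(x0, patterns, sigma=1.0, max_steps=50, tol=1e-8):
    # Iterate the two-layer map until the state settles onto an attractor.
    x = x0.copy()
    for _ in range(max_steps):
        x_new = gaussian_attractor_step(x, patterns, sigma)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
patterns = rng.standard_normal((10, 50))            # 10 uncorrelated 50-dim patterns
cue = patterns[3] + 0.2 * rng.standard_normal(50)   # noisy version of pattern 3
out = recall(cue, patterns)
print(np.argmin(np.sum((patterns - out) ** 2, axis=1)))  # recovers index 3

Under these assumptions, with uncorrelated random patterns and a sufficiently small sigma, each stored pattern dominates its own neighborhood, so the iteration converges to a stored pattern rather than a spurious mixture, and the basin radius grows with sigma, mirroring the capacity and attractive-field claims in the abstract.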