Neural Computation

September 1, 2001, Vol. 13, No. 9, Pages 2075-2092
(doi: 10.1162/089976601750399317)
© 2001 Massachusetts Institute of Technology
An Autoassociative Neural Network Model of Paired-Associate Learning
Hebbian heteroassociative learning is inherently asymmetric. Storing a forward association, from item A to item B, enables recall of B (given A), but does not permit recall of A (given B). Recurrent networks can solve this problem by associating A to B and B back to A. In these recurrent networks, the forward and backward associations can be differentially weighted to account for asymmetries in recall performance. In the special case of equal strength forward and backward weights, these recurrent networks can be modeled as a single autoassociative network where A and B are two parts of a single, stored pattern. We analyze a general, recurrent neural network model of associative memory and examine its ability to fit a rich set of experimental data on human associative learning. The model fits the data significantly better when the forward and backward storage strengths are highly correlated than when they are less correlated. This network-based analysis of associative learning supports the view that associations between symbolic elements are better conceptualized as a blending of two ideas into a single unit than as separately modifiable forward and backward associations linking representations in memory.
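The special case described above can be sketched numerically: a single Hopfield-style autoassociative network stores the joint pattern [A; B] via an outer-product Hebbian rule, and the two cross-association blocks of the weight matrix (A driving B, and B driving A) can be scaled separately to model differentially weighted forward and backward strengths. This is a minimal illustration, not the paper's actual model; the pattern size, bipolar coding, synchronous sign-update rule, and the names `gamma_f`/`gamma_b` are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # units per item; the joint pattern [A; B] has 2n units

# Random bipolar (+/-1) patterns standing in for items A and B
A = rng.choice([-1, 1], size=n)
B = rng.choice([-1, 1], size=n)
x = np.concatenate([A, B])

# Hebbian outer-product storage of the single joint pattern
W = np.outer(x, x).astype(float)
np.fill_diagonal(W, 0.0)

# Differentially weight the cross-association blocks:
# gamma_f scales the forward (A -> B) weights, gamma_b the backward (B -> A).
# Equal values recover the pure autoassociative case.
gamma_f, gamma_b = 1.0, 1.0
W[n:, :n] *= gamma_f  # B units driven by A units (forward association)
W[:n, n:] *= gamma_b  # A units driven by B units (backward association)

# Cue with A; the B half of the state starts unknown (zeros)
state = np.concatenate([A, np.zeros(n)])
for _ in range(5):  # synchronous sign updates until the state settles
    state = np.sign(W @ state)
    state[state == 0] = 1

recalled_B = state[n:]
print(np.array_equal(recalled_B, B))  # forward recall: B retrieved from A
```

Cueing with B in the second half of the state (and zeros in the first) recovers A the same way, which is the symmetry the heteroassociative rule alone cannot provide.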