
Neural Computation

January 2011, Vol. 23, No. 1, Pages 251-283
(doi: 10.1162/NECO_a_00064)
© 2010 Massachusetts Institute of Technology
Broken Symmetries in a Location-Invariant Word Recognition Network

We studied the feedforward network proposed by Dandurand et al. (2010), which maps location-specific letter inputs to location-invariant word outputs, probing the hidden layer to determine the nature of the code. Hidden patterns for words were densely distributed, and K-means clustering on single-letter patterns produced evidence that the network had formed semi-location-invariant letter representations during training. The possible confound with superseding bigram representations was ruled out, and linear regressions showed that any word pattern was well approximated by a linear combination of its constituent letter patterns. Emulating this code using overlapping holographic representations (Plate, 1995) uncovered a surprisingly acute and useful correspondence with the network, stemming from a broken symmetry in the connection weight matrix and related to the group-invariance theorem (Minsky & Papert, 1969). These results also explain how the network can reproduce relative-position and transposition priming effects found in humans.
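The holographic representations referenced above (Plate, 1995) bind role and filler vectors with circular convolution and superpose the bindings into a single pattern. The sketch below is a minimal, hypothetical illustration of that scheme, not the paper's actual emulation: the dimensionality, the alphabet, and the slot-based position roles are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024  # vector dimensionality (illustrative choice)

def cconv(a, b):
    """Circular convolution: the HRR binding operator."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Circular correlation: approximate unbinding (Plate, 1995)."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

# Random vectors for letters (fillers) and position slots (roles);
# elements drawn i.i.d. N(0, 1/d) so vectors have roughly unit norm.
letters = {ch: rng.normal(0.0, 1.0 / np.sqrt(d), d) for ch in "abde"}
slots = [rng.normal(0.0, 1.0 / np.sqrt(d), d) for _ in range(4)]

# A word pattern is the superposition of letter-in-slot bindings.
word = sum(cconv(slots[i], letters[ch]) for i, ch in enumerate("bead"))

# Unbinding slot 2 recovers a noisy copy of the letter stored there;
# a nearest-neighbor lookup against the letter vectors cleans it up.
probe = ccorr(slots[2], word)
best = max(letters, key=lambda ch: probe @ letters[ch])
print(best)  # letter recovered from position 2 of "bead"
```

Because the bindings merely superpose, similar words built this way overlap heavily, which is the kind of distributed, decomposable code the abstract reports finding in the network's hidden layer.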