Abstract:
Attractor networks, which map an input space to a discrete
output space, are useful for pattern completion---cleaning up noisy
or missing input features. However, designing a net to have a given
set of attractors is notoriously tricky; training procedures are
CPU intensive and often produce spurious attractors and
ill-conditioned attractor basins. These difficulties occur because
each connection in the network participates in the encoding of
multiple attractors. We describe an alternative formulation of
attractor networks in which the encoding of knowledge is local, not
distributed. Although localist attractor networks have similar
dynamics to their distributed counterparts, they are much easier to
work with and interpret. We propose a statistical formulation of
localist attractor net dynamics, which yields a convergence proof
and a mathematical interpretation of model parameters. We present
simulation experiments that explore the behavior of localist
attractor networks, showing that spurious attractors are rare and
that the networks exhibit two desirable properties of psychological and
neurobiological models: priming---faster convergence to an
attractor that has been recently visited---and gang
effects---in which the presence of an attractor enhances the
attractor basins of neighboring attractors.
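To make the localist formulation concrete, the sketch below gives one plausible reading of the statistical dynamics summarized above: each attractor corresponds to a single localist unit, the unit's responsibility for the current state is computed as a soft mixture-model-style assignment, and the state is pulled toward the responsibility-weighted average of the attractors while the assignment width is annealed. The function name, parameter values, and the exact update rule are illustrative assumptions for this sketch, not the paper's definitive equations.

```python
import numpy as np

def localist_attractor_net(x, attractors, priors=None, n_steps=50,
                           sigma_init=2.0, sigma_min=0.05, anneal=0.9):
    """Soft-assignment attractor dynamics (assumed, mixture-model-style).

    x          : noisy or partial input pattern, shape (d,)
    attractors : one row per attractor / localist unit, shape (k, d)
    priors     : optional per-attractor mixing weights, shape (k,)
    """
    k, d = attractors.shape
    if priors is None:
        priors = np.full(k, 1.0 / k)
    y = x.astype(float).copy()        # state starts at the observed input
    sigma = sigma_init
    for _ in range(n_steps):
        # Responsibility of each attractor for the current state:
        # posterior under an isotropic Gaussian centered on each attractor.
        sq_dist = np.sum((y - attractors) ** 2, axis=1)
        log_q = np.log(priors) - sq_dist / (2.0 * sigma ** 2)
        q = np.exp(log_q - log_q.max())
        q /= q.sum()
        # Move the state toward the responsibility-weighted attractor average.
        y = q @ attractors
        # Anneal the width so the state commits to a single attractor.
        sigma = max(sigma * anneal, sigma_min)
    return y, q

if __name__ == "__main__":
    # Three stored patterns; complete a corrupted version of the first one.
    A = np.array([[ 1.,  1.,  1.,  1.],
                  [ 1., -1.,  1., -1.],
                  [-1., -1.,  1.,  1.]])
    noisy = np.array([1., 1., 0., 1.])   # third feature missing
    y, q = localist_attractor_net(noisy, A)
    print(np.round(y, 2), np.round(q, 2))
```

Because each attractor is tied to its own unit, the stored patterns can be read off (and edited) directly, which is the sense in which the localist encoding is easier to work with than a distributed one.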