Neural Computation

November 1, 2002, Vol. 14, No. 11, Pages 2627-2646
(doi: 10.1162/089976602760408008)
© 2002 Massachusetts Institute of Technology
Selectively Grouping Neurons in Recurrent Networks of Lateral Inhibition
Winner-take-all networks have been proposed to underlie many of the brain's fundamental computational abilities. However, not much is known about how to extend the grouping of potential winners in these networks beyond single neurons or uniformly arranged groups of neurons. We show that competition between arbitrary groups of neurons can be realized by organizing lateral inhibition in linear threshold networks. Given a collection of potentially overlapping groups (with the exception of some degenerate cases), the lateral inhibition results in network dynamics such that any permitted set of neurons that can be coactivated by some input at a stable steady state is contained in one of the groups. The information about the input is preserved in this operation: the activity level of a neuron in a permitted set corresponds to its stimulus strength, amplified by some constant. Sets of neurons that are not part of a group cannot be coactivated by any input at a stable steady state. We analyze the storage capacity of such a network for random groups, that is, the number of random groups the network can store as permitted sets without creating too many spurious ones. In this framework, we calculate the optimal sparsity of the groups (maximizing group entropy). We find that for dense inputs, the optimal sparsity is unphysiologically small. However, when the inputs and the groups are equally sparse, we derive a more plausible optimal sparsity. We believe our results are the first steps toward attractor theories in hybrid analog-digital networks.
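The mechanism the abstract describes can be illustrated with a small simulation. The sketch below (an assumption-laden toy, not the paper's exact construction) uses standard linear threshold dynamics, dx/dt = -x + [Wx + b]_+, with lateral inhibition of strength beta > 1 placed only between pairs of neurons that share no group. Under these assumptions, neurons within a common group can be coactivated at a stable steady state with activity matching their input strength, while neurons outside the winning group are suppressed to zero:

```python
import numpy as np

def simulate_lateral_inhibition(groups, b, n, beta=2.0, dt=0.1, steps=500):
    """Euler-integrate a linear threshold network dx/dt = -x + [Wx + b]_+ .

    groups : list of sets of neuron indices (may overlap)
    b      : input vector (stimulus strengths), shape (n,)
    Lateral inhibition -beta is placed between every pair of neurons
    that do not share any group; no interaction within a group.
    """
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if not any(i in g and j in g for g in groups):
                W[i, j] = W[j, i] = -beta  # cross-group inhibition
    x = np.zeros(n)
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + b))
    return x

# Two overlapping groups; input favors the first group.
groups = [{0, 1, 2}, {2, 3, 4}]
b = np.array([1.0, 0.9, 0.8, 0.1, 0.1])
x = simulate_lateral_inhibition(groups, b, n=5)
# Neurons 0-2 settle near their input strengths; 3 and 4 are silenced,
# so the coactive set {0, 1, 2} is contained in a stored group.
print(np.round(x, 3))
```

With beta > 1, the 2x2 submatrix of W for a cross-group pair has an eigenvalue exceeding 1, so such pairs cannot be stably coactive, while within-group submatrices (all zeros here) are trivially stable; this mirrors the abstract's dichotomy between permitted and forbidden sets.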