Neural Computation

July 2007, Vol. 19, No. 7, Pages 1798-1853
(doi: 10.1162/neco.2007.19.7.1798)
© 2007 Massachusetts Institute of Technology
Selectivity and Stability via Dendritic Nonlinearity
Inspired by recent studies of dendritic computation, we constructed a recurrent neural network model incorporating dendritic lateral inhibition. The model consists of an input layer and a neuron layer containing excitatory cells and an inhibitory cell; the inhibitory cell is activated by the pooled activities of all the excitatory cells and in turn inhibits each dendritic branch of the excitatory cells, which receive excitation from the input layer. Dendritic nonlinear operation, consisting of branch-specifically rectified inhibition and saturation, is described by imposing nonlinear transfer functions before summation over the branches. In this model, with sufficiently strong recurrent excitation, transiently presenting a stimulus that is highly correlated with the feedforward connections of one of the excitatory cells makes the corresponding cell highly active, and its activity is sustained after the stimulus is turned off, while all the other excitatory cells remain at low activity. In contrast, transiently presenting a stimulus that is not highly correlated with the feedforward connections of any of the excitatory cells leaves all the excitatory cells at low activity. Notably, this stimulus-selective sustained response is preserved over a wide range of stimulus intensities. We derive an analytical formulation of the model in the limit where individual excitatory cells have an infinite number of dendritic branches and prove the existence of an equilibrium point corresponding to the balanced low-level activity state observed in the simulations, whose stability depends solely on the signal-to-noise ratio of the stimulus. We propose this model as a model of stimulus selectivity that achieves self-sustainability and intensity invariance simultaneously, which has been difficult in conventional competitive neural networks of comparable architectural complexity.
We discuss the biological relevance of the model in a general framework of computational neuroscience.
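The mechanism sketched in the abstract can be illustrated numerically. The following is a minimal sketch, not the paper's actual model: all parameter values, the block-structured feedforward weights, and the tanh saturation are our own assumptions, chosen only to reproduce the qualitative behaviour described (branch-wise rectified inhibition and saturation before summation over branches, with pooled inhibition driven by total excitatory activity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not taken from the paper), chosen so the
# qualitative behaviour described in the abstract emerges.
N_CELLS, N_BRANCH, BLOCK = 4, 40, 10   # excitatory cells, dendritic branches
W_REC, W_INH, THETA = 3.0, 1.5, 0.1    # recurrent gain, inhibition, threshold
DT, TAU = 1.0, 10.0                    # Euler step and membrane time constant

# Assumed feedforward weights: each cell listens to its own disjoint block
# of input lines, normalized so that W[i] is a unit vector.
W = np.zeros((N_CELLS, N_BRANCH))
for i in range(N_CELLS):
    W[i, i * BLOCK:(i + 1) * BLOCK] = 1.0 / np.sqrt(BLOCK)

def branch_nl(u):
    """Branch-specific rectification (inhibition acts before the
    rectifier) followed by saturation."""
    return np.tanh(np.maximum(u - THETA, 0.0))

def simulate(stimulus, steps=400, stim_steps=100):
    """Present `stimulus` for stim_steps, then run freely; return final rates."""
    r = np.zeros(N_CELLS)                          # excitatory firing rates
    for t in range(steps):
        x = stimulus if t < stim_steps else np.zeros(N_BRANCH)
        inh = W_INH * r.sum() / N_BRANCH           # pooled inhibition, per branch
        ff = W * x                                 # feedforward drive per branch
        rec = W_REC * r[:, None] / N_BRANCH        # recurrent self-excitation
        drive = branch_nl(ff + rec - inh).sum(axis=1)  # sum over branches
        r += DT / TAU * (-r + drive)
    return r

# A stimulus matching cell 0's weights ignites a sustained response in cell 0
# only, at either intensity; an uncorrelated random stimulus leaves every
# cell at low activity once it is withdrawn.
v = rng.random(N_BRANCH)
weak   = simulate(8.0 * W[0])
strong = simulate(20.0 * W[0])
noise  = simulate(8.0 * v / np.linalg.norm(v))
print(weak.round(2), strong.round(2), noise.round(2))
```

In this sketch the matched stimulus drives only cell 0 above the ignition point of the recurrent loop, so its activity persists after stimulus offset, and the two intensities relax to the same self-sustained attractor (the intensity-invariance property); the uncorrelated stimulus spreads drive across all cells, pooled inhibition keeps every cell below ignition, and all rates decay once the stimulus ends.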