
Neural Computation

December 2014, Vol. 26, No. 12, Pages 2735-2789
(doi: 10.1162/NECO_a_00675)
© 2014 Massachusetts Institute of Technology
Computing with a Canonical Neural Circuits Model with Pool Normalization and Modulating Feedback
Abstract

Evidence suggests that the brain uses an operational set of canonical computations such as normalization, input filtering, and response gain enhancement via reentrant feedback. Here, we propose a three-stage columnar architecture of cascaded model neurons to describe a core circuit that combines feedforward and feedback signal pathways with inhibitory pooling of neuronal activity for normalization. We present an analytical investigation of such a circuit, first reducing its detail by lumping the initial feedforward response filtering and the reentrant modulating signal amplification. The resulting excitatory-inhibitory pair of neurons is analyzed in a 2D phase space. The inhibitory pool activation is treated as a separate mechanism with distinct effects. We analyze subtractive as well as divisive (shunting) interactions to implement center-surround mechanisms that capture the normalization effects observed in the response characteristics of real neurons. Different variants of a core model architecture are derived and analyzed, in particular: individual excitatory neurons (without pool inhibition), interaction with a subtractive or divisive (i.e., shunting) inhibitory pool, and the dynamics of recurrent self-excitation combined with divisive inhibition. The stability and existence properties of these model instances are characterized, providing guidelines for adjusting these properties through proper model parameterization. The significance of the derived results is demonstrated by theoretical predictions of response behaviors for multiple interacting hypercolumns in single and multiple feature dimensions. Numerical simulations confirm these predictions and offer explanations for several neural computational properties. Among these, we consider orientation contrast-dependent response behavior, different forms of attentional modulation, contrast element grouping, and the dynamic adaptation of the silent surround in extraclassical receptive field configurations, using only slight variations of the same core reference model.
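As a rough illustration of the kind of lumped excitatory-inhibitory dynamics with divisive (shunting) pool inhibition and modulating feedback described in the abstract, the following sketch integrates a hypothetical two-column version with simple Euler steps. The equations, parameter names (`w_fb`, `alpha`, `beta`), and values are illustrative assumptions for exposition, not the paper's actual model.

```python
import numpy as np

def simulate(x, w_fb=0.5, alpha=1.0, beta=1.0, dt=0.01, steps=2000):
    """Hypothetical sketch (not the paper's equations) of a lumped
    excitatory-inhibitory circuit with a shared divisive pool:

        dv/dt = -alpha*v + x*(1 + w_fb*v) - v*p   (shunting pool inhibition)
        dp/dt = -beta*p + sum(v)                  (pool integrates activity)

    x : feedforward inputs, one per model column.
    """
    v = np.zeros_like(x)  # excitatory activities (one per column)
    p = 0.0               # shared inhibitory pool activity
    for _ in range(steps):
        drive = x * (1.0 + w_fb * v)       # input gain-modulated by feedback
        dv = -alpha * v + drive - v * p    # divisive (shunting) inhibition
        dp = -beta * p + v.sum()           # pool driven by excitatory sum
        v = np.maximum(v + dt * dv, 0.0)   # half-wave rectification
        p = max(p + dt * dp, 0.0)
    # At steady state, v is approximately drive / (alpha + p),
    # i.e., each response is normalized by the pooled activity.
    return v, p
```

In this toy version, doubling all inputs raises the total steady-state activity by less than a factor of two, because the shared pool grows with the summed excitation, which is the characteristic sublinear signature of divisive normalization.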