Introduction
Many single neurons in the primate orbitofrontal cortex respond to different combinations of taste, somatosensory, visual, olfactory, and auditory inputs. These neurons achieve multisensory convergence, in that many of the neurons in the preceding cortical areas (such as the primary taste cortex and the inferior temporal visual cortex) are unimodal. (For example, there are few or no visual and olfactory neurons in the primary taste cortex, and no taste, olfactory, or somatosensory neurons in the inferior temporal visual cortex.) The visual-to-taste convergence is organized by associative learning, with synaptic modification of the visual inputs onto taste-responsive neurons. This associative learning can reverse (in a visual discrimination reversal task) in as little as one trial, and this rapid relearning is an important function performed by the orbitofrontal cortex. Olfactory-to-taste associations are learned in a similar way but reverse less rapidly, providing some stability in the representation of flavor. The taste (and somatosensory) neuronal representation is of the reward value of the sensory input, in that feeding to satiety decreases both the responses of the neurons and the reward value of the food. The multisensory convergence in the orbitofrontal cortex, at the ends of the “what” processing systems for vision, taste, and smell, thus enables a rich, multisensory representation of the reward (and punishment) value of stimuli, and allows rapid relearning of which visual stimuli are currently associated with primary reinforcers. At the single-neuron level, this convergence serves two roles: to build representations that reflect the whole combination of sensory inputs needed to define a stimulus, so that behavior can be directed to just that stimulus and not to others (as in sensory-specific satiety); and to enable behavior to, for example, a visual stimulus to depend on the taste, or more generally the reinforcer, with which that visual stimulus is currently associated.
These functions provide a basis for understanding the role of the orbitofrontal cortex in emotions, which can be construed as states elicited by reinforcers. Neurons in the amygdala also respond to similar sensory inputs from two or more sensory modalities, but their learning of associations between stimuli in different sensory modalities is less flexible than that of neurons in the orbitofrontal cortex. Neurons in the cortex in the macaque superior temporal sulcus frequently respond to moving objects, including the lips of a speaker, and other neurons there respond to auditory stimuli, including vocalization. This region is therefore likely to be an important multisensory convergence zone for dynamically changing stimuli that correspond to particular types of biological motion, and its convergence can give rise to phenomena such as the McGurk effect. A formal model of coupled networks is described that can account for phenomena arising from multisensory convergence, including illusions such as the McGurk effect. Neurons in the primate hippocampus represent visual space in an allocentric coordinate frame, and combine these representations with idiothetic (self-motion) inputs from the vestibular and proprioceptive systems. This integration requires path integration of the idiothetic inputs so that they can be combined with the allocentric visual spatial representations, and a model of how this path integration can be performed is described. An analogous model can also account for how single neurons in the primate presubiculum respond both to visual cues anchored to head direction and to signals of vestibular origin related to changes of head direction, which require path integration before they are combined with the visual inputs. These investigations provide a basis for understanding the functions of, and the underlying mechanisms for, the multisensory convergence found at the ends of the unimodal cortical sensory processing streams.
This chapter describes some of the rules of cortical processing at the ends of the ventral stream processing areas that build multisensory representations of “what” object is present. These ventral stream systems include the visual pathways to the inferior temporal visual cortex, where what object is being seen is represented (E. T. Rolls & Deco, 2002), and the taste and olfactory systems, which define what taste or smell is being presented (E. T. Rolls, 1999a). To understand the general architecture of these processing streams, we need evidence from neuroanatomy. To establish whether inputs from different sensory systems converge within a cortical area, evidence that single neurons respond to both types of sensory input is the most direct measure, and a single-neuron recording approach also allows the rules of sensory convergence (such as whether the convergence reflects a learned association between the sensory inputs) to be discovered. Finally, because each single neuron is tuned differently to each sensory stimulus, the nature of the representations built by multisensory convergence is best understood by considering populations of different single neurons at the network level; indeed, this neuronal network level enables precise models to be constructed of how whole populations of neurons interact with each other to produce interesting phenomena, including attention. Because all these approaches are necessary to define the nature and mechanisms of multisensory convergence, all are included and combined in this chapter. An introduction to this computational approach to understanding brain function, which builds on single-neuron neurophysiology but leads to models that make predictions at the more global level of what is measured in, for example, a neuroimaging experiment, is provided by E. T. Rolls and Deco (2002).
One of the cortical areas that builds multisensory representations from the outputs of mainly unimodal processing streams is the orbitofrontal cortex. Because the sensory modalities that project into this region include the taste and somatosensory systems, the orbitofrontal cortex plays an important role in representing primary reinforcers, that is, stimuli that can produce reward or punishment innately, without learning (E. T. Rolls, 1999a). (A reward is a stimulus that an animal will work to obtain. A punisher is a stimulus that an animal will work to escape from or avoid. Taste is a primary reinforcer in that the first time that an animal is salt-deprived, it shows a preference for salt taste [see E. T. Rolls, 1999a]. Similarly, a painful stimulus is innately a punisher.) We will see that the orbitofrontal cortex builds multisensory representations by association learning, learning, for example, that a particular visual stimulus is associated with a particular taste.
Some of the anatomical pathways that provide the basis of the multisensory representations to be described are shown in Figure 19.1. Most of the areas that precede the orbitofrontal cortex, and the amygdala, which has similar connectivity, are mainly unimodal. This is part of the evidence that multisensory representations are formed in the orbitofrontal cortex and amygdala. An interesting aspect of the architecture shown in Figure 19.1 is that the representation of objects and faces in the inferior temporal visual cortex is in an ideal form for the pattern association learning that enables visual-to-taste associations to be formed, as will be described.
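The pattern association principle just described can be illustrated with a minimal sketch. All sizes, input patterns, and learning rates below are illustrative assumptions, not parameters of any model in this chapter: a Hebbian weight matrix links visual input fibers to taste-driven output neurons, and a single strong reversal trial is enough to remap a visual stimulus onto a new taste.

```python
import numpy as np

n_visual, n_taste = 50, 2                  # sizes are illustrative only

# Two visual stimuli as sparse binary input patterns (hand-made here,
# with a small overlap of two shared active inputs)
stim_a = np.zeros(n_visual); stim_a[:10] = 1.0
stim_b = np.zeros(n_visual); stim_b[8:18] = 1.0

W = np.zeros((n_taste, n_visual))          # modifiable visual-to-taste synapses

def associate(W, visual, taste_idx, rate=1.0):
    """Hebbian update: potentiate synapses from active visual inputs
    onto the output neuron currently driven by the taste input."""
    taste = np.zeros(n_taste)
    taste[taste_idx] = 1.0
    return W + rate * np.outer(taste, visual)

def recall(W, visual):
    """Index of the taste output most activated by the visual input alone."""
    return int(np.argmax(W @ visual))

# Initial learning: stimulus A -> taste 0, stimulus B -> taste 1
W = associate(W, stim_a, 0)
W = associate(W, stim_b, 1)
assert recall(W, stim_a) == 0 and recall(W, stim_b) == 1

# Reversal: a single strong trial remaps stimulus A onto taste 1
W = associate(W, stim_a, 1, rate=2.0)
print(recall(W, stim_a))  # -> 1: the visual-to-taste mapping has reversed
```

In this sketch the reversal succeeds simply because the new Hebbian increment outweighs the earlier association; it is meant only to illustrate the associative principle, not the orbitofrontal circuit itself.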
Figure 19.1.
Schematic diagram of the taste and olfactory pathways in primates showing how they converge with each other and with visual pathways. The gate functions shown refer to the finding that the responses of taste neurons in the orbitofrontal cortex and the lateral hypothalamus are modulated by hunger. VPMpc, ventral-posteromedial thalamic nucleus; V1, V2, V4, visual cortical areas; VL, ventral-posterolateral group of thalamic nuclei.
Much of the fundamental evidence for multisensory convergence comes from single-neuron recording studies, because such studies are the direct and unequivocal method for showing whether real convergence from different modalities actually occurs in a brain region, in contrast to a neuroimaging study, in which nearby (within millimeters) or even intermingled neuronal populations may not be connected to each other. Single-neuron recording also allows the nature of what is represented from each sensory modality in the convergent neurons to be analyzed, and the rules that underlie the formation of the multisensory convergence to be studied directly.
To enable the neurophysiological studies to provide a fundamental basis for understanding multisensory convergence in these systems in humans, and thus to advance our understanding of disorders of these brain areas in humans, the studies described were performed in nonhuman primates, macaques, in which both the temporal lobe cortical visual areas and the orbitofrontal cortex are well developed, as in humans. Because visual and olfactory processing in the primate orbitofrontal cortex is closely linked to that for taste, a summary of the cortical representation of taste in the primate orbitofrontal cortex is provided first.