Abstract: In the primary cortices, the sensory modalities use
distinct sets of coordinates to encode location; vision is
initially retinotopic, while auditory space is initially
head-centered. Yet higher up in the cortical hierarchy, these
sensory modalities are merged and the question arises as to which
frame of reference is used for this merging process. The results of
a series of pointing experiments in virtual reality suggest that
the retinal frame of reference plays a central role. It appears
that the locations of visual, auditory, proprioceptive, and imagined
targets are all remembered in retinal coordinates. These results
are particularly surprising for auditory targets, since a pointing
motor command could in principle be computed directly from the
head-centered location of the target without recovering its retinal
position. The results demonstrate the pivotal role of the visual
system in human spatial function.
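For concreteness, the frame relation at issue can be sketched in one
dimension (a standard formulation; the notation is assumed here, not
taken from the abstract). If a target lies at head-centered azimuth
$\theta_{\mathrm{head}}$ and the eyes point at azimuth
$\theta_{\mathrm{eye}}$ within the head, its retinal (gaze-centered)
azimuth is

    $\theta_{\mathrm{retinal}} = \theta_{\mathrm{head}} - \theta_{\mathrm{eye}}$

A pointing command computed in head coordinates needs only
$\theta_{\mathrm{head}}$; a retinal encoding of an auditory target
additionally requires the current eye position $\theta_{\mathrm{eye}}$,
which is why the retinal result for auditory targets is surprising.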