Separate, Distributed Processing of Environmental, Speech and Musical Sounds in the Cerebral Hemispheres

 Jagmeet S. Kanwal, Jon Kim and Kyosuke Kamada
  
 

Abstract:
Auditory processing in the primate cortex is postulated to proceed along a "sound localization" and a "sound pattern" processing stream (Rauschecker et al., 1998). We examined the cortical localization of processing for three functionally distinct types of sound patterns, namely speech, music, and environmental sounds, using the BOLD method of fMRI in six right-handed human males. Nineteen contiguous axial T2*-weighted gradient-echo EPI images were obtained using a 1.5 T Siemens Magnetom Vision imager. Activation was visualized at a significance level of 0.01 for sound presentation versus silence (background noise). T1-weighted 3D volumes were acquired for anatomical localization. Speech sounds produced clustered activation in the superior and middle temporal gyri, and single foci of activation in the central sulcus and middle frontal gyrus, predominantly in the left hemisphere, whereas musical melodies activated some of the same areas, predominantly in the right hemisphere. Music also produced some activation in the auditory cortex on the right side. Nonspeech environmental sounds produced mostly bilateral activation in the auditory cortex as well as at the level of the precentral and postcentral gyri. These results indicate that processing of different types of complex sound patterns in humans is parceled among distributed and largely separate networks at and beyond the level of the auditory cortex.
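
For illustration only, the voxelwise sound-versus-silence contrast described in the abstract can be sketched in a few lines of Python. This is a minimal sketch, not the authors' analysis pipeline: the file names (sound_bold.npy, silence_bold.npy), the array layout, and the use of a simple two-sample t-test are all assumptions made here, with only the p < 0.01 threshold taken from the abstract.

    import numpy as np
    from scipy import stats

    # Hypothetical preprocessed BOLD data, reshaped to (volumes x voxels),
    # split by condition: sound presentation vs. silence (background noise).
    sound_volumes = np.load("sound_bold.npy")      # shape: (n_sound_vols, n_voxels)
    silence_volumes = np.load("silence_bold.npy")  # shape: (n_silence_vols, n_voxels)

    # Voxelwise two-sample t-test: sound presentation versus silence.
    t_vals, p_vals = stats.ttest_ind(sound_volumes, silence_volumes, axis=0)

    # Keep voxels where sound drives a stronger BOLD response at p < 0.01,
    # matching the significance level reported in the abstract.
    active = (p_vals < 0.01) & (t_vals > 0)
    print(f"{active.sum()} voxels active for sound > silence at p < 0.01")
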

 
 


© 2010 The MIT Press