Monthly; 208 pp. per issue; 8 1/2 x 11, illustrated
ISSN: 0898-929X | E-ISSN: 1530-8898
2014 Impact Factor: 4.69

Journal of Cognitive Neuroscience

September 2009, Vol. 21, No. 9, Pages 1790-1804
(doi: 10.1162/jocn.2009.21118)
© 2008 Massachusetts Institute of Technology
A Multisensory Cortical Network for Understanding Speech in Noise
Abstract

In noisy environments, listeners often hear a speaker's voice yet struggle to understand what is said. The most effective way to improve intelligibility in such conditions is to watch the speaker's mouth movements. Here we identify the neural networks that distinguish understanding from merely hearing speech, and determine how the brain applies visual information to improve intelligibility. Using functional magnetic resonance imaging, we show that understanding speech-in-noise is supported by a network of brain areas including the left superior parietal lobule, the motor/premotor cortex, and the left anterior superior temporal sulcus (STS), a likely apex of the acoustic processing hierarchy. Multisensory integration likely improves comprehension through improved communication between the left temporal–occipital boundary, the left medial-temporal lobe, and the left STS. These findings demonstrate how the brain uses information from multiple modalities to improve speech comprehension in naturalistic, acoustically adverse conditions.