MIT CogNet, The Brain Sciences ConnectionFrom the MIT Press, Link to Online Catalog
fMRI of Speech Perception Using Sinewave Speech and Acoustically Matched Nonspeech

 D. H. Whalen, Randall Benson, Matthew Richardson, Vince Clark and Song Lai
  
 

Abstract:
Comparisons of speech and nonspeech perception are usually confounded because the acoustics of the two cannot be made fully comparable. Earlier work found brain areas selectively active for speech, but the acoustic differences may have activated processing regions for extraneous reasons. Natural speech and standard synthesis necessarily differ acoustically from nonspeech controls, but close comparability is possible with sinewave speech, which replaces the speech resonances with sinewaves. The result initially sounds like odd computer noises, but once it is perceived phonetically, it elicits all phoneme types. Here, sinewave speech and nonspeech reorganizations of the same tones provided fMRI evidence that speech perception engages a neurological specialization regardless of the acoustics. Sinewave versions of nonwords were built from amplitude- and frequency-modulated tones. Nonspeech versions combined tones from different syllables, time-reversed the mid-frequency tone, and swapped the two halves of the lowest tone. Participants first identified sentences and nonwords; the fMRI test itself used passive listening. Test blocks contained speech or nonspeech; control conditions included a silent block and a block of musical chords. Nineteen right-handed adults participated. Behaviorally, most females failed to identify the speech, while most males and one female performed accurately. Brain maps contrasting speech with nonspeech were created separately for performers and nonperformers. Posterior superior temporal gyrus (STG) was active for both groups, whereas a parietal region and the parahippocampal gyrus were active only for performers. This suggests that the parietal activity accompanies a conscious speech percept, while posterior STG processes speech regardless of awareness.
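The synthesis the abstract describes, replacing each formant with a single amplitude- and frequency-modulated sinusoid, can be sketched roughly as follows. This is a minimal illustration, not the authors' actual stimulus-generation code; the control-point values and the helper name `sinewave_tone` are hypothetical, and a real replica would sum three or four such tones tracking measured formant frequencies and amplitudes.

```python
import numpy as np

def sinewave_tone(freqs_hz, amps, dur_s=0.5, sr=16000):
    """One sinewave-speech tone: a single sinusoid whose frequency and
    amplitude follow a (hypothetical) formant track. The coarse control
    points in freqs_hz and amps are linearly interpolated over dur_s."""
    n = int(dur_s * sr)
    t_ctrl = np.linspace(0.0, dur_s, len(freqs_hz))
    t = np.arange(n) / sr
    f = np.interp(t, t_ctrl, freqs_hz)     # instantaneous frequency (Hz)
    a = np.interp(t, t_ctrl, amps)         # instantaneous amplitude
    phase = 2 * np.pi * np.cumsum(f) / sr  # accumulate phase so the FM glide is smooth
    return a * np.sin(phase)

# Illustrative values only: a tone gliding like a first formant, 300 -> 700 -> 400 Hz.
tone = sinewave_tone([300.0, 700.0, 400.0], [0.2, 1.0, 0.5])
```

The nonspeech controls in the study would then correspond to simple manipulations of such tones, e.g. `tone[::-1]` for the time-reversed mid-frequency tone, which preserves the long-term spectrum while destroying the phonetic organization.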

 
 


© 2010 The MIT Press