Abstract: Whether phonetic features are realized primarily by
auditory or speech-specific mechanisms is controversial.
Psychophysical studies have demonstrated independence of speech and
nonspeech percepts (Xu, 1997). In vivo human studies are rare, so
physiological data are sparse or extrapolated from primate studies.
We used fMRI to investigate perception of speech and
nonspeech varying in acoustic (nonspeech) or acoustic and phonetic
(speech) complexity. Twelve right-handed volunteers listened to
blocked stimuli consisting of tones, chords, chord progressions,
vowels, CV and CVC nonword stimuli matched for duration and
loudness. Each subject heard 480 trials of each stimulus type,
interleaved with scanner gradient noise. Multiple
regression and ANOVA compared activation to nonspeech and speech
stimuli and revealed regions for which activity correlated with
acoustic or phonetic complexity. Regions sensitive to phonetic (and
acoustic) complexity (V-CV-CVC) included left posterior STG (BA 22),
left MTG (BA 21), and bilateral STG (BA 42) but not Heschl's gyrus.
Regions sensitive to nonspeech acoustic complexity included
bilateral Heschl's gyrus (BA 41) and surround (BA 42, anterior BA
22). The speech-by-complexity interaction was significant for left
posterior STG and left MTG (greater for speech) and right anterior
STG (greater for nonspeech). Time courses revealed a preference for
the phonetic stimuli in left posterior STG and left MTG. These findings
support the existence of a neural module that uses population coding
at levels higher than A1 to extract phonetic features.
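
For readers who want to see the analysis logic concretely, the sketch
below (Python/NumPy) illustrates a voxelwise multiple regression of the
kind named in the abstract: a design with parametric regressors for
acoustic and phonetic complexity, fit by ordinary least squares, with a
t statistic for the complexity effect. It is a minimal illustration
under assumed names and values, not the authors' actual pipeline.

import numpy as np

# Minimal sketch, not the authors' pipeline: a voxelwise multiple
# regression testing whether blocked activity scales with stimulus
# complexity. All names and values here are hypothetical.
rng = np.random.default_rng(0)
n_blocks = 60
# Hypothetical parametric regressors: acoustic complexity
# (tone=1, chord=2, progression=3) and phonetic complexity
# (V=1, CV=2, CVC=3), one value per stimulus block.
acoustic = rng.integers(1, 4, n_blocks).astype(float)
phonetic = rng.integers(1, 4, n_blocks).astype(float)
# Simulated BOLD signal for one voxel, driven here by phonetic complexity.
y = 0.5 * phonetic + rng.normal(0.0, 1.0, n_blocks)
# Design matrix: intercept plus the two complexity regressors.
X = np.column_stack([np.ones(n_blocks), acoustic, phonetic])
# Ordinary least squares estimate of the regression weights.
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
# t statistic for the phonetic regressor via the contrast c = [0, 0, 1].
resid = y - X @ beta
dof = n_blocks - X.shape[1]
sigma2 = resid @ resid / dof
c = np.array([0.0, 0.0, 1.0])
se = np.sqrt(sigma2 * (c @ np.linalg.inv(X.T @ X) @ c))
t_phonetic = (c @ beta) / se
print(f"beta_phonetic={beta[2]:.3f}, t={t_phonetic:.2f}, dof={dof}")

In a whole-brain analysis this fit would be repeated at every voxel,
and the speech-by-complexity interaction reported above would add an
interaction regressor to the same design.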