Abstract:
GOALS: How are we able to distinguish our own voice from
someone else's? When we speak, we know that we are speaking and we
hear our voice. Yet, if the feedback is altered, are we still able
to recognize our own voice? We sought to investigate this process
of self-monitoring using an event-related fMRI acquisition sequence
that would allow subjects to speak and hear their voice in the
absence of scanner noise. METHOD: BOLD responses were acquired on a
1.5 Tesla GE Signa System. Twelve healthy dextral male subjects
read adjectives aloud and heard their voice, which was either: (A)
undistorted; (B) pitch distorted; (C) replaced by another male
voice; (D) replaced by a distorted male voice. RESULTS: Conditions
that engaged self-monitoring (B, C, D) activated a network that
included the insular, cingulate, temporal and cerebellar cortices.
Specific components were differentially engaged by each condition.
The hippocampus, cingulate gyrus and cerebellum were particularly
activated when subjects heard their own distorted voice. Within the
superior temporal gyrus, the main effect of self vs. non-self
speech revealed regional activations that were spatially distinct.
CONCLUSION: Verbal self-recognition involves a network of areas
implicated in the generation and perception of speech. Modulation
of the auditory cortex in response to self-produced speech is
evident.