Introduction
In this short chapter we focus on just two questions in the domain of cortical imaging of language and sensory processing. First, to what extent do cortical circuits that subserve spoken language in hearing people also support spoken language processing, in the form of speech-reading, in people born profoundly deaf? Second, do the cortical circuits for processing a visible language such as sign language involve auditory processing regions, and do these networks differ between deaf and hearing signers?
Before these questions can be answered, it is important to realize just how variable the world of the deaf child can be compared with that of a hearing child. The child born deaf may be deprived of not one but two human faculties. Because most human languages are spoken, the loss of hearing can have major consequences for successful language acquisition. Sensitivity to heard speech has been demonstrated as early as the second trimester of fetal development (Mehler & Christophe, 1994), and spoken language development is heavily dependent on adequate auditory function throughout childhood. If hearing is lost within the first 5 years, for example through an infectious disease such as meningitis, the impact on spoken language development can be immense (Bedford et al., 2001). Furthermore, the great majority of deaf children are born to hearing parents and may therefore be deprived of a critical feature of the ecology of language development: a salient, informative communicative context shared by child and caregivers (Vygotsky, 1962, 1978). However, approximately 5%–10% of the deaf population are born to deaf parents, the majority of whom use a signed language within the home. These deaf children are thus raised with exposure to a visuospatial language that meets all the cognitive, linguistic, and communicative requirements of a spoken human language (Klima & Bellugi, 1979; Sutton-Spence & Woll, 2000). Moreover, this language is not related in any significant way to the spoken language of the host community. The signed language development of Deaf children of Deaf parents (DoD*) follows the characteristic course, in both timing and structure, of spoken language acquisition by hearing children (e.g., Bellugi & Fischer, 1972; Klima & Bellugi, 1979; Liddell, 1980; Petitto et al., 2001; Sacks, 1989).
Before considering the neural systems supporting the processing of signed languages in native users, we will explore an aspect of language processing in deaf people from a hearing home (DoH) that, until recently, was somewhat neglected in studies of the cognitive neuroscience of language: speech-reading. Hearing people make use of visible speech actions; indeed, they cannot avoid doing so (McGurk & MacDonald, 1976). Deaf people, too, are exposed to speech and its visible effects. Nevertheless, skill in speech-reading varies enormously from one deaf child to another. The origin of this variability lies in the reduced input specificity of seen compared with heard speech. This difference can be demonstrated in terms of phonological structure: speech that is seen but not heard can deliver only a small subset of the phonological categories available to the hearing perceiver of speech (see, e.g., Summerfield, 1987). Yet for a profoundly deaf child born into a hearing family, speech-reading may be the only means of access to the language that surrounds her. Although earlier studies suggested otherwise (e.g., Mogford, 1987), careful recent tests of lipreading show that a proportion of people born deaf can become adept at spoken language processing and may be more skilled than the best hearing individuals at following silent, lip-read speech (Bernstein, Demorest, & Tucker, 2000). Our work has explored how cortical circuits for speech-reading become organized in people born profoundly deaf, people for whom a spoken, not a signed, language was the first language available in the home.