The Handbook of Multisensory Processes
From Multisensory Integration to Talking Heads and Language Learning
Introduction

In this handbook of multisensory processes, we learn that perceptual and behavioral outcomes are influenced by simultaneous inputs from several senses. In this chapter, we present theoretical and empirical research on speech perception by eye and ear, and address the question of whether speech is a special case of multisensory processing. Our conclusion is that speech perception is indeed an ideal or prototypical situation in which information from the face and voice is seamlessly processed to impose meaning in face-to-face communication.

Scientists are often intrigued by questions whose answers foreground some striking phenomenon. One question about language is whether speech perception is uniquely specialized for processing multisensory information or whether it is simply a prototypical instance of cross-modal processing that occurs in many domains of pattern recognition. Speech is clearly special, at least in the sense that (as of now) only we big-mouthed, bipedal creatures can talk. Although some chimpanzees have demonstrated remarkable speech perception and understanding of spoken language, they seem to have physiological and anatomical constraints that preclude them from assuming bona fide interlocutor status (Lieberman, 2000; Savage-Rumbaugh, Shanker, & Taylor, 1998). An important matter of debate, of course, is whether they also have neurological, cognitive, or linguistic constraints that will prove an impenetrable barrier to language use (Arbib, 2002). We begin with a short description of the idea that speech is special.
