MIT CogNet, The Brain Sciences Connection, from the MIT Press

When Do You Milk a Coat? The Time Course of Acoustic and Semantic Processing in Sentences

 Susan Borsky, Betty Tuller, Christina Langford, Lewis P. Shapiro and Kellie Wolf
  
 

Abstract:

In spoken language, local acoustic information is frequently congruent with more than one phoneme. Nevertheless, the words of a sentence are rarely misunderstood; sentence context biases listeners towards a contextually appropriate interpretation of acoustically ambiguous words (Borsky, Tuller, and Shapiro, 1997; Connine, 1987). The purpose of the present study is to investigate the roles of both acoustic and semantic information in on-line sentence processing. Ten target stimuli forming a GOAT-to-COAT continuum were created from natural speech by manipulating a temporal cue for voicing, voice onset time (VOT), that distinguishes word-initial /g/ from /k/. Stimuli were embedded in biased sentences such as:

Goat-biased: The busy dairyman forgot [1] to milk the (goat/coat) [2] in the [3] drafty barn.

Coat-biased: The careful tailor hurried to [1] press the (goat/coat) [2] in the [3] cluttered attic.
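The 10-step GOAT-to-COAT continuum described above can be sketched as evenly spaced VOT values between a voiced and a voiceless endpoint. The millisecond values below are illustrative assumptions only, not the stimuli used in the study:

```python
# Hypothetical 10-step VOT continuum from /g/ (short VOT) to /k/ (long VOT).
# The endpoint values (20 ms and 70 ms) are assumed for illustration;
# they are not the study's actual stimulus parameters.
def vot_continuum(start_ms=20.0, end_ms=70.0, steps=10):
    step = (end_ms - start_ms) / (steps - 1)
    return [round(start_ms + i * step, 2) for i in range(steps)]

continuum = vot_continuum()
print(continuum)  # step 1 is /g/-like, step 10 is /k/-like
```

Mid-range steps of such a continuum fall near the /g/-/k/ category boundary, which is where acoustic information alone is most ambiguous.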

A cross-modal lexical decision (CMLD) task was used: subjects made a word/non-word decision to a visual letter string that appeared during the uninterrupted auditory presentation of a sentence. None of the visual probes were related to the sentences. Response times (RTs) to the visual probe were used to assess comparative processing load for different combinations of acoustic target stimulus and biased sentence context at positions [1], [2], and [3]. At the end of the experiment, we tested each subject's identification of the isolated target stimuli. As expected, all subjects heard the shortest VOTs as 'goat', the longest VOTs as 'coat', and mid-range VOTs as both 'goat' and 'coat' on separate trials. The CMLD data showed the following effects at each probe position.
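The post-test identification pattern (short VOTs heard as 'goat', long VOTs as 'coat', mid-range VOTs as both on separate trials) is the classic sigmoidal identification function. A minimal sketch, assuming a logistic shape with a hypothetical boundary at 45 ms and an assumed slope:

```python
import math

# Illustrative logistic identification function: probability of a
# 'coat' (/k/) response as a function of VOT. The boundary (45 ms)
# and slope (0.3 per ms) are assumed values, not fitted study data.
def p_coat(vot_ms, boundary_ms=45.0, slope=0.3):
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary_ms)))

# Endpoints yield near-categorical responses; the mid-range value
# sits near 0.5, i.e. it is heard as 'goat' or 'coat' on separate trials.
for vot in (20, 45, 70):
    print(vot, round(p_coat(vot), 3))
```

Under this sketch, only mid-range stimuli are genuinely ambiguous, which is why the RT cost at probe position [2] appears for the mid-range VOT but not for either endpoint.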

[1] Pre-target control: There were no significant effects.

[2] Target offset: RTs were significantly greater for a mid-range VOT stimulus than for either endpoint, regardless of context.

[3] 450 ms after target offset: There was a VOT x Context interaction due to the effect of context at each endpoint; RTs were significantly greater when the sentence was biased toward the opposite endpoint, even though identification of the endpoint stimuli is presumably consistent with VOT, as shown both in the post-test identifications and in previous identification results for the same acoustic stimuli embedded in the same biased sentences (Borsky et al., 1997).

The effect of acoustic information at a potentially ambiguous target word and the development of sentence context effects 450 ms downstream together support an account of auditory sentence comprehension in which the processing of acoustic and semantic information follows different time courses.

Borsky, S., Tuller, B., and Shapiro, L. P. (in press). "How to milk a coat": The effects of semantic and acoustic information on phoneme categorization. The Journal of the Acoustical Society of America.

Connine, C.M. (1987). Constraints on interactive processes in auditory word recognition: The role of sentence context. Journal of Memory and Language, 26, 527-538.

 
 


© 2010 The MIT Press