Abstract: This study investigated the influence of
task-irrelevant facial identity information on speechreading
performance. Participants classified vowel utterances as
representing /u/ or /i/, using video-digitized faces presented in
either static or dynamic mode. Dynamic faces were 2000-ms video
clips synchronized such that the first frame showing visible mouth
opening occurred at 1000 ms; reaction times (RTs) were measured
from that frame. Static faces consisted of a single video frame
from these clips showing the utterance at its apex; they were also
presented for 2000 ms, and RTs were measured from stimulus onset.
Facial identity was correlated with, held constant, or orthogonal
to the task-relevant speech information. RTs were predicted to
increase across these conditions to the extent that facial speech
could not be processed independently of identity. In Experiment 1, facial speech
classifications were influenced by task-irrelevant identity
variations. This effect was independent of presentation mode, even
though RTs were longer for static faces than for video clips. In
Experiment 2, which used dynamic video material, observers
classified visual speech slightly faster for personally familiar than
for unfamiliar faces. These results confirm recent findings that
suggested an influence of facial identity on facial speech
analysis, and extend them to dynamic stimuli. They may indicate
that the perception of different types of social signals in the
face is not as independent as previously thought.