Abstract:
The nature and architecture of semantic representations were
investigated in a task in which 24 subjects produced the first word
that came to mind in response to a line drawing or a written word. We
reasoned that if semantic representations were unitary and amodal,
performance would not be influenced by the nature of the input
(word vs. picture), whereas if there were multiple, distinct semantic
stores, performance would be influenced by the nature of the input.
The "distributed" semantics account predicts that performance would
be influenced not only by stimulus type but also by the semantic
characteristics of the target. Stimuli included 80 words or
pictures, half of which were animate and half manipulable. The
number of verbs produced as a function of stimulus type was
recorded. The ANOVA included the factors material (picture, word),
animacy (animate, inanimate), and manipulability
(manipulable, non-manipulable). There were main effects of all three
factors. Subjects were significantly more likely to produce a verb
to a picture (18.5% vs. 13.3%; P=.0272), to a manipulable stimulus
(22.5% vs. 9.3%; P<.0001), and to an inanimate stimulus (23.1%
vs. 8.7%; P<.0001). Finally, a manipulability by animacy
interaction was noted (P=.0024); subjects were most likely to
produce a verb to a manipulable, inanimate object (e.g., scissors).
These data suggest that pictures enjoy privileged access to action
and are most consistent with a distributed semantic account.