Abstract: We report on a computational model for facial
expression perception, which begins with a layer of V1 complex
cell-like receptive fields and is trained to identify one of six
"basic" emotions signaled in a given image. The model, after
training on a set of facial expression prototypes, generalizes
quite well to previously unseen images. First, without free
parameters, the model simultaneously explains contradictory
evidence for both categorical and continuous perception of
expression observed in Young et al.'s (1996) "Megamix" study. The
model provides a good fit to categorization data, response times,
discriminability, and sensitivity to multiple expressions in morph
images. Second, we find that the model's hidden layer provides a
natural explanation of the origin of the emotion "circumplex"
(Russell, 1980). Multidimensional scaling (MDS) performed on the
model's hidden layer representation recovers the same circumplex
that is derivable from human confusion data for the same stimuli. Finally,
analysis of the trained network and its input representation shows
that the network employs a local, feature-based classification strategy
attends to the visual correlates of the facial muscle movements
most discriminative for expression classification. Our results show
that much of the data on human perception of facial expressions can
be explained by the straightforward process of learning a mapping
from facial features to emotion categories.
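To make the pipeline concrete, the sketch below shows one way the architecture summarized above could be realized: phase-invariant Gabor energy features as a stand-in for V1 complex cells, a small one-hidden-layer network with softmax outputs over the six emotion categories, and MDS applied to the resulting hidden activations. This is a minimal illustration, not the authors' published implementation; the filter parameters, PCA dimensionality, layer sizes, emotion label set, and synthetic stand-in data are all our assumptions.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Assumed label set; the paper's six "basic" emotions.
EMOTIONS = ["happiness", "sadness", "fear", "anger", "surprise", "disgust"]

def complex_cell_features(image):
    """Phase-invariant Gabor energy at a few orientations and spatial
    frequencies -- a crude stand-in for V1 complex cell responses."""
    feats = []
    for theta in np.linspace(0.0, np.pi, 4, endpoint=False):
        for freq in (0.1, 0.2, 0.4):
            re, im = gabor(image, frequency=freq, theta=theta)
            feats.append(np.hypot(re, im).ravel())  # complex-cell energy
    return np.concatenate(feats)

# Synthetic stand-ins for expression images; real inputs would be face photos.
images = rng.random((60, 32, 32))
labels = rng.integers(0, len(EMOTIONS), size=60)

X = np.stack([complex_cell_features(im) for im in images])
X = PCA(n_components=20).fit_transform(X)  # compress the Gabor code
X /= X.std(axis=0)                         # unit-scale each component

# One-hidden-layer softmax network trained by plain batch gradient descent.
n_hidden = 10
W1 = rng.normal(0, 0.1, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, len(EMOTIONS))); b2 = np.zeros(len(EMOTIONS))
Y = np.eye(len(EMOTIONS))[labels]          # one-hot emotion targets
for _ in range(500):
    H = np.tanh(X @ W1 + b1)               # hidden layer representation
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)      # softmax over the six categories
    dL = (P - Y) / len(X)                  # softmax cross-entropy gradient
    dH = (dL @ W2.T) * (1.0 - H**2)        # backprop through tanh
    W2 -= 0.5 * (H.T @ dL); b2 -= 0.5 * dL.sum(axis=0)
    W1 -= 0.5 * (X.T @ dH); b1 -= 0.5 * dH.sum(axis=0)

# MDS on the hidden activations: with real expression data, this 2-D layout
# is where a circumplex-like arrangement of the emotions would be inspected.
H = np.tanh(X @ W1 + b1)
coords = MDS(n_components=2, random_state=0).fit_transform(H)
print(coords.shape)  # (60, 2)
```

With real expression images in place of the random arrays, plotting `coords` colored by emotion label is where a circumplex-like ordering of the categories would, or would not, emerge.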