Abstract: Observers are able to identify objects of all kinds
under widely varying illumination conditions with
little effort and extraordinary accuracy. This ability is puzzling
in light of neurophysiological evidence that cells in IT are
sensitive to illumination direction. We investigated the human
behavioral aspects of this recognition ability. In four
experiments, observers learned 10 faces under a small subset of
illumination directions. We then tested observers' ability to
recognize these faces under highly variable illuminations. Across
all four experiments, recognition performance was found to
depend on the distance between the tested and trained
illumination directions. This effect, however, was
modulated by the nature of the trained illumination directions.
Specifically, generalizations from frontal illuminations were far
better than generalizations from extreme illuminations. These
results suggest that observers are not simply reconstructing scene
parameters, but rather are using trained images as a basis set for
a high-dimensional illumination space. Such models of illumination
variability exhibit the highest reliability near basis images.
Moreover, the quality of the basis images determines how
accurately the space is estimated. These behavioral results will
be compared to a computational model for recognition over
illumination variability. This model explicitly builds a
high-dimensional illumination cone using a small set of basis
images that span some changes in lighting direction.
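The basis-image idea can be illustrated with a small numerical sketch. This is a hypothetical toy, not the authors' model: it assumes a Lambertian rendering model in which an image is max(0, B·s), where the rows of B are albedo-scaled surface normals and s is a light direction, so images of one surface under all lighting directions lie in a cone spanned by a few basis images. Recognition of a novel illumination is then approximated by reconstruction error against that cone; all array names here are invented for the example.

```python
import numpy as np

# Toy surface: random albedo-scaled normals standing in for a face.
rng = np.random.default_rng(0)
n_pixels = 500
B = rng.normal(size=(n_pixels, 3))

def render(light):
    """Lambertian image for one light direction (attached shadows clipped)."""
    return np.maximum(0.0, B @ light)

# "Trained" basis images: a small set of lighting directions near frontal.
train_lights = [np.array([0.0, 0.0, 1.0]),   # frontal illumination
                np.array([0.3, 0.0, 1.0]),
                np.array([0.0, 0.3, 1.0])]
basis = np.stack([render(l) for l in train_lights], axis=1)

# Novel test illumination close to the trained directions.
test_img = render(np.array([0.15, 0.15, 1.0]))

# Fit the test image with the basis images; clipping the coefficients
# at zero stands in for the cone's nonnegativity constraint.
coef, *_ = np.linalg.lstsq(basis, test_img, rcond=None)
recon = basis @ np.clip(coef, 0.0, None)
err = np.linalg.norm(test_img - recon) / np.linalg.norm(test_img)
print(f"relative reconstruction error: {err:.3f}")
```

The error is small when the test light lies near the trained directions and grows as it moves away, mirroring the generalization gradient reported in the abstract.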