Abstract:
A common problem facing cognitive neuroscientists is
determining the underlying basis for the internal representations
that govern cognition. We consider an unsupervised, generative
framework others have used to understand the representations used
in visual cortex (Olshausen & Field, 1996) and to discover the
underlying structure in hierarchical visual domains (Lewicki &
Sejnowski, 1997). We applied the Lewicki and Sejnowski approach to
learn the underlying structure present in feature-based letters,
with and without context and/or sparsity constraints. Context is
added information that provides hints about which collections of
features constitute letters, while sparsity encourages a network
to use relatively few units to represent any
input pattern. Analyses of the networks' internal representations
show that (1) without either constraint, the networks detect
specific higher-order structure relatively poorly, (2) context
alone slightly improves the developed internal representations,
(3) sparsity alone results in more specific, yet somewhat
redundant, representations (in line with Olshausen & Field), and
(4) a combination of context and sparsity dramatically improves
the learned
representations. Thus, relatively specific internal representations
can be developed by a system using sparse encoding alone, but a
system also incorporating context will develop stronger internal
representations. Feedback connections in the brain may provide
context information to relatively low-level visual areas, thereby
aiding their ability to discover structure in their inputs.
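
To make the sparsity constraint concrete, the sketch below shows
the kind of cost function this framework minimizes: reconstruction
error plus a penalty on active coefficients, in the spirit of
Olshausen & Field (1996). It is an illustration only; the names
(Phi, a, lam) and the L1-style penalty are assumptions, and
Olshausen and Field themselves used a smooth sparseness function
rather than an absolute-value penalty.

    import numpy as np

    def sparse_coding_cost(x, Phi, a, lam=0.1):
        # Reconstruction error: how well the basis functions
        # (columns of Phi), weighted by coefficients a,
        # reproduce the input pattern x.
        recon_err = np.sum((x - Phi @ a) ** 2)
        # Sparsity penalty: pressures the model to explain x
        # with relatively few active coefficients.
        penalty = lam * np.sum(np.abs(a))
        return recon_err + penalty

    # Hypothetical usage: a 16-dimensional input pattern encoded
    # with an overcomplete basis of 32 units.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(16)
    Phi = rng.standard_normal((16, 32))
    a = rng.standard_normal(32)
    print(sparse_coding_cost(x, Phi, a))

Minimizing this cost over both the coefficients and the basis
drives a network to represent each input with relatively few
active units, which is the sense of "sparsity" used above.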