Abstract: The mechanisms underlying semantic processing can
be made tractable by considering the information processing
capacities of neural systems. Hopfield (1982, 1984) demonstrated
that the dynamical behavior of a system of neurons has emergent
computational properties that can provide a general account of
pattern recognition and classification in a nervous system. More
recently, language processing has been described in these terms.
For example, Elman (1991) and Tabor et al. (1997) have
demonstrated how syntactic parsing can be described in terms of the
attractor dynamics of systems of interconnected neurons. By
hypothesis, semantic processing, and language comprehension in
general, can also be described in this manner: linguistic
information drives a dynamical system of neurons into attractor
basins, and these attractors correspond to "interpretations" of
the input sequences. To investigate this hypothesis,
a continuous-time recurrent neural network (CTRNN) is trained on a
50-million-word corpus of natural language. The internal
representations and dynamics of the network are examined. Results
illustrate that the sequential input is coded in a metric space
organized by the similarity principle and that words and sentences
with similar contexts are classified as similar. Generalization
beyond the training input occurs as a natural consequence of this
representational format. Additionally, this representational format
can account for the context effects and knowledge effects commonly
seen in semantic interpretation.
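
As background on the model class named above: a CTRNN is conventionally governed by coupled first-order differential equations of the following form (the standard formulation descending from Hopfield, 1984; the abstract does not specify this study's exact parameterization, so the notation below is offered only as a sketch of the general technique):

$$\tau_i \frac{dy_i}{dt} = -y_i + \sum_{j=1}^{N} w_{ji}\,\sigma(y_j + \theta_j) + I_i$$

Here $y_i$ is the state of neuron $i$, $\tau_i$ its time constant, $w_{ji}$ the weight of the connection from neuron $j$ to neuron $i$, $\theta_j$ a bias term, $\sigma$ a sigmoidal activation function, and $I_i$ the external (here, linguistic) input. The attractor basins of such a system are the candidate "interpretations" to which the abstract refers.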