Abstract: A neural network learns to factor rules from
arbitrary input symbols by abstracting states within the hidden
unit space. More surprisingly, the network can factor rules from
vocabulary even when the grammars differ, as long as they are
structurally similar to one another. In the vocabulary
transfer task, the underlying regular grammar rules are held
constant while the vocabularies are swapped, and a recurrent neural
network is trained to predict the end of grammatical strings. After
learning three distinct vocabularies, the network transfers
successfully to a previously unseen vocabulary, with a relearning
savings of 63% relative to the number of learning trials required
for the first grammar acquired. In the grammar transfer task, the
underlying rules are modified and the vocabularies are also swapped
as in the vocabulary transfer task. After learning the first
grammar, the network successfully transfers to a modified grammar
but fails to transfer to a highly dissimilar grammar. The present
neural network appears to
create abstract representations due to the contingencies found in
the tasks. These results stand in stark contrast to the views of
evolutionary psychologists who insist that learning processes are
"impoverished" or lack the appropriate bias to acquire
systematicities more abstract than the input or output
encoding provided to a neural network (Fodor & Pylyshyn, 1988;
Pinker, 1984; Cosmides & Tooby, 1995; Marcus, 1999).
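The vocabulary transfer manipulation can be illustrated with a minimal sketch (not the grammar or training code used in the study; the transition table and symbol sets below are hypothetical): strings are generated from a fixed regular grammar whose abstract symbols are rendered through interchangeable vocabularies, so the underlying rules stay constant while only the surface forms change.

```python
import random

# Hypothetical transition table for a small regular grammar: each state maps
# an abstract symbol index to a next state; None means the string may end there.
GRAMMAR = {
    0: {0: 1, 1: 2},
    1: {0: 1, 2: None},
    2: {1: 2, 2: None},
}

def generate_string(vocabulary, max_len=12):
    """Emit one grammatical string, rendering abstract symbols through `vocabulary`."""
    state, out = 0, []
    while len(out) < max_len:
        symbol, next_state = random.choice(list(GRAMMAR[state].items()))
        out.append(vocabulary[symbol])
        if next_state is None:
            break
        state = next_state
    return " ".join(out)

# The same transition rules rendered in two different vocabularies:
# only the surface symbols differ, not the grammar.
vocab_a = {0: "M", 1: "T", 2: "V"}
vocab_b = {0: "B", 1: "P", 2: "S"}
print(generate_string(vocab_a))
print(generate_string(vocab_b))
```

A recurrent network trained on strings rendered in one vocabulary and then retrained on the other faces exactly this kind of swap: the prediction task is unchanged at the level of grammar states, which is what makes relearning savings possible.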