Abstract:
which avoids some of the problems associated with recurrent
neural networks. The method of creating a Prediction Fractal
Machine (PFM) is briefly described, and experiments are
presented that demonstrate the suitability of PFMs for language
modeling tasks. PFMs are able to distinguish reliably between
minimal pairs, and their behavior is consistent with the hypothesis
that well-formedness is `graded' rather than absolute. These
results form the basis of a discussion of the PFM's potential to
offer fresh insights into the problem of language acquisition and
processing.