
Probabilistic Learning Algorithms and Optimality Theory

Linguistic Inquiry, Spring 2002, Vol. 33, No. 2, pp. 225-244
doi: 10.1162/002438902317406704
© 2002 Massachusetts Institute of Technology
Abstract

This article provides a critical assessment of the Gradual Learning Algorithm (GLA) for probabilistic optimality-theoretic (OT) grammars proposed by Boersma and Hayes (2001). We discuss the limitations of a standard algorithm for OT learning and outline how the GLA attempts to overcome them. We then point out a number of serious shortcomings of the GLA: (a) A methodological problem is that the GLA has not been tested on unseen data, which is standard practice in computational language learning. (b) We provide counterexamples: attested data sets that the GLA fails to learn. (c) Essential algorithmic properties of the GLA (correctness and convergence) have not been proven formally. (d) By modeling frequency distributions in the grammar, the GLA conflates the notions of competence and performance. This leads to serious conceptual problems, as OT crucially relies on the competence/performance distinction.
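
Since the critique presupposes familiarity with how the GLA adjusts constraint rankings, the following is a minimal Python sketch of one error-driven update step in the style of Boersma and Hayes (2001): constraint ranking values are perturbed with Gaussian noise at evaluation time, and on a mismatch between the learner's output and the attested form, constraints are promoted or demoted by a small plasticity value. The function name, tableau representation, and default parameter values are illustrative assumptions, not the authors' code.

```python
import random

def gla_update(ranking, tableau, observed, sigma=2.0, plasticity=0.1):
    """One error-driven update step of a GLA-style learner (illustrative sketch).

    ranking:  dict constraint -> current ranking value
    tableau:  dict candidate -> dict constraint -> violation count
    observed: the candidate attested in the learning datum
    """
    # Evaluation time: perturb each ranking value with Gaussian noise,
    # then rank constraints by their noisy values (stochastic OT).
    noisy = {c: r + random.gauss(0.0, sigma) for c, r in ranking.items()}
    order = sorted(ranking, key=lambda c: noisy[c], reverse=True)

    # Standard OT evaluation: the winner has the lexicographically
    # smallest violation profile under the (noisy) constraint order.
    def profile(cand):
        return tuple(tableau[cand][c] for c in order)
    learner = min(tableau, key=profile)

    # Error-driven learning: adjust rankings only when the learner's
    # output differs from the attested form.
    if learner != observed:
        for c in ranking:
            if tableau[observed][c] > tableau[learner][c]:
                ranking[c] -= plasticity  # demote: c favors the erroneous form
            elif tableau[observed][c] < tableau[learner][c]:
                ranking[c] += plasticity  # promote: c favors the attested form
    return ranking

# Hypothetical toy example: /in/ surfaces as [in], pitting a markedness
# constraint (*CODA) against faithfulness (MAX).
tableau = {
    "in": {"*CODA": 1, "MAX": 0},
    "i":  {"*CODA": 0, "MAX": 1},
}
ranking = {"*CODA": 100.0, "MAX": 100.0}
for _ in range(1000):
    gla_update(ranking, tableau, observed="in")
print(ranking)  # MAX should drift above *CODA
```

Because evaluation is noisy, the learned ranking values induce a probability distribution over output forms rather than a single winner; it is exactly this move of encoding output frequencies in the grammar that shortcoming (d) above targets.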