Learnability in Optimality Theory

Overview

Highlighting the close relationship between linguistic explanation and learnability, Bruce Tesar and Paul Smolensky examine the implications of Optimality Theory (OT) for language learnability. They show how the core principles of OT lead to the learning principle of Constraint Demotion, the basis for a family of algorithms that infer constraint rankings from linguistic forms.
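
To make the idea concrete, here is a minimal Python sketch of the recursive variant of Constraint Demotion. The representation is illustrative rather than drawn from the book: each winner-loser pair records, per constraint, whether that constraint favors the observed winner ('W') or the losing competitor ('L').

```python
def recursive_constraint_demotion(constraints, pairs):
    """Stratify constraints from winner-loser pairs, highest stratum first.

    Each pair maps a constraint name to 'W' (the constraint favors the
    observed winner) or 'L' (it favors the losing competitor); indifferent
    constraints are simply absent from the pair.
    """
    remaining = set(constraints)
    data = list(pairs)
    strata = []
    while remaining:
        # A constraint can be ranked now only if it favors no remaining loser.
        stratum = {c for c in remaining if all(p.get(c) != 'L' for p in data)}
        if not stratum:
            raise ValueError("no consistent ranking exists for these data")
        strata.append(sorted(stratum))
        # Pairs in which some newly ranked constraint favors the winner are
        # now accounted for and impose no further ranking requirements.
        data = [p for p in data if not any(p.get(c) == 'W' for c in stratum)]
        remaining -= stratum
    return strata

# Toy example: the winner keeps a final consonant, the loser deletes it,
# so the faithfulness constraint Max favors the winner and NoCoda the loser.
pairs = [{'Max': 'W', 'NoCoda': 'L'}]
print(recursive_constraint_demotion(['Max', 'NoCoda'], pairs))
# -> [['Max'], ['NoCoda']], i.e. Max dominates NoCoda
```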

Of primary concern to the authors are the ambiguity of the data received by the learner and the resulting interdependence of the core grammar and the structural analysis of overt linguistic forms. The authors argue that iterative approaches to such interdependencies, inspired by work in statistical learning theory, can be successfully adapted to language learning. Both OT and Constraint Demotion play critical roles in this adaptation. The authors support their findings both formally and through simulations. They also illustrate how their approach could be extended to other language learning issues, including subset relations and the learning of phonological underlying forms.
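
The shape of that iterative strategy, in which the learner alternates between assigning structure to ambiguous overt forms under its current grammar and re-ranking constraints from the structures so assigned, can be sketched schematically. The `parse` and `demote` callbacks below are hypothetical placeholders, and the skeleton shows only the alternation, not the authors' full procedure.

```python
def iterative_learning(overt_forms, parse, demote, initial_ranking,
                       max_iters=100):
    """Alternate structure assignment and re-ranking until they agree.

    `parse(form, ranking)` is assumed to return the winner-loser pairs the
    learner extracts from one overt form under the current ranking, and
    `demote(pairs)` to return an updated ranking (e.g. via Constraint
    Demotion); both are placeholder callbacks supplied by the caller.
    """
    ranking = initial_ranking
    for _ in range(max_iters):
        # Step 1: impose structural analyses on the ambiguous overt data
        # using the current grammar hypothesis.
        pairs = [p for form in overt_forms for p in parse(form, ranking)]
        # Step 2: update the grammar from the analyses just assigned.
        new_ranking = demote(pairs)
        if new_ranking == ranking:  # fixed point: grammar and analyses agree
            break
        ranking = new_ranking
    return ranking
```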

Table of Contents

  Acknowledgments
  1. Language Learning
  2. An Overview of Optimality Theory
  3. Constraint Demotion
  4. Overcoming Ambiguity in Overt Forms
  5. Issues in Language Learning
  6. Learnability and Linguistic Theory
  7. Correctness and Data Complexity of Constraint Demotion
  8. Production-Directed Parsing
  Notes
  References
  Index