As with Volume I, this second volume represents a synthesis of issues
in three historically distinct areas of learning research:
computational learning theory, neural network research, and symbolic
machine learning. While the first volume provided a forum for building
a science of computational learning across fields, this volume
attempts to define plausible areas of joint research: the
contributions are concerned with finding constraints for theory while
at the same time interpreting theoretical results in the context of
experiments with actual learning systems. Subsequent volumes will
focus on areas identified as research opportunities.
Computational learning theory, neural networks, and AI machine
learning appear to be disparate fields; in fact, they share the same
goal: to build a machine or program that can learn from its
environment. Accordingly, many of the papers in this volume deal with
the problem of learning from examples. In particular, they are
intended to encourage discussion between those trying to build
learning algorithms and those trying to analyze them; the algorithms
addressed by learning-theoretic analyses, for instance, are quite
different from those used by neural network or machine-learning
researchers.
The first section provides theoretical explanations for the learning
systems addressed; the second focuses on issues in model selection
and inductive bias; the third presents new learning algorithms; the
fourth explores the dynamics of learning in feedforward neural
networks; and the final section turns to applications of learning
algorithms.