MIT CogNet: The Brain Sciences Connection, from the MIT Press
Selected Title Details
Computational Learning Theory and Natural Learning Systems, Vol. III
Thomas Petsche
April 1995
ISBN 0262660962
405 pp., 92 illus.

This is the third in a series of edited volumes exploring the evolving landscape of learning systems research, which spans theory and experiment, symbols and signals. It continues the synthesis of the machine learning subdisciplines begun in Volumes I and II. The nineteen contributions cover learning theory, empirical comparisons of learning algorithms, the use of prior knowledge, probabilistic concepts, and the effects of variation over time in concepts and in feedback from the environment.

The goal of this series is to explore the intersection of three historically distinct areas of learning research: computational learning theory, neural networks, and AI machine learning. Although each field has its own conferences, journals, terminology, and research directions, their overlap is growing, as is the effort to bring the fields into closer coordination.

Can these communities learn anything from one another? These volumes present research of interest to practitioners across the subdisciplines of machine learning: work that addresses questions common to the range of learning approaches, compares those approaches on specific problems, and extends the theory to cover more realistic cases.

Table of Contents
 Preface
 Introduction
 Contributors
I Using Prior Knowledge
1 Using Heuristic Search to Expand Knowledge-Based Neural Networks
by David W. Opitz and Jude W. Shavlik
2 High Accuracy Path Tracking by Neural Linearization Techniques
by Stefan Miesbach
3 A Preliminary PAC Analysis of Theory Revision
by Raymond J. Mooney
4 A Knowledge-Based Model of Geometry Learning
by Geoffrey Towell and Richard Lehrer
II Time-Varying Tasks
5 Importance-Based Feature Extraction for Reinforcement Learning
by David J. Finton and Yu Hen Hu
6 A Method for Constructive Learning of Recurrent Neural Networks
by Dong Chen, C. Lee Giles, Gordon Sun, Mark W. Goudreau, Hsing-Hen Chen, and Yee-Chun Lee
7 Recurrent Neural Networks with Time-dependent Inputs and Outputs
by Volkmar Sterzing and Bernd Schürmann
III Probabilistic Concepts
8 Soft Classification, a.k.a. Risk Estimation, via Penalized Log Likelihood and Smoothing Spline Analysis of Variance
by Grace Wahba, Chong Gu, Yuedong Wang and Richard Chappell
9 Learning with Probabilistic Supervision
by Padhraic Smyth
10 Reducing the Small Disjuncts Problem by Learning Probabilistic Concept Descriptions
by Kamal M. Ali and Michael J. Pazzani
IV Theory
11 On the Bayesian "Occam Factors" Argument for Occam's Razor
by David H. Wolpert
12 Learning Finite Automata Using Local Distinguishing Experiments
by Wei-Min Shen
13 PAC-Learnability of Constrained Nonrecursive Logic Programs
by Sašo Džeroski, Stephen Muggleton, and Stuart Russell
14 Analysis of the Blurring Process
by Yizong Cheng and Zhangyong Wan
V Empirical Comparisons
15 Learning Context to Disambiguate Word Sense
by Ellen M. Voorhees, Claudia Leacock and Geoffrey Towell
16 Investigating the Value of a Good Input Representation
by Mark W. Craven and Jude W. Shavlik
17 Improving Model Selection by Dynamic Regularization Methods
by Ferdinand Hergert, William Finnoff and Hans-Georg Zimmermann
18 Cross-Validation and Modal Theories
by Timothy L. Bailey and Charles Elkan
19 An Empirical Investigation of Brute Force to Choose Features, Smoothers and Function Approximators
by Andrew W. Moore, Daniel J. Hill and Michael P. Johnson
 References
 Index
Related Topics
Computational Intelligence

© 2010 The MIT Press