
Neural Computation

July 1992, Vol. 4, No. 4, Pages 605-618
(doi: 10.1162/neco.1992.4.4.605)
© 1992 Massachusetts Institute of Technology
Four Types of Learning Curves
Abstract

If a machine learns to make decisions from a number of examples, the generalization error ε(t) is defined as the average probability that the machine makes an incorrect decision on a new example after being trained with t examples. The generalization error decreases as t increases, and the curve ε(t) is called a learning curve. The present paper uses the Bayesian approach to show that, under the annealed approximation, learning curves can be classified into four asymptotic types. If the machine is deterministic with noiseless teacher signals, then (1) ε ∼ at^{-1} when the correct machine parameter is unique, and (2) ε ∼ at^{-2} when the set of correct parameters has a finite measure. If the teacher signals are noisy, then (3) ε ∼ at^{-1/2} for a deterministic machine, and (4) ε ∼ c + at^{-1} for a stochastic machine.
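
For reference, the four asymptotic types stated in the abstract can be restated compactly as follows (notation as above; the constants a and c depend on the machine and the teacher):

\begin{align*}
\text{(1) deterministic machine, noiseless teacher, unique correct parameter:} \quad
  & \varepsilon(t) \sim a\,t^{-1} \\
\text{(2) deterministic machine, noiseless teacher, correct parameter set of finite measure:} \quad
  & \varepsilon(t) \sim a\,t^{-2} \\
\text{(3) deterministic machine, noisy teacher:} \quad
  & \varepsilon(t) \sim a\,t^{-1/2} \\
\text{(4) stochastic machine, noisy teacher:} \quad
  & \varepsilon(t) \sim c + a\,t^{-1}
\end{align*}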