Neural Computation

January 1995, Vol. 7, No. 1, Pages 158-172
(doi: 10.1162/neco.1995.7.1.158)
© 1995 Massachusetts Institute of Technology
Learning a Decision Boundary from Stochastic Examples: Incremental Algorithms with and without Queries
Abstract

Even if it is not possible to reproduce a target input-output relation, a learning machine should be able to minimize the probability of making errors. A practical learning algorithm should also be simple enough to work without memorizing example data, if possible. Incremental algorithms such as error backpropagation satisfy this requirement. We propose incremental algorithms that provide fast convergence of the machine parameter θ to its optimal choice θo with respect to the number of examples t. We consider the binary choice model whose target relation has a blurred boundary and a machine whose parameter θ specifies a decision boundary used to predict the output. The question we wish to address here is how fast θ can approach θo, depending upon whether in the learning stage the machine can specify inputs as queries to the target relation, or the inputs are drawn from a certain distribution. If queries are permitted, the machine can achieve the fastest convergence, (θ − θo)^2 ~ O(t^−1). If not, O(t^−1) convergence is generally not attainable. For learning without queries, we showed in a previous paper that the error minimum algorithm exhibits a slow convergence, (θ − θo)^2 ~ O(t^−2/3). We propose here a practical algorithm that provides a rather fast convergence, O(t^−4/5). It is possible to accelerate the convergence further by using more elaborate algorithms. The fastest convergence turned out to be O[(ln t)^2 t^−1]. This scaling is considered optimal among possible algorithms, and is not a consequence of the incremental nature of our algorithm.
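As a rough illustration of the setting described in the abstract, the sketch below simulates a one-dimensional binary choice model with a blurred boundary and compares an incremental, error-driven update of the boundary parameter θ without queries against a query-based update that samples inputs at the current boundary estimate. The target boundary θo, the sigmoidal blur of the target relation, the uniform input distribution, and the decaying step-size schedules are all assumptions made for illustration; this is not the paper's exact algorithm, and the quoted convergence rates are the paper's analytical results, not something this toy simulation establishes.

```python
import numpy as np

rng = np.random.default_rng(0)

THETA_O = 0.3   # "true" optimal boundary theta_o (assumed value for illustration)
BLUR = 0.2      # width of the blurred (stochastic) boundary (assumed)


def target_output(x):
    """Stochastic target: P(y = +1 | x) is a sigmoid centered at THETA_O."""
    p = 1.0 / (1.0 + np.exp(-(x - THETA_O) / BLUR))
    return 1 if rng.random() < p else -1


def learn_without_queries(T, eta0=0.5, alpha=0.8):
    """Incremental, error-driven update from i.i.d. inputs.

    The step size eta0 / t**alpha is a generic stochastic-approximation
    schedule chosen for this sketch, not the schedule analyzed in the paper.
    """
    theta = 0.0
    errors = np.empty(T)
    for t in range(1, T + 1):
        x = rng.uniform(-1.0, 1.0)            # input drawn from a fixed distribution
        y = target_output(x)
        y_hat = 1 if x > theta else -1        # machine's prediction from its boundary
        if y_hat != y:                        # update the boundary only on errors
            theta -= (eta0 / t ** alpha) * y
        errors[t - 1] = (theta - THETA_O) ** 2
    return errors


def learn_with_queries(T, eta0=0.5):
    """Query-based update: ask about the input at the current boundary,
    where an example is most informative about theta_o."""
    theta = 0.0
    errors = np.empty(T)
    for t in range(1, T + 1):
        y = target_output(theta)              # query exactly at the current boundary
        theta -= (eta0 / t) * y               # mean drift vanishes only at theta_o
        errors[t - 1] = (theta - THETA_O) ** 2
    return errors


if __name__ == "__main__":
    T = 100_000
    print("final (theta - theta_o)^2 without queries:", learn_without_queries(T)[-1])
    print("final (theta - theta_o)^2 with queries:   ", learn_with_queries(T)[-1])
```

Intuitively, the query-based learner keeps sampling where examples remain informative, whereas without queries the erroneous (and hence informative) examples become rare as θ approaches θo, which is why convergence slower than O(t^−1) is to be expected in that case.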