
Neural Computation

March 1995, Vol. 7, No. 2, Pages 370-379
(doi: 10.1162/neco.1995.7.2.370)
© 1995 Massachusetts Institute of Technology
Learning Linear Threshold Approximations Using Perceptrons

We demonstrate sufficient conditions for polynomial learnability of suboptimal linear threshold functions using perceptrons. The central result is as follows. Suppose there exists a vector w* of n weights (including the threshold) with “accuracy” 1 − α, “average error” η, and “balancing separation” σ; that is, with probability 1 − α, w* correctly classifies an example x; over examples incorrectly classified by w*, the expected value of |w* · x| is η (the source of inaccuracy does not matter); and over a certain portion of correctly classified examples, the expected value of |w* · x| is σ. Then, with probability 1 − δ, the perceptron achieves accuracy at least 1 − [∊ + α(1 + η/σ)] after O[n∊⁻²σ⁻²(ln 1/δ)] examples.
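To make the object of the analysis concrete, the following is a minimal sketch of the classical perceptron mistake-driven update rule that the result above bounds. The dataset, dimensions, and training loop are illustrative assumptions, not from the paper; the threshold is folded into the weight vector by appending a constant-1 coordinate, matching the abstract's "n weights (including the threshold)".

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labels from a fixed linear threshold function (hypothetical target).
w_true = np.array([1.0, -2.0, 0.5])      # last weight acts as the threshold
X = rng.uniform(-1, 1, size=(200, 2))
X = np.hstack([X, np.ones((200, 1))])    # constant 1 folds threshold into w
y = np.sign(X @ w_true)

def perceptron(X, y, epochs=50):
    """Classical perceptron: on each mistake, add y_i * x_i to the weights."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for x_i, y_i in zip(X, y):
            if y_i * (x_i @ w) <= 0:     # misclassified (or on the boundary)
                w += y_i * x_i
                mistakes += 1
        if mistakes == 0:                # converged on the training sample
            break
    return w

w = perceptron(X, y)
accuracy = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {accuracy:.2f}")
```

In the paper's terms, the quantities η and σ are expectations of |w* · x| over misclassified and (a portion of) correctly classified examples; the sample bound above controls how many such mistake-driven updates suffice.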