## Neural Computation

We demonstrate sufficient conditions for polynomial learnability of suboptimal linear threshold functions using perceptrons. The central result is as follows. Suppose there exists a vector w_* of *n* weights (including the threshold) with "accuracy" 1 − α, "average error" η, and "balancing separation" σ. That is: with probability 1 − α, w_* correctly classifies an example x; over the examples misclassified by w_*, the expected value of |w_* · x| is η (regardless of the source of the inaccuracy); and over a certain portion of the correctly classified examples, the expected value of |w_* · x| is σ. Then, with probability 1 − δ, the perceptron achieves accuracy at least 1 − [ε + α(1 + η/σ)] after *O*(nε^{−2}σ^{−2} ln(1/δ)) examples.
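To make the setting concrete (this is an illustration of the standard perceptron rule and the stated sample bound, not the paper's proof; the dataset, epoch count, and the omitted constant factor in the bound are assumptions of this sketch), the learner and the order-of-magnitude sample count can be sketched as:

```python
import math

def perceptron_train(examples, n_epochs=50):
    """Classic perceptron rule: on a mistake, add y * x to the weights.

    Each example is (x, y) where x is a list of floats whose last
    component is a constant 1 (so the final weight plays the role of
    the threshold) and y is in {-1, +1}.
    """
    n = len(examples[0][0])
    w = [0.0] * n
    for _ in range(n_epochs):
        mistakes = 0
        for x, y in examples:
            dot = sum(wi * xi for wi, xi in zip(w, x))
            if y * dot <= 0:  # misclassified (or on the boundary)
                mistakes += 1
                for i in range(n):
                    w[i] += y * x[i]
        if mistakes == 0:  # consistent with all examples seen
            break
    return w

def sample_bound(n, eps, sigma, delta):
    """Order-of-magnitude example count from the stated bound
    O(n * eps^-2 * sigma^-2 * ln(1/delta)); the constant factor
    is omitted here, so this is a shape, not a usable number."""
    return math.ceil(n * eps**-2 * sigma**-2 * math.log(1.0 / delta))
```

For instance, on a small linearly separable sample such as `[([2.0, 1.0], 1), ([-1.0, 1.0], -1)]`, `perceptron_train` returns a weight vector that classifies every example with positive margin; `sample_bound` shows how the required number of examples grows as ε or σ shrinks.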