
Neural Computation

November 1, 2002, Vol. 14, No. 11, Pages 2709-2728
(doi: 10.1162/089976602760408035)
© 2002 Massachusetts Institute of Technology
Training a Single Sigmoidal Neuron Is Hard
Abstract

We first present a brief survey of hardness results for training feedforward neural networks. We then complete these results with a proof that even the simplest architecture, a single neuron that applies a sigmoidal activation function σ: ℝ → [α, β] satisfying certain natural axioms (e.g., the standard logistic sigmoid or the saturated-linear function) to the weighted sum of its n inputs, is hard to train. In particular, the problem of finding weights for such a unit that bring the quadratic training error within (β − α)² of its infimum, or the average error (over the training set) within 5(β − α)²/(12n) of its infimum, is NP-hard. Hence, the well-known backpropagation learning algorithm appears not to be efficient even for a single neuron, which has negative consequences for constructive learning.
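To make the optimization problem concrete, here is a minimal sketch, assuming the standard logistic sigmoid (so [α, β] = [0, 1] and β − α = 1): it computes the total and average quadratic training error of a single neuron and runs plain gradient descent, which is backpropagation specialized to one unit. The training data, learning rate, and function names are illustrative and not from the paper; the NP-hardness result says that no efficient algorithm can guarantee approaching the infimum of this error within the stated bounds in general, not that this particular loop fails on this particular data.

import numpy as np

def quadratic_error(w, X, y):
    """Total quadratic training error of a single logistic neuron.

    X : (m, n) array of m training inputs with n features each
    y : (m,)   array of target outputs
    w : (n,)   weight vector for the weighted sum of the n inputs
    """
    sigma = 1.0 / (1.0 + np.exp(-X @ w))   # logistic sigmoid, range (0, 1)
    return np.sum((sigma - y) ** 2)

def average_error(w, X, y):
    """Average quadratic error over the training set."""
    return quadratic_error(w, X, y) / len(y)

# Toy training set (illustrative). For the logistic sigmoid, beta - alpha = 1,
# so the hardness result concerns approximating the infimum of the total error
# within (beta - alpha)^2 = 1, or the average error within 5/(12 n).
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))            # m = 8 samples, n = 3 inputs
y = rng.integers(0, 2, size=8).astype(float)

w = np.zeros(3)
for _ in range(1000):                      # gradient descent on the error;
    s = 1.0 / (1.0 + np.exp(-X @ w))       # grad of sum((s - y)^2) uses
    grad = 2.0 * (X.T @ ((s - y) * s * (1.0 - s)))  # s' = s * (1 - s)
    w -= 0.1 * grad

print(quadratic_error(w, X, y), average_error(w, X, y))

The loop may converge to a good solution on easy data like this; the paper's point is the worst case, where no polynomial-time procedure can certify error within the stated distance of the infimum unless P = NP.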