
Neural Computation

May 1993, Vol. 5, No. 3, Pages 371-373
(doi: 10.1162/neco.1993.5.3.371)
© 1993 Massachusetts Institute of Technology
Vapnik-Chervonenkis Dimension Bounds for Two- and Three-Layer Networks
Abstract

We show that the Vapnik-Chervonenkis dimension of the class of functions that can be computed by arbitrary two-layer or some completely connected three-layer threshold networks with real inputs is at least linear in the number of weights in the network. In Valiant's "probably approximately correct" learning framework, this implies that the number of random training examples necessary for learning in these networks is at least linear in the number of weights.
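The step from the VC dimension bound to the sample-size claim follows from the standard lower bound on PAC sample complexity in terms of VC dimension. As a sketch not stated in the abstract itself, with W denoting the number of weights, c an unspecified positive constant, and m the number of training examples required to learn to accuracy epsilon:

% Sketch only: combines the paper's VC-dimension bound with the
% standard PAC sample-complexity lower bound m = \Omega(d/\epsilon);
% c > 0 and the class \mathcal{F} of network functions are assumptions
% for illustration, not notation taken from the paper.
\[
  \mathrm{VCdim}(\mathcal{F}) \;\ge\; c\,W
  \quad\Longrightarrow\quad
  m(\epsilon,\delta) \;=\; \Omega\!\left(\frac{\mathrm{VCdim}(\mathcal{F})}{\epsilon}\right)
  \;=\; \Omega\!\left(\frac{W}{\epsilon}\right).
\]

Thus, for fixed accuracy, the required number of random training examples grows at least linearly with the number of weights in the network.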