Neural Computation

June 1, 2000, Vol. 12, No. 6, Pages 1293-1301
(doi: 10.1162/089976600300015367)
© 2000 Massachusetts Institute of Technology
The VC Dimension for Mixtures of Binary Classifiers
Abstract

The mixtures-of-experts (ME) methodology provides a tool for classification in which experts, either logistic regression models or Bernoulli models, are mixed according to a set of local weights. We show that the Vapnik-Chervonenkis (VC) dimension of the ME architecture is bounded below by the number of experts m and bounded above by O(m^4 s^2), where s is the dimension of the input. For mixtures of Bernoulli experts with a scalar input, we show that the lower bound m is attained, in which case we obtain the exact result that the VC dimension equals the number of experts.
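To make the architecture concrete, here is a minimal sketch of an ME classifier with logistic-regression experts: a softmax gate produces local mixing weights over the m experts, and the prediction is the weighted mixture of the experts' Bernoulli probabilities. All names and the linear-softmax gating form are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def me_predict(x, gate_w, expert_w):
    """Illustrative mixture-of-experts prediction P(y=1 | x).

    x        : (s,)   input vector
    gate_w   : (m, s) gating weights (assumed linear-softmax gate)
    expert_w : (m, s) logistic-regression weights, one row per expert
    """
    # Local mixing weights: softmax over gate scores (stabilized).
    scores = gate_w @ x
    g = np.exp(scores - scores.max())
    g /= g.sum()                                   # nonnegative, sums to 1
    # Each expert's Bernoulli probability via the logistic function.
    p = 1.0 / (1.0 + np.exp(-(expert_w @ x)))
    # Mixture prediction: convex combination of expert probabilities.
    return float(g @ p)

# Example: m = 3 experts, input dimension s = 2.
rng = np.random.default_rng(0)
prob = me_predict(rng.normal(size=2),
                  rng.normal(size=(3, 2)),
                  rng.normal(size=(3, 2)))
```

Because the output is a convex combination of probabilities, it always lies in [0, 1]; thresholding it at 1/2 yields the binary classifier whose VC dimension the paper bounds.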