
Neural Computation

February 2015, Vol. 27, No. 2, Pages 365-387
(doi: 10.1162/NECO_a_00697)
© 2015 Massachusetts Institute of Technology. Published under a Creative Commons Attribution 3.0 Unported (CC BY 3.0) license.
Mismatched Training and Test Distributions Can Outperform Matched Ones
Abstract

In learning theory, the training and test sets are assumed to be drawn from the same probability distribution. This assumption is also adopted in practice, where matching the training and test distributions is considered desirable. Contrary to conventional wisdom, we show that mismatched training and test distributions in supervised learning can in fact outperform matched distributions in terms of the bottom line, out-of-sample performance, independent of the target function in question. This surprising result has theoretical and algorithmic ramifications, which we discuss.
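The effect is easy to reproduce in a toy setting. The sketch below is a minimal illustration, not taken from the paper: the linear target, the Gaussian distributions, the sample sizes, and the noise level are all assumptions chosen for simplicity. It fits ordinary least squares to noisy labels and measures error under a fixed test distribution; drawing the training inputs from a wider (mismatched) distribution reduces the variance of the fitted coefficients, so the mismatched training set typically achieves lower out-of-sample error than the matched one.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(train_std, n_train=20, n_test=10_000, noise=0.5):
    """Fit OLS on noisy data from a linear target; return test-set MSE."""
    f = lambda x: 1.5 * x - 0.3                        # illustrative target
    x_tr = rng.normal(0.0, train_std, n_train)         # training inputs
    y_tr = f(x_tr) + rng.normal(0.0, noise, n_train)   # noisy labels
    # Ordinary least squares fit of a line y = w[0]*x + w[1].
    A = np.column_stack([x_tr, np.ones(n_train)])
    w, _, _, _ = np.linalg.lstsq(A, y_tr, rcond=None)
    # Out-of-sample error is always measured under the TEST distribution N(0, 1).
    x_te = rng.normal(0.0, 1.0, n_test)
    return np.mean((w[0] * x_te + w[1] - f(x_te)) ** 2)

# Matched: training distribution equals the test distribution N(0, 1).
matched = np.mean([trial(train_std=1.0) for _ in range(2000)])
# Mismatched: wider training distribution N(0, 3^2), same test distribution.
mismatched = np.mean([trial(train_std=3.0) for _ in range(2000)])

print(f"matched    E_out: {matched:.4f}")
print(f"mismatched E_out: {mismatched:.4f}")  # typically lower: spread-out inputs cut variance
```

In this simplified setting the gain comes purely from variance reduction, in the spirit of optimal experimental design; the paper's claim is broader, holding independent of the target function in question.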