Abstract:
Several effective methods for improving the performance of a
single learning algorithm have been developed recently. The general
approach is to create a set of learned models by repeatedly
applying the algorithm to different versions of the training data,
and then combine the learned models' predictions according to a
prescribed voting scheme. Little work has been done on combining
the predictions of a collection of models generated by many
learning algorithms having different representations and/or search
strategies. This paper describes a method that uses the strategies
of stacking and correspondence analysis to model the relationship
between the learning examples and the way in which they are
classified by a collection of learned models. A nearest neighbor
method is then applied within the resulting representation to
classify previously unseen examples. The new algorithm consistently
performs as well as or better than other combining techniques on a
suite of data sets.
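
The sketch below is only an illustration of the general pipeline the abstract describes, not the paper's exact procedure: it stacks the one-hot prediction profiles of several scikit-learn base learners, reduces them with a truncated SVD standing in for a full correspondence analysis (which would additionally apply chi-square scaling), and classifies unseen examples by nearest neighbor in the reduced space. The dataset, base models, and number of components are illustrative assumptions.

```python
# Minimal sketch: stacking + (simplified) correspondence analysis + nearest neighbor.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
n_classes = len(np.unique(y))

# Base learners with different representations / search strategies.
base_models = [DecisionTreeClassifier(random_state=0),
               GaussianNB(),
               LogisticRegression(max_iter=1000)]
for m in base_models:
    m.fit(X_tr, y_tr)

def indicator(X):
    # Each example is described by the labels the base models assign
    # to it (stacking); one-hot encode each model's prediction and
    # concatenate into an indicator matrix, as correspondence
    # analysis expects.
    preds = np.column_stack([m.predict(X) for m in base_models])
    return np.hstack([np.eye(n_classes)[col] for col in preds.T])

Z_tr = indicator(X_tr)
Z_te = indicator(X_te)

# Simplified stand-in for correspondence analysis: truncated SVD of
# the indicator matrix of predictions.
ca = TruncatedSVD(n_components=2, random_state=0)
R_tr = ca.fit_transform(Z_tr)
R_te = ca.transform(Z_te)

# Nearest neighbor in the reduced representation classifies new examples.
knn = KNeighborsClassifier(n_neighbors=1).fit(R_tr, y_tr)
print("combined accuracy: %.3f" % knn.score(R_te, y_te))
```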