Abstract:
We give a unified convergence analysis of ensemble learning
methods including AdaBoost, Logistic Regression, and the
Least-Square-Boost algorithm for regression. These methods have
in common that they iteratively call a base learning algorithm
which returns hypotheses that are then linearly combined. We show
that these methods are related to the
Gauss-Southwell method
known from numerical optimization and state
non-asymptotic
convergence results for all these methods. Our analysis includes
ℓ1-norm regularized cost functions, leading to a clean and general
way to regularize ensemble learning.
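
As a rough illustration of the generic leveraging scheme described above (not the paper's exact procedure), the following Python sketch repeatedly calls a base learning algorithm and adds the returned hypothesis to a linear combination, choosing each hypothesis greedily in the spirit of the Gauss-Southwell method applied to a squared loss. The names fit_stump and leverage, the decision-stump base learner, and the fixed step size are assumptions made purely for illustration.

    import numpy as np

    def fit_stump(X, residual):
        # Hypothetical base learner: pick the single-feature threshold stump
        # whose output is most correlated with the current residual.
        best = None
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                h = np.where(X[:, j] <= thr, 1.0, -1.0)
                score = abs(h @ residual)
                if best is None or score > best[0]:
                    best = (score, j, thr)
        _, j, thr = best
        return lambda Z: np.where(Z[:, j] <= thr, 1.0, -1.0)

    def leverage(X, y, n_rounds=10, step=0.1):
        # Generic leveraging loop: iteratively call the base learner and
        # combine its hypotheses linearly (squared loss, fixed step size).
        F = np.zeros(len(y))            # current ensemble prediction
        ensemble = []
        for _ in range(n_rounds):
            h = fit_stump(X, y - F)     # base learner fit to the residual
            ensemble.append((step, h))
            F += step * h(X)
        return ensemble

In this sketch the greedy choice of the best stump plays the role of selecting the coordinate with the largest gradient component, which is the connection to Gauss-Southwell that the analysis formalizes.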