Abstract:
In this paper we show that on-line algorithms for
classification and regression can be naturally used to obtain
hypotheses with good data-dependent tail bounds on their risk.
Our results are proven without requiring complicated
concentration-of-measure arguments, and they hold for arbitrary
on-line learning algorithms. Furthermore, when applied to
concrete on-line algorithms, our results yield tail bounds that
in many cases are comparable to, or better than, the best known
bounds.