Abstract:
Boosting is a general method for improving the performance of
any learning algorithm that consistently generates classifiers
which need to perform only slightly better than random guessing. A
recently proposed and very promising boosting algorithm is
AdaBoost. It has been applied with great success to several
benchmark machine learning problems using rather simple learning
algorithms, in particular decision trees. In this paper we use
AdaBoost to improve the performance of neural networks applied to
character recognition tasks. We compare training methods based on
sampling the training set and weighting the cost function. Our
system achieves about 1.4% error on a database of online
handwritten digits from more than 200 writers. Adaptive boosting of
a multi-layer network achieved less than 2% error on the UCI Letters
offline characters data set.
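The reweighting loop at the heart of AdaBoost, which the abstract's "weighting the cost function" variant builds on, can be sketched as follows. This is an illustrative toy implementation using decision stumps as the weak learners, not the paper's neural-network setup; the dataset and function names are invented for the example.

```python
import numpy as np

def train_stump(X, y, w):
    """Exhaustively pick the threshold stump with lowest weighted error."""
    n, d = X.shape
    best = None
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                # Stump predicts `sign` below the threshold, `-sign` above.
                pred = np.where(X[:, j] <= thr, sign, -sign)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, T=10):
    """Discrete AdaBoost: reweight examples, collect weighted stumps."""
    n = len(y)
    w = np.ones(n) / n                      # start with uniform weights
    ensemble = []
    for _ in range(T):
        err, j, thr, sign = train_stump(X, y, w)
        err = max(err, 1e-10)               # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, j] <= thr, sign, -sign)
        # Increase weight on misclassified points, decrease on correct ones.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of all stumps."""
    agg = np.zeros(len(X))
    for alpha, j, thr, sign in ensemble:
        agg += alpha * np.where(X[:, j] <= thr, sign, -sign)
    return np.sign(agg)
```

No single stump can separate an interval in the middle of the line, but a few boosted rounds can: on `y = +1` for `x` in {3, 4} and `-1` elsewhere, the ensemble reaches zero training error within a handful of rounds. The sampling-based variant compared in the paper instead resamples the training set in proportion to `w` rather than weighting the loss directly.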