Abstract:
Recently, sample complexity bounds have been derived for
problems involving linear functions, such as neural networks and
support vector machines. In this paper, we extend some theoretical
results in this area by deriving dimension-independent covering
number bounds for regularized linear functions under certain
regularization conditions. We show that such bounds lead to a class
of new methods for training linear classifiers with theoretical
advantages similar to those of the support vector machine. Furthermore,
we also present a theoretical analysis of these new methods from the
asymptotic statistical point of view. This analysis provides a
better description of the large-sample behavior of these
algorithms.