
Neural Computation

January 2014, Vol. 26, No. 1, Pages 158-184
(doi: 10.1162/NECO_a_00535)
© 2013 Massachusetts Institute of Technology
Learning with Convex Loss and Indefinite Kernels
Abstract

We consider kernel-based regression with general convex loss functions in a regularization scheme. The kernels used in the scheme are not necessarily symmetric, and hence not necessarily positive semidefinite; the ℓ1-norm of the coefficients in the kernel ensembles serves as the regularizer. Our setting in this letter differs substantially from that of classical regularized regression algorithms such as regularized networks and support vector machine regression. Based on an error decomposition into approximation error, hypothesis error, and sample error, we present a detailed mathematical analysis of the scheme and, in particular, of its learning rate. Reweighted empirical process theory is applied to the analysis of the resulting learning algorithms and plays a key role in deriving an explicit learning rate under certain assumptions.
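To make the scheme concrete: with samples (x_1, y_1), ..., (x_m, y_m), a coefficient-based regularized estimator of this type minimizes, over α in R^m, an objective of the form (1/m) Σ_i V(y_i, Σ_j α_j K(x_i, x_j)) + λ ‖α‖_1, where K need not be symmetric or positive semidefinite. The following is a minimal numerical sketch assuming the square loss as the convex loss V and plain proximal gradient descent (ISTA) as the solver; the asymmetric toy kernel, the data, and the choice of solver are illustrative assumptions, not the algorithm analyzed in the letter.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1 (soft thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def l1_kernel_regression(K, y, lam, n_iter=500):
        # Minimize (1/m) * ||K @ alpha - y||^2 + lam * ||alpha||_1
        # by proximal gradient (ISTA). K may be asymmetric / indefinite.
        m = len(y)
        # Lipschitz constant of the smooth part's gradient: (2/m) * sigma_max(K)^2.
        L = (2.0 / m) * np.linalg.norm(K, 2) ** 2
        step = 1.0 / L
        alpha = np.zeros(m)
        for _ in range(n_iter):
            grad = (2.0 / m) * K.T @ (K @ alpha - y)
            alpha = soft_threshold(alpha - step * grad, step * lam)
        return alpha

    # Toy example with an asymmetric kernel (illustrative only):
    # K(s, t) = exp(-(s - t)^2) * (1 + s) is not symmetric in (s, t).
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=40)
    y = np.sin(3 * x) + 0.1 * rng.standard_normal(40)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2) * (1 + x[:, None])
    alpha = l1_kernel_regression(K, y, lam=0.05)
    f_hat = K @ alpha  # fitted values at the sample points

Nothing in the iteration requires K to be symmetric or positive semidefinite, which is the point of the indefinite-kernel setting, and the ℓ1 penalty drives many of the ensemble coefficients exactly to zero.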