
Neural Computation

October 2014, Vol. 26, No. 10, Pages 2350-2378
(doi: 10.1162/NECO_a_00641)
© 2014 Massachusetts Institute of Technology
Learning Rates of lq Coefficient Regularization Learning with Gaussian Kernel
Abstract

Regularization is a well-recognized and powerful strategy for improving the performance of a learning machine, and lq regularization schemes with 0 < q < ∞ are among the most widely used. It is known that different values of q lead to estimators with different properties: for example, l2 regularization yields a smooth estimator, while l1 regularization yields a sparse one. How the generalization capability of lq regularization learning varies with q is therefore worth investigating. In this letter, we study this problem in the framework of statistical learning theory. Our main results show that implementing lq coefficient regularization schemes in the sample-dependent hypothesis space associated with a Gaussian kernel attains the same almost optimal learning rates for all 0 < q < ∞. That is, the upper and lower bounds of the learning rates for lq regularization learning are asymptotically identical for all 0 < q < ∞. Our finding tentatively reveals that, in some modeling contexts, the choice of q might not have a strong impact on the generalization capability. From this perspective, q can be arbitrarily specified, or specified merely by other, non-generalization criteria such as smoothness, computational complexity, or sparsity.
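For concreteness, the scheme the abstract refers to can be written in a standard form (a sketch of the usual formulation, not quoted from this letter; the sample size m, kernel width σ, and regularization parameter λ are assumed notation):

% l_q coefficient regularization over the sample-dependent hypothesis space
% H_{K,z} = { f = \sum_{i=1}^{m} a_i K_\sigma(\cdot, x_i) : a \in \mathbb{R}^m },
% with the Gaussian kernel K_\sigma(x, x') = \exp(-\|x - x'\|^2 / \sigma^2):
f_{z,\lambda,q} = \sum_{i=1}^{m} a_i^{*} K_\sigma(\cdot, x_i),
\qquad
a^{*} \in \arg\min_{a \in \mathbb{R}^m}
\frac{1}{m} \sum_{i=1}^{m}
\Bigl( y_i - \sum_{j=1}^{m} a_j K_\sigma(x_i, x_j) \Bigr)^{2}
+ \lambda \sum_{i=1}^{m} |a_i|^{q}.

Under this formulation, q = 2 recovers a kernel ridge-type smooth estimator and q = 1 a lasso-type sparse one, which is the contrast the abstract draws before stating that the learning rates coincide for all 0 < q < ∞.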