
Large-Margin Classification in Infinite Neural Networks

Neural Computation, October 2010, Vol. 22, No. 10, Pages 2678-2697
(doi: 10.1162/NECO_a_00018)

The author of this article expressly waives copyright and dedicates this article to the public domain.

We introduce a new family of positive-definite kernels for large-margin classification in support vector machines (SVMs). These kernels mimic the computation in large neural networks with one layer of hidden units. We also show how to derive new kernels by recursive composition; these may be viewed as mapping their inputs through a series of nonlinear feature spaces, mimicking the computation in deep networks with multiple hidden layers. We evaluate SVMs with these kernels on problems designed to illustrate the advantages of deep architectures. Compared to previous benchmarks, we find that on some problems these SVMs yield state-of-the-art results, beating not only other SVMs but also deep belief nets.
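The kernel family the article develops is the arc-cosine family: a degree-n kernel with an angular part J_n(θ), where the one-layer kernel corresponds to an infinite hidden layer of threshold (n = 0) or rectified-linear (n = 1) units, and the deep variant is obtained by feeding the kernel back through the same angular formula. A minimal NumPy sketch of that construction, restricted to degrees 0 and 1 (the function names and the two-function decomposition are ours, not the article's):

```python
import numpy as np

def arc_cosine_kernel(x, y, degree=1):
    """One-layer arc-cosine kernel of degree n:
    k_n(x, y) = (1/pi) * ||x||^n * ||y||^n * J_n(theta),
    where theta is the angle between x and y."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)  # clip guards arccos rounding
    theta = np.arccos(cos_t)
    if degree == 0:
        # J_0(theta) = pi - theta  (infinite layer of threshold units)
        return 1.0 - theta / np.pi
    # J_1(theta) = sin(theta) + (pi - theta) * cos(theta)  (ReLU units)
    return (nx * ny / np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

def deep_arc_cosine(x, y, layers=2, degree=1):
    """Recursive composition: the layer-(l+1) kernel applies the same
    angular formula to the layer-l kernel values, mimicking a deep
    network with `layers` hidden layers."""
    kxy = arc_cosine_kernel(x, y, degree)
    kxx = arc_cosine_kernel(x, x, degree)
    kyy = arc_cosine_kernel(y, y, degree)
    for _ in range(layers - 1):
        cos_t = np.clip(kxy / np.sqrt(kxx * kyy), -1.0, 1.0)
        theta = np.arccos(cos_t)
        if degree == 0:
            # k(x, x) = 1 at every layer for n = 0
            kxy, kxx, kyy = 1.0 - theta / np.pi, 1.0, 1.0
        else:
            kxy = (np.sqrt(kxx * kyy) / np.pi) * (
                np.sin(theta) + (np.pi - theta) * np.cos(theta))
            # for n = 1, theta(x, x) = 0 and J_1(0) = pi, so kxx, kyy persist
    return kxy
```

Either function can be used as a precomputed or callable kernel in a standard SVM solver; the recursion adds no new hyperparameters beyond the depth and the degree.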