Neural Computation

January 1995, Vol. 7, No. 1, Pages 72-85
(doi: 10.1162/neco.1995.7.1.72)
© 1995 Massachusetts Institute of Technology
Similarity Metric Learning for a Variable-Kernel Classifier

Nearest-neighbor interpolation algorithms have many useful properties for applications to learning, but they often exhibit poor generalization. In this paper, it is shown that much better generalization can be obtained by using a variable interpolation kernel in combination with conjugate gradient optimization of the similarity metric and kernel size. The resulting method is called variable-kernel similarity metric (VSM) learning. It has been tested on several standard classification data sets, and on these problems it shows better generalization than backpropagation and most other learning methods. The number of parameters that must be determined through optimization is orders of magnitude smaller than for backpropagation or radial basis function (RBF) networks, which may indicate that the method better captures the essential degrees of variation in learning. Other features of VSM learning that make it relevant to models of biological learning in the brain are also discussed.
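
As a concrete illustration of the kind of classifier the abstract describes, the following is a minimal sketch: a k-nearest-neighbor rule with a Gaussian kernel whose width scales with the distances to the neighbors, plus a diagonal similarity metric and kernel scale tuned by conjugate-gradient minimization of a leave-one-out error. The class name, parameterization, and objective are illustrative assumptions, not the paper's exact formulation.

# A minimal sketch of a variable-kernel similarity-metric (VSM) classifier,
# assuming a diagonal metric and a Gaussian kernel whose width scales with
# the mean distance to the k nearest neighbors. Names and the leave-one-out
# objective are illustrative, not the paper's exact formulation.
import numpy as np
from scipy.optimize import minimize


class VSMClassifier:
    def __init__(self, k=5, kernel_scale=0.5):
        self.k = k                        # neighbors used for interpolation
        self.kernel_scale = kernel_scale  # kernel width relative to neighbor distance

    def fit(self, X, y):
        self.X = np.asarray(X, float)
        self.y = np.asarray(y)
        self.classes_ = np.unique(self.y)
        self.w = np.ones(self.X.shape[1])  # diagonal similarity-metric weights
        return self

    def _proba(self, x, X, y):
        # Distances under the (diagonal) learned metric
        d = np.sqrt(((X - x) ** 2) @ (self.w ** 2))
        idx = np.argsort(d)[: self.k]
        # Variable kernel: width set by the mean distance to the k neighbors
        sigma = self.kernel_scale * d[idx].mean() + 1e-12
        kern = np.exp(-d[idx] ** 2 / (2.0 * sigma ** 2))
        kern /= kern.sum() + 1e-12
        # Kernel-weighted vote over the neighbors (the "interpolation" step)
        return np.array([kern[y[idx] == c].sum() for c in self.classes_])

    def predict(self, X):
        return np.array([self.classes_[np.argmax(self._proba(x, self.X, self.y))]
                         for x in np.asarray(X, float)])

    def optimize_metric(self):
        # Tune metric weights and kernel scale by conjugate-gradient descent on
        # a leave-one-out cross-entropy; a stand-in for the paper's procedure.
        def loo_loss(params):
            self.w, self.kernel_scale = params[:-1], np.exp(params[-1])
            loss = 0.0
            for i in range(len(self.X)):
                p = self._proba(self.X[i], np.delete(self.X, i, 0),
                                np.delete(self.y, i))
                loss -= np.log(p[self.classes_ == self.y[i]][0] + 1e-12)
            return loss / len(self.X)

        x0 = np.concatenate([self.w, [np.log(self.kernel_scale)]])
        res = minimize(loo_loss, x0, method="CG")  # gradients estimated numerically
        self.w, self.kernel_scale = res.x[:-1], np.exp(res.x[-1])
        return self


# Usage (assuming X_train, y_train, X_test are NumPy arrays):
#   clf = VSMClassifier(k=5).fit(X_train, y_train).optimize_metric()
#   y_pred = clf.predict(X_test)

Note that optimizing only the metric weights and kernel scale, rather than a full weight matrix, is what keeps the number of free parameters small relative to backpropagation or RBF networks.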