
Neural Computation

Fall 1991, Vol. 3, No. 3, Pages 409-417
(doi: 10.1162/neco.1991.3.3.409)
© 1991 Massachusetts Institute of Technology
Generalization Effects of k-Neighbor Interpolation Training

This paper describes a new training method for continuous-mapping and/or pattern-classification neural networks that performs local sample-density smoothing. A conventional training method uses point-to-point mapping from an input space to an output space. Even though the mapping may be precise at two given training sample points, there is no guarantee of mapping accuracy at points on the line segment connecting them. This paper first presents a theory for formulating line-to-line mapping, called interpolation training, and then extends it to k-nearest-neighbor interpolation. The k-neighbor interpolation training (KNIT) method connects each input training sample to its k neighboring points via k line segments. The method then maps, for each training sample, these k line segments in the input space to straight line segments in the output space that interpolate between training output values. Thus, a web structure formed by connecting input samples is mapped into the same structure in the output space. By smoothing the input/output function, the KNIT method reduces the overlearning problem caused by point-to-point training. Simulation results show that KNIT improves vowel recognition on a small speech database.
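The segment-mapping idea described in the abstract can be sketched as a data-augmentation step: for each training input, connect it to its k nearest neighbors and add interpolated input/output pairs along each segment, with outputs interpolated linearly. This is a minimal illustration under stated assumptions, not the paper's implementation; the function name `knit_augment` and all parameters are hypothetical.

```python
import numpy as np

def knit_augment(X, Y, k=3, points_per_segment=4, rng=None):
    """KNIT-style augmentation sketch (illustrative, not the paper's code):
    connect each input sample to its k nearest neighbors and emit
    linearly interpolated (input, output) pairs along each segment."""
    rng = np.random.default_rng(rng)
    aug_X, aug_Y = [], []
    for i in range(len(X)):
        # Squared distances from sample i to every other sample.
        d2 = np.sum((X - X[i]) ** 2, axis=1)
        d2[i] = np.inf  # exclude the sample itself
        neighbors = np.argsort(d2)[:k]
        for j in neighbors:
            # Sample interpolation coefficients t in (0, 1) along the segment.
            for t in rng.uniform(0.0, 1.0, size=points_per_segment):
                aug_X.append((1 - t) * X[i] + t * X[j])
                aug_Y.append((1 - t) * Y[i] + t * Y[j])
    return np.vstack(aug_X), np.vstack(aug_Y)

# Toy usage: 5 points in 2-D with scalar targets.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
Y = np.array([[0.0], [1.0], [1.0], [2.0], [1.0]])
Xa, Ya = knit_augment(X, Y, k=2, points_per_segment=3, rng=0)
print(Xa.shape, Ya.shape)  # 5 samples x 2 neighbors x 3 points = 30 pairs
```

In practice the augmented pairs would be added to the training set, so the network is penalized for deviating from linear interpolation between nearby samples, which is one way to realize the smoothing effect the abstract attributes to KNIT.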