Neural Computation

February 15, 1997, Vol. 9, No. 2, Pages 441-460
(doi: 10.1162/neco.1997.9.2.441)
© 1997 Massachusetts Institute of Technology
Average-Case Learning Curves for Radial Basis Function Networks
The application of statistical physics to the study of the learning curves of feedforward connectionist networks has to date been concerned mostly with perceptron-like networks. Recent work has extended the theory to networks such as committee machines and parity machines, and an important direction for current and future research is the extension of this body of theory to further connectionist networks. In this article, we use this formalism to investigate the learning curves of gaussian radial basis function networks (RBFNs) having fixed basis functions. (These networks have also been called generalized linear regression models.) We address the problem of learning linear and nonlinear, realizable and unrealizable, target rules from noise-free training examples using a stochastic training algorithm. Expressions for the generalization error, defined as the expected error for a network with a given set of parameters, are derived for general gaussian RBFNs, for which all parameters, including centers and spread parameters, are adaptable. Specializing to the case of RBFNs with fixed basis functions (basis functions having parameters chosen without reference to the training examples), we then study the learning curves for these networks in the limit of high temperature.
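The setting the abstract describes — a gaussian RBFN whose centers and spread are fixed without reference to the training data, so that only the output weights are learned — can be sketched numerically. The sketch below is illustrative only: the target rule (a sine), the choice of centers and spread, the least-squares weight fit (standing in for the paper's stochastic training algorithm), and the Monte Carlo estimate of the generalization error are all assumptions for the example, not the paper's derivation, which obtains the generalization error analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed gaussian basis: centers and spread chosen without
# reference to the training examples (values are assumptions).
centers = np.linspace(-1.0, 1.0, 10)  # hypothetical basis centers
spread = 0.3                          # hypothetical common spread

def design_matrix(x):
    """Evaluate the fixed gaussian basis functions at inputs x."""
    # Shape: (n_samples, n_basis)
    return np.exp(-((x[:, None] - centers[None, :]) ** 2)
                  / (2.0 * spread ** 2))

# Noise-free training examples from a realizable target rule
# (assumed here to be sin(pi x) for illustration).
x_train = rng.uniform(-1.0, 1.0, 50)
y_train = np.sin(np.pi * x_train)

# With the basis fixed, only the output weights are adaptable,
# so the network is a generalized linear regression model and
# training reduces to linear least squares.
Phi = design_matrix(x_train)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

# Generalization error for this fixed set of parameters: the
# expected squared error over the input distribution, estimated
# here by Monte Carlo sampling.
x_test = rng.uniform(-1.0, 1.0, 2000)
gen_error = np.mean((design_matrix(x_test) @ w
                     - np.sin(np.pi * x_test)) ** 2)
```

With the basis held fixed, the learning problem is linear in the weights, which is what makes the high-temperature learning-curve analysis for these networks tractable relative to networks with adaptable centers and spreads.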