
Neural Computation

Spring 1991, Vol. 3, No. 1, Pages 67-78
(doi: 10.1162/neco.1991.3.1.67)
© 1991 Massachusetts Institute of Technology
A Tree-Structured Algorithm for Reducing Computation in Networks with Separable Basis Functions

I describe a new algorithm for approximating continuous functions in high-dimensional input spaces. The algorithm builds a tree-structured network of variable size, which is determined both by the distribution of the input data and by the function to be approximated. Unlike other tree-structured algorithms, learning occurs through completely local mechanisms, and the weights and structure are modified incrementally as data arrives. Efficient computation in the tree structure takes advantage of the potential for low-order dependencies between the output and the individual dimensions of the input. This algorithm is related to the ideas behind k-d trees (Bentley 1975), CART (Breiman et al. 1984), and MARS (Friedman 1988). I present an example in which the algorithm predicts future values of the Mackey-Glass differential delay equation.
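To illustrate the notion of a separable basis function mentioned in the abstract, the sketch below evaluates a Gaussian basis function with diagonal widths as a product of one-dimensional factors, one per input dimension. This is a minimal, hypothetical illustration of separability only, not the paper's tree-structured algorithm; the function names, the unit parameters, and the network-output form f(x) = Σᵢ wᵢ Πd φᵢ,d(x_d) are assumptions made for the example.

```python
import math

def gaussian_factor(x, center, width):
    """One-dimensional Gaussian factor exp(-(x - c)^2 / (2 w^2))."""
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def separable_basis(x, centers, widths):
    """A separable basis function: a product of 1-D factors, one per dimension."""
    value = 1.0
    for xd, cd, wd in zip(x, centers, widths):
        value *= gaussian_factor(xd, cd, wd)
    return value

def network_output(x, units):
    """Weighted sum of separable basis functions: f(x) = sum_i w_i * prod_d phi_id(x_d)."""
    return sum(w * separable_basis(x, c, s) for w, c, s in units)

# Example: two units in a 2-D input space (weights, centers, and widths
# are hypothetical values chosen for illustration).
units = [
    (1.0, [0.0, 0.0], [1.0, 1.0]),
    (0.5, [1.0, -1.0], [0.5, 0.5]),
]
print(network_output([0.0, 0.0], units))
```

Because each basis function factors across dimensions, a tree can prune or share the per-dimension factors when the output depends only weakly on some inputs, which is the source of the computational savings the abstract refers to.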