Abstract:
We study the approximation of functions by two-layer
feedforward neural networks, focusing on incremental algorithms
that add units one at a time, estimating the parameters of a single
unit at each stage. In contrast to standard algorithms for fixed
architectures, the optimization at each stage is performed over a
small number of parameters, mitigating many of the difficult
numerical problems inherent in high-dimensional non-linear
optimization. We establish upper bounds on the approximation error
incurred when approximating functions from a Sobolev space, thereby
extending previous results, which only provided rates of convergence
for functions in certain convex hulls of functional spaces. By
comparing our results to recently derived lower bounds, we show that
the incremental algorithms are nearly optimal in many cases, while
provably outperforming linear methods in others. Combined with
recent estimation error results for incremental greedy algorithms,
a strong case can be made for this type of approach.
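
To make the incremental idea concrete, here is a minimal sketch (not the paper's procedure) of a relaxed-greedy fit of a two-layer sigmoidal network: each stage optimizes only the new unit's parameters against the current residual, so every optimization is low-dimensional. The function name incremental_fit, the sigmoidal ridge unit, and the random-restart Nelder-Mead optimizer are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def incremental_fit(X, y, n_units=10, n_restarts=5, seed=0):
        """Relaxed-greedy fit of a two-layer sigmoidal network.

        At each stage only the new unit's parameters (w, b, c) are
        optimized against the current residual; previously added
        units stay fixed.  Illustrative sketch only.
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        fhat = np.zeros(n)      # current approximation on the data
        units = []              # fitted (w, b, c) triples

        def ridge(w, b):
            # sigmoidal ridge unit sigma(w . x + b)
            return 1.0 / (1.0 + np.exp(-(X @ w + b)))

        for _ in range(n_units):
            residual = y - fhat

            def stage_loss(params):
                # params packs (w_1, ..., w_d, b, c) for the new unit
                w, b, c = params[:d], params[d], params[d + 1]
                return np.mean((residual - c * ridge(w, b)) ** 2)

            # low-dimensional non-convex problem: a few random restarts
            best = None
            for _ in range(n_restarts):
                x0 = rng.normal(size=d + 2)
                res = minimize(stage_loss, x0, method="Nelder-Mead")
                if best is None or res.fun < best.fun:
                    best = res

            w, b, c = best.x[:d], best.x[d], best.x[d + 1]
            units.append((w, b, c))
            fhat = fhat + c * ridge(w, b)

        return units, fhat

    # Example usage on synthetic data (purely illustrative):
    X = np.random.default_rng(1).uniform(-1, 1, size=(200, 2))
    y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1]
    units, fhat = incremental_fit(X, y, n_units=8)
    print("mean squared residual:", np.mean((y - fhat) ** 2))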