On the Optimality of Incremental Neural Network Algorithms

Ron Meir and Vitaly E. Maiorov

Abstract:
We study the approximation of functions by two-layer feedforward neural networks, focusing on incremental algorithms that add units one at a time, estimating the parameters of a single unit at each stage. In contrast to standard algorithms for fixed architectures, the optimization at each stage is performed over a small number of parameters, mitigating many of the difficult numerical problems inherent in high-dimensional nonlinear optimization. We establish upper bounds on the approximation error incurred when approximating functions from the Sobolev space, thereby extending previous results, which provided rates of convergence only for functions in certain convex hulls of function spaces. By comparing our results to recently derived lower bounds, we show that incremental algorithms are nearly optimal in many cases, and provably outperform linear methods in others. Combined with recent estimation-error results for incremental greedy algorithms, a strong case can be made for this type of approach.
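
As an illustration of the incremental approach described above, here is a minimal sketch in Python. The squared-error loss, tanh units, L-BFGS with random restarts for the per-unit fit, and the pure greedy residual update are assumptions made for the sketch, not the authors' exact procedure.

# Minimal sketch of an incremental (greedy) two-layer network fit.
# Assumptions not taken from the paper: squared-error loss, tanh units,
# L-BFGS with random restarts for the per-unit fit, and a pure greedy
# residual update f_{k+1} = f_k + c_k * tanh(w_k . x + b_k).
import numpy as np
from scipy.optimize import minimize

def fit_incremental(X, y, n_units=20, n_restarts=5, seed=None):
    """Greedily add tanh units, fitting one unit's parameters at a time."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    units = []                       # list of (w, b, c) triples
    residual = y.astype(float).copy()

    def unit_output(theta):
        w, b, c = theta[:d], theta[d], theta[d + 1]
        return c * np.tanh(X @ w + b)

    def loss(theta):                 # squared error of one unit vs. residual
        return np.mean((residual - unit_output(theta)) ** 2)

    for _ in range(n_units):
        # Each stage optimizes only d + 2 parameters (w, b, c), rather than
        # all parameters of the network at once.
        best = None
        for _ in range(n_restarts):
            theta0 = rng.normal(scale=1.0, size=d + 2)
            res = minimize(loss, theta0, method="L-BFGS-B")
            if best is None or res.fun < best.fun:
                best = res
        w, b, c = best.x[:d], best.x[d], best.x[d + 1]
        units.append((w, b, c))
        residual -= c * np.tanh(X @ w + b)   # update residual for next stage
    return units

def predict(units, X):
    return sum(c * np.tanh(X @ w + b) for w, b, c in units)

Relaxed greedy variants instead take a convex combination of the current network and the new unit at each stage; in either case, the key point is that each stage solves a low-dimensional problem over only d + 2 parameters.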

 
 

