Abstract:
We propose a novel strategy for training neural networks
using sequential Monte Carlo algorithms. In particular, we discuss
an efficient hybrid gradient descent/sampling importance resampling
algorithm, which we call hybrid SIR.
This global optimisation approach allows us to learn the
probability distribution of the network weights and outputs in a
sequential framework. It is well suited to applications involving
on-line, nonlinear, non-Gaussian or non-stationary signal
processing. We show that the new algorithm outperforms extended
Kalman filter training on several simulated examples and on a real
application involving the pricing of option contracts traded in
financial markets.
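
To make the sampling importance resampling idea concrete, here is a minimal sketch of a generic SIR update, not the authors' hybrid SIR algorithm: particles representing a single unknown "network weight" are propagated by a random-walk proposal (an assumption for illustration), reweighted by a Gaussian likelihood, and resampled in proportion to their weights. All parameter names and noise levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, weights, y, sigma_prop=0.05, sigma_obs=0.2):
    """One sampling importance resampling (SIR) step for a scalar parameter."""
    # Sampling: propagate particles with a random-walk proposal.
    particles = particles + rng.normal(0.0, sigma_prop, size=particles.shape)
    # Importance weighting: Gaussian likelihood of the new observation.
    w = weights * np.exp(-0.5 * ((y - particles) / sigma_obs) ** 2)
    w /= w.sum()
    # Resampling: replicate high-weight particles, discard low-weight ones.
    idx = rng.choice(particles.size, size=particles.size, p=w)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

# Sequentially estimate a fixed "weight" w_true from noisy observations.
w_true = 0.7
n = 500
particles = rng.normal(0.0, 1.0, size=n)
weights = np.full(n, 1.0 / n)
for _ in range(50):
    y = w_true + rng.normal(0.0, 0.2)
    particles, weights = sir_step(particles, weights, y)

posterior_mean = particles.mean()  # should lie near w_true
```

Because the particles approximate the full posterior over the parameter, quantities such as predictive means and credible intervals come out of the same recursion, which is what makes the approach attractive for on-line, non-Gaussian settings.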