Abstract:
We present Monte-Carlo generalized EM equations for learning
in nonlinear state space models. The difficulty lies in the
Monte-Carlo E-step, which consists of sampling from the posterior
distribution of the hidden variables given the observations. The
new idea presented in this paper is to generate samples from a
Gaussian approximation to the true posterior, from which independent
samples are easy to obtain. The parameters of the Gaussian
approximation are either derived from the extended Kalman filter or
the Fisher Scoring algorithm. If the posterior density is
multimodal, we propose to approximate the posterior by a sum of
Gaussians (mixture of modes approach). We show that sampling from
the approximate posterior densities obtained by the above
algorithms leads to better models than using point estimates for
the hidden states. In our experiment, the Fisher Scoring algorithm
obtained a better approximation of the posterior mode than the
extended Kalman filter. For a multimodal distribution, the mixture
of modes approach gave superior results.
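
The core idea of the Monte-Carlo E-step can be sketched as follows. This is an illustrative example only, not the paper's implementation: the mean `mu` and covariance `Sigma` of the Gaussian approximation are made-up placeholders standing in for quantities that would come from the extended Kalman filter or Fisher scoring, and the sufficient statistic shown is a generic second moment.

```python
import numpy as np

# Sketch of a Monte-Carlo E-step with a Gaussian approximation to the
# posterior over the hidden state trajectory. In the paper, mu and Sigma
# would be produced by the extended Kalman filter or Fisher scoring;
# here they are arbitrary illustrative values.
rng = np.random.default_rng(0)

T = 5                                     # length of hidden trajectory
mu = np.linspace(0.0, 1.0, T)             # approximate posterior mean (assumed)
A = rng.standard_normal((T, T))
Sigma = 0.1 * (A @ A.T) + 0.01 * np.eye(T)  # approximate posterior covariance (assumed)

def sample_posterior(n_samples):
    """Draw independent samples from the Gaussian approximation N(mu, Sigma)."""
    return rng.multivariate_normal(mu, Sigma, size=n_samples)

# Monte-Carlo estimate of an expected sufficient statistic, e.g. E[x x^T],
# averaged over independent samples rather than evaluated at a point estimate.
samples = sample_posterior(4000)
E_xxT_mc = np.einsum("ni,nj->ij", samples, samples) / len(samples)

# A point estimate for the hidden states gives only mu mu^T; the sampled
# version also captures the posterior covariance (mu mu^T + Sigma).
E_xxT_point = np.outer(mu, mu)
```

The gap between `E_xxT_mc` and `E_xxT_point` illustrates the abstract's claim: averaging over samples from the approximate posterior retains the uncertainty about the hidden states that a point estimate discards.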