Abstract:
Over the last few years, particle filters have been applied with
great success to a variety of state estimation problems. We
present a statistical approach to increasing the efficiency of
particle filters by adapting the size of sample sets on the fly.
The key idea of the KLD-sampling method is to bound the
approximation error introduced by the sample-based representation
of the particle filter. The name KLD-sampling derives from the
fact that we measure the approximation error by the
Kullback-Leibler distance. Our adaptation approach chooses a
small number of
samples if the density is focused on a small part of the state
space, and it chooses a large number of samples if the state
uncertainty is high. Both the implementation and computational
overhead of this approach are small. Extensive experiments using
mobile robot localization as a test application show that our
approach yields drastic improvements over particle filters with
fixed sample set sizes and over a previously introduced
adaptation technique.
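As a rough illustration of how a bound on the Kullback-Leibler
distance translates into a required sample set size, the sketch
below computes the number of samples from a chi-square quantile
using the Wilson-Hilferty approximation. It assumes a histogram
discretization of the state space with k occupied bins; the
function name and the default values of the error bound epsilon
and the confidence parameter delta are illustrative assumptions,
not taken from the abstract.

```python
import math
from scipy.stats import norm

def kld_sample_bound(k, epsilon=0.05, delta=0.01):
    """Approximate number of samples n such that, with probability
    1 - delta, the Kullback-Leibler distance between the sample-based
    maximum-likelihood estimate over k occupied histogram bins and the
    true posterior stays below epsilon.  Uses the Wilson-Hilferty
    approximation of the chi-square quantile."""
    if k <= 1:
        return 1  # with a single occupied bin the bound is trivial
    z = norm.ppf(1.0 - delta)          # upper 1 - delta quantile of N(0, 1)
    c = 2.0 / (9.0 * (k - 1))
    n = (k - 1) / (2.0 * epsilon) * (1.0 - c + math.sqrt(c) * z) ** 3
    return int(math.ceil(n))

# Example: 50 occupied bins with epsilon = 0.05 and delta = 0.01
# yields roughly 750 samples; fewer occupied bins (a focused density)
# yield correspondingly smaller sample sets.
print(kld_sample_bound(50))
```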