D. Fox
KLD-Sampling: Adaptive Particle Filters
Advances in Neural Information Processing Systems 14 (NIPS-01)
Abstract
Over the last few years, particle filters have been applied with great
success to a variety of state estimation problems. We present a
statistical approach to increasing the efficiency of particle filters
by adapting the size of sample sets on the fly. The key idea of the
KLD-sampling method is to bound the approximation error introduced by
the sample-based representation of the particle filter. The name
KLD-sampling reflects the fact that we measure this approximation error
using the Kullback-Leibler distance. Our adaptation approach chooses a
small number of samples if the density is focused on a small part of
the state space, and a large number of samples if the state uncertainty
is high. Both the implementation and computational overhead of this
approach are small. Extensive experiments using mobile robot
localization as a test application show that our approach yields
drastic improvements over particle filters with fixed sample set sizes
and over a previously introduced adaptation technique.
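To make the adaptation concrete, the following is a minimal Python sketch of the sample-size bound commonly associated with KLD-sampling, using the Wilson-Hilferty approximation of the chi-square quantile. It is an illustrative sketch rather than the paper's implementation; the function name kld_sample_bound and the default error bounds epsilon and delta are chosen here for illustration, and the histogram bookkeeping that yields k is omitted.

```python
# Sketch of the KLD-sampling bound on the number of particles.
from math import sqrt, ceil
from scipy.stats import norm


def kld_sample_bound(k, epsilon=0.05, delta=0.01):
    """Number of particles needed so that, with probability 1 - delta,
    the Kullback-Leibler distance between the sample-based estimate and
    the true posterior stays below epsilon; k is the number of histogram
    bins currently occupied by at least one particle."""
    if k <= 1:
        return 1
    z = norm.ppf(1.0 - delta)          # upper 1 - delta quantile of N(0, 1)
    a = 2.0 / (9.0 * (k - 1))
    return ceil((k - 1) / (2.0 * epsilon) * (1.0 - a + sqrt(a) * z) ** 3)
```

In a filter loop, one would keep drawing particles during resampling, updating k as new grid bins become occupied, and stop once the number of particles drawn reaches this bound, so a focused posterior (small k) requires far fewer samples than a widely spread one.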
Download
Full paper [.ps.gz] (678 kb, 8 pages)
A longer, more recent journal version appeared in IJRR.
Bibtex
@INPROCEEDINGS{Fox01KLD,
  AUTHOR    = {Fox, D.},
  TITLE     = {KLD-Sampling: Adaptive Particle Filters},
  YEAR      = {2001},
  BOOKTITLE = {Advances in Neural Information Processing Systems 14},
  PUBLISHER = {MIT Press}
}