The algorithm
I took courses on probability, statistics, and Monte Carlo methods while I was at school, but the ubiquity of algorithms based on randomness still amazes me. Imagine that you are a robot moving in a 2D space. You have a map of the area, and you know your rough initial position in it. Every step you take, you get data from the controls (speed and angular speed) that is quite noisy. You can also sense obstacles within a certain range and with a certain precision, relative to your current position and heading. How do you figure out where you are?
One good strategy for solving this problem is to use a particle filter. You start by creating N "particles," or guesses as to where you are, by drawing the x- and y-position as well as the heading from a Gaussian distribution centered on your initial estimate. Then, every step you take, for every particle, you (see the sketch after this list):
- move it according to the control data, taking the noise into account;
- match the sensor data to the landmarks on the map from the particle's perspective;
- assign a weight to the particle based on how well the observations match the map.
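
Here is a rough NumPy sketch of those steps. The straight-line motion model, the function names, and the diagonal-Gaussian sensor model are simplifications for illustration, not the exact code from my experiment:

```python
import numpy as np

rng = np.random.default_rng(42)

def init_particles(n, init_pose, std):
    # Draw N guesses (x, y, heading) around the rough initial estimate.
    return rng.normal(loc=init_pose, scale=std, size=(n, 3))

def predict(particles, v, omega, dt, std):
    # Move every particle according to the control data; the added
    # Gaussian noise models how unreliable the controls are.
    x, y, theta = particles.T
    x = x + v * np.cos(theta) * dt
    y = y + v * np.sin(theta) * dt
    theta = theta + omega * dt
    moved = np.stack([x, y, theta], axis=1)
    return moved + rng.normal(scale=std, size=particles.shape)

def weigh(particles, obs, landmarks, std):
    # obs holds sensor readings (x, y) relative to the robot;
    # landmarks holds the known positions on the map.
    sx, sy = std
    weights = np.ones(len(particles))
    for i, (px, py, pt) in enumerate(particles):
        # Transform each observation into map coordinates, pretending
        # the particle's pose is the robot's true pose.
        mx = px + obs[:, 0] * np.cos(pt) - obs[:, 1] * np.sin(pt)
        my = py + obs[:, 0] * np.sin(pt) + obs[:, 1] * np.cos(pt)
        for ox, oy in zip(mx, my):
            # Associate the observation with the nearest landmark and
            # score the match with a 2D Gaussian around that landmark.
            d2 = (landmarks[:, 0] - ox) ** 2 + (landmarks[:, 1] - oy) ** 2
            k = int(np.argmin(d2))
            dx, dy = ox - landmarks[k, 0], oy - landmarks[k, 1]
            weights[i] *= np.exp(-(dx**2 / (2 * sx**2) + dy**2 / (2 * sy**2))) / (2 * np.pi * sx * sy)
    return weights
```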
Finally, you draw N particles with replacement from the current set, with probability proportional to the weights. The particles that match the observations well will likely be drawn multiple times, and those that don't are unlikely to be drawn at all. The repetitions are not a problem: because the movement is noisy, the copies will diverge after the next step you take. The particle with the highest weight is your best estimate of your actual position and heading.
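
The resampling itself is nearly a one-liner with NumPy; again, the names and the toy data below are just for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def resample(particles, weights):
    # Draw N particles with replacement, with probability proportional
    # to weight: good matches get duplicated, bad ones disappear.
    p = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=p)
    return particles[idx]

# Toy example: three particles, the middle one matching the map best.
particles = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.1], [5.0, 5.0, 3.0]])
weights = np.array([0.1, 0.8, 0.1])
print(resample(particles, weights))   # mostly copies of the middle particle
print(particles[np.argmax(weights)])  # best estimate of the pose
```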
The result
I ran an experiment on the Udacity data, and the approach with 1,000 particles turned out to work well compared to using just one: the average deviation from the ground truth was around 10 cm. Using a single particle effectively ignores the observation data and relies only on the controls. You can follow the blue diamond in the video below to see how quickly the effects of the noise accumulate. Both cases use the same noise values.