jtosey/particle_filter_demo

Example of a simple particle filter for robot location, Stanford's Intro to AI

I forked the original Python demo in an attempt to make the filter
as close to the algorithm presented in class as possible.  I can now
compare the algorithm, line for line, with the lecture notes presented
in 11-20.

In this demo, the robot's location is shown as a green turtle, and the
estimated location of the robot (once the particles converge) is shown
as a red turtle.

I hope this helps you!  I found that playing around with this code gave
me a much better understanding of exactly how this algorithm works.  I
encourage you to do the same.

The code works in substantially the same way, but it probably performs
a little worse and is a little less efficient than Martin's original
implementation; in exchange, it is more faithful to the lecture notes
in 11-20.

With only a single sensor, and a fairly symmetric world, the filter
sometimes converges to the wrong location even with thousands of
particles, because it can get the same pattern of readings in multiple
parts of the maze.  I've extended the filter with an algorithm that
continuously checks whether the estimate has converged and whether it
continues to produce measurements that are strongly correlated with the
robot's.  If it has a converged solution (low geographical dispersion)
but the weighted measurements show poor correlation, we have to assume
that it converged to the wrong locale.  In this case we reset the
filter to new random initial conditions and start again.
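
As a rough illustration only, that check might look something like the
sketch below.  The names and thresholds are stand-ins: the particle
attributes, the predict_reading and random_free_place callables, and
the cutoff values are not the actual ones in particle_filter.py.

    import math

    def spread(particles, mean_x, mean_y):
        # Weighted RMS distance of the particle cloud from its centroid
        # (assumes the weights p.w sum to 1).
        return math.sqrt(sum(p.w * ((p.x - mean_x) ** 2 + (p.y - mean_y) ** 2)
                             for p in particles))

    def consistency(particles, predict_reading, robot_reading, sigma=0.5):
        # Weighted agreement between what each particle would measure,
        # predict_reading(x, y), and what the robot actually measured;
        # values near 1.0 mean a good match, values near 0.0 a poor one.
        return sum(p.w * math.exp(-(predict_reading(p.x, p.y) - robot_reading) ** 2
                                  / (2 * sigma ** 2))
                   for p in particles)

    def maybe_reset(particles, predict_reading, robot_reading,
                    random_free_place, max_spread=1.0, min_consistency=0.3):
        mean_x = sum(p.w * p.x for p in particles)
        mean_y = sum(p.w * p.y for p in particles)
        converged = spread(particles, mean_x, mean_y) < max_spread
        if converged and consistency(particles, predict_reading,
                                     robot_reading) < min_consistency:
            # Tightly clustered but a poor match to the sensor: assume we
            # converged to the wrong locale and scatter the particles again.
            for p in particles:
                p.x, p.y = random_free_place()
                p.w = 1.0 / len(particles)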

 - jrt

----
Original comments:

  This is a very simple particle filter example prompted by
Stanford's Intro to AI lectures.

  A robot is placed in a maze. It has no idea where it is, and
its only sensor can measure the approximate distance to the nearest
beacon (yes, I know it's totally weird, but it's easy to implement).
Also, it shows that even very simple sensors can be used, no need
for a high resolution laser scanner.
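
  For illustration, a minimal version of such a sensor model, and the
particle weighting it implies, might look like the sketch below.  The
beacon positions and the noise level are made-up values, not the ones
used in the demo:

    import math
    import random

    BEACONS = [(0.5, 0.5), (0.5, 9.5), (9.5, 0.5), (9.5, 9.5)]
    SENSOR_NOISE = 0.2

    def distance_to_nearest_beacon(x, y):
        return min(math.hypot(x - bx, y - by) for bx, by in BEACONS)

    def sense(robot_x, robot_y):
        # The robot's reading: the true distance plus Gaussian noise.
        return (distance_to_nearest_beacon(robot_x, robot_y)
                + random.gauss(0, SENSOR_NOISE))

    def weight(particle_x, particle_y, reading):
        # Likelihood of the reading if the robot were at the particle's
        # position: a Gaussian in the distance error.
        error = distance_to_nearest_beacon(particle_x, particle_y) - reading
        return math.exp(-error ** 2 / (2 * SENSOR_NOISE ** 2))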

  In the arena display the robot is represented by a small green
turtle and its beliefs are red/blue dots. The more a belief matches
the current sensor reading, the more the belief's colour changes to
red. Beacons are little cyan dots.

  The robot then starts to randomly move around the maze. As it
moves, its beliefs are updated using the particle filter algorithm.
After a couple of moves, the beliefs converge around the robot. It
finally knows where it is!
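
  As a concrete sketch of that update loop, one possible
predict / weight / resample step is shown below.  The toy Particle
class and its motion model are illustrative, not the ones in
particle_filter.py; sense() and weight() are the hypothetical helpers
from the sketch above, and random.choices needs Python 3.6 or newer:

    import math
    import random

    class Particle(object):
        def __init__(self, x, y, heading=0.0):
            self.x, self.y, self.heading = x, y, heading

        def move(self, turn, forward, noise=0.05):
            # Toy motion model: turn, then step forward, both with a
            # little Gaussian noise.
            self.heading += turn + random.gauss(0, noise)
            dist = forward + random.gauss(0, noise)
            self.x += math.cos(self.heading) * dist
            self.y += math.sin(self.heading) * dist

        def copy(self):
            return Particle(self.x, self.y, self.heading)

    def filter_step(particles, robot, turn, forward):
        # 1. Predict: move the robot and every particle by the same control.
        robot.move(turn, forward)
        for p in particles:
            p.move(turn, forward)

        # 2. Weight: how well does each particle explain the robot's reading?
        reading = sense(robot.x, robot.y)
        weights = [weight(p.x, p.y, reading) for p in particles]

        # 3. Resample with replacement in proportion to the weights,
        #    copying so that duplicated particles can diverge later.
        if sum(weights) == 0.0:
            return particles              # degenerate case: keep the cloud
        chosen = random.choices(particles, weights=weights, k=len(particles))
        return [p.copy() for p in chosen]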

  Particle filters really are totally cool...


  Start the simulation with:

    python particle_filter.py

  Feel free to experiment with different mazes, particle counts, etc.



  Enjoy!

        mjl
