pololu 3pi robot controlled via computer vision: first test

I was recently working on some boid-like procedural animations, and I thought it would be fun to transpose such a system into physical space. Theo recently received a collection of 5 Pololu 3pi robots, and I decided they would be ideal candidates for such a project.

The Pololu 3pi robots are fairly simple, so I figured it would be easier to treat them as brainless, blind particles under the control of an all-seeing “smart” desktop computer.

The first problem was remote communication with the 3pi. Thankfully, Theo was kind enough to mount an XBee module on one of the bots, which allows wireless communication via a serial port abstraction.
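On the desktop side, the XBee shows up as a regular serial port, so openFrameworks' ofSerial class is enough to talk to it. Something along these lines — the port name, baud rate and single-byte command values below are placeholders for illustration, not the actual protocol running on the bot:

```cpp
#include "ofMain.h"

// Hypothetical single-byte command protocol for the 3pi firmware;
// the real byte values depend on the code running on the bot.
enum BotCommand : unsigned char {
    CMD_STOP       = 0x00,
    CMD_FORWARD    = 0x01,
    CMD_SPIN_LEFT  = 0x02,
    CMD_SPIN_RIGHT = 0x03
};

class BotLink {
public:
    // The XBee appears as a serial device on the desktop;
    // port name and baud rate are assumptions, adjust to your setup.
    void setup() {
        serial.setup("/dev/ttyUSB0", 9600);
    }

    // Push one command byte over the air to the 3pi.
    void send(BotCommand cmd) {
        serial.writeByte((unsigned char) cmd);
    }

private:
    ofSerial serial;
};
```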

The second problem was vision: the desktop controller needs to know the position of the 3pi. As I wanted to create a test environment quickly, I fired up CCV instead of writing custom tracking code. This is an open-source tracker usually used for multitouch surfaces.
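CCV can stream its tracked blobs as TUIO messages over OSC (port 3333 is the usual default), so on the openFrameworks side receiving the 3pi's position is just a matter of listening with the ofxOsc addon. A minimal sketch, assuming the default TUIO cursor profile:

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

class BlobReceiver {
public:
    void setup() {
        // CCV's default TUIO/OSC port; check the CCV settings.
        receiver.setup(3333);
    }

    // Returns true and fills 'pos' (normalized 0..1 coordinates)
    // whenever a TUIO "set" message arrives.
    bool update(ofVec2f& pos) {
        bool gotOne = false;
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            if (m.getAddress() == "/tuio/2Dcur" &&
                m.getArgAsString(0) == "set") {
                pos.x = m.getArgAsFloat(2);
                pos.y = m.getArgAsFloat(3);
                gotOne = true;
            }
        }
        return gotOne;
    }

private:
    ofxOscReceiver receiver;
};
```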

I thought it would be interesting to track the 3pi with infrared light, as this would allow an overhead projection to be added later without interfering with the tracking. I used an IR-sensitive PS3 Eye webcam, and Theo added an infrared LED to the 3pi to make it visible to the camera (it’s the LED on top of the mast in the 3pi picture). The setup was ready to track and send the position of the 3pi to the main desktop program. Now that I knew the position of the 3pi, it was time to make it find and move to an arbitrary new position (the target).

For a first test, I opted for a very naive strategy. The situation is as follows:
1. we know the position of the 3pi via tracking, but not its orientation (visually tracking the orientation is too big a problem for this first test).
2. we can control the motors of the 3pi to make it move forwards and turn, but we can’t tell it to move or turn by a known amount (for instance, we can’t tell it to turn 30 degrees).

However, if we tell the bot to move forwards and track the movement, we get a new position; comparing it to the starting position gives us a vector, and therefore a direction.
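In code this boils down to subtracting the previous tracked position from the current one while the bot is driving forwards. A small sketch (the threshold is an arbitrary guess to reject tracker jitter):

```cpp
#include "ofMain.h"

// Estimate the bot's heading from two consecutive tracked positions
// taken while it is moving forwards. 'prev' and 'current' come from
// the tracker on successive frames.
ofVec2f estimateHeading(const ofVec2f& prev, const ofVec2f& current) {
    ofVec2f heading = current - prev;
    // If the bot barely moved, the direction estimate is unreliable.
    if (heading.length() < 0.001f) {
        return ofVec2f(0, 0);
    }
    return heading.getNormalized();
}
```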

The next step was to decide on a simple vocabulary of movements for the 3pi: it can either move forwards, spin to the right, spin to the left, or stay still. The heuristic is then quite simple (a code sketch follows the list):

1. move forwards

2. are we close enough to the target to consider it the final position?
   - if yes: stop, all done.
   - if no: are we getting closer to the target?
     - if yes: continue forwards.
     - if no: decide whether the target is to the left or to the right of the 3pi, spin in the corresponding direction, then go back to step 1.
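Here is a rough sketch of that decision step, reusing the placeholder command values from the serial sketch and the heading estimated from consecutive positions; the threshold and the left/right sign convention are guesses to be tuned, not the exact code I ran:

```cpp
// One step of the naive homing heuristic, run on every tracking update.
// 'pos' and 'heading' come from the tracker, 'target' is where we want
// the bot to go, 'prevDistance' is the distance measured last frame.
BotCommand decideCommand(const ofVec2f& pos, const ofVec2f& heading,
                         const ofVec2f& target, float prevDistance) {
    const float arrivedRadius = 0.02f; // "close enough", in tracker units

    float distance = pos.distance(target);

    // Close enough to the target? Stop, all done.
    if (distance < arrivedRadius) {
        return CMD_STOP;
    }

    // Getting closer than last frame? Keep going forwards.
    if (distance < prevDistance) {
        return CMD_FORWARD;
    }

    // Otherwise decide whether the target is to the left or the right
    // of the current heading (sign of the 2D cross product) and spin
    // that way. The sign depends on your coordinate system; flip it
    // if the bot spins the wrong way.
    ofVec2f toTarget = target - pos;
    float cross = heading.x * toTarget.y - heading.y * toTarget.x;
    return (cross > 0) ? CMD_SPIN_RIGHT : CMD_SPIN_LEFT;
}
```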

Granted, this is very naive, but it’s fine for the proof-of-concept stage.
I used openFrameworks to read the tracking information and to communicate serially (over the XBee) with the 3pi.

You can see the first result in the following video. On the screen, the white dot represents the target and the blue dot is the tracked position of the 3pi.

As you can see, the basic system is in place, but there is still a lot of work to do to get a more fluid experience : )