IGVC Robot

July 2009

I competed in the 2009 Intelligent Ground Vehicle Competition, on the Georgia Tech RoboJackets Team. Our robot: Candi.

I developed the computer vision algorithms to navigate an autonomous vehicle using only a camera.

Our team ranked 6th place nationwide. Competition photos here.

I wrote all of the vision and mapping code! The robot navigated using only a single camera.

A screenshot of the robot code running on my laptop:

A picture of the robot in action:


Quick and dirty breakdown of the IGVC robot’s computer vision navigation (rough code sketches of steps 2–7 follow the list):

  1. Capture a frame from the FireWire camera.
  2. Apply an inverse perspective transform. This transform normalizes the apparent size of both near and far-off objects under the assumption that the course is a flat plane. Processing the image as if the camera were looking straight down on a planar world is much easier than dealing with 3D coordinates.
  3. A region-of-interest box is drawn over the area of the input image immediately in front of the robot; whatever color fills that region is assumed to be traversable ground, and everything else is treated as an obstacle.
  4. The average RGB ratios from that region are used to threshold the transformed image into a binary image: traversable (white) or obstacle (black). This image is mapped into world space using a homography matrix.
  5. Simultaneously, the input image is converted to grayscale, and a feature tracker finds and tracks features across alternating frames of motion. The matched features are filtered using RANSAC, and a homography matrix that maps between frames is computed.
  6. That homography matrix is used to translate and rotate the robot within the world map. The world map is built up as the robot moves, where black is obstacle, white is traversable, and gray is unknown. The map slowly decays back to gray to prevent loop-closure errors from building up.
  7. Scan lines radiate from the robot’s center on the world map in a semicircle, checking each ray for dark (obstacle) pixels. The scan line that crosses the most white (traversable) pixels is chosen, and the robot turns and moves in that direction.
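
The inverse perspective transform in step 2 boils down to a single perspective warp. Here's a minimal sketch in Python with OpenCV (not the robot's actual code; the corner points stand in for real calibration values):

```python
import cv2
import numpy as np

# Hypothetical calibration: pixel corners of a known flat rectangle on the
# ground as seen by the camera, and where those corners should land in the
# top-down view. The real values come from calibrating the camera's mounting.
src = np.float32([[220, 300], [420, 300], [640, 480], [0, 480]])
dst = np.float32([[200, 0],   [440, 0],   [440, 480], [200, 480]])

H_ipm = cv2.getPerspectiveTransform(src, dst)

def birds_eye(frame):
    """Warp a camera frame into a top-down (bird's-eye) view."""
    return cv2.warpPerspective(frame, H_ipm, (640, 480))
```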
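
Steps 3 and 4, sampling the ground color in front of the robot and thresholding on RGB ratios, could look roughly like this; the ROI box and tolerance are made-up placeholders, not the robot's actual numbers:

```python
import numpy as np

def traversability_mask(top_down, roi=(380, 460, 280, 360), ratio_tol=0.15):
    """Threshold a bird's-eye frame into traversable (255) vs. obstacle (0).

    roi is (y0, y1, x0, x1): a patch of ground directly in front of the
    robot that is assumed to be safe to drive on.
    """
    y0, y1, x0, x1 = roi

    # Per-pixel color ratios (channel / total brightness) are less sensitive
    # to lighting changes than raw RGB values.
    def ratios(img):
        img = img.astype(np.float32)
        return img / (img.sum(axis=2, keepdims=True) + 1e-6)

    ref = ratios(top_down[y0:y1, x0:x1]).reshape(-1, 3).mean(axis=0)
    diff = np.abs(ratios(top_down) - ref).sum(axis=2)

    # Anything whose color ratios are close to the sampled ground patch is
    # marked traversable (white); everything else is an obstacle (black).
    return np.where(diff < ratio_tol, 255, 0).astype(np.uint8)
```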
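
Step 5 is the standard corner-tracking recipe: find good features, track them with pyramidal Lucas-Kanade optical flow, and let RANSAC reject outliers while fitting the homography. A sketch, again with illustrative parameter values:

```python
import cv2
import numpy as np

def frame_to_frame_homography(prev_gray, curr_gray):
    """Estimate the homography mapping the previous grayscale frame to the current one."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=8)
    if prev_pts is None:
        return None

    # Track each corner into the new frame.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    good_prev = prev_pts[status.flatten() == 1]
    good_curr = curr_pts[status.flatten() == 1]
    if len(good_prev) < 4:
        return None

    # RANSAC discards tracks that don't agree with the dominant motion.
    H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
    return H
```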
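
Step 6's decaying world map, sketched assuming an 8-bit grid where 0 is obstacle, 255 is traversable, and 128 is unknown. The map size, decay rate, and the idea that frame_to_world_H is the accumulated camera-to-world homography are assumptions for illustration:

```python
import cv2
import numpy as np

WORLD = np.full((2000, 2000), 128, dtype=np.uint8)  # start fully 'unknown'
DECAY = 2  # how far each cell slides back toward gray per frame

def update_world(mask, frame_to_world_H):
    """Stamp the latest traversability mask into the world map, then decay."""
    global WORLD

    # Warp the binary mask into world coordinates; unobserved cells get 128.
    warped = cv2.warpPerspective(mask, frame_to_world_H, WORLD.shape[::-1],
                                 flags=cv2.INTER_NEAREST, borderValue=128)
    observed = warped != 128
    WORLD[observed] = warped[observed]

    # Decay everything back toward gray so stale observations (and
    # accumulated drift) don't pile up into loop-closure errors.
    world = WORLD.astype(np.int16)
    world += np.clip(128 - world, -DECAY, DECAY)
    WORLD = world.astype(np.uint8)
```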
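
And step 7's semicircular scan-line search, in rough form (radius, ray count, and pixel thresholds are placeholders; coordinates are image-style, with y increasing downward):

```python
import numpy as np

def pick_heading(world, robot_xy, heading, radius=150, n_rays=61):
    """Pick the most traversable direction by casting rays over a 180-degree fan."""
    rx, ry = robot_xy
    best_angle, best_score = heading, -1

    for angle in np.linspace(heading - np.pi / 2, heading + np.pi / 2, n_rays):
        score = 0
        for r in range(1, radius):
            x = int(rx + r * np.cos(angle))
            y = int(ry + r * np.sin(angle))
            if not (0 <= x < world.shape[1] and 0 <= y < world.shape[0]):
                break
            if world[y, x] < 64:    # dark pixel: obstacle blocks this ray
                break
            if world[y, x] > 192:   # white pixel: known traversable ground
                score += 1
        if score > best_score:
            best_angle, best_score = angle, score

    return best_angle  # the robot turns toward this angle and drives
```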


6th Place Award!
