Wednesday, October 19, 2011

Cone detection

One of the most critical tasks our robot has to achieve at the Robomagellan competition is cone detection: a road cone marks every GPS checkpoint, and the robot must bump into it to lower its completion time.
   Here is how our robot is set up: after coming within 5 yards of a GPS checkpoint, the robot stops and starts scanning its surroundings (the camera, a CMUCAM3, is mounted on a servo). After determining which frame contains the most centered image of the cone, the angle associated with that frame sets the direction to follow.
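Assuming a cone-detection routine that reports the cone's horizontal position in each frame (or nothing when no cone is seen), the frame-selection step can be sketched like this. This is a minimal Python illustration, not our actual firmware; the function name, the list-of-scans interface, and the 176-pixel frame width are all assumptions.

```python
def best_scan_angle(scans, frame_width=176):
    """Pick the servo angle whose frame shows the cone closest to the
    image center.

    `scans` is a list of (servo_angle_deg, cone_x) pairs, where cone_x
    is the detected cone's horizontal pixel position, or None when no
    cone was found in that frame. The 176 px default frame width is an
    assumed value.
    """
    center = frame_width / 2
    hits = [(angle, x) for angle, x in scans if x is not None]
    if not hits:
        return None  # no frame contained a cone
    # The winning frame is the one whose cone is nearest the center.
    return min(hits, key=lambda ax: abs(ax[1] - center))[0]
```

The returned servo angle then becomes the heading the microcontroller steers toward.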
How can we perform simple yet efficient cone detection?
The first thought is to threshold on the cone color (bright orange). As it turns out, this very basic idea is not effective on its own: the camera picks up a lot of red in the surroundings, which interferes with the detection of the cone - during the 2011 competition in San Mateo, one kid in the crowd wearing a screaming orange jacket kept running around the robots... Moreover, depending on the lighting, the poor sensor of the CMUCAM3 perceives the cone as anything from a dark shade of red (in backlit, contre-jour situations) to nearly white.
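To make the idea concrete, here is a toy NumPy version of the color-thresholding step. The actual thresholds depend entirely on the camera and lighting, so the values below (and the red/green ratio trick) are illustrative assumptions, not what runs on the robot:

```python
import numpy as np

def threshold_orange(rgb, r_min=150, rg_ratio=1.6):
    """Binary mask of pixels with high red content.

    rgb: H x W x 3 uint8 array in RGB channel order.
    r_min and rg_ratio are hypothetical thresholds: keep pixels that
    are both bright in red and much redder than they are green.
    """
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32) + 1.0  # +1 avoids division by zero
    return (r >= r_min) & (r / g >= rg_ratio)

# Tiny synthetic frame: one "orange" pixel, one white, one dark red.
frame = np.array([[[255, 120, 0], [255, 255, 255], [90, 30, 10]]],
                 dtype=np.uint8)
mask = threshold_orange(frame)
```

As the post explains, this mask alone is too noisy: anything red in the scene survives it, which is exactly why the shape-based step below is needed.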

   In order to solve the noise issue, I singled out another characteristic: the shape of the cone. Using Hough transforms on the thresholded image, I can eliminate all the red interference. The Hough algorithm is used here for line detection: after thresholding out everything with a low red content, we look for lines in the entire image.
The two most significant lines then represent the sides of the cone. To achieve greater accuracy, we can also check that the angles of these two lines fall within a reasonable range for cone edges. That's the great thing about this approach: thanks to the symmetry of the cone, we should always be able to detect its main outlines, be it far, near, or leaning to one side... This gives us some flexibility with respect to the cone's position and orientation.
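The line-detection step can be sketched as a small, deliberately naive Hough transform over the binary mask. This is a pure-NumPy illustration of the technique (each foreground pixel votes for every (rho, theta) line it could lie on), not the code running on the CMUCAM3, and the pixel-by-pixel loop is slow by design for clarity:

```python
import numpy as np

def hough_lines(mask, n_theta=180):
    """Accumulate Hough votes for lines x*cos(theta) + y*sin(theta) = rho.

    mask: 2-D boolean array (output of the color threshold).
    Returns the accumulator, the theta bins, and the rho offset.
    """
    h, w = mask.shape
    diag = int(np.ceil(np.hypot(h, w)))           # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    ys, xs = np.nonzero(mask)
    for x, y in zip(xs, ys):
        # Each pixel votes once per theta bin; +diag shifts rho >= 0.
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, thetas, diag

def strongest_lines(acc, thetas, diag, k=2):
    """Return the k highest-voted (rho, theta) pairs."""
    flat = np.argsort(acc, axis=None)[::-1][:k]
    rho_idx, th_idx = np.unravel_index(flat, acc.shape)
    return [(int(r) - diag, float(thetas[t]))
            for r, t in zip(rho_idx, th_idx)]

# Synthetic mask with a single vertical edge at x = 5.
mask = np.zeros((60, 20), dtype=bool)
mask[:, 5] = True
acc, thetas, diag = hough_lines(mask)
lines = strongest_lines(acc, thetas, diag, k=2)
```

The angle sanity check mentioned above then amounts to comparing the two returned theta values against a plausible range of cone-edge slopes, and checking that the pair is roughly symmetric.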
Here's the result of the Hough transform:

Notice that there are two distinct lines (the two red squares give the parameterization of the two lines).
Here is the result of running the algorithm on a photo of a cone with a big patch of red next to it.



With simple geometry on the two lines, we can then determine where the center of the cone is and return that value to the robot's microcontroller.
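One way to do that geometry (a sketch, not necessarily the robot's exact method) is to intersect the two detected edges: in the Hough parameterization x*cos(theta) + y*sin(theta) = rho, the intersection is a 2x2 linear system, and its solution approximates the cone's apex:

```python
import numpy as np

def line_intersection(l1, l2):
    """Intersection of two lines given in Hough (rho, theta) form,
    x*cos(theta) + y*sin(theta) = rho.

    Returns (x, y), or None when the lines are (near-)parallel.
    """
    (r1, t1), (r2, t2) = l1, l2
    a = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    b = np.array([r1, r2])
    if abs(np.linalg.det(a)) < 1e-9:
        return None  # parallel edges: no single intersection point
    x, y = np.linalg.solve(a, b)
    return float(x), float(y)
```

The x-coordinate of that point, compared to the middle of the frame, gives the horizontal offset of the cone that the microcontroller can steer against.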

Possible improvement: we could also return the height of the cone so the robot can estimate how far away it is and adjust its speed accordingly.
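Under a pinhole-camera assumption, that distance estimate is a one-liner: distance = focal_length * real_height / apparent_height. The focal length in pixels and the cone height below are assumed values for illustration, not calibrated numbers from our camera:

```python
def cone_distance(pixel_height, focal_px=500.0, cone_height_m=0.45):
    """Pinhole estimate of distance to the cone, in meters.

    pixel_height: apparent height of the cone in the image, in pixels.
    focal_px and cone_height_m (roughly an 18-inch cone) are assumed
    values; a real robot would calibrate both.
    """
    return focal_px * cone_height_m / pixel_height
```

With those assumptions, a cone 50 pixels tall would be estimated at 4.5 m, and the estimate shrinks as the cone fills more of the frame.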

Thursday, October 6, 2011

Robomagellan & Sparkfun competition project lead!

I just got appointed project lead for the IEEE teams at UCSD!
I will be leading this year's teams for the Robomagellan and the Sparkfun AVC competitions.
Having worked on the project last year as a regular team member (the project lead was Jordan Rhee), I may decide to start from scratch or to improve and add some cool features to the robot we had. Sparkfun had so many different types of awards last year; we brought home the "Water Hazard Award" for having built a robot that survived two dives in the pond next to the Sparkfun HQ.
Here is a quick overview of our robot in this video I made:

 More details on both competitions on the UCSD IEEE website.