{"id":875,"date":"2011-01-01T11:04:10","date_gmt":"2011-01-01T18:04:10","guid":{"rendered":"http:\/\/mcclanahoochie.com\/blog\/?post_type=portfolio&#038;p=875"},"modified":"2023-06-10T10:32:26","modified_gmt":"2023-06-10T17:32:26","slug":"igvc-robot","status":"publish","type":"post","link":"https:\/\/mcclanahoochie.com\/blog\/2011\/01\/igvc-robot\/","title":{"rendered":"IGVC Robot"},"content":{"rendered":"<h3>July 2009<\/h3>\n<p><a href=\"https:\/\/www.facebook.com\/RoboJackets\/photos\/pb.398920666904296.-2207520000.1459037768.\/835727393223619\/?type=3&amp;theater\"><img data-recalc-dims=\"1\" decoding=\"async\" data-attachment-id=\"1223\" data-permalink=\"https:\/\/mcclanahoochie.com\/blog\/2011\/01\/igvc-robot\/igvcbanner\/#main\" data-orig-file=\"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/igvcbanner.jpg?fit=200%2C124&amp;ssl=1\" data-orig-size=\"200,124\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;}\" data-image-title=\"igvcbanner\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/igvcbanner.jpg?fit=200%2C124&amp;ssl=1\" class=\"alignnone size-full wp-image-1223\" title=\"igvcbanner\" src=\"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/igvcbanner.jpg?resize=200%2C124\" alt=\"\" width=\"200\" height=\"124\" \/><\/a><\/p>\n<p>I competed in the 2009\u00a0<a href=\"http:\/\/www.igvc.org\/\">Intelligent Ground Vehicle\u00a0Competition<\/a>, on the Georgia Tech\u00a0<a href=\"http:\/\/robojackets.org\">RoboJackets<\/a> Team.\u00a0Our robot: 
Candi.<\/p>\n<p>I developed the computer vision algorithms to navigate an autonomous vehicle using only a vision camera.<\/p>\n<p>Our team ranked\u00a0<strong><em>6th place<\/em> nationwide<\/strong>.\u00a0Competition photos\u00a0<a href=\"https:\/\/www.facebook.com\/media\/set\/?set=a.596312857165075.1073741849.398920666904296&amp;type=3\" target=\"_blank\" rel=\"noopener\">here.<br \/>\n<\/a><br \/>\n<span style=\"text-decoration: underline;\"><em>I wrote all of the vision and mapping code!<\/em><\/span> The robot navigated using a <em>single<\/em> camera only.<\/p>\n<p>A screenshot of the robot code running on my laptop:<\/p>\n<p><a href=\"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/candi_processing.png\"><img data-recalc-dims=\"1\" decoding=\"async\" data-attachment-id=\"1263\" data-permalink=\"https:\/\/mcclanahoochie.com\/blog\/2011\/01\/igvc-robot\/candi_processing\/#main\" data-orig-file=\"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/candi_processing.png?fit=1000%2C625&amp;ssl=1\" data-orig-size=\"1000,625\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;}\" data-image-title=\"candi_processing\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/candi_processing.png?fit=1000%2C625&amp;ssl=1\" class=\"alignnone size-medium wp-image-1263\" title=\"candi_processing\" src=\"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/candi_processing-300x187.png?resize=300%2C187\" alt=\"\" width=\"300\" height=\"187\" 
srcset=\"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/candi_processing.png?resize=300%2C187&amp;ssl=1 300w, https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/candi_processing.png?w=1000&amp;ssl=1 1000w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>A picture of the robot in action:<\/p>\n<p><a href=\"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/candi_action.jpg\"><img data-recalc-dims=\"1\" decoding=\"async\" data-attachment-id=\"1264\" data-permalink=\"https:\/\/mcclanahoochie.com\/blog\/2011\/01\/igvc-robot\/candi_action\/#main\" data-orig-file=\"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/candi_action.jpg?fit=949%2C984&amp;ssl=1\" data-orig-size=\"949,984\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;}\" data-image-title=\"candi_action\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/candi_action.jpg?fit=949%2C984&amp;ssl=1\" class=\"alignnone size-medium wp-image-1264\" title=\"candi_action\" src=\"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/candi_action-289x300.jpg?resize=289%2C300\" alt=\"\" width=\"289\" height=\"300\" srcset=\"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/candi_action.jpg?resize=289%2C300&amp;ssl=1 289w, https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/candi_action.jpg?w=949&amp;ssl=1 949w\" sizes=\"(max-width: 289px) 100vw, 289px\" 
\/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>Quick and dirty breakdown of the IGVC robot&#8217;s computer vision navigation:<\/p>\n<ol>\n<li>Capture a FireWire camera frame.<\/li>\n<li>Apply an inverse perspective transform. This transform normalizes the size of both near and far-off objects, under the assumption that the course is a flat plane. It is easier to process the image as if the camera were looking straight down on a planar world than to deal with 3D coordinates.<\/li>\n<li>A region-of-interest box is drawn on the area of the input image immediately in front of the robot; whatever color fills that region is assumed to be traversable ground, and everything else an obstacle.<\/li>\n<li>The average of the RGB ratios is used to threshold the transformed image into a binary image of traversable (white) or obstacle (black) pixels. This image is mapped into world space using a homography matrix.<\/li>\n<li>Simultaneously, the input image is also converted to greyscale, and a feature tracker finds and tracks features across alternating frames of motion. The tracked features are filtered using RANSAC, and a homography matrix that maps between frames is computed.<\/li>\n<li>The homography matrix is used to translate and rotate the robot within the world map. The world map is built up as the robot moves, where black is obstacle, white is traversable, and gray is unknown. The map slowly decays back to gray to prevent loop-closure errors from accumulating.<\/li>\n<li><span style=\"line-height: 19px;\">Scan lines protrude from the robot&#8217;s center on the world map in a semicircle, scanning outward for dark (obstacle) pixels. 
The scan line with the most white pixels (traversability) is chosen, and the robot turns and moves in the new direction.<\/span><\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<figure style=\"width: 598px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/www.facebook.com\/RoboJackets\/photos\/a.596312857165075.1073741849.398920666904296\/596313667164994\/?type=3&amp;theater\"><img decoding=\"async\" title=\"6th Place Award!\" src=\"http:\/\/www.robojackets.org\/gallery\/main.php?g2_view=core.DownloadItem&amp;g2_itemId=5924&amp;g2_serialNumber=3\" alt=\"6th Place Award!\" width=\"608\" height=\"456\" \/><\/a><figcaption class=\"wp-caption-text\">6th Place Award!<\/figcaption><\/figure>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>July 2009 I competed in the 2009\u00a0Intelligent Ground Vehicle\u00a0Competition, on the Georgia Tech\u00a0RoboJackets Team.\u00a0Our robot: Candi. I developed the computer vision algorithms to navigate an autonomous vehicle using only a vision camera. Our team ranked\u00a06th place nationwide.\u00a0Competition photos\u00a0here. I wrote all of the vision and mapping code! The robot navigated using a single camera only. 
&#8230; <a title=\"IGVC Robot\" class=\"read-more\" href=\"https:\/\/mcclanahoochie.com\/blog\/2011\/01\/igvc-robot\/\" aria-label=\"Read more about IGVC Robot\">Read more<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"advanced_seo_description":"","jetpack_seo_html_title":"","jetpack_seo_noindex":false,"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[1],"tags":[139,113,138,54,92,101,29,137],"class_list":["post-875","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-competition","tag-computer-vision","tag-igvc","tag-image-processing","tag-opencv","tag-programming","tag-projects","tag-robot"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/pZdXI-e7","jetpack-related-posts":[{"id":1201,"url":"https:\/\/mcclanahoochie.com\/blog\/2011\/05\/ai-learning-portfolio\/","url_meta":{"origin":875,"position":0},"title":"AI Learning Portfolio","author":"mcclanahoochie","date":"May 4, 2011","format":false,"excerpt":"As a final assignment\/write-up for my CS6601 Artificial Intelligence class at Georgia Tech, this\u00a0learning portfolio was made to summarize what I had learned throughout the course... CS 6601 Learning Portfolio This page constitutes my learning portfolio for CS 6601, Artificial Intelligence, taken in Spring 2011. 
In it, I discuss what\u2026","rel":"","context":"In \"ai\"","block_context":{"text":"ai","link":"https:\/\/mcclanahoochie.com\/blog\/tag\/ai\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/05\/Chris2-d%2B_copy_-1.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":950,"url":"https:\/\/mcclanahoochie.com\/blog\/2011\/01\/computer-vision-on-android\/","url_meta":{"origin":875,"position":1},"title":"Computer Vision on Android in Java","author":"mcclanahoochie","date":"January 4, 2011","format":false,"excerpt":"January 2010 \u00a0 Over the holiday break, I finally created an Android\u00a0app that allows image processing on the camera's raw data, and displays it back on the screen. It only uses\u00a0Java on the CPU for now, but in my free time I'll be porting the code to OpenGL ES to\u2026","rel":"","context":"In \"android\"","block_context":{"text":"android","link":"https:\/\/mcclanahoochie.com\/blog\/tag\/android\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/device-sobel-2-small.png?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":1153,"url":"https:\/\/mcclanahoochie.com\/blog\/2011\/04\/computer-vision-on-android-opencv\/","url_meta":{"origin":875,"position":2},"title":"Computer Vision on Android with OpenCV","author":"mcclanahoochie","date":"April 8, 2011","format":false,"excerpt":"March 2011 With the help of Motodev Studio for Android, I've\u00a0extracted\u00a0the\u00a0android-opencv JNI\u00a0camera example and spawned a fork of my original computer vision app,\u00a0Viewer, to an OpenCV version: ViewerCV. Both are\u00a0available on Git Hub\u00a0as open source software example of doing Computer Vision on Android with OpenCV. 
Viewer Features: *FAST Features (default\u2026","rel":"","context":"In \"android\"","block_context":{"text":"android","link":"https:\/\/mcclanahoochie.com\/blog\/tag\/android\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/04\/viewercv1.png?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":1731,"url":"https:\/\/mcclanahoochie.com\/blog\/2011\/08\/image-processing-with-libjacket-opencv\/","url_meta":{"origin":875,"position":3},"title":"Image processing with LibJacket + OpenCV","author":"mcclanahoochie","date":"August 24, 2011","format":false,"excerpt":"Update: one year later:\u00a0ArrayFire+OpenCV The OpenCV library is the de-facto standard for doing computer vision and image processing research projects. OpenCV includes several hundreds of computer vision algorithms, aimed for use in real-time vision applications. LibJacket is a matrix library built on CUDA. LibJacket offers hundreds of general matrix and\u2026","rel":"","context":"In \"arrayfire\"","block_context":{"text":"arrayfire","link":"https:\/\/mcclanahoochie.com\/blog\/tag\/arrayfire\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/08\/Screen-shot-2011-08-24-at-2.42.52-PM-1024x640.png?resize=350%2C200","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/08\/Screen-shot-2011-08-24-at-2.42.52-PM-1024x640.png?resize=350%2C200 1x, https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/08\/Screen-shot-2011-08-24-at-2.42.52-PM-1024x640.png?resize=525%2C300 1.5x"},"classes":[]},{"id":1966,"url":"https:\/\/mcclanahoochie.com\/blog\/2011\/12\/computer-vision-learning-portfolio\/","url_meta":{"origin":875,"position":4},"title":"Computer Vision Learning Portfolio","author":"mcclanahoochie","date":"December 12, 2011","format":false,"excerpt":"This page constitutes my required\u00a0external\u00a0learning portfolio for 
CS 7495, Computer Vision, taken in Fall 2011. In it, I discuss what I have learned throughout the course, my activities and findings, how I think I did, and what impact it had on me. About me I am a coffee fanatic that\u2026","rel":"","context":"In \"computer vision\"","block_context":{"text":"computer vision","link":"https:\/\/mcclanahoochie.com\/blog\/tag\/computer-vision\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/12\/chris-raffertys-2-150x150.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":874,"url":"https:\/\/mcclanahoochie.com\/blog\/2011\/01\/laser-projection-vision-system\/","url_meta":{"origin":875,"position":5},"title":"Laser Projection Vision System","author":"mcclanahoochie","date":"January 1, 2011","format":false,"excerpt":"September 2008 A project at GTRI\u00a0FPTD I worked on\u00a0involving combining a color vision system with a 2D laser projector. Case Study about the project here: \u00a0Using Lasers to Identify Substandard Food. 
With\u00a0Python and OpenCV,\u00a0I got the system to find contours with the camera, and tell the laser to draw\u00a0them in\u2026","rel":"","context":"In \"computer vision\"","block_context":{"text":"computer vision","link":"https:\/\/mcclanahoochie.com\/blog\/tag\/computer-vision\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/mcclanahoochie.com\/blog\/wp-content\/uploads\/2011\/01\/laser_contours2.png?resize=350%2C200","width":350,"height":200},"classes":[]}],"jetpack_likes_enabled":false,"_links":{"self":[{"href":"https:\/\/mcclanahoochie.com\/blog\/wp-json\/wp\/v2\/posts\/875","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mcclanahoochie.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mcclanahoochie.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mcclanahoochie.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mcclanahoochie.com\/blog\/wp-json\/wp\/v2\/comments?post=875"}],"version-history":[{"count":0,"href":"https:\/\/mcclanahoochie.com\/blog\/wp-json\/wp\/v2\/posts\/875\/revisions"}],"wp:attachment":[{"href":"https:\/\/mcclanahoochie.com\/blog\/wp-json\/wp\/v2\/media?parent=875"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mcclanahoochie.com\/blog\/wp-json\/wp\/v2\/categories?post=875"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mcclanahoochie.com\/blog\/wp-json\/wp\/v2\/tags?post=875"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}