Several weeks ago, I had the pleasure of attending another lecture organized by the Robotics Initiative at Georgia Tech. This time the speaker was Dr. John Leonard, who presented his current research in Simultaneous Localization and Mapping (SLAM). Dr. Leonard is a member of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). His lecture was an amalgam of theory, research applications, and future aspirations.
Dr. Leonard describes SLAM concisely on his website:
The problem of SLAM is stated as follows: starting from an initial position, a mobile robot travels through a sequence of positions and obtains a set of sensor measurements at each position. The goal is for the mobile robot to process the sensor data to produce an estimate of its position while concurrently building a map of the environment.
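That definition can be made concrete with a toy example of my own (not something from the lecture): in one dimension, the robot's poses and a landmark's position can be estimated jointly from noisy odometry and noisy range measurements by solving a small least-squares problem. All of the numbers below are invented for illustration.

```python
import numpy as np

# Toy 1D SLAM: the robot starts at x0 = 0 (fixed), visits poses x1 and x2,
# and ranges a single landmark l from each pose.
# Unknowns: [x1, x2, l]. Each measurement is a noisy linear difference:
#   odometry:  x1 - x0 ≈ 1.1,  x2 - x1 ≈ 0.9
#   ranges:    l - x0 ≈ 3.1,   l - x1 ≈ 1.9,   l - x2 ≈ 1.05
A = np.array([
    [ 1, 0, 0],   # x1 - x0
    [-1, 1, 0],   # x2 - x1
    [ 0, 0, 1],   # l  - x0
    [-1, 0, 1],   # l  - x1
    [ 0,-1, 1],   # l  - x2
], dtype=float)
b = np.array([1.1, 0.9, 3.1, 1.9, 1.05])

# Least squares blends all measurements into one consistent estimate
# of the trajectory AND the map at the same time -- the essence of SLAM.
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(est)  # jointly estimated [x1, x2, landmark]
```

Note how the landmark measurements pull the pose estimates away from raw odometry: the solver settles on values that best explain every measurement at once, rather than trusting any single sensor.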
Dr. Leonard spent most of his time describing the applications in which he and his students are testing their SLAM algorithms. Much of his team’s work focuses on underwater and sea-surface vehicles and falls into three areas: ship hull inspection and port safety, hunting for mines, and hunting for submarines. One project they completed involved creating a 3D map of the Titanic wreckage. He showed samples of 866 detailed photographs taken by their submersible robot, which were skinned over the 3D (SLAM-generated) map to recreate the deck and hull.
Another grad student project tested the capabilities of SLAM on the MIT campus. In a very clever way, the student used his bicycle, equipped with a handheld video camera, as the robot test bed. To properly calculate the bike’s position, the student needed to record the state variables (forward velocity and angular steering position) in sync with the video footage from the camera. He used two sensors and fed their signals into the audio channel of the video camera. (This synchronized-recording problem is quite tangential to the main problem the student was trying to address, but I think it is the most ingenious aspect of the project.)
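Those two state variables are exactly what the standard kinematic bicycle model needs to dead-reckon a position estimate. A minimal sketch (the wheelbase, sample rate, and sample values are my assumptions, not details from the talk):

```python
import math

def dead_reckon(samples, wheelbase=1.0, dt=0.1):
    """Integrate the kinematic bicycle model from (v, steer) samples.

    samples: (forward_velocity m/s, steering_angle rad) pairs, assumed to be
    synchronized at a fixed interval dt -- the synchronization the student's
    audio-channel trick provided. Returns the estimated (x, y) path.
    """
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for v, steer in samples:
        # Steering angle sets the turn rate; velocity advances the position.
        heading += v / wheelbase * math.tan(steer) * dt
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
        path.append((x, y))
    return path

# Illustrative ride: straight at 3 m/s for 1 s, then a gentle left turn.
ride = [(3.0, 0.0)] * 10 + [(3.0, 0.1)] * 10
path = dead_reckon(ride)
print(path[-1])
```

This is pure dead reckoning, so its error grows without bound over time; in the student's project, SLAM is what corrects that drift by matching the camera footage against the map being built.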
Dr. Leonard concluded his discussion by presenting his vision of the pinnacle of SLAM applications: a “Google Robot”. The robot would be to the physical world what Google’s search engine is to the internet. It would endlessly comb over some terrain and continually update a list of the objects it finds and the location of each. This list would then be searchable. To use an example I came up with, a robot would circle a parking lot, observing the parked cars, identifying each car and its location, and then allow people to ask it, “Where is my car?” Such a robot, operating on an infinitely long timeline, would have to be free from error as it endlessly maps its own position.