After assembling the hardware for my SLAM (Simultaneous Localization and Mapping) robot, I set about writing the code that controls the drive motors, senses obstacles, and records the robot’s path. Lego’s NXT kit comes with a rather basic visual programming language called NXT-G. This environment, which builds programs essentially as flow charts, is almost counterintuitive to anyone with command-line programming experience. I decided instead to use the third-party programming language Not eXactly C (NXC) and its associated IDE, Bricx Command Center.
The control software is written as one “task” and one “function”. The entire program is one (hopefully) simple loop in which the robot moves forward in a straight line while continually sampling the USRF (ultrasonic range finder) sensor. When an obstacle is found, the robot stops, records the distance it has traveled, makes a random turn, and then continues forward. At each iteration the program appends a record to a single output file: the iteration number, the motor encoder value, the USRF reading at the moment the obstacle was detected, and the value of the random turn.
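As a rough illustration of the per-iteration record described above, here is a sketch in plain C (the robot itself runs NXC, whose file-I/O calls on the brick are different; the field order and the comma-separated layout are my assumptions, not the actual on-brick format):

```c
#include <stdio.h>

/* Format one log record: iteration number, encoder ticks, USRF reading
   (cm), and the random turn (degrees). Returns the number of characters
   written, as snprintf does. */
int format_record(char *buf, size_t n, int iter, long encoder,
                  int usrf_cm, int turn_deg)
{
    return snprintf(buf, n, "%d,%ld,%d,%d", iter, encoder, usrf_cm, turn_deg);
}
```

One such line per loop iteration is enough to reconstruct the robot’s path offline: each record gives the distance traveled since the last turn and the heading change that followed.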
I faced several problems when writing and testing this code. First, the robot did not always move in a straight line when commanded forward. Initially the robot moved forward via a single command to both drive motors, OnRev(OUT_AC,50), which turns on the motors plugged into ports A and C at 50% power. (Because of the physical arrangement of the motors, the Reverse command actually drives the robot forward.) Since I was taking encoder readings from only one motor, any movement of the A motor was interpreted as straight forward motion of the robot. This simple move command did nothing to keep the two motors in sync, and ultimately the robot would veer to the left or right.
I fixed this problem by relying on the built-in synchronization mode of the NXT firmware. By instead issuing the command OnRevReg(OUT_AC,50,OUT_REGMODE_SYNC), the C motor tracks the movement of the A motor so that the position, speed, and acceleration of the two motors match. This ensures that the robot travels in a straight line when commanded to do so.
Another problem I encountered was that the USRF returned false positives: the sensor reported an obstacle when there was none. My first reaction was to disassemble the hardware and run tests on the USRF sensor itself. I soon realized that I could instead simply sample the USRF several times before allowing the program to decide whether an obstacle was present.
I began by sampling the USRF three times per cycle; if the sensor returned two positive readings, the program declared an obstacle present. However, the USRF still produced several false positives, so I tightened the requirement: the USRF had to detect the obstacle on all three consecutive samplings before the robot acted.
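The three-consecutive-samples rule can be sketched as follows, in plain C rather than NXC; the 25 cm detection threshold is an assumption for illustration, not the value used on the robot:

```c
#include <stdbool.h>

#define SAMPLES 3
#define THRESHOLD_CM 25  /* assumed obstacle-distance threshold */

/* Declare an obstacle only if every one of the SAMPLES consecutive
   readings is closer than the threshold; a single far reading vetoes
   the detection, which filters out isolated false positives. */
bool obstacle_confirmed(const int readings_cm[SAMPLES])
{
    for (int i = 0; i < SAMPLES; i++)
        if (readings_cm[i] >= THRESHOLD_CM)
            return false;
    return true;
}
```

Requiring unanimity instead of a 2-of-3 vote trades a little responsiveness for robustness: a spurious close reading now has to occur three times in a row to trigger a stop.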
The final and most intriguing problem was one that required considerable judgment on my part: determining the range of appropriate random turns the robot could make. Initially, before implementing the RANDOM function, I had the robot making only left-hand turns of constant magnitude. During testing I soon realized that the robot would repeatedly traverse one end of the maze and never explore the rest of its environment. After implementing the RANDOM function, I had the robot making turns between ten degrees left and ten degrees right. The robot would again get stuck, but this time because it would repeatedly scan a very small section of wall (approximately 20 cm). I also realized that random turns greater than an about-face in either direction were redundant, so I limited the range of turns to between 180 degrees left and 180 degrees right.
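The final turn range can be expressed as a one-line mapping from a uniform random integer onto [-180, +180] degrees. This sketch uses standard C rand() rather than NXC’s Random(n) (which returns a value from 0 to n-1), so it is an illustration of the logic, not the on-brick code:

```c
#include <stdlib.h>

/* Pick a random turn between 180 degrees left (-180) and 180 degrees
   right (+180), inclusive: draw from 0..360, then shift by -180. */
int random_turn_deg(void)
{
    return (rand() % 361) - 180;
}
```

Capping the range at a half-turn in either direction avoids wasted motion: a 270-degree left turn ends at the same heading as a 90-degree right turn.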
UPDATE 9/30/09:
It has become clear that detecting a series of obstacle locations (i.e., single points with x, y coordinates) raises serious questions about how to build a set of “walls” from what are ultimately unrelated obstacle points. For example, there is no way to determine with any certainty that the line segment connecting two obstacle points that are “near” each other actually corresponds to a wall segment.
This led me to make a major adjustment to how the robot collects obstacle data, achieved with only minor changes to the actual code. The robot, as before, moves around its environment until it detects an obstacle. Then, instead of recording just the single obstacle point, the robot turns five degrees clockwise and takes a second ultrasonic distance reading of the obstacle. The idea is that the robot can then build a wall segment from these two points and know with some certainty that there really is a wall between them. The robot then moves on to a new location in its environment.
The code below reflects these changes.
Entire SLAMbot control code: