The DPRG Outdoor Challenges: Obstacles

David P. Anderson


The DPRG outdoor robot contest consists of 4 navigation challenges, the first two of which were discussed in previous posts. The 3rd challenge adds obstacle avoidance to the navigation tasks, and the 4th challenge moves robot navigation and obstacle avoidance into the real world.

Here are some thoughts on a simple obstacle avoidance behavior, and on how the 3rd and 4th challenges might be accomplished.

1. Obstacle avoidance.

The term obstacle avoidance is a bit misleading in this context because the robot is really reacting to sensor detections, not obstacles. An obstacle is a higher-level abstraction which suggests that multiple sensor detections can be grouped together as all belonging to the same conceptual object. That is a difficult and philosophical distinction and, fortunately, not one we need to make in order to guarantee robust avoidance behavior. But it's useful to remember that the robot has a concept of sensor detections but not the higher-level concept of the objects those detections represent, even though that concept seems instinctively obvious to us humans.

2. Obstacle sensing hardware.

There are many different techniques for a robot to sense obstacles. For this example I'll use the IR light sensors of the SR04 indoor robot, and the SONAR array of the jBot outdoor robot. In both cases the hardware is arranged to give the robots two slightly overlapping lobes of detection in front of the robot, left and right, and that is the key.
a. IR

SR04 uses IR emitters/detectors to make two lobes of obstacle detection directly ahead of the robot.

                     _ - | - _
                   /     |     \
                  /      |      \
                  |  L   |  R   |
                  |      |      |
                  \      |      /
                   \     |     /
                    \    |    /
                    ===========    <--- front bumper
                    ||       ||
          L wheel  <||  SR04 ||>  R wheel
                    ||_______||
                      \_____/
                         |
                         V
                     tail wheel

                 SR04 robot, top view.

For SR04 the two lobes are formed with a pair of modulated IR LEDs and a pair of IR detectors from Radio Shack. The LEDs are aimed off-axis from the detectors to produce the shape of the lobes.

Here are a couple of pictures of the installation:
http://www.geology.smu.edu/~dpa-www/robots/sr04/image49.jpg
http://www.geology.smu.edu/~dpa-www/robots/sr04/image46.jpg

And here's a link to a Seattle Robotics article on IR: http://www.seattlerobotics.org/guide/infrared.html

IR detection range for left (L) and right (R) lobes is slightly wider than the robot, from the front bumper out to about 3 robot lengths, depending on the IR reflectivity of the obstacle.

The lobes are shaped so as not to overlap the edges of the robot left and right, so that the robot can go through narrow openings without generating detections.

The sensors are also masked off so as not to detect reflections from the floor. These detectors can see out to about 18 inches, which fits the small scale and low speed of the SR04 robot.
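
As a concrete sketch, if the two detector outputs arrive as bits on an input port, reading them into the left/right detect convention used by the avoidance code in section 4 might look like this. The port address and bit masks here are hypothetical, not SR04's actual wiring:

// A sketch of sampling the two IR detectors into left/right detect
// bits.  The port address and bit assignments are hypothetical;
// substitute whatever your hardware actually provides.

#define IR_PORT  (*(volatile unsigned char *)0x4000)  // hypothetical port
#define IR_LEFT   0x01   // set when the left detector sees a reflection
#define IR_RIGHT  0x02   // set when the right detector sees a reflection

int ir_detect()
{
    unsigned char bits = IR_PORT;       // sample both detectors at once
    int detect = 0;

    if (bits & IR_LEFT)  detect |= 1;   // 1 = left lobe detection
    if (bits & IR_RIGHT) detect |= 2;   // 2 = right lobe detection
    return detect;                      // 3 = both
}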

b. SONAR

SR04 and jBot use SONAR for obstacle detection: SR04 has a stereo pair, and jBot has an array of 4 SONAR detectors that can be used, among other things, to create the same two lobes of detection directly ahead of the robot.

Here is a cartoon of jBot showing the arrangement of sensors and the truncated range of sonar detections used for obstacle avoidance:


  |----------------- 60 degrees -----------|

  \         \         |         /         /
   \         \--------|--------/         / <--- 8 feet
     \        \   L1  |  R1   /        /
       \       \      |      /       /
         \      \     |     /      /
           \-----\    |    /-----/         <--- 2 feet
             \ L2 \   |   / R2 /
               \   \  |  /   /
                 \  \ | /  /
                ||\__\|/__/||
                ||         ||
                    jBot
                ||_________||
                ||         ||

                ||_________||
                ||         ||

            jBot robot, top view.

Each SONAR has a beam width of about 15 degrees and they are mounted angled at 15 degree intervals, for a total frontal coverage of 60 degrees. They are also angled upwards about 10 degrees, to minimize reflections from the ground.

Here are a couple of pictures of the installation:
http://geology.heroy.smu.edu/~dpa-www/jpeg/sonar_01c.jpg
http://geology.heroy.smu.edu/~dpa-www/jpeg/sonar_02c.jpg

The outer-most sonar (L2,R2) are mounted pointing out over the front two wheels, just high enough so that the wheels cannot be seen by the sonar beams when the suspension is fully compressed. The inner pair (L1,R1) are mounted 7.5 degrees off axis from the center line and tilted upward 10 degrees.

Here is a picture showing the coverage:
http://geology.heroy.smu.edu/~dpa-www/jpeg/sonar_03c.jpg

The sonar are mounted on the radius of a 10 inch circle, itself mounted 5 inches from the center of the robot. Sonar readings are also offset by 5 inches so that all measurements reference the center of the robot, same as the odometry and position calculations. This also means that the sonar are mounted about 6 inches back from the front of the robot and, consequently, can detect objects right up to the front bumper.
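
Converting a raw SONAR echo time to a range referenced that way is straightforward: sound covers about one inch of round trip per 148 microseconds. Here is a sketch, with the 5 inch offset included; the names are assumptions, and the sign of the offset depends on the mounting geometry:

// A sketch of converting a raw sonar echo time to a range in inches,
// referenced to the center of the robot.  The 148 us/inch figure is
// the usual round-trip speed of sound; the offset sign is illustrative.

#define US_PER_INCH   148   // round-trip echo time per inch of range
#define SONAR_OFFSET  5     // reference readings to robot center, inches

int sonar_inches(unsigned long echo_us)
{
    return (int)(echo_us / US_PER_INCH) + SONAR_OFFSET;
}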

The sonar sensors can see out to about 32 feet, and the array is used for several different kinds of navigation. For simple obstacle avoidance the robot ignores all returns from the outer two sonar (L2,R2) that are greater than about 2 feet, and all returns from the center two sonar (L1,R1) that are greater than about 8 feet, as illustrated above in the ASCII ART cartoon.

Here is a good tutorial on the SensComp sonar:
http://www.acroname.com/robotics/info/articles/sonar/sonar.html

c. Optical.

There are a variety of methods for sensing obstacles using optical means like laser range finders and video cameras using parallax, stereo imaging, structured light, and so forth.

These methods can be used in the same fashion as the IR and SONAR examples above, by dividing the area directly in front of the robot into left and right zones and testing for the presence of detections within those zones. Those detections are then passed as inputs to the obstacle avoidance software.
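
For example, a scanning laser range finder that returns an array of ranges across the frontal sweep could be reduced to the same two detect bits with something like the following sketch. The sample count, zone depth, and left-to-right ordering are all assumptions:

// A sketch of reducing a frontal laser scan to left/right detect bits.
// NUM_SAMPLES ranges are assumed to sweep left-to-right across the
// frontal zone; all names and values here are assumptions.

#define NUM_SAMPLES  61
#define ZONE_RANGE   96     // detection zone depth, inches

int laser_detect(int range[])
{
    int i, detect = 0;

    for (i = 0; i < NUM_SAMPLES; i++) {
        if (range[i] < ZONE_RANGE) {
            if (i < NUM_SAMPLES/2) detect |= 1;   // left zone
            else                   detect |= 2;   // right zone
        }
    }
    return detect;                                // 3 = both
}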

3. The Avoidance algorithm

The avoidance algorithm for both robots is very similar, and looks something like this:
if (detection == LEFT) slow to half speed and turn right.
if (detection == RIGHT) slow to half speed and turn left.
if (detection == BOTH) slow to 0 and continue turning.
This algorithm works for a differentially steered platform with a zero turning radius. Obstacles detected by one sensor or the other cause the robot to slow and turn away. Obstacles detected by both sensors cause the robot to drive its speed toward 0 and rotate away. For Ackerman steered platforms, the robot needs to monitor for that condition and perform its Ackerman-specific jack-knife turning maneuver at that point.

Which way should the robot turn if an obstacle is seen by both detectors? There are several different common ways of dealing with this situation, from random number generators to complex histories of where the robot has been and where it is going.

A very effective technique in this situation is to continue turning whatever way the robot is already turning.

This is easily detected on an Ackerman steered robot, and on a differentially steered robot one can just test which wheel is going slower based on its current encoder counts.

4. The Avoidance behavior in 'C'

Here's the behavior for a differentially steered platform. The inputs are the sonar readings L1, L2, R1, and R2. The outputs are obstacle_flag, obstacle_speed, and obstacle_turn for the subsumption arbitrator.
void obstacle_avoid()
{
    extern int obstacle_flag, obstacle_speed, obstacle_turn;
    int detect = 0;

    // set sonar detect bits: 1 = left, 2 = right, 3 = both
    if ((L2 < TWO_FEET) || (L1 < EIGHT_FEET)) detect |= 1;
    if ((R2 < TWO_FEET) || (R1 < EIGHT_FEET)) detect |= 2;

    if (detect) {
        obstacle_flag = TRUE;

        if (detect == 1) {                  // left only: slow, turn right
            obstacle_speed = top_speed/2;
            obstacle_turn = OBSTACLE_TURN;
        } else if (detect == 2) {           // right only: slow, turn left
            obstacle_speed = top_speed/2;
            obstacle_turn = -OBSTACLE_TURN;
        } else {                            // both: stop, keep turning
            obstacle_speed = 0;
            if (left_encoder > right_encoder)
                obstacle_turn = -OBSTACLE_TURN;
            else
                obstacle_turn = OBSTACLE_TURN;
        }
    } else {
        obstacle_flag = FALSE;
    }
}

The variable top_speed is set by the user.

OBSTACLE_TURN is a constant that the builder tweaks to produce smooth turns or sharp turns, as preferred.

The obstacle_flag signals the arbitrator that this behavior wants control of the robot at this time. For a more detailed description of the subsumption arbitrator, see the previous post.
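
For completeness, here is one way the constants and globals that the behavior assumes might be defined. The values are placeholders to be tuned per robot, not SR04's or jBot's actual settings:

// Illustrative definitions assumed by obstacle_avoid().  The values
// are placeholders, to be tuned per robot.

#define TRUE           1
#define FALSE          0
#define TWO_FEET       24   // inches, outer sonar truncation range
#define EIGHT_FEET     96   // inches, inner sonar truncation range
#define OBSTACLE_TURN  20   // turn rate, in whatever units the PID expects

int obstacle_flag;          // TRUE when this behavior wants control
int obstacle_speed;         // requested speed while subsuming
int obstacle_turn;          // requested turn rate while subsuming

extern int L1, L2, R1, R2;  // truncated sonar ranges, inches
extern int top_speed;       // user-set cruising speed
extern int left_encoder, right_encoder;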

5. The subsumption loop

The previous post developed a control loop for a simple navigation behavior. We can now add the avoidance behavior to that loop
void subsumption()
{
    while (1) {
        cruise();
        navigate();
        waypoint();
        obstacle_avoid();
        arbitrate();

        tsleep(TDELAY);
    }
}
and be capable of running DPRG Outdoor Challenge #3.

The robot begins its run at the origin (0,0), pointed toward the first target 100 feet away at (0,1200) using coordinates in inches, with some obstacles (probably idle DPRG members) in the way.

The navigate() behavior requests full speed straight ahead and the robot's PID controller accelerates the robot toward the target. Note that in the continuous message form of subsumption, a stream of commands is required from the navigate() behavior to the motor control subsystem to change the motion of the platform. A single detection might cause a temporary blip but won't be able to affect the robot's path substantially. This acts as a noise filter and also smooths the control response.

Somewhere along the way to the target the obstacle detection sensors will see the obstacle and, depending on whether the left detector or the right detector sees the obstacle first, it will issue a command to turn the robot. The obstacle_avoid() behavior has higher priority than navigate(), so the arbitrator passes its commands along to the motor control, and the robot begins to turn, let us say left for this example.

Immediately the navigate() behavior begins to issue commands to turn back to the right, toward the target. But these commands are ignored as long as the obstacle_avoid() behavior is still detecting the obstacle(s) and subsuming the navigation behavior.

At some point the robot has turned far enough to no longer see any detections from the obstacle, and so obstacle_avoid() sets its behavior flag back to FALSE. At that point the arbitrator begins to pass navigate() commands to the motor control, and the robot begins steering back towards the target. This pattern is repeated for each obstacle encountered, with the navigate() behavior pulling the robot toward the target waypoint and the obstacle_avoid() behavior pushing the robot away from intervening obstacles.
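
The arbitrator itself can be as simple as a fixed-priority if/else chain. Here is a minimal sketch; the flag and output names for cruise() and navigate() follow the same pattern as obstacle_avoid() but are otherwise assumptions, as is the motor_command() interface:

// A minimal fixed-priority arbitrator sketch.  Higher-priority
// behaviors that have their flags set subsume the ones below them.

extern int cruise_flag, cruise_speed, cruise_turn;
extern int navigate_flag, navigate_speed, navigate_turn;
extern int obstacle_flag, obstacle_speed, obstacle_turn;

extern void motor_command(int speed, int turn);   // assumed interface

void arbitrate()
{
    if (obstacle_flag)          // highest priority
        motor_command(obstacle_speed, obstacle_turn);
    else if (navigate_flag)
        motor_command(navigate_speed, navigate_turn);
    else if (cruise_flag)       // lowest priority: default behavior
        motor_command(cruise_speed, cruise_turn);
}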

6. Examples.

Here are some examples as mpeg videos that may make the above behavior descriptions more obvious.
a. IR avoidance behavior

First is SR04 running a 10 foot Borenstein UMBMark, with no obstacles (6.2 Mb):
http://www.geology.smu.edu/~dpa-www/robots/mpeg/sr04_outdoor_sq_001x.mpg

Note that the robot decelerates as it approaches each waypoint, as described in the previous post. At each waypoint it performs two behaviors: it rings a little electronic bell, and it rotates to face the next waypoint. Using a quarter as a fiducial reference, SR04 has about 12 inches of error on this run.

Now here is the SR04 robot running that same 10 foot square, this time with obstacles in between some of the waypoints (7.6 Mb):
http://www.geology.smu.edu/~dpa-www/robots/mpeg/sr04_sq_001x.mpg

Note that this combination of obstacle avoidance and navigation can deal with both static and dynamic (i.e., moving) obstacles.

Finally, here is SR04 running a 24 foot out-and-back task, identical in form to the DPRG Outdoor Challenge #3 100 foot task, in the attic of the SMU Heroy building, strewn with old furniture and equipment:
http://www.geology.smu.edu/~dpa-www/robots/mpeg/sr04_ob3.mpg

The SR04 robot has a pair of red LEDs mounted on the left and right side of the robot just above the wheels. These LEDs are turned on when the obstacle avoidance behavior senses a detection. By watching these as the robot navigates through the attic, it is possible to tell when the robot's turning is controlled by obstacle_avoid() (LEDs on) and when it is controlled by the navigate() behavior (LEDs off).
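
Such indicators are trivial to add and well worth it for debugging. Here is a sketch, assuming a hypothetical led() output function:

// Mirror the detect bits onto two debug LEDs.  The led() function
// and the LED numbering are hypothetical.

extern void led(int which, int on);   // assumed: 0 = left, 1 = right

void show_detections(int detect)
{
    led(0, (detect & 1) != 0);   // left LED on while left lobe detects
    led(1, (detect & 2) != 0);   // right LED on while right lobe detects
}
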
b. SONAR avoidance behavior

jBot uses its SONAR array for several different tasks. In addition to obstacle avoidance it also uses the SONAR readings for wall following and for escape maneuvers. In the following videos those behaviors are disabled, and the robot's control regime is very similar to the SR04 robot examples above.

First, here is jBot running a 20 foot version of challenge #1, out-and-back, over broken ground. The robot has a slight mod of the waypoint navigation algorithm described previously, in that it can reload the waypoint list at the end of the run in order to run the same sequence repeatedly. That's what's happening here, as jBot drives back and forth between two waypoints and tries to maintain its location and orientation. Note that in this case the robot does not slow to a stop or take other actions at the waypoints (5 Mb):
http://www.geology.smu.edu/~dpa-www/robo/jbot/jbot_rough_01.mpg

Here is the same out-and-back pattern, run repeatedly, this time over 100 feet as in the DPRG Outdoor Challenge #3, but with many obstacles in between the two waypoints (12 Mb):
http://www.geology.smu.edu/~dpa-www/robo/jbot/jbot_gardenx.mpg

Here's a similar challenge in the Colorado Rockies, where the robot navigates 500 feet through the forest and back (9.3 Mb):
http://www.geology.smu.edu/~dpa-www/robo/jbot/jbot2/jbot_rock2a.mpg

And in this final video, the robot navigates 500 feet through the woods at Lake Dallas, avoiding trees and trashcans and picnic tables, and returns within inches of the origin, all without GPS (21 Mb):
http://www.geology.smu.edu/~dpa-www/robo/jbot/jbot_hatrick2_2.mpg

7. Challenge #4

Challenge #4 is to seek out a distant waypoint and back, on the grounds at Fair Park. At this stage the robots will probably need some shepherding by human handlers to keep them out of the ponds and away from delicate flower beds (jBot has already transgressed in this particular area and it wasn't a pretty sight). However, if the robots have good obstacle avoidance behaviors, this is just a matter of chasing them away from difficulties. It's our assumption that as the robots improve and advance they will require less and less shepherding. But it's allowed at this time, and even encouraged.

A couple of other behaviors might also be required for successful completion of challenge #4. One is an escape() behavior that can sense and respond appropriately when the robot collides with an obstacle or becomes physically stuck, high-centered, etc. This must be a high priority behavior. Another is a behavior that can monitor for and recover from the concave singularity obstacle trap discussed in a previous post.
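
As a sketch of the sensing half of such an escape() behavior: one common approach is to compare the commanded speed against actual encoder progress, and trip a flag when the robot has been told to move but isn't moving. The names and thresholds below are assumptions, not jBot's actual code:

#include <stdlib.h>   // for abs()

// A sketch of simple stuck detection: the robot is commanded to move
// but the encoders show no progress.  Thresholds are to be tuned.

#define STUCK_CYCLES  50   // consecutive loop cycles before tripping
#define MIN_PROGRESS  2    // encoder ticks per cycle that count as motion

extern int commanded_speed;   // speed currently requested of the PID
extern int left_encoder, right_encoder;

int robot_is_stuck()
{
    static int last_left, last_right, stuck_count;
    int moved = abs(left_encoder - last_left) +
                abs(right_encoder - last_right);

    last_left  = left_encoder;
    last_right = right_encoder;

    if (commanded_speed != 0 && moved < MIN_PROGRESS)
        stuck_count++;        // commanded to move, but not moving
    else
        stuck_count = 0;

    return (stuck_count > STUCK_CYCLES);
}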

jBot switches into a perimeter following mode using its SONAR array looking out to about 20 feet, in order to deal with large concave geometries. That behavior has an intermediate priority between the target seeking and obstacle avoidance behaviors. The whole thing might look like this:
void subsumption()
{
    while (1) {
        cruise();
        navigate();
        waypoint();
        perimeter_follow();
        obstacle_avoid();
        collision_recovery();
        arbitrate();

        msleep(DELAY);
    }
}

The navigate() target seeking behavior can be subsumed by the perimeter or wall-following behavior when that behavior determines that the robot is trapped in a concave space. That behavior is in turn subsumed by the obstacle avoidance if any obstacles are encountered, and both are subsumed by the collision recovery behavior if it is determined that the robot is physically stuck or has had a collision. More detail on these behaviors is available on jBot's webpage.

8. Grand Adventuring

Once your robot is able to complete challenges #3 and #4, navigating to a waypoint and back with obstacles in between, it's time to turn it loose and see how it survives in the real world. If the out-and-back or Borenstein UMBMark sequences have been hard-coded into the robot, you can set your robot down at any arbitrary location and send it off adventuring. This is where it really gets to be interesting. And fun.

I often find myself giving the robot purposefully difficult tasks, like seeking a waypoint behind a big pile of old pipes and broken concrete, just to see how well the robot can handle it. The world is your playground.

Another nifty little function allows jBot to mark the GPS location of the car when I take the robot out to play. Then at the end of the day when I'm tired and the robot has become unaccountably heavy, it can seek back to the car on its own, without me having to carry it. I only have to follow along behind. It even finds the car for me if I've forgotten where we parked.

Those builders interested in pursuing the Seattle Robotics Society Robomagellan Contest at this point only need to add some means -- like a CMUCam -- of identifying and acquiring an orange traffic cone when the robot arrives at the waypoints, and add that behavior to the subsumption control loop.

In fact, some experiments I've run with jBot suggest that traffic cones can be recognized by their shape with SONAR alone. This is somewhat similar to the way that SR04 recognizes soda cans with SONAR alone for the DPRG CanCan contest, which also required the addition of a single subsumption behavior: http://www.geology.smu.edu/~dpa-www/robots/mpeg/cancan.mpg

At any rate, once your robot can do these four exercises you will hopefully have a platform robust enough for all sorts of sensors, cameras, grippers, and grand adventures in outdoor robotry.

Happy Roboting!
dpa





Rev. 1.1
Copyright (c) 2008 David P. Anderson. Verbatim copying and distribution of this entire article are permitted worldwide, without royalty, in any medium, provided this notice is preserved.