The DPRG Outdoor Challenges: Navigation

David P. Anderson

I sat down to write a follow-up to the post on the DPRG outdoor robot challenges #1 and #2, this time on the subject of obstacle avoidance for challenges #3 and #4. However, it quickly became apparent that obstacle avoidance, as implemented on my robots, requires a navigation behavior first in order to explain what's going on.

So I thought it might be useful at this time to have a code example of a simple robot target-seeking navigation behavior, using the math routines previously posted at math.html to illustrate how all this can be used to control a robot.

The previous post, as well as those from Dean and Kenneth and Robert and others, has been working toward a solution to accurate position sensing: how can the robot know its location?

This navigation algorithm addresses the other half of the problem, how to take the output of the location sensing and use it to drive the robot to a target coordinate.

1. Coordinates

This code uses the robot's location in 2D Cartesian space at X,Y, as maintained by the robot's location sensing code, to seek toward a target location at X_target, Y_target.

The robot comes out of reset at the X,Y Cartesian coordinate of 0,0 and with a heading angle, or theta, of 0. Positive X is right and positive Y is straight ahead. Positive rotations are clockwise from the Y axis.

                     +Y
                 |   |   |
      -X ------------+----------------- +X
                 |   |   |
                  \ _|_ /   <--- a robot sitting at origin 0,0.

I use a coordinate system in inches, such that a point 100 feet directly ahead of the robot at origin (0,0) is at location (0,1200), and a point 50 feet directly to the left is at location (-600,0).

2. Location Sensing

For this behavior we assume a separate task is running, perhaps interrupt driven or even from a separate processor, that is constantly updating the robot's location in X,Y, and theta, as the robot moves.

// Navigation globals updated by a location sensing task,
// and read by the navigation task(s).

float X,Y, theta;

This sensing task might be odometry or GPS, or something Chris and Dean are working on with a Kalman filter, or some sort of network of radio or IR beacons, cameras, whatever.

For this behavior we'll just call it location sensing and assume that it works and gives us a constantly updated X,Y and theta. The odometry code fragments 1a and 1b in the previous math post perform this calculation.

Those data are then used by the locate_target() code, also listed in the previous post, to calculate the distance and angle to the next target waypoint location at X_target,Y_target.

// Coordinates of current waypoint target
float X_target, Y_target;

// Globals maintained by locate_target()
float target_angle, target_distance, last_target_distance;

These values are used in turn as sensor inputs for a simple closed-loop, or servoing, navigation behavior.

3. The navigation algorithm

The navigation algorithm for steering is pretty simple, and uses the target_angle calculated by locate_target():

      if target_angle is left of the robot's heading, turn left,
      else if target_angle is right of the robot's heading, turn right.

An improved method might use the angle to the target to scale the radius of the turn. But this method works quite well and is easy to adjust.
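That improved method might look like the following sketch: scale the turn command by the heading error and clamp it. The GAIN constant is a made-up tuning value, and the NAVIGATION_TURN and DEADZONE constants anticipate the ones defined in the code below.

```c
#include <math.h>

#define NAVIGATION_TURN 12     /* maximum turn command */
#define DEADZONE 5             /* steering tolerance near 0 */
#define GAIN 0.25f             /* error-to-turn scaling, assumed */

/* Proportional steering: turn harder for larger heading errors,
   clamped to +/- NAVIGATION_TURN. */
int proportional_turn(float target_angle)
{
    float turn;

    if (fabsf(target_angle) < DEADZONE)
        return 0;

    turn = GAIN * target_angle;
    if (turn >  NAVIGATION_TURN) turn =  NAVIGATION_TURN;
    if (turn < -NAVIGATION_TURN) turn = -NAVIGATION_TURN;
    return (int)turn;
}
```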

The navigation algorithm for speed is almost as simple, and uses the target_distance calculated by locate_target():

      if target is here then stop,
      else if target is close then slow down,
      else full speed.

Those two functions form the output of the navigate() behavior. They are read by a subsumption arbiter and passed along to the motor control when navigate() is the highest priority behavior asserting its flag, at 20 Hz.

4. Arriving at a waypoint.

How does the robot know when it has arrived at a waypoint?

The locate_target() code maintains the variable target_distance that is the current distance from the center of the robot to the waypoint target. So,
      a. if (target_distance == 0) then we've arrived.
Expecting this value to become zero is not realistic. We can't steer the robot that precisely. So,
      b. if (target_distance < TARGET_ERR)  then we've arrived.
Where the constant TARGET_ERR is a small distance which represents the radius of a circle around the target waypoint: the target error circle.

               - -
             /     \
            |   +   |    <--- waypoint at center of circle
             \     /          with TARGET_ERR radius
               - -

The robot has arrived at the waypoint when it crosses the perimeter of that circle. So the larger the circle, the easier it is to hit, but the further away the robot is from the actual target waypoint location.

This code uses a two-stage test that includes a measure of whether the robot is still closing on the target center. The robot can know this by saving the last target_distance that it calculates each time locate_target() calculates a new target_distance.
      c.  if ((target_distance < TARGET_ERR) 
              && (target_distance > last_target_distance))
                  then we have arrived.
Once the robot has crossed the perimeter of the circle, it continues steering toward the center, but as soon as the target_distance is greater than the last_target_distance, the robot is heading away from the target center. That means either the robot has 1) passed over the target center or, 2) turned away from the target center to avoid an obstacle. In either case, the robot can declare it has arrived at the waypoint.

We still need the target error circle itself as a limit because outside that circle the robot may well drive away from the target when responding to obstacles, while still on its way to the target.

To summarize: if the robot is within the target error circle but heading away from the target center, we've arrived.

Using this test for arrival at the waypoint allows the target error circle to be large while still allowing the robot to acquire the center of the target if it can. It also handles the very real problem of what to do when the robot can't actually reach the target center, which might well be inside of a wall, tree stump, traffic cone, or mischievous human.
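The two-stage arrival test can be written as a small predicate; the TARGET_ERR value here is an assumption, in inches, matching the 3-foot circle used in the code below.

```c
#define TARGET_ERR 36    /* radius of the target error circle, 3 feet */

/* TRUE when the robot is inside the error circle but no longer
   closing on the target center. */
int arrived(float target_distance, float last_target_distance)
{
    return (target_distance < TARGET_ERR)
        && (target_distance > last_target_distance);
}
```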

5. The navigation behavior in 'C'

// constants that control the navigation behavior
#define TARGET_CIRCLE 36    // error circle, 3 feet
#define NAVIGATION_TURN 12  // turn size, adjust to taste
#define TARGET_CLOSE 96     // slow down within 8 feet of target
#define DEADZONE 5          // steering tolerance near 0

// navigate() target seeking behavior.  Run this from the 20 Hz
// subsumption control loop.

void navigate()
{
     // target == TRUE when actively seeking a target.
     // Set by waypoint(), which assigns the X_target,Y_target coords,
     // reset by this behavior when we arrive at a target.
     extern int target;

     // slowdown == TRUE: robot slows down when approaching the target.
     // Set by the user, read by this behavior.
     extern int slowdown;

     // outputs of locate_target()
     extern float target_distance, last_target_distance;
     extern float target_angle;

     // top speed, set by the user
     extern int top_speed;

     // outputs of this behavior, for the subsumption arbiter
     extern int navigation_flag, navigation_speed, navigation_turn;

     if (target) {

         // calculate target_distance and target_angle
         locate_target();

         // have we arrived at the target?
         if ((target_distance < TARGET_CIRCLE)
             && (target_distance > last_target_distance)) {

             // yes!  signal arrival and disable behavior
             target = 0;
             navigation_flag = FALSE;

         } else {

             // no, still seeking target, signal the arbiter
             navigation_flag = TRUE;

             // steer toward target
             if (fabs(target_angle) < DEADZONE) {
                 navigation_turn = 0;
             } else if (target_angle < 0) {
                 navigation_turn = -NAVIGATION_TURN;
             } else {
                 navigation_turn = NAVIGATION_TURN;
             }

             // slow down when close to target
             if ((slowdown == TRUE)
                 && (target_distance < TARGET_CLOSE)) {
                 navigation_speed = top_speed / 2;  // reduced speed, adjust to taste
             } else {
                 navigation_speed = top_speed;
             }
         }
     } else {
         navigation_flag = FALSE;   // no target, turn behavior off
     }
}

The variable top_speed is set by the user.

NAVIGATION_TURN is a constant that the builder has to tweak to get smooth turns --- or sharp turns, whatever is your favorite robot aesthetic.

DEADZONE is a small range around zero of target_angle, to prevent the robot from oscillating back and forth and hunting the target. A better solution for this is to steer the robot with a separate PID controller, which can then dampen out oscillations around the set point without giving up precision. But the dead-band technique works quite well to accomplish a similar result.
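As a sketch of that alternative, a simple PID steering controller run at the 20 Hz loop rate might look like this; the gains KP, KI, KD and the return scaling are made-up tuning values, not tested numbers.

```c
/* A minimal PID steering sketch, called at 20 Hz with the current
   heading error in degrees.  Gains are illustration-only assumptions. */
float pid_turn(float target_angle)
{
    static float integral = 0.0f;
    static float last_error = 0.0f;

    const float KP = 0.3f, KI = 0.01f, KD = 0.1f;
    const float DT = 0.05f;                  /* 20 Hz time slice */

    float derivative = (target_angle - last_error) / DT;

    integral += target_angle * DT;
    last_error = target_angle;

    return KP * target_angle + KI * integral + KD * derivative;
}
```

In practice the integral term also needs anti-windup clamping, and the output would be limited to the platform's maximum turn rate.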

6. Waypoint Actions

The target flag used above is monitored by a separate waypoint() task to determine when navigate() has successfully acquired its target location. When target goes to zero, this behavior can choose to do several things.

It can fetch a new X_target, Y_target coordinate pair from the waypoint list, and set target back to TRUE. That starts the robot on the search for the next waypoint.

It can also perform an action. It might calibrate its position, rotate toward the next waypoint, or attempt to acquire and touch an orange traffic cone. If it is the SR04 robot and it is holding an empty soda can, it might try to set it down. waypoint() is a higher priority task than the navigation behavior, so it can subsume that behavior at waypoints when needed.

void waypoint()
{
    // Global target == TRUE if actively seeking a waypoint.
    // Set by this behavior, reset by navigate().
    extern int target;

    // pointer to list of waypoint X,Y coordinates.
    extern int *list_pointer, *list_end;

    // current waypoint coordinates, read by locate_target()
    extern float X_target, Y_target;

    // TRUE if actions are enabled at waypoints, set by the user
    extern int actions_enabled;

    // outputs of this behavior, to the arbiter for actions
    extern int waypoint_flag, waypoint_speed, waypoint_turn;

    // disable the outputs for this task
    waypoint_flag = FALSE;

    // if navigate() has arrived at the waypoint target,
    if (target == FALSE) {

        // get the next target from the waypoint list
        if (list_pointer < list_end) {
            X_target = *list_pointer++;
            Y_target = *list_pointer++;

            // and set target to true
            target = TRUE;
        }

        // Put actions that need to happen at waypoints here.
        if (actions_enabled) {
            waypoint_flag = TRUE;

            // rotate toward next target
            // and/or do calibration
            // and/or seek traffic cone
            // and/or put down soda can
            // and/or etc
            // using waypoint_speed and waypoint_turn

        } else {
            waypoint_flag = FALSE;
        }
    }
}

7. Subsumption control loop

The navigate() and waypoint() behaviors are two of several that contend for control of the robot through the mechanism of subsumption. A subsumption control loop for these behaviors and a few others might look like this:
void subsumption()
{
   while (1) {
       cruise();               // a.
       navigate();             // b.
       waypoint();             // c.
       obstacle_avoid();       // d.
       collision_recovery();   // e.
       arbitrate();            // f.
       tsleep(DELAY);          // g.
   }
}

a. cruise() is a behavior that just tries to ramp the robot velocity and steering to full speed, straight ahead. Alternately it can be set to bring the robot to a stop. For waypoint navigation, it brings the robot to a stop, so when the waypoint list is completed by navigate(), the robot halts. It is the lowest priority behavior.

b. navigate() is our target-seeking behavior, the next to lowest priority.

c. waypoint() manages the waypoint list and can subsume navigate() when the robot arrives at waypoints.

d. obstacle_avoid() is a behavior that steers the robot away from obstacles and can subsume everything but collision recovery. More on this anon...

e. collision_recovery() is a behavior that handles collisions and can attempt escapes from being stuck, high-centered, etc. It is highest priority in this particular scheme.

f. arbitrate() does the actual subsumption control switching. It chooses the highest priority task that is asserting its flag and passes that behavior's speed and turn arguments to the motor control subsystem.

g. tsleep(DELAY) is a delay that causes the control loop to run at a fixed rate, 20 Hz for my jBot outdoor robot. This is actually part of a cooperative multi-tasking scheduler that suspends the subsumption task while other tasks run, but that is not important for this example. The tsleep(50) function suspends the task for the remainder of a 50 millisecond time slice.

This is all implemented with the continuous message form of subsumption. There's a more detailed description of how these behaviors work together using the subsumption control method in the previously cited paper.
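For illustration, here is a sketch of arbitrate() with just three of those behaviors; the struct layout and the drive() motor-control stub are assumptions, and a full version would include the obstacle avoidance and collision recovery entries as well.

```c
/* One entry per behavior, listed highest priority first. */
struct behavior {
    int *flag, *speed, *turn;
};

int waypoint_flag,   waypoint_speed,   waypoint_turn;
int navigation_flag, navigation_speed, navigation_turn;
int cruise_flag = 1, cruise_speed = 0, cruise_turn = 0;  /* cruise: stop */

static int motor_speed, motor_turn;

/* stand-in for the motor control subsystem */
static void drive(int speed, int turn)
{
    motor_speed = speed;
    motor_turn  = turn;
}

static struct behavior priority[] = {
    { &waypoint_flag,   &waypoint_speed,   &waypoint_turn   },
    { &navigation_flag, &navigation_speed, &navigation_turn },
    { &cruise_flag,     &cruise_speed,     &cruise_turn     },
};

/* Pass the highest-priority asserted behavior to the motors. */
void arbitrate(void)
{
    int i;
    for (i = 0; i < 3; i++) {
        if (*priority[i].flag) {
            drive(*priority[i].speed, *priority[i].turn);
            return;
        }
    }
}
```

Because cruise() always asserts its flag, the loop always finds at least one winner, and the robot halts by default when no higher behavior wants control.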

8. Performance

DPRG outdoor robot challenge #1.
For DPRG outdoor challenge #1, the robot has a waypoint list consisting of two points, the target (0,1200) and the origin (0,0).

When the robot begins the exercise it is sitting at the origin pointed at the first target waypoint. The navigate() behavior requests full speed straight ahead and the robot's PID controller slews the platform velocity from 0 to top_speed, and the robot drives toward the target waypoint.

What happens when the robot arrives at the waypoint depends on whether the user has set the slowdown flag.

If the slowdown flag is not set, the robot will drive straight through the waypoint, switching to the next waypoint just as the center of the robot passes over the target. It will then immediately begin turning around to point back at the origin, which is the next waypoint, as steered by navigate().

You can tweak the size of that turn with the NAVIGATION_TURN constant. You might even scale the turn by the target_angle, so the robot turns more sharply for larger angle errors (although in my experience that produces jerky maneuvering).

Note that at this point the robot will be some distance left or right of the target waypoint, depending on which way it turned around. So the path back to the origin will not be the same as the path out, but rather offset from and angled to that path. That is, the robot will not try to retrace its original path, but rather will choose the shortest route to the next waypoint.

If slowdown = TRUE the robot will decelerate as it arrives at the waypoint and come to a stop, then fetch the next waypoint from the list, potentially rotate toward the new target or take some other action(s), and only then begin driving toward the next waypoint. If the robot is a differentially steered platform with zero turning radius, or an Ackerman-steered platform performing its jack-knife maneuver, then the robot actually can drive back along the original path to (0,0).

When the robot arrives at (0,0) the waypoint list is complete, so the waypoint() behavior sets its waypoint_flag to FALSE; and there is no target for navigate(), so its flag is also FALSE; and so the subsumption control falls through to cruise(), which halts the robot.
DPRG outdoor robot challenge #2.
With the navigate() behavior described in this post, a robot should be able to follow an arbitrary list of waypoints, steering smoothly from one to the next as each is acquired in turn.

If that waypoint list consists of the four corners of a 100 foot square, then that list of waypoints is our challenge #2, the Borenstein UMBMark.
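Assuming the flat array of X,Y pairs that waypoint() reads, the challenge #2 list might look like this; the corner order here (a clockwise square) is one arbitrary choice.

```c
/* 100-foot square in inches: out, around, and back to the origin. */
int waypoints[] = {
       0, 1200,      /* first corner, 100 feet straight ahead */
    1200, 1200,      /* second corner, 100 feet to the right  */
    1200,    0,      /* third corner                          */
       0,    0,      /* back to the origin                    */
};

int *list_pointer = waypoints;
int *list_end     = waypoints + sizeof(waypoints) / sizeof(waypoints[0]);
```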

As before, with the slowdown flag set to zero, the robot will drive through each waypoint before curving to face the next, so it will not be driving a "true" square. With the slowdown flag TRUE, the robot will halt at each waypoint and can perform an action if desired, including rotating or jack-knifing to face the next waypoint, in order to drive a true square. Either technique is ok. The only metric that matters for the UMBMark is the distance from the final stopping place to the origin at (0,0).

With the slowdown flag set to FALSE, the robot won't stop exactly on the origin for the final measurement. However, if the robot is tracking its own location, it will know that it is not at the origin. Borenstein's UMBMark requires that we record both the actual distance from stopping place to origin and also the robot's calculation of its own location, so that driving/steering errors as well as location sensing errors are taken into account.

If you have an onboard LCD or display of some kind you can display the robot's X,Y and theta there. Otherwise you might need to download them to a laptop at the end of each run.

9. Looking forward to challenges #3 and #4: obstacle avoidance.

The navigate() behavior described here will work seamlessly with obstacle avoidance and other higher priority behaviors that disturb the robot's path toward its target.

Happy Roboting,


Copyright (c) 2008 David P. Anderson. Verbatim copying and distribution of this entire article are permitted worldwide, without royalty, in any medium, provided this notice is preserved.