The DPRG Outdoor Challenges: #1 and #2
David P. Anderson
Howdy,
The purpose of the DPRG outdoor robot contests is to encourage
the development of autonomous, outdoor robots that can navigate
distances in arbitrary environments and survive on their own
without human intervention.
The contest is broken into four challenges, which are meant to be
incremental steps toward that ultimate goal. These are:
1) Drive far away and back and stop.
2) Drive around a big square and stop.
3) Do #1 with obstacles in the way.
4) Do #1 with Fair Park in the way.
Here are some thoughts on ways one might accomplish the first
two of these challenges.
The first two challenges are run on a flat, level, obstacle-free,
re-bar-free asphalt parking lot with no nearby buildings and an
unobscured view of the sky. Doesn't get much better than that.
Think of this like the best-case scenario for evaluating the
robot's navigation capabilities.
Parking lots are nice because the painted stripes can be used
like an enormous piece of graph paper for testing the robots. And
most people have local access to a big one, at a grocery store
or shopping mall, for testing and development in preparation for
the contest.
I think these first two challenges are like scales and arpeggios
for a musician. They are simple to do, so the entry level
skill required is not high, but they are extremely hard to do well,
so one can endlessly improve them.
The first challenge is to drive some distance away (100 feet)
and back, and stop. The distance from the stopping place
to the starting place is then measured and recorded. You'd
like that distance to be zero.
It doesn't really matter where the robot goes, as long as it
goes far enough away to make finding its way back a challenge.
So saying, "Go down to the far end of the parking lot, past that
lamp post," is good enough, if the starting place is marked.
It's usually easy to find the conveniently-placed intersection
of a couple of paint stripes as the starting mark on the parking
lot "graph paper."
1. Challenge #1 --- Timing
The simplest way that a robot can run this exercise is with
pure timing and no other sensors, encoders, etc. That's how
jBot first ran it early in its development, when the chassis and
suspension were still being adjusted and aligned, and it had no
other sensors.
If you are building a new robot (perhaps for a University class
taught by Randy :) this might be a good first goal.
The robot turns on its motor(s), starts a timer, and drives for a
certain period of time; then it either turns around, or just reverses
the wheels, and drives backwards for the same period of time, and
stops. If everything is working, and we drive more or less straight,
the robot should end up back near the starting place.
And if we tweak the timing and maybe the mechanics of the platform,
we should be able to get very close to the target, on this surface
anyway.
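As a sketch, the timing-only run might look like the C fragment below.
The motor interface here is hypothetical and stubbed out (the stubs just
integrate a simulated position, so the logic can be checked); a real
robot would command an h-bridge and sleep.

```c
/* Challenge #1 by timing alone.  set_motors() and drive_ms() are
   hypothetical stand-ins; the stubs integrate a simulated position
   so the sketch is self-checking. */
static double position = 0.0;   /* simulated distance from start, inches */
static double speed    = 0.0;   /* simulated speed, inches per ms        */

static void set_motors(double s) { speed = s; }

static void drive_ms(long ms)   /* stand-in for "sleep while driving" */
{
    position += speed * ms;
}

/* Drive out for a fixed time, reverse the wheels, drive back for
   the same time, and stop. */
static void out_and_back(long leg_ms)
{
    set_motors(0.12);           /* forward, at some tuned speed */
    drive_ms(leg_ms);
    set_motors(-0.12);          /* just reverse the wheels      */
    drive_ms(leg_ms);
    set_motors(0.0);            /* and stop                     */
}
```

If the robot really drives straight and both legs take the same time,
it ends up back at the start; in practice, tweaking the timing and the
mechanics is the whole game.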
So, what has this accomplished?
Seasoned robot builders will tell you that getting to this
stage is most of the battle. To successfully run this first
simple exercise, the robot builder must have some sort of functioning
robot platform with wheels and motors and batteries attached,
all mechanically and electrically sound; some sort of h-bridge or
other means of controlling the motors from a micro-controller;
a micro-controller with the necessary I/O all wired up and working;
a software development environment set up and working, with the
ability to connect to the robot and download code; and a robust
enough implementation that it can run for 10 or 15 minutes without
crashing, resetting, coming apart, or having motor EMF spikes
brown-out the cpu :>)
This exercise also identifies mechanical and control problems with the
platform, if it is veering left or right or having trouble holding a
straight line. None of this is trivial stuff, even if the exercise
itself seems trivial. Any robot that gets to this stage surely
deserves a gold star.
2. Challenge #2 --- Timing
The second exercise, driving around a large square, requires a bit
more sophisticated navigation scheme, but it also can be done using
just timing alone. If you add the ability to turn at a fixed rate,
for a fixed time, and then tweak the rate and time until you get
nice 90 degree turns on this surface, you can make a pretty good
square with just timing alone, no other sensors.
The robot sets a timer and drives for a while, resets the timer and
turns for a while, resets the timer and drives for a while, etc,
until it arrives back at the start. But this is very difficult to do.
3. Timing + compass or rate gyro.
Probably the next step up from simple timing is to add a compass
or rate gyro for steering. We can get away with a simple compass
because we are on a flat, level, asphalt parking lot with no re-bar.
This probably won't work for challenges 3 and especially 4, but we
take baby steps...
To steer in any direction we just add an offset to the compass
reading and try to steer the robot in such a way as to keep the
compass heading constant, steering right when it rotates clockwise
and left when it rotates counter-clockwise.
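A minimal version of that steering idea in C. The error must be wrapped
into -180..180 degrees so the robot always takes the short way around;
the gain KP is an assumed tuning constant, not a number from any
particular robot.

```c
/* Signed difference between desired and actual compass headings,
   in degrees, wrapped into -180..180.  Positive error means steer
   right (clockwise). */
static int heading_error(int desired, int actual)
{
    int err = desired - actual;
    while (err > 180)  err -= 360;
    while (err < -180) err += 360;
    return err;
}

/* Proportional steering correction, added to one wheel's speed and
   subtracted from the other.  KP is a made-up tuning constant. */
#define KP 4
static int steer_correction(int desired, int actual)
{
    return KP * heading_error(desired, actual);
}
```

Steering in any direction is then just calling this with a different
desired heading, which is exactly the "add an offset" trick above.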
The compass will improve the accuracy of the robot's ability to
maintain a heading for exercise 1, and to determine when the robot
has turned 90 degrees, irrespective of tire slippage, surface
friction, battery level, etc, for exercise 2. The compass and
timing combined with knowledge of the robot's speed are all that
is needed for classical dead reckoning.
Compasses, of course, must be kept level, and are subject to large
swings caused by local magnetic fields, so this is not a general
solution. However, it's a good starting place, and the software
developed for steering should be directly applicable when you upgrade
to a different sensing technique.
In my experience, smooth and accurate steering is not easy to achieve,
even with perfect sensors, so this is an area that can be greatly
refined. The addition of a rate gyro to help stabilize the compass
can also be used to improve navigation by timing alone, or even used
for improved steering on its own, without a compass.
4. Wheel Encoders and Odometry
We can replace the timing information with direct measurements of the
wheel rotation through the use of shaft encoders. This tells how
much the wheels have rotated and, assuming no slippage, how far over
the ground the robot has traveled. This can be used in two ways.
a. Using encoder counts in place of timer ticks.
The robot drives for a certain number of encoder counts, rather
than for a certain number of timer ticks, and turns for a certain
number of counts rather than ticks. This corrects, for example,
for sagging voltage from a draining battery, and makes the robot
behavior independent of its speed and, to a certain extent,
independent of the type of surface on which it's driving.
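Either way, converting raw counts to distance over the ground is just
wheel geometry. A one-liner, with made-up numbers for the wheel size
and encoder resolution:

```c
/* Encoder counts to distance: wheel circumference divided by counts
   per wheel revolution gives inches per count.  Both numbers below
   are hypothetical -- measure your own robot. */
#define WHEEL_DIAMETER   4.0       /* inches                           */
#define COUNTS_PER_REV   1024.0    /* encoder counts per wheel rev     */
#define PI 3.14159265358979323846

static double counts_to_inches(long counts)
{
    return counts * (PI * WHEEL_DIAMETER) / COUNTS_PER_REV;
}
```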
b. Using encoder counts and robot geometry for odometry.
The robot can run odometry calculations and locate itself in
2D Cartesian space, and actively seek out another target location
in Cartesian space. This is how my LegoBot and SR04 robots do
navigation, and is the foundation of jBot's navigation abilities
as well.
With the odometry approach the robot is no longer limited to straight
line paths between waypoints, but can maneuver around obstacles,
for example, while still tracking its own location and seeking
towards a target location. It's pretty cool to watch.
For the second challenge, driving clockwise and counter-clockwise
around a large (100') square, the odometry approach of tracking the
robot's location in 2D Cartesian space, and seeking on a target also
in Cartesian space, is a much simpler and more robust method of
navigating than trying to drive straight lines and do perfect 90
degree turns. Instead the robot has a list of location coordinates
like this:
{0,1200} {1200,1200} {1200,0} {0,0}
which are the four X,Y target locations in inches for a 100 foot
clockwise square, with an origin at the robot's starting location.
The robot seeks each target on the list in turn, stopping when it
reaches the end of the list.
This second challenge is really J. Borenstein's UMBmark, which was
designed for odometry calibration, used as a contest. So the
results of the contest can also be applied directly to calibrate
the robot's navigation errors.
5. Wheel encoders + compass or Inertial Measurement Unit (IMU)
The next step up might be for the robot to use a hybrid of wheel
odometry and compass/gyro direction to locate itself and its target.
This is the method that jBot uses, with the compass replaced by a
full 3-axis IMU. The IMU works in jostling, non-level offroad
environments and is relatively impervious to local magnetic
fields from Big Iron Things, buried pipes, re-bar, etc. With it,
jBot can travel distances of several thousand feet and return within
a few feet of the starting location.
But even without an expensive IMU, a compass-equipped robot with good
wheel odometry, maybe aided with a rate gyro, should be able to
achieve robust location and navigation performance on a flat and
level re-bar-free asphalt parking lot. It would be interesting to
see how such a robot performs on uneven ground as well --- we might
be pleasantly surprised.
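One common way to blend a fast-but-drifting rate gyro with a
slow-but-absolute compass is a complementary filter, sketched below.
This is a generic technique, not jBot's actual IMU math, and ALPHA is
an assumed tuning constant.

```c
#define ALPHA 0.98   /* trust in the gyro over one time step (assumed) */

/* One filter step: integrate the gyro rate for a quick heading
   estimate, then pull it gently toward the compass so gyro drift
   can't accumulate.  Headings in degrees, rate in degrees/sec. */
static double fused_heading(double prev, double gyro_rate, double dt,
                            double compass)
{
    double gyro_est = prev + gyro_rate * dt;            /* fast, drifts   */
    return ALPHA * gyro_est + (1.0 - ALPHA) * compass;  /* slow, absolute */
}
```

Note this naive blend misbehaves near the 0/360 wrap; real code wraps
the gyro/compass difference into -180..180 before blending.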
6. GPS
The robot can use GPS alone for navigation, and probably get accuracy
of around 20 feet, depending on how much money you want to spend.
This works well in the Fair Park parking lots with a great view of
the sky, but can't really be trusted in the Park itself (or in a
high mountain forest), again, depending on your budget. Some other
method of tracking location in between the GPS updates must be
deployed: timing, odometry, or other (?).
Timing + GPS might do pretty well, especially if you can get useful
heading data from the GPS often enough, and you wouldn't then need
any other sensors. Might be worth exploring.
7. GPS + other
GPS can be combined with timing, compass (some have them built-in),
IMU, odometry, etc. Chris was working on a Kalman filter that would
essentially combine them all, which is probably the ideal.
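Not Chris's filter, but the scalar (1D) version of the idea shows the
predict/correct shape: odometry moves the estimate and grows its
uncertainty, and a GPS fix pulls it back in proportion to the relative
variances. All names and numbers below are illustrative; a real filter
would track 2D state and cross-covariances.

```c
struct est { double pos; double var; };   /* position estimate + variance */

/* One predict/correct cycle of a 1D Kalman-style filter. */
static struct est kalman_step(struct est prior, double odo_delta,
                              double odo_var, double gps, double gps_var)
{
    /* predict: move by the odometry delta, uncertainty grows */
    double pos = prior.pos + odo_delta;
    double var = prior.var + odo_var;

    /* correct: blend in the GPS fix by relative confidence */
    double k = var / (var + gps_var);     /* Kalman gain */
    struct est post = { pos + k * (gps - pos), (1.0 - k) * var };
    return post;
}
```

Between GPS fixes the filter just runs predictions on odometry, which
is exactly the "tracking location in between the GPS updates" problem
mentioned above.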
On the jBot robot, each of these control techniques can be selectively
disabled at run time, so the robot can run with timing alone, with
odometry alone, with odometry + IMU, and with odometry + IMU + GPS,
and compare the results.
Each improvement of performance at the lower levels tends to show up
as increased precision in the upper levels, so it's useful to be able
to run these for maintenance and debugging, and they are hard-coded
into the robot, like this:
(array size followed by X,Y coordinate pairs in inches, robot at origin):
int list1[] = {4, 0,240,  0,0};                        // 20 feet out and back
int list2[] = {4, 0,600,  0,0};                        // 50 feet out and back
int list3[] = {4, 0,1200, 0,0};                        // 100 feet out and back
int list4[] = {4, 0,12000, 0,0};                       // 1000 feet out and back
int list5[] = {8, 0,600,  600,600,   600,0,   0,0};    // right 50' square
int list6[] = {8, 0,600,  -600,600,  -600,0,  0,0};    // left 50' square
int list7[] = {8, 0,1200, 1200,1200, 1200,0,  0,0};    // right 100' square
int list8[] = {8, 0,1200, -1200,1200, -1200,0, 0,0};   // left 100' square
A particular list is chosen at run time by the user (me) using the robot's
on-board user interface (a couple of buttons and a 2x16 character LCD).
8. Looking ahead to Challenges 3 and 4.
If the above two exercises are implemented with odometry and a
subsumption control scheme, then adding obstacle avoidance is
just a matter of adding another, higher priority behavior.
The robot will then have a low level behavior pulling it towards
a target, as before, and a higher level behavior pushing it away
from any intervening obstacles. So your navigation code implemented
thus far does not need to be changed or modified to move on to the
next level and the next two challenges; only a new layer needs to be
added. The beauty of subsumption.
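The arbitration itself can be as simple as the fixed-priority loop
below: each behavior either claims control (returns nonzero and fills
in a motor command) or defers to the next. The behavior internals here
are placeholders; only the arbiter structure is the point.

```c
/* Fixed-priority subsumption arbiter.  Obstacle avoidance slots in
   above navigation without touching the navigation code at all. */
struct command { int left; int right; };    /* motor speeds */

typedef int (*behavior_fn)(struct command *cmd);

static int avoid(struct command *cmd)       /* higher priority layer */
{
    (void)cmd;
    return 0;          /* placeholder: no obstacle seen, so defer */
}

static int navigate(struct command *cmd)    /* lower priority layer */
{
    cmd->left = cmd->right = 100;  /* placeholder: seek the waypoint */
    return 1;                      /* always wants control           */
}

static struct command arbitrate(behavior_fn layers[], int n)
{
    struct command cmd = {0, 0};
    for (int i = 0; i < n; i++)    /* first layer that claims control wins */
        if (layers[i](&cmd))
            break;
    return cmd;
}
```

Adding obstacle avoidance later really is just inserting its function
ahead of navigate() in the layers array.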
Another interesting problem to be solved is that of concave obstacles,
where the simple robot algorithm of pulling towards the target and
pushing away from obstacles conspires to trap the robot inside of a
concave space. This is pretty easy to test for in the parking lot
with a bunch of people standing in an arc between the robot and its
target. And there are arcs like that all over Fair Park, so a smart
little robot will need to be able to deal with them.
I'm sure there are additional ways to solve these problems that will
be created by other builders. This post is just meant to be a starting
place and hopefully spark ideas in others.
Here are some links that might be helpful.
Description of the DPRG outdoor events from
Last Year
(http://geology.heroy.smu.edu/~dpa-www/robots/dprg/11nov06)
Discussion of
Odometry and IMU
(http://geology.heroy.smu.edu/~dpa-www/robo/Encoder/imu_odo)
Paper on
Subsumption and combining Navigation and Obstacle Avoidance
(http://www.geology.smu.edu/~dpa-www/robo/subsumption)
Happy Roboting,
dpa
Rev 1.1
Copyright (c) 2008 David P. Anderson. Verbatim copying and distribution of this entire article are permitted worldwide, without royalty, in any medium, provided this notice is preserved.