Moore (moorek@ece.usu.edu) and Flann are with the Center for Self-Organizing and Intelligent Systems, Utah State University, Logan, UT 84322, U.S.A. An earlier version of this paper appeared in the Proceedings of the 1999 IEEE International Symposium on Intelligent Control/Intelligent Systems and Semiotics.

December 2000, IEEE Control Systems Magazine
algorithms for planning and control have led to the possibility of realizing true autonomous mobile robot operation for UGV applications.

In this article we describe a specific autonomous mobile robot developed for UGV applications. We focus on a novel robotic platform [1], the Utah State University (USU) omnidirectional vehicle (ODV). This platform features multiple "smart wheels" in which each wheel's speed and direction can be independently controlled through dedicated processors. The result is a robot with the ability to completely control both the vehicle's orientation and its motion in a plane: in effect, a "hovercraft on wheels."

We begin by describing the mobility capability inherent in the smart wheel concept and discussing the distributed-processor mechatronic implementation of the complete system. Then we focus on a multiresolution behavior-generation strategy that we have developed for controlling the robot. The strategy is characterized by a hierarchical task decomposition approach. At the supervisory level, a knowledge-based planner and an A*-optimization algorithm are used to specify the vehicle's path as a sequence of basic maneuvers. At the vehicle level, these basic maneuvers are converted to time-domain trajectories. These trajectories are then tracked in an inertial reference frame using a model-based feedback linearization controller that computes set points for each wheel's low-level drive motor and steering angle motor controllers. The effectiveness of the strategy is demonstrated by results from actual tests with several real robots designed using the smart wheel concept.

The USU ODV Robotic Platform

Mobility Capability and Mobility Control

Much of the research described here was funded by the U.S. Army Tank-Automotive and Armament Command's Intelligent Mobility Program. The broad objective of the program has been to develop and demonstrate intelligent mobility concepts for unmanned ground vehicles. Mobility means, literally, "the ability or readiness to move or be moved; being mobile" [2]. Intelligent mobility adds the capability to determine optimal paths through terrain by using various smart or intelligent navigation and path-planning strategies (e.g., obstacle avoidance and negotiation using cost functions and tradeoff analyses). Thus, in the context of autonomous mobile robotics, we consider two components of intelligent mobility: mobility capability and mobility control. Mobility capability refers to the physical characteristics of the vehicle that make it able to move. Mobility control means using "intelligence" in controlling the vehicle to actually achieve its full mobility capability. Mobility control also has two components:
• First, to manage the "local" dynamic interactions between the vehicle and the forces it encounters, intelligent vehicle-level control algorithms must be developed and implemented to optimally maneuver the vehicle.
• Second, mission mobility and path planning is concerned with "global" motion planning and navigation and is used to determine the path a vehicle should take to pass through a given region to achieve its objective.

The USU "Smart Wheel"

Our perspective on developing effective robotic systems for UGV applications is that one must have both a mobility capability to work with and the proper mobility control to effectively utilize that capability. To this end, we have developed a series of novel mobile robots based on a specific mobility capability that we call the "smart wheel." Fig. 1 shows the T1, a 95-lb ODV vehicle with six smart wheels. Other USU ODV six-wheel vehicles include the ARC III, a 45-lb small-scale robot [1], and the T2 [3], a 1480-lb robot
(shown in Fig. 2). The USU smart wheel concept is shown in Fig. 3. Each smart wheel has a drive motor, power (in the T1), and a microcontroller, all in the wheel hub. This is combined with a separate steering motor and with actuation in the z-axis (in the T3, a newly developed robot, not shown) to create a three-degree-of-freedom mechanism. Infinite rotation in the steering degree of freedom is achieved through an innovative slip ring that allows data and (in the T2) power to pass from the wheel to the chassis without a wired connection.
The robotic platforms resulting from attaching multiple smart wheels to a chassis are called omnidirectional because, as we show later, the resulting vehicle can drive a path with independent orientation and X-Y motion. This differs from a traditional Ackerman-steered vehicle or a tracked vehicle that must use skid steering. In such vehicles, orientation is constrained by the direction of travel. Much of our current research focuses on exploring the mobility improvements offered by the ODV concept over more traditional vehicles.

Figure 1. The T1, a USU ODV vehicle (95 lb).
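The independence of orientation and X-Y motion can be made concrete with a small kinematic sketch. This is our own illustration, not code from the article: given a desired vehicle velocity in the inertial frame and a yaw rate, each wheel's required speed and steering angle follow from rigid-body kinematics. The wheel offsets in the example are placeholder values, not the T1's actual geometry.

```python
import math

def wheel_setpoints(vx, vy, omega, yaw, wheel_positions):
    """Per-wheel speed and steering angle for an omnidirectional platform.

    vx, vy: desired vehicle velocity in the inertial frame
    omega:  desired yaw rate (rad/s)
    yaw:    current vehicle yaw (rad)
    wheel_positions: list of (x, y) wheel offsets in the body frame
    """
    # Rotate the inertial-frame velocity into the body frame.
    c, s = math.cos(yaw), math.sin(yaw)
    bvx = c * vx + s * vy
    bvy = -s * vx + c * vy
    setpoints = []
    for (rx, ry) in wheel_positions:
        # Rigid-body velocity of the wheel center: v + omega x r.
        wvx = bvx - omega * ry
        wvy = bvy + omega * rx
        speed = math.hypot(wvx, wvy)
        angle = math.atan2(wvy, wvx)  # steering angle in the body frame
        setpoints.append((speed, angle))
    return setpoints

# Pure translation along +x with no rotation: every wheel points
# forward (angle 0) at the commanded speed, regardless of its offset.
sp = wheel_setpoints(1.0, 0.0, 0.0, 0.0,
                     [(1, 0.5), (1, -0.5), (0, 0.5),
                      (0, -0.5), (-1, 0.5), (-1, -0.5)])
```

Because yaw rate enters each wheel's velocity independently, the vehicle can spin while translating along an arbitrary path, which is exactly the "hovercraft on wheels" behavior described above.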
Vehicle Electronics
Overall mobility capability is provided through a complete mechatronic system that includes a mechanical concept (the independent steering and drive mechanism) and suitable vehicle electronics and control systems to properly actuate the mechanical subsystem. The vehicle electronics system (called "vetronics" in the UGV community) used on the T-series ODV robots is what enables the algorithms for controlling the robot to actuate the mechanical capability provided by the smart wheel concept. Fig. 4 shows the vetronics architecture on the T2 vehicle. The system is built around three single-board Pentium computers running Linux and communicating via TCP/IP using a LAN on the vehicle. User interfaces to the vehicle are through a wireless modem (for the joystick) and a wireless TCP/IP link for talking to a graphical user interface (GUI). The processors communicate with various sensors (such as GPS and a fiber-optic gyro) and with other parts of the system in several ways (including CAN bus, A/D and D/A conversions, and RS-232, on both PCI and PC-104 buses). In particular, the master node PC talks to six different wheel node electronic units, one for each wheel. Each wheel node includes a 32-bit microcontroller with interfaces to a variety of sensors, including feedback from both absolute and incremental encoders.

Figure 2. The T2 ODV autonomous mobile robot (1480 lb).

Figure 3. The USU smart wheel (labels: steering rotation about wheel centerline; height degree of freedom with active height control and passive spring/damper; steering degree of freedom; drive motor in hub; drive degree of freedom; control node in hub).
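The wheel nodes close the lowest-level loops around the drive and steering motors. The article does not reproduce the wheel-node firmware, so the following is only a hedged sketch of what a drive-speed loop on such a microcontroller might look like: a discrete PI controller with output saturation and simple anti-windup. The gains and limits are hypothetical.

```python
class WheelSpeedPI:
    """Minimal discrete PI speed loop: an illustrative sketch only.
    Gains, sample time, and command limit are hypothetical values,
    not numbers from the article."""

    def __init__(self, kp, ki, dt, u_max):
        self.kp, self.ki, self.dt, self.u_max = kp, ki, dt, u_max
        self.integ = 0.0

    def update(self, speed_sp, speed_meas):
        err = speed_sp - speed_meas
        self.integ += err * self.dt
        u = self.kp * err + self.ki * self.integ
        # Saturate the motor command and apply simple anti-windup.
        if abs(u) > self.u_max:
            self.integ -= err * self.dt  # undo integration when saturated
            u = max(-self.u_max, min(self.u_max, u))
        return u
```

In use, the master node would send each wheel node a speed set point and the node would call `update` once per sample against the encoder-derived speed measurement.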
[Wheel node wiring detail from the vetronics figure: a TT8 wheel node controller interfaced to a quadrature encoder (Computer Optical Products CP-560, 1024 counts), a steering motor controller (Advanced Motion Controls 50A8DD) via PWM, direction, fault, and current signals, an absolute encoder (Sequential Electronic Systems Model 40H), two type K thermocouples, battery voltage and current sensing through an A/D interface, a failsafe brake relay (Warner ERS57), and a Lambda PM30 series DC-DC converter supplying +5 V and ±12 V; the 48-V bus and motor connections are not shown.]
[Block diagram of the vehicle control architecture: the GUI and joystick feed a joystick interpreter and a mission and path planner; a trajectory generator produces time-domain trajectory signals (x, y, θ and their rates) for the closed-loop trajectory control algorithm(s), which output the desired angle and speed for each wheel; switches I and II and a wheel exception routine select between the closed-loop controller and low-level wheel movement control; wheel sensors return the actual angle and speed of each wheel, while inertial sensors and odometry return vehicle position and orientation.]
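As stated in the introduction, the supervisory level pairs a knowledge-based planner with an A*-optimization algorithm over a mobility graph. The article's cost functions and graph construction are not reproduced here; the following is a generic sketch of A* over a weighted graph, with a toy graph whose node names and edge costs are invented for illustration.

```python
import heapq

def a_star(graph, start, goal, h):
    """A* search over a weighted graph.

    graph: dict mapping node -> list of (neighbor, edge_cost)
    h: admissible heuristic h(node) estimating remaining cost to goal
    Returns (cost, path), or (float('inf'), []) if goal is unreachable.
    """
    frontier = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best.get(nbr, float('inf')):
                best[nbr] = ng
                heapq.heappush(frontier, (ng + h(nbr), ng, nbr, path + [nbr]))
    return float('inf'), []

# Toy mobility graph; edge costs could encode terrain traversability.
graph = {'A': [('B', 1.0), ('C', 4.0)],
         'B': [('C', 1.0), ('D', 5.0)],
         'C': [('D', 1.0)]}
cost, path = a_star(graph, 'A', 'D', lambda n: 0.0)
# cost == 3.0, path == ['A', 'B', 'C', 'D']
```

With the zero heuristic this degenerates to Dijkstra's algorithm; a terrain-aware heuristic (e.g., straight-line distance over the best terrain class) would prune the search without changing the optimal result.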
Figure 10. Maneuver grammar (e.g., Translate(Final_Point)).
Figure 11. Path generated for the sweep goal task shown in Fig. 9.
Figure 16. "Reasonable" paths through terrain objects.
Figure 17. Final path (shown in red) chosen by the planner for the mobility graph in Fig. 16.
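A maneuver such as Translate(Final_Point) must be expanded by the trajectory generator into a time-domain trajectory before it can be tracked. The article does not specify the speed profile used; the sketch below assumes a trapezoidal profile (accelerate, cruise, decelerate), a common choice, with all parameter values hypothetical.

```python
import math

def translate_trajectory(start, goal, v_max, a_max, dt):
    """Sample a straight-line Translate maneuver as (x, y, v) points
    using a trapezoidal speed profile (assumed, not from the article)."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist          # unit direction of travel
    t_ramp = v_max / a_max                 # time to reach cruise speed
    d_ramp = 0.5 * a_max * t_ramp ** 2     # distance covered while ramping
    if 2 * d_ramp > dist:                  # triangle profile: never cruises
        t_ramp = math.sqrt(dist / a_max)
        v_max = a_max * t_ramp
        d_ramp = dist / 2
    t_cruise = (dist - 2 * d_ramp) / v_max
    t_total = 2 * t_ramp + t_cruise
    pts, t = [], 0.0
    while t <= t_total:
        if t < t_ramp:                      # accelerating
            s, v = 0.5 * a_max * t * t, a_max * t
        elif t < t_ramp + t_cruise:         # cruising
            s, v = d_ramp + v_max * (t - t_ramp), v_max
        else:                               # decelerating
            td = t_total - t
            s, v = dist - 0.5 * a_max * td * td, a_max * td
        pts.append((start[0] + ux * s, start[1] + uy * s, v))
        t += dt
    pts.append((goal[0], goal[1], 0.0))
    return pts

traj = translate_trajectory((0.0, 0.0), (10.0, 0.0),
                            v_max=1.0, a_max=0.5, dt=0.1)
```

Sampled at the controller rate, such a trajectory supplies both the position set points and the velocity feedforward used by the closed-loop tracking controller.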
Figure 18. x, y, and yaw definitions (vehicle position rx, ry; velocities Vx, Vy; yaw angle θ).
Figure 19. Coordinate systems for controller design (body frame and inertial frame; wheel_i at position Ri, ri with steering angle αi and velocities Vi, vi).
we can see that the system has the form of a typical robotic system

$$ M\ddot{q} + B\dot{q} = R(q)\,G\,\bigl(u + d(\alpha, \dot{q})\bigr) $$

where $M$ is the mass/inertia matrix, $B$ represents viscous damping, we have defined $R(q) = R(\theta)$, and $G$ is a constant matrix that depends on the vehicle geometry, given by

$$ G = \begin{bmatrix} 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ -r_y & -r_x & -r_y & r_x & 0 & -r_x & 0 & r_x & r_y & -r_x & r_y & r_x \end{bmatrix}. $$

Choosing the input transformation

$$ u = G^{T}\bigl(GG^{T}\bigr)^{-1} R^{-1}(q)\,\bar{u}, $$

the resulting system now looks like

$$ M\ddot{q} + B\dot{q} = \bar{u} + R(q)\,G\,d(\alpha, \dot{q}). $$

Fig. 21 shows the overall control architecture for path tracking. In this figure, $P(s)$ represents the plant and the low-level PID controllers are represented by $C(s)$. The outer loop is

$$ \bar{u} = C_o(s)\,\bigl(q_{sp} - q\bigr) + \dot{q}_{sp}. $$

Notice that we have added a feedforward term, $\dot{q}_{sp}$, which is simply the final expected velocity for the desired trajectory. This can be shown to be a computed-torque term. A final point ...

Figure 20. Forces acting on the vehicle (shown for wheel 1: velocity $V_1$, traction force $F_{T1}$, lateral force $F_{L1}$, steering angle $\alpha_1$).
Figure 21. Control architecture for path tracking (outer loop $C_o(s)$ with feedforward $\dot{q}_{sp}$, allocation $R^{-1}(q)G^{T}$, low-level loops $C(s)$, motors, $G\,R(q)$, plant $P(s)$ with integrator $1/s$, and feedback through $G^{T}R^{-1}(q)$).
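The effect of the input transformation can be checked numerically: with the $G$ matrix from the text and any yaw rotation $R(\theta)$, the composition $R(q)G \cdot G^{T}(GG^{T})^{-1}R^{-1}(q)$ collapses to the identity, so $\bar{u}$ acts directly as the net generalized force. A small sketch of that check follows; the wheel offsets $r_x$, $r_y$ are placeholder values, not the T2's actual geometry.

```python
import numpy as np

rx, ry = 1.0, 0.5  # placeholder wheel offsets, not the vehicle's real geometry

# G maps the 12 per-wheel force components to the net (Fx, Fy, Mz).
G = np.array([
    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    [-ry, -rx, -ry, rx, 0, -rx, 0, rx, ry, -rx, ry, rx],
], dtype=float)

def R(theta):
    """Rotation from body frame to inertial frame (yaw only)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

theta = 0.7                          # arbitrary yaw angle
u_bar = np.array([2.0, -1.0, 0.3])   # desired generalized force

# Pseudoinverse-based allocation: u = G^T (G G^T)^{-1} R^{-1}(q) u_bar.
u = G.T @ np.linalg.solve(G @ G.T, np.linalg.inv(R(theta)) @ u_bar)

# Substituting back through R(q) G recovers u_bar exactly.
recovered = R(theta) @ G @ u
```

Because $GG^{T}$ is invertible for this geometry, the allocation distributes the commanded force over the 12 wheel force components while leaving the outer loop to see a decoupled plant.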
[Plots: vehicle paths in the x-y plane annotated with commanded speeds of 0.328, 0.656, 0.928, and 1.312 ft/s, and the corresponding yaw angle (rad) versus time (s) histories.]