
An Intelligent Approach to Coordinated Control of Multiple Unmanned Aerial Vehicles

George Vachtsevanos, Liang Tang, Johan Reimann


gjv@ece.gatech.edu ltang@ece.gatech.edu gtg221d@prism.gatech.edu
School of Electrical and Computer Engineering
Georgia Institute of Technology, Atlanta, GA, 30332. U.S.A.

Abstract

This paper introduces a novel architecture for the coordinated control of multiple Unmanned Aerial Vehicles (UAVs) and a differential game theoretic approach to formation control and collision avoidance. The hierarchical architecture features an upper level with global situation awareness and team mission planning, a middle level with local knowledge, formation control and obstacle avoidance, and a low level that interfaces with onboard baseline controllers, sensors, communication and weapon systems. Each level consists of several interacting agents with dedicated functions. The formation control problem is viewed as a Pursuit Game of n pursuers and n evaders. Stability of the formation of vehicles is guaranteed if the vehicles can reach their destinations within a specified time, assuming that the destination points are avoiding the vehicles in an optimal fashion. A two-vehicle example is presented to illustrate the approach, with each vehicle modeled as a point mass with an acceleration limit. Collision avoidance is achieved by designing the value function so that the two vehicles move away from one another when they come too close to each other. Simulation results are provided to verify the performance of the proposed algorithm.

Introduction

Future urban warfare will utilize an unprecedented level of automation in which human-operated, autonomous, and semi-autonomous air and ground platforms will be linked through a coordinated control system. Networked UAVs bring a new dimension to future combat systems that must include adaptable operational procedures, planning and deconfliction of assets, coupled with the technology to realize such concepts. The technical challenges the control designer faces in autonomous collaborative operations stem from real-time sensing, computing and communications requirements, environmental and operational uncertainty, hostile threats, and the emerging need for improved UAV and UAV-team autonomy and reliability. Figure 1 shows the autonomous control level trend according to the DoD UAV Roadmap (Ref. 1). To meet these challenges, innovative coordinated planning and control technologies such as distributed artificial intelligence (DAI), computational intelligence and soft computing, as well as game theory and dynamic optimization, have been investigated intensively in recent years. However, most work in this area has focused on solving particular problems, such as formation control and autonomous search, while less attention has been paid to the system architecture, especially from an implementation and integration point of view.

In this paper, we introduce a novel architecture for the coordinated control of multiple UAVs acting as intelligent agents. Our approach differs from most others (Ref. 2, 3) in that: 1) a commander is placed at the highest level of the hierarchy, and at the current level of autonomy the system under development acts as a decision support tool for the commander; 2) the architecture is generic and flexible, to facilitate the fusion of diverse technologies.

The hierarchical architecture we introduce features an upper level with global situation awareness and team mission planning, a middle level with local knowledge, formation control and obstacle avoidance, and a low level that interfaces with onboard baseline controllers, sensors, communication and weapon systems. Each level consists of several interacting agents with dedicated functions. Among all the functional agents accommodated by the proposed architecture, this paper focuses on formation control and collision avoidance.

The problem of finding a control algorithm that ensures multiple autonomous vehicles can maintain a formation while traversing a desired path and avoid inter-vehicle collisions will be referred to as the formation control problem. The formation control problem has recently received considerable attention, due in part to its wide range of applications in aerospace and robotics. A classic example involving the implementation of the virtual potential approach is presented in (Ref. 4). The authors performed simulations on a two-dimensional system, which proved to be well behaved. However, as they mention in their conclusion, the drawback of the virtual potential function approach is the possibility of being "trapped" in local minima. Hence, if local minima exist, one cannot guarantee that the system is stable. In (Ref. 5), the individual trajectories of autonomous vehicles moving in formation were generated by solving an optimal control problem at each time step. This is computationally demanding and hence not possible to perform in real time with current hardware.

Presented at the American Helicopter Society 60th Annual Forum, Baltimore, MD, June 7-10, 2004. Copyright © 2004 by the American Helicopter Society International, Inc. All rights reserved.
Figure 1. Autonomous Control Level Trend

This paper views the formation control problem from a two-player differential game perspective, which not only provides a framework to determine acceptable initial vehicle deployment conditions but also provides insight into acceptable formation maneuvers that can be performed while maintaining the formation.

System Architecture

While networked and autonomous UAVs can be centrally controlled, this requires that each UAV communicate all the data from its sensors to a central location and receive all the control signals back. Network failures and communication delays are among the main concerns in the design of cooperative control systems. On the other hand, distributed intelligent agent systems provide an environment in which agents autonomously coordinate, cooperate, negotiate, make decisions and take actions to meet the objectives of a particular application or mission. The autonomous nature of agents allows for efficient communication and processing among distributed resources.

For the purpose of coordinated control of multiple UAVs, each individual UAV in the team is considered as an agent with particular capabilities engaged in executing a portion of the mission. The primary task of a typical team of UAVs is to execute faithfully and reliably a critical mission while satisfying local survivability conditions. In order to define the domain of our research, we adopt an assumed mission scenario of a group of 5 UAVs executing reconnaissance and surveillance (RS) missions in an urban warfare environment, as depicted in Figure 2.

[Figure 2 graphic omitted: the urban warfare scenario includes a manned vehicle, a fixed-wing UAV, the GTMax, GTMav, and OAV platforms, ground sensors, soldiers, a sniper, a moving target, and the commander/operator station.]

Figure 2. A Team of 5 UAVs Executing RS Missions in an Urban Warfare Environment


Command Manned
& Control Vehicle

Level 3
Global
Team Mission Planning Global Situation Knowledge Fusion
Knowledge
/Re-planning Agent Awareness Agent Agent
Global Performance QoS Assessment
Measurement Agent Agent

Level 2
Local Formation Control Moving Obstacle Local Situation
Knowledge Agent Avoidance Agent Awareness Agent

Local Mission FDI/Reconfigurable


Planning Agent Control Agent

Level 1
Behavioral
Vehicle Weapon System
Knowledge Communication Sensing Agent
Control Agent Agent Agent

……

Figure 3. A Generic Hierarchical Multi-agent System Architecture


Worth pointing out is that the team consists of heterogeneous UAVs with various capabilities in terms of maneuverability, sensing, endurance, autonomy level, etc. This mission scenario covers a wide area of research interests including mission planning and re-planning, optimal sensor (UAV) placement, task allocation, autonomous search and tracking, knowledge fusion, formation control, moving obstacle avoidance, fault tolerant control, etc. A generic hierarchical multi-agent architecture that accommodates the aforementioned technologies is depicted in Figure 3.

The highest level of the control hierarchy features functions of global situation awareness and teamwork. The mission planning agent is able to generate and refine mission plans for the team, generate or select flight routes, and create operational orders. It is also responsible for keeping track of the team's plan, goals, and team members' status. The overall mission is usually planned by the command and control center based on the capabilities of each individual UAV agent, and is further decomposed into tasks/subtasks which are finally allocated to the UAV assets (individually or in coordination with other vehicles). This can usually be cast as a constrained optimization problem and tackled with various approaches, such as integer programming, graph theory, etc. Market-based methods (Ref. 6, 7), and especially auction theory (Ref. 8, 9, 10), can be applied as a solution to autonomous mission re-planning. Planning the UAVs' flight routes is also an integral part of mission planning. A modified A* search algorithm, which attempts to minimize a suitable cost function consisting of the weighted sum of distance, hazard and maneuverability measures (Ref. 11, 12), can be utilized to facilitate the design of the route planner. In the case of a leader-follower scenario, an optimal route is generated for the leader, while the followers fly in close formation in the proximity of the leader. The global situation awareness agent, interacting with the knowledge fusion agent, evaluates the world conditions based on data gathered from each UAV (and ground sensors, if available) and reasons about the enemy's likely actions. Adversarial reasoning and deception reasoning are two important tasks executed here. The global performance measurement agent measures the performance of the team and suggests team re-configuration or mission re-planning whenever necessary. Quality of service (QoS) is assessed to make the best effort to accomplish the mission and meet the predefined quality criteria. Real-world implementation of this level is not limited to the agents depicted in the figure. For example, in heterogeneous agent societies, knowledge of coordination protocols and languages may also reside here (Ref. 3).
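As an illustration of the kind of cost function such a route planner might minimize, the following sketch runs an A*-style search on a small occupancy grid with a weighted sum of distance and hazard terms; the grid, the weights, and the simple turn penalty standing in for maneuverability are illustrative assumptions, not the planner of Refs. 11 and 12.

```python
import heapq
import math

def plan_route(grid_hazard, start, goal, w_dist=1.0, w_haz=5.0, w_turn=0.2):
    """Grid search minimizing a weighted sum of distance, hazard exposure,
    and heading changes (a crude maneuverability proxy).
    grid_hazard[r][c] is a hazard level in [0, 1]; start/goal are (row, col)."""
    rows, cols = len(grid_hazard), len(grid_hazard[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]

    def heuristic(cell):  # straight-line distance; admissible since every step costs at least this much
        return w_dist * math.hypot(cell[0] - goal[0], cell[1] - goal[1])

    counter = 0  # tie-breaker so heap comparisons never reach the path objects
    frontier = [(heuristic(start), 0.0, counter, start, None, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, g, _, cell, heading, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in moves:
            nxt = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            step = (w_dist * math.hypot(dr, dc)                # distance term
                    + w_haz * grid_hazard[nxt[0]][nxt[1]]       # hazard term
                    + (w_turn if heading not in (None, (dr, dc)) else 0.0))  # turn penalty
            ng = g + step
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                counter += 1
                heapq.heappush(frontier, (ng + heuristic(nxt), ng, counter,
                                          nxt, (dr, dc), path + [nxt]))
    return None  # no route found

# Example: the planner detours around the high-hazard column.
hazard = [[0.0, 0.0, 0.9, 0.0],
          [0.0, 0.0, 0.9, 0.0],
          [0.0, 0.0, 0.0, 0.0]]
print(plan_route(hazard, (0, 0), (0, 3)))
```

Raising w_haz relative to w_dist trades longer flight routes for lower threat exposure, which is the essential tuning knob of such a weighted-cost planner.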
Most functional agents belonging to level three reside in a ground command and control center, or in a manned vehicle in the team. Some functions and services might also be integrated onboard UAVs with substantial computation and communication capabilities. They function primarily as decision support tools for the commander of the mission.

The middle level with local situation awareness is responsible for the planning, re-planning and management of missions (or sub-missions) allocated to a single UAV. A level two mission plan is essentially a sequenced set of orders for level one. A task library is used when a higher level task is decomposed into lower level orders. The local situation awareness agent monitors the world situation for obstacles and threats. A synergy between the formation control agent and its obstacle avoidance counterpart is required in order to avoid obstacles (stationary or moving) and collisions. At the formation level, the UAV swarm must avoid vehicle collisions as well as moving or stationary targets. The system configuration must assure the global asymptotic stability of the adopted coordinated formation control strategy, so that vehicles meet safety requirements without deviating substantially from their planned mission/trajectory objectives. At the vehicle level, guided by the obstacle avoidance agent, each UAV must be capable of avoiding moving and stationary threats/obstacles while performing its assigned task (Ref. 13). The Fault Detection and Identification (FDI)/reconfigurable control agent detects fault conditions and activates control reconfigurations. The formation control agent is discussed in detail in a later section of this paper.

Level one with behavioral knowledge is the level closest to the physical dynamics and instrumentation of the UAV. It encapsulates the vehicle's physical control systems, weapon system, communication mechanism, and sensing capabilities for the upper level agents, allowing it to deal with higher level linguistic commands, such as "move", "search", "shoot", and "communicate". It is responsible for executing these tasks by translating them into physical commands, set points, way points, etc. The agents belonging to this level are designed to support heterogeneous UAV models. They are not necessarily knowledgeable of any plans, strategy, or teamwork.
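To make the division of labor concrete, the following sketch shows one hypothetical way the three levels could hand work down the hierarchy, with a level-two plan expressed as a sequence of level-one orders; the class and method names are illustrative and are not taken from the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Waypoint = Tuple[float, float, float]  # (x, y, z) in some common frame

@dataclass
class LevelOneAgent:
    """Behavioral level: turns linguistic orders into vehicle set points."""
    vehicle_id: str
    def execute(self, order: str, waypoint: Waypoint) -> None:
        # In a real system this would command the onboard baseline controller.
        print(f"[{self.vehicle_id}] {order} -> set point {waypoint}")

@dataclass
class LevelTwoAgent:
    """Local level: decomposes a task into a sequenced set of level-one orders."""
    executor: LevelOneAgent
    def run_task(self, task: str, route: List[Waypoint]) -> None:
        for wp in route:
            self.executor.execute("move", wp)
        self.executor.execute(task, route[-1])

@dataclass
class LevelThreeAgent:
    """Global level: allocates the tasks of the team mission to individual UAVs."""
    team: List[LevelTwoAgent] = field(default_factory=list)
    def dispatch(self, allocation: List[Tuple[str, List[Waypoint]]]) -> None:
        for agent, (task, route) in zip(self.team, allocation):
            agent.run_task(task, route)

# Minimal usage: two UAVs each receive a search task along a short route.
team = [LevelTwoAgent(LevelOneAgent("uav-1")), LevelTwoAgent(LevelOneAgent("uav-2"))]
LevelThreeAgent(team).dispatch([
    ("search", [(0.0, 0.0, 50.0), (100.0, 0.0, 50.0)]),
    ("search", [(0.0, 50.0, 60.0), (100.0, 50.0, 60.0)]),
])
```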
Differential Game Approach to Formation Control

Differential Game Theory was initially used to determine optimal military strategies in continuous-time conflicts governed by some given dynamics and constraints (Ref. 14). One such application is the so-called Pursuit Game, in which a pursuer has to collide with an evading target. Naturally, in order to solve such a problem it is advantageous to know the dynamics and the positional information of both the evader and the pursuer; that is, the Pursuit Game will be viewed as a Perfect Information Game.

Formation Control as a Pursuit Game

The formation control problem can be regarded as a Pursuit Game, except that it is, in general, much more complex in terms of the combined dynamical equations, since the system consists of n pursuers and n evaders instead of only one of each. However, if the group of vehicles is viewed as the pursuer and the group of desired points in the formation as the evader, the problem is essentially reduced to the standard, albeit much more complex, pursuit game.

Stability of the formation of vehicles is guaranteed if the vehicles can reach their destinations within some specified time, assuming that the destination points are avoiding the vehicles in an optimal fashion. It seems counterintuitive that the destination points should be avoiding the vehicles optimally; however, if the vehicles can reach the points under such conditions, then they will always be able to reach their destinations.

As a consequence of our stability criterion, it is necessary to determine not only the control strategies of the vehicles but also the optimal avoidance strategies of the desired points. Let us label the final control vector of the vehicles by $\phi$ and the final control vector of the desired points by $\psi$. Then, the main equation which has to be satisfied is:

$$\min_{\phi}\,\max_{\psi}\left[\sum_{j} V_j \, f_j(\vec{x},\phi,\psi) + G(\vec{x},\phi,\psi)\right] = 0 \qquad (1)$$

which has to be true for both $\phi$ and $\psi$. The $f_j(\vec{x},\phi,\psi)$ term is the $j$th dynamic equation governing the system, and $V_j$ is the corresponding Value of the game. $G(\vec{x},\phi,\psi)$ is a predetermined function which, when integrated, provides the payoff of the game. Notice that the only quantity not specified in the equation is the $V_j$ term.

From the main equation it is possible to determine the retrograde path equations (RPEs), which will have to be solved to determine the actual paths traversed by the vehicles in the formation. However, initial conditions of the retrograde path equations will have to be considered in order to integrate the RPEs. These initial condition requirements provide us with the ability to introduce tolerance boundaries within which we say that the formation has settled. Such boundaries naturally add complexity to the problem; however, they also provide a framework for handling positional measurement errors.
The above formulation suggests a way of approaching the solution to the differential game. However, how does one ensure that inter-vehicle collisions are avoided? To ensure this, it is necessary to consider the payoff function determined by the integral of $G(\vec{x},\phi,\psi)$. As an example, if we simply require that the vehicles reach their goal within a certain time $\tau$, then $G(\vec{x},\phi,\psi) = 1$. This can be verified by evaluating $\int_0^{\tau} G(\vec{x},\phi,\psi)\,dt = \tau$. Hence, we have restricted our solutions to the initial vehicle deployments which ensure that the vehicles will reach the desired points in time $\tau$. However, if $G(\vec{x},\phi,\psi)$ is changed to penalize proximity of vehicles to one another, only initial conditions that ensure collision-free trajectories will be valid.

However, $G(\vec{x},\phi,\psi)$ does not provide the means to perform the actual collision avoidance, but merely limits the solution space. So, in order to incorporate collision avoidance into the controller, one can either change the value function or add terms to the system of dynamic equations.

Limitations

Due to the solution's dependence upon the dynamic equations of the vehicles, some limitations of the differential game approach naturally arise. As an example, consider the case where it is desirable to replace one type of vehicle with another in real time. Such systems can be accommodated only if the new type of vehicle is capable of performing all the maneuvers assigned to its predecessor.

Another issue that may arise is the existence of a closed form solution to the Retrograde Path Equations. For some systems of dynamical equations it is simply not possible to find the solution to the differential equations.

Two-Vehicle Example

In order to illustrate some of the advantages and disadvantages of the differential game approach to formation control, consider the following system of simple point "helicopters", that is, points that can move in three dimensions governed by the following dynamic equations:

$$\dot{x}_i = v_{xi}, \qquad \dot{v}_{xi} = F_i \cos(\phi_{2i-1})\sin(\phi_{2i}) - k_i v_{xi}$$
$$\dot{y}_i = v_{yi}, \qquad \dot{v}_{yi} = F_i \sin(\phi_{2i-1})\sin(\phi_{2i}) - k_i v_{yi}$$
$$\dot{z}_i = v_{zi}, \qquad \dot{v}_{zi} = F_i \cos(\phi_{2i}) - k_i v_{zi}$$

where $i = 1, 2$.

The two desired "points" are described by one set of dynamic equations. This simply implies that there is a constant distance separating the two desired points, and that the formation can only perform translations, and not rotations, in three-dimensional space. Hence the dynamic equations become:

$$\dot{x}_d = v_{xd}, \qquad \dot{v}_{xd} = F_d \cos(\psi_1)\sin(\psi_2) - k_d v_{xd}$$
$$\dot{y}_d = v_{yd}, \qquad \dot{v}_{yd} = F_d \sin(\psi_1)\sin(\psi_2) - k_d v_{yd}$$
$$\dot{z}_d = v_{zd}, \qquad \dot{v}_{zd} = F_d \cos(\psi_2) - k_d v_{zd}$$

In the above dynamical systems, the $k_i$ and $k_d$ factors are simply linear drag terms that ensure the velocities are bounded, and the $F_i$ and $F_d$ terms are the magnitudes of the applied forces. Figure 4 shows the coordinate system and the associated angles.

Figure 4: Definition of Angles
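To make the point-mass model concrete, the following sketch integrates these equations forward with a simple Euler step; the numerical values of the force, drag coefficient, and step size are illustrative assumptions.

```python
import math

def step_point_mass(state, F, heading, elevation, k, dt=0.01):
    """One Euler step of the point-'helicopter' dynamics above:
    velocity driven by a force of magnitude F along (heading, elevation),
    opposed by a linear drag term k * v."""
    x, y, z, vx, vy, vz = state
    ax = F * math.cos(heading) * math.sin(elevation) - k * vx
    ay = F * math.sin(heading) * math.sin(elevation) - k * vy
    az = F * math.cos(elevation) - k * vz
    return (x + vx * dt, y + vy * dt, z + vz * dt,
            vx + ax * dt, vy + ay * dt, vz + az * dt)

# Example: vehicle 1 starts at rest at the origin and is pushed along +x for 5 s.
state = (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(500):
    state = step_point_mass(state, F=1.0, heading=0.0, elevation=math.pi / 2, k=0.5)
print("position after 5 s:", state[:3])   # drag limits the terminal speed to F / k
```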
Substituting the dynamical equations into the main equation (1), we obtain the following expressions:

$$\min_{\phi}\Big[\, F_1\big(V_{vx1}\cos\phi_1\sin\phi_2 + V_{vy1}\sin\phi_1\sin\phi_2 + V_{vz1}\cos\phi_2\big) + F_2\big(V_{vx2}\cos\phi_3\sin\phi_4 + V_{vy2}\sin\phi_3\sin\phi_4 + V_{vz2}\cos\phi_4\big)\,\Big] \qquad (2)$$

and

$$\max_{\psi}\Big[\, F_d\big(V_{vxd}\cos\psi_1\sin\psi_2 + V_{vyd}\sin\psi_1\sin\psi_2 + V_{vzd}\cos\psi_2\big)\,\Big]$$

To obtain the control law that results from the max-min solution of equation (2), the following lemma is used:

Lemma 1: Let $a, b \in \mathbb{R}$ and $\rho = \sqrt{a^2 + b^2}$. Then $\max_{\theta}\,(a\cos\theta + b\sin\theta)$ is attained where
$$\cos\theta = \frac{a}{\rho}, \qquad \sin\theta = \frac{b}{\rho},$$

and the maximum is $\rho$.

By combining Lemma 1 with equation (2), the following control strategy for vehicle 1 is found:

$$\cos\phi_1 = -\frac{V_{vx1}}{\rho_1}, \qquad \sin\phi_1 = -\frac{V_{vy1}}{\rho_1}$$
$$\cos\phi_2 = -\frac{V_{vz1}}{\rho_2}, \qquad \sin\phi_2 = -\frac{\rho_1}{\rho_2}$$

where

$$\rho_1 = \sqrt{V_{vx1}^2 + V_{vy1}^2} \qquad \text{and} \qquad \rho_2 = \sqrt{V_{vx1}^2 + V_{vy1}^2 + V_{vz1}^2}$$

Similar results are obtained for vehicle 2. For the optimal avoidance strategy of the desired points, we obtain the following (with $\rho_{d1}$ and $\rho_{d2}$ defined analogously from the $V_{v\cdot d}$ terms):

$$\cos\psi_1 = +\frac{V_{vxd}}{\rho_{d1}}, \qquad \sin\psi_1 = +\frac{V_{vyd}}{\rho_{d1}}$$
$$\cos\psi_2 = +\frac{V_{vzd}}{\rho_{d2}}, \qquad \sin\psi_2 = +\frac{\rho_{d1}}{\rho_{d2}}$$
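The strategies above are easy to evaluate once the value gradients are known. The sketch below turns the formulas for vehicle 1 and for the desired points into code; the value-gradient inputs are placeholders that would come from solving the retrograde path equations, so this is only an illustration of the structure of the controller, not a complete solver.

```python
import math

def pursuit_angles(V_vx, V_vy, V_vz):
    """Optimal pursuit angles for a vehicle (note the minus signs):
    the force is applied against the gradient of the Value."""
    rho1 = math.hypot(V_vx, V_vy)
    rho2 = math.sqrt(V_vx**2 + V_vy**2 + V_vz**2)
    if rho2 == 0.0:                      # gradient vanishes: direction is arbitrary
        return 0.0, 0.0
    phi1 = math.atan2(-V_vy / rho1, -V_vx / rho1) if rho1 > 0.0 else 0.0
    phi2 = math.atan2(-rho1 / rho2, -V_vz / rho2)
    return phi1, phi2

def evasion_angles(V_vxd, V_vyd, V_vzd):
    """Optimal avoidance angles for the desired points (plus signs):
    the points flee along the gradient of the Value."""
    rho_d1 = math.hypot(V_vxd, V_vyd)
    rho_d2 = math.sqrt(V_vxd**2 + V_vyd**2 + V_vzd**2)
    if rho_d2 == 0.0:
        return 0.0, 0.0
    psi1 = math.atan2(V_vyd / rho_d1, V_vxd / rho_d1) if rho_d1 > 0.0 else 0.0
    psi2 = math.atan2(rho_d1 / rho_d2, V_vzd / rho_d2)
    return psi1, psi2

# Example with made-up gradient values: the vehicle pushes opposite the gradient,
# while the desired point flees along it.
print(pursuit_angles(1.0, 0.5, -0.2))
print(evasion_angles(1.0, 0.5, -0.2))
```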
From these strategies, we see that the retrograde equations (the ring denoting the retrograde, i.e., backward-time, derivative) have the following form:

$$\mathring{v}_{x1} = -F_1\frac{V_{vx1}}{\rho_2} + k_1 v_{x1}$$
$$\mathring{x}_1 = -v_{x1}$$
$$\mathring{V}_{x1} = 0$$
$$\mathring{V}_{vx1} = V_{x1} - k_1 V_{vx1}$$

For this example, the final value will be zero, and it occurs when the difference between the desired position and the actual position is zero. Naturally, to obtain a more general solution, a solution manifold should be used; however, in order to display the utility of this approach, the previously mentioned final conditions will suffice. The closed form expression of the value function is then of the form:

$$V_{vx1} = (x_1 - x_d)\cdot\frac{1 - e^{-k_1 t}}{k_1}$$
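As a quick consistency check (using the assumption, not stated explicitly above, that the constant $V_{x1}$ equals the position error $x_1 - x_d$): since $\mathring{V}_{x1} = 0$, $V_{x1}$ is constant along a retrograde path, so integrating $\mathring{V}_{vx1} = V_{x1} - k_1 V_{vx1}$ from the terminal condition $V_{vx1} = 0$ gives

$$V_{vx1}(t) = \frac{V_{x1}}{k_1}\left(1 - e^{-k_1 t}\right) = (x_1 - x_d)\cdot\frac{1 - e^{-k_1 t}}{k_1},$$

which is the closed form quoted above, with $t$ measured as retrograde time from the final condition.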

It should be noted that the above analysis could be performed on a reduced set of differential equations, where each equation would express the differences in distance and velocity, and hence reduce the number of differential equations by a factor of 2. However, for the sake of clarity, the analysis is performed on the actual position and velocity differential equations.

Furthermore, it should also be noted that this solution closely resembles the isotropic rocket pursuit game described in (Ref. 14). This is due to the fact that the dynamic equations are decoupled, and hence working within a three-dimensional framework does not change the problem considerably.

Simulation Results without Collision Avoidance

From the closed form expression of the control presented in the previous section, it is obvious that the optimal strategies are in fact bang-bang controllers. Since the forces in the system are not dependent on the proximity of the vehicles to the desired points, there will always exist some positional error. It is, however, possible to resolve this problem simply by switching controllers at some error threshold, or by introducing terms that reduce the force magnitudes $F_1$ and $F_2$ as the vehicles approach the desired points.

Figure 5: Two-Vehicle Simulation with Sufficient Vehicle Velocities

Figure 6: Two-Vehicle Simulation with Insufficient Vehicle Velocities
The plots above show the tracking capabilities of the derived controller. The two vehicles are attempting to follow two parameterized circular trajectories with a radius of three. In Figure 5 the vehicles can move quickly enough to actually reach the desired trajectories, while in Figure 6 the velocities of the vehicles are not sufficient to reach the desired trajectories. In the latter case, the vehicles simply move in a smaller circle, which ensures that the error remains constant.

Let us consider the magnitude of the tracking errors for these two cases:

Figure 7: Position Error with Sufficient Vehicle Velocities

Figure 8: Position Error with Insufficient Vehicle Velocities

Notice that the error curves in Figure 7 are fairly smooth up until the desired trajectory is reached. However, as soon as the desired trajectory is reached, the magnitude of the error changes quite rapidly. This is due to the bang-bang control law discussed earlier. When the system attempts to correct for a small positional deviation, the applied force vector may cause an overshoot, and hence the magnitude of the velocity vector becomes somewhat irregular. This phenomenon is naturally not present in the second case: since the vehicles never reach the desired trajectories, the error curves in Figure 8 remain smooth.

Collision Avoidance

Collision avoidance was added to this problem simply by observing that the control depends only on the value function V. Hence, if we design the value function such that it ensures the two vehicles move away from one another when they come too close to each other, the controller will in essence avoid collisions. However, since the collision avoidance is not accomplished by changing the dynamic equations on which the value function is based, the convergence time is no longer guaranteed.
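One simple way to realize this idea, sketched below under our own assumptions (the paper does not give its exact modification), is to superimpose a repulsive term on the value gradients fed to the pursuit law whenever the inter-vehicle distance drops below a safety radius; the gain and radius used here are illustrative.

```python
import math

def repulsive_gradient(p_own, p_other, safety_radius=2.0, gain=10.0):
    """Extra value-gradient contribution that pushes two vehicles apart once
    they are closer than safety_radius. Returns a 3-vector to be added to
    (V_vx, V_vy, V_vz) before evaluating the pursuit angles."""
    dx = [a - b for a, b in zip(p_own, p_other)]
    dist = math.sqrt(sum(d * d for d in dx)) or 1e-9
    if dist >= safety_radius:
        return [0.0, 0.0, 0.0]
    # Scale grows as the separation shrinks; the sign is chosen so that the
    # pursuit law (which pushes against the gradient) drives the vehicles apart.
    scale = gain * (safety_radius - dist) / dist
    return [-scale * d for d in dx]

def modified_gradient(V_grad, p_own, p_other):
    """Combine the nominal value gradient with the repulsive term."""
    rep = repulsive_gradient(p_own, p_other)
    return [g + r for g, r in zip(V_grad, rep)]

# Example: two vehicles one unit apart receive strongly separating gradients.
print(modified_gradient([0.3, 0.0, 0.0], (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
```

Because the modification acts only on the value function and not on the dynamic equations, it preserves the control structure but, as noted above, forfeits the convergence-time guarantee.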

Figure 9: Two-Vehicle Simulation with Conflicting Goals

The simulation in Figure 9 displays a scenario where the formation is obviously unstable. The two vehicles are attempting to reach the same trajectory; however, due to the addition of collision avoidance, the desired formation will never be reached.

Figure 10: Position Error with Conflicting Goals

Notice that the 3D position plot in Figure 9 seems to suggest that the two vehicles actually reach the desired trajectory. However, the position error plot shown in Figure 10 verifies that the desired safety region of two units is never compromised; that is, the second vehicle is simply trailing the first along the desired trajectory.

Simulation Results with Additional Vehicles

Since the dynamic equations used to derive the controller have no inter-vehicle dependence, it stands to reason that the system is scalable to n vehicles. Figure 11 shows a simulation with ten vehicles following a circular trajectory with a radius of six units. The circles overlap slightly; however, this does not cause conflicts to arise, since no two vehicles will be in those regions at the same time. However, the overlap causes the system to converge more slowly than the two-vehicle system, as shown in Figure 12. This decrease in convergence rate is due to conflict resolution on approach to the desired trajectories. Such resolution is clearly displayed around time index 150, where one of the vehicles has to leave the desired path to ensure that a collision is avoided, and consequently its corresponding positional error increases. It should be noted that, after time index 300, the system has settled on the desired trajectory.

Figure 11: Ten-Vehicle Simulation without Conflicting Goals

Figure 12: Convergence of Ten-Vehicle System

The approach seems to perform remarkably well under such well-defined desired trajectories; however, what would happen if all the vehicles were supposed to reach the same trajectory? Under such impossible conditions the proposed system will not guarantee collision avoidance.

Figure 13: Plot of Number of Collisions as Vehicles Attempt to Reach the Same Trajectory

As shown in Figure 13, the number of collisions initially appears to increase drastically with the number of vehicles. However, when the number of vehicles is large, the addition of extra vehicles seems not to impact the number of collisions significantly.

Based on the simulations, it is clear that well-defined trajectories are not just vital to the stability of the formation but also essential to the safety of the vehicles. In the simulation of the system with an increasing number of vehicles, a collision was defined as a point in time when the distance between two vehicles was less than or equal to one.
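Under that definition, counting collisions in an n-vehicle run reduces to thresholding pairwise distances at each time step. A minimal sketch of such a counter (the threshold mirrors the definition above; the state layout and example data are illustrative assumptions) is:

```python
import math
from itertools import combinations

def count_collisions(trajectories, threshold=1.0):
    """trajectories[k][t] is the (x, y, z) position of vehicle k at time step t.
    A collision is counted whenever a pair of vehicles is within `threshold`
    of each other at some time step, matching the definition used above."""
    collisions = 0
    steps = len(trajectories[0])
    for t in range(steps):
        for i, j in combinations(range(len(trajectories)), 2):
            if math.dist(trajectories[i][t], trajectories[j][t]) <= threshold:
                collisions += 1
    return collisions

# Example with two three-step trajectories that pass within one unit once.
traj_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
traj_b = [(3.0, 0.0, 0.0), (1.8, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(count_collisions([traj_a, traj_b]))   # -> 1
```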

Conclusion

• A hierarchical multi-agent architecture for the coordinated control of multiple UAVs has been presented in this paper.

• The formation control problem is viewed as a Pursuit Game of n pursuers and n evaders. Collision avoidance is achieved by designing the value function so that it ensures that the two vehicles move away from one another when they come too close.

• By viewing the formation control problem as a differential game, important performance information about the formation can be determined, for example, the existence of solutions for any given set of initial conditions, the time to reach the target, and whether a designated formation flight path is reachable. However, the mathematical analysis required to obtain such information is considerable, and in some cases obtaining a closed form expression for the control law may be impossible. Moreover, the analysis of one formation of vehicles cannot always be translated onto another formation with different dynamics. The lack of a closed form solution could be remedied by using numerical methods; however, the dependency on the individual vehicle dynamics seems to be the price that has to be paid to obtain the performance measures mentioned above.
References:

1. Office of the Secretary of Defense (Acquisition, Technology, & Logistics), Air Warfare, "OSD UAV Roadmap 2002-2027," December 2002.
2. Sousa, J. B., and Pereira, F., "A Framework for Networked Motion Control," Proceedings of the 42nd IEEE Conference on Decision and Control, pp. 1526-1531, Hawaii, USA, December 2003.
3. Howard, M., Hoff, B., and Lee, C., "Hierarchical Command and Control for Multi-agent Teamwork," Proceedings of the 5th International Conference on Practical Application of Intelligent Agents and Multi-Agent Technology (PAAM2000), pp. 1-13, Manchester, UK, April 10, 2000.
4. Baras, J. S., Tan, X., and Hovareshti, P., "Decentralized Control of Autonomous Vehicles," Proceedings of the 42nd IEEE Conference on Decision and Control, pp. 1532-1537, Hawaii, USA, December 2003.
5. Dunbar, W. B., and Murray, R. M., "Model Predictive Control of Coordinated Multi-Vehicle Formation," Proceedings of the 41st IEEE Conference on Decision and Control, pp. 4631-4636, Las Vegas, USA, December 2002.
6. Voos, H., "Market-based Algorithms for Optimal Decentralized Control of Complex Dynamic Systems," Proceedings of the 38th IEEE Conference on Decision and Control, Vol. 40, pp. 3295-3296, Phoenix, AZ, 1999.
7. Clearwater, S. H. (Ed.), "Market-Based Control: A Paradigm for Distributed Resource Allocation," Singapore: World Scientific, 1996.
8. Walsh, W., and Wellman, M., "A Market Protocol for Decentralized Task Allocation," Proceedings of the 3rd International Conference on Multiagent Systems, 1998.
9. Engelbrecht, W. R., Shubik, M., and Stark, R. M., "Auctions, Bidding, and Contracting: Uses and Theory," New York, NY: New York University Press, 1983.
10. Bertsekas, D., "Auction Algorithms for Network Flow Problems: A Tutorial Introduction," Computational Optimization and Applications, Vol. 1, pp. 7-66, 1992.
11. Vachtsevanos, G., Kim, W., Al-Hasan, S., Rufus, F., Simon, M., Schrage, D., and Prasad, J. V. R., "Mission Planning and Flight Control: Meeting the Challenge with Intelligent Techniques," Journal of Advanced Computational Intelligence, Vol. 1, (1), pp. 62-70, October 1997.
12. Al-Hasan, S., and Vachtsevanos, G., "Intelligent Route Planning for Fast Autonomous Vehicles Operating in a Large Natural Terrain," Journal of Robotics and Autonomous Systems, Vol. 40, pp. 1-24, 2002.
13. Al-Hasan, S., and Vachtsevanos, G., "A Neural Fuzzy Controller for Moving Obstacle Avoidance," Third International NAISO Symposium on Engineering of Intelligent Systems, Malaga, Spain, September 24-27, 2002.
14. Isaacs, R., "Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization," New York: John Wiley and Sons, Inc., 1965.
