Control of A Quadrotor Helicopter Using
Camillo J. Taylor
University of Pennsylvania
Philadelphia, PA, USA
Abstract

In this paper we propose a vision-based stabilization and output tracking control method for a model helicopter. A novel two-camera method is introduced for estimating the full six-degrees-of-freedom pose of the helicopter. One of these cameras is located on-board the helicopter, and the other camera is located on the ground. Unlike previous work, these two cameras are set to see each other. The pose estimation algorithm is compared in simulation to other methods and is shown to be less sensitive to errors in feature detection. In order to build an autonomous helicopter, two methods of control are studied: one using a series of mode-based, feedback linearizing controllers and the other using a backstepping-like control law. Various simulations demonstrate the implementation of these controllers. Finally, we present flight experiments where the proposed pose estimation algorithm and non-linear control techniques have been implemented on a remote-controlled helicopter.

KEY WORDS—helicopter control, pose estimation, unmanned aerial vehicle, vision-based control

The International Journal of Robotics Research, Vol. 24, No. 5, May 2005, pp. 329-341, DOI: 10.1177/0278364905053804, ©2005 Sage Publications

1. Introduction

The purpose of this study is to explore control methodologies and pose estimation algorithms that will provide some level of autonomy to an unmanned aerial vehicle (UAV). An autonomous UAV will be suitable for applications such as search and rescue, surveillance, and remote inspection. Rotary wing aerial vehicles have distinct advantages over conventional fixed wing aircraft for surveillance and inspection tasks, since they can take-off/land in limited spaces and easily hover above any target. Moreover, rotary wing aerial vehicles (such as helicopters) are highly maneuverable, making them preferable for various tasks. Unfortunately, this agility makes control harder, due to dynamical instabilities and sensitivity to disturbances.

A quadrotor (Altuğ, Ostrowski, and Mahony 2002; Altuğ, Ostrowski, and Taylor 2003), as shown in Figure 1, is a four-rotor helicopter. This helicopter design dates back to the early twentieth century, with the first full-scale four-rotor helicopter being built by De Bothezat in 1921 (Gessow and Myers 1967). Recent work in quadrotor design and control includes much smaller scale versions of quadrotors, such as the X4-Flyer (Hamel, Mahony, and Chriette 2002) and the Mesicopter (Kroo and Printz 2005). Related models for controlling VTOL aircraft are studied by Hauser, Sastry, and Meyer (1992) and Martin, Devasia, and Paden (1996).

A quadrotor is an underactuated dynamic vehicle with four input forces (basically, the thrust provided by each of the propellers) and six output coordinates (full spatial movement). Unlike regular helicopters, which have variable-pitch rotors, a quadrotor helicopter has fixed-pitch rotors. It can be controlled by varying the speeds of the four rotors, thereby changing the lift forces. In a quadrotor, rotors are paired to spin in opposite directions in order to cancel out the resultant moment found in single-rotor helicopters. This eliminates the need for a tail rotor to stabilize the yaw motion of the quadrotor. The advantages of a multi-rotor helicopter include high maneuverability and increased payload capacity, due to the increased load area. Also, as the number of engines on an aircraft increases, the total weight of the engines (which give the same total thrust) tends to decrease (Gessow and Myers 1967). The most important disadvantage is the increased energy consumption due to the extra motors.
Fig. 1. Quadrotor helicopter. [Figure: free-body diagram showing rotor forces F1–F4, rotor moments M1–M4, arm length l, total thrust T, weight mg, body axes (xb, yb, zb) at the body frame B, and inertial axes (x, y, z) at the origin O.]
The body angular velocities are mapped to the Euler angle velocities by $\dot\zeta = J w^b$:

$$\begin{bmatrix} \dot\psi \\ \dot\theta \\ \dot\phi \end{bmatrix} = \begin{bmatrix} 1 & s_\psi t_\theta & c_\psi t_\theta \\ 0 & c_\psi & -s_\psi \\ 0 & s_\psi / c_\theta & c_\psi / c_\theta \end{bmatrix} \begin{bmatrix} p \\ q \\ r \end{bmatrix}, \qquad (2)$$

where $s_x$ denotes $\sin(x)$, $c_x$ denotes $\cos(x)$, and $t_x$ denotes $\tan(x)$.

Using the Newton–Euler equations, we can represent the dynamics of the quadrotor as follows:

$$\dot V^b = \frac{1}{m} F_{ext} - w^b \times V^b \qquad (3)$$

$$I_b \dot w^b = M_{ext} - w^b \times I_b w^b \qquad (4)$$

$$\dot\zeta = J w^b. \qquad (5)$$

Here, $I_b$ is the inertia matrix, and $F_{ext}$ and $M_{ext}$ are the external forces and moments on the body-fixed frame, given as

$$F_{ext} = -drag_x\, \hat i - drag_y\, \hat j + (T - drag_z)\, \hat k - R \cdot mg\, \hat k \qquad (6)$$

$$M_{ext} = M_x \hat i + M_y \hat j + M_z \hat k, \qquad (7)$$

where $T$ is the total thrust; $M_x$, $M_y$, and $M_z$ are the body moments; and $\hat i$, $\hat j$, and $\hat k$ are the unit vectors along the x-, y-, and z-axes, respectively. A drag force acts on a moving body opposite to the direction it moves. The terms $drag_x$, $drag_y$, and $drag_z$ are the drag forces along the corresponding axes. Letting $\rho$ be the density of air, $A$ the frontal area perpendicular to the axis of motion, $C_d$ the drag coefficient, and $V$ the velocity, the drag force on a moving object is

$$drag = \frac{1}{2} C_d \rho V^2 A. \qquad (8)$$

Assuming the density of air is constant, the constants in the above equation can be collected, and the equation can be written as

$$drag = C_d V^2. \qquad (9)$$

Similarly, denoting the constant terms as $D$, for hover or near-hover flight conditions the rotor lift force simplifies to

$$F_i = D w_i^2. \qquad (11)$$

Successful control of the helicopter requires direct control of the rotor speeds $w_i$. Rotor speeds can be controlled by controlling the motor torques. The torque of motor $i$, $M_i$, is related to the rotor speed $w_i$ as

$$M_i = I_r \dot w_i + K w_i^2, \qquad (12)$$

where $I_r$ is the rotational inertia of rotor $i$, and $K$ is the reactive torque due to the drag terms. For simplicity, we assume that the inertia of the rotor is small compared to the drag terms, so that the moment generated by the rotor is proportional to the lift force, i.e., $M_i = C F_i$, where $C$ is the force-to-moment scaling factor. For simulations, a suitable $C$ value has been experimentally calculated.

The total thrust force $T$ and the body moments $M_x$, $M_y$, and $M_z$ are related to the individual rotor forces through

$$\begin{bmatrix} T \\ M_x \\ M_y \\ M_z \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ -l & -l & l & l \\ -l & l & l & -l \\ C & -C & C & -C \end{bmatrix} \begin{bmatrix} F_1 \\ F_2 \\ F_3 \\ F_4 \end{bmatrix}, \qquad (13)$$

where $F_i$ are the forces generated by the rotors, as given by eq. (11). The matrix above, which we denote by $N \in \mathbb{R}^{4\times 4}$, is full rank for $l, C \neq 0$. This is logical, since $C = 0$ would imply that the moment around the z-axis is zero, making yaw-axis control impossible. When $l = 0$, the rotors move to the center of gravity, which eliminates the possibility of controlling the tilt angles, again implying a lack of control over the quadrotor states.

In summary, to move the quadrotor, motor torques $M_i$ should be selected to produce the desired rotor velocities $w_i$ in eq. (12), which will change the body forces and moments in eq. (13). This will in turn produce the desired body velocities and accelerations in eqs. (3) and (4).
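As a sketch of how eq. (13) is used in practice, the following inverts the rotor-force mixing matrix to recover the individual rotor forces, and then the rotor speeds via $F_i = D w_i^2$ from eq. (11), for a commanded thrust and set of body moments. The numeric values of l, C, D, and m below are illustrative placeholders, not the paper's identified parameters.

```python
import numpy as np

# Sketch of using eq. (13) as a "mixer": invert N to find the rotor
# forces, then rotor speeds via F_i = D * w_i^2 from eq. (11).
# The values of l, C, D, and m are illustrative placeholders,
# NOT the paper's identified parameters.
l, C, D, m = 0.21, 0.05, 1.5e-5, 0.7

# N from eq. (13): [T, Mx, My, Mz]^T = N @ [F1, F2, F3, F4]^T
N = np.array([
    [1.0,  1.0, 1.0,  1.0],
    [ -l,   -l,   l,    l],
    [ -l,    l,   l,   -l],
    [  C,   -C,   C,   -C],
])

def rotor_speeds(T, Mx, My, Mz):
    """Rotor speeds w_i realizing a desired thrust and body moments."""
    F = np.linalg.solve(N, np.array([T, Mx, My, Mz]))
    # rotors can only push (F_i >= 0); clamp before taking the root
    return np.sqrt(np.maximum(F, 0.0) / D)

# Hover: thrust balances weight, all body moments zero.
w_hover = rotor_speeds(m * 9.81, 0.0, 0.0, 0.0)
```

Since N is full rank whenever l, C ≠ 0 (as noted above), the solve step always succeeds for a physically meaningful geometry.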
[Figure: simulation plots of positions (cm) and angles (deg) versus time; panel details are not recoverable from this excerpt.]

One can design controllers separately by considering the helicopter motions. Noticing that the motion along the y-axis [...]

$$\ddot y = -K_p y - K_d \dot y = -\sin\psi. \qquad (21)$$

The desired tilt angle ($\psi_d$) can be written as [...]

[Figure: simulation block diagram — the controllers produce inputs $u_1,\dots,u_4$, which drive the rotor dynamics ($w_i$) and a thrust map; integrating the body rates $\dot w^b$ and velocities $\dot V^b$ and mapping body rates to Euler velocities and angles yields the helicopter position and orientation.]
$$u_3 = \frac{1}{u_1 \cos\psi\cos\phi}\big(-5y - 10\dot y - 9 u_1 \sin\psi\cos\phi - 4 u_1 \dot\psi\cos\psi\cos\phi + u_1 \dot\psi^2 \sin\psi\cos\phi + 2 u_1 \dot\psi\sin\psi\sin\phi + u_1 \dot\psi\dot\phi\cos\psi\sin\phi - u_1 \dot\phi\dot\psi\cos\psi\sin\phi - u_1 \dot\phi^2 \sin\psi\cos\phi\big). \qquad (30)$$

$$\ddot z = K_{p1}(z_d - z) + K_{d1}(\dot z_d - \dot z)$$

$$\ddot\phi = K_{p2}(\phi_d - \phi) + K_{d2}(\dot\phi_d - \dot\phi). \qquad (31)$$

Fig. 5. Backstepping controller simulation results.
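A minimal sketch of the PD law of eq. (31), applied here to the altitude channel and integrated with simple Euler steps. The gains, reference, and step size are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

# PD altitude law of eq. (31):
#   z_ddot = Kp1*(zd - z) + Kd1*(zd_dot - z_dot),
# integrated with Euler steps. Gains and reference are
# illustrative assumptions, not the paper's tuned values.
Kp1, Kd1 = 4.0, 3.0
zd, zd_dot = 1.0, 0.0          # step reference: climb to 1 m and hover
z, z_dot = 0.0, 0.0
dt = 0.01

for _ in range(2000):          # simulate 20 s
    z_ddot = Kp1 * (zd - z) + Kd1 * (zd_dot - z_dot)
    z_dot += dt * z_ddot
    z += dt * z_dot
```

With positive gains, the closed-loop error dynamics are a stable second-order system, so z settles to the commanded altitude.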
Fig. 6. Backstepping controller path.

Fig. 8. The effect of the delay on the simulation.
[…] this problem has to be included in the model. The frame rate of many cameras is typically 20–30 Hz; a frame rate of 15 Hz will be used as the overall sampling rate of this sensory system. Figure 8 shows the results of the simulation, where the x, y, and z positions are sampled at 15 Hz. The controllers are robust enough to handle the discrete inputs. A simple comparison of the plots shows that discrete sampling causes an increased settling time.

Considering the methods introduced in Section 3, the feedback linearizing controllers have the disadvantage of complexity. The controllers generated usually involve higher derivatives, which are sensitive to sensor errors. The results […]
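The effect of a 15 Hz vision sensor can be sketched by refreshing the position measurement only once per camera frame and holding it constant in between (a zero-order hold), while the dynamics are integrated at a faster rate. The gains and values here are illustrative assumptions, not the paper's.

```python
import numpy as np

# Sketch: PD altitude loop where the position fed to the controller
# is a camera measurement sampled at 15 Hz and held between frames
# (zero-order hold), while dynamics integrate at 1 kHz.
# Gains and values are illustrative assumptions.
Kp, Kd = 4.0, 3.0
zd = 1.0                        # desired altitude (m)
z, z_dot = 0.0, 0.0
dt = 0.001
frame_period = 1.0 / 15.0       # camera frame time
z_meas = z
next_frame = 0.0
t = 0.0

for _ in range(20000):          # simulate 20 s
    if t >= next_frame:         # a new camera frame arrives
        z_meas = z
        next_frame += frame_period
    # position from the (held) camera sample, rate from an ideal sensor
    z_ddot = Kp * (zd - z_meas) - Kd * z_dot
    z_dot += dt * z_ddot
    z += dt * z_dot
    t += dt
```

The held measurement acts like a small time delay; with moderate gains the loop remains stable but, consistent with the observation above, settles more slowly than the continuously measured case.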
Fig. 9. Helicopter pose estimation. [Figure: the quadrotor with body axes (xb, yb, zb) and an onboard camera, a pan/tilt ground camera, and the unit vectors w1, w2, w3 used in the pose estimation.]
$$\theta = \arccos(w_1 \cdot w_2). \qquad (38)$$

Alternatively, one can use the cross-product of $w_1$ and $w_2$ to solve for the angle $\theta$. The only unknown left in eq. (36) is the angle $\alpha$. Rewriting eq. (36) gives

$$(w_3 \times w_1) \times \big(w_3 \times (Rot(w_1 \times w_2, \theta) \cdot Rot(w_1, \alpha))\, L_a\big) = 0. \qquad (39)$$

Let $M$ be given as

$$M = (w_3 \times w_1) \times \{w_3 \times [Rot(w_1 \times w_2, \theta)]\}. \qquad (40)$$

Using Rodrigues' formula, $Rot(w_1, \alpha)$ can be written as

$$Rot(w_1, \alpha) = I + \hat w_1 \sin\alpha + \hat w_1^2 (1 - \cos\alpha). \qquad (41)$$

The proposed two-camera pose estimation method is compared with other methods using a MATLAB simulation. The other methods used were a four-point algorithm (Ansar et al. 2001), a state estimation algorithm (Sharp, Shakernia, and Sastry 2001), a direct method that uses the area estimates of the blobs, and a stereo pose estimation method that uses two ground cameras separated by a distance $d$. In this simulation, the quadrotor moves from the point (22, 22, 104) to (60, 60, 180) cm, while $(\theta, \psi, \phi)$ changes from (0.7, 0.9, 2) to (14, 18, 40) degrees. A random error of up to five pixels was added to the image values to simulate the errors associated with the blob extractor. A random error of magnitude ±2 was also added to the blob area estimates on the image plane. The errors are calculated using angular and positional distances, given as […]
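Rodrigues' formula in eq. (41) can be sketched directly; the example below also illustrates eq. (38), recovering the angle between two unit vectors and rotating one onto the other about their cross-product axis.

```python
import numpy as np

# Rodrigues' formula, eq. (41):
#   Rot(w, a) = I + w_hat*sin(a) + w_hat^2*(1 - cos(a)),
# where w_hat is the skew-symmetric cross-product matrix of the unit axis w.
def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rot(w, a):
    w = np.asarray(w, dtype=float)
    w = w / np.linalg.norm(w)       # the axis must be a unit vector
    K = skew(w)
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

# Eq. (38): the angle between unit vectors w1 and w2 is acos(w1 . w2);
# rotating w1 about w1 x w2 by that angle carries it onto w2.
w1 = np.array([1.0, 0.0, 0.0])
w2 = np.array([0.0, 1.0, 0.0])
theta = np.arccos(np.clip(w1 @ w2, -1.0, 1.0))
R = rot(np.cross(w1, w2), theta)
```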
Table 1. Comparison of the Angular and Positional Errors of Different Pose Estimation Methods

    Method        Angular error (degree)   Position error (cm)
    Direct        10.2166                  1.5575
    Four point     3.0429                  3.0807
    Two camera     1.2232                  1.2668
    Linear         4.3700                  1.8731
    Stereo         6.5467                  1.1681

[…] strengths of ground-based and on-board estimation methods in obtaining the best estimate of the quadrotor's pose.

5. Experiments

The proposed controllers and the two-camera pose estimation algorithm have been implemented on a remote-controlled, battery-powered helicopter shown in Figure 14, a commercially available model helicopter called the HMX-4. It is about 0.7 kg, 76 cm long between rotor tips, and has about 3 min of flight time. This helicopter has three on-board gyros that enhance stability by damping the rotational motion of the quadrotor. An experimental setup, shown in Figure 15, was prepared to prevent the helicopter from moving too much in the x–y plane, while enabling it to turn and ascend/descend freely.

[Figure: position and orientation errors of the stereo method as the camera base distance changes.]
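The paper's exact error formulas do not survive in this excerpt, so the following is a hedged illustration of one common choice for each metric: the geodesic angle between rotation matrices for angular error, and the Euclidean norm for positional error. Both function names and example values are hypothetical.

```python
import numpy as np

# Hypothetical error metrics (the paper's exact formulas are not
# recoverable from this excerpt): geodesic angle between rotation
# matrices for angular error, Euclidean norm for positional error.
def angular_error_deg(R_est, R_true):
    c = (np.trace(R_est.T @ R_true) - 1.0) / 2.0   # cos of relative angle
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def position_error(p_est, p_true):
    return float(np.linalg.norm(np.asarray(p_est) - np.asarray(p_true)))

# Example: a 5 degree yaw error and a 2 cm altitude offset.
a = np.radians(5.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
ang = angular_error_deg(Rz, np.eye(3))
pos = position_error([22.0, 22.0, 106.0], [22.0, 22.0, 104.0])
```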
[Figure: experimental setup — the on-board system communicates with the ground system over UHF and R/C links; the ground system comprises a pan/tilt camera, two vision processors (Vision Processor-1 and Vision Processor-2), a main controller, and an R/C transmitter. Accompanying plots show the X, Y, and Z positions (cm) and the pitch (teta, psi) and yaw angles (degrees) versus time (seconds).]
[…] trees or near tall buildings. Therefore, it is better to use multiple sensors as much as possible to overcome the limitations of individual sensors. One possibility is to investigate methods to reduce the errors in feature detection. For indoor environments, an indoor GPS system can be built to increase the reliability of the system.

A quadrotor helicopter can also be teamed up with a ground mobile robot to form a heterogeneous marsupial robotic team (Murphy et al. 1999), where the quadrotor rides on the mobile robot for deployment in the field. Placing a camera on top of the mobile robot would make the two-camera system mobile. Moreover, a platform can be built to enable the helicopter to take-off/land on the mobile robot. The ground robot would be responsible for transportation, communication relay, and assistance in landing and take-off with the camera, and would assist in the computation of the visual information. Such functionalities will be useful for remote sensing, chase, and other ground–air cooperation tasks.

For $i = 1$, let $z_1 = x_1$ and $z_2 = x_2 + x_1$, so that $x_2 = z_2 - z_1$. Differentiating these gives $\dot z_1 = z_2 - z_1$ and $\dot z_2 = \dot x_2 + \dot x_1$. A Lyapunov function of the form $V_1 = \frac{1}{2} z_1^2$ gives

$$\dot V_1 = -z_1^2 + z_1 z_2. \qquad (54)$$

Also,

$$\dot z_2 = u_1 C_\phi S_{x_3} + x_2 = \bar f_2 + x_3 \;\Rightarrow\; \bar f_2 = u_1 C_\phi S_{x_3} + x_2 - x_3. \qquad (55)$$

For $i = 2$, define

$$z_3 = x_3 - \alpha_2 \qquad (56)$$

$$\dot z_2 = z_3 + \alpha_2 + \bar f_2. \qquad (57)$$

One can use a Lyapunov function of the form $V_2 = V_1 + \frac{1}{2} z_2^2$. Let us pick […]
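The algebra of the $i = 1$ step above can be checked numerically. This sketch assumes the usual chain relation $\dot x_1 = x_2$, so that $\dot z_1 = z_2 - z_1$, and verifies that $\dot V_1 = z_1 \dot z_1$ matches eq. (54) at random points.

```python
import numpy as np

# Numeric check of the i = 1 backstepping step: with z1 = x1 and
# z2 = x1 + x2 (so x2 = z2 - z1, and z1_dot = x1_dot = x2 = z2 - z1),
# the Lyapunov function V1 = z1^2 / 2 has
#   V1_dot = z1 * z1_dot = -z1^2 + z1*z2,
# which is eq. (54). Assumes the chain relation x1_dot = x2.
rng = np.random.default_rng(0)
ok = True
for _ in range(100):
    z1, z2 = rng.standard_normal(2)
    z1_dot = z2 - z1
    v1_dot = z1 * z1_dot
    ok = ok and np.isclose(v1_dot, -z1**2 + z1 * z2)
```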