
2012 IEEE International Conference on Robotics and Automation

RiverCentre, Saint Paul, Minnesota, USA


May 14-18, 2012

Autonomous Landing of a VTOL UAV on a Moving Platform Using Image-based Visual Servoing

Daewon Lee1, Tyler Ryan2 and H. Jin Kim3

Abstract— In this paper we describe a vision-based algorithm to control a vertical-takeoff-and-landing (VTOL) unmanned aerial vehicle while tracking and landing on a moving platform. Specifically, we use image-based visual servoing (IBVS) to track the platform in two-dimensional image space and to generate a velocity reference command that is used as the input to an adaptive sliding mode controller. Compared with vision-based control algorithms that reconstruct a full three-dimensional representation of the target, which requires precise depth estimation, IBVS is computationally cheaper: because it is less sensitive to depth-estimation error, a much faster, rougher method can be used to obtain the depth estimate. To enhance the velocity tracking of the sliding mode controller, an adaptive rule is described that accounts for the ground effect experienced during the maneuver. Finally, the IBVS algorithm integrated with the adaptive sliding mode controller for tracking and landing is validated in an experimental setup using a quadrotor.

I. INTRODUCTION

Fig. 1. Quadrotor helicopter and moving ground target under consideration

Vertical take-off and landing (VTOL) unmanned aerial vehicles (UAVs) have been used in many applications such as search and rescue, reconnaissance, surveillance and exploration. Generally, these aircraft have highly coupled nonlinear dynamics and fly in unsteady conditions (e.g. due to turbulence or unmodelled dynamics), which has led to the development of various robust and/or nonlinear control techniques. For example, [1] presents an internal-model based error-feedback dynamic regulator that is robust to parametric uncertainty. Sliding mode disturbance observers ([2]) and adaptive-fuzzy control techniques ([3]) have also been used to make the system more robust to external disturbances. A combined controller and observer using neural networks is used in [4] to track reference trajectories.

One feature becoming more common on miniature VTOL UAVs is an onboard camera, which has resulted in a quickly growing body of research making use of this camera. Common isolated vision tasks include target tracking ([5] and [6]) and simultaneous localization and mapping (SLAM, [7]). Additionally, the onboard camera has been used within the control loop, as in [8] and [9], which use the camera for position control.

For this paper, we use the adaptive sliding mode controller described in [10] for precision control of a quadrotor while landing on a moving target (fig. 1). This situation would occur, for example, when landing on a moving ship or truck. While GPS-based control is sufficient for general positioning (assuming the signal is not blocked), the accuracy of GPS receivers fit for use on miniature UAVs is measured in meters, making them unsuitable for precision tasks such as landing. Instead, we demonstrate that image-based visual servoing (IBVS) is capable of providing the necessary precision. IBVS is a technique that performs the majority of the control calculations in 2D image space, rather than in the typical 3D body or inertial reference frames. This eliminates the need for expensive 3D position reconstruction calculations, allowing for real-time control. Finally, given the need to fly in ground effect during the landing maneuver, the controller uses a model adaptation technique to dynamically compensate for this system disturbance.

Most similar to our work is [11], which also calculates control in the image frame, thereby avoiding full three-dimensional reconstruction. The biggest difference between their work and ours is that we adaptively compensate for the ground effect, which is quite significant during a landing maneuver. Additionally, they use a different image-based method for their formulation.

1 Department of Mechanical and Aerospace Engineering, Ph.D. Candidate, Seoul National University, Seoul, Republic of Korea. lee.daewon@gmail.com
2 Department of Mechanical and Aerospace Engineering, Ph.D. Student, Seoul National University, Seoul, Republic of Korea. ryantr@gmail.com
3 Department of Mechanical and Aerospace Engineering, Associate Professor, Seoul National University, Seoul, Republic of Korea. hjinkim@snu.ac.kr

II. SYSTEM

This section describes the dynamics of the quadrotor and the camera systems.

A. Quadrotor

Define the UAV state vector as x = [x, y, z, φ, θ, ψ]ᵀ, where x, y and z denote translational position variables and φ, θ and ψ denote roll, pitch and yaw, respectively. Define the control input vector as u = [u1, u2, u3, u4]ᵀ (inputs for thrust, roll, pitch and yaw, respectively). Then the system dynamics, including the influence of external forces fex(x), can be represented as eq. (1) (for derivation details, see [10]):

ẍ = f(x) + g(x)u + fex(x)    (1)

where

f(x) = [0, 0, −Fg, 0, 0, 0]ᵀ,

g(x) = [ cosφ sinθ cosψ + sinφ sinψ   0   0   0
         cosφ sinθ sinψ − sinφ cosψ   0   0   0
         cosφ cosθ                    0   0   0
         0                            l   0   0
         0                            0   l   0
         0                            0   0   1 ],

fex(x) = [0, 0, Fgr(z), 0, 0, 0]ᵀ.

Here l denotes the distance from the motor rotational axis to the system center of gravity (assumed equal for all motors), Fg is the gravity force and Fgr(z) denotes the ground effect force.
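As a concrete illustration, eq. (1) can be evaluated numerically in a few lines. The following Python sketch is ours, not the authors' implementation; the variable names, the gravity constant, and the optional `ground_effect` callable standing in for Fgr(z) are assumptions for illustration.

```python
import numpy as np

def quadrotor_accel(x, u, l=0.3, Fg=9.81, ground_effect=None):
    """Evaluate eq. (1): xdd = f(x) + g(x) u + fex(x).

    x : state [x, y, z, phi, theta, psi]
    u : inputs [thrust, roll, pitch, yaw]
    ground_effect : optional callable z -> Fgr(z) (hypothetical model).
    """
    phi, theta, psi = x[3], x[4], x[5]

    # f(x): gravity acts on the vertical acceleration only.
    f = np.array([0.0, 0.0, -Fg, 0.0, 0.0, 0.0])

    # g(x): 6x4 input matrix of eq. (1); thrust enters the
    # translational channels through the body attitude.
    g = np.array([
        [np.cos(phi)*np.sin(theta)*np.cos(psi) + np.sin(phi)*np.sin(psi), 0, 0, 0],
        [np.cos(phi)*np.sin(theta)*np.sin(psi) - np.sin(phi)*np.cos(psi), 0, 0, 0],
        [np.cos(phi)*np.cos(theta),                                       0, 0, 0],
        [0, l, 0, 0],
        [0, 0, l, 0],
        [0, 0, 0, 1],
    ])

    # fex(x): the ground effect enters the vertical channel only.
    Fgr = ground_effect(x[2]) if ground_effect is not None else 0.0
    fex = np.array([0.0, 0.0, Fgr, 0.0, 0.0, 0.0])

    return f + g @ np.asarray(u, dtype=float) + fex
```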

B. Vision System

The camera model we consider in this paper is a pinhole camera model, as shown in fig. 2, where [XI, YI, ZI]ᵀ are the axes of the three-dimensional inertial coordinate frame I, [Xcam, Ycam, Zcam]ᵀ are the axes of the camera frame C, i.e., the coordinate frame attached to the camera center Ocam, [Uimage, Vimage]ᵀ are the axes of the image frame S, OS denotes the point where the z axis of the camera coordinate frame intersects the image plane, and f is the focal length of the camera. The mapping from a point P = [xc, yc, zc]ᵀ ∈ C to a point p = [u, v]ᵀ ∈ S can be written as eq. (2):

p = [u, v]ᵀ = (f / zc) [xc, yc]ᵀ    (2)

Fig. 2. Geometry and coordinate frames for a pinhole camera model

Suppose that the camera is moving such that P moves with translational velocity T = [Tx, Ty, Tz]ᵀ and rotational velocity Ω = [ωx, ωy, ωz]ᵀ, both with respect to the camera frame C. The spatial velocity ṙ ∈ ℝ⁶ can then be defined as eq. (3):

ṙ = [Tᵀ, Ωᵀ]ᵀ = [Tx, Ty, Tz, ωx, ωy, ωz]ᵀ    (3)

Finally, the dynamics of the image space point p can be expressed as eq. (4) [12]:

ṗ = Jp ṙ    (4)

where

Jp = [ −f/zc    0      u/zc   uv/f         −(f²+u²)/f    v
        0      −f/zc   v/zc   (f²+v²)/f    −uv/f        −u ]    (5)

is called the image Jacobian matrix associated with the image feature p.
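To make eqs. (2), (4) and (5) concrete, a minimal Python sketch of the pinhole projection and the image Jacobian is given below. The function names and the helper that stacks the Jacobian over several features (as done later for the four marker corners) are our own assumptions, not part of the paper.

```python
import numpy as np

def project(P, f):
    """Eq. (2): map a camera-frame point P = [xc, yc, zc] to image coordinates [u, v]."""
    xc, yc, zc = P
    return np.array([f * xc / zc, f * yc / zc])

def image_jacobian(u, v, zc, f):
    """Eq. (5): 2x6 image Jacobian relating image motion to the spatial velocity r_dot."""
    return np.array([
        [-f / zc, 0.0,     u / zc, u * v / f,         -(f**2 + u**2) / f,  v],
        [0.0,     -f / zc, v / zc, (f**2 + v**2) / f, -u * v / f,         -u],
    ])

def stacked_jacobian(features, depths, f):
    """Stack eq. (5) over several features, one (rough) depth estimate per feature."""
    return np.vstack([image_jacobian(u, v, zc, f)
                      for (u, v), zc in zip(features, depths)])
```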
III. CONTROL

This section describes image-based visual servoing, including image roll and pitch compensation, and the adaptive sliding mode velocity controller.

A. Image-based Visual Servoing

In image-based visual servoing, the task space is defined in two-dimensional image space and the control law is designed as a function of the image error, e, defined as

e = p − pd    (6)

where pd is the desired location of the image feature. For the landing maneuver considered in this paper, the desired image feature is static, i.e. ṗd = 0. Using this and eq. (4), we calculate the error rate as

ė = Jp ṙ    (7)

Define a Lyapunov candidate function as

L = (1/2) eᵀe    (8)

L̇ = eᵀ Jp ṙ    (9)

Define the desired camera velocity as

ṙd = −W Jpᵀ e    (10)

where W is a positive definite weighting matrix. As ṙ → ṙd (to be achieved by the sliding mode controller described in the next section), then

ṙ → −W Jpᵀ e    (11)

and eq. (9) becomes

L̇ = −eᵀ Jp W Jpᵀ e ≤ 0,    (12)

where Jp W Jpᵀ is a full-rank matrix when the image features are not colinear. This proves the asymptotic convergence of e to zero, assuming ṙd is "well" tracked.

Eq. (10) generates six references (for x, y, z, φ, θ, ψ). However, the quadrotor cannot track all of these references simultaneously because of its under-actuated property (four motor inputs versus six states). Since the x, y, φ and θ channels are highly coupled, we decouple the x and y channels from the φ and θ channels.

To do this, we define a virtual coordinate frame V with the position and yaw angle aligned to match the camera coordinate frame, C, and the z axis aligned with the inertial frame z axis, I. If references are generated from an image in V, the desired decoupling is achieved.

To map the captured image into V, first consider an imaginary camera frame which is the same as the true camera frame but rotated to zero roll and pitch. The image features in this imaginary frame become

Pir = R(1, φ) R(2, θ) Pi    (13)

pri = (f / zci^r) [xci^r, yci^r]ᵀ
    = [ f (ui cosθ + f sinθ) / (−ui cosφ sinθ + vi sinφ + f cosφ cosθ),
        f (ui sinφ sinθ + vi cosφ − f sinφ cosθ) / (−ui cosφ sinθ + vi sinφ + f cosφ cosθ) ]ᵀ
    = [uri, vri]ᵀ    (14)

where

R(1, φ) = [ 1    0      0
            0    cosφ  −sinφ
            0    sinφ   cosφ ],

R(2, θ) = [ cosθ   0   sinθ
            0      1   0
           −sinθ   0   cosθ ].

Here Pir is the i-th feature coordinate defined in the roll- and pitch-compensated camera frame, and pri is the corresponding image feature coordinate of Pir. For most normal flight regimes, roll and pitch angles do not approach 90 deg, so the denominators in eq. (14) are non-zero. Interestingly, so far no depth information is needed to perform the roll and pitch compensation.

Assuming that we know the nominal geometry of the target in the camera frame, we can now estimate depth. First, find the position of another feature in image space, prj. Combined with the already measured point pri, we can calculate the distance between these points in the image frame, which we call Ls. Let us call Lc the already known corresponding distance in the camera frame. Using these, the depth from the camera to the image features can be simply calculated as

zc = Lc · f / Ls    (15)

At this point we could, in theory, calculate a full three-dimensional reconstruction of the target and perform control without IBVS. However, this simple depth calculation is only a rough estimate, with a focus on computational efficiency over accuracy, and would not produce an accurate enough reconstruction for standard control algorithms. IBVS, however, is less sensitive to this measurement, so having only this rough estimate is sufficient.
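The roll/pitch compensation of eqs. (13)-(14) and the depth estimate of eq. (15) can be sketched as follows. Rather than coding the closed-form fractions of eq. (14) directly, the sketch rotates the back-projected feature ray and re-projects it, which is algebraically equivalent since the unknown depth scale cancels; the function names and the way the known target distance Lc is supplied are our own assumptions.

```python
import numpy as np

def rot_x(phi):
    """R(1, phi): rotation about the camera x axis."""
    return np.array([[1, 0, 0],
                     [0, np.cos(phi), -np.sin(phi)],
                     [0, np.sin(phi),  np.cos(phi)]])

def rot_y(theta):
    """R(2, theta): rotation about the camera y axis."""
    return np.array([[ np.cos(theta), 0, np.sin(theta)],
                     [ 0,             1, 0],
                     [-np.sin(theta), 0, np.cos(theta)]])

def compensate_roll_pitch(p, phi, theta, f):
    """Eqs. (13)-(14): map an image point into the zero-roll/zero-pitch frame.

    The feature ray [u, v, f] is rotated by R(1, phi) R(2, theta) and
    re-projected; the depth scale cancels in the projection.
    """
    ray = np.array([p[0], p[1], f])
    xr, yr, zr = rot_x(phi) @ rot_y(theta) @ ray
    return np.array([f * xr / zr, f * yr / zr])

def depth_estimate(p_i, p_j, Lc, f):
    """Eq. (15): zc = Lc * f / Ls, where Ls is the image-space distance between
    two compensated features and Lc the known corresponding distance on the target."""
    Ls = np.linalg.norm(np.asarray(p_i) - np.asarray(p_j))
    return Lc * f / Ls
```

With the four corner features compensated this way, the marker's known geometry supplies Lc and eq. (15) gives the rough depth used in Jp.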
The last thing we need to address is distortion due to an offset from the origin of the quadrotor body frame to the origin of the camera frame, which is denoted as [δx, δy, δz]ᵀ. For this, consider another imaginary camera frame which is translated to compensate for the distance caused by rotation of the offset. Define the i-th feature location in this frame as Pis, for which the following relationship holds:

Pis = Pir + R(1, φ) R(2, θ) [δx, δy, δz]ᵀ    (16)

Now, the only difference between the virtual coordinate frame and the imaginary camera frame is the offset position of the camera. Therefore, the i-th feature coordinate defined in the virtual frame, Piv, is

Piv = Pis − [δx, δy, δz]ᵀ    (17)

pvi = (f / zci^v) [xci^v, yci^v]ᵀ    (18)

where pvi is the corresponding image feature coordinate of Piv.

B. Adaptive Sliding Mode Controller

To track the translational and rotational velocity references of the UAV given in eq. (10), an adaptive sliding mode controller is adopted. Since g(x) in eq. (1) is a 6-by-4 matrix, we need to augment it with a 6-by-2 matrix, gs, to make it invertible. This also requires that the input vector u be augmented with a slack variable vector, us. Eq. (1) can then be rewritten as

ẍ = f(x) + G(x)U − ν + fex(x),    (19)

where G = [g(x), gs], U = [uᵀ, usᵀ]ᵀ, and ν = gs us. If we set

gs = [ 1  0  0  0  0  0
       0  1  0  0  0  0 ]ᵀ,    (20)

then ν = gs us = [u5, u6, 0, 0, 0, 0]ᵀ, so we need to estimate the slack variables u5 and u6.

Define the sliding surface as

S = [s1, s2, s3, s4, s5, s6]ᵀ = ẋ − ẋr    (21)

where ẋr is a velocity reference vector, [ẋr, ẏr, żr, φ̇r, θ̇r, ψ̇r]ᵀ. As previously mentioned, the quadrotor system is under-actuated such that the x and y state variables cannot be controlled directly from the inputs. Therefore, let us define the velocity references for φ̇ and θ̇ as eqs. (22) and (23) to control the x and y positions:

φ̇r = (ÿ − ÿr) + kφ (ẏ − ẏr)    (22)
θ̇r = (ẍ − ẍr) + kθ (ẋ − ẋr)    (23)

where kφ and kθ are proportional gains.

Using the IBVS output from eq. (10), the reference velocity state variable, ẋr, is given by

ẋr = Er ṙd + φ̇r E4 + θ̇r E5 = −Er W Jpvᵀ e + φ̇r E4 + θ̇r E5    (24)

where Er = diag[1, 1, 1, 0, 0, 1], E4 = [0, 0, 0, 1, 0, 0]ᵀ and E5 = [0, 0, 0, 0, 1, 0]ᵀ.

In order to cancel the nonlinear terms in eq. (19), we need to define the estimated values of ν and fex(x), which we denote as ν̂ and f̂ex(x), respectively. The uncertainty term is defined as

∆ = −ν + fex = [u5, u6, Fgr(z), 0, 0, 0]ᵀ    (25)

Then equation (19) becomes

ẍ = f(x) + G(x)U + ∆.    (26)

We set the Lyapunov candidate function as

L = (1/2) SᵀS + (1/2Γ) ∆̃ᵀ∆̃    (27)

and define the control input as

U = G⁻¹(x)[−f(x) − ∆̂ + ẍr − C1 S − C2 sign(S)]    (28)

where C1 is a diagonal weighting matrix and C2 is a positive constant gain. By setting the uncertainty estimate update law as

d∆̂/dt = Γ S    (29)

where Γ is a positive semi-definite weighting matrix, the time derivative of the Lyapunov function becomes negative semi-definite:

L̇ = −Sᵀ(C1 S + C2 sign(S)) ≤ 0.    (30)

Therefore the entire system converges to its desired state.
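One control step built from eqs. (21), (28) and (29) might look as follows. This is a sketch under several assumptions: the reference acceleration ẍr and the augmented matrix G(x) are supplied by the caller, the adaptation law (29) is integrated with a simple Euler step of length dt (our choice, not stated in the paper), and the gain shapes (C1 diagonal, C2 scalar, Γ a matrix) follow the definitions above.

```python
import numpy as np

def adaptive_smc_step(xdot, xr_dot, xr_ddot, fx, G, Delta_hat,
                      C1, C2, Gamma, dt):
    """One step of the adaptive sliding mode law of eqs. (21), (28) and (29).

    xdot      : state rate [x, y, z, phi, theta, psi] derivatives
    xr_dot    : velocity reference from eq. (24)
    xr_ddot   : reference acceleration (e.g. obtained numerically)
    fx        : f(x) of eq. (1)
    G         : augmented 6x6 input matrix [g(x), gs] of eq. (19)
    Delta_hat : current estimate of the uncertainty term of eq. (25)
    """
    # Eq. (21): sliding surface.
    S = xdot - xr_dot

    # Eq. (28): control input cancelling nominal dynamics and uncertainty.
    U = np.linalg.solve(G, -fx - Delta_hat + xr_ddot - C1 @ S - C2 * np.sign(S))

    # Eq. (29): adaptation law for the uncertainty estimate (Euler step).
    Delta_hat = Delta_hat + Gamma @ S * dt

    # The first four entries of U are the physical inputs [u1..u4];
    # u5 and u6 are the slack variables and are discarded.
    return U[:4], Delta_hat
```

Here ẋr would come from eq. (24), i.e. the IBVS reference of eq. (10) remapped through Er, E4 and E5; obtaining ẍr by numerical differentiation is our assumption rather than a detail given in the paper.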
neuver is easier than attempting to land immediately via
IV. E XPERIMENTAL R ESULTS a three-dimensional maneuver). To do this the desired
In this section, the experimental setup is described and image feature locations are set to correspond to how
the proposed algorithm for autonomous landing on a moving they would look if the quadrotor were positioned at the
target is validated. desired location. Since the target usually first appears
A. Hardware Description at the edge of the FOV, this maneuver also moves the
target to the center where there is less risk of loosing
The quadrotor and landing target are shown in fig. 1. The
the target while landing.
length from each motor rotational axis to the center of the
3) IBVS-Guided Landing Mode: After the landing mode
quadrotor is 30 cm, the weight of the entire UAV system
is turned on, the desired image features are set to corre-
is 1700 grams including two batteries and the height is 19
spond to the position 0.20m directly over the target. If
cm. The onboard camera has a 50 degree of field of view
the error norm is below a defined threshold (implying
(FOV) and captures 320px × 240px images at 30fps which
that the quadrotor is at the target height and position)
are sent to the PC by a 1.2GHz onboard video transmitter.
the thrust is turned off so the quadrotor can finish
After control calculation, reference commands are sent to the
the landing maneuver. This small open-loop drop is
quadrotor by a 2.4GHz radio control (RC) transmitter which
necessary since the target is now so close to the camera
is connected to the PC via an Endurance R/C PCTx USB
that even small state errors can cause the camera to lose
dongle.
tracking for a number of reasons: the velocity of the
The quadrotor carries an inertial measurement unit (IMU)
features in image space becomes much higher, nonlinear
to measure angular rates which are used to estimate the Euler
distortions outside the camera calibration range become
angles. If the target is not detected, a Vicon [13] camera
much more significant, and the FOV is now very small.
system measures the position from which the translational
velocity is calculated. When the target is detected, the Vicon The ground target moves from its initial position (x, y and
data is used only for translational velocity. The entire control z) [0.1 m, 1 m, 0.07 m] with a velocity of approximately
system is depicted in fig 3. [0 m/s, − 0.07 m/s, 0 m/s]. During the tracking and
The landing pad is 90 cm long on each side and a fiducial landing modes, if the target is not detected for 2 seconds
marker is positioned approximately at the center. The four continuously, then the control logic is reverted back to
outside corners of the fiducial marker are used as the image patrol mode. The entire sequence from take-off to landing
features to compute Jp is operated automatically.

B. Mission Scenario C. Experimental Results


The entire flight sequence is depicted in fig. 4. There are The proposed landing algorithm is validated by experi-
three modes for a mission: ment. The vertical dash-dot line (14.2 second) in each of
1) Patrol Mode: After taking off, the quadrotor patrols over fig. 5 ∼ 11 shows the moment when the control logic
a certain area until the target is detected continuously switches to IBVS-based tracking mode, the vertical dotted
for a pre-determined time duration (0.2 seconds). In this line (18.2 second) shows the moment when logic switches
mode, the position data is obtained from Vicon. to IBVS-based landing mode, and the vertical dashed line
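The mode switching described in this subsection amounts to a small state machine. The sketch below is illustrative only; the function and state names are ours, while the 0.2 s detection window, the 2 s loss timeout, and the error-norm drop condition are the values quoted above.

```python
PATROL, TRACKING, LANDING, DONE = "patrol", "tracking", "landing", "done"

def update_mode(mode, target_seen_for, target_lost_for, at_tracking_setpoint,
                error_norm, drop_threshold):
    """Mission-mode transitions described in Section IV-B (illustrative sketch).

    target_seen_for / target_lost_for : continuous detection / loss times [s]
    at_tracking_setpoint              : True once the UAV holds 0.4 m over the target
    error_norm                        : image feature error norm in landing mode
    """
    if mode == PATROL and target_seen_for >= 0.2:
        return TRACKING                  # target confirmed: switch to IBVS tracking
    if mode in (TRACKING, LANDING) and target_lost_for >= 2.0:
        return PATROL                    # target lost too long: revert to patrol
    if mode == TRACKING and at_tracking_setpoint:
        return LANDING                   # stabilised 0.4 m above target: descend
    if mode == LANDING and error_norm < drop_threshold:
        return DONE                      # close enough: cut thrust, open-loop drop
    return mode
```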

Fig. 5. Time history of the position of the UAV (x, y and z)


Fig. 4. Autonomous flight sequence

C. Experimental Results

The proposed landing algorithm is validated by experiment. The vertical dash-dot line (14.2 second) in each of figs. 5 ∼ 11 shows the moment when the control logic switches to IBVS-based tracking mode, the vertical dotted line (18.2 second) shows the moment when the logic switches to IBVS-based landing mode, and the vertical dashed line (19.3 second) represents the moment when the mission is completed (i.e. when the thrust is turned off).

Figs. 5 and 6 show the position and attitude variables of the quadrotor during the mission. Note that the height of the quadrotor starts from 0.2 meter, which is the height of the markers used by the Vicon system to measure position. Also, the height of the quadrotor at the end of the mission is 0.27 meter, because the height of the target is 0.07 meter.

Fig. 6. Time history of the attitude of the UAV (φ, θ and ψ)
Fig. 7 shows the four control input signals for thrust, roll, pitch and yaw, and fig. 8 shows the reference velocity (eq. (10)) tracking along the x, y and z axes. These reference velocities are generated only once the IBVS-guided control mode is turned on (dash-dot line); before this there are no reference velocities (recorded as zero) and the controller tracks a guidance command calculated from Vicon position data. During the IBVS-guided tracking mode (from the dash-dot to the dotted line) and the landing mode (from the dotted to the dashed line), the proposed controller shows satisfactory tracking performance.

Fig. 7. Time history of the control inputs

Fig. 9 shows the depth estimate derived from eq. (15) and the depth value (considered to be the true depth) measured by the Vicon system (which is the same as the z position data in fig. 5). Note that eq. (15) calculates the depth between the camera and the target (0.05 m above the camera at rest), while the Vicon measurement is actually the height of the quadrotor markers, which are 0.2 meter above the ground at rest. Therefore there is an expected 0.25 m offset between the measurements.

Fig. 10 shows the error distance of the four image features in the image frame. Before the dash-dot line the image is not visible (i.e. Vicon control), so the error is set to zero. Between the dash-dot line and the dotted line the control logic is in tracking mode and the IBVS algorithm successfully tracks the image until it achieves the pre-landing setpoint of 0.4m above the target. The sudden change in tracking error at the dotted line is due to the desired image feature locations changing once the control logic enters the landing mode, which has a larger desired feature size.

Fig. 11 represents the ground effect estimated by the adaptation law of eq. (29). As expected, the estimated effect increases rapidly as the quadrotor approaches the ground.

V. CONCLUSION

In this paper, an autonomous landing control algorithm using image-based visual servoing (IBVS) for vertical take-off and landing (VTOL) unmanned aerial vehicles (UAVs) is described. The ground effect experienced during this maneuver is compensated for by an adaptive control law. The algorithm is validated by experiment, in which three control modes are defined: a patrol mode, a tracking mode and a landing mode. During the patrol mode, since the target is not detected yet, an external position sensor (i.e. Vicon) provides position guidance to the UAV.

After the target is detected, the control logic switches to the tracking mode, where position control is accomplished by tracking image features in the image frame. This mode positions the UAV above the target, where the UAV is then ready to perform the landing maneuver. In landing mode, another desired image feature is defined to bring the UAV to the landing target. Once the image feature has converged to its desired position, the throttle is turned off to allow the UAV to land on the target. If the target leaves the FOV for longer than a couple of seconds during either the tracking or landing modes, the control logic is switched back to the patrol mode, where the sequence can be started again once the target is found. The experimental results show satisfactory performance with the proposed IBVS in the nonlinear control loop. A video of this can be seen at http://www.youtube.com/watch?v=qrbVVJJzy Q.

Fig. 8. Velocities of the quadrotor helicopter (solid line) and its references (dashed line) during the IBVS-guided control
VI. ACKNOWLEDGMENTS

This work was supported by the KARI-University Partnership Program (grant No. 2009-09-sunggwa-7) from the Korea Aerospace Research Institute (KARI) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0003656).
Fig. 9. Depth estimation and its true values

Fig. 10. Error distance of the four corners in the image frame. The sudden change at 18.2 seconds is due to the difference in desired feature locations between tracking mode and landing mode.

Fig. 11. Estimated ground effect value with the adaptive sliding mode controller

REFERENCES

[1] L. Marconi, A. Isidori, and A. Serrani, "Autonomous vertical landing on an oscillating platform: an internal-model based approach," Automatica, vol. 38, pp. 21-32, 2002.
[2] L. Besnard, Y. Shtessel and B. Landrum, "Control of a Quadrotor Vehicle Using Sliding Mode Disturbance Observer," Proceedings of the 2007 American Control Conference, pp. 5230-5235, 2007.
[3] C. Coza and C. J. B. Macnab, "A New Robust Adaptive-Fuzzy Control Method Applied to Quadrotor Helicopter Stabilization," NAFIPS 2006 Annual Meeting of the North American Fuzzy Information Society, pp. 454-458, 2006.
[4] T. Dierks and S. Jagannathan, "Output Feedback Control of a Quadrotor UAV Using Neural Networks," IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 50-66, 2010.
[5] S. Saripalli, J. Montgomery and G. Sukhatme, "Visually-Guided Landing of an Unmanned Aerial Vehicle," IEEE Transactions on Robotics and Automation, 19(3), pp. 371-381, 2003.
[6] S. Lange, N. Sunderhauf, and P. Protzel, "A Vision Based Onboard Approach for Landing and Position Control of an Autonomous Multirotor UAV in GPS-Denied Environments," in Proceedings of the International Conference on Advanced Robotics (ICAR), 2009.
[7] M. Blosch, S. Weiss, D. Scaramuzza, and R. Siegwart, "Vision based MAV navigation in unknown and unstructured environments," in International Conference on Robotics and Automation (ICRA), 2010.
[8] L. R. G. Carrillo, E. Randon, A. Sanchez, A. Dzul and R. Lozano, "Stabilization and trajectory tracking of a quad-rotor using vision," Journal of Intelligent and Robotic Systems, 61(1-4), pp. 103-118, 2011.
[9] E. Altug, J. P. Ostrowski and R. Mahony, "Control of a Quadrotor Helicopter using Visual Feedback," Proceedings of the 2002 IEEE International Conference on Robotics and Automation, pp. 72-77, 2002.
[10] D. Lee, H. J. Kim, and S. Sastry, "Feedback linearization vs. adaptive sliding mode control for a quadrotor helicopter," Int. J. Control Autom. Syst., 7(3), pp. 419-428, 2009.
[11] B. Herisse, F.-X. Russotto, T. Hamel, and R. Mahony, "Hovering flight and vertical landing control of a VTOL unmanned aerial vehicle using optical flow," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2008, pp. 801-806.
[12] S. Hutchinson, G. Hager, and P. Corke, "A tutorial on visual servo control," IEEE Transactions on Robotics and Automation, 12(5), pp. 651-670, 1996.
[13] www.vicon.com
