
Output-feedback Image-based Visual Servoing for Multirotor Unmanned Aerial Vehicle Line Following

Muhammad Awais Rafique and Alan F. Lynch

Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, T6G 1H9 Canada.

Abstract—This paper considers visual servoing-based motion control of multirotor UAVs. We employ output-feedback image-based visual servoing to control the vehicle's pose with respect to a static planar visual target with linear structure (e.g., electric transmission lines or pipelines). The method uses measurements from inexpensive sensors typically found on-board: an inertial measurement unit (IMU) and a monocular computer vision system. Unlike existing work, it does not require linear velocity or position measurements, or an optical flow sensor. The method directly controls the relative pose to the visual target and does not require Global Navigation Satellite System (GNSS) measurements of the vehicle or target. The visual servoing method ensures the vehicle flies centered above the lines at a specified height and yaw. Such motion control is important in a number of applications such as efficient data collection for infrastructure inspection. Our work exploits the inherent robustness of an image-based approach where feature error is computed directly in the image plane. A virtual camera is combined with output feedback, and convergence of the closed loop is proven. The method is adaptive to vehicle mass, thrust constant, desired depth, and a constant disturbance force. Simulation and experimental results illustrate the method's performance and robustness to model uncertainty.

Index Terms—Output-feedback control, adaptive control, line following, image-based visual servoing, GNSS-denied environment, unmanned aerial vehicle

I. INTRODUCTION

Unmanned Aerial Vehicles (UAVs) are used in a number of applications such as surveillance, inspection, monitoring, search and rescue, and package delivery. In particular, UAVs are well-suited for efficient inspection of electric transmission lines and pipelines. Inspection of such infrastructure is an important and challenging problem and key to ensuring a high standard of service. For example, regular detailed inspection reduces the potentially disastrous effects of line failures. Given that lines are spread over vast areas and inhospitable terrain, using manned vehicle inspections is not efficient. Hence, UAVs are an emerging solution, e.g., [1]–[4].

Existing work has considered various aspects of using UAVs equipped with a video camera for inspecting linear structures. Some of this work focuses on the automatic extraction of useful information from the video. For example, [5] presents a computer vision technique for estimating the 3D position of power transmission lines from video obtained using a quadrotor UAV. The method monitors transmission line sag to ensure safe ground clearance. In [6] UAV video is used to extract the position of power lines and map vegetation along the transmission line corridor. In [7] a multi-UAV system is proposed to improve the efficiency of collecting transmission line inspection data. In [8] various remote sensing techniques and data sources for monitoring transmission lines are compared, and the authors eventually recommend a UAV-based approach. The problems of pipeline and transmission line inspection are similar. Work on pipeline inspection using UAVs includes [2], [9], [10].

The main topic of this paper is central to infrastructure inspection using UAVs. The objective is to automatically control the relative pose of the on-board sensor to what is being inspected. This ensures that consistent and accurate measurements are obtained to perform the inspection while allowing the UAV operator to focus on higher-level objectives. We assume that 3D linear structures are being inspected and that two or more lines are constrained to a horizontal plane. This assumption makes it possible to control the relative pose of the vehicle to the target using feedback from an on-board monocular computer vision system. Using feedback from a camera to perform motion control is known as visual servoing. A recent survey on the use of vision for sensing, flight control, navigation and guidance of UAVs is in [11].

Visual servoing methods are normally categorized into two main techniques: position-based visual servoing (PBVS) and image-based visual servoing (IBVS) [12], [13]. PBVS is a more traditional approach which involves estimating or reconstructing the robot's 3D pose and then applying a motion control algorithm [14], [15]. Reconstruction depends on an accurate 3D model of the target and camera calibration parameters. On the other hand, IBVS employs features computed from the image to directly control the relative pose. This approach is known to be insensitive to camera calibration error and does not require a 3D target model [16], [17]. Due to these benefits, we adopt an IBVS approach in this paper. Existing work has applied IBVS to linear features. For example, [18] develops IBVS for landing a fixed-wing UAV using line features. Other work involving IBVS and line features is given below.

Work which focuses on the closed-loop stability of IBVS can be divided into four major categories based on the image features used in the feedback law. Image features provide a measure of the error in vehicle pose [19]. The first IBVS method projects features onto a virtual spherical image plane [20]. The spherical projection removes angular velocity dependence from the image feature kinematics. This provides a triangular structure of the system dynamics so that backstepping can be applied. This approach is further studied in [21]–[24]. Work [25] discusses the difficulty in defining a feature for yaw control using spherical projection.


A second IBVS technique employs the homography matrix to define the feature error between the current and desired target views. A homography matrix between two images provides the relative orientation and translation of the camera [26]. The homography matrix can be used to control the pose of the vehicle. Homography-based techniques are studied in [27]–[30]. This method assumes a planar target and a small range of error, particularly for yaw. The third IBVS method uses a virtual spring approach for point features [31], [32]. Here, the translational velocity component of the image feature kinematics takes a simple form. The work assumes that the desired height is known and the image plane is parallel to the planar target. Such an assumption is clearly restrictive for traditional multirotor UAVs as their horizontal motion requires a nonzero roll or pitch. The fourth IBVS virtual camera method uses attitude estimates from an IMU to define a virtual image plane which remains parallel to the planar horizontal target [19], [25], [33]–[39]. In this work we use the virtual camera approach and moment features to provide a linear kinematics with decoupled structure. Due to the virtual camera, the image kinematics are independent of roll and pitch rates, and this triangular dynamics structure simplifies control design. The abovementioned work focuses on the point feature case, and less work has considered the line feature case. In [40], line following is considered using Euclidean Plücker coordinates. In [41], the work in [40] is extended using bi-normalized Euclidean Plücker coordinates while considering uncertainty in the depth of the measured image feature. This work uses the point feature IBVS result in [20] and inherits its lack of sensitivity to vehicle height. Other aspects of IBVS have attracted the attention of researchers, such as improved open-loop reference trajectory generation [42]. In our proposed method the focus is on the design of feedback control, and a constant reference feature is used.

Little work has been performed on the virtual camera method for line following. The paper [25] proposes a virtual camera-based IBVS with a simple PID controller for line features. A back-stepping virtual camera-based IBVS for line features was proposed in [37]. However, these two papers assume the vehicle's linear velocity measurements are available. Both of these articles assume that vehicle mass and thrust constant are known constants. Further, they do not include disturbance forces.

Traditionally, UAV motion control tasks rely on GNSS for vehicle position and linear velocity estimation. Indeed, GNSS is often well suited for absolute positioning of the vehicle in its environment. Inspection tasks benefit from relative positioning to a visual target, and GNSS data alone is not sufficient as target coordinates are often unknown or inaccurate. For example, accurate GNSS coordinates of a power transmission line are generally not available. In such outdoor applications GNSS data is often available and can continue to be used alongside vision-based control for enhanced GNC (Guidance, Navigation and Control) capability. For example, GNSS-based navigation can bring the vehicle close enough to the power line, and control can be switched to visual servoing for inspection. In indoor or underground applications, GNSS is unavailable and vision-based control is a key technology for missions involving relative positioning.

We propose an output feedback approach to avoid measuring linear velocity or using an optical flow sensor. A few output feedback IBVS approaches for point features have been proposed in the past [43], [39]. However, the linear feature case has not been considered to date.

As in [37], this paper employs a dynamic IBVS method to control the relative pose of a quadrotor vehicle viewing a target consisting of two or more parallel lines. The single camera is downward facing, and the velocity dynamics of the vehicle and the image feature kinematics are used to prove the convergence of the design. The line moment features are defined in a virtual camera frame that removes nonlinearity from the system model. In contrast to [37], an output feedback control method is employed which eliminates the need for GNSS or optical flow sensors. To estimate the linear velocity, a simple linear observer is employed, motivated by the output feedback design [44] and used in [36] for point features. The proposed method considers uncertainty in the system dynamics. This uncertainty includes vehicle mass and the thrust constant, which depends on battery voltage. This uncertainty significantly affects thrust generation and vertical motion of the vehicle, as remarked experimentally in [19]. Uncertainty also includes the unknown desired depth constant which appears in the moment feature kinematics. Finally, we include an additive constant disturbance acceleration input to the translational dynamics. This disturbance models attitude estimate bias and disturbance forces. A robust adaptive control law is proposed to compensate for the model uncertainty and disturbance.

The paper proves the exponential stability of the closed loop. This should be compared to the proof in [37] which only shows asymptotic stability. Moreover, both simulation and experiments have been performed to validate the proposed approach.

This paper is organized as follows: Section II describes the modeling of the vehicle, line features, moment features, and their kinematics. Section III presents the control design and stability proof. Section IV presents the simulation results and Section V the experimental results. Conclusions and future work are described in Section VI.

II. DYNAMIC IBVS MODELING

We consider a quadrotor UAV with a downward-facing camera which sees a planar horizontal target containing more than one line. Our objective is to control the relative pose of the vehicle using IBVS, that is, the heading or yaw angle, the lateral distance from the lines, and the height of the vehicle above the lines. The motion along the line target is controlled manually by the user, who assigns the reference value for the pitch of the vehicle. It is not possible to automatically control the vehicle's linear velocity along the lines since this quantity is not measurable for an unmarked line target on a plain background. The modelling used to perform the design is described in this section.


A. Frames

As shown in Fig. 1, we consider a navigation frame $\mathcal{N}$ with basis vectors $\{n_1, n_2, n_3\}$ pointing north, east, down, respectively, and having its origin at a fixed point on the earth. We assume the optical center of the camera coincides with the vehicle center of mass and define the camera frame $\mathcal{C}$ with basis vectors $\{c_1, c_2, c_3\}$ pointing forward, right, and down, respectively, relative to the vehicle. It should be noted that camera, real camera, or body frame all refer to $\mathcal{C}$ in this paper. The real image plane with basis $\{y_1, y_2\}$ is parallel to the plane defined by $\{c_1, c_2\}$. We introduce a virtual camera frame $\mathcal{V}$ defined by basis $\{\nu_1, \nu_2, \nu_3\}$ whose origin coincides with that of the real camera frame but which has zero roll and pitch with respect to $\mathcal{N}$. This implies the $n_1$–$n_2$ plane is parallel to the $\nu_1$–$\nu_2$ plane.

Fig. 1: Navigation frame $\mathcal{N}$, real camera frame $\mathcal{C}$, virtual camera frame $\mathcal{V}$, real image and virtual image planes.

The position of the origin of $\mathcal{C}$ relative to the origin of $\mathcal{N}$ is denoted $p^n$ expressed in $\mathcal{N}$. The relative orientation of $\mathcal{C}$ and $\mathcal{N}$ is described by the rotation matrix $R \in SO(3)$. It is convenient to parameterize $R$ with ZYX-Euler angles. We denote roll $\phi$, pitch $\theta$, and yaw $\psi$. We have

$$R = R_\psi R_\theta R_\phi$$

where

$$R_\phi = \begin{bmatrix} 1 & 0 & 0 \\ 0 & c_\phi & -s_\phi \\ 0 & s_\phi & c_\phi \end{bmatrix}, \quad R_\theta = \begin{bmatrix} c_\theta & 0 & s_\theta \\ 0 & 1 & 0 \\ -s_\theta & 0 & c_\theta \end{bmatrix}, \quad R_\psi = \begin{bmatrix} c_\psi & -s_\psi & 0 \\ s_\psi & c_\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

where $c_\xi = \cos\xi$, $s_\xi = \sin\xi$. The rotation matrix describing the orientation of $\mathcal{C}$ and $\mathcal{V}$ is

$$R_c^v = R_\theta R_\phi = R_{\theta\phi}$$

B. Quadrotor UAV Dynamics

The quadrotor dynamics in $\mathcal{V}$ as presented in [19] is given by

$$\dot v^v = -\left[\dot\psi e_3\right]_\times v^v + g e_3 + \frac{F^v}{m} + \delta \qquad (1a)$$
$$\dot\eta = W(\eta)\,\omega^c \qquad (1b)$$
$$\dot\omega^c = -J^{-1}[\omega^c]_\times J\omega^c + J^{-1}\tau^c \qquad (1c)$$

where $v^v = [v_1^v, v_2^v, v_3^v]^T$ is the linear velocity in $\mathcal{V}$, $\eta = [\phi, \theta, \psi]^T$, $\omega^c = [\omega_1^c, \omega_2^c, \omega_3^c]^T$ is the angular velocity in $\mathcal{C}$, $e_3 = [0, 0, 1]^T$, $m$ is mass, $g$ is gravitational acceleration, $J = \mathrm{diag}(J_1, J_2, J_3)$ is inertia,

$$W(\eta) = \begin{bmatrix} 1 & s_\phi t_\theta & c_\phi t_\theta \\ 0 & c_\phi & -s_\phi \\ 0 & \frac{s_\phi}{c_\theta} & \frac{c_\phi}{c_\theta} \end{bmatrix}$$

$t_\xi = \tan\xi$, and

$$[x]_\times = \begin{bmatrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{bmatrix}, \quad x = [x_1, x_2, x_3]^T$$

The input vector $\tau^c = [\tau_1^c, \tau_2^c, \tau_3^c]^T$ is the torque due to the propellers expressed in $\mathcal{C}$, and $F^v = R_c^v F^c$ is the force input vector in $\mathcal{V}$, where $F^c = -T_M e_3$ and $T_M = K_T f_T$ is the total thrust generated by the propellers, with the normalized thrust input denoted $f_T \in [0, 1]$.

As shown in [19], [45], during flight the thrust constant slowly decreases with battery voltage and this significantly affects the vehicle's open-loop behaviour. Over a short time frame we can effectively treat $K_T$ as an unknown constant parameter, and we will adapt the control to account for slowly changing $K_T$. The mass of the vehicle $m$ is taken as an unknown parameter. This allows components on the vehicle to be changed without affecting motion control performance (e.g., heavier batteries could be used for longer flights). The vehicle payload and its mass can vary depending on the application (e.g., additional sensors might be added to achieve obstacle avoidance). The acceleration $\delta = [\delta_1, \delta_2, \delta_3]^T$ models a constant unknown disturbance to the linear velocity dynamics. It accounts for a range of uncertainty including attitude estimate bias [19] or constant external forces such as wind or rotor drag [46].

C. Line Modeling

Consider a static 3D point $P = [X_1, X_2, X_3]^T$ expressed in $\mathcal{C}$. The projection of $P$ onto the image plane is denoted $p = [y_1, y_2]^T$ and given by

$$p = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \frac{\lambda}{X_3}\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \qquad (2)$$

where $\lambda$ is the focal length of the camera, whose value does not need to be known exactly. As in [47], the kinematics of the point feature for a fixed point in space with a moving camera is

$$\dot p = \begin{bmatrix} -\frac{\lambda}{X_3} & 0 & \frac{y_1}{X_3} \\ 0 & -\frac{\lambda}{X_3} & \frac{y_2}{X_3} \end{bmatrix} v^c + \begin{bmatrix} \frac{y_1 y_2}{\lambda} & -\left(\lambda + \frac{y_1^2}{\lambda}\right) & y_2 \\ \lambda + \frac{y_2^2}{\lambda} & -\frac{y_1 y_2}{\lambda} & -y_1 \end{bmatrix} \omega^c \qquad (3)$$

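To make the camera model concrete, the following is a minimal numerical sketch of the projection (2) and the point feature kinematics (3). It is not part of the paper's implementation; the function names are ours and the default focal length is only the nominal value from Table II.

```python
import numpy as np

def project_point(P, lam=415.0):
    """Pinhole projection (2): P = [X1, X2, X3] expressed in C, with X3 > 0."""
    X1, X2, X3 = P
    return np.array([lam * X1 / X3, lam * X2 / X3])

def point_feature_rates(p, X3, v_c, w_c, lam=415.0):
    """Point feature kinematics (3): image velocity of a fixed 3D point for a
    camera with linear velocity v_c and angular velocity w_c, both in C."""
    y1, y2 = p
    L_v = np.array([[-lam / X3, 0.0, y1 / X3],
                    [0.0, -lam / X3, y2 / X3]])
    L_w = np.array([[y1 * y2 / lam, -(lam + y1**2 / lam), y2],
                    [lam + y2**2 / lam, -y1 * y2 / lam, -y1]])
    return L_v @ np.asarray(v_c) + L_w @ np.asarray(w_c)
```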

A line in 3D is represented by the intersection of two planes, where a plane is defined by its normal vector $n = [a, b, c]^T$ and a point $P_0 = [X_{10}, X_{20}, X_{30}]^T$ on the plane. For any arbitrary point $P = [X_1, X_2, X_3]^T$ lying on the plane, the vector $\overrightarrow{P_0 P}$ which points from $P_0$ to $P$ must be orthogonal to $n$. That is, the dot product $\overrightarrow{P_0 P}\cdot n$ must be zero:

$$\overrightarrow{P_0 P}\cdot n = a(X_1 - X_{10}) + b(X_2 - X_{20}) + c(X_3 - X_{30}) = n^T P + d = 0$$

where $d = -aX_{10} - bX_{20} - cX_{30}$. We consider a 3D line $L$ represented by the intersection of two planes

$$n_i^T P + d_i = a_i X_1 + b_i X_2 + c_i X_3 + d_i = 0, \quad i = 1, 2 \qquad (4)$$

with $n_1 \times n_2 \neq 0$, where $n_i = [a_i, b_i, c_i]^T$ is the normal vector of the $i$th plane expressed in $\mathcal{C}$.

Fig. 2: A line $L$ represented as the intersection of two planes.

The representation of a 3D line as the intersection of two planes does not define a unique pair of planes: a given 3D line can be represented by an infinite number of pairs of planes. Without loss of generality, and as shown in Fig. 2, we choose one of the planes parallel to the horizontal target in order to simplify the image kinematics below. We exclude the degenerate case $d_1 = d_2 = 0$, which means $L$ does not pass through the focal point or origin of $\mathcal{C}$. For a downwards-facing camera this case is impractical, as it corresponds to the line passing through the camera or to the vehicle having a $90^\circ$ roll or pitch. As in [48], the projection of $L$ for a camera of unit focal length with principal point or image center at $(0, 0)$ can be parametrized as

$$l_u^T p_h = A y_1 + B y_2 + C = 0 \qquad (5)$$

with

$$A = \begin{vmatrix} a_1 & d_1 \\ a_2 & d_2 \end{vmatrix}, \quad B = \begin{vmatrix} b_1 & d_1 \\ b_2 & d_2 \end{vmatrix}, \quad C = \begin{vmatrix} c_1 & d_1 \\ c_2 & d_2 \end{vmatrix}$$

and $l_u = [A, B, C]^T = [n_1, n_2][d_1, d_2]^T$ being the vector representation of the 2D line in homogeneous form, and $p_h = [y_1, y_2, 1]^T$ the representation of the 2D point in homogeneous form. For a camera of focal length $\lambda$ and image center at $(y_{10}, y_{20})$, the projection of $L$ is

$$l_\lambda = H l_u \qquad (6)$$

where

$$H = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -y_{10} & -y_{20} & \lambda \end{bmatrix}$$

We remark that the representation of the projection of a 3D line into the 2D image plane is non-minimal, as any scalar multiple of $l_\lambda$ represents the same line.

As shown in Fig. 3, a 2D line in the image plane can be parameterized by two parameters $\alpha$ and $\rho$. The parameter $\alpha$ is the angle the line makes with the $Y_1$-axis and $\rho$ is the perpendicular distance from the line to the origin $O_I$ of the image frame. We have

$$l_\lambda^T p_h = y_1\sin\alpha + y_2\cos\alpha - \rho = 0 \qquad (7)$$

where $l_\lambda = [\sin\alpha, \cos\alpha, -\rho]^T$ with $\alpha \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right]$. This equation is often referred to as the normal form of a 2D line.

Fig. 3: Projection of a line parameterized by $\alpha$ and $\rho$.

Using (2), (3), (4), and (7) gives the line image feature kinematics

$$\dot l = \begin{bmatrix} \dot\alpha \\ \dot\rho \end{bmatrix} = \begin{bmatrix} \sigma_\alpha\lambda\sin\alpha & \sigma_\alpha\lambda\cos\alpha & -\sigma_\alpha\rho \\ \sigma_\rho\lambda\sin\alpha & \sigma_\rho\lambda\cos\alpha & -\sigma_\rho\rho \end{bmatrix} v^c + \begin{bmatrix} \frac{\rho\sin\alpha}{\lambda} & \frac{\rho\cos\alpha}{\lambda} & 1 \\ \left(\frac{\rho^2}{\lambda} + \lambda\right)\cos\alpha & -\left(\frac{\rho^2}{\lambda} + \lambda\right)\sin\alpha & 0 \end{bmatrix} \omega^c \qquad (8)$$

where $l = [\alpha, \rho]^T$, $\sigma_\alpha = -f_\alpha\cos\alpha$, $\sigma_\rho = f_\alpha\rho\sin\alpha + f_\rho$, $f_\alpha = \frac{a_i - b_i\tan\alpha}{d_i\lambda}$, and $f_\rho = \frac{\frac{b_i\rho}{\cos\alpha} + c_i\lambda}{d_i\lambda}$.

D. Transforming Line Features to the Virtual Camera Frame

As discussed above, any point or vector expressed in $\mathcal{C}$ can be transformed to $\mathcal{V}$ using the rotation matrix $R_c^v = R_\theta R_\phi = R_{\theta\phi}$. Therefore, $P^v = R_{\theta\phi}P$ and $n_i^v = R_{\theta\phi}n_i$. Substituting $P = R_{\theta\phi}^T P^v$ in (4), we have

$$n_i^T\left(R_{\theta\phi}^T P^v\right) + d_i = \left(R_{\theta\phi}n_i\right)^T P^v + d_i = n_i^{vT} P^v + d_i^v = 0 \qquad (9)$$

where $i = 1, 2$. Hence, from (5) and (9) we have

$$l_u^v = [n_1^v, n_2^v]\begin{bmatrix} d_1 \\ d_2 \end{bmatrix} = R_{\theta\phi}[n_1, n_2]\begin{bmatrix} d_1 \\ d_2 \end{bmatrix} = R_{\theta\phi}l_u.$$

Substituting $l_\lambda^v = H l_u^v$ from (6), we have

$$l_\lambda^v = \begin{bmatrix} l_{\lambda 1}^v \\ l_{\lambda 2}^v \\ l_{\lambda 3}^v \end{bmatrix} = \begin{bmatrix} \sin\alpha^v \\ \cos\alpha^v \\ -\rho^v \end{bmatrix} = H R_{\theta\phi} H^{-1} l_\lambda = H R_{\theta\phi} H^{-1}\begin{bmatrix} \sin\alpha \\ \cos\alpha \\ -\rho \end{bmatrix} \qquad (10)$$

Therefore, the line features in the virtual camera frame are

$$l^v = \begin{bmatrix} \alpha^v \\ \rho^v \end{bmatrix} = \begin{bmatrix} \arctan\dfrac{l_{\lambda 1}^v}{l_{\lambda 2}^v} \\[6pt] \dfrac{-l_{\lambda 3}^v}{\sqrt{l_{\lambda 1}^{v\,2} + l_{\lambda 2}^{v\,2}}} \end{bmatrix}.$$

Since we consider a horizontal target containing $N > 1$ parallel lines, it can be shown that

$$\sigma_{\alpha k}^v = 0, \qquad \sigma_{\rho k}^v = \frac{-1}{X_3^v}$$

where $\alpha_k^v, \rho_k^v$ denote the line features of the $k$th line. In the experimental validation of the control law in Section V we consider a linear target with a change in direction to demonstrate the method's robustness.

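The transformation (10) of a measured line to the virtual image plane is straightforward to implement once roll and pitch estimates are available. The sketch below is our illustration (not the on-board code); it assumes the nominal focal length and image centre of Table II and resolves the scale ambiguity of $l_\lambda^v$ so that $\cos\alpha^v \ge 0$.

```python
import numpy as np

def R_theta_phi(phi, theta):
    """Rotation R_c^v = R_theta R_phi (Section II-A) that removes roll and pitch."""
    c, s = np.cos, np.sin
    R_phi = np.array([[1, 0, 0], [0, c(phi), -s(phi)], [0, s(phi), c(phi)]])
    R_theta = np.array([[c(theta), 0, s(theta)], [0, 1, 0], [-s(theta), 0, c(theta)]])
    return R_theta @ R_phi

def line_to_virtual(alpha, rho, phi, theta, lam=415.0, y0=(320.0, 240.0)):
    """Map a measured line (alpha, rho) in the real image plane to (alpha_v, rho_v)
    in the virtual image plane using (6) and (10)."""
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [-y0[0], -y0[1], lam]])
    l_lam = np.array([np.sin(alpha), np.cos(alpha), -rho])
    l_v = H @ R_theta_phi(phi, theta) @ np.linalg.inv(H) @ l_lam
    if l_v[1] < 0:                 # resolve the scale ambiguity so that cos(alpha_v) >= 0
        l_v = -l_v
    alpha_v = np.arctan2(l_v[0], l_v[1])
    rho_v = -l_v[2] / np.hypot(l_v[0], l_v[1])
    return alpha_v, rho_v
```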

Using (8), the $k$th line feature kinematics is

$$\dot\alpha_k^v = \dot\psi$$
$$\dot\rho_k^v = \frac{-1}{X_3^v}\begin{bmatrix} \lambda\sin\alpha_k^v & \lambda\cos\alpha_k^v & -\rho_k^v \end{bmatrix} v^v \qquad (11)$$

It is worth mentioning that using the virtual camera frame greatly simplifies the line feature kinematics. The dynamics for $\alpha_k^v$ is only a function of the yaw rate $\dot\psi$. This is to be expected because the angle of a line as seen in the virtual image plane, which is parallel to the target, should not be affected by linear velocity or roll and pitch rate. Further, the depth $X_3^v$ of all lines in the horizontal target in the virtual image plane is the same.

E. Line Moment Features

In this subsection we define line moment features which further simplify the feature kinematics (11). We start by defining

$$\rho_m = \frac{1}{N}\sum_{k=1}^N \rho_k^v, \qquad \alpha_m = \frac{1}{N}\sum_{k=1}^N \alpha_k^v$$

Since the lines are parallel, the angle $\alpha_k^v$ is independent of $k$; therefore $\alpha_k^v = \alpha_m$. Although all lines in the target have the same $\alpha_k$, in practice different values of $\alpha_k$ are obtained due to measurement error or when targets are not perfectly linear. Using a mean value of $\alpha_k$ helps reduce the effect of these nonidealities. The kinematics of $\alpha_m, \rho_m$ are given by

$$\dot\alpha_m = \dot\psi$$
$$\dot\rho_m = \frac{-1}{X_3^v}\begin{bmatrix} \lambda\sin\alpha_m & \lambda\cos\alpha_m & -\rho_m \end{bmatrix} v^v \qquad (12)$$

The mean distance $\rho_m$ is a measure of the lateral position of the lines in the image and of the relative lateral displacement of the vehicle to the target. Next, we define

$$\mu = \sum_{k=1}^N \left(\rho_k^v - \rho_m\right)^2 \qquad (13)$$

which is a measure of the distance between the lines seen in the virtual image plane and provides information about the height of the vehicle. This is because the lines appear closer together as vehicle height increases. Its dynamics can be obtained by differentiating (13) and using (11) and (12). We obtain

$$\dot\mu = \frac{2\mu}{X_3^v}v_3^v \qquad (14)$$

As in [19] it can be shown that $X_3^v\sqrt{\mu}$ is a constant, and this leads to

$$X_3^v\sqrt{\mu} = X_3^{v*}\sqrt{\mu^*} \qquad (15)$$

where $X_3^{v*}$ is the desired depth or height above the target and $\mu^*$ is the desired value of $\mu$ at the vehicle's reference configuration. We remark that $\mu^*$ is computed directly from the image of the target when the vehicle is in its desired configuration. However, in order to improve usability of the control law, we assume that no value of $X_3^{v*}$ is available. Only an image of the desired goal configuration is needed. This image does not provide a value of $X_3^{v*}$ unless we assume knowledge of accurate camera calibration parameters and target geometry. Since these last two assumptions are impractical, we treat $X_3^{v*}$ as an unknown parameter in the control design.

Next we define three line moment features which relate to the height, lateral distance, and yaw of the vehicle relative to the target. The height moment feature is

$$s_h = \sqrt{\frac{\mu^*}{\mu}}$$

Taking its derivative and using (15) and (14) we have

$$\dot s_h = -\frac{1}{X_3^{v*}}v_3^v \qquad (16)$$

As discussed earlier, $\rho_m$ is a measure of the lateral position of the vehicle. However, its sensitivity is inversely related to height. To obtain a moment feature kinematics which depends linearly on lateral distance we define

$$s_l = \rho_m s_h \qquad (17)$$

Taking the time derivative of $s_l$ and using (12), (16), and (15) we have

$$\dot s_l = \frac{-\lambda}{X_3^{v*}}\begin{bmatrix} \sin\alpha_m^v & \cos\alpha_m^v \end{bmatrix}\begin{bmatrix} v_1^v \\ v_2^v \end{bmatrix} = \frac{-\lambda}{X_3^{v*}}\left(v_1^v\sin\alpha_m^v + v_2^v\cos\alpha_m^v\right) \qquad (18)$$

The quantity $(v_1^v\sin\alpha_m^v + v_2^v\cos\alpha_m^v)$ in (18) is the projection of the linear velocity in $\mathcal{V}$ along the direction perpendicular to the lines.

The yaw angle moment feature is defined as

$$s_\psi = \frac{1}{N}\sum_{k=1}^N \alpha_k^v = \alpha_m^v$$

and its dynamics is

$$\dot s_\psi = \dot\psi \qquad (19)$$

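The moment features above reduce to a few lines of arithmetic on the per-line virtual features. The following sketch (ours, not the authors' code) computes $s_\psi$, $s_h$, $s_l$ from $\alpha_k^v$, $\rho_k^v$ and a reference $\mu^*$ obtained once from the image at the desired pose, together with the corresponding feature errors $e_\psi = s_\psi$, $e_h = s_h - 1$, $e_l = s_l$ used by the outer loop in Section III.

```python
import numpy as np

def moment_features(alpha_v, rho_v, mu_star):
    """Line moment features of Section II-E from the per-line virtual features.
    alpha_v, rho_v: arrays of alpha_k^v, rho_k^v for the N >= 2 detected lines.
    mu_star: value of mu computed once from the image at the desired pose."""
    rho_m = np.mean(rho_v)                 # mean line distance
    mu = np.sum((rho_v - rho_m) ** 2)      # line spread (13), carries height information
    s_h = np.sqrt(mu_star / mu)            # height feature
    s_l = rho_m * s_h                      # lateral feature (17)
    s_psi = np.mean(alpha_v)               # yaw feature (mean line angle)
    return s_psi, s_h, s_l

def feature_errors(s_psi, s_h, s_l):
    """Moment feature errors used by the outer loop: e_psi, e_h, e_l."""
    return s_psi, s_h - 1.0, s_l
```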

III. DYNAMIC IBVS FOR LINE FOLLOWING

A. Control Structure

To achieve the line tracking objective we use an inner-outer loop control structure as shown in Fig. 4. The image feature kinematics and linear velocity dynamics comprise the open-loop outer subsystem dynamics. The input to these dynamics is taken as roll, pitch, yaw, and thrust. Therefore, an outer loop control designs a state feedback for these inputs. The computed attitude is fed as a reference to an inner loop controller which determines the rotational states of the vehicle. The outer loop control also computes the normalized thrust input $f_T$, which is sent directly to the vehicle.

Fig. 4: Inner-outer loop control structure.

The outer loop dynamics can be divided into three decoupled subsystems: yaw, height, and lateral. We define the moment feature errors as $e_\psi = s_\psi$, $e_h = s_h - 1$, and $e_l = s_l$. Using (19), (16), (18), (1a), and $\bar f_T = f_T\cos\theta\cos\phi$, we have the error dynamics for the outer loop:

$$\dot e_\psi = \dot\psi \qquad (20a)$$
$$\dot e_h = -\frac{1}{X_3^{v*}}v_3^v \qquad (20b)$$
$$\dot v_3^v = g - \frac{K_T\bar f_T}{m} + \delta_3 \qquad (20c)$$
$$\dot e_l = \frac{-\lambda}{X_3^{v*}}\left(v_1^v\sin\alpha_m^v + v_2^v\cos\alpha_m^v\right) \qquad (20d)$$
$$\dot v_2^v = -\dot\psi v_1^v + \frac{K_T\bar f_T}{m\cos\theta}\tan\phi + \delta_2 \qquad (20e)$$

where (20a) forms the first-order yaw subsystem, (20b)–(20c) is the height subsystem, and (20d)–(20e) is the lateral subsystem. In the height and lateral subsystems, the linear velocities $v_i^v$, $i = 1, 2, 3$ are unmeasured. Therefore we design an observer to estimate these states. Furthermore, adaptive control will account for the unknown parameters $K_T$, $X_3^{v*}$, $m$ and the constant disturbance $\delta$.

B. Controller and Observer Design

Defining the inner loop error for yaw $\tilde\psi = \psi - \psi^*$, if we use the yaw reference $\psi^* = -K_\psi\int_0^t e_\psi(\tau)\,d\tau$ with $K_\psi > 0$, the yaw subsystem in (20a) becomes

$$\dot e_\psi = -K_\psi e_\psi + \dot{\tilde\psi} \qquad (21)$$

which is exponentially stable assuming perfect inner-loop tracking (i.e., $\tilde\psi = 0$). Before presenting the controller and observer design for the remaining two subsystems, we remark that it is unnecessary to estimate $v_i^v$, $i = 1, 2, 3$ to ensure closed-loop convergence. As shown below, transformed velocities can be used to derive the control, with the transformation depending on unknown model parameters. For example, for the height subsystem we estimate a scaled relative velocity defined by $v_h = v_3^v/X_3^{v*}$. Expressing (20b), (20c) in terms of $v_h$, we rewrite the height subsystem as

$$\dot e_h = -v_h$$
$$\dot v_h = b_h\left(D_h - \bar f_T\right) \qquad (22)$$

where $b_h = \frac{K_T}{m X_3^{v*}}$ and $D_h = \frac{m}{K_T}(g + \delta_3)$. We remark that $D_h$ is the value of $\bar f_T$ at hover.

Subsystem (22) involves one unmeasured state $v_h$ and two unknown parameters $b_h$ and $D_h$. Our design will estimate $D_h$ and $v_h$ and will be robust to error in the parameter $b_h$. Consider the following observer and parameter update law for (22):

$$\dot{\hat e}_h = -\hat v_h + l_{h1}(e_h - \hat e_h)$$
$$\dot{\hat v}_h = -l_{h1}l_{h2}(e_h - \hat e_h) \qquad (23)$$
$$\dot{\hat D}_h = -\beta_h\left((\gamma_h - l_{h2})e_h + l_{h2}\hat e_h - 2v_h + \hat v_h\right)$$

where $\hat e_h, \hat v_h, \hat D_h$ denote estimated quantities, and $l_{h1}, l_{h2}, \gamma_h, \beta_h$ are control gains to be determined.

The parameter update law is written in terms of the unknown state $v_h$ to simplify the analysis. Ultimately, the control law is expressed in terms of known quantities below in (25). Define the state estimation errors $\tilde e_h = e_h - \hat e_h$, $\tilde v_h = v_h - \hat v_h$ and the parameter error $\tilde D_h = \hat D_h - D_h$. The error dynamics of (22), (23) is

$$\dot{\tilde e}_h = -v_h + \hat v_h - l_{h1}\tilde e_h = -\tilde v_h - l_{h1}\tilde e_h$$
$$\dot{\tilde v}_h = b_h\left(D_h - \bar f_T\right) + l_{h1}l_{h2}\tilde e_h \qquad (24)$$
$$\dot{\tilde D}_h = -\beta_h\left(\gamma_h e_h - l_{h2}\tilde e_h - v_h - \tilde v_h\right)$$

To stabilize the dynamics in (22) and (24), the control law is taken as

$$\bar f_T = k_h\left(\hat v_h - (l_{h2} + \gamma_h)e_h + l_{h2}\hat e_h\right) + \hat D_h$$
$$\phantom{\bar f_T} = k_h\left(\hat v_h - (l_{h2} + \gamma_h)e_h + l_{h2}\hat e_h\right) + \beta_h(\hat e_h - 2e_h) + \beta_h\int_0^t\left((l_{h2} - l_{h1} - \gamma_h)e_h + (l_{h1} - l_{h2})\hat e_h\right)d\tau \qquad (25)$$

where $k_h$ is a control gain to be determined.

In the lateral subsystem (20d)–(20e), we have two unknown states $v_1^v, v_2^v$, three unknown parameters $K_T, m, X_3^{v*}$ and a constant disturbance $\delta_2$. In order to stabilize the lateral feature error $e_l$, we define a scaled version of the component of velocity along the shortest path connecting the origin of $\mathcal{C}$ and the lines:

$$v_l = \frac{\lambda}{X_3^{v*}}\left(v_1^v\sin\alpha_m^v + v_2^v\cos\alpha_m^v\right) = \frac{\lambda}{X_3^{v*}}v_2^v - \xi_1(t)$$

where

$$\xi_1(t) = \frac{-\lambda}{X_3^{v*}}\left(v_1^v\sin\alpha_m^v + v_2^v(\cos\alpha_m^v - 1)\right)$$

As with the height subsystem, we rewrite the lateral error dynamics (20d)–(20e) as

$$\dot e_l = -v_l$$
$$\dot v_l = b_l\left(\frac{\tan(\phi^* + e_\phi)}{\cos\theta} - D_l\right) + \xi(t) \qquad (26)$$

where $b_l = \frac{\lambda K_T D_h}{m X_3^{v*}}$ and $D_l = \frac{-m\,\delta_2}{K_T D_h}$ are unknown constants, $v_l$ is an unmeasured state,

$$\xi(t) = \xi_2(t) - \dot\xi_1(t)$$

with

$$\xi_2(t) = \frac{\lambda K_T\tan\phi}{X_3^{v*}m\cos\theta}\tilde f_T - \dot\psi v_1^v$$

$\tilde f_T = \bar f_T - D_h$, and $e_\phi = \phi - \phi^*$. We note that (26) depends on the yaw subsystem variable $\dot\psi$ and the height subsystem variable $f_T$.


Since the closed-loop height and yaw subsystems will be proven exponentially stable, we treat this coupling as an exponentially decaying disturbance. We propose the following observer and adaptive law for the unknown parameter $D_l$:

$$\dot{\hat e}_l = -\hat v_l + l_{l1}(e_l - \hat e_l)$$
$$\dot{\hat v}_l = -l_{l1}l_{l2}(e_l - \hat e_l) \qquad (27)$$
$$\dot{\hat D}_l = \beta_l\left((\gamma_l - l_{l2})e_l + l_{l2}\hat e_l - 2v_l + \hat v_l\right)$$

where $\hat e_l, \hat v_l, \hat D_l$ denote estimated quantities and $l_{l1}, l_{l2}, \gamma_l, \beta_l$ are controller gains to be determined. Defining $\tilde e_l = e_l - \hat e_l$, $\tilde v_l = v_l - \hat v_l$ and $\tilde D_l = \hat D_l - D_l$ as state and parameter estimation errors, the error dynamics is

$$\dot{\tilde e}_l = -v_l + \hat v_l - l_{l1}\tilde e_l = -\tilde v_l - l_{l1}\tilde e_l$$
$$\dot{\tilde v}_l = b_l\left(\frac{\tan(\phi^* + e_\phi)}{\cos\theta} - D_l\right) + l_{l1}l_{l2}\tilde e_l + \xi(t) + b_l e_\phi \qquad (28)$$
$$\dot{\tilde D}_l = \beta_l\left(\gamma_l e_l - l_{l2}\tilde e_l - v_l - \tilde v_l\right)$$

To stabilize the lateral subsystem dynamics (26) and (28), the control law is taken as

$$\phi^* = \arctan\left(\cos\theta\left(-k_l\left(\hat v_l - (l_{l2} + \gamma_l)e_l + l_{l2}\hat e_l\right) + \hat D_l\right)\right)$$
$$\phantom{\phi^*} = \arctan\Big(\cos\theta\Big(-k_l\left(\hat v_l - (l_{l2} + \gamma_l)e_l + l_{l2}\hat e_l\right) + \beta_l(2e_l - \hat e_l) + \beta_l\int_0^t\big((\gamma_l + l_{l1} - l_{l2})e_l - (l_{l1} - l_{l2})\hat e_l\big)d\tau\Big)\Big) \qquad (29)$$

where $k_l$ is a controller gain to be determined.

Theorem III.1. Assume perfect inner-loop tracking, i.e., $[e_\phi, e_\theta, \tilde\psi]^T = 0$, and consider the height and lateral subsystems (22) and (26), their respective observers (23) and (27), the estimation error dynamics (24) and (28), and the control laws (25) and (29). If the control gains satisfy

$$l_{h1} > l_{h2} > b_h k_h > \gamma_h > 0 \qquad (30)$$

and

$$l_{l1} > l_{l2} > b_l k_l > \gamma_l > 0 \qquad (31)$$

then the equilibrium $[e_h, v_h, \tilde e_h, \tilde v_h, \tilde D_h, e_l, v_l, \tilde e_l, \tilde v_l, \tilde D_l]^T = 0$ is globally exponentially stable.

Proof. Consider the change of state coordinates

$$\begin{bmatrix} z_{h1} \\ z_{h2} \\ z_{h3} \\ z_{h4} \\ z_{h5} \end{bmatrix} = \begin{bmatrix} \gamma_h & -1 & 0 & 0 & 0 \\ 0 & 0 & -l_{h2} & -1 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} e_h \\ v_h \\ \tilde e_h \\ \tilde v_h \\ \tilde D_h \end{bmatrix} \qquad (32)$$

which transforms the control (25) into

$$\bar f_T = k_h(z_{h2} - z_{h1}) + \hat D_h \qquad (33)$$

Transforming the dynamics (22), (24) into the new state coordinates and substituting the control (33) gives the closed loop

$$\dot z_{h1} = -(b_h k_h - \gamma_h)z_{h1} + b_h k_h z_{h2} - \gamma_h^2 z_{h3} + b_h z_{h5} \qquad (34a)$$
$$\dot z_{h2} = -b_h k_h z_{h1} - (l_{h2} - b_h k_h)z_{h2} - l_{h2}^2 z_{h4} + b_h z_{h5} \qquad (34b)$$
$$\dot z_{h3} = z_{h1} - \gamma_h z_{h3} \qquad (34c)$$
$$\dot z_{h4} = z_{h2} - (l_{h1} - l_{h2})z_{h4} \qquad (34d)$$
$$\dot z_{h5} = -\beta_h(z_{h1} + z_{h2}) \qquad (34e)$$

Consider a Lyapunov function candidate

$$V = \frac{1}{2}\left(z_{h1}^2 + z_{h2}^2 + \gamma_h^2 z_{h3}^2 + l_{h2}^2 z_{h4}^2 + \frac{b_h}{\beta_h}z_{h5}^2\right)$$

which is radially unbounded. Taking its derivative and substituting (34), we have

$$\dot V = -(b_h k_h - \gamma_h)z_{h1}^2 - (l_{h2} - b_h k_h)z_{h2}^2 - \gamma_h^3 z_{h3}^2 - l_{h2}^2(l_{h1} - l_{h2})z_{h4}^2$$

which is negative semi-definite if (30) is satisfied. This implies that $[z_{h1}, z_{h2}, z_{h3}, z_{h4}]^T$ converges to the origin. Using LaSalle's invariance principle, (34a), and the fact that $V$ is radially unbounded implies global asymptotic stability (GAS) of $[z_{h1}, z_{h2}, z_{h3}, z_{h4}, z_{h5}]^T = 0$. Since (34) is linear, GAS implies global exponential stability (GES). Since GAS is preserved under the linear transformation (32), we have proven GES of $[e_h, v_h, \tilde e_h, \tilde v_h, \tilde D_h]^T = 0$.

Now consider a similar transformation for the lateral subsystem

$$\begin{bmatrix} z_{l1} \\ z_{l2} \\ z_{l3} \\ z_{l4} \\ z_{l5} \end{bmatrix} = \begin{bmatrix} \gamma_l & -1 & 0 & 0 & 0 \\ 0 & 0 & -l_{l2} & -1 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} e_l \\ v_l \\ \tilde e_l \\ \tilde v_l \\ \tilde D_l \end{bmatrix}$$

which transforms the control in (29) into

$$\phi^* = \arctan\left(\cos\theta\left(-k_l(z_{l2} - z_{l1}) + \hat D_l\right)\right) \qquad (35)$$

Since we have assumed $e_\phi = 0$, the closed-loop dynamics in the new coordinates are

$$\dot z_l = A_l z_l + B_\xi\xi(t) \qquad (36)$$

where

$$A_l = \begin{bmatrix} -(b_l k_l - \gamma_l) & b_l k_l & -\gamma_l^2 & 0 & -b_l \\ -b_l k_l & -(l_{l2} - b_l k_l) & 0 & -l_{l2}^2 & -b_l \\ 1 & 0 & -\gamma_l & 0 & 0 \\ 0 & 1 & 0 & -(l_{l1} - l_{l2}) & 0 \\ \beta_l & \beta_l & 0 & 0 & 0 \end{bmatrix}, \qquad B_\xi = \begin{bmatrix} -1 \\ -1 \\ 0 \\ 0 \\ 0 \end{bmatrix}$$

and $z_l = [z_{l1}, z_{l2}, z_{l3}, z_{l4}, z_{l5}]^T$. The unforced part of the dynamics (36) has the same structure as (34), and therefore $A_l$ is Hurwitz provided (31) holds. The signal $\xi(t)$ is exponentially convergent since $\xi_1(t), \xi_2(t)$ are exponentially convergent, as the closed-loop yaw and height subsystems are globally exponentially stable. Therefore, the equilibrium $[e_h, v_h, \tilde e_h, \tilde v_h, \tilde D_h, e_l, v_l, \tilde e_l, \tilde v_l, \tilde D_l]^T = 0$ of the entire closed loop is globally exponentially stable.

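For readers implementing the outer loop, the following discrete-time sketch combines the observers (23), (27) with the control laws (25), (29) and the integral yaw reference of (21). It is our illustration, not the PX4 module of Section V: it uses forward-Euler integration, and the $\hat D_h$, $\hat D_l$ adaptation is rewritten in an equivalent realizable form that eliminates the unmeasured $v_h$, $v_l$ using $\dot e_h = -v_h$ and $\dot e_l = -v_l$.

```python
import numpy as np

class OuterLoopIBVS:
    """Discrete-time (forward Euler) sketch of the output-feedback outer loop."""

    def __init__(self, g, dt):
        self.g, self.dt = g, dt            # g: dict of gains named as in Table III
        self.eh_hat = self.vh_hat = 0.0    # height observer states
        self.el_hat = self.vl_hat = 0.0    # lateral observer states
        self.Ih = self.Il = 0.0            # adaptation integrals
        self.psi_int = 0.0                 # integral of e_psi for the yaw reference
        self.Dh0 = self.Dl0 = 0.0          # initial parameter estimates
        self.eh0 = self.el0 = None         # feature errors when IBVS is engaged

    def step(self, e_psi, e_h, e_l, phi, theta):
        g, dt = self.g, self.dt
        if self.eh0 is None:
            self.eh0, self.el0 = e_h, e_l

        # height subsystem: realizable adaptation for D_h and thrust law (25)
        self.Ih += dt * ((g['lh2'] - g['gamma_h']) * e_h - g['lh2'] * self.eh_hat - self.vh_hat)
        Dh_hat = self.Dh0 - 2.0 * g['beta_h'] * (e_h - self.eh0) + g['beta_h'] * self.Ih
        fT_bar = g['kh'] * (self.vh_hat - (g['lh2'] + g['gamma_h']) * e_h
                            + g['lh2'] * self.eh_hat) + Dh_hat

        # lateral subsystem: realizable adaptation for D_l and roll reference (29)
        self.Il += dt * ((g['gamma_l'] - g['ll2']) * e_l + g['ll2'] * self.el_hat + self.vl_hat)
        Dl_hat = self.Dl0 + 2.0 * g['beta_l'] * (e_l - self.el0) + g['beta_l'] * self.Il
        phi_ref = np.arctan(np.cos(theta) * (-g['kl'] * (self.vl_hat
                            - (g['ll2'] + g['gamma_l']) * e_l + g['ll2'] * self.el_hat) + Dl_hat))

        # observers (23) and (27)
        eh_err, el_err = e_h - self.eh_hat, e_l - self.el_hat
        self.eh_hat += dt * (-self.vh_hat + g['lh1'] * eh_err)
        self.vh_hat += dt * (-g['lh1'] * g['lh2'] * eh_err)
        self.el_hat += dt * (-self.vl_hat + g['ll1'] * el_err)
        self.vl_hat += dt * (-g['ll1'] * g['ll2'] * el_err)

        # yaw reference of (21) and normalized thrust f_T = fT_bar / (cos(theta) cos(phi))
        self.psi_int += dt * e_psi
        psi_ref = -g['K_psi'] * self.psi_int
        fT = float(np.clip(fT_bar / (np.cos(theta) * np.cos(phi)), 0.0, 1.0))
        return fT, phi_ref, psi_ref
```

Gain names follow Table III; when selecting $k_h$ and $k_l$, the conditions (30) and (31) should be checked against worst-case estimates of $b_h$ and $b_l$.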

We remark that although conditions (30) and (31) depend on the unknown parameters $b_h$ and $b_l$, sufficiently small $k_h, k_l, \gamma_h, \gamma_l$ and sufficiently large $l_{l1}, l_{l2}, l_{h1}, l_{h2}$ can always be chosen so that the conditions are satisfied. Estimates of the range of the unknown parameters can determine worst-case values for $b_l, b_h$ and the controller gains.

C. Inner-Loop Control and Entire Closed-Loop Stability

As discussed earlier, for the height subsystem the thrust input is algebraically related to the motor inputs; however, in the case of the lateral and yaw subsystems, we need to design a control so that the reference set-points generated by the outer loop can be achieved.

Using the small angle assumption, i.e., $\tan\phi \approx \phi$ and $\cos\theta \approx 1$, (35) can be written as

$$\phi^* = -k_l(z_{l2} - z_{l1}) + \hat D_l = \bar K_l z_l + \hat D_l \qquad (37)$$

where $\bar K_l = [k_l, -k_l, 0, 0, 0]$. Also, the dynamics in (36) for $e_\phi \neq 0$ become

$$\dot z_l = A_l z_l + B_\xi\xi(t) + B_l e_\phi \qquad (38)$$

where $B_l = [b_l, b_l, 0, 0, 0]^T$. Taking the derivative of (37) with respect to time, substituting $\dot z_l$ from (38), using $\dot{\hat D}_l = \beta_l(z_{l1} + z_{l2}) = e_5 A_l z_l$ where $e_5 = [0, 0, 0, 0, 1]$, and noting $\bar K_l B_l = \bar K_l B_\xi = 0$, we have

$$\dot\phi^* = (\bar K_l + e_5)A_l z_l = K_l A_l z_l$$

Using the small angle assumption, the inner-loop dynamics of a quadrotor (1b)–(1c) can be approximated as

$$\ddot\eta = J^{-1}\tau^c.$$

The inner-loop dynamics for the lateral subsystem are therefore given by

$$\dot e_\phi = \dot\phi - \dot\phi^*$$
$$\ddot\phi = \frac{1}{J_1}\tau_1 \qquad (39)$$

Consider the controller

$$\tau_1 = -\frac{k_{3,\phi}}{k_{2,\phi}}\dot\phi - \frac{k_{3,\phi}}{k_{1,\phi}}e_\phi - k_{3,\phi}\int_0^t e_\phi(\tau)\,d\tau \qquad (40)$$

and a linear transformation

$$x_{1,\phi} = \int_0^t e_\phi(\tau)\,d\tau$$
$$x_{2,\phi} = \frac{1}{k_{1,\phi}}e_\phi + x_{1,\phi} \qquad (41)$$
$$x_{3,\phi} = \frac{1}{k_{2,\phi}}\dot\phi + x_{2,\phi}$$

which transforms the controller (40) into $\tau_1 = -k_{3,\phi}x_{3,\phi}$, and the system in (39) with the control substituted becomes

$$\dot x_\phi = A_\phi x_\phi - \frac{1}{k_{1,\phi}}B_{\phi l}z_l \qquad (42)$$

where

$$A_\phi = \begin{bmatrix} -k_{1,\phi} & k_{1,\phi} & 0 \\ -k_{1,\phi} & k_{1,\phi} - \frac{k_{2,\phi}}{k_{1,\phi}} & \frac{k_{2,\phi}}{k_{1,\phi}} \\ -k_{1,\phi} & k_{1,\phi} - \frac{k_{2,\phi}}{k_{1,\phi}} & \frac{k_{2,\phi}}{k_{1,\phi}} - \frac{k_{3,\phi}}{J_1 k_{2,\phi}} \end{bmatrix}, \qquad B_{\phi l} = \begin{bmatrix} 0 \\ K_l A_l \\ K_l A_l \end{bmatrix}$$

and $x_\phi = [x_{1,\phi}, x_{2,\phi}, x_{3,\phi}]^T$. Again consider the outer-loop lateral subsystem (38) with $e_\phi \neq 0$ together with the inner-loop dynamics (42), using $e_\phi = B_\phi x_\phi$ from (41) where $B_\phi = [-k_{1,\phi}, k_{1,\phi}, 0]$. The closed loop can be written as

$$\dot z_l = A_l z_l + B_\xi\xi(t) + B_{l\phi}x_\phi \qquad (43a)$$
$$\dot x_\phi = A_\phi x_\phi - \frac{1}{k_{1,\phi}}B_{\phi l}z_l \qquad (43b)$$

where $B_{l\phi} = B_l B_\phi$. We remark that $\xi(t)$ exponentially converges to zero due to the exponential stability of the height and yaw subsystems. From the theorem above, the system in (38) with $e_\phi = 0$ is globally exponentially stable at the origin. Using the Converse Lyapunov Theorem [49, Thm. 4.14], there exists a Lyapunov function $V_{1,\phi}(z_l, t)$ defined on $\mathbb{R}^5\times\mathbb{R}$ that satisfies the inequalities

$$c_1\|z_l\|^2 \le V_{1,\phi}(z_l, t) \le c_2\|z_l\|^2$$
$$\frac{\partial V_{1,\phi}}{\partial t} + \frac{\partial V_{1,\phi}}{\partial z_l}\left(A_l z_l + B_\xi\xi(t)\right) \le -c_3\|z_l\|^2$$
$$\left\|\frac{\partial V_{1,\phi}}{\partial z_l}\right\| \le c_4\|z_l\|$$

for all $z_l\in\mathbb{R}^5$, $t \ge 0$, for some positive constants $c_1, c_2, c_3$ and $c_4$. We consider the following Lyapunov function candidate

$$V_\phi(t, x_\phi, z_l) = V_{1,\phi}(z_l, t) + \frac{1}{2}x_\phi^T x_\phi$$

and its time derivative is given as

$$\dot V_\phi = \frac{\partial V_{1,\phi}}{\partial t} + \frac{\partial V_{1,\phi}}{\partial z_l}\left(A_l z_l + B_{l\phi}x_\phi + B_\xi\xi(t)\right) + \frac{1}{2}\dot x_\phi^T x_\phi + \frac{1}{2}x_\phi^T\dot x_\phi$$
$$= \frac{\partial V_{1,\phi}}{\partial t} + \frac{\partial V_{1,\phi}}{\partial z_l}\left(A_l z_l + B_\xi\xi(t)\right) + \frac{\partial V_{1,\phi}}{\partial z_l}B_{l\phi}x_\phi + \frac{1}{2}\left(A_\phi x_\phi - \frac{1}{k_{1,\phi}}B_{\phi l}z_l\right)^T x_\phi + \frac{1}{2}x_\phi^T\left(A_\phi x_\phi - \frac{1}{k_{1,\phi}}B_{\phi l}z_l\right)$$
$$\le -c_3\|z_l\|^2 + \left\|\frac{\partial V_{1,\phi}}{\partial z_l}\right\|\,\|B_{l\phi}\|\,\|x_\phi\| + \frac{1}{2}x_\phi^T\left(A_\phi^T + A_\phi\right)x_\phi - \frac{1}{k_{1,\phi}}z_l^T B_{\phi l}^T x_\phi \qquad (44)$$

Since we can choose $A_\phi$, we take

$$A_\phi^T + A_\phi = -Q_\phi$$

where $Q_\phi$ satisfies

$$\lambda_{\min}(Q_\phi)\|x_\phi\|^2 \le x_\phi^T Q_\phi x_\phi \le \lambda_{\max}(Q_\phi)\|x_\phi\|^2$$


Also,

$$-\frac{1}{k_{1,\phi}}z_l^T B_{\phi l}^T x_\phi \le \left|\frac{1}{k_{1,\phi}}z_l^T B_{\phi l}^T x_\phi\right| \le \frac{1}{k_{1,\phi}}\|z_l\|\,\|B_{\phi l}^T\|\,\|x_\phi\|$$

The inequality (44) becomes

$$\dot V_\phi \le -c_3\|z_l\|^2 + c_4\|z_l\|\,\|B_{l\phi}\|\,\|x_\phi\| - \frac{1}{2}x_\phi^T Q_\phi x_\phi + \frac{1}{k_{1,\phi}}\|B_{\phi l}^T\|\,\|x_\phi\|\,\|z_l\|$$
$$\le -c_3\|z_l\|^2 - \frac{1}{2}\lambda_{\min}(Q_\phi)\|x_\phi\|^2 + \|z_l\|\left(c_4\|B_{l\phi}\| + \frac{1}{k_{1,\phi}}\|B_{\phi l}^T\|\right)\|x_\phi\| \qquad (45)$$

If the controller gains $k_{1,\phi}, k_{2,\phi}, k_{3,\phi}$ in (40) are selected so that $\lambda_{\min}(Q_\phi)$ is large enough such that $\sqrt{2c_3\lambda_{\min}(Q_\phi)} > c_4\|B_{l\phi}\| + \frac{1}{k_{1,\phi}}\|B_{\phi l}^T\|$ is satisfied, then rewriting (45) gives

$$\dot V_\phi \le -\left(\sqrt{c_3}\|z_l\| - \sqrt{\frac{\lambda_{\min}(Q_\phi)}{2}}\|x_\phi\|\right)^2 - \left(\sqrt{2c_3\lambda_{\min}(Q_\phi)} - c_4\|B_{l\phi}\| - \frac{1}{k_{1,\phi}}\|B_{\phi l}^T\|\right)\|z_l\|\,\|x_\phi\| < 0$$

This proves the closed-loop system is GAS at the origin. Given that the system in (43) is linear, GAS implies GES. We remark that if we remove the small angle assumption in (43), only local exponential stability can be shown. Similarly, the inner-outer closed-loop yaw subsystem is

$$\dot{\tilde\psi} = \dot\psi - \dot\psi^*$$
$$\ddot\psi = \frac{1}{J_3}\tau_3$$

which can be exponentially stabilized using

$$\tau_3 = -\frac{k_{3,\psi}}{k_{2,\psi}}\dot\psi - \frac{k_{3,\psi}}{k_{1,\psi}}\tilde\psi - k_{3,\psi}\int_0^t\tilde\psi(\tau)\,d\tau \qquad (46)$$

Evidently the velocity along the line cannot be measured. Therefore, vehicle pitch is controlled from a pitch reference $\theta^*$ given by the user. We define the pitch tracking error $e_\theta = \theta - \theta^*$ and its dynamics

$$\dot e_\theta = \dot\theta$$
$$\ddot\theta = \frac{1}{J_2}\tau_2$$

which can be controlled using a PID controller as in (40) and (46):

$$\tau_2 = -\frac{k_{3,\theta}}{k_{2,\theta}}\dot\theta - \frac{k_{3,\theta}}{k_{1,\theta}}e_\theta - k_{3,\theta}\int_0^t e_\theta(\tau)\,d\tau \qquad (47)$$

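The inner-loop attitude controllers (40), (46), (47) are standard PID laws; a minimal single-axis sketch is given below. This is our illustration of the simulation's inner loop only; in the experiments of Section V the stock PX4 attitude controller is used instead, and the gain values shown in the usage comment are simply those of Table III.

```python
def pid_torque(err, rate, integral, k1, k2, k3):
    """One axis of the inner-loop PID laws (40), (46), (47):
    tau = -(k3/k2)*rate - (k3/k1)*err - k3*integral(err)."""
    return -(k3 / k2) * rate - (k3 / k1) * err - k3 * integral

# usage for the roll axis with the gains of Table III:
# int_e_phi += dt * e_phi
# tau_1 = pid_torque(e_phi, phi_dot, int_e_phi, k1=0.0003, k2=0.0078, k3=0.1)
```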

IV. SIMULATION RESULTS

In this section we present simulation results for the inner-outer closed-loop control. The rigid body dynamics of the vehicle are implemented in MATLAB Simulink using Peter Corke's Machine Vision Toolbox [50]. The proposed inner-outer loop control laws (21), (25), (29), (40), (47) and (46) are implemented. The simulation uses the quadrotor and camera parameters given in Table II, which correspond to the ANCLQ 2.0 experimental platform used in Section V. No image processing is performed in the simulation. Rather, the pinhole model (2) is used to project the 3D points which comprise the linear target onto the image plane. The controller gains are listed in Table III. In simulation, the lateral feature in (17) is scaled by a factor of $\varepsilon = 1/100$ so that it has a similar range as the other feature components. This helps with gain tuning since both the lateral and height controllers have a similar structure, and if the values of the feature errors have the same order, the same gains can be used as a starting point when tuning. The only effect this scaling has on the controller design is that $b_l$ is replaced by $\varepsilon b_l$ in the conditions (31).

To ensure the target remains in the field of view of the camera, the controller gains are appropriately selected to avoid large overshoot in lateral motion. The simulation employs two parallel lines in a horizontal plane spaced apart by $1/\sqrt{3}$ m. The initial conditions for the vehicle are such that $\alpha_m = 0.35\pi$ rad and $\rho_m = 770$ pixels. The initial position of the vehicle in the navigation frame is $p^n = [-0.3, 0, -5]^T$ m. The initial conditions for $v^v$, $\eta$, and $\omega$ are zero. This results in initial conditions $[s_{\psi 0}, s_{h0}, s_{l0}] = [0.35\pi, 0.82, 0.35]$. We take $K_T = 37.6$ N/ms$^2$, $X_3^{v*} = 6$ m, $[\delta_1, \delta_2, \delta_3] = [0, -1, -1]$ m/s$^2$, which results in $D_h = 0.539$, $D_l = 0.0612$, $b_h = 2.725$ and $b_l = 6.12$. The desired moment features are $[s_\psi^*, s_h^*, s_l^*] = [0, 1, 0]$. The initial conditions for the observers are $\hat e_h = 0$, $\hat e_l = 0$, $\hat v_h = 0$, $\hat v_l = 0$. The initial values of the estimated parameters depend on the initial conditions for the output and velocity estimation errors and are given as $\hat D_h = 0.731$, $\hat D_l = 0.171$. In order to simulate the approximate measurement noise of the actual platform, Gaussian white noise is added to the measurements of the outer loop according to Table I. The noise powers and means were obtained by collecting measurement data on a fixed vehicle. A nominal value for the focal length was chosen with a 20% error. Such a large error could arise when no calibration is performed and a rough estimate for $\lambda$ is chosen.

TABLE I: Outer loop measurement noise

Signal       Mean       Standard Deviation   Mean PSD [dB]   Sampling Rate [Hz]
e_h          7.85e-4    0.0051               -45.0           21.7
e_l          -0.0498    0.1785               -36.3           21.7
e_psi [rad]  7.14e-4    6.62e-4              -76.5           21.7
phi [rad]    -0.0026    2.62e-4              -109            93.9
theta [rad]  0.0159     1.83e-4              -94.6           93.9

TABLE II: Vehicle and camera parameters

Parameter                     Value
Inertia matrix J              diag([0.03, 0.03, 0.05]) kg m^2
Mass m                        2.3 kg
Focal length lambda           415 pixels
Pixel size                    4.8 um
Image size                    [640, 480]
Image centre [y10, y20]       [320, 240]

TABLE III: Controller gains

Gain        Value     Gain          Value
K_psi       4         k_{1,psi}     0.0008
l_h1        16        k_{2,psi}     0.002
l_h2        5         k_{3,psi}     0.1
k_h         0.94      k_{1,phi}     0.0003
gamma_h     1.25      k_{2,phi}     0.0078
beta_h      0.9       k_{3,phi}     0.1
l_l1        14        k_{1,theta}   0.0008
l_l2        4.37      k_{2,theta}   0.003
k_l         0.36      k_{3,theta}   0.1
gamma_l     1.093
beta_l      0.15

The trajectories of the system states, estimated parameters, and control inputs are shown in Fig. 5. The line features in Fig. 5a converge to a small neighbourhood of the origin. When no noise in the attitude is present, convergence to zero is obtained. As expected from the theory, the unknown parameter estimates $\hat D_h$ and $\hat D_l$ converge to their actual values in Fig. 5c, while the state estimation errors $\tilde v_h$ and $\tilde v_l$ converge to zero in Fig. 5b. The plots in Fig. 5d show the trajectories of $\phi^*$, $\theta^*$, $\psi^*$ and $f_T$. The normalized thrust $f_T$ saturates at 1 initially and eventually settles to the value of $D_h$ once the desired height is achieved. The plot for $\phi^*$ shows more noise than the other plots. This is because of how the lateral feature $s_l$ is defined, involving the product of $\rho_m$ and $s_h$. In steady state the absolute values of the feature errors remain bounded, i.e., $|e_\psi| < 0.003$, $|e_l| < 0.012$ and $|e_h| < 0.02$ after 10 seconds. Although the features and the inputs are affected by noise, they remain bounded to practically useful levels. Moreover, the vehicle position is relatively unaffected by noise, as shown in Fig. 5e. Here we observe that $p_3^n$ converges to $-6$ m. This satisfies the desired height requirement $X_3^{v*} = 6$ m. The 3D trajectory of the vehicle is shown in Fig. 5f. The lateral position error at steady state is bounded by 0.035 m while the height error is bounded by 0.02 m. The simulation results demonstrate accurate motion control in the face of the unmodelled uncertainty in focal length and attitude measurement noise and bias.

Fig. 5: Line following simulation results. (a) Feature error. (b) Velocity estimation error. (c) Parameter estimates. (d) Outer loop control inputs. (e) Vehicle position. (f) Top view of vehicle trajectory.

V. EXPERIMENTAL RESULTS

The experimental validation of the proposed algorithm is performed using the Applied Nonlinear Control Laboratory (ANCL) quadrotor platform, which is described in detail in [45]. Fig. 6 shows the ANCLQ 2.0 vehicle used in the experiments. ANCLQ 2.0 uses a Pixhawk 1 [51] autopilot and a Computer Vision System (CVS) to evaluate the proposed control. The Pixhawk firmware is a customized version of the open source v1.5.5 PX4 [52], which is based on that used in [45] where additional details are provided. We use the MAVLINK protocol for communication between PX4 and other devices (e.g., the CVS). The CVS hardware consists of a Chameleon 3 camera and an Nvidia Jetson TX1 which runs the Ubuntu OS and the Robot Operating System (ROS) environment with OpenCV libraries. The CVS captures the image, performs a number of image processing steps, computes the moment features in $\mathcal{V}$, and sends them to the PX4 at a rate of 21.7 Hz. MAVROS provides a two-way communication bridge between ROS and the PX4. PX4 computes attitude using the attitude_estimator_q module, a quaternion-based attitude estimator which fuses raw IMU measurements. This module provides the estimates of attitude which are used in the visual servoing control. The raw IMU measurements used in attitude_estimator_q come from an ST Micro L3GD20H gyroscope, an ST Micro LSM303D accelerometer/magnetometer, and an Invensense MPU 6000 3-axis gyroscope/accelerometer. We remark that the Pixhawk 1 has two gyros and two accelerometers, which provides redundancy to sensor failure. The IMU sensors are initially calibrated using QGroundControl. After this calibration we have observed that the attitude estimates include a small bias and measurement noise and can be considered accurate to about 0.5° near hover. Attitude estimates are provided at 93.9 Hz.

Fig. 6: The ANCLQ 2.0 quadrotor used in the experiments.

To simplify line detection in the experiment, the line targets used consist of coloured patches positioned in a straight line. The two target lines are parallel to the $n_1$ axis of $\mathcal{N}$ and placed an equal distance on either side of $n_1$. A video of the experiment which shows the targets is available at https://youtu.be/Nkaf59vUjKM. We detect the coloured patches as points located at their centroids and then fit a line through these points. Camera distortion introduces noise to the image features. Direct line detection algorithms such as the Hough transform could also be used. For each image two lines are determined, one for each colour.

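One simple way to fit the normal-form parameters $(\alpha, \rho)$ of (7) to the detected patch centroids is a total-least-squares fit via the principal direction of the points, sketched below. This is only an illustration of the idea; the actual OpenCV/ROS implementation running on the CVS is not reproduced here.

```python
import numpy as np

def fit_line_alpha_rho(points):
    """Fit the normal-form line y1*sin(a) + y2*cos(a) = rho (cf. (7)) to blob
    centroids (one coloured patch each) using the principal direction of the points."""
    pts = np.asarray(points, dtype=float)     # shape (M, 2): columns y1, y2
    centroid = pts.mean(axis=0)
    # line direction = right singular vector of the centred points with largest singular value
    _, _, Vt = np.linalg.svd(pts - centroid)
    d = Vt[0]                                 # unit direction along the line
    n = np.array([-d[1], d[0]])               # unit normal [sin(a), cos(a)] up to sign
    if n[1] < 0:                              # keep alpha in (-pi/2, pi/2]
        n = -n
    alpha = np.arctan2(n[0], n[1])
    rho = float(n @ centroid)                 # signed distance of the line from the image origin
    return alpha, rho
```

The fit is applied once per colour, giving the two $(\alpha_k, \rho_k)$ pairs that feed the virtual camera conversion of (10).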

GPS Vicon Marker that the vehicle changes direction when it reaches 0.5 m from
Jetson TX1 the origin. Vicon data for pn1 is used generate an appropriate θ∗
Propeller
to simulate a user manually controlling linear velocity along
the line via θ∗ .
Motor Forward
Camera The experimental results are shown in Figs. 8-11. The
shaded area in each plot represents the time during which
PX4
IBVS is enabled. Fig. 8 shows the convergence of the image
feature errors to a practically small region of the origin. The
lateral feature error el has a relatively large variation since
Downward LIPO
the lateral feature is generally more sensitive to change in
LairdTech Radio
Camera Battery position than the other two features. There are two primary
reasons that all feature errors exhibit some level of variation
Fig. 6: The ANCLQ 2.0 quadrotor used in experiment
in steady-state. First, due to the relatively small ratio between
patch length (i.e., 10 cm) and the spacing between patches (i.e.,
the experiment is provided which shows the targets 1 . We 25 cm). This leads to a low density of points used to create
detect the coloured patches as points located at their centroids a line and any errors in the positions of these points leads to
and then fit a line through these points. Camera distortion a large error in feature. Secondly, since there is a change in
introduces noise to the image features. Direct line detection the vehicle direction along the line every 160 cm, this leads
algorithms such as the Hough transform could also be used. to a disturbance torque which periodically disturbs the feature
For each image two lines are determined, one for each colour. errors away from the origin. Fig. 9 shows the control input
for the outerloop. The plot for θ∗ includes periodic “spikes”
corresponding to the change in vehicle direction while moving
Robot Operating System (ROS)
along the line. The inner loop control error is in Fig. 10.
®k , ½k Virtual cam. ®kv , ½vk Line moment
Line ¯tting conversion features Clearly, roll and pitch errors remain close to the origin. Yaw
Blob data
error convergence is slower due to relatively low bound of
Á, µ
sà ,sh ,sl
torque about the c3 axis. The 3D vehicle position is shown in
´
AHRS
Fig. 11. The data shows that the vehicle remains within about
Camera
fT ,Á¤ ,ä MC IBVSLine
±10 cm of the desired position for both lateral and height
Quadrotor ¿ ,fT
Vehicle
MC Attiude
Control
Control control. This performance should be compared to consumer
µ¤ MC Position or civilian GNSS accuracy of a few meters. The statistics of
Control
the steady state performance is given in Table IV. Here, ep2
PX4 Autopilot
and ep3 denote lateral and height position errors, respectively.
Fig. 7: Block diagram of the controller showing implementa-
tion details TABLE IV: Statistics of experimental results
Parameter Mean Standard Deviation
This determines µ RÁρ, and l . Using Rv = RÃl and the vehicle attitude 0.004 0.026
v n
Rc = Rα, eh
λ λ
V el 0.108 0.294
received C from
Rv the
c
= RÁ PX4
T T

fº1 ; through
º2 ; º3 g Rna=MAVROS
v
RÃT N topic in ROS, we eψ [rad] 0.005 0.010
usefc(10)
1 ; c2 ; c3to
g obtain l vn. Using l v we canfncompute
Rλc = RÃ Rµ RÁ λ 1 ; n2 ; n3 g line moment ep2 [m] 0.009 0.018
features which areRncthen = RÁT RsentT T back to the PX4. The block
R
µ Ã
ep3 [m] 0.005 0.031
diagram of this implementation is shown in Fig. 7.
The experimental results are shown in Figs. 8-11. The shaded area in each plot represents the time during which IBVS is enabled. Fig. 8 shows the convergence of the image feature errors to a practically small region of the origin. The lateral feature error el has a relatively large variation since the lateral feature is generally more sensitive to changes in position than the other two features. There are two primary reasons that all feature errors exhibit some level of variation in steady state. The first is the relatively small ratio between the patch length (i.e., 10 cm) and the spacing between patches (i.e., 25 cm). This leads to a low density of points used to create a line, and any errors in the positions of these points lead to a large error in the features. The second is that there is a change in the vehicle direction along the line every 160 cm, which creates a disturbance torque that periodically disturbs the feature errors away from the origin. Fig. 9 shows the control input for the outer loop. The plot for θ∗ includes periodic “spikes” corresponding to the change in vehicle direction while moving along the line. The inner loop control error is shown in Fig. 10. Clearly, roll and pitch errors remain close to the origin. Yaw error convergence is slower due to the relatively low bound on the torque about the c3 axis. The 3D vehicle position is shown in Fig. 11. The data shows that the vehicle remains within about ±10 cm of the desired position for both lateral and height control. This performance should be compared to consumer or civilian GNSS accuracy of a few meters. The statistics of the steady-state performance are given in Table IV. Here, ep2 and ep3 denote the lateral and height position errors, respectively.

TABLE IV: Statistics of experimental results

Parameter    Mean    Standard Deviation
eh           0.004   0.026
el           0.108   0.294
eψ [rad]     0.005   0.010
ep2 [m]      0.009   0.018
ep3 [m]      0.005   0.031
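The statistics in Table IV can be reproduced from logged error signals by restricting attention to the window in which IBVS is enabled (the shaded region in the plots); the short sketch below assumes hypothetical log arrays and window times rather than the actual experimental data.

    import numpy as np

    def steady_state_stats(t, e, t_on, t_off):
        # Mean and standard deviation of an error signal over the IBVS-enabled window.
        mask = (t >= t_on) & (t <= t_off)
        return float(np.mean(e[mask])), float(np.std(e[mask]))

    # Synthetic stand-in for a logged lateral position error ep2 (not experimental data):
    t = np.linspace(0.0, 120.0, 2400)
    ep2 = 0.01 + 0.02 * np.random.randn(t.size)
    print(steady_state_stats(t, ep2, t_on=30.0, t_off=110.0))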
The experiment described above tests the robustness of the proposed method to a range of model uncertainty and measurement noise. For example, the image features include noise due to camera distortion, there is a time delay for image processing, the implementation requires controller discretization, and many of the system variables are bounded in practice. We remark that the indoor lab environment has good lighting conditions and the visual targets used are not necessarily representative of linear targets found in the field. In real outdoor environments image processing could introduce errors which could lead to performance limitations in the motion control.

In the following experiment we relax the assumption that the target consists of parallel lines. We also initialize the vehicle away from the line to clearly demonstrate the convergence of the feature error. We consider a visual target which is piecewise linear, including a 30° change of direction. We apply an initial lateral displacement error of about 35 cm. The results for this case are in Figs. 12-14.


Fig. 8: Image moment feature errors for line following experiment

Fig. 9: Attitude and thrust reference inputs

Fig. 10: Attitude control errors

Fig. 11: 3D position trajectory of the UAV during experiment

We observe from el in Fig. 12 that the initial lateral feature error when the IBVS is engaged is about el = 2, and convergence to the origin is similar to that in simulation in Fig. 5a. The plot of the yaw feature in Fig. 5a has intervals where the error is non-zero; this occurs when the vehicle performs a change of direction and there are two non-parallel sets of lines in the image. In this case α is a weighted average with more weight given to the line with more visible points, so as the vehicle moves above the turn, α is constantly changing. Once the vehicle is past the change of direction, the yaw feature error stabilizes to zero. The attitude and thrust references are shown in Fig. 13 and are similar to Fig. 9. The 3D position of the vehicle is shown in Fig. 14.
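One reasonable realization of this weighting, assumed here rather than taken from the implementation, blends the two line orientations on the unit circle with the visible point counts as weights:

    import numpy as np

    def blended_alpha(alpha1, n1, alpha2, n2):
        # Weight each line's orientation by its number of visible points and average
        # the corresponding unit vectors. (A full implementation would also handle
        # the mod-pi ambiguity of undirected lines.)
        v = n1 * np.array([np.cos(alpha1), np.sin(alpha1)]) \
          + n2 * np.array([np.cos(alpha2), np.sin(alpha2)])
        return float(np.arctan2(v[1], v[0]))

    # 8 points visible on the current segment, 3 on the segment after the 30 deg turn:
    print(np.degrees(blended_alpha(0.0, 8, np.radians(30.0), 3)))   # about 8 deg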

Fig. 12: Image moment feature errors for a piecewise linear target

Fig. 13: Attitude and thrust reference inputs for a piecewise linear target

Fig. 14: 3D position trajectory of the UAV for a piecewise linear target

VI. CONCLUSION AND FUTURE WORK

This paper proposed an IBVS method for quadrotor UAV line following. Output feedback is used to eliminate the need for linear velocity measurements. We consider model uncertainty in the thrust constant, mass, desired depth, and a linear acceleration disturbance. An inner-outer loop structure is used; global exponential stability of the outer loop is proven, and exponential stability of the combined inner-outer loop is established. Simulation and experimental results demonstrate the effectiveness of the method. Future work focuses on compensating for camera field of view constraints, relaxing the small angle assumption in the inner-outer loop stability proof, and improving the experimental validation by considering outdoor environments with real-world linear targets such as transmission lines. A possible limitation of the work is that linear velocity along the line must be controlled manually with a user-supplied pitch setpoint. Manual control is required since the velocity along the line cannot be estimated if the lines have no distinctive features along their length and their background is plain. However, in practice the background will usually have texture (e.g., due to vegetation and non-uniform terrain). Therefore, an optical flow measurement could be used to estimate scaled linear velocity along the line. Alternately, accelerometer measurements could be fused with known distances between features which can be detected by a camera (e.g., joints between pipes of known length).
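As an illustration of the optical-flow suggestion above, the sketch below averages dense Farneback flow projected onto the detected line direction; the choice of flow method, the grayscale frame inputs, and the simple averaging are our assumptions, and the result is only a scaled (depth-dependent) image velocity, not a metric one.

    import cv2
    import numpy as np

    def scaled_velocity_along_line(prev_gray, cur_gray, alpha, dt):
        # Mean optical-flow component along the line direction, in pixels per second.
        # Dividing by the (unknown) depth would give a metric velocity, hence "scaled".
        flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        d = np.array([np.cos(alpha), np.sin(alpha)])   # unit vector along the detected line
        along = flow[..., 0] * d[0] + flow[..., 1] * d[1]
        return float(np.mean(along)) / dt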


REFERENCES

[1] D. Jones, “Power line inspection - a UAV concept,” in 2005 The IEE
Forum on Autonomous Systems (Ref. No. 2005/11271), Nov. 2005, 8 pp.
[2] L. I. Kochetkova, “Pipeline monitoring with unmanned aerial vehicles,”
Journal of Physics: Conference Series, vol. 1015, no. 4, p. 042021, 2018.
[Online]. Available: http://stacks.iop.org/1742-6596/1015/i=4/a=042021
[3] “Drones - a revolution in transmission line inspection and
maintenance,” MIR Innovation, Hydro-Quebec, accessed 14 August
2018. [Online]. Available: http://mir-innovation.hydroquebec.com/
mir-innovation/en/transmission-solutions-uav.html
[4] “Unmanned aerial inspection,” ULC Robotics, accessed 14
August 2018. [Online]. Available: http://ulcrobotics.com/services/
unmanned-aerial-utility-inspection-services/
[5] J. Oh and C. Lee, “3D power line extraction from multiple
aerial images,” Sensors, vol. 17, no. 10, 2017. [Online]. Available:
http://www.mdpi.com/1424-8220/17/10/2244
[6] Y. Zhang, X. Yuan, W. Li, and S. Chen, “Automatic power line inspection using UAV images,” Remote Sensing, vol. 9, no. 8, 2017. [Online]. Available: http://www.mdpi.com/2072-4292/9/8/824
[7] C. Deng, S. Wang, Z. Huang, Z. Tan, and J. Liu, “Unmanned aerial
vehicles for power line inspection: A cooperative way in platforms and
communications,” JCM, vol. 9, pp. 687–692, 2014.
[8] L. Matikainen, M. Lehtomäki, E. Ahokas, J. Hyyppä, M. Karjalainen,
A. Jaakkola, A. Kukko, and T. Heinonen, “Remote sensing methods
for power line corridor surveys,” ISPRS Journal of Photogrammetry
and Remote Sensing, vol. 119, pp. 10 – 31, 2016. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0924271616300697
[9] C. Gómez and D. R. Green, “Small unmanned airborne systems to
support oil and gas pipeline monitoring and mapping,” Arabian Journal
of Geosciences, vol. 10, no. 9, p. 202, May 2017. [Online]. Available:
https://doi.org/10.1007/s12517-017-2989-x
[10] D. Hausamann, W. Zirnig, G. Schreier, and P. Strobl, “Monitoring
of gas pipelines: a civil UAV application,” Aircraft Engineering and
Aerospace Technology, vol. 77, no. 5, pp. 352–360, 2005. [Online].
Available: https://doi.org/10.1108/00022660510617077
[11] C. Kanellakis and G. Nikolakopoulos, “Survey on computer vision
for UAVs: Current developments and trends,” Journal of Intelligent &
Robotic Systems, vol. 87, no. 1, pp. 141–168, Jul. 2017.
[12] F. Chaumette and S. Hutchinson, “Visual servo control. I. Basic ap-
proaches,” IEEE Robotics and Automation Magazine, vol. 13, no. 4, pp.
82–90, Dec. 2006.
[13] ——, “Visual servo control. II. Advanced approaches,” IEEE Robotics
and Automation Magazine, vol. 14, no. 1, pp. 109–118, Mar. 2007.
[14] L. Carrillo, G. Flores Colunga, G. Sanahuja, and R. Lozano, “Quad ro-
torcraft switching control: An application for the task of path following,” IEEE Transactions on Control Systems Technology, vol. 22, no. 4, pp.
1255–1267, Jul. 2014.
[15] E. Rondon, L.-R. Garcia-Carrillo, and I. Fantoni, “Vision-based altitude,
position and speed regulation of a quadrotor rotorcraft,” in Proceedings
of the 2010 IEEE/RSJ International Conference on Intelligent Robots
and Systems, Oct. 2010, pp. 628–633.
[16] S. Hutchinson, G. Hager, and P. Corke, “A tutorial on visual servo control,” IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, Oct. 1996.
[17] B. Espiau, “Effect of camera calibration errors on visual servoing in robotics,” in Experimental Robotics III, T. Yoshikawa and F. Miyazaki, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994, pp. 182–192.
[18] J. R. Azinheira and P. Rives, “Image-based visual servoing for vanishing features and ground lines tracking: Application to a UAV automatic landing,” International Journal of Optomechatronics, vol. 2, no. 3, pp. 275–295, Sep. 2008.
[19] H. Xie, “Dynamic visual servoing of rotary wing unmanned aerial vehicles,” Ph.D. dissertation, Dept. of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Feb. 2016.
[20] T. Hamel and R. Mahony, “Visual servoing of an under-actuated dynamic rigid-body system: an image-based approach,” IEEE Transactions on Robotics and Automation, vol. 18, no. 2, pp. 187–198, 2002.
[21] O. Bourquardez, R. Mahony, T. Hamel, and F. Chaumette, “Stability and performance of image based visual servo control using first order spherical image moments,” in Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, Oct. 2006, pp. 4304–4309.


[22] T. Hamel and R. Mahony, “Image based visual servo control for a class of aerial robotic systems,” Automatica, vol. 43, no. 11, pp. 1975–1983, Nov. 2007.
[23] N. Guenard, T. Hamel, and R. Mahony, “A practical visual servo control for an unmanned aerial vehicle,” IEEE Transactions on Robotics and Automation, vol. 24, no. 2, pp. 331–340, 2008.
[24] O. Bourquardez, R. Mahony, N. Guenard, F. Chaumette, T. Hamel, and L. Eck, “Image-based visual servo control of the translation kinematics of a quadrotor aerial vehicle,” IEEE Transactions on Robotics, vol. 25, no. 3, pp. 743–749, Jun. 2009.
[25] H. Xie and A. F. Lynch, “State transformation-based dynamic visual servoing for an unmanned aerial vehicle,” International Journal of Control, vol. 89, no. 5, pp. 892–908, 2016.
[26] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., ser. Cambridge Books Online. Cambridge, England: Cambridge University Press, 2003, vol. 1.
[27] N. Metni, T. Hamel, and F. Derkx, “Visual tracking control of aerial robotic systems with adaptive depth estimation,” in Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 2005, Seville, Spain, Dec. 2005, pp. 6078–6084.
[28] S. Benhimane and E. Malis, “Homography-based 2D visual tracking and servoing,” International Journal of Robotics Research, vol. 26, no. 7, pp. 661–676, 2007.
[29] H. de Plinval, P. Morin, P. Mouyon, and T. Hamel, “Visual servoing for underactuated VTOL UAVs: A linear, homography-based approach,” in Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, May 2011, pp. 3004–3010.
[30] ——, “Visual servoing for underactuated VTOL UAVs: a linear, homography-based framework,” International Journal of Robust and Nonlinear Control, vol. 24, no. 16, pp. 2285–2308, Apr. 2013.
[31] R. Ozawa and F. Chaumette, “Dynamic visual servoing with image moments for an unmanned aerial vehicle using a virtual spring approach,” Advanced Robotics, vol. 27, no. 9, pp. 683–696, 2013.
[32] ——, “Dynamic visual servoing with image moments for a quadrotor using a virtual spring approach,” in Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, May 2011, pp. 5670–5676.
[33] D. Lee, T. Ryan, and H. Kim, “Autonomous landing of a VTOL UAV on a moving platform using image-based visual servoing,” in Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, 2012, pp. 971–976.
[34] D. Lee, H. Lim, H. Kim, Y. Kim, and K. Seong, “Adaptive image-based visual servoing for an underactuated quadrotor system,” Journal of Guidance, Control, and Dynamics, vol. 35, no. 4, pp. 1335–1353, 2012.
[35] H. Jabbari, G. Oriolo, and H. Bolandi, “Dynamic IBVS control of an underactuated UAV,” in 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), Dec. 2012, pp. 1158–1163.
[36] H. Jabbari Asl, G. Oriolo, and H. Bolandi, “An adaptive scheme for image-based visual servoing of an underactuated UAV,” International Journal of Robotics and Automation, vol. 29, no. 1, pp. 92–104, 2014.
[37] H. Xie, A. F. Lynch, and M. Jagersand, “Dynamic IBVS of a rotary wing UAV using line features,” Robotica, vol. 34, no. 9, pp. 2009–2026, 2014.
[38] H. Xie, G. Fink, A. F. Lynch, and M. Jagersand, “Adaptive dynamic visual servoing of a UAV,” IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 5, pp. 2529–2538, 2016.
[39] J. Li, H. Xie, R. Ma, and K. H. Low, “Output feedback image-based visual servoing of rotorcrafts,” Journal of Intelligent & Robotic Systems, Apr. 2018. [Online]. Available: https://doi.org/10.1007/s10846-018-0826-4
[40] R. Mahony and T. Hamel, “Visual servoing using linear features for under-actuated rigid body dynamics,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2, 2001, pp. 1153–1158.
[41] ——, “Image-based visual servo control of aerial robotic systems using linear image features,” IEEE Transactions on Robotics, vol. 21, no. 2, pp. 227–239, 2005.
[42] S. Cho and D. H. Shim, “Sampling-based visual path planning framework for a multirotor UAV,” International Journal of Aeronautical and Space Sciences, Mar. 2019. [Online]. Available: https://doi.org/10.1007/s42405-019-00155-8
[43] D. Zheng, H. Wang, J. Wang, S. Chen, W. Chen, and X. Liang, “Image-based visual servoing of a quadrotor using virtual camera approach,” IEEE/ASME Transactions on Mechatronics, vol. 22, no. 2, pp. 972–982, 2017.
[44] H. Berghuis and H. Nijmeijer, “Robust control of robots via linear estimated state feedback,” IEEE Transactions on Automatic Control, vol. 39, no. 10, pp. 2159–2162, Oct. 1994.
[45] G. Fink, “Computer vision-based motion control and state estimation for unmanned aerial vehicles (UAVs),” Ph.D. dissertation, Dept. of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, 2018.
[46] M. Bangura, “Aerodynamics and control of quadrotors,” Ph.D. dissertation, College of Engineering and Computer Science, The Australian National University, 2017.
[47] M. W. Spong, S. Hutchinson, and M. Vidyasagar, Robot Modelling and Control. New York, NY: Wiley, 2006.
[48] B. Espiau, F. Chaumette, and P. Rives, “A new approach to visual servoing in robotics,” IEEE Transactions on Robotics and Automation, vol. 8, no. 3, pp. 313–326, Jun. 1992.
[49] H. K. Khalil, Nonlinear Systems, 3rd ed. Upper Saddle River, USA: Prentice Hall, 2001.
[50] P. Corke, Robotics, Vision and Control: Fundamental Algorithms in MATLAB, ser. Springer Tracts in Advanced Robotics. New York City, USA: Springer-Verlag, 2011.
[51] “Pixhawk 1 autopilot,” Institute for Visual Computing, Swiss Federal Institute of Technology Zurich, accessed 1 Mar. 2019. [Online]. Available: https://docs.px4.io/en/flight_controller/pixhawk.html
[52] L. Meier, “PX4 autopilot,” Institute for Visual Computing, Swiss Federal Institute of Technology Zurich, 2019, accessed 1 Mar. 2019. [Online]. Available: http://dev.px4.io/en/

Muhammad Awais Rafique (S’11, GS’18) was born in Dipalpur, Punjab, Pakistan in 1990. He received the B.Sc. degree in Electrical Engineering from the University of Engineering & Technology, Lahore, Punjab, Pakistan, in 2012, with seven honours and a Gold Medal. During his undergraduate years he held the positions of President of the Society of Electrical Engineering Department (SEED) and Chair of the IEEE Student Branch. In 2014 he received the M.S. degree in Systems Engineering from the Pakistan Institute of Engineering and Applied Sciences, Islamabad, Pakistan. He joined the Pakistan Atomic Energy Commission in 2014 at Chashma Nuclear Power Plants, Mianwali, Punjab, Pakistan, where he worked as an Electrical Maintenance Engineer and later as a Planning Engineer. Currently he is a Ph.D. student in the Department of Electrical & Computer Engineering at the University of Alberta, Edmonton, Canada. He is also a member of the IEEE Educational Activities and IEEE Membership Development committees in IEEE Region 7 Canada. His research interests include vision-based control and estimation, unmanned aerial manipulators, multi-UAV cooperative control, chaos synchronization, and time-delay systems.

Alan F. Lynch (S’89, M’00) was born in Toronto, Ontario, Canada in 1969. He received the B.A.Sc. degree in Engineering Science (Electrical Option) at the University of Toronto, Toronto, Ontario, Canada, in 1991, the M.A.Sc. degree in Electrical Engineering from the University of British Columbia, Vancouver, British Columbia, Canada, in 1994, and the Ph.D. degree in Electrical and Computer Engineering from the University of Toronto, in 1999.
From 1999 to 2001 he was a postdoctoral researcher at the Institut fuer Regelungs- und Steuerungstheorie at Technische Universitaet Dresden, Dresden, Germany. Since 2001 he has been a faculty member at the Department of Electrical & Computer Engineering, University of Alberta, Edmonton, Alberta, Canada, and currently holds the rank of Professor. From 2009 to 2010 he was on sabbatical as a Humboldt Research Fellow at the Institut fuer Systemtheorie und Regelungstechnik (IST), Universitaet Stuttgart, Stuttgart, Germany. In 2017 he was a visiting professor at the Lehrstuhl fuer Systemtheorie und Regelungstechnik (LSR), Universitaet des Saarlandes, Saarbruecken, Germany. He is Senior Editor of the Journal of Intelligent & Robotic Systems. His interests include nonlinear control and its application to electrical, electromechanical, and robotic systems.
Dr. Lynch is a Registered Professional Engineer in the province of Alberta.
