
Proceedings of the 20th World Congress
The International Federation of Automatic Control
Toulouse, France, July 9-14, 2017

Available online at www.sciencedirect.com

ScienceDirect

IFAC PapersOnLine 50-1 (2017) 10488–10494
Autonomous Landing of a Multirotor Micro Air Vehicle on a High Velocity Ground Vehicle ⋆


Alexandre Borowczyk ∗ Duc-Tien Nguyen ∗ André Phu-Van Nguyen ∗ Dang Quang Nguyen ∗ David Saussié ∗ Jerome Le Ny ∗
∗ Mobile Robotics and Autonomous Systems Laboratory, Polytechnique Montreal and GERAD, Montreal, Canada (e-mail: {alexandre.borowczyk, duc-tien.nguyen, andre-phu-van.nguyen, dang-quang.nguyen, d.saussie, jerome.le-ny}@polymtl.ca).
Abstract: While autonomous multirotor micro aerial vehicles (MAVs) are uniquely well suited for certain types of missions benefiting from stationary flight capabilities, their more widespread usage still faces many hurdles, due in particular to their limited range and the difficulty of fully automating their deployment and retrieval. In this paper we address these issues by solving the problem of the automated landing of a quadcopter on a ground vehicle moving at relatively high speed. We present our system architecture, including the structure of our Kalman filter for the estimation of the relative position and velocity between the quadcopter and the landing pad, as well as our controller design for the full rendezvous and landing maneuvers. The system is experimentally validated by successfully landing in multiple trials a commercial quadcopter on the roof of a car moving at speeds of up to 50 km/h.

© 2017, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
Keywords: Kalman filters, Autonomous vehicles, Mobile robots, Guidance systems, Computer vision, Aerospace control
1. INTRODUCTION

The ability of multirotor micro aerial vehicles (MAVs) to perform stationary hover flight makes them particularly interesting for a wide variety of applications, e.g., site surveillance, parcel delivery, or search and rescue operations. At the same time however, they are challenging to use on their own because of their relatively short battery life and short range. Deploying and recovering MAVs from mobile Ground Vehicles (GVs) could alleviate this issue and allow more efficient deployment and recovery in the field. For example, delivery trucks, public buses or marine carriers could be used to transport MAVs between locations of interest and allow them to recharge periodically (Garone et al., 2014; Mathew et al., 2015). For search and rescue operations, the synergy between ground and air vehicles could help save precious mission time and would pave the way for the efficient deployment of large fleets of autonomous MAVs.

The idea of better integrating GVs and MAVs has indeed already attracted the attention of multiple car and MAV manufacturers (Kolodny, 2016; Lardinois, 2016). Research groups have previously considered the problem of landing a MAV on a mobile platform, but most of the existing work is concerned with landing on a marine platform or precision landing on a static or slowly moving ground target. Lange et al. (2009) provide an early example, where a custom visual marker made of concentric rings was created to allow for relative pose estimation, and control was performed using optical flow and velocity commands. More recently, Yang et al. (2015) used the ArUco library from Garrido-Jurado et al. (2014) as a visual fiducial and IMU measurements fused in a Square Root Unscented Kalman Filter for relative pose estimation. The system however still relies on optical flow for accurate velocity estimation. This becomes problematic as soon as the MAV aligns itself with a moving ground platform, at which point the optical flow camera suddenly measures the velocity of the platform relative to the MAV instead of the velocity of the MAV in the ground frame.

Muskardin et al. (2016) developed a system to land a fixed-wing MAV on top of a moving GV. However, their approach requires that the GV cooperates with the MAV during the landing maneuver and makes use of expensive RTK-GPS units. Kim et al. (2014) show that it is possible to land on a moving target using simple color blob detection and a non-linear Kalman filter, but test their solution only for speeds of less than 1 m/s. Most notably, Ling (2014) shows that it is possible to use low cost sensors combined with an AprilTag fiducial marker (Olson, 2011) to land on a small ground robot. He further demonstrates different methods to help accelerate the AprilTag detection. He notes in particular that as a quadcopter pitches forward to follow the ground platform, the bottom facing camera frequently loses track of the visual target, which stresses the importance of a model-based estimator such as a Kalman filter to compensate.

⋆ This work was partially supported by CFI JELF Award 32848 and a hardware donation from DJI.

2405-8963 © 2017, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
Peer review under responsibility of International Federation of Automatic Control.
10.1016/j.ifacol.2017.08.1980

The references above address the terminal landing phase of the MAV on a moving platform, but a complete system must also include a strategy to guide the MAV towards the GV during its approach phase. Proportional Navigation (PN) (Kabamba and Girard, 2014) is most commonly known as a guidance law for ballistic missiles, but can also be used for UAV guidance. Holt and Beard (2010) develop a form of PN specifically for road following by a fixed-wing vehicle and show that it is suitable for use with visual feedback coming from a gimbaled camera. Gautam et al. (2015) compare pure pursuit, line-of-sight and PN guidance laws to show that out of the three, PN is the most efficient in terms of the total required acceleration and the time required to reach the target. On the other hand, within close range of the target PN becomes inefficient. To alleviate this problem, Tan and Kumar (2014) propose a switching strategy to move from PN to a PD controller. Finally, to maximize the likelihood of visual target acquisition for a smooth transition from PN to PD, it is possible to follow the strategy of Lin and Yang (2014) to point a gimbaled camera towards a target.

Contributions and organization of the paper.  We describe a complete system allowing a multirotor MAV to land autonomously on a moving ground platform at relatively high speed, using only commercially available and relatively low-cost sensors. The system architecture is described in Section 2. Our algorithms combine a Kalman filter for relative position and velocity estimation, described in Section 3, with a PN-based guidance law for the approach phase and a PID controller for the terminal phase. Both controllers are implemented using only acceleration and attitude controls, as discussed in Section 4. Our design was tested both in simulations and through extensive experiments with a commercially available MAV, as Section 5 illustrates. To the best of our knowledge, we demonstrate experimentally automatic landing of a multirotor MAV on a moving GV traveling at the highest speed to date, with successful tests carried out up to a speed of 50 km/h.

2. SYSTEM ARCHITECTURE

In this section we describe the basic elements of our system architecture, both for the GV and the MAV. Specific details for the hardware used in our experiments are given in Section 5.

The GV is equipped with a landing pad, on which we place a 30 × 30 cm visual fiducial named AprilTag designed by Olson (2011), see Fig. 4. This allows us to visually measure the 6 Degrees of Freedom (DOF) pose of the landing pad using onboard cameras. In addition, we use position and acceleration measurements for the GV. In practice, low quality sensors are enough for this purpose, and in our experiments we simply place a mobile phone on the landing pad, which can transmit its GPS data to the MAV at 1 Hz and its Inertial Measurement Unit (IMU) data at 25 Hz at most. We can also integrate the rough heading and velocity estimates typically returned by basic GPS units, based simply on successive position measurements.

The MAV is equipped with an Inertial Navigation System (INS), an orientable 3-axis gimbaled camera (with separate IMU) for target tracking purposes, as well as a camera with a wide angle lens pointing down, which allows us to keep track of the AprilTag even at close range during the very last instants of the landing maneuver. The approach phase can also benefit from having an additional velocity sensor on board. Many commercial MAVs are equipped with velocity sensors relying on optical flow methods, which visually estimate velocity by computing the movement of features in successive images, see, e.g., (Zhou et al., 2015).

Four main coordinate frames are defined and illustrated in Fig. 1. The global North-East-Down (NED) frame, denoted {N}, is located at the first point detected by the MAV. The MAV body frame {B} is chosen according to the cross "×" configuration, i.e., its forward x-axis points between two of the arms and its y-axis points to the right. The gimbaled camera frame {G} is attached to the lens center of the moving camera. Its forward z-axis is perpendicular to the image plane and its x-axis points to the right of the gimbal frame. Finally, the bottom facing rigid camera frame {C} is obtained from the MAV body frame by a 90° rotation around the z_B axis and its origin is located at the optical center of the camera.

Fig. 1. Frames of reference used.
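As a small illustration of these conventions (our own sketch, not code from the paper), the rotation taking bottom-camera coordinates {C} into body coordinates {B} can be built as an elementary rotation about z_B; the sign of the 90° rotation depends on the axis convention and is an assumption here.

```python
import numpy as np

def rot_z(angle_rad):
    """Elementary right-handed rotation matrix about the z-axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# {C} is obtained from {B} by a 90 degree rotation around z_B, so a
# vector expressed in {C} maps into {B} through the same rotation.
# The +90 degree sign is an assumption; flip it for the other convention.
R_B_C = rot_z(np.pi / 2.0)

p_C = np.array([1.0, 0.0, 0.0])  # example point in the bottom camera frame
p_B = R_B_C @ p_C                # the same point in the body frame
```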
3. KALMAN FILTER

Estimation of the relative position, velocity and acceleration between the MAV and the landing pad, as required by our guidance and control system, is performed by a Kalman filter running at 100 Hz. The architecture of this filter is shown in Fig. 2 and described in the following paragraphs.

Fig. 2. Kalman filter architecture (INS at 100 Hz, gimbal camera at 30 Hz, bottom camera at 20 Hz, mobile phone GPS at 1 Hz, mobile phone IMU at 25 Hz; prediction at 100 Hz).
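As a minimal sketch of the predict/update structure shown in Fig. 2 (the class and method names are ours, and the linear algebra is the textbook Kalman recursion rather than the authors' implementation), the filter predicts at a fixed 100 Hz and applies a measurement update whenever one of the asynchronous sensors delivers data:

```python
import numpy as np

class RelativeStateFilter:
    """Kalman filter over the stacked MAV/landing-pad state (18 states)."""

    def __init__(self, x0, P0):
        self.x = x0  # state estimate, shape (18,)
        self.P = P0  # state covariance, shape (18, 18)

    def predict(self, F, Q):
        # Runs at the fixed 100 Hz prediction rate.
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, z, H, V):
        # Called whenever a sensor (INS, gimbal camera, bottom camera,
        # phone GPS or phone IMU) delivers a measurement z = H x + v.
        S = H @ self.P @ H.T + V
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(self.x.size) - K @ H) @ self.P
```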


3.1 Process model

In order to land on a moving target, the system estimates the three dimensional position p(t), linear velocity v(t) and acceleration a(t) of the MAV and of the GV. Thus, the state variables can be written as

\[ x = [\,x_m^\top \;\; x_a^\top\,]^\top \in \mathbb{R}^{18}, \tag{1} \]

where \(x_m = [\,p_m^\top \; v_m^\top \; a_m^\top\,]^\top\) and \(x_a = [\,p_a^\top \; v_a^\top \; a_a^\top\,]^\top\) are respectively the state vectors for the MAV and AprilTag, expressed in the NED frame. The superscript \(\top\) denotes the matrix transpose operation.

We use a simple second-order kinematic model for the MAV and the GV dynamics

\[ a(t) = \ddot{p}(t) = w(t), \tag{2} \]

where w(t) is a white noise process with power spectral density (PSD) q_w. The corresponding discrete time model using zero-order hold (ZOH) sampling is given by

\[ x_{k+1} = F x_k + w_k, \tag{3} \]

\[ F = \begin{bmatrix} F_m & 0 \\ 0 & F_a \end{bmatrix}, \qquad F_m = F_a = \begin{bmatrix} 1 & T_s & T_s^2/2 \\ 0 & 1 & T_s \\ 0 & 0 & 1 \end{bmatrix} \otimes I_3, \]

where T_s is the sampling period, ⊗ denotes the Kronecker product and I_3 is the 3 × 3 identity matrix. The process noise w_k is assumed to be a Gaussian white noise with a covariance matrix Q given by

\[ Q = \begin{bmatrix} q_{w_m} & 0 \\ 0 & q_{w_a} \end{bmatrix} \otimes Q_0, \qquad Q_0 = \begin{bmatrix} T_s^5/20 & T_s^4/8 & T_s^3/6 \\ T_s^4/8 & T_s^3/3 & T_s^2/2 \\ T_s^3/6 & T_s^2/2 & T_s \end{bmatrix} \otimes I_3, \]

where q_{w_m} and q_{w_a} are the PSD of the MAV and GV acceleration, respectively. These parameters are generally chosen a priori as part of an empirical tuning process.

3.2 Measurement Model

In general, the measurement vector at time k is given by

\[ z_k = H x_k + v_k, \tag{4} \]

where H is a known matrix and v_k is a Gaussian white noise with covariance matrix V_k, not correlated with the process noise w_k. The following subsections describe the rows of the matrix H for the various kinds of sensor measurements, which we simply call H for simplicity of notation.

MAV position, velocity and acceleration from INS  The INS of our MAV combines IMU, GPS and visual measurements to provide us with position, velocity and gravity compensated acceleration data directly expressed in the global NED frame

\[ z_k = [\,p_m^\top \;\; v_m^\top \;\; a_m^\top\,]^\top, \qquad H = [\,I_9 \;\; 0_{9\times 9}\,]. \tag{5} \]

As mentioned previously, the velocity measurement relying on optical flow methods is not correct when the MAV flies above a moving platform. Therefore, we increase the standard deviation of the velocity noise from 0.1 m/s in the approach phase to 10 m/s in the landing phase.
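For concreteness, the transition and process-noise matrices of Section 3.1 can be assembled directly with Kronecker products, for example as below; the sampling period matches the 100 Hz prediction rate, while the PSD values are placeholders rather than the flight-tested tuning.

```python
import numpy as np

Ts = 0.01       # sampling period for the 100 Hz prediction step
I3 = np.eye(3)

# F_m = F_a: ZOH transition for one vehicle's [position, velocity, accel].
F_block = np.kron(np.array([[1.0, Ts, Ts**2 / 2.0],
                            [0.0, 1.0, Ts],
                            [0.0, 0.0, 1.0]]), I3)        # 9 x 9
F = np.block([[F_block, np.zeros((9, 9))],
              [np.zeros((9, 9)), F_block]])               # 18 x 18

# Q0 from the ZOH integration of the white acceleration noise.
Q0 = np.kron(np.array([[Ts**5 / 20.0, Ts**4 / 8.0, Ts**3 / 6.0],
                       [Ts**4 / 8.0,  Ts**3 / 3.0, Ts**2 / 2.0],
                       [Ts**3 / 6.0,  Ts**2 / 2.0, Ts]]), I3)
q_wm, q_wa = 1.0, 1.0   # acceleration PSDs: empirical tuning placeholders
Q = np.block([[q_wm * Q0, np.zeros((9, 9))],
              [np.zeros((9, 9)), q_wa * Q0]])             # 18 x 18
```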
Target's GPS measurements  The GPS unit of the mobile phone on the GV provides position, speed and heading measurements. This information is sent to the MAV on-board computer (OBC) via a wireless link, which gives access to the landing pad's position in the global NED frame {N}

\[ x_a^N \approx (la - la_0)\, R_E, \qquad y_a^N \approx (lo - lo_0) \cos(la)\, R_E, \qquad z_a^N = al_0 - al, \]

where R_E = 6378137 m is the Earth radius and la, lo, al are the latitude, longitude (both in radians) and altitude of the landing pad respectively. The subscript 0 corresponds to the starting point. The above equations are valid when the current position is not too far from the starting point and under a spherical Earth assumption, but more precise transformations could be used (Groves, 2013).

The GPS heading ψ_a and speed U_a are also used to calculate the current velocity in the global NED frame

\[ \dot{x}_a^N = U_a \cos(\psi_a), \qquad \dot{y}_a^N = U_a \sin(\psi_a). \]

So the measurement model (4) is expressed as follows

\[ z_k = [\,x_a^N \;\; y_a^N \;\; z_a^N \;\; \dot{x}_a^N \;\; \dot{y}_a^N\,]^\top, \qquad H = [\,0_{5\times 9} \;\; I_5 \;\; 0_{5\times 4}\,]. \]

However, because the GPS heading measurements have poor accuracy at low speed, we discard them if U_a < 2.5 m/s, in which case

\[ z_k = [\,x_a^N \;\; y_a^N \;\; z_a^N\,]^\top, \qquad H = [\,0_{3\times 9} \;\; I_3 \;\; 0_{3\times 6}\,]. \]

The measurement noise covariance matrix V_k is provided by the GPS device itself.

Our GPS receiver is a low-cost device with an output rate of about 1 Hz. This source of data is only used to approach the target but is insufficient for landing on the moving GV. For the landing phase, it is necessary to use the AprilTag detection with the gimbaled and bottom facing cameras.
acceleration, respectively. These parameters are generally
chosen a priori as part of an empirical tuning process.
Gimbaled Camera measurements This camera provides
measurements of the relative position between the MAV
3.2 Measurement Model
and the landing pad with centimeter accuracy, at range up
to 5 m. This information is converted into the global NED
In general, the measurement vector at time k is given by
frame {N} by
zk = Hxk + vk , (4)
pN N N G
m − pa = RG pm/a
where H is a known matrix, vk is a Gaussian white noise
with covariance matrices Vk and not correlated with the where RNG is the rotation matrix from {N} to {G} returned
process noise wk . The following subsections describe the by the gimbal IMU. Therefore, the observation model (4)
rows of the matrix H for the various kinds of sensor corresponds to
measurements, which we simply call H for simplicity of  
N 
zk = xN N N N N
m − xa ym − ya zm − za
notation.
H = [I3 03×6 −I3 03×6 ] .
MAV position, velocity and acceleration from INS The
INS of our MAV combines IMU, GPS and visual measure- Here the standard deviation of the measurement noise is
ments to provide us with position, velocity and gravity empirically set to 0.2 m. To reduce the chances of target
compensated acceleration data directly expressed in the loss, the gimbaled camera centers the image onto the
global NED frame AprilTag as soon as visual detection is achieved. When the
 
 
AprilTag cannot be detected, we follow the control scheme
zk = p 
m v m am , H = [I9 09×9 ] (5) proposed by Lin and Yang (2014) to point the camera
As mentioned previously, the velocity measurement relying towards the landing pad using the estimated line-of-sight
on optical flow methods is not correct when the MAV (LOS) information obtained from the Kalman filter.


Bottom camera measurements  The bottom facing camera is used to assist the last moments of the landing, when the MAV is close to the landing pad, yet too far to cut off the motors. At that moment, the gimbaled camera cannot perceive the whole AprilTag, but a wide angle camera like the mvBlueFOX can still provide measurements. This camera measures the target position in the frame {C}, as illustrated in Fig. 1. Hence, the observation model is the same as (3.2.3), except for the transformation to the global NED frame

\[ p_m^N - p_a^N = R_C^N\, p_{m/a}^C, \]

where R_C^N denotes the rotation matrix from {C} to {N}.

Landing pad acceleration using the mobile phone's IMU  Finally, since most mobile phones also contain an IMU, we leverage this sensor to estimate the GV's acceleration.

\[ z_k = [\,\ddot{x}_a^N \;\; \ddot{y}_a^N \;\; \ddot{z}_a^N\,]^\top, \qquad H = [\,0_{3\times 15} \;\; I_3\,], \qquad V_k = \mathrm{diag}(0.6^2, 0.6^2, 0.6^2)\ (\mathrm{m/s^2})^2. \]

The Kalman filter algorithm follows the standard two steps, with the prediction step running at 100 Hz and the measurement update step executed as soon as new measurements become available. The output of this filter is the input to the guidance and control system described in the next section, which is used by the MAV to approach and land safely on the moving platform.

4. GUIDANCE AND CONTROL SYSTEM

For GV tracking by the MAV, we use a guidance strategy switching between a Proportional Navigation (PN) law (Kabamba and Girard, 2014) for the approach phase and a PID for the landing phase, which is similar in spirit to the approach in (Tan and Kumar, 2014).

The approach phase is characterized by a large distance between the MAV and the GV and the absence of visual localization data. Hence, in this phase, the MAV has to rely on the data transmitted by the GV's GPS and IMU. The goal of the controller for this phase is to follow an efficient "pursuit" trajectory, which is achieved here by a PN controller augmented with a closing velocity controller. In contrast, the landing phase is characterized by a relatively close proximity between the MAV and the GV and the availability of visual feedback to determine the target's position. This phase requires a higher level of accuracy and faster response time from the controller, and a PID controller can be more easily tuned to meet these requirements than a PN controller. In addition, the system should transition from one controller to the other seamlessly, avoiding discontinuity in the commands sent to the MAV.

Proportional Navigation Guidance  The well-known PN guidance law uses the fact that two vehicles are on a collision course if their LOS remains at a constant angle in order to steer the MAV toward the GV. It works by keeping the rotation of the velocity vector proportional to the rotation of the LOS vector. Our PN controller provides an acceleration command that is normal to the instantaneous LOS

\[ a_\perp = -\lambda |\dot{u}|\, \frac{u}{|u|} \times \Omega, \quad \text{with } \Omega = \frac{u \times \dot{u}}{u \cdot u}, \tag{6} \]

where λ is a gain parameter, u = p_a^N − p_m^N and u̇ = v_a^N − v_m^N are obtained from the Kalman filter and represent the LOS vector and its derivative expressed in the NED frame, and Ω is the rotation vector of the LOS. We then supplement the PN guidance law (6) with an approach velocity controller determining the acceleration a_∥ along the LOS direction, which in particular allows us to specify a high enough velocity required to properly overtake the target. This acceleration component is computed using the following PD structure

\[ a_\parallel = K_p u + K_d \dot{u}, \]

where K_p and K_d are constant gains. The total acceleration command is obtained by combining both components

\[ a = a_\perp + a_\parallel. \]
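Putting the two components together, the approach-phase acceleration command of equation (6) plus the PD closing-velocity term can be sketched as follows; the gain values are illustrative placeholders, not the experimental tuning.

```python
import numpy as np

def approach_acceleration(p_a, p_m, v_a, v_m, lam=3.0, Kp=0.5, Kd=1.0):
    """Total approach-phase command a = a_perp + a_par from equation (6)."""
    u = p_a - p_m        # LOS vector (from the Kalman filter estimates)
    u_dot = v_a - v_m    # LOS derivative
    omega = np.cross(u, u_dot) / np.dot(u, u)  # rotation vector of the LOS
    # PN component, normal to the instantaneous LOS.
    a_perp = -lam * np.linalg.norm(u_dot) * np.cross(u / np.linalg.norm(u), omega)
    # Closing-velocity PD component along the LOS direction.
    a_par = Kp * u + Kd * u_dot
    return a_perp + a_par
```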
As only the horizontal control is of interest, the acceleration along the z-axis is disregarded. The desired acceleration then needs to be converted to attitude control inputs that are more compatible with the MAV input format. In frame {N}, the quadrotor dynamic equations of translation read as follows

\[ m\, a_m^N = \begin{bmatrix} 0 \\ 0 \\ mg \end{bmatrix} + R_B^N \begin{bmatrix} 0 \\ 0 \\ -T \end{bmatrix} + F_D, \]

where R_B^N denotes the rotation matrix from {B} to {N},

\[ R_B^N = \begin{bmatrix} c_\theta c_\psi & s_\phi s_\theta c_\psi - c_\phi s_\psi & c_\phi s_\theta c_\psi + s_\phi s_\psi \\ c_\theta s_\psi & s_\phi s_\theta s_\psi + c_\phi c_\psi & c_\phi s_\theta s_\psi - s_\phi c_\psi \\ -s_\theta & s_\phi c_\theta & c_\phi c_\theta \end{bmatrix}, \]

T is the total thrust created by the rotors, F_D the drag force, m the MAV mass and g the standard gravity. The force equations simplify as

\[ m \begin{bmatrix} \ddot{x}_m \\ \ddot{y}_m \\ \ddot{z}_m \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ mg \end{bmatrix} - \begin{bmatrix} c_\phi s_\theta c_\psi + s_\phi s_\psi \\ c_\phi s_\theta s_\psi - s_\phi c_\psi \\ c_\phi c_\theta \end{bmatrix} T + \begin{bmatrix} -k_d\, \dot{x}_m |\dot{x}_m| \\ -k_d\, \dot{y}_m |\dot{y}_m| \\ -k_d\, \dot{z}_m |\dot{z}_m| \end{bmatrix}, \]

where the drag is roughly modeled as a force proportional to the signed quadratic velocity in each direction and k_d is constant, which we estimated by recording the terminal velocity for a range of attitude controls at level flight and performing a least squares regression on the data. For constant flight altitude, T = mg/(c_φ c_θ), and assuming ψ = 0, this yields

\[ m \begin{bmatrix} \ddot{x}_m \\ \ddot{y}_m \end{bmatrix} = mg \begin{bmatrix} -\tan\theta \\ \tan\phi / \cos\theta \end{bmatrix} - \begin{bmatrix} k_d\, \dot{x}_m |\dot{x}_m| \\ k_d\, \dot{y}_m |\dot{y}_m| \end{bmatrix}. \]

The following relations can then be obtained:

\[ \theta = -\arctan\!\left( \frac{m \ddot{x}_m + k_d \dot{x}_m |\dot{x}_m|}{mg} \right), \qquad \phi = \arctan\!\left( \cos\theta\, \frac{m \ddot{y}_m + k_d \dot{y}_m |\dot{y}_m|}{mg} \right), \]

where θ and φ are the desired pitch and roll angles for specific acceleration demands.

PID controller  The landing phase is handled by a PID controller, with the desired acceleration computed as

\[ a = K_p u + K_i \int u \, dt + K_d \dot{u}, \]

where K_p, K_i and K_d are constant gains. The tuning for the PID controller was selected to provide aggressive dynamic path following, promoting quick disturbance rejection. The controller was first tuned in simulation and then the settings were manually adjusted in flight.
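The two relations above invert directly into pitch and roll commands; a sketch follows (the naming is ours, with ψ = 0 assumed as in the derivation).

```python
import numpy as np

def accel_to_attitude(ax_cmd, ay_cmd, vx, vy, m, kd, g=9.81):
    """Desired pitch (theta) and roll (phi) for the commanded horizontal
    accelerations, compensating the signed quadratic drag model."""
    theta = -np.arctan((m * ax_cmd + kd * vx * abs(vx)) / (m * g))
    phi = np.arctan(np.cos(theta) * (m * ay_cmd + kd * vy * abs(vy)) / (m * g))
    return theta, phi
```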


Controller switching  The controller switching scheme chosen is a simple fixed distance switching condition with a slight hysteresis. The switching distance selected was 6 m, to allow for any perturbation due to the switching to dissipate before reaching the landing platform.

Vertical control  The entire approach phase is done at a constant altitude, which is handled by the internal vertical position controller of the MAV. The descent is initiated once the quadrotor has stabilized over the landing platform. A constant vertical velocity command is then issued to the MAV and maintained until it reaches a height of 0.2 m above the landing platform, at which point the motors are disarmed.
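A minimal version of such a switching rule might look as follows; the 6 m distance is the value quoted above, while the width of the hysteresis band is our illustrative assumption.

```python
def select_controller(distance_to_pad, current, d_switch=6.0, band=1.0):
    """Fixed-distance switching between the PN approach controller and
    the PID landing controller, with a hysteresis band to avoid chatter."""
    if current == "PN" and distance_to_pad < d_switch - band:
        return "PID"
    if current == "PID" and distance_to_pad > d_switch + band:
        return "PN"
    return current
```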
5. EXPERIMENTAL VALIDATION

5.1 System Description

We implemented our system on a commercial off-the-shelf DJI Matrice 100 (M100) quadcopter shown in Fig. 3. All computations are performed on the standard OBC for this platform (DJI Manifold), which contains an Nvidia Tegra K1 SoC. The 3-axis gimbaled camera is a Zenmuse X3, from which we receive 720p YUV color images at 30 Hz. To reduce computations, we drop the U and V channels and downsample the images to obtain 640 × 360 monochrome images. We modified the M100 to rigidly attach a downward facing Matrix Vision mvBlueFOX camera, equipped with an ultra-wide angle Sunex DSL224D lens with a diagonal field of view of 176 degrees. The M100 is also equipped with the DJI Guidance module, an array of up to five stereo cameras, which are meant to help developers create mapping and obstacle avoidance solutions for robotics applications. This module seamlessly integrates with the INS to provide us with position, velocity and acceleration measurements for the M100 using a fusion of on-board sensors described in Zhou et al. (2015). This information is used in our Kalman filter in equation (5).

Fig. 3. The M100 quadcopter. Note that all side facing cameras of the Guidance module were removed and the down facing BlueFox camera sits behind the bottom guidance sensor.

Our algorithms were implemented in C++ using ROS (Quigley et al., 2009). Using an open source implementation of the AprilTag library based on OpenCV and with accelerations provided by OpenCV4Tegra, we can run the tag detection at a full 30 fps using the X3 camera and 20 fps on the BlueFOX camera.
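The image pre-processing step mentioned above (keeping only the luma channel and halving the resolution) is cheap to implement; a sketch with OpenCV's Python bindings follows, assuming a planar YUV buffer whose first 720 rows are the Y plane, which depends on the camera driver.

```python
import cv2

def preprocess(yuv_frame):
    """720p YUV -> 640x360 monochrome: drop U/V, then downsample.
    Assumes a planar layout with the luma plane first."""
    y_plane = yuv_frame[:720, :1280]
    return cv2.resize(y_plane, (640, 360), interpolation=cv2.INTER_AREA)
```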
As mentioned in Section 4, we implemented our control system using pure attitude control in the xy axes and velocity control in the z axis. The reason for this is that the internal velocity estimator of the M100 fuses optical flow measurements from the Guidance system. These measurements become extremely inaccurate once the quadcopter flies over the landing platform. Although optical flow could be used to measure the relative velocity of the car, it would be difficult to pinpoint the moment where flow measurements transition from being with respect to the ground to being with respect to the moving car.

5.2 Experimental Results

Fig. 4. Experimental setup showing the required equipment on the car. In practice the mobile phone could also be held inside the car as long as GPS reception is still good.

A first experiment was done using only data from the mobile phone (no visual feedback) to prove the validity of our Proportional Navigation controller for the long range approach. Figure 5 shows the output of our EKF estimating the MAV's and the target's positions. With the target and the MAV starting at about 30 meters from each other, we see the MAV fly a smooth rendezvous trajectory eventually ending on top of the target.

The close range system was experimentally validated with the GV moving at speeds as low as a human jogging and on a private race track at 30, 40 and 50 km/h, with successful landings in each case. A further experiment was attempted at 55 km/h; this resulted in a missed landing as it was too close to the maximum speed of the MAV. However, we can affirm that we can land easily at 40 km/h or less, and acceptably at 50 km/h. Videos of our experiments can be found at https://youtu.be/ILQqD2xQ4tg.

Figure 6 shows the landing sequence where the quadrotor takes off, tracks and lands on the landing pad. The curves gain in altitude as the trajectory progresses because of the elevation profile of the race track. The effect is seen more clearly in Fig. 7, where we can also see the filtered AprilTag altitude rise, thanks to visual data and the M100's internal altitude estimator, even before the phone's GPS data indicates a change in altitude.


Furthermore, we can see in Fig. 7 how the M100 closely matches the velocity of the AprilTag to perform the landing maneuver. The two peaks at the 24 and 27 second marks are strongly correlated with the visual loss of the tag by the BlueFOX camera, which we can observe in Fig. 8. The descent starts at the 24 second mark, slightly before the car hits its designated velocity of 14 m/s or 50.4 km/h. We can also see in Fig. 7 how the M100's velocity estimation from the INS becomes incorrect when it is on top of the car, which explains why we decided to dynamically increase the standard deviation of the measurement as described in Section 3.2. Figure 9 shows the quadcopter's attitude during the flight. Notice how the roll is stable close to 0° while the pitch stays between −10° and −25° for consistent forward flight. Finally, the yaw changes drastically after 10 seconds, at the moment the car starts moving.

Fig. 5. The PN controller efficiently catches up with the target at long range even when the only source of information is the mobile phone's GPS and IMU.

Fig. 6. Landing trajectory at 50 km/h using the PID controller.

Fig. 7. Motions of M100 and AprilTag (estimated speed and altitude over time).

Fig. 8. LOS and distance (coordinates in NED frame; Kalman filter, X3 and BlueFOX estimates).

Fig. 9. M100 attitude (roll, pitch and yaw over time).

6. CONCLUSION

The problem of the automatic landing of a MAV on a moving vehicle was solved with experimental tests going up to 50 km/h. A Proportional Navigation controller was used for the long range approach, which subsequently transitioned into a PID controller at close range. A Kalman filter was used to estimate the position of the MAV relative to the landing pad by fusing together measurements from the MAV's onboard INS, a visual fiducial marker and a mobile phone. Furthermore, it was shown that this system can be implemented using only commercial off-the-shelf components.

Future work may include using a better multiscale visual fiducial on the landing pad to allow visual target tracking at longer and closer ranges using a single camera, or simplifying the system by removing the requirement for a mobile phone. Performance improvements could also be achieved by adding a model of the ground vehicle's turbulence or adding wind estimation in the control system.

REFERENCES

Garone, E., Determe, J.F., and Naldi, R. (2014). Generalized traveling salesman problem for carrier-vehicle systems. AIAA Journal of Guidance, Control and Dynamics, 37(3), 766–774.


Garrido-Jurado, S., Muñoz-Salinas, R., Madrid-Cuevas, F., and Marín-Jiménez, M. (2014). Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition, 47(6), 2280–2292.
Gautam, A., Sujit, P.B., and Saripalli, S. (2015). Application of guidance laws to quadrotor landing. In Unmanned Aircraft Systems (ICUAS), 2015 International Conference on, 372–379.
Groves, P.D. (2013). Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems. Artech House, 2nd edition.
Holt, R. and Beard, R. (2010). Vision-based road-following using proportional navigation. Journal of Intelligent and Robotic Systems, 57(1-4), 193–216.
Kabamba, P.T. and Girard, A.R. (2014). Fundamentals of Aerospace Navigation and Guidance. Cambridge University Press.
Kim, J., Jung, Y., Lee, D., and Shim, D.H. (2014). Outdoor autonomous landing on a moving platform for quadrotors using an omnidirectional camera. In Unmanned Aircraft Systems (ICUAS), 2014 International Conference on, 1243–1252.
Kolodny, L. (2016). Mercedes-Benz and Matternet unveil vans that launch delivery drones. http://tcrn.ch/2c48his. (Accessed on 09/09/2016).
Lange, S., Sunderhauf, N., and Protzel, P. (2009). A vision based onboard approach for landing and position control of an autonomous multirotor UAV in GPS-denied environments. In Advanced Robotics, 2009. ICAR 2009. International Conference on, 1–6.
Lardinois, F. (2016). Ford and DJI launch $100,000 developer challenge to improve drone-to-vehicle communications. tcrn.ch/1O7uOFF. (Accessed on 09/28/2016).
Lin, C.E. and Yang, S.K. (2014). Camera gimbal tracking from UAV flight control. In Automatic Control Conference (CACS), 2014 CACS International, 319–322.
Ling, K. (2014). Precision Landing of a Quadrotor UAV on a Moving Target Using Low-cost Sensors. Master's thesis, University of Waterloo.
Mathew, N., Smith, S.L., and Waslander, S.L. (2015). Planning paths for package delivery in heterogeneous multirobot teams. IEEE Transactions on Automation Science and Engineering, 12(4), 1298–1308.
Muskardin, T., Balmer, G., Wlach, S., Kondak, K., Laiacker, M., and Ollero, A. (2016). Landing of a fixed-wing UAV on a mobile ground vehicle. In 2016 IEEE International Conference on Robotics and Automation (ICRA), 1237–1242.
Olson, E. (2011). AprilTag: A robust and flexible visual fiducial system. In Robotics and Automation (ICRA), 2011 IEEE International Conference on, 3400–3407.
Quigley, M., Conley, K., Gerkey, B.P., Faust, J., Foote, T., Leibs, J., Wheeler, R., and Ng, A.Y. (2009). ROS: an open-source robot operating system. In ICRA Workshop on Open Source Software.
Tan, R. and Kumar, M. (2014). Tracking of ground mobile targets by quadrotor unmanned aerial vehicles. Unmanned Systems, 2(02), 157–173.
Yang, S., Ying, J., Lu, Y., and Li, Z. (2015). Precise quadrotor autonomous landing with SRUKF vision perception. In 2015 IEEE International Conference on Robotics and Automation (ICRA), 2196–2201.
Zhou, G., Fang, L., Tang, K., Zhang, H., Wang, K., and Yang, K. (2015). Guidance: A visual sensing platform for robotic applications. In 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 9–14.