Proceedings, 2nd IFAC Conference on Modelling, Identification and Control of Nonlinear Systems
Guadalajara, Mexico, June 20-22, 2018

Available online at www.sciencedirect.com

ScienceDirect

IFAC PapersOnLine 51-13 (2018) 344–349
Ground Vehicle Tracking with a Quadrotor using Image Based Visual Servoing

Javier Gomez-Avila, Carlos Lopez-Franco, Alma Y. Alanis, Nancy Arana-Daniel, Michel Lopez-Franco

University of Guadalajara, CUCEI, Computer Science Department, Guadalajara, Mexico (e-mail: carlos.lopez@cucei.udg.mx).
Abstract: In this paper the authors present an approach to track a ground vehicle with a quadrotor equipped with a monocular vision system. The proposed approach is based on a visual servoing technique used to estimate the desired quadrotor velocities. The paper includes simulation and experimental results that show the effectiveness of the algorithm.

© 2018, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
Peer review under responsibility of International Federation of Automatic Control.
10.1016/j.ifacol.2018.07.302
Keywords: Quadcopter, Visual Servoing.
1. INTRODUCTION

In recent years, Unmanned Aerial Vehicles (UAV) have gained special popularity, mainly because of their speed and ability to move in a 3D space. This kind of robot is equipped with light onboard sensors such as Inertial Measurement Units (IMU) and cameras in order to know the robot attitude and obtain information about the environment, respectively. However, aerial robots have a limited payload capacity and, in consequence, they are limited in battery consumption and in the amount and weight of the onboard sensors. In contrast, Unmanned Ground Vehicles (UGV) can be equipped with heavier and more powerful sensors, but their movement is limited to the ground and to slower speeds than UAVs. With the collaboration between these robots we can exploit the advantages of both and minimize their drawbacks for the purpose of developing applications such as mapping and surveillance.

Multi-robot systems can be used in exploration tasks in unknown environments. In Li et al. (2011) and Zhang et al. (2009), a multi-robot system integrated by an aerial and a ground vehicle is presented; they implement a vision-based tracking system with optical flow. Also, in Herisse et al. (2008), Herisse et al. (2009) and Romero et al. (2012), an optical flow approach with a monocular camera is used to control the position of a UAV. In Altuğ et al. (2005), the control of a quadrotor is implemented using dual camera feedback for robot pose estimation. In Angeletti et al. (2008), an on-board laser, an off-board monocular camera and artificial markers are employed for the hovering of a quadrotor.

In this paper, a vision-based tracking for a multi-robot system is implemented. The quadrotor used in this work is an AR.Drone, which is equipped with a monocular camera and an IMU. The mobile robot is the Kuka youBot. The vision system extracts four features, and these features are used as inputs of the Image Based Visual Servo (IBVS) control approach. This algorithm is faster than optical flow approaches, since detection and description of features is computationally expensive; in our case we compute the quadrotor relative position from the pixel coordinates of the features. The algorithm was implemented in the Robot Operating System (ROS). A brief overview of the proposed approach is illustrated in Fig. 1.

Fig. 1. Experiment diagram. The image is sent wirelessly to an off-board station where feature extraction and IBVS control are carried out.

The remainder of this paper is organized as follows: Section 2 describes the quadrotor dynamic model. The IBVS algorithm is presented in Section 3. Simulation and experimental results are shown in Sections 4 and 5, respectively. Finally, conclusions are given in Section 6.

2. QUADROTOR DYNAMIC MODEL

In this work, we use an AR.Drone quadrotor, which is an aerial vehicle with two pairs of propellers (1, 3) and (2, 4) turning in opposite directions as shown in Fig. 2. Increasing or decreasing the speed of the four rotors results in a vertical motion of the vehicle (Fig. 2.a). Changing the speed difference of the pairs (1, 3) and (2, 4) generates a roll rotation and a translational movement in the y axis direction (Fig. 2.b). On the other hand, changing the speed difference of the pairs (1, 2) and (3, 4) produces a pitch rotation and a translational movement in the x axis direction (Fig. 2.c). Yaw rotation is generated when increasing or decreasing one pair (1, 4) or (2, 3), and it is the result of the difference of the counter-torque between the pairs (Fig. 2.d and Fig. 2.e).
Fig. 2. Quadrotor motion, the arrow width in the propeller is proportional to its rotation. The white arrow describes the motion of the quadrotor.

Fig. 3. Quadrotor configuration, B represents the quadrotor fixed frame and E the inertial frame.

Without loss of generality, we can consider that the center of mass and the body fixed frame origin coincide, see Fig. 3. It is important to note that in the AR.Drone Parrot the x and y axes do not match the robot structure (as can be seen in Fig. 3). The quadrotor orientation in space is given by a rotation matrix R ∈ SO(3) from the body fixed frame to the inertial frame. The dynamics of a rigid body under external forces F and τ are expressed as follows (Bouabdallah et al. (2004))

\begin{bmatrix} mI_{3\times 3} & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} \dot{V} \\ \dot{\omega} \end{bmatrix} + \begin{bmatrix} \omega \times mV \\ \omega \times I\omega \end{bmatrix} = \begin{bmatrix} F \\ \tau \end{bmatrix}    (1)

where I is the inertia matrix, V the body linear speed vector and ω the angular speed.

The quadrotor equations of motion can be expressed as

\begin{aligned} \dot{\zeta} &= v \\ \dot{v} &= -g e_3 + \frac{b}{m} R e_3 \Big( \textstyle\sum \Omega_i^2 \Big) \\ \dot{R} &= R\hat{\omega} \\ I\dot{\omega} &= -\omega \times I\omega - J_r (\omega \times e_3) \textstyle\sum \Omega_i + \tau_a \end{aligned}    (2)

where ζ is the position vector, g is the gravity action on the z axis (e_3), the rotation matrix is denoted by R, ω̂ is the skew-symmetric matrix, Ω_i represents the rotor speed, I the body inertia, J_r the rotor inertia, d the drag factor, b the thrust factor, l the distance from the body fixed frame origin to the rotor, and τ_a the torque applied to the quadrotor, expressed as

\tau_a = \begin{bmatrix} \frac{\sqrt{2}}{2} lb \left( \Omega_1^2 + \Omega_2^2 - \Omega_3^2 - \Omega_4^2 \right) \\ \frac{\sqrt{2}}{2} lb \left( \Omega_1^2 + \Omega_3^2 - \Omega_2^2 - \Omega_4^2 \right) \\ d \left( \Omega_1^2 + \Omega_4^2 - \Omega_2^2 - \Omega_3^2 \right) \end{bmatrix}    (3)

The full quadrotor dynamic model, as shown in Bouabdallah et al. (2004), is represented by

\begin{aligned} \ddot{x} &= \frac{U_1}{m} \left( \cos\phi \sin\theta \cos\psi + \sin\phi \sin\psi \right) \\ \ddot{y} &= \frac{U_1}{m} \left( \cos\phi \sin\theta \sin\psi - \sin\phi \cos\psi \right) \\ \ddot{z} &= -g + \frac{U_1}{m} \cos\phi \cos\theta \\ \ddot{\phi} &= \frac{I_y - I_z}{I_x} \dot{\theta}\dot{\psi} - \frac{J_r}{I_x} \dot{\theta}\Omega + \frac{l}{I_x} U_2 \\ \ddot{\theta} &= \frac{I_z - I_x}{I_y} \dot{\phi}\dot{\psi} + \frac{J_r}{I_y} \dot{\phi}\Omega + \frac{l}{I_y} U_3 \\ \ddot{\psi} &= \frac{I_x - I_y}{I_z} \dot{\phi}\dot{\theta} + \frac{U_4}{I_z} \end{aligned}    (4)

Fig. 4. U_2, U_3 and U_4 are inputs for the rotational subsystem; U_1, roll, pitch and yaw are inputs for the translational subsystem.

where U_i are the system inputs and Ω the changing attitude angle which is part of the gyroscopic effects induced by the propellers. Gyroscopic effects provide a more accurate model; nevertheless, they have insignificant roles in the overall attitude of the quadcopter (Schmidt (2011)). The inputs of the system are defined as

\begin{aligned} U_1 &= b \left( \Omega_1^2 + \Omega_2^2 + \Omega_3^2 + \Omega_4^2 \right) \\ U_2 &= \frac{\sqrt{2}}{2} b \left( \Omega_1^2 + \Omega_2^2 - \Omega_3^2 - \Omega_4^2 \right) \\ U_3 &= \frac{\sqrt{2}}{2} b \left( \Omega_1^2 + \Omega_3^2 - \Omega_2^2 - \Omega_4^2 \right) \\ U_4 &= d \left( \Omega_1^2 + \Omega_4^2 - \Omega_2^2 - \Omega_3^2 \right) \\ \Omega &= \Omega_2 + \Omega_4 - \Omega_1 - \Omega_3 \end{aligned}    (5)

The quadrotor is a rotating rigid body with six degrees of freedom and a rotational and a translational dynamics. Inputs U_i and their relation with both subsystems are shown in Fig. 4.
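To make the rotor-to-input mapping concrete, the sketch below evaluates the inputs of (5) and the angular accelerations of the rotational subsystem in (4) from the rotor speeds. It is only an illustration under assumed values: the thrust factor b, drag factor d, inertias and arm length used here are placeholders, not parameters identified for the AR.Drone in this paper.

```python
import numpy as np

def system_inputs(rotor_speeds, b=3.1e-5, d=7.5e-7):
    """Inputs U1..U4 and gyroscopic term Omega of Eq. (5); b and d are placeholder factors."""
    w2 = np.asarray(rotor_speeds, dtype=float) ** 2
    s22 = np.sqrt(2.0) / 2.0
    U1 = b * w2.sum()                                   # collective thrust
    U2 = s22 * b * (w2[0] + w2[1] - w2[2] - w2[3])      # roll input
    U3 = s22 * b * (w2[0] + w2[2] - w2[1] - w2[3])      # pitch input
    U4 = d * (w2[0] + w2[3] - w2[1] - w2[2])            # yaw input
    Omega = (rotor_speeds[1] + rotor_speeds[3]
             - rotor_speeds[0] - rotor_speeds[2])       # residual rotor speed
    return U1, U2, U3, U4, Omega

def rotational_accelerations(U2, U3, U4, Omega, rates,
                             Ix=2.4e-3, Iy=2.4e-3, Iz=4.5e-3, Jr=6.0e-5, l=0.18):
    """Angular accelerations (phi'', theta'', psi'') of Eq. (4); inertias are placeholders."""
    dphi, dtheta, dpsi = rates
    ddphi = (Iy - Iz) / Ix * dtheta * dpsi - Jr / Ix * dtheta * Omega + l / Ix * U2
    ddtheta = (Iz - Ix) / Iy * dphi * dpsi + Jr / Iy * dphi * Omega + l / Iy * U3
    ddpsi = (Ix - Iy) / Iz * dphi * dtheta + U4 / Iz
    return ddphi, ddtheta, ddpsi

# Example: slightly faster rotors 1 and 2 give a positive roll input
U1, U2, U3, U4, Om = system_inputs([210.0, 210.0, 200.0, 200.0])
print(rotational_accelerations(U2, U3, U4, Om, rates=(0.0, 0.0, 0.0)))
```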
3. VISUAL SERVO CONTROL

In our case, the camera is fixed to the robot, therefore we use an eye-in-hand IBVS. In this case the movement of the quadrotor induces camera motion (Chaumette and Hutchinson (2006)).

The purpose of the vision based control is to minimize the error

e(t) = s(m(t), a) − s*    (6)

where s represents the coordinates in the image plane of a feature and it depends on m(t), which is a vector of 2D point coordinates in the image plane, and a is the set of known parameters of the camera (e.g. camera intrinsic parameters). Desired values are contained in the vector s*.

Since the error e(t) is defined on the image space and the robot moves in the 3D space, it is necessary to relate changes in the image features with the quadrotor displacement. The image Jacobian described in Weiss et al. (1987) (also known as interaction matrix) captures the relation between features and robot velocities as follows

ṡ = L_s v_c    (7)

where ṡ is the variation of the feature positions and the control input v_c = (v_c, ω_c) denotes the camera translational (v_c) and rotational (ω_c) velocities. Considering v_c as the control input, we can try to ensure an exponential decrease of the error with

v_c = −λ L_s^+ e    (8)

where λ is a positive constant, k is the number of features, L_s^+ ∈ R^{6×k} is the pseudo-inverse of L_s and e is the feature error.

To calculate L_s, consider a 3D point X with coordinates (X, Y, Z) in the camera frame; the projected point in the image plane x with coordinates (x, y) is defined as

x = X/Z = (u − c_u)/(fα)    (9)
y = Y/Z = (v − c_v)/f    (10)

where (u, v) are the coordinates of the point in the image space expressed in pixel units, (c_u, c_v) are the coordinates of the principal point, α is the ratio of pixel dimensions and f the focal length. Differentiating (9) and (10) we have

ẋ = (Ẋ − xŻ)/Z,    ẏ = (Ẏ − yŻ)/Z    (11)

The relation between a fixed 3D point and the camera spatial velocity is stated as follows

Ẋ = −v_c − ω_c × X    (12)

then we can write the derivatives of the 3D coordinates as

Ẋ = −v_x − ω_y Z + ω_z Y    (13)
Ẏ = −v_y − ω_z X + ω_x Z    (14)
Ż = −v_z − ω_x Y + ω_y X    (15)

Substituting (13)-(15) in (11) we can state the pixel coordinate variation as follows

ẋ = −v_x/Z + x v_z/Z + x y ω_x − (1 + x²) ω_y + y ω_z    (16)
ẏ = −v_y/Z + y v_z/Z + (1 + y²) ω_x − x y ω_y − x ω_z    (17)

which can be written as

ẋ = L_x v_c    (18)

where ẋ is the rate of change of the feature coordinates in the image plane and

L_x = \begin{bmatrix} -\frac{1}{Z} & 0 & \frac{x}{Z} & xy & -(1+x^2) & y \\ 0 & -\frac{1}{Z} & \frac{y}{Z} & 1+y^2 & -xy & -x \end{bmatrix}    (19)

where Z represents the distance from the vision sensor to the feature. Most of the IBVS algorithms need to approximate this depth, and it is possible if they know the actual size of the pattern. In our case we use an RGB-D sensor and this distance is always known; therefore, the size of our pattern can change. To control the 6 DOF, at least three points are necessary (Chaumette and Hutchinson (2006)); in that particular case, we would have three interaction matrices L_x1, L_x2, L_x3, one for each feature, and the complete interaction matrix is

L_x = \begin{bmatrix} L_{x1} \\ L_{x2} \\ L_{x3} \end{bmatrix}    (20)

Using three points, there are some configurations where L_x is singular and has four global minima (Michel and Rives (1993)). More precisely, there are four poses for the camera such that ṡ = 0; according to Fischler and Bolles (1981), these four poses are impossible to differentiate. With this in mind, it is usual to consider more points (Chaumette and Hutchinson (2006)).

On the other hand, only one pose achieves s = s* when the algorithm has four points. Moreover, instead of using the pseudo-inverse of L_x as in Lippiello et al. (2013), we can use the transpose of the interaction matrix indistinctly to solve for v_c in (18) (Chaumette (1998); Wu et al. (2012)). In order to reduce computational complexity, we use the transpose of the interaction matrix instead of its pseudoinverse in (8)

v_c = −λ L_s^T e    (21)

In this paper, four points are used. In addition, we suppose that the pattern will never be rotated, since any rotation in roll or pitch will produce a translation. In other words, since it is an underactuated system, it is not possible to have this kind of robot static and tilted at the same time; consequently, the rotational velocities related to roll and pitch in v_c are 0.
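As an illustration of (9), (10) and (19)–(21), the sketch below stacks one interaction matrix per image point and computes the camera velocity using the transpose of the interaction matrix, with the roll and pitch rates forced to zero as discussed above. The intrinsic parameters, depth values and gain passed to it are placeholders, not the calibration of the AR.Drone camera.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix L_x of a single normalized image point, Eq. (19)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(pixels, desired_pixels, depths, cu, cv, f, alpha=1.0, lam=0.02):
    """Camera velocity v_c = -lambda * L^T * e, Eq. (21), for a set of point features."""
    L_blocks, error = [], []
    for (u, v), (ud, vd), Z in zip(pixels, desired_pixels, depths):
        # normalized coordinates, Eqs. (9)-(10)
        x, y = (u - cu) / (f * alpha), (v - cv) / f
        xd, yd = (ud - cu) / (f * alpha), (vd - cv) / f
        L_blocks.append(interaction_matrix(x, y, Z))
        error.extend([x - xd, y - yd])              # feature error e = s - s*, Eq. (6)
    L = np.vstack(L_blocks)                         # stacked interaction matrix, Eq. (20)
    vc = -lam * L.T @ np.array(error)               # transpose instead of pseudo-inverse
    vc[3:5] = 0.0                                   # roll and pitch rates are not commanded
    return vc                                       # [vx, vy, vz, wx, wy, wz]
```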
4. SIMULATION

Fig. 5. Simulation setup. At the left, the youBot is holding the pattern and the AR.Drone is flying, trying to center the pattern in the image.

Simulations were carried out in the Gazebo robotic simulator and ROS. The features were extracted from a QR code due to its robustness to illumination changes and rotations. Fig. 5 shows the configuration of the experiment and Fig. 6 shows the camera image from the quadrotor. The four features used in the experiment are the corners of the QR code.
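The paper does not detail how the corners are detected, so the following snippet is only a plausible stand-in that uses OpenCV's QR detector to obtain the four pixel coordinates fed to the IBVS sketch above.

```python
import cv2

def qr_corners(image_bgr):
    """Return the four QR-code corner points as (u, v) pixel tuples, or None if not found."""
    detector = cv2.QRCodeDetector()
    found, points = detector.detect(image_bgr)   # points holds the four corners
    if not found or points is None:
        return None
    return [tuple(p) for p in points.reshape(-1, 2)]
```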
Fig. 7 and 8 display the error in image coordinates. For the sake of visual representation, both directions appear separated.


Fig. 6. Camera view from the AR.Drone Parrot during flight simulation.

Fig. 7. Error in image coordinates in x direction. Time is in seconds ×10⁻². Abrupt changes at t = 3 s and t = 4.6 s represent the QR code displacement.

Fig. 8. Error in image coordinates in y direction. Time is in seconds ×10⁻².

Fig. 9. Cartesian velocities. Time is in seconds ×10⁻². Abrupt changes at t = 3 s and t = 4.6 s represent the QR code displacement.

Fig. 10. Actual experiment configuration during quadrotor flight.

Fig. 11. Actual camera view during quadrotor flight.
At the beginning of the experiment, the take-off can be noted in Fig. 8; once the quadrotor is flying and has reached the desired position, the QR code is moved in the x direction, simulating movement of the UGV; this can be seen at t = 3 s and t = 4.6 s. As shown in Fig. 7 and 8, the error is below 10 pixels and, considering 640 × 480 images, the quadrotor reaches the desired position.

Fig. 9 shows the cartesian velocities computed by the IBVS with λ = 0.02. It can be seen at the beginning of the experiment that Vz decreases because of the error in the y axis at the take-off. Then, at t = 3 s and t = 4.6 s, the input tries to minimize the error in x due to the movement of the QR code.
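For reference, the gain above corresponds to calling the control-law sketch from Section 3 with lam=0.02; the pixel values and intrinsics below are invented placeholders, only meant to show the call.

```python
# Hypothetical call with the simulation gain (lambda = 0.02) and a 640x480 image
vc = ibvs_velocity(pixels=[(300, 200), (340, 200), (340, 240), (300, 240)],
                   desired_pixels=[(310, 210), (330, 210), (330, 230), (310, 230)],
                   depths=[1.0] * 4, cu=320, cv=240, f=560, lam=0.02)
print(vc)  # [vx, vy, vz, 0, 0, wz]
```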
5. EXPERIMENTAL RESULTS

For this experiment, the QR code is placed on the platform of the youBot base. The quadrotor is controlled by an off-board computer. The desired (x, y) feature coordinates are the first coordinates seen by the camera, i.e. the quadrotor relative position with respect to the pattern should always be the same as when it sees the pattern for the first time. Fig. 10 shows the experiment during the flight of the robot.
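A minimal sketch of such an off-board control loop is given below: it subscribes to the quadrotor camera stream, extracts the QR corners, computes the IBVS velocity and publishes it as a velocity command, reusing the qr_corners and ibvs_velocity sketches above. The topic names, camera intrinsics and depth value are assumptions for a typical ardrone_autonomy/ROS setup, not details reported in the paper, and frame conventions between camera and body axes are omitted for brevity.

```python
import rospy
from cv_bridge import CvBridge
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image
# qr_corners and ibvs_velocity are the helper sketches defined earlier

class IBVSTracker(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.desired = None                                   # s*: first observed corners
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/ardrone/bottom/image_raw', Image, self.on_image)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
        corners = qr_corners(frame)                           # four (u, v) features
        if corners is None:
            return                                            # pattern occluded: skip this frame
        if self.desired is None:
            self.desired = corners                            # store the first view as reference
        vc = ibvs_velocity(corners, self.desired, depths=[1.0] * 4,
                           cu=320, cv=180, f=560, lam=0.02)   # placeholder intrinsics (640x360)
        cmd = Twist()
        cmd.linear.x, cmd.linear.y, cmd.linear.z = vc[0], vc[1], vc[2]
        cmd.angular.z = vc[5]                                 # yaw rate; roll/pitch rates are zero
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('ibvs_tracker')
    IBVSTracker()
    rospy.spin()
```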
In the first experiment, the youBot is always at the same position. The objective is that the quadrotor remains at the same position. Fig. 12 and 13 show the feature errors when the youBot and the quadrotor are supposed to be at the same position. For the sake of visual representation, we split the error in x and y directions. Fig. 14 shows the control input. The image size is 640 × 360; it can be seen that the quadrotor stays near the first position.

In the second experiment, the QR code on the youBot is occluded for 5 seconds while the ground vehicle slightly moves in the x direction. Fig. 15 and 16 show the feature errors in image coordinates. It can be seen how the reference abruptly changes at 2.5 s and the quadrotor is able to follow it.


Fig. 12. Error in x in image coordinates for the first experiment, when quadrotor and youBot are supposed to remain at the same position.

Fig. 13. Error in y in image coordinates for the first experiment, when quadrotor and youBot are supposed to remain at the same position.

Fig. 14. Quadrotor control input when the youBot stays at the same position.

Fig. 15. Error in x in image coordinates for the second experiment, when the QR code is occluded while the youBot slightly moves in the x direction, as can be seen at t = 2.5 s.

Fig. 16. Error in y in image coordinates for the second experiment, when the QR code is occluded while the youBot slightly moves in the x direction.

Fig. 17. Quadrotor control input when the desired position changes.

Finally, Fig. 17 shows the cartesian velocities computed by the IBVS; for the sake of visual representation, only the x and y velocities are presented.


6. CONCLUSIONS

In this paper, a vision-based tracking for a multi-robot system integrated by a UGV and a UAV was presented. It has been shown that the quadrotor is able to follow a visual reference given by the UGV. The problem was solved using the transpose of the interaction matrix instead of the pseudoinverse. The results show that the transpose approximation is able to compute the velocity vector quickly and accurately. The advantages of the transpose interaction matrix over the pseudoinverse interaction matrix are the reduction of the computational complexity and the avoidance of singularities due to the computation of the pseudoinverse.

ACKNOWLEDGEMENTS

The authors thank the support of CONACYT Mexico, through Projects CB256769 and CB258068 (Project supported by Fondo de Investigación Sectorial para la Educación FOINS 241246).
REFERENCES

Altuğ, E., Ostrowski, J.P., and Taylor, C.J. (2005). Control of a quadrotor helicopter using dual camera visual feedback. The International Journal of Robotics Research, 24(5), 329–341.

Angeletti, G., Valente, J.P., Iocchi, L., and Nardi, D. (2008). Autonomous indoor hovering with a quadrotor. In Workshop Proc. SIMPAR, 472–481.

Bouabdallah, S., Murrieri, P., and Siegwart, R. (2004). Design and control of an indoor micro quadrotor. In Robotics and Automation, 2004. Proceedings. ICRA'04. 2004 IEEE International Conference on, volume 5, 4393–4398. IEEE.

Chaumette, F. (1998). Potential problems of stability and convergence in image-based and position-based visual servoing. The confluence of vision and control, 66–78.

Chaumette, F. and Hutchinson, S. (2006). Visual servo control. I. Basic approaches. IEEE Robotics & Automation Magazine, 13(4), 82–90.

Fischler, M.A. and Bolles, R.C. (1981). Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381–395.

Herisse, B., Hamel, T., Mahony, R., and Russotto, F.X. (2009). A nonlinear terrain-following controller for a VTOL unmanned aerial vehicle using translational optical flow. In Robotics and Automation, 2009. ICRA'09. IEEE International Conference on, 3251–3257. IEEE.

Herisse, B., Russotto, F.X., Hamel, T., and Mahony, R. (2008). Hovering flight and vertical landing control of a VTOL unmanned aerial vehicle using optical flow. In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, 801–806. IEEE.

Li, W., Zhang, T., and Kühnlenz, K. (2011). A vision-guided autonomous quadrotor in an air-ground multi-robot system. In Robotics and Automation (ICRA), 2011 IEEE International Conference on, 2980–2985. IEEE.

Lippiello, V., Mebarki, R., and Ruggiero, F. (2013). Visual coordinated landing of a UAV on a mobile robot manipulator. In Safety, Security, and Rescue Robotics (SSRR), 2013 IEEE International Symposium on, 1–7. IEEE.

Michel, H. and Rives, P. (1993). Singularities in the determination of the situation of a robot effector from the perspective view of 3 points. Ph.D. thesis, INRIA.

Romero, H., Salazar, S., and Lozano, R. (2012). Visual servoing applied to real-time stabilization of a multi-rotor UAV. Robotica, 30(7), 1203–1212.

Schmidt, M.D. (2011). Simulation and control of a quadrotor unmanned aerial vehicle.

Weiss, L., Sanderson, A., and Neuman, C. (1987). Dynamic sensor-based control of robots with visual feedback. IEEE Journal on Robotics and Automation, 3(5), 404–417.

Wu, Z., Sun, Y., Jin, B., and Feng, L. (2012). An approach to identify behavior parameter in image-based visual servo control. Information Technology Journal, 11(2), 217.

Zhang, T., Li, W., Achtelik, M., Kühnlenz, K., and Buss, M. (2009). Multi-sensory motion estimation and control of a mini-quadrotor in an air-ground multi-robot system. In Robotics and Biomimetics (ROBIO), 2009 IEEE International Conference on, 45–50. IEEE.