1. INTRODUCTION

In recent years, Unmanned Aerial Vehicles (UAV) have gained special popularity, mainly because of their speed and ability to move in a 3D space. This kind of robot is equipped with light onboard sensors, such as Inertial Measurement Units (IMU) and cameras, in order to know the robot attitude and information of the environment, respectively. However, aerial robots have a limited payload capacity and, in consequence, they are limited in battery consumption and in the amount and weight of the onboard sensors. In contrast, Unmanned Grounded Vehicles (UGV) can be equipped with heavier and more powerful sensors, but their movement is limited to the ground and to slower speeds than UAVs. With the collaboration between these robots we can exploit the advantages of both and minimize their drawbacks for the purpose of developing applications such as mapping and surveillance.

[Fig. 1 diagram: onboard image sent to an off-board station running feature extraction and Image-Based Visual Servo control; velocity commands (v, ω) returned to the vehicle through the ROS driver.]

Fig. 1. Experiment diagram. Image is sent wirelessly to an off-board station where feature extraction and IBVS control is carried out.
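The feature-extraction and IBVS blocks of Fig. 1 follow the classical image-based visual servoing law v = −λ L⁺ (s − s*). The sketch below shows this standard formulation for four point features; the depth estimate Z, the gain value and all numeric feature coordinates are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image point.

    Maps the camera twist (vx, vy, vz, wx, wy, wz) to the image-plane
    velocity of the point; Z is the estimated depth of the point.
    """
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, Z, lam=0.02):
    """Classical IBVS law: v = -lambda * L^+ * e, with e = s - s*."""
    error = (features - desired).reshape(-1)   # stacked feature error (8,)
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in features])
    return -lam * np.linalg.pinv(L) @ error    # 6-vector camera twist

# Four tracked points slightly offset from the desired configuration
# (coordinates are illustrative placeholders).
s = np.array([[0.11, 0.10], [-0.09, 0.10], [-0.10, -0.10], [0.10, -0.11]])
s_star = np.array([[0.10, 0.10], [-0.10, 0.10], [-0.10, -0.10], [0.10, -0.10]])
v = ibvs_velocity(s, s_star, Z=1.0)
print(v.shape)  # (6,)
```

With zero feature error the commanded twist is zero, so the vehicle holds its pose relative to the target; the gain λ only scales how aggressively the remaining pixel error is driven to zero.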
Multi-robot systems can be used in exploration tasks in unknown environments. In Li et al. (2011) and Zhang et al. (2009), a multi-robot system integrated by an aerial and a grounded vehicle is presented; they implement a visual based tracking system with optical flow. Also, in Herisse et al. (2008), Herisse et al. (2009) and Romero et al. (2012), an optical flow approach with a monocular camera is used to control the position of a UAV. In Altuğ et al. (2005), the control of a quadrotor is implemented using dual camera feedback for robot pose estimation. In Angeletti et al. (2008), an on-board laser, an off-board monocular camera and artificial markers are employed for the hovering of a quadrotor.
In this paper, a vision based tracking for a multi-robot system is implemented. The quadrotor used in this work is an AR.Drone, which is equipped with a monocular camera and an IMU. The mobile robot is the Kuka youBot. The vision system extracts four features, and these features are used as inputs of the Image-Based Visual Servo (IBVS) control approach. This algorithm is faster than optical flow approaches, since detection and description of features is computationally expensive; in our case we compute the quadrotor relative position with the pixel coordinates of the features. The algorithm was implemented in the Robot Operating System (ROS). A brief overview of the proposed approach is illustrated in Fig. 1.

The remainder of this paper is organized as follows: Section 2 describes the quadrotor dynamic model. The IBVS algorithm is presented in Section 3. Simulations and experimental results are shown in Sections 4 and 5, respectively. Finally, conclusions are given in Section 6.

2. QUADROTOR DYNAMIC MODEL

In this work, we use an AR.Drone quadrotor, which is an aerial vehicle with two pairs of propellers (1, 3) and (2, 4) turning in opposite directions, as shown in Fig. 2. Increasing or decreasing the speed of the four rotors results in the vertical motion of the vehicle (Fig. 2.a). Changing the speed difference of the pairs (1, 3) and (2, 4) generates a roll rotation and translational movement in the y axis direction (Fig. 2.b). On the other hand, changing the speed difference of the pairs (1, 2) and (3, 4) will produce a pitch rotation and translational movement in the x axis direction (Fig. 2.c). Yaw rotation is generated when increasing or decreasing one pair (1, 4) or (2, 3) and it is
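The rotor-speed combinations described above can be collected into a single mixing matrix. The sketch below follows exactly the pairings in the text — collective on all four rotors, roll from (1, 3) vs. (2, 4), pitch from (1, 2) vs. (3, 4), yaw from (1, 4) vs. (2, 3) — with unit coefficients as an illustrative assumption, since the paper gives no numeric thrust or torque constants:

```python
import numpy as np

# Rows: rotors 1..4; columns: collective, roll, pitch, yaw commands.
# Signs follow the pairings in the text: roll acts on (1, 3) vs (2, 4),
# pitch on (1, 2) vs (3, 4), yaw on (1, 4) vs (2, 3). Unit magnitudes
# are illustrative; the paper does not give numeric coefficients.
MIX = np.array([
    [1.0,  1.0,  1.0,  1.0],   # rotor 1
    [1.0, -1.0,  1.0, -1.0],   # rotor 2
    [1.0,  1.0, -1.0, -1.0],   # rotor 3
    [1.0, -1.0, -1.0,  1.0],   # rotor 4
])

def rotor_speeds(collective, roll, pitch, yaw):
    """Map (collective, roll, pitch, yaw) commands to the four rotor speeds."""
    return MIX @ np.array([collective, roll, pitch, yaw])

# Pure climb: all four rotors speed up equally.
print(rotor_speeds(1.0, 0.0, 0.0, 0.0))   # [1. 1. 1. 1.]
# Pure roll command: pair (1, 3) speeds up, pair (2, 4) slows down.
print(rotor_speeds(0.0, 0.2, 0.0, 0.0))   # [ 0.2 -0.2  0.2 -0.2]
```

Because the four columns of this matrix are orthogonal, each command channel changes only its own motion axis, which is why the four maneuvers in Fig. 2 can be commanded independently.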
2405-8963 © 2018, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
Peer review under responsibility of International Federation of Automatic Control.
Proceedings, 2nd IFAC Conference on Modelling, Identification and Control of Nonlinear Systems, Guadalajara, Mexico, June 20-22, 2018
10.1016/j.ifacol.2018.07.302
2018 IFAC MICNON, Guadalajara, Mexico, June 20-22, 2018
Javier Gomez-Avila et al. / IFAC PapersOnLine 51-13 (2018) 344–349
[Fig. 9 plot: cartesian velocities Vx, Vy, Vz, Wz (m/s) vs. time (s ×10^−2) during the simulation.]
[Fig. 8 plot: features error y1–y4, error in y (pixels) vs. time (s ×10^−2).]

Fig. 8. Error in image coordinates in y direction. Time is in seconds ×10^−2.

Fig. 11. Actual camera view during quadrotor flight.
x direction simulating movement of the UGV; this can be seen at t = 3s and t = 4.6s. As shown in Fig. 7 and 8, the error is below 10 pixels and, considering 640 × 480 images, the quadrotor reaches the desired position.

Fig. 9 shows the cartesian velocities computed by IBVS with λ = 0.02. It can be seen at the beginning of the experiment that Vz decreases because of the error in the y axis at take off. Then, at t = 3s and t = 4.6s, the input tries to minimize the error in x due to the movement of the QR code.

5. EXPERIMENTAL RESULTS

For this experiment, the QR code is placed on the platform of the youBot base. The quadrotor is controlled by an off-board computer. The desired (x, y) feature coordinates are the first coordinates seen by the camera, i.e. the quadrotor relative position to the pattern should always be the same as the position when it sees the pattern for the first time. Fig. 10 shows the experiment during the flight of the robot.

In the first experiment, the youBot is always at the same position. The objective is that the quadrotor remains at the same position. Figs. 12 and 13 show the features error when the youBot and the quadrotor are supposed to be at the same position. For the sake of visual representation, we split the error in x and y direction. Fig. 14 shows the control input. The image size is 640 × 360; it can be seen that the quadrotor stays near the first position.

In the second experiment, the QR code on the youBot is occluded for 5 seconds while the grounded vehicle slightly moves in x direction. Figs. 15 and 16 show the features error in image coordinates. It can be seen how the reference abruptly changes at 2.5s and the quadrotor is able to follow it.
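The reference-capture behaviour described above — the desired feature coordinates are frozen at the first observation, and tracking quality is then judged by the remaining pixel error — can be sketched as follows (class and variable names are illustrative, not from the paper):

```python
import numpy as np

class FeatureTracker:
    """Fixes the desired features to the first observation, then reports
    the pixel error of later observations (names are illustrative)."""

    def __init__(self):
        self.reference = None  # desired (x, y) pixel coordinates, 4 points

    def error(self, features):
        features = np.asarray(features, dtype=float)
        if self.reference is None:
            self.reference = features.copy()   # first view defines the goal
        return features - self.reference       # per-feature pixel error

tracker = FeatureTracker()
first = [[300, 220], [340, 220], [340, 260], [300, 260]]
tracker.error(first)                           # reference is captured here
e = tracker.error([[306, 222], [346, 221], [345, 264], [304, 263]])
# Converged when every feature error stays below ~10 px, as reported above.
converged = np.all(np.abs(e) < 10)
print(converged)  # True
```

This error signal is exactly what the IBVS law consumes, so "holding the first-seen pose" and "driving the feature error to zero" are the same objective.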
[Fig. 12 plot: features error x1–x4, error in x (pixels) vs. time (s ×10^−2).]
[Fig. 15 plot: features error x1–x4, error in x (pixels) vs. time (s ×10^−2).]

Fig. 12. Error in x in image coordinates for the first experiment when quadrotor and youBot are supposed to remain at the same position.

Fig. 15. Error in x in image coordinates for the second experiment when QR code is occluded while youBot slightly moves in x direction, as can be seen at t = 2.5s.
[Fig. 13 plot: features error y1–y4, error in Y (pixels) vs. time (s ×10^−2).]
[Fig. 16 plot: features error y1–y4, error in Y (pixels) vs. time (s ×10^−2).]

Fig. 13. Error in y in image coordinates for the first experiment when quadrotor and youBot are supposed to remain at the same position.

Fig. 16. Error in y in image coordinates for the second experiment when QR code is occluded while youBot slightly moves in x direction.

[Fig. 14 and Fig. 17 plots: cartesian velocities Vx, Vy (m/s) vs. time (s ×10^−2).]

Fig. 14. Quadrotor control input when youBot stays at the same position.

Finally, Fig. 17 shows the cartesian velocities computed by the IBVS; for the sake of visual representation, only x and y velocities are presented.