
2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), November 3-7, 2013, Tokyo, Japan

A Robust Visual Servo Control Scheme with Prescribed Performance for an Autonomous Underwater Vehicle

Charalampos P. Bechlioulis1, George C. Karras1, Sharad Nagappa2, Narcís Palomeras2, Kostas J. Kyriakopoulos1, Marc Carreras2

This work was supported by the EU funded project "PANDORA: Persistent Autonomy through learNing, aDaptation, Observation and ReplAnning", FP7-288273, 2012-2014.
1 School of Mechanical Engineering, National Technical University of Athens, Athens 15780, Greece {chmpechl, karrasg, kkyria}@mail.ntua.gr.
2 University of Girona, Edifici Politecnica IV, Campus Montilivi, Girona 17071, Spain {snagappa, npalomer, marcc}@eia.udg.edu.

Abstract— This paper describes the design and implementation of a visual servo control scheme for an Autonomous Underwater Vehicle (AUV). The purpose of the control scheme is to navigate and stabilize the vehicle towards a visual target. The controller does not utilize the vehicle's dynamic model parameters and guarantees prescribed transient and steady state performance despite the presence of external disturbances representing ocean currents and waves. The proposed control scheme is of low complexity and can easily be integrated into an embedded control platform of an AUV with limited power and computational resources. Moreover, through the appropriate selection of certain performance functions, the proposed scheme guarantees that the target lies inside the onboard camera's field of view at all times. The resulting control scheme has analytically guaranteed stability and convergence properties, while its applicability and performance are experimentally verified using the Girona500 AUV.

I. INTRODUCTION

Underwater vehicles usually operate under difficult circumstances and perform complex tasks such as ship hull inspection, surveillance of underwater facilities (e.g., oil platforms) and handling of underwater equipment (e.g., control panels, valves). These tasks require motion control schemes with enhanced robustness and a sensor suite that can provide an accurate and detailed description of the underwater environment. When an underwater vehicle operates autonomously, the use of onboard cameras is of utmost importance. Monocular or stereo vision systems can provide information regarding target tracking or pose estimation that can be incorporated into the motion or force control schemes of the vehicle, depending on the task and the mission properties.

Concerning visual servo control in underwater robotics, an application of image-based visual servoing was realized in [1]. In that case the vehicle was fully actuated, implying that the camera had 6 degrees of freedom (DOF). Interesting stereo vision approaches can be found in [2], [3]. The problem of keeping the target inside the field of view has been examined in the past for robotic manipulators [4], cartesian robots [5], differential drive mobile robots [6], [7] and underwater vehicles [8]. The proposed methodologies were based on complex path planning techniques, while the computed points were fed to kinematic controllers designed for point-to-point motions or stabilization at a fixed point in the workspace. Also, a model-based switching visual servo control scheme for the semi-autonomous operation of an under-actuated Remotely Operated Vehicle (ROV) has been presented in [9]. In all the above schemes, the controller was either model-based (i.e., an accurate dynamic model of the vehicle is required) or strictly kinematic, without guaranteed performance in the presence of external disturbances such as ocean currents and waves.

In this paper, a novel position-based visual servo control scheme for an Autonomous Underwater Vehicle is presented. The controller is responsible for navigating and stabilizing the AUV in front of a panel consisting of various valves and handles. It does not utilize the vehicle's dynamic model parameters and guarantees prescribed transient and steady state performance despite the presence of external disturbances. Moreover, through the appropriate selection of certain performance functions, the proposed scheme also guarantees the satisfaction of visual constraints, which in this case are defined as keeping the target inside the camera's field of view. A high-level computer vision algorithm tracks the panel and calculates its pose vector with respect to the vehicle. The pose vector is fused with the rest of the navigation measurements using an extended Kalman filter (EKF). The resulting estimated state vector is used as state feedback to the proposed motion control scheme. The applicability and performance of the overall system are demonstrated using the Girona500 AUV in a test pool, where a valve panel is appropriately mounted.

II. PRELIMINARIES

A. AUV Kinematics and Dynamics

The visual servo control scheme proposed in this work is applied to the Girona500 AUV. A simplified 3D dynamic model (in surge, sway, heave and yaw) of the vehicle is presented in this subsection, in accordance with standard underwater vehicle modeling practice [10]. The roll and pitch degrees of freedom are neglected for clarity of presentation and owing to page limitations; it will be mentioned in the sequel, however, that both degrees of freedom can be considered in the analysis without compromising the achieved results. In this respect, consider the vehicle modeled as a rigid body subject to external forces and torques. Let {I} be an inertial coordinate frame and {B} a body-fixed coordinate frame, whose origin OB is located at the center of mass of the vehicle.
Furthermore, let (x, y, z) be the position of OB in {I} and ψ denote the yaw angle. Let (u, v, w) be the longitudinal (surge), transverse (sway) and vertical (heave) velocities of OB with respect to {I}, expressed in {B}, and r be the vehicle's angular speed (yaw) around the vertical axis. Thus, the kinematic equations of motion for the considered vehicle can be written as:

ẋ = u cos ψ − v sin ψ + δx(t)   (1)
ẏ = u sin ψ + v cos ψ + δy(t)   (2)
ż = w + δz(t)   (3)
ψ̇ = r + δψ(t)   (4)

where δx(t), δy(t), δz(t), δψ(t) denote bounded ocean currents. Neglecting the motion in roll and pitch, the simplified equations for surge, sway, heave and yaw can be written as:

mu u̇ = mv v r + Xu u + X|u|u |u| u + X + δu(t)   (5)
mv v̇ = −mu u r + Yv v + Y|v|v |v| v + Y + δv(t)   (6)
mw ẇ = Zw w + Z|w|w |w| w + (W − B) + Z + δw(t)   (7)
mr ṙ = (mu − mv) u v + Nr r + N|r|r |r| r + N + δr(t)   (8)

where mu, mv, mw, mr denote the vehicle's mass, moment of inertia, added mass and moment of inertia terms, Xu, X|u|u, Yv, Y|v|v, Zw, Z|w|w, Nr, N|r|r are negative hydrodynamic damping coefficients of first and second order, W and B are the vehicle weight and buoyancy respectively, δu(t), δv(t), δw(t), δr(t) denote bounded exogenous forces and torques acting on surge, sway, heave and yaw owing to ocean waves, and X, Y, Z, N denote the control input forces and torque that are applied by the thrusters in order to produce the desired motion of the body-fixed frame.
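For illustration, the structure of the model (1)-(8) can be captured in a short simulation fragment. The following Python sketch uses placeholder coefficients (the identified Girona500 parameters are not reported here), so it only mirrors the form of the equations rather than the actual vehicle:

    import numpy as np

    # Placeholder coefficients for illustration only; not the identified Girona500 values.
    m_u, m_v, m_w, m_r = 60.0, 80.0, 70.0, 10.0      # mass/inertia plus added-mass terms
    X_u, X_uu, Y_v, Y_vv = -5.0, -20.0, -8.0, -30.0  # first/second order damping (negative)
    Z_w, Z_ww, N_r, N_rr = -6.0, -25.0, -1.0, -5.0
    W_minus_B = 0.0                                  # weight minus buoyancy

    def f(state, tau, delta):
        """Right-hand side of the kinematics (1)-(4) and dynamics (5)-(8).
        state = [x, y, z, psi, u, v, w, r], tau = [X, Y, Z, N], delta = disturbances."""
        x, y, z, psi, u, v, w, r = state
        X, Y, Z, N = tau
        dx, dy, dz, dpsi, du, dv, dw, dr = delta
        pos_dot = [u * np.cos(psi) - v * np.sin(psi) + dx,      # eq. (1)
                   u * np.sin(psi) + v * np.cos(psi) + dy,      # eq. (2)
                   w + dz,                                      # eq. (3)
                   r + dpsi]                                    # eq. (4)
        vel_dot = [(m_v * v * r + X_u * u + X_uu * abs(u) * u + X + du) / m_u,          # eq. (5)
                   (-m_u * u * r + Y_v * v + Y_vv * abs(v) * v + Y + dv) / m_v,         # eq. (6)
                   (Z_w * w + Z_ww * abs(w) * w + W_minus_B + Z + dw) / m_w,            # eq. (7)
                   ((m_u - m_v) * u * v + N_r * r + N_rr * abs(r) * r + N + dr) / m_r]  # eq. (8)
        return np.array(pos_dot + vel_dot)

    def step(state, tau, delta, dt=0.05):
        """One forward-Euler integration step (for illustration only)."""
        return np.asarray(state, dtype=float) + dt * f(state, tau, delta)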
B. Navigation Module

The navigation module is responsible for estimating the vehicle position and velocity vector. The linear positions ([x y z]) and velocities ([u v w]) are estimated using a Vision EKF SLAM algorithm, while the angular positions ([φ θ ψ]) and velocities ([p q r]) are directly measured using an inertial measurement unit (IMU). The Vision EKF SLAM algorithm simultaneously provides estimation updates for the visual landmarks and for the linear positions and velocities of the vehicle.

The augmented system model (vehicle and landmarks) consists of a constant velocity kinematic model for the vehicle and a constant model for the landmarks (i.e., the landmarks are assumed static). Regarding the measurement models, position and velocity sensors are available. A global positioning system (GPS) measures the vehicle position in the (x, y) plane when the vehicle is not submerged, and a pressure sensor transforms pressure values into depth measurements (z). The velocity updates are provided by a doppler velocity log (DVL). This sensor is able to measure linear velocities with respect to the sea bottom or the water around the vehicle. The pose and velocity updates are direct measurements of the state vector.

If only these two updates are available, the navigation module is a dead reckoning algorithm that drifts over time. However, if landmarks are detected in the environment, the navigation module is able to keep its position covariance bounded. A visual detection algorithm, detailed in Section III-A, gives information about the relative position of a landmark with respect to the vehicle. This information updates not only the detected landmark position but also the vehicle state. The visual detection algorithm uses an a priori known template to identify and compute the relative position of these landmarks. The mathematical description of the navigation module and the Visual EKF SLAM algorithm is standard and thus omitted.
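Since the filter equations are omitted, the fragment below sketches only the generic structure implied above, assuming a plain constant-velocity EKF over the linear positions and velocities; it is not the authors' implementation, and the landmark states carried by the actual Visual EKF SLAM are left out for brevity:

    import numpy as np

    class SimpleNavEKF:
        """Illustrative EKF over [x, y, z, u, v, w]; landmark states omitted for brevity."""
        def __init__(self, x0, P0, Q, dt=0.1):
            self.x, self.P, self.Q, self.dt = x0, P0, Q, dt

        def predict(self):
            # Constant-velocity kinematic model for the vehicle.
            F = np.eye(6)
            F[0:3, 3:6] = self.dt * np.eye(3)
            self.x = F @ self.x
            self.P = F @ self.P @ F.T + self.Q

        def update(self, z, H, R):
            # Generic (linearized) correction; GPS/depth, DVL or a detected landmark
            # enters through the appropriate observation matrix H.
            y = z - H @ self.x
            S = H @ self.P @ H.T + R
            K = self.P @ H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(6) - K @ H) @ self.P

For instance, a pressure (depth) update would use H = [[0, 0, 1, 0, 0, 0]], whereas a DVL update observes the last three components of the state.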
C. Prescribed Performance

As will become clear in Subsection III-B, the control design is connected to the prescribed performance notion that was originally employed to design neuro-adaptive controllers for various classes of nonlinear systems [11]–[13], capable of guaranteeing output tracking with prescribed performance. In this work, by prescribed performance it is meant that the tracking error converges to a predefined arbitrarily small residual set with convergence rate no less than a certain predefined value. For completeness and compactness of presentation, this subsection summarizes preliminary knowledge on prescribed performance. Thus, consider a generic scalar error e(t). Prescribed performance is achieved if e(t) evolves strictly within a predefined region that is bounded by decaying functions of time. The mathematical expression of prescribed performance is given, ∀t ≥ 0, by the following inequalities:

−ρ(t) < e(t) < ρ(t)   (9)

where ρ(t) is a smooth, bounded, strictly positive and decreasing function of time satisfying lim_{t→∞} ρ(t) > 0, called a performance function [11]. Hence, for an exponentially decreasing performance function ρ(t) = (ρ0 − ρ∞) e^{−lt} + ρ∞ with positive constants ρ0, ρ∞, l, the constant ρ0 = ρ(0) is selected such that ρ0 > |e(0)|, the constant ρ∞ = lim_{t→∞} ρ(t) represents the maximum allowable size of the tracking error e(t) at steady state, and finally the decreasing rate of ρ(t), which is affected by the constant l in this case, introduces a lower bound on the required speed of convergence of e(t).
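For illustration, the exponentially decreasing performance function and the envelope condition (9) amount to only a few lines of code; rho0, rho_inf and l below are design parameters chosen by the user:

    import numpy as np

    def rho(t, rho0, rho_inf, l):
        """Exponential performance function rho(t) = (rho0 - rho_inf) * exp(-l * t) + rho_inf."""
        return (rho0 - rho_inf) * np.exp(-l * t) + rho_inf

    def within_envelope(e, t, rho0, rho_inf, l):
        """Prescribed performance condition (9): -rho(t) < e(t) < rho(t)."""
        return abs(e) < rho(t, rho0, rho_inf, l)

    # rho0 must exceed |e(0)|, rho_inf bounds the steady-state error and l
    # lower-bounds the convergence speed.
    print(rho(0.0, rho0=4.5, rho_inf=0.05, l=0.1))   # prints 4.5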
D. Dynamical Systems

Consider the initial value problem:

ξ̇ = h(t, ξ), ξ(0) = ξ0 ∈ Ωξ   (10)

with h : ℜ+ × Ωξ → ℜ^n, where Ωξ ⊂ ℜ^n is a non-empty open set.

Definition 1: [14] A solution ξ(t) of the initial value problem (10) is maximal if it has no proper right extension that is also a solution of (10).

Theorem 1: [14] Consider the initial value problem (10). Assume that h(t, ξ) is: a) locally Lipschitz on ξ for almost all t ∈ ℜ+, b) piecewise continuous on t for each fixed ξ ∈ Ωξ and c) locally integrable on t for each fixed ξ ∈ Ωξ.
Then, there exists a maximal solution ξ(t) of (10) on the time interval [0, τmax) with τmax > 0 such that ξ(t) ∈ Ωξ, ∀t ∈ [0, τmax).

Proposition 1: [14] Assume that the hypotheses of Theorem 1 hold. For a maximal solution ξ(t) on the time interval [0, τmax) with τmax < ∞ and for any compact set Ω′ξ ⊂ Ωξ, there exists a time instant t′ ∈ [0, τmax) such that ξ(t′) ∉ Ω′ξ.

III. METHODOLOGY

A. Valve Panel Tracking

Position control relies on the use of position measurements with respect to a static external reference. In this work, the vehicle is required to hover in front of an underwater panel for an intervention task; in this respect, the panel provides an external reference if we are able to accurately measure its distance from the vehicle. Detection of the underwater panel is performed using vision, by comparing the images from the camera against an a priori known template of the panel. By detecting and matching features between the camera image and the template, it is possible to detect the presence of the panel, as well as to accurately estimate its pose when a sufficient number of features are matched.

In this work, we choose the oriented FAST and rotated BRIEF (ORB) [15] feature extractor for its suitability to real-time applications. The ORB feature extractor relies on features from accelerated segment test (FAST) corner detection [16] to detect keypoints in the image. These are obvious features to detect on man-made structures and can be detected very quickly. Moreover, a binary descriptor vector of each keypoint is built based on binary robust independent elementary features (BRIEF) [17]. Differences between descriptors can be calculated rapidly, allowing real-time matching of keypoints at higher image frame-rates when compared to other commonly used feature extractors such as the scale invariant feature transform (SIFT) [18] and speeded-up robust features (SURF) [19].

A minimum number of keypoints must be matched between the template and the camera image to satisfy the panel detection requirement. A low number of matched keypoints indicates that the panel is not in the camera field of view. The correspondences between the template and the camera image can be used to compute the transformation (or homography) of the template image to the detected panel in the camera image. This allows us to compute the image coordinates of the corners of the panel in the camera image. Then, using the known geometry of the panel and the camera matrix, we are able to determine the pose of the panel in the camera coordinate system. Once the panel is detected as a landmark, it can be added to the list of landmarks as discussed previously, and subsequent updates to the panel pose are incorporated through the EKF-SLAM algorithm.
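A rough OpenCV-based sketch of such a detection step is given below; the matching threshold MIN_MATCHES, the camera matrix K, the distortion coefficients dist and the 3D coordinates of the panel corners are assumed inputs, and the fragment is only indicative of the pipeline described above, not the authors' exact implementation:

    import cv2
    import numpy as np

    MIN_MATCHES = 25   # assumed detection threshold, not taken from the paper

    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def detect_panel(template, frame, panel_corners_3d, K, dist):
        """Match ORB features between the panel template and the camera frame,
        estimate the template-to-frame homography and recover the panel pose."""
        kp_t, des_t = orb.detectAndCompute(template, None)
        kp_f, des_f = orb.detectAndCompute(frame, None)
        if des_t is None or des_f is None:
            return None
        matches = matcher.match(des_t, des_f)
        if len(matches) < MIN_MATCHES:        # too few matches: panel not in view
            return None
        src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return None
        h, w = template.shape[:2]
        tpl_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
        img_corners = cv2.perspectiveTransform(tpl_corners, H)   # panel corners in the image
        # Known panel geometry plus the camera matrix give the pose of the panel
        # in the camera coordinate system.
        ok, rvec, tvec = cv2.solvePnP(panel_corners_3d, img_corners, K, dist)
        return (rvec, tvec) if ok else None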
B. Visual Servo Control Scheme with Prescribed Performance

Let xd, yd, zd denote the position of the center of the target and nd = [nxd, nyd, nzd]^T denote the normal to the target plane pointing inwards, both obtained via the aforementioned visual tracking system. The objective of this paper is to design a controller, without incorporating any information regarding the vehicle model, such that the vehicle hovers in front of the target with bounded closed loop signals and prescribed transient and steady state performance despite the presence of exogenous disturbances representing ocean currents and waves. Moreover, assuming that the target initially lies inside the onboard camera's field of view, the controller should guarantee that the target never escapes it. Such a configuration constraint is important in visual servo control schemes where the feedback depends mainly on the visual contact between the vehicle and the target.

Let us now define the position errors:

ex = x − (xd − d̄ nxd), ey = y − (yd − d̄ nyd), ez = z − (zd − d̄ nzd),   (11)

with d̄ denoting the desired distance from the center of the target at which the vehicle should hover. Let us also define the desired yaw angle ψd(x, y) as follows:

ψd(x, y) = tan⁻¹((yd − y) / (xd − x))   (12)

and the orientation error:

eψ = ψ − ψd(x, y).   (13)

It can be easily verified that the desired orientation ψd(x, y), which depends on the current position of the vehicle on the horizontal plane (x, y), is the angle that the vector [xd − x, yd − y]^T (i.e., the vector from the vehicle to the target) forms with the x-axis of the inertial frame. We have designed the desired yaw angle as in (12) because we want to incorporate in our analysis the orientation constraint owing to the limited field of view of the onboard camera. Notice that forcing the orientation error (13) close to zero, while simultaneously satisfying |eψ(t)| < ψc, ∀t ≥ 0, where ψc denotes the camera's angle of view, and moving towards the target (i.e., forcing the position errors (11) to zero) achieves both control objectives, i.e., hovering in front of the target and keeping the target in the camera's field of view.

1) Control Scheme: Given the position of the center of the target xd, yd, zd, the normal to the target plane nd = [nxd, nyd, nzd]^T pointing inwards and the position/orientation errors (11)-(13):

I. Kinematic Controller

Select exponentially decaying position/orientation performance functions ρx(t), ρy(t), ρz(t), ρψ(t) that i) satisfy:

a. |ex(0)| < ρx(0), 0 < ρx(t), 0 < lim_{t→∞} ρx(t)
b. |ey(0)| < ρy(0), 0 < ρy(t), 0 < lim_{t→∞} ρy(t)
c. |ez(0)| < ρz(0), 0 < ρz(t), 0 < lim_{t→∞} ρz(t)
d. |eψ(0)| < ρψ(0) < ψc, 0 < ρψ(t), 0 < lim_{t→∞} ρψ(t)

and ii) incorporate the desired performance specifications regarding the steady state error and the speed of convergence;
and design the desired velocities:

[ud, vd, wd]^T = [cos ψ, −sin ψ, 0; sin ψ, cos ψ, 0; 0, 0, 1]^{−1} [−kx ex/ρx(t), −ky ey/ρy(t), −kz ez/ρz(t)]^T   (14)

rd = −kψ eψ/ρψ(t)   (15)

with positive control gains kx, ky, kz, kψ.

II. Dynamic Controller

Select exponentially decreasing velocity performance functions ρu(t), ρv(t), ρw(t), ρr(t) that satisfy:

a. |u(0) − ud(0)| < ρu(0), 0 < ρu(t), 0 < lim_{t→∞} ρu(t)
b. |v(0) − vd(0)| < ρv(0), 0 < ρv(t), 0 < lim_{t→∞} ρv(t)
c. |w(0) − wd(0)| < ρw(0), 0 < ρw(t), 0 < lim_{t→∞} ρw(t)
d. |r(0) − rd(0)| < ρr(0), 0 < ρr(t), 0 < lim_{t→∞} ρr(t)

and design the external forces in surge, sway and heave as well as the external torque around yaw as:

X = −ku (u − ud)/ρu(t), Y = −kv (v − vd)/ρv(t), Z = −kw (w − wd)/ρw(t), N = −kr (r − rd)/ρr(t)   (16)

with positive control gains ku, kv, kw, kr.

Remark 1: The proposed control scheme does not incorporate the vehicle's dynamic model parameters or knowledge of the external disturbances. Furthermore, no estimation (i.e., adaptive control) has been employed to acquire such knowledge. Moreover, compared with traditional backstepping-like approaches, the proposed methodology proves significantly less complex. Notice that no hard calculations are required to output the proposed control signals, thus making its implementation straightforward.
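To make the simplicity claimed in Remark 1 concrete, the following fragment evaluates the errors (11)-(13) and the control laws (14)-(16) once per control cycle; the dictionaries of gains and performance-function callables are illustrative choices rather than part of the original design, and arctan2 is used as the practical implementation of (12):

    import numpy as np

    def control(pose, vel, target, n_d, d_bar, rho_pos, rho_vel, gains, t):
        """Evaluate the kinematic law (14)-(15) and the dynamic law (16).
        pose = (x, y, z, psi), vel = (u, v, w, r), target = (xd, yd, zd),
        n_d = (nxd, nyd, nzd); rho_pos/rho_vel map axis names to callables rho_i(t)."""
        x, y, z, psi = pose
        u, v, w, r = vel
        xd, yd, zd = target
        nxd, nyd, nzd = n_d
        # Position and orientation errors (11)-(13).
        ex = x - (xd - d_bar * nxd)
        ey = y - (yd - d_bar * nyd)
        ez = z - (zd - d_bar * nzd)
        epsi = psi - np.arctan2(yd - y, xd - x)
        # Kinematic controller (14)-(15): normalized error feedback rotated into {B}.
        R = np.array([[np.cos(psi), -np.sin(psi), 0.0],
                      [np.sin(psi),  np.cos(psi), 0.0],
                      [0.0,          0.0,         1.0]])
        e_norm = np.array([gains['kx'] * ex / rho_pos['x'](t),
                           gains['ky'] * ey / rho_pos['y'](t),
                           gains['kz'] * ez / rho_pos['z'](t)])
        ud, vd, wd = np.linalg.inv(R) @ (-e_norm)
        rd = -gains['kpsi'] * epsi / rho_pos['psi'](t)
        # Dynamic controller (16): normalized velocity-error feedback.
        X = -gains['ku'] * (u - ud) / rho_vel['u'](t)
        Y = -gains['kv'] * (v - vd) / rho_vel['v'](t)
        Z = -gains['kw'] * (w - wd) / rho_vel['w'](t)
        N = -gains['kr'] * (r - rd) / rho_vel['r'](t)
        return np.array([X, Y, Z, N])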
2) Stability Analysis: The main results of this work are summarized in the following theorem, where it is proven that the aforementioned control scheme solves the tracking control problem presented at the beginning of this subsection.

Theorem 2: Consider: i) the underwater vehicle model (1)-(8), ii) the target described by pd = [xd, yd, zd]^T and nd = [nxd, nyd, nzd]^T and the desired distance d̄ from it at which hovering is required, iii) any initial configuration with the target lying inside the onboard camera's field of view, described by ψc, and iv) the position/orientation errors defined in (11)-(13). There exist positive control gains kx, ky, kz, kψ, ku, kv, kw, kr such that the proposed control scheme (14)-(16) guarantees that the vehicle approaches the hovering point pd − d̄ nd with prescribed transient and steady state performance, keeping simultaneously the target in the camera's field of view, that is |eψ(t)| < ψc, ∀t ≥ 0.

Proof: Let us first define the normalized errors:

ξx = ex/ρx(t), ξy = ey/ρy(t), ξz = ez/ρz(t), ξψ = eψ/ρψ(t)   (17)
ξu = (u − ud)/ρu(t), ξv = (v − vd)/ρv(t), ξw = (w − wd)/ρw(t), ξr = (r − rd)/ρr(t)   (18)

and the overall closed loop system state vector as:

ξ = [ξx, ξy, ξz, ξψ, ξu, ξv, ξw, ξr]^T.

Differentiating the normalized errors with respect to time and substituting (1)-(8) as well as (14)-(16), we obtain, in compact form, the dynamical system of the overall state vector:

ξ̇ = h(t, ξ)   (19)

where the function h(t, ξ) includes all terms found at the right hand side after the differentiation of ξ. Let us also define the open set:

Ωξ = (−1, 1) × ··· × (−1, 1)   (8 times).

In the sequel, we proceed in two phases. First, the existence of a maximal solution ξ(t) of (19) over the set Ωξ for a time interval [0, τmax) (i.e., ξ(t) ∈ Ωξ, ∀t ∈ [0, τmax)) is ensured. Then, we prove that the proposed control scheme guarantees, for all t ∈ [0, τmax): a) the boundedness of all closed loop signals, as well as that b) ξ(t) remains strictly within a compact subset of Ωξ, which subsequently leads by contradiction to τmax = ∞ and consequently to the solution of the control problem stated at the beginning of Subsection III-B.

Phase A. The set Ωξ is nonempty and open. Moreover, owing to the selection of the performance functions ρi(t) (i.e., |ei(0)| < ρi(0)), i ∈ {x, y, z, ψ, u, v, w, r}, we conclude that ξ(0) ∈ Ωξ. Additionally, due to the smoothness of a) the system nonlinearities and b) the proposed control scheme over Ωξ, it can be easily verified that h(t, ξ) is continuous in t and continuous for all ξ ∈ Ωξ. Therefore, the hypotheses of Theorem 1 stated in Subsection II-D hold and the existence of a maximal solution ξ(t) of (19) on a time interval [0, τmax) such that ξ(t) ∈ Ωξ, ∀t ∈ [0, τmax) is ensured.

Phase B. We have proven in Phase A that ξ(t) ∈ Ωξ, ∀t ∈ [0, τmax), or equivalently that:

ξi(t) ∈ (−1, 1), i ∈ {x, y, z, ψ, u, v, w, r}   (20)

for all t ∈ [0, τmax). Therefore, the signals:

εi(t) = ln((1 + ξi(t)) / (1 − ξi(t))), i ∈ {x, y, z, ψ, u, v, w, r}   (21)

are well defined for all t ∈ [0, τmax). Consider now the positive definite and radially unbounded function Vp = (1/2)(εx² + εy² + εz²). Differentiating with respect to time, substituting (1)-(3) as well as employing (14), (15) and (18), we obtain:

V̇p = [εx / ((1 − ξx²) ρx(t)), εy / ((1 − ξy²) ρy(t)), εz / ((1 − ξz²) ρz(t))]
  × [−kx ξx + δx(t) − ξx ρ̇x(t) + cos(ψ) ξu ρu(t) − sin(ψ) ξv ρv(t),
     −ky ξy + δy(t) − ξy ρ̇y(t) + sin(ψ) ξu ρu(t) + cos(ψ) ξv ρv(t),
     −kz ξz + δz(t) − ξz ρ̇z(t) + ξw ρw(t)]^T.

Furthermore, utilizing (20) and the fact that ρ̇x(t), ρ̇y(t), ρ̇z(t), ρu(t), ρv(t), ρw(t), δx(t), δy(t), δz(t) are bounded by construction and by assumption, we arrive at:

|δx(t) − ξx ρ̇x(t) + cos(ψ) ξu ρu(t) − sin(ψ) ξv ρv(t)| ≤ F̄x
|δy(t) − ξy ρ̇y(t) + sin(ψ) ξu ρu(t) + cos(ψ) ξv ρv(t)| ≤ F̄y
|δz(t) − ξz ρ̇z(t) + ξw ρw(t)| ≤ F̄z
for some positive constants F̄x, F̄y, F̄z. Moreover, 1/(1 − ξx²), 1/(1 − ξy²), 1/(1 − ξz²) > 1 and ρx(t), ρy(t), ρz(t) > 0. Therefore, employing the fact that εi and ξi have the same sign (see (21)), i ∈ {x, y, z, ψ, u, v, w, r}, we conclude that V̇p is negative when the following inequalities hold: |ξx(t)| > F̄x/kx, |ξy(t)| > F̄y/ky, |ξz(t)| > F̄z/kz. Thus, if we select kx, ky, kz such that F̄x/kx, F̄y/ky, F̄z/kz < 1, then it can be easily concluded that:

−1 < −F̄i/ki ≤ ξi(t) ≤ F̄i/ki < 1, i ∈ {x, y, z}   (22)

for all t ∈ [0, τmax). Subsequently, following a similar analysis with Vo = (1/2) εψ², we arrive at:

−1 < −F̄ψ/kψ ≤ ξψ(t) ≤ F̄ψ/kψ < 1   (23)

for a positive constant F̄ψ and a gain kψ satisfying kψ > F̄ψ. Additionally, the desired velocities ud, vd, wd, rd remain bounded for all t ∈ [0, τmax). Thus, invoking (18), the boundedness of u(t), v(t), w(t), r(t) for all t ∈ [0, τmax) is also deduced. Finally, differentiating (14) and (15) with respect to time and after some algebraic manipulations, it is straightforward to obtain the boundedness of u̇d(t), v̇d(t), ẇd(t), ṙd(t), ∀t ∈ [0, τmax).

Applying the aforementioned line of proof to the dynamic part of the vehicle (5)-(8), considering Vd = (1/2)(εu² + εv² + εw² + εr²) and the proposed control law (16), we arrive at:

−1 < −F̄i/ki ≤ ξi(t) ≤ F̄i/ki < 1   (24)

for some positive constants F̄i and control gains ki satisfying ki > F̄i, i ∈ {u, v, w, r}, as well as at the boundedness of the control law (16) for all t ∈ [0, τmax).

Up to this point, what remains to be shown is that τmax = ∞. Notice that (22), (23) and (24) imply that ξ(t) ∈ Ω′ξ, ∀t ∈ [0, τmax), where:

Ω′ξ = ∏_{i ∈ {x, y, z, ψ, u, v, w, r}} [−F̄i/ki, F̄i/ki]

is a nonempty and compact set. Moreover, it can be easily verified that Ω′ξ ⊂ Ωξ for ki > F̄i, i ∈ {x, y, z, ψ, u, v, w, r}. Hence, assuming τmax < ∞ and since Ω′ξ ⊂ Ωξ, Proposition 1 in Subsection II-D dictates the existence of a time instant t′ ∈ [0, τmax) such that ξ(t′) ∉ Ω′ξ, which is a clear contradiction. Therefore, τmax = ∞. As a result, all closed loop signals remain bounded and moreover ξ(t) ∈ Ω′ξ ⊂ Ωξ, ∀t ≥ 0. Additionally, from (17), (22) and (23), we conclude that:

−ρi(t) < −(F̄i/ki) ρi(t) ≤ ei(t) ≤ (F̄i/ki) ρi(t) < ρi(t)

for i ∈ {x, y, z, ψ}, ∀t ≥ 0, and consequently that prescribed performance is achieved, as presented in Subsection II-C. Finally, since the exponentially decaying orientation performance function ρψ(t) was designed such that ρψ(0) < ψc, it follows that |eψ(t)| < ρψ(t) < ψc for all t ≥ 0 (that is, the target lies in the camera's field of view for all t ≥ 0), which completes the proof.

Remark 2: From the aforementioned proof, it is worth noticing that the proposed control scheme achieves its goals without resorting to rendering F̄i/ki, i ∈ {x, y, z, ψ, u, v, w, r}, arbitrarily small through extreme values of the control gains ki, i ∈ {x, y, z, ψ, u, v, w, r}. In this respect, the actual tracking performance, which is determined by the performance functions ρx(t), ρy(t), ρz(t), ρψ(t), becomes isolated from model uncertainties, thus extending the robustness of the proposed control scheme.

Remark 3: Assuming that the target initially lies inside the onboard camera's field of view (i.e., |eψ(0)| < ψc), the proposed controller guarantees, through the appropriate selection of the exponentially decaying orientation performance function ρψ(t) (i.e., |eψ(0)| < ρψ(0) < ψc), that the target never escapes the field of view. Thus, there is no need to employ special techniques (e.g., potential fields, navigation functions, planning, etc.) in the control loop to preserve the visual contact with the target, which is important in visual servo control schemes since the feedback depends mainly on the visual contact between the vehicle and the target.

Fig. 1. Tracking error evolution. The grey dashed lines indicate the desired performance bounds. The black solid lines indicate the evolution of ex(t), ey(t), ez(t) and eψ(t).

IV. EXPERIMENTS

To illustrate the performance of the proposed visual servo control scheme, an experimental procedure was carried out. The experiments took place inside a water tank using the Girona500 AUV. A panel consisting of valves and handles located inside the pool was used as the visual target.
The goal of this experiment is to illustrate the ability of the proposed control scheme to navigate and stabilize the vehicle in front of the panel, while always retaining the panel inside the camera's field of view. The vehicle starts from an arbitrary initial configuration with the panel lying inside the onboard camera's field of view. The required transient and steady state specifications (that is, maximum steady state position errors of 0.05 m, maximum steady state orientation error of 5° and exponential convergence with rate e^{−0.1t}) are described by the following performance functions: ρx(t) = (4.5 − 0.05) e^{−0.1t} + 0.05, ρy(t) = (3.0 − 0.05) e^{−0.1t} + 0.05, ρz(t) = (1 − 0.05) e^{−0.1t} + 0.05, ρψ(t) = (85 − 5) e^{−0.1t} + 5, ρu(t) = (1 − 0.1) e^{−0.1t} + 0.1, ρv(t) = (1 − 0.1) e^{−0.1t} + 0.1, ρw(t) = (1 − 0.1) e^{−0.1t} + 0.1, ρr(t) = (1 − 0.1) e^{−0.1t} + 0.1. Given that the angle of view of the onboard camera is ψc = 90°, notice also that we have selected the orientation performance function ρψ(t) such that ρψ(0) = 85° < ψc, which guarantees that the target never escapes the camera's field of view. Finally, the control gains were chosen as follows: kx = 1, ky = 0.5, kz = 0.5, kψ = 0.5, ku = 15, kv = 7.5, kw = 15, kr = 7.5.
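For reference, these performance functions and gains can be written down directly, e.g. as inputs to a controller routine such as the sketch in Section III-B (an illustrative Python configuration reproducing the quoted values; angular quantities kept in degrees as reported):

    import numpy as np

    def make_rho(rho0, rho_inf, l=0.1):
        """Exponential performance function with the decay rate used in the experiment."""
        return lambda t: (rho0 - rho_inf) * np.exp(-l * t) + rho_inf

    # Position/orientation performance functions reported in Section IV.
    rho_pos = {'x': make_rho(4.5, 0.05), 'y': make_rho(3.0, 0.05),
               'z': make_rho(1.0, 0.05), 'psi': make_rho(85.0, 5.0)}
    # Velocity performance functions reported in Section IV.
    rho_vel = {k: make_rho(1.0, 0.1) for k in ('u', 'v', 'w', 'r')}
    # Control gains reported in Section IV.
    gains = {'kx': 1.0, 'ky': 0.5, 'kz': 0.5, 'kpsi': 0.5,
             'ku': 15.0, 'kv': 7.5, 'kw': 15.0, 'kr': 7.5}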
to the parking problem of a wheeled vehicle using limited view-angle
The tracking error evolution is depicted in Fig. 1. The grey visual feedback,” The International Journal of Robotics Research, pp.
dashed lines indicate the desired performance bounds and the 437–448, 2004.
black solid lines indicate the evolution of the tracking errors [8] G. Karras and K. Kyriakopoulos, “Visual servo control of an underwa-
ter vehicle using a laser vision system,” Proc. of IEEE/RSJ Intelligent
ex (t), ey (t), ez (t) and eψ (t) respectively. Notice that all Robots and Systems, pp. 4116–4122, 2008.
errors have met the transient and steady state specifications [9] G. Karras, S. Loizou, and K. Kyriakopoulos, “Towards semi-
imposed by the previously selected performance function. As autonomous operation of under-actuated underwater vehicles: sensor
fusion, on-line identification and visual servo control,” Autonomous
it is demonstrated by the experiments and predicted from the Robots, pp. 67–86, 2011.
theoretical analysis, the control objective has been achieved. [10] T. Fossen, “Guidance and control of ocean vehicles,” Wiley, New York,
However, it should be noted that we were not able to create 1994.
[11] C. P. Bechlioulis and G. A. Rovithakis, “Robust adaptive control
external disturbances in the form of waves and currents with of feedback linearizable mimo nonlinear systems with prescribed
the existing setup, hence, future directions would involve performance,” IEEE Transactions on Automatic Control, vol. 53, no. 9,
experiments with real conditions in the open sea to verify pp. 2090–2099, 2008.
[12] ——, “Adaptive control with guaranteed transient and steady state
the efficiency of the proposed scheme. tracking error bounds for strict feedback systems,” Automatica, vol. 45,
no. 2, pp. 532–538, 2009.
V. VIDEO [13] ——, “Prescribed performance adaptive control for multi-input multi-
output affine in the control nonlinear systems,” IEEE Transactions on
This paper is accompanied by a small video demonstrating Automatic Control, vol. 55, no. 5, pp. 1220–1226, 2010.
the efficiency of the proposed visual servo control scheme, [14] E. D. Sontag, Mathematical Control Theory. London, U.K.: Springer,
using the Girona500 AUV inside the CIRS water tank 1998.
[15] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “Orb: An efficient
facilities at the University of Girona in Spain. The video alternative to sift or surf,” in Computer Vision (ICCV), 2011 IEEE
presents the experimental procedure described in Section International Conference on, nov. 2011, pp. 2564–2571.
IV. It illustrates not only the tracking performance of the [16] E. Rosten and T. Drummond, “Machine learning for high-speed corner
detection,” in In European Conference on Computer Vision, 2006, pp.
controller but also its ability to always retain the target inside 430–443.
the camera’s field of view. [17] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, “Brief: binary
robust independent elementary features,” in Proceedings of the 11th
VI. CONCLUSIONS European conference on Computer vision: Part IV, ser. ECCV’10.
Berlin, Heidelberg: Springer-Verlag, 2010, pp. 778–792. [Online].
This paper describes the design and implementation of Available: http://dl.acm.org/citation.cfm?id=1888089.1888148
a novel visual servo control scheme for an Autonomous [18] D. G. Lowe, “Distinctive image features from scale-
invariant keypoints,” Int. J. Comput. Vision, vol. 60,
Underwater Vehicle (AUV). The proposed control scheme no. 2, pp. 91–110, Nov. 2004. [Online]. Available:
does not utilize the vehicle’s dynamic model parameters and http://dx.doi.org/10.1023/B:VISI.0000029664.99615.94
guarantees prescribed transient and steady state performance [19] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-
up robust features (surf),” Comput. Vis. Image Underst., vol.
despite the presence of external disturbances. The purpose of 110, no. 3, pp. 346–359, June 2008. [Online]. Available:
the controller is to navigate and stabilize the vehicle towards http://dx.doi.org/10.1016/j.cviu.2007.09.014
a visual target. The proposed scheme is of low complexity
and guarantees the satisfaction of visual constraints, e.g.,
keeping the target inside the onboard camera’s field of
view. Finally, the resulting control scheme has analytically

3884
