You are on page 1of 6

Second Order Sliding Mode Visual Tracking

in Finite Time for Uncertain Planar
Manipulators with Uncalibrated Camera
J. D. Fierro-Rojas

V. Parra-Vega

A. Espinosa-Romero
∗∗

Mechatronics Division - CINVESTAV, M´exico
(jfierro,vparra)@mail.cinvestav.mx
∗∗
Institute for Applied Mathematics and Systems -
UNAM, M´exico
arturoe@cic3.iimas.unam.mx
Abstract: This paper considers the problem of tracking control of planar robot
manipulators through visual servoing under uncertain knowledge of the robot and
camera parameters for fixed-camera configuration. We designed a controller based
on a passivity-based second order sliding mode approach which achieves finite
time convergence of tracking errors specified in the screen coordinate frame by
introducing a time base generator into the sliding surface. Simulations results for
a 2 degrees of freedom direct drive manipulator with uncalibrated CCD camera
are presented to illustrate the controller’s performance.
Keywords: Control of robots, visual servoing, uncertain robot dynamics, camera
calibration, second order sliding mode control.
1. INTRODUCTION
In order for visual servo controllers can result in
satisfactory control under high performance re-
quirements, including high speed tasks and direct-
robot actuators, robot dynamics must be taking
into account. Nevertheless most of previous works
assumed the ideal performance of the joint servo
mechanism and ignored the robot dynamics. As
solutions to this problem some adaptation meth-
ods have been proposed (Shen et al., 2001), (Hsu
and Aquino, 1999) and (Bishop and Spong, 1997),
which guarantee local tracking for the dynamic
model of robot arms subject to uncertainty on the
parameters of the vision system. These schemes
yield local tracking by exploiting the fact that
the rotation matrix is constant, and formal and
rigorous stability analysis support these results.
However, these papers assume knowledge of the
analytic jacobian matrix, and furthermore, those
are singular at rotation angle θ =
π
2
. In contrast,
a first order sliding mode (1SM) controller pro-
posed by (Fierro-Rojas et al., 2002) shows global
tracking for planar robots when all physical robot
and vision parameters are considered unknown.
Notice that our previous approach (Fierro-Rojas
et al., 2002) is not singular at θ =
π
2
, and does
not require knowledge of jacobian matrix.
In this paper, and similarly to (Fierro-Rojas et
al., 2002), we develop a second order sliding mode
(2SM) with global tracking visual feedback con-
troller for planar manipulators in an image-based
approach under unknown parameters. A change
of coordinates parameterized by a TBG is pro-
posed into the sliding surface such that finite-time
convergence of tracking error arises. However, the
uncertainty on camera parameters does not allow
to obtain chattering-free control. We stress that
the semi-continuous 2SM control yields global
exponential tracking versus the stable regime of
the pise-wise continuous 1SM control. To illus-
trate the performance of the proposed controller
we present some simulations that confirms the
expected convergence behavior of the trajectory
errors in screen coordinates.
2. ROBOT-CAMERA MODEL
Consider the set-up of a planar manipulator us-
ing a vision system as depicted in Fig. (1).
In order to describe motion of the end-effector
in an screen coordinate system, some coordi-
nates frames are defined, namely the robot base
frame Σ
B
= ¦X
B
, Y
B
, Z
B
¦, the end-effector
frame Σ
E
= ¦X
E
, Y
E
, Z
E
¦, the camera frame
Σ
C
= ¦X
C
, Y
C
, Z
C
¦, the CCD image frame Σ
I
=
¦X
I
, Y
I
¦ and the screen frame Σ
S
= ¦u, v¦, which
are refereed in the following subsection.
Fig. 1. System coordinates frames: Z
B
|Z
C
and
angle (X
B
, X
C
) = θ.
2.1 Camera Model and Forward Kinematics
The position of the robot end-effector in the screen
coordinate frame Σ
S
, based on the perspective
projection model (Hutchinson et al., 1996) is given
by
1
x =

u
v

=
αλ
f
λ
f
−z

−1 0
0 1

R

f(q)

c
O
b1
c
O
b2

+

−α 0
0 α

O
I
+ O
X
, (1)
where α > 0 is the scale factor in pixels; z > 0
is the distance of separation of the planes X
C

1
For a detailed procedure to obtain the explicit relation-
ship see for instance (Kelly et al., 1996).
Y
C
∈ Σ
C
and X
B
−Y
B
∈ Σ
B
; λ
f
> 0 is the focal
length; R = R(θ) ∈ IR
2×2
denotes the rotation
matrix of Σ
B
with respect to Σ
C
; f(q) is the
direct kinematics function;
C
O
B
= [
c
O
b1
,
c
O
b2
]
T
is the position of Σ
C
with respect to Σ
B
; O
I
is
the position of the intersection of the optical axis
with respect to Σ
I
; and finally, O
X
= [O
x
1
, O
x
2
]
T
denotes the origin of Σ
I
in the Σ
S
coordinate
system.
2.2 Differential Kinematics
By differentiating equation (1) we obtain the
velocity of the end-effector with respect to the
screen frame
˙ x =α
λ
f
λ
f
−z

−1 0
0 1

RJ ˙ q
=R
α
J ˙ q (2)
where J = J (q) is the Jacobian matrix of the
manipulator and
R
α
= α
λ
f
λ
f
−z

−1 0
0 1

R. (3)
2.2.1. Inverse differential kinematics Accord-
ing to the equation (2), the following mapping
appears
˙ q = J
−1
R
−1
α
˙ x (4)
to establish an explicit dependence of joint veloc-
ity coordinates in terms of image velocity vector.
Proposition 1. For any vector z ∈ IR
2
the product
J
−1
R
−1
α
z can be represented in the following
linear form
J
−1
R
−1
α
z = Y
v
(q, z) θ
v
(5)
whose elements of Y
v
(q, z) ∈ IR
2×p
2
do not depend
neither on the rotation matrix nor links length,
and θ
v
∈ IR
p
2
×1
is composed of parameters of the
rotation matrix and parameters of the Jacobian
matrix.
2.3 Robot Dynamics
In the absence of friction or other disturbances,
the dynamics of a serial n−link rigid, non-
redundant, fully actuated robot manipulator can
be written as follows
2
H (q) ¨ q + C (q, ˙ q) ˙ q + G(q) = τ (6)
2
Without loss of generality, our controller can be applied
with similar results if we consider dynamic friction, for
instance the LuGre model.
where q ∈ IR
n
is the vector of joint displacements,
τ ∈ IR
n×1
stands for the vector applied joint
torques, H(q) ∈ IR
n×n
is the symmetric positive
definite manipulator inertia matrix, C(q, ˙ q) ˙ q ∈
IR
n
stands for the vector of centripetal and Corio-
lis torques, and finally G(q) ∈ IR
n
is the vector of
gravitational torques. Two important properties
of robot dynamics useful for stability analysis are
the following.
Property 1. The time derivative of the inertia
matrix, and the centripetal and Coriolis matrix
satisfy a skew-symmetric matrix
X
T

1
2
˙
H (q) −C (q, ˙ q)

X = 0, ∀X ∈ IR
n
(7)
Property 2. Robot dynamics are linearly param-
eterizable in terms of a known regressor Y
b
=
Y
b
(q, ˙ q, ¨ q) ∈ IR
n×p
1
and a vector θ
b
∈ IR
p
1
of
robot parameters as follows
H (q) ¨ q + C (q, ˙ q) ˙ q + G(q) = Y
b
θ
b
(8)
The time base generator concept, necessary to
achieve finite-time visual tracking, and the prob-
lem statement are discussed in the following sec-
tion.
3. TIME BASE GENERATOR
In (Parra-Vega and Hirzinger, 2000), a well-posed
TBG algorithm is proposed to guarantee finite-
time convergence of robot manipulators. For com-
pleteness we present the basics of TBG-based con-
trol (Parra-Vega and Hirzinger, 2000). Consider
the following first order time-varying ordinary dif-
ferential equation
˙ y = −λ(t)y (9)
where
λ(t) = λ
0
˙
ξ
(1 −ξ) + δ
(10)
where λ
0
= 1+, < 1, and 0 < δ < 1. The time
base generator ξ = ξ(t) ∈ C
2
must be provided
by the user so as to ξ goes smoothly from 0 to 1
in finite-time t = t
b
> 0, and
˙
ξ =
˙
ξ(t) is a bell
shaped derivative of ξ such that
˙
ξ(t
0
) =
˙
ξ(t
b
) ≡ 0.
In this conditions, the solution of (9) is y(t) =
y(t
0
)[(1 − ξ) + δ]
1+
, with λ(t
b
) > 0. Note that
y(t
b
) = y(t
0

1+
> 0 can be made arbitrarily
small in arbitrary finite time t
b
. Also note that
the transient of y(t) is shaped by ξ(t) over time.
Thus, if our controller yields a closed-loop equa-
tion similar to (9), for y the position tracking
errors of the robot, then finite-time convergence
arises.
4. PRELIMINARY CONTROLLER DESIGN
4.1 Problem Statement
We consider the problem of designing a vi-
sual servo controller for the dynamic model
of robot manipulators under uncalibrated cam-
era and unknown physical robot parameters,
that guarantees finite-time tracking of a given
time-varying image-based trajectory denoted by
(x
T
d
(t), ˙ x
T
d
(t), ¨ x
T
d
(t))
T
∈ IR
3n
, with the following
assumptions:
Assumptions 1. Image coordinates x, and ˙ x are
available.
Assumptions 2. Inertial robot parameters are un-
known, and the camera is not calibrated.
The fix camera is modelled as a static operator (1)
that relates screen and joint coordinates. Thus,
there exists a functional that relates image errors
and joint errors. Then, we are interested in design-
ing a joint output error manifold s
q
, in terms of
visual error manifold s
x
, which satisfies a passivity
inequality 's
q
, τ

` with respect to the virtual joint
input τ

. To this end, we need to derive the robot
dynamics in s
q
coordinates, and the passivity in-
equality will dictate the control structure as well
as the storage function. To proceed we first derive
the known parametric case (the camera is cali-
brated), and afterwards we present the unknown
parametric case (the camera is not calibrated)
that satisfies the problem above.
4.2 Visual Error Manifold
Consider the following nominal reference with
respect to the screen frame
˙ x
r
= ˙ x
d
−λ(t)∆x + s
d
−K
i
υ (11)
˙ υ =sign(s
δ
) (12)
where ˙ x
r
is base on a time-varying continuous
state-independent TBG gain λ(t), x
d
and ˙ x
d
de-
note the desired position and velocity of the end-
effector with respect to the screen frame, respec-
tively, and
s
δ
=s −s
d
(13)
s =∆˙ x + λ(t)∆x (14)
s
d
=s(t
0
) exp
−κt
(15)
with the integral feedback gain K
i
> 0 whose
precise lower bound is to be defined yet; κ > 0; the
sgn(y) is the discontinuous signum(y) function
of y ∈ IR
n
; ∆x = x − x
d
is the image-based
end-effector position tracking error; s
d
= s(t
0
) ∈
C
1
⇒ s
δ
(t
0
) = 0. In this way, the derivative of
(11) becomes
¨ x
r
= ¨ x
d
−λ(t)∆˙ x −
˙
λ(t)∆x + ˙ s
d
−K
i
sign(s
δ
) (16)
Then, the visual error manifold (screen coordi-
nates extended error) is given by
s
x
= ˙ x − ˙ x
r
=s
δ
+ K
i
t

t
0
sgn(s
δ
)(ζ)dζ (17)
Note that if s
δ
= 0 then tracking is obtained.
4.3 Joint Error Manifold
According to (4), a nominal reference ˙ q
r
in the
joint space is defined as follows
˙ q
r
= J
−1
R
−1
α
˙ x
r
(18)
Thus, the joint error manifold s
q
in joint space is
given by
s
q
= ˙ q − ˙ q
r
=J
−1
R
−1
α
( ˙ x − ˙ x
r
)
=J
−1
R
−1
α
s
x
(19)
We can see that if we design a controller that
yields convergence of s
q
, then s
x
will converge
since by assumptions J and R
α
are well-posed.
Note that convergence of s
x
implies ∆˙ x, ∆x → 0.
Because of the time derivative of ˙ q
r
is required in a
passivity-based controller designing, this is obtain
as follow
¨ q
r
=J
−1
R
−1
α
¨ x
r
+
˙
J
−1
R
−1
α
˙ x
r
(20)
Remark 1. Parameter uncertainty. Having de-
fined the nominal references in both the joint and
screen frames, it is possible to design a controller
based on the calibrated joint error manifold, so
the intrinsic α and λ
f
, and the extrinsic z and θ
camera parameters are required, which is quite re-
strictive since usually some of them are unknown,
or at least very difficult to compute in real time.
Therefore, in the following, we present a controller
that yields finite time tracking with neither knowl-
edge of inertia robot parameters nor knowledge of
intrinsic and extrinsic camera parameters.
5. SECOND ORDER SLIDING MODE WITH
TBG VISUAL SERVOING
5.1 Uncalibrated Joint Error Manifold
To handle the parametric uncertainty of the cam-
era system, note that ˙ q
r
allows a linear parame-
terization, that is ˙ q
r
= J
−1
R
−1
α
˙ x
r
≡ Y (q, ˙ x
r

v
,
where θ
v
incorporates intrinsic and extrinsic cam-
era parameters and Y (q, ˙ x
r
) is composed of known
variables. Then, since θ
v
is unknown, we define a
new nominal reference
˙
¯ q
r
as follows:
˙
¯ q
r
= Y
v
¯
θ
v
(21)
where Y
v
= Y
v
(q, ˙ x
r
), and
¯
θ
v
is tuned such that
J
−1
R
−1
α
˙ x
r
is well-posed. From equations (19),
(21) and proposition (5), the uncalibrated joint
error manifold ¯ s
q
vector is given by
¯ s
q
= ˙ q −
˙
¯ q
r
= ˙ q −
˙
¯ q
r
± ˙ q
r
=s
q
−Y
v
¯
θ
v
+ Y
v
θ
v
=s
q
−Y
v
∆θ
v
(22)
where ∆θ
v
=
¯
θ
v
−θ
v
. It is useful to give ¨ q
r
now
¨
¯ q
r
=
˙
Y
v
¯
θ
v
(23)
In order to compensate the effects on robot dy-
namics due to definition of new nominal references
(
˙
¯ q
r
= ˙ q
r
,
¨
¯ q
r
= ¨ q
r
, and therefore ¯ s
q
= s
q
), it is
convenient to express the error
˙
¯ s
q
in terms of ˙ s
q
as follows
˙
¯ s
q
= ˙ s
q

˙
Y
v
∆θ
v
(24)
5.2 Open-loop Error Equation
Using nominal references (21)-(23), the uncali-
brated open-loop system can be written as follows
H (q)
˙
¯ s
q
+ C (q, ˙ q) ¯ s
q
= τ −
¯
Y
br
θ
b
(25)
where
¯
Y
br
= Y
br
(q, ˙ q,
˙
¯ q
r
,
¨
¯ q
r
) is available for mea-
surement. Considering equations (22) and (24),
open loop dynamics is expressed in terms of s
q
and ˙ s
q
by
H (q) ˙ s
q
+ C (q, ˙ q) s
q
=τ −
¯
Y
br
θ
b
+ Y
v
e
∆θ
v
e
(26)
where
Y
v
e
∆θ
v
e
=H(q)
˙
Y
v
∆θ
v
+ C(q, ˙ q)Y
v
∆θ
v
with Y
v
e
= Y
v
e
(q, ˙ q, ˙ x
r
, ¨ x
r
). Since H(q), C(q, ˙ q)
are linearly parameterizable, then last equation
can be written in terms of a linear parameteri-
zation, too. At this stage the problem becomes
in computing τ in (26) such that s
q
be bounded
subject to unknown θ
b
, ∆θ
v
e
.
5.3 Main Result
We propose the following controller
τ =−
¯
Y
br
Θ
b
sgn(
¯
Y
T
br
s
q
) −γsgn(s
q
) (27)
where Θ
b
∈ IR
p1×p1
, Θ
b
ii
≥ [θ
b
i
[ , and γ > 0.
Theorem 1. Consider a robot manipulator (6)
with the second order sliding mode with time base
generator visual servoing scheme (27), subject to
robot and camera parametric uncertainties. Then,
the closed-loop system yields finite-time conver-
gence of image tracking errors.
Proof. The following closed loop error equation
between (6) and (27) arises
H(q) ˙ s
q
=−C(q, ˙ q)s
q

¯
Y
br
Θ
b
sgn(
¯
Y
T
br
s
q
) −
¯
Y
br
θ
b
−Y
v
e
Θ
v
sgn(Y
T
v
e
s
q
) −γsgn(s
q
)
+Y
v
e
∆θ
v
e
+ τ

(28)
for τ

≡ 0 a virtual control input. Note that the
passivity inequality 's
q
, τ

` =
˙
V +γ[s
q
[, with the
following energy storage function
V =
1
2
s
T
q
H (q) s
q
(29)
whose rate of change yields
˙
V ≤−γ [s
q
[ + s
T
q
Y
v
e
∆θ
v
e
≤−γ [s
q
[ +[s
q
[[Y
v
e
[[∆θ
v
e
[
where we have used Property 1. Note that
Y
v
e
∆θ
v
e
= f
1
( ˙ x
r
, ¨ x
r
, θ
v
e
, ∆θ
v
e
, θ
b
), s
q
= f
2
( ˙ x
r
,
¨ x
r
, θ
v
e
, υ), and there exists an upper bound for
the regressors θ
v
e
, θ
b
because the entries of these
regressors depend on trigonometry functions and
link lengths, bounded desired trajectories and the
state of the system, then there exists a large
enough feedback gain γ such that
˙
V ≤−γ [s
q
[ + f
0
[s
q
[
for smooth and bounded function f
0
≥ g(f
1
, f
2
).
Then, according to the second method of Lya-
punov, there arises stability of s
q
, that is, s
q
is
bounded, with L

boundedness for ˙ s
q
, therefore
multiplying equation (19) by R
α
J, becomes in
s
x
= R
α
Js
q
with a derivative ˙ s
x
= R
α
J ˙ s
q
+
R
α
˙
Js
q
, that is, it gives rise to, from equation (17),
˙ s
δ
= −K
i
sgn(s
δ
) + R
α
J ˙ s
q
+ R
α
˙
Js
q
(30)
Now, in order to produce the sliding mode condi-
tion for s
δ
, we multiply the previous equation by
s
T
δ
to obtain
s
T
δ
˙ s
δ
=−s
T
δ
K
i
sgn(s
δ
) + s
T
δ
R
α
J ˙ s
q
+ s
T
δ
R
α
˙
Js
q
≤−K
i
[s
δ
[ +
0
[s
δ
[[ ˙ s
q
[ +
1
[s
δ
[[s
q
[
≤−K
i
[s
δ
[ +
2
[s
δ
[ +
3
[s
δ
[
≤−K
i
[s
δ
[ +
4
[s
δ
[
≤−µ[s
δ
[, µ = K
i

4
> 0, (31)
where
0
≥ [R
α
J[,
1
≥ [R
α
˙
J[,
2
≥ [ ˙ s
q
[,
3
≥ [s
q
[

4
=
2
+
3
. Thus, if K
i
>
4
, equation (31)
qualifies as the sliding mode condition for s
δ
= 0
for all time since s
δ
(t
0
) = 0 ∀t
0
. Thus, a second
order sliding mode regime is induced at s
δ
= 0 for
all time.
Now, as shown in section 3, the TBG induces
finite time convergence if we substitute y = ∆x in
equation (9), that is the following equation arises
x(t
s
) = x
d
(t
s
) + ∆x(t
0

1+
(32)
In this way, tracking errors converge to an ar-
bitrary small vicinity of ∆x = 0 in arbitrary
finite time t = t
b
without knowledge of manip-
ulator dynamics, and with uncalibrated camera.
Afterwards, for t > t
b
, s
δ
(t) = 0, which implies
∆˙ x = −λ
0
∆x+ε. Then, since s
d
(t) → 0, ∆x → 0
exponentially. ♦
Remark 2. Signum of s
q
. Because of robot and
vision system parameters are unknown, s
q
is not
available. However, its signum can be easily deter-
mined from equation (19) and using proposition
(5), namely the sign of s
q
= Y
v
(q, s
x

v
is deter-
mined by the sign of the known regressor Y
v
(q, s
x
),
since the vector θ
v
it is assumed unknown but
constant.
Remark 3. Experimental evaluation. The discon-
tinuous nature of the signum function make phys-
ical implementation of our controller impractical,
and hence at least a piecewise continuous approx-
imation of the signum function must be imple-
mented in order to not only reduce chattering but
also to be able to physically realize the controller.
Remark 4. Extension to 3D. With the exception
of the camera model (1) and proposition (5),
the controller design was conducted taking no
account of dimension of the robot workspace,
which indicate the possibility of extending our
scheme to the 3D spatial case as a future research
topic.
6. SIMULATIONS
A two-rigid link, planar robot without friction
forces is considered. Dimensions of the robot and
camera parameters are given in Table 1, where
subindex 1 and 2 stand for first and second link,
respectively. The endpoint of the manipulator is
requested to draw a circle defined with respect to
the vision frame x
d
= (x
d1
, x
d2
)
T
= (0.1 cos ωt +
0.05, 0.1 sin ωt+0.05)
T
, where ω = 2 rad/sec, with
t
b
= 1.0 sec as the desired convergence time. Data
allows to visualize the stability properties stated
in Theorem 1.
Table 1. Camera and robot parameters.
ROBOT SYSTEM Value - Unit
Length link l
1
, l
2
0.4, 0.3 m
Center of gravity 1,2 l
c1
, l
c2
0.1776, 0.1008 m
Mass link m
1
, m
2
9.1, 2.5714 kg
Inertia link I
1
, I
2
0.284, 0.0212 kgm
2
Gravity acceleration g
z
9.8 m/sec
2
VISION SYSTEM .
Clock-wise rotation angle θ
π
8
rad
Scale factor α 72727 pixeles/m
Depth field of view z 1.5 m
Camera offset
C
O
B
[−0.2 −0.1]
T
m
Offset Σ
I
O
I
[0.0005 0.0003]
T
m
Focal length λ
f
0.008 m
0 1 2 3 4 5
−400
−300
−200
−100
0
100
[
p
i
x
e
l
s
]
Position errors
0 1 2 3 4 5
−100
0
100
200
300
400
500
[
p
i
x
e
l
s
/
s
]
Velocity errors
0 1 2 3 4 5
−20
−10
0
10
20
30
[
N
m
]
Applied torques
−400 −200 0 200
−300
−200
−100
0
100
200
300
400
x
1
,xd
1
[pixels]
x
2
,
x
d
2

[
p
i
x
e
l
s
]
Desired and end−effector trajectories
joint 2
joint 1
joint 1
joint 2
joint 1
joint 2
Robot workspace
boundary
x(t)
x
d
(t)
Fig. 2. Tracking of image-based desired trajecto-
ries: Theorem 1 controller for t
b
= 1 sec.
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5
−2000
−1500
−1000
−500
0
500
1000
1500
[N
m
]
Applied torques
joint 1
joint 2
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5
−15
−10
−5
0
5
10
15
20
25
[N
m
]
Applied torques
joint 1
joint 2
Fig. 3. Applied torques: 1SM (left), and 2SM
Theorem 1 (right).
7. CONCLUSIONS
We have proposed a new image-based servo con-
troller for uncertain planar robots and uncali-
brated camera in a passivity-based second order
sliding mode with time base generator approach.
The closed-loop system exhibit exponential con-
vergence of tracking errors for any given initial
conditions despite of the size of the parametric
uncertainty. Finite time convergence is visualized
through simulation results when all parameters
are unknown.
REFERENCES
Bishop, B.E. and M.W. Spong (1997). Adaptive
calibration and control of 2d monocular vi-
sual servo systems. IFAC Symp. on Robot
Control, Nantes, France.
Fierro-Rojas, J.D., V. Parra-Vega
and A. Espinosa-Romero (2002). 2d sliding
mode visual servoing for uncertain manipu-
lators with uncalibrated camera. IEEE`RSJ
Conf. IROS.
Hsu, L. and P. Aquino (1999). Adaptive visual
tracking with uncertain manipulator dynam-
ics and uncalibrated camera. Proc. 38th IEEE
CDC, Phoenix, Arizona, pp.1248–1253.
Hutchinson, S., G.D. Hager and P.I. Corke (1996).
A tutorial on visual servo control. Trans. on
Robotics and Automation, 12, 651–670.
Kelly, R., P. Shirkey and M.W. Spong (1996).
Fixed-camera visual servo control for planar
robots. Proc. of the 1996 IEEE Int. Conf. on
Robotics and Automation, Minnesota.
Parra-Vega, V. and G. Hirzinger (2000). Finite-
time tracking for robot manipulators with
continuous control. SYROCO, Wien.
Shen, Y., Y.H. Liu and K. Li (2001). Asymptotic
trajectory tracking of manipulators using un-
calibrated visual feedback. Submitted to the
IEEE/ASME Trans. Mechatronics.

C OB = [c Ob1 . ROBOT-CAMERA MODEL Consider the set-up of a planar manipulator using a vision system as depicted in Fig. namely the robot base frame ΣB = {XB . z > 0 is the distance of separation of the planes XC − 1 For a detailed procedure to obtain the explicit relationship see for instance (Kelly et al.the semi-continuous 2SM control yields global exponential tracking versus the stable regime of the pise-wise continuous 1SM control. which are refereed in the following subsection. 2 Without loss of generality. v}. 0 1 (3) 2. 0 α (1) Ob1 Ob2 where α > 0 is the scale factor in pixels. 2. Ox2 ]T denotes the origin of ΣI in the ΣS coordinate system. fully actuated robot manipulator can be written as follows 2 H (q) q + C (q. YC . 1. ZC }. . nonredundant. OI is the position of the intersection of the optical axis with respect to ΣI . ZB }. YE . YB .1.3 Robot Dynamics In the absence of friction or other disturbances.1 Camera Model and Forward Kinematics The position of the robot end-effector in the screen coordinate frame ΣS .2 Differential Kinematics By differentiating equation (1) we obtain the velocity of the end-effector with respect to the screen frame x=α ˙ λf λf − z = Rα J q ˙ −1 0 RJ q ˙ 0 1 (2) where J = J (q) is the Jacobian matrix of the manipulator and Rα = α λf λf − z −1 0 R. In order to describe motion of the end-effector in an screen coordinate system. Proposition 1. our controller can be applied with similar results if we consider dynamic friction. For any vector z ∈ I 2 the product R −1 J −1 Rα z can be represented in the following linear form −1 J −1 Rα z = Yv (q. the dynamics of a serial n−link rigid. (5) 2. the CCD image frame ΣI = {XI . λf > 0 is the focal length. and θv ∈ I p2 ×1 is composed of parameters of the R rotation matrix and parameters of the Jacobian matrix. 2. (1). some coordinates frames are defined. for instance the LuGre model. 1996) is given by 1 x= − u v c c whose elements of Yv (q. z) θv Fig. YI } and the screen frame ΣS = {u. Inverse differential kinematics According to the equation (2). f (q) is the direct kinematics function. To illustrate the performance of the proposed controller we present some simulations that confirms the expected convergence behavior of the trajectory errors in screen coordinates. YC ∈ ΣC and XB − YB ∈ ΣB . q) q + G (q) = τ ¨ ˙ ˙ (6) = αλf λf − z + −1 0 R f (q) 0 1 −α 0 OI + O X . XC ) = θ. the camera frame ΣC = {XC . ZE }.c Ob2 ]T is the position of ΣC with respect to ΣB .. R = R(θ) ∈ I 2×2 denotes the rotation R matrix of ΣB with respect to ΣC . System coordinates frames: ZB ZC and angle (XB . and finally. the end-effector frame ΣE = {XE . based on the perspective projection model (Hutchinson et al. 1996).2. OX = [Ox1 . z) ∈ I 2×p2 do not depend R neither on the rotation matrix nor links length. the following mapping appears −1 ˙ q = J −1 Rα x ˙ (4) to establish an explicit dependence of joint velocity coordinates in terms of image velocity vector. 2..

xd and xd de˙ note the desired position and velocity of the endeffector with respect to the screen frame. Thus. Consider the following first order time-varying ordinary differential equation y = −λ(t)y ˙ where ˙ ξ λ(t) = λ0 (1 − ξ) + δ (10) (9) 4. in terms of visual error manifold sx . The time base generator ξ = ξ(t) ∈ C 2 must be provided by the user so as to ξ goes smoothly from 0 to 1 ˙ ˙ in finite-time t = tb > 0. and x are ˙ available. xT (t). Image coordinates x. and afterwards we present the unknown parametric case (the camera is not calibrated) that satisfies the problem above. if our controller yields a closed-loop equation similar to (9).where q ∈ I n is the vector of joint displacements. Note that y(tb ) = y(t0 )δ 1+ > 0 can be made arbitrarily small in arbitrary finite time tb . and ξ = ξ(t) is a bell ˙ ˙ shaped derivative of ξ such that ξ(t0 ) = ξ(tb ) ≡ 0. which satisfies a passivity inequality sq . Assumptions 2. for y the position tracking errors of the robot. The time derivative of the inertia matrix. For completeness we present the basics of TBG-based control (Parra-Vega and Hirzinger. Two important properties of robot dynamics useful for stability analysis are the following. and finally G(q) ∈ I n is the vector of R gravitational torques. 1. respectively. Then. PRELIMINARY CONTROLLER DESIGN 4. To this end. where xr is base on a time-varying continuous ˙ state-independent TBG gain λ(t). ∀X ∈ I n (7) ˙ R 2 4. xT (t))T ∈ I 3n . and the camera is not calibrated. The fix camera is modelled as a static operator (1) that relates screen and joint coordinates. Inertial robot parameters are unknown. Property 1. q)q ∈ ˙ ˙ I n stands for the vector of centripetal and CorioR lis torques. we need to derive the robot dynamics in sq coordinates. and the centripetal and Coriolis matrix satisfy a skew-symmetric matrix XT 1 ˙ H (q) − C (q. there exists a functional that relates image errors and joint errors. with λ(tb ) > 0. that guarantees finite-time tracking of a given time-varying image-based trajectory denoted by (xT (t). and 0 < δ 1. H(q) ∈ I n×n is the symmetric positive R definite manipulator inertia matrix. ∆x = x − xd is the image-based R end-effector position tracking error.2 Visual Error Manifold Consider the following nominal reference with respect to the screen frame xr = xd − λ(t)∆x + sd − Ki υ ˙ ˙ υ = sign (sδ ) ˙ (11) (12) where λ0 = 1 + . a well-posed TBG algorithm is proposed to guarantee finitetime convergence of robot manipulators. then finite-time convergence arises. and the passivity inequality will dictate the control structure as well as the storage function. 2000). 2000). and the problem statement are discussed in the following section. the solution of (9) is y(t) = y(t0 )[(1 − ξ) + δ]1+ . κ > 0. C(q. τ ∗ with respect to the virtual joint input τ ∗ . Robot dynamics are linearly parameterizable in terms of a known regressor Yb = R Yb (q. To proceed we first derive the known parametric case (the camera is calibrated). q) X = 0. necessary to achieve finite-time visual tracking. TIME BASE GENERATOR In (Parra-Vega and Hirzinger. sd = s(t0 ) ∈ . R τ ∈ I n×1 stands for the vector applied joint R torques.1 Problem Statement We consider the problem of designing a visual servo controller for the dynamic model of robot manipulators under uncalibrated camera and unknown physical robot parameters. Thus. 3. q ) ∈ I n×p1 and a vector θb ∈ I p1 of ˙ ¨ R robot parameters as follows H (q) q + C (q. q) q + G (q) = Yb θb ¨ ˙ ˙ (8) The time base generator concept. q. and sδ = s − s d s = ∆x + λ(t)∆x ˙ sd = s(t0 ) exp −κt (13) (14) (15) with the integral feedback gain Ki > 0 whose precise lower bound is to be defined yet. with the following ˙d ¨d R d assumptions: Assumptions 1. the sgn (y) is the discontinuous signum(y) function of y ∈ I n . Also note that the transient of y(t) is shaped by ξ(t) over time. we are interested in designing a joint output error manifold sq . In this conditions. Property 2.

Since H(q). the uncalibrated open-loop system can be written as follows ¯ ˙ H (q) sq + C (q. open loop dynamics is expressed in terms of sq and sq by ˙ ¯ (26) H (q) sq + C (q. In this way. q. Then. xr ) is composed of known ˙ variables. ∆x → 0.1 Uncalibrated Joint Error Manifold To handle the parametric uncertainty of the camera system. note that qr allows a linear parame˙ −1 ˙ ˙ terization. ∆θve . we define a ˙ new nominal reference q r as follows: ¯ ¯ ˙ q r = Y v θv ¯ (21) Then. the uncalibrated joint error manifold sq vector is given by ¯ ˙ ˙ sq = q − q r = q − q r ± qr ¯ ˙ ¯ ˙ ¯ ˙ ¯v + Yv θv = sq − Yv θ = sq − Yv ∆θv (22) ¯ where ∆θv = θv − θv . q r . Having defined the nominal references in both the joint and screen frames. since θv is unknown. q)Yv ∆θv ˙ ˙ ¨ ˙ with Yve = Yve (q. ¯ where Yv = Yv (q. q) sq = τ − Ybr θb ¯ ˙ ¯ (25) We can see that if we design a controller that yields convergence of sq . a nominal reference qr in the ˙ joint space is defined as follows −1 ˙ qr = J −1 Rα xr ˙ (18) Thus. then sx will converge since by assumptions J and Rα are well-posed. which is quite restrictive since usually some of them are unknown.2 Open-loop Error Equation Using nominal references (21)-(23). xr . ˙ (21) and proposition (5).3 Joint Error Manifold According to (4). q) are linearly parameterizable. Considering equations (22) and (24). . the joint error manifold sq in joint space is given by sq = q − q r ˙ ˙ −1 ˙ ˙ = J −1 Rα (x − xr ) −1 = J −1 Rα sx In order to compensate the effects on robot dynamics due to definition of new nominal references ˙ (q r = qr . ˙ Because of the time derivative of qr is required in a ˙ passivity-based controller designing. it is ¯ ˙ q ¯ ¨ ¯ ˙ convenient to express the error sq in terms of sq ¯ ˙ as follows ˙ ˙ sq = sq − Yv ∆θv ¯ ˙ (24) (19) 5. ¨r ) is available for mea˙ ¯ q surement. Therefore. xr ). ¨r = qr . too. in the following. and therefore sq = sq ). From equations (19). the derivative of (11) becomes xr = xd − λ(t)∆x − λ(t)∆x + sd ¨ ¨ ˙ ˙ ˙ −Ki sign(sδ ) (16) 5. C(q. this is obtain as follow −1 −1 ˙ qr = J −1 Rα xr + J −1 Rα xr ¨ ¨ ˙ (20) ¯ ˙ ¯ where Ybr = Ybr (q. or at least very difficult to compute in real time. and the extrinsic z and θ camera parameters are required. then last equation can be written in terms of a linear parameterization. the visual error manifold (screen coordinates extended error) is given by sx = x − x r ˙ ˙ t = s δ + Ki t0 sgn(sδ )(ζ)dζ (17) Note that if sδ = 0 then tracking is obtained. it is possible to design a controller based on the calibrated joint error manifold. Parameter uncertainty. Remark 1. and θv is tuned such that ˙ −1 −1 J Rα xr is well-posed. xr )θv . SECOND ORDER SLIDING MODE WITH TBG VISUAL SERVOING 5. q) sq = τ − Ybr θb + Yve ∆θve ˙ ˙ where ˙ ˙ Yve ∆θve = H(q)Yv ∆θv + C(q. we present a controller that yields finite time tracking with neither knowledge of inertia robot parameters nor knowledge of intrinsic and extrinsic camera parameters. q.C 1 ⇒ sδ (t0 ) = 0. xr ). ˙ where θv incorporates intrinsic and extrinsic camera parameters and Y (q. that is qr = J −1 Rα xr ≡ Y (q. so the intrinsic α and λf . At this stage the problem becomes in computing τ in (26) such that sq be bounded subject to unknown θb . Note that convergence of sx implies ∆x. It is useful to give qr now ¨ ˙ ¯ ¨r = Yv θv q ¯ (23) 4.

5. Rα Js ˙ sδ = −Ki sgn(sδ ) + Rα J sq + Rα Jsq ˙ ˙ (30) Now. if Ki > 4 . its signum can be easily determined from equation (19) and using proposition (5). the controller design was conducted taking no account of dimension of the robot workspace. Signum of sq . Θbii ≥ |θbi | . planar robot without friction forces is considered. υ). a second order sliding mode regime is induced at sδ = 0 for all time. tracking errors converge to an arbitrary small vicinity of ∆x = 0 in arbitrary finite time t = tb without knowledge of manipulator dynamics. bounded desired trajectories and the state of the system. in order to produce the sliding mode condition for sδ . since the vector θv it is assumed unknown but constant. Extension to 3D. Remark 3. The endpoint of the manipulator is whose rate of change yields ˙ V ≤ −γ |sq | + sT Yve ∆θve q ≤ −γ |sq | + |sq ||Yve ||∆θve | where we have used Property 1. SIMULATIONS A two-rigid link. according to the second method of Lyapunov. Because of robot and vision system parameters are unknown. Then. sx )θv is determined by the sign of the known regressor Yv (q. f2 ). sδ (t) = 0. as shown in section 3. respectively. where subindex 1 and 2 stand for first and second link. Dimensions of the robot and camera parameters are given in Table 1. since sd (t) → 0. and there exists an upper bound for ¨ the regressors θve . Then. Consider a robot manipulator (6) with the second order sliding mode with time base generator visual servoing scheme (27). and γ > 0. θb ). Note that the ˙ passivity inequality sq . the TBG induces finite time convergence if we substitute y = ∆x in equation (9). However. Now. we multiply the previous equation by sT to obtain δ . becomes in sx = Rα Jsq with a derivative sx = Rα J sq + ˙ ˙ ˙ q . that is. then there exists a large enough feedback gain γ such that ˙ V ≤ −γ |sq | + f0 |sq | for smooth and bounded function f0 ≥ g(f1 . 6. that is. R Theorem 1. ∆θve . Thus. 2 ≥ |sq |. q)sq − Ybr Θb sgn(Ybr sq ) − Ybr θb ˙ ˙ −Yve Θv sgn(YvT sq ) − γsgn(sq ) e +Yve ∆θve + τ ∗ (28) µ = Ki − > 0. xr . ∆x → 0 ˙ exponentially. Note that ˙ ˙ ¨ Yve ∆θve = f1 (xr . sx ). (31) ˙ where 0 ≥ |Rα J|.3 Main Result We propose the following controller τ ¯ ¯T = −Ybr Θb sgn(Ybr sq ) − γsgn(sq ) (27) ˙ ˙ sT sδ = −sT Ki sgn(sδ ) + sT Rα J sq + sT Rα Jsq δ δ δ δ ˙ ≤ −Ki |sδ | + ≤ −Ki |sδ | + ≤ −Ki |sδ | + ≤ −µ|sδ |. With the exception of the camera model (1) and proposition (5). The following closed loop error equation between (6) and (27) arises ¯ ¯T ¯ H(q)sq = −C(q. that is the following equation arises x(ts ) = xd (ts ) + ∆x(t0 )δ 1+ (32) for τ ∗ ≡ 0 a virtual control input. therefore ˙ multiplying equation (19) by Rα J. the closed-loop system yields finite-time convergence of image tracking errors. sq is not available. which implies ∆x = −λ0 ∆x + ε. and hence at least a piecewise continuous approximation of the signum function must be implemented in order to not only reduce chattering but also to be able to physically realize the controller. for t > tb . Experimental evaluation. from equation (17). equation (31) 4 qualifies as the sliding mode condition for sδ = 0 for all time since sδ (t0 ) = 0 ∀t0 . τ ∗ = V + γ|sq |. namely the sign of sq = Yv (q. with L∞ boundedness for sq . θb because the entries of these regressors depend on trigonometry functions and link lengths. subject to robot and camera parametric uncertainties. xr . Proof. with the following energy storage function V = 1 T s H (q) sq 2 q (29) In this way. Then. Thus. which indicate the possibility of extending our scheme to the 3D spatial case as a future research topic. θve . there arises stability of sq . θve . sq = f2 (xr . ♦ Remark 2. sq is bounded. Afterwards. it gives rise to. 1 ≥ |Rα J|. 3 ≥ |sq | ˙ = 2 + 3 . ˙ 0 |sδ ||sq | 2 |sδ | 4 |sδ | 4 + 1 |sδ ||sq | + 3 |sδ | where Θb ∈ I p1×p1 . and with uncalibrated camera. Remark 4. The discontinuous nature of the signum function make physical implementation of our controller impractical.

Camera and robot parameters.5 m [−0. 2d sliding mode visual servoing for uncertain manipulators with uncalibrated camera. 0.1 sin ωt+0.0212 kgm2 9.D.0003]T m 0. 651–670. Parra-Vega and A. Value . Conf.W.I.2 lc1 . and G. Hsu.3 m 0. where ω = 2 rad/sec. with tb = 1.008 m Velocity errors REFERENCES Bishop. Liu and K. Hager and P. Y. 1500 Applied torques 25 joint 1 1000 20 Applied torques 15 500 joint 1 10 0 [Nm] joint 2 −500 [Nm] 5 joint 2 0 −1000 −5 −1500 −10 −2000 0 0. G. 12. Espinosa-Romero (2002).5 2 2. Proc. Applied torques: 1SM (left).. V. J. Wien. Shen. of the 1996 IEEE Int. Parra-Vega.1]T m [0..5 5 Fig. P. 3.5 2 2. on Robot Control. and M.5 3 3. Phoenix.5 1 1. Tracking of image-based desired trajectories: Theorem 1 controller for tb = 1 sec. Aquino (1999).E. pp. V. IFAC Symp.xd [pixels] Applied torques −400 −300 −200 −200 0 200 x1. ROBOT SYSTEM Length link l1 . 2. A tutorial on visual servo control. xd2 )T = (0.0 sec as the desired convergence time. Table 1.1248–1253. and P. Hutchinson. Corke (1996). Data allows to visualize the stability properties stated in Theorem 1. 38th IEEE CDC. on Robotics and Automation. Minnesota.5 4 4.D. Spong (1997). Fixed-camera visual servo control for planar robots.5 1 1. SYROCO.8 m/sec2 . lc2 Mass link m1 . Adaptive calibration and control of 2d monocular visual servo systems.05.2 −0. IROS. Kelly. IEEE\RSJ Conf.5 5 −15 0 0. L. l2 Center of gravity 1.0005 0.5 3 3. .. on Robotics and Automation. R.5 4 4. Trans. Position errors joint 2 500 400 300 200 100 0 joint 2 0 1 −100 −200 −300 −400 joint 1 0 1 2 3 4 5 −100 2 3 4 5 Desired and end−effector trajectories 30 20 10 0 −10 −20 joint 2 0 1 2 3 4 5 x . Finitetime tracking for robot manipulators with continuous control. Finite time convergence is visualized through simulation results when all parameters are unknown.1 cos ωt + 0. I2 Gravity acceleration gz VISION SYSTEM Clock-wise rotation angle θ Scale factor α Depth field of view z Camera offset C OB Offset ΣI OI Focal length λf 100 0 joint 1 [pixels/s] [pixels] The closed-loop system exhibit exponential convergence of tracking errors for any given initial conditions despite of the size of the parametric uncertainty. B.1008 m 9.1776.284. 0. Shirkey and M.4. Li (2001). S.xd1 [pixels] joint 1 [Nm] −100 0 100 200 300 400 x(t) xd(t) 2 2 Robot workspace boundary Fig. 0. France.requested to draw a circle defined with respect to the vision frame xd = (xd1 . Arizona. Proc. 7.05)T .5714 kg 0.1. Spong (1996). Asymptotic trajectory tracking of manipulators using uncalibrated visual feedback.W. CONCLUSIONS We have proposed a new image-based servo controller for uncertain planar robots and uncalibrated camera in a passivity-based second order sliding mode with time base generator approach.H. π rad 8 72727 pixeles/m 1. Mechatronics. Hirzinger (2000). Adaptive visual tracking with uncertain manipulator dynamics and uncalibrated camera.Unit 0. 2. 0. Submitted to the IEEE/ASME Trans.. Fierro-Rojas. Nantes. m2 Inertia link I1 . and 2SM Theorem 1 (right). Y.