Visual servoing
- objective: use information acquired by vision sensors (cameras) for
  feedback control of the pose/motion of a robot (or of parts of it)
Robotics 2 2
Some applications
- automatic navigation of robotic systems (agriculture, automotive)
Image processing
- real-time extraction of characteristic parameters useful for robot motion
  control
  - features: points, area, geometric center, higher-order moments, ...
- low-level processing
  - binarization (threshold on grey levels), equalization (histograms),
    edge detection, ...
- segmentation
  - based on regions (possibly in binary images)
  - based on contours
- interpretation
  - association of characteristic parameters (e.g., texture)
  - problem of correspondence of points/characteristics of objects in
    different images (stereo or on image flows)
Configuration of a vision system
- one, two, or more cameras
- grey-level or color
- 3D/stereo vision
  - obtained even with a single (moving) camera, with the object taken from
    different (known) points of view
- camera positioning
  - fixed (eye-to-hand)
  - mounted on the manipulator (eye-in-hand)
- robotized vision heads
  - motorized (e.g., stereo camera on a humanoid head or pan-tilt camera on
    a Magellan mobile robot)
Camera positioning
(figures: eye-in-hand vs. eye-to-hand camera mounting)
Vision for assembly
Indirect/external visual servoing
Direct/internal visual servoing
replace the Cartesian controller with one based on vision that directly
computes the joint reference commands
- the control law is more complex (it involves robot kinematics/dynamics)
- preferred in fast situations (control sampling f = 1/T > 50 Hz)
Classification of visual servoing schemes
Comparison between the two schemes
- position-based visual servoing (PBVS)
  (diagram: the current image is processed via 3D reconstruction to obtain
  the current 3D pose (T, R) of the object frame (X_O, Y_O, Z_O) w.r.t. the
  camera frame (X_C, Y_C, Z_C); comparison with the desired 3D pose yields a
  3D Cartesian error driving the robot)
- image-based visual servoing (IBVS)
  (diagram: the current and desired images are processed directly; a 2D
  error in the image space generates the motion commands to the robot)
PBVS architecture
here, we consider only Cartesian commands of the kinematic type:
relatively EASY
(diagram: the desired pose (x*, y*, z*, α*, β*, γ*) is compared with the
current reconstructed pose; the error e(t) feeds a Cartesian control law
producing the commands V, Ω)
Examples of PBVS
An example of IBVS
here, the features are points (selected from the given set, or in suitable
combinations)
- the error in the image plane (task space!) between the desired and current
  feature positions drives/controls the motion of the robot
PBVS vs IBVS
PBVS = position-based visual servoing; IBVS = image-based visual servoing
- image acquisition
  - frame rate, delay, noise, ...
- feature extraction
  - with image processing techniques (it can be a difficult and
    time-consuming step!)
- comparison with "desired" feature values
  - definition of an error signal in the image plane
- generation of motion of the camera/robot
  - perspective transformations
  - differential kinematics of the manipulator
  - control laws of kinematic (most often) or dynamic type
    (e.g., PD + gravity cancellation; see the reference textbook)
Image processing techniques
Example of IBVS using moments
Camera model
Thin lens camera model
- geometric/projection optics: fundamental equation of a thin lens

    1/Z + 1/z = 1/λ      (1/λ = dioptric/optical power)

  (figure: a point P at depth Z is focused at distance z behind the lens,
  giving its image p on the image plane)
- camera reference frame        F_c: (O_c, X_c, Y_c, Z_c)
- image plane reference frame   F_I: (O_I, u, v)
- world reference frame         F_w: (O, X_w, Y_w, Z_w)
- position/orientation of F_c w.r.t. F_w: (T, R) (translation, rotation)
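The thin-lens equation and the resulting perspective projection can be
sketched numerically; the focal length and point coordinates below are
illustrative values, not taken from the slides:

```python
import numpy as np

LAM = 0.008  # focal length λ [m] (illustrative value)

def focus_distance(Z, lam=LAM):
    """Image-plane distance z that focuses a point at depth Z: 1/Z + 1/z = 1/λ."""
    return 1.0 / (1.0 / lam - 1.0 / Z)

def project(P, lam=LAM):
    """Perspective (pinhole) projection of P = (X, Y, Z), expressed in the
    camera frame, onto image coordinates (u, v) = λ (X/Z, Y/Z)."""
    X, Y, Z = P
    return np.array([lam * X / Z, lam * Y / Z])

uv = project([0.10, -0.05, 2.0])  # a point 2 m in front of the camera
z = focus_distance(2.0)           # image-plane distance that focuses it
```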
Interaction matrix
- given a set of feature parameters f = (f_1 ⋯ f_k)ᵀ ∈ ℝ^k
- we look for the (kinematic) differential relation between the motion
  imposed to the camera and the motion of the features on the image plane

    ḟ = J(f) (V, Ω)

- (V, Ω) ∈ ℝ⁶ is the camera linear/angular velocity, a vector expressed
  in F_c
- J(f) is a k×6 matrix, known as the interaction matrix
- in the following, we consider mainly point features
  - other instances (areas, lines, ...) can be treated as extensions of the
    presented analysis
Computing the interaction matrix
point feature, pinhole model

- from the perspective equations u = λX/Z, v = λY/Z, we have

    (u̇, v̇)ᵀ = (1/Z) [λ 0 −u; 0 λ −v] (Ẋ, Ẏ, Ż)ᵀ = J_1(u, v, Z) (Ẋ, Ẏ, Ż)ᵀ

- the velocity (Ẋ, Ẏ, Ż) of the point P in frame F_c is actually due to the
  roto-translation (V, Ω) of the camera (P is assumed fixed in F_w)
- kinematic relation between (Ẋ, Ẏ, Ż) and (V, Ω):

    (Ẋ, Ẏ, Ż)ᵀ = −V − Ω × (X, Y, Z)ᵀ

Note: ALL quantities are expressed in the camera frame F_c,
without explicit indication of subscripts
Computing the interaction matrix (cont)
point feature, pinhole model

    (Ẋ, Ẏ, Ż)ᵀ = [−1 0 0 0 −Z Y; 0 −1 0 Z 0 −X; 0 0 −1 −Y X 0] (V, Ω)ᵀ
               = J_2(X, Y, Z) (V, Ω)ᵀ

  (figure: p = image of the point feature P)
Comments
- the interaction matrix J_p in the map

    (u̇, v̇)ᵀ = J_p(u, v, Z) (V, Ω)ᵀ

  depends on the actual value of the feature and on its depth Z
- since dim ker J_p = 4, there exist ∞⁴ motions of the camera that are
  unobservable (for this feature) on the image
  - for instance, a translation along the projection ray
- when more point features are considered, the associated interaction
  matrices are stacked one on top of the other
  - p point features: the interaction matrix is k×6, with k = 2p
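The factors J_1 and J_2 from the preceding slides multiply into the
closed-form 2×6 interaction matrix of a point feature. A numerical sketch
(λ and the feature values are illustrative) that also verifies the
unobservability of a translation along the projection ray and the 2p×6
stacking:

```python
import numpy as np

def J1(u, v, Z, lam):
    """Maps the point velocity (Xdot, Ydot, Zdot) to (udot, vdot)."""
    return (1.0 / Z) * np.array([[lam, 0.0, -u],
                                 [0.0, lam, -v]])

def J2(X, Y, Z):
    """Maps the camera velocity (V, Omega) to (Xdot, Ydot, Zdot) = -V - Omega x P."""
    return np.array([[-1.0,  0.0,  0.0,  0.0,  -Z,   Y],
                     [ 0.0, -1.0,  0.0,   Z,  0.0,  -X],
                     [ 0.0,  0.0, -1.0,  -Y,   X,  0.0]])

def Jp(u, v, Z, lam):
    """2x6 interaction matrix of a point feature: J_p = J_1 J_2."""
    X, Y = u * Z / lam, v * Z / lam  # back-projection via the pinhole model
    return J1(u, v, Z, lam) @ J2(X, Y, Z)

lam, u, v, Z = 1.0, 0.2, -0.1, 2.0
J = Jp(u, v, Z, lam)

# a pure translation along the projection ray of P lies in the 4-dimensional
# null space of J_p: it does not move the feature on the image
ray_motion = np.array([u * Z / lam, v * Z / lam, Z, 0.0, 0.0, 0.0])

# stacking p point features gives a k x 6 interaction matrix, with k = 2p
J_stack = np.vstack([J, Jp(-0.3, 0.15, 1.5, lam)])
```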
Other examples of interaction matrices
- distance between two point features (u_1, v_1) and (u_2, v_2)

    d = √((u_1 − u_2)² + (v_1 − v_2)²)

    ḋ = (1/d) [u_1 − u_2  v_1 − v_2  u_2 − u_1  v_2 − v_1] (u̇_1, v̇_1, u̇_2, v̇_2)ᵀ

      = (1/d) [u_1 − u_2  v_1 − v_2  u_2 − u_1  v_2 − v_1]
              [J_p1(u_1, v_1, Z_1); J_p2(u_2, v_2, Z_2)] (V, Ω)ᵀ

      = J_d(u_1, u_2, v_1, v_2) (V, Ω)ᵀ

- image moments
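The 1×4 row vector in the expression of ḋ above is just the gradient of d
w.r.t. the stacked feature vector; a quick finite-difference check with
arbitrary (illustrative) feature values and velocities:

```python
import numpy as np

def dist(f):
    """Distance between two image points, f = (u1, v1, u2, v2)."""
    u1, v1, u2, v2 = f
    return np.hypot(u1 - u2, v1 - v2)

def J_dist(f):
    """1x4 Jacobian of the distance w.r.t. the stacked feature vector."""
    u1, v1, u2, v2 = f
    return np.array([u1 - u2, v1 - v2, u2 - u1, v2 - v1]) / dist(f)

f = np.array([0.3, 0.1, -0.2, 0.4])        # arbitrary feature values
fdot = np.array([0.05, -0.02, 0.01, 0.03])  # arbitrary feature velocities

d_dot = J_dist(f) @ fdot  # analytic rate of change of the distance

# finite-difference check of the same directional derivative
eps = 1e-7
d_dot_num = (dist(f + eps * fdot) - dist(f)) / eps
```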
IBVS with straight lines as features
Image Jacobian
- combining the steps involved in the passage from the velocity of the point
  features on the image plane to the joint velocity/feasible velocity
  commands u of the robot, we finally obtain

    ḟ = J_p(f, Z) J_r(q) u = J(f, Z, q) u

- the matrix J(f, Z, q) is called the Image Jacobian and plays the same role
  as a classical robot Jacobian
- we can thus apply one of the many standard kinematic (or even
  dynamics-based) control techniques for robot motion
  - the (controlled) output is the vector of features in the image plane
    (the task space!)
  - the error driving the control law is defined in this task space
Kinematic control for IBVS
- defining the error vector as e = f_d − f, the general choice

    u = J# (ḟ_d + k e) + (I − J# J) u_0

  (minimum norm solution, plus a term in ker J that does not "move" the
  features) will exponentially stabilize the task error to zero (up to
  singularities, limited joint range/field of view, features exiting the
  image plane, ...)
- in the redundant case, the vector u_0 (projected in the null space of the
  Image Jacobian) can be used for optimizing behaviors/criteria
- the error feedback signal depends only on feature values
- there is still a dependence of J on the depths Z of the features
  - use the constant and "known" values at the final desired pose: J(f, Z*, q)
  - or estimate their current values on line using a dynamic observer
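The control law above is straightforward to implement with a pseudoinverse;
the Jacobian, gain, and error values in the example below are illustrative
placeholders, not taken from the slides:

```python
import numpy as np

def ibvs_kinematic_control(J, e, fd_dot, k, u0=None):
    """u = J# (fd_dot + k e) + (I - J# J) u0
    J: k x n Image Jacobian; e = f_d - f: task error;
    fd_dot: feedforward on the desired features; k: scalar gain;
    u0: optional command projected in ker J (redundancy resolution)."""
    J_pinv = np.linalg.pinv(J)
    u = J_pinv @ (fd_dot + k * e)
    if u0 is not None:
        n = J.shape[1]
        u += (np.eye(n) - J_pinv @ J) @ u0  # does not "move" the features
    return u

# illustrative redundant example: 2 features, 3 commands
J = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, -0.2]])
e = np.array([0.1, -0.05])
u = ibvs_kinematic_control(J, e, fd_dot=np.zeros(2), k=2.0,
                           u0=np.array([0.0, 0.0, 1.0]))
```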
Example with NMM
- mobile base (unicycle) + 3R manipulator
- eye-in-hand configuration
- 5 commands
  - linear and angular velocity of the mobile base
  - velocities of the three manipulator joints
Simulation
(video: view from the camera and processed image)
- simulation in Webots
- the current value of Z is supposed to be known
- diagonal task gain matrix k (decoupling!)
- "linear paths" of the features' motion on the image plane
(plot: behavior of the 2 features)
IBVS control with Task Sequencing
- approach: regulate only one (or some) feature at a time, while keeping the
  others "fixed" by motions that are unobservable in the image plane
- Image Jacobians of the single point features (e.g., p = 2):

    ḟ_1 = J_1 u,    ḟ_2 = J_2 u

- the first feature f_1 is regulated during a first phase by using

    u = J_1# k_1 e_1 + (I − J_1# J_1) u_0

- feature f_2 is then regulated by a command in the null space of the
  first task:

    u = (I − J_1# J_1) J_2ᵀ k_2 e_2

- in the first phase there are two (more) degrees of redundancy w.r.t. the
  "complete" case, which can be used, e.g., for improving the robot
  alignment to the target
- if the complete case is not redundant: singularities are typically met
  without Task Sequencing, but possibly prevented with TS ...
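The two phases can be sketched as follows; the single-feature Jacobians and
gains are illustrative, and the phase-2 command uses the null-space-projected
transpose form shown above:

```python
import numpy as np

def ts_phase1(J1, e1, k1, u0):
    """Phase 1: regulate feature 1; u0 acts in the null space of J1."""
    J1p = np.linalg.pinv(J1)
    P1 = np.eye(J1.shape[1]) - J1p @ J1  # projector onto ker J1
    return J1p @ (k1 * e1) + P1 @ u0

def ts_phase2(J1, J2, e2, k2):
    """Phase 2: Jacobian-transpose command for feature 2, projected in the
    null space of the first task so that feature 1 stays "fixed"."""
    J1p = np.linalg.pinv(J1)
    P1 = np.eye(J1.shape[1]) - J1p @ J1
    return P1 @ (J2.T @ (k2 * e2))

# illustrative 2x4 single-point Jacobians (p = 2 features, 4 commands)
J1 = np.array([[1.0, 0.0, 0.3, 0.0],
               [0.0, 1.0, 0.0, 0.3]])
J2 = np.array([[0.5, 0.1, 1.0, 0.0],
               [0.0, 0.5, 0.0, 1.0]])

u_a = ts_phase1(J1, e1=np.array([0.1, -0.2]), k1=2.0,
                u0=np.array([0.0, 0.0, 1.0, 0.0]))
u_b = ts_phase2(J1, J2, e2=np.array([0.2, -0.1]), k2=1.0)
```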
Simulation with TS scheme
mobile base (unicycle) + polar 2R arm: the Image Jacobian is square (4×4)
(video: point feature 1 is regulated in the first phase, point feature 2 in
the second phase; plot: behavior of the 2 features)
- simulation in Webots
- the current value of Z is supposed to be known
Experiments
Magellan (unicycle) + pan-tilt camera (same mobility as a polar 2R robot)
- comparison between the Task Priority (TP) and Task Sequencing (TS) schemes
- both can handle singularities (of the Image Jacobian) that are possibly
  encountered
- camera frame rate = 7 Hz
In many practical cases…
- the uncertainty in a number of relevant data may be large
  - focal length λ (intrinsic camera parameters)
  - hand-eye calibration (extrinsic camera parameters)
  - depth Z of the point features
  - ...
- one can only compute an approximation Ĵ of the Image Jacobian (both in its
  interaction matrix part and in its robot Jacobian part)
- in the closed loop, the error dynamics on the features becomes

    ė = −J Ĵ# K e

- ideal case: J J# = I;  real case: J Ĵ# ≠ I
- it is possible to show that a sufficient condition for local convergence
  of the error to zero is

    J Ĵ# > 0
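A toy numerical illustration of the sufficient condition: when the symmetric
part of J Ĵ# is positive definite, the error driven by ė = −J Ĵ# K e decays.
The small square matrices below are arbitrary examples, not from the slides:

```python
import numpy as np

J = np.array([[1.0, 0.2],
              [0.0, 1.0]])            # "true" (toy, square) Image Jacobian
J_hat = J + np.array([[0.05, -0.10],
                      [0.02,  0.08]])  # rough approximation of J
M = J @ np.linalg.pinv(J_hat)          # closed-loop matrix J Jhat#
K = np.eye(2)                          # task gain matrix

# sufficient condition: the symmetric part of M is positive definite
eigs = np.linalg.eigvalsh(0.5 * (M + M.T))

# forward-Euler simulation of edot = -M K e
e = np.array([1.0, -0.5])
dt = 0.01
for _ in range(2000):
    e = e - dt * (M @ K @ e)
```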
Approximate Image Jacobian
- use a constant Image Jacobian Ĵ(Z*), computed at the desired target s*
  (with a known, fixed depth Z*)

    q̇ = Ĵ#(Z*) K e
An observer of the depth Z
- it is possible to estimate on line the current (possibly time-varying)
  value of the depth Z, for each considered point feature, using a dynamic
  observer
- define x = (u, v, 1/Z)ᵀ and x̂ = (û, v̂, 1/Ẑ)ᵀ as the current and estimated
  state, and y = (u, v)ᵀ as the measured output
- a (nonlinear) observer of x with input u_c = (V, Ω)ᵀ

    x̂̇ = α(x̂, y) u_c + β(x̂, y, u_c)

  guarantees lim_{t→∞} (x̂(t) − x(t)) = 0, provided that
  - the linear velocity of the camera is not zero
  - the linear velocity vector is not aligned with the projection ray of the
    considered point feature
  ⇒ these are persistent excitation conditions (~ observability conditions)
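A minimal simulation sketch of one such observer for a single point feature,
with state x = (u, v, χ), χ = 1/Z: the state dynamics follow from the
point-feature interaction matrix, while the correction terms (the gain k and
the structure of the χ̂ update) are illustrative assumptions, not the
specific α, β of the slides:

```python
import numpy as np

LAM = 1.0  # focal length λ (illustrative)

def state_dot(x, V, W, lam=LAM):
    """Dynamics of x = (u, v, chi), chi = 1/Z, under camera velocity (V, W),
    obtained from the interaction matrix of a point feature."""
    u, v, chi = x
    du = chi * (-lam * V[0] + u * V[2]) \
         + u * v / lam * W[0] - (lam**2 + u**2) / lam * W[1] + v * W[2]
    dv = chi * (-lam * V[1] + v * V[2]) \
         + (lam**2 + v**2) / lam * W[0] - u * v / lam * W[1] - u * W[2]
    dchi = chi**2 * V[2] + chi * (v * W[0] - u * W[1]) / lam
    return np.array([du, dv, dchi])

def observer_step(x_hat, y, V, W, dt, k=5.0, lam=LAM):
    """One Euler step of the observer: a copy of the dynamics evaluated at
    the measured output y = (u, v), plus an output-error correction.
    The residual on (udot, vdot) is proportional to (chi - chi_hat) through
    b = (-lam*Vx + u*Vz, -lam*Vy + v*Vz), which is used to drive chi_hat."""
    u, v = y
    res = y - x_hat[:2]  # measurable output error
    d = state_dot(np.array([u, v, x_hat[2]]), V, W, lam)
    b = np.array([-lam * V[0] + u * V[2], -lam * V[1] + v * V[2]])
    corr = np.array([k * res[0], k * res[1], k * (b @ res)])
    return x_hat + dt * (d + corr)
```

The persistent-excitation conditions show up here as b ≠ 0: if the camera
does not translate, or translates along the projection ray of the feature,
b vanishes and χ̂ receives no correction.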
Block diagram of the observer
(block diagram of the observer; output: the depth estimate Ẑ)
Results on the estimation of 𝑍
real and estimated initial state:
    x(t_0) = (10, −10, 2)ᵀ,   x̂(t_0) = (10, −10, 1)ᵀ
open-loop commands:
    v(t) = (0.1 cos 2πt, 0.5 cos πt),   ω(t) = (0.6 cos (π/2)t, 1)
Simulation of the observer
with stepwise displacements of the target
IBVS control with observer
- the depth observer can be easily integrated on line with any IBVS control
  scheme
(diagram: the error e(t) between s* and the extracted features s(t) feeds
the IBVS control law, which outputs V, Ω; the depth observer provides its
estimate Ẑ to the control law)
Experimental set-up
- visual servoing with a fixed camera on a skid-steering mobile robot
- very rough eye-robot calibration
- unknown depths of the target points (estimated on line): Ẑ → Z ⟹ Ĵ → J
  (setup: camera; 4 target points on a plane)
- "true" initial and final depths:
    Z(t_0) ≅ 4 m (start),   Z_d ≅ 0.9 m
Experiments with the observer
- the same task was executed with five different initializations for the
  depth observers, ranging between 0.9 m (= the true depth in the final
  desired pose) and 16 m (much larger than in the true initial pose)
Visual servoing with Kuka robot
(Kuka experiment: set-up; true and estimated depth; norm of the centering
error of the feature point [in pixels])
Gazing with humanoid head
stereo vision experiment
- gazing with a robotized head (7 joints, with n = 6 independently
  controlled) at a moving target (visual task dimension m = 2), using
  redundancy (to keep a better posture and avoid joint limits) and a
  predictive feedforward