
Visual servoing

François Chaumette
Lagadic group
Inria Rennes Bretagne Atlantique & Irisa
http://www.irisa.fr/lagadic
Where is my lab?
In Rennes, Brittany, France





near Mont Saint Michel





and Saint Malo
2
Experimental platforms available in the lab
Eye-in-hand and eye-to-hand systems

Mobile robot, medical robot

Planned in 2012:
Romeo (Nao's big brother
by Aldebaran)


3
4
ViSP (http://www.irisa.fr/lagadic/visp)
Open-source library for real-time visual tracking and visual servoing

currently available for Linux, Mac OS and Windows with GPL license
written in C++ (~ 300 000 lines of code)
more than 700 downloads for each release (every 6 months)
forum for help and debugging

interface with OpenCV
ROS modules from ViSP for
- eye-to-hand calibration
- 3D model-based tracking

5
A few videos from ViSP
Real time image processing, computer vision, augmented reality






6
A few videos from ViSP
Simulations






7
A few videos from ViSP
Visual servoing






8






9
Target tracking by gaze (pan-tilt) control
[Crétual 1998]









Direct link between the visual features (the target's center of gravity) and the pan-tilt axes
No 3D data at all required in the control scheme
Tries to reduce the tracking errors due to the unknown target motions
Image processing: image motion-based estimation
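The pan-tilt law can be sketched with the rotational part of the classical point interaction matrix; no depth appears, which is why no 3D data is needed. This is a generic numpy illustration, not the actual controller of [Crétual 1998]; the gain and sign conventions are assumptions.

```python
import numpy as np

def pan_tilt_velocity(cog, cog_desired, lam=1.0):
    """Gaze control: drive the target's center of gravity (cog) to a desired
    image location using only tilt (rotation about x) and pan (rotation
    about y). cog is given in normalized image coordinates (x, y)."""
    x, y = cog
    e = np.array(cog) - np.array(cog_desired)
    # Rotational (omega_x, omega_y) columns of the point interaction matrix;
    # note the depth Z does not appear in these columns.
    L = np.array([[x * y, -(1.0 + x * x)],
                  [1.0 + y * y, -x * y]])
    # Classical proportional law: v = -lambda * L^{-1} * e
    return -lam * np.linalg.solve(L, e)

# Target slightly to the right of the image center -> pure pan motion
v = pan_tilt_velocity(cog=(0.1, 0.0), cog_desired=(0.0, 0.0))
```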
10
Navigation of a mobile robot
in indoor environment
Following a wall from the image of a straight line in omnidirectional images
Task redundancy framework:
2 dof controlled by visual servoing
1 dof used to move along the wall
11
6 dof positioning and target tracking task








Using robust 3D model-based visual tracking

12






13






14






15
Stability results






Visual servoing is GAS (globally asymptotically stable) for simple cases:
IBVS for pan-tilt control, or up to 4 dof
PBVS assuming pose estimation is perfect (!)
IBVS is only LAS (locally asymptotically stable) in general for the 6 dof case
but the convergence domain is large in practice

But be careful for very large rotational displacements.





16
Example 0: pose estimation problem






Pose estimation from one image knowing the shape of the object (PnP) is an
inverse problem, sometimes ill-conditioned.





17
calibration target






18






19
Intermediate conclusion






VS is a nonlinear control problem -> potential problems (local minima,
singularities, inadequate robot trajectories, stability difficult to demonstrate)
Recent developments:
Modeling: design the features so that the control problem becomes as linear as
possible
Consider advanced control strategies (predictive control, …)
Coupling path planning and visual servoing
Decompose the problem (task sequencing, switching strategies, …)
Consider a particular application
Consider other vision sensors (structured light, stereo, omni, RGB-D)






20
Selecting adequate visual features

Modeling: Cartesian coordinates of an image point s = (x, y), with depth Z
Interaction:
L_x = [ -1/Z    0     x/Z    xy      -(1+x^2)   y ]
L_y = [  0     -1/Z   y/Z    1+y^2   -xy       -x ]

Modeling: cylindrical coordinates of an image point s = (rho, theta)
Interaction:
L_rho   = [ -cos(theta)/Z   -sin(theta)/Z   rho/Z   (1+rho^2) sin(theta)   -(1+rho^2) cos(theta)   0 ]
L_theta = [ sin(theta)/(rho Z)   -cos(theta)/(rho Z)   0   cos(theta)/rho   sin(theta)/rho   -1 ]
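The classical Cartesian-point interaction matrix and the resulting IBVS law v = -λ L⁺ (s − s*) can be sketched in a few lines of numpy. This is a generic illustration (not ViSP code); the stacking of at least three points for a 6-dof task follows the standard formulation.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Standard interaction matrix of a normalized image point s = (x, y):
    relates s_dot to the camera velocity v = (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, desired, depths, lam=0.5):
    """Classical IBVS law v = -lambda * L^+ * (s - s*), stacking one 2x6
    block per point (at least 3 non-collinear points for a 6-dof task)."""
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

At convergence (s = s*) the computed velocity is exactly zero, as expected from the proportional law.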
21
Visual servoing with 2D moments [Tahri 2005]






Image moments: generic features that can be used on objects of any shape
Adequate combinations of moments can be selected to control the 6 dof (area, cog,
orientation, moment invariants)
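The basic moment features named above (area, cog, orientation) are straightforward to compute from a binary image. This is a minimal numpy sketch of the feature computation only; the decoupled combinations actually selected in [Tahri 2005] are more elaborate.

```python
import numpy as np

def moment_features(img):
    """Moment-based features from a binary image:
    area m00, center of gravity (xg, yg), and orientation theta."""
    rows, cols = np.indices(img.shape)
    m00 = img.sum()                      # area (zeroth-order moment)
    xg = (cols * img).sum() / m00        # cog along x (columns)
    yg = (rows * img).sum() / m00        # cog along y (rows)
    # Centered second-order moments
    mu20 = ((cols - xg) ** 2 * img).sum()
    mu02 = ((rows - yg) ** 2 * img).sum()
    mu11 = ((cols - xg) * (rows - yg) * img).sum()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # orientation
    return m00, xg, yg, theta
```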


22
Modeling with a spherical projection model
[Tatsambon 2009]
Geometric visual features revisited using the spherical projection model
(points, straight lines, ellipses, moments)
Can be used for classical perspective cameras and omnidirectional vision sensors
Nice invariance properties
Selection of optimal features for a sphere (3 dof) and a marked sphere (6 dof):
almost linear and decoupled system, GAS, robustness

23
Visual servoing based on image intensity [Collewet 2011]
Goal: directly using the intensity levels of all pixels as the visual features
Advantages:
no image processing: no feature tracking or matching
excellent positioning accuracy
Problems:
modeling the interaction between the intensity level and the 3D motion
- Lambertian model and Blinn-Phong model to be robust to lighting variations and
specularities
corresponding Lyapunov function highly nonlinear
The scheme is efficient for both textured and non-textured environments
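The interaction between intensity and motion can be sketched under the Lambertian assumption: the brightness-constancy chain rule gives one row of the interaction matrix per pixel from the image gradient and the point interaction matrix. This is a simplified illustration (constant depth, unit focal length, crude pixel normalization), not the actual implementation of [Collewet 2011], where sign conventions and the illumination model are treated more carefully.

```python
import numpy as np

def photometric_ibvs(I, I_star, Z=1.0, lam=1.0):
    """Sketch of intensity-based visual servoing: every pixel intensity is a
    feature. Under the Lambertian assumption I_dot = -grad(I)^T * x_dot, so
    each pixel contributes the row L_I = -(Ix * L_x + Iy * L_y)."""
    h, w = I.shape
    ys, xs = np.indices((h, w))
    x = (xs - w / 2.0).ravel()           # crude normalized coordinates
    y = (ys - h / 2.0).ravel()
    gy, gx = np.gradient(I)              # image gradients (rows, cols)
    gx, gy = gx.ravel(), gy.ravel()
    ones = np.ones_like(x)
    # Rows of the point interaction matrix for each pixel
    Lx = np.stack([-ones / Z, 0 * x, x / Z, x * y, -(1 + x * x), y], axis=1)
    Ly = np.stack([0 * x, -ones / Z, y / Z, 1 + y * y, -x * y, -x], axis=1)
    L = -(gx[:, None] * Lx + gy[:, None] * Ly)       # shape (h*w, 6)
    e = (I - I_star).ravel()
    return -lam * np.linalg.pinv(L) @ e              # camera velocity (6,)
```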

24
Visual servoing using mutual information [Dame 2011]
Still considering the image as a whole
ensuring robustness wrt perturbations (occlusion, light, …)
using various image modalities
Approach
using mutual information (based on the entropy)
modeling the interaction between MI and motion parameters
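The mutual-information measure itself is simple to compute from a joint histogram; the harder part, derived in [Dame 2011], is modeling its interaction with the motion parameters. A minimal sketch of the measure:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information between two images from their joint histogram:
    MI(A, B) = sum_{i,j} p(i,j) * log(p(i,j) / (p(i) * p(j))).
    Unlike SSD, MI only assumes a statistical dependency between the two
    images, which is what makes it robust to occlusions, lighting changes,
    and even different image modalities."""
    pxy, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy /= pxy.sum()                     # joint probability
    px = pxy.sum(axis=1)                 # marginals
    py = pxy.sum(axis=0)
    nz = pxy > 0                         # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())
```

An image is maximally informative about itself, while a constant image carries no information about anything, so MI is high in the first case and zero in the second.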




[Video: alignment with SSD vs. with MI]
25
Path planning in the image [Mezouar 2002]
Dealing with very large displacements
Improving robustness wrt. calibration errors
Introduction of constraints
- in the workspace
- in the image (visibility, occlusion)
- in the joint space (joint limits)
Here, a potential field approach



26
[Videos: servoing with vs. without planning, with vs. without calibration errors, with vs. without constraints]
With planning, s* is no longer constant: s*(t)
Micromanipulation [Tamadazte 2010]
Assembly of MEMS components by sequencing visual tasks









Size: 400 µm x 400 µm
27
Visual servoing for a humanoid robot [Mansard 2007]
Catching a ball while walking by task sequencing and redundancy
Humanoid robot: highly redundant system
Gaze control
Walking planned (for security)
Equilibrium control
Visual tasks and constraints managed by a stack
Remove the appropriate task when needed for ensuring the constraints
Put the task back as soon as possible
28
Autonomous navigation [Diosi 2007, Cherubini 2009]
Classical approach:
teaching: global 3D reconstruction and accurate 3D localization (SLAM)
following a specified 3D trajectory through accurate 3D localization
Approach developed: Accurate localization and mapping not mandatory
teaching: topological description of the environment with key frames
only local 3D reconstruction (point tracking and point transfer)
navigation expressed as visual features to be seen (and not successive poses to be
reached)
simple IBVS for navigation
29
