
PRASHANTH B N

Assistant Professor
Department of Mechanical Engineering
Amrita School of Engineering
Introduction to Robotic Vision
Robotic vision may be defined as the process of extracting,
characterizing and interpreting information from images of a 3-
dimensional world.
Robotic vision (also termed "computer vision" or "machine
vision") is an important sensor technology with potential
applications in many industrial operations.
Robotic vision is primarily concerned with controlling the
manipulator: interpreting the image and using this
information in controlling the robot's operation.
Robotic vision requires two aspects to be addressed:
 Provision for visual input;
 Processing required to productively utilize the visual
information in a computer-based system.
Introduction to Robotic Vision
The greatest virtues of a vision system are:
 Repeatability;
 Accuracy;
 Ability to produce approximately the same results when given
approximately the same inputs.
Robotic Vision Systems
The basic purpose of a robot vision system is to identify an
object and determine its location (position and orientation).
The vision system must be capable of handling multiple views
to deal with the multiple stable states. For this purpose, the
system has to be fast and work in parallel with the robot
system.
Further, the system must be able to work in an industrial
environment including factory lighting and be insensitive to
normal light variation.
A gray scale system is required so that numerous different
shades of gray can be assigned to each point of the image and
the vision system’s performance is not affected by the object’s
form, colour or surface texture.
Functions of Robotic Vision System
The operation of the vision system consists of three functions:
Sensing
The process that yields a visual image of sufficient contrast.
Digitization
Process of converting information into a digital format. Text
and images can be digitized similarly: a scanner captures
an image (which may be an image of text) and converts it to
an image file, such as a bitmap.
Image Processing and Analysis
The digitized image is subjected to image processing and
analysis for data reduction and interpretation of the image.
Functions of Robotic Vision System
Image processing may be further subdivided as follows:
 Pre-processing - Deals with techniques such as noise reduction
and enhancement of details.
 Segmentation – The technique of dividing or partitioning an image
into parts, called segments, since it is often inefficient
to process the whole image at once.
 Description - Deals with the computation of features (e.g., size,
shape) suitable for differentiating one type of object from
another.
 Recognition - Process that identifies these objects (e.g.,
wrench, bolt, engine block).
 Interpretation - Assigns meaning to an ensemble (a group of
items viewed as a whole rather than individually) of recognized
objects.
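To make the flow of these stages concrete, the following is a minimal Python sketch (using OpenCV) of how pre-processing, segmentation and description might be chained; the stage functions and the file name "scene.png" are illustrative assumptions, not taken from this material.

# Minimal sketch of the processing chain described above.
# The stage functions and "scene.png" are illustrative placeholders.
import cv2

def preprocess(gray):
    # Pre-processing: reduce noise before further analysis.
    return cv2.medianBlur(gray, 5)

def segment(gray):
    # Segmentation: separate object pixels from the background (Otsu threshold).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def describe(binary):
    # Description: compute simple features (area, perimeter) for each region.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [(cv2.contourArea(c), cv2.arcLength(c, True)) for c in contours]

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
features = describe(segment(preprocess(gray)))
print(features)  # Recognition and interpretation would act on these features.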
Functions of Robotic Vision System
Application: The current applications of robot vision include the
following:
Guidance - Report the location and orientation of a part. It can
also be used for alignment to other machine vision tools.
Identification - Identify parts by locating a unique pattern or
identify items based on color, shape, or size.
Gauging - Calculates the distances between two or more points
or geometrical locations on an object and determines whether
these measurements meet specifications (see the sketch after this list).
Inspection - Inspection detects defects, contaminants,
functional flaws, and other irregularities in manufactured
products.
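As a simple illustration of gauging, the sketch below (Python) checks the distance between two measured feature points against a nominal dimension and tolerance; the coordinates and tolerance values are made-up example numbers, not data from this material.

# Gauging sketch: compare a measured distance with a nominal value and tolerance.
import math

def gauge(p1, p2, nominal_mm, tol_mm):
    measured = math.dist(p1, p2)                 # Euclidean distance between points
    return measured, abs(measured - nominal_mm) <= tol_mm

# Example with made-up coordinates (in mm): nominal 30 mm, tolerance 0.5 mm.
measured, ok = gauge((12.0, 5.0), (42.0, 5.4), nominal_mm=30.0, tol_mm=0.5)
print(f"measured = {measured:.3f} mm, within spec: {ok}")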
Functions of Robotic Vision System
The various areas of vision processing may be grouped as
follows, depending upon the sophistication involved in their
implementation:
Low level vision – Sensing and processing.
Medium level vision – Segmentation, description and
recognition of individual objects.
High level vision – Interpretation.
Components of Vision System
A complete vision system consists of hardware and software for
performing the functions of sensing and processing image (the
scene) and utilizing the results obtained to command the robot.

Figure: Components of Machine Vision System
Components of Vision System
Camera-Illumination
Process of imaging
In the basic process of imaging the light source illuminates the
object and camera captures the reflected light.
The image formed in the camera is converted into an analog signal
(voltage) with the help of suitable transducers. Finally, the
analog voltages are digitized and converted into a numerical
array.
The array is the image to be processed and interpreted by the
computer according to predefined algorithms.
 The image presented to the camera is light reflected from the
environment. This varies in wavelength and intensity throughout
the image and is directly dependent on the illumination of the
scene or lighting.
Components of Vision System
Camera-Illumination
 Poor lighting produces low-contrast images, shadows, and
noise. In a 2-D vision system, the desired contrast can often be
achieved by using a controlled lighting system; a 3-D
system may require a more sophisticated lighting arrangement.
 In a single-camera system, triangulation is used to detect
shape and depth. Some systems use two cameras, each giving a 2-D
image, to obtain a stereoscopic view of the scene (see the depth
sketch after this list).
 Robotic vision cameras are essentially optoelectronic
transducers, which convert an optical input signal into an
electrical output signal. A variety of camera technologies is
available for imaging:
• Black-and-white vidicon tubes; solid-state cameras based on
Charge-Coupled Devices (CCD) and Charge-Injection
Devices (CID); silicon bipolar sensor cameras.
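For the single- and two-camera depth measurement mentioned above, the standard pinhole-camera triangulation relation is Z = f * B / d (focal length times baseline, divided by disparity). The short Python sketch below applies it; the focal length, baseline and disparity values are illustrative assumptions.

# Depth from stereo triangulation for two parallel cameras: Z = f * B / d.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    return focal_px * baseline_m / disparity_px   # depth in metres

# A feature at x = 640 px in the left image and x = 610 px in the right image
# has a disparity of 30 px.
print(depth_from_disparity(focal_px=800.0, baseline_m=0.12, disparity_px=30.0))  # 3.2 m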
Components of Vision System
Camera-Illumination
 “Vidicons” are the most common tube-type cameras.
 The image is focused on a photosensitive surface where a
corresponding electrical signal is produced by an electron
beam scanning the photosensitive surface.
 The electron beam passes easily through the photosensor
at points made highly conductive by very intense light.
 Fewer electrons pass through the photosensor where lower
light levels have made it less conductive.
 Scanning the electron beam carefully across the entire
surface produces electrical information about the entire
image.
Components of Vision System
Camera-Illumination
Illumination techniques:
It is of significant importance to have a well-designed
illumination system, since such a system minimises the
complexity of the resulting image while enhancing the information
required for object detection and extraction.
In robot vision, the following two basic illumination
techniques are used.
 Front light source
 Back light source
Components of Vision System
Camera-Illumination
 Front light source
 Front illumination: The feature of the image is defined by
the surface flooded by the light.
 Light field specular illumination: Used for recognition of
surface defects with light background.
 Dark field specular illumination: Used for recognition of
surface defects in dark background.
 Front images: Superimposition of imaged light on object
surface.
 Back light source
 Light-field rear illumination: Used for simple measurement
and inspection of parts.
Components of Vision System
Camera-Illumination
 Condensed rear illumination: Produces high-contrast images
at high magnification.
 Rear illumination collimator: A parallel light-ray source is
produced so that objects in the same plane are featured.
 Offset rear illumination: Highlights the object features
in a transparent medium.
The basic types of lighting devices in robot vision may
be grouped into the following categories:
 Diffuse surface devices
 Flood or spot projectors
 Imagers
 Condenser projectors
 Collimators
Components of Vision System
Camera-Illumination
Figure shows four of the principal schemes used for
illuminating a robot work space.
(a) The Diffuse-lighting approach can be employed for
objects characterized by smooth, regular surfaces.
(b) Backlighting produces a black-and-white (binary)
image.
(c) The Structured-lighting approach consists of
projecting points, stripes, or grids onto the work surface.
(d) The Directional-lighting approach is useful primarily
for inspection of object surfaces.
Four Basic Illumination Schemes
Components of Vision System
A/D (Analog-to-Digital) Converter and Frame Grabber
A/D converter is required to convert analog picture signal
from the camera into digital form that is suitable for computer
processing.
 The analog voltage signal from the camera is sampled
periodically at an appropriate sampling rate.
 Each sampled voltage is approximated to a predefined voltage
amplitude (a quantization level).
 The accuracy of this approximation depends on the resolution
(number of quantization levels) of the A/D converter.
 The quantized voltage is encoded into a digital code
represented by a binary number.
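The sampling and quantization steps above can be sketched in a few lines of Python (NumPy); the sine-wave "camera signal", the 64-sample scan line and the 8-bit resolution are illustrative assumptions.

# A/D conversion sketch: sample an analog voltage and quantize each sample
# to one of 2**bits discrete levels, giving a binary code per sample.
import numpy as np

def quantize(samples, v_min=0.0, v_max=1.0, bits=8):
    levels = 2 ** bits
    step = (v_max - v_min) / (levels - 1)
    codes = np.round((samples - v_min) / step).astype(int)   # digital codes
    return np.clip(codes, 0, levels - 1)

t = np.linspace(0.0, 1.0, 64)                    # 64 samples across one scan line
analog = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * t)   # stand-in analog signal (0..1 V)
print(quantize(analog, bits=8)[:8])              # first eight 8-bit codes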
Components of Vision System
A/D (Analog-to-Digital) Converter and Frame Grabber
 Invariably, the A/D converter is part of either the digital
camera or the front end of the frame grabber.
 The frame grabber (a hardware device) is an image
storage and computation device which stores a given pixel
array.
 Frame grabbers vary in capability from those that simply store
an image to more powerful ones in which thresholding,
windowing and calculations for histogram modification
can be carried out under computer control.
 The stored image is subsequently processed and analyzed by
the combination of the frame grabber and the vision
controller.
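Two of the frame-grabber operations mentioned above, windowing and histogram calculation, are sketched below on a stored pixel array; the random array merely stands in for a captured frame, and the window coordinates are arbitrary.

# Frame-grabber style operations on a stored pixel array.
import numpy as np

frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)  # stand-in frame

# Windowing: extract a 100 x 100 region of interest starting at row 50, column 200.
window = frame[50:150, 200:300]

# Histogram: count how many pixels fall into each of the 256 gray levels.
hist, _ = np.histogram(window, bins=256, range=(0, 256))
print(hist.argmax())   # most frequent gray level inside the window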
Components of Vision System
Image Processing
The visual information is voluminous, so its processing is slow.
The acquired image is preprocessed to filter out noise, retain only
the useful information, and enhance the details.
The digitized image is then segmented, a process that partitions
the image into objects of interest.
The digitized and preprocessed image matrix for each frame is
stored in the memory and then subjected to processing as per
the image processing algorithms for computation of features
such as size, shape, etc., called image description.
The preprocessed image frame is stored in computer memory
for further processing.
Components of Vision System
Image Processing
Image Improvement
During image acquisition, the captured image may contain
shadows, noise, distortions, and other imperfections as a result
of sampling, transmission, improper lighting, or disturbance in
the environment.
A distorted, poor-quality image may also be produced by faulty
equipment or its incorrect use.
The image enhancement process is employed to obtain a
second binary or gray-scale image of much improved quality.
Distortions in the image may be categorized as:
 Dimensional distortions due to imperfections in the camera
lens; or
Components of Vision System
Image Processing
 Brightness distortions due to improper illumination;
 Overlapping frames or incomplete neutralization of
photodetectors in camera produces ghosting.
 A moving object may produce blurring of the image.
 Camera may be poorly focused.
In such cases as above, image enhancement is necessary
before processing it to extract the correct information from
the image.
Some of the techniques used to improve the quality of the image are:
(i) Segmentation; (ii) Smoothing, etc.
Components of Vision System
Image Processing
Segmentation
Process of identifying a group of related pixels for locating
connected regions or areas of the image having similar
characteristics.
Process divides the image into its constituent parts.
Segmentation algorithms are generally based on one of the two
basic principles.
(a) Similarity: Approaches based on this principle include:
- Thresholding - Region growing
(b) Discontinuity: Approaches based on this principle include:
- Edge detection

Segmentation concepts are applicable to both static and
dynamic (or time-varying) scenes.
Components of Vision System
Image Processing
Thresholding:
 A binary conversion technique in which each pixel is converted
into a binary value, either black or white. It is accomplished
by using a frequency histogram of the image and establishing
what intensity (gray level) is to be the border between black and
white.
 The method is used by many commercially available robot vision
systems these days.
 Global thresholds find applications in situations where there is
a clear distinction between objects and the background and
where illumination is relatively uniform.
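The following Python/OpenCV sketch illustrates global thresholding as described above; Otsu's method is used here to choose the border intensity from the gray-level histogram automatically, and "part.png" is a placeholder file name.

# Global thresholding sketch: every pixel becomes black or white.
import cv2

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method analyses the gray-level histogram and picks the threshold that
# best separates object and background intensities.
threshold, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("chosen threshold:", threshold)   # pixels above become white, below black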
Components of Vision System
Image Processing
Region growing:
 Region growing is a technique that groups pixels having similar
attributes into regions; it is also called pixel aggregation.
 The raw gray-scale image is scanned and a region is grown by
appending or connecting neighbouring pixels that have the
same property, say gray level, texture, colour, etc.
 Each region is labelled with a unique integer number.
 Region oriented segmentation process must meet the following
conditions:
 Segmentation must be complete (Every pixel must be in a
region)
 The points in a region must be connected
 The regions must be disjoint.
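A minimal Python sketch of region growing by pixel aggregation is given below; the 4-connected neighbourhood, the gray-level tolerance and the tiny test image are illustrative choices.

# Region growing sketch: starting from a seed pixel, neighbouring pixels whose
# gray level is within a tolerance of the seed are appended to the region.
import numpy as np
from collections import deque

def grow_region(image, seed, tol=10):
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    seed_value = int(image[seed])
    queue = deque([seed])
    region[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-connected neighbours
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not region[nr, nc]:
                if abs(int(image[nr, nc]) - seed_value) <= tol:
                    region[nr, nc] = True
                    queue.append((nr, nc))
    return region   # boolean mask of the grown (connected) region

img = np.full((5, 5), 200, dtype=np.uint8)
img[1:4, 1:4] = 50                             # a dark square on a bright background
print(grow_region(img, seed=(2, 2)).sum())     # 9 pixels in the dark region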
Components of Vision System
Image Processing
Edge detection:
 Finding the outline (boundary) of an object within an image is
equivalent to identifying the edges that separate the object
from its background.
 Algorithms that identify whether an image pixel lies on the edge
of an object or not are known as "edge detection algorithms".
 A common method is based on the intensity change or intensity
discontinuity that occurs in adjacent pixels at the boundary or
edge of an object.
 The idea underlying most edge detection algorithms is the
computation of the local gradient of image intensity.
 The magnitude of the first derivative of the intensity function can
be used to detect edges in the image.
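A short Python/OpenCV sketch of gradient-based edge detection follows; the Sobel operators estimate the local intensity gradient, and the file name and the threshold of 100 are illustrative assumptions.

# Gradient-based edge detection sketch using Sobel operators.
import cv2
import numpy as np

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # horizontal intensity change
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # vertical intensity change
magnitude = np.hypot(gx, gy)                      # local gradient magnitude

edges = magnitude > 100                           # simple edge / no-edge decision
print("edge pixels:", int(edges.sum()))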
Components of Vision System
Image Processing
Smoothing (or noise reduction)
Smoothing operations are used to improve the quality of the
image by reducing noise and other spurious effects that are
introduced in an image as a result of sampling, quantization,
transmission or disturbances in the environment during image
acquisition and digitizing.
Image intensity is modified using local techniques based on the
assumption that the value of a pixel is, in some sense,
similar to that of its neighbours.
Neighbourhood averaging is one of the several techniques of
smoothing.
The main disadvantage is that smoothing blurs edges and other
sharp details, but blurring can be reduced by the use of median
filters.
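The two smoothing operations mentioned above are sketched below in Python/OpenCV; "part.png" and the 3x3 window size are illustrative assumptions.

# Smoothing sketch: neighbourhood averaging (blurs edges) versus median
# filtering (suppresses impulse noise while preserving edges better).
import cv2

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

averaged = cv2.blur(gray, (3, 3))    # neighbourhood averaging over a 3x3 window
median = cv2.medianBlur(gray, 3)     # median of each 3x3 neighbourhood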
Components of Vision System
Image Processing
Object descriptors
In order to identify a given object in different images,
characteristics like shape, size, perimeter etc. are computed
from the image.
The extracted geometric features of an image are known as
“object descriptors”.
Several descriptors are obtained simultaneously for a given
object, and the object may then be identified with a good
statistical confidence level.
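The sketch below computes a few such descriptors (area, perimeter and a dimensionless compactness measure) from a thresholded binary image using Python/OpenCV; the file name is a placeholder.

# Object descriptor sketch: area, perimeter and compactness per detected region.
import cv2
import math

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    area = cv2.contourArea(contour)              # size descriptor (pixel units)
    perimeter = cv2.arcLength(contour, True)     # boundary length (pixel units)
    if area > 0:
        compactness = perimeter ** 2 / (4 * math.pi * area)   # 1.0 for a circle
        print(area, perimeter, round(compactness, 2))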
Components of Vision System
Image Processing
Object recognition
Object recognition deals with unique identification of each
object in the image.
Object recognition algorithms for robotic applications
should not only be powerful enough to uniquely identify
the object but also fast enough to match the work cycle.
A simple approach to object recognition is image comparison.
The images of known objects are stored in the computer and
the regions identified in the image are compared with these
to recognize the parts in the image; the method is called
"template matching".
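Template matching can be sketched in a few lines of Python/OpenCV: the stored image of a known part is slid over the scene and a normalized correlation score is computed at every position. The file names and the 0.8 acceptance threshold are illustrative assumptions.

# Template matching sketch: find the best correlation between a stored
# template of a known part and the current scene image.
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("wrench_template.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)   # best score and its (x, y)

if best_score > 0.8:                                 # illustrative acceptance threshold
    print("part recognized at", best_loc, "with score", round(best_score, 2))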
Components of Vision System
Image Representation
The primary requirement of a vision system is to generate an
appropriate method of representing image data.
The method should provide for convenient and efficient
storage and processing by computer and it should
encapsulate all the information, which defines important
characteristics of the scene in the image.
For computer-based processing, a digital form of the image
is required.
A suitable approximation of the intensity function I(x, y) is
made for convenience of representation and processing.
The approximation is known as “digitization” and is carried
out in two stages:
 Spatial digitization
 Amplitude digitization
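The two digitization stages can be illustrated with the short NumPy sketch below; the particular intensity function I(x, y), the 64 x 64 grid and the 8-bit depth are illustrative assumptions.

# Digitization sketch: spatial sampling of I(x, y) on a pixel grid, followed by
# amplitude quantization of each sample to 256 gray levels (8 bits).
import numpy as np

def intensity(x, y):
    # Stand-in for the continuous scene intensity I(x, y), in the range 0..1.
    return 0.5 + 0.5 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

# Spatial digitization: sample I(x, y) on a 64 x 64 pixel grid.
xs, ys = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
sampled = intensity(xs, ys)

# Amplitude digitization: quantize each sample to 256 gray levels.
digital_image = np.clip(np.round(sampled * 255), 0, 255).astype(np.uint8)
print(digital_image.shape, digital_image.dtype)   # (64, 64) uint8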
Advantages of Machine Vision
Flexibility
 A wide variety of measurements can be made by vision
systems.
 When the applications change, the software can be easily
modified or upgraded to accommodate new requirements.
Cost effectiveness
 Machine vision systems are becoming increasingly cost
effective, with the price of computer processing dropping
rapidly.
Precision
 Possible to measure dimensions to one part in a thousand or
better by a well-designed vision system.
Advantages of Machine Vision
 Since the measurements do not require contact,
there is no wear or danger to delicate components.
Consistency
 Elimination of operational variability (since vision
systems are not prone to the fatigue suffered by human
operators).
 Multiple systems can be configured to produce identical
results.
Applications of Machine Vision
Industrial manufacture on large scale.
Manufacture of short-run unique objects.
Retail automation.
Quality control and refinement of food products
Monitoring of agriculture production
Medical remote examination and procedures
Medical imaging processes
Safety systems in industrial environments
Automated monitoring of sites for security and safety
Consumer equipment control
Inspection of pre-manufactured objects
Control of Automated Guided Vehicles (AGVs)
Industrial Applications of Vision-
Controlled Robotic Systems
The effective use of a vision-controlled robotic system makes assembly,
quality control, parts handling, and classification tasks more robust.
By using a single camera, it is possible to track multiple
objects in visually cluttered environments.
A vision-controlled robotic system can be deployed for a
number of different applications, as mentioned below:
 Presence
 Several sensors such as proximity or touch sensors can be
used to find the presence or absence of a part at a specific
location (say on a conveyor belt or in a bin).
 However, the visual detection of presence combined with
other applications gives much more accurate and versatile
information.
Industrial Applications of Vision-
Controlled Robotic Systems
 Object location
 Accurate coordinate assessment is used for tracking the motion
of the manipulator, end-effector, objects, or obstacles.
 Sometimes special identification markings are made on
the part.
 Object identification
 Part of the sorting process; it involves determination of the
location of parts and pick-and-place tasks to effect their
physical movement.
Industrial Applications of Vision-
Controlled Robotic Systems
 Pick and place
 Possible to guide the manipulator to pick a part from a
specific location after its presence has been detected or
from any imaged location in the workcell and place it at
the desired location.
 The gripper can be oriented according to the orientation of
the part so as to hold the part properly.
 Visual inspection
 Based on extracting specific quantitative measurements
of desired parameters from an image.
 The robot employed in visual inspection can do sorting of
good and bad parts and also control the manufacturing
process based on measured values.
Industrial Applications of Vision-
Controlled Robotic Systems
 Visual guidance
 The image of the scene can be used for accurate
specification of relative positions of the manipulator and
the part in the scene as well as their relative movements.
 For example:
(i) Assembly operation, which requires accurate position
and orientation control of two parts to be fitted together.
(ii) Guiding the motion of the manipulator through
stationary or mobile obstacles in the environment of the
robot.
