
Instrumentation for Robotics and Automation

EXPERIMENT NO: 2

Title: Robot Vision


Aim: To study robot vision techniques.
Objectives:
1. To identify the different robot vision techniques.
2. To understand the working of robot vision techniques.
3. To understand advanced vision techniques available in the market.

Theory:
What do you mean by Robot Vision?

Robot Vision, or Robotic Vision, is the method of processing, characterizing, and
decoding data from images. It enables vision-based robot arm guidance, dynamic
inspection, and enhanced identification and component-positioning capability.
The robot is programmed through an algorithm, and a camera, either fixed on the
robot or in a fixed location, captures pictures of each workpiece with which it
interacts.

The field of robotic vision was created in the 1980s and 1990s, when engineers
devised methods for teaching a robot to see. A piece is rejected if it does not match
the programmed pattern, and the robot does not handle it. Robot vision is most
commonly applied in material handling and selection applications in the packing
industry, in pick-and-place, deburring, grinding, and other industrial processes.


Vision-Guided Robotic Systems

The demand for vision-guided robotics (VGR) is expected to grow considerably, as
it provides more intelligent and quicker 3D measurement and guidance support. It
can also help address an aging population and rising labor costs by performing tasks
much as human workers do.

Vision and Robotics

Robotic vision is one of the most recent advancements in robotics and automation.
In essence, robotic vision is a sophisticated technology that aids a robot, usually an
autonomous robot, in better recognizing items, navigating, finding objects,
inspecting, and handling parts or pieces before performing an application. The
robotic vision mechanism consists of two basic steps:

Imaging:

Scanning or “reading” is done by the robot using its vision technology. This ranges
from scanning 2D features such as lines and barcodes to 3D and X-ray imaging for
inspection purposes.

Image Processing:

The robot “thinks about” the entity or image after it has been detected. This
processing includes identifying the image’s edges, detecting the existence of
interruptions, pixel counting, manipulating objects as required, pattern recognition,
and processing the data according to its program.
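
As an illustration of two of these operations, edge identification and pixel counting, here is a minimal sketch using OpenCV; the file name "part.png" and the Canny threshold values are assumptions for the example, not part of any specific robot's program.

```python
# Minimal sketch of edge identification and pixel counting with OpenCV.
# "part.png" and the Canny thresholds are illustrative assumptions.
import cv2

image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # load a workpiece image

edges = cv2.Canny(image, 100, 200)         # identify the image's edges
edge_pixels = cv2.countNonZero(edges)      # pixel counting on the edge map
print(f"Edge pixels detected: {edge_pixels}")
```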

Every Robotic Vision System works under the following six-step architecture.


• Sensing – process that yields a visual image.
• Pre-processing – noise reduction and enhancement of details.
• Segmentation – partitioning of an image into objects of interest.
• Description – computation of features to differentiate objects.
• Recognition – process to identify objects.
• Interpretation – assigning meaning to a group of objects.
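
The snippet below is one illustrative way these six steps might map onto OpenCV calls; the file name "scene.png" and the area-based recognition rule are assumptions for the sketch, not a prescribed implementation.

```python
# Illustrative mapping of the six-step architecture onto OpenCV calls.
# "scene.png" and the area-based recognition rule are assumptions.
import cv2

frame = cv2.imread("scene.png")                      # 1. Sensing: acquire an image

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)             # 2. Pre-processing: reduce noise

_, mask = cv2.threshold(gray, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # 3. Segmentation

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)                        # 4. Description: compute a feature
    label = "part" if area > 500 else "noise"        # 5. Recognition: classify
    print(label, area)                               # 6. Interpretation: act on result
```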

Fig no 1: Robotic Vision System Block Diagram

Robot Vision Applications

Without a vision system, robots are static and limited to executing pre-determined
paths in highly regulated settings. A robotic vision system’s fundamental goal is to
allow slight variations from pre-programmed paths while keeping output flowing.

Robots can account for variables in their work environment if they have a sound
vision system. Parts do not have to be presented in the same order. And when
conducting in-process inspection operations, the robot can ensure it is performing
the task correctly. When industrial robots are fitted with sophisticated vision systems, they

Department of Robotics and automation Engineering, BVDUCOE, Pune Page 3


Instrumentation for Robotics and automation

become even more dynamic. The primary motivation for the application of robotic
vision systems is flexibility.

Robots with robotic vision can perform a variety of activities, including:

• Taking measurements
• Scanning and reading barcodes
• Inspecting engine parts
• Inspecting packaging
• Assessing the consistency of wood
• Examining surfaces
• Directing and verifying the orientation of modules and parts
• Inspecting for defects
Without robotic vision, robots are blind machines that move according to their
programming. They rigidly follow the code that dictates their functions, making
them ideal for repetitive tasks that can be physically draining and challenging for
people. Now, at the advent of Industry 4.0, robots are also evolving, which will
allow them to keep up with the demands and trends of the fourth industrial
revolution.

Central to this evolution of robotics is the creation of robot vision systems for
collaborative robots. Machine or robot vision is a key feature of this evolution,
introducing new levels of precision and accuracy in smart automated processes.
Vision systems help robots perform tasks such as inspecting, identifying, counting,
measuring, or reading barcodes. Ultra-high-speed imaging and lens quality
facilitate multiple operations in one process. Machine learning is also being applied
to robotics, teaching collaborative robots to perform new tasks based on data
patterns. It gives vision robots sophisticated search and corrective-movement
skills, such as the elimination of overlaps, distortions, or misalignments.

Vision systems,
however, are also useful in non-robotic functions. They can be placed at critical
production-floor locations, such as conveyor lines, to aid product quality-control
reviews. At present, most robot vision systems are used for materials handling and
removal. See examples of these varied robot vision applications at Techman Robot.

Giving robots the ability to see is a game-changer. The ability to perceive their
immediate surroundings significantly enhances robot capability, which in turn
benefits human workers, companies, and industries at large. We understood this
early on at Techman Robot, motivating us to integrate built-in vision systems into
our TM Robot Series of collaborative robots.

Why Should Manufacturers Use Robot Vision Systems?

The average robot can be programmed to execute tasks quite effectively with
minimal supervision. This has, in fact, been the case in most factories for a while
now. So, you may be wondering: is it necessary to integrate robot vision systems
into your robots? Do they add any value to your robot operations? The brief answer
is yes. Here are some of the ways your industrial robots would be improved by
robot vision systems.
Accuracy – consider how well you would execute a task blindfolded versus how you
would perform with full vision. The latter is better, isn’t it? The same applies to
robots. While they can rely on sensors to perform tasks, a vision system makes them
far more accurate in how they handle parts and work on them.
Safety – visual capability has been found to be highly effective in making robots
safer for humans to work with. The combination of a robotic vision system and safety
sensors provides double assurance that the robot will halt or slow down when it
senses or sees an obstacle in its path.
Higher operational efficiency – when a robot is fitted with a robot vision system, it
is more adaptive. Take, for instance, a delta robot tasked with sorting objects rolling
on a conveyor belt. Its vision system perceives the objects and the rate at which they
pass through the system. As the controller interprets this visual data, it signals the
robot arm to work at a speed that matches that of the conveyor belt. There is no need
to start and stop the belt for the robot to keep up, and this improves efficiency.
Improved cost efficiency – when operational efficiency improves, the cost of
operation also goes down. Additionally, a decreased need for supervision and for
extra inputs, such as positioning pallets to support robot accuracy, cuts down on
costs.
Higher value – robots that have robotic vision systems can do more, which gives you
more value out of the robot. Whereas you may have used a SCARA robot only to
assemble parts, a vision system can be configured to perform quality inspection after
assembly.
How Does Robotic Vision Work?

At least one robot vision camera will be mounted on the robotic arm itself, literally
serving as the eye of the machine. In some cases, additional cameras are installed in
strategic locations in the cobot’s working cell. This set-up allows the camera to have
a wider visual angle and capture as much visual data as it needs to perform its
function in collaboration with human workers.


Fig no 2: Robot vision

Before the machine is deployed, it is first programmed and taught to identify the
objects with which it should interact. The cobot’s camera takes 2D or 3D scans of
the object. The image is then stored in the cobot’s database and set up to trigger the
machine to move and perform specific tasks.
Once the programming is complete, the cobot can finally be installed on the
assembly line.

There are three segments in a robot vision system:

Image capture
The cameras capture footage of the objects that enter the cobot’s workspace. If the
set-up is halfway through the assembly line, there is a good chance that conveyor
systems will deliver the products directly before the cobots. The camera(s) will start
capturing visual data from a calculated distance. Afterward, the machine will
analyze the images or footage and enhance them to produce a clear picture.
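
As a hedged sketch of this capture stage, the snippet below grabs one frame from a camera and enhances its contrast; the camera index 0 and the use of histogram equalization are assumptions for the example, not the cobot's actual pipeline.

```python
# Minimal capture-and-enhance sketch; camera index 0 and histogram
# equalization are illustrative choices, not a specific cobot's pipeline.
import cv2

cap = cv2.VideoCapture(0)       # open the default camera
ok, frame = cap.read()          # grab one frame of the workspace
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    clear = cv2.equalizeHist(gray)       # enhance contrast for a clearer picture
    cv2.imwrite("capture.png", clear)    # hand the image to the processing stage
```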


Image processing
The picture goes through further processing and is analyzed pixel by pixel. The
system compares the colors and apparent shape of the object with the image
programmed into its database.
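
One way such a comparison could be done is template matching against a stored reference image; in this sketch, "capture.png" and "reference.png" are hypothetical files standing in for the live image and the database entry.

```python
# Pixel-level comparison via template matching; "reference.png" stands in
# for the pre-programmed database image (an assumption for this sketch).
import cv2

scene = cv2.imread("capture.png", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(scene, reference, cv2.TM_CCOEFF_NORMED)
_, score, _, location = cv2.minMaxLoc(result)   # best match score and position
print(f"Match score {score:.2f} at {location}")
```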
Connectivity and response
Once the machine recognizes that the object in the picture matches the pre-
programmed image, it performs a corresponding action on the object before it.
This entire process happens in quick succession, within seconds. To put things into
context, one good example of this process is a robot vision system for the icing and
decorating line of a cake factory. Two custom cobot arms are stationed at either end
of this assembly line: one spreads buttercream frosting on the entire cake, while the
other pipes complex icing designs on top. A conveyor belt brings in the cakes and
gets them frosted first before conveying them to the decorating arm.
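
Continuing the template-matching sketch above, the response stage could be as simple as comparing the match score against a threshold and triggering an action; MATCH_THRESHOLD and the trigger_action() helper are hypothetical, not part of any real controller API.

```python
# Hedged sketch of the match-then-act logic, continuing from the template-
# matching example; MATCH_THRESHOLD and trigger_action() are hypothetical.
MATCH_THRESHOLD = 0.8

def trigger_action(location):
    """Placeholder for the robot command, e.g. frost the cake at this spot."""
    print(f"Perform action at {location}")

if score >= MATCH_THRESHOLD:    # object matches the pre-programmed image
    trigger_action(location)    # perform the corresponding action
else:
    print("No match: skip this object")
```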

Fig no 3: Advanced vision camera


Both arms have cameras (eyes) that scan each cake that comes before them. Every
time the frosting arm registers an un-iced cake, it releases its icing nozzle and
scraper, which spread the buttercream evenly over the top and sides of the cake. The
same goes for the decorating arm: the sight of an iced cake whose frosting matches
the image of the model cake programmed into its system triggers the machine to
create the corresponding icing design. In this scenario, the cake factory can mass-
produce cakes with impressive speed and precision. The cobots in this factory are
recipe-driven, which means the bakery can easily change cake designs or even
produce more than one cake style from a single decorating line.

The Advantages of Robot Vision

The above is a basic demonstration of how cobots enhanced with vision systems can
fit into an assembly line. The example also gives us a clear idea of their benefits:

Increases efficiency.
A robot vision camera can help capture images for your own AI model training, and
the image data can be collected automatically. The trained model can then be used
in factories for robots to better identify various kinds of product defects.

Ensures product consistency.


Human workers are capable of icing a cake, but it would be unrealistic to expect
anyone to be consistently precise for six to eight hours straight. This is an example
of a repetitive task for which robots are an excellent replacement. And with robot
vision, factories can make these assembly lines more flexible and take advantage of
the visual-recognition feature: they can produce more by allowing cobots to react to
the variables that enter their fields of vision.


Improves reliability.
Vision-guided robots are more reliable than their non-seeing counterparts. They
don’t function blindly, after all. If something vastly different from the 2D or 3D
images programmed into the system crosses their path, they can be made to skip
that object and move on to the ones that pass this first level of quality control. In this
respect, factories can make more efficient use of their raw materials or parts in the
assembly line.

Promotes a safe workplace.


Traditional automatic machines move in directions and at speeds set by their
programming. Put a human worker in their path, and they will keep going regardless
of whether the worker gets out of the way. Cobots with robot vision AI, on the other
hand, have sensors that detect obstructions along their pre-programmed paths and
movements. They stop as soon as their sensors register an object in the way instead
of powering on at full speed. With vision-guided cobots, factories can significantly
reduce the risk of on-site accidents.

Reduces operating costs.


Cobots with robot vision systems constitute a major investment. Although returns
on any form of investment are never guaranteed, enterprises can expect gains such
as time and resources saved, higher production rates, better and more consistent
product quality, and better-rested employees. In time, the returns can manifest as
lower operating costs and increased sales. As a creator, designer, and innovator of
collaborative robots and application software, we have witnessed clients reap these
benefits after integrating vision-guided cobots into their factories. These companies
come from the industrial and manufacturing sectors as well as from industries where
the need for automation is moderate but highly beneficial. Explore the possibilities for
growth that robot vision software can bring to your business. The earlier you do it,
the bigger your head start against your competitors.

What Is a CCD (Charge-Coupled Device)?

A CCD, or charge-coupled device, is an electronic sensor that converts light to
digital signals through charges generated when photons strike a thin silicon wafer.
CCDs were the gold standard for camera sensors from the early 1980s until the late
2000s. Around 2010, CMOS sensors gained significant technological innovations
that made them cheaper to manufacture as a system on a chip (SoC) while offering
image quality comparable to a CCD sensor. Since CMOS gained popularity, it has
become rare to see CCD sensors in smartphones and cameras. However, CCD
sensors aren't exactly obsolete. Although they may have been phased out of the
consumer camera market, CCD sensors are still the preferred sensor in certain areas
of photography.

Applications of CCD Technology in Photography

Aside from being expensive to manufacture, CCDs had other problems that caused
them to be phased out of the consumer market. These include a high power
requirement, as much as 100 times what a CMOS sensor uses, and slow image
processing, which is a problem when taking photos in bursts or shooting video.
Despite these disadvantages, CCDs still thrive in various industrial and scientific
applications that need machine vision, because CCDs provide the higher-quality,
low-noise images these areas of specialized photography require. In addition, the
cost of buying and operating CCD cameras isn't really a problem for well-funded
institutions and businesses. So what exactly are these specialized areas of
photography that still use CCDs? Let's find out below.


Optical Microscopy

CCDs are used in various microscopy applications to observe food, chemistry,
engineering, and other subjects where clear visuals of microscopic objects are
necessary. A CCD is chosen for optical microscopy because it can record
microscopic objects with high sensitivity and a low noise ratio.

Space Photography

Taking photos of space is best done with CCD cameras, because CCD sensors have
the highest quantum efficiencies, resulting in low noise, high dynamic range, and
better uniformity, all critical aspects of space photography.

Near-Infrared Imaging

CCDs are used in various industrial imaging applications, one of which is near-
infrared imaging. A sensor needs highly efficient photon absorption to do near-
infrared imaging, as infrared photons are harder to detect than photons in the visible
range. Since CCDs are highly sensitive and can capture infrared photons well, they
are routinely used in these applications. CCDs thrive in the scientific, industrial, and
medical photography space primarily because of their high quantum efficiencies,
low-noise imagery, and high level of uniformity. But how exactly do CCD sensors
provide such qualities? You first need to learn how CCD sensors work to understand
this better.

How Does a CCD System Work?

The CCD is just one of various types of camera sensor. Like other camera sensors,
CCDs capture light and convert it into digital signals, which are then processed and
displayed as pixels when viewed on an electronic display such as a
monitor. Although all imaging sensors have the same task of converting analog light
into digital signals, the process each sensor type uses to achieve it differs. For a CCD
sensor to capture an image, it goes through a five-step process: light-to-charge
conversion, charge accumulation, charge transfer, charge-to-voltage conversion, and
signal amplification. Let's go through the process step by step:

Step 1: Light-to-Charge Conversion

A CCD sensor captures light when photons (packets of light energy) strike a thin
silicon wafer, which then releases electrons. A tiny, positively charged capacitor acts
as a bucket that collects and stores the released electrons. A unit of this thin silicon
wafer on top of a tiny capacitor is known as a photosite.

Steps 2 and 3: Charge Accumulation and Charge Transfer

A CCD sensor continues to collect and store electrons until the camera shutter
closes. All the electrons stored in the capacitor make up the charge. When the
camera shutter closes, the charge from each photosite is transferred to a sense
capacitor circuit. The transfer is done by shifting the charges horizontally to the edge
of the sensor and then vertically until each charge reaches the sense capacitor
circuit. CCD sensors use this shift-register mechanism to transfer charge, while
CMOS sensors use local voltage conversion and signal amplification. Although this
makes CMOS the faster sensor, it also makes its output noisier, as the sheer number
of local amplifiers creates noise or artifacts in an image. In contrast, a CCD uses
only one amplifier circuit to amplify signals. Another disadvantage of local
amplification at high speeds is that it causes unevenness in the imagery. CCD
sensors don't have such problems because of the linear way they process the charge
from each photosite.


Steps 4 and 5: Charge-to-Voltage Conversion and Signal Amplification

Analog charges sent to the sense capacitor are automatically converted into voltages,
which form the raw data used to make images. After the charge-to-voltage
conversion, the signals are still too weak for a processor to use, so a signal amplifier
boosts them. The amplified signal is then sent to an image processor, which
assembles the final image.
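
To make the five steps concrete, here is a toy NumPy simulation of a CCD readout; the 4x4 sensor size, photon rate, and gain values are illustrative assumptions, not real sensor parameters.

```python
# Toy simulation of the five CCD readout steps described above; the 4x4
# sensor size, photon rate, and gain are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
photons = rng.poisson(lam=50, size=(4, 4))   # Step 1: photons strike the photosites
charge = photons.copy()                      # Step 2: electrons accumulate per photosite

readout = []                                 # Step 3: shift-register transfer,
for row in charge:                           # moving each photosite's charge
    readout.extend(row)                      # toward the single sense node

VOLTS_PER_ELECTRON = 1e-6                    # Step 4: charge-to-voltage conversion
GAIN = 1000.0                                # Step 5: one shared signal amplifier
signal = [q * VOLTS_PER_ELECTRON * GAIN for q in readout]
print(signal[:4])                            # amplified samples for the processor
```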

What Is Image Acquisition in Image Processing?

Image acquisition is the process of converting an analogue image into digital form.
This usually happens in a camera or scanner, but it can be done with any device that
produces analog images. Image acquisition is often used to create a digital
representation of data from surveys and experiments, but it can also be used for other
purposes, such as printing pictures or other types of graphics.

Image acquisition is always the first step in a workflow sequence, because an
unprocessed picture must be acquired before any other work can be done on it. The
device used matters less than the fact that the acquired image is raw, with no
adjustments yet applied; if an unadjusted image is not what you are looking for, then
this type of input is not for you.

Image acquisition is where you establish the parameters of your input. One of the
goals in image processing is to create a source of input that works within defined,
measurable parameters, which makes it easier to replicate an experiment. Many
factors go into acquiring good images; one such factor is how well the capture
hardware has been set up from inception onward. If the equipment, from desktop
scanners through enormous optical telescopes, is not properly configured or aligned
at the initial stages, visual artifacts may result and
complicate image processing later on, because the captures may be poor enough that
even extensive post-processing cannot salvage them. Certain areas of image
processing, such as comparative image processing, look for specific differences
between image sets.

One type of image acquisition in image processing is real-time acquisition. Real-
time imaging often involves retrieving images from a source that captures them
automatically, creating files that are processed as they enter the system. Software
and hardware work together in the background to quickly preserve incoming images
streaming through the system before they are lost or corrupted.
Some advanced methods of image acquisition require specialized hardware. Three-
dimensional (3D) imaging is one such method: it can use two or more cameras
positioned at precise distances around a target to create an accurate 3D model of the
object from multiple angles and elevations. Some satellites use this type of
technology to map terrain, building up an accurate 3D model from data gathered
over different surfaces.
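
A common two-camera approach is stereo disparity: the apparent horizontal shift of a feature between the left and right views encodes its depth. The sketch below uses OpenCV's block matcher; the file names and matcher parameters are assumptions, and the input pair is assumed to be rectified.

```python
# Two-camera 3D sensing via stereo disparity with OpenCV's block matcher;
# "left.png"/"right.png" are assumed to be a rectified stereo pair.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)      # larger disparity = closer surface

# Scale to 8-bit for viewing; depth is proportional to 1/disparity.
view = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", view)
```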

Illumination

Image processing systems generally comprise a camera or sensor and a processing
unit (usually, but not exclusively, a computer equipped with a frame grabber and
image analysis software). Smart cameras are often also regarded as image
processing systems. But alongside these obvious components, the illumination
system plays a crucial role. Strictly speaking, image processing tools do not inspect
the object itself but instead examine the visual image of the object as captured by
the system, and stable, reproducible illumination conditions must be in place to
ensure constant image quality of identical objects under identical conditions.
Therefore, fluctuations in illumination must be avoided if strict quality criteria are
to be applied to the inspection of objects.
Only when it is possible to view the specific features or faults with sufficient contrast
is it possible to evaluate them using image processing software. This is generally
achieved by illuminating the object with a light source, although fluorescent objects
are an exception to this rule. The principle of “illuminating the object under
inspection” may seem banal, but experience shows that one of the main difficulties
in image processing is making faults in the object visible to the camera at all.
Applications

Fig no 4: Illumination vision

Applications that demand carefully chosen illumination might include a transparent
glass bottle on which the embossed lettering at the bottom must be read. In other
words, the object under inspection and the features to be inspected are of the same
material, and to make things worse, the material is also transparent! By contrast, a
scratch on a metallic surface generally requires only that the surface feature be
recognized as distinct from the surface itself. Here again we have an inspection
situation in which faults must be identified even though the material is identical. The
same also applies to embossing and deformations in materials.


The crucial factor here is always the illumination and its interaction with the three
main considerations, namely the illumination, the object, and the camera. It is often
only the skilful exploitation of the special characteristics of a particular light source,
the lighting geometry, the object characteristics, and the camera that allows difficult
applications to be solved.

The key characteristics are as follows:

• Light: wavelength (colour), direct/diffuse illumination
• Object: material, surface, geometry, colour
• Camera: sensitivity, resolution, monochrome/colour
Since the physical characteristics of the object under inspection can only be
influenced in exceptional cases (e.g. by colouring components or using UV-
sensitive pigment additives), the object itself will usually determine the choice
of illumination and camera type. The lens is determined by the connector thread
of the camera and the working distance from the object. The data format and data
rate supported by the camera also determine the frame grabber used.


Various light sources

Fig no 6: Various light sources

The light employed in image processing applications is generated in a wide variety
of ways. Depending on the task, the required light intensity, the object size, and the
space available for installation, the following are generally used:

• LED illumination units
• Metal halide light sources
• Laser illumination units
• Fluorescent light (high-frequency)
• Halogen lamps

For several years now, LED illumination has taken an increasing share of the market
compared with other light sources. This trend is explained by the large number of
benefits offered by LED technology. These benefits include a considerably longer
service life of up to 50,000 hours, extremely simple
control facilities, mechanical resilience and the small physical size of the units,
design flexibility, lower operating costs, and excellent value for money. As far as
ring lights and similar illumination shapes are concerned, the LED has already
become well established as the light source of choice.
Illumination techniques

The angle of incidence of light on the object also influences the result. There are
several different techniques, such as front illumination or backlighting, direct or
diffuse illumination, and bright-field or dark-field illumination. Figures 7 through
11 illustrate how differently an object may appear depending on how the
illumination is organized.

Direct front illumination: a ring light illuminates the object directly, more or less
parallel to the optical axis of the camera. The image appears non-uniform and
mottled.

Diffuse bright-field illumination: the image appears more uniform. There is strong
contrast between the object and the background, but the reflective surface of the
connector 'floods' the camera, i.e. the camera is "dazzled" and no longer detects
some details. Furthermore, shadows form over the upper part of the connector.

Diffuse dark-field illumination: light arrives at an oblique angle of incidence from a
ring light positioned between the front illumination unit and the object. Further
detail can be seen on the connector and no shadows are formed.


Fig no 7: Direct front illumination

Fig no 8: Diffuse bright-field illumination

Fig no 9: Diffuse dark-field illumination


Dark-field illumination: a shallow angle of incidence of the light on the object plane.
The top edges of the pins, the connector, and the holes appear as bright circles and
can thus be easily identified using image analysis software. The missing pin (no
bright circle) and the bent pin (incorrect position) are more easily visible than under
front illumination.

Backlighting: light is aimed towards the camera from behind the object. The light
only penetrates where there is nothing to obstruct it. This allows the drill holes on
each side of the connector to be measured accurately. An easily detected bright spot
appears in place of the missing pin.
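
As a hedged sketch of such a backlit measurement, the snippet below thresholds the bright light-through regions and reports the position and radius of each hole; the file name "backlit.png" and the threshold value are assumptions for the example.

```python
# Measuring holes from a backlit silhouette; "backlit.png" and the
# threshold of 200 are illustrative assumptions.
import cv2

img = cv2.imread("backlit.png", cv2.IMREAD_GRAYSCALE)
_, bright = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)  # light-through regions

contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    (x, y), radius = cv2.minEnclosingCircle(c)   # hole centre and radius in pixels
    print(f"Hole at ({x:.0f}, {y:.0f}), radius {radius:.1f} px")
```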

Fig no 10: Dark-field illumination


Fig no 11: Backlighting

Robotic Vision vs Computer Vision

Robot Vision, or Robotic Vision, is closely linked to Machine Vision, and both have
a lot in common with Computer Vision. Computer Vision might be considered their
“father” if we talked about a family tree. However, to comprehend where they all fit,
we must first add the “grandparent”: Signal Processing.

Signal processing entails cleaning up electronic signals, extracting information from
them, preparing them for display, or converting them. Almost anything, in this sense,
may be treated as a signal, and an image is essentially a two-dimensional (or higher-
dimensional) signal.
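
To make the "image as a 2D signal" point concrete, here is a minimal sketch applying a classic signal-processing operation, a Gaussian low-pass filter, to an image array; the random array stands in for real image data and SciPy is an assumed dependency.

```python
# An image treated as a 2D signal: Gaussian low-pass filtering, a classic
# signal-processing clean-up step. The random array stands in for real data.
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.default_rng(1).random((64, 64))  # stand-in 2D signal
smoothed = gaussian_filter(image, sigma=2.0)       # suppress high-frequency noise
print(smoothed.shape)
```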


Link:

1) https://www.youtube.com/results?search_query=robot+vision+system

Exercise:

(Use separate sheets for answering the following questions)

1. Draw all the diagrams related to practical content.

2. Mention a new trend or the latest technology available or launched in the market
regarding the practical content.

3. Give a typical conclusion.
