
UNIT 3

MACHINE VISION
Introduction
The term “machine vision” refers to the industrial use of computer vision for automatic
inspection, process control, and robot guidance. In other words, it is the application of
computer vision to factory automation. We turn to machine vision when we need to
execute a certain function or produce an outcome on the basis of the image analysis
performed by the vision system. The system's software identifies pre-programmed
features in the image, and the system then triggers a set of predefined actions according
to its findings.
The machine vision process begins with imaging. Next come the automated analysis of the
image and the extraction of the required information. Finally, the solution is produced.
To put it simply, a machine vision system resembles a human inspector who visually
checks the quality of the products on assembly lines. Its “eyes” (digital cameras) and its
“brain” (image processing software) are able to perform similar inspections. As a result,
it makes decisions on the basis of the analysis of digital images.
A machine vision system includes:
- An image capture device (a camera with an optical sensor);
- Lighting appropriate for the specific application;
- A camera interface card for a computer (frame grabber);
- Computer software for processing images;
- Digital signal hardware for reporting the results.
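The whole capture-analyze-act loop can be sketched in a few lines of Python with OpenCV. This is a minimal illustration only: the camera index, the threshold of 128, and the pass/fail rule are assumptions for the example, not part of any particular system.

```python
import cv2

# Minimal capture -> analyze -> act loop (illustrative sketch).
cap = cv2.VideoCapture(0)              # image capture device (index assumed)
ret, frame = cap.read()                # grab one frame
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Simple analysis: binarize the image and measure the bright area.
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    bright_area = cv2.countNonZero(binary)
    # Trigger a pre-programmed action based on the finding.
    if bright_area > 10_000:           # assumed pass/fail criterion
        print("PASS: feature detected")
    else:
        print("FAIL: reject part")
cap.release()
```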

Traditionally, machine vision systems are programmed to perform narrowly
defined tasks. For instance, they can count objects on a conveyor, search for
defects, or read serial numbers. Unlike humans, they do not possess intelligence
or learning capability. However, they are very helpful in many ways due to their
high speed, accuracy, and ability to operate 24/7. Primarily, machine vision systems
are used for image-based automatic inspection, sorting, and robot guidance.

What Is Computer Vision?

Computer vision is all about extracting information about an object or scene via computer
analysis of its image or a sequence of images. It employs optical character
recognition, image recognition, video recognition, video tracking, and other algorithms to
make the most of the digital visual data.
Usually, a computer vision system consists of the following components:
- An image capture device (mostly a camera with an image sensor and a lens);
- Lighting appropriate for the specific application;
- An image capture board (frame grabber or digitizer);
- Image processing software.
In fact, a computer vision system roughly resembles human vision.
The image capture device serves as the human eyes, while the image processing software
works like a human brain. As a result, we get valuable information that is simply
irreplaceable for many business fields.
The applications of computer vision are more than numerous. They
include agriculture, geoscience, biometrics, augmented reality, medical image
analysis, robotics, industrial quality inspection, security and surveillance, and many
others.
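As a toy illustration of the image recognition mentioned above, the following Python/OpenCV sketch locates a known template inside a larger image. The file names and the confidence threshold of 0.8 are placeholders, not references to any real dataset.

```python
import cv2

# Toy image-recognition sketch: find a known template inside a scene.
# 'scene.png' and 'template.png' are placeholder file names.
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Slide the template across the scene and score each position.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:                       # assumed confidence threshold
    print(f"Object found at {max_loc} (score {max_val:.2f})")
else:
    print("Object not found")
```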
Computer vision vs Machine vision

Key Differences Between Machine Vision and Computer Vision

Perhaps we have already realized that computer vision and machine vision differ in more
than their names. Though they are two overlapping technologies and the boundaries
between them are often blurred, they are not the same thing.
1. Computer vision does not depend on machine vision. It can be used on its own in
a wide range of fields. On the contrary, machine vision cannot exist without
computer vision because it employs computer vision algorithms. Most of us drew
a family tree in primary school; if we placed computer and machine vision on
such a tree, machine vision would probably be the child of computer vision.
2. Computer vision is more a technique, whereas machine vision is more about
specific industrial applications. In other words, computer vision is a scientific
domain while machine vision is an engineering one.
3. Used in industrial settings, machine vision deals with light and motion that are
controlled. Besides, the viewed objects are already known and the observed
events are predictable. Computer vision often deals with objects of the
“outside world” and their activities, which are uncontrolled and sometimes quite
unpredictable.

CCD and CMOS image sensing


Image sensors are everywhere. They are present in single-shot digital cameras and digital
video cameras, embedded in cellular phones, and found in many more places. When people
purchase a digital imager, the primary metric they use for comparison is the pixel array
size, expressed in megapixels. The prevailing wisdom among most consumers is that the
higher the megapixel count, the better the imager. However, there are many more metrics
with which to compare imagers that may give a better indication of performance than raw
pixel counts. Further, many of these metrics depend on the type of imaging technology:
CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor).

CCD Image Sensors

An alternative way to look at direct integration is to think about the capacitance that is
present from the formation of the depletion region. When the photodiode is reset, the
maximum amount of charge is placed on this capacitance. As photons are converted into
charges, these charges are removed from the capacitor, creating the photocurrent. At the
end of integration, the amount of charge removed from the capacitor is directly
proportional to the number of photons that hit the sensor. If the remaining charge can
be measured, then the amount of light that hit the sensor can be determined as well. CCD
sensors work by transferring the charge from one pixel to another until it ends up at the
periphery, where it is converted into a voltage to be read out. The charge transfer is
accomplished by applying voltages that form wells of different potentials, so the charges
transfer completely from one pixel to the next. Charges are typically shifted downward to
the end of a column, then rightward to the end of a row, where the readout circuitry is
present.
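A small numerical model makes the integration step concrete. The full-well capacity and quantum efficiency below are invented, illustrative values, not specifications of any real sensor.

```python
# Toy model of direct integration in a CCD pixel (values are illustrative).
FULL_WELL = 20_000    # electrons the well holds after reset (assumed)
QE = 0.6              # quantum efficiency: electrons per photon (assumed)

def remaining_charge(photons: int) -> int:
    """Charge left on the photodiode capacitance after integration."""
    removed = min(int(photons * QE), FULL_WELL)  # charge removed by conversion
    return FULL_WELL - removed

# The charge removed is proportional to the photon count, so measuring the
# remaining charge tells us how much light hit the pixel.
for photons in (0, 5_000, 20_000, 50_000):
    print(photons, "photons ->", remaining_charge(photons), "e- left")
```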

There are two main types of CCD architectures: frame transfer and interline transfer. In
frame transfer, the charges are moved from the photosensitive pixels to a
non-photosensitive array of storage elements. They are then shifted from the storage
elements to the periphery, where they are converted and read out. In interline CCDs, the
non-photosensitive storage element sits directly next to the photodiode in the pixel. The
charges are then shifted from storage element to storage element until they reach the
readout circuitry.
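To make the bucket-brigade transfer described above concrete, here is a toy simulation of the column-then-row shifting; the 4x4 array size and the convention that the rightmost pixel exits the register first are assumptions for illustration.

```python
import numpy as np

# Toy bucket-brigade model of CCD charge transfer (4x4 array assumed).
charges = np.arange(16).reshape(4, 4)   # pretend integrated charge per pixel
readout = []

for _ in range(charges.shape[0]):       # one vertical shift per row
    row = charges[-1].copy()            # bottom row enters the horizontal register
    # Every column shifts down by one; the top row is now empty.
    charges = np.vstack([np.zeros_like(charges[:1]), charges[:-1]])
    # The register shifts rightward, so the rightmost pixel exits first.
    readout.extend(row[::-1].tolist())

print(readout)   # order in which pixel charges reach the output amplifier
```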

CMOS Image Sensors

CCDs are designed to move charges from pixel to pixel until they reach amplifiers that
are present in the dedicated readout area. CMOS image sensors integrate some amplifiers
directly into the pixel. This allows for a parallel readout architecture, where each pixel
can be addressed individually, or read out in parallel as a group. There are two main types
of CMOS image sensor modes, current mode and voltage mode. Voltage mode sensors
use a readout transistor present in the pixel that acts as a source follower. The
photovoltage is present at the gate of the readout transistor, and the voltage read out is a
linear function of the integrated photovoltage, to a first order approximation. Current
mode image sensors use a linear relationship between the gate voltage of the readout
transistor and the output current through the transistor to measure the photocurrent.
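A first-order sketch of the voltage-mode readout is shown below. The reset voltage and source-follower gain are assumed numbers chosen only to illustrate the linear relationship described above.

```python
# First-order model of a voltage-mode CMOS pixel readout (values assumed).
V_RESET = 3.0    # reset voltage on the photodiode, volts (assumed)
GAIN = 0.8       # source-follower gain, typically a bit below 1 (assumed)

def readout_voltage(v_photo_drop: float) -> float:
    """Output of the in-pixel source follower.

    The gate voltage falls as photocharge integrates, and the output is,
    to a first-order approximation, a linear function of that gate voltage.
    """
    v_gate = V_RESET - v_photo_drop
    return GAIN * v_gate

print(readout_voltage(0.0))   # dark pixel
print(readout_voltage(1.2))   # brightly lit pixel
```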
Sensing & Digitizing Image Data

A camera is used in the sensing and digitizing tasks to view the images. Special lighting
methods may be used to obtain better picture contrast. The images are converted into
digital form, and each is known as a frame of the vision data. A frame
grabber is incorporated to capture the digitized image continuously, typically at 30
frames per second. Each frame is divided into a matrix of picture elements. By performing
a sampling operation on the image, the number of pixels is determined. The pixels are
described by the elements of the matrix. Each pixel is reduced to a single value that
measures the intensity of light. As a result of this process, the intensity of every pixel
is converted into a digital value and stored in the computer memory.

(A frame grabber is an electronic device that captures, i.e., "grabs," individual
digital still frames from an analog video signal or a digital video stream. It is usually
employed as a component of a computer vision system, in which video frames are
captured in digital form and then displayed, stored, transmitted, analyzed, or
some combination of these.)
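The sampling-and-quantization step can be sketched in a few lines of Python. The 4x4 resolution, the [0, 1) analog intensity range, and the 8-bit scale are assumptions for the example.

```python
import numpy as np

# Sampling and quantization sketch: an analog intensity field is sampled
# into a pixel matrix, and each sample is quantized to an 8-bit value.
rng = np.random.default_rng(0)
analog = rng.random((4, 4))                # sampled light intensities in [0, 1)
digital = (analog * 255).astype(np.uint8)  # 8-bit quantization
print(digital)                             # what gets stored in computer memory
```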

Image Processing & Analysis:

In this function, image interpretation and data reduction are performed. The
image frame is thresholded to produce a binary image, which reduces the data. This
data reduction helps convert the frame from raw image data to feature-value
data. The feature-value data can be calculated via computer programming. This is
performed by matching image descriptors such as size and appearance against the
data previously stored on the computer.

The image processing and analysis function can be made more effective by training the
machine vision system regularly. Several kinds of data are collected in the training
process, such as the length of the perimeter, outer and inner diameters, area, and so on.
Here, the camera is very helpful in identifying matches between the computer models and
the feature-value data of new objects.
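As a rough sketch of this data-reduction step, the following Python/OpenCV fragment thresholds a frame to a binary image and reduces it to feature values such as area and perimeter. The file name 'part.png' and the threshold of 100 are placeholders.

```python
import cv2

# Data-reduction sketch: threshold a frame to a binary image, then reduce
# it to feature values (area, perimeter) for matching against stored models.
gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
_, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY)

# Each contour is a candidate object; its descriptors are the feature values.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)   # True: the contour is closed
    print(f"area={area:.0f}, perimeter={perimeter:.0f}")
```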

Applications:

Some of the important applications of the machine vision system in robots are:

1. Inspection
2. Orientation
3. Part Identification
4. Location

Machine Vision System

A machine vision system is a sensor used in robots for viewing and recognizing an
object with the help of a computer. It is mostly used in industrial robots
for inspection purposes. This system is also known as artificial vision or computer
vision. It has several components, such as a camera, a digital computer, digitizing
hardware, and interface hardware and software. The machine vision process includes
the following important tasks:

1. Sensing & Digitizing Image Data
2. Image Processing & Analysis

Applications

PICK AND PLACE ROBOT

The pick and place robot is a microcontroller-based mechatronic system that detects an
object, picks that object from a source location, and places it at a desired location. For
detection of the object, infrared sensors are used: an object is detected when it
interrupts the transmitter-to-receiver path of the infrared sensor.
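A hedged sketch of this detect-pick-place cycle is shown below. The sensor and motion functions are hypothetical stand-ins (simulated here for the demo), not a real microcontroller API.

```python
import random
import time

def ir_beam_interrupted() -> bool:
    """Stand-in for reading the IR receiver: True when an object blocks the
    transmitter-to-receiver path. Simulated here; on real hardware this
    would be a GPIO read."""
    return random.random() < 0.2    # pretend an object shows up sometimes

def move_arm_to(position: str) -> None:
    """Stand-in for a motion command to the microcontroller-driven arm."""
    print("arm ->", position)

# Detect -> pick -> place cycle (illustrative sketch).
for _ in range(10):                 # a few polling iterations for the demo
    if ir_beam_interrupted():       # object present at the source
        move_arm_to("source")       # move over the object
        move_arm_to("grip")         # close the gripper
        move_arm_to("destination")  # carry it to the target
        move_arm_to("release")      # open the gripper
    time.sleep(0.1)                 # poll the sensor ten times a second
```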
COMPONENTS OF A ROBOT:
1. STRUCTURE
The structure of a robot is usually mostly mechanical and can be called a
kinematic chain (a kinematic chain is an assembly of rigid bodies connected by joints
to provide constrained, or desired, motion; it is the mathematical model for a
mechanical system). The chain is formed of links, actuators, and joints which can
allow one or more degrees of freedom. Most contemporary robots use open serial
chains in which each link connects the one before to the one after it. These robots
are called serial robots and often resemble the human arm. Robots used as
manipulators have an end effector mounted on the last link. This end effector can
be anything from a welding device to a mechanical hand used to manipulate the
environment.
2. POWER SOURCE
At present, mostly lead-acid batteries are used, but potential power sources could
be:

- Pneumatic (compressed gases)
- Hydraulics (compressed liquids)
- Flywheel energy storage
- Organic garbage (through anaerobic digestion)
- Still-untested energy sources (e.g. nuclear fusion reactors)

3. ACTUATION
Actuators are like the "muscles" of a robot, the parts which convert stored energy
into movement. By far the most popular actuators are electric motors that spin a
wheel or gear, and linear actuators that control industrial robots in factories. But
there are some recent advances in alternative types of actuators, powered by
electricity, chemicals, or compressed air.

4. TOUCH
Current robotic and prosthetic hands (a prosthesis or prosthetic implant is
an artificial device that replaces a missing body part, which may be lost through
trauma, disease, or a condition present at birth, i.e., a congenital disorder; prostheses
can be created by hand or with computer-aided design (CAD)) receive far less tactile
information (tactile information supports our ability to recognize objects
by touch) than the human hand.

Recent research has developed a tactile sensor array that mimics (imitates, copies)
the mechanical properties and touch receptors of human fingertips. The sensor
array is constructed as a rigid core surrounded by conductive fluid contained by
an elastomeric skin. Electrodes are mounted on the surface of the rigid core and
are connected to an impedance-measuring device within the core. When the
artificial skin touches an object, the fluid path around the electrodes is deformed,
producing impedance changes that map the forces received from the object.
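A toy version of this impedance-to-force mapping is sketched below. The baseline impedance, sensitivity, and electrode readings are invented numbers chosen only to show the idea.

```python
import numpy as np

# Toy reconstruction of a force map from electrode impedance changes.
BASELINE_OHMS = 1_000.0   # impedance with no contact (assumed)
SENSITIVITY = 0.05        # newtons per ohm of change (assumed)

measured = np.array([[1000.0, 1020.0, 1005.0],
                     [1040.0, 1110.0, 1030.0],
                     [1002.0, 1025.0, 1001.0]])   # one reading per electrode

# Deformation of the fluid path changes impedance; map the change to force.
force_map = (measured - BASELINE_OHMS) * SENSITIVITY
print(force_map)          # rough picture of where the fingertip is pressed
```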

5. VISION
Computer vision is the science and technology of machines that see. As a
scientific discipline, computer vision is concerned with the theory behind artificial
systems that extract information from images. The image data can take many
forms, such as video sequences and views from cameras. In most practical
computer vision applications, the computers are pre-programmed to solve a
particular task, but methods based on learning are now becoming increasingly
common. Computer vision systems rely on image sensors which detect
electromagnetic radiation which is typically in the form of either visible light or
infra-red light. The sensors are designed using solid-state physics. The process by
which light propagates and reflects off surfaces is explained using optics.
Sophisticated image sensors even require quantum mechanics to provide a
complete understanding of the image formation process.

6. MANIPULATION

https://www.youtube.com/watch?v=4HVSr3ouCdk

Robots which must work in the real world require some way to manipulate
objects: to pick up, modify, destroy, or otherwise have an effect. Thus the 'hands' of
a robot are often referred to as end effectors, while the arm is referred to as a
manipulator. Most robot arms have replaceable effectors, each allowing them to
perform some small range of tasks. Some have a fixed manipulator which cannot
be replaced, while a few have one very general-purpose manipulator, for example
a humanoid hand.
(https://www.youtube.com/watch?v=IfojHo9cVOk)

Mechanical Grippers:
One of the most common effectors is the gripper. In its simplest manifestation it
consists of just two fingers which can open and close to pick up and let go of a
range of small objects. Fingers can, for example, be made of a chain with a metal
wire run through it.

Vacuum Grippers:
Pick and place robots for electronic components and for large objects like car
windscreens will often use very simple vacuum grippers. These are very simple
astrictive devices, but they can hold very large loads provided the prehension
surface is smooth enough to ensure suction.

Magnetic Grippers:
Magnetic grippers are most commonly used in a robot as an end effector for
grasping ferrous materials. It is another way of handling work parts, apart from
the mechanical grippers and vacuum grippers.

Types of magnetic grippers:

The magnetic grippers can be classified into two common types, namely magnetic
grippers with:

1. Electromagnets
2. Permanent magnets

Electromagnets: Grippers using electromagnets are generally powered by a DC supply for
handling material. This type of magnetic end effector is easy to control because the
attraction can be turned off by shutting down the current. This also helps to
remove the magnetism of the handled part.
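The on/off control described above amounts to switching the coil current. The sketch below is a hypothetical stand-in for that switch (on real hardware it would drive a relay or GPIO pin), included only to show the pick/release logic.

```python
def set_magnet_current(on: bool) -> None:
    """Stand-in for driving the DC supply to the electromagnet
    (e.g. via a relay or GPIO pin on real hardware)."""
    print("electromagnet", "ON" if on else "OFF")

# Pick: energize the coil to attract the ferrous part.
set_magnet_current(True)
# ... move the part to its destination ...
# Place: shut down the current to release (and demagnetize) the part.
set_magnet_current(False)
```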

Permanent magnets: These kinds of end effectors do not need power to operate. They are
always on. In order to separate the piece from the magnet, a push-off pin is
included on the end effector. The permanent magnet offers an advantage when it
comes to safety, since the gripper remains on even if a blackout occurs. Moreover,
there is no possibility of generating sparks during production since no electric
current is required.
