COURSE
VI-2000MK-II
VISION SYSTEM
DEGEM SYSTEMS
Copyright © 1999 by I.T.S. Inter Training Systems Ltd. All rights
reserved. This book, or parts thereof, may not be reproduced in any
form without prior written permission from I.T.S. This publication is
based on the exclusive methodology of Degem ® Systems.
INTRODUCTION
1 Introduction
2.3.1 Gray Image
2.6.6 Analyzed Objects Report
4.2 Image Acquisition
INTRODUCTION
1 Introduction
1.3 Equipment Requirements
1.5 Release History
1.6 Terminology
Running the ISPOT Setup.exe program installs the files required to run the
vision system. Setup does everything for you: it detects the Operating
System you are using and installs the ISPOT files needed to run on that
Operating System. In some configurations, Setup may ask you to reboot the
system during the setup process.
GRAPHICAL USER INTERFACE
2 Graphical User Interface
2.1 Overview
• Main menu
• Control bar
• Working Area
• Status bar
The Status bar contains the following items (panels):
▪ Hints Panel – provides hints as the mouse moves over screen elements
▪ Frame Grabber Status Panel – indicates whether the PXC200 board is
accessible to ISPOT
▪ Camera Status Panel – presents the status of the camera currently
selected in ISPOT
▪ ISPOT Configuration Name Panel – the name of the ISPOT configuration
the user is licensed to use
2.2 File menu
The File menu contains a set of items that allows the user to create, open
or save a project, an AOI, a Gray Image, a Binary Image, etc.
Using the File/Open Project option you can open a project from long term
memory (HDD, FDD, CD-ROM, etc).
The File/Save Project option allows you to save a new project or an opened
project.
2.2.4 New menu
2.2.5 Open menu
• Patterns (*.ptt)
• A Robot Scene (*.rsc)
• Options (*.opt)
2.2.6 Save menu
• Patterns (*.ptt)
• A Robot Scene (*.rsc)
• Options (*.opt)
2.2.7 Exit menu
2.3 View menu
Select the View menu, which contains the following items:
2.3.2 Histogram
Displays the histogram of the current gray level image. If you modify the
settings within Tools/Histogram Optimization, a new histogram will appear
on the screen (only if you select View/Histogram).
2.3.4 Display Patterns
The Pattern Name and the Number of samples are displayed at the bottom
of the window; on the same row you can find four buttons used to navigate
between the different considered patterns.
If you are not satisfied with the defined Pattern or Dimensions, you can
use one of the Delete buttons.
2.3.4.1 Standard Features Page
2.3.4.2 Dimensions Page
Selecting Dimensions in the Display Pattern window shows the values and
the Min and Max limits for the orientation and the dimensions in use, as
defined with the Tools/Define Dimensions command.
2.3.5 Live Image
Allows you to view the image directly from the camera, so that you can
adjust the camera's focus and iris. An example is shown in the frame below.
The same commands (from the View menu bar) are also available from the
View Toolbar.
2.4 Commands menu
2.4.1 Snap
Place the set of objects you want to analyze in the camera's field of
view.
A gray scale image of the objects will appear on the screen, as seen by the
camera.
If the objects lie outside the AOI, adjust their position and snap again.
Another alternative is to modify the AOI size (in Tools/Define AOI) and
snap again.
2.4.2 Apply Filter
Select Commands/Undo Apply Filter if you are not satisfied with the
results of the Apply Filter procedure; this restores the previous image.
Only one Undo Apply Filter operation is available.
2.4.4 Binarization
Under less than optimal lighting conditions, the image may not contain
clearly defined objects.
In order to improve the quality of the image, the binarization threshold
has to be changed (select Tools/Define Threshold).
2.4.5 Object Analysis
After you have obtained a satisfactory binary image, the objects in it can
be analyzed.
In the Object Analysis screen the system displays the individual objects
within the current image allowing the user to view their characteristics.
The Pattern Name and the Number of samples are displayed at the bottom
of the window; on the same row you can find four buttons used to navigate
between the different considered objects.
2.4.5.1 Standard Features Page
By activating Standard features you can see the object area, perimeter,
compactness and all the other standard features.
2.4.5.3 Position Page
The dimensions of the rectangular box that encompasses the object (as seen
in the Position window) are displayed as Xmin, Ymin, Xmax, Ymax.
2.4.5.4 Shape Page
The same commands (from the Commands menu bar) are also available from the
Commands Toolbar.
2.5 Tools menu
Within the Frame Grabber box you can enter two values, for offset and
gain. The result will be a better definition of the gray scale levels in
the image.
The Optimization box allows you to select the way (Auto or Manual) the
offset and gain will be defined.
In Auto mode the system automatically optimizes the Offset and Gain in
order to obtain a histogram that spans most levels of the gray scale
range (0-255).
In Manual mode the system generates the histogram based on the Offset
and Gain values prescribed by the user.
2.5.3 Calibrate AOI
By typing the object size (Width and Height), the application
automatically sets a new scale. This scale will be used for all analyzed
objects.
By default, the system considers the scale on both axes (X and Y) equal
to 1, meaning one mm per pixel.
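As a sketch of the underlying arithmetic (the function name is hypothetical, not part of ISPOT), calibration simply divides the known object size by its measured size in pixels:

```python
def calibrate_scale(width_mm, height_mm, width_px, height_px):
    """Derive mm-per-pixel scale factors, one per axis, from an
    object of known real-world size (hypothetical helper)."""
    return width_mm / width_px, height_mm / height_px

# With the default scale, one pixel equals one mm on both axes.
# A 40 mm disk that spans 80 pixels gives a scale of 0.5 mm/pixel:
sx, sy = calibrate_scale(40.0, 40.0, 80, 80)
```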
2.5.4 Define Threshold
If you choose Auto mode for Threshold Type, ISPOT will set the threshold
value according to an optimization algorithm for the current gray image.
2.5.5 Edit Filter
The gray image filter is a 3 x 3 convolution mask plus a ratio. You have
to set all nine items of the convolution mask and the ratio value
according to the filtering goals. The filter also has a Description string
attached.
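How such a mask works can be sketched in a few lines (an illustration, not ISPOT's implementation): each output pixel is the weighted sum of its 3 x 3 neighborhood, divided by the ratio and clamped to the 0-255 range.

```python
def apply_filter(image, mask, ratio):
    """Apply a 3x3 convolution mask with a divisor ratio to a gray
    image given as a list of rows; border pixels are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += mask[dy + 1][dx + 1] * image[y + dy][x + dx]
            out[y][x] = min(255, max(0, acc // ratio))
    return out

# A Smooth-like mask: all ones with ratio 9 (the neighborhood mean).
smooth_mask = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```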
2.5.6 Build Pattern
2.5.6.1 Standard Features Page
Standard features page displays the values of the parameters of the current
object (number of holes, area, perimeter, etc.).
This dialog box also displays the number of samples for the given Pattern.
2.5.6.2 Dimensions Page
2.5.6.3 Shape Page
By selecting Shape you can see the defined points and dimensions of the
object and the associated orientation axis (an arrow marks its main axis
of orientation, passing through the object's center of gravity).
From this moment you can measure the desired dimension of the object
between two defined points. The steps are:
5. To make the connection between two points, select the desired points
from the Point box, setting them as First and Second within the Take
Point box.
6. Go to Dimensions box, select the desired dimension number and click
Define; the connection between the two considered points will be
displayed on the object.
2.5.8 Options
The Options menu allows you to specify parameters which remain unchanged
during a work session, or until you modify them. All parameters have
default values.
• Select Camera (White, Red, Blue or Green) allows you to select the
camera (by the color of its input cable) that will be used to acquire
the image.
• Communication Port selects Com 1 or Com 2 as the serial interface port
for communication with the robot controller.
• Picking Height represents the predefined height at which the robot
picks objects in the Pick command from the Build Robot Scene (the
robot's Z axis at the moment of picking).
2.5.8.2 Image Page
Selecting Image within the Options window opens a page which displays, and
allows you to define, the settings and values used for image processing.
• Object Orientation box is used to mark the object axis of orientation
type: Minimum Distance Point, Maximum Distance Point or Inertia
Moment Axis.
2.5.8.3 Standard Features Page
The Standard Features page allows you to select the set of features to be
used in the object identification process.
The same commands (from the Tools menu bar) are also available from the
Tools Toolbar.
2.6 Robot menu
Communication between the robot controller and the ISPOT system is made
through the RS232 communication port of the PC selected by the user for
robot communication (from Tools/Options). In order to communicate with
the robot controller, ISPOT sends a string of characters through the
dedicated RS232 port. This string of characters represents a direct
command for the robot controller.
Scene Status displays the current status of the Robot Scene. When the
status is Synchronized, the user can test the accuracy of the Scene by
sending the robot the command to pick the object in the image (the pick
action is made with the gripper height value contained in the Picking
Height box).
You can select the next or previous object in the displayed image by using
the navigation buttons: First , Previous , Next , Last .
2.6.2 Initialize Robot
2.6.4 Vision Stages
BEGINNER’S GUIDE
3 Beginner’s Guide
• Starting ISPOT.
• Image acquisition and preprocessing.
This part also gives a brief description of how to:
• Define AOI
• Calibrate the AOI
• View the image histogram
• Filter the image
• Get a binary image.
It also gives a brief description of how to use the Options tool. Please
refer to the Advanced User’s Guide for a more detailed description.
3.2.1 Getting Started
The first step is to check if the entire vision system (computer, frame-
grabber card and camera) is properly installed and operable. In this stage
the robotic system is not required.
After running ISPOT, the software main menus will appear on the screen.
Check in the Status bar if the Frame Grabber and Camera are connected.
There are three possible situations:
• Status bar – Frame Grabber Ok / No Camera. In this case you should
check the connections between the Camera and the Frame Grabber, or
check whether the Camera is powered on. If, for any reason, you choose
another Camera, it is recommended to go to Options and make the change
there.
Click View/Live Image or click the Live Image button in the View
toolbar. Place an object in the camera’s field of view, and try to get the
best illumination and focus.
3.2.2.1 Grabbing the Image
If the objects lie outside the frame, adjust their position and snap again or
change the frame size and snap again.
3.2.2.2 Frame Grabber Optimization
The histogram optimization tool is used to determine the proper setting of
the two parameters that control image acquisition: Gain and Offset. It
allows the user to change the Frame Grabber parameter values in order to
improve the acquisition system's resolution.
Gain and Offset are set in order to obtain a better gray level resolution.
In general, the offset controls the position of the lowest levels of the
histogram, and the gain controls the length of the histogram (the
difference between the lowest and the highest pixel intensity levels of
the image).
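Conceptually, assuming a simple linear acquisition model (a sketch for illustration, not the PXC200's actual transfer function), the two parameters act like this:

```python
def adjust(pixel, gain, offset):
    """Hypothetical linear model: offset shifts the whole histogram,
    gain stretches it; values outside 0..255 saturate (detail is lost)."""
    value = int(pixel * gain + offset)
    return min(255, max(0, value))

# Gain 1.5, offset 20: a dark pixel of 40 maps to 80, while a bright
# pixel of 200 saturates at 255 and its gray level detail is lost.
```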
Examples:
The gray image histogram with the Frame Grabber parameters (Gain and
Offset) at their default values.
The Gray Image Histogram after the histogram optimization process:
The gray image histogram after manually changing the Gain parameter value
to 50.
The gray image histogram after manually changing the Offset parameter
value to 130. The image is partially saturated, losing gray level
information on pixels of high intensity.
The Gray Image for these parameters:
The gray image histogram after manually changing the Offset parameter
value to 80. Note that the image loses gray level information on pixels
of low intensity.
3.2.3 Defining a framework
Click Tools/Define AOI or click the Define AOI button in the Tools
toolbar. An image, as seen by the camera will appear.
In order to eliminate unrelated data from the images (e.g., dirt specks,
other objects), you should define a frame by cropping the picture, thereby
creating an AOI.
Click on the image. A frame cursor will appear around the image. Drag
each side of the cursor so that only the desired part of the image will
remain inside the marked area.
Press [OK] to save the frame definitions or Cancel to discard the changes
on the previous AOI.
Click on an image object. A frame will appear around the object.
Right-click inside the marked area. A Binary Image popup menu, which
contains several system tools, will appear, and you can apply those tools
to the selected object.
The purpose of this tool is to obtain a good binary image. It refers to the
threshold used for the binarization: all pixels below the threshold level are
considered 0’s and all pixels above this level are considered 1’s. Click
Tools/Define Threshold or click the Define Threshold button in the
Tools toolbar.
Select Auto mode. In this way the system automatically fixes a
binarization threshold, which is in fact the threshold suggested by the
system under a bimodal histogram assumption; that is, all objects have the
same color and the background color is uniform. The program searches for
the first local maximum and the last local maximum point of the histogram
from level 0 to level 255, and then determines the absolute minimum which
lies between these points. The values of these points generally correspond
to the first object and to the background (or the opposite).
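The search described above can be sketched as follows (a simplified reconstruction for illustration, not ISPOT's actual code): find the first and last local maxima of the 256-bin histogram, then take the absolute minimum of the valley between them as the threshold.

```python
def auto_threshold(hist):
    """Pick a binarization threshold from a 256-bin histogram,
    assuming a bimodal shape (object peak and background peak)."""
    def is_local_max(i):
        left = hist[i - 1] if i > 0 else -1
        right = hist[i + 1] if i < 255 else -1
        return hist[i] > left and hist[i] > right

    peaks = [i for i in range(256) if is_local_max(i)]
    first, last = peaks[0], peaks[-1]
    # Absolute minimum of the valley between the two peaks.
    return min(range(first, last + 1), key=lambda i: hist[i])
```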
Select Manual mode and move the cursor in the histogram until it
produces a clearly defined black and white image in the frame and then
click [OK]. In Manual mode, the user selects the threshold value, generally
based on the histogram. It is recommended that the threshold be selected in
local minimum points of the histogram in order to eliminate some objects.
Note that in Auto mode the defined threshold will be changed according
to the current histogram. In Manual mode the defined threshold will be
fixed and it will keep this value during the histogram changes.
Examples:
1. Binary Image with Automatic Threshold (Threshold Value 107)
Observations:
If lighting conditions are less than optimal, the image may not contain
clearly defined objects. In order to improve the quality of the image,
you need to change the binarization threshold.
This tool is used to correlate the horizontal and vertical size of a pixel
with millimeters.
Click Tools/Calibrate AOI, click the Calibrate AOI button in the Commands
toolbar, or right-click on the object in the image you want to analyze.
The Calibrate Frame window opens. Search in the frame until you find a
disk with known vertical and horizontal dimensions (it is better to use a
disk for calibration because it is not affected by rotation).
Enter the exact Width (X) and Height (Y) measurements of the object you
are using for the calibration, and click [OK].
3.2.4 Filtering the Image
3.2.4.1 Filter
Filters are used to improve the quality of the image. The ISPOT system
provides five predefined gray level filters (Blur, Emboss, Pond, Sharp
and Smooth), which are the most commonly used in filtering.
Click Tools/Edit Filter or click the Edit Filter button in the Tools
toolbar to open Edit Filter dialog box.
Which filters should you use? The five standard filters provided with the
system are the most commonly used for image processing. In some cases an
extended filter is needed, and the system enables the user to edit a new
filter in order to implement a particular filtering method.
Note that filtering the image is a method of image enhancement. By
filtering an image you can obtain a higher quality image from the
processing point of view (this means removing, or pointing out, certain
characteristics of the gray level image). Examples are the Smooth filter,
which removes noise from the gray image, and the Sharp filter, which
points out the edges. The higher the image quality, the better the image
analysis.
Examples
The original Gray Image is the gray image resulting from Histogram
Optimization. In the following examples you can see how the gray image
changes after applying each of the five predefined gray level filters.
Please refer to the Advanced User’s Guide for a more detailed description.
• Blur Filter
The Gray Image after applying the filter five times: (observe that the
image is blurred; the edges are not so clear).
• Emboss Filter
The Gray Image after applying the filter one time: (the filter extracts
the object contours; homogeneous areas become black).
• Pond Filter
The Gray Image after applying the filter five times: (the effect is to
reduce the intensity of noise in the image).
• Sharp Filter
The Gray Image after applying the filter two times: (image contrast
is enhanced).
• Smooth Filter
The Gray Image after applying the filter two times: (observe that
contour lines tend to diffuse).
1. For instance, if you want to enhance the image contrast more than the
Sharp Filter does, you can increase both the middle value (more than 5)
and the negative values (less than -1). These changes are made in order
to preserve the balance of the pixel weights in the equation while
increasing the difference between their values. The differentiation
effect will be greater, but image noise will also be amplified.
The Gray Image after applying the filter two times: observe that
image noise is also amplified.
2. In order to enhance the object contours, you can use a filter defined as
follows:
• The values of the middle column (from top to bottom) equal: 1, 0, -1.
• The values in the left column of the array equal 1.
• The values in the right column of the array equal -1.
• The ratio is 1.
As a result, homogeneous areas become black and the object contour is
detected; because of the differencing performed, the contour tends to
become white. Observe that the edges in the N-E and S-W directions are
emphasized (the opposite direction to the Emboss filter action presented
before).
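Written out as a mask and tried on a small synthetic image (a sketch; convolve3x3 is a hypothetical helper, not an ISPOT function), the four rules above give:

```python
# The contour filter described above: middle column 1, 0, -1 from
# top to bottom; left column all 1; right column all -1; ratio 1.
contour_mask = [
    [1, 1, -1],
    [1, 0, -1],
    [1, -1, -1],
]

def convolve3x3(image, mask, ratio):
    """Hypothetical helper: apply the mask to the interior pixels of
    a gray image (list of rows), clamping results to 0..255."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = sum(mask[j][i] * image[y + j - 1][x + i - 1]
                      for j in range(3) for i in range(3))
            out[y][x] = min(255, max(0, acc // ratio))
    return out

# The mask weights sum to zero, so homogeneous areas come out black;
# a bright-to-dark step edge produces a strong white response.
```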
The Gray Image after applying the filter one time.
3. For contour detection, you can use a Sobel mask. In terms of a digital
image, a mask is an array designed to detect some invariant regional
property. We are now interested in detecting transitions between
regions, and for this we can consider two Sobel operators (templates),
applied in the vertical and horizontal directions respectively.
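As a sketch, here are the two standard Sobel templates and the usual way of combining them into a gradient magnitude (the exact combination ISPOT uses is not stated, so this is an assumption):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # responds to horizontal edges

def sobel_magnitude(image, y, x):
    """Approximate gradient magnitude |gx| + |gy| at an interior
    pixel of a gray image given as a list of rows."""
    gx = gy = 0
    for j in range(3):
        for i in range(3):
            p = image[y + j - 1][x + i - 1]
            gx += SOBEL_X[j][i] * p
            gy += SOBEL_Y[j][i] * p
    return min(255, abs(gx) + abs(gy))
```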
The Gray Image after applying the mask.
4. We can also use the Laplacian operator, which is considered an
estimate of the second derivative of the gray level image.
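A common discrete form of the Laplacian is the 4-neighbour mask below (a standard textbook mask shown for illustration; the mask ISPOT uses is not given in the text):

```python
# 4-neighbour discrete Laplacian: the response is the difference
# between a pixel's neighbours and the pixel itself, i.e. an
# estimate of the second derivative of the gray level image.
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]

def laplacian_at(image, y, x):
    """Second-derivative estimate at an interior pixel."""
    return sum(LAPLACIAN[j][i] * image[y + j - 1][x + i - 1]
               for j in range(3) for i in range(3))

# Flat areas and linear ramps give 0; isolated peaks give a strong
# negative response, which is why the operator highlights contours.
```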
Click Tools/Undo Filter to open the Undo Filter dialog box. The system
provides only a single level of undo.
3.2.5 Options
Click Tools/Options.
Point to the General page. You can select the camera you want to use, and
you can also set the Communication Port for communication with the robot.
The available cameras are listed and you can select the one to use; it
will then appear in the Status bar. Otherwise, the Status bar will show
No Camera.
Point to the Image page. Through Object Orientation you can choose the
method for estimating object orientation. Object Constraints lets you
adjust the limits of the objects analyzed by the system. You can also see
both the binarization threshold value and the binarization method, and
you can change the Frame Grabber parameters.
Point to the Standard Features page. This tab enables the user to
activate/deactivate the standard features (such as area, perimeter,
compactness, eccentricity and the invariant moments) used in the
identification process.
Let’s come back to Object Orientation. The information about the
orientation axis of the object, together with the information about the
object's position, allows the system to calculate the position and the
roll angle of the robot gripper.
There is a trick you can use if, after defining an AOI, you still don’t
have a clean image. In order to remove unrelated data (like specks or
other objects) from the AOI, you can go to Object Constraints and change
Min. Area, Min. Perimeter, etc.
3.2.6 Summary
Observation 1
Observation 2
This is recommended if, after you first define an AOI, you still have
unrelated data in your frame.
Notes:
Use this page for notes relating to image acquisition and image
preprocessing scenarios.
3.3 Medium level vision
• Image segmentation
• Image description (Object Analysis)
Image segmentation is performed on binary images; its purpose is to
determine which pixels are part of an object and which are part of the
background.
This tool is used to analyze the objects in a binary image. A set of
features is calculated for each object. The features include area,
perimeter, center of gravity, and so on; they can be used to identify
objects, to measure position and orientation so that the robot can pick
the object up, and to check dimensions for quality control.
Once you have obtained a good binary image, you can analyze the objects in
it. Click on Shape. There you can view, one by one, using the navigation
buttons, the individual objects within the current image. The white arrow
on the object marks the object’s axis of orientation, which passes through
the object’s center of gravity.
Click on Standard Features to see the parameters of the selected object.
The number of holes is estimated for the object.
Standard Features are estimated; this means the user cannot change their
values. The Options tool enables the user to activate/deactivate some
features for the identification process.
Click on Position. There you can see the dimensions of the rectangle that
frames the object, and also the coordinates of the object's position
relative to the upper left-hand corner of the frame.
Click on Dimensions. There you can see the values of the dimensions
defined for these classes of objects. For a given object, a dimension
defined between two points has a value; the system takes 10% of this
value (by default) and displays it in the Define Dimensions text box.
View Pattern enables you to change these dimensions.
If the pattern is recognized, you can see its name in the Pattern Name
text box. In Object Analysis the system therefore displays the pattern
dimensions and the actual values of the defined dimensions for the
recognized object.
3.3.2 Summary
The Object Analysis tool gives you fast and complete information about
the objects you require.
Notes:
Use this page for notes relating to image analyzing and segmentation
scenarios.
3.4 High level vision
• Object recognition
• Interpretation (building pattern and dimensions)
To define a point, do the following:
1. Using the mouse, place the cross-cursor anywhere on the contour of the
object.
2. In the Point list box, select a point number you want to define as the
first point, Point 4 for instance, and then click Define. Repeat this
operation once again and choose another point number, Point 2 for
instance.
3. In the Point list box, go to the point number you chose as the first
point and, in Take Point, click the First button. You can see in the
left box the number of the point you selected.
4. Repeat this operation with the other point and, in Take Point, click
the Second button. You can see in the left box the number of the second
point you selected, and the Define button in the Dimension box is now
enabled. This means that between the first and the second point you can
define a dimension. There are up to 10 user-defined dimensions plus the
predefined object orientation, and a maximum of 20 points is provided
for these.
5. In the Dimension list box, choose a dimension number, Dimension 1 for
instance, and then click Define. Now you can see in the Dimension list
box a text like Dimension 1, Define (1, 2) or Dimension 3, Define (4, 2),
and also a blue line between the two points defined on the object’s
contour. In the Point list box, the points that define a dimension are now
marked as connected, Point 1, Connected, and Point 4, Connected for
instance.
3.4.2 Display Pattern
The Display Pattern tool allows the user to inspect the pattern database,
to erase a pattern or the defined dimensions (for the selected pattern),
or to change a dimension value, Min value, Max value or percentage value
(the default values of dimensions are defined in Define Dimensions). For
a given object, a dimension defined between two points has a value; the
system takes 10% of this value (by default) and displays it in the
Dimensions page. If you want this information as a percentage, enable the
% check box.
3.4.3 Build Pattern
Point to the Standard Features page. There you can find information about
some features, including the orientation. You can also see the number of
samples in a dedicated text box.
Point on Dimensions page. You can see the dimensions for a recognized
object.
Point to the Shape tab. There you can select the object you want to learn,
using the First, Previous, Next and Last toolbar buttons. You can also
see the number of samples of this pattern in the dedicated text box.
In the Pattern Name dialog box, type a name for the selected object (e.g.
PENTAGON), and then press Learn. Any name other than “?” is valid.
Once an object has been learned, the system automatically selects the next
object in the image.
Repeat the learning process for each object, assigning each type a different
name (e.g. PEN).
Once names have been assigned to all objects in the image, the Learn
button becomes unavailable.
To create a statistical database for the system, you must capture several
images of the objects in various positions.
Slightly turn and move the objects in the camera’s field of view. If the
objects have been moved outside the frame, readjust their location or
redefine AOI.
Continue the learning process for different images until the system
automatically and correctly fills in the Pattern Name box with the name
you assigned to the object. The system identifies an object by matching
its patterns to the ones contained in the predefined pattern database.
In the Standard Features tab, make sure the most powerful features have
been selected.
When the robot and vision coordinate systems are synchronized, ISPOT
can instruct the robot to pick up identified objects. In this chapter you will
learn the procedure for synchronization, which involves teaching the same
physical points to both the robot and the vision coordinate systems.
Make sure the entire system (robot, controller, frame-grabber card, and
camera) is properly installed and operable.
• Use Tools/Histogram Optimization to adjust and set the gain and
offset so that the object can be analyzed.
• Use Tools/Calibrate AOI to calibrate the frame.
• ISPOT may already have a robot synchronization loaded. If either the
robot position or the camera position has been physically altered
since the robot synchronization was last performed, press the Delete
button in both the Robot Point and Vision Point definitions.
In order to perform the synchronization, follow these steps:
Place the object within the camera’s field of view and within the robot’s
working envelope. The object used for synchronization should have a shape
that the robot can easily grasp at its center of gravity.
Open the gripper. Move the robot to the object. Close the gripper, and
make sure the robot can grasp the object in this position.
When the gripper successfully grasps the object, select Robot Point /
Point 4, and click Define.
Move the robot away from the camera’s field of view, making sure the
object's position is not modified. Click the Snap button to capture the
image of the object.
Click Robot/Build Robot Scene, click the Build Robot Scene button
in the Robot toolbar, or right-click on the object you want to use for
synchronization. From this point, the robot’s point of origin, the camera,
and the objects must remain fixed.
Record the vision coordinates by marking the object grasped by the robot.
Select Vision Point / Point 4, and click Define.
Then repeat steps 4 to 9 for another position of the object in the image.
In this way you will define Robot/Point 3 and Vision/Point 3. You can
stop now, but it is recommended to repeat the process once again (for a
different position) for Robot/Point 2 and Vision/Point 2. You can define
up to 6 pairs of points (Vision and Robot). Repeating the procedure
improves the accuracy of the scene parameters, because they are estimated
statistically.
If the Scene Status dialog box displays Synchronized, the synchronization
is successful. Then the Pick button will become enabled.
3.4.5 Initialization Auto Robot Mode
This command initializes communication with the robot in order to put the
robot in a state ready for automatic mode.
If the user chooses [Ok], ISPOT sends the robot the direct command
“rn 10” (run from line 10).
Line 10 is the first line of the Initialization routine inside the robot
controller.
From this stage the robot will be in Auto Mode. In automatic mode both
systems are connected: Vision receives commands from the robot to snap
and analyze the image, and communicates to the robot whether the object
is rejected or accepted.
In Robot Mode the relationship between ISPOT and the robot is the
following: ISPOT is the slave and the robot is the master.
In the “Waiting for the robot” stage, ISPOT periodically polls the robot
(with a 500 millisecond period) for the commands the robot is sending.
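The polling stage can be sketched as a simple loop (the function name and the counter-reading callback are hypothetical; only the 500 ms period and the counter-20 wake-up value of 2 come from the text):

```python
import time

def wait_for_robot(read_counter, period=0.5):
    """Sketch of the "Waiting for the robot" stage: poll counter 20
    every `period` seconds until the robot sets it to 2 ("Wake up").
    `read_counter` is a hypothetical callback that queries the robot."""
    while read_counter(20) != 2:
        time.sleep(period)
```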
Counter 20 changing to 2 is interpreted by ISPOT as a “Wake up” message
from the robot. This message is used by the dual ISPOT-robot system for
synchronization (the handshake). On receiving this value, ISPOT performs
its own initialization of Robot Mode (and goes to the “Waiting for the
Robot” Vision Stage), and then sends the robot controller the command:
Line 30 is the first line of the Automatic mode routine inside the robot
controller.
The Vision Stages other than “Waiting for the Robot” are “Snap Image”,
“Apply Filter”, “Binarization” and “Object Analysis”. All these stages
are delayed by a 2-second sleep period (in order to allow the user to
notice the steps in processing the image).
At the end of the cycle, after object identification, ISPOT displays
“Test Passed” if the object is identified and its dimensions are within
the prescribed tolerances, or “Test Failed” in any other case. ISPOT then
sends:
'sc 20,10' (set counter 20 to value 10) in the “Test Passed” case, or
'sc 20,11' (set counter 20 to value 11) in the “Test Failed” case,
followed by
'rn 270' (run line 270). From line 270 the robot reads the counter 20
value, then picks the object and handles it according to that value.
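The direct-command strings above are plain text; a helper to assemble them could look like this (the function names are hypothetical; the 'sc' and 'rn' formats are taken from the text):

```python
def set_counter_cmd(counter, value):
    """Build an 'sc' (set counter) direct command string."""
    return "sc %d,%d" % (counter, value)

def run_cmd(line):
    """Build an 'rn' (run from line) direct command string."""
    return "rn %d" % line

# Test passed: set counter 20 to 10, then run from line 270.
commands = [set_counter_cmd(20, 10), run_cmd(270)]
```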
3.4.8 Summary
Observation 1
After how many samples of objects with different shapes will the system
make no errors in your own application?
Observation 2
Observation 3
After how many samples will the synchronization become valid in your
case?
Notes:
Use this page for notes relating to defining dimensions, building
patterns and building robot scenes.
ADVANCED USER’S GUIDE
4 Advanced User’s Guide
4.1 Introduction
• The gray scale image is an array of 512 x 384 pixels in which each
pixel has a value in the range 0..255 (one byte). A gray scale image
therefore requires 192 KB (196,608 bytes) of memory.
• A binary image is an array of 512 x 384 pixels in which each pixel
has a value of 0 or 1. For binary images, the background color is the
logical "0" (light pixel), while the object color is the logical "1"
(dark pixel) – or the opposite, depending on the image scene.
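As a quick check of these storage figures (a sketch; the one-bit-per-pixel packing assumed for binary images is an illustration, since the text does not state how ISPOT stores them internally):

```python
# Memory needed for ISPOT images of 512 x 384 pixels.
WIDTH, HEIGHT = 512, 384

gray_bytes = WIDTH * HEIGHT          # one byte per pixel: 196,608 bytes
gray_kb = gray_bytes / 1024          # 192 KB
binary_bytes = WIDTH * HEIGHT // 8   # 24,576 bytes if packed one bit per pixel
```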
• Image Preprocessing (optional): the image can be enhanced by
applying filters to improve the image quality. Improving the image
quality means reducing the image features that disturb the
subsequent image processing stages (such as noise) and
accentuating the features important to those stages (for example:
object boundary edges, object contours, etc.).
• Image Binarization: the transformation of the gray image into a
binary image based on a global threshold value estimated by
histogram analysis.
• Connectivity Analysis (Image Segmentation): by analyzing the
connections between pixels, objects are extracted from the image.
Any holes within objects are numbered during the segmentation
stage.
• Object Analysis: a set of features is calculated for each object. These
features, called standard features, are used to identify the objects.
In this stage the system also measures the location of each object
(its position and orientation) so that the robot can pick it up.
• Object Identification: using the set of standard features selected
by the user, the system identifies each object in the current
binary image. This stage is performed using a statistical pattern
recognition algorithm on the pattern database built by the user.
• Quality Control: for every identified object in the image the system
checks the dimension values (Orientation and up to 10 Distances
between up to 20 points defined on the object borders) defined and
selected by the user for each pattern (class of object).
The image histogram for gray scale images is a function of 256 equal,
adjacent classes within the gray scale. The histogram value of a class
represents the number of pixels having a particular gray level. A histogram
is used to determine the proper settings for the gain and offset frame
grabber parameters controlling image acquisition.
The gain and offset are set in order to obtain an image whose histogram
spans most levels in the gray scale range (0 ... 255), thereby producing an
image with maximum contrast (gray level resolution). These two
parameters can be set by choosing Manual or Auto mode. In Auto mode,
the program will automatically adjust the gain and the offset in order to
obtain a histogram that spans the entire scale. In general, the offset
controls the position of the lowest levels of the histogram and the gain
controls the length of the histogram (the difference between the lowest and
the highest pixel intensity levels of the image).
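The histogram and its span can be sketched as follows (plain Python, with `image` as a list of pixel rows; the names are illustrative, not part of ISPOT):

```python
def histogram(image):
    """Gray-level histogram: counts[g] = number of pixels with gray level g.
    `image` is a list of rows of pixel values in 0..255."""
    counts = [0] * 256
    for row in image:
        for g in row:
            counts[g] += 1
    return counts

def span(counts):
    """Lowest and highest occupied gray levels. Gain/offset tuning aims to
    stretch this span over most of the 0..255 range: offset shifts the
    lowest level, gain stretches the distance to the highest."""
    occupied = [g for g, c in enumerate(counts) if c]
    return (occupied[0], occupied[-1]) if occupied else (None, None)
```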
To solve this problem, the ISPOT vision system uses two scaling factors
(mm-per-pixel scales), one for the horizontal axis and one for the vertical
axis. All the standard features, location values and dimensions are scaled
using these factors.
The best results in AOI Calibration will be obtained with a disk-shaped
calibration object, because the disk shape is not sensitive to the orientation
of the object in the image. For error minimization, the object should fill the
AOI (without extending beyond it).
AOI Definition and Calibration data can be saved on disk for future use.
Sx = dx / px        Sy = dy / py
The error in computing the horizontal and vertical dimensions of an object
has the value of 1 pixel; considering this, the relative scale errors are:
ΔSx / Sx = 1 / (px + 1) · 100 [%]
ΔSy / Sy = 1 / (py + 1) · 100 [%]
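A minimal sketch of the calibration arithmetic, assuming dx, dy are the real (mm) dimensions of the calibration object and px, py its measured dimensions in pixels (the function names are illustrative):

```python
def aoi_scales(dx_mm, dy_mm, px, py):
    """mm-per-pixel scale factors from a calibration object of known real
    size dx_mm x dy_mm that spans px x py pixels in the image."""
    return dx_mm / px, dy_mm / py

def relative_scale_error_percent(p):
    """Relative scale error for a calibration span of p pixels, given the
    1-pixel dimension-measurement error: 1 / (p + 1) * 100 [%]."""
    return 100.0 / (p + 1)
```

The error formula shows why the object should fill the AOI: a larger pixel span p gives a smaller relative error.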
The ISPOT vision system uses gray level filters. These filters are applied
to all pixels in the defined AOI. Pixels without neighbors in all eight
directions (that is, at the edge of the image) remain unchanged.
We use gray level filters on gray level images. The pixel's gray level
value is used as shown below:
f'(x, y) = (1/R) · Σ (i = x−1 to x+1) Σ (j = y−1 to y+1) pij · f(i, j)
where pij are the filter’s integer constants (positive, zero or negative) and R
is a user-selectable non-zero integer ratio. The filtered value (the result of
applying the filter) is the new gray scale value of the (x, y) pixel (truncated
to integer domain 0 ... 255).
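The filter formula above can be sketched in plain Python (an illustration, not ISPOT's implementation); note how edge pixels are left unchanged and results are truncated to the 0..255 domain:

```python
def apply_filter(image, p, ratio):
    """Apply a 3x3 gray-level filter with integer constants `p` (a 3x3 list
    of lists) and a non-zero integer ratio R. Edge pixels, which lack
    neighbors in all eight directions, remain unchanged; filtered values
    are truncated to the integer domain 0..255."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for j in range(-1, 2):        # neighborhood rows
                for i in range(-1, 2):    # neighborhood columns
                    acc += p[j + 1][i + 1] * image[y + j][x + i]
            out[y][x] = max(0, min(255, acc // ratio))
    return out
```

A 3 x 3 array of ones with ratio 9, for instance, averages each interior pixel with its eight neighbors.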
The following significant gray level filters are going to be supplied with
the software.
The “blur filter” is used to study the effect of a blurred image on the
features of an object. By applying this filter, homogeneous areas are left
unchanged. The image contour is distorted because the value of the new
middle pixel is determined by its neighbors’ values more than by its own
value. The filter is defined as follows:
• The other four values in the array equal 2.
• The ratio is 25 - the sum of all the values in the 3 x 3 array.
• The values of the middle column (from top to bottom) are 1, 0, -1.
• The values in the left column of the array are -1.
• The values in the right column of the array are 1.
• The ratio is 1.
4.3.4 Smooth Filter
The “smooth” filter is often used to reduce the intensity of noise in the
image. By applying it, homogeneous areas are left unchanged, but
contours will diffuse. The filter is defined as follows:
Another filter used to reduce the noise in the image is the “pond”
filter; its effect is opposite to that of the blur filter. By applying this filter,
the weighting of the middle pixel is greater than that of its neighbors. The
pond filter is defined as follows:
We can transform a gray level image into a black and white image using the
binarization process. The algorithm is based on a threshold value
estimated by analyzing the gray image histogram.
In Manual mode, the user selects the threshold value, generally based on
the histogram. It is recommended that the threshold be selected at local
minimum points of the histogram in order to eliminate some objects.
In Auto mode, the program searches for the first local maximum point, l,
and the last local maximum point, L, of the histogram (from level 0 to
level 255). The algorithm then determines the absolute minimum which
lies between levels l and L. These levels (l and L) generally correspond to
the first object and to the background (or the opposite). This algorithm is
suitable if the histogram is bimodal: the objects have the same color and
the background is uniform.
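The Auto-mode search can be sketched as follows (an illustrative reading of the algorithm described above, not ISPOT's code):

```python
def auto_threshold(hist):
    """Auto-mode threshold sketch: find the first local maximum l and the
    last local maximum L of the histogram, then return the gray level of
    the absolute minimum lying between them (suitable for bimodal
    histograms)."""
    def is_local_max(g):
        left = hist[g - 1] if g > 0 else -1
        right = hist[g + 1] if g < len(hist) - 1 else -1
        return hist[g] > left and hist[g] > right
    maxima = [g for g in range(len(hist)) if is_local_max(g)]
    l, L = maxima[0], maxima[-1]
    # absolute minimum between the two modes
    return min(range(l, L + 1), key=lambda g: hist[g])
```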
The connectivity analysis and all high-level processing are performed on
the binary image. Segmentation is used to determine the sets of pixels
belonging to objects and the set of pixels belonging to the background.
An object is a connected set of binary pixels. If this set is simply connected,
the object has no holes. If the set is multiply connected, the object has holes.
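Segmentation by connectivity can be sketched with a simple flood fill (8-connectivity is assumed here for illustration; the text does not state which connectivity ISPOT uses):

```python
from collections import deque

def label_objects(binary):
    """Segmentation sketch: label 8-connected sets of '1' pixels.
    Returns a label image (0 = background, 1..n = object ids) and n."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1 and labels[y][x] == 0:
                n += 1                      # found a new object
                queue = deque([(y, x)])
                labels[y][x] = n
                while queue:                # flood-fill the connected set
                    cy, cx = queue.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny][nx] == 1
                                    and labels[ny][nx] == 0):
                                labels[ny][nx] = n
                                queue.append((ny, nx))
    return labels, n
```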
4.6 Object Analysis
Some features describe the position of the object within the image, while
others describe the object's shape (these are invariant to translation,
rotation, reflection and scaling). Only object features are used for pattern
building and for identification.
4.6.3 Standard Features
The standard feature values are estimated using the object’s moments. An
object’s moment is defined as follows:
mpq(A) = Σ (x = 0 to Nx) Σ (y = 0 to Ny) x^p · y^q · fA(x, y)
where:
4.6.3.1 Area
For objects with holes, area represents the area of the object's body.
4.6.3.2 Perimeter
Perim(A) = sum of pixels along the outer contour of the object in the
vertical and horizontal directions.
The perimeter is computed with two corrections, in order to handle the
pixel representation of lines. If p1 and p2 are two pixels such that
p1.x = p2.x + 1 and p1.y = p2.y + 1, then the perimeter (length) of the
segment p1p2 will be computed as √2. This algorithm leads to a maximum
possible error of 8% of the perimeter when an object’s orientation changes.
4.6.3.3 Compactness
Compact(A) = Perim(A)² / Area(A)
4.6.3.4 Eccentricity
Ecc(A) = ( [μ20(A) − μ02(A)]² + 4·μ11(A)² ) / m00(A)
Normalized centered moments are defined as follows:
npq(A) = μpq(A) / μ00(A)^γ,  where γ = (p + q)/2 + 1
Mi1 = n20 + n02
Mi2 = (n30 − 3n12)² + (n03 − 3n21)²
Mi3 = (n30 + n12)² + (n03 + n21)²
Mi4 = (n30 − 3n12)(n30 + n12)[(n30 + n12)² − 3(n03 + n21)²] +
      (n03 − 3n21)(n03 + n21)[(n03 + n21)² − 3(n30 + n12)²]
Mi5 = (n20 − n02)[(n30 + n12)² − (n03 + n21)²] + 4n11(n30 + n12)(n03 + n21)
Mi6 = (n03 − 3n21)(n03 + n21)[(n30 + n12)² − 3(n03 + n21)²] +
      (n30 − 3n12)(n30 + n12)[(n03 + n21)² − 3(n30 + n12)²]
The object's position is defined by its center of gravity. For defining the
center of gravity we can write down the following:
xg(A) = m10(A) / m00(A),    yg(A) = m01(A) / m00(A)
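The moment definition and the center of gravity can be sketched as follows (illustrative Python, with the object given as a list of its pixel coordinates, so that fA = 1 on the object and 0 elsewhere):

```python
def moment(pixels, p, q):
    """Raw moment m_pq of an object given as (x, y) pixel coordinates."""
    return sum((x ** p) * (y ** q) for x, y in pixels)

def center_of_gravity(pixels):
    """x_g = m10 / m00, y_g = m01 / m00; m00 is the object's pixel area."""
    m00 = moment(pixels, 0, 0)
    return moment(pixels, 1, 0) / m00, moment(pixels, 0, 1) / m00
```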
The following coordinates define the rectangle (the black line box) which
contains the object:
4.6.4.3 Angle between Orientation and Horizontal Axes
The angle defined by the object's main axis of orientation (the line arrow
on the Shape Image) and the horizontal axis is computed using one of the
following methods.
θ = (1/2) · tan⁻¹[ 2μ11 / (μ20 − μ02) ] + k · π/2
where:
k = 0 when μ20 − μ02 > 0
k = 1 when μ20 − μ02 < 0
When μ20 − μ02 = 0, θ is either π/4 or 3π/4, depending on the sign of μ11.
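The moment-based method can be sketched as follows (illustrative; the value returned in the fully degenerate case μ20 = μ02 with μ11 = 0, where the orientation is undefined, is an arbitrary assumption of this sketch):

```python
import math

def orientation(mu20, mu02, mu11):
    """Orientation angle from centered moments, following the rule above:
    theta = 0.5 * atan(2*mu11 / (mu20 - mu02)) + k * pi/2."""
    d = mu20 - mu02
    if d == 0:
        # pi/4 or 3*pi/4, depending on the sign of mu11
        return math.pi / 4 if mu11 > 0 else 3 * math.pi / 4
    theta = 0.5 * math.atan(2 * mu11 / d)
    if d < 0:                 # the k = 1 branch
        theta += math.pi / 2
    return theta
```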
It is defined as the angle of the maximum distance from the center of
gravity to a point on the perimeter.
It is defined as the angle of the minimum distance from the center of
gravity to a point on the perimeter.
For identifying objects, the vision system needs a pattern database. Every
pattern in the pattern database is built by training, that is, by analyzing
several images (samples) of known objects of a class. For each sample
image of an object class the system estimates the standard feature
values. Pattern analysis is performed by estimating statistical parameters
(the mean feature vector and the covariance array) for each object class.
The collection of statistical parameter values for all object classes
represents the pattern database.
A pattern is stored and updated as the mean vector and the covariance
array. The mean value of a feature is defined as follows: μi = E{fi}, where
i = 1...N and fi is the ith feature considered. This average is computed with
respect to all samples that define the pattern.
Let f be the vector of all features and μ the vector of the corresponding
mean values:
f = (f1, f2, …, fN)ᵀ
μ = (μ1, μ2, …, μN)ᵀ
where fi is an arbitrary feature variable and f is an arbitrary feature vector,
for which a set of N samples is known.
Σ = E{(f − μ)(f − μ)ᵀ} = | σ11 ... σ1N |
                         |  :    :   : |
                         | σN1 ... σNN |
where:
σij = cov{fi, fj} = E{(fi − μi)(fj − μj)}
σii = σi² = var{fi} = E{(fi − μi)²}
Let f be the feature vector of an object, and μq the mean value vector for
the pattern q; the Bayes criterion is defined as follows:
d(f, μq) = (f − μq)ᵀ Σ⁻¹ (f − μq) + ln(det Σ)
Using the Bayes criterion, the system computes all distances between the
object's feature vector and the mean vector of every pattern. The
object is matched to the pattern having the minimum distance.
This process is done considering the selected standard features (area,
perimeter, compactness, eccentricity and the 6 invariant moments)
evaluated for the object’s body.
Starting with the first pattern in the pattern database, a value is computed
for each selected feature, as follows:
The object can be identified only if its distance to some pattern is less
than the maximum distance. If the distances between the object and all
patterns exceed the maximum distance, the analyzed object is considered
unknown. Even under these conditions, the system will display the nearest
matching pattern.
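The identification rule can be sketched as follows. To keep the sketch self-contained it assumes a diagonal covariance array (one variance per feature), whereas ISPOT works with the full covariance; the names are illustrative:

```python
import math

def bayes_distance(f, mean, var):
    """d(f, mu_q) = (f - mu)^T Sigma^-1 (f - mu) + ln(det Sigma),
    sketched for a diagonal covariance: `var` holds per-feature variances,
    so Sigma^-1 divides by each variance and det Sigma is their product."""
    d = sum((fi - mi) ** 2 / vi for fi, mi, vi in zip(f, mean, var))
    return d + sum(math.log(vi) for vi in var)

def identify(f, patterns, max_distance):
    """Match f to the pattern with minimum distance; the match is None
    (unknown) if every distance exceeds max_distance, but the nearest
    pattern is still reported."""
    name, dist = min(((n, bayes_distance(f, m, v))
                      for n, (m, v) in patterns.items()),
                     key=lambda t: t[1])
    return (name if dist <= max_distance else None), name, dist
```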