
STUDENT MANUAL

COURSE

VI-2000MK-II
VISION SYSTEM

DEGEM SYSTEMS
Copyright © 1999 by I.T.S. Inter Training Systems Ltd. All rights
reserved. This book, or parts thereof, may not be reproduced in any
form without prior written permission from I.T.S. This publication is
based on the exclusive methodology of Degem ® Systems.

In the interest of product improvement, circuits, components or values of components may be changed at any time without prior notification.

First English edition printing, 2000.

Catalogue Number: 9021395620


TABLE OF CONTENTS

INTRODUCTION ............................................................................ 5

1 Introduction ............................................................................................. 6

1.1 General Information ........................................................................... 6

1.2 Product Overview ............................................................................... 6

1.3 Equipment Requirements .................................................................... 7

1.4 Manual Organization ......................................................................... 7

1.5 Release History .................................................................................. 8

1.6 Terminology ....................................................................................... 8

1.7 Hardware Installation ........................................................................ 8

1.8 Software Installation .......................................................................... 8

GRAPHICAL USER INTERFACE ............................................... 9

2 Graphical User Interface ....................................................................... 10

2.1 Overview ........................................................................................... 10

2.2 File menu .......................................................................................... 12

2.2.1 New Project ...........................................................................................12

2.2.2 Open Project ...................................................................................12

2.2.3 Save Project ....................................................................................12

2.2.4 New menu .................................................................................13

2.2.5 Open menu .....................................................................................14



2.2.7 Exit menu ..............................................................................................15
2.3 View menu ........................................................................................ 16

2.3.1 Gray Image .....................................................................................16

2.3.2 Histogram .......................................................................................17

2.3.3 Binary Image ..................................................................................17

2.3.4 Display Patterns .............................................................................18

2.3.5 Live Image .....................................................................................21


2.4 Commands menu .............................................................................. 22

2.4.1 Snap ................................................................................................22

2.4.2 Apply Filter ....................................................................................23

2.4.3 Undo Apply Filter ..........................................................................23

2.4.4 Binarization ....................................................................................23

2.4.5 Object Analysis ..............................................................................24


2.5 Tools menu ....................................................................................... 29

2.5.1 Histogram Optimization .................................................................29

2.5.2 Define AOI .....................................................................................30

2.5.3 Calibrate AOI .................................................................................31

2.5.4 Define Threshold ............................................................................32

2.5.5 Edit Filter .......................................................................................33

2.5.6 Build Pattern ..................................................................................34

2.5.7 Define Dimensions .........................................................................37


2.5.8 Options ..................................................................................................39
2.6 Robot menu ....................................................................................... 43

2.6.1 Build Robot Scene .........................................................................43

2.6.2 Initialize Robot ...............................................................................44

2.6.3 Robot Mode ....................................................................................44

2.6.4 Vision Stages ..................................................................................45

2.6.5 Display Report ...............................................................................45

2.6.6 Analyzed Objects Report ................................................................45

BEGINNER’S GUIDE ................................................................... 46

3 Beginner’s Guide .................................................................................. 47

3.1 General Considerations ................................................................... 47

3.2 Low level vision ................................................................................ 47

3.2.1 Getting Started.......................................................................................48


3.2.2 Image Acquisition .................................................................................49
3.2.3 Defining a framework ...........................................................................59
3.2.4 Filtering the Image ................................................................................66
3.2.5 Options ..................................................................................................77
3.2.6 Summary ...............................................................................................80
3.3 Medium level vision .......................................................................... 82

3.3.1 Analyzing the Image .............................................................................82


3.3.2 Summary ...............................................................................................87
3.4 High level vision ............................................................................... 89

3.4.1 Defining dimensions..............................................................................89


3.4.2 Display Pattern ......................................................................................93
3.4.3 Build Pattern ..........................................................................................94
3.4.4 Build Robot Scene .................................................................................98
3.4.5 Initialization Auto Robot Mode ..........................................................103
3.4.6 Auto Robot Mode ................................................................................103
3.4.7 Vision Stages .......................................................................................104
3.4.8 Summary .............................................................................................106

ADVANCED USER’S GUIDE .................................................... 108

4 Advanced User’s Guide ...................................................................... 109

4.1 Introduction .................................................................................... 109

4.1.1 Images Formats ...................................................................................109


4.1.2 The Functional Architecture ................................................................109

4.2 Image Acquisition ........................................................................... 110

4.2.1 Image Histogram .................................................................................110


4.2.2 AOI Calibration ...................................................................................111
4.3 Image Preprocessing (Applying Filters) ........................................ 112

4.3.1 Blur Filter ............................................................................................112


4.3.2 Emboss Filter.......................................................................................113
4.3.3 Sharp Filter ..........................................................................................113
4.3.4 Smooth Filter .......................................................................................114
4.3.5 Pond Filter ...........................................................................................114
4.4 Image Binarization ......................................................................... 114

4.5 Connectivity Analysis (Image Segmentation) ................................. 115

4.6 Object Analysis ............................................................................... 116

4.6.1 Object’s Features .................................................................................116


4.6.2 Number of Holes .................................................................................116
4.6.3 Standard Features ................................................................................117
4.6.4 Object’s location..................................................................................119
4.7 Object Identification ....................................................................... 121

4.7.1 Building Patterns .................................................................................121


4.7.2 Minimum Distance Identification Algorithm ......................................122

INTRODUCTION

1 Introduction

1.1 General Information

The ISPOT User’s Manual is the comprehensive documentation for the ISPOT system. It describes the system in detail: its functions, how to use it in applications, how to configure it, and how to operate it. The purpose is to provide all the information needed by the people expected to work with the system (students, system designers, commissioning engineers, and service engineers).

1.2 Product Overview

ISPOT is a powerful vision software package that provides low, medium and high level image processing. Its enhanced data processing capabilities, based on sophisticated algorithms and methods, allow image filtering, pattern building, and object analysis and identification.

ISPOT can operate as a stand-alone vision system or as an intelligent sensor for robotics applications, serving both industrial and educational facilities.

1.3 Equipment Requirements

The ISPOT system operates on a 486 or Pentium IBM-compatible PC with keyboard, mouse, monitor and serial/parallel interfaces. In addition, a real-time frame grabber card allows high-resolution image acquisition. ISPOT supports the connection of a frame grabber with up to four cameras.

1.4 Manual Organization

The ISPOT User's Manual provides you with comprehensive documentation for the system. It gives easy access to descriptions and procedures that cover all application features and functions.

The organization of the manual allows beginning users to learn the


operation of the system, and gives advanced users explanations and
instructions for performing more extensive work with all of the system
facilities.

The manual is divided into four parts:

Part 1 - Introduction - describes the system requirements and gives instructions for hardware and software installation and for the configuration and setup procedures.

Part 2 - Graphical User Interface - presents the elements of the graphical interface that ISPOT provides for interaction with the user. From image acquisition to object recognition, this section gives a comprehensive description of the items in the File, View, Commands, Tools and Robot menus.

Part 3 - Beginner’s Guide - describes the functional tools in general and their common properties. This section provides a series of tutorials that teach you how to perform basic and frequently required operations.

Part 4 - Advanced User’s Guide - provides the mathematical and theoretical background of the data processing and functions performed by the system.

1.5 Release History

This is the initial version of the ISPOT User’s Manual.

1.6 Terminology

In the ISPOT system, AOI is used as an abbreviation for “Area of Interest”. It denotes a selected area used to eliminate unrelated data from an image.

1.7 Hardware Installation

ISPOT is designed to work with the PXC200 frame grabber (Imagenation Corporation), in a PCI bus configuration. To install it, plug the board into an empty PCI slot of the computer, power on the PC and, after the boot sequence, install in the Operating System the frame grabber driver that comes with the board. There are different board drivers for the Windows NT and Windows 95/98 Operating Systems.

1.8 Software Installation

Running the Setup.exe program of ISPOT installs the files required to run the vision system. Setup does everything needed for you: it detects the Operating System you are using and installs the ISPOT files needed to run on it. In some configurations, Setup may ask you to reboot the system during the setup process.

GRAPHICAL USER INTERFACE

2 Graphical User Interface

2.1 Overview

ISPOT is an application that runs under the Windows 95/98/NT Operating Systems and has a friendly user interface. When you start the application, the first window displayed (the main window) is the following:

The most important components of this main window are:

• Main menu
• Control bar
• Working Area
• Status bar

The Status bar contains the following items (panels):
▪ Hints Panel – provides hints on mouse movements
▪ Frame Grabber Status Panel – provides information about the accessibility of the PXC200 board to ISPOT
▪ Camera Status Panel – presents information about the status of the camera currently selected in ISPOT
▪ ISPOT configuration name – the name of the ISPOT configuration the user is licensed to use

2.2 File menu

The File menu contains a set of items that allows the user to create, open
or save a project, an AOI, a Gray Image, a Binary Image, etc.

2.2.1 New Project

To create a new project you have to select File/New Project.

2.2.2 Open Project

Using the File/Open Project option you can open a project from long-term storage (HDD, FDD, CD-ROM, etc.).

2.2.3 Save Project

This option from File menu allows you to save a new project or an opened
project.

2.2.4 New menu

File/New allows you to create:

• A new Area Of Interest - AOI (*.aoi)
• A new Gray Image (*.bmp)
• A new Filter (*.flt)
• A new Binary Image (*.bmp)
• New Patterns (*.ptt)
• A New Robot Scene (*.rsc)
• Options (*.opt)

2.2.5 Open menu

File/Open allows you to open from long-term storage:

• An Area Of Interest - AOI (*.aoi)
• A Gray Image (*.bmp)
• A Filter (*.flt)
• A Binary Image (*.bmp)
• Patterns (*.ptt)
• A Robot Scene (*.rsc)
• Options (*.opt)

2.2.7 Exit menu

Closes the ISPOT application.

2.3 View menu

Select the View menu, which contains the following items:

2.3.1 Gray Image

Opens a window that contains the gray image.

This image can be obtained from the camera by selecting Commands/Snap, or it can be loaded directly from a long-term storage device (HDD, CD, FDD) by selecting File/Open/Gray Image.

2.3.2 Histogram

Displays the histogram of the current gray level image. If you modify the settings within Tools/Histogram Optimization, a new histogram will appear on the screen (only if you select View/Histogram).

2.3.3 Binary Image

Opens a window that contains the binary image.

This image is the result of binarizing a gray image (Commands/Binarization).

2.3.4 Display Patterns

Selecting View/Display Pattern opens a two-page Page Control.

The Pattern Name and the Number of samples are displayed at the bottom of the window; on the same row you can find four buttons used to navigate between the different considered patterns.

If you are not satisfied with the defined Pattern or Dimensions, you can use one of the Delete buttons.

2.3.4.1 Standard Features Page

By activating Standard features you can display a list of categories (number of holes, area, perimeter, etc.) for the considered object.

2.3.4.2 Dimensions Page

Selecting Dimensions in the Display Pattern window, you can see the values and the Min/Max limits for the orientation and the used dimensions, as defined with the Tools/Define Dimensions command.

2.3.5 Live Image

Allows you to view the image directly from the camera, giving the user the possibility to adjust the focus and the iris of the camera. An example is shown in the frame below.

The same commands (from the View bar menu) are also available from the View Toolbar.

2.4 Commands menu

2.4.1 Snap

Place the set of objects you want to analyze in the camera's field of view.

Select Commands/Snap from the Main menu.

A gray-scale image of the objects, as seen by the camera, will appear on the screen.

If the objects lie outside the AOI, adjust their position and snap again. Alternatively, modify the AOI size (in Tools/Define AOI) and snap again.
2.4.2 Apply Filter

To improve the image quality, the ISPOT system provides five predefined gray level filters.

1. Click on File/Open/Filter to choose a filter.

2. Click on Open to load the filter to be applied to the image.

3. Select Apply Filter from the Commands menu. The filter is applied to the AOI only. An improved quality of the image can be observed.

2.4.3 Undo Apply Filter

Select Commands/Undo Apply Filter if you are not satisfied with the
results of the Apply Filter procedure. The result will be the previous
image. Only one undo apply filter operation is available.

2.4.4 Binarization

Click on Commands/Binarization. The system will binarize the gray image using the current threshold value. To view the binary image, select View/Binary Image.

Under less than optimal lighting conditions, the image may not contain clearly defined objects.

In order to improve the quality of the image, the binarization threshold has to be changed (select Tools/Define Threshold).
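As a concrete illustration, thresholding can be sketched as follows in Python (a minimal model of the operation, not ISPOT's actual code; the function name and the 2-D list image format are assumptions for the example):

```python
# Minimal sketch of threshold binarization (illustrative only, not
# ISPOT's implementation). Pixels at or above the threshold become
# white (255); all others become black (0).
def binarize(gray_image, threshold):
    return [[255 if pixel >= threshold else 0 for pixel in row]
            for row in gray_image]

gray = [[10, 200],
        [130, 60]]
binary = binarize(gray, 128)   # -> [[0, 255], [255, 0]]
```

Lowering the threshold turns more pixels white, raising it turns more pixels black, which is why changing the threshold visibly reshapes the binary image.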

2.4.5 Object Analysis

After you have obtained a satisfactory binary image, the objects in it can
be analyzed.

To perform object analysis, click on Commands/Object Analysis. Within the Object Analysis window, a four-page Page Control is available.

In the Object Analysis screen the system displays the individual objects within the current image, allowing the user to view their characteristics.

The Pattern Name and the Number of samples are displayed at the bottom
of the window; on the same row you can find four buttons used to navigate
between the different considered objects.

2.4.5.1 Standard Features Page

By activating Standard features you can see the object's area, perimeter, compactness and all the other standard features.
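For example, compactness is commonly derived from area and perimeter; the Python sketch below uses the usual perimeter-squared over 4-pi-area formula (the manual does not state ISPOT's exact definition, so treat the formula as an assumption for illustration):

```python
import math

# Common compactness measure (assumed formula, not confirmed by the
# manual): perimeter squared over 4*pi*area. A perfect circle scores
# 1.0; less compact shapes score higher.
def compactness(area, perimeter):
    return perimeter ** 2 / (4 * math.pi * area)

# A circle of radius 10: area = pi * r**2, perimeter = 2 * pi * r.
c = compactness(math.pi * 100, 2 * math.pi * 10)   # -> 1.0
```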

2.4.5.2 Dimensions Page

If the object is identified, the dimensions defined on this object are displayed inside the Dimensions Page, and the Pattern Name edit box is filled in (along with the number of samples).

2.4.5.3 Position Page

The object's position co-ordinates are given, with measurements relative to the upper left corner of the image.

The dimensions of the rectangle box that encompasses the object (as seen in the Position window) are displayed as Xmin, Ymin, Xmax, Ymax.
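The encompassing rectangle can be sketched as follows (illustrative Python; the pixel-coordinate-list representation of an object is an assumption for the example):

```python
# Sketch of the rectangle box that encompasses an object, computed
# from its pixel coordinates (x grows right, y grows down from the
# upper left corner of the image).
def bounding_box(pixels):
    xs = [x for x, y in pixels]
    ys = [y for x, y in pixels]
    return min(xs), min(ys), max(xs), max(ys)

box = bounding_box([(4, 2), (7, 5), (5, 9)])   # -> (4, 2, 7, 9)
```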

2.4.5.4 Shape Page

By selecting Shape, an arrow is associated with the object, marking its main axis of orientation, which passes through the object's center of gravity. The defined points and dimensions are also drawn in the image.

The same commands (from the Commands bar menu) are also available from the Commands Toolbar.
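One way to compute such an axis is from the second central moments of the object's pixels. The Python sketch below shows this inertia-moment variant only as an illustration (ISPOT also offers Minimum and Maximum Distance Point orientation types under Tools/Options, and its exact computation is not given in the manual):

```python
import math

# Sketch of an inertia-moment orientation axis: the axis passes
# through the center of gravity, at the angle given by the second
# central moments of the pixel coordinates (illustrative only).
def orientation(pixels):
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n      # center of gravity
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels)
    mu02 = sum((y - cy) ** 2 for _, y in pixels)
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels)
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return (cx, cy), angle

# Pixels along the diagonal y = x give a 45-degree axis.
(cx, cy), theta = orientation([(0, 0), (1, 1), (2, 2), (3, 3)])
```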

2.5 Tools menu

2.5.1 Histogram Optimization

Select Tools/Histogram Optimization. If the AOI definition has been changed since the last time this screen was opened, the system automatically optimizes the offset and gain values for the gray level image in the currently active frame and builds the image histogram.

Within the Frame Grabber box you can enter the two values for offset and gain. The result will be a better definition of the gray scale levels in the image.

The Optimization box allows you to select the way (Auto or Manual) the
offset and gain will be defined.

In Auto mode the system automatically optimizes the Offset and Gain in order to obtain a histogram that spans most levels of the gray scale range (0-255).

In Manual mode the system generates the histogram based on the Offset
and Gain values prescribed by the user.

2.5.2 Define AOI

Select Tools/Define AOI and a gray image will be displayed. The rectangle that defines the AOI initially has the same dimensions as the gray image seen in the Define AOI window. In order to select only the desired object, the dimensions of the rectangle have to be modified; its position can be changed using a drag procedure. To confirm the selection, click on [Ok].

2.5.3 Calibrate AOI

Select Tools/Calibrate AOI. The system opens the Calibrate AOI window, used to match the size of a pixel to a physical dimension.

By typing the object size (Width and Height), the application automatically sets a new scale. This scale will be applied to all analyzed objects.

By default the system considers the scale on both axes (X and Y) equal to 1, meaning one mm per pixel.
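The scale computation can be sketched like this (illustrative Python; the function and argument names are assumptions for the example):

```python
# Sketch of AOI calibration: given the known physical size of a
# reference object and its size in pixels, derive a per-axis scale
# in mm per pixel (the default scale is 1 mm per pixel on both axes).
def calibrate(width_mm, height_mm, width_px, height_px):
    return width_mm / width_px, height_mm / height_px

scale_x, scale_y = calibrate(50.0, 25.0, 200, 100)   # -> (0.25, 0.25)
```

Measured pixel dimensions would then be multiplied by this scale to report physical units.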

2.5.4 Define Threshold

Click on Tools/Define Threshold. Within Threshold Type you have to select the operating mode. Click on the slider and drag the mouse to move it. During the mouse movement you can observe the image change within the binary image window according to the new threshold value. The threshold value is automatically linked to the slider position. Alternatively, you can type a certain value for the threshold, observing changes in both the slider position and the binary image content.

Choosing Auto mode for Threshold Type, ISPOT will set the threshold value according to an optimization algorithm for the current gray image.

2.5.5 Edit Filter

The gray image filter is a 3 x 3 convolution mask and a ratio. You have to set all 9 items of the 3 x 3 convolution mask and the ratio value according to the filtering goals. The filter also has an attached Description string.
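The effect of such a filter can be sketched in Python as follows (a minimal illustration of a 3 x 3 mask with a ratio divisor, not ISPOT's implementation; border handling is simplified by leaving edge pixels unchanged):

```python
# Sketch of applying a 3x3 convolution mask with a ratio divisor.
# Each output pixel is the weighted sum of its 3x3 neighborhood,
# divided by the ratio and clamped to the 0-255 gray range.
def apply_filter(image, mask, ratio):
    height, width = len(image), len(image[0])
    out = [row[:] for row in image]          # edge pixels left as-is
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            acc = sum(mask[j][i] * image[y + j - 1][x + i - 1]
                      for j in range(3) for i in range(3))
            out[y][x] = max(0, min(255, acc // ratio))
    return out

# An averaging (smoothing-style) mask: all ones with ratio 9.
smooth_mask = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
img = [[90, 90, 90], [90, 90, 90], [90, 90, 90]]
result = apply_filter(img, smooth_mask, 9)   # center pixel stays 90
```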

2.5.6 Build Pattern

Selecting Tools/Build Pattern opens a three-page Page Control, which is used to build (and update) pattern databases. Pattern data can be obtained from the currently loaded binary image or by acquiring a new image from the camera. The selected object is shown in the Build Pattern-Shape window.

• Pattern name shows the name of the object as determined by the system, based on the currently available pattern database. If the system cannot determine a name for the object, a question mark '?' is displayed. Type the name you want to be associated with the object and press [Enter] or [Learn].

• The Learn command adds the current object's parameters to the pattern database and assigns the object the specified pattern name. The system continues selecting objects to be named until the last object in the image is reached.

You can select the next or previous object in the displayed image by using the navigation buttons: First, Previous, Next, Last.

2.5.6.1 Standard Features Page

The Standard features page displays the values of the parameters of the current object (number of holes, area, perimeter, etc.).

In this dialog box, the number of samples for a given Pattern is also displayed.

2.5.6.2 Dimensions Page

2.5.6.3 Shape Page

By selecting Shape you can see the defined points and dimensions of the object and the associated orientation axis (an arrow marks its main axis of orientation, passing through the object's center of gravity).

2.5.7 Define Dimensions

Click on Tools/Define Dimensions. The considered object is displayed. Using the navigation buttons you can select the desired object.

From this moment you can measure the desired dimension of the object between two defined points. The steps are:

1. From the Point box choose the point number.

2. Click on the object to select the first measurement point.

3. Click Define; the point number will be displayed on the object.

4. Repeat the procedure as many times as needed.

5. To make the connection between two points, select the desired points from the Point box, setting them as first and second within the Take Point box.

6. Go to the Dimensions box, select the desired dimension number and click Define; the connection between the two considered points will be displayed on the object.
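Once two points are defined, the dimension between them amounts to a distance measurement; a Python sketch (illustrative only, shown here in pixel units; applying the AOI calibration scale to obtain mm is assumed):

```python
import math

# Sketch of measuring a dimension between two defined points as the
# Euclidean distance in pixel units (illustrative only; the AOI
# calibration scale would convert this to physical units).
def dimension(p1, p2):
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

d = dimension((0, 0), (3, 4))   # -> 5.0
```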

ISPOT provides predefined Dimension types: Orientation and Dimension. The moment you define a dimension, ISPOT computes its value and memorizes it as a dimension value.

2.5.8 Options

The Options menu allows you to specify parameters which will remain unchanged during a work session or until you modify them. All parameters have default values.

Click on Tools/Options. Within the Options window a three-page Page Control is available.

2.5.8.1 General Page

• Select Camera (White, Red, Blue or Green) allows you to select the camera (by the color of its input cable) which will be used to acquire the image from the selected frame.
• Communication Port selects Com 1 or Com 2 as the serial interface port for communication with the robot controller.
• Picking Height represents the predefined height at which the robot picks objects in the pick command from the Build Robot Scene (the Z axis of the robot at the picking moment).

2.5.8.2 Image Page

Selecting Image within the Options window opens a page which displays and allows you to define the settings and values to be used for image processing.

• Within the Background box you can define Black or White as the background color. The background color must be opposite to that of the object to be analyzed.
• The Binarization Threshold box allows you to choose the mode (Auto or Manual) for defining the threshold values; this selection becomes the default for the Define Threshold window.
• Object Constraints sets the minimum and maximum values for area and perimeter, specifying the range of values for object extraction. Only objects within these limits will be processed.

• The Object Orientation box is used to select the type of the object's axis of orientation: Minimum Distance Point, Maximum Distance Point or Inertia Moment Axis.

2.5.8.3 Standard Features Page

The Standard Features Page allows you to select the set of desired features to be used in the object identification process.

The same commands (from the Tools bar menu) are also available from the Tools Toolbar.

2.6 Robot menu

Communication between the robot controller and the ISPOT system is made through the RS232 communication port of the PC, selected by the user for communication with the robot (from Tools/Options). In order to communicate with the robot controller, ISPOT sends a string of characters through the dedicated RS232 port. This string of characters represents a direct command for the robot controller.
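For illustration, forming such a command string might look like the Python sketch below (the "PICK 120 85" command text and the carriage-return terminator are purely hypothetical; the real command syntax depends on the robot controller):

```python
# Sketch of encoding a direct command string for transmission over
# RS232. The command text and the CR terminator are hypothetical
# examples, not the controller's actual protocol.
def encode_command(command):
    return (command + "\r").encode("ascii")

frame = encode_command("PICK 120 85")   # -> b"PICK 120 85\r"
```

The resulting bytes would then be written to the serial port (Com 1 or Com 2) selected under Tools/Options.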

2.6.1 Build Robot Scene

Selecting Robot/Build Robot Scene opens a window which is used to build the robot scene. A robot scene can be obtained from the currently loaded binary image or by acquiring a new image from the camera. The selected object is shown in the window. Its center of gravity represents, on one hand, the point the user has to define as a specific Robot Point by moving the robot in manual mode to pick the object, and on the other hand, the point to be defined as the same specific Vision Point.

Scene Status displays the current status of the Robot Scene. When the status is synchronized, the user can test the accuracy of the scene by sending the command to the robot to pick the object in the image (the pick action is made with the gripper height value contained in the Picking Height box).

You can select the next or previous object in the displayed image by using
the navigation buttons: First , Previous , Next , Last .

2.6.2 Initialize Robot

Click Robot/Initialize Robot, or click the Initialize Robot button in


the Robot toolbar.

2.6.3 Robot Mode

Click Robot/Robot Mode, or click the Robot Mode button in the


Robot toolbar.

2.6.4 Vision Stages

Click Robot/Vision Stages, or click the Vision Stages button in the


Robot toolbar.

2.6.5 Display Report

2.6.6 Analyzed Objects Report

BEGINNER’S GUIDE

3 Beginner’s Guide

3.1 General Considerations

This section gives a detailed description of the functional tools of the ISPOT system.

The purpose of this section is to give useful hints regarding configuration, and to point out some questions that should be answered in order to get the most out of ISPOT.

3.2 Low level vision

Low level vision includes the following items:

• Sensing (Image Acquisition)


• Image Preprocessing
In this chapter you will learn the basic procedures for:

• Starting ISPOT.
• Image acquisition and preprocessing.

This part also gives a brief description of how to:

• Define the AOI
• Calibrate the AOI
• View the image histogram
• Filter the image
• Get a binary image.

It also gives a brief description of how to use the Options tool. Please refer to the Advanced User’s Guide for a more detailed description.

3.2.1 Getting Started

The first step is to check that the entire vision system (computer, frame grabber card and camera) is properly installed and operable. At this stage the robotic system is not required.

After running ISPOT, the software main menus will appear on the screen.

Check in the Status bar if the Frame Grabber and Camera are connected.
There are three possible situations:

• Status bar – Frame Grabber Ok / Camera Ok. This is the situation you are looking for. In the Status bar you can see the number of the Camera you are working with. If all 4 Cameras (or more than one) are connected to the Frame Grabber, the system connects by default to the first camera available in decreasing order.

• Status bar – Frame Grabber Ok / No Camera. In this case you should check the connections between the Camera and the Frame Grabber, or check whether the Camera is powered on. If, for any reason, you choose another Camera, it is recommended to go to Options and make the change there.

• Status bar – No Frame Grabber / No Camera. Obviously, when you have no Frame Grabber, you have no Camera. The Frame Grabber should be plugged into your system and the Frame Grabber driver should be installed.

3.2.2 Image Acquisition

Click View/Live Image or click the Live Image button in the View
toolbar. Place an object in the camera’s field of view, and try to get the
best illumination and focus.

3.2.2.1 Grabbing the Image

Click Commands/Snap or click the Snap button in the Commands


toolbar. The image that appears on the screen is a gray scale image of the
objects, as seen by the camera.

If the objects lie outside the frame, adjust their position and snap again or
change the frame size and snap again.

3.2.2.2 Frame Grabber Optimization

Click View/Histogram or click the Histogram button in the View
toolbar. The histogram for the gray scale image is a function over 256 equal
adjacent classes within the gray scale. The histogram value of a class is the
number of pixels with a particular gray level.

The histogram optimization is used to determine the proper setting of two
parameters that control the image acquisition: Gain and Offset.

Click Tools/Histogram Optimization or click the Histogram
Optimization button in the Tools toolbar. The system opens the
Histogram Optimization box, allowing the optimization of the Gain and
Offset parameters. Wait for the histogram to settle at the optimal values
(the graph stops changing and the system switches to Manual mode), then
click [OK] to accept the setup. If you are not satisfied, you can change the
Gain and Offset parameters in Manual mode.

This tool allows the user to change Frame Grabber parameter values in
order to improve the acquisition system resolution.
Note that this tool is used to determine the proper setting of two
parameters that control the image acquisition: gain and offset. Gain and
Offset are set in order to obtain a better gray level resolution. In general,
the offset controls the position of the lowest levels of the histogram and
the gain controls the length of the histogram (the difference between the
lowest and the highest pixel intensity levels of the image).
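The relationship between Gain, Offset and the histogram can be sketched in a few lines of Python. This is illustrative only: ISPOT does not expose such an interface, and the simple linear model below is an assumption about how the frame grabber behaves, not its actual circuitry.

```python
import numpy as np

# Hypothetical 512 x 384 gray scale image (the manual's image format),
# filled with random mid-range values for illustration.
rng = np.random.default_rng(0)
image = rng.integers(60, 180, size=(384, 512), dtype=np.uint8)

# Histogram over 256 equal adjacent classes: the value of each class is
# the number of pixels with that gray level.
hist, _ = np.histogram(image, bins=256, range=(0, 256))

# A simple linear model of the Gain/Offset effect (an assumption): the
# offset shifts the lowest levels of the histogram, the gain stretches
# its length.
def apply_gain_offset(img, gain, offset):
    out = img.astype(np.float32) * gain + offset
    # Clipping models saturation: information is lost at 0 and 255.
    return np.clip(out, 0, 255).astype(np.uint8)

stretched = apply_gain_offset(image, gain=1.5, offset=-60)
print(image.min(), image.max(), "->", stretched.min(), stretched.max())
```

Stretching the histogram over more of the 0..255 range is what "better gray level resolution" means here, while the clipping shows how a poorly chosen Offset saturates the image, as in the examples that follow.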

Examples:

The gray image histogram for the Frame Grabber parameters (Gain and
Offset) at their default values.

The Gray Image for these parameters:

The Gray Image Histogram after the histogram optimization process:

The Gray Image for these parameters:

The gray image histogram after manually changing the Gain parameter
value to 50.

The Gray Image for these parameters:

The gray image histogram after manually changing the Offset parameter
value to 130. The image is partially saturated, losing gray level
information in pixels of high intensity.

The Gray Image for these parameters:

The gray image histogram after manually changing the Offset parameter
value to 80. Note that the image loses gray level information in
pixels of low intensity.

The Gray Image for these parameters:

3.2.3 Defining a framework

3.2.3.1 Defining Area of Interest (AOI)

Take an image from the camera (click Commands/Snap or click the
Snap button in the Commands toolbar). You can also use bitmaps
from the existing library by selecting File/Open/Gray Image.

Click Tools/Define AOI or click the Define AOI button in the Tools
toolbar. An image, as seen by the camera, will appear.

In order to eliminate unrelated data from the images (e.g., dirt specks,
other objects), you should define a frame by cropping the picture, thereby
creating an AOI.

Click on the image. A frame cursor will appear around the image. Drag
each side of the cursor so that only the desired part of the image will
remain inside the marked area.

Press [OK] to save the frame definitions or [Cancel] to discard the changes
and keep the previous AOI.

3.2.3.2 Getting a Binary Image

Click Commands/Binarization or click the Binarization button in
the Commands toolbar. The binarization process reduces a gray scale
image to a black and white image. The algorithm is based on the gray scale
image histogram.

Click View/Binary Image or click the Binary Image button in the
View toolbar. The binary image will be displayed.

Click on an image object. A frame will appear around the object.
Right-click inside the marked area. A Binary Image popup
menu, which contains several system tools, will appear, and you can apply
those tools to the selected object.

3.2.3.3 Define Threshold

The purpose of this tool is to obtain a good binary image. It refers to the
threshold used for the binarization: all pixels below the threshold level are
considered 0’s and all pixels above this level are considered 1’s. Click
Tools/Define Threshold or click the Define Threshold button in the
Tools toolbar.

Select Auto mode. In this mode the system automatically fixes a
binarization threshold, which is the threshold suggested by the system
under a bimodal histogram assumption; that is, all objects have the same
color and the background color is uniform. The program searches for the
first local maximum and the last local maximum point of the histogram,
from level 0 to 255, and then determines the absolute minimum which lies
between these points. The values of these points generally correspond to
the first object and to the background (or the opposite).
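The Auto mode rule just described (first local maximum, last local maximum, absolute minimum between them) can be sketched as follows. This is an illustrative reconstruction, not ISPOT's actual code, and the fallback value for non-bimodal histograms is an assumption.

```python
import numpy as np

def auto_threshold(hist):
    """Reconstruction of the Auto mode rule described above: find the
    first and the last local maximum of the 256-bin histogram, then
    return the gray level of the absolute minimum lying between them."""
    maxima = [i for i in range(1, 255)
              if hist[i] >= hist[i - 1] and hist[i] > hist[i + 1]]
    if len(maxima) < 2:
        return 128  # assumption: fall back to mid-scale
    first, last = maxima[0], maxima[-1]
    return first + int(np.argmin(hist[first:last + 1]))

# Synthetic bimodal histogram: objects around level 40, background around 200.
levels = np.arange(256)
hist = (np.exp(-((levels - 40) ** 2) / 200) * 1000
        + np.exp(-((levels - 200) ** 2) / 800) * 3000).astype(int)
print(auto_threshold(hist))  # a level in the valley between the two peaks
```

The returned level falls in the valley between the object peak and the background peak, which is exactly where the manual recommends placing a manual threshold as well.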

Select Manual mode and move the cursor in the histogram until it
produces a clearly defined black and white image in the frame, and then
click [OK]. In Manual mode, the user selects the threshold value, generally
based on the histogram. It is recommended that the threshold be selected at
local minimum points of the histogram in order to eliminate some objects.
Note that in Auto mode the defined threshold changes according to the
current histogram, while in Manual mode the defined threshold is fixed
and keeps its value while the histogram changes.

Examples:
1. Binary Image with Automatic Threshold (Threshold Value 107)

2. Binary Image with Manual Threshold value 200:

3. Binary Image with Manual Threshold value 40

Observations:

If lighting conditions are less than optimal, the image may not contain
clearly defined objects. In order to improve the quality of the image,
you need to change the binarization threshold.

Click Commands/Binarization or click the Binarization button.
The system will now binarize the gray scale image according to a default
threshold value. Click Tools/Define Threshold or click the Define
Threshold button in the Tools toolbar. Move the cursor in the
histogram. The image changes according to the new threshold value.

3.2.3.4 Calibrating AOI

This tool is used to determine the horizontal and vertical size of a pixel
in mm.

Click Tools/Calibrate AOI, click the Calibrate AOI button in the
Tools toolbar, or right-click on the object in the image you want to
analyze. The Calibrate Frame window will open. Search in the frame
until you find a disk with known vertical and horizontal dimensions
(it is better to use a disk for calibration because its measurements do not
depend on rotation).

Enter the exact Width (X) and Height (Y) measurements of the object you
are using for the calibration, and click [OK].

Note the Pixel Dimension the system has calculated.


Note that the actual geometric configuration of the image acquired from
the camera depends on the camera lens, the distance from the camera to
the analyzed scene, the horizontal and vertical pixel resolution of the CCD
camera, and so on. If the camera or the camera position is changed,
previously performed calibrations will no longer be valid. The Define
Threshold and Histogram Optimization procedures should be performed
prior to the frame calibration, since they affect the precision of the frame
calibration procedure.
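The arithmetic behind the calibration is simple: dividing the known physical dimensions of the calibration object by its extent in pixels gives the pixel size in each direction. A sketch with hypothetical numbers:

```python
# Hypothetical numbers: a calibration disk that spans 120 x 90 pixels in
# the image, with a known physical size of 30.0 mm in each direction
# (non-square pixels are why both directions must be calibrated).
disk_width_px, disk_height_px = 120, 90
disk_width_mm = disk_height_mm = 30.0

pixel_width_mm = disk_width_mm / disk_width_px      # mm per pixel, horizontal
pixel_height_mm = disk_height_mm / disk_height_px   # mm per pixel, vertical

# Any later measurement converts pixel counts to millimetres:
object_width_mm = 48 * pixel_width_mm               # 48 pixels wide
print(pixel_width_mm, pixel_height_mm, object_width_mm)
```

All pixel counts and dimensions above are made up for illustration; in practice you enter the measured Width (X) and Height (Y) of your own calibration object.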

3.2.4 Filtering the Image

3.2.4.1 Filter

Filters are used to improve the quality of the image. The ISPOT system
provides five predefined gray level filters (Blur, Emboss, Pond, Sharp and
Smooth), which are the most commonly used in filtering.

Click Tools/Edit Filter or click the Edit Filter button in the Tools
toolbar to open the Edit Filter dialog box.

Click File/Open/Filter to choose the filter to be applied to the image. For
example, from the list of files in the ISPOT directory, choose
SHARP.FLT, and click [OK] to load the filter. If you choose to change
the filter values, you have to save these changes under another name.

Click Commands/Apply Filter or click the Apply Filter button in
the Commands toolbar. Apply Filter processes the image only inside the
AOI. Observe the improved quality of the image.

Which filters should you use? The five standard filters provided with the
system cover the most common image processing needs. In some cases an
extended filter is needed, and the system enables the user to edit a new
filter in order to implement a particular filtering method.
Note that filtering the image is a method of image enhancement. By
filtering an image you can obtain a higher quality image from a
processing point of view (that is, you remove or emphasize some
characteristics of the gray level image). Examples are the Smooth filter,
which removes noise from the gray image, and the Sharp filter, which
emphasizes edges. The higher the image quality, the better the image
analysis.

Examples

The original Gray Image is the gray image resulting from Histogram
Optimization. In the following examples you can see how the gray image
changes after applying each of the five predefined gray level filters. Please
refer to the Advanced User’s Guide for a more detailed description.

• Blur Filter

The Gray Image after applying the filter five times: (observe that
the image is blurred; the edges are not so clear).

• Emboss Filter

The Gray Image after applying the filter once: (the filter
extracts the object contours; homogeneous areas become black).

• Pond Filter

The Gray Image after applying the filter five times: (the effect is to
reduce the intensity of noise in the image).

• Sharp Filter

The Gray Image after applying the filter two times: (image contrast
is enhanced).

• Smooth Filter

The Gray Image after applying the filter two times: (observe that
contour lines tend to diffuse).

The system allows you to edit your own filter.

1. For instance, if you want to enhance the image contrast more than the
Sharp filter does, you can increase both the middle value (to more than 5)
and the negative values (to less than –1). These changes keep the
weights of all the pixels in the equation while increasing the
difference between their values. The differentiation effect will be
greater, but image noise will also be amplified.

The Gray Image after applying the filter two times: observe that
image noise is also amplified.

2. In order to enhance the object contours, you can use a filter defined as
follows:

• The values of the middle column (from top to bottom) equal: 1, 0, -1.
• The values in the left column of the array equal 1.
• The values in the right column of the array equal -1.
• The ratio is 1.
As a result, homogeneous areas become black and the object contour is
detected; because of the differences computed, the contour tends to become
white. Observe that the edges in the NE-SW directions are emphasized
(the opposite of the direction of the Emboss filter action presented before).

The Gray Image after applying the filter one time.
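The filter just defined can be written out as a 3x3 kernel and applied by plain convolution over the AOI. The sketch below is illustrative; the border handling and the exact role of the ratio are assumptions about how ISPOT applies a filter.

```python
import numpy as np

# The contour filter defined above: left column 1, middle column 1, 0, -1
# (top to bottom), right column -1, ratio 1. The kernel weights sum to 0,
# which is why homogeneous areas come out black.
kernel = np.array([[ 1,  1, -1],
                   [ 1,  0, -1],
                   [ 1, -1, -1]])

def apply_filter(img, k, ratio=1):
    """One pass of a 3x3 filter: each output pixel is the weighted sum of
    its neighbourhood divided by the ratio, clipped to the 0..255 gray
    scale. Border pixels are left unchanged in this sketch."""
    h, w = img.shape
    out = img.astype(np.int32).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            region = img[y - 1:y + 2, x - 1:x + 2].astype(np.int32)
            out[y, x] = int((region * k).sum()) // ratio
    return np.clip(out, 0, 255).astype(np.uint8)

# A homogeneous area produces 0 (black); an edge produces a strong response.
flat = np.full((5, 5), 100, dtype=np.uint8)
print(apply_filter(flat, kernel)[2, 2])  # 0
```

Running the same function on an image with a gray level step shows the opposite behaviour: the summed differences across the step saturate toward white, exactly as described for the object contours.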

3. For contour detection, you can use a Sobel mask. In terms of a digital
image, a mask is an array designed to detect some invariant regional
property. We are now interested in detecting transitions between
regions, and for this we can consider two Sobel operators (templates)
applied in the vertical and horizontal directions respectively.

• Sobel operator applied on vertical direction:

The Gray Image after applying the mask.

• Sobel operator applied on horizontal direction:

The Gray Image after applying the mask.
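In their standard form (an assumption, since the manual does not print the coefficients it uses), the two Sobel templates and their combination into a single edge magnitude look like this:

```python
import numpy as np

# Standard Sobel templates (an assumption about the exact coefficients).
# One template responds to gray level transitions in the horizontal
# direction (vertical edges), the other to transitions in the vertical
# direction (horizontal edges).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
sobel_y = sobel_x.T

def edge_response(region):
    """Combine both directional responses into one edge magnitude for a
    single 3x3 neighbourhood."""
    gx = (region * sobel_x).sum()
    gy = (region * sobel_y).sum()
    return np.hypot(gx, gy)

# A step in gray levels gives a strong response; a flat region gives 0.
step = np.array([[10, 10, 200],
                 [10, 10, 200],
                 [10, 10, 200]])
print(edge_response(step))                  # 760.0
print(edge_response(np.full((3, 3), 10)))   # 0.0
```

Each template alone detects edges in one direction, as in the two images above; combining both magnitudes gives a direction-independent contour response.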

4. We can also use the Laplacian operator, which is considered to be an
estimation of the second derivative of the gray level image.

The Gray Image after applying the mask.

3.2.4.2 Undo Filter

Click Tools/Undo Filter to open the Undo Filter dialog box. The system
provides only a single level of undo.

3.2.5 Options

The Options tool provides a collection of the most important settings of
the system. This section gives an overview of those settings.

Click Tools/Options.

Point to the General page. You can select the camera you want to use. You
can also set the Communication Port for the communication with the
robot. The available cameras are listed, and the one you select is then
shown in the Status bar. Otherwise, the Status bar will show No Camera.

Point to the Image page. Through Object Orientation you can choose the
method for object orientation estimation. Object Constraints lets you adjust
the limits of the objects analyzed by the system. You can also see both the
binarization threshold value and the binarization method, and you can
change the Frame Grabber parameters.

Point to the Standard Features page. This tab enables the user to
activate/deactivate the values of the standard features (such as area,
perimeter, compactness, eccentricity and invariant moments) for the
identification process.

Let’s come back to Object Orientation. The information about the
orientation axis of the object, together with the information about the
object position, allows the system to calculate the position and the roll
angle of the robot gripper.

There is a trick you can use if, after defining an AOI, you still don’t have
a clean image. In order to remove unrelated data (like specks or other
objects) from the AOI, you can go to Object Constraints and change
Min. Area, Min. Perimeter, etc.

3.2.6 Summary

There are several aspects this chapter points out:

1. After grabbing an image, it is set as the current image. In order to
improve the gray level acquisition resolution of the system,
change the Frame Grabber parameters.

Observation 1

The binarization threshold changes after modification of the Frame
Grabber parameters.

2. In order to eliminate unrelated data from the images (e.g., dirt
specks, other objects) you have to define an AOI. Use
Calibrate AOI in order to get the real position and dimensions
of the objects. If the binarization result is not satisfactory,
you can improve it using a gray level filter.

Observation 2

Observe the Histogram changes after defining an AOI.

Observe the Histogram changes after defining and applying a filter.

3. The system provides several gray level filters in order to
improve the quality of the image.

4. In order to remove unrelated data (like specks or other
objects) from the AOI, you have to go to Options/Object
Constraints and change Min. Area, Min. Perimeter, etc.
Observation 3

This is recommended if after you first define an AOI, you still have
unrelated data in your frame.

Notes:

Use this page for notes relating to image acquisition and image
preprocessing scenarios.

3.3 Medium level vision

Medium level vision includes the following items:

• Image segmentation
• Image description (Object Analysis)
Image segmentation is performed on binary images; its purpose is to
determine which pixels are part of the object and which are part of the
background.

Image description gives a description of the object in terms of the various
features that may be used to make a decision about the object. Those
features are divided into two categories: intrinsic features, such as number
of holes, area, perimeter, compactness, eccentricity and invariant moments;
and dimensions. Some features describe the position of the object within
the frame, and others describe the object’s shape. Please refer to the
Advanced User’s Guide for a more detailed explanation.

3.3.1 Analyzing the Image

This tool is used to analyze the objects in a binary image. A set of
features is calculated for each object. Features include area, perimeter,
center of gravity, and so on, and can be used to identify objects, to
measure position and orientation so that the robot can pick objects up, and
to check dimensions for quality control.
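Features such as area and center of gravity follow directly from the binary pixel array. The fragment below is a minimal sketch of that idea, not ISPOT's own implementation:

```python
import numpy as np

# A small binary image (1 = object pixel, 0 = background) illustrating
# how basic features follow from the pixel array.
obj = np.array([[0, 0, 0, 0, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 0, 0, 0, 0]], dtype=np.uint8)

area = int(obj.sum())                       # number of object pixels
ys, xs = np.nonzero(obj)
center_of_gravity = (xs.mean(), ys.mean())  # mean pixel position (x, y)
print(area, center_of_gravity)
```

Multiplying the pixel counts by the calibrated pixel size (from Calibrate AOI) turns these pixel-based features into physical dimensions.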

Once you have got a good binary image, you can analyze the objects in it.

Click Commands/Object Analysis, click the Object Analysis button
in the Commands toolbar, or right-click on the object you want to analyze.

Click on Shape. There you can see one by one, using navigation buttons
the individual objects within the current image. The white arrow on the
object marks the object’s axis of orientation, which passes through the
object’s center of gravity.

Click on Standard Features to see the parameters of the selected object.
The number of holes is estimated for the object.
Standard Features are estimated, which means that the user cannot
change their values. The Options tool enables the user to activate/deactivate
some features for the identification process.

Click on Position. There you can see the dimensions of the rectangle
that frames the object, as well as the coordinates of the object position
relative to the upper left-hand corner of the frame.

Click on Dimensions. There you can see the values of the dimensions
defined for this class of objects. For a given object, a dimension
defined between two points has a value. This value, with a default
tolerance of 10%, is taken by the system and displayed in the Define
Dimensions text box. View Pattern enables you to change these
dimensions.

If the pattern is recognized, you can see its name in Pattern name text
box. Therefore in Object Analysis the system displays the pattern
dimensions and the actual value of the defined dimensions for the
recognized object.

3.3.2 Summary

There are several aspects this chapter points out:

1. Object Analysis is an informational tool. At this stage you cannot
change the object characteristics; you can only display and analyze
them.

2. The information contained in the Object Analysis / Position tab allows
the system to calculate the position and the value of the roll angle of
the robot gripper.
Observation 1

The Object Analysis tool gives you fast and complete information about
the analyzed objects.

Notes:

Use this page for notes relating to image analyzing and segmentation
scenarios.

3.4 High level vision

High level vision includes the following items:

• Object recognition
• Interpretation (building pattern and dimensions)

In this chapter you will learn about defining dimensions, building a
statistical / pattern database, and how to synchronize the vision and robot
systems. Synchronization transforms positions in the camera coordinate
system to positions in the robot coordinate system, thus allowing the robot
to manipulate objects that the vision system has identified. In automatic
mode the system establishes bi-directional communication between the
vision system and the robot system. In this case, the stages of the vision
system will be displayed.

3.4.1 Defining dimensions

This tool allows you to define features as limits of acceptance for a
specified object. For every object you can define up to 10 such
dimensions. In Define Dimensions, dimensions can be defined only for a
recognized pattern. The dimensions are not statistically estimated. For a
given object, a dimension defined between two points has a value. This
value, with a default tolerance of 10%, is taken by the system and
displayed in the Define Dimensions text box. View Pattern enables you to
change these dimensions.

Click Tools/Define Dimensions or click the Define Dimensions button in
the Tools toolbar. In the window you can see the image objects one by
one. This screen allows you to define points on the object’s contour. These
points will be used for defining dimensions. The points are defined for a
particular type of object, which means the identification has already been
done. All points lie on the contour of the object and are defined by the
system in polar coordinates relative to both the object’s center of gravity
and the orientation axis of the object.

To define a point, do the following:

1. Using the mouse, place the cross-cursor anywhere on the contour of the
object.

2. In the Point list box, select a point number you want to define as a first
point, Point 4 for instance, and then click Define. Repeat this operation
once again and choose another point number, Point 2 for instance.

3. In the Point list box, go to the point number which you chose as the
first point, and under Take point, click the First button. You can see the
number of the point you selected in the left box.

4. Repeat this operation with the other point, and under Take point, click
the Second button. You can see the number of the second point you
selected in the left box, and the Define button for the Dimension is
enabled. This means that you can define a dimension between the first
and the second point. There are up to 10 user-defined dimensions plus
the predefined object orientation, and a maximum of 20 points is
provided for these.

5. In the Dimension list box, choose a dimension number, Dimension 1 for
instance, and then click Define. Now you can see in the Dimension list
box a text like Dimension 1, Define (1, 2) or Dimension 3, Define (4, 2),
and also a blue line between the two points defined on the object’s
contour. In the Point list box, the points that define a dimension are now
marked as connected, Point 1, Connected, and Point 4, Connected for
instance.

The management of dimensions and points includes delete commands. The
user can delete any defined dimension and any defined point, but not
connected points. After deleting a dimension, either of its two points that
is not connected in another dimension will have the status Point Defined
instead of Point Connected (and can then be deleted).

3.4.2 Display Pattern

The Display Pattern tool allows the user to inspect the pattern database, to
erase a pattern or the defined dimensions (for the selected pattern), or to
change a dimension value, Min value, Max value or a percentage value
(the default values of dimensions are defined in Define Dimensions). For
a given object, a dimension defined between two points has a value. This
value, with a default tolerance of 10%, is taken by the system and
displayed in the Dimensions text page. If you want this information as a
percentage, enable the % check box.

Click View/Display Pattern or click the Display Pattern button in
the View toolbar.

3.4.3 Build Pattern

This tool provides an assisted learning environment for building a
statistical / pattern database. We are not interested in object position at this
stage. In Build Pattern you can display the pattern dimensions and the
actual value of the defined dimensions for the recognized object.

Click Tools/Build Pattern, click the Build Pattern button in the
Tools toolbar, or right-click on the object you want to analyze.

Point to the Standard Features page. There you can find information about
some features, including the orientation. You can also see the number of
samples in a dedicated textbox.

Point to the Dimensions page. You can see the dimensions for a
recognized object.

Point to the Shape tab. There you can select the object you want to learn
using the First, Previous, Next and Last toolbar buttons. You can also see
the number of samples of this pattern in the dedicated textbox.

In the Pattern Name dialog box, type a name for the selected object (e.g.
PENTAGON), and then press Learn. Any name other than “?” is valid.

Once an object has been learned, the system automatically selects the next
object in the image.

Repeat the learning process for each object, assigning each type a different
name (e.g. PEN).

Once names have been assigned to all objects in the image, the Learn
button becomes unavailable.

To create a statistical database for the system, you must capture several
images of the objects in various positions.

Slightly turn and move the objects in the camera’s field of view. If the
objects have been moved outside the frame, readjust their location or
redefine AOI.

With this new image, repeat the learning process.

Continue the learning process with different images until the system
automatically and correctly fills in the Pattern Name box with the name
you assigned to the object. The system identifies an object by matching its
patterns to the ones contained in the predefined pattern database.

In the Standard Features tab, make sure the most powerful features have
been selected.

To identify an object, the system compares object feature values in a
new or saved image to patterns in a database. The object name determined
by the system, based on the selected features, appears in the Pattern Name
text box. If the system is unable to provide the name that matches the
name you assigned to the object, you must enlarge the database by
building more patterns.

Statistically, after 3 or 4 learning sessions there will be no errors if the
objects differ in shape.
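Feature-based identification of this kind can be sketched as a nearest-pattern search. This is a simplified stand-in for whatever statistical matching ISPOT actually performs; the feature values, the distance measure and the tolerance below are all hypothetical.

```python
import math

# Hypothetical pattern database: each name maps to the mean values of
# the selected standard features, learned from several samples.
patterns = {
    "PENTAGON": {"area": 1550.0, "compactness": 0.85},
    "PEN":      {"area": 900.0,  "compactness": 0.25},
}

def identify(features, db, tolerance=0.15):
    """Return the pattern whose stored features are closest to the
    measured ones (relative distance), or "?" if none is close enough."""
    best_name, best_dist = "?", float("inf")
    for name, ref in db.items():
        dist = math.sqrt(sum(((features[k] - ref[k]) / ref[k]) ** 2
                             for k in ref))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= tolerance else "?"

print(identify({"area": 1520.0, "compactness": 0.83}, patterns))  # PENTAGON
print(identify({"area": 300.0,  "compactness": 0.95}, patterns))  # ?
```

The "?" result corresponds to the unrecognized case in the manual: the database must then be enlarged with more learned samples.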

3.4.4 Build Robot Scene

When the robot and vision coordinate systems are synchronized, ISPOT
can instruct the robot to pick up identified objects. In this chapter you will
learn the procedure for synchronization, which involves teaching the same
physical points to both the robot and the vision coordinates systems.

Make sure the entire system (robot, controller, frame-grabber card, and
camera) is properly installed and operable.

Home the robot.

Make sure you have done the following:

• Use Tools/Histogram Optimization to adjust and set the gain and
offset so that the object can be analyzed.
• Use Tools/Calibrate AOI to calibrate the frame.
• ISPOT may already have a robot synchronization loaded. If either the
robot position or the camera position has been physically altered
since the robot synchronization was last performed, press the Delete
button in both the Robot Point and Vision Point definitions.
In order to perform the synchronization follow the next steps:

Place the object within the camera’s field of view, and within the robot’s
working envelope. The object used for synchronization should have a
shape that the robot can easily grasp at its center of gravity.

Open the gripper. Move the robot to the object. Close the gripper, and
make sure the robot can grasp the object in this position.

When the gripper successfully grasps the object, select Robot Point / Point
4, and click Define.

Move the robot away from the camera’s field of view. Be sure that the
object position is not modified. Click on Snap button to capture the image
of the object.

Click Robot/Build Robot Scene, click the Build Robot Scene button
in the Robot toolbar, or right-click on the object you want to use for
synchronization. From this point on, the robot’s point of origin, the
camera, and the objects must remain fixed.

Record the vision coordinates by marking the object grasped by the robot.
Select Vision Point / Point 4, and click Define.

Then repeat the steps from 4 to 9 for another position of the object in the
image. In this way you will define Robot/Point 3 and Vision/Point 3. You
can stop now, but it is recommended to repeat the process once again (for
a different position) for Robot/Point 2 and Vision/Point 2. You can define
up to 6 pairs of points (Vision and Robot). Repeating the procedure
improves the accuracy of the scene parameters, because they are
statistically estimated.
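One plausible way to realize such a synchronization is to fit an affine transform to the taught point pairs by least squares. The manual does not state ISPOT's actual method, so the sketch below is an assumption, with made-up coordinates:

```python
import numpy as np

# Hypothetical taught point pairs: the same three physical points seen in
# vision (pixel) coordinates and in robot coordinates.
vision_pts = np.array([[100.0, 50.0], [300.0, 60.0], [200.0, 250.0]])
robot_pts  = np.array([[12.0, 40.0],  [52.0, 38.0],  [30.0, 0.0]])

# Fit a 2D affine transform robot = [x, y, 1] @ A by least squares.
# With more than three pairs the extra points average out measurement
# noise, which matches the advice to repeat the procedure.
ones = np.ones((len(vision_pts), 1))
X = np.hstack([vision_pts, ones])                  # N x 3 design matrix
A, *_ = np.linalg.lstsq(X, robot_pts, rcond=None)  # 3 x 2 coefficients

def to_robot(xy):
    """Map a vision (pixel) coordinate to robot coordinates."""
    x, y = xy
    return np.array([x, y, 1.0]) @ A

print(to_robot(vision_pts[0]))  # reproduces the taught robot point
```

Three non-collinear pairs are the minimum that determines the transform, which is consistent with the minimum of three point pairs taught in the procedure above.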

If the Scene Status dialog box displays Synchronized, the synchronization
is successful. Then the Pick button will become enabled.

Check the synchronization.

• Move the robot away from the camera’s field of view.
• Place an object in the camera’s field of view in a point inside the
robot envelope.
• Press Commands/Snap to capture the image of the object.
• Go to Build Robot Scene Dialog box, choose the object and press
Pick.
• The robot will move to the location of the object, grasp it and pick it
up.

3.4.5 Initializing Auto Robot Mode

Click Robot/Initialize Robot, or click the Initialize Robot button in
the Robot toolbar.

This command initializes the communication with the robot in order to
put the robot in a state ready for automatic mode.

The system displays a message box with the message:

' MAKE SURE THE ROBOT IS IN "NST" POSITION !!! '

This message box has two buttons: [Ok] and [Cancel].

If the user chooses [Ok], ISPOT sends the robot the direct command
“rn 10” (run from line 10).

Line 10 is the first line of the Initialization routine inside the robot
controller.

3.4.6 Auto Robot Mode

Click Robot/Robot Mode, or click the Robot Mode button in the
Robot toolbar.

From this stage on the robot will be in Auto Mode. In automatic mode
both systems are connected. The vision system receives commands from
the robot to snap and analyze the image, and communicates to the robot
whether the object is rejected or accepted.

In Robot Mode the relationship between ISPOT and the robot is the
following: ISPOT is the slave and the robot is the master.

In the “Waiting for the Robot” stage, ISPOT periodically polls the robot
(with a 500 millisecond period) for the orders the robot is sending.

This polling is performed by ISPOT reading the counter 20 value of the
robot controller.

Counter 20 going to 2 is interpreted by ISPOT as a “Wake up” message
from the robot. This command is used by the dual ISPOT – robot system
for synchronization (the handshake). On receiving this value, ISPOT
performs its own initialization of the Robot Mode (and goes to the
“Waiting for the Robot” Vision Stage) and then sends the robot
controller the command:

“rn 30” (run from line 30).

Line 30 is the first line of the Automatic mode routine inside the robot
controller.

Counter 20 going to 1 is interpreted by ISPOT as an Inspect command.
This is an “every cycle” command: the robot orders ISPOT to perform the
whole cycle, from snapping the image to identification of the object. Every
time the robot places an object in the vision image scene, the robot will
restart this cycle.
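The counter-20 protocol described above can be sketched as a polling loop. In this sketch, read_counter and send_command are hypothetical stand-ins for the real serial communication with the robot controller, and the vision cycle itself is reduced to a placeholder:

```python
import time

# Sketch of the master/slave handshake: ISPOT polls counter 20 every
# 500 ms and reacts to the two values described above.
def run_robot_mode(read_counter, send_command, cycles=3):
    for _ in range(cycles):
        value = read_counter(20)
        if value == 2:        # "Wake up": handshake, start the auto routine
            send_command("rn 30")
        elif value == 1:      # "Inspect": snap, filter, binarize, analyze
            passed = True     # placeholder for the real vision cycle result
            send_command("sc 20,10" if passed else "sc 20,11")
            send_command("rn 270")
        time.sleep(0.5)       # 500 millisecond polling period

# Simulated controller for illustration:
sent = []
values = iter([2, 1, 0])
run_robot_mode(lambda counter: next(values), sent.append)
print(sent)  # ['rn 30', 'sc 20,10', 'rn 270']
```

The sc 20,10 / sc 20,11 branch corresponds to the "Test Passed" / "Test Failed" outcome described in the Vision Stages section.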

3.4.7 Vision Stages

Click Robot/Vision Stages, or click the Vision Stages button in the


Robot toolbar.

The Vision Stages other than “Waiting for the Robot” are “Snap Image”,
“Apply Filter”, “Binarization” and “Object Analysis”. All these stages are
delayed by a 2-second sleep period (in order to allow the user to notice
the steps in processing the image).

At the end of the cycle, after the object identification, ISPOT displays
“Test Passed” if the object is identified and its dimensions are within the
prescribed tolerances, or “Test Failed” in any other case.

At this point ISPOT sends the command

'sc 20,10' (set counter 20 to value 10) in the “Test Passed” case, or

'sc 20,11' (set counter 20 to value 11) in the “Test Failed” case.

The cycle is ended by the ISPOT command

'rn 270' (run from line 270). From line 270 the robot will read the
counter 20 value and will pick up the object and handle it according to the
counter 20 value.

ISPOT then enters the “Waiting for the Robot” stage.

3.4.8 Summary

1. To identify an object, you must build a statistical / pattern database.
The system compares object feature values to patterns in the
database. If the system is unable to provide the name that matches the
name you assigned to the object, learn more samples under the same
name.

Observation 1

After how many samples, for objects of different shapes, will the system
make no errors in your own application?

2. The Define Dimensions tool allows you to define features as limits
of acceptance for a recognized object. For every object you can
define up to 10 such dimensions.

Observation 2

These dimensions are characteristic for a given object.

3. Try to carry out the synchronization procedure in your own
application. This procedure involves teaching the same physical
points to both the robot and the vision coordinate systems. When the
robot and vision coordinate systems are synchronized, the system can
instruct the robot to pick up the objects.

Observation 3

After how many samples does the synchronization become valid in your
case?

Notes:

Use this page for notes relating to defining dimensions, building patterns
and robot scene building scenarios.

ADVANCED USER’S GUIDE

4 Advanced User’s Guide

This part of the manual presents some of the mathematical fundamentals,
functions and algorithms used by ISPOT to perform low-level, medium-level
and high-level machine vision data processing.

4.1 Introduction

ISPOT is an artificial vision system using statistical pattern recognition
methods on binary images. The system uses binary images loaded from
the configuration drives, or gray images transformed into binary images
through the binarization process. ISPOT can acquire gray images from the
camera through the frame grabber, or load them from the configuration
drives, and transforms these gray images into binary images for further
processing.

4.1.1 Image Formats

In the ISPOT vision system the images are represented as follows:

• The gray scale image is an array of 512 x 384 pixels in which each
pixel has a value in the range 0 ... 255 (one byte). A gray scale image
therefore requires 192 KB (196,608 bytes) of memory.
• A binary image is an array of 512 x 384 pixels in which each pixel
has a value of 0 or 1. For binary images, the background color is the
logical "0" (light pixel), while the object color is the logical "1"
(dark pixel) – or the opposite, depending on the image scene.
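These sizes can be verified with a short NumPy sketch (NumPy is used here only for illustration and is not part of the ISPOT software):

```python
import numpy as np

# Gray scale image: 512 x 384 pixels, one byte (0 ... 255) per pixel.
gray = np.zeros((384, 512), dtype=np.uint8)
print(gray.nbytes)                    # 196608 bytes = 192 KB

# Binary image: same geometry, each pixel either 0 or 1.
binary = (gray > 127).astype(np.uint8)
print(binary.shape, int(binary.max()))
```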

4.1.2 The Functional Architecture

The functional architecture of an artificial vision system using statistical
pattern recognition methods on binary images contains:

• Image Acquisition (image grabbing): in order to obtain a good gray
image (an image with good contrast) from the hardware, the gain and
offset parameters of the frame grabber are used.

• Image Preprocessing (the image can be enhanced by applying
filters): filters are applied to improve the image quality. To improve
the image quality means to reduce the features (of the image) that
disturb the following image processing stages (like noise) and to
accentuate those features important for the other image processing
stages (for example: object boundaries edges, object contour, etc.).
This stage is optional.
• Image Binarization: the transformation of the gray image into a
binary image based on a global threshold value estimated by
histogram analysis.
• Connectivity Analysis (Image Segmentation): by analyzing the
connections between pixels, objects are extracted from the image.
Any holes within objects are numbered during the segmentation
stage.
• Object Analysis: some features are calculated for each object. These
features, called standard features, are used to identify the objects.
In this stage the system also measures the location of the objects
(position and orientation) so the robot can pick them up.
• Object Identification: using the set of standard features selected
by the user, the system identifies each object in the current
binary image. This stage is performed using a statistical pattern
recognition algorithm on the pattern database built by the user.
• Quality Control: for every identified object in the image, the system
checks the dimension values (Orientation and up to 10 Distances
between up to 20 points defined on the object borders) defined and
selected by the user for each pattern (class of object).

4.2 Image Acquisition

4.2.1 Image Histogram

The image histogram for gray scale images is a function of 256 equal,
adjacent classes within the gray scale. The histogram value of a class
represents the number of pixels having a particular gray level. A histogram
is used to determine the proper settings for the gain and offset frame
grabber parameters controlling image acquisition.

The gain and offset are set in order to obtain an image whose histogram
spans most levels in the gray scale range (0 ... 255), thus producing an
image with maximum contrast (gray level resolution). These two
parameters can be selected in Manual or Auto mode. In Auto mode,
the program automatically adjusts the gain and the offset in order to
obtain a histogram that spans the entire scale. In general, the offset
controls the position of the lowest levels of the histogram and the gain
controls the length of the histogram (the difference between the lowest and
the highest pixel intensity levels of the image).
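The histogram and its span can be computed as in the following sketch (a NumPy illustration, not the ISPOT implementation; the gain/offset hardware adjustment itself is not modeled):

```python
import numpy as np

def histogram_span(gray):
    # 256 equal, adjacent classes; hist[v] = number of pixels with gray level v.
    hist = np.bincount(gray.ravel(), minlength=256)
    occupied = np.nonzero(hist)[0]
    lowest, highest = int(occupied[0]), int(occupied[-1])
    # Offset shifts 'lowest'; gain stretches the span 'highest - lowest'.
    return hist, lowest, highest

img = np.array([[10, 10, 200],
                [10, 100, 200]], dtype=np.uint8)
hist, lo, hi = histogram_span(img)
print(lo, hi)                         # 10 200
```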

4.2.2 AOI Calibration

The image acquired from the camera is characterized by a geometry that
depends on a number of elements (camera lens, frame grabber
resolution, distance from the camera to the analyzed scene, horizontal and
vertical pixel resolution of the CCD camera, etc.). When the camera
position is changed, the geometric parameters of the considered object no
longer correspond to the learned ones (to the pattern).

To solve this problem, the ISPOT vision system uses two scaling factors
(mm per pixel), one for the horizontal axis and one for the vertical axis.
All the standard features, location values and dimensions are scaled using
these factors.

The AOI Calibration procedure allows the definition of the scaling factors,
based either on a standard calibration object, or on a previously acquired
image containing such an object. In either situation, the actual horizontal
and vertical dimensions (width and height) of the object have to be known.
The scaling factors permit the system to convert dimensions from pixel
units to standard units (millimeters).

The best results in AOI Calibration are obtained with a disk-shaped
calibration object, because the disk shape is not sensitive to the orientation
of the object in the image. For error minimization, the object should fill the
AOI (without extending beyond it).

AOI Definition and Calibration data can be saved on disk for future use.

Before AOI calibration, you should perform Image Histogram
optimization and Binarization Threshold selection, because both of them
affect the precision of the AOI calibration procedure.

Considering px and py to be the dimensions of an object in pixels, and
dx and dy the same dimensions in standard units, the scaling factors are
computed as follows:

S_x = \frac{d_x}{p_x}, \qquad S_y = \frac{d_y}{p_y}

The error in computing the horizontal and vertical dimensions of an object
is 1 pixel; considering this, the relative scale errors are:

\frac{\Delta S_x}{S_x} = \frac{1}{p_x + 1} \times 100 \,[\%]

\frac{\Delta S_y}{S_y} = \frac{1}{p_y + 1} \times 100 \,[\%]
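The two scale factors and their relative errors can be computed directly from the formulas above (a sketch; the disk dimensions used in the example are invented):

```python
def aoi_scale_factors(dx_mm, dy_mm, px, py):
    # Sx = dx/px and Sy = dy/py: millimeters per pixel on each axis.
    sx = dx_mm / px
    sy = dy_mm / py
    # Relative scale errors for the 1-pixel measurement uncertainty.
    err_x_percent = 1.0 / (px + 1) * 100.0
    err_y_percent = 1.0 / (py + 1) * 100.0
    return sx, sy, err_x_percent, err_y_percent

# Example: a 50 mm calibration disk imaged as 199 x 199 pixels.
sx, sy, ex, ey = aoi_scale_factors(50.0, 50.0, 199, 199)
print(round(sx, 4), ex)               # 0.2513 0.5
```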

4.3 Image Preprocessing (Applying Filters)

The ISPOT vision system uses gray level filters. These filters are applied
to all pixels in the defined AOI. Pixels without neighbors in all eight
directions (that is, at the edge of the image) remain unchanged.

Gray level filters operate on gray level images. The new gray level value of
a pixel is computed as shown below:

f'(x, y) = \frac{1}{R} \sum_{i=x-1}^{x+1} \sum_{j=y-1}^{y+1} p_{ij} \, f(i, j)

where pij are the filter’s integer constants (positive, zero or negative) and R
is a user-selectable non-zero integer ratio. The filtered value (the result of
applying the filter) is the new gray scale value of the (x, y) pixel (truncated
to integer domain 0 ... 255).

The following gray level filters are supplied with the software.
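The filter formula above can be implemented as a straightforward (unoptimized) NumPy sketch; ISPOT's own implementation is not published here, so treat this as an illustration only:

```python
import numpy as np

def apply_filter(gray, p, R):
    # p: 3 x 3 integer kernel, R: non-zero integer ratio.
    src = gray.astype(np.int32)
    out = src.copy()
    h, w = gray.shape
    for x in range(1, h - 1):          # edge pixels remain unchanged
        for y in range(1, w - 1):
            acc = 0
            for i in range(-1, 2):
                for j in range(-1, 2):
                    acc += p[i + 1][j + 1] * src[x + i, y + j]
            # Truncate the filtered value to the gray scale domain 0 ... 255.
            out[x, y] = min(255, max(0, acc // R))
    return out.astype(np.uint8)

# A homogenous area is unchanged by an averaging kernel:
flat = np.full((4, 4), 7, dtype=np.uint8)
smooth = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print((apply_filter(flat, smooth, 9) == 7).all())   # True
```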

4.3.1 Blur Filter

The “blur filter” is used to study the effect of a blurred image on the
features of an object. By applying this filter, homogenous areas are left
unchanged. The image contour is distorted because the value of the new
middle pixel is determined more by its neighbors’ values than by its own
value. The filter is defined as follows:

• The value in the middle of the array is 1.
• The values in all 4 corners of the array equal 4.
• The other four values in the array equal 2.
• The ratio is 25 - the sum of all the values in the 3 x 3 array.

4.3.2 Emboss Filter

It is used to enhance the object contours. Homogenous areas become
black (if all pixels have the same value, the result is 0). The values of 1 and
-1 can be interpreted as differences (derivatives) along the vertical and
horizontal directions. Object borders are detected and, because of these
differences, they tend to become white. The filter is defined as follows:

• The values of the middle column (from top to bottom) are 1, 0, -1.
• The values in the left column of the array are -1.
• The values in the right column of the array are 1.
• The ratio is 1.

4.3.3 Sharp Filter

It is used to sharpen the object contour. By applying this filter, image
contrast is enhanced. Homogenous areas are left unchanged (if all pixels
have the same value, the middle pixel does not change). This filter is
defined as follows:

• The value in the middle of the array is 5.
• The values in the corners of the array are 0.
• All other values are -1.
• The ratio is 1.

The enhancement of the contours is obtained because the result is five
times the middle pixel value minus the values of its four immediate
neighbors.

4.3.4 Smooth Filter

The “smooth” filter is often used to reduce the intensity of noise in the
image. By applying it, homogenous areas are left unchanged, but contours
will diffuse. The filter is defined as follows:

• All values in the array equal 1.
• The ratio is 9 - the sum of all the values in the 3 x 3 array.

4.3.5 Pond Filter

Another filter used to reduce the noise in the image is the “pond”
filter; its effect is opposite to that of the blur filter. By applying this filter,
the weight of the middle pixel becomes greater than that of its neighbors.
The pond filter is defined as follows:

• The value in the middle equals 4.
• Values in the corners of the array equal 1.
• All other values equal 2.
• The ratio is 16 - the sum of all the values in the 3 x 3 array.
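Summarizing sections 4.3.1 - 4.3.5, the five kernels and their ratios can be written out as follows (a sketch; the row/column orientation of the emboss kernel is an assumption based on the bullet descriptions):

```python
# name: (3 x 3 kernel p, ratio R)
FILTERS = {
    "blur":   ([[4, 2, 4], [2, 1, 2], [4, 2, 4]], 25),
    "emboss": ([[-1, 1, 1], [-1, 0, 1], [-1, -1, 1]], 1),
    "sharp":  ([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], 1),
    "smooth": ([[1, 1, 1], [1, 1, 1], [1, 1, 1]], 9),
    "pond":   ([[1, 2, 1], [2, 4, 2], [1, 2, 1]], 16),
}

# For the averaging filters the ratio equals the kernel sum, so
# homogenous areas are left unchanged:
for name in ("blur", "smooth", "pond"):
    kernel, ratio = FILTERS[name]
    assert sum(sum(row) for row in kernel) == ratio

# The emboss kernel sums to 0, so homogenous areas become black:
kernel, _ = FILTERS["emboss"]
print(sum(sum(row) for row in kernel))   # 0
```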

4.4 Image Binarization

We can transform a gray level image to a black and white image using the
binarization process. The algorithm is based on the threshold value
estimated by analyzing the gray image histogram.

Binarization can be performed in one of two modes: Auto and Manual.
These modes refer to the threshold used for the binarization: all pixels
below the threshold level are considered 0's (black), and all pixels above
this level are considered 1's (white).

In Automatic threshold mode, local minimum points of the histogram
(values close to 0) are selected. These correspond to the stable, constant
zones of the accumulated histogram (the total number of pixels whose
levels are below a given level); the limit between an object and the
background therefore lies in this area. This algorithm assumes the
histogram is bimodal; that is, all objects have the same color and the
background color is almost uniform.

In Manual mode, the user selects the threshold value, generally based on
the histogram. It is recommended that the threshold be selected at local
minimum points of the histogram in order to eliminate some objects.

In Auto mode, the program searches for the first local maximum point, l,
and the last local maximum point, L, of the histogram (from level 0 to
level 255). The algorithm then determines the absolute minimum lying
between the l and L levels. These levels (l and L) generally correspond to
the first object and to the background (or the opposite). This algorithm is
suitable if the histogram is bimodal: objects have the same color and the
background is uniform.
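The Auto mode described above can be sketched as follows (an illustrative NumPy implementation of the first/last local maximum search, not ISPOT's exact code):

```python
import numpy as np

def auto_threshold(hist):
    # Find the first (l) and last (L) non-zero local maxima of the histogram.
    maxima = [v for v in range(1, 255)
              if hist[v] > 0 and hist[v] >= hist[v - 1] and hist[v] >= hist[v + 1]]
    l, L = maxima[0], maxima[-1]
    # The threshold is the absolute minimum between levels l and L.
    return l + int(np.argmin(hist[l:L + 1]))

# A bimodal histogram: dark object around level 20, light background around 200.
hist = np.zeros(256, dtype=int)
hist[18:23] = [5, 20, 80, 20, 5]
hist[198:203] = [10, 40, 90, 40, 10]
print(auto_threshold(hist))           # 23 (first empty level after the object peak)
```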

4.5 Connectivity Analysis (Image Segmentation)

The connectivity analysis and all high-level processing are performed on
the binary image. Segmentation is used to determine the sets of pixels that
are parts of objects and the sets of pixels that are parts of the background.

A set of connected pixels that cannot be expressed as a disjoint union of
two sets is called a simply connected set. In this type of set, any pair of
pixels can be linked by a path formed by pixels of the same set. A set
formed of one or more simply connected sets is called a multiply
connected set.

An object is a connected set of binary pixels. If this set is simply connected,
the object has no holes. If the set is multiply connected, the object has holes.

A hole is defined as a set of pixels having the same color as the
background, completely contained within an object whose color is the
opposite of the background color.
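Connectivity analysis can be illustrated with a simple flood-fill labeling of 4-connected pixel sets (a teaching sketch; ISPOT's segmentation also numbers holes, which is omitted here):

```python
from collections import deque

import numpy as np

def label_components(binary):
    # Assign a label to every 4-connected set of 1-pixels (an "object").
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for sx in range(h):
        for sy in range(w):
            if binary[sx, sy] == 1 and labels[sx, sy] == 0:
                count += 1
                labels[sx, sy] = count
                queue = deque([(sx, sy)])
                while queue:
                    x, y = queue.popleft()
                    for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                        if 0 <= nx < h and 0 <= ny < w and \
                                binary[nx, ny] == 1 and labels[nx, ny] == 0:
                            labels[nx, ny] = count
                            queue.append((nx, ny))
    return labels, count

img = np.array([[1, 1, 0, 0],
                [1, 1, 0, 1],
                [0, 0, 0, 1]])
labels, n = label_components(img)
print(n)                              # 2
```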

4.6 Object Analysis

4.6.1 Object’s Features

This chapter gives a description of the object in terms of the various
features that may be used to make a decision regarding the object.

• A feature is a description of an object which can be quantified
numerically. Mathematically, a feature f is a function from the object
space to the set of real numbers.

If the codomain of a feature f is denoted X_f, then the set of all n features
defines a function from the object space to X_{f_1} x X_{f_2} x ... x X_{f_n}
(the Cartesian product of all n feature codomains), which we refer to as the
feature space.

The feature space is used in pattern identification. A particular object is a
point in the feature space, while a particular feature of an object is the
projection of that object on a specific X_f axis.

Patterns are represented by statistical variables in the feature space.

Features are divided into two categories:

Standard Features are predefined global functions applicable to objects.

Some features describe the position of the object within the image, while
others describe the object's shape (these are invariant to translation,
rotation, reflection and scaling). Only object features are used for pattern
building and for identification.

Besides Orientation, the Dimensions of an object are distances between
Points. Dimensions are local only (specific to a particular type of object).
The system can use up to ten dimensions, defined as distances between
up to 20 points.

4.6.2 Number of Holes

The number of holes is a qualitative feature, considered in the first stage
of the identification process.

4.6.3 Standard Features

Standard features of objects are predefined categories used by the system.
You can view their values by selecting [Standard Features] wherever
available throughout the software. All features described below are
computed for the shape of an object.

Object moments are used in the estimation of the standard feature values.
An object's moment is defined as follows:

m_{pq}(A) = \sum_{x=0}^{N_x} \sum_{y=0}^{N_y} x^p \, y^q \, f_A(x, y)

where:

m_{pq}(A) = the moment of order p+q of the object A

x = x coordinate of the analyzed pixel of object A

y = y coordinate of the analyzed pixel of object A

f_A(x, y) = \begin{cases} 1 & \text{pixel belongs to object } A \\ 0 & \text{pixel does not belong to object } A \end{cases}

N_x, N_y = maximum values for the x, y coordinates.
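The moment definition translates directly into NumPy (a sketch; rows are taken as y coordinates and columns as x, which is an assumption about the image layout):

```python
import numpy as np

def moment(binary, p, q):
    # m_pq(A) = sum over all pixels of x^p * y^q * f_A(x, y)
    ys, xs = np.nonzero(binary)        # f_A is 1 exactly at the non-zero pixels
    return float(np.sum(xs ** p * ys ** q))

obj = np.array([[0, 1, 1],
                [0, 1, 1]])
print(moment(obj, 0, 0))              # 4.0 -> m00 is the object's area
print(moment(obj, 1, 0))              # 6.0
```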

4.6.3.1 Area

Object’s Area is defined as follows:

Area (A) = m00(A)

For objects with holes, area represents the area of the object's body.

4.6.3.2 Perimeter

The definition of the perimeter of the object is:

Perim(A) = sum of pixels along the outer contour of the object in the
vertical and horizontal directions.

The perimeter is computed with two corrections, in order to handle the
pixel representation of lines. If p1 and p2 are two pixels such that
p1.x = p2.x + 1 and p1.y = p2.y + 1, then the perimeter (length) of the
segment p1p2 is computed as \sqrt{2}. This algorithm leads to a maximum
possible error of 8% of the perimeter when the object orientation changes.

4.6.3.3 Compactness

Compactness indicates whether an object's contour is elongated or irregular:

Compact(A) = \frac{Perim(A)^2}{Area(A)}

4.6.3.4 Eccentricity

Eccentricity indicates whether an object's contour is long and thin, or short
and wide:

Ecc(A) = \frac{[\mu_{20}(A) - \mu_{02}(A)]^2 + 4\mu_{11}^2(A)}{m_{00}(A)}
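Area and eccentricity can be computed together from the centered moments (a NumPy sketch under the same row/column assumptions as before; the perimeter corrections are omitted):

```python
import numpy as np

def area_and_eccentricity(binary):
    ys, xs = np.nonzero(binary)
    area = float(xs.size)                       # m00
    xg, yg = xs.mean(), ys.mean()               # center of gravity
    mu20 = float(np.sum((xs - xg) ** 2))        # centered second-order moments
    mu02 = float(np.sum((ys - yg) ** 2))
    mu11 = float(np.sum((xs - xg) * (ys - yg)))
    ecc = ((mu20 - mu02) ** 2 + 4.0 * mu11 ** 2) / area
    return area, ecc

# A 1 x 4 horizontal bar is clearly elongated:
bar = np.array([[1, 1, 1, 1]])
print(area_and_eccentricity(bar))     # (4.0, 6.25)
```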

4.6.3.5 Invariant Moments

The Invariant Moments equations are based on the Centered Moments and
Normalized Centered Moments of the object.

Centered moments are defined as follows:


\mu_{pq}(A) = \sum_{x=0}^{N_x} \sum_{y=0}^{N_y} (x - x_g)^p (y - y_g)^q f_A(x, y)

118
Normalized centered moments are defined as follows:

n_{pq}(A) = \frac{\mu_{pq}(A)}{\mu_{00}(A)^{\gamma}}, \qquad \gamma = \frac{p+q}{2} + 1

The following six moments of object A are invariant to translation,
rotation, scaling and mirroring:

M_{i1} = n_{20} + n_{02}
M_{i2} = (n_{30} - 3n_{12})^2 + (n_{03} - 3n_{21})^2
M_{i3} = (n_{30} + n_{12})^2 + (n_{03} + n_{21})^2
M_{i4} = (n_{30} - 3n_{12})(n_{30} + n_{12})[(n_{30} + n_{12})^2 - 3(n_{03} + n_{21})^2] + (n_{03} - 3n_{21})(n_{03} + n_{21})[(n_{03} + n_{21})^2 - 3(n_{30} + n_{12})^2]
M_{i5} = (n_{20} - n_{02})[(n_{30} + n_{12})^2 - (n_{03} + n_{21})^2] + 4n_{11}(n_{30} + n_{12})(n_{03} + n_{21})
M_{i6} = (n_{03} - 3n_{21})(n_{03} + n_{21})[(n_{30} + n_{12})^2 - 3(n_{03} + n_{21})^2] + (n_{30} - 3n_{12})(n_{30} + n_{12})[(n_{03} + n_{21})^2 - 3(n_{30} + n_{12})^2]
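As a sketch of how the normalized centered moments are used, the first invariant M_{i1} = n_{20} + n_{02} can be checked for scale invariance (approximate only, because of pixel sampling; NumPy is used for illustration):

```python
import numpy as np

def normalized_moment(binary, p, q):
    ys, xs = np.nonzero(binary)
    xg, yg = xs.mean(), ys.mean()
    mu_pq = float(np.sum((xs - xg) ** p * (ys - yg) ** q))
    mu00 = float(xs.size)
    gamma = (p + q) / 2.0 + 1.0
    return mu_pq / mu00 ** gamma

def mi1(binary):
    # First invariant moment: Mi1 = n20 + n02.
    return normalized_moment(binary, 2, 0) + normalized_moment(binary, 0, 2)

# Nearly the same value for a square and the same square scaled 2x:
small = np.ones((4, 4), dtype=np.uint8)
large = np.ones((8, 8), dtype=np.uint8)
print(round(mi1(small), 3), round(mi1(large), 3))   # 0.156 0.164
```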

4.6.4 Object’s location

4.6.4.1 Object’s Center of Gravity

The object's position is defined by its center of gravity, whose coordinates
are the following:

x_g(A) = \frac{m_{10}(A)}{m_{00}(A)}, \qquad y_g(A) = \frac{m_{01}(A)}{m_{00}(A)}

4.6.4.2 Rectangle Containing the Object

The following coordinates define the rectangle (the black line box) which
contains the object:

Xmin, Xmax = horizontal axis

Ymin, Ymax = vertical axis

4.6.4.3 Angle between Orientation and Horizontal Axes

The angle defined by the object's main axis of orientation (the line arrow
on the Shape Image) and the horizontal axis is computed using one of the
following methods.

4.6.4.3.1 Inertia Moment Axis

Angle of the minimum inertia axis:

 211  
 = tan −1  +k

 20 −  02  2

where:

k = 0 when \mu_{20} - \mu_{02} > 0

k = 1 when \mu_{20} - \mu_{02} < 0

When \mu_{20} - \mu_{02} = 0, \theta is either \pi/4 or 3\pi/4, depending on the sign of \mu_{11}.

When both \mu_{20} - \mu_{02} = 0 and \mu_{11} = 0, \theta is undefined
and any direction can be selected as the minimum inertia axis (for
example, for a circle).
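The minimum inertia axis angle can be computed with atan2, which folds the k correction of the formula above into a single call (angles are equivalent modulo π; a sketch, not ISPOT's code):

```python
import math

import numpy as np

def orientation(binary):
    ys, xs = np.nonzero(binary)
    xg, yg = xs.mean(), ys.mean()
    mu20 = float(np.sum((xs - xg) ** 2))
    mu02 = float(np.sum((ys - yg) ** 2))
    mu11 = float(np.sum((xs - xg) * (ys - yg)))
    if mu20 == mu02 and mu11 == 0.0:
        return None                    # undefined, e.g. for a circle
    # Equivalent (modulo pi) to theta = 1/2 * atan(2*mu11/(mu20 - mu02)) + k*pi/2.
    return 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)

horizontal = np.array([[1, 1, 1, 1, 1]])
vertical = np.array([[1], [1], [1], [1], [1]])
print(orientation(horizontal))           # 0.0
print(round(orientation(vertical), 4))   # 1.5708 (pi/2)
```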

4.6.4.3.2 Maximum Point Angle

It is defined as the angle of the maximum distance from the center of
gravity to a point on the perimeter.

4.6.4.3.3 Minimum Point Angle

It is defined as the angle of the minimum distance from the center of
gravity to a point on the perimeter.

4.7 Object Identification

The image of an unknown object is processed up to the Object Analysis
stage, resulting in a vector of feature values. The obtained values are
compared with every pattern in the pattern database using an object
recognition algorithm. The minimum distance (nearest neighbor)
algorithm selects the class of object (pattern) whose average feature vector
is closest (in the standard feature space) to the feature vector of the
unknown object.

4.7.1 Building Patterns

For identifying objects, the vision system needs a pattern database. Each
pattern in the pattern database is built by training (by analyzing several
images – samples – of known objects) object classes. For each sample
image of an object class, the system estimates the standard feature
values. Pattern analysis is performed by estimating statistical parameters
(the mean feature vector and the covariance array) for each object class.
The collection of statistical parameter values for all object classes
represents the pattern database.

A pattern is a set of samples, described by their statistical properties with
respect to a number of N features, together with an associated name.
Patterns are built from objects extracted from images. Because of the
statistical character of patterns, at least two images (samples) are needed to
build a correct pattern.

A pattern is stored and updated as the mean vector and the covariance
array. The mean value of a feature is defined as \mu_i = E\{f_i\}, where
i = 1...N and f_i is the i-th feature considered. This average is computed
with respect to all samples that define the pattern.

Let f be the vector of all features and \mu the vector of the corresponding
mean values:

f = (f_1, f_2, ..., f_N)^T

\mu = (\mu_1, \mu_2, ..., \mu_N)^T

where fi is an arbitrary feature variable and f is an arbitrary feature vector,
for which a set of N samples is known.

The covariance array is defined by the following:

  11 ...  1N 
  
= E ( f −  )( f −  ) T =  : : : 

 
 N 1 ...  NN 1 

where:

 ij = covf i , f j  = E( f i −  i )( f j −  j )

 ii =  i = var x i  = E( f i −  i )( f i −  i )
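Building a pattern's statistics from its samples can be sketched with NumPy (bias=True gives the expectation form E{...} used above; the sample values are an invented example):

```python
import numpy as np

def build_pattern(samples):
    # samples: one row per learned sample, one column per feature.
    f = np.asarray(samples, dtype=float)
    mu = f.mean(axis=0)                          # mean feature vector
    sigma = np.cov(f, rowvar=False, bias=True)   # covariance array E{(f-mu)(f-mu)^T}
    return mu, sigma

samples = [[4.0, 8.0],
           [6.0, 12.0]]
mu, sigma = build_pattern(samples)
print(mu.tolist())                    # [5.0, 10.0]
print(sigma.tolist())                 # [[1.0, 2.0], [2.0, 4.0]]
```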

Given an object and a pattern database, the identification task is to find
the pattern which the object matches.

The algorithm used for object identification is the Minimum Distance
(nearest neighbor) algorithm. The algorithm is applied to all the patterns
contained in the pattern database and to all selected features. Only patterns
having the same number of holes as the object are taken into
consideration; we call patterns matching this condition relevant patterns.

4.7.2 Minimum Distance Identification Algorithm

The minimum distance algorithm used by ISPOT for the statistical pattern
recognition process is based on the Bayes criterion.

The object is denoted as a vector of feature values.

Let f be the feature vector of an object, and \mu_q the mean value vector
for the pattern q; the Bayes criterion is defined by the following:

d(f, \mu_q) = (f - \mu_q)^T \, \Sigma^{-1} (f - \mu_q) + \ln(\det \Sigma)

where d(f, \mu_q) is the distance between the pattern q and the analyzed
object.

Using the Bayes criterion, the system computes the distances between the
object's feature vector and the mean vector of every pattern. The object is
matched to the pattern having the minimum distance.

This process is performed using the selected standard features (area,
perimeter, compactness, eccentricity and the 6 invariant moments),
evaluated for the object's body.

Starting with the first pattern in the pattern database, a value is computed
for each selected feature, as follows:

\mu_i + K \sigma_i

where \mu_i is the mean value of the feature, \sigma_i is the variance of the
feature, and K is the decision threshold value defined by the user. The
result with the highest value is considered to be the Maximum Distance.

The object can be identified only if the distance between the object and
any pattern is less than the maximum distance. If all distances between the
object and all patterns exceed the maximum distance, the analyzed object
is considered unknown. Under these conditions, the system will display the
nearest matching pattern.
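The whole identification step can be sketched as follows (illustrative NumPy code; the pattern names, feature values and the shared identity covariance are invented, and a real covariance array must be invertible for the formula to apply):

```python
import numpy as np

def bayes_distance(f, mu_q, sigma):
    # d(f, mu_q) = (f - mu_q)^T Sigma^-1 (f - mu_q) + ln(det Sigma)
    diff = f - mu_q
    return float(diff @ np.linalg.inv(sigma) @ diff + np.log(np.linalg.det(sigma)))

def identify(f, patterns, max_distance):
    # patterns: {name: (mean vector, covariance array)}.
    best = min(patterns, key=lambda q: bayes_distance(f, *patterns[q]))
    if bayes_distance(f, *patterns[best]) <= max_distance:
        return best
    return "unknown"                   # but the nearest pattern is still displayed

patterns = {
    "washer": (np.array([5.0, 10.0]), np.eye(2)),
    "screw":  (np.array([20.0, 2.0]), np.eye(2)),
}
print(identify(np.array([5.5, 9.5]), patterns, max_distance=50.0))   # washer
```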
