Revolution
Agricultural Revolution → Industrial Revolution → Electrification → Transportation → Communication → Computers → Industrial Robots → Service Robots
R.SENTHILNATHAN RESEARCH SCHOLAR DEPARTMENT OF PRODUCTION TECHNOLOGY MIT CAMPUS, ANNA UNIVERSITY CHENNAI
Analogies
Industrial Robot
An automatically controlled, reprogrammable, multipurpose manipulator, programmable in three or more axes, which may be either fixed in place or mobile, for use in industrial automation applications.
Software and Applications Third Party Applications Mobile, Personal and Household
Service Robot
A service robot is a robot which operates semi- or fully autonomously to perform services useful to the well-being of humans and equipment, excluding manufacturing operations.
Locate with eyes Calculate target with brain Guide with arm and fingers
Locate with camera Calculate target with software Guide with robot and grippers
sensors: vision, stereo, range sensors, acoustics; problems: scene modeling / classification / recognition; integration: localization / mapping algorithms (e.g., SLAM)
sensors: vision, range, haptics (force + tactile); problems: structure / range estimation, modeling, tracking, materials, size, weight, inference; integration: navigation, manipulation, control, learning
sensors: vision, stereo, range sensors, acoustics, sounds, smell; problems: object recognition, qualitative modeling; integration: collision avoidance / navigation, learning
sensors: vision, range, haptics; problems: categorization by function / shape / context; integration: inference, navigation, manipulation, control, learning
Many Years of Experience Motors, Speed, Precision Vision, Force, Torque, Encoders
Scene
Lighting
Technique (Frontlight, Backlight) Source (Fluorescent tubes, Halogen and xenon lamps, LED, Laser)
Parts
Type of sensor (CCD, CMOS, etc.)
Camera specification (resolution, frame rate, etc.)
Type of camera (line scan, area scan, structured light, time of flight)
Interface (standalone, computer interface)
Discrete parts or endless material (e.g., paper)
Minimum and maximum dimensions
Changes in shape
Description of the features that have to be extracted
Changes of these features between defective parts and normal product variation
Surface finish
Color
Corrosion, oil films, or adhesives
Changes due to part handling
Contd.
Part Presentation
If there is more than one part in view, the following topics are important:
Robot Configuration
Camera Mounting
Industrial
Eye to Hand
Eye in Hand
Applications
Labor savings: often alone justifies the system
Throughput gains in production
Quality improvements
Safety and medical cost savings
Flexible change-over to multiple products
Floor footprint reduction
Reutilization of conveyors, racks, bins
2D
Indexed conveyor
Flexible feeding
Autoracking
Packaging
2.5D
Stacked objects (geometry used for depth perception)
3D
Autoracking
Discrete bag handling
Palletizing
Bin picking
3D vision
3D vision with real-time motion
3D vision with GPS navigation
3D vision with SLAM
Aerospace
Land
Defense and security, farming, wildlife, food, transportation, outdoor logistics, office and warehouse, health (care, rehabilitation, surgical), entertainment
Water
Defense and security, Research and Exploration, Preventive Maintenance, Rescue and Recovery.
Software
2D Object Recognition
Edge detection
Boundary analysis
Geometric pattern matching
Mono camera, if the geometry is consistent
3D Object Recognition
Stereo matching (redundant reliability)
Laser, range, or time-of-flight methods
Projected points / lines of light
3D volume scans
Additional Techniques
Scene-specific heuristics
IMAGING FUNDAMENTALS
Types
Intensity Images
Optical parameters: lens type, focal length, field of view
Photometric parameters: intensity, direction of illumination, reflectance properties, sensor structure
Geometric parameters: type of projection, pose of the camera
Basic Optics
The Thin Lens model: fundamental equation
1/Z + 1/z = 1/f
where Z is the distance from the object to the lens, z is the distance from the lens to the image plane, and f is the focal length.
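A quick numeric check of the equation (a minimal sketch in Python; the 50 mm focal length and 1 m object distance are illustrative assumptions, not values from the slides):

```python
# Thin lens check: 1/f = 1/Z + 1/z
f = 0.050            # focal length in metres (assumed 50 mm lens)
Z = 1.0              # object-to-lens distance in metres (assumed)

z = 1.0 / (1.0 / f - 1.0 / Z)   # solve for the lens-to-image distance
m = z / Z                       # magnification

print(f"image distance = {z * 1000:.2f} mm, magnification = {m:.4f}")
# image distance = 52.63 mm, magnification = 0.0526
```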
World Coordinate
Perspective Projection
A scene point P[X, Y, Z] projects to the point p[x, y] in the image plane, given by x = f [ X / Z ], y = f [ Y / Z ].
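The projection equations are easy to sanity-check in code. A minimal sketch (the focal length and the scene points are illustrative assumptions):

```python
import numpy as np

# Perspective projection x = f*X/Z, y = f*Y/Z.
def project(points, f):
    """Project Nx3 scene points [X, Y, Z] to Nx2 image points [x, y]."""
    pts = np.asarray(points, dtype=float)
    return np.stack([f * pts[:, 0] / pts[:, 2],
                     f * pts[:, 1] / pts[:, 2]], axis=1)

# The same (X, Y) at two depths: the farther point lands nearer the centre.
print(project([[0.1, 0.2, 1.0], [0.1, 0.2, 2.0]], f=0.05))
# [[0.005  0.01  ]
#  [0.0025 0.005 ]]
```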
Range Images
Reconstructing a 3D shape from a single intensity image is DIFFICULT.
Range images are also called depth images, depth maps, xyz maps, surface profiles, and 2.5D images.
Each pixel of a range image expresses the distance between a known reference frame and a visible point in the scene.
Forms: cloud of points (xyz form), spatial (grid) form
Agenda
Physics of Light
Optics
Camera Sensors
Camera Interface
Camera Calibration
Software
Applications and Case Study
PHYSICS OF LIGHT
Properties of Light
Electromagnetic radiation: used to explain the propagation of light through various substances.
Particle: used to explain the interaction of light and matter that results in a change of energy, such as in a video sensor.
Points oscillate in the same plane, on an axis perpendicular to the direction of motion.
Frequency (f) is the number of oscillations per second. Wavelength (λ) is the distance between two points at the same position on the wave (nm). f = c / λ, where c is the speed of light.
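A quick worked example, assuming green light at a wavelength of 550 nm:

```latex
f = \frac{c}{\lambda}
  = \frac{3 \times 10^{8}\ \mathrm{m/s}}{550 \times 10^{-9}\ \mathrm{m}}
  \approx 5.5 \times 10^{14}\ \mathrm{Hz}
```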
Energy vs Intensity
Electromagnetic Spectrum
Visible light contains a continuum of frequencies We perceive color as a result of predominance of certain wavelengths of light
The eye responds to visible light with varying efficiencies across the visible spectrum Cameras have a very different response Ultraviolet Visible Infrared While the eye can see only in the visible spectrum, the energy above and below visible light is also important to machine vision.
Reflected Light is controlled by engineering the lighting. The reflected light (and therefore the digital image) is impacted by
Additive Color
Red light is reflected from red objects. Your eyes see the reflected light; the camera also sees the reflected light.
Demonstrates what happens when colored lights are mixed together Additive primaries are red, green and blue which altogether make white RGB used for color TV and Cameras
All other colors are absorbed by the material; this radiation is turned into heat.
Subtractive Color
Used to describe why objects appear the color they do. Pigments added to paint will absorb light of all other wavelengths. CMYK is used for printing ink (K is black; carbon black is a less expensive pigment than the other colors).
The rainbow exiting a prism or seen in the sky is the inverse of the additive color wheel.
Both demonstrate that white light is actually a very complex function which needs precise definition.
OPTICS
Optical Filter
An optical device which selectively transmits light of certain wavelengths and absorbs or reflects all other wavelengths.
Red light reflects off the red background but is absorbed by the blue circle.
Blue light reflects off the blue circle but is absorbed by the red background.
An example
The resulting images would be the same: there is no red light available to be reflected, so red appears dark; light is reflected from blue, which appears light.
Spectral Response
How efficiently light is emitted or received as the wavelength (or color) of the light changes.
The majority of vision systems record reflected light. A well designed lighting system provides high contrast between the features of interest and the background (noise).
Contrast comes from regions of high reflectivity against regions of minimal reflected light. The spectral properties of the light source, combined with the spectral properties of the surface, can be used to provide high contrast.
Geometrical considerations are important for:
Reflection
Refraction
Surface Finish
Complex Geometries
Lumen and lux are photometric units describing the light falling on a surface: the lumen measures luminous flux, and lux (lumens per square metre) measures illuminance.
Typical illuminance levels (approximate):
Bright sunlight: ~100,000 lux
Cloudy day: ~1,000 lux
Full moon night: ~0.1 lux
Overcast night: ~0.0001 lux
The human eye is sensitive to this full range (10 orders of magnitude!), but cameras are only sensitive to about 3 orders of magnitude.
Lens
The lens uses refraction to bend light as it passes through, generating an image on the other side.
Focal length
Angular field of view or magnification
Working distance or field of view
Minimum focus distance
Depth of focus
Aperture
Resolution
Camera sensor size
Camera mounting configuration
Focal Length
The focal length, f, is the distance between the optical centre and the image plane when the lens is focused at infinity.
Field Of View
The area imaged or the FOV is determined by the intersection of the stand off distance and the angle of viewing.
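Under a simple pinhole approximation, the FOV scales as sensor size times stand off distance divided by focal length. A rough sketch (all numbers are illustrative assumptions):

```python
# Pinhole approximation: FOV ~ sensor size * stand off distance / focal length.
sensor_width_mm = 8.8     # 2/3-inch sensor, horizontal dimension (assumed)
standoff_mm = 500.0       # camera-to-part distance (assumed)
focal_length_mm = 25.0    # lens focal length (assumed)

fov_width_mm = sensor_width_mm * standoff_mm / focal_length_mm
print(f"horizontal FOV = {fov_width_mm:.0f} mm")   # 176 mm
```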
Moving closer
A shorter focal length lens can image the same field of view as a longer focal length lens by decreasing the stand off distance. A shorter focal length lens will show more perspective distortion (fish-eye effect). Stand off distance has a larger effect on magnification for short focal length (wide-angle) lenses.
Focus
To increase magnification: use a lens with a longer focal length, or move the camera closer to the part.
To decrease magnification: use a shorter focal length lens, or move the camera further from the part.
Using the lens outside of its design region impacts image quality. For example, the stand off distance for focusing can be reduced using spacer rings.
Wide open aperture: small depth of focus. Small aperture: large depth of focus.
Extension Rings
The rings increase the image distance, and allow the lens to focus at shorter distances
Lens Adaptor
An Example
Captured with a 100-mm lens at f/4 | Captured with a 28-mm lens at f/4
Captured with a 100-mm lens at f/22 | Captured with a 28-mm lens at f/22
LIGHTING
Lighting Concerns
Stability of the light source
Flicker rate
Change in spectral properties
Need to control diffusion of light (bright spots are bad)
Ambient lighting needs to be blocked off
Ambient temperature has a very large effect on lighting
Sources have different spectral properties, which cause objects to look different under different sources.
Thermal (incandescent): 1000 lumens for 75 W; 5% efficiency (12-15 lumens/Watt); 1000 hours; Rs 50/klumen
Fluorescent: 10,000 lumens; 25% efficiency (50 lumens/Watt); 10,000 hours (output degrades, then fails); Rs 25/klumen
LED: 30-35 lumens/Watt; 100,000 hours (output degrades over time, not a hard failure); Rs 2500/klumen
Use a high-frequency ballast (10 kHz) for fluorescent lights. Use DC sources for LEDs. Shroud your cell from ambient light if it is bright.
Optical Filtering
Lighting Techniques
Back Light
Placing a light behind the part such that the part is between the light and the camera, providing a silhouette of the part.
Types: Diffuse, Collimated
Front Light
Placing the light in front of the part, on the same side as the camera. Provides an image with surface features and shading.
Dark field illumination: the light is positioned at an oblique angle to the part, with the angle of incidence set up such that the angle of reflection is away from the camera lens. Used to subdue the background and highlight pin-stamped characters.
Lighting Component
CAMERA SENSORS
Digital Image
A digital image is a numerical representation of a real physical object. The objective is to obtain an accurate spatial (geometric) and spectral (light) representation with sufficient detail, by measuring and recording the light that reaches the sensor.
Any digital image, irrespective of its type, is a 2D array of numbers.
Types
Area Scan
Line Scan
Properties of Sensors
Some materials generate an electrical charge proportional to the number of photons striking them.
Images are repeatable
Features in the image exist in the physical world
No noise or artifacts
Changes in the environment should have minimal impact on the image. How to achieve this? Good lighting and optics, understanding the requirements, and choosing the right camera for the application.
Vacuum Diode
An image is focused on the sensor for a preset exposure time. The light pattern is captured and transformed into a new medium. There is an integral relationship between the amount of light measured and the exposure time.
CCD, CMOS
Film has a continuous surface, down to the grain of the film Video Sensors have discrete imaging surface
The individual square is called a photo site, and is similar to a light meter.
Sizes of solid state sensors:
2/3 inch: 8.8 x 6.6 mm
1/2 inch: 6.4 x 4.8 mm
1/3 inch: 4.8 x 3.6 mm
1/4 inch: 3.2 x 2.4 mm
The individual photo sites in a video sensor are called picture elements, or PIXELs.
Each photo site can be modeled as a bucket that collects the charge generated by photons. As photons strike the sensor, charge is developed and the bucket begins to fill. How full the bucket gets is determined by:
How much light (intensity)
How long you collect charge (exposure time or shutter speed)
The amount of light in each photo site is sampled and converted into a number. This number, or gray scale value, is an indicator of brightness.
ANALOGY: how long you keep the bucket under the running water corresponds to the exposure time.
As you increase the exposure time, you allow more time for photons to be converted into electrons in the sensor; more charge accumulates, giving a brighter image.
Double it again....
8 x 8 grid: 64 photo sites (64 pixels). When the blue object fills more than 50% of a photo site, that site is turned black; otherwise the site is considered white.
Attributes of Sampling
You might not even detect the object if the sampling resolution is too low
If you sample at two times the resolution, the total number of sample sites is increased by a factor of 4
Other Attributes
The digitized representation contains much less information: a three-dimensional scene is reduced to a 2D representation.
No color information
Size and location are now estimates whose precision and accuracy depends on the sampling resolution.
The sensor consists of an array of individual photo cells. Typical array sizes in pixels are:
640 x 480, 768 x 480, 1280 x 760, 1600 x 1200, and larger
Pixel size is usually between 5 and 10 microns; it impacts sensor noise and dynamic range.
Spatial Resolution
Field of view should be large enough to accommodate variations in position. Might require more than one camera
What If Pixel Arrays Are Not Big Enough And Sub-pixels Won't Work?
Use Line Scan cameras: A digital camera with pixels arranged in a single line. Can generate extremely large contiguous images not possible with area scan cameras
1K, 2K, 4K, 8K, 10K are some available sizes Cost of the line scan sensor is low relative to large format array cameras (2000 x 2000)
Motion of the camera or part is required for the 2nd axis Similar to scanners, copiers and fax machines
Saturation
Blooming
Dynamic Range
Grayscale Resolution
Dark Current Noise
Fill Factor
Example: a field of view of 30 x 200 inches at 100 dpi gives a 3,000 x 20,000 pixel image, i.e., 60 Mbytes of image data.
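A quick check of this arithmetic (the 8-bit, 1-byte-per-pixel depth is an assumption consistent with grayscale imaging):

```python
# Line scan data volume: 30 x 200 inch web scanned at 100 dpi.
width_px = 30 * 100        # 3,000 pixels per line
lines = 200 * 100          # 20,000 lines along the motion axis

total_bytes = width_px * lines * 1   # 1 byte per 8-bit grayscale pixel
print(total_bytes / 1_000_000, "Mbytes")   # 60.0 Mbytes
```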
Saturation
Blooming
At certain light levels and exposure times, the bucket (photo site) gets filled with charge and can hold no more. The photo cell is now saturated. Any additional charge generated by the sensor has to go somewhere.
When light saturates a pixel, the charge spills over into adjacent pixels. Spillover occurs:
Into adjacent pixels
In CCDs, also along the pixel columns
Prevent blooming by
Dynamic Range
Examples: low dynamic range vs. high dynamic range images
The ratio of the amount of light it takes to saturate the sensor to the least amount of light detectable above the background noise.
A good dynamic range allows very bright and very dim areas to be viewed simultaneously
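Dynamic range is often quoted in dB as 20·log10 of this ratio. A small sketch with illustrative (assumed) sensor numbers:

```python
import math

# Dynamic range in dB = 20 * log10(saturation level / noise floor).
full_well = 20000.0     # electrons at saturation (assumed)
noise_floor = 20.0      # electrons of background noise (assumed)

ratio = full_well / noise_floor              # 1000:1
print(f"dynamic range = {20 * math.log10(ratio):.0f} dB")   # 60 dB
```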
Grayscale Resolution
The number of bits used to represent the amount of light in the pixel; typically 8 bits, i.e., 256 gray levels.
However, stray charge is generated in the silicon by thermal energy, causing low-level noise.
This charge is called dark current; the result is that black is not 0.0 volts.
Dark current noise increases with temperature, doubling with every 6 degree rise above room temperature.
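The doubling rule from the slide translates directly into a scaling factor, as in this small sketch:

```python
# Dark current doubling every 6 degrees above room temperature (per the slide).
def dark_current_factor(delta_t):
    """Multiplier on dark current for a temperature rise of delta_t degrees."""
    return 2 ** (delta_t / 6.0)

print(dark_current_factor(6))    # 2.0
print(dark_current_factor(12))   # 4.0
print(dark_current_factor(18))   # 8.0
```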
Fill Factor
Sensitivity to Light
CCD
High quality, low noise images
Good pixel-to-pixel uniformity
Electronic shutter without artifacts
100% fill factor
Highest sensitivity
High power consumption
Multiple voltages required
Increased system integration complexity and cost
CCD Sensor
A CCD sensor reads out a single row of pixels at a time, after the charge has been moved down the sensor in lock step, row by row.
CMOS
Low power consumption
Camera functions and additional control circuitry can be implemented on the CMOS sensor chip itself
Random pixel read-out capability (windowing)
Fixed pattern noise
Higher dark current noise
Lower light sensitivity
You would have 3x the amount of data to process, or 1/3 the spatial resolution, with color imaging. Evaluate the benefit of the color information relative to the increased complexity and reduced resolution.
Machine vision with a color camera is suitable for color sorting, not colorimetry.
For robustness, the colors being differentiated need to be widely spaced. Watch for uniform spectral output of your light source in color applications (remember that the camera measures the reflected light).
CAMERA INTERFACE
Take a picture
Process the image data
Make a decision or measurement
Do something useful with the results
(A minimal end-to-end sketch of these four steps follows the component list below.)
Not everything enclosed in the box is required:
Computer
Frame Grabber
Camera
Processor
Optics
Lighting
Other Accessories
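A minimal sketch of the four steps using OpenCV in Python; the file name, binary threshold, and pass limit are all illustrative assumptions, not part of the original material:

```python
import cv2

# 1. Take a picture (here: load a stored frame; file name is assumed).
img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# 2. Process the image data: segment the part from the background.
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# 3. Make a decision or measurement: count the part pixels.
part_pixels = cv2.countNonZero(binary)
decision = "PASS" if part_pixels > 5000 else "FAIL"   # limit is assumed

# 4. Do something useful with the results.
print(decision, part_pixels)
```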
In detail
PC-based systems: additional cameras come at very low incremental cost; the PC is available for complex image processing or post-processing tasks, and can be used for storing images, collecting process data, and programming system updates.
Smart cameras are appropriate when:
A small number of cameras is required
Operation of each smart camera is independent of the others in the cell
Minimal post-processing of data is required
No logic is needed between cameras
Lower-end vision algorithms are sufficient
Embedded vision systems provide a complete hardware packaging and software integration solution.
[Diagram: camera analog signal -> ADC -> image buffer -> digital processing; the digitizing stage may be, or may become, part of the camera.]
[Flattened comparison table of camera interfaces (including USB and GigE): bandwidths such as 250 MBps and 1 Gbps for GigE, cable power (yes/no), and adoption (low / moderate / extensive), with attributes rated from 1 (poor) to 5 (excellent).]
CAMERA CALIBRATION
World Coordinate
The process of finding the intrinsic and the extrinsic parameters of a camera is called camera calibration; it depends on the model chosen for the camera.
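As one concrete (hypothetical) instance, OpenCV's calibrateCamera recovers both parameter sets from several views of a checkerboard; the board size and file names below are assumptions:

```python
import cv2
import numpy as np

board = (9, 6)                                   # inner corners per row, column
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for name in ["view0.png", "view1.png", "view2.png"]:
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Returns the intrinsics (camera matrix K, distortion coefficients) and the
# extrinsics for each view (rotation and translation vectors).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection error (RMS):", rms)
```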
World Coordinate
Camera Coordinates
Weak perspective: an approximate linear model x = s X, y = s Y, with constant magnification s = f / Z_avg, where Z_avg is the average depth of the scene. Its validity depends on the working distance and the relative depths of objects in the scene.
For comparison, full perspective projection: x = f [ X / Z ], y = f [ Y / Z ]
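A small sketch comparing the two models (all values assumed) shows when the approximation holds:

```python
# Full perspective vs. weak perspective.
f, z_avg = 0.05, 2.0
s = f / z_avg                       # constant weak-perspective magnification

for X, Z in [(0.1, 1.9), (0.1, 2.1), (0.1, 4.0)]:
    x_full = f * X / Z
    x_weak = s * X
    print(f"Z={Z}: full={x_full:.5f}, weak={x_weak:.5f}")
# The two agree while depth stays close to z_avg; the Z=4.0 point shows the
# approximation breaking down as the relative depth grows.
```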
Cna yuo raed this? It dsenot mtaetr in wtah oerdr the ltteres in the wrod are, the olny iproamtnt tihng is taht the frsit and lsat ltteer be in the rghit pclae.
Just as the camera is no match for human vision, we'll see that the computer cannot even begin to duplicate how the human brain processes image data.
Almost all machine vision systems are based on a priori information; vision is not yet up to the "anything, anywhere" problem.
IMAGE ANALYSIS
Image Processing
Perform mathematical or logical calculations on an image and convert it into another image where the pixels have different values:
Reduce or eliminate noise
Enhance information
Subdue unnecessary or confusing background information
Image Analysis
Perform mathematical or logical calculations on an image to extract features which describe the image content in numerical terms
To make decisions
Point Transformations
EDGE DETECTOR
Image Analysis
How Vision Systems Extract and Use Features from the Image
[Diagram: image -> algorithm -> output feature vectors (e.g., centroid, location)]
Feature Vectors
Despite the wide range of feature vectors that can be extracted from the image, what you do with the values is quite consistent:
Compare to a known good part
Calculate the distance from one feature to another
Calculate the size of the feature
Locate the feature in the field of view
From a priori information, you know where the important features are, and process pixels only in that region
Set up a window, or Region of Interest, and process only those pixels in that region
Allow enough extra coverage for part and fixture tolerances Or use a tool to find the part then automatically adjust the window location for the new part location
Called fixturing
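In code, a window is just an array slice. A minimal sketch (the image, window coordinates, and offsets are illustrative assumptions):

```python
import numpy as np

# Process only the pixels inside a Region of Interest (ROI).
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame

x, y, w, h = 200, 150, 100, 80   # nominal window, sized for fixture tolerance
dx, dy = 5, -3                   # part offset reported by a locating tool

roi = img[y + dy : y + dy + h, x + dx : x + dx + w]
print(roi.shape, roi.mean())     # only these pixels get processed
```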
Feature extraction can be spectral, spatial, or temporal.
Spectral Analysis
Can be used for presence or absence
No location information is available in the feature vector
You can have more than one ROI; they can touch or overlap.
Threshold the grayscale image to binary
Count the pixels in each ROI (white or black)
The computer returns the number of pixels
Then? Compare the measured number of pixels to some standard value to make the decision
These algorithms measure how light or dark the image is, and make decisions based on that measured value.
When you count the number of pixels in the ROI, it may change from image to image for the same part, even if the part and its location are kept the same, due to camera noise and lighting. If the part is moved slightly, you get more variation. If you measure different parts, the variability increases further.
If you make multiple measurements you can plot the distribution of pixel counts in a histogram to study how much variation you have in the process.
Plot the histogram distribution for both good and bad parts.
Verify wide separation in the distributions, and set the threshold for "part OK" between them.
Red is the bad part distribution, green is the good part distribution; a pixel count somewhere in between is set as the threshold.
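A small sketch of this setup check; the pixel counts below are made-up illustrative values, not measured data:

```python
import numpy as np

good = np.array([3510, 3522, 3498, 3530, 3505])   # counts from good parts
bad = np.array([2890, 2875, 2910, 2860, 2901])    # counts from bad parts

separation = good.min() - bad.max()
threshold = (good.min() + bad.max()) / 2          # midway between the groups

print(f"separation = {separation} pixels, threshold = {threshold:.0f}")
```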
All feature vectors measured by vision systems have normal process variations. During setup you need to verify that there is sufficient separation between the measurements the vision system makes on good and bad parts.
False Rejects
Thresholds are set incorrectly at this level to guarantee that only good parts are accepted by the machine. Many specifications read SHALL ACCEPT NO BAD PARTS. The result is falsely rejecting good parts, which interferes with production efficiency.
False Accepts
Thresholds are set incorrectly at this level to relieve production concerns about rejecting too many parts that the operator may call OK. The result is accepting bad parts.
Average Grayscale
The system calculates the average of the grayscale values of the pixels in the ROI. The measured value is compared with the values for good and bad parts in order to make an accept/reject decision. Can be used for presence or absence.
Histogram Analysis
The system calculates the histogram of the grayscale values of the pixels in an ROI.
Features of the histogram are compared to values for good and bad parts to make an accept/reject decision.
Good for texture analysis, or for dynamically adjusting the binary threshold (see the sketch below).
Spatial Analysis
Measurement, Location
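One common way to derive the threshold from the histogram is Otsu's method; a minimal OpenCV sketch (the file name is assumed):

```python
import cv2

# Otsu's method picks the binary threshold from the image histogram.
img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

chosen_t, binary = cv2.threshold(img, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("threshold chosen from the histogram:", chosen_t)
```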
Connectivity Analysis
Initiate the algorithm; the system returns a list of geometric features about each blob in the image (a blob analysis sketch appears below):
Area (number of white pixels)
Perimeter (blue + red)
Convex perimeter (blue + green)
Compactness
Roughness
Centre of gravity (average in x and y)
Bounding box (red)
Minimum x (or y) coordinate
Number of holes
Aspect ratio (ratio of span in x to span in y)
Number of runs
Location: The centre of gravity, or minimum (maximum) pixel locations can be used for identifying where the object is in the image.
Verification: Similar to presence/absence evaluation with spectral analysis, except that more information is present providing a more robust decision.
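A blob analysis sketch using OpenCV's connected components (the file name and threshold are assumed):

```python
import cv2

img = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for i in range(1, n):                    # label 0 is the background
    x, y, w, h, area = stats[i]          # bounding box and pixel area
    cx, cy = centroids[i]                # centre of gravity
    print(f"blob {i}: area={area}, bbox=({x},{y},{w},{h}), "
          f"cog=({cx:.1f},{cy:.1f})")
```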
Averages the centre position of each line of pixels in the rows and columns; this provides sub-pixel accuracy. A minimum or maximum coordinate, by contrast, is determined by the location of a single pixel.
Provides information on the object location and geometry Better than pixel counting because you can count only contiguous pixels
Eliminate unwanted features or noise, such as specular reflections. You can size the object. Geometric verification of blob features provides an additional check that you are counting the right pixels.
Edge Analysis
Edge Pixels
Identify edge pixels; the available measurement tools give distances in pixels:
Measure from line to line (caliper tool)
Measure the angle between two lines
Measure from a point to a line (perpendicular to the line)
Vertical edges are identified where the grayscale changes as you scan along the horizontal direction. Horizontal edges are identified by scanning in the vertical direction. Oblique edges are calculated from a combination of the horizontal and vertical edge strengths.
Sub-pixel accuracy can be achieved if contiguous pixels along an edge are combined into a line used for measurement
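A gradient-based sketch of this horizontal/vertical/oblique decomposition, using Sobel filters in OpenCV (the file name and strength threshold are assumed):

```python
import cv2
import numpy as np

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)   # change along x: vertical edges
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)   # change along y: horizontal edges
strength = np.sqrt(gx ** 2 + gy ** 2)   # combined (oblique) edge strength

edge_pixels = strength > 100            # assumed strength threshold
print(int(edge_pixels.sum()), "edge pixels")
```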
Template Matching
Matching a trained model to the image Does not require the user to know much about the features or grayscale values
Must understand features versus noise or background clutter. Good image contrast is important.
Powerful technique used extensively in vision for electronics and printing Normalized correlation or geometric vector matching
A model of a golden part is taught
The trained template is moved over the image
The system records the percentage match between the template and the image
The template is scanned over the entire search region
The location of the best fit and the % match are returned
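A minimal sketch of this procedure with OpenCV's normalized correlation matcher (the file and template names and the acceptance threshold are assumptions):

```python
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("golden_part.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

print(f"best match {best_score:.0%} at {best_loc}")
if best_score < 0.80:                    # acceptance threshold is assumed
    print("no acceptable match in the search region")
```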
Model
Change in Scale
For Normalized grayscale correlation, the average grayscale intensities of the model and the search area are made equal.
Geometric vector matching is less sensitive to scale, rotation, and color variation than normalized grayscale correlation.
Reasons a match may fail:
Edge strength too low
The acceptable % match might be set too low
Search region too large (includes background noise that could be misclassified)
An Application Example
Shadow
Results
Ensure high contrast, consistent images Use a ROI which minimizes background noise in the search area
Rather than one large ROI for the template, use multiple smaller ROIs which include unique features not seen elsewhere in the image.
Summary
Lighting flexibility and agility
Camera resolution and speed
Vision recognition tools
Computational processing power
Mathematical algorithms
Robot work volume
Gripper design and versatility
Part and material handling