
Image Processing and Analysis

What is Image Processing?


• Image Processing is a technique to enhance raw
images received from cameras/sensors placed on
satellites, space probes, and aircraft, or pictures
taken in normal day-to-day life, for various
applications.
There are two methods available in Image
Processing:
• Analog Image Processing
• Digital Image Processing
• Analog Image Processing - Analog image
processing refers to the alteration of an image
through electrical means. The most common
example is the television image.
• Digital Image Processing - In this case, digital
computers are used to process the image. The
image is converted to digital form using a
scanner-digitizer and then processed.
Purpose of Image Processing
• Visualization - Observe objects that are not
visible.
• Image sharpening and restoration - Create a
better image.
• Image retrieval - Search for an image of interest.
• Measurement of pattern - Measure various
objects in an image.
• Image recognition - Distinguish the objects in an
image.
Image Processing Techniques
• Image data reduction
• Segmentation
• Feature extraction
• Object recognition
Image Data Reduction
Digital Conversion
 Digital conversion reduces the number of gray levels used by the machine
vision system.

 For example, an 8-bit register used for each pixel would give 2⁸ = 256 gray
levels.

 Depending on the requirements of the application, digital conversion can be
used to reduce the number of gray levels by using fewer bits to represent the
pixel light intensity.

 Four bits would reduce the number of gray levels to 2⁴ = 16.

 This kind of conversion significantly reduces the magnitude of the image
processing problem.
Note: Gray level: the discrete brightness level of a pixel or group of pixels.
When an image is digitised or processed, brightness levels that vary
continuously must be quantised, i.e. assigned a value on a scale between
white and black, with shades of grey in between; that value is the grey level.
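
As an illustration, a minimal NumPy sketch of this kind of gray-level requantization (the function name quantize_gray and the choice of mapping pixels to bin centres are illustrative, not from the original material):

```python
import numpy as np

def quantize_gray(img, bits):
    """Reduce an 8-bit grayscale image to 2**bits gray levels.

    img  : 2-D uint8 array with values in 0..255
    bits : number of bits kept per pixel (1..8)
    """
    levels = 2 ** bits            # e.g. 4 bits -> 16 gray levels
    step = 256 // levels          # width of each quantization bin
    # Map every pixel to the centre of its bin so the result still
    # spans roughly the full 0..255 display range.
    return ((img // step) * step + step // 2).astype(np.uint8)

# Example: requantize a random 8-bit image down to 16 gray levels.
img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
print(np.unique(quantize_gray(img, 4)).size)   # at most 16 distinct levels
```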
Windowing
 Windowing involves using only a portion of the total
image stored in the frame buffer for image processing
and analysis. This portion is called a window.

 For example, for inspection of printed circuit boards,
one may wish to inspect and analyze only one
component on the board.

 A rectangular window is selected to surround the
component of interest, and only pixels within the
window are analyzed.
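
Since a window is just a rectangular sub-array, in NumPy it can be taken as a slice of the frame; the following sketch (all names and coordinates are illustrative) shows the idea. The slice is a view into the stored image, so no pixel data is copied:

```python
import numpy as np

def window(img, top, left, height, width):
    """Return the rectangular region of interest (the 'window')."""
    return img[top:top + height, left:left + width]

frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
roi = window(frame, top=100, left=200, height=50, width=80)
print(roi.shape)   # (50, 80) -- only these pixels are analyzed
```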
Segmentation
 In segmentation, the objective is to group areas of an image
having similar characteristics or features into distinct
entities representing parts of the image.

 For example, boundaries (edges) and regions (areas)
represent two natural segments of an image.

The important techniques are:
1. Thresholding
2. Region Growing
3. Edge Detection
Thresholding
• Thresholding is the simplest and most widely used technique
for image segmentation.
• It is useful in discriminating the foreground from the background.
• The thresholding operation is used to convert a multilevel (gray-scale)
image into a binary image.
• The advantage of first obtaining a binary image is that it
reduces the complexity of the data and simplifies the process
of recognition and classification.
• The most common way to convert a gray-level image
into a binary image is to select a single threshold
value (T). All gray-level values below T are then
classified as black (0), i.e. background, and those above
T as white (1), i.e. objects.
• The thresholding operation is a grey-value remapping
operation g defined by:

    g(x,y) = 0 if f(x,y) < T
    g(x,y) = 1 if f(x,y) ≥ T

where (x,y) are the coordinates of a pixel, f(x,y) is the gray level of the
input image at that pixel, T is the threshold value, and g(x,y) is the
thresholded (binary) image.
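
This remapping translates directly into array code; a minimal NumPy sketch (the helper name threshold is illustrative):

```python
import numpy as np

def threshold(f, T):
    """Binary thresholding: g(x,y) = 1 where f(x,y) >= T, else 0."""
    return (f >= T).astype(np.uint8)

f = np.array([[ 10,  40, 200],
              [120, 130,  30],
              [250,  90,  60]], dtype=np.uint8)
print(threshold(f, T=100))
# [[0 0 1]
#  [1 1 0]
#  [1 0 0]]
```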
Histogram
Histograms are constructed by splitting the range of the data into
equal-sized bins (called classes). Then, for each bin, the number of
points from the data set that fall into that bin is counted.
• Vertical axis: frequency (i.e., the count for each bin)
• Horizontal axis: response variable (here, the gray level)

[Figure: gray-level histogram h(i), with pixel counts (0–2500) on the
vertical axis and gray level i (0–250) on the horizontal axis; a large
Background peak and a smaller Object peak are separated by the threshold T.]
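
A gray-level histogram can be computed by counting pixels per gray value; the valley between the background and object peaks is then a natural place to put T. A small NumPy sketch with a toy bimodal image (the sizes and gray values are invented for illustration):

```python
import numpy as np

def gray_histogram(img):
    """Count how many pixels fall into each of the 256 gray-level bins."""
    return np.bincount(img.ravel(), minlength=256)

# Toy bimodal image: dark background with one bright object.
img = np.full((100, 100), 30, dtype=np.uint8)
img[40:60, 40:60] = 200
h = gray_histogram(img)
print(h[30], h[200])   # 9600 background pixels, 400 object pixels
```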
Region Growing
• Region growing groups pixels or sub-regions into larger regions
when a homogeneity criterion is satisfied.
• The region grows around a seed point based on similar properties
(grey level, texture, color).

Pros:
• Works better in noisy images, where edges are hard to identify.

Cons:
• A seed point must be specified.
• Different seed points will give different results.

Pixel aggregation example (7×7 image):

10 10 10 10 10 10 10
10 10 10 69 70 10 10
59 10 60 64 59 56 60
10 59 10 60 70 10 62
10 60 59 65 67 10 65
10 10 10 10 10 10 10
10 10 10 10 10 10 10

Homogeneity criterion:
• The difference between two pixel values is less than or equal to 5.
• Neighbours are considered horizontally, vertically, and diagonally.
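
Pixel aggregation can be sketched as a breadth-first flood fill. In this version (names and seed are illustrative) a neighbour joins the region when it differs from the adjacent region pixel by at most 5, using the 8-connectivity from the slide; comparing each candidate against the seed value instead is a common, stricter variant:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=5):
    """Grow a region from seed (row, col); a neighbour joins when its
    gray value differs from the adjacent region pixel by <= tol."""
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):          # 8-connected neighbourhood:
            for dc in (-1, 0, 1):      # horizontal, vertical, diagonal
                rr, cc = r + dr, c + dc
                if (0 <= rr < h and 0 <= cc < w and not grown[rr, cc]
                        and abs(int(img[rr, cc]) - int(img[r, c])) <= tol):
                    grown[rr, cc] = True
                    queue.append((rr, cc))
    return grown

# The 7x7 pixel-aggregation example above, seeded inside the object.
img = np.array([[10,10,10,10,10,10,10],
                [10,10,10,69,70,10,10],
                [59,10,60,64,59,56,60],
                [10,59,10,60,70,10,62],
                [10,60,59,65,67,10,65],
                [10,10,10,10,10,10,10],
                [10,10,10,10,10,10,10]], dtype=np.uint8)
print(region_grow(img, seed=(2, 3)).astype(int))
```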
Region-Oriented Segmentation
 Region Splitting
 Region growing starts from a set of seed points.
 An alternative is to start with the whole image as a single region and
subdivide the regions that do not satisfy a condition of homogeneity.
 Region Merging
 Region merging is the opposite of region splitting.
 Start with small regions (e.g. 2x2 or 4x4 regions) and merge the
regions that have similar characteristics (such as gray level, variance).
 Typically, splitting and merging approaches are used iteratively.
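
The splitting half of this scheme can be sketched as a recursive subdivision into quadrants that stops when a block is homogeneous. Here homogeneity is taken as max - min gray value within a tolerance, the names are illustrative, and the merging pass that would recombine similar neighbouring leaves is omitted:

```python
import numpy as np

def split(img, r, c, h, w, tol, out):
    """Recursively subdivide a region until it is homogeneous
    (max - min gray value <= tol), collecting the leaf rectangles."""
    block = img[r:r + h, c:c + w]
    if (int(block.max()) - int(block.min()) <= tol) or h == 1 or w == 1:
        out.append((r, c, h, w))       # homogeneous leaf: stop splitting
        return
    h2, w2 = h // 2, w // 2            # subdivide into four quadrants
    split(img, r,      c,      h2,     w2,     tol, out)
    split(img, r,      c + w2, h2,     w - w2, tol, out)
    split(img, r + h2, c,      h - h2, w2,     tol, out)
    split(img, r + h2, c + w2, h - h2, w - w2, tol, out)

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200                    # a bright square on a dark background
regions = []
split(img, 0, 0, 8, 8, 10, regions)
print(len(regions), "homogeneous blocks")
```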

Edge Detection
Goal of Edge Detection
Produce a line “drawing” of a scene from an
image of that scene.
Why is Edge Detection Useful?
• Important features can be extracted from the
edges of an image (e.g., corners, lines, curves).
• These features are used by higher-level
computer vision algorithms (e.g., recognition).
Types of Edges
• Step edge: the image intensity abruptly
changes from one value on one side of the
discontinuity to a different value on the
opposite side.
• Ramp edge: a step edge where the intensity
change is not instantaneous but occurs over
a finite distance.
• Ridge edge: the image intensity abruptly
changes value but then returns to the
starting value within some short distance
(usually generated by lines).
• Roof edge: a ridge edge where the intensity
change is not instantaneous but occurs over
a finite distance (usually generated by the
intersection of two surfaces).
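
One classical way to locate such intensity changes is to estimate the image gradient with Sobel kernels and keep pixels with large gradient magnitude. The slides do not prescribe a particular operator, so the following is only an illustrative sketch (the threshold and test image are made up):

```python
import numpy as np

def sobel_edges(img, thresh=100):
    """Estimate gradient magnitude with 3x3 Sobel kernels and mark
    pixels whose magnitude exceeds thresh (a binary edge map)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                          # vertical-change kernel
    f = img.astype(float)
    h, w = f.shape
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    for r in range(1, h - 1):          # skip the 1-pixel border
        for c in range(1, w - 1):
            patch = f[r - 1:r + 2, c - 1:c + 2]
            gx[r, c] = np.sum(kx * patch)
            gy[r, c] = np.sum(ky * patch)
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

# A vertical step edge: dark left half, bright right half.
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 255
print(sobel_edges(img))   # 1s along the columns where intensity jumps
```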
Feature Extraction
In machine vision applications, it is necessary to distinguish one
object from another.

This is usually accomplished by means of features that
characterize the object.

 Examples of features:
 Area, diameter, perimeter, etc.

A feature, in the context of a vision system, is a single parameter
that permits ease of comparison and identification.
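
As a sketch of how such features might be computed from a binary object mask (the perimeter estimate counts object pixels that touch the background, one of several possible definitions; all names are illustrative):

```python
import numpy as np

def region_features(mask):
    """Area, perimeter estimate, and equivalent diameter (the diameter
    of a circle with the same area) of a binary object mask."""
    area = int(mask.sum())
    # Interior pixels have all four 4-connected neighbours inside the
    # object; boundary pixels (the perimeter estimate) do not.
    p = np.pad(mask, 1)
    interior = (p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    diameter = 2.0 * np.sqrt(area / np.pi)
    return {"area": area, "perimeter": perimeter,
            "equiv_diameter": diameter}

mask = np.zeros((10, 10), dtype=bool)
mask[2:8, 3:9] = True            # a 6x6 square object
print(region_features(mask))     # area 36, 20 boundary pixels
```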
Object recognition
• The object recognition problem is solved using the
extracted feature information.

• The recognition algorithm must be powerful
enough to uniquely identify the object.

• Techniques of object recognition are:
– Template-matching technique
– Structural technique
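
A minimal sketch of the template-matching idea, scoring each placement of the template by the sum of squared differences (the scoring function and names are illustrative; practical systems often use normalized cross-correlation instead):

```python
import numpy as np

def match_template(img, tmpl):
    """Slide tmpl over img and return the top-left position with the
    smallest sum of squared differences, plus that score."""
    ih, iw = img.shape
    th, tw = tmpl.shape
    f, t = img.astype(float), tmpl.astype(float)
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((f[r:r + th, c:c + tw] - t) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos, best

img = np.random.randint(0, 256, size=(30, 30), dtype=np.uint8)
tmpl = img[10:15, 18:23].copy()    # plant a known 5x5 patch
print(match_template(img, tmpl))   # -> ((10, 18), 0.0)
```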
