
Digital Image Processing: Introduction

“One picture is worth more than ten thousand words”
Anonymous
References
“Digital Image Processing”, Rafael C. Gonzalez & Richard E. Woods, Addison-Wesley, 2002
• Much of the material that follows is taken from this book
“Machine Vision: Automated Visual Inspection and Robot Vision”, David Vernon, Prentice Hall, 1991
• Available online at: homepages.inf.ed.ac.uk/rbf/BOOKS/VERNON/
What is a Digital Image?
A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels.
Pixel values typically represent gray levels, colours, heights, opacities, etc.
Remember: digitization implies that a digital image is an approximation of a real scene.

Common image formats include:
• 1 sample per point (B&W or Grayscale)
• 3 samples per point (Red, Green, and Blue)
• 4 samples per point (Red, Green, Blue, and “Alpha”, a.k.a. Opacity)

For most of this course we will focus on grey-scale images.
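These per-point sample counts map directly onto array shapes. A minimal sketch (using NumPy, which is an assumption here; the image size is made up):

```python
import numpy as np

h, w = 4, 6  # a tiny hypothetical image size

gray = np.zeros((h, w), dtype=np.uint8)     # 1 sample per point: B&W / grayscale
rgb = np.zeros((h, w, 3), dtype=np.uint8)   # 3 samples per point: red, green, blue
rgba = np.zeros((h, w, 4), dtype=np.uint8)  # 4 samples per point: R, G, B and alpha

print(gray.shape, rgb.shape, rgba.shape)    # (4, 6) (4, 6, 3) (4, 6, 4)
```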
The figure is an example of a digital image like the one you are now viewing on your computer screen. Actually, this image is nothing but a two-dimensional array of numbers ranging between 0 and 255. Each number represents the value of the function f(x,y) at a point; in this case the values 128, 230 and 123 each represent an individual pixel value. The dimensions of the picture are the dimensions of this two-dimensional array.

[Figure: a 4×3 array of pixel values:
128 230 123
232 123 321
123 77 89
80 255 255]
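The array in the figure can be written down directly and indexed to read off f(x,y) at any point. A sketch using NumPy (an assumption; note that the figure's value 321 falls outside the stated 0–255 range, so a valid placeholder is used in its place):

```python
import numpy as np

# The figure's 4x3 array of gray levels in [0, 255].
# The figure lists 321 in the second row, which is outside the valid
# range, so 231 is used here as a placeholder.
f = np.array([[128, 230, 123],
              [232, 123, 231],
              [123,  77,  89],
              [ 80, 255, 255]], dtype=np.uint8)

print(f.shape)   # (4, 3): the picture's dimensions are the array's dimensions
print(f[0, 1])   # 230: the value of f at row 0, column 1
```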
What is Digital Image Processing?
Digital image processing focuses on two major tasks
• Improvement of pictorial information for human interpretation
• Processing of image data for storage, transmission and representation for
autonomous machine perception.
How it works

In the figure above, an image captured by a camera is sent to a digital system that removes all the other details and focuses on the water drop, zooming in on it in such a way that the quality of the image is preserved.
The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes.

Low-Level Process
• Input: Image; Output: Image
• Examples: Noise removal, image sharpening

Mid-Level Process
• Input: Image; Output: Attributes
• Examples: Object recognition, segmentation

High-Level Process
• Input: Attributes; Output: Understanding
• Examples: Scene understanding, autonomous navigation
History of Digital Image Processing
Early 1920s: One of the first applications of digital imaging was in the newspaper industry
• The Bartlane cable picture transmission service
• Images were transferred by submarine cable between London and New York
• Pictures were coded for cable transfer and reconstructed at the receiving end on a telegraph printer

[Figure: early digital image]
History of DIP (cont…)
Mid to late 1920s: Improvements to the Bartlane system resulted in higher quality images
• New reproduction processes based on photographic techniques
• Increased number of tones in reproduced images

[Figures: improved digital image; early 15-tone digital image]
History of DIP (cont…)
1960s: Improvements in computing technology and the onset of the space race led to a surge of work in digital image processing
• 1964: Computers used to improve the quality of images of the moon taken by the Ranger 7 probe
• Such techniques were used in other space missions including the Apollo landings

[Figure: a picture of the moon taken by the Ranger 7 probe minutes before impact]
History of DIP (cont…)
1970s: Digital image processing begins to be used in medical applications
• 1979: Sir Godfrey N. Hounsfield & Prof. Allan M. Cormack share the Nobel Prize in medicine for the invention of tomography, the technology behind Computerised Axial Tomography (CAT) scans

[Figure: typical head slice CAT image]
History of DIP (cont…)
1980s - Today: The use of digital image processing techniques has
exploded and they are now used for all kinds of tasks in all kinds of
areas
• Image enhancement/restoration
• Artistic effects
• Medical visualisation
• Industrial inspection
• Law enforcement
• Human computer interfaces
Examples: Image Enhancement
One of the most common uses of DIP techniques: improve quality, remove noise, etc.
Examples: The Hubble
Telescope
Launched in 1990, the Hubble telescope can take images of very distant objects.
However, a flaw in its mirror made many of Hubble’s images useless.
Image processing techniques were used to fix this.
Examples: Artistic Effects
Artistic effects are
used to make images
more visually
appealing, to add
special effects and to
make composite
images
Examples: Medicine
Take slice from MRI scan of canine heart, and find
boundaries between types of tissue
• Image with gray levels representing tissue
density
• Use a suitable filter to highlight edges

[Figures: original MRI image of a dog heart; edge-detection image]
Examples: GIS
Geographic Information Systems
• Digital image processing techniques are used
extensively to manipulate satellite imagery
• Terrain classification
• Meteorology
Examples: GIS (cont…)
Night-Time Lights of the
World data set
• Global inventory of
human settlement
• Not hard to imagine the
kind of analysis that
might be done using this
data
Examples: Industrial Inspection
Human operators are
expensive, slow and
unreliable
Make machines do the
job instead
Industrial vision systems
are used in all kinds of
industries
Can we trust them?
Examples: PCB Inspection
Printed Circuit Board (PCB) inspection
• Machine inspection is used to determine that all
components are present and that all solder joints are
acceptable
• Both conventional imaging and x-ray imaging are
used
Examples: Law Enforcement
Image processing
techniques are used
extensively by law
enforcers
• Number plate recognition
for speed
cameras/automated toll
systems
• Fingerprint recognition
• Enhancement of CCTV
images
Examples: HCI
Try to make human computer
interfaces more natural
• Face recognition
• Gesture recognition
Does anyone remember the
user interface from “Minority
Report”?
These tasks can be extremely
difficult
Applications of Digital Image
Processing
• Image sharpening and restoration
• Medical field
• Remote sensing
• Transmission and encoding
• Machine/Robot vision
• Color processing
• Pattern recognition
• Video processing
• Microscopic Imaging
• Others
Image sharpening and restoration
• Image sharpening and restoration here refers to processing images captured by a modern camera in order to improve them, or to manipulating those images to achieve a desired result. In short, it covers what Photoshop usually does.
• This includes zooming, blurring, sharpening, gray scale to colour conversion (and vice versa), edge detection, image retrieval and image recognition. Common examples:

[Figures: original, zoomed, blurred, sharpened, and edge-detected versions of an image]
Hurdle detection
• Hurdle (obstacle) detection is a common task performed through image processing: identifying the different types of objects in an image and then calculating the distance between the robot and the hurdles.
Line follower robot
• Most of the robots today work by following the line and thus are called line follower robots. This
help a robot to move on its path and perform some tasks. This has also been achieved through
image processing.
Fundamental Steps in Digital Image Processing

[Diagram: starting from the problem domain, the processing chain runs through Image Acquisition, Image Enhancement, Image Restoration, Colour Image Processing, Image Compression, Morphological Processing, Segmentation, Representation & Description and Object Recognition]
Step 1: Image Acquisition
The image is captured by a sensor (e.g. a camera) and digitized, using an analogue-to-digital converter, if the output of the camera or sensor is not already in digital form.
Step 2: Image Enhancement
The process of manipulating an image so that the result is more suitable than the original for a specific application.
The idea behind enhancement techniques is to bring out details that are hidden, or simply to highlight certain features of interest in an image.
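As an illustration of bringing out hidden detail, a simple enhancement technique is contrast stretching, which maps a narrow band of gray levels onto the full range. A minimal NumPy sketch (the function name and test values are made up for illustration):

```python
import numpy as np

def stretch_contrast(img):
    """Linearly map the image's min..max range onto the full 0..255 range."""
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return img.copy()
    out = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return out.astype(np.uint8)

# A low-contrast image whose values sit in a narrow band around mid-gray.
dull = np.array([[100, 110], [120, 130]], dtype=np.uint8)
print(stretch_contrast(dull))   # [[  0  85] [170 255]] -- spans the full range
```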
Step 3: Image Restoration
- Improving the appearance of an image in an objective sense.
- Restoration techniques tend to be based on mathematical or probabilistic models of image degradation.
Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a “good” enhancement result.
Step 4: Colour Image Processing
Use the colour of the image to extract features of interest in an image.
Covers colour modelling and processing in a digital domain, etc.
Step 5: Wavelets
Wavelets are the foundation of representing images in various degrees of resolution. They are used for image data compression, where images are subdivided into smaller regions.
Step 6: Compression
Techniques for reducing the storage required to
save an image or the bandwidth required to
transmit it.
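A toy illustration of one such technique is run-length encoding, which stores runs of identical pixel values as (value, count) pairs; it works well for images with large uniform areas. (The function and data below are illustrative, not from the notes.)

```python
def rle_encode(pixels):
    """Encode a sequence of pixel values as (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([p, 1])     # start a new run
    return [(v, n) for v, n in runs]

row = [255, 255, 255, 0, 0, 128]
print(rle_encode(row))   # [(255, 3), (0, 2), (128, 1)]
```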
Step 7: Morphological Processing
Tools for extracting image
components that are useful in
the representation and
description of shape.
This step marks a transition from processes that output images to processes that output image attributes.
Step 8: Image Segmentation
Segmentation procedures partition an image into its
constituent parts or objects.
Important Tip: The more accurate the segmentation, the
more likely recognition is to succeed.
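The simplest segmentation procedure is global thresholding, which partitions pixels into object and background by gray level. A minimal NumPy sketch (the image and threshold are made up; in practice the threshold could be chosen automatically, e.g. by Otsu's method):

```python
import numpy as np

# A tiny image with a bright "object" against a dark background.
img = np.array([[ 20,  30, 200],
                [ 25, 210, 220],
                [ 15,  35, 205]], dtype=np.uint8)

T = 128                  # hand-picked global threshold
mask = img > T           # True = object pixel, False = background pixel
print(mask.astype(np.uint8))
```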
Step 9: Representation and Description
- Representation: decide whether the data should be represented as a boundary or as a complete region. Representation almost always follows the output of a segmentation stage.
- Boundary representation: focuses on external shape characteristics, such as corners and inflections.
- Region representation: focuses on internal properties, such as texture or skeletal shape.

Representation transforms raw data into a form suitable for subsequent computer processing. Description deals with extracting attributes that yield some quantitative information of interest or that are basic for differentiating one class of objects from another.
Step 10: Object Recognition
Recognition is the process that assigns a label, such as “vehicle”, to an object based on the information provided by its descriptors.
Fundamentals of Digital Images

[Figure: the image “After snow storm” represented as a function f(x,y), with the origin at the top-left corner]

⬥ An image: a multidimensional function of spatial coordinates.
⬥ Spatial coordinate: (x,y) for the 2D case such as a photograph,
(x,y,z) for the 3D case such as CT scan images,
(x,y,t) for movies.
⬥ The function f may represent intensity (for monochrome images), color (for color images) or other associated values.
Conventional Coordinate for Image Representation

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Digital Image Types: Intensity Image
Intensity image or monochrome image: each pixel corresponds to light intensity, normally represented in gray scale (gray level).

[Figure: gray scale values]
Digital Image Types: RGB Image
Color image or RGB image: each pixel contains a vector representing the red, green and blue components.

[Figure: RGB components]
Image Types: Binary Image
Binary image or black and white image: each pixel contains one bit:
1 represents white
0 represents black

[Figure: binary data]
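Since each pixel carries only one bit, a binary image can be stored very compactly. A NumPy sketch (an assumption, not part of the notes) that packs eight pixels into each byte:

```python
import numpy as np

# A 2x8 binary image: 1 = white, 0 = black.
bw = np.array([[1, 0, 0, 1, 1, 1, 0, 0],
               [0, 1, 1, 0, 0, 0, 1, 1]], dtype=np.uint8)

packed = np.packbits(bw)             # 16 one-bit pixels -> 2 bytes
print(packed.nbytes, "bytes")        # 2 bytes

restored = np.unpackbits(packed).reshape(bw.shape)
print(np.array_equal(restored, bw))  # True: nothing was lost
```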
Digital Image Acquisition Process

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)

Generating a Digital Image
Basic Relationship of Pixels

Conventional indexing method: the origin (0,0) is at the top-left, x increases to the right and y increases downward. The pixel (x,y) and its neighbors:

(x-1,y-1)  (x,y-1)  (x+1,y-1)
(x-1,y)    (x,y)    (x+1,y)
(x-1,y+1)  (x,y+1)  (x+1,y+1)
Neighbors of a Pixel

The neighborhood relation tells which pixels are adjacent; it is useful for analyzing regions.

4-neighbors of p at (x,y):

N4(p) = { (x−1,y), (x+1,y), (x,y−1), (x,y+1) }

           (x,y-1)
(x-1,y)       p      (x+1,y)
           (x,y+1)

The 4-neighborhood relation considers only vertical and horizontal neighbors.

Note: q ∈ N4(p) implies p ∈ N4(q)
Neighbors of a Pixel (cont.)

8-neighbors of p at (x,y):

N8(p) = { (x−1,y−1), (x,y−1), (x+1,y−1), (x−1,y), (x+1,y), (x−1,y+1), (x,y+1), (x+1,y+1) }

(x-1,y-1)  (x,y-1)  (x+1,y-1)
(x-1,y)       p     (x+1,y)
(x-1,y+1)  (x,y+1)  (x+1,y+1)

The 8-neighborhood relation considers all eight neighbor pixels.
Neighbors of a Pixel (cont.)

Diagonal neighbors of p at (x,y):

ND(p) = { (x−1,y−1), (x+1,y−1), (x−1,y+1), (x+1,y+1) }

(x-1,y-1)            (x+1,y-1)
              p
(x-1,y+1)            (x+1,y+1)

The diagonal-neighborhood relation considers only the diagonal neighbor pixels.
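The three neighbourhood relations can be generated directly from the offsets above. A small sketch (the function names are made up):

```python
def n4(x, y):
    """4-neighbors: horizontal and vertical only."""
    return [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]

def nd(x, y):
    """Diagonal neighbors."""
    return [(x - 1, y - 1), (x + 1, y - 1), (x - 1, y + 1), (x + 1, y + 1)]

def n8(x, y):
    """8-neighbors: the union of N4 and ND."""
    return n4(x, y) + nd(x, y)

p = (5, 5)
print(len(n4(*p)), len(nd(*p)), len(n8(*p)))   # 4 4 8
```

Note the symmetry from the slides holds: if q is in N4(p), then p is in N4(q).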
Image Sampling and Quantization

Image sampling: discretize an image in the spatial domain.
Spatial resolution / image resolution: pixel size or number of pixels.

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
How to choose the spatial resolution

Spatial resolution = sampling locations

[Figures: original image; sampled image]

Under-sampling loses some image detail!
Effect of Spatial Resolution

[Figures: the same image at 256x256, 128x128, 64x64 and 32x32 pixels]
Effect of Spatial Resolution

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Moiré Pattern Effect: Special Case of Sampling

Moiré patterns occur when the frequencies of two superimposed periodic patterns are close to each other.

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Effect of Spatial Resolution

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Can we increase spatial resolution by interpolation?

Down-sampling is an irreversible process.

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
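A quick NumPy experiment illustrates why: after down-sampling, interpolation can only guess at the discarded samples (the toy image and nearest-neighbour upsampling below are illustrative choices):

```python
import numpy as np

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # a toy "image" with fine detail

small = img[::2, ::2]                               # down-sample: keep every 2nd sample
big = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)  # nearest-neighbour upsample

print(big.shape == img.shape)     # True: same size again
print(np.array_equal(big, img))   # False: the discarded detail is gone for good
```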
Python code

import cv2
import matplotlib.pyplot as plt

img = cv2.imread('The-original-cameraman-image.png')

print("Image Properties")
print("- Number of Pixels: " + str(img.size))
print("- Shape/Dimensions: " + str(img.shape))

# OpenCV loads images in BGR channel order; convert to RGB for display
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()
