
Digital Image Processing
Introduction

Dr. Anisha M. Lal
SCOPE, VIT University

Contents

• This lecture will cover:

• Image Processing Fields
• Computerized Processes Types
• Digital Image Definition
• Fundamental Steps in Digital Image Processing
Image Processing Fields

• Computer Graphics: The creation of images

• Image Processing: Enhancement or other manipulation of the image

• Computer Vision: Analysis of the image content

Computerized Processes Types

• Low-Level Processes:
  • Input and output are images
  • Tasks: primitive operations, such as image processing to reduce
    noise, contrast enhancement and image sharpening
Computerized Processes Types

• Mid-Level Processes:
• Inputs, generally, are images. Outputs are
attributes extracted from those images (edges,
contours, identity of individual objects)
• Tasks:
• Segmentation (partitioning an image into regions or objects)
• Description of those objects to reduce them to a form suitable
for computer processing
• Classification (recognition) of objects
Computerized Processes Types

• High-Level Processes:
• Image analysis and computer vision
Digital Image Definition

• An image can be defined as a two-dimensional function f(x,y)

  • x, y: spatial coordinates
  • f: the amplitude at any pair of coordinates (x,y), which is called the
    intensity or gray level of the image at that point.
  • In a digital image, x, y and f are all finite and discrete quantities.
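A minimal sketch of this definition in code (Python with NumPy assumed; not part of the original slides): a digital image is just a finite 2-D array of discrete intensity values, and indexing it at a coordinate pair returns the gray level at that point.

```python
import numpy as np

# A digital image as a finite, discrete two-dimensional function f(x, y):
# a tiny 4x4 grayscale image with 8-bit intensity values.
f = np.array([[10, 10, 16, 28],
              [ 9,  6, 26, 37],
              [15, 25, 13, 22],
              [32, 15, 87, 39]], dtype=np.uint8)

x, y = 2, 1                 # spatial coordinates (column x, row y)
print(f[y, x])              # intensity (gray level) at (x, y) -> 26
print(f.shape, f.dtype)     # (4, 4) uint8: x, y and f are finite and discrete
```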
Digital Image Types : Intensity Image

Intensity image or monochrome image:
each pixel corresponds to a light intensity,
normally represented as a gray-scale (gray
level) value.

Gray scale values

10 10 16 28
 9  6 26 37
15 25 13 22
32 15 87 39
Digital Image Types : COLOR (RGB) Image

Color image or RGB image:
each pixel contains a vector representing the
red, green and blue components.

RGB components

[Figure: three 4×4 matrices of red, green and blue component values, one per colour channel.]
Image Types : Binary Image

Binary image or black-and-white image:
each pixel contains one bit:
  1 represents white
  0 represents black

Binary data

0 0 0 0
0 0 0 0
1 1 1 1
1 1 1 1
Image Types : Index Image
Index image:
each pixel contains an index number
pointing to a color in a color table.

Index values

1 4 9
6 4 7
6 5 2

Color Table

Index No.   Red component   Green component   Blue component
    1            0.1              0.5               0.3
    2            1.0              0.0               0.0
    3            0.0              1.0               0.0
    4            0.5              0.5               0.5
    5            0.2              0.8               0.9
    …             …                …                 …
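A small sketch (NumPy assumed, values taken from the slide) of how an index image is expanded to an RGB image by looking each index up in the color table; the variable names are illustrative only.

```python
import numpy as np

# Index image: each pixel stores an index number pointing into the color table.
index_image = np.array([[1, 4, 9],
                        [6, 4, 7],
                        [6, 5, 2]])

# Color table: entry i holds the (red, green, blue) components for index i.
# Only indices 1-5 appear on the slide; any other index falls back to black here.
color_table = {1: (0.1, 0.5, 0.3),
               2: (1.0, 0.0, 0.0),
               3: (0.0, 1.0, 0.0),
               4: (0.5, 0.5, 0.5),
               5: (0.2, 0.8, 0.9)}

rgb = np.zeros(index_image.shape + (3,))
for (r, c), idx in np.ndenumerate(index_image):
    rgb[r, c] = color_table.get(idx, (0.0, 0.0, 0.0))

print(rgb[0, 0])   # pixel with index 1 -> [0.1 0.5 0.3]
```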
3D, 4D and 5D

https://youtu.be/TGsGGtPWzfo
Fundamental Steps in Digital Image Processing:
[Block diagram (after Gonzalez & Woods): starting from the problem domain,
Image Acquisition feeds Image Enhancement, Image Restoration, Colour Image
Processing, Wavelets & Multiresolution Processing, Compression and
Morphological Processing (processes whose outputs generally are images),
followed by Segmentation, Representation & Description and Object Recognition
(processes whose outputs generally are image attributes). All stages draw on
a common Knowledge Base.]
Step 1: Image Acquisition

The image is captured by a sensor (e.g. a camera) and
digitized, if the output of the camera or sensor is not
already in digital form, using an analogue-to-digital
converter.
Step 2: Image Enhancement

The process of manipulating an image so that the result is more
suitable than the original for a specific application.

The idea behind enhancement techniques is to bring out details that
are hidden, or simply to highlight certain features of interest in an
image.
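As a concrete illustration, one common enhancement technique is linear contrast stretching, which maps the narrow range of gray levels in a dull image onto the full 0-255 range. This is a hedged sketch in Python/NumPy; the slides do not prescribe this particular method.

```python
import numpy as np

def contrast_stretch(img):
    """Linearly stretch the gray levels of img to the full 0-255 range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                          # flat image: nothing to stretch
        return img.astype(np.uint8)
    return ((img - lo) / (hi - lo) * 255.0).astype(np.uint8)

# A low-contrast image whose values only span 100..140: detail is hard to see.
dull = np.array([[100, 110, 120, 130],
                 [105, 115, 125, 135],
                 [110, 120, 130, 140],
                 [100, 105, 135, 140]], dtype=np.uint8)

print(contrast_stretch(dull))   # the same detail now spans 0..255
```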
Step 3: Image Restoration

- Improving the appearance of an image.
- Restoration techniques tend to be based on mathematical or
  probabilistic models of image degradation.
  Enhancement, on the other hand, is based on human subjective
  preferences regarding what constitutes a “good” enhancement result.
Step 4: Colour Image Processing

Uses the colour information in an image to extract
features of interest.
Step 5: Wavelets

Wavelets are the foundation for representing images at various
degrees of resolution; they are used, among other things, for
image data compression.
Step 6: Compression

Techniques for reducing the storage required to save


an image or the bandwidth required to transmit it.
Step 7: Morphological Processing

Tools for extracting image components that are useful in the


representation and description of shape.
In this step, there would be a transition from processes that
output images, to processes that output image attributes.
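For illustration, below is a minimal sketch of one basic morphological operation, binary erosion with a 3×3 square structuring element; it shrinks objects and is a building block for extracting shape components. The implementation is an assumption for teaching purposes, not a method prescribed by the slides.

```python
import numpy as np

def erode(binary):
    """Binary erosion with a 3x3 square structuring element:
    a pixel stays 1 only if every pixel in its 3x3 neighbourhood is 1."""
    padded = np.pad(binary, 1, constant_values=0)
    out = np.zeros_like(binary)
    rows, cols = binary.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = padded[r:r + 3, c:c + 3].all()
    return out

obj = np.zeros((7, 7), dtype=np.uint8)
obj[1:6, 1:6] = 1            # a 5x5 white square on a black background
print(erode(obj))            # the square shrinks to 3x3: a shape attribute emerges
```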
Step 8: Image Segmentation

Segmentation procedures partition an image into its constituent


parts or objects.

Important Tip: The more accurate the segmentation, the


more likely recognition is to succeed.
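The simplest segmentation procedure is global thresholding, which partitions the image into object and background pixels. A minimal sketch (Python/NumPy assumed; the threshold value is illustrative):

```python
import numpy as np

def threshold_segment(img, t):
    """Partition an image into object (1) and background (0) using a global threshold t."""
    return (img > t).astype(np.uint8)

scene = np.array([[ 12,  15,  20, 200],
                  [ 10, 180, 190, 210],
                  [ 14, 175, 185,  18],
                  [ 11,  13,  16,  12]], dtype=np.uint8)

mask = threshold_segment(scene, t=128)
print(mask)     # 1s mark the bright object, 0s the dark background
```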
Step 9: Representation and Description

- Representation: make a decision whether the data
  should be represented as a boundary or as a complete
  region. It almost always follows the output of a
  segmentation stage.
- Boundary Representation: Focus on external
shape characteristics, such as corners and
inflections
- Region Representation: Focus on internal
properties, such as texture or skeleton shape
Step 9: Representation and
Description
- Choosing a representation is only part of the
solution for transforming raw data into a form
suitable for subsequent computer processing
(mainly recognition)
- Description, also called feature selection, deals
  with extracting attributes that result in some
  information of interest.
Step 10: Recognition and Interpretation

Recognition: the process that assigns a label to an object based on
the information provided by its description.
Step 11: Knowledge Base

Knowledge about a problem domain is coded into an


image processing system in the form of a knowledge
database.
Brain tumour detection using image
processing
Components of an Image Processing System
[Diagram: a typical general-purpose DIP system for a given problem domain,
comprising image sensors, specialized image processing hardware, a computer,
image processing software, mass storage, image displays, hardcopy devices
and a network.]
Components of an Image
Processing System
1. Image Sensors
Two elements are required to acquire digital images.
The first is the physical device that is sensitive to
the energy radiated by the object we wish to image
(Sensor). The second, called a digitizer, is a device for
converting the output of the physical sensing
device into digital form.
Components of an Image
Processing System
2. Specialized Image Processing Hardware
Usually consists of the digitizer, mentioned before, plus
hardware that performs other primitive operations, such as
an arithmetic logic unit (ALU), which performs arithmetic
and logical operations in parallel on entire images.

This type of hardware sometimes is called a front-end


subsystem, and its most distinguishing characteristic is
speed. In other words, this unit performs functions that
require fast data throughputs that the typical main
computer cannot handle.
Components of an Image
Processing System
3. Computer
The computer in an image processing system is a
general-purpose computer and can range from a PC
to a supercomputer. In dedicated applications,
sometimes specially designed computers are used to
achieve a required level of performance.
Components of an Image
Processing System
4. Image Processing Software
Software for image processing consists of specialized
modules that perform specific tasks. A well-designed
package also includes the capability for the user to
write code that, as a minimum, utilizes the specialized
modules.
Components of an Image
Processing System
5. Mass Storage Capability
Mass storage capability is a must in image processing applications.
An image of size 1024 × 1024 pixels, at 8 bits per pixel, requires one
megabyte of storage space if the image is not compressed.

Digital storage for image processing applications falls into three


principal categories:
1. Short-term storage for use during processing.
2. On-line storage for relatively fast recall.
3. Archival storage, characterized by infrequent access.
Components of an Image
Processing System
5. Mass Storage Capability
One method of providing short-term storage is computer memory. Another
is by specialized boards, called frame buffers, that store one or more images
and can be accessed rapidly.

The on-line storage method allows virtually instantaneous image zoom, as


well as scroll (vertical shifts) and pan (horizontal shifts). On-line storage
generally takes the form of magnetic disks and optical-media storage. The
key factor characterizing on-line storage is frequent access to the stored data.

Finally, archival storage is characterized by massive storage requirements but


infrequent need for access.
Components of an Image
Processing System
6. Image Displays
The displays in use today are mainly color
(preferably flat screen) TV monitors. Monitors are
driven by the outputs of the image and graphics
display cards that are an integral part of a
computer system.
Components of an Image
Processing System
7. Hardcopy devices
Used for recording images; they include laser printers,
film cameras, heat-sensitive devices, inkjet units and
digital units such as optical and CD-ROM disks.
Components of an Image
Processing System
8. Networking
Networking is almost a default function in any computer system in
use today. Because of the large amount of data inherent in image
processing applications, the key consideration in image transmission
is bandwidth.

In dedicated networks, this typically is not a problem,


but communications with remote sites via the internet
are not always as efficient.
Visual Perception: Human Eye

(Picture from Microsoft Encarta 2000)

Cross Section of the Human Eye

The lens and the ciliary muscle focus the light reflected from objects
onto the retina to form an image of the objects.
(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
1. The lens contains about 60-70% water and about 6% fat.
2. The iris diaphragm controls the amount of light that enters the eye.
3. Light receptors in the retina:
   - About 6-7 million cones for bright-light (photopic) vision.
   - The density of cones is about 150,000 elements/mm².
   - Cones are involved in color vision.
   - Cones are concentrated in the fovea, an area of about 1.5 × 1.5 mm².
   - About 75-150 million rods for dim-light (scotopic) vision.
   - Rods are sensitive to low levels of light and are not involved in color vision.
4. The blind spot is the region where the optic nerve emerges from the eye.
5. Fovea: a circular indentation in the center of the retina, about 1.5 mm in
   diameter, densely packed with cones.
• Photoreceptors around the fovea are responsible for spatial vision (still images).
• Photoreceptors around the periphery are responsible for detecting motion.
Image Formation in the Human Eye

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)

(Picture from Microsoft Encarta 2000)


Distribution of Rods and Cones in the Retina

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Brightness Adaptation of Human Eye : Simultaneous Contrast

Simultaneous contrast: all the small squares have exactly the same intensity,
but they appear progressively darker as the background becomes lighter.
Optical illusion

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Digital Image Acquisition Process

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
A Simple Image Formation Model
Mathematical representation of monochromatic images:
– A two-dimensional function f(x,y), where f is the gray
  level of the pixel at location (x,y).

      0 < f(x,y) < ∞

– The values of the function f at different locations are
  proportional to the energy radiated from the imaged
  object.

      f(x,y) = i(x,y) · r(x,y)
      0 < i(x,y) < ∞
      0 < r(x,y) < 1

i(x,y): Sun on a clear day    90,000 lm/m²
        Sun on a cloudy day   10,000 lm/m²
        Full moon                0.1 lm/m²
        Commercial office      1,000 lm/m²

r(x,y): Black velvet           0.01
        Stainless steel        0.65
        Flat-white wall paint  0.80
        Silver-plated metal    0.90
        Snow                   0.93
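Plugging the slide's illumination and reflectance values into f(x,y) = i(x,y) · r(x,y) gives a feel for the range of f; a small sketch in Python:

```python
# Illumination i(x, y) in lm/m^2 and reflectance r(x, y), values from the slide.
i_sun_clear = 90_000.0    # sun on a clear day
r_snow = 0.93             # snow
r_black_velvet = 0.01     # black velvet

# f(x, y) = i(x, y) * r(x, y): the intensity recorded at a point.
print(i_sun_clear * r_snow)           # 83700.0 -> a very bright point
print(i_sun_clear * r_black_velvet)   # 900.0   -> a much darker point
```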
Image Sampling and Quantization

• The output of most sensors is continuous in amplitude


and spatial coordinates.
• Converting an analog image to a digital image requires
  sampling and quantization
• Sampling: is digitizing the coordinate values
• Quantization: is digitizing the amplitude values
Sampling and Quantization

• Sampling rate
• Nyquist sampling rate
• 2*maximum frequency
• Aliasing
• Quantization
• Rounding off
• Truncation
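A small sketch (Python/NumPy assumed) separating the two operations: sampling digitizes the coordinates by keeping every n-th pixel, and quantization digitizes the amplitudes by rounding them to a smaller set of gray levels.

```python
import numpy as np

def sample(img, step):
    """Sampling: digitize the coordinates by keeping every `step`-th pixel."""
    return img[::step, ::step]

def quantize(img, levels):
    """Quantization: digitize the amplitudes by rounding to `levels` gray levels."""
    width = 256 // levels
    return (img // width) * width

img = np.arange(256, dtype=np.uint8).reshape(16, 16)   # a 16x16 gray ramp

print(sample(img, 2).shape)            # (8, 8): half the spatial resolution
print(np.unique(quantize(img, 4)))     # [  0  64 128 192]: only 4 gray levels remain
```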
Aliasing and Anti-aliasing
Generating a Digital Image

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Image Sampling and Quantization

Spatial resolution / image resolution: pixel size or number of pixels

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Representing Digital Images

The pixel intensity levels (gray-scale levels) lie in the
interval [0, L−1]:

      0 ≤ a(i,j) ≤ L−1,   where L = 2^k

The dynamic range of an image is the range of values spanned by
the gray scale.

The number of bits, b, required to store a digitized image of size M
by N, with k bits per pixel, is

      b = M × N × k
Effect of Spatial Resolution

256x256 pixels 128x128 pixels

64x64 pixels 32x32 pixels


Effect of Quantization Levels

256 levels 128 levels

64 levels 32 levels
Effect of Quantization Levels (cont.)

16 levels 8 levels

4 levels 2 levels
Basic Relationship of Pixels

(0,0) ───────────→ x
  │
  │   (x-1,y-1)  (x,y-1)  (x+1,y-1)
  │   (x-1,y)    (x,y)    (x+1,y)
  │   (x-1,y+1)  (x,y+1)  (x+1,y+1)
  ↓
  y
      Conventional indexing method
Neighbors of a Pixel

The neighborhood relation is used to tell which pixels are adjacent.
It is useful for analyzing regions.

4-neighbors of p:

          (x,y-1)
  (x-1,y)    p    (x+1,y)        N4(p) = { (x+1,y), (x-1,y), (x,y+1), (x,y-1) }
          (x,y+1)

The 4-neighborhood relation considers only the vertical and
horizontal neighbors.

Note: q ∈ N4(p) implies p ∈ N4(q).
Neighbors of a Pixel (cont.)

8-neighbors of p:

  (x-1,y-1)  (x,y-1)  (x+1,y-1)
  (x-1,y)       p     (x+1,y)        N8(p) = { (x-1,y-1), (x,y-1), (x+1,y-1),
  (x-1,y+1)  (x,y+1)  (x+1,y+1)                (x-1,y),            (x+1,y),
                                               (x-1,y+1), (x,y+1), (x+1,y+1) }

The 8-neighborhood relation considers all neighbor pixels.
Neighbors of a Pixel (cont.)

Diagonal neighbors of p:

  (x-1,y-1)           (x+1,y-1)
               p                     ND(p) = { (x-1,y-1), (x+1,y-1),
  (x-1,y+1)           (x+1,y+1)                (x-1,y+1), (x+1,y+1) }

The diagonal-neighborhood relation considers only the diagonal
neighbor pixels.
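A minimal sketch (Python; the names n4/nd/n8 are illustrative) that builds the three neighbor sets directly from the definitions above:

```python
def n4(x, y):
    """4-neighbors of p = (x, y): the horizontal and vertical neighbors."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    """Diagonal neighbors of p = (x, y)."""
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    """8-neighbors of p = (x, y): the union of N4(p) and ND(p)."""
    return n4(x, y) | nd(x, y)

print(sorted(n4(1, 1)))   # [(0, 1), (1, 0), (1, 2), (2, 1)]
print(len(n8(1, 1)))      # 8
```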
Connectivity

Connectivity is adapted from the neighborhood relation.
Two pixels are connected if they are in the same class (i.e. the
same color or the same range of intensity) and they are
neighbors of one another.

For p and q from the same class:

• 4-connectivity: p and q are 4-connected if q ∈ N4(p)
• 8-connectivity: p and q are 8-connected if q ∈ N8(p)
• mixed connectivity (m-connectivity):
  p and q are m-connected if q ∈ N4(p), or
  q ∈ ND(p) and N4(p) ∩ N4(q) = ∅
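Reusing the illustrative n4 and nd helpers sketched above, m-connectivity between two pixels of the same class can be tested as follows (a hedged sketch, not taken from the slides):

```python
def m_adjacent(p, q, same_class):
    """m-adjacency: q is in N4(p), or q is in ND(p) and N4(p) ∩ N4(q)
    contains no pixel from the same class."""
    if q in n4(*p):
        return True
    return q in nd(*p) and not (n4(*p) & n4(*q) & same_class)

# Coordinates of the pixels that belong to the class of interest (value 1).
ones = {(0, 0), (1, 0), (1, 1)}
print(m_adjacent((0, 0), (1, 1), ones))   # False: the shared 4-neighbor (1, 0) is also a 1
print(m_adjacent((0, 0), (1, 0), ones))   # True: the two pixels are 4-adjacent
```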
Adjacency

- A pixel p is adjacent to a pixel q if they are connected.

- Two image subsets S1 and S2 are adjacent if some pixel in S1 is
  adjacent to some pixel in S2.

  [Figure: two adjacent image subsets S1 and S2]

We can define type of adjacency: 4-adjacency, 8-adjacency


or m-adjacency depending on type of connectivity.
Path

A path from pixel p at (x,y) to pixel q at (s,t) is a sequence


of distinct pixels:
(x0,y0), (x1,y1), (x2,y2),…, (xn,yn)
such that
(x0,y0) = (x,y) and (xn,yn) = (s,t)
and
(xi,yi) is adjacent to (xi-1,yi-1), i = 1,…,n

[Figure: a path of adjacent pixels from p to q]
Path (cont.)

[Figure: the same pixels p and q connected by an 8-path and by an m-path.]

An 8-path from p to q results in some ambiguity;
the m-path from p to q resolves this ambiguity.
Distance Measure (Euclidean)
For pixels p, q, and z, with coordinates (x,y), (s,t), and
(v,w), respectively, D is a distance function or metric if

(a) D(p,q) ≥ 0   (D(p,q) = 0 iff p = q),
(b) D(p,q) = D(q,p)   (symmetry),
(c) D(p,z) ≤ D(p,q) + D(q,z)   (triangle inequality).

The Euclidean distance between p and q is

    De(p,q) = [ (x − s)² + (y − t)² ]^(1/2)
Distance Measure (City block, Chessboard)
The D4 distance (city-block) between p and q is

    D4(p,q) = |x − s| + |y − t|

Pixels with D4 ≤ 2 from the center form a diamond:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The D8 distance (chessboard) between p and q is

    D8(p,q) = max( |x − s| , |y − t| )

Pixels with D8 ≤ 2 from the center form a square:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2
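The three distance measures side by side, as a small Python sketch (the function names are illustrative):

```python
import math

def d_e(p, q):
    """Euclidean distance De(p,q) = [(x - s)^2 + (y - t)^2]^(1/2)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):
    """City-block distance D4(p,q) = |x - s| + |y - t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):
    """Chessboard distance D8(p,q) = max(|x - s|, |y - t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_e(p, q), d_4(p, q), d_8(p, q))   # 5.0 7 4
```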
Distance Measure of Path
If the distance depends on the path between two pixels, such as
with m-adjacency, then the Dm distance between two pixels is
defined as the length of the shortest m-path between them.

     0  0  1(q)       0  0  1(q)       0  1  1(q)
     0  1  0          1  1  0          1  1  0
  (p)1  0  0       (p)1  0  0       (p)1  0  0

  Dm(p,q) = 2      Dm(p,q) = 3      Dm(p,q) = 4


Path Length
Find the shortest 4-, 8- and m-path between p and q
for V = {0, 1} and for V = {1, 2}:

     3  1  2  1 (q)
     2  2  0  2
     1  2  1  1
 (p) 1  0  1  2
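One way to check answers to this exercise is a breadth-first search that only steps onto pixels whose values are in V. The sketch below (Python, written for this handout rather than taken from the slides) handles the 4- and 8-path cases; the m-path case additionally needs the m-adjacency rule and is left out.

```python
from collections import deque

def shortest_path_length(grid, p, q, V, moves):
    """Length of the shortest path from p to q through pixels whose value is in V."""
    if grid[p[0]][p[1]] not in V or grid[q[0]][q[1]] not in V:
        return None
    dist = {p: 0}
    todo = deque([p])
    while todo:
        r, c = todo.popleft()
        if (r, c) == q:
            return dist[(r, c)]
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and (nr, nc) not in dist and grid[nr][nc] in V):
                dist[(nr, nc)] = dist[(r, c)] + 1
                todo.append((nr, nc))
    return None                      # no such path exists

N4_MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8_MOVES = N4_MOVES + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

grid = [[3, 1, 2, 1],                # q is the top-right pixel
        [2, 2, 0, 2],
        [1, 2, 1, 1],
        [1, 0, 1, 2]]                # p is the bottom-left pixel
p, q = (3, 0), (0, 3)

print(shortest_path_length(grid, p, q, V={1, 2}, moves=N4_MOVES))  # 6
print(shortest_path_length(grid, p, q, V={0, 1}, moves=N4_MOVES))  # None: no 4-path exists
print(shortest_path_length(grid, p, q, V={0, 1}, moves=N8_MOVES))  # 4
```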
