
DIGITAL IMAGE PROCESSING

Ankon Gopal Banik


According to the syllabus of Department of CSE, Gono Biaswabidyalay
Digital Image Processing

Without the help of our respected teacher, Roena Afroze Anney, this work would have been impossible.

Contents
Chapter 01 – Introduction
Chapter 02 – Color Model
Chapter 03 – Edge Detection
Chapter 04 – Image Compression
Chapter 05 – Morphology
Overview with short questions

Chapter 01 – Introduction

Q:01| What is an image?


Ans: An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane)
coordinates.
The amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at
that point.

Q:02| What is digital image processing?


Ans: Digital image processing focuses on two major tasks:
◦ Improvement of pictorial information for human interpretation
◦ Processing of image data for storage, transmission and representation for autonomous
machine perception

Q:03| Show the process from image processing to computer vision.


Ans: The process from image processing to computer vision can be broken up into low-, mid- and
high-level processes

Low-Level Process
Input: Image
Output: Image
Examples: Noise removal, image sharpening

Mid-Level Process
Input: Image
Output: Attributes
Examples: Object recognition, segmentation

High-Level Process
Input: Attributes
Output: Understanding
Examples: Scene understanding, autonomous navigation

Q:04| Describe the key stages of digital image processing with a diagram.
Ans: The key stages of digital image processing are: image acquisition, image enhancement, image
restoration, color image processing, image compression, morphological processing, segmentation, and
representation and description. (Diagram omitted.)


Q:05| Discuss the application areas of digital image processing.


Ans: The use of digital image processing techniques has exploded, and they are now used for all kinds
of tasks in all kinds of areas:
◦ Image enhancement/restoration
◦ Artistic effects
◦ Medical visualisation
◦ Industrial inspection
◦ Law enforcement
◦ Human computer interfaces

Chapter 02 – Color Model

Q:01| What do you mean by color model?


Ans: Color models attempt to mathematically describe the way that humans perceive color.

Q:02| What is primary color?


Ans: The human eye combines 3 primary colors (using the 3 different types of cones) to discern all
possible colors.
Colors correspond to different wavelengths (frequencies) of light:
 red – 700 nm wavelength
 green – 546.1 nm wavelength
 blue – 435.8 nm wavelength
Longer wavelengths (lower frequencies) correspond to the warmer colors.
Primary colors of light are additive
 Primary colors are red, green, and blue
 Combining red + green + blue yields white
Primary colors of pigment are subtractive
 Primary colors are cyan, magenta, and yellow
 Combining cyan + magenta + yellow yields black

Q:03| Describe RGB color model.


Ans: The RGB color model is a color model used largely in display technologies that use light.
In this model, the colors red (R), green (G) and blue (B) are added together at different intensities to
produce millions of different colors on modern video display screens. This model is extremely
common for TV and video displays, video game console displays, digital cameras and other types of
light-based display devices. The RGB model is an "additive" model: as colors are added, in the form
of light, the result becomes lighter. For instance, the full combination of red, green and blue produces
white.
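The additive behavior described above can be sketched in a few lines of Python (a minimal illustration; the variable names are ours):

```python
import numpy as np

# Additive color mixing: each pixel is an (R, G, B) triple, and adding
# light channels moves the result toward white.
red   = np.array([255, 0, 0], dtype=np.uint16)
green = np.array([0, 255, 0], dtype=np.uint16)
blue  = np.array([0, 0, 255], dtype=np.uint16)

# Combining the three full-intensity primaries yields white.
white = np.clip(red + green + blue, 0, 255).astype(np.uint8)
print(white)  # [255 255 255]
```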

Q:04| Write down the differences between the CMYK and HSI color models.
Ans: We can distinguish the CMYK color model from the HSI color model by considering the following
statements about CMYK and HSI individually-

CMYK color model: This model uses the colors cyan (C), magenta (M), yellow (Y) and black (K),
which is called the “key.” This model is used for color printing. CMYK is subtractive. This is because
the CMYK system uses colored inks to make colors on a white background and "subtracts" brightness
from that white background.
HSI color model: The HSI color model, which stands for Hue, Saturation, Intensity, is based on human
perception of colors. Humans describe a color object by its hue, saturation, and brightness. The HSI
model decouples the intensity component from the color-carrying information (hue and saturation).
So, from the above discussion we can understand that CMYK is an efficient color model for printing
but is not well suited to human interpretation, whereas HSI is.
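The subtractive CMYK relationship can be illustrated with a simple conversion sketch, assuming RGB values normalized to [0, 1] (the function name is ours, not from the text):

```python
def rgb_to_cmyk(r, g, b):
    """Convert normalized RGB in [0, 1] to CMYK: CMY is the subtractive
    complement of RGB, and K extracts the shared black component."""
    c, m, y = 1 - r, 1 - g, 1 - b
    k = min(c, m, y)
    if k == 1.0:                       # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

# Pure red contains no cyan ink at all: it is made of magenta + yellow.
print(rgb_to_cmyk(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0, 0.0)
```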

Q:05| Explain YIQ color model.


Ans: The YIQ color model consists of luminance (Y), in-phase (I), and quadrature (Q) components.
 Luminance (Y) is apparent brightness: how bright an object appears to the human eye.
 The in-phase (I) and quadrature (Q) components together carry the chrominance (color)
information; they are named after the in-phase and quadrature modulation of the color subcarrier
in the TV signal.
It is used for TV broadcasts and is backward compatible with monochrome TV standards.
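The RGB-to-YIQ conversion itself is a linear transform; the sketch below uses the approximate NTSC coefficients (a hypothetical helper, assuming RGB values normalized to [0, 1]):

```python
import numpy as np

# Approximate NTSC RGB-to-YIQ matrix: the Y row carries luminance, while
# the I (in-phase) and Q (quadrature) rows carry chrominance.
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y
    [0.596, -0.274, -0.322],   # I
    [0.211, -0.523,  0.312],   # Q
])

def rgb_to_yiq(rgb):
    return RGB_TO_YIQ @ np.asarray(rgb, dtype=float)

# A neutral gray has zero chrominance: I and Q vanish (up to rounding),
# which is what lets monochrome TVs display only the Y component.
y, i, q = rgb_to_yiq([0.5, 0.5, 0.5])
print(round(y, 3), round(i, 3), round(q, 3))
```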

Chapter 03 – Edge Detection

Q:01| What is edge?


Ans: Edges in an image can be defined as discontinuities or abrupt changes in intensity. Edges
indicate where objects are, their shape and size, and something about their texture.
In typical images, edges characterize object boundaries and they are useful for segmentation,
registration and identification of objects in a scene.

Q:02| Why is edge detection important?


Ans: Edge detection is a problem of fundamental importance in image processing for the analysis of
images. It considers the intensity change that occurs at the pixels on the boundaries or edges of an
image.
Edge detection of an image significantly reduces the amount of data and filters out useless information,
while preserving the important structural properties of an image. The most common approach for edge
detection is detecting the meaningful discontinuities in intensity values.
Most of the shape information of an image is enclosed in its edges. First we detect these edges in an
image by using filters. Then, by enhancing the areas of the image that contain edges, the sharpness of
the image increases and the image becomes clearer.

Q:03| Write some commonly used edge detection algorithms.


Ans: The most commonly used edge detection operators are:
❖ Sobel Operator
❖ Prewitt Operator
❖ Roberts Operator
❖ LoG (Laplacian of Gaussian) Operator
❖ Laplacian Operator

Q:04| Show the basic principle of the edge detection technique with a diagram.


Ans: The basic principle of the edge detection technique is as follows: an edge appears as a ramp in
the image's intensity profile; the magnitude of the first derivative peaks at the edge, and the second
derivative has a zero crossing there. (Diagram omitted.)


Q:05| Write down the application areas of edge detection.


Ans: Major applications of edge detection include:
❖ Industrial inspection
❖ 3-D measurement of objects
❖ Autonomous vehicles, robotics
❖ Medical, biomedical and bioengineering scanning
❖ Transport (traffic scene analysis)
❖ 3D database for urban and town planning, and so on.

Q:06| How do the Roberts, Prewitt and Sobel operators work? Explain with necessary figures.
Ans: Roberts Operator: The Roberts edge filter detects edges by applying a horizontal and a vertical
filter in sequence. Both filters are applied to the image and summed to form the final result. Its 2x2
kernels are Gx = [[1, 0], [0, -1]] and Gy = [[0, 1], [-1, 0]].

Prewitt Operator: The Prewitt operator is used for detecting edges horizontally and vertically, using
the 3x3 kernels Gx = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]] and Gy = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]. We can
approximate the magnitude of the gradient at the center of a 3x3 region as |G| ≈ |Gx| + |Gy|.


Sobel Operator: The Sobel operator is very similar to the Prewitt operator. It is also a derivative mask
and calculates edges in both the horizontal and vertical directions, using the kernels
Gx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]] and Gy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]. Derivative filters are
terribly sensitive to noise; the Sobel filter both blurs and differentiates an image, providing good
noise-resistant edge detection.
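A short, self-contained sketch of these gradient operators (the naive convolution helper and the toy image are ours, purely for illustration):

```python
import numpy as np

# Standard 3x3 Sobel masks; Prewitt or Roberts masks could be swapped in.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def correlate2d(image, kernel):
    """Naive 'valid'-mode 2-D correlation, adequate for a demonstration."""
    h, w = kernel.shape
    rows, cols = image.shape[0] - h + 1, image.shape[1] - w + 1
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            out[r, c] = np.sum(image[r:r + h, c:c + w] * kernel)
    return out

# Toy image: a sharp vertical edge between a dark and a bright half.
img = np.zeros((5, 6))
img[:, 3:] = 255

gx = correlate2d(img, SOBEL_X)
gy = correlate2d(img, SOBEL_Y)
magnitude = np.sqrt(gx ** 2 + gy ** 2)
print(magnitude)  # the largest responses lie along the vertical edge
```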

Chapter 04 – Image compression

Q:01| Define image compression.


Ans: Image compression addresses the problem of reducing the amount of data required to represent a
digital image with no significant loss of information. It is minimization of the number of bits needed
to represent the visual information in the scene, without noticeable loss of information.

Q:02| Write down the goals of image compression.


Ans: The goals of image compression are as follows-
• The goal of image compression is to reduce the amount of data required to represent a digital
image.
• Compression is needed because, in uncompressed form, image data files are huge.
• Storage devices have relatively slow access.
• The bandwidth of communication channels is limited, which makes real-time image transmission
impossible without compression.

Q:03| What do you mean by compression ratio?


Ans: The compression ratio is the ratio of the size of the uncompressed data to the size of the
compressed data:

Compression ratio, CR = n1 / n2

where n1 and n2 denote the number of information-carrying units (e.g., bits) in the original and
compressed images, respectively.
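Applying the formula is straightforward; a minimal illustrative helper (the function name is ours):

```python
def compression_ratio(n1, n2):
    """CR = n1 / n2, where n1 and n2 are the bit counts of the original
    and compressed representations, respectively."""
    return n1 / n2

# e.g. a 1,000,000-bit image squeezed to 250,000 bits gives 4:1 compression.
print(compression_ratio(1_000_000, 250_000))  # 4.0
```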

Q:04| Describe the general algorithm for data compression and image reconstruction.
Ans: The general algorithm for data compression and image reconstruction can be described through
the following flowchart. (Flowchart omitted.)

An input image is fed into the encoder, which creates a set of symbols from the input data. After
transmission over the channel, the encoded representation is fed to the decoder, where a reconstructed
output image f′(x, y) is generated. In general, f′(x, y) may or may not be an exact replica of f(x, y). If it
is, the system is error-free or information-preserving; if not, some level of distortion is present in the
reconstructed image.

Q:05| What are the two approaches to image compression? Explain.
Ans: The two approaches to image compression are-
• Lossless
➢ Information preserving
➢ Low compression ratios
• Lossy
➢ Not information preserving
➢ High compression ratios
Lossless algorithms remove only redundancy present in the data. The reconstructed image is identical
to the original, i.e., all of the information present in the input image has been preserved by compression.
Higher compression is possible using lossy algorithms which create redundancy (by discarding some
information) and then remove it.
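As a concrete illustration of the lossless case, the sketch below implements simple run-length encoding (our example, not a scheme named above): runs of identical pixel values are stored as (value, count) pairs, and decoding reconstructs the input exactly.

```python
def rle_encode(pixels):
    """Run-length encode a 1-D pixel sequence into [value, count] pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            encoded[-1][1] += 1          # extend the current run
        else:
            encoded.append([p, 1])       # start a new run
    return encoded

def rle_decode(encoded):
    return [value for value, count in encoded for _ in range(count)]

row = [0, 0, 0, 0, 255, 255, 0, 0, 0]
packed = rle_encode(row)
print(packed)                      # [[0, 4], [255, 2], [0, 3]]
assert rle_decode(packed) == row   # lossless: exact reconstruction
```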

Q:06| Describe Fidelity criteria.


Ans: When lossy compression techniques are employed, the decompressed image will not be identical
to the original image. In such cases, we can define fidelity criteria that measure the difference between
these two images. Two general classes of criteria are used:
(1) Objective fidelity criteria and
(2) Subjective fidelity criteria

A good example of (1) objective fidelity criteria is the root-mean-square (RMS) error between the
input and output images. For any values of x and y, the error e(x, y) can be defined as:

e(x, y) = f′(x, y) − f(x, y)

The total error between the two images is:

Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [f′(x, y) − f(x, y)]

The root-mean-square error is:

e_rms = [ (1/MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [f′(x, y) − f(x, y)]² ]^{1/2}

Objective fidelity thus measures how close f′(x, y) is to the original f(x, y), since the reconstructed
image can be written as f′(x, y) = f(x, y) + e(x, y).


Criteria:
• Subjective: based on human observers
• Objective: mathematically defined criteria
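The objective RMS criterion can be computed directly; a minimal NumPy sketch (the function name and sample images are ours):

```python
import numpy as np

def rms_error(f, f_prime):
    """Root-mean-square error between original f and reconstruction f_prime."""
    diff = np.asarray(f_prime, dtype=float) - np.asarray(f, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

original      = np.array([[10, 20], [30, 40]])
reconstructed = np.array([[12, 18], [30, 44]])

print(rms_error(original, original))       # 0.0 for a lossless scheme
print(rms_error(original, reconstructed))  # nonzero distortion for a lossy one
```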

Chapter 05 – Morphological Operations

Q:01| Define Morphology.


Ans: Morphology relates to the “shape” of a connected component. Morphological image processing
(or morphology) refers to a range of image processing techniques that deal with the shape (or
morphology) of objects in an image. Morphological operations are typically applied to remove
imperfections.

Q:02| What happens in hitting and fitting? Explain.


Ans: Fitting: A structuring element is said to fit an image if, for each of its pixels that is set to 1, the
corresponding image pixel is also 1.
Hitting: A structuring element is said to intersect, or hit, an image if, for any of its pixels that is set to
1, the corresponding image pixel is also 1.
In both cases, we ignore image pixels for which the corresponding structuring element pixel is 0.
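The two tests can be expressed directly in code (a minimal sketch; the function names are ours):

```python
import numpy as np

def fits(patch, s):
    """s fits the patch if every 1 in s lies over a 1 in the patch."""
    return bool(np.all(patch[s == 1] == 1))

def hits(patch, s):
    """s hits the patch if at least one 1 in s lies over a 1 in the patch."""
    return bool(np.any(patch[s == 1] == 1))

s = np.ones((3, 3), dtype=int)                       # 3x3 structuring element
patch = np.array([[1, 1, 1], [1, 1, 1], [1, 0, 1]])  # image region under s
print(fits(patch, s), hits(patch, s))  # False True
```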

Q:03| What do you know about structuring elements?


Ans: A structuring element is a small pattern (matrix) characterizing a structure or feature of interest;
it is used to probe an image, measure the shape of its components, and carry out other image
processing operations. It is also called a kernel.

Q:04| Describe the dilation and erosion operation.


Ans: Dilation: Dilation is used to increase the area of a component. It adds pixels around the
boundaries and fills interior holes.
An image is processed by applying a structuring element: center the structuring element S on each
pixel P. If P is OFF, set it to ON if any part of S overlaps an ON image pixel. This process can only
turn pixels from OFF to ON, so the component can only grow.
Given a binary image f and structuring element s, the dilated image g can be described as:

13
Digital Image Processing
g = f ⊕ s

g(x, y) = ⋁_{(k, l): s(k, l) = 1} f(x − k, y − l),  k = −n/2 … n/2,  l = −m/2 … m/2

equivalently,

g(x, y) = 1 if s hits f, 0 otherwise

Erosion: Erosion is used to decrease the area of a component. It removes pixels around the boundaries
and enlarges interior holes.
An image is processed by applying a structuring element: center the structuring element S on each
pixel P. If P is ON, set it to OFF if any part of S overlaps an OFF image pixel. This process can only
turn pixels from ON to OFF, so the component can only shrink.
Given a binary image f and structuring element s, the eroded image g can be described as:
g = f ⊖ s

g(x, y) = ⋀_{(k, l): s(k, l) = 1} f(x − k, y − l),  k = −n/2 … n/2,  l = −m/2 … m/2

equivalently,

g(x, y) = 1 if s fits f, 0 otherwise
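The definitions above translate almost directly into code. The sketch below is a minimal loop-based illustration assuming a 3x3 all-ones structuring element and zero padding outside the image:

```python
import numpy as np

def dilate(f, s):
    """g(x, y) = 1 if s hits f at (x, y): any overlap with an ON pixel."""
    out = np.zeros_like(f)
    p = np.pad(f, 1)                     # treat pixels outside the image as 0
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            out[x, y] = 1 if np.any(p[x:x + 3, y:y + 3] & s) else 0
    return out

def erode(f, s):
    """g(x, y) = 1 if s fits f at (x, y): every SE pixel lies over an ON pixel."""
    out = np.zeros_like(f)
    p = np.pad(f, 1)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            out[x, y] = 1 if np.all(p[x:x + 3, y:y + 3][s == 1] == 1) else 0
    return out

f = np.array([[0, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 0]])
s = np.ones((3, 3), dtype=int)

print(dilate(f, s))  # the 2x2 component grows to fill the whole image
print(erode(f, s))   # the 2x2 component shrinks away entirely
```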

Q:05| Briefly describe opening and closing operation.


Ans: Opening operations: The open operation is defined as an erosion followed by a dilation that-
• smoothens boundaries
• enlarges narrow gaps
• eliminates “spikes”
The opening of image f by structuring element s, denoted f ○ s, is simply an erosion of f by s, followed
by a dilation of the result by s. This operation may be written as:
f ○ s = (f ⊖ s) ⊕ s

where ⊖ and ⊕ denote erosion and dilation, respectively.


Closing Operations: The close operation is defined as a dilation followed by an erosion that:
• fills narrow gaps
• eliminates small holes and breaks.
Repeated applications of either "open" or "close" have no further effect; both operations are
idempotent. The closing of image f by structuring element s, denoted f • s, is simply a dilation of f by
s, followed by an erosion of the result by s. The closing operation may be defined by the following
expression:
f • s = (f ⊕ s) ⊖ s
Together with closing, the opening serves in computer vision and image processing as a basic
workhorse of morphological noise removal. Opening removes small objects from the foreground of an

image, placing them in the background, while closing removes small holes in the foreground, changing
small islands of background into foreground.
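A sketch of both composite operations (again assuming a 3x3 all-ones structuring element, zero padding outside the image, and example arrays of our own choosing):

```python
import numpy as np

def dilate(f):
    p = np.pad(f, 1)
    return np.array([[int(p[x:x + 3, y:y + 3].any())
                      for y in range(f.shape[1])] for x in range(f.shape[0])])

def erode(f):
    p = np.pad(f, 1)
    return np.array([[int(p[x:x + 3, y:y + 3].all())
                      for y in range(f.shape[1])] for x in range(f.shape[0])])

def opening(f):
    return dilate(erode(f))    # erosion followed by dilation

def closing(f):
    return erode(dilate(f))    # dilation followed by erosion

noisy = np.zeros((9, 9), dtype=int)
noisy[1:8, 1:6] = 1            # a solid 7x5 foreground block...
noisy[4, 3] = 0                # ...with a one-pixel hole
noisy[1, 8] = 1                # ...and an isolated speck of noise

print(opening(noisy)[1, 8])    # 0: opening removed the speck
print(closing(noisy)[4, 3])    # 1: closing filled the hole
```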

Q:06| Show the-


i. dilated
ii. eroded
iii. opening
iv. closing
output images using the 3x3 square structuring element.
1 1 1 0 1 0

0 0 1 1 1 1

0 0 1 1 1 1

1 1 1 1 0 1

1 1 1 1 0 0

Input image

1 1 1
1 1 1
1 1 1
Structuring element
Ans: i. Dilation: f ⊕ s
1 1 1 1 1 1

1 1 1 1 1 1

1 1 1 1 1 1

1 1 1 1 1 1

1 1 1 1 1 1

ii. Erosion: f ⊖ s
0 0 0 1 0 1

0 0 0 0 0 0

0 0 0 0 0 0

0 0 0 0 0 0

1 1 1 0 0 0

iii. Opening: f ○ s = (f ⊖ s) ⊕ s


0 0 1 0 1 0

0 0 1 1 1 1

0 0 0 0 0 0

1 1 1 1 0 0

1 1 1 1 0 0

iv. Closing: f • s = (f ⊕ s) ⊖ s


1 1 1 1 1 1

1 1 1 1 1 1

1 1 1 1 1 1

1 1 1 1 1 1

1 1 1 1 1 1

Overview with short questions
1. The process from image processing to computer vision can be broken up into _____
processes.
Ans: three
2. In a digital image, _____ are the dots that make up the picture.
Ans: pixels
3. The _____ of an image is the number of distinct pixels in each dimension.
Ans: resolution
4. Morphological operations are typically applied to remove _____.
Ans: imperfections
5. Noise removal is the example of _____ level process.
Ans: low
6. Filtering is used to remove _____ from images.
Ans: noise
7. True or False: Artificial Intelligence is not a field of Digital Image Processing.
Ans: False.
8. True or False: Robot vision is a field of Digital Image Processing.
Ans: True.
9. How is the resolution of an image represented?
Ans: Width × height.
10. How many primary colors does the human eye combine to discern all possible colors?
Ans: Three.
11. The goal of image compression is to _____ the amount of data required to represent a
digital image.
Ans: reduce
12. Mention an application area of edge detection.
Ans: Face detection
13. Dilation is used to _____ the area of an image component.
Ans: increase
14. The brightness or darkness of an object is called _____.
Ans: intensity
15. Morphological processing refers to a range of techniques that deal with the _____ of
objects in an image.
Ans: shape
16. What is the last key stage of DIP?
Ans: Representation and description.
17. Mention 2 applications of digital image processing.
Ans: Medical imaging, Image transmission.
18. Capturing an image from a camera is a _____ process.
Ans: physical
19. In a digital image, each element of the matrix is called _____.
Ans: pixel
20. Elaborate CAD.
Ans: Computer Aided Diagnosis.
21. Object recognition, Segmentation are the examples of _____ level process.
Ans: Mid-level
22. Explain the term written in this format: "1024x768".
Ans: 1024 pixels in width and 768 pixels in height.
23. What is the first key stage of DIP?

Ans: Image Acquisition.
24. In a digital signal, which two values are used to represent information?
Ans: 0, 1.
25. In capturing an image from a camera, _____ is used as a source of energy.
Ans: Sunlight

