
LAB MANUAL

Lab Name : DIGITAL IMAGE PROCESSING LAB

Lab Code : 6IT4-21

Branch : Information Technology

Year : 3rd Year

Jaipur Engineering College and Research Centre, Jaipur


Department of Information Technology
(Rajasthan Technical University, KOTA)

INDEX

S.NO.   CONTENTS                    PAGE NO.

1.      VISION/MISSION              4
2.      PEOs                        4
3.      POs                         5
4.      COs                         7
5.      MAPPING OF CO & PO          7
6.      SYLLABUS                    8
7.      BOOKS                       8
8.      INSTRUCTIONAL METHODS       9
9.      LEARNING MATERIALS          9
10.     ASSESSMENT OF OUTCOMES      9

LIST OF EXPERIMENTS (RTU SYLLABUS)

Exp. 1   Objective: To understand and implement a program for thresholding an image                      12-13
Exp. 2   Objective: To understand and implement a program to obtain an image histogram                   14-15
Exp. 3   Objective: To understand and implement a program to obtain histogram equalization of an image   16-17
Exp. 4   Objective: To understand and implement a program to rotate an image                             18
Exp. 5   Objective: To understand and implement a program to scale (resize) an image                     19
Exp. 6   Objective: To understand and implement a program to translate an image                          20-21
Exp. 7   Objective: To understand and implement a program for linear filtering using convolution         22-24
Exp. 8   Objective: To understand and implement image filtering in the spatial and frequency domains     25-27
Exp. 9   Objective: To understand and implement a program for edge detection in an image                 28
Exp. 10  Objective: To understand and implement morphological operations in analyzing image structures   29-30

JAIPUR ENGINEERING COLLEGE AND RESEARCH CENTER
Department of Computer Science and Engineering
Branch: Computer Science and Engineering Semester: 6th
Course Name: DIGITAL IMAGE PROCESSING LAB Code: 6CS4-21
External Marks: 30 Practical hrs: 3 hrs/week
Internal Marks: 45 Total Marks: 75

1. VISION & MISSION


VISION: To become a renowned centre of excellence in computer science and engineering and
produce competent engineers and professionals with high ethical values, prepared for lifelong
learning.

MISSION:

M1: To impart outcome-based education for emerging technologies in the field of
computer science and engineering.
M2: To provide opportunities for interaction between academia and industry.
M3: To provide a platform for lifelong learning by accepting changes in technologies.
M4: To develop an aptitude for fulfilling social responsibilities.

2. PEO
PEO1: To provide students with the fundamentals of Engineering Sciences with more emphasis
in Computer Science & Engineering by way of analysing and exploiting engineering challenges.

PEO2: To train students with good scientific and engineering knowledge so as to comprehend,
analyse, design, and create novel products and solutions for the real life problems in Computer
Science and Engineering

PEO3: To inculcate professional and ethical attitude, effective communication skills, teamwork
skills, multidisciplinary approach, entrepreneurial thinking and an ability to relate engineering
issues with social issues for Computer Science & Engineering.

PEO4: To provide students with an academic environment aware of excellence, leadership,
written ethical codes and guidelines, and the self-motivated lifelong learning needed for a
successful professional career in Computer Science & Engineering.

PEO5: To prepare students to excel in industry and higher education by educating them with
high moral values and knowledge in Computer Science & Engineering.

3. PROGRAM OUTCOMES

1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering


fundamentals, and Computer Science & Engineering specialization to the solution of complex
Computer Science & Engineering problems.
2. Problem analysis: Identify, formulate, research literature, and analyze complex Computer
Science and Engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex Computer Science and
Engineering problems and design system components or processes that meet the specified needs
with appropriate consideration for the public health and safety, and the cultural, societal, and
environmental considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of Computer Science and Engineering experiments, analysis and
interpretation of data, and synthesis of the information to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex Computer Science and
Engineering activities with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to
the professional Computer Science and Engineering practice.
7. Environment and sustainability: Understand the impact of the professional Computer
Science and Engineering solutions in societal and environmental contexts, and demonstrate the
knowledge of, and need for sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the Computer Science and Engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or leader
in diverse teams, and in multidisciplinary settings in Computer Science and Engineering.
10. Communication: Communicate effectively on complex Computer Science and Engineering
activities with the engineering community and with society at large, such as, being able to
comprehend and write effective reports and design documentation, make effective presentations,
and give and receive clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the
Computer Science and Engineering and management principles and apply these to one's own
work, as a member and leader in a team, to manage projects and in multidisciplinary
environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change in Computer
Science and Engineering.

MAPPING OF PEOs & POs

PEOs    PROGRAM OUTCOMES (1-12)
I       H  L  H
II      M  H  M  H  H  L  H
III     L  H  M  H  L  M
IV      L  M  H  M  H  M
V       M  M

4. COURSE OUTCOMES
Graduates would be able to:
CO1. Implement and execute digital image acquisition, representation and methods to
segment various types of images.
CO2. Implement, analyze and compare various filters in image processing and algorithms of
image compression.

5. MAPPING OF CO & PO

Semester: VI    Subject: DIGITAL IMAGE PROCESSING LAB    Code: 6CS4-21    L/T/P: P

CO1. Implement and execute digital image acquisition, representation and methods to
     segment various types of images.
       PO1: M   PO2: M   PO3: H   PO4: L   PO5: H    PO6: L
       PO7: M   PO8: -   PO9: L   PO10: M  PO11: M   PO12: H

CO2. Implement, analyze and compare various filters in image processing and algorithms
     of image compression.
       PO1: H   PO2: M   PO3: H   PO4: M   PO5: H    PO6: M
       PO7: L   PO8: -   PO9: M   PO10: M  PO11: M   PO12: H

6. SYLLABUS
6CS4-21 DIGITAL IMAGE PROCESSING LAB
Class: VI Sem. B.Tech.                                  Evaluation
Branch: Computer Science & Engineering                  Examination Time = Hours
Schedule per Week: Practical Hrs.: 3                    Maximum Marks = 75
                                                        [Internal (45) & External (30)]

Objectives: At the end of the semester, the students should have clearly
understood and implemented the following:

List of exercises:

1. To understand and implement a program for thresholding an image
2. To understand and implement a program to obtain an image histogram
3. To understand and implement a program to obtain histogram equalization of an image
4. To understand and implement a program to rotate an image
5. To understand and implement a program to scale (resize) an image
6. To understand and implement a program to translate an image
7. To understand and implement a program for linear filtering using convolution
8. To understand and implement image filtering in the spatial and frequency domains
9. To understand and implement a program for edge detection in an image
10. To understand and implement morphological operations in analyzing image structures

Outcomes:
At the end of the semester, the students should have clearly understood and implemented the
following:
• Write and execute programs in MATLAB
• Perform and implement operations on images

7. BOOKS

Text books
1. Gonzalez and Woods: Digital Image Processing, ISBN 0-201-600-781, Addison-Wesley, 1992.
2. Boyle and Thomas: Computer Vision - A First Course, 2nd Edition, ISBN 0-632-028-67X,
   Blackwell Science, 1995.
3. Pakhera Malay K: Digital Image Processing and Pattern Recognition, PHI.

Reference Books
1. Fundamentals of Digital Image Processing: A Practical Approach with Examples in MATLAB
   by Chris Solomon and Toby Breckon, Wiley-Blackwell.
2. Digital Image Processing by S. Sridhar, Oxford.
3. Digital Image Processing using MATLAB by Rafael C. Gonzalez and Richard E. Woods,
   Pearson Education.

8. INSTRUCTIONAL METHODS:-

8.1. Direct Instructions:
• White board presentation

8.2. Interactive Instruction:
• Coding

8.3. Indirect Instructions:
• Problem solving

9. LEARNING MATERIALS:-

9.1. Text/Lab Manual

10. ASSESSMENT OF OUTCOMES:-

1. End term practical exam (conducted by RTU, KOTA)
2. Daily lab interaction

OUTCOMES WILL BE ACHIEVED THROUGH THE FOLLOWING:-

1. Lab teaching (through chalk and board)
2. Discussion on website work

INSTRUCTIONS OF LAB

DO's
• Please switch off your mobile/cell phone before entering the lab.
• Enter the lab with the complete source code and data.
• Check whether all peripherals are available at your desktop before proceeding with the
  program.
• Inform the lab in-charge whenever you have difficulty using the system or in case the
  software gets corrupted or infected by a virus.
• Arrange all the peripherals and seats before leaving the lab.
• Properly shut down the system before leaving the lab.
• Keep your bag outside in the racks.
• Enter the lab on time and leave at the proper time.
• Maintain the decorum of the lab.
• Utilize lab hours for the corresponding experiment.
• Get your CD/pen drive checked by the lab in-charge before using it in the lab.

DON'Ts
• Don't mishandle the system.
• Don't leave the system running unattended for long.
• Don't bring any external material into the lab.
• Don't make noise in the lab.
• Don't bring a mobile phone into the lab. If extremely necessary, keep the ringer off.
• Don't enter the lab without the permission of the lab in-charge.
• Don't litter in the lab.
• Don't delete or make any modifications to system files.
• Don't carry any lab equipment outside the lab.
We need your full support and cooperation for the smooth functioning of the lab.

10
INSTRUCTIONS FOR STUDENTS

BEFORE ENTERING THE LAB

• All students are supposed to prepare the theory for the next program.
• Students are supposed to bring the practical file and the lab copy.
• Previous programs should be written in the practical file.
• Any student not following these instructions will be denied entry to the lab.

WHILE WORKING IN THE LAB

• Adhere to the experimental schedule as instructed by the lab in-charge.
• Get the previously executed program signed by the instructor.
• Get the output of the current program checked by the instructor in the lab copy.
• Each student should work on his/her assigned computer during each lab turn.
• Take responsibility for valuable accessories.
• Concentrate on the assigned practical and do not play games.
• Anyone caught carrying lab equipment out of the lab will face serious consequences.

Experiment No. 1
Object: To understand and implement a program for thresholding an image.

Software Required: MATLAB

Theory:

Image thresholding is a simple form of image segmentation. It is a way to create a binary image
from a grayscale or full-color image. This is typically done in order to separate "object" or
foreground pixels from background pixels to aid in image processing. The simplest thresholding
methods replace each pixel in an image with a black pixel if the image intensity I(i, j) is less than
some fixed constant T (i.e. I(i, j) < T), or with a white pixel if the intensity is greater than that
constant.

Program:

% Program for thresholding
clear all;
clc;
close all;
a = imread('pout.jpg');              % read the input grayscale image
[m, n] = size(a);
b = zeros(m, n, 'uint8');            % preallocate the output binary image
for i = 1:1:m
    for j = 1:1:n
        if (a(i,j) < 125)            % threshold T = 125
            b(i,j) = 0;
        else
            b(i,j) = 255;
        end
    end
end
subplot(1,2,1), subimage(a), title('Original Image');
subplot(1,2,2), subimage(b), title('Threshold Image');
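As an optional sketch (not part of the prescribed program, and assuming the Image Processing Toolbox and its sample image 'pout.tif'), the threshold can also be chosen automatically with Otsu's method:

% Optional sketch: automatic thresholding with Otsu's method
a = imread('pout.tif');              % sample grayscale image shipped with the toolbox
level = graythresh(a);               % Otsu threshold, returned in the range [0, 1]
bw = imbinarize(a, level);           % logical (binary) image
imshowpair(a, bw, 'montage');
title('Original (left) and Otsu-thresholded (right)');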

Output:

Experiment No. 2
Object: To understand and implement a program to obtain an image histogram.

Software Required: MATLAB

Theory:

An image histogram is a type of histogram that acts as a graphical representation of the tonal
distribution in a digital image. It plots the number of pixels for each tonal value. By looking at
the histogram for a specific image a viewer will be able to judge the entire tonal distribution at a
glance.
Image histograms are present on many modern digital cameras. Photographers can use them as
an aid to show the distribution of tones captured, and whether image detail has been lost to
blown-out highlights or blacked-out shadows.[2] This is less useful when using a raw image
format, as the dynamic range of the displayed image may only be an approximation to that in the
raw file.
The horizontal axis of the graph represents the tonal variations, while the vertical axis represents
the number of pixels in that particular tone. The left side of the horizontal axis represents the
black and dark areas, the middle represents medium grey and the right hand side represents light
and pure white areas. The vertical axis represents the size of the area that is captured in each one
of these zones. Thus, the histogram for a very dark image will have the majority of its data points
on the left side and center of the graph. Conversely, the histogram for a very bright image with
few dark areas and/or shadows will have most of its data points on the right side and center of
the graph.
The histogram shows the total tonal distribution in the image. It is a bar chart of the count of
pixels for every tone of gray that occurs in the image. It helps us analyze and, more importantly,
correct the contrast of the image.

Program

clear all;
clc;
close all;
a = imread('D:/ak/anil.jpg');        % read the input grayscale image
b = zeros(1, 256);                   % one bin per gray level (0-255)
[row, col] = size(a);

for x = 1:1:row
    for y = 1:1:col
        t = double(a(x,y)) + 1;      % map gray level 0-255 to bin index 1-256
        b(t) = b(t) + 1;
    end
end

subplot(1,2,1);
imshow(uint8(a));
title('Original Image');

subplot(1,2,2);
bar(b);
title('Histogram of Image');
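As an optional sketch (assuming the Image Processing Toolbox and its sample image 'pout.tif'), the built-in imhist function produces the same 256-bin histogram in a single call:

% Optional sketch: histogram via the built-in imhist function
I = imread('pout.tif');              % sample grayscale image shipped with the toolbox
figure, imhist(I);                   % 256-bin histogram of the gray levels
title('Histogram via imhist');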

Output

Histogram of the image

Experiment No. 3
Object: To understand and implement a program to obtain histogram equalization of an
image.

Software Required: MATLAB

Theory:

Histogram Equalization
This method usually increases the global contrast of many images, especially when the usable data of the
image is represented by close contrast values. Through this adjustment, the intensities can be better
distributed on the histogram. This allows for areas of lower local contrast to gain a higher contrast.
Histogram equalization accomplishes this by effectively spreading out the most frequent intensity values.
The method is useful in images with backgrounds and foregrounds that are both bright or both dark. In
particular, the method can lead to better views of bone structure in x-ray images, and to better detail in
photographs that are over- or under-exposed. A key advantage of the method is that it is a fairly
straightforward technique and an invertible operator. So in theory, if the histogram equalization function is
known, then the original histogram can be recovered. The calculation is not computationally intensive. A
disadvantage of the method is that it is indiscriminate: it may increase the contrast of background noise
while decreasing the usable signal.
Histogram equalization often produces unrealistic effects in photographs; however, it is very useful for
scientific images such as thermal, satellite or X-ray images, often the same class of images to which one
would apply false color. Histogram equalization can also produce undesirable effects (such as a visible
image gradient) when applied to images with low color depth. For example, if applied to an 8-bit image
displayed with an 8-bit gray-scale palette, it will further reduce the color depth (number of unique shades
of gray) of the image. Histogram equalization works best when applied to images with a much higher
color depth than the palette size, such as continuous data or 16-bit gray-scale images.

Program:

clear all
clc
I = imread('cameraman.tif');
I = double(I);
maximum_value = max(max(I));
[row, col] = size(I);
c = row * col;                        % total number of pixels
h = zeros(1, 300);                    % histogram of the original image
z = zeros(1, 300);                    % histogram of the equalized image
% replace zero gray levels with 1 so they can be used as array indices
for n = 1:row
    for m = 1:col
        if I(n,m) == 0
            I(n,m) = 1;
        end
    end
end
% build the histogram of the original image
for n = 1:row
    for m = 1:col
        t = I(n,m);
        h(t) = h(t) + 1;
    end
end
pdf = h / c;                          % probability density function
cdf(1) = pdf(1);                      % cumulative distribution function
for x = 2:maximum_value
    cdf(x) = pdf(x) + cdf(x-1);
end
new = round(cdf * maximum_value);     % equalization mapping
new = new + 1;
for p = 1:row
    for q = 1:col
        temp = I(p,q);
        b(p,q) = new(temp);           % apply the mapping to each pixel
        t = b(p,q);
        z(t) = z(t) + 1;
    end
end
b = b - 1;
subplot(2,2,1), imshow(uint8(I)), title('Original Image');
subplot(2,2,2), bar(h), title('Histogram of the Original Image');
subplot(2,2,3), imshow(uint8(b)), title('Equalized Image');
subplot(2,2,4), bar(z), title('Histogram of the Equalized Image');
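As an optional sketch (assuming the Image Processing Toolbox), the built-in histeq function performs the same equalization directly:

% Optional sketch: histogram equalization via histeq
I = imread('cameraman.tif');
J = histeq(I);                        % equalize to an approximately flat histogram
subplot(1,2,1), imshow(I), title('Original');
subplot(1,2,2), imshow(J), title('Equalized (histeq)');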

Output:

Experiment No. 4
Object: To understand and implement a program to rotate an image.

Software Required: MATLAB

Theory:

J = imrotate(I, angle) rotates image I by angle degrees in a counterclockwise direction around its
center point. To rotate the image clockwise, specify a negative value for angle. imrotate makes the
output image J large enough to contain the entire rotated image.

% Program to rotate an image
clear all;
clc;
close all;
a = imread('pout.jpg');
b = imrotate(a, 15);                 % rotate 15 degrees counterclockwise
subplot(1,2,1);
subimage(a);
title('Input Image');
subplot(1,2,2);
subimage(b);
title('Output Image');
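As an optional variation (a sketch assuming the toolbox sample image 'pout.tif'), imrotate also accepts an interpolation method and a bounding-box option:

% Optional sketch: rotate with bilinear interpolation, cropped to the original size
a = imread('pout.tif');
b = imrotate(a, 15, 'bilinear', 'crop');
imshow(b), title('Rotated 15 degrees, bilinear, cropped');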

Output:

Experiment No. 5
Object: To understand and implement a program to scale (resize) an image.

Software Required: MATLAB

Theory:

In computer graphics and digital imaging, image scaling refers to the resizing of a digital image. In video
technology, the magnification of digital material is known as upscaling or resolution enhancement.
B = imresize(A, scale) returns image B that is scale times the size of A. The input image A can be a
grayscale, RGB, or binary image. If A has more than two dimensions, imresize only resizes the first two
dimensions. If scale is in the range [0, 1], B is smaller than A.

% Program for scaling an image
clear all;
clc;
close all;
I = imread('pout.jpg');
J = imresize(I, 0.5);                % shrink to half the original size
figure
imshow(I)
title('Original Image');
figure
imshow(J);
title('Resized Image');
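As an optional sketch (same toolbox assumptions as above), imresize can also take an explicit target size or an interpolation method:

% Optional sketch: resize to an explicit size and enlarge by a factor
I = imread('pout.tif');              % sample image shipped with the toolbox
J = imresize(I, [256 256]);          % resize to exactly 256-by-256 pixels
K = imresize(I, 2, 'nearest');       % enlarge 2x using nearest-neighbour interpolation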

Output:

Experiment No. 6
Object: To understand and implement a program to translate an image.

Software Required: MATLAB

Theory:

The translate operator performs a geometric transformation which maps the position of
each picture element in an input image into a new position in an output image, where the
dimensionality of the two images often is, but need not necessarily be, the same. Under
translation, an image element located at (x1, y1) in the original is shifted to a new
position (x2, y2) in the corresponding output image by displacing it through a user-specified
translation (tx, ty). The treatment of elements near image edges varies with implementation.
Translation is used to improve visualization of an image, but also has a role as a preprocessor in
applications where registration of two or more images is required. A translation operation shifts
an image by a specified number of pixels in either the x- or y-direction, or both.

Program:

clear all;
clc;
close all;
I = imread('pout.jpg');
figure(1)
imshow(I)
[r, c, w] = size(I)
shift(1:r, 1+10:c+10, :) = I;        % shift the image 10 pixels to the right
figure(2)
imshow(shift)
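As an optional sketch (assuming the Image Processing Toolbox), the imtranslate function performs the same shift directly:

% Optional sketch: translation via imtranslate
I = imread('pout.tif');              % sample image shipped with the toolbox
J = imtranslate(I, [25, 10]);        % shift 25 pixels in x and 10 pixels in y
imshowpair(I, J, 'montage');
title('Original (left) and translated (right)');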

Output:

EXPERIMENT NO. 7

Object: To understand and implement a program for linear filtering using convolution.

Software Required: MATLAB

Theory:
Linear filtering is one of the most powerful image enhancement methods. It is a process in
which part of the signal frequency spectrum is modified by the transfer function of the filter. In
general, the filters under consideration are linear and shift-invariant, and thus, the output images
are characterized by the convolution sum between the input image and the filter impulse
response.

Program:
clc;
close all;
clear all;

% Read the test image and display it
myimage = imread('grayleaf.jpg');
subplot(3,3,1);
imshow(myimage); title('Original Image');

% The command fspecial() is used to create the mask
% The command imfilter() is used to apply the Gaussian filter mask to the image
% Create a Gaussian lowpass filter of size 3
gaussmask = fspecial('gaussian', 3);
filtimg = imfilter(myimage, gaussmask);
subplot(3,3,2);
imshow(filtimg, []), title('Output of Gaussian filter 3 x 3');

% Generate a lowpass (averaging) filter of size 7 x 7
% The command conv2 is used to apply the filter
% This is another way of using the filter
avgfilt = [1 1 1 1 1 1 1;
           1 1 1 1 1 1 1;
           1 1 1 1 1 1 1;
           1 1 1 1 1 1 1;
           1 1 1 1 1 1 1;
           1 1 1 1 1 1 1;
           1 1 1 1 1 1 1];
avgfiltmask = avgfilt / sum(avgfilt(:));   % normalize so the coefficients sum to 1
convimage = conv2(double(myimage), double(avgfiltmask));
subplot(3,3,3);
imshow(convimage, []);
title('Average filter with conv2()');

% Add noise to an image and display the noisy image
subplot(3,3,4);
myimage = imread('grayleaf.jpg');
noisyimg = imnoise(myimage, 'salt & pepper', 0.5);
imshow(noisyimg, []);
title('Noisy Image');

% Generate a median filter of size 3
% The command medfilt2() is used to filter the image
mymed3img = medfilt2(noisyimg, [3 3]);
subplot(3,3,5);
imshow(mymed3img, []), title('Output of 3 x 3 Median filter');

% Generate a median filter of size 7
mymed7img = medfilt2(noisyimg, [7 7]);
subplot(3,3,6);
imshow(mymed7img, []), title('Output of 7 x 7 Median filter');

% Generate a high pass filter mask
% The command conv2 is used to apply the filter mask
h = [1 -2 -1; -1 5 -1; 1 -2 1];
hpt3 = conv2(double(myimage), double(h));
subplot(3,3,7);
imshow(hpt3/100), title('Output of High pass filter');

% Generate a user-defined mask for sharpening
% The command conv2 is used to apply the filter mask
h = [-1 -1 -1; -1 9 -1; -1 -1 -1];
hpt3 = conv2(double(myimage), double(h));
subplot(3,3,8);
imshow(hpt3/100), title('Sharpening - User defined mask');

% Generate an unsharp filter mask with alpha = 0.3
% The command imfilter is used to apply the filter mask
h = fspecial('unsharp', 0.3);
hpt3 = imfilter(myimage, h);
subplot(3,3,9);
imshow(hpt3, []), title('Output of Unsharp mask filter');
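As an optional minimal sketch (assuming the toolbox test image 'cameraman.tif'), the convolution sum behind linear filtering can also be written directly with a 'same'-size output so the result stays aligned with the input:

% Optional sketch: 3 x 3 averaging via the convolution sum
I = double(imread('cameraman.tif'));   % grayscale test image
k = ones(3) / 9;                       % 3 x 3 averaging kernel (coefficients sum to 1)
J = conv2(I, k, 'same');               % 'same' keeps the output the same size as the input
imshow(uint8(J)), title('3 x 3 average via conv2');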

Output:

EXPERIMENT NO. 8

Object: To understand and implement image filtering in the spatial and frequency domains.

Software Required: MATLAB

Theory:-

SPATIAL DOMAIN

In the spatial domain we deal directly with the image matrix, whereas in the frequency domain
we operate on a transform of the image.

• It is the manipulation or changing of an image representing an object in space to enhance the
  image for a given application.
• Techniques are based on direct manipulation of pixels in an image.
• Used for filtering basics, smoothing filters, sharpening filters, unsharp masking and the Laplacian.

FREQUENCY DOMAIN

We first transform the image to its frequency distribution. Our black-box system then performs
whatever processing it has to perform, and the output of the black box in this case is not an
image but a transform. After performing the inverse transformation, the result is converted back
into an image, which is then viewed in the spatial domain (see the sketch after this list).

• Techniques are based on modifying the spectral transform of an image.
• Transform the image to its frequency representation.
• Perform the image processing.
• Compute the inverse transform back to the spatial domain.
• High frequencies correspond to pixel values that change rapidly across the image (e.g. text,
  texture, leaves, etc.).
• Strong low-frequency components correspond to large-scale features in the image (e.g. a
  single, homogeneous object that dominates the image).
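The pipeline above can be illustrated with a minimal sketch (an assumed example, not the prescribed program, using the toolbox image 'cameraman.tif' and an arbitrary cutoff radius of 30):

% Minimal sketch: ideal low-pass filtering in the frequency domain
I = double(imread('cameraman.tif'));       % grayscale test image
F = fftshift(fft2(I));                     % transform and centre the DC component
[M, N] = size(I);
[u, v] = meshgrid(1:N, 1:M);
D = sqrt((u - N/2).^2 + (v - M/2).^2);     % distance of each frequency from the centre
H = double(D < 30);                        % ideal low-pass mask, cutoff radius 30
G = real(ifft2(ifftshift(F .* H)));        % apply the mask and transform back
imshow(uint8(G)), title('Ideal low-pass filter (frequency domain)');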

Program

% Design a 1-D FIR filter and transform it to a 2-D filter (frequency transformation method)
b = remez(10, [0 0.4 0.6 1], [1 1 0 0]);   % in newer MATLAB releases remez has been replaced by firpm
h = ftrans2(b);
[H, w] = freqz(b, 1, 64, 'whole');
colormap(jet(64))
plot(w/pi - 1, fftshift(abs(H)))
figure, freqz2(h, [32 32])

% Frequency sampling method
Hd = zeros(11,11); Hd(4:8,4:8) = 1;        % desired frequency response
[f1, f2] = freqspace(11, 'meshgrid');
mesh(f1, f2, Hd), axis([-1 1 -1 1 0 1.2]), colormap(jet(64))
h = fsamp2(Hd);
figure, freqz2(h, [32 32]), axis([-1 1 -1 1 0 1.2])

% Windowing method with a 1-D Hamming window
Hd = zeros(11,11); Hd(4:8,4:8) = 1;
[f1, f2] = freqspace(11, 'meshgrid');
mesh(f1, f2, Hd), axis([-1 1 -1 1 0 1.2]), colormap(jet(64))
h = fwind1(Hd, hamming(11));
figure, freqz2(h, [32 32]), axis([-1 1 -1 1 0 1.2])

% Desired circular low-pass frequency response
[f1, f2] = freqspace(25, 'meshgrid');
Hd = zeros(25,25); d = sqrt(f1.^2 + f2.^2) < 0.5;
Hd(d) = 1;
mesh(f1, f2, Hd)

% Frequency response of a small spatial-domain kernel
h = [0.1667 0.6667 0.1667
     0.6667 -3.3333 0.6667
     0.1667 0.6667 0.1667];
freqz2(h)
[H, f1, f2] = freqz2(h);

Output:

EXPERIMENT NO. 9

Object: To understand and implement a program for edge detection in an image.

Software Required: MATLAB

Theory:-Edge detection is an image processing technique for finding the boundaries of objects
within images. It works by detecting discontinuities in brightness. Edge detection is used
for image segmentation and data extraction in areas such as image processing, computer vision,
and machine vision.
Common edge detection algorithms include Sobel, Canny, Prewitt, Roberts, and fuzzy
logic methods.

Program

close all;
img = imread('pout.jpg');
A = [0 -1 -1; -1 8 -1; 0 -1 0];
subplot(1,3,1); imshow(img);
img = rgb2gray(img);                 % convert to grayscale (input is assumed to be RGB)
result1 = imfilter(img, A);
subplot(1,3,2);
imshow(result1);
title('Line detection mask - horizontal');
A = [-1 2 -1; -1 2 -1; -1 2 -1];
result1 = imfilter(img, A);
subplot(1,3,3);
imshow(result1);
title('Line detection mask - vertical');
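As an optional sketch (assuming the Image Processing Toolbox and its grayscale test image 'cameraman.tif'), the built-in edge function implements the standard detectors named in the theory above:

% Optional sketch: edge detection with the built-in edge() function
I = imread('cameraman.tif');
BW1 = edge(I, 'sobel');              % gradient-based Sobel detector
BW2 = edge(I, 'canny');              % Canny detector
subplot(1,2,1), imshow(BW1), title('Sobel edges');
subplot(1,2,2), imshow(BW2), title('Canny edges');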

Output:
EXPERIMENT NO. 10

Object: To understand and implement morphological operations in analyzing image
structures.

Software Required: MATLAB

Theory: The morphological transformations extract or modify the structure of the particles in
an image. Such transformations can be used to prepare the particles for quantitative analysis,
for the analysis of geometrical properties, for extracting the simplest modeling shapes, and for
other operations. The morphological operations can also be used for expanding or reducing
particle dimensions, "filling" gaps or closing inclusions, averaging the particle edges, and so on.
The morphological transformations are separated into two main categories:
• Binary morphological functions, which are applied to binary images
• Gray-level morphological functions, which are applied to gray-level images
A binary image is an image which has been segmented into an object region (which contains
particles - typically the object pixels are coded by ones) and a background region (typically the
background pixels are coded by zeros). The simplest segmentation process is binary
thresholding of the gray-level image.

The basic morphological transformations include two types of processing: erosion and dilation.
The other types of transformations are obtained by combining these two operations.
Erosion
The erosion eliminates the isolated pixels from the background and erodes the boundaries of the
object region, depending on the shape of the structuring element. For a given pixel P0 we will
consider the structuring element centered in P0 and we will denote with Pi the neighboring
pixels that will be taken into consideration (the ones corresponding to the coefficients of the
structuring element having the value 1).
Dilation
The dilation process has the inverse effect of the erosion process, because the particle dilation is
equivalent to the background erosion. This process eliminates the small and isolated gaps from
the particles and enlarges the contour of the particles depending on the shape of the structuring
element. For a given pixel P0 we will consider the structuring element centered in P0 and we
will denote with Pi the neighboring pixels.
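A minimal sketch of the two basic operations (assuming the Image Processing Toolbox; 'circles.png' is a binary demo image shipped with it) is given below; the prescribed program then follows.

% Optional sketch: erosion and dilation with a disk-shaped structuring element
BW = imread('circles.png');          % binary demo image
se = strel('disk', 5);               % structuring element of radius 5
E = imerode(BW, se);                 % erosion shrinks the white particles
D = imdilate(BW, se);                % dilation grows the white particles
subplot(1,3,1), imshow(BW), title('Original');
subplot(1,3,2), imshow(E),  title('Eroded');
subplot(1,3,3), imshow(D),  title('Dilated');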

Program:
BW = imread('circles.png');
imshow(BW);

BW2 = bwmorph(BW, 'remove');         % remove interior pixels, leaving the outlines
figure
imshow(BW2)

BW3 = bwmorph(BW, 'skel', Inf);      % skeletonize the objects
figure
imshow(BW3)

% Repeat the operations on the GPU (requires the Parallel Computing Toolbox)
BW1 = gpuArray(imread('circles.png'));
figure
imshow(BW1)

BW2 = bwmorph(BW1, 'remove');
figure
imshow(BW2)

BW3 = bwmorph(BW1, 'skel', Inf);
figure
imshow(BW3)

Output
