
LAB MANUAL

Subject Code : 8CS7A

Subject Name : Digital Image Processing Lab
Branch : Computer Science Engineering
Year : IV Year / VIII Semester

Arya Group of Colleges


Department of Computer Science & Engineering
(Rajasthan Technical University, KOTA)
INDEX

S. No OBJECT

1 Color image segmentation algorithm development

2 wavelet/vector quantization compression

3 Deformable templates applied to skin tumor border finding

4 Helicopter image enhancement

5 High-speed film image enhancement

6 Computer vision for skin tumor image evaluation

7 New border images


DO’S AND DON’TS

DO’S

1. Students should get the record of the previous experiment checked before starting the
new experiment.
2. Read the manual carefully before starting the experiment.
3. Get the program checked by the instructor.
4. Get your results checked by the teacher.
5. Computers must be handled carefully.
6. Maintain strict discipline.
7. Keep your mobile phone switched off or in vibration mode.
8. Students should get the experiment allotted for the next turn before leaving the lab.

DON’TS

1. Do not touch or attempt to touch the mains power supply wires with bare hands.
2. Do not overcrowd the tables.
3. Do not tamper with the equipment.
4. Do not leave the lab without permission from the teacher.
INSTRUCTIONS TO THE STUDENTS

GENERAL INSTRUCTIONS

• Maintain separate observation results for each laboratory.


• Observations or readings should be taken only in the observation copy.
• Get the readings counter signed by the faculty after the completion of the
experiment.
• Maintain Index column in the observation copy and get the signature of the
faculty before leaving the lab.

BEFORE ENTERING THE LAB

• The previous experiment should have been written in the practical file, without
which the students will not be allowed to enter the lab.
• The students should have written the experiment in the observation copy that
they are supposed to perform in the lab.
• The experiment written in the observation copy should have aim, apparatus
required, circuit diagram/algorithm, blank observation table (if any), formula (if
any), programme (if any), model graph (if any) and space for result.
WHEN WORKING IN THE LAB

• Necessary equipment/apparatus should be taken only from the lab
assistant by making an issue slip, which should contain the name of the
experiment, the names of the batch members, and the apparatus or
components required.
• Never switch on the power supply before getting the permission from
the faculty.

BEFORE LEAVING THE LAB

• The equipment/components should be returned to the lab
assistant in good condition after the completion of the experiment.
• The students should get the signature from the faculty in the
observation copy.
• They should also check whether their file is checked and counter
signed in the index.
Program Educational Objective

ARYA Institute of Engineering & Technology


Branch: CS Year/Semester: IV /VIII

Subject Name/code: -DIGITAL IMAGE PROCESSING LAB (7EC7)

External Marks: - 40 Practical Hours: 2 hrs/week


Internal Marks: 60
Total Marks: 100

(1). Program Description: To offer high-quality education in the field of Computer
Science, to prepare students abreast of the latest global industrial and research
requirements, and to fulfill responsibility towards the community.
(2). Program Objectives:
I. Preparation: To prepare students to pursue advanced graduate studies in Computer
Science, with a strong background in basic science and mathematics and the
ability to pinpoint and define engineering problems in the fields of electronics
and communication engineering.
II. Core competence: To provide students a broad-based education in the core areas
of Computers, so that they are able to employ the necessary techniques, hardware,
and communication tools for modern engineering applications, and can solve
problems through analytical thinking in their own or related fields.
III. Breadth: To train students with good scientific and Electronics Engineering
breadth so as to comprehend, analyze, design, and create novel products and
solutions for real-life problems.
IV. Professionalism: To inculcate in students a professional and ethical attitude,
effective communication skills, teamwork skills, a multidisciplinary approach,
and an ability to relate engineering issues to a broader social context.
V. Learning Environment: To provide an excellent learning environment that can
enhance the ability of students to generate innovative ideas in every aspect of
life, helping not only the individual but also society and the nation.

(2). Program Outcomes:
(a) Graduates will demonstrate knowledge of differential equations, vector calculus,
complex variables, matrix theory, probability theory, and Electronics and
Communication engineering.
(b) Graduates will demonstrate an ability to identify, formulate and solve electronics
engineering problems.
(c) Graduates will demonstrate an ability to design electrical and electronic circuits,
conduct experiments with communication systems, and analyze and interpret data.
(d) Graduates will demonstrate an ability to design digital and analog systems and
components.
(e) Graduates will demonstrate an ability to visualize and work on laboratory and
multidisciplinary tasks.
(f) Graduates will demonstrate skills to use modern engineering tools, software and
equipment to analyze problems.
(g) Graduates will demonstrate knowledge of professional and ethical responsibilities.
(h) Graduates will be able to communicate effectively in both verbal and written form.
(i) Graduates will show an understanding of the impact of engineering solutions on
society and will be aware of contemporary issues.
(j) Graduates will develop confidence for self-education and an ability for life-long
learning.
(k) Graduates will be able to participate and succeed in competitive examinations such
as public sector examinations, GATE, and GRE.

3). Mapping of Program Objective with Program Outcome

Program Outcomes
Program
a b c d e f g h i j k
Objective
I X X X X X X
II X X X X
III X X X
IV X X
V X X X X X X X X

4). Course Objectives

1. This course deals with the breadth and depth of the area of signal processing,
building upon the foundation laid in its prerequisite course, 'Signals & Systems'. The
objective is to equip the students with the knowledge of periodic and non-periodic
signals, random numbers, and the practical implementation of signal systems.
2. To develop the students' ability to analyze discrete-time signals and systems in both
the time and frequency domains and to perform MATLAB-based signal-system tasks.
3. This course is next in line after 'Network Analysis' and builds the foundation for the
later courses 'Digital Signal Processing', 'Linear Control Systems' and
'Communication Systems'.
4. This course deals with the processing of discrete signals and systems, sampling
theory, and frequency-domain analysis.
5. At the completion of the course, students should be able to compute the
discrete-time convolution of two signals.

5). Course Outcomes:
I. After completing the course, the student is expected to understand the concepts
of signal processing.
II. An ability to identify, formulate, and solve engineering problems in the area of
electrical signals and systems.
III. Interpretation of the delta function and determination of the impulse response
of a system.
IV. Convolution in the continuous and discrete time domains, studying different
functions and representations (impulse response, linear and log magnitude).
V. Study of Fourier analysis, which is essential for the convolution of two signals.
VI. Fourier series analysis shows that any signal can be represented as a
combination of sinusoidal signals, so the circuits can be easily obtained.
VII. Linear convolution using the Discrete Fourier Transform.
6). Course Objective Contribution to Program Outcomes

Students who have successfully completed this course will have a full understanding of
the following concepts:

COURSE OBJECTIVE 1: This course deals with the breadth and depth of the area of
signal processing, building upon the foundation laid in its prerequisite course
'Signals & Systems'. The objective is to equip the students with the knowledge of
periodic and non-periodic signals, random numbers, and the practical implementation
of signal systems.
PROGRAM OUTCOMES: a) Graduates will demonstrate knowledge of mathematics, science
and engineering. b) Graduates will demonstrate the ability to identify, formulate
and solve engineering problems.

COURSE OBJECTIVE 2: To develop the students' ability to analyze discrete-time
signals and systems in both the time and frequency domains and to perform
MATLAB-based signal-system tasks.
PROGRAM OUTCOME: c) Graduates will demonstrate the ability to design and conduct
experiments, analyze and interpret data.

COURSE OBJECTIVE 3: This course is next in line after 'Network Analysis' and builds
the foundation for the later courses 'Digital Signal Processing', 'Linear Control
Systems' and 'Communication Systems'.
PROGRAM OUTCOME: k) Graduates will show the ability to participate and try to
succeed in competitive examinations.

COURSE OBJECTIVE 4: This course deals with the processing of discrete signals and
systems, sampling theory, and frequency-domain analysis.

COURSE OBJECTIVE 5: At the completion of the course, students should be able to
compute the discrete-time convolution of two signals.
MAPPING OF COURSE OBJECTIVE WITH PROGRAM OUTCOMES
COURSE
OBJECTIVE a b c d e f g h i j k

I X X X X
II X X X X
III X X
IV X X
V X X X X
9) Instructional Methods:

1. Direct Instruction:
I. Blackboard presentation
2. Interactive Instruction:
I. Think, pair, share
II. Quiz
3. Indirect Instruction:
I. Problem solving
4. Independent Instruction:
I. Assigned questions

10) Learning Materials:

1. Text/lecture notes/lab manual


2 . Web Resources
I. www.circuit-magic.com
II. http://ocw.mit.edu
III. www.allaboutcircuits.com
IV. www.analyzethat.net
12) Assessment of Outcomes:
1. End term practical exam (Conducted by RTU, KOTA)
2. Surprise Quiz/ practical exam.
3. Presentation by students.
4. Daily class room interaction.
5. Assignments.
13). Outcomes will be achieved through following:
1. Class room teaching (through chalk and board).
2. Suggested research papers.
3. Video lectures through NPTEL.
EXPERIMENT No-1

OBJECTIVE: Color Image Segmentation algorithm development

SOFTWARE REQUIRED: MATLAB 7.7

THEORY:-

Image Segmentation:
Image segmentation is the division of an image into regions or categories, which
correspond to different objects or parts of objects. Every pixel in an image is allocated
to one of a number of these categories. A good segmentation is typically one in which
pixels in the same category have similar greyscale or multivariate values and form a
connected region, while neighbouring pixels in different categories have dissimilar
values.
Segmentation is often the critical step in image analysis: the point at which we move
from considering each pixel as a unit of observation to working with objects (or parts
of objects) in the image, composed of many pixels. If segmentation is done well, then
all other stages in image analysis are made simpler.

A great variety of segmentation methods has been proposed in the past decades, and
some categorization is necessary to present the methods properly here. A disjoint
categorization does not seem to be possible, though, because even two very different
segmentation approaches may share properties that defy singular categorization. The
categorization presented here is therefore a categorization regarding the emphasis of
an approach rather than a strict division.
The following categories are used:
• Threshold based segmentation. Histogram thresholding and slicing techniques are
used to segment the image. They may be applied directly to an image, but can also be
combined with pre- and post-processing techniques.
• Edge based segmentation. With this technique, detected edges in an image are
assumed to represent object boundaries, and used to identify these objects.
• Region based segmentation. Where an edge based technique may attempt to find the
object boundaries and then locate the object itself by filling them in, a region based
technique takes the opposite approach, by (e.g.) starting in the middle of an object and
then “growing” outward until it meets the object boundaries.
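The threshold-based category above can be illustrated with a short sketch. The experiments in this manual use MATLAB, but the idea is language-independent; this minimal Python example (the function name `threshold_segment` is our own, not from any library) assigns each pixel of a toy greyscale grid to foreground or background with a single global threshold:

```python
# Minimal sketch of threshold-based segmentation on a toy greyscale
# image (values 0-255). Pixels at or above the threshold become
# foreground (1), the rest background (0).

def threshold_segment(image, thresh):
    """Return a binary label image: 1 where pixel >= thresh, else 0."""
    return [[1 if px >= thresh else 0 for px in row] for row in image]

toy = [
    [10,  12, 200, 210],
    [11,  13, 205, 220],
    [ 9, 190, 195,  14],
]

labels = threshold_segment(toy, 128)
print(labels)
```

In practice the threshold is rarely fixed by hand; it is chosen from the histogram (as MATLAB's graythresh does with Otsu's method), which is the "histogram thresholding" mentioned above.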

Matlab Code:

he = imread('hestain.png');
imshow(he), title('H&E image');
text(size(he,2),size(he,1)+15,...
'Image courtesy of Alan Partin, Johns Hopkins University', ...
'FontSize',7,'HorizontalAlignment','right');
cform = makecform('srgb2lab');
lab_he = applycform(he,cform);
ab = double(lab_he(:,:,2:3));
nrows = size(ab,1);
ncols = size(ab,2);
ab = reshape(ab,nrows*ncols,2);
nColors = 3;
% repeat the clustering 3 times to avoid local minima
[cluster_idx, cluster_center] = kmeans(ab,nColors,'distance','sqEuclidean', ...
'Replicates',3);
pixel_labels = reshape(cluster_idx,nrows,ncols);
imshow(pixel_labels,[]), title('image labeled by cluster index');
segmented_images = cell(1,3);
rgb_label = repmat(pixel_labels,[1 1 3]);
for k = 1:nColors
color = he;
color(rgb_label ~= k) = 0;
segmented_images{k} = color;
end
imshow(segmented_images{1}), title('objects in cluster 1');
imshow(segmented_images{2}), title('objects in cluster 2');
Viva Questions:

S.No. Questions

1. What do you understand by Digital Image Processing?
2. Explain image segmentation.
3. What is a pixel?
4. How many pixels are needed for a grayscale image and for a colored image?
5. What are the different segmentation techniques, and where is each useful?
6. What is an object in an image?
7. Define cluster.
8. What are the benefits of segmenting an image?
9. Can we segment an image without recognizing an object?


EXPERIMENT No-2

OBJECTIVE: - Wavelet/vector quantization compression

SOFTWARE REQUIRED: MATLAB 7.7
Theory:-
Image compression is an application of data compression that encodes the original
image with fewer bits. The objective of image compression is to reduce the
redundancy of the image and to store or transmit data in an efficient form.
A common characteristic of image data is that it contains a significant amount of
redundant information. The amount of data associated with visual information is so
large that its storage requires enormous storage capacity.
The transmission of this redundant data is wasteful of primary communication
resources. For efficient data transmission, the redundant information should be
removed from the signal prior to transmission. The image coding may be lossless or
lossy, based on the application. Huffman coding has been widely used in lossless
image compression.
The famous Huffman code is an instantaneous, uniquely decodable block code which
assigns shorter code words to frequent source symbols and longer code words to rare
source symbols of the image to be encoded for transmission and storage. The goal
here is to provide a complete coding system which includes wavelet transform and
quantization plus balanced binary partition Huffman coding, and a context-based
compression approach that provides high compression performance with reduced
memory utilization.
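As a rough sketch of the Huffman idea described above (shorter code words for frequent symbols), the following Python fragment builds a code table with the standard greedy merge. `huffman_codes` is an illustrative helper written for this manual, not part of any image toolbox:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table from an iterable of source symbols."""
    freq = Counter(symbols)
    # Each heap entry: (frequency, tiebreak id, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        # Merge the two least frequent subtrees; prefix their codes.
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

data = "aaaaabbbc"          # 'a' is frequent, 'c' is rare
codes = huffman_codes(data)
```

With this sample, the frequent symbol 'a' receives a 1-bit code while the rare 'c' receives 2 bits, so the encoded length (13 bits) beats a fixed 2-bit-per-symbol code (18 bits).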

WAVELET
As we are going to deal with compression of images, it is obvious that manipulation
of the raw image is quite impractical; if the image is represented in some mathematical
form, then the manipulation becomes simpler and easier. Hence, the raw image needs
to be transformed. To what extent a particular transform will support data compression
depends on both the transform and the nature of the images being compressed. The
practicality of an image coding scheme depends on the computational workload of the
encoding and decoding steps, as well as the degree of compression obtained. The
availability of a fast implementation algorithm can greatly enhance the appeal of a
particular transform. Some of the transformations are the Sine transform, Cosine
transform, Haar transform, Slant transform, etc.
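Among the transforms listed, the Haar transform is the simplest to demonstrate. The sketch below (pure Python, with illustrative function names of our own) performs one level of the 1-D Haar transform as scaled pairwise averages and differences; on piecewise-constant data the detail coefficients vanish, which is exactly the property that makes the transformed image compressible:

```python
import math

def haar_step(signal):
    """One level of the 1-D Haar transform: pairwise averages
    (approximation) and pairwise differences (detail)."""
    s = 1 / math.sqrt(2)  # orthonormal scaling factor
    approx = [(a + b) * s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_step."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

x = [4, 4, 8, 8, 2, 2, 6, 6]        # piecewise-constant: details vanish
approx, detail = haar_step(x)
rec = haar_inverse(approx, detail)
```

In a coder, the near-zero detail coefficients are what quantization and entropy coding exploit: they can be stored coarsely (or discarded) with little visible loss.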

QUANTIZATION
In color quantization, a true-color image is transformed into a color-mapped image
consisting of K carefully selected representative colors. The goal of this quantization
is to discard information which is not visually significant. Quantization is a many-to-
one mapping, and therefore is fundamentally lossy.
A fast and effective (image-quality-improving) method for color quantization, which
uses a histogram to weight each color in proportion to its frequency, is presented here.
This is a modified version of the simple minmax algorithm proposed by Gonzalez. This
quantization is applied to the transformed image. The quantized image is then
subjected to entropy coding, and the coded image is stored in a storage medium.
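Scalar quantization, the simplest form of the many-to-one mapping described above, can be sketched as follows. This is an illustrative Python fragment, not the histogram-weighted minmax method itself: 256 grey levels are collapsed onto k representative mid-bin values, so the mapping is irreversible and hence lossy:

```python
def uniform_quantize(values, k):
    """Map intensities 0-255 onto k representative levels (many-to-one,
    hence lossy): each value keeps only its bin's midpoint."""
    step = 256 / k
    out = []
    for v in values:
        bin_index = min(int(v / step), k - 1)   # which of the k bins
        out.append(round(bin_index * step + step / 2))  # bin midpoint
    return out

pixels = [0, 10, 100, 128, 200, 255]
q = uniform_quantize(pixels, 4)       # only 4 distinct outputs possible
```

Note that distinct inputs (0 and 10, or 200 and 255) collapse onto the same output level; that information is discarded and cannot be recovered.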

Matlab Code:

clear all;
close all;
input_image1=imread('peppers.png');
input_image1=rgb2gray(input_image1); % wavedec2 expects a 2-D (grayscale) matrix
input_image=imnoise(input_image1,'speckle',.01);
figure;
imshow(input_image);
n=input('enter the decomposition level=');
[Lo_D,Hi_D,Lo_R,Hi_R] = wfilters('haar');
[c,s]=wavedec2(input_image,n,Lo_D,Hi_D);
disp('the decomposition vector output is');
disp(c);
[thr,nkeep] = wdcbm2(c,s,1.5,3*prod(s(1,:)));
% compress the image by wavelet-packet thresholding, keeping the approximation
[compressed_image,TREED,comp_ratio,PERFL2] = ...
    wpdencmp(double(input_image),'s',n,'haar','threshold',5,1);
disp('compression ratio in percentage');
disp(comp_ratio);
re_ima1 = waverec2(c,s,'haar');
re_ima=uint8(re_ima1);
subplot(1,3,1);
imshow(input_image);
title('i/p image');
subplot(1,3,2);
imshow(uint8(compressed_image));
title('compressed image');
subplot(1,3,3);
imshow(re_ima);
title('reconstructed image');
Viva Questions:

S.No. Questions

1. What do you understand by wavelet transformation?
2. What are the applications of wavelets?
3. Give some examples of wavelets.
4. What do you understand by sampling?
5. Explain quantization.
6. What do you understand by compression?
7. How is the process of compressing an image useful?
8. Explain the process of compression using wavelets.
9. Give the difference between lossy and lossless compression.
EXPERIMENT No-3

OBJECTIVE: Deformable templates applied to skin tumor border finding

SOFTWARE REQUIRED: MATLAB 7.5

Theory:-

The ultimate goal of computer vision is to simulate the human perception and
interpretation of the world around us. Given an image of a scene, in terms of an array
of pixels, the challenge is to locate and recognize different objects present in it. One
major difficulty in object recognition tasks is how to integrate and interpret the
diverse local image cues (intensity, gradient, texture, etc.).
The bottom-up methods often fail due to poor contrast, occlusion, adverse viewing
conditions, and noise. A model-free or structure-free image interpretation approach is
doomed by the under-constrained nature of the problem. Imperfect image data can be
augmented with extrinsic information such as geometrical models of the objects that
are likely to be present in the scene in order to facilitate object recognition. The
geometrical shape information can vary from local and generic to global and specific.
For example, it can incorporate smoothness or elasticity constraints, or the shape can
be specified using a hand-crafted parametric form. Such model information is
determined based on the specific application of interest, and should be incorporated
explicitly in an integrated and robust computer vision system. As has been said,
‘‘there are no two leaves of the same shape’’, so an object shape will have intrinsic
intraclass variations. In addition, object deformation is expected in most imaging
applications because of the varying imaging conditions, sensor noise, occlusion and
imperfect segmentations.
Deformable models which are receiving increasing attention provide a promising and
powerful approach for solving computer vision problems because of their versatility
and flexibility in object modeling and representation. They are capable of dealing
with a variety of shape deformations and variations, while maintaining a certain
structure. The deformable models have wide applications in pattern recognition and
computer vision, including image/video database retrieval [6,70,74], object
recognition and identification [9,42,54,64], image segmentation [11,21,24,38,45],
restoration, and object tracking [3,5,7,35,41,46,49,68]

Skin Tumor Border Finding:


The most predictive features for various skin cancers will be targeted by the computer
vision system, allowing automatic induction software to classify the tumor. The
problem of interest here is identifying skin tumor boundaries; the border is the first
and most critical feature to identify. Object boundaries and surface contours are fairly
easily detected by the human observer, but automatic border detection is a difficult
problem. The images may contain reflections, shadows or extraneous artifacts that
make the process of finding the border more difficult. A skin tumor may be
distinguished from the surrounding skin by features such as color, brightness or
luminance, texture and shape, and any combination thereof. The use of color as a
means to identify the tumor border is of particular importance, since in some cases it
is difficult to identify the tumor border in a monochrome image. The border finding
algorithm presented here involves a series of preprocessing steps to remove noise
from the image, followed by color image segmentation, data reduction, object
localization, and contour encoding.
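The contour-encoding step at the end of this pipeline presupposes identifying border pixels. A minimal sketch in Python (the helper name `boundary_pixels` is our own): a foreground pixel belongs to the border if at least one of its 4-neighbours is background or lies outside the image. MATLAB's bwtraceboundary, used in the listing below, additionally orders these pixels into a connected contour:

```python
def boundary_pixels(mask):
    """Return the set of (row, col) foreground pixels that touch the
    background through a 4-neighbour, i.e. the object's border."""
    rows, cols = len(mask), len(mask[0])
    border = set()
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                    border.add((r, c))
                    break
    return border

# 5x5 mask with a filled 3x3 square: its border is the square's outline
mask = [[0] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        mask[r][c] = 1
b = boundary_pixels(mask)
```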

Matlab Code:

clear all
close all
clc
%k parameter can be changed to adjust intensity of image
ei=25;
st=35;
%k=10
k=ei*st;
I = imread('C:\Users\admin\Desktop\skintumor.jpg');
%h=filter matrx
h = ones(ei,st) / k;
I1 = imfilter(I,h,'symmetric');
figure
subplot(2,2,1),imshow(I), title('Original image');
subplot(2,2,2), imshow(I1), title('Filtered Image');
IG=rgb2gray(I1);
%Converting to BW
I11 = imadjust(IG,stretchlim(IG),[]);
level = graythresh(I11);
BWJ = im2bw(I11,level);
dim = size(BWJ)
IN=ones(dim(1),dim(2));
BW=xor(BWJ,IN); %inverting
subplot(2,2,3), imshow(BW), title('Black and White');
%Finding of initial point
row = round(dim(1)/2);
col = min(find(BW(row,:)))
%Tracing
boundary = bwtraceboundary(BW,[row, col],'W');
subplot(2,2,4),imshow(I), title('Traced');
hold on;
%Display traced boundary
plot(boundary(:,2),boundary(:,1),'g','LineWidth',2);
hold off
% figure
% plot(boundary(:,2),boundary(:,1),'black','LineWidth',2);

nn=size(boundary);
KM=zeros(dim(1),dim(2));
ii=0;
%Create new matrix with boundary points. there fore we can get rid off
%other distortions outside boundaries
while ii<nn(1)
ii=ii+1;
KM(boundary(ii,1),boundary(ii,2))=1;
end
figure
subplot(2,2,1),plot(boundary(:,2),boundary(:,1),'black','LineWidth',2);
subplot(2,2,2),imshow(KM)
%Fill inner boundaries where lesion is located
KM2 = imfill(KM,'holes');
subplot(2,2,3),imshow(KM2)
KM1=xor(KM2,IN);
% subplot(2,2,4),imshow(KM1)
%Geometrical center
IVx=[1:dim(2)];
IVy=[1:dim(1)];
IMx=ones(dim(1),1)*IVx;
IMy=ones(dim(2),1)*IVy;
IMy = imrotate(IMy,-90);
Koordx=IMx.*KM2;
Koordy=IMy.*KM2;
xmean=mean(Koordx,2);
yc=round(sum(xmean.*IMy(:,1))/sum(xmean));
ymean=mean(Koordy);
xc=round(sum(ymean.*IVx)/sum(ymean));
figure
imshow(I)
hold on
plot(boundary(:,2),boundary(:,1),'green','LineWidth',2);
hold on
plot(xc,1:dim(1),'red','LineWidth',2);
plot(1:dim(2),yc,'red','LineWidth',2);
hold off
% ID=im2double(I);
ID1(:,:,1)=im2double(I(:,:,1));
ID1(:,:,2)=im2double(I(:,:,2));
ID1(:,:,3)=im2double(I(:,:,3));
figure
subplot(2,2,1), imshow(ID1);
subplot(2,2,2), imshow(ID1(:,:,1));
hold on
plot(xc,1:dim(1),'red','LineWidth',2);
plot(1:dim(2),yc,'red','LineWidth',2);
hold off
subplot(2,2,3), imshow(ID1(:,:,2));
subplot(2,2,4), imshow(ID1(:,:,3));
Viva-Questions:

S.No. Questions

1. What is segmentation?
2. What is a gray scale?
3. How do you define an image on a gray scale?
4. How would you convert a colored image into a black and white image?
5. What is a filter?
6. Explain the use of filters in digital image processing.
7. How do you define the border of an image?
8. What do you understand by object tracking?
9. Define deformable templates.
10. How has image processing helped in skin tumor finding?
EXPERIMENT No-4

OBJECTIVE: Image Enhancement

SOFTWARE REQUIRED: MATLAB 7.5

Theory:-

It is a method of improving the definition of a video picture by a computer program,
which reduces the lowest grey values to black and the highest to white; it is used for
pictures from microscopes, surveillance cameras, and scanners.

The principal objective of image enhancement is to process a given image so that the
result is more suitable than the original image for a specific application.

• It accentuates or sharpens image features such as edges, boundaries, or contrast to
make a graphic display more helpful for display and analysis.

• The enhancement doesn't increase the inherent information content of the data, but it
increases the dynamic range of the chosen features so that they can be detected easily.
The greatest difficulty in image enhancement is quantifying the criterion for
enhancement and, therefore, a large number of image enhancement techniques are
empirical and require interactive procedures to obtain satisfactory results.

• Image enhancement methods can be based on either spatial or frequency domain
techniques.

Spatial domain enhancement methods:

• Spatial domain techniques are performed on the image plane itself and are based on
direct manipulation of pixels in an image.

• The operation can be formulated as g(x,y) = T[f(x,y)], where g is the output, f is the
input image and T is an operation on f defined over some neighborhood of (x,y).

• According to the operations on the image pixels, spatial methods can be further
divided into two categories:
- Point operations and spatial operations (including linear and non-linear operations).
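A point operation applies T to each pixel independently of its neighbours. A simple example is linear contrast stretching, sketched below in Python (an illustrative fragment, not a toolbox routine): the grey range [lo, hi] is mapped onto the full 0-255 range, with values outside that range clipped:

```python
def stretch(image, lo, hi):
    """Point operation g(x,y) = T[f(x,y)]: linearly map the grey range
    [lo, hi] onto the full 0-255 range, clipping values outside it."""
    def T(v):
        v = min(max(v, lo), hi)                 # clip to [lo, hi]
        return round((v - lo) * 255 / (hi - lo))  # linear stretch
    return [[T(px) for px in row] for row in image]

img = [[60, 80], [100, 120]]       # a low-contrast 2x2 toy image
out = stretch(img, 60, 120)        # now spans the full dynamic range
```

This is exactly the "increase the dynamic range of the chosen features" idea mentioned above: no information is added, but the existing contrast is made easier to see.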

Frequency domain enhancement methods:

• These methods enhance an image f(x,y) by convolving the image with a linear,
position-invariant operator.
• The 2-D convolution is performed in the frequency domain with the DFT.
Spatial domain: g(x,y)=f(x,y)*h(x,y)
Frequency domain: G(w1,w2)=F(w1,w2)H(w1,w2)
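The equivalence of the two formulas above (the convolution theorem) can be checked numerically. The sketch below uses a naive O(N²) DFT in 1-D Python, purely for illustration; the circular convolution computed in the spatial domain matches the inverse DFT of the pointwise product G = F·H:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(N^2), for illustration)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Naive inverse DFT."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def circ_conv(f, h):
    """Circular (periodic) convolution in the 'spatial' domain."""
    n = len(f)
    return [sum(f[m] * h[(t - m) % n] for m in range(n)) for t in range(n)]

f = [1, 2, 3, 4]
h = [1, 0, 0, 1]
direct = circ_conv(f, h)                         # spatial-domain result
F, H = dft(f), dft(h)
via_dft = [v.real for v in idft([a * b for a, b in zip(F, H)])]  # G = F.H
```

The same identity holds in 2-D, which is why frequency-domain enhancement multiplies spectra instead of convolving images.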

Matlab Code:

clear all
clc
I=imread('pout.tif');
I=double(I);
maximum_value=max((max(I)));
[row col]=size(I);
c=row*col;
h=zeros(1,300);
z=zeros(1,300);
for n=1:row
for m=1:col
if I(n,m) == 0
I(n,m)=1;
end
end
end
for n=1:row
for m=1:col
t = I(n,m);
h(t) = h(t) + 1;
end
end
pdf = h/c;
cdf(1) = pdf(1);
for x=2:maximum_value
cdf(x) = pdf(x) + cdf(x-1);
end
new = round(cdf * maximum_value);
new= new + 1;
for p=1:row
for q=1:col
temp=I(p,q);
b(p,q)=new(temp);
t=b(p,q);
z(t)=z(t)+1;
end
end
b=b-1;
subplot(2,2,1), imshow(uint8(I)), title('Original Image');
subplot(2,2,2), bar(h), title('Histogram of the Original Image');
subplot(2,2,3), imshow(uint8(b)), title('Equalised Image');
subplot(2,2,4), bar(z), title('Histogram of the Equalised Image');
Viva-Questions:

S.No. Questions

1. What do you understand by image enhancement?
2. Explain histogram.
3. What do you understand by histogram equalization?
4. How do you represent an image?
5. How can you define an image in the frequency domain?
6. How can you define an image in the spatial domain?
7. Define point operations.
8. What do you understand by the neighborhood of a pixel?
9. Why is image enhancement necessary?
10. Give the names of the filters used in the frequency and spatial domains.
EXPERIMENT No-5

OBJECTIVE: High Speed Film Image Enhancement

SOFTWARE REQUIRED: MATLAB 7.5

Theory:-

It is a method of improving the definition of a video picture by a computer program,
which reduces the lowest grey values to black and the highest to white; it is used for
pictures from microscopes, surveillance cameras, and scanners.

High speed images:

When an object is in motion, it becomes difficult to capture it with a camera. Even
when captured, the image becomes blurred. Thus the information present in the scene
is lost and the image becomes useless.
However, with time and improvement, many processes were developed to recreate the
image and to restore the information in it. Digital image processing provides many
methods to do the same.
Image enhancement is one of the most common methods used for restoring the
information that was originally lost.

Also, in the present times we have high-speed cameras that capture high-speed
objects with accuracy.
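The restoration used in the experiment below (Wiener deconvolution of a motion blur) reduces, in the noise-free case, to inverse filtering in the frequency domain. Here is a 1-D Python sketch of that limiting case (naive DFT, illustrative only; MATLAB's deconvwnr handles the noisy 2-D case): dividing the blurred spectrum by the blur spectrum recovers the signal, provided the blur spectrum has no zeros:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(N^2), for illustration)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Naive inverse DFT, returning the real parts."""
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n).real
            for t in range(n)]

n = 8
signal = [0, 0, 10, 10, 10, 0, 0, 0]           # a bright bar, 1-D for brevity
psf = [0.5, 0.3, 0.2] + [0] * (n - 3)          # motion-blur kernel, padded

S, P = dft(signal), dft(psf)
blurred = idft([s * p for s, p in zip(S, P)])                   # circular blur
restored = idft([b / p for b, p in zip(dft(blurred), P)])       # inverse filter
```

With real noise the division amplifies noise at frequencies where the blur spectrum is small, which is why deconvwnr takes a noise-to-signal ratio instead of dividing blindly.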

Matlab Code:

I = im2double(imread('cameraman.tif'));
imshow(I);
title('Original Image');
LEN = 21;
THETA = 11;
PSF = fspecial('motion', LEN, THETA);
blurred = imfilter(I, PSF, 'conv', 'circular');
imshow(blurred);
title('Blurred Image');
wnr1 = deconvwnr(blurred, PSF, 0);
figure, imshow(wnr1);
title('Restored Image');
noise_mean = 0;
noise_var = 0.0001;
blurred_noisy = imnoise(blurred, 'gaussian', ...
noise_mean, noise_var);
figure, imshow(blurred_noisy)
title('Simulate Blur and Noise')
wnr2 = deconvwnr(blurred_noisy, PSF, 0);
figure, imshow(wnr2)
title('Restoration of Blurred, Noisy Image Using NSR = 0')
signal_var = var(I(:));
wnr3 = deconvwnr(blurred_noisy, PSF, noise_var / signal_var);
figure, imshow(wnr3)
title('Restoration of Blurred, Noisy Image Using Estimated NSR');
Viva-Questions:

S.No. Questions

1. What do you understand by image enhancement?
2. How do you define noise in an image?
3. What are the different types of noise that can affect an image?
4. What are filters?
5. How does a filter help in reducing noise in an image?
6. Define signal-to-noise ratio.
7. What are high speed images?
8. What do you understand by shutter speed?
9. Sometimes even blurring of an image is required. Give some examples of this.
10. Define the restoration process in DIP.
EXPERIMENT No-6

OBJECTIVE: Computer vision for skin tumor image evaluation

SOFTWARE REQUIRED: MATLAB 7.5

Theory:-

In recent times, skin cancer is seen as one of the most hazardous forms of cancer
found in humans. Skin cancer is found in various types such as melanoma, basal and
squamous cell carcinoma, among which melanoma is the most unpredictable. The
detection of melanoma in an early stage can be helpful in curing it. Computer vision
can play an important role in medical image diagnosis, and this has been proved by
many existing systems. Here, we present a computer-aided method for the detection of
melanoma skin cancer using image processing tools. The input to the system is the
skin lesion image; by applying novel image processing techniques, the system
analyses it to conclude about the presence of skin cancer. The lesion image analysis
tools check for the various melanoma parameters like asymmetry, border, colour and
diameter (ABCD) by texture, size and shape analysis for image segmentation and
feature stages. The extracted feature parameters are used to classify the image as
normal skin or a melanoma cancer lesion.
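Of the ABCD parameters, asymmetry is the easiest to sketch: compare the segmented lesion mask with its own mirror image. The Python fragment below is an illustrative scoring function of our own (a real system would first align the mask to its principal axes before mirroring); it returns 0 for a perfectly left-right symmetric mask and approaches 1 for a fully asymmetric one:

```python
def asymmetry_score(mask):
    """Fraction of lesion pixels that do not overlap the lesion's own
    left-right mirror image: 0 for a perfectly symmetric shape."""
    mirrored = [row[::-1] for row in mask]
    area = sum(sum(row) for row in mask)
    overlap = sum(a & b for r1, r2 in zip(mask, mirrored)
                  for a, b in zip(r1, r2))
    return (area - overlap) / area

symmetric = [[0, 1, 1, 0],
             [1, 1, 1, 1]]    # mirror-identical: score 0
skewed    = [[1, 1, 0, 0],
             [1, 0, 0, 0]]    # no overlap with its mirror: score 1
```

Border irregularity, colour variance and diameter are scored similarly from the same segmented mask, and the resulting feature vector drives the classifier.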

Application of computational intelligence methods helps physicians as well as
dermatologists in faster data processing to give better and more reliable diagnoses.
Studies related to the automated classification of pigmented skin lesion images have
appeared in the literature as early as 1987 [36]. After some successful experiments on
automatic diagnostic systems for melanoma diagnosis [36-42], the utility of machine
vision and computerized analysis is becoming more important every year.
Computer-aided decision support tools are important in medical imaging for diagnosis
and evaluation. Predictive models are used in a variety of medical domains for
diagnostic and prognostic tasks. These models are built based on experience, which
constitutes data acquired from actual cases. The data can be preprocessed and
expressed as a set of rules, as is often the case in knowledge-based expert systems,
and consequently can serve as training data for statistical and machine learning
models.

Matlab is one such software which can be used for object detection as well as viewing
the clear image using image enhancement. Also it can create the boundary around the
object and thus define the area on the skin that is affected by the tumor.
Matlab Code:

I = imread('cell.tif');
figure, imshow(I), title('original image');
text(size(I,2),size(I,1)+15, ...
'Image courtesy of Alan Partin', ...
'FontSize',7,'HorizontalAlignment','right');
text(size(I,2),size(I,1)+25, ....
'Johns Hopkins University', ...
'FontSize',7,'HorizontalAlignment','right');
[junk threshold] = edge(I, 'sobel');
fudgeFactor = .5;
BWs = edge(I,'sobel', threshold * fudgeFactor);
figure, imshow(BWs), title('binary gradient mask');
se90 = strel('line', 3, 90);
se0 = strel('line', 3, 0);
BWsdil = imdilate(BWs, [se90 se0]);
figure, imshow(BWsdil), title('dilated gradient mask');
BWdfill = imfill(BWsdil, 'holes');
figure, imshow(BWdfill);
title('binary image with filled holes');
BWnobord = imclearborder(BWdfill, 4);
figure, imshow(BWnobord), title('cleared border image');
seD = strel('diamond',1);
BWfinal = imerode(BWnobord,seD);
BWfinal = imerode(BWfinal,seD);
figure, imshow(BWfinal), title('segmented image');
Viva-Questions:

S.No. Questions

1. What do you understand by image evaluation?
2. How is image evaluation beneficial?
3. What are the different processes involved in image evaluation?
4. How many gray levels does a black and white image have?
5. What role does a boundary play in defining an image or an object in an image?
6. Explain the role of filters while evaluating an image.
7. Define a gray level scale with the help of a diagram.
8. How is Matlab helpful in finding the boundary of a skin tumor?
9. Why is most of the processing in image processing done on gray level images?
10. Define the formula for finding gray levels in an image.
EXPERIMENT No-7

OBJECTIVE: New border images

SOFTWARE REQUIRED: MATLAB 7.5

Theory:-

Another major use of Matlab is that we can create and erase borders from an image as
and when required. If a particular part of the image is to be highlighted, then that part
can be bordered and thus easily detected. Objects are usually created at the beginning
of the program and then used throughout it. Little attention is paid to the borders or
the outlines. Thus, without disturbing the original object, Matlab provides tools to
create borders even after finishing the work.

If a particular change is to be made in some small part of the image, then one doesn't
need to change the whole image. Select the particular part of the image, create a
border around that particular part, and make the changes in only that part of the image
without disturbing the original one.

When a particular part is bordered, the morphological algorithms are made to detect
only that part of the image and process it. With their help, we can fill a hole in the
image, lighten a dark part, smooth the image, blur it, and do many more things.

Another area in which the boundary of an object plays an important role is object
detection. Feed the algorithm the dimensions of the object, and you will have the
exact object detected in the whole image, leaving out every other part that is not of
any concern.
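The "feed the algorithm the dimensions of the object" step amounts to computing a bounding box. A minimal Python sketch (the helper name `bounding_box` is our own) finds the smallest axis-aligned box enclosing all nonzero pixels of a binary mask, which is what the MATLAB listing below then draws and crops:

```python
def bounding_box(mask):
    """Smallest axis-aligned box [min_row, min_col, max_row, max_col]
    enclosing all nonzero pixels, or None for an empty mask."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return [min(rows), min(cols), max(rows), max(cols)]

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
box = bounding_box(mask)
```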

Matlab Code:

I1=imread('peppers.png'); % import your image
%the following step is not necessary but just shows the bounding box
imshow(I1);
hold on
w=[26 77];
x=[26 555];
y=[426 77];
z=[426 555];
Points = [w;x;z;y;w]; % corners in drawing order
plot(Points(:,1), Points(:,2), 'r-'); % draw bounding box
hold off;
%figure
%I2=imcrop(I1,[26 77 400 478]); % crop the bounding box
%imshow(I2) % show cropped image
Viva-Questions:

S.No. Questions

1. What is a color model?
2. Define the RGB color model in detail.
3. How do you define the HSI color model? How do you extract it from the RGB model?
4. How does a colored pixel differ from a gray level pixel?
5. How is a border helpful in defining an image?
6. Is it possible to create a border only on a small portion of an image? If yes, then explain how.
7. Define edges.
8. Is it possible to find a particular object in an image if required?
9. Give the different types of edges.
10. What are boundary descriptors?
