
RAMNIRANJAN JHUNJHUNWALA COLLEGE

GHATKOPAR (W), MUMBAI - 400 086

DEPARTMENT OF INFORMATION TECHNOLOGY


2019 - 2020

M.Sc.( I.T.) SEM I


Image and Vision Processing

Name: Rajesh Tiwari


Roll No.: 2
INDEX

NO. PRACTICAL

1. Implement Basic Intensity Transformation Functions
   A) Image Inverse
   B) Log Transformation
   C) Power-law Transformation
2. Piecewise Transformation
   A) Contrast Stretching
   B) Thresholding
   C) Bit-Plane Slicing
3. Implement Histogram Equalization
4. Image Filtering in Spatial Domain
   A) Low-pass/Smoothing Filters (Average, Weighted Average, Median and Gaussian)
   B) High-pass/Sharpening Filters (Laplacian, Sobel, Roberts and Prewitt filters for edge detection)
5. Analyze Image in Frequency Domain
   A) Low-pass/Smoothing Filter
   B) High-pass/Sharpening Filter
6. Color Image Processing
   A) Pseudocoloring
   B) Separating the RGB Channels
   C) Color Slicing
7. Image Compression Techniques and Watermarking
   A) Implement Huffman Coding
   B) Add a Watermark to the Image
8. Basic Morphological Transformations
   A) Boundary Extraction
   B) Thinning and Thickening
   C) Hole Filling and Skeletons
Practical 1: Implement Basic Intensity transformation functions

Image Inverse: The negative or inverse of an image with intensity levels in the range [0, L-1] is obtained by using the negative transformation, which is given by the expression

s = L - 1 - r

where L - 1 is the maximum pixel value and r is the pixel value of the input image. For an 8-bit image (L = 256), for example, a pixel value r = 40 maps to s = 255 - 40 = 215.

In terms of intensity, this means that true black becomes true white and vice versa.

Reversing the intensity levels of an image in this manner produces the equivalent of a photographic negative. This type of processing is particularly suited for enhancing white or gray details embedded in dark regions of an image, especially when the black areas are dominant in size.

Code:

pkg load image;

x = imread('input.png');   # Read the input image
x = rgb2gray(x);           # Convert the RGB image to gray level
x = im2double(x);          # Convert the image to double data type (range [0, 1])
[row col] = size(x);       # Get the image dimensions
N = zeros(row, col);       # Preallocate the output matrix
for i = 1:row              # Loop over rows
  for j = 1:col            # Loop over columns
    N(i,j) = 1 - x(i,j);   # Subtract each pixel from 1 (the maximum for a double image)
  endfor
endfor
figure
imshow(x);
title('Original Image');
figure
imshow(N);
title('Negative Transformation Filtered Image');
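For reference, the same negative can be computed without the explicit loops; a minimal vectorized sketch using the variables above (imcomplement comes from Octave's image package):

N = 1 - x;             # Vectorized negative of a double gray-level image
N = imcomplement(x);   # Equivalent one-liner from the image package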

Output:
Original Image:

Negative Image:

Log Transformation: The log transformation maps a narrow range of low intensity values in the input into a wider range of output levels. We use a transformation of this type to expand the values of dark pixels in an image while compressing the higher-level values.

The general form of the log transformation is:

s = c log(r + 1)

where c is a constant and r ≥ 0.
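A common refinement (an assumption for illustration; the program below simply uses c = 2) is to choose c so that the brightest input maps to the brightest output. For a double image with r in [0, 1]:

c = 1/log(1 + 1);   # c ≈ 1.44 maps r = 1 to s = 1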

Code:

pkg load image;

x = imread('input.png');   # Read the input image
x = rgb2gray(x);           # Convert the RGB image to a gray-level image
x = im2double(x);          # Convert the image to double data type
[row col] = size(x);       # Get the image dimensions
c = 2;                     # Constant c of the log transformation
N = zeros(row, col);       # Preallocate the output matrix
for i = 1:row              # Loop over rows
  for j = 1:col            # Loop over columns
    N(i,j) = c*log(1 + x(i,j));  # Apply the log transformation and store the result in N
  endfor
endfor
figure
imshow(x);
title('Original Image');
figure
imshow(N);
title('Log Transformation Filtered Image');

Output:

Original Image:

Log Transformation Filtered image:

Power-Law Transformation: Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels.

The nth-power and nth-root curves are given by the expression s = c·r^γ. This transformation function is also called gamma correction. For various values of γ, different levels of enhancement can be obtained. It is used to correct power-law response phenomena: different display monitors show images at different intensities and clarity, so every monitor has built-in gamma correction over a certain gamma range, and a good monitor automatically corrects all displayed images for the best contrast, giving the user the best experience. In color images, varying the gamma changes the ratio of red, green and blue along with the intensity. The difference between the log-transformation function and the power-law functions is that with the power-law function a whole family of transformation curves can be obtained simply by varying γ.

The power-law transformation is given by the expression:

s = c * r^γ

where
s is the output pixel value,
r is the input pixel value,
c and γ are positive real constants.
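As a quick worked example (values chosen purely for illustration), take c = 1 and a normalized input pixel r = 0.5: with γ = 0.4 the output is s = 0.5^0.4 ≈ 0.76, so dark values are pushed up and the image brightens, while with γ = 2.5 the output is s = 0.5^2.5 ≈ 0.18, so the image darkens.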

Code:

pkg load image;

x = imread('baby.jpg');    # Read the input image
x = rgb2gray(x);           # Convert the RGB image to a gray-level image
x = im2double(x);          # Convert the image to double data type
[row col] = size(x);       # Get the image dimensions
gamma = 2;                 # Exponent of the power-law transformation
c = 1;                     # Constant c of the power-law transformation
N = zeros(row, col);       # Preallocate the output matrix
for i = 1:row              # Loop over rows
  for j = 1:col            # Loop over columns
    N(i,j) = c*(x(i,j)^gamma);  # Apply the power-law transformation and store the result in N
  endfor
endfor
figure
imshow(x);
title('Original Image');
figure
imshow(N);
title('Gamma Transformation Filtered Image');

Output:

Original Image:

Gamma Transformation Filtered Image:

Practical 2: Piecewise Transformation.

Image enhancement using the contrast stretching transformation technique, also shown on a histogram.

Contrast stretching (often called normalization) is a simple image enhancement technique that attempts to improve the contrast in an image by 'stretching' the range of intensity values it contains to span a desired range of values, e.g. the full range of pixel values that the image type concerned allows.

It differs from the more sophisticated histogram equalization in that it can only apply a linear scaling function to the image pixel values. As a result the 'enhancement' is less harsh. The piecewise mapping used is sketched below.
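For reference, the piecewise-linear mapping implemented in the program below, with user-supplied control points (r1, s1) and (r2, s2) and an 8-bit range assumed, is:

s = (s1/r1) * r                         for 0 <= r < r1
s = ((s2-s1)/(r2-r1)) * (r-r1) + s1     for r1 <= r < r2
s = ((255-s2)/(255-r2)) * (r-r2) + s2   for r >= r2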

Program:

pkg load image;

r = imread("baby.jpg");
r = rgb2gray(r);
r = double(r);           % Work in double precision on the 0-255 scale
[m n] = size(r);         % Getting the dimensions of the image.

# Take the four control points from the user (all on the 0-255 scale)

r1 = input("Enter R1: ");
r2 = input("Enter R2: ");
s1 = input("Enter S1: ");
s2 = input("Enter S2: ");

# Slopes of the three linear segments of the contrast-stretching function

a = s1/r1;
b = (s2-s1)/(r2-r1);
c = (255-s2)/(255-r2);

s = zeros(m, n);         % Preallocate the output image

for i=1:m
  for j=1:n
    if r(i,j) < r1
      s(i,j) = a*r(i,j);
    elseif r(i,j) < r2
      s(i,j) = b*(r(i,j)-r1)+s1;
    else
      s(i,j) = c*(r(i,j)-r2)+s2;
    endif
  endfor
endfor

s = uint8(s);            % Convert back to 8-bit for display

# Display the original and contrast-stretched images with their histograms

figure;
subplot(1,2,1)
imshow(uint8(r));
title("Original Image");

subplot(1,2,2)
imhist(uint8(r));
title('Histogram Of Original Image');

figure;
subplot(1,2,1)
imshow(s);
title("Contrast Stretched Image");

subplot(1,2,2)
imhist(s);
title('Histogram Of Contrast Stretched Image');

Output:
Original Image:

Contrast Stretched Image:

Image enhancement using the thresholding transformation technique, also shown on a histogram.

The input to a thresholding operation is typically a grayscale or color image. In the simplest
implementation, the output is a binary image representing the segmentation. Black pixels
correspond to background and white pixels correspond to foreground (or vice versa). In
simple implementations, the segmentation is determined by a single parameter known as
the intensity threshold. In a single pass, each pixel in the image is compared with this
threshold. If the pixel's intensity is higher than the threshold, the pixel is set to, say, white in
the output. If it is less than the threshold, it is set to black.

Code:

pkg load image;

r = imread("baby.jpg");
r = rgb2gray(r);
threshold = 150;         # Intensity threshold used for the segmentation
[m n] = size(r);
OutImage = zeros(m,n);

%%%%%% Thresholding %%%%%%%

for i=1:m
  for j=1:n
    if (r(i,j) > threshold)   # Pixels brighter than the threshold become white (1)
      OutImage(i,j) = 1;
    else
      OutImage(i,j) = 0;      # Pixels darker than or equal to the threshold become black (0)
    endif
  endfor
endfor

#Display the original image with its histogram

figure;
subplot(1,2,1)
imshow(r);
title("Original Image");

subplot(1,2,2)
imhist(r);
title('Histogram Of Original Image');

#Display the thresholded image with its histogram

figure;
subplot(1,2,1)
imshow(OutImage);
title("Threshold Image");

subplot(1,2,2)
imhist(OutImage);
title('Histogram Of Threshold Image');

Output:

Original Image:

Threshold Image:

Image enhancement using the bit-plane slicing technique.

Bit-plane slicing is a well-known technique in image processing, used for example in image compression. It converts an image into a set of binary images, one per bit, which can then be compressed with different algorithms. With this technique, the significant bits of a gray-scale image can be separated out, which is useful for processing the data with very low time complexity.

Digitally, an image is represented in terms of pixels, and these pixels can be expressed further in terms of bits. Separating a digital image into its bit planes is useful for analyzing the relative importance of each bit of the image, a process that aids in determining the adequacy of the number of bits used to quantize each pixel. This type of decomposition is also useful for image compression. In terms of bit-plane extraction for an 8-bit image, it is not difficult to show that the (binary) image for bit plane 7 can be obtained by processing the input image with a thresholding gray-level transformation function.

bitget:

C = bitget(A,bit)

C = bitget(A,bit) returns the value of the bit at position bit in A. Operand A must be an unsigned integer, a double, or an array containing unsigned integers, doubles or both. The bit input must be a number between 1 and the number of bits in the unsigned class of A (e.g., 32 for the uint32 class).

Code:

pkg load image;


g = imread("baby.jpg");
g = rgb2gray(g);

#Getting the bit at specified position#


g1 = bitget(g,1);
g2 = bitget(g,2);
g3 = bitget(g,3);
g5 = bitget(g,5);
g6 = bitget(g,6);
g7 = bitget(g,7);
g8 = bitget(g,8);

#Displaying the output#


figure;
subplot(2,2,1)
imshow(g);
title("Original Image");

subplot(2,2,2)
imshow(g1);
title('Bit 1');

subplot(2,2,3)
imshow(g2);
title("Bit 2");

subplot(2,2,4)
imshow(g3);
title('Bit 3');

figure;
subplot(2,2,1)
imshow(g5);
title("Bit 5");

subplot(2,2,2)
imshow(g6);
title('Bit 6');

subplot(2,2,3)
imshow(g7);
title("Bit 7");

subplot(2,2,4)
imshow(g8);
title('Bit 8');
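As a follow-up sketch (not part of the original program), the original image can be reconstructed from its bit planes by weighting each plane with the corresponding power of two:

recon = zeros(size(g));
for b = 1:8
  recon = recon + double(bitget(g,b)) * 2^(b-1);  # Add bit plane b with weight 2^(b-1)
endfor
recon = uint8(recon);   # recon is now identical to the original image g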

Output:

Practical 3: Histogram Equalization

Image enhancement using the histogram equalization transformation technique, also shown on a histogram.
Histogram modeling techniques (e.g. histogram equalization) provide a sophisticated method
for modifying the dynamic range and contrast of an image by altering that image such that
its intensity histogram has a desired shape. Unlike contrast stretching, histogram modeling
operators may employ non-linear and non-monotonic transfer functions to map
between pixel intensity values in the input and output images. Histogram equalization
employs a monotonic, non-linear mapping which re-assigns the intensity values of pixels in
the input image such that the output image contains a uniform distribution of intensities
(i.e. a flat histogram). This technique is used in image comparison processes (because it is
effective in detail enhancement) and in the correction of non-linear effects introduced by,
say, a digitizer or display system.

The process of adjusting intensity values can be done automatically using histogram
equalization. Histogram equalization involves transforming the intensity values so that the
histogram of the output image approximately matches a specified histogram. By default, the
histogram equalization function, histeq, tries to match a flat histogram with 64 bins, but we
can specify a different histogram instead.

Syntax:

X=histeq(y);

histeq enhances the contrast of images by transforming the values in an intensity image, or
the values in the colormap of an indexed image, so that the histogram of the output image
approximately matches a specified histogram.

Program:

pkg load image;

x = imread('inputvalue.jpg');
x = rgb2gray(x);

r = histeq(x);    % Equalize the histogram of the gray-level image

figure;
subplot(1,2,1)
imshow(x);
title("Original Image");

subplot(1,2,2)
imhist(x);
title('Histogram Of Original Image');

figure;
subplot(1,2,1)
imshow(r);
title("Histogram Equalization");

subplot(1,2,2)
imhist(r);
title('Histogram Of Histogram-Equalized Image');

Output:

Original Image:

Histogram Equalization:

Practical 4: Image Filtering in Spatial Domain.

Image enhancement using low-pass filter techniques.

Low pass filtering (smoothing), is employed to remove high spatial frequency noise from a
digital image. The low-pass filters usually employ moving window operator which affects one
pixel of the image at a time, changing its value by some function of a local region (window) of
pixels. The operator moves over the image to affect all the pixels in the image.

Average Filter

Average or mean filtering is easy to implement. It is used as a method of smoothing images,


reducing the amount of intensity variation between one pixel and the next resulting in
reducing noise in images.
The idea of average filtering is simply to replace each pixel value in an image with the mean
(`average') value of its neighbors, including itself. This has the effect of eliminating pixel values
which are unrepresentative of their surroundings. Mean filtering is usually thought of as a
convolution filter. Like other convolutions it is based around a kernel, which represents the
shape and size of the neighborhood to be sampled when calculating the mean. Often a 3×3
square kernel is used, as shown below:

Syntax:

img = imread ('hawk.png');


mf = ones(3,3)/9;

The mf is the mean filter:

>> mf = ones(3,3)/9
mf =

   0.1111   0.1111   0.1111
   0.1111   0.1111   0.1111
   0.1111   0.1111   0.1111

The filter2() is defined as:

Y = filter2(h,X) filters the data in X with the two-dimensional FIR filter in the matrix h. It computes the
result, Y, using two-dimensional correlation, and returns the central part of the correlation that is the
same size as X.

Y = filter2(h,X,shape)
It returns the part of Y specified by the shape parameter. shape is a string with one of these values:

'full' : Returns the full two-dimensional correlation. In this case, Y is larger than X.
'same' : (default) Returns the central part of the correlation. In this case, Y is the same size as X.
'valid' : Returns only those parts of the correlation that are computed without zero-padded edges. In
this case, Y is smaller than X.
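A minimal usage sketch with the mean filter above, where I is assumed to be a double-precision gray-level image (the two calls are equivalent, since 'same' is the default):

Y = filter2(mf, I);
Y = filter2(mf, I, 'same');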

Code:

pkg load image;

input_image = imread("hawk.png");
input_image = rgb2gray(input_image);
input_image1 = im2double(input_image);

f = ones(3,3)/9;                           # 3x3 mean-filter kernel
average_image = filter2(f, input_image1);  # Correlate the kernel with the image

#Display the original and average-filtered images

figure
subplot(1,2,1)
imshow(input_image);
title('Original Image');

subplot(1,2,2)
imshow(average_image);
title("Average filtered image");

Output:

Weighted Average:

The second mask is a little more interesting. This mask yields a so-called weighted average,
terminology used to indicate that pixels are multiplied by different coefficients, thus giving more
importance (weight) to some pixels at the expense of others. In the mask the pixel at the center of the
mask is multiplied by a higher value than any other, thus giving this pixel more importance in the
calculation of the average.
The weighted filter mask used below is (1/16) × [1 2 1; 2 4 2; 1 2 1].

Code:

pkg load image;


CIm=imread('hawk.png'); % To read image
f=rgb2gray(CIm); % To convert RGB to Grayimage

NIm=imnoise(f,'salt & pepper'); % Adding salt & pepper noise to image

w=(1/16)*[1 2 1;2 4 2;1 2 1]; % Defining the box filter mask

% get array sizes


[ma, na] = size(NIm);
[mb, nb] = size(w);

% To do convolution
c = zeros( ma+mb-1, na+nb-1 );
for i = 1:mb
for j = 1:nb
r1 = i;
r2 = r1 + ma - 1;
c1 = j;
c2 = c1 + na - 1;
c(r1:r2,c1:c2) = c(r1:r2,c1:c2) + w(i,j) * double(NIm);
end
end

% extract region of size(a) from c


r1 = floor(mb/2) + 1;
r2 = r1 + ma - 1;
c1 = floor(nb/2) + 1;
c2 = c1 + na - 1;
c = c(r1:r2, c1:c2);

figure
subplot(1,2,1)
imshow(NIm);
title('Noisy Image(Salt & Pepper Noise)');

subplot(1,2,2)
imshow(uint8(c));
title('Denoised Image using Weighted Average Operation of Box Filter');
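For reference, the loop above computes a full 2-D convolution and then crops the central region; the built-in conv2 produces the same result in one call (a sketch using the variables defined above):

c2 = conv2(double(NIm), w, 'same');  # Same-size convolution with the box-filter mask
imshow(uint8(c2));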

Output:

Median:

The best-known example in this category is the median filter, which, as its name implies,
replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel
(the original value of the pixel is included in the computation of the median). Median filters
are quite popular because, for certain types of random noise, they provide excellent noise-
reduction capabilities, with considerably less blurring than linear smoothing filters of similar
size. Median filters are particularly effective in the presence of impulse noise, also called salt-
and-pepper noise because of its appearance as white and black dots superimposed on an
image.

How It Works:

Like the mean filter, the median filter considers each pixel in the image in turn and looks at
its nearby neighbors to decide whether or not it is representative of its surroundings. Instead
of simply replacing the pixel value with the mean of neighboring pixel values, it replaces it
with the median of those values. The median is calculated by first sorting all the pixel values
from the surrounding neighborhood into numerical order and then replacing the pixel being
considered with the middle pixel value. (If the neighborhood under consideration contains an
even number of pixels, the average of the two middle pixel values is used.)

Figure 1 illustrates an example calculation.

Figure 1 Calculating the median value of a pixel neighborhood. As can be seen, the central
pixel value of 150 is rather unrepresentative of the surrounding pixels and is replaced with
the median value: 124. A 3×3 square neighborhood is used here --- larger neighborhoods will
produce more severe smoothing.

Syntax:

B = medfilt2(A) performs median filtering of the matrix A using the default 3-by-3
neighborhood.
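The neighborhood size can also be given explicitly; for example, a 5-by-5 window (stronger smoothing) would be:

B = medfilt2(A, [5 5]);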

Code:

pkg load image;

input_image = imread("hawk.png");
input_image = rgb2gray(input_image);
input_image1 = imnoise(input_image, 'salt & pepper');  # Add salt & pepper noise

median_image = medfilt2(input_image1);  # 3x3 median filtering

#Display the noisy and median-filtered images

figure
subplot(1,2,1)
imshow(input_image1);
title('Noisy Image');

subplot(1,2,2)
imshow(median_image);
title("Median filtered image");

Output:

Gaussian Spatial Low Pass Filter:

Code:

pkg load image;

#Read the input image
img = imread('cameraman.tif');

#Add Gaussian noise to the input image
A = imnoise(img, 'gaussian', 0.04, 0.003);
figure, imshow(A);
I = double(A);

#Constants for the Gaussian kernel
sigma = 1.76;   # Standard deviation
sz = 3;         # Kernel half-width: the kernel is (2*sz+1) x (2*sz+1)

[X,Y] = meshgrid(-sz:sz, -sz:sz);

M = size(X,1)-1;
N = size(Y,1)-1;

#Build the Gaussian kernel from its defining equation
exp_comp = -(X.^2+Y.^2)/(2*sigma*sigma);
kernel = exp(exp_comp)/(2*pi*sigma*sigma);

output = zeros(size(I));
I = padarray(I, [sz,sz], 'both');   # Pad the image so the kernel fits at the borders

for i = 1:size(I,1)-M
  for j = 1:size(I,2)-N
    temp = I(i:i+M, j:j+N).*kernel;  # Weight the neighborhood by the kernel
    output(i,j) = sum(temp(:));      # Store the weighted sum
  end
end
output = uint8(output);
figure, imshow(output);
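For reference, the image package can build and apply an equivalent (exactly normalized) Gaussian kernel directly; a minimal sketch using the same sigma and size, noting that imfilter replicates the border instead of zero-padding:

h = fspecial('gaussian', 2*sz+1, sigma);  # Normalized Gaussian kernel
out2 = imfilter(A, h, 'replicate');       # Filter with border replication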

Output:
Original Image:

Gaussian Low Pass Filtered Image:

Image enhancement using high-pass filter techniques.

Laplacian Filter:

The Laplacian is a 2-D isotropic measure of the 2nd spatial derivative of an image. The
Laplacian of an image highlights regions of rapid intensity change and is therefore often used
for edge detection.
The Laplacian is often applied to an image that has first been smoothed with something
approximating a Gaussian smoothing filter in order to reduce its sensitivity to noise, and
hence the two variants will be described together here. The operator normally takes a single
gray-level image as input and produces another gray-level image as output.

How It Works:

The Laplacian L(x,y) of an image with pixel intensity values I(x,y) is given by:

L(x,y) = ∂²I/∂x² + ∂²I/∂y²

This can be calculated using a convolution filter.

Since the input image is represented as a set of discrete pixels, we have to find a discrete convolution kernel that can approximate the second derivatives in the definition of the Laplacian. Two commonly used small kernels are (the code below uses the negated 8-neighbor variant):

0  1  0        1  1  1
1 -4  1        1 -8  1
0  1  0        1  1  1

Code:

a = imread('moon.jpg');
a = rgb2gray(a);
[r c] = size(a);
a = im2double(a);

filter = [-1 -1 -1; -1 8 -1; -1 -1 -1];  # 8-neighbor Laplacian kernel (negated sign convention)

result = a;              # Border pixels keep their original values
for i = 2:r-1
  for j = 2:c-1
    acc = 0;             # Accumulator for the weighted neighborhood sum
    row = 0;
    for k = i-1:i+1
      row = row+1;
      col = 1;
      for l = j-1:j+1
        acc = acc + a(k,l)*filter(row,col);
        col = col+1;
      end
    end
    result(i,j) = acc;
  end
end
subplot(1,2,1)
imshow(a);
subplot(1,2,2)
imshow(result);

Output:

Edge detection using Sobel, Roberts and Prewitt filters:

Code:

#Load the image package

pkg load image;

#Read the input image

img = imread("cameraman.tif");

#Find edges using the Sobel filter

sobel = edge(img, 'Sobel');

figure 1,
subplot(1,2,1)
imshow(img);
title('Original Image');

subplot(1,2,2)
imshow(sobel);
title("Edge detection using Sobel filter");

#Find edges using the Roberts and Prewitt filters

robert = edge(img, 'Roberts');
prewitt = edge(img, 'Prewitt');

figure 2,
subplot(1,2,1)
imshow(robert);
title('Edge detection using Roberts filter');

subplot(1,2,2)
imshow(prewitt);
title("Edge detection using Prewitt filter");

Output:
Original Image and Sobel edge detected Image:

Roberts and Prewitt edge-detected Images:

Practical 5: Analyze Image in Frequency Domain

Image enhancement using low-pass/smoothing filter techniques.


The most basic of filtering operations is called "low-pass". A low-pass filter, also called a "blurring" or
"smoothing" filter, averages out rapid changes in intensity. The simplest low-pass filter just calculates
the average of a pixel and all of its eight immediate neighbors. The result replaces the original value
of the pixel. The process is repeated for every pixel in the image.

Gaussian Smoothing:

The Gaussian smoothing operator is a 2-D convolution operator that is used to `blur' images and
remove detail and noise. In this sense it is similar to the mean filter, but it uses a different kernel that
represents the shape of a Gaussian (`bell-shaped') hump. This kernel has some special properties
which are detailed below.

How It Works:

The Gaussian distribution in 1-D has the form:

G(x) = (1/(sqrt(2π)σ)) e^(−x²/(2σ²))

where σ is the standard deviation of the distribution. We have also assumed that the distribution has a mean of zero (i.e. it is centered on the line x = 0).

In 2-D, an isotropic (i.e. circularly symmetric) Gaussian with mean (0,0) has the form:

G(x,y) = (1/(2πσ²)) e^(−(x²+y²)/(2σ²))

The idea of Gaussian smoothing is to use this 2-D distribution as a `point-spread' function, and this is
achieved by convolution. Since the image is stored as a collection of discrete pixels, we need to
produce a discrete approximation to the Gaussian function before we can perform the convolution.
In theory, the Gaussian distribution is non-zero everywhere, which would require an infinitely large
convolution kernel, but in practice it is effectively zero more than about three standard deviations
from the mean, and so we can truncate the kernel at this point. Below figure shows a suitable integer-
valued convolution kernel that approximates a Gaussian with a σ of 1.0. It is not obvious how to pick
the values of the mask to approximate a Gaussian. One could use the value of the Gaussian at the
centre of a pixel in the mask, but this is not accurate because the value of the Gaussian varies non-
linearly across the pixel. We integrated the value of the Gaussian over the whole pixel (by summing
the Gaussian at 0.001 increments). The integrals are not integers: we rescaled the array so that the
corners had the value 1. Finally, the 273 is the sum of all the values in the mask.

Discrete approximation to the Gaussian function with σ = 1.0 (the integer kernel below is divided by 273, the sum of its entries):

1    4    7    4    1
4   16   26   16    4
7   26   41   26    7
4   16   26   16    4
1    4    7    4    1

Once a suitable kernel has been calculated, then the Gaussian smoothing can be performed using
standard convolution methods. The convolution can in fact be performed fairly quickly since the
equation for the 2-D isotropic Gaussian shown above is separable into x and y components. Thus the
2-D convolution can be performed by first convolving with a 1-D Gaussian in the x direction, and then
convolving with another 1-D Gaussian in the y direction. (The Gaussian is in fact the only completely
circularly symmetric operator which can be decomposed in such a way.) Figure 4 shows the 1-
D x component kernel that would be used to produce the full kernel shown in Figure 3 (after scaling
by 273, rounding and truncating one row of pixels around the boundary because they mostly have the
value 0. This reduces the 7x7 matrix to the 5x5 shown above.). The y component is exactly the same
but is oriented vertically.
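As a brief sketch of this separability (the variable names here are assumptions, with I standing for a double gray-level image), the two 1-D passes below together apply the full 2-D kernel:

sigma = 1.0;
g = exp(-(-3:3).^2/(2*sigma^2));
g = g/sum(g);                  # Normalized 1-D Gaussian kernel
out = conv2(g, g, I, 'same');  # Convolve the columns with g, then the rows with g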

Program:

dftuv.m
function [U, V] = dftuv(M, N)
%DFTUV Computes meshgrid frequency matrices.
% [U, V] = DFTUV(M, N) computes meshgrid frequency matrices U and
% V. U and V are useful for computing frequency-domain filter
% functions that can be used with DFTFILT. U and V are both M-by-N.

% Set up range of variables.


u = 0:(M-1);
v = 0:(N-1);

% Compute the indices for use in meshgrid


idx = find(u > M/2);

u(idx) = u(idx) - M;
idy = find(v > N/2);
v(idy) = v(idy) - N;

% Compute the meshgrid arrays


[V, U] = meshgrid(v, u);

lpfilter.m
function H = lpfilter(type, M, N, D0, n)

% LPFILTER Computes frequency domain lowpass filters


% H = LPFILTER(TYPE, M, N, D0, n) creates the transfer function of a
% lowpass filter, H, of the specified TYPE and size (M-by-N).
% To view the filter as an image or mesh plot, it should be centered using H = fftshift(H).
% Valid values for TYPE, D0, and n are:
%
% 'ideal'    Ideal lowpass filter with cutoff frequency D0. n need not be supplied. D0 must be positive.
%
% 'btw'      Butterworth lowpass filter of order n, and cutoff D0. The default value for n is 1.0. D0 must be positive.
%
% 'gaussian' Gaussian lowpass filter with cutoff (standard deviation) D0. n need not be supplied. D0 must be positive.
%
% Use function dftuv to set up the meshgrid arrays needed for computing the required distances.
[U, V] = dftuv(M, N);

% Compute the distances D(U, V).


D = sqrt(U.^2 + V.^2);

% Begin filter computations.


switch type
case 'ideal'
H = double(D <=D0);
case 'btw'
if nargin == 4
n = 1;
end
H = 1./(1 + (D./D0).^(2*n));
case 'gaussian'
H = exp(-(D.^2)./(2*(D0^2)));
otherwise
error('Unknown filter type.')
end
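A quick usage sketch (size and cutoff chosen here for illustration) to visualize a centered ideal low-pass transfer function:

H = lpfilter('ideal', 256, 256, 40);
figure, imshow(fftshift(H));   # Displays a white disk of radius 40 at the center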

lowpassfrequency.m

pkg load image;


footBall=imread('football.jpg');

%Convert to grayscale
footBall=rgb2gray(footBall);

%Determine good padding


PQ = size(footBall);

%Create a Gaussian Lowpass filter 5% the width


D0 = 0.05*PQ(1);

H = lpfilter('gaussian', PQ(1), PQ(2), D0);

% Calculate the discrete Fourier transform of the image

F = fft2(double(footBall), size(H,1), size(H,2));
% Apply the lowpass filter to the Fourier spectrum of the image
LPFS_football = H.*F;

% Convert the result back to the spatial domain.

LPF_football = real(ifft2(LPFS_football));

% Crop the image to undo padding


LPF_football=LPF_football(1:size(footBall,1), 1:size(footBall,2));

%Display the Original and filterred image


figure
subplot(1,2,1)
imshow(footBall);
title('Original Image');

subplot(1,2,2)
imshow(LPF_football, [])
title('Filtered Image');

Output:

Applied Gaussian low pass filter:

Applied Ideal low pass filter:

Applied Butterworth low-pass filter:

Image enhancement using high-pass/sharpening filter techniques.

A high-pass filter can be used to make an image appear sharper. These filters emphasize fine details in the image, exactly the opposite of the low-pass filter. High-pass filtering works in exactly the same way as low-pass filtering; it just uses a different convolution kernel.

A high-pass filter passes high frequencies well but attenuates frequencies lower than the cut-off frequency. Sharpening is fundamentally a high-pass operation in the frequency domain. There are several standard forms of high-pass filters, such as the Ideal, Butterworth and Gaussian high-pass filters. A high-pass filter (Hhp) is often expressed through its relationship to the corresponding low-pass filter (Hlp):

Hhp = 1 - Hlp

Program:

hpfilter.m

function H = hpfilter(type, M, N, D0, n)


% HPFILTER Computes frequency domain highpass filters
% H = HPFILTER(TYPE, M, N, D0, n) creates the transfer function of a
% highpass filter, H, of the specified TYPE and size (M-by-N).
% Valid values for TYPE, D0, and n are:
%
% 'ideal'    Ideal highpass filter with cutoff frequency D0. n need not be supplied. D0 must be positive.
%
% 'btw'      Butterworth highpass filter of order n, and cutoff D0. The default value for n is 1.0. D0 must be positive.
%
% 'gaussian' Gaussian highpass filter with cutoff (standard deviation) D0. n need not be supplied. D0 must be positive.
%
% The transfer function Hhp of a highpass filter is 1 - Hlp, where Hlp is the transfer function of the
% corresponding lowpass filter. Thus, we can use function lpfilter to generate highpass filters.

if nargin == 4
n = 1; % Default value of n.
end

% Generate highpass filter.


Hlp = lpfilter(type, M, N, D0, n);
H = 1 - Hlp;

highpassfrequency.m

pkg load image;


footBall=imread('football.jpg');
%Convert to grayscale
footBall=rgb2gray(footBall);

%Determine good padding


PQ = size(footBall);

%Create a Highpass filter 5% the width


D0 = 0.05*PQ(1);
H = hpfilter('gaussian', PQ(1), PQ(2), D0);

% Calculate the discrete transform of the image


F=fft2(double(footBall),size(H,1),size(H,2));
% Apply the highpass filter to the image
HPFS_football = H.*F;

% Convert the result back to the spatial domain.


HPF_football=real(ifft2(HPFS_football));

% Crop the image to undo padding


HPF_football=HPF_football(1:size(footBall,1), 1:size(footBall,2));

%Display the "Original" and "Sharpened" image

figure
subplot(1,2,1)
imshow(footBall);
title('Original Image');

subplot(1,2,2)
imshow(HPF_football, [])
title('High-pass Filtered Image');

Output:
Applied Gaussian high pass filter:

Applied Ideal high pass filter:

Applied Butterworth high-pass filter:

Practical 6: Color Image Processing

The human visual system can distinguish hundreds of thousands of different color shades and
intensities, but only around 100 shades of grey. Therefore, in an image, a great deal of extra
information may be contained in the color, and this extra information can then be used to simplify
image analysis, e.g. object identification and extraction based on color.

Three independent quantities are used to describe any particular color. The hue is determined by the
dominant wavelength. Visible colors occur between about 400nm (violet) and 700nm (red) on the
electromagnetic spectrum, as shown in figure.

The saturation is determined by the excitation purity, and depends on the amount of white light mixed
with the hue. A pure hue is fully saturated, i.e. no white light mixed in. Hue and saturation together
determine the chromaticity for a given color. Finally, the intensity is determined by the actual amount
of light, with lighter corresponding to more intense colors.

Color model:

The purpose of a color model is to facilitate the specification of colors in some standard, generally
accepted way. In essence, a color model is a specification of a coordinate system and a subspace within
that system where each color is represented by a single point.

Types of color model:

1. The RGB Color Model


2. The CMY and CMYK Color Model
3. The HSI Color Model

Separating the RGB Channels:

Code:

pkg load image;

img = imread('rgbmixer.jpg'); % Read image


red = img(:,:,1); % Red channel
green = img(:,:,2); % Green channel
blue = img(:,:,3); % Blue channel

a = zeros(size(img, 1), size(img, 2), "uint8"); % All-zero channel, same class as the image


just_red = cat(3, red, a, a);
just_green = cat(3, a, green, a);
just_blue = cat(3, a, a, blue);
back_to_original_img = cat(3, red, green, blue);

figure, imshow(img),title('Original image')


figure, imshow(just_red), title('Red channel')
figure, imshow(just_green), title('Green channel')
figure, imshow(just_blue), title('Blue channel')
figure, imshow(back_to_original_img), title('Back to original image')

Output:

Original Image:

Red channel:

Green channel:

Blue channel:

Back to original image:

Pseudocolor Image Processing:

Pseudocolor image processing is an important tool in digital image processing, for example of breast images. Pseudocoloring consists of assigning colors to gray values based on a specific criterion. The term pseudocolor, or false color, is used to differentiate the process of assigning colors to monochrome images from the processes associated with true-color images. The first and foremost use of pseudocolor is for human visualization and interpretation of gray-scale events in an image or sequence of images.

Code:

#clc;
clear all;
pkg load image;
#im = input('Enter the file name: ');
input_image = imread('hawk.png');
k = rgb2gray(input_image);
[x y z] = size(k);
% z should be one for the input image
k = double(k);

m = zeros(x, y, 3);   % Preallocate the RGB output image

# Map each gray-level band to a different RGB offset

for i=1:x
  for j=1:y
    if k(i,j)>=0 && k(i,j)<50
      m(i,j,1)=k(i,j)+5;
      m(i,j,2)=k(i,j)+10;
      m(i,j,3)=k(i,j)+10;
    end

    if k(i,j)>=50 && k(i,j)<100
      m(i,j,1)=k(i,j)+35;
      m(i,j,2)=k(i,j)+28;
      m(i,j,3)=k(i,j)+10;
    end

    if k(i,j)>=100 && k(i,j)<150
      m(i,j,1)=k(i,j)+52;
      m(i,j,2)=k(i,j)+30;
      m(i,j,3)=k(i,j)+15;
    end

    if k(i,j)>=150 && k(i,j)<200
      m(i,j,1)=k(i,j)+50;
      m(i,j,2)=k(i,j)+40;
      m(i,j,3)=k(i,j)+25;
    end

    if k(i,j)>=200 && k(i,j)<=255
      m(i,j,1)=k(i,j)+120;
      m(i,j,2)=k(i,j)+60;
      m(i,j,3)=k(i,j)+45;
    end
  end
end
figure,
imshow(uint8(k),[]);
title('Original Image');

figure,
imshow(uint8(m),[]);
title("Pseudocolor filtered image");

Output:
Original Image:

Pseudo Filtered image:

Color Slicing using HSV color space:

Highlighting a specific range of colors in an image is useful for separating objects from their
surroundings.
• Color slicing is either to display the colors of interest so that they stand out from the
background or,
• Use the region defined by the colors as a mask for further processing.

The most straightforward approach is to extend the gray-level slicing techniques. Because a color pixel is an n-dimensional quantity, however, the resulting color transformation functions are more complicated than their gray-scale counterparts. In fact, the required transformations are more complex than the color component transforms considered thus far, because all practical color slicing approaches require each pixel's transformed color components to be a function of all n of that pixel's original color components.

One of the simplest ways to "slice" a color image is to map the colors outside some range of interest to a non-prominent neutral color.

Code:

pkg load image;

#Read the input image and display it
input_image = imread("baby1.jpg");
figure,
imshow(input_image);
title('Original Image');

#Convert the RGB image to HSV and display the hue channel with a colorbar
HSV = rgb2hsv(input_image);
H = HSV(:,:,1);
figure, imshow(H); colorbar;

#Threshold the hue channel at its mean: pixels with hue above the mean
#are set to 1, all others to 0

H = double(H > mean2(H));

#Put the sliced hue back, convert HSV to RGB and display the result

HSV(:,:,1) = H;
C = hsv2rgb(HSV);
figure, imshow(C); title('Sliced Image');

Output:
Original Image:

Sliced Image:

Practical 7: Image Compression Techniques and Watermarking

Image Compression Techniques:

Decreasing the irrelevance or redundancy of an image is the fundamental aim of image compression techniques, so that data can be stored and transmitted in an effective manner. The initial step in such a technique is to convert the image from its spatial-domain representation into a different representation using known transforms, and then to encode the transformed values, i.e., the coefficients. This allows much greater compression of the data compared with predictive techniques, though at the cost of greater computational requirements.

Compression is obtained by eliminating one or more of the following three fundamental data redundancies:

1. Coding redundancy: present when less-than-optimal (that is, longer than the smallest possible length) code words are used.
2. Inter-pixel redundancy: results from the correlations between the pixels of an image.
3. Psycho-visual redundancy: due to data that is ignored by the human visual system.

Why do we need compression?

1. Makes more storage space available.
2. Decreases the time needed for an image to be sent over the internet or downloaded from web pages.
3. Multimedia applications: desktop editing.
4. Image archiving: data from satellites.
5. Image transmission: data from the web.

Implement Huffman Coding:

Code:
pkg load communications

sig = repmat([3 3 1 3 3 3 3 3 2 3],1,50);  % Source sequence of 500 symbols
symbols = [1 2 3];                         % Alphabet of the source
p = [0.1 0.1 0.8];                         % Symbol probabilities
dict = huffmandict(symbols,p);             % Build the Huffman dictionary
hcode = huffmanenco(sig,dict);             % Encode the sequence
dhsig = huffmandeco(hcode,dict);           % Decode it again
isequal(sig,dhsig)                         % Verify the lossless round trip (prints 1)
binarySig = de2bi(sig);
seqLen = numel(binarySig)                  % Bits in the fixed-length representation
binaryhcode = de2bi(hcode);
encodedLen = numel(binaryhcode)            % Bits in the Huffman-encoded representation
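To see why this compresses, note that with these probabilities Huffman assigns a 1-bit code word to symbol 3 (p = 0.8) and 2-bit code words to symbols 1 and 2, for an average of 0.8*1 + 0.1*2 + 0.1*2 = 1.2 bits per symbol against 2 bits in the fixed-length representation; over the 500-symbol sequence that is 600 bits versus 1000, which is exactly what the encodedLen and seqLen values printed above show.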

Output:

Watermarking Techniques:

The methods and standards of Section 8.2 make the distribution of images on digital media and over the internet practical. Unfortunately, the images so distributed can be copied repeatedly and without error, putting the rights of their owners at risk.

Even when encrypted for distribution, images are unprotected after decryption. One way to discourage illegal duplication is to insert one or more items of information, collectively called a watermark, into potentially vulnerable images in such a way that the watermarks are inseparable from the images themselves. As integral parts of the watermarked images, they protect the rights of their owners in a variety of ways, including:

1. Copyright identification. Watermarks can provide information that serves as proof of ownership when the rights of the owner have been infringed.
2. User identification or fingerprinting. The identity of legal users can be encoded in watermarks and used to identify sources of illegal copies.
3. Authenticity determination. The presence of a watermark can guarantee that an image has not been altered, assuming the watermark is designed to be destroyed by any modification of the image.
4. Automated monitoring. Watermarks can be monitored by systems that track when and where images are used (e.g., programs that search the web for images placed on web pages). Monitoring is useful for royalty collection and/or the location of illegal users.
5. Copy protection. Watermarks can specify rules of image usage and copying (e.g., to DVD players).

Add a watermark to the input image:

Code:

pkg load image;

#Input image to which the watermark will be applied
f = imread('inputimage.jpg');

#For watermarking, the input image and the watermark image must have the same size,
#so we resize the input image and display it
fr = imresize(f, [560 560]);
figure; imshow(fr);
title('Original Image (resized)');

#Watermark image, resized to match
w = imread('watermark.jpeg');
wr = imresize(w, [560 560]);

#Blend the two images; alpha controls the watermark's visibility
alpha = 0.3;
fw = (1-alpha)*fr + alpha.*wr;

#Display the watermarked image

figure; imshow(fw);
title('Watermarked Image');

Output:
Original Image:

Watermarked Image:

Practical 8: Basic Morphological Transformations

Morphological image processing:

Morphological image processing is a collection of non-linear operations related to the shape or


morphology of features in an image. Morphological operations rely only on the relative ordering of
pixel values, not on their numerical values, and therefore are especially suited to the processing of
binary images. Morphological operations can also be applied to greyscale images such that their light
transfer functions are unknown and therefore their absolute pixel values are of no or minor interest.

Morphological techniques probe an image with a small shape or template called a structuring
element. The structuring element is positioned at all possible locations in the image and it is compared
with the corresponding neighbourhood of pixels. Some operations test whether the element "fits"
within the neighbourhood, while others test whether it "hits" or intersects the neighbourhood.

When dealing with binary images, one of the principal applications of morphology is in extracting
image components that are useful in the representation and description of shape. In particular, we
consider morphological algorithms for extracting boundaries, connected components, the convex hull,
and the skeleton of a region.

Boundary Extraction:

The boundary of a set A, denoted by β(A), can be obtained by first eroding A by B and then performing the set difference between A and its erosion. That is,

β(A) = A − (A ⊖ B)

where B is a suitable structuring element and ⊖ denotes erosion.

Thinning:

Thinning is a morphological operation that is used to remove selected foreground pixels from binary
images, somewhat like erosion or opening. It can be used for several applications, but is particularly
useful for skeletonization. In this mode it is commonly used to tidy up the output of edge detectors by
reducing all lines to single pixel thickness. Thinning is normally only applied to binary images, and
produces another binary image as output.

Thickening:

Thickening is the morphological dual of thinning and is defined by the expression

A ⊙ B = A ∪ (A ⊛ B)

where ⊛ denotes the hit-or-miss transform.

Hole Filling:

A hole may be defined as a background region surrounded by a connected border of foreground pixels.
We develop an algorithm based on set dilation, complementation, and intersection for filling holes in
an image.

Let A denote a set whose elements are 8-connected boundaries, each boundary enclosing a background region (i.e., a hole). Given a point in each hole, the objective is to fill all the holes with 1s. We begin by forming an array X0 of 0s (the same size as the array containing A), except at the locations in X0 corresponding to the given point in each hole, which we set to 1.

Then the following procedure fills all the holes with 1s:

Xk = (Xk−1 ⊕ B) ∩ Aᶜ,  k = 1, 2, 3, ...

where B is the symmetric structuring element and Aᶜ is the complement of A. The algorithm terminates at step k if Xk = Xk−1, and the set Xk ∪ A then contains the filled holes and their boundaries.

Code:

pkg load image;

#Read the input image
org_im = imread('morph.jpg');
figure 1,
subplot(1,2,1);
imshow(org_im);
title('Original Image');

#Convert the RGB image to a binary image and display it as the input image

bw_im = im2bw(rgb2gray(org_im));
subplot(1,2,2);
imshow(bw_im);
title('Original Image in Binary Format');

se = strel('disk',2,0);   # Disk-shaped structuring element of radius 2

# Perform dilation
mydilatedimg = imdilate(bw_im,se);

# Perform erosion
myderodeimg = imerode(bw_im,se);

#Find the internal boundary

#internal boundary = original image AND NOT of eroded image
int_boundary = bw_im & ~myderodeimg;

figure 2,
subplot(1,2,1);
imshow(int_boundary); title('Internal boundary');

#Find the external boundary

#external boundary = dilated image AND NOT of original image
ext_boundary = mydilatedimg & ~bw_im;
subplot(1,2,2);
imshow(ext_boundary,[]); title('External boundary');

#Perform thinning
thin_im = bwmorph(bw_im,'thin');
figure 3,
subplot(1,2,1);
imshow(thin_im); title('Thin Image');

#Perform thickening
thick_im = bwmorph(bw_im,'thicken');
subplot(1,2,2);
imshow(thick_im); title('Thick Image');

#Morphological gradient
im_gradiant = imsubtract(bw_im,myderodeimg);
figure 4,
subplot(1,2,1);
imshow(im_gradiant,[]); title('Gradient Image');

#Perform skeletonization with 8 iterations and display the result

skele_im = bwmorph(bw_im,'skel', 8);
subplot(1,2,2);
imshow(skele_im); title('Skeleton Image');
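Hole filling is described above but not exercised in this script; a minimal sketch using the image package's built-in implementation of the dilation-based algorithm:

filled_im = imfill(bw_im, 'holes');  # Fill background regions enclosed by foreground
figure, imshow(filled_im); title('Hole-Filled Image');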

Output:
Display Original Image and Binary format of Original Image:

Internal boundary and External boundary:

Thinning and Thickening Images:

Gradient Image and Skeleton Image:

