DATE OF PERFORMANCE
DATE OF CORRECTION
GRADE
            TIMELY       LAB
            CORRECTION   ATTENDANCE   PERFORMANCE   ORAL   TOTAL
ALLOTTED         2            2             3          3     10
OBTAINED
EXPERIMENT NO: 9
⮚ NAME OF EXPT:
Morphological operation – Erosion, dilation, opening, closing, hit-miss transform,
Boundary extraction
⮚ AIM OF EXPT:
Implementation of Morphological operation – Erosion, dilation, opening, closing, hit-
miss transform, Boundary extraction
⮚ EQUIPMENT/COMPONENTS:
IMAGE PROCESSING TOOLBOX (MATLAB/Python)
THEORY:
Morphology is a broad set of image processing operations that process images based on
shapes. Morphological operations apply a structuring element to an input image, creating an
output image of the same size. In a morphological operation, the value of each pixel in the
output image is based on a comparison of the corresponding pixel in the input image with its
neighbours. By choosing the size and shape of the neighbourhood, you can construct a
morphological operation that is sensitive to specific shapes in the input image. To understand morphological operations, a few concepts from set theory are needed. Let Z² denote the two-dimensional integer space, and let A be a subset of Z².
1. If a = (x, y) is an element of A, we write a ∈ A.
Dilation:
The basic effect of the operator on a binary image is to gradually enlarge the boundaries of
regions of foreground pixels (i.e. white pixels, typically). Thus areas of foreground pixels grow
in size while holes within those regions become smaller.
If at least one pixel in the structuring element coincides with a foreground pixel in the image
underneath, then the input pixel is set to the foreground value. If all the corresponding pixels
in the image are background, however, the input pixel is left at the background value.
With A and B as sets in Z², the dilation of A by B, denoted A ⊕ B, is defined as

A ⊕ B = { z | (B̂)z ∩ A ≠ ∅ }

This equation is based on reflecting B about its origin and shifting this reflection by z. The dilation of A by B is the set of all displacements z such that B̂ and A overlap by at least one element. Based on this interpretation, the equation can also be written as

A ⊕ B = { z | [ (B̂)z ∩ A ] ⊆ A }
Erosion:
The basic effect of the operator on a binary image is to erode away the boundaries of regions
of foreground pixels (i.e. white pixels, typically). Thus areas of foreground pixels shrink in size,
and holes within those areas become larger.
If, for every pixel in the structuring element, the corresponding pixel in the image underneath is a foreground pixel, then the input pixel is left as it is. If any of the corresponding pixels in the image are background, however, the input pixel is also set to the background value.
With A and B as sets in Z², the erosion of A by B, denoted A ⊖ B, is defined as

A ⊖ B = { z | (B)z ⊆ A }
Opening:
The basic effect of an opening is somewhat like erosion in that it tends to remove some of the
foreground (bright) pixels from the edges of regions of foreground pixels. However it is less
destructive than erosion in general. As with other morphological operators, the exact
operation is determined by a structuring element. The effect of the operator is to preserve
foreground regions that have a similar shape to this structuring element, or that can
completely contain the structuring element, while eliminating all other regions of foreground
pixels.
An opening is defined as erosion followed by a dilation using the same structuring element for
both operations.
All pixels which can be covered by the structuring element with the structuring element being
entirely within the foreground region will be preserved. However, all foreground pixels which
cannot be reached by the structuring element without parts of it moving out of the foreground
region will be eroded away.
The opening of set A by structuring element B, denoted A ∘ B, is defined as erosion followed by dilation:

A ∘ B = (A ⊖ B) ⊕ B
Closing:
Closing is similar in some ways to dilation in that it tends to enlarge the boundaries of
foreground (bright) regions in an image (and shrink background color holes in such regions),
but it is less destructive of the original boundary shape. As with other morphological
operators, the exact operation is determined by a structuring element. The effect of the
operator is to preserve background regions that have a similar shape to this structuring
element, or that can completely contain the structuring element, while eliminating all other
regions of background pixels.
Closing is opening performed in reverse. It is defined simply as dilation followed by erosion
using the same structuring element for both operations.
For any background boundary point, if the structuring element can be made to touch that
point, without any part of the element being inside a foreground region, then that point
remains background. If this is not possible, then the pixel is set to foreground. After the closing
has been carried out the background region will be such that the structuring element can be
made to cover any point in the background without any part of it also covering a foreground
point, and so further closings will have no effect.
The closing of set A by structuring element B, denoted A • B, is defined as dilation followed by erosion:

A • B = (A ⊕ B) ⊖ B
ALGORITHM:
1. Read an image and store it in variable.
2. Enter structuring element and store it in variable.
3. Compute size of image and store it in r and c.
4. Convert 2D structuring element into 1D array.
5. Perform dilation.
a. Compare neighboring elements of each pixel of image with 1D array.
b. If any element matches with the respective element in array, make center pixel 1.
c. Else leave center pixel as it is.
d. Do this for all possible pixel values of given image.
e. Store result of each iteration in another matrix.
f. Display resultant matrix.
6. Perform erosion.
a. Compare neighboring elements of each pixel of image with 1D array.
b. If all elements match with respective elements in array, make center pixel 1.
c. Else make every pixel in neighborhood and center 0.
d. Do this for all possible pixel values of given image.
e. Store result of each iteration in another matrix.
f. Display resultant matrix.
7. Perform Opening.
a. Perform erosion on the given image.
b. Perform dilation on the resultant image.
c. Display final resultant image.
8. Perform Closing.
a. Perform dilation on the given image.
b. Perform erosion on the resultant image.
c. Display final resultant image.
9. Perform all above operations using in-built commands and display result alongside.
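The dilation and erosion steps above can be sketched directly in NumPy (a minimal sketch of the neighborhood-comparison idea; the function names, test image, and 3×3 structuring element are our own choices, not toolbox code):

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: output pixel is 1 if ANY SE element hits a foreground 1."""
    r, c = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k)                  # zero-pad the borders
    out = np.zeros_like(img)
    for i in range(r):
        for j in range(c):
            window = padded[i:i + se.shape[0], j:j + se.shape[1]]
            out[i, j] = 1 if np.any(window[se == 1]) else 0
    return out

def erode(img, se):
    """Binary erosion: output pixel is 1 only if ALL SE elements hit foreground 1s."""
    r, c = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k)
    out = np.zeros_like(img)
    for i in range(r):
        for j in range(c):
            window = padded[i:i + se.shape[0], j:j + se.shape[1]]
            out[i, j] = 1 if np.all(window[se == 1]) else 0
    return out

img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 1                            # a 3x3 square of foreground
se = np.ones((3, 3), dtype=np.uint8)
print(dilate(img, se).sum())                 # foreground grows to 5x5 = 25 pixels
print(erode(img, se).sum())                  # foreground shrinks to the 1 center pixel
```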
CONCLUSION:
Erosion can also be used to remove small spurious bright spots ('salt noise') in images. We can
also use erosion for edge detection by taking the erosion of an image and then subtracting it
away from the original image, thus highlighting just those pixels at the edges of objects that
were removed by the erosion. Finally, erosion is also used as the basis for many other
mathematical morphology operators.
Dilation is the dual of erosion i.e. dilating foreground pixels is equivalent to eroding the
background pixels. Dilation can also be used for edge detection by taking the dilation of an
image and then subtracting away the original image, thus highlighting just those new pixels at
the edges of objects that were added by the dilation.
Opening isolates the objects in an image which may be just touching one another.
Closing is used to fill gaps and helps to fuse narrow breaks in an image.
REFERENCES:
PROGRAM:
1) Erosion & Dilation
# Python program to demonstrate erosion
# and dilation of images.
import cv2
import numpy as np

# NOTE: the input filename and kernel size below are placeholders;
# the original listing did not preserve them.
img = cv2.imread('input.png', 0)
kernel = np.ones((5, 5), np.uint8)
img_erosion = cv2.erode(img, kernel, iterations=1)
img_dilation = cv2.dilate(img, kernel, iterations=1)

cv2.imshow('Input', img)
cv2.imshow('Erosion', img_erosion)
cv2.imshow('Dilation', img_dilation)
cv2.waitKey(0)
Output:
2) Opening
import cv2
import numpy as np

img = cv2.imread('input.png', 0)          # placeholder filename (not preserved)
kernel = np.ones((5, 5), np.uint8)
opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
cv2.imshow('Opening', opening)
cv2.waitKey(0)
3) Closing
import cv2
import numpy as np

img = cv2.imread('input.png', 0)          # placeholder filename (not preserved)
kernel = np.ones((5, 5), np.uint8)
closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
cv2.imshow('Closing', closing)
cv2.waitKey(0)
Output:
(Permanently Affiliated to University of Mumbai)
Department of Electronics and Telecommunication Engineering

4) Hit-miss transform
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt

# creating region as a numpy.ndarray
regions = np.zeros((10, 10), bool)
regions[3:7, 3:7] = True             # placeholder foreground (not preserved)

# structuring element (as given in the original listing)
structure = np.array([[0, 1, 1],
                      [0, 1, 1],
                      [0, 1, 1]])

img = ndimage.binary_hit_or_miss(regions, structure1=structure)

# showing image
print("Image after hit-miss transform")
plt.imshow(img)
plt.show()
Output:
(Left: input image. Right: image after hit-miss transform.)
5) Boundary extraction
import numpy as np
import cv2
from matplotlib import pyplot as plt

image = cv2.imread('letter_A.jpg', 0)
retVal, mask = cv2.threshold(image, 155, 255, cv2.THRESH_BINARY_INV)
kernel = np.ones((7, 7), np.uint8)
gradient = cv2.morphologyEx(mask, cv2.MORPH_GRADIENT, kernel)

titles = ['Original Image', 'Binary Image', 'Morphological gradient']
images = [image, mask, gradient]
plt.figure(figsize=(13, 5))
for i in range(3):
    plt.subplot(1, 3, i + 1)
    plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([])
    plt.yticks([])
plt.tight_layout()
plt.show()
Output:
EXPERIMENT NUMBER 10
DATE OF PERFORMANCE
DATE OF CORRECTION
GRADE
            TIMELY       LAB
            CORRECTION   ATTENDANCE   PERFORMANCE   ORAL   TOTAL
ALLOTTED         2            2             3          3     10
OBTAINED
EXPERIMENT NO: 10
⮚ NAME OF EXPT:
Detect Edge Using Canny Edge Detection
⮚ AIM OF EXPT:
Detect Edge Using Canny Edge Detection
⮚ EQUIPMENT/COMPONENTS:
IMAGE PROCESSING TOOLBOX (MATLAB/Python)
THEORY:
The Canny edge detector is an edge detection operator that uses a multi- stage
algorithm to detect a wide range of edges in images.
The Canny filter is a multi-stage edge detector. It uses a filter based on the derivative
of a Gaussian in order to compute the intensity of the gradients. The Gaussian reduces
the effect of noise present in the image. Then, potential edges are thinned down to 1-
pixel curves by removing non-maximum pixels of the gradient magnitude. Finally, edge
pixels are kept or removed using hysteresis thresholding on the gradient magnitude.
The Canny detector has three adjustable parameters: the width of the Gaussian (the noisier the image, the greater the width), and the low and high thresholds for the hysteresis thresholding.
The general criteria for edge detection include:
1. Detection of edge with low error rate, which means that the detection should
accurately catch as many edges shown in the image as possible
2. The edge point detected from the operator should accurately localize on the center of
the edge.
3. A given edge in the image should only be marked once, and where possible, image noise
should not create false edges.
ALGORITHM:
1) Apply a Gaussian filter to smooth the image and remove noise.
2) Find the intensity gradients (magnitude and direction) of the image.
3) Apply non-maximum suppression to thin the edges to 1-pixel curves.
4) Apply double (hysteresis) thresholding to classify strong and weak edge pixels.
5) Track edges by hysteresis: keep weak edge pixels only if they connect to strong ones.
PROGRAM:
import numpy as np
import os
import cv2
import matplotlib.pyplot as plt
from google.colab import files
uploaded = files.upload()
# In y-axis direction
elif grad_ang>(22.5 + 45) and grad_ang<=(22.5 + 90):
neighb_1_x, neighb_1_y = i_x, i_y-1
neighb_2_x, neighb_2_y = i_x, i_y + 1
mag[i_y, i_x]= 0
continue
weak_ids = np.zeros_like(img)
strong_ids = np.zeros_like(img)
ids = np.zeros_like(img)
if grad_mag<weak_th:
mag[i_y, i_x]= 0
elif strong_th>grad_mag>= weak_th:
ids[i_y, i_x]= 1
else:
ids[i_y, i_x]= 2
#img=Image.open(BytesIO(uploaded['outimage.jpg']))
#img = cv2.imread('lion.jpg')
#frame = cv2.imread('test.jpeg')
frame = cv2.imread('outimage.jpg')
plots[0].imshow(frame)
plots[1].imshow(canny_img)
EXPERIMENT NUMBER 11
DATE OF PERFORMANCE
DATE OF CORRECTION
GRADE
            TIMELY       LAB
            CORRECTION   ATTENDANCE   PERFORMANCE   ORAL   TOTAL
ALLOTTED         2            2             3          3     10
OBTAINED
EXPERIMENT NO: 11
⮚ NAME OF EXPT:
Chain Code
⮚ AIM OF EXPT:
Generate 8 neighbor Chain Code
⮚ EQUIPMENT/COMPONENTS:
IMAGE PROCESSING TOOLBOX (MATLAB/Python)
THEORY:
Chain code is a lossless compression technique used for representing an object in
images. The co-ordinates of any continuous boundary of an object can be
represented as a string of numbers where each number represents a particular
direction in which the next point on the connected line is present. One point is
taken as the reference/starting point and on plotting the points generated from
the chain, the original figure can be re-drawn.
Chain codes are used to represent a boundary by a connected sequence of straight-line segments. This representation is based on 4-connectivity or 8-connectivity of the segments.
The chain code works best with binary images and is a concise way of representing a
shape contour. The chain code direction convention is given below:
As an edge is traced from its beginning point to the end point the direction that must
be taken to move from one pixel to the next is given by the number represented in
either the 4-chain code or the 8-chain code.
An edge can be completely described in terms of its starting coordinate and its sequence of chain-code descriptors. Of the two chain codes, the 4-chain code is simpler, requiring only four different code values.
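As a tiny illustration of the 8-connectivity case, assuming directions are numbered 0–7 counter-clockwise from East with y increasing upward (conventions vary between textbooks, so treat this mapping as one choice):

```python
# 8-connectivity chain-code lookup: map a (dx, dy) step between
# successive boundary pixels to a direction code 0..7.
# Assumed convention: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
directions = {
    (1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
    (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7,
}

def chain_code(points):
    """Chain code for a list of 8-connected (x, y) boundary points."""
    return [directions[(b[0] - a[0], b[1] - a[1])]
            for a, b in zip(points, points[1:])]

# a small closed path around a unit square, traced counter-clockwise
path = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(chain_code(path))   # [0, 2, 4, 6]
```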
ALGORITHM:
THEORETICAL CALCULATION:
CONCLUSION:
REFERENCES:
PROGRAM:
# Python program to generate the chain code of a line, using
# Bresenham's line algorithm to list the pixels between two points.
# (The start of this listing was truncated; the codeList table and the
# getChainCode helper are restored to match the surviving lines.)

# 8-direction codes indexed by hashKey = 3*dy + dx + 4
codeList = [5, 6, 7, 4, -1, 0, 3, 2, 1]

def getChainCode(x1, y1, x2, y2):
    dx = x2 - x1
    dy = y2 - y1
    hashKey = 3 * dy + dx + 4
    return codeList[hashKey]

def generateChainCode(ListOfPoints):
    chainCode = []
    for i in range(len(ListOfPoints) - 1):
        a = ListOfPoints[i]
        b = ListOfPoints[i + 1]
        chainCode.append(getChainCode(a[0], a[1], b[0], b[1]))
    return chainCode

def Bresenham2D(x1, y1, x2, y2):
    ListOfPoints = []
    ListOfPoints.append([x1, y1])
    xdif = x2 - x1
    ydif = y2 - y1
    dx = abs(xdif)
    dy = abs(ydif)
    xs = 1 if xdif > 0 else -1
    ys = 1 if ydif > 0 else -1
    if dx > dy:
        # Driving axis is the X-axis
        p = 2 * dy - dx
        while x1 != x2:
            x1 += xs
            if p >= 0:
                y1 += ys
                p -= 2 * dx
            p += 2 * dy
            ListOfPoints.append([x1, y1])
    else:
        # Driving axis is the Y-axis
        p = 2 * dx - dy
        while y1 != y2:
            y1 += ys
            if p >= 0:
                x1 += xs
                p -= 2 * dy
            p += 2 * dx
            ListOfPoints.append([x1, y1])
    return ListOfPoints

def DriverFunction():
    (x1, y1) = (-9, -3)
    (x2, y2) = (10, 1)
    ListOfPoints = Bresenham2D(x1, y1, x2, y2)
    chainCode = generateChainCode(ListOfPoints)
    chainCodeString = "".join(str(e) for e in chainCode)
    print('Chain code for the straight line from', (x1, y1),
          'to', (x2, y2), 'is', chainCodeString)

DriverFunction()
OUTPUT:
EXPERIMENT NUMBER 12
DATE OF PERFORMANCE
DATE OF CORRECTION
GRADE
            TIMELY       LAB
            CORRECTION   ATTENDANCE   PERFORMANCE   ORAL   TOTAL
ALLOTTED         2            2             3          3     10
OBTAINED
EXPERIMENT NO: 12
⮚ NAME OF EXPT:
Digit Recognition using Multi-Layer Perceptron
⮚ AIM OF EXPT:
Digit Recognition using Multi-Layer Perceptron
⮚ EQUIPMENT/COMPONENTS:
IMAGE PROCESSING TOOLBOX (MATLAB/Python)
THEORY:
To implement a multilayer perceptron neural network for recognition of the dot-matrix digits 6, 8, and 9, assuming each dot matrix is of size 7×5, and to relate this practical application of neural networks to the concepts and methods studied in theory.
Six =   [ 0 1 1 1 0
          1 0 0 0 1
          1 0 0 0 0
          1 1 1 1 0
          1 0 0 0 1
          1 0 0 0 1
          0 1 1 1 0 ]

Eight = [ 0 1 1 1 0
          1 0 0 0 1
          1 0 0 0 1
          0 1 1 1 0
          1 0 0 0 1
          1 0 0 0 1
          0 1 1 1 0 ]

Nine =  [ 0 1 1 1 0
          1 0 0 0 1
          1 0 0 0 1
          0 1 1 1 1
          0 0 0 0 1
          1 0 0 0 1
          0 1 1 1 0 ]
ALGORITHM:
1. Training: Define the digit matrices, and train the network using relevant
parameters.
CONCLUSION:
Comment on the noise tolerance, ease of design and speed of MLP.
PROGRAM:
%program for digit recognition
clc;
clear all;
close all;
echo on
pause;
six_m=[0 1 1 1 0;
1 0 0 0 1;
1 0 0 0 0;
1 1 1 1 0;
1 0 0 0 1;
1 0 0 0 1;
0 1 1 1 0];
imshow(~six_m,'InitialMagnification', 5000);
pause;
eight_m=[0 1 1 1 0;
1 0 0 0 1;
1 0 0 0 1;
0 1 1 1 0;
1 0 0 0 1;
1 0 0 0 1;
0 1 1 1 0];
imshow(~eight_m,'InitialMagnification', 5000);
pause;
nine_m=[0 1 1 1 0;
1 0 0 0 1;
1 0 0 0 1;
0 1 1 1 1;
0 0 0 0 1;
1 0 0 0 1;
0 1 1 1 0];
imshow(~nine_m,'InitialMagnification', 5000);
pause;
six=reshape(six_m',[35,1]);
eight=reshape(eight_m',[35,1]);
nine=reshape(nine_m',[35,1]);
I=[six,eight,nine];
T=eye(3);
net=newff(I,T,[20,20],{'logsig','tansig','tansig'});
pause
net.performFcn='mse';
net.trainParam.goal=0.001;
net.trainParam.show=20;
net.trainParam.epochs=50;
[net,tr]=train(net,I,T);
disp('End of training');
% simulate the trained network on a test vector and pick the winning class
% (the test vector here is a placeholder; the original test code was lost)
test_no=nine;
result=sim(net,test_no);
[~,result_char]=max(result);
clc;
close all;
J=I(:, result_char);
J=reshape(J,[5,7]);
J=J';
figure;
imshow(~J, 'InitialMagnification', 5000);
%imshow(imcomplement(J),'InitialMagnification',5000);
title('detected digit');
EXPECTED OUTPUT:
test_no = [ 0 .3  1 .1  0
            1  0  0  0  1
            1  0  0  0  1
            0  1  1  1  1
            0  0  0  0  1
            1  0  0  0  1
            0  1 .2  1  0 ]
Noisy data of digit nine from the testing data set, in complemented form.
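The same experiment can be sketched in Python with scikit-learn's MLPClassifier (an illustrative sketch, not the newff network above; the hidden layers mirror the [20,20] configuration in the program, while the solver settings and the noisy test pixel are our own choices):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# the 7x5 dot-matrix digits from the experiment, flattened to 35-vectors
six = np.array([[0,1,1,1,0],[1,0,0,0,1],[1,0,0,0,0],[1,1,1,1,0],
                [1,0,0,0,1],[1,0,0,0,1],[0,1,1,1,0]]).ravel()
eight = np.array([[0,1,1,1,0],[1,0,0,0,1],[1,0,0,0,1],[0,1,1,1,0],
                  [1,0,0,0,1],[1,0,0,0,1],[0,1,1,1,0]]).ravel()
nine = np.array([[0,1,1,1,0],[1,0,0,0,1],[1,0,0,0,1],[0,1,1,1,1],
                 [0,0,0,0,1],[1,0,0,0,1],[0,1,1,1,0]]).ravel()

X = np.stack([six, eight, nine]).astype(float)
y = np.array([6, 8, 9])

# two hidden layers of 20 units, as in the [20,20] network of the program
clf = MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
clf.fit(X, y)

# test with a slightly noisy "nine": perturb one pixel, as in the
# noisy test matrix above
noisy_nine = nine.astype(float)
noisy_nine[1] = 0.3
print(clf.predict([noisy_nine])[0])
```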
EXPERIMENT NUMBER 13
CLASSIFICATION OF IRIS FLOWER DATASET
USING SVM CLASSIFIER USING PYTHON
EXPERIMENT NAME
DATE OF PERFORMANCE
DATE OF CORRECTION
GRADE
            TIMELY       LAB
            CORRECTION   ATTENDANCE   PERFORMANCE   ORAL   TOTAL
ALLOTTED         2            2             3          3     10
OBTAINED
EXPERIMENT NO: 13
⮚ NAME OF EXPT:
Classification of given dataset Using SVM Classifier using Python
⮚ AIM OF EXPT:
Classification of Iris Flower dataset Using SVM Classifier using Python
⮚ EQUIPMENT/COMPONENTS:
IMAGE PROCESSING TOOLBOX (MATLAB/Python)
THEORY:
Machine Learning is about learning to predict something or extracting knowledge from data. ML is a part of artificial intelligence. ML algorithms build a model based on sample data, known as training data, and based upon the training data the algorithm can predict something on new data.

Categories of Machine Learning:
Supervised machine learning: types of machine learning that are trained on well-labeled training data. Labeled data means the training data is already tagged with the correct output.
Unsupervised machine learning: unlike supervised learning, unsupervised learning doesn't have any tagged data. It learns patterns from untagged data. Basically, it creates groups of objects based on the input data/features.
Semi-supervised machine learning: semi-supervised learning falls between supervised and unsupervised learning. It has a small amount of tagged data and a large amount of untagged data.

1. … We will solve this classification problem using a supervised learning approach, with an algorithm called "Support Vector Machine".
2. Recommendation engine: using the past behavior of a user's search data, a recommendation engine can produce new suggestions to cross-sell.
3. Chatbot: chatbots are used to provide customer service without any human agent. A chatbot takes questions from users and, based on the question, gives an answer as a response.

NumPy will be used for any computational operations. We'll use Matplotlib and seaborn for data visualization. Pandas helps to load data from various sources like local storage, a database, an Excel file, a CSV file, etc.
From this visualization, we can tell that iris-setosa is well separated from the other two
flowers. And iris virginica is the longest flower and iris setosa is the shortest. Now let’s
plot the average of each feature of each class.
Since we have already done a general analysis of this data in earlier lectures, let's go
ahead and move on to using the Naive Bayes Method to separate this data set into
multiple classes.
● create and fit the model
● continue by separating into training and testing sets:
● fit the model using the training data set:
● predict the outcomes from the Testing Set:
CONCLUSION:
The Iris flower dataset is classified using a Gaussian Naive Bayes model with 97.36% accuracy.
Program:
#DataFlair Iris Flower Classification
# Import Packages
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from pandas import Series,DataFrame
from sklearn import datasets
from sklearn import metrics
from sklearn.naive_bayes import GaussianNB
%matplotlib inline
# load the iris datasets
iris = datasets.load_iris()
# Grab features (X) and the Target (Y)
X = iris.data
Y = iris.target
# Show the Built-in Data Description
print(iris.DESCR)
iris = datasets.load_iris()
# Since this is a bunch, create a dataframe
iris_df=pd.DataFrame(iris.data)
iris_df['class']=iris.target
sns.pairplot(iris_df, hue='class')
# separate into training and testing sets, create and fit the model,
# then predict on the testing set (restoring the steps listed above;
# the split parameters are assumptions)
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
model = GaussianNB()
model.fit(X_train, Y_train)
predicted = model.predict(X_test)
print(metrics.accuracy_score(Y_test, predicted))
Output:
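The aim calls for an SVM classifier, while the program above uses Gaussian Naive Bayes; a minimal SVM sketch with scikit-learn for comparison (the split parameters and default RBF kernel are our own choices):

```python
from sklearn import datasets, metrics
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

iris = datasets.load_iris()

# separate into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

# fit a support vector machine (RBF kernel by default)
model = SVC()
model.fit(X_train, y_train)

# predict the outcomes from the testing set
predicted = model.predict(X_test)
print(metrics.accuracy_score(y_test, predicted))
```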