
MINI PROJECT – SSS LAB

T.MAHESH KUMAR
ROLL NO – 1602-17-735-027
Abstract

In this research, a new vision system for the recognition of vegetables in images has been developed. The system is built on image processing, which supports the classification, qualification and segmentation of images, and is intended as a recognition system for supermarkets and grocery stores. From the captured images, multiple recognition cues such as colour, shape, size, texture and weight are extracted and analysed to classify and recognize the vegetables. The results show that the system is robust and has a very high success rate. An important characteristic of the proposed algorithm is that it is able to work with several items inside the camera's field of view. This adds flexibility to the application, allowing it to work in the field or in a greenhouse, where the items are very close to each other and the locations of all of them must be obtained in real time. This approach is less complex and considerably faster than other approaches. If the system can identify each item uniquely, the checkout process at superstores will be fast and efficient.

Image segmentation partitions an image into meaningful regions with respect to a particular application. Object recognition is the task of finding a given object in an image or video sequence.
CONTENTS

List of figures

1) Topic 1: Introduction

1.1: Definitions

2) Topic 2: Problem Statement

3) Topic 3: Working

4) Topic 4: Applications

5) Topic 5: Methodology

5.1: Flow Chart

5.2: MATLAB Coding

6) Topic 6: Conclusion
List of figures

Figure no.   Name

3.1          Original Image

3.2          Reference Image

3.3          Extracted Image

3.4          Flow Chart


1.Introduction

We present an automatic recognition system (vegetable vision) to facilitate the checkout process at supermarkets and grocery stores. Vegetable vision is carried out through image processing and image analysis, and the image processing is done in MATLAB. The system consists of integrated measurement and imaging technology with a user-friendly interface. When a vegetable is brought to the checkout point, an image is taken and a variety of features such as color, shape, size, density and texture are extracted. These features are compared to stored data, and depending on the certainty of the classification and recognition, the final decision is made by the system.

Vegetable quality is frequently judged by size, shape, mass, firmness, color and bruises, from which the produce can be classified and sorted. The classification technique is used to recognize the vegetable's shape, size, color and texture at a single glance. Digital image classification uses the spectral information represented by the digital numbers in one or more spectral bands and attempts to classify each individual pixel based on this spectral information.

Image segmentation is the foundation of object recognition and computer vision. Some additional work, such as region extraction and image marking, is carried out after the main segmentation step in order to obtain a better visual result. Two major computer vision problems, image segmentation and object recognition, have traditionally been dealt with in a strict, bottom-up order. The interaction between segmentation and object recognition using the above techniques is studied in this project.

1.1 Definitions
The following are the definitions relevant to this project:

1.1.1 Computer Vision


Computer vision is a field in Computer Science that includes
methods for acquiring, processing, analyzing, and
understanding images from the real world in order to produce
numerical or symbolic information. A theme in the
development of this field has been to duplicate the abilities of
human vision by electronically perceiving and understanding
an image. Computer vision has applications such as object detection, object recognition and object classification.
1.1.2 Detection:
Many points in an image have a nonzero value for the
gradient, and not all of these points are edges for a particular
application. Thresholding is used for the detection of edge
points.
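As a minimal sketch of this idea, assuming the vegetable image is available as a JPEG file (the file name and threshold value below are illustrative only), gradient-based edge detection can be tried in MATLAB as follows:

% Gradient-based edge detection sketch (illustrative file name and threshold)
I = im2double(rgb2gray(imread('vegetableimage.jpg')));
[Gmag, ~] = imgradient(I);   % gradient magnitude at every pixel
edges = Gmag > 0.2;          % keep only the points above the threshold
figure, imshow(edges)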
1.1.3 Feature Extraction
When the input data to an algorithm is too large to be processed and is suspected to be highly redundant, it is transformed into a reduced representation set of features, also called a feature vector. Transforming the input data into this set of features is called feature extraction. The features used in this project are aspect ratio, height, width, elongation, skewness, compactness, orientation, area, perimeter and extent.
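A minimal sketch of how a few of these features can be obtained with regionprops is given below; the file name is illustrative, and derived features such as aspect ratio and compactness are computed from the basic measurements.

% Feature extraction sketch (illustrative file name)
bw = im2bw(rgb2gray(imread('vegetableimage.jpg')));   % binary mask of the scene
s  = regionprops(bw, 'Area', 'Perimeter', 'BoundingBox', 'Orientation', 'Extent');
BB = s(1).BoundingBox;                                % [x y width height]
aspectRatio = BB(3) / BB(4);                          % width divided by height
compactness = s(1).Perimeter^2 / (4*pi*s(1).Area);    % near 1 for a circular region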
1.1.4 Object recognition:
It is the task of finding a given object in an image or video
sequence. For any object in an image,
there are many 'features' which are interesting points on the
object that can be extracted to provide a "feature" description
of the object.
1.1.5 Image segmentation:
It is the process of partitioning a digital image into multiple meaningful regions (sets of pixels) with respect to a particular application. The segmentation is based on measurements taken from the image, which might be grey level, colour, texture, depth or motion. The result of image segmentation is a set of segments that collectively cover the entire image.
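A minimal segmentation sketch using a global Otsu threshold is shown below; the file name is illustrative and graythresh chooses the threshold automatically.

% Threshold-based segmentation sketch (illustrative file name)
I     = rgb2gray(imread('vegetableimage.jpg'));
level = graythresh(I);       % Otsu threshold, normalized to [0, 1]
bw    = im2bw(I, level);     % binary segmentation of the scene
figure, imshow(bw)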
2.Problem Statement
The task here is to automatically detect and classify vegetables in an image processed in MATLAB. It is assumed that several vegetables are present in the image and that some of them overlap one another. The task is to separate the overlapping regions and recognize each vegetable. In this project, the vegetable images used are loaded into MATLAB.

3.Working :
The steps of the process are illustrated here with a short discussion of each step. Each step is carried out using built-in MATLAB functions. The steps involved in the process are as follows:

a) Read the image

The image is first read into the MATLAB workspace using the imread function, which loads the image from a file into a matrix in the workspace.
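A short sketch of this step is given below; the file names are illustrative and assume the reference and test images are stored as JPEG files.

% Step (a) sketch: read the reference and test images (illustrative file names)
a = imread('Whiteimage.jpg');       % white reference image
b = imread('vegetableimage.jpg');   % captured vegetable image
figure, imshow(b)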
b) Taking a reference image:

We must have one reference image against which the original image is compared for feature extraction.

Fig 3.2: Reference image


c) Convert the image to black and white

The image is converted to black and white using the function im2bw, which converts a grayscale image to a binary image. Each pixel is compared against a normalized threshold that lies in the range [0, 1] and is set to either 0 or 1.
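Continuing the sketch from step (a), the conversion can be written as follows; the explicit threshold of 0.5 is illustrative (im2bw also works with its default threshold).

% Step (c) sketch: grayscale conversion followed by binarization
c = rgb2gray(b);       % RGB to grayscale
d = im2bw(c, 0.5);     % each pixel becomes 0 or 1 (threshold is illustrative)
figure, imshow(d)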
d) Morphological operation:

The strel('disk', R, N) function creates a flat, disk-shaped structuring element, where R specifies the radius. R must be a nonnegative integer and N must be 0, 4, 6 or 8. When N is greater than 0, the disk-shaped structuring element is approximated by a sequence of N periodic-line structuring elements. When N equals 0, no approximation is used, and the structuring element members consist of all pixels whose centers are no greater than R away from the origin. If N is not specified, the default value is 4.
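A small sketch of this step, continuing from the binary image d above, is shown below; the radius of 6 matches the value used in the full code of Section 5.2.

% Step (d) sketch: disk-shaped structuring element and dilation
se       = strel('disk', 6);    % same as strel('disk', 6, 4), the default N
frameout = imdilate(d, se);     % dilate the binary image from step (c)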
e) Bounding box extraction

We do this by constructing a bounding box. The bounding box is used to isolate the area of interest in the image and is similar to window (or key-region) processing. It can easily be obtained by requesting the 'BoundingBox' property from the regionprops function. Feature extraction is used to extract the features and locate the position of each object in the image, which is estimated using functions such as regionprops.
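The sketch below, continuing from frameout above, labels the regions and draws the bounding box of the first one; the red edge colour is illustrative.

% Step (e) sketch: label regions and draw a bounding box
[L, n] = bwlabel(frameout, 8);                 % label the connected regions
s = regionprops(L, 'BoundingBox', 'Centroid', 'Area', 'Perimeter');
BB = s(1).BoundingBox;                         % [x y width height]
figure, imshow(b)
rectangle('Position', BB, 'EdgeColor', 'r');   % box around the first region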
f) Calculating pixel value and circularity:

After constructing the bounding box, the centroid returned by regionprops is used to read the pixel value with the built-in impixel command. The circularity, computed as Perimeter^2 / (4*pi*Area), is then used together with the pixel value to distinguish the different vegetables, and imcrop is used to crop out the matched region.
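Continuing from the regionprops output above, the sketch below samples the centroid colour and computes circularity; the tomato-like thresholds are purely illustrative.

% Step (f) sketch: centroid colour and circularity (illustrative thresholds)
C   = s(1).Centroid;
Pi  = impixel(b, C(1), C(2));                  % [R G B] value at the centroid
Cir = s(1).Perimeter^2 / (4*pi*s(1).Area);     % circularity, near 1 for a circle
if Pi(1) > 200 && Pi(2) < 60 && Cir < 1.2      % illustrative colour/shape test
    p = imcrop(b, s(1).BoundingBox);
    figure, imshow(p), title('TOMATO')
end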
4. Applications:

This technique using MATLAB code can be used for various purposes, such as the following:

• Locate tumors and other pathologies
• Measure tissue volumes
• Surgery planning
• Face recognition
• Fingerprint recognition
• Iris recognition, and many more.
5. METHODOLOGY:

This section presents the technology inputs and processing steps needed for vegetable identification and recognition, which are the key stages of the identification and recognition process. Each of these steps is described in detail below.
5.2 MATLAB Coding:
clear all
close all
clc

% Read the white reference image and the captured vegetable image
a = imread('C:\Users\Desktop\Whiteimage.jpg');
b = imread('C:\Users\Desktop\vegetableimage.jpg');

% Convert the vegetable image to grayscale and then to binary
c = rgb2gray(b);
d = im2bw(c);

% Convert the reference image to binary as well so both operands have the
% same size, then subtract so the vegetable regions become the foreground
aw  = im2bw(rgb2gray(a));
out = im2double(aw) - im2double(d);

% Morphological dilation with a disk-shaped structuring element
se       = strel('disk', 6);
frameout = imdilate(out, se);

% Label the connected regions and measure their properties
[L, n] = bwlabel(frameout, 8);
s = regionprops(L, 'Area', 'Centroid', 'BoundingBox', 'Perimeter');

imshow(b)
for i = 1:n
    % Region properties of the i-th object
    C  = s(i,1).Centroid;
    BB = s(i,1).BoundingBox;
    P  = s(i,1).Perimeter;
    A  = s(i,1).Area;

    xcorner = BB(1);
    ycorner = BB(2);
    xwidth  = BB(3);
    ywidth  = BB(4);
    Centroidx = C(1);
    Centroidy = C(2);

    % Draw the bounding box and mark the centroid on the displayed image
    rect = [xcorner ycorner xwidth ywidth];
    rectangle('Position', rect, 'EdgeColor', 'r');
    hold on
    plot(Centroidx, Centroidy, 'b*');
    hold off

    % Sample the RGB value at the centroid and compute circularity
    Pi  = impixel(b, Centroidx, Centroidy);
    Cir = P^2/(4*pi*A);
    if (235<Pi(1)) && (Pi(1)<239) && (25<Pi(2)) && (Pi(2)<29) && (34<Pi(3)) && (Pi(3)<38) && (1.0<Cir) && (Cir<1.2)
        p = imcrop(b, rect);
        figure, imshow(p)
        title('TOMATO');
    elseif (188<Pi(1)) && (Pi(1)<191) && (36<Pi(2)) && (Pi(2)<40) && (44<Pi(3)) && (Pi(3)<46) && (2.6<Cir) && (Cir<2.8)
        p = imcrop(b, rect);
        figure, imshow(p)
        title('CHILLY');
    elseif (119<Pi(1)) && (Pi(1)<123) && (34<Pi(2)) && (Pi(2)<38) && (74<Pi(3)) && (Pi(3)<76) && (1.3<Cir) && (Cir<1.4)
        p = imcrop(b, rect);
        figure, imshow(p)
        title('BRINJAL');
    elseif (249<Pi(1)) && (Pi(1)<253) && (180<Pi(2)) && (Pi(2)<184) && (111<Pi(3)) && (Pi(3)<115) && (1.6<Cir) && (Cir<1.8)
        p = imcrop(b, rect);
        figure, imshow(p)
        title('POTATO');
    elseif (140<Pi(1)) && (Pi(1)<144) && (180<Pi(2)) && (Pi(2)<184) && (98<Pi(3)) && (Pi(3)<102) && (1.96<Cir) && (Cir<2.0)
        p = imcrop(b, rect);
        figure, imshow(p)
        title('PEAS');
    elseif (0<Pi(1)) && (Pi(1)<2) && (95<Pi(2)) && (Pi(2)<99) && (49<Pi(3)) && (Pi(3)<52) && (1.4<Cir) && (Cir<1.5)
        p = imcrop(b, rect);
        figure, imshow(p)
        title('BITTERGUARD');
    elseif (215<Pi(1)) && (Pi(1)<219) && (84<Pi(2)) && (Pi(2)<88) && (40<Pi(3)) && (Pi(3)<44) && (1.2<Cir) && (Cir<1.3)
        p = imcrop(b, rect);
        figure, imshow(p)
        title('PUMPKIN');
    elseif (241<Pi(1)) && (Pi(1)<245) && (111<Pi(2)) && (Pi(2)<115) && (36<Pi(3)) && (Pi(3)<40) && (2.0<Cir) && (Cir<2.1)
        p = imcrop(b, rect);
        figure, imshow(p)
        title('CARROT');
    elseif (62<Pi(1)) && (Pi(1)<66) && (98<Pi(2)) && (Pi(2)<102) && (34<Pi(3)) && (Pi(3)<38) && (1.4<Cir) && (Cir<1.5)
        p = imcrop(b, rect);
        figure, imshow(p)
        title('BRONCHI');
    elseif (65<Pi(1)) && (Pi(1)<69) && (46<Pi(2)) && (Pi(2)<50) && (Pi(3)<2) && (1.8<Cir) && (Cir<1.9)
        p = imcrop(b, rect);
        figure, imshow(p)
        title('MAIZE');
    elseif (235<Pi(1)) && (Pi(1)<239) && (25<Pi(2)) && (Pi(2)<29) && (34<Pi(3)) && (Pi(3)<38) && (1.2<Cir) && (Cir<1.3)
        p = imcrop(b, rect);
        figure, imshow(p)
        title('CAPSICUM');
    elseif (251<Pi(1)) && (Pi(1)<255) && (184<Pi(2)) && (Pi(2)<188) && (71<Pi(3)) && (Pi(3)<75) && (1.4<Cir) && (Cir<1.5)
        p = imcrop(b, rect);
        figure, imshow(p)
        title('ONION');
    end
end
Conclusion
It has been demonstrated that vegetable vision is an alternative to unreliable manual sorting of vegetables. The system can be used for vegetable grading based on the external qualities of size, shape, color and surface. The vegetable vision system can be developed further to quantify quality attributes of various produce such as mangoes, cucumbers, tomatoes, potatoes, peaches and mushrooms.

In this research, a segmentation method and a classification method based on area thresholding are developed. Centroid and pixel-value algorithms are combined to obtain correctly classified images. The system shows effective and reliable classification of images captured by a camera. The image segmentation algorithm is a very useful method in image processing and is very helpful for subsequent processing steps.
