A PROJECT REPORT
Submitted by
AMUTHAPRIYA.K
MARILAKSHMI.P
JUNE 2022
ANNA UNIVERSITY: CHENNAI 600 025
BONAFIDE CERTIFICATE
SIGNATURE
Mr. ELAYARAJA, M.E.,
HEAD OF THE DEPARTMENT,
Associate Professor,
Department of Electronics and Communication Engineering,
Sree Sowdambika College of Engineering,
Aruppukottai.

SIGNATURE
Mr. S. ARAVIND, M.E.,
SUPERVISOR,
Assistant Professor,
Department of Electronics and Communication Engineering,
Sree Sowdambika College of Engineering,
Aruppukottai.
CHAPTER 1
INTRODUCTION
1.1 IMAGE PROCESSING
1.2 FUNDAMENTAL STEPS
1.2.1 IMAGE ACQUISITION
1.2.2 IMAGE PRE-PROCESSING
1.2.3 TRAINING PHASE
1.2.4 VALIDATION PHASE
1.2.5 OUTPUT PREDICTION
1.3 IMAGE PROCESSING GOAL
1.4 IMAGE ENHANCEMENT
1.5 IMAGE RESTORATION
1.6 IMAGE ANALYSIS
1.7 FEATURE EXTRACTION
1.8 DEEP LEARNING
1.9 DEEP LEARNING METHODS
1.10 CONVOLUTIONAL NEURAL NETWORK
1.11 ARCHITECTURE
1.11.1 CONVOLUTIONAL LAYERS
1.11.2 POOLING LAYERS
1.11.3 FULLY CONNECTED LAYERS
1.11.4 RECEPTIVE FIELD
1.11.5 WEIGHTS
1.12 DISTINGUISHING FEATURE
1.13 LITERATURE SURVEY
1.14 EXISTING METHOD
1.14.1 CONVOLUTIONAL NEURAL NETWORKS
1.14.2 ARCHITECTURE
1.14.3 PYCHARM
1.14.4 MACHINE LEARNING ALGORITHM
CHAPTER 2
PROPOSED METHOD
2.1 BLOCK DIAGRAM
2.2 FUNCTIONAL BLOCKS
2.3 SOFTWARE REQUIREMENTS
2.4 ARCHITECTURE
2.4.1 CONVOLUTIONAL LAYER
2.4.2 GLOBAL POOLING LAYER
2.4.3 FLATTENING LAYER
2.4.4 FULLY CONNECTED LAYER
2.4.5 SOFTMAX LAYER
2.5 PRE-PROCESSING
2.6 TRANSFER LEARNING
2.7 IMPLEMENTATION
2.7.1 PYTHON TKINTER PACKAGE
2.7.3 PYCHARM
2.7.4 DATASETS
2.7.5 PYTHON
2.8 MACHINE LEARNING LIBRARIES
2.8.1 NUMPY
2.8.2 PYTORCH
2.9 MODULES
2.10 OUTPUT
CHAPTER 3
CONCLUSION
FUTURE SCOPE
REFERENCES
CHAPTER 1
INTRODUCTION
Analogue image processing can be used for hard copies such as printouts
and photographs; image analysts use various fundamentals of interpretation
while applying these visual techniques. Digital image processing techniques
help in manipulating digital images using computers. The three general phases
that all types of data undergo under the digital technique are pre-processing,
enhancement and display, and information extraction. Digital image processing
refers to processing of the image in digital form.
Modern cameras may capture the image directly in digital form, but
images generally originate in optical form. They are captured by video cameras
and digitized; the digitization process includes sampling and quantization.
These images are then processed by at least one of the five fundamental
processes, not necessarily all of them.
1.2.3 TRAINING PHASE
This is the step where the actual training of the model takes place. In this
phase the model extracts features such as the color and shape of the flowers
used for training. Each of the training images is passed through a stack of
layers that includes a convolutional layer, a ReLU layer, a pooling layer and
a fully connected layer.
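As a rough illustration of that stack, the sketch below runs a single image through convolution, ReLU, max-pooling and a fully connected layer in plain NumPy. The 28 x 28 image, 5 x 5 kernel and five output classes are made-up toy values, not the project's actual configuration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear unit."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max-pooling over size x size windows."""
    h, w = x.shape
    h, w = h - h % size, w - w % size          # crop to a multiple of the pool size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

# Forward pass through the stack: conv -> ReLU -> pool -> fully connected
rng = np.random.default_rng(0)
image = rng.random((28, 28))                   # toy grey-scale "flower" image
kernel = rng.standard_normal((5, 5))           # one learnable 5 x 5 filter
fc_weights = rng.standard_normal((12 * 12, 5)) # fully connected layer, 5 classes

features = max_pool(relu(conv2d(image, kernel)))  # shape (12, 12)
scores = features.flatten() @ fc_weights          # one raw score per class
print(scores.shape)                               # (5,)
```

Each stage shrinks or condenses the representation: convolution maps 28 x 28 to 24 x 24, pooling halves that to 12 x 12, and the fully connected layer collapses it to one score per class.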
1.2.4 VALIDATION PHASE
Once the model completes its training on the training set, it tries to
improve itself by tuning its weight values. The loss function used is
categorical cross-entropy and the optimizer used is stochastic gradient descent.
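To make those two choices concrete, here is a NumPy sketch of one stochastic gradient descent step on the categorical cross-entropy loss, for a hypothetical tiny linear classifier; the feature size, class count and learning rate are invented for the demo and are not the project's actual values.

```python
import numpy as np

def softmax(z):
    """Turn raw scores into a probability distribution."""
    z = z - z.max()                      # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def categorical_cross_entropy(probs, target):
    """target is a one-hot vector; loss = -sum(target * log(probs))."""
    return -np.sum(target * np.log(probs + 1e-12))

# One SGD step on a single training example
rng = np.random.default_rng(1)
x = rng.random(8)                        # toy feature vector
W = rng.standard_normal((8, 3))          # weights for 3 flower classes
target = np.array([0.0, 1.0, 0.0])       # the true class is index 1
lr = 0.1                                 # learning rate

probs = softmax(x @ W)
loss_before = categorical_cross_entropy(probs, target)

grad = np.outer(x, probs - target)       # gradient of CE w.r.t. W (softmax + CE)
W -= lr * grad                           # stochastic gradient descent update

probs = softmax(x @ W)
loss_after = categorical_cross_entropy(probs, target)
```

For a small enough learning rate the update is guaranteed to lower the loss on this example, which is exactly the "tuning its weight values" described above.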
1.2.5 OUTPUT PREDICTION
Once the validation phase is over, the model is ready to take an unknown
image of a flower and predict its name from the knowledge gained during the
training and validation phases. Once classification is done, the model displays
the common name of that flower.
1.3 IMAGE PROCESSING GOAL
1.9 DEEP LEARNING METHODS
• ANN
• CNN
• RNN
• Deep Q-networks
• Deep belief networks
1.11 ARCHITECTURE
Convolutional layers convolve the input and pass its result to the next
layer. This is similar to the response of a neuron in the visual cortex to a specific
stimulus. Each convolutional neuron processes data only for its receptive field.
Although fully connected feedforward neural networks can be used to learn
features and classify data, this architecture is generally impractical for larger
inputs such as high resolution images. It would require a very high number of
neurons, even in a shallow architecture, due to the large input size of images,
where each pixel is a relevant input feature. For instance, a fully connected layer
for a (small) image of size 100 x 100 has 10,000 weights for each neuron in the
second layer. Instead, convolution reduces the number of free parameters and
allows the network to be deeper. For example, regardless of image size, a
5 x 5 tiling region with the same shared weights requires only 25 learnable
parameters. Using regularized weights over fewer parameters avoids the
vanishing gradients and exploding gradients problems seen during
backpropagation in traditional neural networks. Furthermore, convolutional
neural networks are ideal for data with a grid-like topology (such as images) as
spatial relations between separate features are taken into account during
convolution and/or pooling.
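The parameter counts quoted above can be verified with simple arithmetic:

```python
# Fully connected: every pixel of the 100 x 100 image feeds each neuron
# in the second layer, so each neuron carries one weight per pixel.
image_pixels = 100 * 100
fc_weights_per_neuron = image_pixels        # 10,000 weights per neuron

# Convolutional: a single 5 x 5 kernel is shared across the whole image,
# so the learnable weight count does not grow with the image size.
conv_weights_per_filter = 5 * 5             # 25 shared weights

print(fc_weights_per_neuron // conv_weights_per_filter)  # 400
```

Weight sharing gives a 400-fold reduction per filter here, and the gap widens as the input image grows, since the convolutional count stays fixed at 25.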
1.11.5 WEIGHTS
Each neuron in a neural network computes an output value by applying
a specific function to the input values received from the receptive field in the
previous layer. The function that is applied to the input values is determined by a
vector of weights and a bias (typically real numbers). Learning consists of
iteratively adjusting these biases and weights.
The vector of weights and the bias are called filters and represent
particular features of the input (e.g., a particular shape). A distinguishing feature
of CNNs is that many neurons can share the same filter. This reduces the memory
footprint because a single bias and a single vector of weights are used across all
receptive fields that share that filter, as opposed to each receptive field having its
own bias and weight vector.
1.12 DISTINGUISHING FEATURE
1.13 LITERATURE SURVEY
1. Fadzilah Siraj, Muhammad Ashraq Salahuddin and Shahrul Azmi Mohd Yusof
proposed a system for classification of Malaysian blooming flowers [4]. This
paper presents the application of neural networks and image processing,
particularly for understanding flower image features. For predictive analysis,
two techniques are used, namely Neural Network (NN) and logistic regression.
The study shows that the NN obtains the higher accuracy of the two techniques.
Otsu's method was applied to compute a global threshold, and the image was
then converted back to RGB color space. For color extraction, the images were
transformed from RGB color space to HSV color space, and the image texture was
calculated from the gray-level co-occurrence matrix (GLCM) to obtain the
contrast, correlation, energy and homogeneity of the image. The prediction
accuracy of logistic regression is 26.8%; therefore, based on 1800 samples of
Malaysian flower images, the NN shows a higher average prediction result than
logistic regression.
However, this paper does not recognize the flower type; it only recognizes
flower features. Future studies can therefore focus on developing a flower
model system that recognizes Malaysian blooming flowers, or on extending the
dataset: varied sample images can be captured for different flowers in order
to recognize their types.
2. Pavan Kumar Mishra, Sanjay Kumar Maurya, Ravindra Kumar Singh and
Arun Kumar Misra present a semi-automatic plant identification system based
on digital leaf and flower images [5]. They propose an algorithm for
identification using multiclass classification based on color, shape, volume
and cell features. Each stage is further divided into three steps: the first
stage compares features extracted from the RGB components; the second stage is
based on the shape features Area Convexity, Perimeter Convexity, Sphericity
and Circularity; and the last stage is based on cell and volume fraction
features. The experiment is performed on a diverse collection of 1000 leaf and
flower samples, and the recognition rate is up to 85% on average.
This system is based on a color model, so accuracy is high when the colors
are distinct; when colors are similar, it may misclassify the image. The
system can be further improved, to yield more accuracy, by combining other
features such as the number of petals and the flower texture. The accuracy of
this system is more than 80%.
4. Prof. Suvarna Nandyal and Miss. Supriya Bagewadi proposed automated
identification of plant species from images of leaves and flowers used in the
diagnosis of arthritis [7]. The work deals with the identification and
classification of medicinal plants used in the treatment of rheumatoid
arthritis. Plant parts, mainly leaves and flowers, are taken as the objects
for identification, since they are available at all times and are roughly 2D
in size and shape. The proposed work applies image processing techniques such
as feature extraction and classification. The features height, width, margin
and texture are used for extracting leaf shape features; similarly, for
flowers, the petal count and colors are extracted in RGB and YCbCr color
space. The obtained features are trained with a neural network classifier.
The classification results show an accuracy of 85% for leaves and 85% for
flowers.
The work develops a system where a user in the field can take a picture of
an unknown plant, leaf or flower and have the system classify the species. In
the proposed work, shape and texture features are extracted from sample plant
images of five classes used against rheumatoid arthritis. The accuracy can be
increased further by taking efficient shape features in the frequency domain,
and the work can be extended with more features and other classifiers.
5. Yuita Arum Sari and Nanik Suciati proposed flower classification using
combined a*b* color and fractal-based texture features [8]. This research
proposes a new flower classification method using a combination of color and
texture features. The first phase obtains the crown of the flower, which is
localized in the flower image using pillbox filtering and Otsu's thresholding.
The color features are extracted by removing the L channel in L*a*b* color
space and taking only the a* and b* channels, so that differing lighting
conditions in the flower images are ignored. The texture features are
extracted by Segmentation-based Fractal Texture Analysis (SFTA).
Classification is done with a kNN classifier, which assesses similarity among
flower images. The cosine measure outperforms all other distance measures at
k = 9. The combined a*b* and texture features give better performance with
the cosine measure than the L* color channel combined with texture features.
The flower classification achieves its best result with an accuracy of 73.63%.
The system has the advantage of being able to classify and recognize a
plant from a small part of the leaf, without depending on the shape of the
leaf or on its color features, since it essentially depends on textural
features. Hence, the system is useful for botany researchers who want to
recognize a damaged plant, since this can be done from only a small part of
the damaged plant.
Only gray-level features have been used. The neural network is trained
using the backpropagation algorithm. A database of flowers of 5 classes, each
containing 10 flower images, has been created. It has been found that the MLP
offers 87% accuracy with GLCM features.
1.14.2 ARCHITECTURE
A convolutional neural network consists of an input layer, hidden layers
and an output layer. In any feed-forward neural network, the middle layers are
called hidden because their inputs and outputs are masked by the activation
function and final convolution. In a convolutional neural network, the hidden
layers include layers that perform convolutions. Typically this includes a layer
that does multiplication or other dot product, and its activation function is
commonly ReLU. This is followed by other layers such as pooling layers, fully
connected layers, and normalization layers.
1.14.3 PYCHARM
2.4 ARCHITECTURE
• Convolutional layer
• Global pooling layer
• Flatten layer
• Fully connected layer
• Softmax layer
Fully connected layers connect every neuron in one layer to every neuron
in another layer. It is the same as a traditional multi-layer perceptron neural
network (MLP). The flattened matrix goes through a fully connected layer to
classify the images.
This layer changes the output of the model into a sequence of probabilities,
generating one value for each class given to the model. It converts the score
of each class into a probability distribution, and the probability
distribution values decide the final output of the model.
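The conversion of raw class scores into a probability distribution is the standard softmax function; a minimal NumPy sketch, with made-up scores for three flower classes:

```python
import numpy as np

def softmax(scores):
    """Convert raw class scores into a probability distribution."""
    shifted = scores - scores.max()   # subtract the max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# Hypothetical raw scores for three flower classes
scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)

print(round(float(probs.sum()), 6))   # 1.0 -- a valid probability distribution
print(int(probs.argmax()))            # 0  -- the class with the highest score wins
```

The exponential preserves the ordering of the scores, so the predicted class is simply the index with the largest probability.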
2.5 PRE-PROCESSING
2.7.1 PYTHON TKINTER PACKAGE
Tkinter is the standard GUI library for Python. Python, when combined
with Tkinter, provides a fast and easy way to create GUI applications. Tkinter
provides a powerful object-oriented interface to the Tk GUI toolkit; to use
it, import the Tkinter module.
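A minimal sketch of such a GUI, using only the standard tkinter module; the window title and widget labels here are hypothetical, not taken from the project's actual interface:

```python
import tkinter as tk

def build_app():
    """Build a minimal window with a label and a button (hypothetical demo)."""
    root = tk.Tk()
    root.title("Flower Recognition")
    tk.Label(root, text="Select a flower image").pack(padx=20, pady=10)
    tk.Button(root, text="Browse", command=root.quit).pack(pady=10)
    return root

# To launch the window, start the Tk event loop from a script:
#   build_app().mainloop()
```

The `mainloop()` call hands control to Tk, which then dispatches button clicks and other events until the window is closed.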
2.7.3 PYCHARM
2.7.4 DATASETS
2.7.5 PYTHON
The Python interpreter is easily extended with new functions and data
types implemented in C, C++ or other languages callable from C. Python is
also suitable as an extension language for customizable applications. If you
do much work on computers, eventually you find that there is some task you
would like to automate. For example, you may wish to perform a
search-and-replace over a large number of text files, or rename and rearrange
a bunch of photo files in a complicated way. Perhaps you would like to write a
small custom database, a specialized GUI application, or a simple game. If you
are a professional software developer, you may have to work with several
C/C++/Java libraries but find the usual write/compile/test/re-compile cycle
too slow. Perhaps you are writing a test suite for such a library and find
writing the testing code tedious. Or maybe you have written a program that
could use an extension language, and you do not want to design and implement
a whole new language for your application. Python is just the language for
you. You could write UNIX shell scripts or Windows batch files for some of
these tasks, but shell scripts are best at moving around files and changing
text data; they are not well suited for GUI applications or games.
2.8.1 NUMPY
2.8.2 PYTORCH
2.9 MODULES
The programming modules of the automated flower recognition system are
divided into three streams:
1. Required_declaration Module
This module consists of all the global variables, constants and required
libraries used during the program: constants such as the image size, text
size and the number of bins for the histogram. Besides that, the training
path and small helper functions are defined in this module. The functions
defined are:
A. Fd_histogram: To extract color-histogram features from the image, we use
the cv2.calcHist() function provided by OpenCV. The arguments it expects are
the image, channels, mask, histSize (bins) and ranges for each channel
(typically 0-256). We then normalize the histogram using OpenCV's normalize()
function and return a flattened version of this normalized matrix using
flatten().
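The same feature can be sketched without OpenCV: the NumPy version below substitutes np.histogramdd for cv2.calcHist but follows the same recipe of binning, normalising and flattening. The choice of 8 bins per channel is an assumption for the demo, not the project's actual setting.

```python
import numpy as np

def fd_histogram(image, bins=8):
    """Flattened, normalised 3-D colour histogram of an RGB image.

    NumPy's histogramdd stands in here for OpenCV's cv2.calcHist; the
    histogram is normalised and flattened as the report describes.
    """
    pixels = image.reshape(-1, 3).astype(float)          # one row per pixel
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    hist /= hist.sum() or 1.0        # normalise so the bins sum to 1
    return hist.flatten()

# A random 16 x 16 RGB image as a stand-in for a flower photograph
rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(16, 16, 3), dtype=np.uint8)
features = fd_histogram(image)
print(features.shape)                # (512,) -- 8 * 8 * 8 bins
```

The flattened 512-element vector is what would then be fed to the classifier as the colour feature descriptor.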
2.Global_test Module
CONCLUSION
FUTURE SCOPE
Some of the future enhancements that can be made to this system are:
1. Provide more information about flowers and their families, which might
help botany students for study and research purposes.
REFERENCES
1. http://en.wikipedia.org/wiki/Digtal_image_processing
2. http://en.wikipedia.org/wiki/Machine_learning
3. http://en.wikipedia.org/wiki/List_of_machine_learning_concepts
4. Fadzilah Siraj, Muhammad Ashraq Salahuddin and Shahrul Azmi Mohd Yusof,
"Digital Image Classification for Malaysian Blooming Flower," IEEE, 2010.
5. Pavan Kumar Mishra, Sanjay Kumar Maurya, Ravindra Kumar Singh and
Arun Kumar Misra, "A Semi-Automatic Plant Identification Based on Digital
Leaf and Flower Images," IEEE, 2012.
6. Tanakorn Tiay, Pipimphorn Benyaphaichit and Panomkhawn Riyamongkol,
"Flower Recognition System Based on Image Processing," ICT-ISPC, 2014.
7. Prof. Suvarna Nandyal and Miss. Supriya Bagewadi, "Automated
Identification of Plant Species from Images of Leaves and Flowers Used in
the Diagnosis of Arthritis," IJREAT, Volume 1, Issue 5, Oct-Nov 2013.
8. Yuita Arum Sari and Nanik Suciati, "Flower Classification Using Combined
a*b* Color and Fractal-Based Texture Feature," International Journal of
Hybrid Information Technology, Vol. 7, No. 2, 2014.
9. M. Z. Rashad, B. S. El-Desouky and Manal S. Khawasik, "Plants Images
Classification Based on Textural Features Using Combined Classifier,"
IJCSIT, Vol. 3, No. 4, August 2011.
10. Mari Partio, Bogdan Cramariuc, Moncef Gabbouj and Ari Visa, "Rock
Texture Retrieval Using Gray Level Co-occurrence Matrix," ITS, Surabaya,
Indonesia.
11. Dr. S. M. Mukane and Ms. J. A. Kendule, "Flower Classification Using
Neural Network Based Image Processing," IOSR-JECE, Volume 7, Issue 3,
Sep-Oct 2013.