
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

BANGLADESH ARMY UNIVERSITY OF SCIENCE & TECHNOLOGY (BAUST)


SAIDPUR CANTONMENT, NILPHAMARI

(Project/Thesis Proposal)

Application for the approval of B.Sc. Engineering Project/Thesis


(Computer Science & Engineering)

Session: Winter 2023 Date: April 11, 2023

1. Name of the Students (with ID) :


 • Khondoker Abu Naim (200101103)
 • Abu Rayhan Mouno (180201118)
 • Razi Rafid Haque (180101043)

2. Present Address : Bangladesh Army University of Science & Technology,


Saidpur Cantonment, Nilphamari.

3. Name of the Supervisor : Professor Dr. S.M. Jahangir Alam


Designation : Professor & Head
Department of Computer Science & Engineering
Bangladesh Army University of Science & Technology

4. Name of the Department : Computer Science & Engineering

Program : B.Sc. Engineering in CSE

5. Date of First Enrolment


in the Program : 29 December 2019

6. Tentative Title : A Comparative Analysis of Deep Learning Models for Flower


Recognition and Health Prediction
7. Introduction
Maintaining and enhancing biodiversity requires knowledge of the diverse range of flower species. In addition, nearly all insect pollinators depend on flowers as the most significant component of their environment and of the food chain. Protecting biodiversity therefore calls for sufficient knowledge of floral species. Thousands of flower species flourish in countries around the globe, and even for the most knowledgeable botanists, manually identifying all of them takes a great deal of time and effort [1]. Researchers are therefore exploring techniques such as deep learning and neural networks for flower image classification; these advances in artificial intelligence enable object detection and categorization at a scale and speed that manual inspection cannot match [2]. Computer vision has substantially improved floral categorization, using deep learning and machine learning techniques for precise and rapid classification of flower species. Among these directions, the automated recognition of floral species and the concurrent prediction of their health have emerged as pivotal areas of research, with profound implications for environmental monitoring, agriculture, and biodiversity conservation. Existing approaches fall into two broad categories. In the first, raw photos are converted into representations from which handcrafted features are extracted [3]. In the second, convolutional neural networks (CNNs) can accept raw images directly with little to no preprocessing, which is one of the factors behind their effectiveness in recognition applications [4]. CNNs can also automatically learn hierarchical features for classifying or segmenting images. Given the remarkable advances of CNNs across a variety of fields [5], we use transfer learning of a CNN-based model to automatically distinguish between different kinds of flowers.
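As an illustration of the transfer-learning setup described above, the following minimal sketch adapts an ImageNet-pretrained CNN to a flower classification head. PyTorch and torchvision are assumed here as the framework; the 24-class figure mirrors the dataset mentioned in Section 11 and is otherwise a placeholder.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 24  # assumption: number of flower classes in our dataset

# Load a CNN pretrained on ImageNet and freeze its convolutional backbone.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new, trainable head.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Only the new head is optimized; the backbone acts as a fixed feature extractor.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```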

8. Background and Present State of the Problem


Deep learning methods are becoming increasingly popular in computer vision because they deliver impressive results across a range of image recognition applications, and they have proven indispensable in botanical studies, particularly flower classification. A textural feature-based method for classifying flower images was proposed by Guru et al. [11]; tested on 35 kinds of flowers, their algorithm reached a maximum accuracy of 79%. A novel image classification algorithm based on histogram and Local Binary Pattern (LBP) features is proposed in [12]; its best accuracy of 91% was achieved with 8x8 images. Two pre-trained networks, AlexNet and LeNet, were evaluated in [13] on flower datasets gathered from Flickr, Google, and Kaggle; of the two, the modified LeNet network achieved the higher image classification accuracy, 57.73%. One of the most defining features of flowers is their color. Color-based classification was studied in [14], where an SVM categorized flowers with an accuracy of 85.93%. The authors of [2] performed pre-processing to enhance image quality, followed by segmentation and ANN-based feature extraction, obtaining an accuracy of 81.19%. CNN-based flower classification was also investigated in [15], where the developed model reached an overall accuracy of 90% and a real-time prediction accuracy of 98%. Meanwhile, [16] used a CNN to classify flower images and validated it with k-fold cross-validation, obtaining a highest average accuracy of 76.49%. These pioneering architectural developments lay a solid foundation for our current research. In this study, we leverage the strengths of these well-established models and investigate their performance on our original 14,400-image flower dataset. We subject the models to a comprehensive and thorough analysis, with the overarching goal of contributing to the collective understanding of flower recognition, with particular emphasis on two critical aspects: accuracy and efficiency. Through this investigation, we aim to identify the most effective and efficient methodologies for image classification within botanical studies. Flower recognition and health prediction stand at the forefront of technological innovation, and deep learning models have emerged as powerful tools, demonstrating remarkable accuracy and efficiency in image classification tasks. Our research examines this current landscape, seeking to extend the capabilities of these models and to contribute to the understanding and preservation of floral ecosystems.
9. Objective with Specific Aims and Possible Outcome
The objective of this work is to conduct a comparative analysis of deep learning models for two distinct applications: flower recognition and health prediction. The primary goal is to evaluate the performance, strengths, weaknesses, and suitability of different deep learning architectures for these specific tasks.

Specific Aims:

1. Flower Recognition:
 • Build and train multiple deep learning models (e.g., CNN, ResNet, VGG, etc.) for flower recognition using publicly available flower datasets (e.g., Oxford Flowers, Flower-102, etc.).
 • Evaluate the models based on accuracy, precision, recall, and F1-score for flower classification (a metric sketch follows this list).
 • Analyze the computational efficiency and model complexity of each architecture.
 • Compare the interpretability of the models in identifying and explaining flower features.
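The classification metrics named above could be computed as in the short sketch below; scikit-learn is assumed as the evaluation library, and the label arrays are hypothetical placeholders for real predictions.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 2, 1, 0]  # hypothetical ground-truth class indices
y_pred = [0, 1, 2, 1, 1, 0]  # hypothetical model predictions

accuracy = accuracy_score(y_true, y_pred)
# Macro averaging weights every flower class equally, regardless of frequency.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```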

2. Health Prediction:
 • Develop deep learning models (e.g., LSTM, CNN, Transformer, etc.) for health prediction tasks such as disease diagnosis, patient outcome prediction, or medical image analysis.
 • Utilize appropriate health-related datasets (e.g., MIMIC-III, NIH Chest X-ray dataset, etc.) for model training and evaluation.
 • Assess the models based on sensitivity, specificity, accuracy, and area under the ROC curve (AUC) for health prediction tasks (a metric sketch follows this list).
 • Analyze the robustness of the models concerning data variability, bias, and interpretability in the medical context.
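For the health-prediction metrics listed above, a minimal sketch for the binary case is shown below; scikit-learn is assumed, and the label and probability arrays are hypothetical placeholders.

```python
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = [0, 0, 1, 1, 1, 0]               # hypothetical ground-truth labels
y_score = [0.2, 0.6, 0.8, 0.4, 0.9, 0.1]  # hypothetical predicted probabilities
y_pred = [1 if s >= 0.5 else 0 for s in y_score]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)          # recall on the positive (diseased) class
specificity = tn / (tn + fp)          # recall on the negative (healthy) class
auc = roc_auc_score(y_true, y_score)  # threshold-independent ranking quality
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} auc={auc:.3f}")
```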

Possible Outcomes:

1. Flower Recognition:
 • Identification of the most effective deep learning architecture for accurate flower classification.
 • Insights into the trade-offs between model performance, computational requirements, and interpretability.
 • Comparative analysis highlighting the strengths and weaknesses of different models in recognizing flower species.

2. Health Prediction:
 • Identification of deep learning models suitable for various health prediction tasks based on performance metrics and interpretability.
 • Understanding of the challenges and limitations of using deep learning in healthcare, including data privacy, interpretability of results, and biases.
 • Insights into which models are more robust and generalizable in different health prediction scenarios.
By conducting a comparative analysis of deep learning models for flower recognition and
health prediction, this study aims to provide valuable insights into the applicability,
performance, and limitations of different architectures in these distinct domains. The
outcomes will contribute to informed decisions regarding model selection based on specific
task requirements and shed light on potential improvements or future research directions in
both fields.

10. Outline of Methodology Design

1. Problem Statement and Objectives


- Define the objectives of the study: comparing deep learning models for two distinct tasks, flower recognition and health prediction.
- Clarify the significance and potential applications of these tasks.

2. Data Collection:
- Flower Recognition:
- Obtain a dataset containing images of various flowers with labeled classes.
- Consider standard datasets like Oxford Flower 102, or create a custom dataset.
- Health Prediction:
- Collect health-related datasets, which might include medical records, patient data,
diagnostic images, etc.
- Ensure data privacy and compliance with ethical guidelines.

3. Data Preprocessing:
- For Flower Recognition:
- Resize, crop, and normalize images (a preprocessing sketch follows this step).
- Augment data if necessary to increase variability.
- For Health Prediction:
- Handle missing values, standardize numerical features, and encode categorical
variables.
- Split data into training, validation, and test sets.
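A minimal preprocessing sketch for the flower images is given below, assuming torchvision as the image pipeline; the directory path is a hypothetical folder-per-class layout, not the actual dataset location.

```python
import torch
from torchvision import datasets, transforms

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),   # resize and crop to the network input size
    transforms.RandomHorizontalFlip(),   # simple augmentation for more variability
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],   # ImageNet channel statistics
                         [0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("data/flowers", transform=train_tf)  # hypothetical path

# 70/15/15 split into training, validation, and test subsets.
n = len(dataset)
n_train, n_val = int(0.7 * n), int(0.15 * n)
train_set, val_set, test_set = torch.utils.data.random_split(
    dataset, [n_train, n_val, n - n_train - n_val]
)
```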

4. Model Selection:
- Flower Recognition:
- Choose popular deep learning architectures such as CNNs (Convolutional Neural Networks), ResNet, Inception, etc. (a model-adaptation sketch follows this step).
- Experiment with different architectures and hyperparameters.
- Health Prediction:
- Select appropriate models for health prediction tasks such as CNNs, RNNs (Recurrent
Neural Networks), or Transformer-based models.
- Explore architectures suitable for structured and unstructured data.
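One way to keep the comparison fair is to adapt every candidate architecture to the same classification head, as in the sketch below; torchvision backbones are assumed, and NUM_CLASSES is a placeholder.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 24  # assumption: number of target classes

def build(name):
    """Return a pretrained backbone with its classifier resized to NUM_CLASSES."""
    if name == "resnet50":
        m = models.resnet50(weights="DEFAULT")
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "vgg16":
        m = models.vgg16(weights="DEFAULT")
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, NUM_CLASSES)
    elif name == "mobilenet_v2":
        m = models.mobilenet_v2(weights="DEFAULT")
        m.classifier[1] = nn.Linear(m.classifier[1].in_features, NUM_CLASSES)
    else:
        raise ValueError(f"unknown architecture: {name}")
    return m

candidates = {name: build(name) for name in ["resnet50", "vgg16", "mobilenet_v2"]}
```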

5. Model Training:
- Implement and train models using selected datasets.
- Optimize hyperparameters using techniques like grid search, random search, or Bayesian
optimization.
- Monitor and log metrics during training, such as accuracy, precision, recall, F1-score, etc. (a training-loop sketch follows this step).
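A minimal training loop with per-epoch metric logging is sketched below; PyTorch is assumed, and the model and data loaders are placeholders produced by the earlier steps.

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, epochs=10, lr=1e-4, device="cpu"):
    """Train `model` and log validation accuracy after every epoch."""
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

        # Validation pass: record accuracy so different runs can be compared.
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in val_loader:
                images, labels = images.to(device), labels.to(device)
                preds = model(images).argmax(dim=1)
                correct += (preds == labels).sum().item()
                total += labels.size(0)
        print(f"epoch {epoch + 1}: val_acc={correct / total:.3f}")
    return model
```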

6. Evaluation Metrics:
- Define appropriate evaluation metrics for each task.
- Flower Recognition:
- Accuracy, precision, recall, F1-score, confusion matrix, etc.
- Health Prediction:
- ROC-AUC, accuracy, precision, recall, F1-score, etc.
- Ensure a fair and unbiased evaluation (a per-class evaluation sketch follows this step).
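A per-class evaluation sketch is shown below; scikit-learn is assumed, and the label arrays are hypothetical placeholders standing in for test-set predictions.

```python
from sklearn.metrics import classification_report, confusion_matrix

y_true = [0, 1, 2, 2, 1, 0]  # hypothetical ground-truth labels
y_pred = [0, 2, 2, 1, 1, 0]  # hypothetical predictions

print(confusion_matrix(y_true, y_pred))  # per-class error structure
print(classification_report(y_true, y_pred, zero_division=0))  # per-class precision/recall/F1
```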

7. Performance Comparison:
- Compare the performance of models for both tasks.
- Analyze the results using statistical tests or visualizations to highlight differences (a paired-test sketch follows this step).
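Model comparisons can be backed by a simple statistical test, as in the sketch below; SciPy is assumed, and the fold-wise accuracies are hypothetical placeholders for real k-fold cross-validation results.

```python
from scipy import stats

model_a_acc = [0.91, 0.89, 0.92, 0.90, 0.88]  # hypothetical 5-fold accuracies, model A
model_b_acc = [0.87, 0.86, 0.90, 0.85, 0.86]  # hypothetical 5-fold accuracies, model B

# Paired t-test on matched folds; a small p-value suggests the gap is not noise.
t_stat, p_value = stats.ttest_rel(model_a_acc, model_b_acc)
print(f"t={t_stat:.3f}, p={p_value:.4f}")
```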
11. Resources Required to Accomplish the Task

Hardware Resources:

 • A high-performance computer or a cluster of computers with adequate storage and processing power to store and analyze the dataset.
 • Cloud computing resources (if necessary) to manage the large volume of data and to run complex machine learning algorithms.

Software Resources:

 • Appropriate software tools for data pre-processing, feature selection, machine learning model training, and evaluation.
 • Security software and tools to ensure the security of the data and the research environment.

Data Resources:

 • A novel flower dataset that we will create, containing 24 classes.

Funding:

 • Sufficient funding to cover the expenses associated with the hardware and software resources, as well as any other costs associated with the research project.

By ensuring the availability of the above resources, the proposed research project can be
carried out effectively and efficiently.

12. References

[1] N. Alipour, O. Tarkhaneh, M. Awrangjeb, and H. Tian, "Flower image classification using deep convolutional neural network," in 2021 7th International Conference on Web Research (ICWR). IEEE, 2021, pp. 1–4.

[2] A. Peryanto, A. Yudhana, and R. Umar, "Convolutional neural network and support vector machine in classification of flower images," Khazanah Informatika: Jurnal Ilmu Komputer dan Informatika, vol. 8, no. 1, pp. 1–7, 2022.

[3] R. Shaparia, N. Patel, and Z. Shah, "Flower classification using texture and color features," Kalpa Publications in Computing, vol. 2, pp. 113–118, 2017.

[4] K. I. Bae, J. Park, J. Lee, Y. Lee, and C. Lim, "Flower classification with modified multimodal convolutional neural networks," Expert Systems with Applications, vol. 159, p. 113455, 2020.

[5] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4700–4708.

[6] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510–4520.

[7] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1251–1258.

13. Cost Estimation


a) Cost of Materials: 7000/= tk
b) Cost of Report Printing and Binding: 6500/= tk
c) Others: 5700/= tk

14. Committee for Advance Studies and Research (CASR)

Meeting No: Resolution No: Date:

15. Number of Under-Graduate Students Working with the Supervisor


at Present: 9

Signature of the Students Department of CSE

---------------------------------- ------------------------------------
Signature of the Supervisor Signature of the Co-Supervisor

---------------------------------------------------
Signature of the Head of the Department
