
A Report on

FOOD RECOGNITION AND CALORIE MEASUREMENT
USING MACHINE LEARNING

Submitted in partial fulfillment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY

in
Computer Science & Engineering

By
Abhijeet Kaushik (2000301530002)
Ayush Chaubey (2000301530016)
Gagan Kumar Sharma (2000301530020)
Satyam (2000301530053)

Dr. Kumud Kundu


SUPERVISOR

INDERPRASTHA ENGINEERING COLLEGE, GHAZIABAD,

Dr. A P J ABDUL KALAM TECHNICAL UNIVERSITY


LUCKNOW
December 2023
CERTIFICATE

Certified that Abhijeet Kaushik, Ayush Chaubey, Gagan Kr. Sharma, and Satyam have carried out the project work presented in this report entitled "Food Recognition and Calorie Measurement using Machine Learning" for the award of Bachelor of Technology from Inderprastha Engineering College, Ghaziabad, under my supervision. The report embodies the results of original work and studies carried out by the students themselves, and the contents of the report do not form the basis for the award of any other degree to the candidates or to anybody else.

(Dr. Kumud Kundu)


Designation: HOD (CSE - AIML)

Date: December, 2023


ACKNOWLEDGEMENT

We take this opportunity to thank our teachers and friends who helped us throughout the project.

First and foremost, we would like to thank our project guide, Dr. Kumud Kundu, Head of the Department of Computer Science (AIML), for her valuable advice and time during the development of the project.

Name 1: Abhijeet Kaushik Name 2: Ayush Chaubey


Roll No: 2000301530002 Roll No: 2000301530016

Signature: Signature:

Name 3: Gagan Kr. Sharma Name 4: Satyam

Roll No.: 2000301530020 Roll No: 2000301530053

Signature: Signature:
DECLARATION

We hereby declare that this submission is our own work and that, to the best of
our knowledge and belief, it contains no material previously published or
written by another person nor material which to a substantial extent has been
accepted for the award of any other degree or diploma of the university or
other institute of higher learning, except where due acknowledgment has been
made in the text.

Name 1: Abhijeet Kaushik Name 2: Ayush Chaubey


Roll No: 2000301530002 Roll No: 2000301530016

Signature: Signature:

Name 3: Gagan Kr. Sharma Name 4: Satyam

Roll No.: 2000301530020 Roll No: 2000301530053

Signature: Signature:
ABSTRACT

This project introduces an innovative food recognition and dietary management system designed specifically for the diverse and rich array of Indian cuisine. Employing MobileNetV1, a lightweight convolutional neural network, the system achieves real-time image classification. Through meticulous fine-tuning on a comprehensive dataset featuring regional and traditional Indian foods, the model ensures accurate recognition and classification.

A central aspect of the system is its integration with a nutritional database, providing users instant access to critical information such as calorie content and nutritional values associated with recognized foods.

The user interface is crafted for accessibility and seamless interaction, enabling users to effortlessly upload food images, explore nutritional insights, and receive personalized recommendations.

This project represents a notable advancement in food recognition and dietary management, offering a valuable tool for individuals aiming to make informed and mindful dietary choices within the vibrant tapestry of Indian culinary traditions. The system not only embodies innovation and efficiency but also emphasizes precision and accuracy, showcasing the convergence of technology and nutrition to promote healthier lifestyles.
TABLE OF CONTENTS

CHAPTER NO. TITLE PAGE NO.

ABSTRACT 05

1. INTRODUCTION
1.1 Problem Definition 08
1.2 Background about the project idea 08
1.3 Objectives of proposed system 09
1.4 Feasibility Study, need and significance 10
1.5 Novelty of Project 11
1.6 Technical Specification 12
1.6.1 Hardware and Software required 12

2. LITERATURE REVIEW 13

3. PROPOSED SYSTEM 15

4. SOFTWARE REQUIREMENT ANALYSIS
4.1 Functional Requirements 17
a) Use Case diagrams 17
b) Use Case descriptions 17
4.2 Non-functional Requirements 19
4.3 Major Modules and their functionalities 21

5. SYSTEM ANALYSIS & DESIGN
5.1 Class designs (wherever applicable) 22
5.2 Sequence diagrams 22
5.3 Activity Diagrams 23
5.4 DFDs of the project 24
5.5 Database Design (E-R Diagram) 25
5.6 Gantt Chart 26

6. IMPLEMENTATION/CORE MODULE
6.1 Used Algorithms/approaches of project 27
6.2 Implementation of Algorithms 28
6.3 Implementation of Modules 30

7. RESULTS / OUTPUTS & TESTING
7.1 All user interfaces and output screens 32

8. CONCLUSIONS 36

9. REFERENCES 37

10. APPENDICES
10.1 Steps to execute/run/implement the project 38
10.2 Coding Snippets 39

CHAPTER 1
INTRODUCTION

1.1 PROBLEM DEFINITION

Indian cuisine boasts an extensive array of regional and traditional dishes, presenting a rich tapestry of flavors and culinary diversity. The surge in food blogging and image tagging has underscored the need for a specialized food recognition system attuned to the nuances of Indian regional fare. This kaleidoscope of choices poses a challenge, not only in identifying dishes but also in gauging their nutritional content, which varies significantly.

Neglecting awareness of dietary intake can result in health issues such as diabetes and obesity, a global concern affecting over 650 million adults worldwide, with a significant share of them in India. A pivotal challenge in fostering a healthy lifestyle lies in accurately monitoring food consumption. Manual calorie counting proves arduous and error-prone, prompting the need for automated solutions.

The proposed solution is a system that leverages image processing with a Convolutional Neural Network (CNN) and MobileNet for food recognition and calorie estimation. The system not only identifies the food type intelligently but also provides a practical estimate of its calorie content. Beyond that, it goes a step further by recommending suitable food items. This trifecta of objectives addresses the pressing need for efficient tools that support individuals in cultivating healthier eating habits, aiding weight management, fulfilling nutritional goals, and navigating dietary restrictions for various health conditions. In essence, the project serves as a beacon for promoting a balanced and conscious approach to dietary choices.

1.2 BACKGROUND ABOUT THE PROJECT IDEA

The genesis of this project idea stems from the intricate tapestry of Indian cuisine, renowned for its myriad regional and traditional dishes. The contemporary landscape, marked by the burgeoning popularity of food blogging and image tagging, accentuates the necessity for a bespoke food recognition system. This recognition system aims not only to identify the diverse array of Indian foods but also to delve into the intricate realm of nutritional content, which is highly variable across these culinary delights.

In the backdrop of a global health concern, the prevalence of obesity, affecting over 650 million adults worldwide and 150 million in India, acts as a poignant motivator. The realization that a lack of dietary awareness can contribute to serious health issues such as diabetes and obesity underscores the need for innovative solutions, and it prompted the exploration of avenues to simplify and enhance the accuracy of monitoring food intake.

The confluence of technological advancements, particularly in image processing using advanced machine learning, provides a promising avenue for addressing these challenges. By harnessing the power of artificial intelligence, this project envisions a system that not only intelligently identifies various Indian foods but also provides a practical estimation of their calorie content. The ultimate goal is to empower individuals in making informed dietary choices, promoting healthier lifestyles, managing weight effectively, and addressing specific health conditions that warrant meticulous dietary regulation.

In essence, the background of this project is rooted in the intersection of culinary diversity, health consciousness, and cutting-edge technology, aiming to create a tangible impact on individuals' well-being through intelligent and accessible tools for managing dietary habits.

1.3 OBJECTIVES OF PROPOSED SYSTEM

1. Accurate Food Identification: Develop a robust food recognition system for accurately identifying and categorizing diverse regional and traditional Indian foods.

2. Nutritional Content Estimation: Implement intelligent techniques to estimate the calorie content and nutritional value of identified foods, considering the wide variations in ingredients and cooking methods inherent in Indian cuisine.

3. Streamlined Calorie Tracking: Create a user-friendly interface that allows individuals to effortlessly track their daily food intake by leveraging automated calorie counting, eliminating the tedious and error-prone nature of manual tracking.

4. Personalized Recommendations: Integrate a recommendation system that suggests healthier food alternatives based on individual dietary preferences, nutritional goals, and specific health conditions, fostering a tailored approach to maintaining a balanced lifestyle.

5. Support for Healthier Eating Habits: Design the system to serve as a comprehensive tool for promoting and supporting healthier eating habits, aiding individuals in weight management, meeting nutritional goals, and addressing health conditions that necessitate precise dietary regulation.

1.4 FEASIBILITY STUDY, NEED AND SIGNIFICANCE

Feasibility Study:
1. Technical Feasibility: Assess the technological infrastructure required for image processing and CNN algorithms, ensuring that the proposed system can be developed and implemented effectively.

2. Financial Feasibility: Evaluate the budgetary requirements for software development, infrastructure, and maintenance, considering the potential return on investment and long-term sustainability.

3. Operational Feasibility: Analyze the practicality of integrating the system into individuals' daily lives, considering user acceptance, ease of use, and compatibility with existing platforms and devices.

Need for the Project:
1. Health Awareness: The project addresses the critical need for promoting healthier lifestyles by providing individuals with a tool to track and manage their dietary intake, mitigating the risk of health issues like obesity and diabetes.

2. Dietary Precision: In the context of the diverse and intricate landscape of Indian cuisine, there is a clear need for a specialized system that accurately identifies and quantifies the nutritional content of foods. This precision is essential for individuals aiming to meet specific dietary goals and maintain a balanced lifestyle.

Significance of the Project:
1. Health Impact: The project holds significant societal importance by directly contributing to the prevention of widespread health concerns, offering a practical solution for individuals to make informed choices about their dietary habits and overall well-being.

2. Cultural Preservation and Technological Innovation: Beyond health, the project contributes to cultural preservation by celebrating the richness of Indian cuisine. Simultaneously, it showcases technological innovation by leveraging advanced image processing and CNN algorithms, illustrating the real-world applications of cutting-edge technologies in improving daily lives.

In summary, the feasibility study ensures the project's practical viability, while the project's need and significance underscore its relevance in addressing health challenges, promoting cultural appreciation, showcasing technological innovation, and enhancing personalized wellness.

1.5 NOVELTY OF PROJECT

The novelty of this project lies in its integration of advanced technologies to address the unique challenges presented by the diverse and intricate landscape of Indian cuisine. Several key aspects contribute to its innovative character:

1. Cultural Sensitivity and Specificity: The project goes beyond generic food recognition systems by incorporating a deep understanding of the nuances in Indian regional and traditional foods. This cultural specificity ensures accurate identification and nutritional estimation, catering to the diverse culinary practices across the subcontinent.

2. Holistic Approach to Health: Unlike conventional calorie tracking apps, this project aims for a holistic impact on health. By addressing not only the quantitative aspects of food but also the qualitative ones, it becomes a comprehensive tool for individuals aiming to manage weight, meet specific nutritional goals, and navigate health conditions requiring precise dietary regulation.

3. Intersection of Technology and Well-Being: The project stands at the intersection of cutting-edge technology and personal well-being. Leveraging image processing and Convolutional Neural Network algorithms, it showcases the practical application of these technologies in everyday life, making them accessible and beneficial for individuals striving for healthier lifestyles.

In essence, the novelty of this project lies in its cultural sensitivity, personalized recommendations, holistic health approach, and the effective convergence of advanced technologies with the practical needs of individuals in managing their dietary habits.

1.6 TECHNICAL SPECIFICATION

1.6.1 Hardware Specifications:
1. Processor: Intel i5
2. RAM: 4 GB
3. Hard Disk: 512 GB

1.6.2 Software Specifications:
1. Anaconda Navigator
2. Jupyter Notebook (Python 3.7 with supported packages)
3. Operating System: Windows 7 and above
CHAPTER 2.
LITERATURE SURVEY

• Almaghrabi, R., Villalobos, G., Pouladzadeh, P. & Shirmohammadi, S., 2012, 'A Novel Method for Measuring Nutrition Intake Based on Food Image', Proceedings of the International Conference on Instrumentation and Measurement Technology: This paper presents a medical system for recognising food nutrition and energy intake, leveraging food image processing and shape recognition. Recent studies suggest that technology, such as smartphones, can enhance obesity treatments. The system uses a unique technique to estimate calorie intake and nutrient components by capturing food images before and after consumption. It offers a novel approach to measuring food intake, potentially aiding obesity management.

• J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015: Convolutional networks are powerful visual models that yield hierarchies of features. The authors show that convolutional networks, by themselves, trained end-to-end, pixels-to-pixels, exceed the state of the art in semantic segmentation. The critical insight is to build "fully convolutional" networks that take input of arbitrary size and produce correspondingly sized output with efficient inference and learning. The paper defines and details the space of fully convolutional networks, explains their application to spatially dense prediction tasks, and draws connections to prior models.

• Y. Kawano and K. Yanai, "Automatic expansion of a food image dataset leveraging existing categories with domain adaptation," in Proc. European Conference on Computer Vision. Springer, 2014: This paper presents a novel framework for automatically expanding image datasets, focusing on diverse food categories. It addresses the challenge of varying food types across cultures by introducing a "foodness" classifier and domain adaptation, enabling the automatic creation of culturally diverse food datasets. Experimental results highlight the effectiveness of the approach compared to baseline methods.
• G. Ciocca, P. Napoletano, and R. Schettini, "Food recognition: A new dataset, experiments, and results," IEEE Journal of Biomedical and Health Informatics: This paper introduces a novel dataset for assessing food recognition algorithms in dietary monitoring, in which each food class undergoes manual segmentation. A benchmarking process employing a food-class analysis pipeline and three classification strategies with various visual descriptors achieves 79% accuracy in food and tray recognition using Convolutional Neural Network features. The dataset and benchmark framework are open to the research community.

• MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications: Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam: The advent of high-parameter image recognition models, demanding extensive training data and energy-intensive computing, hinders everyday efficiency. This study employs the MobileNet architecture on ARM-based CPUs for image recognition, achieving 92.4% accuracy on Caltech-101 with a 2.1 W power draw. MobileNet's efficiency may reshape machine learning, aligning with human-centric, effective computer vision preferences.

• Rethinking the Inception Architecture for Computer Vision: Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna: Core to cutting-edge computer vision, deep convolutional networks, popular since 2014, bring notable advancements, though increased size and computational demands often boost quality. Balancing computational efficiency and low parameter counts remains crucial, especially for mobile vision and big-data scenarios. Through factorized convolutions and robust regularization, the proposed methods outperform benchmarks, achieving 21.2% top-1 and 5.6% top-5 error with minimal computational cost and parameters on the ILSVRC 2012 classification challenge.
CHAPTER 3.
PROPOSED SYSTEM

The proposed system is an intelligent and comprehensive food recognition and dietary management platform specifically tailored for the diverse landscape of Indian cuisine. Key features of the system include:

1. Advanced Food Recognition:
The system utilizes state-of-the-art image processing and Convolutional Neural Network (CNN) algorithms to accurately identify and categorize a vast array of regional and traditional Indian foods. This ensures precision in recognizing dishes with diverse ingredients and cooking methods.

2. Real-time Nutritional Estimation:
Once a food item is identified, the system provides a real-time estimate of its calorie content and nutritional value. This instantaneous feedback empowers users with accurate information about the dietary impact of their food choices, promoting informed decision-making.

3. User-Friendly Interface:
A user-friendly interface ensures accessibility for a wide range of users, including those with varying technical proficiency. The interface simplifies the process of tracking and managing dietary intake, enhancing user engagement and making the system practical for everyday use.

The proposed system, through its advanced technologies and thoughtful features, aims to revolutionize how individuals engage with their dietary habits, providing them with a powerful tool to make informed choices for a healthier and more balanced lifestyle.

Advantages of the proposed system:
1. Streamlined simplicity compared to the existing system.
2. Enhanced and optimized efficiency.
3. Elevated levels of accuracy and precision.
Proposed Model:
CHAPTER 4.
SOFTWARE REQUIREMENT ANALYSIS

Purpose: The main purpose of preparing this document is to give a general insight into the analysis and requirements of the existing system or situation and to determine the operating characteristics of the system. Using this document helps an enterprise confirm that the requirements are fulfilled and helps business leaders make decisions about the lifecycle of their product, such as when to retire a feature.

In addition, writing such a document can help developers reduce the time and effort necessary to meet their goals, as well as save money on the cost of development.

Scope: This document plays a vital role in the software development life cycle (SDLC), and it describes the complete requirements of the system. It is meant for use by the developers and will serve as the baseline during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.

You can think of an SRS as a blueprint or roadmap for the software you're going to build. The elements that comprise an SRS can be simply summarized into four Ds:

● Define your product's purpose.
● Describe what you're building.
● Detail the requirements.
● Deliver it for approval.

We want to DEFINE the purpose of our product, DESCRIBE what we are building, DETAIL the individual requirements, and DELIVER it for approval. A good SRS document will define everything from how software will interact when embedded in hardware to the expectations when connected to other software.
4.1 FUNCTIONAL REQUIREMENTS

a) Use Case diagram

b) Use Case descriptions

The use case diagrams describe the system functionality as a set of tasks that the system must carry out and the actors who interact with the system to complete those tasks.

Use Case: Each use case on the diagram represents a single task that the system needs to carry out. Input Food Image, Image Preprocessing, Food Recognition, and Nutritional Estimation are all examples of use cases. Some use cases may include or extend a task represented by another use case. For example, in order to estimate nutritional content, the food image must first be recognized.

Actor: An actor is anything outside the system that interacts with the system to complete a task. It could be a user or another system. The actor "uses" the use case to complete a task. Often, it is useful to look at the set of use cases that an actor has access to -- this defines the actor's overall role in the system.

Association: The association is the link that is drawn between an actor and a use case. It indicates which actors interact with the system to complete the various tasks.

Includes: Use the includes link to show that one use case includes the task described by another use case. For example, the Food Recognition use case includes Image Preprocessing. Sometimes the word "Uses" is used instead of "Includes".

Generalization: The generalization link is an informal way of showing that one use case is similar to another use case, but with a little bit of extra functionality. One use case inherits the functionality represented by another use case and adds some additional behavior to it.

Extends: The extends link is used to show that one use case extends the task described by another use case. It is very similar to generalization, but is much more formalized. The use case that is extended is always referred to as the base use case and has one or more defined extension points. The extension points show exactly where extending use cases are allowed to add functionality. The extending use case does not have to add functionality at all of the base use case's extension points; the extension link indicates which extension points are being used.

4.2 NON-FUNCTIONAL REQUIREMENTS

Non-functional requirements are characteristics that define how well the system
performs its functions rather than what functions the system performs. Here are
non-functional requirements for this project:

1. Performance:
Requirement: The system should provide quick and responsive feedback, with
image recognition and nutritional estimation completing within a reasonable time
frame, even during peak usage.
2. Scalability:
Requirement: The system should be designed to handle a growing database of
foods and an increasing number of users without a significant degradation in
performance.
3. Usability:
Requirement: The user interface should be intuitive, easy to navigate, and
accessible to users with varying levels of technical proficiency.
4. Reliability:
Requirement: The system should be highly reliable, minimizing downtime,
errors, and disruptions to ensure a seamless user experience.
5. Security:
Requirement: The system should implement robust security measures to protect
user data, ensuring confidentiality and integrity. This includes secure storage and
transmission of sensitive dietary information.
6. Maintainability:
Requirement: The system should be designed with modularity and
maintainability in mind, allowing for easy updates, bug fixes, and future
enhancements without significant disruption.
7. Compatibility:
Requirement: The system should be compatible with various devices and
platforms, including desktops, laptops, and mobile devices, to cater to a diverse
user base.
8. Accuracy of Nutritional Information:
Requirement: The nutritional estimation provided by the system should be
accurate and reliable, reflecting the true content of recognized food items.

9. Continuous Improvement:
Requirement: The system should have mechanisms in place for continuous
improvement, incorporating updates based on technological advancements, user
feedback, and evolving dietary knowledge.

These non-functional requirements collectively define the performance, reliability, security, usability, and other aspects that contribute to the overall effectiveness and user satisfaction with the proposed system.
4.3 MAJOR MODULES AND THEIR FUNCTIONALITIES

User Interface (UI):
• Functionality: Provides a user-friendly interface for users to interact with the system, built using HTML, CSS, JavaScript, etc.
• Responsibilities: Displays identified foods, detailed nutritional information, and other features for user inputs and settings.

Image Processing Module:
• Functionality: Preprocesses food images to enhance quality for accurate recognition using machine learning.
• Responsibilities: Image enhancement, noise reduction, and preparation for input into the recognition system.

Food Recognition Module:
• Functionality: Utilizes a trained Convolutional Neural Network (CNN) and MobileNet for intelligent recognition of various regional and traditional Indian foods.
• Responsibilities: Extracts features from images, categorizes food items, and interfaces with the nutritional estimation module.

Nutritional Estimation Module:
• Functionality: Estimates the calorie content and nutritional value of recognized food items in real time.
• Responsibilities: Accesses a comprehensive database of nutritional information, calculates values based on recognized foods, and provides instant feedback to users.
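To make the Nutritional Estimation Module concrete, the snippet below is a minimal, illustrative sketch of a Pandas-based lookup against a nutritional database. The file name nutrition_db.csv, its column names, and the function name are assumptions made for illustration; they are not the project's actual artifacts.

# Hypothetical sketch of the Nutritional Estimation Module (not the project's actual code).
# Assumes a CSV database "nutrition_db.csv" with columns:
# food, calories, protein_g, carbs_g, fat_g (values per serving).
import pandas as pd

NUTRITION_DB = pd.read_csv("nutrition_db.csv")

def estimate_nutrition(food_label, servings=1.0):
    """Return calories and macronutrients for a recognized food label."""
    match = NUTRITION_DB[NUTRITION_DB["food"].str.lower() == food_label.lower()]
    if match.empty:
        raise KeyError("No nutritional record found for '%s'" % food_label)
    record = match.iloc[0]
    return {
        "food": record["food"],
        "calories": round(float(record["calories"]) * servings, 1),
        "protein_g": round(float(record["protein_g"]) * servings, 1),
        "carbs_g": round(float(record["carbs_g"]) * servings, 1),
        "fat_g": round(float(record["fat_g"]) * servings, 1),
    }

# Example usage: estimate_nutrition("masala dosa") would return the stored
# values for that dish, scaled by the number of servings.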
CHAPTER 5.
SYSTEM ANALYSIS & DESIGN

5.1 CLASS DESIGN

5.2 SEQUENCE DIAGRAM


5.3 ACTIVITY DIAGRAM
5.4 DATA FLOW DIAGRAM

LEVEL 1: Login and User

LEVEL 2: Admin
5.5 DATABASE DESIGN

5.5.1 E-R Chart


5.6 GANTT CHART
CHAPTER 6.
IMPLEMENTATION/CORE MODULE

6.1 USED ALGORITHMS/APPROACHES FOR PROJECTS

MobileNet for Image processing and recognition

MobileNetV1 (CNN) Algorithm Overview:

1. Model Architecture:
• Description: MobileNetV1 is a lightweight convolutional neural network architecture designed for mobile and embedded systems. It consists of depthwise separable convolutions, which significantly reduce the number of parameters and computations compared to traditional CNNs.
• Role in Project: MobileNetV1 serves as the core image classification algorithm, responsible for recognizing and categorizing various regional and traditional Indian foods.

2. Transfer Learning:
• Description: MobileNetV1 supports transfer learning, allowing pre-trained weights from large datasets like ImageNet to be reused. Fine-tuning the model on the project's food dataset enables it to adapt to the characteristics of Indian cuisine.
• Role in Project: Transfer learning with MobileNetV1 enables the model to learn relevant features from a broad dataset and then specialize in recognizing Indian dishes.

3. Efficiency and Real-time Processing:
• Description: The efficiency of MobileNetV1 makes it suitable for real-time image classification, particularly on mobile devices. Its lightweight architecture allows for quick and responsive processing.
• Role in Project: MobileNetV1's efficiency ensures that users receive instant feedback on the identified food items, contributing to a seamless and user-friendly experience.

4. Integration with TensorFlow:
• Description: MobileNetV1 is compatible with the TensorFlow ecosystem, providing seamless integration with TensorFlow-based projects. TensorFlow offers tools and functionalities that complement the usage of MobileNetV1.
• Role in Project: The integration with TensorFlow simplifies the implementation and deployment of MobileNetV1 within the food recognition and dietary management system.

5. Accuracy-Computation Tradeoff:
• Description: MobileNetV1 achieves a balance between computational efficiency and accuracy. While it may not be as complex as some larger CNN architectures, it offers a suitable tradeoff for applications where efficiency is crucial.
• Role in Project: The efficiency-accuracy tradeoff ensures that the system performs well on resource-constrained devices without sacrificing the accuracy required for food recognition.

In summary, MobileNetV1 serves as the primary image classification algorithm in this project, providing an efficient and real-time solution for recognizing and categorizing various Indian foods. Its lightweight architecture and compatibility with transfer learning make it well suited to the specific requirements of the food recognition and dietary management system.
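As an illustration of the transfer-learning workflow described above, the following is a minimal Keras sketch of fine-tuning MobileNetV1 on a folder of Indian food images. The directory name indian_food_dataset/, the number of classes, and the training settings are assumptions for illustration, not the project's exact configuration.

# Hypothetical fine-tuning sketch (not the project's exact training script).
# Assumes images organized as indian_food_dataset/<class_name>/*.jpg
import tensorflow as tf

NUM_CLASSES = 20          # assumed number of Indian dish classes
IMG_SIZE = (224, 224)     # MobileNetV1's standard input size

train_ds = tf.keras.utils.image_dataset_from_directory(
    "indian_food_dataset", image_size=IMG_SIZE, batch_size=32)

# Load MobileNetV1 pre-trained on ImageNet, without its classifier head
base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False    # freeze the convolutional base for the first training stage

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNet expects pixels in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

Unfreezing the top layers of the base model and recompiling with a smaller learning rate is the usual second fine-tuning stage once the new classifier head has converged.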

6.2 IMPLEMENTATION OF ALGORITHM

6.2.1 MobileNetV1:

The step-by-step process in MobileNetV1 involves the following stages:

1. Input Image: The process begins with an input image, typically representing a food item.

2. Preprocessing: The input image undergoes preprocessing, including resizing and normalization. This step prepares the image for input into the MobileNetV1 model.

3. Depthwise Separable Convolution: MobileNetV1 utilizes depthwise separable convolutions, a key feature that reduces the number of parameters and computations compared to traditional convolutional layers. The depthwise convolution applies a single convolutional filter to each input channel independently; the pointwise convolution then uses 1x1 convolutions to combine information from the depthwise convolutions and create the final output.

4. Convolutional Blocks: The model consists of several convolutional blocks, each comprising depthwise separable convolutions followed by batch normalization and a non-linear activation (ReLU).

5. Downsampling (Stride 2): Some convolutional blocks include downsampling layers with a stride of 2, reducing the spatial dimensions of the feature maps.

6. Global Average Pooling: After multiple convolutional blocks, a global average pooling layer is applied. It reduces the spatial dimensions of the feature maps to a 1x1 size by taking the average value across each channel.

7. Fully Connected Layer: A fully connected layer with softmax activation is employed for the final classification. This layer assigns probabilities to different classes based on the learned features.

8. Output: The output is a probability distribution across the predefined classes. In the context of food recognition, each class corresponds to a specific type of food.

9. Training with Loss Function: During training, the model is optimized by minimizing a loss function. Commonly used loss functions include categorical cross-entropy for classification tasks.

10. Backpropagation and Gradient Descent: The model undergoes backpropagation to adjust the weights and biases based on the calculated gradients. This process occurs during training to improve the model's ability to correctly classify images.

11. Fine-tuning (Transfer Learning): MobileNetV1 supports transfer learning, allowing the model to be fine-tuned on a specific dataset. This is essential for adapting the model to the characteristics of Indian cuisine in the context of food recognition.

12. Inference: During inference, the trained MobileNetV1 model is applied to new, unseen images to classify and recognize the contents, such as identifying different Indian dishes.

This step-by-step process illustrates the architecture and flow of operations within MobileNetV1, emphasizing its efficiency through depthwise separable convolutions, which makes it particularly suitable for real-time image classification tasks on resource-constrained devices.
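To make step 3 concrete, the following is an illustrative Keras sketch of a single depthwise separable convolution block (a depthwise convolution followed by a pointwise 1x1 convolution, each with batch normalization and ReLU), mirroring the building block described above. The filter counts and strides are example values, not ones taken from the project.

# Illustrative depthwise separable convolution block in the MobileNetV1 style.
# Filter counts and strides below are example values for demonstration only.
import tensorflow as tf
from tensorflow.keras import layers

def depthwise_separable_block(x, pointwise_filters, stride=1):
    # Depthwise convolution: one 3x3 filter applied to each input channel independently
    x = layers.DepthwiseConv2D(kernel_size=3, strides=stride,
                               padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # Pointwise convolution: 1x1 convolution that mixes information across channels
    x = layers.Conv2D(pointwise_filters, kernel_size=1,
                      padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

# Example: stack two blocks on a 224x224 RGB input; the second block downsamples.
inputs = tf.keras.Input(shape=(224, 224, 3))
x = depthwise_separable_block(inputs, pointwise_filters=64)
x = depthwise_separable_block(x, pointwise_filters=128, stride=2)
model = tf.keras.Model(inputs, x)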

6.3 Implementation of Modules

1. NumPy:
Role: NumPy is used for numerical operations and array manipulations. It's particularly helpful for processing and analyzing data efficiently in the backend.

2. Pandas:
Role: Pandas is employed for data manipulation and analysis. It's useful for handling and organizing data related to nutritional information and alternative healthier choices stored in the system's database.

3. Flask:
Role: Flask serves as the web framework for building the backend of the application. It handles HTTP requests, facilitates the creation of APIs, and simplifies the development of server-side components.

4. TensorFlow (gfile module):
Role: The gfile module in TensorFlow is used for reading and writing files. In the context of the project, it may be employed for managing model files, storing trained models, and loading them for food recognition tasks.
5. os:
Role: The os module provides a way to interact with the operating system. It can
be used for tasks such as file and directory operations, which are essential for
managing datasets, model files, and other system-related tasks.

6. datetime:
Role: The datetime module is used for handling date and time information. It can
be utilized for timestamping data, tracking user interactions, and managing
temporal aspects of the system.

7. sys:
Role: The sys module provides access to some variables used or maintained by the
Python interpreter. It might be used for system-related configurations and settings.

8. tarfile:
Role: The tarfile module is used for reading and writing tar archive files. In the
context of the project, it may be employed for packaging and unpacking model
files or datasets.

9. random:
Role: The random module provides functions for generating random numbers. It
can be useful for tasks involving data shuffling, which is common in machine
learning scenarios.

These core modules collectively contribute to various aspects of the project, including data manipulation, backend development, file operations, system interactions, and machine learning functionalities. They form an integral part of the technological stack required for the successful implementation of a food recognition and dietary management system.
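As a rough illustration of how the modules listed above might come together in the backend, the snippet below sketches a plausible header and application setup for a file like app1.py. The template names and directory layout are assumptions; this is not the project's actual source.

# Hypothetical header of the Flask backend (app1.py) using the modules listed above.
import os
import sys
import random
import tarfile
from datetime import datetime

import numpy as np
import pandas as pd
from flask import Flask, render_template
from tensorflow.python.platform import gfile   # TensorFlow's file I/O helper

app = Flask(__name__)
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
MODEL_DIR = os.path.join(BASE_DIR, "model")     # assumed location of trained model files

@app.route("/")
def landing_page():
    # Render the landing page template (assumed to be templates/index.html)
    return render_template("index.html")

if __name__ == "__main__":
    # Flask development server; the report runs the app at http://127.0.0.1:5000
    app.run(debug=True)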
CHAPTER 7.
RESULTS / OUTPUTS & TESTING

7.1 ALL USER INTERFACES AND OUTPUT SCREENS

1. Landing Page:

2. Admin:
3. Select “Choose Food”:
4. Select Food Item
5. Select “Analyse Food”:

6. Result for the given image:


CHAPTER 8.
CONCLUSION

In conclusion, the implementation of the food recognition and dietary management system utilizing MobileNetV1 has proven to be a significant stride towards fostering healthier eating habits within the context of Indian cuisine. The project addressed the complexities of recognizing diverse regional and traditional foods, offering a streamlined solution for users to monitor and manage their dietary choices. The use of MobileNetV1, with its efficient depthwise separable convolutions, demonstrated its suitability for real-time image classification on mobile devices. The model was successfully fine-tuned on a diverse dataset of Indian dishes, allowing it to adapt to the nuances of the cuisine and providing accurate recognition results.
The integration of a nutritional database enriched the user experience by offering
instant access to essential information about recognized foods, including calorie
content and nutritional values. The personalized recommendation engine further
enhanced the system's utility, providing users with tailored suggestions based on
individual preferences, dietary goals, and health conditions.
The holistic approach to health management, encompassing weight tracking and
adherence to dietary regulations, contributes to a comprehensive solution for
users aiming to improve their overall well-being. The continuous improvement
loop, fueled by user feedback, ensures the adaptability and relevance of the
system in the dynamic landscape of dietary trends and user preferences.
The security measures implemented guarantee the protection of user data,
emphasizing privacy and confidentiality in handling sensitive information.
As we move forward, the project lays the foundation for future enhancements,
including updates to the MobileNetV1 model, integration of emerging
technologies, and expansion of the nutritional database. The system not only
serves as a valuable tool for individuals striving for healthier lifestyles but also
provides insights into the intersection of technology and nutrition within the rich
tapestry of Indian culinary traditions.
CHAPTER 9.
REFERENCES

1. Y. Matsuda, H. Hoashi, and K. Yanai, “Recognition of multiple-food images by detecting candidate regions,” in Proc. 2012 IEEE International Conference on Multimedia and Expo, 2012.
2. Y. Kawano and K. Yanai, “Automatic expansion of a food image dataset leveraging existing categories with domain adaptation,” in Proc. European Conference on Computer Vision. Springer, 2014.
3. T. G. Dietterich, “Ensemble methods in machine learning,” in Proc. 1st International Workshop on Multiple Classifier Systems, ser. MCS ’00. Berlin, Heidelberg: Springer-Verlag, 2000.
4. R. Almaghrabi, G. Villalobos, P. Pouladzadeh, and S. Shirmohammadi, “A novel method for measuring nutrition intake based on food image,” in Proc. International Conference on Instrumentation and Measurement Technology, 2012.
5. L. Bossard, M. Guillaumin, and L. Van Gool, “Food-101 – mining discriminative components with random forests,” in Proc. European Conference on Computer Vision. Springer, 2014.
6. G. M. Farinella, D. Allegra, and F. Stanco, “A benchmark dataset to study the representation of food images,” in Computer Vision - ECCV 2014 Workshops, L. Agapito, M. M. Bronstein, and C. Rother, Eds. Cham: Springer International Publishing, 2015.
7. G. Ciocca, P. Napoletano, and R. Schettini, “Food recognition: A new dataset, experiments, and results,” IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 3, 2017.
8. A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “MobileNets: Efficient convolutional neural networks for mobile vision applications,” 2017.
9. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
10. G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
CHAPTER 10.
APPENDICES

10.1 STEPS TO EXECUTE/RUN THE PROJECT

1. Open Anaconda Navigator on your computer.
2. Go to the Environments tab and create a new environment for the project.
3. Open the terminal by right-clicking on the new environment.
4. Navigate to the file path of the project.
5. Run the “python app1.py” command in the terminal.
6. The project runs on the localhost URL http://127.0.0.1:5000, where the landing page is hosted.
7. Log in as admin using the admin credentials.
8. Browse a food image in your local storage and select it from the dataset.
9. Click on the “Analyse Food” button to calculate the macronutrients of the given food image.
10. The resultant macronutrients of the food, along with its calories, will be shown on the screen.
10.2 CODING SNIPPETS
Routing paths for different HTML templates in app1.py:
Routing to upload image:
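The original routing screenshots are not reproduced in this extraction. The snippet below is a hedged sketch of what the template routes and the image-upload route in app1.py could look like; the route paths, the uploads/ folder, and the analyse.html template are assumptions for illustration.

# Hypothetical routing sketch for app1.py: HTML templates and image upload.
import os
from flask import Flask, request, render_template
from werkzeug.utils import secure_filename

app = Flask(__name__)
UPLOAD_FOLDER = "uploads"
os.makedirs(UPLOAD_FOLDER, exist_ok=True)

@app.route("/")
def index():
    return render_template("index.html")        # landing page

@app.route("/admin")
def admin():
    return render_template("admin.html")        # admin dashboard

@app.route("/upload", methods=["POST"])
def upload_image():
    # Save the uploaded food image and pass its path to the analysis page
    file = request.files["food_image"]
    path = os.path.join(UPLOAD_FOLDER, secure_filename(file.filename))
    file.save(path)
    return render_template("analyse.html", image_path=path)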

Sample HTML for Landing Page:


Sample for Image Preprocessing:
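In place of the preprocessing screenshot, here is an indicative sketch of how an uploaded image could be resized and normalized before being fed to MobileNetV1. The 224x224 size matches the model's standard input; the function name is assumed.

# Hypothetical image-preprocessing sketch for MobileNetV1 input.
import numpy as np
import tensorflow as tf

def preprocess_image(image_path):
    """Load an image file, resize it to 224x224, and scale pixel values to [-1, 1]."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    array = tf.keras.utils.img_to_array(img)
    array = tf.keras.applications.mobilenet.preprocess_input(array)
    return np.expand_dims(array, axis=0)        # add the batch dimension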

Sample for Labeling Image:
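Similarly, the labeling snippet below sketches how the fine-tuned model could map a preprocessed image to a food label; the model file food_model.h5 and the class-name list are placeholders, not the project's actual artifacts.

# Hypothetical labeling sketch: map the model's prediction to a food name.
import numpy as np
import tensorflow as tf

LABELS = ["biryani", "dosa", "idli", "samosa"]          # placeholder class names
model = tf.keras.models.load_model("food_model.h5")     # assumed saved model file

def label_image(image_batch):
    """Return the predicted food label and its confidence for a single-image batch."""
    probabilities = model.predict(image_batch)[0]
    best = int(np.argmax(probabilities))
    return LABELS[best], float(probabilities[best])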
