
CHAPTER 1

INTRODUCTION

1.1 OVERVIEW

Obesity in adults and children is considered a global epidemic. The main cause of obesity is a combination of excessive food consumption and a lack of physical activity; therefore, the need to measure diet accurately becomes important. Preliminary studies among adolescents suggest that innovative use of technology may improve the accuracy of dietary information collected from young people. Moreover, as people grow accustomed to a sedentary lifestyle, they involuntarily drift away from being aware of their food energy intake. There is overwhelming evidence that the metabolic complications caused by obesity increase the risk of developing adverse health consequences such as diabetes, dyslipidemia and hypertension. People in general understand the links between diet and health; in fact, a wide spread of nutritional information and guidelines is available to users at their fingertips. However, such information alone has not prevented diet-related illnesses or helped patients to eat healthily. In most cases, people find it difficult to examine all of the information about nutrition and dietary choices. Furthermore, people often fail to measure or control their daily calorie intake owing to a lack of nutritional knowledge, irregular eating patterns or a lack of self-control. Empowering patients with an effective long-term solution requires novel mechanisms that help them make permanent changes to their dietary quality and calorie intake. Statistics show that 95% of people no longer follow any dietary plan, as such plans restrict them from consuming their day-to-day food. The primary cause of obesity is thus an imbalance between the amount of food taken in and the energy consumed by the individual, so a healthy diet is necessary, and maintaining one is an important goal for many people.

The process of tracking the number of calories consumed can be very tedious, as it requires the user to keep a food journal and perform messy calculations to estimate the number of calories in every food item. Through this research we try to classify Indian food images into their respective classes. The proposed software model uses machine learning as its base: it recognizes the food image uploaded as input by the user, processes the food image, and estimates the calories of the predicted food item. People record, upload, and share food images more willingly than ever on websites like Instagram, Facebook, etc., so it is convenient to locate large amounts of food-related data (images and videos). This, in turn, supports users in diet management and reduces the need for the manual paper-based approach.

1.2 OBJECTIVE

• Our aim in this project is to empower the user with a convenient, intelligent and accurate system that helps them become sensible about their nutrition intake.

• We employ a rather unique combination of region-based segmentation and deep-learning neural networks as a means of accurately classifying and recognizing food items and estimating their nutrition level.

CHAPTER 2
LITERATURE SURVEY

[1] Raikwar, H., Jain, H. and Baghel, A., "Calorie Estimation from Fast Food Images Using Support Vector Machine", 2019.

Proposed a model which focuses on estimating the number of calories in a food item by just taking its image as input, using an SVM. The proposed model applies some techniques of image processing followed by feature extraction. The authors designed the dataset, applied image-processing techniques to it, and then passed the processed dataset to the feature-extraction process. The features extracted from all the images are fed to a support vector machine (SVM) classifier, which classifies the images into the different classes specified in the learning algorithm. The model consists of several intermediate activities: (a) extracting the feature vector of the image, (b) identifying the food item in the image, and (c) predicting the calorie content of the food item in the image. The dataset includes images from the PFID (Pittsburgh Fast-Food Image Dataset) and calorie information from nutrition data. Pre-processing includes background subtraction to remove noise and unnecessary information.

[2] Subhi, M.A. and Ali, S.M., "A Deep Convolutional Neural Network for Food Detection and Recognition", 2018.

GoogLeNet refers to the Inception architecture developed by Szegedy et al., a deep convolutional neural architecture codenamed Inception. The objective of the Inception module is to act as a multi-level feature extractor by using 1×1, 3×3, and 5×5 convolutions inside a single module of the network; the result of this module is then fed as input to the next layer within the network. It was designed for vision networks, approximating the hypothesized optimal structure with dense, readily available components. With a little tuning of the module, modest gains were seen in comparison to the other reference networks. Inception V3, the latest version at the time, was used to build the classifier in this paper.

[3] Pathanjali, C., Salis, V.E., Jalaja, G. and Latha, A., "A Comparative Study of Indian Food Image Classification Using K-Nearest-Neighbor and Support-Vector-Machines", 2020.

Proposed an automatic food detection system that detects and recognizes varieties of Indian food. The proposed food recognition system is developed in such a way that it can classify Indian food items based on two different classification models, i.e., SVM and KNN. The proposed system uses combined color and shape features, and a comparative study of the performance of both classification models is performed. Parameters such as food density tables, color and shape recognition as part of the image processing, and classification with the SVM and KNN have been considered. The dataset contains the feature vectors extracted from the sample images.

[4] Reddy, V.H., Kumari, S., Muralidharan, V., Gigoo, K. and Thakare, B.S., "Food Recognition and Calorie Measurement using Image Processing and Convolutional Neural Network", in 2019 4th International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT), pp. 109-115, IEEE, May 2019.

To overcome manual labor and erroneous data, various applications were developed to calculate food intake. As one of the latest technological advancements for overcoming the difficulties of working with pictures of food items, a variety of e-health applications were developed that calculate the calories in food using the concept of image processing.

[5] P. Pouladzadeh, S. Shirmohammadi, and R. Almaghrabi, "Measuring Calorie and Nutrition from Food Image", IEEE Transactions on Instrumentation & Measurement, Vol. 63, No. 8, pp. 1947-1956, August 2017.

Introduced a semi-automatic system that assists dieticians in measuring calories and daily nutrient intake for the treatment of obese and overweight patients. The system enables the user/patient to obtain the measurement results of their food intake from the application, which simulates the calculation procedure performed by the dietician.

[6] T. Ege and K. Yanai, "Image-based food calorie estimation using knowledge on food categories, ingredients and cooking directions", in Proceedings of the Thematic Workshops of ACM Multimedia 2017, ACM, 2017, pp. 367-375.

Focused on analyzing which features and models are more suitable for food recognition, and combined them into a food analysis system to calculate the calories. The work automatically estimates the food calories from a food image via simultaneous learning of food calories, categories, ingredients and cooking methods using multi-task convolutional neural networks. In addition, a food portion estimation method estimates the energy from food images using generative adversarial networks. Although food recognition and nutrition content analysis have become a popular field in recent years, there are still a few challenges to be solved. The first is that most of the works deal with images containing only one food item and try to use a classification method to recognize the food. The second challenge is the time consumed to detect the food items: with detection models, it usually takes about 2 seconds to detect the food items in an image.

[7] K. Yanai and Y. Kawano, "Food image recognition using deep convolutional network with pre-training and fine-tuning", in 2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), IEEE, 2015, pp. 1-6.

Fine-tuned the AlexNet model and achieved the best results on public food datasets at that time, with a top-1 accuracy of 67.7% on UEC-FOOD-256. A related study evaluated the effectiveness of a deep-learning approach, based on the specifications of Google's image recognition architecture Inception, in classifying food images; that architecture is a 54-layer CNN.

CHAPTER 3

EXISTING METHOD AND PROPOSED SYSTEM

3.1 EXISTING METHOD

• The existing framework uses a Deep Convolutional Neural Network (DCNN) based on the ResNet50 architecture. Owing to the limited computational resources available to train the whole model, the ResNet model is imitated and the pre-trained weights are imported.

• The existing food recognition method is based on ResNet50, which is one of the winning networks in the ImageNet machine learning competition.

• ResNet50 is chosen over other architectures because of its smaller parameter size, which makes loading the model and its weights, as well as training, much faster.

• The solution includes pre-processing, training and classification. Training includes feature extraction and weight learning, which is done in the SoftMax layer of the CNN; the classification is also done in the CNN.

• The existing system only detects the food; it cannot estimate the food's nutrition level. (A sketch of the described ResNet50 setup follows this list.)
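As an illustration of the setup described above, a hedged Keras sketch of loading ResNet50 with imported pre-trained weights; the frozen base, the pooling head and the class count are assumptions, not the exact configuration used by the existing system:

    from tensorflow.keras.applications import ResNet50
    from tensorflow.keras import layers, models

    # Imitate the ResNet model and import pre-trained ImageNet weights.
    base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # limited resources: do not retrain the whole model

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(101, activation="softmax"),  # SoftMax layer; 101 food classes assumed
    ])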

3.2 PROPOSED SYSTEM

• In this project, we propose an assistive nutrition-level tracking system to help people succeed in their fight against diet-related health conditions.

• Our proposed system is a deep-learning-based system that allows the user to take a video of the food and measures the amount of nutrition intake automatically.

• In order to identify the food nutrition level accurately, the system uses a convolutional neural network algorithm and compares against the nutrition levels of the food data stored in the database.

• There is a need for a system that measures the nutrition value of daily food intake, and those daily intake values are maintained as daily health records.

• Hence, we propose a measurement method to estimate the amount of nutrition from different food images.

• The collected data are reviewed after 7 days to check whether the nutrition levels are good or poor. If the nutrition level is poor, the system suggests trying to eat more protein and fat; if the nutrition level indicates excessive intake, it suggests doing exercise.

CHAPTER 4

SYSTEM FUNCTION

4.1 ARCHITECTURAL DIAGRAM

Fig no: 4.1 block diagram for proposed system

4.2 MODULES

There are six modules used in this system

• Input acquisition module

• Image preprocessing and segmentation module

• Feature extraction module

• Dataset training module

• Nutrition level estimation module

• Suggestion module

4.2.1 MODULE DESCRIPTION

1. Input acquisition module:-

In this module, the camera captures the video; the multiple frames are converted into a single-frame image, which is sent to the next block for further processing.
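A minimal OpenCV sketch of this step (the camera index and output file name are assumptions):

    import cv2

    cap = cv2.VideoCapture(0)        # open the default camera (or a video file path)
    ret, frame = cap.read()          # grab one frame from the video stream
    if ret:
        cv2.imwrite("input_frame.jpg", frame)  # single-frame image for the next block
    cap.release()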

2. Image preprocessing and segmentation module:-

The segmentation process takes place here: the input image is segmented to identify the region of interest, which is necessary for detection. This module produces the processed and segmented images by performing a region-based segmentation process.

It uses key factors in the image, such as the hue-saturation-value (HSV) representation and descriptor points, in order to analyze the complete content of the image.

3. Feature extraction module:-

In this feature extraction module, features such as the color, size and shape of the food are extracted from the input food image.

4. Dataset training module:-

The nutrition levels of the food dataset are pre-stored in the database. In this module, the system uses the CNN algorithm to fetch the primitive features of the pre-stored data in the database and checks for the presence of those features in the input image.
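A hedged Keras sketch of the kind of CNN this module could train; the layer sizes, input shape and class count are illustrative assumptions, not the project's exact configuration:

    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),  # one output per food class (assumed 10)
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])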

5. Nutrition level estimation module:-

The convolutional neural network algorithm will estimate the food nutrition
level by comparing the dataset of the food nutritional level in the data base. After
that estimate the nutrition level is good or poor nutrition are intake.

The daily food intake data can be stored in the text file.
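A minimal sketch of appending one day's estimate to the text file (the file name and record format are assumptions):

    from datetime import date

    with open("daily_nutrition_log.txt", "a") as log:
        # date plus the estimated nutrition level for that day's intake
        log.write(f"{date.today()},good\n")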

6. Suggestion module:-

The collected data are reviewed after 7 days to check whether the nutrition levels are good or poor. If the nutrition level is poor, the system suggests trying to eat more protein and fat; if the nutrition level indicates excessive intake, it suggests doing exercise.

CHAPTER 5

SYSTEM SPECIFICATION

5.1 HARDWARE SPECIFICATION

 Processor type : AMD processor


 RAM : 8GB RAM
 Storage : 1TB
 Display : 20″ color display

5.2 SOFTWARE SPECIFICATION


 Front end : GUI
 Back end : Python
 Software used : Atom
 Platform : Windows

5.3 CONVOLUTIONAL NEURAL NETWORK


• A convolutional neural network (CNN) is a neural network that has one or more convolutional layers; it is used mainly for image processing, classification, segmentation and also for other autocorrelated data.
• A convolution is essentially sliding a filter over the input.
• Rather than looking at an entire image at once to find certain features, it can be more effective to look at smaller portions of the image.

Fig no 5.1 Food CNN algorithm image

5.3.1 Layers in a Convolutional Neural Network


A convolutional neural network has multiple hidden layers that help in extracting information from an image. The four important layers in a CNN are:
1. Convolution layer

2. ReLU layer

3. Pooling layer

4. Fully connected layer

Convolution Layer

This is the first step in the process of extracting valuable features from an
image. A convolution layer has several filters that perform the convolution
operation. Every image is considered as a matrix of pixel values.

Consider the following 5x5 image whose pixel values are either 0 or 1.
There’s also a filter matrix with a dimension of 3x3. Slide the filter matrix over the
image and compute the dot product to get the convolved feature matrix.

Fig no 5.2 convolution layer image
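The sliding dot product can be reproduced in a few lines of NumPy; the pixel and filter values below are illustrative, not necessarily those in the figure:

    import numpy as np

    image = np.array([[1, 1, 1, 0, 0],
                      [0, 1, 1, 1, 0],
                      [0, 0, 1, 1, 1],
                      [0, 0, 1, 1, 0],
                      [0, 1, 1, 0, 0]])
    kernel = np.array([[1, 0, 1],
                       [0, 1, 0],
                       [1, 0, 1]])

    # Slide the 3x3 filter over the 5x5 image and sum the element-wise products.
    out = np.zeros((3, 3), dtype=int)
    for i in range(3):
        for j in range(3):
            out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
    print(out)  # the convolved feature matrix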

ReLU layer

ReLU stands for the rectified linear unit. Once the feature maps are
extracted, the next step is to move them to a ReLU layer. ReLU performs an
element-wise operation and sets all the negative pixels to 0. It introduces non-
linearity to the network, and the generated output is a rectified feature map. Below
is the graph of a ReLU function:

Fig no 5.3 ReLU layer image
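The element-wise rectification described above is one line in NumPy (the feature-map values are illustrative):

    import numpy as np

    feature_map = np.array([[-3, 2],
                            [5, -1]])
    rectified = np.maximum(feature_map, 0)  # negative pixels become 0
    print(rectified)  # [[0 2]
                      #  [5 0]]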

Pooling Layer

Pooling is a down-sampling operation that reduces the dimensionality of


the feature map. The rectified feature map now goes through a pooling layer to
generate a pooled feature map.

Fig no 5.4 pooling layer image
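A small NumPy sketch of 2x2 max pooling with stride 2 (the feature-map values are illustrative):

    import numpy as np

    fmap = np.array([[1, 3, 2, 4],
                     [5, 6, 1, 2],
                     [7, 2, 9, 1],
                     [3, 4, 5, 6]])

    # Split into 2x2 windows and keep the largest value in each window.
    pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))
    print(pooled)  # [[6 4]
                   #  [7 9]]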

5.3.2 Advantages:

 Very high accuracy in image recognition problems.
 Automatically detects the important features without any human supervision.
 Weight sharing.

Disadvantages:

 CNNs do not encode the position and orientation of objects.
 Lack of ability to be spatially invariant to the input data.
 A lot of training data is required.

5.4 Introduction to Image Pre-Processing

For a machine learning engineer, data pre-processing (data cleansing) is a crucial step, and most ML engineers spend a good amount of time on data pre-processing before building the model. Some examples of data pre-processing include outlier detection, missing-value treatment and removal of unwanted or noisy data.

Similarly, image pre-processing is the term for operations on images at the lowest level of abstraction. These operations do not increase image information content; rather, they decrease it, if entropy is taken as the information measure. The aim of pre-processing is an improvement of the image data that suppresses undesired distortions or enhances the image features relevant for the further processing and analysis task.

There are 4 different types of Image Pre-Processing techniques and they are
listed below.

1. Pixel brightness transformations/ Brightness corrections

2. Geometric Transformations

3. Image Filtering and Segmentation

4. Fourier transform and image restoration

Let’s discuss each type in detail.

5.4.1 Pixel brightness transformations (PBT)

Brightness transformations modify pixel brightness and the transformation


depends on the properties of a pixel itself. In PBT, output pixel’s value depends
only on the corresponding input pixel value. Examples of such operators include
brightness and contrast adjustments as well as color correction and
transformations.

Contrast enhancement is an important area in image processing for both


human and computer vision. It is widely used for medical image processing and as
a pre-processing step in speech recognition, texture synthesis, and many other
image/video processing applications.

There are two types of Brightness transformations and they are below.

1. Brightness corrections

2. Gray scale transformation

The most common pixel brightness transform operations are:

1. Gamma correction or Power Law Transform

2. Sigmoid stretching

3. Histogram equalization

Two commonly used point processes are multiplication and addition with a constant:

G(x) = αf(x) + β

The parameters α > 0 and β are called the gain and bias parameters; they are sometimes said to control contrast and brightness, respectively. In OpenCV, this transform is provided by cv2.convertScaleAbs(image, alpha=alpha, beta=beta), as sketched below.
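A minimal usage sketch (the input file name and the α, β values are assumptions):

    import cv2

    image = cv2.imread("food.jpg")
    alpha, beta = 1.5, 40  # gain (contrast) and bias (brightness)

    # Computes G(x) = alpha*f(x) + beta, then clips the result to the 0-255 range.
    adjusted = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)
    cv2.imwrite("food_adjusted.jpg", adjusted)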

Fig no 5.5 PBT food image

5.5 Region-Based Segmentation

In this segmentation, we grow regions by recursively including the neighboring pixels that are similar and connected to the seed pixel. We use similarity measures, such as differences in gray levels, for regions with homogeneous gray levels, and we use connectivity to prevent connecting different parts of the image.

There are two variants of region-based segmentation:

 Top-down approach

 First, we need to define the seed pixels: either all pixels are defined as seed pixels, or pixels are chosen randomly. Regions are grown until all pixels in the image belong to a region.

 Bottom-Up approach

 Select seeds only from objects of interest. Grow regions only if the similarity criterion is fulfilled.

 Similarity Measures:

 Similarity measures can be of different types: for a grayscale image, the similarity measure can be different textures and other spatial properties, the intensity difference within a region, or the distance between the mean values of regions.

 Region merging techniques:

In the region merging technique, we try to combine the regions that contain a single object and separate them from the background.
Pros:

• Since it performs a simple threshold calculation, it is fast.

• Region-based segmentation works better when the object and background have high contrast.

Limitations:

• It does not produce accurate segmentation results when there are no significant differences between the pixel values of the object and the background.
Implementation:

• In this implementation, we perform edge- and region-based segmentation, using the scikit-image module and an image from its own dataset, as sketched below.
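A minimal sketch of seeded region growing with scikit-image's watershed (the sample image and the seed thresholds are assumptions):

    import numpy as np
    from skimage import data, filters
    from skimage.segmentation import watershed

    image = data.coins()                # sample image from scikit-image's dataset
    elevation = filters.sobel(image)    # edge map used as the elevation surface

    # Seed markers chosen from simple intensity thresholds (values are assumptions).
    markers = np.zeros_like(image)
    markers[image < 30] = 1             # background seeds
    markers[image > 150] = 2            # object seeds

    segmented = watershed(elevation, markers)  # grow regions from the seeds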

Fig no 5.6 Region-based segmentation food image

5.6 HSV color space

It stores color information in a cylindrical representation of RGB color points and attempts to depict the colors as perceived by the human eye. In OpenCV, the hue value varies from 0-179, the saturation value from 0-255 and the value from 0-255. It is mostly used for color segmentation purposes, as sketched below.

Fig no 5.7 HSV image
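A minimal OpenCV sketch of HSV-based color segmentation (the input file name and the hue range, here greens, are assumptions):

    import cv2
    import numpy as np

    image = cv2.imread("food.jpg")
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

    # Keep only pixels whose H, S, V fall inside the assumed range.
    lower = np.array([35, 50, 50])
    upper = np.array([85, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)             # binary mask of in-range pixels
    segmented = cv2.bitwise_and(image, image, mask=mask)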

5.6.1 Applications of HSV

The HSV color space is widely used to generate high-quality computer graphics. In simple terms, it is used to select the various colors needed for a particular picture. An HSV color wheel is used to select the desired color: a user can select the particular color needed for the picture from the color wheel. It gives color according to human perception.

5.6.2 HSV Representations

The HSV color wheel is used to pick the desired color. Hue is represented by the circle in the wheel, and a separate triangle is used to represent saturation and value: the horizontal axis of the triangle indicates value and the vertical axis represents saturation. When you need a particular color for your picture, you first pick a color from the hue (the circular region), and then from the vertical axis of the triangle you select the desired saturation. For brightness, you select the desired value along the horizontal axis of the triangle.

Sometimes the HSV model is illustrated as a cylindrical or conical object.


When it is represented as a conical object, hue is represented by the circular part of
the cone. The cone is usually represented in the three-dimensional form. The
saturation is calculated using the radius of the cone and value is the height of the
cone. A hexagonal cone can also be used to represent the HSV model. The
advantage of the conical model is that it is able to represent the HSV color space in
a single object. Due to the two-dimensional nature of computer interfaces, the
conical model of HSV is best suited for selecting colors for computer graphics.

The application of the cylindrical model of HSV color space is similar to the
conical model. Calculations are done in a similar way.

Theoretically, the cylindrical model is the most accurate form of HSV color
space calculation. In practical use, it is not possible to distinguish between
saturation and hue when the value is lowered.

The cylindrical model has lost its relevance due to this and the cone shape is
preferred over it.

5.6.3 Advantages of HSV

• The HSV color space is quite similar to the way in which humans
perceive color.
• The other models, except for HSL, define color in relation to the primary
colors.
• The colors used in HSV can be clearly defined by human perception,
which is not always the case with RGB or CMYK.
5.7 CONTOUR MAPPING

A contour map is a type of map on which the shape of the land surface is shown by contour lines; the relative spacing between these lines indicates the relative slope of the surface.

If we deduce this definition further, a contour map is the delineation of some property on a map by constructed lines; the lines are drawn through equal values of that property, which is available as data points.

Contour mapping is a type of topographic mapping, but if we study the concepts closely there is a distinct difference between the two, so the terms cannot be used as synonyms. A topographic map is an accurate map that displays natural terrain and also man-made objects such as buildings, roads, or bridges, while contour maps represent changes in elevation with the help of contour lines.

Each contour line marked on a map joins points of equal height. The method of contouring cannot be relied on completely, because two investigators can produce different maps whenever interpolation between data points takes place.

5.7.1 Uses of Contour Mapping

Contours provide important information that helps us study the nature of the terrain. This proves useful for the selection of sites, for determining the catchment area of a drainage basin, or for finding the intervisibility between two or more stations, etc. Some of the uses of contours are described below.

Nature of Ground

• To study the nature of the ground that is of interest.

To Locate a Route

• A contour map provides worthy information on how to locate a route.

Intervisibility between Stations

• When the intervisibility between two points cannot be easily ascertained by inspecting the area, the contour map comes to the rescue.

To Determine the Catchment Area or Drainage Area

• The catchment area of a particular river can be well determined by using the contour map. The watershed line indicates the drainage basin of the river; it passes through the ridges and saddles of the terrain around the river and is always perpendicular to the contour lines. The catchment area contained between this watershed line and the river outlet is measured with a planimeter.

Fig no 5.8 Contour mapping image

CHAPTER 6

SYSTEM SOFTWARE

6.1 PYQT5

PyQt5 is the latest version of a GUI widgets toolkit developed by Riverbank Computing. It is a Python interface for Qt, one of the most powerful and popular cross-platform GUI libraries. PyQt5 is a blend of the Python programming language and the Qt library, and this introduction will assist you in creating graphical applications with the help of PyQt.

PyQt is a GUI widgets toolkit. It is a Python interface for Qt, one of the most powerful and popular cross-platform GUI libraries. PyQt was developed by Riverbank Computing Ltd. The latest version of PyQt can be downloaded from its official website, riverbankcomputing.com.

The PyQt API is a set of modules containing a large number of classes and functions. While the QtCore module contains non-GUI functionality for working with files and directories etc., the QtGui module contains all the graphical controls. In addition, there are modules for working with XML (QtXml), SVG (QtSvg), SQL (QtSql), etc.

A list of frequently used modules is given below −

 QtCore − Core non-GUI classes used by other modules

 QtGui − Graphical user interface components

 QtMultimedia − Classes for low-level multimedia programming

 QtNetwork − Classes for network programming

 QtOpenGL − OpenGL support classes

 QtScript − Classes for evaluating Qt Scripts

 QtSql − Classes for database integration using SQL

 QtSvg − Classes for displaying the contents of SVG files

 QtWebKit − Classes for rendering and editing HTML

 QtXml − Classes for handling XML

 QtWidgets − Classes for creating classic desktop-style UIs

 QtDesigner − Classes for extending Qt Designer

Supporting Environments

PyQt is compatible with all the popular operating systems, including Windows, Linux, and macOS. It is dual licensed, available under the GPL as well as a commercial license. The latest stable version is PyQt5-5.13.2.

Windows

Wheels for 32-bit or 64-bit architecture are provided that are compatible
with Python version 3.5 or later. The recommended way to install is
using PIP utility −

pip3 install PyQt5

To install development tools such as Qt Designer to support PyQt5 wheels,


following is the command −

pip3 install pyqt5-tools

You can also build PyQt5 on Linux/macOS from the source


code www.riverbankcomputing.com/static/Downloads/PyQt5
Fig no 6.1 Qmainwindow

The PyQt installer comes with a GUI builder tool called Qt Designer. Using its simple drag-and-drop interface, a GUI interface can be quickly built without having to write the code. It is, however, not an IDE such as Visual Studio; hence, Qt Designer does not have the facility to debug and build the application.

Start the Qt Designer application, which is a part of the development tools and is installed in the scripts folder of the virtual environment.

Fig no 6.2 PYQT5 main window

Start designing GUI interface by choosing File → new menu.

Fig no 6.3 dialog box

The designed form is saved as demo.ui. This .ui file contains an XML representation of the widgets and their properties in the design. This design is translated into its Python equivalent by using the pyuic5 command-line utility, as follows:

pyuic5 -x demo.ui -o demo.py

In the above command, the -x switch adds a small amount of additional code to the generated Python script (from the XML) so that it becomes a self-executable standalone application:

if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    Dialog = QtWidgets.QDialog()
    ui = Ui_Dialog()
    ui.setupUi(Dialog)
    Dialog.show()
    sys.exit(app.exec_())

The resultant Python script is executed to show the dialog box:

python demo.py

The user can input data in the input fields, but clicking the Add button will not generate any action, as the button is not associated with any function. Reacting to a user-generated response is called event handling.

Unlike a console-mode application, which is executed in a sequential manner, a GUI-based application is event driven. Functions or methods are executed in response to the user's actions, such as clicking a button, selecting an item from a collection or a mouse click; these are called events.

Widgets used to build the GUI interface act as the source of such events. Each PyQt widget, which is derived from the QObject class, is designed to emit a 'signal' in response to one or more events. The signal on its own does not perform any action; instead, it is 'connected' to a 'slot'. The slot can be any callable Python function.

Using Qt Designer's Signal/Slot Editor

First design a simple form with a LineEdit control and a PushButton.

Fig no 6.6 design page

It is desired that if the button is pressed, the contents of the text box should be erased. The QLineEdit widget has a clear() method for this purpose; hence, the button's clicked signal is to be connected to the clear() method of the text box.

To start with, choose Edit Signals/Slots from the Edit menu (or press F4). Then highlight the button with the mouse and drag the cursor towards the textbox.

Fig no 6.7 Form design page

As the mouse is released, a dialog showing the signals of the button and the methods of the slot will be displayed. Select the clicked signal and the clear() method. The equivalent connection can also be made directly in code, as sketched below.
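A minimal self-contained sketch of the same signal/slot connection made in code rather than in Qt Designer (the widget names and layout are assumptions, not the exact form from the screenshots):

    from PyQt5 import QtWidgets
    import sys

    app = QtWidgets.QApplication(sys.argv)
    window = QtWidgets.QWidget()
    layout = QtWidgets.QVBoxLayout(window)
    lineEdit = QtWidgets.QLineEdit()
    pushButton = QtWidgets.QPushButton("Clear")
    layout.addWidget(lineEdit)
    layout.addWidget(pushButton)

    # Connect the button's clicked signal to the line edit's clear() slot.
    pushButton.clicked.connect(lineEdit.clear)

    window.show()
    sys.exit(app.exec_())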

Fig no 6.8 editing page

6.2 PYTHON PROGRAMMING LANGUAGE

Fig no 6.9 python

Python is a powerful general-purpose programming language. It is used in web development, data science, creating software prototypes, and so on. Fortunately for beginners, Python has a simple, easy-to-use syntax, which makes it an excellent language for learning to program.

About Python Programming

 Free and open-source − You can freely use and distribute Python, even for commercial use.

 Easy to learn − Python has a very simple and elegant syntax. It's much easier to read and write Python programs compared to other languages like C++, Java, and C#.

 Portable − You can move Python programs from one platform to another and run them without any changes.

Some facts about Python Programming Language:

• Python is currently the most widely used multi-purpose, high-level programming language.
• Python allows programming in object-oriented and procedural paradigms.
• Python programs are generally smaller than those in other programming languages like Java. Programmers have to type relatively less, and the indentation requirement of the language keeps programs readable at all times.
• The Python language is used by almost all tech-giant companies, such as Google, Amazon, Facebook, Instagram, Dropbox, and Uber.
• The biggest strength of Python is its huge collection of standard libraries, which can be used for the following:
• Machine learning
• GUI applications (like Kivy, Tkinter, and PyQt)
• Web frameworks like Django (used by YouTube, Instagram, and Dropbox)
• Image processing (like OpenCV, Pillow)
• Web scraping (like Scrapy, Beautiful Soup, and Selenium)
• Test frameworks
• Multimedia
• Scientific computing
• Text processing and many more.
6.3 PYCHARM INTRODUCTION

PyCharm is the most popular IDE for the Python scripting language. This section gives an introduction to PyCharm and explains its features.

PyCharm offers some of the best features to its users and developers in the
following aspects

 Code completion and inspection

 Advanced debugging

 Support for web programming and frameworks such as Django and Flask

Features of PyCharm

Besides, a developer will find PyCharm comfortable to work with because


of the features mentioned below

1. Code Completion
PyCharm enables smoother code completion whether it is for built in
or for an external package.
2. SQLAlchemy as Debugger
You can set a breakpoint, pause in the debugger and can see the SQL
representation of the user expression for SQL Language code.

3. Git Visualization in Editor

When coding in Python, commits are routine for a developer. You can check the last commit easily in PyCharm, as it shows blue sections that mark the difference between the last commit and the current one.

4. Code Coverage in Editor

You can run .py files outside the PyCharm editor as well, with code-coverage details marked elsewhere in the project tree, in the summary section, etc.

5. Package Management
All the installed packages are displayed with proper visual
representation. This includes list of installed packages and the ability to
search and add new packages.

6. Local History
Local History keeps track of changes in a way that complements Git. Local history in PyCharm gives complete details of what is needed to roll back and what is to be added.

7. Refactoring
Refactoring is the process of renaming one or more files at a time and
PyCharm includes various shortcuts for a smooth refactoring process.

8. User Interface of PyCharm Editor


The user interface of PyCharm editor is shown in the screenshot given
below. Observe that the editor includes various features to create a new
project or import from an existing project.

Fig no: 6.1 PyCharm software

6.3.1 PYCHARM INSTALLATION

From the screenshot shown above, you can see the newly created project
Demo and the site-packages folder for package management along with various
other folders.

In this chapter, you will learn in detail about the installation process of
PyCharm on your local computer.

Steps Involved

You will have to follow the steps given below to install PyCharm on your
system. These steps show the installation procedure starting from downloading the
PyCharm package from its official website to creating a new project.

Step 1

Download the required package or executable from the official website of PyCharm, https://www.jetbrains.com/pycharm/download/#section=windows. Here you will observe two versions of the package for Windows, as shown in the screenshot given below −

Fig no:6.2 PyCharm installation

Note that the professional package involves all the advanced features and comes with a free trial for a few days, after which the user has to buy a licensed key for activation. The community package is free and can be downloaded and installed as and when required; it includes all the basic features needed. Note that we will continue with the community package throughout this section.

Step 2

Download the community package (executable file) onto your system and
mention a destination folder as shown below –

Fig no:6.3 PyCharm community package

Step 3

Now, begin the installation procedure similar to any other software package.

Step 4

Once the installation is successful, PyCharm asks you to import settings of


the existing package if any.
Fig No:6.4 PyCharm Import

This helps in creating a new Python project where you can work from scratch. Note that, unlike other IDEs, PyCharm only focuses on working with projects of the Python scripting language.

This section will discuss the basics of PyCharm and make you feel comfortable beginning to work in the PyCharm editor.

When you launch PyCharm for the first time, you can see a welcome screen
with entry points to IDE such as

 Creating or opening the project

 Checking out the project from version control

 Viewing the documentation

 Configuring the IDE

Fig no:6.5 PyCharm configure page

Recall that we created a project named demo1 earlier; we will be referring to the same project here. Now we will start creating new files in the same project to understand the basics of the PyCharm editor.

Fig no:6.6 PyCharm editing page

Fig No:6.7 PyCharm main window

The above snapshot describes the project overview of demo1 and the options
to create a new file. Let us create a new file called main.py.

The code included in main.py is as follows

Fig no:6.8 coding window

The code created in the file main.py using PyCharm Editor is displayed as
shown below

This code can be run within IDE environment. The basic demonstration of
running a program is discussed below

Fig No:6.9 PyCharm project interpreter

Fig No:6.10 PyCharm terminal window

Note that we have deliberately included some errors in the specified code, so that the console's execution of the code and its error reporting can be demonstrated.

Fig No:6.11 PyCharm execute window

6.3.2 RUNNING AND DEBUGGING

Running Python code comprises two modes: running a script and debugging the script. This section focuses on debugging the Python script using PyCharm.

Steps Involved

The steps for debugging the Python project are as explained below −

Step 1

Start with debugging the Python project as shown in the screenshot below –

Fig No:6.12 PyCharm running image

Step 2

Now, the Windows firewall asks permission for debugging the Python project, as the procedure involves line-by-line execution.

Fig no:6.13 debugging window


Step 3

The debugging console is created in the PyCharm editor, as shown below, and it executes the code line by line.

Fig No:6.14 PyCharm editor and output line

Fig no:6.15 debugging console window

The run controls step from one line to the next, executing the code and producing the output as we want.

Fig no:6.16 running window

Understanding Breakpoints

While debugging a particular script, it is common to create a breakpoint intentionally. Breakpoints are intentional stopping places where the code is paused in order to inspect the output at a specific stage.

Fig No:6.17 PyCharm break points

In PyCharm, breakpoints are visible in a separate dialog in the specified editor. It includes various attributes for evaluating the defined breakpoints and a tracing log for them, with the main motive of achieving better programming practice.

Fig no:6.18 Understanding Breakpoints

6.4 PYTHON 2 VS. PYTHON 3

If you are reading this in 2019, there is no point even discussing this, unless and until you work on some project which is still running on a version of Python 2.x.

When we say Python 2.x we mean Python 2.7, and when we say Python 3.x we mean Python 3.7.

Python 3 was released in 2008. There was initial hesitation around the adoption of Python 3, but not anymore: almost all Python libraries/modules have been moved to Python 3 or made compatible with Python 3.x.

If you still want to go for Python 2.7, be informed that in 2020 the Python 2.x version will be officially discontinued.

Important Differences between Python 2 and Python 3

Although there are many changes in the newer version of the language, i.e. in Python 3 as compared to Python 2, we will cover the most important ones, which can cause issues if you are porting your code from Python 2.7 (or any other 2.x version) to Python 3.x.

New print() function syntax

Yes, this is one of the most visible changes, as the print statement is used a lot while you code in Python.

In Python 2.x, there was no need to add parentheses after the print keyword to enclose the text to be printed, but from the 3.x version of Python the correct syntax is print("TEXT TO BE PRINTED").
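For example:

    # Python 2.x (legacy): print was a statement
    #   print "TEXT TO BE PRINTED"

    # Python 3.x: print is a built-in function and needs parentheses
    print("TEXT TO BE PRINTED")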

CHAPTER 7

HARDWARE REQUIREMENTS & SPECIFICATION

7.1 Advanced Micro Devices (AMD) processor

AMD stands for Advanced Micro Devices. It is an American multinational semiconductor company based in Santa Clara, California, founded by Jerry Sanders, Jack Gifford, and John Carey. It started out supplying x86 processors as a second-source manufacturer and became a competitor with the Am386.

Fig no 7.1 AMD processor

On a scale of 1-10, AMD processors come in at 5-10. They are cheaper than Intel processors in a similar range and are efficient compared to the current-generation Core series. AMD APUs are also a good option for their good iGPU performance and CPU performance comparable to the Core i series. Laptops powered with Ryzen processors often clock lower and less aggressively compared to Intel-powered laptops, but they often run cooler and longer on battery; thus, for laptops where higher iGPU performance and longer battery life are preferred, Ryzen-powered laptops can be used. When building a new desktop PC, however, the older FX-series CPUs and A-series APUs should be avoided because of their higher power consumption and heat output.

If we talk about desktop and mobile use, and you only want to do normal gaming and everyday tasks, then a Ryzen APU is the way to go. For heavier tasks like video editing, 3D modelling, etc., Ryzen 7 or 9 CPUs or Threadripper should be preferred.

For Ryzen desktop CPUs and APUs on the AM4 platform, the motherboard chipset should be checked for support, otherwise the PC may not boot; this can, however, be easily solved with motherboards that offer USB BIOS flashing for newer processors.

7.1.1 Pros of AMD Processor

1. AMD is cheaper

For a user who has a smaller budget, this is a good processor. The price is quite cheap when compared with Intel; hence it is among the best choices for gamers. The price remains low because AMD has not been dominating the world market, which is still dominated by Intel, and has not yet earned as much reputation in the market.

2. Has More Superior Graphics

Along with the cheaper price, the quality of the AMD graphics processor is better and suitable for playing games, because it makes the graphics display of the game more engaging. For YouTubers or renderers, however, Intel is a better choice, as AMD can be less suitable for such workloads.

3. AMD Processors Can Detect Malware

AMD processors have a feature called Enhanced Virus Protection (EVP), which can detect viruses and malware. This feature helps to check whether there is virus content in the running program.

4. Can Handle 64-bit Applications Properly

With the advancement of technology, application development is growing rapidly, and much of the resulting software is 64-bit based. Thus, AMD makes its processors more optimal when handling 64-bit based applications.

7.1.2 Cons of AMD Processor

1. Less Fame

Talking about the number of users in the world, AMD is very far behind Intel; the AMD brand is not very familiar to ordinary people. This is a reason AMD processors are cheaper than Intel's: their lesser fame, even though the quality of AMD is not less than Intel's.

2. Fast Heat Generation

AMD processors often expel heat because they do not use the heat sink as the main cooling component; they still use the fan as a cooling component. However, some people have switched to water coolers to reduce the noise, which is not pleasant to hear.

3. Lost to Multimedia

Users who are mostly involved in the multimedia world are not recommended to use this processor for their work. Intel is much preferable, because it is designed to handle matters relating to multimedia.

Hence, these are a few of the pros and cons of the AMD processor. Though it is lagging behind Intel processors, it is doing its best. Analyze the market and your needs to choose the best processor for yourself.

Fig no 7.2 processor working chart

7.2 RAM (Random Access Memory)

Computer memory or random access memory (RAM) is your system’s


short-term data storage; it stores the information your computer is actively using so
that it can be accessed quickly. The more programs your system is running, the
more memory you’ll need.

RAM is of two types −

 Static RAM (SRAM)


 Dynamic RAM (DRAM)

Fig no 7.3 RAM memory image

7.2.1 History of RAM

 The first type of RAM was introduced in 1947 with the Williams tube. It
was used in CRT (cathode ray tube), and the data was stored as
electrically charged spots on the face.

 The second type of RAM was a magnetic-core memory, invented in
1947. It was made of tiny metal rings and wires connecting to each ring.
A ring stored one bit of data, and it can be accessed at any time.

 The RAM which we know today, as solid-state memory, was invented by


Robert Dennard in 1968 at IBM Thomas J Watson Research Centre. It is
specifically known as dynamic random access memory (DRAM) and has
transistors to store bits of data. A constant supply of power was required
to maintain the state of each transistor.

 In October 1970, Intel introduced the Intel 1103, its first commercially available DRAM.

 In 1993, Samsung introduced the KM48SL2000 synchronous DRAM


(SDRAM).

 In 1996, DDR SDRAM was commercially available.

 In 1999, RDRAM was available for computers.

 In 2003, DDR2 SDRAM began being sold.

 In June 2007, DDR3 SDRAM started being sold.

 In September 2014, DDR4 became available in the market.

7.2.2 Static RAM (SRAM)

The word static indicates that the memory retains its contents as long as power is being supplied. However, the data is lost when the power goes down, due to its volatile nature. SRAM chips use a matrix of six transistors and no capacitors.
Transistors do not require power to prevent leakage, so SRAM need not be
refreshed on a regular basis.

There is extra space in the matrix, hence SRAM uses more chips than
DRAM for the same amount of storage space, making the manufacturing costs
higher. SRAM is thus used as cache memory and has very fast access.

Fig no 7.4 static RAM

7.2.3 Dynamic RAM (DRAM)

DRAM, unlike SRAM, must be continually refreshed in order to maintain


the data. This is done by placing the memory on a refresh circuit that rewrites the
data several hundred times per second. DRAM is used for most system memory as
it is cheap and small. All DRAMs are made up of memory cells, which are
composed of one capacitor and one transistor.
Fig no 7.5 Dynamic RAM

7.2.4 What is RAM used for?

RAM allows your computer to perform many of its everyday tasks, such as
loading applications, browsing the internet, editing a spreadsheet, or experiencing
the latest game. Memory also allows you to switch quickly among these tasks,
remembering where you are in one task when you switch to another task. As a rule,
the more memory you have, the better.

When you turn on your computer and open a spreadsheet to edit it, but first
check your email, you’ll have used memory in several different ways. Memory is
used to load and run applications, such as your spreadsheet program, respond to
commands, such as any edits you made in the spreadsheet, or toggle between

multiple programs, such as when you left the spreadsheet to check email. Memory
is almost always being actively used by your computer.

Advantages of RAM

• It makes I/O processes faster and more efficient.
• Users make use of a RAM disk to speed up applications like games, audio and video editing, databases, etc.
• The performance of a RAM disk is high.
• It creates a virtual disk from the RAM.
• The main benefit of a RAM disk is that you can access it at maximum bandwidth without any fear of mechanical failure.
• Also, at maximum bandwidth, it does not produce excessive heat, noise or vibration.

Disadvantages of using RAM

• Its most significant disadvantage is its cost. As we know that, RAM is


very costly as compared to a hard disk.
• If your computer lost power and your system gets crashed, then all of
your saved data also get lost.
• RAM disk reserves a good portion of your memory so that you cannot
use that memory for any other purpose.
Applications of using a RAM disk

A RAM disk is like a hard drive, but it makes use of memory. You can copy applications and files to the RAM disk and then start those applications or open those files from there, if the program allows it. Mostly, RAM disks are suitable for those

• who want to store temporary data.
• You can also set the Windows temp folder location there, for instance. Whenever you restart your computer, this data is removed entirely automatically, and you do not need to do this manually.

A lot of RAM disk software is available these days.

CHAPTER 8

RESULTS

Thus, our food nutritional intake tracking system based on deep learning was successfully implemented. In this system, the nutritional status of a person's food intake is determined using a convolutional neural network algorithm, and the measured nutritional data are stored in a file as a daily health record.

Fig no 8.1 Poor Food Detects

CHAPTER 9

CONCLUSION

The convolutional neural network-based model is trained over a large number of food images, which enhances the model's capability to extract the required features quickly. In the result analysis, the accuracy obtained on the training dataset of images is about 99%. A larger dataset that includes more varied food images could be built to obtain better results. The need for a system that measures daily food intake for a healthy diet is crucial, owing to insufficient knowledge of diet and nutrition requirements, and the measured nutrition-level data of the food intake can be stored in a text file. Hence, we proposed a measurement method to estimate the number of calories from different food images by measuring features such as the color of the food in the image.

CHAPTER 10

CODING AND OUTPUT

10.1.1 FRONT END
from PyQt5 import QtCore, QtGui, QtWidgets
from page2 import Ui_MainWindow_1


class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(800, 600)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.label = QtWidgets.QLabel(self.centralwidget)
        self.label.setGeometry(QtCore.QRect(-4, -10, 831, 631))
        self.label.setText("")
        self.label.setPixmap(QtGui.QPixmap("1.jpg"))
        self.label.setScaledContents(True)
        self.label.setObjectName("label")
        self.pushButton = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton.setGeometry(QtCore.QRect(34, 20, 681, 61))
        self.pushButton.setStyleSheet("background-color: rgb(0, 255, 255);\n"
                                      "font: 75 14pt \"MS Shell Dlg 2\";")
        self.pushButton.setObjectName("pushButton")
        self.pushButton_2 = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton_2.setGeometry(QtCore.QRect(570, 270, 161, 61))
        self.pushButton_2.setStyleSheet("background-color: rgb(255, 170, 127);\n"
                                        "font: 75 12pt \"MS Shell Dlg 2\";")
        self.pushButton_2.setObjectName("pushButton_2")
        self.pushButton_5 = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton_5.setGeometry(QtCore.QRect(300, 460, 191, 61))
        self.pushButton_5.setStyleSheet("background-color: rgb(85, 255, 255);\n"
                                        "font: 75 14pt \"MS Shell Dlg 2\";")
        self.pushButton_5.setObjectName("pushButton_5")
        MainWindow.setCentralWidget(self.centralwidget)
        self.pushButton_5.clicked.connect(self.next_fun)

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
        self.pushButton.setText(_translate("MainWindow", "DIABETES PREDICTION ANALYSER "))
        self.pushButton_2.setText(_translate("MainWindow", ""))
        self.pushButton_5.setText(_translate("MainWindow", "NEXT"))

    def next_fun(self):
        self.Mainwindow_1 = QtWidgets.QMainWindow()
        self.ui = Ui_MainWindow_1()
        self.ui.setupUi(self.Mainwindow_1)
        self.Mainwindow_1.show()


if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = QtWidgets.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())
from PyQt5 import QtCore, QtGui, QtWidgets
from page3 import Ui_MainWindow_2


class Ui_MainWindow_1(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(800, 600)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.label = QtWidgets.QLabel(self.centralwidget)
        self.label.setGeometry(QtCore.QRect(-34, -10, 851, 621))
        self.label.setText("")
        self.label.setPixmap(QtGui.QPixmap("2.webp"))
        self.label.setScaledContents(True)
        self.label.setObjectName("label")
        self.pushButton = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton.setGeometry(QtCore.QRect(84, 30, 601, 61))
        self.pushButton.setStyleSheet("font: 75 14pt \"MS Shell Dlg 2\";\n"
                                      "background-color: rgb(0, 255, 255);")
        self.pushButton.setObjectName("pushButton")
        self.label_2 = QtWidgets.QLabel(self.centralwidget)
        self.label_2.setGeometry(QtCore.QRect(100, 220, 141, 41))
        self.label_2.setStyleSheet("background-color: rgb(255, 170, 127);\n"
                                   "font: 75 14pt \"MS Shell Dlg 2\";")
        self.label_2.setObjectName("label_2")
        self.label_3 = QtWidgets.QLabel(self.centralwidget)
        self.label_3.setGeometry(QtCore.QRect(100, 290, 141, 41))
        self.label_3.setStyleSheet("background-color: rgb(255, 170, 127);\n"
                                   "font: 75 14pt \"MS Shell Dlg 2\";")
        self.label_3.setObjectName("label_3")
        self.lineEdit = QtWidgets.QLineEdit(self.centralwidget)
        self.lineEdit.setGeometry(QtCore.QRect(340, 220, 401, 41))
        self.lineEdit.setStyleSheet("font: 75 12pt \"MS Shell Dlg 2\";")
        self.lineEdit.setObjectName("lineEdit")
        self.lineEdit_2 = QtWidgets.QLineEdit(self.centralwidget)
        self.lineEdit_2.setGeometry(QtCore.QRect(340, 290, 401, 41))
        self.lineEdit_2.setStyleSheet("font: 75 12pt \"MS Shell Dlg 2\";")
        self.lineEdit_2.setObjectName("lineEdit_2")
        self.pushButton_2 = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton_2.setGeometry(QtCore.QRect(280, 460, 201, 61))
        self.pushButton_2.setStyleSheet("background-color: rgb(85, 255, 255);\n"
                                        "font: 75 16pt \"MS Shell Dlg 2\";")
        self.pushButton_2.setObjectName("pushButton_2")
        MainWindow.setCentralWidget(self.centralwidget)
        self.pushButton_2.clicked.connect(self.next_fun)

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
        self.pushButton.setText(_translate("MainWindow", "LOGIN PAGE"))
        self.label_2.setText(_translate("MainWindow", " USER NAME"))
        self.label_3.setText(_translate("MainWindow", " PASSWORD"))
        self.lineEdit.setPlaceholderText(_translate("MainWindow", "ENTER YOUR NAME "))
        self.lineEdit_2.setPlaceholderText(_translate("MainWindow", "ENTER YOUR PASSWORD"))
        self.pushButton_2.setText(_translate("MainWindow", "LOGIN"))

    def next_fun(self):
        user = self.lineEdit.text()
        password = self.lineEdit_2.text()
        if (user == "12345" and password == "12345"):
            self.Mainwindow_2 = QtWidgets.QMainWindow()
            self.ui = Ui_MainWindow_2()
            self.ui.setupUi(self.Mainwindow_2)
            self.Mainwindow_2.show()
        else:
            print("something wrong")


if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = QtWidgets.QMainWindow()
    ui = Ui_MainWindow_1()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())

from PyQt5 import QtCore, QtGui, QtWidgets

from try_method import Diabetic_food_checker
from try_method import draw_graph


class Ui_MainWindow_2(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(800, 601)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.label = QtWidgets.QLabel(self.centralwidget)
        self.label.setGeometry(QtCore.QRect(-24, -10, 831, 631))
        self.label.setText("")
        self.label.setPixmap(QtGui.QPixmap("3.jpg"))
        self.label.setScaledContents(True)
        self.label.setObjectName("label")
        self.pushButton = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton.setGeometry(QtCore.QRect(60, 30, 641, 61))
        self.pushButton.setStyleSheet("background-color: rgb(0, 255, 255);\n"
                                      "font: 75 14pt \"MS Shell Dlg 2\";")
        self.pushButton.setObjectName("pushButton")
        self.pushButton_2 = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton_2.setGeometry(QtCore.QRect(290, 240, 171, 51))
        self.pushButton_2.setStyleSheet("background-color: rgb(255, 170, 127);\n"
                                        "font: 75 14pt \"MS Shell Dlg 2\";")
        self.pushButton_2.setObjectName("pushButton_2")
        self.pushButton_3 = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton_3.setGeometry(QtCore.QRect(290, 350, 171, 51))
        self.pushButton_3.setStyleSheet("background-color: rgb(255, 170, 127);\n"
                                        "font: 75 14pt \"MS Shell Dlg 2\";")
        self.pushButton_3.setObjectName("pushButton_3")
        MainWindow.setCentralWidget(self.centralwidget)

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)
        self.pushButton_2.clicked.connect(self.web1_fun)
        self.pushButton_3.clicked.connect(self.web2_fun)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
        self.pushButton.setText(_translate("MainWindow", "FOOD CHECKER PAGE"))
        self.pushButton_2.setText(_translate("MainWindow", "Food checker"))
        self.pushButton_3.setText(_translate("MainWindow", "Draw graph"))

    def web1_fun(self):
        Diabetic_food_checker(self)

    def web2_fun(self):
        draw_graph(self)

10.1.2 BACK END

import os
import time

import cv2
import pandas as pd
import matplotlib.pyplot as plt

def Diabetic_food_checker(self):
    # Confidence threshold for the SSD detector
    thres = 0.45

    # Open the default webcam at 1280x720 with reduced brightness
    cap = cv2.VideoCapture(0)
    cap.set(3, 1280)
    cap.set(4, 720)
    cap.set(10, 70)

    # Load the COCO class labels, one name per line
    classNames = []
    classFile = 'coco.data'
    with open(classFile, 'rt') as f:
        classNames = f.read().rstrip('\n').split('\n')

    # Pre-trained SSD MobileNet v3 model run through OpenCV's DNN module;
    # inputs are resized to 320x320 and normalised to [-1, 1]
    configPath = 'ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt'
    weightsPath = 'frozen_inference_graph.pb'
    net = cv2.dnn_DetectionModel(weightsPath, configPath)
    net.setInputSize(320, 320)
    net.setInputScale(1.0 / 127.5)
    net.setInputMean((127.5, 127.5, 127.5))
    net.setInputSwapRB(True)

    # Create the log file with a header row on first use
    if not os.path.exists('./food.csv'):
        with open('food.csv', 'a+') as file_create:
            file_create.write("type\n")

    # Class indices (after the classId-1 offset) that the application logs as
    # diabetic-friendly ("good") or unsuitable ("bad"); the mapping to food
    # names is fixed by the ordering of coco.data
    good_ids = {51, 52, 56, 57}
    bad_ids = {58, 59}

    while True:
        success, img = cap.read()
        classIds, confs, bbox = net.detect(img, confThreshold=thres)

        if len(classIds) != 0:
            for classId, confidence, box in zip(classIds.flatten(), confs.flatten(), bbox):
                object_id = classId - 1
                if object_id not in good_ids and object_id not in bad_ids:
                    continue

                # Draw the bounding box, class name and confidence score
                cv2.rectangle(img, box, color=(0, 255, 0), thickness=2)
                cv2.putText(img, f'{classNames[object_id]}', (box[0] + 10, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)
                cv2.putText(img, str(round(confidence * 100, 2)), (box[0] + 200, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)

                # Append one "good"/"bad" row per detection and pause so the
                # log grows by at most one record per second
                label = "good" if object_id in good_ids else "bad"
                with open('food.csv', 'a+') as file_create:
                    file_create.write(label + "\n")
                print(label + " food")
                time.sleep(1)

        cv2.imshow("Output", img)
        k = cv2.waitKey(30) & 0xff
        if k == 27:  # the Esc key stops the capture loop
            break

    cap.release()
    cv2.destroyAllWindows()

def draw_graph(self):
    # Count how many "good" and "bad" detections have been logged
    data = pd.read_csv('./food.csv')
    totalVal = data['type'].value_counts().to_dict()
    print(totalVal)

    name = []
    countVal = []
    for i, j in totalVal.items():
        name.append(i)
        countVal.append(j)

    # Bar chart of detections per category
    plt.bar(name, countVal)
    plt.show()
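
Since draw_graph only counts the rows logged in food.csv, its output can be checked without the camera or the GUI. The snippet below is a small illustration with hypothetical file contents, not part of the application itself: a log holding three "good" rows and one "bad" row yields the counts {'good': 3, 'bad': 1}, which plt.bar renders as two bars.

# Hypothetical stand-in for a food.csv produced during a short session:
#     type
#     good
#     good
#     bad
#     good
import pandas as pd

data = pd.DataFrame({'type': ['good', 'good', 'bad', 'good']})
print(data['type'].value_counts().to_dict())  # {'good': 3, 'bad': 1}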

OUTPUT

[Application output screenshots]
CHAPTER 11

REFERENCES

[1] Rajayogi, J.R., Manjunath, G. and Shobha, G., 2019, December. Indian Food Image Classification with Transfer Learning. In 2019 4th International Conference on Computational Systems and Information Technology for Sustainable Solution (CSITSS) (Vol. 4, pp. 1-4). IEEE.

[2] Reddy, V.H., Kumari, S., Muralidharan, V., Gigoo, K. and Thakare, B.S., 2019, May. Food Recognition and Calorie Measurement using Image Processing and Convolutional Neural Network. In 2019 4th International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT) (pp. 109-115). IEEE.

[3] Subhi, M.A. and Ali, S.M., 2018, December. A deep convolutional neural network for food detection and recognition. In 2018 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES) (pp. 284-287). IEEE.

[4] Burkapalli, V.C. and Patil, P.C. Transfer Learning: Inception-V3 Based Custom Classification Approach for Food Images.

[5] H. Hassannejad, G. Matrella, P. Ciampolini, I. De Munari, M. Mordonini, and S. Cagnoni, "Food image recognition using very deep convolutional networks," in Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management. ACM, 2017, pp. 41-49.

[6] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1-9.

[7] S. Ao and C. X. Ling, "Adapting new categories for food recognition with deep representation," in 2020 IEEE International Conference on Data Mining Workshop (ICDMW). IEEE, 2020, pp. 1196-1203.

[8] Z. Ge, C. McCool, C. Sanderson, and P. Corke, "Modelling local deep convolutional neural network features to improve fine-grained image classification," in 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019, pp. 4112-4116.

[9] Y. Kawano and K. Yanai, "Real-time mobile food recognition system," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 1-7.

[10] Y. Kawano and K. Yanai, "FoodCam-256: a large-scale real-time mobile food recognition system employing high-dimensional features and compression of classifier weights," in Proceedings of the 22nd ACM International Conference on Multimedia. ACM, 2019, pp. 761-762.

[11] T. Ege and K. Yanai, "Image-based food calorie estimation using knowledge on food categories, ingredients and cooking directions," in Proceedings of the Thematic Workshops of ACM Multimedia 2017. ACM, 2017, pp. 367-375.

[12] Pathanjali, C., Salis, V.E., Jalaja, G. and Latha, A., 2018. A Comparative Study of Indian Food Image Classification Using K-Nearest-Neighbor and Support-Vector-Machines. J. Eng. Technol., 7, pp. 521-525.

[13] Christodoulidis, S., Anthimopoulos, M. and Mougiakakou, S., 2019, September. Food recognition for dietary assessment using deep convolutional neural networks. In International Conference on Image Analysis and Processing (pp. 458-465).

[14] P. Sundaravadivel, K. Kesavan, L. Kesavan, S. P. Mohanty, E. Kougianos, and M. K. Ganapathiraju, "Smart-Log: An Automated, Predictive Nutrition Monitoring System for Infants Through IoT," in Proc. IEEE Int. Conf. Consum. Electron., 2018.

[15] US Department of Agriculture, Agricultural Research Service, Nutrient Data Laboratory, "USDA National Nutrient Database for Standard Reference," May 2019, Release 28.

[16] M. Sun, Q. Liu, K. Schmidt, J. Yang, N. Yao, J. D. Fernstrom, M. H. Fernstrom, J. P. DeLany and R. J. Sclabassi, "Determination of Food Portion Size by Image Processing," International IEEE EMBS Conference, pp. 871-874, 2018.

[17] Zakaria Al-Battashi, John Bronlund, Gourab Sen Gupta, "Investigations into Force Sensor Characteristics for Food Texture Measurements," IEEE International Conference on Instrumentation and Measurement Technology (I2MTC), pp. 2089-2094, 2019.

[18] Y. Saeki and F. Takeda, "Proposal of Food Intake Measuring System in Medical Use and Its Discussion of Practical Capability," Springer-Verlag Berlin Heidelberg, vol. 3683, pp. 1266-1273, 2020.