
CHAPTER 1

INTRODUCTION

1.1 INTRODUCTION
In recent years, the cosmetics sector has expanded rapidly, both in the number of customers and in the range of products offered. As a consequence of this growth, choosing the best cosmetic product has become difficult. Selecting the appropriate product for each individual based on personal factors (e.g., skin type) matters, because cosmetic products play a significant part in one's appearance. Finding the perfect cosmetic for a customer is notoriously hard because each person has a different skin texture. To solve the problem of choosing the best product based on skin type, we can use a deep learning approach, as it provides better results when working with vast amounts of unstructured data.

1.2 COMPOSITION OF COSMETIC PRODUCTS


Cosmetic products are those that a person uses to enhance their appearance. They span many categories, such as kajal, lipsticks, eye shadow, eyeliner, face wash, body lotion, mascara, compacts, and foundation. Since few people are particularly good at selecting the ideal product for themselves, a system that recommends suitable products is genuinely helpful.

1.3 MACHINE LEARNING


Machine learning, a branch of artificial intelligence (AI), uses technologies and algorithms to glean information from data. Big data is an area where machine learning techniques are especially applicable, since it would be impractical to analyse such massive amounts of data manually. In computer science, machine learning seeks algorithmic solutions to problems rather than solely mathematical ones; as a result, it is built on developing algorithms that enable machines to learn from data.

1.4 EVOLUTION OF MACHINES


The world we live in is one where people and machines coexist. For millions of years, humans have been changing and improving based on past experience, but the age of machines and robots has only recently begun. In that sense we are currently living in the prehistoric era of machines, and their vast, still unfathomable potential lies in the future. Today's machines and robots must be programmed before they can act on our commands. But what if a machine began to learn on its own through experience and performed tasks more precisely than we could? This sounds exciting, and the new era is only just beginning.

1.5 HOW MACHINE LEARNING WORKS


Machine learning uses two types of techniques: supervised learning, which trains a model on known input and output data so that it can predict future outputs, and unsupervised learning, which finds hidden patterns or intrinsic structures in input data. Data scientists determine which variables, or features, the model should analyze and use to develop predictions. Once training is complete, the algorithm applies what was learned to new data. Supervised algorithms require a data scientist or analyst with machine learning skills to provide both the inputs and the desired outputs, as well as feedback about the accuracy of predictions during training. Unsupervised algorithms do not need to be trained with desired-outcome data; instead, they use iterative approaches, including deep learning, to review the data and arrive at conclusions. In short, a machine learning algorithm is trained on a training dataset to create a model, and when new input data is introduced, it makes a prediction on the basis of that model.
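As a minimal sketch of this distinction, the snippet below (using scikit-learn, which also appears later in this report, with toy placeholder data) trains a supervised classifier on known labels and lets an unsupervised algorithm discover groups on its own:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy inputs: six one-dimensional samples forming two obvious groups
X = np.array([[1.0], [2.0], [3.0], [8.0], [9.0], [10.0]])
y = np.array([0, 0, 0, 1, 1, 1])  # known labels (supervised case only)

# Supervised: train on known input/output pairs, then predict new outputs
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.5], [9.5]]))  # -> [0 1]

# Unsupervised: only inputs are given; the algorithm finds the structure itself
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)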

1.6 REQUIREMENTS OF DEEP LEARNING


In recent years, deep learning has enabled research in a variety of fields, including chatbots, computer vision, machine translation, natural language processing, and many more. Deep models provide consistent performance while eliminating the need to hand-engineer feature representations from scratch. In the recommendation domain, deep learning approaches have outperformed conventional techniques, demonstrating this effect. Deep Neural Networks are compositional: they are constructed from many basic neural building blocks that are integrated into a single composite function and trained end-to-end. Unlike linear models, Deep Neural Networks can model non-linearity in data using non-linear activations such as sigmoid, ReLU, and tanh, which gives them the ability to capture complex and sophisticated user-item interaction patterns. Deep learning approaches are also quite flexible, particularly with the advent of several well-liked machine learning frameworks like PyTorch, Keras, and TensorFlow. The major tools among these have active professional and community support and were created in a decentralized manner. Engineering and development are significantly more productive when modularization is done well.

1.7 VARIOUS DEEP LEARNING TECHNIQUES


DNN: Deep Neural Networks (DNNs) are Artificial Neural Networks (ANNs) with several layers between their input and output levels. By choosing an appropriate mathematical transformation for the input/output relationship, a DNN can model that relationship whether it is linear or nonlinear.
CNN: A Convolutional Neural Network (CNN) has convolution layers and pooling functions that distinguish it from other types of feedforward neural networks. Since it can capture both global and local characteristics, its efficiency and accuracy are greatly increased, and it performs well on grid-like data such as images.
RNN: Recurrent Neural Networks (RNNs) are appropriate for modelling sequential data. They contain loops and internal memory for retaining previous computations.
DBN: The Deep Belief Network (DBN) is a particular kind of Deep Neural Network in which latent variables or hidden units are arranged in many layers, with connectivity between the levels but not among the units within each layer.
AE: An AutoEncoder (AE) is an unsupervised model that tries to recreate the input data at its output layer. A bottleneck layer in the middle captures the prominent features of the input data. Variants include contractive autoencoders, denoising autoencoders, sparse autoencoders, marginalized denoising autoencoders, and Variational Autoencoders (VAEs).
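As an illustration of the autoencoder idea, here is a minimal Keras sketch; the 784-dimensional input and 32-unit bottleneck are placeholder choices, not values from this project:

import numpy as np
from tensorflow.keras import layers, models

input_dim, bottleneck_dim = 784, 32  # placeholders, e.g. flattened 28x28 images

autoencoder = models.Sequential([
    layers.Input(shape=(input_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(bottleneck_dim, activation="relu"),  # bottleneck layer
    layers.Dense(128, activation="relu"),
    layers.Dense(input_dim, activation="sigmoid"),    # reconstruction
])
# The input is also the target: the network learns to reproduce its input
autoencoder.compile(optimizer="adam", loss="mse")
X = np.random.rand(256, input_dim).astype("float32")  # placeholder data
autoencoder.fit(X, X, epochs=1, batch_size=32, verbose=0)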

CHAPTER 2
LITERATURE SURVEY

Goodfellow, Bengio, and Courville [1] examine in depth both the state of the art in deep learning and several emerging fields of study. The book is written for a research audience familiar with linear algebra, calculus, probability, and basic programming.
Iwabuchi et al. [2], in order to create an ingredient-based recommendation system for basic cosmetics, extracted the most effective cosmetic ingredients for each user characteristic. They used the IF-IPF approach, which applies the principle of the TF-IDF method to extract influential cosmetic ingredients.
Holder, Obara, and Ricketts [3] collected 91 photos of cropped eye regions from photographs of female subjects to train a Siamese Convolutional Neural Network (CNN). They measure the trained network's performance by its accuracy in recognizing known subjects in previously unseen pictures, and then gauge how well it can locate visually compatible matches among the known subjects.
Ma'or and Portugal-Cohen [4] proposed a two-step, methodical innovation approach based on the AHAVA brand's expertise. They explain how to use a toolbox of alternative strategies grouped under the concepts of new method of application, new combination, new service/marketing, increased focus, and improved definition. The special case of Dead Sea minerals is used to describe and illustrate each suggested innovation tool.
Wu et al. [5] aim to create an information system, based on a cosmetic safety risk assessment model, that helps control cosmetic safety hazards by automatically identifying, quantifying, and grading them.
Yang [6] uses service design theory and methods to discuss the current state of the Chinese cosmetics product Xie Fuchun. A user journey map was used to identify users' pain points and core needs, from which design opportunity points were determined.
Kothari et al. [7] examine the classification of skin types using Convolutional Neural Networks. To train the model, the authors built a dataset of more than 80 skin photos obtained through web scraping and divided into dry and oily categories. The trained model was then applied to a small sample of easily recognizable photos to assess its performance. Their CNN classifier shows approximately 85% accuracy, with a slight bias in favour of oily images. This suggests that deep learning has considerable potential for classifying skin type from facial images and, with a bigger dataset, may deliver even more accurate results.
Gopinath et al. [8] note that choosing a new cosmetic item is difficult when a person wishes to try something new, so they propose a basic cosmetic suggestion system. They scraped cosmetic ingredient information from the Sephora website and, after applying NLP techniques to the ingredient lists, generated a Document Term Matrix (DTM). Each cell of the matrix is 1 if the product contains the corresponding ingredient and 0 otherwise. Distances between points in this data indicate the similarity of the cosmetic goods, which they visualized as a scatter plot built with Plotly's hover tools.
Jinguang Wang et al. [9] introduce the idea of smart cosmetics, analyse the products' consumer stimuli, and address the shortcomings of conventional cosmetics in terms of their shapes and functions. They explore a practical way to integrate advanced technologies into cosmetic products and validate the possibility of fusing user experience and smart technology through design practice, using the KANO model to summarise consumers' demands and translate them into functionality.
Maya S. Fleysher and Veronika M. Troeglazova [10] propose a Convolutional Neural Network (CNN) built with Python libraries such as Matplotlib, OpenCV, and MTCNN. Depending on the image, it selects suitable cosmetic products, such as lipstick, shadows, and foundation, for several regions of the face. The algorithm locates the face and the coordinates of its facial features, and pixel brightness levels are compared against the model. Identifying the colour of the face and eyes is important so that the right cosmetics can be suggested. Using a modern real-time raw-data processing technique, the generic neural network approach can improve performance.
Recommendation systems are now a standard feature on all e-commerce platforms. R. Nurfadillah et al. [11] address the problem of explicit rating prediction for cosmetic products. They examine the features of a dataset of cosmetic product ratings and use a variety of techniques, such as KNN and matrix factorization, to predict those ratings. They assess the algorithms using MAE and RMSE and highlight the factors that may influence performance. Their results reveal that the SVD++ technique outperforms all others, with an MAE of 0.7699 and an RMSE of 0.9696.
Qing Ma et al. [12] describe a method for predicting the results of cosmetic surveys, which give cosmetics scores or reviews, using three types of machine learning algorithms: Support Vector Machines (SVM), Convolutional Neural Networks (CNN), and Stacked Denoising Autoencoders (SDA).
Guangxin Lou and Hongzhen Shi [13] note that, in the field of image recognition, the first applications built on Convolutional Neural Networks were for image recognition itself. To recognize and evaluate various images, different algorithmic operations (convolution, recognition, and image eigenvalue extraction) are performed. The rapid advancement of artificial intelligence makes machine learning increasingly essential. VGG separates the network into five groups (similar to AlexNet's five layers); however, it uses only 3×3 filters, which it combines into convolution sequences, and the number of channels is greater in the deeper DCNN. The model's accuracy in recognizing faces was validated using the CASIA Face Image Database, the BioID Face Database, and the URL Face Database.
In recent years, makeup images have been shared on social media in ever greater numbers. Many of these images lack information about the cosmetics used, such as glitter and colour, which are hard to deduce because of lighting conditions and the variety of skin colours. K. Yamagishi et al. [14] proposed a new image-based technique for extracting cosmetic information that accounts for both colour and regional influences by separating the target image into makeup and skin colour using the Difference of Gaussians (DoG). The approach is applicable to single, stand-alone makeup photographs and takes both local effects and colour into account. Furthermore, because it decomposes the makeup from the underlying skin, the approach is robust to differences in skin colour.
Cheng-Chun Chang et al. [15] apply a commonly used deep learning method, the Convolutional Neural Network, to robust Fitzpatrick skin type classification. Because the skin spectra datasets used in the study have a small sample size, a Convolutional Neural Network with a single convolutional layer was used. An artificial neural network model and the conventional ITA Fitzpatrick classification method are also analysed in order to assess the effectiveness of the reduced Convolutional Neural Network model. With an accuracy of up to 92.59%, the Convolutional Neural Network model's classification results demonstrate superior Fitzpatrick skin type categorization.

CHAPTER 3
PROPOSED METHODOLOGY

3.1 PROPOSED WORK


In the proposed system, the team uses artificial intelligence to determine the best distribution of cosmetic product suggestions based on skin type. The approach takes input features including product ingredients and skin-type compatibility; these features are passed through the input vector to hidden layers that subdivide the data into buckets based on skin type. With the help of the Convolutional Neural Network (CNN) algorithm, we finally obtain the distribution of cosmetic products at the output layer. The suggestions produced by our system will be better than those of a traditional system because the quality of the datasets, as well as of the customer's input, is enhanced using techniques such as grayscale conversion, edge detection, and median blur, which improve accuracy.
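A minimal OpenCV sketch of these three enhancement steps is shown below; the file names are placeholders:

import cv2

img = cv2.imread("sample_skin.jpg")                      # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)             # grayscale conversion
edges = cv2.Canny(gray, threshold1=50, threshold2=150)   # edge detection
blurred = cv2.medianBlur(gray, ksize=5)                  # median blur (odd kernel size)

cv2.imwrite("gray.jpg", gray)
cv2.imwrite("edges.jpg", edges)
cv2.imwrite("blurred.jpg", blurred)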

Fig.3.1 Block Diagram

3.2 DATASETS:
One major advantage of CNNs over ordinary NNs is that you do not need to flatten the input images to 1D, as CNNs can work with image data directly in 2D. This helps retain the "spatial" properties of images. Here we use a skin dataset consisting of four categories: Oily, Dry, Combination, and Normal.

3.2.1 Image Labelling and Dataset Distributions:


All subjects were independently labelled using four sets of images. Labelling was first evaluated on the original images in a Picture Archiving and Communication System (PACS) and then on the resized images used for the actual training data. Datasets were defined as an internal dataset and a temporal dataset, with the temporal dataset used for evaluation. The internal dataset was randomly split into training (70%), validation (15%), and test (15%) subsets. The distribution of the internal test dataset was 32% maxillary sinusitis, 32% frontal sinusitis, and 34% normal.

3.3 PRE-PROCESSING
 Images come in different shapes and sizes, and they come from different sources.
 Taking these variations into consideration, we need to perform some pre-processing on any image data. RGB is the most popular encoding format for most "natural images". One of the first steps of data pre-processing is to make all the images the same size.
 Here we use automatic resizing during training to convert all the images in the dataset to the same resolution.

10
3.3.1 Pre-Processing Steps
The pre-processing consisted of resizing, patch extraction, and augmentation. The first step normalizes the size of the input images. Almost all the images were rectangles of different heights and were too large (median matrix size ≥ 1,800), so we resized every image to a standardized 224×224-pixel square by preserving its aspect ratio and applying zero-padding. Because deep learning performance depends on the input data, in the second step the input images were processed into patches (cropped parts of each image); each patch was extracted with a bounding box so that it contained sufficient segmentation for analysis. Finally, data augmentation was applied to the training dataset only, using mirror images reversed left to right and rotations of −30, −10, 10, and 30 degrees.
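A brief sketch of these steps, assuming OpenCV and NumPy and a 3-channel input image:

import cv2
import numpy as np

def resize_with_padding(img, size=224):
    # Scale the longer side to `size`, then zero-pad to a square
    h, w = img.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(img, (int(w * scale), int(h * scale)))
    canvas = np.zeros((size, size, 3), dtype=resized.dtype)  # zero-padding
    top = (size - resized.shape[0]) // 2
    left = (size - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas

def augment(img):
    out = [cv2.flip(img, 1)]                 # left-right mirror image
    h, w = img.shape[:2]
    for angle in (-30, -10, 10, 30):         # rotations in degrees
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        out.append(cv2.warpAffine(img, M, (w, h)))
    return out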

3.4 FEATURE EXTRACTION


Feature extraction is useful when you need to reduce the number of resources required for processing without losing important or relevant information. It can also reduce the amount of redundant data in a given analysis. Moreover, reducing the data and the effort the machine spends building variable combinations (features) speeds up the learning and generalization steps of the machine learning process.

3.5 ALGORITHM USED: CNN


In deep learning, a Convolutional Neural Network (CNN) is a type of Deep Neural Network that extracts information from data such as images, sounds, or videos. Three ideas are central to a CNN: local receptive fields, shared weights and biases, and activation and pooling.

 In a CNN, the network is first trained on a large dataset so that it can extract the features of a given input. When an input is provided, image pre-processing is performed first, then features are extracted on the basis of the stored data, and finally the data is classified and the output shown as the result (see the sketch after this list).
 A CNN can only handle the kinds of input for which the network has been trained and the data saved.
 CNNs are used in image and video recognition, recommender systems, image classification, medical image analysis, and natural language processing.
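As a minimal sketch of these three ideas in Keras, matching the 50×50 grayscale input and four classes used later in this project:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(50, 50, 1)),               # 50x50 grayscale input
    layers.Conv2D(32, (3, 3), activation="relu"),  # 3x3 local receptive fields, shared weights
    layers.MaxPooling2D(pool_size=(2, 2)),         # pooling
    layers.Flatten(),
    layers.Dense(4, activation="softmax"),         # four skin-type classes
])
model.summary()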

Fig.3.2 Flow Diagram

CHAPTER 4
SYSTEM SPECIFICATION

4.1 HARDWARE REQUIREMENTS


 RAM: 4 GB
 Intel Core i5 processor
 500 MB hard disk space

4.2 SOFTWARE REQUIREMENTS


 Python 3.7
 Thonny IDE
 Libraries used: TensorFlow, Keras, Matplotlib

4.3 TECHNOLOGY AND TOOL USED


 TECHNOLOGIES USED
o Python
o Deep learning
 TOOL USED
o Thonny IDE

CHAPTER 5
TECHNOLOGIES AND TOOLS DESCRIPTION

5.1 SOFTWARE DESCRIPTION


5.1.1 Python:
Python 3.7:
Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code readability, notably through its use of significant whitespace.
Python is an easy-to-learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms, and it may be freely distributed. The Python web site also contains distributions of, and pointers to, many free third-party Python modules, programs, and tools, along with additional documentation. The Python interpreter is easily extended with new functions and data types implemented in C or C++ (or other languages callable from C). Python is also suitable as an extension language for customizable applications. It helps to have a Python interpreter handy for hands-on experience. For a description of standard objects and modules, see the library index; the reference index gives a more formal definition of the language.
Classes provide a means of bundling data and functionality together. Creating a new class creates a new type of object, allowing new instances of that type to be made. Each class instance can have attributes attached to it for maintaining its state, and methods (defined by its class) for modifying that state. Compared with other programming languages, Python's class mechanism adds classes with a minimum of new syntax and semantics; it is a mixture of the class mechanisms found in C++ and Modula-3. Python classes provide all the standard features of object-oriented programming: the class inheritance mechanism allows multiple base classes, a derived class can override any methods of its base class or classes, and a method can call the method of a base class with the same name. Objects can contain arbitrary amounts and kinds of data. As is true for modules, classes partake of the dynamic nature of Python: they are created at runtime and can be modified further after creation.
Objects have individuality, and multiple names (in multiple scopes) can be bound to the same object; this is known as aliasing in other languages. Aliasing is usually not appreciated on a first glance at Python and can be safely ignored when dealing with immutable basic types (numbers, strings, tuples). However, it has a possibly surprising effect on the semantics of Python code involving mutable objects such as lists, dictionaries, and most other types. This is usually used to the benefit of the program, since aliases behave like pointers in some respects. For example, passing an object is cheap, since only a pointer is passed by the implementation, and if a function modifies an object passed as an argument, the caller will see the change; this eliminates the need for two different argument-passing mechanisms, as in Pascal.
Python checks the modification date of the source against the compiled version to see whether it is out of date and needs to be recompiled. This is a completely automatic process. Also, the compiled modules are platform-independent, so the same library can be shared among systems with different architectures. Python does not check the cache in two circumstances. First, it always recompiles, and does not store the result, for a module loaded directly from the command line. Second, it does not check the cache if there is no source module; to support a non-source (compiled-only) distribution, the compiled module must be in the source directory and there must not be a source module.

A program does not run any faster when it is read from a .pyc file than when it is read from a .py file; the only thing that is faster about .pyc files is the speed with which they are loaded. The compileall module can create .pyc files for all modules in a directory. There is more detail on this process, including a flow chart of the decisions, in the Python documentation.

5.2 DEEP LEARNING:


Deep learning is a subset of machine learning that involves training
complex neural networks with many layers. Deep learning has become
increasingly popular due to its ability to identify complex patterns in data and
make accurate predictions in various fields such as image recognition, natural
language processing, speech recognition, and autonomous systems. Deep
learning models can learn from vast amounts of data and make accurate
predictions in real-time, enabling businesses to automate and optimize many
processes. Deep learning has also led to breakthroughs in fields such as
medicine, where it is being used to develop new drugs and analyze medical
images with high accuracy.
One of the key advantages of deep learning is its ability to learn from
unstructured data, such as images, text, and speech, without the need for explicit
feature extraction. This makes deep learning particularly useful for tasks such as
object detection, natural language processing, and speech recognition, where
traditional machine learning techniques may not be effective. In addition, deep
learning models can handle large amounts of data and automatically identify
complex patterns and relationships, leading to more accurate predictions and
insights. Another advantage of deep learning is its ability to generalize well to
new and unseen data, making it suitable for real-world applications.

5.3 LIBRARIES DESCRIPTION
5.3.1 Activation Function:
An activation function serves as a decision function and helps in learning intricate patterns; selecting an appropriate activation function can accelerate the learning process. In the literature, different activation functions such as sigmoid, tanh, maxout, SWISH, ReLU, and variants of ReLU such as leaky ReLU, ELU, and PReLU are used to introduce non-linear combinations of features.
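For reference, three of the activations named above have simple standard definitions (textbook formulas, not project-specific):

\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}, \qquad
\mathrm{ReLU}(x) = \max(0, x)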

5.3.2 Numpy:
Using NumPy, a developer can perform the following operations:
 Mathematical and logical operations on arrays.
 Fourier transforms and routines for shape manipulation.
 Operations related to linear algebra. NumPy has in-built functions
for linear algebra and random number generation.
The most important object defined in NumPy is an N-dimensional array type called ndarray. It describes a collection of items of the same type, which can be accessed using a zero-based index. Every item in an ndarray takes the same-sized block of memory, and each element is an object of a data-type object (called dtype).
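A small sketch of these operations, using toy values:

import numpy as np

a = np.array([[1, 2], [3, 4]], dtype=np.int32)  # 2x2 ndarray of int32
print(a + 10)                                   # element-wise arithmetic
print(a[0, 1])                                  # zero-based indexing -> 2
print(np.linalg.inv(a.astype(float)))           # linear algebra: matrix inverse
print(np.fft.fft([1.0, 0.0, 1.0, 0.0]))         # Fourier transform
print(np.random.rand(3))                        # random number generation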

5.3.3 Tensorflow:
TensorFlow is a popular open-source machine learning library developed by
Google that is widely used for building and training deep learning models.
TensorFlow provides a range of high-level APIs for building neural networks and
deep learning models, as well as low-level APIs that allow for more fine-grained
control over the model architecture and training process.
In Python, TensorFlow is typically used through the TensorFlow Python API, which provides a range of functions and classes for building and training deep learning models. It can be used to build models for a variety of applications, such as image classification, object detection, natural language processing, and speech recognition.
TensorFlow takes computations described using a dataflow-like model and maps them onto a wide variety of hardware platforms, ranging from inference on mobile platforms such as Android and iOS, through modest-sized training and inference systems on single machines containing one or many GPU cards, to large-scale training systems running on hundreds of specialized machines with thousands of GPUs.
In a TensorFlow graph, each node has zero or more inputs and zero or more outputs and represents the instantiation of an operation. Values that flow along normal edges in the graph (from outputs to inputs) are tensors: arbitrary-dimensionality arrays whose underlying element type is specified or inferred at graph-construction time. Special edges, called control dependencies, can also exist in the graph: no data flows along such edges, but they indicate that the source node of the control dependency must finish executing before the destination node starts executing. Since the model includes mutable state, control dependencies can be used directly by clients to enforce happens-before relationships. The implementation also sometimes inserts control dependencies to enforce orderings between otherwise independent operations, for example to control peak memory usage.
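As a small sketch of this dataflow model in current TensorFlow, tf.function traces a Python function into a graph whose edges carry tensors:

import tensorflow as tf

@tf.function                     # traces the function into a TensorFlow graph
def affine(x, w, b):
    return tf.matmul(x, w) + b   # graph nodes: matmul and add; edges carry tensors

x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
b = tf.constant([0.5])
print(affine(x, w, b))           # -> [[11.5]]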

5.3.4 Tensorflow Implementation:


To get started with TensorFlow in Python, you first need to install the
TensorFlow library. This can be done using pip, the Python package installer, by
running the command "pip install tensorflow" in a command prompt or terminal.
Once TensorFlow is installed, you can import the library in your Python code and start building your deep learning model. You can use the high-level APIs provided by TensorFlow, such as Keras, to quickly build and train deep learning models with minimal code. Alternatively, you can use the low-level APIs, such as the TensorFlow graph API, to build more complex models with greater control over the training process.
The main components in a TensorFlow system are the client, which uses
the Session interface to communicate with the master, and one or more worker
processes, with each worker process responsible for arbitrating access to one or
more computational devices (such as CPU cores or GPU cards) and for
executing graph nodes on those devices as instructed by the master. We have
both local and distributed implementations of the TensorFlow interface. The
local implementation is used when the client, the master, and the worker all run
on a single machine in the context of a single operating system process (possibly
with multiple devices, if for example, the machine has many GPU cards
installed). The distributed implementation shares most of the code with the local
implementation, but extends it with support for an environment where the client,
the master, and the workers can all be in different processes on different
machines.

5.3.5 Data Parallel Training:


One simple technique for speeding up SGD is to parallelize the computation of the gradient for a mini-batch across mini-batch elements. For example, with a mini-batch size of 1,000 elements, we can use 10 replicas of the model to each compute the gradient for 100 elements, then combine the gradients and apply the parameter updates synchronously, so as to behave exactly as if we were running the sequential SGD algorithm with a batch size of 1,000. In this case, the TensorFlow graph simply has many replicas of the portion of the graph that does the bulk of the model computation, and a single client thread drives the entire training loop for this large graph. The TensorFlow system shares some design characteristics with its predecessor system, DistBelief, and with later systems of similar design such as Project Adam and the Parameter Server project. Like DistBelief and Project Adam, TensorFlow allows computations to be spread across many computational devices on many machines and allows users to specify machine learning models using relatively high-level descriptions. Unlike DistBelief and Project Adam, though, the general-purpose dataflow graph model in TensorFlow is more flexible and more amenable to expressing a wider variety of machine learning models and optimization algorithms.
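In current TensorFlow, one common way to get this kind of data parallelism is tf.distribute.MirroredStrategy; the sketch below illustrates the concept and is not the original TensorFlow design described above:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # one model replica per available GPU
with strategy.scope():                       # variables created here are mirrored
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

# The global batch (e.g. 1000) is split across the replicas automatically, and
# the per-replica gradients are combined before each synchronous update:
# model.fit(X, y, batch_size=1000, epochs=1)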

5.3.6 Keras:
Keras is a high-level deep learning library that runs on top of TensorFlow,
developed with the goal of providing an easy-to-use interface for building and
training deep learning models. Keras provides a user-friendly API that allows
developers to quickly build and experiment with Deep Neural Networks,
without requiring detailed knowledge of the underlying mathematical concepts.
Keras provides a range of pre-built layers, such as convolutional layers,
recurrent layers, and dense layers, which can be easily combined to create
complex deep learning models. Keras also provides a range of pre-built models,
such as VGG16, Inception, and ResNet, which can be used for various computer
vision and natural language processing tasks.
One of the key benefits of using Keras is its user-friendly API, which
makes it easy to experiment with different model architectures and
hyperparameters. Keras also provides a range of tools for visualizing and
monitoring the training process, such as real-time plotting of training and
validation metrics, and early stopping based on performance.
Keras also supports transfer learning, the process of using pre-trained models as a starting point for a new deep learning task. This can save significant time and computational resources, as the pre-trained model has already learned many useful features that can be reused for the new task. Keras can be used with a range of backends, including TensorFlow, Microsoft Cognitive Toolkit, Theano, and PlaidML; TensorFlow has become the most popular backend due to its performance, ease of use, and integration with other TensorFlow tools.
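A brief transfer-learning sketch with the pre-built VGG16 model mentioned above: freeze the pre-trained base and train only a new classification head (the four-class head mirrors this project's skin types; the rest is illustrative):

import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # new task-specific head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])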

5.3.7 Matplotlib:
Matplotlib is a data visualization library for Python that provides a range of tools for creating high-quality, customizable plots, charts, and figures. It is widely used in the scientific and data analysis communities to visualize and explore data and has become a popular tool for creating publication-quality graphics.
Matplotlib provides a range of functions and classes for creating different
types of plots, including line plots, scatter plots, bar plots, histograms, and more.
Matplotlib also allows for customizing many aspects of the plot, such as axis
labels, titles, colors, and markers.
Matplotlib can be used in a variety of ways, including through Python
scripts, Jupyter notebooks, and interactive Python environments. Matplotlib is
also compatible with other Python libraries, such as NumPy and Pandas, which
makes it easy to integrate with data analysis workflows.
Matplotlib provides a range of backends, which are responsible for
rendering the plot on a variety of devices and file formats. The most commonly
used backend is the "agg" backend, which creates high-quality static images that
can be saved in a variety of formats, such as PNG, PDF, and SVG. Matplotlib
also provides backends for creating interactive plots, such as the "Qt5Agg"
backend, which allows for zooming, panning, and other interactive features.
One of the key benefits of using Matplotlib is its versatility and flexibility,
which allows for creating a wide range of different types of plots and
visualizations. Matplotlib is also well-documented and has a large community of
users and contributors.
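A small sketch of a customized plot saved as a static PNG (the "agg"-style output described above):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x), label="sin(x)", color="tab:blue")
plt.plot(x, np.cos(x), label="cos(x)", color="tab:orange", linestyle="--")
plt.xlabel("x")                       # axis labels
plt.ylabel("value")
plt.title("Example Matplotlib plot")  # title
plt.legend()
plt.savefig("example_plot.png")       # static image output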

5.3.8 Cv2:
cv2 is a Python binding for OpenCV (the Open Source Computer Vision Library), a popular library used for computer vision tasks such as image and video processing, object detection and recognition, feature extraction, and many more.
cv2 provides a set of functions for image and video processing, including
image filtering, transformations, feature detection, object tracking, and video
analysis. It also provides tools for displaying images and videos in Python, as
well as basic image and video handling functions.

5.4 DATA TYPE OBJECTS (dtype):


A data type object describes the interpretation of a fixed block of memory corresponding to an array, depending on the following aspects:
 Type of data (integer, float, or Python object)
 Size of data
 Byte order (little-endian or big-endian)
 In the case of a structured type, the names of the fields, the data type of each field, and the part of the memory block taken by each field
 If the data type is a subarray, its shape and data type
The byte order is decided by prefixing '<' or '>' to the data type: '<' means the encoding is little-endian (the least significant byte is stored at the smallest address).
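A short sketch of dtype objects, including explicit byte order and a structured type with named fields:

import numpy as np

dt_le = np.dtype('<i4')        # little-endian 32-bit integer
dt_be = np.dtype('>i4')        # big-endian 32-bit integer
print(dt_le.str, dt_be.str)    # -> <i4 >i4

# A structured dtype: named fields, each with its own data type
student = np.dtype([('name', 'S10'), ('age', 'i1'), ('marks', 'f4')])
arr = np.array([(b'abc', 21, 50.0)], dtype=student)
print(arr['age'])              # -> [21]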

5.5 TOOLS DESCRIPTION


5.5.1 Thonny IDE
Thonny is an integrated development environment (IDE) for Python, designed to make it easy to learn and write Python code. It provides a range of features and tools that simplify the process of writing, testing, and debugging Python programs.
One of the key features of Thonny is its simple, easy-to-use interface, which provides a range of tools for writing and executing Python code. Thonny includes a built-in Python interpreter, which allows users to run code directly within the IDE, as well as a range of debugging tools such as breakpoints, step-by-step execution, and variable inspection. Overall, Thonny is a powerful and user-friendly IDE designed to simplify the process of learning and writing Python code.
Thonny is a small and lightweight IDE. It was developed to provide a small and fast environment with only a few dependencies on other packages, and to be as independent as possible of any particular desktop environment such as KDE or GNOME, so it requires only the GTK2 toolkit and its runtime libraries to run. For compiling it yourself, you will need the GTK (>= 2.6.0) libraries and header files, as well as the Pango, GLib, and ATK libraries and header files. It has been successfully compiled and tested under Debian 3.1 Sarge, Debian 4.0 Etch, Fedora Core 3/4/5, Linux From Scratch, and FreeBSD 6.0, and it also compiles under Microsoft Windows.
Another useful feature of Thonny IDE is its auto-completion functionality,
which suggests possible code completions as you type. This saves time and
reduces errors by automatically completing common functions and variable
names.

CHAPTER 6
PROPOSED MODULE

6.1 COLLECTION OF DATASETS


The datasets were collected from different sources, i.e., Kaggle, GitHub, and Google. The dataset contains four classes: Oily, Dry, Normal, and Combination. The images cover all types of skin tones and are used to train the model; visualization and analysis techniques help us understand the data properly.

6.2 DATASETS PRE-PROCESSING


To understand the data, we first pre-process the collected images. The datasets consist of colour images, which the model cannot use directly, so a grayscale conversion step turns the colour images into black and white. The converted grayscale images are then simplified into numerical values and added to the training list. These values are stored in the pickle files X and y and passed to the next phase.

6.3 TRAINING THE MODEL


All the images in the dataset were converted from colour to black and white using grayscale conversion and histogram equalization. After pre-processing and extraction of the dataset, the model is trained using the Convolutional Neural Network algorithm, which consists of five layers (three convolutional layers and two hidden layers) to segregate the data. After the training procedure is completed, the features extracted from the different data are stored in a model file. The dataset list is shuffled to improve the model's learning ability and to avoid redundancy during learning. After training, the training and validation accuracy, the loss, and the confusion matrix are plotted.

6.4 TESTING PHASE


After the model is trained, the user input, i.e., an image, is captured using the web camera and stored on the local machine. The user's input image is loaded into the application and compared against the model file developed and trained by the team. Based on this analysis, the application suggests the best products for the user's skin type. The product details are listed in an Excel file that is imported into the application; we have included cosmetic categories such as moisturizer, cleanser, treatment, face mask, eye cream, and sun protection, along with their ingredients.

CHAPTER 7
IMPLEMENTATION

7.1 PRE-PROCESSING AND FEATURE EXTRACTION


All the datasets for this project were collected from multiple sources, such as Kaggle, GitHub, and Google. The images are loaded into the application and resized to a fixed frame, then converted from colour to black and white using grayscale conversion and histogram equalization. The images are added to the training list as numerical values, and the list is shuffled to avoid repetition. Two pickle files, X and y, are created: X contains the values of the dataset used to train the model, and y contains the index of dry, normal, oily, and combination skin.

7.2 TRAINING AND VALIDATION


After pre-processing and extraction of the dataset, the model is trained using the Convolutional Neural Network algorithm, which consists of five layers (three convolutional layers and two hidden layers) to segregate the data. The dataset list is shuffled to improve the model's learning ability and to avoid redundancy during learning. The convolutional layers scan the image matrices repeatedly, at increasing levels of abstraction, to learn deeper features. Max pooling keeps only the largest value in each window and discards the unwanted values (a small numeric example follows). After training, the training and validation accuracy, the loss, and the confusion matrix are plotted to show the working efficiency of the model, and the model file is saved on the local machine.
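To make the max-pooling step concrete, here is a tiny NumPy illustration of 2×2 max pooling shrinking a 4×4 feature map to 2×2 (toy values):

import numpy as np

fm = np.array([[1, 3, 2, 0],
               [4, 6, 1, 2],
               [0, 1, 9, 8],
               [2, 3, 4, 7]])

# Split into 2x2 windows and keep only the maximum of each window
pooled = fm.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # -> [[6 2]
               #     [3 9]]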

7.3 TESTING AND COSMETICS SUGGESTION


After the model is trained, the user input, i.e., an image of the user's skin, is captured using the device's web camera and stored on the local machine. The input image is loaded into the application and compared against the model file already developed and trained by the team. Based on this analysis, the application suggests the best products for the user's skin type. The product details are listed in an Excel file that is imported into the application; we have included cosmetic categories such as moisturizer, cleanser, treatment, face mask, eye cream, and sun protection, along with their ingredients. The application provides two functions: "Activate Camera", which captures the user's skin, and "Choose File", which uploads an input image.

CHAPTER 8
TESTING

8.1 TESTING INTRODUCTION


Software testing is the process of verifying whether a software application or system meets its functional and non-functional requirements. Its goals are to determine whether the developed software meets the specified requirements and to identify defects, so that a defect-free, quality product can be delivered.

8.2 TYPES OF TESTING CONSIDERED


Unit Testing: This type of testing involves testing individual components
or modules of the software application to ensure they are working as intended.
System Testing: System testing is the process of testing the entire
software application as a whole to ensure it meets all functional and non-
functional requirements.
Acceptance Testing: Acceptance testing is the process of testing whether
the software application meets the requirements of the end-user or customer.
Security Testing: Security testing is the process of testing the software
application's ability to withstand different types of attacks and vulnerabilities.
User Acceptance Testing: User acceptance testing (UAT) is the process
of testing the software application by the end-users to ensure it meets their
requirements.
Data Quality Testing: In ML projects, the quality of data used for
training and testing the ML model is crucial. Data quality testing involves
checking for data accuracy, completeness, consistency, and relevance. It is
important to ensure that the data used for training and testing the ML model is
reliable and representative of the real-world scenarios.

Model Performance Testing: ML models need to be evaluated for their
performance before they are deployed in a production environment. Model
performance testing involves evaluating the accuracy, precision, recall, F1
score, and other relevant metrics of the ML model to ensure that it is performing
as expected.
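As a brief sketch of such an evaluation using scikit-learn (the labels below are placeholders, not this project's results):

from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

y_true = [0, 1, 2, 3, 1, 0, 2, 3]  # actual classes (placeholder values)
y_pred = [0, 1, 2, 2, 1, 0, 2, 3]  # model predictions (placeholder values)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("f1       :", f1_score(y_true, y_pred, average="macro"))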
Model Validation Testing: Model validation testing involves validating
the accuracy and reliability of the ML model by comparing its predictions with
the actual outcomes. This can be done using techniques such as cross-
validation, holdout validation, or time series validation, depending on the type
of ML model and the nature of data being used.
Model Robustness Testing: ML models need to be tested for their
robustness, i.e., their ability to perform well in different scenarios and handle
unexpected inputs. Robustness testing involves subjecting the ML model to
various edge cases, outliers, and adversarial inputs to ensure that it can handle
such situations without breaking or producing inaccurate results.
Model Interpretability Testing: Interpretability of ML models is
important for understanding how the model arrives at its predictions. Model
interpretability testing involves evaluating the model's ability to provide
meaningful explanations for its predictions, such as feature importance, decision
rules, or visualizations, to ensure that the model's outputs are understandable
and explainable.
Model Deployment Testing: Testing the deployment of ML models
involves checking the functionality, performance, and reliability of the model
when deployed in a production environment. This may involve testing the
model's integration with other components of the system, monitoring its
performance, and verifying its accuracy and reliability in a live environment.
Model Retraining Testing: ML models may require periodic retraining to ensure that they continue to perform well over time. Model retraining testing involves evaluating the performance of the retrained model and comparing it with the previous version to ensure that the retraining process has not introduced any issues or degraded the model's performance.
Model Security Testing: ML models can be vulnerable to security threats
such as data breaches, model poisoning, adversarial attacks, and unauthorized
access. Model security testing involves evaluating the model's security
measures, such as data encryption, access controls, and model robustness
against various security threats.
Ethical Considerations: ML models can have ethical implications, such
as bias, fairness, and transparency. Ethical considerations in testing ML models
involve evaluating the model's fairness across different demographic groups,
identifying and addressing any biases in the model's predictions, and ensuring
that the model's outputs are transparent and explainable.
There are different types of software testing techniques such as manual
testing, automated testing, and exploratory testing. Each of these techniques has
its advantages and disadvantages, and the choice of the testing technique
depends on the type of software application, the complexity of the application,
and the available resources.

CHAPTER 9
RESULT AND DISCUSSION

OVERVIEW:
The cosmetic product suggestion application is a Python-based program
that uses Convolutional Neural Network (CNN) algorithms to scan a user's skin
tone and suggest cosmetic products based on that analysis. The goal of this
project is to help users find the best cosmetic products for their skin tone.

METHODOLOGY:
To create this application, we trained a CNN model on a large dataset of
skin tone images. We used Python and TensorFlow to build and train the model.
Once the model was trained, we integrated it into the cosmetic product
suggestion application, which takes an input image of a user's skin tone and
outputs a list of recommended cosmetic products.

RESULTS:
We evaluated the accuracy of our model on a test dataset and achieved an
accuracy of 92%. In addition, we tested the cosmetic product suggestion
application on a set of real-world user images and found that the application
was able to accurately suggest cosmetic products based on skin tone in over
80% of cases.

Fig. 9.1: TRAINING AND VALIDATION LOSS

Fig. 9.2: TRAINING AND VALIDATION ACCURACY

Fig. 9.3: CONFUSION MATRIX

CHAPTER 10
CONCLUSION AND FUTURE WORK

10.1 CONCLUSION
Overall, the cosmetic product suggestion application shows promising
results in terms of accuracy and usability. The application can help users find
the best cosmetic products for their skin tone and improve their overall cosmetic
experience. However, there is still room for improvement in terms of accuracy
and usability. Future work can focus on increasing the accuracy of the model by
incorporating more diverse skin tones and evaluating the performance of the
application on a larger and more diverse set of user images. In conclusion, the
cosmetic product suggestion application is a valuable tool for anyone looking to
find the best cosmetic products for their skin tone. With further development
and refinement, this application has the potential to become a game-changer in
the cosmetic industry. Nowadays, decision-making in choosing the right cosmetic products has become very complex, both in markets and on e-commerce sites. To overcome this problem, we propose a smart application that scans the user's skin as input and suggests the best products based on the customer's skin type (i.e., oily, normal, and dry). The main goal of this system is to suggest the best distribution of cosmetic products using AI and deep learning.

10.2 FUTURE WORK


Currently, this project is a prototype that suggests products to customers based on skin type (i.e., dry, oily, normal). The suggested products were pre-loaded by us in an Excel sheet. In the future, this system can be expanded by integrating it with e-commerce websites that suggest real products to customers.

REFERENCE

[1] Ian Goodfellow, Yoshua Bengio and Aaron Courville. “Deep Learning,”
The MIT Press, 2016, mitpress.mit.edu/9780262035613/deep-learning.
[2] R. Iwabuchi et al., "Proposal of recommender system based on user
evaluation and cosmetic ingredients," 2017 International Conference on
Advanced Informatics, Concepts, Theory, and Applications (ICAICTA),
Denpasar, Indonesia, 2017, pp. 1-6.
[3] Holder, Christopher, Obara, Boguslaw, and Ricketts, Stephen, "Visual Siamese Clustering for Cosmetic Product Recommendation," 2019, doi: 10.1007/978-3-030-21074-8_40.
[4] Z. Ma'or and M. Portugal-Cohen, "Injecting Innovation to Traditional, Natural Cosmetic Ingredients: The Case of Dead Sea Minerals," in IEEE Engineering Management Review, vol. 49, no. 2, pp. 73-80, Second Quarter, June 2021.
[5] Z. Wu et al., "Cosmetic safety risk assessment model and system
implementation based on six-dimensional classification," 2021 International
Conference on Internet, Education and Information Technology (IEIT), Suzhou,
China, 2021, pp. 31-34.
[6] L. Yang, "A study on optimization design of cosmetics sustainable service
system and case-based design on Xie fuchun," 2020 International Conference on
Innovation Design and Digital Technology (ICIDDT), Zhenjing, China, 2020,
pp. 142-146.
[7] A. Kothari, D. Shah, T. Soni and S. Dhage, "Cosmetic Skin Type
Classification Using CNN With Product Recommendation," 2021 12th
International Conference on Computing Communication and Networking
Technologies (ICCCNT), Kharagpur, India, 2021, pp. 1-6.

[8] R. S, H. S, K. Jayasakthi, S. D. A, K. Latha and N. Gopinath, "Cosmetic
Product Selection Using Machine Learning," 2022 International Conference on
Communication, Computing and Internet of Things (IC3IoT), Chennai, India,
2022, pp. 1-6.
[9] J. Wang, X. Zeng and M. Tang, "Research on the design of smart cosmetic
based on user experience," 2021 26th International Conference on Automation
and Computing (ICAC), Portsmouth, United Kingdom, 2021, pp. 1-4.
[10] M. S. Fleysher and V. M. Troeglazova, "Neural Network for Selecting
Cosmetics by Photo," 2021 IEEE Conference of Russian Young Researchers in
Electrical and Electronic Engineering (ElConRus), St. Petersburg, Moscow,
Russia, 2021, pp. 334-336.
[11] R. Nurfadillah, F. Darari, R. E. Prasojo and Y. Amalia, "Benchmarking
Explicit Rating Prediction Algorithms for Cosmetic Products," 2020 3rd
International Seminar on Research of Information Technology and Intelligent
Systems (ISRITI), Yogyakarta, Indonesia, 2020, pp. 457-462.
[12] Q. Ma, M. Tsukagoshi and M. Murata, "Estimating Evaluation of
Cosmetics Reviews with Machine Learning Methods," 2020 International
Conference on Asian Language Processing (IALP), Kuala Lumpur, Malaysia, 2020, pp. 259-263.
[13] G. Lou and H. Shi, "Face image recognition based on Convolutional
Neural Network," in China Communications, vol. 17, no. 2, pp. 117-124, Feb.
2020.
[14] K. Yamagishi, S. Yamamoto, T. Kato and S. Morishima, "Cosmetic
Features Extraction by a Single Image Makeup Decomposition," 2018
IEEE/CVF Conference on Computer Vision and Pattern Recognition
Workshops (CVPRW), Salt Lake City, UT, USA, 2018, pp. 1965-19652.
[15] C. -C. Chang et al., "Robust skin type classification using Convolutional
Neural Networks," 2018 13th IEEE Conference on Industrial Electronics and
Applications (ICIEA), Wuhan, China, 2018, pp. 2011-2014.

APPENDICES 1
SCREENSHOT

A1: CLASSIFICATION OF DATASETS

A2: PRE-PROCESSING AND FEATURE EXTRACTION

A3: IMPORTING LIBRARIES

A4: TRAINING AND ACCURACY

A5: COSMETIC PRODUCTS USED IN APPLICATION

A6: USER INTERFACE

A7: SUGGESTION OF COSMETICS FOR DRY SKIN

A8: SUGGESTION OF COSMETICS FOR OILY SKIN

A9: SUGGESTION OF COSMETICS FOR COMBINATIONAL SKIN

A10: SUGGESTION OF COSMETICS FOR NORMAL SKIN

APPENDICES 2
SOURCE CODE

Pre-Processing and Feature Extraction:


import numpy as np
import os
from matplotlib import pyplot as plt
import cv2
import random
import pickle

file_list = []
class_list = []

DATADIR = "dataset"
# All the categories you want your neural network to detect
CATEGORIES = ["combination", "dry", "Normal", "oily"]
# The size of the images that your neural network will use
IMG_SIZE = 50

# Checking for all images in the data folder (a quick pass to verify they load)
for category in CATEGORIES:
    path = os.path.join(DATADIR, category)
    for img in os.listdir(path):
        img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)
        img_array = cv2.equalizeHist(img_array)
        # img_array = cv2.Canny(img_array, threshold1=3, threshold2=10)
        # img_array = cv2.medianBlur(img_array, 1)

training_data = []

def create_training_data():
    for category in CATEGORIES:
        path = os.path.join(DATADIR, category)
        class_num = CATEGORIES.index(category)
        for img in os.listdir(path):
            try:
                img_array = cv2.imread(os.path.join(path, img),
                                       cv2.IMREAD_GRAYSCALE)
                # img_array = cv2.Canny(img_array, threshold1=3, threshold2=10)
                # img_array = cv2.medianBlur(img_array, 1)
                img_array = cv2.equalizeHist(img_array)
                new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
                training_data.append([new_array, class_num])
            except Exception as e:
                pass

create_training_data()
random.shuffle(training_data)

X = []  # features
y = []  # labels
for features, label in training_data:
    X.append(features)
    y.append(label)
X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)

# Creating the files containing all the information about your model
pickle_out = open("X.pickle", "wb")
pickle.dump(X, pickle_out)
pickle_out.close()

pickle_out = open("y.pickle", "wb")
pickle.dump(y, pickle_out)
pickle_out.close()

pickle_in = open("X.pickle", "rb")
X = pickle.load(pickle_in)

Training and Validation:


import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Dense, Dropout, Activation, Flatten,
                                     Conv2D, MaxPooling2D)
import pickle
import matplotlib.pyplot as plt
import numpy as np

# Opening the files about data
X = pickle.load(open("X.pickle", "rb"))
y = pickle.load(open("y.pickle", "rb"))

# normalizing data (a pixel goes from 0 to 255)
X = X / 255.0

# Building the model
model = Sequential()

# 3 convolutional layers
model.add(Conv2D(32, (3, 3), input_shape=X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# 2 hidden layers
model.add(Flatten())
model.add(Dense(128))
model.add(Activation("relu"))
model.add(Dense(128))
model.add(Activation("relu"))

# The output layer with 4 neurons, for the 4 skin-type classes
model.add(Dense(4))
model.add(Activation("softmax"))

# Compiling the model using some basic parameters
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])

y = np.array(y)

# Training the model for 5 epochs.
# validation_split is the fraction of images held out for validation.
history = model.fit(X, y, batch_size=32, epochs=5, validation_split=0.1)

# Saving the model
model.save('CNN.model')

# Printing a graph showing the accuracy changes during the training phase
acc = np.array(history.history['accuracy'])
val_acc = np.array(history.history['val_accuracy'])
loss = np.array(history.history['loss'])
val_loss = np.array(history.history['val_loss'])
epochs_range = range(5)

plt.figure(figsize=(15, 15))
plt.subplot(2, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(2, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

import seaborn as sns
from sklearn import metrics
from sklearn.metrics import confusion_matrix, classification_report

# Predicting on the training data to build a confusion matrix
y_pred_array = np.array(model.predict(X))
yt = [row.tolist().index(max(row)) for row in y_pred_array]  # argmax per row

acc = metrics.accuracy_score(yt, y) * 100
print("Accuracy is:", acc)

cm1 = metrics.confusion_matrix(yt, y)
# Note: these binary-style formulas use only the first two classes
sensitivity1 = cm1[0, 0] / (cm1[0, 0] + cm1[0, 1])
print('Sensitivity : ', sensitivity1)
specificity1 = cm1[1, 1] / (cm1[1, 0] + cm1[1, 1])
print('Specificity : ', specificity1)

print('\nClassification Report\n')
print(classification_report(y, yt,
      target_names=['Class 1', 'Class 2', 'Class 3', 'Class 4']))

confusion_mtx = confusion_matrix(y, yt)
# plot the confusion matrix
f, ax = plt.subplots(figsize=(8, 8))
sns.heatmap(confusion_mtx, annot=True, linewidths=0.01, cmap="Blues",
            linecolor="gray", fmt='.1f', ax=ax)
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.title("Confusion Matrix")
plt.show()

Testing and Cosmetics Suggestion:


import numpy as np
import cv2
import csv
import tensorflow as tf
from tkinter import *
from tkinter import filedialog
import tkinter.messagebox
import PIL.Image
import PIL.ImageTk

# Load the product catalogue (one row per product, with a skin-type index)
file = open('cosmetics.csv')
csvreader = csv.reader(file)
header = next(csvreader)

CATEGORIES = ["combination", "dry", "Normal", "oily"]

root = Tk()
root.title("COSMETIC SUGGESTION")
root.state('zoomed')
root.configure(bg='#D3D3D3')
root.resizable(width=True, height=True)
value = StringVar()
panel = Label(root)

model = tf.keras.models.load_model("CNN.model")

def Camera():
    # define a video capture object
    vid = cv2.VideoCapture(0, cv2.CAP_DSHOW)
    while True:
        # Capture the video frame by frame
        ret, frame = vid.read()
        # Display the resulting frame and save the latest capture
        cv2.imshow('frame', frame)
        cv2.imwrite('main.jpg', frame)
        # the 'q' button is set as the quitting button;
        # any desired button of your choice may be used
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    # After the loop, release the capture object and destroy all the windows
    vid.release()
    cv2.destroyAllWindows()

def prepare(file):
    # Same pre-processing as in training: grayscale, equalize, resize
    IMG_SIZE = 50
    img_array = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
    img_array = cv2.equalizeHist(img_array)
    # img_array = cv2.Canny(img_array, threshold1=3, threshold2=10)
    # img_array = cv2.medianBlur(img_array, 1)
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1)

def detect(filename):
    prediction = model.predict(prepare(filename))
    prediction = list(prediction[0])
    print(prediction)
    value.set(CATEGORIES[prediction.index(max(prediction))])
    i = int(prediction.index(max(prediction))) + 1  # CSV skin-type indices are 1-based
    # Look up matching products in the CSV by skin-type index (column 6)
    file = open('cosmetics.csv')
    csvreader = csv.reader(file)
    header = next(csvreader)
    for row in csvreader:
        if i == int(row[6]):
            x = (header[0] + " : " + row[0] + "\n" + header[1] + " : " + row[1] + "\n"
                 + header[2] + " : " + row[2] + "\n" + header[3] + " : " + row[3] + "\n"
                 + header[4] + " : " + row[4] + "\n" + header[5] + " : " + row[5] + "\n")
            tkinter.messagebox.showinfo("", x)

def ClickAction(event=None):
    filename = filedialog.askopenfilename()
    img = PIL.Image.open(filename)
    img = img.resize((250, 250))
    img = PIL.ImageTk.PhotoImage(img)
    global panel
    panel = Label(root, image=img)
    panel.image = img
    panel = panel.place(relx=0.435, rely=0.3)
    detect(filename)

button = Button(root, text='ACTIVATE CAMERA', font=(None, 18),
                activeforeground='red', bd=20, bg='cyan', relief=RAISED,
                height=3, width=20, command=Camera)
button = button.place(relx=0, rely=0.05)

button = Button(root, text='CHOOSE FILE', font=(None, 18),
                activeforeground='red', bd=20, bg='cyan', relief=RAISED,
                height=3, width=20, command=ClickAction)
button = button.place(relx=0.40, rely=0.05)

result = Label(root, textvariable=value, font=(None, 20))
result = result.place(relx=0.465, rely=0.7)

root.mainloop()

