Face Emotion Recognition Using a Deep Learning Technique
ABSTRACT:
• This project presents face emotion recognition: the task of attempting to
recognize human emotion and affective states from facial images.
• This is also the phenomenon that animals like dogs and horses employ to
understand human emotion. We will use the TensorFlow and Keras deep
learning frameworks to build a model using an Artificial Neural Network
classifier.
• The model will recognize emotion from a face dataset. We will load the
data, extract features from it, then split the dataset into training and testing
sets.
• Then, we’ll initialize an Artificial Neural Network classifier and train the
model. Finally, we’ll calculate the accuracy of our model.
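The pipeline above (load data, split into training and testing sets, train an Artificial Neural Network classifier, measure accuracy) can be sketched as follows. This is a minimal stand-in, not the project's actual implementation: it uses scikit-learn's MLPClassifier in place of the Keras model and synthetic random features in place of real face data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for extracted face features: 200 samples, 48 features,
# 4 emotion classes (happy, angry, sad, neutral).
rng = np.random.RandomState(0)
X = rng.rand(200, 48)
y = rng.randint(0, 4, size=200)

# Split the dataset into training and testing sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Initialize an Artificial Neural Network classifier and train the model.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Finally, calculate the accuracy of the model.
acc = accuracy_score(y_test, clf.predict(X_test))
print(acc)
```

With real face images, the random arrays would be replaced by extracted shape and texture features and their emotion labels.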
EXISTING SYSTEM
They propose learning a common subspace to obtain corpus-invariant feature
representations, and then seek the relationships between the features and
labels in this latent subspace by introducing a regression model.
In addition, a discriminative MMD is used as the discrepancy metric to reduce
the distribution difference, since the divergence between source and target
domains is crucial to the cross-domain problem.
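As a rough illustration (not the cited authors' implementation), the Maximum Mean Discrepancy between source- and target-domain feature samples can be estimated with an RBF kernel; the gamma value and the synthetic Gaussian data below are assumptions for demonstration only.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel between the rows of A and B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(source, target, gamma=1.0):
    # Squared Maximum Mean Discrepancy between two samples:
    # large when the two feature distributions differ, zero when identical.
    Kss = rbf_kernel(source, source, gamma)
    Ktt = rbf_kernel(target, target, gamma)
    Kst = rbf_kernel(source, target, gamma)
    return Kss.mean() + Ktt.mean() - 2 * Kst.mean()

rng = np.random.RandomState(0)
src = rng.normal(0.0, 1.0, (100, 5))   # source-domain features
tgt = rng.normal(0.5, 1.0, (100, 5))   # shifted target-domain features
print(mmd2(src, tgt))                  # grows as the domain gap grows
```

Minimizing such a discrepancy term during training is what pulls the source and target feature distributions together in the shared subspace.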
Over the past few years, many transfer learning algorithms have been proposed
to solve this problem.
These methods focus on transferring knowledge from the source domain to the
target domain. Moreover, they present a label graph to help transfer knowledge
from relevant source data to target data.
Finally, they conduct extensive experiments on three popular emotional datasets.
The results show that their method outperforms traditional methods and
some state-of-the-art transfer learning algorithms on cross-corpus speech
emotion recognition tasks.
Drawbacks:
• The existing system analyzes audio only, but audio alone does not
capture human feeling; human expression cannot be determined from
voice alone.
• Voice by itself cannot predict a person's current expression: a sad
person may at times speak in a happy tone while not actually being
happy, so speech alone is not enough to predict human expression.
PROPOSED SYSTEM
• To classify facial expressions, we planned to design a deep
learning technique so that a person with less expertise in
software can also use it easily.
• The proposed system predicts facial expressions and explains
the experimental analysis of our methodology.
• A large number of sample images are collected, comprising
different classes such as happy, angry, sad and neutral.
• A different number of images is collected for each class, and
these are divided into database images and input images.
• The primary attributes of the image rely on shape- and
texture-oriented features.
• The sample screenshots display face emotion detection
using a color-based segmentation model.
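The color-based segmentation step mentioned above can be sketched with a simple per-pixel color threshold. The thresholds below are a commonly used rough skin-tone rule in RGB space, chosen here as an assumption for illustration; the project's actual segmentation model may differ.

```python
import numpy as np

def skin_mask(rgb):
    # Very rough skin-tone threshold in RGB space (illustrative only):
    # a pixel counts as skin if red dominates and exceeds basic bounds.
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r > g) & (r > b) & ((r - np.minimum(g, b)) > 15))

# Synthetic 4x4 test image: top half skin-like, bottom half background.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2] = [200, 120, 90]    # skin-like pixels
img[2:] = [30, 30, 200]     # background pixels
mask = skin_mask(img)
print(mask.sum())           # count of pixels classified as skin
```

The resulting boolean mask isolates the face region, from which shape- and texture-oriented features can then be extracted.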
Advantages:
USECASE DIAGRAM:
CLASS DIAGRAM:
ACTIVITY DIAGRAM:
SEQUENCE DIAGRAM:
ER DIAGRAM:
COLLABORATION DIAGRAM:
Project Requirements:
Framework: Keras
1. Software Requirements:
• Operating System : Windows
• Tool : Anaconda with Jupyter Notebook
• Language : Python
2. Hardware Requirements:
• Processor : minimum i3 and above
• Hard disk : minimum 300 GB
• RAM : minimum 4 GB
Conclusion:
In this project, research to classify facial emotions in
static facial images using deep learning techniques was
carried out. This is a complex problem that has already been
approached several times with different techniques. While good
results have been achieved using feature engineering, this
project focused on feature learning, which is one of deep
learning's promises. Although feature learning makes feature
engineering unnecessary, image pre-processing still boosts
classification accuracy because it reduces noise in the input
data. Nowadays, facial emotion detection software still includes
feature engineering, and a solution based entirely on feature
learning does not yet seem close because of this limitation.
Nevertheless, emotion classification can be achieved by
means of deep learning techniques.
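The pre-processing mentioned in the conclusion can be as simple as grayscale conversion and intensity normalization. The snippet below is a minimal sketch under that assumption, using the standard luma weights; the project's actual pre-processing pipeline is not specified.

```python
import numpy as np

def preprocess(rgb):
    # Convert RGB to grayscale using the standard luma weights,
    # then normalize to [0, 1] to reduce lighting noise before
    # feeding the image to a classifier.
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    return gray / 255.0

# Synthetic 4x4 RGB image with values in [0, 255].
img = (np.arange(48, dtype=np.float64).reshape(4, 4, 3) % 256)
out = preprocess(img)
print(out.shape)
```

Normalizing inputs this way keeps pixel magnitudes in a consistent range, which typically helps neural network training converge.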