
Emotion Classification using Deep Learning Technique
ABSTRACT:
• This project presents face emotion recognition: the act of attempting to recognize human emotions and affective states from the face, a problem that can be solved by analysing one or more facial features.
• This is also the mechanism that animals such as dogs and horses employ to understand human emotion. We will use the TensorFlow and Keras deep learning frameworks to build a model using an Artificial Neural Network classifier.
• The model will be able to recognize emotions from a face data set. We will load the data, extract features from it, then split the dataset into training and testing sets.
• Then, we’ll initialize an Artificial Neural Network classifier and train the model. Finally, we’ll calculate the accuracy of our model.
EXISTING SYSTEM
 The authors propose learning a common subspace to obtain corpus-invariant feature representations, and then seek the relationships between the features and labels in this latent subspace by introducing a regression model.
 In addition, a discriminative Maximum Mean Discrepancy (MMD) is used as the discrepancy metric to reduce the distribution difference; the divergence between source and target domains is crucial to the cross-domain problem.
 Over the past few years, many transfer learning algorithms have been proposed to solve this problem.
 These methods focus on transferring knowledge from the source domain to the target domain. Moreover, a label graph is presented to help transfer knowledge from relevant source data to target data.
 Finally, extensive experiments are conducted on three popular emotional datasets.
 The results show that this method can outperform traditional methods and some state-of-the-art transfer learning algorithms on cross-corpus speech emotion recognition tasks.
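The MMD discrepancy metric mentioned above can be illustrated with a short sketch. This is a simplified, assumption-laden example: a linear-kernel MMD, which reduces to the distance between the mean feature vectors of two domains, not the discriminative MMD of the cited work:

```python
import numpy as np

def linear_mmd(source, target):
    """Squared MMD with a linear kernel: the squared distance
    between the mean feature vectors of the two domains."""
    diff = source.mean(axis=0) - target.mean(axis=0)
    return float(diff @ diff)

rng = np.random.default_rng(0)
# Synthetic source- and target-domain features with shifted distributions.
source = rng.normal(loc=0.0, size=(200, 16))
target = rng.normal(loc=0.5, size=(200, 16))

print("MMD(source, target):", linear_mmd(source, target))
print("MMD(source, source):", linear_mmd(source, source))  # 0 by definition
```

Minimizing such a discrepancy while learning the shared subspace is what pulls the source- and target-domain feature distributions together.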
Drawbacks:

• The existing system analyses audio only, but audio alone does not capture all human feeling; it cannot determine human expression from the voice.
• Voice by itself cannot reliably predict a person's current expression: a sad person may at times produce happy-sounding speech while not actually being happy, so speech alone is not enough to predict human expression.
PROPOSED SYSTEM
• To classify facial expressions, we planned to design a deep learning technique so that a person with less software expertise should also be able to use it easily.
• The proposed system predicts facial expressions, and this section explains the experimental analysis of our methodology.
• A large number of sample images were collected, comprising different classes such as happy, angry, sad and neutral.
• A different number of images was collected for each class, divided into database images and input images.
• The primary attributes of an image rely upon its shape- and texture-oriented features.
• The sample screenshots display face emotion detection using a color-based segmentation model.
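A color-based segmentation step like the one mentioned above can be sketched as follows. This is a minimal illustration using a well-known RGB skin-color heuristic on a tiny synthetic image; the thresholds and the test image are illustrative assumptions, not the project's actual segmentation model:

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of likely skin pixels using a classic RGB rule:
    R > 95, G > 40, B > 20, with R > G and R > B."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

# Synthetic 4x4 image: top half skin-toned, bottom half blue background.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2] = (200, 140, 120)   # skin-like color
img[2:] = (30, 60, 200)     # background color

mask = skin_mask(img)
print(mask)
```

In practice the resulting mask would be cleaned up (e.g. with morphological operations) before the shape- and texture-oriented features are computed on the segmented face region.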
Advantages:

• Increases throughput and reduces the subjectivity arising from human experts in detecting facial expressions.
• Facial expression is a natural, non-verbal method of emotional communication.
PREPARING THE DATASET:

This dataset contains approximately 670 train and 182 test image records, from which features were extracted; the images are classified into 4 classes:
Angry
Cry
Happy
Neutral
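Preparing such a dataset can be sketched as below. This assumes the common one-folder-per-class layout (`train/Angry`, `train/Cry`, …), which is an assumption about how the images are stored rather than something stated here; the sketch builds a temporary dummy layout just to stay self-contained:

```python
import tempfile
from pathlib import Path

CLASSES = ["Angry", "Cry", "Happy", "Neutral"]

def list_samples(split_dir):
    """Return (filepath, label) pairs for a one-folder-per-class layout."""
    samples = []
    for label in CLASSES:
        for path in sorted((Path(split_dir) / label).glob("*.jpg")):
            samples.append((path, label))
    return samples

# Build a dummy train/ layout with 3 empty .jpg files per class.
root = Path(tempfile.mkdtemp())
for label in CLASSES:
    class_dir = root / "train" / label
    class_dir.mkdir(parents=True)
    for i in range(3):
        (class_dir / f"img_{i}.jpg").touch()

train = list_samples(root / "train")
print(len(train))     # 12 samples: 4 classes x 3 images
print(train[0][1])    # "Angry"
```

With the real images in place, the same (filepath, label) pairs would feed the feature-extraction and train/test-split steps described earlier.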
LITERATURE SURVEY:
Title : Emotion Analysis of Social Media Data using Machine
Learning Techniques.
Author : Sonia Xylina Mashal, Kavita Asnani.

Analysis of emotion in text is a very young field in the area of computational linguistics. Analysis of emotions involves evaluation
and classification of text into emotion classes based on certain
levels as defined by emotion dimensional models which are
described in the theory of psychology. Emotion analysis and
classification is performed to identify the expressions of emotion in
a given text. In this paper a huge dataset of social media data
(tweets) is classified into five emotion categories (happy, sad,
anger, fear and surprise) using machine learning techniques (Naïve
Bayes and Support Vector Machine).
Title : Emotion Recognition and Sentiment Analysis in Conversations
Author: Mauajama Firdaus , Hardik Chauhan, Asif Ekbal and Pushpak
Bhattacharyya

Emotion and sentiment classification in dialogues is a challenging task that has gained popularity in recent times. Humans tend to have multiple
emotions with varying intensities while expressing their thoughts and
feelings. Emotions in an utterance of dialogue can either be independent
or dependent on the previous utterances, making the task complex and
interesting. Multi-label emotion detection in conversations is a significant
task that provides the ability to the system to understand the various
emotions of the users interacting. On the other hand, sentiment analysis
in dialogue or conversation helps in understanding the perspective of the
user with respect to the ongoing conversation. Besides text, additional
information in the form of audio and video assists in identifying the
correct emotions with the appropriate intensity and sentiments in an
utterance of a dialogue. Lately, quite a few datasets have been made
available for emotion and sentiment classification in dialogues.
Title : Emotion recognition classification methods
Author: Andrew Koch
Year : 2018

In this paper I have presented a comparison of the performances of different classification methods for the task of
classifying facial emotions given a 5-dimensional principal
component reduction of the local phase quantization and
Pyramid of Histogram of Orientation Gradients. These results
were then compared to results obtained and presented in a
paper on static facial expression analysis, with a comparison
being made to the methods used within that paper. It was found
that a decision tree based method was better at dealing with
overfitting than a deep neural network.
Title : Emotion Recognition using Feed Forward Neural Network &
Naïve Bayes
Author: Rahul Mahadeo Shahane, Ramakrishna Sharma.K, Md.Seemab
Siddeeq
Year : 2019

In this paper we analyze and predict the emotion of a user by recognizing his/her face. Face recognition is a software application
which is used to identify a particular person; it will be mostly useful in
security applications to secure our data. Nowadays we use face unlock in mobiles to unlock our phones. In some situations we need to know the emotions of a person. Though we can recognize emotion through tone of voice, it is more helpful to read it from the face as well. This can help, for example, in identifying a criminal by detecting whether he or she appears nervous, since fear shows in the face. In order to analyze his/her emotion, firstly we need
to recognize his/her face, so we need to use face recognition method
and then implement emotion analysis. Here we use different algorithms
to implement emotion analysis such as CNN.
Title : Bimodal Emotion Recognition using Machine Learning
Author: Manisha S, Nafisa Saida H, Nandita Gopal, Roshni P Anand
Year : 2021

The predominant communication channel to convey relevant and high-impact information is the emotions embedded in our communications. Researchers have tried to exploit these emotions in
recent years for human robot interactions (HRI) and human computer
interactions (HCI). Emotion recognition through speech or through facial
expression is termed as single mode emotion recognition. The rate of
accuracy of these single-mode emotion recognitions is improved by the proposed bimodal method, which combines the modalities of speech and facial expression and recognizes emotions using a Convolutional Neural Network (CNN) model. In this paper, the proposed bimodal emotion recognition system contains three major parts: processing of
audio, processing of video and fusion of data for detecting the emotion of
a person. The fusion of visual information and audio data obtained from
two different channels enhances the emotion recognition rate by
providing the complementary data.
System Architecture:
DESIGN ARCHITECTURE:

USECASE DIAGRAM:
CLASS DIAGRAM:
ACTIVITY DIAGRAM:
SEQUENCE DIAGRAM:
ER DIAGRAM:
COLLABORATION DIAGRAM:
Project Requirements:
Framework: Keras
1. Software Requirements:
• Operating System : Windows
• Tool : Anaconda with Jupyter Notebook
• Language : Python

2. Hardware Requirements:
• Processor : minimum i3 and above
• Hard disk : minimum 300 GB
• RAM : minimum 4 GB
Conclusion:
In this project, research into classifying facial emotions in
static facial images using deep learning techniques was carried
out. This is a complex problem that has already been approached
several times with different techniques. While good results have
been achieved using feature engineering, this project focused on
feature learning, which is one of the promises of deep learning.
While feature engineering is not strictly necessary, image pre-
processing reduces noise in the input data and hence boosts
classification accuracy. Nowadays, facial emotion detection
software still includes the use of feature engineering, and a
solution based totally on feature learning does not yet seem
close because of a major limitation. Even so, emotion
classification can be achieved by means of deep learning
techniques.
