
EMOTION DETECTION FROM FACIAL EXPRESSIONS IN
PICTURES

A Project Synopsis
Submitted in partial fulfilment of the requirements for the degree
of
Bachelor of Technology
in
COMPUTER SCIENCE & ENGINEERING

Submitted by:
Prince Kumar Singh (22BCS11746)
Rashi Gupta (22BCS10062)
Ashwani Kumar Pandey (22BCS10860)
Nishchay (22BCS10648)
Parteek (22BCS10395)

CHANDIGARH UNIVERSITY
1. Project title:
"Emotion Detection from Facial Expressions in Pictures"
2. Team member details:
Name                    Roll No.       E-mail
Prince Kumar Singh      22BCS11746     22BCS11746@cuchd.in
Rashi Gupta             22BCS10062     22BCS10062@cuchd.in
Ashwani Kumar Pandey    22BCS10860     22BCS10860@cuchd.in
Nishchay                22BCS10648     22BCS10648@cuchd.in
Parteek                 22BCS10395     22BCS10395@cuchd.in

3. Objective:
The most informative channel for machine perception of
emotions is facial expression. Effective human-computer
intelligent interaction (HCII) requires the computer to
detect emotions from facial expressions. This project aims
to develop an automatic emotion detection system by
evaluating machine learning algorithms for facial expression
recognition in pictures. The system will perform feature
selection on each frame to analyse the image, compare it
with an authentic database of natural emotions, and classify
each frame into a class of human emotion by harnessing
facial expression dynamics.

Expected features, novelty & significance of the proposed
project:
Successful detection of facial expressions corresponding to
seven emotions: happiness, sadness, anger, disgust, surprise,
fear, and neutral. The program is able to locate the relevant
facial features, return the result of the analysis as a
percentage for each emotion, and return an image with the
facial features highlighted.
4. Scope:
The scope lies in the increasing trend towards human-computer
interaction in a more natural way, communicating with the
computer without traditional interface devices. Emotion
recognition systems have a wide range of applications in
fields such as:
• Research and education
• Security and law enforcement
• Psychiatric evaluation
• Telecommunication
• Communication controls in games, etc.

5. Technical details:
Facial expressions give important cues about emotions.
Therefore, several approaches have been proposed to classify
human affective states. The features used are typically based
on the local spatial position or displacement of specific
points and regions of the face.

The main task involves feature extraction and emotion
recognition by pattern matching, classifying the input into a
specific class of emotion using classifiers. The basic parts
are:
Face tracking and feature extraction:
The face tracking we use is based on OpenCV. It first builds a
large sample set by extracting Haar features of faces in the
image, and then uses the AdaBoost algorithm as the face
detector.
Expression representation:
We use a fusion of face shape and texture as the
representation of facial expression. This hybrid
representation incorporates local pixel-intensity variation
patterns while still adhering to shape constraints at a
global level, which proves effective.
Recognition and classification:
Several classifiers from machine learning are used to match
the query image against the labelled image set.
Classifiers that can be used for recognition are:
• KNN (k-nearest neighbours) algorithm
• SVM (support vector machine) algorithm
• Bayesian network
• PEBLS
• CN2
• SSS
• Decision tree
• Voting algorithm
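As an illustration, the k-nearest-neighbours classifier from the list above can be sketched in a few lines. The 2-D "feature vectors" and labels below are toy values, not real facial-expression features:

```python
import math
from collections import Counter

def knn_classify(query, labelled_examples, k=3):
    """Classify a feature vector by majority vote among its k nearest
    labelled examples, using Euclidean distance."""
    nearest = sorted(labelled_examples,
                     key=lambda ex: math.dist(query, ex[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D feature vectors (imagine e.g. mouth-corner lift, brow drop):
examples = [
    ((0.9, 0.1), "happy"), ((0.8, 0.2), "happy"), ((0.7, 0.1), "happy"),
    ((0.1, 0.9), "sad"),   ((0.2, 0.8), "sad"),   ((0.1, 0.7), "sad"),
]
print(knn_classify((0.85, 0.15), examples))  # happy
```

In a real system the feature vectors would come from the face-tracking stage above and the labelled examples from a training database.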
6. Flowchart:
7. How does emotion recognition work?
Emotion recognition (ER) combines knowledge of artificial intelligence (AI)
and psychology. Let's find out how these fields can be united.

On the AI side, emotion recognition in video relies on object and motion
detection. Emotion recognition is carried out using one of two AI
approaches: machine learning (ML) or deep learning (DL). We'll take a look
at both of these in the next part of the article.

The facial emotion recognition process is divided into three key stages:
• Detection of faces and facial parts: At this stage, an ER
solution treats the human face as an object. It detects facial features
(brows, eyes, mouth, etc.), notes their positions, and observes their
movements for some time (this period is different for each solution).
• Facial feature extraction: The ER solution extracts the detected
features for further analysis.
8. Expression classification:
This final stage is devoted to recognizing emotions. ER software analyses
a facial expression based on the extracted features. These features are
compared to labelled data (for ML-based solutions) or information from
previous analysis (for DL-based solutions). An emotion is recognized when
a set of features matches the description of that emotion.
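For an ML-based solution, this matching step can be sketched as a nearest-prototype comparison. The prototype vectors below are made-up placeholders for features a real system would learn from labelled training images:

```python
import math

# Hypothetical per-emotion prototype feature vectors; in practice these
# would be learned from labelled training data.
PROTOTYPES = {
    "happiness": [0.9, 0.2, 0.1],
    "sadness":   [0.1, 0.8, 0.2],
    "anger":     [0.2, 0.3, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def classify_expression(features):
    """Return the emotion whose prototype best matches the features."""
    return max(PROTOTYPES, key=lambda e: cosine(features, PROTOTYPES[e]))

print(classify_expression([0.85, 0.25, 0.15]))  # happiness
```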

And this is where psychology joins the game. Usually, an ER solution
detects seven basic emotions:

1. Happiness
2. Surprise
3. Anger
4. Sadness
5. Fear
6. Disgust
7. Contempt

9. Challenges of recognizing emotion in images:
1. Data augmentation: As with any machine learning or deep
learning algorithm, ER solutions require a lot of training data.
This data must include images in various frames, from various
angles, with various backgrounds, and with people of different
genders, nationalities, and races. There are three ways to
overcome this issue:
a. Create your own dataset.
b. Combine several datasets.
c. Modify the data as you go.
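Option (c), modifying the data as you go, is usually done with on-the-fly augmentation. A minimal NumPy sketch, with a horizontal flip and brightness shifts as example transforms (the shift values are arbitrary):

```python
import numpy as np

def augment(image):
    """Generate simple variants of one training image: the original,
    a horizontal flip, and two brightness-shifted copies."""
    variants = [image, np.fliplr(image)]
    for delta in (-20, 20):
        shifted = np.clip(image.astype(int) + delta, 0, 255)
        variants.append(shifted.astype(image.dtype))
    return variants

# Four training samples generated from a single (toy) image:
face = np.arange(9, dtype=np.uint8).reshape(3, 3)
print(len(augment(face)))  # 4
```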

2. Face occlusion and lighting issues: Occlusion due to changes in
pose is a common issue for motion detection in video, especially
when working with unprepared data. A popular method for
overcoming it is a frontalization technique that detects
facial features.
3. Identifying facial features: An emotion recognition solution
scans faces for eyebrows, eyes, noses, mouths, chins, and
other facial features. Sometimes this detection is complicated by:
a. The distance between features.
b. Feature size.
c. Skin colour.
To improve the accuracy of feature identification, some
researchers implement a part-based model that divides facial
landmarks into several parts according to the physical structure
of the face. This model then feeds these parts into the network
separately, with the relevant labels.
4. Recognizing incomplete emotions: Most algorithms focus on
recognizing the peak, high-intensity expression and ignore
lower-intensity expressions. This leads to inaccurate recognition of
emotion when analysing reserved people from cultures with
traditions of emotional suppression.

One option for solving this issue is implementing a peak-piloted
deep network. This type of network compares peak and non-peak
expressions of the same emotion and minimizes the differences
between frames.
10. Technologies for recognizing emotion in images:
a. Machine learning algorithms:
The support vector machine (SVM) algorithm is a linear
classification technique widely applied to image processing.
Experiments at the University of Cambridge on applying SVM algorithms to
emotion recognition have shown 88% accuracy. This is a promising
result; however, the classification time is high compared to other ER
methods. Also, recognition accuracy will be lower when an SVM
algorithm is applied to real-world videos instead of controlled datasets.
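A linear SVM of the kind described can be tried in a few lines with scikit-learn. The training vectors and emotion labels below are toy values for illustration, not real extracted features:

```python
from sklearn.svm import SVC

# Toy 2-D "expression features" with emotion labels (illustrative only).
X_train = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
y_train = ["happy", "happy", "sad", "sad"]

clf = SVC(kernel="linear")  # linear SVM classifier
clf.fit(X_train, y_train)
print(clf.predict([[0.85, 0.15]])[0])  # happy
```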

Bayesian classifiers are algorithms based on Bayes' theorem.
Compared to other machine learning approaches, Bayesian classifiers
need less data for training and are able to learn from both labelled and
unlabelled data. Using maximum likelihood estimation, they achieve
higher accuracy outside laboratory tests.
Random forest (RF) is a machine learning algorithm applied for
classification, regression, and clustering. It's based on a decision
tree predictive model that's tailored to processing large amounts of
data. RF handles both numerical and categorical data, allowing this
algorithm to recognize emotions and estimate their intensity. RF
accuracy varies between 71% and 96% depending on the complexity of
the detected features.
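A random forest sketch with scikit-learn, again on toy data. The per-class probabilities from `predict_proba` give a rough proxy for the intensity estimation mentioned above:

```python
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors and emotion labels (illustrative only).
X_train = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3],
           [0.1, 0.9], [0.2, 0.8], [0.3, 0.7]]
y_train = ["happy", "happy", "happy", "sad", "sad", "sad"]

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)

label = rf.predict([[0.85, 0.15]])[0]
proba = rf.predict_proba([[0.85, 0.15]])[0]  # class probabilities: a rough
                                             # proxy for emotion intensity
print(label)
```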

b. Deep learning algorithms:
Deep convolutional neural networks (CNNs) are one of
the most popular deep learning approaches to image and video processing.
Recurrent neural networks (RNNs) are suitable for
processing sequences of data. A traditional neural network processes pieces
of information independently of one another; an RNN adds loops to its layers.
This allows the network to capture the transitions between facial expressions
over time and detect more complex emotions.
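The core building block of a CNN, a 2-D convolution over pixel intensities, can be sketched with NumPy. The tiny image and vertical-edge kernel below are illustrative, not part of a real network:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most DL
    frameworks): slide the kernel over the image, summing elementwise
    products at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A [-1, 1] kernel responds strongly at vertical intensity boundaries,
# such as the edge of a mouth or brow in a face crop:
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1]], dtype=float)
print(conv2d(image, kernel))  # nonzero only at the 0-to-1 boundary
```

A real CNN stacks many such learned kernels with nonlinearities and pooling; the principle of detecting local patterns is the same.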
11. Conclusion:
Emotion recognition is a popular and promising field that has a chance to
simplify a lot of things, from marketing studies to health monitoring.
Developing facial emotion recognition software requires both deep
knowledge of human psychology and deep expertise in AI development.
While the former provides us with an understanding of which facial
expressions indicate which emotions, AI still has much to learn in order
to recognize them correctly.

References
1. https://pypi.org/project/deepface/
2. https://medium.com/nerd-for-tech/deep-face-recognition-in-python41522fb47028
