
ACTIVENESS MEASURE USING FACE RECOGNITION

BASED ON EMOTIONS
A PROJECT REPORT

Submitted by
SHARMILA.N
(Reg no. 20S029)
of

5 Year Integrated M.Sc., (Data Science)

in

DEPARTMENT OF APPLIED MATHEMATICS AND


COMPUTATIONAL SCIENCE

THIAGARAJAR COLLEGE OF ENGINEERING


(A Govt. Aided Autonomous Institution
affiliated to Anna University)

MADURAI – 625015

SEPTEMBER 2022

THIAGARAJAR COLLEGE OF ENGINEERING, MADURAI

DEPARTMENT OF APPLIED MATHEMATICS AND


COMPUTATIONAL SCIENCE

BONAFIDE CERTIFICATE

Certified that this project report "ACTIVENESS MEASURE USING FACE RECOGNITION BASED ON EMOTIONS" is the bonafide work of SHARMILA N (20S029), third-semester student of the 5 Year Integrated M.Sc. (Data Science) Degree Programme, who carried out the project under my supervision from 17.07.2022 to 22.07.2022 during the academic year 2022-2023.

The project report was submitted to the department on 22/09/2022 for evaluation/assessment.

<sign>
Dr. S. Parthasarathy
Professor & Head
Department of Applied Mathematics and Computational Science

<sign>
Prof. P. Sharmila
Project Guide
Assistant Professor in Data Science
Department of Applied Mathematics and Computational Science

TABLE OF CONTENTS

LIST OF FIGURES

Figure No.    Name of the figure               Page No.
1.1           Convolutional process            8
1.2           Convolutional layer              9
1.3           Output of DeepFace framework     11
1.4           Output of CNN                    14

Section                               Page No.
1. Introduction                       5
2. Objective of the Project           6
3. Description of the Project         7
4. Implementation                     10
5. Significance of the Project        15
6. Conclusion                         16
7. Appendices, if any                 -
8. Project Worksheet / Diary          17

1. INTRODUCTION:

This project deals with predicting the activeness of students by detecting their facial emotions. This would help the corresponding faculty members give more attention to non-active students during that day, which would encourage those students to listen carefully during classes. It would also help build a good student-faculty relationship.

Usually, a teacher's focus is on procedures, rules, and routines. These are all very important, but knowing their students' daily mood also brings teachers one step closer to their students. The kind of relationship a student has with a teacher, whether good or bad, impacts the student's life, so teachers should be careful in dealing with students nowadays.

So, emotions such as anger, happiness, sadness, surprise, fear, neutrality, and disgust are classified into two categories: active and non-active. The result is written to a spreadsheet for the faculty's use and is also printed at the top of the rectangular frame during detection.

Deep Learning, which has emerged as an effective tool for analyzing big data, uses complex algorithms and artificial neural networks to train machines/computers so that they can learn from experience and classify and recognize data/images much as a human brain does. Within Deep Learning, a Convolutional Neural Network (CNN) is a type of artificial neural network widely used for image/object recognition and classification. This helps to classify our activeness measure.

In this project, two models are created: one with the DeepFace framework and another with a 4-layer sequential Convolutional Neural Network (CNN).

DeepFace is a lightweight face recognition and facial attribute analysis library for Python. The open-source DeepFace library includes leading-edge AI models for face recognition and automatically handles all procedures for facial recognition in the background.

In the analysis of these two models, the DeepFace framework gives more accurate results than the 4-layer sequential Convolutional Neural Network.

2. OBJECTIVE OF THIS PROJECT

Students need to deal with a lot of work such as assignments, deadlines and projects. They need to handle important days when a test or project deadline is near. There may be situations when they have to attend important sessions. They may be tired or stressed due to their workloads; in such cases they can end up depressed and make unwanted decisions.

Students also face depression due to personal issues, and it impacts their day in college.

In such cases an activeness measurement system can be developed to recognize their emotions and help them with guidance and assistance. This will help the faculty treat the students appropriately based on their emotions and give them more attention during classes.

The aim of the project is to detect the activeness of a student from the emotions expressed by his/her face using machine learning algorithms.

3. DESCRIPTION OF THE PROJECT:

Deepface framework:

DeepFace is a lightweight face recognition and facial attribute analysis library for Python. The open-source DeepFace library includes leading-edge AI models for face recognition and automatically handles all procedures for facial recognition in the background.

If you run face recognition with DeepFace, you get access to a set of features:
▪ Face Verification
▪ Face Recognition
▪ Facial Attribute Analysis
▪ Real-Time Face Analysis

The deepface library is also published in the Python Package Index (PyPI), a repository of software for the Python programming language.

The main idea of DeepFace is to integrate the best image recognition tools for deep face analysis in one lightweight and flexible library. Because simplicity is so important, it is also called LightFace. Anyone can adopt DeepFace in production-grade tasks with high confidence, using the most powerful open-source algorithms.

DeepFace systems can identify faces with 97% accuracy, almost the same accuracy as a human in the same position.

1. The verification function under the DeepFace interface offers single-pair face recognition. Each call of the function builds a face recognition model, and this is very costly.

2. Face recognition can also be applied to a large-scale data set. Face recognition requires applying face verification multiple times.

3. DeepFace also offers facial attribute analysis, including age, gender, facial expression (angry, fear, neutral, sad, disgust, happy and surprise) and race (Asian, white, Middle Eastern, Indian, Latino and black) predictions. The analysis function under the DeepFace interface is used to find the demography of a face.

4. You can run DeepFace on real-time video as well. Calling the stream function under the DeepFace interface will access your webcam and apply both face recognition and facial attribute analysis. A minimal sketch of these four interface calls is given after this list.
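A minimal sketch of the four interface calls described above, assuming the deepface package is installed from PyPI; the image file names and the faces_db folder are placeholders, and the exact return type of analyze (a dict or a list of dicts) varies with the library version.

from deepface import DeepFace

# 1. Verification: check whether two images show the same person.
result = DeepFace.verify(img1_path="img1.jpg", img2_path="img2.jpg")
print(result["verified"])

# 2. Recognition: search one face against a folder of known faces.
matches = DeepFace.find(img_path="img1.jpg", db_path="faces_db")

# 3. Facial attribute analysis: age, gender, race and emotion.
analysis = DeepFace.analyze(img_path="img1.jpg",
                            actions=["age", "gender", "race", "emotion"])
print(analysis)

# 4. Real-time analysis: recognition and attribute analysis on the webcam feed.
DeepFace.stream(db_path="faces_db")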

Sequential Convolutional Neural Network:


A Convolutional Neural Network (ConvNet/CNN) is a Deep Learning
algorithm which can take in an input image, assign importance (learnable weights
and biases) to various aspects/objects in the image and be able to differentiate one
from the other. The pre-processing required in a ConvNet is much lower as
compared to other classification algorithms. While in primitive methods filters are
hand-engineered, with enough training, ConvNets have the ability to learn these
filters/characteristics.

Fig 1.1 Convolutional process

This is the first step in the process of extracting valuable features from an
image. A convolution layer has several filters that perform the convolution
operation. Every image is considered as a matrix of pixel values.

ReLU stands for the rectified linear unit. Once the feature maps are
extracted, the next step is to move them to a ReLU layer. ReLU performs an

8
element-wise operation and sets all the negative pixels to 0. It introduces non-
linearity to the network, and the generated output is a rectified feature map.

Pooling is a down-sampling operation that reduces the dimensionality of


the feature map. The rectified feature map now goes through a pooling layer to
generate a pooled feature map.

The next step in the process is called flattening. Flattening is used to


convert all the resultant 2-Dimensional arrays from pooled feature maps into a
single long continuous linear vector. The flattened matrix is fed as input to the fully
connected layer to classify the image.

Fig 1.2 Convolutional layer
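As an illustration of the pipeline described above (convolution, ReLU, pooling, flattening and a fully connected layer), here is a minimal Keras sketch; the 48x48 grayscale input and the seven output classes are assumptions based on the FER-style data used later in this report.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),  # convolution + ReLU
    MaxPooling2D(pool_size=(2, 2)),                                  # down-sampling
    Flatten(),                                                       # 2-D feature maps -> 1-D vector
    Dense(7, activation="softmax"),                                  # fully connected classifier
])
model.summary()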

4. IMPLEMENTATION OF THE PROJECT:

DeepFace model:
1. Loading the necessary libraries and capturing the faces through the webcam:
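Since the original code appears only as screenshots, a minimal sketch of this step is given here; the camera index 0 is an assumption.

import cv2
import face_recognition
from deepface import DeepFace

cap = cv2.VideoCapture(0)      # open the default webcam (index 0 is an assumption)
ret, frame = cap.read()        # capture a single frame
if ret:
    cv2.imshow("Captured frame", frame)
    cv2.waitKey(1)
cap.release()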

2. Loading the known images, encoding them for face recognition, and opening a CSV file for recording the status of each recognized face.
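A sketch of this step, assuming the known student images sit in a known_faces folder named after each student and the status file is called activeness.csv (both names are assumptions):

import os
import csv
import face_recognition

known_encodings, known_names = [], []
for file_name in os.listdir("known_faces"):
    image = face_recognition.load_image_file(os.path.join("known_faces", file_name))
    encodings = face_recognition.face_encodings(image)
    if encodings:                                  # skip images where no face is found
        known_encodings.append(encodings[0])
        known_names.append(os.path.splitext(file_name)[0])

csv_file = open("activeness.csv", "w", newline="")
writer = csv.writer(csv_file)
writer.writerow(["Name", "Date", "Time", "Emotion", "Status"])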

3. Once the face is detected and recognized using the face_recognition library, the emotions are recognized using the DeepFace library and the activeness is classified.
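A sketch of this step; the grouping of the seven emotions into active and non-active follows the report's classification idea, while the helper name classify_frame is an illustrative assumption.

import cv2
import face_recognition
from deepface import DeepFace
from datetime import datetime

ACTIVE_EMOTIONS = {"happy", "surprise", "neutral"}   # assumption: emotions treated as "active"

def classify_frame(frame, known_encodings, known_names, writer):
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)     # face_recognition expects RGB images
    locations = face_recognition.face_locations(rgb)
    encodings = face_recognition.face_encodings(rgb, locations)
    for encoding in encodings:
        matches = face_recognition.compare_faces(known_encodings, encoding)
        name = known_names[matches.index(True)] if True in matches else "Unknown"

        analysis = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
        if isinstance(analysis, list):               # newer DeepFace versions return a list
            analysis = analysis[0]
        emotion = analysis["dominant_emotion"]
        status = "Active" if emotion in ACTIVE_EMOTIONS else "Non-active"

        now = datetime.now()
        writer.writerow([name, now.date(), now.strftime("%H:%M:%S"), emotion, status])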

4. Output images:
The faces are recognized and the frames are displayed. The figure shows the sample output for two recognized faces, and the status of all classified faces is updated in the created CSV file along with the name, date and time.

Fig 1.3 Output of DeepFace framework

4-layer sequential CNN model:

1. The dataset used here is FER-2013, a publicly available dataset for facial emotion recognition. The required libraries and the dataset are loaded.
The number of epochs indicates how many passes over the entire training dataset the machine learning algorithm completes.
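A sketch of this step, assuming the FER-2013 images are extracted into train and test folders with one sub-folder per emotion class (the directory names are assumptions):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_dir = "fer2013/train"   # assumed location of the training images
test_dir = "fer2013/test"     # assumed location of the validation/test images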

2. The data is preprocessed and rescaled to build the CNN model
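A sketch of the preprocessing, reusing the directories defined above; the batch size and the 48x48 grayscale target size are assumptions matching the FER-2013 image format:

train_datagen = ImageDataGenerator(rescale=1.0 / 255)   # scale pixel values to [0, 1]
test_datagen = ImageDataGenerator(rescale=1.0 / 255)

train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=(48, 48), color_mode="grayscale",
    batch_size=64, class_mode="categorical")
validation_generator = test_datagen.flow_from_directory(
    test_dir, target_size=(48, 48), color_mode="grayscale",
    batch_size=64, class_mode="categorical")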

3. The sequential model is built with convolution layers using the ReLU operation and max pooling; the output is finally flattened into a vector, and then the model is compiled with the Adam optimizer.
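A sketch of a sequential model of this kind with four convolution layers, compiled with the Adam optimizer; the filter counts, dropout rates and dense layer width are assumptions rather than the report's exact values:

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Conv2D(128, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(128, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Flatten(),                                   # 2-D feature maps -> single vector
    Dense(1024, activation="relu"),
    Dropout(0.5),
    Dense(7, activation="softmax"),              # seven emotion classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])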

4. The number of epochs is set to 70 and the model is fitted with the training and validation data from the FER dataset.
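A sketch of the training call with the 70 epochs mentioned above, reusing the generators from the preprocessing step:

history = model.fit(
    train_generator,
    epochs=70,                              # number of passes over the training data
    validation_data=validation_generator)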

5. The model is saved as an .h5 file and used in the face emotion recognition model, which captures images using the webcam and classifies them as active and inactive. The Haar cascade classifier is used to detect the faces, and the emotion is detected from each face.
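A sketch of this final step; the file name emotion_model.h5, the alphabetical class order and the active/inactive grouping of emotions are assumptions:

import cv2
import numpy as np
from tensorflow.keras.models import load_model

model.save("emotion_model.h5")                       # persist the trained weights
emotion_model = load_model("emotion_model.h5")

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]
ACTIVE = {"happy", "neutral", "surprise"}            # assumption: treated as "active"

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        prediction = emotion_model.predict(roi.reshape(1, 48, 48, 1), verbose=0)
        emotion = EMOTIONS[int(np.argmax(prediction))]
        status = "Active" if emotion in ACTIVE else "Inactive"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, status, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("Activeness", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()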

6. The classification produced by the developed CNN model is shown in the figure.

Fig 1.4 Output of CNN


Findings from the two developed models:
The DeepFace model shows an accuracy of 97% and the 4-layer CNN model shows an accuracy of 60%. The better of the two models is found to be the DeepFace model.

5. SIGNIFICANCE OF THIS PROJECT:
Predicting a student's activeness beforehand will help to improve their academic performance and also create a stable bond with their teachers. It will also reduce the depression rate of students when they are treated properly, since the faculty know the mood of the students on a daily basis.

CHALLENGES:

The major challenge is that if there is not enough brightness in the surroundings while detecting the images, the outputs will not be precise.

6. CONCLUSION:

In this analysis of two models, the DeepFace framework gives more accurate results than the 4-layer sequential Convolutional Neural Network. The framework has good applicability in practical activities and plays a positive role in solving problems such as the lack of binding force on students and teachers not getting timely feedback. Ultimately, it will contribute to improving the quality of education.

With a large number of participants in a class, we have no way to ensure that everyone keeps a high level of concentration, and students' expressions may not fully represent their emotions due to these subjective factors. In future, we need to improve on this.

PROJECT WORKSHEET / DIARY

Date                       Topics learned / Activity carried out / Task completed / Online / E-resources accessed

WEEK 1
10.08.2022 - 11.08.2022    Topic selection from Smart India Hackathon and reporting to faculty guide
12.08.2022 - 13.08.2022    Reading articles about face recognition
14.08.2022                 Reading articles about emotion recognition
15.08.2022                 Reading articles about emotion recognition

WEEK 2
16.08.2022                 Reading articles about emotion recognition
17.08.2022                 Listing the methods and algorithms for emotion recognition
18.08.2022                 Installing software and packages
19.08.2022                 Searching for dataset
20.08.2022                 Learn basic face detection coding methods
21.08.2022                 Learn basic face recognition coding methods

WEEK 3
22.08.2022                 Importing and working with face recognition library
23.08.2022                 Coding for face recognition and testing it on video capture images

WEEK 4
28.08.2022                 Build DeepFace face recognition and emotion detection model
29.08.2022                 Testing the model on various images and faces
30.08.2022                 Testing the model on various images and faces
31.08.2022                 Reading the history and basics of CNN
1.09.2022                  Reading articles about CNN methods
2.09.2022                  Implementing a basic CNN model

WEEK 5
3.09.2022                  Finding dataset for CNN model
4.09.2022                  1st review - feedback and comments were noted
5.09.2022                  Using FER dataset, preprocessing and model building
6.09.2022                  Importing the model for emotion recognition
7.09.2022                  Analyze the results
8.09.2022                  Increase the number of epochs and rerun the model

WEEK 6
9.09.2022                  Finding the accuracy and loss of the model
10.09.2022                 Comparing the DeepFace and CNN model accuracy
11.09.2022                 Finding the best among the two models and testing it on various faces
12.09.2022                 Correcting errors and improving accuracy
13.09.2022                 Correcting errors and improving accuracy
14.09.2022                 Finalize the model with more accuracy

WEEK 7
15.09.2022                 2nd review - changes and feedback were noted
16.09.2022                 Changes were made and the model was completed
17.09.2022                 Report writing

Signature of the Student (with date)                    Signature of the faculty guide (with date)

