You are on page 1of 35

IDENTIFY HUMAN EMOTIONS THROUGH

PICTURES

MINI PROJECT – I
Submitted by

V.MAHALAKSHMI (421122106030)
V.SWETHA (421122106054)

In the patial fulfillment for the award of the degree of


BACHELOR OF TECHNOLOGY

in
ARITFICIAL INTELLIGENCE AND DATA SCIENCE

IFET COLLEGE OF ENGINEERING


(An Autonomous Institution)
Approved by AICTE, New Delhi and Accredited by NACC & NBA
Affiliated to Anna University, Chennai-25
Gangarampalayam, Villupuram – 605 108
NOV/DEC - 2023

1
IFET COLLEGE OF ENGINEERING
(An Autonomous Institution)
BONAFIDE CERTIFICATE

Certified that this project report “IDENTIFY HUMAN EMOTIONS


THROUGH PICTURES” is the bonafide work of “V.MAHALAKSHMI
[421121106030] , V.SWETHA [421122106054]” who carried out the mini
project work under my supervision. Certified further, that to the best of my
knowledge the work reported here in does not form any other project report or
dissertation on the basis of which a degree or award was conferred on an earlier
occasion on this or any other candidate.

SIGNATURE SIGNATURE
Mrs. P.MANJUBALA,M.TECH Mr.A.RANJEETH.M.E
HEAD OF THE DEPARTMENT SUPERVISOR
Associate Professor, Associate Professor,
Department of AI & DS, Department of AI & DS,
IFET College of Engineering, IFET College of Engineering,
Villupuram – 605108 Villupuram – 605108

The project report submitted for the viva voce held on……………

INTERNAL EXAMINER EXTERNAL EXAMINER

2
ACKNOWLEDGEMENT

We thank the almighty, for the blessings that have been showered upon me to bring
forth the success of the project. We would like to express my sincere gratitude to
our Chairman Mr.K.V.Raja, Secretary Mr.K.Shivram Alva and our Treasurer
Mr.R.Vimal for providing us with an excellent infrastructure and necessary
resources to carry out this project and we extend our gratitude to our principal
Dr.G.Mahendran, for his constant support to our work. We also take this
opportunity to express our sincere thanks to our Vice-Principal and Dean
Academics Dr.S.Matilda who has provided all the needful help in executing the
project successfully.

We wish to express our thanks to our Head of the Department, Mrs.P.Manju bala ,
for her persistent encouragement and support to complete this project.

We express my heartfelt gratitude to my guide Mr.A.Ranjeeth Department of


Artificial Intelligence and Data Science for her priceless guidance and motivation
which helped us to bring this project to a perfect shape.

And We thank our Batch Coordinator Mrs.P.Divya, Batch cordinater Assistant


Professor, Department of Artificial Intelligence and Data Science who
encouraged us in each and every step of this project to complete it successfully.
We also thank our lab technicians and all the staff members of our department for
their support and assistance.
Last but not the least, we wholeheartedly thank my family and friends for
their moralsupport in tough times and their constructive criticism which
made me succeed in ourwork.

3
ABSTRACT

This project implements real-time facial emotion analysis utilizing


computer vision and deep learning techniques. The system utilizes a
convolutional neural network (CNN) model trained to recognize human
emotions from live video feed captured by a camera.The program utilizes
the OpenCV library for video capture and image manipulation, integrating
a pre-trained face detection model to identify faces within each frame.
Upon detection, the facial region undergoes preprocessing, including
resizing and conversion to an array format compatible with the deep
learning model. A pre-trained CNN model is loaded to predict emotions
from the processed facial images.

4
CHAPTER NO TITLE PAGE NO
ABSTRACT iv
LIST OF FIGURES vi
LIST OF ABBREVIATIONS vii
1
INTRODUCTION 1
1.1 General 1
1.2 Domain overview 3
2 LITERATURE SURVEY 11

3 EXISTING SYSTEM 15
3.1 Existing System 15
3.2 Disadvantages of existing system 15
4 PROPOSED SYSTEM 17
4.1 Proposed system 17
4.2 Advantages of proposed system 19
5 MODEL DESCRIPTION 25

6 SYSTEM REQUIREMENTS 26
6.1 Hardware requirements 27
6.2 Software requirements 26
7 SYSTEM DESIGN 28
7.1 System Architecture 28
7.2 Packages 28
8 CONCLUSION AND FUTUREWORK 31
APPENDIX – I 33
35
APPENDIX – II
36
REFERENCE

5
FIGURE NO TITLE PAGE NO

4.1 Image block 18

4.2 Aspect ratio 19

4.3 Recognition process 20

4.4 Accuracy of proposed system 21

7.1 System Architecture 28

9.1 jupyter Navigator 35

9.2 Output 35

6
LIST OF ABBREVETIONS

AICTE - All India Council For Technical Education


AI - Artificial Intelligence
CNN - Convolutional Neural Network
CSV - Comma-Sepearated Value
GPU - Graphics Processing Unit
ME - Master Of Engineering
NBA - Master Of Technology
NACC - National Board For Accreditation Council
RNN - Recurrent Neural Network
SVM - Support Vector Machine

7
CHAPTER 1
INTRODUCTION

1.1.GENERAL

The designed emotion detection system operates by continuously capturing video


frames through a webcam and analyzing these frames in real-time to identify and
interpret facial expressions. It begins by detecting faces within each frame using a
face detection algorithm like the Haar cascade classifier, enabling the system to locate
and isolate facial regions for analysis. These regions undergo preprocessing steps,
including converting color images to grayscale, resizing to match the input size of the
emotion recognition model, and normalizing pixel values for consistent
analysis.These predictions encompass a range of emotions such as happiness, sadness,
anger, surprise, and others. The system assigns emotion labels based on these
predictions and visually enhances the video frames by overlaying rectangles around
detected faces and labeling them with the predicted emotions.
Face Detection:
Utilizes algorithms like the Haar cascade classifier to identify and isolate facial
regions within each frame.
Preprocessing:
Converts color images to grayscale and standardizes face regions to match the
required input size for the emotion recognition model. This step ensures uniformity
for analysis.
Emotion Recognition Model:
Incorporates a pre-trained deep learning model tailored for facial emotion recognition.
This model predicts a range of emotions, including happiness, sadness, anger,
surprise, etc., based on the processed facial regions.
Visual Indicators:
Enhances the video frames by overlaying rectangles around detected faces and labels
them with the predicted emotions. This visual representation aids in understanding the

8
analyzed emotions.
Real-time Display:
Presents the processed video stream in real-time, enabling users to observe the live
analysis of facial expressions and the associated predicted emotions.
Continuous Analysis Loop:
Engages in an ongoing loop, analyzing subsequent frames until user intervention,
potentially through an interactive command to pause or terminate the analysis.

1.2.DOMAIN OVERVIEW:

Emotion Recognition and Computer Vision:


This domain encompasses applications that involve the analysis of human emotions
through facial expressions using computer vision techniques. It includes the
development of algorithms, models, and systems that interpret emotional states based
on facial features and expressions captured through image or video inputs. This
domain often intersects with areas like artificial intelligence, machine learning,
psychology, and human-computer interaction, aiming to understand and interpret
human emotions for various purposes, including but not limited
Human-Computer Interaction (HCI):
Implementing systems that respond to or adapt based on human emotions, enhancing
user experiences in applications or devices.
Mental Health and Psychology:
Supporting emotional analysis for mental health applications, assisting in identifying
emotional patterns or states for diagnostic purposes or mood tracking.
Entertainment and Gaming:
Enhancing gaming experiences by creating systems that respond to players' emotions
in real-time or generating content based on emotional cues.
Market Research and Consumer Behavior:
Analyzing emotional responses to products, advertisements, or content for
understanding consumer preferences and behavior.

9
CHAPTER 2
LITERATURE SURVEY

1) [2020] J. Li, Y. Zhu, and F. Meng, "A Survey on Convolutional Neural


Network Based Facial Expression Recognition," IEEE Access, vol. 8, pp.
19215-19237, May 2020. (Survey on deep learning for facial expression
recognition)
With the transition of facial expression recognition (FER) from laboratory-controlled
to challenging in-the-wild conditions and the recent success of deep learning
techniques in various fields, deep neural networks have increasingly been leveraged
to learn discriminative representations for automatic FER. Recent deep FER systems
generally focus on two important issues: overfitting caused by a lack of sufficient
training data and expression-unrelated variations, such as illumination, head pose and
identity bias. In this paper, we provide a comprehensive survey on deep FER,
including datasets and algorithms that provide insights into these intrinsic problems.
First, we introduce the available datasets that are widely used in the literature and
provide accepted data selection and evaluation principles for these datasets. We then
describe the standard pipeline of a deep FER system with the related background
knowledge and suggestions of applicable implementations for each stage. For the
state of the art in deep FER, we review existing novel deep neural networks and
related training strategies that are designed for FER based on both static images and
dynamic image sequences, and discuss their advantages and limitations. Competitive
performances on widely used benchmarks are also summarized in this section. We
then extend our survey to additional related issues and application scenarios. Finally,
we review the remaining challenges and corresponding opportunities in this field as
well as future directions for the design of robust deep FER systems.

10
2) [2021] L. Sun, L. He, and R. Wang, "An Ensemble Deep Learning
Approach for Emotion Recognition Using Multiple Features," IEEE Access,
vol. 9, pp. 83401-83415, Jun. 2021. (Deep learning with multiple features for
emotion recognition)
The advancements in neural networks and the on-demand need for accurate and near
real-time Speech Emotion Recognition (SER) in human–computer interactions make
it mandatory to compare available methods and databases in SER to achieve feasible
solutions and a firmer understanding of this open-ended problem. The current study
reviews deep learning approaches for SER with available datasets, followed by
conventional machine learning techniques for speech emotion recognition. Ultimately,
we present a multi-aspect comparison between practical neural network approaches in
speech emotion recognition. The goal of this study is to provide a survey of the field
of discrete speech emotion recognition.

3) [2022] Z. Xu et al., "A Transfer Learning Framework for Speech Emotion


Recognition Using Deep Recurrent Neural Networks," IEEE Access, vol. 10,
pp. 100533-100544, Jul. 2022. (Transfer learning for speech emotion
recognition with deep RNNs)
we do research on cross-corpus speech emotion recognition (SER), in which the
training and testing speech signals come from different speech corpus. The
mismatched feature distribution between the training and testing sets makes many
classical algorithms unable to achieve better results. To deal with this issue, a transfer
learning and multi-loss dynamic adjustment (TLMLDA) algorithm is initiatively
proposed in this paper. The proposed algorithm first builds a novel deep network
model based on a deep auto-encoder and fully connected layers to improve the
representation ability of features. Subsequently, global domain and subdomain
adaptive algorithms are jointly adopted to implement features transfer. Finally,
dynamic weighting factors are constructed to adjust the contribution of different loss
functions to prevent optimization offset of model training, which effectively improve

11
the generalization ability of the whole system. The results of simulation experiments
on Berlin, eNTERFACE, and CASIA speech corpora show that the proposed
algorithm can achieve excellent recognition results, and it is competitive with most of
the state-of-the-art algorithms.

4) [2021] S. Wang et al., "Multimodal Sentiment Analysis with Multimodal


Attention Fusion and Multi-Scale LSTM," IEEE Transactions on
Multimedia, vol. 23, no. 4, pp. 605-616, Apr. 2021. (Multimodal sentiment
analysis with attention and LSTMs)
In the real world, multimodal sentiment analysis (MSA) enables the capture and
analysis of sentiments by fusing multimodal information, thereby enhancing the
understanding of real-world environments. The key challenges lie in handling the
noise in the acquired data and achieving effective multimodal fusion. When
processing the noise in data, existing methods utilize the combination of multimodal
features to mitigate errors in sentiment word recognition caused by the performance
limitations of automatic speech recognition (ASR) models. However, there still
remains the problem of how to more efficiently utilize and combine different
modalities to address the data noise. In multimodal fusion, most existing fusion
methods have limited adaptability to the feature differences between modalities,
making it difficult to capture the potential complex nonlinear interactions that may
exist between modalities. To overcome the aforementioned issues, this paper proposes
a new framework named multimodal-word-refinement and cross-modal-hierarchy
(MWRCMH) fusion. Specifically, we utilized a multimodal word correction module
to reduce sentiment word recognition errors caused by ASR. During multimodal
fusion, we designed a cross-modal hierarchical fusion module that employed cross-
modal attention mechanisms to fuse features between pairs of modalities, resulting in
fused bimodal-feature information. Then, the obtained bimodal information and the
unimodal information were fused through the nonlinear layer to obtain the final
multimodal sentiment feature information. Experimental results on the MOSI-
SpeechBrain, MOSI-IBM, and MOSI-iFlytek datasets demonstrated that the proposed

12
approach outperformed other comparative methods, achieving Has0-F1 scores of
76.43%, 80.15%, and 81.93%, respectively. Our approach exhibited better
performance, as compared to multiple baselines

13
CHAPTER 3
EXISTING SYSTEM

3.1. Existing System

◆ Relies predominantly on facial expressions for emotion recognition, often


overlooking other contextual cues like voice tone or body language.
◆ May exhibit biases due to limited representation in training data, potentially
leading to inaccuracies and lack of inclusivity across diverse demographics and
cultures.
◆ Often lacks interpretability, making it challenging to understand the reasoning
behind emotion predictions, which could impact user trust and acceptance.
◆ Might exhibit inconsistent performance in varying environmental conditions,
lighting, or facial occlusions, affecting its real-time adaptability and robustness.
◆ Primarily focuses on recognizing basic emotions from facial expressions,
potentially overlooking more nuanced or complex emotional states.

3.2 Disadvantages of Existing system:


Limited Modality Integration:
Often rely solely on facial expressions, disregarding other contextual cues like voice
tone or body language.
Bias and Lack of Inclusivity:
May exhibit biases due to inadequate representation in training data,leading to
inaccuracies and limited inclusivity across diverse demographics and cultures.
Interpretability Issues:
Lack interpretability, making it challenging to explain emotion predictions,
potentially impacting user trust and acceptance.

14
Robustness concerns:
Inconsistent performance in varying environmental conditions, lighting, or facial
occlusions affects real-time adaptability and robustness.
Limited Emotion Understanding:
primarily focus on recognising basic emotions , potentially overlooking or complex
emotional states.May exhibit biases due to inadequate representation in training
data,leading to inaccuracies and limited inclusivity across diverse demographics and
cultures.
Interpretability Issues:
Lack interpretability, making it challenging to explain emotion predictions, potentially
impacting user trust and acceptance.
Robustness concerns:
Inconsistent performance in varying environmental conditions, lighting, or facial occlusions
affects real-time adaptability and robustness.
Limited Emotion Understanding:
primarily focus on recognising basic emotions , potentially overlooking or complex
emotional states

15
CHAPTER 4
PROPOSED SYSTEM

The proposed system for facial emotion recognition aims to address several
limitations present in existing systems by leveraging advancements in deep learning
techniques, multimodal data integration, and improved model robustness. In
comparison to existing systems, the proposed system emphasizes several key
advantages.The proposed system integrates multimodal data sources, combining
facial expressions with contextual cues such as voice tone or body language,
enhancing the depth and accuracy of emotion recognition. By fusing multiple
modalities, it aims to capture a more comprehensive understanding of emotions,
surpassing the limitations of relying solely on facial features. Additionally, the
proposed system focuses on mitigating biases by curating diverse and representative
datasets, ensuring inclusivity across demographics, cultures, and expressions.Utilizing
state-of-the-art deep learning architectures, the proposed system emphasizes model
interpretability and explainability, addressing concerns related to the 'black- box'
nature of deep neural networks in existing systems. It incorporates mechanisms for
interpreting model predictions, offering insights into why specific emotions are
identified, thereby enhancing user trust and system transparency.Facial emotion
analysis system project developed and implemented by the following steps given
STEP 1:Load the model
• Load the pre-trained deep learning model for facial emotion recognition. This
model has been previously trained on annotated datasets to recognize emotions
from facial expressions.
STEP 2:Initialize Video Capture
• Utilize OpenCV to access the webcam (or video feed) for capturing live frames.
STEP 3: Use the Facial detection

• Use a facial detection algorithm (like Haar Cascade Classifier) to detect faces
within each captured frame.

16
STEP 4: Data preprocessing
• Convert the detected facial regions to grayscale for standardization.
• Resize the regions to fit the model's input dimensions (e.g., 224x224 pixels).
• Normalize pixel values and prepare the cropped facial image for model input.
STEP 5:Load the image
• Feed the preprocessed facial image into the loaded deep learning model.

• Utilize the model to predict the prevailing emotion from the facial expression
within the cropped region.
STEP 6: Visualization
• Overlay the predicted emotion label onto the video frame using OpenCV's text-
rendering capabilities.
• Display the processed video frame in real time, showcasing detected faces with
their corresponding predicted emotions.
STEP 7: Looping process
• Continuously capture frames from the webcam or video feed.

• Repeat the facial detection, preprocessing, emotion prediction, and visualization


steps for each frame in real time.
STEP 8: Termination

Implement a user interaction mechanism (e.g., key press) to enable termination of the
program.
• Release the webcam/video feed and close any open windows upon program
termination.
4.2. Advantages of Proposed System:
Multimodal Integration:
Integrates multiple modalities (facial expressions, voice tone, body
language), providing a more comprehensive understanding of emotions beyond
facial features alone.
Bias Mitigation and Inclusivity:
Aims to mitigate biases by curating diverse datasets, fostering equitable and

17
across various demographics and cultures.
accurate emotion recognitionInterpretability and Transparency:
Prioritizes interpretability, enabling users to understand emotion
predictions, enhancing trust and system transparency.
Robustness and Real-time Adaptability:
Focuses on optimizing models for consistent performance across
varying conditions, enhancing real-time adaptability and robustness.
Comprehensive Emotion Understanding:
Strives for a more comprehensive understanding of emotions by
integrating multiple modalities, aiming to interpret subtle nuances and complex
emotional states.
IMAGE BLOCK:

Fig 4.1:image block

18
ASPECT RATIO OF FACE:

The aspect ratio of a face plays a crucial role in accurately identifying and interpreting
facial expressions. The aspect ratio refers to the proportional relationship between the
width and height of the face detected within an image or video frame.Maintaining a
consistent aspect ratio helps algorithms better recognize facial landmarks and features
essential for emotion analysis. Distortions in the aspect ratio might lead to
misinterpretations or inaccuracies in detecting emotions, as changes in proportions can
affect the perception of expressions like joy, sadness, anger, or surprise.

fig4.2:aspect ratio

19
RECOGNISING PROCESS:

The aspect ratio of a face plays a crucial role in accurately identifying and
interpreting facial expressions. The aspect ratio refers to the proportional
relationship between the width and height of the face detected within an image or
video frame.Maintaining a consistent aspect ratio helps algorithms better recognize
facial landmarks and features essential for emotion analysis. Distortions in the
aspect ratio might lead to misinterpretations or inaccuracies in detecting emotions,
as changes in proportions can affect the perception of expressions like joy, sadness,
anger, or surprise.

Fig 4.3: recognition process

20
ACCURACY:
the program output in real-time facial emotion analysis largely depends on the
quality of the pre-trained model, dataset diversity, and environmental factors.
Achieving approximately 60-65% accuracy is common for facial emotion
recognition in real-time scenarios, with variations based on lighting conditions,
facial expressions, and model robustness. Continuous refinement and model
optimization can lead to gradual accuracy improvements over time.

Fig 4.4: accuracy of proposed system

21
CHAPTER 5
MODULE DISCRIPTION

Model Loading Module:


This module involves loading a pre-trained deep learning model (typically a neural
network) specifically trained for facial emotion recognition. The model is saved in a file
(such as best_model.h5) and is loaded into memory using a deep learning library like
Keras.
Face Detection Module:
Utilizes a Haar cascade classifier, such as haarcascade_frontalface_default.xml, to
detect and locate faces within each frame captured by the webcam. This module is
responsible for identifying and marking the regions where faces are present in the input
images.
Image Processing Module:
Once faces are detected, the program processes these regions. It converts the color
images to grayscale, a common preprocessing step for facial analysis. The face regions
are resized to match the input size expected by the pre-trained model. Additionally,
pixel normalization is performed to prepare the face images for emotion prediction.
Emotion Prediction Module:
This is the core module responsible for predicting emotions from the processed face
regions. It utilizes the loaded pre-trained model to analyze and predict emotions present
in the facial expressions. The output typically includes probabilities or confidence
scores for various emotions such as happiness, sadness, anger, surprise, disgust, fear,
and neutrality.
Visualization Module:
Provides visual feedback by overlaying the detected faces with rectangles to highlight
them and labeling each face with the predicted emotion. This module enhances user

22
understanding by visually displaying the identified emotions on the respective detected
faces.
Control Flow Module:
Manages the flow of the program, encompassing a continuous loop that captures
webcam frames, analyzes the emotions exhibited on detected faces using the
aforementioned modules, and displays the interpreted emotions in real-time. This loop
continues until the user interrupts it by pressing a designated key ('q' in this case).

23
CHAPTER 6

SYSTEM REQUIREMENTS

6.1.Software Requirements:

Python: The program is written in Python, so you need a Python interpreter


installed (preferably Python 3.x).
OpenCV: Installation of OpenCV library (cv2) is required for image processing and
video capture.
Keras with TensorFlow backend: This program uses Keras for deep learning tasks
with TensorFlow as the backend. Ensure Keras and TensorFlow are installed
(keras, tensorflow).
Matplotlib: Used for displaying images and plots (matplotlib).
Numpy: For numerical computations and array manipulation (numpy).

6.1.1.Package Description:
OpenCV (cv2):
Used for image processing, video capture, and face detection.
Keras with TensorFlow backend (keras, tensorflow):
Keras provides a high-level neural networks API, while TensorFlowserves as the
backend for deep learning tasks.
Matplotlib (matplotlib):
Utilized for displaying images and plots.
Numpy (numpy):
Essential for numerical computations and array manipulation.

24
6.2.HARDWARE REQUIREMENTS
Processor (CPU):
A multi-core processor (modern CPUs like Intel i5 or above, or
AMD equivalent) would be suitable.
Memory (RAM):
At least 4GB of RAM is recommended for smooth performance.
Camera:
A webcam connected to the system to capture video frames.
Graphics Processing Unit (GPU) - Optional:
If utilizing deep learning models, having a dedicated GPU
(NVIDIA CUDA-enabled GPU) can significantly speed up
computations.

25
CHAPTER 7
SYSTEM DESIGN

7.1. System Architecture:


The real-time analysis of facial emotions by continuously detecting faces,
processing facial regions, predicting emotions, and overlaying the predicted
labels onto the video feed, creating a live demonstration of emotion recognition
in action.

Fig 7.1.system architecture


7.2.Packages:
OpenCV
Keras
Tensorflow
Matplotlib
Numpy

26
PACKAGES DISCRIPTION:
OpenCV (cv2):
OpenCV (Open Source Computer Vision Library) is a powerful library used for
real-time computer vision. It provides a wide range of functionalities for image and
video processing, including reading and writing images, video capture, object
detection, face detection, and various image manipulation techniques.
Key Features:
➢ Image and video manipulation.
➢ Face detection and recognition.
➢ Object tracking.
➢ Feature detection and extraction.
➢ Camera calibration and 3D reconstruction.
Usage in Facial Emotion Analysis:
In the provided code, OpenCV (cv2) is primarily used for capturing video frames
from the webcam, converting color spaces, and detecting faces within the captured
frames.
Keras with TensorFlow Backend (keras, tensorflow):
Keras is a high-level neural networks API written in Python, which acts as an
interface for TensorFlow, Theano, and Microsoft Cognitive Toolkit (CNTK). In the
provided code, TensorFlow serves as the backend for Keras.
Key Features:
➢ Simplified API for building and training neural networks.
➢ Supports both convolutional and recurrent neural networks.
➢ Easily allows for model training, evaluation, and inference.
➢ Flexibility to run on CPUs and GPUs.

27
Usage in Facial Emotion Analysis:
Keras is employed for loading and utilizing a pre-trained deep learning model for
facial emotion recognition. It assists in loading the model, preprocessing images,
making predictions, and extracting emotions from detected faces.
Matplotlib (matplotlib):
Matplotlib is a comprehensive library for creating static, animated, and interactive
visualizations in Python. It provides functionalities for generating plots, histograms,
bar charts, scatterplots, and more.
Key Features:
➢ Creation of publication-quality plots.
➢ Support for various plot types and customizations.
➢ Integration with Jupyter Notebook for interactive visualizations.
Usage in Facial Emotion Analysis:
Matplotlib is used in the code to display the video feed from the webcam, draw
rectangles around detected faces, and annotate the predicted emotions on the
video frames.
Numpy (numpy):
Numpy is a fundamental package for scientific computing in Python. It provides
support for large, multi-dimensional arrays and matrices, along with a wide range of
mathematical functions to operate on these arrays.
Key Features:
➢ Efficient handling of large arrays and matrices.
➢ Mathematical operations like linear algebra, Fourier analysis, random number
generation, etc.
➢ Integration capabilities with C/C++ and Fortran code.
Usage in Facial Emotion Analysis:
Numpy is utilized for array manipulation, resizing images, and normalization of
pixel values within the facial image region

28
CHAPTER 8
CONCLUSION AND FUTURE WORK

This study embarked on a comprehensive exploration into [topic or focus area],


aiming to unearth insights and patterns within the [describe the dataset or subject
of analysis]. Through meticulous data analysis and [mention methodologies or
techniques], notable revelations emerged, shedding light on [key findings or
discoveries].
Our examination revealed [briefly summarize major findings], emphasizing the
[most significant outcomes or observations]. These findings underscore the
[importance/relevance] of [specific aspect studied] and provide a foundation for
deeper understanding within this domain.
While this study provides valuable insights, it's crucial to acknowledge its
limitations. Challenges such as [mention limitations encountered] impacted the
depth and scope of our analysis, signifying the need for further exploration and
refinement.
FUTURE WORK:
Refinement of models stands as a pivotal next step. Fine-tuning [specific
models or algorithms] could enhance predictive accuracy and unearth nuanced
patterns hidden within the data. Augmenting the dataset or incorporating
advanced techniques like [mention techniques] holds promise in overcoming
limitations and fortifying the robustness of our analysis.
Moreover, delving into the integration of [additional variables/factors] might offer
a more comprehensive understanding of the dynamics at play. Real-world
validation of our findings and their application in [relevant context or industry]
wouldfurther substantiate the practical implications of our study.

29
✴ hybrid Models: Explore hybrid models integrating CNNs and attention
mechanismsfor better emotion feature extraction.

Self-supervised Learning: Investigate self-supervised methods to leverage


unlabeled data for improved model performance.
✴ Cross-domain Generalization: Focus on enhancing model generalization
acrossdiverse demographics and cultural contexts.
✴ Privacy-Preserving Techniques: Develop privacy-preserving strategies to
ensuredata confidentiality during emotion analysis.
✴ Emotion Dynamics: Study temporal dynamics of emotions for more
nuancedunderstanding and prediction.
✴ Human-centric Design: Collaborate with psychologists for human-centric
modeldesign and interpretation.
✴ Edge Computing: Optimize models for edge devices to enable real-time, on-
deviceemotion analysis.
✴ Robustness Testing: Conduct extensive robustness testing against adversarial
attacksand noisy environments.
✴ Ethical Guidelines: Contribute to ethical guidelines for responsible
deployment offacial emotion analysis technology.
✴ Healthcare Integration: Investigate integration into healthcare for emotion-
baseddiagnostics and mental health support.

30
APPENDIX 1
SOURCE CODE:
import os
import cv2
import numpy as np
from keras.preprocessing import image
import warnings
warnings.filterwarnings("ignore")
from keras.preprocessing.image import load_img, img_to_array
from keras.models import load_model
import matplotlib.pyplot as plt
import numpy as np

# load model
model = load_model("best_model.h5")

face_haar_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)

while True:
ret, test_img = cap.read() # captures frame and returns boolean value and captured image
if not ret:
continue
gray_img = cv2.cvtColor(test_img, cv2.COLOR_BGR2RGB)

faces_detected = face_haar_cascade.detectMultiScale(gray_img, 1.32, 5)

for (x, y, w, h) in faces_detected:


cv2.rectangle(test_img, (x, y), (x + w, y + h), (255, 0, 0), thickness=7)
roi_gray = gray_img[y:y + w, x:x + h] # cropping region of interest i.e. face area from
image
roi_gray = cv2.resize(roi_gray, (224, 224))
img_pixels = image.img_to_array(roi_gray)
img_pixels = np.expand_dims(img_pixels, axis=0)
img_pixels /= 255

predictions = model.predict(img_pixels)

# find max indexed array

31
max_index = np.argmax(predictions[0])

emotions = ('angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral')


predicted_emotion = emotions[max_index]

cv2.putText(test_img, predicted_emotion, (int(x), int(y)),


cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

resized_img = cv2.resize(test_img, (1000, 700))


cv2.imshow('Facial emotion analysis ', resized_img)

if cv2.waitKey(10) == ord('q'): # wait until 'q' key is pressed


break

cap.release()
cv2.destroyAllWindows

32
APPENDIX 2

STEP 1: Open your familier programming running platforms like cmd promt and
navigate to path of the program or jupyter notebook

Fig 9.1: jypeter navigator

STEP 2: The second step is that when we run the program the webcam opens by using
the Opencv the the we can that the emotion detecting output.

Fig.9.2:Output

33
1) [2020] J. Li, Y. Zhu, and F. Meng, "A Survey on Convolutional Neural
Network Based Facial Expression Recognition," IEEE Access, vol. 8, pp. 19215-
19237, May 2020. (Survey on deep learning for facial expression recognition)

2) [2021] L. Sun, L. He, and R. Wang, "An Ensemble Deep Learning


Approach for Emotion Recognition Using Multiple Features," IEEE Access, vol.
9, pp. 83401-83415, Jun. 2021. (Deep learning with multiple features for emotion
recognition)

3) [2022] Z. Xu et al., "A Transfer Learning Framework for Speech Emotion


Recognition Using Deep Recurrent Neural Networks," IEEE Access, vol. 10, pp.
100533-100544, Jul. 2022. (Transfer learning for speech emotion recognition with
deep RNNs)

4) [2021] S. Wang et al., "Multimodal Sentiment Analysis with Multimodal


Attention Fusion and Multi-Scale LSTM," IEEE Transactions on Multimedia, vol.
23, no. 4, pp. 605-616, Apr. 2021. (Multimodal sentiment analysis with attention
and LSTMs)

5) [2023] F. Zhang et al., "Dynamic Attention Fusion for Multimodal


Emotion Recognition in Videos," IEEE Transactions on Affective Computing,
vol. 14, no. 2, pp. 217-228, Jun. 2023. (Dynamic attention fusion for multimodal
emotion recognition in videos)

6) [2021] M. Kim et al., "Electroencephalography-Based Emotion


Recognition Using Deep Sparse Neural Networks," IEEE Transactions on
Biomedical Engineering, vol. 68, no. 7, pp. 2105-2115, Jul. 2021. (EEG-based
emotion recognition with deep sparse neural networks)

34
7) [2022] S. Li et al., "Physiological Signal Based Emotion Recognition
Using Deep Feature Representation and Multiple Classifiers," IEEE Transactions
on Affective Computing, vol. 13, no. 4, pp. 841-853, Dec. 2022.(Physiological
signal-based emotion recognition with deep features and multiple classifiers)

8) [2019] M. Zhou et al., "Emotion Recognition in Human-Computer


Interaction: User Emotions, System Feedback, and Multimodal Learning," IEEE
Transactions on Affective Computing, vol. 10, no. 4, pp. 467-483, Oct.-Dec.
2019. (Emotion in HCI and multimodal learning)

9) [2020] L. Yang et al., "Toward Understanding Ethical Issues in Emotion


Recognition: A Survey," IEEE Transactions on Knowledge and Data Engineering,
vol. 33, no. 5, pp. 1100-1119, May 2020. (Ethical considerations in emotion
recognition)

10) [2023] T. Xu et al., "Recent Advances and Challenges in Affective


Computing and Social Signal Processing," IEEE Transactions on Multimedia, vol.
25, no. 3, pp. 757-775, Mar. 2023. (Review of recent advances and challenges in
affective computing)

35

You might also like