
BVDU College of Engineering, Pune (Department of E&TC Engineering)


TITLE: Feature Extraction Technique for Emotion Detection using Machine Learning

Name of students:
Devasheesh Tripathi (89)
Kaushal Puri (101)
Yashvi Sudan (113)

Guide Name: Mrs. A. B. Patil


Introduction
Emotion recognition has important applications in the fields of
medicine, education, marketing, security, and surveillance. Machines can
enhance human-computer interaction by accurately recognizing human
emotions and responding to them. Existing research has mainly examined
automatic detection of a single emotion, but psychology and behavioral
science studies have shown that humans can concurrently experience and
express mixed emotions. For instance, a person can feel happy and sad at
the same time. This research used combinations of the six basic emotions
(happiness, sadness, surprise, anger, fear, and disgust) together with
the neutral state.
RATIONALE:
 The aim of this study is to develop features that capture data from facial
expressions in order to identify multiple emotions. In a single-label
classification problem, each annotated feature-vector instance is associated
with only a single class label.
 Recognizing multiple concurrent emotions, however, is a multi-label
classification problem. In a multi-label problem, each feature-vector instance
is associated with multiple labels, such as the presence or absence of each of
the six basic emotions (a minimal sketch follows this list).
 Multi-label classification is receiving increased attention and is being
applied to many domains, such as text, music, image- and video-based systems,
security, and bioinformatics.
 This paper examined recognition of concurrent emotional ambivalence and
mixed emotions. Additionally, the study restricted itself to two concurrent
emotions (emotion duality) to limit the scope of the research based on the
availability of scenarios. This kept the experimental design realistic:
subjects could express dual emotions with ease, and observers could annotate
the data without ambiguity. The study implemented a multimodal emotion
recognition system whose user interface provided multiple check-box inputs to
facilitate the annotation of concurrent emotions.
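
To make the multi-label formulation concrete, the sketch below shows binary
relevance (one presence/absence classifier per basic emotion) in Python with
scikit-learn. This is only an illustration under assumed data shapes, not the
system described in this study; the feature vectors and labels are random
placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

EMOTIONS = ["happiness", "sadness", "surprise", "anger", "fear", "disgust"]

# Hypothetical data: 100 feature vectors, one presence/absence label
# per basic emotion (random placeholders for illustration).
rng = np.random.default_rng(0)
X = rng.random((100, 20))
Y = rng.integers(0, 2, size=(100, len(EMOTIONS)))

# Binary relevance: one independent binary classifier per emotion label.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, Y)

# Predicted label vector for one instance, decoded to emotion names.
pred = clf.predict(X[:1])[0]
print([e for e, p in zip(EMOTIONS, pred) if p == 1])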
OBJECTIVE

 The Emotion Detection Analysis Method requires an image recognition
system. The image recognition system is based on two approaches: the Feature
Based System and the Image Based System. The Feature Based System extracts
the eye, nose, and mouth features and aligns them symmetrically. The Image
Based System, on the other hand, uses a pixel representation, which includes
Principal Component Analysis.
 Eigenfaces, computed via Principal Component Analysis, are used to evaluate
the detected face blobs (a short eigenfaces sketch follows this list). The
goal of a Facial Expression Recognition System (FERS) is to imitate the human
visual system as closely as possible. This is a very challenging task because
it requires not only efficient image/video analysis techniques but also a
feature vector well suited to the machine learning process. The facial
expression recognition in this project uses the Fisherface technique.
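
As an illustration of the eigenface idea mentioned above, the sketch below
computes a PCA basis over flattened grayscale face images with scikit-learn.
The image size, component count, and random data are assumptions for
illustration, not values from this project.

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 150 flattened 64x64 grayscale face images.
h, w = 64, 64
faces = np.random.rand(150, h * w)

# Each principal component, reshaped to image size, is an "eigenface";
# each face is then represented by its weights over this basis.
pca = PCA(n_components=50, whiten=True)
weights = pca.fit_transform(faces)
eigenfaces = pca.components_.reshape(-1, h, w)

print(weights.shape, eigenfaces.shape)  # (150, 50) (50, 64, 64)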
SCOPE

As technology improves day by day, higher-definition cameras become
available, computer networks can move more data, and processors work
faster. Facial-recognition algorithms will be better able to pick out
faces in an image and recognize them. An immediate way to overcome many
of these limitations is to change how images are captured. Using
checkpoints, for example, requires subjects to line up and funnel
through a single point. Cameras can then focus on each person closely,
yielding far more useful frontal, higher-resolution probe images. This
approach may still suffer from the changes and irregular motion
variation of facial expressions in spontaneous behaviour. Posed facial
expressions may differ in appearance and timing from spontaneously
occurring ones. Hence, there is still room for improvement and extension
to spontaneous facial expressions in order to make a dynamic facial
descriptor sufficiently stable, efficient, and accurate.
Block Diagram
FLOW CHART
FISHERFACE METHOD
 The Fisherface method maximizes the separation between classes during the
training process. Image recognition using Fisherfaces reduces the face
dimensionality with Principal Component Analysis (PCA) and then applies
Linear Discriminant Analysis (LDA) to obtain the image characteristics. The
PCA step reduces the dimensions before the LDA step is performed, which also
solves the singularity problem of the within-class scatter matrix. The
Fisherface method treats each pixel in an image as a coordinate in a
high-dimensional image space. The algorithm begins by creating a matrix in
which each column vector (consisting of pixel intensities) represents an
image; a corresponding vector of class labels is also created. The image
matrix is projected into an (n - c)-dimensional subspace, where n is the
number of images and c is the number of classes. The between-class and
within-class scatter of the projection is calculated and LDA is applied. A
minimal sketch of this pipeline appears below.
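
The PCA-then-LDA pipeline described above can be sketched briefly. The
snippet below is a minimal illustration using scikit-learn rather than the
project's own implementation; the image data is a random placeholder, and the
dimensions (64x64 images, 7 classes) are assumptions for illustration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data: n flattened grayscale face images, c emotion classes.
n, c = 200, 7
X = np.random.rand(n, 64 * 64)          # stand-in for real face images
y = np.random.randint(0, c, size=n)     # stand-in emotion labels

# Step 1: PCA projects into an (n - c)-dimensional subspace, avoiding the
# singular within-class scatter matrix that plain LDA would encounter.
pca = PCA(n_components=n - c)
X_pca = pca.fit_transform(X)

# Step 2: LDA maximizes between-class scatter relative to within-class
# scatter, yielding at most c - 1 discriminant (Fisherface) axes.
lda = LinearDiscriminantAnalysis(n_components=c - 1)
X_fisher = lda.fit_transform(X_pca, y)

print(X_fisher.shape)                   # (200, 6)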
LITERATURE SURVEY
 [1] S. Patwardhan, “Augmenting Supervised Emotion Recognition with
Rule-Based Decision Model,” arXiv, 2016.
Description: This paper investigates the effect of transferring emotion-rich
features between source and target networks on classification accuracy and
training time in a multimodal setting for vision-based emotion recognition.

 [2] M. Liu, R. Wang, S. Li, S. Shan, Z. Huang, and X. Chen, “Combining
Multiple Kernel Methods on Riemannian Manifold for Emotion Recognition in the
Wild,” ICMI, 2014.
Description: Emotional expressions of virtual agents are widely believed to
enhance interaction with the user by providing more natural means of
communication. However, given current technology, virtual agents are often
only able to produce facial expressions to convey emotional meaning.
RESULT
CONCLUSION

 The main objective of this project is to detect the facial emotions of a
person through the process of feature extraction. This objective is achieved
using the Fisherface method, which in turn uses the PCA and LDA methods. The
project also uses local binary patterns (LBP), which save considerable
computational resources while efficiently retaining facial information. The
LBP features are discriminative and robust over a range of facial image
resolutions, which matters in real-world applications where only
low-resolution video input is available. This project can be successfully
implemented for real-time operation (a short LBP sketch follows).
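
The LBP features mentioned in the conclusion can be sketched as follows. This
is a minimal illustration assuming scikit-image is available (an assumption,
since the report does not name its libraries); a random array stands in for a
grayscale face crop.

import numpy as np
from skimage.feature import local_binary_pattern

# Random placeholder for an 8-bit grayscale face crop.
image = (np.random.rand(64, 64) * 255).astype(np.uint8)

# Uniform LBP with 8 neighbors at radius 1; codes range over P + 2 values.
P, R = 8, 1
lbp = local_binary_pattern(image, P, R, method="uniform")

# A normalized histogram of LBP codes is a compact, resolution-robust
# facial feature vector.
hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
print(hist.shape)  # (10,)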
