

2019 International Conference on Advanced Science and Engineering (ICOASE),
University of Zakho, Duhok Polytechnic University, Kurdistan Region, Iraq

Facial Expression Classification Based on SVM, KNN and MLP Classifiers

Hivi Ismat Dino
Dept. of Computer Science, University of Zakho, Duhok, Iraq
hivi.dino@gmail.com

Maiwan Bahjat Abdulrazzaq
Dept. of Computer Science, University of Zakho, Duhok, Iraq
maiwan.abdulrazzaq@uoz.edu.krd

Abstract—Facial Expression Recognition (FER) has been an active research topic since the 1990s and, owing to its importance, plays a major role in image processing. FER is typically performed in three stages: face detection, feature extraction and classification. This paper presents an automatic facial expression recognition system that recognizes all eight basic facial expressions (neutral, happy, angry, contempt, surprise, sad, fear and disgust), whereas many FER systems were proposed to recognize only some of them. The Extended Cohn-Kanade (CK+) dataset is used to validate the method. The presented method uses the Viola-Jones algorithm for face detection. The Histogram of Oriented Gradients (HOG) is used as a descriptor to extract features from the images of expressive faces. Principal Component Analysis (PCA) is applied to reduce the dimensionality of the features and retain only the most significant ones. Finally, three different classifiers, Support Vector Machine (SVM), K-Nearest Neighbor (KNN) and Multilayer Perceptron Neural Network (MLPNN), are used to classify the facial expressions and their results are compared. The experimental results show that the presented method achieves a recognition rate of 93.53% with the SVM classifier, 82.97% with the MLP classifier and 79.97% with the KNN classifier, indicating that the method performs best when SVM is used as the classifier.

Keywords—face expression recognition, histogram of oriented gradients, principal component analysis, support vector machine, k-nearest neighbor, multi-layer perceptron

I. INTRODUCTION

Facial recognition has been one of the significant areas of interest in security and biometrics since the 1990s. Humans naturally communicate their opinions to others and read reactions through emotions and intentions, so facial expression is important in natural communication [1]. Considerable research has been produced on face tracking and detection, feature extraction mechanisms and expression classification techniques [2]. Humans have an exceptional face recognition system; the human brain is so sophisticated that it can recognize different faces under very fast changes in appearance. Researchers have long been impressed by the capacity of the human brain to recognize faces in many different situations, and many efforts have been made to reproduce it. Based on this idea, many algorithms have been developed for face recognition and face detection [3].

The most popular approaches to facial expression analysis are the Support Vector Machine (SVM) [4], Artificial Neural Network (ANN) [5] and Active Appearance Model (AAM) [6]. In this paper the classifiers used are SVM, Multilayer Perceptron (MLP) [7] and KNN [8]. SVM is a well-suited algorithm for building a facial expression classification model: it tries to find a hyperplane in an N-dimensional space that separates the classes of data points with the maximum margin, and the quality of recognition relies on the feature extraction method [9]. MLP is one of the most common neural network techniques and is trained with the backpropagation algorithm [10]. KNN is a non-parametric method in pattern recognition and is the simplest of all machine learning algorithms [11].

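For concreteness, this maximum-margin idea can be written as the standard soft-margin SVM optimization problem (a textbook formulation, not one given in the paper), where the $x_i$ are feature vectors, $y_i \in \{-1, +1\}$ their class labels, $\xi_i$ slack variables and $C$ a penalty parameter:

$$\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_{i}\xi_{i} \quad\text{subject to}\quad y_{i}\,(w^{\top}x_{i} + b) \ge 1 - \xi_{i},\quad \xi_{i} \ge 0.$$
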
There are three approaches for recognizing faces with different facial expressions [12]: feature-based techniques, model-based techniques [13] and correlation-based techniques. Feature-based techniques are classified into geometric feature-based, appearance feature-based and movement feature-based methods. The most commonly used feature-based approaches are the geometric-based and appearance-based methods. In the geometric feature-based technique, geometric parameters such as curves and points are used to represent the image by locating different face parts such as the eyes, ears, nose and mouth; by estimating their relative width and position, geometric shapes are constructed to represent these parameters. In the appearance-based method, importance is given to pixel values instead of the shape of feature components or relative distances. Many different pixel parameters are used, such as histograms and intensities, and the corresponding values are stored in a template as a 2D array. Some algorithms combine these two approaches to obtain a good recognition rate.

There are three main steps in facial expression recognition: (1) face detection, (2) feature extraction and (3) expression recognition, and different algorithms are used for each step. Many datasets have been used to validate Facial Expression Recognition (FER), such as Cohn-Kanade (CK), Extended Cohn-Kanade (CK+), Yale, JAFFE and many others. In this paper the CK+ dataset is used to validate the presented method because it is one of the most common datasets for FER systems; it was released in 2000 for the purpose of promoting research into automatic systems of expression detection [14].


In a FER system, face detection [15] is considered the first step of any image processing technique. This step is required to normalize the data; in other words, the images are taken under different requirements and conditions using different equipment. The images vary in size and are affected by many disturbances such as backgrounds, noise and illumination variations. Thus the first step in FER is image standardization, which includes enhancement, resizing and noise removal of the input image. These steps are done in the preprocessing stage. In addition, many FER techniques use gray-scale images instead of color images, and this conversion also counts as a preprocessing step. Precisely, in preprocessing the images are standardized. The most prevalent techniques used for face detection are Viola-Jones [16], edge detection, Kirsch, Laplacian, Sobel, Canny, Prewitt, etc.

In this work we used the Viola-Jones algorithm for face detection, which was proposed by Paul Viola and Michael Jones in 2001 [17]. Generally, dimension reduction loses information; PCA tends to reduce that information loss. PCA is a standard method used in signal processing and statistical pattern recognition for data reduction. Most of the time the pattern contains much redundant information, and PCA is used to map it into a feature vector in order to remove the redundant information and retain only the important information. The role of those extracted features is to identify the input patterns. The Histogram of Oriented Gradients (HOG) is an image shape descriptor that counts the occurrences of gradient orientations in localized portions of an image; it is used for object detection and is intuitively well suited to capturing the shape of the facial muscles through edge analysis [18]. Principal Component Analysis (PCA) is widely used to reduce the number of image features in order to obtain a reduced representation that covers about 99% of the variation [19]. In this paper the PCA technique is applied to reduce the dimensionality of the HOG features. Finally, the SVM, MLP and KNN methods are used as classifiers for expression recognition.

The main goal of the presented method is to recognize all eight facial expressions (neutral, happy, sad, angry, contempt, surprised, fear and disgust), while many modern FER systems were produced to recognize only some of them. The presented research focuses on techniques that are able to increase both computational efficiency and recognition accuracy by applying various classification techniques and comparing their results.

Section II of the paper gives an overview of the related work. Section III describes the steps of the proposed system. Section IV presents the experimental results and discussion. Finally, Section V presents the conclusion of the paper.

II. RELATED WORKS

Reference [20] proposed a novel FER method using Equable Principal Component Analysis (EPCA) as a representation of expression features, with Linear Regression Classification (LRC) [21] implemented as the classifier for the expressions. EPCA deals with the original image by preserving the useful information while reducing the dimension of the feature vector, and LRC casts face recognition as a linear regression problem. The reproduction experiment was conducted on the Yale and JAFFE face databases. The authors proposed a PCA-based method for extracting face features that has a strong ability to improve the generalization performance and the accuracy of the feature vector. According to that paper, the proposed method is an improved PCA that works by changing the feature vectors to transform high-dimensional data into low-dimensional data, and each class has an average value that is an image of linear increase. The authors found that the LRC classifier is very effective at identifying the different expressions. The recognition rate reached 89.1% on the Yale database, applied to five expressions, and 91.1% on the JAFFE database, applied to eight expressions.

Reference [22] presented a FER system that recognizes six facial emotions. It used cascade regression trees for feature extraction and three machine learning algorithms for classification, SVM, NN and logistic regression, and compared their results. The presented method was applied on the CK+ dataset. The recognition accuracy was 77.06% with the logistic regression classifier, 80% with the NN classifier and 89% with SVM.

Reference [23] described both PCA and kernel-based PCA approaches in a FER system for three facial expressions in 3D face images. A KNN classifier is used to recognize the expressions, and the Imperial College London dataset is used to validate the system. The experimental results show that kernel PCA outperforms PCA, obtaining a recognition rate of 77.29% while PCA obtained only 52.69%.

Reference [24] proposed a new facial expression recognition method using a combination of the Local Directional Number pattern (LDN) and the Local Tetra Pattern (LTrP). The method uses a Modified Decision Based Unsymmetrical Median Filter (MDBUTMF) for noise removal. The LDN descriptor is organized based on eight Gaussian edge responses. The authors used the Japanese Female Facial Expression database with the expressions disgust, smile, sad and surprise. The proposed method encodes the relationship between a given pixel and its neighbors. LDN uses directional information, which is stable against noise, rather than intensity to code the various patterns of the face texture. The LTrP technique uses the direction of the center pixel to characterize the spatial structure of the local texture; it encodes the image based on the pixel direction, which is calculated from the vertical and horizontal derivatives. To represent the facial expressions, the proposed technique performs Gaussian mask generation and edge detection; after the LDN and LTrP processing, the LDN histogram and LTrP histogram are calculated and used to train the facial expressions with an SVM classifier. The experimental results show that the FER accuracy was higher than 90%.


In [25] the authors suggested a dynamic feature-based facial expression method for face video sequences based on a low-dimensional feature space, extracting a pyramid representation of the uniform temporal local binary pattern (PTLBPu2) on the orthogonal planes XT and YT. The proposed method is then used to select the most discriminating sub-regions, and the feature space is reduced to a low-dimensional feature space by applying PCA. The C4.5 algorithm and an SVM classifier are used for facial expression classification. Experiments were conducted on the MMI and CK+ databases, which are among the well-known facial expression databases. The experimental results show that the recognition accuracy reached 92% in an uncontrolled environment.

TABLE I. COMPARISON SUMMARY OF RELATED WORKS

| Literature | Feature Extraction | Classifier | Emotion No. | Dataset | Pros | Cons |
|---|---|---|---|---|---|---|
| Zhu et al. [20] | EPCA | LRC | 7 | Yale and JAFFE | Error-free recognition achievement | Pollution problem in face recognition |
| Bilkhu et al. [22] | Cascade regression | SVM, NN, logistic regression | 6 | CK+ | Suitable for real-time applications | Less effective recognition performance |
| Peter et al. [23] | PCA and kernel PCA | KNN | 4 | Imperial College London | Efficient for 3D face recognition | Less accuracy |
| Emmanuel and Revina [24] | LDN + LTrP | SVM | 4 | JAFFE | Robust in noise removal | Small number of recognized emotions |
| Abdallah et al. [25] | LDT feature space | SVM | 6 | CK+ | More reliable | Less accuracy |

Researchers in the related papers used various methods of feature extraction and classification with different numbers of facial expressions. Our system is able to recognize all eight basic facial expressions, while the researchers in [22-25] recognize a smaller number of facial expressions and with a lower recognition rate than our method. Table I shows the comparison between our system and the others in terms of accuracy and methods used.

III. METHODOLOGY

In this paper the Extended Cohn-Kanade (CK+) dataset was used. The dataset contains different emotions for each person and consists of image sequences converted from video frames. There are many sequences with missing labels in the dataset; only the labeled emotions were considered and used in our experiment. The first image frame of each sequence is used as the neutral emotion and the last image frame is used as the labeled emotion. Our work identifies human expressions with eight emotions: neutral, anger, contempt, disgust, fear, happy, sadness and surprise. Fig. 1 shows the block diagram of the presented method.

Fig. 1. Block diagram for the presented method

A. Preprocessing

This step is considered the first step of any image processing technique. In this work we used the Viola-Jones algorithm for face detection. The original images were digitized into either 640x490 or 640x480 pixel arrays with 8-bit gray-scale or 24-bit color values.

Fig. 2. Some emotions on detected faces

Viola-Jones is one of the most popular face detection algorithms because it is robust and operates in real time. The algorithm goes through four stages: Haar feature selection [26], creation of an integral image, AdaBoost training and cascading classifiers. After face detection, the images were normalized and resized to 256 by 256-pixel gray-scale arrays. Table II shows the total number of labeled emotions and detected faces, and Fig. 2 shows some emotions on detected faces.

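As a concrete illustration of this preprocessing stage, the sketch below uses OpenCV's pretrained frontal-face Haar cascade to detect a face, crop it and resize it to a 256x256 gray-scale patch; the cascade choice, parameter values and file path are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the preprocessing stage: Viola-Jones face detection followed by
# cropping and resizing to a 256x256 gray-scale image (paths are illustrative).
import cv2

# OpenCV ships the pretrained Haar cascade used by the Viola-Jones detector.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

def detect_and_normalize(image_path, size=256):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)   # work on gray-scale
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                        # no face found
    x, y, w, h = faces[0]                                  # keep the first detection
    face = gray[y:y + h, x:x + w]
    return cv2.resize(face, (size, size))                  # normalized face patch

face = detect_and_normalize("face_image.png")              # hypothetical input file
```
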
To calculate the Haar transform of an array of n samples (the array length should be a power of two):

• Treat the array as n/2 pairs (a, b).
• Calculate (a + b) / sqrt(2) for each pair; these values form the first half of the output array.
• Calculate (a - b) / sqrt(2) for each pair; these values form the second half.
• Repeat the process on the first half of the array.

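A direct Python rendering of these steps might look like the following sketch of the one-dimensional transform (not code from the paper):

```python
import math

def haar_transform(samples):
    """One-dimensional Haar transform of a list whose length is a power of two."""
    data = list(samples)
    n = len(data)
    while n > 1:
        half = n // 2
        # Pairwise sums (scaled) form the first half, differences the second half.
        averages = [(data[2 * i] + data[2 * i + 1]) / math.sqrt(2) for i in range(half)]
        details  = [(data[2 * i] - data[2 * i + 1]) / math.sqrt(2) for i in range(half)]
        data[:n] = averages + details
        n = half            # repeat the process on the first half only
    return data

print(haar_transform([4, 6, 10, 12, 8, 6, 5, 5]))
```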


TABLE II. TOTAL DETECTED FACE EMOTIONS

| Value | Count | Percent |
|---|---|---|
| neutral | 317 | 50.0% |
| anger | 45 | 7.1% |
| contempt | 18 | 2.8% |
| disgust | 55 | 8.7% |
| fear | 24 | 3.8% |
| happy | 68 | 10.7% |
| sadness | 26 | 4.1% |
| surprise | 81 | 12.8% |
| Total | 634 | 100.0% |

B. Feature extraction

This is the most significant stage in FER, and the efficiency of FER depends on the techniques used here. The major feature extraction techniques are the Local Binary Pattern (LBP), Gabor filters, Principal Component Analysis (PCA), etc. In this work we used the HOG algorithm for feature extraction, which was proposed by Dalal and Triggs [27].

Fig. 3. Extracted HOG features from a grayscale input image

The basic idea behind the HOG descriptor is that the shape and appearance of a local object inside the image can be characterized by the distribution of gradient intensities or edge directions; HOG is the combination of gradient direction histograms. To improve accuracy, the local histograms can be contrast-normalized by taking a wider region of the image, called a block, computing a measure of its intensity, and then using that value to normalize all the cells inside the block. Fig. 3 shows the visualization of the extracted histogram of oriented gradients (HOG) features on a face. Contrast normalization produces better invariance to shadowing and illumination. HOG is a better descriptor than others because it operates on local cells and is invariant to photometric and geometric transformations other than object orientation. The total number of extracted HOG features is 8100, which is huge, and therefore dimension reduction was essential.

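A minimal sketch of this feature extraction step with scikit-image is shown below; the cell and block sizes are assumptions (the paper does not state them), chosen so that a 256x256 input yields an 8100-dimensional descriptor as reported above.

```python
# Sketch of HOG feature extraction from a normalized 256x256 gray-scale face.
import numpy as np
from skimage.feature import hog

def extract_hog(face_256x256):
    return hog(face_256x256,
               orientations=9,            # 9 gradient-orientation bins per cell
               pixels_per_cell=(16, 16),  # 16x16-pixel cells (assumed)
               cells_per_block=(2, 2),    # 2x2 cells per normalization block (assumed)
               block_norm="L2-Hys")       # block-level contrast normalization

features = extract_hog(np.random.rand(256, 256))   # placeholder face image
print(features.shape)                               # (8100,)
```
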
consider class. The Recall highest rate value is (1) in happy
C. Feature Reduction emotion for K-NN classifier. The Recall lowest rate value is
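The reduction step can be sketched with scikit-learn's PCA, requesting that 90% of the variance be retained; the data array below is a placeholder, and only on the paper's actual HOG features does this reportedly leave 247 components.

```python
# Sketch of the feature-reduction stage: PCA keeping 90% of the variance.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(634, 8100)          # placeholder for the 634 HOG vectors

pca = PCA(n_components=0.90)           # keep enough components for 90% variance
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                 # (634, k); the paper reports k = 247
```
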
IV. EXPERIMENTAL RESULTS

A. Dataset

The proposed method was validated using the CK+ dataset. CK+ contains the facial emotions of 210 adults between 18 and 50 years old; 69% of the participants were female, 81% Euro-American, 13% Afro-American and 6% from other groups. CK+ consists of 593 image sequences of facial expressions from 123 subjects, among which 327 sequences have emotion labels; the image resolution is 640x490 or 640x480. After preprocessing, there were 634 labeled images with eight emotions [14].

B. Classifier

In this stage the features extracted from the face images are classified by a specific classifier into the respective facial expression classes. The most widely used classifiers are the support vector machine (SVM) and neural networks (NN). In this work we used three classifiers, SVM, MLP and K-nearest neighbor (KNN), and compared their results.

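As a sketch of this stage, the three classifiers can be instantiated with scikit-learn as below; the hyper-parameters are illustrative defaults rather than values reported in the paper.

```python
# Sketch of the classification stage: the three classifiers compared in this work.
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

classifiers = {
    "SVM": SVC(kernel="linear", C=1.0),                    # maximum-margin hyperplane
    "KNN": KNeighborsClassifier(n_neighbors=5),            # majority vote of neighbors
    "MLP": MLPClassifier(hidden_layer_sizes=(100,),        # feed-forward network trained
                         max_iter=500, random_state=0),    # with backpropagation
}
```
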
V. RESULTS

In the experiments, 10-fold cross-validation is used for model evaluation. The dataset is divided into 10 subsets and the model is evaluated ten times; each time it uses nine subsets as training data and the remaining subset for testing, and the average of the results of the ten tests is computed. This is usually the preferred method because it gives a model the opportunity to train on multiple train-test splits, giving a better estimate of how well the model will perform on unseen data.

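The evaluation protocol can be sketched as follows, reusing the classifiers dictionary from the previous sketch; the feature matrix and label vector here are placeholders standing in for the PCA-reduced HOG features and the emotion labels.

```python
# Sketch of the 10-fold cross-validation protocol used to evaluate each classifier.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score

X_reduced = np.random.rand(634, 247)     # placeholder PCA-reduced HOG features
y = np.random.randint(0, 8, size=634)    # placeholder labels for the eight emotions

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X_reduced, y, cv=cv)   # accuracy on each held-out fold
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.4f}")
```
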
Tables III, IV and V report the detailed results. The first attribute is the specificity, which is the true negative rate of the considered class. The highest specificity value is 1, obtained for the disgust emotion with the SVM classifier and for the fear, sadness and surprise emotions with the K-NN classifier; the lowest specificity value is 0.6309, for the neutral emotion with the K-NN classifier. The second attribute is the recall, commonly called sensitivity, which corresponds to the true positive rate of the considered class. The highest recall value is 1, for the happy emotion with the K-NN classifier, and the lowest is 0.0385, for the sadness emotion with the K-NN classifier. The third attribute is the false positive (FP) rate, which indicates the proportion of samples incorrectly classified as the considered class.
The FP rate is 0 whenever the specificity is 1. The last attribute is the F-measure, which is the harmonic mean of precision and recall. The highest F-measure value is 0.9853, for the happy emotion with the SVM classifier, while the lowest is 0.0741, for the sadness emotion with the K-NN classifier.

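For reference, these attributes follow the standard confusion-matrix definitions (the formulas below are the usual ones; they are not written out explicitly in the paper), with TP, TN, FP and FN the true/false positives and negatives of the considered class:

$$\text{Specificity}=\frac{TN}{TN+FP},\quad \text{Recall}=\frac{TP}{TP+FN},\quad \text{FP rate}=\frac{FP}{FP+TN}=1-\text{Specificity},$$
$$\text{Precision}=\frac{TP}{TP+FP},\quad \text{F-measure}=\frac{2\cdot\text{Precision}\cdot\text{Recall}}{\text{Precision}+\text{Recall}}.$$
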
The correct classification rate is 93.53% using the SVM classifier, 82.97% using the MLP classifier and 79.97% using the K-NN classifier. The accuracy and the per-emotion performance of each classifier are given in Tables III, IV and V.

TABLE III. PERFORMANCE DETAILS FOR SVM CLASSIFIER WITH 10-FOLD CROSS-VALIDATION

SVM accuracy = 93.89%

| Label | Specificity | Recall | FP-Rate | Precision | F-Measure |
|---|---|---|---|---|---|
| neutral | 0.9117 | 0.9874 | 0.0883 | 0.9179 | 0.9514 |
| anger | 0.9932 | 0.8889 | 0.0068 | 0.9091 | 0.8989 |
| contempt | 0.9935 | 0.6111 | 0.0065 | 0.7333 | 0.6667 |
| disgust | 1.0000 | 0.9455 | 0.0000 | 1.0000 | 0.9720 |
| fear | 0.9984 | 0.7083 | 0.0016 | 0.9444 | 0.8095 |
| happy | 0.9982 | 0.9853 | 0.0018 | 0.9853 | 0.9853 |
| sadness | 0.9984 | 0.6538 | 0.0016 | 0.9444 | 0.7727 |
| surprise | 0.9964 | 0.9383 | 0.0036 | 0.9744 | 0.9560 |
| Average | 0.9862 | 0.8398 | 0.0138 | 0.9261 | 0.8765 |

TABLE IV. PERFORMANCE DETAILS FOR K-NN CLASSIFIER WITH 10-FOLD CROSS-VALIDATION

K-NN accuracy = 79.97%

| Label | Specificity | Recall | FP-Rate | Precision | F-Measure |
|---|---|---|---|---|---|
| neutral | 0.6309 | 0.9937 | 0.3691 | 0.7292 | 0.8411 |
| anger | 0.9983 | 0.0667 | 0.0017 | 0.7500 | 0.1224 |
| contempt | 0.9984 | 0.1111 | 0.0016 | 0.6667 | 0.1905 |
| disgust | 0.9983 | 0.7818 | 0.0017 | 0.9773 | 0.8687 |
| fear | 1.0000 | 0.1250 | 0.0000 | 1.0000 | 0.2222 |
| happy | 0.9876 | 1.0000 | 0.0124 | 0.9067 | 0.9510 |
| sadness | 1.0000 | 0.0385 | 0.0000 | 1.0000 | 0.0741 |
| surprise | 1.0000 | 0.8889 | 0.0000 | 1.0000 | 0.9412 |
| Average | 0.9517 | 0.5007 | 0.0483 | 0.8787 | 0.5264 |

TABLE V. PERFORMANCE DETAILS FOR MLP CLASSIFIER WITH 10-FOLD CROSS-VALIDATION

NN accuracy = 82.97%

| Label | Specificity | Recall | FP-Rate | Precision | F-Measure |
|---|---|---|---|---|---|
| neutral | 0.7539 | 0.9811 | 0.2461 | 0.7995 | 0.8810 |
| anger | 0.9847 | 0.5778 | 0.0153 | 0.7429 | 0.6500 |
| contempt | 0.9919 | 0.1111 | 0.0081 | 0.2857 | 0.1600 |
| disgust | 0.9983 | 0.7818 | 0.0017 | 0.9773 | 0.8687 |
| fear | 0.9984 | 0.4583 | 0.0016 | 0.9167 | 0.6111 |
| happy | 0.9894 | 0.8676 | 0.0106 | 0.9077 | 0.8872 |
| sadness | 0.9918 | 0.4231 | 0.0082 | 0.6875 | 0.5238 |
| surprise | 0.9946 | 0.7778 | 0.0054 | 0.9545 | 0.8571 |
| Average | 0.9629 | 0.6223 | 0.0371 | 0.7840 | 0.6799 |

The experimental results show that our method provides better accuracy than the results in [28], which used the same dataset: that work used facial landmarks to extract features and applied different classifiers, training and testing on a combination of three datasets (CK, JAFFE and PSL); tested on two emotions, it obtained 65.35% accuracy with an SVM classifier and 70.87% with a KNN classifier. The researchers in [29] employed a Deep Convolutional Neural Network (DCNN) to extract deeper features for automatic emotion detection and tested a KNN classifier on their system; the recognition rate was 77.27% on the CK+ dataset. The researchers in [22] obtained 80.0% and 89.0% accuracy with SVM and neural network classifiers, and with a KNN classifier [23] reached a recognition rate of 77.29%.

TABLE VI. THE COMPARISON OF RECOGNITION RATE OF THE PRESENTED METHOD WITH THE STATE OF THE ART (%)

| Literature | Dataset | Classifier | Accuracy |
|---|---|---|---|
| [22] | CK+ | SVM | 80.0% |
| [22] | CK+ | NN | 89.0% |
| [23] | CK+ | KNN | 77.29% |
| [28] | CK+Jaffe+PSL | SVM | 65.35% |
| [28] | CK+Jaffe+PSL | KNN | 70.87% |
| [29] | CK+ | KNN | 77.27% |
| ours | CK+ | SVM | 93.53% |
| ours | CK+ | NN | 82.57% |
| ours | CK+ | KNN | 79.97% |

VI. CONCLUSION

In this research a FER system based on Histograms of Oriented Gradients (HOG) using various machine learning algorithms was presented. The Extended Cohn-Kanade (CK+) dataset was used as the data source to examine the classification of human facial expressions. Principal Component Analysis (PCA) was used to reduce the dimensionality of the features. The paper presents a system able to recognize all eight basic facial expressions. 10-fold cross-validation was used to train and test the classifiers. Three classifiers, SVM, NN and K-NN, were used to perform the facial expression classification. The experimental results show that SVM proved to be the better classifier, with a correct classification rate of 93.53%. In the future, we plan to use our own novel dataset, which is currently being collected, and to test the presented method with different machine learning algorithms to provide better accuracy.

REFERENCES

[1] T. Gehrig and H. K. Ekenel, "A common framework for real-time emotion recognition and facial action unit detection," in Computer Vision and Pattern Recognition Workshops (CVPRW), 2011 IEEE Computer Society Conference on, 2011, pp. 1-6.
[2] S. H. Abdurrahim, S. A. Samad, and A. B. Huddin, "Review on the effects of age, gender, and race demographics on automatic face recognition," The Visual Computer, vol. 34, pp. 1617-1630, 2018.
[3] S. L. Fernandes and G. J. Bala, "A Study on Face Recognition Under Facial Expression Variation and Occlusion," in Proceedings of the International Conference on Soft Computing Systems, 2016, pp. 371-377.
[4] G. Lei, X.-h. Li, J.-l. Zhou, and X.-g. Gong, "Geometric feature based facial expression recognition using multiclass support vector machines," in 2009 IEEE International Conference on Granular Computing, 2009, pp. 318-321.


[5] M.-P. Loh, Y.-P. Wong, and C.-O. Wong, "Facial expression recognition for e-learning systems using Gabor wavelet & neural network," 2006, pp. 523-525.
[6] C. Martin, U. Werner, and H.-M. Gross, "A real-time facial expression recognition system based on active appearance models using gray images and edge images," in Automatic Face & Gesture Recognition, 2008. FG'08. 8th IEEE International Conference on, 2008, pp. 1-6.
[7] P. Burkert, F. Trier, M. Z. Afzal, A. Dengel, and M. Liwicki, "DeXpression: Deep convolutional neural network for expression recognition," arXiv preprint arXiv:1509.05371, 2015.
[8] I. T. Meftah, N. Le Thanh, and C. B. Amar, "Emotion recognition using KNN classification for user modeling and sharing of affect states," in International Conference on Neural Information Processing, 2012, pp. 234-242.
[9] M. Abdulrahman and A. Eleyan, "Facial expression recognition using support vector machines," in 2015 23rd Signal Processing and Communications Applications Conference (SIU), 2015, pp. 276-279.
[10] H. Boughrara, M. Chtourou, C. B. Amar, and L. Chen, "Facial expression recognition based on a MLP neural network using constructive training algorithm," Multimedia Tools and Applications, vol. 75, pp. 709-731, 2016.
[11] S. Li, W. Deng, and J. Du, "Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2852-2861.
[12] I. M. Revina and W. S. Emmanuel, "A survey on human face expression recognition techniques," Journal of King Saud University - Computer and Information Sciences, 2018.
[13] C.-L. Huang and Y.-M. Huang, "Facial expression recognition using model-based feature extraction and action parameters classification," Journal of Visual Communication and Image Representation, vol. 8, pp. 278-290, 1997.
[14] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, "The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression," in Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on, 2010, pp. 94-101.
[15] D. Chawla and M. C. Trivedi, "A comparative study on face detection techniques for security surveillance," in Advances in Computer and Computational Sciences, ed: Springer, 2018, pp. 531-541.
[16] T. Paul, U. A. Shammi, M. U. Ahmed, R. Rahman, S. Kobashi, and M. A. R. Ahad, "A Study on Face Detection Using Viola-Jones Algorithm in Various Backgrounds, Angles and Distances," International Journal of Biomedical Soft Computing and Human Sciences, vol. 23, pp. 27-36, 2018.
[17] Á. P. Pertierra, A. B. G. González, J. T. Lafuente, and A. de Luis Reboredo, "Communication Skills Personal Trainer Based on Viola-Jones Object Detection Algorithm," in International Conference on Intelligent Data Engineering and Automated Learning, 2018, pp. 722-729.
[18] P. Carcagnì, M. Del Coco, M. Leo, and C. Distante, "Facial expression recognition and histograms of oriented gradients: a comprehensive study," SpringerPlus, vol. 4, p. 645, 2015.
[19] C. Y. Yong, R. Sudirman, and K. M. Chew, "Facial expression monitoring system using PCA-Bayes classifier," in Future Computer Sciences and Application (ICFCSA), 2011 International Conference on, 2011, pp. 187-191.
[20] Y. Zhu, X. Li, and G. Wu, "Face expression recognition based on equable principal component analysis and linear regression classification," in Systems and Informatics (ICSAI), 2016 3rd International Conference on, 2016, pp. 876-880.
[21] Y. Zhu, C. Zhu, and X. Li, "Improved principal component analysis and linear regression classification for face recognition," Signal Processing, vol. 145, pp. 175-182, 2018.
[22] M. S. Bilkhu, S. Gupta, and V. K. Srivastava, "Emotion Classification from Facial Expressions Using Cascaded Regression Trees and SVM," in Computational Intelligence: Theories, Applications and Future Directions - Volume II, ed: Springer, 2019, pp. 585-594.
[23] M. Peter, J.-L. Minoi, and I. H. M. Hipiny, "3D Face Recognition using Kernel-based PCA Approach," in Computational Science and Technology, ed: Springer, 2019, pp. 77-86.
[24] W. R. S. Emmanuel and I. M. Revina, "Face expression recognition using integrated approach of Local Directional Number and Local Tetra Pattern," in Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 2015 International Conference on, 2015, pp. 707-711.
[25] T. B. Abdallah, R. Guermazi, and M. Hammami, "Facial-expression recognition based on a low-dimensional temporal feature space," Multimedia Tools and Applications, vol. 77, pp. 19455-19479, 2018.
[26] R. S. Stanković and B. J. Falkowski, "The Haar wavelet transform: its status and achievements," Computers & Electrical Engineering, vol. 29, pp. 25-44, 2003.
[27] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in International Conference on Computer Vision & Pattern Recognition (CVPR'05), 2005, pp. 886-893.
[28] K. Lawrence, R. Campbell, and D. Skuse, "Age, gender, and puberty influence the development of facial emotion recognition," Frontiers in Psychology, vol. 6, p. 761, 2015.
[29] K. Shan, J. Guo, W. You, D. Lu, and R. Bie, "Automatic facial expression recognition based on a deep convolutional-neural-network structure," in 2017 IEEE 15th International Conference on Software Engineering Research, Management and Applications (SERA), 2017, pp. 123-128.
