
The Visual Computer

https://doi.org/10.1007/s00371-019-01628-3

ORIGINAL ARTICLE

Facial emotion detection using modified eyemap–mouthmap algorithm on an enhanced image and classification with tensorflow
Allen Joseph¹ · P. Geetha¹

¹ Department of Computer Science and Engineering, College of Engineering, Guindy, Anna University, Chennai, India
Allen Joseph: allenjoseph1985@gmail.com · P. Geetha: geethaplanisamy@gmail.com

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Abstract
Detection of emotion from facial expressions is a growing field of research. Facial expression detection also helps to identify the behavior of a person when he or she interacts with a computer. In this work, facial expression recognition based on changes in the facial geometry is proposed. First, the image is enhanced by means of a discrete wavelet transform and fuzzy combination. Then, the facial geometry is found using the modified eyemap and mouthmap algorithm after finding the landmarks. Finally, the areas and angles of the constructed triangles are computed and classified using a neural network with the help of the tensorflow central processing unit version. Results show that the proposed algorithm is efficient in finding the facial emotion.

Keywords Emotion · Eyemap · Facial expression · Facial geometry · Mouthmap

1 Introduction

Facial emotion detection is very useful for human–machine interaction. Facial emotion recognition can be used to improve the way humans and computers interact [6]. To illustrate the structure of facial expressions, Ekman et al. [5] described the facial action coding system, which represents facial expressions as activations of so-called action units. Each action unit refers to the contraction of a group of specific facial muscles. Features therefore have to be extracted from the facial expressions in order to identify the emotions. Mayer et al. [22] point out that, with respect to such features, the degree to which a person stretches the mouth while smiling differs among individuals. Based on their cross-database evaluation of facial expression recognition, they conclude that classifiers do not generalize and adapt well to database-specific properties.

For identifying facial emotions, features have to be extracted properly and then used for classification. Use of spatial information results in good accuracy, and geometry-based features are considered simple for identifying emotions [1]. In this work, we extract facial geometry features from an image that has been enhanced using the discrete wavelet transform (DWT) and fuzzy logic. The extracted features are classified using tensorflow because of its flexibility: it offers pipelining and can be used for training multiple neural networks.

The contributions of the proposed work are:

1. Image enhancement using DWT and fuzzy logic, which is used for enhancing a dull image.
2. A modified mouthmap for the detection of the mouth, which is more efficient at detecting the mouth of a person than state-of-the-art methods, and a modified eyemap for the construction of the eye geometry, applied after the eye is detected using the Viola Jones algorithm.
3. Use of tensorflow for classification of emotion and validation of the classified results.
4. Evaluation of the proposed system on three different databases, namely the Karolinska Directed Emotional Faces (KDEF) [20], Oulu-CASIA NIR-VIS facial expression (Oulu-CASIA) [33], and extended Cohn–Kanade (CK+) [19] databases.

The rest of the paper is organized as follows: Sect. 2 reviews related work, Sect. 3 describes the proposed system, and Sect. 4 deals with experimental results and analysis. Finally, Sect. 5 concludes the paper.


2 Related works

Many works have addressed the detection of facial emotion. Happy et al. [7] proposed a framework for expression recognition with the help of appearance features of selected facial patches. This method classifies the emotion by assessing a few facial patches; they worked on grayscale images from the Japanese Female Facial Expressions (JAFFE) [21] and CK+ databases. Mlakar et al. [23] proposed a method to identify facial emotions based on histogram of oriented gradients features, using the JAFFE and Cohn–Kanade [13] databases. They classified their features using a support vector machine (SVM).

Lajevardi et al. [16] extract features based on Gabor filters, log Gabor filters, the local binary pattern operator, higher-order local autocorrelation (HLAC), and HLAC-like features, using JAFFE and Cohn–Kanade as their databases. The classifier they used was naive Bayes, which is a probabilistic method. They claim that experimental results on both the Cohn–Kanade and JAFFE databases show that holistic methods are time-consuming and more complex. Buciu et al. [3] worked on a comparison of independent component analysis (ICA) approaches for facial expression recognition. In their work, in addition to the InfoMax approach present in ICA, other methods such as extended InfoMax ICA, undercomplete ICA, and nonlinear kernel-ICA were used for extracting features from two facial expression databases, namely JAFFE and Cohn–Kanade. They use the cosine similarity measure and SVM classifiers for classification of facial expression. They conclude that sparse basis images do not necessarily lead to a more accurate facial expression classification, even though ICA yields an efficient coding by performing a sparse image representation.

Ilbeygi et al. [10] proposed a fuzzy facial expression recognition system based on facial feature extraction. The features that they extract are eye opening, mouth opening, eyebrow constriction, mouth corner displacement, mouth length, nose-side wrinkles, the presence of teeth, eyebrow slope, and lip thickness. They compare the accuracy of the proposed fuzzy system with a K-nearest neighbor (KNN) classifier and claim that their method outperforms the KNN classifier.

Huang et al. [9] extract the contour deformation of the facial features and classify using a minimum distance classifier to cluster the parameters. They conclude that further study is required to improve the accuracy of the facial feature extraction. Karthigayan et al. [14] used three feature extraction methods, namely projection profile, contour profile, and moments, and a genetic algorithm was used for face emotion recognition. Wong et al. [29] work on face emotion recognition based on a tree structure representation with probabilistic recursive neural network modeling. The two databases that they used are JAFFE and Cohn–Kanade. Gabor feature extraction was used to extract the features of the face, and the classifier used was a probability-based recursive neural network model.

Alugupally et al. [2] used the Cohn–Kanade database for detecting facial expression. In their work, they use statistical analysis to determine the landmarks and features that are best suited to recognize the expressions in a face. They suggest that the texture and spatial information of the features in a face, such as the eyes, mouth, eyebrows, and nose, can be used for feature extraction and classification of facial expression. Jain et al. [12] proposed an approach using multi-scale Gaussian derivatives and SVM for detecting facial expressions. Kim [15] proposed a facial expression recognition system using active shape model (ASM) landmark information and an appearance-based classification algorithm using an embedded hidden Markov model. He claims that the proposed approach gives performance improvements over the ASM-based face alignment method on the Cohn–Kanade and JAFFE databases, respectively. Yu et al. [32] worked on classifying strong and weak emotions. Another work on facial emotion recognition is [18], where a super wide regression network for facial expression is used. Facial expression recognition by de-expression residue learning is proposed in [31], where the authors use an expressive component and a neutral component to classify facial expression.

From all the works discussed above, one can see that facial features are extracted and classified with the help of appropriate classification algorithms. The extraction of features and their manipulation by the classification algorithms play a vital role in identifying the emotions. In this work, facial geometry-based expression detection is proposed by detecting the eye and mouth with the help of Viola Jones and the modified mouthmap and eyemap algorithms. Tensorflow is used for classification. We also compare our work with different classification algorithms.

3 Proposed system

The overall block diagram of the proposed system is shown in Fig. 1. The main components are preprocessing, face detection, mouth detection, and eye detection.

3.1 Preprocessing

The input image is first preprocessed in order to enhance the clarity of the image. The input RGB image is first converted to an HSV image, and the manipulations are carried out on the V plane after extracting the V plane separately.

Fig. 1 Block diagram of the proposed system

Fig. 2 Single-level DWT transform

Fig. 3 DWT transformation

DWT is used to transform the V plane. The image obtained after transformation is shown in Fig. 2. Here, we get the approximation coefficients (LL), horizontal coefficients (HL), vertical coefficients (LH), and diagonal coefficients (HH). Figure 3 shows the output for the given input image after applying DWT.

Fuzzy logic [24]-based manipulations are carried out on the approximation coefficients of the DWT in order to get the enhanced image. Inverse DWT (IDWT) is used to reconstruct the image. Figure 4 shows the membership function details, and Table 1 shows the fuzzy rule base for image enhancement, which was obtained based on experiments. The following rules were applied: if the input is mf1, then the output is mf1; similar if...then rules were applied for mf2, mf3, mf4, and mf5.
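As an illustration of this preprocessing pipeline, the following sketch enhances the V plane with a single-level DWT and a simple percentile-based intensity mapping applied to the approximation coefficients. The mapping is only a stand-in for the triangular-membership fuzzy rule base of Table 1, and the Haar wavelet is an assumption, since the paper does not name the wavelet used.

```python
import cv2
import numpy as np
import pywt

def enhance_image(bgr):
    """Sketch of Sect. 3.1: single-level DWT on the V plane, a simple
    contrast mapping on the approximation coefficients (a stand-in for
    the fuzzy rule base of Table 1), then IDWT reconstruction."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    # Single-level 2-D DWT of the V plane; the Haar wavelet is assumed.
    ll, (lh, hl, hh) = pywt.dwt2(v.astype(np.float32), 'haar')

    # Stand-in for the if...then fuzzy rules: stretch the approximation
    # coefficients between their 2nd and 98th percentiles.
    p_lo, p_hi = np.percentile(ll, (2, 98))
    ll = np.clip((ll - p_lo) / (p_hi - p_lo + 1e-6), 0.0, 1.0)
    ll = ll * (2.0 * 255.0)   # Haar approximation coefficients span roughly [0, 510]

    # Reconstruct the V plane and reassemble the enhanced color image.
    v_enh = pywt.idwt2((ll, (lh, hl, hh)), 'haar')
    v_enh = np.clip(v_enh, 0, 255).astype(np.uint8)
    v_enh = cv2.resize(v_enh, (v.shape[1], v.shape[0]))  # idwt2 may pad odd sizes
    return cv2.cvtColor(cv2.merge([h, s, v_enh]), cv2.COLOR_HSV2BGR)
```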


3.2 Face detection

The enhanced image shown in Fig. 5 is used for face detection. The face is detected using the Viola Jones algorithm [28], which has four stages, namely (1) Haar feature selection, (2) creating an integral image, (3) AdaBoost training, and (4) cascading classifiers. For this work, the Viola Jones algorithm proved to be very efficient in detecting the face. A rectangular bounding box is created for the detected face, and then the face region alone is cropped. The cropped image is resized to 256 × 256 pixels because the actual image size is 562 × 762 pixels; resizing makes the processing easier.

Fig. 4 Membership function details: a input, b output

Table 1 Fuzzy rule base for image enhancement

         Input                  Output
MF       a      b      c        MF       a      b      c
mf1      0      0      100      mf1      −50    0      50
mf2      0      100    180      mf2      50     100    150
mf3      30     120    180      mf3      100    150    200
mf4      100    180    255      mf4      185    220    255
mf5      180    255    255      mf5      180    255    330

MF membership function; a, b, c parameters of the triangular membership function

3.3 Modified mouthmap and eyemap (MoMMEM)

The cropped image is then converted to the YCbCr color space, and the Y, Cb, and Cr planes are separated individually. The separated planes are used to find the mouthmap [8]. Cr² is first normalized to the range [0, 1]. The mouthmap of the image is found using Eq. 1:

MouthMap = Cr² · (Cr² − η · (Cr/Cb))²,   (1)

where

η = 0.5 · [(1/n) Σ_{(x,y)∈FG} Cr(x, y)²] / [(1/n) Σ_{(x,y)∈FG} Cr(x, y)/Cb(x, y)],

with n being the number of pixels within the face mask, FG.

After finding the mouthmap, the image is eroded using a non-flat, ball-shaped structuring element whose radius in the X–Y plane is 5 and whose maximum offset height is 5. The image is then dilated using a non-flat, ball-shaped structuring element whose radius in the X–Y plane is 11 and whose maximum offset height is 11. The values for the morphological operations were obtained by trial and error.
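To make Eq. 1 concrete, the sketch below computes the mouthmap and η on the chroma planes of the cropped face. It is a minimal illustration, not the authors' MATLAB implementation; it assumes 8-bit Cb/Cr planes already extracted and a boolean face mask FG.

```python
import numpy as np

def mouthmap(cr, cb, face_mask):
    """Sketch of Eq. 1: MouthMap = Cr^2 * (Cr^2 - eta * Cr/Cb)^2."""
    cr = cr.astype(np.float64)
    cb = np.maximum(cb.astype(np.float64), 1e-6)          # avoid division by zero

    cr2 = cr ** 2
    cr2 = (cr2 - cr2.min()) / (cr2.max() - cr2.min() + 1e-6)   # normalize Cr^2 to [0, 1]
    ratio = cr / cb

    n = np.count_nonzero(face_mask)                        # pixels inside face mask FG
    eta = 0.5 * (cr2[face_mask].sum() / n) / (ratio[face_mask].sum() / n)

    return cr2 * (cr2 - eta * ratio) ** 2
```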
A mask is created near the region surrounding the mouth in order to extract the mouth region alone using the Chan–Vese active contour method [4]. The method is based on level sets that are evolved iteratively to minimize an energy, defined by weighted terms corresponding to the sum of intensity differences from the average value outside the segmented region, the sum of differences from the average value inside the segmented region, and a term that depends on the length of the boundary of the segmented region. The detected face, cropped image, mouthmap, mask, and segmented active contour image are shown in Fig. 6a–e, respectively. Based on its area, the largest blob in the segmented image is extracted in order to detect the mouth. The detected eye and mouth are shown in Fig. 7.

Eye detection is also performed using the Viola Jones algorithm. The detected eye region is further split into the right eye and the left eye, and then the modified eyemap algorithm is applied in order to detect the landmarks in the eye. The eyemap is a combination of EmC and EmL. EmC is found using Eq. 2:

EmC = (1/3)[(Cb)² + (C̃r)² + (Cb/Cr)],   (2)

where C̃r is the negative of Cr.


Fig. 5 a Actual image. b Enhanced image

Fig. 6 a Detected face, b cropped face, c mouthmap, d mask, e segmented mouth

Fig. 7 Detected eye and mouth

Fig. 8 Constructed geometry of the face

Before finding EmL, a non-flat, ball-shaped structuring element, whose radius in the X–Y plane is 5 and whose maximum offset height is 5, was chosen for the morphological operations dilation and erosion. After dilating and eroding, EmL is found using Eq. 3:

EmL = (Y(x, y) ⊕ gσ(x, y)) / (Y(x, y) ⊖ gσ(x, y) + 1),   (3)

where ⊕ and ⊖ are the grayscale dilation and erosion operations on a function f : F ⊂ R → R using the scaled structuring function gσ : Gσ ⊂ R → R with gσ(x) = |σ|g(|σ|⁻¹x), x ∈ Gσ, and σ ≠ 0, where σ is the scale parameter and Gσ = {x : |σ|⁻¹x ∈ G}, as defined in [11]. The product of EmC and EmL is then found using Eq. 4 to obtain the eyemap:

EyeMap = EmC · EmL.   (4)

After finding the eyemap, a non-flat, ball-shaped structuring element, whose radius in the X–Y plane is 11 and whose maximum offset height is 11, was chosen for the morphological operation erosion. After erosion, a non-flat, ball-shaped structuring element, whose radius in the X–Y plane is 5 and whose maximum offset height is 5, was chosen for the morphological operation dilation. The facial landmarks are then figured out for the detected eye and mouth as defined in [26]. Finally, the geometry of the face is detected as shown in Fig. 8. Four points were chosen as landmarks for the eye as well as for the mouth, and a total of 16 triangles were constructed using the landmarks.

The modified mouthmap–eyemap algorithm for feature extraction is shown in Algorithm 1. In the algorithm, the following notations are used: I—image and Fd—detected face.

123
A. Joseph, P. Geetha

Algorithm 1: MoMMEM Algorithm


Input : Face Image In // n is the total number of images
Output: Set of all feature vectors Fv
1 begin
2 Apply Viola Jones algorithm to detect Face Fd
3 for each Fd from In do
4 Crop the Fd
5 Compute Y , Cb and Cr components
6 for each cropped Fd do
7 Compute mouthmap as in equation 1
8 Change the value of η
9 Perform morphological operations ⊖ and ⊕ on mouthmap
10 Segment the mouth region // Use active contour
11 Detect the mouth
12 Find the landmarks
13 end
14 for each cropped Fd do
15 Detect eye using Viola Jones algorithm
16 Split the eye into left and right
17 Compute EmC using equation 2
18 Perform morphological operations ⊕ and ⊖ on Y component
19 Compute EmL using equation 3
20 Compute eyemap using equation 4
21 Perform morphological operations ⊖ and ⊕ on EyeMap
22 Segment the eyes regions // Use active contour
23 Find the landmarks
24 end
25 end
26 Construct the geometry
27 Compute Feature vector Fv
28 end
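As a hypothetical illustration of steps 17–21 of Algorithm 1 (Eqs. 2–4), the sketch below computes EmC and EmL with grayscale morphology from SciPy and multiplies them into the eyemap. A flat 5 × 5 footprint stands in for the paper's non-flat ball-shaped structuring element, and the [0, 1] scaling of the chroma planes is an assumption.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def eyemap(y, cb, cr, size=5):
    """Sketch of Eqs. 2-4: EyeMap = EmC * EmL on an eye region."""
    y = y.astype(np.float64)
    cb_n = cb.astype(np.float64) / 255.0
    cr_n = cr.astype(np.float64) / 255.0

    # Eq. 2: EmC = (1/3)[Cb^2 + (neg Cr)^2 + Cb/Cr], with planes scaled to [0, 1].
    cr_neg = 1.0 - cr_n                       # negative of Cr
    emc = ((cb_n ** 2) + (cr_neg ** 2) + cb_n / (cr_n + 1e-6)) / 3.0

    # Eq. 3: EmL = (Y dilated) / (Y eroded + 1); a flat square footprint is
    # used here instead of the non-flat ball-shaped element of the paper.
    dil = grey_dilation(y, size=(size, size))
    ero = grey_erosion(y, size=(size, size))
    eml = dil / (ero + 1.0)

    # Eq. 4: combine the chrominance and luminance maps.
    return emc * eml
```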

4 Results

4.1 Dataset

Here, we use 478 images from the Karolinska Directed Emotional Faces (KDEF) database, which is a set of human facial expressions of emotion. The size of the images is 562 × 762 pixels with a resolution of 72 dpi. The database uses 16.7 million (32-bit) colors and a JPEG file format with a compression quality of 94%. A sample set is shown in Fig. 9. We also use two other datasets, namely the Oulu-CASIA dataset and the CK+ dataset, for testing the proposed algorithm. The Oulu-CASIA dataset consists of six expressions, namely surprise, happiness, sadness, anger, fear, and disgust, from 80 people who were between 23 and 58 years old. The color images in the CK+ dataset were used for testing.

4.2 Experiments and discussion

4.2.1 Experiments on KDEF dataset

First, we find the structural similarity index (SSIM) of the enhanced image in order to assess the quality of the enhancement. The structural similarity index of two images I1 and I2 is given by the luminance term, the contrast term, and the structural term using the formula

SSIM(I1, I2) = [l(I1, I2)]^α · [c(I1, I2)]^β · [s(I1, I2)]^γ,   (5)

where

l(I1, I2) = (2 μI1 μI2 + C1) / (μI1² + μI2² + C1),
c(I1, I2) = (2 σI1 σI2 + C2) / (σI1² + σI2² + C2),
s(I1, I2) = (σI1I2 + C3) / (σI1 σI2 + C3),

with μI1, μI2, σI1, σI2, and σI1I2 being the local means, standard deviations, and cross-covariance of the images. α, β, and γ are the weights, and C1, C2, and C3 are constants that stabilize the division when the denominator is weak. The results of SSIM for the enhanced images are shown in Table 2. The SSIM values show that the image enhancement technique used is robust. This robustness can be improved by changing the values of the input membership function while keeping the output membership function constant.
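In practice, SSIM (Eq. 5, with the common default exponents α = β = γ = 1) can be computed with an off-the-shelf implementation. The sketch below uses scikit-image and assumes the original and enhanced images are available as grayscale files; the file names are placeholders.

```python
import cv2
from skimage.metrics import structural_similarity

# Hypothetical file names; any original/enhanced pair from the pipeline works.
original = cv2.imread('kdef_original.jpg', cv2.IMREAD_GRAYSCALE)
enhanced = cv2.imread('kdef_enhanced.jpg', cv2.IMREAD_GRAYSCALE)

# structural_similarity computes the mean local SSIM over sliding windows,
# which corresponds to Eq. 5 with unit exponents.
score = structural_similarity(original, enhanced)
print(f'SSIM = {score:.4f}')   # values around 0.83-0.84 are reported in Table 2
```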


Fig. 9 Samples of the KDEF dataset (the first row gives the seven emotional expressions of a female face and the second row gives the seven emotional expressions of a male face)

Table 2 Results of SSIM of an enhanced image

Input image     Enhanced image     SSIM
(thumbnail)     (thumbnail)        0.8435
(thumbnail)     (thumbnail)        0.8340
(thumbnail)     (thumbnail)        0.8439
(thumbnail)     (thumbnail)        0.8315

Fig. 10 a Input image, b Viola Jones algorithm for detecting the mouth, c modified mouthmap algorithm for detecting the mouth

Results obtained using the modified mouthmap algorithm also proved to be efficient, as the detection rates increased when compared with the Viola Jones algorithm even after changing the merge threshold. The merge threshold sets the criteria required to make a final detection in an area where there are multiple detections around the face. According to [28], increasing the threshold may suppress false detections by requiring that the target location be identified multiple times during the multi-scale detection phase of the algorithm. This scenario can be visualized in Fig. 10.

From the enhanced image, the features were extracted using the modified mouthmap and eyemap algorithm in MATLAB 2016a on an Intel Core i7-4790 processor, and tensorflow was used for classification. Extensive experiments were conducted on the KDEF dataset in order to identify the geometry, and the results are shown in Fig. 11.

We get the coordinates of the detected geometry and construct triangles from them. Since only the coordinates are known, the side of a triangle is found using the Euclidean distance, given by

distE((x1, y1), (x2, y2)) = √((x1 − x2)² + (y1 − y2)²),   (6)

where distE is the Euclidean distance between two coordinates (x1, y1) and (x2, y2). After finding the distances, we find the area of the triangle using Heron's formula:

Area of the triangle = √(s(s − a)(s − b)(s − c)),   (7)

where s = (a + b + c)/2 is called the semiperimeter of the triangle, and a, b, and c are the sides of the triangle.
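A small sketch of this geometric feature computation (Eqs. 6 and 7, together with the interior angles given next in Eq. 8) from three landmark coordinates; the landmark values used in the example are placeholders.

```python
import math

def triangle_features(p1, p2, p3):
    """Side lengths (Eq. 6), Heron's area (Eq. 7), and interior angles (Eq. 8)."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    a, b, c = dist(p2, p3), dist(p1, p3), dist(p1, p2)   # sides opposite p1, p2, p3

    s = (a + b + c) / 2.0                                 # semiperimeter
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))

    # Law of cosines; the third angle follows from the angle sum of a triangle.
    angle_a = math.degrees(math.acos((b**2 + c**2 - a**2) / (2 * b * c)))
    angle_b = math.degrees(math.acos((c**2 + a**2 - b**2) / (2 * c * a)))
    angle_c = 180.0 - angle_a - angle_b
    return area, (angle_a, angle_b, angle_c)

# Example with three hypothetical landmark coordinates (x, y):
print(triangle_features((10, 12), (42, 15), (25, 40)))
```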


Fig. 11 Examples of detection from KDEF dataset: a surprised, b anger, c disgust, d afraid, e happy, f neutral, g sad

Table 3 Results of neural network classifier using tensorflow

Step       Loss    Validation accuracy (%)    Training accuracy (%)
4000       1.45    25.9                       45.2
8000       1.21    31.2                       54.1
48,000     0.70    25.9                       77.2
248,000    0.31    25.9                       90.3
498,000    0.08    31.2                       98.1

The angles of the triangle are found using

Side A = cos⁻¹((b² + c² − a²)/(2bc)),
Side B = cos⁻¹((c² + a² − b²)/(2ca)),   (8)
Side C = 180° − Side A − Side B.

We arrive at 67 features for the seven emotions. The values of the extracted features varied when the alignment of the landmarks varied. This variation was controlled by keeping the detected landmarks optimal through tuning of the parameters used to detect the landmarks.

A tensorflow-based neural network was used for classification of the extracted features. The softmax approach was used in the neural network, and the results of the classifier are shown in Table 3. Softmax is computed using

softmax(xi) = exp(xi) / Σj exp(xj).   (9)

Generally, it is the ratio of the exponential of the given input value to the sum of the exponentials of all the input values. The number of hidden layers was 10. Gradient descent optimization with a learning rate of 0.01 was used to obtain the accuracy shown in Table 3.

From the results, one can infer that as the steps (i.e., iterations) increase, the training accuracy increases and the loss decreases; this improvement builds up gradually at the chosen learning rate. If the steps are further increased, the accuracy increases, but the time taken to run the algorithm also increases. From 4000 iterations to 498,000 iterations, the loss decreases from 1.45 to 0.08.

Performance comparison with state-of-the-art methods is shown in Table 4. The features extracted were also given to other classification algorithms, namely logistic regression (LR), linear discriminant analysis (LDA), KNN, classification and regression trees (CART), naive Bayes (NB), and SVM.


Table 4 Performance comparison with the state-of-the-art methods on the KDEF database

Method                          Accuracy (%)
Sun et al. [27]                 77.98
Lekdioui et al. [17]            91.87
Yaddaden et al. [30]            90.61
Ruiz-Garcia et al. [25]         95.58
Proposed (MoMMEM algorithm)     98.1

Fig. 12 Mean and standard deviation accuracy rates using different algorithms (LR, LDA, KNN, CART, NB, and SVM)

The mean and standard deviation accuracy rates are shown in Fig. 12. From the figure, one can infer that, among all the methods used, naive Bayes produced better results since it is highly scalable. Tenfold cross-validation was used for all the classification algorithms.

4.2.2 Experiments on Oulu-CASIA and CK+ datasets

Images from the Oulu-CASIA and CK+ datasets were subjected to face and eye detection using the Viola Jones algorithm, and the modified mouthmap algorithm was then used to detect the mouth, as it proved more efficient in detecting the mouth than the Viola Jones algorithm. The geometry is constructed after finding the landmarks; this can be visualized in Figs. 13 and 14, respectively. Except for the alignment of the left eye, the same settings were used for the detection of geometry on the two datasets. The detected geometry showed some variations when a person closed his or her eyes; handling this remains an open research problem that requires more robust algorithms for the detection of facial emotions.

Fig. 13 Example detections from the Oulu-CASIA dataset (the first row represents the input images and the second row represents the detected geometry)

A detailed analysis on the Oulu-CASIA dataset reveals that the algorithm detects the geometry even when there are occlusions in the face, and the detection of mouth opening and closing is also good, which can be used for the identification of emotion. Experiments show that the detected geometry varies significantly in terms of mouth and eye movements, and these variations can be captured for identifying the emotions. Based on the detected geometry, classification algorithms can be designed for the classification of emotions.

Experiments on the CK+ dataset show that the proposed algorithm can be used for identifying the geometry even when a person's emotion is very complex. This can be visualized in the second row of Fig. 14.

Fig. 14 Example detections from the CK+ dataset

Finally, to conclude, the proposed algorithm is effective in detecting the geometry of the face, which in turn can be used for identifying the emotion with good accuracy. The analysis also reveals that a proper machine learning algorithm is required to identify the emotions.


5 Conclusion

A work on face emotion detection is proposed here; the geometry of the face with respect to the neutral emotion can also be used to identify other emotions. The detected geometry shows that the proposed modified eyemap–mouthmap algorithm is efficient, and the geometry varied with respect to the emotion of a person. The use of different databases proves that the algorithm can be used for the detection of geometry on other databases as well. Tuning the parameters of the algorithm produced optimal results for identifying emotion. Results also show that the proposed algorithm is gender independent. In future, the proposed algorithm can be improved by the use of machine learning algorithms and by using tensorflow with a graphics processing unit (GPU).

Funding This work was funded by the Department of Science and Technology—Promotion of University Research and Scientific Excellence (DST-PURSE) Phase II Program, India.

References

1. Agarwal, S., Santra, B., Mukherjee, D.P.: Anubhav: recognizing emotions through facial expression. Vis. Comput. 34(2), 177–191 (2018)
2. Alugupally, N., Samal, A., Marx, D., Bhatia, S.: Analysis of landmarks in recognition of face expressions. Pattern Recognit. Image Anal. 21(4), 681–693 (2011)
3. Buciu, I., Kotropoulos, C., Pitas, I.: Comparison of ICA approaches for facial expression recognition. Signal Image Video Process. 3(4), 345–361 (2009)
4. Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Trans. Image Process. 10(2), 266–277 (2001)
5. Ekman, P., Friesen, W.: Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Washington (1978)
6. Gogić, I., Manhart, M., Pandžić, I.S., Ahlberg, J.: Fast facial expression recognition using local binary features and shallow neural networks. Vis. Comput. 1–16 (2018)
7. Happy, S., Routray, A.: Automatic facial expression recognition using features of salient facial patches. IEEE Trans. Affect. Comput. 6(1), 1–12 (2015)
8. Hsu, R.L., Abdel-Mottaleb, M., Jain, A.K.: Face detection in color images. IEEE Trans. Pattern Anal. Mach. Intell. 24(5), 696–706 (2002)
9. Huang, C.L., Huang, Y.M.: Facial expression recognition using model-based feature extraction and action parameters classification. J. Vis. Commun. Image Represent. 8(3), 278–290 (1997)
10. Ilbeygi, M., Shah-Hosseini, H.: A novel fuzzy facial expression recognition system based on facial feature extraction from color face images. Eng. Appl. Artif. Intell. 25(1), 130–146 (2012)
11. Jackway, P.T., Deriche, M.: Scale-space properties of the multiscale morphological dilation–erosion. IEEE Trans. Pattern Anal. Mach. Intell. 18(1), 38–51 (1996)
12. Jain, V., Mavridou, E., Crowley, J.L., Lux, A.: Facial expression analysis and the affect space. Pattern Recognit. Image Anal. 25(3), 430–436 (2015)
13. Kanade, T., Cohn, J.F., Tian, Y.: Comprehensive database for facial expression analysis. In: Fourth IEEE International Conference on Automatic Face and Gesture Recognition, 2000. Proceedings, pp. 46–53. IEEE (2000)
14. Karthigayan, M., Juhari, M.R.M., Nagarajan, R., Sugisaka, M., Yaacob, S., Mamat, M.R., Desa, H.: Development of a personified face emotion recognition technique using fitness function. Artif. Life Robot. 11(2), 197–203 (2007)
15. Kim, D.: Facial expression recognition using ASM-based post-processing technique. Pattern Recognit. Image Anal. 26(3), 576–581 (2016)
16. Lajevardi, S.M., Hussain, Z.M.: Automatic facial expression recognition: feature extraction and selection. Signal Image Video Process. 6(1), 159–169 (2012)
17. Lekdioui, K., Messoussi, R., Ruichek, Y., Chaabi, Y., Touahni, R.: Facial decomposition for expression recognition using texture/shape descriptors and SVM classifier. Signal Process. Image Commun. 58, 300–312 (2017)
18. Liu, N., Zhang, B., Zong, Y., Liu, L., Chen, J., Zhao, G., Zhu, L.: Super wide regression network for unsupervised cross-database facial expression recognition. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1897–1901. IEEE (2018)
19. Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., Matthews, I.: The extended Cohn–Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 94–101. IEEE (2010)
20. Lundqvist, D., Flykt, A., Öhman, A.: The Karolinska Directed Emotional Faces-KDEF. CD ROM from Department of Clinical Neuroscience, Psychology Section, Karolinska Institutet, ISBN 91-630-7164-9 (1998)
21. Lyons, M., Akamatsu, S., Kamachi, M., Gyoba, J.: Coding facial expressions with Gabor wavelets. In: Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998. Proceedings, pp. 200–205. IEEE (1998)
22. Mayer, C., Eggers, M., Radig, B.: Cross-database evaluation for facial expression recognition. Pattern Recognit. Image Anal. 24(1), 124–132 (2014)
23. Mlakar, U., Potočnik, B.: Automated facial expression recognition based on histograms of oriented gradient feature vector differences. Signal Image Video Process. 9(1), 245–253 (2015)
24. Panda, S.P.: Image contrast enhancement in spatial domain using fuzzy logic based interpolation method. In: 2016 IEEE Students' Conference on Electrical, Electronics and Computer Science (SCEECS), pp. 1–4. IEEE (2016)
25. Ruiz-Garcia, A., Palade, V., Elshaw, M., Almakky, I.: Deep learning for illumination invariant facial expression recognition. In: 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–6. IEEE (2018)
26. Silva, C., Schnitman, L., Oliveira, L.: Detection of facial landmarks using local-based information. In: The 19th Edition of the Brazilian Conference on Automation-CBA 2012, Campina Grande, PB, Brazil (oral presentation), September 3 (2012)
27. Sun, Z., Hu, Z., Wang, M., Zhao, S.: Individual-free representation-based classification for facial expression recognition. Signal Image Video Process. 11(4), 597–604 (2017)
28. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2001. CVPR 2001, vol. 1, pp. I–I. IEEE (2001)
29. Wong, J.J., Cho, S.Y.: A face emotion tree structure representation with probabilistic recursive neural network modeling. Neural Comput. Appl. 19(1), 33–54 (2010)
30. Yaddaden, Y., Adda, M., Bouzouane, A., Gaboury, S., Bouchard, B.: User action and facial expression recognition for error detection system in an ambient assisted environment. Expert Syst. Appl. 112, 173–189 (2018)


31. Yang, H., Ciftci, U., Yin, L.: Facial expression recognition by de-expression residue learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2168–2177 (2018)
32. Yu, Z., Liu, Q., Liu, G.: Deeper cascaded peak-piloted network for weak expression recognition. Vis. Comput. 34(12), 1691–1699 (2018)
33. Zhao, G., Huang, X., Taini, M., Li, S.Z., Pietikäinen, M.: Facial expression recognition from near-infrared videos. Image Vis. Comput. 29(9), 607–619 (2011)

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Allen Joseph is a Research Scholar in the Department of Computer Science and Engineering, College of Engineering campus, Anna University, Chennai, India. His research focuses on Image Processing and Speech.

P. Geetha is a Professor in the Department of Computer Science and Engineering, College of Engineering campus, Anna University, Chennai, India. Her research focuses on Image Processing and Pattern Recognition.
