
2266 IEEE SENSORS JOURNAL, VOL. 19, NO. 6, MARCH 15, 2019

Cross-Subject Emotion Recognition Using Flexible Analytic Wavelet Transform From EEG Signals

Vipin Gupta, Mayur Dahyabhai Chopda, and Ram Bilas Pachori, Senior Member, IEEE

Abstract— Human emotion is a physical or psychological process which is triggered either consciously or unconsciously due to the perception of any object or situation. The electroencephalogram (EEG) signals can be used to record ongoing neuronal activities in the brain to get information about the human emotional state. These complicated neuronal activities in the brain cause non-stationary behavior of the EEG signals. Thus, emotion recognition using EEG signals is a challenging study, and it requires advanced signal processing techniques to extract the hidden information of emotions from EEG signals. Due to the poor generalizability of features from EEG signals across subjects, recognizing cross-subject emotion has been difficult. Thus, our aim is to comprehensively investigate the channel specific nature of EEG signals and to provide an effective method based on the flexible analytic wavelet transform (FAWT) for recognition of emotion. FAWT decomposes the EEG signal into different sub-band signals. Furthermore, we applied the information potential to extract the features from the decomposed sub-band signals of the EEG signal. The extracted feature values were smoothed and fed to the random forest and support vector machine classifiers that classified the emotions. The proposed method is applied to two different publicly available databases, which are the SJTU emotion EEG dataset and the database for emotion analysis using physiological signals. The proposed method has shown better performance for human emotion classification as compared to the existing method. Moreover, it yields channel specific subject classification of emotion EEG signals when exposed to the same stimuli.

Index Terms— Human emotions, EEG, FAWT, random forest, SVM.

Manuscript received July 24, 2018; revised October 23, 2018 and November 17, 2018; accepted November 21, 2018. Date of publication November 27, 2018; date of current version February 15, 2019. The associate editor coordinating the review of this paper and approving it for publication was Prof. Chang-Hee Won. (Corresponding author: Vipin Gupta.) The authors are with the Discipline of Electrical Engineering, Indian Institute of Technology Indore, Indore 453552, India (e-mail: vipingupta@iiti.ac.in; ee150002013@iiti.ac.in; pachori@iiti.ac.in). Digital Object Identifier 10.1109/JSEN.2018.2883497

I. INTRODUCTION

Emotions play a vital role in human life and are one of the crucial features of humans [1]. Everyday activities like communication, decision-making, etc., get highly affected by emotional behavior. For decades, brain-computer interfaces (BCI) [2] have been one of the emerging and interesting bio-medical engineering research fields that allow human beings to control external devices using their brain waves. To achieve precise and natural interaction, computers and robots must possess the ability of emotion processing [3], [4]. The study of emotions has drawn the attention of researchers from various disciplines like psychology, bio-medical science, neuroscience, etc. In the field of computer science, the study of emotion is inclined towards the development of applications such as task workload assessment and vigilance of operators [5], [6]. An automated emotion recognition system makes the computer interface more user-friendly, effective, and enjoyable. The approaches to recognize human emotions vary from facial images, gestures, and speech signals to other physiological signals [7]. An inherent ambiguity exists in the recognition of emotions using facial images, gestures, or speech signals, because the expressed emotion might be pretended rather than real. To resolve this ambiguity, emotion recognition using electroencephalogram (EEG) signals has gained significant attention of researchers due to its accurate assessment of the emotions and objective evaluation in comparison with facial expression and gesture based techniques [8]. It has been proven that EEG signals can be helpful in effectively identifying the different emotions [9]–[12]. For effective medical care, the consideration of emotional state is important [13], [14]. The process of recognition of emotion requires suitable signal processing techniques, feature extraction, and machine learning based classifiers for automated classification.

Several techniques for automated classification of human emotions using EEG signals are proposed in [15]–[22]. The technique based on discrete wavelet transform (DWT) is used in [15] to extract features from the EEG signals for emotion recognition. The features like energy and entropy are computed from the wavelet coefficients of the emotion EEG signals, and the fuzzy c-means and fuzzy k-means clustering algorithms are used for classification purposes. Soleymani et al. [16] presented a method for user-independent emotion recognition based on EEG signals, gaze distance, and pupillary response. The reported classification accuracy is 68.5% for three valence labels and 76.4% for three arousal labels using a modality fusion strategy and support vector machine (SVM). The EEG signals pertaining to emotions of happiness and sadness are classified using common spatial patterns (CSP) and a linear SVM classifier. They also presented a strategy to choose an optimal frequency band, and the gamma band was found suitable for EEG-based emotion classification [18]. Three time-frequency distributions, namely Hilbert-Huang, Zhao-Atlas-Marks, and spectrogram, are used to compute features based on a time-windowing approach for discrimination between music appraisal responses [19]. The fast Fourier transform (FFT) based features are extracted and classification is performed by employing a classifier depending on
1558-1748 © 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

Authorized licensed use limited to: Universidad de Ingeniería y Tecnología (UTEC). Downloaded on October 20,2020 at 03:58:47 UTC from IEEE Xplore. Restrictions apply.
GUPTA et al.: CROSS-SUBJECT EMOTION RECOGNITION USING FAWT FROM EEG SIGNALS 2267

Fig. 1. Block diagram representation of the proposed methodology for the automated classification of the emotion EEG signals.

Bayes theorem and the perceptron convergence algorithm [20]. Differential entropy based features are computed from the EEG signals for emotion recognition; these features are found appropriate for recognition of the emotion categories, namely positive, neutral, and negative [21]. In another work, the differential entropy computed in different frequency bands is related to EEG rhythms, and the beta and gamma rhythms are found most effective for emotion recognition [22]. Recently, 18 different kinds of linear and non-linear features, out of which nine are time-frequency domain features and the others are dynamical system features from EEG measurements, were investigated in [23]. That work studied the different aspects which are important for cross-subject emotion recognition, e.g., different EEG channels, and achieved average classification accuracies of 59.06% and 83.33% on the database for emotion analysis using physiological signals (DEAP) and the SJTU emotion EEG dataset (SEED), respectively.

In this paper, we have focused on channel specific features and developed an identification system based on signal processing techniques for automated emotion recognition using EEG signals, and we have provided channel specific analysis across subjects. For this purpose, the emotion EEG signals are first decomposed using the flexible analytic wavelet transform (FAWT) method. The FAWT based decomposition of an EEG signal results in sub-band signals. The FAWT method has many advantages over the conventional DWT method, such as flexibility in the selection of parameters (fractional sampling, quality factor, dilation, and redundancy). Moreover, FAWT provides a platform for analysis of the transient and oscillatory nature of the signal. It should be noted that, along with these specific features, FAWT can also be implemented using an iterative filter bank approach like DWT. The information potential (IP) estimator is used to extract the feature values from the different sub-band signals. These feature values are smoothed and fed to the random forest and SVM classifiers separately, which classify the emotion EEG signals. The block diagram of the proposed automated emotion classification system is shown in Fig. 1.

The rest of the paper is organized as follows: In Section II, the details about the datasets are provided. The proposed methodology is explained in Section III, followed by results and discussions in Section IV. Section V provides the conclusion of the paper.

II. DATASETS

The datasets used in this work are SEED and DEAP, which are publicly available online for research purposes [22], [24], [25]. The SEED dataset consists of EEG signals recorded from 15 subjects (7 males and 8 females). Each participant contributed to the experiment thrice, at an interval of one week or longer. The emotion EEG signals were collected by showing fifteen Chinese film clips for positive, neutral, and negative emotions. These films contain both scene and audio to elicit strong emotion in the subject. Every emotion contains five film clips, each 4 minutes long, in one experiment. The subjects' emotional reactions were recorded through a questionnaire after watching each emotion film clip. A 62-channel electrode cap was used for recording the EEG signals according to the international 10-20 system at a 1000 Hz sampling rate. The recorded EEG signals were preprocessed by down-sampling to 200 Hz, followed by a band pass filter between 0.5 Hz and 70 Hz to remove noise and artifacts. Detailed information related to the dataset can be found in [24].

The SEED database contains recordings from 62 channels. Zheng and Lu [22] investigated the appropriate number of channels for emotion EEG signal classification, and it was observed that 12 channels were most effective for classification of emotions. These channels are as follows: C5, C6, CP5, CP6, FT7, FT8, P7, P8, T7, T8, TP7, and TP8. We have considered each channel separately and extracted one-second epochs from the last 30 seconds of the recorded EEG signals. Koelstra et al. [25] have suggested the use of the last 30 seconds of each trial (video) for emotion identification from EEG signals. On the other hand, human emotions normally fall in the duration of 0.5-4 seconds [26]. It should be noted that suitable selection of the duration is an important factor in the identification of human emotions from EEG signals; selecting too long or too short a duration may lead to misclassification of human emotions. For these reasons, the optimal duration of one second has been suggested in [27] and [28] for identification of human emotions.

In this work, we have also studied the DEAP emotion database, which consists of recordings of 32 subjects; the recording from each subject contains 32 EEG and 8 peripheral signals corresponding to 40 channels. These EEG signals were recorded by showing 40 pre-selected music videos, each with a duration of 60 seconds, together with a baseline recording of 3 seconds duration. The sampling frequency of these recorded EEG signals is 128 Hz. Detailed information about the database can be found in [25]. The channels T7, T8, CP5, CP6, P7, and P8 are considered in this work because these channels are more suitable for recognition of emotions, as suggested in [22].

III. METHODOLOGY

A. Flexible Analytic Wavelet Transform

FAWT [29], [30] is an advanced form of DWT that serves as an effective method for analyzing bio-medical


Fig. 2. Plots of (a) an epoch from positive emotion EEG signal and (b)-(n) its corresponding reconstructed sub-bands (SS1–SS13) obtained using FAWT decomposition.

signals [31], [32]. The time-frequency covering is one of the salient features of FAWT. The FAWT contains Hilbert transform pairs of atoms, which makes it suitable for the analysis of signals that contain oscillations. The Q-factor (QF), the number of decomposition levels (J), and the redundancy (r) are the input parameters for FAWT. The QF for an oscillatory pulse can be expressed as [29]:

QF = ω0 / Δω   (1)

where ω0 is the central frequency and Δω is the bandwidth of the signal.

Thus, QF is the parameter controlling the number of oscillations in the mother wavelet, and the redundancy controls the time localization of the wavelet. FAWT provides the facility to specify the dilation factor, QF, and redundancy through the adjusting parameters e, f, g, h, and β. The parameters e and f are used for up and down sampling of the high pass channel, while g and h are used for up and down sampling of the low pass channel, respectively. The β is a positive constant which gives a measure for QF, and it can be expressed as [29]:

β = 2 / (QF + 1)   (2)

As per the definition of FAWT, the parameters e, f, g, h, and β control the number of oscillations in the wavelet. For a specific QF, the generated wavelets for different decomposition levels will have the same number of oscillations. The shape of these wavelets will change with the variation of the FAWT parameters [29]. Fractional sampling can also be done using these FAWT parameters in the low and high pass channels. Implementation of a J-level decomposition using FAWT is done by an iterative filter bank comprising a high pass and a low pass channel at every iteration level. The high pass and low pass channels of the filter bank separate the positive and negative frequencies, respectively. The frequency response corresponding to the high pass filter is expressed as [29]:

H(ω) = (ef)^(1/2)                                for |ω| < ωp
       (ef)^(1/2) θ((ω − ωp)/(ωs − ωp))          for ωp ≤ ω ≤ ωs
       (ef)^(1/2) θ((π − (ω − ωp))/(ωs − ωp))    for −ωs ≤ ω ≤ −ωp
       0                                         for |ω| ≥ ωs            (3)

and the low pass filter frequency response is expressed as [29]:

G(ω) = (gh)^(1/2) θ((π − (ω − ω0))/(ω1 − ω0))    for ω0 ≤ ω < ω1
       (gh)^(1/2)                                for ω1 < ω < ω2
       (gh)^(1/2) θ((ω − ω2)/(ω3 − ω2))          for ω2 ≤ ω ≤ ω3
       0                                         for ω ∈ [0, ω0) ∪ (ω3, 2π)   (4)

where

ωp = ((1 − β)π + ε)/e;   ωs = π/f;   ω0 = ((1 − β)π + ε)/g;
ω1 = eπ/(fg);   ω2 = (π − ε)/g;   ω3 = (π + ε)/g;   ε ≤ ((e − f + βf)/(e + f))π.

The θ(ω) can be given by [29]:

θ(ω) = [1 + cos(ω)][2 − cos(ω)]^(1/2) / 2,   for ω ∈ [0, π]   (5)

For perfect reconstruction, the following condition must be satisfied [29]:

|θ(π − ω)|^2 + |θ(ω)|^2 = 1   (6)

The constraint for selecting the QF parameter is expressed as:

1 − e/f ≤ β ≤ g/h   (7)

The redundancy parameter r can be expressed as:

r ≈ (g/h) · 1/(1 − e/f)   (8)
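The parameter relations in Eqs. (2), (7), and (8) are easy to sanity-check numerically. The sketch below is not the authors' code; it uses the dilation e/f = 3/4 and QF = 5 adopted later in the paper, with g = 3 and h = 4 as illustrative values that yield r = 3 via Eq. (8):

```python
def fawt_beta(qf):
    """Eq. (2): beta = 2 / (QF + 1)."""
    return 2.0 / (qf + 1)

def fawt_redundancy(e, f, g, h):
    """Eq. (8): r ~= (g/h) * 1 / (1 - e/f)."""
    return (g / h) / (1.0 - e / f)

def beta_constraint_ok(e, f, g, h, qf):
    """Eq. (7): 1 - e/f <= beta <= g/h."""
    return 1 - e / f <= fawt_beta(qf) <= g / h

e, f, g, h, qf = 3, 4, 3, 4, 5            # e/f = 3/4 (dilation); g, h illustrative
print(fawt_beta(qf))                      # beta = 2/6, i.e. 1/3
print(fawt_redundancy(e, f, g, h))        # (3/4) / (1/4) = 3.0
print(beta_constraint_ok(e, f, g, h, qf)) # True: 0.25 <= 1/3 <= 0.75
```

With these values the β constraint of Eq. (7) holds with room to spare, which is consistent with the parameter ranges the paper explores.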


Fig. 3. Plots of (a) an epoch from neutral emotion EEG signal and (b)-(n) its corresponding reconstructed sub-bands (SS1–SS13) obtained using FAWT decomposition.

Fig. 4. Plots of (a) an epoch from negative emotion EEG signal and (b)-(n) its corresponding reconstructed sub-bands (SS1–SS13) obtained using FAWT decomposition.

Thus, the selection of the parameter r is subject to the following constraint:

r > β / (1 − e/f)   (9)

Figs. 2, 3, and 4 show the plots of the epochs and their corresponding reconstructed sub-band signals (SS1–SS13) obtained from FAWT decomposition for positive, neutral, and negative emotion EEG signals, respectively. These epochs correspond to the last one second extracted from the FT7 channel (first session of the first subject) of the SEED database. It should be noted that SS1 to SS13 denote the first to thirteenth reconstructed sub-band signals (SS) in decreasing order of frequency. These components are well behaved and suitable for feature extraction for the classification of human emotion EEG signals. These obtained SS show the outcome of the FAWT based analysis.

In this work, we have used a fixed value of the dilation factor (e/f = 3/4), as suggested in [33] for EEG signal classification. On the basis of this fixed dilation factor, we have chosen the values of the parameters QF and r for the FAWT decomposition subject to the constraints expressed in equations (8) and (9), respectively. The selected range of values for the QF parameter is (3, 4, 5, 6) and for the r parameter is (3, 4, 5, 6, 7, 8). The value of J is selected from the range (5, 6, 7, 8, 9, 10, 11, 12) because J=12 is the maximum possible decomposition level using FAWT with these parameter values for EEG signals of length 200 samples [29].

The FAWT has been successfully applied for identification of atrial fibrillation electrocardiogram (ECG) signals [34], myocardial infarction ECG signals [31], coronary artery disease [35], [36], and focal EEG signals [33]. For the FAWT decomposition method, a MATLAB toolbox is available at http://web.itu.edu.tr/ibayram/AnDWT/.

B. Feature Extraction Using Information Potential

The IP is a kernel based non-parametric estimator used to evaluate Renyi's quadratic entropy. For a random variable X, the IP of X is expressed as [35], [37]:

Î(X) = (1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} kσ(xj, xi)   (10)

where {xi}_{i=1}^{N} are the data samples of the random variable X, N is the total number of observations, and kσ represents the Gaussian kernel with bandwidth parameter σ.


Fig. 5. Plots of (a) raw feature values and (b) smoothed feature values using a moving average filter.

Fig. 6. Plot of average classification accuracies on SEED database for different channels with respect to J values at QF=3 and r=3 using random forest classifier.

In this work, the IP [37] feature is computed from the SS obtained from the FAWT decomposition of each epoch. The information theoretic learning (ITL) toolbox (http://www.sohanseth.com/Home/codes) is used for a faster implementation of IP that makes use of the incomplete Cholesky decomposition, with the parameter σ fixed to 1 [35].

C. Feature Smoothing

As human emotion changes gradually, the rapid fluctuations in the raw feature values need to be removed. The feature smoothing process is useful for human emotion classification from EEG signals [21]; for this reason, we have used a feature smoothing process in our proposed method. In this process, the raw feature values obtained from the IP of the epochs have been smoothed using a moving average filter with a window length of 5 samples [21]. Fig. 5 shows the effect of the moving average filter on the raw feature values obtained from positive emotion using the FT7 channel of the first subject during the first session.
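The smoothing step above can be sketched as follows. This is a minimal illustration, not the authors' code: the paper specifies only the 5-sample window, not the filter alignment or edge handling, so both are assumptions here:

```python
import numpy as np

def smooth_features(raw, window=5):
    """Moving-average smoothing of a 1-D raw feature sequence.

    The window length (5 samples) follows the text; the centered
    alignment and zero-padded edges are assumptions.
    """
    raw = np.asarray(raw, dtype=float)
    kernel = np.ones(window) / window
    # mode="same" keeps the smoothed sequence aligned with the raw one;
    # values near the edges average over zero padding.
    return np.convolve(raw, kernel, mode="same")

raw_ip = np.array([0.2, 0.9, 0.3, 0.8, 0.4, 0.7])  # hypothetical raw IP values
print(smooth_features(raw_ip))
```

The interior samples become plain 5-point means (e.g., the third output value is the mean of the first five inputs), which suppresses exactly the rapid epoch-to-epoch fluctuations described above.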


Fig. 7. Plot of average classification accuracies on SEED database for different channels with respect to QF values at J=12 and r=3 using random forest classifier.

D. Classification

In this work, random forest [38] and SVM [39] classifiers have been used for the classification purpose. The working principle of the random forest classifier is based on aggregating the decisions taken from different trees: each decision tree individually, with an assigned weight, makes a decision about the class, and these contribute to the final decision. For building a tree, the random tree method is employed [40]. The class decision from every tree determines the overall classification output. The random forest classifier has been successfully utilized for the classification of ECG signals [41] and EEG signals [42], [43].

In the SVM classifier, the data is mapped to a higher dimensional space, and an optimal hyperplane for separation of the data is constructed in this space. This classifier basically solves a quadratic programming problem [39]. The SVM classifier is used in [44] and [45] for classification of epileptic seizure EEG signals. In this work, the polynomial and radial basis function (RBF) kernels have been used with the SVM classifier for evaluating the classification performance [46].

Ten-fold cross validation is used for the effective calculation of the classification accuracy [47]. The Waikato environment for knowledge analysis (WEKA) [48] is used for the implementation of the random forest and SVM classifiers. These classifiers have been used with the default parameters present in WEKA.

IV. RESULTS AND DISCUSSIONS

In this work, epochs of 1 second duration are extracted from the last 30 seconds [25] of all the emotion EEG signals, namely positive, negative, and neutral, from the SEED database. We have applied FAWT decomposition on each extracted 1 second epoch of emotion EEG signal for the 12 selected channels separately. The FAWT parameters J, QF, and r are selected from the ranges 5 to 12, 3 to 6, and 3 to 8, respectively. IP is computed from each of the SS; the dimension of the feature set for an epoch thus depends on the J level of FAWT decomposition, and thirty such epochs are considered from the last 30 seconds of the emotion EEG signal. The size of the feature set is the product of the number of SS used to extract features and the number of epochs for a channel. Therefore, the size of the feature set is 30×(J+1) for an emotion EEG signal of a channel. The extracted feature values have been classified by the random forest and SVM classifiers.

Fig. 6 shows the variation of average classification accuracies for different channels with respect to the J parameter on the SEED database. It can be seen from Fig. 6 that the average classification accuracies increase with the increase in the J parameter for all the channels. Therefore, we have selected J=12, which is the maximum available decomposition level for the selected FAWT parameters with a signal length of 200 samples, in order to select the QF and r parameters. The variation of average classification accuracies for different channels with respect to the QF parameter on the SEED database can be seen in Fig. 7. With the help of Fig. 7, the selected value of the QF parameter is 5, because most of the channels have higher average classification accuracies at this QF value. However, the variation of the third parameter r shows no impact on the average classification accuracies at J=12 and QF=5; this variation can be seen in Fig. 8. Therefore, we have selected r=3 for the FAWT decomposition in our methodology. Table I shows the achieved average classification accuracies for the selected FAWT parameters across channels obtained with the random forest and SVM classifiers on the SEED database. It can be observed from Table I that the highest average classification accuracies across channels are obtained with the random forest classifier in comparison to the SVM classifier for the SEED database. It can also be observed from Table I that the channels FT7, FT8, T7, T8, C5, and TP7 have shown higher average classification accuracies on the SEED database using the random forest classifier in comparison to the other channels. Thus, we can clearly say that these channels are more efficient for cross-subject recognition of emotion with FAWT decomposition using EEG signals. The proposed methodology with the selected FAWT parameters, along with the random forest classifier, has also been tested on the DEAP database to study the effectiveness of the proposed method. The selection of the random forest classifier for the DEAP database is based on the better classification performance obtained on the SEED database as compared to the SVM classifier. For the DEAP database, J=11 is the maximum possible decomposition level due to the signal length of 128 samples. Table I also shows the results of the average classification


Fig. 8. Plot of average classification accuracies on SEED database for different channels with respect to r values at J=12 and QF=5 using random forest classifier.
TABLE I
AVERAGE CLASSIFICATION ACCURACIES (%) ACROSS CHANNELS FOR SEED (J = 12, QF = 5, AND r = 3) AND DEAP (J = 11, QF = 5, AND r = 3) DATABASES

accuracies across channels obtained with the DEAP database using the random forest classifier. The higher average classification accuracies across channels for the common channels T7 and T8 on the DEAP database can be seen from Table I. The average classification accuracies obtained by the proposed methodology are 90.48% for positive/neutral/negative, 79.95% for high arousal (HA)/low arousal (LA), 79.99% for high valence (HV)/low valence (LV), and 71.43% for HVHA/HVLA/LVLA/LVHA emotion classification using EEG signals. Our proposed methodology outperforms the methodology proposed in [23], which gives average classification accuracies of 83.33% on the SEED database and 59.06% on the DEAP database.

V. CONCLUSION

In this work, we have presented a new method for cross-subject classification of emotion EEG signals. The proposed method explores FAWT for identification of human emotions. The effect of variation in the FAWT parameters has been studied in this work. The IP feature values of the SS obtained using FAWT decomposition have been found useful for classification of the emotion EEG signals. On increasing the decomposition level (J) and the QF parameter, the average classification accuracies increased. The classification accuracies achieved with the random forest classifier are higher than those with the SVM classifier. It has been shown that our method achieves higher classification accuracies in comparison to the existing method for cross-subject channel specific classification of emotion EEG signals. Cross-subject classification using the channel specific nature of EEG signals can provide insight into the emotional sensitivity of different persons across brain regions when similar stimuli are presented.

REFERENCES

[1] S. M. Alarcao and M. J. Fonseca, "Emotions recognition using EEG signals: A survey," IEEE Trans. Affect. Comput., 2017, doi: 10.1109/TAFFC.2017.2714671.
[2] F. Nijboer, F. O. Morin, S. P. Carmien, R. A. Koene, E. Leon, and U. Hoffmann, "Affective brain-computer interfaces: Psychophysiological markers of emotion in healthy persons and in persons with amyotrophic lateral sclerosis," in Proc. 3rd Int. Conf. Affect. Comput. Intell. Interact. Workshops, 2009, pp. 1–11.
[3] L. Pessoa and R. Adolphs, "Emotion processing and the amygdala: From a 'low road' to 'many roads' of evaluating biological significance," Nature Rev. Neurosci., vol. 11, no. 11, pp. 773–783, 2010.
[4] W.-L. Zheng, W. Liu, Y. Lu, B.-L. Lu, and A. Cichocki, "EmotionMeter: A multimodal framework for recognizing human emotions," IEEE Trans. Cybern., 2018, doi: 10.1109/TCYB.2018.2797176.
[5] L.-C. Shi and B.-L. Lu, "EEG-based vigilance estimation using extreme learning machines," Neurocomputing, vol. 102, pp. 135–143, Feb. 2013.
[6] W.-L. Zheng and B.-L. Lu, "A multimodal approach to estimating vigilance using EEG and forehead EOG," J. Neural Eng., vol. 14, no. 2, p. 026017, 2017.


[7] S. Jerritta, M. Murugappan, R. Nagarajan, and K. Wan, "Physiological signals based human emotion recognition: A review," in Proc. IEEE 7th Int. Colloq. Signal Process. Appl., May 2011, pp. 410–415.
[8] G. L. Ahern and G. E. Schwartz, "Differential lateralization for positive and negative emotion in the human brain: EEG spectral analysis," Neuropsychologia, vol. 23, no. 6, pp. 745–755, 1985.
[9] D. Sammler, M. Grigutsch, T. Fritz, and S. Koelsch, "Music and emotion: Electrophysiological correlates of the processing of pleasant and unpleasant music," Psychophysiology, vol. 44, no. 2, pp. 293–304, 2007.
[10] G. G. Knyazev, J. Y. Slobodskoj-Plusnin, and A. V. Bocharov, "Gender differences in implicit and explicit processing of emotional facial expressions as revealed by event-related theta synchronization," Emotion, vol. 10, no. 5, p. 678, 2010.
[11] D. Mathersul, L. M. Williams, P. J. Hopkinson, and A. H. Kemp, "Investigating models of affect: Relationships among EEG alpha asymmetry, depression, and anxiety," Emotion, vol. 8, no. 4, pp. 560–572, 2008.
[12] V. Bajaj and R. B. Pachori, "Human emotion classification from EEG signals using multiwavelet transform," in Proc. Int. Conf. Med. Biometrics, 2014, pp. 125–130.
[13] C. Doukas and I. Maglogiannis, "Intelligent pervasive healthcare systems," in Advanced Computational Intelligence Paradigms in Healthcare-3. Berlin, Germany: Springer, 2008, pp. 95–115.
[14] P. C. Petrantonakis and L. J. Hadjileontiadis, "A novel emotion elicitation index using frontal brain asymmetry for enhanced EEG-based emotion recognition," IEEE Trans. Inf. Technol. Biomed., vol. 15, no. 5, pp. 737–746, Sep. 2011.
[15] M. Murugappan, M. Rizon, R. Nagarajan, S. Yaacob, I. Zunaidi, and D. Hazry, "EEG feature extraction for classifying emotions using FCM and FKM," Int. J. Comput. Commun., vol. 1, no. 2, pp. 21–25, 2007.
[16] M. Soleymani, M. Pantic, and T. Pun, "Multimodal emotion recognition in response to videos," in Proc. Int. Conf. Affect. Comput. Intell. Interact. (ACII), 2015, pp. 491–497.
[17] V. Bajaj and R. B. Pachori, "Detection of human emotions using features based on the multiwavelet transform of EEG signals," in Brain-Computer Interfaces. Cham, Switzerland: Springer, 2015, pp. 215–240.
[18] M. Li and B.-L. Lu, "Emotion classification based on gamma-band EEG," in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., Sep. 2009, pp. 1223–1226.
[19] S. K. Hadjidimitriou and L. J. Hadjileontiadis, "EEG-based classification of music appraisal responses using time-frequency analysis and familiarity ratings," IEEE Trans. Affect. Comput., vol. 4, no. 2, pp. 161–172, Apr./Jun. 2013.
[20] H. J. Yoon and S. Y. Chung, "EEG-based emotion estimation using Bayesian weighted-log-posterior function and perceptron convergence algorithm," Comput. Biol. Med., vol. 43, no. 12, pp. 2230–2237, 2013.
[21] R.-N. Duan, J.-Y. Zhu, and B.-L. Lu, "Differential entropy feature for EEG-based emotion classification," in Proc. 6th Int. IEEE/EMBS Conf. Neural Eng. (NER), Nov. 2013, pp. 81–84.
[31] M. Kumar, R. B. Pachori, and U. R. Acharya, "Automated diagnosis of myocardial infarction ECG signals using sample entropy in flexible analytic wavelet transform framework," Entropy, vol. 19, no. 9, p. 488, 2017.
[32] M. Sharma, R. B. Pachori, and U. R. Acharya, "A new approach to characterize epileptic seizures using analytic time-frequency flexible wavelet transform and fractal dimension," Pattern Recognit. Lett., vol. 94, pp. 172–179, Jul. 2017.
[33] V. Gupta, T. Priya, A. K. Yadav, R. B. Pachori, and U. R. Acharya, "Automated detection of focal EEG signals using features extracted from flexible analytic wavelet transform," Pattern Recognit. Lett., vol. 94, pp. 180–188, Jul. 2017.
[34] M. Kumar, R. B. Pachori, and U. R. Acharya, "Automated diagnosis of atrial fibrillation ECG signals using entropy features extracted from flexible analytic wavelet transform," Biocybern. Biomed. Eng., vol. 38, no. 3, pp. 564–573, 2018.
[35] M. Kumar, R. B. Pachori, and U. R. Acharya, "Characterization of coronary artery disease using flexible analytic wavelet transform applied on ECG signals," Biomed. Signal Process. Control, vol. 31, pp. 301–308, Jan. 2017.
[36] M. Kumar, R. B. Pachori, and U. R. Acharya, "An efficient automated technique for CAD diagnosis using flexible analytic wavelet transform and entropy features extracted from HRV signals," Expert Syst. Appl., vol. 63, pp. 165–172, Nov. 2016.
[37] D. Xu and D. Erdogmus, "Renyi's entropy, divergence and their nonparametric estimators," in Information Theoretic Learning. New York, NY, USA: Springer, 2010, pp. 47–102.
[38] M. Brennan, M. Palaniswami, and P. Kamen, "Do existing measures of Poincare plot geometry reflect nonlinear features of heart rate variability?" IEEE Trans. Biomed. Eng., vol. 48, no. 11, pp. 1342–1347, Nov. 2001.
[39] C. Cortes and V. Vapnik, "Support-vector networks," Mach. Learn., vol. 20, no. 3, pp. 273–297, 1995.
[40] L. Fraiwan, K. Lweesy, N. Khasawneh, H. Wenz, and H. Dickhaus, "Automated sleep stage identification system based on time-frequency analysis of a single EEG channel and random forest classifier," Comput. Methods Programs Biomed., vol. 108, no. 1, pp. 10–19, 2012.
[41] A. Nishad, R. B. Pachori, and U. R. Acharya, "Application of TQWT based filter-bank for sleep apnea screening using ECG signals," J. Ambient Intell. Humanized Comput., May 2018, pp. 1–12.
[42] R. Sharma, R. B. Pachori, and A. Upadhyay, "Automatic sleep stages classification based on iterative filtering of electroencephalogram signals," Neural Comput. Appl., vol. 28, no. 10, pp. 2959–2978, Oct. 2017.
[43] A. Bhattacharyya and R. B. Pachori, "A multivariate approach for patient-specific EEG seizure detection using empirical wavelet transform," IEEE Trans. Biomed. Eng., vol. 64, no. 9, pp. 2003–2015, Sep. 2017.
[44] V. Joshi, R. B. Pachori, and A. Vijesh, "Classification of ictal and seizure-free EEG signals using fractional linear prediction," Biomed. Signal Process. Control, vol. 9, pp. 1–5, Jan. 2014.
[22] W.-L. Zheng and B.-L. Lu, “Investigating critical frequency bands [45] A. K. Tiwari, R. B. Pachori, V. Kanhangad, and B. K. Panigrahi,
and channels for EEG-based emotion recognition with deep neural “Automated diagnosis of epilepsy using key-point-based local binary
networks,” IEEE Trans. Auton. Mental Develop., vol. 7, no. 3, pattern of EEG signals,” IEEE J. Biomed. Health Inform., vol. 21, no. 4,
pp. 162–175, Sep. 2015. pp. 888–896, Jul. 2017.
[23] X. Li, D. Song, P. Zhang, Y. Zhang, Y. Hou, and B. Hu, “Exploring [46] A. H. Khandoker, D. T. H. Lai, R. K. Begg, and M. Palaniswami,
EEG features in cross-subject emotion recognition,” Frontiers Neurosci., “Wavelet-based feature extraction for support vector machines for
vol. 12, p. 162, May 2018. screening balance impairments in the elderly,” IEEE Trans. Neural Syst.
[24] W.-L. Zheng, J.-Y. Zhu, and B.-L. Lu, “Identifying stable patterns over Rehabil. Eng., vol. 15, no. 4, pp. 587–597, Dec. 2007.
time for emotion recognition from EEG,” IEEE Trans. Affect. Comput., [47] R. Kohavi, “A study of cross-validation and bootstrap for accuracy
2017, doi: 10.1109/TAFFC.2017.2712143. estimation and model selection,” in Proc. Int. Joint Conf. AI, vol. 14.
[25] S. Koelstra et al., “DEAP: A database for emotion analysis; using no. 2, pp. 1137–1145, Aug. 1995.
physiological signals,” IEEE Trans. Affect. Comput., vol. 3, no. 1, [48] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann,
pp. 18–31, Jan./May 2012. and I. H. Witten, “The WEKA data mining software: An update,”
[26] R. W. Levenson, “Emotion and the autonomic nervous system: ACM SIGKDD Explor. Newslett., vol. 11, no. 1, pp. 10–18, 2009.
A prospectus for research on autonomic specificity,” in Social Psy-
chophysiology: Theory and Clinical Applications, H. L. Wagner, Ed.
Oxford, U.K.: Wiley, 1988, pp. 17–42.
[27] N. Jatupaiboon, S. Pan-ngum, and P. Israsena, “Real-time EEG-based Vipin Gupta received the B.E. degree in electri-
happiness detection system,” Sci. World J., vol. 2013, p. 12, 2013, cal and electronics engineering from Rajiv Gandhi
Art. no. 618649. Technological University, Bhopal, India, in 2010,
[28] R. Sharma, “Automated identification systems based on advanced signal and the M.E. degree in electrical engineering from
processing techniques applied on EEG signals,” Ph.D. dissertation, the Shri Govindram Seksaria Institute of Technology
Discipline Elect. Eng., Indian Inst. Technol. Indore, Indore, India, 2017. and Science, Indore, India, in 2012. He is currently
[29] İ. Bayram, “An analytic wavelet transform with a flexible time-frequency pursuing the Ph.D. degree with the Discipline of
covering,” IEEE Trans. Signal Process., vol. 61, no. 5, pp. 1131–1142, Electrical Engineering, Indian Institute of Technol-
Mar. 2013. ogy Indore, Indore. He has published six papers in
[30] C. Zhang, B. Li, B. Chen, H. Cao, Y. Zi, and Z. He, “Weak fault international journals and conferences. His research
signature extraction of rotating machinery using flexible analytic wavelet interests include biomedical signal processing, non-
transform,” Mech. Syst. Signal Process., vol. 64, pp. 162–187, Dec. 2015. stationary signal processing, and machine learning.

Mayur Dahyabhai Chopda is currently pursuing the B.Tech. degree in electrical engineering with the Indian Institute of Technology Indore, Indore, India. He was involved in many projects related to speech and biomedical signal processing. He is currently involved in his B.Tech. project in the area of human emotions detection using physiological signals. His research interests include digital signal processing, machine learning, and communication.

Ram Bilas Pachori (SM’16) received the B.E. (Hons.) degree in electronics and communication engineering from Rajiv Gandhi Technological University, Bhopal, India, in 2001, and the M.Tech. and Ph.D. degrees in electrical engineering from the Indian Institute of Technology (IIT) Kanpur, Kanpur, India, in 2003 and 2008, respectively. During 2007–2008, he was a Post-Doctoral Fellow with the Charles Delaunay Institute, University of Technology of Troyes, Troyes, France. During 2008–2009, he was an Assistant Professor with the Communication Research Center, International Institute of Information Technology, Hyderabad, India. During 2009–2013, he was an Assistant Professor with the Discipline of Electrical Engineering, IIT Indore, Indore, India, where he was an Associate Professor during 2013–2017. He was a Visiting Scholar at the Intelligent Systems Research Center, Ulster University, Northern Ireland, U.K., during 2014. Since 2017, he has been a Professor with IIT Indore. He has authored or co-authored more than 150 publications, which include journal papers, conference papers, books, and book chapters. His publications have around 3300 citations, an h-index of 30, and an i10-index of 69 (Google Scholar, 2018). His research interests are in the areas of biomedical signal processing, non-stationary signal processing, speech signal processing, signal processing for communications, computer-aided medical diagnosis, and signal processing for mechanical systems. He is a Fellow of IETE. He is currently an Associate Editor of the Biomedical Signal Processing and Control journal and an Editor of the IETE Technical Review journal. He has served as a reviewer for more than 75 journals and on scientific committees of various national and international conferences.