
Computers in Biology and Medicine 136 (2021) 104696

Contents lists available at ScienceDirect

Computers in Biology and Medicine


journal homepage: www.elsevier.com/locate/compbiomed

Recognition of human emotions using EEG signals: A review


Md. Mustafizur Rahman a, Ajay Krishno Sarkar b, Md. Amzad Hossain a, **, Md. Selim Hossain b,
Md. Rabiul Islam c, Md. Biplob Hossain a, Julian M.W. Quinn d, Mohammad Ali Moni d, e, *
a
Department of Electrical and Electronic Engineering, Jashore University of Science & Technology, Jashore, 7408, Bangladesh
b
Department of Electrical and Electronic Engineering, Rajshahi University of Engineering & Technology, Rajshahi, 6204, Bangladesh
c
Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh
d
Healthy Ageing Theme, Garvan Institute of Medical Research, Darlinghurst, NSW, 2010, Australia
e
School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland St Lucia, QLD 4072, Australia

ARTICLE INFO

Keywords: Emotion; Electroencephalography; Classification; Recognition

ABSTRACT

Assessment of the cognitive functions and state of clinical subjects is an important aspect of e-health care delivery, and in the development of novel human-machine interfaces. A subject can display a range of emotions that significantly influence cognition, and emotion classification through the analysis of physiological signals is a key means of detecting emotion. Electroencephalography (EEG) signals have become a common focus of such development compared to other physiological signals because EEG employs simple and subject-acceptable methods for obtaining data that can be used for emotion analysis. We have therefore reviewed published studies that have used EEG signal data to identify possible interconnections between emotion and brain activity. We then describe theoretical conceptualizations of basic emotions, and interpret the prevailing techniques that have been adopted for feature extraction, selection, and classification. Finally, we have compared the outcomes of these recent studies and discussed the likely future directions and main challenges for researchers developing EEG-based emotion analysis methods.

* Corresponding author. School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland, St Lucia, QLD 4072, Australia.
** Corresponding author. Department of Electrical and Electronic Engineering, Jashore University of Science & Technology, Jashore, 7408, Bangladesh.
E-mail addresses: mustafizur.170710@gmail.com (Md.M. Rahman), sarkarajay139@gmail.com (A.K. Sarkar), mahossain.eee@gmail.com (Md.A. Hossain), engg.selim@gmail.com (Md.S. Hossain), rabiul.kuet.bd@gmail.com (Md.R. Islam), biplobh.eee10@gmail.com (Md.B. Hossain), j.quinn@garvan.org.au (J.M.W. Quinn), m.moni@uq.edu.au (M.A. Moni).

https://doi.org/10.1016/j.compbiomed.2021.104696
Received 22 January 2021; Received in revised form 23 July 2021; Accepted 23 July 2021
Available online 3 August 2021
0010-4825/© 2021 Elsevier Ltd. All rights reserved.

1. Introduction

Affective computing is an approach to assessing human feelings and emotional states that has increasingly been employed in research fields relating to healthcare, as well as in fields such as human-machine interaction (HMI) development, teaching method development, and various sectors of the entertainment and gaming industries. Affective computing builds a model of emotional experience in a human participant by identifying emotions using machine-analyzable data from the participant, usually in real time. Before attempting to identify emotions, however, it is essential to define and be able to classify key features of human behavior that reflect aspects of subjective feelings and cognition [1]. These include internal and external features accessible for analysis, like human facial expressions, gestures and other bodily behaviors, and physiological signals [2]. The latter include electroencephalography (EEG), electrocardiography (ECG), galvanic skin response (GSR), blood volume pulse (BVP), respiration rate (RT), skin temperature (ST), electromyography (EMG) and eye gaze; all of these are examples of physiological data that have been widely used in attempts to identify and detect human emotions [3]. It is notable that some of the physiological signals, like GSR, EMG, RT, and ECG, that have been used to detect emotions have generally proven accurate in detecting only certain specific emotions. Nevertheless, they hold the promise that better analysis methods will make them of wider utility.

Many researchers working on emotion recognition have focused on EEG-based methods for use in e-healthcare applications because EEG offers meaning-rich signals with a high temporal resolution that is accessible using cheap, portable EEG devices [4–6]. Although EEG-based emotion recognition systems have yielded encouraging results, EEG signal analysis faces many challenges due to the very wide variability of emotional states experienced and expressed by individuals and the difficulties in correlating EEG features with these states across many different individuals. It is thus not easy to find appropriate solutions for classifying emotions when EEG signals are affected by multiple factors such as time of day, underlying mental conditions, and language and cultural differences. As a result, how brain signals in general can be used for estimating emotions emerges as a key research question. What factors should be considered at the time of analysis? What methodologies might be promising for future research? For EEG devices, the selection of sampling frequency, number of subjects, number of EEG electrodes or channels, electrode location relative to brain regions, and the nature of emotion-eliciting stimulation are all key factors for studying emotions and their correlates. This article reviews and compares existing studies on emotion detection using EEG signals. To begin with, we introduce the theories of emotion classification, the types of EEG electrodes, brain waves or neural oscillations (i.e., EEG signals), and their relation to emotion analysis. We describe stimulation materials, emotion-related databases, devices used to record EEG signals, and EEG preprocessing techniques, as well as current analysis methodologies that include feature extraction, selection, and classification. We conclude by proposing some of the likely future directions that will hopefully meet future challenges, and the issues involved in making accurate and clinically useful emotion classification.
2. Methods

We gathered human emotion analysis-related articles using IEEE Xplore, PubMed, ScienceDirect, ResearchGate, and other database websites for this review. We then filtered the retrieved articles according to the following criteria: (i) emotion detection methodologies based on physiological signals other than EEG are not included in this article (the search covered studies published from 2009 to 2021); (ii) papers without recognized indexing were excluded; (iii) the most highly cited articles were considered. First, we grouped 127 documents that describe emotion detection by EEG. Among them, 31 articles were removed since they identified emotion using peripheral signals other than EEG. A further 28 articles were filtered out by the second criterion. In the third step, 14 documents were excluded because their citations were below a designated threshold. Finally, we selected 54 papers to review and analyze (see Fig. 1).

3. Background

EEG is a non-invasive method employed to monitor brain states and responses and has been used to monitor and diagnose seizures, dementia, brain tumors, encephalitis, obstructive sleep apnea, depth of anesthesia, coma, and stroke. Besides this use, it is well recognized that analyses of EEG signals have the potential to identify human emotions, which would have great potential for a wide range of practical uses [7].

3.1. The human brain

The brain has three main anatomical divisions, namely the hindbrain, midbrain, and forebrain, and the cerebrum is divided into two hemispheres. The outer layer of the cerebral cortex has four major lobes: frontal, occipital, parietal, and temporal (see Fig. 2) [8]. Central nervous system control of body functions involves integrating and processing sense information, which is passed to higher brain functions. These functions are linked to anatomically dispersed locations to a significant degree, although there is some important localization of functions that are of interest here. For example, the frontal lobe functions generally relate to personality, emotions, and higher thinking skills. The temporal lobe has many functions but is mainly involved in the processing of hearing and other senses. In contrast, the parietal lobe appears to be particularly involved in subjective feelings, concentration, and language, while the occipital lobe is notably important in vision [9].

3.2. Brain waves and EEG electrodes

Hans Berger developed the first EEG in 1929 to examine brain electrical activity and discovered a number of types of oscillations or waves [10]. The amplitude of EEG signals varies across the range 2–100 μV, with frequencies ranging between 1 and 100 Hz, conventionally divided into five sub-bands named delta (below 4 Hz), theta (4–7 Hz), alpha (8–13 Hz), beta (14–30 Hz), and gamma (31–100 Hz). In adults, the delta wave often occurs in the frontal area of the cortex during deep sleep. Theta waves arise in specific locations during sleep and meditation. The alpha wave is notably present in the occipital portion of the cortex when a person is stress-free and calm. Beta waves occur in the central and frontal parts of the brain when a person is actively working or thinking. The gamma band (31–100 Hz) is generally observed in the context of anxiety, sensory processing, and emotional stress.

The electrodes employed to record brain waves over the scalp follow the internationally recognized 10/20 system [11], whereby the distances between two adjacent electrodes are either 10% or 20% of the skull's total front-back or right-left distance (see Fig. 3). In standardized EEG nomenclature, each location is designated by a letter that identifies the lobe: frontal (F), temporal (T), occipital (O), parietal (P); the letters A or M represent the ears. Electrode position is indicated by a number, with electrodes on the right side of the head designated by even numbers (thus F4, F8, P4, T6, etc.) and those on the left side by odd numbers (F7, F3, T5, P3, etc.). A second convention, called the 10/5 system, was designed for high-resolution EEG to enhance standardization in EEG studies [12,13].
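To make the band decomposition described above concrete, the short sketch below splits one EEG channel into the conventional sub-bands using zero-phase Butterworth band-pass filters. This is a minimal illustration only: the sampling rate, the synthetic signal, and the 45 Hz upper gamma edge are assumptions for the example and are not taken from any specific reviewed study.

# Minimal sketch: splitting a single EEG channel into conventional sub-bands.
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {  # conventional sub-band limits in Hz (see Section 3.2)
    "delta": (1.0, 4.0),
    "theta": (4.0, 7.0),
    "alpha": (8.0, 13.0),
    "beta":  (14.0, 30.0),
    "gamma": (31.0, 45.0),  # upper edge kept below typical low-pass settings
}

def bandpass(signal, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass filter for one EEG channel."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)

fs = 256.0                                   # assumed sampling rate
t = np.arange(0, 10, 1 / fs)                 # 10 s of synthetic "EEG"
eeg = np.sum([np.sin(2 * np.pi * f * t) for f in (2, 6, 10, 20, 35)], axis=0)
eeg += 0.5 * np.random.randn(t.size)

sub_bands = {name: bandpass(eeg, lo, hi, fs) for name, (lo, hi) in BANDS.items()}
for name, x in sub_bands.items():
    print(f"{name:>5}: mean power = {np.mean(x ** 2):.3f}")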

Fig. 1. Number of studies reviewed from 2009 to 2021.
Fig. 2. Lobes of the brain. Adapted from Ref. [8].


Fig. 3. International 10/20 system of electrode placement. Adapted from Ref. [8].

4. Human emotions and EEG

Defining human emotions and how they are generated in all their complexity is beyond the scope of this work, but for most practical purposes we can consider emotions simply as recognizable mental states arising from the interaction of physiological (i.e., internal) and external influences. Thus, while emotions are challenging to describe directly, they can be detected in expressive behavior, physiological indices, and self-assessment [14]. Many theorists and physiologists have attempted to define emotions in a practical way [15–20] (see Table 1). Tomkins and Ekman have provided a basic definition referred to as a categorized model. In 1972, Paul Ekman suggested six basic emotions: happiness, anger, fear, surprise, sadness, and disgust, each of which is associated with distinct facial expressions [21]. Ekman later expanded his list to incorporate some other fundamental emotions, including embarrassment, pride, excitement, shame, contempt, satisfaction, and amusement. Tomkins proposed that interest-excitement, enjoyment-joy, anger-rage, distress-anguish, fear-terror, surprise-startle, shame-humiliation, and contempt-disgust are the most fundamental emotions [22]. In 1980, Robert Plutchik constructed the wheel model of emotion by combining fundamental emotions [23]. Moreover, Plutchik also suggested a further eight emotions arranged as opposing pairs: happiness vs. sadness, anger vs. fear, trust vs. disgust, and surprise vs. anticipation. Cowen et al. have also defined twenty-seven distinct emotions using statistical methods; in their experiment, they utilized 2185 short videos as stimuli to evoke emotions in eight hundred participants [24]. All emotions were composed from the basic emotions proposed by Tomkins. For instance, jealousy is derived from a combined sense of anger or sorrow, whereas pleasure may be a type of happiness. Further, Russell proposed sixteen emotions, and these emotions may be distributed in a two-dimensional plane defined by valence (from a pleasant to an unpleasant state) and arousal (from a calm to an excited state) (see Fig. 4). Russell's valence-arousal model, published in 1980, is summarized in Table 2 [25].

Table 1
Summary of categorized emotions stated by theorists and physiologists.
Study | Year | Emotions
Panksepp J. [12] | 1982 | Expectancy, rage, fear, panic
Weiner B. et al. [13] | 1984 | Sadness and happiness
A.W. James [14] | 1884 | Fear, grief, love, rage
Gray J. A. [15] | 1985 | Rage and terror, anxiety, joy
Parrott, W. G. [16] | 2000 | Anger, fear, joy, love, sadness, surprise
Plutchik R. [17] | 2001 | Acceptance, anger, anticipation, disgust, joy, fear, sadness, surprise
McDougall [18] | 2003 | Anger, disgust, elation, fear
Paul Ekman [20] | 1972 | Happiness, anger, fear, surprise, sadness, and disgust

Table 2
Emotions in the valence-arousal plane.
Position | Emotional state | Emotions
1st quadrant | High arousal, positive valence | Happy, excited, pleased
2nd quadrant | High arousal, negative valence | Angry, nervous, annoyed
3rd quadrant | Low arousal, negative valence | Sad, bored, sleepy
4th quadrant | Low arousal, positive valence | Relaxed, peaceful, calm

Emotion is recognized not only through facial expression (FE) [26–28] but also through voice or body gestures [29–32], text, and conversation [33], while physiological signals are also helpful for detecting emotional states [34–40]. Facial expression and voice are considered external aspects of emotions, but they do not always offer reliable information about a person's actual emotional state. Therefore, researchers have focused on physiological signals such as EEG, ECG, EMG, ST, RT, and GSR to assess actual emotions. Among these approaches, ST and GSR can detect arousal levels, while EMG measures valence. Negative emotions (such as panic, fear, and depression) can often be identified using RT and ECG, but positive emotions cannot be readily detected this way. Consequently, current research utilizing EEG signals has indicated that two types of wave signals, gamma and beta, are particularly useful for recognizing emotion [41,42]. Mu Li et al. [43] observed that gamma wave signal detection was associated with two particular but distinct emotions: happiness and sadness. Huang et al. [44] described valence and arousal by the use of both beta and gamma bands. Although the high-frequency signals of EEG have demonstrated associations with certain emotions, some studies have found that very different anatomical brain regions can play significant roles in displaying such emotions [45–48].

Considering the non-linear and non-stationary characteristics of EEG signals, features need to be extracted and calculated from the data in such a manner that genuine emotions can be identified. Furthermore, emotions can be analyzed using distinguishable features of EEG signals in the time domain, frequency domain, time-frequency domain, and spatial domain. Time-domain analyses such as statistical parameters, Hjorth parameters, the non-stationary index, and higher-order crossings represent EEG features that could help estimate emotions [49–52]. Power spectrum analysis of EEG signals has yielded several fruitful features that can be used to identify the emotion [53–60]. Some studies have also used non-linear characteristics such as entropy and fractal dimension to analyze emotions, and such non-linear analysis has provided some significant results [61–63].

5. Techniques of emotion recognition

The EEG signals acquired from EEG devices need to be amplified and filtered before analysis. Amplitude and frequency-related features are then extracted using a range of methods, and several machine-learning approaches can be applied to classify and estimate emotions from these EEG features, as shown in Fig. 5.

5.1. Datasets

Several public datasets are available for classifying different emotions through the analysis of EEG signals. DEAP [69], DREAMER [70], SEMAINE [71], MELD [72], IAPS [73], ASCERTAIN [74], MANHOB-HCI [75], DECAF [76], SEED [77], AMIGOS [78], CAPS [79] and INTERFACES [80] are all widely used public datasets for this purpose. The datasets used to analyze emotional states are listed in Table 3. The studies that we examined employed such datasets to define and detect basic emotions including happiness, anger, fear, surprise, sadness, and disgust. Three datasets estimated valence and arousal using different rating scales (1–9 for DEAP and AMIGOS, 1–5 for DREAMER). Two datasets, MELD and SEED, analyzed positive, negative, and neutral emotions. Only a few studies worked on like versus dislike emotions.
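Because several of these datasets label trials with continuous valence and arousal ratings rather than discrete emotions, a common preprocessing step is to binarize the ratings and map them onto the quadrants of Russell's model (Table 2). The sketch below illustrates this for a DEAP-style 1–9 scale; the mid-point threshold of 5 and the example ratings are assumptions for illustration, not values taken from any particular study.

# Minimal sketch: mapping (valence, arousal) self-ratings to quadrant labels.
def quadrant_label(valence, arousal, threshold=5.0):
    """Map a (valence, arousal) rating pair to a valence-arousal quadrant."""
    high_v = valence > threshold
    high_a = arousal > threshold
    if high_a and high_v:
        return "Q1: high arousal / positive valence (e.g., happy, excited)"
    if high_a and not high_v:
        return "Q2: high arousal / negative valence (e.g., angry, nervous)"
    if not high_a and not high_v:
        return "Q3: low arousal / negative valence (e.g., sad, bored)"
    return "Q4: low arousal / positive valence (e.g., relaxed, calm)"

# Example: three hypothetical trials rated by a participant on a 1-9 scale.
for v, a in [(7.5, 8.0), (2.0, 7.0), (6.5, 3.0)]:
    print((v, a), "->", quadrant_label(v, a))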


Fig. 4. Emotion models (left: categorical model; right: dimensional model). Adapted from Ref. [27].

5.2. Emotion stimulation material

In the reviewed studies, emotional states are evoked by stimuli that play a meaningful role in the analyses. Because of the variability of emotions, some stimuli cause changes in EEG signals with the change of emotional states and so can be effective in determining emotional status. Pictures, videos, audio-video clips, music, memories, self-induction, and games are commonly used to stimulate emotions. Many studies have used pictures from the IAPS and CAPS datasets as stimuli [50,59]. The length of time for image presentation was 1.5 s–12.5 s in those studies. In addition, some datasets (DEAP, SEED, MANHOB-HCI, ASCERTAIN, AMIGOS, DECAF, DREAMER) included audio-video clips to stimulate the target emotions; the duration of these stimuli ranged from 35 s to 393 s. Among the 54 studies investigated for this review, 33 used audio-video clips as stimuli, while 9 studies used videos. The remaining 12 studies used images and music (ranging from 15 s to 60 s) to elicit emotional responses (see Fig. 6). Some researchers observed that audio-video clips are much more efficient at triggering emotions than other stimuli because audio-video clips combine scenes and music and also expose participants to more realistic situations [44,55,57,61].

5.3. EEG devices

To monitor EEG signals, electrodes are placed over the scalp in a configuration that obtains electrical signals with the best possible spatial resolution and signal-to-noise ratio (SNR). The most widely used EEG devices are shown in Table 4. These include Biosemi ActiveTwo (2048 Hz, 1024 Hz, 512 Hz, 256 Hz), Emotiv (2048 Hz, 128 Hz), ESI NeuroScan (500 Hz, 250 Hz), and OpenBCI, which record brain activity at different sampling frequencies. In addition, EEG NeuroScan and NeuroSky both record EEG signals at a sampling frequency of 1000 Hz.

Table 4
Equipment for EEG recording.
Name | Frequency | Reference
Biosemi ActiveTwo | 2048 Hz, 1024 Hz, 512 Hz, 256 Hz | [47,49–51,53,56,64–66,68,84,112,115,123,125,127,129,131], [132–134,141–143]
Emotiv | 2048 Hz, 128 Hz | [54,60,113,135,137]
EEG NeuroScan | 500 Hz | [59,114,118]
ESI NeuroScan | 1000 Hz, 250 Hz | [43,45,55,57,61,68,130], [128,133,138]
NeuroSky | 1000 Hz | [52,58]
g.HIamp system | 512 Hz | [46,62,83]
OpenBCI | 250 Hz | [80,139]
Other | - | [44,136,140,144]

5.4. EEG electrodes

EEG devices usually record brain signals using electrodes placed over the scalp according to the 10/20 methodology. Maintaining a large number of electrodes can be difficult for users and makes the systems complex, so the number of electrodes should be kept low to avoid system difficulty. It is for this reason that the electrode numbers used in the reviewed studies ranged from 2 to 62. Only 17% of the studies (9 out of 54) used fewer than 12 electrodes, while 13% of the studies (7 out of 54) used 14 electrodes. One study [81] found that the most reliable electrode positions for detecting emotional valence are F3 and F4. Valenzi et al. [82] used eight electrodes to classify emotion and obtained an accuracy of 87.5%. Another two articles [52,58] recorded EEG signals using only one electrode, while article [83] utilized three channels to detect the signal. S. Wu et al. [56] utilized two channels, Fp1 and Fp2, to detect emotions and obtained an accuracy of 76.34%. About 41% of the studies (22 out of 54) utilized 32 electrodes to record the EEG signal. Moreover, 24% of the studies (13 out of 54) used 62 electrodes, and two studies utilized 40 and 60 electrodes, respectively (see Fig. 7).

5.5. Preprocessing

Electrical signals originating from locations other than the brain are often reported as EEG artifacts. Once the signal is recorded in the presence of artifacts, it must be denoised first. Artifacts during the recording of EEG may be physiological or non-physiological, with non-physiological artifacts arising from sources outside the body, while physiological artifacts occur as a result of electrical signals interior to the body [86,87]. Examples of each type are shown in Table 5. Based on the signal of interest, it is crucial to filter out unwanted signals. Some physiological artifacts, like skin and sweat artifacts, can be removed easily during the recording of neural activity; the subject can wear a cooling ventilation vest to maintain body temperature during exercise. A notch filtering technique can reject power-line artifacts. When the head or body moves, the EEG signal is distorted due to electrode and cable movement; such movement artifacts can be reduced by using a double-layer cap. Some artifacts lie in a limited frequency range, so they may be easily eliminated by frequency filtering using a bandpass filter [88].
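As a concrete illustration of the notch and band-pass filtering steps just described, the sketch below applies a 50 Hz power-line notch followed by a 4–45 Hz band-pass to multi-channel data. The channel count, sampling rate, filter orders, and pass-band are assumptions chosen for the example (the 4–45 Hz range mirrors the setting reported by several DEAP-based studies in Table 6), not a prescription from any single reviewed paper.

# Minimal preprocessing sketch: power-line notch + band-pass filtering.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess(raw, fs, line_freq=50.0, band=(4.0, 45.0)):
    """Filter (n_channels, n_samples) EEG data with a notch and a band-pass."""
    b_notch, a_notch = iirnotch(w0=line_freq, Q=30.0, fs=fs)
    nyq = fs / 2.0
    b_band, a_band = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    cleaned = np.empty_like(raw)
    for ch in range(raw.shape[0]):
        x = filtfilt(b_notch, a_notch, raw[ch])
        cleaned[ch] = filtfilt(b_band, a_band, x)
    return cleaned

fs = 128.0
raw = np.random.randn(32, int(60 * fs))   # 32 channels, 60 s of placeholder data
clean = preprocess(raw, fs)
print(clean.shape)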


However, some physiological artifacts such as ECG, EMG, and EOG can be difficult to treat, since their spectra overlap with EEG, mainly in the beta and gamma bands (see Fig. 8). Nevertheless, there are now several efficient methods that can be used for artifact removal, as indicated in Fig. 9, although some methods are applied only for the identification and removal of specific artifacts such as ECG [89], EMG [90], and EOG [91]. Most researchers have utilized single artifact removal techniques, such as Regression (REG) [92], Blind Source Separation (BSS) [49,50,55,93], the Wavelet Transform [58,94], Empirical Mode Decomposition (EMD) [95,96], Filtering [52–54], Sparse Decomposition [97] and Variational Mode Decomposition (VMD).

Recently, researchers have used hybrid techniques to remove multiple artifacts, in which two or more algorithms are utilized together. The ICA-based BSS method can manage many kinds of artifacts in EEG recordings, but it is not an automatic system and can distort brain signals after canceling artifacts [99]. Regression and adaptive filtering methods may be an excellent choice if there is a reference channel for a specific artifact. In addition, EMD and wavelet transform techniques are suitable for single-channel analysis. On the other hand, principal component analysis (PCA) fails to reject ocular artifacts because it does not exploit higher-order statistical properties. As a result, EMD-BSS [98,99], wavelet-BSS [100], BSS-VCM [101], and REG-BSS [102] may be a good choice for removing ocular and EMG artifacts. Nguyen et al. described another hybrid technique that integrates the wavelet transform and artificial neural networks to remove EOG artifacts [103]. By contrast, despite all the available techniques discussed above, no single method can satisfy all the requirements.

Fig. 5. Process of emotion recognition from EEG.
Fig. 8. Physiological artifacts in human EEG. Adapted from Ref. [87].

Table 5
EEG artifacts, sources, and their influence.
Artifact type | Name | Sources | Influence on EEG signal
Physiological | Ocular activity (Electrooculography, EOG) | Eye blink, flutter or eye movement, and REM sleep | Affects low-frequency bands of EEG signals (delta and theta bands).
Physiological | Muscle activity (Electromyography, EMG) | Swallowing, chewing, talking, sucking, sniffing, grimacing, frowning, hiccupping, jaw clenching, neck muscle, and shoulder muscle | A high-frequency signal that overlaps the beta and gamma bands.
Physiological | Cardiac activity (Electrocardiogram, ECG) | Cardiac muscle movement, pulse artifact | Low-frequency ECG components interfere with the recorded brain activity.
Physiological | Other activity | Skin and sweat artifacts, respiration (inhale or exhale) | Low-frequency artifacts that overlap the delta and theta bands.
Non-physiological | Instrumental | Electrode pop-up, electrode misplacement, cable movement | Cable and electrode movement generate non-EEG frequency peaks.
Non-physiological | Interference | Power-line interference from AC sources and improper grounding, EM waves | Creates a spike at a frequency of 50 Hz, depending on the supply system.
Non-physiological | Movement | Head movement, body movement | The effect is localized at lower frequencies, overlapping the delta and theta bands.

Table 3
Description of public datasets.
Name | Participants | Documented signals | Stimulus | Emotions
DEAP | 32 | EEG (32), EMG (4), EOG (4), GSR (1), Plethysmography (1), Temperature (1), Face Video | 40 video clips | Valence, Arousal, Liking, Dominance, Familiarity
DREAMER | 23 | EEG (14), ECG (1) | 18 video clips | Valence, Arousal, Dominance
SEMAINE | 150 | Audio, Visual | 959 conversations | Valence and Arousal
MELD | - | Audio and Textual | 13,000 voice clips | Anger, Fear, Surprise, Joy, Neutral, Sadness, and Disgust
IAPS | 1483 | - | Pictures | Arousal, Valence, Dominance
ASCERTAIN | 58 | Frontal EEG, ECG, GSR, Facial motion unit (EMO) | 36 video clips | Arousal, Valence, Engagement, Liking, Familiarity
MANHOB-HCI | 27 | EEG (32), ECG (3), GSR (2), ERG (2), Respiration Amplitude, Skin Temperature, Face Video, Audio Signals, Eye Gaze | 20 video clips and pictures | Happiness, Anger, Fear, Surprise, Sadness, Amusement, Anxiety, Neutral and Disgust
DECAF | 30 | MEG, ECG, EOG, EMG, NIR Facial Video | 36 video clips | Amusing, Funny, Happy, Exciting, Angry, Disgusting, Fear, Sad, Shock
SEED | 15 | EEG (15), Face Video, Eye tracking | 15 video clips | Positive, Neutral, Negative
AMIGOS | 40 | Audio, Visual, Depth, EEG, GSR, and ECG | 20 video clips | Arousal, Valence, Dominance, Liking, Familiarity; discrete/basic emotions
CAPS | 46 | - | 852 pictures | Arousal, Valence, Dominance
INTERFACES | 43 | EEG (32), EDA (21), BVP (13), Temperature (4) | 15 video clips | Valence, Arousal, Happiness, Fear, and Excitement
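As a rough illustration of the BSS/ICA family of artifact-removal methods discussed above, the sketch below suppresses ocular activity by zeroing independent components that correlate strongly with an EOG reference channel. It assumes already-filtered data, an available EOG channel, and uses scikit-learn's FastICA as a stand-in for the ICA variants used in the reviewed studies; the correlation threshold is an illustrative assumption.

# Rough sketch of ICA-based (BSS) ocular-artifact suppression.
import numpy as np
from sklearn.decomposition import FastICA

def remove_eog_components(eeg, eog, corr_threshold=0.7, random_state=0):
    """eeg: (n_channels, n_samples); eog: (n_samples,). Returns cleaned EEG."""
    ica = FastICA(n_components=eeg.shape[0], random_state=random_state, max_iter=1000)
    sources = ica.fit_transform(eeg.T)           # (n_samples, n_components)
    # Zero out components that correlate strongly with the EOG reference.
    for k in range(sources.shape[1]):
        r = np.corrcoef(sources[:, k], eog)[0, 1]
        if abs(r) > corr_threshold:
            sources[:, k] = 0.0
    return ica.inverse_transform(sources).T      # back to (n_channels, n_samples)

eeg = np.random.randn(8, 5000)                   # placeholder 8-channel recording
eog = np.random.randn(5000)                      # placeholder EOG reference channel
print(remove_eog_components(eeg, eog).shape)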

Fig. 6. Number of reviewed studies based on stimulation materials.
Fig. 7. Number of reviewed studies based on number of channels.

5.6. Feature extraction

Researchers have tried to calculate multiple features in different frequency bands of the EEG signal for classifying emotions, and we reviewed the feature extraction methods that are used for emotion recognition purposes. Higher-Order Crossings (HOC) [49,83], the Autoregressive (AR) model [65,115], Fractal Dimension (FD) [57], Common Spatial Patterns (CSP) [43,44,118], Hjorth parameters [68,135], Differential Entropy [46,138], and Power Spectral Density (PSD) [49,125,130] have all been applied in previous studies. Researchers have distinguished features in three domains (time, frequency, and time-frequency) to represent EEG. Besides these, there is a further feature extraction approach based on non-linear analysis. Fig. 10 represents the feature extraction methods in terms of the three domains and non-linear analysis. Numerous studies, such as [104–106], have used the amplitude (P200, P300) and latency (N100, N200) of ERPs (positive and negative voltage deflections) as features that are related to emotions. Moreover, various studies considered statistical parameters and Hjorth parameters (activity, mobility, and complexity) to characterize the EEG signal [108–110].


Likewise, the Non-Stationary Index (NSI) [111] and Higher-Order Crossings (HOC) have also been used in studies of emotion [49,69,82,112].

In addition to the above considerations, band power features in individual sub-bands of the EEG signal, as noted above, can be calculated by implementing Fast Fourier Transform (FFT) algorithms [113]. Furthermore, another recognized method for estimating the power spectrum is power spectral density (PSD) [114]. The autoregressive coefficients obtained through the AR method have also been utilized as a feature vector for recognition of emotional states [65,115]. Moreover, the common spatial pattern has been applied as a feature extraction method in articles [44,117,118], and time-frequency analyses are employed in papers [59,60,120,121]. Nevertheless, some non-linear methods like entropy analysis [45,46,61,84] and fractal dimension [122,123] are used to study non-linear EEG signals. S. A. Hosseini et al. [116] explored Higher Order Spectra (HOS) that can be applied to extract EEG features related to human emotions. Some researchers applied the EMD method to obtain optimum features [64–66]. Others have reported that differential asymmetry of the power spectrum can be used to build a relation between EEG and emotion, and they achieved remarkable results with high classification accuracy by using machine learning approaches [124]. The asymmetry reflects the difference in power between corresponding electrode pairs on the left and right hemispheres of the brain [61]. Table 6 below presents a comparison of various feature extraction methods for examining emotion identification using brain signals.
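To make the feature-extraction step more concrete, the sketch below computes three of the feature families discussed in this section for a single epoch: band power from the Welch PSD, the Hjorth parameters, and differential entropy (here computed under a Gaussian assumption). The band limits, epoch length, and input signal are placeholders for illustration only.

# Illustrative feature-extraction sketch: band power, Hjorth parameters, DE.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_power(x, fs, low, high):
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    mask = (f >= low) & (f <= high)
    return trapezoid(pxx[mask], f[mask])

def hjorth_parameters(x):
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def differential_entropy(x):
    # DE of a Gaussian-distributed signal: 0.5 * log(2 * pi * e * variance)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

fs = 128.0
x = np.random.randn(int(10 * fs))          # one channel, 10 s placeholder epoch
features = {
    "alpha_power": band_power(x, fs, 8, 13),
    "beta_power": band_power(x, fs, 14, 30),
    "gamma_power": band_power(x, fs, 31, 45),
    "hjorth": hjorth_parameters(x),
    "diff_entropy": differential_entropy(x),
}
print(features)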
5.7. Feature selection

When features extracted from EEG signals are used to train a classification model, the model can become confused because of the large data size. Therefore, it is vital to choose the most relevant features in order to obtain the target output with higher accuracy. A comparison of various feature selection methods for examining emotion identification is presented in Table 6. Some articles used PCA [47,57] and LDA [57] to obtain better results. Both PCA and LDA have some limitations. PCA reduces the data dimension with minimal loss of information, but it has complications in data processing if the data are not linear and continuous. LDA sometimes fails to discriminate the features, and its structure is complex. Hence, most of the studies have used the maximum relevance minimum redundancy (MRMR) method proposed by Peng et al. [126] because of its estimation reliability and faster speed. Furthermore, some studies have utilized an ANOVA test-based approach as a statistical method [46,53,62,85,127].
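The sketch below illustrates the statistical selection step with the ANOVA-style univariate test mentioned above (scikit-learn's f_classif). MRMR is not part of scikit-learn, so this stands in for the general idea of keeping only the most discriminative features rather than reproducing any specific study; the feature matrix, labels, and the choice of k are placeholders.

# Minimal feature-selection sketch using an ANOVA F-test (SelectKBest).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 160))       # 200 epochs x 160 extracted EEG features
y = rng.integers(0, 2, size=200)      # binary labels, e.g. high/low valence

selector = SelectKBest(score_func=f_classif, k=30)
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)                               # (200, 30)
print(np.flatnonzero(selector.get_support())[:10])   # indices of kept features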
5.8. Classification

The features extracted from EEG data by different methods are classified using a suitable classification algorithm in emotion recognition systems. K-nearest neighbors (KNN) [49,84,115,129], support vector machines (SVM) [43,53,54,57–60,65,113,114,125,130,131], naïve Bayes (NB) [44], the multi-layer perceptron with back-propagation (MLPBP) [52], logistic regression (LR) [45,47,128,129], gradient boosting (GB) [128], linear discriminant analysis (LDA) [135], quadratic discriminant analysis (QDA) [119], random forest (RF) [112,143] and various types of decision tree (DT) [133] are the most widely used classification algorithms. Moreover, convolutional neural networks (CNN) [137], deep neural networks (DNNs) [138], deep belief networks (DBN) [45], artificial neural networks (ANN) [50,127], CapsNet [134,139], and recurrent neural networks (RNN) [146,148] all have some ability to identify exact emotions. Accordingly, many recent studies have applied deep learning classification methods for estimating emotions. It was found that above 70% of the studies used SVM, which is relatively easy to use [45,50,61,83,85]. NB, KNN, DNN, and ANN obtained better results, typically with more than 80% accuracy. Spiking neural networks [141] and the hierarchical fusion CNN [142] also provide better performance than a standard CNN. Detection of emotion using offline and online classification was reported in some studies; about 90% of the articles mentioned that they applied offline classification methods [60,62,113,133]. A few articles used online classification [59] and some others used both online and offline [118] classification techniques, which are more appropriate for real-time situations. A comparison of classification methods for emotion identification using EEG signals is presented in Table 6.

6. Discussion and future research directions

EEG-based emotion detection has become a fascinating and common topic for research, facilitated by advances in brain physiology. Researchers have identified particular approaches that can be used to detect emotions accurately under a range of conditions. In this review, we have explored the recent literature to highlight the current methods used for classifying emotions. The growing availability of emotion-classifying algorithms and experimental EEG datasets has resulted in significant progress in methodological development.

Fig. 9. Artifact removal techniques.

Most recent work has used multimodal datasets to analyze some basic emotions, including happiness, anger, fear, surprise, sadness, disgust, valence, and arousal [150]. However, none of the datasets provides selection criteria for subjects and stimuli that ensure research validity; they have rather focused on obtaining higher accuracy by varying stimuli and participants. Therefore, investigators need to find appropriate ways to solve these issues. According to the surveyed studies, it is clear that at least 30 participants should be used in experiments for detecting actual target emotions; if not, the classifier can fail to identify the required output due to insufficient data. One thing that needs to be kept in mind for future research is that if researchers use both male and female subjects, the number of participants needs to be balanced between the two sexes. As mentioned above, several studies have indicated that emotions might be evoked only by the actual presentation of appropriate stimuli. Images and music have been employed in existing studies due to their low cost and reliability. However, some failed to evoke a wide range of emotions because the types of music and video influence emotions according to the taste and choice of the subjects.

Previous studies have indicated that audio-video clips can stimulate human emotions in the most appropriate way because they capture real scenarios of subjective and physiological changes. As a result, audio-video clips can be considered the best stimuli to employ in the study of emotion analysis. In addition, some studies have focused on short presentation times of stimuli to evoke emotions, but they did not give any guidelines for choosing the duration of stimulation. Although short-term stimulation materials are sometimes fruitful in detecting target emotions, such stimuli should not be so short that complex and confusing data are obtained that are not useful for the analysis. Consequently, based on earlier studies, it seems that stimuli of at least 1 min should be used to evoke real emotion.

Recent studies have explored various devices that can be used to capture EEG signals at different sampling rates. Among these, Biosemi ActiveTwo and ESI NeuroScan are the most widely used EEG recording devices. EEG signal recording must be efficient and economical, and it is essential to take great care over practical matters while recording EEG. For instance, electrode placement, the type of sensor used (wet, dry, or semi-dry), the channels used, and the sampling rate must be appropriate and robust. Similarly, the sensors must be comfortable for the subject to avoid adverse experiences. Most of the studies recorded EEG signals using more than 14 channels at sampling rates of 128 Hz and 200 Hz. Many previous studies have used topographic brain maps and statistical analysis methods to determine which channels or electrodes are more practical for estimating emotions related to brain waves. It is envisaged that future studies can significantly improve the emotion recognition rate by reducing the number of electrodes and choosing the gamma and beta bands of EEG.
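One hedged way to combine the two suggestions above, namely fewer electrodes and a focus on the beta and gamma bands, is to restrict the montage to a small frontal subset and use a left/right band-power asymmetry (e.g. F3 vs. F4, the pair reported as most reliable for valence in [81]) as a compact feature. The channel ordering, band limits, and data below are illustrative assumptions, not a recipe drawn from any specific study.

# Hedged sketch: frontal channel subset and a beta band-power asymmetry feature.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

CHANNELS = ["Fp1", "Fp2", "F3", "F4", "F7", "F8"]   # assumed frontal subset

def band_power(x, fs, low, high):
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    mask = (f >= low) & (f <= high)
    return trapezoid(pxx[mask], f[mask])

def frontal_asymmetry(epoch, fs, left="F3", right="F4", band=(14.0, 30.0)):
    """Log band-power difference between a right/left electrode pair."""
    li, ri = CHANNELS.index(left), CHANNELS.index(right)
    return np.log(band_power(epoch[ri], fs, *band)) - np.log(band_power(epoch[li], fs, *band))

fs = 128.0
epoch = np.random.randn(len(CHANNELS), int(10 * fs))   # placeholder epoch
print("beta asymmetry (F4 vs F3):", frontal_asymmetry(epoch, fs))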


Unfortunately, in the laboratory EEG signals are often corrupted to some degree by artifacts that originate from environmental and physiological sources. The laboratory setting must be as free as possible from magnetic and electrical interference sources in order to obtain a reliable signal for the purposes of analysis. Therefore, one of the significant challenges of this research is data processing. This can increase the difficulty of correctly classifying or determining an emotion, because the complete expunging of artifacts can also remove valuable information from the signal, while incomplete exclusion can affect the outcome. The BSS approach has gained significant attention in the EEG processing field because it uses subspace filtering to separate artifacts. Many studies have suggested the wavelet technique as a means of removing biological artifacts. Among other proposed artifact removal techniques are EMD-BSS, wavelet-BSS, BSS-VCM, and REG-BSS; these have been considered the most effective methods for processing EEG signals. In addition, many researchers have proposed hybrid techniques combining adaptive filtering and the BSS method to eliminate EMG and EOG artifacts in future research.

Fig. 10. EEG-based feature extraction techniques.

The collection of meaningful features from EEG signals is challenging because of their nonlinear characteristics. Time, frequency, and time-frequency analyses cannot provide a complete description of the characteristics and behavior of EEG signals. Therefore, finding stable features and designing accurate models with high classification performance are crucial. According to the published literature, researchers have recommended nonlinear features because they are likely to be more effective for recognizing human emotions. In future research, a combination of EEG signals with other physiological signals might increase the recognition rate of emotions. An excessive number of irrelevant features will degrade classification performance, but there are appropriate statistical methods that can be applied to select meaningful features. Similarly, the MRMR method can be used to avoid redundancy between chosen features. SVM, KNN, DT, RF, and QDA were the classifier algorithms most often used to obtain higher accuracy. However, those algorithms did not perform well when researchers used nonlinear features. Therefore, more recent studies are turning to deep learning approaches to improve the quality of the final classification, and many studies have obtained remarkable and accurate results using deep learning-based methods. Thus, in future research, RNN, DBN, HFCNN, and CNN are likely to prove valuable approaches to reveal significant insights and discover important information about emotion analysis.
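As a hedged sketch of the deep-learning direction recommended above, the example below defines a very small 1-D convolutional network that maps a multi-channel EEG epoch to emotion classes. The architecture, input shapes, and class count are illustrative assumptions and do not reproduce any specific model from the reviewed studies.

# Hedged sketch: a tiny 1-D CNN for EEG emotion classification (PyTorch).
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    def __init__(self, n_channels=32, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # global average pooling over time
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, channels, samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = TinyEEGNet()
x = torch.randn(8, 32, 1280)              # 8 epochs, 32 channels, 10 s at 128 Hz
logits = model(x)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (8,)))
loss.backward()
print(logits.shape, float(loss))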

Table 6
Recent studies of emotion recognition from EEG signals based on reviewed papers between 2009 and 2021.
Year | Author | Stimulus | Participants | Channel | Artifacts filtering | Frequency (down-sample) | Feature extraction method | Feature reduction/selection method | Classification method | Emotions | Result with accuracy

2009 M. Li et al. [43] Image 10 62 IIR Filter - CSP - SVM Happiness, Sadness 93.5% (3s trial)
EMG (manually) 93.0% (1s trial)
2009 Petrantonakis Image 16 3 Bandpass (Butterworth) - HOC - QDA, SVM, KNN, Happiness, QDA: 62.3% (single
et al. [83] MD Surprise, Anger, channel)
Fear, Disgust, and SVM: 83.33%
Sadness (Combined Channel)
2011 D. Nie et al. Video 6 32 Bandpass (1–40 Hz) - FFT - SVM Positive and Overall accuracy:
[113] EMG (manually) Negative 89.22%
2011 X.-W. Wang Video 5 62 EOG, EMG (manually) 200 Hz Statistical and Band MRMR SVM, KNN, MP, Joy, Relax, Sad, SVM: 66.51% using
et al. [85] Power using FFT and Fear band power
2012 T. F. Bastos- DEAP 32 32 Bandpass (4–45 Hz) 128 Statistical, PSD, HOC - KNN (K = 8) Stress and Calm 70.1% using PSD (4
Filho et al. [49] BSS, CAR applied to channel)
preprocessing
2012 S. K. music 9 14 Bandpass (0.16–85 Hz) 128 Hz Time-Frequency - KNN (K = 4), QDA, Like vs Dislike 86.52% using KNN
Hadjidimitriou Notch Filter (50 & 60 SVM
et al. [119] Hz)
2012 D.Huang et al. Video 4 32 - - ASP, CSP, FBCSP - NB Valence and Valence: 65%
[44] Arousal Arousal: 80%
2012 M. Soleymani Video 30 32 Bandpass (4–45 Hz) 256 Hz PSD, AI - SVM RBF Valence and Valence: 50.50%
et al. [125] CAR applied to Arousal Arousal: 62.10%
preprocessing
2013 R. N. Duan et al. Video 6 62 Manually 200 Hz DE, DASM, RASM and MRMR SVM, KNN Positive and DE: 84.22%, DASM:

[61] ES Negative 80.96%, RASM:


83.28%, and ES:
76.56% (Using SVM)
2013 Y. H. Liu et al. IAPS 7 64 - - DFT - Adaptive SVM Valence and Arousal: 73.57%
[59] Arousal Valence: 73.42%
2013 A. T. Sohaib IAPS 15 6 ICA applied to - Statistical - KNN, SVM, RT, BN, 56.10% using SVM
et al. [50] preprocess ANN (Single Subject)
66.47% using SVM
(All
Subjects)
2014 Wang X–W et al. Audio-video 6 62 EOG and EMG 200 Hz AE, FD, HE, PSD, WT PCA, LDA and SVM Positive and 91.77% using the LDA
[57] clip (manually) CFS Negative method
2014 Y.-P. Lin et al. music 26 32 Bandpass (1–100 Hz) - PSD, DLAT, DCAU, - SVM Valence and Valence: 76.08%
[114] MESH (combination of Arousal Arousal: 74.27%



DLAT, DCAU and PSD)
2014 X.-W. Wang Video 6 62 Manually 200 Hz PSD, WT, NDA - SVM Positive, Overall accuracy:
et al. [130] Negative 87.53%
2014 S. Hatamikia DEAP 32 32 Bandpass (4–45 Hz), 128 Hz AR (Burgh Method) - KNN Valence and 2 Classes: 74.20% for
et al. [115] BSS Arousal Arousal and 72.33%
for Valence
3 Classes: 65.16% for
Arousal and 61.10%
for Valence
2014 X. Jie et al. DEAP 32 32 Bandpass (4–45 Hz) 128 Hz SE - SVM Valence-arousal Single Subject:
[132] BSS Model 79.11% (LVHA-
HVHA), 64.47%
(LVLA-LVHA)
All Subjects: 80.43%

(LVHA-HVHA),
71.17% (LVLA-LVHA)
2015 Zheng WL et al. SEED 15 62 Bandpass (0.3–50 Hz) 200 Hz PSD, DE, DASM, - DBN, KNN, LR, Positive, SVM: 86.65% (12
[45] EOG, EMG (manually) RASM, DCAU SVM Neutral, Channel) and SVM:
Negative 83.99%
(62 Channel)
DBN: 86.08% (62
Channel)
2015 Zheng WL et al. Audio-video 15 62 Bandpass (0.3–75 Hz) 200 Hz DE - KNN, LR, LDA, Positive, LR: 81.26%
[128] clip QDA, SVM, NB, Neutral,
DT, SGD, RF, GB Negative
2015 J. Chen et al.. DEAP 32 32 Bandpass (4–45 Hz), 128 Hz Statistical, Entropy, - C4.5, SVM, MLP, Valence and Valence: 67.89%
[51] BSS Hjorth parameters KNN Arousal Arousal: 69.09%
2015 A. Vijayan et al. DEAP 32 7 Bandpass (4–45 Hz) 128 Hz WT, SE, CC, AR - SVM Exciting, Happy, Exciting: 95.83%,
[131] Sadness, Happy: 90.97%,
Hatred Sadness: 96.52%,
Hatred: 93.05%
2016 S. Liu et al. [55] Video 8 60 Manually and ICA to 500 Hz PSD (Welch’s Method) - SVM Gaussian Positive, Overall accuracy: 73%
remove EOG Neutral,
Negative
2016 P. Ackermann DEAP 32 14 Bandpass (4–45 Hz) 128 Hz HHS, HOC, STFT - RF Happiness, Overall accuracy:
et al. [112] BSS Surprise, Anger, 40%–50%
Fear, Disgust and
Sadness

2016 J. Atkinson et al. DEAP 32 14 Bandpass (4–45 Hz) 128 Hz Statistical MRMR SVM RBF Valence and 2 Classes: 73.06% for
[53] Arousal Arousal and 73.14%
for Valence
3 Classes: 60.7% for
Arousal and 62.33%
for Valence
2016 M. Alsolamy Music 14 14 Bandpass (2–42 Hz) - PSD - SVM RBF Happy and Overall accuracy:
et al. [54] CAR applied to Unhappy 85.86%
preprocessing
2016 A. M. Bhatti Music 30 1 Bandpass (1–50 Hz) 300 Hz Statistical, PSD, WT, - MLP-BP Happy, Sad, Love, Happy: 94.87%, Sad:
et al. [52] FFT Anger 78.13%, Love:
65.38%, Anger:



74.07%
2016 A. Jalilifard Video 8 1 Using Stationary - STFT - SVM Fear and Overall accuracy:
et al. [58] Wavelet Relaxation 92.10%
Transform
2016 J. Pan et al. Image 6 18 Bandpass (0.1–60 Hz) 250 Hz CSP - SVM Happiness and Overall accuracy:
[118] Sadness 74.17%
2016 H. Shahabi et al. Music 19 14 Bandpass (2–42 Hz) - DTF - SVM Joyful, Joyful vs Neutral:
[60] Melancholic, 93.7%,
Neutral Joyful vs
Melancholic: 83.04%,
Familiar vs
Unfamiliar: 83.04%
2017 S. Wu et al. [56] DEAP 32 32 Bandpass (4–45 Hz) 128 Hz FFT, WT, AS, SASI - GBDT Valence Overall accuracy:
BSS 75.18%
2017 DEAP 32 8 128 Hz - SVM

Ning Zhuang Bandpass (4–45 Hz) EMD (1st difference of Valence and Valence: 69.10%
et al. [66] BSS time series, Arousal Arousal: 71.99%
1st Difference of IMF’s
Phase, Normalized
energy of IMF)
2017 R. Majid Image 21 14 - - Hjorth Parameters ANOVA SVM, KNN, LDA, Happy, Calm, Sad, 76.6% using Boosting
Mehmood et al. (IAPS) NB, RF, DL, four Scared Classifier
[135] ensembles’
methods (Bagging,
Boosting,
Stacking,
Voting)
2017 Zheng WL et al. DEAP and 32 32 Bandpass (4–45 Hz) 128 Hz PSD, DASM, RASM, PCA, KNN, LR, SVM, Positive, DEAP: 69.67%
[47] SEED EOG (manually) ASM, DCAU MRMR GELM (5-fold Cross Neutral, SEED: 91.07% (GELM)
Validation) Negative
2017 Salma Alhagry DEAP, 32 32 Bandpass (4–45 Hz) 128 Hz Raw EEG signal using - LSTM-RNN Valence, Arousal, Arousal: 85.65%,
et al. [146] EOG (manually) LSTM-RNN method Liking Valence: 85.45,
Liking: 87.99
2018 Ahmet Mert DEAP 32 32 Bandpass (4–45 Hz) 128 Hz Time- Frequency ICA and NMF SVM, KNN, ANN Valence and High/Low Arousal:
et al. [127] BSS Arousal ANN:82.11%,
SVM: 76.64%, KNN:
74.22%
High/Low
Valence:

ANN: 82.03%,
SVM: 78.75%,
KNN: 71.17%
2018 Zhang, Y et al. DEAP 32 2 Bandpass (4–45 Hz) 128 Hz EMD (IMFs), AR - SVM Valence- Arousal HAHV/LAHV:
[65] BSS Model 88.37%, LAHV/LALV:
77.75% LALV/HALV:
86.96% HAHV/HALV:
92.04%
2018 Hanieh DEAP 32 32 Bandpass (4–45 Hz), 128 Hz WT (Gabor Wavelet), - SVF RBF (10-fold Happy, Sad, Happy: 95.3%, Sad:
Zamanian et al. BSS EMD (IMFs) cross validation) Exiting, Hate 94.8%,
[64] Exiting: 91.35%, Hate:
94% (Using 3
Channels)



2018 N. Zhuang et al. Video 30 62 Band Pass (0.1–100 Hz), - DE based on STFT, MRMR SVM Joy, Neutrality, 87.36% positive from
[62] Notch Filter (50 Hz), first difference of IMF Sadness, Disgust, negative self-induced
BSS (FICA) based on EMD Anger, Fear. emotions,
54.52% six discrete
categories
2018 Li M et al. [84] DEAP 32 32 Bandpass (4–45 Hz), 128 Hz DWT (Entropy and - KNN Valence and Valence: 89.54%
AMR Energy) Arousal (Using 10 Channel),
92.28% (Using 14
Channel), 93.72%
(Using 18 Channel),
95.70% (Using 32
Channel)
Arousal: 89.81%
(Using 10 Channel),
92.24% (Using 14

Channel), 93.69%
(Using 18 Channel),
95.69% (Using 32
Channel)
2019 C. Qing et al. DEAP and DEAP-32 DEAP-32 DEAP: Filter (4–45 Hz), DEAP- 128 Statistical (first and - DT, KNN, RF Positive, DEAP: 62.63% (for the
[133] SEED SEED- 15 SEED-62 BSS Hz second-order Negative, Calm last 34 s of the EEG
SEED: SEED- 200 differential features), data)
Filter (0–75 Hz) Hz NNM SEED: 74.85% (for the
last 75 s EEG data)
2019 Chao H et al. DEAP 32 32 Bandpass (4–45 Hz) 128 Hz Multiband Feature - CapsNet Valence, Arousal Valence: 66.73%
[134] BSS Matrix and Dominance Arousal: 68.28%
Dominance: 67.25%
2019 Taran S et al. Audio-video 20 24 Two stage filtering: (i) 256 Hz Sample entropy, ANOVA MC-LS-SVM Happy, Fear, Sad, Happy: 92.79%, Fear:
[136] clips Correlation-Criterion Tsallis entropy, FD Relax 87.62%, Sad: 88.98%,
for removal of noisy (Higuchi), Hurst and Relax: 93.13%
IMFs (ii) Variational Exponent (HX) using MC-LS-SVM
Mode Decomposition (Morlet) classifier,
(VMD) for unwanted Overall
frequency mode Accuracy: 90.63 using
MC-LS-SVM (Morlet)
classifier.
2019 Gonzalez HA DEAP, IAPS DEAP-32 32 ICA applied to remove 128 Hz Frequency Emotion - CNN Valence and Single subject:
et al. [137] and DREAMER- EOG eXpression Dataset Arousal Valence: 70.26%
DREAMER, 23 based on PSD Arousal: 72.42%

IAPS-6
2019 Wang Y et al. SEED 15 62 Bandpass (0.3–75 Hz), 200 Hz DE ANOVA DNNs Positive, Overall
[138] EOG, EMG (manually) Negative, Accuracy: 93.28%
Neutral
2019 X. Chen et al. DEAP 32 32 Bandpass (4–45 Hz) 128 Hz time domain and - Bagging Tree (BT), Valence and Valence: 99.97%
[147] BSS frequency domain and SVM, LDA, Arousal (using CVCNN)
Raw signal Bayesian LDA, Arousal: 99.58%
Deep CNN (using GSLTCNN)
2020 Yang K et al. Audio 24 62 Filter (0.1–100 Hz) - DE MRMR SVM Positive, Overall accuracy:
[46] Notch Filter (50 Hz) Negative 87.2%
BSS (FICA) Neutral
2020 G. Chen et al. DEAP and DEAP-32 DEAP-32 DEAP: Bandpass (4–45 DEAP-128 Statistical, Hjorth - SVM DEAP- Valence- DEAP: 89.64% in
[68] TYUT 2.0 TYUT 2.0–6 TYUT Hz), BSS Hz Parameters, FD, DE Arousal Model gamma band using FC



2.0–64 TYUT 2.0: Bandpass TYUT- Wavelet Entropy, TYUT 2.0: features
(0.5–45 Hz), ICA (eye 1000 Hz Function Connectivity Happiness, TYUT 2.0: 84.67% in
artifacts) Features (FC) Surprise, Anger, gamma band and
Neutral, Sadness using DE features
2020 B. Nakisa et al. Audio-video 17 5 Bandpass and Notch - Time and Frequency - ConvNet long Low Arousal Overall accuracy:
[139] Clips filter, ICA Domain Features short-term memory Positive, High Early fusion:71.61%
(LSTM) Arousal Positive, Late fusion: 70.17%
(early and late Low Arousal
fusion) Negative, High
Arousal Negative,
2020 E. P. Torres et al. Audio-video 10 8 Butterworth filters 128 Hz DE, DASM, RASM Mutual Random Forest Valence and RF: 70.12% (2 min) &
[140] clips (1–80 Hz), Notch Filter Information (RF) and Deep Arousal 71.22% (last 30 s)
Matrix and Chi- Learning (DL) DL: 82.51% (2 min) &
Square statistics, 83.18% (last 30 s)

embedded
method
2020 Y. Luo et al. DEAP and DEAP-32 DEAP-32 DEAP: Filter (4–45 Hz), DEAP- 128 Variance, FFT and - Spiking Neural DEAP: Arousal, DEAP: arousal: 74%,
[141] SEED SEED- 15 SEED-62 BSS Hz DWT Network (SNN) Valence, Valence: 78%,
SEED: SEED- 200 Dominance, Liking Dominance: 80% and
Filter (0–75 Hz) Hz SEED: Negative, Liking: 86.27%
Positive, Neutral SEED: Overall
accuracy: 96.67%
2020 Alhalaseh R DEAP 32 32 Bandpass (4–45 Hz), 128 Hz Spectral entropy and - CNN, k-NN, NB, DT Valence and Overall accuracy:
et al. [145] EMD/IMF and VMD Higuchi’s Fractal Arousal 95.20% (using CNN)
filter method Dimension (HFD)
2020 Heng Cui et al. DEAP and DEAP: 32 DEAP: 32 Bandpass (4–45 Hz) 128 Hz Raw signals, DE, - Regional- Valence and Overall accuracy:
[148] DREAMR DREAMR: DREAMR: Statistical Features Asymmetric Arousal 96.65 (Valence),
23 14 Convolutional 97.11% (Arousal)

Neural Network
(RACNN)
2021 Y. Zhang et al. DEAP and DEAP: 32 DEAP: 32 Bandpass (DEAP: 4–45 DEAP: 128 Statistical Features - Hierarchical Valence and Overall
[142] MANHOB- MANHOB- MANHOB- Hz, MANHOB-HCI: Hz Hjorth parameters Fusion Arousal Accuracy:
HCI HCI: 30 HCI: 32 0.3–50Hz) MANHOB- Power spectral density Convolutional DEAP: 84.71%
(EOG, EMG) manually HCI: 256 Hz Neural Network MANHOB-HCI: 89%
(HFCNN)
2021 A. SEED 15 62 Filter (0–75 Hz) 200 Hz MFBSE-EWT based - Sparse Negative, Positive, ARF: 94.4%
Bhattacharyya Temporal entropy Autoencoder based Neutral
et al. [143] Spectral entropy and Random Forest
Shannon entropy (ARF)
2021 S. K. Khare et al. Audio-video 20 24 Butterworth Filter 256 Hz Hjorth Complexity - DT, KNN, SVM, Fear, Sad, Happy, DT: 87.7%, KNN:
[144] clips (0.5–100 Hz), Notch (HC), Difference and ELM Relax 93.8%, SVM: 93.1%,



Filter, and ICA Absolute Standard and ELM: 97.24%,
Deviation Value overall
(DASDV), Log Energy accuracy: 97.24%
Entropy (LEE), Renyi
Entropy (RE)

characteristics and behavior of EEG signals. Therefore, finding stable [4] P. Autthasan, et al., A single-channel consumer-grade EEG device for brain-
computer interface: enhancing detection of SSVEP and its amplitude modulation,
features and designing accurate models with high classification perfor­
IEEE Sensor. J. 20 (6) (2020) 3366–3378, https://doi.org/10.1109/
mance are crucial. According to the published literature, researchers JSEN.2019.2958210, 15 March 15.
have recommended nonlinear features because nonlinear features would [5] P. Sawangjai, S. Hompoonsup, P. Leelaarporn, S. Kongwudhikunakorn,
be more effective in knowing human emotions. In future research, a T. Wilaiprasitporn, Consumer grade EEG measuring sensors as research tools: a
review, IEEE Sensor. J. 20 (8) (2020) 3996–4024, https://doi.org/10.1109/
combination of EEG signals with other physiological signals might in­ JSEN.2019.2962874, 15 April 15.
crease the recognition rate of emotions. An excessive number of [6] Islam M.R., Moni M.A., Islam M.M., Rashed-Al-Mahfuz M., Islam M.S., Hasan M.
irrelevant features of the signal will degrade system classification K., Hossain M.S., Ahmad M., Uddin S., Azad A.K., Alyami S.A., Emotion
recognition from EEG signal focusing on deep learning and shallow learning
performance, but there are appropriate statistical methods that can techniques, IEEE Access (2021 Jun 22). doi: 10.1109/ACCESS.2021.3091487.
be applied to select meaningful features. Similarly, the MRMR [7] T. Wilaiprasitporn, A. Ditthapron, K. Matchaparn, T. Tongbuasirilai,
method can be used to avoid redundancy between chosen features. N. Banluesombatkul, E. Chuangsuwanich, Affective EEG-based person
identification using the deep learning approach, in: IEEE Transactions on
SVM, KNN, DT, RF, and QDA were the most used classifier algorithms to Cognitive and Developmental Systems, vol. 12, Sept. 2020, pp. 486–496, https://
lead the model to higher accuracy. However, those algorithms did not doi.org/10.1109/TCDS.2019.2924648, 3.
perform well when researchers used nonlinear features. Therefore, more [8] S.M. Alarcão, M.J. Fonseca, Emotions recognition using EEG signals: a survey,
IEEE Transactions on Affective Computing 10 (3) (2019) 374–393, https://doi.
recent studies are turning to deep learning approaches to improve the org/10.1109/TAFFC.2017.2714671.
quality of the final classification. Many studies have obtained remark­ [9] B. Kolb, I. Whishaw, Fundamentals of Human Neuropsychology, Macmillan,
able and accurate results using deep learning-based methods. Thus, in 2009, pp. 73–75.
[10] L.F. Haas, Hans Berger, Richard caton, and electroencephalography", J. Neurol.
future research, RNN, DBN, HFCNN, and CNN are likely to prove valu­
Neurosurg. Psychiatr. 74 (1) (2003) 9, https://doi.org/10.1136/jnnp.74.1.9.
able approaches to reveal significant insights and discover important [11] G.H. Klem, H.O. Lüders, H.H. Jasper, C. Elger, The ten-twenty electrode system of
information about emotion analysis. the International Federation, The International Federation of Clinical
Neurophysiology, Electroencephalography and Clinical Neurophysiology.
Supplement 52 (1999) 3–6.
7. Conclusion [12] A. Pitkanen, V. Savander, J.E. LeDoux, Organization of intra-amygdaloid
circuitries in the rat: an emerging framework for understanding functions of the
In this work, we have reviewed numerous articles describing studies of emotion recognition from EEG signals. For the systematic survey, recognized database websites were used to collect the research documents. This paper describes the steps required to improve the estimation of emotions through EEG signal analysis: we have described standard datasets, emotion elicitation materials, EEG devices, and the influence of artifacts on brain waves, and we have discussed a structured approach for choosing the number of subjects, the stimulation materials, and the feature extraction and selection techniques. Moreover, we have identified which methods can be used to obtain high accuracy in feature extraction, data dimensionality reduction, and data smoothing.
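As a small, hedged illustration of the feature extraction step summarized above, the sketch below computes classical band-power features from one EEG channel with Welch's method in SciPy; the band limits, sampling rate, and synthetic signal are assumptions made purely for demonstration.

```python
# Illustration only: Welch band-power features for a single EEG channel.
# The frequency bands, sampling rate and the synthetic signal are assumptions.
import numpy as np
from scipy.signal import welch

fs = 128.0                                  # assumed sampling rate in Hz
t = np.arange(0, 4.0, 1.0 / fs)             # one 4-second analysis window
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # stand-in signal

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

freqs, psd = welch(eeg, fs=fs, nperseg=256)
df = freqs[1] - freqs[0]                    # frequency resolution of the PSD
features = {name: float(psd[(freqs >= lo) & (freqs < hi)].sum() * df)
            for name, (lo, hi) in bands.items()}
print(features)
```

Features of this kind, computed per channel and per band, are what selection methods such as MRMR and the classifiers discussed earlier typically operate on.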
In addition, to obtain meaningful features, a number of artifact removal techniques, especially hybrid techniques, have been recommended in this work; a minimal ICA-based example of ocular-artifact cleanup is sketched below. Future research should focus on increasing the number of participants and the number of subject-independent features to improve accuracy. It is also clear that audio-video clips are a good choice of stimulus, and usually the best option for representing a realistic emotional scenario. We have also highlighted deep learning approaches for distinguishing emotional states: since shallow machine learning approaches are generally inefficient for analyzing large quantities of data, the most promising route to higher accuracy is likely to be deep learning approaches such as CNNs, DBNs, and RNNs in future development and research.
Declaration of competing interest

The authors confirm that they have no conflicts of interest related to this work.