
J. Neural Eng. 17 (2020) 066010  https://doi.org/10.1088/1741-2552/abc1b6

Journal of Neural Engineering

PAPER

An investigation of in-ear sensing for motor task classification

Xiaoli Wu1, Wenhui Zhang1, Zhibo Fu1, Roy T H Cheung2,3 and Rosa H M Chan4

1 MindAmp Limited, Hong Kong, People's Republic of China
2 Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, People's Republic of China
3 School of Health Sciences, Western Sydney University, Sydney, Australia
4 Department of Electrical Engineering, City University of Hong Kong, Hong Kong, People's Republic of China

E-mail: rosachan@cityu.edu.hk

Received 1 July 2020; revised 30 September 2020; accepted for publication 15 October 2020; published 19 November 2020

Keywords: electroencephalography, in-ear sensing, human–computer interface, wearable device

Abstract
Objective. Our study aims to investigate the feasibility of in-ear sensing for the human–computer interface. Approach. We first measured the agreement between in-ear biopotential and scalp-electroencephalogram (EEG) signals by channel correlation and power spectral density analysis. We then applied the EEG compact network (EEGNet) to the classification of a two-class motor task using in-ear electrophysiological signals. Main results. The best performance using in-ear biopotential with a global reference reached an average accuracy of 70.22% (cf. 92.61% accuracy using scalp-EEG signals), but the performance of in-ear biopotential with a near-ear reference was poor. Significance. Our results suggest that in-ear sensing would be a viable human–computer interface for movement prediction, but careful consideration should be given to the position of the reference electrode.

1. Introduction

Electroencephalography (EEG) has high temporal resolution and is relatively more tolerant of subject movement than other neuroimaging techniques such as functional magnetic resonance imaging [1, 2]. Nevertheless, its low portability and complicated system setup limit its applicability to everyday life. Hence, the applications of EEG are often constrained to medical or research laboratories [3, 4]. To enhance the applicability of EEG and the human–computer interface in daily life, a more portable, wearable, and comfortable electrophysiological signal monitoring device is necessary.

Ear-EEG was first proposed by Looney et al in 2012 [5] as a method that records the EEG signal from electrodes embedded in a personalized earpiece. Nguyen et al introduced an in-ear EEG recording system for sleep staging [6]. Compared with the scalp-EEG system, in-ear EEG monitoring offers greater wearability and user comfort, as well as higher tolerance to muscle movements [7]. In-ear sensing of multiple modalities improves measurement accuracy and minimizes artifacts [8, 9]. Some studies have demonstrated that brain activities such as steady and transient state auditory/visual evoked potentials [10] and alpha attenuation [11] can be extracted from the in-ear EEG.

Although these studies showed the potential of in-ear sensing for the human–computer interface, its application to motor tasks is still not well developed, despite the great importance of motor tasks in human–brain interfaces. Kim et al used signals from ear-adjacent scalp EEG for motor imagery task classification and showed promising results [12], but ear-adjacent scalp EEG remains relatively limited in portability [13–15].

Therefore, in this paper we introduce an in-ear electrophysiological signal based human–computer interface system for a two-class motor task. With both in-ear and around-ear sensors, such a wearable in-ear electrophysiological signal monitoring device could facilitate more practical and wearable human–computer applications and benefit both healthy and disabled users in daily life [16, 17].


We first investigated the quality of the in-ear electrophysiological signals by calculating the channel correlations and the power spectral density (PSD) differences between the ear sensors, the adjacent scalp EEG channels, and the motor-related scalp EEG channels. We then evaluated the classification accuracy of using the in-ear electrophysiological signals or the scalp EEG signals as the input of the EEGNet classifier. Our results shed light on the possibility of using in-ear electrophysiological signals for motor task classification in the human–computer interface. The dataset is available in the IEEE DataPort (http://dx.doi.org/10.21227/j7rq-2p11).

2. Method

The in-ear electrophysiological signal recording device used in this study comprised two customized earpieces, each with four recording electrodes embedded as shown in figure 1. The ear electrodes were made of Ø 4 mm silver (Ag) flakes printed with Ag/AgCl ink. For all data collected in this paper, the electrode positions were xF, xB, xOU, and xOD, where x marks the left (L) or right (R) ear. The ear canal electrodes, labeled xF and xB, contacted the front and back of the ear canal; the other two concha electrodes, xOU and xOD, corresponded to the upper and lower positions of the outer ear. The custom-fit earpieces were hand-crafted in the following steps: (1) making ear impressions by injecting high-viscosity silicone into the canals; (2) obtaining the ear model by solidifying the agar of the impressions; (3) manufacturing the earpieces from ultraviolet-irradiated resin; (4) wiring leads from the electrodes directly to the recording system, with a hole at the end of each earpiece for auditory signal delivery.

2.1. Experimental paradigm and data collection
Six subjects (two males, four females, ages 22–28) participated in this study. All were right-handed, not on medication, and free from any known central nervous system abnormality. Complete information about the experimental procedures was given to the subjects, and an informed consent document was signed before participation. The study was approved by the ethics committee at the City University of Hong Kong (Reference number: 2-25-201602_01). Before the placement of the earpieces, the concha and ear canals were cleaned with ethanol. A small amount of high-viscosity conductive gel (Ten20 EEG paste) was applied to each ear electrode, and additional conductive gel (Compumedics Quik-Gel electrolyte) was inserted into the scalp-EEG electrodes. The impedance of all scalp-EEG electrodes was measured to ensure their values were below 10 kΩ; otherwise, more gel was added. As the materials of the scalp EEG and ear electrodes were different, the impedance of the ear electrodes was controlled within 50 kΩ.

During the experiment, participants were seated in front of a computer monitor on which stimuli were presented. Five subjects completed at least ten blocks of trials; the sixth subject finished five blocks. Each block contained sixteen 10 s trials. At the beginning of a trial, a fixation cross was presented in the center of the monitor for 3 s. A short warning beep was played 2 s after the cross onset to call the subject's attention to the task. When the fixation cross disappeared, an arrow was displayed on the monitor, pseudo-randomly pointing to the right or left. During the 4 s arrow presentation, subjects were instructed to make a left- or right-handed fist according to the arrow direction. Blinking and swallowing were not allowed during the 10 s trial, and eye movements were to be avoided as far as possible. Figure 2 provides a schematic diagram of the trial.

The scalp EEG was recorded from a 122-channel cap (Neuroscan Quick Cap, Model C190). The reference electrode at the scalp REF channel and the ground electrode at the scalp GRD channel were shared by the ear and scalp electrodes. Figure 3 shows the positions of the scalp and ear electrodes. In-ear electrophysiological signals and scalp EEG were recorded simultaneously. The recording amplifier was a SynAmps2, and Curry 7 was used for real-time data monitoring and collection. The signals were sampled at 1000 Hz and filtered with a band-pass filter between 0.5 and 100 Hz, together with a notch filter to suppress line noise.

We evaluated the in-ear electrophysiological signals by measuring the differences in channel correlation and PSD between each ear channel, the adjacent scalp channels, and the motor-related channels. All raw EEG signals were analyzed offline, with details as follows.

Data were downsampled from 1000 to 250 Hz, then filtered with a 4–38 Hz FIR filter. The FP1 channel of subject 4 was broken and was therefore removed and interpolated from the remaining scalp channels. Electrical signals were re-referenced with the reference electrode standardization technique, as proposed by Yao in 2001 [18]. The continuous raw data were segmented into [−2, 6] s epochs relative to the time point when the arrow started to display. An epoch was rejected if it contained data points higher than 6σ for a single channel or 2σ for all channels. Independent component analysis was performed on the remaining epochs for artifact elimination with ICLabel [19]. All processing was carried out using EEGLAB version 2019.0.
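To make the offline pipeline concrete, the following is a minimal MNE-Python sketch of the steps above, under our own assumptions: the file path, stimulus channel name, and event codes are hypothetical, the 6σ/2σ rule admits more than one reading, and the ICA/ICLabel step (done in EEGLAB in the paper) is omitted. It is an illustration, not the authors' code.

    # Minimal MNE-Python sketch of the offline preprocessing described above.
    import numpy as np
    import mne

    raw = mne.io.read_raw_fif("subject1_raw.fif", preload=True)  # hypothetical file
    raw.resample(250)                             # 1000 Hz -> 250 Hz
    raw.filter(4.0, 38.0, fir_design="firwin")    # 4-38 Hz FIR band-pass

    events = mne.find_events(raw, stim_channel="STI 014")        # hypothetical channel
    epochs = mne.Epochs(raw, events, event_id={"left": 1, "right": 2},
                        tmin=-2.0, tmax=6.0, baseline=None, preload=True)

    # One reading of the 6-sigma / 2-sigma rejection rule: drop an epoch if any
    # sample on a single channel exceeds 6*sigma, or if the all-channel average
    # exceeds 2*sigma (sigma estimated over the whole dataset).
    data = epochs.get_data()                      # (n_epochs, n_channels, n_times)
    sigma = data.std()
    keep = [i for i, ep in enumerate(data)
            if np.abs(ep).max() < 6 * sigma and np.abs(ep.mean(axis=0)).max() < 2 * sigma]
    epochs = epochs[keep]
    # (ICA with ICLabel-based artifact removal would follow here.)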
For measuring the PSD differences, all in-ear electrophysiological signals, the adjacent scalp channels (T9 and T10; blue in figure 3), and the motor-related channels (C3 and C4; red in figure 3) were selected. The mean power of the theta (4–8 Hz), alpha (8–13 Hz), beta (13–30 Hz), and low gamma (30–38 Hz) frequency bands was calculated for each 8 s (−2 to 6 s) epoch segment and averaged over all epochs. The mean power of the left/right in-ear biopotential was then divided by that of the adjacent T9/T10 channels to obtain the in-ear biopotential to adjacent scalp EEG power ratio, and by that of the motor-related C3/C4 channels to obtain the in-ear biopotential to motor-related scalp EEG power ratio.
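As an illustration of this band-power ratio computation, here is a sketch using Welch's method from SciPy rather than the authors' EEGLAB routines; the function and variable names are ours.

    # Mean band power per epoch via Welch's method, then the epoch-averaged
    # ear-to-scalp power ratio for one channel pair (e.g. LF vs. T9).
    import numpy as np
    from scipy.signal import welch

    FS = 250  # sampling rate after downsampling
    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "low_gamma": (30, 38)}

    def band_powers(x):
        """Mean PSD power of a single-channel epoch (1-D array) in each band."""
        freqs, psd = welch(x, fs=FS, nperseg=2 * FS)
        return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
                for name, (lo, hi) in BANDS.items()}

    def power_ratio(ear_epochs, scalp_epochs):
        """ear_epochs, scalp_epochs: (n_epochs, n_times) arrays for one channel.
        Returns the ear-to-scalp power ratio (%) per band, averaged over epochs."""
        ear = [band_powers(e) for e in ear_epochs]
        ref = [band_powers(e) for e in scalp_epochs]
        return {b: 100 * np.mean([e[b] for e in ear]) / np.mean([r[b] for r in ref])
                for b in BANDS}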
For the channel correlation, we calculated Pearson's correlation coefficient between each of the 8 ear electrodes and the 122 scalp electrodes for each subject; the resulting topography of the ear–scalp channel correlations varied in the range −1 to 1.

Figure 1. Example of the earpiece used in the study.

Figure 2. Schematic diagram of a task.

We used a permutation test to evaluate the channel correlation between the in-ear electrophysiological signals and the motor-related channels (C3 and C4), and to see whether the correlation coefficients were statistically different from chance level. In order to maintain temporal structure, the data points of each channel within the same trial were regarded as a whole. We performed the permutation test by randomizing the trials 5000 times and calculating the statistic, i.e. the correlation coefficient between the in-ear electrophysiological signals and the motor-related channels, for individual subjects during each permutation. We then estimated the p-value for the correlation coefficient calculated from the real data.
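This trial-shuffling test can be sketched as follows (our illustration in NumPy; shuffling whole trials preserves the within-trial temporal structure, as described above):

    # Permutation test for the ear-to-scalp channel correlation: the trial
    # order of the ear channel is shuffled relative to the scalp channel, so
    # only the trial pairing is randomized, never the samples within a trial.
    import numpy as np

    def perm_test_corr(ear, scalp, n_perm=5000, seed=0):
        """ear, scalp: (n_trials, n_times) arrays for one channel each.
        Returns the observed Pearson r and a two-sided permutation p-value."""
        rng = np.random.default_rng(seed)
        r_obs = np.corrcoef(ear.ravel(), scalp.ravel())[0, 1]
        null = np.empty(n_perm)
        for i in range(n_perm):
            order = rng.permutation(len(ear))   # shuffle whole trials only
            null[i] = np.corrcoef(ear[order].ravel(), scalp.ravel())[0, 1]
        p_value = (np.abs(null) >= np.abs(r_obs)).mean()
        return r_obs, p_value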
2.2. Classification
2.2.1. Classifier training
For the motor task classification, we applied a compact CNN model (EEGNet) proposed by Lawhern et al [20]. EEG signals were downsampled to 250 Hz and band-pass filtered between 4 and 38 Hz, and then electrode-wise exponential moving standardization was applied as introduced in Schirrmeister et al [21]. To compare the classification accuracy of in-ear biosignals and scalp EEG and to analyze the influence of reference location, the dataset was partitioned as follows: (1) scalp-EEG data (122 channels), all referenced to the scalp REF electrode; (2) in-ear electrophysiological signals (6 channels: xF, xB, xOD) referenced to the scalp REF electrode, as the ear-REF group; (3) in-ear electrophysiological signals (6 channels: xF, xB, xOD) referenced to the ipsilateral xOU channel, as the ear-IPSI group; (4) in-ear electrophysiological signals (6 channels: xF, xB, xOD) referenced to the contralateral xOU channel, as the ear-CONTRA group; (5) near-ear EEG (4 channels: T7, T8, T9, T10) referenced to the scalp REF electrode, as the near ear-EEG group. A 3 s epoch of signals was extracted, starting 1 s before and ending 2 s after the presentation of the arrow (see figure 2). Bad epochs were rejected if any channel had a peak-to-peak amplitude higher than 800 µV. All EEG data preprocessing was performed using the Python MNE package [22].
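For example, the ear-IPSI partition and the 3 s epoching could be built in MNE as below. This is a sketch continuing the earlier one: `raw` and `events` are assumed from that snippet, the event codes remain our assumption, and the derived bipolar channel names follow MNE's default anode-cathode naming.

    # Ear-IPSI referencing: left-ear channels referenced to LOU and right-ear
    # channels to ROU, then 3 s epochs with an 800 uV peak-to-peak threshold.
    import mne

    raw_ipsi = mne.set_bipolar_reference(
        raw.copy(),
        anode=["LF", "LB", "LOD", "RF", "RB", "ROD"],
        cathode=["LOU", "LOU", "LOU", "ROU", "ROU", "ROU"])

    picks = ["LF-LOU", "LB-LOU", "LOD-LOU", "RF-ROU", "RB-ROU", "ROD-ROU"]
    epochs = mne.Epochs(raw_ipsi, events, event_id={"left": 1, "right": 2},
                        tmin=-1.0, tmax=2.0, baseline=None, preload=True,
                        picks=picks, reject=dict(eeg=800e-6))  # 800 uV peak-to-peak

    X = epochs.get_data()       # EEGNet input: (n_epochs, n_channels, n_times)
    y = epochs.events[:, 2]     # class labels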
2.2.2. Classification—EEGNet
EEGNet comprises three building blocks. The first block applies F 2D convolutional filters of size (1, 10) to learn temporal features, followed by a depthwise convolution of size (C, 1) to learn spatial filters (C is the number of channels); a 2D max-pooling layer of size (1, 3) is then used to reduce the sampling rate. In the second block, 2F separable convolution filters of size (1, 10) and a 2D max-pooling layer of size (1, 3) are applied. The third block is a classification layer with a softmax function. In all layers except the final dense layer, batch normalization, the rectified linear unit activation function, and dropout are applied to prevent overfitting. By manual fine-tuning, we set F = 25, D (depth multiplier) = 2, dropout = 0.5, and learning rate = 0.001. See table 1 for more details of the EEGNet architecture used in this study. To fit the model, we used the Adam optimizer to minimize the categorical cross-entropy loss function and an early-stopping method to save the best model. All models were trained on an NVIDIA GeForce GTX 1660, with CUDA 10 and TensorFlow 2.0.
Figure 3. Projected scalp and ear electrode positions for EEG. The blue are scalp electrodes close to the ear (T7, T8, T9, T10); the red are motor-related electrodes (Cz, C1–6); the bold black are ear electrodes (xF, xB, xOU, xOD); the bold black italic are the reference electrode (REF) and the ground electrode (GRD).

Table 1. EEGNet architecture.

Block  Layer            #Filters  Size     Output                  Options
1      Input                               (C, T, 1)
       Conv2D           F         (1, 10)  (C, T-9, F)             mode = valid
       Activation                          (C, T-9, F)             ReLU
       DepthwiseConv2D  D*F       (C, 1)   (1, T-9, D*F)           mode = valid
       Activation                          (1, T-9, D*F)           ReLU
       BatchNorm                           (1, T-9, D*F)
       MaxPooling2D               (1, 3)   (1, (T-9)//3, D*F)      mode = valid
       Dropout                             (1, (T-9)//3, D*F)      p = 0.5
2      SeparableConv2D  2*F       (1, 10)  (1, (T-9)//3 - 9, 2*F)  mode = valid
       Activation                          (1, (T-9)//3 - 9, 2*F)  ReLU
       BatchNorm                           (1, (T-9)//3 - 9, 2*F)
       MaxPooling2D               (1, 3)   (1, T//9 - 4, 2*F)      mode = valid
       Dropout                             (1, T//9 - 4, 2*F)      p = 0.5
       Flatten                             (2*F*(T//9 - 4))
3      Dense            N                  N
       Softmax                             N

Note: C = number of channels, T = number of time points (T = 750), F = number of filters (F = 25), D = depth multiplier (D = 2), N = number of classes (N = 2).
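Table 1 can be translated almost line-by-line into TensorFlow 2 / Keras. The following is our reconstruction of that architecture, not the authors' released code; the layer ordering follows the table, which differs slightly from the original EEGNet of [20].

    # Keras sketch of the EEGNet variant in table 1.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_eegnet(C, T=750, F=25, D=2, N=2, dropout=0.5):
        inp = layers.Input(shape=(C, T, 1))
        # Block 1: temporal convolution, then depthwise spatial filtering.
        x = layers.Conv2D(F, (1, 10), padding="valid")(inp)
        x = layers.Activation("relu")(x)
        x = layers.DepthwiseConv2D((C, 1), depth_multiplier=D)(x)
        x = layers.Activation("relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D((1, 3))(x)
        x = layers.Dropout(dropout)(x)
        # Block 2: separable convolution.
        x = layers.SeparableConv2D(2 * F, (1, 10), padding="valid")(x)
        x = layers.Activation("relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D((1, 3))(x)
        x = layers.Dropout(dropout)(x)
        # Block 3: dense softmax classifier.
        x = layers.Flatten()(x)
        out = layers.Dense(N, activation="softmax")(x)
        model = models.Model(inp, out)
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                      loss="categorical_crossentropy", metrics=["accuracy"])
        return model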

2.2.3. Classification accuracy
Movement task classification was evaluated by three sets of analyses: within-subject classification, cross-subject classification, and individual subject retraining after cross-subject classification. In the within-subject classification, the data from each subject were split into training and testing sets. We used five-fold cross-validation, with four blocks of data as the training set and one block as the testing set. The average classification accuracy was calculated across all folds and all subjects for the within-subject model. For the cross-subject classification, we chose the data of five subjects as the training set and the left-out subject as the testing set. This process was repeated six times to include all subjects. The average classification accuracy was calculated across all subjects.
Figure 4. Simultaneously recorded in-ear biopotential and scalp EEG signals after downsampling, filtering, and artifact removal. The upper trace is one sample epoch from the T9 channel of the scalp EEG, and the lower trace is from the LF channel of the in-ear biopotential.

Figure 5. The correlation between ear channels and selected scalp channels during motor tasks (−2 to 6 s). (a) The correlation coefficients with channels T9, Oz, and T10 for all subjects; (b) the correlation coefficients with the motor-related channels C3 and C4 for all subjects; (c) the correlation map between ear channels and selected scalp channels (T9, T7, C5, C3, C1, Cz, C2, C4, C6, T8, T10), example from subject 6; (d) the correlation topography for ear channels, example from subject 6.

The individual subject retraining after cross-subject classification had additional retraining steps. The data of five subjects were split into a training set (80%) and a testing set (20%); the EEGNet model was trained using the five-subject training set, and the model weights were chosen based on the best testing accuracy. The left-out subject's data, also split into a training set (80%) and a testing set (20%), were then used to retrain the model starting from the five-subject weights. This process was also repeated six times to include all subjects. The average classification accuracy was calculated across all subjects.
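The retraining procedure can be sketched as follows. This is our illustration: `subject_data`, the one-hot label encoding, and the epoch counts are assumptions, and `build_eegnet` is the constructor sketched after table 1.

    # Leave-one-subject-out training with individual retraining (transfer
    # learning). subject_data[s] = (X, y): X is (n_epochs, C, T), y is one-hot.
    import numpy as np
    import tensorflow as tf
    from sklearn.model_selection import train_test_split

    early_stop = tf.keras.callbacks.EarlyStopping(patience=10,
                                                  restore_best_weights=True)
    accuracies = []
    for left_out in range(6):
        pool = [s for s in range(6) if s != left_out]
        X_pool = np.concatenate([subject_data[s][0] for s in pool])
        y_pool = np.concatenate([subject_data[s][1] for s in pool])
        X_tr, X_te, y_tr, y_te = train_test_split(X_pool, y_pool, test_size=0.2)

        model = build_eegnet(C=X_pool.shape[1])
        model.fit(X_tr[..., None], y_tr, validation_data=(X_te[..., None], y_te),
                  epochs=100, callbacks=[early_stop], verbose=0)

        # Retrain on 80% of the left-out subject, test on the remaining 20%.
        X_lo, y_lo = subject_data[left_out]
        Xl_tr, Xl_te, yl_tr, yl_te = train_test_split(X_lo, y_lo, test_size=0.2)
        model.fit(Xl_tr[..., None], yl_tr, epochs=50, verbose=0)
        accuracies.append(model.evaluate(Xl_te[..., None], yl_te, verbose=0)[1])

    print("mean retrained accuracy:", np.mean(accuracies))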
We compared the performance using one-way ANOVA to evaluate the five data groups as individual inputs in the three sets of analyses. We applied the Tukey multiple comparison test to compare the difference between each pair of means, and a p-value of less than 0.05 was considered statistically significant.

2.2.4. Feature visualization
In order to understand which features are essential for the classification, we used the deep learning important features (DeepLIFT) method to calculate importance scores for the first input layer of the EEGNet. DeepLIFT is a gradient-based relevance attribution method that evaluates how the input features contribute to the outcome compared to a 'reference' input [23]. We set the reference input to zeros and calculated the importance scores for each specific class (left/right), averaging across the epochs with good classification results (classification confidence > 70%) from the testing set. Increasing the importance score with a positive sign would lead to a higher possibility of that class, while increasing the score with a negative sign would reduce the possibility of that class.
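One accessible way to obtain DeepLIFT-style scores for a Keras model is the shap package's DeepExplainer, which implements a DeepLIFT variant; this sketch is our illustration under that assumption and may differ from the authors' exact tooling.

    # DeepLIFT-style attributions with an all-zero reference input.
    # `model` and `X_test` (shaped (n_epochs, C, T, 1)) are assumed to exist.
    import numpy as np
    import shap

    background = np.zeros_like(X_test[:1])        # the all-zero 'reference'
    explainer = shap.DeepExplainer(model, background)
    scores = explainer.shap_values(X_test)        # one score array per class

    # Average the scores over high-confidence (p > 0.7) epochs of one class.
    probs = model.predict(X_test)
    high_left = probs[:, 0] > 0.7
    mean_left_scores = scores[0][high_left].mean(axis=0)   # (C, T, 1) importance map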
3. Results

3.1. Data recordings
The filtered in-ear electrophysiological signals and the adjacent scalp EEG signals are shown in figure 4. We first investigated the channel correlation between the in-ear electrophysiological signals and the adjacent scalp EEG channels (T9 is the channel adjacent to the left ear channels, and T10 is the channel adjacent to the right ear channels; all channel locations are shown in figure 3). We found that the left/right ear channels were positively correlated with their respective adjacent channels. The correlation coefficients with the adjacent channels were close to 1. In contrast, the correlation coefficients with the Cz channel (located near the center of the scalp) were around −0.5, and the correlation coefficients with the contralateral channels were around zero (figure 5(a)). The channel correlation map (figure 5(c)) and the topography of the correlation coefficients between the ear channels and all scalp channels (figure 5(d)) showed that the in-ear electrophysiological signals were positively correlated with the adjacent scalp channels. Although the channel correlation coefficients between the in-ear biopotential and the motor-related channels were small (as shown in figure 5(b)), the permutation test showed that all the correlation coefficients were significantly different from the shuffled data (p < 0.001, figure 6), except for the correlation coefficient between the LOU and C3 channels in subject 3 (p = 0.087) and subject 4 (p = 0.72).

Figure 6. Channel correlation coefficients between the ear channels and the motor-related channels C3 (a) and C4 (b). The shuffled data distributions were obtained from 5000 permutations. The solid lines indicate the channel correlation coefficients calculated from the real data. Comparing the channel correlation coefficients from the real data with the permutation distributions, all channel correlations had p-values less than 0.001. Example from subject 1.

Figure 7. The power spectral density differences for in-ear biopotential and selected scalp channels. (a) The in-ear biopotential to adjacent scalp EEG power ratio boxplot for all subjects; (b) the in-ear biopotential to motor-related scalp EEG power ratio boxplot for all subjects; (c) the median power spectral density for in-ear biopotential (all eight channels) and adjacent scalp EEG (T7, T8, T9, T10) in solid lines, with 25%–75% quantiles in dashed lines. Example from subject 6.
We then compared the PSD between the in-ear biopotential and the selected scalp EEG channels (including the adjacent channels T9 and T10, and the motor-related channels C3 and C4). We found that the power ratios of the in-ear electrophysiological signals to the adjacent channels were basically near 100% in all frequency bands for all subjects, except for the ear channel ROU, which was around 150% in all four frequency bands (figure 7(a)). The power ratios of the in-ear biopotential to the selected motor channels varied from 10% to 50% across frequency bands for all subjects (figure 7(b)).

Table 2. Accuracy (%) for within-subject classification.

                    Subject
EEG signal      1        2        3        4        5        6        Average
Scalp EEG       99.58    89.37    76.79    64.36    94.03    85.41    84.93
ear-IPSI        52.35    50.24    49.41    33.75    49.09    54.58    48.24
near ear-EEG    67.92    80.63    58.87    41.09    63.27    62.08    62.23
ear-REF         64.02    77.00    44.71    43.75    54.09    60.42    57.33
ear-CONTRA      48.95    41.40    48.24    46.25    42.73    49.58    46.19

Figure 8. Five-fold within-subject classification performance averaged over all folds and all subjects for the ear-CONTRA group, the ear-REF group, the near ear-EEG group, the ear-IPSI group, and the scalp EEG group. The dashed horizontal line indicates random classification at 50% accuracy. There was a significant difference between the five groups in within-subject classification accuracy (F(4,25) = 13.06, p < 0.001). Tukey's post-hoc tests were used to compare the differences between each pair of groups. * indicates p < 0.05; ** indicates p < 0.01; *** indicates p < 0.001; no stars indicates p > 0.05 for that pair of comparison.

3.2. Classification results
We evaluated the classification performance of the in-ear electrophysiological signals and the scalp EEG in three different conditions: within-subject classification, cross-subject classification, and individual subject retraining after cross-subject classification.

3.2.1. Within-subject classification
The within-subject classification was used to evaluate the classification performance of each individual subject's in-ear biopotential. The ear-REF group, the ear-CONTRA group, the ear-IPSI group, the near ear-EEG group, and the scalp EEG group were each used separately as the input to the EEGNet. The within-subject five-fold cross-validation results are shown in figure 8 and table 2. We found that the scalp EEG group, the near ear-EEG group, and the ear-REF group had average prediction accuracies higher than the random classification level, while the other two groups had prediction accuracies at or below 50%. There was a statistically significant difference between the five groups, and the scalp EEG group had better prediction accuracy than the other four groups. The poor performance in the within-subject classification may be caused by the limited training samples from individual subjects and the fewer channels used in the in-ear biopotential recordings.

3.2.2. Cross-subject classification
In the cross-subject classification, the classification result for the left-out subject was predicted directly with the EEGNet weights trained on the other subjects. Similar to the within-subject analysis, the cross-subject classification results also showed a statistical difference between the EEG groups. The prediction accuracy of the scalp EEG group was significantly higher than that of the other four groups (figure 9, table 3). However, the prediction accuracy of the three in-ear biopotential groups was close to the random classification level. The unsatisfactory performance in these three groups may be caused by between-subject variation.
Table 3. Accuracy (%) for cross-subject classification.

                    Subject
EEG signal      1        2        3        4        5        6        Average
Scalp EEG       92.08    80.00    79.49    62.26    91.88    75.00    80.12
ear-IPSI        50.63    47.13    48.24    50.00    47.27    50.42    48.95
near ear-EEG    48.95    60.51    45.88    47.50    55.45    52.50    51.80
ear-REF         50.63    50.32    54.12    46.25    51.82    52.08    50.87
ear-CONTRA      60.26    75.14    48.23    43.75    59.09    64.58    58.51

Figure 9. Cross-subject classification performance averaged over all subjects for the ear-CONTRA group, the ear-REF group, the near ear-EEG group, the ear-IPSI group, and the scalp EEG group. The dashed horizontal line indicates random classification at 50% accuracy. There was a significant difference between the EEG groups in cross-subject classification accuracy (F(4,25) = 16.9, p < 0.001). Tukey's post-hoc tests were used to compare the differences between each pair of groups. ** indicates p < 0.01; *** indicates p < 0.001; no stars indicates p > 0.05 for that pair of comparison.

Table 4. Accuracy (%) for individual subject retraining after cross-subject classification.

                    Subject
EEG signal      1        2        3        4        5        6        Average
Scalp EEG       100      96.88    87.50    81.82    95.74    93.75    92.61
ear-IPSI        56.25    56.25    64.71    56.25    50.00    66.67    58.35
near ear-EEG    72.92    81.25    76.47    68.75    68.18    83.33    75.15
ear-REF         77.08    81.25    70.59    62.50    59.09    70.83    70.22
ear-CONTRA      66.67    81.25    64.71    50.00    70.45    75.00    68.01

3.2.3. Individual subject retraining after cross-subject classification
In order to overcome the between-subject variation and the limited channel number, we utilized the concept of transfer learning. We retrained the cross-subject EEGNet with 80% of the samples of the left-out subject. The results are shown in figure 10 and table 4. Interestingly, with individual subject retraining, we found that the average classification accuracies of all groups were now higher than the random classification level. The performance of the ear-REF group, which was referenced to the scalp REF electrode, was better than that of the ear-IPSI group. Although the classification accuracy of the ear-CONTRA group was higher than that of the ear-IPSI group, the statistical analysis showed no significant difference. The accuracies of the scalp EEG group and the near ear-EEG group were also significantly better than that of the ear-IPSI group.

3.3. Feature visualization
The DeepLIFT method was applied to the individual subject retraining model to investigate which features are essential for the classification [23]. Based on the prediction results from the individual subject retraining model, we selected epochs with high confidence (probability > 0.7) in either one of the two classes. We then calculated the importance scores for each epoch from the two classes using DeepLIFT. Compared to the 'reference' input, a positive sign of the scores indicates that an increase of the feature value, which corresponds to the amplitude of the signal recorded from a channel, will increase the prediction probability of that class, while a negative sign of the scores counts against that class.
Figure 10. The performance of individual subject retraining after cross-subject classification, averaged over all subjects for the ear-CONTRA group, the ear-REF group, the near ear-EEG group, the ear-IPSI group, and the scalp EEG group. The dashed horizontal line indicates random classification at 50% accuracy. There was a significant difference between the EEG groups in classification accuracy (F(4,25) = 15.5, p < 0.001). Tukey's post-hoc tests were used to compare the differences between each pair of groups. * indicates p < 0.05; ** indicates p < 0.01; *** indicates p < 0.001; **** indicates p < 0.0001; no stars indicates p > 0.05 for that pair of comparison.

We estimated the importance scores by calculating the means across the high-confidence trials from the same class. The left and right movement conditions were calculated independently for the ear-REF group, the ear-CONTRA group, and the scalp EEG group (figure 11). The ear-IPSI group was not considered because of its poor classification performance.

The results showed that the ear-REF group, the ear-CONTRA group, and the scalp EEG group all had changes in importance scores about 0.25 s after arrow presentation. We observed that the feature scores for the different groups of ear channels (left: LF, LB, LOD; right: RF, RB, ROD) had opposite signs, and the sign of the feature scores switched between ear channel groups when the movement switched sides. Similar results were found across all subjects. In the scalp EEG group, we also found that the sign of the feature scores was opposite among different groups of channels, which likewise correlated with the left/right movements.

4. Discussion

In-ear biopotential monitoring provides a portable, wearable, and comfortable system, overcomes the comfort and suitability limitations of scalp EEG, and enables the integration of the human–computer interface into daily life. In this paper, we investigated the quality of the in-ear biopotential by comparing ear–scalp channel correlations and power spectra. In-ear biopotential recordings were highly correlated with the adjacent scalp channels and showed statistically significant correlations with the motor-related channels. Most of the in-ear biopotential channels presented a power distribution similar to that of the adjacent scalp EEG across different frequency bands. These results substantiated a high degree of similarity between the in-ear biopotential and the adjacent scalp channels.

We then evaluated the possibility of in-ear sensing for motor task and human–computer interface applications. We classified the two-class motor task using the in-ear biopotential and the EEG compact network. The performance of the in-ear biosignals in the within-subject classification and cross-subject classification was much lower than that of the scalp EEG. The small trial numbers and the limited channels of the in-ear biopotential may cause the poor performance in the within-subject classification, and the large between-subject variation in the in-ear biopotential may lead to the unsatisfactory performance in the cross-subject classification. The lower performance of the in-ear biopotential groups compared to the scalp EEG groups may also be due to the lower channel correlation between the ear channels and the motor-related channels, although this correlation was statistically different from the random level. Moreover, satisfactory performance was achieved when we borrowed the concept of transfer learning and retrained the EEGNet using the individual subjects' in-ear biopotential. Therefore, the in-ear biopotential can be used in the two-class motor classification study.

In the feature visualization of the EEGNet classification, a positive feature score for one category indicates that increasing amplitude in that channel would shift the classification probability toward that category. Although it is commonly believed that hand movement is controlled by the contralateral motor cortex, the event-related potential (ERP) study in [24] showed no obvious correlation of ERP amplitude with contralateral hand movement. Moreover, the feature scores revealed that the EEG amplitude differences within the time window (0 to 0.5 s after cue) were closely related to the motor task, which indicates that the EEGNet classification was not using random features.
Figure 11. Important feature scores of the scalp EEG group ((a), (b)), the ear-REF group ((c), (d)), and the ear-CONTRA group ((e), (f)) for the individual subject retraining model, calculated using DeepLIFT. The average scores for high-confidence trials of left movement (left column) and right movement (right column). Example from subject 2.

Nevertheless, one limitation was also found when referencing ear channels to a nearby reference electrode (such as the ipsilateral referencing in our study): the ear-IPSI group performed poorly across all classification analyses. Some general directional information may be canceled out when referencing to a nearby electrode. Referencing ear channels to a distant electrode (such as the scalp referencing and contralateral referencing in our study) produced better classification results. The reference electrode of an in-ear biopotential recording device should therefore be placed at a suitable distance from the ear channels. We will extend the investigation of in-ear biopotential monitoring in the human–computer interface to other applications in daily life.

5. Conclusion

We have introduced a novel approach to recording the in-ear biopotential with electrodes placed on both the ear canal and the concha. The results of this study indicated that the in-ear biopotential has excellent correlation with the adjacent scalp channels, and demonstrated the feasibility of constructing a human–computer interface for a two-class motor task using the in-ear biopotential recording device. Overall, this study strengthens the idea that the in-ear biopotential offers great convenience in implementation and portability, and facilitates useful applications in the real world. Considerably more work will be needed to determine the best location for the reference electrode. With further development of the in-ear biopotential recording device, an in-ear biopotential based human–computer interface system is feasible in real-life applications.

ORCID iDs

Xiaoli Wu https://orcid.org/0000-0002-5651-1673
Wenhui Zhang https://orcid.org/0000-0002-7672-3104
Zhibo Fu https://orcid.org/0000-0003-0099-3201
Roy T H Cheung https://orcid.org/0000-0002-0288-7755
Rosa H M Chan https://orcid.org/0000-0003-4808-2490

References

[1] Jackson A F and Bolger D J 2014 The neurophysiological bases of EEG and EEG measurement: a review for the rest of us Psychophysiology 51 1061–71
[2] Schomer D L and Da Silva F L 2012 Niedermeyer's Electroencephalography: Basic Principles, Clinical Applications, and Related Fields (Oxford: Oxford University Press)
[3] Teplan M 2002 Fundamentals of EEG measurement Meas. Sci. Rev. 2 1–11
[4] Nunez P L and Srinivasan R 2007 Electroencephalogram Scholarpedia 2 1348
[5] Looney D, Kidmose P, Park C, Ungstrup M, Rank M L, Rosenkranz K and Mandic D P 2012 The in-the-ear recording concept: user-centered and wearable brain monitoring IEEE Pulse 3 32–42
[6] Nguyen A, Alqurashi R, Raghebi Z, Banaei-Kashani F, Halbower A C, Dinh T and Vu T 2016 In-ear biosignal recording system: a wearable for automatic whole-night sleep staging Proc. of the 2016 Workshop on Wearable Systems and Applications pp 19–24
[7] Fiedler L, Obleser J, Lunner T and Graversen C 2016 Ear-EEG allows extraction of neural responses in challenging listening scenarios—a future technology for hearing aids? 2016 38th Annual Int. Conf. of the IEEE Eng. Med. Biol. Soc. pp 5697–700
[8] Wolpaw J R, Birbaumer N, McFarland D J, Pfurtscheller G and Vaughan T M 2002 Brain–computer interfaces for communication and control Clin. Neurophysiol. 113 767–91
[9] Goverdovsky V, von Rosenberg W, Nakamura T, Looney D, Sharp D J, Papavassiliou C, Morrell M J and Mandic D P 2017 Hearables: multimodal physiological in-ear sensing Sci. Rep. 7 1–10
[10] Kidmose P, Looney D, Ungstrup M, Rank M L and Mandic D P 2013 A study of evoked potentials from ear-EEG IEEE Trans. Biomed. Eng. 60 2824–30
[11] Mikkelsen K B, Kappel S L, Mandic D P and Kidmose P 2015 EEG recorded from the ear: characterizing the ear-EEG method Front. Neurosci. 9 438
[12] Kim Y-J, Kwak N-S and Lee S-W 2018 Classification of motor imagery for ear-EEG based brain–computer interface 2018 6th Int. Conf. on Brain-Computer Interface (BCI) pp 1–2
[13] Kaongoen N and Jo S 2018 An auditory P300-based brain–computer interface using ear-EEG 2018 6th Int. Conf. on Brain-Computer Interface (BCI) pp 1–4
[14] Ahn J W, Ku Y, Kim D Y, Sohn J, Kim J-H and Kim H C 2018 Wearable in-the-ear EEG system for SSVEP-based brain–computer interface Electron. Lett. 54 413–14
[15] Kim Y-J, Kwak N-S and Lee S-W 2018 Classification of motor imagery for ear-EEG based brain-computer interface 2018 6th Int. Conf. on Brain-Computer Interface (BCI) IEEE pp 1–2
[16] Oostra K M, Oomen A, Vanderstraeten G and Vingerhoets G 2015 Influence of motor imagery training on gait rehabilitation in sub-acute stroke: a randomized controlled trial J. Rehabil. Med. 47 204–9
[17] Caligiore D, Mustile M, Spalletta G and Baldassarre G 2017 Action observation and motor imagery for rehabilitation in Parkinson's disease: a systematic review and an integrative hypothesis Neurosci. Biobehav. Rev. 72 210–22
[18] Yao D 2001 A method to standardize a reference of scalp EEG recordings to a point at infinity Physiol. Meas. 22 693
[19] Pion-Tonachini L, Kreutz-Delgado K and Makeig S 2019 ICLabel: an automated electroencephalographic independent component classifier, dataset and website NeuroImage 198 181–97
[20] Lawhern V J, Solon A J, Waytowich N R, Gordon S M, Hung C P and Lance B J 2018 EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces J. Neural Eng. 15 056013
[21] Schirrmeister R T, Springenberg J T, Fiederer L D J, Glasstetter M, Eggensperger K, Tangermann M, Hutter F, Burgard W and Ball T 2017 Deep learning with convolutional neural networks for brain mapping and decoding of movement-related information from the human EEG arXiv:1703.05051
[22] Gramfort A et al 2013 MEG and EEG data analysis with MNE-Python Front. Neurosci. 7 267
[23] Shrikumar A, Greenside P and Kundaje A 2017 Learning important features through propagating activation differences Proc. of the 34th Int. Conf. on Machine Learning vol 70 pp 3145–53
[24] Huong N T M and Linh H Q 2017 Classification of left/right hand movement EEG signals using event related potentials and advanced features Int. Conf. on the Development of Biomedical Engineering in Vietnam pp 209–15