
Artificial Intelligence In Medicine 102 (2020) 101765

Contents lists available at ScienceDirect

Artificial Intelligence In Medicine


journal homepage: www.elsevier.com/locate/artmed

DESIGN AND DEVELOPMENT OF HUMAN COMPUTER INTERFACE USING ELECTROOCULOGRAM WITH DEEP LEARNING
Geer Teng a,b, Yue He b, Hengjun Zhao c, Dunhu Liu d, Jin Xiao b,*, S. Ramkumar e

a The Faculty of Social Development and Western China Development Studies, Sichuan University, Chengdu, 610065, China
b School of Business, Sichuan University, Chengdu, 610065, China
c School of Economics and Management, Sichuan Radio and TV University, Chengdu, 610073, China
d Management Faculty, Chengdu University of Information Technology, Chengdu, 610065, China
e School of Computing, Kalasalingam Academy of Research and Education, Krishnankoil, Virudhunagar (Dt), India

ARTICLE INFO

Keywords: Electrooculogram (EOG), Band Power (BP), Human Computer Interface (HCI), Amyotrophic lateral sclerosis (ALS), Pattern Recognition Neural Network (PRNN)

ABSTRACT

Today's assistive devices play a significant role in helping people communicate with others. Among them, Human Computer Interfaces (HCI) based on the Electrooculogram (EOG) play a vital part, since they can surpass conventional methods in performance and accuracy. In this work we analyzed EOG signals from twenty subjects to design a nine-state EOG-based HCI, using a five-electrode system to measure horizontal and vertical eye movements. The signals were preprocessed to remove artifacts; valuable information was extracted from the collected data using band power and the Hilbert Huang Transform (HHT), and a Pattern Recognition Neural Network (PRNN) was trained to classify the tasks. Classification accuracies of 92.17% and 91.85% were obtained for band power and HHT features with the PRNN architecture. Recognition accuracy was analyzed offline to identify the possibilities of designing the HCI. We compared the two feature extraction techniques with PRNN to determine the best method for classifying the tasks and recognizing single trail tasks for the HCI design. Our experimental results confirm that the classification and recognition accuracy of the collected signals using band power with PRNN is better than that of the other method used in this study. We also compared the performance of the male subjects with the female subjects, and then compared the male and female subjects by age group. From this we conclude that male performance was appreciable compared with the female subjects, and that the performance and recognition accuracy of the 26 to 32 age group were high compared with the other age groups used in this study.

1. INTRODUCTION

Neurological disorders are increasing daily. People with motor impairment due to a neurological disorder face several challenges in movement and in communicating with those around them. In particular, persons affected by ALS suffer impaired muscle movement as well as impaired speech. For such persons, eye movements can be used effectively as a communication medium to convey their thoughts, with the help of assistive technologies. An EOG-based HCI is one technique that overcomes this problem: it is a powerful communication mechanism between users and systems, and it needs no exterior devices or muscle involvement to issue commands and complete the interaction [1,2]. An EOG-based HCI system can increase the communication ability, as well as the quality of life, of disabled individuals who cannot speak or move their limbs. To this end, different types of HCI systems have been developed in recent years, among them electric wheelchairs [3], cursor control [4], tooth controllers [5], hospital alarm systems [6], lip movement systems [7], sip-and-puff controllers [8], virtual keyboards [9,11], television control systems [10], smartphones [12], virtual game controllers [13], and eye tracking systems [14].

Eye movements are basically categorized into eight basic movements, namely up, down, right, left, up-right, up-left, down-right and down-left. The majority of HCIs use this conventional set to design devices for the weakened [15–19]. In this study we additionally included five more eye movement tasks, for eleven different tasks in total, in order to evaluate the nine-state HCI. The five additional movements were stare, open, close, rapid movement and lateral movement. The signals collected from the different eye movements


Corresponding author.
E-mail address: swwre43fdfgs@163.com (J. Xiao).

https://doi.org/10.1016/j.artmed.2019.101765
Received 16 September 2019; Received in revised form 28 October 2019; Accepted 15 November 2019
0933-3657/ © 2019 Elsevier B.V. All rights reserved.

Fig. 1. Equipment Setup during Signal Acquisition for Subject S12.

Fig. 2. Raw EOG signal acquired from subject S12 for eleven tasks.

were applied with the band power and HHT methods to extract features. The features were classified using a PRNN model to identify the tasks performed by the subjects, with the aim of developing assistive devices for the elderly disabled.

2. BACKGROUND

Research based on the EOG technique is increasing day by day as a way to overcome neurodegenerative disease. A few prominent studies contributed to this field are mentioned below. Usakli and Serkan Gurkan (2009) developed a virtual keyboard for disabled persons by collecting signals from eight subjects, sampled at 176 Hz. The acquired signals were applied to Euclidean distance to extract features and classified with Nearest Neighborhood, obtaining an accuracy of 92%; the outcome of the study showed that a subject could write five words in 25 seconds [20]. Watcharin et al (2012) proposed a keyboard controller for patients with locked-in syndrome using two channels with six electrodes. A voltage threshold algorithm was used to classify the signals, obtaining an accuracy of 95.2% with a typing rate of 25.94 sec/letter on a virtual keyboard [21]. Ang et al (2015) developed cursor control using EOG signals from eight subjects in indoor and outdoor environments using a NeuroSky headset, obtaining average accuracies of 84.42% and 71.50% [22]. Arnab Rakshit et al (2016) developed an assistive device for speech-disabled persons using twelve healthy subjects; the collected signals were applied to Power Spectral Density to extract features, which were classified using SVM with a multilayer perceptron, achieving 90% accuracy in classifying the tasks [23]. Anwesha Banerjee and D. N. Tibarewala (2016) designed an EOG-based HCI that detects dry-eye users using a threshold technique, obtaining a maximum average accuracy of 96.67% in offline mode [24]. Yurdagul et al (2017) designed an EOG-based HMI for Amyotrophic Lateral Sclerosis (ALS) patients from twenty subjects using six electrodes, measuring both horizontal and vertical eye movements; the collected signals were classified by KNN and SVM, obtaining accuracies of 90.3% and 92.6% respectively, from which they concluded that the performance of SVM was better than that of KNN [25]. R. S. Soundariya and R. Renuga designed an EOG-based emotion analyzer using Independent Component Analysis and Multi-class Support Vector Machines [26]. Zakir Hossain et al (2017) designed an eyeball-controlled cursor by placing five electrodes to capture horizontal and vertical eye movements; the acquired signals were classified with Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) to categorize the different tasks performed by the subjects [27]. In this study we made a comparison between male and female subjects with different age


Fig. 3. Feature Extracted Signal from Eleven Different Eye Movements for Subject 12 using Band Power.

groups from 20 to 44, to confirm the probability of devising the HCI.

3. METHODS

3.1. Experimental Protocol

EOG signals were collected from twenty healthy subjects of different age groups using an ADT26 bio amplifier, sampled at 100 Hz for two seconds, and the signals were split into 2 Hz bands in the range 0.1 to 16 Hz. The protocol designed for this study, the experimental setup, the electrode placement and the signal pre-processing methods were already explained by the same authors in their previous work [28]. The equipment setup and the raw signals acquired from subject S12 are shown in Fig. 1 and Fig. 2 respectively.

3.2. Feature Extraction

Features were extracted from the filtered signals using the band power and HHT methods, which use the following procedures.

3.2.1. Band Power

In the band power method the signal X(t) is summed and squared, and a logarithmic transform is applied to the band power data, so that

S = \sum_{t=0}^{n-1} X(t)   (1)

R = 20 \log(S^2)   (2)

where S is the summed signal and R is the band power density of the signal [29].

3.2.2. Hilbert Huang Transform

The Hilbert Huang Transform is an empirical approach that molds a signal X(t) into intrinsic functions to obtain the momentary regularity of the signal. It furnishes the signal in time, frequency and energy patterns, and deals with nonlinear and non-stationary signals better than the traditional model of constant frequency and amplitude. By using this approach any problematic signal can be broken down into a fixed number of components of the original signal, in the form of an orthogonal basis, called intrinsic mode functions (IMFs). The method for obtaining an IMF is called sifting, so the sifting process of the HHT signal can be written as

X(t) = Y(t)   (3)

where X(t) is the original signal and Y(t) is the momentary regularity of the IMF signals, which can be expressed as

Y(t) = \mathrm{Real}\left[ \sum_{j=1}^{n} a_j(t)\, e^{\,i \int \omega_j(t)\, dt} \right]   (4)

"Real" in the above equation states that the original data is


Fig. 4. Feature Extracted Signal from Eleven Different Eye Movements for Subject 12 using Hilbert Huang Transform.

expressed as a real part, and Y(t) is the Hilbert transform applied to each IMF component of the real part [30–32].
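As a concrete illustration, the band power feature of Eqs. (1) and (2) can be sketched in a few lines of Python. This is a minimal sketch under our own assumptions (NumPy, a base-10 logarithm in Eq. (2), and a simple FFT-mask split into the 2 Hz sub-bands of Section 3.1); it is not the authors' code:

```python
import numpy as np

FS = 100  # sampling rate in Hz (Section 3.1)

def band_power(x):
    """Band power per Eqs. (1)-(2): sum the signal, square, log-transform.
    A base-10 logarithm is assumed; the paper does not state the base."""
    s = np.sum(x)                  # Eq. (1): S = sum over t of X(t)
    return 20 * np.log10(s ** 2)   # Eq. (2): R = 20*log(S^2)

def band_power_features(x, fs=FS, lo=0.1, hi=16.0, width=2.0):
    """Split the signal into 2 Hz sub-bands between 0.1 and 16 Hz with a
    simple FFT mask and compute one band power value per sub-band."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    feats = []
    f = lo
    while f < hi:
        mask = (freqs >= f) & (freqs < f + width)
        sub = np.fft.irfft(spec * mask, n=len(x))   # band-limited signal
        # rectify and add a tiny offset so the logarithm stays defined
        feats.append(band_power(np.abs(sub) + 1e-12))
        f += width
    return np.array(feats)
```

For a two-second window at 100 Hz (200 samples) this yields eight sub-band values per channel; over the two EOG channels that would give the sixteen energy features per task reported below (our reading of the feature count, stated here as an assumption).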
From each feature extraction technique, sixteen energy features were collected from each individual task signal. The features were extracted for ten such trials for each task, and 110 data samples were collected to test the designed architectural model separately in identifying the tasks. The feature-extracted signals of the two techniques for subject S12 are shown in Fig. 3 and Fig. 4 for the band power and Hilbert Huang Transform features respectively.

Fig. 5. Architecture of Pattern Recognition Neural Network Model.
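The HHT feature of Section 3.2.2 can be sketched similarly. A full empirical mode decomposition (the sifting step) is too long for a short example, so the sketch below assumes an IMF is already available and shows only the Hilbert step behind Eq. (4): the analytic signal yields the instantaneous amplitude a_j(t) and angular frequency ω_j(t) of the IMF. The energy-feature form at the end is our own assumption, not the authors' exact formula:

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_attributes(imf, fs=100):
    """Instantaneous amplitude a_j(t) and angular frequency w_j(t) of one
    IMF, from the analytic signal z(t) = imf(t) + i*H[imf](t) (cf. Eq. (4))."""
    z = hilbert(imf)                 # analytic signal
    amp = np.abs(z)                  # a_j(t)
    phase = np.unwrap(np.angle(z))   # instantaneous phase in radians
    omega = np.gradient(phase) * fs  # w_j(t) in rad/s
    return amp, omega

def hht_energy_feature(imf):
    """One assumed energy feature per IMF: log of the summed squared
    instantaneous amplitude."""
    amp, _ = hilbert_attributes(imf)
    return float(np.log10(np.sum(amp ** 2)))
```

For a pure 5 Hz sine sampled at 100 Hz, the recovered amplitude is close to 1 and the angular frequency close to 2π·5 rad/s away from the window edges.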


Table 1
Gender Based Classification for Twenty Subjects.

Gender   Subjects
Male     S1, S2, S3, S4, S5, S6, S7, S8, S9, S10
Female   S11, S12, S13, S14, S15, S16, S17, S18, S19, S20

Table 2
Age Group Based Classification for Twenty Subjects.

Age Group   Subjects
20-25 yrs   S1, S2, S3, S4, S11, S12, S13, S14
26-32 yrs   S5, S6, S7, S8, S15, S16, S17, S18
33-44 yrs   S9, S10, S19, S20

4. CLASSIFICATION TECHNIQUE

A Pattern Recognition Neural Network (PRNN) is a multilayered feed-forward network with feedback connections. Each layer is connected to the next layer in the forward direction: the input layer is connected to the hidden layer, and the hidden layer to the output layer, by means of interrelated weights. Network activation moves exclusively in one direction, from the input layer to the output layer, passing through the hidden layer, while the errors propagate backwards from the output nodes to the input nodes [33]. The Levenberg Marquardt backpropagation algorithm was used to update the weights. The structure of the Pattern Recognition Neural Network architecture used in this study is shown in Fig. 5.

Table 3
Classification Performance of PRNN Using Band Power Features.

S.no  Sub  Hidden   Mean Training  Mean Testing  Classification Performance for PRNN in %
           Neurons  Time (sec)     Time (sec)    Max    Min    Mean   SD
1     S1   8        24.28          0.68          94.24  89.79  92.31  1.76
2     S2   8        24.32          0.62          94.29  88.65  91.25  2.04
3     S3   8        24.56          0.64          94.70  89.48  91.87  1.40
4     S4   8        24.59          0.69          94.64  89.90  92.13  1.50
5     S5   8        25.19          0.64          94.55  89.09  92.80  1.89
6     S6   8        24.72          0.63          94.72  88.77  93.14  1.68
7     S7   8        24.67          0.68          94.58  88.92  93.45  1.76
8     S8   8        24.21          0.61          95.63  90.62  94.88  1.30
9     S9   8        24.79          0.66          93.64  89.18  91.22  1.51
10    S10  8        24.32          0.67          93.64  87.36  91.86  1.55
11    S11  8        24.87          0.72          94.89  89.09  91.78  1.37
12    S12  8        24.74          0.71          94.72  88.72  91.51  1.94
13    S13  8        24.39          0.69          94.55  88.56  91.88  1.95
14    S14  8        25.45          0.76          96.36  90.43  91.64  1.69
15    S15  8        24.86          0.70          94.79  89.09  92.45  1.60
16    S16  8        24.54          0.74          94.52  88.68  92.42  1.67
17    S17  8        24.63          0.68          94.66  89.09  92.48  2.10
18    S18  8        24.19          0.71          94.69  88.18  92.04  1.48
19    S19  8        24.89          0.63          93.72  89.09  91.27  1.40
20    S20  8        24.58          0.65          94.50  89.63  91.96  1.68

Table 4
Classification Performance of PRNN Using Hilbert Huang Transform Features.

S.no  Sub  Hidden   Mean Training  Mean Testing  Classification Performance for PRNN in %
           Neurons  Time (sec)     Time (sec)    Max    Min    Mean   SD
1     S1   8        25.45          0.66          93.96  89.32  92.01  1.79
2     S2   8        25.11          0.68          93.81  88.90  91.10  2.30
3     S3   8        25.09          0.73          94.21  89.11  91.41  1.49
4     S4   8        24.85          0.72          94.14  89.71  91.69  1.60
5     S5   8        24.91          0.69          94.02  88.66  92.41  1.46
6     S6   8        24.69          0.68          94.43  88.34  92.70  2.00
7     S7   8        25.04          0.66          94.18  88.70  92.80  1.88
8     S8   8        24.64          0.67          94.68  90.31  93.58  1.36
9     S9   8        25.49          0.68          92.93  88.88  91.02  2.09
10    S10  8        25.11          0.72          93.20  88.79  91.57  1.72
11    S11  8        24.62          0.68          93.98  89.12  91.36  1.49
12    S12  8        25.16          0.63          94.08  88.75  91.32  1.88
13    S13  8        24.93          0.65          94.34  88.36  91.46  1.83
14    S14  8        24.80          0.63          93.63  90.16  91.52  1.92
15    S15  8        25.23          0.68          94.36  89.17  92.20  1.56
16    S16  8        24.90          0.63          94.28  88.68  92.10  1.49
17    S17  8        24.88          0.65          94.32  89.24  92.24  2.12
18    S18  8        25.34          0.61          94.21  88.46  91.69  1.80
19    S19  8        25.24          0.61          93.30  89.81  91.16  1.98
20    S20  8        25.12          0.65          94.15  89.52  91.58  2.08

5. RESULT AND DISCUSSION

We conducted our experiment with the ADI T26 Power Lab, using a two-channel arrangement with cup-shaped electrodes placed on the human face to measure horizontal and vertical eye movements. The twenty subjects who participated in the experiment were divided into two categories, male and female, as shown in Table 1. For the experimental analysis we compared the two genders in three age-group categories between 20 and 44, as depicted in Table 2. To analyze the classification accuracy of the system for the different age groups, we applied the band power and HHT features with the PRNN network model. The classification accuracy for the twenty subjects using band power and HHT features with PRNN is illustrated in Table 3 and Table 4. From Table 3 and Table 4 we obtained overall mean maximum classification accuracies of 94.60% and 94.01% and overall minimum classification accuracies of 89.12% and 89.09%; the standard deviations varied from 1.31 to 2.04 and from 1.36 to 2.3; and the mean training and testing times were 24.64 seconds and 0.68 seconds for band power and 25.03 seconds and 0.67 seconds for HHT features with the PRNN model. From Table 3 and Table 4 we found first maximum classification accuracies of 95.63% and 93.58% and second maximum classification accuracies of 93.45% and 92.80%, for subjects S8 and S7, using band power and HHT features with the PRNN model. The minimum classification accuracy was obtained by subject S9, at 91.22% and 91.02% for band power and HHT features with PRNN, as shown in Fig. 6. Through this classification analysis we found that the classification accuracy using band power features with the PRNN model was outstanding compared to HHT with the PRNN model; the study thus indicates that band power features with the PRNN model are more suitable for classifying the tasks performed by the participants.

5.1. Gender Wise Classification

We also compared the performance of the male subjects with that of the female subjects. At the end of the study we found that the performance of the male subjects was higher, for all ten subjects, than that of the female subjects, as shown in Fig. 6. From this analysis we concluded that male performance was appreciable compared to female performance across all the tasks. During training the male subjects were able to perform all the tasks more easily than the female subjects, and we observed that the female subjects got tired sooner than the male subjects.


Fig. 6. Overall Classification Accuracy for Twenty Subjects using Band Power and HHT Features using PRNN.

Fig. 7. Classification accuracy for Band Power and HHT Features using PRNN for 20-25 years Age group.

Fig. 8. Classification accuracy for Band Power and HHT Features using PRNN for 26-32 years Age group.

5.2. Age Group Wise Classification

We also compared the classification accuracy results by age group to analyze the performance of the system. The comparison results for the age groups 20 to 25, 26 to 32 and 33 to 44 are displayed in Fig. 7, Fig. 8 and Fig. 9. We analyzed the results individually to identify the classification accuracy of the individual subjects in each age group. From Table 3 and Table 4 we found maximum classification accuracies of 92.31% and 92.01% and minimum classification accuracies of 91.25% and 91.10% using band power and HHT features with PRNN for the subjects belonging to the 20 to 25 age group, as shown in Fig. 7. We found maximum classification accuracies of 94.88% and 93.58% and minimum classification accuracies of 92.04% and 91.69% for the 26 to 32 age group, as shown in Fig. 8. We found maximum classification accuracies of 91.96% and 91.58% and minimum classification accuracies of 91.22% and 91.02% for the 33 to 44 age group, as shown in Fig. 9.

The results show that the 26 to 32 age group was fast and accurate: these subjects were able to perform all the tasks very easily and needed less training than the 20 to 25 and 33 to 44 age groups. The results also showed that band power features with the PRNN model were more suitable for classifying the tasks performed by the different age groups. Through this comparison we found that the average performance of the 20 to 25 and 33 to 44 age groups was lower than that of the 26 to 32 age group; finally, we concluded that the performance of the 26 to 32 age group was appreciable compared to the other age groups.

5.3. Single Trail Analysis

Single trail analysis was carried out to verify the recognition accuracy of the individual subjects who voluntarily took part in this experimentation. The recognition accuracy of the individual subjects using single trail analysis for band power and HHT features with the PRNN architecture is shown in Table 5 and Table 6 respectively. The performance of the nine-state HCI system was verified through offline analysis to determine the accuracy of the HCI system using the GUI, as demonstrated in Fig. 10. Single trail performance and recognizing accuracy for the twenty subjects over the eleven tasks are displayed in Fig. 11, and the overall single trail performance and recognizing accuracy of the individual tasks for the twenty subjects are displayed in Fig. 12.

Table 5
Single Trail Analysis for PRNN Using Band Power Features. Events: R, L, UR, DR, UL, DL, RM, LM, O, C, S; Non-Events: Unknown.

Sub  R  L   UR  DR  UL  DL  RM  LM  O  C  S  Unknown
S1   7  8   7   6   8   9   8   8   8  7  8  6
S2   7  10  6   7   7   7   9   8   6  7  7  8
S3   6  6   7   6   7   6   8   9   6  6  8  7
S4   6  9   7   7   6   6   8   8   7  6  7  8
S5   8  8   7   7   7   8   8   7   7  8  8  9
S6   5  9   9   7   8   8   6   8   8  8  7  8
S7   8  8   9   8   8   8   7   9   8  8  8  8
S8   9  9   10  9   8   8   10  9   8  9  9  5
S9   8  7   7   6   9   9   9   7   9  9  9  7
S10  8  6   9   9   9   8   6   6   8  8  9  8
S11  7  8   7   6   7   8   6   6   8  6  7  8
S12  6  9   6   7   9   9   5   5   8  7  8  9
S13  6  9   8   7   9   5   6   8   9  8  9  8
S14  7  9   9   6   6   8   5   8   8  9  8  10
S15  6  8   8   7   6   8   6   8   8  7  7  9
S16  5  8   8   7   7   7   6   7   8  8  9  10
S17  5  8   8   6   8   8   8   9   9  9  9  9
S18  6  7   7   7   6   9   6   8   8  8  8  10
S19  8  9   7   6   7   8   7   7   8  9  9  8
S20  6  9   6   7   6   10  6   8   9  7  8  9

Table 6
Single Trail Analysis for PRNN Using Hilbert Huang Transform Features. Columns as in Table 5.

Sub  R  L   UR  DR  UL  DL  RM  LM  O  C  S  Unknown
S1   7  8   7   6   8   8   8   8   8  7  8  6
S2   7  10  6   7   7   7   9   8   6  6  7  8
S3   6  6   7   6   7   6   8   9   6  6  7  10
S4   6  8   7   7   6   6   8   8   7  6  7  10
S5   8  8   7   7   7   8   8   7   7  8  8  10
S6   5  9   8   7   8   8   6   8   8  8  7  8
S7   8  8   9   8   8   7   7   9   8  8  8  8
S8   8  9   10  9   8   8   10  9   8  8  9  5
S9   8  7   7   6   9   9   9   7   9  8  9  7
S10  8  6   9   9   8   8   6   6   8  8  9  8
S11  7  8   7   6   7   8   6   6   8  6  7  10
S12  6  9   6   7   9   9   5   5   8  7  8  9
S13  6  9   8   7   9   5   6   8   9  8  9  8
S14  7  9   9   6   6   8   5   8   8  9  8  10
S15  6  8   8   7   6   8   6   8   8  7  7  10
S16  5  8   8   7   7   7   6   7   8  8  9  10
S17  5  8   8   6   8   8   8   9   9  9  9  9
S18  6  7   7   7   6   9   6   8   8  8  8  8
S19  8  9   7   6   7   8   7   7   8  9  9  6
S20  6  8   6   7   6   10  6   8   9  7  8  9

From Table 5 it can be concluded that the maximum average recognition accuracy, of 93.64%, was achieved by subject S8; the second maximum average recognition accuracy of 88.18% was achieved by subject S7; the minimum average recognition accuracy of 74.54% was achieved by subject S3; and the remaining subjects achieved an average recognition accuracy of 82.90% using band power features with the PRNN architecture, as shown in Fig. 13. From Table 6 it can be concluded that the first maximum average recognition accuracy of 91.81% was obtained by subject S8, the second maximum recognition accuracy of 87.27% was obtained by subjects S7 and S17,


Fig. 10. Single Trail Analysis Result Evaluation Using GUI.

and the minimum average recognition accuracy of 76.36% was achieved by subject S4, while the remaining subjects achieved an average recognition accuracy of 82.63% using HHT features with the PRNN architecture, as displayed in Fig. 14.

From Table 5 we found a first maximum individual task recognition of 82% for left, a second maximum task recognition of 81% for stare, and a minimum recognition of 69% for down-right using band power features with the PRNN model. From Table 6 we obtained a first maximum individual task recognition of 81% for left, a second maximum task recognition of 80.50% for stare, and a minimum recognition of 69% for down-right using HHT features with the PRNN model. During this study we found that the recognition accuracy of the 26 to 32 age group outperforms that of the 20 to 25 and 33 to 44 age groups for overall task recognition as well as individual task recognition. From the single trail analysis we concluded that the PRNN model with band power features outperforms the other feature extraction method used in this experiment.

6. CONCLUSION

We conducted our study with twenty subjects (10 male and 10 female) from different age groups between 20 and 44, using the Analog Digital Instrument T26, by placing five electrodes on the face while the subjects executed eleven tasks. From the obtained task signals, we made a comparison study between male and female subjects as well as their age-group-wise

Fig. 11. Single trail recognizing accuracy of individual subject using Band power and HHT Features for PRNN.


Fig. 12. Overall single trail recognizing accuracy for individual task using Band Power and HHT Features using PRNN.

Fig. 13. Individual Trial analysis for each Task using Band Power with PRNN model.

Fig. 14. Individual Trial analysis for each Task using HHT with PRNN model.

performance using band power and HHT features with PRNN. The study shows that the performance as well as the recognition accuracy of the male subjects was higher than that of the female subjects. We also made a comparison between the different age groups from 20 to 44; from that analysis we found that the performance and recognition accuracy of the 26 to 32 age group were appreciable compared with the other age groups. During training, the 26 to 32 age group completed all the tasks very easily and quickly; they were able to concentrate on the given tasks with interest and did not tire as soon as the other age groups in this experiment. Finally, we concluded that band power features with the PRNN model outperform in both classification and recognition accuracy, so designing an HCI with this model is possible. In future we plan to conduct this experiment in an online phase to check the possibility of designing the HCI.

Training Video Link Is Given Below For Your Kind Notice

https://www.youtube.com/watch?v=-fOCm4671z8&t=58s


References

[1] Lingegowda Dhanush Roopa, Amrutesh Karan, Ramanujam Srikanth. Electrooculography based assistive technology for ALS patients. IEEE International Conference on Consumer Electronics 2017:36–40.
[2] Lee Kwang-Ryeol, Chang Won-Du, Kim Sungkean, Im Chang-Hwan. Real-time "eye-writing" recognition using electrooculogram. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2017;25(1):37–48.
[3] Postelnicu Cristian-Cezar, Girbacia Florin, Talaba Doru. EOG-based visual navigation interface development. Expert Systems with Applications 2012;39:10857–66.
[4] Ang AMS, Zhang ZG, Hung YS, Mak JNF. A user-friendly wearable single-channel EOG-based human-computer interface for cursor control. IEEE EMBS Conference on Neural Engineering. 2015.
[5] Simpson T, Broughton C, Gauthier MJA, Prochazka A. Tooth-click control of a hands-free computer interface. IEEE Transactions on Biomedical Engineering 2008;55:2050–6.
[6] Venkataramanan S, Prabhat P, Choudhury SR, Nemade HB, Sahambi JS. Biomedical instrumentation based on electrooculogram (EOG) signal processing and application to a hospital alarm system. Proceedings of the 2nd International Conference on Intelligent Sensing and Information Processing. 2005. p. 535–40.
[7] Shaikh AA, Kumar DK, Gubbi J. Visual speech recognition using optical flow and support vector machines. International Journal of Computational Intelligence and Applications 2011;10:167–87.
[8] Jones M, Grogg K, Anschutz J, Fierman R. A sip-and-puff wireless remote control for the Apple iPod. Assistive Technology 2008;20:107–10.
[9] Usakli AB, Gurkan S, Aloise F, Vecchiato G, Babiloni F. On the use of electrooculogram for efficient human computer interfaces. Computational Intelligence and Neuroscience 2010.
[10] Keegan Johnalan, Burke Edward, Condron James. An electrooculogram-based binary saccade sequence classification (BSSC) technique for augmentative communication and control. International Conference of the IEEE EMBS 2009:2604–7.
[11] Usakli Ali Bulent, Gurkan Serkan. Design of a novel efficient human–computer interface: an electrooculogram based virtual keyboard. IEEE Transactions on Instrumentation and Measurement 2010;59(8):2099–108.
[12] Al-Haiqi Ahmed, Ismail Mahamod, Nordin Rosdiadee. The eye as a new side channel threat on smartphones. IEEE Conference on Research and Development. 2013. p. 475–9.
[13] Kumar Devender, Sharma Amit. Electrooculogram-based virtual reality game control using blink detection and gaze calibration. IEEE International Conference on Advances in Computing, Communications and Informatics. 2016. p. 2358–62.
[14] Kumar Deepesh, Dutta Anirban, Das Abhijit, Lahiri Uttama. Smart Eye: developing a novel eye tracking system for quantitative assessment of oculomotor abnormalities. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2016;24(10):1051–9.
[15] Yamagishi K, Hori J, Miyakawa M. Development of EOG-based communication system controlled by eight-directional eye movements. Proceedings of the 28th IEEE EMBS Annual International Conference. 2006. p. 2574–7.
[16] Barea R, Boquete L, Mazo M, Lopez E. System for assisted mobility using eye movements based on electrooculography. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2002;4:209–18.
[17] Güven A, Kara S. Classification of electro-oculogram signals using artificial neural network. Expert Systems with Applications 2006;31:199–205.
[18] Kim Y, Doh NL, Youm Y, Chung WK. Robust discrimination method of the electrooculogram signals for human-computer interaction controlling mobile robot. Intelligent Automation & Soft Computing 2007;13:319–36.
[19] Shuyan H, Gangtie Z. Driver drowsiness detection with eyelid related parameters by support vector machine. Expert Systems with Applications 2009;36:7651–8.
[20] Usakli Ali Bulent, Gurkan Serkan. A novel electrooculogram-based human computer interface and its application as a virtual keyboard. IEEE Conference National Biomedical Engineering Meeting. 2009. p. 1–4.
[21] Tangsuksant Watcharin, Aekmunkhongpaisal Chittaphon, Cambua Patthiya, Chanwimalueang Theekapun, Charoenpong Theerasak. Directional eye movement detection system for virtual keyboard controller. IEEE International Conference on Biomedical Engineering. 2012. p. 1–5.
[22] Ang AMS, Zhang ZG, Hung YS, Mak JNF. A user-friendly wearable single-channel EOG-based human-computer interface for cursor control. International IEEE/EMBS Conference on Neural Engineering. 2015. p. 565–8.
[23] Rakshit Arnab, Banerjee Anwesha, Tibarewala DN. Electro-oculogram based digit recognition to design assistive communication system for speech disabled patients. IEEE International Conference on Microelectronics, Computing and Communications. 2016. p. 1–5.
[24] Banerjee Anwesha, Tibarewala DN. Electrooculogram based approach for prevention of dry eye condition in computer users. IEEE International Conference on Control, Measurement and Instrumentation. 2016. p. 503–7.
[25] Karagoz Yurdagul, Gul Sevda, Cetinel Gokcen. An EOG based communication channel for paralyzed patients. IEEE International Conference on Signal Processing and Communications Applications. 2017. p. 1–4.
[26] Soundariya RS, Renuga R. Emotion recognition based on eye movement. Advances in Natural and Applied Sciences 2017;11(5):38–43.
[27] Hossain Md Zakir, Shuvo Maruf Hossain, Sarker Prionjit. Hardware and software implementation of real time electrooculogram (EOG) acquisition system to control computer cursor with eyeball movement. IEEE International Conference on Advances in Electrical Engineering. 2017. p. 132–7.
[28] Hema CR, Paulraj MP, Ramkumar S. Classification of eye movements using electrooculography and neural networks. International Journal of Human Computer Interaction 2014;5(3):51–63.
[29] Ramkumar S, Hema CR. Recognition of eye movement electrooculogram signals using dynamic neural networks. Karpagam Journal of Computer Science 2013;7:12–20.
[30] Huang Norden E, Wu Zhaohua. A review on Hilbert-Huang transform: method and its applications to geophysical studies. Reviews of Geophysics 2008;46.
[31] https://en.wikipedia.org/wiki/Hilbert%E2%80%93Huang_transform.
[32] https://cseweb.ucsd.edu/classes/sp14/cse291-b/notes/HHT.pdf.
[33] AL-Allaf Omaima NA, AbdAlKader Shahlla A, Tamimi Abdelfatah Aref. Pattern recognition neural network for improving the performance of iris recognition system. International Journal of Scientific & Engineering Research 2013;4(6):661–7.
