2019 Design and Development of Human Computer Interface Using Electrooculogram With Deep Learning
Keywords: Electrooculogram (EOG); Human Computer Interface (HCI); Band Power (BP); Amyotrophic lateral sclerosis (ALS); Pattern Recognition Neural Network (PRNN)

Assistive devices play a significant role in everyday life by helping people communicate with others. Among these, Human Computer Interfaces (HCI) based on the Electrooculogram (EOG) play a vital part, and the band power method can outperform conventional methods in both performance and accuracy. We analyzed EOG signals from twenty subjects to design a nine-state EOG based HCI, using a five-electrode system to measure horizontal and vertical eye movements. The signals were preprocessed to remove artifacts, valuable information was extracted from the collected data using band power and the Hilbert Huang Transform (HHT), and a Pattern Recognition Neural Network (PRNN) was trained to classify the tasks. Classification accuracies of 92.17% and 91.85% were obtained for the band power and HHT features, respectively, with the PRNN architecture. Recognition accuracy was analyzed offline to assess the feasibility of designing the HCI. We compared the two feature extraction techniques with the PRNN to determine the better method for classifying tasks and recognizing single-trial tasks. Our experimental results confirm that, for both classification and recognition of the collected signals, band power with the PRNN achieves better accuracy than the other feature extraction method used in this study. We also compared the performance of male and female subjects, and further compared subjects by age group. We conclude that the male subjects performed appreciably better than the female subjects, and that the 26 to 32 age group achieved higher classification and recognition accuracy than the other age groups in this study.
⁎ Corresponding author. E-mail address: swwre43fdfgs@163.com (J. Xiao).
https://doi.org/10.1016/j.artmed.2019.101765
Received 16 September 2019; Received in revised form 28 October 2019; Accepted 15 November 2019
0933-3657/ © 2019 Elsevier B.V. All rights reserved.
G. Teng, et al. Artificial Intelligence In Medicine 102 (2020) 101765

Fig. 2. Raw EOG signal acquired from subject S12 for eleven tasks.

1. INTRODUCTION

Neurological disorders are increasing daily. People with motor impairment due to a neurological disorder face several challenges in moving and in communicating with those around them; in particular, persons affected by ALS suffer from impaired muscle movement as well as impaired speech. For such persons, eye movements can be used effectively as a communication medium to convey their thoughts, with the help of assistive technologies. EOG based HCI is one technique that addresses this problem. It provides a powerful communication mechanism between users and systems that requires no external devices or muscle involvement to issue commands and complete the interaction [1,2]. An EOG based HCI system can improve the communication ability and quality of life of disabled individuals who cannot speak or move their limbs. To that end, different types of HCI systems have been developed in recent years, among them the electric wheelchair [3], cursor control [4], tooth controller [5], hospital alarm system [6], lip movement system [7], sip-and-puff controller [8], virtual keyboard [9,11], television control system [10], smartphones [12], virtual game controller [13] and eye tracking system [14].

Eye movements are commonly categorized into eight basic movements, namely up, down, right, left, up-right, up-left, down-right and down-left, and the majority of HCIs use these conventional movements to design devices for the weakened [15–19]. In this study we additionally included five more eye movement tasks, namely stare, open, close, rapid movement and lateral movement, in order to evaluate the nine-state HCI with eleven different tasks. The signals collected from the different eye movements
were processed with the band power and HHT methods to extract features, and the features were classified with the PRNN model to identify the tasks performed by the subjects, with the goal of developing assistive devices for the elderly and disabled.

2. BACKGROUND

Research based on the EOG technique is increasing day by day as a way to cope with neurodegenerative disease. A few prominent studies contributed to this field are summarized below. Usakli and Serkan Gurkan (2009) developed a virtual keyboard for disabled persons using signals collected from eight subjects and sampled at 176 Hz. The acquired signals were processed with Euclidean distance to extract features and classified with Nearest Neighborhood, obtaining an accuracy of 92%; the outcome showed that a subject could write five words in 25 seconds [20]. Watcharin et al. (2012) proposed a keyboard controller for patients with locked-in syndrome using two channels with six electrodes. A voltage threshold algorithm was used to classify the signals and obtained an accuracy of 95.2% with a typing rate of 25.94 sec/letter on a virtual keyboard [21]. Ang et al. (2015) developed cursor control from the EOG signals of eight subjects in indoor and outdoor environments using a NeuroSky headset, obtaining average accuracies of 84.42% and 71.50% [22]. Arnab Rakshit et al. (2016) developed an assistive device for speech-disabled persons using twelve healthy subjects; the collected signals were processed with Power Spectral Density to extract features, which were classified using an SVM with a multilayer perceptron, achieving 90% classification accuracy [23]. Anwesha Banerjee and D. N. Tibarewala (2016) designed an EOG based HCI that detects dry-eye users with a threshold technique, obtaining a maximum average accuracy of 96.67% in offline mode [24]. Yurdagul et al. (2017) designed an EOG based HMI for Amyotrophic Lateral Sclerosis (ALS) patients using twenty subjects and six electrodes measuring both horizontal and vertical eye movements; the collected signals were classified by KNN and SVM with accuracies of 90.3% and 92.6% respectively, from which they concluded that SVM performed appreciably better than KNN [25]. R. S. Soundariya and R. Renuga designed an EOG based emotion analyzer using Independent Component Analysis and multi-class Support Vector Machines [26]. Zakir Hossain et al. (2017) designed an eyeball-controlled cursor by placing five electrodes to capture horizontal and vertical eye movements; the acquired signals were classified with a Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) to categorize the different tasks performed by the subjects [27]. In this study we made a comparison between male and female subjects with different age groups.
Fig. 3. Feature Extracted Signal from Eleven Different Eye Movements for Subject 12 using Band Power.
Fig. 4. Feature Extracted Signal from Eleven Different Eye Movements for Subject 12 using Hilbert Huang Transform.
expressed as a real part, and Y(t) is the Hilbert transform of each IMF component of the real part [30–32].

From each feature extraction technique, sixteen energy features were collected from each individual task signal. Features were extracted for ten such trials of each task, giving 110 data samples, which were used to test each designed architectural model separately in identifying the tasks. The feature-extracted signals for subject S12 under the two techniques are shown in Fig. 3 and Fig. 4 for the band power and Hilbert Huang Transform features, respectively.

Fig. 5. Architecture of Pattern Recognition Neural Network Model.
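As a concrete illustration of the two feature types compared above, the sketch below computes a band-power value (via Welch's PSD) and a Hilbert-envelope energy for a synthetic single-channel EOG segment. The sampling rate, band edges and test signal are illustrative assumptions, not values from the paper, and the EMD decomposition that precedes the Hilbert step in HHT is omitted for brevity.

```python
import numpy as np
from scipy.signal import hilbert, welch

def band_power(segment, fs, band):
    """Band power of `segment` inside `band` (Hz), via Welch's PSD."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(256, len(segment)))
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return float(np.trapz(psd[mask], freqs[mask]))

def hilbert_energy(component):
    """Energy of the analytic envelope |X(t) + jY(t)|, where Y(t) is the
    Hilbert transform of the component (the EMD step is omitted here)."""
    envelope = np.abs(hilbert(component))
    return float(np.sum(envelope ** 2))

fs = 256                              # assumed sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
eog = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)  # toy 2 Hz "eye movement"

bp = band_power(eog, fs, (0.5, 5.0))  # EOG activity is concentrated at low frequencies
he = hilbert_energy(eog)
print(bp > 0.0, he > 0.0)             # → True True
```

In the paper's setup, sixteen such energy values per task signal form the feature vector that is fed to the PRNN.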
Table 1
Gender Based Classification for Twenty Subjects.
Gender: Male | Female
Fig. 6. Overall Classification Accuracy for Twenty Subjects using Band Power and HHT Features using PRNN.
Fig. 7. Classification accuracy for Band Power and HHT Features using PRNN for 20-25 years Age group.
Fig. 8. Classification accuracy for Band Power and HHT Features using PRNN for 26-32 years Age group.
5.2. AGE Group Wise Classification

We also compared the classification accuracy results by age group to analyze the performance of the system. Comparison results for the age groups 20 to 25, 26 to 32 and 33 to 44 are displayed in Fig. 7, Fig. 8 and Fig. 9. We analyzed the results individually to identify the classification accuracy of each subject within each age group. From Table 3 and Table 4 we found a maximum classification accuracy of 92.31%, 92.01% and a minimum classification accuracy of 91.25%, 91.10% using band power features and PRNN for the subjects belonging to the 20 to 25 age group, as shown in Fig. 7. We likewise found a maximum classification accuracy of 94.88%, 93.58% and a minimum classification accuracy of 92.04%, 91.69% using band power features and PRNN for the subjects in the 26 to 32 age group, as shown in Fig. 8. From the result we observed a maximum
Fig. 9. Classification accuracy for Band Power and HHT Features using PRNN for 33-44 years Age group.
and a minimum average recognition accuracy of 76.36% was achieved by subject S4, while the remaining subjects achieved an average recognition accuracy of 82.63% using HHT features with the PRNN architecture, as displayed in Fig. 14.

From Table 5 we found the highest individual task recognition of 82% for left, the second highest of 81% for stare, and the lowest of 69% for down-right using band power features with the PRNN model. From Table 6 we obtained the highest individual task recognition of 81% for left, the second highest of 80.50% for stare, and the lowest of 69% for down-right using HHT features with the PRNN model. During this study we observed that the recognition accuracy of the 26 to 32 age group outperforms the 20 to 25 and 33 to 44 age groups for overall task recognition as well as individual task recognition. From the single-trial analysis we concluded that the PRNN model with band power features outperforms the other feature extraction method used in this experiment.

6. CONCLUSION

We conducted our study on twenty subjects (10 male and 10 female) from age groups spanning 20 to 42, using an Analog Digital Instrument T26 with five electrodes placed on the face while the subjects executed eleven tasks. From the obtained task signals, we made a comparison study between male and female subjects as well as their age group wise
Fig. 11. Single trail recognizing accuracy of individual subject using Band power and HHT Features for PRNN.
Fig. 12. Overall single trail recognizing accuracy for individual task using Band Power and HHT Features using PRNN.
Fig. 13. Individual Trial analysis for each Task using Band Power with PRNN model.
Fig. 14. Individual Trial analysis for each Task using HHT with PRNN model.
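The per-task recognition rates quoted above (e.g. 82% for left, 69% for down-right) reduce to a per-class tally of correct single-trial predictions. A minimal sketch, with made-up labels and predictions:

```python
from collections import Counter

def per_task_accuracy(true_labels, predicted_labels):
    """Fraction of trials recognized correctly, reported per task."""
    totals = Counter(true_labels)
    correct = Counter(t for t, p in zip(true_labels, predicted_labels) if t == p)
    return {task: correct[task] / totals[task] for task in totals}

# Toy run: "left" recognized in 2 of 2 trials, "stare" in 1 of 2.
truth = ["left", "left", "stare", "stare"]
preds = ["left", "left", "stare", "down-right"]
acc = per_task_accuracy(truth, preds)
print(acc["left"], acc["stare"])  # → 1.0 0.5
```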
performance using band power and HHT features with the PRNN. The study shows that both the performance and the recognition accuracy of the male subjects were higher than those of the female subjects. We also made a comparison between the age groups from 20 to 44 and found that the performance and recognition accuracy of the 26 to 32 age group were appreciably better than those of the other age groups. During training, the subjects aged 26 to 32 completed all the tasks easily and quickly; they were able to concentrate on the given tasks with interest and did not tire as soon as the other age groups in this experiment. Finally, we concluded that band power features with the PRNN model outperform in both classification and recognition accuracy, so designing an HCI with this model is feasible. In future work we plan to conduct this experiment in an online phase to check the possibility of designing the HCI.

Training Video Link Is Given Below For Your Kind Notice

https://www.youtube.com/watch?v=-fOCm4671z8&t=58s