
FrCT3.23

A novel method of EEG data acquisition, feature extraction and feature space creation for Early Detection of Epileptic Seizures
Sylvia Bugeja, Lalit Garg, and Eliazar E. Audu

Problem:

The proposed method creates a simple, yet very effective, training set acquisition for epileptic seizure detection, making the classifiers' training phase faster.

The proposed method was tested using the CHB-MIT database [2], a dataset of 977 hours of EEG data containing 192 seizure instances from 22 pediatric patients, collected at the Children's Hospital, Boston.

Results:

Figure 2. Seizure detection sensitivity: SVM vs. ELM and Shoeb et al. [2].
Figure 3. Seizure detection specificity: SVM vs. ELM performance metrics.

Method:

CHB-MIT: download the patients' European Data Format (EDF) files.

Data Pre-Processing:
- Extract the seizure data and save it in an easier-to-read format.
- Set the number and location of channels consistently throughout each patient's EDF files; also eliminate artifact channels and save this data.
- Transform all EDF files into two-second epoched EEG data.
- If an EDF file contains multiple seizures, divide it into multiple EDF files such that each subset contains exactly one seizure's data.

Feature Vector Design

Simple Training Set Acquisition:
- Fetch the N Epochs/Minutes training subsets from the EDF files containing seizure data.
- Save the remaining data (i.e. data not part of the N Epochs/Minutes training subsets) as non-seizure EDF files.

Figure 4. Seizure detection latency (in seconds): SVM vs. ELM and Shoeb et al. [2].
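The epoching and training-subset split described above can be sketched as follows. This is an illustrative sketch only, not the authors' code; the function names and the 256 Hz sampling rate are assumptions for the example.

```python
# Sketch: split a single-channel EEG signal into two-second epochs,
# then take the first N minutes of epochs as the training subset and
# keep the remainder as the non-seizure pool.

def make_epochs(signal, fs, epoch_seconds=2):
    """Split a 1-D sample list into consecutive fixed-length epochs."""
    epoch_len = fs * epoch_seconds
    return [signal[i:i + epoch_len]
            for i in range(0, len(signal) - epoch_len + 1, epoch_len)]

def split_training_subset(epochs, subset_minutes, epoch_seconds=2):
    """First `subset_minutes` worth of epochs -> training subset;
    the remainder is kept aside as non-seizure data."""
    n_train = (subset_minutes * 60) // epoch_seconds
    return epochs[:n_train], epochs[n_train:]

# Example: 30 minutes of dummy 256 Hz EEG, 10-minute training subset.
fs = 256
signal = [0.0] * (fs * 60 * 30)
epochs = make_epochs(signal, fs)                      # 900 two-second epochs
train, rest = split_training_subset(epochs, subset_minutes=10)
print(len(epochs), len(train), len(rest))             # 900 300 600
```

The same split is simply repeated with 20- and 30-minute values to build the other training subsets.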
Testing Set Acquisition:
- Fetch all the non-seizure EDF files created during the Training Set Acquisition.
- Fetch all the EDF files which did not contain any seizure data.

Feature Selection:
- Extract the 1-to-10 frequency waveforms from the training and testing sets using multilevel wavelet decomposition.
- Calculate the mean frequencies of the energy falling within the waveforms.
- Get the time-evolution features.

Table of Results

                    10-Minute Subsets   20-Minute Subsets   30-Minute Subset   Shoeb et
                    SVM     ELM         SVM     ELM         SVM     ELM        al. [2] results
Sensitivity (%)     95.33   99.48       95.42   99.48       97.98   98.99      96
Specificity (%)     87.11   74.21       89.90   77.16       83.73   81.39      -
Latency (seconds)   3.18    0.97        2.88    0.97        2.95    1.26       3
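The wavelet-based feature extraction above can be sketched with a simple unnormalized Haar decomposition: each level halves the signal into averages and differences, and the mean energy of the detail coefficients at each level serves as one feature. This is a minimal stand-in for the paper's multilevel wavelet decomposition, not the authors' implementation; the function names and the 5-level choice are assumptions.

```python
# Sketch: multilevel (Haar-style) wavelet decomposition of one epoch,
# followed by the mean energy of the detail coefficients per sub-band.

def haar_dwt(x):
    """One level of an unnormalized Haar transform: (approximation, detail)."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return approx, detail

def wavelet_features(epoch, levels=5):
    """Mean energy of the detail coefficients at each decomposition level."""
    features, approx = [], list(epoch)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        features.append(sum(c * c for c in detail) / len(detail))
    return features

# One 512-sample (two-second, 256 Hz) epoch -> one feature per level.
feats = wavelet_features([float(i % 8) for i in range(512)], levels=5)
print(len(feats))  # 5
```

A production version would use a proper wavelet library (e.g. PyWavelets) with an orthonormal wavelet, but the level-by-level energy summary is the same idea.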
Feature Vector Classification

Input Space Creation:
- Create SeizureVectors
- Create SeizureClass
- Create HourlySeizureData
- Create NonSeizureVectors
- Create NonSeizureClass
- Create the 10-minute, 20-minute and 30-minute training subsets

Classification Process:
- Classify the feature vectors using SVM/ELM
- Execute the feature vectors on the Albert Supercomputer

Figure 1. Flowchart illustrating the steps involved in the proposed method.

Conclusions:
- The proposed method provides a very good foundation for simple and effective epileptic seizure detectors to be built in the near future.
- The 10-minute training subsets perform as well as lengthier training subsets.
- The ELM classification technique performs better than the SVM classification technique.

References:
[1] Epilepsy, Fact sheet, WHO, February 2016. Available: http://www.who.int/mediacentre/factsheets/fs999/en/.
[2] A. H. Shoeb and J. V. Guttag, "Application of machine learning to epileptic seizure detection," Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 975-982, 2010.