INTRODUCTION
Parkinson’s disease (PD) is the second most common neurodegenerative disease after
Alzheimer’s disease, affecting 2–3% of the population older than 65 years of
age. It is characterized by the loss of dopaminergic neurons in the substantia
nigra. The diagnosis of PD relies on the presence of motor symptoms
(bradykinesia, rigidity and tremor at rest). However, autopsy and neuroimaging
studies indicate that the motor signs of PD are a late manifestation, evident
only when the degree of degeneration of dopaminergic neurons reaches 50–70%
[4,5]. A wide variety of techniques in the field of neurology are used,
individually or in combination, to support the clinical diagnosis.
Commonly used techniques include image-based tests such as single photon emission
computed tomography (SPECT) and metaiodobenzylguanidine (MIBG) cardiac
scintigraphy; however, these are costly and not always accessible.
Electroencephalography (EEG) is a non-invasive technique that records the
electrical activity of the pyramidal neurons of the brain, giving an indirect
insight into their function with high temporal resolution. It has been widely used
to study epileptic disorders and is a widely accessible, low-cost technique.
Visual EEG analysis is the gold standard for clinical EEG interpretation and
analysis, but information-processing techniques allow the extraction of
additional characteristics that can help when characterizing neurological
diseases.
SYSTEM ANALYSIS
2. ANALYSIS OVERVIEW
System analysis and design are the two main processes that precede
development, because if either step goes wrong the entire project may go
wrong: a wrong design leads to the wrong product or system.
The results of a DaTscan alone cannot show that you have Parkinson’s, but they
can help your doctor confirm a diagnosis or rule out a Parkinson’s mimic.
Disadvantages
Advantages:-
The system can predict the patient’s disease and the level of risk.
The saved model can also be embedded in MRI devices, so that the prediction
appears in the report itself without a doctor having to interpret the result.
2.3 FEASIBILITY STUDY
Technical Feasibility
Operation Feasibility
Economical Feasibility
TECHNICAL FEASIBILITY
The technical issues usually raised during the feasibility stage of the
investigation include the following:
Does the proposed equipment have the technical capacity to hold the data
required to use the new system?
OPERATIONAL FEASIBILITY
The analyst considers the extent to which the proposed system will fulfil the
department’s needs: that is, whether the proposed system covers all aspects of
the working system and whether it offers considerable improvements. We have
found that the proposed system will certainly offer considerable improvements
over the existing system.
ECONOMIC FEASIBILITY
SYSTEM REQUIREMENT
3. REQUIREMENT
Processor : i3 or Above
SOFTWARE SPECIFICATION
A common LSTM unit is composed of a cell, an input gate, an output gate and a
forget gate. The cell remembers values over arbitrary time intervals and the
three gates regulate the flow of information into and out of the cell.
LSTM networks are well-suited to classifying, processing and making
predictions based on time series data, since there can be lags of unknown
duration between important events in a time series. LSTMs were developed to
deal with the vanishing gradient problem that can be encountered when training
traditional RNNs. Relative insensitivity to gap length is an advantage of LSTMs
over RNNs, hidden Markov models and other sequence learning methods in
numerous applications.
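The gating mechanism described above can be sketched in a few lines of NumPy. This is a minimal, illustrative single LSTM step, not the project’s actual model; the weight layout and sizes are arbitrary choices for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W maps [h_prev; x] to the four gate pre-activations."""
    z = W @ np.concatenate([h_prev, x]) + b
    H = h_prev.size
    f = sigmoid(z[0*H:1*H])   # forget gate: how much of the old cell state to keep
    i = sigmoid(z[1*H:2*H])   # input gate: how much new information to write
    g = np.tanh(z[2*H:3*H])   # candidate cell values
    o = sigmoid(z[3*H:4*H])   # output gate: how much of the cell to expose
    c = f * c_prev + i * g    # the cell can remember values over arbitrary intervals
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
H, X = 4, 3                   # hidden size and input size (arbitrary)
W = rng.normal(scale=0.1, size=(4 * H, H + X))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):            # run a short input sequence through the cell
    h, c = lstm_step(rng.normal(size=X), h, c, W, b)
```

Because the cell state is updated additively (`f * c_prev + i * g`) rather than by repeated matrix multiplication, gradients through `c` do not have to shrink at every step, which is the mechanism behind the vanishing-gradient resistance mentioned above.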
History
1999: Felix Gers, his advisor Jürgen Schmidhuber, and Fred Cummins
introduced the forget gate (also called the “keep gate”) into the LSTM
architecture,[8] enabling the LSTM to reset its own state.[7]
2014: Kyunghyun Cho et al. put forward a simplified variant called the gated
recurrent unit (GRU).[13]
2015: Google started using an LSTM for speech recognition on Google Voice.
[14][15] According to the official blog post, the new model cut transcription
errors by 49%.[16]
2016: Amazon released Polly, which generates the voices behind Alexa, using a
bidirectional LSTM for its text-to-speech technology.
2017: Facebook performed some 4.5 billion automatic translations every day
using long short-term memory networks.
A problem with using gradient descent for standard RNNs is that error gradients
vanish exponentially quickly with the size of the time lag between important
events. This is because lim_{n→∞} W^n = 0 whenever the spectral radius of W is
smaller than 1.
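This claim is easy to check numerically; the sketch below uses an arbitrary 2×2 matrix whose spectral radius is below 1 and shows its powers collapsing toward zero.

```python
import numpy as np

# A matrix whose spectral radius (largest |eigenvalue|) is below 1.
W = np.array([[0.5, 0.2],
              [0.1, 0.4]])
rho = max(abs(np.linalg.eigvals(W)))   # spectral radius, here 0.6

# Repeated multiplication drives W^n toward the zero matrix, which is
# exactly how error gradients vanish over long time lags in a plain RNN.
Wn = np.linalg.matrix_power(W, 50)
print(rho, np.abs(Wn).max())
```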
However, with LSTM units, when error values are back-propagated from the
output layer, the error remains in the LSTM unit's cell. This "error carousel"
continuously feeds error back to each of the LSTM unit's gates, until they learn
to cut off the value.
Time series are very frequently plotted via run charts (a temporal line chart).
Time series are used in statistics, signal processing, pattern recognition,
econometrics, mathematical finance, weather forecasting, earthquake prediction,
electroencephalography, control engineering, astronomy, communications
engineering, and largely in any domain of applied science and engineering
which involves temporal measurements.
Time series analysis comprises methods for analyzing time series data in order
to extract meaningful statistics and other characteristics of the data. Time series
forecasting is the use of a model to predict future values based on previously
observed values. While regression analysis is often employed in such a way as
to test relationships between one or more different time series, this type of analysis
is not usually called "time series analysis," which refers in particular to
relationships between different points in time within a single series. Interrupted
time series analysis is used to detect changes in the evolution of a time series
from before to after some intervention which may affect the underlying
variable.
Time series data have a natural temporal ordering. This makes time series
analysis distinct from cross-sectional studies, in which there is no natural
ordering of the observations (e.g. explaining people's wages by reference to
their respective education levels, where the individuals' data could be entered in
any order). Time series analysis is also distinct from spatial data analysis where
the observations typically relate to geographical locations (e.g. accounting for
house prices by the location as well as the intrinsic characteristics of the
houses). A stochastic model for a time series will generally reflect the fact that
observations close together in time will be more closely related than
observations further apart. In addition, time series models will often make use
of the natural one-way ordering of time so that values for a given period will be
expressed as deriving in some way from past values, rather than from future
values (see time reversibility).
Methods for time series analysis may be divided into two classes: frequency-
domain methods and time-domain methods. The former include spectral
analysis and wavelet analysis; the latter include auto-correlation and cross-
correlation analysis. In the time domain, correlation analysis can be carried out in
a filter-like manner using scaled correlation, thereby mitigating the need to
operate in the frequency domain.
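As a small illustration of a time-domain method, the lag-k sample autocorrelation of a series can be computed directly; the data below are synthetic (a noisy sine wave chosen for the sketch).

```python
import numpy as np

def autocorr(x, k):
    """Sample autocorrelation of series x at lag k."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return (x[:-k] * x[k:]).sum() / (x * x).sum()

# A noisy sine wave with period 20: strongly self-similar one period apart,
# and anti-correlated half a period apart.
t = np.arange(200)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * t / 20) + 0.1 * rng.normal(size=t.size)

print(autocorr(x, 20), autocorr(x, 10))  # high at lag 20, negative at lag 10
```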
Methods of time series analysis may also be divided into linear and non-linear,
and univariate and multivariate.
Panel data
A time series is one type of panel data. Panel data is the general class, a
multidimensional data set, whereas a time series data set is a one-dimensional
panel (as is a cross-sectional dataset). A data set may exhibit characteristics of
both panel data and time series data. One way to tell is to ask what makes one
data record unique from the other records. If the answer is the time data field,
then this is a time series data set candidate. If determining a unique record
requires a time data field and an additional identifier which is unrelated to time
(student ID, stock symbol, country code), then it is a panel data candidate. If the
differentiation lies on the non-time identifier, then the data set is a cross-
sectional data set candidate.
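The “what makes one record unique” test described above can be expressed directly in code; the toy stock records below are hypothetical.

```python
# Toy records: each row is (date, stock_symbol, closing_price).
rows = [
    ("2024-01-01", "AAA", 10.0),
    ("2024-01-01", "BBB", 20.0),
    ("2024-01-02", "AAA", 10.5),
    ("2024-01-02", "BBB", 19.5),
]

def unique_by(rows, key):
    """True if the key function identifies every record uniquely."""
    keys = [key(r) for r in rows]
    return len(keys) == len(set(keys))

time_only = unique_by(rows, key=lambda r: r[0])           # date alone is not unique
time_and_id = unique_by(rows, key=lambda r: (r[0], r[1]))  # date + symbol is unique
print(time_only, time_and_id)  # -> panel data candidate, per the rule above
```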
Analysis
There are several types of motivation and data analysis available for time series
which are appropriate for different purposes.
Motivation
Exploratory analysis
Curve fitting
Function approximation
Simple or fully formed statistical models to describe the likely outcome of the
time series in the immediate future, given knowledge of the most recent
outcomes (forecasting).
Forecasting on large-scale data can be done with Apache Spark using the
Spark-TS library, a third-party package.[28]
Classification
Assigning time series patterns to a specific category, for example identifying a
word based on a series of hand movements in sign language.
Signal estimation
Segmentation
Models
Models for time series data can have many forms and represent different
stochastic processes. When modeling variations in the level of a process, three
broad classes of practical importance are the autoregressive (AR) models, the
integrated (I) models, and the moving average (MA) models. These three
classes depend linearly on previous data points.[29] Combinations of these
ideas produce autoregressive moving average (ARMA) and autoregressive
integrated moving average (ARIMA) models. The autoregressive fractionally
integrated moving average (ARFIMA) model generalizes the former three.
Extensions of these classes to deal with vector-valued data are available under
the heading of multivariate time-series models and sometimes the preceding
acronyms are extended by including an initial "V" for "vector", as in VAR for
vector autoregression. An additional set of extensions of these models is
available for use where the observed time-series is driven by some "forcing"
time-series (which may not have a causal effect on the observed series): the
distinction from the multivariate case is that the forcing series may be
deterministic or under the experimenter's control. For these models, the
acronyms are extended with a final "X" for "exogenous".
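As a minimal sketch of the autoregressive idea, an AR(1) model can be fitted to simulated data by ordinary least squares; the true coefficient 0.7 is an arbitrary choice for the sketch, not a value from this project.

```python
import numpy as np

rng = np.random.default_rng(42)
phi_true = 0.7

# Simulate an AR(1) process: each value depends linearly on the previous one.
#   y[t] = phi * y[t-1] + noise
y = np.zeros(2000)
for t in range(1, y.size):
    y[t] = phi_true * y[t - 1] + rng.normal()

# Least-squares estimate of phi from the lagged series:
#   phi_hat = sum(y[t-1] * y[t]) / sum(y[t-1]^2)
phi_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
print(phi_hat)  # close to 0.7
```

Higher-order AR, MA, and ARMA models generalize this same least-squares idea to more lags and to lagged noise terms.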
Among other types of non-linear time series models, there are models to
represent the changes of variance over time (heteroskedasticity). These models
represent autoregressive conditional heteroskedasticity (ARCH) and the
collection comprises a wide variety of representations (GARCH, TARCH,
EGARCH, FIGARCH, CGARCH, etc.). Here changes in variability are related
to, or predicted by, recent past values of the observed series. This is in contrast
to other possible representations of locally varying variability, where the
variability might be modelled as being driven by a separate time-varying
process, as in a doubly stochastic model.
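The ARCH/GARCH idea, variance driven by recent past values of the series, can be illustrated by simulating a GARCH(1,1) process; the parameter values below are arbitrary but chosen so the variance stays finite.

```python
import numpy as np

rng = np.random.default_rng(7)
omega, alpha, beta = 0.1, 0.1, 0.8   # alpha + beta < 1 keeps the variance finite

n = 5000
r = np.zeros(n)    # the observed series (e.g. returns)
s2 = np.zeros(n)   # its conditional variance
s2[0] = omega / (1 - alpha - beta)   # start at the unconditional variance

for t in range(1, n):
    # Today's variance depends on yesterday's squared value and yesterday's variance.
    s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    r[t] = np.sqrt(s2[t]) * rng.normal()

# The sample variance hovers near the unconditional variance omega/(1-alpha-beta),
# while s2 itself drifts: quiet and volatile stretches cluster together.
print(r.var(), omega / (1 - alpha - beta))
```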
Notation
Y = (Y_t : t ∈ T),
where T is the index set.
Conditions
There are two sets of conditions under which much of the theory is built:
Stationary process
Ergodic process
In addition, time-series analysis can be applied where the series are seasonally
stationary or non-stationary. Situations where the amplitudes of frequency
components change with time can be dealt with in time-frequency analysis
which makes use of a time–frequency representation of a time-series or signal.[32]
Machine Learning
Artificial intelligence
As a scientific endeavor, machine learning grew out of the quest for artificial
intelligence. In the early days of AI as an academic discipline, some researchers
were interested in having machines learn from data. They attempted to approach
the problem with various symbolic methods, as well as what was then termed
"neural networks"; these were mostly perceptrons and other models that were
later found to be reinventions of the generalized linear models of
statistics. Probabilistic reasoning was also employed, especially in automated
medical diagnosis.
Optimization
Machine learning also has intimate ties to optimization: many learning problems
are formulated as minimization of some loss function on a training set of
examples. Loss functions express the discrepancy between the predictions of the
model being trained and the actual problem instances (for example, in
classification, one wants to assign a label to instances, and models are trained to
correctly predict the pre-assigned labels of a set of examples).
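The “minimize a loss function on a training set” formulation can be made concrete with gradient descent on a squared-error loss; the one-parameter model and toy data below are purely illustrative.

```python
import numpy as np

# Toy training set: y is roughly 3*x, so the best weight is near 3.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 0.1 * rng.normal(size=100)

w = 0.0     # model parameter, initialized arbitrarily
lr = 0.1    # learning rate
for _ in range(200):
    pred = w * x
    grad = 2 * ((pred - y) * x).mean()   # derivative of the mean squared error
    w -= lr * grad                        # step against the gradient

loss = ((w * x - y) ** 2).mean()
print(w, loss)  # w near 3, loss near the noise floor
```

The loss here expresses exactly the discrepancy the text describes: the gap between the model’s predictions and the pre-assigned targets, shrunk step by step.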
Generalization
The difference between optimization and machine learning arises from the goal
of generalization: while optimization algorithms can minimize the loss on a
training set, machine learning is concerned with minimizing the loss on unseen
samples. Characterizing the generalization of various learning algorithms is an
active topic of current research, especially for deep learning algorithms.
Statistics
Machine learning and statistics are closely related fields in terms of methods,
but distinct in their principal goal: statistics draws population inferences from a
sample, while machine learning finds generalizable predictive
patterns. According to Michael I. Jordan, the ideas of machine learning, from
methodological principles to theoretical tools, have had a long pre-history in
statistics. He also suggested the term data science as a placeholder to call the
overall field.
Leo Breiman distinguished two statistical modeling paradigms: data model and
algorithmic model, wherein "algorithmic model" means more or less the
machine learning algorithms, such as random forests.
Supervised learning
Unsupervised learning
Unsupervised learning algorithms take a set of data that contains only inputs,
and find structure in the data, like grouping or clustering of data points. The
algorithms, therefore, learn from test data that has not been labeled, classified or
categorized. Instead of responding to feedback, unsupervised learning
algorithms identify commonalities in the data and react based on the presence or
absence of such commonalities in each new piece of data. A central application
of unsupervised learning is in the field of density estimation in statistics, such as
finding the probability density function.[41] Unsupervised learning also
encompasses other domains involving summarizing and explaining data
features.
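A minimal sketch of “finding structure in inputs alone”: a few iterations of k-means on two synthetic blobs of points. No labels are used, only the inputs; the data and the choice to seed one center in each region are simplifications for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
# Unlabeled inputs: two well-separated blobs of 2-D points (synthetic).
X = np.vstack([rng.normal(loc=0.0, scale=0.3, size=(50, 2)),
               rng.normal(loc=5.0, scale=0.3, size=(50, 2))])

# k-means: alternate nearest-center assignment and centroid update.
# (For simplicity, one starting center is taken from each region.)
centers = np.array([X[0], X[75]])
for _ in range(10):
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)            # assign each point to its nearest center
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print(np.round(centers, 1))   # one center near (0, 0), the other near (5, 5)
```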
Semi-supervised learning
Reinforcement learning
Self learning
Models
The original goal of the ANN approach was to solve problems in the same way
that a human brain would. However, over time, attention moved to performing
specific tasks, leading to deviations from biology. Artificial neural networks
have been used on a variety of tasks, including computer vision, speech
recognition, machine translation, social network filtering, playing board and
video games and medical diagnosis.
Decision trees
Training models
Usually, machine learning models require a lot of data in order for them to
perform well. Usually, when training a machine learning model, one needs to
collect a large, representative sample of data from a training set. Data from the
training set can be as varied as a corpus of text, a collection of images, and data
collected from individual users of a service. Overfitting is something to watch
out for when training a machine learning model. Trained models derived from
biased data can result in skewed or undesired predictions. Algorithmic bias is a
potential result from data not fully prepared for training.
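Overfitting can be seen in a few lines: a high-degree polynomial fitted to a small noisy training set scores far better on the data it memorized than on fresh data from the same process. All values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
true_f = np.sin   # the underlying signal the data come from

# Small noisy training set and a separate test set from the same process.
x_tr = np.linspace(0, 3, 10);  y_tr = true_f(x_tr) + 0.1 * rng.normal(size=10)
x_te = np.linspace(0, 3, 100); y_te = true_f(x_te) + 0.1 * rng.normal(size=100)

# Degree-9 polynomial: enough parameters to pass through all 10 training points.
coef = np.polyfit(x_tr, y_tr, deg=9)
mse = lambda x, y: np.mean((np.polyval(coef, x) - y) ** 2)

print(mse(x_tr, y_tr), mse(x_te, y_te))  # tiny training error, larger test error
```

The gap between the two errors is exactly the training/generalization distinction discussed in the Generalization paragraph above.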
Flask
Flask is a web framework: a Python module that lets you develop web
applications easily. It has a small and easy-to-extend core: it is a
microframework that does not include an ORM (Object Relational Mapper) or
similar features.
It does have many useful features, such as URL routing and a template engine.
It is a WSGI web app framework.
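A minimal sketch of Flask’s URL routing (the route and message are placeholders, and this assumes Flask is installed):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Each @app.route maps a URL to the Python function below it.
    return "Parkinson's prediction service is running"
```

With this saved as, say, a hypothetical `app.py`, the development server can be started with `flask --app app run`; a route like this is where a trained model’s prediction endpoint would be exposed.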
Werkzeug
jinja2
jinja2 is a popular template engine for Python. A web template system combines
a template with a specific data source to render a dynamic web page.
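A minimal sketch of combining a template with a data source (the template text and variable names are illustrative, and this assumes jinja2 is installed):

```python
from jinja2 import Template

# The template is the static part; render() fills in the dynamic part.
template = Template("Patient {{ name }}: risk level {{ risk }}")
page = template.render(name="P-001", risk="low")
print(page)  # -> Patient P-001: risk level low
```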
Microframework
SYSTEM DESIGN
5. SYSTEM DESIGN
A use case is a set of scenarios describing an interaction between a user and
a system. A use case diagram displays the relationships among actors and use
cases; its two main components are use cases and actors.
An actor represents a user or another system that will interact with the system
being modeled. A use case is an external view of the system that represents some
action the user might perform in order to complete a task.
USECASE COMPONENTS
Actors and use cases are the main components of the use case diagram;
they are used to explain which functions a particular participant can access in
an application.
The symbol mentioned in fig 5.2.1 is used to represent the user of
the functionalities, or use cases, of the project. It contains a label below it
which helps us to identify the user category.
The arrows shown in the diagram represent the relationship between a use case
and an actor. An arrow points from the actor toward the use case,
which means the user has access to that particular functionality in the
project. A use case diagram can contain multiple actors, each with access to
multiple use cases. Similarly, the same use case can be accessed by multiple
actors/users in the project.
5.3 SEQUENTIAL DIAGRAM
The sequence diagram uses arrows to represent the message flow between
the objects in the project. The top-to-bottom order of the arrows
represents the sequence of the message flow.
Fig:5.3.1 Object Component
A rectangular box is used to represent each object in the project,
with a label inside the rectangle to identify the object. Messages flow
between the objects along a straight line drawn below each rectangle.
An arrow as shown in figure 5.3.2 is used to indicate the message flow
between the objects; the label below the arrow represents the message. The point
of the arrow represents the direction of the message flow, and the sequence of
the arrows in the diagram represents the sequence of the message flow.
CHAPTER 6
IMPLEMENTATION
6. SYSTEM IMPLEMENTATION
In this chapter the main implementation of the project is explained in
detail, part by part, in the form of modules. The implementation is based on the
project design already presented in the system design chapter. The developer
always needs to compare against and verify the system design information in
order to develop a correct and proper implementation of the project.
Modules:-
The screening process was carried out in two steps. First, duplicates were
removed. Then, studies were selected that used EEG to assess PD progression or
diagnosis with different ML architectures in awake surface EEG recordings. The
exclusion criteria were:
studies that did not consider EEGs, studies that did not use ML techniques for
EEG analysis, studies that focused their analysis on other neurological diseases,
animal studies, pharmacological studies, articles studying evoked changes in
EEGs due to exogenous stimuli, invasive and sleep EEG recordings. Finally,
those studies that did not include information about the methodology were
omitted. As a consequence, the resulting selected articles consisted of PD
studies that sought to diagnose or determine the evolution of this disease, by
means of using ML techniques in resting state EEG tests or motor activation
EEG tests.
Once the inclusion and exclusion criteria were applied, two review authors
independently screened the full-text articles to obtain a score in the checklist as
proposed in the Guidelines for Developing and Reporting Machine Learning
Predictive Models in Biomedical Research [28]. The checklist consists of 12
reporting items to be included in a research article in order to assure the good
quality of the article. The categories evaluated through the checklist include
requirements within the field of biomedicine, the field of computer science, and
requirements on how these two fields overlap with each other.
The resting state EEG group contains seven articles, which considered different
measurement protocols and channels of the EEG recording. Articles [31,37]
were based on the same study but using different features and models, as can
be seen in Table 3. The predominant protocol within this group consisted of
recording the EEG in the eyes closed resting state, which was used in four
articles. On the other hand, the articles exhibited different recording durations,
with an average value of 6.37 ± 3.10 min, and a mode of 5 min, which was
considered in four of the seven articles of this group. The number of EEG
channels also varied inside the resting group, with a low density of electrodes
prevailing: 71.43% of these articles considered between 14 and 20 electrodes,
whereas in the rest the number of channels exceeded 100 electrodes.
CHAPTER 7
TESTING
Equivalence Class
Boundary Value Analysis
Domain Tests
Orthogonal Arrays
Decision Tables
State Models
Exploratory Testing
All-pairs testing
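As an illustration of equivalence classes and boundary value analysis from the list above, suppose the system validates a patient’s age field accepted in the range 0–120; the validator and its limits are hypothetical, chosen only for the sketch.

```python
def is_valid_age(age):
    """Hypothetical input validator: accepts integer ages from 0 to 120."""
    return isinstance(age, int) and 0 <= age <= 120

# Boundary value analysis: test at, just inside, and just outside each boundary.
boundary_cases = {-1: False, 0: True, 1: True, 119: True, 120: True, 121: False}

# Equivalence classes: one representative stands in for its whole class
# (too low, valid, too high).
class_cases = {-50: False, 60: True, 500: False}

for value, expected in {**boundary_cases, **class_cases}.items():
    assert is_valid_age(value) is expected, value
print("all cases pass")
```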
1. Big-Bang Integration
2. Top Down Integration
3. Bottom Up Integration
4. Hybrid Integration
The acceptance test cases are executed against the test data or using an
acceptance test script, and the results are compared with the expected
ones. Acceptance criteria are defined on the basis of the following attributes:
Controls, Cases, Value, Expected Outcome, Result.
CHAPTER 8
APPENDICES
8.1 SCREENSHOT
8.2 SOURCE CODING
CHAPTER 10
Conclusion
Future Enhancement
REFERENCES
1. Mahlknecht, P.; Krismer, F.; Poewe, W.; Seppi, K. Meta-Analysis of Dorsolateral Nigral Hyperintensity on Magnetic Resonance Imaging as a Marker for Parkinson’s Disease. Mov. Disord. 2017, 32, 619–623.
2. Dickson, D.W. Neuropathology of Parkinson disease. Parkinsonism Relat. Disord. 2018, 46 (Suppl. 1), S30–S33.
3. Kalia, L.V.; Lang, A.E. Parkinson’s Disease. Lancet 2015, 386, 896–912.
4. Jankovic, J. Progression of Parkinson Disease: Are We Making Progress in Charting the Course? Arch. Neurol. 2005, 62, 351–352.
5. Beitz, J.M. Parkinson’s Disease: A Review. Front. Biosci. 2014, S6, 65–74.
6. Bigdely-Shamlo, N.; Mullen, T.; Kothe, C.; Su, K.-M.; Robbins, K.A. The PREP pipeline: Standardized preprocessing for large-scale EEG analysis. Front. Neuroinform. 2015, 9, 1–20.
7. Cole, S.; Voytek, B. Cycle-by-cycle analysis of neural oscillations. J. Neurophysiol. 2019, 122, 849–861.
8. Sorensen, G.L.; Jennum, P.; Kempfner, J.; Zoetmulder, M.; Sorensen, H.B.D. A Computerized Algorithm for Arousal Detection in Healthy Adults and Patients with Parkinson Disease. J. Clin. Neurophysiol. 2012, 29, 58–64.
9. Samuel, A.L. Some Studies in Machine Learning Using the Game of Checkers. II-Recent Progress. In Computer Games I; Levi, D.N.L., Ed.; Springer: New York, NY, USA, 1988; pp. 366–400.
10. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
11. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc. Neurol. 2017, 2, 230–243.
12. Miller, D.D.; Brown, E.W. Artificial Intelligence in Medical Practice: The Question to the Answer? Am. J. Med. 2018, 131, 129–133.
13. Gandal, M.J.; Edgar, J.C.; Klook, K.; Siegel, S.J. Gamma synchrony: Towards a translational biomarker for the treatment-resistant symptoms of schizophrenia. Neuropharmacology 2012, 62, 1504–1518.
14. Sleigh, J.W.; Olofsen, E.; Dahan, A.; de Goede, J.; Steyn-Ross, A. Entropies of the EEG: The effects of general anaesthesia. In Proceedings of the 5th International Conference on Memory, Awareness and Consciousness, New York, NY, USA, 1–3 June 2001. Available online: https://researchcommons.waikato.ac.nz/handle/10289/770 (accessed on 23 January 2019).
15. Kannathal, N.; Choo, M.L.; Acharya, U.R.; Sadasivan, P.K. Entropies for detection of epilepsy in EEG. Comput. Methods Programs Biomed. 2005, 80, 187–194.
16. Roy, Y.; Banville, H.; Albuquerque, I.; Gramfort, A.; Falk, T.H.; Faubert, J. Deep learning-based electroencephalography analysis: A systematic review. J. Neural Eng. 2019, 16, 1–37.
17. Sheng, J.; Wang, B.; Zhang, Q.; Liu, Q.; Ma, Y.; Liu, W.; Shao, M.; Chen, B. A novel joint HCPMMP method for automatically classifying Alzheimer’s and different stage MCI patients. Behav. Brain Res. 2019, 365, 210–221.
18. Raghavendra, U.; Acharya, U.R.; Adeli, H. Artificial Intelligence Techniques for Automated Diagnosis of Neurological Disorders. Eur. Neurol. 2019, 82, 41–64.
19. Jahmunah, V.; Oh, S.L.; Rajinikanth, V.; Ciaccio, E.J.; Cheong, K.H.; Arunkumar, N.; Acharya, U.R. Automated detection of schizophrenia using nonlinear signal processing methods. Artif. Intell. Med. 2019, 100, 101698.
20. Chen, Y.; Gong, C.; Hao, H.; Guo, Y.; Xu, S.; Zhang, Y.; Yin, G.; Cao, X.; Yang, A.; Meng, F.; et al. Automatic Sleep Stage Classification Based on Subthalamic Local Field Potentials. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 118–128.
21. Zhou, M.; Tian, C.; Cao, R.; Wang, B.; Niu, Y.; Hu, T.; Guo, H.; Xiang, J. Epileptic Seizure Detection Based on EEG Signals and CNN. Front. Neuroinform. 2018, 12, 95.
22. Dhivya, S.; Nithya, A. A Review on Machine Learning Algorithm for EEG Signal Analysis. In Proceedings of the 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 29–31 March 2018; pp. 54–57.
23. Rasheed, K.; Qayyum, A.; Qadir, J.; Sivathamboo, S.; Kwan, P.; Kuhlmann, L.; O’Brien, T.; Razi, A. Machine Learning for Predicting Epileptic Seizures Using EEG Signals: A Review. arXiv 2020, arXiv:2002.01925.
24. Vourvopoulos, A.; Jorge, C.; Abreu, R.; Figueiredo, P.; Fernandes, J.-C.; Bermúdez, S. Efficacy and Brain Imaging Correlates of an Immersive Motor Imagery BCI-Driven VR System for Upper Limb Motor Rehabilitation: A Clinical Case Report. Front. Hum. Neurosci. 2019, 13, 1–17.
25. Siderowf, A.; Lang, A.E. Premotor Parkinson’s Disease: Concepts and Definitions. Mov. Disord. 2012, 27, 608–616.
26. Moher, D.; Shamseer, L.; Clarke, M.; Ghersi, D.; Liberati, A.; Petticrew, M.; Shekelle, P.; Stewart, L.A.; the PRISMA-P Group. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015: Statement. Syst. Rev. 2015, 4, 1.
27. Shamseer, L.; Moher, D.; Clarke, M.; Ghersi, D.; Liberati, A.; Petticrew, M.; Shekelle, P.; Stewart, L.A.; the PRISMA-P Group. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015: Elaboration and explanation. BMJ 2015, 349, 1–25.
28. Luo, W.; Phung, D.; Tran, T.; Gupta, S.; Rana, S.; Karmakar, C.; Shilton, A.; Yearwood, J.; Dimitrova, N.; Ho, T.B.; et al. Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View. J. Med. Internet Res. 2016, 18.
29. Betrouni, N.; Delval, A.; Chaton, L.; Defebvre, L.; Duits, A.; Moonen, A.; Leentjens, A.F.G.; Dujardin, K. Electroencephalography-based machine learning for cognitive profiling in Parkinson’s disease: Preliminary results. Mov. Disord. 2019, 34, 210–217.
30. Chaturvedi, M.; Hatz, F.; Gschwandtner, U.; Bogaarts, J.G.; Meyer, A.; Fuhr, P.; Roth, V. Quantitative EEG (QEEG) measures differentiate Parkinson’s disease (PD) patients from healthy controls (HC). Front. Aging Neurosci. 2017, 9, 3.
31. Oh, S.L.; Hagiwara, Y.; Raghavendra, U.; Yuvaraj, R.; Arunkumar, N.; Murugappan, M.; Acharya, U.R. A deep learning approach for Parkinson’s disease diagnosis from EEG signals. Neural Comput. Appl. 2020, 32, 10927–10933.
32. Ruffini, G.; Ibañez, D.; Castellano, M.; Dubreuil-Vall, L.; Soria-Frisch, A.; Postuma, R.; Gagnon, J.-F.; Montplaisir, J. Deep Learning with EEG Spectrograms in Rapid Eye Movement Behavior Disorder. Front. Neurol. 2019, 10, 806.
33. Saikia, A.; Hussain, M.; Barua, A.R.; Paul, S. EEG-EMG Correlation for Parkinson’s Disease. Int. J. Eng. Adv. Technol. IJEAT 2019, 8, 1179–1185.
34. Saikia, A.; Hussain, M.; Barua, A.R.; Paul, S. Performance Analysis of Various Neural Network Functions for Parkinson’s Disease Classification Using EEG and EMG. Int. J. Innov. Technol. Explor. Eng. IJITEE 2019, 9, 3402–3406.
35. Vanneste, S.; Song, J.-J.; Ridder, D.D. Thalamocortical dysrhythmia detected by machine learning. Nat. Commun. 2018, 9, 1103.
36. Waninger, S.; Berka, C.; Karic, M.S.; Korszen, S.; Mozley, P.D.; Henchcliffe, C.; Kang, Y.; Hesterman, J.; Mangoubi, T.; Verma, A. Neurophysiological Biomarkers of Parkinson’s Disease. J. Parkinson’s Dis. 2020, 10, 471–480.
37. Yuvaraj, R.; Acharya, U.R.; Hagiwara, Y. A novel Parkinson’s Disease Diagnosis Index using higher-order spectra features in EEG signals. Neural Comput. Appl. 2018, 30, 1225–1235.