1 Introduction
Intellectual functioning tests, including memory testing, are a commonly used diagnostic
tool to characterize the state of cognitive impairments such as Alzheimer's disease.
In this paper, we investigate the idea of using the classification ability of a
machine learning algorithm as an indicator for the detection of memory-related
cognitive diseases. We have collected EEG recordings from a single healthy subject
performing a relaxing task and a memory task; the latter represents the cognitive
scenario. If the subject is healthy, a distinct difference between the EEG recordings
of the two scenarios is expected, and a classification algorithm should be
able to tell the memory and relax scenarios reliably apart. Therefore, if a high
classification accuracy is observed, the subject is expected to be healthy. On
the other hand, poor classification performance may be an indicator of
memory-related cognitive disease. In this paper, we present a brief proof
of concept only. We are especially interested in establishing the suitability of a
reservoir computing approach for the described learning scenario. Reservoir computing
has reported promising results on the detection of epileptic seizures [1]
and the classification of motor imagery based on EEG data streams [6]. While
the above studies have investigated the suitability of Echo State Networks [4],
in this paper we explore Liquid State Machines (LSM) [7] for classifying
spatio-temporal EEG signals.
M. Lee et al. (Eds.): ICONIP 2013, Part III, LNCS 8228, pp. 55–62, 2013.
© Springer-Verlag Berlin Heidelberg 2013
56 S. Schliebs, E. Capecci, and N. Kasabov
An LSM consists of two main components, a “liquid” (also called reservoir) in the
form of a recurrent Spiking Neural Network (SNN) [3] and a trainable readout
function. The liquid is stimulated by spatio-temporal input signals causing neural
activity in the SNN that is further propagated through the network due to its
recurrent topology. Therefore, a snapshot of the neural activity in the reservoir
contains information about the current and past inputs to the system.
The function of the liquid is to accumulate the temporal and spatial infor-
mation of all input signals into a single high-dimensional intermediate state in
order to enhance the separability between network inputs. The readout function
is then trained to transform this intermediate state into a desired system output.
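The two components can be made concrete with a minimal sketch: a sparse, randomly connected leaky integrate-and-fire reservoir driven by a multi-channel input stream, whose low-pass filtered spike activity serves as the intermediate state. All sizes, weight scales, and time constants below are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 14, 100               # input channels (e.g. EEG) and reservoir size (assumed)
w_in, w_s, density = 0.5, 0.2, 0.1  # illustrative scalings, not the paper's values

# input weights uniform in [-w_in, w_in]; sparse recurrent weights in [-w_s, w_s]
W_in = rng.uniform(-w_in, w_in, (n_res, n_in))
W = rng.uniform(-w_s, w_s, (n_res, n_res)) * (rng.random((n_res, n_res)) < density)

def liquid_state(u, leak=0.9, v_th=1.0, trace_decay=0.95):
    """Drive a leaky integrate-and-fire reservoir with the input stream u
    (time x channels) and return a snapshot of its filtered spike activity."""
    v = np.zeros(n_res)        # membrane potentials
    spikes = np.zeros(n_res)   # spikes emitted at the previous step
    trace = np.zeros(n_res)    # low-pass filtered spike trains
    for t in range(u.shape[0]):
        v = leak * v + W_in @ u[t] + W @ spikes  # integrate input + recurrence
        spikes = (v >= v_th).astype(float)       # threshold crossing -> spike
        v = np.where(spikes > 0, 0.0, v)         # reset neurons that fired
        trace = trace_decay * trace + spikes
    return trace  # high-dimensional state summarizing current and past inputs

u = rng.standard_normal((200, n_in))  # stand-in for one 200-step input trial
x = liquid_state(u)
```

A trainable readout (e.g. regularized linear regression) would then map such state vectors, one per trial, to class labels.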
[Figure 1: grid of EEG traces, one panel per EEG channel (C1–C14) and session (1–5); y-axis: normalized EEG amplitude (−6 to 6); x-axis: time in sec (0–40)]
Fig. 1. EEG data recorded in 2 × 5 sessions (five sessions for each task) using a
14-channel (C1 to C14) EEG recording device
are in the range [−w_in, w_in] nA. Configuring the parameter w_in is very important,
since it determines how strongly the state of the reservoir neurons is influenced by
the input signal. A low input scaling factor decreases the influence of the input
signal and increases the influence of the recurrently connected reservoir neurons.
Thus, by carefully adjusting the parameter w_in, we decide how strongly the network
responds to the input and how strongly it reacts to the activity generated by its
own reservoir neurons.
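This scaling effect can be illustrated numerically. With input weights drawn uniformly from [−w_in, w_in], the average magnitude of the injected current grows in direct proportion to w_in (the sizes and input sample below are hypothetical, not the paper's recordings):

```python
import numpy as np

n_in, n_res = 14, 50  # illustrative sizes

def mean_input_drive(w_in, u, seed=0):
    """Average magnitude of the current W_in @ u injected into the reservoir,
    with input weights drawn uniformly from [-w_in, w_in]."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-w_in, w_in, (n_res, n_in))
    return np.abs(W_in @ u).mean()

u = np.random.default_rng(1).standard_normal(n_in)  # stand-in input sample
weak, strong = mean_input_drive(5.0, u), mean_input_drive(40.0, u)
# same weight pattern under the same seed, only rescaled:
# the strong drive is exactly 8x the weak one
```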
variables which require careful optimization. These variables are the scaling w_in
of the input weights, the scaling w_s of the connection weights of the reservoir, the
connection density λ of the reservoir neurons, and the regularization parameter
α that is used for learning the mapping of the reservoir state to the class label.
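Such a search over four variables can be organized as a plain exhaustive grid. The sketch below uses the parameter values shown in Fig. 2 (five input scalings, two densities, eight values of α) plus an assumed set of five reservoir scalings, giving 400 configurations; the evaluation function is a deterministic stub standing in for building an LSM, training its readout, and measuring test accuracy.

```python
import itertools

def evaluate(w_in, w_s, lam, alpha):
    """Stub for: build an LSM with these parameters, train the readout, and
    return test accuracy. Shaped so the paper's selected configuration scores
    best, purely for illustration."""
    return (0.8 - 0.001 * abs(w_in - 40) - 0.001 * abs(w_s - 10)
                - 0.010 * abs(lam - 3) - 0.010 * abs(alpha - 1))

grid = itertools.product(
    [5, 10, 20, 40, 60],             # w_in: input weight scaling (from Fig. 2)
    [5, 10, 20, 40, 60],             # w_s: reservoir weight scaling (assumed grid)
    [3, 8],                          # lam: connection density
    [0, 1, 5, 10, 20, 50, 75, 100],  # alpha: regularization
)
best = max(grid, key=lambda cfg: evaluate(*cfg))
# best == (40, 10, 3, 1), i.e. w_in = 40, w_s = 10, lam = 3, alpha = 1
```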
[Figure 2: test accuracy (0.5–0.8) versus regularization α (0, 1, 5, 10, 20, 50, 75, 100), one panel per input weight scaling (5, 10, 20, 40, 60) and network density (λ = 3, top row; λ = 8, bottom row)]
Fig. 2. Grid search for suitable parameter configurations for the LSM
The results of this grid search are reported in Fig. 2. Each point in this plot
represents one of the 400 configurations tested during the grid search. The y-axis
represents the test accuracy of the trained model, which is our performance metric
for this study. Since the data is perfectly balanced (identical number of instances
for both classes), a test accuracy of 50% corresponds to a random classification
of the data.
In the figure, we clearly note the impact of the regularization parameter α,
which directly controls the generalization capabilities of the trained model. If the
regularization is too small, the regression model over-fits the data; if it is too
large, the model under-fits. The input scaling also has a considerable influence on
the behaviour of the model, although the role of this influence is less clear. The
network density does not seem very important for this data set. From the grid
search we select α = 1, w_in = 40, w_s = 10 and λ = 3 as the most suitable
configuration.
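The role of α can be made concrete with a standard ridge-regression readout, a common choice for mapping reservoir states to labels. The state matrix below is random stand-in data, not reservoir states from the experiment:

```python
import numpy as np

rng = np.random.default_rng(2)

# stand-in for 40 reservoir-state snapshots (20 trials per class), 100 features
X = rng.standard_normal((40, 100))
y = np.repeat([0.0, 1.0], 20)
X[20:] += 0.5  # shift class 1 so the classes are weakly separable

def ridge_readout(X, y, alpha):
    """Closed-form regularized least squares: w = (X^T X + alpha*I)^(-1) X^T y.
    Larger alpha shrinks w and smooths the fit (risking under-fitting);
    smaller alpha tracks the training data closely (risking over-fitting)."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

w = ridge_readout(X, y, alpha=1.0)
train_acc = ((X @ w > 0.5) == (y > 0.5)).mean()
```

With more features than training trials, a small α lets the readout fit the training set almost perfectly, which is exactly why the test accuracy, not the training accuracy, must drive the selection of α.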
features (EEG channels). Future work could involve the acquisition of a more
suitable data set that also involves the EEG recordings from subjects suffering
from cognitive diseases.
References
1. Buteneers, P., Verstraeten, D., Nieuwenhuyse, B.V., Stroobandt, D., Raedt, R.,
Vonck, K., Boon, P., Schrauwen, B.: Real-time detection of epileptic seizures in
animal models using reservoir computing. Epilepsy Research 103(2-3), 124–134
(2013)
2. Van der Elst, W., van Boxtel, M.P.J., van Breukelen, G.J.P., Jolles, J.: Assessment
of information processing in working memory in applied settings: the paper and
pencil memory scanning test. Psychological Medicine 37, 1335–1344 (2007)
3. Gerstner, W., Kistler, W.M.: Spiking Neuron Models: Single Neurons, Populations,
Plasticity. Cambridge University Press, Cambridge (2002)
4. Jaeger, H.: The “echo state” approach to analysing and training recurrent neural
networks. Tech. rep., Fraunhofer Institute for Autonomous Intelligent Syst. (2001)
5. Kasabov, N.: NeuCube EvoSpike architecture for spatio-temporal modelling and
pattern recognition of brain signals. In: Mana, N., Schwenker, F., Trentin, E. (eds.)
ANNPR 2012. LNCS (LNAI), vol. 7477, pp. 225–243. Springer, Heidelberg (2012)
6. Kindermans, P.J., Buteneers, P., Verstraeten, D., Schrauwen, B.: An uncued brain-
computer interface using reservoir computing. In: Workshop: Machine Learning
for Assistive Technologies, Proceedings, p. 8. Ghent University, Department of
Electronics and information systems (2010)
7. Maass, W., Natschläger, T., Markram, H.: Real-time computing without stable
states: A new framework for neural computation based on perturbations. Neural
Computation 14(11), 2531–2560 (2002)
8. Markram, H., Wang, Y., Tsodyks, M.: Differential signaling via the same axon
of neocortical pyramidal neurons. Proceedings of the National Academy of Sci-
ences 95(9), 5323–5328 (1998)
9. Schliebs, S., Defoin-Platel, M., Kasabov, N.: Integrated feature and parameter
optimization for an evolving spiking neural network. In: Köppen, M., Kasabov, N.,
Coghill, G. (eds.) ICONIP 2008, Part I. LNCS, vol. 5506, pp. 1229–1236. Springer,
Heidelberg (2009)
10. Schliebs, S., Fiasché, M., Kasabov, N.: Constructing robust liquid state machines to
process highly variable data streams. In: Villa, A.E.P., Duch, W., Érdi, P., Masulli,
F., Palm, G. (eds.) ICANN 2012, Part I. LNCS, vol. 7552, pp. 604–611. Springer,
Heidelberg (2012)
11. Schliebs, S., Hunt, D.: Continuous classification of spatio-temporal data streams
using liquid state machines. In: Huang, T., Zeng, Z., Li, C., Leung, C.S. (eds.)
ICONIP 2012, Part IV. LNCS, vol. 7666, pp. 626–633. Springer, Heidelberg (2012)