
Digital Neuromorphic Design of a Liquid State Machine for Real-Time Processing

Anvesh Polepalli, Nicholas Soures, Dhireesha Kudithipudi
Nanocomputing Research Lab
Rochester Institute of Technology
e-mail: dxkeec@rit.edu

Abstract—The Liquid State Machine (LSM) is a form of reservoir computing which emulates the brain's capability of processing spatio-temporal data. This type of network generates highly descriptive responses to continuous input streams. The response is then used to extract information about the input stream. A single LSM network can be used as a generic intelligent processor that processes different streams of data, or the same stream of data, to extract different features. The LSM has been shown to perform well in tasks dependent on a system's behavior through time. The LSM's intrinsic memory and its reduced training complexity make it a suitable choice for hardware implementations for spatio-temporal applications. Existing behavioral models of the LSM cannot process real-time data due to their hardware complexity, their inability to deal with real-time data, or both. The proposed model focuses on a simple liquid design that exploits spatial locality and is capable of processing real-time data. The model is evaluated for EEG seizure detection with an accuracy of 84.2% and for user identification based on walking pattern with an accuracy of 98.4%.

I. INTRODUCTION

Our environment predominantly consists of dynamic signals which provide information about the state of our surroundings. These signals include visual stimuli, motion, and audio signals. By continuously processing all these signals, we are able to perceive, predict, or identify patterns in the data. Similarly, autonomous systems rely on memory of recent states to process spatio-temporal streams of information and find correlations in patterns so they can make predictions based on the dynamic signals. A few networks proposed in the literature process spatio-temporal inputs, such as Long Short-Term Memory and time-delay networks. Typically, these require a predefined window size appropriate for the task at hand, as well as a linear increase in network size with the size of the network's memory. Recurrent neural networks (RNN) remove the need to know the appropriate window size because the network only requires the input at the current point in time; the recurrent connections give the network a memory of past behavior due to previous inputs. However, training recurrent networks with methods such as back-propagation through time is computationally expensive, and such networks also suffer from exploding and vanishing gradients [1]. A class of networks known as Reservoir Computing (RC), proposed in 2002, offers rapid training.

These networks consist of a reservoir, which can be viewed as a 3D array of recurrently connected neurons. This reservoir of neurons can create a multi-dimensional dynamic representation of an input stream. The idea behind RC is that the reservoir provides a unique, nonlinear projection of each different input sequence into a higher-dimensional space. It then suffices to train only a classifier to recognize and associate the different reservoir states with the desired target outputs; the only training needed is on the output layer itself. Two major forms of RC emerged at the same time: the Echo State Network (ESN) proposed in [2] and the Liquid State Machine (LSM) proposed in [3]. The major difference between the two networks is that the LSM was designed from a more biological viewpoint, operating on spiking neurons and spike trains, as well as having dynamic synaptic strengths. The ESN operates on continuous analog signals using activation functions such as the sigmoid, with static reservoir weights. In this work we focus on the LSM. It has been shown in [4] that spiking neurons are at least as computationally powerful as neurons using threshold, sigmoidal, or other similar activation functions.

Software models of the LSM have been applied in diverse domains such as facial expression recognition [5], speech recognition [6], and isolated word recognition [7]. In general, digital hardware realizations of the LSM offer performance and energy efficiency ([8], [9], [10], [11]) due to highly parallel structures and small form factors. A few hardware models explore compact hardware on FPGA [12], a hardware-efficient neuromorphic dendritically enhanced readout [13], and real-time speech recognition [14]. These implementations offer early architectures which are either not hardware efficient or not capable of dealing with real-time inputs. The specific contribution of this research study is to design a generic behavioral model of the LSM which focuses on a scalable and hardware-efficient approach for processing real-time data without significant performance loss.

This material is based on research sponsored by Air Force Research Laboratory under agreement number FA8750-16-1-0108. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Research Laboratory or the U.S. Government.

978-1-5090-1370-8/16/$31.00 © 2016 Crown

The rest of the paper is outlined as follows. Section II provides an overview of the LSM algorithm and the process flow

of training the network. Section III provides insight into the proposed model. Section IV describes in detail the various network topologies that have been explored in the implementation of the LSM. Section V describes the simulation framework and the data sets that are used to benchmark the proposed architecture. Section VI presents the simulation results for the benchmarks.

Fig. 1: A basic structure of LSM consisting of three layers: input, reservoir and output layer.

II. LSM ALGORITHM

The state of a neuron in the liquid is calculated based on the current inputs u(t), the synaptic strengths Win of these inputs, and the current state of all the neurons within the liquid (refer to Fig. 1). Current responses of the neurons within the liquid are based on preceding perturbations/inputs (u(t′) for t′ ≤ t). In mathematical terms, the liquid state is simply the current output of some function L^M that maps the input u(t′) onto x^M(t), as shown in (1). All the information from inputs u(t′) at preceding time points t′ ≤ t that is needed to produce the target output y(t) at time t is contained in the current state of the liquid [3].

x^M(t) = (L^M ∗ u)(t)    (1)

The LSM has a memoryless readout map f^M that transforms the current liquid state x^M(t) into the output, as shown in (2). The liquid function L^M is task-independent, whereas a readout map f^M is in general chosen in a task-dependent manner. There can be more than one memoryless readout map, each extracting different features from the current liquid output (if required).

y(t) = f^M(x^M(t))    (2)

Within the liquid, each neuron spikes when its threshold is reached; the number of spikes that have occurred within a given window (which may or may not be sliding) is the output state of the liquid for that window duration. The training algorithm calculates and updates the third set of weights (the weights connecting the liquid to the output layer and the neurons within the output layer) based on the outputs of the neurons in the liquid for each time window.

The training process flow of the LSM model is as follows:
1) Initialization
   • Each neuron within the liquid is chosen to be either inhibitory or excitatory at random, depending on the inhibitory-to-excitatory neuron ratio.
   • All three sets of connections and their respective synaptic strengths are initialized.
2) A set of inputs u(t) is fed into the input layer.
3) The response of the liquid is calculated based on (1).
4) The responses in the previous time step are fed into the output layer and are also stored for the next time step (to calculate the liquid response).
5) The response from the liquid is used to train the third set of weights, using a specific training algorithm and weight update rule, as shown in (2).
6) Repeat steps 2-5 on all of the input training sets.

III. PROPOSED MODEL AND DIGITAL DESIGN OF THE LIQUID

The proposed behavioral model of the LSM is composed of 3 stages (refer to Fig. 2) - input/preprocessing layer, liquid, and output layer. All the connections between and within these blocks are associated with a synaptic strength.

A. Preprocessing

The type of preprocessing performed is application dependent. For the epilepsy detection dataset used in this case, the proposed design consists of 4 FIR filters. The number of taps for each filter is a reconfigurable parameter. For EEG signal preprocessing, a low-pass filter is designed to pass frequencies below 3 Hz (delta brain waves), and three band-pass filters are designed to pass the frequency ranges 4 Hz-8 Hz (theta brain waves), 8 Hz-13 Hz (alpha brain waves) and 13 Hz-30 Hz (beta brain waves). For biometric user identification no preprocessing is required.

B. Input Layer

The input layer receives the preprocessed data and generates a spike train. There are a few spike train generator models that provide binary signals (spike trains). A few of these models are discussed in detail by Hugo de Garis et al. in [15], along with an optimized spike train generator. The input layer of the LSM has been implemented and tested for variations in performance (in terms of accuracy and hardware complexity), where each neuron follows Ben's Spiker Algorithm (BSA). There are as many neurons in the input layer as there are input channels being fed into the LSM. Outputs from the preprocessing layer (if any) are directly fed into the liquid layer of the LSM.

Connections from the input/preprocessing layer to the liquid are of random connectivity, whereas the number of connections depends on the degree of connectivity for each input channel from the input/preprocessing layer to the liquid. In this proposed model for epilepsy detection there are four input signals to the

liquid, one from each of the filters for the epilepsy detection data, and three input signals in the case of biometric user identification using the walking data. The weights connecting the input/preprocessing layer to the liquid are chosen to be of unit value. The degree of connectivity from the input/preprocessing layer to the liquid is 40%, i.e. each input channel is connected to 40% of the neurons in the liquid, randomly chosen.

Fig. 2: Proposed behavioral model of the LSM.

C. Liquid Layer

The liquid layer consists of neurons connected by a set of synapses, depending upon the topology chosen, which can be a random topology, spatial locality, ring, mesh, etc. For the proposed design two types of topologies are explored - random topology and connectivity based on spatial locality (spatial topology); Section IV provides further details. There are two kinds of neurons used within the liquid - excitatory and inhibitory. All the neurons within the liquid are modeled as Leaky-Integrate-and-Fire (LIF) neurons.

A scanning window of fixed width is used to count the number of spikes that occurred within the time window; this count is fed out of the liquid layer into the output layer for prediction. In this design approach the duration of the window is chosen to be 1 second for the epileptic seizure detection data and 0.25 second for user identification using the walking data.

Several types of connections between the liquid and the output layer have been explored. The best performance has been observed when all the neurons of the liquid are connected to the output layer. The other types of connections that have been explored are random connectivity, connecting only the neurons in the last layer to the output layer, and connecting one random neuron from each layer to the output layer.

D. Output Layer

The output layer is composed of a two-layer Multi-Layer Perceptron. The numbers of neurons in the first and second layer are chosen to be 4 and 1 respectively; a further increase in the number of neurons doesn't show any significant performance improvement. All the neurons within the output layer use a logistic-sigmoid activation function, and the weights connecting the liquid to the output layer as well as the weights within the output layer are updated based on the gradient descent method.

E. Digital Design of LSM

The proposed digital design architecture of the LSM is implemented for reconfigurable platforms (refer to Fig. 2). A fully parallel design approach of network operation is chosen instead of a partially parallel or a sequential approach, so as to process the real-time input data streams with as little delay and latency as possible. The RAM and ROM blocks used in this design are chosen to be of distributed type rather than block type, for high performance with minimum delay. In the case of block memory the logic is far from the memory, leading to a large wire delay (long access times) that precludes real-time processing. A distributed memory approach offers memory access immediately at the next clock cycle and supports real-time processing.

The liquid layer consists of a parameterized number of neuron blocks (60 in this case), a RAM block, a ROM block, and a memory controller to control the flow of data from the neurons to the different memory blocks. A control unit

is used to control the functionality/state of all the neurons and helps in synchronizing all the neurons' states with respect to each other. An internal control unit within each neuron determines the flow of data and the state of the neuron independently. The ROM block is used to store the synaptic connection strengths of all the connections within the liquid and from the input layer to the liquid. The RAM block is used to store the spiking status of the neurons, the output current from the neurons, and the spike count of each neuron. All the variables within the liquid are integers, with Bernoulli weights drawing either '0' or '1', and the weights within the liquid are 3 bits long, taking a value of 0/4/8/(-2). The output current from each neuron is 12 bits long, whereas the input current to the neurons from the input layer is 21 bits long.

The output layer consists of 4 hidden-layer neurons and 1 output-layer neuron, and each neuron has a sigmoid activation function. The sigmoid activation function is implemented as a piece-wise-linear model instead of a look-up table, resulting in a hardware-efficient model by reducing the memory requirement.
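A piece-wise-linear sigmoid of this kind can be sketched as follows (illustrative Python; the actual design uses fixed-point arithmetic, and the breakpoints and slope below are assumed values rather than the ones used in the hardware):

```python
import math

def pwl_sigmoid(x):
    """Three-segment piece-wise-linear approximation of the logistic sigmoid.

    Saturates at 0 below x = -2 and at 1 above x = +2; in between it follows
    the tangent of 1/(1 + e^-x) at the origin (value 0.5, slope 0.25).
    The breakpoints are illustrative, not the ones used in the hardware.
    """
    if x <= -2.0:
        return 0.0
    if x >= 2.0:
        return 1.0
    return 0.5 + 0.25 * x

# Worst-case deviation from the exact sigmoid over [-5, 5]: the approximation
# stays within roughly 0.12 of the true value while needing no look-up table.
err = max(abs(pwl_sigmoid(k / 100.0) - 1.0 / (1.0 + math.exp(-k / 100.0)))
          for k in range(-500, 501))
```

Compared with a look-up table, such a model needs only an adder and a shift (a slope of 0.25 is x >> 2), which is why it reduces the memory requirement.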

IV. SPATIAL TOPOLOGY

In the spatial topology the probability of a connection is established based on the types of the neurons (inhibitory or excitatory) between which the connection is being established, the distance between the neurons, and the degree of connectivity chosen. The connection probability from neuron 'a' to neuron 'b' can be mathematically defined as C ∗ e^(−(D(a,b)/λ)²), where 'D' is the function that computes the distance between the neurons, 'λ' is a parameter that is used to control the degree of connectivity, and 'C' is a constant (used to signify excitatory and inhibitory signals) whose value is 0.3 (EE), 0.2 (EI), 0.4 (IE), 0.1 (II), as proposed in [3].

R_{n+1} = R_n (1 − u_{n+1}) exp(−Δt/τ_rec) + 1 − exp(−Δt/τ_rec)    (3)

u_{n+1} = u_n exp(−Δt/τ_facil) + U (1 − u_n exp(−Δt/τ_facil))    (4)

The synaptic strengths of these connections are based upon the model proposed by Markram et al. in [16] and presented by Maass et al., and can be mathematically formulated as in (3) and (4) (for a detailed description please refer to [16]). Considering the hardware design complexity, this has been optimized such that the synaptic strengths of the connections within the liquid are fixed and use a binary numeral system (of type 2^n).

Fig. 3: Time series EEG signal samples for (a) Normal case (dataset A) and (b) Seizure case (dataset E).

Fig. 4: Electrode placements for EEG Data Acquisition [17]: (a) Scheme of the locations of surface electrodes according to the international 10-20 system, which is used to collect EEG signals for set A; (b) Scheme of intracranial electrodes implanted for presurgical evaluation of epilepsy patients, which is used to collect EEG signals for set E.

V. SIMULATION METHODOLOGY

MATLAB is used as a simulation platform to build the behavioral model of the proposed network. An exhaustive sweep of several parameters was done on this behavioral model, and the results were analyzed to build a hardware behavioral model.
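A sketch of the Section IV connection rule in Python (illustrative only; the paper's simulations are in MATLAB, and the grid shape, λ value, and inhibitory fraction below are assumed parameters, while the C constants are the ones quoted from [3]):

```python
import math
import random

# Connection-probability constants C from [3], keyed by (pre, post) neuron type.
C = {("E", "E"): 0.3, ("E", "I"): 0.2, ("I", "E"): 0.4, ("I", "I"): 0.1}

def distance(a, b):
    """Euclidean distance D(a, b) between two neuron positions on the 3D grid."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def build_liquid(positions, types, lam=2.0, seed=0):
    """Draw each directed connection a -> b with probability C * exp(-(D(a,b)/lam)^2)."""
    rng = random.Random(seed)
    edges = []
    for i, a in enumerate(positions):
        for j, b in enumerate(positions):
            if i == j:
                continue
            p = C[(types[i], types[j])] * math.exp(-((distance(a, b) / lam) ** 2))
            if rng.random() < p:
                edges.append((i, j))
    return edges

# A 60-neuron liquid on a 3x4x5 grid with a 20% inhibitory fraction (assumed).
positions = [(x, y, z) for x in range(3) for y in range(4) for z in range(5)]
rng = random.Random(1)
types = ["I" if rng.random() < 0.2 else "E" for _ in positions]
edges = build_liquid(positions, types)
```

Here λ trades connection density against locality: distant pairs are exponentially unlikely to connect, which is what makes the spatial topology hardware-friendly (mostly short wires).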

A. Epilepsy

The EEG dataset used is presented in [17] and obtained from [18]. Signals in this dataset were recorded using the same 128-channel amplifier system. A 12-bit analog-to-digital converter is used to sample the signals at a rate of 173.61 Hz. The dataset consists of 500 single-channel EEG segments, divided into five sets A-E; each set contains 100 EEG segments of 23.6 seconds each. These segments were selected and cut from multi-channel EEG records after visual inspection for artifacts. Sets A and E of this dataset are used in this work. Set A contains EEG recordings of five healthy volunteers in a relaxed state. Surface electrodes, as shown in Fig. 4a, were used to collect the data in this set. The signals were collected from all these electrodes, where each segment contains the data of one electrode. Set E contains seizure activity segments taken from five epilepsy patients during presurgical evaluation. Depth electrodes implanted symmetrically into the hippocampal formations of the brain, as shown in Fig. 4b, were used to collect the data. Fig. 3a and Fig. 3b show EEG signal samples of the normal and seizure cases respectively.
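The Section III-A filter bank applied to this EEG data can be sketched as follows (illustrative Python; the paper only fixes the pass bands and states that the tap count is reconfigurable, so the windowed-sinc design and the 101-tap length here are assumptions):

```python
import math

FS = 173.61  # EEG sampling rate of the dataset, in Hz

def bandpass_fir(f_lo, f_hi, taps=101, fs=FS):
    """Windowed-sinc FIR taps for the band [f_lo, f_hi] Hz (f_lo = 0 gives a low-pass)."""
    m = taps - 1
    h = []
    for n in range(taps):
        k = n - m / 2
        # Ideal band-pass impulse response: difference of two low-pass sincs.
        if k == 0:
            v = 2.0 * (f_hi - f_lo) / fs
        else:
            v = (math.sin(2 * math.pi * f_hi * k / fs)
                 - math.sin(2 * math.pi * f_lo * k / fs)) / (math.pi * k)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming window
        h.append(v * w)
    return h

def fir_filter(h, x):
    """Direct-form convolution y[n] = sum_k h[k] * x[n-k]."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

# The four feature channels: delta (low-pass below 3 Hz), theta, alpha, beta.
bank = [bandpass_fir(0, 3), bandpass_fir(4, 8),
        bandpass_fir(8, 13), bandpass_fir(13, 30)]
```

In the proposed hardware each of these four filters feeds one input channel of the liquid.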
B. Biometric User Identification using walk data

Biometrics are biological features of an individual that offer unique descriptors. These are used in applications such as user identification and security. One such example is user identification based on walking pattern ([19], [18]). In this dataset, the walking pattern of 22 individuals was measured using an Android smart phone carried in a chest shirt pocket. The smart phone used Android's accelerometer at its highest sampling frequency, 33 samples per second, to record the motion of the individual. Each person was required to walk along an identified path, but no restrictions on how they walked were placed. The acceleration in the x, y, and z directions was recorded for each individual until the path was completed. From this dataset we trained the network to differentiate between two people. A sample of the accelerometer measurements in each direction is shown in Fig. 5.

Fig. 5: Accelerometer Recordings in XYZ Coordinates: acceleration in the (a) X-direction, (b) Y-direction, and (c) Z-direction for two individuals.
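The LIF neurons and the scanning-window spike count described in Section III-C can be sketched as follows (illustrative Python with continuous values; the hardware uses integer currents, and the membrane constants here are assumed values):

```python
def lif_spike_counts(input_current, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0,
                     window=1.0):
    """Leaky integrate-and-fire neuron; returns the spike count per scanning window.

    window = 1.0 s mirrors the seizure-detection setting; 0.25 s would be used
    for the walking data. dt, tau, and v_th are assumed values.
    """
    v = 0.0
    counts, count = [], 0
    steps = int(round(window / dt))
    for k, i_in in enumerate(input_current, start=1):
        v += dt * (-v + i_in) / tau      # leaky integration dv/dt = (I - v)/tau
        if v >= v_th:                    # threshold crossing -> emit a spike
            count += 1
            v = v_reset                  # reset the membrane potential
        if k % steps == 0:               # window boundary: read out the count
            counts.append(count)
            count = 0
    return counts

# A constant supra-threshold drive for 2 s yields two identical window counts;
# a sub-threshold drive never spikes.
busy = lif_spike_counts([1.5] * 2000)
quiet = lif_spike_counts([0.5] * 1000)
```

The per-window counts of all 60 neurons form the liquid state vector that the readout layer is trained on.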
VI. RESULTS AND ANALYSIS

A. Epileptic Seizure Detection

The proposed LSM model is trained and tested using a total of 200 EEG signals (140 for training, 20 for validation and 40 for testing). The input EEG signal is passed through a preprocessing block with four filters, each extracting a specific feature. The four output signals from the preprocessing block are randomly connected to 40% of the neurons within the liquid, with each connection having a weight (or synaptic strength) of either zero or one. All the neuron outputs are connected to the output layer, which is trained to detect epileptic seizures.

Fig. 6: Cross-Entropy vs Epochs - for epilepsy detection data.

Fig. 6 shows the performance of the liquid in terms of the cross-entropy of the output layer over epochs. It can be observed that there is a gradual decrease in cross-entropy as the number of epochs increases, which indicates a decrease in error (or an increase in prediction accuracy). Fig. 7 shows the confusion matrices obtained as a result of predictions from the LSM; the confusion matrices for training, validation and testing have accuracies of 85.5%, 84.3%, and 84.2% respectively. An overall accuracy of 85.1% is observed over 50 iterations.

B. Biometric User Identification

The three output signals from the sensor (accelerometer) are randomly connected to 40% of the neurons within the liquid, with each connection having a weight (or synaptic strength)

of either zero or one. All the neuron outputs are connected to the output layer, which is trained for user identification. Fig. 8 shows the network learning over time as the error decreases. The final performance is shown by the confusion matrix in Fig. 9, with an overall accuracy of 98.4%.

Fig. 7: Confusion matrix based on the predictions made by the LSM for epileptic seizure detection.

Fig. 8: Cross-Entropy vs Epochs - for bio-metric user identification data.

Fig. 9: Confusion matrix based on the predictions made by the LSM for bio-metric user identification data.

C. Kernel Quality of the Liquid

The performance of the liquid depends not only upon the number of neurons but also on various other factors, such as the pseudo-randomly generated weights and the neuron types (inhibitory or excitatory). These factors affect the overall performance of the system. As the goal of the LSM is to have the liquid independent of the application, there arises the need for generic metrics that evaluate the performance of the liquid. Several such metrics have been proposed in the literature [20], [21]; the study in [22] compared the ability of reservoir metrics to measure the performance of several reservoir topologies. The reservoir metrics used in their study are: class separation, kernel quality, and Lyapunov's exponent. Results show that kernel quality and Lyapunov's exponent strongly correlate with reservoir performance. Kernel quality is a measure of the linear separability of the reservoir [20]. In this metric analysis, the reservoir responses to all sets of inputs are used, and the rank of the resulting matrix indicates the kernel quality of the liquid. The rank of the matrix reflects the correlation between neurons within the liquid: if the rank equals the number of neurons within the liquid, all the neurons generate responses unique from each other. For the proposed behavioral model a kernel quality of 59 (i.e. a matrix rank of 59) is observed for a liquid with 60 neurons, which indicates a minute correlation between neurons within the liquid.

D. Separation Property of the Liquid

The separation property describes how unique the state of the liquid is for different input classes. This has a direct impact on the performance of the readout layer. An ideal liquid should have a high inter-class variance with a low intra-class variance, which enhances the performance of the readout layer. To evaluate the liquid, (5) is used as given in [23], which is an expanded version of the work from [24].

Sep_x(t) = C_d(t) / (C_v(t) + 1)    (5)

In (5), the inter-class distance is represented by C_d and determined using (6), while the intra-class distance is represented by C_v and determined using (7), where N represents the total number of classes, k represents the number of samples of each class, and μ(O_m(t)) represents the mean state of class m.

C_d(t) = Σ_{m=1}^{N} Σ_{n=1}^{N} ||μ(O_m(t)) − μ(O_n(t))|| / N²    (6)

The separation property of the reservoir was found to be 0.1028 and 0.2945 for the seizure detection and user identification tasks respectively. From these results we were able to see that the intra-class variance was significantly larger than the inter-class distance.
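A minimal Python sketch of the separation computation in (5)-(7) (illustrative; the paper's analysis was done in MATLAB):

```python
import math

def mean_state(states):
    """Component-wise mean µ(O_m) of a class's liquid-state vectors."""
    return [sum(col) / len(states) for col in zip(*states)]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def separation(classes):
    """Sep = Cd / (Cv + 1), with Cd as in (6) and Cv as in (7).

    `classes` maps a class label to the list of liquid-state vectors
    observed for that class.
    """
    mus = {m: mean_state(xs) for m, xs in classes.items()}
    labels = list(classes)
    N = len(labels)
    # (6): inter-class distance, averaged over all ordered class pairs.
    cd = sum(dist(mus[m], mus[n]) for m in labels for n in labels) / N ** 2
    # (7): intra-class distance, mean distance of samples to their class mean.
    cv = sum(sum(dist(mus[m], x) for x in classes[m]) / len(classes[m])
             for m in labels) / N
    return cd / (cv + 1.0)

# Two tight, well-separated classes score higher than two overlapping ones.
s_far = separation({0: [[0, 0], [0, 1]], 1: [[10, 0], [10, 1]]})
s_near = separation({0: [[0, 0], [0, 1]], 1: [[1, 0], [1, 1]]})
```

A liquid with high inter-class variance and low intra-class variance, as prescribed above, maximizes exactly this ratio.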

C_v(t) = (1/N) Σ_{m=1}^{N} [ Σ_{i=1}^{k} ||μ(O_m(t)) − x_i(t)|| ] / k    (7)

E. Lyapunov Exponent of the Liquid

In [20] it was shown that networks like the LSM have the greatest computational capability when operating on the edge of chaos. The Lyapunov exponent provides a method to determine whether the network is stable or chaotic: if the exponent is positive, the system is behaving chaotically, and if it is negative the network is stable. This means that ideally a network should have a Lyapunov exponent of 0, which would represent operation at the edge of chaos. The Lyapunov exponent is calculated using (8), as explained in [25].

λ(t) = k Σ_{j=1}^{N} ln( ||x_j(t) − x_ĵ(t)|| / ||u_j(t) − u_ĵ(t)|| )    (8)

u_j(t) and its nearest neighbor in the input space, u_ĵ(t), represent the initial inputs to the liquid. x_j and x_ĵ are the respective reservoir states of the two input vectors. k is a scalar that can be chosen based on the data. The proposed network had a Lyapunov exponent of 2.1154 and 1.9654 for user identification and seizure detection respectively, as shown in Fig. 10. These values were determined by setting k equal to 1. The positive values indicate that the reservoir is slightly chaotic, which agrees with the high intra-class variance.

Fig. 10: Lyapunov Exponent for reservoir performance on different tasks.

VII. CONCLUSIONS

In this research a reconfigurable digital model of the LSM which processes real-time data is proposed. Exploiting the spatial locality in the liquid, a hardware-friendly topology is presented. A digital behavioral model of the LSM shows an average accuracy of 85% for epileptic seizure detection and 98% for user identification. The performance of the liquid is assessed using the kernel quality, the separation property, and the Lyapunov exponent.

REFERENCES

[1] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber, "Gradient flow in recurrent nets: the difficulty of learning long-term dependencies," 2001.
[2] H. Jaeger, "The echo state approach to analysing and training recurrent neural networks - with an erratum note," p. 34, 2001.
[3] W. Maass, T. Natschläger, and H. Markram, "Real-time computing without stable states: a new framework for neural computation based on perturbations," Neural Computation, vol. 14, pp. 2531–2560, 2002.
[4] W. Maass, "Networks of spiking neurons: the third generation of neural network models," Neural Networks, vol. 10, no. 9, pp. 1659–1671, 1997.
[5] B. J. Grzyb, E. Chinellato, G. M. Wojcik, and W. A. Kaminski, "Facial expression recognition based on liquid state machines built of alternative neuron models," in 2009 International Joint Conference on Neural Networks, June 2009, pp. 1011–1017.
[6] Y. Zhang, P. Li, Y. Jin, and Y. Choe, "A digital liquid state machine with biologically inspired learning and its application to speech recognition," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2635–2649, Nov 2015.
[7] D. Verstraeten, B. Schrauwen, D. Stroobandt, and J. Van Campenhout, "Isolated word recognition with the liquid state machine: a case study," Information Processing Letters, vol. 95, no. 6, pp. 521–528, 2005.
[8] B. Schrauwen and J. Van Campenhout, "Parallel hardware implementation of a broad class of spiking neurons using serial arithmetic," in Proceedings of the 14th European Symposium on Artificial Neural Networks. d-side publications, 2006, pp. 623–628.
[9] D. Roggen, S. Hofmann, Y. Thoma, and D. Floreano, "Hardware spiking neural network with run-time reconfigurable connectivity in an autonomous robot," in Evolvable Hardware, 2003. Proceedings. NASA/DoD Conference on. IEEE, 2003, pp. 189–198.
[10] A. Upegui, C. A. Peña-Reyes, and E. Sanchez, "An FPGA platform for on-line topology exploration of spiking neural networks," Microprocessors and Microsystems, vol. 29, no. 5, pp. 211–223, 2005.
[11] B. Girau and C. Torres-Huitzil, "Massively distributed digital implementation of an integrate-and-fire LEGION network for visual scene segmentation," Neurocomputing, vol. 70, no. 7, pp. 1186–1197, 2007.
[12] B. Schrauwen, M. D'Haene, D. Verstraeten, and J. Van Campenhout, "Compact hardware liquid state machines on FPGA for real-time speech recognition," Neural Networks, vol. 21, no. 2, pp. 511–523, 2008.
[13] S. Roy, A. Basu, and S. Hussain, "Hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines," in 2013 IEEE Biomedical Circuits and Systems Conference (BioCAS), Oct 2013, pp. 302–305.
[14] B. Schrauwen, M. D'Haene, D. Verstraeten, and J. V. Campenhout, "Compact hardware for real-time speech recognition using a liquid state machine," in 2007 International Joint Conference on Neural Networks, Aug 2007, pp. 1097–1102.
[15] H. De Garis, N. E. Nawa, M. Hough, and M. Korkin, "Evolving an optimal de/convolution function for the neural net modules of ATR's artificial brain project," in Neural Networks, 1999. IJCNN'99. International Joint Conference on, vol. 1. IEEE, 1999, pp. 438–443.
[16] M. Tsodyks, K. Pawelzik, and H. Markram, "Neural networks with dynamic synapses," Neural Computation, vol. 10, no. 4, pp. 821–835, 1998.
[17] R. G. Andrzejak, K. Lehnertz, F. Mormann, C. Rieke, P. David, and C. E. Elger, "Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state," Physical Review E, vol. 64, no. 6, p. 061907, 2001.
[18] M. Lichman, "UCI machine learning repository," 2013. [Online]. Available: http://archive.ics.uci.edu/ml
[19] P. Casale, O. Pujol, and P. Radeva, "Personalization and user verification in wearable systems using biometric walking patterns," Personal and Ubiquitous Computing, vol. 16, no. 5, pp. 563–580, 2012.
[20] R. Legenstein and W. Maass, "Edge of chaos and prediction of computational performance for neural circuit models," Neural Networks, vol. 20, no. 3, pp. 323–334, 2007.
[21] L. Büsing, B. Schrauwen, and R. Legenstein, "Connectivity, dynamics, and memory in reservoir computing with binary and analog neurons," Neural Computation, vol. 22, no. 5, pp. 1272–1311, 2010.
[22] J. Chrol-Cannon and Y. Jin, "On the correlation between reservoir metrics and performance for time series classification under the influence of synaptic plasticity," PLoS ONE, vol. 9, no. 7, p. e101792, 2014.

[23] D. Norton and D. Ventura, "Improving liquid state machines through iterative refinement of the reservoir," Neurocomputing, vol. 73, no. 16, pp. 2893–2904, 2010.
[24] E. Goodman and D. A. Ventura, "Spatiotemporal pattern recognition via liquid state machines," 2006.
[25] T. E. Gibbons, "Unifying quality metrics for reservoir networks," in Neural Networks (IJCNN), The 2010 International Joint Conference on. IEEE, 2010, pp. 1–7.
