
FROM COMPRESSIVE TO ADAPTIVE SAMPLING OF NEURAL AND ECG RECORDINGS

Alexander Singh Alvarado and José C. Prı́ncipe

University of Florida, Electrical and Computer Engineering Department


Gainesville, Florida, 32611

ABSTRACT

The miniaturization required for interfacing with the brain demands new methods of transforming neuron responses (spikes) into digital representations. The sparse nature of neural recordings is evident when they are represented in a shift-invariant basis. Although a compressive sensing (CS) framework may seem suitable for reducing the data rates, we show that the time-varying sparsity of the signals makes it difficult to apply. We therefore present an adaptive sampling scheme that takes advantage of the local characteristics of neural spike trains and electrocardiograms (ECG). In contrast to the global constraints imposed in CS, our solution is sensitive to the local time structure of the input. The simplicity of the integrate-and-fire (IF) design makes it a viable solution for current brain-machine interfaces (BMI) and ambulatory cardiac monitoring.

Index Terms — Adaptive sampling, brain-machine interface, non-uniform sampling, integrate-and-fire model, ECG

1. INTRODUCTION

Information in the brain is encoded in the stereotypical output of neurons, known as action potentials (AP), that stand out above a noisy background. The goal of BMIs is to extract meaningful information from these neural recordings in order to take future actions. The first stage in any BMI is to capture the behavior of hundreds of neurons and reliably transmit the APs to the external world.

Cortical neurons spike at rates of 5-100 AP per second, the upper limit occurring when they form neural assemblies related to behavioral tasks. The algorithms used in conventional BMIs require that each AP be catalogued by neuron, and they treat the rest of the signal as featureless. It is evident that the regions of interest in neural recordings are localized in time and amplitude. The only information that must be preserved is the shape of each action potential, in order to guarantee discriminability. The electrocardiogram (ECG) presents a very similar structure, and there is a great need to compress ECG for ambulatory, 24-hour monitoring. These sparse events immediately suggest the use of a CS approach.

Although the approach seems reasonable, a number of difficulties arise. First, the CS framework assumes that the sparsity of the input is known and constant. This does not hold for neural recordings, since the firing patterns may change drastically, especially when neurons are involved in neural assemblies. Furthermore, measurements in CS are created from projections on the entire sparse feature. Although this may reduce the data rates compared to conventional analog-to-digital converters (ADC), the time structure of the signal is ignored. In contrast, we propose an adaptive sampling scheme that can be tuned to the specific features of the input. Instead of imposing global sparseness constraints, we exploit the local structure of the signal. The adaptive sampling scheme proposed here is based on the integrate-and-fire model [1]. The sampler is asynchronous and generates samples at level crossings of the integral of the signal. The resulting sample sets are therefore nonuniformly spaced and clustered in the vicinity of the action potentials, given their prominent amplitude.

Recently a variety of adaptive sampling schemes have been implemented, mostly in wireless sensor networks [2]. Other approaches also include information about the specific task in the design of the sampler [3]; these are related to active learning in the machine learning context. In that case, the sample locations and values are determined based on the final goal and the available information. Given the abstract nature of the problem, it is difficult to describe a general framework for all adaptive sampling schemes.

In this paper we focus on the IF sampler, which is a special case of a large class of asynchronous Σ-Δ modulators [4]. These samplers are typically designed so that they oscillate in an autonomous mode; the oscillation frequency is analogous to the clock period of their discrete counterparts. These modulators encode the input into the output sample rate, which is typically hundreds of times the Nyquist rate. Nevertheless, they are attractive because coarse quantization of the amplitude still provides an accurate representation, and the input can be recovered using a simple low-pass filter. In contrast, the IF sampler is used to sub-sample the input. When it is used to encode neural recordings or ECG, we can still accurately recover the APs and QRS complexes at sub-Nyquist rates. Nevertheless, the recovery algorithms are nonlinear, and the compression depends on the signal characteristics.

The authors acknowledge NINDS Grant Number NS053561.
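To make the CS route concrete before comparing it with the IF sampler, the discrete recovery problem can be prototyped in a few lines. This is a hedged sketch, not the paper's implementation (which uses the interior-point method of [5]): it recovers a sparse vector from Gaussian random projections by casting l1 minimization (basis pursuit) as a linear program solved with SciPy's `linprog`; all sizes and names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 20, 10                      # signal length, number of measurements
x0 = np.zeros(n)
x0[[3, 12]] = [1.5, -2.0]          # a 2-sparse "spike train"
Phi = rng.standard_normal((m, n))  # Gaussian projection matrix
y = Phi @ x0                       # compressive measurements

# Basis pursuit: min ||x||_1 s.t. Phi x = y, as an LP over z = [x; u],
# where the auxiliary variables u bound |x| element-wise.
c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum(u)
A_ub = np.block([[np.eye(n), -np.eye(n)],       #  x - u <= 0
                 [-np.eye(n), -np.eye(n)]])     # -x - u <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([Phi, np.zeros((m, n))])       # Phi x = y
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
```

Since x0 is itself feasible, the l1 norm of the solution can never exceed that of x0; with the measurement count well above twice the sparsity, the LP typically returns x0 itself.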

978-1-4577-0539-7/11/$26.00 ©2011 IEEE 633 ICASSP 2011


The paper is organized as follows. We describe both the CS and IF sampling schemes and their corresponding recovery algorithms. Both approaches are then compared in terms of data rates and recovery accuracy when applied to neural recordings and ECG. Since BMI algorithms are designed on conventional digital systems, we must also consider the effects of quantization under both schemes. In the case of CS, sample amplitudes are quantized assuming a constant sampling period, while the IF events are quantized in time and assume a coarse amplitude range.

Fig. 1. Bi-phasic integrate-and-fire block diagram.
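The bi-phasic IF block diagram of Fig. 1 can be mimicked in simulation. The sketch below is an assumption-laden discretization, not the hardware of Fig. 1: it integrates an oversampled input with a constant averaging function, emits a bipolar event whenever the running integral crosses ±θ, then resets and holds the integrator for the refractory period τ.

```python
import numpy as np

def if_sample(x, dt, theta, tau=0.0):
    """Bi-phasic integrate-and-fire sampler (discrete approximation).

    Integrates the oversampled input x (sample spacing dt); whenever the
    running integral reaches +theta or -theta, emits an event (time,
    +/-theta), resets the integrator, and holds it for a refractory
    period tau.
    """
    events, acc, hold_until = [], 0.0, 0.0
    for i, xi in enumerate(x):
        t = i * dt
        if t < hold_until:       # integrator held during refractory period
            continue
        acc += xi * dt           # running integral of the input
        if acc >= theta:
            events.append((t, +theta))
            acc, hold_until = 0.0, t + tau
        elif acc <= -theta:
            events.append((t, -theta))
            acc, hold_until = 0.0, t + tau
    return events

# A toy "biphasic spike" on a flat background (unit time step for clarity):
# events cluster inside the positive and negative lobes and nowhere else.
x = np.zeros(100)
x[40:60] = 1.0       # positive lobe
x[70:80] = -1.0      # negative lobe
events = if_sample(x, dt=1.0, theta=5.0)
```

Note how the event times fall only where the signal has energy, which is exactly the clustering around APs described in the introduction.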
2. A CS APPROACH FOR ANALOG SIGNALS

Compressive sensing [5] was initially framed in terms of discrete, finite-length signals. The basic framework exploits the fact that, given an appropriate representation, most signals can be considered sparse. Therefore a relatively small number of measurements is sufficient to ensure recovery. It is crucial to define the necessary and sufficient conditions under which recovery is feasible, i.e., constraints on the projection matrix must be imposed. Most authors state the sufficient conditions in terms of the coherence measure or the Uniform Uncertainty Principle (UUP), otherwise known as the restricted isometry property (RIP).

In recent years, compressive sensing has been extended to the analog signal class, providing reconstruction algorithms and hardware implementations. We have identified at least three trends in the implementation of compressive sampling for streaming data. The first uses finite-length random filters [6]. The measurements are the convolution of the finite-response filter and the input. The process can be translated into a linear system of equations y = Φx, where y are the measurements, x is the sparse input, and Φ is a banded projection matrix in which each row has no more than b nonzero elements, b being the length of the filter. In this paper we use a similar implementation, assuming we have access to the entire sparse signal. The integrate-and-fire, although a nonlinear system, can also be framed in terms of linear constraints, but its matrix rows are built from ones, with a varying number of nonzero elements that depends on the distance between consecutive samples; its measurement vector, on the other hand, consists only of ±θ.

Other approaches, such as the random analog demodulator proposed in [7, 8], mix the sparse components directly in the continuous-time domain; these mixtures are then uniformly sampled, presumably at sub-Nyquist rates. A third class of implementations, proposed in [9], combines traditional frequency-based analysis with CS. Other alternatives that assume sparsity constraints on the input include the work on finite rate of innovation [10]. There, the reconstruction algorithms separate the problem into two stages: the first uses Prony's method (a.k.a. the annihilating filter) to determine the locations of the sparse components; the second determines the sample values at these locations by least squares.

3. THE BI-PHASIC INTEGRATE-AND-FIRE SAMPLER

Since the inter-spike activity is mostly noise, an appropriate sampler should exploit the sparse nature of the APs. A similar case occurs in ECG, where the most relevant feature is the QRS complex. The IF model takes advantage of these features; Figure 1 shows its basic components. The continuous input x(t) is convolved with an "averaging function" u_k(t), and the result is compared against two fixed thresholds. When either threshold is reached, a pulse representing the threshold value is created at time t_k. The integrator is then reset and held in this state for the refractory period specified by τ, after which the process repeats. Note that the samples are bi-polar, which is one of the differences with respect to other approaches that use this type of model as a sampling device [11]. The output pulse train is defined over a discrete set of nonuniformly spaced time stamps. Here we assume u_k(t) is constant between two samples; other averaging functions have been considered in [12]. Therefore, the IF samples satisfy:

\theta_k = \int_{t_k+\tau}^{t_{k+1}} x(t)\,dt \qquad (1)

The input is modeled in a shift-invariant space (SIS), a natural representation for action potentials. These spaces are well known in approximation theory, nonuniform sampling, and compressive sensing. In this case, the input is defined in an M-dimensional space:

x(t) = \sum_{k=1}^{M} c_k\, \phi(t - Tk) \qquad (2)

where the \phi(t - Tk) are known as the generators of the space and T is the underlying period, determined in relation to the Nyquist period. The generator is chosen to be a cubic B(asis)-spline defined over a uniform knot sequence. The samples generated by the IF provide a system of equations from which we can determine the sparse coefficients, θ = Sc, where the elements of S are defined by:

S_{i,k} = \int_{t_i+\tau}^{t_{i+1}} \phi(t - Tk)\,dt \qquad (3)

The vector θ contains the threshold values and c the coefficients. Note that the size and the elements of the matrix are signal dependent and, since the basis functions are compactly supported, most of the nonzero elements are near the diagonal. In this case we can ensure perfect recovery as long as one sample falls in the support of each basis function. Nevertheless, we are not concerned with full recovery of the signal, but rather with the spike regions [12]. We have also presented recovery algorithms in the typical bandlimited space, along with the corresponding error bounds [13].

4. RESULTS

We show results on both neural and ECG recordings. The neural data used was recorded from a behaving animal and was provided by the Neuroprosthetics Research Group at the University of Florida. We also use synthetic data in order to satisfy the sparsity constraints imposed by CS; the synthetic data was created by placing synthetic APs and forcing the inter-spike regions to zero. Figure 2(a) shows the reconstruction of this type of signal using the IF and CS approaches. The inset shows the complete signal, consisting of three APs. Since the IF assumes an analog input, the precise timings of the samples are determined from an oversampled version of the input. The measurements in the CS case were obtained by projecting the input onto a random matrix whose elements are instances of a zero-mean, unit-variance Gaussian random variable. We use the optimization methods proposed in [5], based on an interior-point method, to determine the sparse vector.

Fig. 2. Data recovered using CS and IF. The underline defines the AP region. (a) Neural simulator. (b) Real data.

As mentioned earlier, BMIs require knowledge of the precise timings of the APs for each neuron. Therefore, we are only concerned with the reconstruction accuracy in the vicinity of the AP. The recovery error is measured in terms of the signal-to-error ratio (SER), 10 log(P_signal / P_error), within a window around the AP, where the error is simply the difference between the two signals. Figure 3(a) shows the average variation of the SER around the three AP regions for both approaches in relation to the number of samples. In the case of the IF, the number of samples was increased by reducing the threshold, keeping the refractory period at zero. Note that the reconstruction performance of the IF exceeds that of CS for very few samples but then grows slowly, while the CS performance soars to nearly 100 dB. The low-SNR reconstruction of the IF is due to the low-amplitude details in the AP, as seen in Figure 2(a); in order to reproduce these small variations we would need to select an extremely low threshold. In contrast, CS only assumes a sparse signal and does not constrain the amplitude of its elements, so its conditions are exactly satisfied. Nevertheless, these small details are in the range of the noise for this application and are not crucial.

Fig. 3. Variation in the reconstruction accuracy in relation to the number of samples. (a) Neural simulator. (b) Real data.

This trend does not extend to real recordings, as seen in Figure 2(b). Here we show a zoomed-in version of the last AP; these have amplitudes near 50 μV and a bandwidth of 5 kHz. Since they are no longer strictly sparse, the performance of CS deteriorates, as seen in Figure 3(b). The IF outperforms the conventional CS approach and provides a feasible representation even at sub-Nyquist rates. In this case the Nyquist rate is 10 kHz and the signal duration is 37 ms, which translates into 368 samples. We have also extended our results to ECG recordings, which share a similar time structure with neural signals. Here we used the QT database available at PhysioBank [14]; the recordings have been sampled at 250 Hz. Figure 4 shows the original and recovered signals, as well as the variation of the reconstruction over the entire window in relation to the number of samples. The entire signal has a duration of approximately 4 s, so the number of samples equivalent to the Nyquist rate would be 1024. These results are comparable to those in [15], based on a CS approach using wavelet decompositions. Although the SER seems low, sufficient information about the signal is preserved to make reliable estimates of cardiac features. Since the data must eventually be processed by a digital system, we also show the variation of the recovery with quantization. In CS the amplitudes of the measurements are quantized, while in the IF the sample time instants are quantized to a specific clock, as seen in Figure 5. These results were obtained using the synthetic data. In the case of the IF we require at least 1 μs resolution at the receiver side; this does not imply that we need to transmit the quantized time. In contrast, each measurement transmitted using the CS approach requires at least 10 bits.
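The SER figure of merit used throughout the results is simple to state in code. The sketch below is illustrative (the function name and window convention are ours, not the paper's): it scores recovery only inside a window of samples around the AP, as the text specifies.

```python
import numpy as np

def spike_ser_db(original, recovered, window):
    """Signal-to-error ratio, 10*log10(P_signal / P_error), restricted
    to a window of sample indices around the action potential."""
    s = np.asarray(original, dtype=float)[window]
    e = s - np.asarray(recovered, dtype=float)[window]
    return 10.0 * np.log10(np.sum(s ** 2) / np.sum(e ** 2))

# Tiny worked example: signal power 14, error power 1 inside the window,
# giving 10*log10(14) ~ 11.5 dB.
sig = np.array([0.0, 1.0, 2.0, 3.0])
rec = np.array([0.0, 1.0, 2.0, 4.0])
ser = spike_ser_db(sig, rec, slice(0, 4))
```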

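The receiver-side time quantization studied above can be mimicked by snapping each IF time stamp to a clock grid. This is a hedged sketch under the paper's assumption of a 1 μs receiver clock; the event list format is the illustrative (time, ±θ) pairing used earlier, not a wire format.

```python
import numpy as np

def quantize_times(events, clock_period=1e-6):
    """Snap IF event time stamps to the nearest tick of a receiver clock.
    Amplitudes are already coarse (+/-theta), so only time is quantized;
    the resulting error is bounded by half the clock period."""
    return [(round(t / clock_period) * clock_period, v) for t, v in events]

events = [(0.0123456, 0.1), (0.0345678, -0.1)]
q = quantize_times(events, clock_period=1e-6)
err = max(abs(tq - t) for (tq, _), (t, _) in zip(q, events))
```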
Fig. 4. ECG reconstruction and performance at an average sample rate of 50 samp/s.

Fig. 5. Effect of quantization on the recovery methods for both approaches. (a) Time quantization of IF samples. (b) Amplitude quantization of CS samples.

5. CONCLUSIONS

We have shown that in an ideal noiseless scenario CS provides a feasible solution for neural encoding. Nevertheless, when applied to real recordings, the lack of sparsity in the time domain restricts the use of this type of CS method. Furthermore, when dealing with bursting neurons, or signals with varying sparsity, modifications are required in order to apply CS. In contrast, the IF can be readily used in streaming-data applications, since it is based on local structure. The IF provides compression while accurately representing the spike regions and the QRS complex, even at sub-Nyquist rates. However, if high accuracy is needed, the IF data rates will increase substantially. Since BMI algorithms are based on conventional digital systems, quantization is inevitable. We evaluated the effect of amplitude quantization of the measurements in the case of CS, as well as time quantization in the IF. Although we presented the two approaches independently, it may also be possible to extend the IF encoding scheme to take advantage of the optimization techniques developed for CS; this would involve input-dependent projection matrices.

6. REFERENCES

[1] Wulfram Gerstner and Werner Kistler, Spiking Neuron Models: An Introduction, Cambridge University Press, 2002.

[2] R. M. Willett, A. M. Martin, and R. D. Nowak, "Adaptive sampling for wireless sensor networks," Proc. IEEE International Symposium on Information Theory (ISIT 2004), p. 519, Jun. 2004.

[3] Rui Castro, Active Learning and Adaptive Sampling for Non-Parametric Inference, Ph.D. thesis, Rice University, Aug. 2007.

[4] S. Ouzounov, Engel Roza, J. A. Hegt, G. van der Weide, and A. H. M. van Roermund, "Analysis and design of high-performance asynchronous sigma-delta modulators with a binary quantizer," IEEE Journal of Solid-State Circuits, vol. 41, no. 3, pp. 588-596, Mar. 2006.

[5] E. J. Candes and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203-4215, Dec. 2005.

[6] J. A. Tropp, M. B. Wakin, M. F. Duarte, D. Baron, and R. G. Baraniuk, "Random filters for compressive sampling and reconstruction," Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2006), vol. 3, May 2006.

[7] J. N. Laska, S. Kirolos, M. F. Duarte, T. S. Ragheb, R. G. Baraniuk, and Y. Massoud, "Theory and implementation of an analog-to-information converter using random demodulation," Proc. IEEE International Symposium on Circuits and Systems (ISCAS 2007), pp. 1959-1962, May 2007.

[8] J. A. Tropp, J. N. Laska, M. F. Duarte, J. K. Romberg, and R. G. Baraniuk, "Beyond Nyquist: Efficient sampling of sparse bandlimited signals," IEEE Transactions on Information Theory, vol. 56, no. 1, pp. 520-544, Jan. 2010.

[9] M. Mishali and Y. C. Eldar, "Xampling: Analog data compression," Proc. Data Compression Conference (DCC 2010), pp. 366-375, Mar. 2010.

[10] M. Vetterli, P. Marziliano, and T. Blu, "Sampling signals with finite rate of innovation," IEEE Transactions on Signal Processing, vol. 50, no. 6, pp. 1417-1428, Jun. 2002.

[11] Aurel A. Lazar, "Time encoding with an integrate-and-fire neuron with a refractory period," Neurocomputing, vol. 58-60, pp. 53-58, Jun. 2004.

[12] A. S. Alvarado, J. C. Principe, and J. G. Harris, "Stimulus reconstruction from the biphasic integrate-and-fire sampler," Proc. 4th International IEEE/EMBS Conference on Neural Engineering (NER '09), pp. 415-418, Apr. 2009.

[13] H. G. Feichtinger, J. C. Principe, J. L. Romero, A. S. Alvarado, and G. Velasco, "Approximate reconstruction of bandlimited functions for the integrate and fire sampler," Advances in Computational Mathematics, 2010 [accepted].

[14] A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdorff, P. Ch. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley, "PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals," Circulation, vol. 101, no. 23, pp. e215-e220, Jun. 2000. Circulation Electronic Pages: http://circ.ahajournals.org/cgi/content/full/101/23/e215.

[15] Eduardo Correia Pinheiro, Octavian Adrian Postolache, and Pedro Silva Girao, "Implementation of compressed sensing in telecardiology sensor networks," International Journal of Telemedicine and Applications, Jul. 2010.
