
Compressed Sensing Reconstruction: Comparative Study with Applications to ECG Bio-Signals


Anna M.R. Dixon, Emily G. Allstot, Andrew Y. Chen, Daibashish Gangopadhyay, and David J. Allstot
Department of Electrical Engineering
Univ. of Washington
Seattle, WA, USA
amrdixon@ee.washington.edu

Abstract—Compressed sensing (CS) is a rapidly emerging signal processing technique that enables accurate capture and reconstruction of sparse signals from only a fraction of Nyquist-rate samples, significantly reducing the data rate and system power consumption. This paper presents an in-depth comparative study of current state-of-the-art CS reconstruction algorithms. Reliability, accuracy, noise tolerance, and computation time are used as key metrics. Further, experiments on ECG signals are used to assess performance on real-world bio-signals.

I. INTRODUCTION

Compressed sensing (CS) [1] is revolutionizing the domain of signal acquisition. This innovative concept allows a signal to be acquired and accurately reconstructed with significantly fewer samples than required by Nyquist-rate sampling. Unlike Nyquist sampling, which relies on the maximum rate-of-change of a signal, compressive sampling relies on the maximum rate-of-information in a signal. Sparse signals, such as electrocardiogram (ECG) bio-signals, exhibit this difference in rates, which is exploited by CS to enable low-data-rate acquisition. The accuracy of reconstruction increases with the sparsity of the signal being acquired.

The reconstruction of signals acquired with CS involves an optimization that seeks the best solution to an under-determined set of linear equations with no prior knowledge of the original signal except that it is sparse in the sampled domain. Of the several CS recovery algorithms currently in use, most can be classified as variants of convex optimization or greedy algorithms, where the L1-norm is widely used as the measure of signal sparsity. Initially, L1-norm convex optimization was chosen for its stability and high accuracy [1]. Various greedy reconstruction algorithms have recently gained prominence by demonstrating a healthy trade-off of accuracy versus computational complexity [2]-[5].

This paper presents a comprehensive comparative study of five leading state-of-the-art compressed sensing reconstruction algorithms. It aims to provide insight into their approach, functionality, and performance trade-offs. Further, as an application-specific study, experiments are performed on real-world ECG bio-signals sampled with CS and then reconstructed using these algorithms.

Section II overviews the general approaches and theoretical performance limits of the different reconstruction algorithms. Section III presents experimental results that quantitatively compare the advantages and disadvantages of the algorithms over several key metrics. Section IV describes a case study on ECG signal reconstruction, and Section V presents qualitative conclusions from the experiments.

II. COMPRESSED SENSING AND STATE-OF-THE-ART RECONSTRUCTION ALGORITHMS

CS is a non-adaptive scheme modeled by:

    y = Φx + e                                               (1)

where x is a vector of N discrete-time Nyquist-rate samples of an analog signal, Φ is the so-called M x N measurement or sensing matrix, e is observation noise, and y is the compressed output vector of length M. CS captures M << N measurements from N Nyquist samples using random linear projections [1]. The sensing matrix is formulated by selecting random entries from uniform, Gaussian, or Bernoulli probability density functions. CS is successful on either time- or frequency-sparse signals. Further, to achieve accurate and numerically stable compression and reconstruction, Candès recommends [1]:

    M ≥ C k log(N/k)                                         (2)

where k is the number of non-zero entries in the x vector. In a noise-free environment, the reconstruction problem can be posed as solving y = Φx̂, where there are N unknowns in the reconstructed signal x̂ and M knowns in the measured signal y. The matrix Φ is non-square and thus non-invertible. Because this problem is under-determined, there are potentially several signals x̂ that satisfy (1). Thus, the reconstruction algorithms are required to search for the correct x̂ within the search space using a sparsity measure (such as the L1-norm) as an objective function. Table I gives an overview of several state-of-the-art reconstruction algorithms including their general optimization objectives and theoretical computation times.
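As a concrete illustration of the measurement model (1) and the budget rule of thumb (2), the following NumPy sketch builds a Gaussian sensing matrix and compresses a k-spike signal. The dimensions and noise level here are chosen purely for illustration and are not those used in the experiments of this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, k = 512, 128, 10     # Nyquist samples, measurements (M << N), spikes

# Gaussian sensing matrix and a k-sparse input, as in (1)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
e = 1e-4 * rng.standard_normal(M)   # observation noise
y = Phi @ x + e                     # compressed output: M values, not N

# Rule-of-thumb check from (2): M should comfortably exceed C*k*log(N/k)
print(M, k * np.log(N / k))
```

Only the M-length vector y needs to be stored or transmitted, which is the source of the data-rate reduction discussed in the Introduction.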

This work was supported by a grant from the Intel Corp., Hillsboro, OR.

978-1-4244-9472-9/11/$26.00 ©2011 IEEE


Table I. Compressed Sensing Reconstruction Algorithm Overview

CS Reconstruction Algorithm | Algorithm Objective | Theoretical Comp. Time
Convex Optimization [6] | Search the space for a solution with the minimum L1-norm | O(m^2 n^1.5)
Orthogonal Matching Pursuit (OMP) [2] | Find the column of Φ with the strongest correlation to the residual | O(kmn)
Compressive Sampling Matching Pursuit (CoSaMP) [3] | Find the top 2k columns of Φ with the strongest correlation to the residual | O(log(k)mn)
Regularized Orthogonal Least Squares (ROLS) [4] | Find the column of Φ that minimizes the residual of that column's solution | O(kmn)
Normalized Iterative Hard Thresholding (NIHT) [5] | Find the top 2k values of the sum of the previous best guess and the residual's signal proxy | O(log(k)mn)

As noted previously, CS reconstruction algorithms can be divided into two distinct classes: convex optimization and greedy algorithms.
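Both classes ultimately rely on the L1-norm as a convex surrogate for sparsity. A minimal numerical illustration of why, with invented vectors and sizes: two candidates with identical L2 energy are indistinguishable to a least-squares criterion, but the L1-norm strongly favors the sparse one.

```python
import numpy as np

# A 4-spike vector and a dense vector scaled to the same L2 energy.
sparse = np.zeros(100)
sparse[:4] = 1.0
dense = np.full(100, np.linalg.norm(sparse) / 10.0)

# The L2 norm cannot tell them apart, but the L1 norm separates them,
# which is what the L1 objective in CS reconstruction exploits.
print(np.linalg.norm(sparse, 1))   # 4.0
print(np.linalg.norm(dense, 1))    # ~20.0
```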

A. Convex Optimization

Convex optimization is the original compressed sensing reconstruction approach [1]. Ideally, the compressed sensing signal recovery is defined as a basis pursuit optimization problem:

    min ||x̂||_1   subject to   y = Φx̂                      (3)

which can be cast as a linear programming problem. However, for the signal reconstruction to be robust to noise, the constraints of the optimization problem are relaxed to give:

    min ||x̂||_1   subject to   ||y − Φx̂||_2 ≤ ε            (4)

This type of convex optimization problem, also known as the least absolute shrinkage and selection operator (LASSO) or basis pursuit de-noising (BPDN), is solved using second-order cone programming.

L1-norm convex optimization is the current standard approach to basis pursuit problems, primarily due to its proven stability. The convex optimization problems are implemented in this work with the open-source software CVX [6]. CVX uses a primal-dual interior-point method to solve (3) and (4).

B. Greedy Algorithms

Unlike convex optimization, greedy algorithms solve the reconstruction problem in a less exact manner. They function by greedily optimizing a metric that predicts error minimization.

Matching pursuit, one class of CS reconstruction greedy algorithms, attempts to find the columns of the measurement matrix Φ (commonly referred to as an "over-complete dictionary" in this context) that have the most participation in the measurement y. In each iteration, the column(s) of Φ with the strongest correlation to the residual are added to the support vector and their contribution is subtracted from the residual. In orthogonal matching pursuit (OMP) [2], only one column of Φ (or "atom") is added to the support vector per iteration. In CoSaMP (Compressive Sampling Matching Pursuit), a specialized adaptation of matching pursuit for compressed sensing [3], the top 2k columns of Φ are added to the support vector per iteration and later pruned.

Orthogonal least squares (OLS) [4], like OMP, selects the column of Φ that minimizes the residual per iteration. Unlike matching pursuit, OLS evaluates each possible index's resulting residual before selection.

A more recent state-of-the-art reconstruction algorithm is normalized iterative hard thresholding (NIHT) [5]. This method differs from the greedy algorithms described above by not directly searching for the columns of Φ that reduce the residual error. Instead, NIHT operates by iteratively selecting solutions that both minimize the residual and maximize the difference between the current and previous solutions.

III. EXPERIMENTAL RESULTS

To maintain consistency with the experimental setups used in existing works [2], [5], the measurement matrix Φ was populated with independent and identically distributed (i.i.d.) Gaussian random entries, achieving a 4X data compression (i.e., N/M = 4). k-spike signals x were synthesized by randomly assigning the k distributed elements values from the i.i.d. Gaussian distribution with all other points set to zero. Sparsity levels were varied by adjusting k. Results were averaged over an ensemble of 100 k-spike sample signals for each sparsity level.

Fig. 1. Probability of exact recovery

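The OMP iteration described in Section II.B (correlate against the residual, add one atom to the support, re-fit by least squares) can be sketched in a few lines of NumPy. This is an illustrative implementation on a noise-free k-spike demo with invented sizes, not the code or setup used in the experiments:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: per iteration, pick the atom (column of
    Phi) most correlated with the residual, then re-fit all selected atoms
    by least squares and update the residual."""
    N = Phi.shape[1]
    support = []
    residual = y.copy()
    x_hat = np.zeros(N)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))  # strongest correlation
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[support] = coeffs
        residual = y - Phi @ x_hat                    # orthogonal update
    return x_hat

# Noise-free k-spike demo (illustrative sizes, not the paper's setup)
rng = np.random.default_rng(1)
N, M, k = 256, 100, 4
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
x_hat = omp(Phi, Phi @ x, k)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # relative error
```

Because each iteration re-solves a least-squares problem on the current support, the residual stays orthogonal to every selected atom, which is what distinguishes OMP from plain matching pursuit.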
A. Reliability

The most common performance metric is the reliability of signal recovery. In CS, it is the probability that the reconstruction algorithm will correctly recover the signal. This is of particular importance here because the reconstruction algorithms have a tendency to find local optima that do not necessarily represent the signal of interest.

Fig. 1 shows the probability of exact signal recovery, which increases with the sparsity of x. All of the reconstruction curves show success above 88% sparsity. OMP produces the most reliable reconstruction at low sparsity levels, whereas ROLS is unable to produce accurate results until the sparsity exceeds 97%. Convex optimization, CoSaMP, and NIHT all exhibit moderate reliability.

B. Accuracy and Noise Tolerance

In a real data acquisition system, the high sparsity values needed for exact recovery (Fig. 1) may not be guaranteed. Furthermore, the sampled-data acquisition systems that CS targets are inherently noisy. Thus, a more realistic measure of signal recovery accuracy is presented (Fig. 2), as measured by the signal-to-noise ratio:

    SNR = 20 log10( ||x||_2 / ||x − x̂||_2 )                 (5)

Fig. 2. SNR of reconstructed signals for the original signal ensemble having (a) 60 dB SNR, and (b) 80 dB SNR

The reconstruction accuracies for the SNR = 60 dB and SNR = 80 dB cases are shown. The signals were synthesized by summing additive white Gaussian noise (AWGN) with the compressed signal y. CoSaMP, OMP, and NIHT meet their respective SNR targets at high sparsity levels and demonstrate robust noise tolerance, whereas L1-norm convex and ROLS show moderate noise resilience with a loss of ~10 dB SNR at peak performance levels.

C. Computation Time

Computation time is the primary motivation for exploring greedy solutions, as it has a direct impact on energy efficiency and real-time application feasibility. Fig. 3 compares the computation time of these algorithms. ROLS and OMP required the least computation time. Convex optimization has higher computational complexity than most greedy solutions but is notably independent of sparsity. CoSaMP shows the largest computation time, most likely due to the tight tolerance requirements for convergence and the costly pseudo-inverse calculations per iteration. Needell, et al. [3] further recommend a fast-multiply strategy to improve computation time (at the expense of accuracy) that is not explored here.

Fig. 3. Computation time

IV. ECG BIO-SIGNAL CASE STUDY

The ultimate goal of this study is to apply the compressed sensing method to real-world applications, such as biomedical signal acquisition systems leading to efficient, low-power body area networks. The case study presented here explores the trade-offs in the choice of a compressed sensing reconstruction algorithm in an ECG sensor application.

More than 50 hours of ECG signals were collected from the Physiobank® database [8]. The ECG signal is windowed to 1024 samples and dynamically thresholded [7] to control signal sparsity. Fig. 4 shows the effect of the dynamic-thresholding level in increasing sparsity. For higher thresholds, sparsity increases at the cost of detail in ECG signal features. For example, at 75% sparsity, most of the ECG QRS complex features are lost.

Fig. 4. Sparsity level considerations for ECG analysis

To account for sampling circuit noise, AWGN was added to the thresholded signal such that the compressed ECG signal achieved 80 dB SNR (typical of an ECG sensor analog front-end). Compression was achieved with a 6-bit precision 512 x 1024 Gaussian measurement matrix.

Fig. 5 shows the time-domain ECG original and reconstructed waveforms for each algorithm. Most algorithms capture the QRS complex, but NIHT and ROLS add significant artifacts, which limits their use for ECG applications. ECG bio-signal reconstruction accuracy, computation time, and noise tolerance were consistent with the results of Section III. Fig. 6 shows the SNR of the ECG signal reconstructions.

Fig. 5. ECG signal reconstruction comparison of (a) original signal with 92% sparsity using (b) L1-norm convex optimization, (c) CoSaMP, (d) OMP, (e) NIHT and (f) ROLS

Fig. 6. ECG signal reconstruction SNR.

Table II. Reconstruction Algorithm Qualitative Assessment

Algorithm | Reconstruction Reliability/Accuracy | Speed
Convex    | Fair | Fair
CoSaMP    | Good | Bad
OMP       | Good | Good
NIHT      | Fair | Fair
ROLS      | Bad  | Good
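The SNR metric of (5) and the sparsity-versus-detail trade-off of the thresholding step can be sketched as follows. The helper names `snr_db` and `threshold_to_sparsity` are hypothetical, the thresholding is a crude stand-in for the dynamic scheme of [7], and the random vector merely stands in for a real 1024-sample ECG window:

```python
import numpy as np

def snr_db(x, x_hat):
    """Reconstruction SNR per (5), in dB."""
    return 20 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_hat))

def threshold_to_sparsity(x, sparsity):
    """Keep only the largest-magnitude samples so that the given fraction
    of entries is zero (a crude stand-in for dynamic thresholding [7])."""
    keep = int(round(len(x) * (1.0 - sparsity)))
    idx = np.argsort(np.abs(x))[len(x) - keep:]
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(2)
window = rng.standard_normal(1024)              # stand-in for an ECG window
sparse75 = threshold_to_sparsity(window, 0.75)  # force 75% of samples to zero
print(np.mean(sparse75 == 0))                   # achieved sparsity
print(snr_db(window, sparse75))                 # detail lost to thresholding, in dB
```

Raising the `sparsity` argument mirrors Fig. 4: the reconstruction-friendly sparsity increases while the SNR against the original window drops, discarding signal detail.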
V. CONCLUSIONS

This paper presents a detailed empirical comparative study of selected state-of-the-art compressed sensing reconstruction algorithms. Table II summarizes, in qualitative terms, the overall relative merits and demerits of these algorithms. A case study applied CS to 80 dB SNR ECG bio-signals to show consistency with the empirical comparisons presented.

If computation time is not of primary concern, CoSaMP and L1-norm convex optimization are likely the best choices for most applications where accuracy is needed; further, CoSaMP outperforms L1-norm convex optimization in noise resilience. However, for systems where computational complexity is of concern, such as an ASIC implementation with low power consumption, OMP is preferable. Although no single best compressed sensing reconstruction algorithm can be assured for all applications, proper system specifications would lead to optimum results based on the comparative study presented.

REFERENCES

[1] E. Candès, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, pp. 21-30, March 2008.
[2] J. A. Tropp, et al., "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inform. Theory, vol. 53, pp. 4655-4666, April 2007.
[3] D. Needell, et al., "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, pp. 301-321, April 2008.
[4] T. Blumensath, et al., "On the difference between orthogonal matching pursuit and orthogonal least squares," Tech. Rep., Univ. of Edinburgh, Mar. 2007.
[5] T. Blumensath, et al., "Normalised iterative hard thresholding: Guaranteed stability and performance," IEEE J. of Selected Topics in Signal Processing, vol. 4, pp. 298-309, March 2010.
[6] M. Grant, et al., CVX: MATLAB software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, Aug. 2010.
[7] E. Allstot, et al., "Compressive sampling of ECG bio-signals: Quantization noise and sparsity considerations," IEEE Biomedical Circuits and Systems Conf., Nov. 2010, pp. 41-44.
[8] Goldberger, et al., "PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiological signals," Circulation, vol. 101, no. 23, 2000, pp. 1-6.

