Abstract—Compressed sensing (CS) is a rapidly emerging signal processing technique that enables accurate capture and reconstruction of sparse signals from only a fraction of Nyquist-rate samples, significantly reducing the data rate and system power consumption. This paper presents an in-depth comparative study of current state-of-the-art CS reconstruction algorithms. Reliability, accuracy, noise tolerance and computation time are used as key metrics. Further, experiments on ECG signals are used to assess performance on real-world bio-signals.

I. INTRODUCTION

Compressed sensing (CS) [1] is revolutionizing the domain of signal acquisition. This innovative concept allows a signal to be acquired and accurately reconstructed with significantly fewer samples than required by Nyquist-rate sampling. Unlike Nyquist sampling, which relies on the maximum rate-of-change of a signal, compressive sampling relies on the maximum rate-of-information in a signal. Sparse signals, such as an electrocardiogram (ECG) bio-signal, exhibit this difference in rate, which is utilized by CS to enable low-data-rate acquisition. The accuracy of reconstruction increases with increased sparsity of the signal being acquired.

The reconstruction of signals acquired with CS involves an optimization which seeks the best solution to an under-determined set of linear equations with no prior knowledge of the original signal except that it is sparse in the sampled domain. Of the several CS recovery algorithms currently in use, most can be classified as variants of convex optimization or greedy algorithms, where the L1-norm is widely used as the measure of signal sparsity. Initially, L1-norm convex optimization was chosen for its stability and high accuracy [1]. Various greedy reconstruction algorithms have recently gained prominence by demonstrating a healthy trade-off of accuracy versus computational complexity [2]-[5].

This paper presents a comprehensive comparative study of five leading state-of-the-art compressed sensing reconstruction algorithms. It aims to provide insight into their approach, functionality and performance trade-offs. Further, and as an application-specific study, experiments are performed on real-world ECG bio-signals sampled with CS and then reconstructed using these algorithms.

Section II overviews the general approaches and theoretical performance limits of the different reconstruction algorithms. Section III presents experimental results that quantitatively compare the advantages and disadvantages of the algorithms over several key metrics. Section IV describes a case study on ECG signal reconstruction and Section V presents qualitative conclusions from the experiments.

II. COMPRESSED SENSING AND STATE-OF-THE-ART RECONSTRUCTION ALGORITHMS

CS is a non-adaptive scheme modeled by:

$$ Y = \Phi X + e \qquad (1) $$

where $X \in \mathbb{R}^{N}$ is a vector of discrete-time Nyquist-rate samples of an analog signal, $\Phi \in \mathbb{R}^{M \times N}$ is the so-called measurement or sensing matrix, $e$ is observation noise and $Y \in \mathbb{R}^{M}$ is the compressed output vector. CS captures $M \ll N$ measurements from $N$ Nyquist samples using random linear projections [1]. The sensing matrix is formulated by selecting random entries from uniform, Gaussian or Bernoulli probability density functions. CS is successful on either time- or frequency-sparse signals. Further, to achieve accurate and numerically stable compression and reconstruction, Candès recommends [1]:

$$ M \geq C\,k\,\log(N/k) \qquad (2) $$

where $k$ is the number of non-zero entries in the $X$ vector. In a noise-free environment, the reconstruction problem can be posed as solving $Y = \Phi\hat{X}$, where there are $N$ unknowns in the reconstructed signal $\hat{X}$ and $M$ knowns in the measured signal $Y$. The matrix $\Phi$ is non-square and thus non-invertible. Because this problem is under-determined, there are potentially several signals $\hat{X}$ that satisfy (1). Thus, the reconstruction algorithms are required to search for the correct $\hat{X}$ within the search space using a sparsity measure (such as the L1-norm) as an objective function. Table I gives an overview of several state-of-the-art reconstruction algorithms including the general optimization objectives and the theoretical computation times.
This work was supported by a grant from the Intel Corp., Hillsboro, OR.
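As a concrete illustration, the measurement model of (1) and the bound of (2) can be sketched in a few lines of NumPy; the dimensions, sparsity and noise level below are hypothetical choices, not the paper's experimental values:

```python
import numpy as np

# Sketch of the measurement model in (1): Y = Phi X + e. The dimensions,
# sparsity and noise level here are hypothetical, not the paper's values.
rng = np.random.default_rng(0)
N, M, k = 512, 128, 10                           # M << N measurements

X = np.zeros(N)                                  # k-sparse Nyquist-rate signal
X[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian sensing matrix
e = 1e-3 * rng.standard_normal(M)                # small observation noise
Y = Phi @ X + e                                  # compressed output, length M

# Bound (2) sanity check: k*log(N/k) is about 39 here, comfortably below M = 128
```

Any of the reconstruction algorithms below then receives only $\Phi$ and $Y$ and must recover the $N$-dimensional $X$.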
A. Convex Optimization

Convex optimization is the original compressed sensing reconstruction algorithm [1]. Ideally, the compressed sensing signal recovery is defined as a basis pursuit optimization problem:

$$ \min_{\hat{X}} \|\hat{X}\|_1 \quad \text{subject to} \quad \Phi\hat{X} = Y \qquad (3) $$

which can be cast as a linear programming problem. However, for the signal reconstruction to be robust to noise, the constraints of the optimization problem are relaxed to give:

$$ \min_{\hat{X}} \|\hat{X}\|_1 \quad \text{subject to} \quad \|\Phi\hat{X} - Y\|_2 \leq \epsilon \qquad (4) $$
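For illustration, the relaxed problem can also be attacked with simple first-order methods. The sketch below uses iterative soft-thresholding (ISTA) on the unconstrained penalized (LASSO) form of the problem; this is not the interior-point solver used in this work, and the penalty weight and iteration count are illustrative choices:

```python
import numpy as np

def ista_lasso(Phi, y, lam=0.01, n_iter=1000):
    """Minimal ISTA sketch for the relaxed (LASSO) form of the recovery
    problem: min_x 0.5*||Phi x - y||_2^2 + lam*||x||_1. This is NOT the
    CVX interior-point solver used in the paper; lam and the iteration
    count are illustrative choices."""
    L = np.linalg.norm(Phi, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = x - Phi.T @ (Phi @ x - y) / L            # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

# Tiny hypothetical example: recover a 3-sparse signal from M = N/2 measurements
rng = np.random.default_rng(0)
N, M = 64, 32
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[[5, 20, 40]] = [1.5, -2.0, 1.0]
x_hat = ista_lasso(Phi, Phi @ x_true)
```

The soft-threshold step is what enforces L1 sparsity; the gradient step alone would yield the minimum-energy (L2) solution, which is generally not sparse.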
This type of convex optimization problem, also known as the least absolute shrinkage and selection operator (LASSO) or basis pursuit de-noising (BPDN), is solved using second-order cone programming.

L1-norm convex optimization is the current standard approach to basis pursuit problems primarily due to its proven stability. The convex optimization problems are implemented in this work with the open-source software CVX [6]. CVX uses a primal-dual interior-point method to solve (3) and (4).

B. Greedy Algorithms

Unlike convex optimization, greedy algorithms solve the reconstruction problem in a less exact manner. They function by greedily optimizing a metric that predicts error minimization.

Matching pursuit, one class of CS reconstruction greedy algorithms, attempts to find the columns of the measurement matrix $\Phi$ ($\Phi$ is commonly referred to as an "over-complete dictionary" in this context) that have the most participation in the measurement $Y$. For each iteration the column(s) of $\Phi$ with the strongest correlation to the residual is added to the support vector and its contribution is subtracted from the residual. In orthogonal matching pursuit (OMP), only one column of $\Phi$ (or "atom") is added to the support vector per iteration. In CoSaMP (Compressed Sampling Matching Pursuit), a specialized adaptation of matching pursuit for compressed sensing [3], the top $2k$ columns of $\Phi$ are added to the support vector per iteration and later pruned.

Orthogonal least squares (OLS) [4], like OMP, selects the column of $\Phi$ that minimizes the residual per iteration. Unlike matching pursuit, OLS evaluates each possible index's resulting residual before selection.

A more recent state-of-the-art reconstruction algorithm is normalized iterative hard thresholding (NIHT) [5]. This method differs from the greedy algorithms described above by not directly searching for the columns of $\Phi$ that reduce the residual error. Instead, NIHT operates by iteratively selecting solutions that both minimize the residual and maximize the difference between the current and previous solutions.

Fig. 1. Probability of exact recovery

III. EXPERIMENTAL RESULTS

In order to maintain consistency with the experimental setups used in existing works [2][5], the measurement matrix $\Phi \in \mathbb{R}^{M \times N}$ was populated with independent and identically distributed (i.i.d.) Gaussian random entries, achieving a 4X data compression (i.e. $N/M = 4$). $k$-spike signals, $X$, were synthesized by assigning $k$ randomly distributed elements values drawn from the i.i.d. Gaussian distribution, with all other points set to zero. Sparsity levels were varied by adjusting $k$. Results were averaged over an ensemble of 100 $k$-spike sample signals for each sparsity level.
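The greedy selection loop of OMP (Section II-B), run on a $k$-spike setup like the one just described, can be sketched as follows; this is a simplified NumPy illustration with hypothetical dimensions, not the exact implementation benchmarked in this work:

```python
import numpy as np

def omp(Phi, y, k):
    """Simplified OMP sketch: each iteration adds the single column of Phi
    most correlated with the residual, then re-fits by least squares.
    An illustration only, not the implementation benchmarked here."""
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # strongest correlation
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef          # orthogonal re-projection
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Hypothetical k-spike setup mirroring the text: 4X compression, Gaussian Phi
rng = np.random.default_rng(1)
N, M, k = 256, 64, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.choice([-2.0, -1.0, 1.0, 2.0], k)
x_hat = omp(Phi, Phi @ x_true, k)    # noise-free measurements Y = Phi X
```

The least-squares re-fit after every selection is what distinguishes *orthogonal* matching pursuit from plain matching pursuit, which only subtracts the newest atom's contribution.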
Fig. 2. SNR of reconstructed signals for the original signal ensemble having (a) 60 dB SNR, and (b) 80 dB SNR
$$ \mathrm{SNR} = 20\log_{10}\!\left(\frac{\|X\|_2}{\|X - \hat{X}\|_2}\right) \qquad (5) $$
The reconstruction accuracies for the SNR = 60 dB and SNR = 80 dB cases are shown in Fig. 2. The signals were synthesized by summing additive white Gaussian noise (AWGN) with the compressed signal $Y$. CoSaMP, OMP and NIHT meet their respective SNR targets at high sparsity levels and demonstrate robust noise tolerance, whereas L1-norm convex optimization and ROLS show moderate noise resilience with a loss of ~10 dB SNR at peak performance levels.
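The reconstruction SNR reported in this section can be computed with the standard output-SNR definition, assumed here (the paper's exact metric may differ):

```python
import numpy as np

def reconstruction_snr_db(x, x_hat):
    """Output SNR (dB) of a reconstruction x_hat against the original x.
    Standard definition, assumed here; the paper's exact metric may differ."""
    return 20.0 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_hat))
```

For instance, a reconstruction whose error energy is 100X smaller than the signal in amplitude corresponds to 40 dB.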
C. Computation Time
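Per-algorithm computation time can be gathered with a simple wall-clock harness such as the sketch below; the solver callable is a hypothetical stand-in for any of the reconstruction routines compared in this work:

```python
import time

def time_reconstruction(solver, Phi, y, n_trials=10):
    """Average wall-clock time of one reconstruction. `solver` is a
    hypothetical stand-in callable, e.g. solver(Phi, y) -> x_hat."""
    t0 = time.perf_counter()
    for _ in range(n_trials):
        solver(Phi, y)
    return (time.perf_counter() - t0) / n_trials
```

Averaging over several trials smooths out scheduler jitter, which matters when comparing fast greedy solvers against much slower interior-point runs.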
Fig. 5. ECG signal reconstruction comparison of (a) original signal with 92% sparsity using (b) L1-norm convex optimization, (c) CoSaMP, (d) OMP, (e) NIHT and (f) ROLS