
Received: 23 March 2019 Revised: 21 March 2020 Accepted: 19 April 2020

DOI: 10.1002/ima.22438

RESEARCH ARTICLE

An enhanced weighted greedy analysis pursuit algorithm with application to EEG signal reconstruction

Fahimeh Mohagheghian1,2,3 | Mohammad Reza Deevband2 | Nasser Samadzadehaghdam3,4 | Hassan Khajehpour3,4 | Bahador Makkiabadi3,4

1 Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut
2 Department of Medical Physics and Biomedical Engineering, School of Medicine, Shahid Beheshti University of Medical Sciences (SBMU), Tehran, Iran
3 Research Center for Biomedical Technology and Robotics (RCBTR), Institute of Advanced Medical Technologies (IAMT), Tehran University of Medical Sciences (TUMS), Tehran, Iran
4 Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences (TUMS), Tehran, Iran

Correspondence
Bahador Makkiabadi, Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences (TUMS), Tehran, Iran.
Email: b-makkiabadi@sina.tums.ac.ir

Funding information
Shahid Beheshti University of Medical Sciences

Abstract
In the past decade, compressed sensing (CS) has provided an efficient framework for signal compression and recovery as intermediate steps in signal processing. The well-known greedy analysis algorithm called Greedy Analysis Pursuit (GAP) is capable of recovering signals from a restricted number of measurements. In this article, we propose an extension of GAP that solves the weighted optimization problem subject to an inequality constraint based on the Lorentzian cost function, to improve EEG signal reconstruction in the presence of heavy-tailed impulsive noise. Numerical results illustrate the effectiveness of our proposed algorithm, called enhanced weighted GAP (ewGAP), in reinforcing the efficiency of signal reconstruction, making it an appropriate candidate for compressed sensing of EEG signals. The suggested algorithm achieves promising reconstruction performance and robustness, outperforming other analysis-based approaches such as GAP, Analysis Subspace Pursuit (ASP), and Analysis Compressive Sampling Matching Pursuit (ACoSaMP).

KEYWORDS
compressed sensing, cosparse analysis model, EEG signal reconstruction, Greedy Analysis Pursuit, sparsity

Int J Imaging Syst Technol. 2020;1–13. wileyonlinelibrary.com/journal/ima © 2020 Wiley Periodicals LLC

1 | INTRODUCTION

Electrical potentials of the head surface produced by the neural activity of the brain can be measured on the scalp using EEG electrodes. The EEG signal is considered one of the most frequently used biological signals and has several significant applications in medicine and biomedical engineering. Generally, storage and transmission of continuous EEG data lead to high power consumption due to the high sampling rate constrained by the Nyquist rate from the Shannon-Nyquist sampling theorem. Compressed sensing, also known as compressive sampling, enables us to acquire compressive signals with far fewer samples than the original data, and to reconstruct the signals by a nonlinear procedure.1,2

EEG signal recovery using CS leads to solving an underdetermined linear inverse problem:

    y = Mx    (1)

where the original EEG measurement vector x ∈ ℝ^n is sampled at the Nyquist rate (in the time domain), y ∈ ℝ^m specifies the compressed form of the vector x, and M ∈ ℝ^(m×n) (m < n) is the sampling or measurement matrix. If the measurement matrix satisfies certain properties, such as the null space property or the restricted isometry property (RIP), classical CS can successfully
recover the signal x from its compressive measurements y.3 The assumption of m < n implies that Equation (1) is an underdetermined inverse problem, and a typical approach to solving it is based on signal sparsity as a priori information.

1.1 | Sparse synthesis approach

The sparse synthesis model of a signal of interest x is presented by a sparse representation of the signal x, formulated as x = Dz, where z ∈ ℝ^d is assumed to be a k-sparse vector with respect to the (possibly) redundant (d > n) dictionary D ∈ ℝ^(n×d). A k-sparse vector z is a vector whose number of nonzero elements equals k, which is much smaller than the vector length (k = ‖z‖₀ ≪ d). Knowing the measured signal y and the measurement matrix M, in the sparse synthesis model the signal x is reconstructed by solving the following optimization problem (synthesis formulation):

    min_{z ∈ ℝ^d} ‖z‖₀  subject to  y = φz    (2)

where φ = MD and x = Dz. The optimization problem in Equation (2) is NP-hard, requiring a combinatorial search of exponential complexity.4 Thus, an alternative algorithm called Basis Pursuit (BP)5 is frequently used, which solves a convex problem based on l1-minimization. Wavelet, B-spline, and Gabor dictionaries have been suggested as sparsifying transforms to sparsely represent EEG signals.6,7

1.2 | Cosparse analysis approach

The counterpart of the sparse synthesis model is called the cosparse analysis model, which suggests for a signal x a possibly redundant (d > n) analysis operator (or dictionary) Ω ∈ ℝ^(d×n), defined such that the analysis representation vector Ωx is sparse. The signal x is called cosparse if its cosparsity l, calculated as l = d − ‖Ωx‖₀, is large. The index set containing the zero elements of the analysis representation Ωx is called the cosupport of x.

Then, the optimization model for cosparse signal recovery is formulated as follows (analysis formulation):

    min_{x ∈ ℝ^n} ‖Ωx‖₀  subject to  y = Mx    (3)

The l1-relaxation of the above analysis prior problem, Equation (3), is assumed as the main approach to recover the signal x.

Unlike the synthesis model, if the signal x and the analysis operator Ω are specified for the cosparse analysis model, then the analysis representation of the signal x, that is, Ωx, can be computed entirely. Since there is no ambiguity about Ωx and it is uniquely specified by Ω and x, it has been argued that the uniqueness condition is provided inherently for the analysis prior model, and the question of uniqueness arises only in solving the inverse problem. Further, it has been previously reported that EEG signals do not have a sparse representation in transformed domains, that is, under sparsifying transforms applied to the EEG signals.7 Thus, the cosparse analysis model has been proposed as an appropriate recovery method for EEG signals that yields better reconstruction results compared to the sparse synthesis model.8-10

1.3 | Related works

The l1-minimization problem was first investigated by Candes et al.11 In the field of signal processing, weighted l1-minimization with prior information has been frequently used for signal reconstruction.

To solve the analysis-based problem (Equation (3)), greedy-type algorithms such as Greedy Analysis Pursuit12-14 and thresholding-based methods such as iterative hard thresholding (IHT)15 have been proposed. Several modifications of the greedy algorithms, incorporating partially known support information, have been performed for compressed sensing.16 Modifications of the IHT,17 Binary Iterative Hard Thresholding (BIHT),18 Orthogonal Matching Pursuit (OMP),19 Compressive Sampling Matching Pursuit (CoSaMP),20 and re-weighted least squares21 algorithms were studied in References 22-24. The iterative reweighted approach can be considered a classical method to solve the lp-norm optimization problem.25 The reweighted l1-norm algorithm was developed with Iteratively Reweighted Least Squares (IRLS) for solving the least absolute shrinkage and selection operator (LASSO) problem, as proposed in References 26,27. However, the convergence speed of the IRLS solver was not very competitive compared to advanced methods of solving the optimization problem with an l1-norm prior, and the method suffered from potential numerical instabilities due to precision limitations in the calculation of the inverse matrix.28 The IRLS method was utilized in the FOCal Underdetermined System Solver (FOCUSS) algorithm to solve the l0-norm prior problem25,29 and, more generally, the lp-norm penalization problem with p ≤ 1.30
In the context of cosparse signal recovery, Giryes et al31,32 proposed analysis versions of IHT and HTP, called analysis IHT (AIHT) and analysis HTP (AHTP). Both AIHT and AHTP are accompanied by recovery guarantees analogous to the RIP-based guarantees that apply to the equivalent synthesis methods IHT and HTP. The other two algorithms are ACoSaMP and ASP, the analysis versions of CoSaMP and Subspace Pursuit (SP).32,33 Liu et al34 developed a modified (reweighted) GAP algorithm to solve the weighted regularized l2-minimization problem. The weight matrix was iteratively constructed as a function of the inner product of the approximation of the original signal x and the analysis dictionary rows at estimated cosupport locations. The algorithm iteratively determines the weights of the next iteration from the value of the current solution, forming an adaptive weight matrix that fills the gap between non-convex and convex optimization. Zhang et al35 proposed the CoIRLq algorithm as an iterative reweighted method to deal with nonconvex lp-norm optimization in a variational framework, which reconstructed both noisy and noiseless signals with weaker sufficient conditions and better performance compared to its convex counterpart and other methods such as GAP.35

As the data acquired by EEG data acquisition systems, as well as other sensing operators, are corrupted with noise, all CS signal reconstruction methods have been developed to approximate the signals from noisy measurements. However, commonly applied CS reconstruction algorithms are based on a Gaussian noise assumption and fail to recover signals contaminated with non-Gaussian impulsive noise. In recent years, several research efforts have addressed the signal recovery problem assuming impulsive processes.36-42 Several approaches have been proposed based on a data fidelity criterion that replaces the l2-norm with a more robust objective function. A continuous mixed norm (CMN) was exploited in Reference 43 for robust sparse recovery instead of the lp-norm. In Reference 37, Carrillo et al proposed the Lorentzian-based basis pursuit method, based on l1-minimization with a Lorentzian norm constraint on the residual error, for the reconstruction of sparse signals in impulsive environments. An extension to this approach was proposed by Arce38 to develop an algorithm employing iterative weighted myriad filters based on a Lorentzian cost function with an l0-norm regularization term. In Reference 44, the authors suggested Huber iterative hard thresholding (HIHT), and in Reference 41, an iterative hard thresholding algorithm using a Lorentzian cost function for reconstructing sparse signals in the presence of impulsive noise.

In Reference 45, a review of robust sparse signal reconstruction strategies was presented. The authors considered signal reconstruction in CS when the measured signals are corrupted by outliers or impulsive noise. They showed that robust techniques such as l1-based coordinate descent (l1-CD),40 Lorentzian-based basis pursuit (LBP), Lorentzian-based iterative hard thresholding (LIHT),41 Lorentzian-based coordinate descent (L-CD), the robust lasso (R-Lasso) method, the Huber iterative hard thresholding (HIHT) method, and the l1-OMP method outperform other traditional CS methods when the compressive measurements have large deviations, and show comparable performance in light-tailed cases.

1.4 | Our contribution

The aim of this study is to develop an algorithm for EEG signal reconstruction in noisy (Gaussian and non-Gaussian) as well as noiseless environments. In this article, we investigate the signal reconstruction performance in the cosparse analysis framework through a weighted analysis optimization problem subject to a Lorentzian cost function. We propose an extension to the GAP algorithm based on Lorentzian norm data fitting to enhance the recovery performance in a non-Gaussian noisy environment, with EEG signal reconstruction as the application. Further, we improve the efficiency of the reconstructed signals by iteratively adjusting the weights to impose the cosparsity assumption on the solution.

Thus, empirical evidence is provided to demonstrate the effectiveness of our algorithm in several numerical experiments for noiseless signals as well as Gaussian and impulsive noise assumptions. The algorithm suggests practical benefits, as it shows promising outcomes for cosparse signal recovery over a wide range of cosparsities and sampling ratios and with different noise levels and dictionary redundancies.

1.5 | Paper organization

The rest of this article is organized as follows. In section 2, we recall the GAP method and the Ω-RIP recovery guarantees for vectors with a sparse representation under the dictionary Ω. Further, a general overview of the Lorentzian-based basis pursuit technique is introduced. Then we present our proposed ewGAP algorithm as an enhanced version of the GAP algorithm. In section 3, we provide the numerical experiments for simulated EEG signals. In section 4, we conclude the derived results with a suggestion on future work.
2 | METHODS

2.1 | Greedy Analysis Pursuit Algorithm (GAP)

Greedy Analysis Pursuit is a greedy algorithm inspired by the well-known greedy pursuit algorithm used in the synthesis model, that is, the OMP.14 The strategy in GAP is based on identifying the locations of the zero-valued entries (or elements) of Ωx to specify an estimate of the cosupport λ, and then updating the signal x using the estimated cosupport. Unlike the synthesis model, GAP detects the non-zero elements in order to determine the locations of the zero-valued elements. The cosupport is initialized with the whole index set of size d, and through several iterations its size is reduced to l (or less, d − m), as described below.

Let x be the sparse signal measured by y = Mx, where M is an m × n (m < n) measurement matrix whose rows are incoherent with the signal x. In the first step of GAP, an initial approximation of the signal x is derived from the following optimization problem:

    x̂₀ = arg min_x ‖Ωx‖₂²  subject to  y = Mx    (4)

In the second step, the cosupport is estimated using the initially estimated signal x (the index set of the elements of Ωx with the smallest absolute values is taken as the cosupport). Using the estimated cosupport, a new estimate of the signal x is attained by solving the optimization problem of Equation (4) again, and subsequently a new cosupport is estimated from the estimated signal x. Finally, cosparse signal recovery is achieved by repeating the first and second steps as above.14 In order to ensure exact recovery of the signal, the Ω-RIP, which is an extension of the standard RIP,11 is applied.

2.1.1 | Ω-RIP definition

The measurement matrix M has the restricted isometry property adapted to Ω (Ω-RIP) with parameter δ_l if δ_l is the smallest constant that satisfies:

    (1 − δ_l)‖x‖₂² ≤ ‖Mx‖₂² ≤ (1 + δ_l)‖x‖₂²    (5)

where Ω is the analysis operator and Ωx is (d − l)-sparse (has at least l zeros).11,32

For random matrices with independent Bernoulli, Gaussian, or sub-Gaussian entries, if the number of measurements m is on the order of l × log(d/l), the Ω-RIP is fulfilled.11

2.2 | Lorentzian based basis pursuit

The Lorentzian basis pursuit (BP) method37 is a robust algorithm for reconstructing sparse signals whose measurements are corrupted by impulsive noise, and it outperforms state-of-the-art recovery algorithms in impulsive environments.41 Since large deviations are not penalized as heavily by the Lorentzian norm as by the l1- and l2-norms, it is a more robust error metric in the presence of impulsive noise.37,46

To estimate x from y, the following non-linear optimization problem is used in Lorentzian BP:

    min_{x ∈ ℝ^n} ‖x‖₁  subject to  ‖y − φx‖_{LL₂,γ} < ϵ    (6)

where γ is the scale parameter and ϵ bounds the measurement (sampling) noise.

The Lorentzian norm, or LL₂ norm, is defined as:

    ‖u‖_{LL₂,γ} = Σ_{i=1}^{m} log(1 + γ⁻² uᵢ²),  u ∈ ℝ^m, γ > 0    (7)

The Lorentzian norm has significant properties that make it appropriate for impulsive and Gaussian noise. Briefly, the Lorentzian norm:

• is a continuous function everywhere.
• is convex near the origin, where it behaves as an l2-norm cost function for small deviations.
• does not over-penalize large deviations, leading to a robust metric for gross deviations of the signal.

2.3 | Enhanced weighted GAP (ewGAP)

Given the advantages of Lorentzian norm-based recovery methods, we propose the following weighted GAP algorithm, which uses the Lorentzian norm for data fitting within a greedy optimization strategy to estimate the signal x ∈ ℝ^n from the measurement vector y ∈ ℝ^m:

    min_{x ∈ ℝ^n} ‖WΩx‖₂²  subject to  ‖y − Mx‖_{LL₂,γ} < ϵ    (8)

where W is the diagonal weight matrix with positive scalars on the diagonal, Ω is the analysis operator (or dictionary) such that the analysis representation vector Ωx would be sparse, and ‖·‖_{LL₂,γ} is the Lorentzian norm with scale parameter γ.
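The robustness claim behind Equation (7) is easy to verify numerically. The sketch below (our own illustration, with arbitrary test values) compares how the squared l2 cost and the LL₂ cost react when a single sample of a residual vector is hit by a gross impulsive outlier:

```python
import numpy as np

def lorentzian_norm(u, gamma):
    """LL2 norm of Equation (7): sum_i log(1 + u_i^2 / gamma^2)."""
    return np.sum(np.log1p((u / gamma) ** 2))

# A small residual vector, and the same vector with one gross outlier.
r = np.full(8, 0.1)
r_outlier = r.copy()
r_outlier[0] = 100.0          # impulsive corruption of a single sample

gamma = 0.5
# The l2 cost explodes with the outlier, while the Lorentzian cost grows
# only logarithmically -- the property exploited by Lorentzian BP and ewGAP.
l2_ratio = np.sum(r_outlier**2) / np.sum(r**2)
ll2_ratio = lorentzian_norm(r_outlier, gamma) / lorentzian_norm(r, gamma)
```

Here the squared l2 cost grows by roughly five orders of magnitude while the LL₂ cost grows by well under two, which is why a Lorentzian data-fit constraint does not let one corrupted measurement dominate the reconstruction.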
A diversity of possible functions can be used to calculate the weight matrix W. Following the FOCUSS and reweighted l1-minimization5 algorithms, the weight matrix at each iteration k is computed as W = diag{|Ω_{Λ̂ₖ₋₁} x̂ₖ₋₁|^(p−2)} (0 ≤ p ≤ 1), where x̂ is the estimated signal and Ω_{Λ̂} is the dictionary restricted to the cosupport Λ̂ estimated at the previous step k − 1.34

Therefore, the l2-norm weighted optimization problem of Equation (8) is equivalent to the lp-norm unweighted optimization problem of Equation (9), which is non-convex:

    min_{x ∈ ℝ^n} ‖Ωx‖ₚᵖ  subject to  ‖y − Mx‖_{LL₂,γ} < ϵ    (9)

It is highly reasonable to solve the lp-norm (0 ≤ p ≤ 1) rather than the l2-norm prior problem in order to promote the cosparsity of the solution. The minimization yields a unique solution whenever the corresponding unweighted counterpart has a unique solution, though the weighted and unweighted optimization problems have different solutions. Depending on the selected weights, the weighted optimization might improve the signal reconstruction and outperform simple implementations of lp-norm minimization. Candes et al5 have suggested, as a general rule of thumb, that the weights should be inversely related to the true signal magnitudes.

In the proposed iterative optimization problem, solving the unweighted equivalent of Equation (8) provides an inaccurate signal estimate at the first iteration, through which the Ωx coefficients with the largest absolute values are identified. The rows of Ω corresponding to the identified largest coefficients do not belong to the cosupport; thus, these rows should be removed from the cosupport. In the following iterations, the remaining small coefficients are identified and removed by applying the weights, which down-weight the large coefficients.

Since the weights are modified at each iteration, the solution of the weighted optimization problem yields a more reasonable approximation of the signal cosupport after each iteration. Consequently, the solution can be derived by solving an iterative weighted minimization problem with weights that are strongly influenced by the estimated cosupport. Without knowing the original signal and the cosupport, constructing the exact weight matrix is essentially impossible, but approximating it at each step gradually corrects the solution of the main problem. Details of the enhanced weighted GAP algorithm are described below.

ewGAP Algorithm

1. Input parameters: the measurement matrix M, the analysis dictionary Ω, the measured vector y, the cosparsity level l (the number of zeros in the sparse representation), and parameters γ, ϵ, ϵ₁, T > 0.
2. Initialization:
   • Initial iteration: k = 0
   • Initial cosupport: Λ̂₀ = {1, 2, 3, …, d}
   • Initial weight matrix: W = I
   • Initial solution:
     x̂₀ = arg min_x ‖W Ω_{Λ̂₀} x‖₂²  subject to  ‖y − Mx‖₂² < ϵ    (10)
3. Iterate k → k + 1 and accomplish the following steps:
   • Compute α = Ω x̂ₖ₋₁
   • Find the largest elements: Γₖ = {i : |αᵢ| = maxⱼ |αⱼ|}
   • Update the cosupport: Λ̂ₖ = Λ̂ₖ₋₁ ∖ Γₖ
   • Update the weight matrix Wₖ
   • Update the solution:
     x̂ₖ = arg min_x ‖Wₖ Ω_{Λ̂ₖ} x‖₂²  subject to  ‖y − Mx‖_{LL₂,γ} < ϵ    (11)
   • Stopping criterion: if k ≥ l, or ‖x̂ₖ − x̂ₖ₋₁‖₂² ≤ T, or ‖Mx̂ₖ − y‖_∞ ≤ ϵ₁, stop.
4. Output: the identified solution x̂ₖ.

As y is the measured data vector and M is a measurement matrix, the main objective is to obtain the unknown signal x, which must be estimated from y. Since the initial starting point for the vector x considerably affects the experimental outcomes, the ewGAP algorithm is initialized with the solution x of the unweighted equivalent regularized problem, as the GAP algorithm proposes this technique for initialization.

3 | EXPERIMENTAL RESULTS AND DISCUSSION

To evaluate the efficiency of the proposed ewGAP algorithm, we conduct several experiments to compare the
FIGURE 1 A, Typical simulated signal x with n = 256; B, Sparse analysis representation vector Ωx with l = 240
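A synthetic l-cosparse signal of the kind shown in Figure 1 can be generated by projecting a random vector onto the orthogonal complement of l randomly chosen rows of Ω. The following is our own sketch of this standard construction (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, l = 256, 512, 240

# Analysis operator with i.i.d. Gaussian entries and unit-norm rows.
Omega = rng.standard_normal((d, n))
Omega /= np.linalg.norm(Omega, axis=1, keepdims=True)

# Random cosupport lam of size l; the cosparse signal must lie in the
# null space of the corresponding l rows of Omega.
lam = rng.choice(d, size=l, replace=False)
B = Omega[lam]                              # l x n block of cosupport rows
P = np.eye(n) - np.linalg.pinv(B) @ B       # orthogonal projector onto null(B)
x = P @ rng.standard_normal(n)              # satisfies Omega[lam] @ x = 0
```

With l = 240 < n = 256 the null space has dimension 16, so x is nonzero, and Ωx has (at least) l zero entries, that is, cosparsity l.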

reconstruction results of our algorithm with the well-known cosparse algorithm GAP, as well as other cosparse algorithms such as ASP, ACoSaMP, modified GAP, and CoIRLq, using synthetic data.

We run 50 realizations of each experiment, reconstructing the signal x from the measurements y with each algorithm, and quantify the recovery results via the normalized MSE (mean square error) calculated between the original and reconstructed signals, averaged over all realizations. The experiments are performed using the MATLAB optimization toolbox,47 on a Core i7 64-bit processor under Windows 10. To simulate noisy data, two different types of noise are applied to the signals as additive measurement noise, namely impulsive noise and Gaussian noise.

The parameter settings of ewGAP are as follows. The d × n analysis operator Ω was specified with i.i.d. Gaussian entries and approximately unit-norm rows. The cosparsity l is generated by randomly choosing l rows of the analysis operator Ω as the index set λ, namely the cosupport. In order to form the synthetic cosparse EEG signal x, a random signal of length n with i.i.d. Gaussian entries is projected onto the orthogonal complement of the subspace generated by the rows of Ω denoted by λ.14 Figure 1 shows a typical noiseless synthetic EEG signal x of length n = 256 and its sparse analysis (ie, cosparse) representation vector Ωx with l = 240.

By selecting a sampling ratio (ie, m/n), the m × n measurement operator M yields the synthetic EEG compressive measurements y = Mx. As has been previously applied for EEG compression in the CS framework, we select i.i.d. Gaussian entries for the measurement matrix M to satisfy the RIP with high probability.1 The weighting function is selected as |Ω_{Λ̂ₖ₋₁} x̂ₖ₋₁|^(p−2).

Based on different choices of the power of the weighting function (p) and the scale parameter of the Lorentzian norm (γ), we may obtain different versions of our algorithm. In the following experiments, we set p = 0.5 (chosen empirically) and select two values of the scale parameter, γ = 0.1 and γ = 0.5. In all experiments, the values of the parameters ϵ₁ and T were empirically selected as 10⁻² and 10⁻⁵, respectively.

In the context of biomedical engineering, one kind of model error considered in biological signal analysis arises from the combination of noise and the original signals. Thus, we conduct a sequence of experiments to evaluate the performance of several algorithms for simulated EEG signal reconstruction. The experiments and results are categorized into the following sub-sections: noiseless data, data corrupted by Gaussian noise, and data corrupted by impulsive noise.

3.1 | Noiseless data

3.1.1 | Recovery performance vs cosparsity

In this section, we examine the recovery error for signals without additive noise to evaluate the efficiency of the ewGAP algorithm in the noiseless environment.

Figure 2 presents the average MSE calculated between the original and reconstructed signals over all realizations for different signal cosparsities. The input parameters are set as m = 128, n = 256 for the m × n matrix M,
d = 512 for the d × n analysis operator Ω (the overcompleteness factor is equal to 2, which yields d = 2n = 512).

From the figure, it can be seen that ewGAP (both versions, using γ = 0.1 and γ = 0.5) is the most efficient algorithm when the signals are reconstructed at high cosparsity levels. Further, for low cosparsities, ewGAP shows a more acceptable reconstruction error compared to the other algorithms.

FIGURE 2 Performance of signal reconstruction by MSE as a function of the cosparsity l, with n = 256, m = 128, d = 512

3.1.2 | Recovery performance vs measurement numbers

Figure 3 depicts the recovery performance of the algorithms as a function of the number of signal measurements. The parameters are set as n = 256, d = 512, and the cosparsity level of the signal is chosen as l = 200. The number of measurements varies from 26 to 230, which corresponds to sampling ratios between 0.1 and 0.9. The figure indicates comparable performance for all algorithms at low and high sampling ratios, while showing the superiority of ewGAP for middle values of the sampling ratio.

FIGURE 3 Performance of signal reconstruction by MSE as a function of the number of measurements m, with n = 256, l = 200, d = 512

3.2 | Corrupted data by Gaussian noise

3.2.1 | Recovery performance vs cosparsity

The next experiment investigates the reconstruction performance vs the cosparsity level for signals corrupted with Gaussian noise. The average performance of 50 realizations of the signal recovery for each algorithm is depicted in Figure 4. The signal length is set as n = 256 and the number of measurements is fixed at m = 128, which yields a sampling ratio of δ = 0.5. The noisy signals are generated with SNR = 20 dB, and the dictionary overcompleteness is set to 2 (d = 512).

FIGURE 4 Performance of noisy signal reconstruction by MSE as a function of the cosparsity with n = 256, m = 128, d = 512, input SNR = 20 dB
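The SNR = 20 dB condition used in these noisy experiments amounts to scaling additive white Gaussian noise against the clean measurements. The paper does not spell out its exact scaling, so the following is a sketch of the standard construction (`add_noise_snr` is our name):

```python
import numpy as np

def add_noise_snr(y, snr_db, rng=None):
    """Add white Gaussian noise to y at the requested SNR in dB."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(y.shape)
    # scale the noise so that ||y|| / ||scaled noise|| matches the target SNR
    scale = np.linalg.norm(y) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    return y + scale * noise
```

For example, `add_noise_snr(M @ x, 20.0)` produces the y used in the Gaussian-noise trials; impulsive noise would replace the Gaussian draw with a heavy-tailed one.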
8 MOHAGHEGHIAN ET AL.

TABLE 1 Average and SD of the reconstruction MSE as a function of the cosparsity (n = 256, d = 512, m = 128, SNR = 20 dB)

Cosparsity

150 160 170 180 190 200 210 220 230 240 250
GAP Mean 2.10e−03 2.05e−03 2.07e−03 2.00e−03 1.96e−03 1.92e−03 1.93e−03 1.85e−03 1.83e−03 1.71e−03 1.75e−03
Std 2.50e−04 2.43e−04 2.49e−04 2.41e−04 2.58e−04 2.44e−04 2.93e−04 2.30e−04 2.33e−04 2.28e−04 2.47e−04
mGAP Mean 2.10e−03 2.05e−03 2.07e−03 2.00e−03 1.96e−03 1.92e−03 1.93e−03 1.85e−03 1.84e−03 1.72e−03 1.75e−03
Std 2.52e−04 2.42e−04 2.51e−04 2.40e−04 2.53e−04 2.44e−04 2.93e−04 2.33e−04 2.36e−04 2.29e−04 2.48e−04
ewGAP Mean 2.02e−03 1.93e−03 1.95e−03 1.88e−03 1.77e−03 1.72e−03 1.71e−03 1.58e−03 1.53e−03 1.44e−03 1.41e−03
γ = 0.1 Std 2.88e−04 2.73e−04 2.52e−04 2.48e−04 2.75e−04 2.63e−04 3.29e−04 2.18e−04 2.56e−04 2.40e−04 2.19e−04
ewGAP Mean 2.07e−03 1.97e−03 1.99e−03 1.92e−03 1.80e−03 1.74e−03 1.71e−03 1.54e−03 1.52e−03 1.40e−03 1.38e−03
γ= 0.5 Std 3.07e−04 2.84e−04 2.78e−04 2.77e−04 2.91e−04 2.68e−04 3.42e−04 2.25e−04 2.58e−04 2.23e−04 2.42e−04
CoIRLq Mean 2.12e−03 2.06e−03 2.08e−03 2.02e−03 1.99e−03 1.95e−03 1.97e−03 1.89e−03 1.88e−03 1.75e−03 1.81e−03
Std 2.49e−04 2.41e−04 2.48e−04 2.33e−04 2.61e−04 2.49e−04 2.79e−04 2.31e−04 2.35e−04 2.50e−04 2.51e−04
ACoSaMP Mean 2.40e−03 2.40e−03 2.46e−03 2.50e−03 2.50e−03 2.48e−03 2.62e−03 2.74e−03 3.42e−03 5.26e−03 6.09e−03
Std 2.59e−04 2.21e−04 3.10e−04 2.78e−04 2.93e−04 3.02e−04 3.69e−04 4.25e−04 6.64e−04 1.04e−03 8.75e−04
ASP Mean 2.74e−03 2.69e−03 2.66e−03 2.59e−03 2.66e−03 2.58e−03 2.64e−03 2.73e−03 3.13e−03 4.41e−03 5.26e−03
Std 3.41e−04 3.01e−04 3.98e−04 3.15e−04 4.00e−04 3.43e−04 4.27e−04 3.84e−04 6.18e−04 3.41e−04 3.01e−04
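The normalized MSE reported in these tables can be computed per realization and then averaged. This is our reading of "normalized MSE" (the paper does not give its exact formula, so treat the normalization by the true-signal energy as an assumption):

```python
import numpy as np

def normalized_mse(x_true, x_hat):
    """Per-realization error ||x_hat - x_true||^2 / ||x_true||^2 (assumed form)."""
    return np.sum((x_hat - x_true) ** 2) / np.sum(x_true ** 2)

def average_nmse(trials):
    """Average normalized MSE over (x_true, x_hat) realization pairs."""
    return float(np.mean([normalized_mse(xt, xh) for xt, xh in trials]))
```

A perfect reconstruction gives 0 and the all-zero estimate gives 1, so the 1e-3-scale entries in Tables 1 and 2 correspond to sub-percent relative energy error.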

According to Figures 2 and 4, we can find that: (a) The average MSE values acquired by the different signal recovery algorithms show the same trend in both the noiseless and Gaussian noise environments. (b) The ewGAP method (using both γ = 0.1 and γ = 0.5) achieves the lowest average MSE among all the algorithms; its best performance is observed at higher cosparsity values. (c) The ASP and ACoSaMP methods show the highest average error values among all the algorithms, specifically for high cosparsities. Table 1 summarizes the results of this section in terms of averaged MSE and SD at different cosparsities for all algorithms.

3.2.2 | Recovery performance vs measurement numbers

The next experiment is conducted to explore the behavior of the algorithms at varied sampling ratios when the measured signals are corrupted with Gaussian noise (Figure 5). Signals are generated with cosparsity l = 200 and signal length n = 256. As in the previous experiment, the dictionary overcompleteness is set to 2 (d = 512). The number of measurements varies from 26 to 230 (sampling ratio between 0.1 and 0.9).

FIGURE 5 Performance of noisy signal recovery by MSE as a function of the number of measurements m, with n = 256, l = 200, d = 512, input SNR = 20 dB

Regarding Figures 3 and 5, one can see that: (a) For very low and very high values of the sampling ratio, ewGAP achieves MSE values comparable to the GAP, mGAP, and CoIRLq algorithms for both clean and noisy signals. (b) The superiority of ewGAP over the other algorithms is observable when the measured signal is moderately compressed (sampling ratio of 0.5 to 0.7). (c) The ASP and ACoSaMP algorithms show the largest deviation from the average MSE acquired by the other algorithms; however, they show comparable or lower MSE at high sampling ratios, which might be interpreted as a performance inconsistency. Table 2 provides the results in terms of
TABLE 2 Average and SD of the reconstruction MSE as a function of the number of measurements (n = 256, d = 512, l = 200, SNR = 20 dB)

Number of measurements (sampling ratio)

26 (0.1) 51 (0.2) 77 (0.3) 102 (0.4) 128 (0.5) 154 (0.6) 179 (0.7) 205 (0.8) 230 (0.9)
GAP Mean 5.10e−03 3.87e−03 3.05e−03 2.52e−03 1.99e−03 1.51e−03 1.15e−03 7.87e−04 6.21e−04
Std 3.38e−04 4.16e−04 3.44e−04 2.35e−04 2.35e−04 1.81e−04 1.99e−04 1.35e−04 1.43e−04
mGAP Mean 5.13e−03 3.88e−03 3.05e−03 2.52e−03 1.99e−03 1.51e−03 1.14e−03 7.57e−04 5.25e−04
Std 3.30e−04 4.15e−04 3.44e−04 2.39e−04 2.38e−04 1.84e−04 2.00e−04 1.29e−04 9.82e−05
ewGAP Mean 5.00e−03 3.79e−03 2.96e−03 2.36e−03 1.79e−03 1.31e−03 9.47e−04 6.03e−04 4.56e−04
γ = 0.1 Std 4.41e−04 5.09e−04 4.25e−04 2.86e−04 2.55e−04 2.09e−04 1.99e−04 1.29e−04 1.17e−04
ewGAP Mean 5.04e−03 3.83e−03 2.99e−03 2.39e−03 1.76e−03 1.29e−03 9.11e−04 5.58e−04 3.39e−04
γ = 0.5 Std 4.75e−04 5.31e−04 4.38e−04 2.92e−04 2.51e−04 1.96e−04 1.96e−04 1.35e−04 8.39e−05
CoIRLq Mean 5.20e−03 3.97e−03 3.10e−03 2.56e−03 2.02e−03 1.54e−03 1.17e−03 6.83e−04 4.21e−04
Std 3.09e−04 3.96e−04 3.28e−04 2.53e−04 2.31e−04 1.89e−04 2.01e−04 1.45e−04 1.01e−04
ACoSaMP Mean 5.44e−03 4.41e−03 3.69e−03 3.17e−03 2.56e−03 2.35e−03 4.50e−03 9.37e−04 3.77e−04
Std 3.40e−04 4.23e−04 3.80e−04 2.98e−04 3.68e−04 4.65e−04 1.11e−03 3.90e−04 1.51e−04
ASP Mean 5.59e−03 5.39e−03 4.17e−03 3.38e−03 2.68e−03 2.16e−03 4.23e−03 8.82e−04 3.85e−04
Std 3.78e−04 6.95e−04 5.21e−04 4.98e−04 4.00e−04 3.84e−04 8.58e−04 3.42e−04 1.46e−04

averaged MSE and SD for noisy signal reconstruction


over all trials under different compression ratios for all
algorithms.
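The Monte-Carlo protocol behind Table 2 (average MSE and SD over repeated trials at each sampling ratio m/n) can be sketched as follows. Here `recover` and `make_signal` are hypothetical stand-ins for a cosparse recovery algorithm (e.g., GAP or ewGAP) and the test-signal generator; the paper's own implementations are not reproduced here.

```python
import numpy as np

def average_mse(recover, make_signal, n=256, n_trials=50,
                sampling_ratios=(0.1, 0.3, 0.5, 0.7, 0.9), seed=0):
    """Average and SD of reconstruction MSE per sampling ratio m/n.

    `recover(y, M)` and `make_signal()` are placeholders for a recovery
    algorithm and a signal generator; only the evaluation loop itself
    follows the protocol described in the text.
    """
    rng = np.random.default_rng(seed)
    results = {}
    for ratio in sampling_ratios:
        m = int(round(ratio * n))
        errs = []
        for _ in range(n_trials):
            x = make_signal()
            # Random Gaussian measurement matrix, a standard CS choice.
            M = rng.standard_normal((m, n)) / np.sqrt(m)
            y = M @ x
            x_hat = recover(y, M)
            errs.append(np.mean((x - x_hat) ** 2))
        results[ratio] = (np.mean(errs), np.std(errs))
    return results
```

The returned dictionary maps each sampling ratio to a (mean, SD) pair, matching the layout of Tables 2-5.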

3.2.3 | Recovery performance vs dictionary overcompleteness

In this section, the reconstruction performance is evaluated as a function of the dictionary overcompleteness for signals corrupted with Gaussian noise. The number of measurements is fixed at m = 128 and the signal length is set to n = 256. The noisy signals were generated with SNR = 20 dB.
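The overcompleteness sweep needs analysis operators Ω of varying redundancy d/n. A common construction in cosparse analysis experiments is a random tight frame; whether the paper used exactly this construction is not stated in this excerpt, so the sketch below is illustrative only.

```python
import numpy as np

def random_tight_frame(d, n, seed=0):
    """Random analysis operator Omega (d x n) with redundancy d/n.

    Orthonormalizing the columns of a Gaussian matrix (d >= n) yields a
    tight frame, i.e., Omega.T @ Omega = I_n. This is one conventional
    choice in the cosparse analysis literature, assumed here.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((d, n))
    # Reduced QR of a d x n matrix gives Q with orthonormal columns.
    Q, _ = np.linalg.qr(A)
    return Q

# Redundancy values matching Table 3 (d/n from 1 to 2, n = 256).
operators = {r: random_tight_frame(int(r * 256), 256) for r in (1.0, 1.25, 1.5, 1.75, 2.0)}
```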
The relationship between the dictionary redundancy and the reconstruction error is shown in Figure 6. From the figure, it can be seen that when the dictionary redundancy is low, the original signal can be restored with a lower error rate by all algorithms. In addition, the ewGAP algorithm shows better performance in terms of MSE at all overcompleteness values compared to the other algorithms. Table 3 lists the MSE and SD values in detail.

FIGURE 6 Performance of signal recovery by MSE as a function of the dictionary redundancy (overcompleteness) with n = 256, m = 128, l = 200, input SNR = 20 dB

3.2.4 | Recovery performance vs noise power

In this experiment, we examine the effect of Gaussian noise power on the performance of the recovery algorithms. A set of l-cosparse vectors (l = 200) of size n × 1 is generated, where n = 256 and d = 512 (overcompleteness = 2). Each sample vector is compressed via a measurement matrix M with m = 128. Figure 7 presents the performance of the algorithms by the averaged MSE of the reconstructed signals (over all trials) as a function of the noise power.
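The generation step described above (an l-cosparse vector, compressed by M and then corrupted at a prescribed SNR) can be sketched as follows. The null-space projection used to make x cosparse is an assumption, as the paper's exact generator is not shown in this excerpt.

```python
import numpy as np

def make_cosparse_trial(Omega, l=200, m=128, snr_db=20, seed=0):
    """Generate one l-cosparse test vector and its noisy measurements.

    A vector x is l-cosparse w.r.t. Omega (d x n) when Omega @ x has at
    least l zeros. Here x is drawn from the null space of l randomly
    chosen rows of Omega (an assumed construction, not the paper's).
    """
    rng = np.random.default_rng(seed)
    d, n = Omega.shape
    cosupport = rng.choice(d, size=l, replace=False)
    B = Omega[cosupport]
    # Right-singular vectors beyond rank(B) span the null space of B.
    _, _, Vt = np.linalg.svd(B)
    null_basis = Vt[np.linalg.matrix_rank(B):].T       # n x (n - rank)
    x = null_basis @ rng.standard_normal(null_basis.shape[1])
    x /= np.linalg.norm(x)
    M = rng.standard_normal((m, n)) / np.sqrt(m)       # measurement matrix
    y_clean = M @ x
    # White Gaussian noise scaled to the requested measurement SNR.
    noise = rng.standard_normal(m)
    noise *= np.linalg.norm(y_clean) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    return x, M, y_clean + noise
```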
TABLE 3 Average and SD of the reconstruction MSE as a function of the dictionary overcompleteness (n = 256, m = 128, l = 200, SNR = 20 dB). Columns are labeled by the dictionary overcompleteness (d/n).

| Algorithm | | 1 | 1.25 | 1.50 | 1.75 | 2 |
|---|---|---|---|---|---|---|
| GAP | Mean | 1.37e−03 | 1.59e−03 | 1.71e−03 | 1.85e−03 | 1.94e−03 |
| | Std | 2.32e−04 | 2.72e−04 | 2.26e−04 | 2.19e−04 | 2.56e−04 |
| mGAP | Mean | 1.37e−03 | 1.59e−03 | 1.70e−03 | 1.85e−03 | 1.94e−03 |
| | Std | 2.34e−04 | 2.78e−04 | 2.33e−04 | 2.22e−04 | 2.57e−04 |
| ewGAP (γ = 0.1) | Mean | 9.14e−04 | 1.23e−03 | 1.43e−03 | 1.64e−03 | 1.74e−03 |
| | Std | 2.45e−04 | 2.90e−04 | 2.56e−04 | 2.41e−04 | 2.57e−04 |
| ewGAP (γ = 0.5) | Mean | 8.62e−04 | 1.25e−03 | 1.42e−03 | 1.64e−03 | 1.74e−03 |
| | Std | 2.44e−04 | 2.94e−04 | 2.64e−04 | 2.59e−04 | 2.50e−04 |
| CoIRLq | Mean | 1.40e−03 | 1.66e−03 | 1.77e−03 | 1.90e−03 | 1.97e−03 |
| | Std | 3.21e−04 | 2.89e−04 | 2.34e−04 | 2.12e−04 | 2.51e−04 |
| ACoSaMP | Mean | 1.12e−03 | 5.64e−03 | 2.60e−03 | 2.49e−03 | 2.49e−03 |
| | Std | 6.79e−04 | 9.93e−04 | 5.41e−04 | 3.11e−04 | 2.80e−04 |
| ASP | Mean | 8.92e−04 | 3.82e−03 | 2.52e−03 | 2.59e−03 | 2.67e−03 |
| | Std | 5.68e−04 | 8.94e−04 | 4.46e−04 | 3.26e−04 | 3.20e−04 |

The figure demonstrates that ewGAP with γ = 0.1 and γ = 0.5 achieves significantly better performance than the other algorithms and performs robustly at all selected noise levels. Table 4 lists the averaged MSE and SD for noisy signal reconstruction over all trials under different SNR values for all algorithms.

FIGURE 7 Performance of signal recovery by MSE as a function of the input SNR with n = 256, m = 128, l = 200, d = 512

3.3 | Corrupted data by impulsive noise

3.3.1 | Recovery performance vs tail parameter

In this section, the reconstruction performance is evaluated for the recovery of cosparse signals with additive impulsive noise. To this aim, the tail parameter of the noise (α) is varied from 0.5 to 2, that is, from very impulsive to Gaussian. The parameter β, which specifies the skewness of the noise distribution, lies in the range [−1, +1]: when it is positive, the distribution is skewed to the right; when it is negative, it is skewed to the left; and when β = 0, the distribution is symmetric. In this experiment, we set β = 0 for all cases. Figure 8 shows the performance of the algorithms as a function of the tail parameter, with the parameters set as l = 210, n = 256, d = 512, and m = 128. For small values of α, all algorithms perform poorly, except ewGAP with γ = 0.1, which yields the most faithful reconstruction; ewGAP with γ = 0.5 also produces acceptable results in comparison with the other algorithms. Note that when the tail parameter is close to two (i.e., the Gaussian case), all algorithms show approximately similar reconstruction results. In addition, ewGAP with γ = 0.1 maintains a consistent descending trend over all values of the tail parameter. Table 5 shows the averaged MSE and SD for noisy signal reconstruction over all trials under different values of the tail parameter for all algorithms.
TABLE 4 Average and SD of the reconstruction MSE as a function of the input SNR (n = 256, m = 128, l = 200, d = 512). Columns are labeled by the input SNR (dB).

| Algorithm | | 10 | 15 | 20 | 25 | 30 | 35 |
|---|---|---|---|---|---|---|---|
| GAP | Mean | 2.36e−03 | 2.08e−03 | 1.95e−03 | 1.95e−03 | 1.87e−03 | 1.86e−03 |
| | Std | 1.91e−04 | 2.37e−04 | 2.54e−04 | 2.34e−04 | 2.76e−04 | 1.91e−04 |
| mGAP | Mean | 2.36e−03 | 2.08e−03 | 1.95e−03 | 1.95e−03 | 1.87e−03 | 1.86e−03 |
| | Std | 1.87e−04 | 2.37e−04 | 2.54e−04 | 2.35e−04 | 2.77e−04 | 1.94e−04 |
| ewGAP (γ = 0.1) | Mean | 2.20e−03 | 1.89e−03 | 1.76e−03 | 1.76e−03 | 1.69e−03 | 1.66e−03 |
| | Std | 2.05e−04 | 2.65e−04 | 2.54e−04 | 2.44e−04 | 2.84e−04 | 2.20e−04 |
| ewGAP (γ = 0.5) | Mean | 2.14e−03 | 1.86e−03 | 1.76e−03 | 1.78e−03 | 1.73e−03 | 1.67e−03 |
| | Std | 2.07e−04 | 2.80e−04 | 2.70e−04 | 2.68e−04 | 2.96e−04 | 2.28e−04 |
| CoIRLq | Mean | 2.38e−03 | 2.12e−03 | 1.97e−03 | 1.97e−03 | 1.90e−03 | 1.89e−03 |
| | Std | 1.84e−04 | 2.33e−04 | 2.60e−04 | 2.26e−04 | 2.67e−04 | 1.91e−04 |
| ACoSaMP | Mean | 3.01e−03 | 2.67e−03 | 2.52e−03 | 2.54e−03 | 2.43e−03 | 2.38e−03 |
| | Std | 2.75e−04 | 3.40e−04 | 2.62e−04 | 2.77e−04 | 3.01e−04 | 3.03e−04 |
| ASP | Mean | 3.07e−03 | 2.77e−03 | 2.56e−03 | 2.64e−03 | 2.53e−03 | 2.56e−03 |
| | Std | 3.39e−04 | 3.17e−04 | 3.36e−04 | 3.16e−04 | 3.42e−04 | 3.32e−04 |

FIGURE 8 Performance of signal reconstruction by MSE as a function of the tail parameter of the impulsive noise, with n = 256, m = 128, l = 210, d = 512

4 | CONCLUSION

As a well-known pursuit method, GAP can recover signals from a restricted number of compressive measurements by solving a constrained least-squares problem with an analysis-model prior. However, the sensitivity of least-squares estimators to signal outliers reduces the recovery efficiency for signals corrupted with non-Gaussian noise.

In this article, we aimed to improve the signal reconstruction performance of the GAP algorithm, which effectively solves the analysis pursuit problem. Our proposed approach, called ewGAP, solves a weighted optimization problem satisfying an inequality constraint based on the Lorentzian norm to enhance signal recovery from compressive measurements in the presence of impulsive noise. Several numerical experiments were performed to examine the performance of ewGAP in the noiseless setting and on signals corrupted by Gaussian or impulsive noise. Our algorithm outperformed the traditional recovery techniques that use the l2-norm as the data-fitting term, specifically when the measurements have gross deviations, that is, highly impulsive noise (small values of the tail parameter). This is due to a specific characteristic of the Lorentzian norm used as the data-fitting term: it does not heavily penalize large deviations of the measured signal, which makes it appropriate for signals corrupted by impulsive noise, while it behaves like an l2-norm cost function for small deviations. The weighted lp-norm term encourages a cosparse solution by applying p = 0.5 for the weight, which is iteratively adjusted. Thus, the weighted lp-norm regularization imposes the cosparsity assumption on the solution, although it solves an l2-norm convex optimization problem. Hence, the results demonstrated the higher performance of ewGAP in the reconstruction of noiseless signals or signals corrupted by Gaussian noise compared with the other cosparse methods considered.

As future work, the algorithm should be applied to real data that have a cosparse representation, that is, EEG signals.
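The Lorentzian (LL2) data-fitting cost discussed above can be written out directly; this is the standard form from the robust compressed sensing literature, with γ the scale parameter (set to 0.1 or 0.5 in the experiments).

```python
import numpy as np

def lorentzian_norm(u, gamma):
    """Lorentzian (LL2) cost: sum_i log(1 + u_i^2 / gamma^2).

    For |u_i| << gamma each term is approximately u_i^2 / gamma^2, an
    l2-like penalty; for |u_i| >> gamma it grows only logarithmically,
    so gross outliers are not heavily penalized.
    """
    u = np.asarray(u, dtype=float)
    return np.sum(np.log1p((u / gamma) ** 2))
```

Comparing a small residual with a gross outlier shows the two regimes: the small one is penalized quadratically, while the outlier's contribution grows far slower than under an l2 cost.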
TABLE 5 Average and SD of the reconstruction MSE as a function of the impulsive noise tail parameter (n = 256, m = 128, d = 512, l = 210). Columns are labeled by the tail parameter.

| Algorithm | | 0.5 | 0.75 | 1 | 1.25 | 1.50 | 1.75 | 2 |
|---|---|---|---|---|---|---|---|---|
| GAP | Mean | 7.90e−03 | 7.83e−03 | 7.79e−03 | 7.84e−03 | 7.74e−03 | 7.54e−03 | 7.66e−03 |
| | Std | 5.12e−04 | 5.06e−04 | 5.72e−04 | 6.12e−04 | 6.44e−04 | 5.19e−04 | 5.16e−04 |
| mGAP | Mean | 7.90e−03 | 7.83e−03 | 7.79e−03 | 7.84e−03 | 7.75e−03 | 7.56e−03 | 7.64e−03 |
| | Std | 5.07e−04 | 5.00e−04 | 5.55e−04 | 5.90e−04 | 6.02e−04 | 4.51e−04 | 5.24e−04 |
| ewGAP (γ = 0.1) | Mean | 7.83e−03 | 7.78e−03 | 7.72e−03 | 7.65e−03 | 7.66e−03 | 7.56e−03 | 7.63e−03 |
| | Std | 5.10e−04 | 4.16e−04 | 5.57e−04 | 4.79e−04 | 5.82e−04 | 4.48e−04 | 5.10e−04 |
| ewGAP (γ = 0.5) | Mean | 7.91e−03 | 7.87e−03 | 7.73e−03 | 7.81e−03 | 7.76e−03 | 7.53e−03 | 7.65e−03 |
| | Std | 5.87e−04 | 4.33e−04 | 5.28e−04 | 5.37e−04 | 6.17e−04 | 4.71e−04 | 5.40e−04 |
| CoIRLq | Mean | 7.90e−03 | 7.83e−03 | 7.79e−03 | 7.84e−03 | 7.75e−03 | 7.56e−03 | 7.64e−03 |
| | Std | 5.06e−04 | 4.96e−04 | 5.47e−04 | 5.90e−04 | 6.03e−04 | 4.48e−04 | 5.25e−04 |
| ACoSaMP | Mean | 7.93e−03 | 7.80e−03 | 7.75e−03 | 7.84e−03 | 7.75e−03 | 7.66e−03 | 7.66e−03 |
| | Std | 4.85e−04 | 5.14e−04 | 5.31e−04 | 5.54e−04 | 6.67e−04 | 5.30e−04 | 5.43e−04 |
| ASP | Mean | 7.87e−03 | 7.89e−03 | 7.78e−03 | 7.80e−03 | 7.79e−03 | 7.54e−03 | 7.63e−03 |
| | Std | 4.87e−04 | 5.33e−04 | 5.30e−04 | 6.27e−04 | 5.74e−04 | 5.40e−04 | 5.52e−04 |

ACKNOWLEDGMENTS
This study is related to the Ph.D. thesis No. 441 of Shahid Beheshti University of Medical Sciences, Tehran, Iran. We appreciate the "Student Research Committee" and "Research & Technology Chancellor" of Shahid Beheshti University of Medical Sciences for their financial support of this study.

CONFLICT OF INTEREST
All the authors confirm that there is no conflict of interest.

ORCID
Fahimeh Mohagheghian https://orcid.org/0000-0002-6653-7148
Nasser Samadzadehaghdam https://orcid.org/0000-0002-5027-3416

How to cite this article: Mohagheghian F, Deevband MR, Samadzadehaghdam N, Khajehpour H, Makkiabadi B. An enhanced weighted greedy analysis pursuit algorithm with application to EEG signal reconstruction. Int J Imaging Syst Technol. 2020;1-13. https://doi.org/10.1002/ima.22438