DOI: 10.1002/ima.22438

RESEARCH ARTICLE

1 Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut
2 Department of Medical Physics and Biomedical Engineering, School of Medicine, Shahid Beheshti University of Medical Sciences (SBMU), Tehran, Iran
3 Research Center for Biomedical Technology and Robotics (RCBTR), Institute of Advanced Medical Technologies (IAMT), Tehran University of Medical Sciences (TUMS), Tehran, Iran
4 Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences (TUMS), Tehran, Iran

Correspondence
Bahador Makkiabadi, Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences (TUMS), Tehran, Iran.
Email: b-makkiabadi@sina.tums.ac.ir

Funding information
Shahid Beheshti University of Medical Sciences

Abstract
In the past decade, compressed sensing (CS) has provided an efficient framework for signal compression and recovery as intermediate steps in signal processing. The well-known greedy analysis algorithm, called Greedy Analysis Pursuit (GAP), is capable of recovering signals from a restricted number of measurements. In this article, we propose an extension to GAP that solves a weighted optimization problem satisfying an inequality constraint based on the Lorentzian cost function, improving EEG signal reconstruction in the presence of heavy-tailed impulsive noise. Numerical results illustrate the effectiveness of the proposed algorithm, called enhanced weighted GAP (ewGAP), in reinforcing the efficiency of signal reconstruction, making it an appropriate candidate for compressed sensing of EEG signals. The suggested algorithm achieves promising reconstruction performance and robustness, outperforming other analysis-based approaches such as GAP, Analysis Subspace Pursuit (ASP), and Analysis Compressive Sampling Matching Pursuit (ACoSaMP).

KEYWORDS
compressed sensing, cosparse analysis model, EEG signal reconstruction, Greedy Analysis Pursuit, sparsity

Int J Imaging Syst Technol. 2020;1–13. wileyonlinelibrary.com/journal/ima © 2020 Wiley Periodicals LLC
2 MOHAGHEGHIAN ET AL.
recover the signal x from its compressive measurements y.3 The assumption of m < n implies that Equation (1) is an underdetermined inverse problem, and a typical approach to solving it is based on signal sparsity as a priori information.

1.1 | Sparse synthesis approach

The sparse synthesis model of a signal of interest x is presented by a sparse representation of the signal x, formulated as x = Dz, where z ∈ ℝ^d is assumed to be a k-sparse vector with respect to the (possibly) redundant (d > n) dictionary D ∈ ℝ^(n×d). A k-sparse vector z is a vector whose number of non-zero elements equals k, which is much smaller than the vector length (k = ‖z‖_0 ≪ d). Knowing the measured signal y and the measurement matrix M, in the sparse synthesis model the signal x is reconstructed by solving the following optimization problem (synthesis formulation):

The l1-relaxation of the above analysis prior problem, Equation (3), is assumed as the main approach to recover the signal x.

Unlike the synthesis model, if the signal x and the analysis operator Ω are specified for the cosparse analysis model, then the analysis representation of the signal x, that is, Ωx, can be computed entirely. Since there is no ambiguity about Ωx, which is uniquely specified by Ω and x, it has been argued that the uniqueness condition is provided inherently for the analysis prior model, and the question of uniqueness arises only in solving the inverse problem. Further, it has been previously reported that EEG signals do not have a sparse representation in transformed domains, that is, under sparsifying transforms applied to EEG signals.7 Thus, the cosparse analysis model has been proposed as the appropriate recovery method for EEG signals, yielding better reconstruction results than the sparse synthesis model.8-10

1.3 | Related works
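The synthesis model just described can be made concrete with a small numerical sketch (NumPy; the dimensions n, d, m, k here are illustrative, not taken from the paper): a k-sparse coefficient vector z synthesizes x = Dz, which is then observed through the measurement matrix M.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, m, k = 64, 128, 32, 4          # signal, dictionary, measurement, sparsity sizes
D = rng.standard_normal((n, d))      # redundant synthesis dictionary (d > n)
M = rng.standard_normal((m, n))      # measurement matrix (m < n)

# Build a k-sparse coefficient vector z and synthesize the signal x = Dz.
z = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
z[support] = rng.standard_normal(k)

x = D @ z                            # signal of interest
y = M @ x                            # compressive measurements

assert np.count_nonzero(z) == k      # ||z||_0 = k, much smaller than d
assert y.shape == (m,)               # underdetermined: m < n observations
```

Recovering z (and hence x) from y is then exactly the underdetermined problem the synthesis formulation addresses.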
In the context of cosparse signal recovery, Giryes et al31,32 proposed analysis versions of IHT and HTP, called analysis IHT (AIHT) and analysis HTP (AHTP). Both AIHT and AHTP have been accompanied by recovery guarantees analogous to the RIP-based guarantees that apply to the equivalent synthesis methods IHT and HTP. The other two algorithms, ACoSaMP and ASP, are the analysis versions of CoSaMP and Subspace Pursuit (SP).32,33 Liu et al34 developed a modified (reweighted) GAP algorithm to solve the weighted regularized l2-minimization problem. The weight matrix was iteratively constructed as a function of the inner product of the approximation of the original signal x and the analysis dictionary rows at the estimated cosupport locations. The algorithm iteratively determines the weights of the next iteration from the value of the current solution, forming an adaptive weight matrix that fills the gap between non-convex and convex optimization. Zhang et al proposed the CoIRLq algorithm as an iterative reweighted method to deal with nonconvex lp-norm optimization in a variational framework, which reconstructed both noisy and noiseless signals with weaker sufficient conditions and better performance compared to its convex counterpart and other methods such as GAP.35

As the data acquired by EEG data acquisition systems, as well as by other sensing operators, are corrupted with noise, all CS signal reconstruction methods have been developed to approximate signals from noisy measurements. However, commonly applied CS reconstruction algorithms are based on a Gaussian noise assumption and fail to recover signals contaminated with non-Gaussian impulsive noise. In recent years, several research efforts have been carried out to address the signal recovery problem assuming impulsive processes.36-42 Several approaches have been proposed based on a data fidelity criterion that replaces the l2-norm with a more robust objective function. A continuous mixed norm (CMN) was exploited in Reference 43 for robust sparse recovery instead of the lp-norm. In Reference 37, Carrillo et al proposed the Lorentzian Based Basis Pursuit method, based on l1 minimization with a Lorentzian norm constraint on the residual error, for reconstructing sparse signals in the impulsive environment. An extension to this approach was proposed by Arce38: an algorithm employing iterative weighted myriad filters based on a Lorentzian cost function with an l0-norm regularization term. In Reference 44, the authors suggested Huber iterative hard thresholding (HIHT), and in Reference 41, an iterative hard thresholding algorithm using a Lorentzian cost function for reconstructing sparse signals in the presence of impulsive noise.

In Reference 45, a review of robust sparse signal reconstruction strategies was presented. The authors considered signal reconstruction in CS when the measured signals are corrupted by outliers or impulsive noise. They showed that robust techniques such as l1-based coordinate descent (l1-CD),40 Lorentzian-based basis pursuit (LBP), Lorentzian-based iterative hard thresholding (LIHT),41 Lorentzian-based coordinate descent (L-CD), the robust lasso (R-Lasso) method, the Huber iterative hard thresholding (HIHT) method, and the l1-OMP method outperform other traditional CS methods when the compressive measurements have large deviations, and show comparable performance in light-tailed cases.

1.4 | Our contribution

The aim of this study is to develop an algorithm for EEG signal reconstruction in noisy (Gaussian and non-Gaussian) as well as noiseless environments. In this article, we investigate the signal reconstruction performance in the cosparse analysis framework through a weighted analysis optimization problem subject to a Lorentzian cost function. We propose an extension to the GAP algorithm based on Lorentzian-norm data fitting to enhance the recovery performance in a non-Gaussian noisy environment, with EEG signal reconstruction as the application. Further, we improve the efficiency of the reconstructed signals by iteratively adjusting the weights to impose the cosparsity assumption on the solution.

Thus, empirical evidence is provided to demonstrate the effectiveness of our algorithm in several numerical experiments for noiseless signals as well as under Gaussian and impulsive noise assumptions. The algorithm offers practical benefits, showing promising outcomes for cosparse signal recovery over a wide range of cosparsities and sampling ratios and with different noise levels and dictionary redundancies.

1.5 | Paper organization

The rest of this article is organized as follows. In section 2, we recall the GAP method and the Ω-RIP recovery guarantees for vectors with a sparse representation under the dictionary Ω. Further, a general overview of the Lorentzian Based Basis Pursuit technique is introduced. Then we present our proposed ewGAP algorithm as an enhanced version of the GAP algorithm. In section 3, we provide the numerical experiments for simulated EEG signals. In section 4, we conclude the derived results with a suggestion on the future of the work.
2.1 | Greedy Analysis Pursuit Algorithm (GAP)

Greedy Analysis Pursuit is a greedy algorithm inspired by the well-known greedy pursuit algorithm used in the synthesis model, that is, OMP.14 The strategy in GAP is to identify the locations of the zero-valued entries (or elements) of Ωx, specify an estimate of the cosupport Λ from them, and then update the signal x using the estimated cosupport. Unlike the synthesis model, GAP detects the non-zero elements in order to determine the locations of the zero-valued elements. The cosupport is initialized with the whole set of size d, and over several iterations its size is reduced to l (or less, d − m), as described in the following.

Let x be the sparse signal measured by y = Mx, where M is an m × n (m < n) measurement matrix whose rows are incoherent with the signal x. In the first step of GAP, an initial approximation of the signal x is derived from the following optimization problem:

$$\hat{x}^0 = \arg\min_x \|\Omega x\|_2^2 \quad \text{subject to} \quad y = Mx \qquad (4)$$

The Lorentzian basis pursuit (BP) method37 is a robust algorithm for reconstructing sparse signals whose measurements are corrupted by impulsive noise, and it outperforms state-of-the-art recovery algorithms in impulsive environments.41 Since large deviations are not penalized by the Lorentzian norm as heavily as by the l1 and l2 norms, it is a more robust error metric in the presence of impulsive noise.37,46

To estimate x from y, the following non-linear optimization problem is used in Lorentzian BP:

$$\min_x \|x\|_1 \quad \text{subject to} \quad \|y - \Phi x\|_{LL_2,\gamma} < \epsilon \qquad (6)$$

where γ is the scale parameter and ε is the measurement (sampling) noise level.

The Lorentzian norm, or LL2 norm, is defined as:

$$\|u\|_{LL_2,\gamma} = \sum_{i=1}^{m} \log\left(1 + \gamma^{-2} u_i^2\right), \quad u \in \mathbb{R}^m,\ \gamma > 0 \qquad (7)$$
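Equation (7) is simple to compute, and a short sketch (NumPy, with made-up residual values) shows why it is a robust error metric: a single impulsive outlier inflates the squared l2 penalty by orders of magnitude, while the Lorentzian penalty grows only logarithmically.

```python
import numpy as np

def lorentzian_norm(u, gamma):
    """LL2 norm of Equation (7): sum_i log(1 + u_i^2 / gamma^2)."""
    return np.sum(np.log1p((u / gamma) ** 2))

# Two residual vectors: clean, and one corrupted by a heavy-tailed outlier.
clean = np.array([0.1, -0.2, 0.15, 0.05])
spiky = np.array([0.1, -0.2, 0.15, 100.0])

gamma = 0.5
# The l2 penalty explodes with the outlier; the Lorentzian penalty barely grows.
ratio_l2 = np.sum(spiky**2) / np.sum(clean**2)
ratio_ll2 = lorentzian_norm(spiky, gamma) / lorentzian_norm(clean, gamma)
assert ratio_l2 > 1e4 and ratio_ll2 < 100
```

This bounded influence of large residuals is what makes the constraint in Equation (6) tolerant of impulsive measurement noise.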
It is highly reasonable to solve the lp-norm (0 ≤ p ≤ 1) prior problem instead of the l2-norm one to promote the cosparsity of the solution. The minimization yields a unique solution whenever the corresponding unweighted counterpart has a unique solution, though the weighted and unweighted optimization problems have different solutions. Depending on the selected weights, the weighted optimization might improve the signal reconstruction and outperform simple implementations of lp-norm minimization. Candes et al5 have suggested that, as a general rule of thumb, the weights should be inversely related to the true signal magnitudes.

In the proposed iterative optimization problem, solving the unweighted equivalent of Equation (8) provides an inaccurate signal estimate at the first iteration, through which the Ωx coefficients with the largest absolute values are identified. The rows of Ω corresponding to the identified largest coefficients do not belong to the cosupport; thus, these rows should be removed from the cosupport. In the next iterations, the remaining small coefficients are identified and removed by applying the weights, down-weighting the large coefficients.

Since the weights are modified in each iteration, the solution of the weighted optimization problem yields a more reasonable approximation of the signal cosupport after each iteration. Consequently, the solution can be derived by solving an iterative weighted minimization problem using weights that are strongly influenced by the estimated cosupport. Without knowing the original signal and the cosupport, constructing the exact weight matrix is essentially impossible, but approximating it at each step gradually corrects the solution of the main problem. Details of the enhanced weighted GAP algorithm are described below.

$$\hat{x}^0 = \arg\min_x \|W \Omega_{\hat{\Lambda}^0} x\|_2^2 \quad \text{subject to} \quad \|y - Mx\|_2^2 < \epsilon \qquad (10)$$

3. Iterate k → k + 1 and accomplish the following steps:

• Compute α = Ω x̂^(k−1)
• Find the largest elements: Γ_k = {i : |α_i| = max_j |α_j|}
• Update the cosupport: Λ̂_k = Λ̂_(k−1) ∖ Γ_k
• Update the weight matrix W_k
• Update the solution:

$$\hat{x}^k = \arg\min_x \|W_k \Omega_{\hat{\Lambda}_k} x\|_2^2 \quad \text{subject to} \quad \|y - Mx\|_{LL_2,\gamma} < \epsilon \qquad (11)$$

• Stopping criterion: if k ≥ l, or ‖x̂^k − x̂^(k−1)‖_2^2 ≤ T, or ‖Mx̂^k − y‖_∞ ≤ ε_1, stop.

4. Output: the identified solution x̂^k.
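The iteration above can be sketched in code. The following is a simplified NumPy illustration, not the authors' MATLAB implementation: the constrained solves of Equations (10) and (11) are replaced by a plain regularized least-squares stand-in (so the Lorentzian constraint is not enforced here), one cosupport row is removed per iteration, the weight rule |Ω_Λ x̂|^((p−2)/2) follows the choice reported in the experiments section, and the regularization parameter `lam` and the 1e-8 stabilizer are our assumptions.

```python
import numpy as np

def ewgap_sketch(y, M, Omega, l, p=0.5, lam=0.1):
    """Simplified enhanced-weighted-GAP loop (illustrative stand-in only)."""
    d, n = Omega.shape
    cosupport = np.arange(d)                 # initialize with all d rows of Omega
    # Stand-in for Equation (10): unweighted regularized least-squares solve.
    A0 = np.vstack([M, lam * Omega])
    b0 = np.concatenate([y, np.zeros(d)])
    x = np.linalg.lstsq(A0, b0, rcond=None)[0]
    for _ in range(d - l):                   # shrink the cosupport down to size l
        alpha = Omega @ x                    # analysis coefficients of the estimate
        # Remove the row with the largest coefficient: least likely in the cosupport.
        biggest = cosupport[np.argmax(np.abs(alpha[cosupport]))]
        cosupport = cosupport[cosupport != biggest]
        # Down-weight large coefficients: w_i = |(Omega x)_i|^((p-2)/2), stabilized.
        w = (np.abs(alpha[cosupport]) + 1e-8) ** ((p - 2) / 2)
        # Stand-in for Equation (11): weighted regularized least-squares solve.
        A = np.vstack([M, lam * (w[:, None] * Omega[cosupport])])
        b = np.concatenate([y, np.zeros(len(cosupport))])
        x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x, cosupport
```

Because p − 2 < 0, small analysis coefficients receive very large weights, which is exactly the mechanism the text describes for pushing the coefficients on the estimated cosupport toward zero; a Lorentzian-constrained solver would replace the final `lstsq` call in a faithful implementation.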
FIGURE 1 A, Typical simulated signal x with n = 256; B, Sparse analysis representation vector Ωx with l = 240
reconstruction results of our algorithm with the well-known cosparse algorithm GAP as well as other cosparse algorithms such as ASP, ACoSaMP, modified GAP, and CoIRLq, using synthetic data.

We run 50 realizations in each experiment to reconstruct the signal x from the measurements y using each algorithm, and quantify the recovery results via the normalized MSE (mean square error) calculated between the original and reconstructed signals, averaged over all realizations. The experiments are performed using the MATLAB optimization toolbox,47 under a Core i7 64-bit processor and a Windows 10 environment. To simulate noisy data, two different types of noise are applied to the signals, namely impulsive noise and Gaussian noise, as additive measurement noise.

The parameter settings of ewGAP are as follows. The d × n analysis operator Ω was specified with i.i.d. Gaussian entries and approximately unit-norm rows. The cosparsity l is generated by randomly choosing l rows of the analysis operator Ω as the index set Λ, namely the cosupport. In order to form the synthetic cosparse EEG signal x, a random signal of length n with Gaussian i.i.d. entries is projected onto the orthogonal complement of the subspace generated by the rows of Ω indexed by Λ.14 Figure 1 shows a typical noiseless synthetic EEG signal x of length n = 256 and its sparse analysis (ie, cosparse) representation vector Ωx with l = 240.

By selecting a sampling ratio (ie, m/n), the m × n measurement operator M yields the synthetic EEG compressive measurements y = Mx. As has been previously applied for EEG compression in the CS framework, we select i.i.d. Gaussian entries for the measurement matrix M to satisfy the RIP with high probability.1 The weighting function is selected as $(\Omega_{\hat{\Lambda}^{k-1}} \hat{x}^{k-1})^{\frac{p-2}{2}}$.

Based on different choices for the power of the weighting function (p) and the scale parameter of the Lorentzian norm (γ), we may obtain different versions of our algorithm. In the following experiments, we set p = 0.5 (empirically chosen) and select two values for the scale parameter, γ = 0.1 and γ = 0.5. In all experiments, the values of the parameters ε_1 and T were empirically selected as 10−2 and 10−5, respectively.

In the context of biomedical engineering, one kind of model error considered for biological signal analysis is due to the combination of noise and the original signals. Thus, we conduct a sequence of experiments to evaluate the performance of several algorithms for simulated EEG signal reconstruction. The experiments and results are categorized into the following sub-sections: noiseless data, data corrupted by Gaussian noise, and data corrupted by impulsive noise.

3.1 | Noiseless data

3.1.1 | Recovery performance vs cosparsity

In this section, we examine the recovery error for signals without additive noise to evaluate the efficiency of the ewGAP algorithm in the noiseless environment. Figure 2 presents the average MSE calculated between the original and reconstructed signals over all realizations for different signal cosparsities. The input parameters are set as m = 128, n = 256 for the m × n matrix M,
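The data-generation procedure just described can be sketched as follows (NumPy; n, d, l, and m are the values quoted in the text, and the variable names and the `nmse` helper are ours): draw Ω with i.i.d. Gaussian, unit-norm rows, pick l rows as the cosupport Λ, and project a Gaussian random signal onto the orthogonal complement of the span of those rows.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, l, m = 256, 512, 240, 128              # dimensions quoted in the text

# Analysis operator: i.i.d. Gaussian entries, normalized to unit-norm rows.
Omega = rng.standard_normal((d, n))
Omega /= np.linalg.norm(Omega, axis=1, keepdims=True)

# Cosupport Lambda: l randomly chosen rows of Omega.
Lambda = rng.choice(d, size=l, replace=False)
O_l = Omega[Lambda]

# Project a Gaussian random signal onto the orthogonal complement of span(rows of O_l).
P = np.eye(n) - np.linalg.pinv(O_l) @ O_l    # projector onto the null space of O_l
x = P @ rng.standard_normal(n)               # synthetic cosparse "EEG" signal

M = rng.standard_normal((m, n))              # i.i.d. Gaussian measurement matrix
y = M @ x                                    # compressive measurements

assert np.allclose(O_l @ x, 0, atol=1e-8)    # analysis coefficients vanish on the cosupport

def nmse(x_hat, x_ref):
    """Normalized MSE between a reconstruction and the original signal."""
    return np.sum((x_hat - x_ref) ** 2) / np.sum(x_ref ** 2)
```

Because l < n, the null space of O_l has dimension n − l = 16, so the projected signal is almost surely non-zero while its analysis representation Ωx has (at least) l zero entries.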
TABLE 1 Average and SD of the reconstruction MSE as a function of the cosparsity (n = 256, d = 512, m = 128, SNR = 20 dB)
Cosparsity
150 160 170 180 190 200 210 220 230 240 250
GAP Mean 2.10e−03 2.05e−03 2.07e−03 2.00e−03 1.96e−03 1.92e−03 1.93e−03 1.85e−03 1.83e−03 1.71e−03 1.75e−03
Std 2.50e−04 2.43e−04 2.49e−04 2.41e−04 2.58e−04 2.44e−04 2.93e−04 2.30e−04 2.33e−04 2.28e−04 2.47e−04
mGAP Mean 2.10e−03 2.05e−03 2.07e−03 2.00e−03 1.96e−03 1.92e−03 1.93e−03 1.85e−03 1.84e−03 1.72e−03 1.75e−03
Std 2.52e−04 2.42e−04 2.51e−04 2.40e−04 2.53e−04 2.44e−04 2.93e−04 2.33e−04 2.36e−04 2.29e−04 2.48e−04
ewGAP Mean 2.02e−03 1.93e−03 1.95e−03 1.88e−03 1.77e−03 1.72e−03 1.71e−03 1.58e−03 1.53e−03 1.44e−03 1.41e−03
γ = 0.1 Std 2.88e−04 2.73e−04 2.52e−04 2.48e−04 2.75e−04 2.63e−04 3.29e−04 2.18e−04 2.56e−04 2.40e−04 2.19e−04
ewGAP Mean 2.07e−03 1.97e−03 1.99e−03 1.92e−03 1.80e−03 1.74e−03 1.71e−03 1.54e−03 1.52e−03 1.40e−03 1.38e−03
γ = 0.5 Std 3.07e−04 2.84e−04 2.78e−04 2.77e−04 2.91e−04 2.68e−04 3.42e−04 2.25e−04 2.58e−04 2.23e−04 2.42e−04
CoIRLq Mean 2.12e−03 2.06e−03 2.08e−03 2.02e−03 1.99e−03 1.95e−03 1.97e−03 1.89e−03 1.88e−03 1.75e−03 1.81e−03
Std 2.49e−04 2.41e−04 2.48e−04 2.33e−04 2.61e−04 2.49e−04 2.79e−04 2.31e−04 2.35e−04 2.50e−04 2.51e−04
ACoSaMP Mean 2.40e−03 2.40e−03 2.46e−03 2.50e−03 2.50e−03 2.48e−03 2.62e−03 2.74e−03 3.42e−03 5.26e−03 6.09e−03
Std 2.59e−04 2.21e−04 3.10e−04 2.78e−04 2.93e−04 3.02e−04 3.69e−04 4.25e−04 6.64e−04 1.04e−03 8.75e−04
ASP Mean 2.74e−03 2.69e−03 2.66e−03 2.59e−03 2.66e−03 2.58e−03 2.64e−03 2.73e−03 3.13e−03 4.41e−03 5.26e−03
Std 3.41e−04 3.01e−04 3.98e−04 3.15e−04 4.00e−04 3.43e−04 4.27e−04 3.84e−04 6.18e−04 3.41e−04 3.01e−04
TABLE 2 Average and SD of the reconstruction MSE as a function of the measurements number (n = 256, d = 512, l = 200,
SNR = 20 dB)
Measurements m (sampling ratio)
26 (0.1) 51 (0.2) 77 (0.3) 102 (0.4) 128 (0.5) 154 (0.6) 179 (0.7) 205 (0.8) 230 (0.9)
GAP Mean 5.10e−03 3.87e−03 3.05e−03 2.52e−03 1.99e−03 1.51e−03 1.15e−03 7.87e−04 6.21e−04
Std 3.38e−04 4.16e−04 3.44e−04 2.35e−04 2.35e−04 1.81e−04 1.99e−04 1.35e−04 1.43e−04
mGAP Mean 5.13e−03 3.88e−03 3.05e−03 2.52e−03 1.99e−03 1.51e−03 1.14e−03 7.57e−04 5.25e−04
Std 3.30e−04 4.15e−04 3.44e−04 2.39e−04 2.38e−04 1.84e−04 2.00e−04 1.29e−04 9.82e−05
ewGAP Mean 5.00e−03 3.79e−03 2.96e−03 2.36e−03 1.79e−03 1.31e−03 9.47e−04 6.03e−04 4.56e−04
γ = 0.1 Std 4.41e−04 5.09e−04 4.25e−04 2.86e−04 2.55e−04 2.09e−04 1.99e−04 1.29e−04 1.17e−04
ewGAP Mean 5.04e−03 3.83e−03 2.99e−03 2.39e−03 1.76e−03 1.29e−03 9.11e−04 5.58e−04 3.39e−04
γ = 0.5 Std 4.75e−04 5.31e−04 4.38e−04 2.92e−04 2.51e−04 1.96e−04 1.96e−04 1.35e−04 8.39e−05
CoIRLq Mean 5.20e−03 3.97e−03 3.10e−03 2.56e−03 2.02e−03 1.54e−03 1.17e−03 6.83e−04 4.21e−04
Std 3.09e−04 3.96e−04 3.28e−04 2.53e−04 2.31e−04 1.89e−04 2.01e−04 1.45e−04 1.01e−04
ACoSaMP Mean 5.44e−03 4.41e−03 3.69e−03 3.17e−03 2.56e−03 2.35e−03 4.50e−03 9.37e−04 3.77e−04
Std 3.40e−04 4.23e−04 3.80e−04 2.98e−04 3.68e−04 4.65e−04 1.11e−03 3.90e−04 1.51e−04
ASP Mean 5.59e−03 5.39e−03 4.17e−03 3.38e−03 2.68e−03 2.16e−03 4.23e−03 8.82e−04 3.85e−04
Std 3.78e−04 6.95e−04 5.21e−04 4.98e−04 4.00e−04 3.84e−04 8.58e−04 3.42e−04 1.46e−04
TABLE 3 Average and SD of the reconstruction MSE as a function of the dictionary overcompleteness (n = 256, m = 128, l = 200,
SNR = 20 dB)
Dictionary overcompleteness
TABLE 4 Average and SD of the reconstruction MSE as a function of the input SNR (n = 256, m = 128, l = 200, d = 512)
SNR
10 15 20 25 30 35
GAP Mean 2.36e−03 2.08e−03 1.95e−03 1.95e−03 1.87e−03 1.86e−03
Std 1.91e−04 2.37e−04 2.54e−04 2.34e−04 2.76e−04 1.91e−04
mGAP Mean 2.36e−03 2.08e−03 1.95e−03 1.95e−03 1.87e−03 1.86e−03
Std 1.87e−04 2.37e−04 2.54e−04 2.35e−04 2.77e−04 1.94e−04
ewGAP Mean 2.20e−03 1.89e−03 1.76e−03 1.76e−03 1.69e−03 1.66e−03
γ = 0.1 Std 2.05e−04 2.65e−04 2.54e−04 2.44e−04 2.84e−04 2.20e−04
ewGAP Mean 2.14e−03 1.86e−03 1.76e−03 1.78e−03 1.73e−03 1.67e−03
γ = 0.5 Std 2.07e−04 2.80e−04 2.70e−04 2.68e−04 2.96e−04 2.28e−04
CoIRLq Mean 2.38e−03 2.12e−03 1.97e−03 1.97e−03 1.90e−03 1.89e−03
Std 1.84e−04 2.33e−04 2.60e−04 2.26e−04 2.67e−04 1.91e−04
ACoSaMP Mean 3.01e−03 2.67e−03 2.52e−03 2.54e−03 2.43e−03 2.38e−03
Std 2.75e−04 3.40e−04 2.62e−04 2.77e−04 3.01e−04 3.03e−04
ASP Mean 3.07e−03 2.77e−03 2.56e−03 2.64e−03 2.53e−03 2.56e−03
Std 3.39e−04 3.17e−04 3.36e−04 3.16e−04 3.42e−04 3.32e−04
TABLE 5 Average and SD of the reconstruction MSE as a function of the impulsive noise tail parameter (n = 256, m = 128,
d = 512, l = 210)
Tail parameter
15. Blumensath T, Davies ME. Iterative hard thresholding for compressed sensing. Appl Comput Harmon Anal. 2009;27(3):265-274.
16. Needell D, Saab R, Woolf T. Weighted l1-minimization for sparse recovery under arbitrary prior information. Inf Infer: J IMA. 2017;iaw023.
17. Blumensath T, Davies M. Iterative hard thresholding for compressive sensing. Appl Comput Harmon Anal. 2009;27(3):265-274.
18. Jacques L, Laska JN, Boufounos PT, Baraniuk RG. Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. IEEE Trans Inform Theory. 2013;59(4):2082-2102.
19. Tropp J, Gilbert AC. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inform Theory. 2007;53(12):4655-4666.
20. Needell D, Tropp JA. CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl Comput Harmon Anal. 2009;26(3):301-321.
21. Carrillo RE, Barner KE. Iteratively re-weighted least squares for sparse signal reconstruction from noisy measurements. In: 43rd Annual Conference on Information Sciences and Systems (CISS 2009); 2009. IEEE.
22. Carrillo RE, Polania LF, Barner KE. Iterative hard thresholding for compressed sensing with partially known support. In: 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2011. IEEE.
23. Carrillo RE, Polania LF, Barner KE. Iterative algorithms for compressed sensing with partially known support. In: 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2010. IEEE.
24. North P, Needell D. One-bit compressive sensing with partial support. In: 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP); 2015. IEEE.
25. Gorodnitsky IF, Rao BD. Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm. IEEE Trans Signal Process. 1997;45(3):600-616.
26. Schlossmacher E. An iterative technique for absolute deviations curve fitting. J Am Stat Assoc. 1973;68(344):857-859.
27. Holland PW, Welsch RE. Robust regression using iteratively reweighted least-squares. Commun Stat-Theory Methods. 1977;6(9):813-827.
28. Gramfort A. Mapping, timing and tracking cortical activations with MEG and EEG: methods and application to human vision. Ecole nationale supérieure des télécommunications-ENST. 2009;1(1):263.
29. Gorodnitsky IF, George JS, Rao BD. Neuromagnetic source imaging with FOCUSS: a recursive weighted minimum norm algorithm. Electroencephalogr Clin Neurophysiol. 1995;95(4):231-251.
30. Rao BD, Engan K, Cotter SF, Palmer J, Kreutz-Delgado K. Subset selection in noise based on diversity measure minimization. IEEE Trans Signal Process. 2003;51(3):760-770.
31. Giryes R, et al. Iterative cosparse projection algorithms for the recovery of cosparse vectors. Paper presented at: 19th European Signal Processing Conference; August 29-September 2; Barcelona, Spain.
32. Giryes R, Nam S, Elad M, Gribonval R, Davies ME. Greedy-like algorithms for the cosparse analysis model. Linear Algebra Appl. 2014;441:22-60.
33. Giryes R, Elad M. CoSaMP and SP for the cosparse analysis model. Paper presented at: Proceedings of the 20th European Signal Processing Conference (EUSIPCO); August 27-31, 2012; Bucharest, Romania.
34. Liu Z, Li J, Li W, Dai P. A modified greedy analysis pursuit algorithm for the cosparse analysis model. Numer Algorith. 2017;74(3):867-887.
35. Zhang S, Qian H, Zhang Z. A nonconvex approach for structured sparse learning. arXiv preprint arXiv:1503.02164; 2015.
36. Laska JN, Davenport MA, Baraniuk RG. Exact signal recovery from sparsely corrupted measurements through the pursuit of justice. In: 2009 Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers; 2009. IEEE.
37. Carrillo RE, Barner KE, Aysal TC. Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise. IEEE J Sel Top Signal Process. 2010;4(2):392-408.
38. Arce GR, et al. Reconstruction of sparse signals from ℓ1 dimensionality-reduced Cauchy random-projections. In: 2010 IEEE International Conference on Acoustics, Speech and Signal Processing; 2010. IEEE.
39. Studer C, et al. Recovery of sparsely corrupted signals. IEEE Trans Inform Theory. 2011;58(5):3115-3130.
40. Paredes JL, Arce GR. Compressive sensing signal reconstruction by weighted median regression estimates. IEEE Trans Signal Process. 2011;59(6):2585-2601.
41. Carrillo RE, Barner KE. Lorentzian iterative hard thresholding: robust compressed sensing with prior information. IEEE Trans Signal Process. 2013;61(19):4822-4833.
42. Zou X, Feng L, Sun H. Robust compressive sensing of multichannel EEG signals in the presence of impulsive noise. Inform Sci. 2018;429:120-129.
43. Javaheri A, et al. Robust sparse recovery in impulsive noise via continuous mixed norm. IEEE Sig Process Lett. 2018;25(8):1146-1150.
44. Ollila E, Kim H-J, Koivunen V. Robust iterative hard thresholding for compressed sensing. In: 2014 6th International Symposium on Communications, Control and Signal Processing (ISCCSP); 2014. IEEE.
45. Carrillo RE, et al. Robust compressive sensing of sparse signals: a review. EURASIP J Adv Sig Process. 2016;2016(1):108.
46. Carrillo RE, Aysal TC, Barner KE. A generalized Cauchy distribution framework for problems requiring robust behavior. EURASIP J Adv Sig Process. 2010;2010(1):312989.
47. MATLAB Optimization Toolbox. Natick, MA: The MathWorks Inc; 2002.

How to cite this article: Mohagheghian F, Deevband MR, Samadzadehaghdam N, Khajehpour H, Makkiabadi B. An enhanced weighted greedy analysis pursuit algorithm with application to EEG signal reconstruction. Int J Imaging Syst Technol. 2020;1–13. https://doi.org/10.1002/ima.22438