## Are you sure?

This action might not be possible to undo. Are you sure you want to continue?

**Design and Analysis of Compressive Sensing
**

Radar Detectors

Laura Anitori,

∗

Student Member, IEEE, Arian Maleki, Member, IEEE, Matern Otten,

Richard Baraniuk, Fellow, IEEE, and Peter Hoogeboom

Abstract

We consider the problem of target detection from a set of Compressive Sensing (CS) radar measurements

corrupted by additive white Gaussian noise. We propose two novel architectures and compare their

performance by means of Receiver Operating Characteristic (ROC) curves. Using asymptotic arguments

and the Complex Approximate Message Passing (CAMP) algorithm, we characterize the statistics of the

1

-norm reconstruction error and derive closed form expressions for both the detection and false alarm

probabilities of both schemes. Of the two architectures, we demonstrate that the best one consists of

a reconstruction stage based on CAMP followed by a detector. This architecture, which outperforms

the

1

-based detector in the ideal case of known background noise, can also be made fully adaptive by

combining it with a conventional Constant False Alarm Rate (CFAR) processor. Using the state evolution

framework of CAMP, we also derive Signal to Noise Ratio (SNR) maps that, together with the ROC

curves, can be used to design a CS-based CFAR radar detector. Our theoretical ﬁndings are conﬁrmed

by means of both Monte Carlo simulations and experimental results.

Index Terms

Radar, Compressive Sensing, Detection Probability, False Alarm Probability, Constant False Alarm

Rate (CFAR), Complex Approximate Message Passing (CAMP).

EDICS: SAM-RADR, SSP-DETC.

L. Anitori, M. Otten, and P. Hoogeboom are with TNO, Oude Waalsdorperweg 63, 2597 AK, The Hague, The Netherlands.

E-mail: {laura.anitori, matern.otten, peter.hoogeboom}@tno.nl.

L. Anitori, and P. Hoogeboom are also with the Technical University of Delft, Stevinweg 1, 2628 CN Delft, The Netherlands.

E-mail: {l.anitori, p.hoogeboom}@tudelft.nl.

A. Maleki, and R. Baraniuk are with the Dept. of Electrical and Computer Engineering, Rice University, Houston, 77005 TX,

USA. E-mail: {arian.maleki, richb}@rice.edu.

Part of the material presented in this paper in Section VII was presented at the 2012 IEEE Radar Conference.

March 8, 2012 DRAFT

2

I. INTRODUCTION

High resolution radar detection and classiﬁcation of targets requires transmission of wide bandwidth

during a short observation time. Wide instantaneous bandwidth necessitates wideband receivers and fast

Analog to Digital Converters (ADC) that drive up the cost of a system. In applications using interleaving

of radar modes in time or space (antenna aperture), multi-function operation often leads to conﬂicting

requirements on sampling rates in both the time and space domains. Measurement time may also be a

constraint in real-time applications.

Recently, Compressive Sensing (CS) has been proposed to alleviate these problems [1]–[12]. CS

exploits the sparsity or compressibility of a signal to reduce both the sampling rate (while keeping

the resolution ﬁxed) and the amount of data generated. In many radar applications, the prerequisite on

the signal sparsity is often met, as the number of targets is typically much smaller than the number of

resolution cells in the illuminated area or volume.

Classical radar architectures (without CS) use well-established signal processing algorithms and detec-

tion schemes, such as Matched Filtering (MF) and Constant False Alarm Rate (CFAR) processors. CS

instead relies on a nonlinear reconstruction scheme such as

1

-minimization for reconstructing the target

scene. Most of the existing CS recovery algorithms have a threshold or a regularization parameter, that

affects the quality of the reconstruction result; for the best performance it is essential to optimally tune

this parameter [13]. A similar problem is encountered in classical radar detectors, where the detection

threshold adapts to the environment using a CFAR scheme. These detectors ﬁrst estimate the background

noise and clutter surrounding the Cell Under Test (CUT) and then calculate the threshold to preserve a

constant false alarm rate even in changing environments. Several CFAR schemes have been designed for

different types of noise, clutter, and target environments [14], [15].

The design of CFAR-like schemes for CS radar systems has not been addressed so far in the literature,

mostly due to the unknown relations between the false alarm probability/noise statistics and the parameters

of the recovery algorithm. Moreover, the main focus of CS has been on the reconstruction Mean

Square Error (MSE), while in radar, the primary performance parameters are the detection and false

alarm probabilities. Furthermore, most of the theoretical results provide only sufﬁcient conditions with

unknown/large constants that are not useful for applications [16]–[19]. These issues have been recently

addressed in a series of papers [20]–[23]. In these papers a novel CS recovery algorithm has been

proposed, called Complex Approximate Message Passing (CAMP). The statistical properties of CAMP

enable accurate calculations of different performance criteria such as the false alarm and detection

March 8, 2012 DRAFT

3

probabilities.

In this paper, we use the features of the CAMP algorithm to design a fully adaptive CS-CFAR radar

detector. Toward this goal we address the following questions. How can the detection probability be

optimized against false alarm rate? How can the false alarm rate be controlled adaptively against unknown

noise and clutter? How can a CS-radar be designed; in particular, what amount of undersampling is

acceptable, and at what cost in terms of power?

To this end, in Section II we begin with a summary of the main features of the CAMP algorithm,

originally presented in [23] for the CS recovery of complex signals and previously in [20], [21], [24]

for the real signal case. In Section III we propose two novel CS radar detection architectures based on

CAMP. We predict the performance of these two detectors theoretically by characterizing the statistical

properties of the recovered signal. Although the theoretical results on CAMP assume sensing matrices with

independent identically distributed (i.i.d.) random entries, both our simulations and empirical results show

that the general trend of our ﬁndings also applies to randomly subsampled (or partial) Fourier sensing

matrices. Such matrices are particularly important for CS-radar range compression, where subsampling

in frequency is a power-efﬁcient way of applying CS [9], as opposed to time undersampling.

From the analytical, simulation, and experimental results presented respectively in Sections III, VI,

and VII, it is clear that the best architecture consists of a reconstruction stage based on CAMP followed

by a detector. For this architecture, in Section IV we propose an adaptive version of CAMP followed

by a CFAR processor, resulting in a fully adaptive detector. In Section VIII we derive Signal-to-Noise

Ratio (SNR) plots in which the reconstruction SNR, which is input to the detection stage, is plotted

against the undersampling factor (δ) and the relative signal sparsity (ρ). Given a maximum number of

expected targets in the scene, such graphs can be used to evaluate how transmitted power can be traded

for undersampling and thus serve as a guide for the CS radar designer. In Section IX we draw some

conclusions from our theory and experiments.

II. COMPLEX APPROXIMATE MESSAGE PASSING (CAMP)

A. Heuristic explanation of the CAMP algorithm

Consider the problem of recovering a sparse vector x

o

∈ C

N

from a noisy undersampled set of linear

measurements y ∈ C

n

y = Ax

o

+n, (1)

where A ∈ C

n×N

is called the sensing matrix and n is i.i.d. complex Gaussian noise with variance σ

2

.

In radar systems, y corresponds to the sampled received signal. Each non-zero element of the vector x

o

March 8, 2012 DRAFT

4

corresponds to the (complex) Radar Cross Section (RCS) of a target and may also include propagation

and other factors normally associated with the radar equation [25]. In the remainder of this paper, [x

o,i

[

2

represents the received power from a target at position i. Hence, x

o,i

is zero except at the target locations.

If there are k ¸N targets, then x

o

is sparse.

It has been proved that, under certain conditions on the measurement matrix and the sparsity of x

o

,

it is possible to recover x

o

accurately [16], [26]. One of the most successful recovery algorithms is

1

-regularized least squares, also known as the LASSO or Basis Pursuit Denoising (BPDN) [27]–[29],

given by

´ x(λ) = arg min

x

1

2

|y −Ax|

2

2

+ λ|x|

1

, (2)

where |x|

1

N

i=1

_

(x

R

i

)

2

+ (x

I

i

)

2

, with x

R

i

and x

I

i

denoting the real and imaginary parts of x

i

,

respectively. λ is called the regularization parameter. The cost function in (2) is convex and can be solved

by standard techniques such as interior point or homotopy methods [30], [31]. However, these approaches

are computationally expensive, and therefore researchers have considered several iterative algorithms

with inexpensive per-iteration computations. For more information on these algorithms, see [23] and the

references therein. These methods rely on the fact that the optimization problem arg min

x

1

2

|u −x|

2

2

+

λ|x|

1

has a closed form solution η(u; λ) ([u[ − λ)e

ju

, where η(; λ) is called the complex soft

thresholding function.

In this paper, we focus on a speciﬁc algorithm called Complex Approximate Message Passing (CAMP)

[23]. This algorithm has several appealing properties for radar applications that will become clear as we

proceed. The CAMP algorithm is given in Algorithm I, where ¸¸ denotes the average of a vector, η

I

and η

R

are the imaginary and real parts of the complex soft thresholding function,

∂η

R

∂xR

is the partial

derivative of η

R

with respect to the real part of the input,

∂η

I

∂xI

is the partial derivative of η

I

with respect to

the imaginary part of the input, 1 is the indicator function, and maxiter is the (user speciﬁed) maximum

number of iterations. δ =

n

N

is called the compression factor, i.e., the number of CS measurements

divided by the number of samples to be recovered. We ﬁrst explain each component of CAMP.

(i) ´ x

t

is an estimate of x

o

at iteration t. If the parameter τ is tuned properly, ´ x

t

→ ´ x(λ) as t → ∞.

The tuning of τ in terms of λ is given later in (5) and will be explained in detail in Section II-B.

(ii) ¯ x

t

is a non-sparse, noisy estimate of x

o

. Deﬁne the “noise” vector w

t

= ¯ x

t

− x

o

at iteration t of

CAMP. The histogram of w

t

, which is shown in Figure 1, suggests that the empirical distribution

of w

t

is “close” to a zero mean Gaussian probability density function. We will more rigorously

discuss the Gaussianity of w

t

in Section II-B.

March 8, 2012 DRAFT

5

Algorithm I: Ideal CAMP Algorithm

Input: y, A, τ, xo

Initialization x

0

= 0, z

0

= y

for t = 1 : maxiter

x

t

= A

†

z

t−1

+ x

t−1

σt = std( x

t

−xo)

z

t

= y −A x

t−1

+z

t−1 1

2δ

∂η

R

∂x

R

( x

t

; τσt) +

∂η

I

∂x

I

( x

t

; τσt)

x

t

= η( x

t

; τσt)

end

Output: x, x, σ∗

−0.5 0 0.5

0

1

2

3

Data

D

e

n

s

i

t

y

Iteration 2

−0.5 0 0.5

0

2

4

Data

Iteration 5

−0.5 0 0.5

0

2

4

6

8

Data

Iteration 10

(a) Histogram of the real part of w

t

.

−0.5 0 0.5

0

1

2

3

Data

D

e

n

s

i

t

y

Iteration 2

−0.5 0 0.5

0

2

4

Data

Iteration 5

−0.5 0 0.5

0

2

4

6

8

Data

Iteration 10

(b) Histogram of the imaginary part of w

t

.

Fig. 1. Histograms of the real and imaginary parts of the CAMP residual noise signal w

t

at different iterations of Ideal CAMP.

A Gaussian probability density function (pdf) is ﬁtted to the histograms (red line). In these plots, N = 4000, δ = 0.6, and

σ = 10

−3

.

(iii) σ

t

is the standard deviation of w

t

. In the rest of the paper we use the notation σ

∗

lim

t→∞

σ

t

.

CAMP ﬁrst ﬁnds a noisy estimate of the signal called ¯ x

t

. Since this estimate is not sparse, the soft

thresholding function is applied to obtain a sparse estimate ´ x

t

. Here for the clarity of exposition we have

assumed that the algorithm uses x

o

in computing the noise standard deviation σ

t

. Hence, we refer to this

algorithm as Ideal CAMP. In Section IV we propose a more practical scheme where σ

t

is replaced with

an estimate.

March 8, 2012 DRAFT

6

B. State evolution: A framework for the analysis of CAMP

There are three important questions that have not been answered yet. (i) Under what conditions are the

above heuristics accurate? (ii) Can we use the properties of CAMP to predict its performance theoretically?

(iii) What is the formal connection between CAMP and the LASSO problem deﬁned in (2).

The ﬁrst question is accurately discussed in [22]–[24], [32]. It has been proved that in the asymptotic

setting N →∞, while δ is ﬁxed, the above heuristics are correct. Consider the following deﬁnition from

[24].

Deﬁnition II.1. A sequence of instances ¦x

o

(N), A(N), n(N)¦ is called a converging sequence if the

following conditions hold:

- The empirical distribution of x

o

(N) ∈ R

N

converges weakly to a probability measure p

X

with

bounded second moment as N →∞.

- The empirical distribution of n ∈ R

n

(n = δN) converges weakly to a probability measure p

n

with

bounded second moment as N →∞.

- The elements of A(N) ∈ R

n×N

are i.i.d. drawn from a Gaussian distribution.

Theorem 1. [24] Let ¦x

o

(N), A(N), n(N)¦ be a converging sequence, and let ¯ x

t

(N) be the estimate

provided by the CAMP algorithm. The empirical law of w

t

(N) = ¯ x

t

(N) −x

o

(N) converges to a zero-

mean Gaussian distribution almost surely as N →∞.

While the above result has been proved for Gaussian measurement matrices in the asymptotic setting,

simulation results conﬁrm that it is still accurate even for medium problem sizes with N ∼ 200 and

different classes of sensing matrices.

1

We explore this claim for the partial Fourier matrices that are of

particular interest in radar applications using Stepped Frequency (SF) waveforms in Section VI-A.

Theorem 2 enables us to answer the second question as well. Using the Gaussianity of the noise in

the asymptotic regime, one can predict the performance of CAMP through what is called the “state

evolution” (SE). SE tracks the evolution of the standard deviation of the noise σ

t

across iterations. Let

the marginal distribution of x

o

converge to p

X

, and let σ

t

denote the standard deviation of w

t

. In the

asymptotic setting, the value of the standard deviation at time t + 1 is calculated from σ

t

according to

the following equation:

σ

2

t+1

= Ψ(σ

t

), (3)

1

Empirical studies have already conﬁrmed that this theoretical prediction holds for other sensing matrices with i.i.d. elements

other than Gaussian, [20], [22], [24].

March 8, 2012 DRAFT

7

where

Ψ(σ

t

) = σ

2

+

1

δ

E

_

[η(X + σ

t

Z; τσ

t

) −X[

2

_

, (4)

Z ∼ (A(0, 1), and the expectation is with respect to the two independent random variables X ∼ p

X

and

Z. In fact, if σ

t

is the standard deviation of the noise at iteration t, then E

_

[η(X + σ

t

Z; τσ

t

) −X[

2

_

equals the MSE of the estimate ´ x

t

after applying the soft thresholding function. It has been proved [23]

that the function Ψ is concave, and therefore the iteration (3), (4) has at most one stable ﬁxed point σ

2

∗

.

Also, CAMP converges to this ﬁxed point exponentially fast (linear convergence according to optimization

literature). Appendix A provides the calculations involved in the Ψ function for a given distribution p

X

.

The following two consequences of (3), (4) are particularly useful for the radar application:

(i) The SE framework establishes the input/output relation for CAMP. Particularly, the output noise

power σ

2

∗

is the sum of the actual system noise (σ

2

) plus a noise-like component caused by the

reconstruction itself (

1

δ

MSE). Consequently, for a given σ

2

, minimizing the reconstruction error

also minimizes σ

2

∗

, therefore maximizing the CAMP reconstruction SNR.

2

(ii) The ﬁxed point σ

∗

depends on τ as well as on δ, p

X

, and σ. Figure 2 exhibits the dependence of

σ

∗

on τ for two distinct values of δ and for a ﬁxed problem instance, i.e., ﬁxed p

X

and σ. As is

clear from the ﬁgure, there is a value of τ, say τ

o

, for which σ

∗

is minimized. Moreover, as the

number of measurements decreases, both the optimal threshold τ

o

and the corresponding output

noise standard deviation increase.

Finally, to answer the third question we observe that there is a nice connection between the CAMP and

LASSO algorithms. According to [23], if τ is chosen to satisfy

λ τσ

∗

_

1−

1

2δ

E

_

∂η

R

∂x

R

(X + σ

∗

Z; τσ

∗

)+

∂η

I

∂x

I

(X + σ

∗

Z; τσ

∗

)

__

, (5)

then in the asymptotic setting CAMP with threshold τ solves the LASSO problem (2) with parameter λ.

III. CS TARGET DETECTION USING CAMP

We now propose two CS target detection schemes based on CAMP and evaluate their performance

as measured by their Receiver Operating Characteristic (ROC). Let k be the number of targets, i.e., the

number of non-zero coefﬁcients in x

o

. Deﬁne ρ =

k

n

, and deﬁne G as the distribution of the non-zero

2

For a given target received power a

2

, the CAMP input and output SNR are deﬁned here respectively as SNRin = a

2

/(nσ

2

)

and SNRout = a

2

/σ

2

∗

. Note that the output (reconstruction) SNR depends on, amongst other quantities, the CAMP threshold τ

via σ∗. A more rigorous deﬁnition of SNR is given in Section V.

March 8, 2012 DRAFT

8

0.5 1 1.5 2 2.5 3

0.25

0.3

0.35

0.4

0.45

0.5

0.55

0.6

τ

σ

*

δ = 0.6

δ = 0.2

Fig. 2. Fixed point σ∗ versus threshold τ for Ideal CAMP with σ = 0.23, compression factors δ = 0.2 (dashed, red line) and

δ = 0.6 (solid, black line). The sensing matrix A has i.i.d. Gaussian entries.

elements of x

o

. As before, σ

2

is the variance of the system noise. In this section we assume that k, G,

and σ are known. In Section IV we will investigate the realistic scenario where these parameters are not

known a priori and describe how to implement the proposed architectures in this case.

The two architectures we consider in this paper are displayed in Figure 3. Since any sparse recovery

algorithm is intrinsically a detection scheme, Architecture 1 seems a natural choice for a CS radar detector.

In this architecture, the measurements y are input to a recovery algorithm (here CAMP or equivalently

LASSO). This algorithm returns a sparse vector ´ x with the non-zero values being detections. Clearly,

the threshold parameter τ in CAMP (or equivalently the regularization parameter λ in LASSO) controls

the false alarm probability α and detection probability (P

d

) of the algorithm.

Proposition III.1. Consider the CAMP iteration with threshold τ

α

=

√

−ln α. If A(N), x

o

(N), w(N)

is a converging sequence and ´ x(N) is the ﬁxed point of CAMP, then

lim

N→∞

1

N −k

N

i=1

1

{ xi(N)=0,xi(N)=0}

= α. (6)

almost surely. Also, τ

α

is the only value of τ for which (6) holds.

Proof: Deﬁne z ∼ (A(0, 1). According to [32] we know that

lim

N→∞

1

N −k

N

i=1

1

{ xi(N)=0,xi(N)=0}

= P([σ

∗

z[ > τ

α

σ

∗

) = e

−τ

2

α

.

The uniqueness is also clear from the above equation.

Roughly speaking, the above proposition states that the number of false alarms are on the order of

(N − k)α. Note that the connection between τ and the detection probability is not clear yet. We will

discuss this issue later.

March 8, 2012 DRAFT

9

(a) Architecture 1 (b) Architecture 2

Fig. 3. Block diagrams of the proposed architectures for CS-radar detection.

The second architecture is inspired by the standard, non-CS radar approach. Most commonly, radar

detectors comprise two stages: an estimation stage, where a noisy estimate of the signal is computed,

followed by a detection stage. Usually, a Matched Filter (MF), matched to the transmitted waveform,

is used to obtain a noisy estimate of the signal with optimum SNR. Then this noisy estimate is fed to

a detection block that controls the False Alarm Probability (P

fa

or FAP). Inspired by this philosophy

and by the properties of CAMP, we introduce Architecture 2. We ﬁrst use CAMP to obtain a noisy,

non-sparse estimate of the signal ¯ x = x

o

+w. Similar to classical estimation procedures, the goal of the

ﬁrst stage is to minimize σ

∗

by choosing the optimal CAMP threshold τ = τ

o

. Once σ

∗

is minimized

or equivalently the SNR is maximized, the noisy signal ¯ x can be fed to a detection block with ﬁxed

threshold κ = σ

∗

_

−ln(α), which is used to control the false alarm rate. From the Gaussianity of w, it

is clear that in the asymptotic setting this choice of κ results in the false alarm probability α as derived

in Proposition III.1.

As will be clariﬁed later, Architecture 2 is much more appropriate for practical radar applications,

since all of the parameters can be optimized and estimated efﬁciently even without prior knowledge of k,

G, and σ. Furthermore, from a detection perspective, Architecture 2 outperforms Architecture 1. Suppose

that the Gaussianity of w

t

holds. Let τ

o

be the optimal value of τ that leads to the minimum σ

∗

and that

this optimal value is unique and can be computed using (4). Under these assumptions, which provably

hold under the conditions speciﬁed in Theorem 1, the following theorem can be derived.

Theorem 2. Set the probability of false alarm to α for both Architecture 1 that uses τ

α

and Architecture

2 that uses τ

o

in CAMP. If P

d,1

and P

d,2

are the detection probabilities of the two schemes, then

P

d,1

≤ P

d,2

.

Furthermore the equality is satisﬁed at only one speciﬁc value α

= e

−τ

2

o

.

The above theorem is proved in Appendix B by comparing the detection and false alarm probabilities

March 8, 2012 DRAFT

10

10

−6

10

−4

10

−2

10

0

0

0.2

0.4

0.6

0.8

1

P

fa

P

d

Arch. 2, Theoretical

Arch. 1, Theoretical

Arch. 2, MC

Arch. 1, MC

0.2 0.4 0.6

0.9998

1

Fig. 4. Comparison of the ROC curves for Architectures 1 and 2 at δ = 0.6, ρ = 0.1 and σ

2

= 0.05. The distribution

G of the non-zero coefﬁcients of xo is chosen such that all non zero components have the same amplitude (equal to 1) and

phase uniformly distributed between −π and π. The solid and dashed lines represent the theoretical predictions based on the

SE equation. The dots are the results of MC simulations using Ideal CAMP. The sensing matrix for MC simulations had i.i.d.

Gaussian entries.

of the two architectures. However, since in Architecture 2 the CAMP threshold is designed to minimize

the output noise variance, intuitively it is clear that, for the same P

fa

, any other choice of τ will lead to

a higher σ

∗

, i.e., a lower SNR, and therefore a lower P

d

. Example ROC curves for Architectures 1 and

2 are shown in Figure 4. The solid and dashed lines are obtained using the analytical equations derived

in Appendix B. The theoretical ROC curves are veriﬁed by Monte Carlo (MC) simulations (dots). In

the simulations we averaged over 10000 realizations of the Ideal CAMP algorithm given in Algorithm

I for several values of P

fa

ranging from 10

−1

to 10

−5

. In Figure 4 we can also observe an interesting

characteristic of Architecture 1 in the region around P

fa

= 0.4 (the zoomed area); the probability of

detection, P

d,1

, decreases as P

fa

increases above this value. To understand why, recall that in Architecture

1 the CAMP threshold τ

α

varies with P

fa

. Therefore σ

∗

(and hence the CAMP reconstruction SNR) also

changes with P

fa

, and it is not constant along the ROC curve. This explains why P

d,1

reaches its

maximum at around P

fa

= 0.4 and then decreases again as the P

fa

goes to 1. Instead, in Architecture

2, σ

∗

is ﬁxed to its minimum along the ROC curve, and therefore P

d,2

increases with increasing FAP.

From this plot it is also possible to observe that there exists only one value of FAP (α

= 0.22) where

P

d,1

= P

d,2

.

3

3

The difference in performance between the two architectures depends signiﬁcantly on the values of (δ, ρ, σ) and on the

sensing matrix in use, as explained later in Section VI.

March 8, 2012 DRAFT

11

IV. ADAPTIVE CAMP CFAR RADAR DETECTOR

So far, we have assumed we know exactly ρ, σ, and G (to derive the theoretical ﬁxed point) and x

o

(to run Ideal CAMP). However, in practical radar systems such information is not available, and both

the CAMP and detector parameters must be estimated from the CS measurements y. In this section we

demonstrate how these issues can be handled in practice and propose a fully adaptive scheme, i.e., one

that does not require any prior information and that adapts to changes in noise level. There are three

main issues that have to be settled. (i) How to estimate σ

t

without knowing x

o

. (ii) How to compute the

optimal value τ

o

for Architecture 2, efﬁciently and accurately, without the SE. (iii) How to replace the

ﬁxed threshold κ for the detector in Architecture 2 with an adaptive threshold to maintain a CFAR.

The ﬁrst question can be answered in several different ways. For instance, in this paper we use the

median to estimate the standard deviation via

´ σ

t

=

_

1

ln 2

median([¯ x

t

[). (7)

This estimator is unbiased if x

o

= 0. However, in the presence of targets, i.e., when x

o

,= 0, (7) is a

biased estimator. The main advantage of this scheme is its robustness to high SNRs. To see this, we

consider the performance of the median estimation in the asymptotic setting. As mentioned above, in the

asymptotic setting at each iteration of CAMP we have ¯ x

t

= x

o

+ w

t

. Assume that the elements of x

o

are distributed as x

o,i

i.i.d.

∼ (1 −)G(x) + (1 −)δ

0

(x) with = δρ ¸1 and w

t

i

∼ (A(0, σ

2

∗

). The goal

is to estimate the median µ

∗

of [w

t

[. However, since we only have access to ¯ x, we estimate the median

of [w

t

[ as the ´ µ that satisﬁes P([¯ x

t

i

[ > ´ µ) =

1

2

. The following theorem provides an upper bound on the

deviation of the estimated median from the true median.

Proposition IV.1. The error of the estimated median is bounded above by

[´ µ −µ

∗

[

σ

∗

≤

[ ln(1 −)[

2

√

ln 2

. (8)

This proposition is proved in Appendix C. It is important to note several interesting properties of the

upper bound (8). First, the distribution G of the non zero components of x

o

does not play any role. Second,

for small values of , i.e., in very sparse situations, [ ln(1−)[ ≈ and, hence, the error is proportional to

the sparsity level. So, when the noise level and x

o

are unknown, a practical implementation of Algorithm

I is obtained by replacing σ

t

with the estimate in (7). We will refer to this algorithm simply as CAMP

or Median CAMP.

The estimate of σ

t

provides an approach to answer the second above question as well, namely how to

estimate, in Architecture 2, the optimum CAMP threshold ´ τ

o

that minimizes ´ σ

2

∗

. Suppose that we know

March 8, 2012 DRAFT

12

or can estimate τ

max

such that τ

o

< τ

max

. Given a step size δ

τ

, we deﬁne a sequence of thresholds

τ = ¦τ

¦

L

=1

such that τ

1

= τ

max

and τ

= τ

−1

−δ

τ

. Starting from τ

max

, at each new iteration CAMP

is initialized with ´ x

0

= ´ x

−1

and z

0

= z

−1

. Using the solution of CAMP at the previous iteration −1

as initial value for τ = τ

**, CAMP needs only a few iterations to converge to the solution, and therefore
**

the entire process is very fast. After L iterations, we have a matrix of solutions

´

X = [´ x

1

, ´ x

2

, , ´ x

L

] of

size N L, where each column contains the CAMP solution for a given τ

**. Also, we have L estimates
**

¦´ σ

∗

¦

L

=1

. The optimum estimated threshold ´ τ

o

is chosen as the one that minimizes the estimated CAMP

output noise variance ´ σ

2

∗

. It is clear that δ

τ

speciﬁes the trade-off between computational complexity and

the accuracy of the algorithm in estimating ´ τ

o

. Decreasing δ

τ

increases the number of points L needed

to span the same τ search region, but it also results in a more accurate estimate of ´ τ

o

.

We now explain how to set τ

max

. At the ﬁrst iteration ( = 1 and t = 1) the CAMP algorithm is

initialized with ´ x

0

= 0 and z

0

= y, and we have ¯ x = A

H

y, where A

H

is the Hermitian of the matrix A.

Suppose that ´ σ

0

is estimated from this vector. Consider now the LASSO problem in (2). It is well known

that for λ > λ

max

= |A

H

y|

∞

the only solution is the zero solution. Using the calibration equation (5)

with λ = λ

max

and σ

∗

= ´ σ

0

, we can compute an estimate of ´ τ

max

. We will refer to this algorithm as

Adaptive CAMP, since both the noise variance ´ σ

t

the threshold ´ τ

o

are adaptively estimated inside the

algorithm itself, and the only input variables are y and A.

It remains now to establish how to replace the ﬁxed threshold κ with an adaptive one for Architecture

2. In Appendix B we show how to set the ﬁxed threshold κ to achieve the desired FAP when the noise

variance σ

2

is known. In practice, however, the noise statistics are not known in advance. Hence, in

classical radar detectors a CFAR processor is employed. In CFAR schemes the Cell Under Test (CUT),

is tested for the presence of a target against a threshold that is derived from an estimated clutter plus

noise power. The 2N

w

cells (CFAR window) surrounding the CUT are used to derive an estimate of

the local background and they are assumed to be target free. Commonly, 2N

G

guard cells immediately

adjacent to the CUT are excluded from the CFAR window. The great advantage of CFAR schemes is

that they are able to maintain a constant false alarm rate via adaptation of the threshold to a changing

environment.

The general form of a CFAR test is

X

H1

≷

H0

βY, (9)

where the random variable X represents some function (generally envelope or power) of the CUT, say

x

o,i

, β is a threshold multiplier, and Y is also a random variable function of the cells in the CFAR

March 8, 2012 DRAFT

13

Fig. 5. Block diagram of the Adaptive CAMP CFAR detector.

window [x

o,i−Nw−NG

, , x

o,i−NG−1

, x

o,i+NG+1

, , x

o,i+Nw+NG

]. In the well known Cell Averaging

CFAR (CA-CFAR) detector, Y is the average of the cells in the CFAR window [14]. Clearly, one has to

know the relation between the CFAR threshold multiplier and P

fa

so that β can be adjusted to maintain

P

fa

constant during the observation time. Hence, the noise distribution should be known to design a

CFAR scheme. If the CAMP estimate ´ x were input to a CFAR processor, then estimation of the noise

characteristics would be very difﬁcult, since many samples in ´ x are identically zero. Instead, the signal

¯ x contains all non-zero samples, and it is modeled as the sum of targets plus Gaussian noise; this can

be directly input to a conventional CFAR processor. A block diagram of the Adaptive CAMP CFAR

detector based on Architecture 2 is shown in Figure 5. For the properties of CAMP, in Architecture

2 replacing the ﬁxed threshold detector with a CFAR detector should provide comparable results as in

classical CFAR without CS.

V. DEFINING THE SNR FOR CS CAMP

Before proceeding to the simulations and experiments, we provide a deﬁnition of SNR for the CAMP-

based CS radar system. This deﬁnition will be useful not only for understanding the performance of

the proposed detectors, but also to compare the novel CS based architectures to more classical ones for

which the performance is known. We observed in (4) that the variance σ

2

∗

of the noise present in the

CAMP estimate depends on the 5-tuple (δ, ρ, p

X

, τ, σ). In fact, even for ﬁxed p

X

, σ and for a speciﬁc

Architecture (that ﬁxes τ), σ

2

∗

, and therefore the (P

d

, P

fa

) curves, can vary signiﬁcantly with δ and ρ.

This is uncommon in classic radar systems, where the SNR after the MF is uniquely determined for a

given input SNR and integration time. Therefore it is important to relate the performance of a given CS

scheme to that of MF-based classical systems.

Let A

+

∈ C

N×N

be the measurement matrix for the case δ = 1, i.e., no undersampling. A conventional

radar processor feeds the measurement vector y = A

+

x

o

+σz, z ∼ (A

N

(0, I), to a MF to obtain a noisy

March 8, 2012 DRAFT

14

estimate ´ x

MF

= x

o

+σz of the target received signal with optimum SNR.

4

CAMP enables us to deﬁne

SNR in a similar way. Once CAMP has converged we have access to both the sparse estimate ´ x and

its noisy version ¯ x. As stated before, even in medium size problems ¯ x can be accurately approximated

by the sum of the true target vector plus white Gaussian noise with variance σ

2

∗

, i.e. ¯ x = x

o

+ σ

∗

z.

Indicating with a

2

the target received power at bin i, we deﬁne the SNR at the output of MF or CAMP

respectively as

SNR

MF

=

a

2

σ

2

, SNR

CS

=

a

2

σ

2

∗

. (10)

In the remainder of the paper we assume that the total transmitted (and received) power is independent of

δ.

5

In the case of partial Fourier sensing matrix, described later in Sections VI and VII, this is achieved

in practice by dividing the total available transmit power over the subset of transmitted frequencies. With

this assumption, dividing both sides of (4) by a

2

, we can derive the CAMP output SNR as a function of

the equivalent MF output SNR, i.e., the SNR that we would obtain without compression using a MF. The

constraint of keeping the received target power equal for different amounts of undersampling enables us

to evaluate the changes in SNR

CS

due to reconstruction and not due to a reduction in total signal power.

In other words, if we normalize (4) by a

2

, then we keep the ﬁrst term on the right hand side ﬁxed (which

is equal to SNR

MF

) and observe the change in SNR

CS

produced by the reconstruction error term (1/δ

MSE). Note that, with the above assumptions made, SNR

MF

represents an upper bound on the highest

SNR that can be obtained from CAMP.

VI. SIMULATION RESULTS

In this section we investigate the performance of median and Adaptive CAMP using MC simulations

and compare it with the theoretical results obtained from the SE. Moreover, we replace the Gaussian

sensing matrices, for which SE provably applies, with partial Fourier matrices, which are of particular

interest in radar applications transmitting Stepped Frequency (SF) waveforms [9], [11], [33]. An n N

partial Fourier matrix is obtained from an N N discrete Fourier transform matrix by preserving only

a random subset n of the original N matrix rows. The elements of a partial Fourier matrix are given by

a

i,j

= e

−j2π(i−1)(j−1)/N

, for i = 1, , n and j = 1, , N.

4

In a multiple target scenario, the MF SNR is optimum and independent of the number of targets as long as each target is

exactly on a grid point and the matrix A+ is orthogonal. We assume these (ideal) conditions are satisﬁed when computing the

MF SNR.

5

This is achieved by using sensing matrices with unit column norms in all simulations.

March 8, 2012 DRAFT

15

−0.2 0 0.2

0

2

4

6

δ = 0.2, ρ = 0.1

Data

−2 0 2

0

0.5

1

Data

δ = 0.2, ρ = 0.7

−0.5 0 0.5

0

1

2

Data

δ = 0.9, ρ = 0.5

−0.4 −0.2 0 0.2 0.4

0

2

4

D

e

n

s

i

t

y

δ = 0.7, ρ = 0.2

Data

(a) Histogram of the real part of w.

−0.5 0 0.5

0

1

2

Data

δ = 0.9, ρ = 0.5

−1 0 1

0

0.5

1

Data

δ = 0.2, ρ = 0.7

−0.2 0 0.2

0

2

4

D

e

n

s

i

t

y

δ = 0.7, ρ = 0.2

Data

−0.2 0 0.2

0

2

4

6

δ = 0.2, ρ = 0.1

Data

(b) Histogram of the imaginary part of w.

Fig. 6. Histograms (bars) of the real and imaginary parts of the the noise signal w for different combinations (δ, ρ) using CAMP

with threshold τ = 1.8. A Gaussian pdf is ﬁtted to the histograms (solid, red line). In these plots σ

2

= 0.1 and N = 4000.

The sensing matrix is partial Fourier. See Appendix D for the p-values of Kolmogorov-Smirnov test.

A. Gaussianity of w using partial Fourier matrices

In this section we investigate the Gaussianity of the reconstruction noise vector w

t

for a partial Fourier

sensing matrix using MC simulations. In Figure 6 we show a few examples of the empirical distribution

of w at convergence for different combinations of δ and ρ. To obtain the histograms we used CAMP with

a ﬁxed (not necessarily optimal) threshold τ and we ﬁxed σ = 0.1. Simulation results conﬁrm that the

Gaussianity of the noise vector is preserved for partial Fourier matrices as well. We further investigate

its Gaussianity using the Kolmogorov-Smirnov test in Appendix D.

B. Accuracy of the SE

We investigate the accuracy of the SE by comparing the theoretical results obtained from (4) with

simulations results obtained using the Ideal CAMP algorithm. Moreover, we study the behavior of σ

∗

for

the case of a partial Fourier sensing matrix and investigate how it deviates from the theoretical case of

a Gaussian sensing matrix, for which the SE applies. Figure 7 compares σ

∗

obtained from Ideal CAMP

for the case of complex Gaussian and partial Fourier sensing matrices with the theoretical one from the

SE. The following two remarks can be made:

(i) SE does not predict the performance of CAMP accurately for partial Fourier matrices. However, for

τ = τ

o

, the value of σ

∗

for Fourier is not very different from the value for Gaussian.

March 8, 2012 DRAFT

16

2 2.5 3

0.26

0.27

0.28

0.29

0.3

τ

σ

*

δ = 0.05

1.5 2 2.5 3

0.26

0.27

0.28

0.29

0.3

τ

σ

*

δ = 0.1

1.5 2 2.5 3

0.26

0.27

0.28

0.29

0.3

τ

σ

*

δ = 0.2

1 1.5 2 2.5 3

0.25

0.26

0.27

0.28

0.29

0.3

τ

σ

*

δ = 0.5

CAMP Ideal, Gaussian

CAMP Ideal, Fourier

Theoretical SE

Fig. 7. σ∗ versus τ from Ideal CAMP for both complex Gaussian and partial Fourier sensing matrices. The empirical curves

are obtained by averaging over 100 MC samples for σ

2

= 0.05, ρ = 0.05, N = 4000, and different δ. The solid, blue line

shows the analytical σ∗ computed from (13) in Appendix A.

(ii) As δ → 0 the predictions of SE become more accurate for Fourier. However, as the number of

measurements increases, i.e., δ →1, the columns of the matrix become more correlated, and hence

the true behavior deviates from the SE that holds for matrices with i.i.d. entries.

C. Effects of the median estimator in CAMP

We investigate the effect of replacing the true σ

t

with the median based estimate ´ σ

t

from (7) when

x

o

,= 0. In Figure 8 the estimated output noise standard deviation is shown for both Ideal and Median

CAMP. We observe that the estimate ´ σ

∗

deviates from the Ideal CAMP case because of the bias introduced

by the estimator. Furthermore, as is clear from this ﬁgure and conﬁrmed by the upper bound provided

in Appendix C, the deviation diminishes as = δρ decreases.

In Architecture 1, overestimating ´ σ

∗

will result in a loss in performance compared to the case of using

Ideal CAMP. This is because the soft thresholding function in CAMP uses the parameter τ

α

´ σ

∗

. For a ﬁxed

FAP (that ﬁxes τ

α

), the bias will have the effect of increasing the overall threshold, therefore resulting in

losing both detection performance and the CFAR property (since the output P

fa

is underestimated). This

is shown in Figure 9, where we plot the ROC for Architecture 1 using both Ideal and Median CAMP.

From this ﬁgure we also observe that, for the value of δ used here, Architecture 1 performs much better

when using the partial Fourier sensing matrix as compared to the Gaussian one. This is related to the

March 8, 2012 DRAFT

17

2 2.5 3

0.26

0.27

0.28

0.29

0.3

τ

σ

*

δ = 0.05

1.5 2 2.5 3

0.26

0.27

0.28

0.29

0.3

τ

σ

*

δ = 0.1

1.5 2 2.5 3

0.26

0.27

0.28

0.29

0.3

τ

σ

*

δ = 0.2

1 1.5 2 2.5 3

0.25

0.26

0.27

0.28

0.29

0.3

τ

σ

*

δ = 0.5

CAMP Adaptive, Gaussian

CAMP Adaptive, Fourier

CAMP Ideal, Gaussian

CAMP Ideal, Fourier

Fig. 8. Output noise standard deviation versus τ for both Ideal (σ∗) and Median ( σ∗) CAMP for complex Gaussian and partial

Fourier sensing matrices. The curves are obtained by averaging over 100 MC realizations for σ

2

= 0.05, ρ = 0.05, N = 4000,

and various δ.

10

−4

10

−3

10

−2

10

−1

0.4

0.6

0.8

1

P

fa

P

d

Median CAMP

Ideal CAMP

(a) Gaussian sensing matrix

10

−4

10

−3

10

−2

10

−1

0.75

0.8

0.85

0.9

0.95

1

P

fa

P

d

Median CAMP

Ideal CAMP

(b) Partial Fourier sensing matrix

Fig. 9. ROC curves for Architecture 1 using Ideal and Median CAMP. Here N = 1000, δ = 0.6, ρ = 0.1, a

2

= 1, and

σ

2

= 0.05 (corresponding to a MF SNR = 13dB).

variation of the noise variance σ

2

∗

, and therefore the SNR, with the threshold τ

α

along the ROC curve.

As can be seen from Figure 8, in the partial Fourier case as δ increases the variance curve becomes

ﬂatter than in the Gaussian case. This implies that, for decreasing FAPs, in the partial Fourier case the

SNR in Architecture 1 deviates much less from the optimum SNR that is achieved for τ = τ

o

.

March 8, 2012 DRAFT

18

10

−4

10

−3

10

−2

10

−1

0.7

0.75

0.8

0.85

0.9

0.95

1

P

fa

P

d

Ideal CAMP CFAR

Adaptive CAMP FT

Adaptive CAMP CFAR

Ideal CAMP FT

Theoretical CA−CFAR

(a) Gaussian sensing matrix

10

−4

10

−3

10

−2

10

−1

0.75

0.8

0.85

0.9

0.95

P

fa

P

d

Ideal CAMP CFAR

Adaptive CAMP FT

Ideal CAMP FT

Adaptive CAMP CFAR

Theoretical CA−CFAR

(b) Partial Fourier sensing matrix

Fig. 10. ROC curves for Architecture 2 for different levels of adaptivity. Here N = 1000, δ = 0.6, ρ = 0.1, a

2

= 1, and

σ

2

= 0.05 (corresponding to a MF SNR = 13dB). FT denotes the use of an (ideal) ﬁxed threshold detector.

D. Adaptive CAMP CFAR detector performance

We investigate now the (P

d

, P

fa

) performance of the Adaptive CAMP CFAR detector. In the simula-

tions for Architecture 2, Adaptive CAMP is followed by a CA-CFAR processor preceded by a Square

Law (SL) detector (recall Figure 5). For the CFAR processor, we use a CFAR window of length 20 with

4 guard cells.

It is a well known fact that, if one or more targets are present in the CFAR window, then they cause

a rise in the adaptive threshold, thus possibly masking the target in the CUT that has yet to be detected.

Therefore ,in the basic analysis of CFAR schemes, it is assumed that there are no targets present in the

CFAR window. This scenario is referred to as the non-interfering targets scenario. Following a similar

approach, we consider the case of multiple but non-interfering targets. For the case of interfering targets,

to reduce the interference losses encountered in CA-CFAR, several dedicated CFAR schemes have been

proposed in literature, such as Order Statistic (OS) CFAR [15]. To keep the discussions concise, we do

not pursue these directions and leave them for future research.

Adaptivity imposes extra losses on the system. One loss is due to the use of Adaptive- instead of Ideal

CAMP. This means that there is an error in estimating τ

o

. A second loss is caused by the CFAR processor

and its estimate of the noise standard deviation. In Figure 10 we show the ROC curves for Architecture

2 obtained using: (a) Ideal CAMP with ideal (ﬁxed threshold) detector, (b) Ideal CAMP in combination

with the CA-CFAR detector, (c) Adaptive CAMP with ideal (ﬁxed threshold) detector, and (d) a fully

adaptive scheme consisting of Adaptive CAMP followed by CA-CFAR processor. We also show the

theoretical curve of a CA-CFAR processor with the same window length and SNR = 11.8dB for the

Gaussian sensing matrix and 12.4dB for the partial Fourier sensing matrix. The SNR can be estimated

March 8, 2012 DRAFT

19

during simulations, or it can be derived using the procedure described later in Section VIII. For the

Gaussian sensing matrix the optimal threshold for Architecture 2 (using Ideal CAMP) is computed using

the SE. For the partial Fourier sensing matrix, the threshold in Ideal CAMP is set to τ

o

= 1.85, which is

derived from a plot like the ones shown in Figure 7 for the case δ = 0.6. It is clear from this ﬁgure that

Adaptive CAMP introduces almost no loss in the detection performance of Architecture 2. This is due

to the fact that, although ´ σ

∗

is biased, the value of ´ τ

o

at which the minimum ´ σ

∗

occurs is very close to

the true optimal τ

o

(see Figure 8). The main loss instead is introduced by the adaptive CFAR detector.

This is the well-known CFAR loss [14], which can be controlled by changing the CFAR window length.

In general, the window length depends on the speciﬁc application (e.g., the expected types of target and

environment). Furthermore, by comparing the curve of Adaptive CAMP plus CFAR with the theoretical

one of a CA-CFAR processor (without CS) with equal parameters, we observe that the CA-CFAR detector

performance seems independent of the fact that the input to the detector is obtained by running CAMP

instead of a conventional MF.

By comparing Figures 9 and 10 it can be seen that, in the ﬁxed threshold case, Architecture 2 always

outperforms Architecture 1, as predicted by Theorem 2.

VII. EXPERIMENTAL RESULTS

To verify our theoretical and simulated results, we performed several experiments at Fraunhofer FHR

using the software deﬁned LabRadOr experimental radar system. The digital SF waveforms were designed

to acquire a set of CS measurements for δ = 0.5 and 0.25. A more detailed description of the radar

system and the setup is given in [33]. In the experiments, we use ﬁve stationary corner reﬂectors of

different sizes (RCS) as targets. The corner reﬂectors are about 5m apart in range. For each transmitted

(TX) waveform 300 measurements (with the same set-up) were performed.

A. Transmitted waveform

We use a SF waveform, meaning that the TX signal consists of a number of discrete frequencies. In the

Nyquist case (that represents unambiguous mapping of ranges to phases over the whole bandwidth) we

transmit N = 200 frequencies over a bandwidth of 800MHz. The achievable range resolution is therefore

δ

R

= 18.75cm. Each frequency is transmitted during 0.512µs, thus implying a bandwidth of B

f

=

1.95MHz, and sequential frequencies are separated by ∆f = 4MHz, resulting in 37.5m unambiguous

range (∆R).

March 8, 2012 DRAFT

20

0 5 10 15 20 25 30 35

10

1

10

2

10

3

10

4

Range [m]

l

o

g

(

a

m

p

l

i

t

u

d

e

)

CAMP Arch. 1

CAMP Arch. 2

MF

T1

T2

T3

T4

T5

Fig. 11. Reconstructed range proﬁle using: CAMP Architectures 1 and 2, and MF. For the MF, N = 200; for CS, δ = 0.5.

In the CS case, the number of TX frequencies is reduced from N to n (n < N). The subset of

transmitted frequencies is chosen uniformly at random within the total transmitted bandwidth. In our

experiments, we ﬁx N = 200, and n = 100, 50, corresponding to δ = 0.25, 0.5 and ρ = 0.1, 0.05.

After reception and demodulation, each range bin, r

i

= r

0

+i ∆R/N, i = 1, . . . , N, maps to n phases

proportional to the n transmitted frequencies f

m

= f

0

+ m∆f, m = 1, . . . , n. Therefore, the sensing

matrix A can be represented as a partial Fourier matrix, and the n samples y

m

of the compressed

measurement vector y are given by

y

m

=

1

√

n

N

i=1

e

−j2kmri

x

o,i

= a

(m)

x

o

, (11)

where a

(m)

is the m

th

row of the partial Fourier matrix, k

m

= 2πf

m

/c is the wave number, and c is

the speed of light. We assume that the same total power is transmitted, irrespective of the number of

transmitted frequencies. This means that when the number of measurements is reduced by δ, the power

per transmitted frequency P

T

is 1/δ times higher than in the Nyquist waveform case, so that the total

transmitted energy (P

T

n/B

f

) is the same in all cases, irrespective of n. This enables us to analyze

the effects of undersampling and transmitted power separately.

B. CFAR CAMP adaptive detector performance

We show now the ROC curves obtained using CAMP-based detection on the measured data. Figure

11 exhibits the signals reconstructed using Architectures 1 and 2 in addition to the MF. For Architecture

1, τ

α

was set using a FAP = 10

−4

. In this ﬁgure the ﬁve corner reﬂectors are indicated as T1–T5. In

the experimental measurements, due to the fact that targets are not exactly on Fourier grid points, there

is a leakage of target power into neighboring range bins both for MF and CS. Also, since the SNR

is very high for all targets (in all cases above 20dB), to evaluate the performance of the detectors at

March 8, 2012 DRAFT

21

10

−3

10

−2

10

−1

10

0

0.2

0.4

0.6

0.8

1

P

fa

P

d

Arch. 2 CA−CFAR

Arch. 2 FT

Arch. 1

T1

T3

T4

(a) δ = 0.5

10

−3

10

−2

10

−1

10

0

0

0.2

0.4

0.6

0.8

1

P

fa

P

d

Arch. 2 CA−CFAR

Arch. 2 FT

Arch. 1

T3

T1

T4

(b) δ = 0.25

Fig. 12. ROC curves for Architecture 1 and Architecture 2 using both ﬁxed threshold (FT) and CA-CFAR detectors. The

curves correspond to targets T1, T3 and T4 in Figure 11.

medium SNR values, we added additional white Gaussian noise (with σ = 500) to the raw frequency data

samples, resulting in a MF output SNR of 17.2, 16.6, 14, 10.2 and 26dB, respectively, from the closest

corner (T1) to the farthest one (T5). Figure 12 shows the ROC curves for three of the ﬁve targets with

different SNRs, indicated as T1, T3, and T4 in Figure 11, for δ = 0.5 and 0.25. In these plots, the FAP

was estimated by excluding the range bins corresponding to the target locations plus four guard cells.

For P

d

estimation, we used the detection at the location of the highest target peak. For each target the

ROC curve is shown using both Architecture 1 and Architecture 2. In Architecture 1, the highest FAP

that can be estimated is equal to δ, since the sparse estimated signal ´ x cannot have more than n out of

N non-zero coefﬁcients. In Architecture 2, CAMP recovery is followed by either a ﬁxed threshold (FT)

detector or a CA-CFAR processor that uses four guard cells and a CFAR window of length 20.

Recall that in CAMP the reconstruction SNR is the ratio of the estimated target power to the system

plus reconstruction noise power (σ

2

∗

). Therefore, while the total transmitted power remains ﬁxed for

δ = 0.5 and 0.25, the reconstruction SNR for each target depends on the architecture used in addition to

the compression factor δ. As observed in the simulated results in Figures 7 and 8, the reconstruction noise

variance σ

∗

increases, and thus the CAMP SNR decreases, as the number of measurements is reduced.

Since a loss in SNR translates directly into a loss in detection probability, for a given FAP, CAMP will

perform better for larger δ.

From Figure 12 we observe that, in agreement with our theoretical ﬁndings, the detection probability

of Architecture 2 with the ﬁxed threshold detector is always higher than that of Architecture 1. When

Architecture 2 is followed by a CA-CFAR processor, as expected, there is a loss in detection performance

compared to the ﬁxed threshold case.

March 8, 2012 DRAFT

22

At very low and very high SNRs, the two proposed architectures perform very similarly. However,

at low FAPs and high P

d

, which is the most relevant case in practical situations, Architecture 2 always

outperforms Architecture 1. Furthermore, we observe that Architecture 1 is comparable to an OS-CFAR

detector where the CFAR window is the entire signal, i.e., 2N

w

= N, including the CUT. However, a

serious disadvantage of Architecture 1 is that, since the entire signal is used in the noise estimation and

the threshold τ

α

is ﬁxed, it can not adapt to local variation of noise level. This makes Architecture 1

unsuitable for many radar applications. Architecture 2, in contrast, provides the ﬂexibility to choose both

the most appropriate CFAR processor and CFAR window length depending on the speciﬁc scenario.

VIII. DESIGN METHODOLOGY

Using the tools developed in the previous sections, we propose here a methodology for designing CS

radar detectors based on CAMP. Given the detection range, target RCS and system noise, the ﬁrst step

is to compute the transmitted power necessary to reach a given CAMP output SNR. For example, if we

consider Architecture 2 and we assume that the received power is equal to a

2

for all targets,

6

then using

equation (4) for a given σ

2

we obtain a value of σ

2

∗

for each couple (δ, ρ). Computing the ratio a

2

/σ

2

∗

we

obtain the CAMP output SNR map, an example of which is shown in Figure 13 for a Gaussian sensing

matrix.

7

It is clear that CS radar performance depends, besides on the SNR, on δ and ρ. Therefore, in

the system design phase, an estimate of the expected number of targets should be made. To design the

system for the worst-case scenario, k could be set based on the maximum expected number of targets,

i.e., k = k

max

. Then, for a given number of range (or Doppler) bins N and a given σ, we can vary the

number of CS measurements n to obtain several values of δ = n/N and corresponding ρ = k

max

/n. By

evaluating the SNR at these points, we obtain a curve that shows how power (SNR

CS

) and undersampling

(δ) can be traded against one another.

In Figure 14 we show an example of such curves for several values of k

max

and N = 200. In Figure

14(a) the sensing matrix has i.i.d. Gaussian entries and the curves are obtained using the theoretical SE. In

Figure 14(b) the sensing matrix is partial Fourier and the curves are obtained using MC simulations with

Adaptive CAMP. Observe that in both cases the curves are equal for the same k

max

up to approximately

6

This choice of target amplitudes distribution provides a lower bound on the SNR performance, being this the least favorable

distribution for the non-zero entries in xo [23].

7

For sensing matrices other than Gaussian, the SNR map of CAMP can be obtained via simulations using the Ideal or Adaptive

CAMP algorithms, as shown in Figure 14(b) for a few sets of points in the map. For Figure 13 we used the analytical equations

from Appendix A.

March 8, 2012 DRAFT

23

δ

ρ

0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

0

2

4

6

8

10

12

Fig. 13. CAMP output SNR for Architecture 2 for a = 1 and σ

2

= 0.05, corresponding to SNRMF = 13dB. The sensing

matrix is i.i.d. Gaussian. The colormap is the output SNR of CAMP in a dB scale.

0 0.2 0.4 0.6 0.8 1

2

4

6

8

10

12

14

δ

S

N

R

C

S

k

max

= 1

k

max

= 6

k

max

= 12

k

max

= 24

k

max

= 48

(a) i.i.d. Gaussian sensing matrix

0 0.2 0.4 0.6 0.8 1

2

4

6

8

10

12

14

δ

S

N

R

C

S

k

max

= 1

k

max

= 6

k

max

= 12

k

max

= 24

k

max

= 48

(b) partial Fourier sensing matrix

Fig. 14. CAMP output SNR versus δ for Architecture 2 for different numbers of targets kmax. Here N = 200, a = 1, and

σ

2

= 0.05, corresponding to SNRMF = 13dB.

δ = 0.8. However, as δ →1 the partial Fourier matrix approaches a full Fourier matrix, i.e., it becomes

orthogonal, and SNR

CS

→ 13 dB. On the contrary, even in the limit n = N (i.e., δ = 1) the Gaussian

sensing matrix is not orthogonal. Hence, we estimate that if k

max

> 1, just as in conventional MF, there

will be losses due to target interference. In this case the upper bound (SNR = 13dB) cannot be achieved.

Once we have computed the SNR maps or curves, we can use them to predict the performance of our

CAMP CFAR detector. Assume, for example, that we would like to choose δ = 0.6 and k

max

= 12.

For this combination (δ, ρ) we derive a value for the CAMP output SNR equal to 11.7dB that can be

plugged in the selected CFAR P

d

and P

fa

equations. If we use a CA-CFAR processor and we assume

March 8, 2012 DRAFT

24

that the targets are not interfering, then for P

fa

= 10

−4

we obtain P

d

= 0.7. This means that, if we want

to increase the detection probability for the desired FAP, since k

max

is ﬁxed, we either have to increase

the number of CS measurements (δ) or the received power. The latter can be achieved by increasing the

transmitted power or any term in the radar equation that increases the received power, such as antenna

gain.

IX. CONCLUSIONS

In this paper we have achieved two goals. Firstly, we have presented the ﬁrst architecture for adaptive

CS radar detection with CFAR properties. Secondly, we have provided a methodology to predict the

performance of the proposed detector, which makes it possible to design practical CS based radar systems.

These goals have been achieved by exploiting CAMP, which features closed-form expressions for the

detection and false alarm probabilities in the ideal case of known system parameters. Based on the SE

theoretical results, we have demonstrated that, out of two proposed architectures, the combination of a

recovery stage based on CAMP with a separate detector achieves the best performance. With a simple

modiﬁcation to CAMP, we have combined conventional CFAR processing with

1

-norm minimization to

provide a fully adaptive detection scheme. Our theoretical ﬁndings have been supported by both numerical

and experimental evidence.

Furthermore, by comparing theoretical and experimental results we have been able to understand the

behavior of CAMP for sensing matrices other than i.i.d. Gaussian. In fact, our experimental data has shown

that our conclusions still hold for the case of partial Fourier sensing matrices, for which, unfortunately,

no theoretical claims can yet be made.

We have derived closed form expressions for the CAMP output SNR as a function of system and

target parameters. These relations can be used to obtain CS link budget plots that allow the system

designer to evaluate the trade off between power and undersampling. Such charts play an important role

in determining when and how CS can be applied and at what cost.

We believe this work paves the way for the use of CS in radar detection.

APPENDIX A

RISK OF THE SOFT THRESHOLDING FUNCTION

We derive an analytical expression for (4) for the case when the non-zero coefﬁcients in x

o

all have

equal amplitude. At the end of the appendix, we also explain how the calculations can be easily generalized

to the more general setting where the non-zero elements have different amplitude distributions. For the

March 8, 2012 DRAFT

25

complex signal case, it has been proved in [23] that the function Ψ(σ

∗

) is independent of the phase

of the non-zero coefﬁcients, and therefore, without loss of generality, we will assume in the following

calculations that the phase of the non-zero coefﬁcients is equal to zero. Therefore, we have

f

x

(x) = (1 −δρ)δ(0) + δρδ(a), (12)

where δρ =

k

N

and a is the amplitude of all non-zero coefﬁcients in x

o

.

Using f

x

(x) in (4), we obtain

σ

2

∗

= σ

2

+

1

δ

E

Z,X

¦[(η(X + σ

∗

Z; τσ

∗

) −X)[

2

¦

= σ

2

+

1

δ

_

E

Z

¦[(η(σ

∗

Z; τσ

∗

))[

2

¦P(x = 0) +E

Z

¦[(η(a + σ

∗

Z; τσ

∗

) −a)[

2

¦P(x = a)

_

= σ

2

+

σ

2

∗

δ

_

(1 −δρ)E

Z

¦[(η(Z; τ))[

2

¦ + δρE

Z

¦[(η(µ + Z; τ) −µ)[

2

¦

_

, (13)

where Z ∼ (A(0, 1) and can be decomposed as Z = (Z

r

+ jZ

i

), where Z

r

, Z

i

∼ A(0, 1/2), and

µ = a/σ

∗

. Deﬁne the two independent random variables w = [z[ ∼ Rayleigh(

1

2

) and θ = (z) ∼ that is

uniformly distributed between 0 and 2π. The ﬁrst expectation in (13) can be computed as

E

Z

¦[η(Z; τ)[

2

¦ =

_

θ

_

w>τ

(w −τ)

2

f

w

(w)f

θ

(θ)dwdθ = 2

_

w>τ

w(w −τ)

2

e

−w

2

dw

= 2

√

π

_

_

_

w>τ

w

3

e

−w

2

√

π

dw −2τ

_

w>τ

w

2

e

−w

2

√

π

dw + τ

2

_

w>τ

w

e

−w

2

√

π

dw

_

_

, (14)

where each of the integrals in the last line is the incomplete moment of a Gaussian random variable with

parameters (0,

1

√

2

) of order 3, 2, and 1, respectively. These integrals can be computed numerically.

After some changes of variable and coordinate transformations, the second expectation in (13) can be

written as

E([η(µ + Z

r

+ jZ

c

; τ) −µ[

2

) =

2π

_

φ=0

_

r≤τ

µ

2

1

π

e

−r

2

−µ

2

+2rµcos(φ)

rdrdφ

+

2π

_

φ=0

_

r>τ

[(r −τ)

2

+ µ

2

−2µ(r −τ) cos(φ)]e

−r

2

−µ

2

+2rµcos(φ)

rdrdφ

= µ

2

+ e

−µ

2

_

∞

τ

r(r −τ)e

−r

2

_

−4µI

1

(2rµ) + 2(r −τ)I

0

(2rµ)

_

dr, (15)

March 8, 2012 DRAFT

26

where I

0

and I

1

are modiﬁed Bessel functions of the ﬁrst kind of order 0 and 1, respectively, and J

1

is

the Bessel function of the ﬁrst kind of order 1 [34]. For a given τ, the ﬁxed point solution σ

∗

can be

iteratively computed using the integrals above.

Finally, note that if the amplitudes of the non-zero coefﬁcients are drawn from an arbitrary distribution

G, then we can also calculate the expected value of the expressions in (15) with respect to µ ∼ G.

APPENDIX B

PROOF OF THEOREM 2

To prove Theorem 2, we derive the detection and false alarm probabilities of Architectures 1 and 2.

Since the thresholds τ

α

and τ

o

are different for Architectures 1 and 2, in what follows, σ

∗,α

and σ

∗,o

denote the ﬁxed point solutions of SE in Architectures 1 and 2, respectively.

Architecture 1

Recall from Figure 3(a) that in Architecture 1 the non-zero coefﬁcients in ´ x represent the ﬁnal detections

and the threshold τ

α

is selected so as to achieve the desired FAP α. In this architecture, the output of

CAMP is given by

´ x = η(x

o

+ σ

∗,α

z; τ

α

σ

∗,α

), (16)

where z ∼ (A(0, I). The Gaussianity of the noise is due to the assumption of the Theorem, and it holds

in the asymptotic settings according to Theorem 1. The test statistic at bin i, i = 1, , N, is given by

[´ x

i

[

H1

H0

0. (17)

Therefore, we have

P

fa

1

= P([´ x

i

[ , = 0[H

0

) = P([σ

∗,α

z[ > τ

α

σ

∗,α

) = P([z[ > τ

α

) = e

−τ

2

α

,

where z ∼ (A(0, 1). This leads us to the following straightforward parameter tuning: τ

α

=

√

−ln α.

For evaluating the detection probability, deﬁne a as the square root of the power received from a target

at location i. We then have

P

d1

= P([´ x

i

[ , = 0[H

1

) = P([η(˜ x

i

; τ

α

σ

∗,α

)[ , = 0[H

1

) = P([x

o,i

+ σ

∗,α

z[ > τ

α

σ

∗,α

) (18)

= P([a + σ

∗,α

z[ > τ

α

σ

∗,α

) = Q

1

(

_

2SNR

CS,1

,

√

−2 ln α),

where SNR

CS,1

= a

2

/σ

2

∗,α

, σ

2

∗,α

evaluated using (4) with threshold parameter τ

α

, and Q

1

(., .) is the

Marcum-Q function.

March 8, 2012 DRAFT

27

Architecture 2

Suppose that after the estimation block we obtain the following noisy estimate of x

o

¯ x = x

o

+ σ

∗,o

z.

For Architecture 2, the decision statistic at bin i is given by

[¯ x

i

[

H1

≷

H0

κ.

Therefore,

P

fa2

= P([¯ x

i

[ > κ[H

0

) = P([σ

∗,o

z[ > κ) = P([z[ > κ/σ

∗,o

) = e

−κ

2

/σ

2

∗,o

.

The last equality comes from the fact that [z[ ∼ Rayleigh(1/2). Thus, for a desired FAP α, the detector

threshold can be set as κ = σ

∗,o

√

−ln α, where as before σ

∗,o

can be computed from (4) with threshold

parameter τ

o

. The detection probability is given by

P

d2

= P([¯ x

i

[ > κ[H

1

) = P([x

o,i

+ σ

∗,o

z[ > κ) = P([a + σ

∗,o

z[ > κ) (19)

= Q

1

(

_

2SNR

CS,2

,

√

−2 ln α),

where SNR

CS,2

= a

2

/σ

2

∗,o

.

The proof of Theorem 2 is now straightforward. Since σ

∗,o

≤ σ

∗,α

, for the same target received power

a

2

, SNR

CS,2

≥SNR

CS,1

and, therefore, for the same FAP α, P

d2

≥ P

d1

, with equality if and only if

α = e

−τ

2

o

. Note that the above arguments are independent of the distribution G of the non-zero elements

of x

o

.

APPENDIX C

MEDIAN ESTIMATOR ERROR BOUND

To quantify the error of the median estimator as a function of δ and ρ, consider the random variable

x with pdf as in (12) and z ∼ (A(0, σ

2

). Deﬁne µ

∗

as the median of the absolute value of the random

variable z, i.e., P([z[ ≤ µ

∗

) =

1

2

. Since [z[ ∼ Rayleigh(σ

2

/2), we obtain µ

∗

= σ

_

log(2).

Deﬁne = δρ and µ as the median of the absolute value of the random variable x + z, i.e.,

P([x +z[ ≤ µ) = P([a +z[ ≤ µ) + (1 −)P([z[ ≤ µ) = P([a +z[ ≤ µ) + (1 −)(1 −e

−µ

2

/σ

2

) =

1

2

.

Since 0 ≤ P([a + z[ ≤ µ) ≤ 1, using the upper bound we obtain

(1 −)(1 −e

−µ

2

/σ

2

) ≤

1

2

−

March 8, 2012 DRAFT

28

and therefore

µ ≤ σ

_

log

_

2(1 −)

_

.

Therefore the normalized bias introduced by the median estimator in the presence of a target is bounded

by

[µ −µ

∗

[

σ

≤

¸

¸

¸

_

log

_

2(1 −)

_

−

_

log(2)

¸

¸

¸.

Furthermore, using the inequality log

_

2(1 −)

_

< log(2), the bound in (8) follows.

APPENDIX D

GAUSSIANITY OF w FOR PARTIAL FOURIER MATRIX

We report simulation results that demonstrate the approximate Gaussianity of the noise vector w for the

partial Fourier sensing matrix. In Table I we report the p-values from the Kolmogorov-Smirnov (KS) test

[35] for N = 200. The KS test compares the empirical distribution of the input samples to the Gaussian

TABLE I

p-VALUES OF THE REAL (R) AND IMAGINARY (I) PARTS OF w FOR PARTIAL FOURIER SENSING MATRIX, N = 200.

δ 0.1 0.18 0.2 0.2 0.3 0.4 0.4 0.5 0.6 0.65 0.7 0.8 0.8 0.9 0.9

ρ 0.01 0.5 0.05 0.75 0.6 0.2 0.3 0.15 0.3 0.8 0.6 0.3 0.65 0.1 0.45

p-value (R) 0.84 0.99 0.73 0.86 0.93 0.7 0.7 0.73 0.67 0.89 0.87 0.86 0.93 0.96 0.78

p-value (I) 0.72 0.7 0.98 0.94 0.86 0.84 0.9 0.96 0.82 0.94 0.83 0.97 0.78 0.89 0.9

distribution, and the p-value measures the similarity of the input samples to the reference distribution. In

the null hypothesis it is assumed that the samples belong to the reference probability density function.

In our simulations, in all cases the null hypothesis was accepted.

We performed the KS test also for different combinations of N, δ, and ρ and obtained similar results.

ACKNOWLEDGMENTS

Thanks to Prof. J. Ender and T. Mathy from Fraunhofer FHR, Wachtberg, Germany, for making the

LabRadOr radar system available available and for technical support during the experiments. Many thanks

also to W. van Rossum from TNO, The Netherlands, for frequent discussions and constructive comments.

March 8, 2012 DRAFT

29

REFERENCES

[1] R. G. Baraniuk and T. P. H. Steeghs. Compressive radar imaging. In Proc. IEEE Radar Conf., 2007.

[2] A. C. Gurbuz, J. H. McClellan, and W. R. Scott. Compressive sensing for subsurface imaging using ground penetrating

radar. J. Signal Process., 89(10):1959–1972, Oct. 2009.

[3] I. Stojanovic, W. C. Karl, and M. Cetin. Compressed sensing of mono-static and multi-static SAR. In Proc. SPIE Algorithms

for Synthetic Aperture Radar Imagery XVI, 2009.

[4] C. Y. Chen and P. P. Vaidyanathan. Compressed sensing in MIMO radar. In Proc. Asilomar Conf. Signals, Systems, and

Comput., 2008.

[5] J. J. Fuchs. The generalized likelihood ratio test and the sparse representations approach. In Proc. Int. Conf. Image and

Signal Process. (ICISP), 2010.

[6] M. A. Herman and T. Strohmer. High-resolution radar via compressed sensing. IEEE Trans. Signal Process., 57(6):2275–

2284, Jun. 2009.

[7] Y. Wang, G. Leus, and A. Pandharipande. Direction estimation using compressive sampling array processing. In Proc.

IEEE Work. Stat. Signal Process., 2009.

[8] Y. Yu, A. P. Petropulu, and H. V. Poor. MIMO radar using compressive sampling. IEEE J. Sel. Topics Sig. Proc.,

4(1):146–163, Feb. 2010.

[9] J. H. G. Ender. On compressive sensing applied to radar. J. Signal Process., 90(5):1402–1414, May 2010.

[10] L. C. Potter, E. Ertin, J. T. Parker, and M. Cetin. Sparsity and compressed sensing in radar imaging. Proc. IEEE,

98(6):1006–1020, Jun. 2010.

[11] S. Shah, Y. Yu, and A. Petropulu. Step-frequency radar with compressive sampling SFR-CS. In Proc. IEEE Int. Conf.

Acoust., Speech, and Signal Process. (ICASSP), 2010.

[12] L. Anitori, M. Otten, and P. Hoogeboom. Compressive sensing for high resolution radar imaging. In Proc. IEEE Asia-Paciﬁc

Microwave Conf. (APMC), 2010.

[13] A. Maleki and D. L. Donoho. Optimally tuned iterative reconstruction algorithm for compressed sensing. IEEE J. Sel.

Topics Sig. Proc., 4(2):330–341, Apr. 2010.

[14] P. P. Gandhi and S.A. Kassam. Analysis of CFAR processors in homogeneous background. IEEE Trans. Aerosp. Electron.

Syst., 24(4):427–445, Jul. 1988.

[15] H. Rohling. Radar CFAR thresholding in clutter and multiple target situations. IEEE Trans. Aerosp. Electron. Syst.,

19(4):608–621, Jul. 1983.

[16] J. Romberg E. Candes and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure

Appl. Math., 59(8):1207–1223, Aug. 2006.

[17] M. Elad D. L. Donoho and V. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise.

IEEE Trans. Inf. Theory, 52(1):6–18, Jan. 2006.

[18] E. Candes and T. Tao. The dantzig selector: statistical estimation when p is much larger than n. Ann. Stat., 35:2392–2404,

2007.

[19] B. Bubacarr and J. Tanner. Improved bounds for restricted isometry constants for Gaussian matrices. SIAM J. Matrix

Anal., 31(5):2882–2898, 2010.

[20] D. L. Donoho, A. Maleki, and A. Montanari. Message passing algorithms for compressed sensing. Proc. Natl. Acad. Sci.,

106(45):18914–18919, 2009.

March 8, 2012 DRAFT

30

[21] A. Maleki and A. Montanari. Analysis of approximate message passing algorithm. In Proc. IEEE Conf. Inform. Science

and Systems (CISS), 2010.

[22] D. L. Donoho, A. Maleki, and A. Montanari. The noise-sensitivity phase transition in compressed sensing. IEEE Trans.

Inf. Theory, 57(10):6920–6941, Oct. 2011.

[23] A. Maleki, L. Anitori, Y. Zai, and R. G. Baraniuk. Asymptotic analysis of complex LASSO via complex approximate

message passing (CAMP). Submitted to IEEE Trans. Inf. Theory, 2011.

[24] M. Bayati and A. Montanari. The dynamics of message passing on dense graphs, with applications to compressed sensing.

IEEE Trans. Inf. Theory, 57(2):764 – 785, Feb. 2011.

[25] M. Skolnik. Introduction to Radar Systems. McGraw Hill, Third edition, 2002.

[26] D. L. Donoho and J. Tanner. Precise undersampling theorems. Proc. IEEE, 98(6):913 – 924, Jun. 2010.

[27] S. S. Chen and D. L. Donoho. Examples of basis pursuit. In Proc. SPIE Ann. Meeting: Wavelet Apps. Sig. Imag. Proc.,

1995.

[28] R. Tibshirani. Regression shrinkage and selection via the LASSO. J. Roy. Stat. Soc., Series B, 58(1):267–288, 1996.

[29] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM J. on Sci. Computing,

20:33–61, 1998.

[30] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Ann. Stat., 32(2):407–499, 2004.

[31] S. J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky. A method for large-scale 1-regularized least squares. IEEE

J. Sel. Topics Sig. Proc., 1(4):606–617, Dec. 2007.

[32] M. Bayati and A. Montanari. The LASSO risk for Gaussian matrices. To appear in IEEE Trans. Inf. Theory, 2011.

[33] L. Anitori, A. Maleki, W. van Rossum, R. G. Baraniuk, and M. Otten. Compressive CFAR radar detection. In To appear

in Proc. IEEE Radar Conf., 2012.

[34] M. Abramowitz and I. A. Stegun. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables.

National Bureau of Standards, U.S. Dept. of Commerce, 1972.

[35] F. J. Massey. The Kolmogorov-Smirnov test for goodness of ﬁt. J. Amer. Statist. Assoc., 46(253):68–78, 1951.

March 8, 2012 DRAFT

2

I. I NTRODUCTION High resolution radar detection and classiﬁcation of targets requires transmission of wide bandwidth during a short observation time. Wide instantaneous bandwidth necessitates wideband receivers and fast Analog to Digital Converters (ADC) that drive up the cost of a system. In applications using interleaving of radar modes in time or space (antenna aperture), multi-function operation often leads to conﬂicting requirements on sampling rates in both the time and space domains. Measurement time may also be a constraint in real-time applications. Recently, Compressive Sensing (CS) has been proposed to alleviate these problems [1]–[12]. CS exploits the sparsity or compressibility of a signal to reduce both the sampling rate (while keeping the resolution ﬁxed) and the amount of data generated. In many radar applications, the prerequisite on the signal sparsity is often met, as the number of targets is typically much smaller than the number of resolution cells in the illuminated area or volume. Classical radar architectures (without CS) use well-established signal processing algorithms and detection schemes, such as Matched Filtering (MF) and Constant False Alarm Rate (CFAR) processors. CS instead relies on a nonlinear reconstruction scheme such as

1 -minimization

for reconstructing the target

scene. Most of the existing CS recovery algorithms have a threshold or a regularization parameter, that affects the quality of the reconstruction result; for the best performance it is essential to optimally tune this parameter [13]. A similar problem is encountered in classical radar detectors, where the detection threshold adapts to the environment using a CFAR scheme. These detectors ﬁrst estimate the background noise and clutter surrounding the Cell Under Test (CUT) and then calculate the threshold to preserve a constant false alarm rate even in changing environments. Several CFAR schemes have been designed for different types of noise, clutter, and target environments [14], [15]. The design of CFAR-like schemes for CS radar systems has not been addressed so far in the literature, mostly due to the unknown relations between the false alarm probability/noise statistics and the parameters of the recovery algorithm. Moreover, the main focus of CS has been on the reconstruction Mean Square Error (MSE), while in radar, the primary performance parameters are the detection and false alarm probabilities. Furthermore, most of the theoretical results provide only sufﬁcient conditions with unknown/large constants that are not useful for applications [16]–[19]. These issues have been recently addressed in a series of papers [20]–[23]. In these papers a novel CS recovery algorithm has been proposed, called Complex Approximate Message Passing (CAMP). The statistical properties of CAMP enable accurate calculations of different performance criteria such as the false alarm and detection

March 8, 2012

DRAFT

3

probabilities. In this paper, we use the features of the CAMP algorithm to design a fully adaptive CS-CFAR radar detector. Toward this goal we address the following questions. How can the detection probability be optimized against false alarm rate? How can the false alarm rate be controlled adaptively against unknown noise and clutter? How can a CS-radar be designed; in particular, what amount of undersampling is acceptable, and at what cost in terms of power? To this end, in Section II we begin with a summary of the main features of the CAMP algorithm, originally presented in [23] for the CS recovery of complex signals and previously in [20], [21], [24] for the real signal case. In Section III we propose two novel CS radar detection architectures based on CAMP. We predict the performance of these two detectors theoretically by characterizing the statistical properties of the recovered signal. Although the theoretical results on CAMP assume sensing matrices with independent identically distributed (i.i.d.) random entries, both our simulations and empirical results show that the general trend of our ﬁndings also applies to randomly subsampled (or partial) Fourier sensing matrices. Such matrices are particularly important for CS-radar range compression, where subsampling in frequency is a power-efﬁcient way of applying CS [9], as opposed to time undersampling. From the analytical, simulation, and experimental results presented respectively in Sections III, VI, and VII, it is clear that the best architecture consists of a reconstruction stage based on CAMP followed by a detector. For this architecture, in Section IV we propose an adaptive version of CAMP followed by a CFAR processor, resulting in a fully adaptive detector. In Section VIII we derive Signal-to-Noise Ratio (SNR) plots in which the reconstruction SNR, which is input to the detection stage, is plotted against the undersampling factor (δ ) and the relative signal sparsity (ρ). Given a maximum number of expected targets in the scene, such graphs can be used to evaluate how transmitted power can be traded for undersampling and thus serve as a guide for the CS radar designer. In Section IX we draw some conclusions from our theory and experiments. II. C OMPLEX A PPROXIMATE M ESSAGE PASSING (CAMP) A. Heuristic explanation of the CAMP algorithm Consider the problem of recovering a sparse vector xo ∈ CN from a noisy undersampled set of linear measurements y ∈ Cn

y = Axo + n,

(1)

where A ∈ Cn×N is called the sensing matrix and n is i.i.d. complex Gaussian noise with variance σ 2 . In radar systems, y corresponds to the sampled received signal. Each non-zero element of the vector xo

March 8, 2012 DRAFT

(2) where x 1 N i=1 (xR )2 + (xI )2 . We ﬁrst explain each component of CAMP. In the remainder of this paper. and therefore researchers have considered several iterative algorithms with inexpensive per-iteration computations.4 corresponds to the (complex) Radar Cross Section (RCS) of a target and may also include propagation and other factors normally associated with the radar equation [25]. The histogram of wt . One of the most successful recovery algorithms is 1 -regularized least squares. Hence. see [23] and the references therein. it is possible to recover xo accurately [16]. noisy estimate of xo . (i) xt is an estimate of xo at iteration t. i. where η(·. It has been proved that. For more information on these algorithms. xo. If there are k N targets. 1 y − Ax 2 given by x(λ) = arg min x 2 2 + λ x 1. i i i i respectively. [26]..i |2 represents the received power from a target at position i. [31]. we focus on a speciﬁc algorithm called Complex Approximate Message Passing (CAMP) [23]. these approaches are computationally expensive. However. also known as the LASSO or Basis Pursuit Denoising (BPDN) [27]–[29]. λ) is called the complex soft thresholding function. The tuning of τ in terms of λ is given later in (5) and will be explained in detail in Section II-B. xt → x(λ) as t → ∞. 1 is the indicator function. under certain conditions on the measurement matrix and the sparsity of xo . (ii) xt is a non-sparse. λ is called the regularization parameter.i is zero except at the target locations. derivative of η R with respect to the real part of the input. ∂η I ∂xI ∂η R ∂xR is the partial is the partial derivative of η I with respect to the imaginary part of the input. suggests that the empirical distribution of wt is “close” to a zero mean Gaussian probability density function. These methods rely on the fact that the optimization problem arg minx λ x 1 1 2 u−x 2 2 + has a closed form solution η(u. η I and η R are the imaginary and real parts of the complex soft thresholding function. This algorithm has several appealing properties for radar applications that will become clear as we proceed. and maxiter is the (user speciﬁed) maximum number of iterations. with xR and xI denoting the real and imaginary parts of xi .e. Deﬁne the “noise” vector wt = xt − xo at iteration t of CAMP. 2012 DRAFT . March 8. |xo. where · denotes the average of a vector. the number of CS measurements divided by the number of samples to be recovered. If the parameter τ is tuned properly. which is shown in Figure 1. δ = n N is called the compression factor. In this paper. The cost function in (2) is convex and can be solved by standard techniques such as interior point or homotopy methods [30]. We will more rigorously discuss the Gaussianity of wt in Section II-B. The CAMP algorithm is given in Algorithm I. λ) (|u| − λ)ej u. then xo is sparse.

Hence. Since this estimate is not sparse. A. Here for the clarity of exposition we have assumed that the algorithm uses xo in computing the noise standard deviation σt . x. σ∗ Iteration 2 3 Density 2 2 1 0 −0. Histograms of the real and imaginary parts of the CAMP residual noise signal wt at different iterations of Ideal CAMP.5 0 Data 0. In these plots. and σ = 10−3 . 1. In the rest of the paper we use the notation σ∗ limt→∞ σt .5 0 Data 0. CAMP ﬁrst ﬁnds a noisy estimate of the signal called xt .5 4 Iteration 5 8 6 4 2 0 −0. In Section IV we propose a more practical scheme where σt is replaced with an estimate. τ σt ) end Output: x.5 0 −0. 2012 DRAFT . xo Initialization x0 = 0. the soft thresholding function is applied to obtain a sparse estimate xt . τ σt ) xt = η(xt . (iii) σt is the standard deviation of wt .5 Algorithm I: Ideal CAMP Algorithm Input: y. δ = 0. March 8. N = 4000.5 Iteration 10 (b) Histogram of the imaginary part of wt .5 Iteration 10 0 Data 0. Iteration 2 3 Density 2 2 1 0 −0.6. τ σt ) ∂xR ∂η I ∂xI + (xt . τ. z0 = y for t = 1 : maxiter xt = A† zt−1 + xt−1 σt = std(xt − xo ) 1 zt = y − Axt−1 + zt−1 2δ ∂η R (xt .5 0 Data 0.5 0 −0.5 0 Data 0.5 4 Iteration 5 8 6 4 2 0 Data 0.5 (a) Histogram of the real part of wt . we refer to this algorithm as Ideal CAMP. A Gaussian probability density function (pdf) is ﬁtted to the histograms (red line).5 0 −0. Fig.

A(N ).d. Consider the following deﬁnition from [24]. While the above result has been proved for Gaussian measurement matrices in the asymptotic setting. 1 (3) Empirical studies have already conﬁrmed that this theoretical prediction holds for other sensing matrices with i. [22]. It has been proved that in the asymptotic setting N → ∞.1 We explore this claim for the partial Fourier matrices that are of particular interest in radar applications using Stepped Frequency (SF) waveforms in Section VI-A. Using the Gaussianity of the noise in the asymptotic regime. elements other than Gaussian.The empirical distribution of n ∈ Rn (n = δN ) converges weakly to a probability measure pn with bounded second moment as N → ∞. (i) Under what conditions are the above heuristics accurate? (ii) Can we use the properties of CAMP to predict its performance theoretically? (iii) What is the formal connection between CAMP and the LASSO problem deﬁned in (2).i. Deﬁnition II. 2012 DRAFT . drawn from a Gaussian distribution.i. [24]. Let the marginal distribution of xo converge to pX . March 8.The elements of A(N ) ∈ Rn×N are i. simulation results conﬁrm that it is still accurate even for medium problem sizes with N ∼ 200 and different classes of sensing matrices. In the asymptotic setting. [32]. Theorem 2 enables us to answer the second question as well.1. A sequence of instances {xo (N ). and let xt (N ) be the estimate provided by the CAMP algorithm. SE tracks the evolution of the standard deviation of the noise σt across iterations. . The ﬁrst question is accurately discussed in [22]–[24]. State evolution: A framework for the analysis of CAMP There are three important questions that have not been answered yet. n(N )} be a converging sequence. [24] Let {xo (N ). while δ is ﬁxed.6 B. A(N ). the above heuristics are correct. The empirical law of wt (N ) = xt (N ) − xo (N ) converges to a zeromean Gaussian distribution almost surely as N → ∞. .d. the value of the standard deviation at time t + 1 is calculated from σt according to the following equation: 2 σt+1 = Ψ(σt ). n(N )} is called a converging sequence if the following conditions hold: .The empirical distribution of xo (N ) ∈ RN converges weakly to a probability measure pX with bounded second moment as N → ∞. Theorem 1. one can predict the performance of CAMP through what is called the “state evolution” (SE). [20]. and let σt denote the standard deviation of wt .

therefore maximizing the CAMP reconstruction SNR. and the expectation is with respect to the two independent random variables X ∼ pX and Z . the number of non-zero coefﬁcients in xo . to answer the third question we observe that there is a nice connection between the CAMP and LASSO algorithms. τ σ∗ )+ (X + σ∗ Z. Also. the CAMP input and output SNR are deﬁned here respectively as SNRin = a2 /(nσ 2 ) 2 and SNRout = a2 /σ∗ . τ σt ) − X|2 equals the MSE of the estimate xt after applying the soft thresholding function.. τ σ∗ ) 2δ ∂xR ∂xI . if τ is chosen to satisfy λ τ σ∗ 1− ∂η R ∂η I 1 E (X + σ∗ Z. τ σt ) − X|2 . Let k be the number of targets. as the number of measurements decreases. It has been proved [23] 2 that the function Ψ is concave. for which σ∗ is minimized. for a given σ 2 . Consequently. The following two consequences of (3). 2012 DRAFT . (4) are particularly useful for the radar application: (i) The SE framework establishes the input/output relation for CAMP. say τo . δ (4) Z ∼ CN (0. III. amongst other quantities. ﬁxed pX and σ .. In fact. the CAMP threshold τ via σ∗ . (4) has at most one stable ﬁxed point σ∗ . i. 1). Moreover. the output noise 2 power σ∗ is the sum of the actual system noise (σ 2 ) plus a noise-like component caused by the reconstruction itself ( 1 MSE). pX .e. As is clear from the ﬁgure.2 (ii) The ﬁxed point σ∗ depends on τ as well as on δ .7 where 1 Ψ(σt ) = σ 2 + E |η(X + σt Z. minimizing the reconstruction error δ 2 also minimizes σ∗ . CS TARGET DETECTION USING CAMP We now propose two CS target detection schemes based on CAMP and evaluate their performance as measured by their Receiver Operating Characteristic (ROC). and σ . Appendix A provides the calculations involved in the Ψ function for a given distribution pX . i. if σt is the standard deviation of the noise at iteration t. CAMP converges to this ﬁxed point exponentially fast (linear convergence according to optimization literature). Note that the output (reconstruction) SNR depends on. and therefore the iteration (3). then E |η(X + σt Z. According to [23]. Deﬁne ρ = 2 k n. Particularly. March 8. (5) then in the asymptotic setting CAMP with threshold τ solves the LASSO problem (2) with parameter λ. there is a value of τ . and deﬁne G as the distribution of the non-zero For a given target received power a2 . both the optimal threshold τo and the corresponding output noise standard deviation increase. A more rigorous deﬁnition of SNR is given in Section V. Finally. Figure 2 exhibits the dependence of σ∗ on τ for two distinct values of δ and for a ﬁxed problem instance.e.

Gaussian entries. 2012 DRAFT . 1). and σ are known. xo (N ). In this section we assume that k .1. The two architectures we consider in this paper are displayed in Figure 3.35 0.i. Clearly. black line). τα is the only value of τ for which (6) holds.2 (dashed. According to [32] we know that 1 lim N →∞ N − k N 1{xi (N )=0.2 σ* 0.5 δ = 0. the measurements y are input to a recovery algorithm (here CAMP or equivalently LASSO). Note that the connection between τ and the detection probability is not clear yet. Architecture 1 seems a natural choice for a CS radar detector. i=1 (6) almost surely. In Section IV we will investigate the realistic scenario where these parameters are not known a priori and describe how to implement the proposed architectures in this case. w(N ) 1{xi (N )=0. then 1 lim N →∞ N − k N √ − ln α.23.xi (N )=0} = α. Roughly speaking. the threshold parameter τ in CAMP (or equivalently the regularization parameter λ in LASSO) controls the false alarm probability α and detection probability (Pd ) of the algorithm.6 0.6 δ = 0.3 0. i=1 2 The uniqueness is also clear from the above equation.8 0. If A(N ).d. In this architecture.45 0. compression factors δ = 0.5 1 1. Proof: Deﬁne z ∼ CN (0. Consider the CAMP iteration with threshold τα = is a converging sequence and x(N ) is the ﬁxed point of CAMP.5 τ 2 2.5 3 Fig. elements of xo . σ 2 is the variance of the system noise. G. Fixed point σ∗ versus threshold τ for Ideal CAMP with σ = 0.6 (solid.xi (N )=0} = P(|σ∗ z| > τα σ∗ ) = e−τα . Proposition III. As before. red line) and δ = 0.55 0. Also. the above proposition states that the number of false alarms are on the order of (N − k)α. The sensing matrix A has i. We will discuss this issue later.25 0. This algorithm returns a sparse vector x with the non-zero values being detections. 2. March 8. Since any sparse recovery algorithm is intrinsically a detection scheme.4 0.

Then this noisy estimate is fed to a detection block that controls the False Alarm Probability (Pf a or FAP). Theorem 2. it is clear that in the asymptotic setting this choice of κ results in the false alarm probability α as derived in Proposition III. Furthermore. the noisy signal x can be fed to a detection block with ﬁxed threshold κ = σ∗ − ln(α). As will be clariﬁed later. 3. then Pd. non-CS radar approach. where a noisy estimate of the signal is computed.9 (a) Architecture 1 Fig. Similar to classical estimation procedures. Under these assumptions. we introduce Architecture 2. We ﬁrst use CAMP to obtain a noisy. which provably hold under the conditions speciﬁed in Theorem 1. If Pd. is used to obtain a noisy estimate of the signal with optimum SNR. 2012 DRAFT . Furthermore the equality is satisﬁed at only one speciﬁc value α = e−τo . non-sparse estimate of the signal x = xo + w. (b) Architecture 2 The second architecture is inspired by the standard.2 . which is used to control the false alarm rate. From the Gaussianity of w. radar detectors comprise two stages: an estimation stage.1. Set the probability of false alarm to α for both Architecture 1 that uses τα and Architecture 2 that uses τo in CAMP. the following theorem can be derived. from a detection perspective.1 and Pd. Suppose that the Gaussianity of wt holds. and σ . Block diagrams of the proposed architectures for CS-radar detection. the goal of the ﬁrst stage is to minimize σ∗ by choosing the optimal CAMP threshold τ = τo .1 ≤ Pd. a Matched Filter (MF). Let τo be the optimal value of τ that leads to the minimum σ∗ and that this optimal value is unique and can be computed using (4). The above theorem is proved in Appendix B by comparing the detection and false alarm probabilities 2 March 8. G. Architecture 2 is much more appropriate for practical radar applications. Architecture 2 outperforms Architecture 1. matched to the transmitted waveform. Once σ∗ is minimized or equivalently the SNR is maximized. Inspired by this philosophy and by the properties of CAMP. Usually. followed by a detection stage. since all of the parameters can be optimized and estimated efﬁciently even without prior knowledge of k . Most commonly.2 are the detection probabilities of the two schemes.

and therefore Pd. a lower SNR. Therefore σ∗ (and hence the CAMP reconstruction SNR) also changes with Pf a . as explained later in Section VI.3 3 The difference in performance between the two architectures depends signiﬁcantly on the values of (δ. σ∗ is ﬁxed to its minimum along the ROC curve. MC 10 10 Pfa −2 10 0 Fig.2 increases with increasing FAP. The dots are the results of MC simulations using Ideal CAMP.i. σ) and on the sensing matrix in use.6 Arch. 2.. the probability of detection.e. The theoretical ROC curves are veriﬁed by Monte Carlo (MC) simulations (dots). This explains why Pd. In Figure 4 we can also observe an interesting characteristic of Architecture 1 in the region around Pf a = 0.8 0. any other choice of τ will lead to a higher σ∗ . 4. in Architecture 2.4 and then decreases again as the Pf a goes to 1. MC Arch. Example ROC curves for Architectures 1 and 2 are shown in Figure 4.9998 0. decreases as Pf a increases above this value.2 . However. i.1 and σ 2 = 0. ρ. Instead. March 8.1 . 1. Theoretical Arch. Comparison of the ROC curves for Architectures 1 and 2 at δ = 0. From this plot it is also possible to observe that there exists only one value of FAP (α = 0.2 0 −6 10 −4 1 0.4 0. since in Architecture 2 the CAMP threshold is designed to minimize the output noise variance. and it is not constant along the ROC curve.1 = Pd. Theoretical Arch. In the simulations we averaged over 10000 realizations of the Ideal CAMP algorithm given in Algorithm I for several values of Pf a ranging from 10−1 to 10−5 . The solid and dashed lines are obtained using the analytical equations derived in Appendix B.05. Pd.4 0.6. Gaussian entries. To understand why.6 Pd 0. intuitively it is clear that.2 0. and therefore a lower Pd . for the same Pf a . The distribution G of the non-zero coefﬁcients of xo is chosen such that all non zero components have the same amplitude (equal to 1) and phase uniformly distributed between −π and π. of the two architectures.22) where Pd. The sensing matrix for MC simulations had i. 2. recall that in Architecture 1 the CAMP threshold τα varies with Pf a .d.4 (the zoomed area). 2012 DRAFT . ρ = 0.10 1 0.1 reaches its maximum at around Pf a = 0. The solid and dashed lines represent the theoretical predictions based on the SE equation. 1.

and G (to derive the theoretical ﬁxed point) and xo (to run Ideal CAMP). i. in Architecture 2. namely how to 2 estimate. ln 2 (7) This estimator is unbiased if xo = 0. σ∗ ). σ∗ 2 ln 2 (8) This proposition is proved in Appendix C. for small values of . in very sparse situations.11 IV. However. Proposition IV.e. and both the CAMP and detector parameters must be estimated from the CS measurements y. efﬁciently and accurately.e.d.. So. Second. in the asymptotic setting at each iteration of CAMP we have xt = xo + wt . since we only have access to x. when xo = 0. First. the distribution G of the non zero components of xo does not play any role. σ . (i) How to estimate σt without knowing xo . (iii) How to replace the ﬁxed threshold κ for the detector in Architecture 2 with an adaptive threshold to maintain a CFAR. There are three main issues that have to be settled. i. The ﬁrst question can be answered in several different ways. The main advantage of this scheme is its robustness to high SNRs. It is important to note several interesting properties of the upper bound (8). one that does not require any prior information and that adapts to changes in noise level.i ∼ (1 − )G(x) + (1 − )δ0 (x) with i. the error is proportional to the sparsity level.1. we consider the performance of the median estimation in the asymptotic setting. To see this. in the presence of targets. The estimate of σt provides an approach to answer the second above question as well. However. (ii) How to compute the optimal value τo for Architecture 2.i. For instance.. the optimum CAMP threshold τo that minimizes σ∗ . We will refer to this algorithm simply as CAMP or Median CAMP. Assume that the elements of xo are distributed as xo. we estimate the median of |wt | as the µ that satisﬁes P(|xt | > µ) = 1 . in practical radar systems such information is not available. without the SE. a practical implementation of Algorithm I is obtained by replacing σt with the estimate in (7). 2012 DRAFT . = δρ t 2 1 and wi ∼ CN (0. The following theorem provides an upper bound on the i 2 deviation of the estimated median from the true median. hence. The goal is to estimate the median µ∗ of |wt |. in this paper we use the median to estimate the standard deviation via σt = 1 median(|xt |). when the noise level and xo are unknown. In this section we demonstrate how these issues can be handled in practice and propose a fully adaptive scheme. (7) is a biased estimator.. However. we have assumed we know exactly ρ. A DAPTIVE CAMP CFAR RADAR DETECTOR So far. The error of the estimated median is bounded above by |µ − µ∗ | | ln(1 − )| √ ≤ . i. As mentioned above. | ln(1 − )| ≈ and. Suppose that we know March 8.e.

We now explain how to set τmax . 2NG guard cells immediately adjacent to the CUT are excluded from the CFAR window. say xo. Given a step size δτ . In CFAR schemes the Cell Under Test (CUT). The great advantage of CFAR schemes is that they are able to maintain a constant false alarm rate via adaptation of the threshold to a changing environment. At the ﬁrst iteration ( = 1 and t = 1) the CAMP algorithm is initialized with x0 = 0 and z0 = y. Hence. The general form of a CFAR test is H1 X H0 βY.i . Starting from τmax . and therefore the entire process is very fast. and the only input variables are y and A. the noise statistics are not known in advance. 2012 DRAFT . we have L estimates {σ∗ }L . Using the calibration equation (5) with λ = λmax and σ∗ = σ0 . The 2Nw cells (CFAR window) surrounding the CUT are used to derive an estimate of the local background and they are assumed to be target free. β is a threshold multiplier. The optimum estimated threshold τo is chosen as the one that minimizes the estimated CAMP =1 2 output noise variance σ∗ . and we have x = AH y. Using the solution of CAMP at the previous iteration − 1 as initial value for τ = τ . we can compute an estimate of τmax .12 or can estimate τmax such that τo < τmax . where AH is the Hermitian of the matrix A. xL ] of size N × L. since both the noise variance σt the threshold τo are adaptively estimated inside the algorithm itself. We will refer to this algorithm as Adaptive CAMP. we deﬁne a sequence of thresholds τ = {τ }L such that τ1 = τmax and τ = τ =1 −1 − δτ . Consider now the LASSO problem in (2). · · · . After L iterations. where each column contains the CAMP solution for a given τ . but it also results in a more accurate estimate of τo . x2 . at each new iteration CAMP is initialized with x0 = x −1 and z0 = z −1 . In practice. we have a matrix of solutions X = [x1 . It is clear that δτ speciﬁes the trade-off between computational complexity and the accuracy of the algorithm in estimating τo . and Y is also a random variable function of the cells in the CFAR March 8. Suppose that σ0 is estimated from this vector. (9) where the random variable X represents some function (generally envelope or power) of the CUT. It is well known that for λ > λmax = AH y ∞ the only solution is the zero solution. Decreasing δτ increases the number of points L needed to span the same τ search region. Commonly. in classical radar detectors a CFAR processor is employed. Also. In Appendix B we show how to set the ﬁxed threshold κ to achieve the desired FAP when the noise variance σ 2 is known. CAMP needs only a few iterations to converge to the solution. however. is tested for the presence of a target against a threshold that is derived from an estimated clutter plus noise power. It remains now to establish how to replace the ﬁxed threshold κ with an adaptive one for Architecture 2.

we provide a deﬁnition of SNR for the CAMPbased CS radar system. V. in Architecture 2 replacing the ﬁxed threshold detector with a CFAR detector should provide comparable results as in classical CFAR without CS. Hence. Instead. Block diagram of the Adaptive CAMP CFAR detector. We observed in (4) that the variance σ∗ of the noise present in the CAMP estimate depends on the 5-tuple (δ. pX . Pf a ) curves. ρ. the signal x contains all non-zero samples. z ∼ CN N (0. xo. In fact. window [xo. the noise distribution should be known to design a CFAR scheme. xo. For the properties of CAMP. Let A+ ∈ CN ×N be the measurement matrix for the case δ = 1.i−Nw −NG . Y is the average of the cells in the CFAR window [14]. A conventional radar processor feeds the measurement vector y = A+ xo +σz. σ and for a speciﬁc 2 Architecture (that ﬁxes τ ). then estimation of the noise characteristics would be very difﬁcult. Clearly. but also to compare the novel CS based architectures to more classical ones for 2 which the performance is known. and it is modeled as the sum of targets plus Gaussian noise. · · · .i+Nw +NG ]. can vary signiﬁcantly with δ and ρ. τ. If the CAMP estimate x were input to a CFAR processor. · · · .i+NG +1 . since many samples in x are identically zero. σ∗ . Therefore it is important to relate the performance of a given CS scheme to that of MF-based classical systems. This deﬁnition will be useful not only for understanding the performance of the proposed detectors. A block diagram of the Adaptive CAMP CFAR detector based on Architecture 2 is shown in Figure 5. In the well known Cell Averaging CFAR (CA-CFAR) detector. i. one has to know the relation between the CFAR threshold multiplier and Pf a so that β can be adjusted to maintain Pf a constant during the observation time.i−NG −1 . where the SNR after the MF is uniquely determined for a given input SNR and integration time. even for ﬁxed pX . this can be directly input to a conventional CFAR processor. D EFINING THE SNR FOR CS CAMP Before proceeding to the simulations and experiments. to a MF to obtain a noisy March 8. and therefore the (Pd . no undersampling. 2012 DRAFT . I).. 5.e. σ ). xo.13 Fig. This is uncommon in classic radar systems.

with the above assumptions made.e. With this assumption. 2012 DRAFT .5 In the case of partial Fourier sensing matrix. We assume these (ideal) conditions are satisﬁed when computing the MF SNR. if we normalize (4) by a2 .e. Once CAMP has converged we have access to both the sparse estimate x and its noisy version x.14 estimate xM F = xo + σz of the target received signal with optimum SNR. The constraint of keeping the received target power equal for different amounts of undersampling enables us to evaluate the changes in SNRCS due to reconstruction and not due to a reduction in total signal power. then we keep the ﬁrst term on the right hand side ﬁxed (which is equal to SNRM F ) and observe the change in SNRCS produced by the reconstruction error term (1/δ MSE). x = xo + σ∗ z. The elements of a partial Fourier matrix are given by ai. March 8. we deﬁne the SNR at the output of MF or CAMP respectively as SNRM F = a2 . As stated before. In other words. 4 In a multiple target scenario. we can derive the CAMP output SNR as a function of the equivalent MF output SNR. Indicating with a2 the target received power at bin i. Note that. σ2 a2 .4 CAMP enables us to deﬁne SNR in a similar way.. · · · . the MF SNR is optimum and independent of the number of targets as long as each target is exactly on a grid point and the matrix A+ is orthogonal. n and j = 1. described later in Sections VI and VII. N . the SNR that we would obtain without compression using a MF. with partial Fourier matrices. dividing both sides of (4) by a2 . 5 This is achieved by using sensing matrices with unit column norms in all simulations. i. for which SE provably applies. this is achieved in practice by dividing the total available transmit power over the subset of transmitted frequencies. [11]. An n × N partial Fourier matrix is obtained from an N × N discrete Fourier transform matrix by preserving only a random subset n of the original N matrix rows. · · · . we replace the Gaussian sensing matrices.j = e−j2π(i−1)(j−1)/N . SNRM F represents an upper bound on the highest SNR that can be obtained from CAMP. [33]. for i = 1. S IMULATION RESULTS In this section we investigate the performance of median and Adaptive CAMP using MC simulations and compare it with the theoretical results obtained from the SE. 2 σ∗ SNRCS = (10) In the remainder of the paper we assume that the total transmitted (and received) power is independent of δ . i. VI. which are of particular interest in radar applications transmitting Stepped Frequency (SF) waveforms [9]. Moreover. even in medium size problems x can be accurately approximated 2 by the sum of the true target vector plus white Gaussian noise with variance σ∗ .

In these plots σ 2 = 0. A. The sensing matrix is partial Fourier.7 (b) Histogram of the imaginary part of w. for τ = τo . ρ = 0. ρ = 0.1 2 δ = 0. Histograms (bars) of the real and imaginary parts of the the noise signal w for different combinations (δ.5 1 δ = 0.7. However. To obtain the histograms we used CAMP with a ﬁxed (not necessarily optimal) threshold τ and we ﬁxed σ = 0.2 0 −0.2 0 Data 0. ρ = 0. ρ = 0. March 8.1 2 δ = 0. Accuracy of the SE We investigate the accuracy of the SE by comparing the theoretical results obtained from (4) with simulations results obtained using the Ideal CAMP algorithm. The following two remarks can be made: (i) SE does not predict the performance of CAMP accurately for partial Fourier matrices.2 4 Density 6 4 2 2 0 −0.15 δ = 0. In Figure 6 we show a few examples of the empirical distribution of w at convergence for different combinations of δ and ρ. 6. 2012 DRAFT .2 0 Data 0. red line).2. Fig.2 4 Density 6 4 2 2 0 Data 0.7 0 −0.2 0 −0.5 0 −2 0 Data 2 1 0.2 0. δ = 0.5 δ = 0.2 0 Data 0.2 0 −0.2 (a) Histogram of the real part of w.1. ρ = 0.8. ρ = 0.5 0 Data 0. B.5 1 δ = 0. ρ = 0.4 −0. See Appendix D for the p-values of Kolmogorov-Smirnov test.2. We further investigate its Gaussianity using the Kolmogorov-Smirnov test in Appendix D.2. the value of σ∗ for Fourier is not very different from the value for Gaussian.1 and N = 4000. we study the behavior of σ∗ for the case of a partial Fourier sensing matrix and investigate how it deviates from the theoretical case of a Gaussian sensing matrix.5 0 Data 0.9. ρ) using CAMP with threshold τ = 1. A Gaussian pdf is ﬁtted to the histograms (solid.5 δ = 0.2.9. Simulation results conﬁrm that the Gaussianity of the noise vector is preserved for partial Fourier matrices as well.4 0 −0. ρ = 0. for which the SE applies. Gaussianity of w using partial Fourier matrices In this section we investigate the Gaussianity of the reconstruction noise vector wt for a partial Fourier sensing matrix using MC simulations.7. Moreover. Figure 7 compares σ∗ obtained from Ideal CAMP for the case of complex Gaussian and partial Fourier sensing matrices with the theoretical one from the SE.5 0 −1 0 Data 1 1 0.

the columns of the matrix become more correlated.3 0. This is shown in Figure 9. blue line shows the analytical σ∗ computed from (13) in Appendix A.5 3 0. i. δ → 1. as the number of measurements increases.5 2 τ 2. However. as is clear from this ﬁgure and conﬁrmed by the upper bound provided in Appendix C. and different δ.5 2 τ δ = 0. σ∗ versus τ from Ideal CAMP for both complex Gaussian and partial Fourier sensing matrices. where we plot the ROC for Architecture 1 using both Ideal and Median CAMP.27 0.26 0.28 0. The solid.1 0. N = 4000. This is related to the March 8. for the value of δ used here.27 0.25 1. Furthermore. Effects of the median estimator in CAMP We investigate the effect of replacing the true σt with the median based estimate σt from (7) when xo = 0. From this ﬁgure we also observe that.29 0.27 0.3 0.5 3 Fig.e.3 0.5 τ 2 2. (ii) As δ → 0 the predictions of SE become more accurate for Fourier. Gaussian CAMP Ideal. For a ﬁxed FAP (that ﬁxes τα ). Architecture 1 performs much better when using the partial Fourier sensing matrix as compared to the Gaussian one.28 0. therefore resulting in losing both detection performance and the CFAR property (since the output Pf a is underestimated).i.2 2.29 σ* 0.d. In Figure 8 the estimated output noise standard deviation is shown for both Ideal and Median CAMP. We observe that the estimate σ∗ deviates from the Ideal CAMP case because of the bias introduced by the estimator.3 0.26 0.29 0.05. the deviation diminishes as = δρ decreases. In Architecture 1. This is because the soft thresholding function in CAMP uses the parameter τα σ∗ . the bias will have the effect of increasing the overall threshold.5 2. 2012 DRAFT .28 σ* 0.5 3 1 1. overestimating σ∗ will result in a loss in performance compared to the case of using Ideal CAMP. C. The empirical curves are obtained by averaging over 100 MC samples for σ 2 = 0. Fourier Theoretical SE σ* δ = 0.26 0. entries.28 σ* 0.16 δ = 0. 7.29 0. and hence the true behavior deviates from the SE that holds for matrices with i.26 2 τ δ = 0. ρ = 0.05..27 0.5 3 1.05 CAMP Ideal.

The curves are obtained by averaging over 100 MC realizations for σ 2 = 0.95 0. This implies that.5 2 τ 2.29 0.1 0. 2 variation of the noise variance σ∗ .05.27 0. Fourier σ* δ = 0.28 0.5 2 τ δ = 0.85 0.5 3 Fig.29 0.28 σ* 0.1. 2012 DRAFT . for decreasing FAPs. 9.5 3 1. Here N = 1000. 8.5 τ 2 2. Fourier CAMP Ideal.6 0. March 8.3 0. Gaussian CAMP Adaptive.25 1.3 0. with the threshold τα along the ROC curve.3 0. δ = 0.5 2. (b) Partial Fourier sensing matrix ROC curves for Architecture 1 using Ideal and Median CAMP.28 σ* 0. N = 4000. and various δ. in the partial Fourier case the SNR in Architecture 1 deviates much less from the optimum SNR that is achieved for τ = τo . and therefore the SNR.27 0.4 Pd Median CAMP Ideal CAMP −4 0. As can be seen from Figure 8.05 CAMP Adaptive. in the partial Fourier case as δ increases the variance curve becomes ﬂatter than in the Gaussian case.75 10 −4 Median CAMP Ideal CAMP 10 −3 10 10 −3 10 −2 10 −1 Pfa 10 Pfa −2 10 −1 (a) Gaussian sensing matrix Fig.5 3 0. a2 = 1.29 0.9 Pd 0.26 0.29 σ* 0.05.2 2.6.8 1 0.27 0.27 0. Gaussian CAMP Ideal.26 2 τ δ = 0. 1 0.28 0.26 0.26 0.5 3 1 1.05 (corresponding to a MF SNR = 13dB). Output noise standard deviation versus τ for both Ideal (σ∗ ) and Median (σ∗ ) CAMP for complex Gaussian and partial Fourier sensing matrices.17 δ = 0.8 0. ρ = 0. and σ 2 = 0.3 0. ρ = 0.

8dB for the Gaussian sensing matrix and 12.18 1 0.9 Ideal CAMP CFAR Adaptive CAMP FT Adaptive CAMP CFAR Ideal CAMP FT Theoretical CA−CFAR −4 Pd Pd 0. and (d) a fully adaptive scheme consisting of Adaptive CAMP followed by CA-CFAR processor.8 0. Therefore .in the basic analysis of CFAR schemes.6. a2 = 1.95 0.7 10 10 −3 0. δ = 0. (b) Ideal CAMP in combination with the CA-CFAR detector. D.05 (corresponding to a MF SNR = 13dB). then they cause a rise in the adaptive threshold. 10. One loss is due to the use of Adaptive. We also show the theoretical curve of a CA-CFAR processor with the same window length and SNR = 11. For the CFAR processor. The SNR can be estimated March 8. Here N = 1000.95 0.85 0.9 0. to reduce the interference losses encountered in CA-CFAR. Adaptive CAMP is followed by a CA-CFAR processor preceded by a Square Law (SL) detector (recall Figure 5). (c) Adaptive CAMP with ideal (ﬁxed threshold) detector. To keep the discussions concise.1. Adaptivity imposes extra losses on the system. Pf a ) performance of the Adaptive CAMP CFAR detector. A second loss is caused by the CFAR processor and its estimate of the noise standard deviation. it is assumed that there are no targets present in the CFAR window. such as Order Statistic (OS) CFAR [15]. This means that there is an error in estimating τo .75 0. In Figure 10 we show the ROC curves for Architecture 2 obtained using: (a) Ideal CAMP with ideal (ﬁxed threshold) detector. ρ = 0. Following a similar approach. (b) Partial Fourier sensing matrix ROC curves for Architecture 2 for different levels of adaptivity. several dedicated CFAR schemes have been proposed in literature.8 0. and σ 2 = 0. Adaptive CAMP CFAR detector performance We investigate now the (Pd .4dB for the partial Fourier sensing matrix. we consider the case of multiple but non-interfering targets. It is a well known fact that.75 −4 10 Ideal CAMP CFAR Adaptive CAMP FT Ideal CAMP FT Adaptive CAMP CFAR Theoretical CA−CFAR 10 −3 10 −2 10 −1 Pfa 10 Pfa −2 10 −1 (a) Gaussian sensing matrix Fig. if one or more targets are present in the CFAR window. we use a CFAR window of length 20 with 4 guard cells.85 0. This scenario is referred to as the non-interfering targets scenario.instead of Ideal CAMP. thus possibly masking the target in the CUT that has yet to be detected. 2012 DRAFT . FT denotes the use of an (ideal) ﬁxed threshold detector. For the case of interfering targets. In the simulations for Architecture 2. we do not pursue these directions and leave them for future research.

which is derived from a plot like the ones shown in Figure 7 for the case δ = 0. In the experiments. Each frequency is transmitted during 0. resulting in 37. A more detailed description of the radar system and the setup is given in [33]. 2012 DRAFT . as predicted by Theorem 2. For the partial Fourier sensing matrix. the expected types of target and environment).95MHz. In the Nyquist case (that represents unambiguous mapping of ranges to phases over the whole bandwidth) we transmit N = 200 frequencies over a bandwidth of 800MHz.25. by comparing the curve of Adaptive CAMP plus CFAR with the theoretical one of a CA-CFAR processor (without CS) with equal parameters. E XPERIMENTAL R ESULTS To verify our theoretical and simulated results. A. By comparing Figures 9 and 10 it can be seen that. March 8. thus implying a bandwidth of Bf = 1.5 and 0.5m unambiguous range (∆R). meaning that the TX signal consists of a number of discrete frequencies. This is due to the fact that. The achievable range resolution is therefore δR = 18. Architecture 2 always outperforms Architecture 1. the window length depends on the speciﬁc application (e. This is the well-known CFAR loss [14]. Transmitted waveform We use a SF waveform.85. in the ﬁxed threshold case. The corner reﬂectors are about 5m apart in range. VII. we use ﬁve stationary corner reﬂectors of different sizes (RCS) as targets. For the Gaussian sensing matrix the optimal threshold for Architecture 2 (using Ideal CAMP) is computed using the SE.512µs. Furthermore. which can be controlled by changing the CFAR window length. the value of τo at which the minimum σ∗ occurs is very close to the true optimal τo (see Figure 8).19 during simulations. and sequential frequencies are separated by ∆f = 4MHz.. although σ∗ is biased. we performed several experiments at Fraunhofer FHR using the software deﬁned LabRadOr experimental radar system. The digital SF waveforms were designed to acquire a set of CS measurements for δ = 0. In general. It is clear from this ﬁgure that Adaptive CAMP introduces almost no loss in the detection performance of Architecture 2. For each transmitted (TX) waveform 300 measurements (with the same set-up) were performed. the threshold in Ideal CAMP is set to τo = 1. or it can be derived using the procedure described later in Section VIII.6.g.75cm. The main loss instead is introduced by the adaptive CFAR detector. we observe that the CA-CFAR detector performance seems independent of the fact that the input to the detector is obtained by running CAMP instead of a conventional MF.

In the CS case. .05. . N . Figure 11 exhibits the signals reconstructed using Architectures 1 and 2 in addition to the MF. there is a leakage of target power into neighboring range bins both for MF and CS. Therefore. In this ﬁgure the ﬁve corner reﬂectors are indicated as T1–T5. For the MF. 1 CAMP Arch.1. τα was set using a FAP = 10−4 . The subset of transmitted frequencies is chosen uniformly at random within the total transmitted bandwidth. Also. ri = r0 + i ∆R/N . n. . irrespective of n. each range bin. i = 1.20 10 log(amplitude) 4 CAMP Arch. 2012 DRAFT . This enables us to analyze the effects of undersampling and transmitted power separately. In the experimental measurements. . irrespective of the number of transmitted frequencies. 50. and MF. Reconstructed range proﬁle using: CAMP Architectures 1 and 2. After reception and demodulation. to evaluate the performance of the detectors at March 8. km = 2πfm /c is the wave number. the number of TX frequencies is reduced from N to n (n < N ). and the n samples ym of the compressed measurement vector y are given by 1 ym = √ n N e−j2km ri xo. We assume that the same total power is transmitted. for CS. since the SNR is very high for all targets (in all cases above 20dB). so that the total transmitted energy (PT × n/Bf ) is the same in all cases. CFAR CAMP adaptive detector performance We show now the ROC curves obtained using CAMP-based detection on the measured data. due to the fact that targets are not exactly on Fourier grid points. This means that when the number of measurements is reduced by δ . and n = 100. . . 0. corresponding to δ = 0. N = 200. δ = 0. we ﬁx N = 200. i=1 (11) where a(m) is the mth row of the partial Fourier matrix. 11.i = a(m) xo . In our experiments. m = 1.5 and ρ = 0. For Architecture 1. maps to n phases proportional to the n transmitted frequencies fm = f0 + m∆f . the power per transmitted frequency PT is 1/δ times higher than in the Nyquist waveform case. and c is the speed of light.5. .25. . 2 MF T1 T2 T5 T3 T4 10 3 10 2 10 1 0 5 10 15 20 Range [m] 25 30 35 Fig. 0. B. the sensing matrix A can be represented as a partial Fourier matrix.

CAMP will perform better for larger δ .25 ROC curves for Architecture 1 and Architecture 2 using both ﬁxed threshold (FT) and CA-CFAR detectors. while the total transmitted power remains ﬁxed for δ = 0.5 Fig. 2 CA−CFAR Arch.5 and 0. T3.8 T1 1 0.5 and 0. Therefore.6 0. Since a loss in SNR translates directly into a loss in detection probability.25. March 8. When Architecture 2 is followed by a CA-CFAR processor. In Architecture 1. In these plots. 1 10 Pfa −1 0. 2 CA−CFAR Arch. for δ = 0. indicated as T1.2 and 26dB. 10. 12. CAMP recovery is followed by either a ﬁxed threshold (FT) detector or a CA-CFAR processor that uses four guard cells and a CFAR window of length 20. the FAP was estimated by excluding the range bins corresponding to the target locations plus four guard cells. For Pd estimation. As observed in the simulated results in Figures 7 and 8. we used the detection at the location of the highest target peak. in agreement with our theoretical ﬁndings. 2012 DRAFT . The curves correspond to targets T1. 16. the highest FAP that can be estimated is equal to δ . respectively. as expected. the reconstruction noise variance σ∗ increases.2 0 T4 10 −2 0 10 10 −3 10 −2 10 Pfa −1 10 0 (a) δ = 0.25.21 1 0. we added additional white Gaussian noise (with σ = 500) to the raw frequency data samples. and thus the CAMP SNR decreases. From Figure 12 we observe that. for a given FAP.4 0. 2 FT Arch. medium SNR values.6.6 0. since the sparse estimated signal x cannot have more than n out of N non-zero coefﬁcients. T3 and T4 in Figure 11. (b) δ = 0.2 10 −3 0.4 Arch. resulting in a MF output SNR of 17. Figure 12 shows the ROC curves for three of the ﬁve targets with different SNRs. 2 FT Arch. 1 T1 T3 T3 P Pd T4 d 0. from the closest corner (T1) to the farthest one (T5). the reconstruction SNR for each target depends on the architecture used in addition to the compression factor δ . For each target the ROC curve is shown using both Architecture 1 and Architecture 2. there is a loss in detection performance compared to the ﬁxed threshold case. Recall that in CAMP the reconstruction SNR is the ratio of the estimated target power to the system 2 plus reconstruction noise power (σ∗ ).8 Arch. the detection probability of Architecture 2 with the ﬁxed threshold detector is always higher than that of Architecture 1. as the number of measurements is reduced. 14. In Architecture 2. and T4 in Figure 11.2.

as shown in Figure 14(b) for a few sets of points in the map.e. 7 For sensing matrices other than Gaussian. since the entire signal is used in the noise estimation and the threshold τα is ﬁxed.e. Given the detection range. the SNR map of CAMP can be obtained via simulations using the Ideal or Adaptive CAMP algorithms. an estimate of the expected number of targets should be made. if we consider Architecture 2 and we assume that the received power is equal to a2 for all targets. in contrast. Observe that in both cases the curves are equal for the same kmax up to approximately 6 This choice of target amplitudes distribution provides a lower bound on the SNR performance. i.i. the two proposed architectures perform very similarly. 2012 DRAFT . target RCS and system noise. i. it can not adapt to local variation of noise level. Gaussian entries and the curves are obtained using the theoretical SE. For Figure 13 we used the analytical equations from Appendix A.. we can vary the number of CS measurements n to obtain several values of δ = n/N and corresponding ρ = kmax /n. k could be set based on the maximum expected number of targets.7 It is clear that CS radar performance depends.22 At very low and very high SNRs. we obtain a curve that shows how power (SNRCS ) and undersampling (δ ) can be traded against one another. By evaluating the SNR at these points. Architecture 2 always outperforms Architecture 1. Computing the ratio a2 /σ∗ we obtain the CAMP output SNR map. To design the system for the worst-case scenario.. an example of which is shown in Figure 13 for a Gaussian sensing matrix.d. we propose here a methodology for designing CS radar detectors based on CAMP. Therefore. we observe that Architecture 1 is comparable to an OS-CFAR detector where the CFAR window is the entire signal. on δ and ρ. Architecture 2. ρ). VIII. Then. However. provides the ﬂexibility to choose both the most appropriate CFAR processor and CFAR window length depending on the speciﬁc scenario. a serious disadvantage of Architecture 1 is that. the ﬁrst step is to compute the transmitted power necessary to reach a given CAMP output SNR. March 8. including the CUT. for a given number of range (or Doppler) bins N and a given σ .6 then using 2 2 equation (4) for a given σ 2 we obtain a value of σ∗ for each couple (δ. In Figure 14(b) the sensing matrix is partial Fourier and the curves are obtained using MC simulations with Adaptive CAMP. This makes Architecture 1 unsuitable for many radar applications. in the system design phase. However. k = kmax . D ESIGN M ETHODOLOGY Using the tools developed in the previous sections. Furthermore. at low FAPs and high Pd . For example. In Figure 14(a) the sensing matrix has i. In Figure 14 we show an example of such curves for several values of kmax and N = 200. besides on the SNR. which is the most relevant case in practical situations. 2Nw = N . being this the least favorable distribution for the non-zero entries in xo [23].

a = 1. and σ 2 = 0.3 0.d.8 1 10 8 6 4 2 0 0.i. δ = 1) the Gaussian sensing matrix is not orthogonal. as δ → 1 the partial Fourier matrix approaches a full Fourier matrix. that we would like to choose δ = 0. it becomes orthogonal.2 0. (b) partial Fourier sensing matrix CAMP output SNR versus δ for Architecture 2 for different numbers of targets kmax .05. δ = 0. ρ) we derive a value for the CAMP output SNR equal to 11. and SNRCS → 13 dB.. even in the limit n = N (i.d.8 0.9 0.5 0.6 kmax = 1 kmax = 6 kmax = 12 kmax = 24 kmax = 48 0. we can use them to predict the performance of our CAMP CFAR detector.4 0.23 12 0.4 δ 0. On the contrary.5 0. Assume. In this case the upper bound (SN R = 13dB) cannot be achieved.2 0.6 0. For this combination (δ.05. CAMP output SNR for Architecture 2 for a = 1 and σ 2 = 0.7 0. 13.2 0. we estimate that if kmax > 1. However. corresponding to SNRM F = 13dB.1 0 4 8 10 ρ 6 0. Gaussian.e. Hence. there will be losses due to target interference.8 0. 14. Here N = 200.8 1 8 6 4 2 0 (a) i..7dB that can be plugged in the selected CFAR Pd and Pf a equations.2 2 0. 14 12 10 SNRCS SNRCS 14 12 kmax = 1 kmax = 6 kmax = 12 kmax = 24 kmax = 48 0. corresponding to SNRM F = 13dB. 2012 DRAFT .e. for example.6 0.6 0.9 δ Fig.i.7 0. just as in conventional MF. Gaussian sensing matrix Fig. Once we have computed the SNR maps or curves. If we use a CA-CFAR processor and we assume March 8. i.4 0.8.1 0. The colormap is the output SNR of CAMP in a dB scale.6 and kmax = 12.4 δ 0. The sensing matrix is i.3 0.

A PPENDIX A R ISK OF THE SOFT THRESHOLDING FUNCTION We derive an analytical expression for (4) for the case when the non-zero coefﬁcients in xo all have equal amplitude.d. We have derived closed form expressions for the CAMP output SNR as a function of system and target parameters. In fact. We believe this work paves the way for the use of CS in radar detection. At the end of the appendix. since kmax is ﬁxed. This means that. we also explain how the calculations can be easily generalized to the more general setting where the non-zero elements have different amplitude distributions. which makes it possible to design practical CS based radar systems. if we want to increase the detection probability for the desired FAP. unfortunately. Based on the SE theoretical results. Such charts play an important role in determining when and how CS can be applied and at what cost. For the March 8. then for Pf a = 10−4 we obtain Pd = 0. such as antenna gain. we either have to increase the number of CS measurements (δ ) or the received power. we have demonstrated that. 2012 DRAFT . Firstly. the combination of a recovery stage based on CAMP with a separate detector achieves the best performance. no theoretical claims can yet be made.7. IX. C ONCLUSIONS In this paper we have achieved two goals. we have provided a methodology to predict the performance of the proposed detector.i. out of two proposed architectures.24 that the targets are not interfering. we have presented the ﬁrst architecture for adaptive CS radar detection with CFAR properties. Gaussian. which features closed-form expressions for the detection and false alarm probabilities in the ideal case of known system parameters. for which. we have combined conventional CFAR processing with 1 -norm minimization to provide a fully adaptive detection scheme. These goals have been achieved by exploiting CAMP. The latter can be achieved by increasing the transmitted power or any term in the radar equation that increases the received power. by comparing theoretical and experimental results we have been able to understand the behavior of CAMP for sensing matrices other than i. These relations can be used to obtain CS link budget plots that allow the system designer to evaluate the trade off between power and undersampling. our experimental data has shown that our conclusions still hold for the case of partial Fourier sensing matrices. Secondly. Our theoretical ﬁndings have been supported by both numerical and experimental evidence. Furthermore. With a simple modiﬁcation to CAMP.

Zi ∼ N (0. and therefore. δ (13) where Z ∼ CN (0. The ﬁrst expectation in (13) can be computed as EZ {|η(Z. After some changes of variable and coordinate transformations. τ )|2 } = θ w>τ (w − τ )2 fw (w)fθ (θ)dwdθ = 2 w>τ 2 e−w 2 e−w w(w − τ )2 e−w dw 2 √ = 2 π w>τ w3 √ π dw − 2τ w>τ w2 √ π dw + τ 2 w>τ w √ 2 e−w dw . 1) and can be decomposed as Z = (Zr + jZi ). the second expectation in (13) can be written as 2π E(|η(µ + Zr + jZc . τ σ∗ ))|2 }P (x = 0) + EZ {|(η(a + σ∗ Z. 2012 DRAFT . it has been proved in [23] that the function Ψ(σ∗ ) is independent of the phase of the non-zero coefﬁcients. we obtain 1 2 σ∗ = σ 2 + EZ. we will assume in the following calculations that the phase of the non-zero coefﬁcients is equal to zero.X {|(η(X + σ∗ Z. Therefore. where Zr . we have fx (x) = (1 − δρ)δ(0) + δρδ(a). without loss of generality. τ ) − µ)|2 } . respectively.25 complex signal case. π (14) where each of the integrals in the last line is the incomplete moment of a Gaussian random variable with 1 parameters (0. (12) where δρ = k N and a is the amplitude of all non-zero coefﬁcients in xo . √2 ) of order 3. τ σ∗ ) − X)|2 } δ = σ2 + = σ2 + 1 EZ {|(η(σ∗ Z. Deﬁne the two independent random variables w = |z| ∼ Rayleigh( 2 ) and θ = (z) ∼ that is uniformly distributed between 0 and 2π . Using fx (x) in (4). and 1 µ = a/σ∗ . 1/2). (15) March 8. τ ))|2 } + δρEZ {|(η(µ + Z. 2. τ σ∗ ) − a)|2 }P (x = a) δ 2 σ∗ (1 − δρ)EZ {|(η(Z. These integrals can be computed numerically. and 1. τ ) − µ|2 ) = φ=0 r≤τ 2π 1 2 2 µ2 e−r −µ +2rµ cos(φ) rdrdφ π + φ=0 r>τ [(r − τ )2 + µ2 − 2µ(r − τ ) cos(φ)]e−r ∞ τ 2 −µ2 +2rµ cos(φ) rdrdφ = µ2 + e−µ 2 r(r − τ )e−r 2 − 4µI1 (2rµ) + 2(r − τ )I0 (2rµ) dr.

σ∗. N .o denote the ﬁxed point solutions of SE in Architectures 1 and 2.α ) x = P (|a + σ∗. 2012 DRAFT . we have Pf a1 = P (|xi | = 0|H0 ) = P (|σ∗. For evaluating the detection probability. . we derive the detection and false alarm probabilities of Architectures 1 and 2. −2 ln α).i + σ∗.α and σ∗.) is the Marcum-Q function. the ﬁxed point solution σ∗ can be iteratively computed using the integrals above. A PPENDIX B P ROOF OF T HEOREM 2 To prove Theorem 2. and Q1 (. (17) Therefore. deﬁne a as the square root of the power received from a target at location i. The test statistic at bin i.1 = a2 /σ∗. Architecture 1 Recall from Figure 3(a) that in Architecture 1 the non-zero coefﬁcients in x represent the ﬁnal detections and the threshold τα is selected so as to achieve the desired FAP α. in what follows. In this architecture.α z| > τα σ∗.α z. is given by H1 |xi | H0 0. We then have Pd1 = P (|xi | = 0|H1 ) = P (|η(˜i . 1).. The Gaussianity of the noise is due to the assumption of the Theorem. then we can also calculate the expected value of the expressions in (15) with respect to µ ∼ G. (16) where z ∼ CN (0. · · · .α )| = 0|H1 ) = P (|xo. τα σ∗. and J1 is the Bessel function of the ﬁrst kind of order 1 [34]. Since the thresholds τα and τo are different for Architectures 1 and 2. the output of CAMP is given by x = η(xo + σ∗. For a given τ . 2 where z ∼ CN (0. Finally. σ∗. i = 1. respectively. τα σ∗.α evaluated using (4) with threshold parameter τα .α ) = Q1 ( √ 2SNRCS. and it holds in the asymptotic settings according to Theorem 1. respectively. I).α z| > τα σ∗.α .1 .α ). (18) 2 2 where SNRCS.α ) = P (|z| > τα ) = e−τα . note that if the amplitudes of the non-zero coefﬁcients are drawn from an arbitrary distribution G.26 where I0 and I1 are modiﬁed Bessel functions of the ﬁrst kind of order 0 and 1.α z| > τα σ∗. March 8. This leads us to the following straightforward parameter tuning: τα = √ − ln α.

2 ≥SNRCS.i + σ∗.. for the same FAP α.o .2 .o can be computed from (4) with threshold parameter τo .α . Since σ∗. for the same target received power a2 . 2 Since 0 ≤ P(|a + z| ≤ µ) ≤ 1.o . (19) 2SNRCS.o z| > κ) = P (|z| > κ/σ∗. For Architecture 2. The proof of Theorem 2 is now straightforward. √ −2 ln α).2 = a2 /σ∗. Pd2 ≥ Pd1 . Deﬁne µ∗ as the median of the absolute value of the random 1 variable z . for a desired FAP α. SNRCS. Thus.. i. Since |z| ∼ Rayleigh(σ 2 /2). with equality if and only if α = e−τo .o z| > κ) = Q1 ( 2 where SNRCS. A PPENDIX C M EDIAN E STIMATOR E RROR B OUND To quantify the error of the median estimator as a function of δ and ρ. 2012 . The last equality comes from the fact that |z| ∼ Rayleigh(1/2). σ 2 ). H0 Therefore.o ) = e−κ 2 2 /σ∗. therefore. P (|z| ≤ µ∗ ) = 2 . Note that the above arguments are independent of the distribution G of the non-zero elements 2 of xo . Pf a2 = P (|xi | > κ|H0 ) = P (|σ∗.o − ln α.27 Architecture 2 Suppose that after the estimation block we obtain the following noisy estimate of xo x = xo + σ∗. i. the detector √ threshold can be set as κ = σ∗.e. consider the random variable x with pdf as in (12) and z ∼ CN (0. using the upper bound we obtain (1 − )(1 − e−µ 2 /σ 2 )≤ 1 − 2 DRAFT March 8. 2 P (|x + z| ≤ µ) = P (|a + z| ≤ µ) + (1 − )P (|z| ≤ µ) = P (|a + z| ≤ µ) + (1 − )(1 − e−µ /σ 2 1 )= . Deﬁne = δρ and µ as the median of the absolute value of the random variable x + z .o ≤ σ∗.e.o z| > κ) = P (|a + σ∗.o z. the decision statistic at bin i is given by |xi | H1 κ. The detection probability is given by Pd2 = P (|xi | > κ|H1 ) = P (|xo. where as before σ∗.1 and. we obtain µ∗ = σ log(2).

9 distribution.45 0.86 0. for frequent discussions and constructive comments.7 0.78 0.82 0.8 0.93 0.83 0. for making the LabRadOr radar system available available and for technical support during the experiments.94 0.94 0.7 0.67 0.8 0.1 0.6 0.96 0.15 0.78 0.18 0.65 0.9 0.98 0.89 0. Ender and T. Many thanks also to W.84 0. ACKNOWLEDGMENTS Thanks to Prof.7 0.99 0. J. In Table I we report the p-values from the Kolmogorov-Smirnov (KS) test [35] for N = 200. Furthermore. the bound in (8) follows.87 0.6 0. Mathy from Fraunhofer FHR.9 0. and the p-value measures the similarity of the input samples to the reference distribution.89 0.9 0. A PPENDIX D G AUSSIANITY OF w FOR PARTIAL F OURIER MATRIX We report simulation results that demonstrate the approximate Gaussianity of the noise vector w for the partial Fourier sensing matrix. In the null hypothesis it is assumed that the samples belong to the reference probability density function.7 0.3 0. The Netherlands.73 0. Therefore the normalized bias introduced by the median estimator in the presence of a target is bounded by |µ − µ∗ | ≤ σ log 2(1 − ) − log(2) . using the inequality log 2(1 − ) < log(2).73 0. δ ρ p-value (R) p-value (I) 0.05 0.1 0.2 0.3 0.2 0. 2012 DRAFT . N = 200.5 0.4 0.3 0.01 0. Germany. We performed the KS test also for different combinations of N .VALUES OF THE REAL (R) AND IMAGINARY (I) PARTS OF w FOR PARTIAL F OURIER SENSING MATRIX .4 0. δ .8 0. March 8.28 and therefore µ≤σ log 2(1 − ) . Wachtberg.86 0.3 0.65 0. The KS test compares the empirical distribution of the input samples to the Gaussian TABLE I p.72 0.2 0. In our simulations.6 0.93 0. van Rossum from TNO.75 0. in all cases the null hypothesis was accepted. and ρ and obtained similar results.97 0.84 0.86 0.96 0.5 0.

SIAM J. Pure Appl. Strohmer. Proc. 35:2392–2404. IEEE Trans. 2010. Compressive sensing for subsurface imaging using ground penetrating radar.. Direction estimation using compressive sampling array processing. Improved bounds for restricted isometry constants for Gaussian matrices. [10] L. Karl.. 2007. Sparsity and compressed sensing in radar imaging. Signal Process. Compressed sensing of mono-static and multi-static SAR. R. 2008. P. Apr. H. Oct. Syst. and H. [16] J. Kassam. [19] B. Step-frequency radar with compressive sampling SFR-CS. Ann. [12] L. [17] M. 106(45):18914–18919. 2009. In Proc. A. In Proc. On compressive sensing applied to radar. Temlyakov. 2010. Wang. and A. IEEE J. Acoust.. L. Pandharipande. Hoogeboom. Conf. 24(4):427–445. 31(5):2882–2898. H. J. [5] J. Proc. Stable recovery of sparse overcomplete representations in the presence of noise. G. Analysis of CFAR processors in homogeneous background. A. and M.. [11] S. 89(10):1959–1972. Speech. and M. M. Aerosp. Signals.A. Jun. [13] A. W. 2006. Acad. Sci. Topics Sig. Compressive sensing for high resolution radar imaging. Tao. G. 1983. Jul. Maleki and D. Donoho and V. Steeghs. IEEE Radar Conf. May 2010. IEEE Trans. [20] D. 2010. Jan. Theory. 59(8):1207–1223. A. Proc. 52(1):6–18. J. Asilomar Conf. Stable signal recovery from incomplete and inaccurate measurements. and Comput. Compressed sensing in MIMO radar. C. [2] A.. McClellan. L. [8] Y. IEEE Int. 4(1):146–163. 2010. Stojanovic. Vaidyanathan. IEEE Asia-Paciﬁc Microwave Conf. Aerosp. (ICISP).. Comm. Cetin. (APMC). IEEE Trans.. Poor. P. [18] E. Yu. March 8. J.29 R EFERENCES [1] R. G.. Potter. [15] H. Int. [4] C. [3] I. IEEE. Electron. SPIE Algorithms for Synthetic Aperture Radar Imagery XVI. Rohling. [9] J. In Proc. 90(5):1402–1414. Jul. IEEE Work. C. Tao. MIMO radar using compressive sampling. Topics Sig. 4(2):330–341. Ertin. 1988. IEEE Trans. 2009. Shah. Petropulu. 98(6):1006–1020. Anitori. 2012 DRAFT . Signal Process. The dantzig selector: statistical estimation when p is much larger than n. and Signal Process. Parker. Donoho. and W. Maleki. Natl. and P. 2010. L. [14] P. Conf. Matrix Anal. Cetin. Tanner... 2006. Math. Fuchs. Sel. 2010. Signal Process. Bubacarr and J. In Proc. Y. Message passing algorithms for compressed sensing. Ender. T. Sel.. 2010. IEEE J. and A. Systems. Syst. Electron. Compressive radar imaging. Gandhi and S. Montanari. 57(6):2275– 2284. Leus. Yu.. C. and A. Aug. The generalized likelihood ratio test and the sparse representations approach. P. In Proc. 2007. Stat.. In Proc. 2009. Chen and P. V. Signal Process. Optimally tuned iterative reconstruction algorithm for compressed sensing. [7] Y. Baraniuk and T. Donoho. Stat. [6] M. Image and Signal Process. E. Jun. J. Inf. Feb. Y. 2009.. In Proc. 2009. Radar CFAR thresholding in clutter and multiple target situations. Romberg E. Proc. P. Candes and T. Elad D. High-resolution radar via compressed sensing. (ICASSP). Candes and T. H. J. Herman and T. 19(4):608–621. Scott.. Gurbuz. Otten. Petropulu.

[35] F. Saunders. Theory. In To appear in Proc. Koh. Maleki and A. National Bureau of Standards. IEEE Trans. Montanari. 2011. on Sci. [33] L. Introduction to Radar Systems. Dept. Statist. U. and M. Sel. [28] R. Montanari. I. Otten. Soc. A. Gorinevsky. 1(4):606–617. and D. 1972. G.30 [21] A. Proc. J. 2012 DRAFT . 1998. Roy. [23] A. 2004. with applications to compressed sensing. 32(2):407–499. Theory. Tibshirani. L. Montanari.S. Stat. SPIE Ann. A. Atomic decomposition by basis pursuit. A method for large-scale J. Stat. S. Maleki. J. [34] M. Donoho. Lustig. Meeting: Wavelet Apps. SIAM J. Submitted to IEEE Trans. Abramowitz and I. Inform. M. Oct. The noise-sensitivity phase transition in compressed sensing. Maleki. Maleki. [26] D. Johnstone. Donoho. In Proc. Topics Sig. Montanari. [32] M. The dynamics of message passing on dense graphs. L. van Rossum. S. Baraniuk. Bayati and A. Y. [27] S. Computing. Handbook of Mathematical Functions with Formulas. R. Tanner.. 1 -regularized least squares. Chen. and R. J. Anitori. Anitori. J. IEEE Radar Conf. Amer. Analysis of approximate message passing algorithm. Ann. Skolnik. Theory. IEEE. 2011. [22] D.. of Commerce. [29] S. IEEE March 8. Stegun. Donoho. G. The Kolmogorov-Smirnov test for goodness of ﬁt. The LASSO risk for Gaussian matrices. IEEE Conf. and Mathematical Tables. 98(6):913 – 924. Inf. Baraniuk. Asymptotic analysis of complex LASSO via complex approximate message passing (CAMP). and A. 2010. To appear in IEEE Trans.. Kim. Massey. Proc. Graphs.. 2011. D. Proc. T. [24] M. Assoc. 58(1):267–288. 57(2):764 – 785. Inf. Regression shrinkage and selection via the LASSO. 2007. Zai. Hastie. Inf. Tibshirani.. Dec. Third edition. 2002. Bayati and A. 46(253):68–78. Donoho and J. K. 57(10):6920–6941. IEEE Trans. L. 1996. Least angle regression. 2010. Science and Systems (CISS). Efron. McGraw Hill. Inf. [25] M. Feb. 20:33–61.. [30] B. A. S. Sig. and R. Chen and D. Precise undersampling theorems. Examples of basis pursuit. 2011. L. Imag. Compressive CFAR radar detection. Series B. Jun. A. In Proc. L. 2012. W. Theory. Boyd. 1995. [31] S. 1951. and M.

- Monte Carlo Finance
- We Standardize to Eliminate Units
- 63293-19248-Probability and Statistics
- Basic Statistics
- exp2 (1)
- Rpp2009 Rev Statistics
- Am J Clin Nutr 1976 Picciano 242 54.PDF HMC
- Statistical Methods for Appraisal of Quality of Stator Winding Insulation
- Chapter 6
- Prob-7
- Monte Carlo Simulation Formula in Excel
- 2012-03-22-#-15-0824702603
- Wilcox (2006)
- Chase Cats1d
- Ti Stat Instructions
- Econometrics I Review
- ECON2206 Assignment 2 William Chau z3376203
- Sokal
- communications systems
- Quality Condensed Notes
- Excel
- IT_Lecture_05
- Resouce Management With Hoses
- REV-090003-r1
- Class4_2
- Random Matrix Theory and Wireless Communications
- ch14
- 507 full
- Pseudo-Mathematics and Financial Charlatanism
- pmdist

Read Free for 30 Days

Cancel anytime.

Close Dialog## Are you sure?

This action might not be possible to undo. Are you sure you want to continue?

Loading