Canonical Framework for Describing Suboptimum Radar Space-Time Adaptive Processing (STAP) Techniques
Sébastien De Grève, Fabian D. Lapierre (Research Fellow) and Jacques G. Verly
University of Liège, Department of Electrical Engineering and Computer Science
Sart-Tilman, Building B28, B-4000 Liège, Belgium
{sdegreve, F.Lapierre, Jacques.Verly}@ulg.ac.be

Abstract: We address the problem of detecting slow moving targets from a moving radar system using Space-Time Adaptive Processing (STAP) techniques. Optimum interference rejection is known to require the estimation and the subsequent inversion of an interference-plus-noise covariance matrix. To reduce the number of training samples involved in the estimation and the computational cost inherent to the inversion, many suboptimum STAP techniques have been proposed. Earlier attempts at unifying these techniques had a limited scope. In this paper, we propose a new canonical framework that unifies all of the STAP methods we are aware of. This framework can also be generalized to include the estimation of the covariance matrix and the compensation of the range dependence; it applies to monostatic and bistatic configurations. We also propose a new decomposition of the CSNR performance metric that can be used to understand the performance degradation specifically due to the use of a suboptimum method.

I. INTRODUCTION

Space-time adaptive processing (STAP) is an increasingly popular signal processing technique for detecting slow moving targets [1], [2]. The space dimension arises from an array of antenna elements and the time dimension from a coherent train of pulses. The power of STAP comes from the joint processing along the space and time dimensions. The data collected by STAP radars can be viewed as a sequence of 2D arrays, typically treated as vectors. They are called snapshots. The optimum STAP processor computes a weighted linear combination of the snapshot elements. The calculation of these optimum weights generally involves the estimation and the inversion of the covariance matrix (CM) of interference-plus-noise (I+N) snapshots. This estimation is performed using neighboring snapshots. However, the optimum processor cannot be used in practice for two major reasons. First, for a snapshot of size $MN$ ($N$ antenna elements, $M$ pulses), the inversion of the I+N CM requires on the order of $(MN)^3$ operations, which can be prohibitive for real-time applications. Second, the number of training snapshots needed to estimate the CM is typically on the order of $2MN$ [3]. For typical values of $M$ and $N$, this amount of data is most probably not available. These two problems have motivated the design of suboptimum methods (SOM) that reduce the size of the CM.
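As a rough numerical illustration (the values below are hypothetical and not taken from the paper): for $N = 10$ elements and $M = 32$ pulses, the snapshot size is $MN = 320$, a direct inversion of the $320 \times 320$ I+N CM costs on the order of $320^3 \approx 3 \times 10^7$ operations per range gate and per CPI, and a training requirement of roughly $2MN$ calls for about 640 independent, target-free range gates, which is rarely available in practice.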

Such methods lead to a drastic reduction of the number of training snapshots and of the computational cost. The most popular SOMs are the following. Ward [2] proposed a taxonomy of SOMs using combinations of beamforming and overlapping sample selections. Wang and Cai [4] developed the JDL-GLR algorithm, which uses a different processor for each nonoverlapping angle-Doppler selection. Klemm [1] proposed many SOMs working in either the space-time domain, the space domain only, or the time domain only. He also proposed SOMs working in the spatial-frequency domain. Goldstein [5] developed a SOM based on a multistage Wiener filter. Haimovich et al. [6] extended the generalized sidelobe canceller structure by using an eigenanalysis of the I+N CM. Other SOMs based on such eigenanalysis are the principal components method (PC) [7] and the methods using the cross-spectral metric (CSM) [8]. Some authors proposed ways of unifying suboptimum methods [9], [10], [11]. However, each scheme only unifies a small subset of the available methods. In this paper, we propose a canonical framework that encompasses all of the methods described above, as well as DPCA [12]. Even though we only discuss here the case of a known I+N CM, the framework can be generalized to the case where this CM must be estimated. We also propose a new decomposition of the CSNR performance measure [9], which isolates the performance degradation specifically due to the use of a SOM.

II. PRINCIPLE OF OPTIMUM STAP

A. Data collection

During each coherent processing interval (CPI), the radar transmits a coherent train of $M$ pulses. The returns are collected at each of the $N$ elements of the antenna array. The antenna elements and the waveform pulses correspond to the first two dimensions of STAP, i.e., space and time. The third dimension, range, arises as follows. Consider some minimum range $R_0$ at which we want to test for the presence of potential targets. This translates into a delay of $\tau_0 = 2R_0/c$ seconds. The returns are then sampled $\tau_0$ seconds after the transmission of each of the $M$ pulses. The resulting data can be viewed as a 2D array of $N \times M$ complex values.
Fig. 1. 1D and 2D representations of a snapshot for a given range gate index $l$.

Fig. 2. Structure of the detector: (a) arbitrary processor followed by a decision device; (b) structure of the optimum processor (OP).

The above sampling is then repeated at $L$ successive time increments corresponding to the range resolution. This results in $L$ samples per element per pulse. The entire data for each CPI can thus be viewed as a 3D array of $N \times M \times L$ complex values, where $l$ is the range gate index, with $l = 1, \ldots, L$. In STAP, it is customary to view each slice of this array for a fixed $l$ as a vector $\mathbf{s}_l$ of size $MN$ obtained by scanning the slice row by row (Fig. 1). The snapshot $\mathbf{s}_l$ can be expressed as the sum of a potential target component, $\mathbf{s}_{t,l}$, and an I+N component, $\mathbf{s}_{i+n,l}$,
$$\mathbf{s}_l = \mathbf{s}_{t,l} + \mathbf{s}_{i+n,l}.$$
The target snapshot $\mathbf{s}_{t,l}$ is typically given by [13]
$$\mathbf{s}_{t,l} = \alpha\, \mathbf{v}(\nu_s, \nu_t) = \alpha\, [\mathbf{b}(\nu_t) \otimes \mathbf{a}(\nu_s)],$$
where the complex scalar $\alpha$ comes from the radar equation [2], $\mathbf{v}(\nu_s, \nu_t)$ is the space-time steering vector evaluated at the potential target spatial and Doppler frequencies ($\nu_s$ and $\nu_t$, respectively), and $\mathbf{a}(\nu_s)$ and $\mathbf{b}(\nu_t)$ are the corresponding space and time steering vectors, respectively given by
$$\mathbf{a}(\nu_s) = [1, e^{j2\pi\nu_s}, \ldots, e^{j2\pi(N-1)\nu_s}]^T, \qquad \mathbf{b}(\nu_t) = [1, e^{j2\pi\nu_t}, \ldots, e^{j2\pi(M-1)\nu_t}]^T.$$
The I+N snapshot $\mathbf{s}_{i+n,l}$ can be expressed as the sum of an interference snapshot, $\mathbf{s}_{i,l}$, and a noise snapshot, $\mathbf{s}_{n,l}$,
$$\mathbf{s}_{i+n,l} = \mathbf{s}_{i,l} + \mathbf{s}_{n,l}.$$
We assume here that the interference consists of only clutter (C), so that $\mathbf{s}_{i,l} = \mathbf{s}_{c,l}$, where $\mathbf{s}_{c,l}$ is the clutter snapshot [13]. C and N snapshots are typically assumed to be spatially and temporally white.

B. Detector and optimum processor

Detection is performed at each range gate individually. We denote the index of the range gate of interest by $l$. The structure of the detector is depicted in Fig. 2(a). The inputs are $\mathbf{s}_l$ and $(\nu_s, \nu_t)$. The processor produces a scalar $y$ for the given $\mathbf{s}_l$ and $(\nu_s, \nu_t)$. Depending upon the value of $y$ with respect to a threshold, a target is declared to be present or absent for each triplet $(\nu_s, \nu_t, l)$. The optimum implementation of the processor in Fig. 2(a) is shown in Fig. 2(b). The scalar output is given by [3]
$$y(\nu_s, \nu_t) = \mathbf{w}^H(\nu_s, \nu_t)\, \mathbf{s}_l. \qquad (1)$$
The optimum weight is given by
$$\mathbf{w}(\nu_s, \nu_t) = \beta\, \mathbf{R}_{i+n,l}^{-1}\, \mathbf{v}(\nu_s, \nu_t), \qquad (2)$$
where $\beta$ is an arbitrary constant and $\mathbf{R}_{i+n,l}$ is the (true) I+N CM of $\mathbf{s}_{i+n,l}$,
$$\mathbf{R}_{i+n,l} = E\{\mathbf{s}_{i+n,l}\, \mathbf{s}_{i+n,l}^H\}. \qquad (3)$$
In practice, $\mathbf{R}_{i+n,l}$ is unknown and must be estimated. A customary choice is the maximum-likelihood estimate $\hat{\mathbf{R}}_{i+n,l}$ of $\mathbf{R}_{i+n,l}$ [3], given by
$$\hat{\mathbf{R}}_{i+n,l} = \frac{1}{K} \sum_{k \in \Omega} \mathbf{s}_k\, \mathbf{s}_k^H, \qquad (4)$$
where $\Omega$ is a set of range gate indices used to estimate $\hat{\mathbf{R}}_{i+n,l}$ and $K$ is the number of elements in $\Omega$. Observe that, since $\hat{\mathbf{R}}_{i+n,l}$ is used to test for the presence of a target at $l$, $\Omega$ should not include $l$.

C. Optimum processor in practice

The OP cannot be used in real-time applications for two major reasons [1], [2]. First, the inversion of the I+N CM involves a number of operations proportional to $(MN)^3$. Second, the number of training snapshots required, $K$, is on the order of $2MN$ [3]. Thus, in practice, there is not enough training data to estimate the I+N CM and there is not enough time to invert it! The main goal of suboptimum STAP is to reduce the size of the I+N CM, thereby leading to a drastic reduction in the number of training snapshots required and in the computational cost.

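To make the above concrete, here is a minimal numerical sketch of the optimum processor of Eqs. (1)-(4). It is not the authors' implementation: the array sizes, the toy clutter model, and the threshold are assumptions made purely for illustration. It builds space-time steering vectors, estimates the I+N CM from training snapshots taken at neighboring range gates, and applies the resulting weight vector.

```python
import numpy as np

def steering(nu_s, nu_t, N=8, M=16):
    """Space-time steering vector v = b(nu_t) kron a(nu_s) (see Sec. II-A)."""
    a = np.exp(2j * np.pi * nu_s * np.arange(N))   # spatial steering vector
    b = np.exp(2j * np.pi * nu_t * np.arange(M))   # temporal steering vector
    return np.kron(b, a)

rng = np.random.default_rng(0)
N, M = 8, 16                       # elements, pulses (assumed values)
MN = N * M

# Toy I+N snapshots: a few strong "clutter" directions plus white noise (illustrative only).
clutter_basis = np.stack([steering(nu, 2 * nu, N, M) for nu in np.linspace(-0.4, 0.4, 20)])
def iplusn_snapshot():
    amps = 10 * (rng.standard_normal(20) + 1j * rng.standard_normal(20))
    noise = (rng.standard_normal(MN) + 1j * rng.standard_normal(MN)) / np.sqrt(2)
    return clutter_basis.T @ amps + noise

# Eq. (4): ML estimate of the I+N CM from K training snapshots (gates k != l).
K = 2 * MN
R_hat = np.zeros((MN, MN), dtype=complex)
for _ in range(K):
    s_k = iplusn_snapshot()
    R_hat += np.outer(s_k, s_k.conj())
R_hat /= K

# Snapshot under test at gate l: target at (nu_s, nu_t) = (0.1, -0.25) plus I+N.
nu_s, nu_t = 0.1, -0.25
s_l = 0.5 * steering(nu_s, nu_t, N, M) + iplusn_snapshot()

# Eq. (2): weight with beta = 1; Eq. (1): scalar output; then the threshold test.
v = steering(nu_s, nu_t, N, M)
w = np.linalg.solve(R_hat, v)              # R_hat^{-1} v without forming the inverse
y = np.vdot(w, s_l)                        # w^H s_l
threshold = 5 * np.sqrt(np.real(np.vdot(w, R_hat @ w)))   # ad hoc threshold for the demo
print("target declared present" if abs(y) > threshold else "target declared absent")
```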
III. EXISTING SUBOPTIMUM METHODS AND UNIFYING FRAMEWORKS

A. Suboptimum methods

A large number of suboptimum methods (SOM) for STAP have been proposed. The first SOM for STAP was DPCA [12]. This method performs non-adaptive clutter suppression. Specifically, the I+N CM does not appear in the expression for the weight vector. Adaptive DPCA (ADPCA) introduced the notion of adaptivity [14], [15]. Klemm [1] introduced SOMs based on (a) space-time transforms, such as the auxiliary channel processor (ACP), (b) space-only transforms, such as the overlapping subarray processor (OSP), (c) transforms using FIR filters, such as the symmetric auxiliary sensor/echo processor (SAS) with space-time FIR filter (ASFF), and (d) space-time frequency transforms, such as the 2D SAS/echo-processor (ASEP). Gabriel [16] introduced the factored approach (FA), where a temporal DFT is applied to each spatial element, followed by spatial filtering in each individual Doppler bin. The motivation for performing the DFT prior to adaptation is the reduction of correlation in the frequency domain [17]. The FA was extended to more than one Doppler bin in the extended FA (EFA) [17], where the correlation between adjacent Doppler bins is taken into account. Bao et al. [18] also developed SOMs based on the same framework as the FA. These methods are the Doppler transform-space adaptive processing (DT-SAP) and the factorized time-space (FTS). Note that the mDT-SAP [19] is equivalent to the EFA. The filter-then-adapt (F$A) method [20] also applies a temporal DFT. However, pulse repetition interval (PRI) delayed taps from each of the spatial channels are added before spatial adaptive weight calculation. The ΣΔ-STAP approach [21], [22] reduces the spatial degrees of freedom (DOF) to sum and difference beams only. Note that ΣΔ-STAP is generally used together with a reduction in the temporal DOF. The idea of simultaneous reduction of DOF in both space and time lies at the core of joint-domain localized (JDL) processing. Based on this idea, Wang and Cai [4] designed the JDL-GLR SOM. Along the same line of ideas, Bao [19] proposed a hybrid low-dimensional STAP approach (HSTAP) combining mDT-SAP, ΣΔ-STAP and ACP. Ward [2] proposed a taxonomy of SOMs in STAP driven by the type of preprocessor applied before the adaptive weight computation. These methods are all based on subselections. He distinguishes between pre-Doppler and post-Doppler methods, depending upon whether the temporal DFT is applied before or after the weight application, respectively. He also distinguishes between element-space
and beamspace methods depending upon whether the weight is applied directly to the array outputs or to spatial beams obtained via a spatial DFT. An important set of SOMs uses the low-rank property of the I+N CM [1]. The principal component (PC) approach [7] selects the most significant interference eigenvectors with a rank-ordering metric (ROM) given by the expected energy of interference eigenvectors related to the corresponding eigenvalues. To enhance the performance of the eigenvector-based SOMs, a steering-vector dependency has been introduced in the ROM used to select the interference subspace. This can be achieved by viewing STAP as a generalized sidelobe canceller (GSC) [8], [6]. Corresponding SOMs are the cross-spectral metric (CSM), which chooses the eigenvectors that contribute the most to maximizing the SINR, and the multistage Wiener filter, which chooses the interference subspace through a decomposition of the GSC auxiliary channels into a sequence of orthogonal projections [23]. For the sake of completeness, we should mention that there are methods based on a model of the I+N CM or of its structure. An example of such a method is the Parametric Adaptive Matched Filter (PAMF) [24].

B. Unifying frameworks

A plethora of SOMs has been introduced over the last decade or so to cope with the issues of computational tractability and of availability of training data. At first sight, it is often difficult to understand the relationship between the various methods. Here, we summarize the contributions of others in the development of unifying frameworks. A first attempt at unification was provided by Ward in [2], where the SOMs are presented in terms of the dichotomies pre-Doppler vs. post-Doppler and element-space vs. beamspace. In [9] and [10], this scheme was extended to JDL-GLR, EFA and ADPCA. Rangaswamy [11] proposed a canonical framework for PC, PAMF and CSM methods. Guerci [25] recently proposed a partial classification scheme, which brings to light the constitutive elements of the expression for the inverse I+N CM. However, all these attempts only unify a limited subset of the existing SOMs. Below, we propose a novel canonical framework for describing SOMs. This framework was conceived to encompass all the SOMs we are aware of. In this paper, we limit our study to the case where the I+N CM $\mathbf{R}_{i+n,l}$ is known. However, the generalization to an unknown $\mathbf{R}_{i+n,l}$ is possible.

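The eigenanalysis-based SOMs mentioned in Sec. III-A (PC and, with a steering-vector-dependent ranking, CSM) can be summarized in a few lines of code. The sketch below is an illustration only, not taken from the paper or from [7], [8]; the rank, the noise power, and all variable names are assumptions. It keeps the dominant eigenvectors of a known I+N CM and forms a PC-style reduced-rank weight.

```python
import numpy as np

def pc_weight(R, v, rank, noise_power=1.0):
    """Principal-components (PC) style reduced-rank weight (illustrative sketch).

    Keeps the `rank` dominant eigenvectors of the known I+N CM R and approximates
    R^{-1} by correcting only that interference subspace; the orthogonal complement
    is treated as white noise of power `noise_power`. CSM would instead rank the
    eigenvectors by their contribution to the output SINR (a steering-vector
    dependent rank-ordering metric), rather than by eigenvalue alone.
    """
    eigvals, eigvecs = np.linalg.eigh(R)              # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:rank]            # dominant interference eigenvectors
    U, lam = eigvecs[:, idx], eigvals[idx]
    # R^{-1} ~ (1/sigma^2) (I - U diag(1 - sigma^2/lam) U^H) for a low-rank-plus-noise CM
    correction = U @ np.diag(1.0 - noise_power / lam) @ U.conj().T
    R_inv_approx = (np.eye(R.shape[0]) - correction) / noise_power
    return R_inv_approx @ v                           # optimum-like weight with beta = 1

# Tiny usage example with an arbitrary low-rank-plus-noise CM (assumed values).
rng = np.random.default_rng(1)
D = 32
A = rng.standard_normal((D, 5)) + 1j * rng.standard_normal((D, 5))
R = 10 * (A @ A.conj().T) + np.eye(D)                 # rank-5 interference + unit-power noise
v = np.exp(2j * np.pi * 0.2 * np.arange(D))           # toy space-time steering vector
w = pc_weight(R, v, rank=5)
print(w.shape)
```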
IV. NEW CANONICAL FRAMEWORK FOR SUBOPTIMUM METHODS

In spite of the wide variety of proposed SOMs for STAP, we have succeeded in creating a canonical framework that describes all these methods in terms of five successive processing steps. Each of the first four can be expressed as a multiplication by a vector or a matrix. The last one is a simple thresholding operation. Figure 3 shows these five processing steps. The input is the snapshot vector $\mathbf{s}_l$, which is shown in Fig. 3 as a 2D space-time array. The output is a vector $\mathbf{z}$ of binary values: 0 for target absent and 1 for target present. Observe that, contrary to the OP, several weight vectors, packaged row by row in the weight matrix $\mathbf{W}$, are used, resulting in a vector output $\mathbf{y}_w$ (instead of a scalar for the OP). Similarly, the final output $\mathbf{z}$ is a vector of binary values (instead of a scalar for the OP). These generalizations were introduced to handle the fact that some SOMs produce more than a single binary output simultaneously. Each binary value in $\mathbf{z}$ thus corresponds to the response of the detector for a triplet $(\nu_s, \nu_t, l)$. (These outputs all correspond to the same range gate $l$.) A compact mathematical expression for $\mathbf{z}$ in terms of $\mathbf{s}_l$ is
$$\mathbf{z} = D\{\mathbf{T}_{post}\, \mathbf{W}^H\, \mathbf{T}_s\, \mathbf{T}_{pre}\, \mathbf{s}_l\}. \qquad (5)$$

Fig. 3. Two different views of the five processing steps of the new canonical framework: pre-processing by $\mathbf{T}_{pre}$, subselection by $\mathbf{T}_s$, weighting by $\mathbf{W}$, post-processing by $\mathbf{T}_{post}$, and thresholding by $D\{\cdot\}$, with intermediate outputs $\mathbf{y}_{pre}$, $\mathbf{y}_s$, $\mathbf{y}_w$, and $\mathbf{y}_{post}$. Here we assume that the I+N covariance matrix is known.

A. Pre-processor $\mathbf{T}_{pre}$

As indicated earlier, one must reduce the size of $\mathbf{R}_{i+n,l}$ and, thus, the size of $\mathbf{s}_l$. A first possibility for $\mathbf{T}_{pre}$ is to perform a projection of $\mathbf{s}_l$ onto a subspace of lower dimension via a projection matrix $\mathbf{P}$,
$$\mathbf{T}_{pre} = \mathbf{P}. \qquad (6)$$
The auxiliary eigenvector processor (AEP) [1] is an example of a SOM where $\mathbf{P}$ consists of the juxtaposition of eigenvectors of the I+N CM and the steering vector corresponding to the current target hypothesis. A second possibility for $\mathbf{T}_{pre}$ is based on the fact that the spectral representation of the snapshot provides a reduction in the correlation between bins [17]. The spectral representation of the I+N CM is thus diagonally dominant. The consequence is a reduction in the computational cost for the inversion. This property is the motivation for the design of SOMs using the spectral representation of the snapshot, obtained using the Fourier transform (FT). In this case,
$$\mathbf{T}_{pre} = \mathbf{F}, \qquad (7)$$
where $\mathbf{F}$ is a generalized version of the DFT matrix acting on the vector $\mathbf{s}_l$ to produce the vector representation of the 2D FT of the snapshot viewed as a matrix. JDL-GLR [4] uses this particular form of $\mathbf{T}_{pre}$. The most general expression for $\mathbf{F}$ (Eq. (8)) builds each row of $\mathbf{F}$ from a masked 2D DFT of the snapshot: its generic element (Eq. (9)) is, up to a normalization constant, a complex exponential evaluated at a selected pair of element and pulse indices and at a selected pair of angle and Doppler frequencies. A selection matrix containing only 0s and 1s indicates whether a given element of the 2D snapshot is ignored (0) or processed (1) in the calculation; two index matrices contain the integers identifying the elements of the snapshot to be processed by the 2D FT, and two further matrices contain the angle and Doppler frequencies where the FT is to be computed. The normalization constant is chosen such that $\mathbf{F}$ remains unitary, $\mathbf{F}\mathbf{F}^H = \mathbf{I}$. One special form of $\mathbf{F}$ is particularly important. It corresponds to the case where $\mathbf{F}$ is separable,
$$\mathbf{F} = \mathbf{F}_t \otimes \mathbf{F}_s.$$
The expressions for $\mathbf{F}_s$ and $\mathbf{F}_t$ are the 1D equivalents of Eqs. (8) and (9). Providing more details is beyond the scope of this paper. FA and EFA are examples of SOMs using only a temporal FT matrix, so that $\mathbf{F} = \mathbf{F}_t \otimes \mathbf{I}$. Beamspace pre-Doppler is an example of a SOM using only a spatial FT matrix, so that $\mathbf{F} = \mathbf{I} \otimes \mathbf{F}_s$. Of course, $\mathbf{T}_{pre}$ could also include both a dimension reduction via $\mathbf{P}$ and a 1D or 2D FT via $\mathbf{F}$,
$$\mathbf{T}_{pre} = \mathbf{F}\,\mathbf{P}. \qquad (10)$$
We are not aware of any method exploiting this possibility. The pre-processed snapshot is given by
$$\mathbf{y}_{pre} = \mathbf{T}_{pre}\, \mathbf{s}_l, \qquad (11)$$
and the corresponding pre-processed I+N snapshot is $\mathbf{y}_{pre,i+n} = \mathbf{T}_{pre}\, \mathbf{s}_{i+n,l}$.
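As an illustration of the FT-based pre-processor of Eqs. (7)-(11), the sketch below is an assumption-laden toy example, not the authors' implementation; the bin choices and sizes are arbitrary. It builds a separable transform $\mathbf{F} = \mathbf{F}_t \otimes \mathbf{F}_s$ that keeps only a few Doppler and angle bins, so that the pre-processed snapshot is much shorter than the original one.

```python
import numpy as np

def dft_rows(size, bins):
    """Rows of a unitary DFT matrix restricted to the selected bins (1D analogue
    of the generalized FT matrix discussed in Sec. IV-A)."""
    n = np.arange(size)
    return np.exp(-2j * np.pi * np.outer(bins, n) / size) / np.sqrt(size)

N, M = 8, 16                          # elements, pulses (assumed values)
rng = np.random.default_rng(2)
s_l = rng.standard_normal(M * N) + 1j * rng.standard_normal(M * N)   # stand-in snapshot

# Keep 3 adjacent Doppler bins (EFA-like) and 2 angle bins (JDL-like localization).
F_t = dft_rows(M, bins=[4, 5, 6])     # temporal FT matrix F_t (3 x M)
F_s = dft_rows(N, bins=[0, 1])        # spatial FT matrix F_s (2 x N)
T_pre = np.kron(F_t, F_s)             # F = F_t kron F_s, Eq. (7); shape (6, MN)

y_pre = T_pre @ s_l                   # Eq. (11): pre-processed snapshot, length 6
print(T_pre.shape, y_pre.shape)       # the I+N CM to invert is now only 6 x 6
```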

B. Subselection via $\mathbf{T}_s$

Since a dimension reduction of $\mathbf{y}_{pre}$ by a factor $q$ leads to a reduction of the computational cost for the inversion of the resulting I+N CM by a factor $q^3$, it is tempting to create subselections (that can possibly be overlapping) and to apply a rejection filter independently to each subselection. The role of $\mathbf{T}_s$ is to create such subselections. $\mathbf{T}_s$ can be decomposed into a subselection matrix $\mathbf{B}$ and a transformation matrix $\mathbf{G}$ (Fig. 4),
$$\mathbf{T}_s = \mathbf{G}\, \mathbf{B}. \qquad (12)$$
$\mathbf{B}$ is composed of binary submatrices $\mathbf{B}_i$, each building a corresponding subselection. An additional transformation $\mathbf{G}_i$, such as a 1D or 2D FT, can be applied to the $i$-th subselection before the application of the weight; $\mathbf{G}$ is then a block-diagonal matrix. Each submatrix $\mathbf{G}_i$ can be rectangular. An example of a SOM with a rectangular $\mathbf{G}_i$ is the F$A. Other examples of SOMs using subselections are the methods of Ward, where $\mathbf{G}_i = \mathbf{I}$ [2]. The application of $\mathbf{T}_s$ to $\mathbf{y}_{pre}$ leads to a collection of subsnapshots $\mathbf{y}_s(i)$ stacked in an elongated vector $\mathbf{y}_s$,
$$\mathbf{y}_s = [\mathbf{y}_s(1)^T, \ldots, \mathbf{y}_s(N_{sub})^T]^T,$$
where $N_{sub}$ is the number of subselections and $\mathbf{y}_s(i) = \mathbf{G}_i \mathbf{B}_i\, \mathbf{y}_{pre}$. The $\mathbf{y}_s(i)$'s are stacked only for mathematical convenience. In practice, they are processed independently.

Fig. 4. Components $\mathbf{B}$ and $\mathbf{G}$ of the subselection matrix $\mathbf{T}_s$. The structures of these matrices are also shown.

C. Filtering processor

In the case where subselections $\mathbf{y}_s(i)$ are created, a different processor is applied to each $\mathbf{y}_s(i)$. The weight matrix $\mathbf{W}$ is thus block diagonal,
$$\mathbf{W} = \mathrm{diag}\,(\mathbf{W}_1, \ldots, \mathbf{W}_{N_{sub}}). \qquad (13)$$
For the $i$-th subselection, the weights $\mathbf{W}_i$ can be of one of two types. First, if we test for a single target hypothesis, i.e., for a single triplet $(\nu_s(i), \nu_t(i), l)$, $\mathbf{W}_i$ is a weight vector,
$$\mathbf{w}_i = \beta\, \mathbf{R}_{s,i+n}^{-1}(i)\, \mathbf{v}_s(i), \qquad (14)$$
where $\mathbf{R}_{s,i+n}(i)$ is the I+N CM of the $i$-th subsnapshot,
$$\mathbf{R}_{s,i+n}(i) = E\{\mathbf{y}_{s,i+n}(i)\, \mathbf{y}_{s,i+n}(i)^H\}, \qquad (15)$$
and $\mathbf{v}_s(i)$ is the steering vector seen by the $i$-th subselection, i.e., the transformed version of $\mathbf{v}(\nu_s(i), \nu_t(i))$,
$$\mathbf{v}_s(i) = \mathbf{G}_i \mathbf{B}_i\, \mathbf{T}_{pre}\, \mathbf{v}(\nu_s(i), \nu_t(i)). \qquad (16)$$
Second, if we test for multiple, say $P$, target hypotheses for the $i$-th subselection, $\mathbf{W}_i$ is a weight matrix,
$$\mathbf{W}_i = [\mathbf{w}_i(1), \ldots, \mathbf{w}_i(P)], \qquad (17)$$
where each column $\mathbf{w}_i(p)$ is computed as in Eq. (14) for the corresponding triplet $(\nu_s(p), \nu_t(p), l)$. The application of $\mathbf{W}$ to $\mathbf{y}_s$ gives a collection of filtered snapshots $\mathbf{y}_w(i)$ stacked in an elongated vector $\mathbf{y}_w$,
$$\mathbf{y}_w = [\mathbf{y}_w(1)^T, \ldots, \mathbf{y}_w(N_{sub})^T]^T,$$
where $\mathbf{y}_w(i) = \mathbf{W}_i^H\, \mathbf{y}_s(i)$.

D. Post-processor

Further processing can be applied to the $\mathbf{y}_w(i)$'s, as in the pre-Doppler methods of [2], where a FT is applied to the $\mathbf{y}_w(i)$'s. The post-processing of the output $\mathbf{y}_w$ is performed by $\mathbf{T}_{post}$. Mathematically, $\mathbf{T}_{post}$ is equivalent to $\mathbf{T}_{pre}$. However, since the inversion of the covariance matrix was performed in the previous step and is no longer an issue, no further dimension reduction is required. There is thus no need to include a projection operation in $\mathbf{T}_{post}$. The post-processed signal is given by
$$\mathbf{y}_{post} = \mathbf{T}_{post}\, \mathbf{y}_w.$$

E. Thresholding operator $D\{\cdot\}$

Finally, we test for the absence or presence of a target in each output $\mathbf{y}_{post}(i)$. This is typically done by comparing each scalar output $\mathbf{y}_{post}(i)$ to a threshold. This nonlinear operation is represented by the operator $D\{\cdot\}$. Each output corresponds to a distinct triplet $(\nu_s(i), \nu_t(i), l)$. The final output of the detector is the vector $\mathbf{z}$ of binary values corresponding to all triplets $(\nu_s(i), \nu_t(i), l)$: 0 for target absent and 1 for target present.
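To tie Secs. IV-A through IV-E together, here is a compact end-to-end sketch of Eq. (5). It is again a toy illustration with assumed shapes, two hard-coded subselections, and white reduced I+N CMs; it is not the authors' code. It chains pre-processing, subselection, per-subselection weighting with known reduced CMs, a trivial post-processor, and thresholding.

```python
import numpy as np

rng = np.random.default_rng(4)
MN = 32                                   # size of the (already small) snapshot (assumed)
s_l = rng.standard_normal(MN) + 1j * rng.standard_normal(MN)

# Step 1: pre-processor T_pre (here a simple projection onto the first 16 coordinates).
T_pre = np.eye(16, MN)
y_pre = T_pre @ s_l

# Step 2: subselection T_s = G B with two overlapping subselections and G_i = I (Ward-like).
B1 = np.eye(10, 16)                                 # first 10 bins
B2 = np.hstack([np.zeros((10, 6)), np.eye(10)])     # last 10 bins
subselections = [B1, B2]

# Step 3: filtering processor, one weight vector per subselection (Eq. (14) with beta = 1).
R_sub = [np.eye(10) for _ in subselections]         # known reduced I+N CMs (toy: white)
v = np.exp(2j * np.pi * 0.1 * np.arange(MN))        # toy space-time steering vector
y_w = []
for B_i, R_i in zip(subselections, R_sub):
    y_s_i = B_i @ y_pre                             # subsnapshot y_s(i)
    v_s_i = B_i @ T_pre @ v                         # Eq. (16): transformed steering vector
    w_i = np.linalg.solve(R_i, v_s_i)               # Eq. (14)
    y_w.append(np.vdot(w_i, y_s_i))                 # scalar output of subselection i

# Step 4: post-processor T_post (identity here) and Step 5: thresholding operator D{.}.
y_post = np.asarray(y_w)
threshold = 3.0                                     # arbitrary threshold for the demo
z = (np.abs(y_post) > threshold).astype(int)        # Eq. (5): binary decision vector z
print(z)
```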

V. NEW PERFORMANCE CRITERION

It would be useful to have a metric to quantify the loss of performance that is specifically and solely due to the use of any particular SOM. Such a metric can be introduced through a new decomposition of the conditioned SNR (CSNR). The CSNR was proposed by Peckham et al. [9]. It is defined as the ratio of (a) the effective SINR ($\mathrm{SINR}_{eff}$), defined as the SINR obtained by using a particular SOM, in the presence of both interference (clutter) and noise, and based on the estimated I+N CM, and (b) the optimum SNR ($\mathrm{SNR}_o$), defined as the SNR obtained using the OP, in the presence of noise only, and based on the known noise-only covariance matrix. (The index o stands for optimum.) Therefore, we have
$$\mathrm{CSNR} = \frac{\mathrm{SINR}_{eff}}{\mathrm{SNR}_o}. \qquad (18)$$

Fig. 5. Performance measures of SOMs in terms of (a) CSNR and (b) $\mathrm{SUBSINR}_L$, in dB, at a fixed spatial frequency and as a function of the Doppler frequency. Each graph shows the performance of the overlapping subarray processor (OSP, $K_s = 8$), element-space pre-Doppler ($K_t = 8$), PC and CSM (rank 23). The performance of the OP is shown for reference.

The proposed new decomposition of the CSNR is
$$\mathrm{CSNR} = \frac{\mathrm{SINR}_{eff}}{\mathrm{SINR}_{sub}} \cdot \frac{\mathrm{SINR}_{sub}}{\mathrm{SINR}_o} \cdot \frac{\mathrm{SINR}_o}{\mathrm{SNR}_o}, \qquad (19)$$
or
$$\mathrm{CSNR} = \underbrace{\mathrm{ESINR}_L}_{\text{degradation due to estimation}} \cdot \underbrace{\mathrm{SUBSINR}_L}_{\text{degradation due to SOM}} \cdot \underbrace{\mathrm{SINR}_{Lo}}_{\text{degradation due to clutter}}. \qquad (20)$$
The above decomposition introduces two additional quantities. $\mathrm{SINR}_{sub}$ is the SINR obtained for a given SOM, in the presence of I+N, and based on the known I+N covariance matrix. $\mathrm{SINR}_o$ is the SINR obtained for the OP, in the presence of I+N, and based on the known I+N covariance matrix. $\mathrm{ESINR}_L$ thus represents the losses due to the estimation of statistics and $\mathrm{SUBSINR}_L$ represents the losses due to the use of a particular SOM. $\mathrm{SINR}_{Lo}$ characterizes the loss in performance that is solely due to the presence of I+N. The new metric $\mathrm{SUBSINR}_L$ is ideal for comparing the performance of various SOMs. Figure 5 shows example CSNR and $\mathrm{SUBSINR}_L$ curves at a fixed spatial frequency, for three different SOMs. The OP is shown as a reference. A $\mathrm{SUBSINR}_L$ value of 0 dB means that no losses are induced by the SOM of interest.
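For concreteness, a small numerical sketch of the decomposition in Eqs. (18)-(20) is given below. It is an illustration under assumed definitions: the SINR expressions used are the standard quadratic forms for a linear weight, and the toy covariance values, sizes and rank are arbitrary, not taken from the paper. It evaluates the three loss factors for a simple rank-reducing SOM.

```python
import numpy as np

rng = np.random.default_rng(5)
D, sigma2 = 24, 1.0                                  # snapshot size, noise power (assumed)

# Known I+N CM: low-rank "clutter" plus white noise; toy target steering vector v.
A = rng.standard_normal((D, 4)) + 1j * rng.standard_normal((D, 4))
R = 50 * (A @ A.conj().T) + sigma2 * np.eye(D)
v = np.exp(2j * np.pi * 0.15 * np.arange(D))

def sinr(w, C):
    """SINR of a linear weight w against covariance C (unit target power)."""
    return float(np.abs(np.vdot(w, v)) ** 2 / np.real(np.vdot(w, C @ w)))

# A toy SOM: keep the 6 dominant eigenvectors of R plus v itself (projection T).
eigvals, eigvecs = np.linalg.eigh(R)
T = np.hstack([eigvecs[:, -6:], v[:, None]]).conj().T      # 7 x D transform

def som_weight(C):
    """Reduced-dimension weight of the SOM when the (known or estimated) CM is C."""
    return T.conj().T @ np.linalg.solve(T @ C @ T.conj().T, T @ v)

# Estimated CM from K training I+N snapshots drawn from R.
K = 2 * D
Lchol = np.linalg.cholesky(R)
X = Lchol @ ((rng.standard_normal((D, K)) + 1j * rng.standard_normal((D, K))) / np.sqrt(2))
R_hat = X @ X.conj().T / K

SNR_o    = sinr(v, sigma2 * np.eye(D))               # OP, noise only, known noise CM
SINR_o   = sinr(np.linalg.solve(R, v), R)            # OP, I+N, known I+N CM
SINR_sub = sinr(som_weight(R), R)                    # SOM, I+N, known I+N CM
SINR_eff = sinr(som_weight(R_hat), R)                # SOM, I+N, estimated I+N CM

dB = lambda x: 10 * np.log10(x)
print("ESINR_L   =", dB(SINR_eff / SINR_sub), "dB")  # loss due to estimation
print("SUBSINR_L =", dB(SINR_sub / SINR_o), "dB")    # loss due to the SOM
print("SINR_Lo   =", dB(SINR_o / SNR_o), "dB")       # loss due to clutter
print("CSNR      =", dB(SINR_eff / SNR_o), "dB")     # Eq. (18) = product of the three
```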

VI. CONCLUSION

Earlier attempts at unifying suboptimum methods for STAP had a limited scope. In this paper, we propose a novel canonical framework that encompasses all suboptimum methods we are aware of. Even though this framework is described for the case of a known I+N CM, it can easily be generalized to the case where the CM must be estimated. Finally, we hope that this framework will also apply to methods yet to be developed. In any case, it appears flexible enough to be adapted or augmented if necessary.

REFERENCES
[1] R. Klemm, Principles of space-time adaptive processing, IEE Radar, Sonar, Navigation and Avionics 9, 2002. [2] J. Ward, Space-time adaptive processing for airborne radar, Tech. Rep. 1015, MIT Lincoln Laboratory, 1994.

[3] I.S. Reed, J.D. Mallett, and L.E. Brennan, Rapid convergence rate in adaptive arrays, IEEE Transactions on Aerospace and Electronic Systems (AES), vol. 10, no. 6, pp. 853–863, 1974. [4] Hong Wang and Lujing Cai, On adaptive spatial-temporal processing for airborne surveillance radar systems, IEEE Trans. on Aerospace and Electronic Systems, vol. 30, no. 3, pp. 660–669, 1994. [5] J.S. Goldstein, I.S. Reed, and P.A. Zulch, Multistage partially adaptive STAP CFAR detection algorithm, IEEE Trans. on Aerospace and Electronic Systems, vol. 35, no. 2, pp. 645–661, 1999. [6] A.M. Haimovich and Y. Bar-Ness, An eigenanalysis interference canceler, IEEE Transactions on Signal Processing, vol. 30, no. 1, pp. 76–84, 1991. [7] L.P. Kirsteins and D.W. Tufts, Adaptive detection using low rank approximation to a data matrix, IEEE Transactions on Aerospace and Electronic Systems (AES), vol. 30, pp. 55–67, 1994. [8] J.S. Goldstein and I.S. Reed, Theory of partially adaptive radar, IEEE Transactions on Aerospace and Electronic Systems (AES), vol. 33, no. 4, pp. 1309–1325, 1997. [9] C.D. Peckham, A.M. Haimovich, T.F. Ayoub, J.S. Goldstein, and I.S. Reed, Reduced-rank STAP performance analysis, IEEE Trans. on Aerospace and Electr. Systems, vol. 36, no. 2, pp. 664–676, 2000. [10] X. Lin and R.S. Blum, Robust STAP algorithms using prior knowledge for airborne radar applications, Signal Processing (Elsevier), vol. 79, pp. 273–287, 1999. [11] M. Rangaswamy, A unified framework for space-time adaptive processing, in Proc. Ninth IEEE SP Workshop on Statistical Signal and Array Processing, Portland, Oregon, USA, 14-16 September 1998. [12] Skolnik, Radar Handbook, 2nd Edition, McGraw-Hill, 1990. [13] F.D. Lapierre and J.G. Verly, Registration-based solutions to the range-dependence problem in STAP radars, in Adaptive Sensor Array Processing (ASAP) Workshop, MIT Lincoln Laboratory, Boston, 11-13 March 2003. [14] P.G. Richardson and DRA Malvern, Relationship between DPCA and adaptive space time processing techniques for clutter suppression, in International Conference on Radar, Paris, 3-6 May 1994. [15] R.S. Blum, W.L. Melvin, and M.C. Wicks, An analysis of adaptive DPCA, in Proc. IEEE National Radar Conference, Univ. of Michigan, 13-16 May 1996. [16] W.F. Gabriel, Adaptive digital processing investigation of the DFT subbanding vs transversal filter canceler, NRL Report 8981, Naval Research Laboratory, 1986. [17] Robert C. DiPietro, Extended factored space-time processing for airborne radar systems, in Conference Record of the Twenty-Sixth Asilomar Conference on Signals, Systems & Computers, vol. 1, Pacific Grove, California, 26-28 October 1992, pp. 425–430. [18] Z. Bao, G. Liao, R. Wu, Y. Zhang, and Y. Wang, Adaptive spatial-temporal processing for airborne radars, Chinese Journal of Electronics, vol. 2, no. 1, pp. 27, 1993. [19] Z. Bao, S. Wu, G. Liao, and Z. Xu, Review of reduced rank space-time adaptive processing for airborne radars, in International Conference on Radar (ICR), 8-10 October 1996, pp. 766–769. [20] L.E. Brennan, F. Staudaher, and D.J. Piwinsky, Comparison of space-time adaptive processing approaches using experimental airborne radar data, in IEEE National Radar Conference, Boston, MA, April 1993, pp. 176–185. [21] R.D. Brown, R.A. Schneible, M.X. Wicks, H. Wang, and Y. Zhang, STAP for clutter suppression with sum and difference beams, IEEE Transactions on Aerospace and Electronic Systems (AES), vol. 36, no. 2, pp. 634–646, 2000. [22] H. Wang, Y. Zhang, and Q. Zhang, An improved and affordable space-time adaptive processing approach, in International Conference on Radar (ICR), Beijing, China, 8-10 October 1996, pp. 72–77. [23] J.S. Goldstein, I.S. Reed, and L.L. Scharf, A multistage representation of the Wiener filter based on orthogonal projections, IEEE Transactions on Information Theory, vol. 44, no. 7, pp. 2943–2959, 1998. [24] J.R. Roman, M. Rangaswamy, D.W. Davis, Q. Zhang, B. Himed, and J.H. Michels, Parametric adaptive matched filter for airborne radar applications, IEEE Trans. on Aerospace and Electr. Systems (AES), vol. 36, no. 2, pp. 677–692, 2000. [25] Joseph R. Guerci, Space-Time Adaptive Processing for Radar, Artech House, 2003.

