
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 52, NO. 4, APRIL 2014

Radar Coincidence Imaging: An Instantaneous Imaging Technique With Stochastic Signals

Dongze Li, Xiang Li, Member, IEEE, Yuliang Qin, Member, IEEE, Yongqiang Cheng, Member, IEEE, and Hongqiang Wang, Member, IEEE

Abstract—Motivated by classical coincidence imaging, which has been realized in optical systems, an instantaneous microwave radar imaging technique is proposed to obtain focused high-resolution images of targets without motion limitation. Such a radar coincidence imaging method resolves target scatterers by measuring the independent waveforms of their echoes, which is quite different from conventional radar imaging techniques, where target images are derived from time-delay and Doppler analysis. Due to the peculiar features of coincidence imaging, the proposed imaging method has two potential advantages over conventional ones: 1) it shortens the imaging time to as little as one pulse width without resolution deterioration, thereby improving the processing of noncooperative targets, and 2) it simplifies the receiver, resulting in lower cost and platform flexibility in application. The basic principle of radar coincidence imaging is to employ time-space independent detecting signals, produced by a multitransmitter configuration, so that scatterers located at different positions reflect waveforms that are independent of each other, and then to derive the target image from prior knowledge of the detecting-signal spatial distribution. By constructing the mathematical model, the necessary conditions on the transmitting waveforms for achieving radar coincidence imaging are analyzed. A parameterized image-reconstruction algorithm is introduced to obtain high resolution for microwave radar systems. The effectiveness of the proposed imaging method is demonstrated via a set of simulations. Furthermore, the impacts of modeling error, noise, and waveform independence on the imaging performance are discussed in the experiments.

Index Terms—Coincidence imaging, radar imaging, stochastic signals.

I. INTRODUCTION

IMAGING radars take different forms and have various applications, ranging from stationary radars to synthetic aperture radars, for aircraft objects or celestial ones. All of these imaging techniques are mainly developed based on the mathematical model of the range-Doppler (RD) principle [1].
Manuscript received August 19, 2012; revised April 5, 2013; accepted April 5, 2013. Date of publication May 22, 2013; date of current version January 2, 2014. This work was supported in part by the National Science Foundation for Distinguished Young Scholars of China under Grant 61025006, the National Natural Science Foundation for Young Scientists of China under Grant 61101182, and the National Natural Science Foundation of China under Grant 61171133.
The authors are with the School of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073, China (e-mail: dongzeli1010@yahoo.com.cn; lixiang01@vip.sina.com; yuliang.qin@gmail.com; nudtyqcheng@gmail.com; oliverwhq@vip.tom.com).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TGRS.2013.2258929

Moreover, Munson [2] and Mensa [3] also provided a rather different explanation of radar imaging using tomography theory. Both techniques construct high-resolution images by processing data obtained from many different perspective views of a target area. The image resolution
of coherent radars is generally produced by the fundamental processing of measuring range (time-delay) and changes in range (Doppler gradient) while the observation angle varies [1]. The RD principle treats data (returned pulses) from various aspect angles as a time-history series in which the exhibited Doppler frequency is associated with the scatterer azimuth position. Target images are then derived by extracting the time-delay and Doppler-frequency information of the radar echoes accumulated while observing the relative motion between the target and the radar antenna. Alternatively, tomography explains radar image formation based on the projection-slice theorem. Munson et al. [2] demonstrated that the radar receiving signal derived from a particular aspect angle is actually the convolution between the transmitting signal and the projection of the target scattering distribution at the same angle. Target images can then be obtained using Fourier methods based on the support area filled by the returned signals acquired over a range of frequencies and multiple aspect angles. Note that there are no essential differences between the two theories in revealing the radar imaging formalism. Actually, inverse synthetic aperture radar (ISAR) imaging, typically developed from the RD principle, can be seen as a special case of tomography under the conditions of a narrow aspect-angle range, high frequency, and far field. Thus, various radar imaging techniques and algorithms can be demonstrated with the theory of either tomography or the RD principle, depending on the approximations and limiting assumptions in application.
For both the RD principle and tomography, the azimuth (cross-range) resolution is determined by the angular variation during the target observation and is much higher than that obtained by the real antenna aperture. Thus, imaging techniques developed from either of them aim at increasing the observation angles from which the target is seen, which makes 2-D high-resolution imaging possible. There are two typical approaches to meet the requirement of multiple aspect angles. One way is to detect targets using a multiple radar system, as depicted in Fig. 1(a). Multiple radar imaging can be realized by using a real-aperture antenna array, but the number of antennas commensurate with a desired high angular resolution is quite large, and it is too complex to realize the maximum resolution for a large-scale array limited by the

0196-2892 © 2013 IEEE


Fig. 1. (a) Radar imaging of real aperture antenna array. (b) Radar imaging
of synthetic aperture.

hardware and physical environment [5], [6]. Another way is to utilize the relative motion between targets and the radar system, which gives rise to the turntable imaging model [7]. In the stop-and-shoot model [7], every pulse during the coherent integration time (CIT) corresponds to a particular aspect-angle sample, as depicted in Fig. 1(b).
Note that either tomography or the RD principle reconstructs images in the framework of the Fourier transform, which has fine properties only under the condition of uniform sampling. In other words, the aspect-angle increments, namely, \Delta\theta_n = \theta_{n+1} - \theta_n, need to be approximately equal. Both the projection angle in tomography and the relative motion in the RD principle are required to vary uniformly. However, this requirement is rarely satisfied in real radar imaging scenarios if the CIT is long or the target engages in fast maneuvers. Take ISAR imaging for example. If the translational motion and rotation of the target are uniform during the CIT, the Fourier transform will focus the scatterers to their respective cross-range positions, and forming the ISAR image with the fast Fourier transform will lead to a well-focused image [8]. Unfortunately, practical targets are almost always engaged in fast maneuvers, such as acceleration, roll, pitch, and yaw. As a result, the generated higher order terms of the motion vectors induce a time-varying Doppler frequency into the return signals, which widens the frequency spectrum and badly blurs the images in cross-range. To address the problems of imaging noncooperative targets, various motion compensation algorithms have been studied to refocus the imaging blurs caused by nonuniform spatial sampling [8]-[13].
In addition, angular glint [14] and range glint [15], [16] might occur during the CIT in the presence of aspect-angle displacement. The former causes estimation errors of the target phase center, and the latter may induce serious aberrance in the high-range-resolution profile (HRRP), which is the foundation for producing target 2-D images. Therefore, the fusion of radar receiving signals obtained over a relatively long CIT (large aspect-angle variation) indeed provides the desired resolution, but the resultant nonuniform aspect-angle variation meanwhile yields glint and imaging blur beyond recognition [9]. The very reason that aspect-angle integration (or variation) is indispensable in the imaging formalism of both tomography and the RD principle is that a single returned pulse reflected from the target can hardly provide adequate Doppler-gradient information to reconstruct high-resolution 2-D images unless the radar

Fig. 2. Geometry of radar imaging.

system has an antenna array with an enormous number of elements.
Thus, rather than compensating for the nonuniform sampling, the starting point of this paper is to establish an imaging formalism that bypasses the inconsistency caused by the long integration time. Sparked by classical coincidence imaging, we propose radar coincidence imaging: an imaging technique without the limitation of target relative motion, which can derive high-resolution 2-D target images using just a single pulse with far fewer antennas.
Let us consider a simple Gedankenexperiment (thought experiment) that conveys the essence of radar coincidence imaging. The imaging sketch is simply that a radar system located at R_0 transmits a signal pulse to detect a target, as depicted in Fig. 2. An XY coordinate system is located at the target center. Let the concrete transmitting waveform be set aside, and let all attention be devoted to the signal spatial distribution in the target area. S_I(\vec r, t) denotes the radar signal at position \vec r. Because S_I(\vec r, t) is generated by a single pulse of the radar system, it can be expressed as

S_I(\vec r, t) = \mathrm{rect}\left(\frac{t - \tau_{\vec r}}{T_p}\right) s_I(\vec r, t)    (1)

where T_p is the pulse width, s_I(\vec r, t) is the envelope function, and \tau_{\vec r} is the propagation delay. Assume S_I(\vec r, t) is known and, above all, that its phase, and/or amplitude, and/or frequency varies with position at every time slice. If this variation is sufficiently sharp, S_I(\vec r, t) will approximately exhibit the characteristic of time-space independence, expressed as

\int S_I(\vec r, t - \tau)\, S_I^*(\vec r\,', t - \tau')\, dt = \delta(\vec r - \vec r\,',\ \tau - \tau').    (2)

All of the scatterers illuminated by such time-space independent signals thus reflect back echoes with the same waveforms as S_I(\vec r, t), which surely have resolvable features associated with their spatial locations. The receiving signal S_r(t) can then be expressed as the superposition of S_I(\vec r, t), i.e., S_r(t) = \sum_{\vec r} \sigma_{\vec r}\, S_I(\vec r, t - \tau_{\vec r}), where \sigma_{\vec r} is the scattering coefficient at \vec r, and \tau_{\vec r} is the time-delay caused by the propagation to the receiver.
The imaging task is to extract all scatterer echoes from the returned signal superposition and to match them with their respective positions. Set an imaging region I that covers the whole target projection onto the XY plane. Then S_I(\vec r, t) over the entire region I composes a reference set S_I = \{S_I(\vec r, t), \vec r \in I\}. The imaging processing is to perform matched filtering between S_r(t) and every element of S_I. In terms of an arbitrary element S_I(\vec r_1, t), if there exists a scatterer at \vec r_1, only the component of this scatterer's echo within S_r(t) will match S_I(\vec r_1, t), because of the time-space independence; moreover, the matching result represents the scattering coefficient. If there is no scatterer at \vec r_1, no component can match S_I(\vec r_1, t), and the matched filtering gives a null value at this position. Thus, the matching results processed over the whole set S_I give the target scattering distribution.
There are two preconditions for this imaging formalism: 1) the detecting signal S_I(\vec r, t) on the imaging area is required to be time-space independent and 2) S_I(\vec r, t) on the imaging area is known, serving as the reference signal to extract the target spatial scattering distribution from the receiving signal.
Note that the imaging method in this thought experiment does not refer to the Doppler frequency of the receiving signal. Hence, it can resolve two scatterers even if their ranges to the receiving antenna and their Doppler frequencies are equal, because they have an alternative resolvable feature, namely, independent waveforms. Thus, the nonreliance of resolution on the Doppler gradient means that radar coincidence imaging is no longer restricted by the target relative motion or the aspect-angle integration. It can be seen that the target scattering distribution is herein sampled via the wavefront variation. It is the time-space independent signals in the imaging plane that allow the target image to be extracted from the receiving signal. This imaging formalism, which derives target images based on spatially independent detecting signals, is borrowed from classical coincidence imaging.
As the origin of this Gedankenexperiment, classical coincidence imaging infuses a novel perspective into radar imaging, and its noteworthy attributes could potentially provide an enhancement. However, the distinct differences between radar signal processing and classical coincidence imaging, which is an optical phenomenon, lead to a series of problems in accomplishing such a novel imaging method. How to overcome these problems is the focus of this paper.
This paper starts with an analysis of classical coincidence imaging. Based on this imaging formalism, we adopt a multitransmitting configuration to perform radar coincidence imaging, which can generate detecting signals with time-space independence. Then the mathematical model is analyzed, and the necessary conditions for achieving radar coincidence imaging are derived. Using the known transmitting signals and the estimated range of the target, the signal spatial distribution versus time in the target area can be calculated, based on which the target scattering pattern can be extracted from the receiving signal. The peculiarities of radar coincidence imaging are illuminated by contrast with conventional radar imaging.


Then, a parameterized image-reconstruction algorithm, rather than the correlation method, is utilized to produce high resolution, which can overcome the natural limitations of microwave radar systems in realizing coincidence imaging. To verify the effectiveness of the proposed method, simulations are provided for different scenes. As shown in the results, radar coincidence imaging can obtain focused high-resolution images for both stationary targets and maneuvering ones.
This paper is organized as follows. Section II gives the fundamental analysis of classical coincidence imaging. Section III is devoted to the achievement of radar coincidence imaging. Section IV discusses the waveform and the image reconstruction in detail. Along with simulation results, the effectiveness and the performance of radar coincidence imaging are analyzed in Section V. Section VI concludes the work.
II. ANALYSIS OF CLASSICAL COINCIDENCE IMAGING
The first coincidence imaging experiments were performed using entangled photons from a parametric down-converter by Pittman et al. in 1995 [17], inspired by the theory of Klyshko [18], [19]. The object images are extracted from the coincidence rate of photons, from which the name coincidence imaging is derived. The most attractive characteristic is the surprising nonlocal feature (images are produced in the channel without objects), which led the experiment to be immediately named ghost imaging. Quantum entanglement was once regarded as the necessary ingredient to acquire the ghost feature [20]. However, in recent years it has been theoretically and experimentally proved that nonlocality can also be reproduced with classical thermal sources [21]-[24]. Thus, coincidence imaging is identified as two types, i.e., quantum coincidence imaging, which requires quantum entanglement, and classical coincidence imaging, which is performed with classical thermal light [25]. Because microwaves and optical waves share much closer physical properties, this paper studies radar coincidence imaging as an extension of classical coincidence imaging.
Classical coincidence imaging is briefly shown in Fig. 3 [23], [25]. It relies on the use of two spatially correlated light beams: one of them is transmitted through an unknown (test) optical system, which contains an object, and its total intensity is captured by a bucket detector (point detector); the other beam passes through a known (reference) optical system, and its intensity is measured by a CCD camera (an array of pixel detectors). Then, the object image can be obtained by calculating the coincidence intensity (intensity correlation) between the optical fields from the test and the reference systems. The bucket detector that captures the optical field illuminating the object has no spatial resolution, whereas the CCD camera with high spatial resolution measures the field that never interacts with the object. In other words, even though the reference system does not see the object, it can tell us what the object looks like, which is what is termed nonlocality. The nonlocality of coincidence imaging can be clarified by contrast with conventional optical imaging. The light field illuminating the object in conventional optical imaging is


directly received by a CCD camera, where the direct output is the object image. Thus, the setup in Fig. 3 realizes the separation of object and image, which is the substance of the nonlocality in coincidence imaging.

Fig. 3. Classical coincidence imaging.

Now we use classical statistical optics to demonstrate the coincidence imaging principle. In Fig. 3, the thermal light emitted from a thermal source S_0 is divided into two beams at the beam splitter. Having the same origin, the two beams share the optical-field features of the source, which implies a high level of correlation. One of the beams is transmitted through a reference system, and the other passes through a test system, which contains the object to be imaged. The impulse response functions h_r(x_r, x_0) and h_t(x_t, x_0) characterize the reference system and the test system, respectively. Here, x_0, x_r, and x_t denote positions in the source plane, the reference plane, and the detecting plane, respectively. Z is the distance between S_0 and the detectors. I_r(x_r) and I_t(x_t) denote the intensity distributions in the reference plane and the detecting plane, respectively. Detector2 (a CCD camera) receives the optical field from the reference system and records the intensity distribution I_r(x_r), whereas detector1 (a bucket detector) simply records the total intensity collected by the collector lens from the test system. The object image can be reproduced by calculating the coincidence intensity G_{r,t}^{(2)}(x_r, x_t), which is the second-order correlation of the thermal field [25], [26]

G_{r,t}^{(2)}(x_r, x_t) = \langle I_r(x_r) I_t(x_t) \rangle = \langle E_r^*(x_r) E_r(x_r) E_t^*(x_t) E_t(x_t) \rangle    (3)

where \langle \cdot \rangle denotes the ensemble average, E_r(x_r) is the optical field on the reference plane, referred to as the reference field, and E_t(x_t) is the optical field on the detecting plane, referred to as the detecting field.

Before analyzing the imaging procedure, it is necessary to emphasize the property of the incoherent source S_0. A classical thermal-light source contains a large number of particles as sub-sources, which emit light independently and randomly [26]. As a result of the superposition of the fields from all sub-sources with random phases, the entire field fluctuates rapidly and manifests a spatially incoherent character. Thus, as a fully incoherent source, S_0 is characterized by the following first-order correlation function:

\langle E_S^*(x_1) E_S(x_2) \rangle = I(x_1)\, \delta(x_1 - x_2).    (4)

Because the field fluctuation of a thermal source can generally be modeled by a Gaussian random process with zero mean [26], (3) can be expressed as

G_{r,t}^{(2)}(x_r, x_t) = \langle I_t(x_t) \rangle \langle I_r(x_r) \rangle + \left| \langle E_t(x_t) E_r^*(x_r) \rangle \right|^2.    (5)

To express the second-order correlation function more clearly, the intensity fluctuation is used as follows:

\Delta I(x) = I(x) - \langle I(x) \rangle.    (6)

Thus, by replacing I(x) with \Delta I(x), the intensity correlation \langle I_r(x_r) I_t(x_t) \rangle of G_{r,t}^{(2)}(x_r, x_t) changes into the intensity fluctuation correlation

\langle \Delta I_t(x_t)\, \Delta I_r(x_r) \rangle = \langle [I_t(x_t) - \langle I_t(x_t) \rangle][I_r(x_r) - \langle I_r(x_r) \rangle] \rangle
= \langle I_t(x_t) I_r(x_r) \rangle - \langle I_t(x_t) \rangle \langle I_r(x_r) \rangle
= \left| \langle E_t(x_t) E_r^*(x_r) \rangle \right|^2.    (7)

Since the splitter does not change the spatial distribution of the split beams from E_S(x), the output beams through the two different optical systems can be expressed as

E_k(x_k) = \int_{-G/2}^{G/2} E_S(x')\, h_k(x_k, x')\, dx', \quad k = r, t    (8)

where G is the size of the thermal source. Under Fresnel diffraction propagation and the paraxial approximation, the transmission function [24] is

h_k(x_k, x_0) = \frac{e^{i 2\pi z_k/\lambda}}{i \lambda z_k} \exp\left(\frac{i\pi}{\lambda z_k} |x_k - x_0|^2\right), \quad k = r, t    (9)

where \lambda is the source wavelength and z_k is the distance between S_0 and the detectors. Substituting (4), (8), and (9) into (7), we have

\langle \Delta I_r(x_r)\, \Delta I_t(x_t) \rangle = \left| \frac{e^{i 2\pi (z_t - z_r)/\lambda}}{\lambda^2 z_r z_t} \iint_{-G/2}^{G/2} I(x_1)\, \delta(x_1 - x_2) \exp\left(\frac{i\pi}{\lambda z_r} |x_r - x_2|^2 - \frac{i\pi}{\lambda z_t} |x_t - x_1|^2\right) dx_1\, dx_2 \right|^2.    (10)

If the source is large enough and the intensity distribution is uniform, we have the approximation I(x) = I_0 [24]. Set the two beams to have the same propagation distance, i.e., z_t = z_r = d. Then, after some calculation, (10) becomes

\langle \Delta I_r(x_r)\, \Delta I_t(x_t) \rangle = \left| \frac{I_0}{\lambda^2 d^2} \int_{-G/2}^{G/2} \exp\left(\frac{i\pi}{\lambda d} \left(|x_r - x_1|^2 - |x_t - x_1|^2\right)\right) dx_1 \right|^2
= I_c\, \mathrm{sinc}^2\left(\frac{G (x_r - x_t)}{\lambda d}\right) \approx I_c\, \delta(x_r - x_t)    (11)

where I_c = I_0^2 G^2 / \lambda^4 d^4 is the normalized intensity. Note that until now the object has not been taken into account, and (11) just shows the spatial independence between the intensity fluctuations on the detecting plane and the reference plane. After passing through the object, the field intensity I_t(x_t) is collected by a collector lens, which is large enough to gather

Fig. 4. Comparison of coincidence imaging between radar system and optical system.

all the light through the test system. The object is characterized by the transmittance distribution T(x). As the final output of the test system, the ensemble intensity fluctuation has the form

\Delta I_t = \int_U \Delta I_t(x)\, T(x)\, dx    (12)

where U is the size of the test plane. Thus, the final correlation between the intensity fluctuations at the reference detector and the test detector is

\langle \Delta I_r(x_r)\, \Delta I_t \rangle = \left\langle \Delta I_r(x_r) \int_U \Delta I_t(x)\, T(x)\, dx \right\rangle
= \int_U \langle \Delta I_r(x_r)\, \Delta I_t(x) \rangle\, T(x)\, dx
\approx \int_U I_c\, \delta(x_r - x)\, T(x)\, dx
= I_c\, T(x_r).    (13)

Then

T(x_r) \approx \langle \Delta I_r(x_r)\, \Delta I_t \rangle / I_c.    (14)

Therefore, under the conditions of a large, uniform, and fully incoherent light source, T(x_r) at every position can be extracted by calculating the intensity fluctuation correlation between \Delta I_t detected by detector1 and \Delta I_r(x_r) detected by the corresponding pixel of detector2. The spatial independence between \Delta I_r(x_r) and \Delta I_t(x_t) leads to the achievement of the object image reconstruction. Furthermore, it also reveals the reason for nonlocality: it is the point-to-point relationship shown in (11) that allows the object spatial distribution, totally mixed in the final receiving field, to be demodulated via the reference field. Thus, the task of detector1 is just to record the total intensity, without any responsibility for resolution.
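The fluctuation-correlation recovery of (12)-(14) can be illustrated with a toy 1-D simulation. All parameters here (grid size, number of realizations, object shape, exponential speckle statistics) are assumptions for illustration, not taken from the paper: identical speckle realizations feed a pixel-resolved reference arm and a bucket test arm, and correlating the fluctuations reproduces T(x) up to a scale factor.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pix, n_shots = 64, 20000
T = np.zeros(n_pix)
T[20:28], T[40:44] = 1.0, 0.5     # hypothetical transmittance T(x)

# Thermal speckle: exponentially distributed intensity, independent per
# pixel (delta-correlated source, cf. (11)); the beam splitter copies the
# same realization into both arms
I = rng.exponential(1.0, size=(n_shots, n_pix))
I_r = I                           # reference arm: pixel-resolved CCD
I_t = I @ T                       # test arm: bucket detector, as in (12)

# Intensity-fluctuation correlation of (13)-(14)
dI_r = I_r - I_r.mean(axis=0)
dI_t = I_t - I_t.mean()
T_est = dI_r.T @ dI_t / n_shots
T_est /= T_est.max()              # proportional to T(x_r)
```

The bucket detector never resolves x, yet T_est reproduces the object profile from the reference pixels, which is exactly the nonlocal behavior summarized above.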
Reviewing the conditions assumed in the formula deduction above, three kernel preconditions are summarized here for achieving nonlocal imaging: 1) the two signal channels, i.e., the detecting one E_t(x_t) and the reference one E_r(x_r), have the same original spatial distribution; 2) their propagation distances away from the source are equal; and 3) above all, their original spatial distribution E_S(x_0) has the characteristic of full incoherence. Thus, inheriting from the same incoherent source and propagating an equal distance, the fields E_t(x_t) and E_r(x_r) have the same spatially incoherent distribution, which makes their intensity fluctuation correlation yield nonzero values only at the same position and null values in all other cases. Hence, a detecting system without resolution can still give a high-resolution target image.
III. RADAR COINCIDENCE IMAGING
The previous Gedankenexperiment, concisely stating radar coincidence imaging, borrows the principle of coincidence imaging, where the target spatial pattern is obtained from the signal spatial variation in the imaging plane. Here, classical coincidence imaging is regarded as the origin of radar coincidence imaging; however, some slight differences between them might obscure the identity of the two cases. As depicted in Fig. 4, the extraction of the target spatial pattern from the information superposition (ensemble intensity fluctuation \Delta I_t, or receiving signal S_r(t)) in the two imaging methods both take the form of a correlation between signals A and B, but two visible differences in the imaging process exist between them. First, for radar coincidence imaging, A and B are the detecting signal and the receiving signal, respectively, whereas for optical coincidence imaging, they are the reference signal (intensity) and the receiving signal (intensity), respectively. By contrast with the imaging in the radar system, a reference channel is added in the optical system. Second, radar coincidence imaging requires the time-space independence of signal A, whereas the optical one requires the spatial independence between A and C.
In terms of the first difference, the thermal fields E_t(x_t) and E_r(x_r) have the same source, the same propagation distance, and the same propagation function. Hence, they are totally identical to each other before the target intervention. As a result, the intensity fluctuation \Delta I_r(x_r) is equal to \Delta I_t(x_t) as well. Thus, \langle \Delta I_r(x_r)\, \Delta I_t(x_t) \rangle in (11), which has the form of a spatial cross-correlation, actually also presents the self-correlation of the detecting signal

\langle \Delta I_t(x_r)\, \Delta I_t(x_t) \rangle = \langle \Delta I_r(x_r)\, \Delta I_t(x_t) \rangle \approx I_c\, \delta(x_r - x_t).    (15)

Then, T(x_r) \approx \langle \Delta I_t(x_r)\, \Delta I_t \rangle / I_c. Thus, classical coincidence imaging basically employs the correlation between the detecting signal and the receiving signal as well. The reason why classical coincidence imaging does not directly use E_t(x_t) for correlation is that the thermal fields of a fully incoherent source vary so sharply and randomly that the estimation of the field or intensity distribution on the detecting plane is very difficult.


Thus, the reference system is arranged for the awareness of E_t(x_t) or I_t(x_t).
The second difference proceeds from the optical analytical treatment, where for simplicity only spatial variables are considered and the time argument is ignored. The complete form of a fully incoherent source in (4) should be

\langle E_S^*(x_1, t)\, E_S(x_2, t + \tau) \rangle = I(x_1)\, \delta(x_1 - x_2)\, \delta(\tau).    (16)

After some calculation, \langle \Delta I_r(x_r)\, \Delta I_t \rangle can be rewritten as

\langle \Delta I_r(x_r, t)\, \Delta I_t(x_t, t + \tau) \rangle = \left| \langle E_t(x_t, t)\, E_r^*(x_r, t + \tau) \rangle \right|^2 \approx I_c\, \delta(x_r - x_t)\, \delta(\tau).    (17)
Furthermore, as an ensemble average, \langle \Delta I_r(x_r, t)\, \Delta I_t(x_t, t + \tau) \rangle is derived by calculating the mean value of a number of samples of \Delta I_r(x_r, t) and \Delta I_t(x_t, t + \tau). From the viewpoint of radar signal processing, these samples are generally regarded as time sequences. Then, (17) can be rewritten as

\langle \Delta I_r(x_r, t)\, \Delta I_t(x_t, t + \tau) \rangle = \lim_{K \to \infty} \frac{1}{K} \sum_{k=1}^{K} \Delta I_r(x_r, t_k)\, \Delta I_t(x_t, t_k + \tau).    (18)

Along with (15) and (17), (18) finally changes to

\langle \Delta I_t(x_r, t)\, \Delta I_t(x_t, t + \tau) \rangle = \lim_{K \to \infty} \frac{1}{K} \sum_{k=1}^{K} \Delta I_t(x_r, t_k)\, \Delta I_t(x_t, t_k + \tau) \approx I_c\, \delta(x_r - x_t)\, \delta(\tau).    (19)
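The discretized average in (18)-(19) can be checked with a short numerical experiment. This is a sketch under invented parameters (pixel count, sample count, complex Gaussian field statistics): the sample average of intensity-fluctuation products is appreciable only at the same pixel and zero lag.

```python
import numpy as np

rng = np.random.default_rng(2)

K, n_pix = 50000, 4
# Complex Gaussian detecting field, independent across pixels and samples
E = (rng.standard_normal((K, n_pix))
     + 1j * rng.standard_normal((K, n_pix))) / np.sqrt(2)
I = np.abs(E) ** 2
dI = I - I.mean(axis=0)

def fluct_corr(x_r, x_t, tau):
    """Sample average of dI(x_r, t_k) * dI(x_t, t_k + tau), cf. (18)."""
    return (dI[:, x_r] * np.roll(dI[:, x_t], -tau)).mean()

print(fluct_corr(1, 1, 0))   # same pixel, zero lag: near Var(I) = 1
print(fluct_corr(1, 2, 0))   # different pixels: near 0
print(fluct_corr(1, 1, 3))   # nonzero lag: near 0
```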

The right-hand side of (19) is just the discretized form of (2), which presents the time-space independence of the detecting signal. Therefore, there is no doubt that the Gedankenexperiment is unified with classical coincidence imaging. The focal points in radar coincidence imaging are then: how a microwave radar system can achieve the formalism of coincidence imaging; what differences exist between the conventional radar imaging methods and the proposed one; and what improvement this new imaging formalism can provide. These questions are discussed in the following paragraphs.
The basic form of radar coincidence imaging is briefly shown in Fig. 2, where the transmitting part has not yet been considered. The core of achieving coincidence imaging in a microwave radar system is to design the transmitting signal so that it produces time-space independent signals in the imaging plane. This can be inferred from the fully incoherent thermal source used in classical coincidence imaging, which consists of a number of sub-sources emitting light stochastically.
Therefore, a multitransmitting configuration in a manner analogous to the thermal source, including multiple independent sub-sources, is imaginable and reasonable for radar coincidence imaging. Fig. 5 re-illustrates the simplified setup of the Gedankenexperiment in detail. There is an array of N transmitting elements and a receiving element. A coordinate system X-Y is built at the center of the detecting (imaging) region, labeled I. \vec R_n, \vec R_r, and \vec r are the position vectors of the n-th transmitting element, the receiving antenna, and an arbitrary point within I, respectively. S_I(\vec r, t) denotes the radar signal on I, which represents the spatial distribution of the detecting signal, or the wavefront on the imaging region. S_r(t) is the receiving signal. S_{tn}(t) is the transmitting signal of the n-th transmitting element.

Fig. 5. Geometry of radar coincidence imaging.
Since the thermal source of classical coincidence imaging is fully temporally and spatially incoherent, S_{tn}(t) should also be independent in time and space (i.e., across the locations of the transmitting antennas). Thus, the cross-correlation of the transmitting signals is supposed to be

R_T(n_1, n_2; \tau_1, \tau_2) = \int S_{t n_1}(t - \tau_1)\, S_{t n_2}^*(t - \tau_2)\, dt = \delta(\tau_1 - \tau_2)\, \delta(n_1 - n_2).    (20)
That is to say, the signal set \{S_{tn}(t), 1 \le n \le N\} is time-independent and group-orthogonal. Then S_I(\vec r, t) is expressed as

S_I(\vec r, t) = \sum_{n=1}^{N} S_{tn}\left(t - \frac{|\vec r - \vec R_n|}{c}\right).    (21)

The receiving signal can be written as the superposition of S_I(\vec r, t)

S_r(t) = \int_I \sigma_{\vec r}\, S_I\left(\vec r,\ t - \frac{|\vec r - \vec R_r|}{c}\right) d\vec r    (22)

where \sigma_{\vec r} is the scattering coefficient of the target scatterer located at \vec r, and for positions without target scatterers \sigma_{\vec r} = 0.
Then, with the postulate of (20), the self-correlation of S_I(\vec r, t) turns out to be

R_I(\vec r, \vec r\,'; \tau, \tau') = \int S_I(\vec r, t - \tau)\, S_I^*(\vec r\,', t - \tau')\, dt
= \int \sum_{n=1}^{N} S_{tn}\left(t - \tau - \frac{|\vec r - \vec R_n|}{c}\right) \sum_{n'=1}^{N} S_{tn'}^*\left(t - \tau' - \frac{|\vec r\,' - \vec R_{n'}|}{c}\right) dt
= \sum_{n=1}^{N} \sum_{n'=1}^{N} R_T\left(n, n';\ \frac{|\vec r - \vec R_n|}{c} + \tau,\ \frac{|\vec r\,' - \vec R_{n'}|}{c} + \tau'\right)
= \sum_{n=1}^{N} \delta\left(|\vec r - \vec R_n| - |\vec r\,' - \vec R_n| - c(\tau' - \tau)\right).    (23)

Because the target range |\vec R_n| is generally much larger than the target size, we have the approximation |\vec r - \vec R_n| = |\vec R_n| - \vec r \cdot \vec D_n [1], where \vec D_n is the unit direction vector, namely, \vec R_n = |\vec R_n| \vec D_n. Then, (23) becomes

R_I(\vec r, \vec r\,'; \tau, \tau') = \sum_{n=1}^{N} \delta\left(\Delta \vec r \cdot \vec D_n - c \Delta \tau\right), \quad \Delta \vec r = \vec r\,' - \vec r, \quad \Delta \tau = \tau' - \tau.    (24)

Then the possible results of (24) can be analyzed through the following equation:

\begin{pmatrix} \cos\theta_1 & \sin\theta_1 \\ \cos\theta_2 & \sin\theta_2 \\ \vdots & \vdots \\ \cos\theta_N & \sin\theta_N \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = \begin{pmatrix} c\Delta\tau \\ c\Delta\tau \\ \vdots \\ c\Delta\tau \end{pmatrix}    (25)

where \vec D_n = (\cos\theta_n, \sin\theta_n) and \Delta \vec r = (\Delta x, \Delta y). The coefficient matrix is labeled D. Because the direction vectors are noncollinear and different from each other, \mathrm{rank}(D) = 2. Let \tilde D denote the corresponding augmented matrix, expressed as

\tilde D = \begin{pmatrix} \cos\theta_1 & \sin\theta_1 & c\Delta\tau \\ \cos\theta_2 & \sin\theta_2 & c\Delta\tau \\ \vdots & \vdots & \vdots \\ \cos\theta_N & \sin\theta_N & c\Delta\tau \end{pmatrix}.    (26)

Obviously, the equation has a nonzero solution in the case of N = 2, which implies that R_I(\vec r, \vec r\,'; \tau, \tau') can reach its maximum, namely 2, even for some \Delta \vec r \ne 0, \Delta\tau \ne 0. Now consider the case of N > 2. Choose the first three rows of \tilde D and calculate the third-order determinant \det(\tilde D) to investigate \mathrm{rank}(\tilde D):

\det(\tilde D) = 4 c \Delta\tau\, \sin\left(\frac{\theta_2 - \theta_1}{2}\right) \sin\left(\frac{\theta_3 - \theta_1}{2}\right) \sin\left(\frac{\theta_3 - \theta_2}{2}\right).    (27)

1) If c\Delta\tau \ne 0: Because \theta_3 \ne \theta_2 \ne \theta_1, \det(\tilde D) \ne 0. Then \mathrm{rank}(\tilde D) = 3 > \mathrm{rank}(D). As a result, the equation has no solution, which indicates that no \Delta \vec r can make R_I(\vec r, \vec r\,'; \tau, \tau') reach the maximum, namely N, when \Delta\tau \ne 0.
2) If c\Delta\tau = 0: \mathrm{rank}(\tilde D) = \mathrm{rank}(D) = 2. Then the equation has the unique solution \Delta \vec r = 0. It implies that R_I(\vec r, \vec r\,'; \tau, \tau') can reach the maximum at \Delta \vec r = 0 when \Delta\tau = 0.
Therefore, R I (r , r  ; ,  ) can reach the maximum of N
only under the condition of  = 0 and r = 0 when there
exists more than two transmitting antennas. Then if the antenna
number is big, for example N = 10, the result of (24) would
have the form of an approximate delta function


(28)
R I (r , r  ; ,  ) N r r  ,  .
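The rank argument above can be checked numerically: for N > 2 noncollinear direction vectors, the augmented matrix of (26) gains rank whenever cΔτ ≠ 0, so (25) is consistent only for Δτ = 0 with the unique solution Δr = 0. A minimal sketch (the aspect angles are arbitrary assumptions):

```python
import numpy as np

theta = np.deg2rad([10.0, 25.0, 40.0, 70.0])          # N = 4 distinct aspect angles
D = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # coefficient matrix of (25)

def aug_rank(c_dtau):
    # augmented matrix of (26): the right-hand side is the constant column c*dtau
    rhs = np.full((len(theta), 1), c_dtau)
    return np.linalg.matrix_rank(np.hstack([D, rhs]))

print(np.linalg.matrix_rank(D))  # 2
print(aug_rank(0.0))             # 2 -> consistent, unique solution dr = 0
print(aug_rank(1.0))             # 3 -> inconsistent, no solution when dtau != 0
```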
Therefore, radar signals on the imaging plane approximately have the time-space independence under the postulate of (20) for multitransmitting signals. To highlight the formulism of coincidence imaging, here we use S_I(\vec{r}, t) to construct a reference signal, like the reference field E_r(x_r) in the classical case:

S(\vec{r}, t) = S_I\left(\vec{r}, t - \frac{|\vec{r} - \vec{R}_r|}{c}\right),  \quad \vec{r} \in I.   (29)
This reference signal S(\vec{r}, t) is just the transform of the detecting signal S_I(\vec{r}, t) with an additional time delay induced by the propagation to the receiving antenna. Then, the scattering coefficient at an arbitrary position \vec{r}\,' can be obtained via filtering the receiving signal with S(\vec{r}\,', t):

\int S_r(t) S^*(\vec{r}\,', t) \, dt = \int_I \sigma_r \int S_I\left(\vec{r}, t - \frac{|\vec{r} - \vec{R}_r|}{c}\right) S_I^*\left(\vec{r}\,', t - \frac{|\vec{r}\,' - \vec{R}_r|}{c}\right) dt \, d\vec{r}
= \int_I \sigma_r R_I\left(\vec{r}, \vec{r}\,'; \frac{|\vec{r} - \vec{R}_r|}{c}, \frac{|\vec{r}\,' - \vec{R}_r|}{c}\right) d\vec{r}
\approx \int_I \sigma_r N \delta(\vec{r}\,' - \vec{r}) \, d\vec{r} = N \sigma_{r'}.   (30)

That is

\sigma_{r'} \approx \frac{1}{N} \int S_r(t) S^*(\vec{r}\,', t) \, dt.   (31)

Equation (31) shows that the radar coincidence imaging technique can obtain the target image as long as the transmitting signals satisfy the condition in (20). This condition ensures that the detecting signal has a high-level variety, namely, time-space independence. Therefore, radar coincidence imaging is summarized as follows: radar coincidence imaging employs a multitransmitting configuration to transmit time-independent and group-orthogonal signals, which ensure that the target area is covered by detecting signals with the time-space independent characteristic. Then the target scattering distribution is extracted via filtering the receiving signal with the known distribution of this detecting signal, whose independence degree determines the imaging resolution.
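The correlation reconstruction in (29)-(31) can be sketched end-to-end in a toy 1-D simulation. All numbers below (geometry, sample rate, cell grid, scatterer values) are illustrative assumptions, and a deliberately near-field layout is used so that cell-to-cell delays differ by whole samples:

```python
import numpy as np

rng = np.random.default_rng(0)
c = 3e8
N, K, fs = 16, 4096, 2e9            # transmitters, time samples, sample rate
ant_x = np.linspace(-25, 25, N)     # transmitter positions on a line (m)
R0 = 100.0                          # stand-off range, exaggeratedly small (m)
grid = np.linspace(-20, 20, 9)      # 1-D cross-range imaging cells (m)

# independent Gaussian-noise waveforms, one per transmitter -> postulate (20)
wf = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))

def detecting(x):
    # S_I(r, t) of eq. (21): superpose the delayed waveforms at cell x
    out = np.zeros(K, dtype=complex)
    for n in range(N):
        d = np.hypot(R0, x - ant_x[n])              # |r - R_n|
        out += np.roll(wf[n], int(round((d - R0) * fs / c)))
    return out

S_I = np.array([detecting(x) for x in grid])        # reference signals, eq. (29)

# receiving signal of eq. (22): two scatterers (receiver delay neglected)
sigma_true = np.zeros(len(grid))
sigma_true[2], sigma_true[6] = 1.0, 0.7
S_r = sigma_true @ S_I

# correlation reconstruction of eq. (31), normalized by the reference energy
sigma_hat = np.abs(S_I.conj() @ S_r) / (2 * N * K)
print(np.round(sigma_hat, 2))
```

The two scatterers emerge as peaks at the correct cells; the residual floor at the other cells reflects the finite time-independence of the waveforms, which is the limitation discussed in Section IV.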


The essence of radar coincidence imaging is straightforward. The transmitting waveform with the characteristic of
time-independence and group-orthogonality increases the spatial variety of wavefronts, which makes the scatterers within a
beam reflect echoes of different waveforms according to their
respective locations. Furthermore, the distribution of detecting
signals covering the target can be calculated based on the
known transmitting signals and the estimated target center, and
then is used as a priori information to reconstruct the target
image.
Several basic issues about radar coincidence imaging should
be clarified here.
a) Differences between radar coincidence imaging and conventional radar imaging: In conventional cases, the wavefront on a target retains marked correlation when the radar transmits coherent signals. Target scatterers at different positions are illuminated by signals of almost identical amplitude, frequency, and phase. Consequently, the echoes of these scatterers have the same waveform. Thus, conventional radar imaging resolves scatterers by extracting the differences emerging in the time delay and Doppler gradient of their echoes.
For radar coincidence imaging, the wavefront on a target exhibits such a considerable variety that it approximately has a spatially independent characteristic. Thus, the echoes of scatterers within a beam do not differ from each other merely in time delay and Doppler frequency. Above all, their waveforms are highly different, which provides enough information for resolving scatterers within a radar beam. In particular, this resolvable characteristic does not need aspect-angle integration, and it can be achieved using only a single returned pulse. Therefore, the target scattering distribution can be viewed as being sampled by the spatially varying pattern of the detecting signals. The higher the independence degree of the detecting signals, the better the resolution the images can achieve.
Doppler frequency remains a significant feature of radar receiving signals, but it is not processed as the key to resolving targets in radar coincidence imaging, where whether scatterers can be distinguished is determined by the independence degree of their waveforms. Consequently, some fundamental topics or conventional factors concerning the analysis of time-delay and Doppler resolution might be inapposite for the discussion of radar coincidence imaging. Take the ambiguity function for instance, which is generally a major tool to characterize radar performance. Certainly, the ambiguity function in radar coincidence imaging can be given by the definition

\chi(\tau, f_d) = \left| \int s(t) s^*(t - \tau) \exp(j 2\pi f_d t) \, dt \right|.

However, it characterizes how well one can identify the target parameters of time delay and Doppler based on the transmission of a known waveform. Evidently, it is inappropriate for evaluating radar coincidence imaging, which does not rely on the range-Doppler method.
b) Differences between radar coincidence imaging and
multielement radar imaging: Although the multitransmitting configuration employed in radar coincidence imaging has been widely utilized in existing radar systems [5], [28], the two techniques differ substantially in imaging formulism and signal-

processing procedure. First, the waveform of radar coincidence


imaging is time-independent and group-orthogonal. Conventional multielement radars also make ample use of group-orthogonal signals, but they focus on the multiple paths or multiple observation angles generated by the multiple antennas.
Orthogonal waveforms, which avoid interference among the transmitted signals, are used to separate the components corresponding to each path before signal processing [29]. Then the time delay
and Doppler frequency of these separated components are
generally extracted, respectively, to rebuild the target image.
Hence, conventional multielement radar imaging utilizes the
waveform orthogonality for separating components of each
path, and then derives target images under the framework of
RD principle where scatterers are resolved via the time-delay
and Doppler differences of their echoes. By contrast, radar
coincidence imaging needs exactly this interference generated by the multiple transmitting signals. The group orthogonality, along with the time-independence, is supposed to make the wavefront show spatial fluctuations. In addition, the components corresponding to each transmitter are not separated at any stage of the imaging procedure, let alone separately
processed. Thus, radar coincidence imaging utilizes orthogonal
waveforms to increase spatial variety of detecting signals, and
derives target images by resolving scatterer echoes based on their waveform differences.
c) Potential advantages of radar coincidence imaging:
Radar coincidence imaging does not require the target to move uniformly to cooperate with data acquisition. Moreover, since
radar coincidence imaging could obtain target 2-D images
with a single pulse, the impact of target uncooperative motions
on imagery qualities could be markedly decreased due to the
very short imaging time. Therefore, radar coincidence imaging
could derive focused images of either stationary targets or
noncooperatively moving ones.
Additionally, we consider the enhancement brought by the
nonlocal imaging. The direct meaning of nonlocality is
the separation of the object and the image, which indicates that target images are not given by the signal channel modulated by the target. In a further interpretation, nonlocality means
by the target. In a further interpretation, nonlocality means
the separation of the object and the high-resolution catcher
or receiver, which indicates that the signal receiver does not need high resolution. Due to this peculiarity, detector 1 of the setup in Fig. 3 is a simple point detector, which only records the total signal intensity. Otherwise, the receiver of the target signal would need high resolution to distinguish fields arriving from different orientations, as in conventional optical imaging. Similarly,
a narrow receiving beam is generally required for the conventional radar imaging to distinguish echoes from different
arrival orientations. However, radar coincidence imaging with
such a nonlocal characteristic can resolve echoes, which are
even within the omnidirectional receiving signals, via their
independent waveforms. As a result, it would surely simplify the configuration of the radar receiving subsystem because of the reduced requirement for high resolution. This attribute results in a lower cost and smaller size for radar receivers, which provides more flexibility on various application platforms.
As previously interpreted, an optical system uses a reference channel to obtain the detecting signal distribution because the thermal field is difficult to estimate. However, considering the better controllability and finer stability of microwave radar, the spatial distribution of the detecting signals can be calculated from the known transmitting waveform and the estimated target range. Thus, an actual reference channel is unnecessary for radar coincidence imaging. This is a great convenience for achieving coincidence imaging with microwave radar by contrast with optical systems.
Despite these advantages shown in this extension of classical coincidence imaging, the microwave radar system cannot emulate the optical system in some aspects of performance, which will be discussed later along with the waveform design.
IV. WAVEFORM AND IMAGE RECONSTRUCTION

Fig. 6. (a) S_I(\vec{r}, t') for coherent signals. (b) Self-correlation of S_I(\vec{r}, t') for coherent signals. (c) S_I(\vec{r}, t') for stochastic signals. (d) Self-correlation of S_I(\vec{r}, t') for stochastic signals.

Two necessary conditions for achieving radar coincidence imaging are given according to (20) in the following forms.
1) The group signals of the transmitting array are orthogonal to each other:

\int S_{ti}(t) S_{tj}^*(t) \, dt = \begin{cases} 0, & i \neq j \\ \text{constant}, & i = j \end{cases}  \quad (i, j = 1, 2, \ldots, N).   (32)

2) The transmitting signal of each transmitter is independent in the time domain:

\int S_{tj}(t - \tau_1) S_{tj}^*(t - \tau_2) \, dt = \begin{cases} 0, & \tau_1 \neq \tau_2 \\ \text{constant}, & \tau_1 = \tau_2 \end{cases}  \quad (j = 1, 2, \ldots, N)   (33)

where S_{tj}(t) has the form S_{tj}(t) = \mathrm{rect}(t/T_p)\, s_{tj}(t), and s_{tj}(t) is the envelope function. The two conditions aim to ensure that a single transmitting pulse produces a time-space independent distribution of the radar signals on the imaging area. To satisfy the independence condition, the transmitting signal is required to be as stochastic as possible.

Microwave radar generally transmits sine waves expressed as

S_t(t) = A(t) \exp(j(2\pi f t + \varphi))   (34)

where A(t) is the complex envelope, f is the carrier frequency, and \varphi is the initial phase. The signal is determined by these three parameters. To keep the signal coherent, the envelope, frequency, and phase are generally controlled to vary regularly in conventional cases. For example, the linear frequency modulated (LFM) signal expressed as (35) is extensively used in coherent radars, which has a wide bandwidth:

S_t(t) = \mathrm{rect}\left(\frac{t'}{T_p}\right) \exp\left(j 2\pi \left(f t' + \frac{1}{2}\mu t'^2\right)\right)   (35)

where \mu is the frequency-modulation rate, t' = t - mT, m is the index of the transmitting pulse, and T is the pulse repetition time (PRT). This signal certainly presents high correlation due to its regularly varying parameters. On the other hand, time-independent signals can be derived by stochastically modulating the parameters of amplitude, and/or frequency, and/or phase, expressed as

S_t(t) = A(t) \exp(j(2\pi f(t)\, t + \varphi(t)))\, \mathrm{rect}\left(\frac{t}{T_p}\right)   (36)

where A(t), f(t), and \varphi(t) are stochastic process functions that specify the fluctuations of amplitude, frequency, and phase, respectively. As is well known, white noise is an ideal independent waveform. The stochastic transmitting signal utilized herein is generated by imposing zero-mean Gaussian-noise modulation on the amplitude:

S_t(t) = A(t) \exp(j(2\pi f t + \varphi))\, \mathrm{rect}\left(\frac{t}{T_p}\right)   (37)

where

R_A(\tau) = E[A(t) A^*(t + \tau)] = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} A(t) A^*(t + \tau) \, dt = \delta(\tau).

Then

R_{S_t}(t, t + \tau) = \lim_{T_p \to \infty} \frac{1}{T_p} \int_{-T_p/2}^{T_p/2} S_t(t) S_t^*(t + \tau) \, dt
= \lim_{T_p \to \infty} \frac{1}{T_p} \int_{-T_p/2}^{T_p/2} A(t) A^*(t + \tau) \exp[j(2\pi f t + \varphi)] \exp[-j(2\pi f (t + \tau) + \varphi)] \, dt
= \exp(-j 2\pi f \tau) R_A(\tau) = \delta(\tau).   (38)

Therefore, the transmitting signal of a finite pulse width can be approximately regarded as time-independent. The independence degree of the transmitting signals relies on the independence degree of the stochastic modulation of the three parameters. Similarly, group-orthogonal signals can be derived by specifying the signal parameters of different transmitters with mutually independent stochastic processes.
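As a quick numerical check of (32), (33), and (38), the sketch below generates two Gaussian-amplitude pulses of the form (37) and verifies that each is nearly uncorrelated with its own delayed copy and with the other transmitter's pulse. The carrier frequency, sample rate, and pulse width are assumed values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, Tp, f0 = 2e9, 1e-6, 100e6      # sample rate, pulse width, carrier (assumed)
K = int(fs * Tp)
t = np.arange(K) / fs

def stochastic_pulse(rng):
    # eq. (37): zero-mean complex Gaussian amplitude times a carrier
    A = rng.standard_normal(K) + 1j * rng.standard_normal(K)
    return A * np.exp(1j * 2 * np.pi * f0 * t)

s1, s2 = stochastic_pulse(rng), stochastic_pulse(rng)

# normalized correlation: near 1 at zero self-lag, near 0 elsewhere
def xcorr(a, b, lag):
    return np.vdot(a[:K - lag], b[lag:]) / np.sqrt(np.vdot(a, a).real * np.vdot(b, b).real)

print(abs(xcorr(s1, s1, 0)))    # = 1: the zero-lag self-correlation
print(abs(xcorr(s1, s1, 50)))   # near 0, order 1/sqrt(K): time-independence, eq. (33)
print(abs(xcorr(s1, s2, 0)))    # near 0: group orthogonality, eq. (32)
```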
Now we simulate the detecting signals S_I(\vec{r}, t) on the imaging plane I to compare the signal spatial distributions produced

Fig. 7. Instantaneous wavefronts for three types of stochastic signals. (a) Stochastic modulation of frequency. (b) Stochastic modulation of amplitude. (c) Stochastic modulation of phase.

Fig. 8. (a) Self-correlation functions of S_1(t) and S_2(t). (b) Center coincidence of S_1(t). (c) Center coincidence of S_2(t).

by different waveforms. Here, the center of I is 10 km away from the radar system. In Fig. 6, the varying S_I(\vec{r}, t') versus the position vector \vec{r} is depicted to represent the instantaneous distribution of the detecting signal at the instant t', where S_I(\vec{r}, t') is generated by the coherent signal specified in (35) and the stochastic signal specified in (37), respectively. In addition, the 2-D self-correlation of S_I(\vec{r}, t') is also given to evaluate the spatial independence degree in the two cases. Fig. 7(a)-(c) illustrates S_I(\vec{r}, t') produced by stochastic signals modulated with zero-mean Gaussian noise on frequency, amplitude, and phase, respectively. Clearly, the signal distribution generated by coherent waveforms exhibits marked spatial correlation. By contrast, stochastic waveforms produce detecting signals that fluctuate incoherently versus position. As shown in Fig. 7, every approach of stochastic modulation, whether on amplitude, frequency, or phase, can increase the wavefront variety. On the other hand, the incoherent distribution shown in Fig. 6(c) is still far from the desired spatial independence, and Fig. 6(d) also reveals that signals at adjacent positions remain correlated to a certain extent. The reason for this degenerated spatial independence lies in the incompletely time-independent transmitting signals. Note that (38) gives a delta function only under the condition of a continuous and infinite time domain. However, completely independent signals are almost impossible for either simulated data or actual transmitting signals, which are generated on a discrete and finite time domain. To illustrate the impact of time-independence on radar coincidence imaging, we compare the results of the center coincidence defined in (39), produced with two transmitting signals of different time-independence degrees, labeled S_1(t) and S_2(t):

R_0(\vec{r}) \triangleq R_I(0, \vec{r}; 0, 0) = \int S_I(0, t) S_I^*(\vec{r}, t) \, dt,  \quad \vec{r} \in I.   (39)
Neglecting the time delay |\vec{R}_r|/c, R_0(\vec{r}) can be seen as the coincidence imaging result of a target that contains only a single scatterer located at the center of I, i.e., (0, 0). Thus,

Fig. 9. Signal feature of microwave and optical wave. (a) Signal spatial
distribution of microwaves. (b) Center coincidence of microwaves. (c) Signal
spatial distribution of optical waves. (d) Center coincidence of optical waves.

R_0(\vec{r}) can directly present the imaging performance of the correlation method. Then, to quantitatively express the time-independence degree of a transmitting signal, we define a self-correlation coefficient as

\gamma = \ln\left[\frac{1}{2T_p} \int_{-T_p}^{T_p} \left|\frac{R(\tau)}{R(0)}\right| d\tau\right],  \quad R(\tau) = \int_{T_p} S_t(t)\, S_t^*(t - \tau) \, dt.   (40)

Signals with a high time-independence degree have a small \gamma. For example, for an absolutely independent signal with a pulse width of 50 \mu s, \gamma is 4. Here, \gamma of S_1(t) and S_2(t) is 5.7 and 6.7, respectively. In Fig. 8(a), the blue solid line and the red dashed line denote the self-correlations of S_1(t) and S_2(t), respectively.
From the comparison of Fig. 8(b) and (c), S_1(t), with its higher time-independence degree, produces much stronger time-space independence, which makes the correlation respond with a distinct peak only at the center of I. The size of the correlation peak can be seen herein as a representation of the resolution. Obviously, an extremely high time-independence degree of the transmitting signals would provide extremely fine resolution, but it imposes particularly demanding requirements on the radar system at the same time.
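A discrete analogue of the coefficient in (40) can be sketched as follows. The moving-average window length is an arbitrary assumption, used only to create a second signal with a lower time-independence degree than the first:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 4096
w = rng.standard_normal(K) + 1j * rng.standard_normal(K)

def smooth(x, n):
    # moving-average filter: correlates neighboring samples,
    # i.e., lowers the time-independence degree
    return np.convolve(x, np.ones(n) / n, mode="same")

def gamma(s):
    # discrete analogue of eq. (40): log of the mean normalized |R(tau)/R(0)|
    r = np.correlate(s, s, mode="full")
    ratio = np.abs(r) / np.abs(r[K - 1])   # r[K-1] is the zero-lag term R(0)
    return np.log(ratio.mean())

s1, s2 = w, smooth(w, 64)      # s1 is "whiter" than s2
print(gamma(s1) < gamma(s2))   # the more independent signal has the smaller gamma
```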
As mentioned previously, microwaves cannot rival optical waves in some attributes. A much shorter wavelength, the native feature of optical waves, provides much better resolution in coincidence imaging. Fig. 9(a) and (c) depicts the signal spatial distributions produced by microwaves and optical waves, respectively. To explicitly compare the signal fluctuations, the 2-D distributed position vector \vec{r} within I is concatenated along the x-axis dimension. The center coincidence R_0(\vec{r}) using microwaves and optical waves is given in Fig. 9(b) and (d), respectively.
Clearly, the short wavelength makes the signal fluctuate more sharply. The center coincidence of optical waves reaches a high peak only at the center and shows very small values at all other positions. The shorter wavelength endows optical waves with the superiority to achieve high-resolution coincidence imaging. By contrast, R_0(\vec{r}) of microwaves also gives


considerable response at adjacent positions besides the center. As a result, the spatial independence of microwave signals is not adequate to extract \sigma_r; in other words, the correlation method is not effective for a radar system to yield high resolution. Therefore, radar coincidence imaging calls for pertinent algorithms to rebuild high-resolution target images, which can overcome the limitations of microwaves, namely, their imperfect time-independence and relatively long wavelength.
A parameterized method proposed in [27] can solve this problem. It is less constrained by the time-space independence of the detecting signal S_I(\vec{r}, t). The method constructs an equation based on the relationship between the receiving signal and the reference signal, expressed as

S_r = S\sigma   (41)

where S is the matrix of the reference signal, S_r is the vector of the receiving signal, and \sigma is the unknown vector of the scattering coefficients.
To compose the equation, the time domain is discretized as t = [t_1, t_2, t_3, \ldots, t_K]. There are no limitations on the time sampling: \{t_k, t_k \in [t_0, T_p + t_0]\}_{k=1}^{K} can be either uniform samples or nonuniform ones. Then the imaging region I is discretized as

I_L = [\vec{r}_1, \vec{r}_2, \ldots, \vec{r}_l, \ldots, \vec{r}_L],  \quad l = 1, 2, \ldots, L.   (42)

The imaging region is divided into L imaging cells according to the minimum unit to be resolved. Every imaging cell is approximately represented by its own center. Thus, \vec{r}_l is the position vector of the lth imaging-cell center. The distance from an imaging cell to each transmitting antenna is taken as the distance from its center to that antenna, and the scattering coefficient of the center stands for the scattering feature of the entire imaging cell. The scattering coefficient vector \sigma is expressed as

\sigma = [\sigma_1, \sigma_2, \ldots, \sigma_l, \ldots, \sigma_L]^T,  \quad l = 1, 2, \ldots, L   (43)

where \sigma_l represents the target scattering coefficient of the lth imaging cell. With the variables in discrete form, we rewrite the receiving signal as

S_r = [S_r(t_1), S_r(t_2), \ldots, S_r(t_K)]^T,  \quad S_r(t_k) = \sum_{l=1}^{L} \sigma_l S(t_k, \vec{r}_l).   (44)

Similarly, the reference signal can be rewritten as the matrix in (45). Each row vector holds the spatial samples on \{\vec{r}_l\}_{l=1}^{L} at the same sampling instant; each column vector holds the time-domain samples at \{t_k\}_{k=1}^{K} for the same imaging cell:

S_{K \times L} = \begin{bmatrix} S(t_1, \vec{r}_1) & S(t_1, \vec{r}_2) & \cdots & S(t_1, \vec{r}_L) \\ S(t_2, \vec{r}_1) & S(t_2, \vec{r}_2) & \cdots & S(t_2, \vec{r}_L) \\ \vdots & \vdots & \ddots & \vdots \\ S(t_K, \vec{r}_1) & S(t_K, \vec{r}_2) & \cdots & S(t_K, \vec{r}_L) \end{bmatrix},
\quad S(k, l) = \sum_{n=1}^{N} S_{tn}\left(t_k - \frac{|\vec{r}_l - \vec{R}_n| + |\vec{r}_l - \vec{R}_r|}{c}\right).   (45)

Thus, (41) can be rewritten as follows:

\begin{bmatrix} S_r(t_1) \\ S_r(t_2) \\ \vdots \\ S_r(t_K) \end{bmatrix} = \begin{bmatrix} S(t_1, \vec{r}_1) & S(t_1, \vec{r}_2) & \cdots & S(t_1, \vec{r}_L) \\ S(t_2, \vec{r}_1) & S(t_2, \vec{r}_2) & \cdots & S(t_2, \vec{r}_L) \\ \vdots & \vdots & \ddots & \vdots \\ S(t_K, \vec{r}_1) & S(t_K, \vec{r}_2) & \cdots & S(t_K, \vec{r}_L) \end{bmatrix} \begin{bmatrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_L \end{bmatrix}.   (46)

The equation has a unique solution as long as S_{K \times L} is a nonsingular matrix. Thus, the number of time-domain samples should not be less than the number of imaging cells. The row rank of S_{L \times L} relies on the independence degree of the reference signal in the time domain, while its column rank depends on the independence degree of the reference signal in the spatial domain. This clarifies again that obtaining an effective solution for the target scattering distribution depends on the time-space independence degree of the reference (detecting) signals. Note that if the imaging-cell size is set so small that the detecting signals of adjacent imaging cells exhibit considerable coherence, the column vectors become linearly dependent, and the matrix S_{L \times L} fails to give a correct solution. On the other hand, an imaging cell with a large size, as the rebuilding unit, decreases the actual resolution of the target image. Thus, the imaging cell ought to be set according to the desired resolution on the basis of an attainable time-independence degree for the transmitting signals. Therefore, regardless of how small an imaging cell is, or how close the scatterers are, the target will be resolved as long as the transmitting signals ensure a full-rank S_{L \times L}.
After the analysis above, we now summarize the imaging scheme as follows.
1) Estimate R_0: the range between the target center and the radar system.
2) Compartmentalize the imaging region I to form a vector I_L.
3) Compute the reference signal S(\vec{r}, t), which is the set of echoes of the imaging cells in I_L.
4) Draw L samples from S(\vec{r}, t) in the time domain to form S_{L \times L}.
5) Draw L samples from S_r(t) at the same sample points to form S_r.
6) Solve the equation S_r = S\sigma.
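Steps 2)-6) amount to a linear inverse problem and can be sketched as follows. The geometry, grid, and sample counts are toy assumptions (chosen near-field so that cell delays differ by whole samples), and a least-squares solve stands in for step 6):

```python
import numpy as np

rng = np.random.default_rng(3)
c = 3e8
N, L, K, fs = 8, 25, 400, 2e9       # transmitters, imaging cells, samples, rate

# hypothetical toy geometry: linear array, 5 x 5 grid of cell centers
ant = np.linspace(-30, 30, N)
gx, gy = np.meshgrid(np.linspace(-10, 10, 5), np.linspace(-10, 10, 5))
cells = np.stack([gx.ravel(), gy.ravel() + 100.0])

# stochastic waveforms (eq. 37) and reference matrix S (eq. 45); the receiver
# delay is omitted, and np.roll wraps delays, which is harmless for this toy
wf = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
S = np.zeros((K, L), dtype=complex)
for l in range(L):
    for n in range(N):
        d = np.hypot(cells[0, l] - ant[n], cells[1, l])   # |r_l - R_n|
        S[:, l] += np.roll(wf[n], int(round(d * fs / c)))

# simulate a receiving signal for a known sigma and solve eq. (46)
sigma_true = np.zeros(L)
sigma_true[[6, 18]] = [1.0, 0.6]
Sr = S @ sigma_true
sigma_hat, *_ = np.linalg.lstsq(S, Sr, rcond=None)
print(np.allclose(np.abs(sigma_hat), sigma_true, atol=1e-6))
```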
The whole process of radar coincidence imaging is thus clarified. The control block of the radar system transfers the waveform-control files to the transmitting array, according to which the stochastic transmitting signals are generated and transmitted. Then, the receiving signal is captured by the receiving antenna and transferred back to the system control block. After preprocessing, the system control block sends both the transmitting waveform and the receiving signal to the signal-processing block, where steps 1)-6) of the imaging scheme above are carried out. All of this is shown in Fig. 10.
Although the conclusion is straightforward and the method is easy to perform, some modeling assumptions


Fig. 11. (a) Linear antenna array and the target model. (b) Parameters used
for simulation.
Fig. 10. Flowchart of radar coincidence imaging.

of the previous analysis would be serious obstacles to deriving an effective \sigma. In the following section, this problem is studied along with the simulation results.

V. SIMULATION AND DISCUSSION

This paper uses an N-transmitter, one-receiver linear array for simulation, which consists of ten antenna elements spaced 1 m apart, as shown in Fig. 11(a). We use the LFM signal denoted in (35) for ISAR imaging and the stochastic signals in (37) for radar coincidence imaging, respectively. Both share the same sampling frequency and pulse width. Detailed parameters are given in Fig. 11(b).
A. Experiment 1: Comparison Between Radar Coincidence Imaging Using Different Image-Reconstruction Algorithms for a Stationary Target

This experiment concerns the impact of the error caused by the assumption that every imaging cell is approximated by its own center. In the presence of this error, the performances of different image-reconstruction algorithms are compared.
First, the target model and the relevant parameters are stated. We consider a simple target with four scatterers, located at (0.5, 0.5), (-0.5, 0.5), (-0.5, -0.5), and (0.5, -0.5), respectively. The distance between the array center and the target center is 10 km. Fig. 11(a) shows the arrangement of the radar array and the target.
In this experiment, the imaging area I is 8 x 8 m and is discretized into 16 x 16 imaging cells. The target is stationary. The solution of the equation S_r = S\sigma is derived via the pseudo-inverse method: the pseudo-inverse matrix of S is S^+, and \sigma is equal to S^+ S_r [31]. S^+ is derived here via the MATLAB pseudo-inverse function.
Two scenarios are set for comparison. In the first scenario, the target scatterers are assumed to be located at the centers of the corresponding imaging cells, as shown in Fig. 12(a), where the red points are imaging-cell centers and the blue ones are target scatterers. In the second scenario, an error of 1% is set in the scatterer positions, as shown in Fig. 12(b). The position error


Fig. 12. Coordinates of target scatterers and imaging cells. (a) Scatterers
match imaging cells. (b) Scatterers mismatch the imaging cells.

is given as the bias between the scatterer positions and the imaging-cell centers, expressed as

\Delta p = \frac{1}{4} \sum_{i=1}^{4} \left( \frac{|x_{\mathrm{image}}(i) - x_{\mathrm{target}}(i)|}{2 l_{x\mathrm{image}}} + \frac{|y_{\mathrm{image}}(i) - y_{\mathrm{target}}(i)|}{2 l_{y\mathrm{image}}} \right)   (47)

where (x_{\mathrm{target}}(i), y_{\mathrm{target}}(i)) is the position of the ith target scatterer, (x_{\mathrm{image}}(i), y_{\mathrm{image}}(i)) is the position of the imaging-cell center corresponding to the ith scatterer, and l_{x\mathrm{image}} and l_{y\mathrm{image}} are the lengths of the imaging cell in the x-axis and y-axis directions, respectively.
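As a worked instance of (47), shifting all four scatterers by 1% of the cell length in each coordinate gives Δp = 0.01. The coordinates and cell size below are illustrative assumptions:

```python
import numpy as np

# eq. (47) on an assumed layout: square cells of side 0.5 m
lx = ly = 0.5
targets = np.array([[0.5, 0.5], [-0.5, 0.5], [-0.5, -0.5], [0.5, -0.5]])
cells = targets + 0.01 * lx        # shift every cell center by 1% of a cell

bias = np.abs(cells - targets)
dp = np.mean(bias[:, 0] / (2 * lx) + bias[:, 1] / (2 * ly))
print(round(dp, 6))  # 0.01
```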
The imaging results are given in Fig. 13(a) and (b). We found the pseudo-inverse method to be so sensitive to error that even a bias of 1% completely blurs the imaging result. In practice, the position error is most likely larger than 1%, reaching a maximum of 50% at the imaging-cell edges. Consequently, the method is impractical for application because targets rarely match the centers of the imaging cells. Even if the scatterers match the centers perfectly at first, the results will certainly be wrong if the target moves during the imaging time. This destroys the ability of radar coincidence imaging to process moving targets and limits its operational application. Since the pseudo-inverse method fails in actual imaging scenarios, we consider solving the problem using optimization algorithms. Ideally, S_r - S\sigma = 0. Thus, \|S_r - S\sigma\| can be used as an objective function, and the optimization algorithm gives an optimal \sigma that minimizes it. After trying several types of optimization algorithms, the genetic algorithm (GA) [32] turns out to be a method with both


Fig. 13. (a) Image of the pseudo-inverse method when scatterers match the imaging-cell centers. (b) Image of the pseudo-inverse method when scatterers mismatch the imaging-cell centers. (c) Image of the genetic algorithm when scatterers mismatch the imaging cells.

efficiency and correctness. As a global optimization technique, GA mimics natural biological evolution and is known to be applicable to inverse problems. As Fig. 13(c) shows, the target can be well reconstructed in the presence of position error. Thus, the genetic algorithm has better imaging performance and less sensitivity to the position error.
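A minimal sketch of this optimization step uses SciPy's differential evolution, an evolutionary method of the same family as GA but not the exact algorithm of [32], on a toy real-valued instance of the objective ‖S_r − Sσ‖:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)
K, L = 64, 4                       # time samples, imaging cells (toy sizes)
S = rng.standard_normal((K, L))    # stand-in real reference matrix
sigma_true = np.array([1.0, 0.0, 0.6, 0.0])
Sr = S @ sigma_true

# objective ||Sr - S sigma||, minimized by the evolutionary search
obj = lambda sig: np.linalg.norm(Sr - S @ sig)
res = differential_evolution(obj, bounds=[(0.0, 2.0)] * L, seed=0, tol=1e-10)
print(np.round(res.x, 2))
```

Unlike the pseudo-inverse, such population-based searches can also accommodate regularization or robustness terms in the objective when the cell-center assumption is violated.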
B. Experiment 2: Comparisons Between Range-Doppler Imaging and Radar Coincidence Imaging

The second experiment concerns the imaging performance of radar coincidence imaging and of the range-Doppler imaging methods, including ISAR imaging and tomography, when the target moves in three scenarios. The target model remains the same as in Experiment 1. The image reconstruction of tomography herein is accomplished via the convolution back-projection algorithm [32], [33].
To compare the two imaging techniques under the same conditions, we should process the range-Doppler imaging with a radar array of ten transmitter elements as well. As is known, under the condition of a small rotation angle, the azimuth resolution of range-Doppler imaging approximates \rho_a = \lambda/(2\Delta\theta) [1], [2], where \Delta\theta is the total variation of the aspect angle during the CIT. The analysis shows that two scatterers with a distance difference of x can be distinguished under the condition \Delta\varphi \geq 2\pi [34], where \Delta\varphi is the phase difference of their radar echoes during the CIT, expressed as

\Delta\varphi = \frac{4\pi}{\lambda} \Delta\theta\, x = \frac{4\pi}{\lambda} x \omega M T.   (48)

Here, \omega is the rotation velocity and M is the number of transmitted pulses during the CIT. As viewed from ISAR imaging, [6] explains multitransmitting range-Doppler imaging, pointing out that M pulses of a P-element radar array can be looked upon as PM pulses of one transmitter in the optimal case. Then the MIMO-ISAR imaging method in [6] gives the total phase difference of a P-transmitter radar array under the condition of maximum resolution:

\Delta\varphi_{\mathrm{array}} = \frac{4\pi}{\lambda} x \omega M P T.   (49)

Thus, the azimuth resolution is modified to

\rho_a = \lambda/(2\omega M P T) = (\lambda/P)/(2\omega M T).   (50)

From (50), it can be inferred that the best azimuth resolution which a P-transmitter radar array can achieve is 1/P of that

Fig. 14. Imaging results of tomography, ISAR imaging, and radar coincidence
imaging in three motion scenes. RCI denotes radar coincidence imaging for
short. (a) Tomography in scene 1. (b) Tomography in scene 2. (c) Tomography
in scene 3. (d) ISAR imaging in scene 1. (e) ISAR imaging in scene 2.
(f) ISAR imaging in scene 3. (g) RCI in scene 1. (h) RCI in scene 2. (i) RCI
in scene 3.

for regular ISAR under the condition of the same CIT. On the other hand, this biggest advantage can also be viewed as the P-transmitter radar array enhancing the azimuth resolution P times by decreasing the effective wavelength of the entire array to 1/P of that for regular ISAR.
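A worked instance of (50) with assumed parameters (the carrier frequency and PRT are illustrative, not taken from the paper; ω = 5°/s matches scene 2 of this experiment):

```python
import numpy as np

c = 3e8
fc = 10e9                  # carrier frequency (assumed)
lam = c / fc               # wavelength, 0.03 m
omega = np.deg2rad(5.0)    # rotation rate, 5 deg/s
M, T, P = 64, 1 / 600, 10  # pulses, PRT (assumed), transmitters

rho_single = lam / (2 * omega * M * T)   # regular ISAR: eq. (50) with P = 1
rho_array = rho_single / P               # P-transmitter array
print(rho_single, rho_array)             # the array resolution is P times finer
```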
Thus, this experiment simulates the optimal range-Doppler imaging of a ten-transmitter array by implementing the regular one with a much shorter wavelength \lambda_{\mathrm{array}} = \lambda/10 = c/(10 f_c). Under this conclusion, it is reasonable to compare the performance of imaging moving targets between radar coincidence imaging and multitransmitter range-Doppler imaging.
The three imaging scenes simulated in this experiment are: 1) the target is stationary; 2) the target rotates uniformly with a velocity of Ω = 5°/s; and 3) the target rotates with a velocity of Ω = 5°/s and an acceleration of a = 20°/s². In all three cases, the rotation vector has the same orientation as the Z-axis of the XY coordinate system in Fig. 11(a). The imaging results are shown in Fig. 14.
In radar coincidence imaging, the imaging region I is 5 × 5 m and is discretized into 64 × 64 imaging cells. Accordingly, the pulse number for ISAR imaging is 64 as well. According to the parameters in Fig. 11(b), a pulse provides 10^5 samples (N_s = f_s T_p = 10^5), which is much larger than the 64 × 64 = 4096 samples required to perform radar coincidence imaging; specifically, we utilize 4096 consecutive samples of the receiving signal here. Then, the imaging time of radar coincidence imaging is 4096/f_s = 2.048 μs, whereas it is 64/f_T = 0.107 s for range-Doppler imaging.
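The timing comparison can be reproduced from the stated figures; note that the sampling rate f_s = 2 GHz below is not given explicitly in this section but is inferred from 4096/f_s = 2.048 μs, and is an assumption to that extent.

```python
fs = 2e9              # sampling rate (assumption inferred from 4096/fs = 2.048 us)
tp = 1e5 / fs         # pulse width from Ns = fs * Tp = 1e5  ->  50 us
n_cells = 64 * 64     # 4096 samples, one per imaging cell
t_rci = n_cells / fs  # imaging time of radar coincidence imaging
print(tp, t_rci)      # RCI finishes well within a single pulse width
```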
Here, we employ the entropy of an image to evaluate these
imaging results. The entropy of an image is defined in [35]

and [36] as

E = −∑_{x=1}^{X} ∑_{y=1}^{Y} D(x, y) ln[D(x, y)]   (51)

where D(x, y) = |d(x, y)| / ∑_{x=1}^{X} ∑_{y=1}^{Y} |d(x, y)|, d(x, y) is the data with coordinates (x, y) in the image, and X × Y is the image size. The entropy denotes the sharpness of the image: smaller entropy implies a higher degree of sharpness, which indicates better imaging quality. The entropy of the images in Fig. 14 is given in Table I.

TABLE I
ENTROPY OF IMAGING RESULTS

             Scene-1   Scene-2   Scene-3
Tomography   2.8316    2.8145    3.0582
ISAR         1.5081    2.7899    2.8862
RCI          0.6773    0.6891    0.6853
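A direct implementation of the entropy measure in (51) can be sketched as follows; the array `d` stands for the (possibly complex) image data, and NumPy is assumed.

```python
import numpy as np

def image_entropy(d):
    """Entropy of (51): normalize |d(x, y)| to a distribution D, then E = -sum D ln D."""
    mag = np.abs(np.asarray(d, dtype=complex))
    D = mag / mag.sum()
    nz = D[D > 0]                      # treat 0 * ln(0) as 0
    return float(-np.sum(nz * np.log(nz)))

# A perfectly focused image (all energy in one cell) has entropy 0;
# a uniformly blurred image attains the maximum entropy ln(X * Y).
focused = np.zeros((8, 8)); focused[3, 3] = 1.0
blurred = np.ones((8, 8))
print(image_entropy(focused), image_entropy(blurred))  # 0.0 and ln(64)
```

This matches the convention in the text: sharper (better focused) images score lower.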
From the imaging results of the stationary target shown in Fig. 14, tomography and ISAR cannot resolve the scatterers in the same range bin due to the absence of a relative motion that produces high resolution via a large aspect variation. On the other hand, radar coincidence imaging can resolve all scatterers even for the stationary target, which confirms that its resolution relies on neither the Doppler gradient nor the target motion. Additionally, both Fig. 14 and Table I show that radar coincidence imaging is superior to the other two methods in processing maneuvering targets. Because of its considerably short imaging time, radar coincidence imaging is hardly affected by nonuniform target motion. For range-Doppler imaging, however, the rotational acceleration makes the Doppler frequency vary with time and widens the frequency spectrum in the cross-range.
C. Experiment 3: Impact of Noise on Radar
Coincidence Imaging
In this simulation, we investigate the robustness of radar coincidence imaging with respect to Gaussian noise. Herein, the imaging region I is 32 × 32 m and is discretized into 64 × 64 imaging cells, and the target model remains the same as in Experiment 1. The imaging quality is compared under different signal-to-noise ratios (SNRs). The imaging results without noise and those under SNRs of 20, 10, and 5 dB are shown in Fig. 15.
The resolution and the reconstruction correctness are unchanged when the SNR is 20 dB: the target image can be well rebuilt at a relatively high SNR. Because the imaging spots corresponding to the scatterers are not blurred under SNRs of 10 and 5 dB, the resolution can be regarded as suffering no deterioration even at low SNR. However, the scatterer positions are wrongly estimated and the reconstruction correctness degrades. As mentioned in Experiment 1, the images are rebuilt using the optimization algorithm, which yields an optimal σ̂ minimizing the objective function ‖S_r − S·σ‖. In the ideal case, the optimal solution corresponds to the correct scatterer


Fig. 15. Radar coincidence imaging in different SNR conditions. (a) Imaging result without noise. (b) Imaging result when SNR = 20 dB. (c) Imaging result when SNR = 10 dB. (d) Imaging result when SNR = 5 dB.

distribution. However, the objective function changes to ‖(S_r + ε) − S·σ‖ when noise is present. The noise term ε introduces a disturbance into the curve of the objective function versus σ. Consequently, a wrong scatterer distribution may be given as the optimal solution under the minimization condition. Therefore, radar coincidence imaging is robust against noise in terms of resolution but sensitive in terms of reconstruction correctness under low SNR.
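This sensitivity can be illustrated with a toy version of the imaging equation S_r = Sσ. Here a random real-valued S, a three-scatterer σ, and a least-squares solver stand in for the actual detecting-signal matrix and the genetic algorithm, so the sizes and numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
S = rng.standard_normal((4096, 64))   # toy detecting-signal matrix (hypothetical)
sigma = np.zeros(64)
sigma[[3, 17, 42]] = 1.0              # sparse scatterer vector
Sr = S @ sigma                        # noise-free receiving signal

errors = {}
for snr_db in (20, 5):
    noise = rng.standard_normal(4096)
    noise *= np.linalg.norm(Sr) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    est = np.linalg.lstsq(S, Sr + noise, rcond=None)[0]
    errors[snr_db] = np.linalg.norm(est - sigma) / np.linalg.norm(sigma)

print(errors)  # reconstruction error grows as the SNR drops
```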
D. Experiment 4: Comparisons of Radar Coincidence Imaging
Using Signals of Different Time-Independent Features
This experiment concerns the significance of the time-independent feature for radar coincidence imaging. Here, we utilize transmitting signals with different degrees of time independence, specified by their self-correlation coefficients defined in (40).
The target model is shown in Fig. 16(a). The image region I is 20 × 20 m and is divided into 64 × 64 imaging cells. The distance between the center of the transmitter array and the target center is 10 km. The self-correlation coefficients of the transmitting signals used in Fig. 16(b)–(d) are 7.3, 6.9, and 6.7, respectively.
The entropies for Fig. 16(b)–(d) are 3.47, 3.44, and 3.35, respectively. Both the entropies and the imaging results in Fig. 16 thus verify that the imaging quality improves as the time-independent degree of the transmitting signals increases.
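Since the definition in (40) lies outside this section, the snippet below uses a simple stand-in for the self-correlation coefficient: the mean off-peak level of the normalized autocorrelation. This proxy is smaller for waveforms that decorrelate quickly in time (noise-like signals) than for slowly varying ones, mirroring the trend reported above.

```python
import numpy as np

def mean_offpeak_autocorr(s):
    """Illustrative time-independence measure (a proxy, not (40) itself):
    mean |autocorrelation| away from zero lag, for a unit-energy signal."""
    s = np.asarray(s, dtype=complex)
    s = s / np.linalg.norm(s)
    r = np.correlate(s, s, mode='full')   # np.correlate conjugates its 2nd argument
    return float(np.abs(np.delete(r, len(s) - 1)).mean())

rng = np.random.default_rng(seed=1)
n = 1024
t = np.arange(n)
tone = np.exp(2j * np.pi * 0.05 * t)                          # slowly decorrelating
noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # noise-like waveform
print(mean_offpeak_autocorr(noise) < mean_offpeak_autocorr(tone))  # True
```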
Additionally, during the simulation, we note that only a minority of the imaging cells is occupied by target scatterers, so that σ is a sparse vector. Moreover, the computational complexity rises sharply as the dimension of σ increases. Hence, we consider reducing the dimension of σ by filtering the receiving signals before solving the equation. As stated previously, the independence between S(r, t) and S_r(t) cannot by itself extract the scattering coefficients; however, it can roughly estimate the range over which the scatterers are distributed. Due to this independence, the correlation function f(r) = ∫ S_r(t) S*(r, t) dt has relatively large values at positions where


Fig. 16. Radar coincidence imaging using signals with different time-independent features. (a) The target model. (b) Imaging result when the self-correlation coefficient is 7.3. (c) Imaging result when the coefficient is 6.9. (d) Imaging result when the coefficient is 6.7.

scatterers are located. Then, a reasonable threshold is set to filter out the remaining positions, where the correlation-function values are small, and the computational burden is reduced considerably by removing the imaging cells that correspond to near-zero elements of σ. Thus, the last step of the original imaging scheme is modified as follows.
f) Solve the equation S_r = S·σ:
1) Compute the correlation function between S_r and S.
2) Set a threshold T_h to filter out the imaging cells whose correlation values are smaller than T_h, resulting in a new set of position vectors {r_l}, l = 1, . . . , L.
3) According to {r_l}, reduce the dimensions of S_r, S, and σ to obtain S_r^sub, S^sub, and σ^sub, respectively, forming a new equation S_r^sub = S^sub σ^sub.
4) Estimate σ^sub with the genetic algorithm and reconstruct σ.
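Steps 1)–3) of this modified scheme amount to a matched-filter screening of the imaging cells. A minimal sketch follows, with a random toy S matrix, hypothetical sizes, and a quantile standing in for the unspecified threshold T_h:

```python
import numpy as np

def prefilter_cells(Sr, S, keep_ratio=0.25):
    """Correlate Sr against each cell's detecting signal (step 1), keep the
    cells above a threshold (step 2), and return the reduced system (step 3)."""
    f = np.abs(S.conj().T @ Sr)             # f(r_l) = sum_t Sr(t) S*(r_l, t)
    th = np.quantile(f, 1.0 - keep_ratio)   # stand-in for the threshold Th
    kept = np.flatnonzero(f >= th)
    return S[:, kept], kept

rng = np.random.default_rng(seed=2)
S = rng.standard_normal((4096, 64))              # toy detecting-signal matrix
sigma = np.zeros(64); sigma[[3, 17, 42]] = 1.0   # true scatterer cells
S_sub, kept = prefilter_cells(S @ sigma, S)
print(set([3, 17, 42]) <= set(kept.tolist()))    # true cells survive the filter
```

Only the kept columns enter the (much smaller) equation handed to the genetic algorithm in step 4).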
VI. CONCLUSION
In this paper, an instantaneous imaging technique that combines the classical coincidence imaging method with radar signal processing was proposed. In contrast with conventional radar imaging methods based on the range-Doppler principle, the developed radar coincidence imaging technique does not rely on relative motion between radar and target, and can achieve better focusing performance without resolution deterioration.
Borrowing the principle of classical coincidence imaging, in which the target pattern is obtained via the spatial fluctuations of signals on the imaging plane, radar coincidence imaging utilizes a multitransmitting configuration to construct time-space independent detecting signals and extracts the target scattering distribution via coincidence processing between the detecting signals and the receiving signals. The waveforms for radar coincidence imaging are required to be time-independent and group-orthogonal. Lacking the natural advantages for realizing coincidence imaging, the conventional correlation method cannot achieve high resolution in microwave radar systems, owing to the longer wavelength and the lack of adequate time-independence of microwave signals in comparison with thermal-light signals. Therefore, to achieve high resolution, a parameterized method was introduced to reconstruct the target scattering distribution based on the coincidence imaging equation expressed in (46). Two approaches, the pseudo-inverse method and the genetic algorithm, were adopted to solve this equation. Experimental results showed that the pseudo-inverse method is extremely sensitive to the position error of target scatterers, whereas the genetic algorithm has better reconstruction performance. Additionally, simulation results indicated that radar coincidence imaging presents better focusing performance for maneuvering targets in comparison with range-Doppler imaging methods, and that the higher the independence degree of the transmitting signals, the better the imaging performance this method can achieve. It was also found that the resolution of radar coincidence imaging is robust against Gaussian noise, whereas the reconstruction correctness declines as the SNR is reduced. Considering the computational complexity, which increases sharply with the number of imaging cells, the imaging scheme was modified to exploit the sparsity of the scattering coefficient vector.
Several important issues in radar coincidence imaging are worthy of further study; the resolution of radar coincidence imaging might be of the most interest. Unlike that of the conventional radar imaging methods based on the range-Doppler principle, the resolution of coincidence imaging is determined by the time-space independence degree of the detecting signals, which might be quantitatively measured by their correlation function. For instance, the correlation function of the detecting signals in thermal coincidence imaging is sinc²[πΔG(x_r − x_t)/λd], which quantitatively demonstrates that the correlation length x_r − x_t, i.e., the resolution, is related to the wavelength λ, the distance d, and the source size ΔG. Unlike the explicit point-spread shape of the thermal case, radar coincidence imaging has a correlation function that is an interlacement of multiple delta functions, as shown in (24), associated with the relationship between the correlation position and the correlation time. With such a correlation function, it might not be as straightforward as in thermal coincidence imaging to obtain an explicit relationship between the resolution and the system parameters.
In addition, the resolution of radar coincidence imaging should be considered separately for the two reconstruction methods discussed in this paper, i.e., the correlation method expressed in (31) and the parameterized method denoted by (46). The resolution of the former simply depends on the time-space independence of the detecting field, whereas the resolution of the latter is the minimum imaging-cell size subject to the conditions under which the coincidence imaging equation (46) can be solved; these conditions relate not only to the time-space independence degree of the field but also to the algorithms chosen to solve the imaging equation. Therefore, three factors, namely the system parameters, the time-space independence of the detecting field, and the full-rank solvability conditions of the imaging equation, must be taken into account to investigate the resolution of radar coincidence imaging. This issue needs to be studied further as a special subject.


Additionally, more detailed studies on the effects of parameter estimation error, the computational complexity of the reconstruction algorithms, and the design of the transmitter array are worthy of further consideration.
ACKNOWLEDGMENT
The authors would like to thank the editors and reviewers
for their insightful comments.
REFERENCES
[1] D. A. Ausherman, A. Kozma, J. L. Walker, H. M. Jones, and E. C. Poggio, "Developments in radar imaging," IEEE Trans. Aerosp. Electron. Syst., vol. AES-20, no. 4, pp. 363–400, Jul. 1984.
[2] D. C. Munson, Jr., J. D. O'Brien, and W. Jenkins, "A tomographic formulation of spotlight-mode synthetic aperture radar," Proc. IEEE, vol. 71, no. 8, pp. 917–925, Aug. 1983.
[3] D. L. Mensa, S. Halevy, and G. Wade, "Coherent Doppler tomography for microwave imaging," Proc. IEEE, vol. 71, no. 2, pp. 254–261, Feb. 1983.
[4] B. D. Steinberg, "Microwave imaging of aircraft," Proc. IEEE, vol. 76, no. 12, pp. 1578–1592, Dec. 1988.
[5] B. D. Steinberg, "Radar imaging from a distorted array: The radio camera algorithm and experiments," IEEE Trans. Antennas Propag., vol. AP-29, no. 5, pp. 740–748, Sep. 1981.
[6] Y. Zhu, Y. Su, and W. Yu, "An ISAR imaging method based on MIMO technique," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 8, pp. 3290–3299, Aug. 2010.
[7] M. Cheney and B. Borden, Fundamentals of Radar Imaging. Philadelphia, PA, USA: SIAM, 2009, ch. 5, pp. 43–48.
[8] V. C. Chen and H. Ling, Time-Frequency Transforms for Radar Imaging and Signal Analysis. Norwood, MA, USA: Artech House, 2002, ch. 1, pp. 1–13.
[9] C. C. Chen and H. C. Andrews, "Target-motion-induced radar imaging," IEEE Trans. Aerosp. Electron. Syst., vol. AES-16, no. 1, pp. 2–14, Jan. 1980.
[10] T. Itoh, H. Sueda, and Y. Watanabe, "Motion compensation for ISAR via centroid tracking," IEEE Trans. Aerosp. Electron. Syst., vol. 32, no. 3, pp. 1191–1197, Jul. 1996.
[11] D. E. Wahl, P. H. Eichel, D. C. Ghiglia, and C. V. Jakowatz, "Phase gradient autofocus: A robust tool for high resolution SAR phase correction," IEEE Trans. Aerosp. Electron. Syst., vol. 30, no. 3, pp. 827–835, Jul. 1994.
[12] X. Li, G. Liu, and J. Ni, "Autofocusing of ISAR images based on entropy minimization," IEEE Trans. Aerosp. Electron. Syst., vol. 35, no. 4, pp. 1240–1252, Oct. 1999.
[13] T. Thayaparan, G. Lampropoulos, S. K. Wong, and E. Riseborough, "Application of adaptive joint time-frequency algorithm for focusing distorted ISAR images from simulated and measured radar data," IEE Proc. Radar Sonar Navig., vol. 150, no. 4, pp. 213–220, Aug. 2003.
[14] D. D. Howard, "Radar target angular scintillation in tracking and guidance systems based on echo signal phase front distortion," in Proc. Nat. Electron. Conf., vol. 15, 1959, pp. 840–849.
[15] S. Hudson and D. Psaltis, "Correlation filters for aircraft identification from radar range profiles," IEEE Trans. Aerosp. Electron. Syst., vol. 29, no. 3, pp. 741–748, Jul. 1993.
[16] R. Zhang, X. Z. Wei, X. Li, and Z. Liu, "Analysis about the speckle of radar high resolution range profile," Sci. China Technol. Sci., vol. 54, no. 1, pp. 226–236, 2011.
[17] T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, "Optical imaging by means of two-photon quantum entanglement," Phys. Rev. A, vol. 52, no. 5, pp. R3429–R3432, Nov. 1995.
[18] D. N. Klyshko, "A simple method of preparing pure states of an optical field, of implementing the Einstein-Podolsky-Rosen experiment, and of demonstrating the complementarity principle," Sov. Phys. Usp., vol. 31, no. 1, pp. 74–85, 1988.
[19] D. N. Klyshko, "Combined EPR and two-slit experiments: Interference of advanced waves," Phys. Lett. A, vol. 132, nos. 6–7, pp. 299–304, 1988.
[20] A. F. Abouraddy, B. E. A. Saleh, A. V. Sergienko, and M. C. Teich, "Role of entanglement in two-photon imaging," Phys. Rev. Lett., vol. 87, no. 12, pp. 123602-1–123602-4, 2001.
[21] R. S. Bennink, S. J. Bentley, and R. W. Boyd, "Two-photon coincidence imaging with a classical source," Phys. Rev. Lett., vol. 89, no. 11, p. 113601, 2002.
[22] A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, "Ghost imaging with thermal light: Comparing entanglement and classical correlation," Phys. Rev. Lett., vol. 93, p. 093602, May 2004.
[23] A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, "Correlated imaging, quantum and classical," Phys. Rev. A, vol. 70, no. 1, pp. 013802-1–013802-10, 2004.
[24] J. Cheng and S. Han, "Incoherent coincidence imaging and its applicability in X-ray diffraction," Phys. Rev. Lett., vol. 92, no. 9, pp. 093903-1–093903-4, Mar. 2004.
[25] Y. Shih, "Quantum imaging," IEEE J. Sel. Topics Quantum Electron., vol. 13, no. 4, pp. 1016–1030, Jul.–Aug. 2007.
[26] Y. Shih, "The physics of ghost imaging," J. Quantum Inf. Process., vol. 11, no. 4, pp. 949–993, Aug. 2012.
[27] L. Jiying, Z. Jubo, L. Chuan, and H. Shisheng, "High-quality quantum imaging algorithm and experiment based on compressive sensing," Opt. Lett., vol. 35, no. 8, pp. 1206–1208, 2010.
[28] I. Bekkerman and J. Tabrikian, "Target detection and localization using MIMO radars and sonars," IEEE Trans. Signal Process., vol. 54, no. 10, pp. 3873–3883, Oct. 2006.
[29] H. Deng, "Polyphase code design for orthogonal netted radar systems," IEEE Trans. Signal Process., vol. 52, no. 11, pp. 3126–3135, Nov. 2004.
[30] A. Albert, Regression and the Moore-Penrose Pseudoinverse. New York, NY, USA: Academic, 1972.
[31] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA, USA: Addison-Wesley, 1989.
[32] M. D. Desai and W. K. Jenkins, "Convolution backprojection image reconstruction for spotlight mode synthetic aperture radar," IEEE Trans. Image Process., vol. 1, no. 4, pp. 505–517, Oct. 1992.
[33] Z. X. Li, S. Papson, and R. M. Narayanan, "Data-level fusion of multilook inverse synthetic aperture radar images," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 5, pp. 1394–1406, May 2008.
[34] Z. Bao, M. D. Xing, and T. Wang, Radar Imaging Technique. Beijing, China: Publishing House of Electronics Industry, 2005.
[35] G. Y. Wang and Z. Bao, "The minimum entropy criterion of range alignment in ISAR motion compensation," in Proc. IEE Conf. Radar, Edinburgh, U.K., Oct. 1997, pp. 14–16.
[36] X. H. Qiu, Y. Zhao, and S. Udpa, "Phase compensation for ISAR imaging combined with entropy principle," in Proc. IEEE Antennas Propag. Soc. Int. Symp., Columbus, OH, USA, Jun. 2003, pp. 195–198.

Dongze Li was born in 1985. She received the B.S. degree in information and communication engineering from the National University of Defense Technology, Changsha, China, in 2008, where she is currently pursuing the Ph.D. degree.
Her current research interests include remote sensing signal processing and coincidence imaging, as well as advanced signal processing with application to radar target imaging and identification.

Xiang Li (M'10) was born in 1967. He received the B.S. degree from Xidian University, Xi'an, China, in 1989, and the M.S. and Ph.D. degrees from the National University of Defense Technology, Changsha, China, in 1995 and 1998, respectively.
He is currently a Professor with the National University of Defense Technology. He was an Academic Visitor with Imperial College London, London, U.K., in 2011. Since 2003, he has been with the Institute of Space Electronics and Information Technology, where he has focused on target recognition, signal detection, and radar imaging.

Yuliang Qin (M'13) was born in 1980. He received the B.S., M.S., and Ph.D. degrees in information and communication engineering from the National University of Defense Technology, Changsha, China, in 2002, 2004, and 2008, respectively.
He is currently an Associate Professor with the School of Electronic Science and Engineering, National University of Defense Technology. His current research interests include SAR imaging and radar signal processing.

Yongqiang Cheng (M'12) was born in 1982. He received the B.S., M.S., and Ph.D. degrees in information and communication engineering from the National University of Defense Technology, Changsha, China, in 2005, 2007, and 2012, respectively.
He is currently a Lecturer with the National University of Defense Technology. From September 2009 to November 2010, he was a Visiting Research Student with the Melbourne Systems Laboratory, Department of Electrical and Electronic Engineering, University of Melbourne, Melbourne, Australia. His current research interests include statistical signal processing and information geometry.

Hongqiang Wang (M'09) was born in 1970. He received the B.S., M.S., and Ph.D. degrees from the National University of Defense Technology, Changsha, China, in 1993, 1999, and 2002, respectively.
He is currently a Professor with the School of Electronic Science and Engineering, National University of Defense Technology. He has been involved in modern radar signal processing research and development since 1996. His current research interests include automatic target recognition, radar imaging, and target tracking.
