IET Radar, Sonar and Navigation Series 19

Series Editors: Dr N. Stewart
Professor H. Griffiths
Radar Imaging
and Holography
Other volumes in this series:
Volume 1 Optimised radar processors A. Farina (Editor)
Volume 3 Weibull radar clutter M. Sekine and Y. Mao
Volume 4 Advanced radar techniques and systems G. Galati (Editor)
Volume 7 Ultra-wideband radar measurements: analysis and processing
L. Yu. Astanin and A.A. Kostylev
Volume 8 Aviation weather surveillance systems: advanced radar and surface
sensors for flight safety and air traffic management P.R. Mahapatra
Volume 10 Radar techniques using array antennas W. Wirth
Volume 11 Air and spaceborne radar systems: an introduction P. Lacomme (Editor)
Volume 13 Introduction to RF stealth D. Lynch
Volume 14 Applications of space-time adaptive processing R. Klemm (Editor)
Volume 15 Ground penetrating radar, 2nd edition D. Daniels
Volume 16 Target detection by marine radar J. Briggs
Volume 17 Strapdown inertial navigation technology, 2nd edition D. Titterton and
J. Weston
Volume 18 Introduction to radar target recognition P. Tait
Volume 19 Radar imaging and holography A. Pasmurov and S. Zinovjev
Volume 20 Sea clutter: scattering, the K distribution and radar performance K. Ward,
R. Tough and S. Watts
Volume 21 Principles of space-time adaptive processing, 3rd edition R. Klemm
Volume 101 Introduction to airborne radar, 2nd edition G.W. Stimson
Volume 102 Low-angle radar land clutter B. Billingsley
Radar Imaging
and Holography
A. Pasmurov and J. Zinoviev
The Institution of Engineering and Technology
Published by The Institution of Engineering and Technology, London, United Kingdom
First edition © 2005 The Institution of Electrical Engineers
New cover © 2009 The Institution of Engineering and Technology
First published 2005
This publication is copyright under the Berne Convention and the Universal Copyright
Convention. All rights reserved. Apart from any fair dealing for the purposes of research or
private study, or criticism or review, as permitted under the Copyright, Designs and Patents
Act, 1988, this publication may be reproduced, stored or transmitted, in any form or by
any means, only with the prior permission in writing of the publishers, or in the case of
reprographic reproduction in accordance with the terms of licences issued by the Copyright
Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to
the publishers at the undermentioned address:
The Institution of Engineering and Technology
Michael Faraday House
Six Hills Way, Stevenage
Herts, SG1 2AY, United Kingdom
www.theiet.org
While the authors and the publishers believe that the information and guidance given in this
work are correct, all parties must rely upon their own skill and judgement when making use
of them. Neither the authors nor the publishers assume any liability to anyone for any loss
or damage caused by any error or omission in the work, whether such error or omission is
the result of negligence or any other cause. Any and all such liability is disclaimed.
The moral rights of the authors to be identified as authors of this work have been asserted
by them in accordance with the Copyright, Designs and Patents Act 1988.
British Library Cataloguing in Publication Data
Pasmurov, Alexander Ya.
Radar imaging and holography
1. Radar 2. Imaging systems 3. Radar targets 4. Holography
I. Title II. Zinoviev, Julius S. III. Institution of Electrical Engineers
621.3’848
ISBN (10 digit) 0 86341 502 4
ISBN (13 digit) 978-0-86341-502-9
Typeset in India by Newgen Imaging Systems (P) Ltd, Chennai
First printed in the UK by MPG Books Ltd, Bodmin, Cornwall
Reprinted in the UK by Lightning Source UK Ltd, Milton Keynes
Contents
List of figures ix
List of tables xvii
Introduction 1
1 Basic concepts of radar imaging 7
1.1 Optical definitions 7
1.2 Holographic concepts 10
1.3 The principles of computerised tomography 14
1.4 The principles of microwave imaging 20
2 Methods of radar imaging 27
2.1 Target models 27
2.2 Basic principles of aperture synthesis 31
2.3 Methods of signal processing in imaging radar 33
2.3.1 SAR signal processing and holographic radar for earth surveys 33
2.3.2 ISAR signal processing 34
2.4 Coherent radar holographic and tomographic processing 36
2.4.1 The holographic approach 36
2.4.2 Tomographic processing in 2D viewing geometry 41
3 Quasi-holographic and holographic radar imaging of point targets on the earth surface 49
3.1 Side-looking SAR as a quasi-holographic radar 49
3.1.1 The principles of hologram recording 50
3.1.2 Image reconstruction from a microwave hologram 53
3.1.3 Effects of carrier track instabilities and object's motion on image quality 57
3.2 Front-looking holographic radar 60
3.2.1 The principles of hologram recording 60
3.2.2 Image reconstruction and scaling relations 62
3.2.3 The focal depth 67
3.3 A tomographic approach to spotlight SAR 70
3.3.1 Tomographic registration of the earth area projection 70
3.3.2 Tomographic algorithms for image reconstruction 72
4 Imaging radars and partially coherent targets 79
4.1 Imaging of extended targets 80
4.2 Mapping of rough sea surface 82
4.3 A mathematical model of imaging of partially coherent extended targets 85
4.4 Statistical characteristics of partially coherent target images 87
4.4.1 Statistical image characteristics for zero incoherent signal integration 88
4.4.2 Statistical image characteristics for incoherent signal integration 90
4.5 Viewing of low contrast partially coherent targets 94
5 Radar systems for rotating target imaging (a holographic approach) 101
5.1 Inverse synthesis of 1D microwave Fourier holograms 101
5.2 Complex 1D microwave Fourier holograms 110
5.3 Simulation of microwave Fourier holograms 112
6 Radar systems for rotating target imaging (a tomographic approach) 117
6.1 Processing in frequency and space domains 117
6.2 Processing in 3D viewing geometry: 2D and 3D imaging 119
6.2.1 The conditions for hologram recording 120
6.2.2 Preprocessing of radar data 124
6.3 Hologram processing by coherent summation of partial components 126
6.4 Processing algorithms for holograms of complex geometry 130
6.4.1 2D viewing geometry 131
6.4.2 3D viewing geometry 141
7 Imaging of targets moving in a straight line 147
7.1 The effect of partial signal coherence on the cross range resolution 148
7.2 Modelling of path instabilities of an aerodynamic target 151
7.3 Modelling of radar imaging for partially coherent signals 152
8 Phase errors and improvement of image quality 157
8.1 Phase errors due to tropospheric and ionospheric turbulence 157
8.1.1 The refractive index distribution in the troposphere 157
8.1.2 The distribution of electron density fluctuations in the ionosphere 166
8.2 A model of phase errors in a turbulent troposphere 167
8.3 A model of phase errors in a turbulent ionosphere 172
8.4 Evaluation of image quality 173
8.4.1 Potential SAR characteristics 173
8.4.2 Radar characteristics determined from images 175
8.4.3 Integral evaluation of image quality 177
8.5 Speckle noise and its suppression 181
8.5.1 Structure and statistical characteristics of speckle 182
8.5.2 Speckle suppression 184
9 Radar imaging application 191
9.1 The earth remote sensing 191
9.1.1 Satellite SARs 191
9.1.2 SAR sea ice monitoring in the Arctic 195
9.1.3 SAR imaging of mesoscale ocean phenomena 204
9.2 The application of inverse aperture synthesis for radar imaging 215
9.3 Measurement of target characteristics 217
9.4 Target recognition 222
References 231
List of abbreviations 241
Index 243
List of figures
Chapter 1
Figure 1.1 The process of imaging by a thin lens 8
Figure 1.2 A schematic illustration of the focal depth of an optical image: (a) image of point M lying in the optical axis; (b) image of point A; (c) image of point B and (d) image of points A and B in the planes M1, M2 and M3 9
Figure 1.3 The process of optical hologram recording: 1 – reference wave; 2 – object; 3 – photoplate and 4 – object's wave 11
Figure 1.4 Image reconstruction from a hologram: 1 – virtual image; 2 – real image; 3 – zero diffraction order and 4 – hologram 13
Figure 1.5 Viewing geometry in computerised tomography (from Reference 15): Γm – circumference for measurements; Γc – circumference with the centre at point O enveloping a cross section; p – arbitrary point in the circle with the polar coordinates ρ and φ; A, C and D – wide beam transmitters; B, C′ and D′ – receivers; γ–γ, δ–δ – parallel elliptic arcs defining the resolving power of the transmitter–receiver pairs (CC′ and DD′) 15
Figure 1.6 A scheme of X-ray tomographic experiment using a collimated beam: 1 – X-rays; 2 – projection angle; 3 – registration line; 4 – projection axis and 5 – integration line 17
Figure 1.7 The geometrical arrangement of the G(x, y) pixels in the Fourier region of a polar grid. The parameters ϑmax and ϑmin are the variation range of the projection angles. The shaded region is the SAR recording area 18
Figure 1.8 Synthesis of a radar aperture pattern: (a) real antenna array and (b) synthesised antenna array 22
Chapter 2
Figure 2.1 Viewing geometry for a rotating cylinder: 1, 2, 3 – scattering centres (scatterers) 30
Figure 2.2 Schematic illustrations of aperture synthesis techniques: (a) direct synthesis implemented in SAR, (b) inverse synthesis for a target moving in a straight line and (c) inverse synthesis for a rotating target 32
Figure 2.3 The holographic approach to signal recording and processing in SAR: 1 – recording of a 1D Fraunhofer or Fresnel diffraction pattern of the target field in the form of a transparency (azimuthal recording of a 1D microwave hologram), 2 – 1D Fourier or Fresnel transformation, 3 – display 34
Figure 2.4 Synthesis of a microwave hologram: (a) quadratic hologram recorded at a high frequency, (b) quadratic hologram recorded at an intermediate frequency, (c) multiplicative hologram recorded at a high frequency, (d) multiplicative hologram recorded at an intermediate frequency, (e) quadrature holograms, (f) phase-only hologram 37
Figure 2.5 A block diagram of a microwave holographic receiver: 1 – reference field, 2 – reference signal cos(ω0 t + ϕ0), 3 – input signal A cos(ω0 t + ϕ0 − ϕ), 4 – signal sin(ω0 t + ϕ0) and 5 – mixer 39
Figure 2.6 Illustration for the calculation of the phase variation of a reference wave 39
Figure 2.7 The coordinates used in target viewing 42
Figure 2.8 2D data acquisition design in the tomographic approach 45
Figure 2.9 The space frequency spectrum recorded by a coherent (microwave holographic) system. The projection slices are shifted by the value fpo from the coordinate origin 46
Figure 2.10 The space frequency spectrum recorded by an incoherent (tomographic) system 47
Chapter 3
Figure 3.1 A scheme illustrating the focusing properties of a Fresnel zone plate: 1 – collimated coherent light, 2 – Fresnel zone plate, 3 – virtual image, 4 – real image and 5 – zeroth-order diffraction 50
Figure 3.2 The basic geometrical relations in SAR 51
Figure 3.3 An equivalent scheme of 1D microwave hologram recording by SAR 51
Figure 3.4 The viewing field of a holographic radar 60
Figure 3.5 A schematic diagram of a front-looking holographic radar 61
Figure 3.6 The resolution of a front-looking holographic radar along the x-axis as a function of the angle ϕ 61
Figure 3.7 The resolution of a front-looking holographic radar along the z-axis as a function of the angle ϕ 62
Figure 3.8 Generalised schemes of hologram recording (a) and reconstruction (b) 63
Figure 3.9 Recording (a) and reconstruction (b) of a two-point object for finding longitudinal magnifications: 1, 2 – point objects, 3 – reference wave source and 4 – reconstructing wave source 67
Figure 3.10 The focal depth of a microwave image: 1 – reconstructing wave source, 2 – real image of a point object and 3 – microwave hologram 68
Figure 3.11 The basic geometrical relations for a spot-light SAR 70
Chapter 4
Figure 4.1 The geometrical relations in a SAR 85
Figure 4.2 A generalised block diagram of a SAR 85
Figure 4.3 The variation of the parameter Q with the synthesis range Ls at λ = 3 cm, Δ = 0.02 and various values of R 90
Figure 4.4 The dependence of the spatial correlation range of the image on normalised Ls for multi-ray processing (solid lines) at various degrees of incoherent integration De and for averaging of the resolution elements (dashed lines) at various Ge: λ = 3 cm, R = 10 km; 1, 5 – 0 (curves overlap); 2, 6 – 0.25(λR/2)^1/2; 3, 7 – (λR/2)^1/2; 4, 8 – 2.25(λR/2)^1/2 91
Figure 4.5 The variation of the parameter Qh with the number of integrated signals Ni at various values of Ka 93
Figure 4.6 The variation of the parameter Qe with the synthesis range Ls at various signal correlation times τc 97
Figure 4.7 The parameter Q as a function of the synthesis range Ls at various signal correlation times τc 98
Chapter 5
Figure 5.1 A schematic diagram of direct bistatic radar synthesis of a microwave hologram along arc L of a circle of radius R0: 1 – transmitter, 2 – receiver 102
Figure 5.2 A schematic diagram of inverse synthesis of a microwave hologram by a unistatic radar located at point C 103
Figure 5.3 The geometry of data acquisition for the synthesis of a 1D microwave Fourier hologram of a rotating object 103
Figure 5.4 Optical reconstruction of 1D microwave images from a quadrature Fourier hologram: (a) flat transparency, (b) spherical transparency 106
Figure 5.5 The dependence of microwave image resolution on the normalised aperture angle of the hologram 109
Figure 5.6 Microwave images reconstructed from Fourier holograms: (a) quadrature hologram, (b) complex hologram with carrier frequency, (c) complex hologram without carrier frequency and (d, e, f) the variation of the reconstructed image with the hologram angle ψs (complex hologram without carrier frequency) 113
Figure 5.7 The algorithm of digital processing of 1D microwave complex Fourier holograms 114
Figure 5.8 A microwave image of a point object, reconstructed digitally from a complex Fourier hologram as a function of the object's aspect ψ0 (ψs = π/6): (a) ψ0 = π/12, (b) ψ0 = 5π/2 and (c) ψ0 = 3π/4 115
Chapter 6
Figure 6.1 The aspect variation relative to the line of sight of a ground radar as a function of the viewing time for a satellite at the culmination altitudes of 31°, 66° and 88°: (a) aspect α and (b) aspect β 121
Figure 6.2 Geometrical relations for 3D microwave hologram recording: (a) data acquisition geometry; a–b, trajectory projection onto a unit surface relative to the radar motion and (b) hologram recording geometry 123
Figure 6.3 The sequence of operations in radar data processing during imaging 125
Figure 6.4 Subdivision of a 3D microwave hologram into partial holograms: (a) 1D partial (radial and transversal), (b) 2D partial (radial and transversal) and (c) 3D partial holograms 128
Figure 6.5 Subdivision of a 3D surface hologram into partial holograms: (a) radial, (b) 1D partial transversal and (c) 2D partial 129
Figure 6.6 Coherent summation of partial holograms. A 2D narrowband microwave hologram: (a) highlighting of partial holograms and (b) formation of an integral image 132
Figure 6.7 Coherent summation of partial holograms. A 2D wideband microwave hologram: (a) highlighting of partial holograms, (b) formation of an integral image 133
Figure 6.8 The computational complexity of the coherent summation algorithms as a function of the target dimension for a narrowband microwave hologram: (a) transverse partial images, (b) hologram samples 137
Figure 6.9 The relative computational complexity of coherent summation algorithms as a function of the target dimension for a narrowband microwave hologram: (a) transverse partial images/CCA, (b) hologram samples/CCA 140
Figure 6.10 The relative computational complexity of coherent summation algorithms of hologram samples and transverse partial images versus the coefficient µ in the case of a wideband hologram 141
Figure 6.11 The relative computational complexity of coherent summation algorithms for radial and transverse partial images versus the coefficient µ in the case of a wideband hologram 142
Figure 6.12 The transformation of the partial coordinate frame in the processing of a 3D hologram by coherent summation of transverse partial images 143
Chapter 7
Figure 7.1 Characteristics of an imaging device in the case of partially coherent echo signals: (a) potential resolving power at C2 = 1, (b) performance criterion (1 – dc = 6.98 m, 2 – dc = 3.49 m and 3 – dc = 0) 151
Figure 7.2 Typical errors in the impulse response of an imaging device along the s-axis: (a) response shift, (b) response broadening, (c) increased amplitude of the response sidelobes and (d) combined effect of the above factors 153
Figure 7.3 The resolving power of an imaging device in the presence of range instabilities versus the synthesis time Ts and the method of resolution step measurement: (a) σp = 0.04 m; 1 and 1′ (2 and 2′) – first (second) way of resolution step measurement; 1 and 2 – Tc = 1.5 s, 1′ and 2′ – Tc = 3 s; (b) σp = 0.05 m, 1 and 1′ (2 and 2′) – first (second) way of resolution step measurement; 1 and 2 – Tc = 1.5 s, 1′ and 2′ – Tc = 3 s 154
Figure 7.4 The resolving power of an imaging system in the presence of velocity instabilities versus the synthesis time Ts and the method of resolution step measurement: (a) σx′,y = 0.1 m/s (other details as in Fig. 7.3), (b) σx′,y = 0.2 m/s (other details as in Fig. 7.3) 155
Figure 7.5 Evaluation of the performance of a processing device in the case of partially coherent signals versus the synthesis time Ts and the space step of path instability correlation dc: 1 – dc = 6.98 m, 2 – dc = 3.49 m 155
Chapter 8
Figure 8.1 The normalised refractive index spectrum Φn(χ)/Cn² as a function of the wavenumber χ in various models: 1 – Tatarsky's model-I, 2 – Tatarsky's model-II, 3 – Kármán's model and 4 – modified Kármán's model 160
Figure 8.2 The profile of the structure constant Cn² versus the altitude for April at the SAR wavelength of 3.12 cm 164
Figure 8.3 The profile of the structure constant Cn² versus the altitude for November at the SAR wavelength of 3.12 cm 165
Figure 8.4 A geometrical construction for a spaceborne SAR tracking a point object A through a turbulent atmospheric stratum of thickness ht 168
Figure 8.5 A schematic test ground with corner reflectors for investigation of SAR performance 176
Figure 8.6 A 1D SAR image of two corner reflectors 177
Figure 8.7 A histogram of the noise distribution in a SAR receiver 178
Figure 8.8 The grey-level (half-tone) resolution versus the number of incoherently integrated frames N 179
Figure 8.9 The dependence of the image interpretability on the linear resolution pa = pr = p 180
Figure 8.10 The dependence of the half-tone resolution on the number of incoherent integrations over the total real antenna pattern 181
Chapter 9
Figure 9.1 The mean monthly convoy speed in the NSR changes from V0 (without satellite data) to V1 (SAR images used by the icebreaker's crew to select the route in sea ice). The mean ice thickness (hi) is shown as a function of the season. (N. Babich, personal communication) 198
Figure 9.2 (a) Photo of grease ice and (b) a characteristic dark SAR signature of grease ice. © European Space Agency 199
Figure 9.3 Photo of typical nilas with finger-rafting 200
Figure 9.4 A RADARSAT ScanSAR Wide image of 25 April 1998, covering an area of 500 km × 500 km around the northern Novaya Zemlya. A geographical grid and the coastline are superimposed on the image. © Canadian Space Agency 201
Figure 9.5 A RADARSAT ScanSAR Wide image of 3 March 1998, covering the boundary between old and first-year sea ice in the area to the north of Alaska. © Canadian Space Agency 202
Figure 9.6 (a) Photo of a typical pancake ice edge and (b) a characteristic ERS SAR signature of pancake ice. A mixed bright and dark backscatter signature is typical for pancake and grease ice found at the ice edge. © European Space Agency 203
Figure 9.7 A RADARSAT ScanSAR Wide image of 8 May 1998, covering the south-western Kara Sea. © Canadian Space Agency 204
Figure 9.8 An ENVISAT ASAR image of 28 March 2003, covering the ice edge in the Barents Sea westward and southward of Svalbard. © European Space Agency 205
Figure 9.9 An ERS-2 SAR image of 11 September 2001, covering the Red Army Strait in the Severnaya Zemlya Archipelago. © European Space Agency 206
Figure 9.10 An ERS-2 SAR image (100 km × 100 km) taken on 24 June 2000 over the Black Sea (region to the east of the Crimea peninsula) and showing upwelling and natural films 208
Figure 9.11 SST retrieved from a NOAA AVHRR image on 24 June 2000 209
Figure 9.12 A fragment of an ERS-2 SAR image (26 km × 22 km) taken on 30 September 1995 over the Northern Sea near the Norwegian coast and showing swell 210
Figure 9.13 An ERS-2 SAR image (100 km × 100 km) taken on 28 September 1995 over the Northern Sea and showing an oil spill, wind shadow, low wind and ocean fronts 211
Figure 9.14 An ERS-1 SAR image (100 km × 100 km) taken on 29 September 1995 over the Northern Sea showing rain cells 213
Figure 9.15 An ERS-2 SAR image (18 km × 32 km) taken on 30 September 1995 over the Northern Sea showing an internal wave and a ship wake 214
Figure 9.16 The scheme of the reconstruction algorithm 221
Figure 9.17 A typical 1D image of a perfectly conducting cylinder 222
Figure 9.18 The local scattering characteristics for a metallic cylinder (E-polarisation) 223
Figure 9.19 The local scattering characteristics for a metallic cylinder (H-polarisation) 224
Figure 9.20 A mathematical model of a radar recognition device 226
List of tables
Chapter 6
Table 6.1 The number of spectral components of a PH 136
Chapter 8
Table 8.1 The main characteristics of the synthetic aperture pattern 174
Chapter 9
Table 9.1 Technical parameters of SARs borne by the SEASAT and Shuttle 192
Table 9.2 Parameters of the Almaz-1 SAR 192
Table 9.3 The parameters of the ERS-1/2 satellites 193
Table 9.4 SAR imaging modes of the RADARSAT satellite 194
Table 9.5 The ENVISAT ASAR operation modes 194
Table 9.6 The LRIR characteristics 216
Table 9.7 The variants of the sign vectors 227
Table 9.8 The valid recognition probability (a Bayes classifier) 228
Table 9.9 The valid recognition probability (a classifier based on the method of potential functions) 228
Introduction
The analysis of the current state and tendencies in radar development shows that novel methods of target viewing are based on a detailed study of echo signals and their informative characteristics. These methods are aimed at obtaining complete data on a target, with emphasis on revealing new steady parameters for their recognition. One way of raising the efficiency of radar technology is to improve available methods of radiovision, or imaging. Radiovision systems provide a high resolution, considerably extending the scope of target detection and recognition. This field of radar science and technology is very promising, because it paves the way from the classical detection of a point target to the imaging of a whole object.
The physical mechanism underlying target viewing can be understood on a heuristic basis. An electromagnetic wave incident on a target induces an electric current on it, generating a scattered electromagnetic wave. In order to find the scattering properties of the target, we must visualise its elements making the greatest contribution to the wave scattering. This brings us to the concept of a radar image, which can be defined as a spatial distribution pattern of the target reflectivity. Therefore, an image must give a spatial quantitative description of this physical property of the target with a quality not less than that provided by conventional observational techniques.
Radiovision makes it possible to sense an object as a visual picture. This is very important because we get about 90 per cent of all information about the world through vision. Of course, a radar image differs from a common optical image. For instance, a surface rough to light waves will be specular to radio waves (microwaves), and images of many objects will look like bright spots, or glare. However, the representation of information transported by microwaves as visual images has become quite common. It took much time and effort to get a high angular resolution in the microwave frequency band because of the limited size of a real antenna. It was not until the 1950s–1960s that a sufficiently high resolution was obtained by a side-looking radar with a large synthesised antenna aperture. The synthetic aperture method was then described in terms of the range-Doppler approach.
At about the same time, a new method of imaging in the visible spectrum emerged which was based on recording and reconstruction of the wavefront and its phase, using a reference wave. A lens-free registration of the wavefront (the holographic technique), followed by the image reconstruction, was first suggested by D. Gabor in 1948 and re-discovered by E. Leith and J. Upatnieks in 1963. The two researchers suggested a holographic method with a 'side reference beam' to eliminate the zeroth diffraction order. This principle was later used in a new, side-looking type of radar.
A specific feature of holographic imaging is that a hologram records an integral Fourier or Fresnel transform of the object's scattering function. The emergence of holography radically changed our conception of an object's image. Earlier, humans had dealt with images produced by recording the distribution of light intensity in a certain plane. But objects can generate a light field or another kind of electromagnetic field with all of its parameters modulated: the amplitude, phase, polarisation, etc. This discovery considerably extended the scope of spatial information that could be extracted about the object of interest.
It should be noted that holography brought about revolutionary changes only in optics, because until then optics had possessed no ways or means to save the recorded information about the phase structure of an optical field. But the application of holographic principles to the microwave frequency band proceeded easily, giving excellent results. This was due to the fact that radio engineering had employed methods of registration of the electromagnetic wave phase long before the emergence of holography. For many years, radar imaging developed independently of holography, although some workers (E.N. Leith, W.E. Kock, D.L. Mensa, B.D. Steinberg) did note that many intermediate steps in the recording and processing techniques for radar imaging were quite similar to those of holography and tomography. These researchers, however, only briefly reviewed the holographic principles just to point out the fundamental similarity and difference between optical and radar imaging, but they did not make a comprehensive analysis of this fact in the context of radiolocation.
E.N. Leith and A.L. Ingalls showed that the operation of a side-looking radar should be treated in terms of a holographic approach. Holograms recorded in the microwave frequency range were referred to as microwave holograms, and radar systems based on the holographic principle were called by E.N. Leith quasi-holographic. In fact, the work done in those years became the basis for designing a special type of radar to perform imaging. The research into radar imaging was developing quite intensively, and many scientists made their contributions to it: L.J. Cutrona, A. Kozma, D.A. Ausherman, G. Graf, I.L. Walker, W.M. Brown, D.L. Mensa, D.C. Munson, B.D. Steinberg, N.H. Farhat, V.C. Chen, D.R. Wehner and others.
These efforts were accompanied by the development of tomographic techniques for image reconstruction in medicine and physiology (X-ray imaging). Initially, tomography was treated as a way of reconstructing the spatial distribution of a certain physical characteristic of an object by making computational operations with data obtained during the probing of the object. This resulted in the emergence of reconstructive computerised tomography possessing powerful mathematical methods. Later, tomographic techniques were suggested that were capable of reconstructing a physical characteristic of an object by a mathematical processing of the field reflected by it.
Naturally, there have been suggestions to combine the available methods of radar imaging (e.g. the range-Doppler principles) with tomographic algorithms (D.L. Mensa, D.C. Munson). At present, the work on radar imaging goes on, combining the principles of microwave holography, range-Doppler methods of reflected field recording and tomographic image reconstruction.
In Russia, the theory of a side-looking radar has been developed by many workers: Yu.A. Melnik, N.I. Burenin, G.S. Kondratenkov, A.P. Reutov, Yu.A. Feoktistov, E.F. Tolstov, L.B. Neronsky and others. A major contribution to the theory of inverse aperture synthesis and tomographic image processing has been made by S.A. Popov, B.A. Rozanov, J.S. Zinoviev, A.Ya. Pasmurov, A.F. Kononov, A.A. Kuriksha, A. Manukyan and others.
This book presents systematised results on the application of direct and inverse aperture synthesis for radar imaging by holographic and tomographic techniques. The focus is on the research data obtained by the authors themselves. The book is primarily intended for engineers, designers and researchers who are working in radar design and maintenance and are interested in the fundamental problem of extracting useful information from radar data.
The book consists of three parts: introductory Chapters 1 and 2, theoretical Chapters 3–8 and concluding Chapter 9.
The first two chapters will be useful to a reader who has but limited knowledge of optical holography, microwave holography and tomography. They cover material available in the literature, but the information is presented in such a way that the reader will be able to better understand the chapters that follow. Besides, Chapter 1 treats the equation for an optical hologram in a non-trivial way to explain the speckle structure of a radar image. Chapter 2 explains the physical difference between coherent (microwave holographic) and incoherent (tomographic) imaging. The mathematical relations presented can be regarded as an extension of the classical projection slice theorem to coherent imaging. This allows application of the analytical methods of reconstructive computerised tomography for further development of coherent imaging theory.
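The classical projection slice theorem referred to here states that the 1D Fourier transform of a projection of an object equals a central slice of the object's 2D Fourier transform. As a minimal numerical sketch (not taken from the book; the point-scatterer test image and NumPy-based setup are illustrative assumptions), this identity can be checked directly:

```python
import numpy as np

# A small test "reflectivity" image with a few point scatterers.
img = np.zeros((64, 64))
img[20, 30] = 1.0
img[40, 12] = 0.7
img[33, 50] = 0.4

# Projection of the image onto the x-axis (integrate along y).
projection = img.sum(axis=0)

# Projection slice theorem: the 1D FT of this projection equals the
# zero-frequency row (the ky = 0 central slice) of the 2D FT of the image.
proj_spectrum = np.fft.fft(projection)
slice_from_2d = np.fft.fft2(img)[0, :]

print(np.allclose(proj_spectrum, slice_from_2d))  # True
```

Projections taken at many angles thus fill out the 2D spectrum slice by slice, which is the basis of the tomographic reconstruction methods discussed in the chapters that follow.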
Chapters 3–8 represent an attempt to treat the imaging radar operation in terms of holography, microwave holography and tomography, without resorting to the Doppler approach. Most of this material comprises the authors' results published during the past 30 years.
Chapter 3 discusses the holographic approach as applied to a side-looking radar. Its azimuthal channel is treated as a holographic system, in which the formation of a microwave hologram represents the recording of a field scattered from an artificial reference source, and the image reconstruction is described in terms of physical optics. We show that the use of a subcarrier frequency obtained by turning the antenna beam away from the direction normal to the track velocity vector leads to a distorted image. The holographic approach can readily evaluate a permissible deviation of the carrier's pathway from a straight line and find various radar parameters, using conventional geometrical and optical methods. The holographic analysis of a front-looking radar on the basis of a generalised hologram geometry shows that the image is three-dimensional (3D); we describe the conditions for recording an undistorted image in the longitudinal and transversal directions. We also introduce the concept of a focal depth and explain the pseudoscopic character of an image. The application of tomographic principles to a spot-light radar is largely discussed using the results of D.C. Munson, who was the first to demonstrate their applicability to data processing.
Chapter 4 considers the radar aperture synthesis during the viewing of partially coherent and extended targets. The mathematical model of the aperture is also based on the holographic principle; the aperture is thought of as a filter with a frequency-contrast characteristic, which registers the space–time spectrum of a target. This approach is useful for the calculation of incoherent integration efficiency to smooth out low contrast details on an image.
InChapter5wediscussmicrowaveimagingof arotatingtarget, using1DFourier
hologramtheory andfindthelongitudinal andtransversescales of areconstructed
image, thetarget resolution and acriterion for an optimal processing of aFourier
microwavehologram. Theresolutionof avisual radarimageisfoundtobeconsistent
withtheAbbecriterionfor optical systems. Onespecificity isthat it isnecessary to
introduceaspacecarrier frequency toseparatetwoconjugateimagesandanimage
of thereferencesource. Herewehaveananalogywithsyntheticaperturetheory, with
theexceptionthatweemploytheconceptof acomplexmicrowaveFourierhologram.
Itisshownthatthereisnozerothdiffractionorder indigital reconstruction. Wehave
formulatedsomerequirementsonmethodsanddevicesfor synthesisingthistypeof
hologram. Thismethodiseasyanduseful toimplement inananechoicchamber.
Chapter6focusesontomographicprocessingof 2Dand3Dmicrowaveholograms
of arotatingtarget in3D viewinggeometry withanon-equidistant arrangement of
echosignal recordsintheregistrationof itsaspect variation(for spaceobjects). The
suggestedtechniqueof imagereconstructionisbasedontheprocessingof microwave
hologramsbycoherentsummationof partial holograms. Theseareclassifiedinto1D,
2D, 2Dradial, aswell asnarrowbandandwidebandpartial holograms. Thistechnique
is feasiblein any modeof target motion. Themethodof hologramsynthesis com-
binedwithcoherentcomputerisedtomographyrepresentsanewprocessingtechnique
whichaccountsfor alargevariationof real hologramgeometriesin3Dviewing. This
advantageisinaccessibletoother processingproceduresyet.
Chapter 7 is concerned with methods of hologram processing for a target moving in a straight line and viewed by a ground radar processing partially coherent echo signals. The signal coherence is assumed to be perturbed by such factors as a turbulent medium, elastic vibrations of the target's body, vibrations of parts of the engines, etc. We suggest an approach to modelling the track instabilities of an aerodynamic target and present estimates of the radar resolving power in a real cross-section region.

Chapter 8 focuses on phase errors in radar imaging, evaluation of image quality and speckle noise.
Finally, possible applications of radar imaging are discussed in Chapter 9. The emphasis is on spaceborne synthetic aperture radars for surveying the earth surface. Some novel and original developments by researchers and designers at the Nansen Environmental and Remote Sensing Centre in Bergen (Norway) and at the Nansen International Environmental and Remote Sensing Centre in St Petersburg (Russia) are described. They have much experience in processing holograms from various SARs: Almaz-1 (Russia), RADARSAT (Canada), ERS-1/2 and ENVISAT ASAR (the European Space Agency). Of special interest to the reader might be the information about the use of microwave holography for classification of sea ice, navigation in the Arctic, global monitoring of ocean phenomena, and characteristics to be used for surveying gas and oil resources. We illustrate the use of the holographic methods in a coherent ground radar for 2D imaging of the Russian spacecraft Progress and for the study of local radar responses to objects of complex geometry in an anechoic chamber, aimed at target recognition.

To conclude, the methods and techniques described in this book are also applicable to many other research fields, including ultrasound and sonar, astronomy, geophysics, environmental sciences, resources surveys, non-destructive testing, aerospace defence and medical imaging, which have already started to utilise this rapidly developing technology. We hope that our book will also be used as an advanced textbook by postgraduate and graduate students in electrical engineering, physics and astronomy.
Acknowledgements

The idea to write a book about the application of holographic principles in radiolocation occurred to us at the end of the last century and was supported by the late Professor V.E. Dulevich. We are indebted to him for his encouragement and useful suggestions.

We express our gratitude to the staff members of the Nansen Centres (Bergen and St Petersburg), who provided us with valuable information about the practical application of a side-looking radar. We should like to thank V.Y. Aleksandrov, L.P. Bobylev, D.B. Akimov, O.M. Johannessen and S. Sandven for their help in the preparation of these materials.

Our deepest thanks also go to our colleagues E.F. Tolstov and A.S. Bogachev for their excellent description of the criteria for evaluation of radar images. This book is based on the results of our investigations that have taken a long period of time. We have collaborated with many specialists who helped to shape our conception of a coherent radar system. We thank them all, especially S.A. Popov, G.S. Kondratenkov, P.Ya. Ufimtzev, D.B. Kanareykin and Yu.A. Melnik, whose contribution was particularly valuable. We also thank our pupils V.R. Akhmetyanov, A.L. Ilyin and V.P. Likhachev for their assistance in the preparation of this book. We are also grateful to L.N. Smirnova, the translator of the book, for her immense help in producing the English version.
Chapter 1
Basic concepts of radar imaging
1.1 Optical definitions
At present, there is a certain class of microwave radars capable of imaging various types of extended targets. These are usually termed imaging radars. Before giving a definition of a 'microwave image', we should like to draw the reader's attention to two circumstances. First, a microwave image is always viewed by a radar operator in the visible range, while the imaging is performed in the microwave range. Second, this book considers radar imaging based on a combination of holographic and tomographic approaches. Therefore, we should first recall the basic concepts necessary for the description of imaging by conventional photographic and holographic devices in the visible spectral range.
Let us construct an image of an object (AB) formed by a thin lens (Fig. 1.1) [19]. The lens thickness can be neglected, and one can assume that the principal planes of the object AB and its image A′B′ coincide and pass through the lens centre (line M′N′). The other designations are the focal lengths HF, HF′, f, f′ and the distances x, x′ separating the object and its image from the respective focal points F and F′.

The straight line AA′ connecting the vertices of the object and the image passes through the centre of the lens H. If we draw an auxiliary ray AF intercepting the principal plane at the point N and an auxiliary ray AM parallel to the optical axis at the point A′, where the refracted rays MA′ and NA′ intercept, we can find the image of the point A. If we draw the normal A′B′ from the point A′ to the optical axis, we shall get the optical image of the object AB. The similarity conditions yield the governing equations for an optical image, or Newton's formulae:

y′/y = −f/x = −x′/f′, (1.1)

xx′ = ff′. (1.2)
Figure 1.1 The process of imaging by a thin lens
The relation between the elements of an image and the corresponding elements of an object is known as the linear, or transversal, lens magnification V, defined as

V = y′/y. (1.3)

Since the lens is described by the equality f = −f′, Eq. (1.2) gives

xx′ = −f². (1.4)

Newton's formulae relate the distances of the object and the image to the respective focal points. However, it is sometimes more convenient to use their distances to the respective principal planes. Let us denote these distances as a₁ and a₂. Then, using Fig. 1.1 and Eq. (1.2), we can get

1/a₂ − 1/a₁ = 1/f. (1.5)

The linear magnification can be expressed through a₁ and a₂ as

V = a₂/a₁. (1.6)
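As a numerical sanity check of Eqs (1.4)–(1.6), the short Python sketch below (our own illustration, not from the text) finds the image distance and magnification for an object placed at three focal lengths from the lens, and cross-checks Newton's formula xx′ = −f². A Cartesian sign convention is assumed: distances are measured from the lens, positive to the right, so a₁ < 0 for a real object on the left.

```python
# Thin-lens numerics (Cartesian sign convention: a1 < 0 for a real object).
def image_distance(a1, f):
    """Solve 1/a2 - 1/a1 = 1/f (Eq. 1.5) for the image distance a2."""
    return 1.0 / (1.0 / f + 1.0 / a1)

def magnification(a1, a2):
    """Linear (transversal) magnification V = a2/a1 (Eq. 1.6)."""
    return a2 / a1

f = 10.0      # focal length, cm
a1 = -30.0    # object 30 cm to the left of the lens
a2 = image_distance(a1, f)
V = magnification(a1, a2)
print(a2, V)  # real, inverted, half-size image: a2 = 15 cm, V = -0.5

# Cross-check Newton's formula xx' = -f^2 (Eq. 1.4), with x and x'
# measured from the front and rear focal points F and F'.
x = a1 - (-f)     # object measured from F
xp = a2 - f       # image measured from F'
assert abs(x * xp + f**2) < 1e-9
```

The same relations hold for any object position outside the focal length; placing the object inside f makes a₂ negative, i.e. a virtual image.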
Consider now the concept of focal depth in the image space [80]. When constructing the image to be produced by a lens, we assumed that the image and the object were in planes normal to the optical axis. Suppose now that the object AB, say, a bulb filament, is inclined to the optical axis, as is shown in Fig. 1.2, while a photographic plate is in the plane M₁ normal to the optical axis of the objective lens. In order to find the image on the photoplate, we shall construct rays of light going away from individual points of the object. The light beams going from the object AB to the objective lens and from the objective lens to the image are conic, with the lens as the base and the points of the object and the image as the vertices. Imagine that the image of the point M of an object lying in the optical axis is on a photoplate in the plane M₁ (Fig. 1.2(a) and (d)). Then the beam of rays converging onto this image will have its vertex on the plate. The object's extremal points A and B will produce conic rays with the vertices in front of the photoplate (B′, Fig. 1.2(c)) and behind it (A′, Fig. 1.2(b)). Thus, it is only the point M in the optical axis that will have its image as a bright point M′ in Fig. 1.2(a). The end points A and B of the line will look like light circles A′ and B′. The image of the line will look like M₁ in Fig. 1.2(d). If the photoplate is shifted towards A′ (Fig. 1.2(b)) or B′ (Fig. 1.2(c)), we shall have different images M₂ or M₃ (Fig. 1.2(d)).

Figure 1.2 A schematic illustration of the focal depth of an optical image: (a) image of point M lying in the optical axis; (b) image of point A; (c) image of point B and (d) images of points A and B in the planes M₁, M₂ and M₃
It follows from this representation that the image of a 3D object extended along the optical axis will have different focal depths on the plate at all the points in the image space. In practice, however, images of such objects have a good contrast. Therefore, the objective lens possesses a considerable focal depth. This parameter determines the longitudinal distance between two points of an object such that the sizes of their images do not exceed the eye's unit resolution. Therefore, the classical recording on a photoplate produces a 2D image, which cannot be transformed to a 3D image. The third dimension may be perceived only due to indirect phenomena such as perspective.

Now let us describe the real and virtual optical images and see how the image of a point object M can be constructed with rays. The rays go away from the object in all directions. If one of the rays encounters a lens along its pathway, its trajectory will change. If the rays deflected by the lens intercept, when extended along the light propagation direction, a point image will be formed at the interception and can be recorded on a screen or a photoplate. This kind of image is known as real. However, when the rays intercept only on being extended in the direction opposite to the light propagation, both the interception point and the image are said to be virtual. The images in Fig. 1.2 are real because they are formed by rays intercepting at their extension along the light propagation.

An optical image possesses orthoscopic or pseudoscopic properties. Suppose a 2D object has a surface relief; its image will be orthoscopic if it is not reversed longitudinally: the convex parts of the object look convex on the image. Using the above approach, we can show that the image formed by a thin lens is orthoscopic. If an image has a reverse relief, it is termed pseudoscopic; such images are produced by holographic cameras.

Thus, images produced by classical methods have the following typical characteristics.
• Imaging includes only the recording of incident light intensity, while its wave phase remains unrecorded. For this reason, this sort of image cannot be transformed to a 3D image.
• An image has a limited focal depth.
• An image produced by a thin lens is real and orthoscopic.
1.2 Holographic concepts

Holography is a lens-free way of recording images of 3D objects on 2D recording media [29]. This process includes two stages. The first stage is called hologram recording, during which the interference between the diffraction field from an object and a reference field is recorded on a photoplate or another photosensitive material. A necessary condition is that both fields should be coherent. In their original experiments, the pioneers of holography used mercury sources that were later replaced by lasers. The interference pattern registered on a photoplate was called a hologram. The second stage is that of image reconstruction, including the illumination of the processed photoplate with a wave identical to the reference wave. Suppose, for simplicity, that the reference wave is plane (Fig. 1.3) and propagates at an angle θ to the z-axis (x, y, z are coordinates in the hologram plane). The object's wave is described by a complex function

u(x, y) = a(x, y) exp(−jϕ(x, y))

and the reference wave by the function

u_o(x, y) = a_o exp(−jω_o x),

where ω_o = k sin θ, θ is the angle of wave incidence onto a photoplate located in the xOy plane, k = 2π/λ₁ is the wave number, and λ₁ is the wavelength of the coherent light source.

Figure 1.3 The process of optical hologram recording: 1 – reference wave; 2 – object; 3 – photoplate and 4 – object's wave
The intensity of the interference pattern on the hologram is

I(x, y) = |u_o(x, y) + u(x, y)|²
= a_o² + a²(x, y) + {exp{j[ϕ(x, y) − ω_o x]} + exp{−j[ϕ(x, y) − ω_o x]}} a_o a(x, y)
= a_o² + a²(x, y) + 2a_o a(x, y) cos[ϕ(x, y) − ω_o x]. (1.7)
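Equation (1.7) is easy to verify numerically. The sketch below (an illustrative example of our own, with a hypothetical uniform object wave) records |u_o + u|² on a strip of the hologram plane and checks that it equals the expanded cosine form; it also evaluates the fringe period T = 2π/ω_o for a He–Ne wavelength and a 30° incidence angle.

```python
import numpy as np

# Verify Eq. (1.7): |u_o + u|^2 = a_o^2 + a^2 + 2 a_o a cos(phi - w0 x).
lam = 0.6328e-6                  # He-Ne wavelength, m
theta = np.deg2rad(30.0)
k = 2 * np.pi / lam
w0 = k * np.sin(theta)           # space carrier frequency, rad/m

x = np.linspace(0.0, 20e-6, 2000)   # 20 um strip of the hologram plane
a, phi = 0.3, 0.8                   # toy object amplitude and phase (uniform)
a_o = 1.0

u = a * np.exp(-1j * phi)           # object wave (constant for this check)
u_o = a_o * np.exp(-1j * w0 * x)    # plane reference wave
I = np.abs(u_o + u) ** 2            # recorded intensity

I_expanded = a_o**2 + a**2 + 2 * a_o * a * np.cos(phi - w0 * x)
assert np.allclose(I, I_expanded)

# Fringe period of the carrier: T = 2*pi/w0 = lam/sin(theta)
T = 2 * np.pi / w0
print(T * 1e6)   # fringe period in micrometres
```

With a space-varying a(x) and ϕ(x) the same identity holds pointwise, which is exactly the statement that the object's amplitude and phase are coded onto the carrier fringes.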
In addition to the constant term a_o² + a²(x, y), the hologram function in Eq. (1.7) contains a harmonic term 2a_o a(x, y) cos(ω_o x) with the period

T = 2π/ω_o = λ₁/sin θ. (1.8)

The quantity ω_o which defines this period is known as the space carrier frequency (SCF) of a hologram. For example, for a He–Ne laser beam (λ₁ = 0.6328 µm) incident onto a hologram at an angle of 30°, the SCF is ω_o = 900 lines/mm. The minimum period of the SCF occurs at θ = π/2 and is equal to the wavelength λ₁. The a/a_o ratio is called the hologram modulation index.

It follows from Eq. (1.7) that the amplitude and phase distributions of the object's wave appear to be coded by the SCF amplitude and phase modulations, respectively. As a result, a hologram turns out to be a carrier of space frequency which contains spatial information, whereas a microwave is a carrier of angular frequency and contains temporal information. Phase-only holograms record only the phase variation rather than the amplitude.

The first stage of the holographic process is terminated by recording the quantity I(x, y): a photoplate records a hologram. The transmittance of an exposed and processed photoplate is

T_n(x, y) = I^(−γ), (1.9)

where γ is the plate contrast coefficient. It is reasonable to take γ = −2 because the hologram then corresponds to a sine diffraction grating which does not form diffraction orders higher than the first one. So we have

t(x, y) = √(T_n(x, y)) = I(x, y).
During the reconstruction, a hologram is illuminated by the same reference wave as was used at the recording stage. The reconstruction occurs due to the light diffraction on the hologram (Fig. 1.4). Immediately behind the hologram, a wave field is induced with the following components:

U(x, y) = exp(−jω_o x) t(x, y) = exp(−jω_o x) I(x, y)
= exp(−jω_o x)[a_o² + a²(x, y)] + a_o a(x, y) exp{−jϕ(x, y)}
+ exp{−j2ω_o x} exp{jϕ(x, y)} a_o a(x, y). (1.10)

With this, the second stage of the holographic process is terminated.

The three terms in Eq. (1.10) describe waves that form three different images (Fig. 1.4). The first wave preserves the direction of the reconstructing (plane) wave and represents the zero diffraction order, or light background. The second wave a_o a(x, y) exp[−jϕ(x, y)] reproduces the object's wave to an accuracy of the amplitude factor a_o, providing a virtual image of the object observed behind the hologram. At an angle (−2θ) relative to the normal to the hologram, a complex conjugate wave propagates, producing a real image in front of the hologram. It can be shown (Chapter 3) that the virtual image is orthoscopic and the real image is pseudoscopic. Of importance is the fact that the virtual image is 3D.
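The separation of the three diffraction orders in Eq. (1.10) can be seen directly in the spatial-frequency domain. The sketch below (a toy example of our own, with a uniform object wave and an arbitrarily chosen grid and carrier) multiplies the recorded intensity by the reconstructing wave and confirms with an FFT that energy appears only at the carrier frequency (zero order), at zero frequency (the object term) and at twice the carrier (the conjugate term).

```python
import numpy as np

# Spectral separation of the three terms in Eq. (1.10).
n = 4096
x = np.arange(n)
w0 = 2 * np.pi * 300 / n         # carrier: exactly 300 cycles across the aperture
a_o, a, phi = 1.0, 0.4, 0.0      # uniform toy object wave

I = a_o**2 + a**2 + 2 * a_o * a * np.cos(phi - w0 * x)   # hologram, Eq. (1.7)
U = np.exp(-1j * w0 * x) * I                             # reconstruction, Eq. (1.10)

spec = np.abs(np.fft.fft(U))
peaks = set(np.argsort(spec)[-3:])
# Zero order at frequency -w0 (bin n-300), object term at zero frequency
# (bin 0), conjugate term at -2*w0 (bin n-600).
assert peaks == {0, n - 300, n - 600}
```

A space-varying object wave simply spreads each of the three peaks into a band around these frequencies, which is why a sufficiently large carrier ω_o separates the images.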
Figure 1.4 Image reconstruction from a hologram: 1 – virtual image; 2 – real image; 3 – zero diffraction order and 4 – hologram
Consider the basic properties of a holographic image, in particular, the hologram information structure. Suppose the object to be imaged is a discrete ensemble of N coherently radiating points with the coordinates r_q. The object's field on the hologram aperture can be described by a sum [108]

U_r = Σ_{q=1}^{N} a_q exp(−jkr_q) = Σ_{q=1}^{N} α_q (1.11)

and the respective intensity distribution by the expression

|U_r|² = Σ_{q=1}^{N} Σ_{p=1}^{N} α_q α*_p, (1.12)

where the asterisk denotes a complex conjugate quantity.

The reference beam on the hologram aperture will be given as

U_o = a_o exp(−jkr_o) = α_o, (1.13)

where r_o are the reference beam coordinates. Then the intensity of the interference pattern can be written as

|U_r + U_o|² = a_o² + α*_o Σ_{q=1}^{N} α_q + α_o Σ_{p=1}^{N} α*_p + Σ_{q=1}^{N} Σ_{p=1}^{N} α*_q α_p. (1.14)
The last term in Eq. (1.14) corresponds to Eq. (1.12), but usually it is not analysed completely. In holographic theory (Eq. (1.7)), one often restricts one's consideration to the second and third terms. Commonly, the information about the object is assumed to be distributed uniformly across the hologram aperture; in reality, however, a hologram is synthesised from a set of microholograms. So the aperture is split into a multiplicity of microapertures having various information significance. Such partial microholograms may correspond to the object's field with varying polarisation.

The hologram structure has three space-frequency levels. The first level is associated with the diffraction characteristics of individual radiating scatterers, more exactly, with their scattering patterns and the distance to the hologram plane. The second level is associated with the interference of overlapping fields of different radiating scatterers, a factor described by the last term in Eq. (1.14). Both levels determine the structure of the object's field. The third level is due to the interference between the object's speckle field and the reference beam field; this is a holographic structure possessing the highest space frequencies.

A scatterer reflects waves in all directions; therefore, every point of the hologram receives information about the object as a whole (the third information level). That is why we can easily explain the experiment with a hologram broken into pieces: any piece can reconstruct the whole image because it contains information from all the scatterers. If a piece is small, the image quality will be poor since some details are lost because of the poorer resolution. The result is a characteristic speckle pattern due to the greater effect of the second-order elements on the hologram.

Thus, holographic images have the following specific features.
• The holographic method of image recording registers the field phase in addition to the amplitude.
• Reconstructed images are 3D.
• Holographic images possess a considerable focal depth.
• Holographic images may be orthoscopic or pseudoscopic.
1.3 The principles of computerised tomography

Computerised tomography is generally defined as a method of reconstructing the true image (density distribution) of an object, using special computational procedures with data registered when the object is subjected to probing [15]. Generally, probing is an arbitrary physical phenomenon (radiation, wave propagation, etc.) used for the study of an object's structure, and density means the distribution of an arbitrary physical characteristic of the object to be reconstructed. The true image is an image in which the reconstructed density at any point in space is ideally independent of the true densities beyond the point's vicinity, or of the minimum object's volume resolvable by a measuring system. Since a probing wave interacts with the object and this interaction is 'integrated' along its passage through the object, it is clear why tomography is said to be a method of image reconstruction from integrated data such as beam sums, projections and so on. Therefore, computerised tomography is a way of producing 2D images of slices of 3D objects by means of digital processing of a multiplicity of 1D functions (projections) obtained at various vision angles. There are three important aspects of this technique. First, there is the problem of reconstructed image singularity, that is, the degree to which the object is describable by available data. Second, it is necessary to know whether the reconstruction process is resistant to errors and noise in the initial data. Finally, one must design an algorithm for image reconstruction.
Figure 1.5 Viewing geometry in computerised tomography (from Reference 15): Γ_m – circumference for measurements; Γ_c – circumference with the centre at point O enveloping a cross section; P – arbitrary point in the circle with the polar coordinates ρ and Φ; A, C and D – wide beam transmitters; B, C′ and D′ – receivers; γ–γ, δ–δ – parallel elliptic arcs defining the resolving power of the transmitter–receiver pairs (CC′ and DD′)
The principle of computerised tomography can be conveniently illustrated with a 2D case [15]. We shall first consider tomographic procedures for the reconstruction of density across a body's slice. Let us introduce a circumference Γ_c (Fig. 1.5) enveloping a body, more exactly, the cross section of a real 3D object. The inner Γ_c region can be termed the image space because it includes the object to be imaged. The medium outside this region is assumed to be free, which means that a probing wave interacts only with the object. If the probing sources are located outside the Γ_c region, the method is called remote-probing computerised tomography, as opposed to remote-sensing tomography, when the sources are located within this region. The latter is, however, of no interest to radar imaging. The probing effects are commonly measured outside the Γ_c region. It is clear from information theory that measurements made along a certain circumference Γ_m (Fig. 1.5) embracing Γ_c will be quite sufficient.

Suppose a probing radiation transmitter is located on the circumference Γ_m. To prescribe the density at an arbitrary point ρ in the Γ_c region, we introduce the polar coordinate function g = g(ρ, Φ) and express the total probing effect E = E(ρ, Φ, t) as a sum of the incident E_i = E_i(ρ, Φ, t) and secondary E_S = E_S(ρ, Φ, t) effects: E = E_i + E_S. The problem then reduces to the reconstruction of the density distribution across the Γ_c region. Obviously, when a target is probed by electromagnetic radiation, the quantity E_i is the part of E directly related to the initial wavefront which is the first to arrive at any point in Γ_c, while E_S is composed of all effects scattered, often repeatedly, by all the points in the Γ_c region.
The secondary probing effect must be given as

E_S(ϑ, t) = Ψ{g(ρ, Φ); E(ρ, Φ, t); w}, (1.15)

where E_S(ϑ, t) is the amplitude value of E_S(ρ, Φ, t) on Γ_m, Ψ{…} is an integral operator determined in the Γ_c region, and w is the distance between the point P and the receiver B. It is easy to see that a tomographic problem is a classical inverse source problem, since the function g(ρ, Φ) is to be reconstructed from the known values of E_S(ϑ, t) and the source on Γ_m. Note that here we are faced with the problem of dimensionality, because the E_S(ϑ, t) measurements are 2D, while the resulting effect E(ρ, Φ, t) is 3D. Because of this discrepancy, inversion algorithms become numerically unstable and sensitive to any error in the initial data.

The solution to the inverse source problem is always approximate. An approximation most important to tomography involves geometrical optics, allowing the representation of probing effects as rays. This provides an optimal formulation of the inverse problem related directly to conventional computerised tomography, which reconstructs images from linear trajectories. This can be illustrated with Fig. 1.5. The signal recorded at point B can be represented as a function of the variables ϑ and Φ to show that this signal varies with the position of the point B on Γ_m and with the radiation incidence:

S(ϑ, Φ) = ∫_{l(A)}^{l(B)} g(ρ, Φ) dl, (1.16)

where l is a coordinate going along the ray, whose initial and final points are denoted as l(A) and l(B), respectively.

There are no dimensionality problems with this expression, because the measured quantity S(ϑ, Φ) and the reconstructed quantity g(ρ, Φ) are 2D. So if S(ϑ, Φ) is prescribed for a number of ϑ and Φ pairs sufficient for the description of g(ρ, Φ) with the desired accuracy, the true density distribution may be reconstructed such that the computational algorithm is stable. Equation (1.16) is a governing equation in conventional tomography. At present, there are various reconstruction techniques allowing the solution of this integral equation [88].
No doubt, it would be desirable to integrate the true image in the Γ_c region (in the image space). For practical considerations, however, the data may be integrated in a different space, whose properties depend on how the experimental data are related to the density function g(ρ, Φ). The quantity to be measured is often a Fourier image of the density distribution, so the data recording is said to be performed in the Fourier space. An example of this type of recording is that in a radio telescope with a synthetic aperture [118]. Although the data integration in the image space and the Fourier space is identical theoretically, the practical algorithms for image reconstruction differ essentially.

Figure 1.6 A scheme of an X-ray tomographic experiment using a collimated beam: 1 – X-rays; 2 – projection angle; 3 – registration line; 4 – projection axis and 5 – integration line

Many of the available algorithms for the reconstruction of an unknown 2D function g are based on the projection slice theorem [57,95]. It can be formulated with reference to Fig. 1.6 by introducing, in the image space, two rectangular coordinate systems xOy and uOv, rotated by the angle ϑ relative to each other. The projection of the g function at the angle ϑ is described as

P_ϑ(u) = ∫_{−∞}^{∞} g(u cos ϑ − v sin ϑ, u sin ϑ + v cos ϑ) dv, (1.17)

where P_ϑ(u) calculated at constant u = u_o is a 1D integral along the respective straight line parallel to the v-axis, so that the P_ϑ(u) function describes a set of integrals for all ϑ values. The projection theorem states [57] that a 1D Fourier image of a projection P_ϑ(u) made at an angle ϑ represents a 'slice' of the 2D Fourier transform of the g(x, y) function at the ϑ angle to the X-axis:

P_ϑ(U) = G(U cos ϑ, U sin ϑ) (1.18)

with

P_ϑ(U) = ∫_{−∞}^{∞} P_ϑ(u) e^{−juU} du,

G(X, Y) = ∫∫_{−∞}^{∞} g(x, y) e^{−j(xX+yY)} dx dy.
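The projection slice theorem of Eqs (1.17)–(1.18) is easy to test on a discrete grid. The sketch below (our own illustration with an arbitrary toy density) takes the special case ϑ = 0, where the projection is simply a sum along the v = y direction, and checks that its 1D FFT coincides with the X-axis slice of the 2D FFT of g.

```python
import numpy as np

# Discrete check of the projection slice theorem for theta = 0.
n = 128
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x, indexing='ij')      # axis 0 = x, axis 1 = y
g = np.exp(-(X**2 + (Y - 10)**2) / 50.0)     # toy density g(x, y)

proj = g.sum(axis=1)          # P_0(u): integrate along v (the y-axis), Eq. (1.17)
slice_1d = np.fft.fft(proj)   # 1D Fourier image of the projection

G = np.fft.fft2(g)            # 2D Fourier transform of g
assert np.allclose(slice_1d, G[:, 0])   # the Y = 0 slice, Eq. (1.18)
```

For a general ϑ the same identity holds after rotating the grid, which in practice requires interpolation; this is exactly the polar-to-rectangular regridding step mentioned below in connection with Fig. 1.7.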
Figure 1.7 The geometrical arrangement of the G(X, Y) pixels in the Fourier region on a polar grid. The parameters ϑ_max and ϑ_min are the variation range of the projection angles. The shaded region is the SAR recording area
In classical X-ray tomography, a body is probed by a collimated radiation beam (Fig. 1.6), while the P_ϑ(u) function is measured by a set of sensors located along a straight line normal to the radiation direction. The set of P_ϑi(u) projections for various ϑ angles is formed by rotating the object or the power transmitters and receivers. Then one usually uses a convolution back-projection (CBP) algorithm, to be discussed later. An alternative is to use a Fourier transform. The latter approach is convenient when data recording is made in the Fourier space and the pixel values of the P_ϑi(U) Fourier images are known. According to the projection theorem, these pixels also represent the pixels of the G(X, Y) function along a line at the ϑ angle to the X-axis. Therefore, the P_ϑi(U) values obtained for a set of ϑ angles prescribe the G(X, Y) pixels on a polar grid (Fig. 1.7). By using an interpolation algorithm, one can go over to the G(X, Y) pixels on a rectangular grid and use an inverse Fourier transform to reconstruct the density g(x, y). There have been attempts to compute g(x, y) directly from the G(X, Y) pixels on a polar grid to avoid using an interpolation algorithm [100].
Let us now discuss common approaches used in computerised tomography and radar imaging. In the latter, the target position is determined from the time delay of the radar echo and the antenna orientation. The range resolution is usually much higher than the angular resolution. Suppose a wide beam transmitter and a receiver of electromagnetic radiation are located at the points C and C′, respectively (Fig. 1.5). The geometrical positions of scatterers, whose echo signals arrive at the point C′ simultaneously, form an ellipse with the foci at C and C′. More exactly, this is a band between two 'concentric' ellipses, its width characterising the resolution limit of the system. Part of this band is denoted as γ–γ. In the case of imaging one point (when the points C and C′ overlap), the ellipses degenerate into circles. The total scattering intensity is proportional to the band-averaged density of scatterers. Since the distance between the ellipses is equal to the resolution, it is sufficient to integrate the density along an average ellipse. The signal recorded at the point C′ can be written as

S(C, C′; γ) = ∫_γ g(ρ, Φ) dl, (1.19)

where γ denotes an average ellipse and dl is an element of the ellipse length.

As in the case described by Eq. (1.16), there is no problem of dimensionality. The scatterers located in the vicinity of a certain point Q can be identified by changing the positions of the transmitter and the receiver. Figure 1.5 shows one of these positions, denoted as D and D′, and the respective band δ–δ.

The true density distribution can be reconstructed from a sufficiently large number of measurements made at different points. It is clear that the cases described by Eqs (1.16) and (1.19) differ only in the integration direction.
In X-ray tomography, the function g(x, y) describes an unknown distribution of the X-ray attenuation coefficient across a transversal slice (Fig. 1.6) to be measured. The P_θ(u) projection values are obtained in a multi-beam system represented as an array of X-ray transmitters and receivers located at the θ angle to the x-axis (Fig. 1.6). The intensity of the received radiation decreases exponentially as the beams pass along the line of g(x, y) integration. Therefore, the projection P_θ(u) of this function is

P_θ(u) = −log[I_θ(u)/I_o], (1.20)

where I_o is the X-ray source intensity and I_θ(u) is the intensity registered by the receivers. The set of P_θi(u) projections can be obtained by rotating the object or the array of transmitters and receivers by discrete angles θ = θ_i. The distribution of the attenuation coefficient g(x, y) is usually reconstructed from the measured P_θi(u) projections, using the CBP method [127]. It enables one to estimate the spatial distribution of the inner physical parameters of the target.
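The content of Eq. (1.20) is the Beer–Lambert law: the detector intensity decays exponentially with the line integral of the attenuation coefficient, so taking the logarithm (assumed natural here) recovers that integral. A minimal sketch, with an arbitrary toy attenuation profile along one ray:

```python
import numpy as np

# Beer-Lambert attenuation along one ray and the projection of Eq. (1.20).
dl = 0.01                                   # sample spacing along the ray, cm
g_along_ray = 0.2 + 0.1 * np.sin(np.linspace(0, np.pi, 500))   # g in 1/cm

line_integral = np.sum(g_along_ray) * dl    # integral of g along the ray
I_o = 1.0                                   # source intensity
I_received = I_o * np.exp(-line_integral)   # detector reading

# Eq. (1.20): the log-projection recovers the line integral of g
P = -np.log(I_received / I_o)
assert np.isclose(P, line_integral)
```

Each detector of the array measures one such ray, so a row of detectors yields the full projection P_θ(u) at the current angle θ.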
It will be shown below that a radar can register the P_θ(v) signal. For a given θ angle, its intensity is proportional to the scattering density ρ_θ(u, v) integrated along the u- and v-coordinates, that is, it is a tomographic projection along the v-axis. So the P_θ(v) function is a 1D function of the variable v, with the parameter θ defining the projection orientation. One can see that there is an essential difference between X-ray tomography and synthetic aperture radar (SAR) imaging. In the latter, the linear integral used to obtain a projection is taken in the direction normal to the microwave propagation, whereas in X-ray tomography it is taken along the X-ray propagation direction.

It will be shown in Chapter 6 that the Doppler and holographic methods of SAR signal processing can provide such projections. Another important specificity is that these projections include a phase factor to describe the time of the double path of the signal between a target and a radar antenna. Thus, a projection produced by SAR is a coherent tomogram that carries much more information about a target. This is especially evident when one uses holographic projections (see Chapter 6). On the other hand, a tomographic processing of projections is capable of reconstructing the arrangement of scatterers on a target, or, in fact, its shape.
1.4 The principles of microwave imaging

The past decade has witnessed an ever increasing interest in radars with a very high resolving power. For example, the ERS-1 and ERS-2 radars (side-looking synthetic aperture radars (SARs) of the European Space Agency [62]) provide microwave imagery of the earth surface with a resolution of 25 m × 25 m in the azimuth–range coordinates. An earth area of 100 km × 100 km (100 km is the radar swath width) is represented by 1.6 × 10⁷ pixels. Modern ground radars have large antenna arrays with an aperture of about 10⁴–10⁵ λ₁, where λ₁ is the radar wavelength. They provide an angular resolution of 10⁻⁴–10⁻⁵ rad [129], so the radar vision field can be subdivided into 10⁴–10⁵ beams.

A radar with a linear and angular resolution much higher than that of TV equipment (7 × 10⁵–10⁶ pixels) is capable of producing microwave images of extended targets (land areas and water surfaces) and complex objects (aircraft, spacecraft). So it is reasonable to give a definition of a microwave image. At present, there is no generally accepted definition, so we suggest the following formulation. A microwave image is an optical image whose structure reproduces, on a definite scale, the spatial arrangement of scatterers ('radiant' points) on a target illuminated by microwave beams. In addition to the arrangement, scatterers are characterised by a certain radiance. It should be emphasised that the microwave beams can produce 3D images, whereas the visible range of conventional optical systems gives only 2D images. Available methods of microwave imaging can be grouped into three classes:

• direct methods using real apertures;
• methods employing synthetic apertures;
• methods combining real and synthetic apertures.

Imaging by direct methods can, in turn, be performed by real antennas or antenna arrays. Real antennas were used in the early years of radar history. An earth area was viewed by means of circular scanning or sector rocking of the antenna beam in the azimuthal plane. Such systems were termed panoramic or sector circular radars. Modern panoramic radars use 50–100 λ₁ apertures and their resolution is low. Since the application of airborne panoramic antenna arrays is a hard task, the only way to increase the resolution is to use the millimetre wavelength range. One is faced with a similar problem when dealing with a side-looking real antenna mounted along the aircraft fuselage. Such antennas may be as long as 10–15 m; at the wavelength λ₁ = 3 cm, their angular resolution is less than 10 min of arc and the linear resolution on the earth surface is a few dozen metres, which is too low for some applications. For this type of antenna, the problem of increasing the aperture size was solved in a radical way: by replacing a real aperture with a synthesised aperture.
Consider thepotentialitiesof antennaarraysfor aerial surveyof theearthsurface
andforgroundimagingof targetsflyingatlowaltitudes. Supposewearetodesignan
Basicconceptsof radar imaging 21
antennaarray for aircraft imaging. Thetarget hasacharacteristic sizeD andisillu-
minatedbyacontinuousradar pulse. Then, accordingtothesamplingtheorem[103],
theechosignal functionintheaperturereceivercanbedescribedbyaseriesof records
recordedat theintervals
δL = λ₁R/D, (1.21)

where R is the distance to the target. The aperture size necessary for getting a desired resolution Δl on the target can be defined in terms of Abbe's formula [131]:

Δl = λ₁/(2 sin(α/2)) = λ₁R/L, (1.22)

where α is the aperture angle and L is its length.
The total number of receivers on an aperture of length L is

N = L/δL = DL/(λ₁R). (1.23)

With Eq. (1.22), we get

N = D/Δl. (1.24)
Let us illustrate this with a particular problem. Suppose we have λ₁ = 10 cm, R = 600 km, D = 20 m, and Δl = 1 m. Then we get L = 60 km, δL = 3 km and N = 20. A planar aperture of L × L in size must contain n = N² = 400 individual receivers.
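The arithmetic of this example is easy to check; the short sketch below (plain Python, variable names ours, not part of the original treatment) simply evaluates Eqs (1.21)–(1.24) with the quoted values:

```python
# Sketch of the array-sizing example of Eqs (1.21)-(1.24); the numbers
# (wavelength, range, target size, resolution) are those quoted in the text.
lam = 0.10       # wavelength lambda_1, m
R = 600e3        # distance to the target, m
D = 20.0         # characteristic target size, m
dl = 1.0         # desired resolution on the target, m

L = lam * R / dl          # aperture length from Abbe's formula (1.22)
dL = lam * R / D          # receiver spacing from the sampling theorem (1.21)
N = L / dL                # receivers along one dimension, (1.23) = D/dl, (1.24)

print(L, dL, N, N**2)     # 60 km, 3 km, 20 receivers, 400 for a planar aperture
```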
This example shows that the applicability of direct imaging using large antenna arrays is quite limited. Nevertheless, one of these techniques, employing a radio camera designed by B. D. Steinberg, is of great interest [129]. The radio camera is based on a pulse radar with a large real antenna array and an adaptive beamforming (AB) algorithm. The principal task is to obtain a high resolution with a large aperture, avoiding severe restrictions on the arrangement of the antenna elements. The operation of a self-phasing algorithm requires the use of an additional external phase-synchronising radiation source with known parameters, which could generate a reference field and would be located near the target. The radio camera provides an angular resolution of 10⁻⁴–10⁻⁵ rad [129], and the image quality is close to that of optical systems. There is one limitation – the radio camera has a narrow field of vision. But still, it may find a wide application in radar imaging of the earth surface, in surveying aircraft traffic, etc.
To summarise, direct real aperture imaging of remote targets at distances of hundreds and thousands of kilometres is practically impossible.
We turn now to methods employing a synthesised aperture. The idea of aperture synthesis, born during the designing of a side-looking aperture radar [32,74,86], was to replace a real antenna array with an equivalent synthetic antenna (Fig. 1.8). An antenna with a small aperture is to receive consecutive echo signals and make their coherent summation at various moments of time. For a coherent summation to be made, the radar must also be coherent, namely, it should possess a high transmitter
Figure 1.8 Synthesis of a radar aperture pattern: (a) real antenna array and (b) synthesised antenna array
frequency stability and have a reference voltage of the stable frequency to compare echo signals. We shall see below that a reference voltage is similar to a reference wave in holography, with the only difference that the 'wave' is created in the receiver by the voltage of a coherent generator.
Under the conditions described above, the echo signals received by a real antenna are saved in a memory unit as their amplitudes and phases. When an aircraft flies over an earth area x = L_s, the signals are summed up at the moment T_s = x/V (the final moment of synthesis), where V is the track velocity of the aircraft. As
a result of coherent signal processing, which is similar to the processing by a real antenna (Fig. 1.8(a)), a synthetic aperture pattern θ_s similar to a real aperture pattern θ_r is formed. Thus, the real aperture length L_s is replaced by the synthesised aperture length x (x = L_s). The width of this aperture pattern is

θ_s = λ₁/(2x). (1.25)
Owing to its large size, a synthetic aperture can provide very narrow patterns, so the track range resolution

δx = θ_s R, (1.26)

where R is the slant range to the target, may be very high even at large distances. To illustrate, if the synthetic aperture length is x = 400 m and λ₁ = 3 cm, the resolution may be as high as δx = 6 m at R = 160 km.
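Eqs (1.25) and (1.26) with these quoted numbers can likewise be checked in a few lines (Python, variable names ours):

```python
# Sketch of the synthetic-aperture resolution example, Eqs (1.25)-(1.26);
# the values are those quoted in the text.
lam = 0.03                # wavelength lambda_1, m
x = 400.0                 # synthetic aperture length, m
R = 160e3                 # slant range, m

theta_s = lam / (2 * x)   # synthetic aperture beamwidth (1.25), rad
dx = theta_s * R          # track range resolution (1.26), m

print(theta_s, dx)        # 3.75e-05 rad and 6.0 m
```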
Similar principles apply to a stationary ground radar and a moving target. If one needs to obtain a high angular resolution, one can make use of the so-called inverse aperture synthesis. We shall show in Chapter 2 that the resolution on the target is then independent of the distance to it but is determined only by the radar wavelength and the synthesis angle. As a result, one can obtain a very high angular resolution and reconstruct the arrangement of the scatterers into a microwave image.
Thus, current approaches to microwave imaging, based on direct and inverse synthesis of the aperture, provide 2D images which are structurally similar to optical images. Besides, there are methods combining both approaches. They apply a real, say, phased aperture and a synthetic aperture along the aircraft track. These techniques also produce images similar in structure to optical images [2]; they will be discussed in detail in Chapter 3. However, there are certain differences between the two types of 2D images. We summarise the most important ones below.
1. The wavelengths in the microwave range are 10³–10⁶ times longer than in the visible range, and this determines an essential difference in the scattering and reflection by natural and man-made targets. In the visible range, the scattering by man-made targets is basically diffusive, and it can be observed when the surface roughness is of the order of a wavelength. This fact allows a target to be considered as a continuous body. In the microwave range, the picture is quite different because there is no diffusion. The signals are reflected by scattering centres, corner structures and specular surfaces. For this reason, a microwave image of a man-made target is discrete and is made up of 'dark' pixels and those produced by the strong reflectors we mentioned above. A good example is the microwave image of an aircraft that was obtained in Reference 130. Reflection by natural targets produces similar images. However, the reflection spectrum of the earth surface contains an essential diffusion component.
2. For these reasons, the dynamic range of microwave images varies between 50 and 90 dB, while it rarely exceeds 20 dB in optical images, reaching the value of 30 dB in bright sunlight.
3. The quality of an image does not depend on the natural luminosity of a target and depends but slightly on weather conditions.
4. Image quality strongly depends on the geometry of the earth region to be imaged, especially its slant angles, roughness and bulk features in the surface layer. So microwave imaging is used for all-weather mapping, soil classification, detection of boundaries of background surfaces, etc. There is no unified optimal angle (in the vertical plane) for viewing geological structures, and the best values should be adjusted to the local topography. For mountainous and undulated reliefs, for example, a small radiation incidence relative to the normal is preferable, while the imaging of plains requires the use of large incidence angles, which increase the sensitivity to surface roughness. For this reason, images produced by airborne SAR may be inadequate radiometrically (speckle noise) resulting from a large variation in the incidence across a swath because of a wide aperture pattern. Space SARs possess an approximately constant radiation incidence across a swath, so there is no speckle on the image.
5. The density of blackened regions on a negative depends significantly on the dielectric behaviour of the surface being imaged, in particular, on the presence of moisture, both frozen and liquid, in the soil.
6. The microwave range gives the opportunity to probe subsurface areas. For example, the microwave images of the Sahara desert obtained by a SIR-A SAR showed the presence of dried river beds buried under the sands, which were invisible on the desert surface. This opens up new opportunities for archaeological surveys. It has been demonstrated experimentally that the probing radiation depth in dry sand may be as large as 5 m. Besides, a sand stratum possessing a low attenuation is found to enhance images of subsurface roughness due to refraction at the air–soil interface. This effect is particularly strong for horizontal polarisation at large incidence angles.
7. The specific propagation pattern of the long wavelengths in the microwave range provides quality imagery of lands covered with vegetation.
8. The interaction of subwater phenomena such as internal waves, subsurface currents, etc., with the ocean surface allows imaging of the bottom topography and various subwater effects.
9. The use of moving target selection allows one to make precise measurements of the target's radial velocity relative to the SAR.
10. An important factor in imagery is the proper choice of radiation polarisation.
11. Quite specific is the imaging of urban areas and other anthropogenic targets. This is due to a large number of objects with a high dielectric permittivity (e.g. metallic objects), surface elements possessing specular reflection, resonance reflectors and objects with horizontal and vertical planes that form corner reflectors. The result of the latter is the following effect: streets parallel to the SAR carrier track produce white lines on the image (the positive), while streets normal to the track produce dark lines. Moreover, the presence or absence on the image of some linear elements of the radar scene and an average density of blackening of the whole image depend on the azimuthal angle, that is, the angle made by the SAR
beam in the plane tangential to the earth surface. This is a serious obstacle to the analysis of images of urban areas.
12. An image contains speckle noise associated with the coherence of the imaging process.
To conclude, a microwave radar image may be 3D if it is recorded by holographic or tomographic techniques (Chapters 5 and 6, respectively).
Chapter 2
Methods of radar imaging
2.1 Target models
All radar targets can be classified into point and complex targets [138]. A point target is a convenient model object commonly used in radar science and practice to solve certain types of problems. It is defined as a target located at distance R from a radar at the viewing point '0', which scatters the incident radar radiation isotropically. For such a target, the equiphase surface is a sphere with the centre at '0'. Suppose a radar generates a wave described as

f(t) = a(t) exp j[ω₀t + Φ(t)],

where f₀ = ω₀/2π is the carrier frequency, while a and Φ are the amplitude and phase modulation functions imposed on the carrier.
A point target located at distance R creates an echo signal

g(t) = σf(t − 2R/c) = σa(t − 2R/c) exp j[ω₀(t − 2R/c) + Φ(t − 2R/c)], (2.1)
where σ is a complex factor including the target reflectance and signal attenuation along the track.
The Doppler frequency shift is implicitly present in the variable R. If we assume that the radial velocity v₁ is constant, we shall have

R = R₁ + v₁t, (2.2)

where R₁ is the distance to the target at the initial moment of time t = 0.
Equations (2.1) and (2.2) describe a simple model target to be further used for the analysis of the aperture synthesis and imaging principles.
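A minimal numerical sketch of this point-target model may be helpful. The pulse parameters below are illustrative only (they are not taken from the text), and the phase modulation Φ(t) is set to zero for brevity:

```python
# Sketch of the point-target model of Eqs (2.1)-(2.2): a target receding at
# constant radial velocity v1 delays the transmitted waveform and imposes a
# Doppler shift through the time-varying range R(t). All numbers are
# illustrative; the phase modulation Phi(t) is taken as zero.
import cmath

c = 3e8                 # speed of light, m/s
f0 = 10e9               # carrier frequency, Hz (assumed for illustration)
w0 = 2 * cmath.pi * f0
sigma = 0.5             # complex reflectance/attenuation factor (here real)
R1, v1 = 150e3, 200.0   # initial range, m; radial velocity, m/s

def a(t):               # rectangular amplitude modulation, 1 us pulse
    return 1.0 if 0.0 <= t <= 1e-6 else 0.0

def echo(t):            # Eq. (2.1) with R taken from Eq. (2.2): R = R1 + v1*t
    tau = t - 2 * (R1 + v1 * t) / c
    return sigma * a(tau) * cmath.exp(1j * w0 * tau)

# The Doppler shift hidden in R(t): f_d = -2*v1*f0/c for a receding target
print(-2 * v1 * f0 / c)   # about -13.3 kHz
```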
In practice, most radar targets refer to the class of complex targets. In spite of a great variety of particular targets, we can offer a common criterion for their classification. This criterion is based on the relationship between the maximum target size and the radar resolving power in the coordinate space of the parameters R, α, β and Ṙ, which are the range, the azimuth, the elevation angle and the radial velocity of the target, respectively. An additional important parameter is the number of scattering centres (scatterers). In accordance with this criterion, all complex targets can be subdivided into extended compact targets and extended proper targets. A target is referred to as extended compact if it has a small number of scatterers, its linear and angular dimensions are much smaller than the radar resolution element, and the difference between the radial velocities of the extremal scatterers is appreciably smaller than the velocity resolution element. What is important is that this definition also holds for targets located at large distances. On the other hand, a target which has a size much larger than the radar resolution element and a large number of scatterers should be referred to as extended proper. Earth and water surfaces are examples of such targets.
We shall first discuss extended compact targets (airplanes, spacecraft, etc.). In the high-frequency region, these targets should be represented as a set of scatterers, or radiant points. The mathematical model of an extended compact target, based on the concept of scatterers, has the form [138]:

U = Σ_{m=1}^{M} √σ_m exp(jΦ_m), (2.3)

where M is the number of individual scatterers, σ_m is the radar cross-section (RCS) of the mth scatterer and Φ_m is the phase of the pulse reflected by the mth scatterer relative to that of the pulse reflected by the first scatterer. The value of σ_m is to be found for a particular polarisation.
Equation (2.3) is usually used for monostatic incidence in the optical region (high-frequency approximation). It can also be used to find the relation between monostatic and bistatic scattering at the same target aspect α. For this, the phases of the scatterers, Φ_m, should be expressed as a sum of two terms [69]:

Φ_m = 2kZ_m(α) cos(β/2) + ξ_m, (2.4)

where Z_m(α) is the projection of the distance between the mth and the first scatterers onto the bisectrix of the bistatic angle, k = 2π/λ₁ is the wavenumber of the incident wave, β is the bistatic angle, and ξ_m is the residual phase contribution of the mth scatterer, including the contribution of the creeping wave.
For scatterers retaining their position with changing bistatic angle, the mathematical model is

U = Σ_{m=1}^{M} √σ_m exp[j(2kZ_m(α) cos(β/2) + ξ_m)]. (2.5)
Equation (2.5) allows us to introduce the concept of equivalence of mono- and bistatic scattering and to define conditions for this equivalence. The theorem of R. E. Kell states that (1) if the total field can be written as a sum of the fields of all scatterers and (2) if the quantity √σ_m, the Z_m-coordinate and the residual phase ξ_m are all independent of the bistatic angle β in a particular range of β values at any given aspect α, then the total bistatic field for the angles α and β is equal to the monostatic scattering field measured along the bisectrix of the β angle at a frequency reduced by a factor of cos(β/2). This theorem will be used in Chapter 5 to justify the method of inverse aperture synthesis for recording and reconstruction of Fourier holograms.
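Kell's equivalence can be illustrated numerically. In the sketch below the scatterer coordinates, amplitudes and residual phases are invented for illustration; it merely checks that the bistatic sum (2.5) at angle β coincides with the monostatic sum evaluated at the wavenumber reduced by the factor cos(β/2):

```python
# Numerical sketch of the scatterer-sum model (2.3)-(2.5) and of Kell's
# monostatic/bistatic equivalence. Scatterer data are made up for illustration.
import cmath, math

lam1 = 0.03                      # wavelength, m
k = 2 * math.pi / lam1           # wavenumber
Z = [0.0, 1.2, 2.7]              # projections Z_m onto the bistatic bisectrix, m
rcs = [1.0, 0.4, 0.25]           # sigma_m (RCS of each scatterer)
xi = [0.0, 0.3, -1.1]            # residual phases xi_m, rad

def field(k_eff, beta=0.0):
    """Total field U of Eq. (2.5); beta = 0 gives the monostatic case."""
    return sum(math.sqrt(s) * cmath.exp(1j * (2 * k_eff * z * math.cos(beta / 2) + x))
               for s, z, x in zip(rcs, Z, xi))

beta = math.radians(40)
bistatic = field(k, beta)                           # bistatic field at angle beta
monostatic_reduced = field(k * math.cos(beta / 2))  # monostatic, reduced frequency

print(abs(bistatic - monostatic_reduced))           # ~0: the two fields coincide
```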
The amplitude and polarisation characteristics of individual scatterers are of special interest for the understanding of diffraction phenomena in extended compact targets. A comparison of respective experimental and theoretical values should be based on precise scattering models substantiated by the physical theory of diffraction, namely, by the edge wave method (EWM) [137] or by the geometrical theory of diffraction (GTD) [70]. To illustrate, let us consider the field scattered by a perfectly conducting cylinder of finite length l and radius a oriented towards the transmitting antenna. According to the EWM [12], the horizontal and vertical field components in the far zone are

E_φ = (ia/2) E₀ₓ (e^{ikR}/R) Σ(ϑ),  E_ϑ = (ia/2) H₀ₓ (e^{ikR}/R) Σ̄(ϑ), (2.6)

where k is the wavenumber and ϑ is the angle between the viewing direction and the cylinder symmetry axis, π/2 ≤ ϑ ≤ π:
Σ(ϑ) = Σ(1) + Σ(2) + Σ(3), (2.7)
Σ̄(ϑ) = Σ̄(1) + Σ̄(2) + Σ̄(3), (2.8)
Σ(1) = f(1)[J₁(ζ) + iJ₂(ζ)] e^{ikl cos ϑ}, (2.9)
Σ(2) = f(2)[−J₁(ζ) + iJ₂(ζ)] e^{ikl cos ϑ}, (2.10)
Σ(3) = f(3)[−J₁(ζ) + iJ₂(ζ)] e^{−ikl cos ϑ}, (2.11)
ζ = 2ka sin ϑ,

J₁(ζ) and J₂(ζ) are the first- and second-order Bessel functions, respectively. Indices 1, 2 and 3 correspond to three scatterers on the cylinder (Fig. 2.1).
Similar expressions can be obtained for the functions Σ̄(1), Σ̄(2) and Σ̄(3) by replacing f(1), f(2) and f(3) by g(1), g(2) and g(3), respectively. The latter are
Figure 2.1 Viewing geometry for a rotating cylinder: 1, 2, 3 – scattering centres (scatterers)
defined as

f(1), g(1) = [sin(π/n)/n]{[cos(π/n) − 1]⁻¹ ± [cos(π/n) − cos((π − 2ϑ)/n)]⁻¹}, (2.12)
f(2), g(2) = [sin(π/n)/n]{[cos(π/n) − 1]⁻¹ ∓ [cos(π/n) − cos((3π − 2ϑ)/n)]⁻¹}, (2.13)
f(3), g(3) = [sin(π/n)/n]{[cos(π/n) − 1]⁻¹ ∓ [cos(π/n) − cos((π + 2ϑ)/n)]⁻¹}, (2.14)
n = 3/2,

where the upper sign refers to f and the lower sign to g.
The functions (2.7)–(2.14) can be used to calculate the scattering characteristics (the RCS diagram, the amplitude and phase scattering diagrams, etc.) for an experimental study of diffraction in an anechoic chamber (AEC). The last two diagrams, for example, can be found as the modulus and argument of the functions (2.7) and (2.8). However, the representation of the field as a sum of the fields re-transmitted by scatterers provides information on individual scatterers. Such characteristics are referred to as local responses [12]. The RCS diagrams for scatterers on a cylinder and
the E- and H-polarisations of the incident field can be written as

σ_n^E(ϑ) = πa²|Σ(n)(ϑ)|²,  σ_n^H(ϑ) = πa²|Σ̄(n)(ϑ)|², (2.15)
n = 1, 2, 3.
The phase responses of scatterers can be derived in the form of arguments of the complex-valued functions (2.9)–(2.11). A scattering model for a cylinder with bistatic incidence was designed in Reference 12 in the EWM approximation. Besides, it is shown in References 105 and 109 that the amplitude responses and the positions of scatterers on a target can be studied experimentally using images reconstructed from microwave holograms.
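As an illustration of how Eqs (2.9)–(2.15) might be evaluated numerically, the sketch below computes the local response of scatterer 1 for an invented cylinder geometry and viewing angle; the Bessel functions are evaluated from their integral representation so that the example has no external dependencies:

```python
# Sketch of an EWM local response, Eqs (2.9), (2.12) and (2.15), for one
# scatterer on a finite cylinder; the cylinder dimensions and viewing angle
# below are made up for illustration.
import cmath, math

def bessel_j(order, x, steps=2000):
    """J_n(x) = (1/pi) * integral over [0, pi] of cos(n*tau - x*sin(tau))."""
    h = math.pi / steps
    s = sum(math.cos(order * (i * h) - x * math.sin(i * h)) for i in range(1, steps))
    s += 0.5 * (math.cos(0.0) + math.cos(order * math.pi - x * math.sin(math.pi)))
    return s * h / math.pi

n = 1.5                               # wedge parameter of Eqs (2.12)-(2.14)
def f1(theta):                        # f(1) of Eq. (2.12), upper sign
    return (math.sin(math.pi / n) / n) * (
        1 / (math.cos(math.pi / n) - 1)
        + 1 / (math.cos(math.pi / n) - math.cos((math.pi - 2 * theta) / n)))

lam1, a, l = 0.03, 0.10, 0.60         # wavelength, radius and length, m
k = 2 * math.pi / lam1
theta = math.radians(120)             # viewing angle, pi/2 <= theta <= pi
zeta = 2 * k * a * math.sin(theta)    # argument of the Bessel functions

# Sigma(1) of Eq. (2.9) and the corresponding local RCS of Eq. (2.15)
sigma1 = f1(theta) * (bessel_j(1, zeta) + 1j * bessel_j(2, zeta)) \
         * cmath.exp(1j * k * l * math.cos(theta))
rcs1 = math.pi * a**2 * abs(sigma1)**2
print(rcs1)                           # local RCS of scatterer 1, square metres
```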
We now turn to models of extended proper targets. Such targets include:
• land surface;
• sea surface;
• large anthropogenic objects like urban areas and settlements;
• special standard objects for radar calibration.
An analysis of models of all of these targets would go far beyond the scope of this book. We give a brief survey of scattering models of the sea surface in Chapter 4, including a model of a partially coherent extended proper target, which is used in the analysis of microwave radar imagery.
It should be noted that extended compact targets may also be partially coherent (Chapter 7). In either case, these targets produce parasitic phase fluctuations which perturb radar imaging coherence.
Target models are used for several purposes: to justify the principles of inverse aperture synthesis, to interpret microwave images, to obtain local RCS of scatterers on standard objects, and to calibrate measurements made in AECs.
2.2 Basic principles of aperture synthesis
We have mentioned in Chapter 1 that the use of a synthetic aperture is necessary if one needs to obtain a high angular resolution of targets at large distances. It has been shown by some researchers [73,109] that the aperture synthesis is, in principle, possible for any form of relative motion of a target and a real antenna; what is important is that the target aspect should change together with the relative displacement.
Today there are two basic methods of aperture synthesis – direct and inverse. Direct synthesis can be made by scanning a relatively stationary target by a real antenna (Fig. 2.2(a)). The target is on the earth surface and the antenna is located on an aircraft. Radar systems with direct antenna synthesis are known as side-looking synthetic aperture radars (SARs). The authors of Reference 85 have suggested for them the term quasi-holographic radars (Chapter 3). Methods of aperture synthesis using linear translational motion of a target or its natural rotation relative to a stationary ground antenna are called inverse methods, and radars based on such methods are
Figure 2.2 Schematic illustrations of aperture synthesis techniques: (a) direct synthesis implemented in SAR, (b) inverse synthesis for a target moving in a straight line and (c) inverse synthesis for a rotating target
known as inverse synthetic aperture radars (ISARs) (Fig. 2.2(b) and (c)). There are also combined approaches to field recording. For example, a front-looking holographic radar (Chapter 3) combines direct synthesis along the track and transversal synthesis with a one-dimensional (1D) real antenna array (Fig. 3.4). A spot-light mode of synthesis is also possible: it uses both the linear movement of an airborne antenna and its constant axial orientation to a ground target (Fig. 3.11). Radars based on this principle are known as spot-light SAR [100].
Finally, ground radars operating in the inverse synthesis mode and viewing a linearly moving target can combine a real phased antenna array and aperture synthesis.
This method was suggested by B. D. Steinberg [129] to employ adaptive beamforming (AB) together with aperture synthesis (ISAR + AB).
In any method of aperture synthesis, the radar azimuthal resolution is determined by the aperture angle β₀ = L_s/R. The linear resolution along the angle coordinate is δl = λ₁/(2β₀). It should be emphasised [73] that rotation of a synthetic antenna pattern (SAP) does not shift the target phase centre and, therefore, does not synthesise the aperture. For this reason, one cannot increase the angular resolution by rotating a real antenna, in contrast to the target rotation.
2.3 Methods of signal processing in imaging radar

Imaging radar signal processing can be considered from different points of view. Since there is an essential difference between the direct and inverse modes of aperture synthesis, the processing techniques should be described individually for each type of radar.

2.3.1 SAR signal processing and holographic radar for earth surveys

The SAR aperture synthesis by coherent integration is treated in terms of:
• the antenna approach [74];
• the range-Doppler approach [85,140];
• the cross-correlation approach [85];
• the holographic approach [85,143];
• the tomographic approach [100].
The use of a variety of analytical techniques in radar imaging leads to various processing designs and physical interpretations of some of its details.
The first four approaches provide a fairly complete analysis of the effects of SAR parameters on its performance characteristics, and the results are generally consistent. Each approach, however, enables one to see the image recording and reconstruction in a new light, because each has its own merits and demerits. In this book, we largely follow the holographic approach to the performance analysis of various SAR systems, which involves the theories of optical and holographic systems. According to E. N. Leith, one of the pioneers of optical and microwave holography, a holographic treatment of SAR performance has proved most fruitful. The recording of a signal is regarded as that of a reduced microwave hologram of the wave field along the azimuth, that is, along the flight track. Illumination of such a hologram by coherent light reconstructs the optical wave field, which is similar to the recorded microwave field on a certain scale. A schematic diagram illustrating the holographic approach to SAR signal recording and processing is presented in Fig. 2.3. For a point target, for instance, an optical hologram is a Fresnel zone plate. When the plate is illuminated by coherent light, the real and virtual images of the point target are reconstructed (Fig. 3.1).
Thus, a microwave image of a point target can be obtained directly owing to the focusing properties of a Fresnel zone plate. The processing optics in that case is
Figure 2.3 The holographic approach to signal recording and processing in SAR: 1 – recording of a 1D Fraunhofer or Fresnel diffraction pattern of the target field in the form of a transparency (azimuthal recording of a 1D microwave hologram), 2 – 1D Fourier or Fresnel transformation, 3 – display
necessary only to compensate for various distortions inherent in SAR: anamorphism and the difference in the azimuth and range scale factors. Optical processing of SAR signals was first analysed in terms of holography [86].
The holographic approach will be used in Chapter 3 to describe SAR as a system for combined recording and reconstruction of microwave holograms. A general scheme of this process is shown in Fig. 2.3. The reference signal here is a heterodyne coherent pulse, whose role is actually much more important (see below).
Holographic SAR for surveying the earth surface (Chapter 3) uses the cross-correlation [72] and holographic approaches. The scheme illustrating the holographic principle is similar to that in Fig. 2.3, with the only difference that one deals here with 2D microwave holograms.
The tomographic approach is applied in descriptions of aperture synthesis by spot-light SAR (Chapter 3).
2.3.2 ISAR signal processing

Methods of inverse aperture synthesis have been discussed in a number of publications. The treatments involved are:
• a range-Doppler algorithm [13,21,24];
• a circular convolution algorithm (CCA) [94];
• correlated processing [13];
• extended coherent processing (ECP) [13];
• polar format processing [13];
• holographic processing [109];
• tomographic processing [9,106].
A serious limitation of the range-Doppler algorithm is its applicability only to a synthesis made at relatively small angle steps, which is an obstacle to achieving high resolutions. The restrictions on the time intervals of coherent processing were formulated by D. A. Ausherman et al. [13]. Any attempt to overcome these restrictions leads to displacement of individual scatterer images into adjacent resolution elements and, hence, to image degradation. The range-Doppler algorithm has been used in SAR for microwave imaging of aircraft [8]. Preliminarily, the radial movement of the target is compensated for in all range channels. The development of new processing algorithms based on large angle steps required the use of spherical coordinates (polar coordinates in the 2D case) instead of the Cartesian coordinates of the range–cross-range type. One of these is the CCA, permitting the limit angle step of 2π with a precise aperture focusing over the whole target space [94]. Moreover, it is applicable to the processing of both narrow- and wide-band radar signals. The conditions for viewing real targets differ from the conditions in which the CCA operates. First, discrete records for the angle steps of the target aspect variation, recorded at a constant repetition frequency, are not equidistant. Second, the angle between the radar line of sight (RLOS) and the rotation axis changes during the viewing. The first obstacle can be bypassed by interpolating the radar data. The second one inevitably leads to the necessity to consider a 3D problem. Attempts at using this algorithm, like other 2D algorithms, to process 3D data result in distorted images [8].
When applied to narrow-band signals, the CCA has another disadvantage: the whole ensemble of radar data must be processed simultaneously. So this algorithm should be employed only in measuring test areas and in AECs.
Correlated processing provides well-focused images of targets of any size, and the time intervals of coherent processing may be of arbitrary duration. On the other hand, its computational efficiency is quite low [8].
Both algorithms require special measures to compensate for the phase shift due to the radial displacement of the target.
Extended coherent processing is based on coherent summation of microwave images, each of which is formed by a range-Doppler algorithm at a small angle step. The application of this technique increases the processing rate by approximately an order of magnitude with a good image quality for a fairly long processing time.
Variable movement of a target relative to the RLOS necessitates the use of different algorithms for the synthesis of the final image from partial ones. So algorithms for ECP are subdivided into those for wide-angle imaging and those for multiple target rotations. Target aspects suitable for wide-angle imaging are chosen when a ground radar views a spacecraft stabilised along three axes or rotating around its centre of mass. Imaging by multiple rotations has the following specificity: when a space target is stabilised by rotation, the angle step remains the same in every consecutive rotation of the target around its axis. In its latter modification, the ECP algorithm is used for 3D and stroboscopic microwave imaging [13].
Polar format processing is another effective way to overcome the scatterers' movement through the resolution elements. It is based on the representation of radar data in a 3D frequency space.
In our opinion, a very promising way of inverse aperture synthesis is by holographic processing [109,146]. The possibility of using a holographic approach was first suggested by E. N. Leith [85]. Not only does it provide a new insight into the processes occurring in inverse synthesis, but it also helps to find novel designs of recording and reconstruction devices.
The schematic diagram of the holographic approach to ISAR signal recording and reconstruction is similar to that shown in Fig. 2.3. The first step is to record a 1D quadrature or complex microwave Fourier hologram (the diffraction pattern of the target field) (Section 2.3.1). The reference signal is a coherent heterodyne pulse. The second step is the implementation of a 1D Fourier transform. The next step is the image representation.
Tomographic processing can be performed using one of the three ways of image reconstruction:
• reconstruction in the frequency region [9];
• reconstruction in the space region by using a convolution back-projection algorithm [9];
• reconstruction by summation of partial images (Chapter 6).
The tomographic approach to ISAR analysis will be discussed in Section 2.4.2 and in Chapter 6.
2.4 Coherent radar holographic and tomographic processing

2.4.1 The holographic approach

Direct hologram recording, commonly used in the optical wavelength range, finds a limited application in the microwave range because there is no suitable microwave counterpart of the photographic plate. So the processing can be made by either of the two methods – direct or inverse aperture synthesis (Section 2.2). These techniques allow the recording of two types of hologram. One is similar to an optical hologram, while the other has no optical counterpart.
Suppose a exp(iΦ) is a target wave and a₀ exp(iΦ₀) is a reference wave. In the first case a quadratic microwave hologram is formed, which is described by the following equation:

H₁(x, y) = |a exp(iΦ) + a₀ exp(iΦ₀)|² = a₀² + a² + 2aa₀ cos(Φ − Φ₀). (2.16)

Such holograms can be recorded by a quadratic detector in the high- and medium-frequency ranges (Fig. 2.4(a) and (b), respectively), using a high-frequency reference
Figure 2.4 Synthesis of a microwave hologram: (a) quadratic hologram recorded at a high frequency, (b) quadratic hologram recorded at an intermediate frequency, (c) multiplicative hologram recorded at a high frequency, (d) multiplicative hologram recorded at an intermediate frequency, (e) quadrature holograms, (f) phase-only hologram
wave. In the second case a multiplicative hologram is formed [109], which is defined as

H₁(x, y) = Re[a exp(iΦ) · a₀ exp(−iΦ₀)] = aa₀ cos(Φ − Φ₀). (2.17)

The latter can also be formed at high and medium frequencies (Fig. 2.4(c) and (d), respectively).
In either case it is possible to record a quadrature microwave hologram

H₂(x, y) = a₀² + a² + 2aa₀ sin(Φ − Φ₀), (2.18)
H₂(x, y) = aa₀ sin(Φ − Φ₀). (2.19)
A pair of quadrature microwave holograms, (2.16) and (2.18) or (2.17) and (2.19), is recorded by using identical reference waves phase-shifted by π/2 relative to each other (Fig. 2.4(e)).
Optical recording of the bipolar functions (2.17) and (2.19) for optical reconstruction requires the use of the reference level H_r, to be found from the condition

H_r ≥ max{|H₁(x, y)|, |H₂(x, y)|}, (2.20)

and the linearity of the microwave recording. Then we arrive at the equations

H₁(x, y) = H_r + aa₀ cos(Φ − Φ₀), (2.21)
H₂(x, y) = H_r + aa₀ sin(Φ − Φ₀). (2.22)
Each pair of quadrature holograms makes up a complex microwave hologram:

H(x, y) = H₁(x, y) + iH₂(x, y). (2.23)

The quantity i in Eq. (2.23) is introduced at the reconstruction stage, following the recording of only two quadrature holograms, say, (2.21) and (2.22). But this form of a complex hologram equation makes it possible to consider this pair as an entity, which is especially convenient for an analytical description of the reconstruction process. A complex microwave hologram is a means of registration of the total field scattered by a target. It will be shown later that this allows the reconstruction of a single image.
The designs shown in Fig. 2.4(a)–(d) are largely used in laboratory and test set-ups, while radar stations use the design in Fig. 2.4(e). A typical microwave holographic receiver based on this design is shown in Fig. 2.5. In contrast to optical holography, the reference wave is produced by a coherent generator and phase-shifter 1 in the receiver.
This is therefore a radically new way, as compared to optical holography: the reference wave is created by electrical modulation. We call it an artificial reference wave. Its incidence angle can be simulated by varying the phase with phase-shifter 1 operating synchronously with the movement of the real radar antenna. The incidence angle α to the carrier track (Fig. 2.6) can be simulated by changing the phase as

Φ = 2πx sin α / λ₁, (2.24)

where x is the position of the real antenna during the aperture synthesis.
Microwave holograms can be classified in terms of the volume of recorded data on the target wave. If a hologram contains data on the wave amplitude and phase, it is said to be an amplitude–phase hologram. If the amplitude factor a(x, y) is neglected before the summation or multiplication of the target and reference waves, a hologram is said to be a phase-only hologram [109] (Fig. 2.4(f)).
To describe the fields of reconstructed images, one can conveniently use the Fresnel–Kirchhoff diffraction formula [121] employed in optical holography. So it is reasonable to classify holograms in terms of the phase fronts of fields induced by reference sources and diffracted by a target.
Figure 2.5 A block diagram of a microwave holographic receiver: 1 – reference field, 2 – reference signal cos(ω_0 t + ϕ_0), 3 – input signal A cos(ω_0 t + ϕ_0 − ϕ), 4 – signal sin(ω_0 t + ϕ_0) and 5 – mixer
Figure 2.6 Illustration for the calculation of the phase variation of a reference wave
A Fresnel microwave hologram is synthesised by registration of the interference pattern of interaction between plane or spherical reference waves and waves diffracted by a target, which have a spherical phase front in the hologram plane.

A Fraunhofer microwave hologram is formed by recording the interference pattern of plane or spherical reference waves interacting with diffracted waves having a plane phase front in the hologram plane.

A Fourier microwave hologram is formed by recording the interference pattern of interaction between the diffracted waves having a spherical front in the hologram plane and a spherical reference wave with a curvature radius equal to an average curvature radius of the waves coming from the target and propagating in the same direction.
Fresnel and Fraunhofer holograms have found application in SAR theory, while Fourier holograms are used in ISAR theory (Chapters 3 and 5).

Since the process of hologram synthesis implies that the radar is to be coherent, the question arises as to what requirements must be imposed on the coherence. Let us first define the concept of coherence in microwave radar theory. A signal is said to be coherent if it shows no abrupt changes in the basic frequency, or if such changes are small, of the order of 1–3° [14]. If the basic frequency changes are greater than these values, the signal reflected from a target is called partially coherent. This happens when the coherence is perturbed due to:

• an unstable frequency of the radar wave synthesiser or heterodyne;
• the effects of the target itself, say, of a sea surface (Chapter 4);
• a non-uniform motion of the aircraft, for example, yawing, pitching and banking (Chapter 7);
• the effects of the troposphere and ionosphere, such as sporadic changes in the wave propagation conditions (Chapter 8).
Within this definition, a continuous radiation is always coherent for a period of time when various instabilities in the transmitter performance can be neglected. When a radar operates in a pulse mode, coherence is determined by an unambiguous relation between the initial phase values of the carrier frequency of a train of pulses. The above definition of coherence also applies to radar signals with known phase jumps that can be avoided using coherent sensing. Since the first of the factors responsible for coherence instability is the most serious one, there was a suggestion to introduce in imaging theory the concept of frequency, rather than coherence, stability [87]. A comprehensive analysis of requirements on the frequency stability was made in SAR theory by R. O. Harger [55]. A simplified approach is considered in Reference 87. The latter will be discussed here in more detail in order to explain the physical mechanism of SAR instability. The treatment of this problem has yielded the following expression:

παT² ≤ (π/4)(cT/2R),  (2.25)

where α is the rate of linear frequency variation due to the instability of the radar generator, and T is the time for a pulse to reach the target at distance R and to come back. It is clear from Eq. (2.25) that the permissible phase error παT² is π/4 for the time T = 2R/c.
Therefore, Eq. (2.25) is the criterion for the coherence length in the holographing of reflecting targets; it should provide the frequency stability of the signal propagation for a time consistent with the scene depth (a full analogy with optical holography). Similar stability requirements can be imposed on coherent ISAR, in which coherence is preserved if the signal phase deviation due to the frequency instability is less than π/2. Then we have the expression

2πδf_c T ≤ π/2,  (2.26)
where δf_c is the deviation of the probing signal frequency for the time T. Neglecting the signal delay in the antenna-feeder waveguide, we get

δf_c ≤ c/8R,  (2.27)

using the concept of short-term instability

ε_f = δf_c/f_c,  (2.28)
where f_c is the radar carrier frequency. Then we have

ε_f = c/(8f_c R).  (2.29)
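Eqs (2.27)–(2.29) are easy to check numerically; in this sketch the carrier frequency (10 GHz) and range (150 km) are assumed example values, not figures from the text:

```python
# A small numeric check of Eqs (2.27)-(2.29).
c = 3.0e8          # speed of light, m/s

def max_frequency_deviation(R):
    """Eq. (2.27): largest tolerable carrier drift over the
    round-trip time T = 2R/c."""
    return c / (8.0 * R)

def short_term_instability(f_c, R):
    """Eq. (2.29): the same limit expressed as the relative
    instability eps_f = delta_f_c / f_c."""
    return c / (8.0 * f_c * R)

R, f_c = 150e3, 10e9
print(max_frequency_deviation(R))      # -> 250.0 (Hz)
print(short_term_instability(f_c, R))  # -> 2.5e-08
```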
The condition for a long-term frequency instability can be found from a coherent processing in the whole time interval of the synthesis, T_s, which varies with the type of hologram processing.
The frequency stability in modern radars is achieved with highly stable, mainly caesium atomic beam standards of time and frequency. The frequency standards provide a long-term instability over 1 h with a possible adjustment of about 10⁻¹²–10⁻¹⁴ and a 1 ns random component of the 24 h behaviour of the time scale [25]. To maintain the stability, modern radars use a phase-locked loop control [44]. The long-term instability requirements to coherent ISAR are very high. For example, in the Goldstone Solar System Radar (GSSR) for planet surveys [44] this parameter is about 10⁻¹⁵ for 1000 s, and the pulse-to-pulse instability is less than 1°. The GSSR project is designed for the observation of Mercury, Venus and Mars. In a LRIR (Long-Range Imaging Radar), the pulse-to-pulse instability is about 2–3° [20]. This radar is designed for observation of space objects.

To summarise the discussion of factors causing coherence instabilities in radars, the frequency stability maintained by frequency standards and loop frequency control can solve the problem of operation instabilities of a pulse generator or heterodyne. The other three causes of instability can be removed by special signal processing in the radar (Chapters 4 and 7).
2.4.2 Tomographic processing in 2D viewing geometry
It has been shown above that the signal processing in ISAR can be described in terms of Doppler frequencies, correlated processing, CCAs, etc. We believe that the most appropriate approach is tomographic processing, which allows focusing of a synthesised aperture over the whole target space and provides an image resolution restricted only by the diffraction limit [7,9,10]. Another advantage of this technique is the great scope it offers for optimisation of processing algorithms and devices. Consider a target being probed by a stationary coherent radar (Fig. 2.7), which radiates pulses with the carrier frequency f_c and the modulation function w(t):

s(t) = w(t) exp(j2πf_c t)  (2.30)
Figure 2.7 The coordinates used in target viewing
and measures the amplitude and phase of the complex envelope of an echo signal. The target is assumed to consist of a small number of independent scatterers, whose position relative to the centre of mass of the target O and the radar is defined by the respective vectors (Fig. 2.7). The target moves along an arbitrary trajectory, rotating around its centre of mass. The conditions for the far zone and a uniform field amplitude of the wave incident on the target surface facing the radar are fulfilled. The algorithm for the processing of the complex envelope of an echo signal, synthesised by the radar receiver, is

s_v(t) = ∫_V g(r̄_o) w(t − 2|r̄|/c) exp(−j2k_c|r̄|) dr̄_o,  (2.31)
where g(r̄_o) is the function of the target reflectivity and k_c = 2π/λ_c is a wave number corresponding to the wavelength of the radar carrier oscillation. Equation (2.31) allows the estimation of the reflectivity ĝ(r̄_o) of every scatterer.
The integration of Eq. (2.31) is made over the target space. With the condition for the far zone, the vector r̄ describing the position of an arbitrary scatterer relative to the radar can be substituted by its projection on the line of sight:

|r̄| = |r̄_a| + r̂,  (2.32)

where

r̂ = r̄_o·ū,   ū = r̄_a/|r̄_a|,  (2.33)

and ū is a unit vector coinciding with the line of sight and directed away from the target rotation centre towards the radar.
Generally, both terms of Eq. (2.32) vary during the viewing. However, the contribution to the imaging is made only by the variation in the relative range r̂. On the contrary, the range variation of the target's centre of mass |r̄_a| produces distortions in the image. By substituting Eq. (2.32) into Eq. (2.31) and regrouping the terms for the complex envelope distortion, we obtain

s_v(t) = ∫_V g(r̄_o) w(t − 2|r̄_a|/c − 2r̂/c) exp(−j2k_c|r̄_a|) exp(−j2k_c r̂) dr̄_o.  (2.34)
It follows from the analysis of Eq. (2.34) that the correction of the received signal is to maintain a constant delay τ = 2|r̄_a|/c and to multiply the signal by the phase factor exp(j2k_c|r̄_a|). After making the correction, the signal can be written (assuming τ = 0) as

s_v(t) = ∫_V g(r̄_o) w(t − 2r̂/c) exp(−j2k_c r̂) dr̄_o.  (2.35)
The exponential phase factor in Eq. (2.35) defines the coherence degree of the whole imaging system (the radar and the processing system) over the whole band-limited frequency spectrum. The coherence instability due to, say, an inaccurate compensation for the target radial movement leads to a poorer resolution. The possibility of imaging by a tomographic algorithm is, in principle, preserved. Let us process a signal in the frequency domain. The Fourier transform of a video signal corresponding to the change in the target aspect relative to the radar is

S(f) = F{s(t)} = W(f) ∫_V g(r̄_o) exp[−j2(k_c + k)r̂] dr̄_o,  (2.36)
where W(f) = F{w(t)} is the modulation function spectrum, k = 2π/λ is the wave number to be defined in the frequency spectrum, and F{·} is a 1D Fourier transform operator. Next, we perform a standard range processing to obtain the resolution along the line of sight in a filter with the transmission characteristic K(f) [18]:

S(f) = H(f) ∫_V g(r̄_o) exp[−j2(k_c + k)r̂] dr̄_o  (2.37)

with H(f) = W(f)K(f).
The range processing can also be made in the time domain of the receiver using a filter with the impulse response h(t) = F⁻¹{K(f)}, where F⁻¹{·} is the inverse Fourier transform (IFT) operator.
Note that the compensation for the target radial displacement can also be made by the processor (after the transformation of Eq. (2.37)) by multiplying the video signal spectrum by the phase factor exp[j2(k_c + k)|r̄_a|]. A particular method of processing requires a proper design of the receiver and processor.
With Eq. (2.33), expression (2.37) can be presented as a 3D Fourier transform of the target reflectivity:

S(f) = H(f) ∫_V g(r̄_o) exp[−j2(k_c + k)ū·r̄_o] dr̄_o,  (2.38)

where (k_c + k) is the 3D frequency vector modulus.
To calculate the target reflectivity, it is necessary to make an inverse transformation of the Fourier function over the respective volume:

ĝ(r̄_o) = F⁻¹{S(f)} = g(r̄_o) ∗ h(r̄_o),  (2.39)

where ∗ denotes convolution, h(r̄_o) is the processing system response from a single point target in the space frequency domain, h(r̄_o) = F⁻¹{H(f)}, and H(f) is a 3D aperture function.

It is clear from Eq. (2.39) that the value of ĝ(r̄_o) is a distorted representation of the target reflectivity g(r̄_o). The distortion is largely due to the limited frequency spectrum and the small angle step of the aspect variation.
Equation (2.39) can be transformed in the 3D frequency domain. More often, however, one needs 2D images, which can be obtained using an appropriate 2D data acquisition design (Fig. 2.8). Equation (2.38) then has the form:

S(f) = H(f) ∫∫ g(u, v) exp[−j2(k_c + k)v] du dv.  (2.40)
Keeping in mind that the function

P_θ(v) = ∫ g(u, v) du  (2.41)

represents the projection of the target reflectivity on the v-axis at the target aspect defined by the angle θ (Fig. 2.8), Eq. (2.40) can be written as

S_θ(f) = H(f) ∫ P_θ(v) exp[−j2(k_c + k)v] dv.  (2.42)
Using the notation f_p = 2(f_c + f)/c, we get

S_θ(f) = H(f)P_θ(f_p) = H(f) ∫ P_θ(v) exp(−j2πf_p v) dv,  (2.43)

where P_θ(f_p) is the Fourier transform of the projection P_θ(v) with the space frequency f_p.
The substitution of Eq. (2.41) into Eq. (2.43) yields

P_θ(f_p) = ∫∫ g(u, v) exp[−j2π(0·u + f_p v)] du dv  (2.44)

or

P_θ(f_p) = P_θ(0, f_p) = P(f_p sin θ, f_p cos θ),  (2.45)
Figure 2.8 2D data acquisition design in the tomographic approach
where P(·, ·) is the Fourier transform of the target reflectivity in the (x, y) coordinates. Then, using Eq. (2.45), we have

S_θ(f) = H(f)P(f_p sin θ, f_p cos θ).  (2.46)

Equation (2.45) represents the formulation of the projection theorem underlying the tomographic imaging algorithms [34,57].
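A minimal discrete check of the projection theorem of Eq. (2.45) can be run on a small grid: the 1D Fourier transform of a projection equals the central slice of the 2D transform. The sketch below takes θ = 0 (projection onto the v-axis by summing over u) and uses arbitrary test data:

```python
import cmath

# Arbitrary 4x4 "reflectivity" test data g(u, v)
g = [[1.0, 2.0, 0.0, 1.0],
     [0.0, 1.0, 3.0, 0.0],
     [2.0, 0.0, 1.0, 1.0],
     [0.0, 2.0, 0.0, 4.0]]
N = len(g)

def dft1(x):
    """Plain O(N^2) discrete Fourier transform."""
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

# Projection P_theta(v) for theta = 0: integrate the reflectivity over u
proj = [sum(g[u][v] for u in range(N)) for v in range(N)]

# Central slice of the 2D DFT: zero frequency along u, k along v
slice2d = [sum(g[u][v] * cmath.exp(-2j * cmath.pi * k * v / N)
               for u in range(N) for v in range(N)) for k in range(N)]

for a, b in zip(dft1(proj), slice2d):
    assert abs(a - b) < 1e-9   # the two spectra coincide
print("projection theorem verified on a 4x4 grid")
```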
Bearing in mind that v = y cos θ − x sin θ, we go from Eq. (2.43) to the 2D Fourier transform in the (x, y) coordinates related to the target:

S(f_x, f_y) = H(f) ∫∫ g(x, y) exp[−j2π(f_x x + f_y y)] dx dy,  (2.47)
where f_x and f_y are the respective space frequencies, f_x = −(f_po + f_p) sin θ and f_y = (f_po + f_p) cos θ; f_po = 2f_c/c is the space frequency corresponding to the carrier frequency; f_p is the space frequency defined over the whole frequency band of the probing signal, 2f_l/c < f_p < 2f_u/c, where f_l and f_u are the lower and upper bounds of the frequency spectrum. The solution to Eq. (2.47) yields the target reflectivity:

ĝ(x, y) = ∫∫ S(f_x, f_y) exp[j2π(f_x x + f_y y)] df_x df_y = g(x, y) ∗ h(x, y).  (2.48)
This approach to imaging can be implemented in the frequency and space domains (see Chapter 6). Note that the radar data on a signal are recorded in polar coordinates [8], while the imaging devices are represented as a dot matrix. This inconvenience necessitates the use of a cumbersome procedure of data interpolation, and then finding a compromise between the degree of interpolation complexity (the greater the complexity, the better the image quality) and the computation resources. It will be shown in Chapter 6 that there is a procedure of processing in the space domain which successfully overcomes this difficulty.

Figure 2.9 The space frequency spectrum recorded by a coherent (microwave holographic) system. The projection slices are shifted by the value f_po from the coordinate origin
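The polar-to-Cartesian regridding discussed above can be sketched as a nearest-neighbour lookup — the crudest point on the complexity/quality trade-off; the grids and sample values below are arbitrary illustrations, and finer interpolation kernels trade computation for image quality:

```python
import math

def polar_to_cartesian(samples, f_grid, theta_grid, fx, fy):
    """Return the polar sample nearest to the Cartesian point (fx, fy),
    using f_x = -f*sin(theta), f_y = f*cos(theta) as in Eq. (2.47)."""
    r = math.hypot(fx, fy)
    th = math.atan2(-fx, fy)
    i = min(range(len(f_grid)), key=lambda i: abs(f_grid[i] - r))
    j = min(range(len(theta_grid)), key=lambda j: abs(theta_grid[j] - th))
    return samples[i][j]

f_grid = [1.0, 2.0]                 # radii (f_po + f_p)
theta_grid = [0.0, math.pi / 4]     # slice angles
samples = [[10.0, 11.0], [20.0, 21.0]]
print(polar_to_cartesian(samples, f_grid, theta_grid, 0.0, 2.0))  # -> 20.0
```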
The space spectrum of each echo signal is represented in the frequency (f_x, f_y) plane (Fig. 2.9) as a straight line coinciding with a radial beam. The beam angular coordinate θ is equal to the angle ϑ which defines the target position at the moment of probing signal reflection. The space spectra of echo signals are centred relative to an arc of radius f_po. In the frequency plane, their multiplicity forms microwave holograms with the angle Δθ equal to the angle step of the synthesis, Δϑ. The inner and outer radii of a hologram are defined by the space frequencies f_pl and f_pt at the lower and top frequencies of the probing signal spectrum.
It follows from Eq. (2.47) that the ensemble of radar data recorded under the above conditions is a 2D Fourier microwave hologram. The image reconstruction from such a hologram reduces to an IFT. Although the inversion of a hologram described by Eq. (2.47) is a simple mathematical procedure, the methods of its digital implementation are not as obvious.

With the above assumptions of the far zone and the high-frequency spectrum, we can suggest that at every moment of time t = 2v/c the contribution to the echo signal will be made only by the local scatterers with the range coordinate v. Then the integral

P_ϑ(v) = ∫_V g(u, v) du  (2.49)

taken along the transverse range represents the projection of the target reflectivity on the RLOS.
Figure 2.10 The space frequency spectrum recorded by an incoherent (tomographic) system
With Eq. (2.49), expression (2.40) takes the form:

S_θ(f_po + f_p) = H(f_p) ∫_V P_ϑ(v) exp[−j2π(f_po + f_p)v] dv = H(f_p)P_θ(f_po + f_p),  (2.50)

where

P_θ(f_po + f_p) = F{P_ϑ(v) exp(−j2πf_po v)}.  (2.51)
If the right-hand side of Eq. (2.50) is expressed in the Cartesian coordinates, we shall have

S_θ(f_p) = H(f_p)P_θ[−(f_po + f_p) sin θ, (f_po + f_p) cos θ].  (2.52)

Hence, a 1D spectrum of the product of the reflectivity function projection at the angle ϑ and the phase factor exp(−j2πf_po v) is the cross section of a microwave hologram function along a straight line passing through the frequency plane origin at the angle θ (θ = ϑ).
If the data acquisition system is incoherent and records only the complex envelope shape of the echo signal, the phase factor vanishes from Eqs (2.50)–(2.52). Equation (2.52) then reduces to the projection slice theorem, one of the fundamental theorems in computerised tomography [57].

Let us discuss the physical differences between coherent (holographic) and incoherent (tomographic) systems of microwave radar imaging by comparing Figs 2.9 and 2.10. The angle step of the target aspect variation and the frequency bandwidth of the probing signal are taken to be identical in both cases.
A specific feature of coherent systems is that the projection slices of a hologram are shifted radially by the value f_po away from the coordinate origin. Other things being equal, their resolution, defined in the first approximation by the data domain size in the frequency space, is therefore high [8].
A more important difference is that the projection P_ϑ(v) recorded by an incoherent system is a real time function. So the phase of any of the projected slices in the data domain of the frequency space is zero at the interception with the coordinate origin. The projection slices are independent of one another. In contrast, a coherent system records not only the changes in the complex envelope amplitude along the projection but also those of the phase of the echo signal carrier oscillation. As a result, in consecutive projection slices the phases of average records with the space frequency f_po carry information about the ranges of all unscreened scatterers of the target relative to its rotation centre. Other records of the projection slice have additional shifts with their space frequency differences with respect to the centre record. In this way, all hologram records become interrelated, providing a resolution along any direction, including the transverse range.
Thus, the mathematical theory of computerised tomography for designing digital processing algorithms should be modified to adjust it to the requirements of coherent imaging. The above mathematical expressions (2.50)–(2.52) can be regarded as generalised projection slice theorems for coherent radar imaging. This enables one to employ analytical methods of computerised tomography [57] as a basis for further development of the theory of coherent imaging. Advantages of this kind of treatment are physical clarity and computation efficiency (Chapter 6).

The holographic approach to the description of inverse synthesis by coherent radars accounts for arbitrary changes in the target aspect and the frequency bandwidth of the probing signal. Most of the available algorithms for microwave imaging have been designed for 2D viewing geometry, so digital processing for real target sizes and angle steps becomes a time-consuming endeavour. The well-elaborated mathematics of computerised tomography could considerably facilitate the development of effective computation algorithms for digital processing of 3D microwave holograms.
Chapter 3

Quasi-holographic and holographic radar imaging of point targets on the earth surface

3.1 Side-looking SAR as a quasi-holographic radar
We have shown in Chapter 2 that the aperture synthesis can be described in different ways, including a holographic approach. It was first applied by E. N. Leith to a side-looking synthetic aperture radar (SAR) [85,86]. He analysed the optical cross-correlator, which processes the received and the reference signals, and concluded that 'if the reference function is a lens, the record of a point object's signal can also be considered as a lens, because the reference function has the same functional dependence as the signal itself' [85]. The signal from a point object is a Fresnel lens, and its illumination by a collimated coherent light beam creates two basic images – a real image and a virtual image (Fig. 3.1). The author also pointed out that the images formed by a Fresnel lens were identical to those created by correlation processing. He drew the conclusion that 'by reducing the optical system to only three lenses, we are, it appears, led to abolishing even these, as well as the correlation theory upon which all had been based' [85]. This was a radically new concept of SAR. The radar theory and the principles of optical processing were revised in terms of the holographic approach. Its key idea is that signal recording is not just a process of data storage, as in antenna or correlation theories; it is rather the recording of a miniature hologram of the wave field along the carrier's trajectory. For this, the recording is made on a two-dimensional (2D) optical transparency (the 'azimuth-range'), or a complex reflected signal is recorded in 2D. The first procedure uses a photographic film to record the range across the film and the azimuth (pathway range) along its length. In optical recording, the image is reconstructed in the same way as in conventional off-axial holography, that is, along the carrier's pathway line. If a microwave hologram is recorded optically, its illumination by coherent light reproduces a miniature optical representation of the radar wave field. Therefore, the object's resolution is determined by the size of the hologram recorded along the pathway line, rather than by the aperture of a real radar antenna. The range resolution is provided by the pulse modulation of radiated signals. Since the holographic approach to SAR is applicable only to its azimuthal channel, the authors of the work [85] termed it quasi-holographic. In his later publications on this subject, E. N. Leith pointed out that aperture synthesis should be described as a microwave analogue of holography to which holographic methods could be applied, rather than as holography proper.

Figure 3.1 A scheme illustrating the focusing properties of a Fresnel zone plate: 1 – collimated coherent light, 2 – Fresnel zone plate, 3 – virtual image, 4 – real image and 5 – zeroth-order diffraction

Thus, a combination of SAR and a coherent optical processor represents a 'quasi-holographic' system, whose azimuthal resolution is achieved by holographic processing of the recorded wave field. Both E. N. Leith and F. L. Ingalls believe [86] that this representation is the most flexible and physically clear. The use of the holographic approach for SAR analysis has so far been restricted to optical processors [87]. There is a suggestion to represent the entire SAR azimuthal channel as a holographic system [143]. In that case the initial stage of the holographic process in this channel (the formation of a microwave hologram) is the recording of the field scattered by an artificial reference source. The second stage (the image reconstruction) is described in terms of physical optics.
3.1.1 The principles of hologram recording
Let us consider a SAR borne by a carrier moving with velocity v along the x′-axis (Fig. 3.2). The SAR antenna has the length L_R (the real aperture) and the beamwidth ϑ_R along the pathway line. The SAR irradiates the view stripe by short pulses and makes consecutive time recordings of the probing signal reflected by the object. The scattered field amplitude and phase are registered by a coherent (synchronous) detector due to the interference of the reference and received signals. This produces a multiplicative microwave hologram (Chapter 2). The role of the reference wave is played by a signal directly supplied to the synchronous detector; this is the so-called 'artificial' reference wave.
We shall describe now the receiving device of the synthetic aperture which records a hologram on a cathode tube display. Usually, a hologram is recorded by modulating the tube radiation intensity, with the photofilm moving with velocity v_f relative to the screen. For objects with different ranges R_o from the pathway line, one can use a pulse mode and vertical display scanning. As a result, the device records a series of one-dimensional (1D) holograms having different positions along the film width, depending on the distance to the respective objects. Suppose all the objects are located at a distance R_o from the pathway line. For simplicity, the radiated signal can then be taken to be continuous, because the pulsed nature of the radiation is important only for the analysis of range resolution. Figure 3.3 shows an equivalent scheme of 1D microwave hologram recording. A synthetic aperture is located at point Q with the coordinates (x′, 0) (x′ = vt, where t is the current moment of time), and a hypothetical source of the reference wave is at point R(x_r, z_r). The source functions in a way similar to that of the reference wave during the hologram recording (Fig. 1.2). The point P(x_o, z_o = −R_o) belongs to the object being viewed along the x_o-axis.

Figure 3.2 The basic geometrical relations in SAR

Figure 3.3 An equivalent scheme of 1D microwave hologram recording by SAR

If
theobject’sscatteringcharacteristicsaredescribedbythefunctionF(x
o
) anditssize
issmall ascomparedwithR
o
, onecanusethewell-knownFresnel approximationto
definethediffractionfieldalongthe

-axis[103]:
U
o
(x

) = C
o
e
ik
1
R
o

λ
1
R
o

_
−∞
F(x
o
)e
ik
1
((x
o
−x

)/(R
o
))
dx
o
, (3.1)
where k_1 = 2π/λ_1 is the wave number and C_o is a complex-valued constant. The complex amplitude of the reference wave is

U_r(x′) = A_r exp(iϕ_r).

Normally, this is a plane wave, i.e. ϕ_r = k_1x′ sin ϑ, where ϑ is the 'incidence' angle of the wave on the hologram. The inclination of the reference wave is equivalent to that of the reference signal with a linear phase shift, providing the introduction of the carrier frequency ω_x = k_1 sin ϑ. A coherent registration gives a hologram described as
h(x′) = Re(U_r*(x′)U_o(x′))  (3.2)

or

h(x′) = Im(U_r*(x′)U_o(x′)).

It follows from Eq. (3.1) that a synthetic aperture generally forms 1D Fresnel holograms. The following three types of hologram are possible, depending on the relation between the object's size, the synthetic aperture length L_S = vT (T is the recording time, or the time of the aperture synthesis) and the range R_o.
1. If the condition

R_o ≫ k_1(x_o²)_max/2

holds true (here (x_o)_max defines the maximum size of the object), we get Fraunhofer's approximation instead of Eq. (3.1):

u_o(x′) = (C_o exp(ik_1R_o)/√(λ_1R_o)) exp[ik_1(x′)²/R_o] ∫ F(x_o) exp(−i2k_1x′x_o/R_o) dx_o.  (3.3)

The hologram we obtain is of the Fraunhofer type.
2. If the condition

R_o ≫ k_1(x′)²_max/2 = k_1L_S²/8

is valid, we can eliminate the term exp[ik_1(x′)²/R_o] from Eq. (3.3) to obtain a Fourier hologram; this condition can be written as

L_S ≤ 2√(λ_1R_o/π).  (3.4)

3. For a point object, we have

F(x_o) ∼ δ(x′ − x_o),

and Fraunhofer's condition for diffraction becomes immediately fulfilled.
Using the filtering properties of the δ-function and Eq. (3.3), we arrive at the following equation for the hologram (with the constant phase term ignored):

h(x′) = A_rA_o cos[ω_x x′ − k_1(x′)²/R_o + 2k_1x′x_o/R_o],  (3.5)

where A_o is the scattered wave amplitude at the receiver input.
If Eq. (3.4) holds, expression (3.5) yields

h(x′) = A_rA_o cos[ω_x x′ + 2k_1x′x_o/R_o].  (3.6)
Thus, a synthetic aperture forms either a Fraunhofer or a Fourier hologram of a point object. The former looks like a 1D Fresnel zone plate, in accordance with Eq. (3.5), and the latter is a 1D diffraction grating with a constant step, in accordance with Eq. (3.6).

During the photographic recording, the holograms are scaled by substituting the x′-coordinate by the x-coordinate, where x = x′/n_x and n_x = v/v_f. A constant term h_o ('displacement') is added to Eqs (3.5) and (3.6) for the photographic registration of the bipolar function h(x′).
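The 1D Fresnel zone plate of Eq. (3.5) is easy to sample; in the sketch below the wavelength, range, carrier spatial frequency and target position are illustrative values only:

```python
import math

def fresnel_hologram(x_prime, lam1=0.03, R_o=1e4, omega_x=50.0, x_o=0.0, A=1.0):
    """h(x') of Eq. (3.5), with the product A_r*A_o folded into A."""
    k1 = 2.0 * math.pi / lam1
    return A * math.cos(omega_x * x_prime
                        - k1 * x_prime**2 / R_o
                        + 2.0 * k1 * x_prime * x_o / R_o)

# A few samples along the synthetic aperture coordinate x'
samples = [fresnel_hologram(x * 0.01) for x in range(5)]
print(samples[0])   # -> 1.0 (cosine of zero phase at x' = 0)
```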
3.1.2 Image reconstruction from a microwave hologram
It is reasonable to discuss the next step in the holographic process in terms of physical optics. Illumination of a photographic transparency by a plane coherent wave with the wave number k_2 produces a diffraction field, whose distribution at distance ρ from the hologram is described by the Huygens–Fresnel integral:

V(ξ) = (exp[i(k_2ρ − π/4)]/√(λ_2ρ)) ∫_{−v_fT/2}^{v_fT/2} h(x) exp[i(k_2/2ρ)(x − ξ)²] dx.  (3.7)
The substitution of Eq. (3.5) into Eq. (3.7) gives

V(ξ) = V_o(ξ) + V_1(ξ) + V_2(ξ),

where V_o(ξ) is the zeroth order corresponding to the displacement h_o, and V_1(ξ) and V_2(ξ) are the functions of the reconstructed images of a point object. These functions are equal to

V_{1,2}(ξ) = C_o ∫_{−v_fT/2}^{v_fT/2} exp{i[(k_2/2ρ) ± (k_1n_x²/R_o)]x²} exp{−i[(k_2ξ/ρ) ± (ω_xn_x + 2k_1n_xx_o/R_o)]x} dx.  (3.8)
The positions of the images along the z-axis can be found from the condition of zero power of the first exponent in Eq. (3.8):

ρ = ±λ_1R_o/(2λ_2n_x²).  (3.9)

Obviously, one image is virtual and the other real.
By integrating Eq. (3.8) with the condition of Eq. (3.9), we obtain

V_{1,2}(ξ) = C_o sin{[ω_xn_x + (2k_1n_x²/R_o)((x_o/n_x) − ξ)]v_fT/2} / {[ω_xn_x + (2k_1n_x²/R_o)((x_o/n_x) − ξ)]v_fT/2}.  (3.10)
Therefore, the image of a point object is described by a function of the sin ν/ν type. It follows from Eq. (3.10) that the image position along the x-axis is defined by the zero value of the argument ν, or

ξ = x_o/n_x ± ω_xR_o/(2k_1n_x).  (3.11)
The first term in Eq. (3.11) corresponds to the real coordinate of the object and the second one is due to the carrier frequency. Images of two point objects having the same coordinates x_o (x_o1 = x_o2) but different ranges R_1 and R_2 (R_1 ≠ R_2) are characterised by different coordinates ξ_1 and ξ_2 (ξ_1 ≠ ξ_2). Therefore, the use of the carrier frequency leads to geometrical distortions of the coordinates of point objects. We should recall that the use of the carrier frequency in the first generation of SARs (with an optical processor) was necessitated by the application of Leith's off-axial holography in order to separate images from the zeroth order. The carrier frequency becomes, however, unnecessary in digital image reconstruction from complex holograms (Chapter 2).
Let us now discuss the SAR resolving power. According to Rayleigh's criterion, two points are thought to be separated if the major maximum of one of the sin x/x functions coincides with the first zero of the other function. This gives us the resolving power

δx = x_1 − x_2 = πR_o/(k_1L_S).  (3.12)
The aperture creating a Fraunhofer hologram with Eq. (3.5) was termed a 'focused aperture' in classical SAR theory. The focusing here is treated as a compensation for the quadratic phase shift in Eq. (3.5) during image reconstruction, the compensation being made with the transform in Eq. (3.7).

The case of an 'unfocused aperture' is described by Eq. (3.6) for the Fourier hologram, and the processing is performed with the Fourier transform of the hologram function:

V(ξ) = C_o ∫_{−v_fT/2}^{v_fT/2} h(x) exp(−ik_2xξ/ρ) dx.  (3.13)
Eqs (3.13) and (3.6) yield

V_{1,2}(ξ) = C_o sin{[(k_2/ρ)ξ ± (ω_xn_x + (2k_1/R_o)n_xx_o)]v_fT/2} / {[(k_2/ρ)ξ ± (ω_xn_x + (2k_1/R_o)n_xx_o)]v_fT/2}.  (3.14)
Here, ρ can be taken to be the focal length of the Fourier lens.

The image position for a point object is defined as

ξ = ±(ρ/k_2)[ω_xn_x + (2k_1/R_o)n_xx_o].  (3.15)
Images of two point objects with the same coordinates x_o but different ranges R_1 and R_2 will also be distorted due to the dependence of ξ on R_o. The resolving power from Rayleigh's criterion is

δx = x_1 − x_2 = πR_o/(k_1n_xv_fT).  (3.16)

With Eq. (3.4), the permissible limit for this parameter in SAR with unfocused processing has the value

δx = √(πλ_1R_o)/4 ≈ 0.44√(λ_1R_o).  (3.17)
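Illustrative numbers for the azimuth resolution formulas, Eqs (3.12) and (3.17); the wavelength (3 cm) and range (10 km) are assumed example values, not taken from the text:

```python
import math

lam1, R_o = 0.03, 10e3   # wavelength (m) and range (m), illustrative

def focused_resolution(L_S):
    """Eq. (3.12): delta_x = pi*R_o/(k_1*L_S) = lam1*R_o/(2*L_S)."""
    k1 = 2.0 * math.pi / lam1
    return math.pi * R_o / (k1 * L_S)

def unfocused_limit():
    """Eq. (3.17): delta_x = sqrt(pi*lam1*R_o)/4 ~ 0.44*sqrt(lam1*R_o)."""
    return math.sqrt(math.pi * lam1 * R_o) / 4.0

print(focused_resolution(100.0))   # -> 1.5 (metres), for a 100 m aperture
print(unfocused_limit())           # -> about 7.7 (metres)
```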
Note that a hologram is written on a photofilm (in the case of an optical processor) or in a memory device (in the case of digital recording) continuously during the flight. For this reason, the focused or unfocused aperture regime is prescribed only at the reconstruction stage.

Synthetic aperture radar can also be considered in terms of geometrical optics, which implies phase structure analysis of a hologram. One of the expressions in (3.2) can be re-written as

h(x′) = A_rA_o cos(ϕ_r − ϕ_o),
where ϕ_o is the phase of the field scattered by the object. For a point object located at point P (Fig. 3.3), we can write two expressions taking into account the SAR wave propagation to the object and back:

ϕ_o = −2k_1(PQ − PO),
ϕ_r = −2k_1(RQ − RO),

where RO = R_r is the distance between a hypothetical reference wave source and the coordinates origin. By expanding ϕ_r and ϕ_o into series, we get for the first-order terms
ϕ_r − ϕ_o ≈ −(4π/λ_1){(x′)²[1/(2R_r) − 1/(2R_o)] − x′[x_r/R_r − x_o/R_o]}.  (3.18)

In a simple case of x_o = 0, x_r = 0 and R_r = ∞ (a plane reference wave without linear phase shift), we have

ϕ_r − ϕ_o = 4π(x′)²/(2λ_1R_o).
The space frequency in the interference pattern is

ν(x′) = (1/2π)·∂(ϕ_r − ϕ_o)/∂x′ = 2x′/(λ_1R_o).  (3.19)
At a certain value of x′_cr = (L_S)_max/2, the frequency ν may exceed the resolving power of the field recorder, which is defined in this case by the real aperture angle and is equal to ν_cr = 1/L_R. From this we have the condition

L_S ≤ λ_1R_o/L_R = ϑ_RR_o.  (3.20)
The substitution of (L_S)_max into Eq. (3.12) gives a classical relation for the attainable limit of SAR resolution:

δx_lim = L_R/2.
The pulsed nature of the signal allows determination of such an important radar parameter as the minimum repetition frequency of probing pulses, χ_min. Obviously, the pulse mode is similar to hologram discretisation. The distance between two adjacent records Δx′ = v_f/χ must meet the condition

    Δx′ ≤ [2ν(x′_cr)]^(−1).

This condition and Eq. (3.19) give

    χ_min = 2v_f/L_R.
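The minimum repetition frequency is a one-line computation; the carrier velocity and real-antenna length below are assumed illustration values, not taken from the text.

```python
def min_repetition_frequency(v_f, L_R):
    # chi_min = 2 * v_f / L_R: the along-track record spacing v_f / chi must not
    # exceed half the period of the highest space frequency on the hologram
    return 2.0 * v_f / L_R

# Assumed values: 200 m/s carrier velocity, 2 m real antenna
chi_min = min_repetition_frequency(200.0, 2.0)   # 200 Hz
```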
By following the method suggested in Reference 92 we can obtain relations for the phase deviation of the reconstructed wave from the spherical shape (third-order wave aberrations):

    ϕ^(3) = −(k_2/2)[D_0 x⁴/4 − D_1 x³ + D_2 x²],   (3.21)

where

    D_k = x_c^k/R_c³ − x_I^k/R_I³ ± (μ/m^(4−k))(x_o^k/R_o³ − x_r^k/R_r³),

x_c and R_c are the coordinates of the reconstructing wave source, μ = λ_2/λ_1, m = n_x^(−1).
The image coordinates for a point object are

    1/R_I = 1/R_c ± (μ/m²)(1/R_o − 1/R_r),

    x_I/R_I = x_c/R_c ± (μ/m)(x_o/R_o − x_r/R_r).
The value k = 0 is for the spherical aberration, k = 1 for the coma and k = 2 for the astigmatism. These relations can be used to find the maximum size of the synthetic aperture, (L_s)_max, from Rayleigh's formula (wave aberrations at the hologram edges should not be larger than λ_2/4). Since the spherical aberration is the largest in the order of magnitude, we obtain

    (L_S)_max = 2 [λ_1 R_o³/(1 − 4μ²/m²)]^(1/4).   (3.22)
For typical conditions of SAR performance, the value of (L_s)_max calculated from Eq. (3.20) is smaller than the (L_s)_max found from Eq. (3.22); that is, the effect of wave aberrations is negligible.
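A minimal sketch comparing the two aperture bounds, Eqs (3.20) and (3.22), under assumed parameter values (3 cm wavelength, 10 km range, 2 m real antenna, μ/m = 0.1); for these numbers the frequency-limited aperture of Eq. (3.20) is indeed the smaller one, as the text states.

```python
def ls_max_frequency(lmbda1, R_o, L_R):
    # Eq. (3.20): aperture limited by the field recorder's resolving power
    return lmbda1 * R_o / L_R

def ls_max_aberration(lmbda1, R_o, mu, m):
    # Eq. (3.22): aperture limited by spherical aberration (Rayleigh quarter-wave rule)
    return 2.0 * (lmbda1 * R_o**3 / (1.0 - 4.0 * mu**2 / m**2)) ** 0.25

# Assumed example values
ls_freq = ls_max_frequency(0.03, 10e3, 2.0)        # 150 m
ls_aber = ls_max_aberration(0.03, 10e3, 0.1, 1.0)  # roughly 840 m
```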
3.1.3 Effects of carrier track instabilities and object's motion on image quality
The carrier's trajectory instabilities are a major factor that can distort SAR images. The use of geometrical optics in the holographic approach provides a fairly simple estimation of permissible trajectory deviations from a straight line. The object's wave phase ϕ_o(x′) can be written as

    ϕ_o(x′) = −2k_1{[(z_o − g)² + (x′ − x_o)²]^(1/2) − R_o},

where g = g(x′) is the trajectory deviation from the x′-axis. At R_o ≫ x_o, x′ and g, the binomial expansion ignoring all terms of the g² order gives an approximate expression for ϕ_o(x′):
    ϕ_o(x′) ≅ −(4π/λ_1){[(x′)² − 2x_o x′]/(2R_o) − [(x′)⁴ − 4x_o(x′)³ + 4x_o²(x′)²]/(8R_o³) − z_o g/R_o + [z_o g(x′)² − 2z_o g x_o x′]/(2R_o³)}.
The phase equation for a wave reconstructing one of the images has a standard form:

    ϕ_I = ϕ_c ± (ϕ_o − ϕ_r),   (3.23)

where ϕ_c is the phase of the reconstructing wave.
On the other hand, ϕ_I can be written as

    ϕ_I = −(2π/λ_2)[(x² − 2x_I x)/(2R_I) − (x⁴ − 4x_I x³ + 4x_I² x²)/(8R_I³)].   (3.24)
The phases ϕ_c and ϕ_r are described by expressions similar to (3.24). The phase differences between the respective third-order terms relative to 1/R_I in Eqs (3.23) and (3.24) represent aberrations described as

    Δϕ^(3) = ϕ^(3) + ϕ_n^(3).

The aberrations ϕ^(3) are defined by Eq. (3.21), and ϕ_n^(3) has the form

    ϕ_n^(3) = −k_2(D_3 g + D_4 g x − D_5 g x²),   (3.25)
where

    D_3 = ∓2μz_o/R_o,  D_4 = ∓2μz_o x_o/(mR_o³),  D_5 = ∓μz_o/(m²R_o³),

m = 1/n_x, μ = λ_2/λ_1 and g is the trajectory deviation. Here the quantities D_3, D_4 and D_5 are aberrations arising from the trajectory instabilities.
Equation (3.25) describing distortions in the hologram phase structure can be used to calculate the compensating phase shift directly during the synthesis. For this, SAR should be equipped with a digital signal processor.

By applying Rayleigh's criterion to each term in Eq. (3.25), one can get the following conditions for maximum permissible deviations of the carrier's trajectory:

    g_3 ≤ λ_2/(4D_3) = λ_1 R_o/(8z_o) = λ_1/(8 cos ϑ_o),   (3.26)

    g_4 ≤ λ_2/(4D_4 x_max) = λ_1 R_o³/(4L_S z_o x_o),   (3.27)

    g_5 ≤ λ_2/(4D_5 x²_max) = λ_1 R_o³/(z_o L_S²).   (3.28)
Besides, if one knows the flight conditions and the carrier's characteristics, Eqs (3.26)–(3.28) can be used to find constraints imposed on the parameter cos ϑ_o and the maximum size of the synthetic aperture:

    cos ϑ_o ≤ λ_1/(8g),

    (L_S)_max ≤ λ_1 R_o²/(4g x_o),

    (L_S)_max ≤ R_o √(λ_1/g).
Normally, SAR meets the conditions L_S ≪ R_o and x_o ≪ R_o. So D_4 and D_5 can be neglected, leaving only the factor D_3, which severely restricts the trajectory stability (see Eq. (3.26)).
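Equation (3.26), the dominant constraint, is easy to evaluate numerically; the wavelength and viewing angle below are assumed illustration values.

```python
import math

def max_deviation_g3(lmbda1, theta_o_rad):
    # Eq. (3.26): permissible trajectory deviation, g <= lambda_1 / (8 cos(theta_o))
    return lmbda1 / (8.0 * math.cos(theta_o_rad))

# Assumed values: 3 cm wavelength, theta_o = 60 degrees
g3 = max_deviation_g3(0.03, math.radians(60.0))   # 7.5 mm
```

The sub-wavelength result illustrates why the D_3 term "severely restricts the trajectory stability".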
Effects arising in a synthetic aperture during the viewing of moving targets can be estimated in terms of physical optics. Suppose a point object moves radially (along the z-axis) at velocity v_o, such that its displacement is smaller than the range resolution for the synthesis time T. Then the equation for the hologram, ignoring constant phase terms, is

    h(x) ∼ cos[ω_x n_x x + 2k_1(v_o/v)n_x x − k_1 n_x²x²/R_o + 2k_1 n_x x_o x/R_o − (k_1/R_o)(v_o/v)²n_x²x²].   (3.29)
The substitution of Eq. (3.29) into (3.7) gives a condition for viewing the focused image:

    ρ = ±k_2 R_o/{2k_1 n_x²[1 + (v_o/v)²]}.
Since v_o/v ≪ 1, the image can be viewed practically in the same plane as that for an immobile object. Keeping this in mind, we can obtain, after the integration, a function describing one of the reconstructed images:

    V(ξ) = C_o sin{[ω_x n_x + 2k_1(v_o/v)n_x + (2k_1 n_x/R_o)(x_o − n_x ξ)]v_f T/2} / {[ω_x n_x + 2k_1(v_o/v)n_x + (2k_1 n_x/R_o)(x_o − n_x ξ)]v_f T/2}.
The image position is defined as

    ξ = x_o/n_x + ω_x R_o/(2k_1 n_x) + (R_o/n_x)(v_o/v).
Clearly, the object's motion is equivalent to the use of an additional carrier frequency at the recording stage, which causes the image shift. The optical processor deals with a real image recorded on a photofilm. The recording field on the film is limited by a diaphragm cutting off the background. The value of v_o may become so large that no image will be recorded because of the shift.
The object's motion in the azimuthal direction (along the x′-axis) at velocity v_o is equivalent to a change in the SAR's flight velocity. Then Eq. (3.9) describing the position of the focused image along the z-axis can be re-written as

    ρ′ = ±λ_1 R_o/(2λ_2 n′_x²) = ±λ_1 R_o v²/[2λ_2 n_x²(v − v_o)²].
Therefore, the object's motion along the x′-axis changes the focusing conditions by the value

    δρ = ρ′ − ρ = 2ρ(v_o/v)(1 − v_o/2v)/(1 − v_o/v)²,   (3.30)

where ρ is found from Eq. (3.9). If the condition v_o ≪ v is fulfilled, we have

    δρ ≈ 2ρv_o/v.   (3.31)
Equation (3.30) yields

    v_o = v[1 − √(1 − δρ/(ρ + δρ))].

On the other hand, a simple geometrical consideration can give the following relation for the resolving power of SAR along the z-axis (longitudinal resolution):

    Δρ = 2(δx′ v_f)²/(λ_2 v²) = 2(δx′)²/(λ_2 n_x²).   (3.32)
The focusing depth Δρ is defined as the focal plane shift along the z-axis by a distance at which the azimuthal resolution δx′ becomes twice as poor as the diffraction limit in Eq. (3.12).
The viewing of a focused image of an object moving at velocity v_o requires an additional focusing of the optical processor. The object velocities that require the focusing can be found from the condition Δρ < δρ, where δρ is given by Eq. (3.31). Using Eqs (3.9) and (3.32), we get

    v_o > 2(δx′)²v/(λ_1 R_o).

At lower velocities, there is no need to re-focus the processor, and the poorer image quality may be assumed to be insignificant.
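The refocusing threshold just derived can be sketched numerically; all values below are assumptions (1 m azimuth resolution, 200 m/s platform, 3 cm wavelength, 10 km range), not parameters from the text.

```python
def refocus_threshold(delta_x, v, lmbda1, R_o):
    # v_o > 2 * (delta_x')**2 * v / (lambda_1 * R_o):
    # targets slower than this need no re-focusing of the processor
    return 2.0 * delta_x**2 * v / (lmbda1 * R_o)

v_o_min = refocus_threshold(1.0, 200.0, 0.03, 10e3)   # about 1.33 m/s
```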
To conclude Section 3.1, we should like to emphasise the following. The SAR operation principles can be described by conventional methods (Chapter 2) that are still widely used [73] or with a holographic approach representing the side-looking synthetic aperture and the processor as an integral system for recording and reconstructing the wave field. The analysis of the aperture synthesis can be based on the well-elaborated principles of holography as well as on physical and geometrical optics. The examples we have discussed support the physical clarity of the holographic approach and its value for SAR analysis. We can get a better insight into the mechanisms of image formation by SAR without relying on Doppler frequencies of reflected signals or on correlation theory.
3.2 Front-looking holographic radar

The operation principle of a front-looking holographic radar was discussed in Chapter 2. A high resolution across the pathway line (Fig. 3.4) is provided in it by a multibeam antenna pattern of a large receiving antenna array located, say, along the aircraft wings [72]. The resolution along the pathway line is achieved by the aperture synthesis. There is another radar design, in which the desired transversal resolution is provided by a phased antenna array mounted under the fuselage and the longitudinal resolution by a synthetic aperture [81,82].
3.2.1 The principles of hologram recording

A coherent transmitter (Fig. 3.5) generates a continuous or pulsed signal (to decouple the transmitter and the receiver) and illuminates the desired survey zone under the aircraft. The receiving antenna represents a linear or phased array of numerous receivers. The amplitude and phase of the reflected signal are recorded by each array element for the time T_s, synthesising a 2D aperture of size X_s × Y along the trajectory segment X_s = vT_s. Signals at the receiver output are saved by a memory unit, for example, on a photofilm [81]. The film record can be regarded as a 2D plane optical hologram equivalent to a microwave hologram with the size X_s × Y (Fig. 3.4). If the radar has an optical processor, it reconstructs the wavefront recorded on the optical hologram to produce an optical image of the earth surface within the view zone. Thus, the operation principle of this type of radar is totally holographic and it is reasonable to call
Figure 3.4 The viewing field of a holographic radar
Figure 3.5 A schematic diagram of a front-looking holographic radar
Figure 3.6 The resolution of a front-looking holographic radar along the x-axis as a function of the angle ϕ
it a front-looking holographic radar [72,81]. Since it is an analogue of a 2D optical holographic system, it produces a 3D image. The resolution of a holographic radar can be examined by analysing the uncertainty function [72]. Sectioning this function by equal power levels at the 0.7 point gives an approximate radar resolution:

    δy = 0.88λ_1 H/(Y sin ϕ),   (3.33)

    δx = 0.45λ_1 H/(X_s sin³ϕ),   (3.34)

    δz = 7λ_1 H²/(2X_s² sin³ϕ + Y² sin³ϕ),   (3.35)
where X_s is the synthetic aperture length and ϕ = 90° − α.

Figures 3.6 and 3.7 show the dependence of δx and δz on the angle ϕ, plotted from the following initial parameters: λ_1 = 1.78 cm, H = 300 m, Y = 1 m and X_s = 30 m. One can see that a holographic radar possesses a fairly large resolving power.
Figure 3.7 The resolution of a front-looking holographic radar along the z-axis as a function of the angle ϕ

It follows from Eqs (3.33), (3.34) and (3.35) that in addition to the 'conventional' resolution along and across the pathway line, a holographic radar has a longitudinal resolution δz even when its signal is continuous. This is due to the fact that a hologram contains information about the three dimensions of the object, including the longitudinal range (Chapter 2).
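The curves in Figs 3.6 and 3.7 can be reproduced from Eqs (3.33)–(3.35) with the parameters stated in the text (λ₁ = 1.78 cm, H = 300 m, Y = 1 m, X_s = 30 m); a minimal sketch:

```python
import math

LMBDA1, H, Y, XS = 0.0178, 300.0, 1.0, 30.0   # parameters used for Figs 3.6 and 3.7

def delta_y(phi):
    return 0.88 * LMBDA1 * H / (Y * math.sin(phi))         # Eq. (3.33)

def delta_x(phi):
    return 0.45 * LMBDA1 * H / (XS * math.sin(phi) ** 3)   # Eq. (3.34)

def delta_z(phi):
    return 7.0 * LMBDA1 * H**2 / (2.0 * XS**2 * math.sin(phi) ** 3
                                  + Y**2 * math.sin(phi) ** 3)  # Eq. (3.35)

phi = math.radians(90.0)
# at phi = 90 deg: delta_x ~ 8 cm, delta_y ~ 4.7 m, delta_z ~ 6.2 m
```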
3.2.2 Image reconstruction and scaling relations

Consider now the processes of wavefront recording and processing in this type of radar. As the radar is an analogue of a 2D holographic system, it would be natural to analyse it in terms of the holographic approach developed in Section 3.1, which treats the radar and the processing unit as an integral system. For this, we shall examine a generalised hologram geometry [50,51]. Suppose a wave comes from a microwave point source with the coordinates (x_o, y_o, z_o), and a reference wave is generated by a point source with the coordinates (x_r, y_r, z_r), as shown in Fig. 3.8(a). The wave field being recorded has the wavelength λ_1.

At the second stage, the recorded hologram is illuminated by a spherical wave with the wavelength λ_2, coming from a point source with the coordinates (x_p, y_p, z_p), as shown in Fig. 3.8(b). A paraxial approximation will then give the coordinates of two reconstructed images:

    x_i = ±(λ_2 z_i/λ_1 z_o)x_o ∓ (λ_2 z_i/λ_1 z_r)x_r + (z_i/z_p)x_p,

    y_i = ±(λ_2 z_i/λ_1 z_o)y_o ∓ (λ_2 z_i/λ_1 z_r)y_r + (z_i/z_p)y_p,

    z_i = [1/z_p ± λ_2/(λ_1 z_r) ∓ λ_2/(λ_1 z_o)]^(−1).   (3.36)
The upper arithmetic signs in the equalities of (3.36) are for the virtual image and the lower ones are for the real image. When z_i is positive, the image is virtual and is on the left of the hologram; when z_i is negative, the image is real and is located on
Figure 3.8 Generalised schemes of hologram recording (a) and reconstruction (b)
the right of the hologram. At λ_1 = λ_2, z_r = z_o and z_p > 0 both images are virtual, whereas at λ_1 = λ_2, z_r = z_o and z_p < 0 they are real.
One can show with Eqs (3.36) that holographic images of objects more complex than just a point, for example, consisting of two point sources, can be magnified or diminished relative to the respective object [50,51].

As the reconstructed wavefront is 3D, the transverse (along the x- and y-axes) and the longitudinal (along the z-axis) magnifications obtained during the reconstruction can be analysed separately.
From Eq. (3.36), the transverse magnifications are:
for the real image (superscript 'r')

    M_t^r = ∂x_i/∂x_o = ∂y_i/∂y_o = λ_2 z_i/(λ_1 z_o),   (3.37)
for the virtual image (superscript 'v')

    M_t^v = ∂x_i/∂x_o = ∂y_i/∂y_o = −λ_2 z_i/(λ_1 z_o)   (3.38)

or

    M_t^(r,v) = |1 − z_o/z_r ∓ λ_1 z_o/(λ_2 z_p)|^(−1).   (3.39)
Here the upper sign is for the real image and the lower one for the virtual image. The transverse magnification describes the ratio of the width and height of the image to the appropriate parameters of the real object.
The longitudinal magnification can be found by differentiating Eq. (3.36) with respect to z_i:
for the real image

    M_l^r = ∂z_i/∂z_o = λ_2 z_i²/(λ_1 z_o²) = (λ_1/λ_2)(M_t^r)²,   (3.40)

for the virtual image

    M_l^v = ∂z_i/∂z_o = −λ_2 z_i²/(λ_1 z_o²) = −(λ_1/λ_2)(M_t^v)².   (3.41)
The longitudinal magnification of a virtual image is always negative. This means that the image always has a relief inverse to that of the object: it is pseudoscopic.

Equations (3.37), (3.38) and (3.40), (3.41) show that the longitudinal and transverse magnifications are not identical, so the image of a 3D object is distorted. The matter is that the object's relief cannot be reproduced exactly in an image. The condition for obtaining an undistorted image can be derived from the equality of the transverse and longitudinal magnifications:

    M_t^r = M_l^r  or  λ_2 z_i/(λ_1 z_o) = λ_2 z_i²/(λ_1 z_o²).

Therefore, a geometrical similarity is possible only if the image is reconstructed at the site the object occupied during the recording.
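The pseudoscopic behaviour and the equal-magnification condition can be checked numerically from Eqs (3.37)–(3.41); the wavelengths and distances below are assumed illustration values.

```python
def m_t_real(l1, l2, z_i, z_o):
    # Eq. (3.37): transverse magnification of the real image
    return l2 * z_i / (l1 * z_o)

def m_l_real(l1, l2, z_i, z_o):
    # Eq. (3.40): longitudinal magnification, equal to (l1/l2) * Mt**2
    return l2 * z_i**2 / (l1 * z_o**2)

l1, l2 = 0.03, 0.5e-6   # assumed: 3 cm microwave recording, 0.5 um optical readout
z_o = 1000.0
mt_same = m_t_real(l1, l2, z_o, z_o)   # image reconstructed at the object's distance
ml_same = m_l_real(l1, l2, z_o, z_o)   # equal to mt_same: undistorted case
mt_far = m_t_real(l1, l2, 2.0 * z_o, z_o)
ml_far = m_l_real(l1, l2, 2.0 * z_o, z_o)
# at z_i != z_o the two magnifications differ and the relief is distorted
```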
By substituting the coordinate z_i = z_o into Eq. (3.36), we can get an expression for the coordinates of the reconstructing source:

    1/z_p = (1/z_o)(1 ∓ λ_2/λ_1) ± λ_2/(λ_1 z_r).   (3.42)

Another way of obtaining an undistorted image is to change the scale of the linear hologram size by a factor of m at the transition from the recording to the reconstruction [50]. At m < 1, the hologram becomes smaller while at m > 1 it becomes larger. The coordinates of an image reconstructed from a hologram diminished m times can be found from
    x_i = ±m(λ_2 z_i/λ_1 z_o)x_o ∓ m(λ_2 z_i/λ_1 z_r)x_r + (z_i/z_p)x_p,

    y_i = ±m(λ_2 z_i/λ_1 z_o)y_o ∓ m(λ_2 z_i/λ_1 z_r)y_r + (z_i/z_p)y_p,

    z_i = [1/z_p ± m²λ_2/(λ_1 z_r) ∓ m²λ_2/(λ_1 z_o)]^(−1).   (3.43)
The transverse magnifications are:
for the real image (superscript 'r')

    M_t^r = ∂x_i/∂x_o = ∂y_i/∂y_o = mλ_2 z_i/(λ_1 z_o),   (3.44)

for the virtual image (superscript 'v')

    M_t^v = ∂x_i/∂x_o = ∂y_i/∂y_o = −mλ_2 z_i/(λ_1 z_o).   (3.45)
The longitudinal magnifications are:
for the real image

    M_l^r = ∂z_i/∂z_o = m²(λ_1/λ_2)(M_t^r)²,   (3.46)

for the virtual image

    M_l^v = ∂z_i/∂z_o = −m²(λ_1/λ_2)(M_t^v)².   (3.47)

The condition for obtaining an undistorted image also follows from the equality of the transverse and longitudinal magnifications:

    z_o = mz_i.   (3.48)
The substitution of Eq. (3.48) into Eq. (3.36) yields the coordinates of the reconstructing source; in particular, for z_p we have

    1/z_p = (m/z_o)(1 ± m²λ_2/λ_1) ∓ mλ_2/(λ_1 z_r).   (3.49)
If the recorded hologram is magnified m times, the reconstructed image is at a distance

    z_i = [1/z_p ± λ_2/(λ_1 z_r m²) ∓ λ_2/(λ_1 z_o m²)]^(−1)   (3.50)

from this hologram, and the transverse magnifications are:
for the real image

    M_t^r = λ_2 z_i/(λ_1 z_o m),   (3.51)

for the virtual image

    M_t^v = −λ_2 z_i/(λ_1 z_o m).   (3.52)
In the case of imaging a 3D object, the distortions due to the difference in the transversal and longitudinal directions will be minimal at M_t = λ_2/λ_1. Then Eqs (3.51) and (3.52) give M_t = M_l. The distortions of the real and virtual images due to the shift are also eliminated at a = b = 0 (Fig. 3.9), but the images and the zeroth order overlap, a situation unacceptable for optical holography. In a holographic radar capable of recording a complex hologram (Chapter 2), there is no problem with decoupling a single image and the zeroth order.
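A small sketch of Eqs (3.50)–(3.52) for a scaled hologram; the distances and wavelength ratio are assumed toy values, and the boolean flag selects the upper-sign (virtual) or lower-sign (real) image.

```python
def z_image(z_p, z_r, z_o, l1, l2, m, virtual=True):
    # Eq. (3.50): 1/z_i = 1/z_p +- l2/(l1*z_r*m**2) -+ l2/(l1*z_o*m**2)
    s = 1.0 if virtual else -1.0
    return 1.0 / (1.0 / z_p + s * l2 / (l1 * z_r * m**2) - s * l2 / (l1 * z_o * m**2))

def m_transverse(z_i, z_o, l1, l2, m, virtual=True):
    # Eqs (3.51)/(3.52): transverse magnification, negative for the virtual image
    sign = -1.0 if virtual else 1.0
    return sign * l2 * z_i / (l1 * z_o * m)

# Assumed toy geometry: equal wavelengths (l2/l1 = 1), hologram magnified twice
z_i_real = z_image(1.0, 2.0, 0.5, 1.0, 1.0, 2.0, virtual=False)
Mt_real = m_transverse(z_i_real, 0.5, 1.0, 1.0, 2.0, virtual=False)
```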
We shall now turn to the limiting longitudinal resolution in a holographic radar and consider the recording and reconstruction schemes (Fig. 3.9(a) and (b), respectively) in order to define longitudinal magnifications. Using a paraxial approximation, the authors of the work [142] have shown that the minimal resolvable longitudinal distance for a reconstructed real image is written as

    (d_r)_min ≅ Δl_r at d ≪ R_1,   (3.53)

where

    Δl_r = l′_r − l″_r,
    l′_r = λ_1 R_1 L_1 L_2/(λ″L_1 L_2 − λ″R_1 L_2 − λ_1 R_1 L_1),   (3.54)

    l″_r = λ_1 R_1 L_1 L_2/(λ′L_1 L_2 − λ′R_1 L_2 − λ_1 R_1 L_1),   (3.55)

λ′ and λ″ are the minimal and maximal wavelengths of the reconstructing beam. If the distance d is small compared to R_1, the longitudinal magnification is

    M_l^r = d_r/d at d ≪ R_1.
Hence, we have

    (d_r)_min ≥ Δl_r (M_l^r)^(−1),   (3.56)

where

    M_l^r = λ_1 λ_2 (L_1 L_2)²/[λ_2 L_1 L_2 − λ_2 R_1 L_2 − λ_1 R_1 L_1]²

and λ_2 = (λ′λ″)^(1/2) is the average wavelength of the reconstructing source. Similar expressions can be derived for the reconstruction of a virtual image.
The analysis we have made allows one to choose suitable recording and reconstruction procedures when one uses a holographic radar. Clearly, the parameters of these procedures are closely interrelated, so the radar and its processor should be regarded as an integral system.
Figure 3.9 Recording (a) and reconstruction (b) of a two-point object for finding longitudinal magnifications: 1, 2 – point objects, 3 – reference wave source and 4 – reconstructing wave source
3.2.3 The focal depth

Consider now the focal depth of an image produced by a holographic radar. Chapter 2 discussed the problem of recording a 3D image in a 2D medium using a classical holographic approach. The image quality then depends on the focal depth of the image. The process of reconstruction gives the opportunity to obtain a 3D image of a scene. Following the reconstruction, the image is again recorded in a 2D medium, so the problem of focal depth arises once more. This parameter can be defined by analogy with the recommendations suggested in Reference 96.
Figure 3.10 The focal depth of a microwave image: 1 – reconstructing wave source, 2 – real image of a point object and 3 – microwave hologram
The focal depth of a microwave holographic image is a longitudinal distance Δz_i along which the cross section of the beam reconstructing the virtual or real image of a point object is smaller than the resolution element Δx_i, so that it is perceived as a point image (Fig. 3.10). A formula for the focal depth of a virtual image can be derived from Eqs (3.40) and (3.41) for the longitudinal magnification:

    Δz_i = M_l^v Δz_o   (3.57)

or

    Δz_i = −(λ_1/λ_2)(M_t^v)² Δz_o.   (3.58)
With the relation for the transverse magnification (3.52), one can write

    Δz_i = −(λ_1/λ_2)M_t^v Δx_i (z_o/x_o).   (3.59)

The last factor in Eq. (3.59) can be written as

    z_o/x_o = tg α_o,   (3.60)

where α_o is the aperture angle in the object space. Then Eqs (3.59) and (3.60) yield

    Δz_i = −(λ_1/λ_2)M_t^v Δx_i tg α_o.   (3.61)
If the scale of the initial hologram is diminished m times, we have

    Δz_i = −m²(λ_1/λ_2)M_t^v Δx_i tg α_o.   (3.62)
Let us now define the quantity Δx_i. Although the resolution along the x- and y-axes is determined by different physical conditions, the resolution elements Δx and Δy must have the same values. Therefore, instead of Δx_i one can use δx describing the resolution along the pathway line provided by the aperture synthesis. Then Eqs (3.62) and (3.34) give

    Δz_i = −0.45λ_1² M_t^v H tg α_o/(λ_2 X_s sin³ϕ).   (3.63)
A characteristic feature of this expression is that Δz_i is inversely proportional to the synthetic aperture length X_s.
It is also worth discussing some practical aspects of scaling in a holographic radar. Unlike SAR, this type of radar has no anamorphism, that is, the image planes coincide in azimuth and range. So there is no need to use special optics to eliminate anamorphism. However, the image proportions along the x- and y-axes do not coincide because the scaling coefficient in azimuth, P_x, differs from that in range, P_y. According to Reference 81, P_x is defined as

    P_x = v/V,   (3.64)

where v is the velocity of the transparency on which the hologram is recorded and V is the velocity of the antenna array.

Along the y-axis, the scaling coefficient P_y is

    P_y = W/2a,   (3.65)

where W is the transparency width and 2a is the double length of the antenna array.
As a result, the holographic image appears to be defocused along the x- and y-axes. The image scale along these axes can be equalised by special optics – spherical or cylindrical telescopes. The optics suggested in Reference 81 can change the image scale from 4 to 25 times. Transversal and longitudinal scales of an image can be equalised by choosing a proper coefficient m. Therefore, the final values of longitudinal magnification and focal depth can be found only after one has selected all the scaling coefficients P_y, P_x and m.
To conclude, we summarise specific features of front-looking holographic radar systems.

1. It has been shown in Reference 74 that SAR systems have a serious limitation. When the view zone approaches the pathway line, the resolution in azimuth becomes much poorer. This makes it impossible to obtain quality images in the front view zone. In contrast, a holographic radar provides a high resolution directly under the aircraft.
2. Another essential advantage of a holographic radar is a high longitudinal resolution along the z-axis even in a continuous mode, providing 3D relief images.
3. The 3D character of a holographic radar image is a basis for obtaining range contour lines which can then be recalculated to get surface contours [81]. This operation mode is 'purely' holographic. In fact, it implements the principle of two-frequency holographic interferometry.
4. A high 3D quality of the image requires the use of a new parameter – the image focal depth, by analogy with optical systems.
5. The view field geometry in a holographic radar is equivalent to that of airborne infrared and optical devices, so it is possible to combine microwave images with infrared and optical images. This kind of complexing considerably increases the radar capability to detect and identify targets.
3.3 A tomographic approach to spotlight SAR

3.3.1 Tomographic registration of the earth area projection

Today there are two practically valuable cases when tomographic algorithms can be used for reconstruction of radar images: inverse aperture synthesis by rotating an object round its centre of mass (see Chapter 6) and aperture synthesis in a spotlight or telescopic mode [100]. We shall analyse the latter case.

A microwave radar with a synthetic aperture borne by a carrier and operating in a spotlight mode has a real antenna oriented onto an earth area to be surveyed. The area is illuminated for a longer time than is normally done in stripe surface mapping [100], so this type of SAR has a greater resolving power than a conventional side-looking SAR. Figure 3.11 shows the basic geometrical relations illustrating the spotlight mode. For simplicity, we shall consider a 2D case. Suppose that the coordinate origin is related to a certain point on the earth's surface; the x-axis is the range and the y-axis is the azimuth. During the carrier flight, a real antenna ray is incident onto this area at an angle ϑ to the x-axis. The SAR scans the target with wideband pulses, for example, linear frequency modulation (LFM) pulses of the Re S(t) type, where

    S(t) = exp[j(ω_o t + αt²)] at |t| ≤ τ/2, and S(t) = 0 otherwise,   (3.66)

Figure 3.11 The basic geometrical relations for a spotlight SAR
where ω_o is the SAR carrier frequency, 2α is the LFM slope and τ is the pulse duration. Note that the latter condition is not obligatory because the signal may have a narrow band. It is assumed that the target is in the far zone and the microwave phase front in the target vicinity is planar. The signal reflected by a unit area of the surface at the point (x_o, y_o) is

    r_o(t) = A Re{g(x_o, y_o) S(t − 2R/c)} dx dy,   (3.67)
where A is the amplitude coefficient accounting for the signal attenuation during the propagation; 2R/c is the time delay of the signal while it covers the distance R in both directions; g(x, y) = |g(x, y)| exp[jϕ_o(x, y)] is the density function, whose physical sense here is just the distribution of the earth surface reflectivity; and ϕ_o(x, y) is the signal phase shift due to the reflection. We also assume that the function g(x, y) remains constant within the given ranges of radiation frequencies and view angles ϑ.
r
1
(t) = ARe
_
p
ϑ
(u
o
)S
_
t −
2(R
o
+u
o
)
c
__
du,
whereR
o
isthedistancebetweentheSAR andthetarget centre.
The total signal from the area being surveyed is

    r_ϑ(t) = A Re{∫_(−L)^(L) p_ϑ(u) S(t − 2(R_o + u)/c) du},   (3.68)

where L is the area length along the u-axis and A = const, which is valid at R_o ≫ L. In contrast to the classical situation presented in Fig. 1.6, the linear integral used for the projection is taken along the line normal to the microwave propagation direction.
Now we substitute Eq. (3.66) for the LFM pulse into Eq. (3.68), simultaneously detecting the received signal with a couple of quadrature multipliers, and then we pass the output signals through low-frequency filtres. What we eventually get is the signal

    c_ϑ(t) = (A/2) ∫_(−L)^(L) p_ϑ(u) exp(j4αu²/c²) exp{−j(2/c)[ω_o + 2α(t − τ_o)]u} du,

where

    τ_o = 2R_o/c and −τ/2 + 2(R_o + L)/c ≤ t ≤ τ/2 + 2(R_o − L)/c.   (3.69)
The latter expression is the Fourier transform of the function p_ϑ(u) exp(j4αu²/c²), whose exponential factor can be easily eliminated if we find the inverse Fourier transform of c_ϑ(t), multiply the result by exp(−j4αu²/c²) and make the Fourier transform again. This quadratic phase factor can quite often be neglected. Eventually, we have

    c_ϑ(t) = (A/2) P_ϑ((2/c)[ω_o + 2α(t − τ_o)]),   (3.70)

where the time t satisfies Eq. (3.69).
wherethetimet satisfiesEq. (3.69).
Therefore, if one uses LFM pulses, demodulated signals received fromevery
illuminated direction arepart of a1D Fourier function of thecentreprojection of
the earth area at the respective view angle. In other words, the processor output
signal representsaFourier imageof theprojectionfunction(withinthetimeinterval
considered) and the data are registered in Fourier space. In accordance with the
projectiontheorem, thefunction(3.70) isacrosssection, takenat theangleϑ, of the
2DFourier transformG(X, Y) of thedesireddensityfunctiong(x, y). Itfollowsfrom
Eq. (3.69) that thefunctionP
ϑ
(X) isdefinedintherangeX
1
≤ X ≤ X
2
, where
X
1
=
2
c
_
ω
o
−ατ +
4αL
c
_

=
2
c

o
−ατ),
X
2
=
2
c
_
ω
o
+ατ −
4αL
c
_

=
2
c

o
+ατ). (3.71)
Since measurements are usually limited to a certain range of angles ϑ_min ≤ ϑ ≤ ϑ_max, it is clear that the counts of G(X, Y) can be obtained at the polar grid points within a limited circular segment (the shaded region in Fig. 1.7). The inner and outer radii of the circle, X_1 and X_2, are proportional to the smallest (ω_o − ατ) and the largest (ω_o + ατ) frequency values of the LFM pulse.

Further, one can employ classical algorithms based on interpolations and inverse Fourier transforms to reconstruct g(x, y). Before performing the latter procedure, it is useful to multiply the G function by the weight or 'window' function, to reduce parasitic side lobes in the image.
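Equation (3.71) fixes the annular region of Fourier space actually sampled; a sketch with assumed radar parameters (10 GHz carrier, 10 μs pulse, 100 MHz chirp bandwidth), none of which come from the text:

```python
import math

C = 3.0e8  # propagation speed, m/s

def band_limits(omega_o, alpha, tau):
    # Eq. (3.71): inner and outer radii of the sampled polar-grid segment
    X1 = (2.0 / C) * (omega_o - alpha * tau)
    X2 = (2.0 / C) * (omega_o + alpha * tau)
    return X1, X2

omega_o = 2.0 * math.pi * 10e9     # 10 GHz carrier
tau = 10e-6                        # 10 us pulse
alpha = math.pi * 100e6 / tau      # 2*alpha*tau = 2*pi * 100 MHz chirp bandwidth
X1, X2 = band_limits(omega_o, alpha, tau)
```

The width X2 − X1 of the annulus grows with the chirp bandwidth, which is what sets the range resolution.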
3.3.2 Tomographic algorithms for image reconstruction

The next step in developing this algorithm is to perform a 2D inverse Fourier transform in the polar coordinates. This can be done as follows. First, introduce the function

    F(x, y) = Σ_((u,v)∈P) G(u, v) δ(x − u, y − v),

where P is a polar grid; δ(·) is the delta-function of Dirac; G = S · W, where S is a complex-valued function prescribed by P (experimental data); and W is a real-valued weight function. We should also prescribe the real parameters a > 0, b > 0 and the integer parameters M > 0 and N > 0 such that the rectangular grid

    R = {(ma, nb) | −M/2 ≤ m < M/2, −N/2 ≤ n < N/2}

should satisfy a discrete representation of the object sought for.
The quantity M × N is equal to the number of pixels on the image, each pixel having the size a × b. According to the sampling theorem, 1/a and 1/b are approximately equal to the size of the P region along the x- and y-axes, while 1/(Ma) and 1/(Nb) should equal the spacing between the grid nodes along the same axes. Thus, the P grid consists approximately of M radial lines and N pixels along each line. Note that in classical tomography, we have M ≅ N and the grid P includes about πM/2 radial lines and N pixels along each line.
We can now estimate the inverse Fourier transform f of the function F across the region R:

    f(ma, nb) = ∫∫ F(x, y) E(xma + ynb) dx dy = Σ_((u,v)∈P) G(u, v) E(uma + vnb)

with E(z) = exp(j2πz).
A straightforward calculation of exact values of f in the region R using the last formula will require about M²N² elementary arithmetic operations. For f estimations in this region, however, one can employ conventional methods with a smaller number of operations, using the interpolation algorithm mentioned above and the convolution algorithm to be discussed below. There is also a fairly simple algorithm for a rigorous calculation of the functions f(ma, nb) with the so-called homogeneous concentrically square polar grid, which requires about MN log₂(MN) operations.
The polar grid P is described as

    P = {(u(i, k) = A(k) + iB(k), v(i, k) = C + kD) at 0 ≤ i < M, 0 ≤ k < N},

where A(k) = −(C + kD) tg(ϑ_o/2), B(k) = −2A(k)/(M − 1), ϑ_o is the size of the R region, and C and D are some selected real positive numbers.
The values of the f function are found in two steps. First, for −M/2 ≤ m < M/2 and 0 ≤ k < N we find the function

    H(m, k) = E(m²aB(k)/2) Σ_(i=0)^(M−1) {G(i, k) E(i²aB(k)/2)} E(−(m − i)²aB(k)/2).

Second, for −M/2 ≤ m < M/2 and −N/2 ≤ n < N/2 we calculate the desired function

    f(ma, nb) = E(nbC) Σ_(k=0)^(N−1) {H(m, k) E(maA(k))} E(nk/N).
Consider now a tomographic algorithm for reconstruction of SAR images, based on the convolution back projection (CBP) method. It employs the relation between the functions g(x, y) and G(X, Y) written in the polar coordinates [34]:

    g(ρ cos Φ, ρ sin Φ) = (1/4π²) ∫_(−π/2)^(π/2) ∫_(−∞)^(∞) G(r cos ϑ, r sin ϑ)|r| exp[jrρ cos(Φ − ϑ)] dr dϑ.
With the projection theorem, the last expression can be re-written as

    g(ρ cos Φ, ρ sin Φ) = (1/4π²) ∫_(−π/2)^(π/2) ∫_(−∞)^(∞) P_ϑ(r)|r| exp[jrρ cos(Φ − ϑ)] dr dϑ.   (3.72)
The integral around the variable r can be interpreted as an inverse Fourier transform with the argument ρ cos(Φ − ϑ); from the convolution theorem, Eq. (3.72) reduces to

    g(ρ cos Φ, ρ sin Φ) = (1/2π) ∫_(−π/2)^(π/2) (P_ϑ ∗ k_r)(ρ cos(Φ − ϑ)) dϑ,   (3.73)

where k_r is the Fourier transform of the function |r|.
The algorithm used in computer-aided tomography (CAT) involves the calculation of the P_ϑ ∗ k_r convolution for each value of ϑ, followed by an approximate integration around the variable ϑ by summing up the results obtained. Since one measures the function P_ϑ(r), the reconstruction algorithm should be based on Eq. (3.72) rather than Eq. (3.73). It follows from Eq. (3.72) that P_ϑ(r) must be known for all r values, but it is clear from the foregoing (see Eq. (3.71)) that P_ϑ(r) is known only for a limited range of r values with the centre at r = 2ω_o/c. Besides, the circular segment of the P_ϑ function (Fig. 3.12) should be shifted towards the origin. With these remarks in mind, Eq. (3.72) can be reduced to
    g(ρ cos Φ, ρ sin Φ) = (1/4π²) ∫_(ϑ_min)^(ϑ_max) {∫_0^(X_2−X_1) P_ϑ(r + X_1)|r + X_1| W_1(r) exp[jrρ cos(ϑ − Φ)] dr} × W_2(ϑ) exp[jX_1 ρ cos(Φ − ϑ)] dϑ,   (3.74)

where W_1(r) and W_2(ϑ) are additional weight functions [33].
The interpolation and convolution algorithms have been compared quantitatively. The comparison is based on two criteria: (1) the level of multiplicative noise (side lobes)

    R_MN = 10 lg(N²_oml/N²_iml),
where N
oml
is the number of pixels outside the major lobe on a point scatterer’s
imageandN
iml
isthenumberof pixelsinsidethemajorlobe; and(2) thecomputation
timeandcomplexity, or thenumber of elementaryarithmeticoperationstobemade.
Thevalueof R
MN
for theconvolutionalgorithmhasbeenfoundtobe−(30/40) dB.
A similar result is obtained using the interpolation algorithm with a high interpolation order (8–16). The computational complexity of the convolution algorithm is about N³ (N × N is the number of pixels on the image) and that of the interpolation algorithm is about kN² (k is a constant varying in proportion to the interpolation order). The computation time with the convolution algorithm is thus 3–5 times longer than with the interpolation algorithm. Its application is, however, preferred because it allows processing of primary data as they arrive (e.g. the internal integral in Eq. (3.74)) in real time for each projection individually. The convolution algorithm can be used for simultaneous (systolic) computations by a set of elementary processors, such as a multiplier, a summator and a storage register, which are not tightly coupled to one another.
There have been some attempts to design 'faster' tomographic algorithms using, for example, the Hankel transform. The principle of this algorithm is as follows. Because the functions g(ρ, φ) and G(r, ϑ) are periodic with the period 2π, they can be expanded into a Fourier series:

g(ρ, φ) = Σ_{n=−∞}^{∞} g_n(ρ) exp(jnφ),

G(r, ϑ) = Σ_{m=−∞}^{∞} G_m(r) exp(jmϑ),
where

g_n(ρ) = (1/2π) ∫_{−π}^{π} g(ρ, φ) exp(−jnφ) dφ,

G_m(r) = (1/2π) ∫_{−π}^{π} G(r, ϑ) exp(−jmϑ) dϑ.
In addition, we can show that

g_n(ρ) = 2π ∫_{0}^{∞} r G_n(r) J_n(rρ) dr,   (3.75)

where J_n(·) is the Bessel function of the first kind of order n. This relation is known as the nth-order Hankel transform [103].
Apparently, these relations can be applied to the reconstruction of g from the known values of G. An important advantage of this algorithm is the use of data in a polar format without interpolation. The Hankel transform takes the largest computational time. The available procedures for accelerating the computation are based on the representation of Eq. (3.75) as a convolution and the use of an asymptotic representation of the Bessel function.
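As a sanity check on Eq. (3.75), the nth-order Hankel transform can be evaluated by direct quadrature. A convenient order-0 test pair, chosen for this sketch and not taken from the text, is the Gaussian exp(−r²/2), which is its own zero-order Hankel transform (the 2π factor of Eq. (3.75) is omitted here):

```python
import numpy as np

def trap(y, dx, axis=-1):
    """Trapezoid rule on a uniform grid."""
    return (y.sum(axis=axis) - 0.5*(y.take(0, axis=axis) + y.take(-1, axis=axis))) * dx

def bessel_j(n, x):
    """J_n(x) from the integral form (1/pi) * int_0^pi cos(n*t - x*sin t) dt."""
    t = np.linspace(0.0, np.pi, 2001)
    integrand = np.cos(n*t - np.outer(np.atleast_1d(x), np.sin(t)))
    return trap(integrand, t[1] - t[0]) / np.pi

def hankel(f, rho, n=0, r_max=12.0, m=4000):
    """nth-order Hankel transform: int_0^inf r f(r) J_n(r rho) dr."""
    r = np.linspace(0.0, r_max, m)
    w = r * f(r)
    return np.array([trap(w * bessel_j(n, r*p), r[1] - r[0]) for p in rho])

rho = np.array([0.5, 1.0, 2.0])
h = hankel(lambda r: np.exp(-r**2/2), rho, n=0)
print(h)   # close to exp(-rho**2/2): the pair is self-reciprocal at order 0
```

Replacing the quadrature of J_n by its large-argument asymptotic form is exactly the kind of acceleration mentioned above.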
The available tomographic algorithms for image reconstruction in spotlight SAR also include signal processing designs accounting for the wavefront curvature. These employ more complex transformations than just finding Fourier images. The 'efficiency' of such algorithms should be evaluated taking into account the inadequacy of the problem formulation. We should recall that a problem is considered to be ill-posed if it has no solution, or if the solution is ambiguous or unstable, that is, it does not change continuously with the input data. It is the second circumstance that usually takes place in the case being discussed, because experimental data fit only a small region in the transformation space. Even if we assume that the G(X, Y) values are known over the whole polar grid, there is generally no sampling theorem for g(ρ, φ) in the polar format.
The tomographic approach allows estimation of all the major parameters of spotlight SAR. In particular, the resolution was estimated as

δ_x = πc/(2αT),

δ_y = πc/(ω_o sin(|ϑ_min| + |ϑ_max|)),
a value coinciding with a conventional radar estimate [100]. The conditions for the input data discretisation were defined. Besides, requirements for the synthesis were formulated under which one can ignore the deviation of the projections from a straight line and their incoherence due to the wavefront curvature in the target vicinity.
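Plugging representative numbers into these estimates shows the familiar SAR scalings: the chirp bandwidth sets δ_x and the angular aperture sets δ_y. All parameter values below are illustrative assumptions, not taken from the text:

```python
import numpy as np

c = 3.0e8                       # propagation speed, m/s

# delta_x = pi*c/(2*alpha*T): 2*alpha*T is the swept bandwidth in rad/s,
# so a chirp sweeping delta_f hertz gives the familiar delta_x = c/(2*delta_f).
delta_f = 100e6                 # assumed chirp bandwidth, Hz
delta_x = np.pi*c/(2*np.pi*delta_f)
print(delta_x)                  # 1.5 m

# delta_y = pi*c/(omega_o*sin(|theta_min| + |theta_max|))
f_o = 10e9                      # assumed carrier frequency, Hz (lambda = 3 cm)
span = np.deg2rad(6.0)          # assumed 6 degree aspect span
delta_y = np.pi*c/(2*np.pi*f_o*np.sin(span))
print(delta_y)                  # about 0.14 m, i.e. lambda/(2*span)
```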
We have made the above analysis for a 2D case, neglecting the SAR's altitude. This circumstance does not, however, violate the generality of our treatment. A correction for the altitude can easily be made by 'extending' the linear range by a factor of R_o/R, where R_o is the slant range to the target's centre and R is the slant range projection onto the earth plane.
We should like to emphasise the following important difference between CAT systems and SAR operating in a spotlight (telescopic) mode. In order to provide a high resolution, a CAT radar must cover a much larger range of angles than a SAR, say, 360° against 6°. This can be understood in terms of image reconstruction from data obtained within a limited region of a 2D space–time spectrum. In this sense, the spectral region utilised by the SAR is shifted relative to the origin by 2ω_o/c (Eq. (3.71)), while the spectral region of a CAT radar is not. We shall try to show why a high resolution can be achieved with a small aperture in SAR.
We should first recall that resolution corresponds to the width of the major lobe of the pulse response, normally at 3 dB. The resolving power of both CAT and SAR systems depends only on the frequency band used in a 2D spectrum and it should be independent of the carrier frequency ω_o, which is the frequency of the band shift. To illustrate, the range resolution for the shaded region in Fig. 1.7 is inversely proportional to the frequency bandwidth along the X-axis (or the u-axis) and the azimuthal resolution to that along the Y-axis (or the v-axis).
If the number of point objects is large, the image quality becomes poor due to signal interference. This effect arises because the pulse response of the system, usually expressed as a 2D function of the sin x/x type, contains a constant phase factor varying with the carrier frequency ω_o and the position of the point object. As is easy to see, the quality of a reconstructed image is independent of the ω_o variation provided that the function describing the object depends on a complex-valued variable with a random uncorrelated phase. This means that the phases of signals reflected by different scattering centres are not correlated. The authors evaluated the image quality from a formula similar to that for finding a root-mean-square error. One can suggest that the process of SAR imaging meets this condition. As a result, the spectrum of the 'initial image' occupies a wide frequency band in Fourier transform space and the object's reflectivity can be reconstructed from a limited shifted spectral region. This circumstance is similar to a fact well known in holography: the image of a diffusely scattering object can be reconstructed from any fragment of the hologram (Chapter 2).
These aspects of image quality can be treated in a different way. The bandwidth of space frequencies, Δv, which defines the azimuthal resolution, 'grows' with the shift frequency (Fig. 1.7) as

Δv = (|ϑ_min| + |ϑ_max|)(2ω_o/c).

For a CAT radar, ω_o = 0 and Δv is

Δv = (|ϑ_min| + |ϑ_max|)Δu,

where Δu = 4αT/c ≪ 2ω_o/c. Therefore, in order to obtain a high azimuthal resolution, one must have information about the whole range of view angles, 360°.
One can eventually say that the principal difference between the CAT and SAR systems is that the latter is coherent and can process complex signals.
To conclude, the tomographic principle of synthetic aperture operation does not rely on the analysis of Doppler frequencies of reflected signals. We shall turn to this factor again in Chapters 5 and 6 when we describe the imaging of a rotating object by an inverse synthetic aperture. It will be shown that the holographic and tomographic approaches do not need an analysis of Doppler frequencies.
Chapter 4
Imaging radars and partially coherent targets
Remote sensing of the earth surface in the microwave frequency range is a rapidly developing field of fundamental and applied radioelectronics [31,77]. It has already become a powerful method in many earth sciences, such as geophysics, oceanology, meteorology, resources survey, etc. Among microwave sensors, side-looking synthetic aperture radars (SARs) are especially capable of providing high-resolution images of a background area at any time, irrespective of weather conditions. Extensive information has been obtained by airborne radars and radars carried by satellites and spacecraft: SEASAT-A and SIR (USA), RADARSAT (Canada), Almaz-1 (Russia), ERS and ENVISAT (European Space Agency), Okean (Russia, Ukraine). A challenge to the radar scientist is the analysis of synthetic aperture imaging of extended targets.
The various tasks of remote SAR sensing of the earth include the study of the ocean surface, sea currents, shelf zones, ice fields, and many other problems [62]. Objects to be imaged are wind slicks, oil spills, internal waves, current boundaries, etc. Some of these targets are characterised by motions with unknown parameters, so they are considered to be partially coherent. This chapter focuses on theoretical problems of SAR imaging of such targets, while their practical aspects are discussed in Chapter 9.
In contrast to a conventional radar, which measures instantaneous amplitudes of a signal reflected by a target, the SAR registers the signal phase and amplitude for a finite synthesis time T_s. The conversion of these data to a radar image requires knowledge of the time variation of these characteristics, which can be found if one knows a priori the time variation of the reflected signal. When the view zone includes only stationary targets, the prescribed data have the form of the time dependence of the distances between the SAR and the objects being viewed. If the time variation of the signal phase is unknown, the coherence is violated. This may happen not only in SAR viewing of the sea surface but also because of sporadic fluctuations of the carrier trajectory (see Chapter 7). So partial coherence may be associated with the viewing conditions or with the target itself. The analytical method discussed below preserves its generality in this case.
4.1 Imaging of extended targets
Viewing of background surfaces by SAR involves two kinds of difficulty: one is associated with the evaluation and improvement of image quality and the other with image interpretation [59]. The first difficulty is due to the fact that one has to control the SAR performance (i.e. the operation of transmitters/receivers and imaging devices), to evaluate the capabilities of test systems, and to compare the data from the synthetic aperture and other sensors. The other difficulty arises from the diversity of image applications. The point is that one resolution element contains a large number of elementary scatterers reflecting coherent signals which interfere with one another. This produces speckle noise on the radar image. The situation becomes especially complicated, for example, in sea surface viewing, when elementary scatterers move, making the image intensity a random quantity. For this reason, one has to employ statistical methods to describe the imaging of extended targets proper. It is clear that both problems are closely interrelated. For instance, the statistical characteristics of speckle noise can be used to obtain information about the surface and to evaluate the image quality.
Image quality is affected by numerous independent parameters of target imaging. Therefore, image evaluation requires the use of quantitative factors which can objectively describe the image characteristics and relate this information to the parameters of the viewing system. The quality of any image, including a radar one, can be described by four parameters: geometrical accuracy, spatial resolution, radiometric precision and radiometric resolution.
Geometric accuracy defines the longitudinal and latitudinal precision of the image as an integral entity, which is particularly important for images of poorly recognisable surface areas. It also determines the mapping accuracy of different points on the image relative to one another.
Since a SAR is a coherent system, its ability to resolve neighbouring point scatterers depends on various factors, such as the relative phases of the scatterers, their relative effective cross sections, the system noise, etc. So it is reasonable to describe spatial resolution either with the half width of the major impulse response peak (usually, at 3 dB) or with the envelope of this response. The latter way enables one to find the extent to which the image is affected by the side lobes of the impulse response, which are comparable with the major peaks of responses from neighbouring, less intensive scatterers that can be erroneously taken for images of independent point targets. Spatial resolution can be evaluated by a photometric study of the image of a bright point object, say, of a corner reflector, or by determining the amplitude image profile of an object with a sharp reflectivity variation, followed by the calculation of the impulse response from this profile gradient. The second approach is more accurate because the resolution evaluation is less affected by the limited dynamic range of the aperture.
Radiometric precision indicates to what extent the various brightness levels of the image reproduce the reflectivity variation of the radar target at particular wavelengths, polarisations and radiation incidences. To measure the radiometric precision, one can use calibrated extended targets with different values of the specific cross-section (SCS).
Radiometric (contrast) resolution characterises the ability to discern the SCS values of neighbouring elements and is largely determined by the random signal fluctuations registered on the image. Such fluctuations may arise along the signal pathway from aperture or speckle noise. The radiometric resolution for homogeneous areas can be calculated from the probability density function of the image intensity.
The reflectivity distribution across the area of interest is often assumed to be normal. Then the amplitude distribution of the reflected signal is described by the Rayleigh formula, while the phase is taken to be uniform in the range from 0 to 2π. The radar image intensity, which is equal to the squared signal modulus, has an exponential distribution:

p(χ) = (1 + S)^{−1} exp[−χ/(1 + S)],   (4.1)

where χ is the intensity normalised to unit noise power and S is the signal-to-noise ratio on the image. The average intensity and the distribution dispersion are, respectively,

χ_m = 1 + S,   (4.2)

D_χ = (1 + S)².   (4.3)
SCS measurements involve a large ambiguity. From Eqs (4.2) and (4.3) it follows that the standard deviation of the SCS value is equal to the image intensity. To estimate this value, it is necessary to find the mean noise intensity and subtract it from the image intensity. We assume χ_m = S and take the estimate dispersion to be constant. If the radiometric resolution γ is defined at the level of one standard deviation (the ratio of the mean value plus one standard deviation to the mean value), then for the distribution described by Eq. (4.1) at zero noise we have

γ = 10 lg(2 + 1/S).   (4.4)
Obviously, γ will not be less than 3 dB even at S → ∞. The simplest way to improve the radiometric resolution is to average the viewing results over several neighbouring resolution elements of an extended target (incoherent signal integration). Then we shall have

γ = 10 lg[1 + (1 + S)/(N^{1/2}S)],   (4.5)

where N is the number of uncorrelated integrated versions of the image.
Incoherent signal integration by SAR can be provided only at the expense of spatial resolution, because this is normally done by multi-ray processing or by averaging the intensities of elements of a highly resolved image. For example, the SEASAT-A aperture used four-ray processing which, nevertheless, could not totally remove the speckle noise [99].
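A quick evaluation of Eqs (4.4) and (4.5) shows the single-look floor of about 3 dB and the improvement from incoherent averaging; note that Eq. (4.5) reduces to Eq. (4.4) at N = 1. The S and N values below are arbitrary:

```python
import math

def gamma_single(S):
    """Eq. (4.4): single-look radiometric resolution at zero noise, in dB."""
    return 10*math.log10(2 + 1/S)

def gamma_avg(S, N):
    """Eq. (4.5): N uncorrelated incoherently integrated looks."""
    return 10*math.log10(1 + (1 + S)/(math.sqrt(N)*S))

print(gamma_single(1e6))                          # just above 3 dB: the floor
print([gamma_avg(10.0, N) for N in (1, 4, 16)])   # smaller gamma = finer resolution
```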
Thus, there is a certain contradiction between spatial and radiometric resolutions [61]. A possible compromise is to choose a proper criterion for image quality. However, this is not very easy to do, for two reasons. First, such a criterion must account for specific features of the object being viewed, which may happen to be diverse. Second, one must adapt this criterion to the subsequent processing of the image – visual, automated, etc. Moore [99], for example, suggested using visual expertise of the image as a criterion for the evaluation of its quality. For a quantitative analysis he used the spatial grey-level (SGL) volume V = V_a V_R V_g(N), where V_a and V_R are the azimuth and range resolutions, respectively, and V_g(N) is the grey-level resolution defined by the number of uncorrelated integrated realisations, N.
Before proceeding with the discussion of criteria that can optimise the coherent-to-incoherent signal ratio in the synthetic aperture, we think it is necessary to consider briefly the available methods to describe SAR mapping of a typical fluctuating extended target – a rough sea surface.
4.2 Mapping of rough sea surface
At present we have much information on rough sea surface viewing by SAR systems [36,62], both airborne and carried by spacecraft. Most of the publications describe wave movements and their effect on radar image quality. However, this issue still remains controversial and is a subject of much debate [56].
When the sea surface is viewed by an airborne or space SAR, the probing radiation incidence varies from 20° to 70°. Bragg scattering by small-scale and capillary waves has the greatest effect on the reflection of electromagnetic radiation. The effect of large-scale (gravitational) waves on the radar image reveals itself in the modulation of scattering by small-scale waves. These phenomena are usually described by a 2D model which considers the sea surface as a superposition of Bragg scatterers – capillary and longer gravitational waves. They can also be described by a facet model, in which facets represent small-scale scatterers with superimposed capillary waves; the scatterers move with orbital velocities defined by large-scale waves [59]. The imaging of large-scale waves is affected by the following factors:
• the energy modulation of capillary waves due to hydrodynamic interaction between capillary and gravitational waves;
• the modulation of the facet inclination, which changes the effective incidence of the probing signal with respect to the facet normal, which, in turn, changes the Bragg scattering coefficient;
• the variations in the facet parameters (the position and the normal direction) and the Bragg scattering coefficient due to the facet movement during the synthesis.
The first two processes are important for sea viewing by any radar, whereas the third process affects only SAR imaging. The effect of moving waves on the image quality can be found analytically if one bears in mind that the synthesis time (0.1–3 s) is much shorter than the period of a large-scale wave (8–16 s). Then the functions that describe the time variation of the facet parameters and scattering coefficients can be expanded into a Taylor series. The major expansion terms are related to the radial components (along the slant range) of the orbital velocity and acceleration of the facets. These components are responsible for two effects: the velocity bunching and the image defocusing along the azimuth. The velocity bunching is associated with the azimuthal shift of each facet image because of the radial velocity effect, which represents a periodic rarefaction and thickening of virtual positions of elementary scatterers along the large-scale wave pattern. The bunching degree varies with the number of images of individual facets per unit azimuthal length, which is proportional to

ζ = (R/v)(du_r/dx),   (4.6)

where R is the slant range, v is the SAR carrier velocity, u_r is the radial velocity component and x is the azimuthal coordinate on the sea surface. For small values of |ζ|, this effect is linear and is characterised by a linear transfer function; for large |ζ| values (>π/2), it becomes nonlinear, leading to image distortions. It is greatest for waves running along the azimuthal coordinate but practically vanishes for radial waves.
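The linear/nonlinear boundary of the velocity bunching can be illustrated with Eq. (4.6). For a monochromatic swell u_r(x) = U sin(2πx/Λ), the bunching parameter peaks at (R/v)(2π/Λ)U. The airborne numbers below are assumptions chosen only for illustration:

```python
import math

def bunching_peak(R, v, U, swell_length):
    """Peak of |(R/v) du_r/dx| from Eq. (4.6) for u_r(x) = U*sin(2*pi*x/swell_length)."""
    return (R/v) * (2*math.pi/swell_length) * U

# Assumed airborne case: R = 10 km, v = 200 m/s, 1 m/s orbital velocity, 150 m swell.
peak = bunching_peak(R=10e3, v=200.0, U=1.0, swell_length=150.0)
print(peak, peak > math.pi/2)   # about 2.09: beyond the linear regime
```

Even modest orbital velocities can therefore push an azimuth-travelling swell into the nonlinear imaging regime.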
Image defocusing of large-scale waves is interpreted as being either due to the radial acceleration of the facets or due to the change in the relative aperture velocity because of the effect of the azimuthal phase velocity of sea waves [61]. Investigations have shown that the latter explanation is better substantiated. The major contribution to the image is made by the amplitude modulation of the reflected signal due to the surface roughness and facet inclination, whereas the velocity bunching plays a minor role. As for the image defocusing, it can be removed by correcting the signal processing conditions, for example, by an additional adjustment of the optical processor or by refining the base function during digital image reconstruction.
Generally, the sea wave behaviour appears to be quite complex. For this reason, available models of a probing signal reflected by the sea surface depend on the particular problem to be solved. Models accounting for the orbital motion of liquid droplets are too sophisticated to be extended to a large class of objects defined as partially coherent. Besides, they do not readily apply to the analysis of the influence of aperture parameters on image quality, because imaging is then determined only by the sea wave characteristics and viewing geometry. Probably, the only factor that affects the sea imaging by SAR and is related to the choice of radar parameters is the image defocusing. But even here, we deal with the mapping of sea waves, which is a particular problem that does not represent the whole class of partially coherent targets.
On the other hand, of academic interest and practical importance are the problems of background dynamics, various anomalies in the extended target reflectivity (for the sea, these are slicks, spills of surface-active substances, etc.), as well as the proper choice of the SAR design for viewing this class of targets. The analysis shows that the results obtained can be extended to a large number of partially coherent extended targets.
In principle, the basic characteristics of extended target images, including images of the sea surface, could be found by solving the problem of electromagnetic wave scattering by a moving plane. The methods of dealing with these problems are well known but they involve cumbersome calculations.
Another way of describing a radar signal reflected by an extended target is to introduce the autocorrelation function for the object being viewed, as is done in optical systems theory [29]. In this approach, a complex signal reflected by the sea surface can be written as U(x, t) = u(x, t)u_r(x, t), where u(x, t) is a co-factor accounting for the effect of large-scale sea waves and u_r(x, t) is a random complex component describing the signal reflected by a capillary wave. The autocorrelation function of this signal is

⟨U(x₁, t₁)U*(x₂, t₂)⟩ = u(x₁, t₁)u*(x₂, t₂)⟨u_r(x₁, t₁)u_r*(x₂, t₂)⟩,

where the asterisk denotes the complex conjugate and ⟨·⟩ represents the ensemble average.
The complex component u_r(x, t) can be written as

u_r(x, t) = f(x)α(t | x),   (4.7)

where f(x) is a complex random amplitude of the reflected signal, defined by the surface roughness, and α(t | x) is a complex reflectivity describing the time fluctuations of the reflected signal at the coordinate x.
Normally, f(x) describes a Gaussian random process with a zero average, which happens in the case of Bragg scattering of an electromagnetic wave on a rough surface. The spatial correlation function of this process can be approximated by the Dirac delta-function when the spacing between the features is sufficiently small, a condition often fulfilled in practice:

⟨f(x₁)f*(x₂)⟩ = pδ(x₁ − x₂),   (4.8)

where p is a factor proportional to the object's SCS and is defined by the governing radar equation.
The autocorrelation function of the time fluctuations of the surface is, in turn, equal to

⟨α(t₁ | x₁)α*(t₂ | x₂)⟩ = Γ[(t₁ − t₂) | x₁, x₂].   (4.9)

It has been termed partial or autocorrelation coherence [103]. The possibility to employ this formalism is a fundamental feature of partially coherent objects, which can then be treated as a special class of targets.
Thus, the autocorrelation function of the signal reflected by the sea surface can be written as

⟨U(x₁, t₁)U*(x₂, t₂)⟩ = u(x₁, t₁)u*(x₂, t₂)δ(x₁ − x₂)Γ[(t₁ − t₂) | x₁, x₂].   (4.10)
Taking the time fluctuations of the signal to be stationary, we can approximate the autocorrelation function by the expression

Γ[(t₁ − t₂) | x₁, x₂] = exp[−π(t₁ − t₂)²/τ_c²],   (4.11)

where τ_c is the correlation time interval.
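A useful property of the Gaussian model (4.11) is that the equivalent width of the coherence, ∫Γ dΔt, equals exactly τ_c (the Gaussian integral ∫exp(−πt²/τ_c²) dt = τ_c), which a direct numerical check confirms:

```python
import numpy as np

tau_c = 0.01                              # assumed correlation time, s
dt = np.linspace(-0.1, 0.1, 20001)        # +/- 10 tau_c covers the support
gamma = np.exp(-np.pi*dt**2/tau_c**2)     # Eq. (4.11) at fixed x1, x2
width = gamma.sum()*(dt[1] - dt[0])       # equivalent width of the coherence
print(width)                              # equals tau_c = 0.01
```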
The radar signal model discussed above agrees well with experimental data [112]. Equation (4.11) has a general form allowing the solution of a large range of problems involved in the analysis of extended target imaging by SAR systems. We shall further omit partially coherent background modulation by large-scale waves, assuming u(x, t) = 1, in order to be able to extend the results to a sufficiently large class of objects.
The model we have described can provide the basic statistical characteristics of partially coherent surface images, but we should first outline the imaging model itself.
4.3 A mathematical model of imaging of partially coherent extended targets
Suppose a SAR is borne by a carrier moving uniformly along a straight line with a velocity v. The carrier position is described by the coordinate y = vt and the slant range R, while the position of an arbitrary element of the viewed surface is described by the x-coordinate (Fig. 4.1). The imaging process is subdivided into two stages – the registration of the reflected signal (hologram recording) and the image reconstruction. This approach allows one to represent a general block diagram of the synthetic aperture (Fig. 4.2), with the complex amplitude of the reconstructed image written as a sum of convolutions:

s = f ∗ w ∗ h + n ∗ h,   (4.12)

Figure 4.1 The geometrical relations in a SAR
Figure 4.2 A generalised block diagram of a SAR (surface model → SAR receiver (+ noise) → SAR processor → radar image)
where f is a function of the viewed surface reflectivity; w and h are the impulse responses of the radar and the processor, respectively; n is the complex amplitude of additive noise; and ∗ denotes convolution.
The optimal quality of images of point objects is achieved by matching the impulse responses of the radar and the aperture processor:

h(y) = w*(y).   (4.13)
This condition cannot, however, provide an optimal image of an extended object proper [99], since it is impossible to integrate an incoherent signal and to reduce the speckle noise on the image. On the other hand, the fact that the image intensity g(u) = s(u)s*(u) is usually registered at the aperture processor output allows introducing the concept of a partially coherent processor in square-law filtration theory [58]. One can then account simultaneously for the effects of coherent and incoherent signal integration by the aperture and eventually obtain the major statistical characteristics of images of partially coherent extended targets. This type of processor will have the following impulse response:
Q(y₁, y₂) = γ(y₁ − y₂)h(y₁)h*(y₂),   (4.14)

where γ(y₁ − y₂) is a factor characterising the degree of incoherent signal integration. Then Eq. (4.13) will be valid for any class of targets.
To avoid cumbersome calculations, we shall introduce Gaussian approximations of the functions

w(y) = exp(−ay²/2) exp(jby²/2),   (4.15)

h(y) = exp(−Ly²/2) exp(−jby²/2),   (4.16)

γ(y₁ − y₂) = exp[−A(y₁ − y₂)²/2],   (4.17)

where ∫exp(−ay²/2) dy = (2π/a)^{1/2} is the width of the real antenna pattern projection onto the surface, which defines the synthesis range; b = 2π/(λR); λ is the aperture wavelength; and ∫exp(−Ly²/2) dy = (2π/L)^{1/2} = L_s is the synthesised aperture length. The A/a ratio describes the number of independent integrations of an incoherent signal, N = (1 + A/a)^{1/2}.
Within this SAR model, the image intensity is

g(u) = ∫∫_{−∞}^{∞} Q(u − y₁, u − y₂)[s_r(y₁) + n(y₁)][s_r*(y₂) + n*(y₂)] dy₁ dy₂,   (4.18)

where s_r(y) describes the complex hologram function and n(y) is a function describing the intrinsic noise of the aperture.
The model of a synthetic aperture with a partially coherent processor can be used to analyse statistical characteristics of images of partially coherent targets and to reveal the effects of coherent and incoherent signal integration on the image parameters.
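Equation (4.18) is straightforward to evaluate on a grid once the kernel Q of Eq. (4.14) is built from the Gaussian approximations (4.15)–(4.17). The sketch below, with all parameter values assumed, also checks that the resulting g(u) is real and non-negative, as an intensity must be, since Q is a Hermitian, positive semi-definite kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
L, b, A = 0.5, 1.0, 0.2              # assumed width, chirp and integration parameters
y = np.linspace(-5.0, 5.0, 201)      # aperture coordinate grid
dy = y[1] - y[0]

def g_intensity(u, s):
    """Discrete form of Eq. (4.18) with the kernel Q of Eq. (4.14)."""
    d = y - u                                          # argument u - y (even here)
    h = np.exp(-L*d**2/2) * np.exp(-1j*b*d**2/2)       # Eq. (4.16)
    gam = np.exp(-A*(y[:, None] - y[None, :])**2/2)    # Eq. (4.17)
    Q = gam * np.outer(h, np.conj(h))                  # Eq. (4.14)
    return np.sum(Q * np.outer(s, np.conj(s))) * dy**2

# A white complex 'hologram plus noise' record on the same grid.
s = rng.standard_normal(y.size) + 1j*rng.standard_normal(y.size)
g = g_intensity(0.0, s)
print(g.real, abs(g.imag))   # a real, non-negative intensity
```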
4.4 Statistical characteristics of partially coherent target images
Let us turn back to the synthetic aperture shown in Fig. 4.1. In one of the range channels, the reflected signal can be represented as a random complex field. For many real surfaces, the function f(y) in the centimetre wavelength range is a Gaussian random process with the zero average and the correlation function in the form of the Dirac delta-function obeying Eq. (4.8).
The time relations for the surface changes can be described by the autocorrelation function of Eq. (4.9); that of the reflected signal, assuming u(y₁, t₁) ≡ s₀(y₁, t₁), is

⟨s₀(y₁, t₁)s₀*(y₂, t₂)⟩ = pδ(y₁ − y₂)Γ[(t₁ − t₂) | y₁, y₂],   (4.19)

where the function Γ[(t₁ − t₂) | y₁, y₂] is defined by Eq. (4.11).
The process of imaging can be analysed in terms of the holographic approach, as applied to SAR. At the first stage, the hologram is recorded: s_h = s₀ ∗ w + n, where w is the impulse response of the aperture receiver, n is additive noise, and ∗ denotes convolution. At the second stage, the image is reconstructed with the intensity

g = ss* = (s_h ∗ h)(s_h ∗ h)*,   (4.20)

where s is the complex amplitude of the image and h is the impulse response of the aperture processor.
To smooth out the image fluctuations, one usually uses incoherently integrated signals. We can now evaluate the effects of two smoothing procedures: multi-ray processing and averaging of neighbouring resolution elements on the image [2]. Additionally, we shall consider the potentiality of incoherent signal integration on the hologram. In the first case, when the image is reconstructed by an optical processor, its intensity is [60]

g₁(u) = ∫ g(u, τ)D_a(τ) dτ.   (4.21)

Here D_a(τ) describes the light distribution across the aperture stop located in front of the secondary film which records the image, τ is the current exposure of the secondary film, and u is the reconstructed image coordinate.
In the second case, the image intensity is

g₂(u) = ∫ g(u′)G_a(u − u′) du′,   (4.22)

where G_a(u − u′) is the weighting function of the averaging.
In the third case, the image intensity is given by Eq. (4.20), in which the hologram function is

s_ha(y′) = ∫ s_h(y″)C_a(y′ − y″) dy″,   (4.23)

where C_a(y′ − y″) is the weighting function of the averaging and y′ = vt is the spatial coordinate on the hologram.
To simplify the calculations, let us approximate the above functions with the expressions

D_a(τ) = exp[−D(τv)²/2],
G_a(u) = exp[−Gu²/2],   (4.24)
C_a(y) = exp[−Cy²/2],

where ∫exp(−Dv²τ²/2) d(vτ) = D_e is the equivalent width of the aperture stop, and (2π/G)^{1/2} = G_e and (2π/C)^{1/2} = C_e are the equivalent widths of the respective weighting functions; Eqs (4.15) and (4.16) have been used as approximations of the functions w(y) and h(y).
We can now find the following parameters characterising the statistical properties of the image: the average intensity ⟨g_a⟩, the intensity dispersion σ_g², the smoothing degree ⟨g_a⟩²/σ_g², the autocorrelation range u_c and the signal-to-noise ratio W_g = ⟨g_a⟩/⟨g_an⟩, where ⟨g_an⟩ is the average noise intensity on the image.
4.4.1 Statistical image characteristics for zero incoherent signal integration
The parameters of interest can be found using the power spectrum of the image intensity [60]:

S_g(ω) = (2π)^{−1} ∫ |H(η, ω − η)|² S_h(η)S_h(ω − η) dη,   (4.25)

where H(η, ω) is a 2D transfer function of the aperture processor and S_h(ω) is the hologram power spectrum. In turn, H(η, ω) = H(η)H*(ω), where H = F{h} is the Fourier image of the function h(x). With Eq. (4.16), we get

H(η, ω) = (L² + b²)^{−1/2} exp[−(η² + ω²)L/(2(L² + b²))] exp[j(ω² − η²)b/(2(L² + b²))].   (4.26)
The function S_h(ω) represents the Fourier transform of the hologram spatial correlation function, R_h(y′), which can be described, for low intrinsic aperture noise, as

R_h(y′) = ⟨s_h(y′₁)s_h*(y′₂)⟩ = p(π/a)^{1/2} exp[−(y′₁ − y′₂)²(a² + b² + 2aB)/4a],   (4.27)

with y′ = y′₁ − y′₂ and B = 2π/(vτ_c)².
Hence, we have

S_h(ω) = p[2π/(a² + b² + 2aB)]^{1/2} exp[−ω²/(a² + b² + 2aB)].   (4.28)

By substituting Eqs (4.26) and (4.28) into Eq. (4.25) and using the expression cov_g(u) = F{S_g(ω)} for the background, we obtain

⟨g_a⟩ = ∫ S_g(ω) dω = 2^{1/2}πp[aL(a + L + 2B) + b²(a + L)]^{−1/2},
σ_g² = cov_g(0) = 2(πp)²[aL(a + L + 2B) + b²(a + L)]^{−1},
u_c = ∫ [cov_g(u)/cov_g(0)] du = π[2a/(a² + b² + 2aB) + 2L/(L² + b²)]^{1/2},
⟨g_a⟩²/σ_g² = 1.   (4.29)
Assuming that the spectrum of the intrinsic aperture noise recorded on the hologram is uniform and has spectral density S_hn(ω) = n, we find the respective parameters of the image noise:

⟨g_an⟩ = n(π/L)^{1/2},
σ_n² = n²(π/L),
u_cn = π[2L/(L² + b²)]^{1/2},
⟨g_an⟩²/σ_n² = 1.   (4.30)
The signal-to-noise ratio W_h = ⟨g_a⟩/⟨g_an⟩ can be reduced to W_h = W₀Q, where W₀ is a classical quantity and

Q = [(a/b² + 1/L)(a + L) + 2aB/b²]^{−1/2}   (4.31)

is a factor largely determined by the real antenna pattern.
A quantitative analysis of Eqs (4.29) and (4.30) shows that the statistical parameters of the image are practically independent of the surface fluctuations at the typical values of λ ≈ 3 cm, R ≈ 10–20 km, an antenna beamwidth of ≈ 0.02 rad and τ_c ≈ 0.01 s, and that the correlation ranges u_c and u_cn differ only slightly, with a maximum at L_s = (λR/2)^{1/2} (b = L). The latter circumstance can be attributed to the fact that the function h(y) essentially represents a linearly frequency-modulated signal, whose spectral width is proportional to its range at L_s > (λR/2)^{1/2} and inversely proportional at L_s < (λR/2)^{1/2}. So the spectral width of the image fluctuations is minimal at L_s = (λR/2)^{1/2}.
90 Radar imagingandholography
0 10 20
–1.0
0
Q, db
L
s
/√ lR/2
20 km
R=10 km
Figure4.3 Thevariationoftheparameter QwiththesynthesisrangeL
s
atλ = 3cm,
= 0.02andvariousvaluesof R
At the minimal width of the $|H(\omega)|$ function, the difference between $\langle g_a\rangle$ and $\langle g_{an}\rangle$ is also insignificant. This accounts for the maximum of the $Q$ function at $L_s = (\lambda R/2)^{1/2}$ (Fig. 4.3). A quantitative analysis of $Q$ shows that the influence of the real aperture pattern on the signal-to-noise ratio is slight and reveals itself only at large synthesis ranges, $L_s \gg (\lambda R/2)^{1/2}$.
4.4.2 Statistical image characteristics for incoherent signal integration
According to Eq. (4.21), the image intensity in multi-ray processing is [60]
$$g(u) = \iiint D_a(v\tau)\,\langle s_h(y'_1)s^*_h(y'_2)\rangle\,\exp[-L(y'_1-u-v\tau)^2/2]\,\exp[-L(y'_2-u-v\tau)^2/2]\,\exp[jb(y'_1-u)^2/2]\,\exp[-jb(y'_2-u)^2/2]\;dy'_1\,dy'_2\,d(v\tau). \quad (4.32)$$
This relation describes the impulse response of the aperture processor and enables one to find its transfer function:
$$H(\eta,\omega) = r(l^2+b^2+2Al)^{-1/2}\exp\{-[A(\eta+\omega)^2+l(\eta^2+\omega^2)]/[2(l^2+b^2+2Al)]\}\exp\{-jb(\eta^2-\omega^2)/[2(l^2+b^2+2Al)]\}, \quad (4.33)$$
with
$$r = [2\pi/(D+2L)]^{1/2},\qquad l = LD/(D+2L),\qquad A = L^2/(D+2L).$$
Following the same procedure and using the last two relations, we can find the characteristics of the background and noise on the image:
$$\langle g_a\rangle = 2^{1/2}\pi p\,\{D[aL(a+L+2B)+b^2(a+L)]+2aLb^2\}^{-1/2}, \quad (4.34)$$
$$\sigma_g^2 = 2(\pi p)^2\{[L(D+L)(a^2+b^2+2aB)+aD(L^2+b^2)+2aLb^2]^2-L^4(a^2+b^2+2aB)\}^{-1}, \quad (4.35)$$
$$u_c = 2^{1/2}\pi\{L(1+2L/D)/[L^2+b^2(1+2L/D)]+a/(a^2+b^2+2aB)\}^{1/2}, \quad (4.36)$$
$$\langle g_a^2\rangle/\sigma_g^2 = \{1+2L^2b^2/[aD(L^2+b^2)+LDb^2+2aLb^2]\}^{1/2}, \quad (4.37)$$
$$\langle g_{an}\rangle = \pi n[2/(LD)]^{1/2}, \quad (4.38)$$
$$\sigma_n^2 = \pi^2 n^2(1+2L/D)^{-1/2}/(LD), \quad (4.39)$$
$$u_{cn} = 2^{1/2}\pi\{L(1+2L/D)/[L^2+b^2(1+2L/D)]\}^{1/2}, \quad (4.40)$$
$$\langle g_{an}^2\rangle/\sigma_n^2 = (1+2L/D)^{1/2}. \quad (4.41)$$
The analysis of these relations shows that the image smoothing is improved, as was expected, while the correlation functions of the clutter and radar noise images are practically the same, $u_c \approx u_{cn}$. Figure 4.4 demonstrates the correlation range versus the normalised quantity $L_s$ for various degrees of incoherent integration $D_e$, or for
Figure 4.4 The dependence of the spatial correlation range of the image ($\langle u_c\rangle$, m versus $L_s/\sqrt{\lambda R/2}$) on normalised $L_s$ for multi-ray processing (solid lines) at various degrees of incoherent integration $D_e$ and for averaging of the resolution elements (dashed lines) at various $G_e$; $\lambda = 3$ cm, $R = 10$ km; 1, 5 – 0 (curves overlap); 2, 6 – $0.25(\lambda R/2)^{1/2}$; 3, 7 – $(\lambda R/2)^{1/2}$; 4, 8 – $2.25(\lambda R/2)^{1/2}$
different aperture stop sizes. It is clear that the image correlation at $L_s > (\lambda R/2)^{1/2}$ (the focused processing region) will only slightly vary with $D_e$, but the correlation range in incoherent integration will become larger (the defocused processing region). The parameter $Q$ then takes the form
$$Q = [(a+L)(a/b^2+1/L)+2aB/b^2+2a/D]^{-1/2}.$$
Its quantitative analysis indicates that the degree of incoherent integration does not much affect the signal-to-noise ratio.
When the resolutions of neighbouring elements are averaged according to Eq. (4.22), the processor transfer function is expressed as
$$H(\eta,\omega) = r_1(l_1^2+b_1^2+2A_1l_1)^{-1/2}\exp\{-[A_1(\eta+\omega)^2+l_1(\eta^2+\omega^2)]/[2(l_1^2+b_1^2+2A_1l_1)]\}\exp\{-jb_1(\eta^2-\omega^2)/[2(l_1^2+b_1^2+2A_1l_1)]\} \quad (4.42)$$
with
$$r_1 = [2\pi/(G+2L)]^{1/2},\qquad l_1 = LG/(G+2L),\qquad A_1 = (L^2+b^2)/(G+2L)\qquad \text{and}\qquad b_1 = bG/(G+2L). \quad (4.43)$$
Hence, we have
$$\langle g_a\rangle = 2^{1/2}\pi p\,\{G[a(L^2+b^2)+L(a^2+b^2+2aB)]\}^{-1/2}, \quad (4.44)$$
$$\sigma_g^2 = 2(\pi p)^2\{G^2[Lb^2+a(L^2+b^2)]^2+2Gb^2(L^2+b^2)[Lb^2+a(L^2+b^2)]\}^{-1/2}, \quad (4.45)$$
$$\langle g_a^2\rangle/\sigma_g^2 = \{1+2b^2(L^2+b^2)/[aG(L^2+b^2)+LGb^2]\}^{1/2}, \quad (4.46)$$
$$\langle g_{an}\rangle = \pi n[2/(LG)]^{1/2}, \quad (4.47)$$
$$\sigma_n^2 = \pi^2n^2\{LG[LG+2(L^2+b^2)]\}^{-1/2}, \quad (4.48)$$
$$\langle g_{an}^2\rangle/\sigma_n^2 = [1+2(L^2+b^2)/(LG)]^{1/2}, \quad (4.49)$$
$$u_c = 2^{1/2}\pi\{[Lb^2+a(L^2+b^2)]/[b^2(L^2+b^2)]+2/G\}^{1/2}, \quad (4.50)$$
$$u_{cn} = 2^{1/2}\pi[L/(L^2+b^2)+2/G]^{1/2}. \quad (4.51)$$
In this case, we also have $u_c \approx u_{cn}$. Figure 4.4 illustrates this dependence at various widths of the integrating function $G_e$. Obviously, the image correlation range increases in proportion with the integrating window width. The expression for the coefficient $Q$ coincides with Eq. (4.31), since the statistical properties of the background and noise images are similar and cannot contribute to the power.
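Eq. (4.51) can be evaluated directly to confirm that the noise-image correlation range grows as the averaging window widens. In this sketch $G$ is treated as the inverse-width parameter of the Gaussian integrating window of Eq. (4.22) (a wider window corresponds to a smaller $G$), and $b$ is taken as $4\pi/(\lambda R)$ — both are assumptions, since the definitions lie outside this excerpt:

```python
import numpy as np

# Sketch of Eq. (4.51): u_cn = sqrt(2)*pi*[L/(L**2 + b**2) + 2/G]**0.5.
# G is assumed to be the inverse-width parameter of the Gaussian averaging
# window (wider window <-> smaller G); b = 4*pi/(lam*R) is an assumed
# focusing parameter.  Neither definition appears in this excerpt.
lam, R = 0.03, 10e3
b = 4*np.pi / (lam*R)
Ls = 12.0                          # synthesis range, m
L = 2*np.pi / Ls**2

G = np.array([10.0, 1.0, 0.1, 0.01])                 # progressively wider windows
u_cn = np.sqrt(2)*np.pi * (L/(L**2 + b**2) + 2.0/G)**0.5

# the correlation range grows monotonically as the window widens
grows = bool(np.all(np.diff(u_cn) > 0))
```

This reproduces the statement above that the image correlation range increases with the integrating window width.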
In incoherent signal integration on the hologram, the $H(\eta,\omega)$ function is described by Eq. (4.26) and the $R_h$ function, after averaging by Eq. (4.23), takes the form
$$R_h(y') = p[\pi/C(aC+a^2+b^2+2aB)]^{1/2}\exp[-(y')^2C(a^2+b^2+2aB)/4(aC+a^2+b^2+2aB)].$$
Therefore, the noise correlation function on the hologram can be written as
$$R_{hn}(y') = n(\pi/C)^{1/2}\exp[-(y')^2C/4]. \quad (4.52)$$
This expression yields the hologram signal-to-noise ratio $W_h = R_h(0)/R_{hn}(0) = W_{h0}Q_h$, where $W_{h0}$ is the single-pulse ratio defined by the governing radar equation and
$$Q_h = N_i(1+N_i^2/K_a)^{1/2},\qquad K_a = d_a/(2vT_{ir}),$$
$d_a$ is the horizontal dimension of the real antenna, $T_{ir}$ is the repetition period of the pulses, and $N_i$ is the number of pulses integrated by the hologram with incoherent averaging. The variation of $Q_h$ with $N_i$ is shown in Fig. 4.5. One can see that incoherent integration is profitable only at $N_i \le K_a$, which agrees well with the condition $C_e \le y'_{hc} = 2(\pi a)^{1/2}/b$, where $y'_{hc}$ is the correlation range of the hologram. If the latter condition is fulfilled, the basic statistical characteristics of the image can be described by expressions similar to Eqs (4.29)–(4.31), which means that there is no image smoothing.
Figure 4.5 The variation of the parameter $Q_h$ with the number of integrated signals $N_i$ at various values of $K_a$ ($K_a = 1$ to 10)
The results obtained allow the following conclusions to be made:
1. For typical conditions of SAR viewing of background surfaces and for real times $\tau_c \approx 0.01$ s of reflected signal correlation, all the image parameters discussed above are actually independent of the degree of coherence of the objects being viewed, in contrast to the radar resolving power.
2. The statistical properties of images of background surfaces and aperture noises are practically identical. This fact can be successfully used to calibrate radar apertures designed for measurement of background SCS. The maximum period of spatial image fluctuations is observed at the synthesis range $L_s = (\lambda R/2)^{1/2}$.
3. The analytical expressions we have derived can be used for the calculation of the image smoothing degree in the case of incoherent signal integration.
4. The signal-to-noise ratio of the image is nearly independent of the synthesised aperture length and of the incoherent integration range.
5. Incoherent integration on a hologram does not change the statistical characteristics of the image, that is, it does not lead to image smoothing, provided that the integrating function width is smaller than the hologram correlation range. Otherwise, there is no noticeable improvement of the signal-to-noise ratio on the hologram; therefore, the signal integration procedure becomes meaningless.
6. The methods of incoherent signal integration we have discussed (multi-ray processing and averaging of resolution elements) give similar results on the smoothing of image fluctuations. Multi-ray processing is performed automatically if the image is reconstructed by exposing the secondary film in an optical processor. In the case of digital reconstruction, usually based on fast Fourier algorithms, the averaging of resolution elements is preferable because the algorithm performance is very effective when one has to process vast amounts of data. Application of special-purpose digital processors may improve the situation.
4.5 Viewing of low contrast partially coherent targets

The major SAR characteristics for viewing low contrast targets such as sea currents, wind slicks, oil spills, etc., are the spatial resolution and the radiometric (contrast) resolution determined by the number of incoherent signal integrations [58]. It is clear that a proper choice of the proportion between spatial and radiometric resolutions (coherent and incoherent integration) will depend not only on the radar parameters but also on the properties of the target to be viewed. So it is reasonable to consider optimisation of SAR performance in the context of partial coherence of signals reflected by an extended target.

Recall that the process of imaging includes two stages. First, the received signal is recorded on a radar hologram as $u(y') = \int w(y'-y)f(y)\,dy$, where $w(y'-y)$ is the impulse response of the aperture receiver, $f(y)$ is a function describing the spatial distribution of the target reflectivity, $y$ is the coordinate in the viewed surface plane, $y' = vt$ is the SAR carrier coordinate, and $t$ is the current viewing time. Second, the image field is recorded: $g(y'') = \int u(y')h(y''-y')\,dy'$, where $h(y''-y')$ is the impulse response of the aperture processor and $y''$ is the image coordinate.
Imaging can be described in terms of linear filtration theory. The concepts of a quadratic filter and a frequency-contrast characteristic (FCC), well known from optics, can be used to present the image intensity:
$$S_I(\omega) = S_o(\omega)K_R(\omega), \quad (4.53)$$
where $S_o(\omega)$ is the space frequency spectrum of the SCS of the object and $K_R(\omega)$ is the FCC of the aperture.
For instance, if the average SCS of the background is $\sigma_0$, the distribution of a low contrast target is described by the function
$$\sigma(y) = \sigma_0[1-m\exp(-y^2A/2)], \quad (4.54)$$
where $m < 1$ is a factor defining the target's initial contrast $K_{in} = (1-m)/(1+m)$ with respect to the background, $A = 2\pi/l^2$ is a parameter related to the target's size $l$, and the aperture FCC is given by the expression
$$K_R(\omega) = \exp[-\omega^2/(2z)], \quad (4.55)$$
where $z$ denotes its width. Then, using Eq. (4.53), we can write the spatial distribution of the image intensity:
$$g(y'') = \sigma_0\{1-m[z/(A+z)]^{1/2}\exp[-(y'')^2Az/2(A+z)]\}. \quad (4.56)$$
Hence, the object's contrast on the image is
$$K_{out} = \{1-m[z/(A+z)]^{1/2}\}/\{1+m[z/(A+z)]^{1/2}\} \quad (4.57)$$
and its observable size is
$$l'' = [2\pi(1/A+1/z)]^{1/2}. \quad (4.58)$$
It is clear that the contrast and target size on the image become distorted, but knowledge of the explicit quantity $K_R(\omega)$ can give the real object's parameters.
For targets whose reflectivity varies with time randomly, the signal received by the aperture possesses a partial coherence and the hologram function $u(y')$ is no longer a convolution integral. In that case it would be unreasonable to use linear filtration theory. We shall show, however, that statistical methods and physical assumptions concerning the time fluctuations of objects' reflectivities can make this convenient formalism work successfully.

For this, we shall find the aperture response for a low contrast target ($m \ll 1$), whose reflectivity distribution is described by the function
$$f(y,t) = [1+m\cos(\Omega y)]f(y)\alpha(t\,|\,y), \quad (4.59)$$
where $\Omega$ is a certain space frequency and $\alpha(t\,|\,y)$ is a random complex function describing the time fluctuations of the reflected signal.
The aperture FCC can be written as $K_R(\Omega) = K_{out}/K_{in}$, where $K_{in} = (\sigma-\sigma_{m=0})/\sigma_{m=0} \approx 2m$; $K_{out} = (\langle g\rangle-\langle g\rangle_{m=0})/\langle g\rangle_{m=0}$; $\sigma = \langle f(0,0)f^*(0,0)\rangle$; $\sigma$ and $\langle g\rangle$ are the average values of the target's SCS and image intensity, respectively.
The correlation function of the field in Eq. (4.59) is defined as
$$R_f = \langle f(y_1,t_1)f^*(y_2,t_2)\rangle = [1+m\cos(\Omega y_1)][1+m\cos(\Omega y_2)]\,\langle f(y_1)f^*(y_2)\rangle\,\langle\alpha(t_1\,|\,y_1)\alpha^*(t_2\,|\,y_2)\rangle. \quad (4.60)$$
For many real surfaces, $f(y)$ in the centimetre wavelength range is a Gaussian process with a zero average and a correlation function in the form of the Dirac delta-function of Eq. (4.8). Assuming the time fluctuations of the signal to be a steady-state random process, we can use the approximation of Eq. (4.11). Together with Eq. (4.60) and $m \ll 1$, $y' = vt$, we shall have
$$R_f = [1+m\cos(\Omega y_1)+m\cos(\Omega y_2)]\,\delta(y_1-y_2)\exp[-(y'_1-y'_2)^2B/2]$$
with $B = 2\pi/(v\tau)^2$.
The average image intensity is
$$\langle g\rangle = \iint h(y'_1)h^*(y'_2)R_u(y'_1,y'_2)\,dy'_1\,dy'_2, \quad (4.61)$$
where $R_u(y'_1,y'_2)$ is the correlation function of the hologram:
$$R_u(y'_1,y'_2) = \iint R_f\,w(y_1-y'_1)w^*(y_2-y'_2)\,dy_1\,dy_2.$$
Using the Gaussian approximations of the impulse responses in Eqs (4.15) and (4.16), we obtain, instead of Eq. (4.61), $\langle g\rangle = g_0 + 2g_\Omega$, where $g_0 = 2^{1/2}\pi[aL(a+L+2B)+b^2(a+L)]^{-1/2}$ is the average intensity of the fluctuating background image and
$$g_\Omega = mg_0\exp\{-(\Omega^2/4)(a+L)(a+L+2B)/[aL(a+L+2B)+b^2(a+L)]\}. \quad (4.62)$$
For real viewing, we have $b^2 \gg aL$ and $b^2 \gg LB$, which reduces Eq. (4.62) to
$$K_{out} = 2m\exp[-\Omega^2(a+L+2B)/4b^2],\qquad K_R(\Omega) = \exp[-\Omega^2(a+L+2B)/4b^2]. \quad (4.63)$$
There is a certain relationship between the FCC and the azimuthal resolution of the aperture. The latter can be found from the width of the averaged impulse response to a fluctuating point target:
$$\delta_a = \int \langle g(y'')\rangle/\langle g(0)\rangle\,dy''. \quad (4.64)$$
The signal reflected by this target can be prescribed as $f(y,y') = \delta(y)\alpha(y')$, where $\alpha(y')$ describes the time fluctuations of the signal, whose correlation properties are defined by Eq. (4.11). With Eqs (4.61) and (4.64), we get
$$\delta_a = [\pi(a+L+2B)/b^2]^{1/2}.$$
Figure 4.6 The variation of the parameter $\Omega_e$ ($\Omega_e$, rad/m versus $L_s$, m) with the synthesis range $L_s$ at various signal correlation times $\tau_c$ ($\tau_c \to \infty$, 0.4 s, 0.2 s, 0.1 s)
Of course, the aperture FCC can be presented as $K_R(\Omega) = \exp[-\Omega^2\delta_a^2/(4\pi)]$ and its equivalent width as
$$\Omega_e = \int K_R(\Omega)\,d\Omega = 2\pi/\delta_a.$$
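The pair $\delta_a = [\pi(a+L+2B)/b^2]^{1/2}$, $\Omega_e = 2\pi/\delta_a$ can be evaluated numerically to reproduce the saturation behaviour of Fig. 4.6. The mapping of $(a, b, L)$ onto the radar parameters repeats the assumptions used earlier for the Fig. 4.3 sketch and is not spelled out in this excerpt:

```python
import numpy as np

# Sketch of delta_a = [pi*(a + L + 2B)/b**2]**0.5 and Omega_e = 2*pi/delta_a.
# The parameter mapping (a, b, L as below) is an assumption, not a formula
# from the text.
lam, R, theta, v = 0.03, 10e3, 0.02, 250.0
a = 2*np.pi / (theta*R)**2
b = 4*np.pi / (lam*R)

def omega_e(Ls, tau_c):
    """Equivalent FCC width for synthesis range Ls and correlation time tau_c."""
    L = 2*np.pi / Ls**2
    B = 0.0 if np.isinf(tau_c) else 2*np.pi / (v*tau_c)**2
    delta_a = np.sqrt(np.pi*(a + L + 2*B) / b**2)
    return 2*np.pi / delta_a

# with fluctuations (tau_c = 0.1 s) Omega_e saturates beyond L_s ~ v*tau_c,
# while without fluctuations it keeps growing with L_s
sat  = omega_e(180.0, 0.1)    / omega_e(100.0, 0.1)
free = omega_e(180.0, np.inf) / omega_e(100.0, np.inf)
```

Under these assumptions the fluctuating case changes by only about 1 per cent between $L_s = 100$ and 180 m, whereas the non-fluctuating case still grows by roughly 50 per cent, mirroring the curves of Fig. 4.6.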
The concept of the FCC allows the consideration of a SAR as a linear filter of space frequencies. On the other hand, the filter description essentially depends on the target's behaviour through the parameter $B$. Figure 4.6 illustrates the variation of $\Omega_e$ with the synthesis range $L_s$ for an airborne SAR. The basic radar parameters are $\lambda = 3$ cm, $R = 10$ km, ≈ 0.02, and $v = 250$ m/s. For zero signal fluctuations ($\tau \to \infty$), the width $\Omega_e$ increases in proportion with $L_s$, but at $L_s \approx R$ the linear dependence is violated because of the antenna pattern effect through the parameter $a$. The signal fluctuations make the resolution independent of $L_s$ at $L_s > v\tau$; it is then rather defined by the correlation time $\tau$. Equation (4.63) can be re-written in the form
$$K_R(\Omega) = K_0(\Omega)K_\tau(\Omega), \quad (4.65)$$
where $K_0(\Omega) = \exp[-\Omega^2(a+L)/(4b^2)]$ is the aperture FCC in the absence of signal fluctuations and $K_\tau(\Omega) = \exp[-\Omega^2B/(2b^2)]$ is multiplicative noise arising from fluctuations in the radar channel.
Therefore, a SAR can be described as a set of two filters – a filter of space frequencies $K_0(\Omega)$ and a narrowband space–time filter $K_\tau(\Omega)$, whose bandwidth is determined by the time of the surface fluctuation correlation. The image has a spatial intensity spectrum $S_I(\Omega) = S_0(\Omega)K_R(\Omega)$. On the other hand, one can consider that the aperture measures the space–time spectrum $S(\Omega) = S_0(\Omega)K_\tau(\Omega)$ if one assumes its FCC to be independent of the target's properties and describes the radar with the function $K_0(\Omega)$.
Figure 4.7 The parameter $Q$ as a function of the synthesis range $L_s$ at various signal correlation times $\tau_c$ ($\tau_c \to \infty$, 0.4 s, 0.2 s, 0.1 s)
To conclude, the parameters of radar apertures for viewing fluctuating targets can be optimised by matching the characteristics $K_0(\Omega) \approx K_\tau(\Omega)$. The latter equality provides the imaging of a surface with nearly as much detail as is potentially possible for a particular type of object. This equality can be obtained by choosing the value of $L_s$ equal to $L_s = v\tau$, which means that the synthesis time should not be longer than the time of the signal correlation. As a result, the aperture resolution appears to be limited to $\delta_a = \lambda R/2L_s$, but this choice of $L_s$ provides $N = R/L_s$ image realisations. The aperture contrast resolution, defined by the number of incoherent integrations $N$, is in turn independent of the signal coherence time $\tau$. So the choice of $L_s > v\tau$ does not provide the desired spatial resolution, but it decreases $N$, making the contrast resolution poorer.

The potentiality of the SAR in viewing low contrast targets can be conveniently described by the parameter $Q = Nd_h/(2\delta_a)$, equal to unity at zero fluctuations. If fluctuations are present, $Q$ essentially depends on the chosen synthesis range $L_s$ (Fig. 4.7). For example, the signal fluctuations at $L_s < v\tau$ do not noticeably affect the image quality and $Q = 1$. At $L_s > v\tau$, the aperture performance proves to be inferior to its potentiality ($Q < 1$), since the real aperture resolution does not fit the chosen value of $L_s$ but is rather defined by the signal correlation time $\tau$.
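The compromise described above can be illustrated with a deliberately simplified model in which the effective aperture is capped by the correlation length $v\tau_c$; the cap and the look-count scaling below are modelling assumptions, not formulas from the text:

```python
# A deliberately simplified model of the compromise described above: the
# effective aperture is capped by the signal correlation length v*tau_c, so
# azimuth resolution stops improving for L_s > v*tau_c while the number of
# independent looks (taken proportional to 1/L_s) keeps falling.  The cap,
# the look-count scaling and the footprint value are modelling assumptions.
lam, R, v, tau_c = 0.03, 10e3, 250.0, 0.2
Lcorr = v * tau_c                                  # correlation length, 50 m

def delta_a(Ls):
    """Azimuth resolution with the correlation-length cap (model)."""
    return lam * R / (2.0 * min(Ls, Lcorr))

def looks(Ls, footprint=200.0):                    # assumed illuminated length
    """Number of independent looks available for incoherent averaging (model)."""
    return footprint / Ls

# at L_s = v*tau_c both goals are balanced; doubling L_s beyond that halves
# the number of looks without improving resolution
d_match, d_over = delta_a(Lcorr), delta_a(2*Lcorr)
n_match, n_over = looks(Lcorr), looks(2*Lcorr)
```

In this toy model $\delta_a$ is identical for $L_s = v\tau_c$ and $L_s = 2v\tau_c$, while the look count halves — the behaviour the text attributes to choosing $L_s > v\tau$.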
We can draw the following conclusions from these results:
• To describe the imaging of fluctuating targets, one can make use of linear filtration theory, representing the radar as a filter with a certain FCC. The aperture can be considered as a device measuring the space–time spectrum of the object being viewed.
• One can suggest that the time fluctuations of the signal in the viewing channel create multiplicative noise decreasing the azimuthal resolution of the aperture.
• This approach provides a reasonable compromise between the potential azimuthal resolution and the aperture contrast resolution. This compromise can be achieved by choosing the synthesis time equal to the signal correlation time.
The overall analysis of the results presented in this chapter shows that the available methods for describing the properties of sea surface images can be supplemented by a more general approach to SAR viewing of partially coherent objects. The concept of partial coherence allows one to cover a much larger class of targets and to describe the basic principles of their imaging. The advantages of this approach are as follows: first, it is based on a fairly general model of the radar signal. Expression (4.10) accounts for general and specific features of the viewing of fluctuating targets. We shall show in the following chapters that the correlation function of time fluctuations in Eq. (4.13) can be used, for example, to describe trajectory instabilities of the SAR carrier. Second, this approach provides an analytical description of the major statistical characteristics of images of partially coherent targets; these, in turn, enable one to evaluate image quality. Finally, the relative simplicity of mathematical calculations and the clear physical sense of the results obtained make this approach advantageous and convenient as a tool for solving practical tasks associated with SAR design and remote sensing of partially coherent targets.
Chapter 5

Radar systems for rotating target imaging (a holographic approach)

The possibility of using the rotation of an object to resolve its scattering centres was, probably, first shown by W. M. Brown and R. J. Fredericks [21]. Independently, microwave video imaging of rotating objects was demonstrated theoretically and experimentally by other researchers [109].

An analysis of three approaches (in terms of the antenna, range-Doppler and cross-correlation theories) was made in References 104 and 146 for the imaging of rotating targets. Here we discuss this problem in terms of a holographic approach.

5.1 Inverse synthesis of 1D microwave Fourier holograms
We shall start with the basic principles of inverse synthesis of microwave holograms of an object rotating around the centre of mass. The analysis will be based on the holographic approach discussed in Sections 1.2 and 2.4.

Lens-free optical Fourier holography [131] implies that an optical hologram is recorded when the amplitude and phase of the field scattered by the object are fixed in a certain range of bistatic angles $0 < \beta < \beta_0$ (Fig. 5.1). In the microwave range, this is equivalent to the displacement of the radar receiver along arc $L$ of radius $R_0$ from point A to point B, while the transmitter remains immobile. A coherent background must be created by a reference source located in the object plane. Since such a source is unfeasible, the coherent background is created by an artificial reference wave in the radar receiver (Chapter 2). In further analysis, we shall use a model object made up of scattering centres described by Eq. (2.3). Then a direct synthesis along arc $L$ of radius $R_0$ by a bistatic radar system (Fig. 5.1) can produce a classical microwave Fourier hologram [109], with a subsequent image reconstruction as a 1D distribution of the scattering centres and their effective scattering surfaces.

To discuss the principles of inverse synthesis and formation of a 1D microwave Fourier hologram, we shall make use of the well-known relation for uni- and bistatic
Figure 5.1 A schematic diagram of direct bistatic radar synthesis of a microwave hologram along arc $L$ of a circle of radius $R_0$: 1 – transmitter, 2 – receiver
radars [69]. According to Kell's theorem, at small bistatic angles $\beta$ the bistatic radar cross-section (RCS) for the angle $\alpha$ (Eq. 2.5) and the bistatic angle $\beta$ is equal to the unistatic RCS measured along the bisectrix of the angle $\beta$ at a frequency reduced by a factor of $\cos(\beta/2)$ (Chapter 2).

Kell's theorem, and the fact that the rotation of a transmitter–receiver unit around the object can be replaced by the rotation of the object round its axis passing through the centre of mass normal to the radar viewing line, lead one to the conclusion that such a unit, fixed at the point C (Fig. 5.2), can synthesise a 1D microwave Fourier hologram identical to a lens-free optical Fourier hologram. This approach was first discussed by S. A. Popov et al. [109].
In order to find analytical relations for the classical and synthesised Fourier holograms, let us consider the schematic diagram in Fig. 5.3. To simplify the calculations, we shall deal only with one $k$th scattering centre with the coordinates
$$r_{kx} = r_k\sin\theta_k\cos(\varphi+\varphi_k),\qquad r_{ky} = r_k\sin\theta_k\sin(\varphi+\varphi_k),\qquad r_{kz} = r_k\cos\theta_k, \quad (5.1)$$
where $\varphi = \Omega t$ is the object rotation angle, $\Omega = |\vec\Omega|$ is the angular velocity of the rotating object, $\varphi_k$ is the initial angle between the $\vec r_k$ vector projection on the $xOy$ plane and the positive $x$-axis, and $\theta_k$ is the angle between the $\vec r_k$ vector and the positive $z$-axis. In our further analysis, we shall follow References 109 and 145.
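Eq. (5.1) translates directly into code; the sketch below computes the Cartesian coordinates of a scattering centre at time $t$ and checks the invariants of the rotation (constant radius and constant $z$). The sample values of $(r_k, \theta_k, \varphi_k)$ are arbitrary:

```python
import numpy as np

# Direct implementation of Eq. (5.1): Cartesian coordinates of the k-th
# scattering centre of an object rotating with angular velocity Omega about
# the z-axis.  The sample values of (r_k, theta_k, phi_k) are arbitrary.
def centre_coords(r_k, theta_k, phi_k, Omega, t):
    phi = Omega * t                                # rotation angle phi = Omega*t
    return np.array([
        r_k * np.sin(theta_k) * np.cos(phi + phi_k),
        r_k * np.sin(theta_k) * np.sin(phi + phi_k),
        r_k * np.cos(theta_k),                     # z-component is time-invariant
    ])

r_k, theta_k, phi_k = 1.5, np.deg2rad(60.0), np.deg2rad(20.0)
Omega = 2*np.pi                                    # one revolution per second
p0 = centre_coords(r_k, theta_k, phi_k, Omega, 0.0)
p1 = centre_coords(r_k, theta_k, phi_k, Omega, 0.25)   # a quarter turn later
```

The quarter-turn check below confirms that the motion is a pure rotation about $z$: the radius $r_k$ and the $z$-coordinate are preserved, and the $xy$-projection rotates by 90°.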
Figure 5.2 A schematic diagram of inverse synthesis of a microwave hologram by a unistatic radar located at point C

Figure 5.3 The geometry of data acquisition for the synthesis of a 1D microwave Fourier hologram of a rotating object
With Eq. (5.1), the input receiver signal can be described as a function of the object rotation angle:
$$\dot u_r(\varphi) = u_0\sum_{k=1}^{N}\sigma_k\exp\left[-j\frac{4\pi}{\lambda_1}d(\vec r_k,\vec R_0)\right]\exp\left(j\omega_0\frac{\varphi}{\Omega}\right), \quad (5.2)$$
104 Radar imagingandholography
where
d(r
k
,

R
0
)

=

R
0
_
1−
r
k
R
0
[sinγ sinθ cos(ϕ +θ
k
) +cosγ cosθ
k
]
_
, (5.3)
λ
1
= 2πc/ω
0
is theradar wavelength; σ
k
is theamplitudecoefficient accounting
for thereflectioncharacteristicsof thekthscatteringcentre; γ = arctg(x
o
/y
o
) isthe
anglebetweenthevector

R
0
andthepositivez-axis; x
o
, y
o
, z
o
aretheobservationpoint
coordinates; O
1
is theobservationpoint; andR
0
= |

R| =
_
x
2
o
+z
2
o
is thedistance
betweentheobservationpoint andthecentreof massof theobject.
In order to derive the hologram function in the way shown in Chapter 2, it is reasonable to use the multiplication procedure performed by an amplitude-phase detector, followed by averaging. The artificial reference signal is
$$\dot u_{ref}(\varphi) = u_0\exp\left[-j\left(\omega_0\frac{\varphi}{\Omega}-\psi\right)\right], \quad (5.4)$$
where $t = \varphi/\Omega$ is the current moment of time and $\psi$ is an arbitrary initial phase. Using Eqs (5.2) and (5.4), we can write down the hologram function in the form
$$H(\varphi) = \langle \mathrm{Re}\,\dot u_r(\varphi)\,\mathrm{Re}\,\dot u_{ref}(\varphi)\rangle = \frac{u_0^2}{2}\sum_{k=1}^{N}\sigma_k\cos\left[\frac{4\pi}{\lambda_1}r_k\bigl(\cos\gamma\cos\theta_k+\sin\gamma\sin\theta_k\cos(\varphi+\varphi_k)\bigr)\right], \quad (5.5)$$
where the sign $\langle\cdots\rangle$ stands for the averaging.

To derive Eq. (5.5), the arbitrary initial phase of the reference signal has been chosen such that
$$\frac{4\pi R_0}{\lambda_1}-\psi = 0.$$
By expanding the function $\cos(\varphi+\varphi_k)$ into the power series of $\varphi$ and retaining only the leading terms of the series, we get
$$H(\varphi) = \frac{u_0^2}{2}\sum_{k=1}^{N}\sigma_k\cos 2\left[\beta_k-\frac{2\pi}{\lambda_1}r_k l_k(\varphi)\right], \quad (5.6)$$
with
$$\beta_k = \frac{2\pi}{\lambda_1}r_k(\cos\gamma\cos\theta_k+\sin\gamma\sin\theta_k\cos\varphi_k) \quad (5.7)$$
and
$$l_k(\varphi) = \sin\gamma\sin\theta_k\left[\varphi\sin\varphi_k+\frac{\varphi^2}{2}\cos\varphi_k-\frac{\varphi^3}{6}\sin\varphi_k\right]. \quad (5.8)$$
Consider now the microwave hologram function of the same object (Fig. 5.1), obtained by a classical method. In this method, the radar receiver scans, with an angular velocity $\Omega$, the surface of a cylinder of radius $R_0\sin\gamma$, having the generatrix parallel to the $z$-axis. The transmitter is at the point A with the coordinates $x_A = R_0\sin\gamma$, $y_A = 0$, $z_A = R_0\cos\gamma$, while the angle $\beta_0$ is equal to the rotation angle $\varphi$. Then the function $H_{cl}(\varphi)$ for the classical microwave Fourier hologram is
$$H_{cl}(\varphi) = \frac{u_0^2}{2}\sum_{k=1}^{N}\sigma_k\cos 2\left[\beta_k-\frac{\pi}{\lambda_1}r_k l_k(\varphi)\right], \quad (5.9)$$
where the functions $\beta_k$ and $l_k(\varphi)$ are similar to those of Eqs (5.7) and (5.8).
A comparison of Eqs (5.6) and (5.9) shows that the function $H_{cl}(\varphi)$ differs from the function $H(\varphi)$ for the synthesised hologram of the same object in having the factor $\tfrac{1}{2}$ in the second term of the argument $\cos 2[\cdots]$. It is clear that the synthesised hologram possesses a double capacity to change the argument and, hence, it has twice as high a resolution, because it looks like a classical hologram recorded in a field with a wavelength twice as short as the real one. This effect is due to the simultaneous scanning by several elements of the transmitter–object–receiver system. It is easy to see that a microwave hologram recorded by simultaneous receiver–transmitter scanning of a fixed object along the arc $L$ (Fig. 5.1) is totally identical to the $H(\varphi)$ hologram. In the case of inverse scanning, however, the rotation of the object alone is equivalent to the movement of two devices – the transmitter and the receiver.
We shall show below that the constant initial phase $\beta_k$ does not affect the structure of microwave radar imagery. We shall use a simplified expression for the synthesised Fourier hologram:
$$H_1(\varphi) \approx \sum_{k=1}^{N}\sigma_k\cos\left[\frac{4\pi}{\lambda_1}r_k\sin\theta_k\cos(\varphi_k+\varphi)\right], \quad (5.10)$$
where $r_k$, $\theta_k$, $\varphi_k$ are the spherical coordinates of the $k$th centre. Equation (5.10) was derived from Eq. (5.5) on the assumption of $\gamma = 90°$ and is valid for the far-zone approximation.

Since the $H_1(\varphi)$ function basically coincides with $H_{cl}(\varphi)$, the image reconstruction from a synthesised Fourier hologram can be made in visible light, using the same techniques as those of optical Fourier holography [131].
Sometimes, a microwave hologram recorded on a flat transparency is placed in the front focal plane of the lens L (Fig. 5.4(a)). When the transparency is illuminated by a plane coherent light wave, two real conjugate images of the object, M and M′, are formed near the rear focal plane of the lens. An alternative is to use a spherical transparency of radius $F_0$, illuminated by a coherent light beam converging at the sphere centre (Fig. 5.4(b)). The two variants are identical in the sense that the operations to be performed are the same. Practically, it is convenient to use the first variant but to analyse the second one.

If a microwave hologram is recorded on an optical transparency uniformly moving with velocity $v_t$, the angular coordinate $\alpha = v_t\tau/F_0$ on the transparency in the reconstruction space will be related to the angular coordinate $\varphi = \Omega\tau$ on the hologram in the recording space:
$$\alpha = \varphi v_t/(\Omega F_0) = \varphi/\mu,\qquad \mu = \Omega F_0/v_t. \quad (5.11)$$
Figure 5.4 Optical reconstruction of 1D microwave images from a quadrature Fourier hologram: (a) flat transparency, (b) spherical transparency
For a hologram of a point object, the distribution of complex-valued light amplitudes in the image space $u$, $v$ at the point $M(u_p, v_p)$ in the vicinity of the point O can be represented by an integral (at $\theta_k = 90°$):
$$E(u,v) = A\int_{-\alpha_0}^{\alpha_0}\left[1+\cos\left(\frac{4\pi}{\lambda_1}r_k\cos(\mu\alpha+\varphi_k)\right)\right]\exp\left[-j\frac{2\pi}{\lambda_2}d(u,v,\alpha)\right]d\alpha = I_0+I_{+1}+I_{-1}, \quad (5.12)$$
$$I_0 = A\int_{-\alpha_0}^{\alpha_0}\exp\left[-j\frac{2\pi}{\lambda_2}d(u,v,\alpha)\right]d\alpha,$$
$$I_{\pm1} = \frac{A}{2}\int_{-\alpha_0}^{\alpha_0}\exp[j\psi_{\pm1}(u,v,\alpha)]\,d\alpha,$$
$$\psi_{\pm1}(u,v,\alpha) = \pm\frac{4\pi}{\lambda_1}r_k\cos(\mu\alpha+\varphi_k)-\frac{2\pi}{\lambda_2}d(u,v,\alpha),$$
$$d(u,v,\alpha) = [F_0^2+2F_0(v\cos\alpha-u\sin\alpha)+u^2+v^2]^{1/2},$$
where $\lambda_2$ is the wavelength in the optical range; $A$ is a complex-valued proportionality factor, $A = (u_0^2/4)\sigma_1$; $\sigma_1$ is the amplitude coefficient accounting for the reflection characteristics of the scattering centre $k = 1$; $d(u,v,\alpha)$ is the distance between an arbitrary point on the arc $L$ and the point M near the arc centre on the image; $2\alpha_0$ is the angular size of the hologram in the image space; and $F_0$ is the lens focal length.
The integrals $I_0$, $I_{+1}$ and $I_{-1}$ describe the distribution of the complex-valued light amplitudes in the zeroth and first diffraction orders, both positive and negative. If the angular dimensions of the hologram are not too large, the functions $\cos(\mu\alpha+\varphi_k)$ and $d(u,v,\alpha)$ can be represented as the first terms of the respective expansion series to write down the function $\psi(u,v,\alpha)$:
$$\psi_{\pm1}(u,v,\alpha) = \frac{2\pi}{\lambda_2}\left[\alpha\left(2\frac{\lambda_2}{\lambda_1}\mu r_k\sin\varphi_k\pm u\right)+\frac{\alpha^2}{2}\left(2\frac{\lambda_2}{\lambda_1}\mu^2 r_k\cos\varphi_k\pm v\right)-\frac{\alpha^3}{6}\left(2\frac{\lambda_2}{\lambda_1}\mu^3 r_k\sin\varphi_k\pm u\right)\right]. \quad (5.13)$$
Here we have omitted the constant expansion terms independent of the argument $\alpha$. The coordinates of the points $M(u_M, v_M)$ and $M'(u'_M, v'_M)$, at which two conjugate images of the point object are formed, can be found from the expressions
$$\frac{\partial\psi(u,v,\alpha)}{\partial\alpha} = 0,\qquad \frac{\partial^2\psi(u,v,\alpha)}{\partial\alpha^2} = 0. \quad (5.14)$$
With $x_M = r_k\sin\varphi_k$ and $y_M = r_k\cos\varphi_k$, using Eq. (5.14), we get
$$u_{M,M'} = \pm2\mu\frac{\lambda_2}{\lambda_1}x_M,\qquad v_{M,M'} = \pm2\mu^2\frac{\lambda_2}{\lambda_1}y_M. \quad (5.15)$$
Equation (5.15), in turn, gives the transverse and longitudinal scales of the image being reconstructed:
$$m_y = \left|\frac{v_M}{y_M}\right| = 2\mu^2\frac{\lambda_2}{\lambda_1},\qquad m_x = \left|\frac{u_M}{x_M}\right| = 2\mu\frac{\lambda_2}{\lambda_1}. \quad (5.16)$$
An undistorted image of an object can be reconstructed only if all the derivatives of $\psi(u,v,\alpha)$ with respect to the argument $\alpha$ are simultaneously equal to zero. It is easy to show that this condition is met at one point (M and M′) at $\mu = \Omega F_0/v_t$, or $\varphi \equiv \alpha$. The latter identity defines the criterion for optical processing of synthesised Fourier holograms: the aperture angles in the recording and reconstruction spaces must be the same. If the reconstruction procedure has been designed in the optimal way, we have $m_x = m_y = m$, and the object is reproduced without distortions along the longitudinal and transverse directions.
A specific feature of a synthesised Fourier hologram is that the resolution obtained is independent of the distance to the object. Indeed, let us take the following expression to be the measure of the resolving power:
$$\Delta = |I(u_M)|^{-2}\int_{-\infty}^{\infty}|I(u)|^2\,du, \quad (5.17)$$
where $|I(u)|^2$ is the light intensity distribution across the scattering centre image and $u_M$ is the coordinate of the maximum intensity of the image focusing.

Equation (5.17) describes the receiver pulse response to the point object. Then, neglecting all the terms in Eq. (5.13) except for the first one and using the scale relations of Eq. (5.16), we can define the resolving power of the object as
$$\Delta_x(\lambda_1,\psi_S) = \frac{\Delta_u}{m_x} = \frac{\lambda_1}{2\psi_S}, \quad (5.18)$$
where $\psi_S$ is the object angle variation during the recording. Therefore, when the hologram angles are small, the resolving power of the object varies with the wavelength and the synthesised aperture angles, rather than with the distance to the object or the reconstruction parameters.
With the scale relations from Eq. (5.16), we find for $\mu = 1$
$$m_y = m_x = 2\frac{\lambda_2}{\lambda_1}.$$
Then the criterion described by Eq. (5.18) can yield the resolution of a video microwave image:
$$\Delta_u(\alpha_0) = \Delta_x(\lambda_1,\psi_S)\,m_x = \frac{\lambda_2}{2\alpha_0}. \quad (5.19)$$
It follows from Eq. (5.19) that the resolution of a microwave image obtained by inverse synthesis and optimal processing is fully consistent with the Abbe criterion for optical devices (Chapter 1).
Consider now distortions arising from the reconstruction of a microwave image. These are defined by the high-order terms of Eq. (5.13) for the following reason. When an image is viewed in one plane, some of the scattering centres are shifted relative to this plane, that is, they are defocused. With the quadratic term of Eq. (5.13), the field distribution in a defocused point image is defined as
$$I_{+1}(p,t_0) = \frac{A\alpha_0}{t_0}\exp\left[\pi j\left(\frac{4r_x}{\lambda_1}-\frac{p^2}{2}\right)\right]\{C(t_0+p)+C(t_0-p)+j[S(t_0+p)+S(t_0-p)]\}, \quad (5.20)$$
where $t = \sqrt{2(v_M-v)/\lambda_2}$ describes the viewing plane shift relative to the focusing plane, $p = 2u_M/(\lambda_2 t)$, $t_0 = \alpha_0 t$, and $S(z)$, $C(z)$ are the Fresnel integrals.
The resolution of a defocused microwave image is described by the function
$$\Delta(t_0) = \int_{-\infty}^{\infty}\frac{|I_{+1}(p,t_0)|^2}{|I_{+1}(0,t_0)|^2}\,dp \quad (5.21)$$
shown in Fig. 5.5. Obviously, the best resolution $\hat\Delta = 1.2$ is achieved at a certain optimal value of $t_0 = \hat t_0 = 1$ and an optimal aperture size
$$\hat\alpha_0 = [2(v_M-v)/\lambda_2]^{-1/2}. \quad (5.22)$$
Figure 5.5 The dependence of microwave image resolution $\Delta(t_0)$ on the normalised aperture angle of the hologram
At $v = 0$, when the viewing plane is superimposed with the focal plane of the lens, we can use Eq. (5.15) to get
$$\hat\alpha_0 = \left[2\mu\left(\frac{y_M}{\lambda_1}\right)^{1/2}\right]^{-1} = \left[\mu\sqrt{2\tau_{max}}\right]^{-1}, \quad (5.23)$$
where $\tau_{max} = 2L_{max}/\lambda_1$ is the maximum longitudinal dimension of the object, expressed as half-wavelengths.
Asthesizeof theobject or theapertureincreases, theinfluenceof thehigh-order
terms of Eq. (5.13) becomes morepronouncedresultingindistortions andalower
resolution. Thesefactorsimposeconstraintsonthesynthesisedaperturesize.
The image reconstruction of microwave Fourier holograms has some specificity associated with the way the artificial reference wave is created. If the reference signal phase is not modulated, the phase of the coherent reference background along the hologram is constant, a situation equivalent to the position of a point object at the rotation centre. So during the reconstruction, the three images – that of the reference source and the two conjugate images of the object – overlap. To separate these images, one should introduce a space carrier frequency (SCF) by changing the phase of the reference signal at a constant rate, as in the expression

$$d\psi/d\tau \ge 4\pi r_{\max}/\lambda_1, \qquad (5.24)$$

where $r_{\max}$ is the radius vector of the scattering centre located at the maximum distance from the object rotation centre.
The reference wave phase can be modulated by a phase shifter or by introducing translational motion along the viewing line, in addition to the rotational motion. In the latter case, the translational velocity v must satisfy the inequality $v > r_{\max}$.
110 Radar imaging and holography
5.2 Complex 1D microwave Fourier holograms

We have shown in Section 5.1 that a 1D quadrature microwave Fourier hologram $H_1(\varphi)$ can be described by Eq. (5.10). A conjugate quadrature Fourier hologram with a π/2 phase shift has the form:

$$H_2(\varphi) = \sum_{k=1}^{N} \sigma_k \sin\left[\frac{4\pi}{\lambda_1} r_k \sin\theta_k \cos(\varphi_k + \varphi)\right]. \qquad (5.25)$$
According to Eq. (2.23), the holograms $H_1(\varphi)$ and $H_2(\varphi)$ can form a complex Fourier hologram:

$$H(\varphi) = H_1(\varphi) + jH_2(\varphi) = \sum_{k=1}^{N} \sigma_k \exp\left[j\frac{4\pi}{\lambda_1} r_k \sin\theta_k \cos(\varphi_k + \varphi)\right]. \qquad (5.26)$$
This expression can be re-written in a simpler form:

$$H(x) = u\exp(j\Phi), \qquad (5.27)$$

where $u$ and $\Phi$ are the amplitude and phase (in the recording plane) of the total field scattered by the object. The argument $\varphi$ of the H function has been replaced by the linear x-coordinate, since a 1D microwave hologram is recorded on a flat transparency.
The image reconstruction by a plane wave in a paraxial approximation is reduced to the Fourier transformation of the hologram function, assuming for simplicity that the recording and the reconstruction are performed at the same wavelength:

$$V(\omega_x) = \int_{-\infty}^{\infty} H(x)\exp(-j\omega_x x)\, dx, \qquad (5.28)$$

where $\omega_x$ is the space frequency corresponding to the coordinate in the image plane.
The substitution into Eq. (5.28) of the expressions for the quadrature holograms in Eqs (5.10) and (5.25), re-written as Eq. (5.27), gives

$$V_1(\omega_x) = \frac{1}{2}\left[\int_{-\infty}^{\infty} u\exp(j\Phi)\exp(-j\omega_x x)\, dx + \int_{-\infty}^{\infty} u\exp(-j\Phi)\exp(-j\omega_x x)\, dx\right], \qquad (5.29)$$
$$V_2(\omega_x) = \frac{1}{2j}\left[\int_{-\infty}^{\infty} u\exp(j\Phi)\exp(-j\omega_x x)\, dx - \int_{-\infty}^{\infty} u\exp(-j\Phi)\exp(-j\omega_x x)\, dx\right]. \qquad (5.30)$$
It is seen that each quadrature hologram gives two conjugate images described by the appropriate terms in Eqs (5.29) and (5.30).

In a complex hologram, the first quadrature component gives two conjugate images in Eq. (5.29), while the second component reconstructs the images
$$V_2(\omega_x) = \frac{1}{2}\left[\int_{-\infty}^{\infty} u\exp(j\Phi)\exp(-j\omega_x x)\, dx - \int_{-\infty}^{\infty} u\exp(-j\Phi)\exp(-j\omega_x x)\, dx\right]. \qquad (5.31)$$
The first terms in Eqs (5.29) and (5.31) are identical, while the second terms differ in phase by the value π. A combined reconstruction after summing up the fields in Eqs (5.29) and (5.31) yields one pair of conjugate images that enhance each other and another pair of images that annihilate each other; so we eventually have

$$V(\omega_x) = \int_{-\infty}^{\infty} u\exp(j\Phi)\exp(-j\omega_x x)\, dx. \qquad (5.32)$$
The complex-valued function $V(\omega_x)$ describes the only image reconstructed from a complex hologram [145]. The image intensity can be defined as

$$W(\omega_x) = |V(\omega_x)|^2. \qquad (5.33)$$
To illustrate, consider the case when the object is a point and the parameters $\theta_1$ and $\varphi_1$ are equal to π/2. For small values of $\varphi$ ($\varphi < 1$ rad) and $\varphi = \Omega x/v_t$, where $\Omega$ is the rotation rate and $v_t$ is the velocity of the recording transparency, Eq. (5.26) reduces to

$$H(x) = \sigma \exp\left(j\frac{4\pi}{\lambda_1} r \frac{\Omega}{v_t} x\right). \qquad (5.34)$$
Since the hologram is recorded in a finite time interval, $\tau \in [-T/2, T/2]$, Eq. (5.28) yields

$$V(\omega_x) = \int_{-v_t T/2}^{v_t T/2} H(x)\exp(-j\omega_x x)\, dx. \qquad (5.35)$$
The substitution of Eq. (5.34) into Eq. (5.35) and the integration give

$$V(\omega_x) = 2\sigma \sin\left[\left(\frac{4\pi}{\lambda_1} r \frac{\Omega}{v_t} - \omega_x\right)\frac{v_t T}{2}\right] \bigg/ \left(\frac{4\pi}{\lambda_1} r \frac{\Omega}{v_t} - \omega_x\right). \qquad (5.36)$$
Clearly, this function is of the $\sin z/z$ type and has a maximum at $\omega_x = (4\pi/\lambda_1)\, r\, (\Omega/v_t)$, which corresponds to the image of the point.
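The single-sided character of this spectrum is easy to verify numerically: a complex record $\exp(jax)$ transforms into one peak at $\omega_x = a$, whereas a single (real) quadrature record $\cos(ax)$ yields the conjugate pair at $\pm a$. The parameter values below are illustrative, not from the text:

```python
import numpy as np

# a = (4*pi/lambda_1) * r * Omega / v_t  (illustrative values)
lam1, r, Omega, v_t, T = 0.03, 0.1, 1.0, 1.0, 4.0
a = 4 * np.pi / lam1 * r * Omega / v_t
x = np.linspace(-v_t * T / 2, v_t * T / 2, 4096, endpoint=False)

H_complex = np.exp(1j * a * x)   # complex hologram of a point object, Eq. (5.34)
H_quad = np.cos(a * x)           # single quadrature record of the same object

w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))
Vc = np.abs(np.fft.fftshift(np.fft.fft(H_complex)))
Vq = np.abs(np.fft.fftshift(np.fft.fft(H_quad)))
# Vc has a single maximum near w = a; Vq is symmetric, with peaks near w = +a and w = -a
```

The real record's magnitude spectrum is exactly symmetric about zero frequency, which is the conjugate-image ambiguity removed by the complex hologram.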
Digital reconstruction reduces to the calculation of the integral in Eq. (5.28) and has no zeroth order. So a complex hologram can be formed without introducing the carrier frequency, which decreases the amount of data to be processed: a single quadrature hologram requires at least twice as many discrete counts because of the high carrier frequency.

Optical reconstruction produces the zeroth order, in addition to a single image, because of the presence of the reference level of $H_r$ (Eq. (2.20)). During the processing of a complex hologram recorded without the carrier frequency, the zeroth order overlaps the image. They can be separated spatially only by introducing the carrier frequency, but then the use of a complex hologram is pointless, since one does not have to remove the conjugate image. Besides, the optical reconstruction of a complex hologram is hard to implement because of the strict requirements on the adjustment of the two-channel processing suggested in Reference 35. Thus, complex microwave holograms should be recorded without introducing the carrier frequency and reconstructed only digitally.
5.3 Simulation of microwave Fourier holograms

A comparison of various techniques applied in microwave Fourier holography can be made using a special algorithm for digital simulation of 1D quadrature and complex hologram recording and reconstruction for simple objects. The algorithm consists of two units, one of which records a hologram following Eq. (5.26) and the other reconstructs the image, that is, calculates the integral of Eq. (5.28). The image reconstruction from individual quadrature holograms is performed using an additional procedure for the calculation of the Fourier integrals of the real functions $H_1$ and $H_2$ from the Fourier transform of the complex function $H = H_1 + jH_2$.
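The two-unit scheme can be sketched directly: a hologram is recorded from Eq. (5.26) over a small aperture angle and reconstructed by an FFT as a discrete stand-in for Eq. (5.28). The object below is two scattering centres on one radial ray, an illustrative configuration chosen so the two image peaks stay distinct; all numerical values are assumptions:

```python
import numpy as np

lam = 1.0                               # lengths measured in units of lambda_1
psi_S = np.pi / 6                       # hologram (aperture) angle
N = 1024
phi = np.linspace(-psi_S / 2, psi_S / 2, N, endpoint=False)

# (sigma_k, r_k, phi_k) with theta_k = pi/2; recording unit, Eq. (5.26)
scatterers = [(1.0, 5.0, np.pi / 2), (1.0, 2.0, np.pi / 2)]
H = sum(s * np.exp(1j * 4 * np.pi / lam * r * np.cos(pk + phi))
        for s, r, pk in scatterers)

# reconstruction unit: zero-padded FFT in place of the integral of Eq. (5.28)
P = np.abs(np.fft.fft(H, 8 * N))
f = np.fft.fftfreq(8 * N, d=phi[1] - phi[0])

def top_freqs(P, f, k=2, guard=12):
    # pick the k strongest, well-separated spectral peaks
    P = P.copy()
    out = []
    for _ in range(k):
        i = int(np.argmax(P))
        out.append(f[i])
        P[max(0, i - guard):i + guard + 1] = 0.0
    return sorted(out)
# the two peaks sit near f = -2*r_k, the cross-range positions of the centres
```

Taking only the real part of H (a single quadrature hologram) makes the spectrum exactly symmetric, i.e. both conjugate images appear, matching the contrast between Fig. 5.6(a) and (c).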
Figure 5.6(a–c) illustrates some of the results of the digital simulation. The ordinate shows the image intensity in relative units and the abscissa the image size. In digital reconstruction, a microwave image represents a series of discrete counts spaced at a distance $\lambda_1/2\psi_S$. The model object consisted of two scattering centres arranged to form a dumb-bell structure of $10\lambda_1$ in length, which rotated at a constant angular velocity round the centre of mass. The quantity $\theta_k$ (Fig. 5.3) was taken to be equal to π/2. The image illustrated in Fig. 5.6(a) was reconstructed from a single quadrature hologram. Peaks 1 and 2 correspond to one conjugate image of the two scattering centres and peaks 3 and 4 to the other. The image separation was made using the SCF, whose introduction was simulated by the radial displacement (with the velocity $v_l$) of the object rotation centre relative to the receiver. One of the conjugate images vanished during the processing of the complex hologram (Fig. 5.6(b)), so the carrier frequency
Figure 5.6 Microwave images reconstructed from Fourier holograms: (a) quadrature hologram, (b) complex hologram with carrier frequency, (c) complex hologram without carrier frequency and (d,e,f) the variation of the reconstructed image with the hologram angle $\psi_s$ ($\psi_s$ = π/2, π/6 and π/120, respectively; complex hologram without carrier frequency)
was not needed. This is clearly seen in Fig. 5.6(c) showing the image reconstructed from a complex hologram recorded without the carrier frequency.

Figure 5.6(d–f) presents the variation of the reconstructed image with the hologram angle. The comparison of these results supports the above conclusion that there is an optimal size of the synthesised aperture. As the angle $\psi_S$ becomes larger, the resolution increases to a certain limit, beyond which distortions arise in the image structure. The resolving power of this technique estimated from the results of the digital simulation is $\sim\lambda_1$.

Currently, there are two methods used in microwave Fourier holography. One is based on the recording of a single quadrature phase–amplitude hologram of the type described by Eq. (5.10) with the carrier frequency and optical image reconstruction. The other method records a complex hologram of the type described by Eq. (5.26) without introducing the carrier frequency but using a digital image reconstruction.

The application of the first method involves some problems associated with the use of an anechoic chamber (AEC), because the linear displacement of the object for introducing the carrier frequency leads to the chamber decompensation. So we
Figure 5.7 The algorithm of digital processing of 1D microwave complex Fourier holograms: the quadrature records $H_1$ and $H_2$ are each normalised, a synthesis interval is selected and interpolated, and a fast Fourier transform yields Re(V) and Im(V), from which the intensity $W = |V|^2$ is computed
recommend the second technique when one uses an anechoic chamber. We shall discuss some of the results obtained by the second method.

Figure 5.7 illustrates the algorithm of digital image reconstruction, which operates as follows. The setting of discrete data is followed by their normalisation, that is, the data are reduced to the variation range [−1, 1]. The hologram is usually recorded for a full 2π rad rotation; so for the subsequent processing, one selects a series of counts in such a way that their number describes the optimal aperture and their position in the array corresponds to the required aspect. An interpolation unit makes it possible to reduce the number of signal records to $2^m$, where m is a natural number. The image reconstruction is performed by a Fourier transform unit using the FFT algorithm for the complex-valued function H(x). Arrays of Re(V) and Im(V) numbers that define the image, whose intensity is found as $W = \mathrm{Re}^2(V) + \mathrm{Im}^2(V)$, are produced at the unit output.
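The Fig. 5.7 pipeline can be sketched as one routine; the interpolation to $2^m$ counts below uses simple linear interpolation, which is an assumption, since the text does not specify the interpolation rule:

```python
import numpy as np

def reconstruct(h1, h2, start, length):
    """Fig. 5.7 pipeline: normalise, select the synthesis interval, interpolate
    to 2**m counts, FFT, then intensity W = Re^2(V) + Im^2(V)."""
    h1 = np.asarray(h1, float) / np.max(np.abs(h1))   # normalisation to [-1, 1]
    h2 = np.asarray(h2, float) / np.max(np.abs(h2))
    H = h1[start:start + length] + 1j * h2[start:start + length]
    m = int(np.ceil(np.log2(H.size)))                 # interpolation to 2**m counts
    pos = np.linspace(0.0, H.size - 1, 2**m)
    H = np.interp(pos, np.arange(H.size), H.real) \
        + 1j * np.interp(pos, np.arange(H.size), H.imag)
    V = np.fft.fft(H)                                 # FFT of the complex hologram
    return V.real**2 + V.imag**2                      # W = |V|^2
```

Feeding the routine a quadrature pair with a single space frequency returns an intensity array with one dominant count, the digital analogue of the point image of Fig. 5.8.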
Figure 5.8 presents the results of digital processing of 1D complex Fourier holograms recorded experimentally in an anechoic chamber. The image intensity is plotted in relative units along the y-axis and its linear dimension along the x-axis. The object is a metallic sphere of radius $0.3\lambda_1$, rotating along a circumference of radius $3\lambda_1$. The positions of the point image in Fig. 5.8(a–c) are different and vary with the object aspect $\psi_0$ as shown schematically in each figure.
Figure 5.8 A microwave image of a point object, reconstructed digitally from a complex Fourier hologram as a function of the object's aspect $\psi_0$ ($\psi_s$ = π/6): (a) $\psi_0$ = π/12, (b) $\psi_0$ = 5π/2 and (c) $\psi_0$ = 3π/4
The methods we have discussed have some advantages and limitations. The recording of single quadrature holograms is made in one channel but requires that the carrier frequency be introduced in one way or another. The recording of complex holograms does not require the carrier frequency, but it is more complicated because the channels must have a strict quadrature character, their parameters must be identical, and the measurements must be well synchronised. However, the recording errors associated with these characteristics of a two-channel system can be easily eliminated by the processing. (We have mentioned above that complex microwave Fourier holograms should be processed only digitally.) The image reconstruction from quadrature holograms can be made both digitally and optically. The possibility of recording a hologram in a form suitable for digital processing increases the dynamic range of the system. It does not then need the use of sophisticated units, such as high-resolution cathode-ray tubes or high-precision focusing and deflecting devices. In optical processing, the aperture size is normally limited by the characteristics of the reconstruction unit, so it cannot be made optimal. On the other hand, optical processing allows re-focusing of the observation plane without difficulty, providing a 2D image (in longitudinal and transversal directions).

The investigation and analysis of methods for microwave Fourier holography have shown that they can be successfully used for imaging objects which can be represented as an array of scattering centres. These methods are of interest to those studying diffraction with anechoic chambers (Chapter 9), in particular, for the experimental verification of the applicability of the physical theory of diffraction developed by P. Ya. Ufimtzev [137] and of the geometrical theory of diffraction by J. B. Keller [70]. These methods can also be useful in designing radar systems with an inversely synthesised aperture (Chapter 9).
Chapter 6

Radar systems for rotating target imaging (a tomographic approach)

6.1 Processing in frequency and space domains

Section 2.4.2 discussed the tomographic approach to target imaging in two-dimensional (2D) viewing geometry. We suggested an algorithm for processing in the frequency domain, which finds the reflectivity function $\hat{g}(x, y)$ from Eq. (2.48).
The first procedure to be performed is to reconstruct an image in the frequency domain by calculating the N number of discrete Fourier transform (DFT) records of the echo complex envelope

$$P_\theta(l, m) = \sum_{n=0}^{N-1} s_v(n\Delta t, m\delta\theta)\exp(-j2\pi ln/N) \qquad (6.1)$$

for each of the M number of the target angular positions $m\delta\theta$, $m = 0, \ldots, M-1$. The pixels found in this way are located at the polar grid nodes formed by the interceptions of concentric circumferences separated by the frequency step $1/N\Delta t$ and rotated by the radial beam angle $\delta\theta$ from one another.
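The first procedure of Eq. (6.1) is an N-point DFT over fast time for each of the M aspects, with the resulting records indexed on a polar grid. A minimal sketch (array shapes and sample values are illustrative):

```python
import numpy as np

def polar_spectra(s_v, dt, dtheta):
    """Eq. (6.1): P_theta(l, m) for the M aspect positions m*dtheta.
    s_v : (M, N) array of complex-envelope samples s_v(n*dt, m*dtheta)."""
    M, N = s_v.shape
    P = np.fft.fft(s_v, axis=1)     # sum_n s_v(n dt, m dtheta) exp(-j 2 pi l n / N)
    radii = np.arange(N) / (N * dt)      # frequency step 1/(N dt) between circles
    angles = np.arange(M) * dtheta       # radial beams rotated by dtheta
    return P, radii, angles
```

Each row of `P` is one radial beam of the polar raster; the second procedure of the algorithm then resamples these nodes onto a rectangular grid before the 2D inverse DFT.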
Since an inverse DFT can be made only on a rectangular grid, the second procedure should include the finding of pixels at the equidistant nodes of a rectangular grid, using the $P_\theta(l, m)$ values obtained by the first procedure. This is followed by a 2D inverse DFT computation of the target reflectivity $\hat{g}(x_i, y_j)$ at the rectangular grid nodes.

This algorithm has two important features that deserve attention. First, since the complex envelope of an echo signal is finite, there are distortions near the $\pm 1/2\Delta t$ boundaries of the major period of the $P_\theta(l, m)$ spectrum. The distortions arise from the superposition of high-frequency components of the adjacent spectral periods. Besides, the high-frequency spectrum may contain noise that dominates over the signal data. To reduce the noise, one has to resort to weighting by multiplying the $P_\theta(l, m)$ DFT data by a 'window' function. The choice of such a function should be based on the consideration of how much the noise abates the radar data and what kind of target is being probed [57].
Second, since the radar is a coherent system, it seems important to define the discretisation step $\delta\theta$ of the θ angle as the target aspect changes. The criterion for choosing a $\delta\theta$ value can be formulated as follows: the phase shift of the echo signal from the point scatterer most remote from the target centre of mass should not be larger than π when the target aspect changes by $\delta\theta$. This criterion is written as

$$\delta\theta \le \lambda_c/4|\bar{r}_o|_{\max}. \qquad (6.2)$$

This expression is valid for relatively narrowband signals, whose spectral width is much less than the carrier frequency. Otherwise, one should substitute $\lambda_c$ in Eq. (6.2) by the wavelength of the highest frequency component in the signal spectrum.
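For a sense of scale, the criterion of Eq. (6.2) can be evaluated for assumed numbers (a 3-cm carrier and a scatterer 10 m from the centre of mass; both values are illustrative, not from the text):

```python
import numpy as np

lam_c = 0.03    # carrier wavelength, m (assumed)
r_max = 10.0    # most remote scatterer from the centre of mass, m (assumed)

dtheta_max = lam_c / (4 * r_max)               # Eq. (6.2): largest admissible step
M_min = int(np.ceil(2 * np.pi / dtheta_max))   # angular positions per full rotation
```

Under these assumptions the step must not exceed 0.75 mrad, i.e. several thousand angular positions are needed for a full revolution, which illustrates the restriction on the minimum pulse repetition rate noted at the end of this section.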
It is worth noting that the method of synthesising the so-called unfocused aperture is a particular case of the above processing algorithm for the frequency domain. The movement of a point scatterer along an arc is approximated by the movement along a tangent to it. By substituting $v = y\cos\theta - x\sin\theta$ into Eq. (2.40) and using $\sin\theta \approx \theta$ and $\cos\theta \approx 1 - \theta^2/2$, we get

$$S(f) = H(f)\iint_{-\infty}^{\infty} g(x, y)\{\exp[j(k_c + k)\theta^2 y]\} \times \exp[-j2(k_c + k)y + j2(k_c + k)\theta x]\, dx\, dy.$$

If we eliminate the squared phase term, it will be clear that the $\hat{g}(x, y)$ function can be reconstructed by an inverse Fourier transform (IFT) over the rectangular raster which has replaced the respective region of the polar raster. This approximation works well only if the aspect variation during the data acquisition was small.
Let us discuss now the processing algorithm for the space domain, or the convolution algorithm. For this, Eq. (2.48) will be transformed from the Cartesian to polar coordinates:

$$\hat{g}(x, y) = \int_0^\pi \int_{-\infty}^{\infty} S_\theta(f_p)|f_p|\exp[j2\pi f_p r\cos(\theta - \varphi)]\, df_p\, d\theta. \qquad (6.3)$$
The inner integral in Eq. (6.3) represents the IFT of the product of $f_p$ and the function defined by expression (2.43). The result is the convolution of the quantity $F^{-1}\{S_\theta(f_p)\}$ with the so-called kernel function $q(v) = F^{-1}\{|f_p|\}$. If one uses the window function $F(f_p)$ to reduce the effect of high-frequency spectral noise, one gets

$$q(v) = F^{-1}\{|f_p|F(f_p)\}. \qquad (6.4)$$
The result of the integration with respect to the variable $f_p$ in Eq. (6.3) using Eq. (6.4) is known as a convolutional projection. It can be used for making a back projection procedure:

$$\hat{g}(x, y) = \int_0^\pi \xi_\theta[r\cos(\theta - \varphi)]\, d\theta. \qquad (6.5)$$
This procedure implies the integration of the contribution of each convolutional projection $\xi_\theta(\cdot)$ to the resulting image. The substitution of the integral in Eq. (6.5) by the Riemann sum gives

$$\hat{g}(x_i, y_j) = \sum_{m=0}^{M-1} \xi_\theta[r(x_i, y_j, m\delta\theta)]\,\delta\theta, \qquad (6.6)$$

where

$$r(x_i, y_j, m\delta\theta) = \sqrt{x_i^2 + y_j^2}\,\cos[m\delta\theta - \arctan(x_i/y_j)]. \qquad (6.7)$$

The latter expression is used to find (by interpolation) the contribution of the convolutional projection obtained at the mth target aspect to each of the $(x_i, y_j)$ pixels of the rectangular image grid.
An important advantage of the convolution algorithm is the possibility of processing data as they become available, because the contribution of every projection to the final image is computed individually.
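A minimal numerical sketch of Eqs (6.3)–(6.7): each projection spectrum is multiplied by $|f_p|$, inverse-transformed into a convolutional projection $\xi_\theta$, and back-projected by interpolation. The Cartesian identity $r\cos(\theta - \varphi) = x\cos\theta + y\sin\theta$ is used in place of Eq. (6.7), the test object is a point scatterer at the rotation centre, and all grids are illustrative assumptions:

```python
import numpy as np

def backproject(spectra, angles, fp, x, y):
    """Convolution/back-projection of Eqs (6.3)-(6.7) (sketch).
    spectra[m] : S_theta(f_p) samples at aspect angles[m]; x, y : image grids."""
    v = np.linspace(-1.0, 1.0, 512)            # range coordinate of the projections
    dfp = fp[1] - fp[0]
    dth = angles[1] - angles[0]
    img = np.zeros(np.broadcast(x, y).shape)
    for S, th in zip(spectra, angles):
        # inner integral of Eq. (6.3): IFT of |f_p| * S_theta -> projection xi
        xi = ((S * np.abs(fp)) * np.exp(1j * 2 * np.pi * np.outer(v, fp))).sum(1) * dfp
        # back projection, Eqs (6.5)-(6.6): r cos(theta - phi) = x cos th + y sin th
        img += np.interp(x * np.cos(th) + y * np.sin(th), v, xi.real) * dth
    return img
```

Because each pass of the loop adds one projection's contribution, the partial image can be inspected (or the loop stopped) at any point, which is exactly the advantage noted above.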
If the transmitter signal contains a finite number L of discrete frequencies, Eq. (6.3) will take the form:

$$\hat{g}(x, y) = \sum_{l=1}^{L}\frac{4\pi f_{p_l}}{c}\int_0^\pi S_\theta(f_{p_l})\exp[j2\pi f_{p_l} r\cos(\theta - \varphi)]\, d\theta \qquad (6.8)$$
and the processing algorithm reduces to summing up 1D integrals with respect to the variable θ. We can make computations with formula (6.8) in two ways. One is to calculate the integral for every value of $(x_i, y_j)$ and the other is to solve the subintegral expression for the M number of aspects for every frequency value, followed by interpolation, as in the common convolution algorithm.

Thus, radar imaging of extended compact targets by inverse aperture synthesis can be made by using a number of algorithms well known in computerised tomography. The application of the convolution algorithm of the back projection method allows a reduction in the imaging time, as compared with the time of reconstruction in the frequency domain, due to the processing of individual echo signals. The interpolation can be omitted in the case of discrete-frequency transmitter signals, giving an additional reduction in the processing time.

Another important feature of an imaging radar is its coherence, so it provides more information than conventional systems using computerised tomography. On the other hand, coherence must be maintained in all of the radar units during the operation. This circumstance also imposes restrictions on the minimum repetition rate of transmitter pulses.
6.2 Processing in 3D viewing geometry: 2D and 3D imaging

It has been shown in Chapter 5 that inverse aperture synthesis is the most promising technique for imaging extended proper and extended compact targets with a high angular resolution. The fact that such targets can be imaged during their arbitrary motion makes it possible to use this technique in available radar systems (Chapter 9).

The conditions for microwave hologram recording are primarily determined by the application of the images to be obtained. For example, if radar responses are studied in an anechoic chamber (AEC) (Chapter 9), it is sufficient to use a 2D geometry with an equidistant arrangement of the aspect angles. The target rotates uniformly around the axis normal to the line of sight. By deviating the rotation axis from this normal after every measurement run, one can, in principle, obtain 2D images even with monochromatic radar pulses.
6.2.1 The conditions for hologram recording

There are a number of applied tasks when the target aspect variation must reflect natural viewing conditions. Let us consider the aspect variation relative to the line of sight of a ground radar viewing a hypothetical satellite moving at an altitude H = 400 km along a circular orbit with the inclination i = 97° (Fig. 6.1). The target is assumed to be perfectly stabilised in the orbital coordinates, and its aspect in the orbital plane is defined by the angle α between the longitudinal construction line and the projection of the line of sight onto the orbital plane. The angle β between the line of sight and the orbital plane describes the aspect variation in the plane normal to the orbital plane. The analysis of the plots presented shows that the aspect variation of this class of targets during hologram recording in real viewing conditions should be characterised by (1) a 3D viewing geometry and (2) a non-equidistant arrangement of samples within the view zone.

To derive analytical relations for the description of a microwave hologram for 3D viewing geometry, we shall consider the following conditions for viewing an orbiting satellite. The target is scanned by a ground coherent radar transmitting a probing signal with the carrier frequency $f_o$ and the modulation function w(t) from Eq. (2.30). The radar measures the amplitude and phase of the echo signal (for a narrowband signal $\dot{w}(t) = A$, where A is the complex envelope amplitude).
The target is large relative to the wavelength λ of the radar carrier oscillation, such that the target can be represented as an ensemble of individual and independent scatterers. Every scatterer is rigidly bound to the target's centre of mass or moves across its surface as its aspect changes with respect to the radar. The position of the nth scatterer at any moment of time is defined by the radius vector $\vec{r}_{no}$ with the origin at point O rigidly bound to the target's centre of mass.

The positions of the arbitrary nth scatterer and the rotation centre of the satellite will be described by the radius vectors $\vec{r}_{no}$ and $\vec{R}_o$, respectively (Fig. 2.8). In the general case of 3D viewing geometry, an echo signal is defined, within the accuracy of a constant factor, as

$$S_v(t) = \int_V g(\vec{r}_{no})\{w(t - 2|\vec{R}_o|/c - 2\hat{r}_n/c)\exp(-j2\pi f_0\, 2|\vec{R}_o|/c)\} \times \exp(-j2\pi f_0\, 2\hat{r}_n/c)\, d\vec{r}_{no}. \qquad (6.9)$$
Figure 6.1 The aspect variation relative to the line of sight of a ground radar as a function of the observation time for a satellite at the culmination altitudes of 31°, 66° and 88°: (a) aspect angle α and (b) aspect angle β
It follows from Eq. (2.34) that signal noise due to the presence of coordinate information can be corrected by the receiver. The correction consists in selecting the time strobe position in accordance with the delay $2|\vec{R}_o|/c$ and in introducing the phase factor $\exp[j2\pi f_0(2|\vec{R}_o|/c)]$ in the reference signal during the coherent sensing.
As a result of the compensation for the radial displacement of the satellite, the family of spectra of video pulses must be represented as a microwave hologram. For this, we go from time frequencies to space frequencies to get

$$S(f_{po} + f_p) = F\{S_v(ct/2)\} = H(f_p)\int_V g(\vec{r}_{no})\exp[-j2\pi(f_{po} + f_p)\,\hat{r}_{no}(t)]\, d\vec{r}_{no}, \qquad (6.10)$$

where $F\{\cdot\}$ is the Fourier transform operator, $W(f_p) = F\{w(v)\}$ is the space frequency spectrum of the transmitter pulse, $f_{po} = 2f_o/c$ is the space frequency corresponding to the spectral carrier frequency, $2f_l/c < f_p < 2f_u/c$ is the space frequency determined over the whole frequency bandwidth of the transmitter pulse, $H(f_p) = W(f_p)K(f_p)$ is the aperture function, and $K(f_p)$ is the transfer function of the filter for the range processing of video pulses.
The above analytical description of video pulse spectra in terms of space frequencies has not changed the $\hat{r}_{no}(t)$ function, which is still considered to be a time function at the synthesis step. Now the pair of angular coordinates θ, B in the 3D frequency space (Fig. 6.2(b)) will be compared at every moment t of the synthesis step. The microwave hologram function can be presented as a 3D Fourier transform in the spherical coordinates $f_p$, θ, B:

$$S(\vec{f}_p) = H(\vec{f}_p)\int_V g(\vec{r}_{no})\exp(-j2\pi \vec{f}_p\vec{r}_{no})\, d\vec{r}_{no}, \qquad (6.11)$$

where $\vec{f}_p = (f_{po} + f_p)\,\vec{e}(\theta, B)$ is the radius vector of the space frequency in the frequency domain.
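The bookkeeping behind Eq. (6.11) is the mapping of each hologram sample to its frequency-domain radius vector $(f_{po} + f_p)\,\vec{e}(\theta, B)$. A sketch, assuming the conventional spherical unit vector with azimuth θ and elevation-like angle B (the text does not fix the convention explicitly):

```python
import numpy as np

def freq_vector(f_po, f_p, theta, B):
    """Radius vector of a hologram pixel in the f_x f_y f_z frequency space,
    Eq. (6.11): (f_po + f_p) * e(theta, B). Spherical convention assumed."""
    rho = f_po + f_p
    return rho * np.array([np.cos(B) * np.cos(theta),
                           np.cos(B) * np.sin(theta),
                           np.sin(B)])
```

A digital recorder storing (θ, B, f_p) per pixel can recover the Cartesian node this way; for B = 0 all samples lie in the $f_x$–$f_y$ plane, as in the AEC case discussed below.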
The geometrical relations for the recording of such a hologram will be derived for two typical cases of ground radar viewing of orbiting satellites. Fig. 6.2(a) shows the viewing geometry and Fig. 6.2(b) illustrates fragments of the holograms obtained. The angular position of the radar line of sight (RLOS) is described by the azimuthal angle θ = α − 3π/2 and the polar angle β with respect to the whole body-related coordinate system xyz. The line of sight is represented in space as a line across a unit sphere with the centre at the coordinate origin. The arrangement of the hologram pixels in the frequency domain is defined relative to the $f_x f_y f_z$ coordinates by the angles θ, B and the radial $f_p$ coordinate (Fig. 6.2(b)). The hologram recording should meet the conditions $\theta = \hat{\theta}$ and $B = \hat{\beta}$, where $\hat{\theta}$ and $\hat{\beta}$ are the estimates of the θ and β angles.
In the first of the above cases, a narrowband radar tracks a satellite, stabilised by the body-related coordinates along the three axes, during its translational motion along the orbit. The line of sight turns relative to the satellite to describe a curve on the unit sphere (the left side of Fig. 6.2(a)), which represents an arc in the xy plane if the radar is located in the orbital plane, or a 3D curve in all other cases. If the radar transmits a continuous wave, the hologram reproduces the shape of this line on the sphere $f_{po}$ in the frequency domain (Fig. 6.2(b)).

If a radar transmits a pulsed signal with the repetition rate $F_r$ or if a continuous echo signal is appropriately discretised, a hologram will represent a series of individual
Figure 6.2 Geometrical relations for 3D microwave hologram recording: (a) data acquisition geometry; a–b, trajectory projection onto a unit surface relative to the radar motion and (b) hologram recording geometry
samples separated by $\delta f_\psi = f_{po}\,\dot{\hat{\theta}}\cos\hat{\beta}/F_r$, where $\dot{\hat{\theta}} = d\hat{\theta}(t)/dt$ is the angular velocity of the satellite rotation in the orbital plane.
In the second case, one gets a wideband hologram of a satellite stabilised by rotation of the body-related coordinates around the z-axis (the right side of Fig. 6.2(b)). During the tracking, the angle between the line of sight and the rotation axis changes slowly by the value $\Delta\beta = \beta_2 - \beta_1$ with $\dot{\beta} \ll \dot{\theta}$. The interception of the unit sphere surface by the line of sight forms a spiral confined between two conic surfaces with the half angles $\pi/2 - \beta_1$ and $\pi/2 - \beta_2$ at the vertex. The resulting hologram represents a multiplicity of real beams that form a spiral band (Fig. 6.2(b)). The band is transversely bounded by two spherical surfaces and is 'fitted' between two conical surfaces, with $B_1 = \beta_1$ and $B_2 = \beta_2$. The radii of the spheres are equal to the lower $f_{pl}$ and upper $f_{pu}$ space frequencies of the hologram. Figure 6.2(b) shows a fragment of such a hologram bounded by the azimuthal step $\Delta\theta$, while the satellite makes the $\dot{\theta}\Delta t/2\pi$ number of
rotations during the synthesis time step $\Delta t$. The adjacent hologram slices synthesised during consecutive rotations are spaced by the frequency step $\delta f_u = 2\pi f_{po}\,\dot{\hat{\beta}}/\dot{\hat{\theta}}$.
Under the condition

$$\delta f_u^{-1} \ge D,$$

where D is the maximum linear size of a satellite, the resolution can be achieved by the synthesis in the plane intercepting the z-axis. The resulting 3D wideband hologram containing, at least, several slices will be referred to as a surface hologram. A surface hologram is usually synthesised by a wideband radar, when tracking a satellite stabilised along the three axes, or when dealing with a model target in an AEC. In the latter case, a hologram lies entirely in the $f_x$–$f_y$ plane.
Every beam of a wideband microwave hologram corresponds to a single echo signal and is made up of a certain number of discrete pixels, L, since digital hologram processing implies discretisation of the echo pulse spectrum.
It is clear from the foregoing that the conditions for recording a hologram of a target performing a complex movement relative to an imaging radar are the compensation for its radial displacement and the recording of the video signal spectrum in a form adequate for the respective aspect variation, that is, in a spherical or polar geometry.
6.2.2 Preprocessing of radar data

The preliminary processing of radar data integrated in the form of a microwave hologram to be further used for image reconstruction can be described in terms of a linear filtering model as a processing by an inverse filter in a limited frequency band. The transfer function of the filter is

$$H_f(\vec{f}_p) = H^{-1}(\vec{f}_p)H_o(\vec{f}_p)H_r(\vec{f}_p), \qquad (6.12)$$
where $H_o(\vec{f}_p)$ is a non-zero aperture function within the chosen boundaries $\vec{f}_{ph}$ of the hologram (Fig. 6.2(b)):

$$H_o(\vec{f}_p) = \mathrm{rect}(f_p/f_{ph}) = \begin{cases} 1, & f_p \in V_f,\\ 0, & f_p \notin V_f; \end{cases} \qquad (6.13)$$

and $H_r(f_p) = \exp[j2\pi(f_{po} + f_p)|r_a|]$ is the transfer function of the compensation step of the target radial displacement.
The process of image reconstruction from a hologram described by Eq. (6.11) can be represented as

$$\hat{g}(\vec{r}_{no}) = F^{-1}\{S(\vec{f}_p)H_f(\vec{f}_p)\} = \int_{V_f} S(\vec{f}_p)H_f(\vec{f}_p)\exp(j2\pi\vec{f}_p\vec{r}_{no})\, d\vec{f}_p = g(\vec{r}_{no}) * h_o(\vec{r}_{no}), \qquad (6.14)$$

where $h_o(\vec{r}_{no}) = F^{-1}\{H_o(\vec{f}_p)\}$ is a perfect impulse response which only describes the image noise due to the finite diffraction limit, or to the limited size of the aperture function $H_o(\vec{f}_p)$.
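The filtering of Eqs (6.12)–(6.13) can be sketched on a 1D frequency slice: if the aperture function H and the displacement compensation are exact, the product $S \cdot H_f$ returns the undistorted target spectrum inside the band (all sample values below are illustrative):

```python
import numpy as np

def inverse_filter(S, fp, f_po, H, r_a, f_ph):
    """Eq. (6.12): H_f = H^{-1} H_o H_r applied to a hologram slice S(f_p).
    H_o is the rect window of Eq. (6.13); H_r compensates the radial
    displacement r_a (1D sketch of the 3D filtering)."""
    H_o = (np.abs(fp) <= f_ph).astype(float)            # Eq. (6.13)
    H_r = np.exp(1j * 2 * np.pi * (f_po + fp) * r_a)    # displacement compensation
    H_safe = np.where(H == 0, 1.0, H)                   # guard the inversion
    return S * H_o * H_r / H_safe
```

Outside the band $H_o$ forces the result to zero, which is what limits the reconstruction to the diffraction-limited impulse response $h_o$ of Eq. (6.14).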
Figure 6.3 The sequence of operations in radar data processing during imaging. Step 1, preprocessing of the echo signal: (1) coherent detection; (2) range processing (using the range estimation); (3) analogue–digital transform; (4) DFT; (5) annihilation of phase distortions due to the turbulent troposphere; (6) annihilation of the target radial displacement; (7) spherical (polar) recording (using the aspect estimation), yielding the microwave hologram. Step 2, image reconstruction: (8) subdivision into partial holograms; (9) partial image reconstruction by inverse DFT; (10) computation of partial image contributions to the total image, yielding the radar image.

Thus, the processing of an echo signal during the imaging includes two stages (Fig. 6.3). The signal preprocessing is aimed at synthesising a Fourier hologram, whose size and shape are determined by the transmitter pulse parameters and the target aspect variation. The structure and composition of processing operations 1–5 are conventional radar operations and can be varied with the type of transmitter
signal, the processing techniques used, and the tracking conditions. For example, a monochromatic pulse does not require operations 2 and 4. When a signal with a LFM is subjected to correlated processing, operations 1 and 2 coincide, and operation 4 becomes unnecessary. The compensation for the radial displacement of a satellite during hologram recording in field conditions is a fairly complex problem [8,10]. In an AEC, the latter operation reduces to the introduction of the phase factor $\exp[j2\pi f_{pl}(2R_o/c)]$, where $f_{pl}$ is the space frequency of the first spectral component of the hologram and $R_o$ is the distance between the antenna phase centre and the target rotation centre [8,10]. Obviously, the phase factor is constant for a particular AEC.

A necessary operation specific to ISAR systems at the preprocessing stage is the recording of the target aspect variation. It is assumed that each pixel on the hologram is compared by a digital recorder with the family of coordinates defining its position in the frequency domain $f_x f_y f_z$ (in the frequency plane $f_x$–$f_y$) (see Fig. 6.2).
It is worth discussing a possible application of available processing algorithms for image reconstruction from a microwave hologram.

The experience gained from the application of inverse aperture synthesis for imaging aircraft and spacecraft as well as from the study of local radar characteristics has stimulated the development of algorithms for processing echo signals by coherent radars. A fairly detailed analysis of the algorithms can be found in Reference 8 and in Chapter 2 of this book, so we shall discuss only the possibility of applying them to the aspect variation of real targets.

It has been shown in Section 2.3.2 and in References 9 and 10 that the conditions for tracking real targets differ from the conditions in which available algorithms operate. First, discrete aspect pixels are not equidistant because of a constant repetition rate of the transmitter pulses. Second, the angle between the RLOS and the target rotation axis changes during the viewing. An inevitable result of the latter is the consideration of a 3D character of the problem. Attempts at applying the 2D algorithms discussed above to the processing of 3D data lead to essential errors in the images [8]. The level of errors rises with increasing relative size of a target (the ratio of the maximum target size to the carrier radiation wavelength) and with increasing deviation from 90° of the angle formed by the line of sight and the target rotation axis.

To conclude, radar imaging should consider the viewing geometry, which requires the use of a radically new approach to data processing. The approach should provide 3D microwave holograms and be able to overcome a non-equidistant arrangement of echo pixels representing the aspect variation of space targets.
6.3 Hologram processing by coherent summation of partial components
It has been shown earlier that image reconstruction from a microwave hologram should generally include a 3D IFT of the hologram function. The obtained estimate ĝ(r_no) is a distorted representation of the target reflectivity function.
If there is no processing noise and the radial displacement has been perfectly compensated, an error may be due to a limited bandwidth of the transmitter pulse
Radar systems for rotating target imaging (tomographic approach)
or a limited aspect variation. The resolving power of image-synthesising devices is then restricted only by the diffraction limit, and the image produced is known as a diffraction-limited image. Recording and processing noise additionally deteriorate image quality. So when designing algorithms and techniques for image processing, one should bear the following things in mind: (1) the dimensionality of an image is not to be higher than that of a microwave hologram and (2) the image resolution in any direction is to be inversely proportional to the hologram length. Hence, processing of 3D holograms can yield 1D, 2D and 3D images. An advantage of a 3D image is that it fully represents the information recorded on the hologram, but it is to be computer-processed and analysed. For visualisation, an image must be displayed on 2D media, such as paper or photosensitive films, or on computer screens. Moreover, the 'third' dimension of a hologram is sometimes insufficient to get a good resolution. Nonetheless, the neglect of the 3D format of a hologram leads to serious image errors during its processing. Therefore, the problem of producing undistorted 2D images from 3D holograms seems quite important. We can suggest two ways of solving this problem.
One way is to obtain a 3D image and then intercept it with a plane of prescribed orientation. However, the computations with cumbersome 3D coordinates and data arrays of lower dimensionality require special processing algorithms and large computation resources.
A simpler and more cost-effective approach is to compute directly the contributions of single 3D hologram components to a 2D image, if their dimensionality is not higher than that of the image. The computations become less complex, and all highlighted components of a hologram can be processed simultaneously, provided that the number of processors is sufficient. The applicability of this technique can be easily extended to 2D holograms.
This method of image reconstruction can be termed coherent summation of partial components of a hologram. This method includes the following procedures:
Stage 1. A microwave hologram is subdivided into regions of limited size called partial holograms (PHs). Since discrete pixels making up the hologram are formed by the interceptions of radial lines (corresponding to single echo signals) and cofocal spherical surfaces (corresponding to discrete values of the space frequency), PHs can be separated from the initial hologram in different ways.
The PH dimensionality is chosen from the initial hologram geometry and from considerations of processing convenience. In the case of a 2D hologram, the PHs may be one- or two-dimensional, while for a 3D hologram they may be, in addition, three-dimensional. Figures 6.4 and 6.5 depict 1D PHs with lines having points at their ends, which represent the initial and final pixels. The points on the surfaces of 2D and 3D PHs correspond to single pixels.
One-dimensional PHs are composed either of pixels coinciding with the radial rays which correspond to single pulses (radial PHs) or of pixels located on the cofocal spherical surfaces with f_po = const. (transverse PHs). Radial 2D PHs are made up of ensembles of 1D radial PHs and represent regions of planar conic (Fig. 6.5(b)) or more complex curved (Fig. 6.4(b)) surfaces. Transverse 2D PHs can be separated only from volume holograms. They are regions of spherical surfaces with f_po = const.

Figure 6.4  Subdivision of a 3D microwave hologram into partial holograms: (a) 1D partial (radial and transversal), (b) 2D partial (radial and transversal) and (c) 3D partial holograms
If the angular discretisation of a hologram is uniform, the maximum angles of 1D transverse, 2D and 3D PHs are chosen from the following considerations. When a spherical coordinate grid (or a polar grid for plane holograms) is replaced by a rectangular grid, the phase noise at the PH edges should not exceed π/2. This criterion leads to the following restrictions:

Δψ ≤ (λ/D)^{1/2},   (6.15)
Δψ ≤ c/(D Δf).   (6.16)

If the intersample spacing on a hologram varies slowly because of a non-uniform rotation of a target, the choice of the PH angle should meet the condition:

δν ≤ f arccos(1 − λ/(4r_n cos β)),   (6.17)

where δν is the difference between the maximum (or minimum) discretisation step and its average value. Condition (6.17) is based on the limited phase noise due to the non-equidistant arrangement of the hologram samples.
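As a rough numerical illustration (not from the text), the two bounds (6.15) and (6.16) can be combined; the function name and the X-band numbers below are hypothetical:

```python
import math

def max_ph_angle(wavelength, target_size, bandwidth=None, c=3.0e8):
    """Upper bound (radians) on the partial-hologram angle.

    Eq. (6.15): dpsi <= sqrt(lambda/D); for a wideband pulse Eq. (6.16)
    adds dpsi <= c/(D*df), and the tighter of the two bounds applies.
    """
    bound = math.sqrt(wavelength / target_size)
    if bandwidth is not None:
        bound = min(bound, c / (target_size * bandwidth))
    return bound

# hypothetical X-band case: 3 cm wavelength, 10 m target, 500 MHz bandwidth
print(round(max_ph_angle(0.03, 10.0, 500e6), 4))
```

For these numbers the narrowband bound sqrt(λ/D) is the tighter one; for a larger target the bandwidth bound of Eq. (6.16) takes over.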
Figure 6.5  Subdivision of a 3D surface hologram into partial holograms: (a) radial, (b) 1D partial transversal and (c) 2D partial
When choosing the PH angle, one should always follow the more rigid of the above criteria. The restriction on the PH size is introduced in order to keep the deviation of the hologram samples from the rectangular grid nodes within a prescribed limit. The PH angles can be easily calculated analytically at a constant or slightly varying value of one of the angles of the spherical coordinates describing the PHs (Fig. 6.4(a)). In that case the PH boundaries will be close to the coordinate surfaces. If both angles θ and B change markedly (Fig. 6.5), the angular step Δψ should be found in the plane tangent to the PH.
Stage 2. Every PH should be subjected to a DFT providing a radar image with the same dimensionality as that of the PH, while the resolution is determined by its size.
Stage 3. The contributions of partial images to the integral image are computed. When the dimensionalities of a PH and a partial image are the same, the pixels of the latter are interpolated to those of the integral image. If the dimensionality of the integral image is higher, the major procedure for the computation is that of backprojection [127].
Consider algorithms for the reconstruction of 2D images by processing narrowband and wideband surface holograms (Fig. 6.5) produced by a three-axially stabilised ground radar. With such algorithms we shall try to justify the specific features of coherent summation of partial components: (1) the possibility of highlighting partial regions of various shapes on a PH and their independent processing and (2) the possibility of increasing the resolution of the integral image as the individual contributions of the partial components are accumulated and the diffraction limit corresponding to the initial hologram size is achieved.
The above analysis allows the following conclusions to be drawn. The most general approach to radar imaging of a satellite by inverse aperture synthesis, no matter how it moves and what probing radiation is used, includes two stages of echo signal processing. The preprocessing involves some conventional operations, the compensation for the phase noise specific to coherent radars, and the recording of the aspect variation data to produce a microwave hologram. The second stage is to reconstruct the image by a special digital processing of PHs.
A procedure specific to preprocessing is the compensation for the phase shift due to the radial displacement of a space target. In the case of an AEC, this operation is replaced by the introduction of constant phase factors in the wideband echo signal. The use of monochromatic transmitter pulses does not require this operation (Chapter 5).
The complex pattern of aspect variation of low-orbit satellites requires a 3D hologram with a non-equidistant arrangement of the aspect samples. Since there are no adequate methods for processing such holograms, we have designed a way of image reconstruction by coherent summation of PHs. This reduces the digital processing of a hologram of complex geometry to a number of simple operations. A hologram is subdivided into PHs, from which partial images are reconstructed using a fast Fourier transform (FFT). The contributions of the partial images to the integral image are computed.
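The subdivide–transform–sum scheme can be illustrated with a toy 1D calculation (a minimal sketch under the simplifying assumption of a uniformly sampled hologram; all names are illustrative). Splitting the spectrum into partial holograms, transforming each and summing the partial images coherently reproduces the full inverse DFT:

```python
import cmath

def image_by_partial_sums(hologram, ph_size):
    """Split a 1D hologram into partial holograms (PHs) of ph_size
    samples, reconstruct a partial image from each, and coherently
    sum the partial images on the common image grid."""
    n = len(hologram)
    image = [0j] * n
    for start in range(0, n, ph_size):
        ph = hologram[start:start + ph_size]
        for x in range(n):                      # image grid
            acc = 0j
            for k, s in enumerate(ph):
                f = start + k                   # absolute frequency index
                acc += s * cmath.exp(2j * cmath.pi * f * x / n)
            image[x] += acc                     # coherent summation
    return image

# hologram of a point target at x0 = 3 on an 8-point grid
n, x0 = 8, 3
holo = [cmath.exp(-2j * cmath.pi * f * x0 / n) for f in range(n)]
img = image_by_partial_sums(holo, ph_size=4)
```

The reconstructed image peaks at x0 regardless of the PH size chosen, which is the point of the method: the partition only redistributes the work.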
6.4 Processing algorithms for holograms of complex geometry

We should first change Eq. (2.38), generally relating the hologram and image functions, to the Cartesian coordinates necessary for a DFT:

f̄_p · r̄_no = f_x r_x + f_y r_y + f_z r_z,   (6.18)

where

f_x = |f̄_p| sin θ cos B,   (6.19)
f_y = −|f̄_p| cos θ cos B,   (6.20)
f_z = |f̄_p| sin B;   (6.21)
r_x = |r̄_no| sin ν cos β,   (6.22)
r_y = −|r̄_no| cos ν cos β,   (6.23)
r_z = −|r̄_no| sin β.   (6.24)

The substitution of Eq. (6.18) into Eq. (2.38) reduces it to the conventional 3D Fourier transform. However, it is impossible to apply it directly to a microwave hologram recorded in spherical coordinates (Fig. 6.2(b)). The transition to pixels located at rectangular grid nodes is considered as an interpolation problem. Even a first-order interpolation for a 2D case would require large computational resources. Besides, any noise arising from the interpolation would lead to large errors in the reconstructed image.
The procedure of coherent summation of partial components will simplify this problem if we use the reverse order of computational operations: a number of DFT operations and the interpolation of their results (partial images) to the rectangular grid nodes of the integral image. Of special practical importance is the case when a PH and its partial image have a lower dimensionality than the integral image. This is due to a higher computation efficiency of the algorithms used. The interpolation then represents a transition from a rectangular grid of lower dimensionality to that of a higher dimensionality, a procedure known as backprojection [127].
As previously mentioned, we shall focus on designing algorithms for producing 2D images by coherent summation of 1D PHs and individual initial hologram samples. The algorithm for coherent summation of 2D partial images will largely be discussed for a theoretical completeness of the treatment. The analysis will start with algorithms for processing 2D holograms recorded in an AEC and during the imaging of low-orbit satellites by a SAR located in the orbit plane.
6.4.1 2D viewing geometry

Equation (6.14) will be transformed to polar coordinates by substituting Eq. (6.18) into it and using Eqs (6.19)–(6.24). Assuming B = β = 0 and denoting |f̄_p| = f_po + f_p and |r̄_no| = r, we get:

ĝ(r, ν) = ∫_{θ_i}^{θ_f} ∫_{f_pl}^{f_pu} S(f_po + f_p, θ) |f_p| exp[j2π(f_po + f_p) r cos(ν − θ)] df_p dθ,   (6.25)

where θ_i and θ_f are the initial and final values of the angle θ of the hologram (Figs 6.6 and 6.7), and f_pl = f_po − Δf_p/2 and f_pu = f_po + Δf_p/2 are the lower and upper boundaries of the space frequency band along the hologram radius.
It is easier to start the analysis of processing algorithms with the simple case of narrowband microwave holograms. The limit of expression (6.25) at Δf_p → 0 is

ĝ(r, ν) = f_po ∫_{θ_i}^{θ_f} S(f_po, θ) exp[j2π f_po r cos(ν − θ)] dθ.   (6.26)

This expression coincides with the formula for the CCA for a narrowband signal [94]. When an image is reconstructed by this algorithm, circular convolution is performed for every sample of the polar coordinate r in the image space with respect to the parameter θ of the hologram function and the phase factor. The contribution of all hologram samples to every (r, ν) node of the image polar grid is computed. If the satellite aspect changes non-uniformly, the samples are arranged along the hologram circumference with a variable step, so a discrete circular convolution becomes impossible.
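A discretised form of Eq. (6.26) is easy to check numerically for a single point scatterer; the sketch below uses hypothetical parameter values and a uniform aspect grid:

```python
import cmath, math

def cca_narrowband(samples, thetas, f_po, r, nu):
    """Discretised circular-convolution step of Eq. (6.26): accumulate
    the hologram samples against exp[j*2*pi*f_po*r*cos(nu - theta)]."""
    acc = 0j
    for s, th in zip(samples, thetas):
        acc += s * cmath.exp(2j * math.pi * f_po * r * math.cos(nu - th))
    return acc

# hologram of a point scatterer at polar position (r_n, nu_n)
f_po, r_n, nu_n = 20.0, 0.4, 0.7
thetas = [2 * math.pi * k / 256 for k in range(256)]
holo = [cmath.exp(-2j * math.pi * f_po * r_n * math.cos(nu_n - t)) for t in thetas]
peak = abs(cca_narrowband(holo, thetas, f_po, r_n, nu_n))  # focused response
side = abs(cca_narrowband(holo, thetas, f_po, 0.1, 0.0))   # away from target
```

At the scatterer's own (r, ν) node every term adds in phase, giving the full sample count; away from it the terms largely cancel.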
Let us single out a series of adjacent regions on a hologram, or PHs shown in Fig. 6.6(a), with an angle satisfying the condition of Eq. (6.15). The convolution step of Eq. (6.26) over the whole hologram angle can be represented as a sum of integrals, each taken over a limited angle step Δθ:

ĝ(r, ν) = f_po Σ_{m=1}^{M} ∫_{Δθ} S_m(f_po, θ) exp[j2π f_po r cos(ν − θ)] dθ,   (6.27)

where S_m(f_po, θ) is the mth PH and M is the total number of such holograms.
Figure 6.6  Coherent summation of partial holograms. A 2D narrowband microwave hologram: (a) highlighting of partial holograms and (b) formation of an integral image
We now introduce the Cartesian x_m, y_m coordinates (Fig. 6.6(b)) for each mth PH with the origin O coinciding with that of the rectangular x–y coordinates of the integral image. The x_m-axis is parallel to the tangent to the arc connecting the mth PH pixels at its centre. Since the microwave hologram in question is 2D, let us introduce the azimuthal coordinate f_θ = f_po θ to describe it in the frequency f_x–f_y plane (Fig. 6.6(a)), in addition to the radial polar coordinate f_p. With x_m = r sin θ_m and y_m = r cos θ_m, the transformation of the phase factor under the integral of Eq. (6.27)
Figure 6.7  Coherent summation of partial holograms. A 2D wideband microwave hologram: (a) highlighting of partial holograms and (b) formation of an integral image
will give

ĝ(x, y) = Σ_{m=1}^{M} ∫_{f_θm − Δf_θ/2}^{f_θm + Δf_θ/2} S(f_po, θ) exp(j2π f_θ x_m) df_θ · Φ_m,   (6.28)

where df_θ = f_po dθ is the differential of the space frequency f_θ, while f_θm is the space frequency corresponding to the mth PH centre.
Expression (6.28) describes the algorithm for coherent summation of partial images obtained from 1D transverse (azimuthal) PHs. Each partial image results from a Fourier transformation of the appropriate PH and is resolved along the azimuthal x_m-coordinate. The synthesis of a PH is made simultaneously with its summation with the radar image by moving the partial image along the y_m-coordinate (back projection), accompanied by the multiplication of all of its samples by the coherent processing phasor Φ_m.
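The loop structure of this transform-and-back-project scheme can be sketched as follows (an illustrative, simplified geometry: the partial image of each azimuthal PH is computed along x_m and smeared along y_m onto the common grid; all names and sizes are hypothetical):

```python
import cmath, math

def sum_azimuthal_partials(phs, thetas, df_theta, grid):
    """Eq.-(6.28)-style loop: transform each 1D azimuthal PH to a partial
    image along x_m, back-project it (constant along y_m) and sum."""
    img = [[0j] * len(grid) for _ in grid]
    for ph, th in zip(phs, thetas):
        for iy, y in enumerate(grid):
            for ix, x in enumerate(grid):
                xm = x * math.cos(th) - y * math.sin(th)  # PH-frame abscissa
                acc = 0j
                for k, s in enumerate(ph):
                    f = (k - len(ph) / 2.0) * df_theta    # space frequency
                    acc += s * cmath.exp(2j * math.pi * f * xm)
                img[iy][ix] += acc                        # coherent summation
    return img

# all-ones PHs correspond to a point scatterer at the origin
grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
thetas = [math.pi * k / 8 for k in range(8)]
phs = [[1.0] * 4 for _ in thetas]
img = sum_azimuthal_partials(phs, thetas, df_theta=0.5, grid=grid)
```

Each partial contribution is a ridge constant along y_m; only at the scatterer position do the ridges from all aspect angles add in phase.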
The process of image summation by the algorithm of Eq. (6.28) will be discussed with reference to a point scatterer with the x_n, y_n coordinates in the x–y coordinate system (Fig. 6.6(b)). This scatterer will be assumed to possess an isotropic local radar target characteristic g(r_no) = σ_n^{1/2} exp(jϕ_n).
A narrowband microwave hologram is defined as

S(f_po, θ) = σ_n^{1/2} exp[−j2π f_po r̂_n(ϑ)] exp(jϕ_n).   (6.29)
The relative range of the point scatterer is expressed by the rectangular x_m, y_m coordinates. The expansion of r̂_n(ϑ) into a Taylor series with respect to the centre ϑ_m of the mth partial angle step, with the linear terms only, gives

r̂_n(ϑ) = r̂_n(ϑ_m) + r̂′_n(ϑ_m)ϑ − r̂′_n(ϑ_m)ϑ_m,   (6.30)

where r̂′_n(ϑ) = dr̂_n(ϑ)/dϑ and r̂′_n(ϑ_m) = r̂′_n(ϑ)|_{ϑ=ϑ_m}.
By substituting Eq. (6.30) into Eq. (6.29) and denoting r̂_n(ϑ_m) = y_mn and r̂′_n(ϑ_m) = x_mn, we transform the expression for the mth PH to

S_m(f_po, θ) = σ_n^{1/2} exp{−j2π f_po (y_mn − x_mn ϑ_m)} exp(−j2π f_po x_mn ϑ) exp(jϕ_n).   (6.31)
It is further assumed that the estimate of the target rotation rate obtained during the hologram recording contains no error: θ = ϑ. It should also be taken into account that a rectangular window of width Δf_θ = f_po Δθ framing the PH (6.31) is shifted relative to the centre of the space frequency axis by its half width: f_po ϑ_m = Δf_θ/2. Then the expression for the partial image can be written as

ĝ(x_m) = ∫_{f_θm − Δf_θ/2}^{f_θm + Δf_θ/2} S(f_po, θ) exp(j2π f_θ x_m) df_θ
       = σ_n^{1/2} Δf_θ {sin[π(x_m − x_mn)Δf_θ]/[π(x_m − x_mn)Δf_θ]} exp[jπ(x_m − x_mn)Δf_θ] Φ_mn exp(jϕ_n).   (6.32)
The integral image will be described as

ĝ(x, y) = σ_n^{1/2} exp(jϕ_n) Δf_θ Σ_{m=1}^{M} {sin[π(x_m − x_mn)Δf_θ]/[π(x_m − x_mn)Δf_θ]} exp[jπ(x_m − x_mn)Δf_θ] Φ_mn.   (6.33)
It is clear from Eq. (6.33) that the complex phase factors varying with the x_m, y_m coordinates and located at the integral image point corresponding to the position of the scatterer response have the maximum values equal to unity. The contribution of the PH to the integral image is defined by the product of the local radar target characteristic of the scatterer and the sin(x)/x-type function. Therefore, the PHs are summed equiphasically at the point x_n = r_no sin ϑ_n, y_n = r_no cos ϑ_n, and at other points of the image they are mutually neutralised.
The width of the major lobe of the scatterer response in the partial image (a function of the sin(x)/x type) is determined by the PH length Δf_θ or by its angle Δθ (Fig. 6.6(a)). The limiting value of the response width in the partial image derived from Eq. (6.14) is expressed by the inequality δx ≥ 0.5(λD)^{1/2}. Since D ≫ λ, the major lobe width is much greater than the transmitter pulse wavelength.
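The closed-form response in Eq. (6.32) can be checked against a direct numerical evaluation of the window integral (a sketch; the parameter values are arbitrary):

```python
import cmath, math

def partial_image_point(x_m, x_mn, df_theta, n=4000):
    """Numerical window integral for a unit point scatterer: integrate
    exp(j*2*pi*f*(x_m - x_mn)) over a band of width df_theta."""
    acc = 0j
    for i in range(n):
        f = (i + 0.5) / n * df_theta          # midpoint rule over the window
        acc += cmath.exp(2j * math.pi * f * (x_m - x_mn))
    return acc * (df_theta / n)

df = 3.0
num = abs(partial_image_point(0.8, 0.3, df))
u = math.pi * (0.8 - 0.3) * df
closed = abs(df * math.sin(u) / u)            # Df * |sinc| envelope of Eq. (6.32)
```

The numerical magnitude matches the Δf_θ·sinc envelope, and at x_m = x_mn the response reaches its maximum value Δf_θ.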
It follows from this treatment that the mth partial component of the integral image may be regarded as a 2D plane wave superimposed on the image plane. The wavefront is normal to the y_m-axis and its period is equal to the half wavelength of the transmitter pulse. The initial wave phase (along the x_m-axis) is determined by the phasor exp[jπ(x_m − x_mn)Δf_θ] in such a way that a positive half-wave always arrives at the scatterer's x_mn, y_mn position. The wave amplitude along the x_m-axis is described by a sin(x)/x function with a maximum at the point x_mn. For this reason, the partial component has a 'comb' elongated by the back projection of the partial image parallel to the y_m-axis.
Note that the resolution of the integral image is defined by the carrier wavelength rather than by the response width in the partial image. The reduction in the PH size from the maximum value prescribed by Eq. (6.15) to a single sample should not affect the result of summation in a PH. Therefore the synthesised aperture can be focused accurately over the whole image field. Keeping in mind
lim_{Δf_θ→0} ∫_{f_θm − Δf_θ/2}^{f_θm + Δf_θ/2} S(f_po, θ) exp(j2π f_θ x_m) df_θ = f_po S(f_po, θ) dθ,   (6.34)

we obtain from Eq. (2.4) the algorithm for coherent summation of a PH made up of individual samples of the initial hologram:

ĝ(x, y) = f_po Σ_{m=1}^{M} S_m(f_po, θ) Φ_m.   (6.35)
The coherent summation algorithm for hologram samples essentially represents a particular case of that for 1D transverse (azimuthal) partial images described by Eq. (6.28). However, each has its own specificity.
The major advantage of the algorithm for hologram samples is the absence of phase errors due to either the PH approximation or the non-equidistant distribution of samples. As a consequence, this algorithm is applicable to the processing of microwave holograms with any known sample arrangement. On the other hand, the coherent summation algorithm for partial images does not require excessive computer resources, because the exhaustive search of the raster pixels in the integral image during the computation of the partial contribution is made for a group of PH samples rather than for every single hologram sample.

Table 6.1  The number of spectral components of a PH (radial/azimuthal)

Target size D (m)   D_λ = D/λ   µ = 0.02   µ = 0.04   µ = 0.06   µ = 0.08   µ = 0.1
0.5                 15          2/12       2/12       3/12       3/12       4/12
1.0                 25          2/15       3/15       5/15       6/15       8/15
2.0                 50          3/22       6/22       9/22       12/22      15/22
4.0                 100         6/30       12/30      18/30      24/30      30/30
6.0                 150         9/37       18/37      27/37      36/37      45/30
8.0                 200         12/43      24/43      36/43      48/38      60/30
10.0                250         15/48      30/48      45/48      60/38      75/30
15.0                325         23/54      45/54      68/50      90/38      113/30
Figures 6.8 and 6.9 compare the computational complexity of the two algorithms as a function of the target size for a narrowband microwave hologram. The criterion for the degree of complexity is taken to be the algorithmic time of the programme realisation. The unit of measure of the algorithmic time is, in turn, taken to be 1 flop (floating point operation), that is, the time for one elementary operation of summation/multiplication of two operands with a floating point. So we have 1 Mflop = 10^6 flops. The estimations of the computational complexity and the programme realisation time have been made for a 2D image of 512 × 512 raster pixels in size and 2D microwave holograms with a 120° angle.
When going from a narrowband hologram to a wideband one, we can just suggest that the number of spectral components increases from 1 to L. As the size of an image element and the hologram discretisation step are inversely proportional to each other, the minimal number of spectral components at a given pulse frequency bandwidth must be proportional to the target size. Table 6.1 presents the L values for various PHs as a function of the maximum target size. The computations have been made for a 0.04 m carrier (centre) wavelength of the transmitter pulse spectrum and the ratio of the image field size to the maximum target length k = 1.5. One can easily see that the number of azimuthal PH samples rises with the target size as long as the limiting PH angle obeys the inequality (6.15).
When a target is rather large and the relative frequency bandwidth is µ = Δf/f_0 (the lower right-hand side of Table 6.1), the inequality (6.16) imposes a more rigid restriction on the PH size. Then both the PH size and its discretisation step decrease inversely with respect to the target size. Therefore, the number of PH azimuthal samples at a given transmitter pulse bandwidth Δf remains constant with increasing target size D.
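The radial entries of Table 6.1 appear consistent with a simple count: with range resolution c/(2Δf) = λ/(2µ) and an image field k·D, the number of radial spectral components is roughly L ≈ ⌈2kDµ/λ⌉. This is a reverse-engineered sketch, not a formula stated in the text, and the floor of 2 is an assumption:

```python
import math

def radial_components(target_size, mu, wavelength=0.04, k=1.5):
    """Rough estimate of the minimal number of radial spectral components:
    image field k*D divided by the range resolution lambda/(2*mu)."""
    ratio = 2.0 * k * target_size * mu / wavelength
    # small epsilon guards against floating-point noise at integer ratios
    return max(2, math.ceil(ratio - 1e-9))

print(radial_components(4.0, 0.04))   # Table 6.1 lists 12 for this case
```

The estimate grows linearly with both D and µ, matching the trend of the table's radial column.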
Figure 6.8  The computational complexity of the coherent summation algorithms as a function of the target dimension for a narrowband microwave hologram: (a) transverse partial images, (b) hologram samples

We shall start the discussion of digital processing of 2D wideband holograms with the algorithm for coherent summation of 1D azimuthal partial images, which is the extension of a similar algorithm for narrowband microwave holograms. Let us relate Eq. (6.28) to the lth (l = 1, …, L) spectral component:
ĝ(x, y) = Σ_{m=1}^{M} ∫_{f_θm − Δf_θ/2}^{f_θm + Δf_θ/2} S_m(f_pl, θ) exp(j2π f_θl x_m) df_θl · Φ_ml,   (6.36)
where f_pl, f_θl are the radial and azimuthal space frequencies and Φ_ml is the coherent processing phasor. By summing up the L number of PHs in each of the M number of partial angle steps, we get

ĝ(x, y) = Σ_{m=1}^{M} { Σ_{l=1}^{L} f_pl ∫_{f_θm − Δf_θ/2}^{f_θm + Δf_θ/2} S_m(f_pl, θ) exp(j2π f_θl x_m) df_θ · Φ_ml }.   (6.37)

Equation (6.37) describes the following processing operations:
• the L number of azimuthal PHs are selected in each mth partial angle step;
• the DFT is applied to each PH to get the L number of 1D partial images;
• the L number of partial images in every mth group are back projected and the obtained contributions are multiplied by the coherent processing phasor Φ_ml.
The analysis of Eq. (6.37) shows that the consecutive multiplication by the phasor Φ_ml of the contributions of partial images can be supplemented with a DFT. The result is a new processing algorithm – the coherent summation algorithm for 2D partial images:

ĝ(x, y) = Σ_{m=1}^{M} { ∫_{−Δf_p/2}^{Δf_p/2} |f_p| [ ∫_{f_θm − Δf_θ/2}^{f_θm + Δf_θ/2} S_m(f_p, θ) exp(j2π f_θ x_m) df_θ ] exp(j2π f_p y_m) df_p } Φ_m.   (6.38)
Algorithm (6.38) implies the following series of operations:
• the M number of 2D PHs with an angle defined by the conditions of Eqs (6.15) and (6.16) are selected in the initial microwave hologram;
• each PH is subjected to a 2D DFT to produce the M number of 2D partial images. All of these have a common centre which coincides with the integral image centre and are rotated by the angle Δθ relative to one another;
• the contribution of each partial image to the integral image is calculated using a 2D interpolation and the result is multiplied by the coherent processing phasor.
The last operation generally requires large computer resources. So we shall further refer to the coherent summation algorithm for 2D partial images only to preserve a theoretical completeness.
The advantages of coherent summation of individual samples discussed above for narrowband holograms are fully valid for wideband holograms as well. Equations (6.37) and (6.34) yield

ĝ(x, y) = Σ_{m=1}^{M} { Σ_{l=1}^{L} f_pl S_m(f_pl, θ) Φ_ml }.   (6.39)
Among the wideband processing algorithms, the one described by Eq. (6.39) is the most simple, but it requires a large number of arithmetic operations to be made because the processing is made online. The computational efficiency of this algorithm can be raised by using a 1D DFT along the mth hologram beam:

ĝ(x, y) = Σ_{m=1}^{M} { ∫_{−Δf_p/2}^{Δf_p/2} S(f_p, θ)|f_p| exp(j2π f_p y_m) df_p · Φ_m } Δθ.   (6.40)
In accordance with the accepted classification, expression (6.40) is the algorithm of coherent summation of 1D radial (range) partial images. Its implementation involves the following processing operations:
• the hologram samples making up the mth radial PH are multiplied by the linear frequency function and are then subjected to a DFT;
• the resulting 1D range partial image is back projected and the result is multiplied by the coherent processing phasor.
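The |f_p| weighting followed by a DFT is the familiar filtered-backprojection step of computerised tomography; a small sketch of the resulting kernel (sizes are illustrative) shows its characteristic single positive major lobe with negative side lobes:

```python
import cmath, math

def range_kernel(n):
    """Inverse DFT of the |f| weighting used in Eq. (6.40): the tomographic
    kernel that replaces the sin(x)/x response for radial partial images."""
    out = []
    for x in range(-n // 2, n // 2):
        acc = 0j
        for f in range(-n // 2, n // 2):
            acc += abs(f) * cmath.exp(2j * math.pi * f * x / n)
        out.append(acc.real / (n * n))
    return out

kern = range_kernel(32)
# kern[16] (x = 0) is the positive major lobe; kern[15], kern[17] are negative
```

The negative side lobes are what subtract the low-frequency blur that plain (unfiltered) back projection would leave in the image.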
The algorithm of Eq. (6.40) has much in common with the narrowband algorithm for partial images in Eq. (6.28), but it also has some specific features. One is that the 1D image module of a single point scatterer is described by the so-called kernel function of computerised tomography [57], rather than by a function of the sin(x)/x type. Depending on the chosen approximation of the linear frequency function in Eq. (6.40), the kernel function is its Fourier image and may be described analytically in various ways. It always has the form of an infinite periodic function with one major lobe and side lobes decreasing in amplitude. Another specificity of this algorithm is that the back projection operation is performed along the x_m-axis. Still another characteristic of the algorithm of Eq. (6.40) is that the PH samples are arranged equidistantly along radial straight lines, so no restriction is imposed on the maximum size of a PH.
The relative computational complexities of wideband processing algorithms are compared in Figs 6.10 and 6.11. It is seen that the number of arithmetic operations always increases with the relative frequency bandwidth of the transmitter pulse µ and the relative target size D_λ, whereas the computational complexity of 1D partial image algorithms changes differently with these parameters. At given values of µ and D_λ, more profitable is the algorithm for a PH with a larger number of samples. This is because the efficiency of an FFT increases with the number of samples, as compared with an ordinary DFT. For example, at small values of µ and D_λ, it is more reasonable to use the algorithm for azimuthal partial images (Fig. 6.11). As the relative frequency bandwidth and the target size become larger, the number of samples in a radial PH exceeds, at a certain moment, that of an azimuthal PH (see Table 6.1). This happens because the restriction on the azimuthal PH size in Eq. (6.16) begins to dominate over that of Eq. (6.15), such that the use of the coherent summation algorithm for radial partial images becomes more profitable. In spite of its structural simplicity, the coherent summation algorithm for hologram samples has the greatest computational complexity (Fig. 6.10).
Figure 6.9  The relative computational complexity of coherent summation algorithms as a function of the target dimension for a narrowband microwave hologram: (a) transverse partial images/CCA, (b) hologram samples/CCA
It is clear that the time for a wideband hologram processing by the above algorithms, estimated from the product of the computational complexity and the time for an elementary multiplication/summation operation, is excessively long, so one should consider the possibility of separate, independent processing of PHs in order to considerably reduce this parameter.
Figure 6.10  The relative computational complexity of coherent summation algorithms of hologram samples and transverse partial images versus the coefficient µ (curves for µ = 0.02, 0.04, 0.06, 0.08) in the case of a wideband hologram
6.4.2 3D viewing geometry

We now express Eq. (6.14) in spherical coordinates and use Eqs (6.19)–(6.21) to get the relation

g(x, y, z) = ∫∫∫_{V_f} S(f_p, θ, B) exp[j2π(f_po + f_p)(y cos θ cos B + x sin θ cos B + z sin B)] df_p dθ dB.   (6.41)
Figure 6.11  The relative computational complexity of coherent summation algorithms for radial and transverse partial images versus the coefficient µ (curves for µ = 0.02, 0.04, 0.06, 0.08) in the case of a wideband hologram

To make the coherent summation of PHs more convenient, it is reasonable to separate the integration variables in Eq. (6.41). This task could be simplified if one of the variables remained constant through a synthesis step. For example, at B = const., the image in the (z = 0) plane will be described as

g(x, y) = ∫∫_{V_f} S(f_p, θ, B) exp[j2π f_pe (y cos θ + x sin θ)] df_p dθ,   (6.42)
where f_pe = (f_po + f_p) cos B is an 'equivalent' space frequency introduced just to reduce Eq. (6.42) to a conventional form.

Figure 6.12  The transformation of the partial coordinate frame in the processing of a 3D hologram by coherent summation of transverse partial images
Clearly, the algorithms to be derived from Eq. (6.42) may differ from those for 2D holograms only in the space frequency value. In reality, this may happen in viewing geostationary objects stabilised by rotation. In hologram recording of a low-orbit satellite, both angles describing its geometry change simultaneously. For this reason, the polar angle can be considered to be 'fixed' only at certain moments of time. This hologram geometry is best satisfied by the algorithms for coherent summation of individual hologram samples and 1D radial partial images. Expressions for such algorithms can be derived from Eq. (6.42) or directly from Eqs (6.35), (6.39) and (6.40) by substituting f_pe for (f_po + f_p).
To design algorithms for transverse PHs, we need to introduce in the frequency domain the partial coordinates f_xm, f_ym, f_zm with the origin at the point O_f, which is also the origin of the f_x, f_y, f_z coordinates (Fig. 6.12). The f_xm–f_ym plane of the partial coordinates is tangential to the PH at a point with the angular coordinates θ_m, B_m (the f_xm- and f_ym-axes are not shown in Fig. 6.12).
Let us expand the polar angle B as a function of the azimuth θ into a Taylor series in the vicinity of θ_m:

B(θ) = B_m + [dB(θ)/dθ]|_{θ=θ_m} · (θ − θ_m) + Δ_B,   (6.43)

where B_m = B(θ) at θ = θ_m and Δ_B are the residual terms of the series. Obviously, in the ideal case of Δ_B = 0, the PH lies totally in the f_xm–f_ym plane, and the difference between the PH and a straight line is only determined by the curvature of the sphere f_po. The non-zero nature of the polar angle derivatives with an order above the first one may generally lead to additional phase errors in the PH approximation by a straight line. However, a digital simulation of the aspect variation of a real target has shown that the phase error is negligible. Therefore, we shall assume the PH angle to be defined by the conditions of Eqs (6.15) and (6.16).
To describe the positions of PH samples in the f_xm–f_ym plane, we introduce the angle ψ and write down the partial Cartesian coordinates of the pixels as f_xm = f_p sin ψ, f_ym = −f_p cos ψ. An acceptable processing algorithm can be obtained if the f_xm–f_ym plane is superimposed with the f_x–f_y plane, which corresponds to the x–y plane in the image space containing the integral image. The superposition operation will be made by two consecutive rotations of the partial coordinates (Fig. 6.12):
• the rotation by the angle ξ_m = arctan[(dB/dθ)|_{θ=θ_m}] round the f_ym-axis gives the f′_xm, f′_ym, f′_zm coordinates, whose f′_xm-axis lies in the f_x–f_y plane;
• the rotation of the f′_xm, f′_ym, f′_zm coordinates by the angle B_m round the f′_xm-axis gives the sought-for f″_xm, f″_ym, f″_zm coordinates.
These transformations of the partial coordinates result in the following expression for the scalar product at the mth partial angle step:

f̄_p · r̄_no = −f_p b_m x′_m sin ψ + f_pe y′_m cos ψ   (6.44)

with

x′_m = x_m cos ζ_m + y_m sin ζ_m,
y′_m = −x_m sin ζ_m + y_m cos ζ_m,
b_m = (1 − sin² ξ_m cos² B_m)^{1/2},
f_pe = f_p cos B_m.

In turn,

sin ζ_m = sin ξ_m sin B_m / b_m,
cos ζ_m = cos ξ_m / b_m.
Thus, the variation of the polar angle B during hologram recording introduces two specific features in the coherent summation algorithm for transverse partial images. One is the necessity to make an additional rotation of the partial x_m, y_m coordinates by the ζ_m angle round the z_m-axis. The other is a change in the partial image scale along the x_m- and y_m-axes by a factor of b_m and cos B_m, respectively.
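These two features can be sketched numerically. The fragment below is an illustration of the relations accompanying Eq. (6.44), not the authors' code; the function names are ours. It computes the scale factor b_m and the rotation angle ζ_m from ξ_m and B_m, and applies the additional rotation to the partial coordinates:

```python
import math

def partial_frame_params(xi_m, B_m):
    """Scale factor b_m and rotation angle zeta_m for the mth partial
    angle step, from the relations accompanying Eq. (6.44)."""
    b_m = math.sqrt(1.0 - math.sin(xi_m) ** 2 * math.cos(B_m) ** 2)
    sin_zeta = math.sin(xi_m) * math.sin(B_m) / b_m
    cos_zeta = math.cos(xi_m) / b_m
    return b_m, math.atan2(sin_zeta, cos_zeta)

def rotate_partial(x_m, y_m, zeta_m):
    """Additional rotation of the partial x_m, y_m coordinates
    by zeta_m round the z_m-axis."""
    x_rot = x_m * math.cos(zeta_m) + y_m * math.sin(zeta_m)
    y_rot = -x_m * math.sin(zeta_m) + y_m * math.cos(zeta_m)
    return x_rot, y_rot
```

For ξ_m = 0 (no tilt of the PH plane) the rotation angle vanishes and the scale factor is unity, so the purely 2D geometry is recovered.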
Let us now derive an expression for the coherent summation algorithm for transverse partial images in the case of wideband pulses. This can be done by substituting Eq. (6.44) into Eq. (6.14) and reducing the result to the form:

ĝ(x, y) = Σ_{m=1}^{M} { Σ_{l=1}^{L} f_ple ∫_{f_ψm − Δf_ψ/2}^{f_ψm + Δf_ψ/2} S(f_p, θ, B) exp(j2πf′_ψo x′_m) df_ψo · Φ_mlθ },   (6.45)
Radar systems for rotating target imaging (tomographic approach) 145
where f_ψo = f_po ψ is the transverse space frequency; f′_ψo = f_ψo b_m; f_ple = f_pl cos B_m is the equivalent space frequency for the lth spectral feature; and Φ_mlθ is the coherent processing phasor.
The processing with the algorithm of Eq. (6.45) includes the following operations:
• the L number of transverse PHs (equal to the number of spectral features) are selected in every mth partial angle step;
• a DFT is performed with every PH at the space frequency f′_ψo to produce the L number of 1D partial images;
• every partial image is back projected, and the contribution is multiplied by the phasor Φ_mlθ. The back projection is made along the y′_m-axis rotated by the ζ_m angle relative to the y_m-axis.
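A minimal numerical sketch of these operations follows. This is our own simplification, not the book's implementation: a direct DFT evaluation on the image grid stands in for the optimised FFT-plus-interpolation step, and all variable names are hypothetical.

```python
import numpy as np

def coherent_summation(holo, freqs, zeta, phasor, grid_x, grid_y):
    """Sketch of the summation in Eq. (6.45): every partial hologram
    is Fourier-transformed to a 1D partial image and back projected
    along the rotated cross-range axis, with coherent phasor weighting."""
    image = np.zeros((grid_y.size, grid_x.size), dtype=complex)
    X, Y = np.meshgrid(grid_x, grid_y)
    for m, (samples, f) in enumerate(zip(holo, freqs)):
        # rotated cross-range coordinate x'_m for every image pixel
        xp = X * np.cos(zeta[m]) + Y * np.sin(zeta[m])
        # direct DFT evaluation on the pixel grid (stands in for an
        # FFT followed by interpolation in an optimised implementation)
        contrib = np.zeros_like(image)
        for s_k, f_k in zip(samples, f):
            contrib += s_k * np.exp(2j * np.pi * f_k * xp)
        image += phasor[m] * contrib
    return image
```

With a single partial hologram of a point scatterer, the coherent sum peaks at the scatterer's cross-range position, which is the essence of the back-projection step.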
Equation (6.45) can be easily solved to give expressions for coherent summation algorithms for 2D partial and 1D transverse images of narrowband pulses, by analogy with the case discussed in Section 6.4.1.

As compared with the respective algorithms for 2D holograms, the computational complexity of coherent summation of individual hologram samples and 1D partial radial images increases only because of the necessity to compute the sine of the polar angle B. However, it does not increase by more than 2–3 per cent even in the most unfavourable case of a narrowband signal and a relatively small target. The complexity rises considerably when a 2D geometry is replaced by a 3D geometry of transverse partial images. This is due to both the polar angle variation during the viewing and the appearance of a variable in the hologram discretisation step. As a result, new operations come into play, but the increase in the computational complexity still lies within 10 per cent.
The above treatment allows the following conclusions to be made:

1. The algorithms for digital processing of microwave holograms designed in terms of the theory of coherent summation of partial components provide imaging in a wide range of viewing conditions, in particular, the probing geometry and the frequency bandwidth of transmitter radiation.
2. The wider applicability of digital processing by coherent summation of partial components implies a greater complexity of computations than that required by available techniques. However, one can choose the least time-consuming algorithm for particular values of the relative frequency bandwidth of the transmitter pulse and the size of the space target. A radical reduction in the processing time can be achieved by using separate processing of individual PHs.
Chapter 7

Imaging of targets moving in a straight line
When a target moves in a straight line normal to the radar line of sight, the inverse synthesis of a tracking aperture can be regarded in terms of Doppler information processing, in a way similar to the processing aimed at a high azimuthal resolution by a side-looking radar. Clearly, an inverse aperture can then be considered as a linear antenna array performing a periodic time discretisation of the radiation wave front. This is the so-called antenna approach, and its capabilities are discussed in Reference 139. The author analysed an equivalent array made up of (2N + 1) records of target movement across a real ground antenna beam of sufficient width. It was shown that the azimuthal resolution Δ at the range R_0 in the ϕ direction could be defined as

Δ = λR_0 / [2VT_r(2N + 1) cos ϕ],   (7.1)

where λ is the transmitter pulse wavelength, V is the target velocity, ϕ is the angle between the line directed to the target and the normal to the synthesising aperture, and T_r is the repetition period of transmitter pulses.
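Eq. (7.1) is straightforward to evaluate; the following sketch is ours, and the sample numbers are arbitrary illustrations, not values from the text:

```python
import math

def azimuthal_resolution(lam, R0, V, Tr, N, phi):
    """Cross-range resolution of the equivalent (2N+1)-element
    array, Eq. (7.1)."""
    return lam * R0 / (2 * V * Tr * (2 * N + 1) * math.cos(phi))

# e.g. lambda = 0.1 m, R0 = 50 km, V = 600 m/s, Tr = 1 ms,
# 999 records (N = 499), target on the array normal (phi = 0)
res = azimuthal_resolution(0.1, 50e3, 600.0, 1e-3, 499, 0.0)
```

The resolution improves (the value of Δ decreases) as more records enter the equivalent array and degrades as cos ϕ falls off the normal.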
Inverse aperture synthesis for a linearly moving target can also be examined in terms of a holographic approach. This was first done by H. Rogers in a study of the ionosphere [85], making use of D. Gabor's ideas of holography. Rogers described a method for hologram recording of microwaves reflected by ionospheric inhomogeneities. The principle of this method is as follows. When an ionospheric inhomogeneity moves, the resulting diffraction pattern on the earth surface also moves across the receiver aperture. A signal that has been sensed is recorded on a photofilm as a hologram. What is actually recorded is the wave front, and one can reconstruct the inhomogeneity image from the hologram. For these reasons, E. Leith considered Rogers' device to be truly holographic rather than quasi-holographic.

Holographic concepts were successfully introduced in radar imaging by W. E. Kock [71], who showed that echo signals from a linearly moving target, recorded by the receiver of a coherent continuous pulse radar, were structurally equivalent to 1D holograms. He pointed out a similarity among an airborne SAR, a ground coherent radar and a holographic system.
The holographic approach treats inverse aperture synthesis of signals from a linearly moving target as a particular case of hologram recording by the scanning technique (Chapter 3). Here we shall analyse the process of radar imaging in the range–cross range coordinates, using inverse synthesis under real target flight conditions, that is, imaging of partially coherent signals.

Radar images obtained in the range–cross range coordinates allow estimates of the target size and shape, as well as the reflectivity of its individual scatterers. Such images can be further used for target identification. The imaging should be performed by ISARs transmitting complex pulses [85,104].
Apart from the prescribed movement, an aerodynamic target makes accidental motions with unknown parameters induced by destabilising factors, such as the constant component of wind velocity, the operation of the internal control system, turbulent flows, elastic fuselage oscillations and vibrations due to the engine operation and the target aerodynamics. Some of these can be estimated in advance by comparing the synthesis time T_s and the correlation time of perturbing effects T_c and by calculating the phase noise they introduce in the echo signal.

Among the above factors responsible for phase fluctuations of an echo signal Δψ(ϕ), of special importance are turbulent flows. This is because the constant wind velocity factor can be eliminated during the compensation for the radial displacement of a target. The second factor becomes important when a target is manoeuvring. For a typical synthesis time (T_s ∼ 1 s), the value of T_c is smaller than that of T_s. The effect of the fourth factor can be avoided by choosing the wavelength λ such that the condition λ/2 ≫ ε (where ε is the maximum displacement due to fuselage oscillations) is fulfilled [17].

An echo signal from this kind of target is partially coherent. In the case of direct aperture synthesis, the effect of turbulent flows on the carrier pathway is accounted for by introducing a phase correction in the echo signal, which is found from random radial velocity and acceleration measurements [136]. In inverse synthesis, it is very hard to correct phase fluctuations Δψ(ϕ) of an echo signal. Below, we shall try to define the imaging conditions, primarily along the cross range coordinate, for a partially coherent signal [89]. The numerical simulation we have made shows that the destabilising factors of interest do not affect the range resolution.
7.1 The effect of partial signal coherence on the cross range resolution
Assuming that f(x) is the distribution of the complex scattering amplitude (the target reflectivity) along the cross range x-coordinate, ϕ is an angle characterising the aspect variation, and z(ϕ) is an echo signal, we have

z(ϕ) = ∫ f(x) exp[−j(4π/λ)ϕx] dx.   (7.2)
Imaging of targets moving in a line 149
After the reconstruction of the radar image, which reduces to a Fourier transform of the echo signal (7.2) with the weight function w(ϕ), we obtain for the intensity

|ν(s)|² = ∬ f(x₁) f*(x₂) U(s − x₁, s − x₂) dx₁ dx₂ + η(x),   (7.3)

U(s₁, s₂) = ∬ w(ϕ₁) w*(ϕ₂) × exp{ j[Δψ(ϕ₁) − Δψ(ϕ₂)] + j(4π/λ)(s₁ϕ₁ − s₂ϕ₂) } dϕ₁ dϕ₂,   (7.4)

where s is the cross range coordinate in the image plane, the sign * indicates complex conjugation, η(x) is complex noise on the image, and U(s₁, s₂) is the cross correlation function of the hologram.
The statistical characteristics |ν(s)|² and U(s₁, s₂) will be analysed on the assumption that f(x) is a sum of the δ-functions of point scatterers and Δψ(ϕ) is defined by the normal distribution law. Consider the average U(s₁, s₂) value over the phase fluctuations Δψ(ϕ), taking them to be Gaussian. With the formula for the characteristic function and the expansion of ρ(ϕ₁ − ϕ₂) into a Taylor series at σ² ≪ 1, we get

⟨exp{ j[Δψ(ϕ₁) − Δψ(ϕ₂)] }⟩ = exp{ −σ²[1 − ρ(ϕ₁ − ϕ₂)] } ≈ exp{ −(σ²/2Θ²)(ϕ₁ − ϕ₂)² },

where σ² is the phase noise dispersion, ρ(ϕ₁ − ϕ₂) is the correlation factor and Θ² is a quantity inverse to the second derivative of the correlation factor at zero, which describes the angle correlation step of the target aspect variation.
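The characteristic-function step above can be checked by a small Monte Carlo experiment. This is our illustration only: the pair Δψ(ϕ₁), Δψ(ϕ₂) is represented by two correlated zero-mean Gaussian phase samples with variance σ² and correlation factor ρ.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, rho, n = 0.5, 0.8, 200_000

# correlated Gaussian phase pair with variance sigma^2, correlation rho
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
psi1, psi2 = sigma * z1, sigma * z2

# sample average of exp{j[psi1 - psi2]} vs exp{-sigma^2 (1 - rho)}
empirical = np.mean(np.exp(1j * (psi1 - psi2)))
theory = np.exp(-sigma**2 * (1.0 - rho))
```

The agreement follows because ψ₁ − ψ₂ is Gaussian with dispersion 2σ²(1 − ρ), so its characteristic function at unity is exp[−σ²(1 − ρ)].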
Assuming w(ϕ) = exp[−ϕ²/(2θ²)], where θ describes the angle step of the synthesis, we find

U(s₁, s₂) = (λ²C/64π) [(d_s² + d_c²)² − d_c⁴]^−1/2 × exp{ −[(d_s² + d_c²) / (2[(d_s² + d_c²)² − d_c⁴])] · [(s₁ − s₂)² + (2d_s²/(d_s² + d_c²)) s₁s₂] },   (7.5)

where C = exp(−σ²); d_s = λ/(2θ) is a resolution step corresponding to the synthesis time T_s (or the aspect variation θ = VT_s sin α/r_o), V is the linear target velocity, α is the angle between the antenna pattern axis and the vector V, d_c = λσ/(2Θ) is a space correlation step of target path instabilities, and r_o is the target range at the moment of time T_s/2.
The average intensity of a point target image (the impulse response of the system), derived for a partially coherent echo signal, is

⟨|ν(s)|²⟩ = U(s − x, s − x) = C₁|A|² exp{ −(s − x)² / (d_s² + 2d_c²) },   (7.6)

where C₁ is the same factor of the exponent as in Eq. (7.5), A is the signal amplitude, and x is the scatterer coordinate.
For a target composed of a multiplicity of scatterers, each scatterer will be represented by a peak in the image described by Eq. (7.6). The image position of each scatterer along the s-coordinate is its real position along the x-coordinate in the target plane. Moreover, every pair of scatterers will be represented in the image function by an interference term

U(s − x₁, s − x₂) = C₁ Re{A₁A₂*} exp{ −[s − (x₁ + x₂)/2]² / (d_s² + 2d_c²) − (x₁ − x₂)² / (4d_s²) }.   (7.7)

The additional term in Eq. (7.7) defines the peak located halfway between the images of the respective scatterers; it has the same width as the peak for any other scatterer and is described by the ratio of the interscatterer distance to the resolution step value at zero phase noise. If this ratio is large, the interference term due to the superposition of side lobes in individual pixel images is negligible as compared with the average image intensity.
Under the conditions of partial signal coherence, the real resolution can be found from the 0.5 level of the maximum intensity ⟨|ν(s)|²⟩:

d′_s = 2s|_{|ν(s)|² = 0.5} = C₂ √(d_s² + 2d_c²) = C₂ d_s √[1 + 2(σT_s/T_c)²],   (7.8)
where C₂ is a constant defined by the function w(ϕ): in the exponential and uniform approximations, C₂ = 1.66 and C₂ = 1, respectively. Obviously, if d_s decreases by the value Δ_s, the real resolution d′_s will improve only by Δ′_s (Fig. 7.1(a)):

Δ′_s = C₂ √(d_s² + 2d_c²) − C₂ √[(d_s − Δ_s)² + 2d_c²],   (7.9)

and with increasing T_s the gain in the real resolution will become still smaller.
Equation (7.9) can be reduced to

a d_s² + b d_s + c = 0,

where a = 4(p² − Δ_s²); b = 4Δ_s(Δ_s² − p²); c = p²(2Δ_s² + 4d_c²) − p⁴ − Δ_s⁴; p = Δ′_s/C₂.
We can now calculate the d_s and T_s values that may be considered most suitable for the synthesis at given Δ_s and Δ′_s:

d_s opt = Δ_s/2 + √(Δ_s²/4 − c/a),   (7.10)

T_s opt = λr_o / (2V sin α · d_s opt).   (7.11)
At the values of λ = 0.1 m, r_o = 50 km, V = 600 m/s, α = 90°, Δ_s = 0.1 m, Δ′_s = 0.05 m and C₂ = 1, we find T_s opt = 1.83 s for T_c = 1.5 s and T_c = 3 s, respectively (d_c = 6.98 m and d_c = 3.49 m).
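Eqs (7.10) and (7.11) can be sketched as follows, using the quadratic coefficients quoted after Eq. (7.9). This is an illustration only; the function name and the internal consistency check are ours, and the numerical values are those listed in the text.

```python
import math

def synthesis_optimum(lam, r0, V, alpha, d_c, delta_s, delta_s_prime, C2=1.0):
    """Optimal resolution step and synthesis time, Eqs (7.10)-(7.11),
    from the quadratic a*d_s^2 + b*d_s + c = 0 derived from Eq. (7.9)."""
    p = delta_s_prime / C2
    a = 4.0 * (p**2 - delta_s**2)
    c = p**2 * (2.0 * delta_s**2 + 4.0 * d_c**2) - p**4 - delta_s**4
    d_s_opt = delta_s / 2.0 + math.sqrt(delta_s**2 / 4.0 - c / a)   # (7.10)
    T_s_opt = lam * r0 / (2.0 * V * math.sin(alpha) * d_s_opt)      # (7.11)
    return d_s_opt, T_s_opt
```

Because b = 4Δ_s(Δ_s² − p²) = −aΔ_s, the root (7.10) can be checked by substituting it back into the quadratic.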
Formula (7.11) defines the synthesis time of a partially coherent signal, which is optimal in the sense that a longer synthesis would require greater computer resources but would not essentially improve the image quality determined by the real resolution d′_s or the Δ′_s/Δ_s ratio (Fig. 7.1(b)). This ratio quantitatively describes the gain in the angular radar resolution owing to the synthesis of partially coherent signals, as compared with that for perfect viewing conditions (d_c → 0).

Figure 7.1 Characteristics of an imaging device in the case of partially coherent echo signals: (a) potential resolving power at C₂ = 1, (b) performance criterion (1 – d_c = 6.98 m, 2 – d_c = 3.49 m and 3 – d_c = 0)

In the next section, we shall estimate the synthesis conditions by numerical simulation. The key factor in the imaging model to be described is target path fluctuations.
7.2 Modelling of path instabilities of an aerodynamic target

Path instabilities will be considered as random range displacements of a target (model I) or as independent fluctuations of the target velocity along the x′- and y′-axes (model II). The appropriate random processes will be expressed by recurrent difference equations [26]

Y_i[n] = Σ_{l=0}^{L} a_l X[n − l] + Σ_{k=1}^{K} b_k Y_i[n − k],   (7.12)

where the coefficients a₀, a₁, …, a_L and b₁, b₂, …, b_K, as well as L and K, vary with the cross correlation function; the subscript i denotes the number of the model for a random deviation of the target motion parameter. The coefficients a_l and b_k in Eq. (7.12) that are necessary for obtaining the values of Y_i[n] with a prescribed correlation coefficient ρ(τ) were presented in the work [26].
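A first-order special case of Eq. (7.12) is enough to generate a deviation with a prescribed mean square value and correlation time. The sketch below is ours (an exponential correlation model is assumed); it is not the coefficient set of [26].

```python
import numpy as np

def path_instability(sigma, T_c, dt, n, seed=0):
    """First-order case of Eq. (7.12): Y[n] = a0*X[n] + b1*Y[n-1].
    b1 = exp(-dt/T_c) gives an exponential correlation with time T_c;
    a0 scales the white noise X[n] so that the stationary mean square
    deviation of Y equals sigma."""
    rng = np.random.default_rng(seed)
    b1 = np.exp(-dt / T_c)
    a0 = sigma * np.sqrt(1.0 - b1**2)
    noise = a0 * rng.standard_normal(n)
    y = np.zeros(n)
    for i in range(1, n):
        y[i] = noise[i] + b1 * y[i - 1]
    return y
```

For σ_p = 0.05 m and T_c = 1.5 s (values used in model I), a long record reproduces both the prescribed dispersion and the correlation decay exp(−τ/T_c).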
In model I of path instabilities, the current range r_T[n] to a target is described as a sum of the predetermined range variation r[n] and the random component Y₁[n]; σ_p is the mean square deviation of the range. The quantities σ_p and T_c are: σ_p = 0.04 or 0.05 m, T_c = 1.5 and 3 s. The values of σ_p and T_c were found heuristically from a preliminary simulation.
In model II, the modulus of the real target velocity vector is

V_r[n] = √[(V + Y_2x[n])² + Y²_2y[n]],   (7.13)

where Y_2x[n] and Y_2y[n] are the current values of random velocity deviations along the x′- and y′-axes, respectively; for comparison, the mean square deviation of the velocity is σ_x′,y′ = 0.1 or 0.2 m/s at T_c = 1.5 or 3 s. The values of σ_x′,y′ and T_c are presented here courtesy of A. Bogdanov, O. Vasiliev, A. Savelyev and M. Chernykh, who measured them in real flight conditions. Their experimental data on coherent radar signals in the centimetre wave range are also described in Reference 28.

The current angle between the antenna pattern axis and the vector V_r[n] in this model is

α[n] = α + arctan(Y_2y[n] / (V + Y_2x[n])).   (7.14)

With Eqs (7.13) and (7.14) combined with the viewing conditions of model II, we have computed the real current range r_T[n] to the target.
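Eqs (7.13) and (7.14) in code form (a small sketch of ours; arctan is rendered with `atan2`):

```python
import math

def velocity_modulus(V, y2x, y2y):
    """Eq. (7.13): modulus of the perturbed target velocity vector."""
    return math.sqrt((V + y2x) ** 2 + y2y ** 2)

def pattern_angle(alpha, V, y2x, y2y):
    """Eq. (7.14): current angle between the antenna pattern axis
    and the perturbed velocity vector V_r[n]."""
    return alpha + math.atan2(y2y, V + y2x)
```

With the text's values (V = 600 m/s and velocity deviations of the order of 0.1 m/s), the angle perturbation is only a fraction of a milliradian per sample, yet its accumulated phase effect over the synthesis time is what model II captures.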
7.3 Modelling of radar imaging for partially coherent signals

To make the next step in the modelling of a radar image, we suggest that the predetermined path component of a point target is normal to the antenna pattern axis, that is, α = 90°; the transmitter pulses have a spectral width Δf_c = 75 MHz, and their other parameters are chosen with account of the well-known restrictions for the removal of image inhomogeneities [104].

The range image of a target was formed by coherent correlation processing of every echo signal. For every pixel on the range image, the nth (n = 1, …, 256) value of a complex echo signal was recorded to form a microwave hologram [138]. The reference function was formed ignoring the errors in the estimated parameters of target motion. The reconstructed image |ν(r, s)|² was 2D in the r- and s-coordinates (range and cross range). The simulation showed that the phase noise due to path instabilities did not affect the range image of a target. Therefore, we shall further treat only its cross range section along the s-axis.
A visual analysis of impulse responses during the imaging of partially coherent echo signals (T_c = 3 s, T_s = 1.5 s) indicates that phase fluctuations largely produce the following types of noise (Fig. 7.2). First, there is a shift of the impulse response along the s-axis in the image field (Fig. 7.2(a)). Second, the peak of the major impulse response becomes broader (Fig. 7.2(b)). Third, the side lobes of the impulse response become larger to form some additional features commensurable in their intensities with the major peak (Fig. 7.2(c)).

Figure 7.2 Typical errors in the impulse response of an imaging device along the s-axis: (a) response shift, (b) response broadening, (c) increased amplitude of the response side lobes and (d) combined effect of the above factors

Combinations of the three effects on the final image are also possible (Fig. 7.2(d)). It is worth noting that the first effect can be eliminated during the image processing by relating the window centre to the nth pixel with maximum intensity.
The presence of distorting effects necessitates finding ways to measure a real resolution step. A conventional way of estimating resolution is by measuring the impulse response of the processing device at the level 0.5 of the maximum intensity |ν(s)|². In that case, analysis is made of all the images along the s-axis, independent of phase noise.

Another way of measuring a resolution step is to consider all additional features on a point target image at the 0.5 level to be side lobes, irrespective of their intensity, which can be removed in advance.
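The two measurement conventions can be expressed as follows. This is our illustration on a sampled intensity profile; both function names are ours.

```python
import numpy as np

def halfpower_width(s, intensity):
    """First way: full width at the 0.5 level of the maximum intensity.
    Every grid point above the level is counted, so false features
    above 0.5 widen the estimate -- the bias discussed in the text."""
    level = 0.5 * intensity.max()
    above = np.where(intensity >= level)[0]
    return s[above[-1]] - s[above[0]]

def halfpower_width_mainlobe(s, intensity):
    """Second way: only the lobe containing the maximum is measured;
    all other features above 0.5 are treated as side lobes."""
    level = 0.5 * intensity.max()
    i = int(np.argmax(intensity))
    lo = i
    while lo > 0 and intensity[lo - 1] >= level:
        lo -= 1
    hi = i
    while hi < intensity.size - 1 and intensity[hi + 1] >= level:
        hi += 1
    return s[hi] - s[lo]
```

On a clean Gaussian response the two estimates coincide; once a false feature rises above the 0.5 level, the first estimate jumps while the second stays at the main-lobe width.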
Figures 7.3 and 7.4 present the estimates of an average resolution step d′_s for models I and II of path instabilities, respectively. The average value was calculated from 100 records of path instability of a point target for every discrete time moment T_s (T_s = 0.1, …, 2.9 s). The estimation of a resolution step within model I fails to predict the degree of the partial coherence effect on the radar image, since we know nothing about a perfect image a priori. The analysis of Fig. 7.3 has shown that the resolution step error is fairly large at σT_s/T_c ≥ 1, where σ = 2πσ_p/λ. It is the appearance of false features above the 0.5 level with increasing synthesis time that leads to an overestimation of the resolution step computed from the impulse response width and, hence, to a larger error in the target size measurement. Such an error is inherent in this method of resolution evaluation.

Figure 7.3 The resolving power of an imaging device in the presence of range instabilities versus the synthesis time T_s and the method of resolution step measurement: (a) σ_p = 0.04 m; 1 and 1′ (2 and 2′) – first (second) way of resolution step measurement; 1 and 2 – T_c = 1.5 s, 1′ and 2′ – T_c = 3 s; (b) σ_p = 0.05 m; 1 and 1′ (2 and 2′) – first (second) way of resolution step measurement; 1 and 2 – T_c = 1.5 s, 1′ and 2′ – T_c = 3 s
In the model of velocity instabilities (model II), the d′_s(T_s) curves in Fig. 7.3 show a reasonable agreement with the theoretical curves in Fig. 7.1(a). The curve behaviour in Fig. 7.4 differs from the calculated dependences and from the model computations shown in Fig. 7.3 in that the d′_s(T_s) curve has a minimum. The latter is due to an error in the method of estimating a resolution step, although the calculated d′_s(T_s) curve does not indicate the presence of extrema.

The simulation results (curve 1′ in Fig. 7.4(a)) can be used to find the synthesis time intervals for a particular type of signal (or a particular imaging algorithm): I – totally coherent, II – partially coherent and III – incoherent. One can choose various imaging algorithms for the available statistical characteristics of path instabilities and for a particular time T_s. For instance, it is reasonable to use incoherent processing algorithms at synthesis times for which a signal can be considered as incoherent [78]. For the shorter intervals I and II, one should use coherent processing algorithms and evaluate their performance in terms of the criterion Δ′_s/Δ_s (Fig. 7.5).
Figure 7.4 The resolving power of an imaging system in the presence of velocity instabilities versus the synthesis time T_s and the method of resolution step measurement: (a) σ_x′ = σ_y′ = 0.01 m/s (other details as in Fig. 7.3), (b) σ_x′ = σ_y′ = 0.2 m/s (other details as in Fig. 7.3)

Figure 7.5 Evaluation of the performance of a processing device in the case of partially coherent signals versus the synthesis time T_s and the space step of path instability correlation d_c: 1 – d_c = 6.98 m, 2 – d_c = 3.49 m
The resolution estimate obtained by the second method is close to the theoretical value. However, this approach has a serious limitation because a real target possesses a large number of scatterers. The positions of the respective intensity peaks on a radar image are unknown a priori, so the application of this technique may lead to a loss of information on adjacent scatterers on an image. This method proves to work well if one knows in advance that the target being viewed is a point object or that a range pixel corresponds to a single scatterer. In that case, the imaging device can be 'calibrated' by evaluating the phase noise effect on it.

The discrepancy between the simulation results presented in Figs 7.3 and 7.4 may be interpreted as follows. Model I of target path instabilities simulates random phase noise associated only with the displacement of range aperture pixels. Model II introduces greater phase errors in the echo signal, because the aperture is synthesised by non-equidistant pixels, which are additionally range-displaced. This model seems to better represent the real tracking conditions, since it accounts for random target yawing in addition to random range displacements.

The analytical expressions given earlier and the simulation results on partially coherent signals with zero compensation for the phase noise can provide the real resolving power of an imaging device. Today, there are no generally accepted criteria for evaluation of the performance of radar devices for imaging partially coherent signals. The results discussed in this chapter allow estimation of the device performance in the ideal case of d_c → 0; on the other hand, they enable one to evaluate the efficiency of the computer resources to be used in terms of the possible gain in the resolving power.

Track instabilities of real aerodynamic targets and other factors introducing phase noise give rise to numerous defects on an image, so the application of conventional ways of estimating the resolving power of imaging systems leads to errors. However, there is an optimal synthesis time interval which provides the best angular resolution with a minimal effect of phase fluctuations. Therefore, when phase noise cannot be avoided, which is usually the case in practice, it is reasonable to make use of a statistical database on fluctuations of motion parameters for various classes of targets and viewing conditions. The processing model we have suggested can be helpful in the evaluation of the optimal time of aperture synthesis in particular viewing conditions.

The viewing conditions also require a specific processing algorithm to be used, so radar-imaging devices should also be classified into coherent, partially coherent or incoherent. The simulation results presented in Fig. 7.4 do not question the validity of the analytical relations (7.4), (7.5) and (7.7) but rather define their applicability, because a signal becomes incoherent when a fluctuating target is viewed for a long time.
Chapter 8

Phase errors and improvement of image quality

Possible sources of phase fluctuations of an echo signal, which negatively affect the aperture synthesis, are turbulent flows in the troposphere and ionosphere. Fluctuations of the refractive index due to tropospheric turbulence impose restrictions on aperture synthesis at centimetre wavelengths; ionospheric turbulence affects far-decimetre wavelengths. Phase fluctuations decrease the resolving power of a synthetic aperture, leading to a lower image quality.
8.1 Phase errors due to tropospheric and ionospheric turbulence

8.1.1 The refractive index distribution in the troposphere

Fluctuations in the troposphere may arise from changes in the meteorological conditions and air whirls. As a result, there are non-uniform local distributions of temperature and humidity, leading to a non-uniform distribution of the refractivity N:

N = (n − 1) × 10⁶,   (8.1)

where n is the refractive index.

At the centimetre wavelengths, a static air volume has refractivity N defined by the Smith–Weintraub formula:

N = 77.6P/T + 3.73 × 10⁵ e/T²,   (8.2)

where P is the total atmospheric pressure measured in millibars, T is the temperature in Kelvin degrees and e is the partial water vapour pressure in millibars.
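Eq. (8.2) in code form (the constant 77.6 is the standard Smith–Weintraub value; the sample conditions below are our own illustration):

```python
def refractivity(P_mbar, T_kelvin, e_mbar):
    """Smith-Weintraub refractivity, Eq. (8.2); N = (n - 1) * 1e6."""
    return 77.6 * P_mbar / T_kelvin + 3.73e5 * e_mbar / T_kelvin**2

# e.g. P = 1013 mbar, T = 288 K, e = 10 mbar of water vapour
N = refractivity(1013.0, 288.0, 10.0)
n = 1.0 + N * 1e-6  # refractive index, Eq. (8.1)
```

For typical near-surface conditions N is of the order of a few hundred, so n exceeds unity only by some 3 × 10⁻⁴; it is the spatial fluctuation of this small excess that drives the phase errors discussed below.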
It follows from Eq. (8.2) that the value of N at centimetre and longer wavelengths strongly depends on the water vapour concentration, while its variation with the wavelength λ is insignificant. The latter fact is quite important because it makes it possible to obtain phase fluctuation spectra for various wavelengths in the microwave range, using an experimental spectrum measured at any one wavelength. The major type of non-uniformity responsible for amplitude and phase fluctuations of an electromagnetic wave is the so-called globule. Globules represent spherical or ellipsoidal structures in which the refractive index differs, for some reason, from that in the environment. Generally, globules have arbitrary and irregular shapes. They arise from the local changes in the temperature, humidity or pressure accompanying turbulent phenomena in the troposphere. Since these causative factors behave differently at different points in space, the troposphere is generally non-uniform.
We shall first briefly describe the characteristics of a turbulent troposphere. The refractive index of the troposphere is generally a function n(r̄, t) of the radius vector r̄ and time t, which can be written as

n(r̄, t) = ⟨n⟩ + δn(r̄, t),   (8.3)

where ⟨n⟩ is the average value of the refractive index and δn(r̄, t) is its deviation from the average ⟨n⟩. Since the problem of interest is the fluctuation of the refractive index only, we shall further take ⟨n⟩ = 1. The autocorrelation function of these fluctuations is

B_n(r̄₁, r̄₂, t₁, t₂) = ⟨δn(r̄₁, t₁) δn(r̄₂, t₂)⟩,   (8.4)

where r̄₁, r̄₂ are the radius vectors of the selected points.
For a steady-state turbulence, the autocorrelation function is independent of t (the steady state in time):

B_n(r̄₁, r̄₂) = ⟨δn(r̄₁, t) δn(r̄₂, t)⟩.   (8.5)

For a statistically uniform turbulence (stationarity in space), the correlation function will not change if a pair of points r̄₁ and r̄₂ is displaced by the same distance and in the same direction simultaneously, that is, B(r̄₁, r̄₂) varies only with r̄₁ − r̄₂ = r̄. A spatially uniform distribution is called isotropic if B_n(r̄) depends only on r = |r̄|, that is, on the distance between the observation points but not on the direction.

However, even in the case of a uniform and isotropic random distribution of the refractive index, it appears to be quite difficult to choose an autocorrelation function for its fluctuations such that it could describe the real troposphere accurately. The only case when the fluctuation distribution can be described from theoretical considerations is a locally uniform isotropic turbulence. The general theory of this kind of turbulence was discussed in References 132 and 133. In real meteorological conditions, the distributions of wind velocity, pressure, humidity, temperature and the refractive index cannot be uniform or isotropic in large space regions. But in a relatively small region, whose size L_o is known as the outer-scale size of turbulence, the distributions may be taken to be both uniform and isotropic.
Theoretically, it is possible to describe fluctuations of the refractive index in terms of physical considerations of turbulence origin and development. The theory treats statistical fluctuations of velocity and related scalar quantities (such as temperature and the refractive index), induced by disturbances in horizontal air currents because of wind and by perturbations in laminar flow due to convection.
The physical mechanism of turbulence origin and development is as follows. When the Reynolds number of the translational wind flow exceeds its critical value, huge whirls (globules) arise, and their size may exceed L_o. Such whirls are produced owing to the energy of the translational flow movement, for example, to the wind power. This power is then given off to whirls of size L_o, and so on. Eventually, the energy is dissipated because of viscous friction in the smallest whirls of size l_o, known as the inner-scale size of turbulence. In this way, huge whirls gradually split into smaller ones, and this process goes on until the power of the rotational motion of the smallest whirls transforms to heat in overcoming the viscous force. For this reason, a region where huge whirls transform to small ones is called an inertia region. Within such a region, the instantaneous distribution of the refractive index n(r̄) is an unsteady random function. However, the difference

n(r̄₁) − n(r̄₂)

is steady under the condition

|r̄₂ − r̄₁| < L_o.
In other words, n(r̄) appears to be a random function with steady first increments. Random processes like those discussed in the books [132,133] can be conveniently described by structure functions. The one for the refractive index distribution has the form:

D_n(r̄) = ⟨[n(r̄₁) − n(r̄₂)]²⟩.   (8.6)

The structure function is a fundamental characteristic of a random process with steady first increments, replacing the concept of the autocorrelation function, which simply does not exist for such random processes.

The quantity D_n(r) describes the intensity of those n(r̄) fluctuations whose periods are smaller than or comparable with r. For a locally uniform and isotropic turbulence, it is defined as

D_n(r) = ⟨[n(r̄₁ + r̄) − n(r̄₁)]²⟩,   (8.7)

where r̄ is an arbitrary increment of r̄₁.
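The structure-function idea can be illustrated on a random walk, a textbook example of a process whose ordinary autocorrelation function does not exist while its first increments are steady (this numerical sketch is ours, not from the text):

```python
import numpy as np

def structure_function(samples, max_lag):
    """Empirical structure function D(l) = <[f(i+l) - f(i)]^2>,
    the discrete analogue of Eq. (8.6)."""
    return np.array([np.mean((samples[l:] - samples[:-l]) ** 2)
                     for l in range(1, max_lag + 1)])

# A random walk is nonstationary, yet its increments are stationary,
# and its structure function grows linearly: D(l) = l * step_variance.
rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(500_000))
D = structure_function(walk, 5)
```

The linear growth D(l) ∝ l plays the same role for the walk as the r^2/3 growth of Eq. (8.8) plays for locally uniform turbulence.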
Let us consider some statistical characteristics of the refractive index distribution in the troposphere. The detailed analysis made in References 132 and 133 has shown that the structure function of this parameter can be written as

D_n(r) = C_n² r^2/3,   (l_o ≪ r ≪ L_o),   (8.8)

where C_n² is the structure constant of the refractive index. Equation (8.8) describes the so-called 2/3 law by Obukhov and Kolmogorov for the refractive index distribution. Numerous measurements made in the near-earth troposphere [132,133] showed a good agreement between the fluctuation characteristics of n and the 2/3 law. The value of l_o in the troposphere is found to be ∼1 mm. The quantity L_o is a function of direction and altitude. Therefore, one may assume that the horizontal extension of large whirls near the earth surface will have the same order of magnitude as the altitude, as far as the maximum altitudes lie in the range from 100 to 1000 m [110].
Figure 8.1 The normalised refractive index spectrum Φ_n(χ)/C_n² as a function of the wavenumber χ in various models: 1 – Tatarsky's model-I, 2 – Tatarsky's model-II, 3 – Carman's model, 4 – modified Carman's model. The three regions shown are the whirl origin region (χ < χ_o ≈ 2π/L_o), the inertia region and the energy dissipation region (χ > χ_m ≈ 2π/l_o, with l_o ≈ 1 mm)
The refractive index spectrum obeying the 2/3 law is

Φ_n(χ) = 0.033 C_n² χ^−11/3,   (χ_o < χ < χ_m),   (8.9)

where χ_o ∼ 2π/L_o, χ_m ∼ 2π/l_o and χ is the spatial wavenumber. It has been found experimentally that the Φ_n(χ) spectrum has the form of χ^−11/3 in the inertia region, where the wavenumbers are larger than χ_o. Figure 8.1 shows the normalised spectra for three regions: the region of whirl origin (χ < 2π/L_o), the inertia region (2π/L_o ≪ χ ≪ 2π/l_o) and the dissipation region (χ ≥ 2π/l_o).
It is seen that the spectral density Φ_n(χ) in the region of χ ≥ 2π/l_o decreases much faster than might be expected from the χ^−11/3 formula. But in what way Φ_n(χ) decreases in this region is still unclear theoretically. One usually deals with three kinds of spectra in the dissipation region: one obeys the χ^−11/3 law; another drops abruptly at χ = χ_m, implying that Φ_n(χ) = 0 at χ > χ_m; and, finally, the spectrum changes on addition of the factor exp[−(χ²/χ²_m)].
The second case obeys Eq. (8.9) in practice. We have termed the respective model spectrum Tatarsky's model-I. It has been successfully employed in Reference 133 and some other studies. In Reference 132, V. Tatarsky used the following expression for the refractive index spectrum:

Φ_n(χ) = 0.033 C_n² χ^−11/3 exp(−χ²/χ²_m)   (8.10)

with χ_m = 5.92/l_o rather than 2π/l_o, as before. We have called the model for this case Tatarsky's model-II, which is fully valid in the inertia region but is approximate at χ > χ_m.
It follows from the analysis of the two models that they can adequately describe the statistical characteristics of the refractive index in the inertia region and are satisfactory for the dissipation region. In the region of $\chi < 2\pi/L_o$, however, these models do not undergo any modification, that is, the dependence $\Phi_n(\chi)$ remains $\chi^{-11/3}$. On the other hand, it is known from References 132 and 133 that the spectral density curve $\Phi_n(\chi)$ at $\chi < 2\pi/L_o$ is not universal and may change with the meteorological conditions. Therefore, the models of (8.9) and (8.10) are practically unable to evaluate the effects of this region on measurements. Besides, these models describe well only small-scale turbulence, which is quite clear from Fig. 8.1. In reality, however, most of the turbulence pulsation 'power' is accumulated in large whirls, at $\chi \le 2\pi/L_o$. In such regions, the uniformity and isotropic character of the random distribution of $n(\mathbf{r}, t)$ are also violated. Still, quantitative estimations can be made from interpolation formulae describing approximately the structure function behaviour at large $L_o$ values, that is, in the range of small $\chi$. One of these is Carman's function having the following spatial spectrum [133]:
$$\Phi_n(\chi) = 0.063\,\overline{\delta n_1^2}\,\frac{L_o^3}{(1+\chi^2 L_o^2)^{11/6}} \quad \text{at } \chi \ll \frac{2\pi}{l_o}, \qquad (8.11)$$
where $\overline{\delta n_1^2}$ is the dispersion of the refractive index fluctuations.
The spectral model of (8.11), known as Carman's model, works well for large-scale turbulence (Fig. 8.1). One can see from Eq. (8.11) that it does not include explicitly the constant $C_n^2$, related to the dispersion $\overline{\delta n_1^2}$ by the expression
$$C_n^2 = 1.9\,\overline{\delta n_1^2}\,L_o^{-2/3}. \qquad (8.12)$$
Using Eq. (8.12), one can derive expressions for Tatarsky's models I and II:
$$\Phi_n(\chi) = 0.063\,\overline{\delta n_1^2}\,L_o^{-2/3}\,\chi^{-11/3} \quad \text{at } \frac{2\pi}{L_o} \ll \chi \ll \frac{2\pi}{l_o}, \qquad (8.13)$$
$$\Phi_n(\chi) = 0.063\,\overline{\delta n_1^2}\,L_o^{-2/3}\,\chi^{-11/3}\exp\!\left(-\frac{\chi^2}{\chi_m^2}\right) \quad \text{at } \frac{2\pi}{L_o} \ll \chi \ll \frac{2\pi}{l_o}. \qquad (8.14)$$
This representation is convenient when the refractive index fluctuations are given as $\overline{\delta n_1^2}$ rather than through $C_n^2$.
The next point to discuss is the applicability of the spectra described by Eqs (8.9), (8.10) and (8.11). When using this or that spectral model in problems of parameter fluctuations of an electromagnetic wave in a turbulent medium, one should bear in mind the following factors. First, the spectra are valid in the inertia region of a locally uniform and isotropic turbulence. Sometimes, the turbulence spectrum may strongly differ from the above models. Second, the spectrum at $\chi \le \chi_o$ is, at best, an approximation, even though one may use Carman's spectra. At $\chi \ge \chi_m$, the model spectra are only good approximations. Note that the spectrum of the form (8.11) transforms to that of (8.9) at $\chi^2 L_o^2 \gg 1$. In addition to the three types of spectra, there is a spectrum of the form:
$$\Phi_n(\chi) = \frac{\alpha\,\exp(-\chi^2/\chi_m^2)}{(1+\chi^2 L_o^2)^{11/6}}, \quad
\alpha = \frac{\overline{\delta n_1^2}\,L_o^3}{\pi^{3/2}}\,\frac{\Gamma(11/6)}{\Gamma(1/3)}\,C(\chi_m L_o), \quad
C(\chi_m L_o) \approx \left[1 + \frac{\Gamma(11/6)\,\Gamma(-1/3)}{\Gamma(1/3)\,\Gamma(3/2)}\,(\chi_m L_o)^{-2/3}\right]^{-1}. \qquad (8.15)$$
At $\chi_m L_o \gg 1$, the correction term $C(\chi_m L_o) \approx 1$. Since $l_o \sim (1 \div 10)$ mm and $L_o \ge 1$ m, we have
$$\chi_m l_o = 5.92, \quad \chi_m = (5.92 \div 59.2)\ \mathrm{cm}^{-1}, \quad \chi_m L_o \ge 5.92 \times 10^3.$$
Keeping in mind this fact and
$$\frac{\Gamma(11/6)}{\pi^{3/2}\,\Gamma(1/3)} \approx 0.06,$$
we get
$$\Phi_n(\chi) = 0.06\,\overline{\delta n_1^2}\,L_o^3\,[1+\chi^2 L_o^2]^{-11/6}\exp\!\left(-\frac{\chi^2}{\chi_m^2}\right) \qquad (8.16)$$
or
$$\Phi_n(\chi) = 0.06\,C_n^2\,L_o^{11/3}\,[1+\chi^2 L_o^2]^{-11/6}\exp\!\left(-\frac{\chi^2}{\chi_m^2}\right). \qquad (8.17)$$
It would be reasonable to call a spectrum of the type (8.16) or (8.17) Carman's modified spectrum. If relation (8.12) is fulfilled, this spectrum will coincide with that described by Eqs (8.10) and (8.14) at large values of $\chi$. But in the small-$\chi$ range, it coincides with the Carman spectrum shown in Fig. 8.1. The choice of a particular type of spectrum varies with the problem to be solved. Fluctuations of some electromagnetic wave parameters, such as phase and amplitude, are often sensitive to a certain turbulence spectrum, or to large- or small-scale whirls. Keeping this important fact in mind, one should analyse carefully the applicability of the chosen spectrum before using it.
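As a numerical cross-check of the relations above, the four model spectra can be evaluated side by side. The sketch below is plain Python; the value of $\overline{\delta n_1^2}$ is an assumed illustrative number, while the orders of $L_o$ and $l_o$ follow the text. It verifies that the modified Carman spectrum reduces to Tatarsky's model-II in the inertia region and to the Carman form at small $\chi$, up to the 0.06/0.063 constant.

```python
import math

# Assumed illustrative turbulence parameters (L_o, l_o orders follow the text):
dn2 = 1e-14          # dispersion of refractive index fluctuations
L_o = 1.0            # outer scale, m
l_o = 1e-3           # inner scale, m
chi_m = 5.92 / l_o   # dissipation wavenumber, from chi_m * l_o = 5.92

def tatarsky_i(chi):                 # Eq. (8.13)
    return 0.063 * dn2 * L_o ** (-2 / 3) * chi ** (-11 / 3)

def tatarsky_ii(chi):                # Eq. (8.14)
    return tatarsky_i(chi) * math.exp(-chi ** 2 / chi_m ** 2)

def carman(chi):                     # Eq. (8.11)
    return 0.063 * dn2 * L_o ** 3 / (1 + chi ** 2 * L_o ** 2) ** (11 / 6)

def carman_modified(chi):            # Eq. (8.16)
    return (0.06 * dn2 * L_o ** 3 * (1 + chi ** 2 * L_o ** 2) ** (-11 / 6)
            * math.exp(-chi ** 2 / chi_m ** 2))

# Inertia region: modified Carman tracks Tatarsky-II (up to 0.06/0.063):
ratio_inertia = carman_modified(100.0) / tatarsky_ii(100.0)
# Small chi: modified Carman tracks Carman (same constant factor):
ratio_small = carman_modified(0.01) / carman(0.01)
# Dissipation region: the exponential cutoff dominates:
decay = tatarsky_ii(2 * chi_m) / tatarsky_i(2 * chi_m)
```

For these parameters both ratios come out near 0.95, and the cutoff factor at $\chi = 2\chi_m$ equals $e^{-4}$, mirroring the behaviour sketched in Fig. 8.1.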
The best way of verifying a model is to compare the results obtained with available experimental data. Although the models of (8.9) and (8.10) are rather approximate at $\chi < 2\pi/L_o$, they still provide a good agreement with measurements (e.g. of phase fluctuations). Moreover, they can give the results in an analytical form. On the other hand, the models of (8.11) and (8.15) are more accurate for large whirls but they are unable to give clear analytical results. These circumstances have predetermined the applicability of the models of (8.9) and (8.10). In the study of phase fluctuations, both models yield similar analytical expressions.
It is of importance to discuss in some detail a vertical profile model of the structure constant. This constant describes the degree of refractive index non-uniformity, because it relates the quantities $D(r)$ and $r$ (see Eq. (8.8)). The structure constant $C_n^2$ is related to the tropospheric parameters $\overline{\delta n_1^2}$ and $r$. For radiation propagation along an oblique path, the turbulence 'intensity' changes with the altitude, and the $C_n^2$ values will be different at different altitudes. The structure function of $n(r)$ will then be
$$D_n(r) = C_n^2(h)\,r^{2/3},$$
where $C_n^2(h)$ is a structure constant varying with altitude. To obtain quantitative results, one is first to find the $C_n^2(h)$ variation. The theoretical treatment of the problem of parameter fluctuations for a plane wave in a turbulent troposphere [132] included the following $C_n^2(h)$ models:
$$C_n^2 = C_{n0}^2 \exp\!\left(-\frac{h}{h_0}\right), \qquad (8.18)$$
$$C_n^2 = C_{n0}^2\,\frac{1}{1+(h/h_0)^2}, \qquad (8.19)$$
where $C_{n0}^2$ is the structure constant of the refractive index near the earth surface, $h$ is the altitude of the point in question, and $h_0$ is a constant.
However, the question whether Eqs (8.18) and (8.19) can really describe the $C_n^2(h)$ function in the microwave frequency band remains unanswered. In order to find the exact form of this function, it is necessary to examine the microstructure of the refractive index distribution in the microwave range and to design a $C_n^2(h)$ model. This became possible only after the publication of the work [134], which reported measurements made in experimental flight conditions. The structure constant profile of the refractive index was measured along an oblique microwave path. The results of the $C_n(h)$ measurement were summarised in table 1 of Reference 134. Yet, it was impossible to plot the $C_n(h)$ function from these data, because they were to be statistically processed. This was accomplished by the authors of Reference 144.

Figures 8.2 and 8.3 show some $C_n^2(h)$ plots for different seasons (April and November). The $C_n^2$ values in these plots represent records averaged over several runs of the squared structure constant measurement (the averaging was actually made over the time of day). The confidence limit was taken to be 0.98. Some of the $C_n$ values presented in Reference 134 differ considerably from the average values and do not seem to be due to a statistical spread. To reveal such data, the authors used a
Figure 8.2 The profile of the structure constant $C_n^2$ (cm$^{-2/3}$) versus the altitude $h$ (km) for April at the SAR wavelength of 3.12 cm
criterion based on the assumption of a normal error distribution. The $C_n$ records that differed from the average by more than the maximum possible statistical spread at the 0.98 confidence limit were eliminated from further analysis. The plots thus obtained were approximated by exponential functions, using the least squares method. As a result, the following analytical dependencies were derived for the structure constant profile at the wavelength of 3.12 cm:

(a) the $C_n(h)$ model for April:
$$C_n^2(h) = C_{n0}^2 \exp\!\left(-\frac{h}{h_0}\right) \qquad (8.20)$$
with $C_{n0}^2 = 3.69\times10^{-15}$ cm$^{-2/3}$ and $h_0 = 2.17\times10^5$ cm;
Figure 8.3 The profile of the structure constant $C_n^2$ (cm$^{-2/3}$) versus the altitude $h$ (km) for November at the SAR wavelength of 3.12 cm
(b) the $C_n(h)$ model for November:
$$C_n^2(h) = C_{n0}^2 \exp\!\left(-\frac{h}{h_0}\right), \qquad (8.21)$$
with $C_{n0}^2 = 1.27\times10^{-14}$ cm$^{-2/3}$ and $h_0 = 8.89\times10^4$ cm.
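The seasonal profiles (8.20) and (8.21), and the path integral of $C_n^2(h)$ used later in Section 8.2, are easy to evaluate numerically. The sketch below works in CGS units, as the constants above are quoted in cm.

```python
import math

# April profile, Eq. (8.20), and November profile, Eq. (8.21); h in cm.
def cn2_april(h):
    return 3.69e-15 * math.exp(-h / 2.17e5)

def cn2_november(h):
    return 1.27e-14 * math.exp(-h / 8.89e4)

# Closed-form integral of Cn0^2 * exp(-h/h0) over a path of length L,
# in the form used for the phase structure function later in the chapter:
def cn2_path_integral(cn0_sq, h0, L):
    return cn0_sq * h0 * (1.0 - math.exp(-L / h0))

# Fraction of the April-profile integral accumulated below 3 km (3e5 cm):
frac_below_3km = 1.0 - math.exp(-3e5 / 2.17e5)
```

Roughly three quarters of the April integral comes from the lowest 3 km, in line with the statement in the text that the stratum above 3 km contributes several times less.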
We can see that the refractive index fluctuations decrease with altitude. The major contribution to the fluctuations is made by a tropospheric stratum 3 km thick above the earth. The contribution of the other 7 km thickness (the total thickness of the troposphere is taken to be 10 km) is five times smaller. It is known that the fluctuation of $n$ increases with rising humidity. The most intense fluctuations are observed at the air–cloud interface and inside the clouds. This model, however, ignores these effects because of the lack of experimental data. But some data are available on the effect of humidity and clouds on the dispersion $\overline{\delta n^2}$ of the refractive index values. Therefore, the model of the vertical $\overline{\delta n^2}$ profile allows estimation, in a first approximation, of the cloud effect on phase fluctuations.

To conclude, it seems reasonable to extend the results on $\lambda = 3.12$ cm waves to other centimetre wavelengths, since the Smith–Weintraub formula (8.2) indicates only a slight dependence of $n$ on the wavelength $\lambda$ within the centimetre frequency band.
8.1.2 The distribution of electron density fluctuations in the ionosphere
In contrast to the troposphere, the ionosphere is characterised by electron density fluctuations. Let $\Delta N_e(\mathbf{r})$ denote fluctuations of the equilibrium electron density $N_o$, which is the average electron concentration. The variable $\xi$ defined by the equality $\xi = \Delta N_e(\mathbf{r})$ represents a uniform random distribution with a zero average and a standard deviation $\sigma_\xi$. By definition, the autocorrelation function of this distribution is
$$B_\xi(\mathbf{r}_1 - \mathbf{r}_2) = \langle \xi(\mathbf{r}_1)\,\xi(\mathbf{r}_2) \rangle,$$
where the angular brackets stand for the averaging over an ensemble.
According to the Wiener–Khinchin theorem, the autocorrelation function and the spectrum create a Fourier transform pair:
$$\Phi_\xi(\boldsymbol{\chi}) = (2\pi)^{-3} \iiint\limits_{-\infty}^{\infty} B_\xi(\mathbf{r})\,e^{-j\boldsymbol{\chi}\cdot\mathbf{r}}\,d^3 r, \qquad (8.22)$$
$$B_\xi(\mathbf{r}) = \iiint\limits_{-\infty}^{\infty} \Phi_\xi(\boldsymbol{\chi})\,e^{\,j\boldsymbol{\chi}\cdot\mathbf{r}}\,d^3 \chi, \qquad (8.23)$$
where $\chi$ is the spatial wavenumber.
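The Wiener–Khinchin pair can be illustrated with a one-dimensional discrete analogue of Eqs (8.22)–(8.23): the inverse transform of the power spectrum of a random sequence reproduces its circular autocorrelation. A minimal stdlib-only sketch (not the 3D continuous pair itself):

```python
import cmath
import random

random.seed(3)
N = 64
xi = [random.gauss(0.0, 1.0) for _ in range(N)]   # random "density" samples

def dft(x, sign):
    """Naive discrete Fourier transform; sign=-1 forward, sign=+1 inverse kernel."""
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

# Power spectrum of the sequence (discrete analogue of Phi_xi):
X = dft(xi, -1)
power = [abs(v) ** 2 / N for v in X]

# Inverse transform of the power spectrum -> circular autocorrelation B_xi:
B_wk = [v.real / N for v in dft(power, +1)]

# Direct circular autocorrelation for comparison:
B_direct = [sum(xi[k] * xi[(k + m) % N] for k in range(N)) / N
            for m in range(N)]

err = max(abs(a - b) for a, b in zip(B_wk, B_direct))
```

The two autocorrelation estimates agree to floating-point precision, which is exactly the content of the theorem.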
Experimental investigations have shown [141] that both the phase fluctuation spectra of a wave that has passed through a turbulent ionosphere and the amplitude fluctuation spectra have an asymptotic power dependence. Hence, the spectra of ionospheric whirls must also have a power dependence. Assuming the whirls to be isotropic within a space scale from 70 m to 7 km, a 3D whirl spectrum will have the form [141]:
$$\Phi_\xi(\chi) \approx \chi^{-P}, \qquad (8.24)$$
where $P$ is the power index of the spectrum, varying in the range $2 \le P \le 3$.
C. L. Rino and various co-workers [113–116] have suggested a spectrum of the electron density fluctuation:
$$\Phi_{\Delta N_e}(\chi) = C_S\,\chi^{-(2\nu+1)} \quad \text{at } \chi_o < \chi < \chi_i, \qquad (8.25)$$
where $C_S$ is the turbulence parameter, $\chi_o$ is the outer-scale size of ionospheric whirls, $\chi_i$ is their inner-scale size, and $2\nu = P$ is the spectral power index. We have mentioned above that the power index varies between $2 < P < 3$, whereas the power index for the troposphere is $P = 8/3$ (Kolmogorov's spectrum).
The turbulence parameter is described as
$$C_S = 8\pi^{3/2}\,\chi_o^{P-2}\,\frac{\Gamma((P+1)/2)}{\Gamma((P-1)/2)}\,\langle \Delta N_e^2 \rangle, \qquad (8.26)$$
where $\Gamma(\cdot)$ is the gamma-function and $\langle \Delta N_e^2 \rangle$ is the mean square value of the fluctuation component of the electron density. For a typical fluctuation distribution in the ionosphere, $C_S \sim 10^{21}$ (MKS units). The quantity $\langle \Delta N_e^2 \rangle$ varies remarkably with the ionospheric conditions, so $C_S$ fluctuates from $6.5\times10^{19}$ (MKS) at $P = 2.9$ to $1.3\times10^{23}$ (MKS) at $P = 1.5$ [22]. The ionosphere has a thickness of about 200 km. The maximum electron density lies in the NmF2 stratum at an altitude between 250 and 350 km.

The outer-scale size of a turbulent whirl along the shortest distance (ionospheric whirls are anisotropic) is about 10 km. The respective value for a turbulent troposphere is about 1 km.
8.2 A model of phase errors in a turbulent troposphere
When discussing a SAR in Chapter 3, we pointed out that a turbulent non-uniform troposphere could be a source of spatial phase fluctuations. Let us consider a turbulence model with reference to a particular type of SAR – a radar with a focused aperture. Suppose a SAR is located along the carrier track (Fig. 8.4). For simplicity, we shall assume that there is only one point scatterer A across the swath width. This target is located at the point having an oblique range $R$ and is scanned over the synthesis length $L_s$, i.e.
$$L_s \approx \beta H,$$
where $\beta$ is the aperture pattern width.
The equiphase surface of an echo signal represents a sphere with the centre at the target location point. The track line is shown by the $A_1$–$A_2$ line, and the thickness of a turbulent tropospheric stratum is denoted as $h_t$. The structure function of the phase fluctuation for a spherical wave of the point target A is
$$D_\varphi(\rho) = \langle [\varphi(r+\rho) - \varphi(r)]^2 \rangle, \qquad (8.27)$$
where $\rho$ is the distance between the points at which the phase fluctuations are to be measured, for example, $\rho = d_e$. To find an analytical expression for $D_\varphi(\rho)$, consider a 2D spectrum of wave phase fluctuations in a turbulent troposphere. Using a gradual perturbation approach, the authors of Reference 133 derived a simple formula relating the phase fluctuation parameters to the spectral density of the refractive index fluctuations $\Phi_n(\chi)$. The 2D spectral density $F_\varphi(\chi, 0)$ and $\Phi_n(\chi)$ have the simplest relation, because the former is a 2D Fourier transform of the respective phase structure function in the plane $x = \text{const.}$ normal to the wave propagation direction. For a plane
Figure 8.4 A geometrical construction for a spaceborne SAR tracking a point object A through a turbulent atmospheric stratum of thickness $h_t$ (the sketch marks the carrier track line $A_1$–$A_2$ and its projection onto the earth, the synthesis length $L_s$, the swath width, the point target A, the velocity $V$, the ranges $R$ and $R_o$, the altitude $H$ and the equivalent base $d_e$)
wave with the cross section $x = L$, we have
$$F_\varphi(\chi, 0) = \pi k^2 L \left[1 + \frac{k}{\chi^2 L}\sin\frac{\chi^2 L}{k}\right]\Phi_n(\chi), \qquad (8.28)$$
where $L$ is the distance covered by the wave passing through a non-uniform turbulent medium.

Using Eq. (1.51) from Reference 133, we can now turn to the structure function of phase fluctuations in the plane $x = L$:
$$D(\rho) = 4\pi \int_0^\infty [1 - J_0(\chi\rho)]\,F_\varphi(\chi, 0)\,\chi\,d\chi, \qquad (8.29)$$
where $\rho$ is the distance between the points at which the structure function is to be measured in the plane $x = L$. It follows from Eq. (8.28) that the 2D spectrum $F_\varphi(\chi, 0)$ is similar to the spectrum of the refractive index fluctuations $\Phi_n(\chi)$ multiplied by the filtering function (in square brackets). Therefore, the wave propagation through a turbulent medium is similar to the linear filter effect in circuit theory.
The filtering function of phase fluctuations is only slightly sensitive to the parameter variations. For example, at $\chi = 0$ the factor in front of $\Phi_n(\chi)$ equals $2\pi k^2 L$, changing smoothly towards $\pi k^2 L$ with increasing $\chi$. Therefore, the filtering occurs relatively uniformly. The maximum product of the filtering function and $\Phi_n(\chi)$ for typical SARs is observed at small values of $\chi$, that is, in large whirls. For this reason, phase fluctuations and phase correlation are most sensitive to the outer-scale size of turbulence, $L_o$.
With Eq. (8.29) and the turbulence models of (8.9) and (8.10), we can arrive at an expression for a uniform turbulence and a plane wave:
$$D_\varphi(\rho) = \alpha k^2 C_n^2 L\,\rho^{5/3}, \qquad (8.30)$$
where
$$\alpha = \begin{cases} 2.91 & \text{at } \rho \ge \sqrt{\lambda L}, \\ 1.46 & \text{at } l_o \ll \rho \ll \sqrt{\lambda L}, \end{cases}$$
and $L$ is the electromagnetic wave path in a turbulent medium.

In order to examine the effect of phase errors on the recording of 1D holograms by a side-looking radar, it would be useful to try to extend the above result to the case of a non-uniform turbulence and a spherical wave [144].
From Tatarsky's non-uniform model-I, we have
$$D_\varphi(\rho) = 1.46\,k^2 \rho^{5/3} \int_0^L C_n^2(h)\,dh, \quad (l_o \ll \rho \ll \sqrt{\lambda L}), \qquad (8.31)$$
$$D_\varphi(\rho) = 2.91\,k^2 \rho^{5/3} \int_0^L C_n^2(h)\,dh, \quad (\rho \ge \sqrt{\lambda L}). \qquad (8.32)$$
The last two expressions show that phase fluctuations are equally affected by all whirls, irrespective of their distance to the observation point. Moreover, when $\rho$ passes through the value $\sqrt{\lambda L}$, which is usually somewhere at the beginning of the path, the factor in front of $\rho^{5/3}$ increases 2-fold. Therefore, the experimental structure function $D_\varphi(\rho)$ must have a positive rise at $\rho = \sqrt{\lambda L}$.
It is interesting to follow how $D_\varphi(\rho)$ changes when a plane wave is replaced by a spherical one. The formula relating the mean square value of the phase difference fluctuation to the base $\rho$ for a spherical and a plane wave [132] is
$$\langle (\varphi_1 - \varphi_2)^2 \rangle_{sp} = [D_\varphi(\rho)]_{sp} = \int_0^1 D_\varphi(\rho t)\,dt.$$
For the plane wave $D_\varphi(\rho) = \alpha k^2 C_n^2 L \rho^{5/3}$, we have
$$[D_\varphi(\rho)]_{sp} = A_0 \int_0^1 (t\rho)^{5/3}\,dt = \frac{3}{8}\,A_0\,\rho^{5/3},$$
where $A_0 = \alpha k^2 C_n^2 L$. Hence,
$$[D_\varphi(\rho)]_{sp} = \frac{3}{8}\,[D_\varphi(\rho)]_{pl}. \qquad (8.33)$$
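The 3/8 coefficient in Eq. (8.33) is nothing more than $\int_0^1 t^{5/3}\,dt = 1/(5/3+1) = 3/8$, which a crude midpoint rule confirms:

```python
# Spherical-wave averaging of the plane-wave structure function:
# [D_phi]_sp = A0 * rho^(5/3) * integral_0^1 t^(5/3) dt = (3/8) * A0 * rho^(5/3)
n = 100_000
integral = sum(((k + 0.5) / n) ** (5.0 / 3.0) for k in range(n)) / n
```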
We can conclude that phase fluctuations for a spherical wave are not as large as for a plane wave and that the structure functions for the former differ from those of the latter only in numerical coefficients. For a medium with slowly changing characteristics, we have
$$[D_\varphi(\rho)]_{sp} = \frac{3}{8}\cdot 1.46\,k^2 \rho^{5/3} \int_0^L C_n^2(h)\,dh, \quad (l_o \ll \rho \ll \sqrt{\lambda L}), \qquad (8.34)$$
$$[D_\varphi(\rho)]_{sp} = \frac{3}{8}\cdot 2.91\,k^2 \rho^{5/3} \int_0^L C_n^2(h)\,dh, \quad (\rho > \sqrt{\lambda L}). \qquad (8.35)$$
The initial expression for the structure function evaluation in a SAR is Eq. (8.35), because there is the relation
$$\rho = d_e > \sqrt{\lambda L}.$$
The $C_n^2(h)$ function was shown above to be given by
$$C_n^2(h) = C_{n0}^2 \exp\!\left(-\frac{h}{h_0}\right).$$
As a result, we have the formula
$$\int C_n^2(h)\,dh = C_{n0}^2\,h_0\left[1 - \exp\!\left(-\frac{L}{h_0}\right)\right],$$
where $L = h_t\,\mathrm{cosec}\,\theta$, $\theta$ is the angle between the wave propagation direction and the horizon, and $h_t$ is the total altitude of the turbulent stratum.

A synthetic aperture is characterised by the equality $\rho = d_e$, where $d_e$ is the equivalent base at $h_t$. It follows from Fig. 8.4 that
$$d_e = \frac{L_s h_t\,\mathrm{cosec}\,\vartheta}{R_o} = \frac{L_s h_t}{H}.$$
Thus, we eventually get the relation
$$D_\varphi(\rho) = \beta_o \left(\frac{2\pi}{\lambda}\right)^2 \left(\frac{L_s h_t}{H}\right)^{5/3} C_{n0}^2\,h_0\left[1 - \exp\!\left(-\frac{h_t\,\mathrm{cosec}\,\vartheta}{h_0}\right)\right], \qquad (8.36)$$
where $L_s = \bar{V} T_s$, $T_s$ is the synthesis time, $\bar{V}$ is the track velocity of the radar carrier and $\beta_o = 1.09$.
Equation (8.36) also allows finding the standard deviation of the phase difference fluctuations at the synthetic aperture ends:
$$\sigma_\varphi(\rho) = \sqrt{D_\varphi(\rho)}. \qquad (8.37)$$
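Combining Eq. (8.36) with the April profile constants of Eq. (8.20) gives a quick order-of-magnitude estimate of the phase fluctuation at the aperture ends. The geometry below (platform altitude, synthesis length, stratum thickness, elevation angle) is an assumed illustration, not a value from the text; everything is in CGS units to match the quoted constants.

```python
import math

beta_o = 1.09
lam = 3.12             # wavelength, cm (as in the measurements above)

# Assumed illustrative geometry (not from the text):
H = 10e5               # platform altitude: 10 km, in cm
L_s = 100e2            # synthesis length: 100 m, in cm
h_t = 3e5              # turbulent stratum thickness: 3 km, in cm
theta = math.radians(45.0)   # elevation angle

# April model constants from Eq. (8.20):
cn0_sq = 3.69e-15      # cm^(-2/3)
h_0 = 2.17e5           # cm

rho = L_s * h_t / H    # equivalent base d_e, cm
path_int = cn0_sq * h_0 * (1.0 - math.exp(-(h_t / math.sin(theta)) / h_0))
D_phi = beta_o * (2.0 * math.pi / lam) ** 2 * rho ** (5.0 / 3.0) * path_int
sigma_phi = math.sqrt(D_phi)   # standard deviation of the phase error, rad
```

For these assumed numbers the phase error comes out as a small fraction of a radian, i.e. tropospheric turbulence is a mild effect for a short centimetre-wave aperture.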
We shall now examine how phase errors due to tropospheric turbulence affect the resolution limit and optimal length of a synthetic aperture. W. Brown and Y. Riordan [23] have calculated both parameters for the case of phase errors with the structure function obeying a power law. It was stated that the phase difference $[\varphi(r+\rho) - \varphi(r)]$ has a Gaussian distribution, and this is supported experimentally. For the above type of phase errors, the expression for the aperture resolution along the track is found to be
$$\rho_x = \frac{\lambda R}{4\pi\rho_o} \qquad (8.38)$$
with $\rho_o = 0.985/b$. The quantity $b$ is to be calculated from the equation for the structure function of a phase error:
$$D_\varphi(\rho) = b^n \rho^n, \quad n = 5/3. \qquad (8.39)$$
Then Eqs (8.38) and (8.39) yield
$$\rho_x = \frac{\lambda R}{4\pi\cdot 0.985}\,\frac{[D_\varphi(\rho)]^{3/5}}{\rho}. \qquad (8.40)$$
Using the equation for the structure function of a phase error (8.36) and $\rho = d_e$, we get
$$\rho_x = \lambda^{-1/5} R\,C_0\,(C_{n0}^2)^{3/5}(h_0)^{3/5}\left[1 - \exp\!\left(-\frac{h_t\,\mathrm{cosec}\,\vartheta}{h_0}\right)\right]^{3/5}, \qquad (8.41)$$
where $C_0 = \text{const}$. This equation shows that $\rho_x$ depends only slightly on $\lambda$, varying as $\lambda^{-1/5}$.
The optimal synthetic aperture affected by a turbulent troposphere [23] can be found as
$$L_{opt} = \frac{13.4}{b}. \qquad (8.42)$$
Then Eqs (8.42) and (8.39) give
$$L_{opt} = \frac{d_0\,\lambda^{6/5}}{(C_{n0}^2)^{3/5}(h_0)^{3/5}\left[1 - \exp\!\left(-\dfrac{h_t\,\mathrm{cosec}\,\vartheta}{h_0}\right)\right]^{3/5}} \qquad (8.43)$$
with $d_0 = \text{const}$.
The mean square value of the phase error between the optimal aperture centre and its extremal point is
$$\sigma_\varphi = \left[D_\varphi(L_{opt}/2)\right]^{1/2}, \qquad (8.44)$$
where $D_\varphi$ and $L_{opt}$ are to be calculated from Eqs (8.36) and (8.43).

Some other methods for reducing propagation-induced phase errors in coherent imaging systems were suggested in References 22 and 47.
8.3 A model of phase errors in a turbulent ionosphere
It was shown in the Appendix to Reference 114 that a good approximation for the structure function of phase fluctuations is the expression
$$D(y) = C_\delta^2\,|y|^{2\nu-1}, \quad 0.5 < \nu < 1.5. \qquad (8.45)$$
The phase structure constant $C_\delta^2$ is defined as
$$C_\delta^2 = \frac{C_p\,\Gamma(2(1.5-\nu))}{\Gamma(\nu+0.5)\,(2\nu-1)\,2^{2\nu-1}}, \quad 0.5 < \nu < 1.5, \qquad (8.46)$$
where $C_p = r_e^2 \lambda^2 l_p C_S$, $l_p$ is the path length of an electromagnetic wave in the ionosphere, $r_e$ is the classical electron radius, $r_e = 2.81\times10^{-15}$ m, $\lambda$ is the transmitter wavelength, and $C_S$ is the turbulence parameter in the ionosphere described by Eq. (8.26).
Using the phase screen model of Reference 116 and Eq. (8.46), one can show that the mean square value of the phase fluctuations along the path $l_p$ is defined as
$$\langle \delta^2 \rangle = 2\sqrt{\pi}\,r_e^2 \lambda^2 l_p C_S\,G\,\frac{\chi_o^{-2\nu+1}\,\Gamma(\nu-1/2)}{4\pi\,\Gamma(\nu+1/2)}, \qquad (8.47)$$
where the factor $G$ was borrowed from the Appendix to Reference 113. This factor accounts for:
• the velocity of the scanning beam motion relative to electron density whirls ($\nu_o$),
• the geometrical parameter due to the electron density anisotropy,
• the effective velocity of the scanning beam across the earth surface ($V_{ef}$),
• the synthesised aperture length $L_s$.
The factor $G$ is defined as
$$G = (V_{ef}\,L_s\,\chi_o)^{P-1}. \qquad (8.48)$$
The equations for the anisotropy parameter and $V_{ef}$ can be found in Reference 113.
All the fundamental concepts of the model we have just discussed were developed by Rino, so we think this model should bear his name. It has been successfully employed to analyse the effects of ionospheric turbulence on communication and navigation device performance. But we also believe that this model can be useful for the estimation of aperture performance in whirls and their effect on the azimuth ambiguity function. The latter is important because one can then evaluate the aperture resolution errors.
8.4 Evaluation of image quality¹
Synthetic apertures were primarily designed for obtaining images to be used by a human operator to solve research and applied problems. It is natural that the evaluation of aperture performance should largely be based on the analysis of image characteristics. To do so, one needs to have at one's disposal appropriate criteria for a quantitative description of the performance characteristics of a particular type of aperture, to be able to compare them with those of other apertures and to suggest appropriate improvements.

At present, there is no generally accepted criterion for evaluation of aperture performance or image quality, though there have been some attempts made along this line [99]. Difficulties involved in developing a reliable criterion are due not only to the complex design and random behaviour of a synthetic aperture but also to the diversity of their applications (e.g. a great variety of target aspect angles at which imaging is made). Normally, potential characteristics or some individual parameters are used as criteria for the evaluation of aperture performance.
8.4.1 Potential SAR characteristics
SAR designers and researchers often use the so-called potential characteristics, since they describe the aperture response to an echo signal from a point scatterer and do not contain micronavigation noise [53]. The following parameters may be referred to as potential characteristics. We shall mostly list characteristics of apertures using digital signal processing and digital image reconstruction.

1. The major lobe width of a synthetic antenna pattern (SAP) characterises the potential resolving power of an aperture in azimuth, $\rho_\beta$. This parameter is determined by the width of the aperture response to a point target at zero noise. In practice, a 3 dB SAP width is most often used as a criterion for evaluation of a potential resolution, but there are other approaches, too. The potential resolution is usually evaluated with a uniform weighting function $H(t) \equiv 1$ to get
$$\rho_\beta \approx \lambda/(2L\sin\gamma), \qquad (8.49)$$
where $L$ is the projection of the synthesis step onto the normal to the view line and $\gamma$ is the incidence angle of microwave radiation. If the weighting function is non-uniform, the major lobe width becomes 1.2–2.5 times larger, depending on the type of the weighting function.
2. The integral level of side lobes,
$$b_i = \left[\int_{-\pi}^{\pi} I^2(\beta)\,d\beta - \int_{-\rho_\beta/2}^{\rho_\beta/2} I^2(\beta)\,d\beta\right]\bigg/\int_{-\pi}^{\pi} I^2(\beta)\,d\beta, \qquad (8.50)$$
¹ Section 8.4 was written by E. F. Tolstov and A. S. Bogachev.
Table 8.1 The main characteristics of the synthetic aperture pattern

Type of weighting function   Relative SAP width   b_i      20 lg(b_m), dB
Uniform                      1.0                  0.0705   −13.3
Parabolic                    1.3                  —        −20.6
Henning's                    1.6                  0.0103   −32.0
Hamming's                    1.45                 0.178    −42.0
characterises the maximum SAP relative to the background created by the side lobes.
3. The maximum side lobe level is
$$b_m = I_{ms}/I_m, \qquad (8.51)$$
where $I_{ms}$ and $I_m$ are the maximum side-lobe and major-lobe intensities, respectively. This parameter is effective in sensing microwave-contrast targets against a weakly reflecting background. The integral and maximum levels of the side lobes, as well as the major lobe width, vary with the weighting function used in the SAR (Table 8.1). The relative width in the Table is the SAP width normalised to that for a uniform weighting function.
4. The azimuthal sample characteristic is
$$k_a = \rho_\beta/\Delta\rho, \qquad (8.52)$$
where $\Delta\rho$ is the step between the azimuthal counts of an image digital signal. According to the sampling theorem, the sample characteristic must meet the condition $\Delta\rho < \rho_\beta$.
This parameter denotes the number of digital signal counts per azimuthal resolution element and describes the radar capability to reconstruct an image. The larger the sample characteristic, the greater the image contrast. However, a larger coefficient entails a greater complexity of the image reconstruction design. The optimal value of this parameter is taken to be $k_a = 1.2$.
5. Image stability characterises the ability of an image digital reconstruction device to sense and count the relative positions of partial frame centres and to provide the proper scale over all the sample characteristics when partial frames are matched and superimposed.
6. The gain in the signal-to-noise ratio in coherent and incoherent integration is calculated from the variations of this parameter at the processor output. It is assumed that the echo and image signals are integrated linearly in both coherent [17] and incoherent integration [59], whereas noise is integrated in quadratures. Therefore, the total gain in the signal-to-noise ratio $K_g$ is
$$K_g = \sqrt{Nn}, \qquad (8.53)$$
where $n$ is the number of echo counts over a synthesis step in one range channel and $N$ is the number of incoherently integrated partial frames.

In real flight conditions, the actual aperture characteristics differ from the potential ones. The reason for this is the noise from processing and micronavigation devices, as well as the limitations of imaging systems.
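Two of the potential characteristics above reduce to one-line formulas. The sketch below evaluates the potential azimuth resolution (8.49) and the total signal-to-noise gain (8.53); all the numerical parameters are assumed illustrations, except that the 14-frame figure echoes the test data discussed later in this section.

```python
import math

# Potential azimuth resolution, Eq. (8.49), uniform weighting H(t) = 1.
lam = 0.0312                 # wavelength, m (3.12 cm)
L = 100.0                    # projection of the synthesis step, m (assumed)
gamma = math.radians(60.0)   # incidence angle (assumed)
rho_beta = lam / (2.0 * L * math.sin(gamma))   # angular resolution, rad

# Total signal-to-noise gain, Eq. (8.53):
n_counts = 256   # echo counts per synthesis step in one range channel (assumed)
N_frames = 14    # incoherently integrated partial frames
K_g = math.sqrt(N_frames * n_counts)

# A non-uniform weighting function widens the major lobe 1.2-2.5 times,
# e.g. the Hamming factor 1.45 from Table 8.1:
rho_hamming = 1.45 * rho_beta
```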
8.4.2 Radar characteristics determined from images
The real performance characteristics of a radar system are evaluated from the results of a statistical processing of image parameters registered during experimental flights over a test ground (of the type of Willcox Playa in the United States). The radar characteristics to be found experimentally are usually as follows.

1. The realistic aperture sharpness is taken to be the minimal distance between two corner reflectors discernible on an image along the respective coordinate, if the reflectors produce pulses of equal intensity and if the power of the reflected signals is much greater than the noise. Note that the sharpness evaluation is affected by the sample characteristic, which can normally be varied by the operator during the test.
2. The intensity of speckle noise on an image is defined as the ratio of the standard deviation to the mean signal intensity on an image for a statistically uniform area on the earth. The speckle arises from the presence of numerous point scatterers in a resolution element, which have an approximately identical radar cross section (RCS) and are produced by re-emission of the antenna pattern of random geometry. The speckle effect can be reduced by filtering or by incoherent integration of several independent images of the same region on the earth. Independent images can be obtained at different radiation frequencies, polarisations or aspect ratios. Depending on the SAR application, the number of such images varies from 3–4 for military applications to 70 for resources survey tasks.
3. The dark level on an image is an average intensity of a signal from a region of the lowest reflectivity. Sometimes, the dark level is taken to be the average image intensity with a zero echo signal at the input (the noise dark level). This parameter is related to the side lobe size in the synthesised antenna pattern and to the processing noise.
4. The dynamic range is defined as the maximum-to-minimum signal intensity ratio on an image. It depends on the design of the transmitter–receiver unit, the processor characteristics, the receiver gain control, etc.
5. The contrast of adjacent samples is found as the ratio of the maximum signal intensity from a point target (much above the noise level) to the average intensity of the adjacent samples. This parameter characterises the SAR ability to reconstruct the maximum space frequency on an image.
6. The mean image power is a parameter affected not only by the transmitter power, the antenna gain, the receiver sensitivity and the signal-to-noise gain at the processor output, but also by the post-processing before a signal is displayed (especially at the stage of defining its minimum threshold).
7. The intrinsic aperture noise level is the mean image signal level when there is only noise at the aperture input and its gain corresponds to the mean image signal. This parameter covers the total effect of the aperture noise during the synthesis.
8. The radar swath width is determined by the screen parameters (the number of lines and the number of pixels in a line) and by the discretisation step in range and azimuth. An acceptable number of image pixels on a screen normally varies from 512 × 512 to 1024 × 1024.
9. Geometrical distortions of an image are defined as the standard deviation of the positions of reference scatterers relative to their actual positions. The central reference mark is superimposed with the real reference. The standard deviation value is affected by the range, the view angle, the altitude, the distance between the reference and the image centre, as well as by the imaging time.
10. The imaging time is an important parameter of an aperture operating in real time.

A typical test ground for the study of aperture characteristics is a statistically uniform surface with trihedral corner reflectors (Fig. 8.5) arranged at different distances from each other (for evaluation of the aperture sharpness). The reflectors possess different reflectivities, so one can measure the dynamic range of the system. In addition to a uniform background, a test ground usually includes some common objects such as roads, fields, smooth surfaces, railways, etc.
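The speckle figures used in this section follow from first principles: for fully developed speckle the intensity is exponentially distributed, so the relative standard deviation is 1 for a single look and falls roughly as $1/\sqrt{N}$ under $N$-fold incoherent integration. A small Monte-Carlo sketch:

```python
import math
import random

random.seed(1)

def speckle_index(n_looks, n_pixels=20000):
    """Relative standard deviation of an n_looks-averaged speckle image."""
    vals = [sum(random.expovariate(1.0) for _ in range(n_looks)) / n_looks
            for _ in range(n_pixels)]
    mean = sum(vals) / n_pixels
    var = sum((v - mean) ** 2 for v in vals) / n_pixels
    return math.sqrt(var) / mean

s1 = speckle_index(1)    # fully developed speckle: index close to 1
s14 = speckle_index(14)  # 14-fold incoherent integration
```

For $N = 14$ the index comes out near $1/\sqrt{14} \approx 0.27$, consistent with the measured speckle of 0.3 reported for the 14-fold integrated test image later in this section.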
In order to understand better the difference between the potential and real characteristics of a synthetic aperture and a SAR as a whole, we shall make use of test results with digital image reconstruction (the AN/APQ-102A modification) [53]. Its potential resolution was 12.2 m along the azimuth and range coordinates. The discretisation step for evaluation of a real azimuthal resolution was taken to be 3.04 m. Figure 8.6 shows an azimuthal signal from two corner reflectors. When the valley
Figure 8.5 A schematic test ground (1600 m × 1600 m) with corner reflectors for investigation of SAR performance
Figure 8.6 A 1D SAR image of two corner reflectors (radar image intensity versus the number of the azimuth channel)
between their images was 2 dB, the azimuthal resolution was found to be 21.28 m, or 7 pixels in an image line.

Part of the test ground image was obtained by a 14-fold incoherent integration with the mean signal value of 0.671 and a standard deviation of 0.201. The evaluated speckle was found to be 0.3, which is a sufficiently low level.
The dark level was typically 23 dB of the grey-level value. Hence, the SAR dynamic range is 33 dB, with the contrast of adjacent samples being 2.8, or 4.5 dB. For a synthetic aperture with strongly suppressed side lobes, this parameter was 6–10 dB. The large standard deviation in this case is due to the use of corner reflectors with a large RCS.
Figure 8.7 shows a histogram of the noise distribution at the aperture output, and one may suggest that the probability density has a Rayleigh pattern. The mean value of 0.21 was taken to be the dark level. One of the dark regions exhibits a Rayleigh distribution with a mean value of 0.42. A screen with 384 × 360 pixels covered a view zone of 4.8 × 4.5 km. The errors in the measurement of the range positions of the corner reflectors were 14 m and 18 m at a distance of 1600 m from the image centre, whereas the radar was at 14.5 km from it. The azimuth measurement error was ∼50 m under the same conditions.
8.4.3 Integral evaluation of image quality
The authors of Reference 99 have suggested a method of integral evaluation of radar images. With this method one can compare images and establish a certain standard
Figure 8.7 A histogram of the noise distribution in a SAR receiver (frequency versus radar image intensity)
for the transformation of resolution to the number of incoherent integrations or to a parameter related to the dynamic range of an image signal. It is shown that the interpretability, or the operator's ability to interpret an image, U, is related to the grey-level volume V as

U = U_0 exp(−V/V_c),   (8.54)

where U_0 is the maximum image interpretability and V_c is the critical grey-level resolution.
It has been found empirically that the interpretability is related to the grey-level volume defined as

V = p_a p_r p_g,   (8.55)

where p_a, p_r are the linear resolutions in azimuth and range, respectively, and p_g is the grey-level resolution (in half-tones). The new image parameter – grey-level resolution – can be expressed as the ratio of a level a signal exceeds in 90 per cent of cases to that in 10 per cent of cases for independent samples. This parameter can be found from the formula:

p_g ≈ (√N + 1.282)/(√N − 1.282).   (8.56)
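As an illustration, Eq. (8.56) is easy to tabulate. The following sketch (the function name is ours, not from the book) prints the half-tone resolution for several numbers of incoherent integrations; as the text notes, the formula is only a rough approximation for N < 4.

```python
import math

def grey_level_resolution(n_looks: int) -> float:
    """Approximate grey-level (half-tone) resolution, Eq. (8.56).
    Only meaningful for sqrt(n_looks) > 1.282, i.e. n_looks >= 2."""
    root = math.sqrt(n_looks)
    return (root + 1.282) / (root - 1.282)

# Half-tone resolution improves (decreases) as more looks are averaged
for n in (2, 4, 9, 16, 64):
    print(n, round(grey_level_resolution(n), 2))
```

The values decrease monotonically towards 1 as N grows, reflecting the gain in radiometric fidelity from incoherent integration.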
However, the calculated value differs noticeably from the measurements made at N < 4 (Fig. 8.8). The experimental interpretability scale ranged from 0 for an uninterpretable image to 4 for a fully interpretable one. Therefore, the maximum interpretability U_0 should be 4.
The authors of Reference 1 have obtained a more complex equation for p_g:

p_g = 10 lg { [1 + (N/e) Σ_{k=1}^{N} 3k [(N − k)! N^{k+1}]^{−1}] / [1 − (N/e) Σ_{k=1}^{N} 3k [(N − k)! N^{k+1}]^{−1}] },

where e ≈ 2.718. This result, however, is based on information theory and additionally takes into account the properties of the photointerpreter's visual analyser. It was discovered that, according to the criterion of the maximum image information capacity, N = 2 is optimal.
[Figure: grey-level resolution p_g versus N (1–100, logarithmic scale); curves: approximation and experiment]

Figure 8.8 The grey-level (half-tone) resolution versus the number of incoherently integrated frames N
An important experimental finding was the critical volume V_c for a single frame synthesised by the aperture (N = 1). For the majority of frames, the length per square resolution element in the case of a 37 per cent interpretability was found to be 9.14 m. Such objects were vegetation and urban areas, low-contrast regions, communication lines, city and country roads, etc. Exceptions were the boundaries of water bodies and vegetation covers showing a 37 per cent interpretability even at the lowest linear resolution in azimuth and range (13.72 m). Since the grey-level resolution at N = 1 (Fig. 8.8) is 22, it is easy to find the critical volume:

V_c = p_a p_r p_g ≈ 9.14² × 22 ∼ 1850.   (8.57)

With this, the final interpretability expression takes the form:

U = 4 exp{−p_a p_r p_g /1850}.   (8.58)
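Equation (8.58) can be checked directly: at the critical single-frame resolution the exponent is close to −1, so the relative interpretability should be near 37 per cent. A minimal sketch (function name ours):

```python
import math

def interpretability(p_a: float, p_r: float, p_g: float) -> float:
    """Interpretability on the 0-4 expert scale, Eq. (8.58)."""
    return 4.0 * math.exp(-p_a * p_r * p_g / 1850.0)

# The critical single-frame case (9.14 m resolution, p_g = 22) sits
# near exp(-1), i.e. about 37 per cent of the maximum interpretability.
print(round(interpretability(9.14, 9.14, 22) / 4.0, 2))
```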
Note that the calculation of the critical volume used the linear resolution of 9.14 m. Figure 8.9 shows the interpretability plotted against the linear resolution p_a = p_r = p for different numbers of incoherent integrations.
When analysing the plots in Fig. 8.9, one should bear in mind that both the measurements and the calculations were based on some a priori assumptions. For example, the half-tone scale was chosen on the assumption that a photograph had the maximum interpretability, that it had an infinite number of incoherent integrations (N = ∞) and the half-tone resolution p_g (Fig. 8.8). An image synthesised without incoherent integrations (N = 1) was thought to have the poorest half-tone resolution, but the resolution was to be finite (p_g < ∞), since the image preserved some, though very low, interpretability. It was established experimentally that the poorest half-tone resolution was equal to 22 (Fig. 8.8).
The interpretability was evaluated by three qualified and experienced interpreters of radar and optical images, using the four-level scale (from 0 to 4) mentioned above. The interpreters worked with prints of 20.32 cm × 25.40 cm in size. The resolution elements varied in shape from square to rectangular (with the side ratio of 1:10) and in the number of incoherent integrations varying from 1 to ∞. All the experiments
[Figure: U/U_o (0–0.8) versus linear resolution p, m (25–50); curves for N = 1, 3, 10 and N = ∞]

Figure 8.9 The image interpretability versus the linear resolution p_a = p_r = p
were carried out using a quadratic detector because the detection was performed on a quadratic film. It can be demonstrated theoretically, however, that the experimental data can also be useful in linear detection of image signals if the half-tone resolution is calculated by another approximate formula:

p_gl ≈ (√N + 0.6175)/(√N − 0.6175).   (8.59)
The major result of this series of investigations [99] was the experimental support of the idea that image interpretability depended only on the half-tone volume resolution, or on the product of the azimuthal, range and half-tone resolutions. Therefore, this parameter varies with the area rather than the shape of a resolution element (square or rectangular). On the other hand, it depends on the resolution element area and the number of incoherent integrations. So one can make a compromise when choosing the resolution in azimuth p_a, in range p_r and in half-tones p_g [99]. Identical interpretabilities can be achieved by using different combinations of these parameters. This conclusion proved to be quite unexpected and may play an important role in solving some applied problems when one has to choose between the complexity and the cost of aperture processing techniques.
Indeed, if this conclusion is correct, it is worth making an effort to achieve a high image interpretability by improving low-cost resolutions. To illustrate, a higher range resolution and an incoherent integration in spaceborne SARs can be achieved in a simpler way than a higher azimuthal resolution. For example, one can fix the azimuthal resolution but improve the range resolution or increase the number of incoherent integrations.
We shall give a good example to illustrate the effectiveness of resolution redistribution with reference to a side-looking synthetic aperture. In this type of aperture, the azimuthal resolution depends linearly on the number of incoherent integrations N:

p_a(N) = λ r_o N/(2 L_m) = p_o N,   (8.60)
[Figure: p_g (0–20) versus N (0–9)]

Figure 8.10 The dependence of the half-tone resolution on the number of incoherent integrations over the total real antenna pattern
where λ is the wavelength, r_o is the oblique range, L_m is the maximum possible length of the aperture, and p_o = λ r_o/(2 L_m) is the best aperture resolution. If we now fix the range resolution, the minimum of the product p_a(N) p_g will show the optimal combination of azimuthal resolution and incoherent integration (Fig. 8.10). This optimum is found to lie at N = 3; hence, p_a = 3 p_o.
The integral criterion for image evaluation from the half-tone volume resolution is convenient and relatively simple. But when using it in practice, one should bear in mind that the available amount of statistical data is insufficient, so the estimations of image quality may be quite subjective.
8.5 Speckle noise and its suppression

Synthetic aperture radar remote sensing of the earth is becoming increasingly popular in many areas of human activity (Section 9.1). The analysis of images may be made in terms of a qualitative or quantitative approach [2].
A qualitative analysis is largely made by conventional methods of visual interpretation of aerial photography, combined with the researcher's knowledge and experience. Although radar images have much in common with aerophotographs (Chapter 1), the physical mechanisms of their synthesis set limits on the applicability of interpretation methods elaborated for optical imagery. Additional difficulties arise from the presence of speckle noise.
A quantitative analysis is based on the measurement of target characteristics for various backgrounds and objects [2], followed by computerised processing of video information. The latter is normally used to solve the following tasks. One often has to improve image quality and interpretation procedures at the pre-processing stage, which includes various corrections, noise reduction, contrast enhancement, highlighting contours, etc. It may also be necessary to compress and code images to be transmitted through communication channels. Besides, one may have to identify some of the items on an image and classify various elements present on it. This is usually done by image segmentation, cluster analysis and so on. Obviously, this kind of image subdivision is always somewhat arbitrary.
Here we shall discuss methods of solving the first type of task with emphasis on those techniques specific to radar imagery, such as speckle suppression. Some others, like geometrical and radiometrical correction, have already been dealt with in the literature [2,31]. Some of the image processing techniques are quite versatile and have also been discussed in detail [2].
8.5.1 Structure and statistical characteristics of speckle

There has been much effort to understand the image speckle structure. The available publications on this subject can be classified into two groups as for the specific problems being tackled. The more extensive group covers work on speckle as a noise, suggesting various ways of its filtering. The other group includes publications on useful properties of speckle, in particular, on the possibility to derive from it information about the area of interest. Naturally, there are problems in each trend that remain poorly understood. A feature common to all the publications is the description of statistical characteristics of speckle.
Let us consider the statistical characteristics of an echo signal in terms of a general reflection model when a resolution element contains many echo signals from different point scatterers. The signals are random, independent and have about the same intensity. Then the total signal represents a Gaussian random quantity and its amplitude has a Rayleigh pattern. This kind of reflection model is often termed the Rayleigh model. When a synthetic aperture changes its position relative to a target, the intensity fluctuations of the total echo signal give rise to a characteristic speckle pattern on an image. Clearly, the intensity I of individual pixels will obey the exponential law of the probability density distribution:
theprobabilitydensitydistribution:
p
I
(x) =
1

2
o
exp
_

x

2
o
_
(8.61)
withthemeanvalueof
¯
I = 2σ
2
o
andthedispersionσ
2
I
= 4σ
4
o
, whilethephaseθ of
theimagepixelsisequiprobableintherangefrom−π to+π.
Another reflection model is applied when a resolution element has one bright point together with other point scatterers, such that the total echo signal contains one dominant signal of much higher intensity along with many random independent signals of nearly the same lower intensity. Then the amplitude of the total signal is described by the Rice distribution, or by a generalised Rayleigh distribution. This kind of model is called the Rice reflection model.
The distribution of the intensity probability density at single pixels is

p_I(x) = (1/(2σ_o²)) exp(−(x + s_o)/(2σ_o²)) I_o(√(x s_o)/σ_o²)   (8.62)

with the mean value of Ī = 2σ_o² + s_o and the dispersion σ_I² = 4σ_o⁴(1 + 2r), where s_o is the square amplitude of the highest intensity component of the signal, r = s_o/(2σ_o²), and I_o(·) is a modified zero-order Bessel function of the first kind, and the distribution of the phase probability density is
p_θ(x) = (1/(2π)) exp(−a²/2) + (a cos x/√(2π)) Φ(a cos x) exp(−a² sin²x/2),   (8.63)

where

a = √s_o/σ_o,   Φ(t) = (1/√(2π)) ∫_{−∞}^{t} exp(−τ²/2) dτ

is the Laplace function.
Since the signal-to-noise ratio – the ratio of the mean intensity Ī to the standard deviation σ_I – is equal to 1 and (1 + r)/√(1 + 2r) for the Rayleigh and Rice models, respectively, the intensity fluctuation amplitude in the speckle structure is commensurable with the useful signal intensity for a complex target at r ≈ 1. For this reason, images of such targets have a well-pronounced speckle structure. Since it is difficult to analyse an echo signal from a target with the Rice reflection, most authors discuss targets with the Rayleigh reflection.
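These signal-to-noise ratios are easy to verify by simulation. The sketch below (our construction, not from the book) draws complex pixel amplitudes as a steady component of power s_o plus Gaussian clutter in each quadrature, and estimates the ratio of the mean intensity to its standard deviation:

```python
import math
import random

random.seed(0)

def intensity_snr(s_o: float, sigma_o: float, trials: int = 100_000) -> float:
    """Monte-Carlo mean/std of pixel intensity: a steady component of
    power s_o plus Gaussian clutter of variance sigma_o**2 per quadrature."""
    a = math.sqrt(s_o)  # amplitude of the dominant scatterer
    vals = []
    for _ in range(trials):
        x = random.gauss(a, sigma_o)    # in-phase component
        y = random.gauss(0.0, sigma_o)  # quadrature component
        vals.append(x * x + y * y)      # intensity I
    mean = sum(vals) / trials
    std = math.sqrt(sum((v - mean) ** 2 for v in vals) / trials)
    return mean / std

# Rayleigh model (s_o = 0): SNR tends to 1.
# Rice model with r = s_o/(2*sigma_o**2) = 1: SNR tends to (1+r)/sqrt(1+2r).
print(round(intensity_snr(0.0, 1.0), 2), round(intensity_snr(2.0, 1.0), 2))
```

With r = 1 the predicted ratio is 2/√3 ≈ 1.15, only slightly above the Rayleigh value of 1, which is why a single dominant scatterer barely lifts an image out of the speckle.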
It is worth noting that the above expressions for the probability density distribution in the case of a uniform and isotropic background are valid for both an ensemble of images at each resolution element and a single image over a multiplicity of resolution elements. For a non-uniform background, however, these expressions are valid only for an ensemble of image realisations.
When N independent images of the same earth area are summed up, the probability density distribution of the speckle structure takes the form:

p_I(x) = x^{N−1} exp(−x/(2σ_o²)) / [(2σ_o²)^N Γ(N)]   (8.64)

with the mean value of Ī = 2Nσ_o² and the dispersion σ_I² = 4Nσ_o⁴, where Γ(·) is the gamma-function described as Γ(N) = (N − 1)! for integer N. In this case, the signal-to-noise ratio is √N. The probability density distribution in Eq. (8.64) corresponds to the gamma-distribution with the parameters equal to N and 1/(2σ_o²), or to the χ²-distribution with 2N degrees of freedom at σ_o² = 1. A general expression for the initial moments of distribution (8.64) has the form:

M_k^N = [(N + k − 1)!/(N − 1)!] (2σ_o²)^k,

where M_k^N is the kth initial moment.
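The √N behaviour of the signal-to-noise ratio under incoherent summation can be checked with a short simulation (a sketch of ours, using unit-mean exponential intensities for each look):

```python
import math
import random

random.seed(1)

def multilook_snr(n_looks: int, trials: int = 50_000) -> float:
    """SNR (mean/std) of the sum of n_looks independent single-look
    intensities, each exponentially distributed with unit mean."""
    sums = [sum(random.expovariate(1.0) for _ in range(n_looks))
            for _ in range(trials)]
    mean = sum(sums) / trials
    std = math.sqrt(sum((v - mean) ** 2 for v in sums) / trials)
    return mean / std

# Estimated SNR against the predicted sqrt(N)
for n in (1, 4, 16):
    print(n, round(multilook_snr(n), 2), round(math.sqrt(n), 2))
```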
Reference 2 presents the fluctuation spectrum of the speckle amplitude and its autocorrelation function. Suppose a point scatterer is described by the Dirac δ-function and F(k) is the transfer function of a synthetic aperture, where k = 2π/λ is the wave number and λ is the wavelength of the echo signal. Then the amplitude spectrum of the echo signal from a point scatterer located at a point with the coordinate x relative to the SAR carrier track is F_Σ(k) = (1/2)F(k) exp(jxk). For randomly arranged point scatterers, the signal received by the aperture is defined as

F_Σ(k) = (1/2) F(k) Σ_{l=1}^{L} exp(jx_l k).

At L → ∞, the speckle power density spectrum can be determined within the accuracy of a constant factor:

S(k) = |F_Σ(k)|² = |F(k)|².

In other words, it is unambiguously dependent on the aperture transfer function. The autocorrelation function of speckle is related to its spectral power density by a Fourier transform. Therefore, the speckle autocorrelation function can be used to find the aperture impulse response directly.
The statistical characteristics of speckle for a background represented as an array of randomly moving point scatterers are considered in Reference 2. It is shown that the concept of spatial resolution has no sense if the phase fluctuations of signals from the point scatterers are large (the phase changes by 2π several times during the synthesis).
8.5.2 Speckle suppression

The available methods of suppression or smoothing out of image speckle can be subdivided into two groups. Some methods are based on the averaging of several independent images of the same background. This group is not large but these methods have been extensively used owing to their relative simplicity. The other group of methods is much larger and includes so-called a posteriori procedures when speckle is suppressed by spatial filtering.
Independent images of the same earth area can be obtained in different ways based on a common principle of image segmentation with respect to a particular parameter, for example, the Doppler frequency, the carrier frequency or polarisation (i.e. sensing a background at different polarisations of probing radiation). The first technique is known as multibeam processing and it is most commonly used in practice [99]. A specific feature of multibeam processing is a proportional decrease of the aperture sharpness in track range when the Doppler frequency band is subdivided into N identical non-overlapping subbands. The specificity of speckle suppression procedures is that the signal-to-noise ratio increases by a factor of √N if N independent images are averaged.
The methods of the first group can use other procedures, for example, median filtering [2], in addition to the averaging of N independent images.
A wide application of a posteriori techniques is primarily due to a rapid development of image processing technology. The lack of an adequate model of speckle structure and useful signal makes it difficult to design effective algorithms for speckle suppression. Until recently, nearly all researchers working on speckle problems have regarded speckle as a multiplicative noise to a useful signal. However, there are more complex models. The authors consider the possibility of employing Wiener and Kalman filtering algorithms, homomorphic processing and various heuristic techniques to suppress speckle.
However, a lack of objective criteria for evaluation of image quality by visual perception creates additional difficulties. For this reason, nearly all the researchers cited below compare the processing results with expert judgement, which makes a comparative analysis of the suggested algorithms quite problematic.
The first attempts to suppress speckle by a posteriori techniques used the Wiener filtering algorithm which varies with the signal [2]. The workers analysed an additive, signal-modelled noise approach and a multiplicative noise model. In the former, a distorted image is described by the expression:

z(x, y) = s(x, y) ∗ h(x, y) + f [s(x, y) ∗ h(x, y)] n(x, y),   (8.65)

where h(x, y) is the space impulse response, f is commonly a non-linear function and n(x, y) is noise independent of the signal s(x, y). By introducing the designations n′(x, y) = s′(x, y) n(x, y) and s′(x, y) = f [s(x, y) ∗ h(x, y)], we transform Eq. (8.65) to

z(x, y) = s(x, y) ∗ h(x, y) + n′(x, y).
In the second noise model, an image is described as

z(x, y) = n(x, y)[s(x, y) ∗ h(x, y)],   (8.66)

where n(x, y) is signal-independent multiplicative noise. The Wiener filter has the transfer function M(µ, ν) = Φ_zs(µ, ν)/Φ_zz(µ, ν) and minimises the standard deviation of the filtering, provided that z(x, y) and s(x, y) are wideband spatially uniform random fields; Φ_zs and Φ_zz are the respective power density spectra. With Eq. (8.65), the first noise model gives the following transfer function of a Wiener filter:
thefirst noisemodel givesthefollowingtransfer functionof aWiener’sfilter:
M
1
(µ, ν) =

ss
(µ, ν)H

(µ, ν)

ss
(µ, ν)|H(µ, ν)|
2
+
s

s
(µ, ν) ∗
nn
(µ, ν)
(8.67)
ontheassumptionof n(x, y) = 0. Heren(x, y) isstatistically independent of s(x, y),
s

(x, y) is a uniformwideband field, and H(µ, ν) = F[h(x, y)] is the system’s
transfer function. At f [s(x, y) ∗ h(x, y)] = s(x, y) ∗ h(x, y), wehave
s

s
(µ, ν) =

ss
(µ, ν)|H(µ, ν)|
2
, andEq. (8.67) canbere-writtenas
M
1
(µ, ν) =

ss
(µ, ν)H

(µ, ν)

ss
(µ, ν)|H(µ, ν)|
2
+[
ss
(µ, ν)|H(µ, ν)|
2
] ∗
nn
(µ, ν)
. (8.68)
If thenoiseisuniform, widebandandsignal-independent, thetransfer functionof a
Wiener’sfilter inthesecondmodel will be
M
2
(µ, ν) =
n
ss
(µ, ν)H

(µ, ν)

nn
(µ, ν) ∗ [
ss
(µ, ν)|H(µ, ν)|
2
]
. (8.69)
It is clear from (8.69) that at n̄(x, y) = 0 the filter transfer function is M_2(µ, ν) = 0. Suppose we have n_1(x, y) = n(x, y) − n̄; then

M_2(µ, ν) = [Φ_ss(µ, ν) H∗(µ, ν)/n̄] / {Φ_ss(µ, ν)|H(µ, ν)|² + (1/n̄²) Φ_{n_1 n_1}(µ, ν) ⊗ [Φ_ss(µ, ν)|H(µ, ν)|²]}.   (8.70)

Obviously, at n̄ = 1 filters with the transfer functions (8.68) and (8.70) are equivalent. Modelling has shown that a Wiener filter for signal-dependent noise with the characteristics M_1 and M_2 is better than that for additive, signal-independent noise. But the essential limitations of the former are the need for a large amount of a priori information about the signal and the noise, as well as vast computations. Kalman filtering algorithms [2] suffer from similar disadvantages.
The possibility of a homomorphic image processing is discussed in Reference 2. A homomorphic processing is supposed to be any conversion of observable quantities such that the signal fluctuations are transformed to additive and signal-independent noise. Within the multiplicative speckle model, Eq. (8.64) yields

p(I) = [N^N/(Γ(N) Ī)] (I/Ī)^{N−1} exp(−N I/Ī)   (8.71)

with σ_I² = Ī²/N. Then the homomorphic transformation reduces to taking the logarithms. The distribution density of the quantity D = ln I is described as

p(D) = [N^N/Γ(N)] exp[N(D − D_o)] exp{−N exp(D − D_o)}   (8.72)
with D_o = ln Ī. In practice, the distribution of signal-dependent noise is often approximated by a normal distribution with a signal-dependent dispersion. At any value of N, the accuracy of the normal approximation for the distribution (8.72) is greater than that for the distribution (8.71). The variable D can be processed by any algorithm available in the model of additive and signal-independent noise. It is pointed out in Reference 2 that the application of the Wiener filtering algorithm with a preliminary homomorphic processing of an image provides better results than a separate application of each algorithm.
The authors of Reference 2 believe that a homomorphic transformation is a reasonable alternative to image processing in signal-dependent noise. On the other hand, experience indicates that this does not give an essential advantage over heuristic methods to be discussed below. Moreover, the necessity to use both direct and inverse transformations increases the computation costs considerably.
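The key property of the logarithmic transformation – that it turns multiplicative speckle into fluctuations whose strength no longer depends on the signal level – can be demonstrated directly (a sketch of ours for the single-look, N = 1, case):

```python
import math
import random

random.seed(2)

def log_std(signal_level: float, trials: int = 50_000) -> float:
    """Std of ln(intensity) for single-look exponential speckle
    multiplying a constant signal level."""
    logs = [math.log(signal_level * random.expovariate(1.0))
            for _ in range(trials)]
    mean = sum(logs) / trials
    return math.sqrt(sum((v - mean) ** 2 for v in logs) / trials)

# After the log transform the fluctuation strength is independent of the
# signal level: both values are close to pi/sqrt(6), about 1.28.
print(round(log_std(1.0), 2), round(log_std(100.0), 2))
```

Because the residual fluctuations are additive with a fixed variance, any standard filter for additive signal-independent noise can then be applied to D = ln I.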
There is another way of suppressing speckle noise – a local statistics technique [2]. Within the multiplicative speckle model, every element z_ij on an image is represented as the product of the signal s_ij and the noise n_ij. The noise has n̄ = 1 and the dispersion σ_n². On the assumption that the signal and the noise are independent, the authors have derived the expressions

z̄ = s̄ n̄ = s̄

and

σ_z² = M[(sn − s̄ n̄)²] = M[s²] M[n²] − s̄² n̄².

If the signal intensity averaged over the processing window is constant, the expressions are

M[s²] = s̄²   and   σ_z² = s̄²(M[n²] − n̄²) = s̄² σ_n²,   or   σ_n = σ_z/z̄.

This model is consistent with the data obtained from the analysis of uniform surface imagery. The standard deviation σ_n is found to be about 0.28, which is due to a multibeam processing and the use of other algorithms for improving images synthesised by the SAR SEASAT-A. Using the local statistics technique for a selected window (usually with 5 × 5 or 7 × 7 resolution elements), one can find the moving local average z̄ and the dispersion σ_z². Then one gets
s̄ = z̄/n̄,   σ_s² = (σ_z² + z̄²)/(σ_n² + n̄²) − s̄².   (8.73)
The expansion of z into a Taylor series with the account of the first-order terms only yields

z = n̄ s + s̄(n − n̄).   (8.74)

According to Eqs (8.73) and (8.74), the minimisation of the mean square error of speckle suppression leads to the following formula for ŝ:

ŝ = s̄ + k(z − n̄ s̄)   (8.75)

with

k = n̄ σ_s² / (s̄² σ_n² + n̄² σ_s²).

Then at n̄ = 1, one gets

ŝ = s̄ + k(z − s̄),   k = σ_s² / (s̄² σ_n² + σ_s²).   (8.76)
The heuristic algorithm derived from the local statistics approach is especially effective for speckle suppression on images of uniform and isotropic surfaces. It does not remove the contours of extended proper targets. This algorithm has provided good results when processing imagery from the SAR SEASAT-A. Its major advantages are simplicity and adaptive properties associated with the computation of the local statistics. It has, however, a serious limitation: it cannot predict the error behaviour during the speckle suppression. Besides, the necessity of computing the local average and, especially, the dispersion in a common 7 × 7 window considerably reduces the algorithm efficiency.
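The estimator of Eqs (8.73)–(8.76) can be sketched as a single-pass filter. The implementation below is a minimal illustration of ours (assuming n̄ = 1, the σ_n ≈ 0.28 figure quoted above, and a 5 × 5 window), not the processing code used for SEASAT-A:

```python
def local_statistics_filter(img, sigma_n=0.28, half=2):
    """Lee-type local statistics filter, Eq. (8.76), for a 2D list of
    intensities; half = 2 gives a 5x5 moving window."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            window = [img[k][l]
                      for k in range(max(0, i - half), min(rows, i + half + 1))
                      for l in range(max(0, j - half), min(cols, j + half + 1))]
            m = sum(window) / len(window)          # local average z_bar
            var_z = sum((v - m) ** 2 for v in window) / len(window)
            # Signal dispersion from Eq. (8.73) with n_bar = 1:
            var_s = max(0.0, (var_z + m * m) / (sigma_n ** 2 + 1.0) - m * m)
            # Gain k from Eq. (8.76); k = 0 when no signal variation is seen
            k = var_s / (m * m * sigma_n ** 2 + var_s) if var_s > 0 else 0.0
            out[i][j] = m + k * (img[i][j] - m)    # Eq. (8.76)
    return out
```

On a uniform area var_s collapses to zero and the filter returns the local mean; near strong contours var_s is large, k approaches 1 and the original pixel is largely preserved – the adaptive behaviour described in the text.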
In order to decrease the computational costs inherent in local statistics algorithms, some workers have suggested using a sigma-filter. For a moving window of (2m_1 + 1) × (2m_2 + 1) in size (m_1 and m_2 are integer numbers) with the central resolution element z_ij, the signal ŝ_ij is found from the formula:

ŝ_ij = [Σ_{k=i−m_1}^{i+m_1} Σ_{l=j−m_2}^{j+m_2} δ_kl z_kl] / [Σ_{k=i−m_1}^{i+m_1} Σ_{l=j−m_2}^{j+m_2} δ_kl],   (8.77)

where

δ_kl = 1 at (1 − 2σ_n) z_ij ≤ z_kl ≤ (1 + 2σ_n) z_ij, and 0 otherwise.
It is clear that a filter with the characteristic (8.77) will be more cost-effective than that with (8.76). An 11 × 11 window was used in Reference 2 to estimate σ_n. It was found that two passes of a sigma-filter were sufficient to get a satisfactory suppression of speckle noise without smearing the contours. When the number of passes was increased to four and more, the image was damaged.
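One pass of Eq. (8.77) can be sketched as follows (a minimal illustration of ours, again assuming σ_n ≈ 0.28 and a 3 × 3 window, i.e. m_1 = m_2 = 1):

```python
def sigma_filter(img, sigma_n=0.28, m1=1, m2=1):
    """One pass of the sigma-filter, Eq. (8.77): average only those
    window pixels lying within (1 +/- 2*sigma_n) of the central value."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(rows):
        for j in range(cols):
            lo = (1.0 - 2.0 * sigma_n) * img[i][j]
            hi = (1.0 + 2.0 * sigma_n) * img[i][j]
            kept = [img[k][l]
                    for k in range(max(0, i - m1), min(rows, i + m1 + 1))
                    for l in range(max(0, j - m2), min(cols, j + m2 + 1))
                    if lo <= img[k][l] <= hi]
            if kept:
                out[i][j] = sum(kept) / len(kept)
    return out
```

A bright point whose neighbours all fall outside its ±2σ_n band is averaged only with itself and so passes through unchanged, which is why the filter smooths speckle without smearing contours.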
The following modification of the sigma-filter was discussed in Reference 2 for filtering impulse noise together with speckle suppression. One chooses a threshold B. If the number of elements retained in accordance with Eq. (8.77) is smaller than or equal to the threshold B, the average of four neighbouring elements is ascribed to the estimated position of the moving window. The choice of a threshold is critical because it affects the contours. It is pointed out in this work that the threshold value for a 7 × 7 window should be less than 4 and for a 5 × 5 window less than 3. The use of a sigma-filter with an 11 × 11 window and then another sigma-filter with a 3 × 3 window at the threshold B = 1 proved to be most effective. A small window allows suppression of impulse noise in the vicinity of sharp contours. Other filter modifications are also possible. This type of filter was compared with a filter with the characteristic (8.76) and with a median and an averaging filter. It was concluded from the expert evaluation that a sigma-filter provides better results. Its disadvantage is that one cannot estimate a priori the behaviour of the speckle suppression error. Important merits of this type of filter are its simplicity, a high computational efficiency and adaptive properties. These characteristics make the filter suitable for application in digital image processing in a real-time mode.
The local statistics method can also be implemented with a linear filter minimising the mean square error of the filtering. In addition to the algorithms described above, there is a large number of heuristic algorithms for speckle suppression. Among these are algorithms for median filtering, averaging over a moving window with various weighting functions, algorithms for a non-linear transformation of the initial image, the reduction of an image histogram to a symmetric form, etc. Most heuristic algorithms are simple to use and have a fairly high computation efficiency but all of them possess a serious drawback – they practically ignore the specific process of SAR imaging: while suppressing noise, they partly suppress the useful signal. It is usually hard to estimate the speckle suppression error when using such algorithms.
To conclude, image processing covers a wide range of tasks and problems, many of which have not been dealt with in this chapter. Among these are the processing based on the properties of a human visual analyser, the criteria for image quality and image optimisation, quantitative evaluation of information contained in an image, etc. Due to a rapid development of cybernetics, information theory, iconics and computer science and practice, these areas of investigation are constantly trying new approaches. For example, they have tested some concepts of artificial intelligence in the processing of data on remote probing of the earth, the use of radar imagery as a database for visual interpretation and the fusion of images obtained in different wavelength ranges. The results obtained from such studies can provide more information about the earth and other planets.
Chapter 9

Radar imaging application

9.1 The earth remote sensing¹

9.1.1 Satellite SARs

Synthetic aperture radar imagery from satellites and aircraft has a high spatial resolution and is independent of light and clouds. Nearly real-time information and a comprehensive SAR image analysis is of importance not only for scientific studies, but also because it has a practical significance providing information for companies dealing with off-shore oil and gas exploration, deep-ocean mining, fishing, marine transportation, weather forecast, etc. [65]. In 1972 the NASA Office of Applications initiated the Earth and Oceans Dynamics Applications Program for the development of techniques of global monitoring of oceanographic phenomena and the design of an operational ocean dynamics monitoring system. Satellite SAR studies of the earth environment began in 1978, when the first series of images was obtained by the SEASAT during its 3-month operation. This L-band horizontally polarised radar operated at a wavelength of 23 cm at an incidence angle of 20°. It was primarily designed for ocean wave imaging, although SAR imagery was also acquired over ice and terrestrial surfaces. It demonstrated the potential of satellite radar data in scientific and operative applications. The SEASAT data supported the notion that wind and wave conditions over the ocean could be measured from a satellite with an accuracy comparable to that achieved from surface platforms [5]. Various SAR instruments operating at different wavelengths, polarisations and incidence angles were mounted on board Space Shuttles (Table 9.1). In November 1981 and October 1984, the SIR-A and SIR-B radars, which used the SEASAT technology

¹ Sections 9.1.1 and 9.1.2 were written by V.Y. Alexandrov, O.M. Johannessen and S. Sandven, Nansen International Environmental and Remote Sensing Centre, St Petersburg, Russia, and Nansen Environmental and Remote Sensing Centre, Bergen, Norway. Section 9.1.3 was written by D.B. Akimov, Nansen International Environmental and Remote Sensing Centre, St Petersburg, Russia.
Table 9.1 Technical parameters of SARs borne by the SEASAT and Shuttle

Parameter                      SEASAT   SIR-A    SIR-B    SIR-C            X-SAR
Orbit inclination (°)          108      38       57       57               57
Altitude (km)                  800      260      225      225              225
Incidence angle (°)            20–26    47–53    15–60    20–55            20–55
Frequency (GHz)                1.28     1.28     1.28     1.25 and 5.3     9.6
Polarisation                   HH       HH       HH       HH, VV, VH, HV   VV
Swath width (km)               100      50       30–60    15–90            15–45
Pixel size for four looks (m)  25×25    40×40    25       25               30×(10–20)
Table 9.2 Parameters of the Almaz-1 SAR

Parameter                                Value
Satellite altitude (km)                  270–380
Orbit inclination (°)                    72.7
Wavelength (cm)                          9.6
Polarisation                             HH
Radiometric resolution, one look (dB)    2–3
Swath width (km)                         40
Spatial resolution, one look (m)         10–15
with the 23 cm wavelength and HH (Horizontal–Horizontal) polarisation, provided data targeted at land applications [77]. The SIR-C mission using a two-frequency multipolarisation SAR with a variable incidence angle, together with the X-band VV (Vertical–Vertical) SAR, operated in three flights during the period of 1994–1996. The SIR-C was of interest to ocean remote sensing, and its data were used to extend the understanding of radar backscatter from the ocean and SAR imaging of oceanographic processes [117].
The first USSR SAR mission started in July 1987 with a launch of the Cosmos-1870 satellite equipped with an S-band SAR. Its operation ended in July 1989 and was followed by the Almaz-1 satellite, which operated from May 1991 until October 1992 (Table 9.2). The raw data of 300 km long and 40 km wide stripes with a 10–15 m spatial resolution (one look) could be stored aboard and transmitted to a receiving ground station near Moscow as analogue radio holograms, with SAR images presented as photographic hard copies. Applications of SAR data included studies of various ocean phenomena and sea ice [36].
Table 9.3 The parameters of the ERS-1/2 satellites

Parameter                             Value
Satellite altitude (km)               785
Orbit inclination (°)                 98.52
Wavelength (cm)                       5.66
Polarisation                          VV
Angle of incidence (°)                20–26
Swath width (km)                      100
Spatial resolution, three looks (m)   26×30
The first European Space Agency ERS-1 satellite with a C-band SAR aboard operated successfully from its launch in July 1991 until 1996 and provided a large amount of global and repeated observations of the environment. The focus was on ocean studies and sea ice monitoring [62,64]. In the high-resolution imaging mode, the ERS-1 SAR provides three-look, noise-reduced images with a spatial resolution of 26 m in range (across-track) and 30 m in azimuth (along-track) (Table 9.3). Because of the absence of onboard data storage, a network of ground receiving stations enabled a wide coverage by SAR images. ERS-2, a second satellite of this series, was launched in April 1995, and since mid-August 1995 both satellites operated in a tandem mode, when ERS-2 imaged the same area as ERS-1 one day later.
The RADARSAT launched by the Canadian Space Agency in November 1995 was the first SAR satellite with a clear operational objective to deliver data on various earth objects. Using the onboard data storage, it provides a much wider coverage than the ERS SAR [77]. Processed SAR data could be delivered to users within several hours after acquisition. The RADARSAT operates in the C-band and HH-polarisation, and in several imaging modes with different combinations of the swath width and resolution (Table 9.4). One of its main applications is sea ice monitoring [42].
The advanced SAR (ASAR) onboard the European Space Agency ENVISAT satellite has been providing image acquisition since 2002 [43]. While its major parameters are similar to those of the RADARSAT, the ASAR can also operate at multipolarisation modes using two out of five polarisation combinations: VV, HH, VV/HH, HV/HH and VH/VV. The five major modes are: global, wide swath, image, alternating polarisation and wave modes (Table 9.5). In the image and alternating polarisation modes the ASAR gives high-resolution data (30 m and 3 looks) in a relatively narrow swath (60–100 km), which can be located at different distances from the subsatellite track at incidence angles from 15° to 45°. The alternating polarisation mode provides two versions of the same scene, at HH, VV and/or cross-polarisation. The wide swath mode provides a 420 km swath with a spatial resolution of 150 m and 12 looks. In the global monitoring mode, the ASAR continuously gives a 420 km swath with a spatial resolution of 1000 m and 8 looks.
Table 9.4 SAR imaging modes of the RADARSAT satellite

RADARSAT-1 modes with selective polarisation: transmit H or V; receive H or V, or (H and V)

Beam mode               Nominal swath   Incidence angles to     Number     Spatial resolution
                        width (km)      left or right side (°)  of looks   (approx.) (m)
Standard                100             20–50                   1×4        25×28
Wide                    150             20–45                   1×4        25×28
Small incidence angle   170             10–20                   1×4        40×28
High incidence angle    70              50–60                   1×4        20×28
Fine                    50              37–48                   1×1        10×9
ScanSAR wide            500             20–50                   4×2        100×100
ScanSAR narrow          300             20–46                   2×2        50×50
Table 9.5 The ENVISAT ASAR operation modes

Operation mode        Image mode  Alternating/         Wide swath  Global       Wave mode
parameter                         cross-polarisation   mode        monitoring
Polarisation          VV or HH    VV/HH, HH/HV         VV or HH    VV or HH     VV or HH
                                  or VV/VH
Spatial resolution    28×28       29×30                150×150     950×980      28×30
(along-track ×
across-track) (m)
Radiometric           1.5         2.5                  1.5–1.7     1.4          1.5
resolution (dB)
Swath width (km)      Up to 100   Up to 100            400         ≥400         5
                      (seven      (seven               (five       (five        (vignette)
                      subswaths)  subswaths)           subswaths)  subswaths)
Incidence angle (°)   15–45       15–45                15–45
At present, SAR data from the ERS, RADARSAT and ENVISAT satellites are widely used in earth observations and monitoring of various natural objects and phenomena. With its fine-scale resolution, a SAR is capable of observing a number of unique oceanic phenomena [117]. These include wind and waves [46,75], ocean circulation [63], internal waves [33], oil spills [40,41], shallow sea bathymetry [6], etc. Imaging radars are also used in a number of land applications, such as the study of soil moisture [84], forestry [97] and the studying and monitoring of urban areas [135]. The use of satellite SAR data for monitoring the Arctic sea ice is briefly described below.
9.1.2 SAR sea ice monitoring in the Arctic
9.1.2.1 The use of satellite SAR for sea ice monitoring
The use of visible images for sea ice monitoring in the Arctic is limited by the lack of light in winter, while the cloud cover precludes sea ice observations in the visible and infrared ranges during approximately 80 per cent of time in summer [18,37,123]. Therefore, the development of remote radar sensing is essential for the polar regions. The first satellite SAR images were acquired by the SEASAT satellite, which produced over 100 passes over the Beaufort Sea on nearly a daily basis for the analysis of sea ice motion and changes in the ice distribution. The SIR-B SAR gave data on the Antarctic sea ice margin for October 1984 [45]. Several SAR surveys were made over the Antarctic and Arctic with the Kosmos-1870 and Almaz-1 SARs in spite of the fact that the satellite orbits precluded coverage of the high-latitude northern and southern regions. The Almaz-1 SAR data were used to support an emergency operation in the Antarctic, when the research vessel Mikhail Somov got stuck in the ice. During this operation, it was possible to detect icebergs and estimate their size, as well as to derive several sea ice parameters, such as the ice extent, the boundaries of stable and unstable fast ice, the ice types (nilas, young and first-year ice), prevailing ice forms, ridges and areas of strongly deformed ice [3].
The SAR images obtained from ERS-1/2 were used in a number of sea ice studies in the Arctic, Antarctic and in the ice-covered seas in different parts of the World Ocean [48,68,76,93,120]. The ERS-1 SAR proved to be a very powerful instrument for sea ice observations. Although the ERS satellite was not designed for operational service, the data were applied in sea ice monitoring in the United States, Canada, Finland and several other countries [18,27].
With the launch of the Canadian RADARSAT in 1995, the first satellite with operational ice monitoring as a prime objective, ice monitoring in the United States, Canada, Greenland, Norway, Finland, Sweden and some other countries entered a new era. The ScanSAR mode with a swath of 450 km wide and with a 100-m resolution at 8 looks allows daily mapping of the whole polar region north of 70° N, and it is used for operational ice services in the Canadian Arctic, the Greenland Sea, the Baltic Sea and other areas with ice [18,48,111]. With a systematic acquisition of ScanSAR images over large Arctic sea ice areas and the use of the RADARSAT geophysical processor, it was possible to estimate the sea ice motion, deformation and thickness from sequential imagery for several years from 1996 [79]. Within 6 h, the US National Ice Center routinely receives ScanSAR images from the Alaska SAR Facility and the Gatineau and Tromsø Satellite Stations, which provides almost total Arctic coverage [18]. The sea ice analysis is made by integrating all available remote sensing and in situ data, using the SUN SPARC and Ultra workstations and a system of satellite image processing. The RADARSAT improved the Ice Patrol's reconnaissance efficiency, although the radar iceberg identification remains problematic even with modern techniques. The RADARSAT ScanSAR wide data provide a daily coverage of the Canadian Arctic, and higher resolution modes are used for sea ice monitoring near the ports, in several selected routes and in the rivers. SAR images are synthesised at the receiving stations Prince Albert and Gatineau and are transmitted to the Ice Centre within 2.5 h to be processed and transmitted to the icebreakers of the Canadian Coast Guard and the department of ice operations for visualisation and analysis. Sea ice monitoring is the most successful online application of the RADARSAT data in Canada, which provides the best combination of geographic coverage and resolution to save about 6 million dollars annually, as compared with airborne radar survey [38].
From February 1996 until the end of 2003, CIS used approximately 25,000 scenes for this purpose [42]. During 2003, a special service carried out iceberg detection and monitoring from satellite SAR imagery, and the International Ice Patrol was the user of this information [42]. Now the RADARSAT ScanSAR imagery is the main data source for sea ice mapping in the Greenland waters. Wind conditions may be an important limitation to the operational use of radar satellite imagery in this area. Small (<50 m across) yet thick ice in concentrations less than 7/10 is frequently undetectable on radar images, as it is obscured by a strong backscatter from the sea waves. Therefore, active research into filtering and enhancement techniques has been undertaken to improve discrimination between ice and water [48,49].
The ENVISAT ASAR imagery with almost the same swath as that of the RADARSAT ScanSAR in the VV- and HH-polarisations is an example of further development of SAR technology. The wide swath mode of the ENVISAT satellite is especially suitable for sea ice monitoring, providing a practically daily coverage of most of the Arctic with a high spatial resolution. In mid-2003, the Canadian Ice Service began to receive the ENVISAT ASAR data to be used as an additional source to the RADARSAT-1 data for routine production of ice charts, bulletins and forecasts [43].
The Nansen Centres in Bergen and St Petersburg, in collaboration with the European Space Agency and Murmansk Shipping Company, have done a series of projects to demonstrate the possibilities of SAR data for sea ice monitoring and for supporting navigation in the Northern Sea Route (NSR) [64–66]. The NSR, which is a major Russian transport corridor in the Arctic, includes routes suitable for ice navigation confined to the entries to the Novaya Zemlya straits and to the meridian north of Cape Zhelaniya in the west and to the region of the Bering Strait in the east. In August 1991, just after the launch of the ERS-1 satellite, SAR imagery was transmitted in near-real time aboard the French vessel L'Astrolabe via the INMARSAT communication system during her voyage from Europe to Japan to help select her route in ice [66]. During the period from July 1993 to September 1994, the European Space Agency provided approximately 1000 SAR scenes for sea ice monitoring. Three specific demonstration campaigns in the NSR in the periods of freeze-up, winter and late summer revealed the ERS SAR capability to map the key ice parameters. The SAR imagery was successfully used to solve tasks of navigation through hard ice. In 1996 the ESA and the Russian Space Agency initiated their first joint project, named ICEWATCH, with an overall objective to integrate SAR data into the Russian sea ice monitoring system to support ice navigation in the NSR [65]. During January–February 1996, an experiment was made aboard the icebreakers Vaygach and Taymyr, when the ERS-1 and ERS-2 SARs were operating in a 'Tandem mission', giving a unique opportunity to have SAR coverage over the same area with only a 1-day interval. However, the narrow 100 km swath of the ERS SAR resulted in a substantial spatial and temporal discontinuity in coverage [64].
In August–September 1997, the RADARSAT ScanSAR data were used to support the icebreaker Sovetsky Soyuz operations in the Laptev Sea [119]. With its wide swath, the ScanSAR provided a much better coverage than the ERS SAR, and the selection of scenes along a given ship route was simplified significantly. The ScanSAR data proved to be a very useful supplement to conventional ice maps and could contribute significantly to the ice information. Starting from April 1998, the ScanSAR and the ERS-2 SAR data were acquired and analysed to support the expeditions aboard the icebreaker Sovetsky Soyuz from Murmansk to the Yenisey Gulf [4] and the EC ARCDEV expedition with the Finnish tanker Uikku and the icebreaker Kapitan Dranitsyn from Murmansk to Sabeta in the Ob River [107]. Throughout the expedition, ScanSAR imagery aboard the icebreaker was used to detect some important ice parameters, such as the ice types, old and fast ice boundaries, flaw polynyas, wide leads, single ice floes and large areas of rough ice, and to solve tactical tasks of navigation. Areas of level and deformed fast ice were identified in the Ob estuary, and an optimal sailing route was selected through the areas with level ice [107]. These expeditions clearly showed that ScanSAR imagery is particularly important for supporting navigation in difficult ice conditions, such as those in the Kara Sea during April–May 1998.
During the summer of 2003, the ENVISAT Wide Swath ASAR imagery was acquired and transmitted aboard the icebreaker Sovetsky Soyuz during her voyage in the Kara Sea, together with visible AVHRR NOAA images. The satellite images and ice maps were displayed in the electronic cartographic navigation system, such that the navigator could see the current icebreaker location overlaid on a satellite image and ice chart in order to select the sailing route.
A series of demonstration campaigns conducted in the NSR since 1991 have shown that high-resolution light- and weather-independent SAR imagery can be effectively used for sea ice monitoring. The sea ice conditions were interpreted and found quite useful for selecting a sailing route. The speed of convoys significantly depends on the ice conditions and varies from about 11–14 knots in polynyas to 4–6 knots in areas with a medium and thick level FY ice and 2 knots in heavily ridged ice [4]. The onboard use of satellite SAR imagery significantly increases the convoy speed in the pack ice (Fig. 9.1). High-latitude telecommunication systems are the main 'bottleneck' in using SAR imagery aboard the icebreakers operating in the NSR. The imagery must be averaged and compressed to about 100–200 kB for digital transmission. During the first half of 2004, the ENVISAT ASAR imagery was used for sea ice monitoring of the NSR on an experimental basis. Preliminarily processed images were transferred by e-mail to the Murmansk Shipping Company and then were transmitted via the TV channels of the Orbita system to the nuclear icebreakers Yamal, Sovetsky Soyuz, Arktika, Vaygach and Taymyr. The icebreaker navigators could interpret
Figure 9.1 The mean monthly convoy speed in the NSR changes from V0 (without satellite data) to V1 (SAR images used by the icebreaker's crew to select the route in sea ice). The mean ice thickness (h_i) is shown as a function of the season. (N. Babich, personal communications)
them, adequately selecting the easiest sailing through level thin ice and along leads and polynyas with prevailing nilas and grey ice. As a result, the speed of convoys increased by 40–60 per cent on average.
9.1.2.2 Interpretation of satellite SAR imagery of sea ice
A successful application of SAR imagery to support navigation required the ability to recognise the major sea ice parameters and processes from them. Characteristic signatures of major sea ice types and features in ERS, RADARSAT and ENVISAT SAR imagery were described and validated with subsatellite data during field campaigns.
The major stages of ice development described in the WMO Ice Nomenclature include new ice, nilas, young, first-year and old ice. The sea ice recently formed on the water surface may have dark and light SAR signatures. The grease ice that represents an agglomeration of frazil crystals into a soupy layer precludes the formation of short waves (Fig. 9.2(a)) and can be detected as dark stripes and spots among bright SAR signatures of wind-roughened water surface (Fig. 9.2(b)). Slush and shuga have a high backscatter coefficient due to their rough surface and are seen in the SAR images as bright elongated stripes. Nilas represents an elastic ice crust less than 10 cm thick, bending under the wave action (Fig. 9.3); it has a low backscatter coefficient and a dark SAR signature (Fig. 9.4). Young ice represents the next stage of development; it is subdivided into grey ice and grey–white ice with thicknesses of 10–15 cm and
Figure 9.2 (a) Photo of grease ice and (b) a characteristic dark SAR signature of grease ice. © European Space Agency
Figure 9.3 Photo of typical nilas with finger-rafting
15–30 cm, respectively. During winter, young ice is quite common in polynyas and fractures. It has a relatively high backscatter coefficient [102] and can be distinguished from both nilas and first-year ice due to its bright SAR signature (Fig. 9.4). The first-year ice, which is subdivided into thin (30–70 cm), medium (70–120 cm) and thick (over 120 cm) first-year ice, has a typical dark tone. It is difficult to separate thin, medium and thick first-year ice using only their SAR signatures, so knowledge of sea ice conditions in different Arctic regions is used to partly solve this problem. Old ice that has survived melting during at least one summer is often reliably discriminated from first-year ice due to its brighter tone, rounded floes and distinctive texture (Fig. 9.5). When old and first-year ice breaks into small ice floes with size less than the SAR spatial resolution, their separation is impossible. SAR signatures of second-year and multiyear ice are quite similar, and it is hard to distinguish these types of ice [102].
The backscatter from the ice of the same age depends on its prevailing forms (floe size) and surface roughness. Pancake ice has a rough surface due to characteristic raised pancake rims at the plate edges that lead to a high backscatter and a bright tone in a SAR image (Fig. 9.6). Areas of small ice floes unresolved by radar may have a specific bright SAR signature. When the size of ice floes greatly exceeds the radar spatial resolution, they can be detected in SAR imagery. Single ice floes of even relatively small size can be detected from the dark tone on a bright radar image of wind-roughened water surface, whereas their detection in calm water surface is more difficult. The analysis of ice floes becomes complicated when they touch each other [120,128]. The backscatter of deformed ice is much higher than that of level ice,
Figure 9.4 A RADARSAT ScanSAR Wide image of 25 April 1998, covering an area of 500 km × 500 km around the northern Novaya Zemlya. A geographical grid and the coastline are superimposed on the image. The annotations mark nilas, young ice and first-year ice. © Canadian Space Agency
therefore, areas of weakly, moderately and strongly deformed ice are detectable in ERS, RADARSAT and ENVISAT SAR imagery (Fig. 9.7). Identification of strongly deformed ice hazardous to navigation is particularly important.
Detection of open water areas among sea ice, such as fractures, leads and polynyas, is necessary for selection of an icebreaker's route. Shore and flaw polynyas can be detected reliably, and their width, as well as the type of sea ice, can be determined. For example, the flaw polynya along the western coast of Novaya Zemlya is clearly evident in RADARSAT ScanSAR imagery (Fig. 9.4), together with a number of fractures covered with nilas (dark tone) or young ice (light tone). It was found that the detection of 100-m wide leads in compact first-year ice is feasible in ScanSAR images.
In winter, fast ice covers large areas in the coastal zones of the Eurasian Arctic Seas. The SAR signature of fast ice is similar to that of drifting ice, and it changes with the surface roughness and, to some degree, with salinity. Level fast ice has a uniformly dark tone, and its boundary can often be identified in SAR images (Fig. 9.7).
The ice edge presents a boundary between open water and sea ice of any type and concentration; it may be both compact and diverged, separating open ice from water.
Figure 9.5 A RADARSAT ScanSAR Wide image of 3 March 1998, covering the boundary between old and first-year sea ice in the area to the north of Alaska. © Canadian Space Agency
The ice edge may be well-defined or diffuse, straight or meandering, with ice eddies and ice tongues extending into open water [67]. Ice tongues at the ice edge in the Barents Sea are evident in ENVISAT ASAR imagery (Fig. 9.8). With frequent SAR images, one can investigate the ice edge development in much detail [120]. The sea ice concentration and ice edge location are the most important parameters during the summer; they can be derived from SAR images together with large ice floes, stripes of ice in water, ice drift vectors and areas of convergence/divergence [119].
A high-resolution SAR is considered to be an optimal remote sensing instrument for detection of icebergs. Its backscatter coefficient significantly exceeds that of sea ice and calm sea surface; icebergs that are much larger than the radar spatial resolution are evident as bright spots. In some cases, iceberg shadows and tracks in the sea ice can be detected [125]. Identification of smaller icebergs is complicated by speckle-noise of SAR systems. Areas of iceberg spreading in Franz Josef Land, east of Severnaya Zemlya, and in the northwest Novaya Zemlya have been identified from ERS and RADARSAT SAR data. ERS-2 SAR imagery of Severnaya Zemlya (Fig. 9.9) shows a number of icebergs as bright spots in the Red Army Strait.
Recent studies have shown that the sea ice classification can be improved by using the ENVISAT alternating polarisation mode. Cross-polarisation will improve
Figure 9.6 (a) Photo of a typical pancake ice edge and (b) a characteristic ERS SAR signature of pancake ice. A mixed bright and dark backscatter signature is typical for pancake and grease ice found at the ice edge. © European Space Agency
the potential for distinguishing ice from open water, which can sometimes be difficult to do only with HH or VV polarisation. In addition to the backscatter variation in single polarisation data, a proper combination of VV and HH dual polarisation and cross-polarisation imagery provides additional information on the sea ice parameters [54,101,122].
Figure 9.7 A RADARSAT ScanSAR Wide image of 8 May 1998, covering the south-western Kara Sea. The annotations mark moderately hummocked ice, strongly hummocked ice, fast ice and open water. © Canadian Space Agency
Some of the sea ice parameters cannot be found from SAR imagery. For example, it is quite difficult to distinguish thin, medium and thick first-year ice, or second-year and multiyear ice types. It is impossible to determine the snow depth on sea ice and some other parameters. In some cases large ridges and narrow leads covered with grey ice may have similar SAR signatures.
9.1.2.3 Conclusions
The studies have clearly shown that a satellite SAR is a powerful instrument for sea ice monitoring, and SAR data are widely used for this purpose in countries with a perennial or seasonal ice cover. Modern SARs provide a practically daily coverage of the Arctic regions. The most important sea ice parameters can be derived from SAR imagery, and their use increases the safety of navigation and speeds of convoys in severe Arctic ice conditions.
9.1.3 SAR imaging of mesoscale ocean phenomena
The SAR imagery allows a global view of most oceanographic phenomena: waves, currents, fronts, eddies and slicks reveal hidden features (such as internal waves and bottom topography). Although most of the imaging mechanisms are now well understood, there are still gaps in our knowledge of certain details. Some aspects still remain obscure, requiring further research efforts.
A high spatial resolution and sensitivity of modern satellite SAR systems makes it possible to observe mesoscale and small-scale features of the sea surface. This allows the use of SAR imagery for investigation of wind speed over the open ocean and coastal zone, surface roughness characteristics and surface polluted zones of different nature. SAR data help to monitor ocean dynamic processes, frontal boundaries, convergence zones, etc.

Figure 9.8 An ENVISAT ASAR image of 28 March 2003, covering the ice edge in the Barents Sea westward and southward of Svalbard. © European Space Agency
The normalised radar cross-section (NRCS) is a measure of intensity of the echo signal. In the range of the microwave frequencies, a radar is sensitive to small perturbations of the ocean surface. The NRCS is directly related to the sea roughness, that is, to statistical properties of the sea surface. This allows a radar to detect a larger number of near-surface phenomena than any other remote sensing tool. On the other hand, this makes the radar data extremely hard to interpret, especially quantitatively, and requires the use of sophisticated models.
When dealing with the ocean, one has to consider surface velocities. The motion associated with travelling waves affects significantly the SAR imaging mechanisms. In particular, an azimuthal image shift is due to the motion of the target in the range direction. This radial motion has little effect on the pulse compression but is intense enough to have an influence on the aperture synthesis. The azimuthal shift and reduction in the signal amplitude are associated with the motion of the target in the range direction. Wave motion in the azimuthal direction is also a source of image degradation but is of less importance. It is known as azimuth defocusing and is due to the difference between the Doppler history of the target and the reference signal.

Figure 9.9 An ERS-2 SAR image of 11 September 2001, covering the Red Army Strait in the Severnaya Zemlya Archipelago. The annotations mark an outlet glacier and the Red Army Strait. © European Space Agency
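The range-motion effect described above can be quantified by the familiar "train off the track" displacement: a scatterer moving with radial velocity u_r is imaged shifted in azimuth by roughly (R/V)·u_r. A minimal sketch with assumed ERS-like numbers (R, V and u_r below are illustrative, not values from the book):

```python
# Hedged sketch of the azimuthal image shift of a moving scatterer. For a
# target with radial (range-directed) velocity u_r, the SAR image is displaced
# in azimuth by approximately dx = (R / V) * u_r, where R is the slant range
# and V the platform velocity. The numbers below are assumptions.
def azimuth_shift(slant_range_m, platform_speed_ms, radial_speed_ms):
    """Azimuthal displacement of a scatterer moving in the range direction."""
    return slant_range_m / platform_speed_ms * radial_speed_ms

# R ~ 850 km, V ~ 7.5 km/s, and a 1 m/s radial component of wave orbital motion:
print(round(azimuth_shift(850e3, 7.5e3, 1.0), 1))  # ~113 m shift in azimuth
```

A shift of this size, varying periodically along the wave profile, is what produces the velocity-bunching modulation discussed later in this section.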
A satellite-borne SAR can monitor large- and small-scale structural fluctuations through the description of the energy distribution of the ocean waves in the spectral domain. The latter is formally described by the wave action balance equation for the spectrum evolution under the combined influence of wind forcing, dissipation, resonant wave–wave interaction, the presence of surfactants and surface current velocity gradients. The possibility of identifying oceanic processes is directly related to changes in the surface scattering characteristics which depend on these processes. For this reason, the detection becomes impossible when no wind is present.
When these phenomena are known, an imaging model can be used to derive the wave spectrum from the image spectrum. Unfortunately, the mechanisms responsible for the spectrum modulation are not fully understood. The analysis of a SAR image is always complicated by interpretation ambiguity. The reason is that one and the same NRCS contrast may be caused by the variation in different physical parameters. Moreover, one and the same phenomenon may manifest itself in some observation conditions and not in others. One of the generally recognised features of radar imagery is the fact that surface phenomena are more clearly observed in the horizontal polarisation than in the vertical one.
A simultaneous study of synchronous SAR images and other data sources (e.g. infrared and visible images, weather maps) helps in getting a correct interpretation. It should be added that since the influence of current velocity gradients, sea surface temperature, surfactant concentration and other environmental parameters on the wind wave spectrum depends upon the wavelength, a radar using a combination of different wavelengths may be quite useful in revealing the mechanisms responsible for the NRCS contrast.
A number of mechanisms have been suggested which are responsible for manifestation of dynamic ocean phenomena in radar images. It is assumed that the wave–current interaction reveals most processes having the scale of the current non-uniformity of about 0.1–10 km. The following phenomena fall into this category: internal waves, current boundaries, convergence zones, eddies and deep-sea convection. The degree of the ocean front manifestation in a SAR image is strongly determined by the atmospheric boundary layer and by its transformation over the sea surface temperature non-uniformities. In any case, the comparative significance of a mechanism depends on the whole set of factors, including the observed process, wind conditions, regional specificity and unknown circumstances (e.g. Reference 16).
Below we give several examples of how different ocean phenomena may become apparent in SAR images. The ERS-2 SAR image in Fig. 9.10, taken on 24 June 2000 over the Black Sea (east of the Crimea peninsula), illustrates the manifestation of temperature fronts, zones of upwelling and slicks of natural films. The fronts are clear from both the bright and dark departures from the background NRCS. As was mentioned before, a correct image interpretation needs additional information. Figure 9.11 shows the sea surface temperature (SST) from the NOAA AVHRR data a few hours after the ERS-2 passage. It gives the temperature distribution helpful in image interpretation. The spatial resolution of the infrared image is 1 km as compared with 100 m provided by a SAR. An upwelling is clearly visible in the upper right corner as a black area partially covered with clouds (with SST about 16 °C). The black square is the position of the SAR image and the black curved lines are the distinctive features taken from the SAR image. There appears to be a remarkable correlation between the features in the SST and NRCS fields. The insignificant shift is due to the difference in the time of imaging.
The dark region in the upper left corner of the SAR image shows upwelling, when strong winds force the warm water of the upper layer away from the shore and the cold deep water comes up from below. Upwellings are known to occur quite often near the region of the Crimean shoreline. A patch of cold water manifests itself through a modulation of the so-called friction velocity. This quantity may be described as 'effective wind' because it is the friction velocity that determines the energy flux from the wind to the waves. The stratification of the atmospheric boundary layer over cold water is more stable than over the surrounding warm water. This results in a lower friction velocity, which means that the wind of the same speed (at a given height) would generate less waves in cold water than in warm water. Surface roughness of the upwelling zone is decreased, reducing its NRCS. Other conditions being equal, cold water will appear darker than warm water on a radar image (e.g. Reference 16). This feature allows a radar to sense the temperature non-uniformities of the sea surface in general.
There are dark stretched features all over the SAR image. The accumulation of surfactants is assumed to be the cause of these areas of low backscatter. It may take place in regions of high biological activity. When natural (organic) substances reach the surface, they tend to be adsorbed at the air–water interface and remain there as a microlayer. Waves travelling across a film-covered surface compress and expand the film, giving rise to surface tension gradients, which lead to vertical velocity gradients within the surface layers. This induces viscous damping and attenuation of short Bragg waves. As a result, the scattered signal returning to the SAR is very much reduced. Natural films are usually dissolved at wind speeds above 7 m/s. Because currents easily redistribute them, such slicks often configure into spatial structures related to the surface current circulation pattern.

Figure 9.10 An ERS-2 SAR image (100 km × 100 km) taken on 24 June 2000 over the Black Sea (region to the east of the Crimea peninsula) and showing upwelling and natural films
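The short "Bragg waves" damped by such films satisfy the first-order resonance condition λ_B = λ_radar / (2 sin θ), which ties the radar wavelength and local incidence angle to the resonant water wavelength. A quick sketch (the C-band wavelength and incidence angle below are typical ERS-like values, used here as assumptions):

```python
import math

# Sketch of the first-order (Bragg) resonance condition for the short surface
# waves that dominate microwave backscatter:
#   lambda_B = lambda_radar / (2 * sin(theta)),
# where theta is the local incidence angle. The C-band wavelength and the
# incidence angle below are assumed, ERS-like values.
def bragg_wavelength(radar_wavelength_m, incidence_deg):
    """Resonant water wavelength for first-order Bragg scattering."""
    return radar_wavelength_m / (2.0 * math.sin(math.radians(incidence_deg)))

print(round(bragg_wavelength(0.0566, 23.0), 4))  # ~0.0724 m: cm-scale ripples
```

Because the resonant ripples are only centimetres long, even a thin viscous film that damps them is enough to darken the image locally.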
Figure 9.11 SST retrieved from a NOAA AVHRR image on 24 June 2000.

Figure 9.12 illustrates how very long ocean waves, the swell, are imaged by a SAR. This image was obtained on 30 September 1995 over the Northern Sea; the land on the right is the Norwegian coast.
We have pointed out that ocean surface roughness of the centimetre scale is due to the local wind (wind stress). Small-scale roughness is modulated by large-scale structures (longer waves or swells). Three mechanisms are considered to be responsible for the longer wave imaging: the tilt modulation, the hydrodynamic effect and velocity bunching. The first mechanism is that long waves tilt the resonant ripples so that the local incident angle changes, modifying the backscatter. The hydrodynamic interaction between the long waves and the scattering ripples leads to the accumulation of scatterers on the up-wind face of the swell. This effect is greatest (as for the tilt modulation) for range travelling waves, and there is no modulation if the ripples are perpendicular to the swell. These first two mechanisms, responsible for swell manifestation, reveal themselves in both synthetic and real aperture imagery. The latter – the so-called velocity bunching effect – is responsible for swell manifestation in the case of long waves travelling close to the azimuthal direction; this effect is observable only in SAR images.
A SAR creates a high-resolution image by recording the phase and amplitude of the electromagnetic radiation reflected by the scatterers and by processing it with a compression filter. The filter is designed to match the phase perfectly for a static target. For the dynamic ocean surface, the motion of each scatterer within the scene distorts the expected phase function with two important implications. First, the linear component of the target motion shifts the azimuth of the imaged location of each target. This leads to a strong wave-like modulation in the SAR image due to a periodic forward and backward shift of the scatterer positions. This mechanism is exactly what is known as the velocity bunching. The other implication of the distorted phase function is the degradation of the image azimuthal resolution due to higher order components of the target motion (e.g. Reference 56).

Figure 9.12 A fragment of an ERS-2 SAR image (26 km × 22 km) taken on 30 September 1995 over the Northern Sea near the Norwegian coast and showing swell
The SAR image enables one to study swell transformation as it approaches the coast. The wavelength decreases as the swell comes to shallow water, so the wavelength is about 350 m at point A while near the coast at point B it is only 90 m (Fig. 9.12). Another observable feature is the swell refraction on the sea bottom relief. This effect is due to the fact that the wave velocity decreases with decreasing depth. The wave crests rotate so as to be parallel to the isobaths. It is clearly visible at points B and C that the swell goes parallel to the curved shoreline, though initially it was not. Finally, at point D we can see an interference pattern produced by two swell systems going in approximately perpendicular directions.
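The shortening from about 350 m at point A to about 90 m at point B is consistent with linear wave theory: the wave period is conserved as the swell moves shoreward, while the dispersion relation ω² = gk·tanh(kh) forces the wavelength down with the depth. A rough sketch (the depths below are illustrative assumptions; only the 350 m deep-water wavelength comes from the text):

```python
import math

# A rough consistency check of the swell shortening described above, using
# linear wave theory. The dispersion relation omega^2 = g * k * tanh(k * h)
# links the wave number k = 2*pi/L to the water depth h, while the wave
# period is conserved as the swell moves shoreward. The depths used here are
# illustrative assumptions.
G = 9.81  # gravitational acceleration, m/s^2

def wavelength(period_s, depth_m, n_iter=200):
    """Wavelength of a linear gravity wave of a given period at a given depth."""
    omega = 2.0 * math.pi / period_s
    k = omega * omega / G  # deep-water first guess
    for _ in range(n_iter):  # damped fixed-point iteration on the dispersion relation
        k = 0.5 * (k + omega * omega / (G * math.tanh(k * depth_m)))
    return 2.0 * math.pi / k

# A 350 m swell in deep water has a period of about 15 s:
period = math.sqrt(2.0 * math.pi * 350.0 / G)
print(round(wavelength(period, 1000.0)))  # 350 (deep water)
print(round(wavelength(period, 3.8)))     # 90 (an assumed ~4 m depth near shore)
```

The refraction at points B and C follows from the same relation: the phase speed ω/k decreases with depth, so the crests rotate toward the isobaths.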
Figure 9.13 shows the manifestation of the ocean features mentioned above and some new ones. This SAR image was acquired on 28 September 1995 over the Northern Sea.

Figure 9.13 An ERS-2 SAR image (100 km × 100 km) taken on 28 September 1995 over the Northern Sea and showing an oil spill, wind shadow, low wind and ocean fronts
The first distinctive feature, marked as 'A' in Fig. 9.13, can definitely be identified as an oil spill. Oil slicks are seen as patches of different shapes with very low NRCS and relatively sharp borders. Quite often, the spill source (ship or oil drill platform) is visible nearby. As compared to natural films, oil films have a higher viscosity, damping short waves more effectively and remaining observable at higher winds when natural slicks would disappear. Another characteristic to distinguish between oil and natural films is that the latter nearly never appear as single localised features but tend to cover vast areas of intricate patterns produced by currents. Anthropogenic oil spills on the sea surface may originate from leaks from ships, offshore oil plants and shipwrecks. In the case of shipwreck, a SAR can contribute to oil spill detection and monitoring, keeping track of the drift and spread of the slicks.
Usually, the shorter the radar wavelength, the more intense is the backscattering reduction due to oil presence. The reduction in the radar backscattering also depends on the incidence angle. The optimum range of angles is defined by the radar wavelength. One of the strongest obstacles to oil spill detection is the state of the sea. At low (2–3 m/s) wind speeds, SAR images of the ocean become dark because the Bragg scattering waves are not present. In this case almost no features can be distinguished on the sea surface. At high winds, most kinds of oil are dispersed into the water column by the wind waves and also become unobservable (e.g. Reference 39).
The second feature in Fig. 9.13 ('B') reveals a clearly outlined dark zone near the shore which seems to have the same direction as the dominating wind. The mountainous coastal landscape and the sharp outline allow attributing this feature to wind sheltering by land. It can be seen that the NRCS becomes larger as the distance from the shore along the wind direction increases and the sea roughness becomes better developed. The dark areas 'C1' and 'C2' have blurred contours and may be interpreted as low wind zones.
Besides this, one can see numerous manifestations of the current boundaries ('D1', 'D2', 'D3'). At moderate wind speeds (3–10 m/s), the SAR is capable of revealing the current boundaries, meanders and eddies. The NRCS variation in the vicinity of the current boundary/front is associated with several phenomena, including changes of the stability of the atmospheric boundary layer, wave–current interaction and surfactant accumulation. The exact view of the ocean front on a radar image is affected by many factors: the radar parameters, the observation geometry, the wind conditions, surface current and temperature gradients, etc. Nevertheless, some simple rules of thumb exist. One of them was already mentioned: cold water looks darker than warm water. Another is that convergent current fronts usually appear bright, while divergent fronts appear dark. It is assumed that the features 'D1' and 'D3' are the ocean fronts where the non-uniform current distribution is combined with SST changes. Lack of additional sources of information (e.g. IR images) retains the interpretation ambiguity since a dark area can also be associated with low winds.
Sometimes, atmospheric phenomena may be observable on SAR images, when they affect the near-surface wind. Depending on the observation conditions, such phenomena increase or decrease the radar backscattering by intensifying or damping the Bragg waves. One example is present in the ERS-1 SAR image of Fig. 9.14, taken on 29 September 1995 over the Northern Sea. There are several rain cells of different sizes scattered throughout the scene. The falling rain drops entrain the air to form a downward flux of cold air. When hitting the ocean surface, the flux transfers cold air mass away from the cell centre to form a wind squall – a line of abrupt increase in the wind speed. The rain cells become visible because the background wind at their boundaries is summed with the wind due to the rain cold air motion. As a result, the wind squall on the lee side of the cell increases the background wind, decreasing it on the opposite side. Thus, one half of the rain cell becomes brighter than the background while the opposite side becomes darker. The distinct boundaries between the wind squalls and the surrounding background water are called squall lines. When the rain is heavy, the centre of a rain cell may appear dark because the falling drops create a turbulence in the upper water layer, damping the Bragg waves. Such phenomena are typical of subtropical regions but may be encountered anywhere else [62].
Figure 9.15 shows an ERS-2 SAR image taken on 30 November 1995 over the Northern Sea. Points 'A', 'B' and 'C' are examples of internal waves on the SAR
Figure 9.14 An ERS-1 SAR image (100 km × 100 km) taken on 29 September 1995 over the Northern Sea showing rain cells
imagery. Internal waves are among the most interesting ocean features revealed by SAR imagery. At the beginning of SAR history their detection was entirely unexpected. At present, they are found on SAR images in many regions of the World Ocean at various wind speeds and water depths. They appear as dark crests (troughs) against a lighter background or as light ones against a dark background. The crests always occur as packets called trains. In this image, three trains can be observed. Often, internal waves correlate (run parallel) with the bottom topography, when they are caused by the interaction between the tidal currents and abrupt topographic features. The distance between individual dark and light bands varies from several hundred metres to a few kilometres, decreasing from the leading wave to the trailing edge (e.g. [126]).
Orbital motions induced by an internal wave train generate an intermittent pattern of convergent and divergent zones on the sea, which moves with the phase velocity of the internal wave. Convergent zones are generated behind the internal wave crest and divergent zones are behind the troughs. It is these zones that make internal waves visible on radar imagery. There are a few commonly accepted explanations about
Figure 9.15 An ERS-2 SAR image (18 km × 32 km) taken on 30 September 1995 over the Northern Sea showing an internal wave and a ship wake
how this may happen. According to one point of view, surfactants are accumulated in the convergence zones, which results in short wave damping and makes these zones appear dark on radar images. Another theory states that convergence zones appear bright because these are zones of enhanced roughness due to intensified wave breaking there. The question of which imaging mechanism dominates and under what conditions is still open.
The next distinctive feature clearly observable on the image ('D') is a ship wake. The ship itself is seen as an extremely bright spot because of many metallic structures that serve as corner reflectors. The wake is a narrow V-shaped feature associated with the ship's track. It appears on radar images only in low wind conditions due to the short lifetime of the Bragg waves and the common ship speeds. The major result of the ship movement is the appearance of the stern wake. This turbulent wake damps
the Bragg waves, producing an area of dark return, which is sometimes surrounded by two bright lines. The lines of high backscatter originate from the Bragg waves induced by vortices from the ship's hull. However, there is generally a large diversity of ship wake patterns, including combinations of dark and bright stripes on the SAR images and depending on the observational and sea conditions.
Thus, during the last decades the role of SAR data in earth observations has increased considerably, and the SAR has become a major remote sensing tool for environmental monitoring. Improvement of image interpretation techniques, automated data interpretation, improvement of high-latitude telecommunication systems and a convenient presentation of the information products to the user are necessary for further development of SAR earth monitoring.
9.2 The application of inverse aperture synthesis for radar imaging
The imaging techniques we have discussed in Chapters 5 and 6 did not use holographic or tomographic principles but were developed within a purely radar approach in the United States about 40 years ago. The first device was designed and constructed by the Westinghouse company and represented a narrowband radar with a discrete variation of the carrier frequency and a synthesised spectrum. At about the same time, the Willow Run Laboratory in the United States initiated work on constructing a radar for aircraft imaging; the model radars were tested on an open test ground. Somewhat later, two experimental types of radar were designed for spacecraft identification. One was constructed at the US Air Force Research Center in collaboration with the General Electric Company and the Syracuse Research Corporation (the design of the data processor). The other type of radar was made by the Aerospace Corporation; it had the carrier frequency of 94 GHz, the radiation bandwidth of 1 GHz and the pulse base of 10^6.
The first quality images of low-orbit satellites were obtained by the ALCOR radar with the range resolution of 50 cm in the early 1970s. Further efforts by the designers (the Lincoln Laboratory, the Massachusetts Institute of Technology and the Syracuse Research Corporation) to improve this system within a global program for space object identification resulted in the creation, in the late 1970s, of a long-range imaging radar (LRIR) [20,52,83] with better characteristics (Table 9.6).
The major advantages of this radar system are a high frequency stability, a pulse repetition rate higher than the maximum Doppler frequency of an echo signal, and a controlled repetition rate necessary for time discretisation of transmitted and received pulses. Besides, an LRIR system provides imaging of targets on far-off orbits (including geostationary orbits) and with high rotation rates.
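The requirement that the repetition rate exceed the echo's Doppler content can be illustrated numerically: a target rotating at ω rad/s with scatterers out to a cross-range L spreads the echo over a Doppler band of about 4ωL/λ, which the PRF must exceed. A sketch with assumed example numbers (not LRIR design data):

```python
def doppler_bandwidth_hz(wavelength_m, rotation_rate_rad_s, max_cross_range_m):
    """Doppler spread of a rotating target: scatterers at cross-range +/-L have
    radial speed omega*L, giving f_d = +/- 2*omega*L/lambda, i.e. a band of
    4*omega*L/lambda."""
    return 4.0 * rotation_rate_rad_s * max_cross_range_m / wavelength_m

# Assumed: 3 cm wavelength, 0.05 rad/s apparent rotation, scatterers out to 5 m
b_d = doppler_bandwidth_hz(0.03, 0.05, 5.0)   # ~33.3 Hz; the PRF must exceed this
```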
The Doppler-range method of echo signal processing for 2D imaging of the Russian orbiting stations Salut-7 and Kosmos-1686 was implemented in a radar with a 1 GHz probing pulse bandwidth [91]. A theoretical and experimental investigation of the imaging of stabilised low-orbit satellites was described in Reference 124, using narrowband probing pulses. The processing algorithms were based on holographic principles. The authors believe that current interest in microwave holography is due to the fact that many available radar systems can acquire a new function – 2D imaging
Table 9.6 The LRIR characteristics

Antenna type (primary reflector): Paraboloidal
Aperture shape: Circular
Aperture diameter (m): 36.6
Wavelengths (GHz): K-band
  Narrowband mode (NBM): 5.5–6.5
  Wideband mode (WBM): 9.5–10.5
Aperture field distribution: Cosine
Sidelobe level (dB): −22.4
Polarisation (in transmission and in reception): Circular
Frequency band (GHz): 1
Pulse duration (µs): 250
Transmitter pulse power in modes 1 and 2 (MW): 0.5, 0.8
Average power (kW): 200
Secondary processing in WBM: Coherent integration
Signal modulation: Linear frequency type
Range gate (m) (frequency filter band (MHz)): 30, 60, 120 (0.8, 1.6, 3.2)
Pulse repetition rate (Hz): 1600 (determined by range measurement unambiguity)
Pulse compressibility: 250,000
Sidelobe level of matched filter (in range) (dB): 32
Interpulse instability: 3–2
Impulse filling (τ_imp/T_rep) (%): 50
Way of target tracking: Single pulse
Reception loss (dB): 7.9
Aim of the mode:
  NBM: detection, tracking, range measurement
  WBM-1: target classification
  WBM-2: target classification (from images)
Possible radar frequency extension (GHz): Up to 40
Radar location: Westford, USA (Lincoln Laboratory, space survey facilities)
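Two entries of Table 9.6 can be cross-checked from first principles: a 1 GHz linear-FM band gives a slant-range resolution of c/2B, about 15 cm, and a 250 µs pulse with that band has a time-bandwidth product, i.e. a compression ratio, of 250,000, matching the 'pulse compressibility' row. A sketch of the arithmetic:

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    """Slant-range resolution after pulse compression: dR = c / (2B)."""
    return C_LIGHT / (2.0 * bandwidth_hz)

def compression_ratio(pulse_s: float, bandwidth_hz: float) -> float:
    """Time-bandwidth product of an LFM pulse equals its compression ratio."""
    return pulse_s * bandwidth_hz

dr = range_resolution_m(1e9)             # ~0.15 m for the 1 GHz LRIR band
ratio = compression_ratio(250e-6, 1e9)   # 250,000, as in the table
```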
of space targets – without being radically modernised. An echo signal in such radars is processed by inverse synthesis of microwave holograms owing to the target angle variation during the satellite motion along its orbit. The algorithm uses an original technique for synthesising a 2D image, in the view-flight path plane, from 1D images obtained along a lengthy target path. The summation of partial 1D images produces
intensity maxima at the beam interception points corresponding to various angles of the target scatterers. A numerical simulation has shown that this algorithm provides a resolution of about 10 cm for the viewing time of about 2 min.
The experiments on testing this type of radar used a radar interferometer consisting of three antennas of 2.5 m in diameter with a base of 500 m [124]. The antennas were co-phased to provide a coherent transmission and reception of quasi-monochromatic signals with a 4 cm wavelength. The radiation power was 75 kW. The experiment included several observation runs of the Progress spacecraft during its departure from the Mir orbiting station. An optimal 2D image was obtained from 55 1D partial images, each having the synthesis time of about 2 s. The time step between consecutive images was 1.1 s, during which the line of sight was rotated by about 0.01 rad. It appeared that some of the scatterers of this nearly cylindrical target were not resolved well enough, the boundaries between them were smeared, and the resulting image represented a bright surface. Still, the image allowed evaluation of the target's dimensions consistent with the real ones.
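The resolution attainable in such inverse-synthesis experiments follows from the idealised relation δ_cr = λ/(2Δθ), where Δθ is the angle through which the line of sight turns during synthesis. A sketch using the Progress-experiment numbers quoted above (and ignoring migration and focusing losses, one reason the real image was coarser than the ideal figure):

```python
def cross_range_resolution_m(wavelength_m: float, aperture_angle_rad: float) -> float:
    """Idealised ISAR cross-range resolution: delta = lambda / (2 * d_theta)."""
    return wavelength_m / (2.0 * aperture_angle_rad)

wavelength = 0.04   # 4 cm, as in the experiment
step = 0.01         # ~0.01 rad line-of-sight rotation between partial images

single = cross_range_resolution_m(wavelength, step)         # one partial image: 2 m
combined = cross_range_resolution_m(wavelength, 55 * step)  # all 55 images: ~3.6 cm
```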
Therefore, the available radars designed for entirely different applications can be successfully used for spacecraft imaging. For example, the image reconstruction algorithms can operate on the basis of a phase-measuring device originally designed for coordinate measurements. It is important to emphasise that inverse aperture synthesis is also employed successfully in radar viewing of planets. In particular, a pioneering experimental imaging of Venus was carried out by the specialists at the Jet Propulsion Laboratory, California Institute of Technology, USA.
9.3 Measurement of target characteristics
Problems involving the analysis of radar performance require a priori information about the scattering properties of a target. These properties are described by a whole combination of independent radar responses to the target of interest. Today, experimental and theoretical investigation of responses is a rapidly developing area of radar science and technology. It involves the search for new forms of description of radiation scattering by various targets and novel methods of their measurement [11,12,30,90,138].
The key position among the many radar responses is occupied by the scattering matrix, which characterises the transformation of the amplitude, phase and polarisation of an arbitrary planar monochromatic wave scattered by a small-size (point) object. The knowledge of the scattering matrix is important for the computation of dynamic and static responses for many applications: the justification of the radar design, the development of methods and devices for antiradar measures, the designing of processing algorithms, etc. Besides, a scattering matrix is necessary to go over to responses which describe the target's scattering of probing pulses having complex spectra [138]. It is also indispensable in the computation of local responses to find the scattering properties of individual parts of a target [12].
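As a concrete illustration, the scattering matrix maps the incident polarisation (Jones) vector to the scattered one, E_s ∝ S·E_i. The two matrices below are standard textbook idealisations in a linear H/V basis (a sphere or flat plate, and a 45°-tilted dipole), not measured data:

```python
import numpy as np

def scattered_field(S: np.ndarray, e_incident: np.ndarray) -> np.ndarray:
    """Apply a 2x2 polarisation scattering matrix to an incident Jones vector."""
    return S @ e_incident

S_sphere = np.eye(2)                 # co-polar only: no depolarisation
S_dipole_45 = 0.5 * np.ones((2, 2))  # 45-degree dipole: full cross-coupling

e_h = np.array([1.0, 0.0])           # horizontally polarised illumination
echo_sphere = scattered_field(S_sphere, e_h)     # -> [1.0, 0.0]
echo_dipole = scattered_field(S_dipole_45, e_h)  # -> [0.5, 0.5]
```

Measuring all four complex elements of S over aspect angle is the raw material from which the dynamic and static responses mentioned above are computed.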
Theoretically, the exact values of matrix elements can be found only for targets of simple geometry (spheres, cylinders, etc.). So a common way of determining radar
responses is by measuring the physical characteristics. For small-size targets, such measurements are commonly made during flight and ground tests. Natural flight tests provide the most complete and reliable data on the target in question but they are very costly and need special equipment and testing conditions.
Radar responses are often measured in special setups on open and closed test grounds. Open tests are carried out either with real targets or their models of natural size. This allows a detailed study of the scattering characteristics and their behaviour under different conditions. However, the response data are often affected by the current weather conditions, background signals from the surrounding objects, natural and artificial noise, etc. Common limitations of an open test ground are the lack of an exact frame of reference for the angular position of the target under study, poor coupling between normally polarised measurement channels, as well as a low data accuracy because of the background effects. Moreover, a measurement run for one target takes a long time, from 4 to 6 h, and is quite costly because of the necessity to maintain the test equipment and facilities.
Closed tests are made in an anechoic chamber (AEC), whose inner walls are covered with a microwave-absorbing material, allowing simulation of wave propagation in free space [98]. But two conditions are to be met in such experiments: the probing wavefront is to be planar near the target and the background noise is to be kept below a permissible level. The measurements made in an AEC do not have the limitations of an open ground and take 4–5 times less time for one run. Such chambers have found a wide application because they are screened from outside noise, providing an electromagnetic compatibility. Since the electromagnetic, mechanical and climatic conditions in an AEC can be kept constant for a long time, the measurements can be readily automated and the targets used may be both real objects and models (of natural size or diminished). The choice of the type of target is primarily determined by the size ratio of the target and the so-called echo-free zone in the chamber, that is, the zone where the incident field meets certain requirements as to the wavefront geometry and the background signal intensity. This ratio largely determines the response data accuracy. The echo-free zone size is, in turn, determined by the chamber dimensions and the way the wavefront is collimated. When the target of choice is larger than the echo-free zone, one usually employs a scaling method, using a smaller model object and a shorter radiation wavelength. One serious disadvantage of this technology is the difficulty of measuring radar responses to targets with absorbing or semiconducting coatings and, sometimes, of making suitable model targets.
The measuring facilities using AECs have some common disadvantages:
1. The measurement accuracy is quite low because of a strong background signal in the chamber working area, associated with the microwave-absorbing materials of high reflectivity (−20 to −30 dB).
2. The echo-free zone is small because the collimators have a small aperture and the chambers a small size; as a result, such measurements cannot be made with real targets.
3. The frequency band of transmitted pulses is limited and bistatic measurements are restricted.
It is clear from this analysis that a closed test ground is preferable for making response measurements for various targets, especially for aircraft and spacecraft. These facilities employ large AECs providing a high accuracy of all matrix elements for a real target, and there is no need to use scaling.
On the other hand, many applied radar problems, especially the estimation of efficiencies of methods and devices for target detection and recognition, often require a numerical simulation of the whole radar channel, including the microwave path, tracking conditions and so on. To do this, one should combine analogue and digital simulation means, including a radar measurement ground (the analogue component) and a computer with appropriate software packages (the digital component). If such equipment is designed for the measurement of reflected signals with their amplitudes and phases, it essentially represents a radar capable of microwave hologram recording, in other words, of inverse aperture synthesis. For imaging, it is sufficient to include in the software the image reconstruction algorithms described in this book.
The next procedure at the imaging stage is the measurement of local responses, or scattering matrices and their elements, to obtain data on individual target scatterers [12,138]. Objects of simple geometry, whose local responses can be calculated precisely, can be used as standards for calibration of measuring devices. Practically, it is reasonable to use cylinders as standard targets. An illustration of the calculation of local responses for cylinders by the EWM suggested by P. Ufimtzev is given in Chapter 2.
The typical measurement facilities include:
• an AEC;
• devices for pulse generation and transmission and for reception of echo signals of various frequencies, including superwideband pulses;
• equipment for making measurements, such as a rotating support, a target rotation control device, etc.;
• hard- and software to control measurement runs, to keep records of the incoming and operational data, processors, etc.
The body of work on the measurement of scattering parameters of targets consists of five stages:
• preparatory operations
• preliminary measurements
• major measurements
• control measurements
• data processing.
The preparatory stage is aimed at preparing the measuring devices for a successful performance. Preliminary measurements are to provide information on the device's ability to make the necessary measurements, to choose the appropriate operation mode and to calibrate the devices. The aim of the major measurements is to produce microwave holograms of the target with a prescribed accuracy. Control measurements are made in order to check the validity of the data obtained. If the amplitude and phase
errors fit into the admissible limits for this particular run, the major measurements are considered to be valid and are fed into a processor together with the calibration data.
Primary processing is performed to bring relative data to their absolute values, that is, to calibrate the measurements and to evaluate the errors. The final results are set into a local database for classified storage. Further processing can be made by various algorithms for the reconstruction of images of different dimensionalities (by using holographic and tomographic processing of the scattering matrix elements) in order to analyse and measure the local responses.
However, the analogue–digital software can also be used for the following tasks:
• to process the results of measurement in order to get statistical data on the scattering characteristics of the target (average values, dispersion, integral distributions, histograms and so on) for given target angles;
• to compute the angular positions of the target during its motion with respect to the ground radar in order to simulate the dynamic behaviour of the echo signal and the radar viewing devices;
• to simulate the target recognition devices by using various methods to find the target recognition parameters (from images, too) and to design decision-making schemes.
As a result, one can get online information about various probable characteristics necessary for the target detection and recognition.
Methods for direct imaging and for measurement of local responses in an AEC are described in detail in Reference 138. So we shall restrict ourselves to a brief review of the measurement procedures and some of the results obtained.
The best way of producing an image in an AEC is to record multiplicative Fourier holograms and to subject them to a digital processing. The recording can be based on one of the schemes shown in Fig. 2.4, and the reconstruction can be made by the algorithm presented in Fig. 9.16.
The input data are two quadrature components h_r1(φ) and h_r2(φ) of a 1D complex microwave hologram h_r(φ) and the calibration results (the calibration curve). The sampling step for the functions h_r1(φ) and h_r2(φ) should meet the condition Δφ ≤ λ/l_max, where l_max is the maximum linear size of the target.
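This sampling criterion fixes how many hologram samples a given rotation requires. A minimal sketch (the 3 cm wavelength, 3 m target and full-rotation recording below are assumed example values, not data from the text):

```python
import math

def max_aspect_step_rad(wavelength_m: float, target_size_m: float) -> float:
    """Largest aspect-angle step that still samples the hologram without
    aliasing: d_phi <= lambda / l_max, the criterion quoted in the text."""
    return wavelength_m / target_size_m

def min_samples(wavelength_m: float, target_size_m: float,
                total_angle_rad: float) -> int:
    """Minimum number of hologram samples over a given total rotation."""
    return math.ceil(total_angle_rad / max_aspect_step_rad(wavelength_m, target_size_m))

step = max_aspect_step_rad(0.03, 3.0)       # 0.01 rad
n = min_samples(0.03, 3.0, 2.0 * math.pi)   # ~629 samples for a full turn
```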
We can synchronise the quadrature components by using the subroutine for justifying the data file. Normally, a microwave hologram is recorded when the target is rotated by 2 rad, and further processing is performed for a sequence of samples, whose number corresponds to the optimal size of the synthetic aperture and the position in the data file corresponds to the required target aspect.
The chosen sequence is normalised, because a microwave hologram can be measured with different receiving channel gains, depending on the recorded signal value. This should be taken into account when measuring a local response in the RCS units. In order to visualise the scatterers and to measure their relative intensities at a given aspect angle, we should reduce the domains of the functions h_r1(φ) and h_r2(φ) to [−1, 1].
For a direct image reconstruction, one is to use a fast Fourier transform (FFT), which is simple to make when the number of initial readouts is 2^m, where m is
Figure 9.16 The scheme of the reconstruction algorithm. Its blocks are: input of A, sin φ and cos φ; input of graduation data; correction of measurement nonsynchronism; formation of the quadrature components of the complex radio-hologram h_r = h_r1 + i·h_r2 = A·exp(iφ); choice of the synthesis interval and object aspect angle; normalisation; interpolation; multiplication by a weight function; finding the FFT of the function h_r; calculation of the image intensity; determination of the local response; data output.
a natural number. Their necessary number is made up of an arbitrary set of initial samples, using an interpolation block. In order to minimise a measurement error in the local response, the chosen sample is multiplied by a weighting function.
Having found the Fourier transform of the complex function h_r(φ), we form the files ReV and ImV, defining the complex amplitudes of the field V(ν) scattered by the target surface. The image intensity, W, is found as the squared modulus of the function V(ν). The image sample interval is Δν = λ/2ψ_s, where ψ_s is the synthetic aperture angle. With the calibration data, the image intensities of individual scatterers can be represented in the RCS units.
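The reconstruction chain just described (quadrature components, weighting, FFT, squared modulus) can be sketched end-to-end on a simulated hologram. Everything below is an assumed toy setup, not data from the book; note that point scatterers placed at multiples of the sample interval Δν = λ/2ψ_s land in single image cells:

```python
import numpy as np

lam = 0.03                     # wavelength, m (assumed)
psi_s = 0.1                    # synthetic aperture angle, rad (assumed)
n = 256                        # number of aspect samples: 2^m, as the text requires
phi = np.linspace(0.0, psi_s, n, endpoint=False)

# Simulated complex 1D hologram of two point scatterers: each contributes a
# phase history exp(i * (4*pi/lam) * x * phi) at cross-range x
x = np.array([0.0, 0.45])      # m; 0.45 m is 3 image sample intervals here
a = np.array([1.0, 0.7])       # scatterer amplitudes
h = sum(ai * np.exp(1j * 4.0 * np.pi / lam * xi * phi) for ai, xi in zip(a, x))

h_w = h * np.hamming(n)        # weighting function to suppress sidelobes
v = np.fft.fft(h_w)            # complex image V(nu)
w = np.abs(v) ** 2             # image intensity W = |V|^2

d_nu = lam / (2.0 * psi_s)     # image sample interval from the text: 0.15 m
peaks = np.sort(np.argsort(w)[-2:])   # the two brightest image cells
```

The two intensity peaks fall in cells 0 and 3, i.e. at cross-ranges 0 and 3·Δν = 0.45 m, recovering the simulated scatterer positions.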
Figure 9.17 illustrates typical 1D images of a perfectly conducting cylinder, obtained in an AEC at the aspect angle φ_obs = 105°, with the image intensity plotted along the ordinate and the normalised target size along the abscissa. The image intensity peaks correspond to the projections of scatterers 1, 2 and 3 onto the normal
Figure 9.17 A typical 1D image of a perfectly conducting cylinder (l is the length of the cylinder, a is its radius): (a) E-polarisation and (b) H-polarisation. Each panel plots the image intensity W (rel. units) against the normalised coordinate for l = 6λ, a = λ/2 and ψ_s = π/6, with theoretical and experimental curves.
to the view line (Fig. 2.1). The analysis of these images has shown that the scatterers are localised just at the cylinder edges. Scatterers 2 and 3 at the ends of the cylinder generating line are well resolved. The images of 1 and 2 merge because they are separated by a distance smaller than the resolution limit of the method. The difference in the intensities of individual points can be interpreted in terms of the EWM or the GTD. The dashed lines in Fig. 9.17 are for the former intensities, and the latter computations yield similar results. Our findings agree well with experimental data. The polarisation properties of the scatterers manifest themselves in the varying image intensity due to the changes in the illumination polarisation. Such images can be used to estimate the target size and, with a more detailed analysis, its geometry, the 'brightest' construction elements and surface patches.
Figures 9.18 and 9.19 present the measured local scattering characteristics for a metallic cylinder, the RCS diagram for a selected scatterer, and the simulation results (Sections 5.2 and 5.3). The estimated standard deviation for the experimental local responses was 1.8 dB. In addition to a methodological error of 0.5 dB, the total error includes components due to the background echo signals in the AEC, imperfect polarisation channel insulation, etc. It is obvious that the theory, simulation and experiment gave similar results within the accuracy of the total measurement error. Such measurements provide data on local scattering characteristics of targets of complex geometry. The results presented can be used for calibration of measuring setups.
9.4 Target recognition
Recognition of targets is a very important task in radar science and practice. By recognition we mean the procedure of attributing the object being viewed to a certain class in a prescribed alphabet of target classes, using the radar data obtained. According to the general theory of pattern recognition, radar target recognition should include the following stages:
• compiling a classified alphabet of radar targets to be recognised;
• viewing of targets;
• determination (measurement) of some target responses from the recorded echo signal parameters to compile target descriptions, or patterns;
Figure 9.18 The local scattering characteristics for a metallic cylinder (E-polarisation): 10 log(σ_nE/πa²), in dB, plotted against the aspect angle (110–170°) for scattering centres 1 and 2, with experimental and simulated curves. The subscripts 1, 2, 3 at σ denote scattering centres.
• identification and selection of informative signs (features) from the compiled lists;
• target classification, or attribution of a particular target to one of the classes on the basis of discriminating signs.
The problem of making up an alphabet of target classes and selecting informative signs to describe each class reliably is quite complicated and is to be solved by qualified and experienced specialists. Of course, classification may be based on various principles. One of them is to group targets in terms of their function and application. For example, a successful management of air traffic needs a classification of aircraft: heavy and light passenger planes, military planes, helicopters, etc.
Each class of radar targets can be described by a definite set of discriminating characteristics to be used for classification: configuration, the presence of well-defined and readily observable parts, dynamic parameters (e.g. altitude, flight velocity), etc. A specific feature of all radar targets is that the radar input senses a target pattern in the echo signal domain. The size scale of this domain and the physical meaning of each of its components differ considerably from those of the parameter vectors of the target
Figure 9.19 The local scattering characteristics for a metallic cylinder (H-polarisation): (a) 10 log(σ_1H/πa²), (b) 10 log(σ_2H/πa²) and (c) 10 log(σ_3H/πa²), in dB, plotted against the aspect angle (110–170°), with experimental and simulated curves. The subscripts 1, 2, 3 at σ denote scattering centres.
class and each characteristic individually. No matter how many identification signs a target possesses, one can get information only about those characteristics that are contained in the recorded echo signal parameters. We believe that a holographic
approach to designing target recognition radars is capable of removing this limitation.
The target description (pattern) in a radiovision system is a microwave hologram function, which is generally a vector, non-stationary random function. It is manifested at the radar input as a pattern of a certain class of objects. Such patterns are practically unsuitable for classification because they have a complex probabilistic structure, a large and varying size, etc. Besides, the individual values of the hologram functions may also include minor, unimportant details of a target that may introduce additional recognition errors.
As in many other target recognition problems, a key task is to reveal the most informative, discriminating target signs. The subsystem of sign identification must include compression and preliminary processing of the initial radar data [12], such that the classification subsystem input would receive a size-fixed array of signs characterising the essential, most typical properties of a particular target. The role of a sign 'identifier' may be played by the operator of image reconstruction from a hologram, which can generally be reduced to an integral Fourier transform. The distances between individual scatterers and the local target characteristics measured from the images will form a discrete vector domain of a relatively small size, whose elements can be considered as recognition signs. They have a clear physical meaning, a factor important for creating a library of standards for the classifier operation. The target recognition then becomes a holographic process with a clear physical meaning. One does not need a priori data on the statistical structure of the echo signal, and this method of sign discrimination may be considered as distribution-free.
The final stage in the recognition process is to design a procedure for target classification, that is, finding the criteria for attributing a particular target to a class in a given alphabet. The classification is based on a key rule attributing the array of discriminating signs (i.e. the target itself) to one of the possible target classes. Modern pattern recognition theory has at its disposal a powerful mathematical apparatus including deterministic, probabilistic and heuristic procedures, as well as various sets of criteria for detecting similarities and differences between classes.
Therefore, radar target recognition can be represented as a block diagram that can serve as the basis for a mathematical model of a radar recognition device (Fig. 9.20). This idea has been tested using an analogue and a digital model of a recognition radar. The simulation included the measurement of microwave holograms of different model objects in an AEC, the mathematical modelling of the object motion and the computation of dynamic realisations of the microwave hologram functions with static measurements (for random initial conditions of motion), the modelling of the radar receiving channel, image reconstruction and construction of sign vectors, as well as the classification of the objects.
The relative positions of the three 'brightest' scatterers (geometrical characteristics) and their image intensities for each image were found to be
R^{kl}_{12} = |R^{kl}_1 − R^{kl}_2|,  R^{kl}_{13} = |R^{kl}_1 − R^{kl}_3|,  R^{kl}_{23} = |R^{kl}_2 − R^{kl}_3|,  A^{kl}_1, A^{kl}_2, A^{kl}_3,
where k, l = 1, 2 are the polarisation indices.
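Building one such sign vector from a reconstructed 1D image can be sketched as follows: find the three brightest image cells, convert their indices to positions via the sample interval, and collect the three pairwise distances plus the three intensities (one polarisation channel kl; the cross-channel normalisation mentioned below is omitted). The image used is a hypothetical stand-in:

```python
import numpy as np

def sign_vector(intensity: np.ndarray, sample_interval_m: float) -> np.ndarray:
    """Sign vector (R12, R13, R23, A1, A2, A3) built from the three brightest
    scatterers of a 1D image, ordered by decreasing intensity."""
    idx = np.argsort(intensity)[-3:]              # three brightest cells
    idx = idx[np.argsort(intensity[idx])[::-1]]   # sort them brightest-first
    pos = idx * sample_interval_m
    r12 = abs(pos[0] - pos[1])
    r13 = abs(pos[0] - pos[2])
    r23 = abs(pos[1] - pos[2])
    return np.concatenate([[r12, r13, r23], intensity[idx]])

# Hypothetical image: scatterers in cells 2, 5 and 9, with 0.15 m sampling
img = np.zeros(16)
img[[2, 5, 9]] = [1.0, 0.8, 0.6]
v = sign_vector(img, 0.15)   # ~ [0.45, 1.05, 0.6, 1.0, 0.8, 0.6]
```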
Figure 9.20 A mathematical model of a radar recognition device: targets are sensed by a receptor; the target descriptions enter the recognition device, where a feature separation block extracts the features passed to the decision-making block (classifier).
The set of sign vectors was stored in the recognition device to be used for creating a teaching or testing standard of sign vectors. The vectors were normalised such that one could compare vectors made up of signs of different physical nature. Smaller-scale sign vectors were created for further use. Table 9.7 presents the vectors for the entire sign domain constructed to minimise the sign vectors and compare their informative characteristics for further recognition. The minimum size was 3 and the maximum 9.
A sequence of recognition sign vectors arrives at the classifier input. We employed a Bayes classifier and a nonparametric classifier based on the method of potential functions. The former is optimal in the sense that it minimises the average risk of wrong decisions. The teaching of the Bayes classifier included the evaluation of unknown parameters of the conditional probability distribution of the sign vector x in the class A_i, p(x/A_i), which was taken to be normal. This decision rule is Bayes-optimal at the equal cost of errors for a more general distribution; in practice, however, the difference between the actual and normal distributions is usually neglected if the former is smooth and has one maximum [12]. The other classifier was used when there was no information on the sign vector distribution function. It was assumed that the general decision function was known and its parameters were estimated from the teaching samples [12].
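A Bayes classifier with normal class-conditional densities p(x/A_i) can be sketched as follows: teaching estimates the mean and covariance per class, and (for equal priors and equal error costs) a vector is assigned to the class of maximum log-likelihood. This is a generic sketch under those stated assumptions, not the authors' implementation.

```python
import numpy as np

class GaussianBayesClassifier:
    """Bayes classifier assuming normal p(x|A_i); with equal priors
    and equal error costs it assigns x to the class of maximum
    log-likelihood, which minimises the average risk."""

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            # small diagonal loading keeps the covariance invertible
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.params_[c] = (mu, np.linalg.inv(cov),
                               np.log(np.linalg.det(cov)))
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        scores = []
        for c in self.classes_:
            mu, icov, logdet = self.params_[c]
            d = X - mu
            # log-likelihood up to a common constant
            maha = np.einsum('ij,jk,ik->i', d, icov, d)
            scores.append(-0.5 * (maha + logdet))
        return self.classes_[np.argmax(scores, axis=0)]
```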
Each experimental run provided a K × K matrix of decisions (K is the number of classes) at the classifier output. The element k_ij of the matrix is the number of objects in the ith class attributed to the jth class. From the matrix K, we can estimate the probability of correct recognition events, the probability of a false alarm, etc.
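For instance, the per-class probability of correct recognition is the diagonal element divided by the row sum of the decision matrix; the sketch below shows this estimate (a minimal illustration, with the function name assumed).

```python
import numpy as np

def recognition_probabilities(K):
    """K is the K x K decision matrix: K[i, j] objects of class i
    were attributed to class j.  Returns the per-class probability
    of correct recognition, diag(K) / row sums."""
    K = np.asarray(K, dtype=float)
    return np.diag(K) / K.sum(axis=1)
```

With K = [[80, 20], [30, 70]] this gives correct-recognition probabilities of 0.8 for class 1 and 0.7 for class 2; off-diagonal row fractions estimate the corresponding false-alarm probabilities.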
The model suggested was used to test the recognition capabilities for various objects. We also planned to estimate the efficiency of recognition, to compare the information contents of different sign vectors and to investigate the stability of the classification algorithms in terms of the size of the teaching sample. For this, we employed metallic cones with a spherical apex (class 1) and a spherical base (class 2) of about the same length. The probabilistic structure of the sign domain was estimated by constructing experimental holograms. Their unimodal character was tested to justify the use of a Bayes classifier. An experimental series was equal to 100 in all the runs.
Table 9.8 compares the valid recognition probability for objects of both classes and the size of the teaching sequence at different sign vectors for the case of a Bayes classifier. One can see that the largest vectors made up of local responses are most effective. The geometrical characteristics gave poorer results, as was expected, because the objects in both classes were of about the same size. When the number of teaching vectors is decreased, there is a tendency for a lower recognition efficiency.
Table 9.7 The variants of the sign vectors

Type of sign vector: AR
  Polarisation 1: A^{11}_1, A^{11}_2, R^{11}_{12}
  Polarisation 2: A^{22}_1, A^{22}_2, R^{22}_{12}
  Polarisation 3: A^{12}_1, A^{12}_2, R^{12}_{12}
  Polarisation 4: A^{11}_1, A^{11}_2, R^{11}_{12}, A^{22}_1, A^{22}_2, R^{22}_{12}
  Polarisation 5: A^{11}_1, A^{11}_2, R^{11}_{12}, A^{22}_1, A^{22}_2, R^{22}_{12}, A^{12}_1, A^{12}_2, R^{12}_{12}

Type of sign vector: A
  Polarisation 1: A^{11}_1, A^{11}_2, A^{11}_3
  Polarisation 2: A^{22}_1, A^{22}_2, A^{22}_3
  Polarisation 3: A^{12}_1, A^{12}_2, A^{12}_3
  Polarisation 4: A^{11}_1, A^{11}_2, A^{11}_3, A^{22}_1, A^{22}_2, A^{22}_3
  Polarisation 5: A^{11}_1, A^{11}_2, A^{11}_3, A^{22}_1, A^{22}_2, A^{22}_3, A^{12}_1, A^{12}_2, A^{12}_3

Type of sign vector: R
  Polarisation 1: R^{11}_{12}, R^{11}_{13}, R^{11}_{23}
  Polarisation 2: R^{22}_{12}, R^{22}_{13}, R^{22}_{23}
  Polarisation 3: R^{12}_{12}, R^{12}_{13}, R^{12}_{23}
  Polarisation 4: R^{11}_{12}, R^{11}_{13}, R^{11}_{23}, R^{22}_{12}, R^{22}_{13}, R^{22}_{23}
  Polarisation 5: R^{11}_{12}, R^{11}_{13}, R^{11}_{23}, R^{22}_{12}, R^{22}_{13}, R^{22}_{23}, R^{12}_{12}, R^{12}_{13}, R^{12}_{23}
Table 9.8 The valid recognition probability (a Bayes classifier)

Number of teaching vectors | Type of sign vector | Polarisation: 1, 2, 3, 4, 5
50 AR 0.54 0.68 0.68 0.68 0.80
A 0.63 0.61 0.66 0.63 0.78
R 0.56 0.68 0.55 0.66 0.63
40 AR 0.56 0.64 0.63 0.61 0.77
A 0.60 0.61 0.67 0.63 0.78
R 0.56 0.68 0.55 0.66 0.63
30 AR 0.53 0.63 0.52 0.63 0.78
A 0.60 0.60 0.69 0.64 0.77
R 0.57 0.67 0.51 0.64 0.63
20 AR 0.53 0.63 0.51 0.64 0.71
A 0.57 0.58 0.68 0.63 0.73
R 0.52 0.70 0.52 0.62 0.63
10 AR 0.44 0.59 0.58 0.60 0.50
A 0.52 0.55 0.59 0.60 0.55
R 0.50 0.68 0.56 0.54 0.50
Table 9.9 The valid recognition probability (a classifier based on the method of potential functions)

Number of teaching vectors | Type of sign vector | Polarisation: 1, 2, 3, 4, 5
30 AR 0.88 0.90 0.87 0.83 0.81
20 0.82 0.71 0.71 0.78 0.72
10 0.75 0.63 0.67 0.76 0.71
30 A 0.87 0.90 0.90 0.89 0.94
20 0.67 0.82 0.85 0.81 0.84
10 0.64 0.80 0.72 0.75 0.82
30 R 0.80 0.84 0.69 0.72 0.80
20 0.62 0.66 0.54 0.53 0.68
10 0.55 0.60 0.56 0.52 0.63
Table 9.9 shows similar results for a classifier based on the method of potential functions. The recognition efficiency is higher, but the time necessary for the teaching is an order of magnitude longer.
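The method of potential functions assigns a vector to the class whose teaching vectors produce the largest summed "potential" around it. The kernel below, K(x, x_k) = 1/(1 + α·|x − x_k|²), is one classical choice; it and the function name are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def potential_function_classify(x, X_train, y_train, alpha=1.0):
    """Nonparametric classifier based on potential functions: each
    teaching vector x_k contributes K(x, x_k) = 1/(1 + alpha*|x-x_k|^2),
    and x is assigned to the class with the largest total potential."""
    x = np.asarray(x, dtype=float)
    X = np.asarray(X_train, dtype=float)
    y = np.asarray(y_train)
    pot = 1.0 / (1.0 + alpha * np.sum((X - x) ** 2, axis=1))
    classes = np.unique(y)
    totals = [pot[y == c].sum() for c in classes]
    return classes[int(np.argmax(totals))]
```

Because every decision sums a kernel over all teaching vectors, teaching and classification cost grows with the teaching set, which is consistent with the longer teaching time noted above.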
The sequence of operations in this model can be used as a procedure for an estimation of recognition efficiency for various targets at the stage of designing the
radar or the targets. This model provides a greater efficiency of pre-tests at the device designing stage because one can
• obtain statistical data on possible recognition of various targets in a short time at lower cost;
• get teaching or experimental sequences of practically any size;
• evaluate the effective parameters of antirecognition devices during direct statistical experiments, etc.
References
1 AKHMETYANOV, V. R., and PASMUROV, A. Ya.: ‘Radar imaging analysis based on theory of information’. Proceedings of the sixth All-Union seminar on Optical information processing, Frunze, USSR, 1986, part 2, p. 59 (in Russian)
2 AKHMETYANOV, V. R., and PASMUROV, A. Ya.: ‘Radar imagery processing for earth remote sensing’, Zarubezhnaya Radioelectronica, 1987, 1, pp. 70–81 (in Russian)
3 ALEXANDROV, V. Y., LOSHCHILOV, V. S., and PROVORKIN, A. V.: ‘Studies of icebergs and sea ice in Antarctic using “Almaz-1” SAR data’, in POPOV, I. K., and VOEVODIN, V. A. (Eds): ‘Icebergs of the world ocean’ (Hydrometeoizdat, St Petersburg, 1996), pp. 30–36 (in Russian)
4 ALEXANDROV, V. Y., SANDVEN, S., JOHANNESSEN, O. M., PETTERSSON, L. H., and DALEN, O.: ‘Winter navigation in the Northern Sea Route using RADARSAT data’, Polar Record, 2000, 36 (199), pp. 333–42
5 ALLAN, T. D. (Ed.): ‘Satellite microwave remote sensing’ (John Wiley & Sons, New York, 1983)
6 ALPERS, W., and HENNINGS, I.: ‘A theory of the imaging mechanisms of underwater bottom topography by real and synthetic aperture radar’, Journal of Geophysical Research, 1984, 89, pp. 10529–46
7 ARSENOV, S. M., and PASMUROV, A. Ya.: ‘Investigation of local scattering characteristics of lumped objects from their radar images’. Proceedings of the All-Union symposium on Waves and diffraction, Moscow, USSR, 1990, pp. 153–55 (in Russian)
8 ARSENOV, S. M., and PASMUROV, A. Ya.: ‘Signal processing for aircraft radar imaging’, Zarubezhnaya Radioelectronica, 1991, 1, pp. 71–83 (in Russian)
9 ARSENOV, S. M., and PASMUROV, A. Ya.: ‘Tomographic signal processing for ISAR’, in GUREVICH, S. B. (Ed.): ‘Optical and optico-electronic means of data processing’ (USSR Academy of Sciences, Leningrad, 1989), pp. 258–66 (in Russian)
10 ARSENOV, S. M., and PASMUROV, A. Ya.: ‘Compensation of aircraft radial motion for ISAR’. Proceedings of the second All-Union conference on Theory and practice of spatial-time signal processing, Sverdlovsk, USSR, 1989, pp. 217–19 (in Russian)
11 ASTANIN, L. Yu., and KOSTYLEV, A. A.: ‘Ultrawideband radar measurements. Analysis and processing’ (The Institution of Electrical Engineers, London, 1997)
12 ASTANIN, L. Yu., KOSTYLEV, A. A., ZINOVIEV, Yu. S., and PASMUROV, A. Ya.: ‘Radar target characteristics: measurements and applications’ (CRC Press, Boca Raton, 1994)
13 AUSHERMAN, D. A., KOZMA, A., WALKER, J. L., JONES, H. M., and POGGIO, E. C.: ‘Developments in radar imaging’, IEEE Transactions on Aerospace and Electronic Systems, 1984, AES-20 (4), pp. 363–99
14 BAKUT, P. A., BOLSHAKOV, I. A., GERASIMOV, B. M. et al.: ‘Statistical theory of radiolocation’ (Sovetskoe Radio, Moscow, 1963, vol. 1) (in Russian)
15 BATES, R. H. T., GARDEN, K. L., and PETERS, T. M.: ‘Overview of computerized tomography with emphasis on future developments’, Proceedings of IEEE, 1983, 71 (3), pp. 356–72
16 BEAL, R., KUDRYAVTSEV, V., THOMPSON, D. et al.: ‘The influence of the marine atmospheric boundary layer on ERS-1 synthetic aperture radar imagery of the Gulf Stream’, Journal of Geophysical Research, 1997, 102 (C3), pp. 5799–5814
17 BELOCERKOVSKY, S. M., KOCHETKOV, Yu. A., KRASOVSKY, A. L., and NOVITSKIY, V. V.: ‘Introduction in aeroautoelasticity’ (Nauka, Moscow, 1980) (in Russian)
18 BERTOIA, C., FALKINGHAM, J., and FETTERER, F.: ‘Polar SAR data for operational sea ice mapping’, in TSATSOULIS, C., and KWOK, R. (Eds): ‘Analysis of SAR data of the polar oceans. Recent advances’ (Springer-Praxis, Berlin, Heidelberg, 1998), pp. 201–34
19 BORN, M., and WOLF, E.: ‘Principles of optics’ (Pergamon Press, New York, 1980)
20 BROMAGHIM, D. R., and PERRY, J. P.: ‘A wideband linear FM ramp generator for the long-range imaging radar’, IEEE Transactions on Microwave Theory and Techniques, 1978, MTT-26 (5), pp. 322–25
21 BROWN, W. M., and FREDERICKS, R. J.: ‘Range-Doppler imaging with motion through resolution cells’, IEEE Transactions on Aerospace and Electronic Systems, 1969, AES-5 (1), pp. 98–102
22 BROWN, W. M., and GHIGLIA, D. C.: ‘Some methods for reducing propagation-induced phase errors in coherent imaging systems’, Journal of the Optical Society of America, 1988, 5 (6), pp. 924–41
23 BROWN, W. M., and RIORDAN, J. E.: ‘Resolution limits with propagation phase errors’, IEEE Transactions on Aerospace and Electronic Systems, 1970, AES-6 (5), pp. 657–62
24 BROWN, W. M.: ‘Synthetic aperture radar’, IEEE Transactions on Aerospace and Electronic Systems, 1967, AES-3 (2), pp. 217–30
25 BUNKIN, B. V., and REUTOV, A. P.: ‘Trends of radar development’, in SOKOLOV, A. V. (Ed.): ‘Problems of perspective radiolocation’ (Radiotekhnika, Moscow, 2003), pp. 12–19 (in Russian)
26 BYKOV, V. V.: ‘Digital modelling for statistical radioengineering’ (Sovetskoe Radio, Moscow, 1971) (in Russian)
27 CARSEY, F., HARFING, R., and WALES, C.: ‘Alaska SAR facility: The US center for sea ice SAR data’, in TSATSOULIS, C., and KWOK, R. (Eds): ‘Analysis of SAR data of the polar oceans. Recent advances’ (Springer-Praxis, Berlin, Heidelberg, 1998), pp. 189–200
28 CHERNYKH, M. M., and VASILIEV, O. V.: ‘Experimental estimation of aircraft echo signal coherence’, Radiotekhnika, 1999, 2, pp. 75–78 (in Russian)
29 COLLIER, R. J., BURCKHARDT, C. B., and LIN, L. H.: ‘Optical holography’ (Academic Press, New York, London, 1971)
30 CURLANDER, J. C., and McDONOUGH, R. N.: ‘Synthetic aperture radar systems and signal processing’ (John Wiley & Sons, New York, London, 1991)
31 CURRIE, N. C. (Ed.): ‘Radar reflectivity measurement: techniques and applications’ (Artech House, Norwood, USA, 1989)
32 CUTRONA, L. J., LEITH, E. N., PORCELLO, L. J., and VIVIAN, W. E.: ‘On the application of coherent optical processing techniques to synthetic aperture radar’, Proceedings of IEEE, 1966, 54 (8), pp. 1026–32
33 DA SILVA, J. C. B., ROBINSON, I. S., JEANS, D. R. G., and SHERWIN, T.: ‘The application of near-real-time ERS-1 SAR data for predicting the location of internal waves at sea’, International Journal of Remote Sensing, 1997, 18 (10), pp. 3507–17
34 DESAI, M., and JENKINS, W. K.: ‘Convolution back-projection image reconstruction for synthetic aperture radar’. Proceedings of IEEE International symposium on Circuits and systems, Montreal, Canada, 1984, vol. 1, pp. 161–63
35 DESCHAMPS, G.: ‘About microwave holography’, Proceedings of IEEE, 1967, 55 (4), pp. 58–59
36 DIKINIS, A. V., IVANOV, A. Y., KARLIN, L. N. et al.: ‘Atlas of synthetic aperture radar images of the ocean acquired by ALMAZ-1 satellite’ (GEOS, Moscow, 1999) (in Russian)
37 DRINKWATER, M. R.: ‘Satellite microwave radar observations of Antarctic sea ice’, in TSATSOULIS, C., and KWOK, R. (Eds): ‘Analysis of SAR data of the polar oceans. Recent advances’ (Springer-Praxis, Berlin, Heidelberg, 1998), pp. 35–68
38 EDEL, H., SHAW, E., FALKINGHAM, J., and BORSTAD, G.: ‘The Canadian RADARSAT program’, Backscatter, 2004, 15 (1), pp. 11–15
39 ERMAKOV, S. A., SALASHIN, S. G., and PANCHENKO, A. R.: ‘Film slicks on the sea surface and some mechanisms of their formation’, Dynamics of Atmosphere and Ocean, 1992, 16 (2), pp. 279–304 (in Russian)
40 ESPEDAL, H. A., and JOHANNESSEN, O. M.: ‘Detection of oil spills near offshore installations using synthetic aperture radar (SAR)’, International Journal of Remote Sensing, 2000, 21 (11), pp. 2141–44
41 ESPEDAL, H. A., JOHANNESSEN, O. M., JOHANNESSEN, J. A. et al.: ‘COASTWATCH’95: A tandem ERS-1/SAR detection experiment of natural film on the ocean surface’, Journal of Geophysical Research, 1998, 103 (C11), pp. 24969–82
42 FLETT, D., and VACHON, P. W.: ‘Marine applications of SAR in Canada’, Backscatter, 2004, 15 (1), pp. 16–21
43 FLETT, D., De ABREU, R., and FALKINGHAM, J.: ‘Operational experience with ENVISAT ASAR wide swath data at the CIS’. Abstracts of ENVISAT Symposium, Salzburg, Austria, 2004, Abstract No. 363
44 FREIDEY, A. I., CONROY, B. L., HOPPE, D. I., and BRANJI, A. M.: ‘Design concepts of a 1-MW CW X-band transmit/receiver system for planetary radar’, IEEE Transactions on Microwave Theory and Techniques, 1992, MTT-40 (6), pp. 1047–55
45 FROM PATTERN TO PROCESS: The strategy of the Earth observing system. EOS science steering committee report, vol. 2, NASA, 1988
46 FUREVIK, B. R., JOHANNESSEN, O. M., and SANDVIK, A. D.: ‘SAR-retrieved wind in polar regions – comparison with in situ data and atmospheric model output’, IEEE Transactions on Geoscience and Remote Sensing, 2002, GE-40 (8), pp. 1720–32
47 GHIGLIA, D. C., and BROWN, W. D.: ‘Some methods for reducing propagation-induced phase errors in coherent imaging systems. II. Numerical results’, Journal of the Optical Society of America, 1988, A5 (6), pp. 942–56
48 GILL, R. S., and VALEUR, H. H.: ‘Ice cover discrimination in the Greenland waters using first-order texture parameters of ERS SAR images’, International Journal of Remote Sensing, 1999, 20 (2), pp. 373–85
49 GILL, R. S., VALEUR, H. H., and NIELSEN, P.: ‘Evaluation of the RADARSAT imagery for the operational mapping of sea ice around Greenland’. Proceedings of symposium on Geomatics in the era of RADARSAT, Ottawa, Canada, 1997, pp. 230–34
50 GOODMAN, J. W.: ‘An introduction to the principles and applications of holography’, Proceedings of IEEE, 1971, 59 (9), pp. 1292–304
51 GOODMAN, J. W.: ‘Introduction to Fourier optics’ (McGraw-Hill Book Company, New York, 1968)
52 GOUDEY, K. R., and SCIAMBI, A. F.: ‘High power X-band monopulse tracking feed for the Lincoln laboratory long-range imaging radar’, IEEE Transactions on Microwave Theory and Techniques, 1978, MTT-26 (5), pp. 326–32
53 GRIFFIN, C. R.: ‘Image quality parameters for digital synthetic aperture radar’. Proceedings of symposium on RADAR, 1984, pp. 430–35
54 HAAS, C., DIERKING, W., BUSCHE, T., HOELEMANN, J., and WEGENER, C.: ‘Monitoring polynya processes and sea ice production in the Laptev sea’. Abstracts of ENVISAT Symposium, Salzburg, Austria, 2004, Abstract No. 137
55 HARGER, R. O.: ‘Synthetic aperture radar systems. Theory and design’ (Academic Press, New York, 1970)
56 HASSELMANN, K., RANEY, R. K., PLANT, W. J. et al.: ‘Theory of synthetic aperture radar ocean imaging: A MARSEN view’, Journal of Geophysical Research, 1985, 90 (10), pp. 4659–86
57 HERMAN, G. T.: ‘Image reconstruction from projections. The fundamentals of computerized tomography’ (John Wiley & Sons, New York, 1980)
58 ILYIN, A. L., and PASMUROV, A. Ya.: ‘Fluctuated objects and SAR characteristics’, Izvestiya vysshykh uchebnykh zavedeniy – Radioelectronica, 1989, 32 (2), pp. 65–68 (in Russian)
59 ILYIN, A. L., and PASMUROV, A. Ya.: ‘Mapping of partial coherence extended targets by SAR’, Zarubezhnaya Radioelectronica, 1985, 6, pp. 3–15 (in Russian)
60 ILYIN, A. L., and PASMUROV, A. Ya.: ‘Radar imagery characteristics of fluctuated extended targets’, Radiotekhnika i Electronica, 1987, 31 (1), pp. 69–76 (in Russian)
61 IVANOV, A. V.: ‘On the synthetic aperture radar imaging of ocean surface waves’, IEEE Journal of Oceanic Engineering, 1982, OE-7 (2), pp. 96–103
62 JOHANNESSEN, J., DIGRANES, G., ESPEDAL, H., JOHANNESSEN, O. M., and SAMUEL, P.: ‘SAR ocean feature catalogue’ (ESA Publications Division, ESTEC, Noordwijk, The Netherlands, 1994)
63 JOHANNESSEN, J. A., SHUCHMAN, R. A., JOHANNESSEN, O. M., DAVIDSON, K. L., and LYZENGA, D. R.: ‘Synthetic aperture radar imaging of upper ocean circulation features and wind fronts’, Journal of Geophysical Research, 1991, 96 (9), pp. 10411–22
64 JOHANNESSEN, O. M., SANDVEN, S., PETTERSSON, L. H. et al.: ‘Near-real time sea ice monitoring in the Northern Sea Route using ERS-1 SAR and DMSP SSM/I microwave data’, Acta Astronautica, 1996, 38 (4–8), pp. 457–65
65 JOHANNESSEN, O. M., VOLKOV, A. M., BOBYLEV, L. P. et al.: ‘ICEWATCH – Real-time sea ice monitoring of the Northern Sea Route using satellite radar (a cooperative earth observation project between the Russian and European Space Agencies)’, Earth Observations and Remote Sensing, 2000, 16 (2), pp. 257–68
66 JOHANNESSEN, O. M., and SANDVEN, S.: ‘ERS-1 SAR ice routing of L’Astrolabe through the Northeast Passage’, Arctic News-Record, Polar Bulletin, 8 (2), pp. 26–31
67 JOHANNESSEN, O. M., CAMPBELL, W. J., SHUCHMAN, R. et al.: ‘Microwave study programs of air–ice–ocean interactive processes in the seasonal ice zone of the Greenland and Barents Seas’, in ‘Microwave remote sensing of sea ice’ (American Geophysical Union, Washington, DC, 1992, Geophysical Monograph No. 68), pp. 261–89
68 JOHANNESSEN, O. M., SANDVEN, S., DROTTNING, A., KLOSTER, K., HAMRE, T., and MILES, M.: ‘ERS-1 SAR sea ice catalogue’ (European Space Agency, SP-1193, 1997)
69 KELL, P. E.: ‘About bistatic RCS evaluation using results of monostatic RCS measurements’, Proceedings of IEEE, 1965, 53 (8), pp. 1126–32
70 KELLER, J. B.: ‘Geometrical theory of diffraction’, Journal of the Optical Society of America, 1962, 52 (2), pp. 116–30
71 KOCK, W. E.: ‘Pulse compression with periodic gratings and zone plane gratings’, Proceedings of IEEE, 1970, 58 (9), pp. 1395–96
72 KONDRATENKOV, G. S.: ‘The signal function of a holographic radar’, Radiotekhnika, 1974, 29 (6), pp. 90–92 (in Russian)
73 KONDRATENKOV, G. S.: ‘Synthetic aperture antennas’, in VOSKRESENSKY, D. I. (Ed.): ‘Phased antenna arrays design’ (Radiotekhnika, Moscow, 2003), pp. 399–416 (in Russian)
74 KONDRATENKOV, G. S., POTEKHIN, V. A., REUTOV, A. P., and FEOKTISTOV, Yu. A.: ‘Earth surveying radars’ (Radio i Svyaz, Moscow, 1983) (in Russian)
75 KORSBAKKEN, E., JOHANNESSEN, J. A., and JOHANNESSEN, O. M.: ‘Coastal wind field retrievals from ERS synthetic aperture radar images’, Journal of Geophysical Research, 1998, 103 (C4), pp. 7857–74
76 KORSNES, R.: ‘Some concepts for precise estimation of deformations/rigid areas in polar pack ice based on time series of ERS-1 SAR images’, International Journal of Remote Sensing, 1994, 15 (18), pp. 3663–74
77 KRAMER, H.: ‘Observation of the Earth and its Environment. Survey of Missions and Sensors’ (Springer, Berlin, 1996)
78 KURIKSHA, A. A.: ‘Moving target 2D radar imaging by combination of the aperture synthesis and tomography’, Radiotekhnika i Electronica, 1994, 39 (4), pp. 613–18 (in Russian)
79 KWOK, R., and CUNNINGHAM, G. F.: ‘Seasonal ice area and volume production of the Arctic Ocean: November 1996 through April 1997’, Journal of Geophysical Research, 2002, 107 (C10), pp. 8038–42
80 LANDSBERG, G. S.: ‘Optics’ (Nauka, Moscow, 1970, 6th edn) (in Russian)
81 LARSON, R. W., ZELENKA, I. S., and IOHANSEN, E. L.: ‘A microwave hologram radar system’, IEEE Transactions on Aerospace and Electronic Systems, 1972, AES-8 (2), pp. 208–17
82 LARSON, R. W., ZELENKA, I. S., and IOHANSEN, E. L.: ‘Microwave holography’, Proceedings of IEEE, 1969, 57 (12), pp. 2162–64
83 LARUE, A., HOFFMAN, K. N., HURLBUT, D. E., KIND, H. J., and WINTROUB, A.: ‘94-GHz radar for space object identification’, IEEE Transactions on Microwave Theory and Techniques, 1969, MTT-17 (12), pp. 1145–49
84 LE HEGARAT-MUSCLE, S., ZRIBI, M., ALEM, F., WEISSE, A., and LOUMAGNE, C.: ‘Soil moisture estimation from ERS/SAR data: Toward an operational methodology’, IEEE Transactions on Geoscience and Remote Sensing, 2002, GE-40 (12), pp. 2647–58
85 LEITH, E. N.: ‘Quasi-holographic techniques in the microwave region’, Proceedings of IEEE, 1971, 59 (9), pp. 1305–18
86 LEITH, E. N., and INGALLS, F. L.: ‘Synthetic antenna data processing by wavefront reconstruction’, Applied Optics, 1968, 7 (3), pp. 539–44
87 LEITH, E. N.: ‘Side-looking synthetic aperture radar’, in CASASENT, D. (Ed.): ‘Optical data processing applications’ (Springer-Verlag, Berlin, Heidelberg, New York, 1978), Chapter 4
88 LEWITT, P. M.: ‘Reconstruction algorithms: transform methods’, Proceedings of IEEE, 1983, 71 (3), pp. 390–408
89 LIKHACHEV, V. P., and PASMUROV, A. Ya.: ‘Aircraft radar imaging under signal partial coherence conditions’, Radiotekhnika i Electronica, 1999, 44 (3), pp. 294–300 (in Russian)
90 MAYZELS, E. N., and TORGOVANOV, V. A.: ‘Measurement of scattering characteristics of radar targets’ (Sovetskoe Radio, Moscow, 1972) (in Russian)
91 MEHRHOLZ, D., and MAGURA, K.: ‘Radar tracking and observation of noncooperative space objects by reentry of Salut-7-Kosmos-1686’. Proceedings of International workshop of European Space Operations Center, Darmstadt, Germany, 1991, pp. 1–8
92 MEIER, R. W.: ‘Magnification and third-order aberrations in holography’, Journal of the Optical Society of America, 1965, 55 (7), pp. 987–91
93 MELLING, H.: ‘Detection of features in first-year pack ice by synthetic aperture radar (SAR)’, International Journal of Remote Sensing, 1998, 19 (6), pp. 1223–49
94 MENSA, D. L.: ‘High resolution radar cross-section imaging’ (Artech House, Dedham, USA, 1991)
95 MERSEREAU, R. M., and OPPENHEIM, A. V.: ‘Digital reconstruction of multidimensional signals from their projections’, Proceedings of IEEE, 1974, 62 (10), pp. 1319–38
96 MILER, M.: ‘Holography’ (SNTL, Prague, Czechoslovakia, 1974) (in Czech)
97 MILES, V. V., BOBYLEV, L. P., MAKSIMOV, S. V., JOHANNESSEN, O. M., and PITULKO, P. M.: ‘An approach for assessing boreal forest conditions based on combined use of satellite SAR and multispectral data’, International Journal of Remote Sensing, 2003, 24 (22), pp. 4447–66
98 MITSMAKHER, M. Yu., and TORGOVANOV, V. A.: ‘Microwave anechoic chambers’ (Radio i Svyaz, Moscow, 1982) (in Russian)
99 MOORE, R. K.: ‘Tradeoff between picture element dimensions and noncoherent averaging in side-looking airborne radar’, IEEE Transactions on Aerospace and Electronic Systems, 1979, AES-15 (5), pp. 697–708
100 MUNSON, D. C., Jr., O’BRIEN, J. D., and JENKINS, W. K.: ‘A tomographic formulation of spotlight-mode synthetic aperture radar’, Proceedings of IEEE, 1983, 71 (8), pp. 917–25
101 NGHIEM, S.: ‘On the use of ENVISAT ASAR for remote sensing of sea ice’. Abstracts of ENVISAT Symposium, Salzburg, Austria, 2004, Abstract No. 672
102 ONSTOTT, R. G.: ‘SAR and scatterometer signatures of sea ice’, in CARSEY, F. (Ed.): ‘Microwave remote sensing of sea ice’ (AGU Geophysical Monograph 68, AGU, 1992), pp. 73–104
103 PAPOULIS, A.: ‘Systems and transforms with applications in optics’ (McGraw-Hill, New York, 1968)
104 PASMUROV, A. Ya.: ‘Aircraft radar imaging’, Zarubezhnaya Radioelectronica, 1987, 12, pp. 3–30 (in Russian)
105 PASMUROV, A. Ya.: ‘Microwave holographic process modelling based on the edge waves method’, Radiotekhnika i Electronica, 1971, 26 (10), pp. 2030–33 (in Russian)
106 PASMUROV, A. Ya.: ‘Tomographic methods for radar imaging’. Proceedings of the first All-Union conference on Optical information processing, Leningrad, USSR, 1988, pp. 85–86 (in Russian)
107 PETTERSSON, L. H., SANDVEN, S., DALEN, O., MELENTYEV, V. V., and BABICH, N. G.: ‘Satellite radar ice monitoring for ice navigation of the ARCDEV tanker convoy in the Kara sea’. Proceedings of the fifteenth international conference on Port and ocean engineering under Arctic conditions, Espoo, Finland, 1999, vol. 1, pp. 181–90
108 POLYANSKY, V. K., and KOVALSKY, L. V.: ‘Information content of optical radiation’. Proceedings of the third All-Union School on Holography, Leningrad, USSR, 1972, pp. 53–71 (in Russian)
109 POPOV, S. A., ROZANOV, B. A., ZINOVIEV, J. S., and PASMUROV, A. Ya.: ‘Basic principles of microwave holograms inverse synthesis’. Proceedings of the eighth All-Union School on Holography, Leningrad, USSR, 1976, pp. 275–89 (in Russian)
110 PORCELLO, L. J.: ‘Turbulence-induced phase errors in synthetic aperture radars’, IEEE Transactions on Aerospace and Electronic Systems, 1970, AES-6 (5), pp. 634–44
111 RAMSAY, B. R., WEIR, L., WILSON, K., and ARKETT, M.: ‘Early results of the use of RADARSAT ScanSAR data in the Canadian Ice Service’. Proceedings of the fourth Symposium on Remote sensing of the polar environments, Lyngby, Denmark, 1996, ESA SP-391, pp. 95–117
112 RANEY, R. K.: ‘SAR processing of partially coherent phenomena’, International Journal of Remote Sensing, 1980, 1 (1), pp. 29–51
113 RINO, C. L., and FREMOUW, E. J.: ‘The angle dependence of singly scattered wave fields’, Journal of Atmospheric and Terrestrial Physics, 1977, 39 (5), pp. 859–68
114 RINO, C. L., and OWEN, J.: ‘Numerical simulations of intensity scintillation using the power law phase screen model’, Radio Science, 1984, 19 (3), pp. 891–908
115 RINO, C. L., GONZALEZ, V. H., and HESSING, A. R.: ‘Coherence bandwidth loss in transionospheric radio propagation’, Radio Science, 1981, 16 (2), pp. 245–55
116 RINO, C. L.: ‘On the application of phase screen models to the interpretation of ionospheric scintillation data’, Radio Science, 1982, 17 (4), pp. 855–67
117 ROBINSON, I. S.: ‘Measuring the oceans from space. The principles and methods of satellite oceanography’ (Springer-Praxis, Chichester, UK, 2004)
118 RYLE, M.: ‘Radio telescopes of large resolving power’, Reviews of Modern Physics, 1975, 47 (7), pp. 557–66
119 SANDVEN, S., DALEN, O., LUNDHAUG, M., KLOSTER, K., ALEXANDROV, V. Y., and ZAITSEV, L. V.: ‘Sea ice investigations in the Laptev sea area in late summer using SAR data’, Canadian Journal of Remote Sensing, 2001, 27 (5), pp. 502–16
120 SANDVEN, S., JOHANNESSEN, O. M., MILES, M. W., PETTERSSON, L. H., and KLOSTER, K.: ‘Barents sea seasonal ice zone features and processes from ERS-1 synthetic aperture radar: Seasonal ice zone experiment 1992’, Journal of Geophysical Research, 1999, 104 (C7), pp. 15843–57
121 SAPHRONOV, G. S., and SAPHRONOVA, A. P.: ‘An introduction to microwave holography’ (Sovetskoe Radio, Moscow, 1973) (in Russian)
122 SCHEUCHL, B., CAVES, R., FLETT, D., DE ABREU, R., ARKETT, M., and CUMMING, I.: ‘The potential of cross-polarization information for operational sea ice monitoring’. Abstracts of ENVISAT Symposium, Salzburg, Austria, 2004, Abstract No. 493
123 SEA ICE INFORMATION SERVICES IN THE WORLD. WMO N 574. Secretariat of the World Meteorological Organization, Geneva, Switzerland, 2000
124 SEKISTOV, V. N., GAVRIN, A. L., ANDREEV, V. Yu. et al.: ‘Low-orbit satellite radar imaging with narrow-band signals’, Radiotekhnika i Electronica, 2000, 45 (7), pp. 830–36 (in Russian)
125 SEPHTON, A. J., and PARTINGTON, K. C.: ‘Towards operational monitoring of Arctic sea ice by SAR’, in TSATSOULIS, C., and KWOK, R. (Eds): ‘Analysis of SAR data of the polar oceans. Recent advances’ (Springer-Praxis, Berlin, Heidelberg, 1998), pp. 259–79
126 SHUCHMAN, R. A., LYZENGA, D. R., LAKE, B. M., HUGHES, B. A., GASPAROVICH, R. F., and KASISCHKE, E. S.: ‘Comparison of joint Canada–U.S. ocean wave investigation project synthetic aperture radar data with internal wave observations and modeling results’, Journal of Geophysical Research, 1988, 93 (C10), pp. 12283–91
127 SKADER, G. D.: ‘An introduction to computerized tomography’, Proceedings of IEEE, 1978, 66 (6), pp. 5–16
128 SOH, L.-K., TSATSOULIS, C., and HOLT, B.: ‘Identifying ice floes and computing ice floe distribution in SAR images’, in TSATSOULIS, C., and KWOK, R. (Eds): ‘Analysis of SAR data of the polar oceans. Recent advances’ (Springer-Praxis, Berlin, Heidelberg, 1998), pp. 9–34
129 STEINBERG, B. D.: ‘Microwave imaging with large antenna arrays. Radio camera principles and techniques’ (John Wiley & Sons, New York, 1983)
130 STEINBERG, B. D.: ‘Aircraft radar imaging with microwaves’, Proceedings of IEEE, 1988, 76 (12), pp. 1578–92
131 STROKE, G. W.: ‘An introduction to coherent optics and holography’ (Academic Press, New York, London, 1966)
132 TATARSKY, V. I.: ‘Wave propagation in a turbulent atmosphere’ (Nauka, Moscow, 1967) (in Russian)
133 TATARSKY, V. I.: ‘Wave propagation in a turbulent medium’ (McGraw-Hill, New York, 1961)
134 THOMPSON, M. C., and JANES, H. B.: ‘Measurements of phase front distortion on an elevated line-of-sight path’, IEEE Transactions on Aerospace and Electronic Systems, 1970, AES-6 (5), pp. 645–56
135 TISON, C., NICOLAS, J.-M., TUPIN, F., and MAITRE, H.: ‘A new statistical model for Markovian classification of urban areas in high-resolution SAR images’, IEEE Transactions on Geoscience and Remote Sensing, 2004, GE-42 (10), pp. 2046–57
136 TITOV, M. P., TOLSTOV, E. F., and FOMKIN, B. A.: ‘Mathematical modelling in aviation’, in BELOCERKOVSKY, S. M. (Ed.): ‘Problems of cybernetics’ (Nauka, Moscow, 1983), pp. 139–45
137 UFIMTZEV, P. Ya.: ‘Method of edge waves in physical diffraction theory’ (Sovetskoe Radio, Moscow, 1962) (in Russian)
138 VARGANOV, M. E., ZINOVIEV, J. S., ASTANIN, L. Yu. et al.: ‘Aircraft radar characteristics’ (Radio i Svyaz, Moscow, 1985) (in Russian)
139 WIRTH, W. D.: ‘High resolution in azimuth for radar targets moving on a straight line’, IEEE Transactions on Aerospace and Electronic Systems, 1980, AES-16 (1), pp. 101–3
140 WALKER, J. L.: ‘Range-Doppler imaging of rotating objects’, IEEE Transactions on Aerospace and Electronic Systems, 1980, AES-16 (1), pp. 23–52
141 YEH, K. C., and LIN, C. H.: ‘Radio wave scintillation in the ionosphere’, Proceedings of IEEE, 1982, 70 (4), pp. 324–60
142 YU, F. T. S.: ‘Introduction to diffraction, information processing, and holography’ (The MIT Press, Cambridge, MA, 1973)
143 ZINOVIEV, J. S., and PASMUROV, A. Ya.: ‘Holographic principles application for SAR analysis’, in POTEKHIN, V. A. (Ed.): ‘Image and signal optical processing’ (USSR Academy of Sciences, Leningrad, 1981), pp. 3–15 (in Russian)
144 ZINOVIEV, J. S., and PASMUROV, A. Ya.: ‘Evaluation of SAR phase fluctuations caused by turbulent troposphere’, Radiotekhnika i Electronica, 1975, 20 (11), pp. 2386–88 (in Russian)
145 ZINOVIEV, J. S., and PASMUROV, A. Ya.: ‘Method for recording and processing of 1D Fourier microwave holograms’, Pisma v Zhurnal Tekhnicheskoy Fiziki, 1977, 3 (1), pp. 28–32 (in Russian)
146 ZINOVIEV, J. S., and PASMUROV, A. Ya.: ‘Methods of inverse aperture synthesis for radar with narrow-band signals’, Zarubezhnaya Radioelectronica, 1985, 3, pp. 27–39 (in Russian)
List of abbreviations
1D One-dimensional
2D Two-dimensional
3D Three-dimensional
AB Adaptive beamforming
AEC Anechoic chamber
CAT Computer-aided tomography
CBP Convolutional back-projection method
CCA Circular convolution algorithm
CIS Canadian Ice Service
DFT Discrete Fourier transform
ECP Extended coherent processing
ESA European Space Agency
EWM Edge waves method
FCC Frequency contrast characteristics
FFT Fast Fourier transform
GSSR Goldstone solar system radar
GTD Geometrical theory of diffraction
IFT Inverse Fourier transform
ISAR Inverse synthetic aperture radar
LFM Linear frequency modulation
LRIR Long-range imaging radar
NRCS Normalised radar cross-section
NBM Narrowband mode
PH Partial hologram
PRR Pulse repetition rate
RCS Radar cross-section
RLOS Radar line of sight
SAP Synthetic antenna pattern
SAR Synthetic aperture radar
SCF Space carrier frequency
SCS Specific cross-section
SGL Spatial grey level
SST Sea surface temperature
WBM Wideband mode
WMO World Meteorological Organization
Index
Abbe’sformula 21, 108
adaptivebeamforming 33
adaptivebeamformingalgorithm 21
aerodynamictarget 148, 151–2
airborneradars 60, 79, 196
aircraft 60
aircraft imaging 21–3, 215
algorithms
adaptivebeamforming 21
Calman’s 186
circular convolution 34–5
convolutionback-projection 18–19, 73–5,
118–19
heuristic 187–8
interpolation 18, 73–5
processing 130–45
range-Doppler 34–5
reconstruction 221
tomographic 70, 72–7
Wiener 185–6
all-weather mapping 24
Almaz-1 192, 195
amplitudefactor 12
anechoiccamera 114, 116
anechoicchamber 30, 113, 218–22
echo-freezone 218
reconstructionalgorithm 221
Antarctic 195
antennaapproach 33, 147
antennaarrays 20–2, 60
antirecognitiondevices 229
apertureangle 33
aperturecharacteristics 173–8
aperturenoise 176
apertureperformance 173
aperturesynthesis 31–3
aposterior techniques 184–5
archaeological surveys 24
Arctic 195–206
seaicemonitoring 195–8
artificial referencewave 38–9, 51
ASAR 193–4, 196–7, 205
operationmodes 193–4
aspect variation 126
autocorrelationcoherence 84
autocorrelationfunction 84
averagingof resolutionelements 87, 94
azimuthambiguityfunction 172
azimuthdefocusing 205–6
azimuthal resolution 180–1
azimuth-range 49
backprojection 131
bathymetry 195
Bayesclassifier 226, 228
Bessel functions 29
bistaticradar 101–2
bistaticscattering 28–9
Calman’sfilteringalgorithms 186
Carman’smodel 160–2
carrier trackinstabilities 57–8
CAT radar 76
circular convolutionalgorithm 34–5
classification: seetarget classification
cloudeffects 166
coherence 40
coherencelength 40
coherencestability 40–1, 43
coherent imaging 47–8
coherent radar 21–3
  holographic processing 36–41
  tomographic processing 41–8
coherent signal 40–1
coherent summation of partial components 126–31
  1D 139
  2D viewing geometry 131–42
  3D viewing geometry 141–5
  complexity 136–7, 140–2, 145
complex microwave Fourier hologram 110–15
complex targets 27–8
computer-aided tomography 74, 76
computerised tomography 14–20, 48
  remote-probing 15
  remote-sensing 15
  see also tomographic processing
contrast 94–9, 175, 177
convolution back-projection algorithm 18–19, 73–5, 118–19
correlated processing 35, 49
correlation function 96
critical volume 179
cross range resolution 148–51
cross-correlation approach 33–4
cylinder 29–31, 219, 221–4
  local scattering characteristics 223–4
dark level 175, 177
deformed ice 201, 204
density distribution 14–16, 19
diffraction 29, 116
diffraction-limited image 127
digital processing 112–16, 145
direct synthesis 31–2
distortion 176
Doppler frequency shift 27
Doppler-range method: see range-Doppler method
dynamic range 175, 177
earth surface imaging 20, 34, 60
  satellite SARs 191–215
earth surface survey 34, 70–1, 79
echo signal 27, 46, 148, 182–3
edge wave method 29
electron density fluctuations 166–7
ENVISAT 193–4, 196–7, 205
ERS-1 20, 193, 195, 197, 212
ERS-2 20, 193, 195, 197, 206, 208, 210–11, 214
  mesoscale ocean phenomena 208, 210–11, 214
  sea ice 206
extended coherent processing 35–6
extended targets 28–9, 31, 79–85
  compact 28–9
  partially coherent 85–6
  proper 28, 31
fast ice 201, 204
first-year ice 200–2
flop 136
focal depth 8–10, 14, 67–70
focal length 7
focal point 7
focused aperture 54
focusing depth 59
forestry 195
Fourier microwave hologram 39–40, 52–3
  complex 110–15
  rotating target 101–9
  simulation 112–16
Fourier space 16, 18
Fourier transform 18
Fraunhofer microwave hologram 39–40, 52–3
frequency stability 40–1
frequency-contrast characteristic 95–8
Fresnel lens 49
Fresnel microwave hologram 39–40, 52
Fresnel zone plate 33, 50
Fresnel-Kirchhoff diffraction formula 38
friction velocity 207
front-looking holographic radar 60–70
  hologram recording 60–3
  image reconstruction 62–7
  resolution 61–2
gain in the signal-to-noise ratio 174–5
geological structures 24
geometric accuracy 80
geometrical theory of diffraction 29
globules 158–9
Goldstone Solar System Radar 41
grease ice 198–9
grey-level resolution 178–81
half-tone resolution 178–81
Hankel transform 75–6
heuristic algorithm 187–8
hologram 11–14
  real image 12–13, 49–50, 62–6
  virtual image 12–13, 49–50, 62–6
  wideband 123–4
  see also microwave hologram
hologram function 11–12
hologram modulation index 12
hologram recording 10–11
  1D 51
  front-looking holographic radar 60–3
  SAR 50–3
holographic image 14
holographic processing
  coherent radar 36–41
  front-looking radar 60–3
  ISAR 35–6
  rotating targets 101–16
  SAR 33–4
holographic technique 1–2
holography 10–14
homomorphic image processing 186
Huygens-Fresnel integral 53
ice edge 201–2, 205
ice floes 200–2
ice monitoring 195–8
ice navigation 196–7, 201
ice parameters 195, 197
icebergs 195–6, 202, 206
icebreakers 197–8, 201
ICEWATCH 196
image
  computerised tomography 14–20
  holographic 10–14
  microwave 20–5
  optical 7–10
  thin lens 7–8
image intensity 81, 87–96
image interpretability 178–80
image interpretation 80
image quality 24, 57–60, 77, 80–2, 173–81
  integral evaluation 177–81
image reconstruction 11–14, 16, 36, 124
  coherent summation of partial components 126–30
  digital simulation 112–16
  front-looking holographic radar 62–7
  microwave hologram 53–6
  spot-light SAR 72–7
image smoothing 88, 91, 93–4
image stability 174
imaging radars 7
imaging time 176
impulse response 152–3
incoherent signal integration 81, 87, 90–4
inertia region 159–61
INMARSAT 196
integral image 132–5
interference pattern 11, 13
internal waves 212–14
interpolation algorithm 18, 73–5
interpretability 178–80
intrinsic aperture noise level 176
inverse aperture synthesis 23, 147–8, 215–17
inverse Fourier transform 118
inverse source problem 16
inverse synthesis 31–2
  rotating target 101–9
inverse synthetic aperture radar: see ISAR
ionosphere 147
  electron density fluctuations 166–7
  turbulence 166–7, 172
  turbulence parameter 167
ISAR 32–3, 148
  instability 40–1
  signal processing 34–6
  tomographic processing 41
Kell’s theorem 102
kernel function 139
Kosmos-1870 192, 195
linear filtering model 124
linear filtration theory 95, 98
linearly moving target 147
local responses 30
local statistics technique 186–8
long-range imaging radar 215–16
low contrast targets 94–9
magnification 8, 63–7
mean image power 175
median filtering 184
microholograms 13–14
micronavigation noise 173
microwave holograms 2, 36–40
  1D 101–12
  amplitude-phase 38
  Fourier 39–40, 46, 52–3, 101–16
  Fraunhofer 39–40, 52–3
  Fresnel 39–40, 52
  multiplicative 37, 50
  narrowband 131–2, 134, 136–7, 140, 145
  phase-only 37–8
  quadrature 37–8, 106, 115
  wideband 133, 136–42
microwave holographic receiver 38–9
microwave image 7, 20, 23–4
microwave imaging 20–5, 101
microwave radars 7
microwaves 1
monostatic scattering 28–9
moving targets 58–9
  rotating 101–45
  straight line 147–56
multibeam processing: see multi-ray processing
multiplicative noise 98
multi-ray processing 87, 94, 184
narrowband microwave hologram 131–2, 134, 136–7, 140, 145
Newton’s formulae 7–8
nilas 198, 200–1
noise dark level 175
noise distribution 177–8
nonparametric classifier 226, 228
normalised radar cross-section 205–7
Northern Sea Route 196–8
ocean circulation 194–5
ocean currents 94, 212
ocean dynamics 191
ocean phenomena
  mesoscale 204–15
  surface velocity 205
  see also sea surface imaging
ocean waves 191, 194–5
  internal 212–14
  see also wave imaging
oceanography 191–2
oil spills 94, 195, 211–12
old ice 200, 202
optical image 7–10, 23
  real 10
  virtual 10
orthoscopic image 10, 12, 14
pancake ice 200, 203
panoramic radars 20
partial coherence 79, 84
partial holograms 127–45
  spectral components 136
partial images 130, 134, 137, 140–3, 145
  radial 142
  transverse 137, 140–3
partially coherent signal 40, 148–51
  radar image modelling 152–6
partially coherent target imaging 79
  extended 83–6
  low contrast 94–9
  mathematical model 85–6
  statistical image characteristics 87–94
path instabilities 151–6
pattern recognition theory 225
phase 12, 14
phase errors 157–72
  turbulent ionosphere 172
  turbulent troposphere 167–72
phase fluctuations 167–70, 172
phase noise 152, 156
phase-only hologram 37–8
pixels 176
planet surveys 41, 217
plate contrast coefficient 12
point targets 27, 58
polar format processing 35–6
polar grid 18
potential functions 226, 228
potential SAR characteristics 173–5
principal planes 7–8
probing 14–18
projection slice theorem 17–18, 47, 72, 74
pseudoscopic image 10, 12, 14, 64
quasi-holographic radar systems 2, 31, 49–60
  hologram recording 50–3
  image reconstruction 53–6
radar characteristics 175–8
radar cross-section 28, 30
radar data processing 124–6
radar imaging 2
  basic concepts 7–25
  methods 27–48
  microwave 20–5
  partially coherent signals 152–6
radar interferometer 217
radar responses 217–19
  closed tests 218–19
  open tests 218
RADARSAT 193–7, 201–2, 204
radio camera 21
radio telescope 16
radiovision 1
radiometric precision 80
radiometric resolution 81, 94
rain cells 212–13
range resolution 180–1
range-Doppler algorithm 34–5
range-Doppler method 1–3, 33, 215
Rayleigh model 182–3
real antennas 20
real apertures 20–1
recognition: see target recognition
reconstruction algorithm 221
reference voltage 22
reference wave 11–13
refractive index, troposphere 157–66
resolution 23, 33, 59–60, 177
  azimuthal 180–1
  cross range 148–51
  defocused microwave image 108–9
  front-looking holographic radar 61–2
  grey-level 178–81
  half-tone 178–81
  path instabilities 153–6
  potential 173
  radiometric 81, 94
  range 180–1
  spatial 80, 94
  spot-light SAR 76
  synthesised Fourier hologram 107–8
resolving power: see resolution
Rice reflection model 182–3
rotating target imaging
  holographic approach 101–16
  tomographic approach 117–45
sample characteristic 174
sampling theorem 21
SAR 31–2, 85–6
  holographic approach 33–4
  instability 40
  low contrast targets 94, 97–8
  potential characteristics 173–5
  satellite SARs 191–5
  signal processing 33–4
  spaceborne 167–8
  test ground 176
  turbulence 167–8
  see also side-looking synthetic aperture radars
  see also spot-light SAR
satellite imaging 79, 120–4, 129, 131, 143, 215
  aspect variation 120–1
satellite SARs 191–5
scaling 64–7, 69
ScanSAR 195–7, 201–2, 204
scatterers 28–31
scattering matrix 217
sea currents 94, 212
sea ice 192–3
  classification 198–204
  imagery 198–206
  monitoring 195–8
  parameters 198, 203–4
sea surface imaging 79–80, 99, 204–5
  rough sea surface 82–5
  see also ocean phenomena
  see also wave imaging
sea surface temperature 207, 209
SEASAT 191–2, 195
sharpness 175
ship wakes 214–15
shipwrecks 211
side-looking radar 1–3, 21
side-looking synthetic aperture radars 31–2, 49–60, 79
  hologram recording 50–3
  image reconstruction 53–6
  resolution redistribution 180–1
  see also SAR
sigma-filter 187–8
sign vectors 225–8
signal processing 33–6
signal-to-noise ratio 88–90, 92–4
  gain 174–5
SIR-A 191–2
SIR-B 191–2, 195
SIR-C 192
Smith-Weintraub formula 157
soil classification 24
soil moisture 195
space carrier frequency 12, 109
space frequency 44–8
Space Shuttle 191–2
spaceborne SAR 167–8
spacecraft identification 79, 126, 215–17
  2D 215–17
  see also satellite imaging
spatial resolution 80, 94
specific cross-section 80–1, 95–6
speckle 24–5, 80, 175, 177, 181–9
  statistical characteristics 182–4
  suppression 184–9
speckle field 14
spot-light SAR 32, 70–7
  image reconstruction 72–7
  resolution 76
squall lines 212
statistical image characteristics 87–94
structure function 159, 163–5
subsurface probing 24
subwater imaging 24
surface hologram 124
swath width 176
swell 208–10
synthesis range 97–8
synthetic antenna 21–2
synthetic antenna pattern 173
synthetic aperture length 23
synthetic aperture pattern 23, 33, 173–4
synthetic aperture radar imaging 19
synthetic aperture radars: see SAR
synthetic apertures 20–3
  see also aperture synthesis
target characteristics 217–24
target classification 222–8
target models 27–31
target recognition 222–9
  efficiency 226, 228
  mathematical model 225–6
  probability 226, 228
  sign vectors 225–8
target reflectivity 43–5
target viewing 42
targets
  aerodynamic 148, 151–2
  complex 27–8
  extended 28–9, 31, 79–82
  low contrast 94–9
  moving 58–9
  moving in a straight line 147–56
  partially coherent 79, 83–99
  point 27, 58
  rotating 101–45
three-dimensional images 10, 12, 14, 20, 25, 69, 127
three-dimensional viewing geometry 119–26
tomographic algorithms
  spot-light SAR 70, 72–7
tomographic processing
  coherent radar 41–8
  frequency domain 117–18
  ISAR 35–6, 41
  rotating targets 117–45
  SAR 33
  space domain 118–19
  spot-light SAR 70–7
  see also computerised tomography
tomographic techniques 2
tomography 14
transmittance 12
troposphere 157–72
  near-earth 159
  phase errors 167–72
  refractive index distribution 157–66
  turbulence 158–72
true image 14
turbulence 158–63, 165–72
  inner-scale size 159
  ionosphere 166–7, 172
  isotropic 158
  outer-scale size 158
  troposphere 158–72
turbulent flows 148
two-dimensional image 127
two-dimensional viewing geometry 41–8
  rotating targets 131–42
uncertainty function 61
unfocused aperture 54
unistatic radar 101–3
upwelling 207–9
urban area imaging 24–5
urban area monitoring 195
velocity bunching effect 209–10
wave imaging 82–5, 205–6, 208–9
  see also ocean waves
whirls 159–61, 166–7, 172
wideband hologram 123–4
wideband microwave hologram 133, 136–42
  processing algorithms 138–9
Wiener filtering algorithm 185–6
Wiener-Khinchin theorem 166
wind slicks 94
wind squall 212
wind stress 208
X-ray imaging 2
X-ray tomography 17–19
young ice 198, 200–1
