Technical Report · August 2011 · DOI: 10.13140/2.1.2519.2963



The Radon transform and its inverse


August 2011
Alain C. France, CEA/Saclay

1. OVERVIEW
Tomography is a process which aims at reconstructing a two-dimensional function from a
collection of its line integrals along specific projection directions, usually uniformly distributed in
angle. While discrete in practical applications, this process is conveniently analyzed in
continuous form. Data sampling then induces aliases, as in Fourier analysis. A minimum number of
projection directions must be respected in order to keep these aliases at a negligible or at least
acceptable level: if N is the number of samples of one projection, the minimum number of projections
should be > N/2 to prevent the formation of aliases. The situation where only a very small number of
projections is available seems at first glance hopeless, as it would be in Fourier analysis. The
present study gives a detailed analysis of this problem. A numerically efficient and fast (a few minutes
on a standard laptop) algorithm is proposed. Its limitations are explored in detail, and numerous
numerical examples are given.
We start with a review of the standard, properly sampled tomography problem (§2). From a
mathematical point of view, the source image f(x,y), where x and y are Cartesian coordinates, is
transformed by the projections into its Radon transform (Johann Radon, 1887-1956) Rf(t,θ), where t is
the metric coordinate across the projection direction defined by the polar angle θ. The tomography
problem consists in recovering f from a given Rf. This inversion is known to be possible in two
different ways. The first one (§2.1) is illustrated by the so-called Projection-Slice Theorem, which
states that any radial cut of the source image's 2D Fourier transform at some polar angle θ is equal to
the 1D Fourier transform of Rf(t,θ). The second one (§2.2) introduces the dual transform R*. If g(t,θ)
is the Radon transform of some function, then R*g(x,y) is defined as the integral of g(t,θ) over all
values of t such that the line defined by (t,θ) goes through the point (x,y) (R* turns out to be the
adjoint of R with respect to the canonical inner product in L²). Then the Dual Radon Transform
Theorem states that R*Rf is equal to the convolution of f with the 1/ρ kernel, where ρ is the radial
coordinate in the (x,y) plane.
These two ways are quite different from an implementation point of view: the Projection-Slice
Theorem offers an exact inversion based on the Fourier transform, which may be expected to be quite fast
using digital FFTs (Fast Fourier Transforms). On the other hand, the Dual Radon Transform Theorem
requires the computation of the dual transform, followed by deconvolution with the 1/ρ kernel. For
these reasons we consider here the implementation of the Projection-Slice Theorem only. This is easily
done via the so-called Filtered Back-Projection Theorem (§2.3). The Radon transforms Rf(t,θ) are first
Fourier-transformed into F₁Rf(ν,θ), where ν is the radial frequency in the spatial frequency plane.
Then an interpolation scheme transforms the spectrum F₁Rf(ν,θ), defined in polar coordinates, into the
spectrum F₂f(νx,νy) defined in Cartesian coordinates. Finally a 2D inverse Fourier transform recovers
the original source image. The numerical implementation (§2.4) consists of two computational chains:
the DRT (Discrete Radon Transform) computes the Radon transform of a given source image; the
IDRT (Inverse Discrete Radon Transform) recovers the source image. Both chains use digital FFTs,
and typically execute within a few tens of seconds on a standard laptop.
Numerous numerical examples are given in §2.5. In all considered cases, the relative amplitude
ripple of the recovered image is a few 10⁻³. Spurious pixels (popping up outside the source function
support) are rather low, typically on the order of −50 dB.
Let us now assume a small number L of projection directions. After 1D Fourier transforms of the
Radon transforms, the image spectrum is known along L lines going through (νx,νy) = (0,0), and
regularly distributed every π/L. Of course the polar-to-Cartesian coordinate transform won't work in

this case, since data samples belonging to distinct lines are too widely separated. However, the 2D
inverse Fourier transform integral can still be computed (by brute force). Note that this integral exists
provided that the functions are chosen in an adequate space (here, the Schwartz space); however it is no
longer a Fourier transform per se. The interesting point is that this integral receives a meaningful
physical interpretation. To understand why, let us consider an isolated pixel located at polar
coordinates (ρ₀,θ₀). Its spectrum is a plane wave propagating in the (νx,νy) plane, with "wavelength"
1/ρ₀ and "direction of arrival" θ₀. The image reconstruction process should recover the pixel location
from this plane wave, sampled along the θ = ℓπ/L, ℓ = 0, ..., L−1 lines. This is typically a radio
interferometer problem, and our integral simply describes the interferometer output. And since the
transformation relating the source image to the "interferometer" output is linear, it can be represented
by the convolution of the source image with a point spread function (PSF), which depends only on
the "interferometer" itself. In practical situations, the source image has compact support, and the
spatial frequency plane is truncated at some maximum value. The convolution operator is then
replaced (§3.1) by another linear operator C acting on the source image f, an operator which again is
perfectly known (contrary to radio astronomy for instance, where antenna and radio receiver
channels are only approximately known). The remaining problem is then to reconstruct f from some
known g = Cf.
The reconstruction method proposed in §3.2 is the so-called Projected Van Cittert fixed-point
iteration f_{n+1} = P_C T_μ f_n, where T_μ is defined by T_μ u = u + μ(g − Cu), μ is a relaxation parameter
to be chosen, and P_C is the orthogonal projection on the space of positive functions. First we observe
that the C operator has a non-empty kernel (Ker C), which is the set of functions whose spectrum is
zero along the sampling lines; T_μ is surely not contractive, and at best non-expansive with a proper
choice of μ. Secondly, P_C is easily seen to be non-expansive. Hence P_C T_μ is not contractive, and
the Banach theorem cannot be used to demonstrate convergence. However, it may be shown (§3.2.1) that
P_C T_μ belongs to the class of averaged operators, and the Krasnoselskii-Mann theorem shows that the
iteration converges to a fixed point, which is of the form P_R f + u∞, where P_R f is the orthogonal
projection of f on the range of C, and u∞ is an element of Ker C. Of course the projection of f on
Ker C is not "seen" by the interferometer integral and is definitively lost. Moreover the special
properties of the PSF can be used to show (§3.2.2) that the mean of f_n converges to the mean of f as
n → ∞. Nevertheless it has not been possible to determine a priori bounds for the norm of u∞, and the
reconstruction accuracy has to be estimated via numerical simulations. Elementary characterizations of
Ker C are given in §3.2.2. Note by the way that the algorithm makes use of one single assumption on
the function to be reconstructed, namely that it is positive.
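The iteration above can be sketched in a few lines. In this minimal illustration the operator C of the report is replaced by an assumed well-conditioned symmetric matrix with spectrum in (0,1] (so that, unlike the report's rank-deficient case, the iteration provably converges to the positive source), and the projection P_C is a simple clipping to non-negative values:

```python
import numpy as np

def projected_van_cittert(C, g, mu, n_iter):
    # f_{n+1} = P_C T_mu f_n,  T_mu u = u + mu*(g - C u),
    # P_C = orthogonal projection on non-negative functions (clipping).
    f = np.zeros(C.shape[1])
    for _ in range(n_iter):
        f = f + mu * (g - C @ f)   # Van Cittert relaxation step T_mu
        f = np.maximum(f, 0.0)     # projection P_C
    return f

rng = np.random.default_rng(0)
n = 32
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))   # assumed stand-in operator
M = A.T @ A
C = M / np.linalg.norm(M, 2)                         # symmetric, eigenvalues in (0, 1]
f_true = np.maximum(rng.standard_normal(n), 0.0)     # positive source
g = C @ f_true
f_rec = projected_van_cittert(C, g, mu=1.0, n_iter=2000)
print(np.max(np.abs(f_rec - f_true)))
```

Here f_true is itself a fixed point of the map, and since this stand-in C has no kernel the iterate recovers it; with the report's operator the Ker C component would stay lost, as discussed above.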
Numerical implementation (§3.3) of an efficient (i.e. fast) IDRT in the case of a small number of
projection directions is a little bit tricky. The "interferometer" integral has to be computed by brute
force; however, symmetries resulting from the regular angular distribution of the sampling lines improve
the calculation speed by a factor 1.8. The integral accuracy is also directly related to the sampling rate of
the 1D Fourier transforms F₁Rf(ν, ℓπ/L); a magnification ratio of 8 has been found to be satisfactory. The
C operator acts on an N × N image, and returns an N × N image. Being linear, it may theoretically be
represented by an N² × N² matrix, which is out of the question even for a "small" 64 × 64 image (the matrix
would have more than 16 million entries). The underlying convolution product is used instead to make
the calculations in the frequency domain, via the well-known zero-padded FFT-IFFT method.
The first numerical example (§3.4.1) is a rectangle-supported function taking two discrete values.
The reconstruction accuracy is excellent, with a reconstruction error characterized by a 10⁻⁵ mean value,
a 10⁻³ standard deviation and a 10⁻² peak value. Spurious pixels (outside the source function support)
are lower than −60 dB. The next examples (§3.4.2 to 3.4.4) illustrate what happens with the "lost"
projection of f on Ker C. Clearly two source functions differing only by an element of Ker C will be
identically reconstructed, and finally the algorithm accuracy is limited by the norm of the orthogonal
projection of the source function on Ker C. The elementary characterizations of Ker C given in §3.2.2
are used to elaborate pathological examples of source images, primarily constituted by constellations
of isolated pixels distributed at the vertices of regular polygons. The exercise turned out to be more
difficult than expected, since nothing pathological has been evidenced with L = 6 projection

directions. However, spectacular cases have been obtained with L = 4. The fundamental reason for this
is that the angular order of the spectral components of Ker C elements is rather high; consequently the
corresponding spectral amplitudes scale as Bessel functions of high order, which take minute values
for moderate values of their argument. This is rather good news, and shows that the reconstruction
accuracy will be "in general" very good. The last examples (§3.4.5 to 3.4.9) consider various functions
(uniform disk, disk-supported Gaussian, and a cluster of three disk-supported Gaussian functions). In all
cases, and with L = 6, the reconstruction error is characterized by a mean value of a few 10⁻⁵, a
standard deviation of a few 10⁻², and a peak value of a few 10⁻². If the number of projection directions
is increased to L = 12 for instance, the PSF is more peaked with lower side-lobes, and the
reconstruction accuracy is improved by a factor ~10 (§3.4.9).
The behavior of the reconstruction algorithm in the presence of additive noise is illustrated in §3.4.10.
A random noise, uniformly distributed over [−0.1,+0.1] and uncorrelated from pixel to pixel, has been
added to the cluster of three disk-supported functions of §3.4.9. As any fixed-point iteration, the
present reconstruction algorithm presents the property of semi-convergence in the presence of additive
noise: the standard deviation of the reconstruction error first decreases with the number of iterations, goes
through a minimum, then increases. The optimum stopping iteration rank can be determined by
observing the behavior of the infimum of the residual g − Cf_n, which is of course available at each
iteration. This has been done manually here, but could easily be automated since this optimum is
particularly flat. It is observed that this algorithm is able to reject most of the noise, especially in
regions where the noise-free source image is zero. This is of course a consequence of the low-pass
filtering effect of the interferometer formula. The reconstruction accuracy is only slightly degraded by
the added noise; this is related to the low-pass filtering effect, but is also a good surprise for a highly
non-linear algorithm. The optimum number of iterations is much smaller than what has been needed in
noise-free runs; it turned out to be in the [100,1000] range, thus leading to very short CPU time (some
minutes on a standard laptop). The reconstruction error with L = 6 or 12 projection directions is about the
same, with a mean value of ~10⁻³, a standard deviation of a few 10⁻², and a peak value of 8·10⁻²: the
accuracy improvement that has been observed in noise-free cases when multiplying the number of
projection directions by 2 is masked by the noise contribution. Note that low-pass filtering applied to
the Radon transform (prior to the non-linear iterative method) might reduce this noise component, but
it has not been tested.
In conclusion, the "interferometer" integral followed by the Projected Van Cittert iterative method
offers a fast way to reconstruct the source image with an accuracy of a few percent, even in the presence
of a moderate 10% noise. The number of projection directions is the most stringent parameter in noise-
free or high signal-to-noise ratio situations, but is clearly less critical with 10% amplitude noise. Some
room is also left for theoretical developments, in particular the uniqueness and bounds of the fixed
point reached by the iterative method.

2. INVERSION OF THE RADON TRANSFORM

2.1. The Projection-Slice theorem


Definition 1. Let f be a real function in L¹(ℝ²). The Radon transform of f is defined by

Rf(t,θ) = ∫_{−∞}^{+∞} f(t cos θ − s sin θ, t sin θ + s cos θ) ds,   t ∈ ℝ, 0 ≤ θ ≤ π.

Note that f ∈ L¹(ℝ²) guarantees that Rf(t,θ) exists, and clearly Rf ∈ L¹(ℝ×[0,π]).
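Definition 1 can be checked by direct quadrature. The sketch below (sizes and the test function are illustrative choices, not from the report) integrates along the line L(t,θ); for the isotropic Gaussian f(x,y) = e^{−π(x²+y²)}, rotation invariance gives Rf(t,θ) = e^{−πt²} for every θ:

```python
import numpy as np

def radon_point(f, t, theta, s_max=6.0, n_s=6001):
    # Definition 1: Rf(t,theta) = integral over s of
    # f(t cos(theta) - s sin(theta), t sin(theta) + s cos(theta))
    s = np.linspace(-s_max, s_max, n_s)
    x = t*np.cos(theta) - s*np.sin(theta)
    y = t*np.sin(theta) + s*np.cos(theta)
    return np.sum(f(x, y)) * (s[1] - s[0])

f = lambda x, y: np.exp(-np.pi*(x**2 + y**2))   # isotropic Gaussian
for theta in (0.0, 0.4, 1.1, 2.9):
    for t in (0.0, 0.25, 0.8):
        assert abs(radon_point(f, t, theta) - np.exp(-np.pi*t**2)) < 1e-8
print("Rf(t, theta) = exp(-pi t^2), independent of theta")
```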

[Figure: projection geometry. The line L(t,θ) is parametrized by its signed distance t to the origin and the polar angle θ of its normal; s runs along the line, and Rf(t,θ) is the integral of f along L(t,θ).]

Proposition 1. (Projection-Slice Theorem). Let

F₂f(νx,νy) = ∫_{−∞}^{+∞} dx ∫_{−∞}^{+∞} dy f(x,y) e^{−j2π(νx x + νy y)}

be the 2D Fourier transform of f ∈ L¹(ℝ²); then the Fourier transforms of f and Rf are related by

∫_{−∞}^{+∞} Rf(t,θ) e^{−j2πνt} dt = F₂f(ν cos θ, ν sin θ).


PROOF. First observe that f ∈ L¹(ℝ²) implies that F₂f exists. We have

F₂f(ν cos θ, ν sin θ) = ∫_{−∞}^{+∞} dx ∫_{−∞}^{+∞} dy f(x,y) e^{−j2πν(x cos θ + y sin θ)}.

The change of variable

x = t cos θ − s sin θ,  y = t sin θ + s cos θ,  with inverse  t = x cos θ + y sin θ,  s = −x sin θ + y cos θ,

is regular with unit Jacobian; hence

F₂f(ν cos θ, ν sin θ) = ∫_{−∞}^{+∞} dt ∫_{−∞}^{+∞} ds f(t cos θ − s sin θ, t sin θ + s cos θ) e^{−j2πνt}
= ∫_{−∞}^{+∞} Rf(t,θ) e^{−j2πνt} dt. ∎


The inverse of the Radon transform exists if and only if the inverses of the two Fourier transforms in
the Proposition exist. The existence (and uniqueness) of the inverse Fourier transform can only be
guaranteed with additional conditions on f. First we recall the definition of Schwartz spaces.

Definition 2. The Schwartz space S(ℝ²) is the set of functions f ∈ C^∞(ℝ²) such that

sup_{(x,y)∈ℝ²} | x^α y^β ∂^{γ+δ}f/∂x^γ∂y^δ |  remains finite for all indices α, β, γ, δ ∈ ℕ.

Note this is an extension to higher dimensions of the familiar definition of rapidly decreasing
functions. With this definition, we have (refer to standard textbooks for the proof):

Proposition 2. The Fourier transform F is a continuous linear map on S(ℝ²), and F⁻¹ = F*.

Finally we have:

Proposition 3. Let f be a real function in L²(ℝ²), and Rf its Radon transform. Then f may be uniquely
recovered by the Projection-Slice Theorem.

PROOF. The Schwartz space is dense in L²(ℝ²) (the space of square-integrable functions), hence there
exists a unique extension of the Fourier transform to L²(ℝ²), which satisfies F⁻¹ = F*. Then the Fourier
transforms in the Projection-Slice Theorem have a unique inverse. ∎

The Projection-Slice theorem allows for the inversion of the Radon transform as illustrated by the
following diagram, where F₁, F₁⁻¹ are 1D Fourier transforms, and F₂, F₂⁻¹ are 2D Fourier transforms.

Projection-Slice Theorem

    f ────R────► Rf
    │            │
    F₂           F₁
    ▼            ▼
    F₂f ◄──────► F₁Rf   (radial slice of F₂f at angle θ; f is recovered via F₂⁻¹)

In order to make things clear, we now apply the Projection-Slice Theorem to some simplistic
functions. The main interest of the following Propositions resides in their double proofs. We have

Proposition 4. Let f(x,y) be the characteristic function of a centered disk with radius a, i.e.
f(x,y) = 1 if x² + y² ≤ a², f(x,y) = 0 otherwise. Then

Rf(t,θ) = Rf(t) = 2 √(a² − t²) rect(t/2a).

PROOF 1. Application of Definition 1 clearly shows that Rf does not depend on θ. We have

Rf(t) = ∫_{−s⁺}^{+s⁺} 1 · ds′ = 2s⁺  if −a ≤ t ≤ +a,  Rf(t) = 0 otherwise;

simple geometry gives s⁺ = √(a² − t²), and the Proposition follows. ∎
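Proposition 4 can be verified numerically by a brute-force quadrature of Definition 1 (a sketch with illustrative sizes; the O(δs) error comes from the jump of the characteristic function at the disk boundary):

```python
import numpy as np

def radon_point(f, t, theta, s_max=2.0, n_s=20001):
    # Definition 1 evaluated by a Riemann sum over the line parameter s
    s = np.linspace(-s_max, s_max, n_s)
    x = t*np.cos(theta) - s*np.sin(theta)
    y = t*np.sin(theta) + s*np.cos(theta)
    return np.sum(f(x, y)) * (s[1] - s[0])

a = 0.3
disk = lambda x, y: (x**2 + y**2 <= a**2).astype(float)
for theta in (0.0, 0.7, 2.0):
    for t in (0.0, 0.1, 0.25, 0.35):
        exact = 2*np.sqrt(a**2 - t**2) if abs(t) < a else 0.0
        assert abs(radon_point(disk, t, theta) - exact) < 1e-3
print("Rf(t) = 2 sqrt(a^2 - t^2) rect(t/2a), independent of theta")
```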

PROOF 2. (i) In order to apply the Projection-Slice Theorem, we first need

F̃f(ν,θ) := F₂f(ν cos θ, ν sin θ) = ∫₀^{2π} dφ ∫₀^{∞} ρ dρ f̃(ρ,φ) e^{−j2πνρ cos(φ−θ)},

where the ~ symbol is used to distinguish expressions in polar coordinates from their counterparts in
Cartesian coordinates. Then plugging the Jacobi-Anger expansion

e^{−j2πνρ cos(φ−θ)} = Σ_{n=−∞}^{+∞} (−j)ⁿ Jₙ(2πνρ) e^{jn(φ−θ)}   (ν, ρ ∈ ℝ)

into the integrand, we see that the expansion terms yield zero integral for all n with the exception of
n = 0, and what remains is the Hankel transform

F̃f(ν,θ) = 2π ∫₀^{a} ρ J₀(2πνρ) dρ.

The integral is easily calculated by applying the fundamental relation for the derivatives of Bessel
functions of the first kind,

(1/z · d/dz)^m [ zⁿ Jₙ(z) ] = z^{n−m} J_{n−m}(z),

with m = 1, n = 1; we obtain d/dz [z J₁(z)] = z J₀(z), hence ∫₀^{ζ} z J₀(z) dz = ζ J₁(ζ).

The Hankel transform becomes

F̃f(ν,θ) = 2π/(2πν)² ∫₀^{2πνa} J₀(z) z dz = (a/ν) J₁(2πνa),  i.e.

F̃f(ν,θ) = πa² · 2J₁(2πνa)/(2πνa).
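This closed form can be cross-checked numerically. The sketch below evaluates J₀ and J₁ from the integral representation Jₙ(z) = (1/π)∫₀^π cos(nφ − z sin φ) dφ (so that no special-function library is assumed) and compares the radial quadrature of the Hankel transform with πa²·2J₁(2πνa)/(2πνa); all sizes are illustrative:

```python
import numpy as np

def bessel_j(n, z, n_phi=400):
    # J_n(z) = (1/pi) * integral_0^pi cos(n*phi - z*sin(phi)) dphi (midpoint rule)
    phi = (np.arange(n_phi) + 0.5) * np.pi / n_phi
    z = np.atleast_1d(np.asarray(z, dtype=float))
    vals = np.cos(n*phi[None, :] - z[:, None]*np.sin(phi)[None, :]).mean(axis=1)
    return vals if vals.size > 1 else vals[0]

a = 0.3
rho = (np.arange(2000) + 0.5) * a / 2000      # midpoint radial grid on [0, a]
for nu in (0.5, 1.0, 3.0, 7.0):
    lhs = 2*np.pi * np.sum(rho * bessel_j(0, 2*np.pi*nu*rho)) * (a/2000)
    rhs = np.pi*a**2 * 2*bessel_j(1, 2*np.pi*nu*a) / (2*np.pi*nu*a)
    assert abs(lhs - rhs) < 1e-5
print("2 pi int_0^a rho J0(2 pi nu rho) drho == pi a^2 2J1(2 pi nu a)/(2 pi nu a)")
```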
(ii) The second (and last) step is to calculate

Rf(t,θ) = ∫_{−∞}^{+∞} F̃f(ν,θ) e^{j2πνt} dν.

To this purpose, we will use the following Fourier pair:

g(x) = Jₙ(x)/x,   Fg(ν) = (2(−j)^{n−1}/n) U_{n−1}(2πν) √(1 − 4π²ν²) rect(πν),

where U_{n−1} is the Chebyshev polynomial of the second kind with degree n−1, and rect is the
characteristic function of the interval [−1/2,+1/2]. Application with n = 1 gives

∫_{−∞}^{+∞} (J₁(x)/x) e^{−j2πνx} dx = 2 √(1 − 4π²ν²) rect(πν),

and its complex conjugate is

∫_{−∞}^{+∞} (J₁(x)/x) e^{+j2πνx} dx = 2 √(1 − 4π²ν²) rect(πν).

The Radon transform reads

Rf(t,θ) = πa² ∫_{−∞}^{+∞} [2J₁(2πνa)/(2πνa)] e^{j2πνt} dν,

and, with the change of variable x = 2πνa,

Rf(t,θ) = a ∫_{−∞}^{+∞} (J₁(x)/x) e^{j2πx·(t/2πa)} dx = 2a √(1 − 4π²(t/2πa)²) rect(t/2a) = 2 √(a² − t²) rect(t/2a),

which is again the Proposition. ∎

Two simple observations can be made at this point:
- the supports of f and Rf are bounded, while the support of F̃f is unbounded; this suggests that
the application of the Projection-Slice Theorem to numerical data will necessarily induce
truncation errors (in addition to aliasing due to data sampling);
- the Radon transform (of the characteristic function of the centered disk) does not depend on θ,
hence one single line projection is sufficient to retrieve f.
The next example considers an offset disk. We have

Proposition 5. Let f(x,y) be the characteristic function of the disk with radius a and centered at
{x₀ = ρ₀ cos θ₀, y₀ = ρ₀ sin θ₀}; then

Rf(t,θ) = 2 √(a² − [t − ρ₀ cos(θ−θ₀)]²) rect( [t − ρ₀ cos(θ−θ₀)] / 2a ).

PROOF 1. Application of Definition 1 is straightforward; the sketch illustrates the effect of the disk
offset: the projection of the disk center onto the t-axis is shifted by ρ₀ cos(θ−θ₀). ∎

[Figure: offset disk with center at polar coordinates (ρ₀,θ₀) and the shifted projection ρ₀ cos(θ−θ₀) on the t-axis.]

PROOF 2. We follow the steps of Proposition 4, PROOF 2. Let f(x,y) := f₀(x−x₀, y−y₀), where f₀
is the characteristic function of the disk with radius a centered at {0,0}; application of the translation
theorem for the Fourier transform in Cartesian coordinates reads

F₂f(νx,νy) = F₂f₀(νx,νy) e^{−j2π(νx x₀ + νy y₀)};

hence in polar coordinates

F̃f(ν,θ) = F̃f₀(ν,θ) e^{−j2πνρ₀ cos(θ−θ₀)} = πa² [2J₁(2πνa)/(2πνa)] e^{−j2πνρ₀ cos(θ−θ₀)}.

The Radon transform is given by

Rf(t,θ) = ∫_{−∞}^{+∞} F̃f(ν,θ) e^{j2πνt} dν = πa² ∫_{−∞}^{+∞} [2J₁(2πνa)/(2πνa)] e^{j2πν[t − ρ₀ cos(θ−θ₀)]} dν

= 2 √(a² − [t − ρ₀ cos(θ−θ₀)]²) rect( [t − ρ₀ cos(θ−θ₀)] / 2a ). ∎

2.2. The dual transform


The dual Radon transform may be introduced by considering the inverse problem directly: clearly the
value of f at some point (x,y) is only related to the values of Rf(t,θ) with t and θ such that the integration
line L(t,θ) goes through (x,y), i.e. t = x cos θ + y sin θ, 0 ≤ θ ≤ π. The dual Radon transform is
defined by

Definition 3. Let g ∈ L¹(ℝ×[0,π]); its dual Radon transform is defined by

R*g(x,y) = ∫₀^{π} g(x cos θ + y sin θ, θ) dθ.

The duality relation between the R and R* operators is precisely described by the

Proposition 6. Let f ∈ L¹(ℝ²) and g ∈ L¹(ℝ×[0,π]); then

∫₀^{π} dθ ∫_{−∞}^{+∞} dt Rf(t,θ) g(t,θ) = ∫_{−∞}^{+∞} dx ∫_{−∞}^{+∞} dy f(x,y) R*g(x,y).

PROOF. We have

I(θ) := ∫_{−∞}^{+∞} dt Rf(t,θ) g(t,θ) = ∫_{−∞}^{+∞} dt ∫_{−∞}^{+∞} ds f(t cos θ − s sin θ, t sin θ + s cos θ) g(t,θ);

the regular change of variable (t,s) → (x,y) yields

I(θ) = ∫_{−∞}^{+∞} dx ∫_{−∞}^{+∞} dy f(x,y) g(x cos θ + y sin θ, θ),

and integration along the θ axis gives

∫₀^{π} I(θ) dθ = ∫₀^{π} dθ ∫∫ dx dy f(x,y) g(x cos θ + y sin θ, θ)
= ∫∫ dx dy f(x,y) ∫₀^{π} dθ g(x cos θ + y sin θ, θ) = ∫∫ dx dy f(x,y) R*g(x,y),

which is the Proposition. ∎

Now assume that f ∈ L²(ℝ²) and g ∈ L²(ℝ×[0,π]); these two spaces are Hilbert spaces with the
canonical inner products

⟨f₁, f₂⟩ := ∫∫ dx dy f₁(x,y) f₂(x,y),   f₁, f₂ ∈ L²(ℝ²),

⟨g₁, g₂⟩ := ∫₀^{π} dθ ∫_{−∞}^{+∞} dt g₁(t,θ) g₂(t,θ),   g₁, g₂ ∈ L²(ℝ×[0,π]);

Proposition 6 becomes ⟨Rf, g⟩ = ⟨f, R*g⟩, i.e. R and R* are adjoint operators.
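The adjointness relation lends itself to a direct numerical check. The sketch below evaluates both inner products by quadrature for an illustrative Gaussian f and a smooth test function g (both assumed here for the demonstration, not taken from the report); for this choice both sides equal π√(π/(π+1)):

```python
import numpy as np

f = lambda x, y: np.exp(-np.pi*(x**2 + y**2))              # assumed test image
g = lambda t, th: np.exp(-t**2) * (1 + 0.5*np.cos(2*th))   # assumed test sinogram

T = np.linspace(-4, 4, 401); dt = T[1] - T[0]
S = np.linspace(-4, 4, 401); ds = S[1] - S[0]
TH = (np.arange(100) + 0.5) * np.pi / 100; dth = np.pi / 100

# left side: <Rf, g>, with Rf(t,theta) from Definition 1 by quadrature
Rf = np.array([[np.sum(f(t*np.cos(th) - S*np.sin(th),
                         t*np.sin(th) + S*np.cos(th))) * ds
                for th in TH] for t in T])
lhs = np.sum(Rf * g(T[:, None], TH[None, :])) * dt * dth

# right side: <f, R*g>, with R*g(x,y) from Definition 3 by quadrature
X, Y = np.meshgrid(T, T)
Rsg = sum(g(X*np.cos(th) + Y*np.sin(th), th) for th in TH) * dth
rhs = np.sum(f(X, Y) * Rsg) * dt * dt

exact = np.pi * np.sqrt(np.pi/(np.pi + 1))   # closed form for this f, g
assert abs(lhs - rhs) < 1e-6 * abs(lhs)
assert abs(lhs - exact) < 1e-4
print(lhs, rhs)
```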

The composition of R and R* brings us closer to the inverse of the Radon transform; we have

Proposition 7. Let f ∈ L¹(ℝ²) and f be bounded everywhere; then

R*Rf(x,y) = ∫_{−∞}^{+∞} dx′ ∫_{−∞}^{+∞} dy′ f(x′,y′) [(x−x′)² + (y−y′)²]^{−1/2}.

PROOF. (i) We have

R*Rf(x,y) = ∫₀^{π} dθ Rf(x cos θ + y sin θ, θ) = ∫₀^{π} dθ ∫_{−∞}^{+∞} ds f(x − s sin θ, y + s cos θ),

where we have used the parametric representation x′ = x − s sin θ, y′ = y + s cos θ for the lines going
through (x,y). Let us consider the change of variable (s,θ) → (x′,y′); the Jacobian of the inverse
transform is

J(s,θ) := det ∂(x′,y′)/∂(s,θ) = det [ −sin θ  −s cos θ ; cos θ  −s sin θ ] = s,

hence

J(x′,y′) := det ∂(s,θ)/∂(x′,y′) = 1/s,   |J(x′,y′)| = 1/s = [(x−x′)² + (y−y′)²]^{−1/2},

and our integral becomes

R*Rf(x,y) = ∫∫ dx′ dy′ f(x′,y′) |J(x′,y′)| = ∫∫ dx′ dy′ f(x′,y′) [(x−x′)² + (y−y′)²]^{−1/2}.

(ii) The integrand presents a singularity at (x′,y′) = (x,y), which is integrable if f(x,y) is finite. ∎

The dual transform is illustrated by the following diagram. Applying the Radon transform, followed
by its dual, results in a copy of the original image f blurred by the convolution with the 1/ρ kernel.

Dual Radon Transform

    f ──R──► Rf ──R*──► R*Rf = f ∗ (x² + y²)^{−1/2}

2.3. The Filtered Back-Projection theorem


The Filtered Back-Projection theorem offers a way to use the dual transform and also eliminate the
image blur; we have

Proposition 8. (Filtered Back-Projection theorem). Let f ∈ S(ℝ²) and Rf be its Radon transform; then

f = R*g,

where g is uniquely defined by

∫_{−∞}^{+∞} dt g(t,θ) e^{−j2πνt} = |ν| ∫_{−∞}^{+∞} dt Rf(t,θ) e^{−j2πνt}.

PROOF. The Fourier transform of f reads

F₂f(νx,νy) = ∫∫ dx dy f(x,y) e^{−j2π(νx x + νy y)},

and the inverse transform is

f(x,y) = ∫∫ dνx dνy F₂f(νx,νy) e^{+j2π(νx x + νy y)}.

The Jacobian of the change of variable (νx,νy) → (ν,θ) defined by νx = ν cos θ, νy = ν sin θ is

J(ν,θ) = det [ cos θ  −ν sin θ ; sin θ  ν cos θ ] = ν;

hence

f(x,y) = ∫₀^{2π} dθ ∫₀^{∞} ν dν F̃f(ν,θ) e^{j2πν(x cos θ + y sin θ)}
= ∫₀^{π} dθ ∫_{−∞}^{+∞} |ν| dν F̃f(ν,θ) e^{j2πν(x cos θ + y sin θ)}.

Then using the Projection-Slice theorem, one gets

f(x,y) = ∫₀^{π} dθ ∫_{−∞}^{+∞} |ν| dν [ ∫_{−∞}^{+∞} dt Rf(t,θ) e^{−j2πνt} ] e^{j2πν(x cos θ + y sin θ)}

= ∫₀^{π} dθ g(x cos θ + y sin θ, θ)

= R*g(x,y),

where we have set

g(t,θ) := ∫_{−∞}^{+∞} |ν| dν [ ∫_{−∞}^{+∞} dt′ Rf(t′,θ) e^{−j2πνt′} ] e^{j2πνt},

or equivalently ∫ dt g(t,θ) e^{−j2πνt} = |ν| ∫ dt Rf(t,θ) e^{−j2πνt}, which is the Proposition. ∎

The Filtered Back-Projection Theorem is illustrated by the following diagram.

Filtered Back-Projection Theorem

    f ──R──► Rf ──F₁──► F₁Rf ──×|ν|──► |ν|·F₁Rf ──F₁⁻¹──► g ──R*──► f

2.4. Numerical application


First observe that the Filtered Back-Projection and the Projection-Slice theorems offer equivalent
descriptions of the inverse Radon transform. Both theorems may be used to implement discrete inverse
Radon transforms (IDRT). An IDRT algorithm based on the Projection-Slice theorem however is
easier to analyze, since it uses only the discrete Fourier transform (DFT) and inverse discrete Fourier
transform (IDFT), whose behaviors are well identified.
Let us first examine the discrete Radon transform (DRT).
(1) The source data is a continuous image with size w × w. After sampling, this image becomes a 2D
array of N × N numbers; the sampling interval is δx = w/N along each axis, and the sampling rate is
νs = 1/δx = N/w. In view of using the FFT algorithm to compute the DFTs, N is chosen to be a power
of 2, and the {x = 0, y = 0} sample is arbitrarily set at location {N/2+1, N/2+1} in the array.
(2) A standard N × N FFT algorithm produces a spectrum with size N × N, extending from νx(1) = 0 to
νx(N) = νs(N−1)/N along the νx-axis, and from νy(1) = 0 to νy(N) = νs(N−1)/N along the νy-axis. Note
that the spectrum of the discretized image is periodic, with period νs along each axis: the FFT merely
returns one period of this spectrum (shown by the grey patch on the diagram). In order to apply the
Projection-Slice theorem, the FFT output must be reorganized a little bit, in such a way as to cover a full
[−νs/2,+νs/2] interval along each axis.
(3) A standard interpolation scheme may be used to generate spectrum θ-slices in the disk with radius
νs/2 and centered at {νx = 0, νy = 0}, each θ-slice having N equispaced samples ranging from −νs/2 to
+νs(N−1)/2N. The angle θ is assumed to be uniformly sampled from 0 to π(M−1)/M, M being the
number of angle samples.
(4) Finally, the inverse FFT (IFFT) of each θ-slice yields the desired Radon projection at angle θ.
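Steps (1)-(4) can be sketched as follows. The test image (a narrow Gaussian, whose projections are analytically √c·e^{−πt²/c}) and all sizes are illustrative assumptions, and a simple bilinear read stands in for the interpolation scheme of step (3):

```python
import numpy as np

N, w = 64, 2.0
dx = w / N
xs = (np.arange(N) - N//2) * dx            # {x=0} at index N/2 (0-based)
X, Y = np.meshgrid(xs, xs)
c = 0.1
f = np.exp(-np.pi*(X**2 + Y**2)/c)         # test image; Rf(t) = sqrt(c) e^{-pi t^2/c}

# steps (1)-(2): 2D FFT, reorganized to cover [-nu_s/2, +nu_s/2) per axis
spec = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(f))) * dx**2
nus = np.fft.fftshift(np.fft.fftfreq(N, d=dx))     # nu_s = N/w

def bilinear(Sp, nx, ny):
    # bilinear read of the spectrum Sp at frequency coordinates (nx, ny)
    dnu = nus[1] - nus[0]
    u = (nx - nus[0]) / dnu
    v = (ny - nus[0]) / dnu
    i0 = np.clip(np.floor(v).astype(int), 0, N-2); a = v - i0
    j0 = np.clip(np.floor(u).astype(int), 0, N-2); b = u - j0
    return ((1-a)*(1-b)*Sp[i0, j0] + (1-a)*b*Sp[i0, j0+1]
            + a*(1-b)*Sp[i0+1, j0] + a*b*Sp[i0+1, j0+1])

def drt_slice(theta):
    # step (3): interpolate a theta-slice; step (4): 1D IFFT back to t domain
    sl = bilinear(spec, nus*np.cos(theta), nus*np.sin(theta))
    return np.real(np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(sl)))) * N / w

for theta in (0.0, np.pi/4, 1.2):
    proj = drt_slice(theta)
    exact = np.sqrt(c) * np.exp(-np.pi*xs**2/c)
    assert np.max(np.abs(proj - exact)) < 0.02
print("theta-slices IFFT to the analytic Gaussian projections")
```

The fftshift/ifftshift pair implements the reorganization of step (2), and the N/w factor converts the unit-less IFFT into the dν-weighted Fourier integral.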

N N source image N N spectrum (N+1) (N+1) spectrum


y y

y
s s/2

w x s/2 x

x /2
s

s/2 s s/2 s/2


w
FFT output slices interpolation disk

This scheme will work correctly provided that at least (1) no aliasing occurs when computing the 2D
FFT, and (2) no signal is lost outside the interpolation disk. Both conditions are satisfied if the
spectrum of the continuous source image (prior to sampling) is such that (Nyquist-Kotelnikov
criterion)

ν = √(νx² + νy²) ≤ νs/2.

However, it is well known that a function with bounded support has a spectrum with unbounded
support. The Nyquist criterion is then not satisfied in the general case, and the best we can
do is low-pass filtering in such a way that |F₂f(νx,νy)| becomes negligible for ν > νs/2. For the present
study, we will use a Gaussian filter characterized by its standard deviation σν.

The impulse response of the Gaussian filter is given by the

Proposition 9. Let H(νx,νy) = e^{−(νx²+νy²)/2σν²}; its Fourier transform is given by

h(x,y) := FH(x,y) = (1/2πσx²) e^{−(x²+y²)/2σx²},  where σx = 1/(2πσν).

PROOF. We have

h(x,y) = ∫∫ dνx dνy e^{−(νx²+νy²)/2σν²} e^{j2π(νx x + νy y)}
= [ ∫ dνx e^{−νx²/2σν² + j2πνx x} ] · [ ∫ dνy e^{−νy²/2σν² + j2πνy y} ] = h_x(x) h_y(y).

The argument of the exponential in the first integral reads

−νx²/2σν² + j2πνx x = −[ νx/(σν√2) − j√2 πσν x ]² − 2π²σν²x²;

then using the change of variable u = νx/(σν√2) − j√2 πσν x, h_x becomes

h_x(x) = σν√2 e^{−2π²σν²x²} ∫ e^{−u²} du = σν√(2π) e^{−2π²σν²x²} = (1/σx√(2π)) e^{−x²/2σx²}.

In the same way, h_y(y) = (1/σx√(2π)) e^{−y²/2σx²}, and the Proposition follows. ∎
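Proposition 9 can be checked in 1D by a quick quadrature (σν = 30 as in the example of §2.5; the sample points x are illustrative):

```python
import numpy as np

sigma_nu = 30.0
sigma_x = 1.0 / (2*np.pi*sigma_nu)        # Proposition 9

nu = np.linspace(-300.0, 300.0, 120001)   # +/- 10 sigma_nu
dnu = nu[1] - nu[0]
for x in (0.0, 0.5*sigma_x, 2.0*sigma_x):
    # h_x(x) = integral of e^{-nu^2/2 sigma_nu^2} e^{j 2 pi nu x} d nu
    # (the imaginary part cancels by symmetry, hence the cosine)
    lhs = np.sum(np.exp(-nu**2/(2*sigma_nu**2)) * np.cos(2*np.pi*nu*x)) * dnu
    rhs = np.exp(-x**2/(2*sigma_x**2)) / (sigma_x*np.sqrt(2*np.pi))
    assert abs(lhs - rhs) < 1e-8 * rhs
print("h_x(x) = exp(-x^2/2 sigma_x^2)/(sigma_x sqrt(2 pi)) with sigma_x = 1/(2 pi sigma_nu)")
```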

Note that the full widths at half maximum (FWHM) of H and h are given along the coordinate axes by

Δνx = Δνy = 2√(2 ln 2) σν,   Δx = Δy = 2√(2 ln 2) σx,

in such a way that

Δx Δνx = Δy Δνy = 4 ln 2 / π ≈ 0.8825 < 1.

In order to make |F₂f(νx,νy)| negligible for ν > νs/2, we will choose σν such that νs/2 is equal to a few
σν, i.e.

σν = νs / 2nσ,   σx = nσ w / πN,

with nσ of the order of a few units. The effect of the filter in the frequency domain is equivalent to the
convolution of the original image by a Gaussian with a FWHM equal to a few pixels, which is
negligible provided that the image has been sampled finely enough.
The DRT is illustrated by the following diagram.

Radon Transform, using the Projection-Slice Theorem

    f(x,y) ──F₂──► F₂f(νx,νy) ──×H──► H·F₂f(νx,νy) ──polar──► H̃·F̃₂f(ν,θ) ──F₁⁻¹ per θ──► R(h∗f)(t,θ)

Let us now consider the inverse discrete Radon transform (IDRT).

(1) The input data is a continuous sinogram with size w × π. After sampling, the image becomes a 2D
array of N × Nθ numbers. The sampling intervals are δt = w/N along the t-axis, and δθ = π/Nθ along
the θ-axis. The sampling rate along the t-axis is νs = 1/δt = N/w. In view of using the FFT algorithm to
compute the DFTs, N is chosen to be a power of 2, and the t = 0 sample is arbitrarily set at location
N/2+1.
(2) A standard FFT algorithm produces a set of Nθ spectra, each of them having N samples, extending
from ν(1) = 0 to ν(N) = νs(N−1)/N. The periodicity of the FFT is used to reorganize the spectra, in
such a way as to cover the full [−νs/2,+νs/2] interval. The spectra are padded with zeros on both the
positive and negative sides to cover the [−νs√2/2, +νs√2/2] interval.
(3) An interpolation scheme is used to generate a (N+1) × (N+1) 2D spectrum spanning the square
domain [−νs/2,+νs/2] × [−νs/2,+νs/2], and uniformly sampled along the νx and νy axes.

(4) Finally, a standard 2D IFFT algorithm is used to generate the desired image.

This will work correctly provided that the original sinogram does not contain spectral components at
frequencies higher than νs/2. As before, a 1D Gaussian filter may be used to attenuate high-frequency
components. The most critical point is then the interpolation from the polar to the Cartesian grid. Again
the Nyquist criterion must be satisfied. The largest νx- or νy-distance between two samples in the polar
grid is approximately δθ νs/2, and we should maintain δθ νs/2 < νs/N to satisfy the Nyquist criterion;
hence

Nθ ≥ πN/2.
The ideal interpolation scheme for the spectrum of a band-limited function is easily characterized by
the following

Proposition 10. Let {f_{m,n}}, 1 ≤ m,n ≤ N, be a 2D discrete function, and {F_{k,ℓ}}, 1 ≤ k,ℓ ≤ N, its DFT;
the DFT interpolated at non-integer (κ,λ), 1 ≤ κ,λ < N, is given by

F_{κ,λ} = Σ_{k=1}^{N} Σ_{ℓ=1}^{N} F_{k,ℓ} D_N(2π(k−κ)/N) D_N(2π(ℓ−λ)/N),

where D_N(φ) = (1/N) e^{jφ(N−1)/2} sin(Nφ/2) / sin(φ/2).

Proof. From the definitions of the DFT and IDFT, we have

F_{k,ℓ} = Σ_{m=1}^{N} Σ_{n=1}^{N} f_{m,n} e^{−j2π[(m−1)(k−1) + (n−1)(ℓ−1)]/N},

f_{m,n} = (1/N²) Σ_{k=1}^{N} Σ_{ℓ=1}^{N} F_{k,ℓ} e^{+j2π[(m−1)(k−1) + (n−1)(ℓ−1)]/N}.

Then for any real κ, λ, 1 ≤ κ,λ < N, we get

F_{κ,λ} = Σ_{m=1}^{N} Σ_{n=1}^{N} f_{m,n} e^{−j2π[(m−1)(κ−1) + (n−1)(λ−1)]/N}

= (1/N²) Σ_{m,n} Σ_{k,ℓ} F_{k,ℓ} e^{+j2π[(m−1)(k−1) + (n−1)(ℓ−1)]/N} e^{−j2π[(m−1)(κ−1) + (n−1)(λ−1)]/N}

= (1/N²) Σ_{m,n} Σ_{k,ℓ} F_{k,ℓ} e^{+j2π[(m−1)(k−κ) + (n−1)(ℓ−λ)]/N}

= Σ_{k,ℓ} F_{k,ℓ} D_N^{(2)}(k−κ, ℓ−λ),

where D_N^{(2)} is the bi-periodic Dirichlet kernel

D_N^{(2)}(k−κ, ℓ−λ) = (1/N²) Σ_{m,n} e^{+j2π[(m−1)(k−κ) + (n−1)(ℓ−λ)]/N}
= [ (1/N) Σ_{m=1}^{N} e^{+j2π(m−1)(k−κ)/N} ] · [ (1/N) Σ_{n=1}^{N} e^{+j2π(n−1)(ℓ−λ)/N} ].

The periodic Dirichlet kernel is defined by

D_N(φ) = (1/N) Σ_{n=1}^{N} e^{j(n−1)φ} = (1/N) (e^{jNφ} − 1)/(e^{jφ} − 1) = (1/N) e^{jφ(N−1)/2} sin(Nφ/2)/sin(φ/2),

and we have D_N^{(2)}(k−κ, ℓ−λ) = D_N(2π(k−κ)/N) D_N(2π(ℓ−λ)/N), which is the Proposition. ∎
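A 1D sanity check of the interpolation formula (written with 0-based indices, so the (m−1) factors become m): evaluating the DFT sum directly at a fractional bin κ coincides, to machine precision, with the D_N-weighted combination of the integer-bin DFT values.

```python
import numpy as np

N = 32
rng = np.random.default_rng(1)
f = rng.standard_normal(N)
F = np.array([np.sum(f * np.exp(-2j*np.pi*np.arange(N)*k/N)) for k in range(N)])

def D(phi, N):
    # periodic Dirichlet kernel D_N(phi) = (1/N) sum_{n=0}^{N-1} e^{j n phi}
    return np.sum(np.exp(1j*np.arange(N)*phi)) / N

for kappa in (0.0, 2.5, 7.25, 30.9):
    direct = np.sum(f * np.exp(-2j*np.pi*np.arange(N)*kappa/N))
    interp = sum(F[k] * D(2*np.pi*(k - kappa)/N, N) for k in range(N))
    assert abs(direct - interp) < 1e-9
print("Dirichlet interpolation reproduces the non-integer-bin DFT exactly")
```

The sum form of D_N is used (rather than the sin ratio) so that integer offsets, where sin(φ/2) vanishes, need no special casing.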

Unfortunately, this Proposition (whose 1D equivalent works well) is not practical in 2D for large
arrays. Indeed, assume we need to evaluate F on a non-uniform N×N grid, i.e. F_{κ(p),λ(q)}, 1 ≤ p,q ≤ N.
Following the Proposition, the N² values F_{κ(p),λ(q)} and the N² values F_{k,ℓ} are related by a linear
relation, represented by an N²×N² matrix, which should be inverted (the Nyquist criterion ensures this is
possible). Such a matrix would contain ≈7×10¹⁰ entries for N = 512.
This problem may be alleviated rigorously in different ways [1], [2]. In the present study, we use a
standard interpolation scheme based on a local approximation of the function by a cubic spline
(Matlab interp2 algorithm). This strategy has the advantage of being fast. The accuracy issue, if any,
may be solved with upsampling or magnification, the spectrum in step (2) being calculated over M·N
points by padding sinogram data with (M−1)·N zeros along the t-axis.
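The Cartesian-to-polar resampling of step (3) can be sketched with SciPy's cubic grid interpolator, a close analogue of Matlab's interp2 (a sketch; the Gaussian test spectrum and grid sizes are illustrative assumptions, not the report's actual data):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Cartesian samples of a smooth, radially symmetric test "spectrum"
nu = np.linspace(-64.0, 64.0, 257)
X, Y = np.meshgrid(nu, nu, indexing="ij")
F = np.exp(-(X**2 + Y**2) / (2 * 20.0**2))

interp = RegularGridInterpolator((nu, nu), F, method="cubic")

# extract one spectrum slice along an iso-theta line, as in step (3)
theta = np.deg2rad(30.0)
r = np.linspace(-60.0, 60.0, 121)
slice_vals = interp(np.column_stack([r * np.cos(theta), r * np.sin(theta)]))

# the test spectrum is radially symmetric, so the exact slice is known
assert np.max(np.abs(slice_vals - np.exp(-r**2 / (2 * 20.0**2)))) < 1e-5
```

The cubic-spline error on a smooth, well-sampled spectrum is far below the aliasing floor, which is why upsampling (magnification) is enough to control accuracy.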
The IDRT is illustrated by the following diagram.

Inverse Radon Transform, using the Projection-Slice Theorem

$$Rf(t,\theta) \;\xrightarrow{\,F_1\,}\; F_1 Rf(\nu,\theta) = \tilde F_2 f(\nu,\theta) \;\xrightarrow{\,\text{interpolation}\,}\; F_2 f(\nu_x,\nu_y) \;\xrightarrow{\,\times H\,}\; H\cdot F_2 f(\nu_x,\nu_y) \;\xrightarrow{\,F_2^{-1}\,}\; h * f(x,y)$$
2.5. Examples
We now give a few examples of the full DRT-IDRT chain. The five parameters for this chain are:
w, the physical width of the source image,
N, the digital size of the source image,
N_S ≡ M·N, the digital size of the spectrum (M is the magnification factor),
σ, the standard deviation of the Gaussian filter,
N_θ, the number of angle samples.

2.5.1. Centered disk.

The source object is a uniform disk with radius 0.3 (in some arbitrary metric unit) and unit amplitude
(f(x,y) = 1), centered at the image center. The image size is w = 2, N = 256; hence the sampling rate is
ν_s = 128, and the Nyquist rate is ν_s/2 = 64. The magnification ratio is comfortably set to M = 4, hence
N_S = 1024. The standard deviation of the Gaussian filter is set to σ = 30, which is about half the
Nyquist rate. The number of projection angles is N_θ = 256. The successive steps of the Radon
transform - inverse Radon transform chain are illustrated and commented in the following figures.

Figure 1. STEP 1. Source image size is w = 2, N = 256.

Figure 2. STEP 2. Amplitude of complex spectrum in dB scale (80 dB dynamic range), displayed on
the [−ν_s/2, +ν_s/2]² domain. The center of the spectrum fits the familiar 2J₁(2πaν)/(2πaν) pattern; the
more complicated structure in outer regions results from aliasing due to sampling.

Figure 3. STEP 2. Amplitude of the real part of the spectrum, in dB scale (80 dB dynamic range), and
displayed on the [−ν_s/2, +ν_s/2]² domain. The source image being real and symmetric about the origin,
the spectrum is real, and the plots of the amplitudes of the complex spectrum and of its real part indeed
coincide.

Figure 4. STEP 3. Spectrum slices along iso-θ lines, computed by interpolation across the Cartesian
map, and displayed in dB scale (80 dB dynamic range). The ν-axis is horizontal; the θ-axis is vertical.
Gaussian weighting is also applied at this step.

Figure 5. STEP 4. Radon transform computed by inverse Fourier transform. The t-axis is horizontal;
the θ-axis is vertical.

Figure 6. STEP 4. Radon transform computed from spectrum slices is shown with blue circles, and the
theoretical value with the continuous red line.
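The theoretical curve of Figure 6 is elementary: for a unit-amplitude disk of radius a, the Radon transform is independent of θ and equals the chord length at signed distance t. A minimal sketch:

```python
import numpy as np

def radon_disk(t, a=0.3):
    """Radon transform of a unit-amplitude, centered disk of radius a: the line at
    signed distance t from the center cuts the disk along a chord of length
    2 sqrt(a^2 - t^2) for |t| <= a, and misses it otherwise."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= a, 2.0 * np.sqrt(np.maximum(a * a - t * t, 0.0)), 0.0)

# consistency check: integrating the projection over t recovers the disk area pi a^2
t = np.linspace(-0.4, 0.4, 80001)
y = radon_disk(t)
area = np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0   # trapezoidal rule
assert abs(area - np.pi * 0.3**2) < 1e-4
```

The peak value 2a = 0.6 matches the maximum of the red curve in Figure 6.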
Figure 7. STEP 5. Spectrum slices computed from Radon transform by Fourier transform (dB scale; 80
dB dynamic range). Weighted spectrum slices are perfectly recovered, as a consequence of the
inversion property of the FFT.

Figure 8. STEP 6. Spectrum reconstructed from spectrum slices (dB scale; 80 dB dynamic range). The
interpolation disk with radius ν_s/2 is clearly visible. The effect of the Gaussian weighting filter can
also be seen.

Figure 9. STEP 7. Image reconstructed from the spectrum by Fourier transform, displayed in linear scale.
The image size is now 4N × 4N, as a result of the magnification factor M = 4 used to compute the
spectrum.

Figure 10. STEP 7. Reconstructed image displayed in linear scale, and zoomed. A slight 3-pixel blur
can be seen at the edge of the disk. This blur results from the overall bandwidth limits at ν_s/2 and the
Gaussian weighting.

Figure 11. STEP 7. Reconstructed image displayed in dB scale with 100 dB dynamic range. The
spurious background resulting from aliasing is clearly visible.

Figure 12. STEP 7. Reconstructed image displayed in dB scale (100 dB dynamic range), and zoomed.
The background aliasing at the disk edge is smaller than −50 dB.

Figure 13. STEP 7. Reconstructed image displayed in linear scale, in the range [1−0.007, 1+0.007].
Amplitude ripple is only about 0.001 over most of the disk.

2.5.2. Offset disk

We now consider a uniform disk with radius 0.3 (in some arbitrary metric unit) and unit amplitude
(f(x,y) = 1), centered at (x₀, y₀) = (0.5, 0.3). Other parameters are left unchanged: image size w = 2, N =
256, sampling rate ν_s = 128, Nyquist rate ν_s/2 = 64, magnification ratio M = 4, spectrum size N_S =
1024, standard deviation of the Gaussian filter σ = 30, number of projection angles N_θ = 256. The
successive steps of the Radon transform - inverse Radon transform chain are illustrated and
commented in the following figures.

Figure 14. STEP 1. Source image size is w = 2, N = 256.


Figure 15. STEP 2. Amplitude of complex spectrum in dB scale (80 dB dynamic range), displayed on
the [−ν_s/2, +ν_s/2]² domain.

Figure 16. STEP 2. Amplitude of the real part of the spectrum, in dB scale (80 dB dynamic range), and
displayed on the [−ν_s/2, +ν_s/2]² domain. The spectrum of the offset disk is

$$F(\nu_x,\nu_y) = F_0(\nu_x,\nu_y)\, e^{-j2\pi(\nu_x x_0 + \nu_y y_0)},$$

where F₀ is the (real) spectrum of the centered disk. The cos[2π(ν_x x₀ + ν_y y₀)] modulation in the real
part of F is particularly visible on the zoom of the center region.
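This modulation is just the Fourier shift theorem; it can be checked on a small discrete grid (a sketch; the grid size is illustrative, and the offsets are chosen as whole pixels so that the discrete and continuous shifts coincide):

```python
import numpy as np

N, w = 64, 2.0
d = w / N                                    # pixel size; offsets below are whole pixels
x = -w / 2 + d * np.arange(N)
X, Y = np.meshgrid(x, x, indexing="ij")
disk = lambda cx, cy: ((X - cx) ** 2 + (Y - cy) ** 2 <= 0.3**2).astype(float)

F0 = np.fft.fft2(disk(0.0, 0.0))             # spectrum of the centered disk
F1 = np.fft.fft2(disk(0.5, 0.25))            # spectrum of the offset disk (16 and 8 pixels)

nux = np.fft.fftfreq(N, d=d)
NUX, NUY = np.meshgrid(nux, nux, indexing="ij")
phase = np.exp(-2j * np.pi * (NUX * 0.5 + NUY * 0.25))
assert np.allclose(F1, F0 * phase, atol=1e-8)   # F = F0 * exp(-j 2 pi (nux x0 + nuy y0))
```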

Figure 17. STEP 2. Amplitude of the imaginary part of the spectrum, in dB scale (80 dB dynamic
range), and displayed on the [−ν_s/2, +ν_s/2]² domain. The sin[2π(ν_x x₀ + ν_y y₀)] modulation in the
imaginary part of F is easily identified on the zoom of the center region.

Figure 18. STEP 3. Amplitude of spectrum slices along iso-θ lines, computed by interpolation across
the Cartesian map, and displayed in dB scale (80 dB dynamic range). Gaussian weighting has been
applied.

Figure 19. STEP 3. Real part of spectrum slices along iso-θ lines, computed by interpolation across the
Cartesian map, and displayed in dB scale (80 dB dynamic range).

Figure 20. STEP 3. Imaginary part of spectrum slices along iso-θ lines, computed by interpolation
across the Cartesian map, and displayed in dB scale (80 dB dynamic range).

Figure 21. STEP 4. Radon transform computed by inverse Fourier transform.

Figure 22. STEP 4. Radon transform at θ = 0°, computed from spectrum slices (blue circles) vs.
analytical formula (continuous red line).

Figure 23. STEP 5. Amplitude of spectrum slices computed from Radon transform by Fourier
transform (dB scale; 80 dB dynamic range).

Figure 24. STEP 5. Real part of spectrum slices computed from Radon transform by Fourier transform
(dB scale; 80 dB dynamic range).
Figure 25. STEP 5. Imaginary part of spectrum slices computed from Radon transform by Fourier
transform (dB scale; 80 dB dynamic range).

Figure 26. STEP 6. Amplitude of spectrum reconstructed from spectrum slices (dB scale; 80 dB
dynamic range).
Figure 27. STEP 6. Real part of spectrum reconstructed from spectrum slices (dB scale; 80 dB
dynamic range).

Figure 28. STEP 6. Imaginary part of spectrum reconstructed from spectrum slices (dB scale; 80 dB
dynamic range).
Figure 29. STEP 7. Image reconstructed from spectrum by Fourier transform, displayed in linear scale.

Figure 30. STEP 7. Reconstructed image displayed in linear scale, and zoomed. A slight 3-pixel blur
can be seen at the edge of the disk.
Figure 31. STEP 7. Reconstructed image displayed in dB scale with 100 dB dynamic range. The
complicated background results from sampling, from the bandwidth limits at ν_s/2, and from
interpolation residuals.

Figure 32. STEP 7. Reconstructed image displayed in dB scale (100 dB dynamic range), and zoomed.
The background aliasing at the disk edge is still smaller than −50 dB, and is not affected by the disk
offset.

Figure 33. STEP 7. Reconstructed image displayed in linear scale, in the range [1−0.007, 1+0.007].
Amplitude ripple is only about 0.002 over most of the disk.

These first two examples clearly demonstrate that the implementation of the Projection-Slice Theorem
with 2D Fourier transform and polar-to-Cartesian conversion by interpolation is able to deliver very
high quality images. The spurious background level can be reduced using a narrower weighting filter,
at the expense of an increased blur. The next figures illustrate what happens when the filter standard
deviation is reduced from σ = 30 to σ = 15 (in the case of the offset disk).

Figure 34. Reconstructed image (σ = 15) in dB scale with 100 dB dynamic range. The background
level is significantly reduced everywhere, with the exception of the main aliasing bands.

Figure 35. Reconstructed image (σ = 15) in dB scale with 100 dB dynamic range, and zoomed. The
background aliasing at the disk edge is now smaller than −70 dB. The edge blur extends over about 6
pixels.

Figure 36. Reconstructed image displayed in linear scale, in the range [1−0.007, 1+0.007]. The
narrower filter has smoothed the disk amplitude, without changing the amplitude ripple.

2.5.3. Three disks

In this example, the source image is the union of three unit-amplitude, non-overlapping disks with
radii 0.30, 0.18, 0.10, centered at (0.0, 0.0), (0.5, 0.0), (0.0, 0.5) respectively. Other parameters are
as in the first two examples: image size w = 2, N = 256, sampling rate ν_s = 128, Nyquist rate
ν_s/2 = 64, magnification ratio M = 4, spectrum size N_S = 1024, standard deviation of the Gaussian filter
σ = 30, number of projection angles N_θ = 256. The output of the Radon transform - inverse Radon
transform chain is illustrated and commented in the following figures.

Figure 37. STEP 1. Source image size is w = 2, N = 256.

Figure 38. STEP 7. Reconstructed image displayed in linear scale, and zoomed. The 3-pixel blur can
be seen at the edges of the disks.

Figure 39. STEP 7. Reconstructed image displayed in dB scale with 100 dB dynamic range.

Figure 40. STEP 7. Reconstructed image displayed in dB scale with 100 dB dynamic range, and
zoomed. The background aliasing at the edges of the disks is about −50 dB, and has not been modified
w.r.t. the single disk example. The 3-pixel edge blur is also unchanged.

Figure 41. STEP 7. Reconstructed image displayed in linear scale, in the range [1−0.009, 1+0.009].
Amplitude ripple is about 0.002 over each disk.

2.5.4. Inhomogeneous rectangle

In this example, the source image is the union of two adjacent rectangles, the first one with size 0.2 ×
0.2, center (−0.2, 0.0) and amplitude 1, and the second one with size 0.4 × 0.2, center (0.1, 0.0) and
amplitude 0.5. Other parameters are as in the previous examples: image size w = 2, N = 256, sampling
rate ν_s = 128, Nyquist rate ν_s/2 = 64, magnification ratio M = 4, spectrum size N_S = 1024, standard
deviation of the Gaussian filter σ = 30, number of projection angles N_θ = 256. The output of the Radon
transform - inverse Radon transform chain is illustrated and commented in the following figures.

Figure 42. STEP 1. Source image size is w = 2, N = 256.



Figure 43. STEP 7. Reconstructed image displayed in linear scale, and zoomed. The 3-pixel blur can
be seen along the edge of the rectangle, and along the amplitude transition.

Figure 44. STEP 7. Reconstructed image displayed in dB scale with 100 dB dynamic range.
Figure 45. STEP 7. Reconstructed image displayed in dB scale with 100 dB dynamic range, and
zoomed. The background aliasing along the rectangle boundary is about −50 dB, as in the previous
examples.

Figure 46. STEP 7. Reconstructed image displayed in linear scale, in the range [1−0.007, 1+0.007].
Amplitude ripple is about 0.002 over the first rectangle.

Figure 47. Reconstructed image displayed in linear scale, in the range [0.5−0.007, 0.5+0.007].
Amplitude ripple is also about 0.002 over the second rectangle.

3. INVERSION OF A LIMITED NUMBER OF PROJECTIONS

3.1. Principles
When a very limited number of projections L ≪ N_θ = πN/2 is available for inversion, the polar-to-
Cartesian transform by interpolation won't work. This seems hopeless, since the spectrum of a point source
f = δ(x−x₀, y−y₀) located at (x₀, y₀) is

$$Ff(\nu_x,\nu_y) = \int_{-\infty}^{+\infty}\!\!dx\int_{-\infty}^{+\infty}\!\!dy\,\,\delta(x-x_0,\,y-y_0)\, e^{-j2\pi(\nu_x x+\nu_y y)} = e^{-j2\pi(\nu_x x_0+\nu_y y_0)},$$

which is a rapidly varying function. Note that in polar coordinates

$$\tilde f(\rho,\varphi) = \delta(\rho-\rho_0,\,\varphi-\varphi_0), \qquad \tilde F f(\nu,\theta) = e^{-j2\pi\nu\rho_0\cos(\theta-\varphi_0)},$$

and the spectrum may be interpreted as a "plane wave" propagating in the (ν_x,ν_y)-plane, with
"wavelength" 1/ρ₀ and "direction of arrival" φ₀. Consider now a discrete sampling set on a polar grid
in the (ν_x,ν_y)-plane. Each sampling location may be viewed as a "receiving antenna", and our problem
reduces to an interferometer problem, i.e. the estimation of 1/ρ₀ and φ₀ from the set of "received
signals". And since we expect to observe a superposition of many plane waves (in fact a continuous
spectrum of plane waves), we would prefer a linear estimator, i.e. one formed by a linear combination of
the "signals" received by the individual "antennas". In order to formalize this easily, we first consider
continuous sampling along radial lines in the (ν_x,ν_y)-plane. The interferometer output is defined by

$$g(\rho,\varphi) = \frac{1}{2\nu_0 L}\sum_{\ell=0}^{L-1}\int_{-\nu_0}^{+\nu_0}|\nu|\,d\nu\,\, Ff(\nu,\ell\pi/L)\, e^{\,j2\pi\nu\rho\cos(\ell\pi/L-\varphi)},$$

where the radial lines are regularly distributed at angles θ_ℓ = ℓπ/L. This expression takes the form

$$g(\rho,\varphi) = \frac{1}{2\nu_0 L}\sum_{\ell=0}^{L-1}\int_0^{2\pi}\!\!d\theta\int|\nu|\,d\nu\,\, Ff(\nu,\theta)\,\delta(\theta-\ell\pi/L)\,\mathrm{rect}(\nu/2\nu_0)\, e^{\,j2\pi\nu\rho\cos(\theta-\varphi)} = F^{-1}(Ff\cdot\Delta),$$

where

$$\Delta(\nu,\theta) = \frac{1}{2\nu_0 L}\sum_{\ell=0}^{L-1}\delta(\theta-\ell\pi/L)\,\mathrm{rect}(\nu/2\nu_0).$$

Finally, the convolution theorem yields

$$g = f * \tilde\delta, \qquad \tilde\delta = F^{-1}\Delta.$$
The function δ̃ can be easily calculated, as shown by the

Proposition 11. Let $\Delta(\nu,\theta) = \frac{1}{2\nu_0 L}\sum_{\ell=0}^{L-1}\delta(\theta-\ell\pi/L)\,\mathrm{rect}(\nu/2\nu_0)$; then

$$\tilde\delta(\rho,\varphi) = F^{-1}\Delta\,(\rho,\varphi) = \sum_{\ell=0}^{L-1}\tilde\delta_\ell(\rho,\varphi),$$

where

$$\tilde\delta_\ell(\rho,\varphi) = \frac{\nu_0}{2L}\left\{2\,\mathrm{sinc}[2\nu_0\rho\cos(\varphi-\ell\pi/L)] - \mathrm{sinc}^2[\nu_0\rho\cos(\varphi-\ell\pi/L)]\right\},$$

with sinc x := sin(πx)/(πx).
PROOF. Let $\Delta_\ell(\nu,\theta) = \frac{1}{2\nu_0 L}\,\delta(\theta-\ell\pi/L)\,\mathrm{rect}(\nu/2\nu_0)$; we have

$$\tilde\delta_\ell(\rho,\varphi) = \int_0^{2\pi}\!\!d\theta\int|\nu|\,d\nu\,\,\Delta_\ell(\nu,\theta)\, e^{\,j2\pi\nu\rho\cos(\theta-\varphi)} = \frac{1}{2\nu_0 L}\, I[\rho\cos(\varphi-\ell\pi/L)],$$

where

$$I(\beta) = \int_{-\nu_0}^{+\nu_0}\!d\nu\,|\nu|\, e^{\,j2\pi\nu\beta} = 2\int_0^{\nu_0}\!d\nu\,\nu\cos(2\pi\nu\beta) = 2\,\mathrm{Re}\int_0^{\nu_0}\!d\nu\,\nu\, e^{\,j2\pi\nu\beta}.$$

Integration by parts gives

$$K(\beta) := \int_0^{\nu_0}\!d\nu\,\nu\, e^{\,j2\pi\nu\beta} = \left[\frac{\nu\, e^{\,j2\pi\nu\beta}}{j2\pi\beta}\right]_0^{\nu_0} - \frac{1}{j2\pi\beta}\int_0^{\nu_0}\!d\nu\, e^{\,j2\pi\nu\beta} =: K_1(\beta) + K_2(\beta);$$

then

$$K_1(\beta) = \frac{\nu_0\, e^{\,j2\pi\nu_0\beta}}{j2\pi\beta}, \qquad 2\,\mathrm{Re}\,K_1(\beta) = \frac{\nu_0}{\pi\beta}\sin(2\pi\nu_0\beta) = 2\nu_0^2\,\mathrm{sinc}(2\nu_0\beta);$$

$$K_2(\beta) = -\frac{e^{\,j2\pi\nu_0\beta}-1}{(j2\pi\beta)^2}, \qquad 2\,\mathrm{Re}\,K_2(\beta) = \frac{\cos(2\pi\nu_0\beta)-1}{2\pi^2\beta^2} = -\frac{\sin^2(\pi\nu_0\beta)}{\pi^2\beta^2} = -\nu_0^2\,\mathrm{sinc}^2(\nu_0\beta);$$

hence

$$I(\beta) = \nu_0^2\left[2\,\mathrm{sinc}(2\nu_0\beta) - \mathrm{sinc}^2(\nu_0\beta)\right],$$

and the Proposition follows. ∎
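The closed form for I(β) can be verified by direct quadrature (a sketch; ν₀ and β are arbitrary test values, and np.sinc uses the sin(πx)/(πx) convention of the text):

```python
import numpy as np

nu0, beta = 64.0, 0.013
nu = np.linspace(-nu0, nu0, 200001)
integrand = np.abs(nu) * np.cos(2 * np.pi * nu * beta)    # real part of |nu| e^{j 2 pi nu beta}
lhs = np.sum((integrand[1:] + integrand[:-1]) * np.diff(nu)) / 2.0   # trapezoidal rule
rhs = nu0**2 * (2 * np.sinc(2 * nu0 * beta) - np.sinc(nu0 * beta) ** 2)
assert abs(lhs - rhs) < 1e-4
```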

The convolution equation g = δ̃ ∗ f should now be solved for f. The unknown function f satisfies two
constraints, namely it is positive, f(x,y) ≥ 0, and has a compact support Ω₁. Without loss of generality,
we assume

$$f(x,y) = 0,\ \ (x,y)\notin\Omega_1, \qquad \Omega_1 := [-w/2,\,+w/2]\times[-w/2,\,+w/2].$$

We also define the indicator function of a compact domain Ω by

$$\chi_\Omega(x,y) = 1,\ (x,y)\in\Omega; \qquad 0,\ \text{otherwise}.$$
We have

Proposition 12. Let Ω₁ = [−w/2,+w/2]², Ω₂ = [−w,+w]², f ∈ L¹(Ω₁), u ∈ L∞(ℝ²); then

(i) $\|u*f\|_\infty \le \|u\|_\infty\,\|f\|_1$,
(ii) $\chi_{\Omega_1}\cdot(u*f) = \chi_{\Omega_1}\cdot[(\chi_{\Omega_2}u)*f]$.

PROOF. (i) We have

$$|u*f(x,y)| = \left|\iint dx'dy'\, u(x-x',\,y-y')\, f(x',y')\right| \le \iint dx'dy'\, |u(x-x',\,y-y')|\,|f(x',y')| \le \|u\|_\infty \iint dx'dy'\, |f(x',y')| = \|u\|_\infty\,\|f\|_1.$$

(ii) We have $u*f = \iint dx'dy'\, u(x',y')\, f(x-x',\,y-y')$. Since f is supported by Ω₁,
f(x−x', y−y') = 0 unless −w/2 ≤ x−x', y−y' ≤ +w/2, i.e. x−w/2 ≤ x' ≤ x+w/2 and y−w/2 ≤ y' ≤ y+w/2, hence

$$u*f = \int_{x-w/2}^{x+w/2}\!dx'\int_{y-w/2}^{y+w/2}\!dy'\,\, u(x',y')\, f(x-x',\,y-y').$$

If now (x,y) is restricted to Ω₁, we see that −w ≤ x' ≤ +w and −w ≤ y' ≤ +w, and

$$u*f\big|_{\Omega_1} = \int_{-w}^{+w}\!\!\int_{-w}^{+w} dx'dy'\,\, u(x',y')\, f(x-x',\,y-y') = \iint dx'dy'\,\, \chi_{\Omega_2}(x',y')\, u(x',y')\, f(x-x',\,y-y'),$$

i.e. $\chi_{\Omega_1}\cdot(u*f) = \chi_{\Omega_1}\cdot[(\chi_{\Omega_2}u)*f]$. ∎

The deconvolution problem is then

find f ≥ 0 such that $\chi_{\Omega_1}\cdot[(\chi_{\Omega_2}\tilde\delta)*f] = \chi_{\Omega_1}\cdot g$,

where δ̃ is as defined in Proposition 11. Note that the operator acting on f is linear. The space L²(Ω₁)
of square-integrable functions defined on Ω₁, equipped with the canonical inner product (where *
denotes the complex conjugate)

$$\langle u, v\rangle = \int_{\Omega_1} u(\xi)\, v^*(\xi)\, d\xi,$$

is a Hilbert space. We have the following three characterizations of the convolution operator C.
Proposition 13. The operator $C: f \mapsto Cf := [(\chi_{\Omega_2}\tilde\delta)*f]\big|_{\Omega_1}$, defined on L²(Ω₁), is (i) compact, (ii)
self-adjoint, and (iii) positive semidefinite.

PROOF. (i) This is classic. C has the form of the integral operator

$$Cf(\xi) = \int_{\Omega_1} \chi_{\Omega_2}\tilde\delta(\xi-\eta)\, f(\eta)\, d\eta = \int_{\Omega_1} k(\xi,\eta)\, f(\eta)\, d\eta,$$

whose kernel k verifies $\int_{\Omega_1}\!\int_{\Omega_1} |k(\xi,\eta)|^2\, d\xi\, d\eta < \infty$. C is then a Hilbert-Schmidt operator, hence
is compact.

(ii) First, we need the adjoint $C^H$ of C. Let f, g ∈ L²(Ω₁); we have

$$\langle Cf, g\rangle = \int_{\Omega_1}\!d\eta\,\, g^*(\eta)\int_{\Omega_1}\!d\xi\,\, f(\xi)\,\chi_{\Omega_2}\tilde\delta(\eta-\xi) = \int_{\Omega_1}\!d\xi\,\, f(\xi)\left[\int_{\Omega_1}\!d\eta\,\, g(\eta)\,\big(\chi_{\Omega_2}\tilde\delta\big)^{\!*}(\eta-\xi)\right]^* =: \langle f, C^H g\rangle,$$

hence $C^H g = [(\chi_{\Omega_2}\tilde\delta)^H * g]\big|_{\Omega_1}$, where $u^H(\xi) := u^*(-\xi)$. In the present case, δ̃ is real and symmetric about the
origin, so $C^H = C$ and C is self-adjoint.

(iii) We have $\langle Cf, f\rangle = \int_{\Omega_1}\!d\xi\,\, f^*(\xi)\int_{\Omega_1}\!d\eta\,\, \chi_{\Omega_2}\tilde\delta(\xi-\eta)\, f(\eta)$.

Following Proposition 12, and recalling that f is zero outside Ω₁, both integrals may be extended to ℝ²
and the truncation dropped:

$$\langle Cf, f\rangle = \int_{\mathbb R^2}\!d\xi\,\, f^*(\xi)\int_{\mathbb R^2}\!d\eta\,\, \tilde\delta(\xi-\eta)\, f(\eta).$$

Finally, the Parseval theorem yields

$$\langle Cf, f\rangle = \int_{\mathbb R^2} F^*(\nu)\, F(\nu)\, \Delta(\nu)\, d\nu \ge 0,$$

since Δ ≥ 0. ∎

Proposition 14. The operator C has a pure point spectrum of real, nonnegative eigenvalues.

PROOF. This is a direct consequence of C being compact, self-adjoint and positive semidefinite. ∎

Proposition 15. The eigenvalues λᵢ of C are bounded by $\lambda_i \le M := \sup_{\xi\in\Omega_1}\int_{\Omega_1}\big|\chi_{\Omega_2}\tilde\delta(\xi-\eta)\big|\, d\eta$.

PROOF. Let (λᵢ, φᵢ) be an eigenpair of C; we have $\lambda_i\,\varphi_i(\xi) = \int_{\Omega_1}\chi_{\Omega_2}\tilde\delta(\xi-\eta)\,\varphi_i(\eta)\,d\eta$. Hence

$$\lambda_i\, |\varphi_i(\xi)| \le \int_{\Omega_1}\big|\chi_{\Omega_2}\tilde\delta(\xi-\eta)\big|\, |\varphi_i(\eta)|\, d\eta.$$

Integrating both sides over Ω₁ gives

$$\lambda_i \int_{\Omega_1} |\varphi_i(\xi)|\, d\xi \le \int_{\Omega_1}\!\!\int_{\Omega_1} \big|\chi_{\Omega_2}\tilde\delta(\xi-\eta)\big|\, |\varphi_i(\eta)|\, d\xi\, d\eta \le \sup_{\eta\in\Omega_1}\int_{\Omega_1}\big|\chi_{\Omega_2}\tilde\delta(\xi-\eta)\big|\, d\xi \cdot \int_{\Omega_1}|\varphi_i(\eta)|\, d\eta,$$

hence λᵢ ≤ M, and the proposition follows since λᵢ ≥ 0 after Proposition 14. ∎
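A discrete analogue of this bound (a sketch, with a generic symmetric positive semidefinite matrix standing in for C): the largest eigenvalue is bounded by the maximum absolute row sum, the matrix counterpart of the kernel mass sup_ξ ∫|χ_{Ω₂}δ̃(ξ−η)| dη:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 30))
C = A @ A.T                               # symmetric positive semidefinite
M = np.abs(C).sum(axis=1).max()           # max absolute row sum of the "kernel"
assert np.linalg.eigvalsh(C).max() <= M + 1e-9
```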

3.2. Deconvolution
We are now ready to solve our deconvolution problem. The general idea is to use the fixed-point
iteration

$$f^{n+1} = P_C\big[f^n + \mu\,(g - Cf^n)\big], \qquad f^0 = g = Cf,$$

where all functions are defined over Ω₁, P_C is the projection on the set C of positive functions,

$$P_C u(\xi) = u(\xi),\ u(\xi) \ge 0; \qquad 0,\ \text{otherwise},$$

and μ is a scalar parameter to be determined for proper convergence. We also define the operator T_μ by

$$T_\mu u = u + \mu\,(g - Cu),$$

in such a way that the fixed-point iteration becomes

$$f^{n+1} = P_C\, T_\mu\, f^n, \qquad f^0 = g = Cf.$$

First observe that the Banach fixed-point theorem does not apply here, since P_C and T_μ are not
contractive, but only non-expansive. Indeed P_C u = u for any positive function u, and T_μ u − T_μ v = u − v
as soon as u − v belongs to the kernel of the operator C. However, P_C and T_μ are not just non-
expansive: they are also averaged.
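The iteration above is a projected Landweber scheme. A minimal finite-dimensional sketch, with a generic symmetric positive definite matrix in place of the operator C (the sizes, seed and regularizing +I/2 term are illustrative assumptions, not the report's setting):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
A = rng.standard_normal((n, n))
C = A @ A.T / n + 0.5 * np.eye(n)                   # symmetric positive definite stand-in
f_true = np.clip(rng.standard_normal(n), 0, None)   # a positive "image" f
g = C @ f_true                                      # observed data g = C f

M = np.linalg.eigvalsh(C).max()
alpha = 0.9
mu = 2 * alpha / M                                  # step size mu = 2 alpha / M (Proposition 19)

f = np.clip(g, 0, None)                             # f^0 = g, projected onto the positives
for _ in range(2000):
    f = np.clip(f + mu * (g - C @ f), 0, None)      # f^{n+1} = P_C [ f^n + mu (g - C f^n) ]

assert np.linalg.norm(C @ f - g) <= 1e-6 * np.linalg.norm(g)
```

Here C is invertible, so the iterate converges to the unique positive solution; in the report's setting, Ker C is non-trivial and only the range component is determined (Proposition 23).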

3.2.1. Proof of weak convergence using the theory of averaged operators

Averaged operators and their nice properties are described in [3]. Some proofs are reproduced from [3]
for the reader's comfort. ‖·‖ designates the L² norm, unless otherwise specified.
Definition 4. An operator G defined on a metric space E is said to be β-strongly monotone (or 1/β-co-
coercive) if there is some β > 0 such that

$$\langle Gu - Gv,\ u - v\rangle \ge \beta\, \|Gu - Gv\|^2, \qquad \forall u, v \in E.$$

Definition 5. An operator T defined on a metric space E is said to be averaged if the operator G = I − T
is β-strongly monotone for some β > 1/2; with β = 1/(2α), 0 < α < 1, T is said to be α-averaged.

Definition 6. An operator N defined on a metric space E is said to be non-expansive if

$$\|Nu - Nv\| \le \|u - v\|, \qquad \forall u, v \in E.$$

Averaged operators may be easily identified using the following Proposition.

Proposition 16. An operator T is α-averaged if and only if there is a non-expansive operator N and 0
< α < 1 such that

$$T = (1-\alpha)\, I + \alpha\, N.$$

PROOF (reproduced from [3]). (i) Assume T = (1−α)I + αN with N non-expansive and 0 < α < 1.
Then G := I − T = α(I − N), and N = I − (1/α)G. For arbitrary u, v we have

$$\|Nu - Nv\|^2 = \left\|u - v - \tfrac{1}{\alpha}(Gu - Gv)\right\|^2 = \|u - v\|^2 + \tfrac{1}{\alpha^2}\|Gu - Gv\|^2 - \tfrac{2}{\alpha}\langle Gu - Gv,\ u - v\rangle.$$

Since N is non-expansive, we have ‖Nu − Nv‖² ≤ ‖u − v‖², and one gets

$$\langle Gu - Gv,\ u - v\rangle \ge \tfrac{1}{2\alpha}\, \|Gu - Gv\|^2.$$

(ii) Assume T is α-averaged; then there exists 0 < α < 1 such that $\langle Gu - Gv,\ u - v\rangle \ge \tfrac{1}{2\alpha}\|Gu - Gv\|^2$.
Define N such that T = (1−α)I + αN; then N = I − (1/α)G, and

$$\|Nu - Nv\|^2 = \|u - v\|^2 + \tfrac{1}{\alpha^2}\|Gu - Gv\|^2 - \tfrac{2}{\alpha}\langle Gu - Gv,\ u - v\rangle \le \|u - v\|^2 + \tfrac{1}{\alpha^2}\|Gu - Gv\|^2 - \tfrac{1}{\alpha^2}\|Gu - Gv\|^2 = \|u - v\|^2,$$

i.e. N is non-expansive, which concludes the proof. ∎


The product of averaged operators is also averaged, as shown by the following proposition.

Proposition 17. Let S and T be averaged operators; then ST is also averaged.

PROOF (see [3]). Since S is averaged, there exist N non-expansive and 0 < α < 1 such that
S = (1−α)I + αN, and ST = (1−α)T + αNT; in the same way, since T is averaged, there exists M
non-expansive and 0 < β < 1 such that T = (1−β)I + βM; then

$$ST = (1-\alpha)(1-\beta)\, I + (1-\alpha)\beta\, M + \alpha(1-\beta)\, N + \alpha\beta\, NM = (1-\gamma)\, I + \gamma\left[\frac{(1-\alpha)\beta}{\gamma}\, M + \frac{\alpha(1-\beta)}{\gamma}\, N + \frac{\alpha\beta}{\gamma}\, NM\right],$$

where γ = 1 − (1−α)(1−β) < 1.

(i) NM is non-expansive, since ‖NMu − NMv‖ ≤ ‖Mu − Mv‖ ≤ ‖u − v‖.

(ii) Then we prove that $W := \frac{(1-\alpha)\beta}{\gamma}\, M + \frac{\alpha(1-\beta)}{\gamma}\, N + \frac{\alpha\beta}{\gamma}\, NM$ is non-expansive; the triangle inequality gives

$$\|Wu - Wv\| \le \frac{(1-\alpha)\beta}{\gamma}\,\|Mu - Mv\| + \frac{\alpha(1-\beta)}{\gamma}\,\|Nu - Nv\| + \frac{\alpha\beta}{\gamma}\,\|NMu - NMv\| \le \|u - v\|,$$

since M, N and NM are non-expansive, and

$$\frac{(1-\alpha)\beta}{\gamma} + \frac{\alpha(1-\beta)}{\gamma} + \frac{\alpha\beta}{\gamma} = 1.$$

(iii) Finally, Proposition 16 shows that ST is γ-averaged. ∎

We are now able to prove that the operators involved in our fixed-point iteration are averaged.

Proposition 18. The operator P_C of projection on the set C of positive functions is (1/2)-averaged.

PROOF. (i) First observe that the set of positive functions is convex. Then the projection P_C x is the
unique element such that (this is classical; see [3, Proposition 8.4] for instance)

$$\langle c - P_C x,\ P_C x - x\rangle \ge 0, \qquad \forall c \in C.$$

Application with c = P_C v, x = u yields

$$\langle P_C v - P_C u,\ P_C u - u\rangle \ge 0, \quad\text{i.e.}\quad \langle P_C u - P_C v,\ u - P_C u\rangle \ge 0,$$

and application with c = P_C u, x = v yields

$$\langle P_C u - P_C v,\ P_C v - v\rangle \ge 0.$$

Adding together the two inequalities gives

$$\langle P_C u - P_C v,\ u - v - (P_C u - P_C v)\rangle \ge 0,$$

hence

$$\langle P_C u - P_C v,\ u - v\rangle \ge \|P_C u - P_C v\|^2 \ge 0.$$

(ii) Let G = I − P_C; we have

$$\langle Gu - Gv,\ u - v\rangle = \langle u - v - (P_C u - P_C v),\ u - v\rangle = \|u - v\|^2 - \langle P_C u - P_C v,\ u - v\rangle,$$

and also

$$\|Gu - Gv\|^2 = \|u - v\|^2 + \|P_C u - P_C v\|^2 - 2\langle P_C u - P_C v,\ u - v\rangle;$$

using the previous relation, we obtain

$$\|Gu - Gv\|^2 - \langle Gu - Gv,\ u - v\rangle = \|P_C u - P_C v\|^2 - \langle P_C u - P_C v,\ u - v\rangle.$$

Applying the result of (i), the right-hand side is ≤ 0, hence

$$\langle Gu - Gv,\ u - v\rangle \ge \|Gu - Gv\|^2,$$

and P_C is (1/2)-averaged after Definition 5. ∎

Proposition 19. The operator $T_\mu: u \mapsto T_\mu u = u + \mu\,(g - Cu)$, with μ = 2α/M, 0 < α < 1, is α-
averaged.

PROOF. Set 0 < α < 1 and write $T_\mu u = (1-\alpha)\, u + \alpha\left[u + \frac{\mu}{\alpha}\,(g - Cu)\right]$. Following Proposition
16, T_μ is α-averaged provided that the operator N defined by

$$Nu := u + \frac{\mu}{\alpha}\,(g - Cu)$$

is non-expansive. We have

$$Nu - Nv = \left(I - \frac{\mu}{\alpha}\, C\right)(u - v).$$

Since C is self-adjoint and compact, we have the direct sum

$$L^2(\Omega_1) = \mathrm{Ker}\, C \oplus (\mathrm{Ker}\, C)^\perp,$$

where Ker C and (Ker C)⊥ (the kernel of C and its orthogonal complement) are both invariant under C. Note by
the way that

$$(\mathrm{Ker}\, C)^\perp = \overline{\mathrm{Ran}\, C}$$

(the closure of the range of C), since C is bounded. The set of eigenfunctions of C forms an
orthogonal basis of L²(Ω₁), and may be divided into the two subsets {φᵢ}, {φ̄ᵢ}, which are orthogonal
bases of (Ker C)⊥ and Ker C respectively.
We have the convergent expansions

$$u = \sum_i u_i\, \varphi_i + \sum_i \bar u_i\, \bar\varphi_i, \qquad v = \sum_i v_i\, \varphi_i + \sum_i \bar v_i\, \bar\varphi_i,$$

and Nu − Nv becomes

$$Nu - Nv = \sum_i \big[1 - (\mu/\alpha)\,\lambda_i\big]\,(u_i - v_i)\,\varphi_i + \sum_i (\bar u_i - \bar v_i)\,\bar\varphi_i,$$

where the λᵢ are the non-zero eigenvalues of C. Hence

$$\|Nu - Nv\|^2 = \sum_i \big[1 - (\mu/\alpha)\,\lambda_i\big]^2 (u_i - v_i)^2 + \sum_i (\bar u_i - \bar v_i)^2,$$

and

$$\|Nu - Nv\|^2 \le \|u - v\|^2 = \sum_i (u_i - v_i)^2 + \sum_i (\bar u_i - \bar v_i)^2$$

is satisfied provided that [1 − (μ/α)λᵢ]² ≤ 1 ∀i, i.e. 0 ≤ (μ/α)λᵢ ≤ 2 ∀i, hence

$$\mu \le \frac{2\alpha}{\max_i \lambda_i}, \quad\text{which holds with}\quad \mu = \frac{2\alpha}{M}$$

after Proposition 15. ∎

Proposition 20. Let P_C and T_μ be as in Proposition 18 and Proposition 19; then P_C T_μ is (1+α)/2-
averaged.

PROOF. This is Proposition 17, with γ = 1 − (1−α)(1−1/2) = 1 − (1−α)/2 = (1+α)/2. ∎

The convergence of our fixed-point iteration is the object of the Krasnoselskii-Mann theorem, which is
reproduced here.

Proposition 21 (Krasnoselskii-Mann theorem). Let T be an averaged operator defined on a complete
metric space E. Then the iteration $f^{n+1} = T f^n$ converges to a fixed point of T, whenever T has fixed
point(s).

PROOF (reproduced from [3]). Since T is averaged, there is a non-expansive N and 0 < α < 1 such that
T = (1−α)I + αN. Let z be a fixed point of T; then z = (1−α)z + αNz, i.e. z = Nz and z is also a
fixed point of N. Let G = I − T; we have

$$\langle Gu - Gv,\ u - v\rangle = \langle u - v - (Tu - Tv),\ u - v\rangle = \|u - v\|^2 - \langle Tu - Tv,\ u - v\rangle,$$

and also

$$\|Gu - Gv\|^2 = \|u - v\|^2 + \|Tu - Tv\|^2 - 2\langle Tu - Tv,\ u - v\rangle;$$

using the previous relation, we obtain

$$\|Tu - Tv\|^2 = \|u - v\|^2 - 2\langle Gu - Gv,\ u - v\rangle + \|Gu - Gv\|^2.$$

Applying this relation with u = z, v = fⁿ, Tz = z, Tfⁿ = f^{n+1}, we obtain

$$\|z - f^{n+1}\|^2 = \|z - f^n\|^2 - 2\langle Gz - Gf^n,\ z - f^n\rangle + \|Gz - Gf^n\|^2.$$

Since T is α-averaged, we have $\langle Gz - Gf^n,\ z - f^n\rangle \ge \frac{1}{2\alpha}\,\|Gz - Gf^n\|^2$; hence

$$\|z - f^n\|^2 - \|z - f^{n+1}\|^2 \ge \left(\frac{1}{\alpha} - 1\right)\|Gz - Gf^n\|^2.$$

Now, Gz = z − Tz = 0, Gfⁿ = fⁿ − Tfⁿ = fⁿ − f^{n+1}, and what remains is

$$\|z - f^n\|^2 - \|z - f^{n+1}\|^2 \ge \left(\frac{1}{\alpha} - 1\right)\|f^n - f^{n+1}\|^2.$$

Hence the sequence ‖z − fⁿ‖ is decreasing, and the sequence ‖fⁿ − f^{n+1}‖ converges to zero. Let
f∞ ∈ E (since E is complete) be a cluster point of the sequence {fⁿ}. Then Tf∞ = f∞; we may choose
z = f∞ as well, and the sequence ‖f∞ − fⁿ‖ converges to zero. ∎

We conclude with the

Proposition 22. The iteration $f^{n+1} = P_C\, T_\mu f^n$ converges to a fixed point of $P_C\, T_\mu$.

PROOF. Since P_C T_μ is averaged, the Krasnoselskii-Mann theorem shows that the iteration
$f^{n+1} = P_C T_\mu f^n$ converges to a fixed point of P_C T_μ, if any. Let us show that at least one such fixed
point exists. Let

$$f^* = f + \bar f,$$

where f ∈ C is the original image, and f̄ is an arbitrary element of Ker C such that f* ∈ C (note that
at least one such f̄ exists, with the choice f̄ = 0). Then

$$P_C\big[f^* + \mu\,(g - Cf^*)\big] = P_C\big[f^* + \mu\,(Cf - Cf^*)\big] = P_C\big[f^* + \mu\,(Cf - Cf)\big] = P_C f^* = f^*,$$

and f* is a fixed point. ∎

The following Proposition immediately gives a characterization of $f^*$ that will be useful to interpret numerical applications.

Proposition 23. The iteration $f_{n+1} = P_C T_\lambda f_n$ converges to $f^* := P_R f + \tilde f$, where $P_R$ is the projection on the range of $C$, and $\tilde f$ is an element of $\mathrm{Ker}\,C$.

PROOF. We prove $P_R f^* = P_R f$ by contradiction. Indeed, let us assume there exists $f^*$ such that
$$f^* = P_C T_\lambda f^*, \qquad P_R f^* \ne P_R f$$
(where the inequality holds in $L^2$). Since $f^*$ is a fixed point of $P_C T_\lambda$ we have $f^* - f = P_C T_\lambda f^* - f$, hence $\|f^* - f\|_2 \le \|T_\lambda f^* - f\|_2$. But we also have
$$T_\lambda f^* - f = (I - \lambda C)(P_R f^* - P_R f) + P_K f^* - P_K f,$$
where $P_R$ and $P_K$ are the operators of projection on the closure of the range and on the kernel of $C$, respectively. Then
$$\|T_\lambda f^* - f\|_2^2 = \|(I - \lambda C)(P_R f^* - P_R f)\|_2^2 + \|P_K f^* - P_K f\|_2^2 \le \mu^2 \|P_R f^* - P_R f\|_2^2 + \|P_K f^* - P_K f\|_2^2,$$
where $\mu$ is the largest eigenvalue of $I - \lambda C$. Let $0 < \lambda_m < \dots < \lambda_i < \dots < \lambda_M$ be the non-zero eigenvalues of $C$; with the choice $\lambda < 2/\lambda_M$, $0 < \mu < 1$:
$$\mu = \max\bigl(\,|1 - \lambda\lambda_m|,\; |1 - \lambda\lambda_M|\,\bigr) < 1.$$
Using the Pythagoras formula $\|f^* - f\|_2^2 = \|P_R f^* - P_R f\|_2^2 + \|P_K f^* - P_K f\|_2^2$, we get
$$\|T_\lambda f^* - f\|_2^2 \le \|f^* - f\|_2^2 - (1 - \mu^2)\,\|P_R f^* - P_R f\|_2^2.$$
Then $P_R f^* \ne P_R f$ of course implies $\|f^* - f\|_2^2 \ne 0$ and we obtain
$$\frac{\|T_\lambda f^* - f\|_2^2}{\|f^* - f\|_2^2} \le 1 - (1 - \mu^2)\,\frac{\|P_R f^* - P_R f\|_2^2}{\|f^* - f\|_2^2} < 1,$$
which is a contradiction. ∎

3.2.2. Spectral considerations


First observe that Proposition 22 together with Proposition 23 gives a proof of weak $L^2$-convergence only, i.e.
$$\lim_{n\to\infty} \|P_R f_n - P_R f\|_2 = 0.$$

Numerical applications however show that this is not the best possible result in many practical cases.
The aim of this section is to analyze more precisely why. The crucial point is to observe that the only
information available for reconstruction is Cf, hence the component of f in Ker C is definitively lost.
Here are a few characterizations of Ker C and their applications. The first proposition is evident.

Proposition 24. Let $u \in \mathrm{Ker}\,C$; then $\int_{\Omega_1} u(\omega)\, d\omega = 0$.

PROOF. Let
$$U(\nu, \theta) = \int_0^{2\pi}\!\!\int_0^\infty u(\rho, \varphi)\, e^{-j2\pi\nu\rho\cos(\theta - \varphi)}\, \rho\, d\rho\, d\varphi$$
be the Fourier transform of $u$ in polar coordinates. $u$ belongs to $\mathrm{Ker}\,C$ if $U(\nu, \pi\ell/L) = 0$ for $\ell = 0 \dots L-1$; hence in particular
$$U(0, 0) = \int_0^{2\pi}\!\!\int_0^\infty u(\rho, \varphi)\, \rho\, d\rho\, d\varphi = 0. \qquad ∎$$

We immediately conclude to the mean convergence of the fixed-point iterations. We have the

Proposition 25. The mean of the iterates $f_{n+1} = P_C T_\lambda f_n$ converges to the mean of $f$, i.e. $\lim_{n\to\infty} \langle f_n \rangle = \langle f \rangle$, where
$$\langle f_n \rangle = \frac{1}{\mathrm{meas}\,\Omega_1}\int_{\Omega_1} f_n(\omega)\, d\omega, \qquad \langle f \rangle = \frac{1}{\mathrm{meas}\,\Omega_1}\int_{\Omega_1} f(\omega)\, d\omega.$$

PROOF. We have $\lim_{n\to\infty} f_n = P_R f + \tilde f$ after Proposition 23. Then $\langle \tilde f \rangle = \langle P_K f \rangle = 0$ after Proposition 24, and we have
$$\lim_{n\to\infty} \langle f_n \rangle = \langle P_R f + \tilde f \rangle = \langle P_R f \rangle + \langle \tilde f \rangle = \langle P_R f \rangle = \langle P_R f \rangle + \langle P_K f \rangle = \langle f \rangle. \qquad ∎$$
n 

This shows that the mean of the reconstruction error is zero, while its standard deviation is
$$\sigma^2 = \bigl\langle (f^* - f - \langle f^* - f \rangle)^2 \bigr\rangle = \bigl\langle (f^* - f)^2 \bigr\rangle = \bigl\langle (\tilde f - P_K f)^2 \bigr\rangle = \frac{1}{\mathrm{meas}\,\Omega_1}\, \|\tilde f - P_K f\|_2^2,$$
and we have the (rather loose) bounds for the standard deviation of the reconstruction error
$$\frac{1}{\sqrt{\mathrm{meas}\,\Omega_1}}\, \Bigl|\, \|\tilde f\|_2 - \|P_K f\|_2 \,\Bigr| \;\le\; \sigma \;\le\; \frac{1}{\sqrt{\mathrm{meas}\,\Omega_1}}\, \Bigl(\, \|\tilde f\|_2 + \|P_K f\|_2 \,\Bigr).$$

Unfortunately we have not been able to derive bounds for $\|\tilde f\|_2$, and these will be estimated using numerical runs. The standard deviation bounds also show that the reconstruction error is expected to be smaller when $P_K f = 0$. In order to verify this experimentally, we need to build test images with a
known kernel component. This is the object of the following propositions. We start with the case of
infinite bandwidth, i.e. with sampling lines extending to infinity in the spatial frequency plane.

Proposition 26. Let C be defined with 1 = ℝ2 and 0 = +∞. Let u L2(ℝ2 ) ∩Ker C then u has

the spectral expansion u (
, )  a
k 1
k () sin(kL ) , where the ak are real functions of .

P ROOF. Since u is in L2 (1) we have the convergent expansion



u (, )  u
m 
m ()e jm 
,

and since u is real the modal functions um verify u m () u 


m () for all m and . The Fourier
transform of u
2  2 
du(, ) e du(, ) e
j2 cos( ) j 2cos( )
U(, )  d  d
0 0 0 0

becomes using the Jacobi-Anger expansion


2  
n 
jn ( ) 
U (
, )    0
d
0
u ( , ) 


 ( j) J n ( 2 ) e
n 
n

d 

n  
2   u 0
n () ( j) n
e jn J n ( 2 
) d 
n 

 n  
2  0
u 0 () J 0 ( 2 
) d 2  [u
n 1
0

n () ( j) n ejn ( 1) n u n ( ) ( j) n e jn ] J n ( 2) d 

 n  
2  u
0
0 () J 0 ( 2 
) d 2  (j) [u n
0

n () e jn u n ( ) e jn] J n ( 2 ) d .
n 1

, /L) 0 for ℓ= 0 ... L1, i.e.


u belongs to Ker C if U (

u0
0 () J 0 (2 ) d0 ,


[u
0

n () e jn/L u n () e jn /L ] J n (2) d 0 , n 1  , 0  L 1 ,

for all 
. The first equation is satisfied provided that u 0() = 0 (which was expected after Proposition
24). The equations of the second set are satisfied if
u
n () e jn/L u n ( ) e jn/L 0, n 1  , 0  L 1 ,

i.e. u
n () u n () e j 2n /L , n 1  , 0  L 1 .
50

For any given n, this is possible only if e j2 n/L do not depend on ℓ, which in turn is possible only if n
is an integer multiple of L. In this case, one should have un jℝ. Functions in C kernel are then
characterized by
u 0 = 0;
u n jℝ, n 1, n = 0 mod L;
u n = 0, n 1, n ≠0 mod L.

Setting n = kL, u kL() = j ak ()/2 with a k ℝ, one obtains



u (
, )  a
k 1
k () sin( kL) ,

which is the Proposition. 
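Proposition 26 lends itself to a direct numerical check (a sketch under illustrative assumptions; the radial amplitude $a(\rho) = e^{-8\rho^2}$ and the grids are arbitrary choices, not from the report): the line-integral projections of $u(\rho,\varphi) = a(\rho)\sin(L\varphi)$ must vanish at the $L$ sampled angles $\theta = \pi\ell/L$, but not in between.

```python
import numpy as np

L = 6                                          # number of projection directions
a = lambda r: np.exp(-8 * r**2)                # assumed radial amplitude a(rho)

def projection(theta, m=201):
    """Line integrals p(t) = int u(t cos th - s sin th, t sin th + s cos th) ds
    of u(rho, phi) = a(rho) sin(L phi), by midpoint summation."""
    t = np.linspace(-1.0, 1.0, m)
    s = np.linspace(-1.0, 1.0, m)
    T, S = np.meshgrid(t, s)
    x = T * np.cos(theta) - S * np.sin(theta)
    y = T * np.sin(theta) + S * np.cos(theta)
    u = a(np.hypot(x, y)) * np.sin(L * np.arctan2(y, x))
    return u.sum(axis=0) * (s[1] - s[0])       # integrate along s for each t

# Projections vanish along every sampled direction theta = pi*l/L ...
vanishing = max(np.abs(projection(np.pi * l / L)).max() for l in range(L))
# ... but not in between (here at theta = pi/2L, where the sine becomes a cosine):
nonvanishing = np.abs(projection(np.pi / (2 * L))).max()
```

The sampled-angle projections cancel because the integrand is odd along every such line, exactly the mechanism behind the proposition.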

If now the bandwidth is limited to some $\nu_0$, we have the

Proposition 27. Let $C$ be defined with $\Omega_1 = \mathbb{R}^2$ and $\nu_0 < \infty$. Let $u \in L^2(\mathbb{R}^2) \cap \mathrm{Ker}\,C$; then $u$ has the spectral representation
$$u(\rho, \varphi) = \sum_{k \ge 1} a_k(\rho)\, \sin(kL\varphi) + u_h(\rho, \varphi),$$
where the $a_k$ are real functions of $\rho$, and $u_h$ is any real function satisfying $Fu_h(\nu, \theta) = 0$, $\nu \le \nu_0$.

PROOF. Following the proof of Proposition 26, $u$ belongs to $\mathrm{Ker}\,C$ if $U(\nu, \pi\ell/L) = 0$ for $\ell = 0 \dots L-1$, $0 \le \nu \le \nu_0$, i.e.
$$\int_0^\infty u_0(\rho)\, J_0(2\pi\nu\rho)\, \rho\, d\rho = 0, \qquad 0 \le \nu \le \nu_0,$$
$$\int_0^\infty \bigl[u_n(\rho)\, e^{jn\pi\ell/L} + u_{-n}(\rho)\, e^{-jn\pi\ell/L}\bigr] J_n(2\pi\nu\rho)\, \rho\, d\rho = 0, \qquad n \ge 1,\; 0 \le \ell \le L-1,\; 0 \le \nu \le \nu_0.$$
We will make use of the orthogonality property of Bessel functions over $[0, +\infty)$:
$$\int_0^\infty J_n(2\pi\nu\rho)\, J_n(2\pi\nu'\rho)\, \rho\, d\rho = \frac{\delta(\nu - \nu')}{4\pi^2\nu}.$$
The first equation may be satisfied either with
(i) $u_0(\rho) = 0$, or
(ii) $u_0(\rho) = J_0(2\pi\nu'\rho)$ with $\nu' > \nu_0$.
In the same way, equations of the second set may be satisfied either with
(iii) $u_n \in j\mathbb{R}$, $n = 0 \bmod L$; $u_n = 0$, $n \ne 0 \bmod L$, or
(iv) $u_n(\rho) = J_n(2\pi\nu'\rho)$ with $\nu' > \nu_0$;
hence the Proposition. ∎

These propositions may be used to build numerical examples of "good" and "bad" convergence. Let us start with a distribution of 2P isolated pixels with the same amplitude, located at the vertices of a regular 2P-gon.

Proposition 28. Let $f$ be a distribution of $2P$ isolated pixels with the same amplitude, located at the vertices of a regular polygon, and such that one pixel is located at zero polar angle. Then $f$ has no component in $\mathrm{Ker}\,C$.

PROOF. The source image is given by
$$f(\rho, \varphi) = \sum_{p=0}^{2P-1} \frac{1}{\rho}\, \delta(\rho - \rho_0)\, \delta(\varphi - p\pi/P),$$
where $\rho_0$ is the radius of the regular $2P$-gon. The image spectrum is then
$$Ff(\nu, \theta) = \sum_{p=0}^{2P-1} \int_0^{2\pi}\!\!\int_0^\infty \frac{1}{\rho}\, \delta(\rho - \rho_0)\, \delta(\varphi - p\pi/P)\, e^{-j2\pi\nu\rho\cos(\theta - \varphi)}\, \rho\, d\rho\, d\varphi = \sum_{p=0}^{2P-1} e^{-j2\pi\nu\rho_0\cos(\theta - p\pi/P)}.$$
Setting $\rho_0 = 1$ without loss of generality and using the Jacobi-Anger expansion we find
$$Ff(\nu, \theta) = \sum_{p=0}^{2P-1} \sum_{n=-\infty}^{+\infty} (-j)^n J_n(2\pi\nu)\, e^{jn(\theta - p\pi/P)} = \sum_{n=-\infty}^{+\infty} (-j)^n J_n(2\pi\nu)\, e^{jn\theta} \sum_{p=0}^{2P-1} e^{-jnp\pi/P}$$
$$= 2P\, J_0(2\pi\nu) + \sum_{n \ge 1} J_n(2\pi\nu) \left[ (-j)^n (-1)^n e^{-jn\theta} \sum_{p=0}^{2P-1} e^{jnp\pi/P} + (-j)^n e^{jn\theta} \sum_{p=0}^{2P-1} e^{-jnp\pi/P} \right].$$
Then using the identity
$$\sum_{p=0}^{N-1} e^{jp\xi} = \frac{\sin(N\xi/2)}{\sin(\xi/2)}\, e^{j(N-1)\xi/2} =: S_N(\xi)\, e^{j(N-1)\xi/2}$$
with $N = 2P$ and $\xi = \mp n\pi/P$, we get
$$Ff(\nu, \theta) = 2P\, J_0(2\pi\nu) + 2 \sum_{n \ge 1} (-j)^n J_n(2\pi\nu)\, \frac{\sin(n\pi)}{\sin(n\pi/2P)}\, \cos(n\theta - n\pi + n\pi/2P).$$

Figure 48. The function sin(x)/sin(x/2P) for P = 6.

First observe that $S_{2P}(n\pi/P)$ is equal to $+2P$ for $n = 2kP$ with even $k = 2, 4, \dots$, to $-2P$ for $n = 2kP$ with odd $k = 1, 3, 5, \dots$, and to $0$ otherwise (Figure 48). At the same time, $\cos(2kP\theta - 2kP\pi + k\pi) = \cos(2kP\theta + k\pi)$ is equal to $\cos(2kP\theta)$ for even $k$ and to $-\cos(2kP\theta)$ for odd $k$, in such a way that
$$Ff(\nu, \theta) = 2P\, J_0(2\pi\nu) + 4P \sum_{k \ge 1} J_{2kP}(2\pi\nu)\, \cos(2kP\theta).$$
Since the $\sin(kL\varphi)$ functions are orthogonal to the $\cos(2kP\theta)$ functions on $[0, 2\pi]$, the components of $Ff$ are never zero along the $\theta = \pi\ell/L$ lines, and the proposition follows. ∎

We now consider the case of an angularly tilted regular $2P$-gon. We have the

Proposition 29. Let $f$ be a distribution of $2P$ isolated pixels with the same amplitude, located at the vertices of a regular polygon, and such that its first pixel is located at polar angle $\varphi_0$. Let $P'/L'$ be the irreducible form of the fraction $2P/L$. Then:
(i) the angular component of $f$ with order $2k_0P$ is in $\mathrm{Ker}\,C$ if and only if $\varphi_0 = (2q+1)\pi/4k_0P$, where $k_0$ is a multiple of $L'$, and $q$ an arbitrary integer;
(ii) all other angular components with order equal to an odd multiple of $2k_0P$ are also in $\mathrm{Ker}\,C$.

PROOF. (i) The source image is given by
$$f(\rho, \varphi) = \sum_{p=0}^{2P-1} \frac{1}{\rho}\, \delta(\rho - \rho_0)\, \delta(\varphi - \varphi_0 - p\pi/P),$$
where $\varphi_0$ is the tilt angle. The spectrum of $f$ is then
$$Ff(\nu, \theta) = 2P\, J_0(2\pi\nu) + 2 \sum_{n \ge 1} (-j)^n J_n(2\pi\nu)\, \frac{\sin(n\pi)}{\sin(n\pi/2P)}\, \cos(n\theta - n\varphi_0 - n\pi + n\pi/2P).$$
The only non-vanishing components are obtained with $n = 2kP$, $k = 1, 2, \dots$, and what remains is
$$Ff(\nu, \theta) = 2P\, J_0(2\pi\nu) + 4P \sum_{k \ge 1} J_{2kP}(2\pi\nu)\, \cos(2kP\theta - 2kP\varphi_0).$$
Component $2k_0P$ appears in the kernel if $\cos(2k_0P\pi\ell/L - 2k_0P\varphi_0) = 0$ for all $\ell = 0, 1, \dots, L-1$, which is possible only if $2k_0P\pi\ell/L - 2k_0P\varphi_0$ can be made equal to an odd number times $\pi/2$ for all values of $\ell$, i.e.
$$2k_0P\pi\ell/L - 2k_0P\varphi_0 = -(2q+1)\pi/2, \qquad 0 \le \ell \le L-1,\; q \in \mathbb{N}.$$
Solutions are obtained with $2k_0P\ell/L = q$ and $\varphi_0 = (2q'+1)\pi/4k_0P$, where $q'$ is any integer. The first condition requires $2k_0P/L$ to be an integer, i.e. $k_0P'/L'$ to be an integer, where $P'/L'$ is the irreducible form of the fraction $2P/L$. This is possible if and only if $k_0$ is a multiple of $L'$.
(ii) Once $(k_0, \varphi_0)$ has been chosen, components in the kernel are those verifying
$$2kP\pi\ell/L - 2kP\varphi_0 = -(2q+1)\pi/2, \qquad 0 \le \ell \le L-1,\; q \in \mathbb{N},$$
i.e. $kP'\ell/L' = q$, $k(2q'+1)/2k_0 = (2q''+1)/2$, $q', q'' \in \mathbb{N}$,
i.e. $kP'\ell/L' = q$, $k(2q'+1)/k_0 = 2q''+1$,
which may be satisfied if and only if $k$ is an odd multiple of $k_0$. ∎


A direct consequence of Proposition 29 is that a $2P$-gonal constellation tilted by half the angular period (i.e. by $\pi/2P$) has no component in $\mathrm{Ker}\,C$. This allows us to build a constellation with a well-controlled kernel component by choosing $P = L$, and by adding a component which is known to be in $\mathrm{Ker}\,C$:
$$f(\rho, \varphi) = \sum_{p=0}^{2L-1} \bigl[1 + (-1)^p a\bigr]\, \frac{1}{\rho}\, \delta(\rho - \rho_0)\, \delta(\varphi - \pi/2L - p\pi/L),$$
where $a < 1$ is the amplitude of the kernel component. To see how it works, we use
Proposition 30. Let $f$ be a distribution of $2P$ isolated pixels with alternating amplitudes, located at the vertices of a regular polygon, and such that its first pixel is located at polar angle $\varphi_0$. Then $f$ is in $\mathrm{Ker}\,C$ if and only if $P$ is an integer multiple of the number of projection directions $L$, and $\varphi_0 = (2q+1)\pi/2P$, where $q$ is an integer.

PROOF. The component of the source image we wish to be in $\mathrm{Ker}\,C$ has the general form
$$g(\rho, \varphi) = \sum_{p=0}^{2P-1} (-1)^p\, \frac{1}{\rho}\, \delta(\rho - \rho_0)\, \delta(\varphi - \varphi_0 - p\pi/P),$$
where $\varphi_0$ is a tilt angle to be determined. Its spectrum is
$$Fg(\nu, \theta) = \sum_{n=-\infty}^{+\infty} (-j)^n J_n(2\pi\nu) \sum_{p=0}^{2P-1} (-1)^p\, e^{jn(\theta - \varphi_0 - p\pi/P)}$$
$$= \sum_{n \ge 1} J_n(2\pi\nu) \left[ (-j)^n (-1)^n e^{-jn(\theta - \varphi_0)} \sum_{p=0}^{2P-1} (-1)^p e^{jnp\pi/P} + (-j)^n e^{jn(\theta - \varphi_0)} \sum_{p=0}^{2P-1} (-1)^p e^{-jnp\pi/P} \right].$$
Using the identity
$$\sum_{p=0}^{N-1} (-1)^p e^{jp\xi} = \sum_{p=0}^{N-1} e^{jp(\xi+\pi)} = \frac{\sin[N(\xi+\pi)/2]}{\sin[(\xi+\pi)/2]}\, e^{j(N-1)(\xi+\pi)/2} =: S_N(\xi+\pi)\, e^{j(N-1)(\xi+\pi)/2},$$
with $N = 2P$ and $\xi = \mp n\pi/P$, we obtain
$$Fg(\nu, \theta) = 2 \sum_{n \ge 1} (-j)^n J_n(2\pi\nu)\, \frac{\sin(n\pi - P\pi)}{\sin(n\pi/2P - \pi/2)}\, \cos(n\theta - n\varphi_0 - n\pi + P\pi + n\pi/2P - \pi/2).$$
We observe that the function $S_{2P}(n\pi/P - \pi)$ is equal to $-2P$ for $n = (2k+1)P$ with even $k = 0, 2, \dots$, to $+2P$ for $n = (2k+1)P$ with odd $k = 1, 3, \dots$, and to zero otherwise (Figure 49). At the same time, at $n = (2k+1)P$,
$$\cos(n\theta - n\varphi_0 - n\pi + P\pi + n\pi/2P - \pi/2) = \cos\bigl[(2k+1)P\theta - (2k+1)P\varphi_0 - (2k+1)P\pi + P\pi + (2k+1)\pi/2 - \pi/2\bigr]$$
$$= \cos\bigl[(2k+1)P\theta - (2k+1)P\varphi_0 - 2kP\pi + k\pi\bigr] = \cos\bigl[(2k+1)P\theta - (2k+1)P\varphi_0 + k\pi\bigr]$$
is equal to $\cos[(2k+1)P(\theta - \varphi_0)]$ for even $k$, and to $-\cos[(2k+1)P(\theta - \varphi_0)]$ for odd $k$, in such a way that what remains is
$$Fg(\nu, \theta) = 4P\,(-j)^P \sum_{k \ge 0} J_{(2k+1)P}(2\pi\nu)\, \cos\bigl[(2k+1)P\theta - (2k+1)P\varphi_0\bigr].$$
Component $(2k_0+1)P$ is in the kernel if $\cos[(2k_0+1)P\pi\ell/L - (2k_0+1)P\varphi_0] = 0$ for all $\ell = 0, 1, \dots, L-1$, which is possible only if $(2k_0+1)P\pi\ell/L - (2k_0+1)P\varphi_0$ can be made equal to an odd number times $\pi/2$ for all values of $\ell$, i.e.
$$(2k_0+1)P\pi\ell/L - (2k_0+1)P\varphi_0 = -(2q+1)\pi/2, \qquad 0 \le \ell \le L-1,\; q \in \mathbb{N}.$$
Solutions are obtained with $(2k_0+1)P\ell/L = q$ and $(2k_0+1)P\varphi_0 = (2q'+1)\pi/2$, where $q'$ is an integer. The first condition requires $(2k_0+1)P/L$ to be an integer, i.e. $(2k_0+1)P'/L'$ to be an integer, where $P'/L'$ is the irreducible form of the $P/L$ fraction. This is possible if and only if $L'$ is odd. The second condition is satisfied with $\varphi_0 = (2q'+1)\pi/2(2k_0+1)P$.
Once $(k_0, \varphi_0)$ has been chosen, components in the kernel are those verifying
$$(2k+1)P\pi\ell/L - (2k+1)P\varphi_0 = -(2q+1)\pi/2, \qquad 0 \le \ell \le L-1,\; q \in \mathbb{N},$$
i.e. $(2k+1)P'\ell/L' = q$, $(2k+1)(2q'+1)/(2k_0+1) = 2q''+1$, $q, q'' \in \mathbb{N}$, which may be satisfied if and only if $2k+1$ is a multiple of $2k_0+1$ (i.e. an odd multiple).

Now, $g$ is in $\mathrm{Ker}\,C$ if and only if all components $(2k+1)P$, $k = 0, 1, \dots$ of $Fg$ vanish at $\theta = \pi\ell/L$, $\ell = 0, \dots, L-1$. This is possible only with the choice $k_0 = 0$, which in turn requires $L' = 1$. ∎

Figure 49. The function sin(x + πP)/sin(x/2P + π/2) for P = 6.

Some general comments can be made in conclusion of this section:


(1) the conditions required to have non-zero kernel components look rather selective, which explains why the reconstruction algorithm performs so well in many examples;
(2) the polygonal constellations of isolated pixels are used here for better understanding, with a minimum of mathematics implied. Of course the spectral content of a given source image reflects its amplitude distribution and its shape as well. For instance a uniform "disk", once discretized, may be viewed as an "approximately regular" polygon with a large number of vertices. All angular components have non-zero amplitude, hence all components with an order equal to a multiple of L will fall in the kernel. On the contrary, a square shape will have a poorer angular spectrum, and a smaller component in the kernel;
(3) finding a quantitative relation between the reconstruction error and the kernel component remains an open problem, the key point being to find a relation between the spectrum of a function taking positive and negative values ($T_\lambda f_n$) and the spectrum of its positive part ($P_C T_\lambda f_n$).

3.3. Numerical applications


The Radon transform is first computed using the DRT algorithm described in §2.4, which is recalled
here.
(STEP 1) The source data is a continuous image with size $w \times w$. After sampling, this image becomes a 2D array of $N \times N$ numbers; the sampling interval is $\Delta x = w/N$ along each axis, and the sampling rate is $\nu_s = 1/\Delta x = N/w$. In view of using the FFT algorithm to compute the DFTs, N is chosen to be a power of 2 and the $\{x = 0, y = 0\}$ sample is arbitrarily set at location $\{N/2+1, N/2+1\}$ in the array.
(STEP 2) The image spectrum is computed in a domain extended over $[-\nu_s/2, +\nu_s/2]$ along each axis; the spectrum size is $(N+1) \times (N+1)$ as explained in §2.4.
(STEP 3) A standard interpolation scheme is used to generate L $\theta$-slices in the disk with radius $\nu_s/2$ and centered in $\{\nu_x = 0, \nu_y = 0\}$, each $\theta$-slice having N equispaced samples ranging from $-\nu_s/2$ to $+\nu_s(N-1)/2N$. Angle $\theta$ has to be uniformly sampled from $0$ to $\pi(L-1)/L$.
(STEP 4) Finally, the inverse FFT (IFFT) of each $\theta$-slice yields the desired Radon projection at angle $\theta$.
The side-lobe reduction filter will be applied later on, as would be when processing real-life data. The
DRT is illustrated by the following diagram:

Radon Transform, using the Projection-Slice Theorem

$$f(x, y) \;\xrightarrow{\ F_2\ }\; F_2 f(\nu_x, \nu_y) \;\xrightarrow{\ \text{interpolation}\ }\; F_2 f(\nu, \theta) \;\xrightarrow{\ F_1^{-1}\ }\; Rf(t, \theta)$$
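The four DRT steps can be sketched in a few lines of numpy (an illustrative implementation, not the report's code; the bilinear interpolation and the even-N sample placement are simplifying assumptions):

```python
import numpy as np

def drt(image, L):
    """Discrete Radon transform via the projection-slice theorem (sketch).

    STEP 2: centered 2-D FFT of the N x N image; STEP 3: bilinear
    interpolation of L angular slices through the spectrum; STEP 4:
    inverse 1-D FFT of each slice gives the projection at theta = pi*l/L.
    """
    N = image.shape[0]
    F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))  # centered spectrum
    c = N // 2                                  # index of the zero-frequency sample
    k = np.arange(N) - c                        # radial sample offsets
    sinogram = np.empty((L, N))
    for l in range(L):
        th = np.pi * l / L
        x = c + k * np.cos(th)                  # fractional slice coordinates
        y = c + k * np.sin(th)
        x0 = np.clip(np.floor(x).astype(int), 0, N - 2)
        y0 = np.clip(np.floor(y).astype(int), 0, N - 2)
        dx, dy = x - x0, y - y0
        slice_ = (F[y0, x0] * (1 - dx) * (1 - dy) + F[y0, x0 + 1] * dx * (1 - dy)
                  + F[y0 + 1, x0] * (1 - dx) * dy + F[y0 + 1, x0 + 1] * dx * dy)
        proj = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(slice_)))
        sinogram[l] = proj.real
    return sinogram
```

For a centered impulse the spectrum is flat, so every projection is an impulse of unit mass at t = 0, which gives a quick sanity check of the sampling conventions.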

The IDRT is then computed according to §3.1.


(STEP 4) Input data is a set of L Radon transforms, each of them having N samples along the t-axis. Sampling intervals are $\Delta t = w/N$ along the $t$-axis, and $\Delta\theta = \pi/L$ along the $\theta$-axis. In view of using the FFT algorithm to compute the DFTs, N is chosen to be a power of 2 and the $t = 0$ sample is arbitrarily set at location N/2+1.
(STEP 5) A standard FFT algorithm produces a set of L spectra, each of them having N samples, extending from $\nu(1) = 0$ to $\nu(N) = \nu_s(N-1)/N$. The periodicity of the FFT is used to re-organize the spectra, in such a way as to cover the full $[-\nu_s/2, +\nu_s/2]$ interval. The resulting spectrum slice size is N+1.
(STEP 6) The raw reconstructed image is given by the interferometer formula of §3.1. The integral has to be computed by brute force; the operation count is reduced using all symmetries resulting from the fact that spectrum slices are angularly equispaced (odd-L and even-L cases are considered individually). The side-lobe reduction function is also applied at this step; the weighted interferometer integral is given in Cartesian coordinates by
$$g(x, y) = \frac{1}{2} \sum_{\ell=0}^{L-1} \int_{-\nu_0}^{+\nu_0} d\nu\; Ff(\nu, \pi\ell/L)\, H(\nu)\, e^{\,j2\pi\nu\,(x\cos\frac{\pi\ell}{L} + y\sin\frac{\pi\ell}{L})},$$
where $H$ is the weighting function. The $[-\nu_0, +\nu_0]$ integration domain is sampled with K+1 samples extending over $[-\nu_s K/2N, +\nu_s K/2N]$, where $K \le N$. Choosing $K < N$ may help to eliminate high-frequency noise. Some improvement has also been added with the use of circular Taylor functions, which yield the best compromise between a narrower main lobe and a lower far side-lobe level.
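A brute-force evaluation of this weighted sum can be sketched as follows (an illustration under assumptions: uniform $\nu$ sampling, the $1/2$ prefactor as written above, and none of the symmetry-based operation-count reductions):

```python
import numpy as np

def interferometer(slices, nu, H, xs, ys):
    """Weighted interferometer sum of STEP 6, by brute force.

    slices[l, k] = Ff(nu[k], pi*l/L) for L equispaced angles;
    H[k] samples the side-lobe reduction function; the raw image g is
    returned on the Cartesian grid ys x xs.
    """
    Lnum, K = slices.shape
    dnu = nu[1] - nu[0]
    g = np.zeros((len(ys), len(xs)))
    for l in range(Lnum):
        th = np.pi * l / Lnum
        s = xs[None, :] * np.cos(th) + ys[:, None] * np.sin(th)  # x cos + y sin
        phase = np.exp(2j * np.pi * np.multiply.outer(s, nu))    # (ny, nx, K)
        g += 0.5 * dnu * (phase * (slices[l] * H)).sum(axis=-1).real
    return g

# Impulse at the origin: Ff = 1 on every slice, H = 1 -> g is the raw PSF,
# with central value 0.5 * L * K * dnu (all phases equal 1 there).
nu = np.linspace(-8.0, 8.0, 33)
g = interferometer(np.ones((4, 33)), nu, np.ones(33),
                   np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
```

The usage lines reproduce the "PSF = interferometer output of an impulse" observation made in STEP 7 below; the peak at the center is L times the amplitude of a single line, as noted in §3.4.1.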

(STEP 7) The point-spread function (PSF) is computed using again the interferometer integral. The Fourier transform of an impulse image is $Ff = 1$, and the Fourier transform of $\tilde\pi$ is
$$F\tilde\pi(\nu, \theta) = \sum_{\ell=0}^{L-1} \delta(\theta - \pi\ell/L)\; \mathrm{rect}(\nu/2\nu_0);$$
hence the weighted PSF is given in Cartesian coordinates by
$$\tilde\pi(x, y) = \frac{1}{2} \sum_{\ell=0}^{L-1} \int_{-\nu_0}^{+\nu_0} d\nu\; H(\nu)\, e^{\,j2\pi\nu\,(x\cos\frac{\pi\ell}{L} + y\sin\frac{\pi\ell}{L})}.$$

(STEP 8) Finally, the reconstructed image is obtained with the iterative process
$$f_{n+1} = P_C\bigl[f_n + \lambda (g - C f_n)\bigr], \qquad f_0 = g.$$
The convolution integral $C f_n$ is efficiently computed using the Fourier transform and zero-padding to avoid aliasing. More precisely, the $(N+1) \times (N+1)$ matrix representing $\tilde\pi$ is expanded to a $4N \times 4N$ matrix $\tilde\pi_C$ with zero-padding. The $(N+1) \times (N+1)$ matrix representing $f_n$ is also expanded to a $4N \times 4N$ matrix $\tilde f_n$ with zero-padding. $C f_n$ is obtained by computing $F^{-1}[F(\tilde\pi_C)\, F(\tilde f_n)]$ and saving the appropriate $(N+1) \times (N+1)$ part of the result.
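The zero-padded FFT convolution can be sketched generically as follows (an illustration, not the report's code: it pads to the minimal linear-convolution size rather than to 4N, and assumes a centered PSF):

```python
import numpy as np

def conv_via_fft(psf, img):
    """Aliasing-free convolution by zero-padding, as in STEP 8 (sketch).

    psf : (Np x Np) PSF samples, centered at [Np//2, Np//2];
    img : (Ni x Ni) image samples.
    Both arrays are zero-padded to a common size >= Np+Ni-1 before the
    FFTs, so the circular convolution computed by the FFT equals the
    linear one; the part aligned with the image support is returned.
    """
    n = psf.shape[0] + img.shape[0] - 1            # linear-convolution size
    P = np.fft.fft2(psf, (n, n))                   # fft2 zero-pads for us
    F = np.fft.fft2(img, (n, n))
    full = np.fft.ifft2(P * F).real                # full linear convolution
    c = psf.shape[0] // 2                          # offset of the PSF center
    m = img.shape[0]
    return full[c:c + m, c:c + m]
```

With a discrete delta as PSF the routine returns the image unchanged, which is the standard check that the padding and the extraction window are consistent.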
The IDRT is illustrated by the following diagram:

Inverse Radon Transform for limited number of Projections

$$Rf(t, \theta_\ell) \;\xrightarrow{\ F_1\ }\; F_2 f(\nu, \theta_\ell) \;\xrightarrow{\ H\ }\; H \cdot F_2 f(\nu, \theta_\ell) \;\xrightarrow{\ \text{interferometer}\ }\; g(x, y) \equiv C f(x, y) \;\xrightarrow{\ \text{deconvolution}\ }\; f(x, y)$$

The behavior of the iterative deconvolution process is monitored with the help of the following parameters:
- the normalized $L^2$-norm $\|r_n\|_2 / \|Cf\|_2$ of the residual $r_n := g - C f_n$,
- the normalized $L^2$-norm of the decrement, $\|f_{n+1} - f_n\|_2 / \|Cf\|_2$,
- the infimum $\inf_{\Omega_1} r_n(\omega)$ and the supremum $\sup_{\Omega_1} r_n(\omega)$ of the residual,
- the mean of the reconstruction error, $\langle f_n - f \rangle := \frac{1}{\mathrm{meas}(\Omega_1)} \int_{\Omega_1} [f_n(\omega) - f(\omega)]\, d\omega$,
- the standard deviation of the reconstruction error, $\sigma(f_n - f) := \bigl[\langle (f_n - f)^2 \rangle - \langle f_n - f \rangle^2\bigr]^{1/2}$,
- the infimum $\inf_{\Omega_1} [f_n(\omega) - f(\omega)]$ and the supremum $\sup_{\Omega_1} [f_n(\omega) - f(\omega)]$ of the reconstruction error.

3.4. Examples
We now give a few examples of the IDRT for a limited number of projections. The Radon transforms are computed with the standard DRT previously described. Seven parameters are needed:
w, the physical width of the source image,
N, the digital size of the source image,
NS ≡M N, the digital size of the spectrum (M is the magnification factor),
K, the size of the spectrum used for reconstruction,
L, the number of angle samples,
the type of side-lobe reduction function.

3.4.1. Large inhomogeneous rectangle


We start with the inhomogeneous rectangle, which conveniently illustrates most properties of the reconstruction algorithm. The source image is the union of two adjacent rectangles, the first one with size 0.4 × 0.4, center (0.4, 0.0) and amplitude 2, and the second one with size 0.8 × 0.4, center (−0.2, 0.0) and amplitude 1.0. The image size is w = 2, N = 32, hence the sampling rate is νs = 16, and the Nyquist rate is νs/2 = 8. The magnification ratio is set to M = 8, hence NS = 256. The size of the spectrum used for reconstruction is also K = 256, hence ν₀ = νs/2. The number of projections is 6. The side-lobe reduction function is the circular Taylor function with n = 28 (80 dB side-lobe level).
The eight steps of the DRT-IDRT chain are illustrated by Figure 50 to Figure 62. The PSF displayed over domain Ω₂ = [−w,+w]² (in Figure 55) is used by the C operator. The superposition of the L line components of the PSF is clearly visible as a set of L radial lines. In the central region, these L lines form a narrow peak surrounded by a null ring. Of course the peak amplitude is L times the line amplitude. While C uses only values of π̃ over Ω₂, it is instructive to display π̃ over a wider [−4w,+4w]² domain as in Figure 56. Frequency domain sampling creates alias lines, located every N/νs from each other. The first ambiguity region visible at the center is a 2L-gon with apothem equal to N/νs. Note that Ω₂ should lie in this first ambiguity region to avoid aliases in the reconstruction process, which is achieved with
$$w \le \frac{1}{2}\,\frac{N}{\nu_s}.$$
The raw image reconstructed using the interferometer formula is displayed in Figure 57. A blurred copy of the original image is clearly visible, surrounded by heavy clutter resulting from the convolution with the radial line components of the PSF. The convolution of the original image with the calculated PSF is displayed in Figure 58; since the original image is defined over [−w/2,+w/2]² and the PSF over [−w,+w]², the resulting digital convolution is defined over [−3w/2,+3w/2]². The raw reconstructed image and the digital convolution match perfectly over [−w/2,+w/2]² as expected. Note that convolution artifacts due to frequency domain sampling are quite high outside the first ambiguity region (Figure 59, [−4w,+4w]² domain), thus sustaining the importance of choosing Ω₂ inside this first ambiguity region.
The reconstructed image (Figure 60) looks almost perfect. Spurious pixels located outside the source image are −60 dB or lower, while the peak amplitude of the source image is +6 dB, leading to a 66 dB dynamic range (Figure 61). The reconstruction error peaks at 1.75 × 10⁻² in the source image domain (Figure 62).
Convergence and accuracy histories are displayed in Figure 63 and Figure 64 respectively. The mean of the reconstruction error is 2.24 × 10⁻⁵ and the standard deviation is 2.57 × 10⁻³, clearly showing the ability of the algorithm to reconstruct the source image. The L²-norm of the decrement and the normalized L²-norm of the residual vary approximately as 1/n². The supremum and the infimum of the residual both tend to zero as n → ∞ (Figure 63). Note that the high-frequency noise visible in Figure 62 could easily be removed with gentle low-pass filtering, if ever necessary.
The accuracy of the reconstruction process is limited by two factors: the component of the source image in Ker C as explained in §3.2.2, and of course data discretization. The first factor will be explored in detail in the next two sections. Discretization errors naturally occur in the Cartesian-to-polar coordinate interpolation, and may lead to some discrepancy between the digital interferometer output g and the digital representation of Cf. Discretization errors of course get smaller with increasing magnification ratio M, as shown by the following table. Outputs of the corresponding runs are displayed in Figure 65 to Figure 73. The reconstruction algorithm is always convergent in the sense that the norm of the decrement ‖f_{n+1} − f_n‖₂ goes to zero, while the norm of the residual ‖g − Cf_n‖₂ goes to a non-zero limit, which is smaller with a larger magnification ratio M. Data in the table show that the peak reconstruction error is divided by ~4, the mean reconstruction error is divided by ~10 and the reconstruction standard deviation is divided by ~3.5 as M is multiplied by 8, from 2 to 16.

                         NS = K = 64    NS = K = 128   NS = K = 256   NS = K = 512
magnification ratio M         2              4              8             16
mean                     8.82 × 10⁻⁵    1.40 × 10⁻⁵    2.24 × 10⁻⁵    7.69 × 10⁻⁶
standard deviation       6.01 × 10⁻³    3.39 × 10⁻³    2.57 × 10⁻³    1.71 × 10⁻³
peak                     3.98 × 10⁻²    2.49 × 10⁻²    1.75 × 10⁻²    1.01 × 10⁻²
Figure 50. STEP 1. Source image; size is 32 × 32, on domain Ω₁ = [−w/2,+w/2]².

Figure 51. STEP 2. Amplitude of the complex spectrum in dB scale (80 dB dynamic range), displayed on the [−νs/2,+νs/2]² domain.

Figure 52. STEP 3. Spectrum slices along iso-θ lines, computed by interpolation across the Cartesian map, and displayed in dB scale (80 dB dynamic range).

Figure 53. STEP 4. Radon transform computed by inverse Fourier transform.

Figure 54. STEP 5. Spectrum slices computed from the Radon transform by Fourier transform (dB scale; 80 dB dynamic range).

Figure 55. STEP 6. Point Spread Function, displayed in linear scale on domain Ω₂ = [−w,+w]², which is required by the deconvolution algorithm.

Figure 56. STEP 6. Point Spread Function, displayed in linear scale on the larger [−4w,+4w]² domain. The first ambiguity region is clearly isolated at the center of the image. Sets of parallel spectral lines are created by sampling. Here, we have 128 samples over [−νs/2,+νs/2] = [−16,+16], hence the line separation is 128/32 = 4.

Figure 57. STEP 7. Raw image reconstructed from the spectrum slices, using the interferometer formula.

Figure 58. Here, we have computed the convolution of the original image by the calculated PSF; the result perfectly matches the raw reconstructed image over Ω₁ = [−w/2,+w/2]².

Figure 59. Convolution of the original image by the PSF, calculated over a larger domain. Aliasing is clearly visible.

Figure 60. STEP 8. Image reconstructed by the iterative deconvolution, displayed in linear scale.

Figure 61. STEP 8. Image reconstructed by the iterative deconvolution, displayed in dB with 100 dB dynamic range. Spurious pixels are lower than −60 dB, while the peak image value is +6 dB. The spurious-free dynamic range is then ~66 dB.

Figure 62. STEP 8. Reconstruction error, displayed in linear scale over the [0,+0.02] range. Reconstruction error in the image domain is primarily high-frequency, which could easily be removed with low-pass filtering. Peak image error is 1.75 × 10⁻².

Figure 63. Convergence history of the deconvolution algorithm (normalized residual and decrement norms, sup r and −inf r, vs. iteration number).

Figure 64. Accuracy history of the deconvolution algorithm (mean limit = +2.2418 × 10⁻⁵, standard deviation limit = 2.5693 × 10⁻³).

Figure 65. Peak reconstruction error is 3.98 × 10⁻² with NS = K = 64.

Figure 66. Convergence history with NS = K = 64.


Figure 67. Accuracy history with NS = K = 64 (mean limit = +8.8182 × 10⁻⁵, standard deviation limit = 6.0144 × 10⁻³).

Figure 68. Peak reconstruction error is 2.49 × 10⁻² with NS = K = 128.


Figure 69. Convergence history with NS = K = 128.

Figure 70. Accuracy history with NS = K = 128 (mean limit = +1.3991 × 10⁻⁵, standard deviation limit = 3.3857 × 10⁻³).


Figure 71. Peak reconstruction error is 1.01 × 10⁻² with NS = K = 512.

Figure 72. Convergence history with NS = K = 512.


Figure 73. Accuracy history with NS = K = 512 (mean limit = +7.6866 × 10⁻⁶, standard deviation limit = 1.7130 × 10⁻³).



3.4.2. Isolated pixels


In order to better understand the algorithm behavior, we start by considering three cases: a single
pixel, two pixels aligned along one of the projection axes, and two pixels not aligned along any
projection axis. The seven parameters of the DRT-IDRT chain are as described at the beginning of §3.4.1.
In the first case (single pixel), the reconstruction error goes to machine precision: the mean and
standard deviation of the reconstruction error are 6.7×10⁻¹¹ and 9.8×10⁻¹⁰ respectively after 10 000
iterations. Both decrement and residual also go to machine precision. In the two other cases, the
reconstruction error tends to a limit, with mean values of 2.44×10⁻⁸ and 1.26×10⁻⁵ respectively, and
standard deviations of 6.93×10⁻⁷ and 1.04×10⁻⁴ respectively. The peak value is also smaller with pixels
aligned along one projection axis (1.55×10⁻⁵) than with pixels not aligned along any projection axis
(2.47×10⁻³). As explained in the previous section, this accuracy limit could be improved with a larger
magnification ratio M.

reconstruction error:    two pixels aligned along    two pixels not aligned
                         one projection axis         along any projection axis
mean value               2.44×10⁻⁸                   1.26×10⁻⁵
standard deviation       6.93×10⁻⁷                   1.04×10⁻⁴
peak value               1.55×10⁻⁵                   2.47×10⁻³
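The error statistics quoted throughout this section can be reproduced from any reconstruction-error image; the sketch below (pure Python, with a tiny synthetic error image rather than actual DRT-IDRT output) shows the mean, standard-deviation and peak-magnitude computation assumed here.

```python
def error_stats(err):
    """Mean, standard deviation and peak magnitude of an error image (list of rows)."""
    flat = [v for row in err for v in row]
    n = len(flat)
    mean = sum(flat) / n
    std = (sum((v - mean) ** 2 for v in flat) / n) ** 0.5
    peak = max(abs(v) for v in flat)
    return mean, std, peak

# Tiny synthetic stand-in for a reconstruction-error image.
err = [[0.0, 1e-05], [-2e-05, 1e-05]]
m, s, p = error_stats(err)
```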
Figure 74. Single pixel reconstruction error magnitude, displayed on linear scale; peak error is 2.77×10⁻⁸.

Figure 75. Single pixel reconstruction: convergence history.

Figure 76. Single pixel reconstruction: accuracy history.

Figure 77. Two-pixel (aligned along one projection axis) reconstruction error magnitude, displayed on linear scale; peak error is 1.55×10⁻⁵.

Figure 78. Two-pixel (aligned along one projection axis) reconstruction: convergence history.

Figure 79. Two-pixel (aligned along one projection axis) reconstruction: accuracy history.

Figure 80. Two-pixel (not aligned along any projection axis) reconstructed image.

Figure 81. Two-pixel (not aligned along any projection axis) reconstruction error magnitude, displayed on linear scale; peak error is 2.47×10⁻³.

Figure 82. Two-pixel (not aligned along any projection axis) reconstruction: convergence history.

Figure 83. Two-pixel (not aligned along any projection axis) reconstruction: accuracy history.
3.4.3. Periodically distributed isolated pixels


In order to illustrate Proposition 29 we consider three constellations of pixels distributed at the
vertices of a regular hexagon (P = 3), a regular octagon (P = 4) and a regular dodecagon (P = 6). For
each constellation, two cases are run: un-tilted (θ₀ = 0), and tilted by θ₀ = π/4P. Results are
displayed in Figure 84 to Figure 101, and summarized in the following table. With these particular
choices for θ₀, and following Proposition 29, we know that the spectral components of these
constellations have angular dependences of the form cos(2kPθ) or sin(2kPθ). The sine forms are in
Ker C, and their angular orders n = 2kP are given for each case in the table.
All configurations yield quite similar reconstruction errors, with peak error in the range
[0.774, 1.52]×10⁻⁴, mean error in the range [2.26, 3.81]×10⁻⁷, and standard deviation in the range
[0.685, 1.54]×10⁻⁵. It seems here that the high angular component orders lead to minute amplitudes of
the Bessel functions, and finally have little impact on reconstruction accuracy. Their only visible
trace can be found on the reconstruction error magnitude maps (Figure 87, Figure 93, Figure 99), in
the form of an amplitude modulation of the reconstructed pixel errors. The convergence of the cases
with kernel components is also slightly slower.

[Summary table: for the six constellations (6-pix, tilted 6-pix, 8-pix, tilted 8-pix, 12-pix, tilted 12-pix) the table lists P (3, 3, 4, 4, 6, 6), L, P'/L' ≡ 2P/L, θ₀ (0 or π/4P), k₀, the kernel angular orders n = 2(2q+1)k₀P (q = 0, 1, ...), and the reconstruction-error statistics (iterations, mean value, standard deviation, peak value); the numeric entries are not legible in this copy.]
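The claim that high angular orders carry minute amplitudes because of the behavior of Bessel functions of the first kind can be checked numerically. The sketch below (an illustration of the general property, not the report's code) evaluates J_n at a fixed argument through its integral representation and shows the rapid decay with order n once n exceeds the argument.

```python
import math

def bessel_j(n, x, steps=2000):
    """Bessel function of the first kind via the integral representation
    J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt (midpoint rule)."""
    h = math.pi / steps
    return sum(math.cos(n * (i + 0.5) * h - x * math.sin((i + 0.5) * h))
               for i in range(steps)) * h / math.pi

# For orders n well above the argument x, J_n(x) is minute: kernel components
# of high angular order therefore contribute almost nothing to the image.
x = 3.0
amps = [abs(bessel_j(n, x)) for n in (4, 8, 12)]
```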

Figure 84. Six-pixel regular polygon reconstruction error magnitude; peak error is 1.23×10⁻⁴.

Figure 85. Six-pixel regular polygon: convergence history.

Figure 86. Six-pixel regular polygon: accuracy history.

Figure 87. Six-pixel π/4P-tilted regular polygon reconstruction error magnitude; peak error is 1.42×10⁻⁴.

Figure 88. Six-pixel π/4P-tilted regular polygon: convergence history.

Figure 89. Six-pixel π/4P-tilted regular polygon: accuracy history.

Figure 90. Eight-pixel regular polygon reconstruction error magnitude; peak error is 1.27×10⁻⁴.

Figure 91. Eight-pixel regular polygon: convergence history.

Figure 92. Eight-pixel regular polygon: accuracy history.

Figure 93. Eight-pixel π/4P-tilted regular polygon reconstruction error magnitude; peak error is 1.24×10⁻⁴.

Figure 94. Eight-pixel π/4P-tilted regular polygon: convergence history.

Figure 95. Eight-pixel π/4P-tilted regular polygon: accuracy history.

Figure 96. Twelve-pixel regular polygon reconstruction error magnitude; peak error is 7.74×10⁻⁵.

Figure 97. Twelve-pixel regular polygon: convergence history.

Figure 98. Twelve-pixel regular polygon: accuracy history.

Figure 99. Twelve-pixel π/4P-tilted regular polygon reconstruction error magnitude; peak error is 1.52×10⁻⁴.

Figure 100. Twelve-pixel π/4P-tilted regular polygon: convergence history.

Figure 101. Twelve-pixel π/4P-tilted regular polygon: accuracy history.

3.4.4. Periodically distributed isolated pixels with kernel component


In a first example we apply Proposition 30 with L = 6, P = 6, θ₀ = π/2P = π/12 and modulation
amplitude a = 0.2, in such a way that the modulated component has all its angular components
(2q+1)P = 6, 18, ... in the kernel. Two cases have been run: θ₀ = 0 (Figure 102 to Figure 105), which
has no component in the kernel, and θ₀ = π/2P (Figure 106 to Figure 109). Results are summarized in
the following table, where the two un-modulated constellations of the previous section have been added
for easier comparison. The orders of the spectral angular components in Ker C (2(2q+1)k₀P for the
un-modulated component of the source image, and (2q+1)P for the modulated component) are also given
in the table.
All four cases yield similar reconstruction errors, with peak error in the range [0.774, 1.52]×10⁻⁴,
mean value in the range [2.26, 3.57]×10⁻⁷, and standard deviation in the range [0.849, 1.54]×10⁻⁵.
As in the previous section, the high angular orders of the kernel components lead to minute amplitudes
of the Bessel functions, hence a negligible effect on reconstruction accuracy. And again, convergence
is slower for the cases having a kernel component.

[Summary table: for the four 12-pixel runs (un-tilted un-modulated, tilted un-modulated, un-tilted modulated, tilted modulated) the table lists P, L, P'/L' ≡ 2P/L, θ₀ (0, π/4P or π/2P), k₀ and the kernel angular orders n = 2(2q+1)k₀P of the un-modulated component, the orders n = (2q+1)P of the modulated component, and the reconstruction-error statistics (iterations, mean value, standard deviation, peak value); the numeric entries are not legible in this copy.]
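As a concrete illustration of the modulation used here, the sketch below computes the 2P vertex amplitudes under the assumed law 1 + a·cos(Pθ_k); this formula is an inference from context (it reproduces the alternating 1 + a / 1 − a pattern visible in Figure 102), not a formula quoted by the report.

```python
import math

# Assumed modulation of the 2P vertex amplitudes: 1 + a*cos(P*theta_k).
P, a = 6, 0.2
thetas = [2 * math.pi * k / (2 * P) for k in range(2 * P)]  # 12 vertex angles
amps = [1 + a * math.cos(P * t) for t in thetas]            # alternates 1.2, 0.8
```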

Figure 102. Twelve-pixel, modulated, un-tilted regular polygon: source image. Amplitude of the modulated component is a = 0.2.

Figure 103. Twelve-pixel, modulated, un-tilted regular polygon reconstruction error magnitude (L = 6); peak error is 1.36×10⁻⁴.

Figure 104. Twelve-pixel, modulated, un-tilted regular polygon: convergence history (L = 6).

Figure 105. Twelve-pixel, modulated, un-tilted regular polygon: accuracy history (L = 6).

Figure 106. Twelve-pixel, modulated, π/2P-tilted regular polygon: source image.

Figure 107. Twelve-pixel, modulated, π/2P-tilted regular polygon reconstruction error magnitude; peak error is 1.00×10⁻⁴.

Figure 108. Twelve-pixel, modulated, π/2P-tilted regular polygon: convergence history.

Figure 109. Twelve-pixel, modulated, π/2P-tilted regular polygon: accuracy history.

In order to evidence the effect of the kernel component on reconstruction accuracy, we need cases
with lower orders of spectral angular components in the kernel. To this purpose, we use a reduced
number of projection directions, L = 4 (the corresponding PSF is shown in Figure 112), and an
eight-pixel (P = 4) constellation. Four runs are considered: un-modulated un-tilted, un-modulated
π/4P-tilted, modulated un-tilted and modulated π/2P-tilted. The amplitude of the modulated component
is a = 0.2. The orders of the spectral angular components in Ker C are given in the following table.
The first and third runs, with no kernel component, converge and lead to minute reconstruction errors.
The second and fourth runs also converge (this is Proposition 23), with minute mean reconstruction
error (this is Proposition 25), but with high peak reconstruction error. In the second run, the
reconstruction error takes the form of a 4P constellation, clearly corresponding to the order 8 of the
fundamental kernel component (Figure 117 and Figure 118). In the fourth case, the kernel component is
simply rejected by the iterative algorithm: the reconstructed image has lost its modulated component
(Figure 127) and the reconstruction error is equal to the modulated component itself (Figure 128).

[Summary table: for the four 8-pixel runs with L = 4 (un-tilted un-modulated, tilted un-modulated, un-tilted modulated, tilted modulated) the table lists P, L, P'/L' ≡ 2P/L, θ₀, k₀ and the kernel angular orders n = 2(2q+1)k₀P of the un-modulated component, the orders n = (2q+1)P of the modulated component, and the reconstruction-error statistics (iterations, mean value, standard deviation, peak value); the numeric entries are not legible in this copy.]

In conclusion, the runs presented in this section demonstrate the role of the component of the source
image in Ker C. Spectacular reconstruction errors may indeed be found, but they require a specific
modulation of the source image with particular angular rates and particular initial angles. The
amplitude of spectral components also decreases very rapidly with order (a general feature of Bessel
functions of the first kind), explaining why dramatic effects have been found with L = 4 projection
angles and none with L = 6.
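The rejection of kernel components can be illustrated with a toy example: with only two projection directions (row sums and column sums), the checkerboard pattern below projects to zero along both directions, so it is invisible to the projections. This is a discrete analogue of the modulated components rejected above (an illustrative sketch, not the report's operator C).

```python
# A 2x2 "kernel element" of a toy projection operator taking row and column
# sums: both projections vanish, so any multiple of this pattern can be added
# to an image without changing its projections.
kernel = [[+1, -1],
          [-1, +1]]

row_sums = [sum(r) for r in kernel]        # horizontal projection
col_sums = [sum(c) for c in zip(*kernel)]  # vertical projection
```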

Figure 110. Eight-pixel, un-modulated, un-tilted regular polygon: source image.

Figure 111. Eight-pixel, un-modulated, un-tilted regular polygon: reconstructed image (L = 4).

Figure 112. L = 4 point spread function.

Figure 113. Eight-pixel, un-modulated, un-tilted regular polygon reconstruction error magnitude (L = 4); peak error is 1.18×10⁻⁴.

Figure 114. Eight-pixel, un-modulated, un-tilted regular polygon: convergence history (L = 4).

Figure 115. Eight-pixel, un-modulated, un-tilted regular polygon: accuracy history (L = 4).

Figure 116. Eight-pixel, un-modulated, π/4P-tilted regular polygon: source image.

Figure 117. Eight-pixel, un-modulated, π/4P-tilted regular polygon: reconstructed image (L = 4).

Figure 118. Eight-pixel, un-modulated, π/4P-tilted regular polygon reconstruction error (L = 4); peak error is 0.50.

Figure 119. Eight-pixel, un-modulated, π/4P-tilted regular polygon: convergence history (L = 4).

Figure 120. Eight-pixel, un-modulated, π/4P-tilted regular polygon: accuracy history (L = 4).

Figure 121. Eight-pixel, modulated, un-tilted regular polygon: source image.

Figure 122. Eight-pixel, modulated, un-tilted regular polygon: reconstructed image (L = 4).

Figure 123. Eight-pixel, modulated, un-tilted regular polygon reconstruction error magnitude (L = 4); peak error is 1.40×10⁻⁴.

Figure 124. Eight-pixel, modulated, un-tilted regular polygon: convergence history (L = 4).

Figure 125. Eight-pixel, modulated, un-tilted regular polygon: accuracy history (L = 4).

Figure 126. Eight-pixel, modulated, π/2P-tilted regular polygon: source image. Amplitude of the modulated component is a = 0.2.

Figure 127. Eight-pixel, modulated, π/2P-tilted regular polygon: reconstructed image (L = 4). The kernel component has vanished.

Figure 128. Eight-pixel, modulated, π/2P-tilted regular polygon reconstruction error (L = 4); peak error is alternately ±0.2, i.e. the amplitude of the kernel component.

Figure 129. Eight-pixel, modulated, π/2P-tilted regular polygon: convergence history (L = 4).

Figure 130. Eight-pixel, modulated, π/2P-tilted regular polygon: accuracy history (L = 4).

3.4.5. Centered uniform disk


The source object is a uniform disk with radius 0.55 (in some arbitrary metric unit) and unit
amplitude (f(x,y) = 1), centered at the image center. The image size is w = 2 and N = 32, hence the
sampling rate is 16 and the Nyquist rate is 8. The magnification ratio is set to M = 8, hence
Ns = 256. The size of the spectrum used for reconstruction is also K = 256, hence the spectral
cutoff is half the sampling rate. The number of projections is 6. The side-lobe reduction function
is the circular Taylor function with n = 28 (80 dB side-lobe level). Results are displayed in
Figure 131 to Figure 138. This is an example of an image with a non-negligible kernel component.
The reconstruction error is lower than 0.05 over most of the disk, with some isolated pixels peaking
at 9.42×10⁻². The convergence is rather slow, with 40 000 iterations here. Data in Figure 137
suggest that the reconstruction accuracy would be slightly better with more iterations.
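The sampling bookkeeping of this run is summarized below; the variable names are ours, since the report's own symbols are partly lost in this copy.

```python
# Sampling parameters of the centered-uniform-disk run (names are assumptions).
w, N = 2.0, 32                 # image width and number of samples per side
sampling_rate = N / w          # 16 samples per unit length
nyquist = sampling_rate / 2.0  # 8
M = 8                          # magnification ratio
Ns = M * N                     # 256 samples after magnification
K = Ns                         # spectrum size used for reconstruction
```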

Figure 131. Centered uniform disk: source image.

Figure 132. Centered uniform disk: raw reconstructed image.

Figure 133. Centered uniform disk convolution with the PSF.

Figure 134. Centered uniform disk: reconstructed image.

Figure 135. Centered uniform disk: reconstructed image in dB scale over 100 dB dynamic range. Spurious pixels are lower than −60 dB.

Figure 136. Centered uniform disk reconstruction error magnitude is lower than 5×10⁻² over most of the disk, with the exception of a few isolated pixels peaking at 9.42×10⁻².

Figure 137. Centered uniform disk: convergence history.

Figure 138. Centered uniform disk: accuracy history.

3.4.6. Offset uniform disk


The source object is a uniform disk with radius 0.55 (in some arbitrary metric unit) and unit
amplitude (f(x,y) = 1), centered at (0.2, 0.2). The image size is w = 2 and N = 32, hence the
sampling rate is 16 and the Nyquist rate is 8. The magnification ratio is set to M = 8, hence
Ns = 256. The size of the spectrum used for reconstruction is also K = 256, hence the spectral
cutoff is half the sampling rate. The number of projections is 6. The side-lobe reduction function
is the circular Taylor function with n = 28 (80 dB side-lobe level). Results (Figure 139 to
Figure 145) are quite similar to those of the centered uniform disk.

Figure 139. Offset uniform disk: source image.

Figure 140. Offset uniform disk: raw reconstructed image.

Figure 141. Offset uniform disk: reconstructed image.

Figure 142. Offset uniform disk: reconstructed image in dB scale over 100 dB dynamic range. Spurious pixels are lower than −60 dB.

Figure 143. Offset uniform disk reconstruction error magnitude is about 5×10⁻² over most of the disk.

Figure 144. Offset uniform disk: convergence history.

Figure 145. Offset uniform disk: accuracy history.

3.4.7. Centered Gaussian disk


The source object is a Gaussian function supported by a disk with radius 0.5 and centered at the
image center. The peak value of the Gaussian function is 1, and its standard deviation is equal to
half the radius. The image size is w = 2 and N = 32, hence the sampling rate is 16 and the Nyquist
rate is 8. The magnification ratio is set to M = 8, hence Ns = 256. The size of the spectrum used
for reconstruction is also K = 256, hence the spectral cutoff is half the sampling rate. The number
of projections is 6. The side-lobe reduction function is the circular Taylor function with n = 28
(80 dB side-lobe level). Results are displayed in Figure 146 to Figure 152. The reconstruction error
is about 1×10⁻² over the disk, with some isolated pixels peaking at 5.68×10⁻².
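For reference, the Gaussian-disk source of this section can be synthesized as below; this is a pure-Python stand-in, and the grid conventions (cell-centered samples) are our assumptions rather than the report's exact code.

```python
import math

# Gaussian disk: radius 0.5, peak 1, standard deviation half the radius,
# sampled on a cell-centered N x N grid covering [-w/2, w/2].
N, w = 32, 2.0
radius, sigma = 0.5, 0.25
xs = [-w / 2 + (i + 0.5) * w / N for i in range(N)]
img = [[math.exp(-(x * x + y * y) / (2 * sigma ** 2))
        if math.hypot(x, y) <= radius else 0.0
        for x in xs] for y in xs]
peak = max(max(row) for row in img)
```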

Figure 146. Centered Gaussian disk: source image.

Figure 147. Centered Gaussian disk: raw reconstructed image.

Figure 148. Centered Gaussian disk: reconstructed image.

Figure 149. Centered Gaussian disk: reconstructed image in dB scale over 100 dB dynamic range. Spurious pixels are lower than −65 dB.

Figure 150. Centered Gaussian disk reconstruction error magnitude is about 1×10⁻² over most of the disk; peak error is 5.68×10⁻².

Figure 151. Centered Gaussian disk: convergence history.

Figure 152. Centered Gaussian disk: accuracy history.

3.4.8. Offset Gaussian disk


The source object is a Gaussian function supported by a disk with radius 0.55 and centered at (0.4, 0.2). The peak value of the Gaussian function is 1, and its standard deviation is equal to half the radius. The image width is w = 2 and the image size is N = 32, hence the sampling rate is νs = 16 and the Nyquist rate is νs/2 = 8. The magnification ratio is set to M = 8, hence NS = 256. The size of the spectrum used for reconstruction is also K = 256, hence ν0 = νs/2. The number of projections is L = 6. The side-lobe reduction function is the circular Taylor function with n̄ = 28 (80 dB side-lobe level). Results are displayed in Figure 153 to Figure 159. Peak reconstruction error is 2.41×10⁻².
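The parameter relationships above (sampling rate, Nyquist rate, magnified spectrum size) can be sketched in a few lines. Variable names such as nu_s and N_S are ours, mirroring the report's notation; this is bookkeeping for the stated run, not code from the study:

```python
# Hypothetical parameter bookkeeping for the offset-Gaussian run.
w = 2.0    # image width
N = 32     # image size (samples per projection)
M = 8      # magnification ratio
L = 6      # number of projection directions

nu_s = N / w          # sampling rate
nyquist = nu_s / 2    # Nyquist rate
N_S = M * N           # magnified spectrum size
K = N_S               # reconstruction spectrum size (same here)
nu_0 = nu_s / 2       # cut-off frequency
print(nu_s, nyquist, N_S, nu_0)  # → 16.0 8.0 256 8.0
```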

Figure 153. Offset Gaussian disk: source image.

Figure 154. Offset Gaussian disk: raw reconstructed image.

Figure 155. Offset Gaussian disk: reconstructed image.

Figure 156. Offset Gaussian disk: reconstructed image in dB scale over 100 dB dynamic range. Spurious pixels are lower than −60 dB.

Figure 157. Offset Gaussian disk: reconstruction error magnitude; peak error is 2.41×10⁻².

Figure 158. Offset Gaussian disk: convergence history.

Figure 159. Offset Gaussian disk: accuracy history (mean lim = +1.2986e−5, std dev lim = 3.2807e−3).

3.4.9. Three Gaussian disks cluster


The source object is the sum of three Gaussian functions, whose parameters are chosen to demonstrate the ability of the algorithm to reconstruct fine structures. The three Gaussian functions are supported by disks with radius 0.8, centered at (0,0), (0.5,0) and (0,0.5) respectively. Peak amplitudes are 1.0, 0.8 and 0.5 respectively; standard deviations are 0.25, 0.18 and 0.13 respectively. Three runs are presented, whose major distinguishing parameters are the image size N and the number of projection directions L. The pairs (N, L) are (32, 6), (64, 6) and (64, 12) respectively. Other parameters are scaled accordingly, in such a way that the magnification ratio and cut-off frequency stay unchanged: M = 8, ν0 = νs/2. The side-lobe reduction function is the circular Taylor function with n̄ = 28 (80 dB side-lobe level) for all three runs.
Results of the first and second runs are displayed in Figure 160 to Figure 165, and Figure 166 to Figure 170, respectively. Reconstruction error is quite similar in both cases, with a mean value of a few 10⁻⁷, a standard deviation of 1.6×10⁻² and a peak value of 8.4×10⁻². This similarity of results shows that all digital transforms have been properly implemented.
The third run (L = 12, Figure 172 to Figure 176) yields dramatically better image reconstruction, with the standard deviation and peak value of the error improved by a factor of ~10, and also much faster convergence. Source and reconstructed images are indistinguishable at this accuracy. This of course results from the sharp peak and low side lobes of the PSF (Figure 171).
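As an illustration, the phantom described above can be built as follows. This is a sketch under the assumption that each Gaussian is simply truncated to its supporting disk; the report does not give its construction code, and the grid convention is ours:

```python
import numpy as np

# Three-Gaussian phantom on an N x N grid of width w (assumed construction:
# each Gaussian is set to zero outside its supporting disk).
N = 64
w = 2.0
x = np.linspace(-w / 2, w / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)

def gaussian_disk(cx, cy, radius, amp, sigma):
    """Gaussian of peak `amp` and std dev `sigma`, truncated to a disk."""
    r2 = (X - cx) ** 2 + (Y - cy) ** 2
    g = amp * np.exp(-r2 / (2.0 * sigma ** 2))
    return np.where(r2 <= radius ** 2, g, 0.0)

f = (gaussian_disk(0.0, 0.0, 0.8, 1.0, 0.25)
     + gaussian_disk(0.5, 0.0, 0.8, 0.8, 0.18)
     + gaussian_disk(0.0, 0.5, 0.8, 0.5, 0.13))
```

Because the disks overlap, the peak of the sum slightly exceeds 1, which matches the colorbar range of the source-image figures.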

                                   run #1      run #2      run #3

image width w                      2           2           2
image size N                       32          64          64
sampling rate νs                   16          32          32
magnification ratio M              8           8           8
spectrum size NS                   256         512         512
reconstruction spectrum size K     256         512         512
cut-off frequency ν0               νs/2        νs/2        νs/2
projection directions L            6           6           12

reconstruction error:
  mean                             6.15×10⁻⁷   2.27×10⁻⁷   2.09×10⁻⁶
  standard deviation               1.58×10⁻²   1.60×10⁻²   1.41×10⁻³
  peak value                       8.31×10⁻²   8.48×10⁻²   7.77×10⁻³

Figure 160. Three Gaussian disks: source image (N = 32, L = 6).

Figure 161. Three Gaussian disks: raw reconstructed image (N = 32, L = 6).

Figure 162. Three Gaussian disks: reconstructed image (N = 32, L = 6).

Figure 163. Three Gaussian disks (N = 32, L = 6) reconstruction error magnitude; peak value is 8.31×10⁻².

Figure 164. Three Gaussian disks: convergence history (N = 32, L = 6).

Figure 165. Three Gaussian disks: accuracy history (N = 32, L = 6) (mean lim = −5.1497e−7, std dev lim = 1.5763e−2).

Figure 166. Three Gaussian disks: source image (N = 64, L = 6).

Figure 167. Three Gaussian disks: raw reconstructed image (N = 64, L = 6).

Figure 168. Three Gaussian disks (N = 64, L = 6) reconstruction error magnitude; peak error is 8.48×10⁻².

Figure 169. Three Gaussian disks: convergence history (N = 64, L = 6).

Figure 170. Three Gaussian disks: accuracy history (N = 64, L = 6) (mean lim = +2.2705e−7, std dev lim = 1.6008e−2).

Figure 171. Point spread function for N = 64, L = 12.

Figure 172. Three Gaussian disks: raw reconstructed image (N = 64, L = 12).

Figure 173. Three Gaussian disks: reconstructed image (N = 64, L = 12).

Figure 174. Three Gaussian disks (N = 64, L = 12) reconstruction error magnitude; peak error is 7.77×10⁻³.

Figure 175. Three Gaussian disks: convergence history (N = 64, L = 12).

Figure 176. Three Gaussian disks: accuracy history (N = 64, L = 12) (mean lim = −2.0941e−6, std dev lim = 1.4058e−3).

3.4.10. Effects of additive noise


Like any fixed-point iteration, and as noted in [4], the present reconstruction algorithm exhibits semi-convergence in the presence of additive noise: the standard deviation of the reconstruction error first decreases with the number of iterations, goes through a minimum, then increases. When the noise-free source image is known (as in our runs), this minimum is easily found. In practical situations where the noise-free source image is unknown, some criterion has to be defined in order to stop the iterations at the best rank. This is what we want to evaluate. A first observation is that the minimum of the error standard deviation turns out to be rather flat, hence the stopping rank may be chosen in a wide range. The only available information is the residual (g − Cfⁿ). The infimum of the residual (which is always negative) increases with the number of iterations, then reaches a (negative) constant. This turning point roughly corresponds to the minimum of the error standard deviation, and we propose to use it as a stopping point.
The noisy source images are derived from the source images of the three runs of §3.4.9 by adding a random noise, uniformly distributed over [−0.1, +0.1] and uncorrelated from pixel to pixel. Run results are displayed in Figure 177 to Figure 194, and summarized in the following table. Some interesting observations can be made:
(1) this kind of algorithm is able to reject most of the noise, especially in regions where the noise-free source image is zero; this is of course a consequence of the low-pass filtering effect of the interferometer formula;
(2) the reconstruction accuracy is only slightly degraded by the added noise; this is related to the low-pass filtering effect, but is also a good surprise for a highly non-linear algorithm;
(3) the noise-free reconstruction error can be recognized in the run #1 and #2 error maps, but not in the run #3 error map: in this last case the major component of the error is noise;
(4) the optimum number of iterations turns out to be in the [100, 1000] range, thus leading to very short CPU time (one minute or so on a standard laptop).
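The proposed stopping rule can be sketched as follows. The plateau test (a tolerance over a sliding window of iterations) is our assumption for detecting when the infimum of the residual "reaches a constant"; the function name and parameters are hypothetical:

```python
import numpy as np

def stopping_rank(inf_residual, window=50, tol=1e-4):
    """Return the first iteration index at which the infimum of the
    residual, inf(g - C f^n), has plateaued: its variation over the
    last `window` iterations is below `tol` (assumed detection rule)."""
    v = np.asarray(inf_residual, dtype=float)
    for n in range(window, len(v)):
        seg = v[n - window : n + 1]
        if seg.max() - seg.min() < tol:
            return n
    return len(v) - 1  # no plateau detected: use all iterations
```

For example, on a synthetic history that rises exponentially toward a negative constant, v[n] = −exp(−n/100) − 0.01, the rule stops near iteration 880, well inside the flat region of the error curve.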

                                   run #1      run #2      run #3

image width w                      2           2           2
image size N                       32          64          64
sampling rate νs                   16          32          32
magnification ratio M              8           8           8
spectrum size NS                   256         512         512
reconstruction spectrum size K     256         512         512
cut-off frequency ν0               νs/2        νs/2        νs/2
projection directions L            6           6           12

reconstruction error w/o noise:
  mean                             6.15×10⁻⁷   2.27×10⁻⁷   2.09×10⁻⁶
  standard deviation               1.58×10⁻²   1.60×10⁻²   1.41×10⁻³
  peak value                       8.31×10⁻²   8.48×10⁻²   7.77×10⁻³

reconstruction error with noise:
  iterations                       600         800         200
  mean                             9.68×10⁻⁴   9.29×10⁻⁴   1.61×10⁻³
  standard deviation               2.10×10⁻²   1.99×10⁻²   1.32×10⁻²
  peak value                       8.52×10⁻²   9.55×10⁻²   8.28×10⁻²

Figure 177. Three Gaussian disks with noise: convergence history (N = 32, L = 6). The optimum number of iterations is about 600.

Figure 178. Three Gaussian disks with noise: accuracy history (N = 32, L = 6) (mean lim = +1.8289e−3, std dev lim = 2.3886e−2). The semi-convergent behavior is quite visible.

Figure 179. Three Gaussian disks: source image with noise (N = 32).

Figure 180. Three Gaussian disks with noise: raw reconstructed image (N = 32, L = 6).

Figure 181. Three Gaussian disks with noise: reconstructed image after 600 iterations (N = 32, L = 6).

Figure 182. Three Gaussian disks with noise: reconstruction error magnitude after 600 iterations (N = 32, L = 6). Most of the noise is rejected; peak error is 8.52×10⁻².

Figure 183. Three Gaussian disks with noise: convergence history (N = 64, L = 6). The optimum number of iterations is about 800.

Figure 184. Three Gaussian disks with noise: accuracy history (N = 64, L = 6) (mean lim = +7.7139e−4, std dev lim = 2.0516e−2).

Figure 185. Three Gaussian disks: source image with noise (N = 64).

Figure 186. Three Gaussian disks with noise: raw reconstructed image (N = 64, L = 6).

Figure 187. Three Gaussian disks with noise: reconstructed image after 800 iterations (N = 64, L = 6).

Figure 188. Three Gaussian disks with noise: reconstruction error magnitude (N = 64, L = 6). Peak value is 9.55×10⁻².

Figure 189. Three Gaussian disks with noise: convergence history (N = 64, L = 12). The optimum number of iterations is about 200.

Figure 190. Three Gaussian disks with noise: accuracy history (N = 64, L = 12) (mean lim = +1.8501e−3, std dev lim = 1.5782e−2).

Figure 191. Three Gaussian disks: source image with noise (N = 64).

Figure 192. Three Gaussian disks with noise: raw reconstructed image (N = 64, L = 12).

Figure 193. Three Gaussian disks with noise: reconstructed image (N = 64, L = 12) after 200 iterations.

Figure 194. Three Gaussian disks with noise: reconstruction error magnitude (N = 64, L = 12). Peak value is 8.28×10⁻². Most of the error is lower than 4×10⁻².

4. REFERENCES
[1] A. Averbuch, R.R. Coifman, D.L. Donoho, M. Elad, M. Israeli, Fast and accurate Polar Fourier
transform, Appl. Comput. Harmon. Anal. 21 (2006) 145–167.
[2] Gregory Beylkin, Christopher Kurcz and Lucas Monzón, Grids and transforms for band-limited
functions in a disk, Inverse Problems 23 (2007) 2059–2088.
[3] Charles L. Byrne, Applied and Computational Linear Algebra: A First Course, Department of
Mathematical Sciences, University of Massachusetts Lowell, Lowell, MA 01854, January 4, 2011.
[4] M. Piana, M. Bertero, Projected Landweber method and preconditioning, Inverse Problems 13
(1997) 441–463.
