undersampled MRI
Iason Kastanis
A dissertation submitted in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
of the
University of London.
Department of Computer Science
University College London
February 3, 2007
Statement of intellectual contribution
The work carried out in this thesis is my own, with the exception of some preliminary phantom
studies, which were conducted in collaboration with Avi Silver, who was working in the
Computational Imaging Science Group, Department of Imaging Sciences, Guy's Hospital,
King's College. Clinical data was provided by Dr Michael Schaft Hansen, who was employed
in the Centre for Medical Image Computing, UCL.
Abstract
Reconstruction of images and shapes from measured data is nowadays an essential requirement
in medicine. Medical imaging enhances the ability of clinicians to perform diagnosis
non-invasively.

In Magnetic Resonance Imaging, as in other imaging modalities, acquiring the data for a single
image frame takes longer than the interval over which the object can be considered static.
The analysis of dynamic objects therefore directly implies the need for fast data acquisition
schemes in order to represent motion adequately. A necessary condition for this is that the
data collected be limited to a bare minimum. The majority of available methods are designed
to deal with complete data sets. This thesis presents a novel methodology for reconstruction
from very limited data sets of sparse angular samples. It takes advantage of the dynamic
nature of the reconstruction problem using the theory of inverse problems, as well as
statistical analysis. A model is used to represent the distribution of intensities in the
image, as well as the shape of the object of interest.

The novel reconstruction approach can be used to form both shapes and images directly
from measured data, avoiding some of the constraints of traditional methods and presenting
both qualitative and quantitative results for further analysis by clinicians. The clinical
application of interest is cardiac imaging, where fast imaging, not reliant on periodicity
assumptions, is essential. The method is demonstrated in simulations, phantom and clinical
studies for static and dynamic data sets. The method offers a degree of flexibility in the
data collection process, opening up the possibility of an intelligent acquisition scheme,
where parameters can be adjusted during the collection of data from patients.
Acknowledgements
First of all, I would like to thank Prof. Simon Arridge and Prof. Derek Hill. It was their
ideas that initiated this exciting project. Simon Arridge has helped me from the beginning
to understand the mathematical nature of the problem. Derek Hill suggested directions and
applications for our methods. His knowledge on MR imaging has been invaluable.
Dr Daniel Alexander also deserves my gratitude for being my second supervisor, providing
useful comments and suggestions in internal examinations and various presentations.
I would like to acknowledge EPSRC MIASIRC for funding this work.
The quality of work is always dependent on its surroundings. I have found the medical
imaging group at the computer science department at UCL a great learning and productive
environment in which to work. For this I would like to thank Martin Schweiger for many
suggestions on mathematics and numerics, Rachid Elafouri and Abdel Douiri for their
comments and attention to my questions, and Thanasis Zacharopoulos for the many discussions
on a variety of subjects.
I would like to thank Dr Michael Hansen and Avi Silver for their collaboration and for
providing data to be used in the experiments.
Without stating their names, I would like to thank my friends for their moral support
throughout this period of my life. Finally, I would like to thank my parents, Nikos and
Ioanna, for their continuous support in every imaginable way: financial, emotional, advice
on cooking properly and many things words cannot describe. They have even listened to me
complain about and explain very specific problems in my research, even though I am sure
they had little idea what I was talking about.
Contents
1 Prologue 23
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.2 Problem statement – Contribution . . . . . . . . . . . . . . . . . . . . . 23
1.3 Overview of thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2 Magnetic Resonance Imaging 27
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2 Principles of MRI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.3 Image reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.4 Dynamic imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.4.1 Gated imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4.2 Parallel imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.4.3 k-t imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3 Shape reconstruction background 41
3.1 Snake methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2 Level set methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4 Numerical optimization: Inverse problem theory 47
4.1 Inverse Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2 Model selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.2.1 Image parametrization . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.2.2 Shape parametrization . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3 Data discrepancy functionals . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.4 Least squares approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.4.1 Linear case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.4.2 Nonlinear case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.5 Constrained optimization: The method of Lagrange . . . . . . . . . . . . . . . 60
4.6 Tikhonov regularisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.6.1 Linear case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.6.2 Nonlinear case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.7 Statistical estimation: Kalman ﬁlters . . . . . . . . . . . . . . . . . . . . . . . 65
4.7.1 Linear case: Discrete Kalman ﬁlters . . . . . . . . . . . . . . . . . . . 66
4.7.2 Nonlinear case: Extended Kalman ﬁlters . . . . . . . . . . . . . . . . 70
4.7.3 Fixed interval smoother . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.8 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5 Image reconstruction method 73
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.2 Forward problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.3 Inverse problem: Direct solution . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.3.1 Least squares estimation . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.3.2 Damped least squares estimation . . . . . . . . . . . . . . . . . . . . . 77
5.4 Inverse problem: Iterative solution . . . . . . . . . . . . . . . . . . . . . . . . 78
5.4.1 Lagged diffusivity ﬁxed point iteration . . . . . . . . . . . . . . . . . . 80
5.4.2 Primal-dual Newton method . . . . . . . . . . . . . . . . . . . . . . . 82
5.4.3 Constrained optimisation . . . . . . . . . . . . . . . . . . . . . . . . . 85
5.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.5.1 Simulated cardiac data . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.5.2 Measured data from MRI . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6 Shape reconstruction method 97
6.1 Forward problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
6.2 Inverse problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
6.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.3.1 Simulated data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
6.3.2 Measured data from MRI . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7 Combined reconstruction method 113
7.1 Forward and inverse problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
7.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
7.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
8 Temporally correlated combined reconstruction method 125
8.1 Forward and inverse problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
8.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
8.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
9 Conclusions and future directions 141
A Acronyms 145
B Table of notation 147
C Difference imaging 149
List of Figures
2.1 (Left) Cartesian sampling. (Right) Radial sampling. . . . . . . . . . . . . . . 28
2.2 From image to Radon projections. (Top left) Line integrals overlaid on an image at θ = 45°. (Top right) A line integral for τ = 32. (Bottom left) The Radon transform of the image at θ = 45°. (Bottom right) The Radon transform at four angles. The purple circle indicates the location of the line integral. . . . . . . . . 30
2.3 A normal ECG. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4 Sheared sampling pattern in k-t space. The t-axis represents time and the k_y-axis the sampled locations in the phase encoding direction. Each point denotes a complete k_x line in the read-out direction. . . . . . . . . 37
2.5 Plot of an aliased function. The q-axis is the temporal frequency and the F-axis is the spatial frequency. Due to the temporal undersampling the function has been shifted in the temporal frequency dimension. This can be corrected with the application of an appropriate low pass filter [91]. . . . . . . . . 38
3.1 Level set function and corresponding shape boundary on the zero level set. . . 44
3.2 Level set function and two corresponding shape boundaries on the zero level set. 45
4.1 Regular 3 × 3 grid. The x and y axes represent the spatial location in R^2 and the z axis represents the intensity. . . . . . . . . 51
4.2 Surface plot of the Kaiser-Bessel blob basis in 2D with support radius 1.45 and α = 6.4. . . . . . . . . 52
4.3 Plot of radial profiles of linear (solid), Gauss (dashed), Wendland (dash-dotted) and Kaiser-Bessel (dotted). . . . . . . . . 53
4.4 Plot of Fourier basis functions with N_γ = 7. Dashed curves are the cos (even) terms and solid curves are the sin (odd) terms. . . . . . . . . 53
4.5 Plot of B-spline basis functions with N_γ = 7. . . . . . . . . 54
4.6 From left to right: N. Wiener, A. Kolmogorov and R. Kalman. . . . . . . . . 66
5.1 Radon data. A sinogram with 8 projections each with 185 line integrals. . . . . 74
5.2 Radial profile of the Kaiser-Bessel blob in Fourier space (Left) and Radon space (Right). . . . . . . . . 75
5.3 The system matrix J. Each column corresponds to the vectorised basis function
in the Radon space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.4 Ground truth image. Shepp-Logan phantom. . . . . . . . . 76
5.5 8 projections. (Left) Filtered backprojection rms = 1.2521. (Right) Least squares reconstruction on an 8 × 8 grid, rms = 0.73092. . . . . . . . . 77
5.6 8 projections. (Left) Filtered backprojection rms = 1.2521. (Right) Damped least squares reconstruction on a 64 × 64 grid, rms = 0.61756. . . . . . . . . 78
5.7 The solid line represents the absolute value function |t| and the dashed line represents the approximation ψ(t) = √(t² + β²) with β = 0.1. . . . . . . . . 79
5.8 The TV block tridiagonal matrix. . . . . . . . . 80
5.9 8 projections. (Left) Initial (damped least squares) rms = 0.61756. (Right) Fixed point reconstruction rms = 0.5975. . . . . . . . . 81
5.10 8 projections. (Left) rms error over iteration plot. (Right) Gradient norm plot. 81
5.11 8 projections. (Left) Initial (damped least squares) rms = 0.61756. (Right) Primal-dual reconstruction rms = 0.5975. . . . . . . . . 84
5.12 8 projections. (Left) rms error over iteration plot. (Right) Gradient norm plot. 84
5.13 8 projections. (Left) Initial (damped least squares) rms = 0.61756. (Right) Projected primal-dual reconstruction rms = 0.4833. . . . . . . . . 86
5.14 8 projections. (Left) rms error over iteration plot. (Right) Gradient norm plot. 86
5.15 Ground truth image. Fully sampled cardiac image. . . . . . . . . . . . . . . . 87
5.16 Simulated data reconstructions. The numbers on the left column indicate the number of profiles. (Left) Filtered backprojection. (Right) Projected primal-dual reconstruction. . . . . . . . . 88
5.17 (Left) Simulated cardiac rms plot over the number of profiles. The dashed line represents the filtered backprojection method and the solid the primal-dual method. (Right) Comparison of central lines of the ground truth and reconstructed images for the case of 8 radial profiles. . . . . . . . . 89
5.18 Coil 1 reconstructions from measured data. The numbers on the left column indicate the number of profiles. (Left) Gridding. (Right) Projected primal-dual reconstruction. . . . . . . . . 90
5.19 Coil 1. Fully sampled gridding reconstruction used as ground truth image. . . 91
5.20 (Left) Coil 1 rms plot over the number of profiles. The dashed line represents the gridding method and the solid the primal-dual method. (Right) Comparison of central lines of the ground truth and reconstructed images for the case of 8 radial profiles. . . . . . . . . 91
5.21 Multiple coil. Fully sampled LS gridding reconstruction used as ground truth
image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.22 Multiple coil reconstructions from measured data. The numbers on the left column indicate the number of profiles. (Left) LS gridding. (Right) Projected primal-dual reconstruction. . . . . . . . . 93
5.23 (Left) Multiple coil rms plot over the number of profiles. The dashed line represents the LS gridding method and the solid the primal-dual method. (Right) Comparison of central lines of the ground truth and reconstructed images for the case of 8 radial profiles. . . . . . . . . 94
6.1 (Right) Contour with self-intersection at parametric point s_e. (Left) Corrected contour with the small loop removed. . . . . . . . . 98
6.2 Exact parametric points s_1 and s_2 of the intersection of the curve with a pixel. 101
6.3 Ground truth image. Cartoon heart. . . . . . . . . . . . . . . . . . . . . . . . 102
6.4 Simulated data with no background. (Top Left) Initial superimposed on ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed on ground truth image. (Bottom Right) Final predicted image. . . . . . . . . 103
6.5 Simulated data with no background. Gradient norm plot over iteration. . . . . 103
6.6 Simulated data with no background and 15% added Gaussian noise. (Top Left) Initial superimposed on ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed on ground truth image. (Bottom Right) Final predicted image. . . . . . . . . 104
6.7 Simulated data with no background and 15% added Gaussian noise. Gradient
norm plot over iteration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.8 Ground truth image with multiple shapes. . . . . . . . . . . . . . . . . . . . . 105
6.9 Simulated data with no background. (Top Left) Initial superimposed on ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed on ground truth image. (Bottom Right) Final predicted image. . . . . . . . . 105
6.10 Simulated data with no background. Gradient norm plot over iteration. . . . . 106
6.11 Ground truth image. Simulated cardiac phantom. . . . . . . . . . . . . . . . . 106
6.12 Simulated data with known background. (Top Left) Initial superimposed on ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed on ground truth image. (Bottom Right) Final predicted image. . . . . . . . . 107
6.13 Simulated data with known background. Gradient norm plot over iteration. . . 107
6.14 Ground truth image calculated from a fully sampled single coil data set. . . . . 108
6.15 Measured single coil data with known background. (Top Left) Initial superimposed on ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed on ground truth image. (Bottom Right) Final predicted image. . . . . . . . . 109
6.16 Measured single coil data with known background. Gradient norm plot over
iteration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.17 Ground truth image calculated from a fully sampled multiple coil data set. . . 110
6.18 Measured multiple coil data with known background. (Top Left) Initial superimposed on ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed on ground truth image. (Bottom Right) Final predicted image. . . . . . . . . 110
6.19 Measured multiple coil data with known background. Gradient norm plot over
iteration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.1 Plot of the derivative of ψ(t) for different values of β. These values are assigned according to the classification of intensity coefficients as background (solid line), interior (dotted line) and boundary (dashed line). . . . . . . . . 115
7.2 Ground truth image for the simulated experiments. . . . . . . . . . . . . . . . 116
7.3 Simulated data with unknown background. (Top Left) Initial superimposed on ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed on ground truth image. (Bottom Right) Final predicted image. The error for the reconstructed image is rms = 0.40217. . . . . . . . . 117
7.4 Simulated data with unknown background. (Left) Enhanced reconstructed image. (Right) Plot of the gradient norm of the shape reconstruction over iteration. 117
7.5 Ground truth image from fully sampled single coil data. . . . . . . . . . . . . 118
7.6 Measured data with unknown background. Coil 5. (Top Left) Initial superimposed on ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed on ground truth image. (Bottom Right) Final predicted image. The error for the reconstructed image is rms = 0.6509. . . . . . . . . 118
7.7 Measured data with unknown background. Coil 5. (Left) Enhanced reconstructed image. (Right) Plot of the gradient norm of the shape reconstruction over iteration. . . . . . . . . 119
7.8 Ground truth image from fully sampled multiple coil data. . . . . . . . . . . . 120
7.9 Measured data with unknown background. Multiple coils. (Top Left) Initial superimposed on ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed on ground truth image. (Bottom Right) Final predicted image. The error for the reconstructed image is rms = 0.56808. . . . . . . . . 120
7.10 Measured data with unknown background. Multiple coils. (Left) Enhanced reconstructed image. (Right) Plot of the gradient norm of the shape reconstruction over iteration. . . . . . . . . 121
8.1 Interleaved sampling pattern. . . . . . . . . . . . . . . . . . . . . . . . . . . 127
8.2 Reconstructions from simulated data. The numbers on the left column indicate the time point in the sequence. (Left) Reconstructed shapes superimposed on ground truth images. (Right) Reconstructed images with restricted interior intensities. . . . . . . . . 128
8.3 Reconstructions from simulated data. The numbers on the left column indicate the time point in the sequence. (Left) Filtered backprojection. (Right) Reconstructed images using the shape specific TV_β approach. . . . . . . . . 129
8.4 Error plots from simulated data reconstructions. (Left) Plot of the Dice similarity coefficient over time. (Middle) Plot of rms over time. Filtered backprojection (solid line) and temporally correlated combined approach (dotted line). (Right) Predicted and ground truth areas over time. . . . . . . . . 130
8.5 x-t plots of the central r_x line in the image over time. The thick arrows point to the papillary muscle. (Left) Ground truth. (Middle Left) Filtered backprojection. (Middle Right) Shape specific total variation method. (Right) Combined shape and image method. . . . . . . . . 130
8.6 Reconstructions from measured single coil data. The numbers on the left column indicate the time point in the sequence. (Left) Reconstructed shapes superimposed on ground truth images. (Right) Reconstructed images with restricted interior intensities. . . . . . . . . 132
8.7 Reconstructions from measured single coil data. The numbers on the left column indicate the time point in the sequence. (Left) Gridding. (Right) Reconstructed images using the shape specific TV_β approach. . . . . . . . . 133
8.8 Error plots from measured single coil data reconstructions. (Left) Plot of the Dice similarity coefficient over time. (Middle) Plot of rms over time. Gridding (solid line) and temporally correlated combined approach (dotted line). (Right) Predicted and ground truth areas over time. . . . . . . . . 134
8.9 x-t plots of the central r_x line in the image over time. The thick arrows point to the papillary muscle. (Left) Ground truth. (Middle Left) Gridding reconstruction. (Middle Right) Shape specific total variation method. (Right) Combined shape and image method. . . . . . . . . 134
8.10 Reconstructions from measured multiple coil data. The numbers on the left column indicate the time point in the sequence. (Left) Reconstructed shapes superimposed on ground truth images. (Right) Reconstructed images with restricted interior intensities. . . . . . . . . 135
8.11 Reconstructions from measured multiple coil data. The numbers on the left column indicate the time point in the sequence. (Left) Gridding. (Right) Reconstructed images using the shape specific TV_β approach. . . . . . . . . 136
8.12 Error plots from measured multiple coil data reconstructions. (Left) Plot of the Dice similarity coefficient over time. (Middle) Plot of rms over time. Gridding (solid line) and temporally correlated combined approach (dotted line). (Right) Predicted and ground truth areas over time. . . . . . . . . 137
8.13 x-t plots of the central r_x line in the image over time. The thick arrows point to the papillary muscle. (Left) Ground truth. (Middle Left) Gridding reconstruction. (Middle Right) Shape specific total variation method. (Right) Combined shape and image method. . . . . . . . . 137
C.1 Difference imaging approach with stationary background. (Top Left) Phantom
image at time point 1. (Top Middle) Phantom image at time point 8. (Top Right)
Image difference between time point 1 and 8. (Bottom Left) Phantom sinogram
data at time point 1. (Bottom Middle) Phantom sinogram data at time point 8.
(Bottom Right) Sinogram difference between time point 1 and 8. . . . . . . . 149
C.2 Difference imaging reconstructions. The numbers on the left column indicate the time point in the sequence. (Left) Ground truth images. (Right) Reconstructed shapes superimposed on ground truth. . . . . . . . . 151
C.3 (Left) Plot of the Dice similarity coefficient over time. (Right) Predicted and ground truth areas over time. . . . . . . . . 152
C.4 Difference imaging approach with stationary background. (Top Left) Phantom
image at time point 1. (Top Middle) Phantom image at time point 8. (Top Right)
Image difference between time point 1 and 8. (Bottom Left) Phantom sinogram
data at time point 1. (Bottom Middle) Phantom sinogram data at time point 8.
(Bottom Right) Sinogram difference between time point 1 and 8. . . . . . . . 153
Publications
Conference contributions
A.M.S. Silver, I. Kastanis, D.L.G. Hill and S.R. Arridge, Fourier snakes for the reconstruction
of massively undersampled MRI, Proc. MIUA 2003, Shefﬁeld, 2003
I. Kastanis, S.R. Arridge, A.M.S. Silver, D.L.G. Hill and R. Razavi, Reconstruction of the
Heart Boundary from Undersampled Cardiac MRI using Fourier Shape Descriptors and Local
Basis Functions, Proc. ISBI 2004, pp. 1063–1066, 2004
A.M.S. Silver, D.L.G. Hill and I. Kastanis, Analysis of Variability of Cardiac MRI Data, Proc.
MIUA 2005, Bristol, pp. 59–62, 2005
I. Kastanis, S.R. Arridge, A.M.S. Silver and D.L.G. Hill, Reconstruction of Cardiac Images in
Limited Data MRI, Proc. AIP 2005, Cirencester, 2005
I. Kastanis, S.R. Arridge and D.L.G. Hill, Image reconstruction with basis functions: Application
to real-time radial cardiac MRI, Proc. MIUA 2006, Manchester, pp. 156–161, 2006
Chapter 1
Prologue
1.1 Introduction
As the World Health Organization states on its web site (www.who.int): “Although many
cardiovascular diseases (CVDs) can be treated or prevented, an estimated 17 million people
die of CVDs each year.” The detection, and therefore prevention, of heart disease is thus a
major medical imaging need: clinicians require better and faster tools to diagnose
cardiovascular disease. Methods have been developed and cardiac imaging is now a reality.
Yet the problem
of imaging the heart is still far from being completely solved. The majority of methods
require a substantial amount of time and effort to obtain and analyse cardiac images. While
these methods assume that the measured data is complete, the proposed approach aims to
reconstruct both images and shapes from limited data sets. This combined reconstruction
reduces the scanning time and simplifies the diagnostic procedure by offering qualitative
and quantitative results. This novel method, based on the physical reality of the cardiac
imaging problem, escapes some of the assumptions previous methods have made. The next
section will give a more precise idea of the problem in question.
1.2 Problem statement – Contribution
The problem of cardiac imaging is to capture the movement of a dynamic organ. Capturing
the movement of the heart has so far meant reconstructing images for each phase of the
cardiac cycle. In the analysis of these images it is typical to delineate the left ventricle
at each phase of the cardiac cycle. This is performed manually for every image, taking
considerable time and effort. The collection of data for these fully reconstructed images
also takes a fair amount of time, as will be explained next.
The heart beats at frequencies approximately between 1 and 3.3 Hz, that is, 60–200 beats
per minute (bpm). Dynamic imaging is the imaging of objects that are moving while the data is
being acquired. In the case of cardiac Magnetic Resonance Imaging (MRI), the term dynamic
refers not only to the motion of the heart, but also to the data acquisition: the data is
collected sequentially while the heart is beating. The idea of a ‘snapshot’, an image
captured in an instant, does not hold in many medical imaging modalities, and especially not
in MRI, where the data for a single image of the moving heart requires far more time than
the interval over which the heart can be considered stationary. In biological terms the
heart is never stationary, and that is a key property of cardiac imaging.
Given only a small amount of data, over which the heart can be considered stationary, the
problem becomes ill-posed. In broad terms, a problem is called ill-posed when the data is
not sufficient for the solution of the problem and an approximation is the best that can be
achieved.
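To make the notion of ill-posedness concrete, the following toy sketch (with made-up sizes and a random forward operator, not an example from the thesis) shows an underdetermined linear system: least squares produces an answer that explains the data exactly, yet it is only an approximation of the true object, because many objects fit the same data.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 9))   # forward operator: 4 measurements, 9 unknowns
x_true = rng.standard_normal(9)   # the object we would like to recover
d = A @ x_true                    # measured data

# np.linalg.lstsq returns the minimum-norm least squares solution.
x_hat, *_ = np.linalg.lstsq(A, d, rcond=None)

print(np.allclose(A @ x_hat, d))   # True: the data is explained exactly
print(np.allclose(x_hat, x_true))  # False: the object is not recovered
```

In undersampled MRI the situation is analogous, with far fewer measured samples than image pixels.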
In this thesis we present methodology based on inverse problem theory for both image and
shape reconstruction from limited data sets. While our novel approach is applicable to a
variety of tomographic and Fourier imaging problems, we concentrate on the reconstruction
of radially sampled cardiac MR images. The proposed method does not make any assumptions
about the periodicity of cardiac motion, making it suitable for free-breathing cardiac MRI,
as well as for patients suffering from arrhythmia. The substantially smaller amount of data
used by this novel reconstruction approach also opens the way to real-time imaging. Even
though we do not consider the presented method a final solution for cardiac imaging, we
believe that it is a step in the right direction, escaping the assumptions of current
methodology.
Taking advantage of the ideas of inverse problem theory, cardiac imaging becomes a two-part
problem. The first part, the forward model, is to parameterise the heart and predict how it
would look under an MRI scanner. Predictions are then compared with data collected from the
scanner. The second part of the problem is to transform this comparison, using the inverse
model, to the chosen representation of the heart. These two parts are iterated until the
parameterised solution is acceptable.
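The two-part iteration above can be sketched as follows, under the simplifying assumption of a linear forward model (the matrix J, the parameter vector p and the gradient descent step are illustrative choices, not the thesis's actual model).

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((20, 5))        # forward model: parameters -> data
p_true = rng.standard_normal(5)
d = J @ p_true                          # "measured" data from the scanner

p = np.zeros(5)                         # initial guess for the model parameters
step = 1.0 / np.linalg.norm(J, 2) ** 2  # step size from the operator norm
for _ in range(500):
    prediction = J @ p                  # forward part: predict the data
    residual = prediction - d           # compare prediction with measurement
    p = p - step * (J.T @ residual)     # inverse part: map back to parameters

print(np.allclose(p, p_true, atol=1e-4))  # True: overdetermined, so p is recovered
```

Here plain gradient descent on the least squares misfit plays the role of the inverse model; the methods developed in the later chapters replace it with regularised and statistically motivated updates.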
It is desirable to obtain an analysis of cardiac movement. Using a model-based approach,
the heart and the surrounding structures are represented with a small set of parameters.
This compact representation makes the problem essentially smaller and therefore easier to
solve. A compact representation is, in the mathematical sense, a reduction of the
dimensionality of the problem. This parameterised model of the heart automatically separates
the heart from surrounding structures, and cardiac motion can be further analysed.
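As one hypothetical instance of such a compact representation (the curve, the sizes and the truncation level are illustrative only; the shape parametrisations actually used are developed in chapter 4), a smooth closed boundary sampled at 256 points can be captured by a handful of Fourier coefficients per coordinate.

```python
import numpy as np

# A smooth, closed 2D curve sampled at 256 points.
n_pts, n_coef = 256, 7
s = np.linspace(0.0, 2 * np.pi, n_pts, endpoint=False)
x = np.cos(s) + 0.3 * np.cos(3 * s)
y = np.sin(s) + 0.2 * np.sin(2 * s)

# Keep only the lowest-frequency Fourier terms of each coordinate:
# 512 real numbers are reduced to two sets of 7 complex coefficients.
cx, cy = np.fft.rfft(x), np.fft.rfft(y)
cx[n_coef:] = 0.0
cy[n_coef:] = 0.0
x_rec, y_rec = np.fft.irfft(cx, n_pts), np.fft.irfft(cy, n_pts)

# The low-dimensional model reproduces the smooth boundary almost exactly.
print(np.allclose(x_rec, x, atol=1e-10), np.allclose(y_rec, y, atol=1e-10))
```

A reconstruction algorithm then only has to estimate these few coefficients from the data, rather than every pixel of the image.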
Cardiac imaging is, in these terms, the problem of choosing the representation of the heart
model, simulating the MR scanner in the forward model and transforming the difference
between the prediction and the data, in the inverse problem, to the parameters of the
representation.
In this thesis we present methods for image and shape reconstruction using an inverse
problem approach. The proposed methods are not considered to be clinically applicable at
this stage, but are aimed at proving that the concept is valid. The model-based approaches
presented in this thesis are a significant contribution to the reconstruction of images and
shapes from limited data sets, which are typically encountered in dynamic imaging
applications. Standard methods typically assume that data has been fully sampled, while in
the presented approach this assumption is removed and the reconstruction is stated as a
minimisation problem. In the next section, an overview of the thesis is given.
1.3 Overview of thesis
In chapter 2 we give an introduction to image reconstruction in MRI. We explain the basic ideas
in Magnetic Resonance imaging and overview the current methodology for the reconstruction of
both static and dynamic images. Shape reconstruction methods are discussed in chapter 3. In
chapter 4 the mathematical foundations for the proposed reconstruction method are explained.
Inverse problem theory is discussed from a deterministic and a statistical point of view. Chapter
5 presents a reconstruction method for images that are uncorrelated in time. The data collection
is considered to be instantaneous. In chapter 6 we discuss the method for reconstructing shapes
directly from measured data. We assume that the background and interior intensities in the
image and shape are known. The combination of image and shape reconstruction is the subject
of chapter 7. The detection of cardiac boundaries can be used to adjust parameters of the image
reconstruction method. In the combined method both the background and interior intensities are
considered to be unknowns in the problem and they are reconstructed from the data. In chapter
8 the method is developed further for the time-correlated case. While the methodology of
the previous chapters 5-7 considers the reconstructed parameters to be uncorrelated in time,
in this chapter we assume that there is such correlation. This temporal variation is modelled
as a Markov process using the Kalman filter approach. In the final chapter of this thesis we
draw some conclusions on the methodology used and the results obtained. We propose future
directions of the inverse problem approach to dynamic reconstruction in cardiac MRI.
Chapter 2
Magnetic Resonance Imaging
2.1 Introduction
2.2 Principles of MRI
MRI [103] is based on the phenomenon of nuclear magnetic resonance that the nuclei of certain
elements exhibit. This phenomenon can be observed in elements that have an odd number of
protons or neutrons, or both, in their nucleus. The most important element for the MRI of human
tissue is hydrogen, H. Hydrogen has an odd atomic number and weight, a half-integral spin,
and is found in water molecules, H₂O. Human tissue consists of 60% to 80% water [172,
p. 268], making MR ideal for imaging biological structures.
To collect information for MRI there is a need for spatial localisation of the data. The
magnetic field becomes spatially dependent through the use of three magnetic field gradients.
These are small perturbations to the main magnetic field. The three physical gradients are in
orthogonal directions labelled x, y and z. They are assigned by the operating software to three
logical gradients: the slice selection, the readout or frequency encoding, and the phase encoding.
The MR image is simply a phase and frequency map collected from the spatially localised magnetic
fields at each point of the image. Slice selection is the initial step in 2D MRI: it is the
localisation of the radiofrequency excitation to a region of space. This is accomplished through
a frequency-selective pulse and the physical gradient corresponding to the logical slice selection
gradient. When the pulse is sent and at the same time the gradient is applied,
a slice of the object realises the resonance condition. The gradient orientation is perpendicular
to the slice, so that the application of the gradient field is the same for every proton in the slice
regardless of its position within the slice. The readout gradient provides spatial localisation
within the slice in one of the two dimensions. It is applied perpendicular to the selected slice
and the protons begin to precess at different frequencies along the dimension selected
by the gradient. There are two parameters associated with the readout gradient: the Field Of
View (FOV) and the number of readout data points in each line of the resulting image matrix.
These data points are obtained without a change in the gradients. To move to a new data line the
gradient has to be changed, which requires substantially more time than reading out points on a
line. The Nyquist frequency [128] depends on both of these parameters. Finally, the second
dimension in the selected slice is defined with the help of the phase encoding gradient. The phase
encoding gradient is perpendicular to both the slice selection and the readout gradients. It is the
only gradient that varies its amplitude with time. This is based on the fact that the precession
of protons is periodic. Similarly to the readout gradient, there are two parameters to define for
the phase encoding gradient: the FOV and the number of phase encoding steps. These two
determine the spatial resolution in the final image. After all the data is collected in the Fourier
space, often referred to as k-space, the image is most commonly reconstructed by a 2D Fourier
transform. If the data has been acquired radially (fig. 2.1 (Right)) instead of by Cartesian sampling
(fig. 2.1 (Left)), the image can be reconstructed using the Fourier central slice theorem
[126, p. 11]. It states that the 1D Fourier transform of the projection of a 2D function is the
central slice of the 2D Fourier transform of that function. Lines in k-space collected in a radial
manner are referred to as radial profiles or simply profiles. For a complete discussion of MRI
principles refer to [152] and [172]. In the next section, we present the current methodology for
the reconstruction of images in MRI.
Figure 2.1: k-space sampling patterns (axes k_x and k_y). (Left) Cartesian sampling. (Right) Radial sampling.
2.3 Image reconstruction
The foundations for tomographic reconstruction were laid by Johann Radon in 1917 [140].
Radon stated the following integral transform for a function f(r) of the vector variable r ∈ Rⁿ,
now known as the Radon transform
g(θ, τ) = (Rf)(θ, τ) = ∫_{−∞}^{∞} f(τu_θ + s v_θ) ds,   (2.1)
where θ ∈ [0, 2π) is the angle of a line, τ ∈ R is its offset, u_θ is the vector defining the
direction of the line and v_θ is its normal. In the 2D case (n = 2), u_θ = (cos θ, sin θ) and
v_θ = (−sin θ, cos θ). The Radon transform R maps a function f on Rⁿ into the set of its
integrals over the hyperplanes of Rⁿ. In the case where f is defined on R², f is mapped into the
set of its line integrals at angle θ. In fig. 2.2 a description of the steps involved in the 2D Radon
transform is shown. Radon also introduced an inversion formula; first we define
F_r(t) = (1/2π) ∫_0^{2π} Rf(θ, ⟨r, u_θ⟩ + t) dθ,   (2.2)
where ⟨r, u_θ⟩ is the inner product. In the 2D case the inverse transform is
f(r) = −(1/π) ∫_0^{∞} dF_r(t)/t.   (2.3)
While this formula is elegant, it suffers from the singularity at t = 0. An alternative derivation
uses the Hilbert transform, which is defined as follows:
f_H(y) = H[f(x)] = (1/π) ∫_{−∞}^{∞} f(x)/(x − y) dx.   (2.4)
This is essentially a convolution operator, f_H(y) = (h ∗ f)(y), where the convolution kernel
is h(x) = 1/(πx). The equivalent Radon inversion formula is
f(r) = (1/2π) ∫_{−∞}^{∞} ∂g_H(θ, r_y − θ r_x)/∂r_y dθ,   (2.5)
where r = (r_x, r_y). The singularity is still present in the above integral, but it can be handled
as a Cauchy principal value. Apart from eqs. (2.3) and (2.5), other inversion formulas can be
derived. For more information refer to [126], [79] and for a modern treatise on the subject see
[29].
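As a concrete illustration of eq. (2.1), the line integrals can be approximated numerically by sampling f along each line and accumulating nearest-neighbour pixel values. This is a sketch for illustration only; the discretisation step and nearest-neighbour sampling are our own simplifications.

```python
import numpy as np

def radon_nn(f, theta, taus, ds=0.5):
    """Nearest-neighbour approximation of the Radon transform, eq. (2.1):
    g(theta, tau) = integral of f(tau*u_theta + s*v_theta) ds."""
    n = f.shape[0]
    c = (n - 1) / 2.0                               # image centre
    u = np.array([np.cos(theta), np.sin(theta)])    # u_theta, as in the text
    v = np.array([-np.sin(theta), np.cos(theta)])   # v_theta
    s = np.arange(-n, n, ds)                        # integration samples
    g = np.zeros(len(taus))
    for idx, tau in enumerate(taus):
        pts = tau * u[None, :] + s[:, None] * v[None, :]
        ij = np.rint(pts + c).astype(int)           # nearest pixel indices
        ok = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
        g[idx] = f[ij[ok, 0], ij[ok, 1]].sum() * ds
    return g
```

A useful sanity check is mass conservation: the integral of a projection over τ equals the integral of f over the image.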
As the theory for tomographic reconstruction already existed, Magnetic Resonance Imaging
initially used these available techniques. When data is acquired radially in MRI, it is trivial
to convert it to a set of projections by means of a 1D inverse Fourier transform according to the
Fourier central slice theorem
F_1 Rf(ω, α) = F_2 f(k),   (2.6)
where the n-dimensional Fourier transform F_n and inverse Fourier transform F_{−n} for a function f(r), r ∈ Rⁿ, are
Figure 2.2: From image to Radon projections. (Top left) Line integrals overlaid on an image
at θ = 45°. (Top right) A line integral for τ = 32. (Bottom left) The Radon transform of
the image at θ = 45°. (Bottom right) The Radon transform at four angles (θ = 0°, 45°, 90°, 135°). The purple circle
indicates the location of the line integral.
F(k) = (2π)^{−n/2} ∫_{Rⁿ} f(r) e^{−i r·k} dr,   (2.7)
f(r) = (2π)^{−n/2} ∫_{Rⁿ} F(k) e^{i k·r} dk.   (2.8)
Using this theorem, the problem of reconstruction in radially sampled MRI becomes similar to the
Computed Tomography (CT) problem. In the early days of MRI [103] data was acquired radially
and MRI borrowed much of the theory from CT; quickly, though, it took its own path.
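The central slice theorem of eq. (2.6) is easy to verify numerically at θ = 0, where the projection is a simple column sum and no interpolation is needed. This is a minimal sketch on a random test image; general angles would require rotation and interpolation.

```python
import numpy as np

# Verify eq. (2.6) at theta = 0: the 1D Fourier transform of the
# projection equals the central (zero-frequency) row of the 2D
# Fourier transform of the image.
rng = np.random.default_rng(0)
f = rng.random((64, 64))

proj = f.sum(axis=0)   # line integrals along the first axis
assert np.allclose(np.fft.fft(proj), np.fft.fft2(f)[0, :])
```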
Algebraic Reconstruction Techniques (ART) have existed since the early 1970s [59], [58] and
[79]. ART is the application of Kaczmarz's method to Radon's integral equations [126]. The main
idea of these methods is to state the reconstruction problem as a system of linear equations
g = Rf.   (2.9)
ART approximates Rf ≈ cUf and the previous equation becomes
g = cUf,   (2.10)
where U is a matrix indicating the locations at which each line integral intercepts pixels in the image
f(r) and c is an approximate correction factor. The predicted datum g_j^t for the jth line integral
is calculated as
g_j^t = c_j U_j f^t,   (2.11)
where f^t is the tth estimate of the image vector, U_j is a single-row matrix with ones at the i locations
corresponding to the jth line integral, and c_j is a correction factor for that line
integral. The size of the linear system in eq. (2.10) prohibited a direct solution, and ART is
essentially an iterative solver. The updated estimate of the image vector f^{t+1} is given by
f^{t+1} = max( 0, f^t + (g_j/c_j − g_j^t/c_j) / N_j ),   (2.12)
where N_j is the total number of intercepts of the jth line integral with f(r). ART can be
initialised with all the image elements equal to the mean density of the object [58].
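Eqs. (2.11) and (2.12) amount to Kaczmarz's method with a non-negativity constraint. The following is a sketch, not a production implementation: the unit correction factors are our own simplification, and the update is applied only to the pixels intercepted by the current ray, as in Kaczmarz's method.

```python
import numpy as np

def art(U, g, n_iter=50, c=None):
    """Sketch of the ART update in eq. (2.12). U is a 0/1 matrix whose
    j-th row marks the pixels intercepted by the j-th line integral,
    g the measured line integrals, c optional correction factors."""
    n_rays, n_pix = U.shape
    c = np.ones(n_rays) if c is None else c
    # start from the mean density, as suggested in [58]
    f = np.full(n_pix, g.mean() / U.sum(axis=1).mean())
    for _ in range(n_iter):
        for j in range(n_rays):
            Nj = U[j].sum()                 # number of intercepts N_j
            gj_pred = c[j] * U[j] @ f       # predicted datum, eq. (2.11)
            # eq. (2.12), restricted to the intercepted pixels
            f = np.maximum(0.0, f + U[j] * (g[j] / c[j] - gj_pred / c[j]) / Nj)
    return f
```

For a consistent system, e.g. the row and column sums of a 2x2 image, the iteration converges to an image reproducing the data.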
A more recent variant of the ART methodology is to use basis functions to approximate the distribution
of intensities in the image, by replacing the matrix U with a matrix of basis functions.
Hanson and Wecksung [70] used local radially symmetric basis functions for image reconstruction
in CT. To solve the resulting linear system they used ART. In 1990 Lewitt [106] improved on the
method with the use of more general basis functions. Again, Lewitt used an iterative method for
the solution of the large linear system. Schweiger and Arridge [147] compared different basis
functions for image reconstruction in optical tomography using an iterative nonlinear conjugate
gradient solver. Garduño and Herman [52] presented a method for surface reconstruction of
biological molecules using 3D basis functions.
Returning to the early days of MRI and CT, filtered backprojection was originally
discovered by Bracewell and Riddle [15]. Filtered backprojection is a discrete approximation
to the analytic formula in eq. (2.5), where the derivative and the Hilbert transform are
replaced with a ramp or a similar filter:
f(r) = (π/N_θ) Σ_{i=1}^{N_θ} Q_{θ_i}(r·u_{θ_i}),   (2.13)
where N_θ is the number of projections, u_{θ_i} = (cos θ_i, sin θ_i) and Q_{θ_i} is the filtered data at angle
θ_i,
Q_{θ_i}(r·u_{θ_i}) = g_{θ_i} ∗ h,   (2.14)
where g_{θ_i} is the projection at angle θ_i, h is a high-pass filter and ∗ denotes convolution. The high-pass
filter enhances high-frequency components, such as edge information and noise. The calculation
of the filter and the convolution can be performed directly in Fourier space to decrease
computational costs:
Q_{θ_i}(r·u_{θ_i}) = F_1^{−1}[ F_1(g_{θ_i}) F_1(h) ].   (2.15)
In 1971 the method was independently rediscovered by Ramachandran and Lakshminarayanan
[141]. By 1973, when Lauterbur published the first paper on MRI [103], using a
backprojection method to reconstruct the image of two glass tubes containing water, it was
already widely accepted that filtered backprojection methods were superior to algebraic
reconstruction techniques. In 1974 Shepp and Logan [150] compared filtered backprojection to
ART. They used the now famous Shepp-Logan phantom and concluded that the filtered
backprojection method was superior to ART.
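The filtering step of eq. (2.15) can be sketched in a few lines. Here h is the ideal ramp filter |ω| (often called Ram-Lak); practical implementations taper the high frequencies to limit noise amplification, and the choice of filter here is illustrative.

```python
import numpy as np

def ramp_filter(proj):
    """Filter one projection in Fourier space, as in eq. (2.15),
    using the ramp filter |omega|."""
    n = len(proj)
    H = np.abs(np.fft.fftfreq(n))   # ramp filter in Fourier space
    return np.real(np.fft.ifft(np.fft.fft(proj) * H))
```

Because the ramp filter is zero at DC, a constant projection is mapped to zero; only variations survive filtering.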
In 1975 Kumar et al. [100] described an imaging method which took advantage of a sequence
of orthogonal linear field gradients. They were able to obtain Fourier data on a Cartesian
grid. For image reconstruction a direct Fourier inversion was used instead of the iterative solution
of large systems of linear equations; the fast Fourier transform (FFT) was already known at that
time [30]. Edelstein et al. [41] extended the method of Kumar et al. in 1980 with the use of
varied-strength gradients instead of the constant ones Kumar et al. had previously suggested. In
this manner they were capable of overcoming the field inhomogeneity problems of Kumar's
method, making their method applicable to whole-body imaging.
While the inversion of Cartesian Fourier samples by means of an FFT algorithm is fast and
computationally undemanding, the inversion of radial samples requires interpolation onto
a regular grid. Interpolation is in general a computationally expensive operation, especially if
it is to be precise. The reason for this is that it requires convolution with a sinc function, which
is the ideal interpolation function. The sinc function has infinite support, making it prohibitive
for numerical implementations. It was not until 1981 that the groundwork was laid for what
is now the standard method for image reconstruction in radially sampled MRI. In [158] Stark
et al. presented various methods for interpolating from polar to Cartesian samples. O'Sullivan
[130] used a Kaiser-Bessel function for this task to improve on the efficiency and quality of
the reconstruction. Jackson et al. [82] further extended this methodology and compared various
convolution functions. If we define the data in MRI to be
g_fr(k) = [F_2 f(r)] A_r(k),   (2.16)
where A_r is a sampling function,
A_r(k) = Σ_{i=1}^{N} δ(k − k_i),   (2.17)
with N being the number of samples and δ the Dirac delta function, the aim is to interpolate
the signal g_fr as follows:
g_fi(k) = g_fr(k) ∗ h(k),   (2.18)
where h(k) is the convolution kernel. To compensate for the non-uniform sampling, a density
weighting function w(k) = A_r(k) ∗ h(k) is introduced and the previous equation becomes
g_fwi(k) = [g_fr(k) / w(k)] ∗ h(k).   (2.19)
Resampling at Cartesian coordinates gives
g_fwc(k) = g_fwi(k) A_c(k),   (2.20)
where A_c(k) = Σ_i Σ_j δ(k_x − i, k_y − j) is a comb function III(k). Combining eqs. (2.18),
(2.19) and (2.20), we obtain
g_fwc(k) = [ (g_fr(k)/w(k)) ∗ h(k) ] A_c(k).   (2.21)
These methods are commonly referred to as gridding.
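A one-dimensional sketch of eqs. (2.18)-(2.21) follows. The Gaussian kernel and its width are our illustrative choices; production gridding uses Kaiser-Bessel kernels [130], [82] and a deapodisation step, both omitted here.

```python
import numpy as np

def grid_1d(k_pos, data, n_grid, width=2.0):
    """1-D gridding sketch: each non-Cartesian sample is spread onto
    nearby grid points with a kernel h, after division by the
    sampling density w = A_r * h, as in eq. (2.21)."""
    grid = np.zeros(n_grid, dtype=complex)
    kernel = lambda d: np.exp(-0.5 * (d / (width / 3)) ** 2)
    # density compensation: kernel-smoothed sampling density at each sample
    w = np.array([kernel(k_pos - k).sum() for k in k_pos])
    for k, d, wk in zip(k_pos, data, w):
        lo = max(int(np.floor(k - width)), 0)
        hi = min(int(np.ceil(k + width)) + 1, n_grid)
        for i in range(lo, hi):
            grid[i] += (d / wk) * kernel(i - k)   # spread onto the grid
    return grid
```

With uniformly spaced samples of a constant signal, density compensation makes the interior of the grid approximately constant as well.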
2.4 Dynamic imaging
Dynamic imaging has emerged as an important research area in the last couple of decades. It is
desirable to be able to image moving or dynamic parts of the human anatomy, like the brain and
the heart. Often this is not easy, since the dynamic object is moving faster than the data can be
collected in a scan. Ideally data for each different image must be collected faster than the object
is moving. In the case of cardiac MRI, if the scanning is done in a purely sequential manner, the
data cannot be collected fast enough to represent different phases of the cardiac cycle clearly.
If the images are formed with enough data to satisfy the Nyquist spatial rate, then the collected
data will only be enough for a very small number of cardiac phases and the images of these
phases will be corrupted by motion artifacts. On the other hand, if more images, corresponding
to more phases, are formed then the data will not be enough for each separate image causing
heavy artifacts and rendering them clinically useless.
Much research has been done in the area of sequence design and, as Weiger et al. mention,
"..., the time efficiency of collecting data by mere gradient encoding seems to be approaching a
fundamental limitation." [180, p. 177]. This means that new methods exploring other dimensions
of dynamic imaging in MR have to be investigated, beyond magnetisation techniques alone.
Some work has been done on Fourier techniques to reduce the scanning time. An
example of this is Feinberg et al. [44], who halved the imaging time by compromising
the quality of the image.
Figure 2.3: A normal ECG.
2.4.1 Gated imaging
One of the most commonly used techniques to image the heart is gated cardiac imaging. This
method uses the electrocardiogram (ECG) signal to gate the cardiac cycle. When the heart
contracts it exhibits electrical activity, which is exactly what the ECG measures. The electrical
activity of the heart can be used to determine the phase of the cardiac cycle. As seen in fig. (2.3),
the various letters represent different stages of the heart cycle. The most important is the interval
between the two highest peaks (the R-R interval), which represents the duration of the cardiac cycle.
Assuming that the ECG is exact in determining the phase of the cardiac cycle and that each
cardiac beat has the same duration, data lines that belong to the same phase of the cardiac
cycle are collected in different beats of the heart at equal time intervals. This implies that the
data lines required to reconstruct an image, representing one phase of the cardiac cycle, are
collected one heart beat apart. The ECG signal provides a means to determine
in which phase of the cardiac cycle the collection of the data is done. This way there is enough
information to reconstruct clear images of various phases of the heart. To extend this idea of
gated imaging, instead of collecting one k-space profile per phase at each heart beat,
more profiles can be collected. This assumes that while these data lines
are being collected in one heart beat for one phase, the heart is almost stationary. It should be
noted that gated cardiac imaging is performed during a single breath hold to reduce motion in the
surrounding structures due to breathing. Examples of gated cardiac imaging can be
found in Lanzer et al. [101], who used different techniques to gate the cardiac motion. In [56],
Go et al. study volumetric and planar cardiac imaging. In [47], Fletcher et al. use gated
cardiac imaging to study congenital heart malformations. An early system to reconstruct and
display gated cardiac movies was developed in [6].
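The phase-binning logic described above can be sketched as follows. Function and variable names are illustrative, not taken from any scanner software: each acquired profile is stamped with its time since the last R-peak, normalised by the R-R interval, and assigned to a cardiac-phase bin.

```python
import numpy as np

def bin_profiles(acq_times, r_peaks, n_phases):
    """Assign each profile acquisition time to a cardiac-phase bin,
    assuming the R-peak times delimit identical cardiac cycles."""
    r_peaks = np.asarray(r_peaks)
    bins = []
    for t in acq_times:
        k = np.searchsorted(r_peaks, t) - 1      # last R-peak before t
        if k < 0 or k + 1 >= len(r_peaks):
            bins.append(-1)                      # outside a complete cycle
            continue
        phase = (t - r_peaks[k]) / (r_peaks[k + 1] - r_peaks[k])
        bins.append(int(phase * n_phases))       # phase in [0, 1) -> bin
    return bins
```

Profiles from different heart beats that land in the same bin are then combined into one image, which is exactly where the equal-duration assumption enters.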
2.4.2 Parallel imaging
Another approach to the dynamic imaging problem is the use of partial parallel
imaging. In parallel imaging an array of coils is used instead of just one. Data is collected
by each coil and combined to form one image. The benefit of using multiple coils is that the
data can be undersampled: using information from each coil, artifacts due to undersampling
can be reduced in the reconstruction. There are two main methods for parallel imaging in MRI:
SMASH [155], Simultaneous Acquisition of Spatial Harmonics, and SENSE [139], Sensitivity
Encoding for fast MRI. Both methods work by approximating the sensitivity information for
each coil. SMASH uses the sensitivity variations to replace some of the phase encoding. Sensitivity
information is approximated by fitting linear combinations of sensitivity matrices to form
spatial harmonics. The MR signal in the phase encoding direction at coil j can be expressed as
g_j(k_y) = ∫ f(r_y) S_j(r_y) e^{i k_y r_y} dr_y,   (2.22)
where f(r_y) is the signal and S_j(r_y) is the coil sensitivity at each phase encoded line. Sensitivity
values are expressed as a linear combination to generate values from all coils,
S̃_m(r) = Σ_{j=1}^{N_c} w_j^m S_j(r) ≈ e^{i m Δk_y r_y},   (2.23)
where N_c is the number of receiver coils, Δk_y = 2π/FOV, FOV is a scalar representing
the field of view and m ∈ Z is the order of the spatial harmonic. This can be solved for the
weights w_j^m by fitting the coil sensitivities S_j to the spatial harmonics e^{i m Δk_y r_y}. Using eqs.
(2.22) and (2.23), an expression for the calculation of shifted k-space lines g̃(k_y + m Δk_y) from
the measured sensitivity matrices S_j can be derived:
Σ_{j=1}^{N_c} w_j^m g_j(k_y) ≈ g̃(k_y + m Δk_y).   (2.24)
Using eq. (2.24), missing k-space lines can be generated. In the SENSE approach data is reduced
by decreasing the size of the FOV for each separate receiver coil. Samples are located further
apart in k-space, which creates folding artifacts. Sensitivity matrices are calculated in the spatial
domain, unlike SMASH, which works in k-space. The full-FOV image is calculated as a linear
combination of all the receiver coils by resolving the superimposed image locations,
f_n = Σ_{j,k} R_{j,k} g_{j,k},   (2.25)
where f_n is the vector of image values, j is the coil index, k is the k-space position index and
R is the reconstruction, or unfolding, matrix of the n superimposed image positions, calculated
as follows:
R = (S^H C^{−1} S)^{−1} S^H C^{−1},   (2.26)
where S is the N_c × N_s coil sensitivity matrix, with N_c the total number of coils and
N_s the total number of samples, C is the N_c × N_c receiver noise matrix and the superscript
H denotes the conjugate transpose. Eq. (2.25) is solved for every position in the reduced-FOV
image to produce the full-FOV image. Both techniques in their original formulation require the
collection of extra data to be used for the sensitivity calculations. Initially SMASH imaging was
restricted to specific coil designs [64] and imaging geometries [84]. Some recent developments
[19], [153], [78] have extended the possible coil combinations and coil geometries. Bydder et al. [19]
reversed eq. (2.23) to express the coil sensitivity matrices S_j as linear combinations of the
spatial harmonics,
S_j(r) ≈ Σ_{m=−q}^{p} w_j^m e^{i m Δk_y r_y},   (2.27)
where q, p ∈ Z are integers defining the number of Fourier coefficients w_j^m for the jth coil. This
allowed the construction of a linear system not as restrictive as the original SMASH formulation.
Sodickson et al. [153] included an extra term S_0 in eq. (2.23) to account for sensitivity
variations in the phase encode direction:
Σ_{j=1}^{N_c} w_j^m S_j(r) ≈ S_0 e^{i m Δk_y r_y}.   (2.28)
Another very recent variant of SMASH imaging, named GRAPPA [63], an extension of [78],
provides unaliased images for each coil, which can then be combined to produce an even higher
Signal-to-Noise Ratio (SNR) than the original SMASH. An analysis of the SNR in SMASH
can be found in [154]. Extensions of the SENSE method are also popular. In [138] Pruessmann
et al. extended the original SENSE formulation to arbitrary k-space trajectories, using gridding
operations to improve the numerical efficiency of the reconstruction method. Kellman et al.
combined SENSE with UNFOLD [114] in [90], which will be discussed in the following section.
A detailed review of parallel MR imaging was presented in [14].
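The SENSE unfolding of eqs. (2.25)-(2.26) is a small weighted least-squares solve for each set of superimposed pixel positions. The following is a sketch; the toy sensitivities in the test are invented for illustration.

```python
import numpy as np

def sense_unfold(g, S, C):
    """Apply the unfolding matrix of eq. (2.26) to the aliased coil
    values g. S is the N_c x N_s matrix of coil sensitivities at the
    N_s superimposed positions, C the N_c x N_c receiver noise
    covariance; S^H denotes the conjugate transpose."""
    Ci = np.linalg.inv(C)
    R = np.linalg.inv(S.conj().T @ Ci @ S) @ S.conj().T @ Ci   # eq. (2.26)
    return R @ g                                               # eq. (2.25)
```

When the coil sensitivities at the superimposed positions are sufficiently distinct, the folded values are resolved exactly in the noiseless case.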
Figure 2.4: Sheared sampling pattern in k-t space. The t-axis represents time and the k_y-axis
the sampled locations in the phase encoding direction. Each point denotes a complete k_x line
in the readout direction.
2.4.3 k-t imaging
One of the most important recently developed methods for dynamic imaging is UNFOLD. It
uses the idea of k-t space. Even though it was not stated in these terms in the original UNFOLD
paper [114], it has been redescribed in more recent papers by Tsao et al. [166], [169]. UNFOLD
works by encoding information in the temporal dimension. Especially after the k-t framework
was introduced by Tsao in [166], it has been understood that the data collection in MRI takes
place in a spectro-temporal space. The main idea of the k-t space methods is that signals are modulated
by collecting data in an interleaved manner, and that for dynamic imaging it makes sense to
investigate the Fourier transform in the temporal dimension.
As seen in fig. 2.4, only one of every four samples is taken. This interleaved sampling
pattern drastically reduces scanning time, here by a factor of up to four. When the FT is taken in time, the
modulation of the data will push aliased signals to the end of the spectrum (fig. 2.5), which
allows the removal of ghost artifacts in the image with a low-pass filter. Information about low-pass
filter design for UNFOLD can be found in [91]. The concept behind this approach is that the
modulation caused by the sheared sampling pattern is a shift in the phase encoding direction.
According to the Fourier shift theorem, a shift in the frequency domain results in a linear phase
shift in the time domain. In the x-f space the signals that are static will have little frequency content in
time, implying that more bandwidth can be dedicated to the dynamic part.
Intuitively speaking, this idea tries to pack the x-f space and therefore reduce scanning
times. The idea of using more bandwidth for the dynamic part is ideal for cardiac imaging,
where the main motion present is the heart beating, while everything else surrounding it is
Figure 2.5: Plot of an aliased function. The q-axis is the temporal frequency and the F-axis is
the spatial frequency. Due to the temporal undersampling the function has been shifted in the
temporal frequency dimension. This can be corrected with the application of an appropriate low-pass
filter [91].
static or close to static in single breath hold imaging. The basic idea of the UNFOLD method
can be summarised in two concepts: the interleaved pattern, which reduces scanning
time, combined with the low-pass filter, which removes artifacts and allows more bandwidth
for the dynamic part of the image. There has been much interest in the UNFOLD method. One
of the most interesting extensions is the combination of BLAST (Broad-use Linear Acquisition
Speed-up Technique) [167] and SENSE with the k-t framework in [169] and [168]. BLAST is
a unification of prior-information methods for fast scanning,
f(r, q) = (S̃^H C_n^{−1} S̃ + C_s^{−1})^{−1} S̃^H C_n^{−1} g_{k,t},   (2.29)
where S̃ is the Fourier transform F_{rq→kt}, from x-f to k-t space, of the sensitivity encoding
matrix S, C_n is the noise covariance matrix in k-t space and C_s is the signal covariance matrix
in x-f space. It provides a
method to accelerate imaging as well as a common equation for the most important accelerating
methods. Other combinations of parallel imaging with the k-t ideas exist. In [113] UNFOLD is
combined with partial-Fourier imaging and SENSE. Hansen et al. [66] presented a k-t BLAST
method applied to non-Cartesian sampling. An extension of UNFOLD to 3D is presented in
[186], as well as a different method to apply the UNFOLD technique by comparing spectral
energy.
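The UNFOLD principle can be demonstrated on a single voxel time series. With two-fold interleaving the aliased component is modulated by (−1)^t, landing in the Nyquist bin of the temporal spectrum, where a low-pass filter removes it. The signal model below is our own toy example, not data from the thesis.

```python
import numpy as np

# One voxel over T frames: a slowly varying true signal plus a ghost
# modulated by (-1)^t, i.e. shifted to the Nyquist temporal frequency.
T = 64
t = np.arange(T)
true_signal = 2.0 + 0.5 * np.sin(2 * np.pi * 3 * t / T)
ghost = 1.3
measured = true_signal + ghost * (-1.0) ** t

spec = np.fft.fft(measured)
spec[T // 2] = 0.0                        # low-pass: remove the Nyquist bin
unfolded = np.real(np.fft.ifft(spec))

assert np.allclose(unfolded, true_signal)
```

The slowly varying signal occupies only low temporal frequencies, so zeroing the band edge leaves it untouched; this is the bandwidth trade-off described above.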
2.5 Discussion
The majority of reconstruction methods in MRI are intended for data sets that satisfy, or are
close to, the Nyquist limit. When these methods are applied to limited data problems the
reconstruction produces severe artifacts, usually corrupting the image to a degree unacceptable
for analysis. In dynamic imaging there is a need for finer temporal resolution. To increase the
acquisition speed in MRI, the data available for each frame must necessarily be reduced.
To overcome the problem of limited data in cardiac MRI, the common approach is to
use, as mentioned previously, ECG gating. ECG gated cardiac imaging makes two important
assumptions: the first is that the ECG signal is exact in giving the location in the heart cycle
and repeats itself in an exact manner; the second is that the heart is beating in precisely
the same way in every cycle. The first assumption is a good approximation of the truth, but the second is
not necessarily valid. Typically, each monitored cardiac cycle is shrunk or stretched to fit an
average cardiac cycle. This becomes a problem especially in the case of patients with heart
abnormalities and in examinations under stress. In examinations under stress the heart beats
much faster than normal; it is therefore important to reduce the scanning to a bare minimum
in order to avoid keeping the patient under stress for a long time. If more than one data line is
collected for each phase in each heart cycle, the reconstructed image will have blurring artifacts
due to the motion of the heart. Gated imaging can be thought of as time averaged, in the sense
that a single image is formed from data from many time points at theoretically equal intervals.
Nevertheless, it is not desirable to form an averaged image; the aim is to record the motion of
the heart.
Another drawback of this technique is that obtaining high resolution images requires more
data lines, implying longer scanning times. Gated cardiac imaging is a compromise between
resolution or quality, both spatial and temporal, and scanning time. Increasing the spatial
resolution would imply capturing fewer phases of the heart cycle or more scanning time. If the
temporal resolution were increased, the spatial resolution would have to be decreased or, again,
the scanning time would have to be longer.
Furthermore, the single breath hold approach limits the total imaging time, implying that
the spatial and temporal resolution are bounded. For the quantification of ventricular function,
typical cardiac MRI often requires the collection of data over many heart beats and over
more than one breath hold. The long times spent inside the MRI scanner are stressful and
certainly not desirable for patients. Extended breath holds lead to poorly understood flow and
pressure changes within the cardiac region [122]. It is also desirable to image objects
which do not behave in a periodic manner, to which gated imaging cannot be applied.
The vast majority of methods, with the main exception of the k-t approach, do not take
advantage of the dynamic nature of the problem. They consider the problem of reconstructing
a temporal sequence of images as a series of static problems. Some information in the image
can be recovered by taking advantage of areas which are not in motion. Statistical properties of the
motion of the object can also be taken into account to improve results.
In the next chapter we will discuss the current approaches in shape reconstruction.
Chapter 3
Shape reconstruction background
Shape reconstruction is a subject which has received much interest in the image processing
community. For many machine vision tasks, and generally for quantitative analysis, a
segmented shape of interest is required. In this chapter we introduce basic approaches for
the reconstruction of shapes. In the first section, methods based on an explicit formulation of
the shape are discussed. Following that, the discussion turns to a more modern approach,
which has an implicit formulation of the shape.
3.1 Snake methods
Kass et al. introduced in [89] the Active Contour Models, more commonly known as snakes.
Snakes are a specific case of the deformable model theory of Terzopoulos [163]. The deformable
model theory is based on Fischler and Elschlager's spring-loaded templates [46] and
Widrow's rubber mask technique [184], [120, p. 92]. Snakes are 2D contours that approximate
the locations and shapes of structures in an image. This is done by minimising an energy
functional E_snake, which depends on the image and on the smoothness or elasticity of the snake:
E_snake(v) = ∫_0^1 E_int(v(s)) + E_ext(v(s)) + E_image(v(s)) ds,   (3.1)
where v(s) =
_
_
x(s)
y(s)
_
_
is a parametric contour with s ∈ [0, 1) with x(s) and y(s) deﬁning
the x and y coordinates respectively. In the original snake formulation, these were deﬁned as
parametric splines. E
int
is the internal energy of the snake, which controls its smoothness.
E
ext
is an external force used for automatic initialisation and userintervention. Finally E
image
is the force deﬁned by the image, usually using image gradients, edge locations or other image
features of interest, to drive the snake closer to the desired segmentation.
Many researchers have extended the original snake formulation in a variety of ways. Staib and Duncan [157] presented a method based on a Fourier parameterisation of the contour. Fourier representations are global, while splines depend on control points and are therefore local representations of closed curves in the plane. Fourier parameterisation is more compact, and usually only a few parameters are enough to define complex shapes. The idea of representing shapes with Fourier descriptors dates back at least to the 1970s, when various researchers used them for shape discrimination. In 1982 Kuhl [99] determined the Fourier coefficients of chain-encoded contours. In this work Kuhl presented properties of Fourier descriptors, such as normalisation and invariants; further to that, he discussed a recognition system for arbitrarily shaped, solid objects. In 1987 Lin [109] presented new invariants based on Fourier descriptors with application to pattern recognition. In [133] shape discrimination was discussed with applications in skeleton finding, character and machine parts recognition. In the same line of research, Aguado et al [5] used Fourier descriptors to parameterize shapes extracted with the Hough transform; shapes were not restricted to closed curves, as the parameterisation was extended to open curves as well. Fourier parameterisations have also been used in an inverse problem framework for the recovery of region boundaries. Kolehmainen et al [94] used multiple Fourier contours to reconstruct shapes with known internal intensity directly from optical tomography measurements. In a similar methodology, Zacharopoulos et al [189] reconstructed 3D surfaces using a spherical harmonics representation. Battle et al [10] reconstructed a triangulated surface with constant interior density directly from tomographic measurements using a Bayesian approach. Further development of this Bayesian methodology was presented in [9], applied to lung images. They defined two homogeneous regions, one for each lung, and then determined the internal density and location of the boundaries by a Newton minimization method. Instead of deforming the surfaces directly, they use free-form deformation models to warp the space surrounding them.
Other recent extensions of the original snake method include formulations that work in color image space instead of gray scale. Sclaroff and Isidoro [149] presented a method which uses both shape and color texture information. This definition differs significantly from most other snake approaches; it more closely resembles the Active Appearance Models (AAM) approach of Cootes et al [31]. AAM are a combination of Active Shape Models (ASM) [32] with a grey-level appearance. ASM are statistical models of the shape of interest obtained using a training set. The images in the training set are aligned with a modified Procrustes method and their main modes of variation are calculated using eigenanalysis
$$C_{\mathcal{C}}\, p_j = \lambda_j p_j, \qquad (3.2)$$
where $C_{\mathcal{C}}$ is the covariance matrix of the aligned shapes, $\lambda_j$ is the $j$th eigenvalue and $p_j$ is the $j$th eigenvector. The eigenvectors $p_j$ provide a way of defining the possible ways a shape can vary:
$$\mathcal{C} = \bar{\mathcal{C}} + Pw, \qquad (3.3)$$
where $\bar{\mathcal{C}}$ is the mean of the aligned shapes, $P$ is the matrix of the first $n$ eigenvectors and $w$ is a vector of weights. These models have the advantage, and at the same time the disadvantage, of being based on a training set, which constitutes the prior knowledge. In some cases this may limit the possible shapes, forcing the algorithm to find a shape which might not be the real one. Initial applications of ASM were in hand gesture recognition tasks, while AAM targeted face recognition. Stegmann et al [159] used AAM to segment cardiac MR images.
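The statistical shape model of eqs. (3.2)-(3.3) can be sketched in a few lines of NumPy; the training set below is a synthetic stand-in for a set of aligned landmark vectors, and all sizes and weights are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "aligned" training set: 20 shapes, each a vector of 16
# landmark coordinates (a stand-in for real aligned contours).
n_shapes, n_coords = 20, 16
base = rng.normal(size=n_coords)
shapes = base + 0.1 * rng.normal(size=(n_shapes, n_coords))

mean_shape = shapes.mean(axis=0)          # mean of the aligned shapes
C = np.cov(shapes, rowvar=False)          # covariance matrix of eq. (3.2)
eigvals, eigvecs = np.linalg.eigh(C)      # C p_j = lambda_j p_j
order = np.argsort(eigvals)[::-1]         # largest modes of variation first
P = eigvecs[:, order[:3]]                 # matrix of the first n eigenvectors

# Generate a plausible new shape from the model, as in eq. (3.3).
w = np.array([0.5, -0.2, 0.1])            # vector of weights
new_shape = mean_shape + P @ w
```

The eigenvectors returned by `eigh` are orthonormal, so the weight vector `w` directly measures displacement along each mode of variation.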
Returning to the work of Sclaroff and Isidoro [149], shapes were defined using a triangular mesh model, based on a Delaunay triangular meshing algorithm. Their aim was to detect the motion of objects, and the registration process requires minimization of the residual error with respect to the parameters of their snake model. For this optimisation problem, Sclaroff and Isidoro use the Levenberg-Marquardt method. A different approach to color snakes was presented in [54]. That method is interactive in the sense that it allows the user to choose subimages where the object of interest lies. The image segmentation is performed with the use of snakes that are based on color invariants.
Another recent paradigm of snakes is that of geodesic snakes [24]. Geodesic snakes are
based on the ideas of curve evolution in a metric space with minimal distance curves. The
connection between the calculation of minimal distance curves in the space induced from the
image and the snakes is shown in that work. To calculate the geodesic curve a level set approach
is used. One of the beneﬁts of level set approaches is that curves are topologically adaptive.
Level set methods will be discussed in the next section. An application of the geodesic snakes
can be found in [144]. In that work geodesic snakes were combined with Gabor analysis.
A method for topologically adaptive shapes was presented by McInerney and Terzopoulos [121]. The curves are defined using nodes connected with edges. The role of the affine cell image decomposition (ACID) comes in the step of the reparameterisation of the contour. Using a particular kind of cell decomposition, simplicial decomposition, the space is subdivided into triangles. The triangles can be of any size, offering fine detail or possibly a multiscale approach. The intersections of these triangles with the contour are detected every M iterations, and every intersection point is assigned an inside or outside value. By tracking the interior vertices of the intersected triangles every M iterations, the contour can be reparameterised to include topological changes, such as splitting, merging and self-intersection. This approach is referred to as Topologically adaptive snakes, or T-snakes. An extension of T-snakes was developed by Giraldi et al in [55], who used dual snakes, one for the outside of the edge and one for the inside, implemented in a Dynamic Programming framework. Evans et al [43] used T-snakes to segment livers from CT images. An interesting new paradigm of snakes by McInerney et al is presented in [119]: following the general concepts of Alife [162], McInerney et al develop the idea of artificially intelligent snakes.
3.2 Level set methods
McInerney and Terzopoulos based their decision to use ACID on the grounds that higher order implicit level set formulations are not as convenient as explicit ones, particularly when it comes to defining the internal deformation energy term, controlling the snake via user interaction, and imposing arbitrary geometric or topological constraints [121, pp. 74-75]. Level set methods, though, have become increasingly popular since their introduction in 1988 by Osher and Sethian [129]. Paragios [131] used a level set method for the segmentation of the left cardiac ventricle in two dimensions. Whitaker and Elangovan [182] reconstructed both 2D contours and 3D surfaces directly from limited tomographic data. In diffusion optical tomography, Schweiger et al [148] reconstructed both the shapes and the contrast values of homogeneous objects using two level set functions, for the absorption and the diffusion values.
Figure 3.1: Level set function and corresponding shape boundary on the zero level set.
Level set methods are based on the ideas of front propagation. The boundary of a shape is embedded in a higher dimensional function: for a boundary in $\mathbb{R}^2$ the level set function will be a surface in $\mathbb{R}^3$. Next we give a brief introduction to the level set approach along the lines of [145]. The boundary of the region of interest $\Omega \subset \mathbb{R}^n$ is described by a function $\phi(r)$:
$$\partial\Omega = \{r : \phi(r) = 0\}. \qquad (3.4)$$
The level set function is built as a sequence of functions $\phi_t(r)$ which approach the real region $\Omega$ as $t$ increases, $\Omega_t \to \Omega$ with $\partial\Omega_t = \{r : \phi_t(r) = 0\}$. Assuming that the image $f(r)$ with $r \in \mathbb{R}^n$ can be modelled as
$$f(r) = \begin{cases} f_{\text{int}}(r) & \text{if } r \in \Omega \\ f_{\text{ext}}(r) & \text{if } r \notin \Omega \end{cases}, \qquad (3.5)$$
then the level set function $\phi(r)$ (fig. 3.1) is tied to the image function as follows:
$$f(r) = \begin{cases} f_{\text{int}}(r) & \text{if } \phi(r) < 0 \\ f_{\text{ext}}(r) & \text{if } \phi(r) > 0 \end{cases}. \qquad (3.6)$$
The boundary of the region is given by the zero level set, $\phi(r) = 0$. While topological changes, such as splitting and merging, are rather difficult to deal with in $\mathbb{R}^2$, the level set function, a surface in $\mathbb{R}^3$, can incorporate them naturally without changing the topology of the surface in $\mathbb{R}^3$ (fig. 3.2). The same is true for any dimension $\mathbb{R}^n$.
Figure 3.2: Level set function and two corresponding shape boundaries on the zero level set.
In the case where the topology is known in advance, artificial constraints have to be introduced into the level set representation to maintain it. A detailed review of level set methods is given in [38].
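As a minimal numerical illustration of eqs. (3.4)-(3.6), the following sketch builds a signed-distance level set function for a disc and the piecewise-constant image tied to its sign; the radius, grid size and intensities are arbitrary choices, not values from the thesis.

```python
import numpy as np

# Signed-distance level set for a disc of radius 0.5: phi < 0 inside,
# phi > 0 outside, and the zero level set is the boundary (eq. 3.4).
n = 101
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
phi = np.sqrt(x**2 + y**2) - 0.5

# Image model of eq. (3.6): piecewise intensities tied to the sign of phi.
f_int, f_ext = 2.0, 0.5
f = np.where(phi < 0, f_int, f_ext)

# The interior region is recovered from the sign of phi alone; no explicit
# contour parameterisation is needed. The grid covers a 2 x 2 square.
inside_area = (phi < 0).mean() * 4.0     # should approximate pi * 0.5**2
```

Splitting or merging the region amounts to changing `phi` (for instance taking the minimum of two signed-distance functions), with no bookkeeping of contour topology.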
3.3 Discussion
Most shape reconstruction methods work in the image domain and thus require image reconstruction. In this two-step approach, first reconstruction and then segmentation, the quality of the segmentation depends on the quality of the reconstruction. If the image reconstruction is ill-posed, as is the case in many dynamic imaging problems, the reconstructed image will contain a large amount of noise. The quality of the segmentation, which is the goal of the analysis, is ultimately limited by the reconstruction: if the image reconstruction is of low quality, the segmentation will be neither accurate nor robust.
Direct shape reconstruction methods do not use an image to reconstruct the shape; it is created directly from measured data. These methods typically assume that the object and background are homogeneous and clearly distinguishable. This is not true for cardiac MRI, or for many other dynamic imaging applications. The problem of reconstructing a shape with an inhomogeneous interior in an inhomogeneous background is much more complex. A promising approach was presented by Ye et al [187], which does not require the region of interest to have a smooth intensity distribution. It is also capable of dealing with a known inhomogeneous background. In addition, their level set function does not require reinitialisation, which typically has a high computational cost.
Chapter 4
Numerical optimization: Inverse problem
theory
4.1 Inverse Problems
Inverse problems are very common in Physics and image analysis, among other areas of science. In image analysis the application of inverse problem theory is fairly new, since this branch of science has only existed for around half a century. In the theory of inverse problems, a problem is separated into two parts: the forward part and the inverse part. The forward or direct part of a problem is the prediction of the observable data given the parameters of a model. In this sense, a direct problem would be to calculate the position of a moving object at a given time, under the assumption that the velocity vector is known. The inverse part of a problem is to predict the model parameters given the observable data; an inverse problem would be to calculate the forces acting on a planet given the observation of its trajectory. The terms forward and inverse are tied to the definition of what the model is and what the observations of the system are. For the example of the moving object, if the initial and current positions are considered to be the observations, then the estimation of the velocity becomes an inverse problem. It is therefore important to give a more formal definition of the model and data spaces.
The model space $P^m$ will contain a minimal parameterisation $p \in P^m$ that completely describes the system, where $P^m$ is typically an $m$-dimensional Hilbert space $H$ or, in the more general case, a Banach space $B$. A Hilbert space is a vector space with an embedded norm defined by an inner product. A Banach space is a generalisation of a Hilbert space in the sense that the norm need not be defined by an inner product. All Hilbert spaces are Banach spaces, but the converse does not always hold. The data space $Y^n$ will contain the observations $g \in Y^n$ that can be made about a particular system. The notions of forward modelling and inverse modelling are of major importance to the concept of inverse problems. Forward modelling is the discovery of the mapping $Z$ from the model space to the data space
$$Z : P^m \to Y^n. \qquad (4.1)$$
Intuitively, it is the set of laws that allows the prediction of the observations $g$, which can be made on the system, given the model parameters $p$. Inverse modelling is the mapping $Z^\dagger$ from the observations to the parameters of the model
$$Z^\dagger : Y^n \to P^m. \qquad (4.2)$$
This implies that by observing a particular system, the parameters of the model can be deduced. If we assume that there is no measurement noise, the model is
$$g = Z(p). \qquad (4.3)$$
For the operator $Z$, we define the range to be the set of all values that $Z$ can take as $p$ varies in $P^m$
$$\mathrm{Range}(Z) = \{g \in Y^n \mid g = Z(p) \text{ for some } p \in P^m\} \qquad (4.4)$$
and the null space as the set of all vectors $p$ that solve the equation $Z(p) = 0$
$$\mathrm{Null}(Z) = \{p \in P^m \mid Z(p) = 0\}. \qquad (4.5)$$
Assuming that the operator $Z$ is a linear mapping $Z : P^m \to Y^n$, the noise-free model of eq. (4.3) can be expressed as a matrix multiplication
$$g = Zp, \qquad (4.6)$$
where $Z \in \mathbb{R}^{n \times m}$. The singular value decomposition (SVD) of a matrix $Z \in \mathbb{R}^{n \times m}$ with $n > m$ is given by
$$Z = UDV^T, \qquad (4.7)$$
where
$$D = \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix} \in \mathbb{R}^{n \times m},$$
with $\Sigma_r = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)$, where $\sigma_1 \ge \sigma_2 \ge \ldots \ge \sigma_r > 0$ are the singular values of $Z$ and $r = \min\{m, n\}$ is the smallest dimension of the matrix. The matrices $U = (u_1, u_2, \ldots, u_n) \in \mathbb{R}^{n \times n}$ and $V = (v_1, v_2, \ldots, v_m) \in \mathbb{R}^{m \times m}$ are orthogonal, i.e. $UU^T = U^T U = I$ and $VV^T = V^T V = I$, and the vectors $u_i$ and $v_i$ are the left and right singular vectors, with $\sigma_j u_j = Z v_j$. If the matrix $Z \in \mathbb{R}^{n \times m}$ has more columns than rows, $m > n$, then the matrix $D$ in eq. (4.7) will not have any zeros on the diagonal. The condition number of a matrix is the ratio of the largest singular value to the smallest, $\mathrm{cond}(Z) = \sigma_1 / \sigma_r$.
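These quantities can be inspected numerically with a standard SVD routine; the matrix below is a small hypothetical example whose second and third rows make the two columns nearly linearly dependent.

```python
import numpy as np

# An ill-conditioned forward matrix: the two columns are nearly dependent.
Z = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-6],
              [2.0, 2.0 - 1e-6]])

# Z = U D V^T; NumPy returns the singular values in decreasing order,
# so s = (sigma_1, ..., sigma_r) with sigma_1 >= ... >= sigma_r > 0.
U, s, Vt = np.linalg.svd(Z)

# cond(Z) = sigma_1 / sigma_r; near-dependence shows up as sigma_2 << sigma_1.
cond = s[0] / s[-1]
```

The large condition number signals that inverting this forward map will strongly amplify measurement noise, exactly the behaviour described in the ill-conditioning discussion below.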
Eq. (4.3) expresses a forward problem. The inverse problem is then to calculate the parameters $p$ from the measurements $g$. In the idealized case, where the forward mapping $Z$ is exact and there is no noise, this will be
$$Z(p) - g = 0. \qquad (4.8)$$
Typically, though, this is not the case. The system of eq. (4.3) is said to be well-posed in the Hadamard sense [174], provided the following are true:
(i) for every $g \in Y^n$ there exists a solution $p \in P^m$ for which $g = Z(p)$ holds;
(ii) the solution $p$ is unique;
(iii) the solution is stable, i.e. if $g_0 = Z(p_0)$ and $g = Z(p)$, then $p \to p_0$ when $g \to g_0$.
If the system does not fulfill all three of the above requirements it is said to be ill-posed. Hadamard was convinced that ill-posed problems are not motivated by physical reality [12, p. 7]. This dismissal of ill-posed problems can be explained by the general scientific mentality of the early 20th century; this mentality of absolute truth and formality is characteristic of the works of Bertrand Russell and David Hilbert [33].
Definition of ill-conditioning
(i) If the forward mapping $Z$ is rank-deficient, then eq. (4.3) does not have a unique solution. A simple case where this happens is when the number of unknowns exceeds the number of equations; in other words, the system is underdetermined.
(ii) Underdetermined systems can also appear if the rank $r$ of the $n \times m$ matrix is less than its smallest dimension, $r < m \le n$. The rank of a matrix is the dimension of its range, $r = \dim(\mathrm{Range}(Z))$. In this case some of the columns of $Z$ are linearly dependent. Equivalently, the null space of $Z$ will not be trivial, $\mathrm{Null}(Z) \ne \{0\}$.
(iii) $Z$ can be numerically rank-deficient. Even though $\mathrm{Null}(Z) = \{0\}$, there can be a few columns which are very close to being linearly dependent. This can be seen as a clear gap in the decay of the singular values $\{\sigma_1, \sigma_2, \ldots, \sigma_r\}$: for some $k > 1$ the spectrum of the singular values will drop suddenly, $\sigma_{k+1} \ll \sigma_k$. The matrix $Z$ will then contain $k$ numerically linearly independent columns and its effective rank will be equal to $k$.
(iv) The spectrum of the singular values is connected with the oscillations in the singular vectors $u_j$, $v_j$. The smaller the singular values $\sigma_j$ are, the more oscillatory the singular vectors are [67], causing the amplification of noise, due to discretization, measurement noise and inexactness of the forward model, in the solution.
The condition number in both numerically rank-deficient and discrete ill-posed problems is typically very large, and the problem is effectively underdetermined.
If the system is ill-posed then the idealized solution in eq. (4.8) is unobtainable. In such cases one seeks to minimize some discrepancy functional between the predicted and measured data
$$p_{\min} = \arg\min_p \Phi(p), \qquad (4.9)$$
where $\Phi(p)$ is the objective function, also referred to as the cost function, and is typically defined as a norm $\|g - Z(p)\|$. In the next section the details of model selection will be discussed.
4.2 Model selection
4.2.1 Image parametrization
If the intensity variation across spatial locations is considered to be sufficiently smooth, an image $f(r)$, with $r \in \mathbb{R}^n$, can be approximated using local basis functions
$$\tilde{f}(r) = \sum_{k=1}^{N_\zeta} B_k(r)\,\zeta_k, \qquad (4.10)$$
where $\{\zeta_1, \zeta_2, \ldots, \zeta_{N_\zeta}\} = p_\zeta \in P$ forms the parameter vector, $N_\zeta$ is the number of basis functions (the grid resolution) and $P \subset \mathbb{R}^{N_\zeta}$ is the parameter vector space. The $k$th basis function is defined on a regular grid (fig. 4.1)
$$B_k(r) = B_0(r - r_k) = B_0(r) * \delta(r - r_k), \quad r \in \mathbb{R}^n, \qquad (4.11)$$
Figure 4.1: Regular 3 × 3 grid. The x and y axes represent the spatial location in $\mathbb{R}^2$ and the z axis represents the intensity.
where $r_k$ is a set of grid points in the $n$-dimensional spatial domain $\mathbb{R}^n$ and $B_0$ is the central basis function of general form
$$B_0(r) = \begin{cases} b(r) & \text{if } d \le a \\ 0 & \text{if } d > a \end{cases}, \qquad (4.12)$$
where $d$ is the Euclidean distance from the center of the $k$th basis function, $d = \|r - r_k\|_2$. A typical choice for the basis function $b(r)$ is the Kaiser-Bessel window function [106] (fig. 4.2)
$$b(r) = \frac{1}{I_m(\alpha)}\left(\sqrt{1 - (d/a)^2}\right)^m I_m\!\left(\alpha\sqrt{1 - (d/a)^2}\right), \qquad (4.13)$$
where $I_m$ is the modified Bessel function of the first kind, $m$ is the degree, $a$ is the support and $\alpha$ is a shape parameter. Kaiser-Bessel functions will be used for the reconstruction of images
Figure 4.2: Surface plot of the Kaiser-Bessel blob basis in 2D with support radius 1.45 and α = 6.4.
throughout the thesis. Another option for the basis function are the Wendland functions ([181] and [17]), defined for $C^6$ continuity as follows:
$$b(r) = (1 - d)^8 (32d^3 + 25d^2 + 8d + 1). \qquad (4.14)$$
A computationally faster alternative are the linear basis functions
$$b(r) = 1 - d/a \qquad (4.15)$$
and the Gaussian radial basis functions [147]
$$b(r) = e^{-d^2/\sigma^2}, \qquad (4.16)$$
where $\sigma$ controls the width.
Let $B = (B_1, B_2, \ldots, B_{N_\zeta})$ be the matrix whose $k$th column is the vectorized image of the $k$th basis function $B_k$; then eq. (4.10) can be rewritten as a matrix equation
$$\tilde{f} = B p_\zeta. \qquad (4.17)$$
Applications of local basis functions in 3D image reconstructions from tomographic data can be found in [52], [107], [115], [116] and [118].
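The matrix form of eq. (4.17) can be sketched directly, here using the Gaussian radial basis of eq. (4.16); the image size, grid spacing and width parameter are illustrative choices, not values from the thesis.

```python
import numpy as np

# Pixel coordinates of a 32 x 32 image and a coarse 4 x 4 grid of
# Gaussian blob centres (eq. 4.16); sigma is an illustrative choice.
n, g, sigma = 32, 4, 4.0
xs, ys = np.meshgrid(np.arange(n), np.arange(n))
cx, cy = np.meshgrid(np.linspace(4, n - 5, g), np.linspace(4, n - 5, g))
centres = np.stack([cx.ravel(), cy.ravel()], axis=1)

# B: each column is the vectorised image of one basis function B_k,
# evaluated at every pixel (squared distance to the blob centre).
d2 = ((xs.ravel()[:, None] - centres[:, 0]) ** 2
      + (ys.ravel()[:, None] - centres[:, 1]) ** 2)
B = np.exp(-d2 / sigma**2)                  # shape (n*n, N_zeta)

# Image synthesis f~ = B p_zeta (eq. 4.17) for a chosen coefficient vector.
p_zeta = np.linspace(0.0, 1.0, g * g)
f_tilde = (B @ p_zeta).reshape(n, n)
```

Because the model is linear in `p_zeta`, fitting the coefficients to data reduces to the least squares problems discussed in the next sections.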
4.2.2 Shape parametrization
The boundary of a region $\mathcal{C}(s)$ can be represented with global basis functions as follows:
Figure 4.3: Plot of the radial profiles of the linear (solid), Gauss with $\sigma = a/2$ (dashed), Wendland $C^6$ (dash-dotted) and Kaiser-Bessel with $\alpha = 1.45$ (dotted) basis functions.
$$\mathcal{C}(s) = \begin{pmatrix} x(s) \\ y(s) \end{pmatrix} = \sum_{n=1}^{N_\gamma} \begin{pmatrix} \gamma^x_n\, \theta_n(s) \\ \gamma^y_n\, \theta_n(s) \end{pmatrix}, \quad s \in [0, 1], \qquad (4.18)$$
where $\theta_n$ are periodic and differentiable basis functions, $N_\gamma$ is the number of basis functions and $\gamma^x_n, \gamma^y_n \in \mathbb{R}$ are the weights of $\theta_n$. Let $\gamma$ denote the vector of all boundary coefficients, i.e. $\gamma = \{\gamma^x_1, \gamma^y_1, \gamma^x_2, \gamma^y_2, \ldots, \gamma^x_{N_\gamma}, \gamma^y_{N_\gamma}\} \in P$. It is $\gamma$ that controls the shape of the region $\mathcal{C}(s)$. The trigonometric basis functions $\theta_n$ (fig. 4.4) are defined as follows:
$$\theta_1(s) = 1$$
$$\theta_n(s) = \sin\!\left(2\pi \tfrac{n}{2}\, s\right), \quad \text{if } n \text{ is even} \qquad (4.19)$$
$$\theta_n(s) = \cos\!\left(2\pi \tfrac{n-1}{2}\, s\right), \quad \text{if } n \text{ is odd.}$$
Figure 4.4: Plot of Fourier basis functions with $N_\gamma = 7$. Dashed curves are the cos (odd-index) terms and solid curves are the sin (even-index) terms.
Staib and Duncan [157] describe this Fourier parameterisation as rotating phasors defined by groups of four parameters, two for each axis, where the parameters are the weights of the basis functions. In a simplistic way, the generation of a contour using these functions can be thought of as the movement of a robotic arm, made of several parts all rotating continuously at different speeds, with a pencil on the end. This is reminiscent of the epicycles of Ptolemaic astronomy, which described planetary motion.
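The generation of a closed contour from the trigonometric basis of eq. (4.19) can be sketched as follows; the weights below are arbitrary illustrative values producing an ellipse-like curve.

```python
import numpy as np

def theta(n, s):
    """Trigonometric basis of eq. (4.19); s in [0, 1], n = 1, 2, ..."""
    if n == 1:
        return np.ones_like(s)
    if n % 2 == 0:
        return np.sin(2 * np.pi * (n // 2) * s)
    return np.cos(2 * np.pi * ((n - 1) // 2) * s)

# Illustrative weights for N_gamma = 5 basis functions: an ellipse-like
# closed contour centred at (0.5, 0.5), as in eq. (4.18).
s = np.linspace(0.0, 1.0, 200)
gx = [0.5, 0.3, 0.0, 0.0, 0.0]   # gamma^x_n
gy = [0.5, 0.0, 0.2, 0.0, 0.0]   # gamma^y_n
x = sum(gx[n - 1] * theta(n, s) for n in range(1, 6))
y = sum(gy[n - 1] * theta(n, s) for n in range(1, 6))
```

Since every basis function is periodic in $s$, the contour closes on itself automatically, which is the property that makes this parameterisation convenient for boundaries.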
An alternative to the trigonometric functions are parametric splines. These are local basis functions, in the sense that each one influences the curve locally. They are formed as overlapping functions between different control points. $N_\gamma$ in eq. (4.18) is in this case the total number of control points and $\theta_n(s)$ is defined as follows:
$$\theta_1(s) = \theta_0(s)$$
$$\theta_2(s) = \theta_0(s - d) \qquad (4.20)$$
$$\theta_n(s) = \theta_0(s - (n - 1)d),$$
where $d$ is the distance between neighboring control points and $\theta_0$ can be defined by cubic polynomial segments [83] (fig. 4.5)
$$\theta_0(s) = \begin{cases} \dfrac{s^3}{6} & \text{if } s < 0.25d \\[4pt] \dfrac{1 + 3s + 3s^2 - 3s^3}{6} & \text{if } 0.25d \le s < 0.5d \\[4pt] \dfrac{4 - 6s^2 + 3s^3}{6} & \text{if } 0.5d \le s < 0.75d \\[4pt] \dfrac{1 - 3s + 3s^2 - s^3}{6} & \text{if } 0.75d \le s < d. \end{cases} \qquad (4.21)$$
A further example of the spline approach can be found in [50].
Figure 4.5: Plot of B-spline basis functions with $N_\gamma = 7$.
4.3 Data discrepancy functionals
Continuing from eq. (4.9), a variety of data discrepancy functionals can be used for the objective function. The purpose of these functionals is to describe how close the predicted solution $y = Zp$ is to the measured data $g$. The Kullback-Leibler distance is a statistical measure
$$D_{kl}(g, y) = \sum_{j=1}^n g_j \log\frac{g_j}{y_j}. \qquad (4.22)$$
An example of minimization of the Kullback-Leibler distance, under certain conditions, is the expectation maximization (EM) method of Richardson [142] and Lucy [110]. More commonly found are data discrepancy functionals based on the $L_p$ norm. For a function $f(x)$ in a measure space $(X, L)$, the $L_p$ norm is
$$L_p = \|f\|_p = \left(\int_X |f(x)|^p\, dx\right)^{1/p}. \qquad (4.23)$$
The discrete version of this norm, for a vector $f \in B^n$, where $B^n$ is an appropriate¹ $n$-dimensional Banach space, is
$$l_p = \|f\|_p = \left(\sum_{j=1}^n |f_j|^p\right)^{1/p}. \qquad (4.24)$$
The next data discrepancy functional is based on the $l_1$ norm, the taxicab distance
$$D_{l_1}(g, y) = \|g - y\|_1 = \sum_{j=1}^n |g_j - y_j|. \qquad (4.25)$$
For examples of the use of the $l_1$ norm in optimization problems refer to [37]. The most common of all, the least squares (LS) functional, is based on the squared $l_2$ norm, the Euclidean distance
$$D^2_{l_2}(g, y) = \|g - y\|^2_2 = \sum_{j=1}^n |g_j - y_j|^2. \qquad (4.26)$$
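The three discrepancy functionals of eqs. (4.22), (4.25) and (4.26) are direct to implement; the vectors below are small illustrative examples, and the Kullback-Leibler distance is only evaluated on strictly positive data, as its definition requires.

```python
import numpy as np

def d_kl(g, y):
    """Kullback-Leibler distance of eq. (4.22); g and y strictly positive."""
    return np.sum(g * np.log(g / y))

def d_l1(g, y):
    """Taxicab (l1) distance of eq. (4.25)."""
    return np.sum(np.abs(g - y))

def d_l2_sq(g, y):
    """Squared Euclidean distance of eq. (4.26), the LS functional."""
    return np.sum((g - y) ** 2)

g = np.array([1.0, 2.0, 3.0])   # measured data (illustrative)
y = np.array([1.5, 2.0, 2.5])   # predicted data (illustrative)
```

Note that, unlike the two norms, the Kullback-Leibler distance is not symmetric in its arguments, which is one reason it is treated as a statistical measure rather than a metric.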
4.4 Least squares approximation
The solution to the ill-posed nature of certain problems has been studied by mathematicians such as Pierre-Simon Laplace, who in 1799 used the $L_1$ and $L_\infty$ norms, the 'least-absolute values' and 'minimax' [161] criteria in his own words. It was at the very beginning of the 19th century that the 'least-squares' criterion was established, by Adrien Marie Legendre in 1805 [104], Robert Adrain in 1808 [2] (possibly aware of Legendre's work [160]) and Johann Carl Friedrich Gauss in 1809 [53]. Details on the history of the 'least-squares' method can be found in the works of Harter [71, 72, 74, 75, 73, 76, 77].
¹By appropriate, we mean that an $l_p$ norm can be embedded; for example, it is not a Tsirelson space [170].
In most real cases $g \notin \mathrm{Range}(Z)$ and the idealized solution $g - Z(p) = 0$ cannot be obtained. The aim is to minimize the error between predictions and measured data, in this approach in a least squares sense
$$\Phi(p) = D^2_{l_2}(g, y), \qquad (4.27)$$
where $g$ are the measurements and $y$ are the predictions, which are a function of the model parameters, $y = Z(p)$. Let us first examine the case where the forward model $Z$ is linear.
4.4.1 Linear case
If the mapping $Z$ is linear then it can be expressed as a matrix $Z : P \to Y$, where $P \subset H^m$ and $Y \subset H^n$. Note that a Hilbert space is an example of a Banach space $B$ with the $l_2$ norm embedded; the $l_2$ norm is the only $l_p$ norm that can be embedded in a Hilbert space, as it can be defined by an inner product. The objective functional is
$$\Phi(p) = \|g - Zp\|^2_2. \qquad (4.28)$$
Expanding the above equation,
$$\Phi(p) = (g - Zp)^T (g - Zp)$$
$$\Phi(p) = g^T g - (Zp)^T g - g^T (Zp) + (Zp)^T (Zp).$$
Noting that $(Zp)^T g = g^T (Zp)$,
$$\Phi(p) = g^T g - 2p^T Z^T g + p^T Z^T Z p.$$
To obtain the minimum solution $p_{\min} = \arg\min_p \Phi(p)$, the derivative of the objective functional is set to zero:
$$\frac{\partial \Phi(p)}{\partial p} = 0$$
$$\frac{\partial \left( g^T g - 2p^T Z^T g + p^T Z^T Z p \right)}{\partial p} = 0$$
$$\frac{\partial g^T g}{\partial p} - 2\frac{\partial p^T Z^T g}{\partial p} + \frac{\partial p^T Z^T Z p}{\partial p} = 0.$$
Using the matrix derivative identities $\frac{\partial p^T a}{\partial p} = a$ and $\frac{\partial p^T A p}{\partial p} = (A + A^T)p$, where $p$ and $a$ are column vectors and $A$ is a square matrix, we obtain
$$0 - 2Z^T g + \left( (Z^T Z) + (Z^T Z)^T \right) p = 0. \qquad (4.29)$$
Noting that the matrix $Z^T Z$ is symmetric, i.e. $Z^T Z = (Z^T Z)^T$, we derive the set of normal equations
$$Z^T Z p = Z^T g. \qquad (4.30)$$
The solution can be obtained by inversion
$$p = \left( Z^T Z \right)^{-1} Z^T g, \qquad (4.31)$$
where $\left( Z^T Z \right)^{-1} Z^T$ is the Moore-Penrose generalized inverse, also referred to as the pseudoinverse, discovered independently by Moore in 1920 [124] and by Penrose in 1955 [134]. The Moore-Penrose inverse $A^\dagger$ is uniquely determined by the following four conditions:
$$(1)\; AA^\dagger A = A, \quad (2)\; A^\dagger A A^\dagger = A^\dagger, \quad (3)\; (AA^\dagger)^H = AA^\dagger, \quad (4)\; (A^\dagger A)^H = A^\dagger A.$$
These properties of the Moore-Penrose inverse also hold for the proper inverse $A^{-1}$. The pseudoinverse $A^\dagger$ is equivalent to the proper inverse if the matrix $A$ is square and nonsingular. The system in eq. (4.30) is typically solved by more efficient methods than matrix inversion, such as QR decomposition [13].
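For a well-conditioned linear problem the normal equations, the pseudoinverse and a QR-based solver all agree; the small overdetermined system below is an illustrative example with noiseless data.

```python
import numpy as np

# Overdetermined linear system g = Zp: n = 6 equations, m = 2 unknowns.
rng = np.random.default_rng(1)
Z = rng.normal(size=(6, 2))
p_true = np.array([2.0, -1.0])
g = Z @ p_true                       # noiseless measurements

# Normal equations (eq. 4.30): Z^T Z p = Z^T g, solved without forming
# an explicit matrix inverse.
p_normal = np.linalg.solve(Z.T @ Z, Z.T @ g)

# The same solution via the Moore-Penrose pseudoinverse (eq. 4.31).
p_pinv = np.linalg.pinv(Z) @ g

# In practice an orthogonal-factorisation solver is preferred for
# numerical stability on poorly conditioned systems.
p_lstsq, *_ = np.linalg.lstsq(Z, g, rcond=None)
```

With noisy data the three routes still coincide mathematically, but the normal equations square the condition number of $Z$, which motivates the factorisation-based methods mentioned above.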
In some cases the matrix $A$ will not have full rank, $\mathrm{Null}(A) \ne \{0\}$, and the least squares problem in eq. (4.28) will not have a unique solution. More often, though, some columns will be numerically close to being linearly dependent. The SVD spectrum of the theoretically rank-$r$ matrix will then have a clear jump in its decay: some singular values will be very close to zero. Formally, for a small number $\epsilon$ there exist singular values such that $\sigma_{k+1}, \ldots, \sigma_r \le \epsilon$, while the rest of the singular values will be far larger than $\epsilon$, that is $\epsilon \ll \sigma_1, \ldots, \sigma_k$. It is this $k$ that is the effective rank of the matrix $A$. Inversion of this numerically rank-deficient matrix will result in very unstable solutions. The remedy to this problem is to truncate the singular values that are below $\epsilon$ by setting them equal to zero. This will reduce the variance in the solutions:
$$A^\dagger_k = V \underbrace{\begin{pmatrix} \Sigma^{-1}_k & 0 \\ 0 & 0 \end{pmatrix}}_{\in\, \mathbb{R}^{m \times n}} U^T,$$
where $\Sigma_k = \mathrm{diag}\{\sigma_1, \ldots, \sigma_k\} \in \mathbb{R}^{k \times k}$. This approach is named Truncated Singular Value Decomposition (TSVD). The solution to the TSVD-modified problem is given by $p_{TSVD} = Z^\dagger_k g$, where $Z_k$ is the best rank-$k$ approximation of $Z$.
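The stabilising effect of truncation can be seen on a small hypothetical example: the matrix below is numerically rank-deficient, and a tiny measurement perturbation is amplified enormously by a full inverse but not by the truncated one.

```python
import numpy as np

def tsvd_solve(Z, g, k):
    """Truncated SVD solution p = Z_k^dagger g: keep the k largest
    singular values and set sigma_{k+1}, ..., sigma_r to zero."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ g))

# Numerically rank-deficient matrix: the third column almost equals the first.
Z = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1e-9],
              [1.0, 1.0, 1.0]])
p_true = np.array([1.0, 1.0, 1.0])
g_noisy = Z @ p_true + np.array([0.0, 1e-6, 0.0])   # tiny measurement noise

p_full = np.linalg.solve(Z, g_noisy)   # noise amplified by ~1/sigma_3
p_tsvd = tsvd_solve(Z, g_noisy, k=2)   # truncation stabilises the solution
```

The price paid is a bias: any component of the true solution lying along the discarded singular vectors is lost, which is the usual variance-for-bias trade of regularisation.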
4.4.2 Nonlinear case
For the nonlinear observation model, as in eq. (4.3), we define the weighted least squares functional
$$\Phi(p) = \|L_w(g - Z(p))\|^2_2, \qquad (4.32)$$
where $L_w$ is a weight matrix². To minimize this functional, we approximate it with a linear model in the local neighborhood of a given prediction $p_k$. Under the assumption that the functional is continuously differentiable in the local neighborhood of $p_k$, it can be expanded into a Taylor series
$$\Phi(p) = \sum_{n=0}^{\infty} \frac{\Phi^{(n)}(p_k)}{n!}(p - p_k)^n. \qquad (4.33)$$
It is typical in numerical methods to take the first two terms of the Taylor series expansion, the quadratic approximation, around the current estimate $p_k$:
$$\tilde{\Phi}(p) = \Phi(p_k) + \left[ \frac{\partial \Phi}{\partial p}(p_k) + \frac{1}{2}(p - p_k)^T \frac{\partial^2 \Phi}{\partial p^2}(p_k) \right](p - p_k). \qquad (4.34)$$
Setting the derivative of the quadratic functional $\tilde{\Phi}(p)$ equal to zero, we obtain the updated estimate $p_{k+1}$ as the minimiser of $\tilde{\Phi}(p)$:
$$\frac{\partial \tilde{\Phi}}{\partial p}(p_{k+1}) = \frac{\partial \Phi}{\partial p}(p_k) + (p_{k+1} - p_k)^T \frac{\partial^2 \Phi}{\partial p^2}(p_k) = 0. \qquad (4.35)$$
²In the linear case the weighted least squares solution can be found simply by replacing $Z$ with $\tilde{Z} = L_w Z$ and $g$ with $\tilde{g} = L_w g$ in eq. (4.31).
Assuming that the Hessian matrix $\frac{\partial^2 \Phi}{\partial p^2}$ is invertible,
$$p_{k+1} = p_k - \left[ \frac{\partial^2 \Phi}{\partial p^2}(p_k) \right]^{-1} \frac{\partial \Phi}{\partial p}(p_k). \qquad (4.36)$$
Eq. (4.36) is iterated until it converges or the stopping criteria are met. The derivative of the objective functional, $\frac{\partial \Phi}{\partial p} \in \mathbb{R}^m$, is
$$\frac{\partial \Phi}{\partial p} = -2\left[ \frac{\partial Z}{\partial p}(p_k) \right]^T L_w^T L_w (g - Z(p_k)), \qquad (4.37)$$
where $\frac{\partial Z}{\partial p}(p_k) = J_k$ is the Jacobian matrix. The Hessian matrix $\frac{\partial^2 \Phi}{\partial p^2} \in \mathbb{R}^{m \times m}$ will then be
$$\frac{\partial^2 \Phi}{\partial p^2} = -2\left[ \sum_{j=1}^{N} (g_j - Z_j(p_k))\, L_w^T L_w \frac{\partial^2 Z_j}{\partial p^2} \right] + 2 J_k^T L_w^T L_w J_k. \qquad (4.38)$$
Setting $K_k = -\sum_{j=1}^{N} (g_j - Z_j(p_k))\, L_w^T L_w \frac{\partial^2 Z_j}{\partial p^2}$ and substituting eqs. (4.37) and (4.38) into eq. (4.36), we obtain the Newton-Raphson iteration formula
$$p_{k+1} = p_k + s_k \left( K_k + J_k^T L_w^T L_w J_k \right)^{-1} J_k^T L_w^T L_w (g - Z(p_k)), \qquad (4.39)$$
where $s_k$ is the step parameter, controlling the convergence of the iterations [13]. It is the distance to be travelled in the minimizing direction. The step size can be computed with a line search algorithm ([11], [85]), which solves the 1D minimization problem of finding the correct distance to be travelled in the downhill direction, so that the minimum will not be missed.
Computation of the Hessian matrix is usually a slow process, which can be a prohibiting factor for many applications. Various approximations are often used as alternatives to the Newton-Raphson method; these are referred to as quasi-Newton methods, since they lack second derivative information. Replacing $K_k + J_k^T L_w^T L_w J_k$ with the identity matrix, we obtain the steepest descent method
$$p_{k+1} = p_k + s_k J_k^T L_w^T L_w (g - Z(p_k)). \qquad (4.40)$$
Another common approximation of eq. (4.39) is the GaussNewton method, where the termK
k
is ignored
60 Chapter 4. Numerical optimization: Inverse problem theory
p
k+1
= p
k
+s
k
_
J
T
k
L
T
w
L
w
J
k
_
−1
J
T
k
L
T
w
L
w
(g −Z(p
k
)) . (4.41)
Each iteration of the above method solves the linearised problem
$$p_{\min} = \arg\min_p \left\|L_w\left(g - (Z(p_k) + J_k(p - p_k))\right)\right\|_2^2, \qquad (4.42)$$
which is used to update the estimated solutions: $p_{k+1} = p_k + p_{\min}$. At each step of the iterations $p_{\min}$ represents the error between the predicted data $Z(p_k)$ and the measured data $g$, expressed in the model space. The Gauss-Newton method can be thought of as a series of approximated linear problems, which improve the estimated solution at each iteration.
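As a minimal illustration of eqs. (4.41)-(4.42), the sketch below runs the Gauss-Newton iteration on a toy two-parameter exponential decay model; the model, its Jacobian and all numerical values are assumptions for illustration only, not the forward model used in this thesis.

```python
import numpy as np

def gauss_newton(Z, jac, g, p0, L_w, steps=20, s=1.0):
    """Gauss-Newton iteration of eq. (4.41): each step solves the
    linearised least-squares problem of eq. (4.42)."""
    p = p0.astype(float)
    W = L_w.T @ L_w                      # combined weighting L_w^T L_w
    for _ in range(steps):
        J = jac(p)                       # Jacobian J_k at the current estimate
        r = g - Z(p)                     # residual g - Z(p_k)
        dp = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p = p + s * dp                   # p_{k+1} = p_k + s_k dp
    return p

# Illustrative exponential decay model Z(p) = p0 * exp(-p1 * t)
t = np.linspace(0.0, 2.0, 50)
Z = lambda p: p[0] * np.exp(-p[1] * t)
jac = lambda p: np.column_stack([np.exp(-p[1] * t),
                                 -p[0] * t * np.exp(-p[1] * t)])
p_true = np.array([2.0, 1.5])
g = Z(p_true)                            # noise-free synthetic data
p_hat = gauss_newton(Z, jac, g, np.array([1.0, 1.0]), np.eye(t.size))
```

On this noise-free problem each linearised subproblem improves the estimate, and the iteration converges to the generating parameters.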
Finally, replacing $K_k$ with a control term $\lambda I$ and assuming that the step parameter is fixed, $s_k = 1$, we arrive at the Levenberg-Marquardt method [105], [117]:
$$p_{k+1} = p_k + \left[J_k^T L_w^T L_w J_k + \lambda I\right]^{-1} J_k^T L_w^T L_w\,(g - Z(p_k)), \qquad (4.43)$$
where $\lambda \geq 0$ is a control parameter. The behaviour of the method depends on the parameter $\lambda$: the update direction lies between the Gauss-Newton direction (when $\lambda = 0$) and the steepest descent direction (as $\lambda \to \infty$). At each iteration the Levenberg-Marquardt method minimizes the following objective functional:
$$\Phi(p) = \left\|L_w\left(g - (Z(p_k) + J_k(p - p_k))\right)\right\|_2^2 + \lambda\|p - p_k\|_2^2. \qquad (4.44)$$
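The Levenberg-Marquardt update of eq. (4.43) can be sketched in the same toy setting; the adaptation rule for λ (halving on a successful step, doubling otherwise) is one common heuristic, and the exponential model and all values are again illustrative assumptions.

```python
import numpy as np

def levenberg_marquardt(Z, jac, g, p0, lam=1e-2, steps=50):
    """Levenberg-Marquardt iteration of eq. (4.43) with a basic adaptation
    rule: decrease lambda when a step lowers the cost, increase otherwise."""
    p = p0.astype(float)
    cost = lambda q: np.sum((g - Z(q)) ** 2)
    for _ in range(steps):
        J = jac(p)
        dp = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ (g - Z(p)))
        if cost(p + dp) < cost(p):
            p, lam = p + dp, lam * 0.5   # toward the Gauss-Newton direction
        else:
            lam *= 2.0                   # toward the steepest descent direction
    return p

# Same illustrative exponential model as for Gauss-Newton
t = np.linspace(0.0, 2.0, 50)
Z = lambda p: p[0] * np.exp(-p[1] * t)
jac = lambda p: np.column_stack([np.exp(-p[1] * t),
                                 -p[0] * t * np.exp(-p[1] * t)])
g = Z(np.array([2.0, 1.5]))
p_hat = levenberg_marquardt(Z, jac, g, np.array([0.5, 0.5]))
```

Growing λ damps the step when the linearisation is poor, which is what makes the method behave like an implicit trust region, as discussed in the next section.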
4.5 Constrained optimization: The method of Lagrange
The majority of methods presented in the previous section (§4.4.2) require a line search algorithm for the calculation of the step length in the descent direction. Another approach is to define a region where the linearized model is considered to be a good approximation of the objective functional. An example of this methodology is the Levenberg-Marquardt method (eq. (4.43)), which simultaneously evaluates both the step length and the direction. While minimizing the linearized objective functional $\tilde{\Phi}(p)$, the update $\Delta p = p_{k+1} - p_k$ is restricted to lie within a region of radius $\delta_t$, that is $\|\Delta p\|_2^2 \leq \delta_t$. These methods are called trust-region or restricted step methods. This is a constrained optimization problem and it can be expressed as:
$$\min_p \tilde{\Phi}(p), \quad \text{subject to } \|\Delta p\|_2^2 \leq \delta_t. \qquad (4.45)$$
In the standard Levenberg-Marquardt algorithm the trust region radius is indirectly controlled with the use of $\lambda$ (eq. (4.43)). Modern trust region methods offer direct control of $\delta_t$. They do not seek an exact solution to the above equation, but instead a near optimal one. Moré [125] gives details of such methods. For further details on the Levenberg-Marquardt method refer to Marquardt's original paper [117] as well as [85] and [13]. Bazaraa et al. [11] also discuss the necessary conditions for the existence of an optimal solution in nonlinear programming.
The methodology for constrained optimization problems was initially discovered by Leonhard Euler and Joseph-Louis Lagrange in the middle of the 18th century. While Euler had originally worked out a geometrical proof of his method, Lagrange was able to prove the same using analysis alone. Lagrange published some applications of his method in 1788 and further generalized it in 1797. More details on the history of the calculus of variations can be found in [57].
Consider the following optimization problem:
$$\min_p \Phi(p), \quad \text{subject to } f_i(p) = 0 \text{ and } c_j(p) \leq 0, \qquad (4.46)$$
where $p \in \mathbb{R}^m$, $f_i$ is an array of constraint functions with $i \in [1, N_e]$ and similarly $c_j$ is an array of constraint functions with $j \in [1, N_i]$. By the method of Lagrange the constrained problem in $m$ variables is transformed into an unconstrained problem with additional variables. We form the following objective functional:
$$\Lambda(p, \lambda, \mu) = \Phi(p) + \lambda f(p) + \mu c(p), \qquad (4.47)$$
where $\Lambda(p, \lambda, \mu)$ is the Lagrangian and $\lambda \in \mathbb{R}^{N_e}$ and $\mu \in \mathbb{R}^{N_i}$ are the Lagrange multipliers. The inequality constraints can be converted into equality constraints with the introduction of a slack variable: $c_j(p) \leq 0$ is equivalent to $c_j(p) + a^2 = 0$ [111]. An alternative to the slack variable method was presented in [178]. The unconstrained problem will be a problem in $m + N_e + N_i$ variables, with the additional ones being the Lagrange multipliers. As previously, to minimize this we need to find the stationary points. We set the derivatives with respect to $p$ and to the Lagrange multipliers $\lambda$ and $\mu$ equal to zero:
$$\frac{\partial \Lambda(p, \lambda, \mu)}{\partial p} = 0, \qquad \frac{\partial \Lambda(p, \lambda, \mu)}{\partial \lambda} = 0, \qquad \frac{\partial \Lambda(p, \lambda, \mu)}{\partial \mu} = 0.$$
Solutions are obtained by solving the above system of equations subject to existence and optimality conditions; typically the Karush-Kuhn-Tucker conditions are used [11], [45]. These stationary points of the Lagrangian are potential solutions of the constrained problem in eq. (4.46).
In general, equality constraints are simpler to handle than inequality ones, though neither usually has a trivial solution. Constraints can be employed to restrict the solutions to some specific set, such as the positive numbers $\mathbb{R}^+$, or more generally to impose prior knowledge on the solution set. More details on constrained optimization and Lagrangian methods can be found in [183] and [45].
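For a concrete, purely illustrative instance of the method: minimizing a quadratic $\Phi(p) = \frac{1}{2}p^T A p - b^T p$ subject to a single equality constraint $c^T p = d$ turns the stationarity conditions of the Lagrangian into one linear (KKT) system, which the sketch below solves directly. The matrices and values are arbitrary assumptions.

```python
import numpy as np

# Minimise Phi(p) = 0.5 p^T A p - b^T p subject to c^T p = d.
# Setting the derivatives of the Lagrangian
#   Lambda(p, lam) = Phi(p) + lam * (c^T p - d)
# with respect to p and lam to zero gives one linear (KKT) system.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])              # symmetric positive definite
b = np.array([1.0, 1.0])
c = np.array([1.0, 1.0])
d = 1.0

K = np.block([[A, c[:, None]],
              [c[None, :], np.zeros((1, 1))]])
rhs = np.concatenate([b, [d]])
sol = np.linalg.solve(K, rhs)
p_opt, lam_opt = sol[:2], sol[2]        # stationary point of the Lagrangian
```

The solution satisfies both the constraint and the stationarity condition $Ap + \lambda c = b$, i.e. it is a stationary point of the Lagrangian rather than of $\Phi$ alone.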
4.6 Tikhonov regularisation
Ill-posed problems suffer from instability in their solutions. The ill-conditioned nature of the inverse operator can cause the solution to be far from the real or sought solution. To cure this, the ill-posed problem is replaced by a well-posed approximation, such that the new approximate solution is stable and unique. Common regularization techniques include methods based on the TSVD, as discussed in §4.4.1, and Tikhonov regularization. Tikhonov regularization became widely known from the works of Andrey Nikolayevich Tikhonov in 1943 [165] and David Phillips in 1962 [136]. Both applied the method to integral equations. Essentially, it is a method of obtaining stable solutions to ill-posed problems by forming a constrained optimization problem, the constraint being that the set of solutions is a compact one [164].
The generalized Tikhonov regularization assumes the following objective functional:
$$\Phi(p) = \|L_w(g - Z(p))\|_2^2 + \lambda P(p), \qquad (4.48)$$
where $L_w$ is a weight matrix, $\lambda > 0$ is a regularization parameter and $P$ is a penalty functional. The penalty functional is used to penalize unwanted solutions, typically non-smooth ones. The Tikhonov regularization method is essentially a solution to the following constrained optimization problem [13]:
$$\min_p \|L_w(g - Z(p))\|_2^2, \quad \text{subject to } P(p) \leq \varepsilon, \qquad (4.49)$$
where $\varepsilon \in \mathbb{R}^+$ is a scalar governing the balance between a small residual and a penalized solution. Both Tikhonov [164, p. 57] and Phillips [136, p. 86] used Lagrange multipliers to derive the regularization method. An analysis of constraints and penalty functionals can be found in [11].
Typical choices for the penalty functional are the $\ell_1$ (see e.g. [48]) and $\ell_2$ norms, introduced in §4.3, as well as the total variation (TV) functional
$$TV(f) = \int_\Omega \left|\frac{\partial f}{\partial x}\right| dx = \int_\Omega |\nabla f|\,dx, \qquad (4.50)$$
where $x \in \Omega$ is an $m$-dimensional vector, $\Omega \subseteq \mathbb{R}^m$ is a bounded open set and $\frac{\partial f}{\partial x} = \nabla f$ is the gradient of $f$. The TV functional has been used in image processing optimization problems in the works of Candès et al. [23], Rudin et al. [143] and Chan et al. [25]. For the theory of TV regularization refer to [174]. The discussion here will be restricted to the $\ell_2$ penalty functional, as in the original formulation by Tikhonov.
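A discrete version of eq. (4.50) for a 2D image can be sketched as follows; the finite-difference scheme and the test images are illustrative choices, used here only to show the edge-preserving behaviour of the TV penalty.

```python
import numpy as np

def total_variation(f, h=1.0):
    """Discrete isotropic total variation of a 2D image: a finite-difference
    version of eq. (4.50), summing the gradient magnitude over the domain."""
    gy, gx = np.gradient(f, h)           # derivatives along rows and columns
    return np.sum(np.sqrt(gx ** 2 + gy ** 2)) * h ** 2

# A piecewise-constant image keeps a small TV despite its sharp edge,
# whereas an oscillating perturbation raises the TV considerably.
step = np.zeros((32, 32))
step[:, 16:] = 1.0
oscillating = step + 0.1 * np.sin(2.0 * np.arange(32))  # added oscillation
tv_step = total_variation(step)
tv_osc = total_variation(oscillating)
```

This is why minimizing TV suppresses oscillatory (non-smooth) solutions while still admitting discontinuities such as organ boundaries.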
4.6.1 Linear case
The forward model $Z$ can be expressed as a linear mapping $Z : P \to Y$, where $P \subset H^m$ and $Y \subset H^n$ are defined as usual. We set the penalty functional to $P(p) = \|L(p - p^*)\|_2^2$, where $p^*$ is an a priori parameter estimate and $L$ is a regularization operator. The Tikhonov regularization functional is then formulated as follows:
$$\Phi(p) = \|L_w(g - Zp)\|_2^2 + \lambda\|L(p - p^*)\|_2^2. \qquad (4.51)$$
Setting the derivative of the objective functional $\Phi(p)$ equal to zero, we obtain the set of normal equations
$$\left(Z^T L_w^T L_w Z + \lambda L^T L\right) p = Z^T L_w^T L_w\,g + \lambda L^T L\,p^*. \qquad (4.52)$$
If the regularization operator $L$ is chosen so that $\mathrm{Null}(Z) \cap \mathrm{Null}(L) = \{0\}$, which directly implies that $Z^T L_w^T L_w Z + \lambda L^T L$ is positive definite, then the solution is unique and defined as:
$$p = \left(Z^T L_w^T L_w Z + \lambda L^T L\right)^{-1}\left(Z^T L_w^T L_w\,g + \lambda L^T L\,p^*\right). \qquad (4.53)$$
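The closed-form solution of eq. (4.53) can be checked numerically; the problem sizes, the random stand-in for the forward matrix $Z$ and the value $\lambda = 10^{-2}$ are illustrative assumptions. The example also verifies the positive definiteness claim above for the identity choices of $L$ and $L_w$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative underdetermined linear forward model Z (n < m), so Z^T Z
# is singular and the unregularized normal equations cannot be solved.
n, m = 20, 50
Z = rng.standard_normal((n, m))
p_true = np.zeros(m)
p_true[10:15] = 1.0
g = Z @ p_true

L_w = np.eye(n)                          # data weighting
L = np.eye(m)                            # regularization operator
p_star = np.zeros(m)                     # a priori parameter estimate
lam = 1e-2

# Closed-form Tikhonov solution, eq. (4.53)
A = Z.T @ L_w.T @ L_w @ Z + lam * L.T @ L
p_reg = np.linalg.solve(A, Z.T @ L_w.T @ L_w @ g + lam * L.T @ L @ p_star)
```

With $\lambda > 0$ the regularized matrix is invertible even though $Z^T Z$ alone is rank deficient, and the reconstructed coefficients reproduce the data closely.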
The incorporation of a priori information using the regularizing operator can be seen better in the "stacked form"
$$\Phi(p) = \left\|\begin{pmatrix} L_w g \\ \sqrt{\lambda}\,L p^* \end{pmatrix} - \begin{pmatrix} L_w Z \\ \sqrt{\lambda}\,L \end{pmatrix} p\right\|_2^2. \qquad (4.54)$$
Intuitively speaking, this can be interpreted as converting the (numerically) underdetermined least squares problem into an overdetermined system of equations by adding the a priori information encoded in the regularization operator $L$.
As in the ordinary Tikhonov scheme, we assume that the regularization operator $L$ and the weighting matrix $L_w$ are equal to the identity matrix $I$ and that we have no prior estimate of $p$, i.e. $p^* = 0$. Performing these substitutions in eq. (4.53), we obtain the solution
$$p = \left(Z^T Z + \lambda I\right)^{-1} Z^T g, \qquad (4.55)$$
which minimizes the following objective functional:
$$\Phi(p) = \|g - Zp\|_2^2 + \lambda\|p\|_2^2, \qquad (4.56)$$
with its equivalent "stacked form":
$$\Phi(p) = \left\|\begin{pmatrix} g \\ 0 \end{pmatrix} - \begin{pmatrix} Z \\ \sqrt{\lambda}\,I \end{pmatrix} p\right\|_2^2. \qquad (4.57)$$
This is known as damped least squares [171]. In the statistical literature it is referred to as ridge
regression [13].
The choice of the regularization parameter is an important research topic. The goal is to find a method that automatically calculates $\lambda$. Such methods include generalized cross validation, the discrepancy principle and the L-curve method [174], [68]. A discussion of the L-curve method can be found in [173]. This method requires solving the minimisation problem for a variety of $\lambda$ parameters and then choosing the optimal one, which is prohibitive for many applications due to the computational cost. The literature on the topic of "automatic" regularisation parameter choice is vast and a proper discussion of these methods exceeds the purposes of this thesis.
4.6.2 Nonlinear case
For the nonlinear observation model $Z(p)$, the generalized Tikhonov regularization is expressed as in eq. (4.48):
$$\Phi(p) = \|L_w(g - Z(p))\|_2^2 + \lambda P(p). \qquad (4.58)$$
The Newton-Raphson method, based on successive linear approximations, is derived similarly to §4.4.2 by setting the derivative of the linearized objective functional equal to zero, $\frac{\partial\tilde{\Phi}}{\partial p} = 0$. This leads to the following formula:
$$p_{k+1} = p_k - \left[\frac{\partial^2\Phi}{\partial p^2}(p_k)\right]^{-1}\frac{\partial\Phi}{\partial p}(p_k). \qquad (4.59)$$
For convenience we will use Newton's notation and set $\frac{\partial P}{\partial p}(p_k) = P'(p_k)$ and $\frac{\partial^2 P}{\partial p^2}(p_k) = P''(p_k)$. The gradient of the objective functional $\frac{\partial\Phi}{\partial p} \in \mathbb{R}^m$ is then expanded to
$$\frac{\partial\Phi}{\partial p}(p_k) = -2\,J_k^T L_w^T L_w\,(g - Z(p_k)) + \lambda P'(p_k). \qquad (4.60)$$
The Hessian matrix $\frac{\partial^2\Phi}{\partial p^2} \in \mathbb{R}^{m\times m}$ is evaluated as
$$\frac{\partial^2\Phi}{\partial p^2}(p_k) = 2K_k + 2\,J_k^T L_w^T L_w J_k + \lambda P''(p_k). \qquad (4.61)$$
Ignoring the $K_k$ term in the Hessian matrix, we obtain the Gauss-Newton regularized solution with an iteration formula of the following kind:
$$p_{k+1} = p_k + s_k\left[J_k^T L_w^T L_w J_k + \tfrac{1}{2}\lambda P''(p_k)\right]^{-1}\left[J_k^T L_w^T L_w\,(g - Z(p_k)) - \tfrac{1}{2}\lambda P'(p_k)\right]. \qquad (4.62)$$
Typically the $\frac{1}{2}$ factor can be absorbed into the regularization parameter $\lambda$ to simplify the iteration formula further. Setting the penalty functional $P(p) = \|p\|_2^2$, one arrives at the Levenberg-Marquardt method described in §4.4.2.
Additional constraints, such as positivity of the solutions, $p \in \mathbb{R}^m_+$, can be incorporated in a similar way as presented here. Up to this point all presented methodology is based on an algebraic or deterministic approach to minimization. In the next section a statistical approach to minimization will be discussed.
4.7 Statistical estimation: Kalman ﬁlters
An alternative to the deterministic approach to inverse problems is to formulate the problem using tools from statistical analysis. More details on statistical approaches and Bayesian methods can be found in [161] and [86]. The focus of this section will be on the theory of Kalman filters.
During World War II Norbert Wiener (1894-1964) and Andrey Kolmogorov (1903-1987) set the foundations for the theory of filtering. Their efforts were aimed at solving target tracking problems, and they independently formulated a theory, now known as Wiener-Kolmogorov filtering theory, in 1941 (Kolmogorov [98]) and in 1942 (Wiener [185]). While many people have worked on filtering theory, the best known results are the Kalman filters. Rudolf Kalman (b. 1930) published his results on the Wiener-Kolmogorov problems in 1960 [87]. Kalman went further than Wiener and Kolmogorov by allowing time variation in the system. For a further historical perspective refer to [156].
Kalman filtering has found many applications in imaging and in engineering in general. In Electrical Impedance Tomography (EIT), Kolehmainen et al. [97] presented a method for estimating time-varying boundaries of regions with known conductivity.
Figure 4.6: From left to right: N. Wiener, A. Kolmogorov and R. Kalman.
Kim et al. [93] developed a method for dynamic imaging in EIT. Baroudi et al. [8] worked on the estimation of gas temperature distribution in electric wire tomography. Kao et al. [88] presented a sinogram restoration technique for image reconstruction in Positron Emission Tomography (PET). In [123] Kalman filters were used in cardiac MRI to estimate myocardial motion using velocity fields.
4.7.1 Linear case: Discrete Kalman ﬁlters
The discrete Kalman filter algorithm solves the estimation problem of the state $p \in \mathbb{R}^m$ of a time controlled process given some measurements $g \in \mathbb{R}^n$. This process is governed by a linear equation of the form:
$$p_t = S p_{t-1} + w_{t-1}. \qquad (4.63)$$
The process state is related to the measurement by
$$y_t = Z p_t + n_t. \qquad (4.64)$$
$S$ is the state transition model; it relates the state at time step $t-1$ to the next time step and represents our knowledge of the motion of the states. If the motion is unknown, the identity matrix can be used as $S$ to express a random-walk model. $Z : P \to Y$ is assumed to be linear and represents the mapping from the state space to the measurement space; in inverse problem terminology, this is referred to as the forward model. The variables $w_t$ and $n_t$ represent the process and measurement white noise; they are assumed to have normal probability distributions:
$$p(w_t) \sim N(0, C_{w,t}) \qquad (4.65)$$
$$p(n_t) \sim N(0, C_{n,t}), \qquad (4.66)$$
where $C_{w,t}$ and $C_{n,t}$ are the process and measurement noise covariance matrices at time $t$, respectively.
We define $p_{t|t-1}$ to be the a priori state estimate, representing our knowledge of the process prior to step $t$, and $p_{t|t}$ to be the a posteriori state estimate at time step $t$. The a priori and the a posteriori estimate errors are
$$e_{t|t-1} \equiv p_t - p_{t|t-1} \qquad (4.67)$$
$$e_{t|t} \equiv p_t - p_{t|t}. \qquad (4.68)$$
Then the a priori and a posteriori estimate error covariances are
$$C_{t|t-1} = E[e_{t|t-1} e_{t|t-1}^T] \qquad (4.69)$$
$$C_{t|t} = E[e_{t|t} e_{t|t}^T], \qquad (4.70)$$
where $E[\cdot]$ is the expectation operator.
We seek an equation that computes an a posteriori state estimate $p_{t|t}$ as a linear combination of the a priori state estimate $p_{t|t-1}$ and a weighted difference between a measurement $g_t$ and a predicted measurement $Z p_{t|t-1}$:
$$p_{t|t} = p_{t|t-1} + G_t\,(g_t - Z p_{t|t-1}). \qquad (4.71)$$
The matrix $G_t$ is called the Kalman gain. The aim is to find $G_t$ such that the mean-square estimation error is minimized. The derivation presented in this thesis is along the general lines of [16]; an alternative derivation from the viewpoint of regression analysis can be found in [40]. To minimize the mean-square error, the trace of the a posteriori error covariance matrix has to be minimized. The trace of $C_{t|t}$ represents the sum of the error expectations between measurements and predictions. The argument is that the individual mean-square errors will be minimized when the total is minimized [16, p. 215]. Continuing from eq. (4.71), we substitute $g_t$ from eq. (4.64):
$$p_{t|t} = p_{t|t-1} + G_t\,(Z p_t + n_t - Z p_{t|t-1}).$$
Substituting this into eq. (4.68) and the result into eq. (4.70)
68 Chapter 4. Numerical optimization: Inverse problem theory
$$C_{t|t} = E\left[\left(p_t - p_{t|t-1} - G_t(Z p_t + n_t - Z p_{t|t-1})\right)\left(p_t - p_{t|t-1} - G_t(Z p_t + n_t - Z p_{t|t-1})\right)^T\right].$$
Calculating the expectations and assuming that the a priori error $(p_t - p_{t|t-1})$ is uncorrelated with the measurement error $n_t$, the following is obtained:
$$C_{t|t} = C_{t|t-1} - G_t Z C_{t|t-1} - C_{t|t-1} Z^T G_t^T + G_t\left(Z C_{t|t-1} Z^T + C_{n,t}\right)G_t^T. \qquad (4.72)$$
Taking the derivative of the trace of $C_{t|t}$ with respect to $G_t$, and noting that $\mathrm{Tr}[A] = \mathrm{Tr}[A^T]$ and that $C_{t|t-1}$ is a covariance matrix and therefore symmetric,
$$\frac{d\,\mathrm{Tr}[C_{t|t}]}{dG_t} = -2\frac{d\,\mathrm{Tr}[G_t Z C_{t|t-1}]}{dG_t} + \frac{d\,\mathrm{Tr}\left[G_t\left(Z C_{t|t-1} Z^T + C_{n,t}\right)G_t^T\right]}{dG_t}. \qquad (4.73)$$
Using matrix derivative identities in eq. (4.73), we obtain
$$\frac{d\,\mathrm{Tr}[C_{t|t}]}{dG_t} = -2(Z C_{t|t-1})^T + G_t\left[\left(Z C_{t|t-1} Z^T + C_{n,t}\right) + \left(Z C_{t|t-1} Z^T + C_{n,t}\right)^T\right].$$
The matrix $Z C_{t|t-1} Z^T + C_{n,t}$ is symmetric, since $C_{t|t-1}$ and $C_{n,t}$ are both covariance matrices:
$$\frac{d\,\mathrm{Tr}[C_{t|t}]}{dG_t} = -2\,C_{t|t-1} Z^T + 2\,G_t\left(Z C_{t|t-1} Z^T + C_{n,t}\right).$$
Setting this derivative equal to zero and solving for $G_t$:
$$G_t = C_{t|t-1} Z^T\left(Z C_{t|t-1} Z^T + C_{n,t}\right)^{-1}. \qquad (4.74)$$
When $C_{n,t}$ approaches zero, $G_t$ weights the residual more heavily:
$$\lim_{C_{n,t}\to 0} G_t = C_{t|t-1} Z^T\left(Z C_{t|t-1} Z^T\right)^{-1}. \qquad (4.75)$$
Setting $\tilde{Z} = Z C_{t|t-1}^{1/2}$:
$$\lim_{C_{n,t}\to 0} G_t = C_{t|t-1}^{1/2}\,\tilde{Z}^T\left(\tilde{Z}\tilde{Z}^T\right)^{-1}. \qquad (4.76)$$
Noting that $\tilde{Z}^T(\tilde{Z}\tilde{Z}^T)^{-1}$ is the underdetermined version of the Moore-Penrose inverse $\tilde{Z}^\dagger$:
$$\lim_{C_{n,t}\to 0} G_t = C_{t|t-1}^{1/2}\,\tilde{Z}^\dagger = C_{t|t-1}^{1/2}\,C_{t|t-1}^{-1/2}\,Z^\dagger = Z^\dagger. \qquad (4.77)$$
In simple terms, this means that as the measurement error covariance decreases, the actual measurements are trusted more. If the a priori error covariance $C_{t|t-1}$ approaches zero, the Kalman gain $G_t$ will weight the residual less:
$$\lim_{C_{t|t-1}\to 0} G_t = 0. \qquad (4.78)$$
This means that as the a priori error covariance decreases, less trust is put on the actual measurements $g_t$ and more on the predicted measurements $y_t$.
Having defined the matrix $G_t$, the full expression for the a posteriori covariance matrix $C_{t|t}$ can be obtained. Substituting eq. (4.74) into eq. (4.72) and setting $D = Z C_{t|t-1} Z^T + C_{n,t}$:
$$C_{t|t} = C_{t|t-1} - C_{t|t-1} Z^T D^{-1} Z C_{t|t-1} - C_{t|t-1} Z^T\left(C_{t|t-1} Z^T D^{-1}\right)^T + C_{t|t-1} Z^T D^{-1} D\left(C_{t|t-1} Z^T D^{-1}\right)^T.$$
Noting that $D$ and $C_{t|t-1}$ are symmetric, we obtain
$$C_{t|t} = C_{t|t-1} - G_t Z C_{t|t-1}. \qquad (4.79)$$
The first step of the discrete Kalman filter algorithm is to project the state ahead:
$$p_{t+1|t} = S_t\,p_{t|t}. \qquad (4.80)$$
Then the error covariance:
$$C_{t+1|t} = S_t C_{t|t} S_t^T + C_{w,t}. \qquad (4.81)$$
The previous two equations constitute the prediction step of the algorithm. The next step is to correct. First we compute the Kalman gain $G_t$ as in eq. (4.74):
$$G_t = C_{t|t-1} Z^T\left(Z C_{t|t-1} Z^T + C_{n,t}\right)^{-1}. \qquad (4.82)$$
Next we update the state estimate with respect to the measurement $g_t$:
$$p_{t|t} = p_{t|t-1} + G_t\,(g_t - Z p_{t|t-1}). \qquad (4.83)$$
The final step is to update the error covariance:
$$C_{t|t} = C_{t|t-1} - G_t Z C_{t|t-1}. \qquad (4.84)$$
The previous equations are iterated until convergence and are known as the discrete Kalman filter algorithm. As mentioned previously, this derivation assumes that the process is linear; for nonlinear systems the extended Kalman filter is used. This is derived by linearization, as presented previously in §4.4.2; details will be given in the following section.
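The prediction and correction steps above can be collected into a short sketch; the 1D random-walk example, the noise levels and the dimensions are illustrative assumptions, not a model from this thesis.

```python
import numpy as np

def kalman_filter(g_seq, S, Z, C_w, C_n, p0, C0):
    """Discrete Kalman filter: prediction, eqs. (4.80)-(4.81), followed by
    the gain (4.74), state correction (4.83) and covariance update (4.79)."""
    p, C = p0, C0
    estimates = []
    for g in g_seq:
        p = S @ p                        # project the state ahead
        C = S @ C @ S.T + C_w            # project the error covariance
        G = C @ Z.T @ np.linalg.inv(Z @ C @ Z.T + C_n)
        p = p + G @ (g - Z @ p)          # correct with the measurement
        C = C - G @ Z @ C
        estimates.append(p)
    return np.array(estimates), C

# Illustrative 1D random-walk state observed directly (S = Z = I)
rng = np.random.default_rng(1)
S, Z = np.eye(1), np.eye(1)
C_w, C_n = 1e-4 * np.eye(1), 0.25 * np.eye(1)
g_seq = 1.0 + 0.5 * rng.standard_normal((200, 1))   # noisy measurements of 1.0
est, C_final = kalman_filter(g_seq, S, Z, C_w, C_n, np.zeros(1), np.eye(1))
```

Because the process noise is small relative to the measurement noise, the gain shrinks over time and the filter effectively averages the measurements, with the error covariance contracting accordingly.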
4.7.2 Nonlinear case: Extended Kalman ﬁlters
The parameters of the nonlinear forward model are mapped to the data space $Y$ using the following relation:
$$y_t = Z(p_t) + n_t, \qquad (4.85)$$
where $Z(p) : \mathbb{R}^m \to \mathbb{R}^n$ is a mapping from the parameter space to the data space and $n_t$ is some noise process related to the forward mapping at time $t$. The forward model $Z(p)$ is nonlinear and it will be linearized. The observation equation becomes
$$y_t = Z(p_{*,t}) + J_{p,t}(p_{*,t})(p_t - p_{*,t}) + n_t, \qquad (4.86)$$
where $p_{*,t}$ is the linearization point, $J_{p,t} = \frac{\partial Z(p_t)}{\partial p}$ is the Jacobian matrix and $n_t$ is a zero-mean Gaussian observation noise process with covariance matrix $C_{n,t}$.
The linearized problem can be solved recursively using the extended Kalman filter algorithm equations
$$G_t = C_{t|t-1} J_{p,t}^T(p_{*,t})\left(J_{p,t}(p_{*,t})\,C_{t|t-1}\,J_{p,t}^T(p_{*,t}) + C_{n,t}\right)^{-1} \qquad (4.87)$$
$$p_{t|t} = p_{t|t-1} + G_t\left(g_t - Z(p_{*,t}) - J_{p,t}(p_{*,t})(p_t - p_{*,t})\right) \qquad (4.88)$$
$$C_{t|t} = C_{t|t-1} - G_t\,J_{p,t}(p_{*,t})\,C_{t|t-1} \qquad (4.89)$$
$$p_{t+1|t} = S_t\,p_{t|t} \qquad (4.90)$$
$$C_{t+1|t} = S_t C_{t|t} S_t^T + C_{w,t}. \qquad (4.91)$$
$G_t$ is the Kalman gain and the matrices $C_{t|t}$ and $C_{t+1|t}$ are the covariance matrices of the Kalman filtered state vector $p_{t|t}$ and the predictor $p_{t+1|t}$, respectively. If the observation model is linearized at the current value of the predictor, $p_{*,t} = p_{t|t-1}$, eq. (4.88) becomes
$$p_{t|t} = p_{t|t-1} + G_t\left(g_t - Z(p_{t|t-1})\right). \qquad (4.92)$$
Eq. (4.88) corrects the parameters $p$ of the model according to the measured data $g$. Eq. (4.90) projects the parameters to the next time point using a priori knowledge incorporated in the state transition matrix $S_t$.
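One cycle of eqs. (4.87)-(4.91), with the linearization point chosen as $p_{*,t} = p_{t|t-1}$ so that the update takes the form of eq. (4.92), can be sketched as follows; the scalar observation model $Z(p) = p^2$ and all parameter values are illustrative assumptions.

```python
import numpy as np

def ekf_cycle(p_pred, C_pred, g, Z_fun, jac, C_n, S, C_w):
    """One extended Kalman filter cycle, eqs. (4.87)-(4.91), linearising
    the forward model at the current predictor (p_* = p_{t|t-1})."""
    J = jac(p_pred)                                   # J_{p,t}(p_*)
    G = C_pred @ J.T @ np.linalg.inv(J @ C_pred @ J.T + C_n)
    p_upd = p_pred + G @ (g - Z_fun(p_pred))          # eq. (4.92)
    C_upd = C_pred - G @ J @ C_pred                   # eq. (4.89)
    return S @ p_upd, S @ C_upd @ S.T + C_w, p_upd    # eqs. (4.90)-(4.91)

# Illustrative scalar nonlinear observation Z(p) = p^2 of a static state
Z_fun = lambda p: p ** 2
jac = lambda p: np.array([[2.0 * p[0]]])
S, C_w, C_n = np.eye(1), 1e-6 * np.eye(1), 1e-2 * np.eye(1)
p_pred, C_pred = np.array([1.5]), np.eye(1)
g = np.array([4.0])                                   # consistent with p = 2
for _ in range(30):                                   # repeat the measurement
    p_pred, C_pred, p_upd = ekf_cycle(p_pred, C_pred, g, Z_fun, jac,
                                      C_n, S, C_w)
```

With the identity transition matrix and a repeated measurement, the cycle behaves like a damped iterative linearisation and the state estimate approaches the value consistent with the data.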
4.7.3 Fixed interval smoother
The extended Kalman filter algorithm can be used in real time processing, as it only uses current and past measurements. In the case where data does not have to be processed in real time, estimates can be calculated from all measurements. The estimates from the Kalman filter algorithm can be further processed using the fixed interval smoother algorithm [4, pp. 187-190]:
$$X_{t-1} = C_{t-1|t-1}\,S_{t-1}^T\,C_{t|t-1}^{-1} \qquad (4.93)$$
$$p_{t-1|T} = p_{t-1|t-1} + X_{t-1}\left(p_{t|T} - p_{t|t-1}\right). \qquad (4.94)$$
4.8 Discussion
In this chapter various approaches to minimization were presented. Optimization is still an active topic of research, and we have only covered the area that is relevant to the work presented in this thesis. Least squares estimation gives an introduction to the principles of optimization methods. Standard Newton-type algorithms typically include a step length parameter $s_k$, which requires a line search at each iteration. In addition, the regularization parameter $\lambda$ needs to be defined. Trust region approaches, e.g. the Levenberg-Marquardt method, are robust in most cases and require neither the step length nor the regularization parameter to be defined.
Using the method of Tikhonov, constraints can be incorporated into the problem with the use of penalty functions. In the original formulation of Tikhonov the constraints aimed to replace the inverse $(Z^T Z)^{-1}$, which is often singular, with a well-behaved bounded inverse $(Z^T Z + \lambda\tilde{P})^{-1}$. These penalty functions are a representation of our prior knowledge of the process governing a particular problem. Tikhonov regularization can be seen from the stochastic point of view as a maximum a posteriori estimate. Deterministic methods in general make many assumptions, for example on the distribution of noise in a process. Statistical estimation, on the other hand, offers the ability to express the problem with stochastic assumptions about the distribution of data and noise. The focus in deterministic methods is to obtain as much information as possible from measured data by building an exact forward model with some basic assumptions about the noise distribution. In stochastic methods the aim is to accurately incorporate a priori information about the sought solution $p$ and the noise statistics. The uncertainty in the accuracy of the measurements and the forward model is expressed in probability density functions.
Kalman filters have the machinery to estimate a temporal process with much more control of the error distributions, when compared to standard Tikhonov regularization. Kalman filters also have a built-in matrix $S_t$ for progressing the parameters of the model ahead in time. The extended Kalman filter algorithm will be used in this thesis for the solution of the dynamic shape estimation problem in §8. In the absence of noise both methods reduce to the Moore-Penrose inverse, as can be seen in §4.7.1, eq. (4.77). Setting the weighting matrix $L_w$ in the Tikhonov method equal to the estimate error covariance matrix $C_{t|t}^{1/2}$ shows the close relation between the two methods. In the case of Tikhonov, $L_w$ is typically set as a diagonal matrix, representing the variances of the parameters of the model. The estimate error covariance matrix $C_{t|t}$ has off-diagonal values representing the covariances between different parameters of the model, which change in time. In this sense, it can be said that Kalman filters offer a formula for evolving the weighting (or covariance) matrix in time. A derivation of Newton-type methods from a Wiener-Kolmogorov filtering perspective has been discussed in [51].
This chapter concludes the discussion on the background of this thesis. In the following
chapters we present methods based on numerical optimisation techniques for the reconstruction
of images and shapes. We begin with the image reconstruction method in the next chapter.
Chapter 5
Image reconstruction method
5.1 Introduction
Standard techniques for reconstructing images from tomographic data, such as filtered backprojection and gridding, fail in limited data cases, as they assume that the data set is complete. Recent results have been very promising in solving problems with limited data. In [151] and [96] Kolehmainen et al. used statistical analysis to reconstruct images in dental radiology. They chose to minimize an approximation to the total variation functional, because of the difficulty of calculating its derivatives. More examples of the Bayesian approach in optimisation can be found in [146], [69]. Candès et al. [22], [23] presented exact reconstructions of simulated phantoms using very limited data (22 radial profiles). An application of the theories presented in [22], [23] incorporating the k-t approach was presented in [112] for the case of cardiac MRI. Candès has also presented methods for image reconstruction using curvelets [21], [20]. Curvelets are a particular application of wavelet analysis to the Radon transform. Ye et al. [187] used a level set methodology to reconstruct images from randomly undersampled Fourier data. The minimisation process was applied to a least-squares functional. Yu and Fessler [188] presented a level set method for the reconstruction of images in PET.
In this chapter we present a method for the reconstruction of images from the limited data sets typically encountered in dynamic imaging. These limited data sets have the benefit that data for each time frame can be collected very fast; thus, motion of the imaged object during the collection of data is reduced significantly. The presented method does not make any assumptions about the completeness of the data set. The aim is to minimize the difference between predicted data and measured data. To achieve this, the image reconstruction is formulated as a least squares problem, as will be explained in the following sections.
5.2 Forward problem
We approximate the image $f(r)$ with local basis functions as in eq. (4.17):
$$\tilde{f} = B p_\zeta, \qquad (5.1)$$
where $r \in \mathbb{R}^2$ is a vector of the form $r = \{r_x, r_y\}$, $B_k$ is a basis function, $p_\zeta \in P$ is the parameter vector and $N_\zeta$ is the total number of basis functions used in a given grid.
The choice of basis functions was based on their performance in terms of image quality in the results of [106] and [147]. The Kaiser-Bessel radially symmetric basis functions, commonly referred to as blobs, are of the following type:
$$b(r) = \frac{1}{I_m(\alpha)}\left(\sqrt{1 - (d/a)^2}\right)^m I_m\left(\alpha\sqrt{1 - (d/a)^2}\right), \qquad (5.2)$$
where $I_m$ is the modified Bessel function of the first kind, $m$ is the degree, $a$ is the support, $d$ is the distance from the center of the basis function and $\alpha$ is a shape parameter. Optimal combinations of the shape parameter $\alpha$ and the support can be found in [147] and [61].
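Eq. (5.2) can be sketched for the special case of degree $m = 0$, for which the modified Bessel function $I_0$ is available directly in NumPy as np.i0; the support and shape parameter values below are illustrative choices, not the optimal combinations of [147], [61].

```python
import numpy as np

def kaiser_bessel_blob(d, a=2.0, alpha=10.4):
    """Kaiser-Bessel blob of eq. (5.2) for degree m = 0. d is the distance
    from the blob centre, a the support radius and alpha the shape
    parameter; outside the support the blob is identically zero."""
    d = np.asarray(d, dtype=float)
    root = np.sqrt(np.clip(1.0 - (d / a) ** 2, 0.0, None))
    return np.where(np.abs(d) <= a, np.i0(alpha * root) / np.i0(alpha), 0.0)

r = np.linspace(-3.0, 3.0, 121)          # radial samples through the centre
profile = kaiser_bessel_blob(r)
```

The profile is radially symmetric, peaks at 1 at the centre and has compact support, which is what makes the analytic Fourier and Radon expressions below tractable.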
Figure 5.1: Radon data. A sinogram with 8 projections, each with 185 line integrals.
The projection data $g = \mathcal{R}(f) \in \mathbb{R}^n$ (fig. 5.1) will then be equal to
$$g = \mathcal{R} B p_\zeta, \qquad (5.3)$$
where $p_\zeta \in \mathbb{R}^{N_\zeta}$ are the unknown blob coefficients and $B \in \mathbb{R}^{N_r \times N_\zeta}$ is the basis function matrix, where $N_r$ is the total number of pixels. Each column of matrix $B$ is the image vector of one basis function. A discrete method for the calculation of the system matrix $J = \mathcal{R} B$ (fig. 5.3) is to create the image of each blob and then take its Radon transform. The
Figure 5.2: Radial profile of the Kaiser-Bessel blob in Fourier space (left) and Radon space (right).
matrix can also be calculated analytically in both the Fourier and Radon domains. The Kaiser-Bessel blobs remain radially symmetric in both cases. The derivation of the Fourier transform of the basis functions uses the Hankel transform, which is equivalent to the Fourier transform for a rotationally symmetric function. The analytic Fourier formula (fig. 5.2 (left)) is given in [106], [61]:
$$b_F(k) = \frac{a^k \alpha^m (2\pi)^{k/2}\,J_{m+k/2}(z)}{I_m(\alpha)\,z^{m+k/2}}, \quad \text{with } z = \sqrt{(2\pi a\|k\|)^2 - \alpha^2}, \qquad (5.4)$$
where $J_m$ is the $m$th degree Bessel function of the first kind and $k$ is the dimension of the space.
The Radon transform of the basis functions (fig. 5.2 (right)) is calculated with the use of the central slice theorem, and its formula is given in [106], [62]:
$$b_R(t) = a\,\frac{I_{m+1/2}(\alpha)}{I_m(\alpha)}\sqrt{\frac{2\pi}{\alpha}}\;b_{m+1/2}(t). \qquad (5.5)$$
The linear system in eq. (5.3) is typically non-square and a unique solution is unobtainable. We use the least squares functional to obtain a solution of minimum norm:
$$p_{\zeta,\min} = \arg\min_{p_\zeta} \|g - J p_\zeta\|_2^2. \qquad (5.6)$$
5.3 Inverse problem: Direct solution
5.3.1 Least squares estimation
A direct solution for the minimization problem of eq. (5.6) is to use the Moore-Penrose pseudoinverse, which gives the following solution:
Figure 5.3: The system matrix J. Each column corresponds to the vectorised basis function in the Radon space.
$$p_\zeta = (J^T J)^{-1} J^T g. \qquad (5.7)$$
We test this method with the Shepp-Logan phantom [150] in fig. 5.4. The resolution of the image is 128 × 128 pixels. Data is simulated by taking the Radon transform at 8 angles. Images are reconstructed using filtered backprojection and the proposed method (fig. 5.5). To compare images numerically, we use the relative mean square error $rms = \|f_g - f\|_2^2 / \|f_g\|_2^2$, where $f_g$ and $f$ are the ground truth and predicted vectorised images.
Figure 5.4: Ground truth image. Shepp-Logan phantom.
Figure 5.5: 8 projections. (Left) Filtered backprojection, rms = 1.2521. (Right) Least squares reconstruction, 8 × 8 grid, rms = 0.73092.
5.3.2 Damped least squares estimation
The use of such a small number of basis functions results in very poor reconstructions. Increasing this number directly degrades the conditioning of the matrix $J^T J$, making it practically singular. To stabilize this inversion we augment the effectively underdetermined system into an overdetermined one with the incorporation of a priori information:
$$\begin{pmatrix} J \\ \sqrt{\lambda}\,I \end{pmatrix} p = \begin{pmatrix} g \\ 0 \end{pmatrix}. \qquad (5.8)$$
This corresponds to the ordinary Tikhonov regularization functional
$$\Phi(p_\zeta) = \|g - J p_\zeta\|_2^2 + \lambda\|p_\zeta\|_2^2, \qquad (5.9)$$
which has the following minimizer:
$$p_\zeta = (J^T J + \lambda I)^{-1} J^T g. \qquad (5.10)$$
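The equivalence between the damped least squares solution (5.10) and the augmented system (5.8) can be checked numerically; the random stand-in for the system matrix $J$ and the value of $\lambda$ are illustrative assumptions (the $\sqrt{\lambda}$ scaling of the identity block follows the stacked form of eq. (4.54)).

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative stand-in for the system matrix J = RB: far more basis
# function coefficients than measurements, as in the sparse-angle setting.
n, m = 30, 80
J = rng.standard_normal((n, m))
g = J @ rng.standard_normal(m)
lam = 1e-3

# Damped least squares, eq. (5.10)
p_direct = np.linalg.solve(J.T @ J + lam * np.eye(m), J.T @ g)

# Equivalent augmented system of eq. (5.8), solved as an ordinary
# least squares problem
A = np.vstack([J, np.sqrt(lam) * np.eye(m)])
b = np.concatenate([g, np.zeros(m)])
p_stacked = np.linalg.lstsq(A, b, rcond=None)[0]
```

Both routes minimize the same functional (5.9); the stacked formulation avoids forming $J^T J$ explicitly, which is numerically preferable when $J$ is ill-conditioned.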
Using the damped least squares method, the grid resolution can be increased to 64 × 64. As seen in fig. 5.6, the reconstructed image is much more detailed than the least squares version in fig. 5.5. The increase in grid resolution produces angular artifacts, which are common with filtered backprojection methods. In the next section we discuss iterative reconstruction methods, which can significantly reduce these artifacts.
Figure 5.6: 8 projections. (Left) Filtered backprojection, rms = 1.2521. (Right) Damped least squares reconstruction, 64 × 64 grid, rms = 0.61756.
5.4 Inverse problem: Iterative solution
Using the damped least squares solution as an initial estimate, we can further increase the quality of the reconstructions by solving the following constrained optimization problem:
$$p_{\min} = \arg\min_{p_\zeta} TV(p_\zeta), \quad \text{subject to } \|g - J p_\zeta\|_2^2 = \sigma^2, \qquad (5.11)$$
where $TV$ is the total variation functional introduced in §4.6 and $\sigma$ is a known noise level. We solve the equivalent Tikhonov regularization problem by minimizing the following objective functional:
$$\Phi(p_\zeta) = \|g - J p_\zeta\|_2^2 + \lambda\,TV(p_\zeta). \qquad (5.12)$$
The minimization of eq. (5.12) can be thought of as a penalty approach to the constrained problem [111]. Well-posedness and convergence of this problem have been proved in [1].

The TV functional favors solutions with small total variation while still allowing discontinuities in the solution. It tends to preserve edge information, while favoring solutions with smaller derivatives. A discussion of the edge-preserving properties of the TV functional in statistical estimation can be found in [102]. The total variation functional was originally used for image denoising tasks [143], [177], [27], [108] and [36]. The method has been applied to tomographic reconstruction problems with both full and limited data. In [28] the total variation functional was used for reconstruction from full data; their algorithm was named ARTUR. A generalization of ARTUR was discussed in [34], where the authors reconstructed a phantom from limited data. From the Bayesian point of view, the TV approach of Kolehmainen et al [96], with its theoretical background given in [151], was applied to both limited sparse and limited view data; they reconstructed 2D and 3D images from dental X-ray data. Persson et al [135] described an expectation maximization (EM) algorithm and applied their method to 3D reconstructions of cardiac phantoms from limited view data.
The TV functional is defined as in eq. (4.50)

TV(p_\zeta) = \int_\Omega |\nabla p_\zeta| \, dp_\zeta. \quad (5.13)
The presence of the absolute value function makes the TV functional non-differentiable at the origin. We therefore approximate the absolute value with the continuous function \psi(t) = \sqrt{t^2 + \beta^2}, as seen in fig. 5.7. Other options for this function can be found in [174].

Figure 5.7: The solid line represents the absolute value function |t| and the dashed line the approximation \psi(t) = \sqrt{t^2 + \beta^2} with β = 0.1.

The approximated TV functional is

TV(p_\zeta) = \int_\Omega \psi(|\nabla p_\zeta|) \, dp_\zeta. \quad (5.14)
Setting the gradient magnitude |\nabla p_\zeta| = \sqrt{(D_x p_\zeta)^2 + (D_y p_\zeta)^2}, where D_x and D_y are the x and y differential operators, we obtain the discrete version

TV(p_\zeta) = \sum_{i=1}^{N_{\zeta x}} \sum_{j=1}^{N_{\zeta y}} \sqrt{(D_{x,ij}\, p_\zeta)^2 + (D_{y,ij}\, p_\zeta)^2 + \beta^2}, \quad (5.15)

where N_{\zeta x} and N_{\zeta y} are the total numbers of basis functions in the x and y directions. A variety of choices for the differential operators can be found in [83] and [137].
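The discrete functional of eq. (5.15) is straightforward to evaluate. The sketch below assumes forward differences with a replicated boundary for D_x and D_y, which is only one of the valid operator choices mentioned above:

```python
import numpy as np

def tv_discrete(p, beta=0.1):
    """Smoothed discrete total variation of eq. (5.15) for a 2D grid p,
    using forward differences with a replicated last row/column."""
    dx = np.diff(p, axis=0, append=p[-1:, :])   # D_x p
    dy = np.diff(p, axis=1, append=p[:, -1:])   # D_y p
    return np.sum(np.sqrt(dx**2 + dy**2 + beta**2))

# A flat image contributes only the beta term: N * beta in total
flat = np.zeros((8, 8))
assert abs(tv_discrete(flat) - 64 * 0.1) < 1e-12

# A blocky image has larger TV, driven entirely by its edges
block = np.zeros((8, 8))
block[2:6, 2:6] = 1.0
assert tv_discrete(block) > tv_discrete(flat)
```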
It is now clear that the objective functional Φ in eq. (5.12) is nonlinear, since the TV penalty term is nonlinear. The derivative of the TV penalty functional is

TV'(p_\zeta) = D_x^T \, \mathrm{diag}(\psi'(\nabla p_\zeta)) \, D_x + D_y^T \, \mathrm{diag}(\psi'(\nabla p_\zeta)) \, D_y, \quad (5.16)

where \psi'(\nabla p_\zeta) denotes the derivative of the absolute value approximation. The TV'(p_\zeta) \in \mathbb{R}^{N_\zeta \times N_\zeta} matrix is block tridiagonal, with the blocks on the diagonal being tridiagonal matrices and the off-diagonal blocks being diagonal matrices. This is a sparse matrix, as shown in fig. 5.8.
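The structure of TV'(p_ζ) in eq. (5.16) can be verified numerically. The sketch below assembles the matrix on a small 8×8 grid, assuming forward-difference operators and the lagged diffusivity weighting 1/√(|∇p_ζ|² + β²) for the diagonal term (the precise form of the ψ' weighting is an assumption here):

```python
import numpy as np

n = 8                                   # 8x8 grid of basis functions -> N_zeta = 64
I = np.eye(n)
D1 = np.diag(-np.ones(n)) + np.diag(np.ones(n - 1), 1)   # 1D forward difference
Dx = np.kron(D1, I)                     # x-differential operator on the vectorised grid
Dy = np.kron(I, D1)                     # y-differential operator

rng = np.random.default_rng(1)
p = rng.standard_normal(n * n)
beta = 0.1
w = 1.0 / np.sqrt((Dx @ p) ** 2 + (Dy @ p) ** 2 + beta**2)   # diffusivity weights

# TV'(p) = Dx^T diag(w) Dx + Dy^T diag(w) Dy, cf. eq. (5.16)
TVp = Dx.T @ np.diag(w) @ Dx + Dy.T @ np.diag(w) @ Dy

assert TVp.shape == (n * n, n * n)
assert np.allclose(TVp, TVp.T)                 # symmetric
nnz = np.count_nonzero(np.abs(TVp) > 1e-14)
assert nnz < 6 * n * n                         # only a few bands: block tridiagonal
```

Only same-row and same-column neighbours are coupled, which is exactly the block tridiagonal pattern described above.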
Figure 5.8: The TV' block tridiagonal matrix.
The next sections present two methods for the solution of the nonlinear problem in eq. (5.12).
5.4.1 Lagged diffusivity ﬁxed point iteration
Applying the Euler-Lagrange equation to the unconstrained optimization problem in eq. (5.12), we obtain a nonlinear partial differential equation

g(p_\zeta) = -\lambda \nabla \cdot \left( \frac{\nabla p_\zeta}{\sqrt{|\nabla p_\zeta|^2 + \beta^2}} \right) + Z^*(Z p_\zeta - g) = 0, \quad (5.17)
where Z^* denotes the adjoint operator of Z. Rudin, Osher and Fatemi [143] solve this system of nonlinear equations using a time marching method. It results in a gradient descent method, or steepest descent with the addition of a line search algorithm for globalization. The slow convergence of the steepest descent method is not ideal for many applications. In this section we use the lagged diffusivity fixed point iteration of Vogel [176]. It is based on successive linearisations of the objective functional using a quasi-Newton approach. It can be considered a special case of the ARTUR algorithm [28], which has also been applied to the limited data case in [34]. A complete derivation of the fixed point algorithm can be found in [176] and [177].
From another point of view, the constrained problem can be thought of as Tikhonov regularization (§4.6) with a nonlinear penalty functional. Unconstrained problems are in general easier to handle from a computational standpoint. Both formulations (constrained and unconstrained) produce the same results as long as the parameters are selected appropriately [176]. The iteration formula resembles the nonlinear Tikhonov regularization iteration
p_{k+1} = p_k + (Z^T Z + \lambda\, TV'_k)^{-1} \left[ Z^T (g - Z p_k) - \lambda\, TV'_k\, p_k \right], \quad (5.18)
where TV'_k = TV'(p_k) denotes the derivative of the total variation functional evaluated at the parameters p_k of the kth iteration. Note that we have dropped the second derivative in the Hessian (Z^T Z + \lambda\, TV'(p_k)) of the objective functional, which results in a quasi-Newton approach.
Figure 5.9: 8 projections. (Left) Initial (damped least squares) rms = 0.61756. (Right) Fixed
point reconstruction rms = 0.5975.
Figure 5.10: 8 projections. (Left) rms error over iteration plot. (Right) Gradient norm plot.
The results of the fixed point method are presented in figs. 5.9 and 5.10. As seen in the right plot of fig. 5.10, the norm of the gradient has dropped most of the way by about 5 iterations. The decrease after that is very small, yet not insignificant. Examining the rms plot (fig. 5.10 (Left)), the error in the image space reduces significantly up to about 15 iterations.
5.4.2 Primaldual Newton method
Another option for the solution of the constrained minimization problem has been proposed by Chan et al [27]. Their approach is based on a full-Newton scheme. The highly nonlinear nature caused by the TV functional is linearized more efficiently with the introduction of a dual variable. We introduce

a = \frac{\nabla p_\zeta}{\sqrt{|\nabla p_\zeta|^2 + \beta^2}}

as the new variable. This transforms the problem of eq. (5.17) into a system of nonlinear equations

-\lambda \nabla \cdot a + Z^*(Z p_\zeta - g) = 0
a \sqrt{|\nabla p_\zeta|^2 + \beta^2} - \nabla p_\zeta = 0. \quad (5.19)
For the 2D case examined in this chapter, we split the dual variable into its x and y components, a = \{u, v\}. Setting B(\nabla p_\zeta) = \mathrm{diag}(\psi(|\nabla p_\zeta|)), the system of (5.19) is now defined as

M(u, v, p_\zeta) = \begin{bmatrix} B(\nabla p_\zeta)\, u - D_x p_\zeta \\ B(\nabla p_\zeta)\, v - D_y p_\zeta \\ \lambda D_x^T u + \lambda D_y^T v + Z^T (Z p_\zeta - g) \end{bmatrix} = 0 \quad (5.20)
and its derivative

M'(u, v, p_\zeta) = \begin{bmatrix} B(\nabla p_\zeta) & 0 & B'(\nabla p_\zeta)\, u - D_x \\ 0 & B(\nabla p_\zeta) & B'(\nabla p_\zeta)\, v - D_y \\ \lambda D_x^T & \lambda D_y^T & Z^T Z \end{bmatrix}. \quad (5.21)
The solution of the system (5.20) by Newton's method requires the iterative solution of

\begin{bmatrix} B_k & 0 & B'_k u_k - D_x \\ 0 & B_k & B'_k v_k - D_y \\ \lambda D_x^T & \lambda D_y^T & Z^T Z \end{bmatrix} \begin{bmatrix} \Delta u \\ \Delta v \\ \Delta p \end{bmatrix} = - \begin{bmatrix} M_u \\ M_v \\ M_p^k \end{bmatrix}, \quad (5.22)
where p_k, u_k, v_k are the parameter and dual vectors at the kth iteration, B_k = B(\nabla p_k), and M_u, M_v and M_p^k denote the u, v and p_\zeta components of M at the kth iteration. We convert the system (5.22) to block upper triangular form by block row reduction:
\begin{bmatrix} B_k & 0 & -E_{11} D_x - E_{12} D_y \\ 0 & B_k & -E_{21} D_x - E_{22} D_y \\ 0 & 0 & Z^T Z + \lambda H \end{bmatrix} \begin{bmatrix} \Delta u \\ \Delta v \\ \Delta p \end{bmatrix} = - \begin{bmatrix} M_u \\ M_v \\ \mathrm{grad}_k \end{bmatrix}, \quad (5.23)
where

E_{11} = \mathrm{diag}(w .* D_x p_k .* u_k)
E_{12} = \mathrm{diag}(w .* D_y p_k .* u_k)
E_{21} = \mathrm{diag}(w .* D_x p_k .* v_k)
E_{22} = \mathrm{diag}(w .* D_y p_k .* v_k),
the discretised diffusion operator is

H = D_x^T B_k E_{11} D_x + D_x^T B_k E_{12} D_y + D_y^T B_k E_{21} D_x + D_y^T B_k E_{22} D_y \quad (5.24)
and \mathrm{grad}_k = Z^T (Z p_k - g) + \lambda\, TV'_k\, p_k is the gradient of the objective functional at the kth iteration. From the system (5.23) we obtain first the update for p_\zeta,

\Delta p = -(Z^T Z + \lambda H)^{-1}\, \mathrm{grad}_k, \quad (5.25)
and the updated parameter vector

p_{k+1} = p_k + \Delta p. \quad (5.26)
The updates for the dual variables are

\Delta u = -u_k + B_k \left( D_x p_k + (E_{11} D_x + E_{12} D_y)\, \Delta p \right) \quad (5.27)
\Delta v = -v_k + B_k \left( D_y p_k + (E_{21} D_x + E_{22} D_y)\, \Delta p \right). \quad (5.28)
The dual variables have to be restricted to lie within the conjugate set C^* = \{ y \in \mathbb{R}^n \mid |y| \le 1 \}, i.e. the unit ball, to ensure convergence. This can be done by a line search method as follows:

s_k = \max \{\, 0 \le s \le 1 \mid (u_k + s \Delta u,\; v_k + s \Delta v) \in C^* \,\}. \quad (5.29)
With the step length s_k calculated, we can update the dual variables

u_{k+1} = u_k + s_k \Delta u
v_{k+1} = v_k + s_k \Delta v.
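The step length rule of eq. (5.29) can be implemented, for example, by bisection on s; the helper name and tolerance below are illustrative:

```python
import numpy as np

def dual_step_length(u, v, du, dv, tol=1e-3):
    """Largest s in [0, 1] keeping (u + s*du, v + s*dv) inside the unit
    ball |(u_i, v_i)| <= 1 at every node, found by bisection (eq. (5.29))."""
    def feasible(s):
        return np.all((u + s * du) ** 2 + (v + s * dv) ** 2 <= 1.0 + 1e-12)
    if feasible(1.0):
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

u = np.array([0.0, 0.8]); v = np.array([0.0, 0.0])
du = np.array([0.5, 1.0]); dv = np.array([0.0, 0.0])
s = dual_step_length(u, v, du, dv)
# The second node reaches |u| = 1 at s = 0.2, which limits the step
assert abs(s - 0.2) < 5e-3
```

A per-node closed form (solving a quadratic in s) exists as well; bisection is simply the shortest safe implementation.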
For a more detailed discussion of the primal-dual method refer to [27] and [174]. The reconstructed images are presented in fig. 5.11.
Figure 5.11: 8 projections. (Left) Initial (damped least squares) rms = 0.61756. (Right) Primal-dual reconstruction rms = 0.5975.
Figure 5.12: 8 projections. (Left) RMS error over iteration plot. (Right) Gradient norm plot.
The primal-dual method produces the same reconstruction (fig. 5.11) as the fixed point method (fig. 5.9); it is the convergence speed that is of interest. While the gradient norm plots (figs. 5.10 and 5.12 (Right)) of the two methods are very similar, the rms image error of the primal-dual method in fig. 5.12 converges in 5 iterations. In the next section the method is further developed to incorporate constraints on the maximal and minimal intensities expected in the reconstructed images.
5.4.3 Constrained optimisation
Further to the previous results, we can enforce constraints on the intensity values of the parameters. We can incorporate the a priori knowledge that the image will not contain negative values. The constrained optimization problem to be solved is

\min_{p_\zeta} \Phi(p_\zeta) \quad \text{subject to} \quad c(p_\zeta) \ge 0. \quad (5.30)
For the solution of this problem we modify the primal-dual method with a projected method [174], [111], [49]. We define an active set as follows:

A(p) = \{\, i \mid 0 \le p_i \le \epsilon \,\}, \quad (5.31)

where ε is a small number, typically reduced as the iterations progress. It can be based on statistical analysis or assumptions about the noise in the system. The active parameters are going to be on the boundaries of the feasible region. We reduce the Hessian as follows:
H(p)_{ij} = \begin{cases} \delta_{ij} & \text{if } i \in A(p) \text{ or } j \in A(p) \\ \dfrac{\partial^2 \Phi}{\partial p_i\, \partial p_j} & \text{otherwise.} \end{cases} \quad (5.32)
We obtain a solution from the linear system of equations and update the parameter vector

p_{k+1} = T(p_k + \Delta p), \quad (5.33)
where T is a projection operator defined as

T(p)_i = \begin{cases} p_i & \text{if } p_i \ge 0 \\ 0 & \text{if } p_i < 0. \end{cases} \quad (5.34)
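The active set, reduced Hessian and projection of eqs. (5.31)-(5.34) can be sketched as follows (function names and test values are illustrative):

```python
import numpy as np

def active_set(p, eps):
    """Indices treated as active (at the lower bound), eq. (5.31)."""
    return np.flatnonzero((p >= 0.0) & (p <= eps))

def project(p):
    """Projection operator T of eq. (5.34): clip negatives to zero."""
    return np.maximum(p, 0.0)

def reduce_hessian(H, active):
    """Replace rows/columns of active parameters with identity, eq. (5.32)."""
    Hr = H.copy()
    Hr[active, :] = 0.0
    Hr[:, active] = 0.0
    Hr[active, active] = 1.0
    return Hr

p = np.array([-0.2, 0.005, 0.5, 1.3])
act = active_set(p, eps=0.01)
assert list(act) == [1]                 # only the near-zero entry is active
assert np.all(project(p) >= 0.0)

H = np.full((4, 4), 2.0)
Hr = reduce_hessian(H, act)
assert Hr[1, 1] == 1.0 and Hr[1, 2] == 0.0 and Hr[0, 3] == 2.0
```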
An alternative to projection methods is the family of interior and exterior point methods [45]. An interior point method has been used in [108], and Kolehmainen et al [96] used an exterior point method to apply the positivity constraint.
Using the projected method, as described, we can also enforce an upper bound constraint
by replacing 0 in the above equations with a maximal value and changing the order of the
inequalities in eq. (5.34). This maximal value can be obtained in cardiac MRI with fair accuracy,
by building a time averaged image, which is fully sampled.
Figure 5.13: 8 projections. (Left) Initial (damped least squares) rms = 0.61756. (Right) Projected primal-dual reconstruction rms = 0.4833.
Figure 5.14: 8 projections. (Left) rms error over iteration plot. (Right) Gradient norm plot.
The application of intensity inequality constraints, combined with the total variation penalty functional, greatly reduces artifacts, as seen in fig. 5.13. As the constraints are enforced, the gradient norm (fig. 5.14 (Right)) increases while the iterate climbs out of an infeasible solution region of negative and very high intensity values. In the next section we apply the method to simulated and measured cardiac data sets.
5.5 Results
In this section we apply the primal-dual method with both constraints enforced by an active set method. The grid resolution was fixed, as previously, at 64×64 and the reconstructed images at 128×128. Results are obtained for various degrees of undersampling. For comparison we reconstruct images using filtered backprojection and gridding, currently considered the standard in image reconstruction from tomographic data.
5.5.1 Simulated cardiac data
Data was simulated by taking the Radon transform at 4, 8, 13 and 16 equispaced angles of the fully sampled cardiac image in fig. 5.15. This acts as a ground truth image for the numerical comparison, in terms of rms, between the filtered backprojection and the primal-dual method.
Figure 5.15: Ground truth image. Fully sampled cardiac image.
The reconstructed images for both methods are shown in fig. 5.16 for various degrees of radial undersampling; fig. 5.17 shows the corresponding rms errors. In the case of 4 profiles, the filtered backprojection produces an image dominated by streaks, making it practically impossible to distinguish the location and shape of the heart, whereas with the primal-dual method some gross features of the anatomy are visible. As the number of profiles increases, these features become clearer in both images. The corruption of the filtered backprojection images by angular artifacts deforms small anatomical features completely and in some locations introduces new features which are not present in the ground truth image. In the primal-dual reconstruction the streaky artifacts are absent. While general features are represented well, even some of the finer ones are also visible. On this finer scale the primal-dual method does not introduce new signals or deform their apparent shape; details that are not reconstructed are typically blurred.
Figure 5.16: Simulated data reconstructions. The numbers on the left column indicate the number of profiles. (Left) Filtered backprojection. (Right) Projected primal-dual reconstruction.
5.5.2 Measured data from MRI
ECG gated data was acquired from a healthy volunteer. A total of 25 phases, each with 208 radial profiles, were collected using a five-element array receive coil. The data used in this
Figure 5.17: (Left) Simulated cardiac rms plot over the number of profiles. The dashed line represents the filtered backprojection method and the solid line the primal-dual method. (Right) Comparison of central lines of the ground truth and reconstructed images for the case of 8 radial profiles.
experiment was generated from the first phase of the acquired data by undersampling to various degrees. This was done by using every nth profile, with n = 208/N_under, where N_under is the total number of profiles in the undersampled set. Using 8 radial profiles results in a 26-fold acceleration compared to the original radial acquisition. For the case of real-time MRI, a total of about 200 radial profiles can be collected within a single heart beat using a fast steady-state free precession sequence. To transform the data into Radon space, we applied a 1D inverse Fourier transform along each radial profile, according to the central slice theorem.
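The profile selection described above amounts to a simple stride over the fully sampled set:

```python
def undersample_indices(n_total, n_under):
    """Keep every n-th profile, n = n_total // n_under."""
    n = n_total // n_under
    return list(range(0, n_total, n))[:n_under]

idx = undersample_indices(208, 8)
assert len(idx) == 8
assert idx == [0, 26, 52, 78, 104, 130, 156, 182]
# 8 of 208 profiles corresponds to a 26-fold acceleration
assert 208 // 8 == 26
```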
The quality of the reconstructions is compared using normalized images. First we investigate the single coil reconstructions. The fully sampled gridding reconstruction is shown in fig. 5.19. This was used as ground truth data, and undersampled reconstructions with the gridding method from §2.3 are compared with primal-dual ones in fig. 5.18. In fig. 5.20 the rms errors are presented for both methods at different degrees of undersampling.
Single coil reconstructions
Figure 5.18: Coil 1 reconstructions from measured data. The numbers on the left column indicate the number of profiles. (Left) Gridding. (Right) Projected primal-dual reconstruction.
Figure 5.19: Coil 1. Fully sampled gridding reconstruction used as ground truth image.
Figure 5.20: (Left) Coil 1 rms plot over the number of profiles. The dashed line represents the gridding method and the solid line the primal-dual method. (Right) Comparison of central lines of the ground truth and reconstructed images for the case of 8 radial profiles.
Multiple coil reconstructions
To reconstruct images from multiple receive coils using the proposed method, the operator J was changed to a Radon transform followed by a multiplication with the coil sensitivity matrix. As no body coil was used for the collection of data, sensitivity matrices were calculated by dividing a time averaged image of each coil by the square root of the sum of squares of all the coil images, similarly to [139]. Time averaged images can be reconstructed with a gridding method using data from all time points in a scanning sequence in which profiles are interleaved so as to span the 180 degrees. For comparison, the gridding reconstructed single coil images were combined in a least-squares sense using the sensitivity matrices by solving the following linear system for each pixel

C_{x,y} = S_{x,y}\, f(x, y) \quad (5.35)
f(x, y) = S_{x,y}^{\dagger}\, C_{x,y},
where S_{x,y} is a vector containing all coil sensitivity values at pixel location (x, y), f is the image and C_{x,y} is the vector containing the intensity value of each coil image at (x, y). This method produces superior results when compared with the square root of the sum of squares of the coil images, and it is computationally very efficient since it can be solved per pixel. Normalised images of the proposed method and the least squares gridding method were compared with the fully sampled image (fig. 5.21), which was reconstructed using the least squares gridding approach with the sensitivity matrices described previously. The results of the LS gridding and projected primal-dual reconstructions are displayed in fig. 5.22 and the corresponding rms errors in fig. 5.23.
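For a single image unknown per pixel, the pseudo-inverse in eq. (5.35) reduces to a closed form that can be evaluated for all pixels at once. A sketch, with random sensitivities standing in for the measured ones:

```python
import numpy as np

def combine_coils(coil_images, sens):
    """Least-squares coil combination of eq. (5.35), solved per pixel.
    For one unknown per pixel, pinv(S) C reduces to (S^H C) / (S^H S).
    coil_images, sens: arrays of shape (ncoils, ny, nx)."""
    num = np.sum(np.conj(sens) * coil_images, axis=0)
    den = np.sum(np.abs(sens) ** 2, axis=0)
    return num / np.maximum(den, 1e-12)

rng = np.random.default_rng(3)
truth = rng.random((16, 16))
sens = rng.random((5, 16, 16)) + 0.1     # five-element coil sensitivities
coils = sens * truth                      # noiseless coil images
f = combine_coils(coils, sens)
assert np.allclose(f, truth, atol=1e-10)
```

On noiseless data the combination is exact; with noise it down-weights coils with low local sensitivity, which is the source of its advantage over the square root of sums of squares.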
Figure 5.21: Multiple coil. Fully sampled LS gridding reconstruction used as ground truth
image.
Figure 5.22: Multiple coil reconstructions from measured data. The numbers on the left column indicate the number of profiles. (Left) LS gridding. (Right) Projected primal-dual reconstruction.
Figure 5.23: (Left) Multiple coil rms plot over the number of profiles. The dashed line represents the LS gridding method and the solid line the primal-dual method. (Right) Comparison of central lines of the ground truth and reconstructed images for the case of 8 radial profiles.

5.6 Discussion
In this chapter we have presented direct and iterative methods for the reconstruction of images using an inverse problem approach. The total variation functional, in combination with the application of constraints, improves the quality of the reconstructed images and greatly reduces
angular artifacts. The approaches discussed were based on the direct inversion of a large sparse matrix, which is a computationally and memory demanding task. Apart from the typical argument of increasing memory capacity and processing power in computers, the direct inversion can be replaced with a more efficient iterative linear system solver, for example a conjugate gradient method [174]. Conjugate gradient methods do not require the complete matrix to be stored, thus allowing higher grid resolutions. Further improvements to the computational aspect of the minimization problem might be achieved using a multigrid scheme [175].
While the damped least squares method can be solved efficiently using techniques for sparse matrix inversion, the nonlinear methods require many more inversions due to their iterative nature. The primal-dual method was the most computationally demanding method per iteration. When compared to the fixed point method in terms of convergence speed, however, its benefits become clear, as it reaches the desired solution in approximately a third of the iterations needed by the fixed point method. The difference in convergence speed is due to the use of second order derivatives and the improved linearisation in the primal-dual method. The novel methods introduced in this chapter were compared to the standard filtered backprojection and gridding approaches. It has to be noted that both of these approaches produce very similar results in limited data cases. These methods were originally designed to deal with complete data sets, and in that case they propagate high frequency information very well. In undersampled radial MRI, high frequency information is very sparsely sampled, while sampling near the center is dense. Standard approaches can be smoothed to overcome these problems, but they tend to blur a lot of useful information as well. The iterative approach discussed in this chapter does not throw away high frequency information but attempts to reconstruct it, with the only restriction being the grid resolution; sometimes it succeeds, while at other times it produces smooth results. In the majority of cases it does not produce high frequency artifacts, such as the introduction or deformation of small structures, while greatly reducing the angular artifacts.
It was shown that the inverse problem approach produces results superior, both visually and numerically, to the standard methods in both simulated and measured data studies. For the measured data case we applied the method to both single and multiple receive coil data. It is not the aim of our method to replace existing parallel imaging technologies, but to demonstrate the feasibility of its application with multiple receive coil data; we believe that our method could potentially be combined with SENSE (§2.4.2). It is also worth noting that, while the results of this chapter were based on angular samples spanning the 180 degrees, the method can also be applied to limited view problems, where the total imaging angle is less than 180 degrees, and generally to any Fourier sampling pattern.
In the next chapter we temporarily set the problem of image reconstruction aside and focus on the direct reconstruction of shapes from measured data.
Chapter 6
Shape reconstruction method
In this chapter we present a method for the reconstruction of shapes directly from measured data. Our model-based approach is stated as a minimisation problem between the predictions of our model and the data. It has the benefit that images can be segmented from very limited data by working directly in the data space. Feng et al [65] have presented a level set method which segments objects of constant interior in simple backgrounds. A level set method for the reconstruction of surfaces was discussed in [182] and [42]; the surfaces contained volumes of homogeneous intensity in a homogeneous but separable background. In [10] and [9] Battle et al reconstructed surfaces using a Bayesian approach.

We assume in this chapter that the background and the constant interior intensity of the object are known. Our approach is based on least squares minimisation using global basis functions for the boundary of the shape. Using this direct approach, shapes can be reconstructed from very few samples, which would not be adequate to form an image and segment it with the traditional 'snake' approaches presented in §3.1.
6.1 Forward problem
A planar curve can be modelled with the use of basis functions

C(s) = \begin{bmatrix} x(s) \\ y(s) \end{bmatrix} = \sum_{n=1}^{N_\gamma} \begin{bmatrix} \gamma_n^x \, \theta_n(s) \\ \gamma_n^y \, \theta_n(s) \end{bmatrix}, \quad s \in [0, 1], \quad (6.1)
where \theta_n are periodic and differentiable basis functions, N_\gamma is the number of harmonics and \gamma_n^x, \gamma_n^y \in \mathbb{R} are the weights of \theta_n. The basis functions used are trigonometric functions [157], as described in §4.2.2.
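A curve of the form (6.1) can be evaluated as follows. The exact basis ordering is an assumption, chosen to match the parameterisation formulae given later in the chapter (with the first coefficient entering half-weighted, as noted there):

```python
import numpy as np

def theta(n, s):
    """Trigonometric basis consistent with the parameterisation formulae
    later in the chapter: sin(n*pi*s) for even n, cos((n-1)*pi*s) for odd n."""
    return np.sin(n * np.pi * s) if n % 2 == 0 else np.cos((n - 1) * np.pi * s)

def curve(gx, gy, s):
    """Evaluate C(s) of eq. (6.1); the DC term enters as gamma_1 / 2."""
    x = 0.5 * gx[0] + sum(g * theta(n, s) for n, g in enumerate(gx[1:], start=2))
    y = 0.5 * gy[0] + sum(g * theta(n, s) for n, g in enumerate(gy[1:], start=2))
    return x, y

s = np.linspace(0.0, 1.0, 1001)
# First contour of table 6.1: an ellipse-like closed curve
x, y = curve([32, 10, 0, 0, 0, 0, 0], [27, 0, 12, 0, 0, 0, 0], s)
assert abs((x.max() - x.min()) - 20.0) < 1e-9   # gamma_2 = 10 -> width 20
assert abs((y.max() - y.min()) - 24.0) < 1e-9   # gamma_3 = 12 -> height 24
```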
A global representation of the form (6.1) does not impose limitations such as convexity on the admissible domains, but it does have the drawback that it is difficult to enforce constraints such as non-self-intersection. Self-intersection occurs when the contour intercepts itself, that is, C(s_1) = C(s_2) for two distinct parametric points s_1 and s_2. This is clearly a problem, as it does not represent realistic boundaries.
So far there is no analytic way of imposing a relation on the parameters γ that prevents self-intersection. We solve this problem by detecting the self-intersection with an exhaustive search. This is feasible due to the small size of the search space, that is, the number of pixels belonging to the curve. It can be reduced further by taking into account that neighboring pixels by definition cannot intersect each other.
Given a point of self-intersection s_e, the pixels belonging to the smallest loop are removed from the curve, as seen in fig. 6.1. The remaining curve, free of self-intersections, requires
Figure 6.1: (Right) Contour with self-intersection at parametric point s_e. (Left) Corrected contour with the small loop removed.
reparameterisation, as the s points will no longer be spread evenly in the unit interval [0, 1): there will be a clear gap where the self-intersection was removed. The parametric length of the curve is reduced, and this reduction can be calculated as the parametric difference between the exit and the entry sample of the self-intersection. Because of this reduction s_r in parametric length, the corrected contour must be rescaled to again have length equal to 1. The scale factor is calculated as follows:

s_c = \frac{1}{1 - s_r}.
The rescaling of the new list for the non-self-intersecting contour is performed by multiplying each parametric difference between two consecutive samples by s_c. At the point where the self-intersection occurred, the new parametric distance is unknown; it is approximated by the mean of all parametric distances in the corrected contour. The new list, in the correct scale, is used to calculate the boundary parameters γ in a similar manner to [5].
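The rescaling step can be sketched directly from the definition of s_c:

```python
def rescale_parametric(ds_list, s_r):
    """After removing a self-intersection loop of parametric length s_r,
    rescale the remaining parametric differences so the curve again has
    total length 1, using s_c = 1 / (1 - s_r)."""
    s_c = 1.0 / (1.0 - s_r)
    return [d * s_c for d in ds_list]

# A contour with uniform differences that lost a loop of length 0.2
ds = [0.01] * 80                  # remaining samples sum to 0.8
new_ds = rescale_parametric(ds, s_r=0.2)
assert abs(sum(new_ds) - 1.0) < 1e-9
```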
The parameterisation formulae, based on the fact that the basis functions \theta_n resemble a truncated Fourier series, are the following:
If k is odd,

\gamma_k^x = 2 \int_0^1 C_x(s) \cos((k-1)\pi s) \, ds
\gamma_k^y = 2 \int_0^1 C_y(s) \cos((k-1)\pi s) \, ds.

If k is even,

\gamma_k^x = 2 \int_0^1 C_x(s) \sin(k\pi s) \, ds
\gamma_k^y = 2 \int_0^1 C_y(s) \sin(k\pi s) \, ds.
Using the trapezoidal rule for numerical integration, the discretised form of the previous equations becomes: if k is odd,

\gamma_k^x = 2 \sum_{n=1}^{N_s} \left( C_x(n) \cos((k-1)\pi s_n) + C_x(n+1) \cos((k-1)\pi s_{n+1}) \right) \frac{\Delta s_n}{2}
\gamma_k^y = 2 \sum_{n=1}^{N_s} \left( C_y(n) \cos((k-1)\pi s_n) + C_y(n+1) \cos((k-1)\pi s_{n+1}) \right) \frac{\Delta s_n}{2},

and if k is even,

\gamma_k^x = 2 \sum_{n=1}^{N_s} \left( C_x(n) \sin(k\pi s_n) + C_x(n+1) \sin(k\pi s_{n+1}) \right) \frac{\Delta s_n}{2}
\gamma_k^y = 2 \sum_{n=1}^{N_s} \left( C_y(n) \sin(k\pi s_n) + C_y(n+1) \sin(k\pi s_{n+1}) \right) \frac{\Delta s_n}{2},
where \Delta s_n is the parametric difference between the nth sample and the (n+1)th sample, and N_s is the total number of samples in the contour. Keep in mind that the DC terms in the parameters of the contours are equal to \frac{1}{2}\gamma_1^x and \frac{1}{2}\gamma_1^y. Results for the reparameterisation of curves with N_\gamma = 7 are presented in table 6.1. The parameters \tilde{\gamma} are estimated from the pixel locations and parametric differences of each sample of the contour using the above equations. The small deviations from the original parameters γ used to produce the two contours are at the sub-pixel level.
γ^x    32        10        0         0        0        0        0
γ̃^x    32        10.0049   0.0314    0.0133   0        0.0075   0
γ^y    27        0         12        0        0        0        0
γ̃^y    26.9880   0.0377    11.9939   0.0001   0.0080   0        0.0030
γ^x    32        10        0         4        0        0        0
γ̃^x    32        9.9996    0.0313    4.0152   0.0251   0.0171   0
γ^y    27        1         11        0        0        0        0
γ̃^y    26.9880   1.0356    10.9881   0.0027   0.0073   0.0015   0.0027

Table 6.1: Results for 2 reparameterised contours. γ^x and γ^y are the original coefficients of the basis functions and γ̃^x, γ̃^y are the reconstructed coefficients.
Pixel intensities are calculated depending on whether they belong to the interior, the boundary or the background. This constitutes the mapping of the model to the image space, G(p): P → X, where P is the parameter space and X is the pixel space. The forward problem is completed by the mapping of the predicted image to the data space, R: X → Y, where R is the Radon transform.

The combined mapping Z = R(G(γ)) is nonlinear and ill-posed. We seek a solution to the following Tikhonov regularization problem

\gamma_{\min} = \arg\min_{\gamma} \|g - Z(\gamma)\|_2^2 + \lambda \|\gamma\|_2^2. \quad (6.2)
6.2 Inverse problem
The nonlinear problem of eq. (6.2) can be solved by the method of Levenberg [105] and Marquardt [117], which solves a linear approximation at each iteration. The update is given by

\gamma_{k+1} = \gamma_k + (J^T J + \lambda I)^{-1} J^T (g - J\gamma_k), \quad (6.3)
where J = \partial (RG)(\gamma) / \partial \gamma is the Jacobian matrix and λ is a regularization parameter controlled by the optimization method, dependent on the objective error. The Jacobian of the operator G has been analytically calculated in [94]:
J_G = \begin{cases} \int_{s_1}^{s_2} n_y(s)\, \theta_n(s) \, ds & n \in \gamma^x \\ -\int_{s_1}^{s_2} n_x(s)\, \theta_n(s) \, ds & n \in \gamma^y, \end{cases} \quad (6.4)
6.3. Results 101
where s
1
and s
2
are the parametric intersection points of the curve with a boundary pixel (ﬁg.
6.2) and n
y
= γ
y
n
∂θ
y
n
∂s
and n
x
= −γ
x
n
∂θ
x
n
∂s
are the x and y directions of the normal to curve. The
derivatives of the basis functions θ
n
are as follows:
\frac{\partial\theta_1}{\partial s} = 0
\frac{\partial\theta_n}{\partial s} = 2\pi \frac{n}{2} \cos\!\left(2\pi \frac{n}{2} s\right), \quad \text{if } n \text{ is even}
\frac{\partial\theta_n}{\partial s} = -2\pi \frac{n-1}{2} \sin\!\left(2\pi \frac{n-1}{2} s\right), \quad \text{if } n \text{ is odd.}
Figure 6.2: Exact parametric points s_1 and s_2 of the intersection of the curve with a pixel.
Due to the very small size of a pixel, it can be assumed that \theta_n and n_x(s) are constant over the pixel; eq. (6.4) then becomes, for x,

\int_{s_1}^{s_2} n_y(s)\, \theta_n^x \, ds \approx (s_2 - s_1)\, n_y\!\left(\frac{s_1 + s_2}{2}\right) \theta_n^x\!\left(\frac{s_1 + s_2}{2}\right), \quad (6.5)
where s_1 and s_2 are the parametric points of intersection. Similarly, eq. (6.4) for y is

\int_{s_1}^{s_2} n_x(s)\, \theta_n^x \, ds \approx -(s_2 - s_1)\, n_x\!\left(\frac{s_1 + s_2}{2}\right) \theta_n^y\!\left(\frac{s_1 + s_2}{2}\right)
\int_{s_1}^{s_2} n_x(s)\, \theta_n^x \, ds \approx (s_1 - s_2)\, n_x\!\left(\frac{s_1 + s_2}{2}\right) \theta_n^y\!\left(\frac{s_1 + s_2}{2}\right). \quad (6.6)
6.3 Results
To test the shape reconstruction method we examine two cases: a background equal to zero, and the more general one where the background is nonzero but known. For both cases we assume that the interior of the shape is known and constant. Data was simulated by taking the Radon transform at 8 angles spanning the 180 degrees. For all the experiments we have used a total of 14 basis function coefficients, 7 for each dimension, for the description of the contours.
6.3.1 Simulated data
The first case, where the background contains no signal, is demonstrated with a cartoon heart of constant interior (fig. 6.3). The boundary of the object has high curvature at some locations, yet the method is capable of recovering the shape almost perfectly (fig. 6.4). The norm of the gradient of the objective functional decreases until it converges, after approximately 7 iterations, as seen in fig. 6.5. The shape is also reconstructed accurately even if the data has been corrupted with noise: reconstructions from data with 15% added Gaussian noise are shown in fig. 6.6, with the corresponding gradient norm over iteration plot in fig. 6.7. The shape reconstruction approach can be extended to multiple objects. In multiple shape reconstruction it would be desirable to impose constraints such as spring models between the contours; the trigonometric approach makes it difficult to apply such local methods, as there are no control points to which the springs can be attached and at which the forces can be calculated. To overcome the problem of the shapes intersecting each other, we fill the intersected area with the sum of both constant interiors. This imposes a heavy penalty on intersecting areas and, if the model is accurate enough, tends to resolve the intersection. We demonstrate the case of two objects in fig. 6.9; both contours are reconstructed accurately (fig. 6.9 and fig. 6.10). In figs. 6.12-6.13, results are shown for simulated data from a cardiac image. The cardiac image (fig. 6.11) was created from a fully sampled data set with a filtered backprojection method. The reconstructed shape approximates the truth well. There are two sources of error: the approximation of the interior of the shape with a constant value, and the papillary muscle, which has very low intensity values compared with the rest of the interior of the left ventricle.
Figure 6.3: Ground truth image. Cartoon heart.
Figure 6.4: Simulated data with no background. (Top Left) Initial superimposed to ground truth
image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground truth
image. (Bottom Right) Final predicted image.
Figure 6.5: Simulated data with no background. Gradient norm plot over iteration.
Figure 6.6: Simulated data with no background and 15% added Gaussian noise. (Top Left)
Initial superimposed to ground truth image. (Top Right) Initial predicted image. (Bottom Left)
Final superimposed to ground truth image. (Bottom Right) Final predicted image.
Figure 6.7: Simulated data with no background and 15% added Gaussian noise. Gradient norm
plot over iteration.
Figure 6.8: Ground truth image with multiple shapes.
Figure 6.9: Simulated data with no background. (Top Left) Initial superimposed to ground truth
image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground truth
image. (Bottom Right) Final predicted image.
Figure 6.10: Simulated data with no background. Gradient norm plot over iteration.
Figure 6.11: Ground truth image. Simulated cardiac phantom.
Figure 6.12: Simulated data with known background. (Top Left) Initial superimposed to ground
truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground
truth image. (Bottom Right) Final predicted image.
Figure 6.13: Simulated data with known background. Gradient norm plot over iteration.
6.3.2 Measured data from MRI
Data was obtained directly from an MRI scanner as described in §5.5.2. As seen in fig. 6.14, the signal in the fully sampled reconstruction contains significant noise, which appears as freckles in the fully reconstructed image. The shape reconstruction method was able to approximate the true boundary closely (fig. 6.15). The gradient norm plot is shown in fig. 6.16. In fig. 6.18 the shape was reconstructed from multiple coil data. The fully sampled reconstruction using the LS gridding method, presented in §5.5.2, is the ground truth image (fig. 6.17). The sensitivity matrices for the multiple coil reconstruction were calculated as described previously in §5.5.2.
Single coil reconstructions
Figure 6.14: Ground truth image calculated from a fully sampled single coil data set.
Figure 6.15: Measured single coil data with known background. (Top Left) Initial superimposed
to ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed
to ground truth image. (Bottom Right) Final predicted image.
Figure 6.16: Measured single coil data with known background. Gradient norm plot over iteration.
Multiple coil reconstructions
Figure 6.17: Ground truth image calculated from a fully sampled multiple coil data set.
Figure 6.18: Measured multiple coil data with known background. (Top Left) Initial superimposed to ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground truth image. (Bottom Right) Final predicted image.
Figure 6.19: Measured multiple coil data with known background. Gradient norm plot over
iteration.
6.4 Discussion
The shape reconstruction method has been demonstrated to work with both simulated and measured data. Results have been demonstrated for the case of 8 radial profiles; the method, however, can be applied to other degrees of angular sampling. The number of objects has to be known in advance. This might be considered a disadvantage when compared to level set methods, which are topologically adaptive. In many applications, though, the number of shapes is known, and additional constraints have to be applied to level set methods to restrict the topology. The quality of the reconstruction is dependent on the knowledge of the interior of the shape. If the model of the shape is accurate, then the shape reconstruction can be expected to be robust and precise. Nevertheless, the shape reconstruction method gives good approximations even when this knowledge is not available. In the case of noisy data combined with an inaccurate interior intensity model, the quality of the reconstructed shapes decreases, as seen in figs. 6.15 and 6.18.
As is typical with many ‘snake’ approaches, a close initialisation will result in a good approximation of the boundary. Our inverse problem approach is not based on derivative information, but on an accurate model of the object. As demonstrated in the results of the previous section, when the model is accurate the segmentation is robust and the initial position does not have to be very close to the real boundary. In this case, our shape reconstruction method will produce accurate results even from fewer than 8 angular samples. When the model is precise, the solution can be thought of as a global minimum of the objective function. On the other hand, when the model is not a good approximation of the true object, then $g \notin \mathrm{Range}(Z(\gamma))$, which typically implies that there are many local minima and the true solution is hard to find. The problem of reconstruction from incomplete Radon data also depends on the background structure. In a simple background, for example one of zero intensity, the solutions are easy to find as they lie in a deep valley of the objective function. In a complex background with a variety of structures, many solutions exist that minimize the objective function locally. The valleys are smaller, making the true solution much harder to track. It is then necessary to incorporate a priori information that restricts the solution space to a well-defined set of meaningful results.
In the next chapter we discuss the problem of reconstructing a shape of unknown, but
smooth, interior intensity in an unknown background.
Chapter 7
Combined reconstruction method
Total-variation-based methods in image reconstruction from tomographic data have been demonstrated in the works of [28], [96], as discussed in §5.4. The approach discussed in this chapter differs significantly from these methods in that it combines the image reconstruction with the shape reconstruction. Ye et al [187] have presented a method which alternates the minimization process between the reconstruction of the image and the shape. Our approach follows the same general lines of an alternating minimisation approach. It uses the reconstructed image for the estimation of the background and interior of the shape, and then takes advantage of the shape information in the image reconstruction. TV based methods have the property of enhancing edges when compared to standard Tikhonov regularisation, which tends to produce smooth solutions. This edge enhancing property applies globally to all locations of the image. The combined shape and image reconstruction method enjoys this global edge enhancing property while giving the ability to enhance edges further on the boundary of the estimated shape.

Our method is based on the ideas presented in chapters 5 and 6. It is an initial solution to the problem of estimating shapes with an unknown intensity in an unknown background from limited data.
7.1 Forward and inverse problem
The first part of the problem is to estimate the boundary of the shape. The aim is to obtain the solution to the nonlinear minimisation problem

$$\gamma_{\min} = \arg\min_{\gamma}\, \|g - Z(\gamma)\|_2^2 + \lambda_{\gamma}\|\gamma\|_2^2, \qquad (7.1)$$

where the operator $Z(\gamma) = (\mathcal{R}\circ\mathcal{C})(\gamma)$ is the forward operator, $g$ is the measured data, $\gamma$ is the shape parameter vector and $\lambda_{\gamma}$ is a regularization parameter.
The operator $\mathcal{C} : P \to X$ maps the parameters of the shape into the image space. Pixels
are classified as belonging to the interior, exterior or background. The interior of the object is modelled as a smoothly varying distribution of intensities. These intensities are calculated from the reconstructed image, and they are restricted to be within a small percentage of the maximal intensity in the interior of the object. The background intensities are estimated directly from the reconstructed image. Boundary pixels are calculated relative to the proportion of their area belonging to the interior and the background. Surrounding these pixels, we assume that there is a narrow band of low intensities. This is modelled from the reconstructed image in a similar way to the interior intensities, by selecting the minimum value within that band and restricting the corresponding coefficients as before. The operator $\mathcal{R} : X \to Y$ is the Radon transform, which maps the
predicted image to the data space. The shape reconstruction problem in eq. (7.1) is solved with the Levenberg–Marquardt method, which gives the following update:

$$\gamma_{k+1} = \gamma_k + (J^T J + \lambda I)^{-1} J^T (g - J\gamma_k). \qquad (7.2)$$
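One update of eq. (7.2) in NumPy, for a fixed Jacobian $J$ (a sketch; in the nonlinear setting $J$ would be recomputed at each $\gamma_k$):

```python
import numpy as np

def lm_step(gamma, g, J, lam):
    """One Levenberg-Marquardt update, eq. (7.2):
    gamma_{k+1} = gamma_k + (J^T J + lam*I)^{-1} J^T (g - J gamma_k)."""
    A = J.T @ J + lam * np.eye(J.shape[1])
    return gamma + np.linalg.solve(A, J.T @ (g - J @ gamma))
```

For a linear forward model and a small damping parameter, a few iterations recover the least-squares solution; the damping term trades convergence speed for stability.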
In the image reconstruction part of the problem we solve

$$p_{\zeta}^{\min} = \arg\min_{p_{\zeta}}\, \|g - J_p\, p_{\zeta}\|_2^2 + \lambda\, TV_{\beta}(p_{\zeta}) \quad \text{subject to } c(p_{\zeta}) \geq 0, \qquad (7.3)$$
where $c$ are the constraint functions, as described previously in §5.4.3, $p_{\zeta}$ is the blob parameter vector, and the total variation functional

$$TV_{\beta}(p_{\zeta}) = \int_{\Omega} \psi_{\beta}(|\nabla p_{\zeta}|)\, dp_{\zeta} \qquad (7.4)$$

is modified for each specific shape by altering the function $\psi$ according to the estimated shape.
Douiri et al [39] proposed the use of the Huber function [80] for the approximation of the absolute value function. The Huber function is only $C^1$ continuous, making it unsuitable for the primal-dual method, which requires the calculation of the second derivative. The proposed $\psi = \sqrt{t^2 + \beta^2}$ does not show severe signs of the staircase effect¹ [26]. By increasing β, the derivative of the ψ function becomes more isotropic, approximating a Tikhonov solution more closely. The reduction of β makes the function more anisotropic. These changes in β mainly affect the region where the gradient is small (fig. 7.1). In the intervals where the gradient is large the function is approximately the same.
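The behaviour of ψ and its derivative is easy to verify numerically (a minimal sketch):

```python
import numpy as np

def psi(t, beta):
    """Smoothed absolute value psi(t) = sqrt(t**2 + beta**2) from the text."""
    return np.sqrt(t * t + beta * beta)

def dpsi(t, beta):
    """Derivative psi'(t) = t / sqrt(t**2 + beta**2). For |t| >> beta it
    approaches sign(t); near t = 0 it is roughly t / beta, so a larger beta
    gives a gentler, more Tikhonov-like slope at small gradients."""
    return t / np.sqrt(t * t + beta * beta)
```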
Our image reconstruction approach is based on the estimation of the shape, which strengthens our belief that an edge exists in a particular location. Interpreting the behavior of ψ in relation to our shape estimation method leads us to the following conclusions. If a location
¹Solutions that are piecewise constant.
[Plot of ψ′(t) against t for β = 0.1 (background), β = 0.15 (interior) and β = 0.05 (boundary).]
Figure 7.1: Plot of the derivative of ψ(t) for different values of β. These values are assigned according to the classification of intensity coefficients as background (solid line), interior (dotted line) and boundary (dashed line).
is estimated as a boundary coefficient, we will penalise even the small gradients. If a location is estimated to be in the interior, then small gradients will be penalised less, as we assume that the interior intensities are smoothly distributed. On the background region, we seek to enhance edges but we do not know their exact locations, and for that reason we use a value that lies in between the interior and the boundary β values. The minimisation problem in eq. (7.3) is solved with the projected primal-dual method presented in sections §5.4.2 and §5.4.3.
7.2 Results
The intensity coefficients are initialised using the damped least squares method, presented in §5.3.2. Given this reconstructed image, the shape parameters are initialised near the object of interest. We begin the iteration by solving first the shape estimation problem with the Levenberg–Marquardt approach. After each iteration of the shape estimation problem we re-solve the image reconstruction problem using a few iterations of the projected primal-dual method with the shape-specific β values. In this approach we use the values $\beta_i = 0.11$, $\beta_e = 0.1$ and $\beta_b = 0.09$ for the interior, background and boundary coefficients. A more sophisticated approach could choose β values from the statistical properties of the expected edges, the interior of the shape and the background intensities. This alternating minimisation process is repeated until the convergence criteria are met.
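The per-coefficient β assignment described above can be sketched as a lookup over a hypothetical label mask (0 = background, 1 = interior, 2 = boundary; the β values are those used in this section):

```python
import numpy as np

# Hypothetical label codes: 0 = background, 1 = interior, 2 = boundary.
BETA = {0: 0.1, 1: 0.11, 2: 0.09}   # beta_e, beta_i, beta_b from the text

def beta_map(labels):
    """Assign a shape-specific beta to every intensity coefficient,
    based on its classification in the current shape estimate."""
    out = np.empty(labels.shape, dtype=float)
    for lab, b in BETA.items():
        out[labels == lab] = b
    return out
```

The resulting map would then parameterise $TV_\beta$ pointwise in the image reconstruction step.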
We demonstrate our approach using 5 image iterations for every shape iteration. We begin
with simulated cardiac data, produced by taking the Radon transform of a fully reconstructed
cardiac image (ﬁg. 7.2) at 8 angles. The reconstructed shapes and images are shown in ﬁg. 7.3.
An enhanced image has been created using the estimation of the boundary of the shape and the reconstruction of the image (fig. 7.4, left). Further to that, we apply our method to measured MRI data with 8 radial profiles, as described in the previous chapters, from single and multiple receiver coils. The single coil reconstructions can be seen in fig. 7.6. On the left of fig. 7.6 the initial and final estimated shapes are superimposed on the ground truth image (fig. 7.5). In the left part of fig. 7.7 we display the enhanced image. The right plot of fig. 7.7 shows the gradient norm over iteration as it climbs out of an infeasible solution, which contains intensities below zero and/or above the maximal value. The results from multiple coil data can be seen in figs. 7.8–7.10.
Simulated data reconstructions
Figure 7.2: Ground truth image for the simulated experiments.
Figure 7.3: Simulated data with unknown background. (Top Left) Initial superimposed to
ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed
to ground truth image. (Bottom Right) Final predicted image. The error for the reconstructed
image is rms = 0.40217.
Figure 7.4: Simulated data with unknown background. (Left) Enhanced reconstructed image.
(Right) Plot of the gradient norm of the shape reconstruction over iteration.
Single coil reconstructions
Figure 7.5: Ground truth image from fully sampled single coil data.
Figure 7.6: Measured data with unknown background. Coil 5. (Top Left) Initial superimposed
to ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed
to ground truth image. (Bottom Right) Final predicted image. The error for the reconstructed
image is rms = 0.6509.
Figure 7.7: Measured data with unknown background. Coil 5. (Left) Enhanced reconstructed
image. (Right) Plot of the gradient norm of the shape reconstruction over iteration.
Multiple coil reconstructions
Figure 7.8: Ground truth image from fully sampled multiple coil data.
Figure 7.9: Measured data with unknown background. Multiple coils. (Top Left) Initial superimposed to ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground truth image. (Bottom Right) Final predicted image. The error for the reconstructed image is rms = 0.56808.
Figure 7.10: Measured data with unknown background. Multiple coils. (Left) Enhanced reconstructed image. (Right) Plot of the gradient norm of the shape reconstruction over iteration.
7.3 Discussion
By combining the image reconstruction method with the shape estimation, we have presented a method which is capable of reconstructing objects with smoothly varying intensities in an unknown background. The TV based image reconstruction method enjoys the benefit of global edge enhancement. Using the estimated shape, we can further improve the reconstruction of edges at the boundary of the object. The smoothly varying interior model used for the intensities inside the object is an improvement on the constant interior model. Still, the exactness of our model suffers from the existence of very low intensities within the left ventricle. An improved cardiac model should allow for outliers within the interior, but these outliers have to be restricted according to some a priori knowledge. The problem with non-filled interiors is that the shape can continue to grow even when it has passed through the correct boundary. This happens because we are trying to include low intensities within the interior of the shape without knowing their location in advance. The result is that the interior model ends up including low intensity values belonging outside of the object, and progresses even further.
A model for the reconstruction of shapes has to describe the interior and the background in a distinct manner. In our approach the aim was to find an object which has a smooth distribution of high intensity values. Given an exact model of the heart, the problem is then to grow or shrink it according to the estimated shape. This could potentially be achieved with a model of the heart based on an anatomical atlas or on a model learned from a large data set. The problem of shape reconstruction would then be to fit the best registered model, according to some metric, to the data. While this is an important subject and it raises interesting questions, it exceeds the purposes of this thesis.
Results for our method have been presented in simulated and measured data studies. The presented method can be used to reconstruct real-time free-breathing cardiac MR data by solving a minimisation problem for each time step. Free-breathing imaging has the potential of being clinically more useful than breath-hold imaging, due to the poorly understood changes in blood flow and pressure in the region of the heart [122] during the extended breath-holds required for the collection of data in traditional methods. The motion of the object is very small during the collection of the limited amount of data (8 radial profiles) that we have used for our experiments. This makes the method ideal for dynamic imaging, as there will be very little corruption due to motion artifacts. The method can be applied in other dynamic imaging modalities where the collected data is limited. As the method does not make any assumptions about the motion of the object, it removes the restriction of periodicity, which is a limiting factor in gated studies. This offers the possibility of applying the method to patients suffering from arrhythmia.
In the next chapter we present a method which assumes that shapes are correlated in time.
The problem is solved in the temporal dimension as a state estimation problem using Kalman
ﬁlters.
Chapter 8
Temporally correlated combined
reconstruction method
In the case of cardiac MRI, data is temporally correlated. The location of the cardiac boundaries at a particular time point is dependent on their previous location. We consider this to be a Markov process, where the boundary is determined only by its previous position. The problem of reconstructing the shape is expressed as a state estimation problem, where the states are the parameters of the shape. We solve this stochastic problem using Kalman filters. A similar approach for the calculation of diffusion and absorption coefficients in optical tomography has been presented in [95].
While the temporally correlated problem can be solved with the combined method presented in chapter 7 as a sequence of minimisation problems, it would not be trivial to incorporate statistics that change in time. Kalman filters have the temporal estimation of statistics built in. The formulation of the problem in this temporal case is also simpler using the Kalman filter approach.
8.1 Forward and inverse problem
In the Kalman filter algorithm the aim is to minimise the a posteriori error covariance

$$C_{t|t} = E[e_{t|t}\, e_{t|t}^T], \qquad (8.1)$$

where $e_{t|t} = g_t - y_{t|t}$ is the a posteriori estimate error, $g_t$ is the measured data at time $t$, and $y_{t|t} = Z(\gamma)$ is the predicted data at time $t$ given the data at time $t$. Choosing the linearisation point to be the current estimate of the predictor $\gamma_{*,t} = \gamma_{t|t-1}$ simplifies the nonlinear Kalman filter equations (4.87)–(4.91) to
$$G_t = C_{t|t-1}\, J_{\gamma,t}^T(\gamma_{t|t-1}) \left(J_{\gamma,t}(\gamma_{t|t-1})\, C_{t|t-1}\, J_{\gamma,t}^T(\gamma_{t|t-1}) + C_{n,t}\right)^{-1} \qquad (8.2)$$
$$\gamma_{t|t} = \gamma_{t|t-1} + G_t \left(g_t - Z(\gamma_{t|t-1})\right) \qquad (8.3)$$
$$C_{t|t} = C_{t|t-1} - G_t\, J_{\gamma,t}(\gamma_{t|t-1})\, C_{t|t-1} \qquad (8.4)$$
$$\gamma_{t+1|t} = S_t\, \gamma_{t|t} \qquad (8.5)$$
$$C_{t+1|t} = S_t\, C_{t|t}\, S_t^T + C_{w,t}, \qquad (8.6)$$
where $J_{\gamma,t}$ is the shape Jacobian matrix, described in eq. (6.4). Eq. (8.3) updates the estimate according to the measured data and eq. (8.5) according to the model of the state process. The matrix $S_t$ represents our knowledge about the change of the states from one time point to the next. Setting $S_t = I$ represents random motion.
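One cycle of eqs. (8.2)–(8.6) in NumPy (a sketch; `Z` and `Jac` stand for the forward operator and shape Jacobian, supplied by the caller):

```python
import numpy as np

def ekf_step(gamma_pred, C_pred, g_t, Z, Jac, C_n, S=None, C_w=None):
    """One extended Kalman filter cycle, eqs. (8.2)-(8.6), linearised at
    the predictor gamma_{t|t-1}. Z and Jac are user-supplied callables."""
    J = Jac(gamma_pred)
    G = C_pred @ J.T @ np.linalg.inv(J @ C_pred @ J.T + C_n)     # (8.2)
    gamma_upd = gamma_pred + G @ (g_t - Z(gamma_pred))           # (8.3)
    C_upd = C_pred - G @ J @ C_pred                              # (8.4)
    if S is None:                 # S_t = I represents random motion
        S = np.eye(len(gamma_upd))
    if C_w is None:
        C_w = np.zeros_like(C_pred)
    gamma_next = S @ gamma_upd                                   # (8.5)
    C_next = S @ C_upd @ S.T + C_w                               # (8.6)
    return gamma_upd, C_upd, gamma_next, C_next
```

With an identity observation model and a near-zero noise covariance, a single step reproduces the measurement, which is a quick sanity check of the gain computation.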
To solve the combined image and shape reconstruction we use an alternating minimisation approach, similar to chapter 7. Intensity parameters are initialized using the projected primal-dual method, presented in chapter 5. Shape parameters are initialized near the object of interest. We estimate the location and shape of the left ventricle with the extended Kalman filter, using a single radial profile as our data $g_t$ at time point $t$. After a number of radial profiles has been used for the shape estimation, we switch to the reconstruction of the intensities with the shape-specific $TV_\beta$ projected primal-dual method (chapter 7). As our data for the intensity reconstruction we use all the radial profiles from the time point of the previous switch until the current time $t$. The data vector used is $g = \{g_{t-n}, \ldots, g_t\}$, where $n$ is the number of profiles we have used for the estimation of the shape before switching to the image reconstruction method. The intensity reconstruction is iterated until sufficient convergence is achieved. The method is then switched again to the estimation of the shape, using the intensity parameters calculated previously. The combined estimation of the shape and intensity parameters progresses by alternating every $n$ time points until all radial profiles have been processed.
8.2 Results
We choose to solve the image reconstruction problem after 8 iterations of the shape estimation method. The shapes are estimated with the Kalman filter for each profile. In our numerical experiments we have found that the heart does not exhibit much motion during the collection of 8 radial profiles, and we take advantage of this to reduce the computational cost without any significant loss in the quality of the reconstructions. The acquisition of data is separated into groups of 8 profiles. In each group, the 8 profiles are chosen so that they have the largest angular distance between them, in order to span the 180 degrees in a near optimal way. All groups are interleaved with each other to span the 180 degrees perfectly in time (fig. 8.1).
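The interleaving scheme can be sketched as follows (a hypothetical helper; within each group the 8 profiles are uniform over 180 degrees, and successive groups are offset so that together they tile 180 degrees uniformly):

```python
import numpy as np

def interleaved_angles(profiles_per_group=8, n_groups=3):
    """Angles (degrees) for interleaved radial sampling. Each group spreads
    its profiles uniformly over 180 degrees; group g is offset by
    g * 180 / (profiles_per_group * n_groups), so the union of all groups
    covers 180 degrees uniformly."""
    base = np.arange(profiles_per_group) * 180.0 / profiles_per_group
    offset = 180.0 / (profiles_per_group * n_groups)
    return [np.sort((base + g * offset) % 180.0) for g in range(n_groups)]
```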
[Four panels of the radial sampling pattern in (kx, ky): time point 1, time point 2, time point 3, and all time points combined.]
Figure 8.1: Interleaved sampling pattern.
The interleaved pattern (fig. 8.1) makes it possible to construct sensitivity matrices from the time averaged images without using a body coil. The fully sampled reconstructed data was segmented manually for comparison with the automatic segmentations. For this task we use the Dice similarity
coefficient [35]

$$dsc(C, C_g) = \frac{2\, N(C \cap C_g)}{N(C) + N(C_g)}, \qquad (8.7)$$

where $C$ and $C_g$ are the areas of the predicted and ground truth contours, respectively. $N(C)$
is the number of pixels within an area. Further to that we show the predicted and ground truth
areas over the cardiac phase in a graph.
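Eq. (8.7) on binary masks (a minimal sketch):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient, eq. (8.7):
    2 * |A intersect B| / (|A| + |B|), on boolean masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```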
Simulated data was produced by reconstructing a fully sampled data set and then taking the Radon transform at 8 angles per cardiac phase. The full data set consisted of 64 cardiac phases, which we have undersampled to 16 by taking every fourth phase. The resulting undersampled data set consists of 128 radial profiles. The reconstructed shapes superimposed on the ground truth images can be seen in the left column of fig. 8.2, next to the predicted images with the smoothly varying constant interior. In fig. 8.3 we display the filtered backprojection reconstructions next to the projected primal-dual ones. The similarity coefficient, the rms image errors and the areas over time are presented in fig. 8.4. In fig. 8.5 we show one line passing through the center of the image over time for the ground truth, filtered backprojection and the combined reconstructions.
Simulated data reconstructions
[Row labels (time points): 1, 4, 7, 16.]
Figure 8.2: Reconstructions from simulated data. The numbers on the left column indicate the
time point in the sequence. (Left) Reconstructed shapes superimposed on ground truth images.
(Right) Reconstructed images with restricted interior intensities.
[Row labels (time points): 1, 4, 7, 16.]
Figure 8.3: Reconstructions from simulated data. The numbers on the left column indicate the time point in the sequence. (Left) Filtered backprojection. (Right) Reconstructed images using the shape-specific $TV_\beta$ approach.
Figure 8.4: Error plots from simulated data reconstructions. (Left) Plot of the Dice similarity coefficient over time. (Middle) Plot of rms over time. Filtered backprojection (solid line) and temporally correlated combined approach (dotted line). (Right) Predicted and ground truth areas over time.
Figure 8.5: $x$–$t$ plots of the central $r_x$ line in the image over time. The thick arrows point to the papillary muscle. (Left) Ground truth. (Middle Left) Filtered backprojection. (Middle Right) Shape-specific total variation method. (Right) Combined shape and image method.
Measured data
ECG gated data was acquired from a healthy volunteer. A total of 25 phases, each with 208 radial profiles, were collected using a five-element array receive coil, as described in §5.5.2. The data used in this experiment was generated by undersampling each phase to 8 profiles. This was done by using every 8th profile of the fully sampled data set. Using this heavily undersampled data set results in a 26-fold acceleration compared to the original radial acquisition. For the case of real-time MRI, a total of about 200 radial profiles can be collected within a single heart beat using a fast steady state free precession sequence. To transform the data into the Radon space, we applied a 1D inverse Fourier transform along each radial profile, according to the central slice theorem. The results for the single and multiple coil reconstructions are presented in figs. 8.6–8.13, as previously.
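The profile-wise transform can be sketched with NumPy's FFT conventions, assuming each stored k-space profile has its DC sample at the centre (an assumption of this sketch, not stated in the text):

```python
import numpy as np

def kspace_profiles_to_projections(profiles):
    """Map radial k-space profiles, shape (n_profiles, n_samples) with DC
    at the centre of each row, to Radon-space projections via a 1D inverse
    FFT along each profile (central slice theorem)."""
    shifted = np.fft.ifftshift(profiles, axes=-1)       # DC to index 0
    proj = np.fft.ifft(shifted, axis=-1)                # 1D inverse FFT
    return np.fft.fftshift(proj, axes=-1)               # recentre projection
```

A round trip through the forward transform recovers the original projection exactly, up to numerical precision.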
Single coil reconstructions
[Row labels (time points): 1, 4, 7, 16.]
Figure 8.6: Reconstructions from measured single coil data. The numbers on the left column
indicate the time point in the sequence. (Left) Reconstructed shapes superimposed on ground
truth images. (Right) Reconstructed images with restricted interior intensities.
[Row labels (time points): 1, 4, 7, 16.]
Figure 8.7: Reconstructions from measured single coil data. The numbers on the left column indicate the time point in the sequence. (Left) Gridding. (Right) Reconstructed images using the shape-specific $TV_\beta$ approach.
Figure 8.8: Error plots from measured single coil data reconstructions. (Left) Plot of the Dice similarity coefficient over time. (Middle) Plot of rms over time. Gridding (solid line) and temporally correlated combined approach (dotted line). (Right) Predicted and ground truth areas over time.
Figure 8.9: $x$–$t$ plots of the central $r_x$ line in the image over time. The thick arrows point to the papillary muscle. (Left) Ground truth. (Middle Left) Gridding reconstruction. (Middle Right) Shape-specific total variation method. (Right) Combined shape and image method.
Multiple coil reconstructions
[Row labels (time points): 1, 4, 7, 16.]
Figure 8.10: Reconstructions from measured multiple coil data. The numbers on the left column indicate the time point in the sequence. (Left) Reconstructed shapes superimposed on ground truth images. (Right) Reconstructed images with restricted interior intensities.
[Row labels (time points): 1, 4, 7, 16.]
Figure 8.11: Reconstructions from measured multiple coil data. The numbers on the left column indicate the time point in the sequence. (Left) Gridding. (Right) Reconstructed images using the shape-specific $TV_\beta$ approach.
Figure 8.12: Error plots from measured multiple coil data reconstructions. (Left) Plot of the Dice similarity coefficient over time. (Middle) Plot of rms over time. Gridding (solid line) and temporally correlated combined approach (dotted line). (Right) Predicted and ground truth areas over time.
Figure 8.13: $x$–$t$ plots of the central $r_x$ line in the image over time. The thick arrows point to the papillary muscle. (Left) Ground truth. (Middle Left) Gridding reconstruction. (Middle Right) Shape-specific total variation method. (Right) Combined shape and image method.
8.3 Discussion
In this chapter a method for the estimation of both shape and intensity parameters has been presented. The time correlated combined reconstruction does not make any assumptions about periodic motion. This makes it applicable to many dynamic imaging problems where gated methods are simply infeasible. If the background is assumed to be completely stationary, then another interesting approach is to reconstruct the difference between two sequential time points. With a stationary background, the only signal left when we subtract data from two different time points is the motion of the object of interest. This approach does not require knowledge of the background structures, which are completely removed by the subtraction. More details on the difference imaging method are given in the appendix.
The benefits of the time correlated combined approach can be seen in the reconstructed images and especially in the xt plots (figs. 8.5, 8.9 and 8.13). While standard reconstruction methods, such as filtered backprojection and gridding, produce images which are highly corrupted by noise, our reconstruction method shows results where the cardiac ventricles can be clearly delineated and the presence of noise is limited. The restricted intensity model for the interior of the shape causes underestimation of the area of the cardiac boundary, as seen in figs. 8.4 and 8.12. This mismatch is caused by our model not representing the interior intensities correctly. It is also clear that the low intensities that represent the papillary muscle are a source of error for our model, as seen in fig. 8.5. A more sophisticated model should be able to capture the expected intensity distribution within the heart with precision. This would result in improved estimation of the boundary, and therefore the predicted area would approximate the truth more closely.
While we would expect the quality of the multiple receive coil reconstructions to be superior to the single coil case, this is not clear from the obtained results. A reason for this is our approximation of the sensitivity matrices. As mentioned in §5.5.2, we have divided each time averaged fully sampled single coil image by the root of the sum of squares of all coil images. Sensitivity matrices could also be obtained by dividing the single coil images by an image obtained from a body coil, which covers all the k-space sampling positions, instead of the sum of squares that we have used. In our case a body coil signal was not available. To eliminate noise we have low-pass filtered the sensitivity matrices. This results in errors at object edges. An alternative approach to smoothing is to perform a polynomial fit for each pixel to the noisy images [139]. The exact calculation of sensitivity matrices exceeds the purposes of this thesis.
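The root-of-sum-of-squares construction described above can be sketched as follows. The moving-average low-pass filter and the use of magnitude images are simplifying assumptions made for illustration only:

```python
import numpy as np

def lowpass(img, k=3):
    """Simple separable moving-average low-pass filter."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, out)

def sensitivity_maps(coil_images, k=3):
    """Approximate sensitivity matrices from time averaged, fully
    sampled single coil magnitude images of shape (n_coils, ny, nx)."""
    # Root of the sum of squares over all coils, used in place of a
    # body coil reference image.
    rss = np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))
    rss = np.maximum(rss, 1e-12)  # avoid division by zero outside the object
    # Divide each coil image by the combined image, then low-pass
    # filter to suppress noise (this introduces errors at object edges).
    return np.stack([lowpass(np.abs(c) / rss, k) for c in coil_images])
```
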
Further improvement in the case of cardiac MRI is to enforce constraints on the center of the estimated shape, limiting its motion to a bare minimum: it is the boundary of the object that is moving and not its center. High frequency parameters should change faster than lower frequency ones, since they represent details on the boundary, and these change faster from one time point to the next. While we can solve the shape reconstruction problem in time using a least squares approach such as the Levenberg-Marquardt method, it would be much harder to incorporate statistics that change in time. Kalman filters offer the tools for this task. Using the Kalman filter approach, we can solve the problem considering each radial profile to belong to a different time point. In our experiments, this robustness was not evident with the Levenberg-Marquardt method, which requires a number of radial profiles for the estimation of the shape at each time point. Thus, Kalman filters provide shape estimates for each radial profile, reaching the physical limit of temporal resolution in MRI, as it is the change of gradients that takes considerably more time than reading the data out on a particular profile. Another benefit of the stochastic approach is that the state transition matrix can vary in time, representing the movement from one state to the next. To calculate its elements we can incorporate them in the minimisation problem as parameters of the objective function. If the data can be reconstructed offline we can use the fixed-interval smoother (§4.7.3), which calculates the estimates from past and future measurements. Further to that, we can consider the non-trivial problem of a non-Markov process where the parameters depend on a longer history of the motion of the states.
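For reference, one predict/update cycle of the linear Kalman filter discussed above, applied per radial profile, has the following standard form; the matrices F, H, Q and R stand for the state transition, observation and noise models, and are placeholders here rather than the thesis's actual operators:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One Kalman filter predict/update cycle.

    x, P : state estimate and covariance from the previous profile
    z    : measured data for the current radial profile
    F    : state transition matrix (may vary in time)
    H    : observation matrix mapping shape parameters to profile data
    Q, R : process and measurement noise covariances
    """
    # Predict the state forward to the current profile's time point.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the measured profile.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```
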
Chapter 9
Conclusions and future directions
Novel approaches for the reconstruction of images and shapes in cardiac MRI have been presented in this thesis. The presented methods have been applied to the problem of cardiac MRI, yet the methodology is general enough to be applicable to many limited data problems. It also offers the ability to be combined with other MRI methods, such as k-t and SENSE approaches (§2.4).
In chapters 2 and 3 we presented the foundations of dynamic imaging in MR and discussed current approaches in shape reconstruction. The basis of our methods is inverse problem theory, which we introduced in chapter 4.
The image reconstruction method in chapter 5 produces superior results to standard methodology, such as filtered backprojection and gridding algorithms. Our method reduces the severe streaky artifacts, which dominate standard methods, without oversmoothing edge information. This reduction of angular artifacts in the reconstructed images can be of importance in k-t applications where training about the motion in the images is required. These methods typically use interleaved data acquisition patterns. This implies that the angular artifacts will rotate depending on the choice of angles. These moving artifacts will be detected as motion, which is clearly unwanted. The quality and usability of the reconstructed images can only be assessed by clinicians, which we plan to include in our future work. Apart from the standard argument of increasing computer capabilities, higher resolution images using finer basis function grids can be obtained by replacing direct matrix inversions with iterative linear system solvers, for example preconditioned conjugate gradient methods. The application of these methods also offers the extension of the presented methodology to three dimensions, where the blob basis functions are naturally extended due to their symmetric nature. 3D dynamic imaging requires the collection of large data sets. Using our approach these requirements can be significantly decreased, which will reduce motion artifacts in the reconstructed images.
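As a sketch of the iterative alternative to direct inversion mentioned above, a conjugate gradient solver with a Jacobi preconditioner (one common choice, assumed here for illustration) can be written as:

```python
import numpy as np

def pcg(A, b, x0=None, tol=1e-10, maxiter=500):
    """Jacobi-preconditioned conjugate gradient solver for A x = b,
    A symmetric positive definite. Only matrix-vector products with A
    are needed, which is what makes finer basis function grids tractable
    compared with forming A's inverse explicitly."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner: diag(A)^-1
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # update the search direction
        rz = rz_new
    return x
```
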
In chapter 6 we have presented a shape reconstruction method for the case of a simple or known background. Shapes in our method are not estimated from edge information, but directly from the measured data using a model-based approach. It is our model that defines the shape we wish to reconstruct. In this chapter we have presented a basic approach, where the interior of the shape is considered to be of constant intensity.
In chapter 7 we combined the image and shape reconstruction methods. The background and interior intensities are estimated using the image reconstruction method. The shape reconstruction assists the estimation of images by providing local information about the expected intensity distribution at the edges and interior of the boundary. Our model for the interior intensities shows its limitations in the estimation of real cardiac shapes. The definition of a more sophisticated cardiac model based on anatomical knowledge would make the shape reconstruction method more exact and robust. The combination of our methods with registration techniques could be a possible direction for the exact segmentation of cardiac images. The extension from planar contours to surfaces will be another exciting future direction. The trigonometric functions used in this thesis can be replaced by spherical harmonics for the description of surfaces.
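The planar contours above are parameterised by truncated trigonometric series. A minimal sketch of evaluating such a contour, in the style of the elliptic Fourier descriptors of [99] and with illustrative coefficient names, is:

```python
import numpy as np

def fourier_contour(a0, b0, a, b, c, d, n_points=200):
    """Evaluate a closed planar contour parameterised by truncated
    trigonometric series:

        x(t) = a0 + sum_k a_k cos(k t) + b_k sin(k t)
        y(t) = b0 + sum_k c_k cos(k t) + d_k sin(k t)
    """
    t = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    x = np.full_like(t, a0, dtype=float)
    y = np.full_like(t, b0, dtype=float)
    for k, (ak, bk, ck, dk) in enumerate(zip(a, b, c, d), start=1):
        x += ak * np.cos(k * t) + bk * np.sin(k * t)
        y += ck * np.cos(k * t) + dk * np.sin(k * t)
    return x, y
```

A circle of radius r corresponds to a single harmonic with a = [r], b = [0], c = [0], d = [r]; higher harmonics add progressively finer boundary detail.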
The reconstruction of shapes offers quantitative analysis of the cardiac motion. Solving this as a state estimation problem in chapter 8 offers a compact method for dynamic shape detection directly from MR samples. As a future direction, we believe that estimating the parameters of the motion, that is the state transition from one time point to the next, will improve the reconstruction results.
The limited data used in our experiments has the potential to reduce scanning time and make high temporal resolution real-time cardiac imaging a clinical possibility. The benefit of reducing scanning time and breath hold requirements is clear in the case of many patients who find MRI scanners claustrophobic. Some patients find it difficult to maintain even a short breath hold. It would also simplify examinations under stress. Real-time imaging escapes the problem of normalising, shrinking or stretching monitored cardiac cycles to fit an average cycle. As we have not assumed the periodic nature of cardiac motion, our novel approach is applicable under conditions where gated methods are infeasible. It escapes the temporally averaging nature of gated imaging methods and has the potential of reducing motion artifacts. Applications of interest for the presented method in cardiac MRI are patients with arrhythmia and free-breathing imaging. Free-breathing imaging can be potentially more clinically relevant due to the poorly understood changes in blood flow and pressure within the cardiac region during extended breath holds [122]. On top of that, data acquisition time is no longer limited by the ability of patients to hold their breath and can be extended in order to obtain images from more slices or even complete volumes of the heart. Other potential applications include imaging during pregnancy, where it would be very hard to keep the fetus static. Imaging of infants in situations where it is hard to keep them still is another possibility. Another interesting application is 3D mammography, where the 2D images are of high resolution, but the limited number of views makes standard tomosynthesis algorithms ([60], [179]) not ideal for the reconstruction of 3D images. Generally our novel approach can be used in many tomographic and Fourier imaging problems.
Our proposed method is applicable to any choice of angles, even in limited view problems, where data cannot be collected over the whole 180 degrees. An interesting choice of k-space positions is random sampling, which has been explored recently for its possibility of producing higher quality reconstructions [23], [22] and [112]. In radial data sampling the choice of angles can be exploited by an intelligent acquisition scheme, where the angles are chosen according to the cardiac motion. In simple terms, at the end diastolic phase the heart is moving slower and more data can be collected without fear of corruption by motion artifacts. The detection of the shape can also be employed for the choice of scanning angles. Assuming that there is more interest in reconstructing the cardiac contours precisely than in reconstructing the surrounding structure, we can collect data at angles tangent to the reconstructed shape. If the object of interest is not perfectly round, then views where the curvature of the shape is high contribute more to the correct reconstruction than those where the curvature is low.
In this thesis we have presented methods for the reconstruction of images and shapes in a dynamic problem. We have applied our methods to radially sampled cardiac MRI, and seen the improvements and limitations of our model-based approaches. Reconstructed images are visually and numerically superior to standard methods. Further evaluation of the image reconstruction method with multiple data sets will be required to assess its clinical feasibility. The detection of boundaries of objects with unknown interior intensities in an unknown background is a difficult problem. A robust shape reconstruction method will require further development, especially of the interior intensity model. We believe that we have made a significant first step in the solution of these limited data dynamic problems with the introduction of model-based techniques, which aim to approximate the data as closely as possible without making assumptions about the completeness of the measurement set.
Appendix A
Acronyms
MR Magnetic resonance
MRI Magnetic resonance imaging
CT Computed tomography
EIT Electrical impedance tomography
PET Positron emission tomography
ECG Electrocardiogram
FOV Field of view
SNR Signal to noise ratio
FT Fourier transform
LS Least squares
EM Expectation maximisation
SVD Singular value decomposition
TSVD Truncated singular value decomposition
TV Total variation
Appendix B
Table of notation
a          Scalar
a          Column vector
a_i        ith element of vector a unless otherwise defined within the context
A          Matrix or linear operator expressed as matrix
diag(a)    Diagonal matrix with the elements of a on its diagonal
I          Identity matrix
Range(A)   Range of A
Null(A)    Nullspace of A
dim(A)     Dimensions of A
→          Maps to
A^n        n-dimensional space named A
Z          Nonlinear operator
^T         Transpose
^H         Conjugate transpose
^†         Pseudoinverse
^∗         Adjoint
||a||_p    l_p norm of a
E[]        Expectation operator
Φ          Objective functional
Λ          Lagrangian
∪          Set union
∩          Set intersection
⊂          Subset of
∈          Belongs to
∇          xy gradient
C^n        Continuity of a function for n derivatives
III(x)     Comb function
·          Multiplication
rms        Relative mean square error
dsc        Dice similarity coefficient
∗          Convolution
.∗         Elementwise multiplication
./         Elementwise division
grad       Gradient of the objective functional
Appendix C
Difference imaging
Another approach for the dynamic imaging problem is to calculate the difference data between two time points by subtracting k-space profiles (5 in this experiment) at the same positions belonging to different time points. Assuming that the background structures remain stationary, the difference data will show only the areas where an object moved or changed shape.
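Because the Radon and Fourier transforms are linear, subtracting profiles acquired at the same positions cancels the stationary background exactly. A toy sketch, with simple axis sums standing in for actual radial profiles, illustrates this:

```python
import numpy as np

def project(image):
    """Toy projection: row and column sums stand in for two
    radial profiles of a Radon transform."""
    return np.concatenate([image.sum(axis=0), image.sum(axis=1)])

background = np.ones((4, 4))
obj_t1 = np.zeros((4, 4)); obj_t1[1, 1] = 5.0   # object at time point 1
obj_t2 = np.zeros((4, 4)); obj_t2[2, 2] = 5.0   # object has moved by time point 2

data_t1 = project(background + obj_t1)
data_t2 = project(background + obj_t2)

# By linearity, the stationary background cancels exactly:
diff = data_t2 - data_t1
assert np.allclose(diff, project(obj_t2) - project(obj_t1))
```

The difference data depends only on the moving object, which is what allows shape estimation without knowledge of the background.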
Figure C.1: Difference imaging approach with stationary background. (Top Left) Phantom image at time point 1. (Top Middle) Phantom image at time point 8. (Top Right) Image difference between time points 1 and 8. (Bottom Left) Phantom sinogram data at time point 1. (Bottom Middle) Phantom sinogram data at time point 8. (Bottom Right) Sinogram difference between time points 1 and 8.
This requires that the collection of data at each time point is done at the same locations in k-space instead of with the interleaved sampling pattern. In a multiple coil experiment this will complicate the construction of sensitivity matrices from time averaged images and will require the use of a body coil for this task. The benefit of such an approach is that in single breath-hold cardiac MRI most of the motion in the image is from the heart and especially the left ventricle. The background structures are practically stationary. We present results on simulated data with 15% Gaussian noise added to the Radon data (fig. C.1) to demonstrate the power of this approach. In fig. C.2 the ground truth images and the reconstructed shapes are shown. In fig. C.3 we compare the reconstructions with manually segmented shapes. Data was simulated from a dynamic phantom at 12 time points. Note that the difference imaging method does not require knowledge of the stationary background structure. We initialize the shape parameters exactly at the boundary of the object of interest. For each subsequent time point, we calculate the difference data between the current time point and the previous one and estimate the shape using the Kalman filter algorithm. The Kalman filter algorithm essentially estimates motion by comparing the predicted and measured difference data. The predicted motion is the difference between the predictions in the Radon space at time points t and t − 1. The measured motion, as described before, is the difference between the data at two sequential time points. The interior of the object is filled with a known constant value.
Figure C.2: Difference imaging reconstructions. The numbers on the left column indicate the time points in the sequence (1, 4, 7, 12). (Left) Ground truth images. (Right) Reconstructed shapes superimposed on ground truth.
Figure C.3: (Left) Plot of the Dice similarity coefficient over time. (Right) Predicted and ground truth areas over time.
The immediate problem with the difference imaging approach is that unless we can guarantee convergence of the estimated shapes to the true ones, errors will be propagated further in time. If the shape does not approximate the true boundaries precisely, then on the next time step we will be modelling different motion than what is happening in the data, as our initial estimated position is wrong. This directly implies that the initialization of the shape parameters has to be exact. In the case of single breath-hold cardiac MRI, there is more than one shape in motion. Even though most of the background is removed, there are still some differences at locations other than the heart (fig. C.4). Using a simple model, which estimates only the shapes of the left and right ventricles, will be a cause of error. This error will be amplified as the method propagates in time.

If the model is very precise and convergence can be guaranteed, then the difference imaging approach could give exciting results. Assuming that such a precise model exists, the need for using the same angles in the sampling sequence can be removed by predicting the data at any angle from the image space model.
Figure C.4: Difference imaging approach with stationary background. (Top Left) Phantom image at time point 1. (Top Middle) Phantom image at time point 8. (Top Right) Image difference between time points 1 and 8. (Bottom Left) Phantom sinogram data at time point 1. (Bottom Middle) Phantom sinogram data at time point 8. (Bottom Right) Sinogram difference between time points 1 and 8.
Bibliography
[1] R. Acar and C.R. Vogel. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Problems, 10(6):1217–1229, 1994.
[2] R. Adrain. Research concerning the probabilities of the errors which happen in making
observations. The Analyst, 1:93–109, 1808.
[3] A. Aldroubi, A.F. Laine, and M.A. Unser, editors. Wavelet Applications in Signal and
Image Processing VIII, volume 4119 of Proceedings of SPIE, San Diego, CA, USA,
2000. SPIE.
[4] B.D.O. Anderson and J.B. Moore. Optimal Filtering. Prentice-Hall, Englewood Cliffs, N.J., 1979.
[5] A. Aguado, M. Nixon, and M. Montiel. Parameterizing Arbitrary Shapes via Fourier Descriptors for Evidence-Gathering Extraction. Computer Vision and Image Understanding, 69(2):202–221, 1998.
[6] S.L. Bacharach, M.V. Green, J.S. Borer, M.A. Douglas, H.G. Ostrow, and G.S. Johnston. A real-time system for multi-image gated cardiac studies. Journal of Nuclear Medicine, 18:79–84, 1977.
[7] A. Bachem, M. Grötschel, and B. Korte, editors. Mathematical Programming: The State of the Art. Springer-Verlag, Berlin, 1983.
[8] D. Baroudi, J. Kaipio, and E. Somersalo. Dynamical electric wire tomography: a time
series approach. Inverse Problems, 14:799–813, 1998.
[9] X.L. Battle, Y.J. Bizais, C. Le Rest, and A. Turzo. Tomographic Reconstruction Using Free-Form Deformation Models. SPIE, 3661:356–366, 1999.
[10] X.L. Battle, G.S. Cunningham, and K.M. Hanson. 3D Tomographic Reconstruction
Using Geometrical Models. SPIE, 3034:346–357, 1997.
[11] M.S. Bazaraa, H.D. Sherali, and C.M. Shetty. Nonlinear Programming. John Wiley and
Sons, New York, 1993.
[12] M. Bertero and P. Boccacci. Introduction to Inverse Problems in Imaging. Institute of
Physics Publishing, Bristol, 1998.
[13] Å. Björck. Numerical methods for least squares problems. SIAM, Philadelphia, 1996.
[14] M. Blaimer, F. Breuer, M. Mueller, R. M. Heidemann, M. A. Griswold, and P. M. Jakob. SMASH, SENSE, PILS, GRAPPA: how to choose the optimal method. Topics in Magnetic Resonance Imaging, 15(4):223–236, 2004.
[15] R.N. Bracewell and A.C. Riddle. Inversion of fan-beam scans in radio astronomy. The Astrophysical Journal, 150:427–434, 1967.
[16] R.G. Brown and P.Y.C. Hwang. Introduction to Random Signals and Applied Kalman Filtering. John Wiley & Sons, New York, 1985.
[17] M.D. Buhmann. Radial basis functions. Acta Numerica, pages 1–38, 2000.
[18] H. Burkhardt and B. Neumann, editors. Computer Vision - ECCV '98, volume II of LNCS 1407, Berlin, 1998. Springer-Verlag.
[19] M. Bydder, D.J. Larkman, and J.V. Hajnal. Generalized SMASH Imaging. Magnetic
Resonance in Medicine, 47:160–170, 2002.
[20] E.J. Candes and D.L. Donoho. Curvelets and Reconstruction of Images from Noisy
Radon Data. In [3], pages 108–117, 2000.
[21] E.J. Candes and D.L. Donoho. Recovering edges in ill-posed inverse problems: optimality of curvelet frames. Annals of Statistics, 30:784–842, 2000.
[22] E.J. Candes, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, in print. Available at http://www.acm.caltech.edu/~emmanuel/papers/StableRecovery.pdf.
[23] E.J. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
[24] V. Caselles, R. Kimmel, and G. Sapiro. Geodesic active contours. International Journal
of Computer Vision, 22(1):61–79, 1997.
[25] T. Chan, S. Esedoglu, F. Park, and A. Yip. Total Variation Image Restoration: Overview and Recent Developments. In [132], pages 1–18. Springer-Verlag, 2005.
[26] T. Chan, A. Marquina, and P. Mulet. High-Order Total Variation-Based Image Restoration. SIAM Journal on Scientific Computing, 22(2):503–516, 2000.
[27] T. F. Chan, G. H. Golub, and P. Mulet. A nonlinear primal-dual method for total variation based image restoration. SIAM Journal on Scientific Computing, 20(6):1964–1977, 1999.
[28] P. Charbonnier, L. Blanc-Féraud, G. Aubert, and M. Barlaud. Deterministic Edge-Preserving Regularization in Computed Imaging. IEEE Transactions on Image Processing, 6(2):298–311, 1997.
[29] R. Clackdoyle and F. Noo. A large class of inversion formulae for the 2D Radon transform of functions of compact support. Inverse Problems, 20(4):1281–1291, 2004.
[30] J. W. Cooley and J. W. Tukey. An Algorithm for the Machine Calculation of Complex
Fourier Series. Mathematics of Computation, 19(90):297–301, 1965.
[31] T.F. Cootes, G.J. Edwards, and C.J. Taylor. Active appearance models. In [18], pages 483–498. Springer-Verlag, 1998.
[32] T.F. Cootes, C.J. Taylor, D.H. Cooper, and J. Graham. Active shape models - their training and application. Computer Vision and Image Understanding, 61(1):38–59, 1995.
[33] R. Courant and D. Hilbert. Methods of mathematical Physics, Vol.2, Partial differential
equations. Interscience, New York, 1962.
[34] A.H. Delaney and Y. Bresler. Globally Convergent Edge-Preserving Regularized Reconstruction: An Application to Limited-Angle Tomography. IEEE Transactions on Image Processing, 7(2):204–221, 1998.
[35] L.R. Dice. Measures of the amount of ecological association between species. Ecology, 26:297–302, 1945.
[36] D.C. Dobson and F. Santosa. Recovery of blocky images from noisy and blurred data. SIAM Journal on Applied Mathematics, 56(4):1181–1198, 1996.
[37] Y. Dodge, editor. Statistical Data Analysis Based on the L1-norm and Related Methods. North-Holland, Amsterdam, New York, Oxford, Tokyo, 1987.
[38] O. Dorn and D. Lesselier. Level set methods for inverse scattering. Inverse Problems, 22(4):R67–R131, 2006.
[39] A. Douiri, M. Schweiger, J. Riley, and S. Arridge. Local diffusion regularization method for optical tomography reconstruction by using robust statistics. Optics Letters, 30(18):2439–2441, 2005.
[40] D.B. Duncan and S.D. Horn. Linear Dynamic Recursive Estimation from the Viewpoint of Regression Analysis. Journal of the American Statistical Association, 67(340):815–821, 1972.
[41] W.A. Edelstein, J.M.S. Hutchison, G. Johnson, and T. Redpath. Spin warp NMR imaging and applications to human whole-body imaging. Physics in Medicine and Biology, 25:751–756, 1980.
[42] V. Elangovan and R.T. Whitaker. From sinograms to surfaces: A direct approach to the
segmentation of tomographic data. In [127], pages 213–223, 2001.
[43] A. Evans, T. Lambrou, A. Linney, and A. Todd-Pokropek. Automatic Segmentation of Liver using a Topology Adaptive Snake. In Proceedings of the Second International Conference on Biomedical Engineering, Innsbruck, Austria.
[44] D.A. Feinberg, J.D. Hale, J.C. Watts, L. Kaufman, and A. Mark. Halving MR imaging time by conjugation: demonstration at 3.5 kG. Radiology, 161:527–531, 1986.
[45] A.V. Fiacco and G.P. McCormick. Nonlinear Programming: Sequential Unconstrained Minimization Techniques. Wiley, New York, 1968.
[46] M. Fischler and R. Elschlager. The representation and matching of pictorial structures. IEEE Transactions on Computers, 22(1):67–92, 1973.
[47] B. Fletcher, M.D. Jacobstein, A.D. Nelson, T.A. Riemenschneider, and R.J. Alfidi. Gated magnetic resonance imaging of congenital cardiac malformations. Radiology, 150:137–140, 1984.
[48] R. Fletcher. Penalty Functions. In [7], pages 87–114. Springer-Verlag, 1983.
[49] R. Fletcher. Practical Methods for Optimization. John Wiley and Sons, Chichester, 1987.
[50] M. Flickner, J. Hafner, E.J. Rodriguez, and J.L.C. Sanz. Periodic quasi-orthogonal spline bases and applications to least-squares curve fitting of digital images. IEEE Transactions on Image Processing, 5(1):71–88, 1996.
[51] M. Foster. An application of the Wiener-Kolmogorov smoothing theory to matrix inversion. Journal of the SIAM, 9:387–392, 1961.
[52] E. Garduño and G.T. Herman. Implicit surface visualization of reconstructed biological molecules. Theoretical Computer Science, 346:281–299, 2005.
[53] C.F. Gauss. Theoria motus corporum coelestium in sectionibus conicis solem ambientium. F. Perthes et I. H. Besser, Hamburg, Germany, 1809.
[54] T. Gevers and A.W.M. Smeulders. Interactive query formulation for object search. In [81], pages 593–600. Springer-Verlag, 1999.
[55] G. Giraldi, E. Strauss, and A. Oliveira. Dual-T-Snakes model for medical imaging segmentation. Pattern Recognition Letters, 24(7):993–1003, 2003.
[56] R.T. Go, W.J. MacIntyre, H.N. Yeung, D.M. Kramer, M. Geisinger, W. Chilcote, C. George, J.K. O'Donnell, D.S. Moodie, and T.F. Meaney. Volume and planar gated cardiac magnetic resonance imaging: A correlative study of normal anatomy with thallium-201 SPECT and cadaver sections. Radiology, 150:129–135, 1984.
[57] H.H. Goldstine. A History of the Calculus of Variations from the 17th through the 19th Century. Springer-Verlag, New York, Heidelberg, Berlin, 1980.
[58] R. Gordon. A tutorial on ART. IEEE Transactions on Nuclear Science, 21:78–93, 1974.
[59] R. Gordon and G. Herman. Reconstruction of pictures from their projections. Communications of the ACM, 14(12):759–768, 1971.
[60] D.G. Grant. Tomosynthesis: a Three-Dimensional Radiographic Imaging Technique. IEEE Transactions on Biomedical Engineering, 19(1):20–28, 1972.
[61] J. J. Green. Approximation with the radial basis functions of Lewitt. In Jeremy Levesley, Iain Anderson, and John C. Mason, editors, Algorithms for Approximation IV, pages 212–219. University of Huddersfield, 2002.
[62] J. J. Green. Discretising Barrick's equations. In S. G. Sajjadi and Julian C. R. Hunt, editors, Proceedings of Wind over Waves II: Forecasting and Fundamentals of Applications, pages 219–232, Chichester, 2003. IMA and Horwood.
[63] M.A. Griswold, P.M. Jacob, R.M. Heidemann, M. Nittka, V. Jellus, J. Wang, B. Kiefer, and A. Haase. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magnetic Resonance in Medicine, 47:1202–1210, 2002.
[64] M.A. Griswold, P.M. Jakob, R.R. Edelman, and D.K. Sodickson. An RF array designed for cardiac SMASH imaging. In Proceedings of ISMRM, 6th Scientific Meeting, Sydney, Australia.
[65] H. Feng, W. C. Karl, and D. A. Castañon. A curve evolution approach to object-based tomographic reconstruction. IEEE Transactions on Image Processing, 12(1):44–47, 2003.
[66] M.S. Hansen, C. Baltes, J. Tsao, S. Kozerke, K.P. Pruessmann, and H. Eggers. k-t BLAST Reconstruction From Non-Cartesian k-t Space Sampling. Magnetic Resonance in Medicine, 55:85–91, 2006.
[67] P.C. Hansen. Numerical tools for analysis and solution of Fredholm integral equations of the first kind. Inverse Problems, 8:849–872, 1992.
[68] P.C. Hansen. Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. SIAM, Philadelphia, 1998.
[69] K. M. Hanson and G. W. Wecksung. Bayesian Estimation of 3D Objects from Few Radiographs. Journal of the Optical Society of America, 73:1501–1509, 1983.
[70] K. M. Hanson and G. W. Wecksung. Local basis-function approach to computed tomography. Applied Optics, 24:4028–4039, 1985.
[71] H.L. Harter. The Method of Least Squares and Some Alternatives - Part I. International Statistical Review, 42(2):147–174, 1974.
[72] H.L. Harter. The Method of Least Squares and Some Alternatives - Part II. International Statistical Review, 42(3):235–264+282, 1974.
[73] H.L. Harter. The Method of Least Squares and Some Alternatives - Addendum to Part IV. International Statistical Review, 43(3):273–278, 1975.
[74] H.L. Harter. The Method of Least Squares and Some Alternatives - Part III. International Statistical Review, 43(1):1–44, 1975.
[75] H.L. Harter. The Method of Least Squares and Some Alternatives - Part IV. International Statistical Review, 43(2):125–190, 1975.
[76] H.L. Harter. The Method of Least Squares and Some Alternatives - Part V. International Statistical Review, 43(3):269–272, 1975.
[77] H.L. Harter. The Method of Least Squares and Some Alternatives - Part VI: Subject and Author Indexes. International Statistical Review, 44(1):113–159, 1976.
[78] R.M. Heidemann, M.A. Griswold, A. Haase, and P.M. Jacob. VD-AUTO-SMASH imaging. Magnetic Resonance in Medicine, 45:1066–1074, 2001.
[79] G.T. Herman. Image reconstruction from projections : the fundamentals of computerized
tomography. Academic Press, New York, London, 1980.
[80] P.J. Huber. Robust Statistics. John Wiley and Sons, New York, 1981.
[81] D.P. Huijsmans and A.W.M. Smeulders, editors. VISUAL '99, LNCS 1614, Berlin, 1999. Springer-Verlag.
[82] J.I. Jackson, C.H. Meyer, D.G. Nishimura, and A. Macovski. Selection of a convolution function for Fourier inversion using gridding. IEEE Transactions on Medical Imaging, 10(3):473–478, 1991.
[83] R. Jain, R. Kasturi, and B.G. Schunck. Machine Vision. McGraw-Hill, New York, 1995.
[84] P.M. Jakob, M.A. Griswold, R.R. Edelman, W.J. Manning, and D.K. Sodickson. Cardiac imaging with SMASH. In Proceedings of ISMRM, 6th Scientific Meeting, Sydney, Australia.
[85] J.E. Dennis, Jr. and R.B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs, New Jersey, 1983.
[86] J.P. Kaipio and E. Somersalo. Statistical and Computational Inverse Problems, volume 160 of Applied Mathematical Sciences. Springer-Verlag, New York, 2004.
[87] R.E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME - Journal of Basic Engineering, 82(Series D):35–45, 1960.
[88] Chien-Min Kao, M.N. Wernick, and Chin-Tu Chen. Kalman sinogram restoration for fast and accurate PET image reconstruction. IEEE Transactions on Nuclear Science, 45(6):3022–3029, 1998.
[89] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. International
Journal of Computer Vision, 1(4):321–331, 1988.
[90] P. Kellman, F.H. Epstein, and E.R. McVeigh. Adaptive sensitivity encoding incorporating temporal filtering (TSENSE). Magnetic Resonance in Medicine, 45:846–852, 2001.
[91] P. Kellman, J.M. Sorger, F.H. Epstein, and E.R. McVeigh. Low latency temporal filter design for real-time MRI using UNFOLD. Magnetic Resonance in Medicine, 44:933–939, 2000.
[92] M. Kerckhove, editor. Scale-Space 2001, LNCS 2106, Berlin, 2001. Springer-Verlag.
[93] K.Y. Kim, S.I. Kang, M.C. Kim, S. Kim, Y.J. Lee, and M. Vauhkonen. Dynamic image reconstruction in electrical impedance tomography with known internal structures. IEEE Transactions on Magnetics, 38(2):1301–1304, 2002.
[94] V. Kolehmainen, S.R. Arridge, W.R.B. Lionheart, M. Vauhkonen, and J.P. Kaipio. Recovery of region boundaries of piecewise constant coefficients of an elliptic PDE from boundary data. Inverse Problems, 15:1375–1391, 1999.
[95] V. Kolehmainen, S. Prince, S.R. Arridge, and J.P. Kaipio. State-estimation approach to the non-stationary optical tomography problem. Journal of the Optical Society of America A, 20(5):876–889, 2003.
[96] V. Kolehmainen, S. Siltanen, S. Järvenpää, J.P. Kaipio, P. Koistinen, M. Lassas, J. Pirttilä, and E. Somersalo. Statistical inversion for medical X-ray tomography with few radiographs: II. Application to dental radiology. Physics in Medicine and Biology, 48:1465–1490, 2003.
[97] V. Kolehmainen, A. Voutilainen, and J.P. Kaipio. Estimation of non-stationary region boundaries in EIT: state estimation approach. Inverse Problems, 17:1937–1956, 2001.
[98] A.N. Kolmogorov. Interpolation and Extrapolation of Stationary Random Sequences, translated by W. Doyle and J. Selin. Rept. RM-3090-PR, RAND Corp., Santa Monica, California, 1962.
[99] F.P. Kuhl and C.R. Giardina. Elliptic Fourier features of a closed contour. Computer Graphics and Image Processing, 18:236–258, 1982.
[100] A. Kumar, D. Welti, and R.R. Ernst. NMR Fourier Zeugmatography. Journal of Magnetic Resonance, 18:69–83, 1975.
[101] P. Lanzer, E.H. Botvinick, N.B. Schiller, L.E. Crooks, M. Arakawa, L. Kaufman, P.L. Davis, R. Herfkens, M.J. Lipton, and C.B. Higgins. Cardiac imaging using gated magnetic resonance. Radiology, 150:121–127, 1984.
[102] M. Lassas and S. Siltanen. Can one use total variation prior for edge-preserving Bayesian inversion? Inverse Problems, 20(5):1537–1563, 2004.
[103] P.C. Lauterbur. Image Formation by Induced Local Interactions: Examples Employing Nuclear Magnetic Resonance. Nature, 242:190–191, 1973.
[104] A.M. Legendre. Nouvelles méthodes pour la détermination des orbites des comètes. Courcier, Paris, 1805.
[105] K. Levenberg. A method for the solution of certain non-linear problems in least squares. Quart. Appl. Math., 2:164–168, 1944.
[106] R.M. Lewitt. Multidimensional digital image representations using generalized Kaiser-Bessel window functions. Journal of the Optical Society of America A, 7(10):1834–1846, 1990.
[107] R.M. Lewitt. Alternatives to voxels for image representation in iterative reconstruction algorithms. Physics in Medicine and Biology, 37:705–716, 1992.
[108] Y. Li and F. Santosa. A Computational Algorithm for Minimizing Total Variation in Image Restoration. IEEE Transactions on Image Processing, 5(6):987–995, 1996.
[109] Chun-Shin Lin and Chia-Lin Hwang. New Forms of Shape Invariants From Elliptic Fourier Descriptors. Pattern Recognition, 20(5):535–545, 1987.
[110] L.B. Lucy. An iterative method for the rectification of observed distributions. Astronomical Journal, 79(6):745–754, 1974.
[111] D.G. Luenberger. Linear and Nonlinear Programming. Addison-Wesley, Reading, Massachusetts, 1984.
[112] M. Lustig, J.M. Santos, D.L. Donoho, and J.M. Pauly. k-t SPARSE: High frame rate dynamic MRI exploiting spatio-temporal sparsity. In ISMRM Real-Time MRI Workshop 2006, Santa Monica, CA, USA.
[113] B. Madore. Using UNFOLD to remove artifacts in parallel imaging and in partial-Fourier imaging. Magnetic Resonance in Medicine, 48:493–501, 2002.
[114] B. Madore, G.H. Glover, and N.J. Pelc. UNaliasing by Fourier-encoding the Overlaps using the temporaL Dimension (UNFOLD), applied to cardiac imaging and fMRI. Magnetic Resonance in Medicine, 42:813–828, 1999.
[115] R. Marabini, G.T. Herman, and J.M. Carazo. 3D reconstruction in electron microscopy using ART with smooth spherically symmetric volume elements (blobs). Ultramicroscopy, 72:53–65, 1998.
[116] R. Marabini, E. Rietzel, R. Schröder, G.T. Herman, and J.M. Carazo. Three-dimensional reconstruction from reduced sets of very noisy images acquired following a single-axis tilt schema: application of a new three-dimensional reconstruction algorithm and objective comparison with weighted backprojection. Journal of Structural Biology, 120:363–371, 1997.
[117] D.W. Marquardt. An Algorithm for Least-Squares Estimation of Nonlinear Parameters. Journal of the Society for Industrial and Applied Mathematics, 11(2):431–441, 1963.
[118] S. Matej and R.M. Lewitt. Practical considerations for 3D image reconstruction using spherically symmetric volume elements. IEEE Transactions on Medical Imaging, 15:68–78, 1996.
[119] T. McInerney, G. Hamarneh, M. Shenton, and D. Terzopoulos. Deformable organisms for automatic medical image analysis. Medical Image Analysis, 6:251–266, 2002.
[120] T. McInerney and D. Terzopoulos. Deformable models in medical image analysis: a survey. Medical Image Analysis, 1(2):91–108, 1996.
[121] T. McInerney and D. Terzopoulos. T-snakes: Topology adaptive snakes. Medical Image Analysis, 4:73–91, 2000.
[122] K. McLeish, S. Kozerke, W.R. Crum, and D.L.G. Hill. Free-breathing radial acquisitions of the heart. Magnetic Resonance in Medicine, 52:1127–1135, 2004.
[123] F.G. Meyer, R.T. Constable, A.J. Sinusas, and J.S. Duncan. Tracking myocardial deformation using phase contrast MR velocity fields: A stochastic approach. IEEE Transactions on Medical Imaging, 15(4):453–465, 1996.
[124] E.H. Moore. On the reciprocal of the general algebraic matrix. Bulletin of the American Mathematical Society, 26:394–395, 1920.
[125] J.J. Moré. Recent Developments in Algorithms and Software for Trust Region Methods. In [7], pages 258–287. Springer-Verlag, 1983.
[126] F. Natterer. The Mathematics of Computerized Tomography. Wiley, New York, 1986.
[127] W. Niessen and M. Viergever, editors. MICCAI 2001, LNCS 2208, Berlin, 2001. Springer-Verlag.
[128] H. Nyquist. Certain Topics in Telegraph Transmission Theory. Transactions of the A.I.E.E., 47:617–644, 1928.
[129] S. Osher and J.A. Sethian. Fronts Propagating with Curvature Dependent Speed: Algorithms Based on Hamilton-Jacobi Formulations. J. Comp. Phys., 79:12–49, 1988.
[130] J.D. O'Sullivan. A fast sinc gridding algorithm for Fourier inversion in computer tomography. IEEE Transactions on Medical Imaging, 4(4):200–207, 1985.
[131] N. Paragios. A Variational Approach for the Segmentation of the Left Ventricle in Cardiac Image Analysis. International Journal of Computer Vision, 50(3):345–362, 2002.
[132] N. Paragios, Y. Chen, and O. Faugeras, editors. Handbook of Mathematical Models in Computer Vision. Springer, Berlin, 2005.
[133] E. Persoon and King-Sun Fu. Shape Discrimination Using Fourier Descriptors. IEEE Transactions PAMI, 8(3):388–397, 1986.
[134] R. Penrose. A generalized inverse for matrices. Proceedings of the Cambridge Philosophical Society, 51:406–413, 1955.
[135] M. Persson, D. Bone, and H. Elmqvist. Total variation norm for three-dimensional iterative reconstruction in limited view angle tomography. Physics in Medicine and Biology, 46:853–866, 2001.
[136] D.L. Phillips. A Technique for the Numerical Solution of Certain Integral Equations of the First Kind. Journal of the ACM (JACM), 9(1):84–97, 1962.
[137] W. Pratt. Digital Image Processing. Wiley, New York, 1991.
[138] K.P. Pruessmann, M. Weiger, P. Börnert, and P. Boesiger. Advances in Sensitivity Encoding With Arbitrary k-Space Trajectories. Magnetic Resonance in Medicine, 46:638–651, 2001.
[139] K.P. Pruessmann, M. Weiger, M.B. Scheidegger, and P. Boesiger. SENSE: Sensitivity Encoding for Fast MRI. Magnetic Resonance in Medicine, 42:952–962, 1999.
[140] J. Radon. Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten. Berichte Sächsische Akademie der Wissenschaften, Leipzig, Math. Phys. Kl., 69:262–277, 1917.
[141] G.N. Ramachandran and A.V. Lakshminarayanan. Three dimensional reconstructions from radiographs and electron micrographs: Application of convolution instead of Fourier transforms. Proceedings of the National Academy of Sciences, 68:2236–2240, 1971.
[142] W.H. Richardson. Bayesian-Based Iterative Method of Image Restoration. Journal of the Optical Society of America, 62(1):55–59, 1972.
[143] L. Rudin, S. Osher, and E. Fatemi. Nonlinear Total Variation based Noise Removal Algorithms. Physica D, 60:259–268, 1992.
[144] C. Sagiv, N.A. Sochen, and Y.Y. Zeevi. Geodesic active contours applied to texture feature space. In [92], pages 344–352. Springer-Verlag, 2001.
[145] F. Santosa. A level-set approach for inverse problems involving obstacles. ESAIM: Control, Optimisation and Calculus of Variations, 1:17–33, 1996.
[146] K. Sauer, J. Sachs, Jr., and C. Klifa. Bayesian Estimation of 3-D Objects from Few Radiographs. IEEE Transactions on Nuclear Science, 41(5):1780–1790, 1994.
[147] M. Schweiger and S.R. Arridge. Image reconstruction in optical tomography using local basis functions. Journal of Electronic Imaging, 12(4):583–593, 2003.
[148] M. Schweiger, S.R. Arridge, O. Dorn, A. Zacharopoulos, and V. Kolehmainen. Reconstructing Absorption and Diffusion Shape Profiles in Optical Tomography by a Level Set Technique. Optics Letters, 31(4):471–473, 2006.
[149] S. Sclaroff and J. Isidoro. Active blobs. In Proc. International Conference on Computer Vision, pages 1146–1153, 1998.
[150] L.A. Shepp and B.F. Logan. The Fourier reconstruction of a head section. IEEE Transactions on Nuclear Science, 21:21–43, 1974.
[151] S. Siltanen, V. Kolehmainen, S. Järvenpää, J.P. Kaipio, P. Koistinen, M. Lassas, J. Pirttilä, and E. Somersalo. Statistical inversion for medical X-ray tomography with few radiographs: I. General theory. Physics in Medicine and Biology, 48:1437–1463, 2003.
[152] R.C. Smith and R.C. Lange. Understanding Magnetic Resonance Imaging. CRC Press, Boca Raton, New York, 1998.
[153] D.K. Sodickson. Tailored SMASH Image Reconstructions for Robust In Vivo Parallel MR Imaging. Magnetic Resonance in Medicine, 44:243–251, 2000.
[154] D.K. Sodickson, M.A. Griswold, P.M. Jakob, R.R. Edelman, and W.J. Manning. Signal-to-noise ratio and signal-to-noise efficiency in SMASH imaging. Magnetic Resonance in Medicine, 41:1009–1022, 1999.
[155] D.K. Sodickson and W.J. Manning. Simultaneous acquisition of spatial harmonics (SMASH): fast imaging with radiofrequency coil arrays. Magnetic Resonance in Medicine, 38:591–603, 1997.
[156] H.W. Sorenson. Least-squares estimation: from Gauss to Kalman. IEEE Spectrum, 7:63–68, 1970.
[157] L.H. Staib and J.S. Duncan. Boundary finding with parametrically deformable contour models. IEEE Transactions PAMI, 14(11):1061–1075, 1992.
[158] H. Stark, J.W. Woods, I. Paul, and R. Hingorani. Direct Fourier Reconstruction in Computer Tomography. IEEE Transactions on Acoustics, Speech and Signal Processing, 29(2):237–245, 1981.
[159] M.B. Stegmann, J.C. Nilsson, and B.A. Grønning. Automated segmentation of cardiac magnetic resonance images. In Proceedings of ISMRM, 9th Scientific Meeting, Glasgow, Scotland, UK.
[160] S.M. Stigler. Mathematical Statistics in the Early States. The Annals of Statistics, 6(2):239–265, 1978.
[161] A. Tarantola. Inverse Problem Theory and Methods for Model Parameter Estimation. SIAM, Philadelphia, 2004.
[162] D. Terzopoulos. Artificial life for computer graphics. Communications of the ACM, 42(8):32–42, 1999.
[163] D. Terzopoulos and K. Fleischer. Deformable models. The Visual Computer, 4(6):306–331, 1988.
[164] A. Tikhonov and V. Arsenin. Solutions of Ill-Posed Problems. Winston and Sons, Washington, D.C., 1977.
[165] A.N. Tikhonov. On the stability of inverse problems. Dokl. Akad. Nauk SSSR, 39(5):195–198, 1943.
[166] J. Tsao. On the UNFOLD method. Magnetic Resonance in Medicine, 47:202–207, 2002.
[167] J. Tsao, B. Behnia, and A.G. Webb. Unifying linear prior-information-driven methods for accelerated image acquisition. Magnetic Resonance in Medicine, 46:652–660, 2001.
[168] J. Tsao, P. Boesiger, and K. Pruessmann. k-t BLAST and k-t SENSE: Dynamic MRI With High Frame Rate Exploiting Spatiotemporal Correlations. Magnetic Resonance in Medicine, 50:1031–1042, 2003.
[169] J. Tsao, K. Pruessmann, and P. Boesiger. Prior-information-enhanced dynamic imaging using single or multiple coils with k-t BLAST and k-t SENSE. In Proceedings of ISMRM, 10th Scientific Meeting, Honolulu, Hawaii, USA.
[170] B.S. Tsirelson. Not every Banach space contains an imbedding of lp or c0. Functional Analysis and Its Applications, 8(2):138–141, 1974.
[171] J.M. Varah. A Practical Examination of Some Numerical Methods for Linear Discrete Ill-Posed Problems. SIAM Review, 21(1):100–111, 1979.
[172] M.T. Vlaardingerbroek and J.A. den Boer. Magnetic Resonance Imaging. Springer-Verlag, Berlin, 1999.
[173] C.R. Vogel. Non-convergence of the L-curve regularization parameter selection method. Inverse Problems, 12:535–547, 1996.
[174] C.R. Vogel. Computational Methods for Inverse Problems. Frontiers in Applied Mathematics. SIAM, Philadelphia, 2002.
[175] C.R. Vogel and M.E. Oman. A multigrid method for total variation-based image denoising. In K. Bowers and J. Lund, editors, Progress in Systems and Control Theory: Computation and Control IV, Basel, 1995. Birkhäuser.
[176] C.R. Vogel and M.E. Oman. Iterative methods for total variation denoising. SIAM Journal on Scientific Computing, 17:227–238, 1996.
[177] C.R. Vogel and M.E. Oman. Fast, robust total variation-based reconstruction of noisy, blurred images. IEEE Transactions on Image Processing, 7:813–824, 1998.
[178] B. Wah and T. Wang. Efficient and adaptive Lagrange-multiplier methods for nonlinear continuous global optimization. Journal of Global Optimization, 14(1):1–25, 1999.
[179] R.L. Webber, R.A. Horton, D.A. Tyndall, and J.B. Ludlow. Tuned-aperture computed tomography (TACT). Theory and application for three-dimensional dentoalveolar imaging. Dentomaxillofacial Radiology, 26:53–62, 1997.
[180] M. Weiger, K.P. Pruessmann, and P. Boesiger. Cardiac real-time imaging using SENSE. Magnetic Resonance in Medicine, 43:177–184, 2000.
[181] H. Wendland. Error estimates for interpolation by compactly supported radial basis functions of minimal degree. Journal of Approximation Theory, 93:258–272, 1998.
[182] R.T. Whitaker and V. Elangovan. A direct approach to estimating surfaces in tomographic data. Medical Image Analysis.
[183] P. Whittle. Optimization under Constraints. Wiley-Interscience, London, 1971.
[184] B. Widrow. The rubber mask technique. Pattern Recognition, 5(3):175–211, 1973.
[185] N. Wiener. The Extrapolation, Interpolation and Smoothing of Stationary Time Series. John Wiley & Sons, New York, 1949.
[186] Y. Wu, Eun-Kee Jeong, D.L. Parker, and A.L. Alexander. UNFOLD using a temporal subtraction and spectral energy comparison technique. Magnetic Resonance in Medicine, 48:559–564, 2002.
[187] J.C. Ye, Y. Bresler, and P. Moulin. A Self-Referencing Level-Set Method for Image Reconstruction from Sparse Fourier Samples. International Journal of Computer Vision, 50(3):253–270, 2002.
[188] D.F. Yu and J.A. Fessler. Edge-preserving tomographic reconstruction with nonlocal regularization. IEEE Transactions on Medical Imaging, 21:159–173, 2002.
[189] A. Zacharopoulos, S. Arridge, O. Dorn, V. Kolehmainen, and J. Sikora. Three-dimensional reconstruction of shape and piecewise constant region values for optical tomography using spherical harmonic parametrization and a boundary element method. Inverse Problems, 22(5):1509–1532, 2006.
.12 Simulated data with known background. (Bottom Left) Final superimposed to ground truth image.2 7. . . (Bottom Left) Final superimposed to ground truth image. . 111 Plot of the derivative of ψ(t) for different values of β. interior (dotted line) and boundary (dashed line). . . . . . . .3 Ground truth image for the simulated experiments. . . 110 6. . . . . . . . . (Top Right) Initial predicted image. . 115 7. .4 Simulated data with unknown background. . . . 116 Simulated data with unknown background. . . .6509. . The error for the reconstructed image is rms = 0. . .1 . . . .5 7. (Top Left) Initial superimposed to ground truth image. . . 108 6. . . . . . . . 6. . (Top Right) Initial predicted image.
. . . . . . . . . . . (Left) Plot of the Dice similarity coefﬁcient over time (Middle) Plot of rms over time. . The error for the reconstructed image is rms = 0. . . . . . . . . 130 8. . . . . . . . . The numbers on the left column indicate the time point in the sequence. . . . 121 8. Filtered backprojection (solid line) and temporally correlated combined approach (dotted line). . 127 Reconstructions from simulated data.List of Figures 7. . . . (Left) Reconstructed shapes superimposed on ground truth images. . .1 8. . . . . . . . . . . . . . . (Left) Ground truth. . . . . 129 8.9 Ground truth image from fully sampled multiple coil data. . . . . . . (Left) Filtered backprojection. . . . (Right) Reconstructed images using shape speciﬁc T Vβ approach. . . . . . . . . . . . . . . (Left) Enhanced reconstructed image. (Right) Reconstructed images with restricted interior intensities. (Middle Right) Shape speciﬁc total variation method. . . . . . . 130 8. . . . . . . . . .10 Measured data with unknown background. . . . . . (Right) Plot of the gradient norm of the shape reconstruction 17 over iteration. . (Left) Enhanced reconstructed image. . . . . . . . (Right) Combined shape and image method. . . . . . . (Bottom Left) Final superimposed to ground truth image. . . (Top Left) Initial superimposed to ground truth image. 120 7.56808. . . . . . . . Multiple coils. . . . . .2 Interleaved sampling pattern. . . . . . . . . . . . . . (Right) Plot of the gradient norm of the shape reconstruction over iteration. . . . . (Left) Reconstructed shapes superimposed on ground truth images. . . . . . . . . . . . . . . . . . . .3 Reconstructions from simulated data. . .7 Measured data with unknown background. . . . . . . . . . . . . (Right) Predicted and ground truth areas over time. . . . . 119 7. . . . . . . . . . . . (Right) Reconstructed images with restricted interior intensities. . The numbers on the left column indicate the time point in the sequence. . . . . 
(Top Right) Initial predicted image. . . . . . . . . . . The numbers on the left column indicate the time point in the sequence. . . . . . . . (Middle Left) Filtered backprojection. 120 Measured data with unknown background.5 xt plots of the central rx line in the image over time. . 128 8. 132 . . Multiple coils. . . . .8 7. The thick arrows point to the papillary muscle. . . . . . . .4 Error plots from simulated data reconstructions. (Bottom Right) Final predicted image. Coil 5. .6 Reconstructions from measured single coil data. . . . .
149 . . . (Right) Reconstructed images using shape speciﬁc T Vβ approach. . . . (Middle Left) Gridding reconstruction. . Gridding (solid line) and temporally correlated combined approach (dotted line). 134 8. . (Left) Ground truth. . . .12 Error plots from measured multiple coil data reconstructions. . . . . . . . . . . 134 8. . . Gridding (solid line) and temporally correlated combined approach (dotted line). . . . 137 C. . . . . . (Right) Predicted and ground truth areas over time. . . . The numbers on the left column indicate the time point in the sequence. (Top Middle) Phantom image at time point 8. . . . . .7 List of Figures Reconstructions from measured single coil data. (Left) Reconstructed shapes superimposed on ground truth images. . . . . . . . . . (Middle Right) Shape speciﬁc total variation method. . . . . . . . . . . (Middle Right) Shape speciﬁc total variation method. . . . . (Left) Plot of the Dice similarity coefﬁcient over time (Middle) Plot of rms over time. (Top Left) Phantom image at time point 1. (Top Right) Image difference between time point 1 and 8. . . . . . . . . . . 135 8. . . . . . . . The thick arrows point to the papillary muscle. .13 xt plots of the central rx line in the image over time. . (Right) Reconstructed images using shape speciﬁc T Vβ approach. . . . . . . . The thick arrows point to the papillary muscle. . (Right) Combined shape and image method. .11 Reconstructions from measured multiple coil data. . . . . (Middle Left) Gridding reconstruction.8 . . .9 xt plots of the central rx line in the image over time. . . . .18 8. . . (Left) Gridding. (Right) Combined shape and image method. . . . . . . . 137 8. . (Right) Reconstructed images with restricted interior intensities. . 136 8. . . . . . (Left) Ground truth. 133 Error plots from measured single coil data reconstructions. . . (Bottom Left) Phantom sinogram data at time point 1. The numbers on the left column indicate the time point in the sequence. . . 
(Left) Plot of the Dice similarity coefﬁcient over time (Middle) Plot of rms over time. . .1 Difference imaging approach with stationary background. . . . . . . . .10 Reconstructions from measured multiple coil data. . . . . . . . . . . . (Bottom Right) Sinogram difference between time point 1 and 8. . The numbers on the left column indicate the time point in the sequence. (Right) Predicted and ground truth areas over time. . 8. . . . . (Left) Gridding. . . . . . . . (Bottom Middle) Phantom sinogram data at time point 8. .
. . .List of Figures C. 153 . . . . . . . The numbers on the left column indicate the time point in the sequence. . (Bottom Left) Phantom sinogram data at time point 1. . . (Top Left) Phantom image at time point 1. . (Right) Recon 19 structed shapes superimposed on groundtruth. 152 C. . . . . .4 Difference imaging approach with stationary background. . . . . . . . . . . .3 (Left) Plot of the Dice similarity coefﬁcient over time (Right) Predicted and ground truth areas over time. . . . . . . . .2 Difference imaging reconstructions. (Top Right) Image difference between time point 1 and 8. . . . (Bottom Right) Sinogram difference between time point 1 and 8. . . . . . . . (Left) Ground truth images. . . (Top Middle) Phantom image at time point 8. 151 C. (Bottom Middle) Phantom sinogram data at time point 8. . . .
20 List of Figures .
Publications

Conference contributions

A.M.S. Silver, D.L.G. Hill, R. Razavi and S.R. Arridge. Analysis of Variability of Cardiac MRI Data. Proc. MIUA 2003, Sheffield, 2003.

I. Kastanis, S.R. Arridge and D.L.G. Hill. Fourier snakes for the reconstruction of massively undersampled MRI. Proc. ISBI 2004, pp. 1063-1066, 2004.

I. Kastanis, A.M.S. Silver, S.R. Arridge and D.L.G. Hill. Reconstruction of the Heart Boundary from Undersampled Cardiac MRI using Fourier Shape Descriptors and Local Basis Functions. Proc. MIUA 2005, Bristol, pp. 59-62, 2005.

I. Kastanis, A.M.S. Silver, S.R. Arridge and D.L.G. Hill. Reconstruction of Cardiac Images in Limited Data MRI. Proc. AIP 2005, Cirencester, 2005.

I. Kastanis, S.R. Arridge and D.L.G. Hill. Image reconstruction with basis functions: Application to real-time radial cardiac MRI. Proc. MIUA 2006, Manchester, pp. 156-161, 2006.
Chapter 1

Prologue

1.1 Introduction

As the World Health Organization states on their web site¹: "Although many cardiovascular diseases (CVDs) can be treated or prevented, an estimated 17 million people die of CVDs each year." The need for detection, and therefore prevention, of heart disease is a major medical imaging need, a need of clinicians who require better and faster tools to diagnose cardiovascular disease. Methods have been developed and cardiac imaging is now a reality. Yet the problem of imaging the heart is still far from being completely solved, as will be explained next.

¹ www.who.int

1.2 Problem statement - Contribution

The problem of cardiac imaging is to capture the movement of a dynamic organ. The heart is moving at frequencies approximately between 1 - 3.3 Hz, that is 60 - 200 beats per minute (bpm). Capturing the movement of the heart has so far meant reconstructing images for each phase of the cardiac cycle. The majority of methods require a substantial amount of time and effort in order to obtain and analyse cardiac images. The collection of data for these fully reconstructed images also takes a fair amount of time. In the analysis of these images it is typical to delineate the left ventricle at each phase of the cardiac cycle. This is performed manually for every image, taking considerable time and effort.

While these methods assume that the measured data is complete, the proposed approach aims to reconstruct both images and shapes from limited data sets. This novel method, based on the physical reality of the cardiac imaging problem, escapes some of the assumptions previous methods have made. This combined reconstruction reduces the scanning time and simplifies the diagnostic procedure by offering both qualitative and quantitative results. The next section will give a more precise idea of the problem in question.

Dynamic imaging is the imaging of objects that are moving while the data is being acquired. In MRI the data for a single image of the moving heart requires a lot more time than the time over which the heart can be considered to be stationary. The data is being collected sequentially while the heart is beating. The idea of a 'snapshot', an image captured in an instant, does not hold in many medical imaging modalities, and especially not in MRI. In biological terms the heart is never stationary, and that is a key property of cardiac imaging. In the case of cardiac Magnetic Resonance Imaging (MRI), the term dynamic does not only refer to the motion of the heart, but also to the data acquisition. It is desirable to obtain an analysis of cardiac movement.

In this thesis we present methodology based on inverse problem theory for both image and shape reconstruction from limited data sets, escaping the assumptions of current methodology. While our novel approach is applicable in a variety of tomographic and Fourier imaging problems, we concentrate on the reconstruction of radially sampled cardiac MR images. Given only a small amount of data, the problem becomes ill-posed. In broad terms a problem is called ill-posed when the data is not sufficient for the solution of the problem and an approximation is the best that can be achieved. Taking advantage of the ideas of inverse problem theory, and using a model-based approach, the heart and the surrounding structures are represented with a small set of parameters. This parameterised model of the heart automatically separates the heart from surrounding structures, and cardiac motion can be further analysed. A compact representation is in the mathematical sense a reduction of the dimensionality of the problem; it makes the problem essentially smaller and therefore easier to solve. The proposed method does not make any assumptions about the periodicity of cardiac motion, making it suitable for free-breathing cardiac MRI, as well as for patients suffering from arrhythmia. The substantially small amount of data used by this novel reconstruction approach also offers the ability of real-time imaging. Even though we do not consider the presented method a final solution for cardiac imaging, we believe that it is a step in the correct direction.

Cardiac imaging is in these terms the problem of choosing the representation of the heart model and, in the inverse problem, mapping the measured data to the parameters of the representation. Stated this way, cardiac imaging becomes a two-part problem: simulating the MR scanner in the forward model, and transforming the difference between the prediction and the data back to the chosen representation of the heart. The first part, the forward model, is to parameterise the heart and predict how it would look under an MRI scanner. Predictions are then compared with data collected from the scanner. The second part of the problem is to transform this comparison, using the inverse model, to the parameters of the representation. These two parts are iterated until the parameterised solution is acceptable.
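The two-part loop described above can be sketched generically. The sketch below is only an illustration of the idea; the function names, the quadratic data mismatch and the finite-difference gradient are assumptions made here for brevity, not the machinery developed in later chapters.

```python
import numpy as np

def reconstruct(g, forward_model, p0, step=0.1, iters=200, eps=1e-6):
    """Two-part loop: the forward model predicts data from the parameters,
    the prediction is compared with the measured data g, and the mismatch
    is mapped back to a parameter update (the inverse step).  The gradient
    of the quadratic mismatch is estimated here by finite differences."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        base = np.sum((forward_model(p) - g) ** 2)   # prediction vs. data
        grad = np.zeros_like(p)
        for i in range(p.size):
            dp = np.zeros_like(p)
            dp[i] = eps
            grad[i] = (np.sum((forward_model(p + dp) - g) ** 2) - base) / eps
        p -= step * grad                             # inverse step on the parameters
    return p
```

For a linear toy forward model the loop recovers the generating parameters; any realistic forward model would of course be far more involved.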
1.3 Overview of thesis

In this thesis we present methods for image and shape reconstruction using an inverse problem approach. The model-based approaches that will be presented are a significant contribution to the reconstruction of images and shapes from the limited data sets which are typically encountered in dynamic imaging applications. The proposed methods are not considered to be clinically applicable at this stage, but are aimed to prove that the concept is valid.

In chapter §2 we give an introduction to image reconstruction in MRI. We explain the basic ideas in Magnetic Resonance imaging and overview the current methodology for the reconstruction of both static and dynamic images. Shape reconstruction methods are discussed in chapter §3. In chapter §4 the mathematical foundations for the proposed reconstruction method are explained. Inverse problem theory is discussed from a deterministic and a statistical point of view.

Chapter §5 presents a reconstruction method for images that are uncorrelated in time. Standard methods typically assume that data has been fully sampled, while in the presented approach this assumption is removed and the reconstruction is stated as a minimisation problem. In chapter §6 we discuss the method for reconstructing shapes directly from measured data. We assume that the background and interior intensities in the image and shape are known. The detection of cardiac boundaries can be used to adjust parameters of the image reconstruction method. The combination of image and shape reconstruction is the subject of chapter §7. In the combined method both the background and interior intensities are considered to be unknowns in the problem and they are reconstructed from the data.

While the methodology of the previous chapters §5 - 7 considers the reconstructed parameters to be uncorrelated in time and the data collection to be instantaneous, in chapter §8 the method is developed further for the time-correlated case, and we assume that there is such correlation. This temporal variation is modelled as a Markov process using the Kalman filter approach.

In the final chapter of this thesis we draw some conclusions on the methodology used and the results obtained. We propose future directions of the inverse problem approach to dynamic reconstruction in cardiac MRI.
Chapter 2

Magnetic Resonance Imaging

2.1 Introduction

2.2 Principles of MRI

MRI [103] is based on the phenomenon of nuclear magnetic resonance that the nuclei of certain elements exhibit. This phenomenon can be observed in elements that have an odd number of protons or neutrons, or both, in their nucleus, giving a half-integral valued spin. The most important element for the MRI of human tissue is hydrogen H. Hydrogen has odd atomic number and weight, and is found in water molecules H2O. Human tissue consists of 60% to 80% water [172, p. 268], making MR ideal for imaging biological structures.

To collect information for MRI there is a need for spatial localisation of the data. The magnetic field becomes spatially dependent through the use of three magnetic field gradients, which are small perturbations to the main magnetic field. The three physical gradients are in orthogonal directions labelled x, y and z. They are assigned by the operating software to three logical gradients: the slice selection, the readout or frequency encoding, and the phase encoding. The MR image is simply a phase and frequency map collected from the spatially localised magnetic fields at each point of the image.

The slice selection is the initial step in 2D MRI; it is the localisation of the radiofrequency excitation to a region of space. This is accomplished through a frequency selective pulse and the physical gradient corresponding to the logical slice selection gradient. When the pulse is sent and at the same time the gradient is applied, a slice of the object realises the resonance condition. The gradient orientation is perpendicular to the slice, so that the application of the gradient field is the same on every proton in the slice regardless of its position within the slice.

The readout gradient provides spatial localisation within the slice in one of the two dimensions. It is applied perpendicular to the slice selection gradient, and the protons begin to precess at different frequencies according to the dimension selected by the gradient. The data points along a line are obtained without a change in the gradients. There are two parameters associated with the readout gradient, the Field Of View (FOV) and the number of readout data points in each line of the resulting image matrix. These two will determine the spatial resolution in the final image. The Nyquist frequency [128] depends on both of these parameters.

Finally, the second dimension in the selected slice is defined with the help of the phase encoding gradient. The phase encoding gradient is perpendicular to both the slice selection and the readout gradients. It is the only gradient that varies its amplitude with time. This is based on the fact that the precession of protons is periodical. To move to a new data line the gradient has to be changed, which requires substantially more time than to read out the points on a line. Similarly to the readout gradient, there are two parameters to define for the phase encoding gradient, the FOV and the number of phase encoding steps. For a complete discussion on MRI principles refer to [152] and [172]. In the next section, we present the current methodology for the reconstruction of images in MRI.

Figure 2.1: (Left) Cartesian sampling. (Right) Radial sampling.

2.3 Image reconstruction

The foundations for tomographic reconstructions were laid by Johann Radon in 1917 [140]. After all the data is collected in the Fourier space, often referred to as k-space, the image is most commonly reconstructed by a 2D Fourier transform. If the data has been acquired radially (fig. 2.1 (Right)) instead of by Cartesian sampling (fig. 2.1 (Left)), the image can be reconstructed using the Fourier central slice theorem [126, p. 11]. Lines in k-space collected in a radial manner are referred to as radial profiles or simply profiles. The central slice theorem states that the 1D Fourier transform of the projection of a 2D function is the central slice of the Fourier transform of that function.
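The central slice theorem is easy to check numerically. The following sketch, assuming a simple square test image and NumPy's FFT conventions, compares the 1D Fourier transform of the projection at angle zero with the central line of the 2D Fourier transform.

```python
import numpy as np

# Simple 64x64 test image: a centred square (an assumed toy example).
f = np.zeros((64, 64))
f[24:40, 24:40] = 1.0

# Projection at angle zero: line integrals along the y axis.
proj = f.sum(axis=0)

# 1D Fourier transform of the projection ...
slice_1d = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(proj)))

# ... should equal the central (k_y = 0) line of the 2D Fourier transform.
F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(f)))
central_line = F[32, :]

ok = np.allclose(slice_1d, central_line)  # holds to numerical precision
```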
Radon stated the following integral transform for a function f(r) of the vector variable r ∈ R^n, now known as the Radon transform,

g(θ, τ) = (Rf)(θ, τ) = ∫_{−∞}^{∞} f(τ u_θ + s v_θ) ds,    (2.1)

where θ ∈ [0, 2π) is the slope of a line and τ ∈ R is its intercept. The Radon transform R maps a function f ∈ R^n into the set of its integrals over the hyperplanes of R^n. In the case where f ∈ R^2, f will be mapped into the set of its line integrals at angle θ. In the 2D case (n = 2), u_θ = (cos θ, sin θ) and v_θ = (−sin θ, cos θ); u_θ is the vector defining the direction of the line, v_θ is its normal, and r · u_θ denotes the inner product. In fig. 2.2 a description of the steps involved in the 2D Radon transform is shown.

Radon also introduced an inversion formula. First we define

F_r(t) = (1/2π) ∫_0^{2π} Rf(θ, r · u_θ + t) dθ.    (2.2)

In the 2D case the inverse transform is

f(r) = −(1/π) ∫_0^∞ dF_r(t) / t.    (2.3)

While this formula is elegant, it suffers from the singularity at t = 0. Apart from eqs. (2.3) and (2.5), other inversion formulas can be derived. An alternative derivation uses the Hilbert transform, which is defined as follows:

f_H(y) = H[f(x)] = (1/π) ∫_{−∞}^{∞} f(x)/(x − y) dx.    (2.4)

This is essentially a convolution operator, f_H(y) = (h ∗ f)(y), where the convolution kernel is h(x) = 1/(πx). The equivalent Radon inversion formula is

f(r) = (1/2π) ∫_{−∞}^{∞} ∂g_H(θ, r_y − θ r_x)/∂r_y dθ,    (2.5)

where r = {r_x, r_y}. The singularity is still present in the above integral, but it can be handled as a Cauchy principal value. For more information refer to [126] and [79], and for a modern treatise on the subject see [29]. As the theory for tomographic reconstruction already existed, Magnetic Resonance Imaging initially used these available techniques.
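Eq. (2.1) can be discretised crudely by binning each pixel of an image according to its coordinate along u_θ. The helper below is a hypothetical nearest-neighbour illustration of the transform, not the projector used in this thesis.

```python
import numpy as np

def radon(image, angles):
    """Nearest-neighbour discretisation of eq. (2.1): each pixel is binned by
    its signed coordinate t = r . u_theta, so each output column holds the
    line integrals (one projection) at that angle (given in radians)."""
    n = image.shape[0]
    c = (n - 1) / 2.0                      # centre of the image grid
    ys, xs = np.mgrid[0:n, 0:n]
    sino = np.zeros((n, len(angles)))
    for j, th in enumerate(angles):
        t = np.cos(th) * (xs - c) + np.sin(th) * (ys - c)  # coordinate along u_theta
        bins = np.rint(t + c).astype(int)
        inside = (bins >= 0) & (bins < n)
        sino[:, j] = np.bincount(bins[inside], weights=image[inside], minlength=n)
    return sino
```

Because every pixel contributes its full intensity to exactly one bin, each projection of a centred object sums to the total intensity of the image, mirroring the mass-preservation property of the continuous transform.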
When data is acquired radially in MRI, it is trivial to convert it to a set of projections by means of a 1D inverse Fourier transform, according to the Fourier central slice theorem

F¹ Rf(ω, α) = F² f(k),    (2.6)

where the n-dimensional Fourier transform F^n and inverse Fourier transform F^{−n} for a function f(r), r ∈ R^n, are

F(k) = (2π)^{−n/2} ∫_{R^n} f(r) e^{−i r·k} dr,    (2.7)

f(r) = (2π)^{−n/2} ∫_{R^n} F(k) e^{i k·r} dk.    (2.8)

Figure 2.2: From image to Radon projections. (Top left) Line integrals overlaid on an image at θ = 45°. The purple circle indicates the location of the line integral. (Top right) A line integral for τ = 32. (Bottom left) The Radon transform of the image at θ = 45°. (Bottom right) The Radon transform at four angles.

Using this theorem, the problem of reconstruction in radially sampled MRI is similar to the Computed Tomography (CT) problem. In the early days [103] of MRI, data was acquired radially and MRI borrowed much of the theory from CT. Quickly though, it took its own path. Algebraic Reconstruction Techniques (ART) existed from the early 1970's [58], [59], [79]. ART is the application of Kaczmarz's method to Radon's integral equations [126]. The main idea of these methods was to state the reconstruction problem as a system of linear equations

g = Rf.    (2.9)

ART approximates Rf ≈ cUf and the previous equation becomes
g = cUf,    (2.10)

where U is a matrix indicating the locations at which each line integral intercepts pixels in the image f(r), and c is an approximate correction factor. The size of the linear system in eq. (2.10) prohibited a direct solution, and ART is essentially an iterative solver. ART can be initialised with all the image elements equal to the mean density of the object [58]. The predicted data g_j^t for the jth line integral is calculated as

g_j^t = c_j U_j f^t,    (2.11)

where f^t is the tth estimated image vector, U_j is a matrix (with a single row) with the locations corresponding to the jth line integral equal to 1, and c_j is a correction factor for that line integral. The updated estimate of the image vector f^{t+1} is given by

f^{t+1} = max{ 0, f^t + (g_j/c_j − g_j^t/c_j) / N_j },    (2.12)

where N_j is the total number of intercepts of the jth line integral with f(r).

A more recent variant of the ART methodology is to use basis functions to approximate the distribution of intensities in the image, by replacing the matrix U with the matrix of the basis functions. Hanson and Wecksung [70] used local radially symmetric basis functions for image reconstruction in CT. To solve this linear system they used ART. In 1990 Lewitt [106] improved on the method with the use of more general basis functions. Again Lewitt used an iterative method for the solution of the large linear system. Garduño and Herman [52] presented a method for surface reconstruction of biological molecules using 3D basis functions. Schweiger and Arridge [147] compared different basis functions for image reconstruction in optical tomography using an iterative nonlinear conjugate gradient solver.
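Kaczmarz's method underlying ART amounts to cyclically projecting the current estimate onto the hyperplane defined by each equation of the system. The sketch below is a standard textbook form with the nonnegativity clamp of eq. (2.12); the relaxation parameter is an added convenience, not part of the equations above.

```python
import numpy as np

def kaczmarz(R, g, iters=100, relax=1.0):
    """ART as cyclic projection (Kaczmarz): for each row of g = R f in turn,
    move the estimate onto the hyperplane <r_j, f> = g_j, then clamp to
    nonnegative values as in eq. (2.12)."""
    f = np.zeros(R.shape[1])
    for _ in range(iters):
        for j in range(R.shape[0]):
            rj = R[j]
            f = f + relax * (g[j] - rj @ f) / (rj @ rj) * rj
        f = np.maximum(f, 0.0)   # nonnegativity constraint
    return f
```

For a consistent system with a nonnegative solution the sweeps converge to that solution; for inconsistent (noisy) data the iterates settle into a limit cycle, which is one reason relaxation is used in practice.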
Returning back to the early days of MRI and CT, filtered backprojection was originally discovered by Bracewell and Riddle [15]. In 1971 the method was independently rediscovered by Ramachandran and Lakshminarayanan [141]. The filtered backprojection is a discrete approximation to the analytic formula in eq. (2.5), where the derivative and the Hilbert transform are replaced with a ramp or a similar filter,

f(r) = (π / N_θ) Σ_{i=1}^{N_θ} Q_{θ_i}(r · u_{θ_i}),    (2.13)

where N_θ is the number of projections, u_{θ_i} = (cos θ_i, sin θ_i), and Q_{θ_i} is the filtered data at angle θ_i,

Q_{θ_i}(r · u_{θ_i}) = g_{θ_i} ∗ h,    (2.14)

where g_{θ_i} is the projection at angle θ_i, h is a high pass filter and ∗ denotes convolution. The high pass filter enhances high frequency components, such as edge information and noise. The calculation of the filter and the convolution can be performed directly in Fourier space to decrease computational costs,

Q_{θ_i}(r · u_{θ_i}) = F^{−1}[ F¹(g_{θ_i}) × F¹(h) ].    (2.15)

In 1974 Shepp and Logan [150] compared filtered backprojection to ART. They used the now famous Shepp-Logan phantom and concluded that the filtered backprojection method was superior to ART. By 1973, when Lauterbur published the first paper [103] on MRI, using a backprojection method to reconstruct the image of two glass tubes containing water, it was already widely accepted that filtered backprojection methods were superior to algebraic reconstruction techniques.

In 1975 Kumar et al [100] described an imaging method which took advantage of a sequence of orthogonal linear field gradients. For image reconstruction a direct Fourier inversion was used instead of the iterative solution of large systems of linear equations. The fast Fourier transform (FFT) was known at that time [30]. Edelstein et al [41] extended the method of Kumar et al [100] in 1980 with the use of varied strength gradients instead of the constant ones Kumar et al had previously suggested. In this manner they were capable of overcoming the field inhomogeneity problems of Kumar's method, making their method applicable to whole-body imaging.

It was not until 1981 that the groundwork was laid for what is now the standard method for image reconstruction in radially sampled MRI. While the inversion of Cartesian Fourier samples by means of an FFT algorithm is fast and computationally not very demanding, the inversion of radial samples requires interpolation onto a regular grid. Interpolation is in general a computationally expensive operation, especially if it is to be precise. The reason for this is that it requires convolution with a sinc function, which is the ideal interpolation function. The sinc function has infinite support, making it prohibitive for numerical implementations. In [158] Stark et al presented various methods for interpolating from polar to Cartesian samples. They were able to obtain Fourier data on a Cartesian grid. O'Sullivan [130] used a Kaiser-Bessel function for this task to improve on the efficiency and quality of the reconstruction. Jackson et al [82] further extended this methodology and compared various convolution functions.
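Eqs. (2.13)-(2.15) translate almost directly into code. The sketch below, which assumes a plain |ω| ramp filter and nearest-neighbour backprojection for brevity, illustrates the discrete filtered backprojection; it is a toy version, not a production implementation.

```python
import numpy as np

def fbp(sino, angles):
    """Filtered backprojection, eqs. (2.13)-(2.15): ramp-filter each projection
    in Fourier space, then smear it back across the image along its angle."""
    n, n_ang = sino.shape
    ramp = np.abs(np.fft.fftfreq(n))           # |frequency| ramp, cf. eq. (2.15)
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    img = np.zeros((n, n))
    for j, th in enumerate(angles):
        q = np.real(np.fft.ifft(np.fft.fft(sino[:, j]) * ramp))  # filtered data
        t = np.cos(th) * (xs - c) + np.sin(th) * (ys - c) + c    # detector coordinate
        idx = np.clip(np.rint(t).astype(int), 0, n - 1)
        img += q[idx]                           # backprojection, the sum in eq. (2.13)
    return img * (np.pi / len(angles))          # angular weighting of eq. (2.13)
```

With enough angles a centred disc phantom is recovered at roughly the correct amplitude, while the exterior stays close to zero up to streak artifacts.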
(2. (2.19) (2.20) δ(kx − i.17) with N being the number of samples and δ the Dirac delta function. If the images are formed with enough data to satisfy the Nyquist spatial rate. In the case of cardiac MRI. Dynamic imaging where Ar is a sampling function N 33 Ar (k) = i=1 δ(k − ki ). Combining eqs. like the brain and the heart. corresponding to more phases. then the collected data will only be enough for a very small number of cardiac phases and the images of these phases will be corrupted by motion artifacts.. It is desirable to be able to image moving or dynamic parts of the human anatomy. (2.18) where h(k) is the convolution kernel. where Ac (k) = i=1 j=1 gf r (k) ∗ h(k). if more images. The aim is to interpolate the signal gf r as follows: gf i (k) = gf r (k) ∗ h(k). 2. ky − j) is a comb function III (k).18). On the other hand. the data cannot be collected fast enough to represent different phases of the cardiac cycle clearly. To compensate for the nonuniform sampling. a density weighting function w(k) = Ar (k) ∗ h(k) is introduced and the previous equation becomes gf wi (k) = Resampling at Cartesian coordinates gf wc (k) = gf wi (k) × Ac (k).21) These methods are commonly referred to as gridding. w(k) (2. are formed then the data will not be enough for each separate image causing heavy artifacts and rendering them clinically useless. Often this is not easy.20).4. we obtain gf wc (k) = gf r (k) ∗ h(k) × Ac (k).2.. (2.19) and (2.. the time efﬁciency of collecting data by mere gradient encoding seems to be approaching a .4 Dynamic imaging Dynamic imaging has emerged as an important research area in the last couple of decades. if the scanning is done in a purely sequential manner. Much research has been done in the area of sequence design and as Weiger et al mentions “ . Ideally data for each different image must be collected faster than the object is moving. w(k) (2. since the dynamic object is moving faster than the data can be collected in a scan.
This means that new methods, which explore other dimensions of dynamic imaging in MR than just the magnetisation techniques, have to be investigated. Some work has been done in Fourier techniques to reduce the scanning time. An example of this is Feinberg et al [44], who decreased the imaging time to half by compromising the quality of the image.

2.4.1 Gated imaging

One of the most commonly used techniques to image the heart is gated cardiac imaging. This method uses the electrocardiogram (ECG) signal to gate the cardiac cycle. When the heart is contracting it exhibits electrical activity, and this is exactly what the ECG measures. The electrical activity of the heart can therefore be used to determine the phase of the cardiac cycle. As seen in fig. 2.3, the various letters represent different stages of the heart cycle. The most important is the interval between the two highest peaks (the RR interval), which represents the duration of the cardiac cycle.

Figure 2.3: A normal ECG.

The ECG signal provides a means to determine in which phase of the cardiac cycle the collection of the data is done. Assuming that the ECG is exact in determining the phase of the cardiac cycle and that each cardiac beat has the same duration, it can be considered that, instead of collecting one k-space profile for a phase at each heart beat, more profiles could be collected. To extend this idea of gated imaging, data lines that belong to the same phase of the cardiac cycle are collected in different beats of the heart at equal time intervals. This assumes that while these data lines are being collected in one heart beat for one phase, the heart is almost stationary. This implies that the data lines required to reconstruct an image representing one phase of the cardiac cycle are collected with one heart beat difference each. This way there is enough information to reconstruct clear images of various phases of the heart. It should be
fundamental limitation" [180, p. 177].
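The interleaved collection described for gated imaging, where each heart beat contributes one k-space profile to every cardiac phase so that after many beats each phase owns a full set of profiles, can be written down as a toy scheduler. The names and structure here are purely illustrative, not from the thesis:

```python
def gating_schedule(n_phases, n_beats):
    """Map each cardiac phase to the (beat, profile) pairs it receives,
    assuming one profile per phase is acquired in every heart beat and
    that profile index `beat` is read for all phases during beat `beat`."""
    schedule = {phase: [] for phase in range(n_phases)}
    for beat in range(n_beats):
        for phase in range(n_phases):
            # during beat `beat`, every phase receives profile `beat`
            schedule[phase].append((beat, beat))
    return schedule
```

After n_beats beats, every phase has collected the profiles 0 to n_beats − 1, i.e. a complete image's worth of lines gathered one heart beat apart, exactly the assumption the gated method rests on.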
noted that gated cardiac imaging is performed on a single breath hold, to reduce motion in the surrounding structures due to the breathing process. Examples of gated cardiac imaging can be found in Lanzer et al [101], who used different techniques to gate the cardiac motion. In [56], Go et al study volumetric and planar cardiac imaging. Fletcher et al are using gated cardiac imaging to study congenital heart malformations.

2.4.2 Parallel imaging

Another approach for the solution of the dynamic imaging problem is the use of partial parallel imaging. In parallel imaging an array of coils is used instead of just one. There are two main methods for parallel imaging in MRI: SMASH, Simultaneous Acquisition of Spatial Harmonics [155], and SENSE, Sensitivity Encoding for fast MRI [139]. Both methods work by approximating the sensitivity information for each coil. In the SENSE approach data is reduced by decreasing the size of the FOV for each separate receiver coil; samples are located further away in k-space. The MR signal in the phase encoding direction at coil j can be expressed as

g_j(k_y) = ∫ f(r_y) S_j(r_y) e^{i k_y r_y} dr_y,    (2.22)

where f(r_y) is the signal and S_j(r_y) is the coil sensitivity at each phase encoded line. Sensitivity information is approximated by fitting linear combinations of sensitivity matrices to form spatial harmonics, where N_c is the number of receiver coils, Δk_y = 2π/FOV, FOV is a scalar representing the field of view and m ∈ Z is the order of the spatial harmonics. Using eqs. (2.22) and (2.23), an expression for the calculation of shifted k-space lines g̃(k_y + mΔk_y) using the measured sensitivity matrices S_j can be derived

Σ_{j=1}^{N_c} w_j^m g_j(k_y) ≈ g̃(k_y + mΔk_y).    (2.24)

Using eq. (2.24) missing k-space lines can be generated.
An early system to reconstruct and display gated cardiac movies was developed in [6]. Data is collected for each coil and combined to form one image. The benefit of using multiple coils is that the data can be undersampled; using the information from each coil, artifacts due to the undersampling can be reduced in the reconstruction. The reduced FOV in SENSE creates folding artifacts. SMASH uses the sensitivity variations to replace some of the phase encoding. Sensitivity values are expressed as a linear combination to generate values from all coils

S̃^m(r) = Σ_{j=1}^{N_c} w_j^m S_j(r) ≈ e^{i m Δk_y r_y}.    (2.23)

This can be solved for the weights w_j^m by fitting the coil sensitivities S_j to the spatial harmonics e^{i m Δk_y r_y}.
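The fit of eq. (2.23), choosing weights so that the weighted coil sensitivities approximate a spatial harmonic, is a linear least-squares problem. A minimal numerical sketch; the toy sensitivities below are synthetic, chosen only so that the fit is exact:

```python
import numpy as np

def smash_weights(S, m, delta_ky, r):
    """Least-squares weights w with sum_j w_j S_j(r) ~= exp(i m dky r).
    S: (Nc, Nr) coil sensitivities sampled at the positions r."""
    target = np.exp(1j * m * delta_ky * r)
    # fit the coil sensitivities to the spatial harmonic (eq. 2.23)
    w, *_ = np.linalg.lstsq(S.T, target, rcond=None)
    return w
```

If, for instance, the coil sensitivities themselves are the harmonics of orders 0 to 3, the weights for a target harmonic of order 2 reproduce it exactly.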
This allowed the construction of a linear system not as restrictive as the original SMASH formulation. Initially SMASH imaging was restricted to specific coil designs [64] and imaging geometries [84]. Some recent developments [19], [153], [78] have extended the coil combinations and coil geometries. Bydder et al [19] reversed eq. (2.23) to express the coil sensitivity matrices S_j as linear combinations of the spatial harmonics

S_j(r) ≈ Σ_{m=−q}^{p} w_j^m e^{i m Δk_y r_y},    (2.27)

where q, p ∈ Z are integers defining the number of Fourier coefficients w_j^m for the jth coil. Sodickson et al [153] included an extra term S_0 in eq. (2.23) to account for sensitivity variations in the phase encode direction

Σ_{j=1}^{N_c} w_j^m S_j(r) ≈ S_0 e^{i m Δk_y r_y}.    (2.28)

An analysis of the SNR in SMASH can be found in [154]. Another very recent variant of SMASH imaging, named GRAPPA [63], an extension of [78], provides unaliased images for each coil. Extensions of the SENSE method are also popular. In [138] Pruessmann et al extended the original SENSE formulation to arbitrary k-space trajectories, using gridding operations to improve the numerical efficiency of the reconstruction method. Kellman et al combined SENSE with UNFOLD [114] (which will be discussed in the following section) in [90]. Both techniques in their original formulation require the collection of extra data to be used for the sensitivity calculations. A detailed review of parallel MR imaging was presented in [14].

Sensitivity matrices in SENSE are calculated in the spatial domain, unlike SMASH which works in k-space. The full FOV image is calculated as a linear combination of all the receiver coils by resolving for the superimposed image locations

f_n = Σ_{j,k} g_{j,k} R_{j,k},    (2.25)

where f_n is the vector of image values, j is the coil index and k is the k-space position index. Let S be the N_c × N_s coil sensitivity matrix, with N_c the total number of coils and N_s the total number of samples, and let C be the N_c × N_c receiver noise matrix, with the superscript H denoting the conjugate transpose. Eq. (2.25) is solved for every position in the reduced FOV image to produce the full FOV image, and R is the reconstruction
matrix of the n superimposed image positions; it is calculated as follows:

R = (S^H C^{−1} S)^{−1} S^H C^{−1}.    (2.26)

The unaliased coil images produced by GRAPPA can then be combined to give an even higher Signal-to-Noise Ratio (SNR) than the original SMASH.
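A numerical sketch of the unfolding in eqs. (2.25) and (2.26): for one aliased pixel, the reconstruction matrix R = (S^H C^{−1} S)^{−1} S^H C^{−1} recovers the superimposed image values from the per-coil measurements. The sizes and data below are toy values for illustration only:

```python
import numpy as np

def sense_unfold(S, C, g):
    """Unfold one aliased pixel.
    S: (Nc, Ns) coil sensitivities at the Ns superimposed positions,
    C: (Nc, Nc) receiver noise covariance,
    g: (Nc,) aliased measurement, one value per coil.
    Returns the Ns unfolded image values f, with g ~= S f."""
    Ci = np.linalg.inv(C)
    # reconstruction matrix of eq. (2.26)
    R = np.linalg.inv(S.conj().T @ Ci @ S) @ S.conj().T @ Ci
    return R @ g
```

With noise-free synthetic data g = S f, the unfolding recovers f exactly, since R is a left inverse of S whenever S has full column rank.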
2.4.3 k-t imaging

One of the most important recently developed methods for dynamic imaging is UNFOLD. It uses the idea of k-t space. Even though it was not stated in these terms in the original UNFOLD paper [114], it has been understood, especially after the k-t framework was introduced by Tsao in [166], that the data collection in MRI is in a spectro-temporal space. The main idea of the k-t space methods is that signals are modulated by collecting data in an interleaved manner, and that for dynamic imaging it makes sense to investigate the Fourier Transform in the temporal dimension.

Figure 2.4: Sheared sampling pattern in k-t space.

UNFOLD works by encoding information in the temporal dimension. As seen in fig. 2.4, only one of every four samples is taken; each point denotes a complete k_x line in the read-out direction. This interleaved sampling pattern drastically reduces scanning time, here up to fourfold. The concept behind this approach is that the modulation caused by the sheared sampling pattern is a shift in the phase encoding direction. According to the Fourier shift theorem, a shift in the frequency domain results in a linear phase shift in the time domain. When the FT is taken in time, the modulation of the data will push aliased signals to the end of the spectrum (fig. 2.5), which allows the removal of ghost artifacts in the image with a low pass filter. Information about low pass filter design for UNFOLD can be found in [91]. In the x-f space the signals that are static will have little frequency content in time, implying that more bandwidth can be dedicated to the dynamic part. Intuitively speaking, this idea tries to pack the x-f space and therefore reduce scanning times. The idea of using more bandwidth for the dynamic part is ideal for cardiac imaging, where the main motion present is the heart beating, while everything else surrounding it is static or close to static in single breath-hold imaging.
The t-axis represents time and the k_y-axis the sampled locations in the phase encoding direction. The method has since been redescribed in these spectro-temporal terms in more recent papers by Tsao et al [166], [169].
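The sheared lattice of fig. 2.4 can be generated directly: every R-th phase encoding line is read in each frame, with the pattern shifted by one line from frame to frame. A small sketch with arbitrary sizes:

```python
import numpy as np

def sheared_pattern(n_ky=16, n_t=16, R=4):
    """Binary k-t sampling mask: frame t samples lines t mod R, t mod R + R, ...
    Each frame holds 1/R of the lines; R consecutive frames cover all of k_y."""
    mask = np.zeros((n_ky, n_t), dtype=bool)
    for t in range(n_t):
        mask[(t % R)::R, t] = True
    return mask
```

The default R = 4 reproduces the one-in-four sampling described above: each frame is acquired four times faster, at the cost of aliasing that the temporal modulation pushes to the edge of the temporal-frequency spectrum.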
The basic idea of the UNFOLD method can be summarised in the following concepts: the interleaved pattern, which reduces scanning time, combined with the low pass filter, which removes artifacts and allows more bandwidth to be given to the dynamic part of the image.

Figure 2.5: Plot of an aliased function. The q-axis is the temporal frequency and the F-axis is the spatial frequency. Due to the temporal undersampling the function has been shifted in the temporal frequency dimension. This can be corrected with the application of an appropriate low pass filter [91].

There has been much interest in the UNFOLD method. An extension of UNFOLD to 3D is presented in [186]. In [113] UNFOLD is combined with partial-Fourier imaging and SENSE. Other parallel imaging combinations with the k-t ideas exist. One of the most interesting extensions is the combination of BLAST (Broad-use Linear Acquisition Speed-up Technique) [167] and SENSE with the k-t framework in [169] and [168]. BLAST is a unification of prior-information methods for fast scanning. It provides a method to accelerate imaging, as well as a common equation for the most important accelerating methods

f(r, q) = (S̃^H C_{n;k,t}^{−1} S̃ + C_{s;r,q}^{−1})^{−1} S̃^H C_{n;k,t}^{−1} g_{k,t},    (2.29)

where S̃ is the Fourier transform, from x-f to k-t space, F_{rq→kt}, of the sensitivity encoding matrix S, C_n is the noise covariance matrix and C_s is the signal covariance matrix. Hansen et al [66] presented a k-t BLAST method applied to non-Cartesian sampling, as well as a different method to apply the UNFOLD technique by comparing spectral energy.

2.5 Discussion

The majority of reconstruction methods in MRI are intended for data sets that satisfy, or are close to, the Nyquist limit. When these methods are applied to limited data problems, the reconstruction produces severe artifacts, usually corrupting the image to a degree unacceptable for analysis. In dynamic imaging there is a need for finer temporal resolution. To increase the acquisition speed in MRI, the data available for each frame is necessarily reduced. To overcome the problem of limited data in cardiac MRI, the common approach is to use, as mentioned previously, ECG gating. ECG gated cardiac imaging makes two important assumptions: the first is that the ECG signal is exact in giving the location in the heart cycle and repeats itself in an exact manner, and the second is that the heart is beating in precisely the same way each time. The first assumption is a good approximation of the truth, but the second is not necessarily valid. Typically each monitored cardiac cycle is shrunk or stretched to fit an average cardiac cycle. This becomes a problem especially in the case of patients with heart abnormalities and examinations under stress. In examinations under stress the heart is beating a lot faster than normal; it is therefore important to reduce the scanning to a bare minimum, in order to avoid having the patient under stress for a long time. If more than one data line is collected for each phase in each heart cycle, the reconstructed image will have blurring artifacts due to the motion of the heart. Gated imaging can be thought of as time averaged, in the sense that a single image is formed from data from many time points at theoretically equal intervals. Nevertheless, it is not desirable to form an averaged image; the aim is to record the motion of the heart. Another drawback of this technique is that obtaining high resolution images requires more data lines, implying longer scanning times. Gated cardiac imaging is a compromise between resolution or quality, both spatial and temporal, and scanning time. Increasing the spatial resolution would imply capturing fewer phases of the heart cycle, or more scanning time.
If the temporal resolution were increased, the spatial resolution would have to be decreased, or again the scanning time would have to be longer. Furthermore, the single breath-hold approach limits the total imaging time, implying that the spatial and temporal resolution are bounded. For the quantification of ventricular function, typical cardiac MRI often requires the collection of data over many heart beats and over more than one breath hold. The long times spent inside the MRI scanner are stressful and certainly not desirable for patients. Extended breath holds lead to poorly understood flow and pressure changes within the cardiac region [122]. It is also desirable to image objects which do not behave in a periodic manner, to which gated imaging cannot be applied. The vast majority of methods, with the main exception of the k-t approach, do not take advantage of the dynamic nature of the problem. They consider the problem of reconstructing a temporal sequence of images as a series of static problems. Some information in the image
can be recovered by taking advantage of areas which are not in motion. Statistical properties of the motion of the object can also be taken into account to improve the results. In the next chapter we will discuss the current approaches in shape reconstruction.
Chapter 3
Shape reconstruction background
Shape reconstruction is a subject which has received much interest in the image processing community. For many machine vision tasks, and generally for quantitative analysis, a segmented shape of interest is required. In this chapter we will introduce basic approaches for the reconstruction of shapes. In the first section, methods based on an explicit formulation of the shape will be discussed. Following that, the discussion will turn to a more modern approach, which has an implicit formulation of the shape.
3.1 Snake methods
Kass et al introduced in [89] the Active Contour Models, more commonly known as snakes. Snakes are a specific case of the deformable model theory of Terzopoulos [163]. The deformable model theory is based on Fischler and Elschlager's spring-loaded templates [46] and Widrow's rubber mask technique [184], [120, p. 92]. Snakes are 2D contours that approximate locations and shapes of structures in an image. This is done by minimizing an energy functional E_snake, which depends on the image and on the smoothness or elasticity of the snake

E_snake(v) = ∫_0^1 E_int(v(s)) + E_ext(v(s)) + E_image(v(s)) ds,    (3.1)

where v(s) = (x(s), y(s)) is a parametric contour with s ∈ [0, 1), with x(s) and y(s) defining the x and y coordinates respectively. In the original snake formulation, these were defined as parametric splines. E_int is the internal energy of the snake, which controls its smoothness. E_ext is an external force used for automatic initialisation and user intervention. Finally, E_image is the force defined by the image, usually using image gradients, edge locations or other image features of interest, to drive the snake closer to the desired segmentation. Many researchers have extended the original snake formulation in a variety of ways. Staib and Duncan [157] presented a method based on a Fourier parameterisation of the contour.
In a similar methodology, Zacharopoulos et al [189] reconstructed 3D surfaces using a spherical harmonics representation. The idea of representing shapes with Fourier descriptors dates back at least to the 1970s, when various researchers used them for shape discrimination. In [133] shape discrimination was discussed with applications in skeleton finding, character and machine parts recognition. In 1982 Kuhl [99] determined the Fourier coefficients of chain-encoded contours. In this work Kuhl presented properties of Fourier descriptors, such as normalisation and invariants; the parameterisation was extended to open curves as well, so shapes were not restricted to closed curves. Further to that, he discussed a recognition system for arbitrarily shaped, solid objects. In 1987 Lin [109] presented new invariants based on Fourier descriptors, with application to pattern recognition. In the same line of research, Aguado et al [5] used Fourier descriptors to parameterize shapes by extraction with the Hough transform. Fourier representations are global representations, while splines depend on control points, implying that splines are local representations of closed curves on the plane. Fourier parameterisation is more compact, and usually only a few parameters are enough to define complex shapes.

Fourier parameterisations have also been used in an inverse problem framework for the recovery of region boundaries. Kolehmainen et al [94] used multiple Fourier contours to reconstruct shapes with known internal intensity directly from optical tomography measurements, applied to lung images. They defined two homogeneous regions, one for each lung. Battle et al [10] reconstructed a triangulated surface with constant interior density directly from tomographic measurements using a Bayesian approach, and then determined the internal density and the location of the boundaries by a Newton minimization method. Further development of this Bayesian methodology was presented in [9].

Other recent extensions of the original snake method include extensions that work in color image space instead of gray scale. Sclaroff and Isidoro [149] presented a method which uses both shape and color texture information. Instead of deforming the surfaces directly, they use free-form deformation models to warp the space surrounding them. Their aim was to detect the motion of objects, and the registration process requires minimization of the residual error with respect to the parameters of their snake model. This definition differs significantly from most other snake approaches; it resembles more the Active Appearance Models (AAM) approach of Cootes et al [31]. AAM are a combination of Active Shape Models (ASM) [32] with a grey-level appearance model. ASM are statistical models of the shape of interest, obtained using a training set. The images in the training set are aligned with a modified Procrustes method and their main modes of variation are calculated using eigenanalysis

C_C p_j = λ_j p_j,    (3.2)

where C_C is the covariance matrix of the aligned shapes, λ_j is the jth eigenvalue and p_j is the jth eigenvector.
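The eigenanalysis of eq. (3.2), and the linear shape synthesis C = C̄ + P w that it enables, reduce to a few lines of linear algebra. A minimal sketch on synthetic landmark data (the training shapes here are toy values, not from the thesis):

```python
import numpy as np

def shape_model(shapes, n_modes=2):
    """shapes: (N, 2L) matrix of aligned landmark vectors.
    Returns the mean shape, the first n_modes eigenvectors P of the
    covariance matrix of the aligned shapes, and their eigenvalues."""
    mean = shapes.mean(axis=0)
    cc = np.cov(shapes - mean, rowvar=False)      # covariance of aligned shapes
    lam, vecs = np.linalg.eigh(cc)                # ascending eigenvalues
    order = np.argsort(lam)[::-1][:n_modes]       # keep the largest modes
    return mean, vecs[:, order], lam[order]

def synthesize(mean, P, w):
    """New shape from the mode weights w: C = mean + P w."""
    return mean + P @ w
```

Setting all weights to zero returns the mean shape, and varying each weight moves the shape along one statistical mode of variation, which is exactly the prior knowledge the training set provides.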
The eigenvectors p_j provide a way of defining the possible ways a shape can vary

C = C̄ + P w,    (3.3)

where C̄ is the mean of the aligned shapes, P is the matrix of the first n eigenvectors and w is a vector of weights. ASM and AAM have the advantage, and at the same time the disadvantage, of being based on a training set; this set is the prior knowledge. In some cases this might prove to limit the possible shapes, forcing the algorithm to find a shape which might not be the real one. Initial applications of ASM were in hand gesture recognition tasks, while AAM were targeting face recognition. Stegmann et al [159] used AAM to segment cardiac MR images.

Returning to the work of Sclaroff and Isidoro [149], the image segmentation is performed with the use of snakes that are based on color invariants. Their method is interactive, in the sense that it allows the user to choose subimages where the object of interest lies. For this optimisation problem, Sclaroff and Isidoro use the Levenberg-Marquardt method. A different approach to color snakes was presented in [54].

Another recent paradigm of snakes is that of geodesic snakes [24]. Geodesic snakes are based on the ideas of curve evolution in a metric space with minimal distance curves. The connection between the calculation of minimal distance curves, in the space induced from the image, and the snakes is shown in that work. To calculate the geodesic curve a level set approach is used. One of the benefits of level set approaches is that curves are topologically adaptive: they can incorporate topological changes, such as splitting, merging and self-intersecting. Level set methods will be discussed in the next section. An application of the geodesic snakes can be found in [144], where geodesic snakes were combined with Gabor analysis.

A method for topologically adaptive shapes was presented by McInerney and Terzopoulos [121]. The curves were defined using nodes connected with edges. The role of the affine cell image decomposition (ACID) comes in the step of the reparameterisation of the contour. Using a particular kind of cell decomposition, simplicial, based on a Delaunay triangular meshing algorithm, the space is subdivided into triangles, so shapes are effectively defined using a triangular mesh model. The triangles can be of any size, offering fine detail or possibly a multiscale approach. The intersections of these triangles with the contour are detected every M steps of the iteration, and every intersection point is assigned an inside or outside value. By tracking the interior vertices of the intersected triangles every M steps, the contour can be reparameterised, including topological changes such as splitting, merging and self-intersecting. This approach is
referred to as Topologically adaptive snakes, or T-snakes. Evans et al [43] used T-snakes to segment livers from CT images. An extension of T-snakes is developed by Giraldi et al in [55]. Giraldi et al used dual snakes, one snake for the outside of the edge and one for the inside; this was implemented in a Dynamic Programming framework. An interesting new paradigm of snakes by McInerney et al is presented in [119]: following the general concepts of Alife [162], McInerney et al develop the idea of using artificially intelligent snakes.

3.2 Level set methods

McInerney and Terzopoulos based their decision to use ACID, rather than a level set method, on the grounds that the higher order implicit formulations are not as convenient as the explicit ones, particularly when it comes to defining the internal deformation energy term, controlling the snake via user interaction and imposing arbitrary geometric or topological constraints [121, pp. 74-75]. Level set methods, though, have become increasingly popular since their introduction in 1988 by Osher and Sethian [129]. Level set methods are based on the ideas of front propagation. The boundary of a shape is embedded in a higher dimensional function, the level set function. Paragios [131] used a level set method for the segmentation of the left cardiac ventricle in two dimensions. Whitaker and Elangovan [182] reconstructed both 2D contours and 3D surfaces directly from limited tomographic data. In diffusion optical tomography, Schweiger et al [148] reconstructed both the shape and the contrast values of homogeneous objects, using two level set functions for the absorption and the diffusion values.

Figure 3.1: Level set function and corresponding shape boundary on the zero level set.

For a boundary in R2 the level set function will
be a surface in R3. The same is true for any dimension Rn. While topological changes, such as splitting and merging, are rather difficult to deal with in R2, the level set function in R3 incorporates these naturally, without changing the topology of the surface (fig. 3.2). In the case where the topology is known in advance, artificial constraints have to be introduced in the level set representation to maintain this topology.

Figure 3.2: Level set function and two corresponding shape boundaries on the zero level set.

Next we give a brief introduction to the level set approach, along the lines of [145]. The boundary of the region of interest Ω ⊂ Rn is described by a function φ(r), with ∂Ω = {r : φ(r) = 0}. Assuming that the image f(r), with r ∈ Rn, can be modelled as

f(r) = f_int(r) if r ∈ Ω,  f_ext(r) if r ∉ Ω,    (3.4)

and the level set function is built as a sequence of functions φ_t(r) which approach the real region Ω as t increases,

Ω_t → Ω with ∂Ω_t = {r : φ_t(r) = 0},    (3.5)

then the level set function φ(r) (fig. 3.2) is tied together with the image function as follows:

f(r) = f_int(r) if φ(r) < 0,  f_ext(r) if φ(r) > 0.    (3.6)

The boundary of the region is given by the zero level set, φ(r) = 0. A detailed review of level-set methods
is given in [38].

3.3 Discussion

Most shape reconstruction methods work in the image domain and thus require image reconstruction: first reconstruction and then segmentation. In this two step approach, the quality of the segmentation depends on the quality of the reconstruction. If the image reconstruction is ill-posed, the reconstructed image will contain a large amount of noise. If the image reconstruction is of low quality, then the segmentation will be neither accurate nor robust. The quality of the segmentation, which is the goal of the analysis, is ultimately dependent on the reconstruction. These methods also typically assume that the object and background are homogeneous and clearly distinguishable. This is not true for cardiac MRI, as well as many other dynamic imaging applications: the problem of reconstructing a shape with an inhomogeneous interior in an inhomogeneous background is much more complex, and this is the case in many dynamic imaging problems. Direct shape reconstruction methods do not use an image to reconstruct the shape; the shape is created directly from measured data. A promising approach was presented by Ye et al [187], which does not require the region of interest to have a smooth intensity distribution and is also capable of dealing with a known inhomogeneous background. In addition, their level set function does not require reinitialisation, which typically has a high computational cost.
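The relation between a level set function and the piecewise image model it induces can be sketched with a signed distance function to a circle. The sizes, values and the circular shape below are arbitrary illustrations, not from the thesis:

```python
import numpy as np

def circle_phi(n=64, cx=32.0, cy=32.0, radius=10.0):
    """Level set function: signed distance to a circle, negative inside."""
    y, x = np.mgrid[0:n, 0:n]
    return np.hypot(x - cx, y - cy) - radius

def piecewise_image(phi, f_int=1.0, f_ext=0.0):
    """Image tied to the level set: f_int where phi < 0, f_ext where phi > 0."""
    return np.where(phi < 0, f_int, f_ext)
```

The zero level set of phi is the region boundary; deforming phi moves the boundary, and topological changes of the 2D region (splitting, merging) require no special handling of the 3D surface phi.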
Chapter 4

Numerical optimization: Inverse problem theory

4.1 Inverse Problems

Inverse problems are very common in Physics and image analysis, among other areas of science. In image analysis the application of inverse problem theory is fairly new, since this branch of science has only existed for around half a century. In the theory of inverse problems, a problem is separated into two parts: the forward part and the inverse part. The forward or direct part of a problem is the prediction of the observable data given the parameters of a model. The inverse part of a problem is to predict the model parameters given the observable data. In this sense, a direct problem would be to calculate the position of a moving object at a given time, with the assumption that the velocity vector is known. For the example of the moving object, if the initial and current positions are considered to be the observations, then the estimation of the velocity becomes an inverse problem. An inverse problem would likewise be to calculate the forces acting on a planet given the observation of its trajectory.

Therefore it is important to give a more formal definition of the model and data spaces. The model space P_m will contain a minimal parameterisation p ∈ P_m that completely describes the system, where P_m is typically an m-dimensional Hilbert space H or, in the more general case, a Banach space B. A Hilbert space is a vector space with a norm defined by an inner product. A Banach space is a generalisation of a Hilbert space, in the sense that the norm need not be defined by an inner product; all Hilbert spaces are Banach spaces, but the converse does not always hold. The data space Y_n will contain the observations g ∈ Y_n that can be made about a particular system. The notions of forward modelling and inverse modelling are of major importance to the concept of inverse problems. Forward modelling is the discovery of the mapping Z from the model space to the data space.
The terms forward and inverse are tied together with the definition of what the model is and what the observations of the system are.
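The moving-object example can be written down directly: the forward map predicts the data from the model parameters, and the inverse map recovers a parameter from the observations. A deliberately trivial sketch:

```python
def forward(p0, v, t):
    """Forward problem: predict the observed position from the model (p0, v)."""
    return p0 + v * t

def inverse(p0, pt, t):
    """Inverse problem: estimate the velocity from the observed positions."""
    return (pt - p0) / t
```

Here the same physical law plays both roles; which direction is "inverse" is fixed only by declaring the positions to be the observations and the velocity to be the model parameter.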
The forward mapping is

Z : P_m → Y_n.    (4.1)

Intuitively, it is the set of laws that allows the prediction of the observations g, given the model parameters p. Inverse modelling is the mapping Z† from the observations to the parameters of the model

Z† : Y_n → P_m.    (4.2)

This implies that by observing a particular system, the parameters of the model can be deduced. If we assume that there is no measurement noise, the model is

g = Z(p).    (4.3)

For the operator Z, we define the range to be the set of all values that Z can take as p varies in P_m,

Range(Z) = {g ∈ Y_n : g = Z(p) for some p ∈ P_m},    (4.4)

and the nullspace as the set of all vectors p that solve the equation Z(p) = 0,

Null(Z) = {p ∈ P_m : Z(p) = 0}.    (4.5)

Assuming that the operator Z is a linear mapping, then the noise-free model of eq. (4.3) can be expressed as a matrix multiplication

g = Zp,    (4.6)

where Z ∈ R^{n×m}. The singular value decomposition (SVD) of a matrix Z ∈ R^{n×m} with n > m is given by

Z = U D V^T,    (4.7)
where

D = [ Σ_r  0 ; 0  0 ] ∈ R^{n×m},   Σ_r = diag(σ_1, σ_2, ..., σ_r),

σ_1 ≥ σ_2 ≥ … ≥ σ_r > 0 are the singular values of Z, and r = min{m, n} is the smallest dimension of the matrix. The matrices U = (u_1, u_2, ..., u_n) ∈ R^{n×n} and V = (v_1, v_2, ..., v_m) ∈ R^{m×m} are orthonormal, U U^T = U^T U = I and V V^T = V^T V = I, and the vectors u_i and v_i are the left and right singular vectors, with σ_j u_j = Z v_j. If the matrix Z ∈ R^{n×m} has more columns than rows, m > n, then the matrix D will not have any zeros on its diagonal.

Eq. (4.3) expresses a forward problem. The inverse problem is then to calculate the parameters p from the measurements g. In the idealized case, where the forward mapping Z is exact and there is no noise, this will be

Z(p) − g = 0.    (4.8)

Typically, though, this is not the case. The system of eq. (4.3) is said to be well-posed in the Hadamard sense [174], provided the following are true:

(i) ∀ g ∈ Y_n, there exists a solution p ∈ P_m for which g = Z(p) holds;
(ii) the solution p is unique;
(iii) the solution is stable, i.e. if g_0 = Z(p_0) and g = Z(p), then p → p_0 when g → g_0.

If the system does not fulfil all three of the above requirements, it is said to be ill-posed. Hadamard was convinced that ill-posed problems are not motivated by the physical reality [12, p. 7]. This dismissal of ill-posed problems can be explained by the general scientific mentality of the early 20th century; this mentality of absolute truth and formality is characteristic of the works of Bertrand Russell and David Hilbert [33].

Definition of ill-conditioning:

(i) If the forward mapping Z is rank-deficient, then eq. (4.3) does not have a unique solution. A simple case where this happens is when the number of unknowns is larger than the number of equations; in other words, the system is underdetermined. In this case some of the columns of Z are linearly dependent and the null space of Z is not trivial, Null(Z) ≠ {0}.

(ii) Underdetermined systems can also appear if the rank r of the n × m matrix is less than its smallest dimension, r < m ≤ n.

(iii) Z can be numerically rank-deficient. Even though Null(Z) = {0}, there can be a few columns which are very close to being linearly dependent. Equivalently, for some k > 1 the spectrum of the singular values will drop suddenly, σ_{k+1} ≪ σ_k. The matrix Z will then contain k numerically linearly independent columns, its effective rank will be equal to k, and the problem is effectively underdetermined.

(iv) The spectrum of the singular values is connected with the oscillations in the singular vectors u_j, v_j.

The condition number of a matrix is equal to the ratio of the largest singular value divided by the smallest,

cond(Z) = σ_1 / σ_r.
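The quantities just defined, the singular values, the effective rank and the condition number, are directly computable from the SVD. A small numerical sketch; the tolerance and the example matrix are illustrative choices:

```python
import numpy as np

def conditioning(Z, tol=1e-10):
    """Singular values, effective (numerical) rank and condition number
    cond(Z) = sigma_1 / sigma_r of a matrix Z."""
    s = np.linalg.svd(Z, compute_uv=False)   # descending singular values
    rank = int(np.sum(s > tol * s[0]))       # count values above the gap
    return s, rank, s[0] / s[rank - 1]
```

A rank-deficient matrix shows up as a sudden drop in the returned spectrum, and a very large condition number signals that noise in g will be strongly amplified in the solution.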
The smaller the singular values σ_j are, the more oscillatory the singular vectors are [67]. Due to the discretization, numerically rank-deficient problems show a clear gap in the decay of the singular values {σ_1, σ_2, ..., σ_r}. The condition number in both numerically rank-deficient and discrete ill-posed problems is typically very large, causing the amplification of noise in the solution. The rank of a matrix is the dimension of its range, r = dim(Range(Z)).

If the system is ill-posed, then the idealized solution of eq. (4.8) is unobtainable, due to measurement noise and the inexactness of the forward model. In such cases one seeks to minimize some discrepancy functional between the predicted and measured data

p_min = arg min_p Φ(p),    (4.9)

where Φ(p) is the objective function, also referred to as the cost function, typically defined as a norm ||g − Z(p)||. In the next section the details of model selection will be discussed.

4.2 Model selection

4.2.1 Image parametrization

If the intensity variation across spatial locations is considered to be sufficiently smooth, an image f(r), with r ∈ R^n, can be approximated using local basis functions

f̃(r) = Σ_{k=1}^{N_ζ} B_k(r) ζ_k,    (4.10)

where {ζ_1, ζ_2, ..., ζ_Nζ} = p_ζ ∈ P forms the parameter vector and N_ζ is the number of basis functions. The kth basis function is defined on a regular grid (fig. 4.1), with the grid points determining
the grid resolution and P ⊂ RNζ is the parameter vector space.50 Chapter 4. This can be seen by a clear gap in the decay of the singular values {σ1 . can be approximated using local basis functions Nζ ˜ f (r) = k=1 Bk (r)ζk .
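As a concrete sketch of the expansion in eq. (4.10), the following evaluates f̃(r) on a 2D pixel grid from a regular 3 × 3 grid of basis centres. A Gaussian radial profile and the specific resolution, width, and coefficients ζ_k are illustrative assumptions, not values used in the thesis.

```python
import numpy as np

# Sketch of eq. (4.10): f~(r) = sum_k B_k(r) * zeta_k on a regular 3x3 grid
# of basis centres. A Gaussian radial profile is used purely for illustration.
nx = 64                                   # pixel resolution (assumption)
sigma = 0.15                              # basis width (assumption)
xs = np.linspace(0.0, 1.0, nx)
X, Y = np.meshgrid(xs, xs)                # pixel locations r in [0, 1]^2

centres = [(cx, cy) for cx in (0.25, 0.5, 0.75) for cy in (0.25, 0.5, 0.75)]
zeta = np.arange(1.0, 10.0)               # arbitrary coefficients zeta_k

f = np.zeros((nx, nx))
for (cx, cy), z in zip(centres, zeta):
    d2 = (X - cx)**2 + (Y - cy)**2        # squared distance to the k-th centre
    f += z * np.exp(-d2 / sigma**2)       # B_k(r) * zeta_k

print(f.shape)
```

Stacking the vectorized images of the B_k as columns of a matrix recovers the matrix form f̃ = B p_ζ used later in the chapter.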
Each B_k is a translated copy of a central basis function B_0,

B_k(r) = B_0(r − r_k) = B_0(r) ∗ δ(r − r_k), (4.11)

where the r_k are a set of grid points in the n-dimensional spatial domain R^n and B_0 is of the general form

B_0(r) = b(r) if d ≤ a, B_0(r) = 0 if d > a, (4.12)

where d is the Euclidean distance from the center of the kth basis function, d = ||r − r_k||_2, and a is the support radius.

Figure 4.1: Regular 3 × 3 grid. The x and y axes represent the spatial location in R² and the z axis represents the intensity.

A typical choice for the basis function b(r) is the Kaiser-Bessel window function [106] (fig. 4.2),

b(r) = ( √(1 − (d/a)²) )^m I_m( α √(1 − (d/a)²) ) / I_m(α), (4.13)

where I_m is the modified Bessel function of the first kind, m is the degree, a is the support and α is a shape parameter. Kaiser-Bessel functions will be used for the reconstruction of images throughout the thesis.

Figure 4.2: Surface plot of the Kaiser-Bessel blob basis in 2D with support radius 1.45 and α = 6.

Another option for the basis functions are the Wendland functions ([181] and [17]). These are defined for C⁶ continuity as follows:

b(r) = (1 − d)⁸ (32d³ + 25d² + 8d + 1). (4.14)

A computationally faster alternative are the linear basis functions

b(r) = 1 − d/a, (4.15)

and the Gauss radial basis functions [147],

b(r) = e^{−d²/σ²}, (4.16)

where σ controls the width.

Let B = (B_1, B_2, ..., B_{Nζ}) be the matrix whose kth column is the vectorized image of the kth basis function B_k; then eq. (4.10) can be rewritten as a matrix equation

f̃ = B p_ζ. (4.17)

Applications of local basis functions in 3D image reconstruction from tomographic data can be found in [52], [107], [115], [116] and [118].

4.2.2 Shape parametrization

The boundary of a region C(s) can be represented with global basis functions as follows:
Figure 4.3: Plot of radial profiles of linear (solid), Gauss with σ = a/2 (dashed), Wendland C6 (dash-dotted) and Kaiser-Bessel with α = 1.45 (dotted) basis functions.
C(s) = (x(s), y(s))^T = Σ_{n=1}^{Nγ} (γ_n^x θ_n(s), γ_n^y θ_n(s))^T, s ∈ [0, 1], (4.18)
where θ_n are periodic and differentiable basis functions, Nγ is the number of basis functions and γ_n^x, γ_n^y ∈ R are the weights of θ_n. Let γ denote the vector of all boundary coefficients, i.e. γ = {γ_1^x, γ_1^y, γ_2^x, γ_2^y, ..., γ_{Nγ}^x, γ_{Nγ}^y} ∈ P. It is γ that controls the shape of the region C(s).
The trigonometric basis functions θn (ﬁg. 4.4) are deﬁned as follows:
θ_1(s) = 1,
θ_n(s) = sin(2π (n/2) s), if n is even,
θ_n(s) = cos(2π ((n − 1)/2) s), if n is odd. (4.19)
Figure 4.4: Plot of Fourier basis functions with Nγ = 7. Dashed curves are the cos (odd) terms and solid curves are the sin (even) terms.
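A minimal sketch of generating a boundary from eqs. (4.18) and (4.19); the weight vectors below are hypothetical and simply chosen so that the resulting curve is a circle, which makes the result easy to check.

```python
import numpy as np

def theta(n, s):
    """Trigonometric basis functions of eq. (4.19), indexed from n = 1."""
    if n == 1:
        return np.ones_like(s)
    if n % 2 == 0:                                   # even index -> sin term
        return np.sin(2 * np.pi * (n // 2) * s)
    return np.cos(2 * np.pi * ((n - 1) // 2) * s)    # odd index -> cos term

def contour(gx, gy, num_points=200):
    """Evaluate C(s) of eq. (4.18) for weight vectors gx, gy (one per theta_n)."""
    s = np.linspace(0.0, 1.0, num_points, endpoint=False)
    x = sum(g * theta(n + 1, s) for n, g in enumerate(gx))
    y = sum(g * theta(n + 1, s) for n, g in enumerate(gy))
    return x, y

# Hypothetical weights: constant + cos term in x and constant + sin term in y
# produce a circle of radius 2 centred at (1, 1).
x, y = contour([1.0, 0.0, 2.0], [1.0, 2.0, 0.0])
r = np.sqrt((x - 1.0)**2 + (y - 1.0)**2)
print(np.allclose(r, 2.0))  # True: every point lies at distance 2 from (1, 1)
```

Adding higher-order terms deforms the circle, which is the "rotating phasors" picture described below.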
Staib and Duncan [157] describe this Fourier parameterisation as rotating phasors defined by groups of four parameters, two for each axis, where the parameters are the weights of the basis functions. In a simplistic way, the generation of a contour using these functions can be thought of as the movement of a robotic arm with a pencil at its end, made out of several parts all rotating continuously at different speeds. This is reminiscent of the epicycles of Ptolemy, which described planetary motion. An alternative to the trigonometric functions are parametric splines. These are local basis functions, in the sense that each one influences the curve locally. They are formed as overlapping functions between different control points. Nγ in eq. (4.18) is in this case the total number of control points and θ_n(s) is defined as follows:
θ_1(s) = θ_0(s),
θ_2(s) = θ_0(s + d),
θ_n(s) = θ_0(s + (n − 1)d), (4.20)

where d is the distance between neighboring control points and θ_0 can be defined by cubic polynomial segments [83] (fig. 4.5):

θ_0(s) = s³/6, if s < 0.25d,
θ_0(s) = (1 + 3s + 3s² − 3s³)/6, if 0.25d ≤ s < 0.5d,
θ_0(s) = (4 − 6s² + 3s³)/6, if 0.5d ≤ s < 0.75d,
θ_0(s) = (1 − 3s + 3s² − s³)/6, if 0.75d ≤ s < d. (4.21)
A further example of the spline approach can be found in [50].
Figure 4.5: Plot of Bspline basis functions with Nγ = 7.
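The four cubic segments of eq. (4.21) assemble into the familiar uniform cubic B-spline. The sketch below evaluates them in a per-segment local coordinate u ∈ [0, 1), which is an assumption about the normalization implied in eq. (4.21), and checks the partition-of-unity property that makes the spline curve a convex combination of its control points.

```python
import numpy as np

def theta0(t):
    """Uniform cubic B-spline basis with support [0, 4]: one cubic segment per
    unit interval, each written in a local coordinate u in [0, 1)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    u = t % 1.0
    seg0 = (0 <= t) & (t < 1)
    seg1 = (1 <= t) & (t < 2)
    seg2 = (2 <= t) & (t < 3)
    seg3 = (3 <= t) & (t < 4)
    out[seg0] = u[seg0]**3 / 6
    out[seg1] = (1 + 3*u[seg1] + 3*u[seg1]**2 - 3*u[seg1]**3) / 6
    out[seg2] = (4 - 6*u[seg2]**2 + 3*u[seg2]**3) / 6
    out[seg3] = (1 - 3*u[seg3] + 3*u[seg3]**2 - u[seg3]**3) / 6
    return out

# Shifted copies theta_n(t) = theta0(t - (n - 1)) form a partition of unity.
t = np.linspace(3.0, 6.0, 50)
total = sum(theta0(t - n) for n in range(9))
print(np.allclose(total, 1.0))  # True
```

At any parameter value only four shifted copies are nonzero, which is the local influence of each control point mentioned above.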
4.3 Data discrepancy functionals
Continuing from eq. (4.9), a variety of data discrepancy functionals can be used for the objective function. The purpose of these functionals is to describe how close the predicted solution y = Zp is to the measured data g. The KullbackLeibler distance is a statistical measure
D_KL(g, y) = Σ_{j=1}^{n} g_j log(g_j / y_j). (4.22)
An example of minimization of the Kullback-Leibler distance under certain conditions is the expectation maximization (EM) method of Richardson [142] and Lucy [110]. More commonly found are data discrepancy functionals based on the L_p norm. For a function f(x) in a measure space (X, L), where L is a measure on X, the L_p norm is

||f||_p = ( ∫_X |f(x)|^p dx )^{1/p}. (4.23)
The discrete version of this norm, for a vector f ∈ B_n, where B_n is an appropriate¹ n-dimensional Banach space, is

||f||_p = ( Σ_{j=1}^{n} |f_j|^p )^{1/p}. (4.24)
The next data discrepancy functional is based on the l1 norm, the taxicab distance
D_l1(g, y) = ||g − y||_1 = Σ_{j=1}^{n} |g_j − y_j|. (4.25)
For examples on the use of l1 norm in optimization problems refer to [37]. The most common of all, the least squares (LS) functional is based on the squared l2 norm, the Euclidean distance
D_l2(g, y) = ||g − y||_2² = Σ_{j=1}^{n} |g_j − y_j|². (4.26)
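A minimal sketch evaluating the three discrepancy functionals above; the data vectors g and y are arbitrary illustrative values.

```python
import numpy as np

def kl_distance(g, y):
    """Kullback-Leibler discrepancy of eq. (4.22); g and y strictly positive."""
    return np.sum(g * np.log(g / y))

def l1_discrepancy(g, y):
    """Taxicab distance of eq. (4.25)."""
    return np.sum(np.abs(g - y))

def ls_discrepancy(g, y):
    """Least squares functional of eq. (4.26), the squared l2 norm."""
    return np.sum((g - y)**2)

g = np.array([1.0, 2.0, 3.0])   # measured data (arbitrary)
y = np.array([1.5, 2.0, 2.5])   # predicted data (arbitrary)
print(l1_discrepancy(g, y))     # 1.0
print(ls_discrepancy(g, y))     # 0.5
print(kl_distance(g, y))        # zero only when y matches g
```

All three vanish when the predictions match the measurements exactly; they differ in how strongly they penalize outliers.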
4.4 Least squares approximation
The solution to the ill-posed nature of certain problems has been studied by mathematicians like Pierre-Simon Laplace in 1799, who used the L1 and the L∞ norms, the 'least-absolute values' and 'minimax' [161] criterions in his own words. It was right in the beginning of the 19th century that the 'least-squares' criterion was established, by Adrien Marie Legendre in 1805 [104], Robert Adrain in 1808 [2] (possibly aware of Legendre's work [160]) and Johann Carl Friedrich Gauss in 1809 [53]. Details on the history of the 'least-squares' method can be found in the works of Harter [71, 72, 73, 74, 75, 76, 77].

¹By appropriate, we mean a space in which an l_p norm can be embedded; a Tsirelson space [170], for example, is not appropriate. Note that a Hilbert space is an example of a Banach space B with the l2 norm embedded; the l2 norm is the only l_p norm that can be embedded in a Hilbert space, as it can be defined by an inner product.

The aim is to minimize the error between the predictions and the measured data, where the predictions are a function of the model parameters, y = Z(p). In this approach the error is measured in a least squares sense,

Φ(p) = D_l2(g, y), (4.27)

where g are the measurements and y are the predictions. In most real cases g ∉ Range(Z) and the idealized solution g − Z(p) = 0 cannot be obtained. Let us first examine the case where the forward model Z is linear.

4.4.1 Linear case

If the mapping Z is linear then it can be expressed as a matrix Z : P → Y, where P ⊂ H^m and Y ⊂ H^n. The objective functional is going to be Φ(p) = ||g − Zp||_2². To obtain the minimum solution p_min = arg min_p Φ(p), the derivative of the objective functional is set to zero,

∂Φ(p)/∂p = 0. (4.28)

Expanding the objective functional,

Φ(p) = (g − Zp)^T (g − Zp) = g^T g − (Zp)^T g − g^T (Zp) + (Zp)^T (Zp),

and noting that (Zp)^T g = g^T (Zp),

Φ(p) = g^T g − 2 p^T Z^T g + p^T Z^T Z p. (4.29)

Taking the derivative term by term,

∂(g^T g)/∂p − 2 ∂(p^T Z^T g)/∂p + ∂(p^T Z^T Z p)/∂p = 0,

and using the matrix derivative identities ∂(p^T a)/∂p = a and ∂(p^T A p)/∂p = (A + A^T) p, where p and a are column vectors and A is a square matrix, we obtain

0 − 2 Z^T g + ((Z^T Z) + (Z^T Z)^T) p = 0.

Noting that the matrix Z^T Z is symmetric, Z^T Z = (Z^T Z)^T, we derive the set of normal equations

Z^T Z p = Z^T g. (4.30)

The solution can be obtained by inversion,

p = (Z^T Z)^{−1} Z^T g, (4.31)

where (Z^T Z)^{−1} Z^T is the Moore-Penrose generalized inverse, also referred to as pseudoinverse, discovered independently by Moore in 1920 [124] and by Penrose in 1955 [134]. The Moore-Penrose inverse A† is uniquely determined by the following four conditions: (1) A A† A = A; (2) A† A A† = A†; (3) (A A†)^H = A A†; (4) (A† A)^H = A† A. These properties of the Moore-Penrose inverse are also true for the proper inverse A^{−1}, and the pseudoinverse A† is equivalent to the proper inverse if the matrix A is square and nonsingular. The system in eq. (4.30) is typically solved by more efficient methods than matrix inversion, such as QR decomposition [13].

In some cases the matrix A will not have full rank, Null(A) ≠ {0}, and the least squares problem in eq. (4.28) will not have a unique solution. More often though, some columns will be numerically close to being linearly dependent, i.e. some singular values will be very close to zero: for a small number ε there exist singular values such that σ_{k+1}, ..., σ_r ≤ ε, while the rest of the singular values σ_1, ..., σ_k are far larger than ε. The SVD spectrum of the theoretically rank-r matrix will have a clear jump in its decay, and it is this k that is the effective rank of the matrix A. Inversion of this numerically rank-deficient matrix will result in very unstable solutions. The remedy to this problem is to truncate the singular values that are below ε by setting them equal to zero; this reduces the variance in the solutions. The approach is named Truncated Singular Value Decomposition (TSVD), and the solution to the TSVD modified problem is given by

p_TSVD = Z_k† g, Z_k† = V [Σ_k^{−1} 0; 0 0] U^T ∈ R^{m×n}, Σ_k = diag{σ_1, ..., σ_k} ∈ R^{k×k},

where Z_k is the best rank-k approximation of Z.
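A minimal numerical sketch of the effective rank and the TSVD solution; the matrix, data, and truncation threshold ε below are illustrative assumptions.

```python
import numpy as np

# The 100x60 matrix Z has exact rank 10, perturbed slightly so that it is
# only *numerically* rank-deficient; eps is an assumed truncation threshold.
rng = np.random.default_rng(0)
Z = rng.standard_normal((100, 10)) @ rng.standard_normal((10, 60))
Z += 1e-10 * rng.standard_normal(Z.shape)
g = rng.standard_normal(100)

U, s, Vt = np.linalg.svd(Z, full_matrices=False)
print("cond(Z) = sigma_1/sigma_r =", s[0] / s[-1])  # very large

eps = 1e-8 * s[0]
k = int(np.sum(s > eps))     # the clear gap in the spectrum gives k
print("effective rank k =", k)

# p_TSVD = Z_k^dagger g: invert only the k retained singular values.
p_tsvd = Vt[:k].T @ ((U[:, :k].T @ g) / s[:k])
print(p_tsvd.shape)
```

Inverting all 60 singular values instead would amplify the perturbation by roughly the condition number, which is the instability the truncation removes.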
4.4.2 Nonlinear case

For the nonlinear observation model, we define the weighted least squares functional

Φ(p) = ||L_w (g − Z(p))||_2², (4.32)

where L_w is a weight matrix². To minimize this functional, we approximate it with a linear model in the local neighborhood of a given prediction p_k. Under the assumption that the functional is continuously differentiable in the local neighborhood of p_k, it can be expanded into a Taylor series

Φ(p) = Σ_{n=0}^{∞} Φ^{(n)}(p_k) (p − p_k)^n / n!. (4.33)

It is typical in numerical methods to take the first terms of the Taylor series expansion, the quadratic approximation, around the current estimate p_k:

Φ̃(p) = Φ(p_k) + (∂Φ/∂p (p_k))^T (p − p_k) + (1/2) (p − p_k)^T ∂²Φ/∂p² (p_k) (p − p_k). (4.34)

Setting the derivative of the quadratic functional Φ̃(p) equal to zero, we obtain the updated estimate p_{k+1} as the minimiser of Φ̃(p):

∂Φ̃/∂p (p_{k+1}) = ∂Φ/∂p (p_k) + ∂²Φ/∂p² (p_k) (p_{k+1} − p_k) = 0. (4.35)

²In the linear case the weighted least squares solution can be found simply by replacing Z with Z̃ = L_w Z and g with g̃ = L_w g in eq. (4.31).

Assuming that the Hessian matrix ∂²Φ/∂p² is invertible,

p_{k+1} = p_k − (∂²Φ/∂p² (p_k))^{−1} ∂Φ/∂p (p_k). (4.36)

Eq. (4.36) is iterated until it converges or the stopping criteria are met. The derivative of the objective functional, ∂Φ/∂p ∈ R^m, is

∂Φ/∂p (p_k) = −2 J_k^T L_w^T L_w (g − Z(p_k)), (4.37)

where J_k = ∂Z/∂p (p_k) is the Jacobian matrix. The Hessian matrix ∂²Φ/∂p² ∈ R^{m×m} will then be

∂²Φ/∂p² (p_k) = 2 J_k^T L_w^T L_w J_k − 2 Σ_{j=1}^{n} [L_w^T L_w (g − Z(p_k))]_j ∂²Z_j/∂p² (p_k). (4.38)

Setting K_k = −Σ_{j=1}^{n} [L_w^T L_w (g − Z(p_k))]_j ∂²Z_j/∂p² (p_k), so that ∂²Φ/∂p² = 2 (K_k + J_k^T L_w^T L_w J_k), and substituting eqs. (4.37) and (4.38) into eq. (4.36), we obtain the Newton-Raphson iteration formula

p_{k+1} = p_k + s_k (K_k + J_k^T L_w^T L_w J_k)^{−1} J_k^T L_w^T L_w (g − Z(p_k)), (4.39)

where s_k is the step parameter, controlling the convergence of the iterations [13]. It is the distance to be travelled in the minimizing direction. The computation of the step size can be done with a line search algorithm ([11], [85]), which solves the 1D minimization problem of finding the correct distance to be travelled in the downhill direction, so that the minimum will not be missed.

Computation of the Hessian matrix is usually a slow process, which can be a prohibiting factor for many applications. Various approximations are often used as an alternative to the Newton-Raphson method; these are referred to as quasi-Newton methods, since they lack second derivative information. Replacing the inverse Hessian factor (K_k + J_k^T L_w^T L_w J_k)^{−1} with the identity matrix, we obtain the steepest descent method

p_{k+1} = p_k + s_k J_k^T L_w^T L_w (g − Z(p_k)). (4.40)

Another common approximation of eq. (4.39) is the Gauss-Newton method, where the term K_k is ignored:

p_{k+1} = p_k + s_k (J_k^T L_w^T L_w J_k)^{−1} J_k^T L_w^T L_w (g − Z(p_k)). (4.41)

The Gauss-Newton method can be thought of as a series of approximated linear problems. Each iteration of the above method solves the linearised problem

p_min = arg min_p ||L_w (g − (Z(p_k) + J_k (p − p_k)))||_2², (4.42)

which is used to update the estimated solution, p_{k+1} = p_k + p_min. At each step of the iterations p_min represents the error between the predicted data Z(p_k) and the measured data g in the model space. The Levenberg-Marquardt method at each iteration minimizes the following objective functional:

Φ(p) = ||L_w (g − (Z(p_k) + J_k (p − p_k)))||_2² + λ ||p − p_k||_2², (4.43)

where λ ≥ 0 is a control parameter. Finally, replacing K_k with a control term λI and assuming that the step parameter is fixed, s_k = 1, we arrive at the Levenberg [105]-Marquardt [117] method

p_{k+1} = p_k + (J_k^T L_w^T L_w J_k + λI)^{−1} J_k^T L_w^T L_w (g − Z(p_k)). (4.44)

The convergence of the method depends on the parameter λ and is in between the Gauss-Newton direction (when λ = 0) and the steepest descent direction (when λ → ∞). For further details on the Levenberg-Marquardt method refer to Marquardt's original paper [117] as well as [85] and [13].

4.5 Constrained optimization: The method of Lagrange

The majority of methods presented in the previous section §4.4.2 require a line search algorithm for the calculation of the step length in the descent direction. Another approach is to define a region where the linearized model is considered to be a good approximation of the objective functional. While minimizing the linearized objective functional Φ̃(p), the update Δp = p_{k+1} − p_k is restricted to be within a region of radius δ_t, that is ||Δp||_2 ≤ δ_t. This is a constrained optimization problem and it can be expressed as:

min Φ̃(p), subject to ||Δp||_2 ≤ δ_t. (4.45)

These methods are called trust-region or restricted step methods; they do not seek an exact solution to the above problem, but instead a near optimal one. An example of this methodology is the Levenberg-Marquardt method (eq. (4.43)): in the standard Levenberg-Marquardt algorithm the trust region radius is indirectly controlled with the use of λ. Modern trust region methods offer direct control of δ_t, simultaneously evaluating both the step length and the direction; Moré [125] gives details of such methods.

The methodology for constrained optimization problems was initially discovered by Leonhard Euler and Joseph Louis Lagrange in the midst of the 18th century. While Euler had originally worked out a geometrical proof of his method, Lagrange was able to prove the same using analysis alone. Lagrange published some applications of his method in 1788 and further generalized it in 1797. More details on the history of the calculus of variations can be found in [57].

Consider the following optimization problem:

min_p Φ(p), subject to f_i(p) = 0 and c_j(p) ≤ 0, (4.46)

where p ∈ R^m, f_i is an array of equality constraint functions with i ∈ [1, N_e] and similarly c_j is an array of inequality constraint functions with j ∈ [1, N_i]. By the method of Lagrange the constrained problem in m variables is transformed into an unconstrained problem with additional variables: the unconstrained problem will be a problem in m + N_e + N_i variables, with the additional ones being the Lagrange multipliers. We form the following objective functional:

Λ(p, λ, µ) = Φ(p) + λ f(p) + µ c(p), (4.47)

where Λ(p, λ, µ) is the Lagrangian and λ ∈ R^{N_e} and µ ∈ R^{N_i} are the Lagrange multipliers. As previously, to minimize this we need to find the stationary points. We set the derivatives with respect to p and to the Lagrange multipliers λ and µ equal to zero:

∂Λ(p, λ, µ)/∂p = 0, ∂Λ(p, λ, µ)/∂λ = 0, ∂Λ(p, λ, µ)/∂µ = 0.

Solutions are obtained by solving the above system of equations subject to existence and optimality conditions; typically the Karush-Kuhn-Tucker conditions are used [11]. These stationary points of the Lagrangian are potential solutions of the constrained problem in eq. (4.46). The inequality constraints can be converted into equality constraints with the introduction of a slack variable: c_j(p) ≤ 0 is equivalent to c_j(p) + a² = 0 [111]. An alternative to the slack variable method was presented in [178]. In general equality constraints are simpler to solve than inequality ones, though neither usually has a trivial solution. Bazaraa et al in [11] also discuss the necessary conditions for the existence of an optimal solution in nonlinear programming. More details on constrained optimization and Lagrangian methods can be found in [183] and [45].
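Returning to the unconstrained iterations of §4.4.2, the Levenberg-Marquardt update of eq. (4.44) can be sketched as follows, with L_w = I and a fixed λ. The two-parameter exponential forward model and all numerical values are hypothetical, chosen only so the iteration is easy to verify.

```python
import numpy as np

# Levenberg-Marquardt sketch for eq. (4.44), with L_w = I and fixed lambda.
# Hypothetical forward model: Z(p)(t) = p0 * exp(-p1 * t).
def Z(p, t):
    return p[0] * np.exp(-p[1] * t)

def jacobian(p, t):
    # Columns are dZ/dp0 and dZ/dp1.
    return np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])

t = np.linspace(0.0, 2.0, 20)
p_true = np.array([2.0, 1.5])
g = Z(p_true, t)               # noise-free measurements for the sketch

p = np.array([1.0, 1.0])       # initial estimate
lam = 1e-2                     # control parameter lambda
for _ in range(50):
    r = g - Z(p, t)
    J = jacobian(p, t)
    # p_{k+1} = p_k + (J^T J + lambda I)^{-1} J^T (g - Z(p_k))
    p = p + np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)

print(np.allclose(p, p_true, atol=1e-4))  # True: iteration recovers p_true
```

With λ = 0 the same loop is the Gauss-Newton iteration of eq. (4.41); increasing λ shortens the step and rotates it towards the steepest descent direction.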
Constraints can be employed to restrict the solutions to some specific set, such as the positive numbers R+, or more generally to impose prior knowledge on the solution set. An analysis of constraints and penalty functionals can be found in [11].

4.6 Tikhonov regularisation

Ill-posed problems suffer from instability in their solutions. The ill-conditioned nature of the inverse operator can cause the solution to be far away from the real or sought solution. To cure this, the ill-posed problem is replaced by a well-posed approximation, such that the new approximate solution is stable and unique. Essentially, this is a method of obtaining stable solutions to ill-posed problems by forming a constrained optimization problem, the constraint being that the set of solutions is a compact one [164]. Common regularization techniques include methods based on the TSVD, introduced in §4.4.1, and Tikhonov regularization. Tikhonov regularization became widely known from the works of Andrey Nikolayevich Tikhonov in 1943 [165] and David Phillips in 1962 [136]; both applied the method to integral equations, and both Tikhonov [164, 57] and Phillips [136, 86] used Lagrange multipliers to derive the regularization method.

The generalized Tikhonov regularization assumes the following objective functional:

Φ(p) = ||L_w (g − Z(p))||_2² + λ P(p), (4.48)

where L_w is a weight matrix, λ > 0 is a regularization parameter and P is a penalty functional. The penalty functional is used to penalize unwanted solutions, typically non-smooth ones. The Tikhonov regularization method is essentially a solution to the following constrained optimization problem [13]:

min_p ||L_w (g − Z(p))||_2², subject to P(p) ≤ ε, (4.49)

where ε ∈ R+ is a scalar governing the balance between a small residual and a penalized solution. Typical choices for the penalty functional are the l1 (see e.g. [48]) and l2 norms, as well as the total variation (TV) functional
TV(f) = ∫_Ω |∂f/∂x| dx = ∫_Ω |∇f| dx, (4.50)

where x ∈ Ω is an m-dimensional vector, Ω ⊆ R^m is a bounded open set and ∂f/∂x = ∇f is the gradient of f. The TV functional has been used in image processing optimization problems in the works of Candès et al [23], Rudin et al [143] and Chan et al [25]; for the theory on TV regularization refer to [174]. The discussion here will be restricted to the l2 penalty functional, as in the original formulation by Tikhonov. We set the penalty functional to

P(p) = ||L (p − p∗)||_2²,

where p∗ is an a priori parameter estimate and L is a regularization operator.

4.6.1 Linear case

The forward model Z can be expressed as a linear mapping Z : P → Y, where P ⊂ H^m and Y ⊂ H^n are defined as usual. The Tikhonov regularization functional is then formulated as follows:

Φ(p) = ||L_w (g − Zp)||_2² + λ ||L (p − p∗)||_2². (4.51)

Setting the derivative of the objective functional Φ(p) equal to zero, we obtain the set of normal equations

(Z^T L_w^T L_w Z + λ L^T L) p = Z^T L_w^T L_w g + λ L^T L p∗. (4.52)

If the regularization operator L is chosen so that Null(Z) ∩ Null(L) = {0}, which directly implies that Z^T L_w^T L_w Z + λ L^T L is positive definite, then the solution is unique and defined as:

p = (Z^T L_w^T L_w Z + λ L^T L)^{−1} (Z^T L_w^T L_w g + λ L^T L p∗). (4.53)

The incorporation of a priori information using the regularizing operator can be seen better in the "stacked form"

Φ(p) = || [L_w g; √λ L p∗] − [L_w Z; √λ L] p ||_2². (4.54)

Intuitively speaking, this can be interpreted as converting the (numerically) underdetermined least squares problem to an overdetermined system of equations by adding the a priori information encoded in the regularization operator L. As in the ordinary Tikhonov scheme, assume that the regularization operator L and the weighting matrix L_w are equal to the identity matrix I and that we have no prior estimate of p, i.e. p∗ = 0. Performing these substitutions in eq. (4.53) we obtain the solution
p = (Z^T Z + λI)^{−1} Z^T g, (4.55)

which minimizes the following objective functional:

Φ(p) = ||g − Zp||_2² + λ ||p||_2², (4.56)

with its equivalent "stacked form":

Φ(p) = || [g; 0] − [Z; √λ I] p ||_2². (4.57)

This is known as damped least squares [171]; in the statistical literature it is referred to as ridge regression [13].

The choice of the regularization parameter λ is an important research topic, and the effort is to find a method that automatically calculates it. Such methods include generalized cross validation [68], the discrepancy principle and the L-curve method [174]. The L-curve method requires solving the minimisation problem for a variety of λ values and then choosing the optimal one, which is prohibitive for many applications due to the computational cost; a discussion on the L-curve method can be found in [173]. The literature on the topic of "automatic" regularisation parameter choice is vast and a proper discussion of these methods exceeds the purposes of this thesis.
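A minimal sketch of eq. (4.55) on a hypothetical underdetermined system, verifying that it coincides with the stacked formulation of eq. (4.57); the matrix, data, and λ are arbitrary illustrative values.

```python
import numpy as np

# Tikhonov-regularized solution p = (Z^T Z + lambda I)^{-1} Z^T g, eq. (4.55),
# compared against the stacked least squares system of eq. (4.57).
rng = np.random.default_rng(1)
Z = rng.standard_normal((30, 80))   # underdetermined: more unknowns than data
g = rng.standard_normal(30)
lam = 0.1

m = Z.shape[1]
p_tik = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ g)

# Stacked form: solve [Z; sqrt(lam) I] p = [g; 0] in the least squares sense.
Z_stack = np.vstack([Z, np.sqrt(lam) * np.eye(m)])
g_stack = np.concatenate([g, np.zeros(m)])
p_stack, *_ = np.linalg.lstsq(Z_stack, g_stack, rcond=None)

print(np.allclose(p_tik, p_stack))  # True: both formulations agree
```

Without the λ term, Z^T Z here is singular (rank at most 30 for 80 unknowns); the regularization makes the system uniquely solvable, at the price of a bias controlled by λ.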
4.6.2 Nonlinear case

For the nonlinear observation model Z(p), the generalized Tikhonov regularization is expressed as in eq. (4.48),

Φ(p) = ||L_w (g − Z(p))||_2² + λ P(p). (4.58)

The Newton-Raphson method, based on successive linear approximations, is derived as in §4.4.2 by setting the derivative of the linearized objective functional equal to zero, leading to the following formula:

p_{k+1} = p_k − (∂²Φ/∂p² (p_k))^{−1} ∂Φ/∂p (p_k). (4.59)

For convenience we will use Newton's notation and set ∂P/∂p (p_k) = P′(p_k) and ∂²P/∂p² (p_k) = P″(p_k). The gradient of the objective functional, ∂Φ/∂p ∈ R^m, is then

∂Φ/∂p (p_k) = −2 J_k^T L_w^T L_w (g − Z(p_k)) + λ P′(p_k). (4.60)

The Hessian matrix ∂²Φ/∂p² ∈ R^{m×m} is evaluated as

∂²Φ/∂p² (p_k) = 2 K_k + 2 J_k^T L_w^T L_w J_k + λ P″(p_k). (4.61)

Ignoring the K_k term in the Hessian matrix, we obtain the Gauss-Newton regularized solution with an iteration formula of the following kind:

p_{k+1} = p_k + s_k (J_k^T L_w^T L_w J_k + (1/2) λ P″(p_k))^{−1} (J_k^T L_w^T L_w (g − Z(p_k)) − (1/2) λ P′(p_k)). (4.62)

Typically the 1/2 term can be absorbed by the regularization parameter λ to simplify the iteration formula further. Setting the penalty functional to P(p) = ||p||_2², one arrives at the Levenberg-Marquardt method described in §4.4.2. Additional constraints, such as positivity of the solutions, p ∈ R^m_+, can be incorporated in a similar way as presented here.

Up to this point all the presented methodology is based on an algebraic, deterministic approach to minimization. In the next section a statistical approach to minimization will be discussed; more details on statistical approaches and Bayesian methods can be found in [161] and [86].

4.7 Statistical estimation: Kalman filters

An alternative to the deterministic approach to inverse problems is to formulate the problem using tools from statistical analysis. The focus of this section will be on the theory of Kalman filters. During World War II, Norbert Wiener (1894-1964) and Andrey Kolmogorov (1903-1987) set the foundations for the theory of filtering. Their efforts were to solve target tracking problems, and they formulated independently a theory, in 1941 (Kolmogorov [98]) and in 1942 (Wiener [185]), now known as Wiener-Kolmogorov filtering theory. Rudolf Kalman (1930) published in 1960 [87] his results on the Wiener-Kolmogorov problems. Kalman went further than Wiener and Kolmogorov by allowing time variation in the system, and while many people worked on filtering theory, the best known results are the Kalman filters. For a further historical perspective refer to [156].

Kalman filtering has found many applications in imaging and in engineering in general. In Electrical Impedance Tomography (EIT), Kolehmainen et al in [97] presented a method for
estimating time varying boundaries of regions with known conductivity, and in [93] Kim et al developed a method for dynamic imaging in EIT. Baroudi et al [8] worked on the estimation of gas temperature distribution in electric wire tomography. In [88] Kao et al presented a sinogram restoration technique for image reconstruction in Positron Emission Tomography (PET). In [123] Kalman filters were used in cardiac MRI to estimate myocardial motion using velocity fields.

Figure 4.6: From left to right, N. Wiener, A. Kolmogorov and R. Kalman.

4.7.1 Linear case: Discrete Kalman filters

The discrete Kalman filter algorithm solves the estimation problem of the state p ∈ R^m of a time controlled process given some measurements g ∈ R^n. This process is governed by a linear equation of the form

p_t = S p_{t−1} + w_{t−1}, (4.63)

where S is the state transition model: it relates the state at time step t − 1 to the next time step and represents our knowledge of the motion of the states. If the motion is unknown, the identity matrix can be used as S to express a random-walk model. The process state is related to the measurement by

y_t = Z p_t + n_t, (4.64)

where Z : P → Y is assumed to be linear and represents the mapping from the state space to the measurement space; in the inverse problem terminology, this is referred to as the forward model. The variables w_t and n_t represent the process and measurement white noise, respectively; they are assumed to have normal probability distributions,

p(w_t) ∼ N(0, C_{w,t}), (4.65)

p(n_t) ∼ N(0, C_{n,t}), (4.66)
where C_{w,t} and C_{n,t} are the process and measurement noise covariance matrices at time t.

We define p_{t|t−1} to be the a priori state estimate, representing our knowledge of the process prior to the step t, and p_{t|t} to be the a posteriori state estimate at time step t. The a priori and the a posteriori estimate errors are

e_{t|t−1} ≡ p_t − p_{t|t−1}, (4.67)

e_{t|t} ≡ p_t − p_{t|t}. (4.68)

Then the a priori and a posteriori estimate error covariances are

C_{t|t−1} = E[e_{t|t−1} e_{t|t−1}^T], (4.69)

C_{t|t} = E[e_{t|t} e_{t|t}^T], (4.70)

where E[·] is the expectation operator. We seek to find an equation that computes an a posteriori state estimate p_{t|t} as a linear combination of the a priori state estimate p_{t|t−1} and a weighted difference between a measurement g_t and a predicted measurement Z p_{t|t−1},

p_{t|t} = p_{t|t−1} + G_t (g_t − Z p_{t|t−1}). (4.71)

The matrix G_t is called the Kalman gain. The aim is to find G_t such that the mean-square estimation error is minimized. To minimize the mean-square error, the trace of the a posteriori error covariance matrix has to be minimized: the trace of C_{t|t} represents the sum of the error expectations between measurements and predictions, and the argument is that the individual mean-square errors will be minimized when the total is minimized [16, p. 215]. The derivation presented in this thesis is along the general lines of [16]; an alternative derivation from the viewpoint of regression analysis can be found in [40].

Continuing from eq. (4.71), we substitute y_t from eq. (4.64),

p_{t|t} = p_{t|t−1} + G_t (Z p_t + n_t − Z p_{t|t−1}),

and the result into eqs. (4.68) and (4.70),

C_{t|t} = E[ ((p_t − p_{t|t−1}) − G_t (Z p_t + n_t − Z p_{t|t−1})) ((p_t − p_{t|t−1}) − G_t (Z p_t + n_t − Z p_{t|t−1}))^T ].

Calculating the expectations and assuming that the a priori error (p_t − p_{t|t−1}) is uncorrelated with the measurement error n_t, the following is obtained:

C_{t|t} = C_{t|t−1} − G_t Z C_{t|t−1} − C_{t|t−1} Z^T G_t^T + G_t (Z C_{t|t−1} Z^T + C_{n,t}) G_t^T. (4.72)

Taking the derivative of the trace of C_{t|t} with respect to G_t, and noting that Tr[A] = Tr[A^T] and that C_{t|t−1} is a covariance matrix and therefore symmetric,

dTr[C_{t|t}]/dG_t = −2 dTr[G_t Z C_{t|t−1}]/dG_t + dTr[G_t (Z C_{t|t−1} Z^T + C_{n,t}) G_t^T]/dG_t. (4.73)

Using matrix derivative identities in eq. (4.73), we obtain

dTr[C_{t|t}]/dG_t = −2 (Z C_{t|t−1})^T + 2 G_t (Z C_{t|t−1} Z^T + C_{n,t});

the matrix Z C_{t|t−1} Z^T + C_{n,t} is symmetric, since C_{t|t−1} and C_{n,t} are both covariance matrices. Setting this derivative equal to zero and solving for G_t,

G_t = C_{t|t−1} Z^T (Z C_{t|t−1} Z^T + C_{n,t})^{−1}. (4.74)

When C_{n,t} approaches zero, G_t weights the residual more heavily:

lim_{C_{n,t}→0} G_t = C_{t|t−1} Z^T (Z C_{t|t−1} Z^T)^{−1}. (4.75)

Setting Z̃ = Z C_{t|t−1}^{1/2},

lim_{C_{n,t}→0} G_t = C_{t|t−1}^{1/2} Z̃^T (Z̃ Z̃^T)^{−1}. (4.76)
Noting that Z̃^T (Z̃ Z̃^T)^{−1} is the underdetermined version of the Moore-Penrose inverse Z̃†,

lim_{C_{n,t}→0} G_t = C_{t|t−1}^{1/2} Z̃† = C_{t|t−1}^{1/2} C_{t|t−1}^{−1/2} Z† = Z†. (4.77)

In simple terms it means that as the measurement error covariance decreases, the actual measurements are trusted more. If the a priori error covariance C_{t|t−1} approaches zero, the Kalman gain G_t will weight the residual less:

lim_{C_{t|t−1}→0} G_t = 0. (4.78)

This means that as the a priori error covariance decreases, less trust is put on the actual measurements g_t and more on the predicted measurements y_t.

Having defined the matrix G_t, the full expression for the a posteriori covariance matrix C_{t|t} can be obtained. Substituting eq. (4.74) into eq. (4.72) and setting D = Z C_{t|t−1} Z^T + C_{n,t}, we obtain

C_{t|t} = C_{t|t−1} − C_{t|t−1} Z^T D^{−1} Z C_{t|t−1} − (C_{t|t−1} Z^T D^{−1} Z C_{t|t−1})^T + C_{t|t−1} Z^T D^{−1} D (C_{t|t−1} Z^T D^{−1})^T.

Noting that D and C_{t|t−1} are symmetric,

C_{t|t} = C_{t|t−1} − G_t Z C_{t|t−1}. (4.79)

The first step of the discrete Kalman filter algorithm is to project the state ahead,

p_{t+1|t} = S_t p_{t|t}, (4.80)

and then the error covariance,

C_{t+1|t} = S_t C_{t|t} S_t^T + C_{w,t}. (4.81)

The previous two equations constitute the prediction step of the algorithm. The next step is to correct. First we compute the Kalman gain G_t, as in eq. (4.74),

G_t = C_{t|t−1} Z^T (Z C_{t|t−1} Z^T + C_{n,t})^{−1}. (4.82)

Next we update the state estimate with respect to the measurement g_t,

p_{t|t} = p_{t|t−1} + G_t (g_t − Z p_{t|t−1}), (4.83)

and the final step is to update the error covariance,

C_{t|t} = C_{t|t−1} − G_t Z C_{t|t−1}. (4.84)

The previous equations are iterated until convergence and they are known as the discrete Kalman filter algorithm. As mentioned previously, this derivation assumes that the process is linear; for nonlinear systems the extended Kalman filter is used, the details of which are given below.

4.7.2 Nonlinear case: Extended Kalman filters

The parameters of the nonlinear forward model are mapped to the data space Y using the following relation:

y_t = Z(p_t) + n_t, (4.85)

where Z(p) : R^m → R^n is a mapping from the parameter space to the data space and n_t is some noise process related to the forward mapping at time t. The forward model Z(p) is nonlinear and it will be linearized around a linearization point p_{∗,t}. The observation equation becomes

y_t = Z(p_{∗,t}) + J_{p,t}(p_{∗,t}) (p_t − p_{∗,t}) + n_t, (4.86)

where J_{p,t} = ∂Z(p_t)/∂p is the Jacobian matrix and n_t is a zero-mean Gaussian observation noise process with covariance matrix C_{n,t}. The linearized problem can be solved recursively using the extended Kalman filter algorithm equations:

G_t = C_{t|t−1} J_{p,t}(p_{∗,t})^T (J_{p,t}(p_{∗,t}) C_{t|t−1} J_{p,t}(p_{∗,t})^T + C_{n,t})^{−1}, (4.87)

p_{t|t} = p_{t|t−1} + G_t (g_t − Z(p_{∗,t}) − J_{p,t}(p_{∗,t}) (p_{t|t−1} − p_{∗,t})), (4.88)

C_{t|t} = C_{t|t−1} − G_t J_{p,t}(p_{∗,t}) C_{t|t−1}, (4.89)

p_{t+1|t} = S_t p_{t|t}, (4.90)

C_{t+1|t} = S_t C_{t|t} S_t^T + C_{w,t}. (4.91)
G_t is the Kalman gain and the matrices C_{t|t} and C_{t+1|t} are the covariance matrices of the Kalman filtered state vector p_{t|t} and the predictor p_{t+1|t} respectively. Eq. (4.88) corrects the parameters p of the model according to the measured data g, while eq. (4.90) projects the parameters to the next time point using a priori knowledge incorporated in the state transition matrix S_t. If the observation model is linearized at the current value of the predictor, p*_t = p_{t|t-1}, eq. (4.88) becomes

    p_{t|t} = p_{t|t-1} + G_t ( g_t - Z(p_{t|t-1}) ).    (4.92)

4.7.3 Fixed interval smoother

The extended Kalman filter algorithm can be used in real time processing as it only uses current and past measurements. In the case where data does not have to be processed in real time, estimates can be calculated from all measurements. The estimates from the Kalman filter algorithm can be further processed using the fixed interval smoother algorithm [4, pp. 187-190]:

    X_{t-1} = C_{t-1|t-1} S_{t-1}^T C_{t|t-1}^{-1}    (4.93)
    p_{t-1|T} = p_{t-1|t-1} + X_{t-1} ( p_{t|T} - p_{t|t-1} ).    (4.94)

4.8 Discussion

In this chapter various approaches to minimization were presented. Least squares estimation gives an introduction to the principles of optimization methods. Using the method of Tikhonov, constraints can be incorporated into the problem with the use of penalty functions. These penalty functions are a representation of our prior knowledge of the process governing a particular problem. In the original formulation of Tikhonov the constraints aimed to replace the inverse (Z^T Z)^{-1}, which is often singular, with a well-behaved bounded inverse (Z^T Z + λP)^{-1}. In addition the regularization parameter λ needs to be defined. Standard Newton-type algorithms typically include a step length parameter s_k, which requires a line search at each iteration. Trust region approaches, e.g. the Levenberg-Marquardt method, are robust in most cases and do not require the step length nor the regularization parameter to be defined.

Deterministic methods in general make many assumptions, for example on the distribution of noise in a process. Their focus is to obtain as much information as possible from measured data by building an exact forward model with some basic assumptions about the noise distribution. Statistical estimation, on the other hand, offers the ability to express the problem with stochastic assumptions about the distribution of data and noise. The uncertainty in the accuracy of the measurements and the forward model is expressed in probability density functions. In stochastic methods the aim is to incorporate accurately a priori information about the sought solution p and the noise statistics.

The Tikhonov regularization can be seen from the stochastic point of view as a maximum a posteriori estimate. A derivation of Newton-type methods from a Wiener-Kolmogorov filtering perspective has been discussed in [51]. Setting the weighting matrix L_w in the Tikhonov method equal to the estimate error covariance matrix C_{t|t} shows the close relation between the two methods; in the absence of noise both methods reduce to the Moore-Penrose inverse, as can be seen in §4.7.1, eq. (4.77). In the case of Tikhonov, L_w^{1/2} is typically set as a diagonal matrix, representing the variances between parameters of the model. The estimate error covariance matrix C_{t|t} has values off the diagonal, representing the covariances between different parameters of the model, and these change in time. In this sense, it can be said that Kalman filters offer a formula for updating the weighting (or covariance) matrix in time. Kalman filters also have a built-in matrix S_t for progressing the parameters of the model ahead in time. Kalman filters therefore have the machinery to estimate a temporal process with much more control of the error distributions, when compared to standard Tikhonov regularization. The extended Kalman filter algorithm will be used in this thesis for the solution of the dynamic shape estimation problem in §8.

Optimization is still a modern topic of research and we have only covered the area that is relevant to the work presented in this thesis. This chapter concludes the discussion on the background of this thesis. In the following chapters we present methods based on numerical optimisation techniques for the reconstruction of images and shapes. We begin with the image reconstruction method in the next chapter.
Chapter 5

Image reconstruction method

5.1 Introduction

Standard techniques for reconstructing images from tomographic data, such as filtered backprojection and gridding, fail in limited data cases as they assume that the data set is complete. Recent results have been very promising in solving problems with limited data. Candès et al [22], [23] presented exact reconstructions of simulated phantoms using very limited data (22 radial profiles). They chose to minimize an approximation to the total variation functional, because of the difficulty of calculating its derivatives. Candès has also presented methods for image reconstruction using curvelets [21], [20]; curvelets are a particular application of wavelet analysis to the Radon transform. An application of the theories presented in [22], [23] incorporating the k-t approach was presented in [112] for the case of cardiac MRI. Ye et al [187] used a level set methodology to reconstruct images from randomly undersampled Fourier data, and Yu and Fessler [188] presented a level set method for the reconstruction of images in PET. In [151] and [96] Kolehmainen et al used statistical analysis to reconstruct images in dental radiology. More examples of the Bayesian approach in optimisation can be found in [146], [69].

In this chapter we present a method for the reconstruction of images from limited data sets, which are typically encountered in dynamic imaging. These limited data sets have the benefit that data for each time frame can be collected very fast; thus, motion of the imaged object during the collection of data is reduced significantly. The presented method does not make any assumptions about the completeness of the data set. To achieve this, the image reconstruction is formulated as a least squares problem: the aim is to minimize the difference between predicted data and measured data. The minimisation process is applied to a least-squares functional, as will be explained in the following sections.
5.2 Forward problem

We approximate the image f(r) with local basis functions,

    f(r) = Σ_{k=1}^{N_ζ} p_{ζ,k} B_k(r),    (5.1)

where r ∈ R^2 is a vector of the form r = {r_x, r_y}, B_k is a basis function, p_ζ ∈ P is the parameter vector and N_ζ is the total number of basis functions used in a given grid. The Kaiser-Bessel radially symmetric basis functions, or as they are commonly referred to, blobs, are of the following type:

    b(r) = (1/I_m(α)) ( sqrt(1 - (d/a)^2) )^m I_m( α sqrt(1 - (d/a)^2) ),    (5.2)

where I_m is the modified Bessel function of the first kind, m is the degree, a is the support, d is the distance from the center of the basis function and α is a shape parameter. The choice of basis functions was based on their performance in terms of image quality in the results of [106] and [147]. Optimal combinations of the shape parameter α and the support can be found in [147] and [61].

In discrete form, as in eq. (4.17),

    f̃ = B p_ζ,    (5.3)

where p_ζ ∈ R^{N_ζ} are the unknown blob coefficients and B ∈ R^{N_r × N_ζ} is the basis functions matrix, with N_r the total number of pixels. Each column of the matrix B is the image vector of one basis function. The projection data g = R(f) ∈ R^n (fig. 5.1) will then be equal to

    g = R B p_ζ.

A discrete method for the calculation of the system matrix J = RB (fig. 5.3) is to create the image of each blob and then take its Radon transform.

Figure 5.1: Radon data. A sinogram with 8 projections, each with 185 line integrals.
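The blob profile of eq. (5.2) is straightforward to evaluate numerically. The following sketch computes the modified Bessel function I_m from its truncated power series; the parameter values m = 2, α = 10.4 and a = 2 are illustrative choices, not necessarily the ones used in this thesis:

```python
import math

def bessel_i(m, x, terms=30):
    """Modified Bessel function of the first kind I_m(x), integer order m,
    via its power series sum_k (x/2)^(2k+m) / (k! (k+m)!)."""
    total = 0.0
    for k in range(terms):
        total += (x / 2.0) ** (2 * k + m) / (math.factorial(k) * math.factorial(k + m))
    return total

def blob(d, m=2, alpha=10.4, a=2.0):
    """Kaiser-Bessel blob value at distance d from its centre, as in eq. (5.2)."""
    if d > a:
        return 0.0                      # the blob is zero outside its support
    s = math.sqrt(1.0 - (d / a) ** 2)
    return (s ** m) * bessel_i(m, alpha * s) / bessel_i(m, alpha)
```

The normalisation by I_m(α) gives a profile equal to one at the centre that decays smoothly to zero at the edge of the support, which is what makes blobs attractive as local basis functions.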
The matrix can also be calculated analytically in both the Fourier and Radon domains. The derivation of the Fourier transform of the basis functions uses the Hankel transform, which is equivalent to the Fourier transform for a rotationally symmetric function. The analytic Fourier formula (fig. 5.2 (Left)) is given in [106], [61]:

    b_F(k) = ( a^k α^m (2π)^{k/2} / I_m(α) ) J_{m+k/2}(z) / z^{m+k/2},  with z = sqrt( (2π a k)^2 - α^2 ),    (5.4)

where J_m is the m-th degree Bessel function of the first kind and k is the dimension of the space. The Radon transform of the basis functions (fig. 5.2 (Right)) is calculated with the use of the central slice theorem and its formula is given in [106], [62]:

    b_R(t) = a ( 2π/α )^{1/2} ( I_{m+1/2}( α sqrt(1 - (t/a)^2) ) / I_m(α) ) ( sqrt(1 - (t/a)^2) )^{m+1/2}.    (5.5)

The Kaiser-Bessel blobs remain radially symmetric in both cases.

Figure 5.2: Radial profile of the Kaiser-Bessel blob in Fourier space (Left) and Radon space (Right).

5.3 Inverse problem: Direct solution

5.3.1 Least squares estimation

The linear system in eq. (5.3) is typically nonsquare and a unique solution is unobtainable. We use the least squares functional to obtain a solution of minimum norm,

    p_{ζ,min} = arg min_{p_ζ} || g - J p_ζ ||_2^2.    (5.6)

A direct solution for the minimization problem of eq. (5.6) is to use the Moore-Penrose pseudoinverse, which gives the following solution:
    p_ζ = ( J^T J )^{-1} J^T g.    (5.7)

We test this method with the Shepp-Logan phantom [150] in fig. 5.4. Data is simulated by taking the Radon transform at 8 angles. The resolution of the image is 128 × 128 pixels. Images are reconstructed using filtered backprojection and the proposed method (fig. 5.5). To compare images numerically, we use the relative mean square error

    rms = || f_g - f ||_2^2 / || f_g ||_2^2,

where f_g and f are the ground truth and predicted vectorised images.

Figure 5.3: The system matrix J. Each column corresponds to a vectorised basis function in the Radon space.

Figure 5.4: Ground truth image. Shepp-Logan phantom.
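The normal-equations solution of eq. (5.7) and the relative mean square error can be sketched on a small toy system. The 2-parameter matrix below is illustrative, not an actual Radon system matrix:

```python
def solve_normal_equations(J, g):
    """Return p = (J^T J)^{-1} J^T g for a tall 2-column matrix J,
    assembling the 2x2 normal equations and solving them by Cramer's rule."""
    a = sum(r[0] * r[0] for r in J)
    b = sum(r[0] * r[1] for r in J)
    d = sum(r[1] * r[1] for r in J)
    y0 = sum(r[0] * gi for r, gi in zip(J, g))
    y1 = sum(r[1] * gi for r, gi in zip(J, g))
    det = a * d - b * b
    return [(d * y0 - b * y1) / det, (a * y1 - b * y0) / det]

def rms_error(f_true, f):
    """Relative mean square error ||f_true - f||^2 / ||f_true||^2."""
    num = sum((t - x) ** 2 for t, x in zip(f_true, f))
    den = sum(t ** 2 for t in f_true)
    return num / den

# Toy overdetermined system: g generated exactly from p_true = [2, -1].
J = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
p_true = [2.0, -1.0]
g = [sum(Jij * pj for Jij, pj in zip(row, p_true)) for row in J]
p = solve_normal_equations(J, g)
```

For noiseless, consistent data the normal equations recover the generating parameters exactly, and the rms metric of the text is then zero.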
Figure 5.5: 8 projections. (Left) Filtered backprojection, rms = 1.73092. (Right) Least squares reconstruction, 8 × 8 grid, rms = 0.2521.

5.3.2 Damped least squares estimation

The use of such a small number of basis functions results in very poor reconstructions. Using the damped least squares method, the grid resolution can be increased to 64 × 64. The increase of this number has a direct effect in the decrease of the conditioning of the matrix J^T J, making it practically singular. To stabilize this inversion we augment the effectively underdetermined system into an overdetermined one with the incorporation of a priori information,

    [ J       ]       [ g ]
    [ sqrt(λ) I ] p = [ 0 ],    (5.8)

which has the following minimizer:

    p_ζ = ( J^T J + λ I )^{-1} J^T g.    (5.9)

This corresponds to the ordinary Tikhonov regularization functional

    Φ(p) = || g - J p_ζ ||_2^2 + λ || p ||_2^2.    (5.10)

As seen in fig. 5.6, the reconstructed image is much more detailed than the least squares version in fig. 5.5. The increase in the grid resolution, however, produces angular artifacts that are common with filtered backprojection methods. In the next section we discuss iterative reconstruction methods, which can significantly reduce these artifacts.
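The equivalence between the augmented system of eq. (5.8) and the damped minimizer of eq. (5.9) is easy to check numerically. A scalar sketch with illustrative values:

```python
import math

def damped_lsq_scalar(j, g, lam):
    """Minimizer p = (j^2 + lam)^{-1} j g of the damped problem, eq. (5.9)."""
    return j * g / (j * j + lam)

def damped_lsq_augmented(j, g, lam):
    """Same solution via ordinary least squares on the augmented system
    [j; sqrt(lam)] p = [g; 0] of eq. (5.8)."""
    rows = [(j, g), (math.sqrt(lam), 0.0)]
    jtj = sum(a * a for a, _ in rows)
    jtg = sum(a * b for a, b in rows)
    return jtg / jtj
```

Both routes give identical answers, and increasing λ shrinks the solution toward zero, which is exactly the damping that stabilizes the near-singular J^T J.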
Figure 5.6: 8 projections. (Left) Filtered backprojection, rms = 1.73092. (Right) Damped least squares reconstruction, 64 × 64 grid, rms = 0.61756.

5.4 Inverse problem: Iterative solution

Using the damped least squares solution as an initial estimate, we can further increase the quality of the reconstructions by solving the following constrained optimization problem:

    p_min = arg min_{p_ζ} TV(p)  subject to  || g - J p_ζ ||_2^2 = σ^2,    (5.11)

where TV is the total variation functional introduced in §4.6 and σ is a known noise level. We solve the equivalent Tikhonov regularization problem by minimizing the objective functional

    Φ(p_ζ) = || g - J p_ζ ||_2^2 + λ TV(p_ζ).    (5.12)

The minimization of this can be thought of as a penalty approach to the constrained problem [111]. The TV functional favors solutions with small total variation; it tends to preserve edge information, yet allows discontinuities in the solution, while favoring solutions with smaller derivatives. The total variation functional was originally used for image denoising tasks [143]. Well-posedness and convergence of this problem have been proved in [1]. A discussion of the edge-preserving properties of the TV functional in statistical estimation can be found in [102].

The method has been applied to tomographic reconstruction problems both with full and limited data [27], [108], [36], [177]. In [28] the total variation functional was used for the reconstruction from full data; the algorithm was named ARTUR. A generalization of ARTUR was discussed in [34], where the authors reconstructed a phantom from limited data. From the Bayesian point of view, the TV approach of Kolehmainen et al [96], with its theoretical background given in [151], was applied to both limited sparse and limited
view data. They reconstructed 2D and 3D images from dental x-ray data. Persson et al [135] described an expectation maximization (EM) algorithm, which they applied to 3D reconstructions of cardiac phantoms from limited view data.

The TV functional is defined as in eq. (4.50),

    TV(p_ζ) = ∫_Ω |∇p_ζ| dp_ζ.    (5.13)

The presence of the absolute value function makes the TV functional non-differentiable at the origin. Therefore we approximate the absolute value function with the continuous function

    ψ(t) = sqrt( t^2 + β^2 ),

as seen in fig. 5.7. Other options for this function can be found in [174]. The approximated TV functional is

    TV(p_ζ) = ∫_Ω ψ( |∇p_ζ| ) dp_ζ.    (5.14)

Setting the gradient |∇p_ζ| = sqrt( (D_x p_ζ)^2 + (D_y p_ζ)^2 ), where D_x and D_y are x and y differential operators, we obtain the discrete version

    TV(p_ζ) = Σ_{i=1}^{N_ζx} Σ_{j=1}^{N_ζy} sqrt( (D_{x,ij} p_ζ)^2 + (D_{y,ij} p_ζ)^2 + β^2 ),    (5.15)

where N_ζx and N_ζy are the total numbers of basis functions in the x and y directions. A variety of choices for the differential operators can be found in [83] and [137]. It is clear now that the objective functional Φ in eq. (5.12) is nonlinear, since the TV penalty term is nonlinear.

Figure 5.7: The solid line represents the absolute value function |t| and the dashed line the approximation ψ(t) = sqrt(t^2 + β^2) for a small value of β.
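The discrete, β-smoothed total variation of eq. (5.15) with forward-difference operators can be sketched as follows; the 4 × 4 test images are illustrative:

```python
import math

def tv_discrete(img, beta=0.1):
    """Beta-smoothed total variation of a 2D list-of-lists image, using
    forward differences with zero differences at the last row/column."""
    ny, nx = len(img), len(img[0])
    total = 0.0
    for i in range(ny):
        for j in range(nx):
            dx = img[i][j + 1] - img[i][j] if j + 1 < nx else 0.0
            dy = img[i + 1][j] - img[i][j] if i + 1 < ny else 0.0
            total += math.sqrt(dx * dx + dy * dy + beta * beta)
    return total

flat = [[1.0] * 4 for _ in range(4)]                 # constant image
step = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]      # image with a vertical edge
```

Note that a constant image does not have zero smoothed TV: each of the 16 pixels contributes β, which is the price paid for differentiability at the origin. With β = 0 the step image has TV equal to its total jump, 4, illustrating why TV penalises oscillation but tolerates a single sharp edge.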
5.4.1 Lagged diffusivity fixed point iteration

The derivative of the approximated TV penalty functional is

    ∇TV(p_ζ) = D_x^T diag( ψ'( |∇p_ζ| ) ) D_x + D_y^T diag( ψ'( |∇p_ζ| ) ) D_y,    (5.16)

where ψ'( |∇p_ζ| ) denotes the derivative of the absolute value approximation. The ∇TV(p_ζ) ∈ R^{N_ζ × N_ζ} matrix is block tridiagonal, with the blocks on the diagonal being tridiagonal matrices and the off-diagonal blocks being diagonal matrices, as shown in fig. 5.8. This is a sparse matrix.

Using the Euler-Lagrange equation for the unconstrained optimization problem in eq. (5.12) we obtain a nonlinear partial differential equation,

    g(p_ζ) = -λ ∇ · ( ∇p_ζ / sqrt( |∇p_ζ|^2 + β^2 ) ) + Z^* ( Z p_ζ - g ) = 0,    (5.17)

where Z^* denotes the adjoint operator of Z. Rudin, Osher and Fatemi [143] solve this system of nonlinear equations using a time marching method. It results in a gradient descent method, or steepest descent with the addition of a line search algorithm for globalization. The slow convergence of the steepest descent method is not ideal for many applications.

The next sections present two methods for the solution of the nonlinear problem in eq. (5.12). In this section we use the lagged diffusivity fixed point iteration of Vogel [176]. It is based on successive linearisations of the objective functional using a quasi-Newton approach. It can be considered as a special case of the ARTUR algorithm [28]. ARTUR has also been applied in the limited data case in [34]. A complete derivation of the fixed point algorithm can be found in [176] and [177].

Figure 5.8: The TV block tridiagonal matrix.
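For intuition, the lagged diffusivity iteration just introduced can be sketched on a tiny 1D denoising problem (taking Z = I): at each step the diffusion weights are frozen at the previous iterate and a small linear system is re-solved. The signal and the parameter values below are illustrative, and the dense Gaussian elimination is for clarity only, since the real operator is sparse:

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lagged_diffusivity_1d(g, lam=0.5, beta=0.01, iters=20):
    """TV denoising of a 1D signal g: repeatedly solve
    (I + lam * D^T W_k D) p = g with the diffusion weights
    W_k = diag(1 / sqrt((D p_k)^2 + beta^2)) lagged one iteration."""
    n = len(g)
    p = g[:]                                   # initial estimate
    for _ in range(iters):
        w = [1.0 / math.sqrt((p[i + 1] - p[i]) ** 2 + beta ** 2)
             for i in range(n - 1)]
        A = [[0.0] * n for _ in range(n)]
        for i in range(n):
            A[i][i] = 1.0                      # identity from the data term
        for i in range(n - 1):                 # add lam * D^T W D (tridiagonal)
            A[i][i] += lam * w[i]
            A[i + 1][i + 1] += lam * w[i]
            A[i][i + 1] -= lam * w[i]
            A[i + 1][i] -= lam * w[i]
        p = gauss_solve(A, g)
    return p

noisy = [0.05, -0.03, 0.02, 1.04, 0.97, 1.01]  # noisy step signal
den = lagged_diffusivity_1d(noisy)
```

The iterates flatten the noise within each plateau while keeping the jump between the two levels, which is the edge-preserving behaviour of the TV penalty discussed above.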
0. (Z T Z+ λT V (pk )) of the objective functional. The results of the ﬁxed point method are presented in ﬁgs.5.9: 8 projections.606 450 400 350 300 250 rms 0.61756. The . (5. 5.61 0.608 0. Unconstrained problems are in general easier to handle from a computational standpoint.6. (Left) Initial (damped least squares) rms = 0.4. 5. Both methods (constrained and unconstrained) produce the same results as long as the parameters are selected appropriately [176].604 0. As seen in the right plot of ﬁg.598 0.10.602 0. (Right) Gradient norm plot.10: 8 projections.614 0.9 and 5.5975. the norm of the gradient has dropped most of the way at about 5 iterations.10.6 0.18) where T Vk = T V (pk ) denotes the derivative of the total variation functional with parameters pk at the kth iteration.596 gr ad 200 150 100 50 0 0 5 10 k 15 20 25 0 5 10 k 15 20 25 Figure 5. Figure 5. which results in a quasiNewton approach. with a nonlinear penalty functional. Note that we have dropped the second derivative in the Hessian. The iteration formula resembles the nonlinear Tikhonov regularization iteration pk+1 = pk + (Z T Z + λT Vk )−1 Z T (g − Zpk ) − λT Vk pk . (Left) rms error over iteration plot.612 0. (Right) Fixed point reconstruction rms = 0. Inverse problem: Iterative solution 81 From another point of view the constrained problem can be thought of as Tikhonov regularization §4.
The decrease after that is very small, yet it is not insignificant.

5.4.2 Primal-dual Newton method

Another option for the solution of the constrained minimization problem has been proposed by Chan et al [27]. Their approach is based on a full-Newton scheme. The highly nonlinear nature caused by the TV functional is linearized more efficiently with the introduction of a dual variable. We introduce a = ∇p_ζ / sqrt( |∇p_ζ|^2 + β^2 ) as the new variable. This transforms eq. (5.17) into a system of nonlinear equations:

    -λ ∇ · a + Z^* ( Z p_ζ - g ) = 0
    a sqrt( |∇p_ζ|^2 + β^2 ) - ∇p_ζ = 0.    (5.19)

For the 2D case we are examining in this chapter, we split the dual variable into its x and y components, a = {u, v}. Setting B( ∇p_ζ ) = diag( ψ( |∇p_ζ| ) ), the system (5.19) is now defined as

    M(u, v, p_ζ) = [ B(∇p_ζ) u - D_x p_ζ ;
                     B(∇p_ζ) v - D_y p_ζ ;
                     λ D_x^T u + λ D_y^T v + Z^T ( Z p_ζ - g ) ] = 0,    (5.20)

with derivative

    M'(u, v, p_ζ) = [ B(∇p_ζ)   0          B'(∇p_ζ) u - D_x ;
                      0          B(∇p_ζ)   B'(∇p_ζ) v - D_y ;
                      λ D_x^T    λ D_y^T   Z^T Z             ].    (5.21)

The solution of the system (5.20) by Newton's method requires the iterative solution of

    [ B_k        0          M_u    ] [ Δu ]     [ B_k u_k - D_x p_k ]
    [ 0          B_k        M_v    ] [ Δv ]  = -[ B_k v_k - D_y p_k ],    (5.22)
    [ λ D_x^T    λ D_y^T    Z^T Z  ] [ Δp ]     [ M_{p_k}           ]

where p_k, u_k, v_k are the parameter and dual vectors at the k-th iteration, B_k = B(∇p_k) and M_u, M_v, M_{p_k} denote the u, v and p_ζ components of M at the k-th iteration. We reduce the system (5.22) to block upper triangular form by block row reduction:
    [ B_k   0     -E_11 D_x - E_12 D_y ] [ Δu ]     [ M_u    ]
    [ 0     B_k   -E_21 D_x - E_22 D_y ] [ Δv ]  = -[ M_v    ],    (5.23)
    [ 0     0     Z^T Z + λ H          ] [ Δp ]     [ grad_k ]

where (with .* denoting elementwise multiplication)

    E_11 = diag( w .* D_x p_k .* u_k ),   E_12 = diag( w .* D_y p_k .* u_k ),
    E_21 = diag( w .* D_x p_k .* v_k ),   E_22 = diag( w .* D_y p_k .* v_k ),    (5.24)

the discretised diffusion operator is

    H = D_x^T B_k E_11 D_x + D_x^T B_k E_12 D_y + D_y^T B_k E_21 D_x + D_y^T B_k E_22 D_y    (5.25)

and grad_k = Z^T ( Z p_k - g ) + λ ∇TV_k p_k is the gradient of the objective functional at the k-th iteration. From the system (5.23) we obtain first the update for p_ζ,

    Δp = - ( Z^T Z + λ H )^{-1} grad_k,    (5.26)

and the updated parameter vector

    p_{k+1} = p_k + Δp.    (5.27)

The updates for the dual variables are

    Δu = - u_k + B_k^{-1} ( D_x p_k + ( E_11 D_x + E_12 D_y ) Δp )
    Δv = - v_k + B_k^{-1} ( D_y p_k + ( E_21 D_x + E_22 D_y ) Δp ).    (5.28)

The dual variables have to be restricted to lie within the conjugate set C* = { y ∈ R^n : |y| ≤ 1 }, i.e. the unit ball, to ensure convergence. This can be done by a line search method as follows:

    s_k = max{ 0 ≤ s ≤ 1 : ( u_k + s Δu, v_k + s Δv ) ∈ C* }.    (5.29)

With the step length s_k calculated, we can update the dual variables:
    u_{k+1} = u_k + s_k Δu
    v_{k+1} = v_k + s_k Δv.

The primal-dual method produces the same reconstruction (fig. 5.11) as the fixed point method (fig. 5.9). It is the convergence speed that is of interest: while the gradient norm plots (figs. 5.10 and 5.12 (Right)) of both methods are very similar, the rms image error of the primal-dual method in fig. 5.12 converges in 5 iterations. The reconstructed images are presented in fig. 5.11. For a more detailed discussion on the primal-dual method refer to [27] and [174]. In the next section the method is further developed to incorporate constraints on the maximal and minimal intensity expected in the reconstructed images.

Figure 5.11: 8 projections. (Left) Initial (damped least squares), rms = 0.61756. (Right) Primal-dual reconstruction, rms = 0.5975.

Figure 5.12: 8 projections. (Left) rms error over iteration plot. (Right) Gradient norm plot.
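The step-length rule of eq. (5.29), which keeps every pixel's dual pair (u_i, v_i) inside the unit ball, can be sketched per component by solving the quadratic |a_i + s d_i| = 1 for the first boundary crossing and taking the smallest admissible value. The vectors below are illustrative:

```python
import math

def dual_step_length(u, v, du, dv):
    """Largest s in [0, 1] such that (u_i + s*du_i)^2 + (v_i + s*dv_i)^2 <= 1
    for every component i, i.e. the dual line search of the primal-dual method."""
    s = 1.0
    for ui, vi, dui, dvi in zip(u, v, du, dv):
        a = dui * dui + dvi * dvi
        if a == 0.0:
            continue                              # this component does not move
        b = 2.0 * (ui * dui + vi * dvi)
        c = ui * ui + vi * vi - 1.0               # <= 0 at the current iterate
        disc = b * b - 4.0 * a * c                # >= 0 since c <= 0
        root = (-b + math.sqrt(disc)) / (2.0 * a) # first crossing of the ball
        if 0.0 <= root < s:
            s = root
    return s

u, v = [0.0, 0.6], [0.0, 0.0]
du, dv = [2.0, 1.0], [0.0, 0.0]
s = dual_step_length(u, v, du, dv)
```

The returned step is the tightest over all components, so after the update every dual pair lies on or inside the unit ball, which is the feasibility condition C* requires.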
5.4.3 Constrained optimisation

Further to the previous results, we can enforce constraints on the intensity values of the parameters. We can incorporate the a priori knowledge that the image will not contain negative values. The constrained optimization problem to be solved is

    min_{p_ζ} Φ(p_ζ)  subject to  c(p_ζ) ≥ 0.    (5.30)

For the solution of this problem we modify the primal-dual method with a projected method [174], where T is a projection operator defined as

    T(p)_i = { p_i  if p_i ≥ 0;  0  if p_i < 0 }.    (5.31)

We define an active set as follows:

    A(p) = { i : 0 ≤ p_i ≤ ε },    (5.32)

where ε is a small number, typically reduced as the iterations progress. The choice can be based on statistical analysis or assumptions about the noise in the system. The active parameters are going to be on the boundaries of the feasible region. We reduce the Hessian as follows:

    H(p)_{ij} = { δ_ij  if i ∈ A(p) or j ∈ A(p);  ∂²Φ/∂p_i ∂p_j  otherwise }.    (5.33)

We obtain a solution from the linear system of equations and update the parameter vector, p_{k+1} = T( p_k + Δp ). Using the projected method, as described, we can also enforce an upper bound constraint by replacing 0 in the above equations with a maximal value and reversing the inequalities. This maximal value can be obtained in cardiac MRI with fair accuracy, by building a time averaged image, which is fully sampled. An alternative to the projection methods are the interior and exterior point methods [45], [49], [111]. An interior point method has been used in [108]. Kolehmainen et al [96] used an exterior point method to apply the positivity constraint.
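The projection operator T and the active set A(p) for box constraints can be sketched as follows; the bounds and ε below are illustrative:

```python
def project(p, lower=0.0, upper=None):
    """Clip the parameter vector to the feasible box, as the operator T does
    for the positivity (and optional upper bound) constraint."""
    out = []
    for x in p:
        x = max(x, lower)
        if upper is not None:
            x = min(x, upper)
        out.append(x)
    return out

def active_set(p, eps=1e-2, lower=0.0, upper=None):
    """Indices whose values sit within eps of a bound; the Hessian rows and
    columns of these indices are replaced by identity entries."""
    idx = []
    for i, x in enumerate(p):
        if abs(x - lower) <= eps or (upper is not None and abs(x - upper) <= eps):
            idx.append(i)
    return idx

p = project([-0.2, 0.005, 0.5, 1.3], lower=0.0, upper=1.0)
```

After projection, the parameters that were clipped (or already hug a bound) form the active set, and freezing them in the Hessian keeps the Newton step from immediately pushing them back out of the feasible region.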
Figure 5.13: 8 projections. (Left) Initial (damped least squares), rms = 0.61756. (Right) Projected primal-dual reconstruction, rms = 0.4833.

Figure 5.14: 8 projections. (Left) rms error over iteration plot. (Right) Gradient norm plot.

The application of intensity inequality constraints combined with the total variation penalty functional has a great effect in the reduction of artifacts, as seen in fig. 5.13. As the constraints are enforced, the gradient norm (fig. 5.14 (Right)) increases as it climbs out of an infeasible solution area of negative and very high intensity values.

5.5 Results

In this section we apply the primal-dual method with both constraints enforced by an active set method. Results are obtained for various degrees of undersampling. For comparison we reconstruct images using filtered backprojection and gridding, currently considered to be the standard in image reconstruction from tomographic data. The grid resolution as previously was fixed at 64 × 64 and the reconstructed images are 128 × 128. In the next sections we apply the method to simulated and measured cardiac data sets.
Figure 5.15: Ground truth image. Fully sampled cardiac image.

5.5.1 Simulated cardiac data

Data was simulated by taking the Radon transform at 4, 8, 13 and 16 equispaced angles from a fully sampled cardiac image (fig. 5.15). This acts as a ground truth image for the numerical comparison between the filtered backprojection and the primal-dual method. The reconstructed images for both methods are shown in fig. 5.16 with various degrees of radial undersampling; fig. 5.17 shows the corresponding rms errors.

In the case of 4 profiles, the filtered backprojection produces an image dominated by streaks, making it practically impossible to distinguish the location and shape of the heart. In the case of the primal-dual method some gross features of the anatomy are visible. As the number of profiles increases these features become clearer in both images. The corruption of the filtered backprojection images by angular artifacts deforms small anatomical features completely and in some locations introduces new features which are not present in the ground truth image. In the primal-dual reconstruction the streaky artifacts are missing. While general features are represented well, even some of the finer ones are also visible. Details that are not reconstructed are typically blurred. On this finer scale the primal-dual method does not introduce new signals or deform their apparent shape.
Figure 5.16: Simulated data reconstructions. (Left) Filtered backprojection. (Right) Projected primal-dual reconstruction. The numbers on the left column indicate the number of profiles.

5.5.2 Measured data from MRI

ECG gated data was acquired from a healthy volunteer. A total of 25 phases, each with 208 radial profiles, were collected using a five-element array receive coil. The data used in this
experiment was generated from the first phase of the acquired data by undersampling to various degrees. This was done by using every n-th profile according to the total number N_under of profiles in the undersampled set, with n = 208/N_under. For the case of real-time MRI, a total of about 200 radial profiles can be collected within a single heart beat using a fast steady state free precession sequence. Using 8 radial profiles results in a 26-fold acceleration compared to the original radial acquisition.

To transform the data into the Radon space, we applied a 1D inverse Fourier transform along each radial profile, according to the central slice theorem. First we investigate the single coil reconstructions. The fully sampled gridding reconstruction is shown in fig. 5.19. This was used as ground truth data, and undersampled reconstructions with the gridding method from §2.3 are compared with primal-dual ones in fig. 5.18. The quality of the reconstructions is compared using normalized images. In fig. 5.20 the rms errors are presented for both methods using different degrees of undersampling.

Figure 5.17: (Left) Simulated cardiac rms plot over the number of profiles. The dashed line represents the filtered backprojection method and the solid line the primal-dual method. (Right) Comparison of central lines of the ground truth and reconstructed images for the case of 8 radial profiles.
Single coil reconstructions

Figure 5.18: Coil 1 reconstructions from measured data. (Left) Gridding. (Right) Projected primal-dual reconstruction. The numbers on the left column indicate the number of profiles.
Figure 5.19: Coil 1. Fully sampled gridding reconstruction used as ground truth image.

Figure 5.20: (Left) Coil 1 rms plot over the number of profiles. The dashed line represents the gridding method and the solid line the primal-dual method. (Right) Comparison of central lines of the ground truth and reconstructed images for the case of 8 radial profiles.
Multiple coil reconstructions

To reconstruct images from multiple receive coils using the proposed method, the operator J was changed to a Radon transform followed by a multiplication with the coil sensitivity matrix. As there was no body coil used for the collection of data, sensitivity matrices were calculated by dividing a time averaged image of each coil by the square root of the sum of squares of all the coil images, similarly to [139]. Time averaged images can be reconstructed with a gridding method using data from all time points in a scanning sequence where profiles are interleaved in order to span the 180 degrees.

For comparison, the gridding reconstructed single coil images were combined in a least-squares sense using the sensitivity matrices by solving, for each pixel, the linear system C_{x,y} = S_{x,y} f(x, y), whose least squares solution is

    f(x, y) = S_{x,y}^† C_{x,y},    (5.35)

where S_{x,y} is a vector containing all coil sensitivity values at pixel location (x, y), f is the image and C_{x,y} is the vector containing the intensity value of each coil image at (x, y). This method is computationally very efficient since it can be solved per pixel, and it produces superior results when compared with the square root of the sums of squares of the coil images. Normalised images of the proposed method and the least squares (LS) gridding method were compared with the fully sampled image (fig. 5.21), which was reconstructed using the least squares gridding approach with the sensitivity matrices described previously. The results of the LS gridding and projected primal-dual reconstructions are displayed in fig. 5.22 and the corresponding rms errors in fig. 5.23.

Figure 5.21: Multiple coil. Fully sampled LS gridding reconstruction used as ground truth image.
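The per-pixel least squares combination of eq. (5.35) reduces to a tiny normal equation at each pixel. A sketch with real-valued toy sensitivities (actual MRI data would be complex-valued, in which case conjugate transposes replace the plain transposes):

```python
def combine_coils_pixel(sens, vals):
    """Least squares pixel intensity f = (S^T S)^{-1} S^T C for one pixel,
    given the coil sensitivities sens and the coil image values vals."""
    num = sum(s * c for s, c in zip(sens, vals))
    den = sum(s * s for s in sens)
    return num / den

# Toy pixel: true intensity 2.0 observed through three coil sensitivities.
sens = [1.0, 0.5, 0.25]
vals = [s * 2.0 for s in sens]
f = combine_coils_pixel(sens, vals)
```

Because the system decouples pixel by pixel, the whole image combination costs one scalar division per pixel, which is why this approach is described as computationally very efficient.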
Figure 5.22: Multiple coil reconstructions from measured data. (Left) LS gridding. (Right) Projected primal-dual reconstruction. The numbers on the left column indicate the number of profiles.

Figure 5.23: (Left) Multiple coil rms plot over the number of profiles. The dashed line represents the LS gridding method and the solid line the primal-dual method. (Right) Comparison of central lines of the ground truth and reconstructed images for the case of 8 radial profiles.

5.6 Discussion

In this chapter we have presented direct and iterative methods for the reconstruction of images using an inverse problem approach. The total variation functional in combination with the application of constraints improves the quality of the reconstructed images and greatly reduces angular artifacts.

The approaches discussed were based on direct inversion of a large sparse matrix. This is a computationally and memory demanding task. While the damped least squares method can be solved efficiently using techniques for sparse matrix inversion, the nonlinear methods require many more inversions due to their iterative nature. Apart from the typical argument of increasing memory capacity and processing power in computers, the direct inversion can be replaced with a more efficient iterative linear system solver, for example a conjugate gradient method [174]. Conjugate gradient methods do not necessitate storing the complete matrix, thus allowing higher grid resolutions. Further improvements to the computational aspect of the minimization problem might be achieved using a multigrid scheme [175].

The primal-dual method was the most computationally demanding method per iteration. When compared to the fixed point method in terms of convergence speed, however, its benefits become clear, as it reaches the desired solution in approximately a third of the iterations needed by the fixed point method. The difference in convergence speed is due to the use of second order derivatives and the improved linearisation in the primal-dual method.

The novel methods introduced in this chapter were compared to the standard filtered backprojection and gridding approaches. These methods were originally designed to deal with complete data sets and propagate high frequency information very well in that case. In the case of undersampled radial MRI, high frequency information is very sparsely sampled, while sampling near the center is dense. Standard approaches can be smoothed to overcome these problems, but they tend to blur a lot of useful information as well. The iterative approach discussed in this chapter does not throw away high frequency information, but attempts to reconstruct it, with the only restriction
. the method can be also be applied to limited view problems. It was shown that the inverse problem approach produces superior results both visually and numerically to standard methods in both simulated and measured data studies. Discussion 95 being the grid resolution. In the majority of cases it does not produce high frequency artifacts. where the total imaging angle is less than 180. while other times it produces smooth results. In the next chapter we leave temporarily the problem of image reconstruction aside and focus on the direct reconstruction of shapes from measured data.4. We believe that our method could potentially be combined with SENSE (§2. while greatly reducing the angular artifacts. It is also worth noting that while the results of this chapter were based on angular samples spanning the 180 degrees. and generally to any Fourier sampling pattern. It is not the aim of our method to replace existing parallel imaging technologies but to demonstrate the feasibility of its application with multiple receive coil data.5.2). For the measured data case we applied the method to both single and multiple receive coil data.6. some times successfully. like introduction and deformation of small structures.
Image reconstruction method .96 Chapter 5.
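The matrix-free conjugate gradient alternative mentioned in the discussion can be illustrated with a small sketch. This is a generic example, not the thesis implementation: the dense matrix A here stands in for the sparse system matrix, and the solver addresses the damped normal equations (AᵀA + λI)x = Aᵀb using only matrix-vector products, so the normal-equations matrix is never formed or stored.

```python
import numpy as np

def cg_damped_least_squares(A, b, lam, n_iter=200, tol=1e-14):
    """Solve the damped normal equations (A^T A + lam*I) x = A^T b with
    conjugate gradients.  Only products with A and A^T are needed, so the
    normal-equations matrix never has to be formed explicitly."""
    def apply(x):                          # action of (A^T A + lam*I)
        return A.T @ (A @ x) + lam * x

    x = np.zeros(A.shape[1])
    r = A.T @ b - apply(x)                 # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = apply(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < tol:                   # residual small enough: stop
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

For a genuinely sparse operator, `apply` would be replaced by sparse matrix-vector products, which is where the memory saving over direct inversion comes from.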
Chapter 6
Shape reconstruction method

In this chapter we present a method for the reconstruction of shapes directly from measured data. Using this direct approach, shapes can be reconstructed from very few samples, which would not be adequate to form an image and segment it with traditional 'snake' approaches. The method has the benefit that images can be segmented from very limited data by working directly in the data space.

In [10] and [9] Battle et al reconstructed surfaces using a Bayesian approach. The surfaces contained volumes with homogeneous intensity in a homogeneous but separable background. A level set method for the reconstruction of surfaces was discussed in [182] and [42]. Feng et al [65] have presented a level set method which segments objects of constant interior in simple backgrounds.

Our approach is based on least squares minimisation using global basis functions for the boundary of the shape. Our model-based approach is stated as a minimisation problem between the predictions of our model and the data, as described in §4.2. We assume in this chapter that the background and the constant interior intensity of the object are known.

6.1 Forward problem

A planar curve can be modelled with the use of basis functions

    C(s) = \begin{pmatrix} x(s) \\ y(s) \end{pmatrix} = \sum_{n=1}^{N_\gamma} \begin{pmatrix} \gamma_n^x \theta_n(s) \\ \gamma_n^y \theta_n(s) \end{pmatrix}, \quad s \in [0, 1],    (6.1)

where \theta_n are periodic and differentiable basis functions, N_\gamma is the number of harmonics and \gamma_n^x, \gamma_n^y \in \mathbb{R} are the weights of \theta_n. The basis functions used are trigonometric functions [157], presented in §3.2. A global representation of the form (6.1) does not have such limitations as convexity for admissible domains, but it does have the drawback that it is difficult to set constraints such as non-self-intersection. Self-intersection occurs when the contour intercepts itself, that is C(s_1) = C(s_2) for two distinct parametric points s_1 and s_2. This clearly is a problem, as it does not represent realistic boundaries. So far there is no analytic solution to the problem of defining a relation in the parameters \gamma that prevents self-intersection.

We solve this problem by detecting the self-intersection with an exhaustive search. This is feasible due to the small size of the search space, that is the number of pixels belonging to the curve. The search can be reduced further by taking into account that neighboring pixels by definition cannot intersect each other. Given a point of self-intersection s_e, the pixels belonging to the smallest loop are removed from the curve. The remaining curve, free of self-intersections, requires reparameterisation, as the s points will no longer be spread evenly in the unit interval [0, 1). There will be a clear gap where the self-intersection was removed, as seen in fig. 6.1. The parametric length of the curve is reduced when a self-intersection is removed. This reduction can be calculated by taking the parametric difference between the exit and the entry sample of the self-intersection. Due to this reduction s_r in parametric length it is required to scale the corrected contour to have again length equal to 1. The scale factor is calculated as follows:

    s_c = \frac{1}{1 - s_r}.

The scaling of the new list for the non-self-intersecting contour is performed by multiplying each parametric difference between two consecutive samples with s_c. At the point where the self-intersection occurred, the new parametric distance is unknown; it is approximated from the mean of all parametric distances in the corrected contour. The new list in the correct scale will be used to calculate the boundary parameters \gamma in a similar manner to [5].

Figure 6.1: (Left) Contour with self-intersection at parametric point s_e. (Right) Corrected contour with the small loop removed.
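The rescaling step can be sketched in a few lines (the sample values here are hypothetical, and the detection of the loop itself is not shown): after the samples of the smallest loop are removed, the remaining parametric differences sum to 1 − s_r, and multiplying them by s_c = 1/(1 − s_r) restores a total parametric length of 1.

```python
def rescale_parametric_differences(ds, loop_start, loop_end):
    """Drop the parametric samples of the smallest loop (indices
    loop_start..loop_end-1) and rescale the remaining differences by
    s_c = 1 / (1 - s_r) so the contour again has parametric length 1."""
    s_r = sum(ds[loop_start:loop_end])      # removed parametric length
    s_c = 1.0 / (1.0 - s_r)                 # scale factor
    kept = ds[:loop_start] + ds[loop_end:]
    return [d * s_c for d in kept]

# hypothetical example: ten equal samples, loop between samples 4 and 7
ds = [0.1] * 10
ds_corrected = rescale_parametric_differences(ds, 4, 7)
```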
The parameterisation formula, based on the fact that the basis functions \theta_n resemble a truncated Fourier series, is the following.

If k is odd,

    \gamma_k^x = 2 \int_0^1 C_x(s) \cos((k-1)\pi s)\, ds
    \gamma_k^y = 2 \int_0^1 C_y(s) \cos((k-1)\pi s)\, ds.

If k is even,

    \gamma_k^x = 2 \int_0^1 C_x(s) \sin(k\pi s)\, ds
    \gamma_k^y = 2 \int_0^1 C_y(s) \sin(k\pi s)\, ds.

Using the trapezoidal rule for numerical integration, the discretised form of the previous equations becomes, if k is odd,

    \gamma_k^x = 2 \sum_{n=1}^{N_s} \big( C_x(n) \cos((k-1)\pi s_n) + C_x(n+1) \cos((k-1)\pi s_{n+1}) \big) \frac{\Delta s_n}{2}
    \gamma_k^y = 2 \sum_{n=1}^{N_s} \big( C_y(n) \cos((k-1)\pi s_n) + C_y(n+1) \cos((k-1)\pi s_{n+1}) \big) \frac{\Delta s_n}{2},

and if k is even,

    \gamma_k^x = 2 \sum_{n=1}^{N_s} \big( C_x(n) \sin(k\pi s_n) + C_x(n+1) \sin(k\pi s_{n+1}) \big) \frac{\Delta s_n}{2}
    \gamma_k^y = 2 \sum_{n=1}^{N_s} \big( C_y(n) \sin(k\pi s_n) + C_y(n+1) \sin(k\pi s_{n+1}) \big) \frac{\Delta s_n}{2},

where \Delta s_n is the nth parametric difference, between the nth and the (n+1)th sample, and N_s is the total number of samples in the contour. Keep in mind that the DC terms in the x and y parameters of the contours are equal to \frac{1}{2}\gamma_1^x and \frac{1}{2}\gamma_1^y. The parameters \tilde{\gamma} are estimated from the pixel locations and parametric differences of each sample of the contour using the above equations. The small deviations from the original parameters \gamma used to produce the two contours are at the sub-pixel level. Results for the reparameterisation of curves with N_\gamma = 7 are presented in table 6.1.
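The discretised formulas can be checked with a short round-trip experiment. This sketch adopts the convention that \theta_1 = 1/2 (so the DC terms come out as \gamma_1/2, consistent with the note above); a contour coordinate is synthesised from known coefficients and the trapezoidal rule recovers them.

```python
import numpy as np

def basis(k, s):
    """Trigonometric basis with theta_1 = 1/2 (so the DC term appears as
    gamma_1 / 2), cosines for odd k and sines for even k."""
    if k == 1:
        return np.full_like(s, 0.5)
    if k % 2 == 1:
        return np.cos((k - 1) * np.pi * s)
    return np.sin(k * np.pi * s)

def recover_coefficients(C, s, n_gamma):
    """Analysis formulas evaluated with the trapezoidal rule:
    gamma_k = 2 * int C(s) cos((k-1) pi s) ds   (k odd)
    gamma_k = 2 * int C(s) sin(k pi s) ds       (k even)."""
    gammas = []
    for k in range(1, n_gamma + 1):
        if k % 2 == 1:
            f = C * np.cos((k - 1) * np.pi * s)
        else:
            f = C * np.sin(k * np.pi * s)
        # trapezoidal rule: sum of (f_n + f_{n+1}) * delta_s_n / 2
        integral = np.sum((f[:-1] + f[1:]) * np.diff(s)) / 2.0
        gammas.append(2.0 * integral)
    return np.array(gammas)

# round trip: synthesise a coordinate from known coefficients, recover them
s = np.linspace(0.0, 1.0, 2001)
gamma_true = np.array([32.0, 10.0, 0.0, 1.0, 4.0])
Cx = sum(g * basis(k + 1, s) for k, g in enumerate(gamma_true))
gamma_est = recover_coefficients(Cx, s, 5)
```

With uniform sampling of the smooth periodic integrand, the recovered coefficients agree with the originals far below pixel level.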
Table 6.1: Results for two reparameterised contours. \gamma^x and \gamma^y are the original coefficients of the basis functions and \tilde{\gamma}^x, \tilde{\gamma}^y are the reconstructed coefficients; the original and reconstructed values (e.g. 27 against 26.9880) agree at the sub-pixel level.

Pixel intensities are calculated depending on whether they belong to the interior, the boundary or the background. This constitutes the mapping of the model to the image space, G(p): P \to X, where P is the parameter space and X is the pixel space. The forward problem is completed by the mapping of the predicted image to the data space, R: X \to Y, where R is the Radon transform. The combined mapping Z = R(G(\gamma)) is nonlinear and ill-posed. We seek a solution to the following Tikhonov regularization problem

    \gamma_{\min} = \arg\min_\gamma \|g - Z(\gamma)\|_2^2 + \lambda \|\gamma\|_2^2.    (6.2)

6.2 Inverse problem

The nonlinear problem of eq. (6.2) can be solved by the method of Levenberg [105] and Marquardt [117]. This method solves a linear approximation at each iteration. The update is given by

    \gamma_{k+1} = \gamma_k + (J^T J + \lambda I)^{-1} J^T (g - J\gamma_k),    (6.3)

where J = \partial(RG)(\gamma)/\partial\gamma is a Jacobian matrix and \lambda is a regularization parameter controlled by the optimization method, dependent on the objective error. The Jacobian of the operator G has been analytically calculated in [94]:

    J_G = \begin{cases} \int_{s_1}^{s_2} n_y(s)\, \theta_n(s)\, ds, & n \in \gamma^x \\ -\int_{s_1}^{s_2} n_x(s)\, \theta_n(s)\, ds, & n \in \gamma^y, \end{cases}    (6.4)

where s_1 and s_2 are the parametric intersection points of the curve with a boundary pixel (fig. 6.2), and n_y = \gamma_n^y\, \partial\theta_n/\partial s and n_x = -\gamma_n^x\, \partial\theta_n/\partial s are the y and x components of the normal to the curve. The derivatives of the basis functions \theta_n are as follows:

    \partial\theta_1/\partial s = 0,
    \partial\theta_n/\partial s = -2\pi \tfrac{n-1}{2} \sin\big(2\pi \tfrac{n-1}{2} s\big), if n is odd,
    \partial\theta_n/\partial s = 2\pi \tfrac{n}{2} \cos\big(2\pi \tfrac{n}{2} s\big), if n is even.

Due to the very small size of a pixel it can be assumed that \theta_n and the components of the normal are constant over the pixel. Eq. (6.4) then becomes, for the x parameters,

    \int_{s_1}^{s_2} n_y(s)\, \theta_n(s)\, ds \approx (s_2 - s_1)\, n_y\Big(\frac{s_1 + s_2}{2}\Big)\, \theta_n\Big(\frac{s_1 + s_2}{2}\Big),    (6.5)

and similarly for the y parameters,

    -\int_{s_1}^{s_2} n_x(s)\, \theta_n(s)\, ds \approx (s_1 - s_2)\, n_x\Big(\frac{s_1 + s_2}{2}\Big)\, \theta_n\Big(\frac{s_1 + s_2}{2}\Big),    (6.6)

where s_1 and s_2 are the parametric points of intersection.

Figure 6.2: Exact parametric points s_1 and s_2 of the intersection of the curve with a pixel.

6.3 Results

To test the shape reconstruction method we examine two cases: the background being equal to zero, and the more general one where the background is nonzero but known. For both of these cases we assume that the interior of the shape is known and constant. Data was simulated by taking the Radon transform at 8 angles that span the 180 degrees. For all the experiments we have used a total of 14 basis function coefficients, 7 for each dimension, for the description of the contours.
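The damped update of eq. (6.3) can be sketched on a generic small problem. The forward operator here is a toy exponential model rather than the Radon-based operator Z of the text, and the Jacobian is formed by finite differences; the damping parameter is adapted in the usual Levenberg-Marquardt fashion.

```python
import numpy as np

def levenberg_marquardt(Z, g, gamma0, n_iter=50):
    """Damped Gauss-Newton iteration
    gamma_{k+1} = gamma_k + (J^T J + lam*I)^{-1} J^T (g - Z(gamma_k)),
    decreasing lam after a successful step and increasing it otherwise."""
    gamma = np.asarray(gamma0, dtype=float)
    lam = 1e-3
    err = np.sum((g - Z(gamma)) ** 2)
    for _ in range(n_iter):
        # forward-difference Jacobian of Z at the current estimate
        J = np.column_stack(
            [(Z(gamma + 1e-7 * e) - Z(gamma)) / 1e-7 for e in np.eye(gamma.size)]
        )
        r = g - Z(gamma)
        step = np.linalg.solve(J.T @ J + lam * np.eye(gamma.size), J.T @ r)
        trial = gamma + step
        trial_err = np.sum((g - Z(trial)) ** 2)
        if trial_err < err:        # accept: trust the linearisation more
            gamma, err, lam = trial, trial_err, lam / 10.0
        else:                      # reject: increase the damping
            lam *= 10.0
    return gamma

# toy forward model (not the Radon-based operator of the thesis)
t = np.linspace(0.0, 1.0, 50)
Z = lambda p: p[0] * np.exp(p[1] * t)
g = Z(np.array([2.0, -1.0]))               # noise-free synthetic data
gamma_fit = levenberg_marquardt(Z, g, [1.0, 0.0])
```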
6.3.1 Simulated data

The first case, where the background contains no signal, is demonstrated with a cartoon heart of constant interior (fig. 6.3). The boundary of the object has high curvature at some locations, yet the method is capable of recovering the shape almost perfectly (fig. 6.4). The gradient norm over iteration plot can be seen in fig. 6.5: the norm of the gradient of the objective functional reduces until it converges, approximately after 7 iterations. The shape is also reconstructed accurately even if the data has been corrupted with noise. Reconstruction from data with 15% added Gaussian noise is shown in fig. 6.6, with the corresponding gradient norm plot in fig. 6.7.

The shape reconstruction approach can be extended to multiple objects. We demonstrate the case of two objects, shown in fig. 6.8. Both of the contours are reconstructed with accuracy (figs. 6.9 and 6.10). To overcome the problem of the shapes intersecting each other, we fill the intersected area with the sum of both constant interiors. This imposes a heavy penalty on intersecting areas and, if the model is accurate enough, it tends to overcome this problem. In multiple shape reconstruction it would be desirable to impose constraints such as spring models between the contours. The trigonometric approach makes it difficult to apply these local methods, as there are no control points to which the springs can be attached and the forces calculated.

In figs. 6.12 and 6.13 results are shown with simulated data from a cardiac image. The cardiac image (fig. 6.11) was created from a fully sampled data set with a filtered backprojection method. There are two sources of error: the approximation of the interior of the shape with a constant value, and the papillary muscle, which has very low intensity values compared with the rest of the interior of the left ventricle. The reconstructed shape approximates the truth well.

Figure 6.3: Ground truth image.

Figure 6.4: Simulated data with no background. (Top Left) Initial superimposed to ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground truth image. (Bottom Right) Final predicted image.

Figure 6.5: Simulated data with no background. Gradient norm plot over iteration.
Figure 6.6: Simulated data with no background and 15% added Gaussian noise. (Top Left) Initial superimposed to ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground truth image. (Bottom Right) Final predicted image.
Figure 6.7: Simulated data with no background and 15% added Gaussian noise. Gradient norm plot over iteration.
Figure 6.8: Ground truth image with multiple shapes.
Figure 6.9: Simulated data with no background. (Top Left) Initial superimposed to ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground truth image. (Bottom Right) Final predicted image.
Figure 6.10: Simulated data with no background. Gradient norm plot over iteration.
Figure 6.11: Ground truth image. Simulated cardiac phantom.
Figure 6.12: Simulated data with known background. (Top Left) Initial superimposed to ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground truth image. (Bottom Right) Final predicted image.

Figure 6.13: Simulated data with known background. Gradient norm plot over iteration.
6.3.2 Measured data from MRI

Data was obtained directly from an MRI scanner as described in §5.5. The fully sampled reconstruction using the LS gridding method, presented in §5.2, is the ground truth image, fig. 6.14. As seen from the fully sampled reconstruction, the signal contains significant noise, which appears as freckles in the fully reconstructed image. The shape reconstruction method was able to approximate the true boundary closely (fig. 6.15). The gradient norm plot is shown in fig. 6.16. In fig. 6.18 the shape was reconstructed from multiple coil data. The sensitivity matrices for the multiple coil reconstruction were calculated as described previously in §5.5.2.

Single coil reconstructions

Figure 6.14: Ground truth image calculated from a fully sampled single coil data set.
Figure 6.15: Measured single coil data with known background. (Top Left) Initial superimposed to ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground truth image. (Bottom Right) Final predicted image.

Figure 6.16: Measured single coil data with known background. Gradient norm plot over iteration.
Multiple coil reconstructions

Figure 6.17: Ground truth image calculated from a fully sampled multiple coil data set.

Figure 6.18: Measured multiple coil data with known background. (Top Left) Initial superimposed to ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground truth image. (Bottom Right) Final predicted image.
Figure 6.19: Measured multiple coil data with known background. Gradient norm plot over iteration.

6.4 Discussion

The shape reconstruction method has been demonstrated to work with both simulated and measured data. Results have been demonstrated for the case of 8 radial profiles. As was demonstrated in the results of the previous section, our shape reconstruction method will produce accurate results even from fewer than 8 angular samples, and the method can also be applied to other degrees of angular sampling.

The quality of the reconstruction is dependent on the knowledge of the interior of the shape. Nevertheless, the shape reconstruction method gives good approximations even when this knowledge is not available. In the case of noisy data combined with an inaccurate interior intensity model, the quality of the reconstructed shapes decreases, as seen in figs. 6.15 and 6.18.

The number of objects has to be known in advance. This might be considered a disadvantage when compared to level set methods, which are topologically adaptive. It is often the case though that the number of shapes is known in many applications, and additional constraints then have to be applied to level set methods to restrict the topology.

Our inverse problem approach is not based on derivative information, but on an accurate model of the object. If the model of the shape is accurate, then the shape reconstruction can be expected to be robust and precise. When the model is precise, the solution can be thought of as a global minimum of the objective function. On the other hand, when the model is not a good approximation of the true object, then g ∉ Range(Z(γ)), which typically implies that there are many local minima, and the true solution is hard to find. As is typical with many 'snake' approaches, a close initialisation will then result in a good approximation of the boundary; when the model is accurate, the segmentation is robust and the initial position does not have to be very close to the real boundary.

The problem of reconstruction from incomplete Radon data also depends on the background structure. In a simple, but smooth, background, for example one of zero intensity, the solutions are easy to find as they lie in a deep valley of the objective function. In a complex background with a variety of structures, many solutions exist that minimize the objective function locally. The valleys are smaller, making the true solution much harder to track. It is then required to incorporate a priori information that will restrict the solution space to a well-defined set of meaningful results.

In the next chapter we discuss the problem of reconstructing a shape of unknown, but smooth, interior intensity in an unknown background.
Chapter 7
Combined reconstruction method

Total-variation-based methods in image reconstruction from tomographic data have been demonstrated in the works of [28] and [96], as discussed in §5.4. TV based methods have the property of enhancing edges when compared to standard Tikhonov regularisation, which tends to produce smooth solutions. This edge enhancing property applies globally to all locations of the image. The combined shape and image reconstruction method enjoys this global edge enhancing property, while giving the ability to enhance edges further on the boundary of the estimated shape.

Ye et al [187] have presented a method which alternates the minimization process between the reconstruction of the image and the shape. Our approach is based along the same general lines of an alternating minimisation approach. It uses the reconstructed image for the estimation of the background and the interior of the shape, and then takes advantage of the shape information in the image reconstruction. The approach discussed in this chapter differs significantly from these methods in the fact that it combines the image reconstruction with the shape reconstruction. Our method is based on the ideas presented in chapters §5 and §6. It is an initial solution to the problem of estimating shapes with an unknown intensity in an unknown background from limited data.

7.1 Forward and inverse problem

The first part of the problem is to estimate the boundary of the shape. The aim is to obtain the solution to the nonlinear minimisation problem

    \gamma_{\min} = \arg\min_\gamma \|g - Z(\gamma)\|_2^2 + \lambda_\gamma \|\gamma\|_2^2,    (7.1)

where the operator Z(\gamma) = (RG)(\gamma) is the forward operator, g is the measured data, \gamma is the shape parameter vector and \lambda_\gamma is a regularization parameter. The operator G: P \to X maps the parameters of the shape into the image space. Pixels are classified as belonging to the interior, the boundary or the background. Boundary pixels are calculated relative to their area belonging in the interior and the background. The interior of the object is modelled as a smoothly varying distribution of intensities. These intensities are calculated from the reconstructed image and they are restricted to be within a small percentage of the maximal intensity in the interior of the object. Surrounding these pixels, we assume that there is a narrow band of low intensities. This is modelled from the reconstructed image in a similar way to the interior intensities, by selecting the minimum value within that band and restricting the corresponding coefficients as before. The background intensities are estimated directly from the reconstructed image. The operator R: X \to Y, which maps the predicted image to the data space, is the Radon transform.

The shape reconstruction problem in eq. (7.1) is solved with the Levenberg-Marquardt method, as described previously, which gives the following update:

    \gamma_{k+1} = \gamma_k + (J^T J + \lambda I)^{-1} J^T (g - J\gamma_k).    (7.2)

Our image reconstruction approach is based on the estimation of the shape. In the image reconstruction part of the problem we solve

    p_\zeta^{\min} = \arg\min_{p_\zeta} \|g - J_p p_\zeta\|_2^2 + \lambda\, TV_\beta(p_\zeta) \quad \text{subject to } c(p_\zeta) \ge 0,    (7.3)

where c are the constraint functions, p_\zeta is the blob parameter vector and the total variation functional

    TV_\beta(p_\zeta) = \int_\Omega \psi_\beta(\nabla p_\zeta)\, dp_\zeta    (7.4)

is modified for each specific shape by altering the function \psi according to the estimated shape. The minimisation problem in eq. (7.3) is solved with the projected primal-dual method presented in chapter §5.

Douiri et al [39] proposed the use of the Huber function [80] for the approximation of the absolute value function. The Huber function is only C^1 continuous, making it unsuitable for the primal-dual method, which requires the calculation of the second derivative. The proposed \psi = \sqrt{t^2 + \beta^2} does not show severe signs of the staircase effect^1 [26]. By increasing \beta, the derivative of the \psi function becomes more isotropic, approximating more a Tikhonov solution; the reduction of \beta makes the function more anisotropic. All these changes in \beta mainly affect the area where the gradient is small (fig. 7.1), while in the intervals where the gradient is large the function is approximately the same.

Interpreting the behavior of \psi in relation to our shape estimation method leads us to the following conclusions. If a location is estimated as a boundary coefficient, we will penalise even the small gradients, which enhances our belief that an edge exists in that particular location. If a location is estimated to be in the interior, then small gradients will be penalised less, as we assume that the interior intensities are smoothly distributed. On the background region we seek to enhance edges, but we do not know their exact locations, and for that reason we use a value that lies in between the interior and the boundary \beta values. A more sophisticated approach could choose \beta values from the statistical properties of the expected edges.

^1 Solutions that are piecewise constant.

Figure 7.1: Plot of the derivative of \psi(t) for different values of \beta. The values are assigned according to the classification of intensity coefficients as background (solid line), interior (dotted line) and boundary (dashed line).

7.2 Results

The intensity coefficients are initialised using the damped least squares method. Given this reconstructed image, the shape parameters are initialised near the object of interest. In this approach we use separate \beta values, \beta_i, \beta_e and \beta_b, for the interior, boundary and background coefficients respectively. We begin the iteration by solving first the shape estimation problem with the Levenberg-Marquardt approach. After each iteration of the shape estimation problem we resolve the image reconstruction problem using a few iterations of the projected primal-dual method with the shape specific \beta values. This alternating minimisation process is repeated until the convergence criteria are met.

We begin with simulated cardiac data, produced by taking the Radon transform of a fully reconstructed cardiac image (fig. 7.2) at 8 angles. The reconstructed shapes and images are shown in fig. 7.3.
We demonstrate our approach using 5 image iterations for every shape iteration.3.
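The behaviour of \psi_\beta described above can be made concrete with a small numerical check (the \beta values here are illustrative, not the ones used in the experiments): the derivative \psi'_\beta(t) = t/\sqrt{t^2 + \beta^2} is essentially 1 for large gradients regardless of \beta, while for small gradients its magnitude shrinks as \beta grows, approaching the gentler Tikhonov-like penalisation.

```python
import numpy as np

def dpsi(t, beta):
    """Derivative of psi_beta(t) = sqrt(t**2 + beta**2)."""
    return t / np.sqrt(t * t + beta * beta)

betas = (0.05, 0.1, 0.5)                       # illustrative values only
large = [dpsi(10.0, b) for b in betas]         # large gradient: near 1 for all
small = [dpsi(0.05, b) for b in betas]         # small gradient: beta matters
```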
An enhanced image has been created using the estimation of the boundary of the shape and the reconstruction of the image (left of fig. 7.4).

Further to that, we apply our method to measured MRI data with 8 radial profiles, from single and multiple receiver coils, as described in the previous chapters. The single coil reconstructions can be seen in fig. 7.6: on the left of fig. 7.6 the initial and final estimated shapes are superimposed on the ground truth image (fig. 7.5). In the left part of fig. 7.7 we display the enhanced image, which contains intensities below zero and/or above the maximal value. The right plot of fig. 7.7 shows the gradient norm over the iterations as it climbs out of an infeasible solution. The results from multiple coil data can be seen in figs. 7.8-7.10.

Simulated data reconstructions

Figure 7.2: Ground truth image for the simulated experiments.

Figure 7.3: Simulated data with unknown background. (Top Left) Initial superimposed to ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground truth image. (Bottom Right) Final predicted image. The error for the reconstructed image is rms = 0.40217.

Figure 7.4: Simulated data with unknown background. (Left) Enhanced reconstructed image. (Right) Plot of the gradient norm of the shape reconstruction over iteration.
Single coil reconstructions

Figure 7.5: Ground truth image from fully sampled single coil data.

Figure 7.6: Measured data with unknown background. Coil 5. (Top Left) Initial superimposed to ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground truth image. (Bottom Right) Final predicted image. The error for the reconstructed image is rms = 0.6509.

Figure 7.7: Measured data with unknown background. Coil 5. (Left) Enhanced reconstructed image. (Right) Plot of the gradient norm of the shape reconstruction over iteration.
Multiple coil reconstructions

Figure 7.8: Ground truth image from fully sampled multiple coil data.

Figure 7.9: Measured data with unknown background. Multiple coils. (Top Left) Initial superimposed to ground truth image. (Top Right) Initial predicted image. (Bottom Left) Final superimposed to ground truth image. (Bottom Right) Final predicted image. The error for the reconstructed image is rms = 0.56808.

Figure 7.10: Measured data with unknown background. Multiple coils. (Left) Enhanced reconstructed image. (Right) Plot of the gradient norm of the shape reconstruction over iteration.
7.3 Discussion

By combining the image reconstruction method with the shape estimation, we have presented a method which is capable of reconstructing objects with smoothly varying intensities in an unknown background. Using the estimated shape we can further improve the reconstruction of edges at the boundary of the object. The TV based image reconstruction method enjoys the benefit of global edge-enhancement. Results for our method have been presented in simulated and measured data studies.

The smoothly varying interior model used for the intensities inside the object is an improvement on the constant interior model. Still, the exactness of our model suffers from the existence of very low intensities within the left ventricle. This happens because we are trying to include low intensities within the interior of the shape without knowing their location in advance. The problem of non-filled interiors is that the shape can continue to grow even when it has passed through the correct boundary: the interior model ends up including low intensity values belonging outside of the object and progresses even further. A model for the reconstruction of shapes has to describe the interior and the background in a distinct manner. An improved cardiac model should allow for outliers within the interior, but these outliers have to be restricted according to some a priori knowledge. This could potentially be achieved with a model of the heart based on an anatomical atlas or on a learned model from a large data set. Then the problem of shape reconstruction would be to fit the best, according to some metric, registered model to the data; given an exact model of the heart, the problem is then to grow or shrink it according to the estimated shape. While this is an important subject and it raises interesting questions, it exceeds the purposes of this thesis.

The motion of the object is very small during the collection of the limited amount of data (8 radial profiles) that we have used for our experiments. This makes the method ideal for dynamic imaging, as there will be very little corruption due to motion artifacts. The presented method can be used to reconstruct real-time free-breathing cardiac MR data by solving a minimisation problem for each time step. As the method does not make any assumptions about the motion of the object, it removes the restriction of periodicity, which is a limiting factor in gated studies. This offers the possibility to apply the method to patients suffering from arrhythmia. Free-breathing imaging has the potential of being clinically more useful than breath-hold imaging, due to the poorly understood changes in blood flow and pressure in the region of the heart [122] during the extended breath-holds required for the collection of data in traditional methods. The method can also be applied in other dynamic imaging modalities where the collected data is limited.

In the next chapter we present a method which assumes that shapes are correlated in time. The problem is solved in the temporal dimension as a state estimation problem using Kalman filters.
We solve this stochastic problem using Kalman ﬁlters. While the temporally correlated problem can be solved with the combined method presented in chapter §7 as a sequence of minimisation problems. The problem of reconstructing the shape is expressed as a state estimation problem. 8. it would not be trivial to incorporate statistics that change in time. gt is the measured data at time t and ytt = Z(γ) is the predicted data at time t given the data at time t. data is temporally correlated.t = γtt−1 simpliﬁes the nonlinear Kalman ﬁlter equations (4. Choosing the linearisation point to be the current estimate of the predictor γ∗. Kalman ﬁlters have the temporal estimation of statistics build in. The formulation of the problem in this temporal case is also simpler using the Kalman ﬁlter approach. A similar approach for the calculation of diffusion and absorption coefﬁcients in optical tomography has been presented in [95]. where the states are the parameters of the shape.91) to . where the boundary is determined only by its previous position. We consider this to be a Markov process.1) where ett = gt − ytt is the a posteriori estimate error.87)(4. tt (8. The location of the cardiac boundaries at a particular time point is dependant on their previous location.1 Forward and inverse problem In the Kalman ﬁlters algorithm the aim is to minimise the a posteriori error covariance Ctt = E[ett eT ].Chapter 8 Temporally correlated combined reconstruction method In the case of cardiac MRI.
G_t = C_{t|t-1} J_{γ,t}^T(γ_{t|t-1}) (J_{γ,t}(γ_{t|t-1}) C_{t|t-1} J_{γ,t}^T(γ_{t|t-1}) + C_{n,t})^{-1}    (8.2)
γ_{t|t} = γ_{t|t-1} + G_t (g_t - Z(γ_{t|t-1}))    (8.3)
C_{t|t} = C_{t|t-1} - G_t J_{γ,t}(γ_{t|t-1}) C_{t|t-1}    (8.4)
γ_{t+1|t} = S_t γ_{t|t}    (8.5)
C_{t+1|t} = S_t C_{t|t} S_t^T + C_{w,t}    (8.6)

where J_{γ,t} is the shape Jacobian matrix, described in §6. The matrix S_t represents our knowledge about the change of the states from one time point to the next. Eq. (8.3) updates the estimate according to the measured data and eq. (8.5) predicts the next state according to the model of the state process. Setting S_t = I we are representing random motion.

8.2 Results

To solve the combined image and shape reconstruction we use an alternating minimisation approach. We estimate the location and shape of the left ventricle with the extended Kalman filters using a single radial profile as our data g_t at time point t. Shape parameters are initialized near the object of interest. Intensity parameters are initialized using the projected primal-dual method, presented in §5. After a number of radial profiles has been used for the shape estimation, we switch to the reconstruction of the intensities with the shape specific TV_β projected primal-dual method (§7). As our data for the intensity reconstruction we use all the radial profiles from the time point of the previous switch till the current time t. The data vector used is g = {g_{t-n}, ..., g_t}, where n is the number of profiles we have used for the estimation of the shape before switching to the image reconstruction method. The intensity reconstruction is iterated until sufficient convergence is achieved. The method is then switched again to the estimation of the shape using the intensity parameters calculated previously. The combined estimation of the shape and intensity parameters progresses by alternating every n time points until all radial profiles have been processed.

We choose to solve the image reconstruction problem after 8 iterations of the shape estimation method, similar to chapter §7. In our numerical experiments we have found that the heart is not exhibiting much motion during the collection of 8 radial profiles, and we take advantage of this to reduce the computational cost without any significant loss on the quality of the reconstructions. The acquisition of data is separated in groups of 8 profiles. In each group, these 8 profiles are chosen so they have the largest angular distance between them in order to span the 180 degrees in a near optimal way.
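The recursion of eqs. (8.2)-(8.6) is compact enough to sketch numerically. The snippet below is an illustrative implementation only: the forward operator `Z`, its Jacobian `jac` and all names are assumptions of this sketch, not the thesis code.

```python
import numpy as np

def ekf_step(gamma_pred, C_pred, g_t, Z, jac, C_n, S=None, C_w=None):
    """One extended Kalman filter step for the shape parameters.

    gamma_pred, C_pred : predicted state gamma_{t|t-1} and covariance C_{t|t-1}
    g_t                : measured radial profile at time t
    Z, jac             : forward operator and its Jacobian, linearised at gamma_pred
    C_n, C_w           : measurement and process noise covariances
    S                  : state transition matrix (identity models random motion)
    """
    J = jac(gamma_pred)                                    # J_{gamma,t}(gamma_{t|t-1})
    # Kalman gain, eq. (8.2)
    G = C_pred @ J.T @ np.linalg.inv(J @ C_pred @ J.T + C_n)
    # measurement update, eqs. (8.3)-(8.4)
    gamma_upd = gamma_pred + G @ (g_t - Z(gamma_pred))
    C_upd = C_pred - G @ J @ C_pred
    # time update (prediction), eqs. (8.5)-(8.6)
    if S is None:
        S = np.eye(len(gamma_pred))
    if C_w is None:
        C_w = np.zeros_like(C_upd)
    return gamma_upd, C_upd, S @ gamma_upd, S @ C_upd @ S.T + C_w
```

With `S = I` and a nonzero `C_w` the prediction models random motion of the boundary, as assumed in the experiments.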
All groups are interleaved with each other to span the 180 degrees perfectly in time (fig. 8.1). The interleaved pattern (fig. 8.1) allows to construct sensitivity matrices from the time averaged images without using a body coil.

Figure 8.1: Interleaved sampling pattern. The panels show the k-space positions at time points 1, 2 and 3, and at all time points.

Simulated data was produced by reconstructing a fully sampled data set and then taking the Radon transform at 8 angles per cardiac phase. The full data set consisted of 64 cardiac phases, which we have undersampled to 16 by taking every fourth phase. The resulting undersampled data set consists of 128 radial profiles. The fully sampled reconstructed data was segmented manually for comparison with the automatic segmentations. For this task we use the Dice similarity coefficient [35]

dsc(C, C_g) = 2 N(C ∩ C_g) / (N(C) + N(C_g)),    (8.7)

where C and C_g are the areas of the predicted and ground truth contours, respectively, and N(·) is the number of pixels within an area.

The reconstructed shapes superimposed on the ground truth images can be seen in the left column of fig. 8.2, next to the predicted images with the smoothly varying constant interior. In fig. 8.3 we display the filtered backprojection reconstructions next to the projected primal-dual ones. The rms image errors and the areas over time are presented in fig. 8.4. Further to that, we show the predicted and ground truth areas over the cardiac phase in a graph. In fig. 8.5 we show one line passing through the center of the image over time for the ground truth, filtered backprojection and the combined reconstructions.
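Equation (8.7) amounts to a few lines on binary segmentation masks; the sketch below uses names of our own choosing:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient of eq. (8.7) for two boolean masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks are identical
    return 2.0 * np.logical_and(a, b).sum() / denom
```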
Figure 8.2: Reconstructions from simulated data. (Left) Reconstructed shapes superimposed on ground truth images. (Right) Reconstructed images with restricted interior intensities. The numbers on the left column indicate the time point in the sequence (1, 4, 7 and 16).
Figure 8.3: Reconstructions from simulated data. (Left) Filtered backprojection. (Right) Reconstructed images using the shape specific TV_β approach. The numbers on the left column indicate the time point in the sequence (1, 4, 7 and 16).
Figure 8.4: Error plots from simulated data reconstructions. Filtered backprojection (solid line) and temporally correlated combined approach (dotted line). (Left) Plot of the Dice similarity coefficient over time. (Middle) Plot of rms over time. (Right) Predicted and ground truth areas over time.

Figure 8.5: x-t plots of the central line in the image over time. (Left) Ground truth. (Middle Left) Filtered backprojection. (Middle Right) Shape specific total variation method. (Right) Combined shape and image method. The thick arrows point to the papillary muscle.
Measured data

ECG gated data was acquired from a healthy volunteer, as described in §5.5. A total of 25 phases, each with 208 radial profiles, were collected using a five-element array receive coil. The data used in this experiment was generated by undersampling each phase to 8 profiles. This was done by using every 8th profile of the fully sampled data set. Using this very undersampled data set results in a 26-fold acceleration compared to the original radial acquisition. For the case of real-time MRI, a total of about 200 radial profiles can be collected within a single heart beat using a fast steady state free precession sequence. To transform the data into the Radon space, according to the central slice theorem, we 1D inverse Fourier transformed along each radial profile, as previously. The results for the single and multiple coil reconstructions are presented in figs. 8.6 to 8.13.
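The transform into Radon space follows directly from the central slice theorem: each radial profile is the 1D Fourier transform of the parallel projection at the same angle, so a 1D inverse FFT along the read-out direction yields the sinogram rows. The function below is an illustrative sketch assuming DC-centred profiles; the name is ours.

```python
import numpy as np

def kspace_profiles_to_sinogram(profiles):
    """Map radial k-space profiles (n_angles, n_samples), DC centred,
    to Radon-space projections via a 1D inverse FFT per profile."""
    shifted = np.fft.ifftshift(profiles, axes=-1)    # move DC to index 0 for ifft
    proj = np.fft.ifft(shifted, axis=-1)
    return np.real(np.fft.fftshift(proj, axes=-1))   # re-centre the projections
```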
Figure 8.6: Reconstructions from measured single coil data. (Left) Reconstructed shapes superimposed on ground truth images. (Right) Reconstructed images with restricted interior intensities. The numbers on the left column indicate the time point in the sequence (1, 4, 7 and 16).
Figure 8.7: Reconstructions from measured single coil data. (Left) Gridding. (Right) Reconstructed images using the shape specific TV_β approach. The numbers on the left column indicate the time point in the sequence (1, 4, 7 and 16).
Figure 8.8: Error plots from measured single coil data reconstructions. Gridding (solid line) and temporally correlated combined approach (dotted line). (Left) Plot of the Dice similarity coefficient over time. (Middle) Plot of rms over time. (Right) Predicted and ground truth areas over time.

Figure 8.9: x-t plots of the central line in the image over time. (Left) Ground truth. (Middle Left) Gridding reconstruction. (Middle Right) Shape specific total variation method. (Right) Combined shape and image method. The thick arrows point to the papillary muscle.
Figure 8.10: Reconstructions from measured multiple coil data. (Left) Reconstructed shapes superimposed on ground truth images. (Right) Reconstructed images with restricted interior intensities. The numbers on the left column indicate the time point in the sequence (1, 4, 7 and 16).
Figure 8.11: Reconstructions from measured multiple coil data. (Left) Gridding. (Right) Reconstructed images using the shape specific TV_β approach. The numbers on the left column indicate the time point in the sequence (1, 4, 7 and 16).
Figure 8.12: Error plots from measured multiple coil data reconstructions. Gridding (solid line) and temporally correlated combined approach (dotted line). (Left) Plot of the Dice similarity coefficient over time. (Middle) Plot of rms over time. (Right) Predicted and ground truth areas over time.

Figure 8.13: x-t plots of the central line in the image over time. (Left) Ground truth. (Middle Left) Gridding reconstruction. (Middle Right) Shape specific total variation method. (Right) Combined shape and image method. The thick arrows point to the papillary muscle.
8.3 Discussion

In this chapter a method for the estimation of both shape and intensity parameters has been presented. The time correlated combined reconstruction does not make any assumptions about periodic motion. This makes it applicable in many dynamic imaging problems where gated methods, which rely on keeping the motion of the object to a bare minimum, are simply infeasible.

The benefits of the time correlated combined approach can be seen in the reconstructed images and especially in the x-t plots (figs. 8.5, 8.9 and 8.13). While standard reconstruction methods, such as filtered backprojection and gridding, produce images which are highly corrupted by noise, our reconstruction method shows results where the cardiac ventricles can be clearly delineated and the presence of noise is limited.

The restricted intensity model for the interior of the shape causes underestimation of the area, as seen in figs. 8.4 and 8.12. This mismatch is caused by our model not representing the interior intensities correctly. It is also clear that the low intensities that represent the papillary muscle are a source of error for our model. This results in errors at object edges. A more sophisticated model should be able to capture the expected intensity distribution within the heart with precision, and therefore the predicted area would approximate the truth closer. This would also result in improved estimation of the boundary. A further improvement in the case of cardiac MRI is to enforce constraints on the center of the estimated shape: it is the boundary of the object that is moving and not its center.

While we would expect the quality of the multiple receive coil reconstructions to be superior to the single coil case, this is not clear from the obtained results. A reason for this is our approximation of the sensitivity matrices. As mentioned in §5.2, we have divided each time averaged fully sampled single coil image with the root of the sum of squares of all coil images. To eliminate noise we have low-pass filtered the sensitivity matrices. An alternative approach for smoothing is to perform a polynomial fit for each pixel to the noisy images [139]. Sensitivity matrices could also be obtained by dividing the single coil images with an image obtained from a body coil, which covers all the k-space sampling positions, instead of the sum of squares that we have used. In our case a body coil signal was not available. The exact calculation of sensitivity matrices exceeds the purposes of this thesis.

If the background is assumed to be completely stationary, then another interesting approach is to reconstruct the difference between two sequential time points. With a stationary background the only signal left if we subtract data from two different time points will be the motion of the object of interest. This approach does not require the knowledge of the background structures, which will be completely removed by the subtraction. More details on the difference imaging method are given in the appendix.
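The root-of-sum-of-squares approximation with low-pass filtering can be sketched as below. The Gaussian kernel here is only a stand-in for the unspecified low-pass filter, magnitude (real-valued) coil images are assumed, and the function name is ours.

```python
import numpy as np

def estimate_sensitivities(coil_images, sigma=2.0):
    """Approximate sensitivity maps from time-averaged coil images.

    Each coil image is divided by the root of the sum of squares over
    coils, then low-pass filtered in the Fourier domain with a Gaussian.
    coil_images : real array of shape (n_coils, ny, nx)
    """
    rss = np.sqrt(np.sum(coil_images ** 2, axis=0))
    rss = np.maximum(rss, 1e-12)        # avoid division by zero outside the body
    maps = coil_images / rss
    fy = np.fft.fftfreq(rss.shape[0])[:, None]
    fx = np.fft.fftfreq(rss.shape[1])[None, :]
    lowpass = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    # fft2/ifft2 act on the last two axes, i.e. per coil
    return np.real(np.fft.ifft2(np.fft.fft2(maps) * lowpass))
```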
Using the Kalman filter approach, we can solve the problem considering each radial profile to belong to a different time point. Kalman filters thus provide shape estimates for each radial profile, reaching the physical limit of temporal resolution in MRI, as it is the change of gradients that takes considerably more time than to read the data out on a particular profile. While we can solve the shape reconstruction problem in time using a least squares approach, such as the Levenberg-Marquardt method, for the estimation of the shape at each time point, such an approach requires a number of radial profiles and it would be much harder to incorporate statistics that change in time. Kalman filters offer the tools for this task. In our experiments, this robustness was not evident with the Levenberg-Marquardt method.

Another benefit of the stochastic approach is that the state transition matrix, representing the movement from one state to the next, can vary in time. To calculate its elements we can incorporate them in the minimisation problem as parameters of the objective function. High frequency parameters should be changing faster than lower frequency ones, since they represent details on the boundary and these are changing faster from one time point to the next. Further to that, we can consider the non-trivial problem of a non-Markov process, where the parameters depend on a longer history of the motion of the states. If the data can be reconstructed offline we can use the fixed-interval smoother (§4.3.3), which calculates the estimates from past and future measurements.
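For the offline case, the fixed-interval smoother can be sketched in its linear Rauch-Tung-Striebel form; this is the generic textbook backward pass under an assumed constant transition matrix, not the thesis implementation.

```python
import numpy as np

def rts_smoother(states_upd, covs_upd, states_pred, covs_pred, S):
    """Fixed-interval (Rauch-Tung-Striebel) smoother, linear case.

    states_upd[t], covs_upd[t]   : filtered gamma_{t|t}, C_{t|t}, t = 0..T-1
    states_pred[t], covs_pred[t] : predictions gamma_{t+1|t}, C_{t+1|t}, t = 0..T-2
    S : constant state transition matrix
    Returns smoothed estimates gamma_{t|T} from a single backward pass.
    """
    T = len(states_upd)
    smoothed = [None] * T
    smoothed[-1] = states_upd[-1]
    for t in range(T - 2, -1, -1):
        A = covs_upd[t] @ S.T @ np.linalg.inv(covs_pred[t])   # smoother gain
        smoothed[t] = states_upd[t] + A @ (smoothed[t + 1] - states_pred[t])
    return smoothed
```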
Chapter 9

Conclusions and future directions

Novel approaches for the reconstruction of images and shapes in cardiac MRI have been presented in this thesis. The basis of our methods is inverse problem theory, which we have introduced in chapter §4. In chapters §2 and §3 we presented the foundations of dynamic imaging in MR and discussed current approaches in shape reconstruction. The presented methods have been applied to the problem of cardiac MRI, yet the methodology is general enough to be applicable to many limited data problems.

The image reconstruction method in §5 produces superior results to standard methodology, such as filtered backprojection and gridding algorithms. Our method reduces the severe streaky artifacts which dominate standard methods, without oversmoothing edge information. It also offers the ability to be combined with other MRI methods, such as kt and SENSE approaches (§2.4). These methods typically use interleaved data acquisition patterns. This implies that the angular artifacts will be rotating depending on the choice of angles. These moving artifacts will be detected as motion, which is clearly unwanted. This reduction of angular artifacts in the reconstructed images can therefore be of importance in kt applications where training about the motion in the images is required. The quality and usability of the reconstructed images can only be assessed by clinicians, which we plan to include in our future work.

Apart from the standard argument of increasing computer capabilities, higher resolution images using finer basis function grids can be obtained by replacing direct matrix inversions with iterative linear system solvers, for example preconditioned conjugate gradient methods. The application of these methods also offers the extension of the presented methodology to three dimensions, where the blob basis functions are naturally extended due to their symmetric nature. 3D dynamic imaging requires the collection of large data sets. Using our approach these requirements can be significantly decreased, which will reduce motion artifacts in the reconstructed images.

In chapter §6 we have presented a shape reconstruction method in the case of simple or known background.
Shapes in our method are not estimated from edge information, but directly from the measured data using a model-based approach. It is our model that defines the shape we wish to reconstruct. The trigonometric functions used in this thesis can be replaced by spherical harmonics for the description of surfaces. The extension from planar contours to surfaces will be another exciting future direction.

In chapter §7 we combined the image and shape reconstruction methods, where the interior of the shape is considered to be of constant intensity. The background and interior intensities are estimated using the image reconstruction method. The shape reconstruction assists the estimation of images by providing local information about the expected intensity distribution at the edges and interior of the boundary. Our model for the interior intensities shows its limitations in the estimation of real cardiac shapes. The definition of a more sophisticated cardiac model based on anatomical knowledge would make the shape reconstruction method more exact and robust. The combination of our methods with registration techniques could be a possible direction for the exact segmentation of cardiac images.

Solving this as a state estimation problem in chapter §8 offers a compact method for dynamic shape detection directly from MR samples. In this chapter we have presented a basic approach. As a future direction, we believe that estimating the parameters of the motion, that is the state transition from one time point to the next, will improve the reconstruction results.

The reconstruction of shapes offers quantitative analysis of the cardiac motion. As we have not assumed the periodic nature of cardiac motion, our novel approach is applicable under conditions where gated methods are infeasible. It escapes the temporally averaging nature of gated imaging methods and has the potential of reducing motion artifacts. Real-time imaging also escapes the problem of normalising, shrinking or stretching monitored cardiac cycles to fit an average cycle. The limited data used in our experiments has the potential to reduce scanning time and make high temporal resolution real-time cardiac imaging a clinical possibility. The benefit of reducing scanning time and breath hold requirements is clear in the case of many patients who find MRI scanners claustrophobic. Some patients find it difficult to maintain even a short breath hold. It would also simplify examinations under stress. On top of that, data acquisition time is no longer limited by the ability of patients to hold their breath and can be extended in order to obtain images from more slices or even complete volumes of the heart.

Applications of interest for the presented method in cardiac MRI are patients with arrhythmia and free-breathing imaging. Free-breathing imaging can be potentially more clinically relevant due to the poorly understood changes in blood flow and pressure within the cardiac region during extended breath holds [122]. Other potential applications include imaging during pregnancy, where it would be very hard to maintain the fetus static, and imaging of infants in situations where it is hard to keep them still.

We have applied our methods to radially sampled cardiac MRI. Our proposed method is applicable to any choice of angles, even in limited view problems, where data cannot be collected from the whole 180 degrees. In radial data sampling the choice of angles can be exploited by an intelligent acquisition scheme, where the angles are chosen according to the cardiac motion. In simple terms, at the end diastolic phase the heart is moving slower and more data can be collected without fear of corruption by motion artifacts. The detection of the shape can also be employed for the choice of scanning angles. Assuming that there is more interest in reconstructing the cardiac contours precisely than reconstructing the surrounding structure, we can collect data at angles tangent to the reconstructed shape. If the object of interest is not perfectly round, then views where the curvature of the shape is high contribute more to the correct reconstruction than those where it is low. An interesting choice of k-space positions is random sampling, which has been explored recently for its possibility of producing higher quality reconstructions [22], [23] and [112].

Generally our novel approach, which aims to approximate the data as closely as possible without making assumptions about the completeness of the measurement set, can be used in many tomographic and Fourier imaging problems. Another interesting application is 3D mammography, where the 2D images are of high resolution, but the limited number of views makes the use of standard tomosynthesis algorithms ([60], [179]) not ideal for the reconstruction of 3D images.

In this thesis we have presented methods for the reconstruction of images and shapes in a dynamic problem, and seen the improvements and limitations of our model-based approaches. Reconstructed images are visually and numerically superior to standard methods. Further evaluation of the image reconstruction method with multiple data sets will be required to assess its clinical feasibility. The detection of boundaries of objects with unknown interior intensities in an unknown background is a difficult problem. A robust shape reconstruction method will require further development, especially of the interior intensity model. We believe that we have made a significant first step in the solution of these limited data dynamic problems with the introduction of model-based techniques.
Appendix A

Acronyms

MR: Magnetic resonance
MRI: Magnetic resonance imaging
CT: Computed tomography
EIT: Electrical impedance tomography
PET: Positron emission tomography
ECG: Electrocardiogram
FOV: Field of view
SNR: Signal to noise ratio
FT: Fourier transform
LS: Least squares
EM: Expectation maximisation
SVD: Singular value decomposition
TSVD: Truncated singular value decomposition
TV: Total variation
Appendix B

Table of notation

a: Scalar
a: Column vector
a_i: ith element of vector a, unless otherwise defined within the context
A: Matrix, or linear operator expressed as a matrix
diag(a): Diagonal matrix with the elements of a on its diagonal
I: Identity matrix
Range(A): Range of A
Null(A): Nullspace of A
dim(A): Dimensions of A
→ A^n: Maps to n-dimensional space named A
Z: Nonlinear operator
^T: Transpose
^H: Conjugate transpose
†: Pseudoinverse
∗: Adjoint
‖a‖_p: l_p norm of a
E[·]: Expectation operator
Φ: Objective functional
Λ: Lagrangian
∪: Set union
∩: Set intersection
⊂: Subset of
∈: Belongs to
C^n: Continuity of a function for n derivatives
III(x): Comb function
×: Multiplication
rms: Relative mean square error
dsc: Dice similarity coefficient
∗: Convolution
.∗: Elementwise multiplication
./: Elementwise division
grad: Gradient of the objective functional
Appendix C

Difference imaging

Another approach for the dynamic imaging problem is to calculate the difference data between two time points by subtracting k-space profiles (5 in this experiment) at the same positions belonging to different time points. Assuming that the background structures remain stationary, the difference data will show only the areas where an object moved or changed shape. This requires that the collection of data at each time point is done at the same locations in k-space instead of the interleaved sampling pattern. In a multiple coil experiment this will complicate the construction of sensitivity matrices from time averaged images and will require the use of a body coil for this task.

Figure C.1: Difference imaging approach with stationary background. (Top Left) Phantom image at time point 1. (Top Middle) Phantom image at time point 8. (Top Right) Image difference between time point 1 and 8. (Bottom Left) Phantom sinogram data at time point 1. (Bottom Middle) Phantom sinogram data at time point 8. (Bottom Right) Sinogram difference between time point 1 and 8.

The benefit of such an approach is that in single-breath-hold cardiac MRI most of the motion in the image is from the heart, and especially the left ventricle. The background structures are practically stationary. We present results on simulated data with 15% Gaussian noise added to the Radon data (fig. C.1) to demonstrate the power of this approach. Data was simulated from a dynamic phantom at 12 time points. The interior of the object is filled with a known constant value. We initialize the shape parameters exactly at the boundary of the object of interest. For each subsequent time point, we calculate the difference data between the current time point and the previous one and estimate the shape using the Kalman filter algorithm. The Kalman filter algorithm is essentially estimating motion by comparing the predicted and measured difference data. The predicted motion is the difference between the predictions in the Radon space at time point t and t - 1. The measured motion, as described before, is the difference between the data at two sequential time points. Note that the difference imaging method does not require knowledge of the stationary background structure. In fig. C.2 the ground truth images and the reconstructed shapes are shown. In fig. C.3 we compare the reconstructions with manually segmented shapes.
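The core of the difference approach fits in a few lines: profiles acquired at the same angles are subtracted, so any stationary structure cancels exactly and only moving structure survives. The toy arrays below are our own illustration, not the phantom data.

```python
import numpy as np

def difference_data(sino_t, sino_prev, angle_idx):
    """Difference of profiles acquired at the SAME angle positions at two
    time points; a stationary background cancels in the subtraction."""
    return sino_t[angle_idx] - sino_prev[angle_idx]

# toy check: stationary background plus a block that moves by one column
bg = np.zeros((8, 16)); bg[:, 2:5] = 1.0            # stationary structure
obj1 = np.zeros_like(bg); obj1[:, 8:10] = 2.0       # object at time t-1
obj2 = np.zeros_like(bg); obj2[:, 9:11] = 2.0       # object at time t
d = difference_data(bg + obj2, bg + obj1, np.arange(8))
# background columns cancel exactly; only the motion remains in d
```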
Figure C.2: Difference imaging reconstructions. (Left) Ground truth images. (Right) Reconstructed shapes superimposed on ground truth. The numbers on the left column indicate the time point in the sequence (1, 4, 7 and 12).
Figure C.3: (Left) Plot of the Dice similarity coefficient over time. (Right) Predicted and ground truth areas over time.

The immediate problem with the difference imaging approach is that unless we can guarantee convergence of the estimated shapes to the true ones, errors will be propagated further in time. If the shape does not approximate the true boundaries precisely, then on the next time step we will be modelling different motion than what is happening in the data, as our initial estimated position is wrong. This error will be amplified as the method propagates in time. This directly implies that the initialization of the shape parameters has to be exact. In the case of single-breath-hold cardiac MRI, there is more than one shape in motion. Even though most of the background is removed, there are still some differences at locations other than the heart (fig. C.4). Using a simple model, which estimates only the shapes of the left and right ventricles, will be a cause of error. Assuming that such a precise model exists and convergence can be guaranteed, the difference imaging approach could give exciting results. If the model is very precise and convergence can be guaranteed, then the need for using the same angles in the sampling sequence can be removed, by predicting the data at any angle from the image space model.
Figure C.4: Difference imaging approach with stationary background. (Top Left) Phantom image at time point 1. (Top Middle) Phantom image at time point 8. (Top Right) Image difference between time point 1 and 8. (Bottom Left) Phantom sinogram data at time point 1. (Bottom Middle) Phantom sinogram data at time point 8. (Bottom Right) Sinogram difference between time point 1 and 8.
SpringerVerlag. H.Bibliography [1] R. CA. Johnston. 1998. Baroudi. Acar and C. Computer Vision and Image Understanding.A.L. Adrain. Nixon.J. 1997. SPIE. San Diego. 3661:356–366. Aquado. and A. J.O. Y. volume 4119 of Proceedings of SPIE. A realtime system for multiimage gated cardiac studies. SPIE. Research concerning the probabilities of the errors which happen in making observations. Bachem. Anderson and J. [7] A. 1977. [3] A. Mathematical Programming: The State o of the Art. and M. The Analyst. Parameterizing Arbitary Shapes via Fourier Descriptors for EvidenceGathering Extraction.L. and G. Unser. Turzo. Laine. Battle. [9] X. SPIE. A. 1979. Englewood Cliffs. and E. [4] B. Montiel.D.J. 3D Tomographic Reconstruction Using Geometrical Models. Green. 14:799–813. Borer.S. Wavelet Applications in Signal and Image Processing VIII. Moore.B.V. 1:93–109. editors. and M. 3034:346–357. [2] R. 69(2):202–221.G. [8] D. 1999. [5] A. 1808.F. [10] X.L. 1998.R. Douglas.M.. 1994. Battle. PrenticeHall.S. Hanson. Le Rest. 10(6):1217–1229. Ostrow. Berlin. Analysis of bounded variation penalty methods for illposed problems. 2000. and K. Vogel.A.S. 1983. Inverse problems. USA. Tomographic Reconstruction Using FreeForm Deformation Models. M. 18:79–84. M. editors. Gr¨ tschel. and B. Bizais. Bacharach. C. G. J. N. . Optimal Filtering. Cunningham. M. Journal of Nuclear Medicine. Aldroubi. Somersalo. M. Kaipio. Inverse Problems. Korte. [6] S. Dynamical electric wire tomography: a time series approach.
Topics in Magnetic Resonance Imaging. 1996.C. 22(1):61–79. [19] M. R. Romberg. Burkhardt and B. Donoho.C. Nonlinear Programming. [20] E. D. Griswold.J. J. Boccacci. 47:160–170. [17] M. Candes and D. John Wiley and Sons. F. Available at http://www. Generalized SMASH Imaging. M.M.J. Curvelets and Reconstruction of Images from Noisy Radon Data. 1985. 1967. Annals of Statistics. J. PILS.156 Bibliography [11] M. volume II of LNCS 1407. Blaimer. and G.J. Mueller. pages 108–117. and T. 2000. [23] E.D. 2000. Hajnal.J. pages 1–38. 30:784–842. Institute of Physics Publishing. GRAPPA: how to choose the optimal method. SENSE. Philadelphia. 1997. and T. Riddle. A. IN PRINT. Introduction to Inverse Problems in Imaging. . [15] R. Bracewell and A. Introduction to Random Signals and Applied Kalman Filtering. [22] E. Heidemann. Bertero and P.N. H.D.acm. Candes. Candes and D. [24] V. 2000. Sapiro. Hwang. Bydder. Computer Vision . 52(2):489–509. Jakob. and C. Radial basis functions. Numerical methods for least squares problems. Berlin.edu/ emmanuel/papers/StableRecovery. 15(4):223–236. Breuer. Kimmel. 2004. 1993. R. Magnetic Resonance in Medicine.J. Larkman. 1998. The Astrophysical Journal. and J. Shetty. Brown and P. Communications on Pure and Applied Mathematics. Sherali. [12] M. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information.G. ˚ [13] A. and P. In [3]. Inversion of fanbeam scans in radio astronomy. o [14] M. SIAM. M. SpringerVerlag.V. Bristol. Romberg.Y. Caselles. Acta Numerica. Neumann.L. Stable signal recovery from incomplete and inaccurate measurements. Tao. Tao. [16] R. John Wiley & Sons.caltech. Recovering edges in illposed inverse problems optimality of curvelet frames. 2006.ECCV ’98. Donoho. International Journal of Computer Vision. SMASH.S. [18] H. editors. IEEE Transactions on Information Theory. New York. Bazaraa.L. 1998. [21] E. Geodesic active contours. Candes. 2002.pdf. M. Bj¨ rck. Buhmann. 150:427–434. M. 
New York.
Hilbert. Total Variation Image Restoration: Overview and Recent Developments. Cootes. G. pages 1–18. S. Measures of the amount of ecological association between species. G. W. Tukey. An Algorithm for the Machine Calculation of Complex Fourier Series. and P. L. Mulet. NorthHolland. In [132]. Barlaud. . Active shape models . 19(90):297–301. [26] T. Methods of mathematical Physics.2. Recovery of blocky images from noisy and blurred data. SIAM Journal on Scientiﬁc Computing. [29] R. Vol. Deterministic Edgee Preserving Regularization in Computed Imaging.H. Partial differential equations.Bibliography 157 [25] T. [31] T. Chan. HighOrder Total VariationBased Image Restoration. Chan. Edwards.C. Charbonnier. Globally Convergent EdgePreserving Regularized Reconstruction: An Application to LimitedAngle Tomography. 20(6):1964–1977. (26):297–302. A. New York. Cooper. 1996. Taylor.H. and M. [34] A. and J. Computer Vision and Image Understanding. SpringerVerlag. F.their training and application.F. 56(4):1181–1198. Inverse Problems. G. H. Cooley and J. In [18]. Marquina. 2000. 1998.J. Park. Mathematics of Computation. D. 1962. Active appearance models. Graham. Bresler. SIAM Journal on Applied Mathematics. and C. 1987. Esedoglu. 2005. 1965. Noo. 22(2):503–516. [32] T. 2004. SIAM Journal on Scientiﬁc Computing. BlancF´ raud. Clackdoyle and F. A large class of inversion formulae for the 2d radon transform of functions of compact support. Courant and D.R. F. [37] Y. C. 20(4):1281–1291.J. [36] D. pages 483–498.F. Santosa. Mulet. IEEE Transactions On Image Processing. Aubert. 1998. Dobson and F. W. and A. SpringerVerlag.J. Amsterdam New York Oxford Tokyo. Statistical Data Analysis Based on the L1 norm and Related Methods. [30] J. and P. IEEE Transactions On Image Processing. [35] L. Yip. Dodge. [27] T. 1999. 1945. Chan. Taylor. Cootes. 1995. editor. 6(2):298–311. Dice. Delaney and Y. Interscience. Ecology. 1997. Golub. [33] R. 7(2):204–221. 61(1):38–59. [28] P. 
A nonlinear primaldual method for total variationbased image restoration.
161:527–531. and J. E. 5(1):71–88. Mark. [47] B. J. Linney. 1968. Halner. Riley.T.A. Linear Dynamic Recursive Estimation from the Viewpoint of Regression Analysis. Gated magnetic resonance imaging of congenital cardiac malformations. Redpath. Riemenschneider. T.M. 1986. and T. Physics in Medicine and Biology. IEEE Transactions on Computers. A. 25:751–756. Douiri.P. Arridge. Practical Methods for Optimization. [50] M.J.Elschlager. Fletcher. Automatic Segmentation of Liver using a Topology Adaptive Snake.D. In [7]. 22(1):67–92. Rodriguez. OPTICS LETTERS. 1984. L. Dorn and D. [46] R. Edelstein. 1983. In [127]. Lambrou. pages 87–114. Spin warp NMR imaging and applications to human wholebody imaging. In Proceedings of the Second International Conference Biomedical Engineering.D. New York. Periodic quasiorthogonal spline bases and applications to leastsquares curve ﬁtting of digital images. The representation and matching of pictorial structures. J. 1996.A. M. Flickner. [41] W.158 Bibliography [38] O. Johnson. Inverse Problems. Elangovan and R. Whitaker. [40] D. Kaufman. J. T. Hutchison.V. Watts. Chichester. Wiley. Fischler and M. 2005.D. Hale.C.C. 1972. Radiology.5kg. Austria. Fiacco and G. 1980. Innsbruck. From sinograms to surfaces: A direct approach to the segmentation of tomographic data. [45] A. Level set methods for inverse scatterings. Journal of American Statistical Association. Halving mr imaging time by conjugation: Demonstration at 3. 67(340):815–821. A. J. Horn. and R. Fletcher. [43] A. Schweiger. J. 1973. Penalty Functions. and S.D. 2001. pages 213–223. McCormick. ToddPokropek. [39] A. and A. 2006. [44] D. and A. 22(4):R67–R131.J. IEEE Transactions on Image Processing. 1987. SpringerVerlag. [42] V. Fletcher. Alﬁdi. Nelson. 30(18):2439–2441. Jacobstein. Nonlinear Programming: Sequential Unconstrained Minimization Techniques. M. Feinberg.S. Duncan and S. [48] R. Lesselier.B.A. Sanz. Radiology. John Wiley and Sons. . G. [49] R. 137140:150. 
Local diffusion regularization method for optical tomography reconstruction by using robust statistics. Evans.L.
F. M.T. Nittka. H. Herman.N. 19(1):20–28. 14(12):759–768. Gardu˜ o and G. C. 1980. Interactive query formulation for object search. pages 593–600. 2003.J. Jacob. 1974. D. editors. Volume and planar gated cardiac magnetic resonance imaging: A correlative study of normal anatomy with thallium201 spect and cadaver sections. pages 219–232. 129135:150. W. Theoretical Computer Science. [60] D. A History of the Calculus of Variations from the 17th through the 19th Century. [58] R. Germany. 1961. [62] J. IEEE Transactions on Nuclear Science. SpringerVerlag. Goldstine. Moodie. Iain Anderson. [56] R.F. Besser. Kiefer. IMA and Horwood. heoria motus corporum coelestium in sectionibus conicis solem ambientium. Geisinger. editors. [55] G. Gordon. 1971. and T.M.G. .Bibliography 159 [51] M. Mason. Generalized autocalibrating partially parallel acquisitions (grappa). D. Chilcote. Smeulders. Green. Go. 1984. Yeung. Tomosynthesis: a ThreeDimensional Radiographic Imaging Technique. Approximation with the radial basis functions of Lewitt. Reconstruction of pictures from their projections. [52] E. V.M. Green. Algorithms for Approximation IV. In [81]. University of Huddersﬁeld. and A. P.M. Gevers and A. J. Gordon and G. Perthes et I. 2003. Radiology. Hamburg. 346:281–299. George. In Jeremy Leversley. [53] C. R. . Wang. Sajjadi and Julian C. In S.A. An application of the WienerKolmogorov smoothing theory to matrix inversion. Communications of the ACM. Discretising Barrick’s equations. [59] R. Hunt. 2002. Oliveira. A tutorial on ART. G. IEEE Transactions on Biomedical Imaging. Chichester. Giraldi. and A. Grant. 2002. Herman. E. 9:387–392. New York Heidelberg Berlin. Proceedings of Wind over Waves II: Forecasting and Fundamentals of Applications. 1809. Journal of the SIAM.H. pages 212–219. J.F. Gauss. 47:1202–1210. H. B. MacIntyre. Magnetic Resonance in Medicine.S. [54] T. O’Donnell.W. 21:78–93.T. 1999. Jellus. J. Griswold. 
Implicit surface visualization of reconstructed biological n molecules. R. Dualtsnakes model for medical imaging segmentation.M. 24(7):993–1003. [61] J. Strauss. Meaney. Heidemann. M. Haase. [63] M. [57] H. 2005. Pattern Recognition Letters.K. W. 1972. and John C. SpringerVerlag. Kramer. J. Foster.
Jakob.L. W.R. [72] H. kt BLAST Reconstruction From NonCartesian kt Space Sampling. 43(3):269–272. Karl H. C.L. An RF array designed for cardiac SMASH imaging. Numerical Aspects of Linear Inversion. [73] H. Hansen. Hanson and G. International Statistical Review. Harter. Sydney. The Method of Least Squares and Some AlternativesAddendum to Part IV. . Harter.160 Bibliography [64] M. Local basisfunction approach to computed tomography. C. Wecksung. Harter. Baltes. Harter. The Method of Least Squares and Some AlternativesPart III. Hansen. Tsao.P. The Method of Least Squares and Some AlternativesPart I. M. P. 43(1):1–44.C. 8:849–872. 55:85–91. [66] M.K. [75] H. 12(1):44–47. 1983. 43(3):273–278. 1992. The Method of Least Squares and Some AlternativesPart V. [71] H. Pruessmann. Hansen. SIAM. IEEE Transactions on Image Processing. 1975. 1975. 1998. W. Hanson and G. International Statistical Review. Harter. 42(3):235–264+282. Edelman. M. 1974. The Method of Least Squares and Some AlternativesPart IV. 43(2):125–190. International Statistical Review. [76] H. In Proceedings of ISMRM. Eggers. 2003. [68] P. International Statistical Review. [69] K. S. Journal of the Optical Society of America. Philadelphia.S. 1974. R. K. [74] H.C. Australia. RankDeﬁcient and Discrete IllPosed Problems. 42(2):147–174.L. [70] K. Numerical tools for analysis and solution of Fredholm integral equations of the ﬁrst kind. Kozerke. Applied Optics. [67] P. A. Bayesian Estimation of 3D Objects from Few Radiographs.A. Sodickson. 1975. and D. A curve evolution approach to objectbased ton mographic reconstruction. Inverse Problems. Wecksung. 6th Scientiﬁc Meeting. Harter. Feng and D. 24:4028–4039. 1975. J. Casta˜ on. Griswold. International Statistical Review. The Method of Least Squares and Some AlternativesPart II.L. 1985.M. [65] W.L. 2006. International Statistical Review. Magnetic Resonance in Medicine. 73:1501–1509. and H.L.
6th Scientiﬁc Meeting. The Method of Least Squares and Some AlternativesPart VI Subject and Author Indexes. 1980. 2001. 45:1066–1074. M.Journal of Basic Engineering. 1981. Adaptive sensitivity encoding incorporating temporal ﬁltering (TSENSE).E. New York. [82] J. Magnetic Resonance in Medicine. Magnetic Resonance in Medicine. Wernick. Image reconstruction from projections : the fundamentals of computerized tomography. Berlin. Nishimura. [89] M.A. Schunck. New York. Statistical and Computational Inverse Problems. Numerical Methods for Unconstraint Optimisation and Nonlinear Equations. Meyer. Somersalo. Edelman. Kaipio and E.R. . New York. IEEE Transactions on Medical Imaging. and D. and ChinTu Chen. Jakob.K. PrenticeHall.T. R. C. SpringerVerlag. Sodickson. New York. [87] R.N. LNCS 1614. Selection of a convolution fucntion for fourier inversion using gridding. John Wiley and Sons.L. In Proceedings of ISMRM. Kass.M.M. [83] R. Englewood Cliffs. [90] P. Witkin.H. Machine Vision. Transaction of the ASME . A new approach to linear ﬁltering and prediction problems. and B.R. R. VDAUTOSMASH imaging. Huijsmans and A. Kasturi. [85] Jr. McVeigh. M. Manning. Griswold. and A. Smeulders. Macovski.I. and E. Kalman sinogram restoration for fast and accurate pet image reconstruction. Jacob. Robust Statistics.G.A. F. volume 160 of Applied Mathematical Sciences. Australia. J. VISUAL ’99.J. Haase. editors.P.J. Huber. A. Griswold. 1960. D. Jain. [79] G. 45(6):3022–3029. International Journal of Computer Vision. [86] J. Kellman. Academic Press. W. Heidemann.P. 1(4):321–331. M. Epstein.M. and D. 82(Series D):35–45. New Jersey. Snakes: Active contour models. 2001. 1983. SpringerVerlag. 44(1):113–159. A.G. Jackson. [81] D. McGrawHill. 45:846–852. London. Dennis and R. [80] P. [88] ChienMin Kao. Herman. 1991. Kalman. IEEE Transactions on Nuclear Science. 1998. 1995.Bibliography 161 [77] H. and P.E. 1988.B Schnabel. Terzopoulos.H. 1999. 10(3):473–478.W. 2004. Harter. 
Cardiac imaging with SMASH. Sydney.M. [84] P. International Statistical Review. [78] R. 1976.
R. Kolehmainen. Ernst. and J. Radiology. J.P. Dynamic image reconstruction in electrical impedance tomography with known internal structures. D. Lee. W. P. Jarvenpaa. Statistical inversion for medical xray tomography with few radiographs:ii. 2003.P. Welti. Botvinick.R. M.R. [97] V. S.B. M. Davis. Kaipio. M Vauhkonen.J. M. Epstein. 2000. Lipton.. L. and J. Kim. ScaleSpace 2001. Kim. California. M. Kolehmainen. I. Santa Monica. Kuhl and C. 44:933–939. 1999.P. Somersalo. Kang. Herfkens.E. 15:1375–1391. Berlin. and M. J. E. Y. R. 121127:150. S. Stateestimation approach to the nonstationary in optical tomography problem. Estimation of nonstationary region boundaries in eitstate estimation approach. Inverse Problems. 18:69–83. 18:236–258. S. 20(5):876–889. SpringerVerlag. RM3090PR. Kerckhove. [94] V. Lionheart. Low latency temporal ﬁlter design for realtime mri using unfold. S. S. Kumar. and J. [100] A. S. editor. Interpolation and Exrepolation of Stationary Random Sequences . J. 2001.R. 1982. 48:1465– 1490. L. Arridge. and C. 2001. Voutilainen. Schiller. IEEE Transactions on Magnetics. Arridge.B. Cardiac imaging using gated magnetic resonance.P. Kaufman. and R. McVeigh. Kolehmainen. Doyle and J. Kim. and E. Higgins.B.N.R. [93] K. Recovery of region boundaries of piecewise constant coefﬁcients of an elliptic pde from boundary data. LNCS 2106. [96] V. [98] A. Kaipio. Crooks. Koistinen. RAND Corp. Lanzer.H. Selin.R. Rept. Sorger. N. Inverse Problems. and E. Kolmogorov. Physics in Medicine and Biology. Prince. Kaipio. Kellman. J. 1962. P. Giardina. 2003. translated by W. Vauhkonen. 2002. application to dental radiology. Elliptic fourier features of a closed contour. 1975. Magnetic Resonance in Medicine.H. S. Arakawa. Lassas. Kolehmainen. Journal of the Optical Society of America A. Pirttila. Siltanen. [92] M. NMR Fourier Zeugmatography. . 1984. Journal of Magnetic Resonance. Y.M. [95] V.162 Bibliography [91] P. P. 17:1937–1956. 38(2):1301–1304. [99] F.L. 
Computer Graphics and Image Processing. [101] P. A. F. Kaipio. C.
79(6):745–754. [106] R. 48:493–501. Lucy. Legendre. Appl. [112] M. Lewitt. Physics in Medicine and Biology. An iterative method for the rectiﬁcation of observed distributions. 1996. [113] B. Lewitt. Massachusetts. Alternatives to voxels for image representation in iterative reconstruction algorithms. 1990. Madore.M. 2002. Magnetic Resonance in Medicine. 1944. 1999. [103] P. Magnetic Resonance in Medicine. [105] K. [107] R. Glover. Multidimensional digital image representations using generalized KaiserBessel window functions. ktSPARSE:High framerate dynamic MRI exploiting spatiotemporal sparsity. 242:190–191. A method for the solution of certain nonlinear problems in least squares. 1973. Siltanen. Pattern Recognition. 1987.G. Donoho. Madore. [111] D. Journal of Optical Society of America A. UNaliasing by Fourierencoding the Overlaps using the temporaL Dimension (UNFOLD). Lauterbur. [110] L.M. . D. 1992. Using unfold to remove artifacts in parallel imaging and in partialfourier imaging. Courcier. AddisonWesley. and N. Nouvelles methodes pour la determination des orbites des cometes.M. [104] A. Santos. 2:164–168. [108] Y.M. Quart. 7(10):1834–1846. Santosa. 1974.Bibliography 163 [102] M. Math. [109] ChunShin Lin and ChiaLin Hwang. J. 1984. 20(5):535–545. Paris. 5(6):987–995. New Forms of Shape Invariants From Elliptic Fourier Descriptors. Luenberger. applied to cardiac imaging and fMRI.M. 1805. 20(5):1537–1563. 2004. CA. G. SantaMonica. In ISMRM RealTime MRI Workshop 2006. Levenberg. Can one use total variation prior for edgepreserving Bayesian inversion? Inverse problems. A Computational Algorithm for Minimizing Total Variation in Image Restoration. [114] B. 42:813–828.J. and J.H.C. Astronomical Journal. Lustig. Pauly. Reading. Linear and Nonlinear Programming. Image Formation by Induced Local Interactions: Examples Employing Nuclear Magnetic Resonance. Pelc.L. Nature. Li and F. 37:705–716.. IEEE Transactions On Image Processing.B. USA. Lassas and S.
Medical Image Analysis. 1997. Meyer. New York. . [117] D. Moore. 1920. IEEE Transactions on Medical Imaging. McInerney and D. Magnetic Resonance in Medicine. Marabini. 120:363–371. Rietzel. and J.M. 52:1127–1135. 15:6878. croscopy. 26:394–395. Mor´ . and D. [123] F. Freebreathing radial acquisitions of the heart. [119] T. Hill.T. Deformable organisms for automatic medical image analysis.W. Crum. McLeish. Bulletin of the American Mathematical Society. 15(4):453–465. H. Duncan. [122] K. and J.J. Shenton. [121] T. Marquardt. ThreeUltrami dimensional reconstruction from reduced sets of very noisy images acquired following a singleaxis tilt schema: application of a new threedimensional reconstruction algorithm and objective comparison with weighted backprojection. Journal of the Society for Industrial and Applied Mathematics. S. G.164 Bibliography [115] R. 4:73–91. G. Marabini. 3D reconstruction in electron microscopy using ART with smooth spherically symmetric volume elements (blobs).G. SpringerVerlag. e In [7]. Carazo. Medical Image Analysis. Herman. 1996. E. 6:251–266. W. 2004. Tracking myocardial deformation using phase contrast mr velocity ﬁelds: A stochastic approach. [118] S. 1998. Wiley. R. Sinusas. Schrderand G. IEEE Transactions on Medical Imaging.T. A. Medical Image Analysis. 1963. 72:5365. On the reciprocal of the general algebraic matrix. Terzopoulos. Kozerke. An Algorithm for LeastSquares estimation of Nonlinear Parameters. [124] E. Matej and R. Terzopoulos. 1983. McInerney. Natterer. Practical considerations for 3D image reconstruction using spherically symmetric volume elements. pages 258–287. [120] T.G.J.T. Lewitt. 1996. [126] F. Herman.S. McInerney and D. Terzopoulos. 1(2):91–108. 2002. [125] J. Recent Developments in Algorithms and Software for Trust Region Methods. Journal of Structural Biology.M. Constable.L. [116] R. Hamarneh. 2000. and J. M. The Mathematics of Computerized Tomography. Tsnakes: Topology adaptive snakes.M. 11(2):431–441. and D. 
Carazo. 1996. Deformable models in medical image analysis: a survey.R. R. 1986.
Boesiger. [133] E. [139] K. Weiger. [138] K. P. 1999. New York. Y. A fast sinc gridding algorithm for fourier inversion in computer tomography. Berlin.D O’Sullivan. editors. [131] N. SpringerVerlag. [128] H. 2005. Weiger. A Technique for the Numerical Solution of Certain Integral Equations of the First Kind.L. Handbook of Mathematical Models in Computer Vision. 1988. Paragios. Scheidegger. IEEE Transactions PAMI. Sethian. Certain Topics in Telegraph Transmission Theory. Bone. 1986. 2001. Advances in Sensitivity Encoding With Arbitrary kSpace Trajectories. A Variational Approach for the Segmentation of the Left Ventricle in Cardiac Image Analysis. Boesiger. Faugeras. Shape Discrimination Using Fourier Descriptors. MICAI 2001. Penrose.Bibliography 165 [127] W. Viergever. Fronts Propagating with Curvature Dependent Speed: Algorithms Based on HamiltonJacobi Formulation. 1928. International Journal of Computer Vision. 4(4):200–207. 50(3):345–362. Magnetic Resonance in Medicine.P. . [130] J. Niessen and M. [136] D. Journal of the ACM (JACM). Osher and J. 2001. Nyquist. Digital Image Procesing. SENSE: Sensitivity Encoding for Fast MRI. 2002. Persson. 1962. 79:12–49. 8(3):388–397. Pruessmann. Total variation norm for threedimensional iterative reconstruction in limited view angle tomography. Bornert. Physics in Medicine and Biology. Pruessmann. Magnetic Resonance in Medicine. A generalized inverse for matrices.. Comp. LNCS 2208.P. Elmqvist.E. 2001. M. Paragios. M. Berlin. J.A. Phys. [134] R.E. 46:638–651. 9(1):84–97. [129] S.A. 1985. and P. [137] W. Springer. [135] M.. [132] N. editors. and O.I. Transaction of the A. and P. D. 1955. 51:406–413. IEEE Transactions on Medical Imaging. Proceedings of the Cambridge Philosophical Society. Peerson and KingSun Fu. and H. Chen. Pratt. 47:617–644. 46:853866. Wiley. 1991.B. M. 42:952–962. Phillips.
[143] L. Bayesian Estimation of 3D Objects from Few Radiographs. 2006.F. Nonlinear Total Variation based Noise Removal Algorithms. BayesianBased Iterative Method of Image Restoration. 60:259–268. A levelset approach for inverse problems involving obstacles. 1994. S. Geodesic active contours applied to texture feature space. 1971. 1972. 69:262–267. [146] K. V. Siltanen. and E. 1:17–33. Lakshminarayanan.A. Schweiger and S.Leipzig. Journal of the Optical Society of America. Lassas. In Proc.a Phys. IEEE Transactions On Nuclear Science. and E. Kolehmainen. Kl. M. 1974. Osher. 2001. Schweiger. [151] S. Sachs. Rudin. [141] G. Proceedings of the National Academy of Sciences. Uber die bestimmung von funktionen durch ihre integralwert l¨ ngs gewisser a mannigfaltigkeiten. Dorn. IEEE Transactions on Nuclear Science. Jarvenpaa.R. pages 1146–1153. 1917. and V. Shepp and B. ESAIM: Control. Statistical inversion for medical xray tomography with few radiographs:i. 1998.Y. 68:2236–2240. Jr. [147] M. International Conference on Computer Vision. O. S. Optimisation and Calculus of Variations. 1996.. Optics letters.A. Image reconstruction in optical tomography using local basis functions. 1992. Berichte S¨ chsische Akademie der Wissenschaften. [144] C. Sauer. [145] F. [150] L. J. 41(5):1780–1790. Arridge. Three dimensional reconstruc tions from radiographs and electron micrographs: Application of convolution instead of fourier transforms. Fatemi. Physics in Medicine and Biology. Isidoro. A. Richardson. R. Klifa. 2003. Pirttila. Sagiv and N. Sochen adn Y. Koistinen. J. SpringerVerlag. 62(1):55–59. Arridge. 12(4):583–593.H. Logan. Kaipio. Zacharopoulos. general theory. S. Zeevi. 21:21–43. In [92]. J. Somersalo. 48:1437–1463. Physica D. 2003. [148] M.V. Reconstructing Absorption and Diffusion Shape Proﬁles in Optical Tomography by a Level Set Technique. The Fourier reconstruction of a head section. Active blobs. and C.Math.N.166 Bibliography ¨ [140] J. Journal of Electronic Imaging. 
[142] W. Ramanchandran and A.P. Kolehmainen. 31(4):471–473. pages 344–352. . Radon. Santosa. [149] S. Sclaroff and J. P.
C. 42(8):32–42. The Annals of Statistics. Manning. and W. 38:591–603. Mathematical Statistics in the Early States. [156] H. 1997. Inverse Problem Theory and Methods for Model Parameter Estimation. Stegmann.W.K. Edelman. 29(2):237–245. Jacob. SIAM. Understanding Magnetic Resonance Imaging. Terzopoulos and K.A. 9th Scientiﬁc Meeting. 7:63– 68. Scotland.J. Stigler. [161] A. Stark. and B. [153] D. Hingorani. 41:1009–1022. [154] D.H.M. IEEE Spectrum. 6(2):239–265. M. Philadelphia.K. [163] D. 2000.J. Tailored SMASH Image Reconstructions for Robust In Vivo parallel MR Imaging. Smith and R. 4(6):306– 331. Leastsquares estimation:from Gauss to Kalman. Grønning. Sodickson and W. A. Paul. 1999. Fleischer. [155] D. [158] H. Glasgow. Speech and Signal Processing. Tarantola. Signaltonoise ratio and signaltonoise efﬁciency in smash imaging. and R. 1999. Deformable models. Woods. Terzopoulos. The Visual Computer.M. Griswold.C. Staib and J. 2004. Manning. In Proceedings of ISMRM. IEEE Transactions PAMI. Artiﬁcial life for computer graphics. B. J. I. Sodickson. Duncan. UK. Boundary ﬁnding with parametrically deformalbe contour models.R.W. [159] M. 1981. CRC Press. 44:243–251. 1988. Nilsson. R. IEEE Transactions on Acoustics. [160] S. Sodickson. J. Direct Fourier Reconstruction in Computer Tomography. Automated segmentation of cardiac magnetic resonance images. Boca Raton New York. Communications of the ACM. [157] L.C. 1978.S. Magnetic Resonance in Medicine.Bibliography 167 [152] R. [162] D. Magnetic Resonance in Medicine. Sorenson. 1992.K. Lange. . Simultaneous acquisition of spatial harmonics (smash): fast imaging with radiofrequency coil arrays. P. 1970. 1998. Magnetic Resonance in Medicine. 14(11):1061–1075.
M.N. SIAM Review. 2001. Vogel. Basel. Boesiger. Washington. SpringerVerlag. [166] J. Priorinformationenhanced dynamic imaging using single or multiple coils with kt BLAST and kt SENSE. [167] J. Vlaardingerbroek and J. Nauk SSSR. Vogel and M. Winston and Sons. SIAM. Iterative methods for total variation denoising. Pruessmann. 2002.E. Berlin. 1977. [165] A. Progress in Systems and Control Theory: Computation and Control IV. Arsenin. 39(5):195– 198. 17:227–238. 46:652–660. [168] J. In K. Tsao. Behnia. kt BLAST and kt SENSE: Dynamic MRI With High Frame Rate Exploiting Spatiotemporal Correlations. On the stability of inverse problems. Nonconvergence of the Lcurve regularization parameter selection method. Tikhonov. P. Tikhonov and V. A multigrid method for total variationbased image denoising. Pruessmann.R. Webb. K. B. In Proceedings of ISMRM. SIAM Journal on Scientiﬁc Computing. 50:10311042. Vogel. On the unfold method. editors. Hawaii. Solutions of IllPosed Problems. 47:202–207. Boesiger. 1943. 1996. 21(1):100–111. 2003. Honolulu.S. Vogel and M. Inverse Problems.E. USA.G. [169] J. Functional Analysis and Its Applications. .168 Bibliography [164] A.. Magnetic Resonance in Medicine. 12:535–547.R. Magnetic Resonance Imaging. Unifying linear priorinformationdriven methods for accelerated image acquisition. Tsao. [171] J. den Boer. Tsao. Dokl. [174] C. D. Tsao. Lund. Akad. Bowers and J. a [176] C. and A. and K. Frontiers in Applied Mathematics. Tsirelson.R. [175] C. Computational Methods for Inverse Problems.C. 1979. Varah. 2002. Philadelphia. Magnetic Resonance in Medicine. 10th Scientiﬁc Meeting. Oman.A. 1995. Magnetic Resonance in Medicine. Oman. A Practical Examination of Some Numerical Methods for Linear Discrete IllPosed Problems. 8(2):138–141. 1974. 1999. Not every Banach space contains an imbedding of lp or c0. 1996.R.T. [170] B. [173] C. Birk¨ user. and P. [172] M.
[179] R. [184] B. Theory and application for threedimensional dentoalveolar imaging. Wu. Wang. D. 2006.C. EunKee Jeong. Wendland. 2002. Sikora.B. Journal of Approximation Theory. Alexander. [181] H. Dorn. O. Error estimates for interpolation by compactly supported radial basis functions of minimal degree. and J. 1971. Wah and T. Inverse Problems. [183] P. Moulin. A SelfReferencing LevelSet Method for Image Reconstruction from Sparse Fourier Samples.R. New York.L. Magnetic Resonance in Medicine. Tyndall.L. 1973. Unfold using a temporal subtraction and spectral energy comparisson technique. 5(3):175–211. 26:53–62. Tunedaperture computed tomography (TACT). V. K. Bresler. 22(5):1509–1532. Ye. Y. Three dimensional reconstruction of shape and piecewise constant region values for optical tomography using spherical harmonic paremetrization and a boundary element method. R. 2002. [185] N. and P. Horton. 1998. The rubber mask technique. 1997.A. Fessler. IEEE Transactions on Image Processing. Yu and J. Weiger. A direct approach to estimating surfaces in tomographic data. Kolehmainen. [182] R. 1949. Elangovan. Interpolation and Smoothing of Stationary Time Series. Wiener. Fast. 14(1):1–25. 2000. Magnetic Resonance in Medicine. 7:813–824. Oman. . The Exrepolation. F. and P. 93:258–272. Pruessmann. London. [180] M. [188] D. Whittle. 1999. 50(3):253–270. 2002. 21:159–173. [189] A. Parker. and A. Pattern Recognition.F. Boesiger. Journal of Global Optimization.P. [178] B. Whitaker and V.E. 48:559–564. Zacharopoulos. Medical Image Analysis. WileyInterscience. Vogel and M.T. IEEE Transactions on Medical Imaging. Edgeperserving tomographic reconstruction with nonlocal regularization. John Wiley & Sons. Cardiac realtime imaging using sense. International Journal of Computer Vision. D.L. and J. Dentomaxillofacial Radiology. Efﬁcient and adaptive Lagrangemultiplier methods for nonlinear continuous global optimization.A. robust total variation–based reconstruction of noisy. Ludlow. 1998. 
[186] Y. Widrow. Arridge. Webber. [187] J. S. blurred images. Optimization under Constraints. 43:177–184.Bibliography 169 [177] C.