
PET Image Reconstruction

Adam Alessio, PhD and Paul Kinahan, PhD

Department of Radiology

University of Washington

1959 NE Pacific Street, Box 356004

Seattle, WA 98195-6004, USA

aalessio@u.washington.edu

Data Acquisition

Analytic Two-Dimensional Image Reconstruction

Analytic Three-Dimensional Image Reconstruction

Iterative Image Reconstruction

Image Quality and Noise-Resolution Tradeoffs

Summary

References

Positron emission tomography (PET) scanners collect measurements of a patient's in vivo

radiotracer distribution. These measurements are reconstructed into cross-sectional images.

Tomographic image reconstruction forms images of functional information in nuclear medicine

applications and the same principles can be applied to modalities such as X-ray computed

tomography. This chapter provides a brief introduction into tomographic image reconstruction and

outlines the current, commonly applied methods for PET. This chapter begins with an overview of

data acquisition in PET in 2-D and 3-D mode. The reconstruction methods presented in this chapter

are divided into analytic and iterative approaches. Analytic reconstruction methods offer a direct

mathematical solution for the formation of an image (sections II and III). Iterative methods are based on

a more accurate description of the imaging process resulting in a more complicated mathematical

solution requiring multiple steps to arrive at an image (section IV). Finally, section V offers an

introduction into the tradeoffs between the varying methods and provides some guidance on how to

assess the quality of a reconstruction algorithm.

Since its introduction to medical imaging applications in the late 1960s, tomographic

reconstruction has grown into a well-researched, highly-evolved field. The modest goal of this

chapter is to focus on common PET reconstruction methods and not to explain more advanced

approaches. There are several general references on medical image reconstruction with a broader

scope than nuclear medicine, both at the introductory [26, 28, 39] and advanced levels [6, 38, 47].

I. DATA ACQUISITION

The physics of photon generation and detection for PET are described elsewhere in this text.

PET imaging, along with several other imaging modalities, can be described with a line-integral

model of the acquisition. We start by considering the parallelepiped joining any two detector

elements as a volume of response (figure 1). In the absence of physical effects such as attenuation,

scattered and accidental coincidences, detector efficiency variations, or count-rate dependent

effects, the total number of coincidence events detected will be proportional to the total amount of

tracer contained in the tube or volume of response (VOR), as indicated by the shaded area in figure

1(b).

Alessio and Kinahan - PET Image Reconstruction

To appear in Henkin et al., Nuclear Medicine, 2nd Ed.


Figure 1. 'Tube' or volume of response corresponding to sensitive region scanned by two detector elements. (a) Overall

scheme with the volume indicated as a line of response (LOR). (b) Detail showing volume of response (VOR) scanned

by two of the rectangular detector elements (not to scale).


Figure 2. A projection, p(s, φ), is formed from integration along all parallel LORs at an angle φ. The projections are organized into a sinogram such that each complete projection fills a single row of φ in the sinogram. In this format, a single point in f(x, y) traces a sinusoid in the sinogram.

I.A. Two-Dimensional Imaging

Two-dimensional PET imaging only considers lines of response (LORs) lying within a specified

imaging plane. The acquired data are collected along LORs through a two-dimensional object

f(x, y) as indicated in figure 2. The LORs are organized into sets of projections, line integrals for all s for a fixed direction φ. The collection of all projections for 0 ≤ φ < π forms a two-dimensional function of s and φ called a sinogram. This sinogram is aptly named because a fixed point in the object traces a sinusoidal path in the projection space as shown in figure 2. A sinogram


for a general object will be the superposition of all sinusoids corresponding to each point of activity

in the object (see figure 3).
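The sinusoidal trace described above can be checked in a few lines. This is an illustrative numpy sketch, not from the chapter; the point position and angle sampling are arbitrary choices.

```python
import numpy as np

# A point of activity at (x0, y0) contributes to the projection at
# s = x0*cos(phi) + y0*sin(phi), so its trace across projection angles
# is a sinusoid -- hence the name "sinogram". Units are arbitrary.
x0, y0 = 3.0, 4.0
phis = np.linspace(0.0, np.pi, 180, endpoint=False)

s_trace = x0 * np.cos(phis) + y0 * np.sin(phis)

# The trace is a single sinusoid whose amplitude is the point's distance
# from the center of the field of view.
amplitude = np.sqrt(x0**2 + y0**2)
print(np.max(np.abs(s_trace)) <= amplitude + 1e-9)  # True
```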

The line-integral transform of

f(x, y) → p(s, φ) is called the X-ray transform [39], which in 2-D

is the same as the Radon transform. The X-ray transform (or variants thereof) is a basis for a model

of the data acquisition process of several modalities, including gamma cameras, SPECT, PET, and

X-ray imaging systems, as well as several non-medical imaging modalities.

In two-dimensional PET, we form projections through only a single transverse slice of a

volumetric object. We image a volumetric object by repeating the 2-D acquisition for multiple axial

(z) slices. When the sinogram for each value of z is reconstructed, we can stack the image planes together to form a three-dimensional image of f(x, y, z). Although this can be considered a form of

three-dimensional imaging, it is different from the 'fully' three-dimensional acquisition model

described in the next section.

Figure 3. During data acquisition, the object of emission activity on the left forms the sinogram on the right.

I.B. Fully Three-Dimensional Imaging

With two-dimensional PET imaging, we acquire only line integral data for all imaging planes

perpendicular to the scanner or patient axis, called 'direct' planes (figure 4). Multiple 2-D planes are

stacked to form a 3-D volume. In fully three-dimensional PET imaging, we acquire both the direct

planes as well as the line-integral data lying on 'oblique' imaging planes that cross the direct planes,

as shown in figure 4. Both 2-D and fully 3-D PET imaging lead to 3-D images; the nomenclature

only specifies the type of data acquired. PET scanners operate in fully 3-D mode to increase

sensitivity, and thus lower the statistical noise associated with photon counting, improving the

signal-to-noise ratio in the reconstructed image.

Early PET scanners avoided fully 3-D imaging for several reasons. First, fully 3-D

measurements require more storage with data set sizes approximately 10³ times larger than 2-D

measurements. As a result, reconstruction becomes more computationally intensive. Moreover,

fully 3-D measurements contain significantly more scattered events because 2-D measurements are collected with septa placed between each ring of detectors, which reduce oblique

scattered events. Improved scatter correction techniques have helped alleviate these problems [42,

49].



Figure 4. Comparison of fully 3-D and 2-D PET measurements. In 2-D mode, the scanner collects only direct and cross planes (organized into direct planes). In fully 3-D mode, the scanner collects all, or most, oblique planes.

I.C. Deterministic vs. Stochastic Imaging Model

One way to represent the imaging system is with the following linear relationship

p = Hf + n (1)

where p is the set of observations, H is the known system model, f is the unknown image, and n is

the error in the observations. The goal of reconstruction is to use the data values p (projections

through the unknown object) to find the image f. With this theoretical understanding of the line

integral model for the data, we need to clarify our understanding of the contents of the data values.

In practice, the data values are modeled either as deterministic or stochastic (random) variables.
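The two views of the data in Eq. (1) can be illustrated with a toy example. The sizes, the system model H, and the activity values below are entirely made up; Poisson counts stand in for the stochastic view.

```python
import numpy as np

# Toy illustration of Eq. (1), p = Hf + n: 4 voxels, 6 projection bins.
rng = np.random.default_rng(0)

f = np.array([10.0, 0.0, 5.0, 2.0])      # unknown image (activity per voxel)
H = rng.uniform(0.0, 1.0, size=(6, 4))   # known system model (hypothetical)
p_bar = H @ f                            # noise-free (deterministic) data

# Stochastic view: the observations are Poisson counts around the mean Hf,
# so n = p - Hf is a zero-mean random error, not a knowable number.
p = rng.poisson(p_bar).astype(float)
n = p - p_bar
print(p_bar.shape)  # (6,)
```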

A common approach for dealing with PET data is to assume that the data is deterministic,

containing no statistical noise. Therefore, the n in equation (1) is a deterministic number and, if

known, we could find the exact solution for the image (a deterministic image from a deterministic

set of data). Analytic reconstruction methods use the inverse of the discrete Radon transform to

solve this problem, offering a direct mathematical solution for the image (f) from known projections

(p). The deterministic assumption is advantageous because it simplifies the reconstruction allowing

for a fast, direct mathematical solution with behavior that is fairly predictable. On the negative side,

these methods disregard the noise structure in the observations and are based on an idealized model

of the system. Consequently, they can lead to images with reduced resolution and poor noise

properties (often in the form of streaking artifacts). Sections II and III will discuss analytic methods

based on the assumption of deterministic data.

In reality, the data values are intrinsically stochastic due to several physical realities in PET

imaging including: the positron decay process, the effects of attenuation, the addition of scattered


and random events, and the photon detection process. Consequently, the n in equation (1) is more

accurately representative of random noise making it impossible to find the exact solution for the

image that formed the data. Therefore, we often revert to 'estimation' techniques, which in the field of tomographic reconstruction are solved iteratively, as discussed in section IV. These estimation methods lead to approximate solutions only through constraining the solution with

some form of regularization.

II. ANALYTIC TWO-DIMENSIONAL IMAGE RECONSTRUCTION

II.A. The two-dimensional central-section theorem

The central-section theorem, also known as the central-slice or Fourier-slice theorem, is a

foundational relationship in analytic image reconstruction. This theorem states that the Fourier

transform of a one-dimensional projection is equivalent to a section, or profile, at the same angle

through the center of the two-dimensional Fourier transform of the object [28]. Figure 5 shows a

pictorial description of the central-section theorem, where P(ν_s, φ) is the one-dimensional Fourier transform of a projection, F(ν_x, ν_y) is the two-dimensional Fourier transform of the image, and ν_s is the Fourier space conjugate of s. The central-section theorem indicates that if we know P(ν_s, φ) at all angles 0 ≤ φ < π, then we can fill in values for F(ν_x, ν_y). The inverse two-dimensional Fourier transform of F(ν_x, ν_y) will give us f(x, y).
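The theorem can be verified numerically for the simplest angle, φ = 0, where the projection is just a sum along one array axis. This is an illustrative numpy check, not part of the chapter, and the test object is an arbitrary random image.

```python
import numpy as np

# Central-section theorem at phi = 0: the 1-D FFT of the projection
# equals the corresponding central row of the 2-D FFT of the object.
rng = np.random.default_rng(1)
f = rng.random((64, 64))

projection = f.sum(axis=0)      # line integrals along one axis
P = np.fft.fft(projection)      # 1-D Fourier transform of the projection

F2 = np.fft.fft2(f)             # 2-D Fourier transform of the object
central_section = F2[0, :]      # central profile at the same angle

print(np.allclose(P, central_section))  # True
```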


Figure 5. Pictorial illustration of the two-dimensional central-section theorem, showing the equivalency between the one-dimensional Fourier transform of a projection at angle φ and the central-section at the same angle through the two-dimensional Fourier transform of the object.

II.B. Backprojection


An essential step in image reconstruction is backprojection, which is the adjoint of the forward

projection process that forms the projections of the object. Figure 6 shows the backprojection along

a fixed angle, φ. Conceptually, backprojection can be described as placing a value of p(s, φ) back into an image array along the appropriate LOR, but, since the knowledge of where the values came from was lost in the projection step, the best we can do is place a constant value into all elements along the LOR.


Figure 6. Backprojection into an image reconstruction array of all values of p(s, φ) for a fixed value of φ.

One might assume that straight backprojection of all the collected projections will return the

image, but this is not the case due to the oversampling in the center of the Fourier transform. In

other words, each projection fills in one slice of the Fourier space resulting in oversampling in the

center and less sampling at the edges. For example, if we perform backprojections at only two

angles, say φ₁ and φ₂, and examine the Fourier transform of the result, we see that the contribution at the origin is doubled while there is only one contribution at the edges of the field of view. Another

way of understanding this oversampling in the space domain is with the forward projection of a

single point source. If we simply backproject the point source projections, the image would be

heavily blurred since the projections are added back to the entire LOR from which they came. The

oversampling needs to be re-weighted, or 'filtered', in order to have equal contributions throughout

the field of view.
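The point-source argument above can be reproduced with a minimal nearest-neighbor backprojector. This is an illustrative sketch; the grid size, angle set, and sampling scheme are arbitrary choices, not the chapter's implementation.

```python
import numpy as np

# Minimal backprojection on an N x N grid: for each angle, the value
# p(s, phi) is smeared back along the whole LOR, since the position along
# the line was lost in the projection step.
def backproject(sinogram, phis, N):
    xs = np.arange(N) - N // 2
    x, y = np.meshgrid(xs, xs, indexing="ij")
    b = np.zeros((N, N))
    for p_row, phi in zip(sinogram, phis):
        s = x * np.cos(phi) + y * np.sin(phi)            # LOR coordinate of each pixel
        idx = np.clip(np.round(s).astype(int) + N // 2, 0, N - 1)
        b += p_row[idx]                                  # constant value along each LOR
    return b / len(phis)

# Backprojecting the projections of a central point source does not recover
# the point: every LOR through the center deposits its value along its full
# length, producing a characteristic blur around the point.
N = 65
phis = np.linspace(0.0, np.pi, 90, endpoint=False)
point_sino = np.zeros((len(phis), N))
point_sino[:, N // 2] = 1.0                              # point at the center
b = backproject(point_sino, phis, N)
print(b[N // 2, N // 2] > b[N // 2, N // 2 + 10])        # True: peaked but blurred
```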

II.C. Reconstruction by Backprojection-Filtering

Our goal is to compute f(x, y) from p(s, φ). After backprojection, the oversampling in the center of Fourier space needs to be filtered in order to have equal sampling throughout the Fourier space. Basically, the Fourier transform of the backprojected image must be filtered with a 'cone' filter (ν = (ν_x² + ν_y²)^(1/2)). This cone filter accentuates values at the edge of the Fourier space and de-

accentuates values at the center of the Fourier space. This operation is summarized in


F(ν_x, ν_y) = ν B(ν_x, ν_y) (2)

where B(ν_x, ν_y) is the 2-D Fourier transform of the backprojected image and F(ν_x, ν_y) is the 2-D Fourier transform of the backprojection-filtered image. The final step is the inverse Fourier transform of F(ν_x, ν_y) to obtain the image f(x, y).

This is known as the backprojection-filtering (BPF) image reconstruction method, where the

projection data are first backprojected, filtered in Fourier space with the cone filter, and then inverse

Fourier transformed. Alternatively, the filtering can be performed in image space via convolution of b(x, y) with the two-dimensional inverse Fourier transform of the cone filter. A disadvantage of this approach is that the function b(x, y) has a larger support than f(x, y) due to the convolution with the filter term, which results in gradually decaying values outside the support of f(x, y). Thus any numerical procedure must first compute b(x, y)

using a significantly larger image matrix size than is needed for the final result. This disadvantage

can be avoided by interchanging the filtering and backprojection steps as discussed next.

II.D. Reconstruction by Filtered-Backprojection (FBP)

If we interchange the order of the filtering and backprojection steps in Eq. (2) we obtain the

useful filtered-backprojection (FBP) image reconstruction method:

f(x, y) = ∫₀^π p_F(x cos φ + y sin φ, φ) dφ (3)

where the 'filtered' projection, given by

p_F(s, φ) = ∫ |ν_s| P(ν_s, φ) e^{i2πν_s s} dν_s (4)

can be regarded as pre-corrected for the oversampling of the Fourier transform of f(x, y). The one-dimensional 'ramp' filter, |ν_s|, is a section through the rotationally symmetric two-dimensional cone filter. An advantage of FBP is that the ramp filter is applied to each measured projection, which has a finite support in s, and we only need to backproject the filtered projections for s less than the

radius of the field of view. This means that with FBP the image can be efficiently calculated with a

much smaller reconstruction matrix than can be used with BPF, for the same level of accuracy. This

is part of the reason for the popularity of the FBP algorithm.
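Eqs. (3) and (4) can be sketched in a few lines: ramp-filter each projection row in Fourier space, then backproject. This is illustrative numpy code with simplified discretization (nearest-neighbor backprojection, unpadded FFTs), not the chapter's implementation.

```python
import numpy as np

# Filtered-backprojection (FBP) sketch: filter each projection with the
# one-dimensional ramp |nu_s|, then backproject the filtered projections.
def ramp_filter(sinogram):
    n_s = sinogram.shape[1]
    nu_s = np.fft.fftfreq(n_s)                   # frequency samples
    P = np.fft.fft(sinogram, axis=1)             # P(nu_s, phi), row by row
    return np.real(np.fft.ifft(P * np.abs(nu_s), axis=1))

def backproject(sinogram, phis, N):
    xs = np.arange(N) - N // 2
    x, y = np.meshgrid(xs, xs, indexing="ij")
    b = np.zeros((N, N))
    for p_row, phi in zip(sinogram, phis):
        s = x * np.cos(phi) + y * np.sin(phi)
        idx = np.clip(np.round(s).astype(int) + N // 2, 0, N - 1)
        b += p_row[idx]
    return b * np.pi / len(phis)                 # approximate the integral over phi

# Reconstruct a central point source: FBP concentrates the activity at the
# point, whereas unfiltered backprojection spreads it over the whole image.
N = 65
phis = np.linspace(0.0, np.pi, 180, endpoint=False)
sino = np.zeros((len(phis), N))
sino[:, N // 2] = 1.0
recon = backproject(ramp_filter(sino), phis, N)
print(recon[N // 2, N // 2] == recon.max())      # True: peak lands at the point
```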

II.E. Regularization

The inverse problem of Eq. (1) is ill-posed, and its solution (Eq. (3)) is 'unstable' in the sense that a small perturbation of the data, p(s, φ), can lead to an unpredictable change in the estimate of f(x, y). As photon detection is a stochastic process, some form of regularization is required to

constrain the solution space to physically acceptable values.

The most common form of regularizing image reconstruction is via simple smoothing. With the

FBP algorithm, Eq. (3), this can be written as

f̂(x, y) = ∫₀^π ∫ W(ν_s) |ν_s| P(ν_s, φ) e^{i2πν_s(x cos φ + y sin φ)} dν_s dφ (5)


where f̂(x, y) is the reconstructed estimate of f(x, y) and W(ν_s) is the apodizing (or smoothing) function. In the tomographic image reconstruction literature, the term 'filter' is reserved for the unique ramp filter, |ν_s|, which arises from the sampling of the Fourier transform of the object. In other words, for two-dimensional image reconstruction there is only one possible filter (the ramp filter), while the apodizing function can take on any shape that is deemed most advantageous based on image SNR, or other considerations. Note that unless W(ν_s) = 1, Eq. (5) yields a biased estimate of f(x, y).

A simple model explains the principle of most apodizing functions. If we regard the measured

projection data as the sum of the true projection data and uncorrelated random noise, that is

p_m(s, φ) = p(s, φ) + n(s, φ), then the corresponding measured power spectrum is |P_m(ν_s, φ)|² = |P(ν_s, φ)|² + |N(ν_s, φ)|². With the finite frequency response of the detection system the signal power, |P(ν_s, φ)|², will gradually roll off, while the noise power, |N(ν_s, φ)|², will remain

essentially constant, as illustrated in figure 7. The effect of the ramp filter, also shown in figure 7,

will be to amplify the high frequency components of the power spectrum, which are dominated by

noise at increasing frequencies. A very common apodizing function is the cosine apodizing window,

also called a Hamming, Von Hann, Hann, or even Hanning, window. The specific shape of

W(ν_s) will determine the noise/resolution trade-offs in the reconstructed image and also affect the noise co-variance (noise texture) in the image. In practice it is difficult to rigorously justify a choice of W(ν_s). In principle, at least, W(ν_s) should be chosen to optimize a specific task, such as lesion detection.
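As a sketch of the apodized ramp of Eq. (5), the ramp can be multiplied by a cosine window. The window form and cutoff frequency below are illustrative choices, not values prescribed by the chapter.

```python
import numpy as np

# Apodized ramp filter W(nu_s)*|nu_s|: a cosine (Hann-type) window that
# rolls off to zero at a chosen cutoff nu_c suppresses the high-frequency
# noise that the bare ramp would amplify.
def apodized_ramp(n_s, nu_c=0.4):
    nu_s = np.fft.fftfreq(n_s)
    W = np.where(np.abs(nu_s) < nu_c,
                 0.5 * (1.0 + np.cos(np.pi * nu_s / nu_c)),  # 1 at DC, 0 at nu_c
                 0.0)
    return W * np.abs(nu_s)

filt = apodized_ramp(128)
ramp = np.abs(np.fft.fftfreq(128))

# The apodized filter tracks the ramp at low frequencies but is attenuated
# toward the cutoff, where the measured power spectrum is noise dominated.
print(filt[1] < ramp[1])   # True: some attenuation even at low frequency
print(filt[60] == 0.0)     # True: fully suppressed beyond nu_c
```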


Figure 7. Illustration of the use of an apodized ramp filter W(ν_s)|ν_s| to suppress amplification of high-frequency noise power above the cutoff frequency ν_c.

III. ANALYTIC THREE-DIMENSIONAL IMAGE RECONSTRUCTION

There are two important differences between two-dimensional and three-dimensional image

reconstruction from X-ray transforms: spatially-varying scanner response and data redundancy. In

fully three-dimensional mode, the scanner is more sensitive to activity in the center of the axial field

of view than it is to activity at the edge of the scanner because more oblique planes intersect in the

center of the scanner. This causes a spatial variance, which complicates the use of analytical

reconstruction techniques. On the positive side, fully 3-D data contains redundancies because, from an analytic point of view, only a single slice of data is required to reconstruct an image.


Since fully 3-D data contains multiple slices, there is an inherent redundancy in the data required to reconstruct a single image.

III.A. Three-Dimensional Reprojection Algorithm (3DRP)

The spatially-varying scanner response can be understood as the consequence of the finite axial

extent of the scanner, leading to truncated projections. In a two-dimensional ring scanner, the

observed intensity of a point source will remain approximately constant, regardless of the position

of the point source inside the scanner's FOV, as illustrated in figure 8. For a three-dimensional

cylindrical scanner, however, with truncated projections the observed intensity of a point source

will vary depending on the position of the point source inside the scanner's FOV, particularly as the

point source moves axially (figure 8). For the impractical case of a non-truncated spherical scanner

with detectors completely surrounding the object, the scanner's response would be spatially-

invariant, although the issue of data redundancy would remain.


Figure 8. Illustration of the spatial invariance of two-dimensional imaging with X-ray transforms and the spatial variance of three-dimensional imaging with X-ray transforms. For the three-dimensional cylindrical scanner, the observed intensity of the point source will vary depending on the position of the point source inside the scanner's FOV.

The spatially-varying nature of the measured data complicates the image reconstruction process.

For example, if regions of the support of a projection are unmeasured due to truncation, then accurately

computing the two-dimensional Fourier transform of the projection, say for an FBP type of

algorithm, is not possible by simply using FFTs.

A common method of restoring spatial invariance to measured three-dimensional X-ray

projection data is the three-dimensional reprojection (3DRP) algorithm [29]. In this method,

unmeasured regions of projections are estimated by numerically forward-projecting through an

initial estimate of the image. The initial estimate is formed by reconstructing an image using only

the direct planes (which are not truncated) with two-dimensional FBP for each transverse plane.

Further details on the implementation of this algorithm are given by Defrise and Kinahan [14].

It is also possible to restrict the axial angle range to use only those projections that are not

truncated. For medical imaging systems, however, this is usually not helpful as the patient extends

the entire axial length of the scanner.

III.B. Rebinning Methods


With three-dimensional X-ray transform data it is possible to use the measured data to estimate

a stacked (volumetric) set of two-dimensional transverse sinograms by using some form of signal

averaging. Such a procedure is called a rebinning algorithm. Rebinning algorithms have the

advantage that each rebinned sinogram can be efficiently reconstructed with either analytic or

iterative two-dimensional reconstruction methods. In addition, rebinning can significantly reduce the size of the data. The disadvantage of rebinning methods is that they pay a penalty in terms of spatially-varying distortion and/or amplification of the statistical noise. Although rebinning methods are not specifically a reconstruction procedure, they are an important addition to the array of techniques brought to bear on three-dimensional image reconstruction problems.

The simplest, but still often useful, rebinning method is the single-slice rebinning (SSRB)

algorithm [13] where the rebinned sinograms are formed from averaging all of the oblique

sinograms that intersect the direct plane at the center of the transaxial field of view. A more

accurate rebinning method is the Fourier rebinning (FORE) algorithm [15], which is based on a

reasonably accurate equivalence between specific elements in the Fourier transformed oblique and

transverse sinograms. In other words, the Fourier transformed oblique sinograms can be resorted

into transverse sinograms and, after normalization for the sampling of the Fourier transform, inverse

transformed to recover accurate direct sinograms. The FORE method amplifies statistical noise

slightly compared to SSRB, but results in significantly less distortion.
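A minimal sketch of the SSRB idea follows. The ring indices, array shapes, and data structure are hypothetical; real scanners also restrict the maximum ring difference used.

```python
import numpy as np

# Single-slice rebinning (SSRB) sketch: each oblique sinogram, acquired
# between detector rings r1 and r2, is assigned to the direct plane at the
# average axial position (r1 + r2) / 2, and all sinograms mapped to the
# same plane are averaged.
def ssrb(oblique_sinos):
    """oblique_sinos: dict mapping (r1, r2) ring pairs -> 2-D sinogram."""
    sums, counts = {}, {}
    for (r1, r2), sino in oblique_sinos.items():
        z = (r1 + r2) / 2.0                  # midpoint slice for this LOR
        sums[z] = sums.get(z, 0.0) + sino
        counts[z] = counts.get(z, 0) + 1
    return {z: sums[z] / counts[z] for z in sums}

# Three ring pairs: one direct plane at z=1 and two oblique planes whose
# midpoints also fall on z=1, so all three are averaged into one sinogram.
shape = (4, 8)                               # (angles, radial bins), made up
sinos = {(1, 1): np.full(shape, 3.0),
         (0, 2): np.full(shape, 1.0),
         (2, 0): np.full(shape, 2.0)}
rebinned = ssrb(sinos)
print(rebinned[1.0][0, 0])  # 2.0 -- the average of the three planes
```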

IV. ITERATIVE IMAGE RECONSTRUCTION

IV.A. Basic Components

Iterative methods offer improvements over the analytical approach because they can account for

the noise structure in the observations and can use a more realistic model of the system. These

improvements come at the cost of added complexity, resulting in mathematical problems without a direct analytic solution, or with a solution that cannot be computed directly with current processing capabilities.

successively improve, or iterate, an estimate of the unknown image. This iterative process results in

a potentially more accurate estimate than analytical reconstruction methods, at the cost of greater

computational demands. Advances in computation speed and faster algorithms have helped

overcome the computational burden of iterative methods allowing them to receive growing clinical

acceptance.

The following discussion offers only an introduction to iterative methods. Refer to review

articles for more detailed discussions [31, 43].

All iterative methods contain five basic components. The first and often-overlooked component

is a model for the image. This is usually a discretization of the image domain into N distinct pixels

(2D image elements) or voxels (3D image elements). Other models have been proposed, such as

spherical elements ('blobs') [34, 41] with overlapping boundaries, but these approaches are not used clinically due to complexity. Usually, the image elements are treated as deterministic variables, but they can also be viewed, arguably more accurately, as random variables, adopting a Bayesian approach for the reconstruction problem, as will be discussed later.

The second basic component is a system model that relates the image to the data. An element,

H_ij,

of the system model, H, characterizes the imaging system and represents the probability that an

emission from voxel j is detected in projection i. Therefore,


p̄_i = Σ_{j=1}^{N} H_ij f_j (6)

where p̄_i is the mean of the i-th projection and f_j is the activity in voxel j, as illustrated in figure 9.

Most clinical methods use spatially invariant system models with responses simplified for computational efficiency. In emerging research, accurate, spatially variant system modeling has been shown to improve resolution and decrease mispositioning errors in high-resolution, small-animal PET systems [20, 32, 45] and with whole-body PET imaging [2].

Figure 9. Illustration of a single element of the system model, H_ij.
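Eq. (6) amounts to a matrix-vector product. A toy example with an entirely made-up H; a real system matrix encodes scanner geometry, and each column sums to at most 1 since not every emission is detected.

```python
import numpy as np

# H[i, j] is the probability that an emission from voxel j is detected in
# projection bin i, so the mean data are p_bar = H f (Eq. (6)).
# This 4-bin x 3-voxel H is purely illustrative.
H = np.array([[0.4, 0.1, 0.0],
              [0.3, 0.3, 0.1],
              [0.1, 0.3, 0.3],
              [0.0, 0.1, 0.4]])
f = np.array([100.0, 50.0, 80.0])   # activity in each voxel (made up)

p_bar = H @ f                       # mean of each projection, Eq. (6)
print(p_bar[0])  # 45.0 = 0.4*100 + 0.1*50 + 0.0*80
```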

The third component of all iterative methods is a model for the data, which in statistical

methods describes the statistical relationship between the value of the measurements and the

expected value of the measurements. In other words, this model relates how the projection

measurements vary around their expected mean values and is derived from our basic understanding

of the acquisition process. Photon detections are Poisson distributed so, in the majority of methods,

a Poisson model is used. For M projections, the Poisson probability law states that the probability

(L) that the random vector of Poisson distributed photon counts (P) equals the true photon counts

(p) given that we have a vector of emission rates, f is

L(P = p | f) = Π_{i=1}^{M} p̄_i^{p_i} e^{-p̄_i} / p_i! . (7)

Although the Poisson model is appropriate for a conceptual view of PET imaging, once the

corrections for randoms, scatter, and attenuation are applied, the data is no longer Poisson. Other

models such as approximations of the Poisson [10], shifted Poisson [50], and Gaussian models have

been proposed to improve model accuracy and for practical computation reasons.
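The likelihood of Eq. (7) is usually handled in log form. A toy sketch with made-up counts and means, not from the chapter:

```python
import numpy as np
from math import lgamma, log

# Log of the Poisson likelihood in Eq. (7): for observed counts p and model
# means p_bar (i.e. Hf), it is sum_i [ p_i*log(p_bar_i) - p_bar_i - log(p_i!) ],
# with log(p_i!) computed via lgamma.
def poisson_log_likelihood(p, p_bar):
    return sum(pi * log(mi) - mi - lgamma(pi + 1) for pi, mi in zip(p, p_bar))

p = np.array([3.0, 10.0, 2.0])   # observed photon counts (toy values)

# Unconstrained, the likelihood is maximized when the model means equal the
# observed counts; the system model p_bar = Hf couples the bins, so the ML
# image must balance these per-bin fits.
ll_match = poisson_log_likelihood(p, p)                       # means = data
ll_off = poisson_log_likelihood(p, np.array([4.0, 9.0, 2.5])) # other means
print(ll_match >= ll_off)  # True
```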

Now that we have a description of the image, the measurements, and the system relating the

two, we must adopt a governing principle that defines the 'best' image. The governing principle is


often expressed mathematically as a cost, or objective, function. The most common principle for

iterative reconstruction is the Maximum Likelihood approach, a standard statistical estimation

method. In this approach, the probability relationship (7) is a likelihood function of the object f; we

must choose an estimate of the object, f̂, that provides the greatest value of the likelihood L.

Maximum likelihood estimators are advantageous because they offer unbiased, minimum

variance estimates as the number of measurements increases towards infinity. This means that as

the number of measurements or projections becomes large the expected value of the image estimate

approaches the true image (E[f̂] → f_true). Moreover, these estimators are proven to provide the least

variance among all possible unbiased estimators, promising an estimate with less noise than other

unbiased estimators. On the surface, this noise property is advantageous, but even the least noisy

unbiased estimates are unacceptably noise due to the inherent noise in emission photon counting

systems. Therefore, in practice, methods often choose to allow some bias in the reconstructed

image in return for reduced noise levels. This bias is introduced in the form of spatial smoothing, effectively adding error to the image mean values while reducing overall noise levels. This

smoothing is performed either implicitly by stopping the algorithm before reaching the ML solution

or explicitly through some smoothing operation. This smoothing operation can take the form of a

post-processing low-pass filter or through one of many Bayesian/penalized approaches discussed

later.

The final component of all iterative methods is an algorithm that optimizes the cost function, or

in other words, finds the 'best' image estimate. Numerous algorithms have been proposed, ranging

from gradient-based algorithms [3, 10, 37] to the commonly used expectation maximization (EM)

algorithm. We will focus on a variant of the EM algorithm, ordered subsets expectation

maximization (OSEM), because it is the most widely used method.

IV.B. Maximum Likelihood - Expectation Maximization

The expectation maximization (EM) algorithm, discussed in 1977 by Dempster et al. [16],

offers a numerical method for determining a maximum likelihood estimate (MLE). The ML-EM

algorithm, since its introduction to the field of image reconstruction in 1982 by Shepp and Vardi

[46], remains a basis for the most popular statistical reconstruction method and has provided the

foundation for many other methods.

When the ML-EM algorithm is applied to PET image reconstruction, it leads to the simple

iterative equation

f_j^(n+1) = [ f_j^(n) / Σ_{i'} H_{i'j} ] Σ_i H_ij [ p_i / Σ_k H_ik f_k^(n) ] (8)

where

$f_j^{(n+1)}$ is the next estimate of voxel $j$ based on the current estimate $f_j^{(n)}$. We will describe this algorithm qualitatively with the help of figure 10, starting with an initial image guess, $f^{(0)}$,

shown in the upper left of the figure and present in the denominator of equation (8). This initial

guess is usually just the entire image set to a constant value. The first step (1) forward projects this

image into the projection domain. Then, (2) these projections are compared with the measured

projections, p. This forms a multiplicative correction factor for each projection, which is then (3)

backprojected into the image domain to obtain a correction factor for the initial image estimate. This

Alessio and Kinahan - PET Image Reconstruction. To appear in Henkin et al., Nuclear Medicine, 2nd Ed.

image domain correction factor is then (4) multiplied by the current image estimate and divided by

a weighting term based on the system model to apply the desired strength of each image correction

factor. The new image estimate is then reentered into the algorithm as the current image, and the algorithm repeats until the estimate approaches the maximum likelihood solution.

Figure 10. Flow diagram of the maximum likelihood-expectation maximization algorithm. Starting with an initial image guess ($f^{(0)}$) in the upper left, the algorithm iteratively chooses new image estimates based on the measured projections, $p$.

As the EM algorithm iteratively guesses an image estimate, the low frequency components of

the image appear within the first few iterations. As the ML estimate is approached, more and more

high frequency definition is resolved in the image, effectively adding more variance to the

reconstruction. As discussed above in the drawbacks of the ML solution, this variance is often

reduced (at the expense of increased bias) by stopping the algorithm early or by post-smoothing the

reconstruction. While the convergence rate of ML-EM is image dependent, ML-EM usually

requires approximately 20-50 iterations to reach an acceptable solution. Considering that ML-EM

requires one forward projection and one backprojection at each iteration, the overall processing time is considerably longer than that of the filtered backprojection approach, but the result is a potentially more accurate reconstruction.

IV.C. Ordered Subsets Expectation Maximization

Ordered Subsets Expectation Maximization (OSEM) was introduced in 1994 [27] to reduce

reconstruction time of conventional ML-EM. This slight modification of ML-EM uses subsets of

the entire data set for each image update in the form

$$ f_j^{(n+1)} \;=\; \frac{f_j^{(n)}}{\sum_{i \in S_b} H_{ij}} \,\sum_{i \in S_b} H_{ij}\, \frac{p_i}{\sum_{k} H_{ik}\, f_k^{(n)}} \qquad (9) $$


where the backprojection steps sum over only the projections in subset $S_b$ of a total of $B$ subsets. Therefore, the image is updated during each subiteration, and one complete iteration comprises $B$ image updates. When there is only one subset ($B = 1$), OSEM is the same as ML-EM.

There are many approaches for dividing the projection space into subsets. Most approaches use non-overlapping subsets ($B$ subsets, each containing an equal share of the projections). A common approach divides the projections into sets with different views, or azimuthal angles. For example, assume we only collect projections of the object at angles 0°, 23°, 45°, and 68°. If we divide these projections into 2 subsets, we would place the projections at angles 0° and 45° into one subset and the projections at 23° and 68° into the other subset. Then, we apply the ML-EM algorithm with the projections in subset 1 and obtain an image estimate. Next, we enter that image estimate into the algorithm using subset 2 and repeat.
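The interleaved subset scheme just described can be sketched as follows. Again this is a toy NumPy illustration under assumed inputs: the names `osem` and `view_of_bin` are hypothetical, with `view_of_bin` labeling each projection bin with its azimuthal view so that views are dealt into the subsets in an interleaved order and each subset spans the full range of angles.

```python
import numpy as np

def osem(H, p, view_of_bin, n_subsets=4, n_iters=4):
    """Ordered subsets EM of equation (9): one image update per subset S_b."""
    f = np.ones(H.shape[1])                       # constant initial guess
    subsets = [np.flatnonzero(view_of_bin % n_subsets == b)
               for b in range(n_subsets)]         # interleaved view assignment
    for _ in range(n_iters):
        for rows in subsets:                      # each pass is one subiteration
            Hs, ps = H[rows], p[rows]
            proj = Hs @ f                         # forward project subset views only
            ratio = ps / np.maximum(proj, 1e-12)  # compare with subset measurements
            f = f * (Hs.T @ ratio) / np.maximum(Hs.sum(axis=0), 1e-12)
    return f
```

With `n_subsets=1` this reduces to the ML-EM update, while with $B$ subsets each full iteration performs $B$ image updates.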


Figure 11. OSEM reconstructions of simulated data at increasing iterations. Row A) contains OSEM reconstructions when using only 1 subset, which is equivalent to the ML-EM algorithm, after iterations 1, 4, 8, and 16. Rows B) and C) highlight the effect of increasing the number of subsets. Row D) shows reconstructions that are post-filtered with a 7.5 mm 3-D Gaussian filter, representing a common clinical protocol for OSEM. Note the similarity of A)16, B)4, and C)1.

Figure 11 demonstrates the convergence properties of OSEM for varying iterations and numbers

of subsets. Even though OSEM resembles ML-EM, this computationally convenient method is not

guaranteed to converge to the ML solution. In practice, its convergence is similar to ML-EM and

approaches the ML solution roughly $B$ times faster than conventional ML-EM. Figure 12 plots the

Poisson likelihood values (eq. 3) of OSEM and ML-EM vs. iteration number illustrating the speed

improvements of increasing the number of subsets. This speed increase comes at the expense of

slightly more image variance at the same bias level when compared with ML-EM [30].

Consequently, some moderation must be exercised when choosing the total number of subsets.

!"#$$%& ()* +%)(,() - ./0 12(3# 4#5&)$67856%&)

0& (99#(7 %) :#);%) #6 ("<= >85"#(7 ?#*%5%)# @

)*

/*<

AB

Figure 12 also highlights that most of the changes in the estimate occur in the initial iterations, supporting the approach of stopping these algorithms prematurely to reduce image variance. In practice, OSEM is stopped early and post-smoothed to obtain visually appealing images, although we disagree with this approach since the bias/variance characteristics are object dependent.

[Figure 12 image: likelihood vs. iteration number (0-15) for ML-EM, OSEM with 4 subsets, and OSEM with 8 subsets.]

Figure 12. Poisson likelihood values vs. iteration number for OSEM and ML-EM reconstructions of simulated data. Increasing the number of subsets speeds the approach to the ML value by roughly a factor of the number of subsets.

It is important to stress to the reader that the ML-EM and OSEM methods are just algorithms for

finding the ML estimate, defining only the last two components discussed in the 'Basic Components'

section. The first 3 components (image, system, and measurements models) can vary dramatically

and still be termed an OSEM algorithm. For example, one OSEM method could use a very simple

system model providing reconstructions with much poorer quality than another OSEM method

using a more realistic system model. In short, not all OSEM methods are the same.

IV.D. Bayesian/Penalized Methods

Bayesian methods try to improve the quality of the reconstructed image by taking advantage of prior knowledge of the image (e.g., non-negative tracer concentration and only small variations between neighboring voxels). This information is known a priori and, using Bayes' rule, is often incorporated into a maximum a posteriori (MAP) objective function (the function containing all information available after incorporation of the a priori knowledge of the image). The details of MAP estimation are beyond the scope of this chapter.

On a basic level, the a priori information enters the iterative process as a 'prior' term that enforces conditions on the image estimate at each iteration, leading to guaranteed convergence with certain algorithms. This is equivalent to using a 'penalty' term at each iteration, and these methods are also termed 'penalized.' The prior/penalty term can promote desired properties in the image such as smoothness [19, 25], edges [9, 24], or even particular structures based on anatomical 'side' information from separate CT or MR studies [1, 12, 18, 23, 44]. One challenge in applying a prior/penalty term is choosing the parameter that varies the influence, or strength, the penalty will have on the image [21, 22]. While Bayesian methods are starting to receive clinical acceptance, they are often not used due to the added complexity in implementation and the challenge of choosing an appropriate parameter for governing the strength of the penalty term.
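To make the idea concrete, the sketch below adds a quadratic smoothness penalty to the EM update using the one-step-late approach of Green [24], written for a 1-D image with nearest-neighbor differences. The function name `osl_em`, the penalty weight `beta`, and the guard value are assumptions of the sketch; beta = 0 recovers the unpenalized ML-EM update.

```python
import numpy as np

def osl_em(H, p, beta=0.1, n_iters=20):
    """One-step-late EM with a quadratic smoothness penalty on a 1-D image.

    The penalty gradient sum_{k in N_j} (f_j - f_k), evaluated at the
    current estimate, is added to the sensitivity in the denominator;
    beta controls the strength of the smoothing (beta = 0 gives ML-EM).
    """
    f = np.ones(H.shape[1])
    sens = H.sum(axis=0)
    for _ in range(n_iters):
        ratio = p / np.maximum(H @ f, 1e-12)
        g = np.zeros_like(f)              # gradient of the roughness penalty
        g[1:] += f[1:] - f[:-1]
        g[:-1] += f[:-1] - f[1:]
        # the OSL denominator can go non-positive for large beta, so it is
        # clipped here; this is a known limitation of the OSL approach
        f = f * (H.T @ ratio) / np.maximum(sens + beta * g, 1e-12)
    return f
```

Larger beta pulls neighboring voxels toward each other, trading resolution for lower variance; choosing beta well is exactly the parameter-selection challenge noted above [21, 22].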

IV.E. Three Dimensional Iterative Reconstruction


In concept, iterative methods are easily extendable to fully three-dimensional PET

measurements and reconstruction. With fully 3-D PET, the data model is based on 3-D

measurements (figure 4) and the image model is now a 3-D volume instead of a 2-D image. The

system model relates voxel elements to fully 3-D PET projections. The same governing principles

and statistical relationships apply to 2-D and 3-D PET measurements allowing for the same

optimization algorithms. Importantly, iterative methods model the spatial variance in fully 3-D

measurements through the system model, so all of the extra efforts required for analytic

reconstruction of 3-D measurements are not necessary.

In practice, the major challenge for fully 3-D iterative reconstruction is the increased computational demands. The image grows from a 2-D slice of N² pixels to a volume of N³ voxels, while the number of projection measurements grows by several more orders of magnitude. Consequently, the number of voxel-projection combinations for which the system model must be computed grows correspondingly, far exceeding the number required for 2-D PET. These increases pose both storage and processing challenges, which are being overcome through improvements in computer processing.

Another option for iterative methods is to first rebin the 3-D data into 2-D transaxial slices, as

discussed in section III.B. Then, faster 2-D iterative reconstruction can be performed on each 2-D

slice. The combination of FORE with appropriately weighted two-dimensional iterative

reconstruction methods for each rebinned sinogram has successfully been employed for the

reconstruction of three-dimensional whole-body PET data in clinically feasible times [11].

V. IMAGE QUALITY AND NOISE-RESOLUTION TRADEOFFS

V.A. Definitions of Image Quality

Discussion of tomographic reconstruction techniques leads to the challenge of determining which technique provides the 'best' quality image. An objective comparison of image quality is often difficult and can only be performed in the context of a specific application, or task. As stated by Barrett, 'Image quality must be assessed on the basis of average performance of some task of interest by some observer or decision maker' [4, 5].

Two of the major imaging task categories are classification tasks and estimation tasks. Classification tasks categorize the image, or features in the image, into one or more classes; a common example is detection, a binary classification task. Other examples of classification tasks include image segmentation and diagnosis. Estimation tasks, on the other hand, seek numerical parameters from an imaging system, such as the quantitation of physiological parameters or the estimation of features for pattern recognition. Specific examples of estimation tasks include cardiac ejection fraction or the value of tracer flux in a tissue compartment model.

V.B. Quantitative Assessments

A common approach for assessing image quality is to compute one of a wide variety of figures of merit, including values such as signal-to-noise ratio, bias, contrast levels, etc. Caution should be taken when trying to distill an image volume (containing millions of elements) into just a few descriptive values, because single values cannot capture the behavior of the entire image. With this caveat in mind, it is still often useful when comparing images (and reconstruction methods) to analyze several different figures of merit.


Figure 13. Comparison of filtered backprojection reconstructions of simulated data with different noise levels and different smoothing parameters. The cutoff of the Hanning filter used in the filtering step is reduced, resulting in smoother images. The left column has the least bias with the most variance, while the right column has the most bias with the least variance.

Figure 14. Comparison of OSEM reconstructions of simulated data with different noise levels and different smoothing parameters. The post-reconstruction filter is varied, changing the noise/bias tradeoff. The left column has the least bias with the most variance, while the right column has the most bias with the least variance.


The ever-present tradeoff in image quality is bias vs. variance. Given that PET imaging results

in random data, the reconstructed image is random. In other words, if the same object is imaged

multiple times, the reconstructed images will not be the same due to statistical variations in the data.

With this in mind, we would prefer to have a reconstruction method that creates an image that is on

average the true image: an unbiased method. Similarly, we would prefer a reconstruction method

that creates an image whose values do not deviate from their mean values: a zero variance method.

Bias is a measure of the accuracy of our reconstruction on average, while variance is a measure

of the precision of the estimate (how much the estimate varies around its average value). As

mentioned previously, all methods have a way to decrease the variance level at the expense of

increased bias. FBP reduces variance by changing the cutoff of the apodizing filter used in the

filtering prior to backprojection (figure 13). In OSEM, the algorithm is either stopped prematurely

or a post-reconstruction filter is applied (figure 14). As illustrated in figure 15, given that all

methods are able to vary this tradeoff, in order to declare that method A outperforms method B in

terms of bias, method A must have less bias at the same variance level as method B (or less

variance at same bias level).

[Figure 15 image: two hypothetical bias vs. variance plots, one with curves A and B, the other with curves C and D.]

Figure 15. Hypothetical bias versus variance plots for reconstruction methods A, B, C, and D. These curves are formed from varying the smoothing parameter in each method. It is appropriate to state that method A performs better than method B in terms of bias. Methods C and D alternately perform better than each other at different variance levels.

Signal-to-noise ratio, and the similar contrast-to-noise ratio, are general figures of merit for evaluating the bias-variance tradeoff in an image; they quantify some description of the strength of the signal in relation to the quantity of noise. Strictly speaking, a complete description of bias and variance cannot be ascertained from a single reconstructed image, because these terms are based on the expectation of the image estimate (the average reconstructed image over multiple realizations [48]). Often, for expediency, the signal-to-noise ratio is computed from a single reconstruction [7, 17]. When computed from a single reconstruction, these terms provide insight into the bias vs. variance tradeoff, but should not be interpreted as a complete description of the variance behavior of the method.
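The ensemble definitions just given can be written down directly. The sketch below estimates voxelwise bias and variance from a stack of reconstructions of the same object under independent noise realizations; the function name and array shapes are assumptions of the sketch.

```python
import numpy as np

def bias_and_variance(recons, truth):
    """Voxelwise bias and variance from an ensemble of reconstructions.

    recons : (n_realizations, n_voxels) reconstructions of the same object
             from independent noise realizations.
    truth  : (n_voxels,) true image.
    """
    mean_img = recons.mean(axis=0)          # expectation over realizations
    bias = mean_img - truth                 # accuracy of the average image
    variance = recons.var(axis=0, ddof=1)   # precision around the mean
    return bias, variance
```

A single reconstruction corresponds to one row of `recons`, which is why bias and variance cannot be separated from it alone.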

Many common figures of merit are used to describe the spatial resolution of reconstruction methods. Resolution is loosely defined as the level of reproduction of spatial detail in the imaging system. This is often expressed through the point spread function (PSF): a point source is placed in the system and reconstructed into an image, a 'spread out' version of the point. In PET, the PSF is frequently characterized by the transaxial and axial 'Full Width at Half Maximum' (FWHM), which is the width at half of the maximum value of a Gaussian-shaped function fitted to the PSF (in the transaxial or axial direction). This term captures some of the behavior of the resolution of the method, but does not describe the lower values of the PSF well, considering that a Gaussian function has very long tails. Other descriptive resolution values include the 'Full Width at Tenth Maximum' (FWTM, computed similarly to the FWHM) and properties of the modulation transfer function (the Fourier transform of the PSF). Considering that the smoothing required to reduce the variance in images degrades resolution, the bias-variance tradeoff also translates to an inherent resolution-variance tradeoff. Therefore, it is not adequate to simply state that method A yields better resolution than method B; it must be shown that, at an equal variance level, method A has better resolution than method B.
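As an illustration of the FWHM figure of merit, the sketch below characterizes a sampled 1-D PSF profile by matching a Gaussian to its moments (a simple stand-in for the least-squares fit a real analysis might use) and reports FWHM = 2·sqrt(2 ln 2)·sigma ≈ 2.355·sigma. The function name and the example profile are assumptions of the sketch.

```python
import numpy as np

def fwhm_from_profile(x, psf):
    """Match a Gaussian to a 1-D PSF profile by its moments and return
    the full width at half maximum, 2 * sqrt(2 * ln 2) * sigma."""
    w = psf / psf.sum()                         # normalize to a density
    mu = np.sum(w * x)                          # first moment: center
    sigma = np.sqrt(np.sum(w * (x - mu) ** 2))  # second moment: spread
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

# example: a sampled Gaussian PSF with sigma = 2 mm -> FWHM of about 4.71 mm
x = np.linspace(-20, 20, 401)
psf = np.exp(-x**2 / (2 * 2.0**2))
print(round(fwhm_from_profile(x, psf), 2))      # prints 4.71
```

Because the moment match sees the whole profile, heavy PSF tails inflate sigma, which is one reason the FWTM is reported alongside the FWHM.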

Perhaps the most common task in nuclear medicine applications is the detection of tumors. Since a physician assesses images for this classification task, the best method for evaluating reconstruction methods for detection is a human-observer study. Unfortunately, thorough human-observer studies are time-consuming and often impractical, because they require the evaluation of numerous reconstructed images and, ideally, numerous trained observers. As a result, initial testing of reconstruction methods for detection is often performed with computer algorithms, called numerical observers, designed to mimic the performance of a human observer. A

variety of numerical observers have been proposed such as the channelized Hotelling observer [8].

Regardless of whether a human or numerical observer is employed, the results of the observer study

are often presented in the form of a receiver operating characteristic (ROC) analysis. ROC curves

plot the trade-off between true-positive and false-positive classification. The overall detection

performance of a method is often summarized with the area under the ROC curve, with the larger

area signifying better detection performance.
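The area under the ROC curve summarized above has a convenient equivalent form, which the sketch below uses: the AUC equals the probability that a randomly chosen signal-present case is rated higher than a randomly chosen signal-absent case, with ties counted as one half. The function name and the rating arrays are assumptions of the sketch.

```python
import numpy as np

def roc_auc(scores_signal, scores_noise):
    """Area under the ROC curve from an observer's ratings.

    Computed as the fraction of (signal, noise) pairs in which the
    signal-present case is rated higher, counting ties as 1/2.
    """
    s = np.asarray(scores_signal, dtype=float)[:, None]
    n = np.asarray(scores_noise, dtype=float)[None, :]
    return np.mean((s > n) + 0.5 * (s == n))

# perfectly separated ratings give the maximum area of 1.0
print(roc_auc([3, 4, 5], [0, 1, 2]))    # prints 1.0
```

An AUC of 0.5 corresponds to guessing, and larger areas signify better detection performance, matching the summary measure described above.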

VI. SUMMARY

The inverse problem of finding a cross-sectional image from tomographic PET measurements

can be solved with a variety of methods. When a deterministic view of the PET measurements is

adopted, analytic methods provide a fast, direct solution. When a more accurate model is used,

iterative methods provide a potentially more accurate image at the expense of increased

computation time. This chapter presented an introduction to the accepted standard analytic method

of filtered backprojection and the commonly used iterative method, OSEM. These methods have

gained acceptance in nuclear medicine partly due to resulting image quality and also due to other

factors such as their speed and relative simplicity both in terms of implementation and user

understanding. More sophisticated methods will receive clinical acceptance as processing power

increases and computation speed becomes less of a barrier.

Along with improving the reconstruction methods for solving the basic PET reconstruction

problem posed here, there are many novel challenges facing this field. New advances in PET

instrumentation, such as time-of-flight acquisition [33, 36], require specialized reconstruction

algorithms to reach their potential, promising significant improvements in signal to noise ratio.

Likewise, functional imaging methods are inherently dynamic at some level; reconstruction methods are being developed that cater to forming time-varying images of tracer distribution [35, 40]. As these and many other reconstruction challenges are addressed, the efficacy and clinical

importance of PET imaging will continue to increase.

VII. ACKNOWLEDGMENTS

This work was supported by the U.S. National Institutes of Health grant R01 CA74135.


VIII. REFERENCES

[1] A. Alessio, P. Kinahan, and T. Lewellen, "Improved Quantitation for PET/CT Image

Reconstruction with System Modeling and Anatomical Priors," SPIE Med Imaging, San

Diego, 2005.

[2] A. Alessio, P. Kinahan, and T. Lewellen, "Modeling and Incorporation of System Response

Functions in 3D Whole Body PET," submitted to IEEE Trans Med Imaging, 2005.

[3] O. Axelsson, Iterative solution methods: Cambridge University Press, 1994.

[4] H. H. Barrett, "Objective assessment of image quality: effects of quantum noise and object

variability," J Opt Soc Am A, vol. 7, pp. 1266-78, 1990.

[5] H. H. Barrett and K. J. Myers, Foundations of Image Science. Hoboken, NJ: Wiley, 2004.

[6] H. H. Barrett and W. Swindell, Radiological Imaging. New York: Academic Press, 1981.

[7] H. H. Barrett, D. W. Wilson, and B. M. W. Tsui, "Noise properties of the EM algorithm. I.

Theory," Physics in Medicine and Biology, vol. 39, pp. 833, 1994.

[8] H. H. Barrett, J. Yao, J. P. Rolland, and K. J. Myers, "Model observers for assessment of

image quality," Proc. Nat. Acad. Sci. USA, vol. 90, pp. 9758-9765, 1993.

[9] C. Bouman and K. Sauer, "A generalized Gaussian image model for edge-preserving MAP

estimation," Image Processing, IEEE Transactions on, vol. 2, pp. 296-310, 1993.

[10] C. A. Bouman and K. Sauer, "A Unified Approach to Statistical Tomography Using

Coordinate Descent Optimization," IEEE Trans. on Image Processing, vol. 5, pp. 480-492,

1996.

[11] C. Comtat, P. Kinahan, M. Defrise, C. Michel, and D. Townsend, "Fast Reconstruction of

3D PET Data with Accurate Statistical Modeling," IEEE Trans Nuclear Science, vol. 45, pp.

1083-1089, 1998.

[12] C. Comtat, P. E. Kinahan, J. A. Fessler, T. Beyer, D. W. Townsend, M. Defrise, and C.

Michel, "Clinically feasible reconstruction of 3D whole-body PET/CT data using blurred

anatomical labels," Phys Med Biol, vol. 47, pp. 1-20, 2002.

[13] M. E. Daube-Witherspoon and G. Muehllehner, "Treatment of axial data in 3D PET," J Nucl

Med, vol. 28, pp. 1717-1724, 1987.

[14] M. Defrise and P. E. Kinahan, "Data acquisition and image reconstruction for 3D PET," in

The Theory and Practice of 3D PET, vol. 32, Developments in Nuclear Medicine, D. W.

Townsend and B. Bendriem, Eds. Dordrecht: Kluwer Academic Publishers, 1998, pp. 11-54.

[15] M. Defrise, P. E. Kinahan, D. W. Townsend, C. Michel, M. Sibomana, and D. F. Newport,

"Exact and Approximate Rebinning Algorithms for 3D PET Data," IEEE Trans Med

Imaging, vol. 16, pp. 145-158, 1997.

[16] A. Dempster, N. Laird, and D. Rubin, "Maximum likelihood from incomplete data via the

EM algorithm.," Journal of the Royal Statistical Society, vol. 39, pp. 1-38, 1977.

[17] J. A. Fessler, "Mean and variance of implicitly defined biased estimators (such as penalized

maximum likelihood): applications to tomography," Image Processing, IEEE Transactions

on, vol. 5, pp. 493-506, 1996.

[18] J. A. Fessler, N. H. Clinthorne, and W. L. Rogers, "Regularized emission image

reconstruction using imperfect side information," IEEE Trans Nuclear Science, vol. 39, pp.

1464-71, 1992.

[19] J. A. Fessler and A. O. Hero, "Penalized maximum-likelihood image reconstruction using space-alternating generalized EM algorithms," IEEE Trans. on Image Processing, vol. 4, pp. 1417-1429, 1995.


[20] T. Frese, N. C. Rouze, C. A. Bouman, K. Sauer, and G. D. Hutchins, "Quantitative

comparison of FBP, EM, and Bayesian reconstruction algorithms for the IndyPET scanner,"

IEEE Trans Med Imaging, vol. 22, pp. 258-76, 2003.

[21] N. P. Galatsanos and A. K. Katsaggelos, "Methods for choosing the regularization parameter

and estimating the noise variance in image restoration and their relation," Image Processing,

IEEE Transactions on, vol. 1, pp. 322-336, 1992.

[22] B. Gidas, "Parameter estimation for Gibbs distributions from fully observed data," in

Markov Random Fields: Theory and Applications, R. Chellappa and A. Jain, Eds. Boston:

Academic Press, 1993, pp. 471-499.

[23] G. Gindi, M. Lee, A. Rangarajan, and I. G. Zubal, "Bayesian reconstruction of functional

images using anatomical information as priors," IEEE Trans Med Imaging, vol. 12, pp. 670-

680, 1993.

[24] P. J. Green, "Bayesian reconstruction from emission tomography data using a modified EM algorithm," IEEE Trans Med Imaging, vol. 9, pp. 84-93, March 1990.

[25] T. Hebert and R. Leahy, "A generalized EM algorithm for 3-D Bayesian reconstruction from Poisson data using Gibbs priors," IEEE Trans Med Imaging, vol. 8, pp. 194-202, 1989.

[26] G. T. Herman, Image reconstruction from projections. New York: Academic Press, 1980.

[27] H. M. Hudson and R. S. Larkin, "Accelerated image reconstruction using ordered subsets of

projection data," IEEE Trans Med Imaging, vol. 13, pp. 601-609, 1994.

[28] A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging. New York:

IEEE Press, 1988.

[29] P. E. Kinahan and J. G. Rogers, "Analytic 3D image reconstruction using all detected

events," IEEE Trans Nuclear Science, vol. 36, pp. 964-968, 1989.

[30] D. Lalush and B. Tsui, "Performance of ordered-subset reconstruction algorithms under

conditions of extreme attenuation and truncation in myocardial SPECT," J Nucl Med, vol.

41, pp. 737-744, 2000.

[31] R. Leahy and J. Qi, "Statistical Approaches in Quantitative PET," Statistics and Computing,

vol. 10, 2000.

[32] K. Lee, P. E. Kinahan, J. A. Fessler, R. S. Miyaoka, M. Janes, and T. K. Lewellen,

"Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner,"

Physics in Medicine and Biology, vol. 49, pp. 4563, 2004.

[33] T. K. Lewellen, "Time-of-flight PET," Semin Nucl Med, vol. 28, pp. 268-75, 1998.

[34] S. Matej and R. M. Lewitt, "Practical considerations for 3-D image reconstruction using

spherically symmetric volume elements," Medical Imaging, IEEE Transactions on, vol. 15,

pp. 68-78, 1996.

[35] M. Milosevic, M. N. Wernick, and E. J. Infusino, "Fast Spatio-Temporal Image

Reconstruction for Dynamic PET," IEEE Trans Med Imaging, vol. 18, pp. 185-195, 1999.

[36] W. W. Moses, "Time of Flight in PET Revisited," IEEE Trans Nuclear Science, vol. 50, pp.

1325-1330, 2003.

[37] E. Mumcuoglu, R. Leahy, S. R. Cherry, and Z. Zhou, "Fast Gradient-Based Methods for

Bayesian Reconstruction of Transmission and Emission PET Images," IEEE Trans Med

Imaging, vol. 13, pp. 687-701, 1994.

[38] F. Natterer, The Mathematics of Computerized Tomography. New York: Wiley, 1986.

[39] F. Natterer and F. Wübbeling, Mathematical Methods in Image Reconstruction.

Philadelphia: SIAM, 2001.


[40] T. E. Nichols, J. Qi, and R. M. Leahy, "Continuous Time Dynamic PET Imaging Using List

Mode Data," Proc. of 16th Conf. on Information Processing in Med. Imaging, Visegrad,

Hungary, 1999.

[41] T. Obi, S. Matej, R. M. Lewitt, and G. T. Herman, "2.5D Simultaneous Multislice

Reconstruction by Series Expansion Methods from FORE PET data," IEEE Trans Med

Imaging, vol. 19, pp. 474-484, May 2000.

[42] J. M. Ollinger, "Model-Based Scatter Correction for Fully 3D PET," Phys Med Biol, vol. 41,

pp. 153-176, 1996.

[43] J. M. Ollinger and J. A. Fessler, "Positron-emission tomography," Signal Processing

Magazine, IEEE, vol. 14, pp. 43-55, 1997.

[44] X. Ouyang, W. H. Wong, V. E. Johnson, X. Hu, and C. Chen, "Incorporation of Correlated

Structural Images in PET Image Reconstruction," IEEE Trans Med Imaging, vol. 13, pp.

627-640, 1994.

[45] J. Qi, R. M. Leahy, S. R. Cherry, A. Chatziioannou, and T. H. Farquhar, "High-resolution

3D Bayesian image reconstruction using the microPET small-animal scanner," Physics in

Medicine and Biology, vol. 43, pp. 1001, 1998.

[46] L. Shepp and Y. Vardi, "Maximum Likelihood Reconstruction for Emission Tomography,"

IEEE Trans Med Imaging, vol. MI-1, pp. 113-122, 1982.

[47] M. Wernick and J. Aarsvold, Emission Tomography: Fundamentals of PET and SPECT. San

Diego: Elsevier, 2004.

[48] D. W. Wilson, B. M. W. Tsui, and H. H. Barrett, "Noise properties of the EM algorithm. II.

Monte Carlo simulations," Physics in Medicine and Biology, vol. 39, pp. 847, 1994.

[49] S. C. Wollenweber, "Parameterization of a Model-Based 3-D PET Scatter Correction," IEEE

Trans Nuclear Science, vol. 49, pp. 722-727, 2002.

[50] M. Yavuz and J. A. Fessler, "Statistical image reconstruction methods for randoms-precorrected PET scans," Medical Image Analysis, vol. 2, pp. 369-378, December 1998.
