Medical Imaging Modalities Module 2nd Edition
Course Material
References ........ 9
2. ULTRASONIC ........ 10
   Objectives ........ 10
   References ........ 38
3. X-RAY IMAGING ........ 40
   Objectives ........ 40
   3.7. Causes of x-ray tube failures and steps to extend tube life ........ 52
   3.13. Mammography ........ 72
   References ........ 93
4. CT SCANNING ........ 95
   Objectives ........ 95
CHAPTER FIVE ........ 123
   5.9. Coils; Localization of the MRI signal: The Gradient ........ 145
6. NUCLEAR MEDICINE ........ 166
   Objectives ........ 166
   6.4.3. The Anger position network and pulse height analyzer ........ 177
Objectives
1.1. Introduction
The human body is an incredibly complex system. Acquiring data about its static and dynamic
properties results in massive amounts of information. One of the major challenges to researchers
and clinicians is the question of how to acquire, process, and display vast quantities of information
about the body so that the information can be assimilated, interpreted, and utilized to yield more
useful diagnostic methods and therapeutic procedures. In many cases, the presentation of
information as images is the most efficient approach to addressing this challenge. As humans we
understand this efficiency; from our earliest years we rely more heavily on sight than on any other
perceptual skill in relating to the world around us.
Physicians increasingly rely as well on images to understand the human body and intervene in the
processes of human illness and injury. The use of images to manage and interpret information
about biological and medical processes is certain to continue its expansion, not only in clinical
medicine but also in the biomedical research enterprise that supports it.
Images of a complex object such as the human body reveal characteristics of the object such as its transmissivity, opacity, emissivity, reflectivity, conductivity, and magnetizability; changes in these characteristics with time, or with the application of energy, are what make imaging possible.
Images of the human body are derived from the interaction of energy with human tissue. The energy can take the form of ionizing radiation, magnetic or electric fields, or acoustic energy, and the interaction occurs at the molecular or atomic level. Medical imaging is therefore an interdisciplinary subject: it requires knowledge of physics, because it deals with matter, radiation, and energy; of mathematics, to apply linear algebra, numerical methods, and statistics; of life sciences such as biology, physiology, and medicine; of engineering, for optimization and implementation purposes; and of computer science, for image processing and image reconstruction.
Based on the energy it uses, medical imaging can be classified into active imaging, in which energy is delivered externally, as in x-ray, magnetic resonance, nuclear medicine, and ultrasound imaging systems; and passive imaging, which is based on signals generated within the body, as in EEG, ECG, and EMG.
Based on the radiation source, we can classify medical imaging systems as external, as with x-rays, ultrasound, and radiofrequency (nuclear magnetic resonance), or internal, as with the radioactive tracers used in positron emission tomography (PET) and single photon emission computed tomography (SPECT).
Figure 1.1 shows the common waves of the electromagnetic spectrum and the respective medical imaging modalities.
Within a month of their discovery, x-rays were being explored as medical tools in several countries, including Germany, England, France, and the United States. In 1901, Röntgen was awarded the first Nobel Prize in Physics.
Over the first half of the twentieth century, x-ray imaging advanced with the help of improvements
such as intensifying screens, hot-cathode x-ray tubes, rotating anodes, image intensifiers, and
contrast agents.
3 | Medical Imaging Systems, JiT, School of Biomedical Engineering.
X-ray CT: The second revolution
The second revolution in imaging began in 1972 with Hounsfield’s announcement of a practical
computer-assisted X-ray tomographic scanner, the CAT scanner, which is now called X-ray CT or
simply CT. This was actually the first radical change in the medical use of X-rays since Roentgen’s
discovery. It had to wait for the widespread use of cheap computing facilities to become practical.
CT uses the mathematical technique of filtered back-projection, whose foundations were first described in 1917 by the mathematician Johann Radon. Its practical implementation in clinical medicine could not take place without the digital computer.
MRI
Research in nuclear magnetic resonance started shortly after the Second World War, when two groups in America, led by Bloch and by Purcell, discovered the phenomenon independently in 1946; the two later shared a Nobel Prize in Physics for this work. The original applications of the technique were the study of the magnetic properties of atomic nuclei themselves. The realization that the time taken for a collection of nuclear magnetic moments in a liquid or a solid to attain thermal equilibrium depends in some detail on the chemistry and physical properties of the surrounding atoms led to the development of nuclear magnetic resonance spectroscopy and, eventually, to magnetic resonance imaging.
MRI is a wholly tomographic technique, just like X-ray CT, but it has no associated ionizing radiation hazard. It provides a wider range of contrast mechanisms than X-rays and, in many applications, much better spatial resolution. Despite its extremely rapid development, MRI is still a relative newcomer to medicine, with many important new developments still to come, both in applications and in technique.
All of nuclear medicine, including diagnostic gamma imaging, became a practical possibility in
1942 with Fermi’s first successful operation of a uranium fission chain reaction. The development
of large area photon detectors was crucial to the practical use of gamma imaging. Anger announced
the use of a 2-inch NaI/scintillator detector, the first Anger camera, in 1952. Electronic photon
detectors, both for medical gamma imaging and X-rays, are adaptations of detectors developed, in
the decades after the war, at high energy and nuclear physics establishments such as Harwell,
CERN and Fermilab.
Diagnostic Ultrasound
Ultrasound is the term that describes sound waves of frequencies exceeding the range of human hearing and their propagation in a medium. Medical diagnostic ultrasound is a modality that uses ultrasound energy and the acoustic properties of the body to produce an image from stationary and moving tissues.
Review Questions
1. What are the advantages of tomographic imaging techniques over projection radiography?
2. Why is magnetic resonance imaging considered safe compared with x-ray and nuclear imaging?
3. How is it possible to obtain an image of the human body? What conditions are required?
4. What characteristics of the human body or object are used for imaging purpose?
5. Explain the motives behind medical imaging. Why is it important to visualize internal
structures and functions of the human body? What are the main goals of medical
imaging?
6. Describe the energy sources used in medical imaging. What are the advantages and
limitations of different types of energy, such as electromagnetic radiation, ultrasound, and
nuclear radiation?
1. Hendee W. R., Ritenour E. R., Medical Imaging Physics, fourth edition, John Wiley & Sons, Inc., ISBN 0-471-38226-4, New York, 2002.
2. Suetens P., Fundamentals of Medical Imaging, second edition, Cambridge University Press, ISBN 978-0-521-51915-1, New York, 2009.
3. Guy C., ffytche D., An Introduction to the Principles of Medical Imaging, revised edition, Imperial College Press, ISBN 1-86094-502-3, London, 2005.
4. Bushberg J. T., Seibert J. A., Leidholdt E. M., Boone J. M., The Essential Physics of Medical Imaging, Lippincott Williams & Wilkins, a Wolters Kluwer company, USA, 2002.
Objectives
2.1. Introduction
The word “ultrasonic” relates to the wave frequencies. Sound in general is divided into three
ranges: subsonic, sonic and ultrasonic. A sound wave is said to be sonic if its frequency is within
the audible spectrum of the human ear, which ranges from 20 to 20 000 Hz (20 kHz). The
frequency of subsonic waves is less than 20 Hz and that of ultrasonic waves is higher than 20 kHz.
Frequencies used in (medical) ultrasound imaging are about 100–1000 times higher than those
detectable by humans.
Ultrasound is the term that describes sound waves of frequencies exceeding the range of human hearing and their propagation in a medium. Medical diagnostic ultrasound is a modality that uses ultrasound energy and the acoustic properties of the body to produce an image from stationary and moving tissues. It is a totally noninvasive procedure. Acoustic waves are easily transmitted in water, but they are reflected at an interface according to the change in acoustic impedance. Apart from bone and the air-filled lungs, the tissues of the body consist largely of water and therefore transmit acoustic waves easily. High-frequency sound waves are sent into the body, and the reflected sound waves (returning echoes) are recorded and processed by the computer to reconstruct real-time visual images. The returning echoes reflect the size and shape of the organ and also indicate whether the organ is solid, fluid-filled, or something in between. Unlike x-rays, ultrasound involves no ionizing radiation.
The frequency and velocity of travel of sound waves depend on the bulk elastic properties of the
material and its density. All substances have finite bulk compressibility and so longitudinal,
compression waves will propagate through solids, liquids and gases but transverse, shear waves
can only propagate in solids. All soft tissue and body fluids behave like liquids (really gels of
varying viscosity) and ultrasound is propagated in these as a longitudinal wave. Bone, being a
solid, can support both longitudinal and transverse waves. In uniform or homogeneous
materials such as seawater or air it is possible to write down a very simple wave equation that
describes the propagation of sound through matter in terms of the bulk properties, density and
elasticity, of the material. More precisely, the velocity of propagation for compression waves is determined by the density ρ and the bulk modulus K, so that:

c = √(K/ρ)

The bulk modulus of a material is the reciprocal of its compressibility. The bulk modulus of air is about 2 × 10⁴ times smaller than that of water, and its density is a factor of about 1000 smaller.
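As a rough numerical check of the relation c = √(K/ρ), the speeds of sound in water and air can be computed; the bulk modulus and density figures below are approximate textbook values, assumed here only for illustration:

```python
import math

def speed_of_sound(K, rho):
    """Longitudinal (compression) wave speed c = sqrt(K / rho).

    K   : bulk modulus in Pa
    rho : density in kg/m^3
    """
    return math.sqrt(K / rho)

# Approximate values, assumed for illustration:
c_water = speed_of_sound(2.2e9, 1000.0)  # ~1480 m/s
c_air   = speed_of_sound(1.42e5, 1.2)    # ~340 m/s

print(f"water: {c_water:.0f} m/s, air: {c_air:.0f} m/s")
```

The much smaller bulk modulus of air outweighs its smaller density, which is why sound travels roughly four times slower in air than in water.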
Humans can hear sound in the frequency range of 20 Hz to 20,000 Hz. As the name suggests, ultrasound has a frequency greater than 20,000 Hz; diagnostic ultrasound uses the range of about 1 to 10 MHz. The general relationship between the velocity, frequency, and wavelength of any wave is:

c = λf
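Rearranged as λ = c/f, this relation gives the wavelengths used in diagnostic imaging; the sketch below assumes the soft-tissue average speed of sound of 1,540 m/s used later in this chapter:

```python
def wavelength_mm(c_m_per_s, f_hz):
    """Wavelength from c = lambda * f, returned in millimetres."""
    return c_m_per_s / f_hz * 1000.0

# Diagnostic frequencies in soft tissue (c ~ 1540 m/s):
for f_mhz in (1, 5, 10):
    lam = wavelength_mm(1540.0, f_mhz * 1e6)
    print(f"{f_mhz} MHz -> {lam:.3f} mm")
```

At 1 MHz the wavelength is about 1.5 mm, shrinking to about 0.15 mm at 10 MHz, which is why higher frequencies resolve finer detail.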
The intensity of a wave is defined as the energy which flows per unit time across a unit area perpendicular to the direction of wave propagation. For an infinite one-dimensional plane wave, the acoustic impedance of the medium is

Z = ρc

where ρ is the density in kg/m³ and c is the speed of sound in m/s. The SI unit of acoustic impedance is kg/(m²·s), often expressed in rayls, where 1 rayl = 1 kg/(m²·s).
The acoustic impedance can be likened to the stiffness and flexibility of a compressible medium
which is related to the energy transfer from one medium to another. Table 2.1 below lists the
acoustic impedance of tissues and materials commonly encountered in medical ultrasound.
Consider a flat boundary whose dimensions are much greater than the ultrasound wavelength (≈1 mm at a 1.5 MHz central frequency). In the general situation shown in Figure 2.2 below, the incident ultrasound wave strikes the boundary at an angle θi. The governing relations connect the angle of incidence (θi) with the angle of reflection (θr), the angle of incidence (θi) with the angle of transmission (θt), the reflected (pr) and transmitted (pt) pressures, and the reflected (Ir) and transmitted (It) intensities.
The strongest reflected signal is received if the angle between the incident wave and the boundary is 90° (normal incidence). In this case, the relations reduce to:

pr/pi = (Z2 − Z1)/(Z2 + Z1) and Ir/Ii = [(Z2 − Z1)/(Z2 + Z1)]²
The backscattered signal detected by the transducer is maximized if the value of either Z1 or Z2 is
zero. However, in this case the ultrasound beam will not reach structures that lie deeper in the
body. Such a case occurs, for example, in GI tract imaging if the ultrasound beam encounters
pockets of air. A very strong signal is received from the front of the air pocket, but there is no
information of clinical relevance from any structures behind the air pocket. At the other extreme,
if Z1 and Z2 are equal in value, then there is no backscattered signal at all and the tissue boundary
is essentially undetectable.
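The normal-incidence reflection behavior described above can be sketched numerically. The impedance values below are approximate illustrative figures in MRayl (10⁶ kg/(m²·s)), assumed here rather than taken from Table 2.1:

```python
def intensity_reflection(z1, z2):
    """Fraction of incident intensity reflected at normal incidence:
    R = ((Z2 - Z1) / (Z2 + Z1)) ** 2
    """
    return ((z2 - z1) / (z2 + z1)) ** 2

# Approximate impedances in MRayl (illustrative assumptions):
Z = {"air": 0.0004, "fat": 1.34, "muscle": 1.71, "bone": 7.8}

print(f"fat/muscle: {intensity_reflection(Z['fat'], Z['muscle']):.4f}")
print(f"muscle/air: {intensity_reflection(Z['muscle'], Z['air']):.4f}")
```

A soft-tissue boundary reflects only about 1% of the incident intensity, while a tissue-to-air boundary reflects essentially all of it, which is exactly the air-pocket problem described above.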
The interface with the greatest difference in acoustic impedance provides the greatest reflection, since the value of Z1 − Z2 is then large. This is why bone is difficult to image with ultrasound. Similarly, the acoustic impedances of air and soft tissue are about 42.8 g/(cm²·s) and 16 × 10⁴ g/(cm²·s) respectively, so the large value of Z1 − Z2 at an air-to-tissue interface causes the ultrasound energy to be reflected almost completely, with essentially no penetration. A special jelly is therefore used to eliminate the air gap at the interface of skin and transducer and so minimize the reflected energy, allowing the ultrasound to penetrate the body for imaging of the organs. Hence the ability of ultrasound waves to travel through any medium is restricted by the properties of that medium.
These properties include the density and elastic properties which make up the acoustic impedance
specific to that medium. The transmission is also limited by the transducer frequency being used.
Higher frequencies have shorter wavelengths and they can penetrate less than lower frequencies.
If the ultrasound beam strikes structures which are approximately the same size as, or smaller than,
the ultrasound wavelength then the wave is scattered in all directions. Most organs have a
characteristic structure that gives rise to a defined scatter signature and provides much of the
diagnostic information contained in the ultrasound image. Differences in scatter amplitude that
occur from one region to another cause corresponding brightness changes on the ultrasound
display.
In general, the echo signal amplitude from the insonated tissues depends on the number of
scatterers per unit volume, the acoustic impedance differences at the scatterer interfaces, the sizes
of the scatterers, and the ultrasonic frequency. The terms hyperechoic (higher scatter amplitude)
and hypoechoic (lower scatter amplitude) describe the scatter characteristics relative to the average
background signal. Hyperechoic areas usually have greater numbers of scatterers, larger acoustic impedance differences, and larger scatterers. Acoustic scattering from nonspecular reflectors increases with frequency, while specular reflection is relatively independent of frequency; thus, it is often possible to enhance the scattered echo signals over the specular echo signals by using higher ultrasound frequencies.
As an ultrasound beam passes through the body, its energy is attenuated by a number of
mechanisms including reflection, scatter and absorption. The net effect is that signals received
from tissue boundaries deep in the body are much weaker than those from boundaries which lie
close to the surface.
The total reduction in intensity of an ultrasound beam, on passing through a thickness of material,
is determined by the amount of energy absorbed and the amount scattered away from the direction
of the beam. Each type of material has an empirical attenuation coefficient as illustrated in Table
2.2 below.
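A minimal sketch of the resulting signal loss, assuming the commonly quoted soft-tissue rule of thumb of about 0.5 dB/(cm·MHz); the coefficient is empirical and tissue dependent, as Table 2.2 indicates:

```python
def attenuation_db(alpha_db_cm_mhz, f_mhz, depth_cm, round_trip=True):
    """Total attenuation in dB for a beam reaching depth_cm.

    alpha_db_cm_mhz is an assumed empirical coefficient in dB/(cm*MHz);
    round_trip doubles the path, since the echo must return to the probe.
    """
    path_cm = 2 * depth_cm if round_trip else depth_cm
    return alpha_db_cm_mhz * f_mhz * path_cm

# 5 MHz echo from a boundary 10 cm deep, soft-tissue rule of thumb:
print(attenuation_db(0.5, 5.0, 10.0), "dB")
```

A 50 dB round-trip loss means the deep echo carries only 10⁻⁵ of the intensity of a surface echo, which is why time-gain compensation is needed.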
Time-gain compensation is used to reduce the dynamic range of the signals, and after appropriate
further amplification and signal processing, the images are displayed in real time on the computer
monitor.
2.4.1. Transducers
Ultrasound is produced and detected with a transducer, composed of one or more ceramic elements
with electromechanical properties. The ceramic element converts electrical energy into mechanical
energy to produce ultrasound and mechanical energy into electrical energy for ultrasound
detection. Major components (Figure 2.5) include the piezoelectric material, matching layer,
backing block, acoustic absorber, insulating cover, sensor electrodes, and transducer housing.
Piezoelectric material
A piezoelectric material (often a crystal or ceramic) is the functional component of the transducer.
It converts electrical energy into mechanical (sound) energy by physical deformation of the crystal
structure. Conversely, mechanical pressure applied to its surface creates electrical energy.
Piezoelectric materials are characterized by a well-defined molecular arrangement of electrical
dipoles.
Damping block
The damping block, layered on the back of the piezoelectric element, absorbs the backward
directed ultrasound energy and attenuates stray ultrasound signals from the housing. This
component also dampens the transducer vibration to create an ultrasound pulse with a short spatial
pulse length, which is necessary to preserve detail along the beam axis (axial resolution).
Dampening of the vibration (also known as "ring-down") lessens the purity of the resonance
frequency and introduces a broadband frequency spectrum (Figure 2.7). With ring-down, an
increase in the bandwidth (range of frequencies) of the ultrasound pulse occurs by introducing
higher and lower frequencies above and below the center (resonance) frequency. The "Q factor" describes the bandwidth of the sound emanating from a transducer as:

Q = f0 / bandwidth

where f0 is the center frequency and the bandwidth is the width of the frequency distribution.
A "high Q" transducer has a narrow bandwidth (i.e., very little damping) and a corresponding long
spatial pulse length. A "low Q" transducer has a wide bandwidth and short spatial pulse length.
Imaging applications require a broad bandwidth transducer in order to achieve high spatial
resolution along the direction of beam travel. Blood velocity measurements by Doppler
instrumentation require a relatively narrow-band transducer response in order to preserve velocity
information encoded by changes in the echo frequency relative to the incident frequency.
Continuous-wave ultrasound transducers have a very high Q characteristic. An example of a "high
Q" and "low Q" ultrasound pulse illustrates the relationship to spatial pulse length. While the Q
factor is derived from the term quality factor, a transducer with a low Q does not imply poor quality
in the signal.
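The Q relationship can be illustrated with a short sketch; the center frequencies and bandwidths below are hypothetical values chosen only to contrast a broadband imaging transducer with a narrowband continuous-wave Doppler transducer:

```python
def q_factor(f0_hz, bandwidth_hz):
    """Q = f0 / bandwidth.

    Low Q  -> broad bandwidth, short spatial pulse (imaging).
    High Q -> narrow bandwidth, long spatial pulse (CW Doppler).
    """
    return f0_hz / bandwidth_hz

# Hypothetical 5 MHz transducers:
imaging_q = q_factor(5e6, 3.0e6)  # broadband, heavily damped
doppler_q = q_factor(5e6, 0.2e6)  # narrowband, lightly damped

print(f"imaging Q ~ {imaging_q:.1f}, Doppler Q ~ {doppler_q:.1f}")
```
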
Matching layer
The matching layer provides the interface between the transducer element and the tissue and
minimizes the acoustic impedance differences between the transducer and the patient. It consists
of layers of materials with acoustic impedances that are intermediate to those of soft tissue and the
transducer material. The thickness of each layer is equal to one-fourth the wavelength, determined
from the center operating frequency of the transducer and speed of sound in the matching layer.
For example, the wavelength of sound in a matching layer with a speed of sound of 2,000 m/sec for a 5-MHz ultrasound beam is 0.4 mm. The optimal matching layer thickness is therefore λ/4 = ¼ × 0.4 mm = 0.1 mm. In addition to the matching layers, acoustic coupling gel (with acoustic
impedance similar to soft tissue) is used between the transducer and the skin of the patient to
eliminate air pockets that could attenuate and reflect the ultrasound beam.
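The quarter-wave calculation in the worked example above can be expressed directly:

```python
def matching_layer_thickness_mm(c_layer_m_s, f_hz):
    """Quarter-wavelength matching layer thickness, t = (c / f) / 4, in mm."""
    wavelength_mm = c_layer_m_s / f_hz * 1000.0
    return wavelength_mm / 4.0

# Worked example from the text: c = 2,000 m/s, f = 5 MHz
t = matching_layer_thickness_mm(2000.0, 5e6)
print(f"{t:.2f} mm")  # lambda = 0.4 mm, so t = 0.1 mm
```
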
The strength of the received echoes is usually displayed as increased brightness on the screen
(hence the name for the basic ultrasonic imaging mode, B-mode, with B for brightness). A two-
dimensional data set is acquired as the transmitted beam is steered or its point of origin is moved
to different locations on the transducer face. The data set that is acquired in this manner will have
some set of orientations of the acoustic rays. The process of interpolating this data set to form a
TV raster image is usually referred to as scan conversion. With Doppler signal processing, mean
Doppler shifts at each position in the image can be determined from as few as 4 to 12 repeated
transmissions. The magnitudes of these mean frequencies can be displayed in color superimposed
on the B-mode image and can be used to show areas with significant blood flow.
Image formation using the pulse echo approach requires a number of hardware components: the
beam former, pulser, receiver, amplifier, scan converter/image memory, and display system.
Beam former
The beam former is responsible for generating the electronic delays for individual transducer
elements in an array to achieve transmit and receive focusing and, in phased arrays, beam steering.
Most modern, high-end ultrasound equipment incorporates a digital beam former and digital
electronics for both transmit and receive functions.
The digital beam former controls application-specific integrated circuits (ASICs) that provide transmit/receive switches, digital-to-analog and analog-to-digital converters, and pre-amplification and time-gain compensation circuitry for each of the transducer elements in the
array. Major advantages of digital acquisition and processing include the flexibility to introduce
new ultrasound capabilities by programmable software algorithms and to enhance control of the
acoustic beam.
Pulser
The pulser (also known as the transmitter) provides the electrical voltage for exciting the
piezoelectric transducer elements, and controls the output transmit power by adjustment of the
applied voltage. In digital beam-former systems, a digital-to-analog converter determines the amplitude of the voltage. An increase in transmit amplitude creates higher intensity sound and improves echo detection from weaker reflectors. A direct consequence is a higher signal-to-noise ratio in the images, but also higher power deposition in the patient. User controls of the output power are labeled "output," "power," "dB," or "transmit" by the manufacturer. In some systems, a low power setting for obstetric imaging is available to reduce power deposition to the fetus.
Transmit/receive switch
The transmit/receive switch, synchronized with the pulser, isolates the high voltage used for
pulsing (~150 V) from the sensitive amplification stages during receive mode, with induced
voltages ranging from ~1 V down to ~2 µV from the returning echoes. After the ring-down time, when vibration of the piezoelectric material has stopped, the transducer electronics are switched to sensing the small voltages caused by the returning echoes, over a period of up to about 1,000 µsec (1 msec).
In the pulse echo mode of transducer operation, the ultrasound beam is intermittently transmitted,
with a majority of the time occupied by listening for echoes. The ultrasound pulse is created with
a short voltage waveform provided by the pulser of the ultrasound system. The generated pulse is
typically two to three cycles long, dependent on the damping characteristics of the transducer
elements. With a speed of sound of 1,540 m/sec (0.154 cm/µsec), the time delay t between the transmission pulse and the detection of the echo is directly related to the depth D of the interface as D = ct/2 (the factor of 2 accounts for the round trip), corresponding to about 13 µsec of delay per centimeter of depth.
The number of times the transducer is pulsed per second is known as the pulse repetition frequency
(PRF). For imaging, the PRF typically ranges from 2,000 to 4,000 pulses per second (2 to 4 kHz).
The time between successive pulses is the pulse repetition period, the reciprocal of the PRF.
Higher ultrasound frequency operation has limited penetration depth, allowing high PRFs.
Conversely, lower frequency operation requires lower PRFs because echoes can return from
greater depths.
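The range equation D = ct/2 and the depth limit imposed by the PRF can be sketched as follows, assuming the 1,540 m/s soft-tissue speed of sound used above:

```python
C_TISSUE = 1540.0  # m/s, average soft-tissue speed of sound

def depth_cm(echo_time_us):
    """Depth of the reflecting interface: D = c * t / 2 (round trip)."""
    return C_TISSUE * (echo_time_us * 1e-6) / 2.0 * 100.0

def max_unambiguous_depth_cm(prf_hz):
    """All echoes must return before the next pulse: D_max = c / (2 * PRF)."""
    return C_TISSUE / (2.0 * prf_hz) * 100.0

print(f"130 us echo  -> {depth_cm(130.0):.1f} cm deep")       # ~13 us per cm
print(f"4 kHz PRF    -> {max_unambiguous_depth_cm(4000):.1f} cm max depth")
```

Raising the PRF shortens the listening window, which is why shallow, high-frequency imaging can use high PRFs while deep imaging cannot.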
In multielement array transducers, all preprocessing steps are performed in parallel. Each
transducer element produces a small voltage proportional to the pressure amplitude of the returning
echoes. An initial pre-amplification increases the detected voltages to useful signal levels. This is
combined with a fixed swept gain, to compensate for the exponential attenuation occurring with
distance traveled. Large variations in echo amplitude (voltage produced in the piezoelectric element) with time are reduced from ~1,000,000:1 (120 dB) to about 1,000:1 (60 dB) by these preprocessing steps.
Early ultrasound units used analog electronic circuits for all functions, which were susceptible to
drift and instability. Even today, the initial stages of the receiver often use analog electronic
circuits. Digital electronics were first introduced in ultrasound for functions such as image
formation and display. Since then, there has been a tendency to implement more and more of the
signal preprocessing functions in digital circuitry, particularly in high-end ultrasound systems.
In state-of-the-art ultrasound units, each piezoelectric element has its own preamplifier and analog-
to-digital converter (ADC). A typical sampling rate of 20 to 40 MHz with 8 to 12 bits of precision
is used.
Echo reception includes electronic delays to adjust for beam direction and dynamic receive
focusing to align the phases of detected echoes from the individual elements in the array as a
function of echo depth. In systems with digital beam formers, this is accomplished with digital
processing algorithms. Following phase alignment, the preprocessed signals from all of the active
transducer elements are summed. The output signal represents the acoustic information gathered
during the pulse repetition period along a single beam direction. This information is sent to the
receiver for further processing before rendering into a 2D image.
The receiver processes the data streaming from the beam former. Steps include time gain
compensation (TGC), dynamic range compression, rectification, demodulation, and noise
rejection.
The TGC stages supply the gain required to compensate for the attenuation brought about by the
propagation of sound in tissue. During the echo reception time, which ranges from 40 to 240 µs,
the gain of these amplifiers is swept over a range approaching 60 to 70 dB, depending on the
clinical examination. The value of this gain at any depth is under user control with a set of slide
pots often referred to as the TGC slide pots. The dynamic range available from typical TGC
amplifiers is in the order of 60 dB. One can think of the TGC amplifiers as providing a dynamic
range window into the total range available at the transducer. The user has the ability to adjust the
TGC and the noise rejection level.
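A simple sketch of the gain a TGC stage must supply, assuming a uniform average attenuation coefficient; this is a simplification, since in practice the curve is shaped by the operator with the slide pots:

```python
def tgc_gain_db(depth_cm, f_mhz, alpha_db_cm_mhz=0.5):
    """Gain (dB) needed to offset round-trip attenuation at a given depth.

    alpha_db_cm_mhz is an assumed average attenuation coefficient.
    """
    return 2 * depth_cm * f_mhz * alpha_db_cm_mhz

# Gain ramp over the first 12 cm at 3.5 MHz:
ramp = [tgc_gain_db(d, 3.5) for d in range(0, 13, 4)]
print(ramp)  # [0.0, 14.0, 28.0, 42.0]
```

The roughly 40 to 70 dB sweep this produces over typical imaging depths matches the amplifier range quoted above.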
2.6.1. A-Mode
A-mode (A for amplitude) is the display of the processed information from the receiver versus time (after the receiver processing steps). As echoes return from tissue boundaries and scatterers, the corresponding signal amplitudes are displayed as a function of time, which is proportional to depth.
2.6.2. B-mode
B-mode (B for brightness) is the electronic conversion of the A-mode and A-line information into brightness-modulated dots on a display screen. In general, the brightness of the dot is proportional to the echo signal amplitude (depending on signal processing parameters). The B-mode display is used for M-mode and 2D gray-scale imaging. The 2D scan is composed of many A-mode scans, each taken with a different beam position or angle with respect to the field of view within the patient.

Figure 2-13: Simultaneous B- and M-mode display. B-mode (upper image) and M-mode (lower image) display of heart valve motion.
The single selected A-scan shown in Figure 2.13 produces three prominent reflections; the middle one is from a moving boundary such as a heart valve. The time variation of the echo times is displayed on the vertical axis while the cursor scrolls across the screen at a constant rate.
2.7.1. Filtering
First, the received RF signals are filtered in order to remove high-frequency noise. In second
harmonic imaging, the transmitted low-frequency band is also removed, leaving only the received
high frequencies in the upper part of the bandwidth of the transducer. The origin of these
frequencies, which were not transmitted, is nonlinear wave propagation.
Figure 2.14 shows an example of an RF signal and its envelope. If each amplitude along the
envelope is represented as a gray value or brightness, and different lines are scanned by translating
the transducer, a B-mode (B stands for brightness) image is obtained. Bright pixels correspond to
strong reflections, and the white lines in the image represent the two boundaries of the scanned
object. To construct an M-mode image, the same procedure with a static transducer is applied.
Most ultrasound scanners therefore enable the user to modify the gain manually at different depths.
The function of the scan converter is to create 2D images from echo information from distinct
beam directions, and to perform scan conversion to enable image data to be viewed on video
display monitors. Scan conversion is necessary because the image acquisition and display occur
in different formats. Early scan converters were of an analog design, using storage cathode ray
tubes to capture data. These devices drifted easily and were unstable over time. Modern scan
converters use digital technology for storage and manipulation of data. Digital scan converters are extremely stable and allow subsequent image processing with the application of a variety of mathematical functions.
Digital information streams to the scan converter memory, configured as a matrix of small picture
elements (pixels) that represent a rectangular coordinate display. Most ultrasound instruments have
a ~500 X 500-pixel matrix (variations between manufacturers exist). Each pixel has a memory
address that uniquely defines its location within the matrix. During image acquisition, the digital
signals are inserted into the matrix at memory addresses that correspond as close as possible to the
relative reflector positions in the body. Transducer beam orientation and echo delay times
determine the correct pixel addresses (matrix coordinates) in which to deposit the digital
information. Misalignment between the digital image matrix and the beam trajectory, particularly
for sector-scan formats at larger depths, requires data interpolation to fill in empty or partially
filled pixels. The final image is most often recorded at 512 × 512 pixels with 8 bits per pixel, representing about ¼ megabyte of data. For color display, the bit depth is often as much as 24 bits (1 byte per primary color).
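The storage figures quoted above follow from a one-line calculation:

```python
def image_bytes(width, height, bits_per_pixel):
    """Storage for one frame of scan-converted image data."""
    return width * height * bits_per_pixel // 8

gray  = image_bytes(512, 512, 8)   # 262,144 bytes, i.e. about 1/4 megabyte
color = image_bytes(512, 512, 24)  # 786,432 bytes for 24-bit color
print(gray, color)
```
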
There are two kinds of Doppler imaging: Flow imaging and Strain imaging. Strain imaging is a
more recent application of Doppler imaging. When neighboring pixels move with a different
velocity, the spatial velocity gradient can be calculated. This gradient corresponds to the strain rate
(i.e., strain per time unit; the tissue lengthening or shortening per time unit). The strain rate can be
estimated in real time. The strain (i.e., the local deformation) of the tissue can then be calculated
as the integral of the strain rate over time.
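The strain-rate calculation described above can be sketched numerically. The velocities, pixel spacing, and frame interval below are hypothetical values chosen only to illustrate the spatial gradient and its time integration.

```python
def strain_rate(v1, v2, dx):
    """Strain rate (1/s) from the velocities (m/s) of two neighboring
    pixels separated by dx meters: the spatial velocity gradient."""
    return (v2 - v1) / dx

def strain(strain_rates, dt):
    """Strain (dimensionless) as the time integral of the strain rate,
    approximated by a running sum over frames dt seconds apart."""
    total = 0.0
    for sr in strain_rates:
        total += sr * dt
    return total

# Two pixels 1 mm apart moving at 1.0 and 1.2 cm/s: the tissue
# lengthens at a rate of about 2.0 per second.
sr = strain_rate(0.010, 0.012, 0.001)
print(sr)                       # ~2.0 (1/s)
print(strain([sr] * 50, 0.01))  # ~1.0 after 0.5 s of constant lengthening
```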
In blood velocity estimation, the goal is not simply to estimate the mean target position and mean
target velocity. The goal instead is to measure the velocity profile over the smallest region possible
and to repeat this measurement quickly and accurately over the entire target. Therefore, the joint
optimization of spatial, velocity, and temporal resolution is critical. In addition to the mean
velocity, diagnostically useful information is contained in the volume of blood flowing through
the vessel.
A number of features make blood flow estimation distinct from typical radar and sonar target
estimation situations. The combination of factors associated with the beam formation system,
properties of the intervening medium, and properties of the target medium lead to a difficult and
unique operating environment. Figure 2.16 below summarizes the operating environment of an
ultrasonic blood velocity estimation system.
It is possible to color code the Doppler information and superimpose it on a real-time B-mode
image, which can help in identifying blood vessels or blood vessels with abnormal flow. This
technique can also be used to diagnose coronary stenosis.
The color flow map (Figure 2.17) shows color-encoded velocities superimposed on the gray-scale
image with the velocity magnitude indicated by the color bar on the side of the image. Motion
toward the transducer is shown in yellow and red, and motion away from the transducer is shown
in blue and green, with the range of colors representing a range of velocities to a maximum of 6
cm/s in each direction. Velocities above this limit would produce aliasing for the parameters used.
Figure 2-17: Colour-coded Doppler scan of a foetal heart. The grey image is derived from
B-Mode. The super-imposed colour maps blood flow velocity in the selected region.
A sinusoidal wave is transmitted by a piezoelectric crystal, and the reflected signal is received by a
second crystal. Usually, both crystals are embedded in the same transducer. CW Doppler is the
only exception to the pulse–echo principle for ultrasound data acquisition. It does not yield spatial
(i.e., depth) information.
Pulsed waves are transmitted along a particular line through the tissue at a constant pulse repetition
frequency (PRF). However, rather than acquiring the complete RF signal as a function of time, as
in the M-mode acquisition, only one sample of each reflected pulse is taken at a fixed time, the so-
called range gate.
This is the Doppler equivalent of the B-mode acquisition. However, for each image line, several
pulses (typically 3–7) instead of one are transmitted. The result is a 2D image in which the velocity
information is visualized by means of color superimposed onto the anatomical gray scale image.
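The aliasing limit mentioned earlier follows from the Nyquist criterion: the Doppler shift must stay below half the pulse repetition frequency. A small illustrative sketch follows; the transducer frequency and PRF are assumed values, not taken from the figure.

```python
import math

def nyquist_velocity(prf_hz, f0_hz, c=1540.0, angle_deg=0.0):
    """Maximum unaliased velocity (m/s) for pulsed/color Doppler.

    The Doppler shift f_d = 2 * v * f0 * cos(theta) / c must stay below
    PRF/2 (the Nyquist limit), so v_max = c * PRF / (4 * f0 * cos(theta)).
    """
    return c * prf_hz / (4.0 * f0_hz * math.cos(math.radians(angle_deg)))

# A 5 MHz transducer at PRF = 1 kHz: velocities above ~7.7 cm/s alias.
v = nyquist_velocity(1000.0, 5e6)
print(round(v * 100, 2), "cm/s")
```

Raising the PRF or lowering the transmit frequency raises the aliasing limit, at the cost of depth range or resolution.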
Besides the B-mode data used for the 2D image, other information from M-mode and Doppler
signal processing can also be displayed. During operation of the ultrasound scanner, information
in the memory is continuously updated in real time. When ultrasound scanning is stopped, the last
image acquired is displayed on the screen until ultrasound scanning resumes.
References
1. Myer Kutz, Standard Handbook of Biomedical Engineering and Design, McGraw-Hill
Companies, 2004.
2. Joseph D. Bronzino (Ed.), The Biomedical Engineering Handbook, Second Edition, CRC
Press LLC, 2000.
3. G. S. Sawhney, Fundamentals of Biomedical Engineering, New Age International (P)
Limited, Publishers, New Delhi, 2007.
Objectives
By studying this chapter, the students should be able to:
3.1. Introduction
X-ray imaging depends on the partial translucence of biological tissue with respect to X-ray
photons. If a beam of X-rays is directed at the human body, a fraction of the photons will pass
through without interaction. The bulk of the incident photons, on the other hand, will interact with
the tissue in different ways. As the beam is moved across the body, the relative proportions of
transmission, absorption and scatter change, as the beam encounters more or less bone, blood and
soft tissue. Ideally the contrast in an X-ray image would be produced just by the variation in the
number of photons that survive the direct journey to the detector without interaction. These are the
primary photons. They have travelled in a straight line from the X-ray tube focus and will give rise
to sharp-edged shadows. Figure 3.1 illustrates the imaging process.
Medical imaging uses X-rays with energies in the range 20–150 keV. These are very much higher
than most atomic binding energies and thus the inelastic interaction processes of Compton
scattering and the photoelectric effect launch charged electrons and ions with relatively large
kinetic energies.
X-rays are invisible, can penetrate matter, ionize gases, change a photographic emulsion, create
light emission in different substances, and induce biological changes in living tissue.
• an electron source,
• an external energy source to accelerate the electrons
• an evacuated path for electron acceleration,
• a target electrode,
For x-rays to be produced there must be a source of fast-moving electrons, a sudden stopping of
the electrons' motion, and the conversion of their kinetic energy (KE) into electromagnetic
energies such as infrared (heat), light and x-rays.
As shown in Figure 3.2, the electron beam is focused from the cathode onto the anode target by
the focusing cup, and the interaction of these electrons with the tungsten atoms of the target
material produces:
• Heat: Most of the kinetic energy of the projectile e- is converted into heat. This happens
when a projectile e- interacts with the outer-shell e- of the target atoms but does not transfer
enough energy to the outer-shell e- to ionize them.
• X-rays: Characteristic radiation (20%) or Bremsstrahlung radiation (80%).
The principal parts of the x-ray imaging system are the operating console, the high-voltage
generator, and the x-ray tube. The system is designed to provide a large number of e- with high
kinetic energy focused onto a small target.
Characteristic Radiation
When the incident electron interacts with the K-shell electron via a repulsive electrical force, the
K-shell electron is removed leaving a vacancy in the K-shell. An electron from the adjacent L-
shell (or possibly a different shell) fills the vacancy. Since the electron loses energy as it drops into
the vacancy, a characteristic x-ray photon is emitted with energy equal to the difference between
the binding energies of the two shells. It is called characteristic because the energy of the photon
produced is characteristic of the target element.
Characteristic x-rays have discrete energies (Figure 3.3) based on the e- binding energies of the
target material.
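As a numeric illustration of the shell-energy difference, the snippet below uses approximate textbook binding energies for tungsten; exact values vary slightly between references, so treat them as assumptions.

```python
# Approximate shell binding energies (keV) for tungsten (Z = 74).
BINDING_KEV = {"K": 69.5, "L": 10.2, "M": 2.5}

def characteristic_energy(vacancy_shell, filling_shell):
    """Photon energy (keV) emitted when an electron from filling_shell
    drops into a vacancy in vacancy_shell: the difference between the
    binding energies of the two shells."""
    return BINDING_KEV[vacancy_shell] - BINDING_KEV[filling_shell]

# An L -> K transition gives roughly the ~59 keV K-alpha line of tungsten.
print(characteristic_energy("K", "L"))
```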
Bremsstrahlung spectrum
Bremsstrahlung radiation arises from energetic electron interactions with an atomic nucleus of the
target material. In a "close" approach, the positive nucleus attracts the negative electron, causing
deceleration and redirection, resulting in a loss of kinetic energy that is converted to an x-ray. The
x-ray energy depends on the interaction distance between the electron and the nucleus; it decreases
as the distance increases. Hence bremsstrahlung radiation can be produced at any projectile e-
energy. Bremsstrahlung x-rays have a range of energies and form a continuous emission spectrum.
Figure 3.4 below shows the production of bremsstrahlung radiation and its spectrum.
Major factors that affect x-ray production efficiency are the atomic number of the target material
and the kinetic energy of the incident electrons.
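A commonly quoted rule of thumb (an approximation not stated in this text, so treat it as an assumption) is that the x-ray production efficiency is on the order of 9 × 10⁻¹⁰ · Z · V, with V the tube voltage in volts. The sketch below applies it to a tungsten target:

```python
def xray_efficiency(Z, kVp):
    """Approximate fraction of electron-beam energy converted to x-rays,
    using the rule of thumb efficiency = 9e-10 * Z * V (V in volts).
    The remainder appears as heat in the anode."""
    return 9e-10 * Z * kVp * 1000.0

# Tungsten (Z = 74) at 100 kVp: well under 1% of the energy becomes x-rays.
eff = xray_efficiency(74, 100)
print(f"{eff * 100:.2f}% x-rays, {100 - eff * 100:.2f}% heat")
```

This is why anode heat management dominates x-ray tube design: almost all of the beam power must be dissipated as heat.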
An X-ray tube is generally cylindrical in shape, measuring only 12-18 cm in length and 9 cm in
breadth. The envelope of Pyrex or borosilicate glass enables it to withstand extremely high
temperatures.
The x-ray tube is housed in a metal housing which is lined with lead except at the window (the
lead absorbs stray x-rays). This metal housing contains sealed oil that serves as an electrical
insulator and helps in heat dissipation. Some x-ray housings also have a fan to cool the tube. The
metal housing also provides physical support to the x-ray tube.
3.3.2. Cathode
The negative side of the tube is called the cathode. It contains a tungsten or tungsten-rhenium
filament, focusing cups and supporting wires.
Filament: a tungsten or tungsten-rhenium alloy is preferred because of its high melting point
(3370 °C), little tendency to vaporize and high atomic number (74).
The filament is supported by two stout wires, which connect it to the proper electrical source. A
low-voltage current is sent through one wire to heat the filament. The other wire is connected to
the high-voltage source, which provides a high negative potential during exposure to drive the
electrons toward the anode.
Most modern x-ray machines are provided with two filaments mounted side by side. These
filaments differ in size, producing two focal spots of different sizes on the target. Such x-ray tubes
are called dual-focus tubes.
Focusing cup: the filament is embedded in a concave metal shroud made of nickel or molybdenum,
called the focusing cup. It is given a negative electrical potential so that the electrons emitted from
the cathode do not spread apart. Because of the focusing cup, these electrons rush toward the anode
in a narrow stream.
Stationary or fixed anodes are used in low-powered x-ray tubes such as portable x-ray units, dental
x-ray units, etc. However, a rotating anode is required in larger-capacity x-ray machines which
produce a high-intensity x-ray beam.
Anode has two functions; to provide mechanical support to the target and to act as a good thermal
conductor for heat dissipation.
Tungsten is preferred as the target material because of its:
• High melting point (3370 °C) to withstand the high temperatures produced during exposures.
• High atomic number (74), which allows high x-ray production.
• Little tendency to vaporize.
Rotating anode (Figure 3.6): a rotating anode provides a several hundred times larger target area
for the electron beam to interact with, and thus the heat generated during an exposure is spread
over a larger area of the anode.
Figure 3-6: Cross-section of tube housing with x-ray tube insert for rotating anode
Self-lubricating bearings coated with metallic barium or silver serve the first purpose, whereas the
disc and stem of molybdenum serve the second by virtue of molybdenum's poor heat conductivity.
The tungsten-rhenium strip is beveled. The degree of beveling is called the target angle or anode
angle (Figure 3.7).
Focal spot: the focal spot on the target surface of the anode is that area which is bombarded by the
electrons from the cathode during an exposure. The size and shape of the focal spot are determined by:
Heel effect: the radiographic intensity on the cathode side of the x-ray beam is higher than on the
anode side. This difference in intensity across the x-ray beam is called the heel effect. It arises
because x-rays are emitted from the target in all directions, and those emerging nearly parallel to
the angled target surface are attenuated by the target itself. Thus, the intensity of x-rays on the
anode side is low. This difference in intensity may be as high as 40%.
Therefore, when taking a radiograph of a part of unequal thickness, the thicker or denser side
should be positioned toward the cathode and the thinner side toward the anode. The smaller the
anode angle, the greater the heel effect. The heel effect is more noticeable when the film size is
larger. Figure 3.9 below illustrates the heel effect.
The radiation output of an X-ray tube clearly depends on the electrical power that it consumes and
tubes are described or rated by the manufacturer in terms of electrical power rather than photon
flux produced. The electron beam energy is quite capable, when converted to heat, of both
destroying the anode and damaging the associated vacuum enclosure. For this reason each tube
carries maximum power ratings that must not be exceeded.
Different radiographic procedures have been found empirically to be best carried out with
particular combinations of focal spot sizes, HT voltages, beam currents and exposure times. The
particular choice of settings will be governed by the overall attenuation factor for the anatomy
under investigation (how much bone, how much soft tissue), the type of film used, and finally the
recommended patient dose for the particular investigation.
Inherent filtration includes the thickness (1 to 2 mm) of the glass or metal insert at the x-ray tube
port. Glass (SiO2) and aluminum have similar attenuation properties (Z = 14 and Z = 13,
respectively) and effectively attenuate all x-rays in the spectrum below approximately 15 keV.
Inherent filtration can include attenuation by housing oil and the field light mirror in the collimator
assembly.
Added filtration on the other hand refers to placing a sheet of metal intentionally in the beam to
change its effective energy. In general diagnostic radiology, added filters attenuate the low-energy
x-rays in the spectrum that have virtually no chance of penetrating the patient and reaching the x-
ray detector. Because the low-energy x-rays are absorbed by the filters instead of the patient, the
radiation dose is reduced. Aluminum (Al) is the most common added filter material.
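The effect of added filtration can be sketched with the exponential attenuation law I = I0·exp(-μx). The aluminum attenuation coefficients below are approximate values assumed for illustration only:

```python
import math

# Approximate linear attenuation coefficients of aluminum (1/cm),
# illustrating that mu is far larger at low photon energies.
MU_AL_PER_CM = {20: 9.3, 60: 0.75}

def transmitted_fraction(energy_kev, thickness_cm):
    """Fraction of monoenergetic photons surviving an Al filter,
    from the Beer-Lambert law I = I0 * exp(-mu * x)."""
    return math.exp(-MU_AL_PER_CM[energy_kev] * thickness_cm)

# Through 2.5 mm of Al, roughly 10% of 20 keV photons survive but
# over 80% of 60 keV photons do: the filter removes mostly the
# low-energy x-rays that would only add patient dose.
print(transmitted_fraction(20, 0.25))
print(transmitted_fraction(60, 0.25))
```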
Compensation (equalization) filters are also used to change the spatial pattern of the x-ray intensity
incident on the patient, so as to deliver a more uniform x-ray exposure to the detector. For example,
a trough filter used for chest radiography has a centrally located vertical band of reduced thickness
and consequently produces greater x-ray fluence in the middle of the field. This filter compensates
for the highly attenuating mediastinum in the center of the chest.
In the collimator housing (Figure 3.10), a beam of light reflected by a mirror (of low x-ray
attenuation) mimics the x-ray beam. Thus, the collimation of the x-ray field is identified by the
collimator's shadows. Regulators require that the light field and x-ray field be aligned so that the
sum of their misalignments does not exceed 2% of the source-to-image distance (SID).
3.7. Causes of x-ray tube failures and steps to extend tube life
Cathode failure: prolonged heating of the filament by normal current or by repeated exposures
causes evaporation of the filament metal, progressively thinning it and rendering it vulnerable to
breakage.
Anode failure: melting of the anode results from excessive heat produced by the bombardment of
electrons on it. This damage to the anode causes uncontrolled x-ray production. It also causes
vibrations in the rotor due to the imbalanced disc, which increases the possibility of anode stem
fracture.
Glass envelope failure: it may crack due to secondary arcing from the filament to the metal
deposits on the glass wall as a result of tungsten evaporation.
The high-voltage section of an x-ray generator contains a step-up transformer, typically with a
primary-to-secondary turns ratio of 1:500 to 1:1,000. Within this range, a tube voltage of 100 kVp
requires an input peak voltage of 200 V to 100 V, respectively.
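The step-up relation can be checked with a short sketch; the turns ratios are those quoted above.

```python
def secondary_kv(primary_v, turns_ratio):
    """Peak secondary voltage (kV) from the step-up relation
    V_secondary = V_primary * (Ns / Np), with turns_ratio = Ns / Np."""
    return primary_v * turns_ratio / 1000.0

print(secondary_kv(200, 500))    # 100 kVp with a 1:500 transformer
print(secondary_kv(100, 1000))   # 100 kVp with a 1:1000 transformer
```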
Figure 3-11: A modular schematic view shows the basic x-ray generator components
In modern systems, microprocessor control and closed-loop feedback circuits help to ensure
proper exposure (Figure 3.11). Most modern generators used for radiography have automatic
exposure control (AEC) circuits, whereby the technologist selects the kVp and mA, and the AEC
system determines the correct exposure time. The AEC (also referred to as a photo timer) measures
the exposure with the use of radiation detectors located near the image receptor, which provide
feedback to the generator to stop the exposure when the proper exposure to the image receptor has
been reached.
At the operator console, the operator selects the kVp, the mA (proportional to the number of x-
rays in the beam at a given kVp), the exposure time, and the focal spot size. The peak kilovoltage
(kVp) determines the x-ray beam quality (penetrability), which plays a role in the subject contrast.
The x-ray tube current (mA) determines the x-ray flux (photons per square centimeter) emitted by
the x-ray tube at a given kVp. The product of tube current (mA) and exposure time (seconds) is
expressed as milliampere-seconds (mAs). Some generators used in radiography allow the selection
of "three-knob" technique (individual selection of kVp, mA, and exposure time), whereas others
only allow "two-knob" technique (individual selection of kVp and mAs). The selection of focal
spot size (i.e., large or small) is usually determined by the mA setting: low mA selections allow
the small focus to be used, and higher mA settings require the use of the large focus due to anode
heating concerns. On some x-ray generators, preprogrammed techniques can be selected for
various examinations (i.e., chest; kidneys, ureter, and bladder; cervical spine). All console circuits
have relatively low voltage and current levels that minimize electrical hazards.
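The mAs relation described above ("two-knob" versus "three-knob" technique) reduces to simple arithmetic, sketched here with illustrative settings:

```python
def mas(ma, seconds):
    """Tube output in milliampere-seconds: the product of tube current
    (mA) and exposure time (s)."""
    return ma * seconds

def exposure_time(mas_setting, ma):
    """On a 'two-knob' console the time follows from the chosen mAs
    and mA: t = mAs / mA."""
    return mas_setting / ma

print(mas(200, 0.1))            # 20.0 mAs at 200 mA for 0.1 s
print(exposure_time(20, 400))   # 0.05 s at 400 mA gives the same 20 mAs
```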
Several x-ray generator circuit designs are in common use:
• the single-phase x-ray generator, which uses a single-phase input line voltage (e.g., 220 V
at 50 A) and produces a pulsating DC waveform;
• the three-phase x-ray generator, which uses a three-phase AC line source, delivers a nearly
constant voltage to the x-ray tube, and can produce short exposure times;
• the constant-potential generator, which provides a nearly constant voltage to the x-ray
tube, accomplished with a three-phase transformer; and
• the high-frequency inverter generator, which is discussed below.
The operational frequency of the generator is variable, depending chiefly on the set kVp, but also
on the mA and the frequency-to-voltage characteristics of the transformer. There are several
advantages of the high-frequency inverter generator. Single phase or three-phase input voltage can
be used. Closed-loop feedback and regulation circuits ensure reproducible and accurate kVp and
mA values. The variation in the voltage applied to the x-ray tube is similar to that of a three-phase
generator. Transformers operating at high frequencies are more efficient, more compact, and less
costly to manufacture. Modular and compact design makes equipment siting and repairs relatively
easy.
The high-frequency inverter generator is the preferred system for all but a few applications (e.g.,
those requiring extremely high power, extremely fast kVp switching, or submillisecond exposure
times). In only rare instances is the constant-potential generator a better choice.
In this high-frequency generator, the feedback signals provide excellent stability of the peak tube
voltage (kVp) and tube current (mA). Solid lines in the circuit depict low voltage, dashed lines
depict high voltage, and dotted lines depict low-voltage feedback signals from the sensing circuits
(kVp and mA). The potential difference across the x-ray tube is determined by the charge delivered
by the transformer circuit and stored on the high-voltage capacitors.
Screen-film radiography is the oldest projection radiographic system, used since the discovery of
imaging with x-rays.
Projection imaging refers to the acquisition of a two-dimensional image of the patient's three-
dimensional anatomy. Projection imaging delivers a great deal of information compression,
because anatomy that spans the entire thickness of the patient is presented in one image.
Radiography is a transmission imaging procedure. X-rays emerge from the x-ray tube, which is
positioned on one side of the patient's body; they then pass through the patient and are detected on
the other side of the patient by the screen-film detector.
In screen-film radiography, the optical density (OD, a measure of film darkening) at a specific
location on the film is (ideally) determined by the x-ray attenuation characteristics of the patient's
anatomy along a straight line through the patient between the x-ray source and the corresponding
location on the detector. The x-ray tube emits a relatively uniform distribution of x-rays directed
toward the patient. After the homogeneous distribution interacts with the patient's anatomy, the
screen-film detector records the altered x-ray distribution (Figure 3.14).
The low sensitivity of film for X-rays would yield prohibitively large patient doses. Therefore, an
intensifying screen is used in front of the film. This type of screen contains a heavy chemical
element that absorbs most of the X-ray photons. When an X-ray photon is absorbed, the kinetic
energy of the released electron raises many other electrons to a higher energy state. When returning
to their normal state, these electrons emit the absorbed energy as light. Two types of light emission
are distinguished:
• Fluorescence is the prompt emission of light when excited by X-rays and is used in
intensifying screens. A material is said to fluoresce when light emission begins
simultaneously with the exciting radiation and light emission stops immediately after the
exciting radiation has stopped. Initially, calcium tungstate (CaWO4) was most commonly
used for intensifying screens.
Advances in technology have now resulted in the use of rare earth compounds, such as
gadolinium oxysulfide (Gd2O2S). A more recent scintillator material is thallium-doped
cesium iodide (CsI:Tl), which has not only an excellent absorption efficiency but also a
good resolution because of the needle-shaped or pillar like crystal structure, which limits
lateral light diffusion.
• Phosphorescence or afterglow is the continuation of light emission after the exciting
radiation has stopped. If the delay to reach peak emission is longer than 10−8 seconds or
59 | Medical Imaging Systems, JiT, School of Biomedical Engineering.
if the material continues to emit light after this period, it is said to phosphoresce.
Phosphorescence in screens is an undesirable effect, because it causes ghost images and
occasionally film fogging.
3.9.3. Film
The film contains an emulsion with silver halide crystals (e.g., AgBr). When exposed to light, the
silver halide grains absorb optical energy and undergo a complex physical change. Each grain that
absorbs a sufficient number of photons contains dark, tiny patches of metallic silver called
development centers. When the film is developed, the development centers precipitate the change
of the entire grain to metallic silver. The more light reaching a given area of the film, the more
grains are involved and the darker the area after development. In this way a negative is formed. After
development, the film is fixed by chemically removing the remaining silver halide crystals.
In radiography, the negative image is the final output image. In photography, the same procedure
has to be repeated to produce a positive image. The negative is then projected onto a sensitive
paper carrying silver halide emulsion similar to that used in the photographic film.
Graininess: The image derived from the silver crystals is not continuous but grainy. This effect is
most prominent in fast films. Indeed, because the number of photons needed to change a grain into
metallic silver upon development is independent of the grain size, the larger the grains, the faster
the film becomes dark.
Speed: The speed of a film is inversely proportional to the amount of light needed to produce a
given amount of metallic silver on development. The speed is mainly determined by the silver
halide grain size. The larger the grain size the higher the speed because the number of photons
needed to change the grain into metallic silver upon development is independent of the grain size.
The speed depends on the properties of the intensifying screen, the film, on the quality of film–
screen contact, and on a good match between the emission spectrum of the screen and the spectral
sensitivity of the film used.
Contrast: The most widely used description of the photosensitive properties of a film is the plot
of the optical density D versus the logarithm of the exposure E, known as the characteristic
(Hurter-Driffield) curve. The optical density is defined as
D = log10(Iin / Iout)
where Iin and Iout are the incoming and outgoing light intensities when exposing the developed
film to a light source.
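Assuming the standard definition of optical density, D = log10(Iin/Iout), a short sketch shows how light transmission through the developed film maps to OD:

```python
import math

def optical_density(i_in, i_out):
    """Optical density of a developed film area, D = log10(Iin / Iout),
    for light of intensity i_in entering and i_out emerging."""
    return math.log10(i_in / i_out)

# An area transmitting only 1% of the viewing light has OD = 2;
# 10% transmission gives OD = 1.
print(optical_density(100.0, 1.0))
print(optical_density(100.0, 10.0))
```

Each unit of OD therefore corresponds to a tenfold reduction in transmitted light.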
Developing: In the developer, the exposed silver bromide is reduced to silver. The developer is an
alkaline solution of various compounds, and the reducing action is slow at temperatures below
18 °C. At temperatures above 30 °C, the film emulsion will get damaged. Hence the developer
temperature is maintained at 20 °C by a refrigeration system, and the recommended developing
time is 5 minutes at 20 °C. Prolonged development will fog the image. Development at higher
temperature will result in poor contrast.
Rinsing: Once developing is over, the film turns black but is not transparent, due to undeveloped
emulsion remaining on the film. The unexposed emulsion will react again with light when the film
is taken out into ordinary light and turn black. Hence it is essential to remove the unexposed
emulsion. So, after developing, the film is immersed for a few minutes in another tank containing
ordinary water, or water with glacial acetic acid, to stop the developing action.
Fixing: In the fixer, the unexposed silver bromide is dissolved away and only the converted silver,
which is black in colour, remains on the film, representing the internal structural variation of the
part. The fixer solution is acidic in nature.
Drying: After washing the films in running water for about 20 to 25 minutes, the films are dried.
Dust-free hot air at a temperature of 100 °F to 120 °F is used for drying. Drying cabinets or rooms
are used for this purpose. Once the film is dried, the radiograph is ready for evaluation.
After exposure a latent image exists as a spatially dependent distribution of electrons trapped in
high energy states. During readout, laser light is used to stimulate the trapped electrons back to the
conduction band, where they are free to transition to the valence band by emission of light. Figure
3.14 shows imaging plate exposure and readout process.
• The cassette is moved into the reader unit and the imaging plate is mechanically removed
from the cassette.
• The imaging plate is translated across a moving stage and scanned by a laser beam.
• The laser light stimulates the emission of trapped energy in the imaging plate, and visible
light is released from the plate.
• The light released from the plate is collected by a fiber optic light guide and strikes a
photomultiplier tube (PMT), where it produces an electronic signal.
• The electronic signal is digitized and stored.
• The plate is then exposed to bright white light to erase any residual trapped energy.
• The imaging plate is then returned to the cassette and is ready for reuse.
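The readout steps listed above can be sketched as a toy pipeline; the plate values and gain are hypothetical, and real readers involve analog electronics rather than this simple loop:

```python
def read_plate(trapped_energy_rows, gain=100.0):
    """Toy CR readout: trapped_energy_rows is a 2D list of trapped-energy
    values (arbitrary units). Each location is 'laser-stimulated', the
    released light is amplified (the PMT stage), digitized, and the
    residual energy is erased. Returns the digitized image."""
    image = []
    for row in trapped_energy_rows:                 # plate translation
        image_row = []
        for j, energy in enumerate(row):            # laser scan along row
            light = energy                          # stimulated emission
            signal = light * gain                   # PMT amplification
            image_row.append(int(signal))           # digitization
            row[j] = 0.0                            # erase residual energy
        image.append(image_row)
    return image

plate = [[0.5, 1.0], [0.0, 2.0]]
print(read_plate(plate))   # digitized image
print(plate)               # all zeros: plate erased and ready for reuse
```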
3.10.2. CR Reader
As the red laser light (approximately 700 nm) strikes the imaging phosphor at a location (x, y), the
trapped energy from the x-ray exposure at that location is released from the imaging plate. The
imaging plate is mechanically translated through the readout system using rollers, while the laser
beam is scanned across it.
DR describes a multitude of digital x-ray detection systems that immediately process the absorbed
X-ray signal after exposure and produce the image for viewing with no further user interaction.
There are two types of digital radiography (DR) detectors:
• Indirect flat panel detectors
• Direct flat panel detectors.
During exposure, negative voltage is applied to all gate lines, causing all of the transistor switches
on the flat panel imager to be turned off. Therefore, charge accumulated during exposure remains
at the capacitor in each detector element. During readout, positive voltage is sequentially applied
to each gate line (e.g., Rl, R2, R3, as shown in Figure 3.20.), one gate line at a time. Thus, the
switches for all detector elements along a row are turned on. The multiplexer (right of Figure 3.20
) is a device with a series of switches in which one switch is opened at a time. The multiplexer
sequentially connects each vertical wire (e.g., C1, C2, C3), via switches (S1, S2, S3), to the
digitizer, allowing each detector element along each row to be read out.
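The gate-line and multiplexer logic described above can be sketched as a toy readout loop; the charge values are hypothetical:

```python
def read_out(stored_charge):
    """Toy active-matrix readout. stored_charge[r][c] is the charge held
    on the capacitor of the element at row r, column c after exposure
    (all switches off). Applying positive voltage to one gate line turns
    on every switch in that row; the multiplexer then connects each
    column wire to the digitizer in turn."""
    n_rows = len(stored_charge)
    n_cols = len(stored_charge[0])
    image = [[0] * n_cols for _ in range(n_rows)]
    for r in range(n_rows):                    # +V on gate line r
        for c in range(n_cols):                # multiplexer closes one switch
            image[r][c] = stored_charge[r][c]  # digitize element (r, c)
            stored_charge[r][c] = 0            # readout drains the capacitor
    return image

charges = [[3, 0, 1], [2, 5, 4]]
print(read_out(charges))
```

Every element is thus sampled exactly once per frame, one row at a time.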
The size of the detector element on a flat panel largely determines the spatial resolution of the
detector system. For high spatial resolution, small detector elements are needed. However, the
electronics (e.g., the switch, capacitor, etc.) of each detector element takes up a certain (fixed)
amount of the area, so, for flat panels with smaller detector elements, a larger fraction of the
detector element's area is not sensitive to light. Therefore, the light collection efficiency decreases
as the detector elements get smaller.
For flat panel arrays, the light collection efficiency of each detector element depends on the
fractional area that is sensitive to light (Figure 3.21). This fraction is called the fill factor, and a
high fill factor is desirable.
Therefore, the choice of the detector element dimensions requires a tradeoff between spatial
resolution and contrast resolution.
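The fill-factor tradeoff can be sketched numerically; the fixed electronics area per element below is a hypothetical figure, not manufacturer data:

```python
def fill_factor(pixel_pitch_um, electronics_area_um2=4000.0):
    """Fraction of a detector element's area that is sensitive to light,
    assuming a fixed area per element is consumed by the switch,
    capacitor, and wiring."""
    total = pixel_pitch_um ** 2
    return max(0.0, (total - electronics_area_um2) / total)

# Shrinking the pixel pitch for better spatial resolution steadily
# lowers the fill factor, and with it the light collection efficiency.
for pitch in (200, 140, 100):
    print(pitch, "um pitch -> fill factor", round(fill_factor(pitch), 2))
```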
Direct flat panel detectors eliminate the intermediate step of converting x-ray energy into light
and use direct absorption of the x-ray photons to produce an electrical signal.
Direct flat panel detectors are made from a layer of photo conductor material on top of a TFT
array. These photoconductor materials have many of the same properties as silicon, except they
have higher atomic numbers. Selenium is commonly used as the photoconductor. The TFT arrays
of these direct detection systems make use of the same line-and-gate readout logic, as described
for indirect detection systems. With direct detectors, the electrons released in the detector layer
from x-ray interactions are used to form the image directly. Light photons from a scintillator are
not used. A negative voltage is applied to a thin metallic layer (electrode) on the front surface of
the detector, and therefore the detector elements are held positive in relation to the top electrode.
In most radiography (except mammography), photon interactions in soft tissue produce scattered
x-ray photons. These scattered photons are the main source of noise and cause a radiation fog in
the radiographic image.
For two adjacent areas transmitting primary photon fluences A and B, the contrast in the absence
of scatter is:
C0 = (A - B)/A
With scatter present, the contrast is reduced to:
C = C0 × 1/(1 + S/P)
where S/P is the scatter-to-primary ratio.
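A short numeric sketch of these contrast relations, with illustrative fluence values:

```python
def contrast_without_scatter(a, b):
    """C0 = (A - B) / A for two adjacent areas with primary fluences A, B."""
    return (a - b) / a

def contrast_with_scatter(a, b, s_over_p):
    """Scatter adds the same offset to both areas, reducing contrast by
    the factor 1 / (1 + S/P)."""
    return contrast_without_scatter(a, b) / (1.0 + s_over_p)

# With primary fluences 100 and 80 the scatter-free contrast is 0.2;
# a scatter-to-primary ratio of 4 cuts it fivefold, to about 0.04.
print(contrast_without_scatter(100.0, 80.0))
print(contrast_with_scatter(100.0, 80.0, 4.0))
```

This is the quantitative motivation for the anti-scatter grid discussed next.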
Scattered photons can be reduced by incorporating a collimation system called an anti-scatter grid
between the detector and the patient. An anti-scatter grid is composed of a series of lead grid strips
aligned with the x-ray source. They reduce the amount of scattered radiation reaching the detector
by exploiting geometry.
Grid ratio: it is the ratio of the height to the width of the interspaces (not the grid bars) in the grid.
Grid ratios of 8:1, 10:1, and 12:1 are most common in general radiography, and a grid ratio of 5:1
is typical in mammography.
Figure 3-26: Radiographic image without (left) anti-scatter and with anti-scatter grid (right):
The distinguishing features of mammography equipment, compared with other x-ray imaging,
stem from the properties of breast cancer and breast tissue.
• Cancer produces very small physical changes in the breast that are difficult to visualize
with conventional x-ray imaging.
• The breast consists of soft tissues with relatively small differences in density (and atomic number).
The small x-ray attenuation differences between normal and cancerous tissues in the breast require
the use of x-ray equipment specifically designed to optimize breast cancer detection.
Low x-ray energies provide the best differential attenuation between the tissues (see Figure 3-28);
however, the high absorption results in a high tissue dose and long exposure time. Detection of
minute calcifications in breast tissues is also important because of the high correlation of
calcification patterns with disease.
The x-ray tubes are arranged such that the cathode side of the tube is adjacent to the patient's chest
wall (Figure 3.29), since the highest intensity of x-rays is available on the cathode side, and the
attenuation of x-rays by the patient is generally greatest near the chest wall.
Mammography tubes often have grounded anodes, whereby the anode structure is maintained at
ground (0) voltage and the cathode is set to the highest negative voltage. With the anode at the
same voltage as the metal insert in this design, off focus radiation is reduced because the metal
housing attracts many of the rebounding electrons that would otherwise be accelerated back to the
anode.
Figure 3-29: Orientation of anode cathode axis along the chest wall to nipple direction
Most x-ray machines use aluminium or "aluminium equivalent" filters to reduce unnecessary
exposure to the patient. Mammography uses filters that work on a different principle and are
chosen to enhance contrast sensitivity. Molybdenum (the same material as in the anode) is the
standard filter material. The characteristic radiation energies of molybdenum (17.5 and 19.6 keV)
are nearly optimal for detection of low-contrast lesions in breasts of 3 to 6 cm thickness.
3.13.5. Compression
Breast compression is a necessary part of the mammography examination. Firm compression
reduces overlapping anatomy and decreases tissue thickness of the breast. This results in fewer
scattered x-rays, less geometric blurring of anatomic structures, and lower radiation dose to the
breast tissues. Achieving a uniform breast thickness lessens exposure dynamic range and allows
the use of higher contrast film.
Compression is achieved with a compression paddle, a flat Lexan plate attached to a pneumatic or
mechanical assembly. Suspicious areas often require “spot” compression to eliminate
superimposed anatomy by further spreading the breast tissues over a localized area. Figure 3-33
below illustrates area compression and spot compression paddle.
1. 2D mammography
o It is also called full-field digital mammography (FFDM).
o With 2D digital mammography, the radiologist views all of the complexities of the
breast tissue in a single flat image.
o Disadvantage: breast tissue can sometimes overlap, making normal breast tissue
look like an abnormal area.
2. 3D mammography/ tomosynthesis
• It is a mammography system where the x-ray tube and imaging plate move during
the exposure.
• 3D mammography creates a series of thin slices through the breast that allow doctors
to examine breast tissue detail one slice at a time to help find breast cancer at its
earliest stages.
Figure 3-33: A) Dense opacity with spiculated border in the cranial part of the right breast;
B) Cluster of irregular microcalcifications suggesting a poorly differentiated carcinoma.
3.14. Fluoroscopy
Fluoroscopy refers to the continuous acquisition of a sequence of x-ray images over time,
essentially a real-time x-ray movie of the patient. Fluoroscopy is a transmission projection imaging
modality, and is, in essence, just real-time radiography. Fluoroscopic systems use x-ray detector
systems capable of producing images in rapid temporal sequence. Fluoroscopy is used for
positioning catheters in arteries, for visualizing contrast agents in the gastrointestinal (GI) tract,
and for other medical applications such as invasive therapeutic procedures where real-time image
feedback is necessary. Fluoroscopy is also used to make x-ray movies of anatomic motion, such
as of the heart or the esophagus.
The image output of a fluoroscopic imaging system is a projection radiographic image, but at a
typical acquisition rate of 30 images per second, a 10-minute fluoroscopic procedure produces a
total of 18,000 individual images.
Fluoroscopy Instrumentation
The five-component ("pentode") electronic lens system of the image intensifier includes:
– The G1, G2, and G3 electrodes
– the input screen substrate (the cathode)
– the anode near the output phosphor
Under the influence of the ~25,000 to 35,000 V electric field, electrons are accelerated and
arrive at the anode with high velocity and considerable kinetic energy. The intermediate electrodes
(G1, G2, and G3) shape the electric field, focusing the electrons properly onto the output layer.
After penetrating the very thin anode, the energetic electrons strike the output phosphor and cause
a burst of light to be emitted.
The electrons strike the output phosphor, causing emission of light. The thick glass output window
allows light to escape the top of Image Intensifier. Light that is reflected in the output window is
scavenged to reduce glare by the addition of a light absorber around the circumference of the
output window.
Each electron causes the emission of approximately 1,000 light photons from the output phosphor
resulting in light amplification (Figure 3-39). The image is much smaller at the output phosphor
than it is at the input phosphor, because the 23- to 35-cm diameter input image is focused onto a
circle with a 2.5-cm diameter. The reduction in image diameter leads to amplification
(minification).
The conversion factor of a new II ranges from 100 to 200 cd·s·m⁻²·mR⁻¹. The conversion factor
degrades with time, and this ultimately can lead to the need for II replacement.
Brightness gain
It is the product of the electronic and minification gains of the II. The electronic gain of an II is
roughly about 50, and the minification gain changes depending on the size of the input phosphor
and the magnification mode.
As the effective diameter of the input phosphor decreases (increasing magnification), the
brightness gain decreases.
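The two gains can be combined numerically (a sketch assuming the minification gain equals the squared ratio of input to output phosphor diameters, using the 23-cm/2.5-cm diameters and ~50x electronic gain quoted above):

```python
# Sketch of image-intensifier brightness gain. Assumed relation:
# minification gain = (input diameter / output diameter)^2.
def minification_gain(d_input_cm: float, d_output_cm: float) -> float:
    return (d_input_cm / d_output_cm) ** 2

def brightness_gain(electronic_gain: float, d_input_cm: float, d_output_cm: float) -> float:
    """Brightness gain = electronic gain x minification gain (as in the text)."""
    return electronic_gain * minification_gain(d_input_cm, d_output_cm)

# 23-cm input focused onto a 2.5-cm output, electronic gain ~50:
print(round(minification_gain(23, 2.5), 1))   # 84.6
print(round(brightness_gain(50, 23, 2.5)))    # 4232
# A smaller (15-cm) effective input diameter in magnification mode
# lowers the brightness gain:
print(brightness_gain(50, 15, 2.5))           # 1800.0
```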
In normal operation of the image intensifier (left of Figure 3.40), electrons emitted by the
photocathode over the entire surface of the input window are focused onto the output phosphor,
resulting in the maximum field of view (FOV) of the II. Magnification mode (right Figure 3-41)
is achieved by pressing a button that modulates the voltages applied to the five electrodes, which
in turn changes the electronic focusing such that only electrons released from the smaller diameter
FOV are properly focused onto the output phosphor.
S distortion is a spatial warping of the image in an S shape through the image. This type of
distortion is usually subtle, if present, and is the result of stray magnetic fields and the earth's
magnetic field. On fluoroscopic systems capable of rotation, the position of the S distortion can
shift in the image due to the change in the system's orientation with respect to the earth's magnetic
field.
The video signal is a voltage versus time waveform (as shown in Figure 3-44) that is communicated
electronically by the cable connecting the video camera with the video monitor. Synchronization
pulses are used to synchronize the raster scan pattern between the TV camera target and the video
monitor. Horizontal sync pulses cause the electron beam in the monitor to laterally retrace and
prepare for the next scan line. A vertical sync pulse has different electrical characteristics and
causes the electron beam in the video monitor to reset at the top of the screen.
Inside the video monitor, the electron beam is scanned in raster fashion, and the beam current is
modulated by the video signal. Higher beam current at a given location results in more light
produced at that location by the monitor phosphor. The raster scan on the monitor is done in
synchrony with the scan of the TV target. The video signal is amplified and is transmitted by cable
to the television monitor, where it is transformed back into a visible image.
Flat panel detectors are the heart of digital fluoroscopy systems. They are thin film transistor (TFT)
pixelated arrays, rectangular in format, used as x-ray detectors. A CsI scintillator is used to convert
the incident x-ray beam into light. TFT systems have a photodiode at each detector element which
converts the light energy to an electronic signal. Flat panel detectors replace the image
86 | Medical Imaging Systems, JiT, School of Biomedical Engineering.
intensifier, video camera and directly record the real-time fluoroscopic image sequence. The image
produced by the image intensifier is circular in format, resulting in less efficient utilization of
rectangular monitors for fluoroscopic display. The flat panel detector produces a rectangular
image, well matched to the rectangular format of TV monitors. The flat panel detector is
substantially less bulky than the image intensifier and TV system, but provides the same
functionality.
• Visual Inspection
• Beam Measurements (kVp, mR, HVL, etc)
• Receptor Tests: Grids, PBL, Coverage
• Tube Assembly Tests: Collimator assembly, Focal Spot, Source to Image Distance
• Darkroom Tests (if applicable)
Visually evident deficiencies are often ignored or worked around by staff. Reporting such
deficiencies often leads to corrective actions. Items to verify include:
• Lights/LEDs working
• Proper technique indication
• Locks and interlocks work
• No broken/loose dials, knobs
• Any obvious electrical or mechanical defects
Beam Measurements
kVp evaluation is a critical issue in diagnostic radiology. Even with high-frequency (HF)
generators, poor kV calibration can:
• Increase dose if kV’s too low
• Cause poor mA linearity, leading to possible repeats
• Image contrast: affected, but the effect is relatively minor for the ranges of
miscalibration usually encountered
A single-exposure rating chart (Figure 3-46) provides information on the allowed combinations of
kVp, mA, and exposure time (power deposition) for a particular x-ray tube, focal spot size, anode
rotation speed, and generator type (no accumulated heat on the anode).
Figure 3-45: The single exposure rating charts for a given focal spot and anode rotation speeds:
Each curve plots peak kilovoltage (kVp) on the vertical axis versus time (seconds) on the
horizontal axis, for a family of tube current (mA) curves indicating the maximal power loading.
For each tube, there are usually four individual charts with peak kilovoltage as the y-axis and
exposure time as the x-axis, containing a series of curves, each for a particular mA value. Each of
these curves marks the boundary between allowed and prohibited kVp-time combinations for that
mA value. For mA-versus-time plots with various kVp curves, the rules are the same but with a
simple exchange of the kVp and mA labels. Previous exposures must also be considered when deciding
whether an exposure is permissible, because there is also a limit on the total accumulated heat
capacity of the anode. The anode heat input and cooling charts must be consulted in this instance.
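The accumulated-heat rule can be sketched in code. Heat units are not defined in the text above; the classic rule HU = k x kVp x mA x s, with k ~ 1.4 for three-phase/high-frequency generators, and the 300,000-HU anode capacity are assumptions for illustration:

```python
# Hedged sketch of an anode heat-load check (heat units and the factor 1.4
# are assumed conventions, not taken from the text above).
def heat_units(kvp: float, ma: float, time_s: float, k: float = 1.4) -> float:
    return k * kvp * ma * time_s

def exposure_permitted(kvp, ma, time_s, accumulated_hu, anode_capacity_hu):
    """Allow the exposure only if the new heat plus the heat already stored in
    the anode stays within the anode heat capacity (the 'previous exposures'
    rule described in the text)."""
    return accumulated_hu + heat_units(kvp, ma, time_s) <= anode_capacity_hu

# Illustrative values: 80 kVp, 500 mA, 0.1 s adds ~5,600 HU to an assumed
# 300,000-HU anode.
print(round(heat_units(80, 500, 0.1)))                   # 5600
print(exposure_permitted(80, 500, 0.1, 290000, 300000))  # True
print(exposure_permitted(80, 500, 0.1, 296000, 300000))  # False
```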
The film monitor (film badge) consists of a piece of photographic film overlaid with small sheets
of different materials, which act as radiation filters; typically these are tin, aluminum, and plastic.
After exposure, the degree of blackening of the film can be related to the total radiation dose to
which the film was exposed.
Review Questions
1. Why are only the K-characteristic x-rays of the tungsten target material desirable in
radiographic imaging?
2. Why are photons undergoing Compton scattering undesired?
3. One way to avoid counting scattered photons is to place an anti-scatter grid (with spacing
w and height h) on top of the x-ray detector, as shown below. Determine the maximum
scattering angle, in terms of h and w, at which x-rays can still pass through the grid and
hence be detected.
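For review question 3, a numeric sketch of the usual answer (assuming the limiting ray enters at one corner of an interspace and exits at the opposite corner, giving θmax = arctan(w/h)):

```python
import math

# Hypothetical illustration for review question 3: a photon entering at the top
# corner of one interspace can deviate by at most theta_max = arctan(w / h)
# from the strip direction before it hits a lead strip
# (w = interspace width, h = strip height).
def max_scatter_angle_deg(w: float, h: float) -> float:
    return math.degrees(math.atan(w / h))

# A 10:1 grid ratio (h/w = 10) accepts only rays within ~5.7 degrees of the
# strip direction; a 5:1 mammography-style grid accepts up to ~11.3 degrees.
print(round(max_scatter_angle_deg(1, 10), 1))  # 5.7
print(round(max_scatter_angle_deg(1, 5), 1))   # 11.3
```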
References
1. Ed. Joseph D. Bronzino, The Biomedical Engineering Hand Book, Second Edition, CRC
press LLC, 2000.
2. Sawhney G.S., Fundamentals of Biomedical Engineering, New Age International (P)
Limited, Publishers, New Delhi, 2007.
3. Hendee W. R., Ritenour E. R., Medical imaging physics, fourth edition, A John Wiley &
Sons, Inc., Publication, ISBN 0-471-38226-4, New York, 2002.
4. Paul Suetens, Fundamentals of medical imaging, second edition, Cambridge University
Press, ISBN-13 978-0-521-51915-1, New York, 2009.
5. Chris Guy, Dominic ffytche, An Introduction to The Principles of Medical Imaging,
Revised Edition, Imperial College Press, ISBN 1-86094-502-3, London, 2005.
Objectives
After completing this chapter, the students should be able to:
4.1. Introduction
In conventional radiography the heart, lungs, and ribs are all superimposed on the same film, and
information along the dimension parallel to the x-ray beam is lost. In medical tomography, by
contrast, slices capture each organ in its actual 3D position. CT was the first imaging
modality that made it possible to probe the inner depths of the body, slice by slice. The word
"tomography" is derived from the Greek τoµoς (slice) and γραφιν (to write).
• A motorized table moves the patient through a circular opening in the CT imaging
system.
• While the patient is inside the opening, an X-ray source and a detector assembly within
the system rotate around the patient. A single rotation typically takes a second or less.
During rotation the X-ray source produces a narrow, fan-shaped beam of X-rays that
passes through a section of the patient's body.
• Detectors in rows opposite the X-ray source register the X-rays that pass through the
patient's body as a snapshot in the process of creating an image. Many different
"snapshots" (at many angles through the patient) are collected during one complete
rotation.
• For each rotation of the X-ray source and detector assembly, the image data are sent to a
computer to reconstruct all of the individual "snapshots" into one or multiple cross-
sectional images (slices) of the internal organs and tissues.
Figure 4-1: Drawing of CT fan beam and patient in a CT imaging system
CT images of internal organs, bones, soft tissue, and blood vessels provide greater clarity and
more details than conventional X-ray images, such as a chest X-Ray.
The tomographic image is a picture of a slab of the patient’s anatomy. The 2D CT image
corresponds to a 3D section of the patient. CT slice thickness is very thin (1 to 10 mm) and is
approximately uniform. The 2D array of pixels in the CT image corresponds to an equal number
of 3D voxels (volume elements) in the patient. Each pixel on the CT image displays the average
x-ray attenuation properties of the tissue in the corresponding voxel. The following figure
illustrates the projection and reconstruction procedure.
4.2. Instrumentation of CT
According to their source beam geometry, CT scanner generations can generally be classified
into three:
• Pencil beam: inefficient use of the x-ray source, excellent scatter rejection
• Fan beam: linear detectors, scattered radiation accounts for around 5%
• Open beam: as used in projection radiography, highest detection of scatter
The rotate/rotate geometry of 3rd generation scanners leads to a situation in which each detector
is responsible for the data corresponding to a ring in the image (Figure 4-8).
Figure 4-7: 3rd generation CT scanner
In a third generation CT scanner, the detectors toward the center of the array make the transmission
measurement It, while the reference detector that measures I0 is positioned near the edge of the
detector array. In a fourth generation scanner, the same detector makes both the transmission
measurement and the reference measurement, so that the gain is identical for both. This is
illustrated in Figure 4.10 below.
If many (n) regions with different linear attenuation coefficients (inhomogeneous medium) occur
along the path of the x-rays, the transmission is:

I_t = I_0 e^(-(µ1x1 + µ2x2 + … + µnxn))

A CT image is computed from a large number of such transmission measurements through the
patient at different positions. The acquisition of a single axial CT image may involve
approximately 800 rays taken at 1,000 different projection angles, for a total of approximately
800,000 transmission measurements.
Each ray acquired in CT is a transmission measurement It through the patient along a line. The un-
attenuated intensity of the x-ray beam Io is also measured during the scan by a reference detector.
For a single ray at a given projection orientation, the projection value P can be calculated as:

I_t = I_0 e^(-µ(x,y)·t)

P = ln(I_0 / I_t) = µ(x,y)·t
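A minimal numeric illustration of these two relations (the µ and t values below are illustrative assumptions, roughly a 10-cm water path):

```python
import math

# Numeric sketch of the transmission and projection equations above, for an
# assumed water-like ray path (mu = 0.195 cm^-1, t = 10 cm).
I0 = 1000.0                      # unattenuated intensity (reference detector)
mu = 0.195                       # linear attenuation coefficient, cm^-1
t = 10.0                         # path length, cm

It = I0 * math.exp(-mu * t)      # transmitted intensity I_t
P = math.log(I0 / It)            # projection value P = ln(I0 / It)

print(round(P, 3))               # 1.95  (equals mu * t, as expected)
```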
In this method, an initial guess about the two-dimensional pattern of x-ray attenuations is made.
The projection data likely to be given by this two-dimensional pattern (model predictions) in
different directions are then calculated which is compared with the measured data. Discrepancies
between the measured data and predicted model data are used in a continuous iterative
improvement of the predicted model array.
i.e., scan I in the vertical direction, scan II in the diagonal direction, and scan III in the horizontal
direction to find the image matrix. The following iterations can be carried out to match the image
matrix to the object matrix:
i. Scan I of the object matrix in the vertical direction gives the vertical sums 6 and 14, which
are distributed down the vertical columns with equal weighting, i.e., 6/2 and 14/2, to get an
initial image matrix.
ii. Scan II of the object matrix in the diagonal direction gives attenuation sums of 4, 8, and 8,
while the image matrix after the first iteration gives 3, 10, and 7. The differences between
the object and image matrices, 1, -2, and 1, are back projected diagonally with equal
weighting as shown below.
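The two iterations above can be sketched in code (a hedged reconstruction of the worked example; the assignment of the three diagonal sums to the pixels c, a+d, and b of a 2x2 matrix [[a, b], [c, d]] is an assumption consistent with the numbers 3, 10, 7 and 1, -2, 1 quoted above):

```python
import numpy as np

# Measured data from the worked example: vertical column sums 6 and 14,
# and diagonal sums 4, 8, 8 (assumed here to correspond to c, a+d, b).
col_sums = np.array([6.0, 14.0])
diag_sums = np.array([4.0, 8.0, 8.0])

est = np.zeros((2, 2))           # initial image matrix

# Iteration i: distribute each column sum equally over its two pixels.
est += (col_sums - est.sum(axis=0)) / 2.0     # -> [[3, 7], [3, 7]]

# Iteration ii: compare the measured diagonal sums with those of the estimate
# (3, 10, 7) and back project the differences (1, -2, 1) with equal weighting.
d_est = np.array([est[1, 0], est[0, 0] + est[1, 1], est[0, 1]])
err = diag_sums - d_est
est[1, 0] += err[0]              # single-pixel diagonal: c
est[0, 0] += err[1] / 2.0        # two-pixel diagonal: a and d share the error
est[1, 1] += err[1] / 2.0
est[0, 1] += err[2]              # single-pixel diagonal: b

print(est)                       # [[2. 8.] [4. 6.]] -- matches the object
```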
B. Simple Backprojection
In this method, the image is reconstructed directly from the projection data without any need to
compare the measured data and the reconstructed model. If projections of an object in the two
directions normal to x and y axes are measured and then this projection data are projected back
into the image plane, the area of interaction receives their summed intensities. It can be seen that
the back projection distribution is a representation of the imaged object. In actual process, the back
projection for all scanned angles is carried out and the total back projected image is made by
summing the contribution from all the scan angles. This method generally gives a crude
reconstruction of the imaged object. Figure 4-4 below illustrates the effect of multiple projections
in reconstructing the image in simple back projection.
Figure 4-14: Simple back projection: a) single detector; b) two detectors; c) multiple detectors.
The main drawback of this reconstruction method is the blurring and noise present in the
reconstructed image. Figure 4-14 above illustrates the blurring effect on the reconstructed image
even for data obtained from multiple projections.
The image can instead be reconstructed by back projection after the data have first been filtered.
The projection data are Fourier transformed into the frequency domain and filtered with a filter
proportional to spatial frequency up to some cutoff frequency. These filtered projections are then
used to construct the final back-projected image. The filtering operation can also be carried out in
the spatial domain using analytic algorithms known as convolution techniques: the shadow
function (projection profile) is convolved with a filter kernel. In principle, the blurring effect is
removed in the convolution process by a suitable weighting of the scan profiles before back
projection. This method has been found to give a good reconstruction of the imaged object.
Different kernels are used for different clinical applications, such as soft tissue imaging or bone
imaging. Figure 4-16 below illustrates the convolution of projected data with the filtering kernels.
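The filtering step can be sketched with a simple ramp filter in the frequency domain (a minimal illustration, not a full clinical kernel; the test projection is an arbitrary flat-topped profile):

```python
import numpy as np

def ramp_filter_projection(proj: np.ndarray) -> np.ndarray:
    """Multiply a 1D projection by a |k| ramp in the frequency domain, the
    core operation of filtered back projection."""
    freqs = np.fft.fftfreq(proj.size)    # sample frequencies, cycles/sample
    return np.real(np.fft.ifft(np.fft.fft(proj) * np.abs(freqs)))

# A flat-topped "shadow function": filtering boosts the edges and removes the
# low-frequency content responsible for the blur of simple back projection.
p = np.zeros(64)
p[24:40] = 1.0
fp = ramp_filter_projection(p)
print(abs(round(fp.sum(), 6)))   # 0.0 -- the DC component is removed
```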
Consider the 2D parallel-beam geometry shown in Figure (a), in which µ(x,y) represents the
distribution of the linear attenuation coefficient in the xy-plane. The parallel X-ray beams travel
through the object at an angle θ with the y-axis. The unattenuated intensity of the X-ray beams is
I0. A new coordinate system (r,s) is defined by rotating (x,y) over the angle θ. This gives the
following transformation formulas:

r = x cos θ + y sin θ
s = -x sin θ + y cos θ
For a fixed angle θ, the measured intensity profile as a function of r, as shown in Figure 4.17(b),
is given by:

Iθ(r) = I0 e^(-∫ µ(x,y) ds), so that pθ(r) = ln(I0 / Iθ(r)) = ∫ µ(x,y) ds

where pθ(r) is the projection of the function µ(x,y) along the angle θ. pθ(r) can be measured for θ
ranging from 0 to 2π. Because parallel beams coming from opposite sides theoretically yield
identical measurements, attenuation profiles acquired at opposite sides contain redundant
information. Therefore, as far as parallel-beam geometry is concerned, it is sufficient to measure
pθ(r) for θ ranging from 0 to π. Stacking all these projections pθ(r) results in a 2D dataset p(r,θ)
called a sinogram.
Assume a distribution µ(x,y) containing a single dot; the corresponding projection function p(r,θ)
then has a sinusoidal shape, which explains the origin of the name sinogram. In mathematics, the
transformation of any function f(x,y) into its sinogram p(r,θ) is called the Radon transform:

p(r,θ) = ∫∫ f(x,y) δ(x cos θ + y sin θ - r) dx dy
Let θ be variable. Then Pθ(k) becomes a 2D function P(k,θ). The projection theorem now states
that

P(k,θ) = F(k cos θ, k sin θ)

i.e., the 1D FT with respect to the variable r of the Radon transform of a 2D function is the 2D FT
of that function, evaluated along the line through the origin of k-space at angle θ. Hence, it is
possible to calculate f(x,y) for each point (x,y), or µ(x,y), based on all its projections pθ(r), with θ
varying between 0 and π.
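The projection theorem can be checked numerically for the simplest case, θ = 0 (a sketch relying on NumPy's FFT conventions; the 32x32 random array is just a stand-in image):

```python
import numpy as np

# Numerical check of the projection (central slice) theorem for theta = 0:
# the 1D FT of the projection onto the x-axis equals the k_y = 0 row of the
# 2D FT of the image.
rng = np.random.default_rng(0)
f = rng.random((32, 32))                 # arbitrary test "image" f(x, y)

proj = f.sum(axis=0)                     # theta = 0 projection: integrate over y
P_1d = np.fft.fft(proj)                  # 1D FT of the projection
F_2d = np.fft.fft2(f)                    # 2D FT of the image
central_slice = F_2d[0, :]               # the k_y = 0 line through the origin

print(np.allclose(P_1d, central_slice))  # True
```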
The attenuation values are normalized to the Hounsfield scale:

CT(x,y) = 1000 x (µ(x,y) - µwater) / µwater

where µ(x,y) is the floating-point attenuation value of the (x,y) pixel before conversion, µwater is
the attenuation coefficient of water, and CT(x,y) is the CT number (or Hounsfield unit) that ends
up in the final clinical CT image. The value of µwater is about 0.195 cm⁻¹ for the x-ray beam
energies typically used in CT scanning. This normalization results in CT numbers ranging from
about -1,000 to +3,000, where -1,000 corresponds to air, fat is about -100, water is 0, soft tissues
range from about +10 to +60, and dense bone and areas filled with contrast agent range up to
+3,000. CT values are relatively stable for a single organ and are to a large extent independent of
the X-ray tube spectrum.
Windowing and leveling the CT image is the way to perform this post-processing task (which
nondestructively adjusts the image contrast and brightness).
• The window width (W) determines the contrast of the image, with narrower windows
resulting in greater contrast.
• The level (L) is the CT number at the center of the window.
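Windowing and leveling can be sketched as a linear gray-scale mapping (an assumed 8-bit display ramp; the W = 400, L = 40 "soft tissue" window is an illustrative choice, not from the text):

```python
import numpy as np

# Sketch of window/level display mapping: CT numbers inside
# [L - W/2, L + W/2] map linearly onto 0..255 display gray levels;
# values outside the window are clipped to black or white.
def window_level(ct: np.ndarray, window: float, level: float) -> np.ndarray:
    lo, hi = level - window / 2.0, level + window / 2.0
    out = (ct - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

hu = np.array([-1000, -100, 0, 40, 400, 3000])   # sample CT numbers
# With an assumed soft-tissue window (W = 400, L = 40), air and dense bone
# are clipped, while the narrow soft-tissue range fills the gray scale.
print(window_level(hu, 400, 40))
```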
4.6. Spiral/Helical CT
Acquiring a single axial slice through a particular organ is of very limited diagnostic use, and so a
volume consisting of multiple adjacent slices is always acquired. One way to do this is for the X-
ray source to rotate once around the patient, then the patient table to be electronically moved a
small distance in the head/foot direction, the X-ray source to be rotated back to its starting position,
and another slice acquired. This is clearly a relatively slow process. The solution is to move the
table continuously as the data are being acquired: this means that the X-ray beam path through the
patient is helical, as shown in Figure 4-22. Such a scanning mode is referred to as either ‘spiral’
or ‘helical’, the two terms being interchangeable. However, there were several hardware and image
reconstruction challenges to overcome.
First, the very high-power cables that feed the CT system cannot physically be rotated continuously
in one direction, and similarly for the data transfer cables attached to the detector bank. A
‘contactless’ method of both delivering power and receiving data had to be designed. Second, the
X-ray tube is effectively operating in continuous mode, and therefore the anode heating is much
greater than for single-slice imaging. Finally, since the beam pattern through the patient is now
helical rather than consisting of parallel projections, modified image reconstruction algorithms had
to be developed. The first ‘spiral CT’, developed in the early 1990s, significantly reduced image
acquisition times and enabled much greater volumes to be covered in a clinical scan.
The pitch p is defined as the distance the table travels per 360° rotation divided by the collimated
beam width. The value of p typically used in clinical scans lies between 1 and 2. The reduction in
tissue radiation dose, compared to an equivalent series of single-slice scans, is equal to the value
of p.
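The pitch relation can be sketched numerically (a minimal illustration; the common definition p = table feed per rotation / beam collimation is assumed, and the feed and width values are illustrative):

```python
# Sketch of the helical pitch definition (assumed convention:
# pitch = table feed per 360-degree rotation / collimated beam width).
def pitch(table_feed_per_rotation_mm: float, beam_width_mm: float) -> float:
    return table_feed_per_rotation_mm / beam_width_mm

# Illustrative values: a 10-mm collimated beam with 15 mm of table travel per
# rotation gives p = 1.5, i.e. roughly a 1.5x dose reduction versus
# contiguous single-slice scans.
print(pitch(15.0, 10.0))   # 1.5
```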
4.7. CT Angiography
Angiography is a minimally invasive medical test performed by injecting a contrast material to
produce pictures of blood vessels in the body.
CT angiography uses a CT scanner to produce detailed images of both blood vessels and tissues
in various parts of the body. An iodine-rich contrast material (dye), which has high attenuation,
is usually injected through a small catheter placed in a vein of the arm. A CT scan is then
performed while the contrast flows through the blood vessels to the various organs of the body.
Physicians use this test to diagnose and evaluate many diseases of blood vessels and related
conditions such as:
• Injury
• aneurysms
• blockages (including those from blood clots or plaques)
• disorganized blood vessels and blood supply to tumors
• congenital (birth related) abnormalities of the heart, blood vessels or various parts of the
body which might be supplied by abnormal blood vessels
Physicians also use this exam to check blood vessels following surgery, and to:
• identify abnormalities, such as aneurysms, in the aorta, both in the chest and abdomen, or
in other arteries.
• detect atherosclerotic(plaque) disease in the carotid artery of the neck, which may limit
blood flow to the brain and cause a stroke.
• identify a small aneurysm or arteriovenous malformation (abnormal communications
between blood vessels) inside the brain or other parts of the body.
4.8. CT Artifacts
Computed tomography can introduce false detail and distortion, that is to say image artifacts, into
the final image as a result of faulty equipment, errors in the stored data, and patient movement.
Because of its digital nature, tomography can also produce generic digital artifacts.
Review Questions
1. Describe the main features that improve the contrast resolution in Computed
Tomography over Conventional projection Radiography.
2. What is the cause of ring artifact in third generation CT? Describe how this problem is
solved by fourth generation CT.
3. Briefly describe the simple backprojection approach to image reconstruction, and explain
how the filtered backprojection method improves these images.
4. Define a CT (Hounsfield) unit, and explain the purpose of post processing using
windowing and leveling in a CT viewing device.
5. Explain the relationship between voxel and pixel.
6. For the object shown in the figure below, draw the projections that would be acquired at
angles θ = 0°, 45°, 90°, 135°, and 180°. Sketch the sinogram for values of θ from 0° to 360°.
References
1. Ed. Joseph D. Bronzino, The Biomedical Engineering Hand Book, Second Edition, CRC
press LLC, 2000.
2. Sawhney G.S., Fundamentals of Biomedical Engineering, New Age International (P)
Limited, Publishers, New Delhi, 2007.
3. Hendee W. R., Ritenour E. R., Medical imaging physics, fourth edition, A John Wiley &
Sons, Inc., Publication, ISBN 0-471-38226-4, New York, 2002.
4. Paul Suetens, Fundamentals of medical imaging, second edition, Cambridge University
Press, ISBN-13 978-0-521-51915-1, New York, 2009.
5. Chris Guy, Dominic ffytche, An Introduction to The Principles of Medical Imaging,
Revised Edition, Imperial College Press, ISBN 1-86094-502-3, London, 2005.
Objectives
After completing this chapter, the reader should be able to:
5.1. Introduction
Magnetic resonance imaging (MRI) is an imaging technique used primarily in medical settings to
produce high quality images of the inside of the human body. MRI is based on the principles of
nuclear magnetic resonance (NMR), a spectroscopic technique used by scientists to obtain
microscopic chemical and physical information about molecules. The protons and neutrons of the
nucleus have a magnetic field associated with their nuclear spin and charge distribution. Resonance
is an energy coupling that causes the individual nuclei, when placed in a strong external magnetic
field, to selectively absorb, and later release, energy unique to those nuclei and their surrounding
environment. This energy is called the NMR signal.
The components of an MRI system are (1) a magnet, (2) gradient coils, (3) a transmitter, (4) a
receiver, (5) a computer, and (6) shim coils. The layout of the system is shown in Figure 5-1.
5.2.1. Spin
Matter is made of molecules, which are made of atoms, which consist of electrons orbiting a
nucleus made of protons and neutrons. The nucleus can be thought of as a charged sphere. It turns
out that the nucleus behaves as if it is rotating; in other words, it has an internal angular
momentum. This angular momentum is called
spin. The spin of the nucleus is quantized, meaning it can only assume values in quanta of a basic
unit ħ or its half-integer multiples: ħ/2, ħ, 3ħ/2, 2ħ, … Different atomic nuclei have different spins.
The magnetic moment’s size is proportional to the spin. The constant of proportionality is called
the gyromagnetic ratio and denoted γ. It depends on the nucleus in question:
Nuclei   Unpaired Protons   Unpaired Neutrons   Net Spin   γ (MHz/T)
¹H              1                  0               1/2        42.58
²H              1                  1                1          6.54
³¹P             1                  0               1/2        17.25
²³Na            1                  2               3/2        11.27
¹⁴N             1                  1                1          3.08
¹³C             0                  1               1/2        10.71
¹⁹F             1                  0               1/2        40.08
Two or more particles with spins having opposite signs can pair up to eliminate the observable
manifestations of spin. An example is helium. In nuclear magnetic resonance, it is unpaired
nuclear spins that are of importance.
Molecules are made of atoms connected by chemical bonds. The most important molecule in MRI
is water. A “typical” water molecule is:
Oxygen-16 has no spin (its 8 protons pair up destructively, as do its 8 neutrons), and 1H has spin
½. Because of symmetry, the two hydrogen atoms are equivalent, in the sense that they behave as
one spin-½ entity with double the magnetic moment.
A single mm³ of tissue contains about 10¹⁹ hydrogen atoms, most in water molecules and fat. Those
two give the main signals in MRI. Other molecules are significantly less prominent because they
are not as plentiful as water/fat. For example, glutamine can be observed in the brain, but its
concentration is only ~10 millimolar. Compare that to the concentration of pure water (5.5 × 10⁴
millimolar) and take into account that tissue is made predominantly of water.
Large macromolecules often don’t contribute to the signal because of another reason: Their
complex structure means they relax very fast; that is, when we irradiate them, they decay back to
their ground state before we can get a significant signal.
Suppose you have N molecules in a volume V, each having a magnetic moment mi. Recall that
the moments are all vectors, so we can imagine a vector “attached” to each atom.
In general, without the large external field of the MRI machine, they would all point in different
directions and the bulk magnetization will be zero.
Upon the application of an external field, the spins tend to align along the field (Figure 5-3) –
although thermal motion will prevent them from doing so completely.
From a microscopic point of view, classical physics teaches us the magnetic moment precesses
about the magnetic field with an angular frequency ω= γB. That is, m’s component perpendicular
to B goes around it in a circle.
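The precession relation ω = γB can be evaluated for protons using the γ/2π value of 42.58 MHz/T from the table earlier in the chapter (a minimal numeric sketch):

```python
# Numeric illustration of the Larmor relation omega = gamma * B, expressed in
# frequency units using gamma/2pi = 42.58 MHz/T for protons (1H).
GAMMA_BAR_1H = 42.58  # MHz per tesla

def larmor_mhz(b0_tesla: float, gamma_bar_mhz_per_t: float = GAMMA_BAR_1H) -> float:
    return gamma_bar_mhz_per_t * b0_tesla

# Protons precess at ~63.9 MHz in a 1.5 T magnet and ~127.7 MHz at 3 T.
print(round(larmor_mhz(1.5), 2))   # 63.87
print(round(larmor_mhz(3.0), 2))   # 127.74
```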
When an atom is placed in a magnetic field, its electrons circulate about the direction of the applied
magnetic field. This circulation causes a small magnetic field at the nucleus which opposes the
externally applied field.
The magnetic field at the nucleus (the effective field) is therefore generally less than the applied
field by a fraction. The electron density around each nucleus in a molecule varies according to the
types of nuclei and bonds in the molecule. The opposing field and therefore the effective field at
each nucleus will vary. This is called the chemical shift phenomenon.
a) b)
The total effect of all fields is therefore the combination of the two: precession (due to external
fields) plus relaxation (due to internal fields).
To excite a spin, irradiate it with an external, perpendicular magnetic field at its resonant frequency
γB0. This is called “on-resonance irradiation”. The external resonant field is achieved using an
external RF coil built into the MRI machine. Figure 5-6 below illustrates the bulk magnetization
orientation before and after applying external RF signal.
The coil is ideally capable of generating a homogeneous, time-dependent RF field in the transverse
plane (transverse to B0), often called the B1-field.
The geometric meaning of ωRF can be understood by simply plotting BRF(t) as a function of time.
We then find out that ωRF is the angular frequency of the RF field vector in the xy-plane:
θ = ωt = γB1t

Therefore, to tip the magnetization by 90° (θ = π/2), the resonant RF field should be applied for a time tπ/2 = π/(2γB1); such a pulse is called a hard π/2-pulse.
The stronger B1, the shorter tπ/2. The shortest possible pulse is desired: the longer the pulse, the more time is wasted and the more the thermal relaxation effects become a nuisance.
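As a rough illustration of the numbers involved (the B1 amplitude below is an assumed, typical order of magnitude, not a value taken from this module):

```python
import math

GAMMA_H = 2 * math.pi * 42.577e6  # 1H gyromagnetic ratio, rad s^-1 T^-1


def hard_pulse_duration(flip_angle_rad, b1_tesla):
    """Duration of a hard pulse: theta = gamma * B1 * t  =>  t = theta / (gamma * B1)."""
    return flip_angle_rad / (GAMMA_H * b1_tesla)


# Assumed B1 of 20 uT gives a pi/2 pulse of roughly 0.3 ms
t90 = hard_pulse_duration(math.pi / 2, 20e-6)
```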
Once we excite the spins onto the xy-plane and leave them there, they will precess about the
effective field. When we turn off the RF, we’re only left with the offset’s field. When viewed
“from above”, the spin moves in a circle in the xy (transverse) plane. The more we wait, the larger
θ becomes.
The thermal effects will eventually cause the spin to return to thermal equilibrium along the z-axis.
This usually takes ~100 ms to ~ 1 sec.
In general, any component of the magnetization can be tipped into the transverse plane to give rise
to a signal. A magnetization vector has three components, Mx, My, Mz. Once tipped onto the xy
plane (say, along the x-axis) it will precess. During this precession Mz remains constant, while Mx
and My (together, the transverse component Mxy) change according to the following.
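For free precession at angular frequency ω this is the standard result (the sign of My depends on the rotation-sense convention; relaxation is ignored here):

```latex
\begin{aligned}
M_z(t) &= M_z(0) = \text{const},\\
M_x(t) &= M_{xy}\cos(\omega t),\\
M_y(t) &= -M_{xy}\sin(\omega t).
\end{aligned}
```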
The NMR spectrum for a molecule or compound is a unique, reproducible “signature” of great
utility in determining the presence of unknown compounds in analytical chemistry. The origin of
the unique pattern of chemical shifts for a molecule is the distortion of the magnetic field caused
by the distribution of electrons that corresponds to each chemical bond. The effective field felt by
each nucleus in the molecule is determined by the chemical bonds that surround it. Therefore, their
resonance frequencies are shifted slightly. The individual protons within a chemical group
experience slight differences in chemical shift as well because a hydrogen nucleus in the center of
the group is influenced more strongly by the field of other hydrogen nuclei than is one at the
periphery of the group. Therefore, a resonance peak for a chemical group is often split slightly into
a number of equally spaced peaks, referred to collectively as a multiplet.
Suppose you have a sample with water, and it has some offset Δω in the rotating frame. You first
excite the water and then let it precess - it will precess with the angular velocity Δω in the rotating
frame. This rotation will induce a voltage in a coil placed around it (in fact, the same RF coil used
to excite it can also be used to measure this voltage, but you can use a different coil as well). The
periodic circular precession will induce a periodic, sinusoidal voltage in the coil. By observing this
signal you can deduce the offset Δω and thereby distinguish one molecule from another.
Fat and water – having different offsets, Δω, owing to their different chemical structures, each
would give rise to its own signal with its own periodicity as shown in Figure (5-13).
That’s the idea of NMR spectroscopy: you’re given a sample with different chemical compounds
and you’re asked to:
Figure 5-14: NMR Spectrum for fat and water (left) and Ethyl Alcohol (right)
• The transverse magnetization gets “eaten up” and eventually disappears with a time
constant T2. Usually T2~ tens of ms in tissue.
• The longitudinal magnetization gets “built up” back to its equilibrium value, with a time
constant T1, which is always larger (but not always by much!) than T2.
• Remember, T1≥T2 always. This means the transverse magnetization gets “eaten up” faster
than the longitudinal magnetization gets “built up”.
The Bloch equations are a set of coupled differential equations which can be used to describe the
behavior of a magnetization vector under any conditions. When properly integrated, the Bloch
equations will yield the X', Y', and Z components of magnetization as a function of time.
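In their standard form (as given in most MRI physics texts; B here is the total field experienced by the spins), the Bloch equations with relaxation read:

```latex
\begin{aligned}
\frac{dM_x}{dt} &= \gamma\,(\mathbf{M}\times\mathbf{B})_x - \frac{M_x}{T_2},\\[2pt]
\frac{dM_y}{dt} &= \gamma\,(\mathbf{M}\times\mathbf{B})_y - \frac{M_y}{T_2},\\[2pt]
\frac{dM_z}{dt} &= \gamma\,(\mathbf{M}\times\mathbf{B})_z - \frac{M_z - M_0}{T_1}.
\end{aligned}
```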
5.7.2. T1 Relaxation
The time constant which describes how MZ returns to its equilibrium value is called the spin lattice
relaxation time (T1). The equation governing this behavior as a function of the time t after its
displacement is:
Mz = M0 (1 − e^(−t/T1))
T1 is the time to reduce the difference between the longitudinal magnetization (MZ) and its
equilibrium value by a factor of e.
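This recovery can be evaluated numerically; after one T1 the magnetization has recovered about 63% of its equilibrium value (a direct consequence of the equation above):

```python
import math


def mz_recovery(t, t1, m0=1.0):
    """Longitudinal magnetization after a 90-degree pulse: Mz = M0 * (1 - exp(-t/T1))."""
    return m0 * (1.0 - math.exp(-t / t1))


# At t = T1 the recovered fraction is 1 - 1/e, approximately 0.632
frac = mz_recovery(1.0, 1.0)
```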
5.7.3. T2 Decay
In addition to the rotation, the net magnetization starts to dephase because each of the spin packets
making it up experiences a slightly different magnetic field and rotates at its own Larmor frequency.
The time constant which describes the return to equilibrium of the transverse magnetization, MXY,
is called the spin-spin relaxation time, T2.
T2 is always less than or equal to T1. The net magnetization in the XY plane goes to zero and then
the longitudinal magnetization grows in until we have Mo along Z.
Any transverse magnetization behaves the same way: it rotates about the direction of the applied
magnetic field and dephases. T1 governs the rate of recovery of the longitudinal magnetization.
In summary, the spin-spin relaxation time, T2, is the time to reduce the transverse magnetization
by a factor of e.
The combination of these two factors is what actually results in the decay of transverse
magnetization. The combined time constant is called T2 star and is given the symbol T2*. The
relationship between the T2 from molecular processes and that from inhomogeneities in the
magnetic field is as follows.
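The standard relationship (with the field-inhomogeneity contribution written as 1/T2′, proportional to the local field spread ΔB0) is:

```latex
\frac{1}{T_2^{*}} \;=\; \frac{1}{T_2} \;+\; \frac{1}{T_2'},
\qquad \frac{1}{T_2'} \simeq \gamma\,\Delta B_0 .
```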
Tissue         T2 (ms)
white matter     92
muscle           47
fat              85
kidney           58
liver            43
To measure T1, a 180° pulse is first applied to a sample in thermal equilibrium. This rotates the net
magnetization down to the −Z axis. The magnetization then undergoes spin-lattice relaxation and
returns toward its equilibrium position along the +Z axis. Before it reaches equilibrium, a 90° pulse
is applied, rotating the remaining longitudinal magnetization into the XY plane where it can be
measured.
A set of experiments can be done with different inversion times (TI), and in each experiment the
maximal value of the signal is taken. This process of finding T1 is called an inversion recovery (IR)
experiment.
Once you excite the spins from thermal equilibrium, they begin precessing at different rates, and
eventually “spread out” in the xy-plane. This means that, if you were to acquire their signal, it
would slowly die out because the spins would end up pointing in all sorts of directions and add up
destructively (remember, the signal is a vector sum of the spins in the xy-plane):
If a 180° pulse is applied, it rotates the magnetization by 180° about an axis in the xy-plane (the
pulse inverts the spins' phases). After an additional time T, the spins end up re-aligning along the
x-axis, forming an echo. If further 180° pulses are applied, the pattern will repeat itself: the spins
dephase, get flipped (by the 180°), rephase, dephase again, get flipped, rephase, and so on. In
practice there is relaxation that needs to be taken into account, so the successive echoes decay.
Contrast in MRI is related primarily to the proton density and to relaxation phenomena (i.e., how
fast a group of protons gives up its absorbed energy). Proton density is influenced by the mass
density (g/cm3). Proton density differs among tissue types, and in particular adipose tissues have
a higher proportion of protons than other tissues, due to the high concentration of hydrogen in fat
[CH3(CH2)nCOOH]. Two different relaxation mechanisms (spin/lattice and spin/spin) are present
in tissue, and the dominance of one over the other can be manipulated by the timing of the
radiofrequency (RF) pulse sequence and magnetic field variations in the MRI system.
If we subject a sample to a field with spatially variable field strength, the spectral distribution of
the received signal will reflect the spatial characteristics of the sample. This idea, with use of
linearly varying fields, is used to great advantage in MRI.
NB:
• We can control each term, Gx(t), Gy(t), Gz(t) and shape it as we wish.
• Note that, e.g., the x-gradient does not create a field along the x-axis. Rather, it
increases/decreases the z-field along the x-axis.
• The gradients Gk are measured in field/unit length. Usually they’re specified in mT/m or
G/cm.
• Different spins will have different positions, and hence will experience a different z-
component of the field:
The gradient, in effect, assigns a linearly increasing offset (i.e. field in the z direction) to the
spins in the sample. The following Figure illustrates this effect.
Spatial localization, fundamental to MR imaging, requires the imposition of magnetic field non-
uniformities: magnetic gradients superimposed upon the homogeneous and much stronger main
magnetic field, which are used to distinguish the positions of the signal in a three-dimensional
object (the patient).
Conventional MRI involves RF excitations combined with magnetic field gradients to localize the
signal from individual volume elements (voxels) in the patient. With appropriate design, the
gradient coils create a magnetic field that linearly varies in strength versus distance over a
predefined field of view (FOV). When superimposed upon a homogeneous magnetic field (e.g.,
the main magnet, Bo), positive gradient field adds to Bo and negative gradient field reduces Bo.
Figure 5-27: the effects of the main magnetic field and the applied slice gradient
Slice thickness is determined by two parameters: (a) the bandwidth (BW) of the RF pulse, and (b)
the gradient strength across the FOV. Consider a pulse B1(t) that is multiplied by cos(ω0t). This
is called modulation. B1(t) is called the RF excitation; ω0 is the carrier frequency, ω0 = γB0.
In summary, the slice select gradient applied during the RF pulse results in proton excitation in a
single plane and thus localizes the signal in the dimension orthogonal to the gradient. It is the first
of three gradients applied to the sample volume.
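The dependence of slice thickness on the two parameters above follows the standard relation Δz = BW/((γ/2π)·G); the bandwidth and gradient values below are assumed, illustrative numbers:

```python
GAMMA_BAR_H = 42.577e6  # 1H gyromagnetic ratio over 2*pi, Hz/T


def slice_thickness_m(rf_bandwidth_hz, gradient_t_per_m):
    """Slice thickness = RF pulse bandwidth / (gamma_bar * gradient strength)."""
    return rf_bandwidth_hz / (GAMMA_BAR_H * gradient_t_per_m)


# Assumed 1 kHz pulse bandwidth with a 10 mT/m slice gradient: ~2.3 mm slice
dz = slice_thickness_m(1000.0, 10e-3)
```

A narrower RF bandwidth or a stronger gradient both produce a thinner slice, which is exactly the trade-off the text describes.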
The signal from each x-position contains a specific center frequency. The over-all spin signal is
the sum of signals along x. A Fourier transform will recover signal contribution at each frequency,
i.e. x-location, and the resulting spectrum will determine a projection of the desired imaged object.
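This frequency-encode-then-Fourier-transform step can be simulated in a few lines (the gradient strength, spin positions and sampling rate below are invented for illustration):

```python
import numpy as np

gamma_bar = 42.577e6       # Hz/T
G = 5e-3                   # assumed readout gradient, T/m
positions = [-0.05, 0.05]  # two "spin groups", in metres
fs = 100e3                 # sampling rate, Hz
t = np.arange(1024) / fs

# Each spin group contributes a complex exponential at its
# gradient-induced offset frequency f(x) = gamma_bar * G * x
signal = sum(np.exp(2j * np.pi * gamma_bar * G * x * t) for x in positions)

# The FFT recovers one peak per x-position (the 1D projection)
spectrum = np.abs(np.fft.fftshift(np.fft.fft(signal)))
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))
x_axis = freqs / (gamma_bar * G)   # map frequency back to position

peaks = x_axis[np.argsort(spectrum)[-2:]]
```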
Figure 5-30: Pulse Sequence Timing Diagram (left) and Frequency Encoding & Data Sampling
(right)
Position of the spins in the third spatial dimension is determined with a phase encode gradient
(PEG), applied before the frequency encode gradient and after the slice encode gradient, along the
third perpendicular axis. Phase represents a variation in the starting point of sinusoidal waves, and
can be purposefully introduced with the application of a short duration gradient. After the initial
localization of the excited protons in the slab of tissue by the SEG, all spins are in phase coherence
(they have the same phase). During the application of the PEG, a linear variation in the precessional
frequency of the excited spins occurs across the tissue slab along the direction of the gradient.
150 | Medical Imaging Systems, JiT, School of Biomedical Engineering.
After the PEG is turned off, spin precession reverts to the Larmor frequency, but now phase shifts
are introduced, the magnitude of which are dependent on the spatial position relative to the PEG
null and the PEG strength. Phase advances for protons in the positive gradient, and phase retards
for protons in the negative gradient, while no phase shift occurs for protons at the null. For each
repetition time (TR) interval, a specific PEG strength introduces a specific phase change across
the FOV. Incremental change in the PEG strength from positive through negative polarity during
the image acquisition produces positionally dependent phase shift at each position along the
applied phase encode direction. Protons at the center of the FOV (PEG null) do not experience any
phase shift. Protons located furthest from the null at the edge of the FOV gain the maximum
positive phase shift with the largest positive gradient, no phase shift with the "null" gradient, and
maximum negative phase shift with the largest negative gradient. Protons at intermediate distances
from the null experience intermediate phase shifts (positive or negative). Thus, each location along
the phase encode axis is spatially encoded by the amount of phase shift.
Decoding the spatial position along the phase encode direction occurs by Fourier transformation,
only after all of the data for the image have been collected. Symmetry in the frequency domain
requires detection of the phase shift direction (positive or negative) to assign correct position in
the final image. Since the PEG is incrementally varied throughout the acquisition sequence (e.g.,
128 times in a 128 X 128 MR image), slight changes in position caused by motion will cause a
corresponding change in phase, and will be manifested as partial (artifactual) copies of anatomy
displaced along the phase encode axis.
The signal is the (3D) Fourier transform of the spin density M0(r):

s(k) = ∫ M0(r) e^(−i2πk·r) dr

It is M0(r), proportional to the density of spins at each position, which is the “image” we’re after.
The signal s(k) can thus be thought of as being acquired in “k-space” (or, in the 2D case, in the
“k-plane”).
Figure 5-36: The mathematical relationship between the acquired k-space data on the left and the
image on the right is a two-dimensional Fourier transform.
The ideas behind all EPI-based methods are the same:
• Excite the spins.
• Start moving around in k-space by varying the gradients, and acquire.
Advantages of EPI:
• Fast: just about the fastest scan technique there is. You can acquire an entire 2D image in
a few tens of milliseconds. An entire 3D image of the brain can be had in a few seconds.
Disadvantages of EPI:
• Puts high demands on the gradients. The rapidly varying gradients can cause biophysical
effects in patients, such as electrical currents in tissues.
• Sensitive to gradient imperfections (and there are imperfections. Lots).
• Resolution is just so-so compared to other scan techniques.
• Signal decays as T2*. Other scan techniques can make the signal decay slower, according
to T2. This decay also means that ...
• EPI is particularly susceptible to magnetic field inhomogeneities. In particular, in the
brain, there are a few notorious areas which are hard or impossible to observe with EPI:
the areas near the frontal and temporal lobes.
Acquisition time = TR × No. of Phase Encode Steps (z-axis) × No. of Phase Encode Steps (y-axis)
× No. of Signal Averages
A three-dimensional Fourier transform (three 1-D Fourier transforms) is applied for each column,
row, and depth axis in the image matrix "cube." Volumes obtained can be either isotropic, the same
size in all three directions, or anisotropic, where at least one dimension is different in size. The
advantage of the former is equal resolution in all directions; reformations of images from the
volume do not suffer from degradations of large sample size. After the spatial domain data
(amplitude and contrast) are obtained, individual two-dimensional slices in any arbitrary plane are
extracted by interpolation of the cube data. When using a standard TR of 600 msec with one
average for a T1-weighted exam, a 128 X 128 X 128 cube requires 163 minutes or about 2.7 hours!
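The acquisition-time arithmetic for the module's 128 × 128 × 128, TR = 600 ms example can be checked directly (frequency encoding covers the third dimension, so only two phase-encode loops enter the product):

```python
def acquisition_time_s(tr_s, n_pe_z, n_pe_y, nsa=1):
    """3D acquisition time = TR x phase-encode steps (z) x phase-encode steps (y) x averages."""
    return tr_s * n_pe_z * n_pe_y * nsa


# TR = 0.6 s, 128 x 128 phase encodes, one average: ~164 minutes (~2.7 hours)
minutes = acquisition_time_s(0.6, 128, 128) / 60.0
```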
The concept of FOV, voxel size and resolution are intimately related. In a 2D image, the number
of voxels in an image along the x and y axes is:
where Δx and Δy are the voxel sizes. Assuming that (e.g., for the x-direction) we select Δkx to
equal 1/FOVx, we have:
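The resulting relations (reconstructed standard k-space sampling results, written for the x-direction) are:

```latex
N_x = \frac{\mathrm{FOV}_x}{\Delta x}, \qquad
\Delta k_x = \frac{1}{\mathrm{FOV}_x}
\;\;\Rightarrow\;\;
\Delta x = \frac{1}{N_x\,\Delta k_x} = \frac{\mathrm{FOV}_x}{N_x}.
```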
Contrast sensitivity is the major attribute of MR. The spectacular contrast sensitivity of MR
enables the exquisite discrimination of soft tissues and of contrast due to blood flow. This sensitivity
is achieved through differences in the T1, T2, spin density, and flow velocity characteristics.
Contrast dependent upon these parameters is achieved through the proper application of
pulse sequences. MR contrast materials, usually susceptibility agents that disrupt the local
magnetic field to enhance T2 decay or provide a relaxation mechanism for enhanced T1 relaxation
(e.g., bound water in hydration layers), are becoming important enhancement agents for the
is ultimately limited by the SNR and presence of image artifacts.
Oxygenated blood is diamagnetic while deoxygenated blood is paramagnetic. As blood flows and
oxyhemoglobin is converted to deoxyhemoglobin, the local magnetization of the spins is affected:
T2* is longer for tissues around oxygenated blood than for tissues around deoxygenated blood.
This fact allows areas of high metabolic activity to produce a correlated signal and is the basis of
functional MRI (fMRI).
Neuronal activation increases local blood flow, which lowers the deoxyhemoglobin content per unit
volume of brain tissue. Signal intensity in a BOLD-sensitive image therefore increases in regions of
the brain engaged by a “task” relative to rest in T2* images.
Because the BOLD sequence produces images that are highly dependent on blood oxygen levels,
areas of high metabolic activity will be enhanced when the prestimulus image is subtracted, pixel
by pixel, from the poststimulus image. These fMRI experiments determine which sites in the brain
are used for processing data.
The most common MR sequence used to collect the data is multi-slice echo-planar imaging (EPI),
since data acquisition is fast enough to obtain whole-brain coverage in a few seconds. Many
different types of stimulus can be used: visual, motor or auditory-based. The changes in image
intensity in activated areas are very small, typically only 0.2–2% using a 3 Tesla scanner, and so
experiments are repeated a number of times with periods of rest (baseline) between each
stimulation block. Data processing involves correlation of the MRI signal intensity for each pixel
with the stimulation waveform, followed by statistical analysis to determine whether the
correlation is significant. Typical scans may take 10–40 minutes, with several hundred repetitions
of the stimulus/rest paradigm.

Figure 5-40: Typical fMRI image
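The per-pixel correlation step described above can be sketched as follows (the block lengths, signal change and noise level are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Boxcar stimulation waveform: alternating 10-scan rest / 10-scan task blocks
stim = np.tile(np.concatenate([np.zeros(10), np.ones(10)]), 5)

# Hypothetical pixel time series: a ~1% task-locked signal change plus noise
active_pixel = 100.0 + 1.0 * stim + rng.normal(0.0, 0.2, stim.size)
inactive_pixel = 100.0 + rng.normal(0.0, 0.2, stim.size)

# Correlate each pixel's time course with the stimulation waveform
r_active = np.corrcoef(active_pixel, stim)[0, 1]
r_inactive = np.corrcoef(inactive_pixel, stim)[0, 1]
```

In a real analysis the stimulus waveform is usually convolved with a hemodynamic response model before correlation, and the resulting statistic is thresholded across the whole volume.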
Clinical Purposes: fMRI is used clinically for:
• pre-surgical planning,
• Finding brain pathologies,
• Assessment of patients with disorders of consciousness (coma, vegetative state,
minimally conscious state, locked-in syndrome)
Research Purposes: many researchers from several disciplines are using fMRI to better understand
brain function in animals and humans.
Preprocessing Methods
• Motion correction
• Slice time correction
• Spatial filtering
• Intensity normalization
• Temporal filtering
Objectives
By studying this chapter, the reader should be able to:
6.1. Introduction
Nuclear medicine is an emission imaging technique based on radioisotopes. The imaging
procedure requires the injection or administration of a small volume of a soluble carrier substance,
labeled by a radioactive isotope. The blood circulation distributes the injected solution throughout
the body. Ideally the carrier substance is designed to concentrate preferentially in a target organ
or around a particular disease process. The radioactive tracer is ideally just an emitter of gamma-
rays with energies in the range 60–510 keV. The emitted photons leave the body, to be collimated
and counted using a large area electronic photon detector, sometimes called an Anger camera.
Most nuclear medicine studies require the use of a detector outside the body to measure the rate of
accumulation, release, or distribution of radioactivity in a particular region inside the body. The
rate of accumulation or release of radioactivity may be measured with one or more detectors,
usually NaI(Tl) scintillation crystals, positioned at fixed locations outside the body. Images of the
distribution of radioactivity usually are obtained with a stationary imaging device.
In the subsequent sections, radioactivity, the main radionuclides and carrier molecules in common
use, the physics of the Anger camera, planar scintigraphy and the basic principles of SPECT and
PET will be discussed.
6.2. Radioactivity
A nucleus not in its stable state will adjust itself until it is stable either by ejecting portions of its
nucleus or by emitting energy in the form of photons (gamma rays). This process is referred to as
radioactive decay. A radioactive isotope is one which undergoes a spontaneous change in the
composition of the nucleus, termed as ‘disintegration’, resulting in emission of energy.
The quantity of radioactive material, expressed as the number of radioactive atoms undergoing
nuclear transformation per unit time, is called activity (A). Described mathematically, activity
is equal to the change (dN) in the total number of radioactive atoms (N) in a given period of time
(dt):

A = −dN/dt
Activity is measured in units of curies (Ci), where one curie equals 3.7×10¹⁰ disintegrations per
second. In nuclear medicine, activities from 0.1 to 30 mCi of a variety of radionuclides are
typically used for imaging studies, and up to 300 mCi of iodine-131 are used for therapy. The SI
unit for radioactivity is the becquerel (Bq), named for Henri Becquerel, who discovered
radioactivity in 1896. One millicurie (mCi) is equal to 37 megabecquerels (1 mCi = 37 MBq).
The number of atoms decaying per unit time (dN/dt) is proportional to the number of unstable
atoms N(t) present at that time: dN/dt = −λN(t), where λ is the decay constant.
The fundamental decay equation is given by:

N(t) = N₀ e^(−λt)

The number of atoms decaying per unit time, the activity, is then:

A(t) = −dN(t)/dt = λ N₀ e^(−λt) = A₀ e^(−λt)
One of the most common measures of radioactivity is the half-life (t½), the time required for the
radioactivity to drop to one-half of its value:

t½ = ln 2 / λ ≈ 0.693/λ
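A short numeric sketch of exponential decay (the 6-hour half-life below is the well-known approximate value for ⁹⁹ᵐTc, used here purely as an example):

```python
import math


def activity(a0, t_hours, half_life_hours):
    """A(t) = A0 * exp(-lambda * t), with lambda = ln(2) / half-life."""
    lam = math.log(2.0) / half_life_hours
    return a0 * math.exp(-lam * t_hours)


# After two half-lives (12 h for a 6 h half-life) the activity drops to one quarter
a = activity(10.0, 12.0, 6.0)
```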
Most radionuclides decay in one or more of the following ways: (a) alpha decay, (b) beta-minus
emission, (c) beta-plus (positron) emission, (d) electron capture, or (e) isomeric transition.
Alpha-emitting radionuclides attached to antibodies can be used to kill acute myeloid leukemia
cells in a procedure called targeted alpha therapy.
The antineutrino is an electrically neutral subatomic particle whose mass is much smaller than that
of an electron. Any excess energy in the nucleus after beta decay is emitted as gamma rays, internal
conversion electrons, and other associated radiations.
The positron is also referred to as a positive beta particle or positive electron or antielectron. In
positron decay, a neutrino is also emitted. In many ways, positron decay is the mirror image of
beta decay: positive electron instead of negative electron, neutrino instead of antineutrino. Unlike
the negative electron, the positron itself survives only briefly. It quickly encounters an electron
(electrons are plentiful in matter), and both are annihilated producing a gamma radiation. This is
why it is considered an anti-electron. Positron emission requires an energy difference between the
parent and daughter atoms of at least 1.022 MeV (twice the electron rest-mass energy).
The capture of an orbital electron creates a vacancy in the electron shell, which is filled by an
electron from a higher-energy shell. This electron transition results in the emission of characteristic
x-rays and/or Auger electrons. Electron capture radio nuclides used in medical imaging decay to
atoms in excited states that subsequently emit externally detectable x-rays or gamma rays or both.
The energy is released in the form of gamma rays or internal conversion electrons, or both.
6.3. Radiopharmaceuticals
Radiopharmaceuticals are medicinal formulations containing radioisotopes which are safe for
administration in humans for diagnosis or for therapy. Nuclear imaging depends critically on
the design and manufacture of suitable radiopharmaceuticals: substances that can take part in
metabolism and are labeled with one or more gamma-emitting radionuclides. Although a few
radionuclides such as ¹³³Xe and ¹²³I are used in elemental form, the majority of
radiopharmaceuticals consist of two parts: a carrier molecule (pharmaceutical) and a suitable
incorporated radionuclide.
The radionuclide chosen for labeling must have chemical properties which allow it to be
incorporated chemically into a carrier, without unintentionally completely altering the designed
metabolic function of that carrier. The radionuclide half-life is also a very important factor. For
nuclear medicine diagnostics only nuclides with half-lives of seconds up to hours can be used.
Radionuclides with too short a half-life may decay prior to application, and radionuclides with
too long a half-life give an extremely low counting rate and may cause unacceptable radiation
exposure.
Although many naturally occurring radioactive nuclides exist, all of those commonly administered
to patients in nuclear medicine are artificially produced. Radioactive elements (radionuclides) are
produced by:
• nuclear fission or neutron activation in a reactor,
• charged-particle bombardment in a cyclotron, or
• elution from a radionuclide generator (e.g., the ⁹⁹Mo/⁹⁹ᵐTc generator).
For every gamma-ray that hits the scintillation crystal a few thousand photons are produced, each
with a very low energy of a few electron volts. These very low light signals need to be amplified
and converted into an electrical current that can be digitized: PMTs are the devices used for this
specific task. A photomultiplier tube (PMT) consists of a photocathode on top, followed by a
cascade of dynodes.
The PMT is glued to the crystal. Because the light photons should reach the photocathode of the
PMT, the crystal must be transparent to the visible photons. The energy of the photons hitting the
photocathode releases some electrons from the cathode. These electrons are then accelerated
toward the positively charged dynode nearby. They arrive with higher energy (the voltage
difference × the electron charge), liberating additional secondary electrons.
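The overall multiplication follows the standard gain relation, gain = δ^N for N dynodes each releasing about δ secondary electrons per incident electron (the per-dynode yield below is an assumed, typical value):

```python
def pmt_gain(delta_per_dynode, n_dynodes):
    """Overall PMT multiplication: delta secondary electrons per stage, over n stages."""
    return delta_per_dynode ** n_dynodes


# An assumed yield of ~5 electrons per stage over 10 dynodes gives a gain near 10^7
gain = pmt_gain(5, 10)
```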
The relative magnitudes and signs of these summed signals then define the estimated (X, Y)
location of the scintillation event in the crystal given by:
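A standard form of this Anger positioning logic, reconstructed from the usual formulation (the module's exact expression may differ slightly), is:

```latex
X = k\,\frac{X^{+} - X^{-}}{Z}, \qquad
Y = k\,\frac{Y^{+} - Y^{-}}{Z}, \qquad
Z = X^{+} + X^{-} + Y^{+} + Y^{-}.
```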
where k is an empirical constant of the particular camera. The sum of all four amplifier outputs is
a measure of the total visible light and thus the initial energy of the incident gamma-ray. The
summed signal is sent to a pulse height analyzer circuit (PHA), which performs the energy
analysis. The role of the PHA is to determine which of the recorded events correspond to gamma-
rays that have not been scattered within tissue (primary radiation) and should be retained, and
which have been Compton scattered in the patient, do not contain any useful spatial information,
and so should be rejected. Since the amplitude of the voltage pulse from the PMT is proportional
to the energy of the detected gamma-ray, discriminating on the basis of the magnitude of the output
of the PMT is equivalent to discriminating on the basis of gamma-ray energy.
Planar imaging with a parallel-hole collimator, like the X-ray analogue, provides no depth
information; rather the activity over the entire patient thickness below the collimator is integrated
to form the image. Planar gamma imaging, like its X-ray analogue, has the merit of simplicity and
relative speed and thus, in many clinical procedures, the single projection image is the chosen
method for preliminary investigations.
SPECT uses essentially the same instrumentation and many of the same radiotracers as planar
scintigraphy, and most SPECT machines can also be used for planar scans. The majority of SPECT
scans are used for myocardial perfusion studies to detect coronary artery disease or myocardial
infarction, although SPECT is also used for brain studies to detect areas of reduced blood flow
associated with stroke, epilepsy or neurodegenerative diseases such as Alzheimer's disease.
Since the array of PMTs is two-dimensional in nature, the data can be reconstructed as a series of
adjacent slices. A 360° rotation is generally needed in SPECT since the source-to-detector distance
affects the distribution of γ-ray scatter in the body, the degree of tissue attenuation and also the
spatial resolution, and so projections acquired 180° apart are not identical. A
converging collimator is often used in SPECT to increase the SNR of the scan. In a SPECT brain
scan each image of a multi-slice data set is formed from typically 500 000 counts, with a spatial
resolution of ~7 mm. Myocardial SPECT has a lower number of counts, typically 100 000 per
image, and a spatial resolution about twice as coarse as that of the brain scan (since the source to
detector distance is much larger due to the size of the body).
The stack of projections at different orientations produces a sinogram (Figure 6-11); an
appropriate reconstruction algorithm then recovers the 2D image.
The conservation of energy demands that the energy of the two γ-rays is supplied by the total
energy of the positron and the electron. By the time the annihilation takes place, nearly all the
initial kinetic energy of the positron has been dissipated in tissue. The annihilation event results in
the conversion of the combined electron/positron rest mass, 1.022 MeV, into photon energy. Two
γ-rays, each with an energy of 511 keV, are produced to conserve energy. In addition, since the
electron/positron pair is essentially at rest when annihilation takes place, the two γ-rays have to
leave the annihilation site travelling in nearly opposite directions in order to conserve linear
momentum.
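The 511 keV figure is simply the electron rest-mass energy, E = mₑc², which can be verified directly from physical constants:

```python
# Rest-mass energy of the electron, E = m_e * c^2
M_E = 9.1093837e-31    # electron mass, kg
C = 2.99792458e8       # speed of light, m/s
J_PER_KEV = 1.602176634e-16  # joules per keV

e_kev = M_E * C**2 / J_PER_KEV  # ~511 keV per annihilation photon
```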
The two γ-rays, together, define a line that passes close to the point of emission of the original
positron. Coincidence detection is employed with a circular position-sensitive detector encircling
the patient. A single coincidence event, with its two detected ends, defines a line of sight through
the patient, somewhere along which, the annihilation took place. Since the positron mean free path
is very short, the distribution of lines of sight reflects the concentration of the radioactive tracer,
leaving aside the inevitable distortions brought about by scattering and absorption of the emitted
γ-rays. The relatively high γ-ray energy ensures minimal photoelectric absorption in tissue but
significant amounts of Compton scattering, both in the patient and the detector. PET is a wholly
tomographic procedure. The collection of very many events provides enough data to assemble a
series of projections that can be combined to reconstruct 2D images of isotope concentration.
The two γ -rays, travelling in nearly opposite directions, themselves define a line and thus a
collimator is not required, as long as position-sensitive detection is sufficiently precise. The
absence of a collimator in PET makes a major contribution to an overall increase in its efficiency
and spatial resolution with respect to SPECT.
The basic PET camera consists of a number of segmented 360° detector rings with a common axis,
arranged at intervals of about one centimeter.
Figure 6-15: PET ring of detectors
Slices for reconstruction are chosen by using particular detector rings along the axis. Each segment
within a ring is a position-sensitive detector.
The projection data in PET is the integral of the activity along the line (or tube) of response.
However, due to false coincidence detection of scattered photons, detector efficiency and
attenuation, the measured data may deviate from the assumed projection data.
PET has many clinical applications. It can be applied in oncology, neurology and cardiology. The
following table illustrates some of the PET tracers and their application.
PET imaging, particularly in oncologic applications, often reveals suspicious lesions, but provides
little information regarding their exact locations in the organs of the patients. Like SPECT, PET
can also be fused with CT. PET/CT has its major clinical applications in the general areas of
oncology, cardiology and neurology. In oncology, whole body imaging is used to identify both
primary tumors and secondary metastatic disease remote from the primary source. The major
disadvantages of PET/CT are the requirement for an on-site cyclotron to produce positron emitting
radiotracers, and the high associated costs.
Consider Figure 6-18. The whole-body projection image (left) shows a focal FDG enhancement in
the right lower abdomen (dotted line). In the corresponding CT image the melanoma metastasis in
the colon wall (arrow) is easily overlooked. The PET image clearly shows the metastasis. However,
only the PET/CT fusion allows the exact anatomic localization of the metastasis, which was then
removed by surgery.

Figure 6-18: comparison of CT, PET and PET/CT images
In systems that use collimation to form images, the spatial resolution rapidly deteriorates with
distance from the face of the imaging device. This causes the spatial resolution to deteriorate from
the edge to the center in transverse SPECT images. In contrast, the spatial resolution in a transverse
PET image is best in the center. Table 6.3 compares SPECT and PET systems.
References
1. Sawhney G.S., Fundamentals of Biomedical Engineering, New Age International (P) Limited,
Publishers, New Delhi, 2007.
2. Hendee W. R., Ritenour E. R., Medical imaging physics, fourth edition, A John Wiley &
Sons, Inc., Publication, ISBN 0-471-38226-4, New York, 2002.
3. Paul Suetens, Fundamentals of medical imaging, second edition, Cambridge University
Press, ISBN-13 978-0-521-51915-1, New York, 2009.
4. Chris Guy, Dominic ffytche, An Introduction to The Principles of Medical Imaging,
Revised Edition, Imperial College Press, ISBN 1-86094-502-3, London, 2005.
5. Bushberg J. T., Seibert J. A., Leidholdt E. M., Boone J. M., The Essential Physics of Medical
Imaging, second edition, Lippincott Williams & Wilkins, a Wolters Kluwer company, USA, 2002.