
Medical Imaging Systems

Course Material

Prepared by: Gizeaddis Lamesgin (Ph.D.)

School of Biomedical Engineering


Jimma Institute of Technology
Jimma University
1st version: February 2016
Revised: December 2022
CONTENTS

CHAPTER ONE ...................................................................................................................... 1

1. INTRODUCTION TO MEDICAL IMAGING ..................................................... 1


Objectives .................................................................................................................................... 1

1.1. Introduction ...................................................................................................................... 1

1.2. History of Medical Imaging ............................................................................................. 3

1.3. Comparison of Diagnostic Medical Imaging Techniques ................................................ 6

Review Questions ........................................................................................................................ 7

References ................................................................................................................................... 9

CHAPTER TWO ...................................................................................................10

2. ULTRASONIC .................................................................................................10
Objectives .................................................................................................................................. 10

2.1. Introduction .................................................................................................................... 10

2.2. Physics of Ultrasound..................................................................................................... 11

2.2.1. Pressure and Intensity of sound wave ..................................................................... 12

2.3. Interaction of Ultrasound with Matter ............................................................................ 13

2.3.1. Reflection, transmission and refraction at tissue boundaries .................................. 14

2.3.2. Scattering by small structures ................................................................................. 17

2.3.3. Absorption and total attenuation of ultrasound energy in tissue............................. 17

2.4. Instrumentation of Ultrasound ....................................................................................... 18

2.4.1. Transducers ............................................................................................................. 19

2.5. Ultrasound Image Data Acquisition ............................................................................... 23

2.6. Principles of A-Mode, B-Mode, M-Mode ..................................................................... 28

2.6.1. A-Mode ................................................................................................................... 28

i | Medical Imaging Systems, JiT, School of Biomedical Engineering.


2.6.2. B-mode.................................................................................................................... 29

2.6.3. M-mode or Time position plot ................................................................................ 29

2.7. Image Reconstruction ..................................................................................................... 30

2.7.1. Filtering ................................................................................................................... 30

2.7.2. Envelope detection .................................................................................................. 31

2.7.3. Attenuation correction ............................................................................................ 31

2.7.4. Scan converters ....................................................................................................... 32

2.8. Doppler Ultrasound ........................................................................................................ 32

2.8.1. Data acquisition ...................................................................................................... 35

2.9. Image Display ................................................................................................................ 36

2.10. Image Resolution........................................................................................................... 37

2.11. Image Storage ................................................................................................................ 37

Review Questions ...................................................................................................................... 38

References ................................................................................................................................. 38

CHAPTER THREE ...............................................................................................40

3. X-RAY IMAGING...........................................................................................40
Objectives .................................................................................................................................. 40

3.1. Introduction .................................................................................................................... 40

3.2. Production of X-rays ...................................................................................................... 41

3.3. X-ray tube ....................................................................................................................... 44

3.3.1. Glass envelope and tube housing ............................................................................ 45

3.3.2. Cathode ................................................................................................................... 45

3.3.3. Anode ...................................................................................................................... 46

3.4. X-ray Tube Operation and Rating .................................................................................. 49

3.5. Filtration diagnostic X-ray ............................................................................................. 50



3.6. X-ray tube collimators assembly .................................................................................... 51

3.7. Causes of x-ray tube failures and steps to extend tube life ............................................ 52

3.8. X-Ray Generator ............................................................................................................ 53

3.8.1. High-frequency generator ....................................................................................... 54

3.9. Screen-Film Radiography .............................................................................................. 57

3.9.1. Screen-film detector ................................................................................................ 58

3.9.2. Intensifying Screens ................................................................................................ 58

3.9.3. Film ......................................................................................................................... 60

3.9.4. Film Processing ....................................................................................................... 61

3.10. Computed Radiography (CR) ..................................................................................... 62

3.10.1. Imaging plate .......................................................................................................... 62

3.10.2. CR Reader ............................................................................................................... 63

3.10.3. A photomultiplier tube (PMT) ................................................................................ 64

3.11. Digital Radiography (DR) .......................................................................................... 65

3.11.1. Indirect Flat Panel Detectors ................................................................................... 65

3.11.2. Direct flat panel detectors ....................................................................................... 67

3.12. Scatter radiation in Projection Radiography ............................................................... 68

3.12.1. Contrast ................................................................................................................... 68

3.12.2. Anti-scatter Grids .................................................................................................... 69

3.13. Mammography............................................................................................................ 72

3.13.1. Cathode and Filament Circuit ................................................................................. 73

3.13.2. Mammography X-ray Tube Anode ......................................................................... 74

3.13.3. Collimation ............................................................................................................. 75

3.13.4. Automatic exposure control (AEC) ........................................................................ 75

3.13.5. Compression ........................................................................................................... 76



3.13.6. Mammography Image Receptors ............................................................................ 77

3.14. Fluoroscopy ................................................................................................................ 78

3.14.1. Image Intensifier (II) ............................................................................................... 79

3.14.2. Characteristics of II Performance ........................................................................... 82

3.14.3. Video Camera ......................................................................................................... 85

3.14.4. Common procedures using fluoroscopy ................................................................. 87

3.15. Digital Subtraction Angiography (DSA) .................................................................... 87

3.16. X-ray mA testing tools and Quality Assurance kit ..................................................... 88

3.16.1. Quality Assurance (QA) tests of x – ray machines ................................................. 88

3.16.2. Protecting cloths and personal dosimeters .............................................................. 90

Review Questions ...................................................................................................................... 92

References ................................................................................................................................. 93

CHAPTER FOUR ..................................................................................................95

4. CT SCANNING ...............................................................................................95
Objectives .................................................................................................................................. 95

4.1. Introduction ............................................................................................................................ 95

4.2. Instrumentation of CT .................................................................................................... 97

4.3. CT – Scanner Generations.............................................................................................. 98

4.3.1. The First Generation: Rotate/Translate, Pencil beam ............................................. 98

4.3.2. The Second Generation: Rotate/Translate, Narrow fan beam ................................ 99

4.3.3. Third Generation: Rotate/Rotate, Fan beam ......................................................... 100

4.3.4. Fourth generation: Rotate/Stationary .................................................................... 101

4.3.5. Fifth Generation: Stationary/Stationary ................................................................ 102

4.3.6. Sixth Generation: Helical CT................................................................................ 103

4.3.7. Seventh Generation: Multiple detector array ........................................................ 103



4.4. CT Image Formation .................................................................................................... 104

4.4.1. Tomographic reconstruction ................................................................................. 105

4.4.2. Projection and Radon transform ........................................................................... 110

4.4.3. Backprojection and Inverse Radon transform ....................................................... 112

4.4.4. The central slice theorem ...................................................................................... 112

4.5. CT Image Display ........................................................................................................ 113

4.5.1. CT numbers or Hounsfield Units .......................................................................... 113

4.5.2. Windowing and Leveling ...................................................................................... 114

4.6. Spiral/Helical CT.......................................................................................................... 115

4.7. CT Angiography ........................................................................................................... 117

4.8. CT Artifacts .................................................................................................................. 118

4.8.1. Patient movement.................................................................................................. 118

4.8.2. Partial volume effects ........................................................................................... 119

4.8.3. Beam Hardening and metallic implants Artifacts ................................................. 120

Review Questions .................................................................................................................... 120

References ............................................................................................................................... 122

CHAPTER FIVE..................................................................................................123

5. MAGNETIC RESONANCE IMAGING (MRI) .........................................123


Objectives ................................................................................................................................ 123

5.1. Introduction .................................................................................................................. 123

5.2. Spin Physics ................................................................................................................. 124

5.2.1. Spin ....................................................................................................................... 124

5.2.2. Molecules and Their Spins.................................................................................... 125

5.2.3. Bulk Magnetization ............................................................................................... 126

5.3. Effect of a Magnetic Field ............................................................................................ 127



5.3.2. Chemical Shift ........................................................................................................... 128

5.3.3. Sources of Magnetic Fields ................................................................................... 128

5.4. Excitation: The RF Coils .............................................................................................. 129

5.4.3. The Rotating Frame .............................................................................................. 131

5.5. Signal Acquisition ........................................................................................................ 134

5.6. NMR Spectroscopy ...................................................................................................... 135

5.7. Thermal Relaxation ...................................................................................................... 137

5.7.1. Bloch Equations .................................................................................................... 138

5.7.2. T1 Relaxation ........................................................................................................ 139

5.7.3. T2 Decay................................................................................................................ 139

5.8. Measuring T2, T1, and T2* ............................................................................................ 142

5.8.1. T1- Inversion Recovery ......................................................................................... 142

5.8.2. T2– Spin Echo Sequence ...................................................................................... 142

5.8.3. T1 & T2 Relaxation Trends ................................................................................... 143

5.8.4. Proton density ....................................................................................................... 144

5.9. Coils Localization of the MRI signal: The Gradient .................................................... 145

5.9.1. Slice select Gz gradient ......................................................................................... 148

5.9.2. The frequency encode Gx gradient (FEG) ............................................................ 149

5.9.3. Phase Encoding Gy Gradient ................................................................................. 150

5.9.4. Gradient Sequencing ............................................................................................. 152

5.10. "K-Space" Data Acquisition and Image Reconstruction .......................................... 153

5.10.1. Echo Planar Imaging (EPI) ................................................................................... 154

5.10.2. Three-Dimensional Fourier Transform Image Acquisition .................................. 156

5.11. Image Characteristics ............................................................................................... 157

5.11.1. Spatial Resolution and Contrast Sensitivity .......................................................... 157



5.11.2. Signal-to-Noise Ratio (SNR) ................................................................................ 159

5.12. Functional MRI (fMRI) ............................................................................................ 160

5.12.1. fMRI Application .................................................................................................. 162

5.12.2. Sources of noise in fMRI ...................................................................................... 162

Review Questions .................................................................................................................... 163

References ............................................................................................................................... 165

CHAPTER SIX ....................................................................................................166

6. NUCLEAR MEDICINE................................................................................166
Objectives ................................................................................................................................ 166

6.1. Introduction .................................................................................................................. 166

6.2. Radioactivity ................................................................................................................ 167

6.2.1. Alpha decay .......................................................................................................... 168

6.2.2. Beta-minus (Negatron) decay ............................................................................... 169

6.2.3. Beta-plus (Positron) decay .................................................................................... 169

6.2.4. Electron capture decay .......................................................................................... 170

6.2.5. Isomeric Transition ............................................................................................... 170

6.3. Radiopharmaceuticals .................................................................................................. 171

6.4. Gamma camera ............................................................................................................. 174

6.4.1. The collimator ....................................................................................................... 175

6.4.2. The Scintillator Crystal ......................................................................................... 175

6.4.3. The Anger position network and pulse height analyzer........................................ 177

6.5. Planar scintigraphy ....................................................................................................... 178

6.6. Single photon emission computed tomography (SPECT)............................................ 180

6.7. Positron Emission Tomography (PET) ........................................................................ 183

6.8. Comparison of SPECT and PET .................................................................................. 187



Review Questions .................................................................................................................... 189

References ............................................................................................................................... 190



CHAPTER ONE
1. INTRODUCTION TO MEDICAL IMAGING
____________________________________________________________________________________

Objectives

After completing this chapter, students should be able to:

• Explain the motives of medical imaging


• Identify the energy sources, tissue properties, and image properties employed in medical
imaging.
• Provide a summary of the history of medical imaging.
• Explain the pivotal role of x-ray computed tomography, magnetic resonance imaging and
nuclear medicine imaging in the evolution of modern medical imaging.

1.1. Introduction
The human body is an incredibly complex system. Acquiring data about its static and dynamic
properties results in massive amounts of information. One of the major challenges to researchers
and clinicians is the question of how to acquire, process, and display vast quantities of information
about the body so that the information can be assimilated, interpreted, and utilized to yield more
useful diagnostic methods and therapeutic procedures. In many cases, the presentation of
information as images is the most efficient approach to addressing this challenge. As humans we
understand this efficiency; from our earliest years we rely more heavily on sight than on any other
perceptual skill in relating to the world around us.

Physicians increasingly rely as well on images to understand the human body and intervene in the
processes of human illness and injury. The use of images to manage and interpret information
about biological and medical processes is certain to continue its expansion, not only in clinical
medicine but also in the biomedical research enterprise that supports it.

Images of a complex object such as the human body reveal characteristics of the object such as its
transmissivity, opacity, emissivity, reflectivity, conductivity, and magnetizability; changes in
these characteristics over time, or with the application of energy, are what make imaging possible.
Images of the human body are derived from the interaction of energy with human tissue. The
energy can take the form of radiation, magnetic or electric fields, or acoustic waves, and the
interaction occurs at the molecular or atomic level. Medical imaging is therefore an
interdisciplinary subject: it requires knowledge of physics, because it deals with matter, radiation
and energy; of mathematics, to apply linear algebra, numerical methods and statistics; of life
sciences such as biology, physiology and medicine; of engineering, for optimization and
implementation; and of computer science, for image processing and image reconstruction.

Based on the energy it uses, medical imaging can be classified into active imaging, which relies
on the external delivery of energy, as in x-ray, magnetic resonance, nuclear medicine and
ultrasound imaging systems; and passive imaging, which relies on signals generated within the
body, as in EEG, ECG and EMG.

Based on the location of the radiation source, medical imaging systems can be classified as
external-source, such as x-ray, ultrasound and radiofrequency (nuclear magnetic resonance)
systems, and internal-source, such as the radioactive tracers used in positron emission
tomography (PET) and single photon emission computed tomography (SPECT).


Figure 1-1: The electromagnetic spectrum



The electromagnetic spectrum includes all the energy sources that are used for medical imaging.
The radio waves used in magnetic resonance imaging (MRI) and the gamma rays applied in
nuclear medicine imaging mark its two extremes. Frequency, energy and the associated hazard
all increase as we move from radio waves toward gamma rays.

Figure 1-1 shows the common bands of the electromagnetic spectrum and the corresponding
medical imaging modalities.
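The trend from low-energy radio waves to high-energy gamma rays follows the Planck relation E = hf. The short sketch below makes this concrete; the per-modality frequencies are representative order-of-magnitude assumptions for illustration, not exact values from this text.

```python
# Photon energy across the electromagnetic spectrum via the Planck relation E = h*f.
PLANCK_H = 6.626e-34  # Planck constant, J*s
EV = 1.602e-19        # joules per electron-volt

def photon_energy_ev(frequency_hz):
    """Photon energy in electron-volts for a given frequency in Hz."""
    return PLANCK_H * frequency_hz / EV

# Representative frequencies (assumed, order-of-magnitude values):
bands = {
    "radio (MRI)":         64e6,    # ~64 MHz Larmor frequency at 1.5 T
    "x-ray (radiography)": 1.2e19,  # ~50 keV diagnostic x-ray photon
    "gamma (nuclear med)": 3.4e19,  # ~140 keV Tc-99m gamma photon
}

for name, f in bands.items():
    print(f"{name}: {photon_energy_ev(f):.3g} eV")
```

The twelve-orders-of-magnitude jump in photon energy between MRI's radio waves and nuclear medicine's gamma rays is exactly why only the latter poses an ionization hazard.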

1.2. History of Medical Imaging

Radiographic X-ray: The first revolution

In November 1895 Wilhelm Rontgen, a physicist at the University of Wurzburg, was experimenting
with cathode rays. These rays were obtained by applying a potential difference across a partially
evacuated glass “discharge” tube. Rontgen observed the emission of light from crystals of barium
platinocyanide some distance away, and he recognized that the fluorescence had to be caused by
radiation produced by his experiments. He called the radiation “x rays” and quickly discovered
that the new radiation could penetrate various materials and could be recorded on photographic
plates. Among the more dramatic illustrations of these properties was a radiograph of a hand that
Rontgen included in early presentations of his findings.

Figure 1-2: The first Rontgen radiograph

Within a month of their discovery, x rays were being explored as medical tools in several countries,
including Germany, England, France, and the United States. In 1901, Rontgen was awarded the
first Nobel Prize in Physics.

Over the first half of the twentieth century, x-ray imaging advanced with the help of improvements
such as intensifying screens, hot-cathode x-ray tubes, rotating anodes, image intensifiers, and
contrast agents.
X-ray CT: The second revolution

The second revolution in imaging began in 1972 with Hounsfield’s announcement of a practical
computer-assisted X-ray tomographic scanner, the CAT scanner, now called X-ray CT or simply
CT. This was the first radical change in the medical use of X-rays since Rontgen’s discovery,
and it had to wait for the widespread availability of cheap computing to become practical. CT
uses the mathematical technique of filtered backprojection, built on a transform first described
in 1917 by the mathematician Johann Radon; its practical implementation in clinical medicine
could not take place without the digital computer.

The importance of CT is related to several of its features, including the following:


• Provision of cross-sectional images of anatomy
• Availability of contrast resolution superior to traditional radiology
• Construction of images from x-ray transmission data by a “black box” mathematical
process requiring a computer
• Creation of clinical images that are no longer direct proof of a satisfactory imaging process,
so that intermediate quality-control measures from physics and engineering become essential
• Production of images from digital data that are processed by computer and can be
manipulated to yield widely varying appearances.
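To make the “black box” mathematical process above concrete, here is a toy NumPy sketch of the core idea: forward-project a phantom into a sinogram, then smear each projection back across the image (unfiltered backprojection). All names and the phantom are illustrative; a real scanner also applies a ramp filter to each projection first, which is what the “filtered” in filtered backprojection refers to.

```python
import numpy as np

def rotate_nearest(img, angle_deg):
    """Rotate a square image about its center (nearest-neighbor sampling)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.indices((n, n))
    th = np.deg2rad(angle_deg)
    # Inverse mapping: for each output pixel, find its source coordinate.
    xs_c, ys_c = xs - c, ys - c
    sx = np.round(c + xs_c * np.cos(th) + ys_c * np.sin(th)).astype(int)
    sy = np.round(c - xs_c * np.sin(th) + ys_c * np.cos(th)).astype(int)
    ok = (sx >= 0) & (sx < n) & (sy >= 0) & (sy < n)
    out = np.zeros_like(img)
    out[ys[ok], xs[ok]] = img[sy[ok], sx[ok]]
    return out

def sinogram(img, angles):
    """One projection (column sums of the rotated image) per view angle."""
    return np.array([rotate_nearest(img, a).sum(axis=0) for a in angles])

def backproject(sino, angles, n):
    """Smear each projection back across the image plane at its angle."""
    recon = np.zeros((n, n))
    for proj, a in zip(sino, angles):
        recon += rotate_nearest(np.tile(proj, (n, 1)), -a)
    return recon / len(angles)

# Toy phantom: a bright square in an empty field.
n = 64
phantom = np.zeros((n, n))
phantom[24:40, 24:40] = 1.0
angles = np.arange(0, 180, 5)
recon = backproject(sinogram(phantom, angles), angles, n)
# The reconstruction peaks where the phantom is, though blurred:
# the omitted ramp filter is responsible for the characteristic 1/r blur.
```

Running this shows the square recovered in roughly the right place but smeared out, which is precisely the defect the ramp filter of filtered backprojection corrects.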

MRI

Research in nuclear magnetic resonance started shortly after the Second World War. Two groups
in America, led by Bloch and Purcell, discovered the phenomenon independently in 1946, and the
two later shared a Nobel Prize in physics for this work. The original applications of the
technique were in the study of the magnetic properties of atomic nuclei themselves. The
realization that the time taken for a collection of nuclear magnetic moments in a liquid or solid
to reach thermal equilibrium depends in some detail on the chemistry and physical properties of
the surrounding atoms led to the development of the nuclear magnetic resonance technique and,
eventually, to magnetic resonance imaging.

MRI is a wholly tomographic technique, just like X-ray CT, but it has no associated ionizing
radiation hazard. It provides a wider range of contrast mechanisms than X-rays and very much
better spatial resolution in many applications. The extremely rapid development of MRI has been



possible because it uses many of the techniques of its parent, nuclear magnetic resonance (NMR).
In fact, in its infancy, MRI was called Spatially Localized Nuclear Magnetic Resonance but this
was changed to magnetic resonance imaging (MRI) or simply MR both to avoid the long-winded
title and to remove any misplaced association with ionizing radiation through the adjective nuclear.

MRI is still a relative newcomer to medicine with many important new developments still to come,
both in applications and technique.

Diagnostic Nuclear Medicine

Nuclear imaging produces images of the distributions of radionuclides in patients. Because
charged particles from radioactivity in a patient are almost entirely absorbed within the patient,
nuclear imaging uses gamma rays, characteristic x-rays (usually from radionuclides that decay by
electron capture), or annihilation photons (from positron-emitting radionuclides) to form images.

All of nuclear medicine, including diagnostic gamma imaging, became a practical possibility in
1942 with Fermi’s first successful operation of a uranium fission chain reaction. The development
of large area photon detectors was crucial to the practical use of gamma imaging. Anger announced
the use of a 2-inch NaI/scintillator detector, the first Anger camera, in 1952. Electronic photon
detectors, both for medical gamma imaging and X-rays, are adaptations of detectors developed, in
the decades after the war, at high energy and nuclear physics establishments such as Harwell,
CERN and Fermilab.

Tomographic, SPECT gamma images, obtained by translations or rotations of a gamma camera,
were first reported in 1963 by Kuhl and Edwards. Wrenn reported the first measurements of positron
annihilation as early as 1951 and crude scanning arrangements for imaging were described by
Brownell and Sweet in 1953. PET is still in its infancy as a clinical tool, largely because it is very
expensive to install, but it is firmly established in clinical research. It requires not only an imaging
camera but also a nearby cyclotron to produce the required short-lived positron emitting
radionuclides.

Diagnostic Ultrasound

Ultrasound is the term that describes sound waves of frequencies exceeding the range of human
hearing and their propagation in a medium. Medical diagnostic ultrasound is a modality that uses
ultrasound energy and the acoustic properties of the body to produce an image from stationary and
moving tissues. Although ultrasonic waves could be produced in the 1930’s and were investigated
as a means of medical imaging, ultrasound imaging did not start properly until after World War 2.
It benefited from the experience gained with SONAR, in particular, and fast electronic pulse
generation in general. The first two-dimensional image, obtained using sector scanning, was
published in 1952 by Wild and Reid. Application to foetal imaging began in 1961 shortly after the
introduction of the first commercial two-dimensional imaging system. Today, ultrasound imaging
is second only to the use of X-rays in its frequency of clinical use.

1.3. Comparison of Diagnostic Medical Imaging Techniques


Medical imaging modalities differ in their method (transmission- or emission-based), in the
parameters measured to construct images, and in their applications. Table 1.1 below summarizes
the differences among some of the modalities.
Table 1.1: Comparison of medical imaging modalities



Diagnostic Imaging

Without ionizing radiation:
• Nuclear magnetic resonance: spectroscopy, tomography
• Ultrasound

With ionizing radiation:
• X-ray imaging: planar, tomography
• Nuclear medicine techniques: planar, SPECT, PET

Figure 1-3: Classification of diagnostic medical imaging modalities

Review Questions

1. What are the advantages of tomographic imaging techniques over projection radiography
imaging?
2. Why is magnetic resonance imaging considered safe compared to x-ray and nuclear imaging?
3. How is it possible to obtain an image of the human body? What are the necessary conditions
that are required?
4. What characteristics of the human body or object are used for imaging purpose?
5. Explain the motives behind medical imaging. Why is it important to visualize internal
structures and functions of the human body? What are the main goals of medical
imaging?
6. Describe the energy sources used in medical imaging. What are the advantages and
limitations of different types of energy, such as electromagnetic radiation, ultrasound, and
nuclear radiation?



7. Explain how tissue properties affect the image formation in medical imaging. How do
different tissues interact with energy sources and how does this affect image contrast and
resolution?
8. Describe the image properties that are employed in medical imaging. What are the main
features of an image, such as spatial resolution, contrast, and signal-to-noise ratio? How
do these properties affect the diagnostic accuracy of the image?
9. Explain the pivotal role of X-ray computed tomography (CT) in the evolution of modern
medical imaging. How has CT improved over time in terms of image quality, speed, and
patient safety? What are the main clinical applications of CT imaging?
10. Describe the pivotal role of magnetic resonance imaging (MRI) in the evolution of
modern medical imaging. How does MRI differ from other imaging modalities in terms
of energy sources and tissue interactions? What are the advantages and limitations of
MRI imaging?
11. Explain the pivotal role of nuclear medicine imaging in the evolution of modern medical
imaging. How does nuclear medicine imaging use radioactive isotopes to visualize
internal structures and functions? What are the main clinical applications of nuclear
medicine imaging?
12. Compare and contrast the advantages and limitations of X-ray CT, MRI, and nuclear
medicine imaging. How do these imaging modalities complement each other in clinical
practice? What are the future directions in medical imaging research and development?
13. Discuss the ethical and social issues associated with medical imaging. What are the
implications of patient privacy, informed consent, and radiation exposure for clinical
practice and research?
14. Explain the role of evidence-based medicine in medical imaging. How are imaging
techniques evaluated and validated for clinical use? What are the challenges and
opportunities for improving the diagnostic accuracy and cost-effectiveness of medical
imaging?






CHAPTER TWO
2. ULTRASONIC
____________________________________________________________________________________

Objectives

After completing this chapter, students should be able to:

• Explain the physics of ultrasound

• Identify the different interactions of ultrasound waves with matter
• Explain the basic principles of ultrasound imaging, its instrumentation, and its role in the
medical diagnostic world.

2.1. Introduction
The word “ultrasonic” relates to the wave frequencies. Sound in general is divided into three
ranges: subsonic, sonic and ultrasonic. A sound wave is said to be sonic if its frequency is within
the audible spectrum of the human ear, which ranges from 20 to 20 000 Hz (20 kHz). The
frequency of subsonic waves is less than 20 Hz and that of ultrasonic waves is higher than 20 kHz.
Frequencies used in (medical) ultrasound imaging are about 100–1000 times higher than those
detectable by humans.
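These frequency bands can be expressed as a small helper function; this is only a sketch, and the function name is illustrative, but the band edges (20 Hz and 20 kHz) follow the text:

```python
def classify_sound(frequency_hz):
    """Classify a sound wave by the frequency bands given in the text."""
    if frequency_hz < 20:
        return "subsonic"        # below the audible range
    elif frequency_hz <= 20_000:
        return "sonic"           # audible: 20 Hz to 20 kHz
    else:
        return "ultrasonic"      # above 20 kHz

# Imaging frequencies sit far above the audible limit:
print(classify_sound(5_000_000))  # 5 MHz -> ultrasonic
```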

Ultrasound is the term that describes sound waves of frequencies exceeding the range of human
hearing and their propagation in a medium. Medical diagnostic ultrasound is a modality that uses
ultrasound energy and the acoustic properties of the body to produce an image from stationary and
moving tissues. It is a totally noninvasive procedure. Acoustic waves are easily transmitted in
water but are reflected at an interface according to the change in acoustic impedance. Apart from
bone and lung, all tissues of the body are composed largely of water, which transmits acoustic
waves easily. By sending high-frequency sound waves into the body, the reflected sound waves
(returning echoes) are recorded and processed by the computer to reconstruct real-time visual
images. The returning echoes reflect the size and shape of the organ and also indicate whether the
organ is solid, fluid-filled, or something in between. Unlike x-rays, ultrasound



requires no exposure to ionizing radiation. It is also a real-time technique that can produce a
picture of blood flow as it is at the very moment of imaging.

2.2. Physics of Ultrasound


An ultrasound wave, like all sound waves, is a propagating mechanical disturbance of the matter
through which it passes. Sound of any description, unlike electromagnetic waves, cannot propagate
through a vacuum. The sound wave creates pressure disturbances that accelerate and displace the
atoms in its path. It travels in the form of longitudinal wave i.e., the particles of the medium move
in same direction in which the wave propagates There is no net permanent displacement of
particles; rather a local oscillatory disturbance is passed along from one group of atoms to the next
along the direction of travel of the wave.

The frequency and velocity of travel of sound waves depend on the bulk elastic properties of the
material and its density. All substances have finite bulk compressibility and so longitudinal,
compression waves will propagate through solids, liquids and gases but transverse, shear waves
can only propagate in solids. All soft tissue and body fluids behave like liquids (really gels of
varying viscosity) and ultrasound is propagated in these as a longitudinal wave. Bone, being a
solid, can support both longitudinal and transverse matter waves. In uniform or homogeneous
materials such as seawater or air it is possible to write down a very simple wave equation that
describes the propagation of sound through matter in terms of the bulk properties, density and
elasticity, of the material. More precisely the velocity of propagation for compression waves is
determined by the density (ρ) and the bulk modulus, K, so that:

c = √(K/ρ)

The bulk modulus of a material is the reciprocal of its compressibility. The bulk modulus of air is
about 2 × 10⁴ times smaller than that of water, and its density is a factor of about 1,000 smaller than
that of water.

Humans can hear sound in the frequency range of 20 Hz to 20,000 Hz. As the name suggests,
ultrasound has a frequency greater than 20,000 Hz. Diagnostic ultrasound uses the range of 1 to 10
MHz. The general relationship between the velocity, frequency and wavelength of any wave is:
C = λ × f



A sound wave at middle C (about 262 Hz) has a wavelength of roughly 1.3 m in air, whereas an
ultrasonic wave at 1 MHz has a wavelength of only about 0.34 mm in air (about 1.5 mm in soft
tissue). Both waves travel through air at the same speed, but that speed varies from one substance
to another. Figure 2.1 illustrates the velocity of sound in a range of biological substances.
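The relation C = λ × f can be checked numerically; the value of 1540 m/s for soft tissue is a typical textbook figure, and the function name below is only illustrative:

```python
def wavelength_mm(speed_m_s, frequency_hz):
    """lambda = c / f, converted from meters to millimeters."""
    return speed_m_s / frequency_hz * 1000

# 1 MHz ultrasound in soft tissue (c ~ 1540 m/s):
print(round(wavelength_mm(1540, 1e6), 2))  # 1.54 (mm)
```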

Figure 2-1: The velocity of sound in biological substances

2.2.1. Pressure and Intensity of sound wave


Sound energy causes particle displacements and variations in local pressure in the propagation
medium. The pressure variations are most often described as pressure amplitude (P). Pressure
amplitude is defined as the peak maximum or peak minimum value from the average pressure on
the medium in the absence of a sound wave. The SI unit of pressure is the pascal (Pa), defined as
one newton per square meter (N/m²). The average atmospheric pressure on earth at sea level, 14.7
pounds per square inch, is approximately equal to 100,000 Pa. Diagnostic ultrasound beams
typically deliver peak pressure levels that exceed ten times the earth's atmospheric pressure, or
about 1 MPa (megapascal).
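A quick sanity check of these pressure figures; the psi-to-pascal factor is the standard conversion, and the 1 MPa peak value is taken from the text:

```python
PSI_TO_PA = 6894.76               # pascals per psi (standard conversion)

atm_pa = 14.7 * PSI_TO_PA         # sea-level atmospheric pressure in Pa
peak_pa = 1.0e6                   # ~1 MPa peak pressure of a diagnostic beam

print(round(atm_pa))              # ~101353 Pa, i.e. about 100,000 Pa
print(round(peak_pa / atm_pa, 1)) # ~9.9, i.e. roughly ten atmospheres
```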

The intensity of a wave is defined as the energy which flows per unit time across a unit area
perpendicular to the direction of wave propagation. For an infinite one-dimensional plane wave:

Intensity (I) = 2π²ρ0Cf²ζ²max



where f = frequency, ζmax = maximum particle displacement, ρ0 = density, and C = speed of sound.
Therefore the intensity depends on the parameters of the impressed ultrasound wave (f and ζmax)
and on a parameter of the medium known as the specific acoustic impedance, Z (Z = ρ0C).
Biological tissue is not acoustically homogeneous, i.e., ρ0 and C are not constant within the medium.
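The intensity formula can be evaluated directly; in the sketch below the 1 nm displacement amplitude is purely an assumed, illustrative value:

```python
import math

def plane_wave_intensity(rho0, c, f, zeta_max):
    """I = 2 * pi^2 * rho0 * c * f^2 * zeta_max^2 for a plane wave (W/m^2 in SI units)."""
    return 2 * math.pi ** 2 * rho0 * c * f ** 2 * zeta_max ** 2

# Soft-tissue-like medium (rho0 = 1000 kg/m^3, c = 1540 m/s) at 1 MHz,
# with an assumed displacement amplitude of 1 nm:
print(round(plane_wave_intensity(1000, 1540, 1e6, 1e-9), 1))  # ~30.4 W/m^2
```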

2.3. Interaction of Ultrasound with Matter


Ultrasound interactions are determined by the acoustic properties of matter. As ultrasound energy
propagates through a medium, interactions that occur include reflection, refraction, scattering, and
absorption. Whenever a boundary between two tissues, or small structures within an otherwise
homogeneous tissue, are encountered, the differences in acoustic properties cause a fraction of
the energy of the ultrasound beam to be backscattered towards the transducer, where it forms the
detected signal.

The acoustic impedance (Z) of a material is defined as

Z = ρc

where ρ is the density in kg/m³ and c is the speed of sound in m/sec. The SI unit for acoustic
impedance is kg/(m²·sec), often expressed in rayls, where 1 rayl is equal to 1 kg/(m²·sec).
The acoustic impedance can be likened to the stiffness and flexibility of a compressible medium
which is related to the energy transfer from one medium to another. Table 2.1 below lists the
acoustic impedance of tissues and materials commonly encountered in medical ultrasound.

Table 2.1: Acoustic impedance of tissue and materials
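The definition Z = ρc is straightforward to compute; the density and speed below for water are approximate literature values, used only for illustration:

```python
def acoustic_impedance(density_kg_m3, speed_m_s):
    """Z = rho * c, in rayls (kg m^-2 s^-1)."""
    return density_kg_m3 * speed_m_s

# Water: rho ~ 1000 kg/m^3, c ~ 1480 m/s  ->  Z ~ 1.48e6 rayls
print(acoustic_impedance(1000, 1480))  # 1480000
```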



2.3.1. Reflection, transmission and refraction at tissue boundaries
When an ultrasound wave encounters a boundary between two tissues with different values of Z,
a certain fraction of the wave energy is backscattered (or reflected) towards the transducer, with
the remainder being transmitted through the boundary deeper into the body.

Consider a flat boundary whose dimensions are much greater than the ultrasound wavelength (for
example, ≈1 mm at a 1.5-MHz center frequency). In the general situation shown in Figure 2.2
below, the incident ultrasound wave strikes the boundary at an angle Ɵi. The angle of reflection
equals the angle of incidence, Ɵr = Ɵi, while the angle of transmission Ɵt obeys Snell's law:

sin Ɵt / sin Ɵi = c2 / c1

The reflected (pr) and transmitted (pt) pressures relative to the incident pressure (pi) depend on
the acoustic impedances Z1 and Z2 on either side of the boundary:

pr / pi = (Z2 cos Ɵi − Z1 cos Ɵt) / (Z2 cos Ɵi + Z1 cos Ɵt)

pt / pi = 2 Z2 cos Ɵi / (Z2 cos Ɵi + Z1 cos Ɵt)

Figure 2-2: interaction of sound wave at tissue boundaries



The values of the reflection and transmission pressure coefficients are related by:

pt / pi = 1 + pr / pi

with the corresponding values of intensity reflection and transmission coefficients given by:

Ir / Ii = (pr / pi)²   and   It / Ii = 1 − Ir / Ii

The strongest reflected signal is received if the angle between the incident wave and the boundary
is 90° (normal incidence). In this case, the above equations reduce to:

pr / pi = (Z2 − Z1) / (Z2 + Z1)   and   Ir / Ii = [(Z2 − Z1) / (Z2 + Z1)]²

The backscattered signal detected by the transducer is maximized if the value of either Z1 or Z2 is
zero. However, in this case the ultrasound beam will not reach structures that lie deeper in the
body. Such a case occurs, for example, in GI tract imaging if the ultrasound beam encounters
pockets of air. A very strong signal is received from the front of the air pocket, but there is no
information of clinical relevance from any structures behind the air pocket. At the other extreme,
if Z1 and Z2 are equal in value, then there is no backscattered signal at all and the tissue boundary
is essentially undetectable.
The interface with the greatest difference in acoustic impedance provides the greatest reflection,
since the value of Z1 − Z2 is then large. This is the reason why it is difficult to visualize bone.
Similarly, the acoustic impedances of air and tissue are approximately 42.8 g/(cm²·sec) and
16 × 10⁴ g/(cm²·sec), respectively; the resulting large value of Z1 − Z2 causes the interface to
reflect the ultrasound energy almost completely, without any penetration. A special coupling gel
is therefore used to exclude air between the skin and the transducer and to minimize the reflected
energy at that interface, so that ultrasound can penetrate the body for the imaging of organs. Hence
the ability of ultrasound waves to travel through any medium is restricted by the properties of that
medium.

These properties include the density and elastic properties which make up the acoustic impedance
specific to that medium. The transmission is also limited by the transducer frequency being used.
Higher frequencies have shorter wavelengths and penetrate less deeply than lower frequencies.
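The normal-incidence coefficients make these statements quantitative. In the sketch below, the impedance values are assumed, approximate literature figures in SI rayls, used only for illustration:

```python
def pressure_reflection(z1, z2):
    """Normal-incidence pressure reflection coefficient, R = (Z2 - Z1) / (Z2 + Z1)."""
    return (z2 - z1) / (z2 + z1)

def intensity_reflection(z1, z2):
    """Fraction of incident intensity reflected: R_I = R^2."""
    return pressure_reflection(z1, z2) ** 2

# Approximate impedances in SI rayls (assumed values):
Z_AIR, Z_TISSUE, Z_BONE = 4.0e2, 1.63e6, 7.8e6

print(round(intensity_reflection(Z_TISSUE, Z_AIR), 3))   # 0.999 -> almost total reflection
print(round(intensity_reflection(Z_TISSUE, Z_BONE), 3))  # 0.428 -> strong reflection at bone
```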

Figure 2-3: Incident, Transmitted and Reflected Wave



2.3.2. Scattering by small structures

If the ultrasound beam strikes structures which are approximately the same size as, or smaller than,
the ultrasound wavelength then the wave is scattered in all directions. Most organs have a
characteristic structure that gives rise to a defined scatter signature and provides much of the
diagnostic information contained in the ultrasound image. Differences in scatter amplitude that
occur from one region to another cause corresponding brightness changes on the ultrasound
display.
In general, the echo signal amplitude from the insonated tissues depends on the number of
scatterers per unit volume, the acoustic impedance differences at the scatterer interfaces, the sizes
of the scatterers, and the ultrasonic frequency. The terms hyperechoic (higher scatter amplitude)
and hypoechoic (lower scatter amplitude) describe the scatter characteristics relative to the average
background signal. Hyperechoic areas usually have greater numbers of scatterers, larger acoustic
impedance differences, and larger scatterers. Acoustic scattering from nonspecular reflectors
increases with frequency, while specular reflection is relatively independent of frequency; thus, it
is often possible to enhance the scattered echo signals over the specular echo signals by using
higher ultrasound frequencies.

2.3.3. Absorption and total attenuation of ultrasound energy in tissue


Ultrasound is heavily absorbed by the biological tissue through which it passes en route to and
from a reflecting boundary. The organized, imposed motion of the sound wave induces a variety
of motions of the very small cellular and subcellular structures that make up human tissue. These
motions are heavily damped by viscous friction and hence transform the organized ultrasound
wave energy into random heat energy.

As an ultrasound beam passes through the body, its energy is attenuated by a number of
mechanisms including reflection, scatter and absorption. The net effect is that signals received
from tissue boundaries deep in the body are much weaker than those from boundaries which lie
close to the surface.

The total reduction in intensity of an ultrasound beam, on passing through a thickness of material,
is determined by the amount of energy absorbed and the amount scattered away from the direction
of the beam. Each type of material has an empirical attenuation coefficient as illustrated in Table
2.2 below.



Table 2.2: Attenuation coefficients for selected tissues at 1 MHz
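An attenuation coefficient quoted in dB/cm converts to a remaining-intensity fraction; the coefficient of 0.7 dB/cm at 1 MHz below is an assumed, typical soft-tissue value:

```python
def remaining_fraction(alpha_db_per_cm, path_cm):
    """Intensity fraction surviving a path: 10^(-alpha * x / 10)."""
    return 10 ** (-alpha_db_per_cm * path_cm / 10)

# An echo from 10 cm depth travels 20 cm round trip; assume 0.7 dB/cm at 1 MHz:
print(round(remaining_fraction(0.7, 20), 3))  # 0.04 -> only ~4% of the intensity survives
```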

2.4. Instrumentation of Ultrasound


A block-diagram of the basic instrumentation used for ultrasound imaging is shown in Figure 2.4
below. The input signal to the transducer comes from a frequency generator. The frequency
generator is gated on for short time durations and then gated off, thus producing short periodic
voltage pulses. These pulsed voltage signals are amplified and fed via a transmit/receive switch to
the transducer. Since the transducer both transmits high power pulses and also receives very low
intensity signals, the transmit and receive circuits must be very well isolated from each other; this
is the purpose of the transmit/receive switch. The amplified voltage is converted by the transducer
into a mechanical pressure wave which is transmitted into the body. Reflection and scattering from
boundaries and structures within tissue occur, as described previously. The backscattered pressure
waves reach the transducer at different times dictated by the depth in tissue from which they
originate, and are converted into voltages by the transducer. These voltages have relatively small
values, and so pass through a very low-noise preamplifier before being digitized.



Figure 2-4: The major elements of a basic ultrasound imaging system

Time-gain compensation is used to reduce the dynamic range of the signals, and after appropriate
further amplification and signal processing, the images are displayed in real time on the computer
monitor.

2.4.1. Transducers
Ultrasound is produced and detected with a transducer, composed of one or more ceramic elements
with electromechanical properties. The ceramic element converts electrical energy into mechanical
energy to produce ultrasound and mechanical energy into electrical energy for ultrasound
detection. Major components (Figure 2.5) include the piezoelectric material, matching layer,
backing block, acoustic absorber, insulating cover, sensor electrodes, and transducer housing.



Figure 2-5: Components of an ultrasound transducer

Figure 2-6: (a) Linear-array transducer. (b) Phased-array transducer

Piezoelectric material

A piezoelectric material (often a crystal or ceramic) is the functional component of the transducer.
It converts electrical energy into mechanical (sound) energy by physical deformation of the crystal
structure. Conversely, mechanical pressure applied to its surface creates electrical energy.
Piezoelectric materials are characterized by a well-defined molecular arrangement of electrical
dipoles.



When mechanically compressed by an externally applied pressure, the alignment of the dipoles is
disturbed from the equilibrium position to cause an imbalance of the charge distribution. A
potential difference (voltage) is created across the element with one surface maintaining a net
positive charge and one surface a net negative charge. Surface electrodes measure the voltage,
which is proportional to the incident mechanical pressure amplitude. Conversely, application of
an external voltage through conductors attached to the surface electrodes induces the mechanical
expansion and contraction of the transducer element.

Damping block

The damping block, layered on the back of the piezoelectric element, absorbs the backward
directed ultrasound energy and attenuates stray ultrasound signals from the housing. This
component also dampens the transducer vibration to create an ultrasound pulse with a short spatial
pulse length, which is necessary to preserve detail along the beam axis (axial resolution).
Dampening of the vibration (also known as "ring-down") lessens the purity of the resonance
frequency and introduces a broadband frequency spectrum (Figure 2.7). With ring-down, an
increase in the bandwidth (range of frequencies) of the ultrasound pulse occurs by introducing
higher and lower frequencies above and below the center (resonance) frequency. The "Q factor"
describes the bandwidth of the sound emanating from a transducer as:

Q = f0 / bandwidth

where f0 is the center frequency and the bandwidth is the width of the frequency distribution.

A "high Q" transducer has a narrow bandwidth (i.e., very little damping) and a corresponding long
spatial pulse length. A "low Q" transducer has a wide bandwidth and short spatial pulse length.
Imaging applications require a broad bandwidth transducer in order to achieve high spatial
resolution along the direction of beam travel. Blood velocity measurements by Doppler
instrumentation require a relatively narrow-band transducer response in order to preserve velocity
information encoded by changes in the echo frequency relative to the incident frequency.
Continuous-wave ultrasound transducers have a very high Q characteristic. An example of a "high
Q" and "low Q" ultrasound pulse illustrates the relationship to spatial pulse length. While the Q
factor is derived from the term quality factor, a transducer with a low Q does not imply poor quality
in the signal.
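The Q definition can be illustrated with two hypothetical transducers; the center frequency and bandwidth figures below are assumed for the example:

```python
def q_factor(center_freq_hz, bandwidth_hz):
    """Q = f0 / bandwidth: narrow bandwidth -> high Q and a long spatial pulse."""
    return center_freq_hz / bandwidth_hz

print(q_factor(5e6, 4.0e6))  # 1.25 -> heavily damped "low Q" imaging transducer
print(q_factor(5e6, 0.1e6))  # 50.0 -> narrow-band "high Q" continuous-wave Doppler transducer
```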



Figure 2-7: Effect of damping block on frequency spectrum

Matching layer

The matching layer provides the interface between the transducer element and the tissue and
minimizes the acoustic impedance differences between the transducer and the patient. It consists
of layers of materials with acoustic impedances that are intermediate to those of soft tissue and the
transducer material. The thickness of each layer is equal to one-fourth the wavelength, determined
from the center operating frequency of the transducer and speed of sound in the matching layer.
For example, the wavelength of sound in a matching layer with a speed of sound of 2,000 m/sec
for a 5-MHz ultrasound beam is 0.4 mm. The optimal matching layer thickness is equal to
¼λ = ¼ × 0.4 mm = 0.1 mm. In addition to the matching layers, acoustic coupling gel (with acoustic
impedance similar to soft tissue) is used between the transducer and the skin of the patient to
eliminate air pockets that could attenuate and reflect the ultrasound beam.
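The quarter-wave calculation in the worked example can be written directly as:

```python
def matching_layer_thickness_mm(speed_m_s, freq_hz):
    """Quarter-wavelength thickness: t = lambda / 4 = c / (4 * f), in millimeters."""
    return speed_m_s / (4 * freq_hz) * 1000

# Example from the text: c = 2000 m/s in the layer, f = 5 MHz:
print(round(matching_layer_thickness_mm(2000, 5e6), 4))  # 0.1 (mm)
```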



2.5. Ultrasound Image Data Acquisition
Image formation in medical ultrasound is accomplished by a pulse-echo mechanism: a thin
ultrasound beam is transmitted directionally into the patient and experiences partial reflections
from tissue interfaces; the resulting echoes return to, and are received by, a transducer or a set of
transducer elements. The transmit and receive processing used to create this beam is referred to
as beam formation.

The strength of the received echoes is usually displayed as increased brightness on the screen
(hence the name for the basic ultrasonic imaging mode, B-mode, with B for brightness). A two-
dimensional data set is acquired as the transmitted beam is steered or its point of origin is moved
to different locations on the transducer face. The data set that is acquired in this manner will have
some set of orientations of the acoustic rays. The process of interpolating this data set to form a
TV raster image is usually referred to as scan conversion. With Doppler signal processing, mean
Doppler shifts at each position in the image can be determined from as few as 4 to 12 repeated
transmissions. The magnitudes of these mean frequencies can be displayed in color superimposed
on the B-mode image and can be used to show areas with significant blood flow.

Image formation using the pulse echo approach requires a number of hardware components: the
beam former, pulser, receiver, amplifier, scan converter/image memory, and display system.

Beam former

The beam former is responsible for generating the electronic delays for individual transducer
elements in an array to achieve transmit and receive focusing and, in phased arrays, beam steering.
Most modern, high-end ultrasound equipment incorporates a digital beam former and digital
electronics for both transmit and receive functions.



Figure 2-8: Components of the ultrasound imager.

The digital beam former controls application-specific integrated circuits (ASICs) that provide
transmit/receive switches, digital-to-analog and analog-to-digital converters, and pre-
amplification and time-gain compensation circuitry for each of the transducer elements in the
array. Major advantages of digital acquisition and processing include the flexibility to introduce
new ultrasound capabilities by programmable software algorithms and to enhance control of the
acoustic beam.

Pulser

The pulser (also known as the transmitter) provides the electrical voltage for exciting the
piezoelectric transducer elements, and controls the output transmit power by adjustment of the
applied voltage. In digital beam-former systems, a digital-to-analog converter determines the
amplitude of the voltage. An increase in transmit amplitude creates higher intensity sound and
improves echo detection from weaker reflectors. A direct consequence is higher signal-to-noise
ratio in the images, but also higher power deposition to the patient. User controls of the output
power are labeled "output," "power," "dB," or "transmit" by the manufacturer. In some systems, a
low power setting for obstetric imaging is available to reduce power deposition to the fetus. A



method for indicating output power in terms of a thermal index (TI) and mechanical index (MI) is
usually provided.

Transmit/receive switch

The transmit/receive switch, synchronized with the pulser, isolates the high voltage used for
pulsing (~150 V) from the sensitive amplification stages during receive mode, with induced
voltages ranging from ~1 V to ~2 µV from the returning echoes. After the ring-down time, when
vibration of the piezoelectric material has stopped, the transducer electronics are switched to
sensing small voltages caused by the returning echoes, over a period up to about 1000 µsec (l msec.

Pulse echo operation

In the pulse echo mode of transducer operation, the ultrasound beam is intermittently transmitted,
with a majority of the time occupied by listening for echoes. The ultrasound pulse is created with
a short voltage waveform provided by the pulser of the ultrasound system. The generated pulse is
typically two to three cycles long, dependent on the damping characteristics of the transducer
elements. With a speed of sound of 1,540 m/sec (0.154 cm/µsec), the time delay between the
transmission pulse and the detection of the echo is directly related to the depth of the interface as

D = (c × t) / 2

where c, the speed of sound, is expressed in cm/µsec; the distance from the transducer to the
reflector, D, is expressed in cm; the constant 2 represents the round-trip distance; and the time t is
expressed in µsec. One pulse
echo sequence produces one amplitude-modulated (A-line) of image data. The timing of the data
excitation and echo acquisition relates to distance. Many repetitions of the pulse echo sequence
are necessary to construct an image from the individual A-lines.
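The D = c·t/2 relationship converts an echo's arrival time into depth; a small sketch using the 0.154 cm/µsec figure from the text:

```python
C_TISSUE_CM_PER_US = 0.154        # 1540 m/s expressed in cm/usec

def echo_depth_cm(arrival_time_us):
    """D = c * t / 2; the factor of 2 accounts for the round trip."""
    return C_TISSUE_CM_PER_US * arrival_time_us / 2

# An echo arriving 130 usec after the transmit pulse originates ~10 cm deep:
print(round(echo_depth_cm(130), 2))  # 10.01
```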

The number of times the transducer is pulsed per second is known as the pulse repetition frequency
(PRF). For imaging, the PRF typically ranges from 2,000 to 4,000 pulses per second (2 to 4 kHz).
The time between pulses is the pulse repetition period (PRP), equal to the inverse of the PRF. An increase
in PRF results in a decrease in echo listening time. The maximum PRF is determined by the time
required for echoes from the most distant structures to reach the transducer. If a second pulse
occurs before the detection of the most distant echoes, these more distant echoes can be confused
with prompt echoes from the second pulse, and artifacts can occur. The maximal range is
determined from the product of the speed of sound and the PRP divided by 2 (the factor of 2
accounts for round-trip distance):

Maximal range = c × PRP / 2 = c / (2 × PRF)

Higher ultrasound frequency operation has limited penetration depth, allowing high PRFs.
Conversely, lower frequency operation requires lower PRFs because echoes can return from
greater depths.
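The maximal-range relation links PRF to the unambiguous imaging depth; a sketch assuming c = 1540 m/s:

```python
C_M_PER_S = 1540.0                # speed of sound in soft tissue

def max_unambiguous_depth_cm(prf_hz):
    """Maximal range = c * PRP / 2 = c / (2 * PRF), converted to cm."""
    return C_M_PER_S / (2 * prf_hz) * 100

print(round(max_unambiguous_depth_cm(4000), 2))  # 19.25 cm at a 4 kHz PRF
print(round(max_unambiguous_depth_cm(2000), 2))  # 38.5 cm at a 2 kHz PRF
```

Note how the higher PRF halves the usable depth, matching the text's observation that deep imaging forces a lower PRF.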

Pre-amplification and analog to digital conversion

In multielement array transducers, all preprocessing steps are performed in parallel. Each
transducer element produces a small voltage proportional to the pressure amplitude of the returning
echoes. An initial pre-amplification increases the detected voltages to useful signal levels. This is
combined with a fixed swept gain, to compensate for the exponential attenuation occurring with
distance traveled. Large variations in echo amplitude (voltage produced in the piezoelectric
element) with time are reduced from ~1,000,000:1, or 120 dB, to about 1,000:1, or 60 dB, with
these preprocessing steps.
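Those dynamic-range figures follow from the decibel definition for amplitude ratios:

```python
import math

def amplitude_ratio_db(ratio):
    """dB = 20 * log10(amplitude ratio)."""
    return 20 * math.log10(ratio)

print(round(amplitude_ratio_db(1_000_000), 1))  # 120.0 dB before preprocessing
print(round(amplitude_ratio_db(1_000), 1))      # 60.0 dB after pre-amplification and swept gain
```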

Early ultrasound units used analog electronic circuits for all functions, which were susceptible to
drift and instability. Even today, the initial stages of the receiver often use analog electronic
circuits. Digital electronics were first introduced in ultrasound for functions such as image
formation and display. Since then, there has been a tendency to implement more and more of the
signal preprocessing functions in digital circuitry, particularly in high-end ultrasound systems.

In state-of-the-art ultrasound units, each piezoelectric element has its own preamplifier and analog-
to-digital converter (ADC). A typical sampling rate of 20 to 40 MHz with 8 to 12 bits of precision
is used.

ADCs with larger bit depths and sampling rates are necessary for systems that digitize the signals
directly from the pre-amplification stage. In systems where digitization of the signal occurs after
analog beam formation and summing, a single ADC with less demanding requirements is typically
employed.

Beam Steering, Dynamic Focusing, and Signal Summation

Echo reception includes electronic delays to adjust for beam direction and dynamic receive
focusing to align the phases of detected echoes from the individual elements in the array as a
function of echo depth. In systems with digital beam formers, this is accomplished with digital
processing algorithms. Following phase alignment, the preprocessed signals from all of the active
transducer elements are summed. The output signal represents the acoustic information gathered
during the pulse repetition period along a single beam direction. This information is sent to the
receiver for further processing before rendering into a 2D image.
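The receive-focusing delays described above can be sketched for the simple case of a focal point on the array axis. This is a hedged illustration of delay-and-sum beamforming under our own simplifying assumptions (on-axis focus, uniform sound speed of 1,540 m/s); real beam formers also handle steering angles and dynamic focus updates:

```python
import math

def receive_focus_delays(element_x, focus_depth, c=1540.0):
    """Per-element delays (s) that phase-align echoes from an on-axis focal
    point before summation (delay-and-sum beamforming).
    element_x: lateral positions (m) of the array elements."""
    # Travel time from the focal point back to each element
    arrival = [math.hypot(x, focus_depth) / c for x in element_x]
    # Delay each channel so all align with the latest-arriving (edge) echo
    t_max = max(arrival)
    return [t_max - t for t in arrival]
```

Because the center element is closest to the focus, its echo arrives first and therefore receives the largest delay before the channels are summed.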

Figure 2-9: Beam Steering, Dynamic Focusing, and Signal Summation


Receiver
The receiver accepts data from the beam former during the pulse repetition period, which
represents echo information as a function of time (depth). Subsequent signal processing occurs in
the sequence shown in Figure 2.10 below.

Figure 2-10: Sequence of signal processing

The receiver processes the data streaming from the beam former. Steps include time gain
compensation (TGC), dynamic range compression, rectification, demodulation, and noise
rejection.

The TGC stages supply the gain required to compensate for the attenuation brought about by the
propagation of sound in tissue. During the echo reception time, which ranges from 40 to 240 µs,
the gain of these amplifiers is swept over a range approaching 60 to 70 dB, depending on the
clinical examination. The value of this gain at any depth is under user control with a set of slide
potentiometers, often referred to as the TGC slide pots. The dynamic range available from typical TGC
amplifiers is in the order of 60 dB. One can think of the TGC amplifiers as providing a dynamic
range window into the total range available at the transducer. The user has the ability to adjust the
TGC and the noise rejection level.
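The gain the TGC must supply at each depth can be estimated from the round-trip attenuation. The sketch below is an assumption-laden illustration (the commonly cited soft-tissue value of about 0.5 dB/cm/MHz is ours, not from the text, and real tissue paths mix several attenuation coefficients):

```python
def tgc_gain_db(depth_cm, f_mhz, alpha_db_per_cm_mhz=0.5):
    """Round-trip attenuation (dB) the TGC must make up at a given depth.
    The factor 2 accounts for the pulse travelling down and the echo back."""
    return 2.0 * alpha_db_per_cm_mhz * f_mhz * depth_cm

# e.g. a 5 MHz beam at 10 cm depth needs ~50 dB of compensating gain,
# consistent with the 60-70 dB swept range quoted above.
```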

2.6. Principles of A-Mode, B-Mode, M-Mode

2.6.1. A-Mode
A-mode (A for amplitude) is the display of the processed information from the receiver versus
time (after the receiver processing steps). As echoes return from tissue boundaries and scatterers

(a function of the acoustic impedance differences in the tissues), a digital signal proportional to
echo amplitude is produced as a function of time. One "A-line" of data per pulse repetition period
is the result. Since the round-trip time and the speed of sound together determine depth, the tissue
interfaces along the path of the ultrasound beam are localized by their distance from the transducer. The earliest uses of
ultrasound in medicine used A-mode information to determine the midline position of the brain
for revealing possible mass effect of brain tumors. A-mode and A-line information is currently
used in ophthalmology applications for precise distance measurements of the eye. Otherwise, A-
mode display by itself is seldom used.

2.6.2. B-mode
B-mode (B for brightness) is the electronic
conversion of the A-mode and A-line
information into brightness-modulated dots on
a display screen. In general, the brightness of
the dot is proportional to the echo signal
amplitude (depending on signal processing
parameters). The B-mode display is used for
M-mode and 2D gray-scale imaging. The 2D
scan is composed of many A-Mode scans each
taken with a different beam position or angle
with respect to the field of view within the patient.

Figure 2-11: The standard B-Mode ultrasound imaging geometry

2.6.3. M-mode or Time position plot


M-mode (M for motion) is a technique that uses B-
mode information to display the echoes from a
moving organ, such as the myocardium and valve
leaflets, from a fixed transducer position and beam
direction on the patient. The echo data from a single
ultrasound beam passing through moving anatomy are
acquired (Figure 2-12: The M-mode or time position scan) and displayed as a function of time,
represented by reflector depth on the vertical axis (beam path
direction) and time on the horizontal axis. M-mode can provide excellent temporal resolution of
motion patterns, allowing the evaluation of the function of heart valves and other cardiac anatomy.
Only anatomy along a single line through the patient is represented by the M-mode technique, and
with advances in real-time 2D echocardiography, Doppler, and color flow imaging, this display
mode is of much less importance than in the past.

Figure 2-13: Simultaneous B- and M-mode display: B-Mode (upper image) and
M-Mode (lower image) display of heart valve motion.

The single selected A-scan shown in Figure 2.13 produces three prominent reflections; the middle
one is from a moving boundary such as a heart valve. The time variation of the echo times is
displayed on the vertical axis while the cursor scrolls across the screen at a constant rate in time.

2.7. Image Reconstruction


Reconstructing ultrasound images from the acquired RF data involves the following steps:
filtering, envelope detection, attenuation correction, log-compression, and scan conversion.

2.7.1. Filtering
First, the received RF signals are filtered in order to remove high-frequency noise. In second
harmonic imaging, the transmitted low-frequency band is also removed, leaving only the received
high frequencies in the upper part of the bandwidth of the transducer. The origin of these
frequencies, which were not transmitted, is nonlinear wave propagation.

2.7.2. Envelope detection
Because the very fast fluctuations of the RF signal are not relevant for gray scale imaging, the
high-frequency information is removed by envelope detection. Usually this is done by means of a
quadrature filter or a Hilbert transformation.

Figure 2-14: RF signal and its envelope

Figure 2.14 shows an example of an RF signal and its envelope. If each amplitude along the
envelope is represented as a gray value or brightness, and different lines are scanned by translating
the transducer, a B-mode (B stands for brightness) image is obtained. Bright pixels correspond to
strong reflections, and the white lines in the image represent the two boundaries of the scanned
object. To construct an M-mode image, the same procedure with a static transducer is applied.

2.7.3. Attenuation correction


Identical structures should yield the same reflection amplitudes and, consequently, the same gray
value. However, the amplitude of the incident and reflected wave decreases with depth
because of attenuation of the acoustic energy of the ultrasonic wave during propagation. To
compensate for this effect, the attenuation is estimated. Because time and depth are linearly related
in echography, attenuation correction is often called time gain compensation. Typically, a simple
model is used – for example, an exponential decay – but in practice several tissues with different
attenuation properties are involved.

Most ultrasound scanners therefore enable the user to modify the gain manually at different depths.

2.7.4. Scan converters
The image data are acquired on a polar coordinate grid for sector scanners (e.g., mechanically
steered, curvilinear arrays, and phased-array systems) and in a rectangular grid for linear arrays. It
is clearly necessary to convert this image data into one of the standard TV raster formats for easier
viewing, recording, computer capture, etc. This is performed in a module in most systems referred
to as the scan converter.

The function of the scan converter is to create 2D images from echo information from distinct
beam directions, and to perform scan conversion to enable image data to be viewed on video
display monitors. Scan conversion is necessary because the image acquisition and display occur
in different formats. Early scan converters were of an analog design, using storage cathode ray
tubes to capture data. These devices drifted easily and were unstable over time. Modern scan
converters use digital technology for storage and manipulation of data. Digital scan converters are
extremely stable, and allow subsequent image processing with the application of a variety of
mathematical functions.

Digital information streams to the scan converter memory, configured as a matrix of small picture
elements (pixels) that represent a rectangular coordinate display. Most ultrasound instruments have
a ~500 X 500-pixel matrix (variations between manufacturers exist). Each pixel has a memory
address that uniquely defines its location within the matrix. During image acquisition, the digital
signals are inserted into the matrix at memory addresses that correspond as close as possible to the
relative reflector positions in the body. Transducer beam orientation and echo delay times
determine the correct pixel addresses (matrix coordinates) in which to deposit the digital
information. Misalignment between the digital image matrix and the beam trajectory, particularly
for sector-scan formats at larger depths, requires data interpolation to fill in empty or partially
filled pixels. The final image is most often recorded with 512 X 512 X 8 bits per pixel, representing
about ¼ megabyte of data. For color display, the bit depth is often as much as 24 bits (3 bytes per
primary color).
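The polar-to-rectangular mapping at the heart of scan conversion can be sketched with a nearest-neighbour lookup. This is a deliberately simplified illustration (our own function; commercial converters interpolate rather than snap to the nearest sample, precisely to fill the partially covered pixels mentioned above):

```python
import math

def scan_convert(polar, r_max, th_max, nx, ny):
    """Nearest-neighbour scan conversion of sector data polar[ir][it]
    (radius x angle, angles spanning -th_max..+th_max) onto an
    nx-by-ny Cartesian pixel matrix. Pixels outside the sector stay 0."""
    nr, nth = len(polar), len(polar[0])
    half = r_max * math.sin(th_max)          # half-width of the sector
    img = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):                      # rows: depth 0 .. r_max
        y = r_max * j / (ny - 1)
        for i in range(nx):                  # columns: -half .. +half
            x = -half + 2.0 * half * i / (nx - 1)
            r, th = math.hypot(x, y), math.atan2(x, y)
            if r <= r_max and abs(th) <= th_max:
                # Snap (r, th) to the nearest acquired sample
                ir = min(int(r / r_max * (nr - 1) + 0.5), nr - 1)
                it = min(int((th + th_max) / (2 * th_max) * (nth - 1) + 0.5), nth - 1)
                img[j][i] = polar[ir][it]
    return img
```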

2.8. Doppler Ultrasound


Doppler ultrasound is based on the principle that sound reflected by a moving target like blood has
a different frequency from the incident sound wave. The difference in frequencies is known as the
Doppler shift, which is proportional to the velocity of the target. The Doppler shift carried by the
echoes enables both the detection of flowing blood and the quantification of its velocity.
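The standard Doppler relation, f_d = 2·f0·v·cos(θ)/c (with θ the angle between the beam and the flow), can be sketched as follows. The function name and example numbers are ours, for illustration only:

```python
import math

def doppler_shift_hz(f0_hz, v_m_s, angle_deg, c=1540.0):
    """Doppler shift for a reflector moving at speed v, insonated at
    angle theta: f_d = 2 * f0 * v * cos(theta) / c."""
    return 2.0 * f0_hz * v_m_s * math.cos(math.radians(angle_deg)) / c

# A 5 MHz beam aligned with 0.5 m/s flow gives a shift of ~3.2 kHz;
# at 90 degrees the axial component, and hence the shift, vanishes.
```

Note that the shift falls conveniently in the audible range, which is why many instruments also present Doppler signals acoustically.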

There are two kinds of Doppler imaging: Flow imaging and Strain imaging. Strain imaging is a
more recent application of Doppler imaging. When neighboring pixels move with a different
velocity, the spatial velocity gradient can be calculated. This gradient corresponds to the strain rate
(i.e., strain per time unit; the tissue lengthening or shortening per time unit). The strain rate can be
estimated in real time. The strain (i.e., the local deformation) of the tissue can then be calculated
as the integral of the strain rate over time.

Figure 2-15: A schematic diagram of the components of a pulsed Doppler


ultrasound instrument

In blood velocity estimation, the goal is not simply to estimate the mean target position and mean
target velocity. The goal instead is to measure the velocity profile over the smallest region possible
and to repeat this measurement quickly and accurately over the entire target. Therefore, the joint
optimization of spatial, velocity, and temporal resolution is critical. In addition to the mean
velocity, diagnostically useful information is contained in the volume of blood flowing through

various vessels, spatial variations in the velocity profile, and the presence of turbulence. While
current methods have proven extremely valuable in the assessment of the velocity profile over an
entire vessel, improved spatial resolution is required in several diagnostic situations. Improved
velocity resolution is also desirable for a number of clinical applications. Blood velocity estimation
algorithms implemented in current systems also suffer from a velocity ambiguity due to aliasing.
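The aliasing limit follows directly from sampling theory: the Doppler shift must stay below half the PRF, so the highest unaliased axial velocity is v_max = c·PRF/(4·f0). A minimal sketch (our own illustration, assuming 0° insonation angle):

```python
def nyquist_velocity_m_s(prf_hz, f0_hz, c=1540.0):
    """Highest unaliased axial velocity in pulsed Doppler:
    f_d < PRF/2  implies  v_max = c * PRF / (4 * f0)."""
    return c * prf_hz / (4.0 * f0_hz)

# A 5 MHz probe at PRF = 4 kHz aliases above ~0.31 m/s axial velocity;
# faster flow wraps around the velocity scale unless PRF is raised.
```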

A number of features make blood flow estimation distinct from typical radar and sonar target
estimation situations. The combination of factors associated with the beam formation system,
properties of the intervening medium, and properties of the target medium lead to a difficult and
unique operating environment. Figure 2.16 below summarizes the operating environment of an
ultrasonic blood velocity estimation system.

Figure 2-16: Operating environment for the estimation of blood velocity

The Doppler information can be color-coded and superimposed on a real-time B-mode image,
which helps to identify blood vessels and to reveal vessels with abnormal flow. This technique
can also be used to diagnose coronary stenosis.

The color flow map (Figure 2.17) shows color-encoded velocities superimposed on the gray-scale
image with the velocity magnitude indicated by the color bar on the side of the image. Motion
toward the transducer is shown in yellow and red, and motion away from the transducer is shown
in blue and green, with the range of colors representing a range of velocities to a maximum of 6
cm/s in each direction. Velocities above this limit would produce aliasing for the parameters used

in optimizing the instrument for the display of ovarian flow. A velocity of 0 cm/s would be indicated
by black, as shown at the center of the color bar.

Figure 2-17: Colour-coded Doppler scan of a foetal heart. The grey image is derived from
B-Mode. The super-imposed colour maps blood flow velocity in the selected region.

2.8.1. Data acquisition


In Doppler imaging data acquisition is done in three different ways.

• Continuous wave (CW) Doppler

A sinusoidal wave is transmitted by one piezoelectric crystal, and the reflected signal is received by a
second crystal. Usually, both crystals are embedded in the same transducer. CW Doppler is the
only exception to the pulse–echo principle for ultrasound data acquisition. It does not yield spatial
(i.e., depth) information.

• Pulsed wave (PW) Doppler

Pulsed waves are transmitted along a particular line through the tissue at a constant pulse repetition
frequency (PRF). However, rather than acquiring the complete RF signal as a function of time, as
in the M-mode acquisition, only one sample of each reflected pulse is taken at a fixed time, the so-

called range gate, after the transmission of the pulse. Consequently, information is obtained from
one specific spatial position.

• Color flow (CF) imaging

This is the Doppler equivalent of the B-mode acquisition. However, for each image line, several
pulses (typically 3–7) instead of one are transmitted. The result is a 2D image in which the velocity
information is visualized by means of color superimposed onto the anatomical gray scale image.

2.9. Image Display


For monitor displays, the digital scan converter memory requires a digital-to-analog converter
(DAC) and electronics to produce a compatible video signal. The DAC converts the digital pixel
data matrix into a corresponding analog video signal compatible with specific video monitors.
Window and level adjustments of the digital data modify the brightness and contrast of the
displayed image without changing the digital memory values (only the look-up transformation
table) before digital-to-analog conversion. The pixel density of the display monitor can limit the
quality and resolution of the image. Employing a "zoom" feature on many ultrasound instruments
can enlarge the image information to better display details within the image that are otherwise
blurred. Two types of methods, "read" zoom, and "write" zoom, are usually available. "Read"
zoom enlarges a user-defined region of the stored image and expands the information over a larger
number of pixels in the displayed image. Even though the displayed region becomes larger, the
resolution of the image itself does not change. Using "write" zoom requires the operator to rescan
the area of the patient that corresponds to the user-selected area. When write zoom is enabled, the
transducer scans the selected area and only the echo data within the limited region is acquired.
This allows a greater line density and all pixels in the memory are used in the display of the
information. In this case, better sampling provides better resolution.

Besides the B-mode data used for the 2D image, other information from M-mode and Doppler
signal processing can also be displayed. During operation of the ultrasound scanner, information
in the memory is continuously updated in real time. When ultrasound scanning is stopped, the last
image acquired is displayed on the screen until ultrasound scanning resumes.

2.10. Image Resolution
Resolution is the ability of the system to separate and define small, closely spaced structures.
Two components are distinguished: (1) lateral and (2) axial resolution. Lateral resolution is the
ability of the system to separate and define small structures in the plane perpendicular to the beam
axis. It can be optimized by focusing the beam at the area of interest and then gradually increasing
the frequency. If the beam width is greater than the separation between two objects, then these
objects cannot be resolved. Axial resolution is the ability of the system to separate and define
structures along the axis of the beam. It depends upon the pulse duration: two neighboring
structures can be resolved only if half the spatial pulse length is less than the axial distance between
them. Since a typical ultrasound pulse contains about two wavelengths, this corresponds to roughly
one wavelength, so a higher-frequency transducer must be used to improve axial resolution. The
frequency cannot be raised arbitrarily, however, because penetration depth falls as frequency
increases.
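The axial-resolution relationship can be put in numbers. The sketch below is illustrative (our own function, assuming the two-cycle pulse and 1.54 mm/µs sound speed mentioned in this section):

```python
def axial_resolution_mm(f_mhz, cycles_per_pulse=2, c_mm_us=1.54):
    """Axial resolution ~ half the spatial pulse length (SPL):
    SPL = cycles * wavelength, and wavelength = c / f."""
    wavelength_mm = c_mm_us / f_mhz
    return cycles_per_pulse * wavelength_mm / 2.0

# A 5 MHz, two-cycle pulse resolves structures ~0.31 mm apart axially;
# doubling the frequency halves this distance (at the cost of penetration).
```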

2.11. Image Storage


Ultrasound images are typically composed of 640 X 480 or 512 X 512 pixels. Each pixel typically
has a depth of 8 bits (1 byte) of digital data, providing up to 256 levels of gray scale. Image storage
(without compression) is approximately ¼ megabyte (MB) per image. For real-time imaging (10
to 30 frames per second), this can amount to hundreds of megabytes of data. Color images used
for Doppler studies increase the storage requirements further because of the larger number of bits
needed for color resolution (full-fidelity color requires 24 bits/pixel, one byte each for the red,
green, and blue primary colors).
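These storage figures are simple arithmetic, sketched below for illustration (function name ours; 1 MB taken as 2^20 bytes):

```python
def image_megabytes(nx=512, ny=512, bits_per_pixel=8):
    """Uncompressed frame size in MB (2**20 bytes per MB)."""
    return nx * ny * bits_per_pixel / 8 / 2**20

# 512 x 512 x 8-bit gray scale is exactly 0.25 MB per frame; the same
# matrix at 24 bits/pixel (color) is 0.75 MB. Ten seconds of gray-scale
# imaging at 30 frames/s already amounts to 75 MB uncompressed.
```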

Questions
1. Explain the principles of ultrasound propagation in different media.
2. What is the difference between longitudinal and transverse waves? How do they apply to
ultrasound?
3. Discuss the concept of wavelength and frequency in ultrasound physics. How are they
related?
4. How does attenuation affect ultrasound imaging? What are the factors that contribute to
attenuation?
5. Explain the piezoelectric effect and how it is used in ultrasound transducers.
6. Describe the basic components of an ultrasound imaging system and their functions.
7. What is the difference between A-mode, B-mode, and M-mode ultrasound imaging?
When would you use each of these modes?
8. Discuss the process of beamforming and how it affects image quality.
9. How does the choice of transducer frequency affect image quality and resolution?
10. What is spatial compounding and how does it improve image quality?
11. Explain the principles of Doppler ultrasound and how it is used in clinical settings.
12. What is the difference between pulsed-wave and continuous-wave Doppler ultrasound?
When would you use each of these modes?
13. Discuss the concept of frequency shift and how it is used in Doppler ultrasound.
14. What is the difference between spectral and color Doppler imaging? How are they used
in clinical settings?
15. Describe the limitations of Doppler ultrasound and when it may not be an appropriate
diagnostic tool.

References
1. Myer Kutz, Standard Handbook of Biomedical Engineering and Design, McGraw-Hill
Companies, 2004.
2. Joseph D. Bronzino (Ed.), The Biomedical Engineering Handbook, Second Edition, CRC
Press LLC, 2000.
3. G. S. Sawhney, Fundamentals of Biomedical Engineering, New Age International (P)
Limited, Publishers, New Delhi, 2007.

4. Hendee W. R., Ritenour E. R., Medical imaging physics, fourth edition, A John Wiley &
Sons, Inc., Publication, ISBN 0-471-38226-4, New York, 2002.
5. Paul Suetens, Fundamentals of medical imaging, second edition, Cambridge University
Press, ISBN-13 978-0-521-51915-1, New York, 2009.
6. Chris Guy, Dominic ffytche, An Introduction to The Principles of Medical Imaging,
Revised Edition, Imperial College Press, ISBN 1-86094-502-3, London, 2005.
7. Jerrold T. B., Seibert A. J., Edwin M. L., John M. B., The Essential Physics of Medical
Imaging, Lippincott Williams and Wilkins, a Wolters Kluwer company, USA, 2002.

CHAPTER THREE
3. X-RAY IMAGING

Objectives
By studying this chapter, the students should be able to:

• Identify each component of an x-ray tube and explain its function.


• Explain the shape of the x-ray spectrum, and identify factors that influence it.
• Discuss concepts of x-ray production, including the focal spot, line-focus principle and x-
ray beam hardening.
• Define and apply x-ray tube rating limits and charts.
• Describe the construction of radiographic film.
• Define transmittance and optical density.
• Describe the principle of operation of the image intensifier both in fluoroscopy and
screen-film radiography.

3.1. Introduction
X-ray imaging depends on the partial translucence of biological tissue with respect to X-ray
photons. If a beam of X-rays is directed at the human body, a fraction of the photons will pass
through without interaction. The bulk of the incident photons, on the other hand, will interact with
the tissue in different ways. As the beam is moved across the body, the relative proportions of
transmission, absorption and scatter change, as the beam encounters more or less bone, blood and
soft tissue. Ideally the contrast in an X-ray image would be produced just by the variation in the
number of photons that survive the direct journey to the detector without interaction. These are the
primary photons. They have travelled in a straight line from the X-ray tube focus and will give rise
to sharp-edged shadows. Figure 3.1 illustrates the imaging process.

Medical imaging uses X-rays with energies in the range 20–150 keV. These are very much higher
than most atomic binding energies and thus the inelastic interaction processes of Compton
scattering and the photoelectric effect launch charged electrons and ions, with relatively large

initial kinetic energies, into the surrounding tissue to produce the patient dose. The radiation dose
of both the patient and attendant staff must be kept to an absolute minimum and this has a
significant impact upon both the design of equipment and the procedures used in X-ray
radiography.

Figure 3-1: Principle of Radiographic Imaging

X-rays are invisible, can penetrate matter, ionize gases, expose a photographic emulsion, create light
emission in different substances, and induce biological changes in living tissue.

3.2. Production of X-rays


X-rays are produced when highly energetic electrons interact with matter and convert their kinetic
energy into electromagnetic radiation. A device that accomplishes such a task consists of

• an electron source,
• an external energy source to accelerate the electrons
• an evacuated path for electron acceleration,
• a target electrode.

For x-rays to be produced there must be a source of fast-moving electrons, a sudden stop of the
electrons' motion at a target, and the conversion of their kinetic energy (KE) into electromagnetic
energies: infrared (heat), light, and x-rays.

Figure 3-2: Production of x-rays

As shown in Figure 3.2, the electron beam is focused from the cathode onto the anode target by the
focusing cup, and the interaction of these electrons with the electrons of the tungsten atoms of the
target material produces:

• Heat: most of the kinetic energy of the projectile electrons is converted into heat. This happens
when a projectile electron interacts with the outer-shell electrons of the target atoms but does
not transfer enough energy to ionize them.
• X-rays: characteristic radiation (20%) or Bremsstrahlung radiation (80%).

The principal parts of the X-ray imaging system are the operating console, the high-voltage
generator, and the X-ray tube. The system is designed to provide a large number of electrons with
high kinetic energy focused onto a small target.

Characteristic Radiation

When the incident electron interacts with a K-shell electron via the repulsive electrical force, the
K-shell electron is ejected, leaving a vacancy in the K-shell. An electron from the adjacent L-shell
(or possibly a more distant shell) fills the vacancy. Since this electron loses energy as it drops into
the vacancy, a characteristic x-ray photon is emitted with energy equal to the difference between
the binding energies of the two shells. The radiation is called characteristic because the photon
energy is characteristic of the target element.

Characteristic K x-rays are emitted only when the energy of the electrons impinging on the target
exceeds the binding energy of a K-shell electron. Acceleration potentials must therefore be greater
than 69.5 kVp for tungsten targets or 20 kVp for molybdenum targets to produce K characteristic
x-rays.
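The shell-difference rule can be checked with approximate binding energies. The sketch below is illustrative (the rounded tungsten values, K ≈ 69.5 keV and L ≈ 10.2 keV, are textbook approximations we supply, not figures from this section):

```python
def k_alpha_energy_kev(k_binding_kev, l_binding_kev):
    """Characteristic (K-alpha) photon energy = difference between the
    K- and L-shell binding energies of the target atom."""
    return k_binding_kev - l_binding_kev

# Tungsten: ~69.5 keV (K) minus ~10.2 keV (L) gives a K-alpha line
# near 59.3 keV, one of the discrete energies shown in Figure 3.3.
```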

Characteristic x-rays have discrete energies (Figure 3.3) determined by the electron binding
energies of the target material.

Figure 3-3: Production of characteristics x-ray and its discrete levels

Bremsstrahlung spectrum

Bremsstrahlung radiation arises from energetic electron interactions with an atomic nucleus of the
target material. In a "close" approach, the positive nucleus attracts the negative electron, causing
deceleration and redirection, resulting in a loss of kinetic energy that is converted to an x-ray. The
x-ray energy depends on the interaction distance between the electron and the nucleus; it decreases
as the distance increases. Hence a bremsstrahlung photon can be produced with any energy up to
the kinetic energy of the projectile electron, so bremsstrahlung x-rays have a range of energies and
form a continuous emission spectrum.
Figure 3.4 below shows the production of bremsstrahlung radiation and its spectrum.

Major factors that affect x-ray production efficiency are the atomic number of the target material
and the kinetic energy of the incident electrons.
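A frequently quoted rule of thumb (an approximation we supply, not stated in the text) quantifies both factors: the x-ray production efficiency is roughly 9 × 10⁻¹⁰ · Z · V, with V the tube voltage in volts. A minimal sketch:

```python
def bremsstrahlung_efficiency(atomic_number, tube_voltage_v):
    """Approximate fraction of electron energy emitted as x-rays:
    efficiency ~ 9e-10 * Z * V (rule of thumb, V in volts)."""
    return 9e-10 * atomic_number * tube_voltage_v

# Tungsten (Z = 74) at 100 kV yields an efficiency of ~0.7%; the
# remaining ~99% of the electron energy appears as heat in the anode.
```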

Figure 3-4: Production of bremsstrahlung x-ray and its spectrum

3.3. X-ray tube


An X-ray tube is basically a thermionic diode-type electronic vacuum tube whose function is to
convert electrical energy into X-rays. It has the following major components (Figure 3.5):

• Glass envelope and tube housing
• Filament
• Anode
• Cathode

Figure 3-5: Parts of the X-ray Tube

3.3.1. Glass envelope and tube housing
The anode and cathode of an X-ray tube are enclosed in an evacuated glass tube, or envelope. The
vacuum allows an unobstructed path for the electron stream (tube current) and prevents oxidation
and burning out of the filament.

An X-ray tube is generally cylindrical in shape, measuring only 12-18 cm in length and 9 cm in
breadth. The envelope is a Pyrex or borosilicate glass tube, which enables it to withstand extremely
high temperatures.

The X-ray tube is housed in a metal housing which is lined with lead except at the window (the
lead absorbs the x-rays). This metal housing contains sealed oil that serves as an electrical insulator
and helps in heat dissipation. Some x-ray housings also have a fan to cool the tube. The metal
housing also provides physical support to the x-ray tube.

3.3.2. Cathode
The negative side of the tube is called the cathode. It contains a tungsten or tungsten-rhenium
filament, a focusing cup, and supporting wires.

Filament: tungsten or a tungsten-rhenium alloy is preferred because of its high melting point
(3370 °C), little tendency to vaporize, and high atomic number (74).

The filament is supported by two stout wires, which connect it to the appropriate electrical sources.
A low-voltage current is sent through one wire to heat the filament. The other wire is connected to
a high-voltage source that provides a high negative potential during exposure to drive the electrons
toward the anode.

Most modern x-ray machines are provided with two filaments mounted side by side. These
filaments differ in size, producing two focal spots of different sizes on the target. Such x-ray tubes
are called dual-focus tubes.

Focusing cup: the filament is embedded in a concave metal shroud made of nickel or molybdenum,
called the focusing cup. It is given a negative electrical potential so that the electrons emitted from
the cathode do not spread apart. Because of the focusing cup, the electrons travel toward the anode
in a narrow stream.

3.3.3. Anode
The anode is the positive side of the x-ray tube. It is of two types:

• Stationary (fixed) anode
• Rotating anode

The stationary or fixed anode is used in low-powered x-ray tubes such as portable x-ray units,
dental x-ray units, etc. The rotating anode is required in larger-capacity x-ray machines which
produce a high-intensity x-ray beam.

The anode has two functions: to provide mechanical support to the target and to act as a good
thermal conductor for heat dissipation.

Stationary anode: it consists of a copper block in which a rectangular plate of tungsten-rhenium
alloy, 2-3 mm thick, is embedded. This plate is called the target. Tungsten is selected for the target
for the following reasons:

• High melting point (3370 °C) to withstand the high temperatures produced during exposures.
• High atomic number (74) allows high x-ray production.
• Little tendency to vaporize.

Rotating anode (Figure 3.6): a rotating anode provides a several hundred times larger target area
for the electron beam to interact with, and thus the heat generated during an exposure is spread
over a larger area of the anode.

Figure 3-6: Cross-section of tube housing with x-ray tube insert for rotating anode

Therefore, the use of a rotating anode allows much higher exposures in much shorter exposure
times. The rotating anode is a molybdenum disc coated with a strip of tungsten-rhenium alloy,
mounted on a molybdenum stem. It is driven by an electromagnetic induction motor, which consists
of two main parts, the rotor and the stator. The stator is outside the glass envelope, while the rotor,
inside the envelope, is connected to the molybdenum stem of the anode. Current flowing in the
stator produces a magnetic field which induces a current in the rotor, making it rotate. Rotating the
anode at high speed requires excellent bearings and protection from the heat generated during
exposure.

Self-lubricating bearings coated with metallic barium or silver serve the first purpose (smooth high-speed rotation), whereas the molybdenum disc and stem serve the second (heat protection) by virtue of molybdenum's poor heat conductivity.

The tungsten-rhenium strip is beveled. The degree of beveling is called the target angle or anode
angle (Figure 3.7).

Focal spot: the focal spot on the target surface of the anode is the area bombarded by the electrons from the cathode during an exposure. The size and shape of the focal spot are determined by:

• The size of the filament
• The size and shape of the focusing cup
• The positioning of the filament in the focusing cup

The smaller the focal spot, the sharper the radiographic definition.

Figure 3-7: Anode focal spot and target angle



However, as the focal spot decreases in size, the heating of the target concentrates in a smaller area, which may damage the target and thereby affect x-ray production (see Figure 3.8). These conflicting requirements are met to some extent by the line-focus principle: by beveling the anode, the effective area of the target can be made much smaller than the actual area of electron interaction. The lower the anode angle, the smaller the effective focal spot size. For general radiography the target angle is usually not less than 15°.

Figure 3-8: Tradeoffs in choosing the anode angle
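The line-focus principle above can be illustrated numerically. Below is a minimal Python sketch; the 4 mm actual focal-track length is an illustrative assumption, and the relation effective length = actual length × sin(target angle) is the standard statement of the principle.

```python
import math

def effective_focal_length(actual_length_mm: float, target_angle_deg: float) -> float:
    """Line-focus principle: the effective (projected) focal spot length
    equals the actual focal-track length times sin(target angle)."""
    return actual_length_mm * math.sin(math.radians(target_angle_deg))

# An illustrative 4 mm actual focal track viewed through a 15 degree target angle:
print(round(effective_focal_length(4.0, 15.0), 2))  # 1.04 (mm)
# A smaller angle projects an even smaller effective spot:
print(round(effective_focal_length(4.0, 10.0), 2))  # 0.69 (mm)
```

This makes the tradeoff concrete: the electrons still heat the full 4 mm track, but the spot "seen" by the image receptor is only about 1 mm at 15°.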

Heel effect: the radiographic intensity on the cathode side of the x-ray beam is higher than that on the anode side. This difference in intensity across the x-ray beam is called the heel effect. It arises because x-rays are emitted from the target in all directions, and those emerging parallel to the angled target face are attenuated by the target itself. Thus, the intensity of the x-ray beam on the anode side is lower. This difference in intensity may be as high as 40%.

Therefore, when taking a radiograph of a part of unequal thickness, the thicker or denser side should be positioned towards the cathode and the thinner side towards the anode. The smaller the anode angle, the greater the heel effect. The heel effect is also more noticeable with larger film sizes. Figure 3.9 below illustrates the heel effect.



Figure 3-9: The heel effect

3.4. X-ray Tube Operation and Rating


The very wide range of applications of X-rays requires a range of tube designs to provide a range
of power output, peak X-ray wavelengths and operating characteristics. Thus, in X-ray CT a
relatively high-powered, wide angle wedge-shaped beam is required from a tube that has to be
mechanically rotated at high speed within the scanner gantry, in order to carry out tomography.
The mechanical movement effectively rules out the use of cooling water or oil and requires a
special slip ring arrangement to transmit the high-tension voltage from the fixed and bulky
transformers to the tube itself. The high output power demands the use of a rotating anode, used
in pulsed operation. A dental X-ray set, on the other hand, is relatively low-powered, needs only a
narrow field of view and the tube is fixed during the exposure. Thus, a much simpler water-cooled
or air-cooled fixed anode can be used.

The radiation output of an X-ray tube clearly depends on the electrical power that it consumes and
tubes are described or rated by the manufacturer in terms of electrical power rather than photon
flux produced. The electron beam energy is quite capable, when converted to heat, of both
destroying the anode and damaging the associated vacuum enclosure. For this reason each tube



has an associated rating chart laying out safe operating conditions. The tube is rated in heat units (HU), obtained from the product kV × mA × seconds. The anode and the tube housing are given separate ratings; thus, a standard radiography tube may be given an anode rating of 150,000 HU and a housing rating of 1 million HU. In addition, the tube will be described by the time taken for both anode and housing to cool back down to a normal operating temperature after an intensive session of exposures has raised them to their maximum operating temperature.
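The heat-unit arithmetic can be checked with a few lines of Python. The 150,000 HU anode rating and the exposure factors below are example values only; note also that for three-phase and high-frequency generators manufacturers apply a correction factor, which this simple sketch ignores.

```python
def heat_units(kvp: float, ma: float, time_s: float) -> float:
    """Heat units as defined in the text: HU = kVp x mA x seconds."""
    return kvp * ma * time_s

ANODE_RATING_HU = 150_000  # example anode rating quoted in this section

# One 100 kVp, 500 mA, 0.1 s exposure:
hu = heat_units(100, 500, 0.1)
print(hu)                      # 5000.0
print(hu <= ANODE_RATING_HU)   # True: this exposure is within the anode rating
```

A rating chart effectively tabulates which (kVp, mA, time) combinations keep this product, and its accumulation over repeated exposures, below the safe limit.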

Different radiographic procedures have been found empirically to be best carried out with
particular combinations of focal spot sizes, HT voltages, beam currents and exposure times. The
particular choice of settings will be governed by the overall attenuation factor for the anatomy
under investigation (how much bone, how much soft tissue), the type of film used, and finally the
recommended patient dose for the particular investigation.

3.5. Filtration in diagnostic X-ray


Low-energy x-rays are absorbed by the body without providing diagnostic information. Filtration is the process of absorbing low-energy x-ray photons before they enter the patient. There are two types of filtration: inherent filtration and added filtration.

Inherent filtration includes the thickness (1 to 2 mm) of the glass or metal insert at the x-ray tube port. Glass (SiO2) and aluminum have similar attenuation properties (Z = 14 and Z = 13, respectively) and effectively attenuate all x-rays in the spectrum below approximately 15 keV. Inherent filtration can also include attenuation by the housing oil and the field-light mirror in the collimator assembly.

Added filtration, on the other hand, refers to intentionally placing a sheet of metal in the beam to change its effective energy. In general diagnostic radiology, added filters attenuate the low-energy x-rays in the spectrum that have virtually no chance of penetrating the patient and reaching the x-ray detector. Because the low-energy x-rays are absorbed by the filters instead of the patient, the radiation dose is reduced. Aluminum (Al) is the most common added filter material.

Compensation (equalization) filters are also used to change the spatial pattern of the x-ray intensity
incident on the patient, so as to deliver a more uniform x-ray exposure to the detector. For example,
a trough filter used for chest radiography has a centrally located vertical band of reduced thickness
and consequently produces greater x-ray fluence in the middle of the field. This filter compensates



for the high attenuation of the mediastinum and reduces the exposure latitude incident on the image
receptor. Wedge filters are useful for lateral projections in cervical-thoracic spine imaging, where
the incident fluence is increased to match the increased tissue thickness encountered (e.g., to
provide a low incident flux to the thin neck area and a high incident flux to the thick shoulders).
"Bow-tie" filters are used in computed tomography (CT) to reduce dose to the periphery of the
patient, where x-ray paths are shorter and fewer x-rays are required. Compensation filters are
placed close to the x-ray tube port or just external to the collimator assembly.

3.6. X-ray tube collimators assembly


Collimators adjust the size and shape of the x-ray field emerging from the tube port. The typical
collimator assembly is attached to the tube housing at the tube port with a swivel joint. Adjustable
parallel-opposed lead shutters define the x-ray field.

Figure 3-10: Collimators Assembly

In the collimator housing (Figure 3.10), a beam of light reflected by a mirror (of low x-ray
attenuation) mimics the x-ray beam. Thus, the collimation of the x-ray field is identified by the
collimator's shadows. Regulators require that the light field and the x-ray field be aligned so that the sum



of the misalignments, along either the length or the width of the field, is within 2% of the SID (source-to-image distance). For example, at a typical SID of 100 cm (40 inches), the sum of the misalignments between the light field and the x-ray field at the left and right edges must not exceed 2 cm, and the sum of the misalignments at the other two edges also must not exceed 2 cm. Positive beam limitation (PBL)
collimators automatically limit the field size to the useful area of the detector. Mechanical sensors
in the film cassette holder detect the cassette size and location and automatically adjust the
collimator blades so that the x-ray field matches the cassette dimensions. Adjustment to a smaller
field area is possible; however, a larger field area requires disabling the PBL circuit.
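The 2%-of-SID rule can be sketched as a small check function. This is an illustrative helper (not part of any standard), assuming the misalignments at the two opposing edges of one axis are measured in centimetres.

```python
def alignment_ok(sid_cm: float, edge1_cm: float, edge2_cm: float) -> bool:
    """Regulatory check described in the text: the sum of light-field /
    x-ray-field misalignments along one axis must be within 2% of the SID."""
    return (edge1_cm + edge2_cm) <= 0.02 * sid_cm

# At a 100 cm SID the tolerance is 2 cm per axis:
print(alignment_ok(100, 0.8, 1.0))  # True  (1.8 cm total misalignment)
print(alignment_ok(100, 1.5, 1.0))  # False (2.5 cm total misalignment)
```

The same check is applied independently to the length axis and the width axis of the field.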

3.7. Causes of x-ray tube failures and steps to extend tube life
Cathode failure: prolonged heating of the filament by normal current or by repeated exposures causes evaporation of the filament metal, progressively thinning it and rendering it vulnerable to breakage.

Anode failure: melting of the anode results from excessive heat produced by the bombardment of electrons. This damage to the anode causes uncontrolled x-ray production. It also causes vibrations in the rotor due to the imbalanced disc, which increases the possibility of anode stem fracture.

Glass envelope failure: it may crack due to secondary arcing from the filament to the metal
deposits on the glass wall as a result of tungsten evaporation.

Steps to extend tube life:

• Warm up the anode before the actual exposure is made.
• Do not change the mA or kVp setting while the rotor is engaged (this causes a torque force on the anode).
• Use high-kVp, low-mA settings to avoid overheating the anode.
• Consult the correct tube rating chart to avoid overheating the anode.
• Do not run the rotating anode unnecessarily, as this shortens the life of the bearings.
• Ensure adequate cooling of the tube housing to avoid excessive heating of the oil in the housing.
• Do not allow overheating of the filament by repeated exposures in a short time.



3.8. X-Ray Generator
The principal function of the x-ray generator is to provide current at a high voltage to the x-ray
tube. Its main components are: the transformer, the high voltage power circuit, the stator circuit,
the focal spot selector, and automatic exposure control circuits. Electrical power available to a
hospital or clinic provides up to about 480 V, much lower than the 20,000 to 150,000 V needed
for x-ray production. Transformers are principal components of x-ray generators; they convert low
voltage into high voltage through a process called electromagnetic induction.

The high-voltage section of an x-ray generator contains a step-up transformer, typically with a primary-to-secondary turns ratio of 1:500 to 1:1,000. With these ratios, a tube voltage of 100 kVp requires an input peak voltage of 200 V or 100 V, respectively.
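The turns-ratio arithmetic can be verified with an ideal-transformer sketch (losses and regulation effects ignored):

```python
def secondary_peak_voltage(primary_peak_v: float, turns_ratio: float) -> float:
    """Ideal step-up transformer: Vs = Vp x (Ns/Np)."""
    return primary_peak_v * turns_ratio

# With a 1:500 ratio, a 200 V peak on the primary gives 100 kV:
print(secondary_peak_voltage(200, 500))   # 100000
# With a 1:1000 ratio, only 100 V peak is needed for the same 100 kVp:
print(secondary_peak_voltage(100, 1000))  # 100000
```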

Figure 3-11: A modular schematic view shows the basic x-ray generator components
In modern systems, microprocessor control and closed-loop feedback circuits help to ensure
proper exposure (Figure 3.11). Most modern generators used for radiography have automatic
exposure control (AEC) circuits, whereby the technologist selects the kVp and mA, and the AEC
system determines the correct exposure time. The AEC (also referred to as a photo timer) measures
the exposure with the use of radiation detectors located near the image receptor, which provide
feedback to the generator to stop the exposure when the proper exposure to the image receptor has
been reached.



Many generators have circuitry that is designed to protect the x-ray tubes from potentially
damaging overload conditions. Combinations of kVp, mA, and exposure time delivering excessive
power to the anode are identified by this circuitry, and such exposures are prohibited. Heat load
monitors calculate the thermal loading on the x-ray tube anode, based on kVp, mA, and exposure
time, and taking into account intervals for cooling. Some x-ray systems are equipped with sensors
that measure the temperature of the anode. These systems protect the x-ray tube and housing from
excessive heat buildup by prohibiting exposures that would damage them. This is particularly
important for CT scanners and high-powered interventional angiography systems.

At the operator console, the operator selects the kVp, the mA (proportional to the number of x-
rays in the beam at a given kVp), the exposure time, and the focal spot size. The peak kilovoltage
(kVp) determines the x-ray beam quality (penetrability), which plays a role in the subject contrast.
The x-ray tube current (mA) determines the x-ray flux (photons per square centimeter) emitted by
the x-ray tube at a given kVp. The product of tube current (mA) and exposure time (seconds) is
expressed as milliampere-seconds (mAs). Some generators used in radiography allow the selection
of "three-knob" technique (individual selection of kVp, mA, and exposure time), whereas others
only allow "two-knob" technique (individual selection of kVp and mAs). The selection of focal
spot size (i.e., large or small) is usually determined by the mA setting: low mA selections allow
the small focus to be used, and higher mA settings require the use of the large focus due to anode
heating concerns. On some x-ray generators, preprogrammed techniques can be selected for
various examinations (i.e., chest; kidneys, ureter, and bladder; cervical spine). All console circuits
have relatively low voltage and current levels that minimize electrical hazards.
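Since mAs is simply the product of tube current and exposure time, different mA/time combinations deliver the same tube output, which is why "two-knob" generators can expose one mAs dial instead of two. A small sketch:

```python
def mas(ma: float, time_s: float) -> float:
    """Tube output in milliampere-seconds: mAs = mA x s."""
    return ma * time_s

# "Three-knob" technique: 200 mA for 0.05 s ...
print(mas(200, 0.05))  # 10.0
# ... delivers the same mAs as 100 mA for 0.1 s, so at equal kVp the same
# quantity of x-rays is produced (though motion blur may differ).
print(mas(100, 0.1))   # 10.0
```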

Several x-ray generator circuit designs are in common use: the single-phase x-ray generator, which uses a single-phase input line voltage (e.g., 220 V at 50 A) and produces a pulsating DC waveform; the three-phase x-ray generator, which uses a three-phase AC line source, delivers a nearly constant voltage to the x-ray tube, and can produce short exposure times; the constant-potential generator, which provides a nearly constant voltage to the x-ray tube by means of a three-phase transformer; and the high-frequency inverter generator, which is discussed below.

3.8.1. High-frequency generator


The high-frequency inverter generator is the contemporary state-of-the-art choice for diagnostic
x-ray systems. Its name describes its function, whereby a high-frequency alternating waveform



(up to 50,000 Hz) is used for efficient transformation of low to high voltage. Subsequent
rectification and voltage smoothing produce a nearly constant output voltage.

The operational frequency of the generator is variable, depending chiefly on the set kVp, but also
on the mA and the frequency-to-voltage characteristics of the transformer. There are several
advantages of the high-frequency inverter generator. Single phase or three-phase input voltage can
be used. Closed-loop feedback and regulation circuits ensure reproducible and accurate kVp and
mA values. The variation in the voltage applied to the x-ray tube is similar to that of a three-phase
generator. Transformers operating at high frequencies are more efficient, more compact, and less
costly to manufacture. Modular and compact design makes equipment siting and repairs relatively
easy.

Figure 3-12: AC to DC conversion waveforms


In a high-frequency inverter generator, a single- or three-phase alternating current (AC) input
voltage is rectified and smoothed to create a direct current (DC) waveform (Figure 3.12). An
inverter circuit produces a high-frequency square wave as input to the high-voltage transformer.
Rectification and capacitance smoothing provide the resultant high-voltage output waveform.
Figure 3.13 below shows the components of a general-purpose high-frequency inverter generator.
The DC power supply produces a constant voltage from either a single-phase or a three-phase input
line source (the three-phase source provides greater overall input power).
The mA is regulated by a resistor circuit that senses the actual mA (the voltage across a resistor is proportional to the current) and compares it with a reference voltage. If the mA is too low, the
voltage comparator/oscillator increases the trigger frequency, which boosts the power to the
filament to raise its temperature and increase the thermionic emission of electrons. The closed-
loop feedback circuit eliminates the need for space charge compensation circuits and automatically
corrects for filament aging effects. The voltage is regulated analogously.
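The closed-loop idea can be caricatured as a proportional controller. This is a deliberately simplified sketch: the gain value and the generic "drive" variable are hypothetical, and a real generator adjusts the inverter trigger frequency that powers the filament rather than a single drive level.

```python
def regulate_ma(target_ma: float, sensed_ma: float, drive: float,
                gain: float = 0.01) -> float:
    """One step of closed-loop mA regulation: compare the sensed tube
    current with the reference and nudge the filament drive accordingly.
    (Hypothetical proportional controller for illustration only.)"""
    return drive + gain * (target_ma - sensed_ma)

# If the sensed tube current is below the 200 mA reference,
# the filament drive is increased, raising thermionic emission:
drive = regulate_ma(target_ma=200, sensed_ma=180, drive=1.0)
print(drive > 1.0)  # True
```

Because the loop always corrects toward the reference, it automatically compensates for slow disturbances such as filament aging, as the text notes.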

The high-frequency inverter generator is the preferred system for all but a few applications (e.g., those requiring extremely high power, extremely fast kVp switching, or sub-millisecond exposure times). In only rare instances is the constant-potential generator a better choice.

Figure 3-13: Modular components of the high frequency generator

In this high-frequency generator, the feedback signals provide excellent stability of the peak tube
voltage (kVp) and tube current (mA). Solid lines in the circuit depict low voltage, dashed lines
depict high voltage, and dotted lines depict low-voltage feedback signals from the sensing circuits
(kVp and mA). The potential difference across the x-ray tube is determined by the charge delivered
by the transformer circuit and stored on the high-voltage capacitors.



3.9. Screen-Film Radiography

Screen-film radiography is the oldest projection radiographic system, in use since the discovery of imaging with x-rays.

Projection imaging refers to the acquisition of a two-dimensional image of the patient's three-dimensional anatomy. Projection imaging delivers a great deal of information compression, because anatomy that spans the entire thickness of the patient is presented in one image.

Figure 3-14: The basic geometry of projection radiography

Radiography is a transmission imaging procedure. X-rays emerge from the x-ray tube, which is positioned on one side of the patient's body; they then pass through the patient and are detected on the other side by the screen-film detector.

In screen-film radiography, the optical density (OD, a measure of film darkening) at a specific
location on the film is (ideally) determined by the x-ray attenuation characteristics of the patient's
anatomy along a straight line through the patient between the x-ray source and the corresponding
location on the detector. The x-ray tube emits a relatively uniform distribution of x-rays directed
toward the patient. After the homogeneous distribution interacts with the patient's anatomy, the
screen-film detector records the altered x-ray distribution (Figure 3.14).



The basic components of a planar X-ray radiography system are the X-ray tube, a collimator, an anti-scatter grid, and a detector. The role of the detector is to convert the energy of the transmitted X-rays into a recordable image. Depending on the detector type used, X-ray radiography systems are classified into the following:

• Screen-film radiography: Traditional x-ray film detector


• Computed radiography: Digital detectors
• Digital radiography: Digital detectors

3.9.1. Screen-film detector


The screen-film detector system used for general radiography consists of a cassette, intensifying screens, and a sheet of film. Figure 3.15 shows the screen-film cassette and its cross-section.

Figure 3-15: A typical screen film cassette and its cross-section

3.9.2. Intensifying Screens


Photographic film is very inefficient for capturing X-rays. Only 2% of the incoming X-ray photons
contribute to the output image on a film. This percentage of contributing photons corresponds to
the probability that an X-ray photon (quantum) is absorbed by the detector. It is known as the
absorption efficiency.

The low sensitivity of film for X-rays would yield prohibitively large patient doses. Therefore, an
intensifying screen is used in front of the film. This type of screen contains a heavy chemical
element that absorbs most of the X-ray photons. When an X-ray photon is absorbed, the kinetic
energy of the released electron raises many other electrons to a higher energy state. When returning



to their initial state they produce a flash of visible light, called scintillation. Note that these light
photons are scattered in all directions. Consequently, two intensifying screens can be used, i.e.,
one in front and one behind the film, to increase the absorption efficiency further. The portion of
the light that is directed toward the film contributes to the exposure of the film. In this way, the
absorption efficiency can be increased to more than 50%, instead of the 2% for film alone. Because the light is emitted in all directions, a smooth light spot instead of a sharp peak hits the film, causing image blurring.

X-ray intensifying screens consist of scintillating substances that exhibit luminescence.


Luminescence is the ability of a material to emit light after excitation, either immediately (fluorescence) or delayed (phosphorescence) (Figure 3.16).

• Fluorescence is the prompt emission of light when excited by X-rays and is used in
intensifying screens. A material is said to fluoresce when light emission begins
simultaneously with the exciting radiation and light emission stops immediately after the
exciting radiation has stopped. Initially, calcium tungstate (CaWO4) was most commonly
used for intensifying screens.

Figure 3-16: luminescence

Advances in technology have now resulted in the use of rare earth compounds, such as
gadolinium oxysulfide (Gd2O2S). A more recent scintillator material is thallium-doped
cesium iodide (CsI:Tl), which has not only an excellent absorption efficiency but also a
good resolution because of the needle-shaped or pillar like crystal structure, which limits
lateral light diffusion.
• Phosphorescence or afterglow is the continuation of light emission after the exciting radiation has stopped. If the delay to reach peak emission is longer than 10^-8 seconds, or if the material continues to emit light after this period, the material is said to phosphoresce. Phosphorescence in screens is an undesirable effect, because it causes ghost images and occasionally film fogging.

3.9.3. Film
The film contains an emulsion with silver halide crystals (e.g., AgBr). When exposed to light, the
silver halide grains absorb optical energy and undergo a complex physical change. Each grain that
absorbs a sufficient number of photons contains dark, tiny patches of metallic silver called
development centers. When the film is developed, the development centers precipitate the change
of the entire grain to metallic silver. The lighter reaching a given area of the film, the more grains
are involved and the darker the area after development. In this way a negative is formed. After
development, the film is fixed by chemically removing the remaining silver halide crystals.

In radiography, the negative image is the final output image. In photography, the same procedure
has to be repeated to produce a positive image. The negative is then projected onto a sensitive
paper carrying silver halide emulsion similar to that used in the photographic film.

Typical characteristics of a film are its graininess, speed, and contrast.

Graininess: The image derived from the silver crystals is not continuous but grainy. This effect is most prominent in fast films. Indeed, because the number of photons needed to change a grain into metallic silver upon development is independent of the grain size, the larger the grains, the faster the film becomes dark.

Speed: The speed of a film is inversely proportional to the amount of light needed to produce a
given amount of metallic silver on development. The speed is mainly determined by the silver
halide grain size: the larger the grain size, the higher the speed, because the number of photons needed to change a grain into metallic silver upon development is independent of the grain size. The speed also depends on the properties of the intensifying screen and the film, on the quality of film–screen contact, and on a good match between the emission spectrum of the screen and the spectral sensitivity of the film used.

Contrast: The most widely used description of the photosensitive properties of a film is the plot
of the optical density D versus the logarithm of the exposure E. This curve is



called the sensitometric curve. The exposure is the product of the incident light intensity and its duration. The optical density is defined by

D = log10 (Iin / Iout)

where Iin and Iout are the incoming and outgoing light intensities when the developed film is exposed with a light source.

3.9.4. Film Processing


By a chemical process, the latent image formed during exposure is made visible and permanent. The films are handled in a darkroom fitted with a safelight. The safelight gives the required illumination while its particular coloured light does not affect the film. Generally, lights with olive-green or red-orange filters are used as safelights. The film emulsion is affected by high temperature, so the darkroom should be air-conditioned.

The film processing includes 5 steps:

Developing: In the developer, the exposed silver bromide is reduced to silver. The developer is an alkaline solution of various compounds, and the reducing action is slow at temperatures below 18 °C. At temperatures above 30 °C, the film emulsion will be damaged. Hence the developer temperature is maintained at 20 °C by a refrigeration system, and the recommended developing time is 5 minutes at 20 °C. Prolonged development will fog the image; development at higher temperatures results in poor contrast.

Rinsing: Once developing is over, the film turns black but is not yet transparent, owing to the undeveloped emulsion remaining on it. The unexposed emulsion would react again with light when the film is taken out into ordinary light and turn black, so it is essential to remove it. Therefore, after developing, the film is immersed in another tank containing plain water, or water with glacial acetic acid, for a few minutes to stop the developing action.

Fixing: In the fixer, the unexposed silver bromide is dissolved away, and only the converted silver, which is black in colour, remains on the film, representing the internal structural variation of the part. The fixer solution is acidic in nature.



Washing: The film coming out of the fixer carries over chemicals, which should be thoroughly washed off; otherwise, the film will discolor after some time in storage. To remove the chemicals, films are thoroughly washed in running water.

Drying: After washing in running water for about 20 to 25 minutes, the films are dried. Dust-free hot air at 100 °F to 120 °F is used for drying, in drying cabinets or rooms. Once the film is dried, the radiograph is ready for evaluation.

3.10. Computed Radiography (CR)


CR differs from screen-film radiography in that the CR cassette contains a photostimulable phosphor (PSP) plate instead of a sheet of film. When x-rays are absorbed by photostimulable phosphors, some light is promptly emitted, but much of the absorbed x-ray energy is trapped in the PSP screen and can be read out later. For this reason, PSP screens are also called storage phosphors or imaging plates.

3.10.1. Imaging plate


CR imaging plates are made of BaFBr or BaFI. A CR plate is a flexible screen that is enclosed in a cassette similar to a screen-film cassette. BaFBr is doped with europium (Eu) atoms, creating defects in its structure. When x-ray energy is absorbed by the BaFBr:Eu phosphor, the absorbed energy excites electrons from the valence band to the conduction band. The excited electrons become mobile, and some fraction of them interact with so-called F-centers created by the doping process. The F-center traps these electrons in a higher-energy, metastable state, where they can remain for days to weeks. The number of trapped electrons per unit area of the imaging plate is proportional to the intensity of x-rays incident at each location during the exposure.

After exposure, a latent image exists as a spatially dependent distribution of electrons trapped in high-energy states. During readout, laser light is used to stimulate the trapped electrons back to the conduction band, where they are free to transition to the valence band by emission of light. Figure 3.17 shows the imaging plate exposure and readout process.



Figure 3-17: Imaging plate processing

CR Imaging plate processing steps:

• The cassette is moved into the reader unit and the imaging plate is mechanically removed
from the cassette.
• The imaging plate is translated across a moving stage and scanned by a laser beam.
• The laser light stimulates the emission of trapped energy in the imaging plate, and visible
light is released from the plate.
• The light released from the plate is collected by a fiber optic light guide and strikes a
photomultiplier tube (PMT), where it produces an electronic signal.
• The electronic signal is digitized and stored.
• The plate is then exposed to bright white light to erase any residual trapped energy.
• The imaging plate is then returned to the cassette and is ready for reuse.

3.10.2. CR Reader
As the red laser light (approximately 700 nm) strikes the imaging phosphor at a location (x,y), the
trapped energy from the x-ray exposure at that location is released from the imaging plate. The
imaging plate is mechanically translated through the readout system using rollers. A laser beam



strikes a rotating multifaceted mirror, which causes the beam to repeatedly scan across the imaging
plate (Figure 3.18). These two motions result in a raster scan pattern of the imaging plate as shown
in the figure. Light is emitted from the plate by laser stimulation. A fraction of the emitted light travels through the fiber optic light guide and reaches a photomultiplier tube (PMT), which converts the light to an electrical signal. The electronic signal produced by the PMT is digitized and stored in memory. Therefore, for every spatial location (x,y) a corresponding gray-scale value is determined, and this is how the digital image I(x,y) is produced in a CR reader.

Figure 3-18: CR reader

3.10.3. A photomultiplier tube (PMT)


A photomultiplier tube (PMT) consists of a photocathode on top, followed by a cascade of dynodes
(Figure 3.19). The PMT is glued to the crystal. Because the light photons should reach the
photocathode of the PMT, the crystal must be transparent to the visible photons. The energy of the
photons hitting the photocathode releases some electrons from the cathode. These electrons are
then accelerated toward the positively charged dynode nearby. They arrive with higher energy (the
voltage difference × the charge), activating additional electrons. Because the voltage becomes
systematically higher for subsequent dynodes, the number of electrons increases in every stage,
finally producing a measurable signal. Because the multiplication in every stage is constant, the
final signal is proportional to the energy of the original photon.
Figure 3-19: Photomultiplier Tube
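Because the multiplication at every dynode stage is (ideally) constant, the overall PMT gain is that factor raised to the number of stages. A small sketch with illustrative numbers (the per-dynode factor and the 10-dynode count are typical assumed values, not from the text):

```python
def pmt_gain(electrons_per_dynode: float, n_dynodes: int) -> float:
    """Overall PMT multiplication, assuming the same secondary-emission
    factor at every dynode stage (an idealization)."""
    return electrons_per_dynode ** n_dynodes

# 10 dynodes, each releasing ~4 secondary electrons per incident electron:
print(pmt_gain(4, 10))  # 1048576 -- roughly a million-fold gain
```

Constant per-stage multiplication is also why the final signal stays proportional to the number of photoelectrons, and hence to the energy of the original photon, as the text states.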

3.11. Digital Radiography (DR)

DR describes a multitude of digital x-ray detection systems that immediately process the absorbed
X-ray signal after exposure and produce the image for viewing with no further user interaction.
There are two types of digital radiography (DR) detectors:
• Indirect flat panel detectors
• Direct flat panel detectors.

3.11.1. Indirect Flat Panel Detectors


In indirect flat panel detectors, x-ray energy is first converted into light by a CsI:Tl scintillator,
and then the light is converted into a voltage using a two-dimensional array of photodiodes. When
an x-ray is absorbed in a CsI rod, light is produced. The light is converted to an electrical
signal by the photodiodes in the thin-film transistor (TFT) array and stored in capacitors which
are formed at the junction of the photodiodes.



A typical configuration for a flat panel detector system is shown in Figure 3.20. The flat panel
comprises a large number of individual detector elements, each one capable of storing charge in
response to x-ray exposure. Each detector element has a light-sensitive region, and a small corner
of it contains the electronics. Just before exposure, the capacitor, which stores the accumulated x-
ray signal on each detector element, is shunted to ground, dissipating lingering charge and resetting
the device. The light-sensitive region is a photoconductor, and electrons are released in the
photoconductor region on exposure to visible light. During exposure, charge is built up in each
detector element and is held there by the capacitor.
Figure 3-20: Flat panel detector readout process

During exposure, negative voltage is applied to all gate lines, causing all of the transistor switches
on the flat panel imager to be turned off. Therefore, charge accumulated during exposure remains
at the capacitor in each detector element. During readout, positive voltage is sequentially applied
to each gate line (e.g., R1, R2, R3, as shown in Figure 3.20), one gate line at a time. Thus, the
switches for all detector elements along a row are turned on. The multiplexer (right of Figure 3.20)
is a device with a series of switches in which only one switch is closed at a time. The multiplexer
sequentially connects each vertical wire (e.g., C1, C2, C3), via switches (S1, S2, S3), to the
digitizer, allowing each detector element along each row to be read out.

The size of the detector element on a flat panel largely determines the spatial resolution of the
detector system. For high spatial resolution, small detector elements are needed. However, the
electronics (e.g., the switch, capacitor, etc.) of each detector element takes up a certain (fixed)
amount of the area, so, for flat panels with smaller detector elements, a larger fraction of the
detector element's area is not sensitive to light. Therefore, the light collection efficiency decreases
as the detector elements get smaller.

For flat panel arrays, the light collection efficiency of each detector element depends on the
fractional area that is sensitive to light (Figure 3.21). This fraction is called the fill factor, and a
high fill factor is desirable.

Figure 3-21: Flat panel detector light sensitivity

Therefore, the choice of the detector element dimensions requires a tradeoff between spatial
resolution and contrast resolution.
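The tradeoff can be illustrated numerically. Assuming a square detector element with a fixed dead border occupied by the electronics (an invented geometry, not a datasheet figure), the fill factor falls quickly as the pitch shrinks:

```python
def fill_factor(pixel_pitch_um, dead_border_um):
    """Fraction of a square detector element that is light-sensitive,
    assuming a fixed dead border (switch, capacitor, traces) along two
    edges of the element -- an illustrative geometry."""
    sensitive = pixel_pitch_um - dead_border_um
    if sensitive <= 0:
        return 0.0
    return (sensitive / pixel_pitch_um) ** 2

# smaller pitch -> the fixed electronics area eats a larger fraction
for pitch in (200, 143, 100):
    print(f"{pitch} um pitch -> fill factor {fill_factor(pitch, 30):.2f}")
```

With a 30 µm dead border, halving the pitch from 200 µm to 100 µm drops the fill factor from about 0.72 to 0.49, which is why finer pixels trade away light collection efficiency.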

3.11.2. Direct flat panel detectors

Direct flat panel detectors eliminate the intermediate step of converting x-ray energy into light,
and use direct absorption of the x-ray photons to produce an electrical signal.

Direct flat panel detectors are made from a layer of photo conductor material on top of a TFT
array. These photoconductor materials have many of the same properties as silicon, except they
have higher atomic numbers. Selenium is commonly used as the photoconductor. The TFT arrays
of these direct detection systems make use of the same line-and-gate readout logic, as described
for indirect detection systems. With direct detectors, the electrons released in the detector layer
from x-ray interactions are used to form the image directly. Light photons from a scintillator are
not used. A negative voltage is applied to a thin metallic layer (electrode) on the front surface of
the detector, and therefore the detector elements are held positive in relation to the top electrode.



During x-ray exposure, x-ray interactions in the selenium layer liberate electrons that migrate
through the selenium matrix under the influence of the applied electric field and are collected on
the detector elements. After exposure, the detector elements are read out as described earlier for
indirect systems.

3.12. Scatter radiation in Projection Radiography

In most radiography (except mammography), photon interactions in soft tissue produce scattered
x-ray photons. These scattered photons are the main source of noise and cause a radiation fog in
the radiographic image.

The scatter-to-primary ratio (S/P) is the ratio of scattered photons to primary photons. It depends
on the area of the x-ray field (field of view), the patient thickness, and the energies of the x-rays.
An increase in the scatter-to-primary ratio causes a reduction in contrast and signal-to-noise ratio
(SNR) in the image.

Figure 3-22: Scatter to primary ratio


3.12.1. Contrast
Contrast is the difference in the image gray scale between closely adjacent regions on the image.
In radiography scattered radiation causes a loss of contrast in the image.

For two adjacent areas transmitting photon fluences of A and B, the contrast in the absence of
scatter is:

C0 = (A - B)/A

And the contrast in the presence of scatter is:

C = C0 × 1/(1 + S/P)



The term 1/(1 + S/P) is called the contrast reduction factor. Figure 3-23 below shows how contrast
decreases with the scatter-to-primary ratio.

Figure 3-23: Contrast as a function of scatter to primary ratio
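Combining the two formulas above gives a short calculation; the fluence values and the S/P of 4 are illustrative numbers:

```python
def contrast(a, b, scatter_to_primary=0.0):
    """Contrast between two adjacent regions transmitting photon fluences
    A and B; scatter reduces it by the factor 1/(1 + S/P)."""
    c0 = (a - b) / a
    return c0 * (1.0 / (1.0 + scatter_to_primary))

c0 = contrast(1000, 900)           # no scatter: C0 = 0.10
c = contrast(1000, 900, 4.0)       # illustrative S/P of 4
print(round(c0, 3), round(c, 3))   # 0.1 0.02 -- an 80% loss of contrast
```

An S/P of 4 reduces the contrast to one fifth of its scatter-free value, which is why scatter rejection matters so much in thick body parts.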

3.12.2. Anti-scatter Grids

Scattered photons can be reduced by placing a collimation system called an anti-scatter grid
between the patient and the detector. An anti-scatter grid is composed of a series of lead strips
aligned with the x-ray source. They reduce the amount of scattered radiation reaching the detector
by exploiting geometry.

Grid ratio: the ratio of the height to the width of the interspaces (not the grid bars) in the grid.
Grid ratios of 8:1, 10:1, and 12:1 are most common in general radiography, and a grid ratio of 5:1
is typical in mammography.

As shown in Figure 3-25 below, grid ratio = h/D, where h is the height of the lead strips and D is the width of the interspace.



The grid is essentially a one-dimensional collimator, and increasing the grid ratio increases the
degree of collimation. Higher grid ratios provide better scatter cleanup, but they also result in
greater radiation doses to the patient. A grid is quite effective at attenuating scatter that strikes the
grid at large angles (where 0 degrees is the angle normal to the grid), but grids are less effective
for smaller-angle scatter.

Figure 3-24: Anti-scatter grid system

Figure 3-25: Grid construction
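The grid-ratio definition (h/D) is a one-line calculation; the strip height and interspace width below are invented for illustration:

```python
def grid_ratio(strip_height_mm, interspace_width_mm):
    """Grid ratio = h / D: height of the lead strips over the width of
    the interspace material between them (not the strips themselves)."""
    return strip_height_mm / interspace_width_mm

# e.g., 2.4 mm tall strips with 0.20 mm interspaces -> a 12:1 grid
print(f"{grid_ratio(2.4, 0.20):.0f}:1")
```

Taller strips or narrower interspaces raise the ratio, tightening the collimation at the cost of higher patient dose, as described above.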



The following table includes the parameters that describe anti-scatter grid properties.

Table 3.1: Parameters that describe anti-scatter grid properties

Figure 3-26: Radiographic image without (left) and with (right) an anti-scatter grid



3.13. Mammography
Mammography is a radiographic examination that is specially designed for detecting breast
pathology. It uses a low-dose x-ray system to examine breasts. A mammography exam, called a
mammogram, is used to aid in the early detection and diagnosis of breast diseases in women.

The features that distinguish mammography equipment from other x-ray imaging systems arise
from the properties of breast cancer and breast tissue:
• Cancer produces very small physical changes in the breast that are difficult to visualize
with conventional x-ray imaging.
• The breast consists of soft tissues with relatively small differences in density (and atomic number).

The small x-ray attenuation differences between normal and cancerous tissues in the breast require
the use of x-ray equipment specifically designed to optimize breast cancer detection.

Low x-ray energies provide the best differential attenuation between the tissues (see Figure 3-27);
however, the high absorption results in a high tissue dose and long exposure time. Detection of
minute calcifications in breast tissues is also important because of the high correlation of
calcification patterns with disease.

Figure 3-27: Attenuation of breast tissues as a function of energy.



Detecting microcalcifications while minimizing dose and enhancing low-contrast detection
imposes extreme requirements on mammographic equipment and detectors. Because of the risks
of ionizing radiation, techniques that minimize dose and optimize image quality are essential, and
have led to the refinement of dedicated x-ray equipment, specialized x-ray tubes, compression
devices, anti-scatter grids, phototimers, and detector systems for mammography. Strict quality
control procedures and cooperation among the technologist, radiologist, and physicist ensure that
acceptable breast images are achieved at the lowest possible radiation dose.

The following figure illustrates the instrumentation of a mammography system.

Figure 3-28: Mammography system

3.13.1. Cathode and Filament Circuit


The mammographic x-ray tube is typically configured with dual filaments in a focusing cup that
produce small focal spot sizes. A small focal spot minimizes geometric blurring and maintains
spatial resolution necessary for microcalcification detection. An important difference in
mammographic tube operation compared to conventional radiographic operation is the low
operating voltage, below 35 kilovolt peak (kVp). Feedback circuits adjust the filament current as
a function of kV to deliver the desired tube current, which is 100 mA (±25 mA) for the large (0.3
mm) focal spot and 25 mA (± 10 mA) for the small (0.1 mm) focal spot.
3.13.2. Mammography X-ray Tube Anode
Most x-ray tubes use tungsten as the anode material, but most mammography equipment uses
molybdenum anodes, with an atomic number (Z) of 42, or in some designs a dual-material anode
with an additional rhodium track, with an atomic number (Z) of 45. These materials are used because
they produce a characteristic radiation spectrum that is close to optimum for breast imaging.

The x-ray tubes are arranged such that the cathode side of the tube is adjacent to the patient’s chest
wall (Figure 3.29), since the highest intensity of x-rays is available at the cathode side, and the
attenuation of x-rays by the patient is generally greater near the chest wall.

Mammography tubes often have grounded anodes, whereby the anode structure is maintained at
ground (0) voltage and the cathode is set to the highest negative voltage. With the anode at the
same voltage as the metal insert in this design, off-focus radiation is reduced because the metal
housing attracts many of the rebounding electrons that would otherwise be accelerated back to the
anode.

Figure 3-29: Orientation of anode cathode axis along the chest wall to nipple direction

Most x-ray machines use aluminium or "aluminium equivalent" to filter the x-ray beam to reduce
unnecessary exposure to the patient. Mammography uses filters that work on a different principle
and are used to enhance contrast sensitivity. Molybdenum (same as in the anode) is the standard



filter material. Added tube filters of the same element as the target reduce the low- and high-energy
x-rays in the x-ray spectrum and allow transmission of the characteristic x-ray energies.

The characteristic radiation energies of molybdenum (17.5 and 19.6 keV) are nearly optimal for
detection of low-contrast lesions in breasts of 3 to 6 cm thickness.

Figure 3-30: Anode filter combination


3.13.3. Collimation
In mammography fixed-size metal apertures or variable field size shutters are used to collimate
the x-ray beam. For most mammography examinations, the field size matches the film cassette
sizes (e.g., 18 X 24 cm or 24 X 30 cm). The exposure switch is enabled only when the collimator
is present. Many new mammography systems have automatic collimation systems that sense the
cassette size.

3.13.4. Automatic exposure control (AEC)


The AEC, also called a phototimer, employs a radiation sensor, an amplifier, and a voltage
comparator to control the exposure. The AEC detector is located underneath the cassette. This
sensor consists of a single ionization chamber or an array of three or more semiconductor diodes.
The sensor measures the residual x-ray photon flux transmitted through the patient. During the
exposure, x-ray interactions in the sensor release electrons that are collected and charge a
capacitor. When the voltage across the capacitor matches a preset reference voltage in a comparator
switch, the exposure is terminated. Figure 3-31 illustrates the control system.
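The charge-to-a-reference-voltage logic can be sketched as a simple simulation; the charge rate, capacitance, and reference voltage below are invented values, chosen only to show that a thicker patient (lower transmitted flux) lengthens the exposure:

```python
def aec_exposure_time(charge_rate_a, capacitance_f, v_ref, dt=1e-4):
    """Sketch of AEC: x-ray interactions in the sensor charge a capacitor;
    the comparator terminates the exposure when the capacitor voltage
    reaches the preset reference voltage."""
    t = v = 0.0
    while v < v_ref:
        v += charge_rate_a * dt / capacitance_f   # dV = dQ / C
        t += dt
    return t

# thicker patient -> lower transmitted flux -> longer exposure
print(round(aec_exposure_time(2e-9, 1e-9, 1.0), 3))   # ~0.5 s
print(round(aec_exposure_time(5e-10, 1e-9, 1.0), 3))  # ~2.0 s
```

Quartering the transmitted flux quadruples the exposure time, which is the behavior the comparator circuit provides automatically.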



Figure 3-31: Automatic exposure control system

3.13.5. Compression
Breast compression is a necessary part of the mammography examination. Firm compression
reduces overlapping anatomy and decreases tissue thickness of the breast. This results in fewer
scattered x-rays, less geometric blurring of anatomic structures, and lower radiation dose to the
breast tissues. Achieving a uniform breast thickness lessens exposure dynamic range and allows
the use of higher contrast film.

Compression is achieved with a compression paddle, a flat Lexan plate attached to a pneumatic or
mechanical assembly. Suspicious areas often require “spot” compression to eliminate
superimposed anatomy by further spreading the breast tissues over a localized area. Figure 3-32
below illustrates the area compression and spot compression paddles.



Figure 3-32: Compression paddle

3.13.6. Mammography Image Receptors


An ideal imaging system for mammography would provide good spatial resolution with minimal
radiation dose to the patient; in digital radiography this is achieved with highly sensitive flat panel
detectors. For CR-based mammography, reducing the diameter of the stimulating laser beam, using
a thinner phosphor layer, and using dual-sided read-out CR plates are three approaches that can be
used to address this issue. Both direct and indirect DR receptors can be used for digital
mammography. Here, manufacturing technology limits the pixel size to about 70-100 μm for image
sizes up to 3,328 × 4,096 pixels, covering an area of up to 24 × 29 cm.
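As a quick consistency check, the pixel counts and pitch quoted above imply the quoted detector area (a rough sketch that ignores any inactive border regions):

```python
def detector_size_cm(pixels_x, pixels_y, pitch_um):
    """Physical detector dimensions implied by pixel count and pitch."""
    to_cm = pitch_um * 1e-4    # 1 um = 1e-4 cm
    return pixels_x * to_cm, pixels_y * to_cm

# 3,328 x 4,096 pixels at a 70 um pitch (figures quoted above)
w, h = detector_size_cm(3328, 4096, 70)
print(f"{w:.1f} x {h:.1f} cm")   # 23.3 x 28.7 cm, i.e. about 24 x 29 cm
```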

The two types of digital mammography are:

1. 2D mammography
o It is also called full-field digital mammography (FFDM).
o With 2D digital mammography, the radiologist views all of the complexities of the
breast tissue in one flat image.
o Disadvantage: breast tissue can sometimes overlap, giving the illusion of normal breast
tissue looking like an abnormal area.

2. 3D mammography/ tomosynthesis
• It is a mammography system where the x-ray tube and imaging plate move during
the exposure.
• 3D mammography creates a series of thin slices through the breast that allow doctors
to examine breast tissue detail one slice at a time to help find breast cancer at its
earliest stages.



• It allows radiologists to view the breast tissue in one-millimeter slices, so that they
can provide a more confident assessment, and it can find cancers missed with
conventional 2D mammography.

Figure 3-33: A) Dense opacity with spiculated border in the cranial part of the right breast;
B) Cluster of irregular microcalcifications suggesting a poorly differentiated carcinoma.

3.14. Fluoroscopy
Fluoroscopy refers to the continuous acquisition of a sequence of x-ray images over time,
essentially a real-time x-ray movie of the patient. Fluoroscopy is a transmission projection imaging
modality, and is, in essence, just real-time radiography. Fluoroscopic systems use x-ray detector
systems capable of producing images in rapid temporal sequence. Fluoroscopy is used for
positioning catheters in arteries, for visualizing contrast agents in the gastrointestinal (GI) tract,
and for other medical applications such as invasive therapeutic procedures where real-time image
feedback is necessary. Fluoroscopy is also used to make x-ray movies of anatomic motion, such
as of the heart or the esophagus.

The image output of a fluoroscopic imaging system is a projection radiographic image, but
fluoroscopy uses TV technology that provides imaging at 30 frames per second, so a typical
10-minute fluoroscopic procedure produces 30 × 60 × 10 = 18,000 individual images. It also allows
acquisition of a real-time digital sequence of images (digital video) that can be played back as a
movie loop.

Fluoroscopy Instrumentation

Figure 3-34: Layout of Fluoroscopy System

3.14.1. Image Intensifier (II)


The image intensifier (II) is the principal component of the fluoroscopy imaging chain that
distinguishes it from radiography. Image intensifiers are used to convert the x-ray spectrum to
light energy. The input phosphor of the II is thicker than that of a radiographic screen to increase
the detection efficiency and therefore reduce the required x-ray dose. Fluoroscopy image
intensifiers are several thousand times more sensitive than screen-film cassettes.

There are four principal components of an Image Intensifier (Figure 3-35):

– Photocathode (in the input screen)
– Three focusing electrodes (G1, G2, and G3)
– Anode (part of the output window)
– Output phosphor



Figure 3-35: Image Intensifier tube and its components

The input screen

The input screen of the II consists of four different layers:

• Vacuum window (keeps air out of the II)
• Support
• CsI needles
• Photocathode

The vacuum window in the input screen prevents air from entering the image intensifier; the
vacuum environment is required to accelerate the electrons. X-rays must pass through the vacuum
window and support before striking the cesium iodide (CsI) input phosphor. CsI forms long
crystalline needles that act like light pipes, limiting the lateral spread of light and preserving spatial
resolution. The input phosphor absorbs the x-rays and converts their energy into visible light. The
light strikes the photocathode, causing electrons to be liberated into the electronic lens system of
the II.

Figure 3-36: The input screen



The electron optics

The five-component ("pentode") electronic lens system of the Image Intensifier includes:
– The G1, G2, and G3 electrodes
– the input screen substrate (the cathode)
– the anode near the output phosphor

Under the influence of an electric field of about 25,000 to 35,000 V, electrons are accelerated and
arrive at the anode with high velocity and considerable kinetic energy. The intermediate electrodes
(G1, G2, and G3) shape the electric field, focusing the electrons properly onto the output layer.
After penetrating the very thin anode, the energetic electrons strike the output phosphor and cause
a burst of light to be emitted.

The output phosphor

The electrons strike the output phosphor, causing emission of light. The thick glass output window
allows light to escape the top of Image Intensifier. Light that is reflected in the output window is
scavenged to reduce glare by the addition of a light absorber around the circumference of the
output window.

Figure 3-37: The output phosphor

Each electron causes the emission of approximately 1,000 light photons from the output phosphor,
resulting in light amplification (Figure 3-38). The image is much smaller at the output phosphor
than it is at the input phosphor, because the 23- to 35-cm diameter input image is focused onto a
circle with a 2.5-cm diameter. This reduction in image diameter produces a gain in image
brightness (minification gain).



Minification gain of II = (area of the input phosphor) / (area of the output phosphor)

Figure 3-38: Light Amplification in image intensifier
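The minification gain follows directly from the two diameters; using the 23 cm input field and 2.5 cm output phosphor mentioned above:

```python
import math

def minification_gain(input_diam_cm, output_diam_cm):
    """Minification gain of an II: area of the input phosphor divided by
    the area of the output phosphor (both circular fields)."""
    input_area = math.pi * (input_diam_cm / 2) ** 2
    output_area = math.pi * (output_diam_cm / 2) ** 2
    return input_area / output_area

# a 23 cm input field focused onto a 2.5 cm output phosphor
print(round(minification_gain(23, 2.5), 1))   # 84.6
```

Since π cancels, the gain is simply the square of the diameter ratio, (23/2.5)² ≈ 85.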

3.14.2. Characteristics of II Performance


The parameters useful in specifying the capabilities of the II are: conversion factor, brightness
gain, field of view/magnification modes, quantum detection efficiency, and S distortion. These
characteristics are also useful in troubleshooting IIs when they are not performing properly.

Conversion factor

Defined as a measure of the gain of an Image Intensifier:

Conversion factor = light out of II (candela/m²) / exposure rate into II (mR/sec)

The conversion factor of a new II ranges from 100 to 200 Cd·sec·m⁻²·mR⁻¹. The conversion factor
degrades with time, and this ultimately can lead to the need for II replacement.
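The conversion factor is a simple ratio; the luminance and exposure-rate readings below are invented for illustration:

```python
def conversion_factor(luminance_cd_m2, exposure_rate_mr_s):
    """Conversion factor of an II: output luminance (Cd/m^2) per unit
    input exposure rate (mR/s)."""
    return luminance_cd_m2 / exposure_rate_mr_s

# invented measurements: a new II vs. the same II after years of use
print(conversion_factor(300, 2))   # 150.0 -- within the 100-200 range
print(conversion_factor(300, 4))   # 75.0  -- degraded, nearing replacement
```

An aged tube needs a higher input exposure rate to produce the same luminance, which is exactly the degradation the conversion factor tracks.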

Brightness gain

It is the product of the electronic and minification gains of the II. The electronic gain of an II is
roughly 50, and the minification gain changes depending on the size of the input phosphor and the
magnification mode.



BG = minification gain × electronic (flux) gain

As the effective diameter of the input phosphor decreases (increasing magnification), the
brightness gain decreases.
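A short sketch of the brightness-gain relationship, using the electronic gain of about 50 quoted above and an assumed 15 cm magnification mode (both illustrative figures):

```python
def brightness_gain(minification_gain, electronic_gain=50):
    """Brightness gain of an II: minification gain times electronic
    (flux) gain; the electronic gain of ~50 is a typical figure."""
    return minification_gain * electronic_gain

# 2.5 cm output phosphor: full 23 cm field vs. a 15 cm magnification mode
full_fov = (23 / 2.5) ** 2   # minification gain ~ 84.6
mag_mode = (15 / 2.5) ** 2   # minification gain = 36.0
print(round(brightness_gain(full_fov)))   # ~4232
print(round(brightness_gain(mag_mode)))   # 1800 -- gain drops in mag mode
```

The drop in brightness gain when a smaller input field is used is what forces the exposure rate up in magnification modes.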

Automatic Brightness Control


Brightness is controlled automatically in a fluoroscopy system by the automatic brightness control
(ABC). The purpose of the ABC is to keep the brightness of the image at the monitor constant by
regulating the x-ray exposure rate (controlling kVp, mA, or both). The ABC responds to changes
in patient thickness and field modes.

Figure 3-39: Automatic Brightness Control

Field of View/Magnification Modes


Image intensifiers come in different sizes, and the field of view (FOV) and magnification modes
are determined by the size of the image intensifier. Most IIs have several magnification modes.
Magnification is produced by pushing a button that changes the voltages applied to the electrodes
in the II, and this results in different electron focusing. As the magnification factor increases, a
smaller area on the input of the II is visualized. When the magnification mode is engaged, the
collimator also adjusts to narrow the x-ray beam to the smaller field of view. As a matter of
radiation safety, the fluoroscopist should use the largest field of view (the least magnification) that
will facilitate the task at hand.

In normal operation of the image intensifier (left of Figure 3-40), electrons emitted by the
photocathode over the entire surface of the input window are focused onto the output phosphor,
resulting in the maximum field of view (FOV) of the II. Magnification mode (right of Figure 3-40)
is achieved by pressing a button that modulates the voltages applied to the five electrodes, which
in turn changes the electronic focusing such that only electrons released from the smaller-diameter
FOV are properly focused onto the output phosphor.

Figure 3-40: Normal and Magnification mode


S distortion

S distortion is a spatial warping of the image in an S shape through the image. This type of
distortion is usually subtle, if present, and is the result of stray magnetic fields and the earth's
magnetic field. On fluoroscopic systems capable of rotation, the position of the S distortion can
shift in the image due to the change in the system's orientation with respect to the earth's magnetic
field.



3.14.3. Video Camera
The output image on an II is small, and for over-table IIs, a ladder would be needed to view the
output window directly. Consequently, a video camera is usually mounted above the II and is used
to relay the output image to a TV monitor for more convenient viewing by the operator. In addition
to the TV camera, other image recording systems are often connected to the output of the II. The
optical distributor is used to couple these devices to the output image of the II. A lens is positioned
at the output image of the II, and light rays travel in a parallel (non-diverging) beam into the
light-tight distributor housing. A mirror or a prism is used to reflect or refract the light toward the
desired imaging receptor.

Two methods are used to electronically convert the visible image on the output phosphor of the
image intensifier into an electronic signal:
– Television camera tube
– Thin film transistors (TFT)

Figure 3-41: Image Intensifier coupling system
The Television Camera
The television camera consists of a cylindrical housing, approximately 15 mm in diameter by
25 cm in length, that contains the heart of the camera, the television camera tube. It also contains
electromagnetic coils that are used to properly steer the electron beam inside the tube.

Figure 3-42: The TV camera tube



At the TV camera, an electron beam is swept in raster fashion over the TV target (e.g., antimony
trisulfide, Sb2S3). The TV target is a photoconductor, whose electrical resistance is modulated by
varying levels of light intensity. In areas of more light, more of the electrons in the electron beam
pass across the TV target and reach the signal plate, producing a higher video signal in those
lighter regions.

The video signal is a voltage versus time waveform (as shown in Figure 3-43) that is communicated
electronically by the cable connecting the video camera with the video monitor. Synchronization
pulses are used to synchronize the raster scan pattern between the TV camera target and the video
monitor. Horizontal sync pulses cause the electron beam in the monitor to laterally retrace and
prepare for the next scan line. A vertical sync pulse has different electrical characteristics and
causes the electron beam in the video monitor to reset at the top of the screen.

Figure 3-43: The closed-circuit TV system used in fluoroscopy

Inside the video monitor, the electron beam is scanned in raster fashion, and the beam current is
modulated by the video signal. Higher beam current at a given location results in more light
produced at that location by the monitor phosphor. The raster scan on the monitor is done in
synchrony with the scan of the TV target. The video signal is amplified and is transmitted by cable
to the television monitor, where it is transformed back into a visible image.

Flat panel fluoroscopy detectors

Flat panel detectors are the heart of a digital fluoroscopy system. They are thin-film transistor
(TFT) pixelated arrays, rectangular in format, that are used as x-ray detectors. A CsI scintillator is
used to convert the incident x-ray beam into light, and a photodiode at each detector element
converts the light energy into an electronic signal. Flat panel detectors replace the image
intensifier and video camera and directly record the real-time fluoroscopic image sequence. The image
produced by the image intensifier is circular in format, resulting in less efficient utilization of
rectangular monitors for fluoroscopic display. The flat panel detector produces a rectangular
image, well matched to the rectangular format of TV monitors. The flat panel detector is
substantially less bulky than the image intensifier and TV system, but provides the same
functionality.

3.14.4. Common procedures using fluoroscopy


• Investigations of the gastrointestinal tract (barium swallows)
• Orthopedic surgery: guiding fracture reduction and the placement of metalwork
• Angiography of the leg, heart, and cerebral vessels
• Urological surgery, particularly retrograde pyelography
• Implantation of cardiac rhythm management devices (pacemakers, implantable
cardioverter defibrillators, and cardiac resynchronization devices)
• Discography, an invasive diagnostic procedure for evaluation of intervertebral disc
pathology

Figure 3-44: Fluoroscopic procedure

3.15. Digital Subtraction Angiography (DSA)


DSA is an imaging technique that produces very high-resolution images of the vasculature in the
body, resolving small blood vessels that are less than 100 µm in diameter.

Steps to acquire a DSA image:


– acquiring a regular radiographic image,
– injecting iodinated contrast agent into the bloodstream and acquiring a second
image,
– Performing image subtraction of the two digital images.



This technique greatly helps in contrast enhancement as the subtraction removes the appearance
of stationary anatomy from the resulting images while synthesizing images containing only
contrast in the blood vessels. Each image in the sequence reveals a different stage in the filling of
vessels with contrast. It is used to investigate diseases such as stenosis and clotting of arteries and
veins, and irregularities in systemic blood flow.
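The subtraction step can be sketched with toy images. Clinical systems subtract logarithmically transformed images; the plain pixel-wise subtraction below is the simplest illustration of how stationary anatomy cancels:

```python
def dsa_subtract(mask, live):
    """Digital subtraction angiography sketch: subtract the pre-contrast
    mask image pixel-by-pixel from the contrast-filled image, so
    stationary anatomy cancels and only opacified vessels remain."""
    return [[l - m for m, l in zip(mrow, lrow)]
            for mrow, lrow in zip(mask, live)]

# toy 3x3 frames: identical background anatomy, one contrast-filled pixel
mask = [[10, 10, 10],
        [10, 50, 10],   # dense structure, identical in both frames
        [10, 10, 10]]
live = [row[:] for row in mask]
live[0][1] += 40        # iodinated contrast arrives in a vessel
print(dsa_subtract(mask, live))   # all zeros except the vessel pixel
```

Repeating the subtraction for each frame of the sequence yields the series of vessel-only images described above, one for each stage of contrast filling.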

3.16. X-ray mA testing tools and Quality Assurance kit

3.16.1. Quality Assurance (QA) tests of x – ray machines


The aim of QA tests in diagnostic radiology is to ensure good-quality images at optimal doses.
Images of poor quality necessitate retakes, which can result in unnecessary doses to patients, staff,
and the public, and in overloading of the machine.

The QA kit comprises:

• Visual Inspection
• Beam Measurements (kVp, mR, HVL, etc.)
• Receptor Tests: Grids, PBL, Coverage
• Tube Assembly Tests: Collimator assembly, Focal Spot, Source-to-Image Distance
• Darkroom Tests (if applicable)

Visually evident deficiencies are often ignored or worked around by staff; reporting them often
leads to corrective action. Visual inspection checks include:
• Lights/LEDs working
• Proper technique indication
• Locks and interlocks working
• No broken or loose dials or knobs
• No obvious electrical or mechanical defects

Beam Measurements

kVp evaluation is a critical issue in diagnostic radiology. Even with high-frequency (HF)
generators, poor kV calibration can:
• increase dose if the kV is too low;
• cause poor mA linearity, leading to possible repeats;
• affect image contrast, although the effect is relatively minor for the ranges of
miscalibration usually encountered.

A single-exposure rating chart (Figure 3-45) provides information on the allowed combinations of
kVp, mA, and exposure time (power deposition) for a particular x-ray tube, focal spot size, anode
rotation speed, and generator type (assuming no accumulated heat on the anode).

Figure 3-45: Single-exposure rating charts for a given focal spot and anode rotation speed. Each
curve plots peak kilovoltage (kVp) on the vertical axis versus time (seconds) on the horizontal
axis, for a family of tube current (mA) curves indicating the maximal power loading.
For each tube, there are usually four individual charts with peak kilovoltage as the y-axis and
exposure time as the x-axis, containing a series of curves, each for a particular mA value. Each of
the four charts is for a specific focal spot size and anode rotation speed. Each curve represents the
maximal allowable tube current for a particular kVp and exposure time.
There are many ways to determine whether a given single-exposure technique is allowed. The mA
curves on the graph define the transition from allowed exposures to disallowed exposures for a
specified kVp and exposure time. One method is as follows:

1. Find the intersection of the requested kVp and exposure time.
2. Determine the corresponding mA, estimating by interpolation between adjacent mA curves
if necessary. This is the maximal mA allowed by the tube focal spot.
3. Compare the desired mA to the maximal mA allowed. If the desired mA is larger than
the maximal mA, the exposure is not allowed. If the desired mA is equal to or smaller
than the maximal mA, the exposure is allowed.

For mA versus time plots with various kVp curves, the rules are the same but with a simple
exchange of kVp and mA labels. Previous exposures must also be considered when deciding
whether an exposure is permissible, because there is also a limit on the total accumulated heat
capacity of the anode. The anode heat input and cooling charts must be consulted in this instance.

3.16.2. Protective clothing and personal dosimeters


The three main factors in effective protection from radiation exposure are distance from the source,
the time spent in the radiation field, and shielding. An X-ray beam is generally very intense
and collimated in a well-defined way. Its mean free path in air is many meters, so staff must
always avoid intercepting the direct beam. In addition, the direct beam must encounter a heavy
beam stop to prevent it from passing through walls, floors, and ceilings into adjacent uncontrolled
working areas.
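The interplay of the three factors can be illustrated with a simple point-source model (an idealization; real scatter fields are more complex): the exposure rate falls off as the inverse square of distance, the dose accumulates linearly with time, and shielding reduces it by a transmission factor. The numbers below are illustrative only.

```python
def dose_mGy(rate_at_1m_mGy_per_h, distance_m, time_h, transmission=1.0):
    """Point-source estimate: inverse-square law x exposure time x shielding transmission."""
    return rate_at_1m_mGy_per_h / distance_m ** 2 * time_h * transmission

# Doubling the distance quarters the dose; halving the time halves it;
# a lead apron with 50 % transmission halves it again.
```

This is only a planning sketch; actual occupational dose is established by the personal dosimetry described next.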

An individual’s occupational exposure to radiation is clearly dependent on his/her working
practice and will vary with the type of task involved. It is important that not just the average
exposure is controlled but also any peaks contained in that average. Personal occupational
exposure is monitored using film badges, illustrated in Figure 3-46, and pocket ionization chambers.

The monitor consists of a piece of photographic film overlaid with small sheets of different
materials, which act as radiation filters. Typically, these are tin, aluminum, and plastic. After
exposure, the degree of blackening of the film can be related to the total radiation to which the film
was exposed. The degree of differential blackening under the various filters provides a measure of
the photon energies in the radiation.

Figure 3-46: The personal radiation film badge monitor


Lead aprons are worn by all radiographers to reduce the amount of scattered radiation reaching the
body surface. All modern medical X-ray installations are fitted with adjustable lead diaphragms
so that, in each examination, only the region of interest is exposed to the beam. This reduces the
exposure and dose to the patient and the exposure of staff to scattered radiation.



Review Questions

1. Why are only the K-characteristic x-rays of the tungsten target material desirable in
radiographic imaging?
2. Why are photons undergoing Compton scattering undesired?
3. One way to avoid counting scattered photons is by placing an anti-scatter grid (with spacing
w and height h) on top of the X-ray detector, shown below. Determine the maximum
scattering angle of X-rays in terms of h and w that can pass through the grid and hence
being detected.

4. Explain the trade-offs in choosing the anode angle.


5. What makes mammography a special x-ray imaging technique? What is the purpose of
compression in mammography?
6. Adjacent regions of a radiograph have optical densities of 1.0 and 1.5. What is the
difference in the transmission of light through the two regions?
7. Define film speed and discuss its significance.
8. Trace the chain of events that take place when an automatic brightness control system
varies exposure rate.
9. Define the meaning of image contrast, and identify and explain the variables that
contribute to it.
10. What is the difference between Computed Radiography (CR) and Digital Radiography
(DR)? How is image scanning from the imaging plate accomplished in CR?
11. Describe the production of X-rays and the characteristics of the X-ray spectrum.
12. Explain the difference between bremsstrahlung and characteristic radiation in X-ray
production.



13. What are the factors affecting the intensity and quality of X-ray beams?
14. Explain the principle of interaction of X-rays with biological tissues and the different
types of interactions.
15. Discuss the radiation dose and its effects on human tissue.
16. How does the radiation dose affect the contrast and resolution of X-ray images?
17. Describe the concept of attenuation and its role in X-ray imaging.
18. What is the difference between transmission and scattering of X-rays in biological
tissues?
19. Explain how digital X-ray imaging differs from conventional radiography.
20. What are the advantages of digital X-ray imaging over conventional radiography?
21. Describe the principles of mammography image formation.
22. Discuss the advantages and disadvantages of mammography as a diagnostic tool for
breast cancer.
23. How do the characteristics of the X-ray beam affect mammography image quality?
24. Explain how contrast agents are used in mammography and their role in image
enhancement.
25. Describe the different types of artifacts that can occur in mammography images and their
causes.

References
1. Joseph D. Bronzino (Ed.), The Biomedical Engineering Handbook, Second Edition, CRC
Press LLC, 2000.
2. G.S., Fundamentals of Biomedical Engineering, New Age International (P) Limited,
Publishers, New Delhi, 2007.
3. Hendee W. R., Ritenour E. R., Medical Imaging Physics, Fourth Edition, John Wiley &
Sons, Inc., ISBN 0-471-38226-4, New York, 2002.
4. Paul Suetens, Fundamentals of Medical Imaging, Second Edition, Cambridge University
Press, ISBN 978-0-521-51915-1, New York, 2009.
5. Chris Guy, Dominic ffytche, An Introduction to the Principles of Medical Imaging,
Revised Edition, Imperial College Press, ISBN 1-86094-502-3, London, 2005.



6. Jerrold T. B., Seibert A. J., Edwin M. L., John M. B., The Essential Physics of Medical
Imaging, Lippincott Williams & Wilkins, a Wolters Kluwer company, USA, 2002.
7. Rehani MM, ArunKumar LS, Berry M. Quality assurance in diagnostic radiology. Ind. J.
Radiol. Imag. 2: 259-263, 1992.



CHAPTER FOUR
4. CT SCANNING

Objectives
After completing this chapter, the students should be able to:

• Explain the principles of x-ray transmission computed tomography.


• Compare the properties of x-ray projection images with x-ray CT images.
• Provide a brief history of the evolution of x-ray CT imaging.
• Describe different approaches to the reconstruction of CT images from projection
measurements.
• Characterize the relationship between CT numbers, linear attenuation coefficients,
and physical densities associated with CT scanning.
• Depict the configuration of spiral CT scanners.
• Explain different CT artifacts and their reduction techniques.

4.1. Introduction

Computed tomography (CT), sometimes called "computerized tomography" or "computed axial


tomography" (CAT), is a noninvasive medical examination or procedure that uses specialized X-
ray equipment to produce cross-sectional images of the body. Each cross-sectional image
represents a “slice” of the person being imaged, like the slices in a loaf of bread. These cross-
sectional images are used for a variety of diagnostic and therapeutic purposes.

In conventional radiography, the heart, lungs, and ribs are all superimposed on the same film, and
information along the dimension parallel to the x-ray beam is lost. In medical tomography, by
contrast, slices capture each organ in its actual three-dimensional position. CT was the first imaging
modality that made it possible to probe the inner depths of the body, slice by slice. The word
"tomography" is derived from the Greek τoµoς (slice) and γραφιν (to write).



How a CT system works:

• A motorized table moves the patient through a circular opening in the CT imaging
system.
• While the patient is inside the opening, an X-ray source and a detector assembly within
the system rotate around the patient. A single rotation typically takes a second or less.
During rotation the X-ray source produces a narrow, fan-shaped beam of X-rays that
passes through a section of the patient's body.
• Detectors in rows opposite the X-ray source register the X-rays that pass through the
patient's body as a snapshot in the process of creating an image. Many different
"snapshots" (at many angles through the patient) are collected during one complete
rotation.
• For each rotation of the X-ray source and detector assembly, the image data are sent to a
computer to reconstruct all of the individual "snapshots" into one or multiple cross-
sectional images (slices) of the internal organs and tissues.

Figure 4-1: Drawing of CT fan beam and patient in a CT imaging system

CT images of internal organs, bones, soft tissue, and blood vessels provide greater clarity and
more details than conventional X-ray images, such as a chest X-Ray.

The tomographic image is a picture of a slab of the patient’s anatomy. The 2D CT image
corresponds to a 3D section of the patient. CT slice thickness is very thin (1 to 10 mm) and is
approximately uniform. The 2D array of pixels in the CT image corresponds to an equal number
of 3D voxels (volume elements) in the patient. Each pixel on the CT image displays the average
x-ray attenuation properties of the tissue in the corresponding voxel. The following figure
illustrates the projection and reconstruction procedure.



Figure 4-2: Principle of CT

4.2. Instrumentation of CT

Figure 4-3: CT instrumentation


Generally, computed tomography instrumentation can be categorized into the data acquisition
unit and the computer system. The filters, collimator, reference detector, internal projector, x-ray
tube heat exchanger (oil cooler), high voltage generator, direct drive gantry motor, rotation control
unit, data acquisition system (DAS), detectors, and slip rings can be included in the data
acquisition unit. The computer unit includes the image processing system and the display and
evaluation sections. Figure 4-3 shows some parts of CT instrumentation.

4.3. CT – Scanner Generations

According to their source beam geometry, CT scanner generations can generally be classified
into three:
• Pencil Beam: inefficient use of the x-ray source, excellent scatter rejection
• Fan Beam: linear detector array, scattered radiation accounts for around 5%
• Open Beam: as used in projection radiography, highest detection of scatter

Figure 4-4: CT - Scanner source beam geometry


4.3.1. The First Generation: Rotate/Translate, Pencil beam
The first generation of CT scanners employed a rotate/translate, pencil-beam system (Fig. 4-5).
Only two x-ray detectors were used, and they measured the transmission of x-rays through the
patient for two different slices. The acquisition of the numerous projections and the multiple rays
per projection required that the single detector for each CT slice be physically moved through
all the necessary positions. This system used parallel-ray geometry. Starting at a particular angle,
the x-ray tube and detector system translated linearly across the field of view (FOV), acquiring
160 parallel rays across a 24-cm FOV. When the x-ray tube/detector system completed its
translation, the whole system was rotated slightly, and then another translation was used to acquire
the 160 rays in the next projection. This procedure was repeated until 180 projections were
acquired at 1-degree intervals. A total of 180 × 160 = 28,800 rays were measured.



Figure 4-5: The 1st generation CT scanner

4.3.2. The Second Generation: Rotate/Translate, Narrow fan beam


The next incremental improvement to the CT scanner was the incorporation of a linear array of 30
detectors, resulting in a shorter scan time of around 18 seconds per slice. The narrow fan beam,
however, allows more scattered radiation to be detected. The scanner system still requires both
translational and rotational motion of the source and detectors, but with reduced frequency
compared with the previous generation.

Figure 4-6: The 2nd generation CT scanner



4.3.3. Third Generation: Rotate/Rotate, Fan beam
The number of detectors used in third-generation scanners was increased substantially (to more
than 800 detectors), and the angle of the fan beam was increased so that the detector array formed
an arc wide enough to allow the x-ray beam to interrogate the entire patient (Fig. 4-7). Because
detectors and the associated electronics are expensive, this led to more expensive CT scanners.
However, spanning the dimensions of the patient with an entire row of detectors eliminated the
need for translational motion. The multiple detectors in the detector array capture the same number
of ray measurements in one instant as was required by a complete translation in the earlier scanner
systems.

The early third-generation scanners could deliver scan times shorter than 5 seconds. Newer
systems have scan times of one half second.

Figure 4-7: 3rd generation CT scanner

The rotate/rotate geometry of 3rd generation scanners leads to a situation in which each detector
is responsible for the data corresponding to a ring in the image (Figure 4-8).

Drift in the signal levels of the detectors over time affects the µt values that are backprojected to
produce the CT image, causing ring artifacts.

Figure 4-8: Ring artifact



4.3.4. Fourth generation: Rotate/Stationary
The Fourth-generation CT scanners were designed to overcome the problem of ring artifacts. With
fourth-generation scanners, the detectors are removed from the rotating gantry and are placed in a
stationary 360-degree ring around the patient (Fig. 4-9), requiring many more detectors. Modern
fourth-generation CT systems use about 4,800 individual detectors. Because the x-ray tube rotates
and the detectors are stationary, fourth-generation CT is said to use a rotate/stationary geometry.

Figure 4-9: 4th generation CT scanner

In a third-generation CT scanner, the detectors toward the center of the array make the transmission
measurement It, while the reference detector that measures I0 is positioned near the edge of the
detector array. In the fourth generation, the same detector makes both the transmission measurement
and the reference measurement, so the gain is the same for both. This is illustrated in Figure 4-10
below.

3rd gen: ln(g1·I0 / g2·It) = µt

4th gen: ln(g·I0 / g·It) = µt

The first equation yields the true µt only if the gain terms cancel each other out. If there is electronic
drift in one or both of the detectors, the gains of the two detectors differ and the measured µt is biased.
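A quick numeric check (with illustrative intensities, not measured data) shows why the shared-detector geometry is robust to gain drift while the third-generation geometry is not:

```python
import numpy as np

I0, It = 1000.0, 100.0        # reference and transmitted intensities (illustrative)
true_mut = np.log(I0 / It)    # ideal value of µt

# 4th generation: one detector (gain g) measures both I0 and It, so g cancels.
g = 1.05
gen4 = np.log((g * I0) / (g * It))

# 3rd generation: two different detectors; a 2 % drift in one gain biases µt.
g1, g2 = 1.00, 1.02
gen3 = np.log((g1 * I0) / (g2 * It))

bias = true_mut - gen3        # equals ln(1.02): a systematic, ring-forming error
```

The bias is the same for every reading of that detector pair, which is exactly how a consistent circular (ring) artifact arises in the backprojected image.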



Figure 4-10: The beam geometry in third and fourth generation

4.3.5. Fifth Generation: Stationary/Stationary


This CT scanner generation (called cine CT) was developed specifically for cardiac tomographic
imaging. The "cine-CT" scanner does not use a conventional x-ray tube; instead, a large arc of
tungsten encircles the patient and lies directly opposite the detector ring. X-rays are produced
from the focal track as a high-energy electron beam strikes the tungsten.

Figure 4-11: A schematic diagram of the fifth-generation CT scanner



There are no moving parts to this scanner gantry. The electron beam is produced in a cone-like
structure (a vacuum enclosure) behind the gantry and is electronically steered around the patient
so that it strikes the annular tungsten target (Fig. 4-11). Cine-CT systems, also called electron beam
scanners, are marketed primarily to cardiologists. They are capable of 50-msec scan times and can
produce fast-frame-rate CT movies of the beating heart.

4.3.6. Sixth Generation: Helical CT


The sixth generation of CT is called helical CT. Helical CT scanners acquire data while the table is
moving. By avoiding the time required to translate the patient table between slices, the total scan
time required to image the patient can be much shorter. This generation allows the use of less
contrast agent and increases patient throughput. In some instances the entire scan can be done
within a single breath-hold of the patient.

Figure 4-12: The 6th generation helical CT

4.3.7. Seventh Generation: Multiple detector array


X-ray tubes designed for CT have impressive heat storage and cooling capabilities, although the
instantaneous production of x-rays (i.e., x-rays per milliampere-second [mAs]) is constrained by
the physics governing x-ray production. An approach to overcoming x-ray tube output limitations
is to make better use of the x-rays that are produced by the x-ray tube. When multiple detector
arrays are used, the collimator spacing is wider and therefore more of the x-rays that are produced
by the x-ray tube are used in producing image data. With conventional, single detector array
scanners, opening up the collimator increases the slice thickness, which is good for improving the
utilization of the x-ray beam but reduces spatial resolution in the slice thickness dimension. With



the introduction of multiple detector arrays, the slice thickness is determined by the detector size
and not by the collimator. This represents a major shift in CT technology.

4.4. CT Image Formation


In early CT imaging devices, a narrow x-ray beam is scanned across a patient in synchrony with
a radiation detector on the opposite side of the patient.

If the beam is monoenergetic or nearly so and the patient is assumed to be a homogeneous
medium, the transmission of x-rays through the patient is:

It = I0·e^(−µx)

If many (n) regions with different linear attenuation coefficients (an inhomogeneous medium)
occur along the path of the x-rays, the transmission is:

It = I0·e^(−(µ1x1 + µ2x2 + ... + µnxn)) = I0·e^(−Σi µixi)

With a single transmission measurement, the separate attenuation coefficients cannot be
determined because there are too many unknown values of µi in the equation. However, with
multiple transmission measurements in the same plane but at different orientations of the x-ray
source and detector, the coefficients can be separated so that a cross-sectional display of
attenuation coefficients is obtained across the plane of transmission measurements.

Figure 4-13: CT projection data

A single transmission measurement through the patient made by a single detector at a given
moment in time is called a ray. A series of rays that pass through the patient at the same
orientation or angle is called a projection or view. The purpose of the CT scanner hardware is to
acquire a large number of transmission measurements through the patient at different positions.
The acquisition of a single axial CT image may involve approximately 800 rays taken at 1,000
different projection angles, for a total of



approximately 800,000 transmission measurements. Before the axial acquisition of the next slice,
the table that the patient is lying on is moved slightly in the cranial-caudal direction (the "z-axis"
of the scanner), which positions a different slice of tissue in the path of the x-ray beam for the
acquisition of the next image.

Each ray acquired in CT is a transmission measurement It through the patient along a line. The un-
attenuated intensity of the x-ray beam Io is also measured during the scan by a reference detector.
For a single orientation of projection, the projection data P can be calculated as:

It = I0·e^(−Σ µ(x,y)·t)

P = ln(I0/It) = Σ µ(x,y)·t

where the sum runs over the voxels of thickness t that lie along the ray. Figure 4-13 shows a
projection acquired at a specific angle or orientation.

4.4.1. Tomographic reconstruction


After acquiring the raw data, a CT reconstruction algorithm is used to produce the CT images. The
reconstruction of images from the scanned data is carried out by the computer. The fundamental
principle is the mathematical discovery that a two-dimensional function can be determined from
its projections along all directions. Data scanned at angles uniformly distributed about the origin
can reconstruct the image if the data are properly processed and projected. The time required for
reconstruction is about the same as that required for acquiring the data. Mathematical
reconstruction algorithms in software permit reconstruction to start as soon as the first projection
data are received by the computer. The reconstruction methods are:
a. Iterative methods
b. Analytic methods with the concept of back projection
c. Analytic methods with the concept of filtered back projection
A. Iterative methods

In this method, an initial guess about the two-dimensional pattern of x-ray attenuations is made.
The projection data likely to be given by this two-dimensional pattern (model predictions) in
different directions are then calculated which is compared with the measured data. Discrepancies
between the measured data and predicted model data are used in a continuous iterative
improvement of the predicted model array.



In order to illustrate the methodology of the iterative method to obtain an image of attenuation
coefficients from the measured intensity data, suppose the attenuation coefficients are given by a
2 × 2 object matrix with first row (2, 8) and second row (4, 6):

2 8
4 6

Now we carry out scanning in three directions, i.e., scan I in the vertical direction, scan II in the
diagonal direction, and scan III in the horizontal direction, to find the image matrix. The following
iterations can be carried out to match the image matrix to the object matrix:

i. Scan I of the object matrix in the vertical direction gives vertical sums of 6 and 14, which
are distributed down the vertical columns with equal weighting, i.e., 6/2 and 14/2, to get the
image matrix (3, 7; 3, 7).

ii. Scan II of the object matrix in the diagonal direction gives attenuations of 4, 8, and 8, while
the image matrix after the first iteration gives 3, 10, and 7. The differences between the object
and image matrices are 1, −2, and 1, which are back projected diagonally with equal weighting.



iii. Scan III in the horizontal direction of the object matrix gives attenuations of 10 and 10, while
scanning the image matrix after the second iteration also gives 10 and 10. The object and
image matrices now match, as the difference between corresponding elements is zero.
The final image matrix is then used to generate the image with the help of the computer.

B. Simple Backprojection
In this method, the image is reconstructed directly from the projection data without any need to
compare the measured data with a reconstructed model. If projections of an object in the two
directions normal to the x and y axes are measured and these projection data are projected back
into the image plane, the areas of intersection receive their summed intensities. It can be seen that
the back projection distribution is a representation of the imaged object. In the actual process, the
back projection for all scanned angles is carried out and the total back projected image is made by
summing the contributions from all the scan angles. This method generally gives a crude
reconstruction of the imaged object. Figure 4-14 below illustrates the effect of multiple projections
in reconstructing the image in simple back projection.

Figure 4-14: Simple Back projection: a) single detector. b) Two detectors. c) Multiple detectors.
The main drawback of this reconstruction method is the blurring present in the reconstructed
image. Figure 4-14 above illustrates this blurring effect on the reconstructed image even for data
received from multiple projections.

C. Filtered Back projection

The image can be reconstructed by back projection after the data have first been filtered. Each
projection is Fourier transformed into the frequency domain and filtered with a filter proportional
to spatial frequency up to some frequency cutoff. These filtered projections are then used to
construct the final back-projected image. The filtering operation can also be carried out in
Cartesian coordinates by using analytic algorithms known as convolution techniques. This is
achieved by convolving (filtering) the shadow function with a filter kernel. In principle, the
blurring effect is removed in the convolution process by means of a weighting (suitable processing
function) of the scan profiles before back projection. This method has been found to give a good
reconstruction of the imaged object. Different kernels are used for different clinical applications
such as soft tissue imaging or bone imaging. Figure 4-16 below illustrates the convolution of
projected data with the filtering kernels.
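A minimal filtered back projection can be sketched with NumPy and SciPy. This is a toy parallel-beam implementation built on image rotation, not a clinical algorithm; the square phantom, image size, and angle count are arbitrary choices for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(img, thetas_deg):
    """Toy Radon transform: rotate the image and sum along columns."""
    return np.stack([rotate(img, -t, reshape=False, order=1).sum(axis=0)
                     for t in thetas_deg])

def filtered_back_project(sino, thetas_deg, size):
    """Ramp-filter each projection in the frequency domain, then back project."""
    ramp = np.abs(np.fft.fftfreq(sino.shape[1]))          # Ram-Lak (ramp) filter
    filt = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    recon = np.zeros((size, size))
    for t, proj in zip(thetas_deg, filt):
        # smear the filtered projection across the image, then rotate into place
        recon += rotate(np.tile(proj, (size, 1)), t, reshape=False, order=1)
    return recon * np.pi / (2 * len(thetas_deg))

phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0                               # square test object
thetas = np.linspace(0.0, 180.0, 90, endpoint=False)
sino = forward_project(phantom, thetas)
recon = filtered_back_project(sino, thetas, 64)
```

Dropping the `ramp` multiplication turns the same loop into the blurred simple back projection of Figure 4-14; with the filter, the square phantom is recovered sharply.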



Figure 4-15: Filtered Back projection

Figure 4-16: Filtering kernels



4.4.2. Projection and Radon transform
In this section a mathematical background behind Tomographic reconstruction will be discussed.

Consider the 2D parallel-beam geometry shown in Figure 4-17(a), in which µ(x,y) represents the
distribution of the linear attenuation coefficient in the xy-plane.

It is assumed that the patient lies along the z-axis and that µ(x,y) is zero outside a circular field of
view with diameter FOV. The X-ray beams make an angle θ with the y-axis. The unattenuated
intensity of the X-ray beams is I0. A new coordinate system (r,s) is defined by rotating (x,y) over
the angle θ. This gives the following transformation formulas:

r = x·cosθ + y·sinθ
s = −x·sinθ + y·cosθ

Figure 4-17: CT projection principle

For a fixed angle θ, the measured intensity profile as a function of r, as shown in Figure 4-17(b),
is given by:

Iθ(r) = I0·e^(−∫ over Lr,θ of µ(x,y) ds)

where Lr,θ is the line that makes an angle θ with the y-axis at distance r from the origin. Each
intensity profile is transformed into an attenuation profile:

pθ(r) = ln(I0/Iθ(r)) = ∫ over Lr,θ of µ(x,y) ds

where pθ(r) is the projection of the function µ(x,y) along the angle θ. pθ(r) can be measured for θ
ranging from 0 to 2π. Because beams travelling along the same line from opposite sides
theoretically yield identical measurements, attenuation profiles acquired at opposite sides contain
redundant information. Therefore, as far as parallel-beam geometry is concerned, it is sufficient to
measure pθ(r) for θ ranging from 0 to π. Stacking all these projections pθ(r) results in a 2D dataset
p(r,θ) called a sinogram.

For a distribution µ(x,y) containing a single dot, the corresponding projection function p(r,θ)
has a sinusoidal shape, which explains the origin of the name sinogram. In mathematics, the
transformation of any function f(x,y) into its sinogram p(r,θ) is called the Radon transform:

p(r,θ) = ∫∫ f(x,y)·δ(x·cosθ + y·sinθ − r) dx dy

Figure 4-18: A sinogram
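The sinusoidal trace is easy to verify numerically: a point at (x0, y0) projects onto r = x0·cosθ + y0·sinθ, a sinusoid whose amplitude is the point's distance from the origin (the coordinates below are arbitrary illustrative values):

```python
import numpy as np

x0, y0 = 3.0, 4.0                               # position of the single dot
thetas = np.linspace(0.0, np.pi, 360)           # projection angles, 0 to pi
r = x0 * np.cos(thetas) + y0 * np.sin(thetas)   # the dot's trace in the sinogram
amplitude = np.hypot(x0, y0)                    # distance from the origin (= 5.0)
```

Plotting r against θ traces out exactly one arch of the sinusoid that gives the sinogram its name.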



4.4.3. Back projection and the inverse Radon transform
Given the sinogram p(r,θ), what is the original function f(x,y) or µ(x,y)? The procedure of
finding the solution is called back projection and is given by:

fb(x,y) = ∫ from 0 to π of pθ(x·cosθ + y·sinθ) dθ

This also amounts to finding the inverse Radon transform.

4.4.4. The central slice theorem

The projection theorem, also called the central slice theorem, answers the question of how to
recover the attenuation data from the projection data. Let F(kx,ky) be the 2D Fourier transform
(FT) of f(x,y):

F(kx,ky) = ∫∫ f(x,y)·e^(−2πi(kx·x + ky·y)) dx dy

and Pθ(k) the 1D FT of pθ(r):

Pθ(k) = ∫ pθ(r)·e^(−2πi·k·r) dr

Let θ be variable. Then Pθ(k) becomes a 2D function P(k,θ). The projection theorem now states
that

P(k,θ) = F(k·cosθ, k·sinθ)

That is, the 1D FT with respect to the variable r of the Radon transform of a 2D function is the 2D
FT of that function evaluated along a radial line at angle θ. Hence, it is possible to calculate f(x,y),
or µ(x,y), for each point (x,y) based on all its projections pθ(r), with θ varying between 0 and π.
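The theorem can be checked numerically in the discrete setting: at θ = 0 the projection is simply the column sum of the image, and its 1D FFT equals the ky = 0 row of the 2D FFT (a small random array stands in for the image):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))            # arbitrary 2D "image"

proj0 = f.sum(axis=0)             # projection onto the x-axis (theta = 0)
P0 = np.fft.fft(proj0)            # 1D FT of that projection

F = np.fft.fft2(f)                # 2D FT of the image
central_slice = F[0, :]           # the slice through the origin at theta = 0
```

`P0` and `central_slice` agree to machine precision, which is the discrete statement of the central slice theorem for θ = 0.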



Figure 4-19: Illustration of the Central slice theorem

4.5. CT Image Display

4.5.1. CT numbers or Hounsfield Units


After CT reconstruction, but before storing and displaying, CT images are normalized and
truncated to integer values. The number CT(x,y) in each pixel, (x,y), of the image is converted
using the following expression:

CT(x,y) = 1000 × (µ(x,y) − µwater) / µwater

where µ(x,y) is the floating-point attenuation value of the (x,y) pixel before conversion, µwater is
the attenuation coefficient of water, and CT(x,y) is the CT number (or Hounsfield unit) that ends
up in the final clinical CT image. The value of µwater is about 0.195 cm⁻¹ for the x-ray beam
energies typically used in CT scanning. This normalization results in CT numbers ranging from
about −1,000 to +3,000, where −1,000 corresponds to air, fat lies at roughly −100, water is 0,
most soft tissues range from about +20 to +80, and dense bone and areas filled with contrast agent
range up to +3,000. CT values are relatively stable for a single organ and are to a large extent
independent of the X-ray tube spectrum.
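The normalization can be expressed directly in code; the µ values below are illustrative, with 0.195 cm⁻¹ for water taken from the text:

```python
MU_WATER = 0.195   # attenuation coefficient of water, cm^-1 (from the text)

def hounsfield(mu):
    """Convert a linear attenuation coefficient (cm^-1) to a CT number (HU)."""
    return round(1000.0 * (mu - MU_WATER) / MU_WATER)

# By construction, water maps to 0 HU and a vacuum (mu = 0) to -1000 HU.
```

A voxel twice as attenuating as water would map to +1000 HU, in the range occupied by dense bone.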



Figure 4-20: Hounsfield Scale

4.5.2. Windowing and Leveling


CT images typically possess 12 bits of gray scale, for a total of 4,096 shades of gray. The human
eye, at a fixed pupil diameter, has only a limited ability to resolve relative differences in gray scale
(30 to 90 shades), and 6 to 8 bits is considered sufficient for image display.

Windowing and leveling the CT image is the way to perform this post-processing task (which
nondestructively adjusts the image contrast and brightness).

• The window width (W) determines the contrast of the image, with narrower windows
resulting in greater contrast.
• The level (L) is the CT number at the center of the window.
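A window/level display mapping can be sketched as follows; the soft-tissue window (W = 400, L = 40) used in the example is a typical but arbitrary choice:

```python
import numpy as np

def window_level(ct, width, level):
    """Nondestructively map CT numbers to 8-bit display values."""
    lo = level - width / 2.0
    disp = (ct - lo) / width * 255.0          # linear ramp across the window
    return np.clip(disp, 0, 255).astype(np.uint8)

hu = np.array([-1000, -160, 40, 240, 3000])   # sample CT numbers
shown = window_level(hu, width=400, level=40)
```

CT numbers at or below L − W/2 display as black and those at or above L + W/2 as white; narrowing W steepens the ramp between them, which is why a narrower window yields greater contrast.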



Figure 4-21: The concept of windowing and leveling: The thoracic images (top) illustrate the
dramatic effect of changing the window and level settings.

4.6. Spiral/Helical CT
Acquiring a single axial slice through a particular organ is of very limited diagnostic use, and so a
volume consisting of multiple adjacent slices is always acquired. One way to do this is for the X-
ray source to rotate once around the patient, then the patient table to be electronically moved a
small distance in the head/foot direction, the X-ray source to be rotated back to its starting position,
and another slice acquired. This is clearly a relatively slow process. The solution is to move the
table continuously as the data are being acquired: this means that the X-ray beam path through the
patient is helical, as shown in Figure 4-22. Such a scanning mode is referred to as either ‘spiral’
or ‘helical’, the two terms being interchangeable. However, there were several hardware and image
processing issues that had to be solved before this concept was successfully implemented in the
early 1990s.

Figure 4-22: The principle of Spiral CT

First, the very high-power cables that feed the CT system cannot physically be rotated continuously
in one direction, and similarly for the data transfer cables attached to the detector bank. A
‘contactless’ method of both delivering power and receiving data had to be designed. Second, the
X-ray tube is effectively operating in continuous mode, and therefore the anode heating is much
greater than for single-slice imaging. Finally, since the beam pattern through the patient is now
helical rather than consisting of parallel projections, modified image reconstruction algorithms had
to be developed. The first ‘spiral CT’, developed in the early 1990s, significantly reduced image
acquisition times and enabled much greater volumes to be covered in a clinical scan.



In a helical scan, the pitch (p) is defined as the ratio of the table feed (d) per rotation of the X-ray
tube to the collimated slice thickness (S):

p = d / S
The value of p typically used in clinical scans lies between 1 and 2. Compared to an equivalent
series of contiguous single-slice scans, the tissue radiation dose is reduced by a factor of p.
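The pitch relation is simple arithmetic; a minimal sketch (the function name and the example feed/collimation values are illustrative):

```python
def pitch(table_feed_mm, slice_thickness_mm):
    """Helical pitch p = d / S: table feed per rotation over collimated slice thickness."""
    return table_feed_mm / slice_thickness_mm

p = pitch(15.0, 10.0)  # 15 mm table feed per rotation, 10 mm collimation
print(p)               # 1.5
print(1 / p)           # relative dose vs. contiguous single-slice scanning
```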

4.7. CT Angiography
Angiography is a minimally invasive medical test performed by injecting a contrast material to
produce pictures of blood vessels in the body.

CT angiography uses a CT scanner to produce detailed images of both blood vessels and tissues
in various parts of the body. An iodine-rich contrast material (dye), which has a high attenuation
coefficient, is usually injected through a small catheter placed in a vein of the arm. A CT scan is then
performed while the contrast flows through the blood vessels to the various organs of the body.

Physicians use this test to diagnose and evaluate many diseases of blood vessels and related
conditions such as:
• Injury
• aneurysms
• blockages (including those from blood clots or plaques)
• disorganized blood vessels and blood supply to tumors
• congenital (birth related) abnormalities of the heart, blood vessels or various parts of the
body which might be supplied by abnormal blood vessels

Physicians also use this exam to examine blood vessels, for example following surgery, in order to:
• identify abnormalities, such as aneurysms, in the aorta, both in the chest and abdomen, or
in other arteries.
• detect atherosclerotic (plaque) disease in the carotid artery of the neck, which may limit
blood flow to the brain and cause a stroke.
• identify a small aneurysm or arteriovenous malformation (abnormal communications
between blood vessels) inside the brain or other parts of the body.



• detect atherosclerotic disease that has narrowed the arteries to the legs and help prepare for
endovascular intervention or surgery.
• detect disease in the arteries to the kidneys or visualize blood flow to help prepare for a
kidney transplant.
• guide interventional radiologists and surgeons making repairs to diseased blood vessels,
such as implanting stents or evaluating a stent after implantation.
• detect injury to one or more arteries in the neck, chest, abdomen, pelvis or extremities in
patients after trauma.
• evaluate arteries feeding a tumor prior to surgery or other procedures such as
chemoembolization or selective internal radiation therapy.
• identify dissection or splitting in the aorta in the chest or abdomen or its major branches.
• show the extent and severity of the effects of coronary artery disease and plan for a surgical
operation, such as a coronary bypass and stenting.
• examine pulmonary arteries in the lungs to detect pulmonary embolism (blood clots, such
as those traveling from leg veins) or pulmonary arteriovenous malformations.
• look at congenital abnormalities in blood vessels, especially arteries in children (e.g.,
malformations in the heart or other blood vessels due to congenital heart disease).
• evaluate obstructions of vessels.

4.8. CT Artifacts
Computed tomography can introduce false detail and distortion (that is, image artifacts) into
the final image as a result of faulty equipment, errors in the stored data, and patient movement.
Because of its digital nature, CT can also produce generic digital-imaging artifacts.

4.8.1. Patient movement


Patient movement is always a problem when data acquisition times exceed a few seconds. The
movement can simply blur the image, or it can introduce more troublesome discrete artifacts when
the patient motion modulates the projection data with a particular temporal frequency. This causes
misregistration artifacts, which appear as shading in the reconstructed image.



Figure 4-23: Artifacts due to patient movement

4.8.2. Partial volume effects


The interpretation of any tomographic image can be confused by partial volume effects. These
arise directly from the use of finite image voxel dimensions. Even a very high-resolution image,
made up of cubical cells about 1 mm on a side, may well comprise some cells covering a range
of tissue types: bone, blood vessel, muscle, and fat. The signal from these cells will reflect the
average rather than just one tissue type. At the edges of very high contrast, such as bony structures
in CT, there will be very abrupt changes in the numerical values making up the projections. Image
reconstruction using a finite number of projections cannot faithfully reproduce such a sharp edge;
rather, it creates a halo close to the edge in the image. Using thinner slices improves the image by
reducing the partial volume artifacts.
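The averaging at the heart of the partial volume effect can be illustrated numerically. The tissue fractions and CT numbers below are hypothetical, chosen only to show that the voxel value matches none of its constituent tissues:

```python
# Hypothetical tissue fractions inside one voxel, and representative CT numbers (HU):
fractions = {"bone": 0.2, "muscle": 0.5, "fat": 0.3}
ct_numbers = {"bone": 1000, "muscle": 40, "fat": -100}

# The voxel reports the weighted average of its contents:
voxel_value = sum(fractions[t] * ct_numbers[t] for t in fractions)
print(voxel_value)  # ~190 HU: matches neither bone, muscle, nor fat
```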

Figure 4-24: Artifacts due to Partial volume effects



4.8.3. Beam Hardening and Metallic Implant Artifacts
The polychromatic X-ray spectrum is attenuated differently depending on photon energy. When an
X-ray beam with a continuous energy spectrum is used for transmission radiography, the beam
hardening effect arises: thicker, more attenuating parts of the object (such as bone) absorb the
softer (low-energy) parts of the original spectrum more effectively than thinner parts, so the mean
energy of the transmitted spectrum increases. Metallic implants, being strongly attenuating, also
cause severe beam hardening and scatter the X-ray beam. These effects produce streak patterns in
the image. Using more energetic radiation, or filtering out the low-energy part of the spectrum,
reduces the beam hardening effect.

Figure 4-25: Beam hardening and metallic implants

Review Questions
1. Describe the main features that improve the contrast resolution in Computed
Tomography over Conventional projection Radiography.
2. What is the cause of ring artifact in third generation CT? Describe how this problem is
solved by fourth generation CT.
3. Briefly describe the simple backprojection approach to image reconstruction, and explain
how the filtered backprojection method improves these images.
4. Define a CT (Hounsfield) unit, and explain the purpose of post processing using
windowing and leveling in a CT viewing device.
5. Explain the relationship between voxel and pixel.
6. For the object shown in the figure below, draw the projections that would be acquired at
angles θ = 0°, 45°, 90°, 135°, and 180°. Sketch the sinogram for values of θ from 0° to 360°.



7. Describe the procedures of CT angiography and mention some of its applications.
8. Explain the principles of x-ray transmission computed tomography. How does CT scanning
work? What are the different steps involved in CT imaging?
9. Discuss the properties of x-ray projection images and compare them with x-ray CT images.
What are the advantages of CT imaging over conventional x-ray imaging?
10. Trace the evolution of x-ray CT imaging. How has CT imaging technology advanced over
the years? What are the major milestones in the history of CT imaging?
11. Compare the different approaches to the reconstruction of CT images from projection
measurements. What are the advantages and disadvantages of each approach?
12. Define CT numbers, linear attenuation coefficients, and physical densities associated with
CT scanning. How are these parameters related to each other? What is the significance of
each parameter in CT imaging?
13. Describe the configuration of spiral CT scanners. How are they different from conventional
CT scanners? What are the advantages of spiral CT scanners?
14. Explain the different CT artifacts and their reduction techniques. What are the common
sources of CT artifacts? How can they be minimized or eliminated?
15. Discuss the role of post-processing techniques in CT imaging. How can image processing
algorithms be used to improve the quality of CT images? What are the limitations of these
techniques?



16. Compare the advantages and disadvantages of different types of CT scanners, such as
single-slice, multi-slice, and cone-beam CT scanners. How do these scanners differ in
terms of image quality, speed, and radiation dose?
17. Explain how contrast agents are used in CT imaging. What are the different types of
contrast agents used in CT imaging? How do they enhance the visibility of certain tissues
or structures in CT images?
18. Describe the different types of CT image acquisition protocols, such as non-contrast,
contrast-enhanced, and functional CT imaging. What are the indications for each type of
protocol?
19. Discuss the role of radiation dose in CT imaging. What are the factors that affect the
radiation dose in CT scanning? How can the radiation dose be optimized to minimize the
risk of radiation-induced cancer?
20. Explain the concept of image noise in CT imaging. How does noise affect image quality?
What are the techniques used to reduce image noise in CT imaging?
21. Describe the different types of artifacts that can be encountered in CT imaging. What are
the causes of these artifacts? How can they be avoided or corrected?
22. Discuss the applications of CT imaging in various medical fields, such as oncology,
neurology, cardiology, and orthopedics. How does CT imaging complement other imaging
modalities in the diagnosis and management of diseases?

References
1. Ed. Joseph D. Bronzino, The Biomedical Engineering Hand Book, Second Edition, CRC
press LLC, 2000.
2. Sawhney G.S., Fundamentals of Biomedical Engineering, New Age International (P) Limited,
Publishers, New Delhi, 2007.
3. Hendee W. R., Ritenour E. R., Medical imaging physics, fourth edition, A John Wiley &
Sons, Inc., Publication, ISBN 0-471-38226-4, New York, 2002.
4. Paul Suetens, Fundamentals of medical imaging, second edition, Cambridge University
Press, ISBN-13 978-0-521-51915-1, New York, 2009.
5. Chris Guy, Dominic ffytche, An Introduction to The Principles of Medical Imaging,
Revised Edition, Imperial College Press, ISBN 1-86094-502-3, London, 2005.



CHAPTER FIVE
5. MAGNETIC RESONANCE IMAGING (MRI)

Objectives
After completing this chapter, the reader should be able to:

• Explain the phenomenon of precession of nuclei in a static magnetic field.


• Solve the Larmor equation to determine resonance frequency.
• Describe the behavior of bulk magnetization of a sample in the presence of static and
radio frequency magnetic fields.
• Define T1 and T2 relaxation.
• Explain the phenomenon of free induction decay.
• Explain how the two-dimensional Fourier transform (2DFT) method is used to
construct the MR image.
• List the components of an MRI system, state their principle of operation, and describe
their contribution to the system.
• Explain the physiological basis of functional MRI (fMRI).
• Explain the chemical basis of MR spectroscopy and give some examples of nuclei
that are studied.

5.1. Introduction

Magnetic resonance imaging (MRI) is an imaging technique used primarily in medical settings to
produce high quality images of the inside of the human body. MRI is based on the principles of
nuclear magnetic resonance (NMR), a spectroscopic technique used by scientists to obtain
microscopic chemical and physical information about molecules. The protons and neutrons of the
nucleus have a magnetic field associated with their nuclear spin and charge distribution. Resonance
is an energy coupling that causes the individual nuclei, when placed in a strong external magnetic
field, to selectively absorb, and later release, energy unique to those nuclei and their surrounding
environment. This energy is called NMR signal.



MRI started out as a tomographic imaging technique; that is, it produced an image of the NMR
signal in a thin slice through the human body. It has since advanced beyond a tomographic imaging
technique to a volume imaging technique.

The components of an MRI system are (1) a magnet, (2) gradient coils, (3) a transmitter, (4) a receiver,
(5) a computer, and (6) shim coils. The layout of the system is as shown in Figure 5-1.

Figure 5-1 : The Components of MRI system

5.2. Spin Physics

5.2.1. Spin

Matter is made out of molecules, which are made out of atoms, which are made out of electrons
orbiting a nucleus (made out of protons and neutrons). The nucleus, made out of protons and
neutrons, can be thought of as a charged sphere. It turns out that the nucleus behaves as if it is
rotating; in other words, it has an internal angular momentum. This angular momentum is called
spin. The spin of the nucleus is quantized, meaning it can only assume values that are integer or
half-integer multiples of the basic unit ħ: ħ/2, ħ, 3ħ/2, 2ħ, … Different atomic nuclei have different
spin values; some examples are presented in the table below.

The magnetic moment’s size is proportional to the spin. The constant of proportionality is called
the gyromagnetic ratio and denoted γ. It depends on the nucleus in question:
Nuclei   Unpaired Protons   Unpaired Neutrons   Net Spin   γ/2π (MHz/T)
1H              1                  0               1/2         42.58
2H              1                  1                1           6.54
31P             1                  0               1/2         17.25
23Na            1                  2               3/2         11.27
14N             1                  1                1           3.08
13C             0                  1               1/2         10.71
19F             1                  0               1/2         40.08

Table 5.1: Spins and gyromagnetic ratios of some common nuclei


Protons, electrons, and neutrons possess spin. Individual unpaired electrons, protons, and neutrons
each possess a spin of ½ħ. In the deuterium atom (2H), with one unpaired proton and one unpaired
neutron, the total nuclear spin is 1.

Two or more particles with spins having opposite signs can pair up to eliminate the observable
manifestations of spin. An example is helium. In nuclear magnetic resonance, it is unpaired
nuclear spins that are of importance.

5.2.2. Molecules and Their Spins

Molecules are made out of atoms, connected to each other by chemical bonds. The most important
molecule in MRI is water (H2O).

Oxygen-16 has no spin (its 8 protons pair up destructively, as do its 8 neutrons), and 1H has spin
½. Because of symmetry, the two hydrogen atoms are equivalent, in the sense that they behave as
one spin-½ entity with double the magnetic moment.



Due to natural abundance and a high γ, the signal in MRI in tissue is dominated by the signal from
the Hydrogen (1H).

A single mm³ of tissue contains ~10¹⁹ hydrogen atoms, most of them in water and fat molecules.
Those two give the main signals in MRI. Other molecules are significantly less prominent because
they are not as plentiful as water and fat. For example, glutamine can be observed in the brain, but
its concentration is only ~10 millimolar. Compare that to the concentration of pure water (5.5
× 10⁴ millimolar), and take into account that tissue is made predominantly out of water.

Large macromolecules often don’t contribute to the signal because of another reason: Their
complex structure means they relax very fast; that is, when we irradiate them, they decay back to
their ground state before we can get a significant signal.

5.2.3. Bulk Magnetization


In an MRI machine one cannot study single spins or single molecules. A typical voxel is ~1 mm³,
and it contains an enormous number of spins. MRI studies the properties of nuclear spins in bulk.

Suppose you have N molecules in a volume V, each having a magnetic moment mi. Recall that
the moments are all vectors, so we can imagine a vector “attached” to each atom.

Figure 5-2: Schematic representation of microscopic and bulk magnetization.

In general, without the large external field of the MRI machine, they would all point in different
directions and the bulk magnetization will be zero.



The bulk magnetization M of the volume V is defined as the (vector!) sum over all moments in
the volume:

M = m1 + m2 + … + mN

Upon the application of an external field, the spins tend to align along the field (Figure 5-3) –
although thermal motion will prevent them from doing so completely.

Figure 5-3: Bulk magnetization in the presence of External field

5.3. Effect of a Magnetic Field


When an MRI patient lies in an MRI scanner, he/she is exposed to a high (> 1 Tesla), constant
magnetic field. The MRI scanner uses coils to irradiate the patient with magnetic fields. The
gradient coils that are used for imaging give off a magnetic field. It is therefore vital that we
understand the effects of magnetic fields on a magnetic moment.

From a microscopic point of view, classical physics teaches us that a magnetic moment m precesses
about the magnetic field with an angular frequency ω = γB; that is, m's component perpendicular
to B goes around it in a circle.

Figure 5-4: Magnetic moment precession about a magnetic field


ω is called the Larmor frequency of the magnetic moment. How fast is this precession? That
depends on B and on γ: different nuclei precess with different Larmor frequencies. The bulk
magnetization undergoes both precession and thermal relaxation.
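The Larmor frequencies implied by Table 5.1 are easy to tabulate. A small sketch (the function name and the choice of field strengths are illustrative):

```python
# gamma/2pi values in MHz/T, taken from Table 5.1
GAMMA_BAR_MHZ_PER_T = {"1H": 42.58, "2H": 6.54, "31P": 17.25, "23Na": 11.27}

def larmor_mhz(nucleus, b0_tesla):
    """Larmor frequency f = (gamma/2pi) * B0, in MHz."""
    return GAMMA_BAR_MHZ_PER_T[nucleus] * b0_tesla

print(larmor_mhz("1H", 1.5))   # ~63.9 MHz at 1.5 T
print(larmor_mhz("1H", 3.0))   # ~127.7 MHz at 3 T
print(larmor_mhz("31P", 3.0))  # a different nucleus precesses at a different frequency
```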

5.3.2. Chemical Shift

When an atom is placed in a magnetic field, its electrons circulate about the direction of the applied
magnetic field. This circulation causes a small magnetic field at the nucleus which opposes the
externally applied field.

The magnetic field at the nucleus (the effective field) is therefore generally less than the applied
field by a fraction. The electron density around each nucleus in a molecule varies according to the
types of nuclei and bonds in the molecule. The opposing field and therefore the effective field at
each nucleus will vary. This is called the chemical shift phenomenon.

5.3.3. Sources of Magnetic Fields


There are several sources of magnetic fields:

1. External: external (to the sample) fields.


a. Main field, B0.
b. Gradients.
c. RF.
d. Earth’s magnetic field.
2. Atomic:
a. Created by the electrons’ orbit around the nucleus: moving charges create magnetic fields
(Ampere’s Law)
b. The electrons have a spin magnetic moment themselves, and these create a magnetic
moment at the nucleus’ position.
3. Intra-Molecular:
a. Magnetic fields created by spin magnetic moments of neighboring atoms in a molecule.
4. Extra-Molecular: other molecules.

These fields can be classified into two categories:


• “Internal” fields which originate from the “sample” (the test subject).
• External fields created by the MRI machine (B0, RF, gradient).
It turns out these two give rise to two completely different phenomena:
• The internal fields can be treated as random fluctuations. These lead the spins to relax
back to thermal equilibrium, which is along B0, the main field.
• The external fields are responsible for the spin precession, as described previously. For
example, the M “wants” to precess about B0:

a) b)

Figure 5-5: Effect of a) internal fields. b) External fields

The total effect of all fields is therefore both these effects: precession (external fields) + relaxation
(internal fields).

5.4. Excitation: The RF Coils


In MRI, we can only detect a signal from the spins if they precess and therefore induce a current
in our MRI receiver coils by Faraday’s law. However, this poses a problem: when we put our
patient in an MRI machine, the spins in his/her body align along the field. They remain that way
indefinitely, so no precession will occur. For the spins to precess, and for us to detect them, we
need to somehow force them away from equilibrium – for example, make them perpendicular to
the main field.

To excite a spin, irradiate it with an external, perpendicular magnetic field at its resonant frequency
γB0. This is called “on-resonance irradiation”. The external resonant field is achieved using an
external RF coil built into the MRI machine. Figure 5-6 below illustrates the bulk magnetization
orientation before and after applying external RF signal.



Figure 5-6: A schematic representation of the bulk magnetization vectors in our body in the
presence of an external field, at equilibrium and after application of resonance irradiation.

The coil is ideally capable of generating a homogeneous, time-dependent RF field in the transverse
plane (transverse to B0), often called the B1-field.

Figure 5-7: The RF field.


This picture emphasizes two important points:
i. BRF is always perpendicular to B0.
ii. It is always much smaller than B0 (in magnitude).

Analytically, the RF field rotating in the transverse plane can be written as:

BRF(t) = B1 [cos(ωRF t) x̂ + sin(ωRF t) ŷ]



The total magnetic field felt by a spin is, therefore:

B(t) = B0 ẑ + BRF(t)

The geometric meaning of ωRF can be understood by simply plotting BRF(t) as a function of time.
We then find that ωRF is the angular frequency of the RF field vector in the xy-plane:

Figure 5-8: Magnetic moment in the presence of RF field

5.4.3. The Rotating Frame


To excite the nuclei, we tip them away from the B0 field by applying a small rotating B1 field in the
x-y (transverse) plane. The rotating B1 field is created by running an RF electrical signal through a
coil. By tuning the RF field to the Larmor frequency (on resonance), even a small B1 field (~0.1 G)
applied for some time t can create a significant torque on the magnetization.

The plan is:


• Irradiate the spins on resonance.



• Wait for enough time until the spins reach the xy plane (complete a quarter of a circle).
• Turn off the RF field.

Figure 5-9: Tip Bulk Magnetization


The angle the spin precesses by is given by

θ = ωt = γB1t

Therefore, to tip the magnetization by 90°, the resonant RF field should be applied for a time tπ/2,
called a hard π/2-pulse:

π/2 = γB1 tπ/2, that is, tπ/2 = π / (2γB1)

The stronger B1, the shorter tπ/2. The shortest possible pulses are desired: the longer the pulse, the
more time is wasted and the more the thermal relaxation effects become a nuisance. Some numbers
are in order. For typical MRI scanners,

B1 ~ 10 μT and γ = 2π × 42.58 MHz/T, yielding tπ/2 ≈ 0.6 ms
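That pulse duration follows directly from tπ/2 = π/(2γB1); a quick check (variable names are illustrative):

```python
import math

gamma = 2 * math.pi * 42.58e6  # rad/s/T for 1H (gamma/2pi from Table 5.1)
b1 = 10e-6                     # 10 microtesla RF field amplitude

t_90 = math.pi / (2 * gamma * b1)  # hard pi/2-pulse duration in seconds
print(t_90 * 1e3)  # ~0.59 ms
```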

Once we excite the spins onto the xy-plane and leave them there, they will precess about the
effective field. When we turn off the RF, we are left with only the offset's field. When viewed
"from above", the spin moves in a circle in the xy (transverse) plane. The longer we wait, the larger
θ becomes.



Figure 5-10: Magnetization rotation in transverse plane

Different chemical species will precess at different angular velocities because of their different
offsets and chemical shifts (Figure 5-10, left), so after the same amount of time they will point in
different directions, as shown in the figure above.

The thermal effects will eventually cause the spin to return to thermal equilibrium along the z-axis.
This usually takes ~100 ms to ~ 1 sec.

In general, any component of the magnetization can be tipped into the transverse plane to give rise
to a signal. A magnetization vector has three components, Mx, My, and Mz. Once tilted onto the xy
plane (along the x-axis) it will precess. During this precession, Mz remains constant while Mx and
My (together, Mxy) change as illustrated in Figure 5-11.



Figure 5-11: Magnetization components in the transverse plane

5.5. Signal Acquisition


The receiver coils, wound around the main bore of the magnet in a particular configuration, pick
up the spins' signals in MRI and NMR machines. The signal-receiving process can be explained by
Faraday's law of induction: a change of magnetic flux through a surface over time induces an
electric potential (a voltage signal).

The idea of signal reception is this:

• The nuclear magnetic moments create magnetic fields.
• Rotating the moment also rotates the field, changing it with time.
• If we put a coil around the imaged subject, the changing magnetic field will create a
changing flux through the coil.
• The changing flux will induce an observable voltage.

Figure 5-12: Signal Acquisition concept



5.6. NMR Spectroscopy
The differences in resonance frequencies or offsets among nuclei that occupy different positions
in molecules enable us to identify molecular signatures in a tissue. For example, protons in the
CH3 group of ethyl alcohol resonate at a slightly different frequency than do protons in the CH2
group or protons in OH. The difference is only about 4 × 10⁻⁴ % (4 ppm) when the field strength
is 1 tesla.
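In absolute terms such a shift is tiny. A short sketch converting the quoted 4 ppm to hertz at 1 T (variable names are illustrative):

```python
larmor_hz = 42.58e6 * 1.0  # 1H Larmor frequency at 1 T, in Hz
shift_ppm = 4.0            # ~4 ppm shift quoted in the text

delta_hz = larmor_hz * shift_ppm * 1e-6
print(delta_hz)  # ~170 Hz, tiny compared with the 42.58 MHz carrier
```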

The NMR spectrum for a molecule or compound is a unique, reproducible “signature” of great
utility in determining the presence of unknown compounds in analytical chemistry. The origin of
the unique pattern of chemical shifts for a molecule is the distortion of the magnetic field caused
by the distribution of electrons that corresponds to each chemical bond. The effective field felt by
each nucleus in the molecule is determined by the chemical bonds that surround it. Therefore, their
resonance frequencies are shifted slightly. The individual protons within a chemical group
experience slight differences in chemical shift as well because a hydrogen nucleus in the center of
the group is influenced more strongly by the field of other hydrogen nuclei than is one at the
periphery of the group. Therefore, a resonance peak for a chemical group is often split slightly into
a number of equally spaced peaks, referred to collectively as a multiplet.

Suppose you have a sample with water, and it has some offset Δω in the rotating frame. You first
excite the water and then let it precess; it will precess with the angular velocity Δω in the rotating
frame. This rotation will induce a voltage in a coil placed around the sample (in fact, the same RF
coil used to excite it can also be used to measure this voltage, but a different coil can be used as
well). The periodic circular precession will induce a periodic, sinusoidal voltage in the coil. By
observing this signal you can deduce the offset Δω and differentiate this molecule from others.

Fat and water – having different offsets, Δω, owing to their different chemical structures, each
would give rise to its own signal with its own periodicity as shown in Figure (5-13).

That’s the idea of NMR spectroscopy: you’re given a sample with different chemical compounds
and you’re asked to:

o Say which compounds there are.


o Say how much of each compound there is.



Figure 5-13: water and fat Signals
The signal we pick up would be the sum of signals coming from each spin species:

Figure 5-14: NMR Spectrum for fat and water (left) and Ethyl Alcohol (right)



5.7. Thermal Relaxation
Every magnetization vector M can be decomposed into a component parallel (longitudinal) to B0
(here, along z) and a perpendicular (transverse) one. Each component "relaxes" differently.

Figure 5-15: Transverse and longitudinal relaxation


It turns out:

• The transverse magnetization gets “eaten up” and eventually disappears with a time
constant T2. Usually T2 ~ tens of ms in tissue.
• The longitudinal magnetization gets “built up” back to its equilibrium value with a time
constant T1, which is always larger (but not always by much!) than T2.
• Remember, T1 ≥ T2 always. This means the transverse magnetization gets “eaten up” faster
than the longitudinal magnetization gets “built up”.

Figure 5-16: Process of Relaxation



5.7.1. Bloch Equations

The Bloch equations are a set of coupled differential equations which can be used to describe the
behavior of a magnetization vector under any conditions. When properly integrated, the Bloch
equations will yield the X', Y', and Z components of magnetization as a function of time.

Bloch found that:

dM/dt = γ M × B

The above equation can be modified to describe the relaxation phenomenologically (the Bloch
equations):

dMx/dt = γ(M × B)x - Mx/T2
dMy/dt = γ(M × B)y - My/T2
dMz/dt = γ(M × B)z - (Mz - M0)/T1

Assume:

• We've just excited our spins (the magnetization lies along the x-axis).
• We're on resonance (no offset, Bz = 0).
• No irradiation (Bx = By = 0).

The above equations become:

dMx/dt = -Mx/T2
dMy/dt = -My/T2
dMz/dt = (M0 - Mz)/T1

We can solve these equations. First, My = 0 initially, so My(t) = 0 for all t. Each remaining equation
is a typical first-order differential equation with a solution of the form y(t) = y0 e^(at). Hence, the
solution for the transverse component is:

Mx(t) = Mx(0) e^(-t/T2)

Similarly, for the longitudinal component:

Mz(t) = M0 + (Mz(0) - M0) e^(-t/T1)

5.7.2. T1 Relaxation

The time constant which describes how MZ returns to its equilibrium value is called the spin lattice
relaxation time (T1). The equation governing this behavior as a function of the time t after its
displacement is:

Mz = M0 (1 - e^(-t/T1))

T1 is the time to reduce the difference between the longitudinal magnetization (MZ) and its
equilibrium value by a factor of e.

Figure 5-17: T1 Relaxation
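The T1 recovery curve can be evaluated numerically; a minimal sketch (the function name and the T1 value used are illustrative):

```python
import math

def mz_recovery(t_ms, t1_ms, m0=1.0):
    """Longitudinal recovery: Mz(t) = M0 * (1 - exp(-t/T1))."""
    return m0 * (1 - math.exp(-t_ms / t1_ms))

print(mz_recovery(500, 500))   # t = T1: recovered to ~63% of M0
print(mz_recovery(2500, 500))  # t = 5*T1: essentially fully recovered
```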

5.7.3. T2 Decay

In addition to the rotation, the net magnetization starts to dephase because each of the spin packets
making it up is experiencing a slightly different magnetic field and rotates at its own Larmor
frequency. The longer the elapsed time, the greater the phase difference. Here the net
magnetization vector is initially along +Y. For this and all dephasing examples think of this vector
as the overlap of several thinner vectors from the individual spin packets.

The time constant which describes the return to equilibrium of the transverse magnetization, MXY,
is called the spin-spin relaxation time, T2.

MXY = MXY0 e^(-t/T2)

T2 is always less than or equal to T1. The net magnetization in the XY plane goes to zero, and then
the longitudinal magnetization grows in until we have M0 along Z.

Any transverse magnetization behaves the same way: the transverse component rotates about the
direction of the applied field and dephases, while T1 governs the rate of recovery of the
longitudinal magnetization.

In summary, the spin-spin relaxation time, T2, is the time to reduce the transverse magnetization
by a factor of e.

Two factors contribute to the decay of transverse magnetization.


• Molecular interactions (lead to a pure T2 molecular effect)
• Variations in B0 (lead to an inhomogeneous T2 effect)

The combination of these two factors is what actually results in the decay of transverse
magnetization. The combined time constant is called T2 star and is given the symbol T2*. The
relationship between the T2 from molecular processes and that from inhomogeneities in the
magnetic field is as follows.

1/T2* = 1/T2 + 1/T2inhomo.
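Because the rates add, T2* is always shorter than T2. A small sketch of the combination (the function name and the inhomogeneity value are illustrative assumptions):

```python
import math

def t2_star(t2_ms, t2_inhomo_ms):
    """Combine molecular T2 and field-inhomogeneity decay: 1/T2* = 1/T2 + 1/T2_inhomo."""
    return 1.0 / (1.0 / t2_ms + 1.0 / t2_inhomo_ms)

t2s = t2_star(100.0, 50.0)   # gray-matter T2 with an assumed inhomogeneity term
print(t2s)                   # ~33.3 ms: T2* is shorter than T2
print(math.exp(-10.0 / t2s)) # fraction of Mxy remaining 10 ms after excitation
```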



Figure 5-18: T2 and T2* Relaxation
Different tissues have different T2 times. The following table lists sample tissue time constants.

Tissue         T2 (ms)
gray matter      100
white matter      92
muscle            47
fat               85
kidney            58
liver             43

Table 5.2: Sample tissue time constants



5.8. Measuring T2, T1, and T2*
5.8.1. T1- Inversion Recovery

To measure T1, a 180o pulse is first applied to a state of thermal equilibrium magnetization. This
rotates the net magnetization down to the -Z axis. The magnetization undergoes spin-lattice
relaxation and returns toward its equilibrium position along the +Z axis. Before it reaches
equilibrium, a 90o pulse is applied, which rotates the remaining longitudinal magnetization into
the XY plane where it can be measured.

Figure 5-19: T1 Inversion recovery process

The amplitude of the signal after waiting an inversion time TI is proportional to Mo(1 − 2e^(−TI/T1)).

A set of experiments can be done with different inversion times TI; in each experiment, the
maximal value of the signal is taken, and T1 is estimated from the resulting recovery curve. This
process of finding T1 is called an inversion recovery (IR) experiment.
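A minimal numerical sketch of the inversion recovery experiment, assuming the standard IR expression Mz(TI) = M0(1 − 2e^(−TI/T1)); the helper name `ir_signal` and the T1 value are illustrative:

```python
import math

def ir_signal(ti, t1, m0=1.0):
    """Longitudinal magnetization at inversion time ti (same units as t1)."""
    return m0 * (1.0 - 2.0 * math.exp(-ti / t1))

# The signal passes through zero at TI = T1 * ln(2); locating this null
# across a set of IR experiments with different TIs yields T1.
t1 = 1000.0  # ms, illustrative
ti_null = t1 * math.log(2.0)
print(ti_null)                      # ~693 ms
print(abs(ir_signal(ti_null, t1)))  # ~0 at the null point
```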

5.8.2. T2– Spin Echo Sequence

Once you excite the spins from thermal equilibrium, they begin precessing at different rates, and
eventually “spread out” in the xy-plane. This means that, if you were to acquire their signal, it
would slowly die out because the spins would end up pointing in all sorts of directions and add up
destructively (remember, the signal is a vector sum of the spins in the xy-plane):

If a 180o pulse is applied, it rotates the magnetization by 180o about an axis in the xy-plane (the
pulse inverts the spins). After an additional time T, the spins end up re-aligned along the x-axis,
forming an echo.



Figure 5-20: Spin echo technique

If successive 180o pulses spaced 2T apart are applied, the pattern repeats itself indefinitely: the
spins dephase, get flipped (by the 180o pulse), rephase, dephase again, get flipped, rephase, and
so on. In practice, relaxation also needs to be taken into account, so the echo amplitudes decay
from one echo to the next.

Figure 5-21: Spin decay with 180o pulse repetition


The decay immediately after the excitation is determined by T2* (by both microscopic field
fluctuations and field inhomogeneities), but the overall decay of the echo peaks is determined by
T2 alone. This furnishes us with a method of measuring the "true" microscopic T2 decay of a sample.
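The distinction between the fast FID decay (governed by T2*) and the slower decay of the echo peaks (governed by T2) can be sketched as follows; the time constants here are illustrative, not measured values:

```python
import math

def fid(t, t2_star):
    """Free induction decay envelope, governed by T2*."""
    return math.exp(-t / t2_star)

def echo_amplitude(n, te, t2):
    """Peak of the n-th spin echo at time n*te: inhomogeneity dephasing is
    refocused by the 180-degree pulses, so only true T2 decay remains."""
    return math.exp(-n * te / t2)

t2, t2_star_val, te = 80.0, 20.0, 10.0  # ms, illustrative
# The FID dies quickly (T2*), but echo peaks trace the slower T2 envelope:
print(fid(10.0, t2_star_val))     # fast decay after 10 ms
print(echo_amplitude(1, te, t2))  # larger value at the same time point
```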

5.8.3. T1 & T2 Relaxation Trends


T1 depends on tumbling rate (more tumbling at Larmor Frequency = shorter T1) while T2 depends
on time spent in vicinity of nuclear neighbors (more time near same neighbors = shorter T2).

• Solids: Very Long T1, Very Short T2


• Viscous: Short T1, Short T2
• Non-Viscous: Long T1, Long T2

Figure 5-22: T1 and T2 trends for different tissue states

5.8.4. Proton density


The maximum MR signal that may be induced in a receiver coil depends linearly upon the number
of protons available. Hence doubling either the volume or the proton density (protons per cubic
millimeter) of the sample doubles the MR signal.

Contrast in MRI is related primarily to the proton density and to relaxation phenomena (i.e., how
fast a group of protons gives up its absorbed energy). Proton density is influenced by the mass
density (g/cm3). Proton density differs among tissue types, and in particular adipose tissues have
a higher proportion of protons than other tissues, due to the high concentration of hydrogen in fat
[CH3(CH2)nCOOH]. Two different relaxation mechanisms (spin/lattice and spin/spin) are present
in tissue, and the dominance of one over the other can be manipulated by the timing of the
radiofrequency (RF) pulse sequence and magnetic field variations in the MRI system.

Figure 5-23: T1-weighted, proton density (ρ)-weighted, and T2-weighted image contrast differences



5.9. Localization of the MRI Signal: The Gradient Coils
The main MRI bore has three coils around it, capable of generating a linearly increasing z-field
along the x, y and z axes:

Figure-5-24: MRI gradient Magnets


The linear field gradients are created by pumping current through these coils.

If we subject a sample to a field with spatially variable field strength, the spectral distribution of
the received signal will reflect the spatial characteristics of the sample. This idea, with use of
linearly varying fields, is used to great advantage in MRI.

Figure 5-25: Gradient fields orientation


The magnetic field is all along z, but magnetic strength can vary spatially with x, y, and/or z.
Without the gradient coils the precessional frequency is

w = γB0, and the field is simply B0.

With the gradient field G present,

w(t) = γ(B0 + G(t)·r), where r is position along x, y, or z,

and the field is

B(t) = B0 + G(t)·r

NB:
• We can control each term, Gx(t), Gy(t), Gz(t) and shape it as we wish.
• Note that, e.g., the x-gradient does not create a field along the x-axis. Rather, it
increases/decreases the z-field along the x-axis.
• The gradients Gk are measured in field/unit length. Usually they’re specified in mT/m or
G/cm.
• Different spins will have different positions, and hence will experience a different z-
component of the field: Bz(r) = B0 + G·r.

The gradient, in effect, assigns a linearly increasing offset (i.e. field in the z direction) to the
spins in the sample. The following Figure illustrates this effect.

Spatial localization, fundamental to MR imaging, requires the imposition of magnetic field
nonuniformities: magnetic gradients superimposed upon the homogeneous and much stronger main
magnetic field, which are used to distinguish the positions of the signal in a three-dimensional
object (the patient).
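The relation ω(r) = γ(B0 + G·r) can be made concrete with a short sketch; the field strength, gradient amplitude, and positions below are illustrative assumptions:

```python
GAMMA_BAR = 42.58e6  # Hz/T, gyromagnetic ratio of 1H divided by 2*pi

def precession_freq_hz(b0_tesla, grad_mt_per_m, x_cm):
    """Precessional frequency at position x under a linear gradient:
    f(x) = gamma_bar * (B0 + G*x); the gradient offsets the z-field."""
    g_t_per_m = grad_mt_per_m * 1e-3
    x_m = x_cm * 1e-2
    return GAMMA_BAR * (b0_tesla + g_t_per_m * x_m)

# At 1.5 T with a 10 mT/m gradient, spins 1 cm apart differ by ~4.3 kHz:
f0 = precession_freq_hz(1.5, 10.0, 0.0)
f1 = precession_freq_hz(1.5, 10.0, 1.0)
print(f1 - f0)  # ~4258 Hz
```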



Figure 5-26: angular frequency variation due to Z and X gradients

Conventional MRI involves RF excitations combined with magnetic field gradients to localize the
signal from individual volume elements (voxels) in the patient. With appropriate design, the
gradient coils create a magnetic field that linearly varies in strength versus distance over a
predefined field of view (FOV). When superimposed upon a homogeneous magnetic field (e.g.,
the main magnet, Bo), positive gradient field adds to Bo and negative gradient field reduces Bo.

Two properties of gradient systems are important:


• The peak amplitude of the gradient field determines the "steepness" of the gradient field.
• The slew rate is the rate at which the gradient reaches its peak amplitude (peak amplitude
divided by the rise time); a shorter rise time means a higher slew rate. Typical slew rates
of gradient fields range from 5 mT/m/msec to 250 mT/m/msec.
The localization of protons in the three-dimensional volume requires the application of the
previously discussed three distinct gradients during the pulse sequence, also called: slice select,
frequency encode, and phase encode gradients. These gradients are usually sequenced in a specific
order, depending on the pulse sequences employed. Often, the three gradients overlap partially or
completely during the scan to achieve a desired spin state, or to leave spins in their original phase
state after the application of the gradient(s).



5.9.1. Slice select Gz gradient
The slice select gradient (SSG) determines the slice of tissue to be imaged in the body, in
conjunction with the RF excitation pulse. For axial MR images, this gradient is applied along the
long (cranial-caudal) axis of the body. Proton precessional frequencies vary according to their
distance from the null of the SSG. A selective frequency (narrow band) RF pulse is applied to the
whole volume, but only those spins along the gradient that have a precessional frequency equal to
the frequency of the RF will absorb energy due to the resonance phenomenon. In the example
shown in Figure 5-27, the local magnetic field changes in one-Gauss increments accompanied by
a change in the precessional frequency from chin to the top of the head.

Figure 5-27: the effects of the main magnetic field and the applied slice gradient
Slice thickness is determined by two parameters: (a) the bandwidth (BW) of the RF pulse, and (b)
the gradient strength across the FOV. Consider a pulse B1(t) that is multiplied by cos(ω0t). This
is called modulation. B1(t) is called the RF excitation envelope; ω0 is the carrier frequency, equal to γB0.

Figure 5-28: slice selection Pulse Modulation



The time-domain envelope of the above pulse is the sinc pulse shown in Figure 5-28; its pulse
width determines the output frequency bandwidth (BW).
and high-frequency oscillations produce a wide BW and a corresponding broad excitation
distribution. Conversely, a broad, slowly varying sinc pulse produces a narrow BW and
corresponding thin excitation distribution.

In summary, the slice select gradient applied during the RF pulse results in proton excitation in a
single plane and thus localizes the signal in the dimension orthogonal to the gradient. It is the first
of three gradients applied to the sample volume.
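The slice thickness determined by the RF bandwidth and gradient strength can be sketched with the commonly used relation Δz = BW / (γ̄ · Gz); the formula's use here and the numeric values are assumptions for illustration:

```python
GAMMA_BAR = 42.58e6  # Hz/T for 1H

def slice_thickness_mm(rf_bw_hz, gz_mt_per_m):
    """Slice thickness from RF bandwidth and slice-select gradient strength:
    delta_z = BW / (gamma_bar * Gz)."""
    gz_hz_per_m = GAMMA_BAR * gz_mt_per_m * 1e-3
    return rf_bw_hz / gz_hz_per_m * 1e3  # metres -> mm

# A 1 kHz RF pulse with a 5 mT/m gradient excites a slice of ~4.7 mm;
# a stronger gradient or narrower BW gives a thinner slice.
print(slice_thickness_mm(1000.0, 5.0))
```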

5.9.2. The frequency encode Gx gradient (FEG)


Also known as the readout gradient, the FEG is applied in a direction perpendicular to the SSG. For an
axial image acquisition, the FEG is applied along the x-axis throughout the formation and the
decay of the signals arising from the spins excited by the slice encode gradient. Spins constituting
the signals are frequency encoded depending on their position along the FEG.

Figure 5-29: Frequency Encoding



During the time the gradient is turned on, the protons precess with a frequency determined by their
position from the null. Higher precessional frequencies occur at the positive pole, and lower
frequencies occur at the negative pole of the FEG. Demodulation (removal of the Larmor
precessional frequency) of the composite signal produces a net frequency variation that is
symmetrically distributed from zero frequency at the null to +ωmax and −ωmax at the edges of the
FOV.

The signal from each x-position contains a specific center frequency. The over-all spin signal is
the sum of signals along x. A Fourier transform will recover signal contribution at each frequency,
i.e. x-location, and the resulting spectrum will determine a projection of the desired imaged object.
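The claim that a Fourier transform recovers the signal contribution at each x-location can be demonstrated with a toy 1D example; bin indices stand in for x-positions, and a naive DFT is used so the sketch stays self-contained:

```python
import cmath, math

# Two point "spins" at distinct x-positions: under the readout gradient each
# contributes a distinct frequency, and an FT of the summed signal separates them.
N = 64
positions = [10, 40]  # frequency-bin indices standing in for x-locations
signal = [sum(cmath.exp(2j * math.pi * p * n / N) for p in positions)
          for n in range(N)]

# Naive discrete Fourier transform of the composite signal
spectrum = [abs(sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) / N for k in range(N)]

peaks = [k for k, v in enumerate(spectrum) if v > 0.5]
print(peaks)  # [10, 40]: the x-locations reappear as spectral peaks
```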

Figure 5-30: Pulse Sequence Timing Diagram (left) and Frequency Encoding & Data Sampling
(right)

5.9.3. Phase Encoding Gy Gradient


The frequency encoding gradient (x-direction) will always produce iso-lines of resonance
frequencies so, how can the y-localization be achieved?

Position of the spins in the third spatial dimension is determined with a phase encode gradient
(PEG), applied before the frequency encode gradient and after the slice encode gradient, along the
third perpendicular axis. Phase represents a variation in the starting point of sinusoidal waves, and
can be purposefully introduced with the application of a short duration gradient. After the initial
localization of the excited protons in the slab of tissue by the SEG, all spins are in phase coherence
(they have the same phase). During the application of the PEG, a linear variation in the precessional
frequency of the excited spins occurs across the tissue slab along the direction of the gradient.
After the PEG is turned off, spin precession reverts to the Larmor frequency, but now phase shifts
are introduced, the magnitude of which are dependent on the spatial position relative to the PEG
null and the PEG strength. Phase advances for protons in the positive gradient, and phase retards
for protons in the negative gradient, while no phase shift occurs for protons at the null. For each
repetition time (TR) interval, a specific PEG strength introduces a specific phase change across
the FOV. Incremental change in the PEG strength from positive through negative polarity during
the image acquisition produces positionally dependent phase shift at each position along the
applied phase encode direction. Protons at the center of the FOV (PEG null) do not experience any
phase shift. Protons located furthest from the null at the edge of the FOV gain the maximum
positive phase shift with the largest positive gradient, no phase shift with the "null" gradient, and
maximum negative phase shift with the largest negative gradient. Protons at intermediate distances
from the null experience intermediate phase shifts (positive or negative). Thus, each location along
the phase encode axis is spatially encoded by the amount of phase shift.
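The position-dependent phase shift described above can be sketched numerically, assuming the phase accrued during a gradient lobe is φ = 360° · γ̄ · Gy · y · τ; all parameter values are illustrative:

```python
GAMMA_BAR = 42.58e6  # Hz/T for 1H

def phase_shift_deg(gy_mt_per_m, y_cm, tau_ms):
    """Phase (degrees) accrued during a phase-encode lobe of duration tau:
    phi = 360 * gamma_bar * Gy * y * tau. Zero at the gradient null (y = 0),
    positive on one side of the null, negative on the other."""
    gy = gy_mt_per_m * 1e-3
    y = y_cm * 1e-2
    tau = tau_ms * 1e-3
    return 360.0 * GAMMA_BAR * gy * y * tau

# Spins at +1 cm and -1 cm get equal and opposite shifts; the null gets none:
print(phase_shift_deg(0.1, 1.0, 1.0))   # positive shift
print(phase_shift_deg(0.1, -1.0, 1.0))  # equal magnitude, negative
print(phase_shift_deg(0.1, 0.0, 1.0))   # 0 at the null
```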

Decoding the spatial position along the phase encode direction occurs by Fourier transformation,
only after all of the data for the image have been collected. Symmetry in the frequency domain
requires detection of the phase shift direction (positive or negative) to assign correct position in
the final image. Since the PEG is incrementally varied throughout the acquisition sequence (e.g.,
128 times in a 128 X 128 MR image), slight changes in position caused by motion will cause a
corresponding change in phase, and will be manifested as partial (artifactual) copies of anatomy
displaced along the phase encode axis.

In summary, the MR signal acquired during frequency encoding (x) is the Fourier transform of the
projection of the object, i.e., of its line integrals along y. Encoding in the other direction can
be done in two ways:

• Vary the angle of the frequency encoding direction, take a 1D FT along each angle, and
reconstruct as in CT (projection reconstruction).
• Apply sinusoidal weightings along the y direction (phase encoding).



Figure 5-31: Combined effect of phase and frequency encoding gradients

5.9.4. Gradient Sequencing


Figure 5-32 below illustrates the timing of the gradients in conjunction with the RF excitation
pulses and the data acquisition during the evolution and decay of the echo. This sequence is
repeated with slight incremental changes in the phase encode gradient strength to define the three-
dimensions in the image.

Figure 5-32: A spin-echo pulse sequence diagram


5.10. "K-Space" Data Acquisition and Image Reconstruction
The total signal from all spins in the body in the presence of gradient fields is given by summing
over the entire body/object being imaged:

s(k) = ∫ M0(r) e^(−i2π k·r) dr,  where k(t) = (γ/2π) ∫[0,t] G(τ) dτ

The signal is therefore the (3D) Fourier transform of the spin density M0(r), which is proportional
to the density of spins at each position; this is precisely the "image" we are after. Thus the
signal s(k) can be thought of as being acquired in "k-space" (or, in the 2D case, in the "k-plane").

K-space describes a two-dimensional matrix of positive and negative spatial frequency values,
encoded as complex numbers. The matrix is divided into four quadrants, with the origin at the
center representing frequency = 0. Frequency domain data are encoded in the kx direction by the
frequency encode gradient, and in the ky direction by the phase encode gradient in most image
sequences.

Figure 5-33: k-Space
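Since the acquired k-space matrix is the Fourier transform of the spin density, reconstruction amounts to an inverse Fourier transform. A self-contained toy round-trip (naive DFT on a small grid, purely for illustration) shows the idea:

```python
import cmath, math

N = 8

def dft2(mat, sign, norm):
    """Naive 2D DFT over an N x N grid; sign=-1 is forward, sign=+1 inverse."""
    return [[sum(mat[y][x] * cmath.exp(sign * 2j * math.pi * (kx * x + ky * y) / N)
                 for x in range(N) for y in range(N)) / norm
             for kx in range(N)] for ky in range(N)]

# Toy spin-density "image": a bright 2x2 block in an otherwise empty FOV
obj = [[1.0 if 3 <= x <= 4 and 3 <= y <= 4 else 0.0 for x in range(N)]
       for y in range(N)]

kspace = dft2(obj, -1, 1)        # what the scanner effectively measures
recon = dft2(kspace, +1, N * N)  # image reconstruction by inverse 2D FT

ok = all(abs(recon[y][x] - obj[y][x]) < 1e-9 for x in range(N) for y in range(N))
print(ok)  # True: k-space data and the image are a Fourier-transform pair
```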



5.10.1. Echo Planar Imaging (EPI)
Assume a two-dimensional object, M0(r) = M0(x,y), so that "k-space" is two-dimensional (kx, ky),
and assume the EPI "pulse sequence" below. A pulse sequence is simply a shorthand notation
for the RF pulses, gradients and acquisition periods used throughout an experiment.

Figure 5-34: EPI pulse sequence


A. The spins are all excited onto the xy-plane. Don't forget we're assuming a 2D object.
B. Negative x and y gradients are applied. This has the effect of moving k from its initial
position at the center of the k-plane to the point (B) in the diagram below.
C. The block (C) is repeated Nrep times. Each block corresponds to a "right-up-left-up"
trajectory in the k-plane (as shown in the schematic illustration below). The short, strong
y gradients are said to be "blipped". One acquires while the x-gradient is on. Because the
hardware isn't perfect, you can only acquire a point every so often (usually on the order of
1 microsecond). The acquired points are represented by red dots along the trajectory in the
schematic drawing below.



Figure 5-35: EPI k-spaces or grids
So EPI samples the Fourier transform of an image on a discrete grid; performing an inverse Fourier
transform on the sampled data retrieves the image.

Figure 5-36: The mathematical relationship between the acquired k-space data on the left and the
image on the right is a two-dimensional Fourier transform.
The ideas behind all EPI-based methods are the same:
• Excite the spins.
• Start moving around in k-space by varying the gradients, and acquire.

Advantages of EPI:
• Fast: just about the fastest scan technique there is. You can acquire an entire 2D image in
a few tens of milliseconds. An entire 3D image of the brain can be had in a few seconds.



• Low RF power deposition: The use of a single 90o pulse to excite the spins means the
patients are not irradiated much. RF irradiation can be problematic since it can cause
tissues in the body to heat up.
• Because the technique is fast, it is also pretty robust with respect to motion.

Disadvantages of EPI:
• Puts high demands on the gradients. The rapidly varying gradients can cause biophysical
effects in patients, such as electrical currents in tissues.
• Sensitive to gradient imperfections (and there are imperfections. Lots).
• Resolution is just so-so compared to other scan techniques.
• Signal decays as T2*. Other scan techniques can make the signal decay slower, according
to T2. This decay also means that ...
• EPI is particularly susceptible to magnetic field inhomogeneities. In the brain, there are
a few notorious areas that are hard or impossible to observe with EPI: the areas near the
frontal and temporal lobes.

5.10.2. Three-Dimensional Fourier Transform Image Acquisition


Three-dimensional image acquisition (volume imaging) requires the use of a broadband,
nonselective RF pulse to excite a large volume of spins simultaneously. Two phase gradients are
discretely applied in the slice encode and phase encode directions, prior to the frequency encode
(readout) gradient. The image acquisition time is equal to:

TR X No. of Phase Encode Steps (z-axis) X No. of Phase Encode Steps (y-axis) X No. of Signal
Averages

A three-dimensional Fourier transform (three 1-D Fourier transforms) is applied for each column,
row, and depth axis in the image matrix "cube." Volumes obtained can be either isotropic, the same
size in all three directions, or anisotropic, where at least one dimension is different in size. The
advantage of the former is equal resolution in all directions; reformations of images from the
volume do not suffer from degradations of large sample size. After the spatial domain data
(amplitude and contrast) are obtained, individual two-dimensional slices in any arbitrary plane are
extracted by interpolation of the cube data. When using a standard TR of 600 msec with one
average for a T1-weighted exam, a 128 X 128 X 128 cube requires 163 minutes or about 2.7 hours!



Obviously, this is unacceptable for standard clinical imaging. GRE pulse sequences with TR of
50msec acquire the same image in about 15 minutes. Another shortcut is with anisotropic voxels,
where the phase-encode steps in one dimension are reduced, albeit with a loss of resolution. A
major benefit to isotropic three-dimensional acquisition is the uniform resolution in all directions
when extracting any two dimensional image from the matrix cube. In addition, high SNR is
achieved compared to a similar two-dimensional image, allowing reconstruction of very thin slices
with good detail (less partial volume averaging) and high SNR. A downside is the increased
probability of motion artifacts and increased computer hardware requirements for data handling
and storage.
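The acquisition-time formula above can be checked numerically against the worked examples in the text (TR of 600 msec with a 128 × 128 × 128 cube and one average, versus a GRE sequence with TR of 50 msec):

```python
def acq_time_min(tr_sec, nz, ny, nex=1):
    """3D acquisition time = TR x (phase-encode steps z) x (steps y) x NEX,
    expressed in minutes."""
    return tr_sec * nz * ny * nex / 60.0

# Standard T1-weighted exam: TR = 600 ms, 128 x 128 x 128 cube, 1 average
print(acq_time_min(0.600, 128, 128))  # ~163.8 min, about 2.7 hours
# GRE shortcut with TR = 50 ms brings this down dramatically:
print(acq_time_min(0.050, 128, 128))  # ~13.7 min
```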

Figure 5-37: Three-dimensional image acquisition

5.11. Image Characteristics

5.11.1. Spatial Resolution and Contrast Sensitivity


The detail we can see in an MRI image is dependent on both FOV and Resolution. Spatial
resolution, contrast sensitivity, and SNR parameters form the basis for evaluating the MR image
characteristics. The spatial resolution is dependent on the FOV, which determines pixel size, the
gradient field strength, which determines the FOV, the receiver coil characteristics (head coil, body
coil, various surface coil designs), the sampling bandwidth, and the image matrix.



The FOV determines the dimension of the image, and is typically measured in millimeters. It is
inversely related to the spacing (and directly related to the density) of the sampled data points
in the k-domain.

The concepts of FOV, voxel size and resolution are intimately related. In a 2D image, the number
of voxels along the x and y axes is

Nx = FOVx/Δx and Ny = FOVy/Δy,

where Δx and Δy are the voxel sizes. Assuming that (e.g., for the x-direction) we select Δkx to
equal 1/FOVx, we have

Δx = FOVx/Nx = 1/(Nx·Δkx).

This relates the k-space quantities to the resolution of the image.

Figure 5-38: FOV and K-space


For a fixed FOV
– Increasing Resolution increases detail.
– Decreasing Resolution decreases detail.

For a fixed Resolution


– Increasing FOV decreases detail.
– Decreasing FOV increases detail.
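The fixed-FOV and fixed-resolution trade-offs above follow directly from Δx = FOV/N and Δk = 1/FOV; a small sketch with illustrative numbers:

```python
def voxel_size_mm(fov_mm, n):
    """Pixel (voxel) size along one axis: delta_x = FOV / N."""
    return fov_mm / n

def kspace_spacing_per_mm(fov_mm):
    """k-space sample spacing: delta_k = 1 / FOV."""
    return 1.0 / fov_mm

# Fixed FOV of 240 mm: doubling the matrix halves the voxel (more detail)
print(voxel_size_mm(240.0, 128))  # 1.875 mm
print(voxel_size_mm(240.0, 256))  # 0.9375 mm
# Fixed matrix of 256: enlarging the FOV enlarges the voxel (less detail)
print(voxel_size_mm(480.0, 256))  # 1.875 mm
```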

Contrast sensitivity is the major attribute of MR. The spectacular contrast sensitivity of MR
enables the exquisite discrimination of soft tissues and contrast due to blood flow. This sensitivity
is achieved through differences in the Tl, T2, spin density, and flow velocity characteristics.
Contrast which is dependent upon these parameters is achieved through the proper application of
pulse sequences. MR contrast materials, usually susceptibility agents that disrupt the local
magnetic field to enhance T2 decay or provide a relaxation mechanism for enhanced Tl decay (e.g.,
bound water in hydration layers), are becoming important enhancement agents for the
differentiation of normal and diseased tissues. The absolute contrast sensitivity of the MR image
is ultimately limited by the SNR and presence of image artifacts.

5.11.2. Signal-to-Noise Ratio (SNR)


The signal-to-noise ratio (SNR) of the MR image is dependent on a number of variables:

SNR ∝ I × Voxelx,y,z × √(NEX / BW) × f(QF) × f(B) × f(slice gap) × f(reconstruction)

Where

✓ I = intrinsic signal intensity based on pulse sequence


✓ Voxelx,y,z = voxel volume, determined by FOV; image matrix, and slice thickness
✓ NEX = number of excitations, the repeated signal acquisition into the same voxels, which
depends on Nx (# frequency encode data) and Ny (# phase encode steps)
✓ BW = frequency bandwidth of RF transmitter/receiver
✓ f(QF) = function of coil quality factor parameter (tuning the coil)
✓ f(B) = function of magnetic field strength, B
✓ f(slice gap) = function of inter-slice gap effects, and
✓ f(reconstruction) = function of reconstruction algorithm,
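A rough sketch of how the listed variables scale SNR, assuming the common proportionality SNR ∝ voxel volume × √(NEX/BW) with the coil, field-strength, slice-gap and reconstruction factors held fixed (this exact functional form is an assumption, not stated in the text):

```python
import math

def relative_snr(voxel_mm3, nex, bw_hz):
    """Relative SNR (arbitrary units) from the proportionality
    SNR ~ voxel volume * sqrt(NEX / BW); f(QF), f(B), f(slice gap) and
    f(reconstruction) are treated as constants here."""
    return voxel_mm3 * math.sqrt(nex / bw_hz)

base = relative_snr(5.0, 1, 32000)
# Doubling NEX buys only sqrt(2) more SNR, at double the scan time:
print(relative_snr(5.0, 2, 32000) / base)  # ~1.414
# Halving the slice thickness halves the voxel volume and hence the SNR:
print(relative_snr(2.5, 1, 32000) / base)  # 0.5
```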



5.12. Functional MRI (fMRI)
Functional Magnetic Resonance Imaging (fMRI): uses MRI to indirectly measure brain activity.
MRI focuses on brain anatomy, while fMRI focuses on brain activity. Blood flow and blood
oxygenation are linked to neural activity: because neuronal activity requires O2, which is carried
by the blood, the resulting increase in blood flow and the accompanying hemodynamics are the
foundation of fMRI.

fMRI is dependent on two factors:


• The cerebral blood flow rate (CBF)
• The blood oxygenation level (the basis of BOLD contrast)

Oxygenated blood is diamagnetic while deoxygenated blood is paramagnetic. As blood flows in
and oxyhemoglobin is converted to deoxyhemoglobin, the local magnetization of the spins is
affected: T2* is longer for tissues around oxygenated blood than for tissues around deoxygenated
blood. This fact allows areas of high metabolic activity to produce a correlated signal and is the
basis of functional MRI (fMRI).

Spatially specific regions of the brain are activated to certain stimuli.


– Regional CBF increases
– Regional Oxygen consumption rate is elevated to a lesser degree,
– Localized to within 2 or 3 mm of where the neural activity is.

This lowers the deoxyhemoglobin content per unit volume of brain tissue. Signal intensity in a
BOLD-sensitive (T2*-weighted) image therefore increases in regions of the brain engaged by a
"task" relative to the resting state.

Because the BOLD sequence produces images that are highly dependent on blood oxygen levels,
areas of high metabolic activity will be enhanced when the prestimulus image is subtracted, pixel
by pixel, from the poststimulus image. These fMRI experiments determine which sites in the brain
are used for processing data.



Stimuli in fMRI experiments can be physical (finger movement), sensory (light flashes or
sounds), or cognitive (repetition of "good" or "bad" word sequences). To improve the SNR in the
fMRI images, a stimulus is typically applied in a repetitive, periodic sequence, and BOLD images
are acquired continuously. Areas in the brain that demonstrate time-dependent activity and
correlate with the time-dependent application of the stimulus are coded using a color scale and
are overlaid onto a gray-scale image of the brain for anatomic reference.

Figure 5-39: fMRI Image

The most common MR sequence used to collect the data is multi-slice echo-planar imaging, since
data acquisition is fast enough to obtain whole-brain coverage in a few seconds. Many different
types of stimulus can be used: visual, motor or auditory-based. The changes in image intensity in
activated areas are very small, typically only 0.2–2% using a 3 Tesla scanner, and so experiments
are repeated a number of times with periods of rest (baseline) between each stimulation block.
Data processing involves correlation of the MRI signal intensity for each pixel with the
stimulation waveform, followed by statistical analysis to determine whether the correlation is
significant. Typical scans may take 10–40 minutes, with several hundred repetitions of the
stimulus/rest paradigm.

Figure 5-40: Typical fMRI image
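The pixel-wise correlation analysis described above can be sketched with a toy block design; all waveforms here are synthetic illustrations, not real data:

```python
import math

def pearson_r(a, b):
    """Pearson correlation between a voxel time series and the stimulus."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Block-design stimulus: 4 volumes off, 4 volumes on, repeated
stimulus = ([0] * 4 + [1] * 4) * 4
# An "active" voxel follows the paradigm with a ~1% signal change plus offset
active = [100.0 + 1.0 * s for s in stimulus]
# An "inactive" voxel just alternates, uncorrelated with the blocks
inactive = [100.0 + 0.5 * (i % 2) for i in range(len(stimulus))]

print(pearson_r(active, stimulus))              # 1.0: flagged as activated
print(round(pearson_r(inactive, stimulus), 3))  # 0.0: not activated
```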

Figure 5-41: Neural activity measurement using fMRI



5.12.1. fMRI Application
Its high temporal and spatial resolution and non-invasiveness make fMRI well suited to measuring
brain activity for clinical as well as non-clinical applications.

Clinical applications of fMRI:

• Pre-surgical planning
• Finding brain pathologies
• Assessment of patients with disorders of consciousness (coma, vegetative state,
minimally conscious state, locked-in syndrome)

Research Purposes: many researchers from several disciplines are using fMRI to better understand
brain function in animals and humans.

5.12.2. Sources of noise in fMRI


The BOLD signature of activation is relatively weak, so other sources of noise in the acquired
data must be carefully controlled. This means that a series of preprocessing steps must be
performed on the acquired images before the actual statistical search for task-related activation
can begin.

The five main sources of noise in fMRI are:


• Thermal noise (∝ B0)
• System noise
• Physiological noise (∝ B0²)
• Random neural activity and
• Differences in both mental strategies and behavior across people and across tasks within a
person.

Preprocessing Methods
• Motion correction
• Slice time correction
• Spatial filtering
• Intensity normalization
• Temporal filtering



Review Questions
1. Explain briefly the meanings of the terms.
a. Free water density
b. Longitudinal relaxation time.
2. Explain why the skull and other solid parts of the human skeleton do not give rise to any
observable signal in conventional MRI.
3. Explain what is meant by the term free induction decay in MRI, and how it can be
produced using a combination of static and pulsed magnetic fields.
4. A sample has a T1 of 1.0 seconds. If the net magnetization is set equal to zero, how long
will it take for the net magnetization to recover to 98% of its equilibrium value?
5. A sample has a T2 of 100 ms. How long will it take for any transverse magnetization to
decay to 37% of its starting value?
6. A hydrogen sample is at equilibrium in a 1.5 Tesla magnetic field. A constant B1 field of
1.17x10-4 Tesla is applied along the +x'-axis for 50 microseconds. What is the direction
of the net magnetization vector after the B1 field is turned off?
7. In conventional clinical imaging, the Z-gradient coil is used for slice selection, the X-gradient
for frequency encoding and the Y-gradient for phase encoding. Sketch the trajectory in K-
space taken for 2D images obtained in this manner. How could the X- and Y-gradient fields
be combined to obtain a radial K-space trajectory, the standard in CT and one originally
used in MRI?
8. Explain how molecular signatures and their quantities can be estimated using NMR
spectroscopy.
9. Explain why fMRI is preferred over other methods (PET, SPECT, EEG) for studying brain
activity.
10. Explain the phenomenon of precession of nuclei in a static magnetic field. How is the
precession frequency related to the strength of the magnetic field and the magnetic
moment of the nucleus?
11. Use the Larmor equation to determine the resonance frequency of a proton in a 1.5 Tesla
magnetic field. How does the resonance frequency change as the strength of the magnetic
field is increased or decreased?



12. Describe the behavior of the bulk magnetization of a sample in the presence of static and
radio frequency magnetic fields. How does the magnetization change over time in
response to the applied fields?
13. Define T1 and T2 relaxation times in the context of magnetic resonance imaging. How
are these relaxation times related to the longitudinal and transverse components of the
magnetization vector?
14. Explain the phenomenon of free induction decay in magnetic resonance imaging. How is
the signal generated and how does it decay over time?
15. Describe how the two-dimensional Fourier transform (2DFT) method is used to construct
a magnetic resonance image. How is the image contrast generated from the magnetic
resonance signal?
16. Identify the components of an MRI system, state their principle of operation, and
describe their contribution to the system. How are these components integrated to form
an MRI scanner?
17. Discuss the physiological basis of functional MRI (fMRI). How is fMRI used to image
brain activity? What are the limitations of fMRI in mapping brain function?
18. Explain the chemical basis of MR spectroscopy and give some examples of nuclei that
are studied. How is the MR spectrum generated and how is it used to identify chemical
compounds in a sample?
19. Discuss the applications of magnetic resonance imaging and spectroscopy in various
medical fields, such as neurology, cardiology, and oncology. How do these techniques
complement other imaging modalities in the diagnosis and management of diseases?



References
1. G.S., Fundamentals of Biomedical Engineering, New Age International (P) Limited,
Publishers, New Delhi, 2007.
2. Hendee W. R., Ritenour E. R., Medical imaging physics, fourth edition, A John Wiley &
Sons, Inc., Publication, ISBN 0-471-38226-4, New York, 2002.
3. Paul Suetens, Fundamentals of medical imaging, second edition, Cambridge University
Press, ISBN-13 978-0-521-51915-1, New York, 2009.
4. Chris Guy, Dominic ffytche, An Introduction to The Principles of Medical Imaging,
Revised Edition, Imperial College Press, ISBN 1-86094-502-3, London, 2005.
5. Assaf Tal, Assaf's Magnetic Resonance Pages: Magnetic Resonance Imaging, New York
University Langone School of Medicine, 2013.
6. Jerrold T. B., Seibert A. J., Edwin M. L., John M. B., The essential physics of medical imaging, Lippincott Williams & Wilkins, a Wolters Kluwer company, USA, 2002.



CHAPTER SIX
6. NUCLEAR MEDICINE

Objectives
By studying this chapter, the reader should be able to:

• Explain the basic principles and different types of radioactivity.


• Describe the principles and diagnostic applications of radioactive nuclides and radiopharmaceuticals.
• Identify the function of each component of a scintillation gamma camera.
• Discuss the acquisition, presentation and properties of images for single-photon emission
tomography.
• Discuss the acquisition, presentation and properties of images for positron emission
tomography.
• Explain the recent advances of imaging with dual imaging modalities PET/CT, SPECT/CT.

6.1. Introduction
Nuclear medicine is an emission imaging technique based on radioisotopes. The imaging
procedure requires the injection or administration of a small volume of a soluble carrier substance,
labeled by a radioactive isotope. The blood circulation distributes the injected solution throughout
the body. Ideally the carrier substance is designed to concentrate preferentially in a target organ
or around a particular disease process. The radioactive tracer is ideally just an emitter of gamma-
rays with energies in the range 60–510 keV. The emitted photons leave the body, to be collimated
and counted using a large area electronic photon detector, sometimes called an Anger camera.

Most nuclear medicine studies require the use of a detector outside the body to measure the rate of
accumulation, release, or distribution of radioactivity in a particular region inside the body. The
rate of accumulation or release of radioactivity may be measured with one or more detectors,
usually NaI(Tl) scintillation crystals, positioned at fixed locations outside the body. Images of the
distribution of radioactivity usually are obtained with a stationary imaging device.



There are three different modalities under the general umbrella of nuclear medicine. The most
basic, planar scintigraphy, images the distribution of radioactive material in a single two
dimensional image, analogous to a planar X-ray scan. The second type of scan, single photon
emission computed tomography (SPECT), produces a series of contiguous two-dimensional
images of the distribution of the radiotracer using the same agents as planar scintigraphy. The final
method is positron emission tomography (PET). This involves injection of a different type of
radiotracer, one which emits positrons (positively charged electrons). These annihilate with
electrons within the body, emitting gamma-rays with energy of 511 keV.

In the subsequent sections, radioactivity, the main radionuclides and carrier molecules in common use, the physics of the Anger camera, planar scintigraphy, and the basic principles of SPECT and PET will be discussed.

6.2. Radioactivity
A nucleus not in its stable state will adjust itself until it is stable either by ejecting portions of its
nucleus or by emitting energy in the form of photons (gamma rays). This process is referred to as
radioactive decay. A radioactive isotope is one which undergoes a spontaneous change in the
composition of the nucleus, termed as ‘disintegration’, resulting in emission of energy.

The quantity of radioactive material, expressed as the number of radioactive atoms undergoing nuclear transformation per unit time, is called activity (A). Described mathematically, activity is the rate of decrease of the total number of radioactive atoms (N) with time (t):

A = −dN/dt

Activity is measured in units of curies (Ci), where one curie equals 3.7×10¹⁰ disintegrations per second. In nuclear medicine, activities from 0.1 to 30 mCi of a variety of radionuclides are typically used for imaging studies, and up to 300 mCi of iodine-131 are used for therapy. The SI unit for radioactivity is the becquerel (Bq), named for Henri Becquerel, who discovered radioactivity in 1896. One millicurie is equal to 37 megabecquerels (1 mCi = 37 MBq).

The number of atoms decaying per unit time (−dN/dt) is proportional to the number of unstable atoms N(t) present at that time.
The fundamental decay equation is given by:
N(t) = N₀ e^(−λt)

where λ is called the decay constant.

The number of atoms decaying per unit time, the activity, is then:

A(t) = −dN(t)/dt = λ N₀ e^(−λt) = A₀ e^(−λt)

where A₀ = λN₀ is the initial activity.

One of the most common measures of radioactivity is the half-life (τ₁/₂), which is the time required for the radioactivity to drop to one-half of its value. Setting A(τ₁/₂) = A₀/2 in the decay equation gives:

τ₁/₂ = ln 2 / λ ≈ 0.693 / λ
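These decay relations can be checked numerically. A minimal Python sketch (the 10 mCi dose and the 6.02 h Tc-99m half-life are illustrative values, not prescribed by this text):

```python
import math

def activity(A0, half_life, t):
    """A(t) = A0 * exp(-lambda * t), with decay constant lambda = ln(2)/half_life.
    A0 and the returned activity share the same unit; t and half_life share theirs."""
    lam = math.log(2) / half_life
    return A0 * math.exp(-lam * t)

# Illustrative: a 10 mCi dose of Tc-99m (half-life ~6.02 h)
A0_mCi, t_half_h = 10.0, 6.02

A_after_one_half_life = activity(A0_mCi, t_half_h, 6.02)   # 5.0 mCi
A_after_24h = activity(A0_mCi, t_half_h, 24.0)             # ~0.63 mCi

# Unit conversion: 1 mCi = 37 MBq
A0_MBq = A0_mCi * 37.0                                     # 370 MBq
```

After one half-life exactly half the activity remains; after a patient's overnight stay (~4 half-lives for Tc-99m) only about 6% remains, which is why short-lived radionuclides limit patient dose.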

Most radionuclides decay in one or more of the following ways: (a) alpha decay, (b) beta-minus
emission, (c) beta-plus (positron) emission, (d) electron capture, or (e) isomeric transition.

6.2.1. Alpha decay


Very large unstable atoms, atoms with high atomic mass, may split into nuclear fragments. The
smallest stable nuclear fragment that is emitted is the particle consisting of two neutrons and two
protons, equivalent to the nucleus of a helium atom. Because it was one of the first types of
radiation discovered, the emission of a helium nucleus is called alpha radiation, and the emitted
helium nucleus is called an alpha particle. Alpha particles are the heaviest and least penetrating
form of radiation considered in this chapter. They are emitted from the atomic nucleus with discrete energies in the range of 2 to 10 MeV. An alpha particle is approximately four times heavier than a proton or neutron and carries an electronic charge twice that of the proton. Alpha decay can be described by the following equation:

X(A, Z) → Y(A−4, Z−2) + α (a ⁴He nucleus) + energy

Antibodies labeled with alpha-emitting elements can be used to kill acute myeloid leukemia cells in a procedure called targeted alpha therapy.



6.2.2. Beta-minus (Negatron) decay
Nuclei with excess neutrons can achieve stability by a process that amounts to the conversion of a neutron into a proton and an electron. The proton remains in the nucleus, but the electron is emitted. This is called beta radiation, and the electron itself is called a beta particle:

n → p + e⁻ + ν̄ (antineutrino)

The antineutrino is an electrically neutral subatomic particle whose mass is much smaller than that
of an electron. Any excess energy in the nucleus after beta decay is emitted as gamma rays, internal
conversion electrons, and other associated radiations.

6.2.3. Beta-plus (Positron) decay


In a manner analogous to that for excess neutrons, an unstable nucleus with too many protons can undergo a decay that has the effect of converting a proton into a neutron: a proton is converted into a neutron and a positron, which is an electron with a positive, instead of negative, charge:

p → n + e⁺ + ν (neutrino)

Figure 6-1: Annihilation process

The positron is also referred to as a positive beta particle or positive electron or antielectron. In
positron decay, a neutrino is also emitted. In many ways, positron decay is the mirror image of
beta decay: positive electron instead of negative electron, neutrino instead of antineutrino. Unlike
the negative electron, the positron itself survives only briefly. It quickly encounters an electron (electrons are plentiful in matter), and both are annihilated, producing gamma radiation. This is why it is considered an antielectron. Positron emission requires an energy difference between the



parent and daughter atoms of at least 1.022 MeV. This decay process is the backbone of the positron emission tomography (PET) imaging technique.

6.2.4. Electron capture decay


Electron capture (EC) is an alternative to positron decay for neutron-deficient radionuclides. In this decay mode, the nucleus captures an orbital (usually a K- or L-shell) electron, with the conversion of a proton into a neutron and the simultaneous ejection of a neutrino. Electron capture can be described by the following equation:

p + e⁻ → n + ν (neutrino)

The capture of an orbital electron creates a vacancy in the electron shell, which is filled by an
electron from a higher-energy shell. This electron transition results in the emission of characteristic
x-rays and/or Auger electrons. Electron capture radionuclides used in medical imaging decay to atoms in excited states that subsequently emit externally detectable x-rays or gamma rays or both.

6.2.5. Isomeric Transition


Often, during radioactive decay, a daughter is formed in an excited (i.e., unstable) state. Gamma
rays are emitted as the daughter nucleus undergoes an internal rearrangement and transitions from
the excited state to a lower-energy state. Once created, most excited states transition almost
instantaneously to lower energy states with the emission of gamma radiation. However, some
excited states persist for longer periods, with half-lives ranging from approximately 10⁻¹² seconds to more than 600 years. These excited states are called metastable or isomeric states and are
denoted by the letter "m" after the mass number (e.g., Tc-99m). Isomeric transition is a decay
process that yields gamma radiation without the emission or capture of a particle by the nucleus.
There is no change in atomic number, mass number, or neutron number.

Isomeric transition can be described by the following equation:

Xᵐ(A, Z) → X(A, Z) + γ

The energy is released in the form of gamma rays or internal conversion electrons, or both.

The following figure summarizes the decay schemes.



Figure 6-2: Decay Schematics

6.3. Radiopharmaceuticals
Radiopharmaceuticals are medicinal formulations containing radioisotopes which are safe for
administration in humans for diagnosis or for therapy. Nuclear imaging depends critically on
the design and manufacture of suitable radiopharmaceuticals: substances that can take part in metabolism and are labeled with one or more gamma-emitting radioactive elements. Although a few radionuclides such as ¹³³Xe and ¹²³I are used in elemental form, the majority of radiopharmaceuticals consist of two parts: a carrier molecule (pharmaceutical) and a suitable incorporated radionuclide.



Figure 6-3: Formation of Radiopharmaceutical
The carrier has to be soluble and eventually cross cell membranes, after oxidation or other
metabolic processes. On the other hand, the carrier must be sufficiently stable that it has time to
reach a target site and be concentrated there, before its metabolic demise. There are gamma-
emitting radionuclides of biologically important elements such as iodine, fluorine and oxygen
which allow a labeled, naturally occurring, biologically important molecule to be synthesized.

The radionuclide chosen for labeling must have chemical properties which allow it to be
incorporated chemically into a carrier, without unintentionally completely altering the designed
metabolic function of that carrier. The radionuclide half-life is also a very important factor. For nuclear medicine diagnostics, only nuclides with half-lives of seconds up to hours can be used. Radionuclides with too short a half-life may decay prior to application, while radionuclides with too long a half-life give an extremely low counting rate and may cause unacceptable radiation exposure.

Although many naturally occurring radioactive nuclides exist, all of those commonly administered
to patients in nuclear medicine are artificially produced. Radioactive elements (radionuclides) are
produced by:

• Natural occurrence (rarely)
• Nuclear reactors (bombarding stable nuclides with neutrons)
• Nuclear fission (as a product)
• Cyclotrons (bombarding stable nuclides with accelerated charged particles).



Table 6.1 lists the half-life, decay process and production method of different radionuclides.

Table 6.1: Different radionuclides and their characteristics

Table 6.2 compares the radionuclide production methods.



Table 6.2: Radionuclide production methods
Radiopharmaceuticals are administered to the patient via the following routes:
• Injection (into the blood stream)
• Swallowing (gastrointestinal tract)
• Inhalation (through the lungs)

Radiopharmaceutical concentration in tissue is driven by one or more of the following mechanisms: (a) compartmental localization and leakage, (b) cell sequestration, (c) phagocytosis, (d) passive diffusion, (e) metabolism, (f) active transport, (g) capillary blockade, (h) perfusion, (i) chemotaxis, (j) antibody-antigen complexation, (k) receptor binding, and (l) physicochemical adsorption.

6.4. Gamma camera


All modern applications of gamma imaging use one or more large-area multi-detectors, called gamma or Anger cameras, named after their inventor, Hal Anger. All gamma cameras have three main parts: a collimator, a NaI crystal scintillator/PM-tube multi-detector, and hardwired logic circuitry for position-sensitive photon counting and energy analysis.



Figure 6-4: The Gamma Camera

6.4.1. The collimator


The role of the collimator in nuclear medicine is very similar to that of the anti-scatter grid in X-
ray imaging. Since emitted rays from a source of radioactivity within the body are emitted in all
directions, a much higher degree of collimation is required in nuclear medicine than in X-ray
imaging. This, together with the high attenuation of emitted rays within the body, leads to a very high proportion of emitted γ-rays (~99.9%) not being detected. The collimator thus defines a line of
sight and a projection direction perpendicular to the face of the collimator. Gamma-rays emitted
along the collimation axis from any depth below the face are collected. The collimator is essential
but necessarily leads to a drastic reduction in the efficiency of photon detection, since many
perfectly acceptable gamma trajectories stop in the lead spaces between the holes. Overall,
relatively poor use is made of the available patient dose.

6.4.2. The Scintillator Crystal


Gamma-rays not attenuated in the body are detected using a scintillation crystal which converts
their energy into light. In the gamma camera, the detector is a large single crystal of thallium-activated sodium iodide, NaI(Tl), approximately 40–50 cm in diameter. When a γ-ray strikes the crystal, it loses energy through photoelectric and Compton interactions, which result in a population of excited electronic states within the crystal. De-excitation of these states occurs ~230 nanoseconds later via emission of photons with a wavelength of 415 nm (visible blue light), corresponding to a photon energy of ~3 eV. A very important characteristic of scintillators such as
NaI(Tl) is that the amount of light (the number of photons) produced is directly proportional to the
energy of the incident gamma-ray.
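A rough calculation illustrates these numbers (the NaI(Tl) light yield of ~38 photons per keV is a typical literature value, assumed here rather than taken from this text):

```python
# Energy of a single 415 nm scintillation photon: E = h*c/lambda
h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s
J_per_eV = 1.602e-19   # joules per electron volt

E_photon_eV = h * c / 415e-9 / J_per_eV    # ~3 eV per blue photon

# Approximate number of scintillation photons per detected gamma-ray,
# assuming a light yield of ~38 photons/keV for NaI(Tl) (literature value)
light_yield_per_keV = 38
n_photons = 140 * light_yield_per_keV      # ~5300 photons for a 140 keV (Tc-99m) gamma
```

The linearity of the light yield is what makes the summed PMT signal a usable measure of the incident gamma-ray energy.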

For every gamma-ray that hits the scintillation crystal a few thousand photons are produced, each
with a very low energy of a few electron volts. These very low light signals need to be amplified
and converted into an electrical current that can be digitized: photomultiplier tubes (PMTs) are the devices used for this specific task. A photomultiplier tube consists of a photocathode on top, followed by a cascade of dynodes.

The PMT is glued to the crystal. Because the light photons should reach the photocathode of the
PMT, the crystal must be transparent to the visible photons. The energy of the photons hitting the
photocathode releases some electrons from the cathode. These electrons are then accelerated
toward the positively charged dynode nearby. They arrive with higher energy (the voltage difference × the charge), releasing additional electrons.

Figure 6-5: Photomultiplier and the electrical scheme



Because the voltage becomes systematically higher for subsequent dynodes, the number of
electrons increases in every stage, finally producing a measurable signal. Because the
multiplication in every stage is constant, the final signal is proportional to the number of
scintillation photons, which in turn is proportional to the energy of the original photon. Hence, a
γ-photon is detected, and its energy can also be measured.
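The cascade amplification can be sketched as follows (the secondary-emission factor and dynode count are illustrative assumptions; real PMTs vary):

```python
# Each dynode multiplies the arriving electrons by a roughly constant
# secondary-emission factor delta, so n dynodes give an overall gain of delta**n.
delta = 5          # electrons released per incident electron (assumed)
n_dynodes = 10     # typical order of magnitude (assumed)

gain = delta ** n_dynodes          # ~1e7-fold amplification

# A scintillation flash releasing ~1000 photoelectrons from the photocathode
# thus produces a readily measurable charge pulse at the anode.
n_photoelectrons = 1000
anode_electrons = n_photoelectrons * gain
```

Because the per-stage multiplication is constant, doubling the scintillation light doubles the output pulse, preserving the proportionality to the original photon energy.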

6.4.3. The Anger position network and pulse height analyzer


Whenever a scintillation event occurs in a NaI(Tl) crystal the PMT closest to the scintillation event
produces the largest output current. However, adjacent PMTs produce smaller output currents,
with the amount of light detected being approximately inversely proportional to the distance between the scintillation event and the particular PMT. By comparing the magnitudes of the currents from all the PMTs, the location of the scintillation within the crystal can be estimated much more precisely. In older analogue gamma cameras this process was carried out using an Anger logic
circuit, which consists of four resistors connected to the output of each PMT, as shown in Figure
(6-6). This network produces four output signals, X+, X-, Y+ and Y- which are summed for all the
PMTs.

The relative magnitudes and signs of these summed signals then define the estimated (X, Y) location of the scintillation event in the crystal, given by:

X = k (X⁺ − X⁻) / Z,   Y = k (Y⁺ − Y⁻) / Z,   with Z = X⁺ + X⁻ + Y⁺ + Y⁻

where k is an empirical constant of the particular camera. The sum of all four amplifier outputs is
a measure of the total visible light and thus the initial energy of the incident gamma-ray. The
summed signal is sent to a pulse height analyzer circuit (PHA), which performs the energy
analysis. The role of the PHA is to determine which of the recorded events correspond to gamma-
rays that have not been scattered within tissue (primary radiation) and should be retained, and
which have been Compton scattered in the patient, do not contain any useful spatial information,
and so should be rejected. Since the amplitude of the voltage pulse from the PMT is proportional
to the energy of the detected gamma-ray, discriminating on the basis of the magnitude of the output
of the PMT is equivalent to discriminating on the basis of gamma-ray energy.

Figure 6-6: The Anger Logic
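A minimal numerical sketch of this position and energy logic (the signal values, the constant k, and the 20% energy window are illustrative assumptions, not camera specifications):

```python
def anger_position(xp, xm, yp, ym, k=1.0):
    """Estimate the (X, Y) scintillation location from the four summed
    Anger signals; the total Z is proportional to the deposited energy."""
    z = xp + xm + yp + ym
    x = k * (xp - xm) / z
    y = k * (yp - ym) / z
    return x, y, z

def in_energy_window(z, z_photopeak, window=0.20):
    """Pulse height analysis: accept an event only if its summed signal lies
    within a symmetric window (here 20%) around the photopeak value, so
    lower-energy Compton-scattered gamma-rays are rejected."""
    half = window / 2 * z_photopeak
    return abs(z - z_photopeak) <= half

# Event closer to the +X side of the crystal: X+ current exceeds X-
x, y, z = anger_position(xp=60.0, xm=20.0, yp=40.0, ym=40.0)   # x = 0.25, y = 0.0
accepted = in_energy_window(z, z_photopeak=160.0)              # unscattered: accepted
```

Normalizing the position by Z makes the estimate independent of the gamma-ray energy, while Z itself feeds the pulse height analyzer.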

6.5. Planar scintigraphy


The gamma scan analogue of the X-ray radiograph is called planar imaging. It involves counting gamma photons while keeping the Anger camera in a fixed position and orientation with respect to the patient. The variation of intensity with position in the camera is a 2D image (one projection) of the distribution of radioactivity immediately below the camera.

Planar imaging with a parallel-hole collimator, like the X-ray analogue, provides no depth
information; rather the activity over the entire patient thickness below the collimator is integrated
to form the image. Planar gamma imaging, like its X-ray analogue, has the merit of simplicity and
relative speed and thus, in many clinical procedures, the single projection image is the chosen
method for preliminary investigations.



Figure 6-7: Components of a standard planar imaging system
The major clinical application of planar scintigraphy is whole-body bone scanning which
constitutes about 20% of all nuclear medicine scans. In addition, the thyroid, gastrointestinal tract,
liver and kidneys are scanned using planar scintigraphy with specialized agents. For example,
99mTc can be attached to small particles of sulphur colloid, less than 100 nm in diameter, which
concentrate in healthy Kupffer cells in the liver, spleen and bone marrow. Pathological conditions
such as cirrhosis of the liver result in diseased Kupffer cells which can no longer uptake the
radiotracer, and therefore areas of low signal intensity, ‘cold spots’, are present in the image.
Figure (6-8) below shows the whole body scan of a patient over time.
Figure 6-8: Skeletal scintigraphy (99mTc-MDP)

6.6. Single photon emission computed tomography (SPECT)


SPECT is one tomographic modification of planar imaging. If the gamma camera can be tilted
through a wide range of angles with respect to the patient, then a collection of 2D projections can
be obtained. Either filtered backprojection or iterative techniques are used to reconstruct multiple
two-dimensional axial slices from the acquired projections. The collimator determines the line of
sight through the patient for any given projection.

SPECT uses essentially the same instrumentation and many of the same radiotracers as planar
scintigraphy, and most SPECT machines can also be used for planar scans. The majority of SPECT
scans are used for myocardial perfusion studies to detect coronary artery disease or myocardial
infarction, although SPECT is also used for brain studies to detect areas of reduced blood flow
associated with stroke, epilepsy or neurodegenerative diseases such as Alzheimer's disease.

Since the array of PMTs is two-dimensional in nature, the data can be reconstructed as a series of
adjacent slices. A 360° rotation is generally needed in SPECT since the source-to-detector distance affects the distribution of γ-ray scatter in the body, the degree of tissue attenuation and also the spatial resolution, and so projections acquired at 180° to one another are not identical. A
converging collimator is often used in SPECT to increase the SNR of the scan. In a SPECT brain
scan each image of a multi-slice data set is formed from typically 500 000 counts, with a spatial
resolution of ~7 mm. Myocardial SPECT has a lower number of counts, typically 100 000 per
image, and a spatial resolution about twice as coarse as that of the brain scan (since the source to
detector distance is much larger due to the size of the body).

Figure 6-9: SPECT Camera

In SPECT the detected signal is the line integral of the activity in a specified direction. The projection data P(r, α) contain only photons emitted from the activity distribution A(x, y) that strike the collimator perpendicularly:

P(r, α) = ∫ A(x, y) dl(r, α)

Figure 6-10: projection data

The stack of projections at different orientations produces a sinogram (Figure 6-11); by an appropriate reconstruction algorithm the 2D image is then reconstructed.
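The name "sinogram" comes directly from the projection geometry: a point source at (x0, y0) appears at projection coordinate r = x0·cos α + y0·sin α, tracing a sinusoid as α varies. A small sketch (the source coordinates are illustrative):

```python
import math

def sinogram_trace(x0, y0, n_angles=8):
    """Projection coordinate of a point source at (x0, y0) for each view
    angle alpha over 180 degrees: r = x0*cos(alpha) + y0*sin(alpha)."""
    trace = []
    for i in range(n_angles):
        alpha = i * math.pi / n_angles
        r = x0 * math.cos(alpha) + y0 * math.sin(alpha)
        trace.append((alpha, r))
    return trace

# A point source 3 cm from the rotation center along the x axis
trace = sinogram_trace(3.0, 0.0)
# At alpha = 0 the source projects to r = 3; at alpha = 90 degrees, to r = 0.
# The (alpha, r) samples lie on a sine curve, hence the name "sinogram".
```

An extended activity distribution produces a superposition of many such sinusoids, which the reconstruction algorithm disentangles into a 2D image.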



Figure 6-11: A set of projections and a sinogram

Figure 6-12: A SPECT machine (left) and a kidney SPECT (right)


SPECT can be combined with CT in a SPECT/CT fusion system (Figure 6-13), which acquires data from the two imaging modalities with a single integrated patient bed and gantry. The two imaging studies are performed with the patient remaining on the table, which is moved from the CT scanner to the SPECT system. Typical clinical systems use a two-head SPECT camera with a multi-slice CT scanner.
multi-slice CT scanner. The primary advantages of a combined SPECT/CT over a stand-alone
SPECT system are: (i) improved attenuation correction for SPECT reconstruction using the high
resolution anatomical information from the CT scanner, and (ii) the fusion of high-resolution
anatomical (CT) with functional (SPECT) information, allowing the anatomical location of
radioactive ‘hot’ or ‘cold’ spots to be defined much better with reduced partial volume effects
compared to SPECT alone. The major application of SPECT is to cardiac and brain perfusion
studies.
Figure 6-13: A SPECT/CT system

6.7. Positron Emission Tomography (PET)


PET or positron emission tomography is a more recent clinical modification of gamma imaging
that makes use of the two γ-rays, emitted simultaneously, when a positron annihilates with an
electron. The tracer introduced into the patient is a positron emitter such as ¹⁵O, ¹¹C or ¹⁸F, bound to
a suitable carrier. The radioactive decay produces a positron, e+, with an initial kinetic energy of
~1 MeV. Typically the positron travels less than 5 mm in biological tissue from its point of
emission. The high electron density of biological tissue ensures frequent electron/positron
encounters; one of these will result in the disappearance or annihilation of the two particles,
replacing them with two γ-rays.

The conservation of energy demands that the energy of the two γ-rays is supplied by the total
energy of the positron and the electron. By the time the annihilation takes place, nearly all the
initial kinetic energy of the positron has been dissipated in tissue. The annihilation event results in
the conversion of the combined electron/positron rest mass, 1.022 MeV, into photon energy. Two γ-rays, each with an energy of 511 keV, are produced to conserve energy. In addition, since the
electron/positron pair is essentially at rest when annihilation takes place, the two γ-rays have to
leave the annihilation site, travelling in nearly opposite directions in order to conserve linear
momentum.
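The 511 keV figure follows directly from the electron rest mass; a quick numerical check:

```python
# Each annihilation photon carries the rest-mass energy of one electron,
# E = m_e * c**2; the pair together carries 2 * 511 keV = 1.022 MeV.
m_e = 9.109e-31        # electron rest mass, kg
c = 2.998e8            # speed of light, m/s
J_per_keV = 1.602e-16  # joules per keV

E_photon_keV = m_e * c ** 2 / J_per_keV     # ~511 keV per gamma-ray
E_pair_MeV = 2 * E_photon_keV / 1000.0      # ~1.022 MeV for the pair
```

This is the same calculation asked for in review question 6 at the end of this chapter.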



Figure 6-14: Positron Annihilation

The two γ-rays, together, define a line that passes close to the point of emission of the original
positron. Coincidence detection is employed with a circular position-sensitive detector encircling
the patient. A single coincidence event, with its two detected ends, defines a line of sight through
the patient, somewhere along which, the annihilation took place. Since the positron mean free path
is very short, the distribution of lines of sight reflects the concentration of the radioactive tracer,
leaving aside the inevitable distortions brought about by scattering and absorption of the emitted
γ-rays. The relatively high γ-ray energy ensures minimal photoelectric absorption in tissue but
significant amounts of Compton scattering, both in the patient and the detector. PET is a wholly
tomographic procedure. The collection of very many events provides enough data to assemble a
series of projections that can be combined to reconstruct 2D images of isotope concentration.
The two γ -rays, travelling in nearly opposite directions, themselves define a line and thus a
collimator is not required, as long as position-sensitive detection is sufficiently precise. The
absence of a collimator in PET makes a major contribution to an overall increase in its efficiency
and spatial resolution with respect to SPECT.
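The coincidence geometry can be sketched numerically (the 40 cm ring radius and the detector angles are illustrative assumptions):

```python
import math

def line_of_response(theta1, theta2, R=40.0):
    """Two detectors on a ring of radius R (cm) at angles theta1 and theta2
    that fire in coincidence define a chord (the line of response) along
    which the annihilation occurred. Returns the two endpoint coordinates."""
    p1 = (R * math.cos(theta1), R * math.sin(theta1))
    p2 = (R * math.cos(theta2), R * math.sin(theta2))
    return p1, p2

# Back-to-back photons striking (nearly) opposite sides of the ring
p1, p2 = line_of_response(0.0, math.pi)
# The chord from (40, 0) to (-40, 0) passes through the ring center,
# consistent with an annihilation near the axis; accumulating many such
# lines of response builds up the projection data for reconstruction.
```

Because the two photons themselves define the line, no physical collimator is needed; electronic coincidence timing replaces it.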

The basic PET camera consists of a number of segmented 360° ring detectors with a common axis, arranged at intervals of about one centimeter.
Figure 6-15: PET ring of detectors

Slices for reconstruction are chosen by using particular detector rings along the axis. Each segment
within a ring is a position-sensitive detector.
The projection data in PET are the integral of the activity along the line (or tube) of response. However, due to false coincidence detection of scattered photons, detector efficiency variations and attenuation, the measured data may deviate from the assumed projection data.

Figure 6-16: Assumed and measured projection data of PET



Each parallel slice is reconstructed independently (a 2D sinogram originates a 2D slice). The PET
image is reconstructed using either filtered backprojection or iterative methods. Slices are stacked
to form a 3D volume f(x,y,z). Figure 6-17 illustrates the reconstruction procedure.

Figure 6-17: PET 2D reconstruction

PET has many clinical applications. It can be applied in oncology, neurology and cardiology. The
following table illustrates some of the PET tracers and their application.

PET imaging, particularly in oncologic applications, often reveals suspicious lesions, but provides
little information regarding their exact locations in the organs of the patients. Like SPECT, PET
can also be fused with CT. PET/CT has its major clinical applications in the general areas of
oncology, cardiology and neurology. In oncology, whole body imaging is used to identify both
primary tumors and secondary metastatic disease remote from the primary source. The major
disadvantages of PET/CT are the requirement for an on-site cyclotron to produce positron emitting
radiotracers, and the high associated costs.
Consider Figure 6-18. The whole-body projection image (left) shows a focal FDG enhancement in the right lower abdomen (dotted line). In the corresponding CT image the melanoma metastasis in the colon wall (arrow) is easily overlooked. The PET image clearly shows the metastasis. However, only the PET/CT fusion allows the exact anatomic localization of the metastasis, which was then removed by surgery.

Figure 6-18: Comparison of CT, PET and PET/CT images

6.8. Comparison of SPECT and PET


In single photon emission imaging, the spatial resolution and the detection efficiency are primarily
determined by the collimator. Both are ultimately limited by the compromise between collimator
efficiency and collimator spatial resolution that is a consequence of collimated image formation.
It is the use of annihilation coincidence detection instead of collimation that makes the PET
scanner much more efficient than the scintillation camera and also yields its superior spatial
resolution.

In systems that use collimation to form images, the spatial resolution rapidly deteriorates with
distance from the face of the imaging device. This causes the spatial resolution to deteriorate from
the edge to the center in transverse SPECT images. In contrast, the spatial resolution in a transverse
PET image is best in the center. Table 6.3 compares SPECT and PET systems.



Table 6.3: Comparison of PET and SPECT



Review Questions
1. In a sample of 20 000 atoms, if 400 decay in 8 seconds what is the radioactivity,
measured in mCi, of the sample?
2. If a radioactive nuclide decays for an interval of time equal to its average life, what
fraction of the original activity remains.
3. Explain the relationship between radiotracers half-life and image quality in nuclear
medicine imaging.
4. Explain why pharmaceutical which have similar property with molecules in our body are
required for nuclear imaging.
5. For a 64×64 data matrix, how many total counts are necessary for a 10% pixel-by-pixel
uniformity level in a SPECT image?
6. Using the rest mass of the electron, show that the energies of the two γ-rays produced by
the annihilation of an electron with a positron are 511 keV.
7. Explain why spatial resolution in SPECT is dependent on collimator and camera orbit
while it is relatively constant across trans-axial image and best at the center in PET.
8. Explain the basic principles of radioactivity. What are the different types of radioactive
decay and how do they differ in terms of energy and particle emission?
9. Describe the principles and diagnostic applications of radioactive nuclides and
radiopharmaceuticals. How are radiopharmaceuticals synthesized and labeled with
radioactive nuclides? How are they used for imaging and therapy?
10. Explain the function of each component of a scintillation gamma camera. How does the
gamma camera detect and localize gamma radiation emitted by a radiopharmaceutical in
the body?
11. Describe the acquisition, presentation, and properties of images for single-photon
emission tomography (SPECT). How is the SPECT image reconstructed from multiple
projection images? What are the advantages and limitations of SPECT imaging?
12. Describe the acquisition, presentation, and properties of images for positron emission
tomography (PET). How is the PET image reconstructed from multiple coincidence
events? What are the advantages and limitations of PET imaging?
13. Explain the recent advances in imaging with dual imaging modalities PET/CT and
SPECT/CT. What are the advantages of combining functional and anatomical imaging in

a single examination? How is the PET/CT or SPECT/CT image generated and
interpreted?
14. Discuss the applications of nuclear medicine imaging in various medical fields, such as
oncology, cardiology, and neurology. How do these techniques complement other
imaging modalities in the diagnosis and management of diseases?
15. Explain the principles of radiation safety in nuclear medicine imaging. What are the
guidelines for the safe handling and administration of radioactive materials to patients
and healthcare workers?
16. Describe the role of quality control in nuclear medicine imaging. How are the
performance and accuracy of imaging systems monitored and maintained over time?
17. Discuss the ethical and social issues associated with the use of nuclear medicine imaging.
What are the implications of radiation exposure and genetic testing for patients and their
families? How can these issues be addressed in clinical practice and research?
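The numerical questions in the list above (1, 2, 5, and 6) can be checked with a short script using standard physical constants; the question numbers referenced in the comments are assumed from that list:

```python
import math

# Self-check for the numerical review questions above (standard constants).

# Q1: activity of a sample in which 400 atoms decay in 8 s, expressed in mCi.
activity_bq = 400 / 8                # decays per second (Bq)
activity_mci = activity_bq / 3.7e7   # 1 mCi = 3.7e7 Bq
print(f"Q1: {activity_bq:.0f} Bq ~ {activity_mci:.2e} mCi")

# Q2: fraction of activity remaining after one average (mean) life, t = 1/lambda.
print(f"Q2: e^-1 ~ {math.exp(-1):.3f} of the original activity remains")

# Q5: 10% pixel-by-pixel uniformity requires sqrt(N)/N = 0.10, i.e. N = 100
# counts per pixel, over a 64 x 64 matrix.
counts_per_pixel = round((1 / 0.10) ** 2)
total_counts = 64 * 64 * counts_per_pixel
print(f"Q5: {total_counts} total counts for a 64 x 64 matrix")

# Q6: annihilation photon energy E = m_e * c^2, converted to keV.
m_e, c, e = 9.109e-31, 2.998e8, 1.602e-19  # kg, m/s, J per eV
E_keV = m_e * c**2 / e / 1e3
print(f"Q6: {E_keV:.0f} keV per photon")
```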

References
1. Sawhney G.S., Fundamentals of Biomedical Engineering, New Age International (P)
Limited, Publishers, New Delhi, 2007.
2. Hendee W. R., Ritenour E. R., Medical imaging physics, fourth edition, A John Wiley &
Sons, Inc., Publication, ISBN 0-471-38226-4, New York, 2002.
3. Paul Suetens, Fundamentals of medical imaging, second edition, Cambridge University
Press, ISBN-13 978-0-521-51915-1, New York, 2009.
4. Chris Guy, Dominic ffytche, An Introduction to The Principles of Medical Imaging,
Revised Edition, Imperial College Press, ISBN 1-86094-502-3, London, 2005.
5. Bushberg J. T., Seibert J. A., Leidholdt E. M., Boone J. M., The Essential Physics of
Medical Imaging, Lippincott Williams & Wilkins, a Wolters Kluwer company, USA, 2002.
