

Radiation Physics
and its applications in diagnostic
radiological techniques

Dr. MAHMOOD RADHI AL-QURAYSHI


Assistant Professor
Ph.D. (Physicist)
Department of Radiological Techniques

Dr. HAIDER QASIM AL-MOSAWI


Assistant Professor
DMRD; FIBMS (Radiologist)
Department of Radiological Techniques

2015
BOOK CONTENTS

CHAPTER ONE : RADIATION AND ATOM
CHAPTER TWO : PRODUCTION OF X-RAYS
CHAPTER THREE : INTERACTION OF X-RAYS
CHAPTER FOUR : IMAGING WITH X-RAYS
CHAPTER FIVE : FLUOROSCOPY
CHAPTER SIX : COMPUTED TOMOGRAPHY
CHAPTER SEVEN : NUCLEAR MEDICINE IMAGING SYSTEMS
CHAPTER EIGHT : IMAGING WITH ULTRASOUND
CHAPTER NINE : MAGNETIC RESONANCE IMAGING
CHAPTER TEN : RADIATION HAZARDS AND PROTECTION
BOOK REFERENCES
APPENDIX : ACRONYMS AND UNITS

PREFACE
This book is intended as a supporting textbook in radiation physics and its applications in diagnostic radiological techniques for applied academic medical graduate programs. The book may also be of interest to the many professional physicists who deal with medical physics in their daily occupations and need to improve their understanding of radiation physics, and to students in all medical postgraduate programs.
Medical physics is a rapidly developing specialty of physics, concerned with the application of radiation to the diagnosis and treatment of human disease.
In contrast to other physics specialties, such as nuclear physics, solid-state physics, and high-energy physics, modern medical physics attracts a much broader base of professionals, including graduate students in medical imaging, residents and technology students in diagnostic imaging and therapeutic radiation oncology, students in biomedical engineering, and students in radiation safety and radiation dosimetry educational programs. All these professionals have a common desire to improve their knowledge of the physics that underlies the application of radiation in the diagnosis and treatment of disease. Candidates preparing for professional certification exams in any of the medical imaging and medical radiation related specialties should also find the material useful. The intent of this book is to provide the missing link between elementary physics and the physics of the medical imaging subspecialties.
This book is based on notes that we developed over the past years of teaching radiation physics to students in the radiological techniques department at the college of health and medical technology. It contains ten chapters, each covering a specific group of subjects related to radiation physics that form the basic knowledge required of professionals working in different medical imaging fields.
We are greatly indebted to our colleagues in the radiological techniques department at the health and medical technology college for their encouragement, approval and tolerance of our concentration on the book during the past year.
Finally, we gratefully acknowledge that the completion of this book could not have
been accomplished without the support and encouragement of our families.

Dr. Mahmood Radhi Alqurayshi Dr. Haider Qasim Almosawi

Baghdad December, 2014


CHAPTER 1

RADIATION AND ATOM

Rationale: Atoms are far too small to see directly, even with the most powerful optical microscopes. It is through the "language of light" that we communicate with the world of the atom. This chapter will introduce you to the rudiments of this language.

Mahmood & Haider

Performance Objectives

After studying chapter one, the student will be able to:

1. Define the radiological units.
2. Draw a diagram of the atomic structure.
3. Know the mechanism of the distribution of electrons among the atomic shells.
4. Define ionizing radiation.
5. Give some examples of ionizing radiation.
6. Determine the sources of ionizing radiation.
7. Know the properties considered when ionizing radiation is measured.
8. Define electromagnetic radiation.
9. Determine the relationship between the photon energy and its frequency.
10. State and explain the inverse square law.

CHAPTER ONE: RADIATION AND ATOM


CHAPTER CONTENTS
1.1 The Atom
1.1.1 Fundamental Particles
1.1.2 Atomic Structure
1.1.3 Binding Energy
1.2 Wave-Particle Duality
1.3 Radiation
1.3.1 Non-Ionizing Radiation
1.3.2 Ionizing Radiation
1.4 Types of Ionizing Radiation
1.4.1 Particle Radiation
1.4.1.1 Alpha Particles
1.4.1.2 Beta Particles
1.4.1.3 Neutron Radiation
1.4.2 Types of Electromagnetic Ionizing Radiation
1.4.2.1 Gamma Rays
1.4.2.2 X-Rays
1.4.2.3 Ultraviolet
1.5 Inverse Square Law for Radiation
1.6 Properties Considered When Ionizing Radiation Is Measured
1.7 Radiologic Units
1.7.1 Roentgen
1.7.2 Rad
1.7.3 Rem
1.7.4 Curie
1.7.5 Electron Volt
1.8 Practical Units
1.8.1 Absorbed Dose
1.8.2 Equivalent Dose
1.8.3 Effective Dose
1.1 The Atom
Atoms are far too small to see directly, even with the most powerful optical microscopes. But
atoms do interact with, and under some circumstances emit, light in ways that reveal their
internal structures in amazingly fine detail. It is through the "language of light" that we
communicate with the world of the atom. This section will introduce you to the rudiments of
this language.
1.1.1 Fundamental Particles
Diagnostic imaging employs radiations – X, gamma, radiofrequency and sound – to which the
body is partly but not completely transparent, and it exploits the special properties of a
number of elements and compounds. As ionizing radiations (X-rays and gamma rays) are used most, it is best to start by discussing the structure of the atom and the production of X-
rays.
Table 1.1: Fundamental properties of particulate radiation

Particle                 Symbol      Relative charge   Mass (amu)   Approx. energy equivalent (MeV)
Neutron                  n0          0                 1.008982     940
Proton                   p, 1H+      +1                1.007593     938
Electron (beta minus)    e-, β-      -1                0.000548     0.511
Positron (beta plus)     e+, β+      +1                0.000548     0.511
Alpha                    α, 4He2+    +2                4.0028       3727

1.1.2 Atomic Structure


An atom consists mainly of empty space. Its mass is concentrated in a central nucleus which
contains a number A of nucleons, where A is called the mass number as shown in figure 1.1.

Figure 1.1: Electron shells in a sodium atom.

The nucleons comprise Z protons, where Z is the atomic number of the element, and so (A-Z)
neutrons.
A nuclide is a species of nucleus characterized by the two numbers Z and A. The atomic number is synonymous with the name of the element. The electron limit per shell can be calculated from the expression 2n², where n is the shell number.
In each atom, the outermost or valence shell is concerned with the chemical, thermal,
optical and electrical properties of the element. X-rays involve the inner shells, and
radioactivity concerns the nucleus.
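As an illustration of the 2n² rule, here is a minimal Python sketch (ours, not from the book) that fills shells in order and reproduces the sodium configuration shown in Figure 1.1:

```python
def shell_capacity(n):
    """Maximum number of electrons in shell n, from the 2n^2 rule."""
    return 2 * n ** 2

def electron_distribution(z):
    """Fill shells in order of increasing n with up to 2n^2 electrons each.

    This simple filling scheme reproduces the configurations of light
    elements such as sodium; heavier elements need the full subshell rules.
    """
    shells, n = [], 1
    while z > 0:
        take = min(z, shell_capacity(n))
        shells.append(take)
        z -= take
        n += 1
    return shells

print(electron_distribution(11))  # sodium -> [2, 8, 1], as in Figure 1.1
```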
1.1.3 Binding Energy
Binding energy is the amount of energy required to separate a particle from a system of particles or
to disperse all the particles of the system. Binding energy is especially applicable to subatomic particles in atomic nuclei, to electrons bound to nuclei in atoms, and to atoms and
ions bound together in crystals.
Nuclear binding energy is the energy required to separate an atomic nucleus completely
into its constituent protons and neutrons, or, equivalently, the energy that would be liberated
by combining individual protons and neutrons into a single nucleus. The hydrogen-2 nucleus,
for example, composed of one proton and one neutron, can be separated completely by
supplying 2.23 million electron volts (MeV) of energy. Conversely, when a slowly moving
neutron and proton combine to form a hydrogen-2 nucleus, 2.23 MeV are liberated in the
form of gamma radiation. The total mass of the bound particles is less than the sum of the
masses of the separate particles by an amount equivalent (as expressed in Einstein’s mass–
energy equation) to the binding energy. Electron binding energy, also called ionization
potential, is the energy required to remove an electron from an atom, a molecule, or an ion. In
general, the binding energy of a single proton or neutron in a nucleus is approximately a
million times greater than the binding energy of a single electron in an atom.
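For instance, the 2.23 MeV figure quoted above can be recovered from the mass defect. A short Python sketch (our illustration; the particle masses are standard reference values, not taken from Table 1.1):

```python
# Binding energy of hydrogen-2 (the deuteron) from the mass defect,
# using E = m * c^2 expressed as 1 amu = 931.494 MeV.
AMU_TO_MEV = 931.494

m_proton = 1.007276    # amu
m_neutron = 1.008665   # amu
m_deuteron = 2.013553  # amu

mass_defect = m_proton + m_neutron - m_deuteron  # amu
binding_energy = mass_defect * AMU_TO_MEV        # MeV

print(f"Mass defect: {mass_defect:.6f} amu")
print(f"Binding energy: {binding_energy:.2f} MeV")  # ~2.22 MeV, matching the text
```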
An atom is said to be ionized when one of its electrons has been completely removed. The
detached electron is a negative ion and the remnant atom a positive ion. Together they form an ion pair.
The binding energy depends on the shell and on the element, increasing as the atomic number increases.
An atom is excited when an electron is raised from one shell to another farther out.

1.2 Wave-Particle Duality


Electromagnetic radiation has two aspects. It can be regarded as a stream of ‘packets’ or quanta of energy, called photons (the quantum aspect), traveling in straight lines. The photon is the smallest possible packet (quantum) of light; it has zero mass but a definite energy.
Electromagnetic radiation can also be regarded as sinusoidally varying electric and
magnetic fields (i.e. wave aspects), traveling with light velocity when in vacuum. They are
transverse waves: the electric and magnetic field vectors point at right angles to each other
and to the direction of travel of the wave.
Einstein is most famous for saying "mass is related to energy". Of course, this is usually
written out as an equation, rather than as words:

E = m × c²

where E is the energy in joules (J), m is the mass in kilograms (kg), and c is the speed of light (3 × 10^8 m/s).

Because of the wave-particle duality of light, the energy of a wave can be related to the
wave's frequency by the equation:

E = h × f

where E is the energy in joules (J), h is Planck's constant (6.63 × 10^-34 J s) and f is the frequency in hertz (Hz or s^-1).
There are three measurable properties of wave motion: amplitude, wavelength, and
frequency, the number of vibrations per second. The relation between the wavelength λ (Greek lambda) and the frequency ν (Greek nu) of a wave is determined by the propagation velocity v:

λ × ν = v (constant)

This relation is true of all kinds of wave motion, including sound; although for sound the
velocity is about a million times less. More usefully, since frequency is inversely proportional
to wavelength, so also is photon energy:

E (in keV) = 1.24/λ (in nm)

For example: Blue light: λ = 400 nm, E = 3 eV

Typical X- and gamma rays: λ = 0.009 nm, E = 140 keV
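These examples follow directly from the E = 1.24/λ rule; a quick Python check (our sketch):

```python
# Photon energy from wavelength, E(keV) = 1.24 / lambda(nm),
# which follows from E = h*c/lambda with h*c ~ 1.24 keV*nm.
def photon_energy_kev(wavelength_nm):
    return 1.24 / wavelength_nm

print(photon_energy_kev(400))    # blue light: ~0.0031 keV = 3.1 eV
print(photon_energy_kev(0.009))  # diagnostic X-rays: ~138 keV
```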

At any point, the graph of field strength against time is a sine wave, depicted as a solid curve in Figure 1.2. The peak field strength is called the amplitude (A). The interval between successive crests of the wave is called the period (T). The frequency is the number of crests passing a point in a second, and is equal to 1/T. The dashed curve refers to a later instant, showing how the wave has travelled forward with velocity c.
At any instant, the graph of field strength against distance is also a sine wave. The distance between successive crests of the wave is called the wavelength (λ).

Figure 1.2: Electromagnetic wave, showing the wavelength (λ) or period (T), the amplitude (A) and the propagation velocity.


The types of radiation are listed in Table 1.2, in order of increasing photon energy, increasing frequency, and decreasing wavelength (see figure 1.3). When the energy is less than 1 keV the
radiation is usually described in terms of its frequency, except that visible light is usually
described in terms of its wavelength. It is curious that only radiations at the ends of the
spectrum, radio waves and X- or gamma rays, penetrate the human body sufficiently to be
used in transmission imaging.
Table 1.2: Electromagnetic spectrum

Radiation       Wavelength   Frequency               Photon energy
Radio waves     30-6 m       10-50 MHz               40-200 neV
Infrared        10-0.7 µm    30-430 THz              0.2-1.8 eV
Visible light   700-400 nm   430-750 THz             1.8-3 eV
Ultraviolet     400-100 nm   750-3000 THz            3-12 eV
X- and gamma    60-2.5 pm    5×10^6 - 120×10^6 THz   20-500 keV

1.3 Radiation
Radiation is a fact of life: all around us, all the time. Radiation is energy moving in the form
of waves or streams of particles. Understanding radiation requires basic knowledge of atomic
structure, energy and how radiation may damage cells in the human body. There are many
kinds of radiation all around us. When people hear the word radiation, they often think of
atomic energy, nuclear power and radioactivity, but radiation has many other forms. Sound
and visible light are familiar forms of radiation; other types include ultraviolet radiation (that
produces a suntan), infrared radiation (a form of heat energy), and radio and television
signals. Figure 1.3 presents an overview of the electromagnetic spectrum.
Electromagnetic radiation is a form of energy. Electromagnetic energy is the term given to
energy traveling across empty space and used to describe all the different kinds of energies
released into space by stars such as the Sun. All forms of electromagnetic radiation (which
includes radio waves, light, cosmic rays, etc.) moves through empty space with the same
velocity at the speed of 299,792 km per second (very close to 3×108 ms-1) and not
significantly less in air. These kinds of energies include some that you will recognize and
some that will sound strange. They include:
 Radio Waves
 TV waves
 Radar waves
 Heat (infrared radiation)
 Light
 Ultraviolet Light (This is what causes Sunburns)
 X-rays (emitted by X-ray tubes)
 Short waves
 Microwaves, like in a microwave oven


 Gamma Rays; gamma rays (emitted by radioactive nuclei) have essentially the same properties as X-rays and differ only in their origin.
All these waves do different things (for example, light waves make things visible to the
human eye, while heat waves make molecules move and warm up, and x rays can pass
through a person and land on film, allowing us to take a picture inside someone's body) but
they have some things in common. They all travel in waves. The fact that electromagnetic radiation travels in waves lets us measure the different kinds by wavelength, or how long the waves are. That is one way we can tell the kinds of radiation apart from each other.
Although all kinds of electromagnetic radiation are released from the Sun, our atmosphere
stops some kinds from getting to us. For example, the ozone layer stops a lot of harmful
ultraviolet radiation from getting to us, and that's why people are so concerned about the hole
in it.
We humans have learned uses for a lot of different kinds of electromagnetic radiation and
have learned how to make it using other kinds of energy when we need to.

Figure 1.3: The electromagnetic spectrum


In general, radiation is a descriptor for energy (in the form of either particles or waves)
travelling through space or another medium. The energy is emitted from the source and
radiates in straight lines and in all directions. If the radiation is an electromagnetic wave, it
will travel at the speed of light. Because of the way the energy is radiated, radiation is
relatively straightforward to detect and measure and inferences can be made about its source.
The properties of the energy emitted will determine the way it interacts with matter (and
living tissue) and therefore its measurement technique and requirements for regulation.
Not all radiation interacts with matter in the same way. There are two forms of radiation:
non-ionizing and ionizing.


1.3.1 Non-Ionizing Radiation


Non-ionizing radiation is radiation that has enough energy to move atoms in a molecule around or cause them to vibrate, but not enough to remove electrons. That means it does not possess enough energy to produce ions. Non-ionizing radiation consists of parts of the electromagnetic spectrum (Figure 1.3), which includes radio waves, microwaves, infra-red, visible and ultraviolet light, together with sound and ultrasound. Cellular telephones, television stations, FM and AM radio, and cordless phones use non-ionizing radiation. Other forms include the earth’s magnetic field, as well as magnetic field exposure from proximity to transmission lines, household wiring and electric appliances. These are defined as extremely low-frequency (ELF) waves and are not considered to pose a health risk. The electromagnetic spectrum also includes ionizing electromagnetic radiation (x and gamma rays).

1.3.2 Ionizing Radiation


Ionizing radiation is a special type of radiation (in the form of either particles or waves) that
has enough energy to remove tightly bound electrons out of their orbits around atoms, thus
creating ions. In other words, ionizing radiation is any kind of radiation capable of removing an orbital electron from an atom with which it interacts; the atom is then said to be ionized. This process is called ionization. Ionizing radiation includes the radiation that comes from both
natural and man-made radioactive materials. Examples of this kind of radiation of interest for
the purpose of this chapter are gamma (γ) and x-rays. Gamma radiation consists of photons
that originate from within the nucleus, and X-ray radiation consists of photons that originate
from outside the nucleus, and are typically lower in energy than gamma radiation. We take
advantage of its properties in diagnostic imaging, to kill cancer cells, and in many
manufacturing processes.
Ionization is the process by which a stable atom or a molecule loses or gains an electron(s),
thereby acquiring an electric charge or changing an existing charge. An atom or molecule
with an electric charge is called an ion, which may behave differently, electrically and
chemically, from a stable atom or molecule. The altered behaviour may lead to new, possibly undesired, molecules, a change in the conductive properties of the material in the vicinity of
the ion, a release of energy, or a combination of these effects. In the human body, these
effects may lead to changes in the structure or behaviour of cells. Therefore, ionizing radiation
has sufficient energy to be able to displace an electron from its orbit around an atom and,
conversely, non-ionizing radiation does not have sufficient energy to displace electrons.
Ionizing radiation can occur in one of two forms: particulate or electromagnetic. Particulate
ionizing radiation is emitted when components of the structure of an atom are ejected,
artificially or naturally.
1.4 Types of Ionizing Radiation
Photon radiation can penetrate very deeply and sometimes can only be reduced in intensity by
materials that are quite dense, such as lead or steel. In general, photon radiation can travel much greater distances than alpha or beta radiation, and it can penetrate bodily tissues and
organs when the radiation source is outside the body. Photon radiation can also be hazardous
if photon-emitting nuclear substances are taken into the body. An example of a nuclear
substance that undergoes photon emission is cobalt-60, which decays to nickel-60. There are
several types of ionizing radiation.
1.4.1 Particle Radiation
Particle radiation consists of a stream of charged or neutral particles, both charged ions and
subatomic elementary particles. This includes solar wind, cosmic radiation, and neutron flux
in nuclear reactors.
1.4.1.1 Alpha Particles
Alpha particles (α), helium nuclei, are the least penetrating. Some unstable atoms emit alpha
particles. Alpha particles are positively charged and made up of two protons and two neutrons
from the atom’s nucleus. Alpha particles come from the decay of the heaviest radioactive
elements, such as uranium, radium and polonium. Even very energetic alpha particles can be
stopped by a single sheet of paper. They are so heavy that they use up their energy over short
distances and are unable to travel very far from the atom.
The health effect from exposure to alpha particles depends greatly on how a person is
exposed. Alpha particles lack the energy to penetrate even the outer layer of skin, so exposure
to the outside of the body is not a major concern. Inside the body, however, they can be very
harmful. If alpha-emitters are inhaled, swallowed, or get into the body through a cut, the alpha
particles can damage sensitive living tissue. The way these large, heavy particles cause
damage makes them more dangerous than other types of radiation. The ionizations they cause are very close together; they can release all their energy in a few cells. This results in more severe damage to cells and DNA.

1.4.1.2 Beta Particles


Beta particles (β) are fast-moving particles with a negative electrical charge. Beta particles (electrons) are emitted from an atom’s nucleus during radioactive decay; they are more penetrating than alpha particles, but can still be absorbed by a few millimeters of aluminum. However, in cases where high-energy beta particles are emitted, shielding must be accomplished with low-density materials, e.g. plastic, wood, water or acrylic glass (Plexiglas, Lucite). They travel farther in air than alpha particles, but can be stopped by a layer of clothing or by a thin layer of a substance such as aluminum. In the case of beta+ radiation (positrons), the gamma radiation
from the electron-positron annihilation reaction poses additional concern. These particles are
emitted by certain unstable atoms such as hydrogen-3 (tritium), carbon-14 and strontium-90.
Beta particles are more penetrating than alpha particles but are less damaging to living
tissue and DNA because the ionizations they produce are more widely spaced. Some beta
particles are capable of penetrating the skin and causing damage such as skin burns. However,
as with alpha-emitters, beta-emitters are most hazardous when they are inhaled or swallowed.


1.4.1.3 Neutron Radiation


Neutron radiation is not as readily absorbed as charged particle radiation, which makes this
type highly penetrating. Neutrons are absorbed by nuclei of atoms in a nuclear reaction. This
most often creates a secondary radiation hazard, as the absorbing nuclei transmute to the next-heavier isotopes, many of which are unstable.
Apart from cosmic radiation, spontaneous fission is the only natural source of neutrons. A
common source of neutrons is the nuclear reactor, in which the splitting of a uranium or
plutonium nucleus is accompanied by the emission of neutrons. The neutrons emitted from
one fission event can strike the nucleus of an adjacent atom and cause another fission event,
inducing a chain reaction. The production of nuclear power is based upon this principle. All
other sources of neutrons depend on reactions where a nucleus is bombarded with a certain
type of radiation (such as photon radiation or alpha radiation), and where the resulting effect
on the nucleus is the emission of a neutron. Neutrons are able to penetrate tissues and organs
of the human body when the radiation source is outside the body. Neutrons can also be
hazardous if neutron-emitting nuclear substances are deposited inside the body. Neutron
radiation is best shielded or absorbed by materials that contain hydrogen atoms, such as
paraffin wax and plastics. This is because neutrons and hydrogen atoms have similar masses and readily collide with each other.
Figure 1.4 summarizes the types of radiation discussed in this chapter, from higher-energy
ionizing radiation to lower-energy non-ionizing radiation. Each radiation source differs in its
ability to penetrate various materials, such as paper, skin, wood and lead.

Figure 1.4: Penetration abilities of different types of ionizing radiation


1.4.2 Types of Electromagnetic Ionizing Radiation
In general, electromagnetic radiation consists of emissions of electromagnetic waves, the
properties of which depend on the wavelength. Ionizing radiation has more energy than non-
ionizing radiation such that it can cause chemical changes by interacting with an atom to
remove tightly bound electrons from the orbit of the atom, causing the atom to become
charged or ionized. The types of ionizing electromagnetic radiation are categorized according
to their wavelength.


1.4.2.1 Gamma Rays


Gamma rays (γ) are weightless packets of energy called photons. Gamma rays have the smallest wavelengths and the highest energy of any wave in the electromagnetic spectrum. Unlike alpha and beta particles, which have both energy and mass,
gamma rays are pure energy. Gamma rays are often emitted along with alpha or beta particles
during radioactive decay and in nuclear explosions.
Gamma rays are a radiation hazard for the entire body. They can easily penetrate barriers,
such as skin and clothing that can stop alpha and beta particles. Gamma rays have so much
penetrating power that several inches of a dense material like lead or even a few feet of
concrete may be required to stop them. Gamma rays can pass completely through the human
body easily; as they pass through, they can cause ionizations that damage tissue and DNA or
kill living cells, a fact which medicine uses to its advantage, using gamma-rays to kill
cancerous cells.
1.4.2.2 X-Rays
Because of their use in medicine, almost everybody has heard of x-rays. X-rays are similar to
gamma rays in that they are photons of pure energy. X-rays and gamma rays have the same
basic properties but come from different parts of the atom. X-rays are emitted from processes
outside the nucleus, but gamma rays originate inside the nucleus. They also are generally
lower in energy and, therefore, less penetrating than gamma rays but have higher energy than
ultraviolet waves. As the wavelength of light decreases, its energy increases. We usually talk about X-rays in terms of their energy rather than wavelength. This is partially because X-rays have very small wavelengths. It is also because X-ray light tends to act more like a
particle than a wave. X-rays can be produced naturally or artificially by machines using
electricity.
Literally thousands of x-ray machines are used daily in medicine. Computerized
tomography, commonly known as CT or CAT scans, uses special x-ray equipment to make
detailed images of bones and soft tissue in the body. Medical x-rays are the single largest
source of man-made radiation exposure. X-rays are also used in industry for inspections and
process controls.
1.4.2.3 Ultraviolet
The dividing line between ionizing and non-ionizing radiation in the electromagnetic
spectrum falls in the ultraviolet portion of the spectrum and while most UV is classified as
non-ionizing radiation, the shorter wavelengths from about 150 nm (UV-C or ‘Far’ UV) are
ionizing. UV-C from the sun is nearly all absorbed by the ozone layer.
1.5 Inverse Square Law for Radiation
Any point source which spreads its influence equally in all directions without a limit to its
range will obey the inverse square law. This comes from strictly geometrical considerations.
The intensity of the influence at any given radius is the source strength divided by the area of the sphere. Being strictly geometric in its origin, the inverse square law applies to diverse
phenomena.
Point sources of gravitational force, electric field, light, sound or radiation obey the inverse
square law. When light is emitted from a source such as the sun or a light bulb, the intensity
decreases rapidly with the distance from the source. X-rays exhibit precisely the same
property. The intensity of the radiation is inversely proportional to the square of the
distance from a point source (see figure 1.6).

Figure 1.6: The inverse square law applying to a point source.


As intensity is the power per unit area,

Intensity = Power / Area

it naturally decreases with the square of the distance as the size of the radiative spherical wave front increases with distance. So, the intensity I on a spherical surface at a distance r from a source radiating a total power P is:

I = P / (4πr²)

As P and 4π remain constant, the intensity is inversely proportional to the square of the distance:

I ∝ 1/r²

Thus, if we double the distance to a light source the observed intensity is decreased to (1/2)² = 1/4 of its original value. Generally, the ratio of the intensities I1 and I2 at distances r1 and r2 is:

I1/I2 = (r2/r1)²
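A short Python sketch of the inverse square law (ours, for illustration):

```python
import math

def intensity(power_watts, distance_m):
    """Intensity (W/m^2) at a given distance from an isotropic point source."""
    return power_watts / (4 * math.pi * distance_m ** 2)

# Doubling the distance cuts the intensity to a quarter:
i1 = intensity(100.0, 1.0)  # 100 W source at 1 m
i2 = intensity(100.0, 2.0)  # same source at 2 m
print(i1 / i2)              # 4.0 -> I1/I2 = (r2/r1)^2
```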

1.6 Properties Considered When Ionizing Radiation Is Measured


Ionizing radiation is measured in terms of:
 the strength or radioactivity of the radiation source,
 the energy of the radiation,
 the level of radiation in the environment, and
 the radiation dose or the amount of radiation energy absorbed by the
human body.
From the point of view of the occupational exposure, the radiation dose is the most important
measure. Occupational exposure limits like the ACGIH TLVs are given in terms of the
permitted maximum dose. The risk of radiation-induced diseases depends on the total
radiation dose that a person receives over time.
1.7 Radiologic Units
There are five units customarily used to measure radiation:
1.7.1 Roentgen (R)
The roentgen (R or r) is the unit of exposure (or intensity) of electromagnetic radiation. It is equal to the radiation intensity that will create 2.08 × 10^9 ion pairs in a cubic centimeter of air, that is:

1 R = 2.08 × 10^9 ion pairs/cm³

The official definition, however, is in terms of electric charge per unit mass of air:

1 R = 2.58 × 10^-4 C/kg

The charge refers to the electrons liberated by ionization. The output of x-ray machines is specified in roentgens or sometimes milliroentgens (mR). The roentgen applies only to x-rays and gamma rays and their interactions with air.
1.7.2 Rad
The rad (radiation absorbed dose) is used to measure the amount of radiation absorbed by an object or person; it reflects the amount of energy that radioactive sources deposit in materials through which they pass. The radiation absorbed dose (rad) is the amount of energy (from any type of ionizing radiation) deposited in any medium (e.g., water, tissue, air). Biologic effects usually are related to the radiation absorbed dose, and therefore the rad is the unit most often used when describing the radiation quantity received by a patient or an experimental animal. The rad is used for any type of ionizing radiation and any exposed matter, not just air. An absorbed dose of 1 rad means that 1 gram of material absorbed 100 ergs of energy (a small but measurable amount) as a result of exposure to radiation.

1 rad = 100 erg/g = 10^-2 Gy

where the erg (joule) is a unit of energy and the gram (kilogram) is a unit of mass. The related international system unit is the gray (Gy), where 1 Gy is equivalent to 100 rad.

1.7.3 Rem
The rem (Roentgen equivalent man) is the traditional unit of dose equivalent (DE) or
occupational exposure. It is used to express the quantity of radiation received by radiation
workers. Some types of radiation produce more damage than x-rays. The rem accounts for
these differences in biologic effectiveness. This is particularly important to persons working near nuclear reactors or particle accelerators.
1.7.4 Curie
The curie (Ci) is the original unit used to express the decay rate of a sample of radioactive material. The curie is equal to that quantity of radioactive material (not the radiation emitted by that material) in which the number of atoms decaying per second is equal to 37 billion (3.7 × 10^10). In other words, one curie is that quantity of material in which 3.7 × 10^10 atoms disintegrate every second (3.7 × 10^10 becquerel, Bq). It was based on the rate of decay of atoms within one gram of radium. It is named for Marie and Pierre Curie, who discovered radium in 1898. The curie is the basic unit of radioactivity used in the system of radiation units in the United States, referred to as "traditional" units. The becquerel (Bq) or curie (Ci) is a measure of the rate (not energy) of radiation emission from a source.
1.7.5 Electron Volt
The electron volt (eV) is the amount of energy gained by the charge of a single electron moved across an electric potential difference of one volt. A more fundamental unit of energy is the joule (J). That means a particle with charge q has energy qV after passing through the potential V. Therefore, one electron volt is equal to 1.602 × 10^-19 J. The energy of an x-ray is measured in electron volts or, more often, thousands of electron volts (keV). An electron that is accelerated by an electric potential of one volt will acquire an energy of one eV. Most x-rays used in diagnostic radiology have energies up to 150 keV, whereas those in radiotherapy are measured in MeV. Other radiologically important energies, such as electron and nuclear binding energies and mass-energy equivalence, are also expressed in eV.
Note: because diagnostic radiology is concerned primarily with x-rays, for our purposes we may consider:

1 R ≈ 1 rad ≈ 1 rem

With other types of ionizing radiation this generalization is not true.


Table 1.3: The special quantities of radiologic science and their associated special units

Quantity          Customary unit (name, symbol)   SI unit (name, symbol)
Exposure          roentgen, R                     coulomb per kilogram, C/kg
Absorbed dose     rad, rad                        gray, Gy
Dose equivalent   rem, rem                        sievert, Sv
Radioactivity     curie, Ci                       becquerel, Bq
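The conversions between the customary and SI units in Table 1.3 are fixed factors; a small Python sketch (ours) makes them concrete:

```python
# Conversion factors between customary and SI radiologic units (Table 1.3).
R_TO_C_PER_KG = 2.58e-4  # exposure: 1 R = 2.58e-4 C/kg
RAD_TO_GY = 0.01         # absorbed dose: 100 rad = 1 Gy
REM_TO_SV = 0.01         # dose equivalent: 100 rem = 1 Sv
CI_TO_BQ = 3.7e10        # radioactivity: 1 Ci = 3.7e10 Bq

print(f"5 R     = {5 * R_TO_C_PER_KG:.2e} C/kg")
print(f"250 rad = {250 * RAD_TO_GY} Gy")
print(f"500 rem = {500 * REM_TO_SV} Sv")
print(f"30 mCi  = {30e-3 * CI_TO_BQ:.2e} Bq")
```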

1.8 Practical Units


A dose of radiation is received when radiation’s energy is deposited in the body’s tissues. The more energy deposited into the body, the higher the dose. For the purpose of radiation
protection, dose quantities are expressed in three ways: absorbed, equivalent, and effective.
The practical units in everyday use are described below.
1.8.1 Absorbed Dose
When ionizing radiation penetrates the human body or an object, it deposits energy. The
fundamental units do not take into account the amount of damage done to matter (especially
living tissue) by ionizing radiation. This is more closely related to the amount of energy
deposited rather than the charge. The energy absorbed from exposure to radiation is called an
absorbed dose. The absorbed dose is measured in a unit called the gray (Gy).
The gray (Gy), with units J/kg, is the SI unit of absorbed dose, which represents the
amount of radiation required to deposit 1 joule of energy in 1 kilogram of any kind of matter.
The rad (radiation absorbed dose) is the corresponding traditional unit, which is 0.01 J deposited per kg; 100 rad = 1 Gy.
1.8.2 Equivalent Dose
When radiation is absorbed in living matter, a biological effect may be observed. However,
equal doses of different types or energies of radiation cause different amounts of damage to
living tissue. For example, 1 Gy of alpha radiation causes about 20 times as much damage as
1 Gy of X-rays, and is more harmful to a given tissue than 1 Gy of beta radiation. Therefore, the
equivalent dose was defined to give an approximate measure of the biological effect of
radiation. To obtain the equivalent dose, the absorbed dose is multiplied by a specified
radiation weighting factor (WR), which is different for each type of radiation. This weighting
factor is also called the Q (quality factor), or RBE (Relative Biological Effectiveness of the
radiation). The equivalent dose provides a single unit that accounts for the degree of harm that
different types of radiation would cause to the same tissue. The equivalent dose is expressed
in a measure called the sievert (Sv).
The weighted absorbed dose is called the equivalent dose:

Equivalent dose (H) = absorbed dose (D) × radiation weighting factor (WR)

where:

 The sievert (Sv) is the SI unit of equivalent dose. Although it has the same units as the
gray, J/kg, it measures something different. For a given type and dose of radiation(s)
applied to a certain body part(s) of a certain organism, it measures the magnitude of an X-ray or gamma radiation dose applied to the whole body of the organism such that the probabilities of the two scenarios inducing cancer are the same according to current statistics.
 1 sievert = 100 rem. Because the rem is a relatively large unit, typical equivalent doses are measured in millirem (mrem), 10^-3 rem, or in microsievert (μSv), 10^-6 Sv. 1 mrem = 10 μSv.
 A unit sometimes used as a measure of low level radiation exposure is the BRET
(Background Radiation Equivalent Time, BRET). This is the number of days of an average
person's background radiation exposure the dose is equivalent to. That means one BRET is the equivalent of one day's worth of average human exposure to background radiation. This
unit is not standardized, and depends on the value used for the average background
radiation dose.
For comparison, the average 'background' dose of natural radiation received by a person per day (one BRET) is 6.6 μSv (660 μrem). However local exposures vary, with the yearly average
in the US being around 3.6 mSv (360 mrem), and in a small area in India as high as 30 mSv (3
rem). The lethal full-body dose of radiation for a human is around 4–5 Sv (400–500 rem).
The health hazards of low doses of ionizing radiation are unknown and controversial,
because the effects, mainly cancer and genetic damage, take many years to appear, and the
incidence due to radiation exposure can't be statistically separated from the many other causes
of these diseases. The purpose of the BRET measure is to allow a low level dose to be easily
compared with a universal yardstick: the average dose of background radiation, mostly from
natural sources, that every human unavoidably receives during daily life. Background
radiation level is widely used in radiological health fields as a standard for setting exposure
limits. Presumably, a dose of radiation which is equivalent to what a person would receive in
a few days of ordinary life will not increase his rate of disease measurably.


The BRET corresponding to a dose of radiation is the number of days of average background dose it is equivalent to. It is calculated from the equivalent dose in sieverts by dividing by the average annual background radiation dose in Sv, and multiplying by 365:

BRET (days) = (equivalent dose in Sv / average annual background dose in Sv) × 365
The definition of the BRET unit is apparently unstandardized, and depends on what value is
used for the average annual background radiation dose, which differs in different countries
and regions. The United Nations Scientific Committee on the Effects of Atomic Radiation
(UNSCEAR 2000) estimate for worldwide background radiation dose is 2.4 mSv (240 mrem).
Using this value each BRET unit equals 6.6 μSv. BRET values range from 2 BRET for a
dental x-ray to around 400 for a barium enema study.
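A Python sketch of this calculation (ours), using the UNSCEAR 2000 figure quoted above; the 13 µSv dental dose is an illustrative value chosen to match the "2 BRET" figure in the text:

```python
ANNUAL_BACKGROUND_SV = 2.4e-3  # UNSCEAR 2000 worldwide average, 2.4 mSv/year

def bret_days(dose_sv):
    """Days of average background radiation equivalent to a given dose."""
    return dose_sv / ANNUAL_BACKGROUND_SV * 365

print(bret_days(13e-6))  # a ~13 uSv dental x-ray -> about 2 BRET
```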

1.8.3 Effective Dose


Different tissues and organs have different radiation sensitivities (see Figure 1.7). For
example, bone marrow is much more radiosensitive than muscle or nerve tissue. To obtain an
indication of how exposure can affect overall health, the equivalent dose is multiplied by a
tissue weighting factor (WT) related to the risk for a particular tissue or organ. This
multiplication provides the effective dose absorbed by the body. The unit used for effective
dose is also the sievert.
Effective dose is not a real physical quantity, but is a "manufactured" quantity invented by
the International Commission on Radiological Protection (an international scientific group). It
is calculated by multiplying actual organ doses by "risk weighting factors" (which gives each
organ's relative radiosensitivity to developing cancer) and adding up the total of all the numbers; the sum of the products is the "effective whole-body dose" or just "effective dose."
These weighting factors are designed so that this "effective dose" supposedly represents the
dose that the total body could receive (uniformly) that would give the same cancer risk as
various organs getting different doses.
If several tissues T1, T2, T3, etc., individually receive equivalent doses H1, H2, H3, etc., then the total risk to the individual should not exceed that resulting from the stipulated dose limit for uniform whole-body irradiation. Depending on the extent to which the risk from stochastic effects in a tissue or organ may contribute to the total risk from stochastic effects, a weighting factor WT is assigned to each tissue or organ. Thus, the effective dose E is defined as:

E = Σ (WT × HT)

where WT is the tissue weighting factor and HT is the equivalent dose to tissue T.


For practical purposes, one gray and one sievert are essentially equal, and the roentgen, rad and rem are equivalent.

Figure 1.7: Tissue weighting factors

For example, if someone’s lungs and thyroid are exposed separately to radiation, and the
equivalent doses to the organs are 2 mSv (they have a weighting factor of 0.12) and 1 mSv (it
has a weighting factor of 0.05) respectively.
The effective dose is: (2 mSv × 0.12) + (1 × 0.06) = 0.3 mSv.
The risk of harmful effects from this radiation would be equal to a 15.5 mSv dose delivered
uniformly throughout the whole body. This model says that the cancer risk from the whole
body getting 0.3 mSv uniformly is the same as the lungs getting 2 mSv and the thyroid getting
1 mSv (and no other organ getting a significant dose).
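The same computation as a short Python sketch (ours; the doses and weighting factors are those of the example above):

```python
# Effective dose E = sum over tissues of (tissue weighting factor * equivalent dose).
def effective_dose(doses_msv, weights):
    """doses_msv and weights are dicts keyed by tissue name."""
    return sum(weights[t] * doses_msv[t] for t in doses_msv)

doses = {"lungs": 2.0, "thyroid": 1.0}  # equivalent doses in mSv
w_t = {"lungs": 0.12, "thyroid": 0.05}  # tissue weighting factors

print(effective_dose(doses, w_t))  # 0.29 mSv
```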
Figure 1.8 presents an overview of the relationship between effective, equivalent and
absorbed doses.


For each organ or tissue estimate the


ABSORBED DOSE
Energy "deposited" in a kilogram of a substance by radiation
in mGy

Multiply by the
RADIATION WEIGHTING FACTOR
"WR" for the radiation used
Absorbed dose weighted for susceptibility to effect of different
radiations

And so obtain the


EQUIVALENT DOSE
to the organ or tissue in mSv

Multiply by the
TISSUE WEIGHTING FACTOR
"WT" for the tissue or organ concerned
Equivalent dose weighted for susceptibility to effect of different tissues

Sum over all the organs and tissues


irradiated

And so obtain the


EFFECTIVE DOSE
to the patient in mSv

Figure 1.8: Relationship between effective, equivalent and absorbed doses

CHAPTER 2

PRODUCTION OF X-RAYS

Rationale: Interaction processes of electrons with the target material in the x-ray tube are very important subjects to be studied in order to gain knowledge of the mechanisms of interaction that allow control of the quantity and quality of the x-rays.

Mahmood & Haider

Performance Objectives

After studying chapter two, the student will be able to:

1. Explain the method of production of x-rays, with the ability to draw a block diagram of the x-ray tube.
2. Mention the factors which affect the x-ray emission spectrum.
3. Define bremsstrahlung.
4. State the processes which occur in the target of an x-ray tube.
5. Determine the mechanism of interaction with the K-shell.
6. Compare and contrast the characteristic radiation spectrum with the continuous spectrum.
7. State and explain the inverse square law.

CHAPTER TWO: PRODUCTION OF X-RAYS


CHAPTER CONTENTS
2.1. Basic Requirements for Production of X-Rays
2.1.1. Supply of Electrons
2.1.2. Movement of the Electrons
2.2. Components and Properties of an X-Ray Tube
2.2.1. Cathode
2.2.2. Anode
2.2.3. Processes Occurring in the Anode of an X-ray Tube
2.3. X-ray Generator Options
2.3.1. Kilovoltage
2.3.2. Focal Spot
2.4. Inherent Filtration
2.5. Cooling Requirements
2.6. Production of X-rays
2.7. The X-Ray Tube
2.8. The Origin of Characteristic X-rays
2.9. Continuous X-Ray Spectrum
2.10. Characteristic X-Ray Spectrum
2.11. Controlling the X-Ray Spectrum
2.12. Effects of Voltage and Amperage on X-Ray Production
2.12.1. Effect of Voltage
2.12.2. Effect of Amperage

2.1 Basic Requirements for Production of X-Rays


X-rays are produced when some form of matter is struck by a rapidly moving electron. To
accomplish this, three basic requirements must be met.
2.1.1 Supply of Electrons
There must be a supply of electrons. Fortunately, they can be supplied by simply raising the temperature of a suitable material. An electron source is readily obtainable inasmuch as all matter is generally considered to be composed of electrons and other minute particles. All that is necessary is to sufficiently heat the proper material. As the temperature rises, the
electrons become more and more agitated until finally they escape or “boil off” the material,
surrounding it in the form of an electron cloud. This is known as thermionic emission. In an
X-ray tube the heated material is known as the filament, which is similar to the filament in a
light bulb. Just as in a light bulb the filament is heated by passing electrical current through it.
This cloud of electrons simply hovers around and returns to the emitting substance unless
some external action or force pulls it away (see figure 2.1).


Figure 2.1: A cloud of electrons surrounds the heated filament (cathode) inside the evacuated glass envelope, opposite the anode.

2.1.2 Movement of the Electrons


Movement of the emitted electrons is the second step in producing X-rays. This movement is
brought about by the repelling and attracting forces inherent in electrical charges. The
fundamental law of electrostatics states that like charges repel each other and unlike charges
attract each other. Electrons are negative charges and thus repel each other. However, a stronger attracting force is needed to accelerate the electrons to a higher velocity. Therefore, a strong
opposite (positive) charge is used to move the electrons from one point to another. It is
important that this movement is conducted in a good vacuum; otherwise the electrons collide
with air molecules and lose energy through ionization and scattering. In an X-ray tube the
anode is given a positive charge with respect to the filament, which is part of the cathode.
2.2 Components and Properties of an X-Ray Tube
An x-ray tube consists of two electrodes sealed into an evacuated glass envelope.
 A negative electrode (cathode) which incorporates a fine tungsten coil or filament.
 A positive electrode (anode) which incorporates a smooth flat metal target, usually of
tungsten.
Traditionally the tube has been a glass envelope with a reduced thickness at the window, the
point where the x-rays exit, to reduce x-ray absorption. The high vacuum reduces the problem
of the electrons colliding with, and being absorbed by, molecules of air and provides electrical
insulation between the cathode and anode. In some designs a beryllium window is
incorporated to further reduce absorption of the x-ray beam, particularly the lower energies. In
many applications glass envelopes are being replaced by metal-ceramic envelopes. These
tubes usually involve a metal cylinder with a ceramic disk at each end to hold and insulate the
cathode and anode assemblies. The metal-ceramic tube is more durable than the glass tube
and is less susceptible to thermal and mechanical shock.
2.2.1 Cathode
A structure known as the cathode serves as the electron source. Actually, it is a filament or
coil of thoriated (thorium oxide, ThO2) tungsten wire that emits electrons when heated to a high temperature. But because the filament gives off electrons in all directions, some means
must be used to focus them on a target. A reflector or focusing cup within the cathode
structure, into which the filament is centered, serves to focus the electron beam much as light
is focused by a flashlight reflector.
2.2.2 Anode
As mentioned previously, there must be a target for the electron beam to strike before X-rays
are actually produced. In radiographic tubes the target material is generally made of tungsten.
The choice of tungsten as a target for industrial radiography is based on four material
characteristics:
1. High atomic number (74). The higher the atomic number of a material the more
efficient is the conversion from electrical energy into X-ray energy.
2. High melting point (3422 °C, about 6192 °F). Most of the energy in the electrons bombarding the target is dissipated in the form of heat. The extremely high melting point of tungsten permits operation of the target at very high temperatures.
3. High thermal conductivity. Permits rapid removal of heat from the target, allowing
maximum energy input for a given area size.
4. Low vapor pressure. This reduces the amount of target material vaporized during
operation.
The tungsten target material is usually imbedded into a massive copper rod. Copper is an
excellent thermal conductor and is used to remove the heat from the target for dissipation by
air, oil, or water cooling, depending on tube design and operation. The target and its copper
support are the anode. To produce x-rays it must be at a positive potential (voltage) with
respect to the cathode in order to attract the electrons available at the cathode.

2.2.3 Processes Occurring in the Anode of an X-Ray Tube


Each electron arrives at the surface of the target with a kinetic energy (in kiloelectronvolts)
equivalent to the kV between the anode and cathode at that instant. The electrons penetrate
several micrometers into the target and lose their energy by a combination of processes:
- as a large number of very small energy losses, by interaction with the outer electrons
of the atoms; constituting unwanted heat and causing a rise of temperature.
- as large energy losses producing X-rays, by interaction with either the inner shells of
the atoms or the field of the nucleus.
2.3 X-Ray Generator Options
2.3.1 Kilovoltage
X-ray generators come in a large variety of sizes and configurations. There are stationary
units that are intended for use in lab or production environments and portable systems that can
be easily moved to the job site. Systems are available in a wide range of energy levels. When
inspecting large steel or heavy metal components, systems capable of producing millions of
electron volts may be necessary to penetrate the full thickness of the material. Alternately, small, lightweight components may only require a system capable of producing only a few
tens of kilovolts.
2.3.2 Focal Spot
Another important consideration is the focal spot size of the tube, since this factors into the geometric unsharpness of the image produced. The focal spot is the area of the target that is
bombarded by the electrons from the cathode. The shape and size of the focusing cup of the
cathode and the length and diameter of the filament all determine the size and shape of the
focal spot. The size of the focal spot has a very important effect upon the quality of the x-ray
image. Generally, the smaller the focal spot the better the detail of the image. But as the
electron stream is focused to a smaller area, the power of the tube must be reduced to prevent
overheating at the tube anode. Therefore, the focal spot size becomes a tradeoff of resolving
capability and power. Generators can be classified as a conventional, minifocus, and
microfocus system. Conventional units have focal-spots larger than about 0.5 mm, minifocus
units have focal-spots ranging from 50 microns to 500 microns (0.050 mm to 0.5 mm), and
microfocus systems have focal-spots smaller than 50 microns. Smaller spot sizes are
especially advantageous in instances where the magnification of an object or region of an
object is necessary. The cost of a system typically increases as the spot size decreases and
some microfocus tubes exceed $100,000. Some manufacturers combine two filaments of
different sizes to make a dual-focus tube. This usually involves a conventional and a
minifocus spot-size and adds flexibility to the system.
The electron stream from the filament is focused as a narrow rectangle on the anode target.
The typical target face is made at an angle of about 20 degrees to the cathode. When the
rectangular focal spot is viewed from below, in the position of the film, it appears more nearly
a small square. Thus, effective area of the focal spot is only a fraction of its actual area. By
using the X-rays that emerge at this angle, a small focal spot is created, improving
radiographic definition. Because the electron stream is spread over a greater area of the target,
heat dissipation by the anode is improved.
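The line-focus principle described above can be put into numbers: the effective focal-spot length is the actual length foreshortened by the sine of the target angle. A Python sketch (ours; the 4 mm strike length is a hypothetical value, the 20-degree angle is the one quoted above):

```python
import math

def effective_focal_length(actual_length_mm, target_angle_deg):
    """Effective focal-spot length seen from the film (line-focus principle)."""
    return actual_length_mm * math.sin(math.radians(target_angle_deg))

# A 4 mm long electron strike on a 20-degree target looks ~1.4 mm long
# from the film position, while the heat is spread over the full 4 mm.
print(effective_focal_length(4.0, 20.0))  # ~1.37 mm
```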
2.4 Inherent Filtration
Inherent filtration is the filtration provided by the components of the x-ray tube itself. Its primary purpose is to reduce the number of low-energy x-rays that reach the patient. The target itself, the glass wall of the tube, and the material necessary to provide the vacuum and mechanical rigidity, collectively referred to as the inherent filtration, substantially absorb the lower-energy photons. There is therefore a
low-energy cut-off, at about 20 keV, as well as a maximum energy. Low-energy X-rays
contribute nothing to diagnostic quality and serve only to increase patient dose unnecessarily
because they are absorbed in superficial tissues and do not penetrate to reach the film. The
latter depends only on the kVp and the former on the filtration added to the tube. Peak kilovoltage (kVp) is the maximum voltage applied across an X-ray tube. It determines the
kinetic energy of the electrons accelerated in the X-ray tube and the peak energy of the X-ray
emission spectrum. The actual voltage across the tube may fluctuate.


In construction of some glass x-ray tubes, the port is reduced in thickness to provide less
inherent filtration. In some other tubes the port is made of beryllium which is a light metal of
low atomic number and low x-ray absorption. Because of tremendous pressures exerted by the
atmosphere on large evacuated containers, x-ray ports must be designed with sufficient
thickness to withstand these pressures without implosion. In center-grounded x-ray
equipment, it is also necessary to provide gas (e.g., sulfur hexafluoride, SF6) and solid
insulation for electrical isolation of the x-ray tube. Excessive inherent filtration reduces the x-
ray output as well as the radiographic contrast on equipment of a given rating.
X-ray machines have metal filters positioned in the useful beam. In normal practice it is acceptable to tolerate inherent filtration equivalent to 1 mm of aluminum up to 100 kVp (kilovolts peak); 3 mm of aluminum up to 175 kVp; 5 mm of aluminum equivalent up to 250 kVp; and higher filtration in 1,000 to 2,000 kVp units. Inherent filtration above these
tolerances reduces contrast, and hence, sensitivity of radiographic inspection, and as a result,
limits the sensitivity of inspection, especially on thin sections and light alloys. For this reason,
during radiographic inspections using kilovoltage of 150 or less, the tube head shall be
configured so that generated radiation will travel from the target through a beryllium window
without passing through any media other than air or insulating gas.
2.5 Cooling Requirements
The product of the tube current (mA) and kV equals watts of electrical power in the electron beam striking the X-ray target. One watt of electrical power is equal to one volt-ampere. Therefore, in an X-ray tube operating at 10 mA (or 0.01 amperes) and 140 kV (140,000 volts), 1400 watts of electrical power are in the electron beam. Only a very small amount of the energy in the electron beam is converted into X radiation. This ranges from about 0.05 percent at 30 kV to approximately 10 percent in the megavolt energy range. Most of the electron beam energy is
converted into heat. This generation of heat in the X-ray tube target material is one of the
limiting factors in the capabilities of the X-ray tube. It is necessary to remove this heat from
the target as rapidly as possible. Various techniques are used for removal of heat. In some
instances, the target is comparatively thin, and suitable oil is circulated on the back surface to
remove heat. Others (where the anode is being operated at ground potential) use water-
antifreeze mixture to conduct heat away from the target. Most X-ray targets are mounted in
copper, using the copper as a heat sink. Some units have no external method of heat removal,
but depend upon heat dissipation into the atmosphere by fins of a thermal radiator. Some
totally enclosed tubes depend upon the heat storage capacity of the anode structure to absorb
the heat generated during X-ray exposure. This heat is then dissipated after the unit is turned
off. These units usually have a duty cycle as a limiting factor of operation that is dependent
upon the heat storage capacity of the anode structure and the rate of heat dissipation by
thermal radiation. The rate of heat removal from the X-ray target is the primary limiting factor
in X-ray tube operation.
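
As a quick numerical check of the heat problem, here is a minimal sketch in Python using the operating point quoted above (10 mA at 140 kV) and assuming the roughly 1% conversion efficiency cited later in this chapter for the diagnostic range:

# Sketch: electron-beam power and heat load in an x-ray tube.
tube_current_A = 0.010        # 10 mA
tube_voltage_V = 140_000.0    # 140 kV

beam_power_W = tube_current_A * tube_voltage_V    # 1400 W in the electron beam
xray_fraction = 0.01          # assumed ~1% converted to x-rays (diagnostic range)
heat_W = beam_power_W * (1 - xray_fraction)       # ~1386 W must be removed as heat
print(f"beam power = {beam_power_W:.0f} W, heat load = {heat_W:.0f} W")

Nearly all of the 1400 W therefore appears as heat in the target, which is why the cooling methods above are so important.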


2.6 Production of X-Rays


Over a century ago, in 1895, Wilhelm Roentgen discovered the first example of ionizing
radiation, X-rays. The key to Roentgen's discovery was a device called a Crookes tube, which
was a glass envelope under high vacuum, with a wire element at one end forming the cathode,
and a heavy copper target at the other end forming the anode. When a high voltage was
applied to the electrodes, electrons formed at the cathode would be pulled towards the anode
and strike the copper with very high energy. Roentgen discovered that very penetrating
radiations were produced from the anode, which he called x-rays.
X-rays are produced whenever electrons of high energy strike a heavy metal target, like
tungsten or copper. When electrons hit this material, some of them approach the nucleus of
the metal atoms, where they are deflected because of the attraction between their opposite
charges (electrons are negative and the nucleus is positive). This deflection causes the energy
of the electron to decrease, and the lost energy is emitted as an x-ray.
Medical X-ray machines in hospitals use the same principle as the Crookes tube to
produce X-rays. The most common x-ray machines use tungsten as their target, and have
very precise electronics so that the amount and energy of the X-rays produced are optimum for
making images of bones and tissues in the body.

2.7 The X-Ray Tube


X-rays are produced in X-ray tubes such as the one shown in Figure 2.2. X-rays are produced
when fast moving electrons are suddenly stopped by impact on a metal target. The high-
voltage source is typically of the order of 10³ to 10⁶ volts. The filament (cathode) is heated to
incandescence and emits electrons by the process of thermionic emission. At such high
temperatures (about 2200 °C) the atomic and electronic motion in a metal is sufficiently violent
to enable a fraction of the free electrons to leave the surface, despite the net attractive pull of
the lattice of positive ions. These electrons are then repelled by the negative cathode and
attracted by the positive anode. The electrons are accelerated by the high voltage source toward
a solid target, called the anode, which is made of a metal of high atomic weight like tungsten,
or metals like copper and molybdenum. Because of the vacuum they are not hindered in any
way, and bombard the target with a velocity around half the speed of light.
The kinetic energy of the electrons is converted into X-rays (about 1%) and into heat (about
99%). Inside the tube, the electrons have only a small probability of collision with air
molecules, since the gas pressure inside the tube is of the order of 0.01 Pa or 10⁻⁷ atm. After
reaching the target, the electrons transfer energy in the form of electromagnetic radiation. This
radiation is called X-rays. Since the incoming electron beam energy is of the order of 100 keV,
the electromagnetic radiation can be very energetic.
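
The figure of "around half the speed of light" can be checked with a short relativistic calculation; this sketch assumes a 100 kV tube, so each electron gains 100 keV of kinetic energy:

import math

KE_keV = 100.0     # kinetic energy gained crossing an assumed 100 kV tube
mc2_keV = 511.0    # electron rest energy
gamma = 1.0 + KE_keV / mc2_keV          # from KE = (gamma - 1) * m * c^2
beta = math.sqrt(1.0 - 1.0 / gamma**2)  # v/c
print(f"v/c at {KE_keV:.0f} keV = {beta:.2f}")   # ~0.55, about half the speed of light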


[Figure 2.2: Fundamentals of the X-ray tube, showing the high-voltage cables (80 - 140 kV), cathode, anode, rotor, bearings and stator windings, the evacuated envelope surrounded by oil and lead shielding, and the emerging x-ray beam.]

2.8 The Origin of Characteristic X-Rays


When a sample is bombarded by an electron beam, some electrons are knocked out of their
shells in a process called inner-shell ionization. About 0.1% of the electrons produce K-shell
vacancies; most produce heat. Outer-shell electrons fall in to fill a vacancy in a process of
self-neutralization. The energy required to produce inner-shell ionization is termed the
excitation potential or critical ionization potential.

The production of "characteristic" X-rays by electron bombardment of pure elements was
first observed in 1909 by Charles G. Barkla (1877-1944) and C.A. Sadler. However, the
physical origin of X-rays was not clear at the time. Barkla received the Nobel Prize in 1917.

The maximum photon energy can be understood from the kinetic energy, K, of the incoming
electron, which corresponds to the potential energy acquired in crossing the potential
difference V between the cathode and the target:

K = eV

No more energy is available for the outgoing x-ray. The quantization of this radiation implies
that the x-ray photon has energy

E = hf ≤ eV

from which the maximum frequency is

f(max) = eV / h


The X-ray output can be seen as a continuous spectrum of radiation with two peaks of well
defined wavelength superimposed on it. These two spectra are called the Continuous X-Ray
Spectrum and the Characteristic X-Ray Spectrum.
2.9 Continuous X-Ray Spectrum
As depicted in figure 2.3, a projectile electron may penetrate the K-shell and approach close
to the nucleus. It approaches fast and leaves less quickly, losing some or all of its kinetic
energy as it passes close to the nucleus; its path is diverted and its motion is slowed.
The lost energy is carried away as a single photon of X-rays or bremsstrahlung (literally,
'braking radiation'). Except in mammography, 80% or more of the X-rays emitted by a
diagnostic X-ray tube are bremsstrahlung.
The continuous spectrum corresponds to the entire radiation spectrum ignoring the two
well defined peaks. In this case the incoming electrons provide a limited amount of energy to
the target atoms. This comparatively small amount of energy is carried by the emitted
radiation in the form of X-ray photons.
If K is the initial kinetic energy of the incident electron and ΔK is the change in energy of
the electron (see figure 2.3), the X-ray photon has an energy given by hf = ΔK, where h is the
Planck constant and f is the frequency of the emitted X-ray photon.
The most energetic photon (greatest frequency) corresponds to the case in which the total
incident kinetic energy of the electron, K = eV, is transferred into the X-ray photon, from
which the cutoff wavelength can be calculated:

λ(min) = hc / eV
The cutoff wavelength is totally independent of the target material. However, the other
characteristics of the spectrum depend on the target material.
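
A short numerical sketch of this cutoff, using the handy relation E (keV) = 12.4 / λ (Å) that appears again in chapter 3 (the kVp values chosen here are only illustrative):

def cutoff_wavelength_angstrom(kVp):
    # Duane-Hunt limit: lambda_min = hc / eV = 12.4 / kVp (angstroms)
    return 12.4 / kVp

for kVp in (35, 60, 100, 140):
    print(f"{kVp:3d} kVp -> lambda_min = {cutoff_wavelength_angstrom(kVp):.3f} angstroms")

At 35 kVp this gives λmin ≈ 0.354 Å (about 35×10⁻¹² m), consistent with where the molybdenum spectrum of figure 2.5 begins.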
Very rarely, an electron arriving at the target is immediately and completely stopped in this
way and produces a single photon of energy equivalent to the kVp. This is the largest photon
energy that can be produced at this kilovoltage. Figure 2.3 illustrates x-ray production by
energy conversion: events 1, 2, and 3 depict incident electrons interacting in the vicinity of
the target nucleus, resulting in bremsstrahlung production caused by the deceleration and
change of momentum, with the emission of a continuous energy spectrum of x-ray photons.
It is more likely that the bombarding electron first loses some of its energy as heat and
then, when it interacts with the nucleus, it loses only part of its remaining energy, with
emission of bremsstrahlung of lower photon energy.


[Figure 2.3: Production of bremsstrahlung. Incident electrons are scattered near target nuclei: (1) impact with the nucleus (maximum energy); (2) close interaction (moderate energy); (3) distant interaction (low energy).]

The X-rays may be emitted in any direction (although mainly sideways to the electron beam)
and with any energy up to the maximum. A plot of the relative number of photons at each
photon energy (in kiloelectronvolts) shows that the bremsstrahlung forms a continuous
spectrum. The maximum photon energy (in kiloelectronvolts) is numerically equivalent to the kVp.

2.10 Characteristic X-Ray Spectrum


Projectile electrons may interact with an inner-shell electron of the target rather than an outer-
shell electron. As depicted in figure 2.4, when a projectile electron from the filament collides
with an electron in the K-shell of an atom, an electron will be ejected from the atom, provided
that the energy of the bombarding electron is greater than the binding energy of the shell.
The hole so created in the K-shell is most likely to be filled by an electron falling in from
the L-shell, with the emission of a single X-ray photon of energy equal to the difference in the
binding energies of the two shells, EK − EL. The photon is referred to as Kα radiation.
Alternatively, but less likely, the hole may be filled by an electron falling in from the M-shell,
with the emission of a single X-ray photon of energy EK − EM, referred to as Kβ radiation.


[Figure 2.4: Energy-level diagram for molybdenum and the production of characteristic radiation. An incident electron ejects an inner-shell electron, leaving a "hole"; the vacancy is filled from an outer shell (K, L, M, N, O; n = 1, 2, 3, 4, ...), with emission of a characteristic x-ray photon such as Kα. The energy scale runs from 0 down to about −25 keV, with the K shell (n = 1) near −20 keV.]
In the case of the usual target material, tungsten (Z = 74), the binding energies are
approximately EK = 70 keV, EL = 11 keV and EM = 2.5 keV.
Thus, the Kα radiation has photon energy

EK − EL ≈ 59 keV

and the Kβ radiation has photon energy

EK − EM ≈ 67 keV


There is also L-radiation, produced when a hole created in the L-shell is filled by an electron
falling in from farther out. Even in the case of tungsten these photons have only about 8-12 keV
of energy, insufficient to leave the X-ray tube assembly, and so they play no part in radiology.
The X-ray photons produced in an X-ray tube in this way have a few discrete or separate
photon energies and constitute a line spectrum.
The photon energy of the K-radiation therefore increases as the atomic number of the
target increases. It is characteristic of the target material and is unaffected by the tube voltage.
A K-electron cannot be ejected and the K-radiation is not produced at all if the peak tube
voltage is less than EK, i.e. 70 kV in the case of a tungsten target. The rate of production of
the characteristic radiation increases as the kV is increased above this value.


[Figure 2.5: Wavelength distribution of X-ray production in a molybdenum target at 35 kV: relative intensity versus wavelength (in units of 10⁻¹² m, from λmin at about 30 up to 90), showing the continuous "bremsstrahlung" spectrum with the characteristic Kα and Kβ peaks superimposed.]
The two peaks of Figure 2.5, labeled Kα and Kβ, are part of what is called the Characteristic
X-Ray Spectrum. Similar peaks appear at greater wavelengths, i.e. smaller frequencies.
The emission of characteristic X-rays involves the following processes:
1. An energetic electron collides with an atom of the target, knocking out one of the
innermost electrons of the atom (small n) and creating a hole in the atomic structure of the
atom.
2. The hole is filled when an electron from a higher energy level in a middle shell of the
atom (intermediate n) jumps down to the lower energy shell, emitting a high-energy
photon (a characteristic X-ray).
The electron from the middle shell is subsequently replaced by an electron from an upper
energy shell, which in the transition emits a low-energy photon.
The average or effective energy of the continuous spectrum lies between the maximum and
minimum photon energies, and is typically one-third to one-half of the kVp. Thus, an X-ray
tube operated at 90 kVp can be thought of as emitting, effectively, 45 keV X-rays. As this peak
kV is greater than the K-shell binding energy, characteristic X-rays are also produced. They
appear as lines superimposed on the continuous spectrum, as shown in figure 2.8.
– The intensity of X-rays emitted is proportional to kV² × mA.
– The efficiency of X-ray production is the ratio

efficiency = (X-ray energy emitted) / (electrical energy supplied to the tube)

and increases with the kV. The efficiency is greater the higher the atomic number of the
target.


[Figure 2.8: Effect of tube kilovoltage on X-ray spectra: relative numbers of photons versus photon energy (keV) for 40, 80, and 120 kV. Each continuous "bremsstrahlung" spectrum extends up to its kVp, with the characteristic lines superimposed.]

2.11 Controlling the X-Ray Spectrum


To summarize, there are five factors affecting the X-ray spectrum. The following are the
effects of altering each in turn, the other four remaining constant:
Increasing the kV shifts the spectrum upward and to the right, as shown in Fig. 2.8. It
increases the maximum and effective energies and the total number of X-ray photons. Below
a certain kV (70 kV for a tungsten target) the characteristic K-radiation is not produced.
Increasing the mA does not affect the shape of the spectrum but increases the output of
both bremsstrahlung and characteristic radiation in proportion.
Changing the target to one of lower atomic number reduces the output of bremsstrahlung but
does not otherwise affect its spectrum, unless the filtration is also changed. The photon energy
of the characteristic lines will also be less.
Whatever the kilovoltage waveform, the maximum and minimum photon energies are
unchanged. However, a constant potential or three-phase generator produces more X-rays,
and at higher energies, than a single-phase pulsating potential generator operating with the
same values of kVp and mA. Both the output and the effective energy of the beam are
therefore greater. This is because with a constant potential the tube voltage is at the peak
value throughout the exposure, whereas with a pulsating potential it is below the peak value
during the greater part of each half cycle. A single-phase generator produces useful X-rays in
pulses, each lasting about 3 ms during the middle of each 10 ms half cycle of the mains.

[Figure 2.9: Relationship between kV, mA, and wavelength. Left: changing the kVp (mA constant) alters both the intensity and the spectral distribution. Right: changing the mA (kVp constant) alters the intensity only.]

2.12 Effects of Voltage and Amperage on X-Ray Production


Two sources of electrical energy are required and are derived from the alternating current
(AC) mains by means of transformers.
 The filament heating voltage (about 10 V) and current (about 10 A).
 The accelerating voltage (typically 30-150 kV) between the anode and cathode ("high
tension", "kilovoltage", or "kV"). This drives the current of electrons (typically tens to
hundreds of milliamperes) flowing between the anode and cathode ("tube current",
"milliamperage", or "mA").
The mA is controlled by varying the filament temperature. A small increase in filament
temperature, voltage, or current produces a large increase in tube current.

2.12.1 Effect of Voltage


In different equipment, different methods are used to accelerate the electrons. In the smaller
X-ray generators, up to and including two million volt units, acceleration is accomplished
with transformers to step up the incoming power line voltage and apply it between the anode
and the cathode of the X-ray tube. Since the X-ray generators operate at very high voltages,
the unit kilovolt (kV) is used to designate one thousand volts. As the kilovoltage (the potential
that causes the electrons to accelerate) is changed, the kinetic energy of the moving electrons
is changed, altering the energy of the resulting X-radiation. As the kilovoltage is increased,
the efficiency of converting the electrical energy into X-rays is increased. Therefore, when
kilovoltage is changed, the penetrating capability of the generated radiation is changed, and
the quality of the radiation is altered by the changed efficiency with which electrical energy is
converted into X-rays. That is, kilovoltage (or kilovolt peak, "kVp") is the component that
controls the quality of the x-ray beam produced. It also controls the contrast or gray scale in
the produced x-ray film: the higher the kVp, the lower the contrast. When the kV is set on the
control console, the maximum kilovoltage that will be achieved is the number you have
selected. For example, if you set the kVp at "60", the maximum kilovoltage that will be
produced is 60 kV, or 60,000 volts. The reason we call it kilovolts peak, or kVp, is that you
will also get some voltages that are less than the kilovolts peak, or maximum kV set on the
control console. You will get some voltages at 58 kV or 59 kV, etc.
The change in x-ray quantity is proportional to the square of the ratio of the kVp; in other
words, if the kVp were doubled, the x-ray intensity would increase by a factor of four:

I1 / I2 = (kVp1 / kVp2)²

where I1 and I2 are the X-ray intensities at kVp1 and kVp2, respectively. Selecting the proper
kilovoltage is very important in industrial radiographic applications.
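
A minimal sketch of this square-law rule (the kVp values are illustrative):

def intensity_ratio(kvp1, kvp2):
    # relative x-ray quantity when only the kVp changes: I1/I2 = (kVp1/kVp2)^2
    return (kvp1 / kvp2) ** 2

print(intensity_ratio(120, 60))   # doubling the kVp -> about 4x the quantity
print(intensity_ratio(90, 60))    # -> 2.25x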

2.12.2 Effect of Amperage


Amperage is a measure of the amount of electrical current applied to the filament. It is also a
direct measurement of the number of free electrons available in the X-ray tube and is
independent of variations in kilovoltage. Thus the quantity of X-radiation is in direct relation
to the filament current. Typically, the amount of current is small, so the unit milliamperage
(mA) is used to designate one one-thousandth of an ampere.
Milliamperage (1/1000 of an amp) × time (in seconds) is what controls the quantity or the
amount of x-ray photons produced. This is also what controls the blackening or density on the
x-ray film.

To calculate the mAs, you multiply the tube current by the exposure time:

mAs = mA × s, where s = exposure time in seconds (usually a fraction of a second)

For example: 200 mA × 0.1 s = 20 mAs.

X-ray quantity is directly proportional to the mAs. When the mAs is doubled, the number of
electrons striking the tube target is doubled, and therefore the number of x-rays emitted is
doubled.
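
The bookkeeping is simple enough to sketch in a few lines (the technique values are hypothetical):

def mAs(milliamperes, seconds):
    # exposure = tube current x time; x-ray quantity scales linearly with it
    return milliamperes * seconds

e1 = mAs(200, 0.10)       # 20 mAs
e2 = mAs(400, 0.10)       # 40 mAs: doubling the mA doubles the photon output
print(e1, e2, e2 / e1)    # -> 20.0 40.0 2.0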

CHAPTER 3

INTERACTION OF X-RAYS
Rationale
Interaction processes of x-rays are very important subjects to study
in order to gain full knowledge of the mechanisms of interaction
that lead to x-ray attenuation.
Subjects such as the effective atomic number, absorption edges, the
relative importance of Compton and photoelectric attenuation,
secondary electrons, the properties of x- and gamma rays, and the
attenuation of x-rays by the patient are considered here; they are
needed for a deep comprehension of radiation.
Filtration is also an important subject to study, providing the
background needed to understand how filtration works.
Performance Objectives

After studying chapter three, the student will be able to:
1. Define attenuation.
2. Name and describe mechanisms of x-rays interaction.
3. Contrast between the types of attenuation.
4. Define the Compton process and photoelectric absorption.
5. Compare the Compton process and photoelectric absorption.
6. Define absorption edges.
7. Determine the relative importance of Compton and photoelectric
attenuation.
8. Determine the sources of attenuation of x-rays by the patient.
9. Define filtration.
10. Determine the advantage of filtration.
11. Explain the effects of filtration.

CHAPTER THREE: INTERACTION OF X-RAYS


CHAPTER CONTENTS
3.1. Introduction and Overview 35
3.2. Compton Scattering (Modified Scatter) 37
3.2.1. Direction of Scatter 38
3.2.2. Energy of Scattered Radiation 38
3.3. Photoelectric Effect 41
3.4. Coherent Scatter 43
3.5. Attenuation 43
3.5.1. Linear Attenuation Coefficient 45
3.5.2. Factors Affecting the Attenuation 46
3.6. Photoelectric Rates 47
3.6.1. Dependence on Photon Energy 47
3.6.2. Material Atomic Number 48
3.7. Effective Atomic Number 49
3.8. Compton Rates 50
3.9. Mass Attenuation Coefficient 50
3.10. Use of Linear Attenuation Coefficients 51
3.11. Half-Value Layer 52
3.12. Competitive Interactions 55
3.13. Secondary Electrons 56
3.14. Electron Interactions 56
3.15. Electron Range 57
3.16. Linear Energy Transfer 57
3.17. Properties of X-and Gamma Rays 58
3.18. Attenuation of X-rays by the Patient 59
3.19. Filtration 59
3.20. Choice of Filter Material 59
3.21. Effects of Filtration 60

3.1 Introduction and Overview


As we saw in the previous chapter, X-ray photons are created by the interaction of accelerated
electrons with the target, at the atomic level, in the x-ray tube. Photons (x-ray and gamma) end their
lives by transferring their energy to electrons contained in matter. X-ray interactions are
important in diagnostic examinations for many reasons. For example, the selective interaction
of x-ray photons with the structure of the human body produces the image; the interaction of
photons with the receptor converts an x-ray or gamma image into one that can be viewed or
recorded. This chapter considers the basic interactions between x-ray and gamma photons and
matter.


X- and gamma-rays have many modes of interaction with matter. Some of those which are not
important to radiography or nuclear medicine imaging are:
 Mössbauer Effect
 Coherent Scattering
 Pair Production
and will not be described here.
Those which are very important to radiography are the Photoelectric Effect and the Compton
Effect. We will consider each of these in turn below. Note that the effects described here are
also of relevance to the interaction of gamma-rays with matter since as we have noted before
X-rays and gamma-rays are essentially the same entities and differ only in their origin. So
the treatment below is also of relevance to nuclear medicine imaging.
The total fraction of photons passing through an absorber decreases
exponentially with the thickness of the absorber.
As a beam of X- or gamma rays passes through matter, three possible fates await each
photon, as listed below:
a. Penetrate (transmitted): the photon can pass through the section of matter without
interacting.
b. Absorbed: the photon can interact with the matter and transfer to it all of its energy
(the photon is completely absorbed, depositing its energy) or some of it (partial
absorption).
c. Produce scattered radiation: the photon can interact and be diverted in a new direction,
i.e. scattered or deflected from its original direction, depositing part of its energy.
Photons Entering the Human Body will either Penetrate, be Absorbed, or Produce Scattered
Radiation (see figure 3.1).

[Figure 3.1: Interactions of photons entering the human body: the incident beam is transmitted, absorbed, or scattered.]


There are two kinds of interactions through which photons deposit their energy; both are with
electrons. In one type of interaction the photon loses all its energy; in the other, it loses a
portion of its energy, and the remainder is scattered. X-ray absorption and scattering
processes are stochastic processes, governed by the statistical laws of chance. It is impossible to
predict which of the individual photons in a beam will be transmitted by 1 mm of a material,
but it is possible to be quite precise about the fraction of them that will be, on account of the
large number of photons the beam contains.
Although a large number of possible interaction mechanisms are known for radiation (X- or
gamma rays) in matter, only two major types play an important role in diagnostic radiology:
photoelectric absorption and Compton scattering.
3.2 Compton Scattering (Modified Scatter)
A Compton interaction is one in which only a part of the energy of the photon is transferred to a
valence electron, which is essentially free, and a photon is produced with reduced energy; i.e. the
electron recoils, taking away some of the energy of the photon as kinetic energy. In other
words, Compton scattering is the process whereby an X- or gamma ray interacts with a free
or weakly bound electron and transfers part of its energy to the electron. Notice that the
electron leaves the atom and may act like a beta-particle, and that the x-ray photon leaves the
site of the interaction in a direction different from that of the original photon, i.e. diverted in a
new direction with reduced energy, as shown in figure 3.2. Because of the change in photon
direction, this type of interaction is classified as a scattering process, called Compton
scattering and also known as incoherent scattering. The deflected or scattered X-ray can
undergo further Compton interactions within the material.

[Figure 3.2: A schematic representation of Compton scattering: an incident x-ray transfers part of its energy to an outer-shell (Compton) electron, and the scattered x-ray leaves at an angle of deflection θ.]
In effect, a portion of the incident radiation "bounces off" or is scattered by the material. This is
significant in some situations because the material within the primary X-ray beam becomes a
secondary radiation source. The most significant object producing scattered radiation in an
X-ray procedure is the patient's body. The portion of the patient's body that is within the primary
X-ray beam becomes the actual source of scattered radiation. This has two undesirable
consequences.
 The scattered radiation that continues in the forward direction and reaches the image
receptor decreases the quality (contrast) of the image.
 The radiation that is scattered from the patient is the predominant source of radiation
exposure to the personnel conducting the examination.
The angle of deflection (scatter) θ is the angle between the scattered ray and the incident ray.
Photons may be scattered in all directions. The electrons are projected only in sideways and
forward directions.
3.2.1 Direction of Scatter
In Compton interactions, the relationship of the electron energy to that of the photon depends
on the angle of scatter and the original photon energy.
It is possible for photons to scatter in any direction. The direction in which an individual
photon will scatter is purely a matter of chance. There is no way in which the angle of scatter
for a specific photon can be predicted. However, there are certain directions that are more
probable and that will occur with a greater frequency than others. The factor that can alter the
overall scatter direction pattern is the energy of the original photon. In diagnostic examinations,
the most significant scatter will be in the forward direction. This would be an angle of scatter of
only a few degrees. However, especially at the lower end of the energy spectrum, there is a
significant amount of scatter in the reverse direction, i.e., backscatter. For the diagnostic photon
energy range, the number of photons that scatter at right angles to the primary beam is in the
range of one-third to one-half of the number that scatter in the forward direction. Increasing
primary photon energy causes a general shift of scatter to the forward direction. However, in
diagnostic procedures, there is always a significant amount of back- and side-scatter radiation.
3.2.2 Energy of Scattered Radiation
When a photon undergoes a Compton interaction, its energy is divided between the scattered
secondary photon and the electron with which it interacts. The electron's kinetic energy is
quickly absorbed by the material along its path. In other words, in a Compton interaction, part
of the original photon's energy is absorbed and part is converted into scattered radiation.
The manner in which the energy is divided between scattered and absorbed radiation depends
on two factors: the angle of scatter and the energy of the original photon. The relationship
between the energy of the scattered radiation and the angle of scatter is a little complex and
should be considered in two steps. The photon characteristic that is specifically related to a
given scatter angle is its change in wavelength. It should be recalled that a photon's wavelength
(λ) and energy (E) are inversely related as given by:

E = 12.4 / λ

where E is in keV and λ is in angstroms (Å).


Conservation of energy and momentum allows only a partial energy transfer when the electron
is not bound tightly enough for the atom to absorb recoil energy. This interaction involves the
outer, least tightly bound electrons in the scattering atom. The electron becomes a free electron
with kinetic energy equal to the difference between the energy lost by the X-ray and the electron
binding energy. Because the electron binding energy is very small compared to the X-ray
energy, the kinetic energy of the electron is very nearly equal to the energy lost by the X-ray:

KE(electron) ≈ E0 − Es

where E0 is the energy of the incident X-ray and Es is the energy of the scattered X-ray.
Two products leave the interaction site: the freed electron and the scattered X-ray. The
directions of the electron and the scattered X-ray depend on the amount of energy transferred
to the electron during the interaction. The relationship below determines the wavelength, and
hence the energy, of the scattered X-ray. It was observed that when X-rays of a known
wavelength interact with atoms, the X-rays are scattered through an angle θ and emerge at a
different wavelength related to θ.

Since photons lose energy in a Compton interaction, the wavelength always increases. The
relationship between the wavelength shift, λ′ − λ, and the angle of scatter is given by:

λ′ − λ = (h / mc)(1 − cos θ)

where
λ is the initial wavelength,
λ′ is the wavelength after scattering,
h is the Planck constant,
m is the electron rest mass,
c is the speed of light, and
θ is the scattering angle.
The quantity h/mc is known as the Compton wavelength of the electron; it is equal to
2.43×10⁻¹² m, i.e. 0.0243 Å.


The wavelength shift is at least zero (for θ = 0°) and at most twice the Compton wavelength
of the electron (for θ = 180°).


It is important to recognize the difference between a change in wavelength and a change in
energy. Since higher energy photons have shorter wavelengths, a change of, say, 0.0243 Å
represents a larger energy change for them than it would for a lower energy photon. All photons
scattered at an angle of 90 degrees will undergo a wavelength change of 0.0243 Å. The change
in energy associated with 90-degree scatter is not the same for all photons, and depends on their
original energy. The change in energy can be found as follows. For a 110-keV photon, the
wavelength is 0.1127 Å. A scatter angle of 90 degrees will always increase the wavelength by
0.0243 Å. Therefore, the wavelength of the scattered photon will be 0.1127 plus 0.0243, or
0.1370 Å. The energy of a photon with this wavelength is 91 keV. The 110 keV photons will
lose 19 keV, or 17% of their energy, in the scattering process. Lower energy photons lose a
smaller percentage of their energy.
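
The worked example above is easy to reproduce; this sketch uses only the 0.0243 Å Compton shift and the E = 12.4/λ relation given in the text:

import math

COMPTON_SHIFT_A = 0.0243   # Compton wavelength of the electron, in angstroms

def scattered_photon_keV(E0_keV, theta_deg):
    lam0 = 12.4 / E0_keV                                              # angstroms
    lam1 = lam0 + COMPTON_SHIFT_A * (1 - math.cos(math.radians(theta_deg)))
    return 12.4 / lam1

E1 = scattered_photon_keV(110, 90)
print(f"scattered: {E1:.1f} keV, electron: {110 - E1:.1f} keV")
# -> scattered: 90.5 keV, electron: 19.5 keV (the text rounds these to 91 and 19 keV)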
It will be seen, as shown in figure 3.3, that the greater the angle of scatter:
 The greater the energy and range of the recoil electron, and also
 The greater the loss of energy (and increase of wavelength) of the scattered photon.

[Figure 3.3: Three different angles of Compton scattering: a back-scattered photon (θ = 180°), a side-scattered photon (θ = 90°), and a forward-scattered photon (θ ≈ 0°), each with its recoil electron.]

Thus
 The back-scattered photon has the minimum energy: in a head-on collision the X-ray is
scattered through 180° and the electron moves forward in the direction of the incident X-ray.
 For the forward-scattered photon, i.e. very small scattering angles (θ ≈ 0), the energy of the
scattered X-ray is only slightly less than the energy of the incident X-ray, and the
scattered electron takes very little energy away from the interaction.
 The higher the initial photon energy, the greater the remaining photon energy of the
scattered radiation (and the more penetrating it is), and also the greater the energy carried
off by the recoil electron (and the greater its range). This is seen in the following examples:

Incident photon    Back-scattered photon    Recoil electron
25 keV             22 keV                   3 keV
150 keV            100 keV                  50 keV
 The softening effect of Compton scatter is therefore greatest with large scattering
angles as well as with high energy X-rays.

3.3 Photoelectric Effect


In the photoelectric (photon-electron) interaction, an X-ray (or gamma-ray) photon collides with an orbital
electron in the atomic shells of an atom of the material. If photon energy is greater than the
binding energy of the shell, it can transfer all its energy to the electron. The electron is ejected
from the atom by this energy and begins to pass through the surrounding matter (see figure
3.4). The electron rapidly loses its energy and moves only a relatively short distance from its
original location. The photon's energy is, therefore, deposited in the matter close to the site of
the photoelectric interaction.

[Figure 3.4: A schematic representation of the photoelectric absorption process: an incident x-ray is completely absorbed and a photoelectron is ejected from an inner shell.]
The energy transfer is a two-step process:
 The first step is the photoelectric interaction in which the photon transfers its energy to
the electron.
 The second step is the depositing of the energy in the surrounding matter by the
electron.


Photoelectric interactions usually occur with electrons that are tightly bound to the atom, that
is, those with a relatively high binding energy. Photoelectric interactions are most probable
when the electron binding energy is only slightly less than the energy of the photon. If the
binding energy is more than the energy of the photon, a photoelectric interaction cannot occur.
This interaction is possible only when the photon has sufficient energy to overcome the binding
energy and removes the electron from the atom.
On the basis of the principle of conservation of energy, we can deduce that the photon's
energy is divided into two parts by the interaction. A portion of the energy is used to overcome
the electron's binding energy and to remove it from the atom. The remaining energy is
transferred to the electron as kinetic energy and is deposited near the interaction site. Since the
interaction creates a vacancy in one of the electron shells, typically the K or L, an electron
moves down to fill in. The electron will leave the atom with a kinetic energy equal to the
energy of the X-ray less that of the orbital binding energy. The electrons so ejected are called
photoelectrons.
Note that an ion results when the photoelectron leaves the atom. Also note that the X-ray energy
is totally absorbed in the process.
The photon disappears: Part of its energy, equal to the binding energy of the K-shell, is
expended in removing the electron from the atom, and the remainder becomes the kinetic
energy (KE) of that electron:
KE of the electron = photon energy - EK
Less often, the X- or gamma ray photon may interact with an electron in the L-shell of an atom.
The electron is then ejected from the atom with
KE = photon energy - EL
Two subsequent points should also be noted.
 The photoelectron can cause ionizations along its track in a similar manner to a beta-particle.
 The vacancy "holes" created in the atomic shell are filled by an electron from an outer shell
(farther out) of the atom, with the emission of a series of photons of characteristic radiation
(see figure 3.5). In this way the whole of the original photon energy is accounted for.


[Figure 3.5: Production of characteristic radiation: the inner-shell vacancy left by the photoelectron is filled by an electron from an outer shell, with the emission of characteristic radiation.]
The drop in energy of the filling electron often produces a characteristic x-ray photon. The
energy of the characteristic radiation depends on the binding energy of the electrons involved.
Characteristic radiation initiated by an incoming photon is referred to as fluorescent radiation.
Fluorescence, in general, is a process in which some of the energy of a photon is used to create
a second photon of less energy. This process sometimes converts x-rays into light photons.
Whether the fluorescent radiation is in the form of light or x-rays depends on the binding
energy levels in the absorbing material.
In the case of air, tissue, and other light-atom materials, the characteristic radiation is so soft
that it is absorbed immediately with the ejection of a further, low-energy, photoelectron or
"Auger electron". Thus, all the original photon energy is converted into the energy of electronic
motion and is said to have been absorbed by the material. Photoelectric absorption in such
materials is complete absorption.
On the other hand, the characteristic rays from barium and iodine in contrast media are
sufficiently energetic to leave the patient. In this respect they act like Compton scattered rays.
3.4 Coherent Scatter
The incident photon interacts with an electron which is tightly bound to its parent atom and
excites it, causing it to vibrate. The vibration causes the photon to scatter. The atom is too
massive to recoil, so the photon is scattered with no loss of energy and deposits no energy in
the material. No secondary electron is set moving and no ionization or other effect is
produced in the material. This process occurs only with low-energy photons and at very small
angles of scattering, in which case the scattered radiation does not leave the beam. This type
of interaction generally has little significance in most diagnostic procedures. It is variously
called coherent, classical, elastic, or Thomson scattering.
3.5 Attenuation


Attenuation refers to the reduction in intensity of an x-ray beam as it passes through a
material, due to interaction events between the x-rays and matter (i.e. absorption and
scattering of photons). Some of the photons interact with the material, and some pass on
through. These interactions, mainly the photoelectric effect and Compton scattering,
remove some of the photons from the beam in a process known as attenuation. The degree of
attenuation depends on the energy of the x-ray beam and on the physical density of the
material through which the beam passes. Under specific conditions, a certain percentage
of the photons will interact, or be attenuated, in a 1-unit thickness of material.
In clinical applications we are generally not concerned with the fate of an individual photon
but rather with the collective interaction of the large number of photons. In most instances we
are interested in the overall rate at which photons interact as they make their way through a
specific material.
For a narrow beam of mono-energetic photons, the change in x-ray beam intensity dI over a
small distance dx in a material can be expressed as:

dI = −(nσ) I dx

where the minus sign indicates that the intensity is reduced by the absorber, n is the number
of atoms/cm³ and σ is a proportionality constant (the cross-section per atom).

The experimental set-up is illustrated in the figure 3.6. We refer to the intensity of the radiation
which strikes the absorber as the incident intensity, I0, and the intensity of the radiation which
gets through the absorber as the transmitted intensity, Ix. Notice also that the thickness of the
absorber is denoted by x.
When this equation is integrated, it becomes:

Ix = I0 e^(−nσx)

The number of atoms/cm³ (n) and the proportionality constant (σ) are usually combined to yield
the linear attenuation coefficient (μ = nσ). Therefore the equation becomes:

Ix = I0 e^(−μx)

where I0 is the incident intensity, Ix is the intensity transmitted through thickness x, and μ is
the linear attenuation coefficient (cm⁻¹).


This final expression tells us that the radiation intensity will decrease in an exponential fashion
with the thickness of the absorber with the rate of decrease being controlled by the Linear
Attenuation Coefficient.
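
A minimal numerical illustration of Ix = I0 e^(−μx), using the 0.1 per cm coefficient from the 10%-per-cm example in section 3.5.1:

import math

def transmitted(I0, mu_per_cm, x_cm):
    # exponential attenuation: I_x = I_0 * exp(-mu * x)
    return I0 * math.exp(-mu_per_cm * x_cm)

for x in (1, 5, 10):
    print(f"{x:2d} cm: {transmitted(100.0, 0.1, x):.1f}% of the beam remains")
# -> 90.5%, 60.7%, 36.8%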
[Figure 3.6: The experimental arrangement: incident intensity I0 strikes an absorber of thickness x, and transmitted intensity Ix emerges.]


The expression is shown in graphical form in Figure 3.7. The graph plots the intensity against
thickness, x. We can see that the intensity decreases from I0, its value at x = 0, rapidly at first
and then more slowly, in the classic exponential manner.

[Figure 3.7: Graphical representation of the dependence of radiation intensity on absorber thickness: intensity versus thickness on the left, and the natural logarithm of intensity versus thickness (a straight line of slope −μ) on the right.]
3.5.1 Linear Attenuation Coefficient
The linear attenuation coefficient (μ) measures the probability that a photon (x-ray or gamma
ray) interacts (i.e. is absorbed or scattered) per unit length of the path it travels in a specified
material. If, for example, the fraction of photons that interact in a 1-cm thickness is 0.1, or 10%,
the value of the linear attenuation coefficient is 0.1 per cm. This value essentially accounts for
the number of atoms in a cubic centimeter of the material and for the probability of a photon
being scattered or absorbed by the nucleus or by an electron of one of these atoms.


Using the transmitted intensity equation above, linear attenuation coefficients can be used to
make a number of calculations. These include:
 the intensity of the energy transmitted through a material when the incident x-ray
intensity, the material and the material thickness are known.
 the intensity of the incident x-ray energy when the transmitted x-ray intensity, material,
and material thickness are known.
 the thickness of the material when the incident and transmitted intensity, and the
material are known.
 the material can be determined from the value of µ when the incident and transmitted
intensity, and the material thickness are known.
Linear attenuation coefficient values indicate the rate at which photons interact as they move
through material and are inversely related to the average distance photons travel before
interacting. The rate at which photons interact (attenuation coefficient value) is determined by
the energy of the individual photons and the atomic number and density of the material.
The influence of the Linear Attenuation Coefficient can be seen in the Figure 3.8. All three
curves here are exponential in nature, only the Linear Attenuation Coefficient is different.
Notice that when the Linear Attenuation Coefficient has a low value the curve decreases
relatively slowly and when the Linear Attenuation Coefficient is large the curve decreases very
quickly.
[Figure 3.8: Exponential attenuation curves for small, medium, and large values of the linear attenuation coefficient μ: the larger the μ, the faster the intensity falls with thickness.]

The Linear Attenuation Coefficient is characteristic of individual absorbing materials. Some
materials have a small value and are easily penetrated by x- or gamma-rays. Other materials,
such as lead, have a relatively large Linear Attenuation Coefficient and are relatively good
absorbers of radiation.


3.5.2 Factors Affecting the Attenuation


Atomic number: In the case of a low atomic number absorber such as carbon (Z = 6), the
chances of interaction are reduced. In other words, the radiation has a greater probability of
being transmitted through the absorber, and the attenuation is consequently lower than for a
high atomic number absorber such as lead (Z = 82). The attenuation rises roughly as the cube
of the atomic number:

attenuation ∝ Z³

Therefore if we were to double the atomic number of our absorber we would increase the
attenuation by a factor of two cubed, that is 8; if we were to triple the atomic number we would
increase the attenuation by a factor of three cubed, that is 27; and so on. It is for this reason that
high atomic number materials (e.g. lead, Pb) are used for radiation protection.
Density: A low density absorber will give rise to less attenuation than a high density absorber,
since the chances of an interaction between the radiation and the atoms of the absorber are
relatively lower. That is, the attenuation is proportional to the density:

attenuation ∝ ρ
Thickness: A third factor which we can vary is the thickness of the absorber. As you should by
now be able to predict, the thicker the absorber, the greater the attenuation.
X-Ray Energy: Attenuation will decrease as the energy of the x-rays is increased, and increase
as it is decreased. Since the attenuation characteristics of materials are important in the
development of contrast in a radiograph, an understanding of the relationship between material
thickness, absorption properties, and photon energy is fundamental to producing a quality
radiograph. A radiograph with higher contrast will provide a greater probability of detection of a
given discontinuity. An understanding of absorption is also necessary when designing x-ray and
gamma-ray shielding, cabinets, or exposure vaults. Thus, the higher the x-ray energy, the lower
the attenuation.
3.6 Photoelectric Rates
The probability, and thus attenuation coefficient value, for photoelectric interactions depends
on how well the photon energies and electron binding energies match. This can be considered
from two perspectives.
 In a specific material with a fixed binding energy, a change in photon energy alters
the match and the chance for photoelectric interactions.
 On the other hand, with photons of a specific energy, the probability of
photoelectric interactions is affected by the atomic number of the material, which
changes the binding energy.

3.6.1 Dependence on Photon Energy


In a given material, the probability of photoelectric interactions occurring is strongly dependent
on the energy of the photon and its relationship to the binding energy of the electrons. The
figure 3.9 shows the relationship between the attenuation coefficient for iodine (Z = 53), bone,
muscle and fat for different photon energies. This graph shows two significant features of the
relationship. One is that the coefficient value, or the probability of photoelectric interactions,
decreases rapidly with increased photon energy. It is generally said that the probability of
photoelectric interactions is inversely proportional to the cube of the photon energy (1/E³).
This general relationship can be used to compare the photoelectric attenuation coefficients at
two different photon energies. The significant point is that the probability of photoelectric
interactions occurring in a given material drops drastically as the photon energy is increased.
The other important feature of the attenuation coefficient-photon energy relationship shown
in the Figure 3.9 is that it changes abruptly at one particular energy: the binding energy of the
shell electrons. The K-electron binding energy is 33 keV for iodine. This feature of the
attenuation coefficient curve is generally designated as the K, L, or M edge.
[Figure 3.9: Linear attenuation coefficients (cm⁻¹, logarithmic scale 0.1 to 100) versus photon energy (0 to 100 keV) for iodine, bone, muscle, and fat; the iodine curve shows an abrupt jump at its 33 keV K edge.]


The reason for the sudden change is apparent if it is recalled that photons must have energies
equal to or slightly greater than the binding energy of the electrons with which they interact.
When photons with energies less than 33 keV pass through iodine, they interact primarily with
the L-shell electrons. They do not have sufficient energy to eject electrons from the K shell, and
the probability of interacting with the M and N shells is quite low because of the relatively
large difference between the electron-binding and photon energies. However, photons with
energies slightly greater than 33 keV can also interact with the K shell electrons. This means
that there are now more electrons in the material that are available for interactions. This
produces a sudden increase in the attenuation coefficient at the K-shell energy. In the case of
iodine, the attenuation coefficient abruptly jumps from a value of 5.6 below the K edge to a
value of 36, or increases by a factor of more than 6.
A similar change in the attenuation coefficient occurs at the L-shell electron binding energy.
For most elements, however, this is below 10 keV and not within the useful portion of the x-ray
spectrum. Photoelectric interactions occur at the highest rate when the energy of the x-ray
photon is just above the binding energy of the electrons.
3.6.2 Material Atomic Number


The probability of photoelectric interactions occurring is also dependent on the atomic number
of the material. An explanation for the increase in photoelectric interactions with atomic
number is that as atomic number is increased, the binding energies move closer to the photon
energy. The general relationship is that the probability of photoelectric interactions (attenuation
coefficient value) is proportional to Z³. In general, the conditions that increase the probability
of photoelectric interactions are low photon energies and high-atomic-number materials.
To summarize
 As the photon energy is increased, photoelectric attenuation decreases according to the
following approximate relation until the binding energy EK of the particular material is
reached (a numerical sketch of this scaling follows this list):

photoelectric attenuation ∝ Z³ / E³
 At this energy, the photoelectric absorption jumps to a higher value and then decreases
again as the photon energy further increases.

 This discontinuity is an exception to the general rule that attenuation decreases with
increasing energy. The reason is that photons with less energy than EK can only eject
L-electrons and can only be absorbed in that shell. Photons with greater energy than
EK can eject K-electrons as well, and can therefore be absorbed in both shells.
 The sudden change of absorption coefficient is called the K-absorption edge, and occurs
at different photon energies with different materials. The higher the atomic number of
the material, the greater is EK and the greater is the photon energy at which the edge
occurs.
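
Here is the numerical sketch promised above of the approximate Z³/E³ scaling, valid only away from the absorption edges shown in figure 3.9 (the Z and energy values are illustrative):

def photoelectric_ratio(Z1, E1_keV, Z2, E2_keV):
    # approximate ratio of photoelectric attenuation: (Z1/Z2)^3 * (E2/E1)^3
    return (Z1 / Z2) ** 3 * (E2_keV / E1_keV) ** 3

print(photoelectric_ratio(20, 60, 20, 30))   # same material, energy doubled -> 0.125 (1/8)
print(photoelectric_ratio(20, 60, 10, 60))   # same energy, Z doubled -> 8.0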
3.7 Effective Atomic Number
Effective atomic number is a term that is similar to atomic number but is used for compounds
(e.g. water) and mixtures of different materials (such as tissue and bone) rather than for
atoms. To take account of photoelectric absorption, the effective atomic number is defined,
roughly, as the cube root of the weighted average of the cubes of the atomic numbers of the
constituents. One proposed formula for the effective atomic number, Zeff, is as follows:

Zeff = ( Σ fn × Zn^2.94 )^(1/2.94)

where
fn : is the fraction of the total number of electrons associated with each element,
Zn : is the atomic number of each element,
and the exponent 2.94 ≈ 3.
Some examples are listed in table 3.1:
Table 3.1: Physical Characteristics of Contrast-Producing Materials

Material    Effective Atomic Number (Z)    Density (g/cm³)
Water       7.42                           1.0
Muscle      7.46                           1.0
Fat         5.92                           0.91
Air         7.64                           0.00129
Bone        13                             -
Calcium     20.0                           1.55
Iodine      53.0                           4.94
Barium      56.0                           3.5

Effective atomic number is important for predicting how X-rays interact with a substance, as
certain types of X-ray interactions depend on the atomic number.
The Compton and photoelectric interactions depend on the effective atomic number but not
on the molecular configuration.
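
The formula above is straightforward to apply; a sketch for water (each H₂O molecule has 10 electrons: 2 from hydrogen and 8 from oxygen):

def z_eff(fractions_and_Z, p=2.94):
    # effective atomic number: (sum of f_n * Z_n^p) ** (1/p)
    return sum(f * Z ** p for f, Z in fractions_and_Z) ** (1.0 / p)

print(round(z_eff([(0.2, 1), (0.8, 8)]), 2))   # -> 7.42, matching table 3.1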
3.8 Compton Rates
Compton interactions can occur with the very loosely bound electrons. All electrons in low-
atomic-number materials and the majority of electrons in high-atomic-number materials are in
this category. The characteristic of the material that affects the probability of Compton
interactions is the number of available electrons. It was shown earlier that all materials, with the
exception of hydrogen, have approximately the same number of electrons per gram of material.
Since the concentration of electrons in a given volume is proportional to the density of the
materials, the probability of Compton interactions is proportional only to the physical density
and not to the atomic number, as in the case of photoelectric interactions. The major exception
is in materials with a significant proportion of hydrogen. In these materials with more electrons
per gram, the probability of Compton interactions is enhanced.
Although the chances of Compton interactions decrease slightly with photon energy, the
change is not so rapid as for photoelectric interactions, which are inversely related to the cube
of the photon energy.
3.9 Mass Attenuation Coefficient
The mass attenuation coefficient is a measure of how strongly a substance absorbs or
scatters radiation of a given energy, per unit mass. Since the linear attenuation coefficient
depends on the density of a material, the mass attenuation coefficient is often reported for
convenience. The Linear Attenuation Coefficient is useful when we are considering an
absorbing material of the same density but of different thicknesses. Material density does have
a direct effect on linear attenuation coefficient values, and confusion often arises as to this
effect.
Consider water, for example. The linear attenuation coefficient for water vapor is much lower
than it is for ice, because the molecules are more spread out in vapor, so the chance of a photon
encountering a water particle is less. Normalizing μ by dividing it by the density of the element
or compound produces a value that is constant for a particular element or compound. This
constant (μ/ρ) is known as the mass attenuation coefficient; it has units of cm²/g and therefore
does not change with changes in density:

mass attenuation coefficient = μ / ρ

To convert a mass attenuation coefficient (μ/ρ) back to a linear attenuation coefficient (μ),
simply multiply it by the density (ρ) of the material:

μ = (μ/ρ) × ρ

The measurement unit used for the Linear Attenuation Coefficient is cm⁻¹, and a common unit
of density is the g cm⁻³. You might like to derive for yourself on this basis that cm² g⁻¹ is the
corresponding unit of the Mass Attenuation Coefficient.
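
A sketch of the density effect; the μ/ρ value used here, about 0.167 cm²/g for water near 100 keV, is simply back-calculated from the 4.15 cm HVL for water quoted in table 3.3:

mu_over_rho = 0.167   # cm^2/g for water at ~100 keV (from HVL 4.15 cm -> mu ~ 0.167 /cm)

for name, rho_g_per_cm3 in [("liquid water", 1.000), ("ice", 0.917)]:
    mu = mu_over_rho * rho_g_per_cm3     # mu = (mu/rho) * rho
    print(f"{name}: mu = {mu:.3f} per cm")

The mass attenuation coefficient is the same for both phases; only the density, and hence μ, differs.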
The total attenuation rate depends on the individual rates associated with photoelectric and
Compton interactions; the respective attenuation coefficients simply add:

μ(total) = μ(photoelectric) + μ(Compton)
Let us now consider the factors that affect attenuation rates and the competition between
photoelectric and Compton interactions. Both types of interactions occur with electrons within
the material. The chance that a photon will interact as it travels a 1-unit distance depends on
two factors.
One factor is the concentration, or density, of electrons in the material. Increasing the
concentration of electrons increases the chance of a photon coming close enough to an electron
to interact. The electron concentration is determined by the physical density of the material.
Therefore, density affects the probability of both photoelectric and Compton interactions.
All electrons are not equally attractive to a photon. What makes an electron more or less
attractive is its binding energy. The two general rules are:
1. Photoelectric interactions occur most frequently when the electron binding energy is
slightly less than the photon energy.
2. Compton interactions occur most frequently with electrons with relatively low binding
energies.
The electrons with binding energies within the energy range of diagnostic x-ray photons are
the K-shell electrons of the intermediate- and high-atomic-number materials. Since an atom can
have, at most, two electrons in the K shell, the majority of the electrons are located in the
other shells and have relatively low binding energies.


3.10 Use of Linear Attenuation Coefficients


One use of linear attenuation coefficients is for selecting a radiation energy that will produce
the most contrast between particular materials in a radiograph. Say, for example, that it is
necessary to detect tungsten inclusions in aluminum. It can be seen from the graphs (figure 3.9)
of linear attenuation coefficients versus radiation energy that the maximum separation between
the tungsten and aluminum curves occurs at around 100 keV. At this energy the difference in
attenuation between the two materials is greatest, so the radiographic contrast will be
maximized.
[Figure 3.9: Linear attenuation coefficients (cm⁻¹, logarithmic scale 0.1 to 100 000) versus photon energy (1 to 1000 keV) for tungsten (W), lead (Pb), copper (Cu), and aluminum (Al).]

3.11 Half-Value Layer


The thickness of any given material which reduces the incident radiation intensity by a factor of
two (i.e. 50% of the incident energy has been attenuated) is known as the half-value layer
(HVL). The HVL is expressed in units of distance (mm or cm). Like the attenuation coefficient,
it is photon energy dependent. Increasing the penetrating energy of a stream of photons will
result in an increase in a material's HVL. Graphically, the HVL is the thickness x at which

Ix / I0 = 0.5

The HVL is inversely proportional to the attenuation coefficient. If an incident intensity of 1
and a transmitted intensity of 0.5 are plugged into the exponential equation introduced on the
preceding pages, it can be seen that the HVL multiplied by μ must equal 0.693:

0.5 = e^(−μ × HVL)

If x is the HVL, then μ times HVL must equal 0.693 (since 0.693 is the magnitude of the
exponent that gives a value of 0.5, i.e. ln(0.5) = −0.693).

Therefore, the HVL and μ are related as follows:

$$\text{HVL} = \frac{0.693}{\mu} \qquad \text{and} \qquad \mu = \frac{0.693}{\text{HVL}}$$
These last two equations express the relationship between the Linear Attenuation Coefficient
and the Half Value Layer. They are very useful when solving numerical questions relating to
attenuation and frequently form the first step in solving a numerical problem.

[Figure 3.10: Relationship between penetration and object thickness expressed in HVLs. The transmitted fraction falls from 100% of the incident energy to 50%, 25%, 12.5%, 6.25%, 3.1%, 1.6%, and 0.8% after 1 to 7 half-value layers.]

The HVL is often used in radiography simply because it is easier to remember values and
perform simple calculations. In a shielding calculation, such as illustrated in Figure 3.10, it can
be seen that if the thickness of one HVL is known, it is possible to quickly determine how
much material is needed to reduce the intensity to less than 1%.
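These relationships are easy to check numerically. A minimal Python sketch using the two equations above (the μ value in the example is arbitrary):

```python
import math

def hvl_from_mu(mu):
    """Half-value layer (in the length unit of 1/mu): HVL = 0.693 / mu."""
    return math.log(2) / mu

def transmitted_fraction(n_hvls):
    """Fraction of the incident intensity remaining after n half-value layers."""
    return 0.5 ** n_hvls

print(hvl_from_mu(0.2))   # mu = 0.2 cm^-1  ->  HVL ~ 3.47 cm

# Shielding: how many HVLs reduce the intensity to below 1%?
n = 0
while transmitted_fraction(n) > 0.01:
    n += 1
print(n, "HVLs ->", transmitted_fraction(n))   # 7 HVLs -> 0.0078 (0.8%)
```

Seven half-value layers reduce the intensity to 0.8%, in agreement with Figure 3.10.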

Table 3.2: Half-value layer (HVL) of aluminum for various kV values

    kV      Minimum penetration (HVL) for aluminum (mm)
    30      0.3
    50      1.2
    70      1.5
    90      2.5
    110     3.0

Note: The values presented on this page are intended for educational purposes. Other sources
of information should be consulted when designing shielding for radiation sources.

Io
Intensity

X1/2 Thickness, X Mahmood & Haider


Figure 3.11: The intensity of the radiation decrease with increase
thickness material.
The half-value layer for a range of absorbers is listed in the following table for x-rays:

Table 3.3: Half-value layers (in cm) for a range of materials at x-ray energies of 100 and 200 keV.

    Absorber     100 keV   200 keV
    Air          3555      4359
    Water        4.15      5.1
    Carbon       2.07      2.53
    Aluminum     1.59      2.14
    Iron         0.26      0.64
    Copper       0.18      0.53
    Lead         0.012     0.068

• The first point to note is that the half-value layer is highly dependent on the atomic number of the absorbing material: it decreases as the atomic number increases. For example, lead is more effective than either aluminum or air at absorbing X-rays because of its higher density and atomic number, which make its attenuation relatively large. The value for air at 100 keV is about 35 meters, and it decreases to just 0.12 mm for lead at this energy. In other words, 35 m of air is needed to reduce the intensity of a 100 keV x-ray beam by a factor of two, whereas just 0.12 mm of lead can do the same thing.
• The second thing to note is that the half-value layer increases with increasing x-ray energy, for example from 0.18 cm for copper at 100 keV to about 0.5 cm at 200 keV.

• Thirdly, note that the data in the previous table confirm the reciprocal relationship between the half-value layer and the linear attenuation coefficient.

3.12 Competitive Interactions


The energy at which interactions change from predominantly photoelectric to Compton is a function of the atomic number of the material. Figure 3.12 shows this crossover energy for several different materials. At the lower photon energies, photoelectric interactions are much more predominant than Compton. Over most of the energy range, the probability of both decreases with increased energy. However, the decrease in photoelectric interactions is much greater, because the photoelectric rate changes in proportion to 1/E³, whereas Compton interactions are much less energy dependent. In soft tissue, the two lines cross at an energy of
about 30 keV. At this energy, both photoelectric and Compton interactions occur in equal
numbers. Below this energy, photoelectric interactions predominate. Above 30 keV, Compton
interactions become the significant process of x-ray attenuation. As photon energy increases,
two changes occur: The probability of both types of interactions decreases, but the decrease for
Compton is less, and it becomes the predominant type of interaction.
In higher-atomic-number materials, photoelectric interactions are more probable, in general,
and they predominate up to higher photon energy levels. The conditions that cause
photoelectric interactions to predominate over Compton are the same conditions that enhance
photoelectric interactions, that is, low photon energies and materials with high atomic numbers.
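The 1/E³ and Z³ dependences quoted above can be turned into a rough estimate of the crossover energy. The following Python sketch is a toy model, not fitted data: the constant Compton coefficient and the scale factor K are assumptions chosen so that the soft-tissue crossover lands near 30 keV.

```python
COMPTON = 0.18   # cm^2/g, treated as Z- and energy-independent (assumption)
K = 12.0         # assumed scale factor for the photoelectric term

def crossover_energy(z_eff):
    """Energy (keV) where the toy photoelectric term K*Z^3/E^3 equals COMPTON."""
    return (K * z_eff**3 / COMPTON) ** (1.0 / 3.0)

for name, z in [("soft tissue", 7.4), ("bone", 11.6), ("lead", 82.0)]:
    print(f"{name:12s} (Z_eff = {z:4.1f}): crossover ~ {crossover_energy(z):5.0f} keV")
```

The model reproduces the qualitative behavior of Figure 3.12: the higher the atomic number, the higher the photon energy up to which photoelectric interactions predominate.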

[Figure 3.12: Comparison of photoelectric and Compton interaction rates (mass attenuation coefficient, cm²/g, versus photon energy, keV) for lead, iodine, and soft tissue; the Compton curve is common to all materials.]
The total attenuation coefficient value for materials involved in x-ray and gamma interactions can vary tremendously if photoelectric interactions are involved. A minimum value of approximately 0.15 cm²/g is established by Compton interactions. Photoelectric interactions can cause the total attenuation to increase to very high values. For example, at 30 keV, lead (Z = 82) has a mass attenuation coefficient of 30 cm²/g.
From all of the above, the photoelectric coefficient is proportional to Z³/E³, and is particularly high when the photon energy is just greater than EK. The Compton coefficient is independent of Z and little affected by E. Accordingly, photoelectric absorption is more important than the Compton process with high-Z materials as well as with relatively low-energy photons. Conversely, the Compton process is more important than photoelectric absorption with low-Z materials as well as with high-energy photons. As regards diagnostic imaging with X-rays (20-140 keV), therefore:
1. The Compton process is the predominant process for air, water, and soft tissues;
2. The photoelectric absorption predominates for contrast media, lead, and the materials used in
films, screens, and other imaging devices; while
3. Both are important for bone.

3.13 Secondary Electrons


The term 'secondary radiation' refers to Compton scattered radiation; and 'secondary electrons'
to the recoil electrons and photoelectrons set moving in the material by the two processes.
3.14 Electron Interactions
The interaction and transfer of energy from photons to tissue has two phases.
• The first is the "one-shot" interaction between the photon and an electron, in which all or a significant part of the photon energy is transferred;
• The second is the transfer of energy from the energized electron as it moves through the tissue. This occurs as a series of interactions, each of which transfers a relatively small amount of energy.
As they travel through the material, the secondary electrons interact with the outer shells of the
atoms they pass nearby, and excite or ionize them. The track of the electron is therefore dotted
with ion pairs. When traveling through air the electron loses an average of 34 eV per ion-pair
formed.
Several types of radioactive transitions produce electron radiation including beta radiation,
internal conversion (IC) electrons, and Auger electrons. These radiation electrons interact with
matter (tissue) in a manner similar to that of electrons produced by photon interactions.
The electrons set free by photoelectric and Compton interactions have kinetic energies
ranging from relatively low values to values slightly below the energy of the incident photons.
As the electrons leave the interaction site, they immediately begin to transfer their energy to the
surrounding material. Because the electron carries an electrical charge, it can interact with other
electrons without touching them. As it passes through the material, the electron, in effect,
pushes the other electrons away from its path. Ionization results if the force on an electron is sufficient to remove it from its atom. In some cases, the atomic or molecular structures are raised to a higher energy level, or excited state. This is accounted for by about 3 eV being needed to excite an atom and about 10 eV to ionize it, and there being about eight times as many excitations as ionizations.
Regardless of the type of interaction, the moving electron loses some of its energy. Most of
the ionization produced by x- and gamma radiation is not a result of direct photon interactions,
but rather of interactions of the energetic electrons with the material. For example, in air,
radiation must expend an average energy of 33.4 eV per ionization. Consider a 50-keV x-ray
photon undergoing a photoelectric interaction. The initial interaction of the photon ionizes one
atom, but the resulting energetic electron ionizes approximately 1,500 additional atoms.
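The worked example above follows directly from the average energy expended per ionization. A two-line check in Python:

```python
W_AIR_EV = 33.4   # average energy expended in air per ionization, eV

def total_ionizations(photon_energy_kev):
    """Approximate ionizations produced as the photon energy is dissipated
    by its secondary electrons in air."""
    return photon_energy_kev * 1000.0 / W_AIR_EV

print(round(total_ionizations(50)))   # ~1497, i.e. roughly 1,500 atoms ionized
```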
3.15 Electron Range

The total distance an electron travels in a material before losing all its energy is generally
referred to as its range. When it has lost the whole of its initial energy in this way, the electron
comes to the end of its range. The two factors that determine the range are
(1) The initial energy of the electrons; the greater the initial energy of the electron, the
greater its range.
(2) The density of the material; the range is inversely proportional to the density of the
material.
One important characteristic of electron interactions is that all electrons of the same energy
have the same range in a specific material.
In general, the range of electron radiation in materials such as tissue is a fraction of a millimeter. This means that essentially all electron radiation energy is absorbed in the body very close to the site of the interactions. For example, when 140 keV photons are absorbed in soft tissue, some of the secondary electrons are photoelectrons having an energy of 140 keV, able to produce some 4000 ion pairs and having a range of about 0.2 mm.
Most of the secondary electrons are recoil electrons with a spectrum of energies averaging 25 keV and an average range of about 0.02 mm. The ranges in air are some 800 times greater than in tissue. Owing to their continual 'collisions' with the atoms, the tracks of secondary electrons are somewhat tortuous.

3.16 Linear Energy Transfer


The rate at which an electron transfers energy to a material is known as the linear energy transfer (LET), and is expressed in terms of the amount of energy transferred per unit of distance traveled. Typical units are kiloelectron volts per micrometer (keV/μm). In a given material, such as tissue, the LET value depends on the kinetic energy (velocity) of the electron. The LET is generally inversely related to the electron velocity. As a radiation electron loses energy, its velocity decreases, and the value of the LET increases until all its energy is dissipated. LET values in soft tissue for several electron energies are given in Table 3.4.

Table 3.4: Electron energy vs. linear energy transfer

    Electron Energy (keV)    LET (keV/μm)
    1000                     0.2
    100                      0.3
    10                       2.2
    1                        12.0

The effectiveness of a particular radiation in producing biological damage is often related to the
LET of the radiation. The actual relationship of the efficiency in producing damage to LET

values depends on the biological effect considered. For some effects, the efficiency increases
with an increase in LET, for some it decreases, and for others it increases up to a point and then
decreases with additional increases in LET. For a given biological effect, there is an LET value
that produces an optimum energy concentration within the tissue. Radiation with lower LET
values does not produce an adequate concentration of energy. Radiations with higher LET
values tend to deposit more energy than is needed to produce the effect; this tends to waste
energy and decrease efficiency.

3.17 Properties of X- and Gamma Rays


It is the excitations and ionizations produced by the secondary electrons which account for the various properties of X- and gamma rays:
• The ionization of air and other gases makes them electrically conducting: used in the measurement of X- and gamma rays.
• The ionization of atoms in the constituents of living cells causes biological damage: responsible for the hazards of radiation exposure to patients and staff, and necessitating protection against radiation.
• The excitation of atoms of certain materials (phosphors) makes them emit light (luminescence, scintillation, or fluorescence): used in the measurement of X- and gamma rays and as a basis of radiological imaging.
• The effect on the atoms of silver and bromine in a photographic film leads to blackening (photographic effect): used in the measurement of X- and gamma rays and as a basis of radiography.
• The greater part of the energy absorbed from an X- or gamma-ray beam is converted into increased molecular motion, i.e. heat in the material, and produces an extremely small rise in temperature.
3.18 Attenuation of X-Rays by the Patient
In conventional projection radiography, a fairly uniform, featureless beam of X-radiation falls on the patient and is differentially absorbed by the tissues of the body. Emerging from the patient, the X-ray beam carries a pattern of intensity which is dependent on the thickness and composition of the organs in the body. Superimposed on the absorption pattern is an overall pattern of scattered radiation.
The X-rays emerging from the patient are captured on a large flat phosphor screen. This converts the invisible X-ray image into a visible image of light, which is then either recorded on film or viewed directly.
3.19 Filtration
When a radiograph is taken, the lower-energy photons in the X-ray beam are mainly absorbed by, and deposit dose in, the patient. Only a small fraction, if any, reaches the film and contributes to the image. The object of filtration is to remove a large proportion of the lower-energy photons before they reach the skin. This reduces the dose received by the patient while hardly affecting
the radiation reaching the film, and so the resulting image. This dose reduction is achieved by
interposing between the X-ray tube and patient a uniform flat sheet of metal, usually aluminum,
and called the added or additional filtration. The predominant attenuation process should be
photoelectric absorption, which varies inversely as the cube of the photon energy. The filter
will therefore attenuate the lower-energy photons (which mainly contribute to patient dose)
much more than it does the higher-energy photons (which are mainly responsible for the
image).
The X-ray photons produced in the target are first filtered by the window of the tube housing, the insulating oil, the glass insert, and, principally, the target material itself. The combined effect of these disparate components is expressed as an equivalent thickness of aluminum, typically 0.5-1 mm Al, and called the inherent filtration. The light beam diaphragm mirror also adds to the filtration. When inherent filtration must be minimized, a tube with a window of beryllium (Z = 4) instead of glass may be used.
The total filtration is the sum of the added filtration and the inherent filtration. For general
diagnostic radiology it should be at least 2.5 mm Al equivalent. (This will produce an HVL of
about 2.5 mm Al at 70 kV, and 4.0 mm at 120 kV.) Mammography is a special case.
3.20 Choice of Filter Material
The atomic number should be sufficiently high to make the energy-dependent attenuating
process, photoelectric absorption, predominate. It should not be too high, since the whole of the
useful X-ray spectrum should lie on the high-energy side of the absorption edge. If not, the
filter might actually soften the beam.
Aluminum (Z = 13) is generally used, as it has a sufficiently high atomic number to be suitable for most diagnostic X-ray beams. With higher kV values, copper (Z = 29) is sometimes used, being a more efficient filter, but it emits 9 keV characteristic X-rays. These must be absorbed by a 'backing filter' of aluminum on the patient side of the 'compound filter'. Other filter materials (molybdenum or palladium) have absorption edges (20 or 24 keV, respectively) favorable for mammography. Erbium (K-edge 58 keV) has been used at moderate kV values, and is another so-called 'K-edge filter'.

3.21 Effects of Filtration


Figure 3.13 shows the spectrum of X-rays generated at 60 kV after passing through 1, 2, and 3 mm of aluminum. A filter attenuates the lower-energy X-rays proportionately more than the higher-energy X-rays. It therefore increases the penetrating power (HVL) of the beam at the cost of reducing its intensity.
It reduces the skin dose to the patient while having little effect on the radiological image. It is responsible for the low-energy cut-off of the X-ray spectrum, depicted in Figure 3.13.
Increasing the filtration has the following effects:
• It causes the continuous X-ray spectrum to shrink and move to the right.
• It increases the minimum and effective photon energies but does not affect the maximum photon energy.
• It reduces the area of the spectrum and the total output of X-rays. Finally, it increases the exit dose/entry dose ratio, or film dose/skin dose ratio.
• Above a certain thickness, there is no gain from increasing the filtration, as the output is further reduced with little further improvement in patient dose or HVL.
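These effects can be demonstrated with a toy calculation. In the Python sketch below, both the unfiltered 60 kV spectrum (a simple triangle) and the aluminum attenuation model (a 1/E³ photoelectric term plus a constant Compton term) are assumptions for illustration, not tabulated data.

```python
import math

def mu_al(e_kev):
    """Assumed aluminum linear attenuation coefficient, cm^-1 (toy model)."""
    return 2.7 * (26000.0 / e_kev**3 + 0.15)

def unfiltered(e_kev, kv=60.0):
    """Assumed triangular bremsstrahlung spectrum (relative photon number)."""
    return max(kv - e_kev, 0.0)

energies = range(10, 61, 5)
for mm_al in (0, 1, 2, 3):
    w = [unfiltered(e) * math.exp(-mu_al(e) * mm_al / 10.0) for e in energies]
    total = sum(w)
    mean_e = sum(e * wi for e, wi in zip(energies, w)) / total
    print(f"{mm_al} mm Al: relative output {total:7.1f}, mean energy {mean_e:4.1f} keV")
```

As the filtration increases, the total output falls while the mean (effective) photon energy rises, i.e. the beam hardens, which is exactly the behavior sketched in Figure 3.13.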
[Figure 3.13: Schematic effect of increasing filtration (1, 2, and 3 mm Al) on the 60 kV X-ray spectrum (relative number of photons versus photon energy, keV).]

CHAPTER 4

IMAGING WITH X-RAYS

Rationale
X-ray imaging is one of the fastest and easiest ways for a physician to view the internal organs and structures of the body. This chapter highlights the importance of effects such as scattered radiation, the grid ratio, and the effect of the grid on the direct rays in obtaining the highest quality image. These topics should be taught in depth to gain a full understanding of such effects.

Performance Objectives
After studying chapter four, the student will be able to:

1. Explain the methods of reducing the amount of scattered radiation.
2. Describe the mechanism of the effect of scattered radiation.
3. Define grid ratio.
4. Know the effect of grids on the direct rays.
5. Determine unsharpness and blurring.
6. Determine the contrast improvement factor.
7. Define kilovoltage and milliampere-seconds.
8. Define the focal spot.
9. Know the methods of mammography.
10. Determine the exposure factors.
11. Determine the radiation dose of mammography.

CHAPTER FOUR: IMAGING WITH X-RAYS


CHAPTER CONTENTS
4.1. Contrast 63
4.2. Definition of Contrast and Physical Determinants of Contrast 64
4.3. Radiographic Contrast of Biological Tissues 65
4.3.1. Soft Tissue 67
4.3.2. Fat 68
4.3.3. Bone 68
4.4. Contrast Agents 69
4.5. Effect of Scattered Radiation 71
4.6. Scatter Reduction and Contrast Improvement 72
4.6.1. Field Size 72
4.6.2. Kilovoltage 72
4.6.3. Grid 73
4.6.4. Air Gap 73
4.6.5. Flat Metal Filter 74
4.7. Effect on Scattered Rays 74
4.7.1. Grid Ratio 75
4.8. Contrast Improvement Factor 75
4.9. Effect on Direct Rays 75
4.9.1. Focused and Unfocused Grids 75
4.9.2. Grid Cut-off 75
4.9.3. Stationary and Moving Grids 76
4.9.4. Speed and Selectivity 76
4.9.5. Moving Slot 77
4.10. Magnification and Distortion 77
4.10.1. Distortion 77
4.11. Unsharpness and Blurring 78
4.11.1. Geometrical Unsharpness 78
4.11.2. Movement Unsharpness 78
4.11.3. Absorption Unsharpness 78
4.12. Choice of Exposure Factors 79
4.12.1. Kilovoltage 79
4.12.2. Milliampere-seconds 79
4.12.3. Exposure Time 79
4.13. Macroradiography 79
4.14. Mammography 81

4.1 Contrast
Any medical image can be described in terms of three basic concepts that we have already mentioned:
• Spatial resolution, or clarity, refers to how faithfully the spatial extent of small objects is rendered within the image.
• Noise refers to the precision with which the signal is received; a noisy image will have large fluctuations in the signal across a uniform object, while a precise signal will have very small fluctuations.
• Contrast.
Image contrast refers to the difference in brightness or darkness in the image between an area of interest and its surrounding background. For example, if gray and white circles are painted onto a black canvas, the white circle presents a larger contrast with respect to the background than the gray circle (Figure 4.1). The information in a medical image usually is presented in "shades of gray". (Color is avoided because it creates false borders that can distract the observer.) One uses the differences in gray shades to distinguish different tissue types,
analyze anatomical relationships, and sometimes assess physiological function. The larger the
difference in gray shades between two different tissue types, the easier it is to make important
clinical distinctions. It is often the objective of an imaging system to maximize the contrast in
the image for a particular object of interest, although this is not always true since there may be
design compromises where noise and spatial resolution are also very important. The contrast in
an image depends on both material characteristics of the object being imaged as well as
properties of the device(s) used to image the object. In this chapter we will detail the concept of
contrast and describe the physical determinants of contrast, including material properties, x-ray
spectra, detector response, and the role of perturbations such as scatter radiation and image
intensifier veiling glare.

[Figure 4.1: "Image contrast" — the difference in brightness or darkness in the image, illustrated by low-, medium-, and high-contrast examples.]

4.2 Definition of Contrast and Physical Determinants of Contrast


In its most general terms, contrast can be defined as the fractional difference in some
measurable quantity in two regions of an image. Usually when we say "contrast", we mean
image contrast, which is the fractional difference in optical density or brightness between two
adjacent regions in an image. In conventional radiography, image contrast depends on two
other types of "contrast" called (a) radiographic contrast and (b) detector contrast. Radiographic
contrast (sometimes called subject contrast) refers to the difference in the number of x-ray or
gamma ray photons emerging from adjacent regions of the object being scanned, which
depends on differences in atomic number, physical density, electron density, thickness, as well
as the energy spectrum of the x-ray beam emitted by the source.
(Because radiographic contrast depends on the x-ray energy, which is not a characteristic of the subject, we avoid the use of the term "subject contrast" even though this is the term commonly found in other texts.)
Detector contrast refers to the ability of the detector to convert differences in photon fluence
across the x-ray beam into differences in optical density (film), image brightness (image
intensifiers), signal amplitude (electronic detectors), or some other physical, optical, or
electronic signal used to represent the image in the imaging system. As we shall see later, for
many systems the measurement we make of the image signal must be linearly related to the
intensity of the radiation signal generating the image. For some cases it will be necessary to use
the H&D curve of the film to convert film densities to relative exposure values. The detector
contrast depends on the chemical composition of the detector material, its thickness, atomic
number, electron density, as well as the physical process by which the detector converts the
radiation signal into an optical, photographic, or electronic signal. The detector contrast also
depends on the x-ray spectrum used to image the object. The detector may increase or decrease
the radiographic contrast; that is, the detector may produce photographic or electronic signals
that have a larger or smaller fractional difference between adjacent areas of the image in
comparison to the difference found in the radiographic signal.
A third component in the description of image contrast includes various physical
perturbations such as scattered radiation, image intensifier veiling glare, and the base and fog
density levels of film. Each of these tends to reduce image contrast. Also, digital image
displays allow the observer to control window and level parameters that affect brightness and
contrast. This provides a fourth component of contrast, which we will call "display contrast".
Each of these components of contrast will be discussed in this chapter.
The physical determinants of contrast can be understood by examining the processes by
which a radiographic image is formed. We will consider a system in which a patient is placed
between an x-ray tube and a detector, the detector being either a film-screen combination or an
electronic detector. The x-ray tube is operated at a certain kVp, which, along with any
filtration, determines the energy spectrum of the beam. X-ray photons from the source are
attenuated by various materials in the patient (muscle, fat, bone, air, and contrast agents) along
the path between the source and detector.
The photon attenuation of each material depends on its elemental composition as well as the
energy of the beam. This effect is assessed using its linear attenuation coefficient (μ), which
gives the fraction of photons absorbed by a unit thickness of the material. An equation (eq. 6 in
chapter 3) is useful as mass attenuation coefficients are commonly tabulated rather than linear
attenuation coefficients.
$$I = I_0 \, e^{-(\mu/\rho)\,\rho\, t} \qquad (4.1)$$

where I is the transmitted photon fluence, I₀ is the incident fluence, μ/ρ is the mass attenuation coefficient of the material, ρ is its mass density, and t is its thickness.
The components of radiographic contrast are apparent in equation (4.1).

• The first component, the mass attenuation coefficient μ/ρ, depends explicitly on energy (E) and implicitly on the atomic number of the material and its electron density.
• The second component is the mass density ρ of the material. The greater the density, the larger the attenuation afforded by that material, as seen in the product (μ/ρ)ρ.
• The third component, t, represents the thickness of the material. Again, the thicker the material, the more attenuation that material provides to the x-ray beam.

The energy of the x-ray beam determines the value of the mass attenuation coefficient and is one of the most important factors in controlling both the radiographic and overall (image) contrast. The mass attenuation coefficient generally decreases as the photon energy increases, except at points of discontinuity called absorption edges, mostly the K-edge or L-edge. At these energies, photoelectric interactions between the photon and inner-shell electrons cause large increases in the photoelectric cross-section as the photon energy slightly exceeds the binding energy of the orbital electrons. Correspondingly, contrast tends to decrease as the photon energy increases, except when K-edge or L-edge discontinuities cause large increases in contrast.
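Equation (4.1) lets us estimate radiographic contrast directly. The Python sketch below compares a path through soft tissue alone with a path in which 1 cm of tissue is replaced by bone; the coefficient values are rough assumptions for roughly 60 keV photons, not tabulated data.

```python
import math

def transmitted(mu_over_rho, rho, t_cm, i0=1.0):
    """Photon fluence after thickness t of one material, per equation (4.1)."""
    return i0 * math.exp(-mu_over_rho * rho * t_cm)

# Assumed mass attenuation coefficients (cm^2/g) and densities (g/cm^3)
soft_only = transmitted(0.20, 1.05, 10.0)
with_bone = transmitted(0.20, 1.05, 9.0) * transmitted(0.27, 1.70, 1.0)

contrast = (soft_only - with_bone) / soft_only
print(f"radiographic contrast = {contrast:.2f}")   # ~0.22
```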

4.3 Radiographic Contrast of Biological Tissues


One of the principal determinants of contrast in a radiograph of the human body is, of course,
the types of tissue found in the body region being imaged. The radiation attenuation properties
of each tissue type in turn are determined by its elemental and chemical composition, as it is for
all other (i.e. biologic and non-biologic) chemical compounds and mixtures.
For purposes of our discussion, we can consider the body as being composed of three different tissue types, plus two other attenuating materials:
• fat
• soft tissue (lean)
• bone
• air, found in the lungs (and in the gastrointestinal tract)
• contrast agents that may be introduced into the body
The chemical compositions of the three major tissue types are given in Table 4.1 and will be used in the description of attenuation properties presented in the following section.

Table 4.1: Elemental composition of body materials (% by mass)

    Element              Adipose tissue (fat)   Muscle, striated (soft tissue)   Water   Bone (femur)
    Hydrogen (low Z)     11.2                   10.2                             11.2    8.4
    Carbon               57.3                   12.3                             -       27.6
    Nitrogen             1.1                    3.5                              -       2.7
    Oxygen               30.3                   72.9                             88.8    41.0
    Sodium               -                      0.08                             -       -
    Magnesium            -                      0.02                             -       0.2
    Phosphorus           -                      0.2                              -       7.0
    Sulfur               0.06                   0.5                              -       0.2
    Potassium            -                      0.3                              -       -
    Calcium (high Z)     -                      0.007                            -       14.7

Some of their relevant physical properties are given in Table 4.2 and graphs of their mass
attenuation coefficients as a function of energy are presented in Figure 4.2.

Table 4.2: Physical properties of human body constituents

    Material      Effective atomic no.   Density (g/cm³)   Electron density (electrons/kg)
    Air           7.6                    0.00129           3.01×10²⁶   (lowest attenuation)
    Water         7.4                    1.00              3.34×10²⁶
    Soft tissue   7.4                    1.05              3.36×10²⁶
    Fat           5.9-6.3                0.91              3.34-3.48×10²⁶
    Bone          11.6-13.8              1.65-1.85         3.0-3.19×10²⁶  (highest attenuation)

[Figure 4.2: Mass attenuation coefficients (cm²/g) of water & soft tissue, cortical bone, and adipose tissue as a function of photon energy (keV).]

4.3.1 Soft Tissue


The term "soft tissue", as used in this text, excludes fat but includes muscle and body fluids. The term "lean soft tissue" sometimes is used to describe non-fatty soft tissues, but we will use the less cumbersome term "soft tissue" and implicitly exclude fat. There are, of course, many different types of soft tissue, including liver tissue, collagen, ligaments, blood, cerebrospinal fluid, and so on. However, the chemical composition of these tissues is dominated by elements with low atomic numbers. Therefore, we will assume that they are radiographically equivalent to water and have an effective atomic number of 7.4 and an electron density of 3.34 × 10²⁶ electrons per kilogram.
Water-equivalent tissues have several important radiologic properties that contribute to their contrast.
• First, the photoelectric effect dominates photon attenuation up to an energy of 30 keV, after which the Compton effect becomes increasingly dominant in the remainder of the diagnostic energy range. As we will see later, photoelectric interactions provide better differentiation between tissue types. Therefore it is desirable to use lower-energy photons to maximize contrast in diagnostic examinations.
• Second, because the body is about 70% water-equivalent by weight, contrast for these tissues is dictated predominantly by variations in thickness or density (i.e. by the product ρ·t in equation 4.1). In the diagnostic range, the HVL of soft tissue is in the range of 3 to 4 cm, so a thickness difference of 3 cm provides a radiologic contrast of about 50%.
• Finally, the radiographic similarity of the majority of tissue volume in the human body complicates our imaging task. For example, it is impossible to visualize blood directly or to separate tumors from surrounding normal soft tissue. This forces radiologists to use "contrast agents" (as will be discussed below) to provide contrast to enable
visualization of these anatomic structures of interest. Without contrast agents, it is impossible, except in the grossest manner, to visualize important structures, such as the liver, GI tract, or cardiac blood pool, using standard plain-film x-ray techniques.

4.3.2. Fat
Along with the energy of the x-ray beam, electron density, physical density, and atomic number
determine the attenuation of any material through their impact on the attenuation coefficient.
Due to the presence of low atomic number elements, fat has a lower physical density and lower
effective atomic number, and therefore a lower photoelectric attenuation coefficient, than either
soft tissue or bone. For this reason, fat has a lower attenuation coefficient than other materials
in the body (except air) at low energies where the photoelectric interactions are the dominant
effect.
However, at higher energies, fat has a somewhat higher Compton mass attenuation coefficient than other tissues found in the body. Unlike other elements, the nucleus of hydrogen is free of neutrons, giving hydrogen a higher electron density (electrons/mass) than other elements. Because hydrogen contributes a larger proportion of the mass in fat than it does in soft tissue and bone, fats have a larger electron density than other tissues. This becomes particularly important at higher energies, where Compton interactions dominate attenuation. In fact, inspection of a table of mass attenuation coefficients (Figure 4.2) shows that at higher energies the mass attenuation coefficient of fat slightly exceeds that of bone or soft tissue, precisely due to the higher electron density of fat. However, due to its low density, fat does not have a higher linear attenuation coefficient.
As Tables 4.1 and 4.2 show, the differences in atomic number, physical density, and electron density between soft tissue and fat are slight. The differences in the linear attenuation coefficients, and therefore in radiographic contrast, between fat and soft tissue are small. One must depend on the energy dependence of the photoelectric effect to produce contrast between these two materials. This is particularly true in mammography, where one uses an x-ray beam with an effective energy of about 18 keV. Such a low-energy spectrum maximizes the contrast between glandular tissue, connective tissue, skin, and fat, all of which have increasingly similar attenuation coefficients at higher photon energies.
4.3.3 Bone
The mineral component of bone gives it excellent contrast properties for x-ray photons in the
diagnostic range. This is due to two properties. First, its physical density is 60% to 80% higher
than soft tissue. This increases the linear attenuation coefficient of bone by a proportionate
fraction over that of soft tissue. Second, its effective atomic number (about 11.6) is
significantly higher than that of soft tissue (about 7.4). Since the photoelectric mass attenuation
coefficient varies with the cube of the atomic number, the photoelectric mass attenuation
coefficient for bone is about [11.6/7.4]³ ≈ 3.85 times that of soft tissue. The combined effect of
its greater physical density and its larger effective atomic number gives bone a photoelectric
linear attenuation coefficient approximately 6 times greater than that of soft tissue or fat. This
difference decreases at higher energies where the Compton Effect becomes more dominant.
However, even at higher energies, the higher density of bone still allows it to have excellent
contrast with respect to both soft tissue and fat. Therefore when imaging bone, one can resort to
higher energies to minimize patient exposure while maintaining reasonable contrast instead of
resorting to low x-ray beam energies as one is compelled to do when attempting to differentiate
fat from soft tissue.
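The back-of-the-envelope comparison above is easily reproduced. The values below are taken from Table 4.2 (using the lower ends of the quoted ranges for bone):

```python
Z_BONE, Z_SOFT = 11.6, 7.4        # effective atomic numbers (Table 4.2)
RHO_BONE, RHO_SOFT = 1.65, 1.05   # densities, g/cm^3 (Table 4.2)

mass_ratio = (Z_BONE / Z_SOFT) ** 3          # photoelectric ~ Z^3
linear_ratio = mass_ratio * RHO_BONE / RHO_SOFT
print(f"photoelectric mass coefficient ratio   ~ {mass_ratio:.2f}")   # ~3.85
print(f"photoelectric linear coefficient ratio ~ {linear_ratio:.1f}") # ~6
```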

4.4 Contrast Agents


Most of the methods we use to improve contrast involve control of variables external to the
patient, such as detector response and choice of x-ray beam kVp. The factors that control
contrast within the patient, such as the thickness, physical density, and elemental composition
of the body's tissues, are difficult if not impossible to control while an image is being recorded.
There are times, however, when the composition of the body part can be modified to
increase radiographic contrast. This is accomplished by introducing a material, called a contrast
agent, into the body to increase, or sometimes decrease, the attenuation of an object being
imaged (Table 4.3). For example, agents containing iodine commonly are injected into the
circulatory system when an angiographer is imaging blood vessels or the ventricular blood
pool. This is necessary because blood and muscle both have attenuation coefficients essentially
equal to that of water. Therefore, blood cannot be distinguished from surrounding soft tissue
structures using conventional x-ray techniques without contrast agents. When an iodinated
contrast agent is introduced into the circulatory system it increases the x-ray attenuation of the
blood, allowing the radiologist to visualize the blood pool in the vessels (either arteries or
veins), or in the cardiac chambers. Barium is another element that is commonly used as a
contrast agent, particularly in the gastrointestinal (GI) tract. A thick solution containing barium
is introduced into the gastrointestinal tract by swallowing or through some other path. When
the barium solution is inside of the GI tract, the borders of the GI tract can be visualized so that
the radiologist can look for ulcerations or any ruptures that may be present.

Iodine and barium are used as contrast agents for several reasons.
• The first is that they can be incorporated into chemicals that are not toxic even when rather large quantities are introduced into the body.
• Second, to be useful as a contrast agent, the material must have a linear attenuation coefficient that is different from that of most materials found in the human body. When iodinated contrast agents are used in angiography, the iodine must provide sufficient x-ray attenuation to give discernible contrast against surrounding soft tissues when imaged with x-rays.

Both barium (Z = 56) and iodine (Z = 53) meet these requirements. A common iodinated contrast agent is Hypaque (Table 4.3), a cubic centimeter of which contains 0.25 grams of C18H26I3O9, 0.50 grams of C11H3I3N2O4, and 0.6 grams of water, with a density of 1.35 g/cm³. Most of its attenuation is provided by the iodine component, due to its higher atomic number and physical density. Also, the K-edge of iodine occurs at 33.2 keV, at the center of the energy spectrum used for most diagnostic studies. Similarly, the barium contrast agent used in abdominal studies contains 450 grams of barium sulfate (BaSO4) in 25 milliliters of water, to give a suspension with a physical density of 1.20 g/cm³. The K-edge of barium occurs at 37.4 keV, again lying near the center of the energy spectrum used for abdominal studies.

Table 4.3: Examples of contrast media

Hypaque (iodine)
    Composition: 0.25 grams C18H26I3O9 + 0.50 grams C11H3I3N2O4 + 0.6 grams water
    Physical density: 1.35 g/cm³
    K-edge energy: 33.2 keV
    Atomic number of iodine: 53
    Applications: Angiography, genitourinary (GU) studies

Barium sulfate (BaSO4)
    Composition: 450 grams barium sulfate + 25 milliliters water
    Physical density: 1.20 g/cm³
    K-edge energy: 37.4 keV
    Atomic number of barium: 56
    Applications: Gastrointestinal (GI) studies

Air
    Composition: 78% N2 + 21% O2
    Physical density: 0.0013 g/cm³
    K-edge energy: 0.4 keV
    Effective atomic number: 7.4
    Applications: GI studies, pneumoencephalography

One can maximize the contrast of iodine and barium contrast media by imaging with x-ray photons just slightly above the k-edge of the contrast agent. This is achieved by "shaping" the x-ray spectrum with an additional metallic filter whose k-edge is higher than the k-edge of the contrast agent. The metallic filter attenuates photons at energies higher than its k-edge but transmits a larger proportion of photons with energies just below its k-edge. If the k-edge of the filter is higher than the k-edge of the contrast agent, a large proportion of the transmitted photons will fall just above the k-edge of the contrast agent. Since these photon energies are chosen to fall in a region of maximum attenuation for the contrast agent, they will maximize its contrast. As you might deduce, metals with slightly higher atomic numbers than the contrast agent are useful as x-ray beam filters in these applications, since they also have slightly higher k-edges. Therefore rare-earth metals such as samarium (Sm) and cerium (Ce) are commonly used to filter the x-ray beam in contrast studies involving iodine or barium. Because of the principle of their operation, they are also called "k-edge" filters.
[Figure 4.3: Mass attenuation coefficients (cm²/g) for the contrast agents iodine and barium, compared with water and air, as a function of photon energy (keV).]
A third contrast agent, one that reduces rather than increases x-ray attenuation, is air. Prior to
the advent of computed tomography, radiographic images of brain structures were obtained
after injecting air into the cerebral ventricles through a catheter inserted in the spinal column.
The introduction of air displaced cerebral spinal fluid in the ventricles, which otherwise has
essentially the same x-ray attenuation properties as the surrounding brain tissue. Without the
introduction of a contrast agent, the ventricles could not be visualized with standard x-ray
techniques, nor could the various white and gray matter structures in the brain be differentiated
using plane-film radiography. The air injected into the ventricles displaced the cerebral spinal
fluid, allowing the radiologist to visualize the shape of the ventricles. Any distortion in the
shape suggested the presence of a tumor or other abnormality.
The property of air that makes it a useful contrast agent is its low physical density. Since both the photoelectric and Compton linear attenuation coefficients of a material are directly proportional to its physical density, air maintains a low linear attenuation coefficient with respect to any other material found in the body.
4.5 Effect of Scattered Radiation
The primary radiation carries the information to be imaged, while the scattered radiation
obscures it. This is similar to the way in which the light in a room affects the image seen on a
television screen. The amount (S) of scattered radiation reaching a point on the film-screen may
be several times the amount (P) of primary radiation reaching the point. The ratio S/P depends
on the thickness of the part and the area of the beam. The ratio is typically 4:1 for a
posteroanterior chest (in which case only 20% of the photons recorded by the film-screen carry
useful information) and 9:1 for a lateral pelvis. Since the scattered radiation is more or less
uniform over the image, it acts like a veil and reduces the contrast which would otherwise be
produced by the primary rays by the factor (1 + S/P) which may be anything up to 10 times.
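The veiling effect is captured by the factor 1/(1 + S/P). A minimal Python sketch, taking an assumed primary-ray contrast of 0.30 for illustration:

```python
def reduced_contrast(primary_contrast, s_over_p):
    """Contrast remaining after a uniform scatter veil: C / (1 + S/P)."""
    return primary_contrast / (1.0 + s_over_p)

for view, s_over_p in [("PA chest", 4.0), ("lateral pelvis", 9.0)]:
    print(f"{view:15s}: 0.30 -> {reduced_contrast(0.30, s_over_p):.3f}")
```

A contrast of 30% falls to 6% behind a posteroanterior chest and to just 3% behind a lateral pelvis, which is why the scatter-reduction measures of Section 4.6 matter so much.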
4.6 Scatter reduction and Contrast Improvement
The image-formation process in diagnostic radiology essentially captures a radiographic "shadow" cast by the body in a beam of x-rays from a point source. The accuracy of this "shadow" depends on the photons being highly directional. Scattered radiation, however, is not emitted from a single point source; rather, it strikes the film-screen cassette from random directions and carries little useful information, unlike the directional primary photons that arise from the source.
There are a number of ways to reduce the scattered radiation produced by the patient (relative to the primary) and so improve radiographic contrast:
4.6.1 Field size
Reducing the field area, by the use of cones or the light beam diaphragm, reduces the volume
of scattering tissue and so decreases scatter and improves contrast as shown in figure 4.4.

[Figure 4.4: Reducing the field size. A small field size reduces the scatter fraction; a large field size increases it.]

4.6.2 Kilovoltage
Using a lower kV produces less forward scatter and more side scatter. At the same time it
produces less penetrating scatter, so scatter produced at some distance from the film is less
likely to reach it. In practice, these effects may not be very significant. Reducing the kV does
increase the contrast, but primarily because of the increased differential photoelectric
absorption.
The amount of scatter (relative to the primary rays) reaching the film-screen may be reduced
and contrast increased by interposing between it and the patient:
4.6.3 Grid
An 'anti-scatter' grid, seen in cross-section in Figure 4.5, consists of thin (0.07 mm) strips of a
heavy metal (such as lead) sandwiched between thicker (0.18 mm) strips of inter-space material
(plastic, carbon, fiber, or aluminum, which are transparent to X-rays), encased in aluminum or
carbon fiber. The lead strips absorb, say, 90% of the scattered rays which hit the grid obliquely, while allowing, say, 70% of the primary rays to pass through the gaps and reach the film.
[Figure 4.5: A radiographic (anti-scatter) grid, seen in cross-section, absorbing obliquely incident scattered radiation.]

4.6.4 Air Gap


If, as in Figure 4.6, the film-screen is moved some 30 cm away from the patient, much of the
obliquely traveling scatter (shown dashed) misses it, and the contrast is improved.

Due to the inverse square law, the increased distance causes:

(1) a small reduction in the intensity of the primary radiation, which comes from the anode, some distance away, but
(2) a large reduction in the intensity of the scattered radiation, since that comes from points within the patient, much nearer. Use of an air gap increases contrast but necessitates an increase in the kV or mAs, and also results in a magnified image.

[Figure 4.6: Effect of an air gap on the scattered radiation. A small air gap increases the scatter fraction reaching the film; a large air gap reduces it.]

4.6.5 Flat Metal Filter


Such a filter, placed on the cassette, absorbs the softer and obliquely traveling scatter more than
the harder direct rays. This is not very effective, and necessitates an increase in the mAs.

[Figure 4.7: Scanning-slit system using a fan beam, with fore and aft slits, to reduce scatter.]

4.7 Effect on Scattered Rays


Few of the scattered rays S can pass through the channels between the strips of lead and reach
the film. Since most of them are traveling obliquely and are relatively soft, they will be largely
absorbed by the strips of lead. This is shown in greater detail in Figure 4.8, A being the lead
strips and B the interspace material. It will be seen that the grid has only a small angle of
acceptance within which scattered rays can reach a point on the film.

4.7.1 Grid Ratio


The grid ratio is the depth of the interspace channel divided by the width of the interspace
channel, and is typically 8:1. The larger the grid ratio, the smaller the angle of acceptance, the
more efficient the grid is at absorbing scattered radiation and the greater the contrast in the
image.
With very large fields, especially at a high kV, more scatter is produced, and a high-ratio
grid (12:1 or 16:1) is preferable. No grid would generally be used with thin parts of the body or
with children or where there is an air gap.
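The geometry behind this statement is simple: if r = h/D is the grid ratio (interspace depth h over interspace width D), the acceptance half-angle for oblique rays is roughly arctan(1/r). A short Python check:

```python
import math

# Acceptance angle ~ atan(1 / grid_ratio); a toy geometric estimate
for ratio in (8, 12, 16):
    angle = math.degrees(math.atan(1.0 / ratio))
    print(f"grid ratio {ratio:2d}:1 -> acceptance angle ~ {angle:4.1f} degrees")
```

Raising the ratio from 8:1 to 16:1 roughly halves the acceptance angle (from about 7° to about 3.6°), which is why high-ratio grids absorb scatter more efficiently.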

[Figure 4.8: (a) Construction of a grid: lead strips (A) and interspace material (B) intercepting scattered rays from different parts of the body while transmitting the primary beam. (b) Grid cut-off caused by placing a focused grid upside down on the film.]

4.8 Contrast Improvement Factor


The contrast improvement factor is defined as contrast with a grid divided by contrast
without a grid. It lies typically between 3 and 5, depending on the grid ratio and the various
factors affecting the relative amount of scatter produced.
4.9 Effect on Direct Rays
4.9.1 Focused and Unfocused Grids
In a focused grid the strips are tilted progressively from the center to the edges of the grid so that they all point toward the tube focus, as in Figure 4.8. About 20% of the direct rays impinge on the edges of the lead strips and are attenuated. The rest pass through the gaps between the lead strips.
4.9.2 Grid cut-off
Within certain tolerances (a) the grid must be used at a specified distance from the anode, (b)
the tube must be accurately centered over the grid, and (c) the grid must not be tilted; otherwise
cut-off of the primary rays will occur. With a linear (i.e. uncrossed) grid, the X-ray tube can be angled along the length of the grid without 'cutting off' the primary radiation. This can be useful for certain examinations. If the tube is angled the other way, or if a focused grid is accidentally placed the wrong way round or upside down on the film, the primary beam will be absorbed, leaving perhaps only one small central area of the film exposed. This is known as 'grid cut-off' (see Figure 4.8b), and is more restrictive with high-ratio grids.
Unfocused grids, in which the strips are completely parallel, may be used at any focus
distance but suffer severely from cut-off. The effect can be reduced by using a longer FFD or a
grid with a lower grid ratio.
4.9.3 Stationary and Moving Grids
- Grid lines (grid lattice) are shadows of the lead strips of a stationary grid superimposed on the radiological image. If the line density (number of grid lines per millimeter) is sufficiently high, they may not be noticeable at the normal viewing distance, but they nevertheless reduce the definition of fine detail.
- A moving grid ('Bucky') has typically five lines per millimeter. During the exposure it moves for a short distance, perpendicular to the grid lines. It can move to and fro (reciprocating) or in a circular fashion (oscillating). Such movement blurs out the grid lines. It is important that the grid starts to move before the exposure starts, moves steadily during the exposure, and does not stop moving until after the exposure is over.
- A multiline grid has seven or more lines per millimeter together with a high grid ratio, and can be used as a stationary grid without the lines being visible. It is used when a moving grid cannot be used and, being thinner, incurs a lower dose to the patient.
4.9.4 Speed and Selectivity
The two tasks of a grid - to transmit primary radiation and absorb scattered radiation - may be judged by its selectivity:

$$\text{selectivity} = \frac{\text{fraction of primary radiation transmitted}}{\text{fraction of scattered radiation transmitted}}$$

Typical figures range from 6 to 12, depending on the grid ratio and tube kV.
The use of grids necessitates increased radiographic exposure for the same film density, because of the removal of some of the direct rays and most of the scatter. The speed or exposure factor is the ratio

$$\text{exposure factor} = \frac{\text{exposure needed with the grid}}{\text{exposure needed without the grid}}$$

or, in practical terms,

$$\text{exposure factor} = \frac{\text{mAs with the grid}}{\text{mAs without the grid}}$$

and is typically 3-5, depending on the grid structure, patient thickness, etc. Using a grid and, consequently, an increased exposure obviously increases patient dose.

4.9.5 Moving Slot


An alternative to a grid consists of two metal plates each with a slot, 5 mm wide, aligned with
the beam in front of and behind the patient. They are arranged to move steadily across the field
during the exposure. With only a slice of the patient being irradiated at any instant, little scatter
is produced. An increase of exposure time is necessary, but this can be mitigated by employing
several slits well spaced apart.
4.10 Magnification and Distortion
Some important aspects of a radiological image arise simply from the fact that X-rays travel in straight lines. Figure 4.9a shows how the image I of a structure S produced by a diverging X-ray beam is larger than the structure itself.

[Figure 4.9: Magnification and blurring. (a) Formation of a magnified image I of a structure S by a focal spot of size f, at focus-film distance F and object-film distance h; B marks the blurred edge (penumbra). (b) A plot of film density along a line across the film, showing the blurring B and contrast C.]

If the diagram is redrawn with larger or smaller values for F and h, it will be seen that
magnification is reduced by using a longer focal-film distance (FFD) F or by decreasing the
object-film distance (OFD) h. When positioning the patient, the film is therefore usually placed
close to the structures of interest. If the tissues are compressed this will also reduce patient
dose. On the other hand, advantage is taken of increased magnification in macroradiography.
The magnification M is equal to

$$M = \frac{\text{size of the image}}{\text{size of the structure}} = \frac{F}{F - h}$$

as shown in Figure 4.9a.

4.10.1 Distortion
This refers to a difference between the shape of a structure in the image and in the subject. It
may be due to foreshortening of the shadow of a tilted object, e.g. a tilted circle projects as an
ellipse. It may also be caused by differential magnification of the parts of a structure nearer to
and farther away from the film–screen, an effect which is familiar to photographers. It can be
reduced by using a longer FFD.

4.11 Unsharpness and Blurring


4.11.1 Geometrical Unsharpness
The image of a stationary structure produced by the beam from an ideal point source would be perfectly sharp. At the edge of the shadow, the intensity of X-rays would change suddenly from a high to a low value. Figure 4.9a shows that, in the case of an effective focal spot f mm square, the intensity changes gradually over a distance B, variously called the penumbra, blurring, or unsharpness.
If the diagram is redrawn with larger or smaller values for f, F, and h, it will be seen that blurring is reduced by:
• using a smaller focal spot;
• decreasing the object-film distance; or
• using a longer FFD, which also reduces magnification and distortion.
F is usually and conveniently 1 m, but it may be smaller in some techniques, such as fluoroscopy, while 2 m is used with a chest stand.
The geometrical blurring B(g) = f h/F (approximately). If f = 2 mm, F = 1 m, and h = 100 mm, then B(g) = 0.2 mm. Usually the geometrical blurring is less than this, as f and h are smaller. Figure 4.9b plots the intensity of X-rays (or film density) along a line across the film; B is the geometrical blurring and C the contrast.
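Both relations are easy to evaluate. A minimal Python sketch reproducing the worked example above:

```python
def geometric_blur(f_mm, h_mm, F_mm):
    """Geometrical blurring B(g) ~ f * h / F (all lengths in mm)."""
    return f_mm * h_mm / F_mm

def magnification(h_mm, F_mm):
    """Magnification M = F / (F - h), from Section 4.10."""
    return F_mm / (F_mm - h_mm)

# f = 2 mm focal spot, F = 1 m FFD, h = 100 mm object-film distance
print(geometric_blur(2, 100, 1000))        # 0.2 (mm)
print(round(magnification(100, 1000), 2))  # 1.11
```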
4.11.2 Movement Unsharpness
One of the problems in radiography is the imaging of moving structures. If, during an exposure
of duration t seconds, the structure moves parallel to the film with an average speed v, the edge
of the shadow moves a distance slightly greater than v·t. This produces movement blurring B(m) = v·t (approximately). If v = 4 mm/s and t = 0.05 s, then B(m) = 0.2 mm. (Movement strictly perpendicular to the film does not produce blurring.)
Movement blurring may be reduced to a satisfactory degree by immobilization and by using
a sufficiently short exposure time, made possible by a rotating anode tube and intensifying
screens. It is, on the other hand, made use of in mechanical tomography.
4.11.3 Absorption Unsharpness
A gradual change in absorption near the edge of a tapered or rounded structure, e.g. a blood
vessel, produces absorption blurring. This is inherent in the objects being imaged, though the
effect can sometimes be reduced by careful patient positioning.
In practice, all three types of unsharpness combine to limit the resolution achievable.

4.12 Choice of Exposure Factors


The controls of an X-ray set usually include kV and two out of the three factors mA, exposure time, and mAs (Table 4.4). In making a particular radiological examination, when screens are used the film dose (see Section 2.1) is approximately proportional to kV⁴ × mAs, so that kV and mAs have to be considered together. As a rule of thumb, since 90⁴ is about twice 75⁴, increasing the tube voltage by 15 kV allows the mAs to be halved when imaging a given subject.
Table 4.4: Examples of exposure factors

    Examination                       Exposure factors
    Barium meal (screening)           90 kV; 0.5 mA (up to 5 mA if necessary)
    Average adult chest X-ray         70 kV; 10 mAs, comprising a high mA (e.g. 300-500 mA) and a short exposure time of only a few milliseconds (e.g. 0.02 s)
    Chest X-ray (high-kV technique)   120 kV; 4-5 mAs, composed similarly to the average adult chest X-ray above
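The kV⁴ rule of thumb is convenient to apply in code. A minimal sketch (the rule itself is an approximation, valid for screen-film imaging as stated above):

```python
def equivalent_mas(mas_old, kv_old, kv_new):
    """mAs giving roughly the same film dose at a new kV (kV^4 rule)."""
    return mas_old * (kv_old / kv_new) ** 4

# Raising 75 kV by 15 kV roughly halves the required mAs:
print(round(equivalent_mas(10.0, 75, 90), 1))   # ~4.8 mAs
```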

4.12.1 Kilovoltage
In general, as high as possible a kV will be used so as to (a) increase the penetration of the
beam and reduce patient dose, (b) increase the latitude of exposure and range of tissues
displayed, and (c) reduce the mAs needed and thus allow shorter exposure times, within the
rating of the tube; but (d) not so high a kV that insufficient contrast results in the area of
diagnostic interest.
4.12.2 Milliampere-seconds
Having chosen the kV for a particular examination, this determines the required mAs, which is then subdivided into (a) as short as possible an exposure time (to arrest motion) and (b) a correspondingly high mA, just within the rating of the tube.
4.12.3 Exposure time
The necessary exposure time can be reduced by selecting (a) a higher kV and (b) a larger focal
spot. However, as explained in Section 3.6, using a larger focal spot to reduce the exposure
time may sometimes increase the overall blurring. Focal spot size, exposure time and screen
speed should be chosen together, to give minimal total blurring, which occurs when the
separate blurring components are approximately equal. The necessary exposure time can also
be reduced by using:
• a three-phase generator, rather than single-phase;
• full-wave single-phase rather than self-rectification; and
• a higher-speed and larger-diameter anode disk.
4.13 Macroradiography
Macroradiography is a radiographic imaging technique used to increase the size of the image
relative to that of the object. Where a magnified image is required, the anode-object distance is
decreased relative to the object-film distance, which is increased. Figure 4.10 shows that the
image of a structure S is larger when the film F2 is some distance from the patient than when it
is at F1, close to the patient. This has the following implications for technique:
 Focal spot
A very small focal spot must be used (e.g. 0.1 mm); otherwise geometric blurring would
be unacceptable. As a result, the heat rating is reduced.
 Exposure time
Exposure times may have to be increased to several seconds to keep within the rating
of the focal spot; as a result, movement blurring may increase. Immobilization is
therefore important.

Figure 4.10: Macroradiography. The focus, a structure S, and two film positions: F1 close
to the patient and F2 some distance from the patient.

 Patient dose
Patient dose is increased because of the increased exposure needed.
 Contrast
No grid is needed as the air gap reduces the scatter reaching the film. This helps to
reduce the additional exposure needed.
 Screen blurring
The relative effect of screen blurring is reduced. This means that fast screens may be
used, which again helps to reduce the additional exposure needed.
 Geometrical blurring
Geometrical blurring is increased, relative to the size of the image.
 Quantum mottle
Quantum mottle is not increased, since the same number of X-ray photons are absorbed
in the screen, for the same film blackening.
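The geometry of Figure 4.10 can be quantified with the usual similar-triangles relations, magnification m = FFD/FOD and geometric blurring B = f(m − 1) for a focal spot of size f; these formulas follow from the figure's geometry rather than being quoted in the text:

    # Sketch: magnification and geometric blurring in macroradiography
    # (similar-triangles relations implied by the geometry of Fig. 4.10).
    def magnification(ffd_cm, fod_cm):
        # m = focus-film distance / focus-object distance
        return ffd_cm / fod_cm

    def geometric_blur_mm(focal_spot_mm, m):
        # Penumbra at the film: B = f * (m - 1)
        return focal_spot_mm * (m - 1)

    m = magnification(ffd_cm=100, fod_cm=50)   # film far from patient: m = 2.0
    print(geometric_blur_mm(0.1, m))           # 0.1 mm focus -> 0.1 mm blur
    print(geometric_blur_mm(1.0, m))           # 1.0 mm focus -> 1.0 mm blur

This is why a very small focal spot is essential once the magnification is pushed much above unity.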

4.14 Mammography
Mammography aims to demonstrate both microcalcifications as small as 100 μm in size, of high
inherent contrast, and larger areas of tissue having much lower contrast, on the same film.
The breast does not attenuate the beam greatly, allowing the use of the low kV that is needed
to obtain sufficient photoelectric absorption in order to differentiate between normal and
abnormal breast tissues. Ideally, monoenergetic X-rays of about 18-20 keV would produce
optimal contrast and penetration in the case of a small breast.
This can be approximated by operating a tube having a beryllium window and a
molybdenum target at 28 kV (constant potential) and using a 0.03 mm molybdenum filter
which has an absorption edge at 20 keV. This transmits most of the characteristic radiation
(17.9 and 19.5 keV) but removes most of the continuous spectrum. Figure 4.11 compares (A) the
spectrum produced in this way by a molybdenum-molybdenum combination with (B) the
spectrum that would be produced by operating a tube with a tungsten target at 30 kV and using
a 0.5 mm Al filter.

Figure 4.11: X-ray spectra using a molybdenum target and filter (A) and a tungsten target
with aluminum filter (B), plotted as relative number of photons against photon energy (keV).

With a thicker breast, higher-energy radiation is preferred. Better results, with a significant
decrease in absorbed dose to the breast tissue, are obtained with a rhodium or a palladium filter,
having absorption edges at 23 and 24 keV, respectively. Better still is a tube with a rhodium
target, giving 20.2 and 22.7 keV characteristic rays, used with a rhodium filter.
The small focal spot and the inefficient production of X-rays consequent on the low kV and
target atomic number incur problems of tube loading.
Films with a gamma (γ) of about 3 are used. Contrast is also improved (but the exposure and
patient dose are increased) by the use of a grid or air gap. A very fine grid is the preferred
option.

CHAPTER 5

FLUOROSCOPY

Rationale
There is a significant challenge in providing medical professionals, who have the
responsibility of conducting procedures for various radiographic diagnostic techniques,
with appropriate knowledge of the physics that can be applied to enhance the
effectiveness and safety of these techniques. This chapter provides a general description
of the components of a fluoroscope and the mechanism of its operation.


Performance Objectives

After studying chapter five, the student will be able to:

1. Understand general physics of fluoroscopy


2. Describe the components of a fluoroscope
3. Practice radiation safety
4. Understand basic and advanced principles of fluoroscopic assessment
5. State and explain the inverse square law.

CHAPTER FIVE: FLUOROSCOPY


CHAPTER CONTENTS
5.1. Overview of Fluoroscopy 83
5.2. Description 84
5.3. How Fluoroscopy Works 84
5.4. Benefits / Risks 85
5.5. Components of Fluoroscope 86
5.5.1. X-Ray Generator 86
5.5.2. X-Ray Tube 86
5.5.3. Collimator 87
5.5.4. Patient Table and Pad 87
5.5.5. Image Intensifier 87
5.6. Image Intensifier Fluoroscopy Systems 89

5.1 Overview of Fluoroscopy


Fluoroscopy is an imaging technique commonly used by physicians to obtain real-time
images of the internal structures of a patient through the use of a fluoroscope. In its simplest
form, a fluoroscope consists of an x-ray source and a fluorescent screen between which a patient
is placed. However, modern fluoroscopes couple the screen to an x-ray image intensifier and
charge-coupled device (CCD) video camera allowing the images to be played and recorded on
a monitor. The use of x-rays, a form of ionizing radiation, requires that the potential risks
from a procedure be carefully balanced with the benefits of the procedure to the patient.
While physicians always try to use low dose rates during fluoroscopy procedures, the length
of a typical procedure often results in a relatively high absorbed dose to the patient. Recent
advances include the digitization of the images captured and flat-panel detector systems
which reduce the radiation dose to the patient still further.

Figure 5.1: Fluoroscopic Acquisition components

5.2 Description
Fluoroscopy is a type of medical imaging that produces a moving x-ray picture: a continuous
X-ray beam is used to display an organ or part (internal structures) of a patient in real time on
a computer screen or television monitor. Fluoroscopes are used for interventional procedures such as
guiding the placement of a catheter during an arteriography, for assessing stomach and bowel
movement and function, and for detecting obstructions in the airway or blood vessels. A
contrast agent may also be used to enhance the images. During a fluoroscopy procedure, an
X-ray beam is passed through the body. The image is transmitted to a monitor so the
movement of a body part or of an instrument or contrast agent (“X-ray dye”) through the body
can be seen in detail.
Fluoroscopy is most often used to view the upper gastrointestinal (GI) tract, which includes
the stomach, esophagus, duodenum, and the upper small intestine. It is also used to view the
lower GI tract.
In its simplest form, a fluoroscope consists of an X-ray source and a fluorescent (phosphor)
screen between which a patient is placed; the screen converts the pattern of x-rays leaving the
patient into a pattern of light, since the intensity of the light is proportional to the intensity of
the x-rays. However, modern fluoroscopes couple the screen to an X-ray image intensifier and
CCD video camera, allowing the images to be recorded and played on a monitor. The x-ray
tube is usually located under the table and is attached to the fluoroscopic tower, but in some
types of fluoroscopy the x-ray tube is located above the table (see figure 5.2).

Figure 5.2: (a) X-ray tube above the table; (b) X-ray tube under the table.
5.3 How Fluoroscopy Works
The fluoroscope is a type of x-ray machine that can use either a continuous or a pulsing x-ray
beam. The x-ray machine has an x-ray tube that is constructed of glass or metal and has a
vacuum seal inside. The x-ray tube is usually located under the table and is attached to the
fluoroscopic tower. It generates x-rays by converting the electricity from its power line (AC,
120-480 volts) into a tube potential in the 25-150 kilovolt range. This creates


a stream of electrons that are shot against a tungsten target. When the electrons hit this target
(called an anode) the atomic structure of the tungsten stops the electrons, causing a release of
x-ray energy. This energy is focused by the x-ray tube onto the area of the body to be imaged.
These very energetic electromagnetic waves can pass through the body and create images
of internal structures. Because the different tissues within the body are of different densities,
those waves are attenuated (weakened) at differing rates as they pass through. Bone, for
example, is very dense and absorbs a lot of the x-rays, while the tissues surrounding the bone
are less dense and absorb less of the x-ray. It is this difference in the absorption of the waves
that creates variations in the exposures and allows the detail of the image to be formed.
With a fluoroscope, when the beam passes through the body it hits an image intensifier that
increases the brightness of the image many times (e.g. x1000 to x5000) so that it can be
viewed on a display screen. The image intensifier itself is coupled to a video camera that
captures and encodes the two-dimensional patterns of light as a video signal from the x-ray
machine. The signal is converted back into a pattern of light seen as the image on the monitor.
The camera output can be digitized for computer image enhancements.

5.4 Benefits / Risks


Fluoroscopy is used in a wide variety of examinations and procedures to diagnose or treat
patients. Some examples are:

 Barium X-rays and enemas (to view the gastrointestinal tract)


 Catheter insertion and manipulation (to direct the movement of a catheter through
blood vessels, bile ducts or the urinary system)
 Placement of devices within the body, such as stents (to open narrowed or blocked
blood vessels)
 Angiograms (to visualize blood vessels and organs)
 Orthopedic surgery (to guide joint replacements and treatment of fractures)

Because fluoroscopy involves the use of X-rays, a form of ionizing radiation, all fluoroscopic
procedures carry some risk of radiation-induced cancer to the patient. The radiation dose
the patient receives varies greatly depending on the size of the patient as well as the length of the
procedure. Fluoroscopy can result in relatively high radiation doses, especially for complex
interventional procedures (such as placing stents or other devices inside the body) which
require fluoroscopy to be administered for a long period of time.

Radiation-related risks associated with fluoroscopy include:
 radiation-induced injuries to the skin and underlying tissues (“burns”), which occur
shortly after the exposure, and
 radiation-induced cancers, which may occur sometime later in life.

The probability that a person will experience these effects from a fluoroscopic procedure is
statistically very small. Therefore, if the procedure is medically needed, the radiation risks are
outweighed by the benefit to the patient. In fact, the radiation risk is usually far less than other
risks not associated with radiation, such as anesthesia or sedation, or risks from the treatment
itself. To minimize the radiation risk, fluoroscopy should always be performed with the
lowest acceptable exposure for the shortest time necessary.
5.5 Components of Fluoroscope
Simple fluoroscopes consist of nothing more than a fluorescent screen and an X-ray source; a
patient is placed in between them. Modern fluoroscopes can be more complicated.
Components, as shown in figure 5.3, of the fluoroscopic imaging chain are:
 x-ray generator
 x-ray tube
 collimator
 filters
 patient table
 grid
 image intensifier
 optical coupling
 television system
 image recording
The components pertinent to orthopedic surgery are the x-ray generator, x-ray tube,
collimator, patient table and pad, and image intensifier.
5.5.1 X-ray Generator
The x-ray generator produces the electrical energy delivered to the x-ray tube and allows
selection of the kilovolt peak (kVp) and tube current (mA).

5.5.2 X-Ray Tube
An X-ray tube is a device that generates X-rays by converting the electrical energy supplied
by the x-ray generator into an x-ray beam. It is the source of radiation, so the working distance
from the x-ray tube should be kept as large as practicable.

5.5.3 Collimator
Collimator contains multiple sets of shutter blades that define the shape of the x-ray beam.
There is a rectangular and a round set of blades. By further collimating the beam, or "coning
down" to the area of interest, the exposed volume of tissue is reduced, which results in less
scatter production and better image contrast. It also reduces the overall patient and surgeon
radiation dose by minimizing scatter and direct exposure.
5.5.4 Patient Table and Pad
The table must balance adequate strength, to support the patient's body weight, against
minimal x-ray attenuation. This can be accomplished with carbon fiber composite materials.
Thin foam pads are better than thick gel pads.

Figure 5.3: Components of the fluoroscopic imaging chain, from x-ray generator and tube
through collimator, filtration, table, patient, grid, image intensifier, and optical coupling to
the video camera and monitor.

5.5.5 Image Intensifier


An image intensifier is a large and complex electronic vacuum tube that amplifies the
fluoroscopic image so that it can be picked up by a television camera and displayed
(figure 5.4).
The image-intensifier tube is approximately 50 cm long. The anode is a circular plate with
a hole in the middle through which electrons pass to the output phosphor, which is just on the
other side of the anode and is usually made of zinc cadmium sulfide. The output phosphor is
the site where the electrons interact and produce light.


Major components include a curved input layer to convert x-rays to electrons. The CsI
crystals are grown as tiny needles (see figure 5.5) and are tightly packed in a layer of
approximately 300 μm thickness; each crystal is approximately 5 μm in diameter. This results in
microlight pipes with little dispersion and improved spatial resolution.

Figure 5.4: Image intensifier-TV subsystem. X-rays entering the tube strike the CsI input
phosphor (x-rays → light); the SbCs3 photocathode converts the light to electrons, which are
accelerated through roughly 25 keV toward the anode and focused onto the ZnCdS:Ag output
phosphor (electrons → light, around 5000× amplification). The output light passes through an
aperture (iris) and lens optics to a video or CCD camera, and then to an ADC for a digital
image or to a recorder.


Other components are the electron lenses to focus the electrons, an anode to accelerate them,
and an output layer to convert them into a visible image.

Figure 5.5: Structured phosphor: cesium iodide (CsI) crystals grow in long columns (needles)
on a support behind the vacuum window and act as light pipes (optical fibers), channeling the
light emitted in each needle toward the photocathode.


X-rays that exit the patient and are incident on the image-intensifier tube are transmitted
through the glass envelope and interact with and deposit energy into the layer of phosphor
(which is composed of cesium iodide, CsI); a portion of this energy is converted into light.
The light from the phosphor is then absorbed by the photocathode layer of the image
intensifier, which uses the light energy to emit electrons: the number of electrons emitted is
in direct proportion to the amount of light that was absorbed. The electrons are then
accelerated by a high voltage (25,000-35,000 V) placed between the input cathode of the
image intensifier and the output phosphor, the emitted electrons gain substantial kinetic
energy and travel at a high velocity. Electrostatic plates are used to focus the electrons and
direct them to the output phosphor, which has a much smaller surface area.
5.6 Image Intensifier Fluoroscopy Systems
Upon impacting the output phosphor, a portion of the energy is converted back to a light
image. This is similar to the effect of radiographic intensifying screens. Because the electron
flux from a large input surface area is concentrated onto a much smaller output surface area at
the output phosphor, the light image that emerges from the output phosphor is much brighter
than it would be at the input phosphor layer (minification gain). Moreover, the high kinetic
energy gained by the electrons, a result of the high voltage applied across the image
intensifier, also increases the emitted light from the output phosphor (flux gain). After passing
through a lens system and an aperture, the television camera tube intercepts this light image
and converts the light pattern into a series of electrical signals that may be displayed on the
television monitor. The tube components are contained within a glass or metal envelope that
provides structural support but more importantly maintains a vacuum. When installed, the
tube is mounted inside a metal container to protect it from rough handling and breakage.
There are different diameters that can accommodate body parts of various sizes.
The next active element of the image-intensifier tube is the photocathode, which is
bonded directly to the input phosphor with a thin, transparent adhesive layer. The
photocathode is a thin metal layer usually composed of cesium and antimony compounds that
respond to stimulation of input phosphor light by the emission of electrons. The photocathode
emits electrons when illuminated by the input phosphor. This process is known as
photoemission. The term is similar to thermionic emission, which refers to electron
emission that follows heat stimulation. Photoemission is electron emission that follows light
stimulation.
For the image pattern to be accurate, the electron path from the photocathode to the output
phosphor must be precise. The engineering aspects of maintaining proper electron travel are
called electron optics because the pattern of electrons emitted from the large cathode end of
the image-intensifier tube must be reduced to the small output phosphor.
The devices responsible for this control, called electrostatic focusing lenses, are located
along the length of the image-intensifier tube. The electrons arrive at the output phosphor with
high kinetic energy and contain the image of the input phosphor in minified form.


The increased illumination of the image is due to the multiplication of light photons at the
output phosphor compared with x-rays at the input phosphor and the image minification
from input phosphor to output phosphor. The ability of the image intensifier to increase the
illumination level of the image is called its brightness gain. The brightness gain is simply the
product of the minification gain and the flux gain.
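As a numerical illustration (the phosphor diameters and flux gain below are assumed, typical values, not taken from the text):

    # Sketch: brightness gain = minification gain x flux gain.
    # Minification gain is the ratio of input to output phosphor areas,
    # i.e. (input diameter / output diameter) squared.
    def minification_gain(input_diam_mm, output_diam_mm):
        return (input_diam_mm / output_diam_mm) ** 2

    flux_gain = 50                       # assumed light amplification at the output phosphor
    mg = minification_gain(230, 25)      # e.g. a 23-cm input and 2.5-cm output phosphor
    print(mg)                            # ~85
    print(mg * flux_gain)                # brightness gain ~4200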

CHAPTER 6

COMPUTED TOMOGRAPHY

Rationale
This chapter provides an overview of computed tomography and a basic understanding
of the underlying principles of image reconstruction in CT.

Performance Objectives

After studying chapter six, the student will be able to:

1. Describe the scan-and-step (step-and-shoot) slice acquisition method and the general
characteristics of the data sets it produces.
2. Describe the helical/spiral volume acquisition method and the general
characteristics of the data set it produces.
3. Describe and illustrate the general concept of the back-projection method of
image reconstruction.
4. Explain what is meant by "filtered" back projection.
5. Sketch a slice of tissue and illustrate the concept of voxels that are formed
during image reconstruction.
6. Describe and illustrate the general range of CT numbers for tissue and
materials in a human body.
7. Explain how windowing contributes to high contrast sensitivity.

CHAPTER SIX: COMPUTED TOMOGRAPHY


CHAPTER CONTENTS
6.1. Introduction 93
6.2. History of Computed Tomography 93
6.3. Operating Steps 95
6.4. Different Generations of CT Scanners 96
6.4.1. First-Generation 96
6.4.2. Second-Generation 96
6.4.3. Third-Generation 97
6.4.4. Fourth-Generation 98
6.4.5. Fifth-Generation (Electron-Beam Scanner) 99
6.5. Basic Principles of CT 100
6.6. CT Image 103
6.7. Principles of Helical CT Scanning Operation 104
6.8. Factors Affecting Spatial Resolution 105
6.8.1. Matrix and Pixel Size 105
6.8.2. Field Of View (FOV) in CT 107
6.8.3. Display Field of View (DFOV) 108
6.8.4. Voxel Size 108
6.8.5. Focal Spot Size 110
6.8.6. Blur 110
6.9. Nyquist Sampling Theorem 110
6.10. Low-Contrast Resolution 111
6.11. Factors Relating to Low-Contrast Resolution 112
6.12. Basic CT Scanner Components 113
6.12.1. Scanning Unit (Gantry) 113
6.12.2. X-Ray Tube 114
6.12.3. Detector Array 115
6.12.4. Data-Acquisition System (DAS) 117
6.12.5. CT Patient Table or Couch 118
6.13. Fan Beam 118
6.14. Focused Septa 118
6.15. Image Reconstruction 118
6.16. Several Approaches to Image Reconstruction 120
6.17. The Filtered Backprojection Algorithm 124
6.18. Image Generation 125
6.18.1. Acquisition 125
6.18.2. Display 125
6.18.3. Windowing 125
6.18.4. Volume Visualization 126
6.19. Image Quality Characteristics 126
6.20. Contrast Sensitivity 127
6.21. Visibility of Detail 128
6.22. Visual Noise 128


6.1 Introduction
Computed Tomography (CT) is a medical imaging technique that uses X-rays to obtain
structural and functional information about the human body. The digital geometry processing
can be used to generate a three-dimensional image of the internal structures of the human
body from a large series of two-dimensional X-ray images taken around a single axis of
rotation. It is also called a computed axial tomography scan. The word "tomography" is
derived from the Greek tomē ("cut") or tomos ("slice" or "section") and graphein ("to
draw"). A CT imaging system uses computer-processed X-rays to produce tomographic
images or 'slices' of specific areas of the body, like the slices in a loaf of bread. Computed
tomography has the capability to provide a different form of high-quality imaging. Known as
cross-sectional imaging, it is used for diagnostic procedures and for visualization to guide
therapeutic procedures in various medical disciplines. On the other hand, the radiation dose
imparted to the patient's body during a procedure is relatively high compared to radiography.
In spite of that, CT is now one of the most effective imaging methods, adding value to
medical diagnosis and to the guidance of therapeutic procedures.
In CT scanning, both the image quality characteristics and the radiation dose depend on
and are controlled by the specific imaging protocol selected for each patient. Image quality
results from a complex combination of many adjustable imaging factors and is influenced by
many technical parameters for each procedure. Therefore, there is the need to manage the
radiation dose for each patient and balance it with respect to the image quality requirements.
This can be achieved by adjusting the protocol factors for each procedure.
The general objective for each imaging procedure is to visualize the various anatomical
structures and any signs of pathology if they are present. The image characteristics should
therefore be adjusted to provide the required visualization, and the radiation dose limited to
no more than that required to produce the necessary image quality.
Optimized imaging protocols for a specific clinical study must ensure that these factors are
adjusted to provide the proper balance between the necessary image quality and the patient's
radiation exposure.
6.2 History of Computed Tomography
CT became feasible with the development of modern computer technology in the 1960s and
was realized in 1972, but some of the ideas on which it is based can be traced back to the first
half of the twentieth century. The Bohemian mathematician Radon (1917) proved in a research paper of
fundamental importance that the distribution of a material or material property in an object
layer can be calculated if the integral values along any number of lines passing through the
same layer are known. The first applications of this theory were developed for
radioastronomy by Bracewell (1956), but they met with little response and were not exploited
for medical purposes.


It is quite remarkable that image reconstruction from projections was attempted as early as
1940. Needless to say, these attempts were made without the benefits of modern computer
technology. In a patent granted in 1940, Gabriel Frank described the basic idea of today’s
tomography.
Twenty-one years later, William H. Oldendorf, an American neurologist from Los
Angeles, performed a series of experiments based on principles similar to those later used in
CT. The objective of his work was to determine whether internal structures within dense
structures could be identified by transmission measurements.
In 1963, David E. Kuhl and Roy Q. Edwards introduced transverse tomography using
radioisotopes, which was further developed and evolved into today’s emission computed
tomography. A sequence of scans was acquired at uniform steps and regular angular intervals
with two opposing radiation detectors. At each angle, the film was exposed to a narrow line of
light moving across the face of a cathode ray tube with a location and orientation
corresponding to the detectors’ linear position. This is essentially the analog version of the
backprojection operation. The process was repeated at 15-deg angular increments (the film
was rotated accordingly to sum up the backprojected views). In later experiments, the film
was replaced by a computer-based backprojection process. What was lacking in these
attempts was an exact reconstruction technique.
In 1963, the physicist Allan M. Cormack reported the findings from investigations of
perhaps the first CT scanner actually built. Cormack carried out the first experiments on
medical applications of this type of reconstructive tomography. His work could be traced back to
1955 when he was asked to spend one and a half days a week at Groote Schuur Hospital
attending to the use of isotopes after the resignation of the hospital physicist (Cormack was
the only nuclear physicist in Cape Town, South Africa). While observing the planning of
radiotherapy treatments, Cormack came to realize the importance of knowing the x-ray
attenuation coefficient distribution inside the body. He wanted to reconstruct attenuation
coefficients of tissues to improve the accuracy of radiation treatment.
The development of the first clinical CT scanner began in 1967 with English engineer
Godfrey N. Hounsfield at the Central Research Laboratories of EMI, Ltd. in England. The first
successful practical implementation of the theory was achieved in 1972 by Hounsfield, who is
now generally recognized as the inventor of CT. While investigating pattern recognition
techniques, he deduced, independent of Cormack, that x-ray measurements of a body taken
from different directions would allow the reconstruction of its internal structure.
Preliminary calculations by Hounsfield indicated that this approach could attain a 0.5%
accuracy of the x-ray attenuation coefficients in a slice. This is an improvement of nearly a
factor of 100 over the conventional radiograph. For their pioneering work in CT, Cormack
and Hounsfield shared the 1979 Nobel Prize in Physiology or Medicine.
The first laboratory scanner was built in 1967. Linear scans were performed on a rotating
specimen in 1-deg steps (the specimen remained stationary during each scan). Because of the


low-intensity americium gamma source, it took nine days to complete the data acquisition and
produce a picture. Unlike the reconstruction method used by Cormack, the reconstruction
required a total of 28,000 simultaneous equations to be solved by a computer, which took 2.5 hours.
The use of a modified interpolation method, a higher-intensity x-ray tube, and a crystal
detector with a photomultiplier reduced the scan time to nine hours and improved the
accuracy from 4% to 0.5%.
The first clinically available CT device was installed at London’s Atkinson-Morley
Hospital in September 1971, after further refinement on the data acquisition and
reconstruction techniques. Images could be produced in 4.5 minutes. On October 4, 1971, the
first patient, who had a large cyst, was scanned and the pathology was clearly visible in the
image.

6.3 Operating Steps


A CT device consists of a patient table, on which the patient to be scanned lies; the table
moves the patient into the gantry, and the x-ray tube rotates around the patient. The scanner
gantry contains the rotating portion that holds the x-ray tube, generator, and detector array.
As x-rays pass through the patient to the detectors, a computer system acquires the
measurements and performs the calculations needed to turn them into a viewable image.

Figure 6.1: A typical CT scanner.

The table, on which the patient lies, is at the front of the scanner. The table then moves into
the central bore of the CT scanner. Within the scanner gantry is a rotating ring with both the
X-ray tube and the detector array.
A high-voltage x-ray generator supplies electric power to the x-ray tube, which usually has
a rotating anode and is capable of withstanding the high heat loads generated during rapid
multiple-slice acquisition. The x-ray tube, generator, detector array, collimators, and rotational
frame are housed in a movable, ring-shaped unit called the gantry (Fig. 6.1).


6.4 Different Generations of CT Scanners


The following describes the evolution of CT technology over the last 30 years.
The “Generation” Race
– 1st Generation - single beam, translate-rotate
– 2nd Generation - multiple beam, translate-rotate
– 3rd Generation - fan beam, rotate
– 4th Generation - fan beam, fixed ring
– 5th Generation - electron-beam scanner
6.4.1 First-Generation
The type of scanner built in 1971 is called the first-generation CT. In a first-generation
scanner, only one pencil beam is measured at a time. In the original head scanner, the x-ray
source was collimated to a narrow beam 3 mm wide (along the scanning plane) and 13 mm
long (across the scanning plane). The x-ray source and detector were linearly translated to
acquire individual measurements. The original scanner collected 160 measurements across the
scan field. After the linear measurements were completed, both the x-ray tube and the detector
were rotated 1 deg to the next angular position to acquire the next set of measurements, as
shown in Fig. 6.2.



Figure 6.2: First-generation CT scanner geometry. At any time instant, a
single measurement is collected. The x-ray tube and detector translate linearly
to cover the entire object. The entire apparatus is then rotated 1 deg to repeat
the scan.

6.4.2 Second-Generation
Although clinical results from the first-generation scanners were promising, there remained a
serious image quality issue associated with patient motion during the 4.5-min data acquisition.
The data acquisition time had to be reduced. This need led to the development of the second-
generation scanner illustrated in Fig. 6.3. Although this was still a translation-rotation


scanner, the number of rotation steps was reduced by the use of multiple pencil beams. The
figure depicts a design in which six detector modules were used. The angle between the pencil
beams was 1 deg. Therefore, for each translation scan, projections were acquired from six
different angles. This allowed the x-ray tube and detector to rotate 6 deg at a time for data
acquisition, representing a reduction factor of 6 in acquisition time. In late 1975, EMI
introduced a 30-detector scanner that was capable of acquiring a complete scan in under 20 sec.
This was an important milestone for body scanning, since the scan interval fell within the
breath-holding range for most patients.



Figure 6.3: Second-generation CT scanner geometry. At each time instant,
measurements from six different angles are collected. Although the x-ray
source and detectors still need to be linearly translated, the x-ray tube and
detector can rotate every 6 deg.

6.4.3 Third-Generation
One of the most popular scanner types is the third-generation CT scanner illustrated in Fig.
6.4. In this configuration, many detector cells are located on an arc concentric to the x-ray
source. The size of each detector is sufficiently large so that the entire object is within each
detector's field-of-view (FOV) at all times. The x-ray source and the detector remain
stationary with respect to each other while the entire apparatus rotates about the patient. Linear motion is
eliminated to significantly reduce the data acquisition time. In the early models of the third-
generation scanners, both the x-ray tube power and the detector signals were transmitted by
cables.
Limitations on the length of the cables forced the gantry to rotate both clockwise and
counterclockwise to acquire adjacent slices. The acceleration and deceleration of the gantry,
which typically weighed several hundred kilograms, restricted the scan speed to roughly 2 sec
per rotation. Later models used slip rings for power and data transmission. Since the gantry
could rotate at a constant speed during successive scans, the scan time was reduced to 0.5 sec
or less. The introduction of slip ring technology was also a key to the success of helical or
spiral CT (see Section 6.7). Because of the inherent advantages of the third-generation
technology, nearly all of the state-of-the-art scanners on the market today are third
generation.



Figure 6.4: Third-generation CT scanner geometry. At any time instant, the
entire object is irradiated by the x-ray source. The x-ray tube and detector
are stationary with respect to each other while the entire apparatus rotates
about the patient.

6.4.4 Fourth-Generation
Several technology challenges in the design of the third-generation CT, including detector
stability and aliasing, led to investigations of the fourth generation concept depicted in Fig.
6.5. In this design, the detector forms an enclosed ring and remains stationary during the
entire scan, while the x-ray tube rotates about the patient. Unlike the third-generation scanner,
a projection is formed with signals measured on a single detector as the x-ray beam sweeps
across the object. The projection, therefore, forms a fan with its apex at the detector, as shown
by the shaded area in Figure 6.5 (a projection in a third generation scanner forms a fan with
the x-ray source as its apex). One advantage of the fourth-generation scanner is the fact that
the spacing between adjacent samples in a projection is determined solely by the rate at which
the measurements are taken. This is in contrast to third-generation scanning in which the
sample spacing is determined by the detector cell size. A higher sampling density can
eliminate potential aliasing artifacts. In addition, since at some point during every rotation
each detector cell is exposed directly to the x-ray source without any attenuation, the detector
can be recalibrated dynamically during the scan. This significantly reduces the stability
requirements of the detector.
A potential drawback of the fourth-generation design is scattered radiation. Because each
detector cell must receive x-ray photons over a wide angle, no effective and practical scatter
rejection can be performed by a post-patient collimator. Although other scatter correction
schemes, such as the use of a set of reference detectors or software algorithms, are useful, the


complexity of these corrections is likely to increase significantly with the introduction of


multislice or volumetric CT.
A more difficult drawback to overcome is the number of detectors required to form a
complete ring. Since the detector must surround the patient at a fairly large circumference (to
maintain an acceptable source-to-patient distance), the number of detector elements and the
associated data acquisition electronics become quite large. For example, a recent single-slice
fourth-generation scanner required 4800 detectors. The number would be much higher for
multislice scanners. Thus, for economical and practical reasons, fourth-generation scanners
are likely to be phased out.



Figure 6.5: Geometry of a fourth-generation CT scanner. At any time instant,
the x-ray source irradiates the detectors in a fan-shaped x-ray beam, as shown
by the solid lines. A projection is formed using measurement samples from a
single detector over time, as depicted by the shaded fan-shaped region.

6.4.5 Fifth-Generation (Electron-Beam Scanner)


The electron-beam scanner, sometimes called the fifth-generation scanner, used in electron-
beam computed tomography (EBCT), or electron-beam tomography (EBT), was built
between 1980 and 1984 for cardiac applications. To “freeze” cardiac motion, a complete set
of projections must be collected within 20 to 50 ms. This is clearly very challenging for
conventional third- or fourth-generation types of scanners due to the enormous centripetal
force placed on the x-ray tube and the detector. In the electron-beam scanner, the rotation of
the source is provided by the sweeping motion of the electron beam (instead of the
mechanical motion of the x-ray tube). Figure 6.6 shows a simplified schematic diagram of an
electron-beam scanner. The bottom arc (210 deg) represents an anode with multiple target
tracks. A high-speed electron beam is focused and deflected by carefully designed coils to
sweep along the target ring, similar to a cathode ray tube. The entire assembly is sealed in a
vacuum. Fan shaped x-ray beams are produced and collimated to a set of detectors,
represented by the top arc of 216 deg. The detector ring and the target ring are offset
(non-coplanar) to make room for the overlapped portion. When multiple target tracks and detector
rings are used, coverage of 8 cm along the patient long axis can be obtained for the heart.
Since the system has no mechanical moving parts, scan times as fast as 50 ms can be
achieved. However, for noise considerations, multiple scans are often averaged to produce the
final image.

Figure 6.6: Geometry of an electron-beam scanner: an electron gun with magnetic focus and
deflection coils sweeps the beam along target rings inside a vacuum drift tube; collimated
x-ray beams pass through the patient on the couch to the detectors and the data acquisition
system.

6.5 Basic Principles of CT


Tissues and materials generally differ in their ability to absorb X-rays: some substances are
more permeable to X-rays while others are less so, and therefore different tissues appear
different when the X-ray film is developed.
For example, dense tissues such as bone appear white on a CT film, soft tissues such as the
brain or kidney appear gray, and cavities filled with air, such as the lungs, appear black.
Fundamentally, a CT scanner passes X-rays through the patient and makes many
measurements of attenuation across the plane of a finite-thickness cross section of the body,
collecting the transmitted information with a detector on the other side. The X-ray tube and
detector array are interconnected and rotate around the patient during the scan. A digital
computer then assembles and integrates the acquired data to reconstruct a digital image of the
cross section (a tomogram), which is displayed on a computer screen. That is, a CT image is
composed of pixels (picture elements). Each pixel on
the image represents a measurement of the average x-ray attenuation of a box-like (small


volume) element (voxel) extending through the thickness of the tissue section. In addition, in
a real CT image, all tissues within a single pixel would be the same shade of gray (see figure
6.7). The image can be stored for retrieval and use later.

Figure 6.7: Sample CT image geometry: the x-ray tube and detectors view a slice divided
into voxels (e.g. 0.5 mm × 0.5 mm pixels through a 10 mm slice thickness), which are
displayed as pixels.

When X-rays pass through matter they are subject to attenuation: the removal of a fraction of
the x-ray photons from the beam, as a result of tissue absorption and scatter, as it passes
through the material. An attenuation measurement quantifies the fraction of radiation removed in
passing through a given amount of a specific material of thickness Δx (Fig. 6.8, A).
Attenuation is expressed as:

It = Io e^(−μ Δx)    (6.1)

where It and Io are the x-ray intensities measured with and without the material in the x-ray
beam path, respectively, and μ is the linear attenuation coefficient of the specific material.
Formula (6.1) can be rearranged and expressed as a natural logarithm:

ln(Io / It) = μ Δx    (6.2)

For a material considered as a stack of voxels (Fig. 6.8, B), the logarithm of the measured
intensity ratio gives the sum of the contributions of the individual voxels along the ray:
ln(Io / It) = (μ1 + μ2 + … + μn) Δx.
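A short numerical sketch of these two relations (the attenuation coefficient and intensity below are assumed values for illustration):

    # Sketch: attenuation (Eq. 6.1) and recovering mu from intensities (Eq. 6.2).
    import math

    mu = 0.19       # linear attenuation coefficient, ~1/cm for water (assumed)
    dx = 2.0        # material thickness in cm
    Io = 1000.0     # intensity without the material in the beam

    It = Io * math.exp(-mu * dx)            # Eq. 6.1: transmitted intensity
    mu_recovered = math.log(Io / It) / dx   # Eq. 6.2: what the scanner computes
    print(It, mu_recovered)                 # ~683.9, 0.19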


The image reconstruction process derives the average attenuation coefficient (μ) values for
each voxel in the cross section by using many rays from many different rotational angles
around the cross section. The specific attenuation of a voxel (μ) increases with the density and
the atomic numbers of tissues averaged through the volume of the voxel and declines with
increasing x-ray energy.

Figure 6.8: Principles of CT. Diagram shows the x-ray attenuation through
a specific material of finite thickness (Δx) (Eq 6.2) (A) and through a
material considered as a stack of voxels with each voxel of finite thickness
(Δx)(Eq 6.2)(B).

Mathematically, the attenuation value (μ) for each voxel could be determined algebraically
with a very large number of simultaneous equations by using all ray sums that intersect the
voxel. However, a much more elegant and simpler method called filtered back-projection
was used in the early CT scanners and remains in use today. That means, the filtered back-
projection method and many other methods are applied to derive the average attenuation
coefficient (μ) values for each voxel in the cross section, using many rays from many
different rotational angles around the cross section. Rays are collected in sets called
projections, which are made across the patient in a particular direction in the section plane.
There may be from 500 to 1,000 or more rays in a single projection. To reconstruct the image
from the ray measurements, each voxel must be viewed from multiple different directions. A
complete data set requires many projections at rotational intervals of 1° or less around the
cross section. Back-projection effectively reverses the attenuation process by adding the
attenuation value of each ray in each projection back through the reconstruction matrix.
Because this process generates a blurred image, the data from each projection are


mathematically altered (filtered) prior to back-projection, eliminating the intrinsic blurring
effect. There are a number of advanced reconstruction techniques currently used in
the CT image reconstruction process; however, these are beyond the scope of this chapter.
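To make the idea concrete, the following toy sketch implements back-projection for a parallel-beam geometry together with a crude FFT ramp filter (NumPy assumed; this is an illustration of the principle, not the algorithm used in any particular scanner):

    # Sketch: filtered back-projection for parallel-beam projections (toy example).
    import numpy as np

    def ramp_filter(projection):
        # Weight each frequency component by |f| (the "filter" in filtered
        # back-projection), which undoes the blurring of plain back-projection.
        freqs = np.fft.fftfreq(projection.size)
        return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

    def backproject(sinogram, angles_deg, size):
        # Smear each (filtered) projection back across the image grid.
        recon = np.zeros((size, size))
        centre = (size - 1) / 2.0
        y, x = np.mgrid[0:size, 0:size] - centre
        for projection, theta in zip(sinogram, np.deg2rad(angles_deg)):
            # Detector coordinate seen by each pixel at this view angle.
            t = np.round(x * np.cos(theta) + y * np.sin(theta) + centre).astype(int)
            recon += projection[np.clip(t, 0, size - 1)]
        return recon / len(angles_deg)

    # Usage: filter every projection (row of the sinogram), then back-project.
    # filtered = np.array([ramp_filter(p) for p in sinogram])
    # image = backproject(filtered, angles_deg, size=256)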

6.6 CT Image
The Computed tomography number (CT number) is a selectable scan factor based on the
Hounsfield scale. Each elemental region of the CT image (pixel) is expressed in terms of
Hounsfield units (HU) corresponding to the x-ray attenuation (or tissue density). CT
numbers are displayed as gray-scale pixels on the viewing monitor. White represents
pixels with higher CT numbers (bone). Varying shades of gray are assigned to
intermediate CT numbers, e.g. soft tissues, fluid, and fat. Black represents regions with
lower CT numbers like lungs and air-filled organs.
For radiologists, the most important output from a CT scanner is the image itself. The
variable signal intensity in CT results from tissue discrimination based on the variations in
attenuation between “voxels,” which depends on differences in voxel density and atomic
number of elements present and is influenced by the detected mean photon energy. The image
that CT produces is composed of pixels (picture elements), and each pixel represents the
average x-ray attenuation in a small volume (voxel) that extends through the tissue section.
In addition, all tissues within a single pixel in a real CT image will be the same shade of gray.
As a final step, the individual voxel attenuation values are scaled to more convenient
integers and normalized to the value for voxels containing water (μwater). The CT image does not
show μ values directly; instead, the intensity scale (called the CT number) used in the
reconstructed CT image is defined by:

CT number = 1000 × (μ − μwater) / μwater

where μ is the measured attenuation of the material in the voxel and μwater is the linear
attenuation coefficient of water. This unit is often called the Hounsfield unit (HU), honoring
the inventor of CT. Voxels containing materials that attenuate more than water (e.g. muscle
tissue, liver, and bone) have positive CT numbers, whereas materials with less attenuation
than water (e.g. lung or adipose tissues) have negative CT numbers. With the exception of
water and air, the CT numbers for a given material will vary with changes in the x-ray tube
potential and from manufacturer to manufacturer.
By definition, water has a CT number of zero. The CT number for air is -1000 HU,
since μair ≈ 0. Soft tissues (including fat, muscle, and other body tissues) have CT numbers
ranging from -100 HU to 60 HU. Cortical bones are more attenuating and have CT numbers
from 250 HU to over 1000 HU. Differences in the linear attenuation coefficient are thus
magnified by a factor of 1000 (note the division by μwater). Medical scanners typically work
in a range of -1024 HU to +3071 HU.
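The definition translates directly into code (the attenuation values below are illustrative assumptions):

    # Sketch: converting a measured linear attenuation coefficient to a CT number.
    MU_WATER = 0.19   # 1/cm at a typical effective CT energy (assumed value)

    def ct_number(mu):
        # Hounsfield units: 1000 * (mu - mu_water) / mu_water
        return 1000.0 * (mu - MU_WATER) / MU_WATER

    print(ct_number(0.19))   #     0 HU (water)
    print(ct_number(0.0))    # -1000 HU (air, mu ~ 0)
    print(ct_number(0.38))   # +1000 HU (attenuation twice that of water)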


Contrast agents and metal objects have values from several hundred to several thousand HU.
Because of the large dynamic range of the CT number, it is impossible to visualize it
adequately on a standard grayscale monitor or film without modification. Typical display
devices use eight-bit grayscales, representing 256 different shades of gray. If a CT image is
displayed without transformation, the original dynamic range of well over 2000 HU must be
compressed by a factor of at least 8.
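This compression is done by windowing (see also Section 6.18.3): a chosen interval of CT numbers is mapped onto the 256 available shades, and values outside it saturate to black or white. A minimal sketch (window values assumed):

    # Sketch: window width/level mapping of CT numbers to an 8-bit grayscale.
    import numpy as np

    def window_image(hu, level, width):
        # Map [level - width/2, level + width/2] onto 0..255; clip outside values.
        lo = level - width / 2.0
        scaled = (hu - lo) / width * 255.0
        return np.clip(scaled, 0, 255).astype(np.uint8)

    hu = np.array([-1000, -100, 0, 60, 1000])       # air ... dense bone
    print(window_image(hu, level=40, width=400))    # a typical soft-tissue window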
6.7 Principles of Helical CT Scanning Operation
There are two modes for a CT scan: conventional (or slice-to-slice) and helical (or
spiral) CT. A slice-to-slice CT scan consists of two alternating stages:

Data acquisition: During this stage, the patient remains stationary and the x-ray tube rotates
about the patient to acquire a complete set of projections at a prescribed scanning location.

Patient positioning: During this stage, no data are acquired and the patient is transported to
the next prescribed scanning location.

The data acquisition stage typically takes one second or less, while the patient positioning
stage is around one second. Thus, the duty cycle of the slice-to-slice CT is 50% at best. This
poor scanning efficiency directly limits the volume coverage speed performance and
therefore the scan throughput of step-and-shoot CT.
Helical (or spiral) CT scanners use slip-ring technology, which was introduced around
1990. Slip-ring scanners can perform a scan in which the patient moves slowly through the
gantry (at a rate referred to as the table speed) while the X-ray tube and detector rotate in a
plane perpendicular to the major axis of the patient's body. In this technique the data are
continuously acquired or collected without pausing while the patient is simultaneously
transported at a constant speed through the gantry. For this reason the duty cycle of the helical
scan is improved to nearly 100% and the volume coverage speed performance can be
substantially improved. This means that the X-ray tube and detector perform a ‘spiral’ or
‘helical’ movement with respect to the patient, generally at a rate of one revolution per
second. This technique allows fast and continuous acquisition of the data from a complete
volume. Many coarse data sets, each of one rotation, are created by interpolation of the spiral
data, after which the axial images are generated using the standard reconstruction techniques.
The helical scanning is characterized by continuous gantry rotation and continuous data
acquisition while the patient table is moving at constant speed; see Fig. 6.9. The acquired
volume of data can be reconstructed at any point during the scan. All modern CT scanners are
multi-slice, which refers to a CT system equipped with a multiple-row detector array to
simultaneously collect data at different slice locations. The data from each full rotation are
mathematically reconstructed by computer to produce an image of one slice.
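A quantity commonly used to characterize helical scans, though not defined in the text above, is the pitch: the table travel per gantry rotation divided by the total nominal beam collimation (this standard definition is an addition here, for illustration):

    # Sketch: helical pitch = table travel per rotation / total beam collimation.
    def pitch(table_speed_mm_per_s, rotation_time_s, beam_width_mm):
        return table_speed_mm_per_s * rotation_time_s / beam_width_mm

    # e.g. 80 mm/s table speed, 0.5 s rotation, 40 mm collimation -> pitch 1.0
    print(pitch(80, 0.5, 40))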



Figure 6.9: Principle of spiral/helical CT scanning.

6.8 Factors Affecting Spatial Resolution


There are a variety of interrelated factors affecting the degree of spatial resolution in a CT
image. These factors are:
 matrix size
 pixel size
 field of view (FOV)
 voxel size
 slice thickness
 focal spot size
 blur

6.8.1 Matrix and Pixel Size


A pixel is a picture element (a contraction of "picture" and "element"). Tomographic
images are composed of many pixels; the pixel size is determined by the field of view
used and the number of elements in the display image matrix. The corresponding size of
the pixel may be smaller than the actual spatial resolution. Pixels do not have a fixed
size; their diameters are generally measured in micrometers (microns). Although the
pixel is not a unit of measurement itself, pixels are often used to measure the resolution
(or sharpness) of images. As a hypothetical example, a 600 x 1000 pixel image has 4
times the pixel density and is thus 4 times sharper than a 300 x 500 pixel image,
assuming the two images have the same physical size.
To create an image, the system must segment raw data into tiny sections. A matrix is
a grid that is used to break the data into columns and rows of tiny squares. Each square
is a picture element, more commonly referred to as a pixel (Figure 6.10). The matrix


size refers to how many pixels are present in the grid. A 512 matrix will have 512 pixels
across the rows and 512 pixels down the columns. The most common matrix sizes used
in CT are 256, 512, and 1024. Because the perimeter of the grid is held constant, a larger
matrix size (i.e. 1024 as opposed to 512) will contain smaller individual pixels.
Therefore, matrix size is one of the factors that control pixel size.



Figure 6.10: A matrix contains columns and rows of pixels.

Each pixel has a width X and a length Y. The two-dimensional pixel represents a three-
dimensional portion of patient tissue. The pixel value represents the proportional amount
of x-ray energy that passes through anatomy and strikes the detector. The information
contained in each pixel is averaged so that one density number (or Hounsfield unit
"HU") is assigned to each pixel. If an object is smaller than a pixel, its density will be
averaged with the information in the remainder of the pixel. This phenomenon is
referred to as the partial volume effect or volume averaging. It results in a less accurate
image.
A large pixel size will make it more likely that multiple objects are contained within a
pixel (Figure 6.11). Because no object smaller than a pixel can be accurately displayed
due to volume averaging, the pixel size affects the spatial resolution. When pixels are
smaller, it is less likely that they will contain different densities, therefore decreasing
the likelihood of volume averaging (Figure 6.12). Hence, smaller pixel size will improve
spatial resolution.
Because no object smaller than a pixel can be accurately displayed due to volume
averaging (and the matrix size influences the size of the pixel), it follows that matrix
size affects spatial resolution.


Figure 6.11: Objects that fall within a pixel will be averaged together to appear on the image
as a single, larger object.



Figure 6.12: Small pixel size reduces the likelihood that multiple objects will be averaged
together. In this way, pixel size affects spatial resolution in the image.

6.8.2 Field Of View (FOV) in CT


The FOV in CT is the area of the scanned region that is included in the image reconstruction. There
are two types of FOV:
 Scan FOV (SFOV) and
 Display FOV (DFOV)
SFOV is the region within the gantry opening: the anatomy that is included in the
reconstruction. The SFOV is less than the physical opening of the CT gantry, which is the
reason why part of the anatomy is cut off when scanning larger patients. On the other hand,
the DFOV is the area of the reconstructed image that is displayed; a smaller DFOV results
in a larger (more magnified) image. The FOV influences the physical dimensions of the
image pixels. A 10-cm FOV in a 512 × 512 matrix results in pixel dimensions of
approximately 0.2 mm, and a 35-cm FOV produces pixel widths of about 0.7 mm
(Figure 6.13).

Figure 6.13: Effect of image (field of view) size: a large field of view gives large pixels,
and a small field of view gives small pixels.

6.8.3 Display Field of View (DFOV)


Another factor influencing the size of the pixel is the DFOV. Selecting the DFOV determines
how much of the total raw data available will be used to create an image. Decreasing the field
of view will decrease the pixel size. Pixel size can be thought of as the amount of patient data
each pixel contains. When we decrease the field of view, less information is contained in each
pixel. As the field of view increases, the amount of data to be included in the image increases.
Consequently, the pixel size increases as more patient information is crammed into each
pixel. This causes the spatial resolution to decrease.
The following formula reveals the relationship between matrix size, display field of
view, and pixel size:

pixel size = DFOV / matrix size
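In code, the pixel dimension, and the voxel volume once the slice thickness of Section 6.8.4 is included, follow directly (the dimensions below are assumed for illustration):

    # Sketch: pixel and voxel dimensions from DFOV, matrix size, and slice thickness.
    def pixel_size_mm(dfov_mm, matrix):
        return dfov_mm / matrix

    print(pixel_size_mm(100, 512))   # 10-cm DFOV, 512 matrix -> ~0.2 mm pixels
    print(pixel_size_mm(350, 512))   # 35-cm DFOV             -> ~0.7 mm pixels

    def voxel_volume_mm3(dfov_mm, matrix, slice_thickness_mm):
        p = pixel_size_mm(dfov_mm, matrix)
        return p * p * slice_thickness_mm

    print(voxel_volume_mm3(350, 512, 5.0))   # ~2.3 mm^3 per voxel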
6.8.4 Voxel Size


A voxel is a volume element (volumetric pixel) representing a value in three-dimensional
space (expressed in units of mm³), corresponding to a pixel for a given slice
thickness. Voxels are frequently used in the visualization and analysis of medical data.
The CT pixel intensity is proportional to the signal intensity of the appropriate voxel.
Voxels are associated with CT numbers.


A voxel (volume element) represents a volume of patient data. Voxel size also plays a
role in volume averaging. As stated earlier, in order to create an image, the system must
break up the patient data into segments. We have seen how a matrix is used to divide the
data into pixels with X and Y dimensions. This allows the system to create a two-
dimensional image. However, it is important to keep in mind that a three -dimensional
object is being represented. By accounting for the slice thickness, the voxel represents a
volume of patient data. Thus, instead of a square of data-as is the case with a pixel-the
voxel is a cube of data. All of the data within the voxel are averaged together to result in
one HU.

Figure 6.14: The depth of the voxel, or Z axis, correlates to the slice
thickness.

The depth of the voxel correlates to the operator’s selection of slice thickness. This dimension
is referred to as the Z axis (Figure 6.14). When comparing the X, Y, and Z dimensions, even
with a relatively large matrix and a small field of view, the slice thickness (or Z axis) will be
longer than either the X or Y dimension. Therefore, the slice thickness will play an even
larger role in volume averaging (as well as the subsequent spatial resolution) than either
display field or matrix size. In fact, slice thickness is the primary factor affecting the degree of
volume averaging in the image.
Decreasing the slice thickness affects the resolution in two ways. First, it reduces the
amount of tissue averaged together. Second, it will increase the image noise if technique is not
increased to compensate for photon absorption from increased collimation.
Figures 6.15A and B illustrate how a wide slice thickness will affect the amount of volume
averaging in the image. If a 2-mm object is contained in a 10-mm slice (Figure 6.15A), 8 mm
of normal tissue will be averaged in with it, producing a less accurate image. By
decreasing the slice thickness, as shown in Figure 6.15B, the amount of normal tissue that is
averaged in with data from the abnormality is reduced. In this way, an image is created that
more closely represents the actual object scanned.

Figure 6.15: (A) A thicker (10-mm) slice will contain more volume averaging. (B) Thin
(5-mm) slices reduce the amount of volume averaging and improve resolution.

6.8.5 Focal Spot Size


A small focal spot size will improve resolution in the image. Some scanners have only one
focal spot available. Scanners with the option of higher mA stations often automatically
switch to a larger focal spot size when mA is increased. This is due to the fact that although a
small focal spot offers superior spatial resolution, it cannot withstand heat as well as a larger
focal spot. Therefore, an increase in mAs often necessitates a larger focal spot. The effect
from a change in focal spot is minimal and not readily visible in patient images. In fact, in
order to assess the difference caused by a change in focal spot size, images must be taken with
a phantom and then compared. Even though switching to a larger focal spot may slightly
decrease the spatial resolution, it will likely be outweighed by the decrease in patient motion
that is possible when switching to a shorter scan time with a higher mA.
6.8.6 Blur
In considering the resolution of a system, we must consider an aspect known as sharpness.
Sharpness is the ability of a system to define an edge. It is measured by the amount of blur in
a system. Blur can result from factors intrinsic to radiography, such as the way a photon
interacts with an object, or from extrinsic factors, such as patient motion.
Sources of blur in CT include geometric blur from the focal spot size, detector blur,
absorption blur (patient), and motion blur (patient).
6.9 Nyquist Sampling Theorem
An element of random chance exists in the creation of a CT image. Applying the Nyquist
sampling theorem helps explain this occurrence. When applied to CT imaging, the theorem
can be summarized by the following statement: because an object may not lie entirely within a
pixel, the pixel should be half the size of the object to increase the likelihood of being
resolved.

To understand this theorem, it is again important to recall the way an object may be
segmented by the system. If the object in question is the same size as a pixel, it is possible that
by chance the object may fall entirely within a single pixel. Figure 6.16A represents this pos-
sibility. However, random chance will dictate that it is much more likely the object will
straddle two different pixels. Figure 6.16B illustrates this possibility. It should be apparent
that the image resulting from the case portrayed by Figure 6.16B would be inferior to that of
the image resulting from Figure 6.16A. A third possibility could also arise. In Figure 6.16C,
the object falls at the junction of four separate pixels. Therefore, only a fourth of the object
will be averaged in with three fourths of a pixel of normal tissue. This would be the worst
case scenario and would result in an image with the worst spatial resolution.



Figure 6.16: The Nyquist sampling theorem explains why a small
pixel will increase resolution.

Furthermore, the Nyquist sampling theorem states that we can reduce the likelihood of our
worst case scenario occurring by reducing the size of the pixel. We can see that in Figure
6.16D, the smaller pixel size will improve spatial resolution by allowing four pixels to
represent the object, with no normal tissue enclosed in the pixel. However, even with the
smaller pixel size, cases such as Figure 6.16E will still arise. In this case, the object will fall
so that only two of the pixels accurately represent the object, whereas the four surrounding
pixels will have some degree of volume averaging. However, we can see from our illustrations that
the situation depicted by Figure 6.16E would be preferable to that of Figure 6.16C. To review,
by reducing the size of the pixel, we can increase our chance of accurately representing a
small object. The theorem further states that in order to best increase our chances, the pixel
size should be half the size of the object.
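
As a tiny numerical check of this rule (the function is ours, for illustration only):

```python
def reliably_resolved(object_mm, pixel_mm):
    """Nyquist criterion as applied here: the pixel should be at most
    half the object size for the object to be reliably resolved."""
    return pixel_mm <= object_mm / 2

print(reliably_resolved(2.0, 0.7))   # True: 0.7-mm pixels resolve a 2-mm object
print(reliably_resolved(1.0, 0.7))   # False: the pixel is too large for a 1-mm object
```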
6.10 Low-Contrast Resolution
Low-contrast resolution is the ability to differentiate objects with slightly different densities.
This factor is the second major aspect of image quality. In order to discern an object on an
image, there must be a density difference between the object and its background. This is the
type of contrast we are concerned with in the case of a liver lesion that is nearly the same
density as normal liver tissue. The term low-contrast detectability is used when discussing the
ability to see an object that is nearly the same density as its background. Often, intravascular
or oral contrast agents are used to create or increase a density difference, thereby increasing
an image’s low-contrast resolution. Low-contrast resolution may also be referred to as the
sensitivity of the system; hence, the term low-contrast sensitivity is also used.
The size of the object that is visible depends on three factors.
 The first is the level of contrast in the object. For example, consider a calcified nodule.
It will be much easier to see if the nodule is in the lung, where the air within the lung
will provide a substantial amount of contrast. On the other hand, imagine the difficulty
in differentiating the nodule if it were to lie next to the iliac crest. The level of contrast
that is related to the density of the objects being scanned is often called subject contrast
(or sometimes inherent contrast).
 The second factor that influences the size of the object that is visible is image noise. We
can recognize noise as the grainy appearance (a salt-and-pepper look) on an underexposed
image.
 The third factor is the window setting used to display an image. Narrow window widths
will improve low-contrast discrimination in the image.

CT is superior to conventional film/screen radiography in its ability to resolve small
differences in tissue densities. In fact, low-contrast resolution is where CT excels. Although
the spatial resolution of a CT scanner is inferior to that of standard radiography, its low-
contrast resolution is much better. For example, in order for an object to be resolved with
standard radiography, there must be a 10% difference in object density, but CT can resolve a
density difference of as little as 0.1%.

6.11 Factors Relating to Low-Contrast Resolution


Common terms used when discussing low-contrast resolution are contrast scale, contrast-
detail response, receiver operator characteristics (ROC), quantum noise, and dose.
Contrast scale is affected by the window width and window level. As a general rule, to
enhance contrast between two tissues, narrow the window to just include both tissues, and set
the level centered between them.
Contrast-detail response (sometimes referred to as the contrast-detail curve) shows us that
for a given technique, the level of contrast that is visible will decrease as the object size
decreases. To greatly simplify the concept, all other factors staying the same, smaller objects
are harder to see than larger objects.
ROC describes the fact that different observers will look at the same image and evaluate it
differently. A common method of evaluating an image is to scan a phantom. A series of
progressively smaller low-contrast circles will appear on the resulting image. Counting the
number of circles clearly visible will determine the level of low-contrast resolution on the
image. However, different observers may look at the same image and evaluate it differently.
Some individuals may say they see six circles clearly, whereas other persons evaluating the
same image may feel they can only see four circles. Therefore, the degree of contrast
measured on an image is somewhat subjective.
Quantum noise produces visible fluctuations in the image (i.e., a salt-and-pepper look).
This factor will degrade images, particularly their low-contrast resolution. Quantum noise is
the result of too few x-ray photons reaching the detectors. Therefore, noise and radiation dose
are linked; as radiation dose increases, image noise is suppressed. As the noise decreases,
small low-contrast objects are more visible. Smoothing algorithms can help to reduce the
visibility of noise by averaging each pixel with its neighbor. Similarly, wide window widths
also help disguise noise. For this reason, it is a common practice in CT to increase the window
width on images of obese patients.
6.12 Basic CT scanner components
6.12.1 Scanning Unit (Gantry)
The largest component of the CT installation consists of an x-ray unit, which functions as a
transmitter, and a data acquisition unit with a detector array, which functions as a receiver.
It also contains the control electronics and the mechanical components required for the scanning
motions, including collimators and filters, detectors, the data acquisition system (DAS),
rotational components including slip ring systems, and all associated electronics such as gantry
angulation motors and positioning laser lights. In commercial CT systems these components
are housed in a movable, ring-shaped frame called the gantry (see Figure 6.17).
A CT gantry can be angled up to 30 degrees toward a forward or backward position.
Gantry angulation is determined by the manufacturer and varies among CT systems. Gantry
angulation allows the operator to align pertinent anatomy with the scanning plane. The
opening through which a patient passes is referred to as the gantry aperture. Gantry aperture
diameters generally range from 50-85 cm. Generally, larger gantry aperture diameters, 70-85
cm, are necessary for CT departments that do a large volume of biopsy procedures. The
larger gantry aperture allows for easier manipulation of biopsy equipment and reduces the
risk of injury when scanning the patient with the biopsy needle in place. The diameter of the
gantry aperture is different from the diameter of the scanning circle or scan field of view. If
a CT system has a gantry aperture of 70 cm diameter,
it does not mean that you can acquire patient data utilizing a 70 cm diameter. Generally, the
scanning diameter in which patient or projection data is acquired is less than the size of the
gantry aperture. Lasers or high intensity lights are included within or mounted on the gantry.
The lasers or high intensity lights serve as anatomical positioning guides that reference the
center of the axial, coronal, and sagittal planes.

Figure 6.17: Basic CT scanner components

6.12.2 X-ray Tube, Collimation, Filtration


CT procedures facilitate the use of large exposure factors (high mA and kVp values) and
short exposure times. The development of spiral/helical CT allows continuous scanning while
the patient table or couch moves through the gantry aperture. A typical spiral/helical CT scan
of the abdomen may require the continuous production of x-rays for a 30 to 40 second period.
The stress caused by the constant build up of heat can lead to a rapid decrease of tube life.
When an x-ray tube reaches a maximum heat value it simply will not operate until it cools
down to an acceptable level. CT systems produce x-radiation continuously or in short
millisecond bursts or pulses at high mA and kVp values. CT x-ray tubes must possess a high
heat capacity which is the amount of heat that a tube can store without operational damage to
the tube. The x-ray tube must be designed to absorb high heat levels generated from the high
speed rotation of the anode and the bombardment of electrons upon the anode surface. An
x-ray tube's heat capacity is expressed in heat units. Modern CT systems utilize x-ray tubes that
have a heat capacity of approximately 3.5 to 5 million heat units (MHU). A CT x-ray tube
must possess a high heat dissipation rate. Many CT x-ray tubes utilize a combination of oil
and air cooling systems to eliminate heat and maintain continuous operational capabilities. A
CT x-ray tube anode has a large diameter with a graphite backing. The large diameter backed
with graphite allows the anode to absorb and dissipate large amounts of heat. The focal spot
size of an x-ray tube is determined by the size of the filament and cathode which is
determined by the manufacturer. Most x-ray tubes have more than one focal spot size. The use
of a small focal spot increases detail, but it concentrates heat onto a smaller portion of the
anode; therefore, more heat is generated. As previously described, when heat builds up
faster than the tube can dissipate it, the x-ray tube will not produce x-rays until it has
sufficiently cooled. CT tubes utilize a bigger filament than conventional radiography x-ray
tubes. The use of a bigger filament increases the size of the effective focal spot. Decreasing
the anode or target angle decreases the size of the effective focal spot. Generally, the anode
angle of a conventional radiography tube is between 12 and 17 degrees. CT tubes employ a
target angle of approximately 7 to 10 degrees. The decreased anode or target angle
also helps alleviate some of the effects caused by the heel effect. CT can compensate for any
loss of resolution due to the use of larger focal spot sizes by employing resolution enhancement
algorithms such as bone or sharp algorithms, targeting techniques, and decreasing section
thickness. In CT, collimation of the x-ray beam includes tube collimators, a set of pre-patient
collimators, and post-patient or pre-detector collimators. Some CT systems utilize this type of
collimation system while others do not. The tube or source collimators are located in the x-
ray tube and determine the section thickness that will be utilized for a particular CT scanning
procedure. When the CT technologist selects a section thickness he or she is determining tube
collimation by narrowing or widening the beam. A second set of collimators located directly
below the tube collimators maintain the width of the beam as it travels toward the patient. A
final set of collimators called post-patient or predetector collimators are located below the
patient and above the detector. The primary responsibilities of this set of collimators are to
ensure proper beam width at the detector and reduce the number of scattered photons that may
enter a detector. There are two types of filtration utilized in CT. Mathematical filters, such as
bone or soft tissue algorithms, are included in the CT reconstruction process to enhance the
resolution of a particular anatomical region of interest. Inherent tube filtration and filters
made of aluminum or Teflon are utilized in CT to shape the beam intensity by filtering out
low energy photons that contribute to the production of scatter. Special filters called "bow-tie"
filters absorb low energy photons before they reach the patient. X-ray beams are polychromatic
in nature, which means an x-ray beam contains photons of many different energies. Ideally, the
x-ray beam would be monochromatic, that is, composed of photons having the same energy.
Heavy filtration of the x-ray beam results in a more uniform beam. The more uniform the
beam, the more accurate the attenuation values or CT numbers are for the scanned anatomical
region.
6.12.3 Detector Array
Detector array is an array of individual detector elements. The number of detector elements
varies between a few hundred and 4800, depending on the acquisition geometry and
manufacturer. Each detector element functions independently of the others. When the x-ray
beam travels through the patient, it is attenuated by the anatomical structures it passes
through. The path that an x-ray beam travels from the tube to a single detector is referred to as
a ray. After the x-ray beam passes through the object being scanned, the detector samples the
beam's intensity. The detector reads each ray and measures the resultant beam attenuation. The
attenuation measurement of each ray is termed a ray sum. A complete set of ray sums is
referred to as a view or projection. In conventional radiography we utilize a film-screen
system as the primary image receptor to collect the attenuated information. The image
receptors that are utilized in CT are referred to as detectors. The CT process essentially relies
on collecting attenuated photon energy and converting it to an electrical signal, which will
then be converted to a digital signal for computer reconstruction. It takes many views to
create a computed tomography image. Obtaining a single view does not give the entire
perspective of the object being scanned. Therefore, we can say that the detector is "seeing" an
insufficient amount of information. The attenuation properties of each ray sum are accounted
for and correlated with the position of each ray. At this point, the detector has "collected" the
projection or raw data. The more photons collected, the stronger and more accurate the
detector signal. This is essential for accurate image reconstruction. The detector accomplishes
this task by adding together all the photon energy it has received. The detector receives all the
projection data and subsequently generates an electrical or analog signal. The signal
represents an absorption or attenuation profile. An attenuation profile is obtained for each
view or projection. Every detector in the detector array is responsible for this task. Detector
efficiency describes the percent of incoming photons that a detector converts to a useable
electrical signal. The two primary factors that determine how well a detector can capture
photons relative to efficiency is the width and the distance between each detector. It is
important that detectors are placed as close to one another as possible. A detector is a crystal
or ionizing gas that when struck by an x-ray photon produces light or electrical energy.
The two types of detectors utilized in CT systems are scintillation or solid state and xenon
gas detectors. Scintillation detectors convert 99-100 percent of the attenuated photons into a
useable electrical signal. Scintillation detectors utilize a crystal that fluoresces when struck by
an x-ray photon which produces light energy. A photodiode is attached to the scintillation
portion of the detector. The photodiode transforms the light energy into electrical or analog
energy. The strength of the detector signal is proportional to the number of attenuated
photons that are successfully converted to light energy and then to an electrical or analog
signal. The most frequently used scintillation crystals are made of bismuth germanate
(Bi4Ge3O12) and cadmium tungstate (CdWO4). Earlier designs utilized sodium iodide and
cesium iodide as the light-producing agent. One of the problems associated with these
materials was that at times they would fluoresce more than necessary. The afterglow problems
associated with sodium iodide and cesium iodide altered the strength of the detector signal,
which could cause inaccuracies during computer reconstruction.
The second type of detector utilized in CT imaging systems is the gas detector. The gas
detector is usually constructed utilizing a chamber made of a ceramic material with long thin
ionization plates, usually made of tungsten, submersed in xenon gas. Xenon gas detectors are
less efficient, converting 60-90 percent of the photons that enter the chambers. The long thin
tungsten plates act as electron collection plates. When attenuated photons interact with the
charged plates and the xenon gas, ionization occurs. This ionization produces an
electrical current. Xenon gas is the element of choice because of its ability to remain stable
under extreme amounts of pressure. Utilizing more gas in a detector increases the number of
molecules that can be ionized; therefore, the strength of the detector signal or response is
increased. The long thin tungsten plates of the gas detector are highly directional. Ionization
of the plates and the resultant detector signal rely on attenuated photons entering the chamber
and ionizing the gas. If the xenon gas detectors are not positioned properly there is a chance
that the ability of the detector to produce an accurate signal is compromised because the
photons may miss the chamber. The xenon gas detectors are therefore generally fixed relative
to the position of the x-ray tube, an arrangement used in 3rd generation scanner geometry
designs. The term detector refers to a single element or a single type of detector used in a CT
system. The term
detector array is used to describe the total number of detectors that a CT system utilizes for
collecting attenuated information. 3rd generation CT imaging systems employ 800-1000
detectors while 4th generation scanners include 4000-5000 individual detectors in a detector
array.
The efficiency of the xenon gas detector is compromised by the absorption of some of the
photons by the ionization plates. Additionally, photons may pass through the chamber
without interacting with the gas molecules. However, one advantage to this situation may be
that some of the photons absorbed by the plates were scattered photons. As in conventional
radiography, scatter also adversely affects the CT image. Therefore, it is reasonable to
conclude that the gas detectors have low scatter acceptability. Scintillation detectors convert
almost all the information they receive, including scattered photons; therefore, these detectors
have high scatter acceptability.
6.12.4 Data-Acquisition System
This part of the CT scanner connects the detectors to the system computer, and may consist
of a preamplifier, integrator, multiplexer, logarithmic amplifier, and analog-to-digital
converter.
Once the detector generates the analog or electrical signal it is directed to the data
acquisition system (DAS). The analog signal generated by the detector is a weak signal and
must be amplified to further be analyzed. Amplifying the electrical signal is one of the tasks
performed by the data acquisition system (DAS). The DAS is located in the gantry right after
or above the detector system. In some modern CT scanning systems the signal amplification
occurs within the detector itself. Before the projection or raw data, which is currently in the
form of an electrical or analog signal, goes to the computer, it must be converted to digital
information. The computer does not "understand" analog signals; therefore, the information
must be converted. This task is accomplished by an analog-to-digital converter, which is an
essential component of the DAS. The digital signal is transferred to an
array processor. The array processor solves the statistical information using algorithmic
calculations essential for mathematical reconstruction of a CT image. An array processor is a
specialized high speed computer designed to execute mathematical algorithms for the
purpose of reconstruction. The array processor solves reconstruction mathematics faster than
a standard microprocessor. It is important to note that special algorithms may require several
seconds to several minutes for a standard microprocessor to compute. Recently, processors
that compute CT reconstruction mathematics faster than array processors have been utilized
to solve reconstruction mathematics essential to the development of CT fluoroscopy. The
term image or reconstruction generator is used to describe this type of computer.

6.12.5 CT Patient Table or Couch


The final component of the scan or imaging system is the patient table or couch. CT tables or
couches should be made with a material that will not cause artifacts when scanned. Many CT
tables or couches are made of a carbon fiber material. The movement of the table or couch is
referred to as incrementation or indexing. Helical/spiral CT table incrementation or indexing is
quantified in millimeters per second (mm/sec) because the table is moving for the entire scan.
All table or couch designs have weight limits that, if exceeded, may compromise incrementation
or indexing accuracy. Various attachments are available for different types of scanning
procedures. Attachments for direct coronal scanning and therapy planning are commonly
used in many CT departments.
6.13 Fan beam
The x-ray beam is generated at the focal spot (the region of the anode where x-rays are
generated) and so diverges as it passes through the patient to the detector array. The
thickness of the beam is generally selectable between 1.0 and 10 mm and defines the slice
thickness.

6.14 Focused septa


Focused septa are thin metal plates between detector elements, aligned with the focal spot so
that the primary beam passes unattenuated to the detector elements, while scattered x-rays,
which normally travel in altered directions, are blocked.

6.15 Image Reconstruction


Image reconstruction in CT is a mathematical process that generates images from X-ray
projection data acquired at many different angles around the patient. Measurements are made
of the intensity of the X-ray beam and converted to a set of attenuation measurements. These
are known as the "Radon transform" of the image. An inverse transformation must then be
carried out to determine the distribution of attenuation values for each pixel element
through the target. Because of the complexity of the subject, it is impossible to cover all
aspects of image reconstruction here. At the core of any CT scan image reconstruction is a
computer algorithm called Filtered Backprojection (FBP) (Figure 6.18). Each of the hundreds
of x-ray image data sets obtained by the CT scanner is filtered to prepare them for the
backprojection step. Backprojection is nothing more than adding each filtered x-ray image
data set’s contribution into each pixel of the final image reconstruction. Each x-ray view data
set consists of hundreds of floating point numbers, and there are hundreds of these data sets.
In a high-resolution image, there are millions to tens of millions of pixels. It is easy to see
why summing hundreds of large data sets into millions of pixels is a very time-intensive
operation which only gets worse as the image resolution increases. Because of the need to
balance image resolution, image generation time and cost in CT scan systems, much work has
already been completed to make the FBP more computationally efficient. It turns out that the
filtering step is extremely time-intensive if the x-ray view image data is left in its natural
spatial coordinate system. However, if the image data sets are translated into the frequency
domain, the filtering operation becomes trivial.
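
To make this point concrete, the following is a minimal sketch of parallel-beam filtered
backprojection in Python/NumPy. The simple |f| ramp filter, the coordinate conventions, and
all names here are our illustrative assumptions, not the implementation of any particular
scanner:

```python
import numpy as np

def filtered_backprojection(sinogram, angles_deg, size):
    """Ramp-filter each view in the frequency domain, then smear
    (backproject) each filtered view across the output image."""
    n_views, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))              # |f| ramp filter
    # The filtering step is trivial once each view is in the frequency domain:
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    xs = np.arange(size) - size / 2.0                 # image coordinates
    X, Y = np.meshgrid(xs, xs)
    det = np.arange(n_det) - n_det / 2.0              # detector coordinates
    image = np.zeros((size, size))
    for view, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(theta) + Y * np.sin(theta)     # ray position for each pixel
        image += np.interp(t, det, view)              # backprojection = summation
    return image * np.pi / n_views
```

Feeding in views acquired over 0 to 180 degrees reproduces the filter-then-backproject
pipeline of Figure 6.18.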

Figure 6.18: CT scan image reconstruction by filtered backprojection: the x-ray view data
sets are filtered and then summed (backprojected) into the image pixels to form the
reconstructed image.
Image reconstruction has a fundamental impact on image quality and therefore on radiation
dose. For a given radiation dose it is desirable to reconstruct images with the lowest possible
noise without sacrificing image accuracy and spatial resolution. Reconstructions that improve
image quality can be translated into a reduction of radiation dose because images of
acceptable quality can be reconstructed at lower dose.
Two major categories of methods exist,
 Analytical reconstruction and
 Iterative reconstruction.
Methods based on filtered backprojection (FBP) are one type of analytical reconstruction that
is currently widely used on clinical CT scanners because of their computational efficiency and
numerical stability. Many FBP-based methods have been developed for different generations
of CT data-acquisition geometries, from axial parallel- and fan-beam CT in the 1970s and
1980s to current multi-slice helical CT and cone-beam CT with large area detectors.
Users of clinical CT scanners usually have very limited control over the inner workings of
the reconstruction method and are confined principally to adjusting various parameters
specific to different clinical applications.
One of the most important parameters affecting image quality is the reconstruction kernel
(also referred to as the "filter" or "algorithm" by some CT vendors). Reconstruction
kernels apply complex processing to the digital data using mathematical principles. Scan data are
based on penetration and attenuation measurements of photons as they traverse matter and
must then be converted into digital data and displayed as a CT image.
Another important reconstruction parameter is slice thickness, which controls the spatial
resolution in the longitudinal direction, influencing the tradeoffs among resolution, noise, and
radiation dose. It is the responsibility of CT users to select the most appropriate reconstruction
kernel and slice thickness for each clinical application so that the radiation dose can be
minimized consistent with the image quality needed for the examination.
Iterative reconstruction has recently received much attention in CT because it has many
advantages compared with conventional FBP techniques. Important physical factors including
focal spot and detector geometry, photon statistics, X-ray beam spectrum, and scattering can
be more accurately incorporated into iterative reconstruction, yielding lower image noise and
higher spatial resolution compared with FBP. In addition, iterative reconstruction can reduce
image artifacts such as beam hardening, windmill, and metal artifacts. A recent clinical study
on an early version of iterative reconstruction demonstrated a potential dose reduction of up to
65% compared with FBP-based reconstruction algorithms. Due to the intrinsic difference in
data handling between FBP and iterative reconstruction, images from iterative reconstruction
may have a different appearance (e.g., noise texture) from those using FBP reconstruction.
Careful clinical evaluation and reconstruction parameter optimization will be required before
iterative reconstruction can be accepted into mainstream clinical practice. High computation
load has always been the greatest challenge for iterative reconstruction and has impeded its
use in clinical CT imaging. Software and hardware methods are being investigated to
accelerate iterative reconstruction. With further advances in computational technology,
iterative reconstruction may be incorporated into routine clinical practice in the future.

6.16 Several Approaches to Image Reconstruction


To understand some of the methodologies employed in CT reconstruction, we will begin with
an extremely simplified case in which the object is formed from four small blocks. The
attenuation coefficients are homogeneous within each block and are labeled μ1, μ2, μ3,
and μ4, as shown in Figure 6.19. We will further consider the scenario where line integrals
are measured in the horizontal, vertical, and diagonal directions. Five measurements in total
are selected in this example. It can be shown that the diagonal and three other measurements
form a set of independent equations. For example, with μ1 and μ2 forming the top row and
μ3 and μ4 the bottom row,

p1 = μ1 + μ2,   p2 = μ3 + μ4,   p3 = μ1 + μ3,   p4 = μ2 + μ3        (6.1)

Here, four independent equations are established for the four unknowns. From elementary
algebraic knowledge, we know that there is a unique solution to the problem, since the
number of equations equals the number of unknowns. If we generalize the problem to the case
where the object is divided into N by N small elements, we could easily reach the conclusion
that as long as enough independent measurements (N²) are taken, we can always uniquely
solve the attenuation coefficient distribution of the object.

Many techniques are readily available to solve linear sets of equations. Direct matrix
inversion was the method used on the very first CT apparatus in 1967. Over 28,000 equations
were simultaneously solved. When the object is divided into finer and finer elements
(corresponding to higher spatial resolutions), the task of solving simultaneous sets of
equations becomes quite a challenge, even with today’s computer technology. In addition, to
ensure that enough independent equations are formed, we often need to take more than N²
measurements, since some of the measurements may not be independent. A good example is
the one given in Figure 6.19. Assume that four measurements, p1, p2, p3, and p5, are taken in
the horizontal and vertical directions. It can be shown that these measurements are not linearly
independent (p5 = p1 + p2 − p3). A diagonal measurement must be added to obtain an
independent set. When the number of equations exceeds the number of unknowns, a
straightforward solution may not always be available. This is even more problematic when we
consider the inevitable possibility that errors exist in some of the measurements. Therefore,
different reconstruction techniques need to be explored. Despite its limited usefulness, the
linear algebraic approach proves the existence of a mathematical solution to the CT problem.
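
Under the block labeling assumed in Eq. (6.1), the four-unknown example can be solved by
direct matrix inversion, the method used on the earliest CT apparatus. A hedged sketch (the
ray geometry is our assumption; the measured values follow the example of Figure 6.20):

```python
import numpy as np

# Each row records which blocks a ray crosses (our assumed geometry).
A = np.array([[1, 1, 0, 0],    # p1 = mu1 + mu2 (top row)
              [0, 0, 1, 1],    # p2 = mu3 + mu4 (bottom row)
              [1, 0, 1, 0],    # p3 = mu1 + mu3 (left column)
              [0, 1, 1, 0]])   # p4 = mu2 + mu3 (diagonal)
p = np.array([3, 7, 4, 5])     # measured line integrals

mu = np.linalg.solve(A, p)     # unique solution: equations = unknowns
print(mu)                      # [1. 2. 3. 4.]
```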

Figure 6.19: A simple example of an object and its projections.

One possible remedy to this problem is the so-called iterative reconstruction approach.
For ease of illustration, we again start with an oversimplified example. Consider the four-block
object problem discussed previously. This time, we assign specific attenuation values to each
block, as shown in Figure 6.20a. The corresponding projection measurements are depicted in
the same figure. We will start with an initial guess of the object’s attenuation distribution.
Since we have no a priori knowledge of the object itself, we assume that it is homogeneous.
We can start with an initial estimate using the average of the projection samples. The sum of
the projection samples (3 + 7 = 10 or 4 + 6 = 10) evenly distributed over the four blocks
results in an average value of 2.5 (10/4 = 2.5). Next, we calculate the line integrals of our
estimated distribution along the same paths as the original projection measurement. For
example, we can calculate the projection samples along the horizontal direction and obtain the
calculated projection values of 5 (2.5 + 2.5) and 5, as shown in Figure 6.20b.
By comparing the calculated projections against the measured values of 3 and 7 (Figure
6.20a), we observe that the top row is overestimated by 2 (5 − 3) and the bottom row is
underestimated by 2 (7 − 5). Since we have no a priori knowledge of the object, we again
assume that the difference between the measured and the calculated projections must be split
evenly among all pixels along each ray path. Therefore, we decrease the value of each block
in the top row by 1 and increase the bottom row by 1, as shown in Figure 6.20c. The
calculated projections in the horizontal direction are now consistent with the measured
projections. We repeat the same process for projections in the vertical direction and reach the
conclusion that each element in the first column must be decreased by 0.5 and each element in
the second column increased by 0.5, as shown in Figure 6.20d. The calculated projections in
all directions are now consistent with the measured projections (including the diagonal
measurement), and the reconstruction process stops. The object is correctly reconstructed.
This reconstruction process is called the algebraic reconstruction technique (ART). Based on
the above discussion, it is clear that iterative reconstruction methods are computationally
intensive because forward projections (based on the estimated reconstruction) must be
performed repeatedly. This is in addition to the updates required of the reconstructed pixels
based on the difference between the measured projection and the calculated projection. All of
the iterative reconstruction algorithms require several iterations before they converge to the
desired results. Given the fact that the state-of-the-art CT scanner can acquire a complete
projection data set in a fraction of a second, and each CT examination typically contains
several hundred images, the routine clinical usage of ART is still a long way from reality.
Despite its limited utility, ART does provide some insight into the reconstruction process.
Recall the image-update process that was used in Figure 6.20. When there is no a priori
knowledge of the object, we always assume that the intensity of the object is uniform along
the ray path. In other words, we distribute the projection intensity evenly among all pixels
along the ray path. This process leads to the concept of backprojection.


Figure 6.20: Illustration of iterative reconstruction: (a) original object and its projections;
(b) initial estimate of the object and its projections; (c) updated estimate of the object and
its projections; and (d) final estimate and projections.
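
The update cycle just described is easy to reproduce in a few lines. Below is a minimal
sketch of this ART pass on the same 2 × 2 example; the block values and ray geometry are
those of Figure 6.20, while the code itself is our illustration:

```python
import numpy as np

# True object of Figure 6.20a: top row [mu1, mu2], bottom row [mu3, mu4].
truth = np.array([[1.0, 2.0], [3.0, 4.0]])
row_meas = truth.sum(axis=1)          # horizontal projections: [3, 7]
col_meas = truth.sum(axis=0)          # vertical projections:   [4, 6]

# Initial guess: total signal spread evenly over the four blocks (2.5 each).
est = np.full((2, 2), row_meas.sum() / 4)

# Horizontal correction: split each row's error evenly along its ray.
est += (row_meas - est.sum(axis=1))[:, None] / 2
# Vertical correction: split each column's error evenly along its ray.
est += (col_meas - est.sum(axis=0))[None, :] / 2

print(est)                            # [[1. 2.], [3. 4.]] -- converged
```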

Consider a simple case in which the object of interest is an isolated point. The corresponding
projection is an impulse function with its peak centered at the location where a parallel ray
intersects the point, as shown in Figure 6.21. Similar to the reasoning used for ART, we do
not know, a priori, the location of the point other than the fact that it is located on that line.
Therefore, we have to assume a uniform probability distribution for its location. We paint the
entire ray path with the same intensity as the measured projection, as shown in Figure 6.21
(a). In this example, the first projection is oriented vertically. The next projection is again an
impulse function, and again we paint the entire ray path that intersects the impulse with the
same intensity as the measurement. This time, however, the ray path is slightly rotated relative
to the first one because of the difference in projection angles. This process is repeated for all
projection samples. Figures 6.21 (b)–(i) depict the results obtained over different angular
ranges in 22.5-deg increments.

Figure 6.21: Backprojection process of a single point. (a) Backprojected image of a single
projection. (b)–(i) Backprojection of views covering: (b) 0 to 22.5 deg; (c) 0 to 45 deg;
(d) 0 to 67.5 deg; (e) 0 to 90 deg; (f) 0 to 112.5 deg; (g) 0 to 135 deg; (h) 0 to 157.5 deg;
and (i) 0 to 180 deg.

Note that the painting procedure essentially reverses the projection process and formulates a
2D object from a set of 1D line integrals. As a result, this process is called backprojection,
and is one of the key image reconstruction steps used in many commercial CT scanners. From
Figure 6.21(i), it is clear that by backprojecting over the range of 0 to 180 deg, a rough
estimate of the original object (a point) can be obtained. By examining the intensity profile of
the reconstructed point (Figure 6.22), we conclude that the reconstructed point is a blurred
version of the true object.
Degradation of the spatial resolution is obvious. From the linear system theory, we know that
Figure 6.21(i) is essentially the impulse response of the backprojection process. Therefore, we
should be able to recover the original object by simply deconvolving the backprojected
images with the inverse of the impulse response. This approach is often called the
backprojection-filtering approach.

Figure 6.22: Profile of a reconstructed point (intensity in HU versus pixel position). Solid
black line: reconstruction with backprojection; thick gray line: ideal reconstruction.
6.17 The Filtered Backprojection Algorithm
Although the Fourier slice theorem provides a straightforward solution for tomographic
reconstruction, it presents some challenges in actual implementation. First, the sampling
pattern produced in the Fourier space is non-Cartesian. The Fourier slice theorem states that
the Fourier transform of a projection is a line through the origin in 2D Fourier space. As a
result, samples from different projections fall on a polar coordinate grid, as shown in Figure
6.23.
To perform a 2D inverse Fourier transform, these samples must be interpolated or
regridded to a Cartesian coordinate. Interpolation in the frequency domain is not as
straightforward as interpolation in real space. In real space, an interpolation error is localized
to the small region where the pixel is located. This property does not hold, however, for
interpolation in the Fourier domain, since each sample in a 2D Fourier space represents
certain spatial frequencies (in the horizontal and vertical directions). Therefore, an error
produced on a single sample in Fourier space affects the appearance of the entire image (after
the inverse Fourier transform).

Figure 6.23: Sampling pattern in Fourier space based on the Fourier slice theorem. The
Fourier transform of each projection forms one radial line of the polar sampling grid.
6.18 Image Generation
6.18.1 Acquisition
In the simplest case, the object (here a round cylinder) is linearly scanned by a thin, needle-
like beam. This produces a sort of shadow image (referred to as "attenuation profile" or
"projection"), which is recorded by the detector and the image processor. Following further
rotation of the tube and the detector by a small angle, the object is once again linearly scanned
from another direction, thus producing a second shadow image. This procedure is repeated
several times until the object has been scanned over a 180° rotation.
6.18.2 Display
The various attenuation profiles are further processed in the image processor. In the case of
simple backprojection, each attenuation profile is added up in the image memory along its
scanning direction. This results in a blurred image, which reflects the disadvantage of simple
backprojection: each object not only contributes to its own display, but also influences the
image as a whole. This already becomes visible after 3 projections. To avoid this problem,
each attenuation profile is subjected to a mathematical high-pass filter (also referred to as
"kernel") prior to the backprojection. This produces overshoot and undershoot at the edges of
the object. The mathematical operation is referred to as "convolution". The convolved
attenuation profiles are then added up in the image memory to produce a sharp image.

6.18.3 Windowing
In the CT image, density values are represented as gray scale values. However, since the
human eye can discern only approx. 80 gray scale values, not all possible density values can
be displayed in discernible shades of gray. For this reason, the density range of diagnostic
relevance is assigned the whole range of discernible gray values. This process is called
windowing. To set the window, it is first defined which CT number the central gray scale
value is to be assigned to. By setting the window width, it is then defined which CT numbers
above and below the central gray value can still be discriminated by varying shades of gray,
with black representing tissue of the lowest density and white representing tissue of the
highest density.
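
A hedged sketch of the mapping just described (the linear ramp with clipping is the standard
interpretation of window level and width; the function and values are ours):

```python
import numpy as np

def window_to_gray(hu, level, width, gray_max=255):
    """Map CT numbers to gray values: HU at or below the window floor
    render black, at or above the ceiling render white, linear between."""
    lo, hi = level - width / 2, level + width / 2
    return np.clip((hu - lo) / (hi - lo), 0, 1) * gray_max

# An assumed soft-tissue window: level 40 HU, width 400 HU.
print(window_to_gray(np.array([-1000, 40, 240, 1000]), level=40, width=400))
# -> [  0.  127.5 255.  255. ]: air is black, dense bone saturates to white
```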
6.18.4 Volume Visualization
Traditionally, CT images are viewed in a slice-by-slice mode. A series of reconstructed CT
images are placed on films, and radiologists are trained to form, in their heads, the volume
information from multiple 2D images. Although the ability to generate 3D images by
computer has been available nearly since the beginning of CT, the use of this capability has
only recently become popular. This change is mainly due to three factors. The first is related
to the quality and efficiency of 3D image generation. Because of the slow acquisition speed of
the early CT scanners, thicker slices were typically acquired in order to cover the entire
volume of an organ. The large mismatch between the in-plane and cross-plane resolution
produced undesirable 3D image quality and artifacts. In addition, the amount of time needed
to generate 3D images from a 2D dataset was quite long, due to computer hardware
limitations and a lack of advanced and efficient algorithms.
The second factor is the greater productivity of radiologists. Historically, CT scans were
"organ centric"; each CT examination covered only a specific organ, such as the liver, lung, or
head, with either thick slices or sparsely placed thin slices. The number of images produced
by a single examination was well below 100. With the recent advances in CT technology
(helical and multi-slice), a large portion of the human body can be easily covered with thin
slices. For example, some of the CT angiographic applications typically cover a 120-cm range
from the celiac artery to the feet with thin slices. The number of images produced by a single
examination can be several hundred to over 1000. To view these images sequentially would
be an insurmountable task. The third factor that influences the image presentation format is
related to clinical applications. CT images have become increasingly useful tools for surgical
planning, therapy treatment, and other applications outside the radiology department. It is
more convenient and desirable to present these images in a format that can be understood by
people who do not have radiological training.
6.19 Image Quality Characteristics
CT image quality characteristics can be described by one comprehensive term, visibility: the
visibility of anatomical structures, various tissues, and signs of pathology. Visibility
depends on the characteristics of the imaging system, which are a somewhat complex combination
of five factors. That means the image quality is not a single factor but a composite of at
least five factors:
 Contrast Sensitivity
 Visibility of Detail, as affected by blurring (sometimes called spatial resolution)
 Visual Noise
 Spatial Characteristics (Views, FOV, etc.) or geometric characteristics of the image/body relationship
 Artifacts
Each one of these characteristics can affect the visibility of specific anatomical
or pathologic objects within the body. The important point is that each of these
characteristics is generally adjustable and can be changed or set by a combination of the
protocol factors. The challenge, then, is to combine these factors into an optimized
procedure.
6.20 Contrast Sensitivity
Although all of the image characteristics are important and have a potential effect on
visibility, contrast sensitivity is especially significant in CT because it is what makes CT a
superior imaging modality for many clinical procedures.
Contrast sensitivity is generally considered one of the capabilities of the imaging
equipment, including the CT scanner, and is a characteristic of the imaging process. Contrast
sensitivity describes how well the imaging process converts the physical contrast within the
body into visible contrast in the image. In the case of CT imaging, the physical contrast is
the difference in physical density among the tissues within the body. An exception is when an
iodine-based contrast medium is used, where the contrast becomes more of an atomic number
(Z) effect.
Computed tomography (CT) generally has a significantly higher contrast sensitivity for
"seeing" the soft tissues, and the differences among the tissues in the body, than the other
x-ray imaging modalities.
Within the body, bones, bullets, and barium have very high physical contrast relative to the
soft tissues (see Figure 6.24).

Figure 6.24: The physical contrast of bones, bullets, and barium is high relative to that of
the soft tissues and fluids; the contrast sensitivity of the imaging procedure determines which
of these objects become visible in the images.

CT excels at imaging the very low density differences between and among the soft tissues,
which is the real challenge. The contrast sensitivity thus determines the range of visibility
with respect to physical contrast. Because the CT procedure has high contrast sensitivity,
tissues with small differences in density can be visualized. In procedures with low contrast
sensitivity, whether because of limitations of the specific imaging modality or the settings of
the imaging protocol factors, only objects with high physical contrast will be visible, while
tissues with small differences in density (physical contrast) will not.

6.21 Visibility of Detail


Two important characteristics of the computed tomographic (CT) image affect the ability to
visualize anatomic structures and pathologic features: blurring and noise.
Increased blurring reduces the visibility of image detail (small objects and features), just
as we cannot read fine print when we have blurred vision. CT includes inherent sources of
blurring that limit visibility of detail and determine the types of diagnostic procedures for
which it can be used, such as the size of the sampling aperture (which is governed by the
focal spot size and the detector size), the size of the voxels, and the reconstruction filter
selected.
Reduced blurring and improved visibility of fine detail can be achieved by the use of small
voxels and edge-enhancing filters. On the other hand, small voxels absorb fewer photons, and
this leads to increased noise. Therefore, an optimized protocol for a specific clinical study
must take these physical principles into account and be adjusted to give a proper balance
among detail, low noise, and patient exposure. The special challenge is that reducing
blurring increases another undesirable image characteristic, visual noise, and can also lead
to an increased radiation dose to the patient. That is why we must have optimized imaging
protocols that take all of these factors into account and provide a proper balance.
6.22 Visual Noise
Noise appears as random variation in the attenuation values (CT numbers) between voxels. As
noted above, small voxels absorb fewer photons and therefore result in increased noise. Noise
can be reduced by using larger voxels, increasing the radiation dose, or using a smoothing
filter, although a smoothing filter increases blurring; once again, the protocol must be
adjusted to give a proper balance among detail, low noise, and patient exposure.
Computed tomography (CT) is the science that creates two-dimensional cross sectional
images from three-dimensional body structures. Computed tomography utilizes a
mathematical technique called reconstruction to accomplish this task. It is important for any
individual studying the CT science to recognize that CT is a mathematical process. In a basic
sense, a CT image is the result of "breaking apart" a three-dimensional structure and
mathematically putting it back together again and displaying it as a two-dimensional image
on a television screen. The primary goal of any CT system is to accurately reproduce the
internal structures of the body as two-dimensional cross-sectional images. This goal is
accomplished by computed tomography's superior ability to overcome superimposition of
structures and demonstrate slight differences in tissue contrast. It is important to realize that
collecting many projections of an object and heavy filtration of the x-ray beam play important
roles in CT image formation. Each component of a CT system plays a major role in the
accurate formation of each CT image it produces.

Chapter 7

Nuclear Medicine
Imaging Systems
Rationale: In this chapter, we give a simple explanation of the topics relevant to the gamma
camera (radioactivity) in a manner that should be understandable by those without a formal
physics background. The chapter serves as an introduction to the process of radioactivity and
its use in the gamma camera, for those who encounter radioactive materials in their work and
would like to better understand the phenomenon, but whose education did not include physics
to the appropriate level.

Performance Objectives

After studying chapter seven, the student will be able to:

1. Define the term "activity" and state its units.
2. Define the term "half-life".
3. Know the mechanism of radioactive decay.
4. Know the mechanism of gamma imaging.
5. Give some examples of isotopes that can decay.
6. Know the basic components of the gamma camera.

CHAPTER SEVEN: NUCLEAR MEDICINE IMAGING SYSTEMS


CHAPTER CONTENTS
7.1. Introduction
7.2. Definition of "Activity"
7.3. Isotopes and Nuclides
7.4. Half-life
7.5. Radioactivity
7.6. Radioactive Decay
7.7. Gamma Camera
7.8. Gamma Imaging
7.9. Basic Components of Gamma Camera
7.10. Collimation
7.11. Emission Tomography

7.1 Introduction
Radioactivity is a collection of unstable atoms that undergo spontaneous, random
transformation (radioactive decay) resulting in new elements or a lower energy state of the
atoms. Radioactivity is a phenomenon that occurs naturally in a number of substances. Atoms
of the substance spontaneously emit invisible but energetic radiations, which can penetrate
materials that are opaque to visible light. The effects of these radiations can be harmful to
living cells but, when used in the right way, they have a wide range of beneficial applications,
particularly in medicine. Radioactivity has been present in natural materials on the earth since
its formation (for example in potassium-40 which forms part of all our bodies). However,
because its radiations cannot be detected by any of the body’s five senses, the phenomenon
was only discovered 100 years ago when radiation detectors were developed. Nowadays we
have also found ways of creating new man made sources of radioactivity; some (like iodine-
131 and molybdenum-99) are incidental waste products of the nuclear power industry which
nevertheless have important medical applications, whilst others (for example fluorine-18) are
specifically produced for the benefits of their medical use.
7.2 Definition of “Activity”
 Activity is the shortened term commonly used for radioactivity.
 Activity is the number of atoms that decay per unit time.
7.3 Isotopes and Nuclides
While all atoms of the same element contain the same number of protons, the number of
neutrons may be different. For example, carbon atoms have six protons. If a carbon atom also
has six neutrons, it is Carbon-12. If it has seven neutrons, it is Carbon-13. A carbon atom
containing six protons and eight neutrons is Carbon-14. This form or isotope of carbon is
radioactive. Carbon-14 is radioactive while Carbon-12 and Carbon-13 are stable. The term
nuclide is used to refer to any type of atom, so that Carbon-12 and Hydrogen-2 are nuclides.
They are not isotopes of each other because they differ in the number of protons that they each
have in their nucleus. The prefix “radio-” can be added to either term, making radioisotope or
radionuclide, whenever the atom referred to is radioactive.
7.4 Half-life
The activity of a radioactive source decreases over a period of time that is different for each
substance. The time required for half of a large number of identical radioactive atoms to decay
is called the half-life, which is denoted by the symbol t½. In other words, the time taken for the
activity to fall to half of its original value is called the half-life of the source. The physical
characteristic associated with a radioactive material is the half-life. The definition of the half-
life is:
 The period of time it takes for the number of radioactive atoms to be reduced to one-half
of its original amount.
 The mathematical symbol used to identify half-life is t½.
 Each radioactive isotope has its own characteristic half-life.
However, the activity does not fall at a steady rate, so it is not the case that the activity will
have fallen to nothing after two half-lives. Instead, the activity falls at an ever-decreasing rate,
such that in every half-life the activity will halve.
Figure 7.1 shows a graph of how the activity of a source changes with time. If the activity
starts out at a value Ao then after one half-life the activity will have fallen to half of Ao. After
two half-lives the activity will have fallen to one quarter of Ao and after three half-lives to one
eighth of Ao. It can be seen that the activity is falling more and more slowly and, in principle,
it will never actually reach zero.

[Graph: activity on the vertical axis falling from Ao to 1/2 Ao, 1/4 Ao and 1/8 Ao at times
t1/2, 2t1/2 and 3t1/2 on the horizontal (time) axis.]
Figure 7.1: Exponential decay of activity.
In practice after a sufficiently long time the activity will have fallen to a negligible level. The
shape of a curve like this is said to be exponential and so radioactivity is said to exhibit
exponential decay.


Mathematically it can be described by the formula

A = Ao e^(–λt)

where A is the activity remaining after a time t, Ao is the initial activity, and λ is the decay
constant, which is related to the half-life by λ = ln 2 / t½ ≈ 0.693 / t½.
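To make the decay law concrete, here is a minimal Python sketch (an illustrative example
written for this text, not part of any standard package); the 99mTc figures in the comment use
the six-hour half-life quoted later in this chapter.

    import math

    def activity(a0, half_life, t):
        # Remaining activity after time t (t and half_life in the same units).
        decay_constant = math.log(2) / half_life      # lambda = ln 2 / t1/2
        return a0 * math.exp(-decay_constant * t)     # A = Ao exp(-lambda t)

    # Example: a 500 MBq technetium-99m source (half-life 6 hours) after
    # 24 hours, i.e. four half-lives: 500 / 2**4 = 31.25 MBq.
    print(activity(500.0, 6.0, 24.0))   # ~31.25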

7.5 Radioactivity
The radioactivity is generally administered to the patient in the form of a radiopharmaceutical
- the term radiotracer is also used. This follows some physiological pathway to accumulate for
a short period of time in some part of the body. A good example is 99mTc-tin colloid which
following intravenous injection accumulates mainly in the patient's liver. The substance emits
gamma-rays while it is in the patient's liver and we can produce an image of its distribution
using a nuclear medicine imaging system. This image can tell us whether the function of the
liver is normal or abnormal or if sections of it are damaged from some form of disease.
Different radiopharmaceuticals are used to produce images from almost all regions of the
body:

Part of the Body        Example Radiotracer

Brain                   99mTc-Ceretec
Thyroid                 Na99mTcO4
Lung (Ventilation)      133Xe gas
Lung (Perfusion)        99mTc-MAA
Liver                   99mTc-Tin colloid
Spleen                  99mTc-Damaged Red Blood Cells
Pancreas                75Se-Selenomethionine
Kidneys                 99mTc-DMSA

Note that the form of information obtained using this imaging method is mainly related to the
physiological functioning of an organ as opposed to the mainly anatomical information which
is obtained using X-ray imaging systems. Nuclear medicine therefore provides a different
perspective on a disease condition and generates additional information to that obtained from
X-ray images. Our purpose here is to concentrate on the imaging systems used to produce the
images.
Early forms of imaging system used in this field consisted of a radiation detector (a
scintillation detector for example) which was scanned slowly over a region of the patient in
order to measure the radiation intensity emitted from individual points within the region. One
such device was called the Rectilinear Scanner. Such imaging systems have been replaced
since the 1970s by more sophisticated devices which produce images much more rapidly. The


most common of these modern devices is called the Gamma Camera and we will consider its
construction and mode of operation below.
7.6 Radioactive Decay
Isotopes that are not stable and emit radiation are called radioisotopes. A radioisotope is an
isotope of an element that undergoes spontaneous decay and emits radiation as it decays.
During the decay process, it becomes less radioactive over time, eventually becoming stable.
Once an atom reaches a stable configuration, it no longer gives off radiation. For this
reason, radioactive sources – or sources that spontaneously emit energy in the form of ionizing
radiation as a result of the decay of an unstable atom – become weaker with time. As more and
more of the source’s unstable atoms become stable, less radiation is produced and the activity
of the material decreases over time towards zero.
The time it takes for a radioisotope to decay to half of its starting activity is called the
radiological half-life. Each radioisotope has a unique half-life, and it can range from a fraction
of a second to billions of years. For example, iodine-131 has an eight-day half-life, whereas
plutonium-239 has a half-life of 24,000 years. A radioisotope with a short half-life is more
radioactive than a radioisotope with a long half-life, and therefore will give off more radiation
during a given time period.
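As a quick worked example using the figures just quoted: after 24 days, three eight-day
half-lives of iodine-131 have elapsed, so its activity has fallen to (1/2)^3 = 1/8 of its
starting value.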
There are three main types of radioactive decay:
• Alpha decay: Alpha decay occurs when the atom ejects a particle from the nucleus,
which consists of two neutrons and two protons. When this happens, the atomic number
decreases by 2 and the mass decreases by 4. Examples of alpha emitters include radium,
radon, uranium and thorium.
• Beta decay: In basic beta decay, a neutron is turned into a proton and an electron is
emitted from the nucleus. The atomic number increases by one, but the mass only decreases
slightly. Examples of pure beta emitters include strontium-90, carbon-14, tritium and
sulphur-35.
• Gamma decay: Gamma decay takes place when there is residual energy in the nucleus
following alpha or beta decay, or after neutron capture (a type of nuclear reaction) in a
nuclear reactor. The residual energy is released as a photon of gamma radiation. Gamma
decay generally does not affect the mass or atomic number of a radioisotope. Examples of
gamma emitters include iodine-131, cesium-137, cobalt-60, radium-226, and technetium-
99m. Gamma rays are produced by unstable nuclei when protons and neutrons rearrange to a
more stable configuration. Gamma decay usually follows an alpha or beta decay and does
not change the element.
Many isotopes can decay by more than one method. For example, when actinium-226
(Z = 89) decays, 83% of the rate is through β decay,

226Ac → 226Th + e− + ν̄,

17% is through electron capture,

226Ac + e− → 226Ra + ν,

and the remainder, 0.006%, is through α decay,

226Ac → 222Fr + 4He.

Therefore from 100,000 atoms of actinium, one would measure on average 83,000 beta
particles and 6 alpha particles (plus 100,000 neutrinos or antineutrinos). These proportions are
known as branching ratios. The branching ratios are different for the different radioactive
nuclei.
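As a simple numerical check of these branching ratios, the short Python sketch below
(illustrative only; the percentages are those quoted above for actinium-226) tallies the
expected number of each decay type from a sample of atoms.

    # Branching ratios for 226Ac quoted in the text.
    branching = {"beta": 0.83, "electron capture": 0.17, "alpha": 0.00006}

    n_atoms = 100000
    for mode, fraction in branching.items():
        # Expected number of decays of each type from n_atoms atoms.
        print(mode, round(fraction * n_atoms))
    # beta 83000, electron capture 17000, alpha 6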
Three very massive elements, 232Th (14.1 billion year half-life), 235U (700 million year half-
life), and 238U (4.5 billion year half-life) decay through complex “chains” of alpha and beta
decays ending at the stable 208Pb, 207Pb, and 206Pb respectively. The ratio of uranium to lead
present on Earth today gives us an estimate of its age (4.5 billion years). Given Earth’s age,
any much shorter-lived radioactive nuclei present at its birth have already decayed into stable
elements. One of the intermediate products of the 238U decay chain, 222Rn (radon) with a half-
life of 3.8 days, is responsible for higher levels of background radiation in many parts of the
world. This is primarily because it is a gas and can easily seep out of the earth into unfinished
basements and then into the house.
7.7 Gamma Camera
A gamma camera, also called a scintillation camera or Anger camera, is a device used to
image gamma radiation emitting radioisotopes, a technique known as Scintigraphy (see
figure 7.2). Scintigraphy ("scint," Latin scintilla, spark) is a form of diagnostic test used in
nuclear medicine, wherein radioisotopes (here called radiopharmaceuticals) are taken
internally, and the emitted radiation is captured by external detectors (gamma cameras) to
form two-dimensional images. The applications of scintigraphy include early drug
development and nuclear medical imaging to view and analyze images of the human body or
the distribution of medically injected, inhaled, or ingested radionuclide emitting gamma rays.
In contrast, single-photon emission computed tomography (SPECT) and positron emission
tomography (PET) are functional imaging techniques that produce a three-dimensional image
of functional processes in the body. They are therefore classified as techniques separate from
scintigraphy, although they also use gamma cameras to detect internal radiation. Scintigraphy
is unlike a diagnostic X-ray, where external radiation is passed through the body to form an
image. In PET, the system detects pairs of gamma rays emitted indirectly by a positron-emitting
radionuclide (tracer), which is introduced into the body on a biologically active molecule.
Three-dimensional images of tracer concentration within the body are then constructed by
computer analysis.
 Metabolism is the set of life-sustaining chemical transformations within the cells of
living organisms.


 Functional imaging (or functional medical imaging) is a method in medical
imaging of detecting or measuring changes in metabolism, blood flow, regional
chemical composition, and absorption.

Figure 7.2: Apparatus for scintigraphy.

7.8 Gamma Imaging


Gamma imaging is carried out by injecting the patient with a tracer that emits gamma rays (figure
7.3). Gamma cameras therefore image the radiation from a tracer introduced into the patient’s
body. The most commonly used tracer is technetium-99m (99mTc), a metastable nuclear isomer
chosen because of its convenient half-life of six hours and its ability to be
incorporated into a variety of molecules in order to target different systems within the body.
As it travels through the body and emits radiation the tracer’s progress is tracked by a crystal
that scintillates in response to gamma-rays. The crystal is mounted in front of an array of light
sensors that convert the resulting flash of light into an electrical signal. Gamma cameras differ
from X-ray imaging techniques in one very important respect; rather than anatomy and structure,
gamma cameras map the function and processes of the body. The gamma camera is an imaging
device used to carry out functional scans of the brain, thyroid, lungs, liver, gallbladder,
kidneys and skeleton.


Figure 7.3: Gamma camera procedure


Gamma cameras are built around a crystal (sodium iodide) which produces a burst of light when
gamma rays hit it. The light is picked up by detectors (photomultiplier tubes) located behind the
crystal. The electrical output from the detectors is fed to a computer to produce the image. A lead grid
(collimator) only allows gamma rays aligned with its ‘holes’ to hit the crystal, allowing a
“sharper” image to be obtained.
7.9 Basic Components of Gamma Camera
The basic design of the most common type of gamma camera used today was developed by an
American physicist, Hal Anger, and is therefore sometimes called the Anger Camera. It
consists of a large-diameter NaI(Tl) scintillation crystal which is viewed by a large number of
photomultiplier tubes. The basic components of a gamma camera are shown in Figure 7.4.
The crystal and PM Tubes are housed in a cylindrical shaped housing commonly called the
camera head and a cross-sectional view of this is shown in the figure. The crystal can be
between about 25 cm and 40 cm in diameter and about 1 cm thick. The diameter is dependent
on the application of the device. For example a 25 cm diameter crystal might be used for a
camera designed for cardiac applications while a larger 40 cm crystal would be used for
producing images of the lungs. The thickness of the crystal is chosen so that it provides good
detection for the 140 keV gamma-rays emitted from 99mTc - which is the most common
radioisotope used today.
The outputs of the PM tubes are fed to a position circuit which generates four signals, ±X and
±Y, related to where in the crystal each scintillation occurred; these signals are used, as described
below, to display each event on a cathode ray oscilloscope. Before we do so we should note
that the position signals also contain information about the intensity of each scintillation.
This intensity information can be derived from the position signals by feeding them to a
summation circuit (marked Σ in the figure) which adds up the
four position signals to generate a voltage pulse which represents the intensity of scintillation.


This voltage pulse is commonly called the Z-pulse (or zee-pulse in American English!) which
following pulse height analysis (PHA) is fed as the unblank pulse to the cathode ray
oscilloscope (CRO).
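The arithmetic performed by the position and summation circuits can be sketched in a few
lines of Python. This is a schematic illustration, not vendor code: the four inputs stand for the
±X and ±Y position signals, assumed here to be calibrated so that their sum (the Z-pulse)
equals the deposited energy in keV, and the PHA window around the 140 keV photopeak is an
assumed ±10% setting.

    def anger_event(x_plus, x_minus, y_plus, y_minus, window=(126.0, 154.0)):
        # Summation circuit: the Z-pulse is the sum of the four position signals.
        z = x_plus + x_minus + y_plus + y_minus
        # Pulse height analysis: reject events outside the photopeak window.
        if not (window[0] <= z <= window[1]):
            return None
        # Anger logic: difference signals normalized by the total intensity
        # give the position of the scintillation in the crystal.
        return (x_plus - x_minus) / z, (y_plus - y_minus) / z

    # A 140 keV event slightly towards the +X side of the crystal:
    print(anger_event(40.0, 30.0, 35.0, 35.0))   # (~0.071, 0.0)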

[Block diagram: the organ containing the radiopharmaceutical faces the collimator and
crystal, which are viewed by the PM tube array; the position circuit produces the +X, -X, +Y
and -Y signals, which feed both the CRO deflection plates and the summation circuit Σ; the
Σ output passes through the PHA to provide the unblank (Z) pulse to the CRO.]
Figure 7.4: A block diagram of the basic components of a gamma camera
So we end up with four position signals and an un-blank pulse sent to the CRO. Let us briefly
review the operation of a CRO before we continue. The core of a CRO consists of an
evacuated tube with an electron gun at one end and a phosphor-coated screen at the other end.
The electron gun generates an electron beam which is directed at the screen and the screen
emits light at those points struck by the electron beam. The position of the electron beam can
be controlled by vertical and horizontal deflection plates and with the appropriate voltages fed
to these plates the electron beam can be positioned at any point on the screen. The normal
mode of operation of an oscilloscope is for the electron beam to remain switched on. In the
case of the gamma camera the electron beam of the CRO is normally switched off - it is said
to be blanked.
When an un-blank pulse is generated by the PHA circuit the electron beam of the CRO is
switched on for a brief period of time so as to display a flash of light on the screen. In other
words the voltage pulse from the PHA circuit is used to un-blank the electron beam of the
CRO.
So where does this flash of light occur on the screen of the CRO? The position of the flash
of light is dictated by the ±X and ±Y signals generated by the position circuit. These signals as
you might have guessed are fed to the deflection plates of the CRO so as to cause the un-
blanked electron beam to strike the screen at a point related to where the scintillation was
originally produced in the NaI(Tl) crystal. Simple!
The gamma camera can therefore be considered to be a sophisticated arrangement of
electronic circuits used to translate the position of a flash of light in a scintillation crystal to a
flash of light at a related point on the screen of an oscilloscope. In addition the use of a pulse


height analyzer in the circuitry allows us to display only the scintillations related to
photoelectric events in the crystal, by rejecting all voltage pulses except those occurring within
the photopeak of the gamma-ray energy spectrum.
Let us summarize where we have got to before we proceed. A radiopharmaceutical is
administered to the patient and it accumulates in the organ of interest. Gamma-rays are
emitted in all directions from the organ and those heading in the direction of the gamma
camera enter the crystal and produce scintillations (note that there is a device in front of the
crystal called a collimator which we will discuss later). The scintillations are detected by an
array of PM tubes whose outputs are fed to a position circuit which generates four voltage
pulses related to the position of scintillation within the crystal. These voltage pulses are fed to
the deflection circuitry of the CRO. They are also fed to a summation circuit whose output
(the Z-pulse) is fed to the PHA and the output of the PHA is used to switch on (that is, un-
blank) the electron beam of the CRO. A flash of light appears on the screen of the CRO at a
point related to where the scintillation occurred within the NaI(Tl) crystal. An image of the
distribution of the radiopharmaceutical within the organ is therefore formed on the screen of
the CRO when the gamma-rays emitted from the organ are detected by the crystal.
What we have described above is the operation of a fairly traditional gamma camera.
Modern designs are a good deal more complex but the basic design has remained much the
same as has been described. One area where major design improvements have occurred is the
area of image formation and display. The most basic approach to image formation is to
photograph the screen of the CRO over a period of time to allow integration of the light
flashes to form an image on photographic film. A stage up from this is to use a storage
oscilloscope which allows each flash of light to remain on the screen for a reasonable period
of time.
The most modern approach is to feed the position signals into the memory circuitry of a
computer for storage. The memory contents can therefore be displayed on a computer monitor
and can also be manipulated (that is processed) in many ways. For example various colors can
be used to represent different concentrations of a radiopharmaceutical within an organ.
The use of such digital image processing is now widespread in nuclear medicine in that it
can be used to rapidly and conveniently control image acquisition and display as well as to
analyze an image or sequences of images, to annotate images with the patient's name and
examination details, to store the images for subsequent retrieval and to communicate the
image data to other computers over a network. We will continue with our description of the
gamma camera by considering the construction and purpose of the collimator.
7.10 Collimation
The collimator is a device which is attached to the front of the gamma camera head. It
functions something like a lens used in a photographic camera but this analogy is not quite
correct because it is rather difficult to focus gamma-rays. Nevertheless in its simplest form it
is used to block out all gamma rays which are heading towards the crystal except those which
are travelling at right angles to the plane of the crystal:


Figure 7.5 illustrates a magnified view of a parallel-hole collimator attached to a crystal.
The collimator simply consists of a large number of small holes drilled in a lead plate. Notice
that gamma-rays entering at an angle to the crystal get absorbed by the lead and that only
those entering along the direction of the holes get through to cause scintillations in the crystal.
If the collimator was not in place these obliquely incident gamma-rays would blur the images
produced by the gamma camera. In other words the images would not be very clear.

[Diagram: a lead (Pb) parallel-hole collimator attached to the front face of the crystal.]
Figure7.5: Diagram of parallel-hole collimator attached to a
crystal of a gamma camera. Obliquely incident gamma-rays
are absorbed by the septa.
Most gamma cameras have a number of collimators which can be fitted depending on the
examination. The basic design of these collimators is the same except that they vary in terms
of the diameter of each hole, the depth of each hole and the thickness of lead between each
hole (commonly called the septum thickness). The choice of a specific collimator is dependent
on the amount of radiation absorption that occurs (which influences the sensitivity of the
gamma camera), and the clarity of images (that is the spatial resolution) it produces.
Unfortunately these two factors are inversely related in that the use of a collimator which
produces images of good spatial resolution generally implies that the instrument is not very
sensitive to radiation.
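For the parallel-hole design this trade-off can be made roughly quantitative. A commonly
quoted approximation (not derived in this book; the formula and numbers below are
illustrative) gives the geometric resolution as R ≈ d(l + z)/l, where d is the hole diameter,
l the hole length and z the distance of the source from the collimator face, so narrower or
longer holes sharpen the image at the cost of accepting fewer gamma-rays.

    def collimator_resolution_mm(hole_diameter, hole_length, source_distance):
        # Geometric resolution R = d (l + z) / l of a parallel-hole collimator:
        # it worsens linearly with the source distance z from the collimator.
        return hole_diameter * (hole_length + source_distance) / hole_length

    # 1.5 mm holes, 25 mm long, source 100 mm from the collimator face:
    print(collimator_resolution_mm(1.5, 25.0, 100.0))   # 7.5 (mm)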
Other collimator designs besides the parallel-hole type are also in use. For example a
diverging hole collimator produces a minified image and converging hole and pin-hole
collimators produce a magnified image. The pin-hole collimator is illustrated in the Figure 7.6:


The pin-hole collimator consists of a single small hole at the apex of a lead cone, as shown in
the figure. It operates in a similar fashion to a pin-hole photographic camera
and produces an inverted image of an object - an arrow is used in the figure to illustrate this
inversion. This type of collimator has been found useful for imaging small objects such as the
thyroid gland.

[Diagram: a lead (Pb) cone with a pinhole aperture between the object (an arrow) and the
crystal; the image of the arrow formed on the crystal is inverted.]
Figure 7.6: Diagram of a pin-hole collimator illustrating the inversion
of acquired images.

7.11 Emission Tomography


The form of imaging which we have been describing is called Planar Imaging. It produces a
two-dimensional image of a three-dimensional object. As a result images contain no depth
information and some details can be superimposed on top of each other and obscured or
partially obscured as a result. Note that this is also a feature of conventional X-ray imaging.
The usual way of trying to overcome this limitation is to take at least two views of the
patient, one from the front and one from the side for example. So in chest radiography a
postero-anterior (PA) and a lateral view can be taken. And in a nuclear medicine liver scan an
antero-posterior (AP) and lateral scan are taken.
This limitation of planar X-ray imaging was overcome by the development of the CAT
scanner around 1970. CAT stands for Computerized Axial Tomography or
Computer Assisted Tomography, and today the term is often shortened to Computed
Tomography or CT scanning. Irrespective of its exact name, the technique allows images of
slices through the body to be produced using a computer. It does this in essence by taking X-
ray images at a number of angles around the patient. These slice images show the third
dimension which is missing from planar images and thus eliminate the problem of
superimposed details. Furthermore images of a number of successive slices through a region
of the patient can be stacked on top of each other using the computer to produce a three-
dimensional image. Clearly CT scanning is a very powerful imaging technique which is far
superior to planar imaging. The equivalent nuclear medicine imaging technique is called
Emission Computed Tomography.

CHAPTER 8

IMAGING WITH ULTRASOUND

Rationale: This chapter will explain the basic physics of how sound waves can produce
images of the human body, starting from the idea that all the various techniques of diagnostic
ultrasound involve the detection and display of the acoustic energy reflected off different
tissues in the body.


Performance Objectives

After studying chapter eight, the student will be able to:
1. State the physical and medical definitions of ultrasound.
2. Explain how the piezoelectric effect operates.
3. Identify the properties of ultrasound.
4. Describe the basic function of a transducer and how it forms an ultrasound
pulse.
5. Describe the modes of ultrasound.
6. Discuss the physical factors that determine ultrasound wavelength and its
significance in imaging.
7. Describe the general relationship between wavelength and image quality.
8. Describe the physical conditions in the body that produce ultrasound
reflections or echoes.
9. Describe the factors that determine the intensity of a reflected pulse.
10. Identify the three physical factors that determine the total attenuation of
an ultrasound pulse passing through a section of tissue.
11. Identify the factors that determine ultrasound velocity and state the
approximate velocity value in tissue.
12. State the basic principles of the Doppler effect.

CHAPTER EIGHT: IMAGING WITH ULTRASOUND


CHAPTER CONTENTS
8.1. Introduction and Overview
8.2. Definition of Ultrasound
8.3. Properties of Ultrasound
8.3.1. Type of Waves Depends on the Medium
8.3.2. Phase Velocity–Group Velocity
8.3.2.1. Phase Velocity
8.3.2.2. Group Velocity
8.3.3. Wavelength and Speed of Propagation
8.4. Diagnostic Ultrasound
8.5. Piezoelectric Materials
8.6. Piezoelectric Effect
8.7. Reverse Piezoelectric Effect
8.8. Detection of Ultrasound
8.9. Ultrasound Imaging Systems
8.9.1. Ultrasound Transducers
8.9.1.1. Ultrasonic Transducer Structures
8.9.1.2. Types of Ultrasound Transducers
8.9.2. Amplification
8.9.3. Scan Generator
8.9.4. Scan Converter
8.9.5. Image Processor
8.9.6. Display
8.9.7. Things to Consider
8.9.7.1. Thickness Range
8.9.7.2. Geometry
8.9.7.3. Temperature
8.9.7.4. Accuracy
8.10. Ultrasound Modalities
8.10.1. Ultrasound Pulse Generator
8.10.1.1. Short Pulse
8.10.2. Continuous Wave Mode
8.11. Ultrasound Characteristics
8.11.1. Frequency
8.11.2. Velocity
8.11.3. Wavelength
8.11.4. Amplitude
8.12. Intensity and Power
8.12.1. Temporal Characteristics
8.12.2. Spatial Characteristics
8.12.3. Temporal/Spatial Combinations
8.13. Interactions of Ultrasound with Tissue
8.13.1. Attenuation
8.13.2. Refraction
8.13.3. Reflection
8.13.4. Scattering
8.13.5. Absorption
8.14. Acoustic Impedance
8.15. Ultrasound Contrast Agents
8.16. Spatial Resolution
8.16.1. Lateral Resolution
8.16.2. Axial Resolution
8.17. Beam Forming and Transducers
8.17.1. Ultrasound Field
8.18. Transducer Focusing
8.18.1. Dynamic Receive Focus
8.18.2. Ultrasonic Phased Arrays
8.18.3. Unfocused Transducers
8.18.4. Fixed Focus
8.18.5. Adjustable Transmit Focus
8.19. Time Gain Compensation (TGC)
8.20. Ultrasound Techniques
8.21. Modes of Ultrasound
8.21.1. A-mode
8.21.2. B-mode
8.21.3. M-mode or TM-mode
8.21.4. B-scan, Two-dimensional
8.21.5. Three- and Four-Dimensional Ultrasound Techniques
8.21.6. Harmonic Imaging
8.21.7. B-flow
8.22. Doppler Effect
8.23. Basic Principles
8.24. The Doppler Equation
8.25. Spectral Doppler
8.26. Pulsed and Continuous Wave Doppler
8.26.1. Continuous Wave Doppler
8.26.1.1. The Advantage of CW Doppler
8.26.1.2. The Disadvantage of CW Doppler
8.27. Color Flow Mapping
8.28. Pulsed Wave Doppler
8.29. Angle of Incidence
8.30. Aliasing


8.1 Introduction and Overview


Unlike light and X-rays, sound requires a medium to propagate through, such as water or
soft tissue, and it consists of longitudinal vibrations, in much the same way as a compression
can be seen to travel along the length of a spring, as shown in figure 8.1.
[Diagram: a transverse wave with its wavelength marked, and a longitudinal wave showing
alternating compressions and expansions, with its wavelength marked.]
Figure 8.1: Examples of longitudinal and transverse waves.
Diagnostic ultrasound (ultrasonography) is an ultrasound-based diagnostic imaging technique
used for visualizing subcutaneous structures and examining different parts of the human body,
including tendons, muscles, joints, vessels and internal organs, for possible pathology or
lesions, using high-frequency sound waves which are emitted from a probe and directed into
the body. Ultrasonography is commonly used during pregnancy and is widely recognized
by the public.
All the various techniques of diagnostic ultrasound involve the detection and display of the
acoustic energy reflected off different tissues in the body. Different body structures have
different characteristics that scatter and reflect sound energy in predictable ways, making it
possible to identify these structures in the two-dimensional, gray-scale images
produced by ultrasound scanners.
There are many variables involved in the production, detection and processing of ultrasound
data, which are, for the most part, under the control of the operator. Of all the different imaging
techniques, ultrasound is the most affected by the skill and experience of the operator, both in
the acquisition and interpretation of images.
Diagnostic ultrasound offers advantages over other imaging modalities. The most
important of these is that it does not use ionizing radiation, making it safer, especially for
imaging during pregnancy. Another important feature is its ability to image in real time,
making it simple to perform live active and passive range-of-motion studies, in addition to its
low cost. In summary, the features of ultrasound:
 Uses no ionizing radiation


 Safe in pregnancy
 Has no known side effects
 Inexpensive
 Portable
 Minimal preparation of patients
 Painless
 Gives direct vision for biopsies
One problem with ultrasound imaging is that diagnostic images sometimes cannot be
obtained because of the size of the patient, or because the ultrasound beam cannot traverse
air-filled or bony areas; in such cases, cross-sectional imaging with CT or MRI can be
used instead.
Medical treatment can be given only after a proper diagnosis that correctly identifies the disease.
After Roentgen first used ionizing radiation (X-rays) in 1895 to visualize the interior of
the body, X-ray imaging remained the only such method for decades. During the second half of the
twentieth century, however, new imaging methods quite different from X-rays were
discovered. One of the most important of these is ultrasound, which has shown particular
potential and, in many cases, greater benefit than imaging which relies on X-rays.
During the last decade of the twentieth century, the use of ultrasound in medical practice
and hospitals became increasingly common in all parts of the world. Much scientific research has
proved the benefit of ultrasound, and in many cases its superiority over commonly used X-ray
techniques, resulting in significant changes in diagnostic imaging procedures.
Sound is a physical phenomenon that carries energy from one point to another. In this
respect it is similar to radiation, but it differs from radiation in that it does not pass through a
vacuum: it needs matter in order to transfer from one place to another. This
is because sound waves are actually vibrations that pass through a material. If there is
no substance, nothing can vibrate and sound cannot exist.
One of the most important features of sound is frequency, defined as the rate of
vibration of the source of the sound and of the material that it passes through. Sound frequency is
measured in a basic unit called the hertz, where one hertz is defined as one vibration, or cycle,
per second. Pitch is the term commonly used as a synonym for sound frequency.
Medical ultrasound uses high-frequency sound waves to look inside the body. These sound
waves are too high in frequency for the human ear to hear. There is a specific range of
frequencies which the human ear can hear or respond to: the human ear in young adults can
hear frequencies from 20 Hz to 20,000 Hz. Frequencies greater than this limit are
called ultrasonic frequencies (ultrasound). Frequencies in the range 2 MHz (million
cycles per second) to 20 MHz, far too high for the human ear to hear, are used in diagnostic
ultrasound. Ultrasound is used as a diagnostic tool because it can be focused
into small, well-defined beams that can probe the human body and interact with the tissue
structures to form images. Acoustic waves are directed at the internal organs through a small


hand-held scanner called a transducer, which is placed in direct contact with the patient's
skin over the area to be imaged. The transducer contains a vibrating crystal and detects the
reflected sound, or echo, to form an image.
The transducer (probe) is the small hand-held component of the ultrasound imaging
equipment that resembles a microphone and it performs several functions as will be described
in detail later. Its first function is to produce and send the ultrasound pulses when electrical
pulses are applied to it. A short time later, while pressed against the skin, it receives the
returning echoes, which are converted back into electrical pulses that are then processed by
the system and formed into an image.
Echoes are produced by surfaces, or boundaries between two different types of tissue, and appear
in the form of bright white spots in the image. Many surfaces in general produce a white or gray
background which can be seen in the image. In the absence of reflecting surfaces within a fluid,
such as a cyst, dark spots appear in the image. For this reason, the ultrasound image is
sometimes called a brightness-modulation ("B-mode") image, which is a display of the echo-
producing sites within the anatomical area.
Another physical characteristic that can be imaged with ultrasound devices comes from processing
the echoes produced by blood flowing through blood vessels. This special application of
ultrasound uses the Doppler principle, which measures the direction and speed of blood
cells as they move through vessels. A computer collects and processes the sounds and graphs them,
or constructs images with different colors representing the different flow velocities and
directions of the blood flowing through the blood vessels.
8.2 Definition of Ultrasound
 Physical Definition; Ultrasound (ultrasonic) is the term used to describe sound of
frequencies above 20 000 Hertz (Hz), beyond the range of human hearing. The term
"ultrasonic" applied to sound refers to anything above the frequencies of audible sound.
 Medical Definition; Diagnostic Medical Ultrasound is the use of high frequency sound
to aid in the diagnosis and treatment of patients and the frequency ranges used in
medical ultrasound imaging are 2 - 15 MHz.
8.3 Properties of Ultrasound
Sound is a pressure disturbance (vibration) transmitted through all forms of matter: gases,
liquids, solids, and plasmas as mechanical pressure waves that carry kinetic energy. A medium
must therefore be present for the propagation of these waves, since they cannot travel through a
vacuum.
8.3.1 Type of Waves Depends on the Medium
Ultrasound and sound waves propagate in a fluid (gases and liquids) as longitudinal waves,
in which the particles of the medium vibrate to and fro along the direction of propagation,
alternately compressing and rarefying the material.


In hard tissues like bone, ultrasound can be transmitted as both longitudinal (compression)
and transverse (shear) waves; in the latter case, the particles move perpendicularly to the
direction of propagation.

8.3.2. Phase Velocity–Group Velocity


There are two fundamental sound velocities that we must distinguish between: the
group velocity and the phase velocity.
8.3.2.1 Phase Velocity
The phase velocity of a wave is the speed at which a given phase of the wave travels through
space, that is, the rate at which the phase of the wave propagates. This is the velocity at which
the phase of any one frequency component of the wave will propagate: you could pick one
particular phase of the wave (for example the crest) and it would appear to travel at the phase
velocity. The term phase velocity generally comes into the picture when there is a single wave;
for example, in an optical fiber carrying only one mode (i.e. single mode), the phase velocity
is well defined and the same as the group velocity. Phase velocity corresponds to the
propagation velocity of a given phase, that is, of a single-frequency (i.e. single-mode)
component of a periodic wave. A propagating medium is said to be dispersive if the phase
velocity is a function of frequency or wavelength, which is the case, for example, in all
attenuating media. This means that the different frequencies contained in a signal do not
all propagate at the same velocity.

8.3.2.2 Group Velocity


The group velocity is the speed of the overall shape of a modulated wave (called the
envelope); that is, the rate at which changes in amplitude (the envelope of the wave)
propagate through space. (Where different modes exist, each has a different velocity.) Group
velocity corresponds physically to the velocity with which energy or information is transferred
along the direction of wave propagation. If the wave is travelling through an absorptive
medium, the group velocity may differ from the phase velocity. It is important to be aware of
dispersion because it affects how accurate measurements of the speed of sound are likely to
be. Hence, by analogy with an average speed, the so-called group velocity has been defined,
which gives a sort of average velocity of the different modes.

8.3.3 Wavelength and Speed of Propagation


The speed of sound is the distance that a point on a wave (such as a compression or a
rarefaction) travels per unit of time as the wave propagates through an elastic
medium.
The speed of sound (c) in a medium depends on the density and compressibility of the
medium. For example, in pure water, it is 1492 m/s (20 °C). Note that the speed of sound in air
depends only on the temperature.


As known from basic physics, the characteristic variables describing the propagation of a
monochromatic wave in time and space - frequency (f) or period (T), velocity (v) and
wavelength (λ) - are related to each other as follows:

v = f λ = λ / T
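As a quick worked example (using the soft-tissue speed of sound quoted in the following
paragraphs), a 5 MHz beam in soft tissue has a wavelength of

λ = v / f = (1540 m/s) / (5 × 10^6 Hz) ≈ 0.3 mm,

which is why higher frequencies, with their shorter wavelengths, give finer image detail.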

Waves in the electromagnetic spectrum, such as light waves, usually have their wavelengths
measured in micrometers or nanometers instead of centimeters or meters. This is because they
have much shorter wavelengths than sound waves.
As it does in water, ultrasound propagates in biological soft tissues as longitudinal waves. The
average speed of ultrasound propagation in soft tissue is approximately 1540 m/s (fatty
tissue, 1470 m/s; muscle, 1570 m/s). The construction of ultrasound images depends very
much on the measurement of distances, which relies on this almost constant propagation
speed. The speed in bone (3600 m/s) and cartilage is, however, much higher and can create
misleading effects in images.
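The distance measurement mentioned here is the pulse-echo principle: the scanner times how
long a pulse takes to travel to a reflector and back, and converts that time to depth using the
assumed average speed of 1540 m/s. A minimal Python sketch (illustrative, not scanner
firmware):

    SPEED_OF_SOUND = 1540.0   # m/s, assumed average for soft tissue

    def echo_depth(round_trip_time_s):
        # The pulse travels to the reflector and back, hence the factor of 2.
        return SPEED_OF_SOUND * round_trip_time_s / 2.0   # depth in meters

    # An echo returning after 65 microseconds comes from about 5 cm deep.
    print(echo_depth(65e-6))   # ~0.05 m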
The wavelength of ultrasound is closely related to its frequency, and both influence
the resolution of the images. Better resolution is associated with a higher ultrasound
frequency (a shorter wavelength), but the absorption of sound energy by tissue also increases
with frequency. The resolution determines the degree of image clarity; that is, the
resolution is the ability of the ultrasound machine to distinguish two structures (reflectors or
scatterers) that are close together as separate.
The kinetic energy of the sound waves is converted to heat (thermal energy) in the medium
when sound waves are absorbed. The applications of ultrasound to bring heat or agitation into
the body (thermotherapy) were the first use of ultrasound in medicine.
8.4 Diagnostic Ultrasound
Today, ultrasound (US) is one of the most commonly used imaging technologies in medicine.
Sound in the range 2 to 18 megahertz (MHz) is typically used for diagnostic ultrasound.
The accuracy of ultrasound diagnosis is based on computerized analysis of the reflected ultrasound
waves, which non-invasively build up fine images of internal body structures. The best
resolution can be achieved by using shorter wavelengths, the wavelength being inversely
proportional to the frequency. However, the use of high frequencies is limited by their
increased attenuation (loss of signal strength) in tissue: they are more easily absorbed and
thus have a shorter depth of penetration. For this reason, specific probes are used for
different frequency ranges to examine different parts of the body:

 3–5 MHz for abdominal areas


 5–10 MHz for small and superficial parts and
 10–30 MHz for the skin or the eyes


Because of the heterogeneity of the different tissues within the body, with their different densities,
the penetration of the ultrasonic waves varies accordingly. Bones absorb ultrasound much more than
soft tissue, so that, in general, ultrasound is suitable for examining only the surfaces of
bones. For this reason, ultrasound images show a black zone behind bones, due to the
inability of the ultrasound energy to reach those areas; this black zone is
called an acoustic shadow.
8.5 Piezoelectric Materials
The word piezoelectricity means the ability of some materials (notably crystals and certain
ceramics, including bone) to generate an electric charge, or an electric polarity in dielectric
crystals, in response to applied mechanical stress, and conversely to deform when such
crystals are subjected to an applied voltage. The prefix piezo- is derived from the Greek word
'piezein' (πιέζειν), which means to squeeze or press, and electric comes from 'electron'
(ήλεκτρον), which stands for amber, an ancient source of electric charge. The piezoelectric
effect was first discovered in 1880 by the brothers Pierre Curie and Jacques Curie (French
physicists). The Curie brothers found only that piezoelectric materials can produce electricity;
the next development was the discovery by Gabriel Lippmann that electricity can deform
piezoelectric materials. It was not until the early twentieth century that practical devices began
to appear. Today, it is known that many materials, such as quartz, topaz, cane sugar, Rochelle
salt, and bone, have this effect.
In summary:
- Piezoelectricity was discovered by the Curies in 1880 using natural quartz
- SONAR (originally an acronym for SOund Navigation And Ranging), a technique that uses
sound propagation (usually underwater, as in submarine navigation), was first used during
wartime in the 1940s
- Diagnostic medical applications have been in use since the late 1950s

Piezoelectric crystals or materials generate an electrical voltage from the separation of positive
and negative charges when they are squeezed or stretched. In other words, piezoelectricity is the
charge that accumulates in certain solid materials (notably crystals, certain ceramics, and
biological matter such as bone, DNA and various proteins) in response to applied mechanical
stress. We can view each SiO2 unit as a sphere that has a positively charged core (lawn green)
and a negatively charged shell (light pink), as depicted in Figure 8.2. Figure 8.2 explains what
happens if the SiO2 is put under mechanical stress (symbolized by yellow arrows). The overall
formula of quartz is SiO2, and since every oxygen atom carries the same extra amount of
negative charge taken from the silicon atoms, each central silicon atom carries two positive
charges and each oxygen atom just one negative charge.
Under mechanical stress all the tetrahedra in the crystal are affected, with their central silicon
atoms pushed downwards. The whole structure (all SiO2 units) is electrically polarized in the
same way, in this case being more negative on the top and more positive on the bottom, as

depicted in Figure 8.2 to the right. The voltage built up in each SiO2 unit is very small, but
since millions of them line up in the crystal structure, their voltage adds up to a measurable
amount.
Piezoelectric crystals have the property that the polarization density within the
material's volume changes when a voltage is applied. Thus applying an alternating current (AC)
across such materials causes them to oscillate at very high frequencies, producing very high-
frequency sound waves. A good example of a piezoelectric material is Lead (from Latin:
plumbum) Zirconate Titanate - more commonly known as PZT. PZT is an
inorganic compound with the chemical formula Pb[ZrxTi1-x]O3 (0 ≤ x ≤ 1) that shows a marked
piezoelectric effect, which finds practical applications in the area of electro-ceramics. PZT is a
white solid that is insoluble in all solvents. PZT crystals are the most widely used
piezoelectric material for energy harvesting. PZT will generate measurable
piezoelectricity when its static structure is deformed by about 0.1% of the original
dimension. Conversely, those same crystals will change by about 0.1% of their static dimension
when an external electric field is applied to the material. A key advantage of PZT materials is
that they can be optimized to suit specific applications through their ability to be manufactured
in any shape or size. Moreover, PZT materials are characterized by their resilience, their
resistance to high temperatures and various air pressures, and their chemical inertness.

[Diagram, three panels - no stress, tension and compression - showing the charge distribution
in quartz (SiO2) units (oxygen atoms as negatively charged shells, silicon atoms as positively
charged cores) and the net polarization that appears under stress.]

Figure 8.2: A piezoelectric disk generates a voltage when deformed (the change in shape is
greatly exaggerated). Quartz (SiO2): effect of deformation on charge distribution.

Quartz is an electrical insulator but has a number of unique physical properties that are very
different from those of other solid substances, like plastic, wood, concrete, bone, or glass. For
example, it may show some interesting behavior when exposed to electric fields, or get
electrically charged when put under stress. This is one of the most important properties of
quartz technically and it has many applications. This property can be found in certain types of
crystals, but not in amorphous substances; such a crystal reacts differently depending on the
direction of external forces, and is called anisotropic. To explain this, we have to look at the
individual molecules of the crystal, which is made up of an ordered repeating pattern of the
same atom or molecule. Each molecule is polarized, since one end is more negatively charged
and the other end is positively charged; it is called a dipole as a result of the atoms that make
up the molecule and the way the molecules are shaped.
8.6 Piezoelectric Effect
The piezoelectric effect is the ability of certain non-conducting materials, such as quartz
crystals and ceramics, to generate an electric current when they are exposed to mechanical stress
(such as pressure or vibration). It also works in the opposite direction: a slight mechanical
deformation (the substance shrinks or expands), or the generation of vibrations in such
materials, or both, may be produced when they are subjected to an AC voltage. This vibration
or oscillation caused by the applied AC is transmitted as ultrasonic waves into the surrounding
medium. The piezoelectric crystal, therefore, serves as a transducer, which converts electrical
energy into mechanical energy and vice versa.
The piezoelectric effect occurs only in crystals with a special crystal structure that
lacks a center of symmetry. All piezoelectric crystal classes lack a centre of symmetry. Under an
applied force the centers of mass of the positive and negative ions are shifted, which results in a
net dipole moment. When the force is along a different direction, there may not be a resulting
net dipole moment in that direction, though there may be a net dipole moment along a different
direction. In the absence of an applied force, the centre of mass of the positive ions
coincides with that of the negative ions and there is no resulting dipole moment or
polarization.
8.7 Reverse Piezoelectric Effect
The piezoelectric effect is a reversible process in that materials exhibiting the direct
piezoelectric effect (the internal generation of electrical charge resulting from an applied
mechanical force) also exhibit the reverse piezoelectric effect (the internal generation of a
mechanical strain resulting from an applied electrical field). This effect occurs in crystals
that have no center of symmetry. To explain this, we have to look at the individual molecules
that make up the crystal (cf. Figure 8.2). Each molecule has a polarization: one end is more
negatively charged and the other end is positively charged, and it is called a dipole. This is a
result of the atoms that make up the molecule and the way the molecules are shaped. The polar
axis is an imaginary line that runs through the center of both charges on the molecule. In a
mono-crystal the polar axes of all of the dipoles lie in one direction. The crystal is said to be
symmetrical because if you were to cut the crystal at any point, the resultant polar axes of the
two pieces would lie in the same direction as the original. In a poly-crystal, there are different


regions within the material that have a different polar axis. It is asymmetrical because there is
no point at which the crystal could be cut that would leave the two remaining pieces with the
same resultant polar axis.
In summary:
Ultrasound waves are generated by piezoelectric crystals. Piezoelectric means "pressure-
electric" effect. An electric current applied to a quartz crystal produces a mechanical
deformation of its shape and a change of polarity. Thus applying an alternating current (AC)
across the material causes expansion and contraction, which in turn leads to the production of
the compressions and rarefactions of sound waves. It also works in the opposite direction: an
electrical current is generated on exposure to returning echoes, and this is processed to generate
a display. Hence the piezoelectric crystals are both transmitter (a small proportion of the time)
and receiver (most of the time). It is known that many materials have the piezoelectric effect
(e.g. topaz, cane sugar, Rochelle salt, and bone) and the frequency of the generated wave is a
specific feature of the crystal used.
8.8 Detection of Ultrasound
As we have seen previously, the piezoelectric effect works in reverse. If the crystal is
squeezed or stretched, an electric field is produced across it. So, if ultrasound hits the crystal
from outside, it will cause the crystal to vibrate in and out, and this will produce an alternating
electric field. The resulting electrical signal can be amplified and processed in a number of
ways. So a second crystal can be used to detect any returning ultrasound which has been
reflected from an obstacle.
Normally the transmitting and receiving crystals are built into the same hand-held unit,
which is called an ultrasonic transducer (generally, a transducer is any device that converts
energy from one form to another, usually to or from electrical energy).
A question may come to mind: what is the material used by doctors and placed on the skin prior
to the examination?
Ultrasound gel is a type of conductive medium that is used in ultrasound diagnostic
techniques and treatment therapies. It is placed on the patient’s skin at the beginning of the
ultrasound examination or therapy. The transducer, which is the device used to send and
receive sound waves, is then placed on top of it. Ultrasound gel is also used with a fetal
Doppler, which can be employed to allow parents and doctors to listen to the heart beat of an
unborn child.
Many doctors, hospitals, clinics, and other facilities use ultrasound technology for
diagnostic purposes. It works by passing sound waves into a person’s body. Once there, they
don’t remain for long. Instead, they bounce off the organ or other part of the body the doctors
are trying to view. The sound waves then move back through the transducer, and they are
ultimately analyzed by a computer, which allows the analyzed sound waves to be viewed on a
monitor or even printed out for doctor or patient use.


8.9 Ultrasound Imaging Systems


The basic functional components of an ultrasound imaging system are shown in Figure 8.3.
Modern ultrasound systems use digital computer electronics to control most of the functions
in the imaging process. Therefore, the boxes in the figure represent functions performed by
the computer and other electronic circuits, not individual physical components.
[Block diagram: a pulse generator drives the transducer, which emits the ultrasound pulse
towards a reflecting interface; the returning echo pulse is picked up by the transducer and
passes through the amplifier (gain) to the scan converter and image processor (intensity,
contrast) and on to the display, with a scan generator coordinating the transducer and scan
converter.]
Figure 8.3: The principal functional components of an ultrasound
imaging system

We will now consider some of these functions in more detail and how they contribute to
image formation.
8.9.1 Ultrasound Transducers
In general, a transducer is a device that converts energy from one form to another; in the case
of ultrasound transducers, the conversion is from electrical to mechanical energy (or vice
versa).
8.9.1.1 Ultrasonic Transducer Structures
Transducers for ultrasound imaging consist of one or more piezoelectric crystals or elements.
The basic properties of ultrasound transducers (resonance, frequency response, focusing, etc.)
can be illustrated in terms of single-element transducers. However, imaging is often performed
with multiple-element “arrays” of piezoelectric crystals.
A piezoelectric transducer comprises a "crystal" sandwiched between two metal plates.
When a sound wave strikes one or both of the plates, the plates vibrate. The crystal picks up
this vibration, which it translates into a weak AC voltage. Therefore, an AC voltage arises
between the two metal plates, with a waveform similar to that of the sound waves. Conversely,


if an AC signal is applied to the plates, it causes the crystal to vibrate in sync with the signal
voltage. As a result, the metal plates vibrate also, producing an acoustic disturbance.
Piezoelectric transducers are common in ultrasonic applications, such as intrusion detectors
and alarms. Piezoelectric devices are employed at AF (audio frequencies) as pickups,
microphones, earphones, beepers, and buzzers. In wireless applications, piezoelectricity makes
it possible to use crystals and ceramics as oscillators that generate predictable and stable
signals at RF (radio frequencies).
Ultrasound transducers are usually made of thin discs of an artificial ceramic perovskite
material such as PZT. The basic design of a plain transducer is shown in Figure 8.4.

[Diagram (a): a probe containing many transducer elements, showing the power cable,
acoustic insulator, acoustic absorber, rear and front electrodes applying an alternating
potential difference, and acoustic lens. Diagram (b): a single-element transducer, showing the
backing material, piezoelectric crystal, acoustic matching layers and acoustic lens.]

Figure 8.4: Basic design of (a) a probe containing hundreds of transducer elements and (b) a
single-element transducer.

The crystal is cut into a slice with a thickness equal to half a wavelength of the desired
ultrasound frequency, as this thickness ensures most of the energy is emitted at the
fundamental frequency. Generally, the thickness of thin discs (usually 0.1–1 mm) determines
the ultrasound frequency. In most diagnostic applications, ultrasound is emitted in extremely
short pulses as a narrow beam comparable to that of a flashlight. When not emitting a pulse
(as much as 99% of the time), the same piezoelectric crystal can act as a receiver; that is, the
transducer can act as both a transmitter and a receiver. The transducer (or probe) contains
multiple piezoelectric crystals, which are interconnected electronically and vibrate in
response to the applied voltage (see figure 8.5). As the reverse piezoelectric effect shows,
any application of electricity to the crystal leads to its vibration.
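Using the half-wavelength rule just described, the disc thickness needed for a given resonant
frequency can be estimated. In the Python sketch below the speed of sound in the ceramic is
taken as 4000 m/s, an assumed round figure of the right order for PZT, used purely for
illustration.

    C_CERAMIC = 4000.0   # m/s, assumed speed of sound in the ceramic (illustrative)

    def crystal_thickness_m(frequency_hz):
        # Resonance condition: disc thickness = half the wavelength in the crystal.
        wavelength = C_CERAMIC / frequency_hz
        return wavelength / 2.0

    # A 5 MHz transducer needs a disc about 0.4 mm thick, consistent with
    # the 0.1-1 mm range quoted above.
    print(crystal_thickness_m(5e6))   # 0.0004 (m)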


Figure 8.5: The transducer (or probe) contains multiple piezoelectric crystals.

8.9.1.2 Types of Ultrasound Transducers


The essential element of each ultrasound transducer is a piezoelectric crystal, serving two
major functions:

(1) producing ultrasound pulses and


(2) receiving or detecting the returning echoes

Ultrasound systems today rely on the electro-mechanical properties of piezoelectric elements


to generate the sound waves and to measure the amplitude of the reflected waves. These
elements convert high voltage pulses into sound waves that travel through the relevant tissues
during transmission and convert small displacements into small voltage waveforms during
reception. These devices are quite expensive to manufacture because of the various layers that
must be attached to achieve the desired level of impedance matching between the device and
the human skin.
Medical ultrasonic transducers (probes) come in a variety of shapes that are generally
labeled according to their design or intended usage, each containing a specified number of
piezoelectric elements. That means the ultrasound transducers differ in construction
according to:

 piezoelectric crystal arrangement,


 aperture (footprint),
 operating frequency (which is directly related to the penetration depth)

There are three types of transducers that are most often used in critical ultrasound imaging:
Linear, Sector and Convex (standard or micro-convex), as shown in Figure 8.6.


[Diagram: outlines of the three probe shapes - linear, sector and convex.]

Figure 8.6: The three types of transducers most often used in critical ultrasound imaging.
 Linear Transducer
Linear transducers are used primarily for small parts requiring high resolution and typically
involve shallow depths. To achieve high resolution, higher frequencies are typically required.
The elements are arranged in a linear fashion, producing a rectangular image.
 piezoelectric crystal arrangement: linear
 footprint size: usually big (small for the hockey-stick transducers)
 operating frequency (bandwidth): 3-12 MHz (usually 5-7.5 MHz)
 ultrasound beam shape: rectangular
 use: ultrasound of the superficial structures, e.g. obstetrics ultrasound, breast or thyroid
ultrasound, vascular ultrasound
 Sector Transducer
To broaden the field of view, sector transducers are used. These transducers have a small
footprint and are good for cardio applications due to small rib spacing.
 piezoelectric crystal arrangement: phased-array (most commonly used)
 footprint size: small
 operating frequency (bandwidth): 1-5 MHz (usually 3.5-5 MHz)
 ultrasound beam shape: sector, almost triangular
 use: small acoustic windows, mainly echocardiography, gynecological ultrasound,
upper body ultrasound
 Convex Transducer
For abdominal viewing, curved transducers are typically used because of resolution and
penetration benefits. They allow for the maximum field of view and depth because of their


large aperture. Note that any type of transducer can be phase arrayed to produce a beam of
sound that can be steered and focused by the ultrasound controller.
 piezoelectric crystal arrangement: curvilinear, along the aperture
 footprint size: big (small for the micro-convex transducers)
 operating frequency (bandwidth): 1-5 MHz (usually 3.5-5 MHz)
 ultrasound beam shape: sector; the beam shape and size vary with distance from the transducer, which causes loss of lateral resolution at greater depths
 use: useful in all ultrasound types except echocardiography, typically abdominal,
pelvic and lung (micro-convex transducer) ultrasound
The medical ultrasound transducers (probes) also come in a variety of shapes according to intended use, as shown in Figure 8.7. The transducer may be passed over the surface of the body or inserted into a body opening such as the rectum or vagina. In other words, the transducer is the component of the ultrasound system that is placed in direct contact with the patient's body. Inside, the transducer contains one or more piezoelectric elements. It also focuses the beam of pulses to give it a specific size and shape at various depths within the body, and it scans the beam over the anatomical area that is being imaged.

Figure 8.7: Ultrasound transducer types: a 3.5 MHz convex probe (abdominal, general-purpose transducer), a 7.5 MHz linear probe (small-parts transducer: muscles, tendons, skin, thyroid, breast, scrotum), a 6.5 MHz micro-convex probe (pediatric, cardiac) and a 6.5 MHz trans-vaginal probe (intravaginal transducer: uterus, ovaries, pregnancy).
 Attention!
An ultrasound transducer is the most important and usually the most expensive element of the
ultrasound machine, so it should be used carefully, which means the following:
 do not throw, drop or knock the transducer,
 do not allow the transducer's cable to be damaged,
 wipe the gel from the transducer after each use,
 do not clean it with alcohol-based solutions.


8.9.2 Amplification
Amplification is used to increase the size of the electrical pulses coming from the transducer
after an echo is received. The amount of amplification is determined by the gain setting. The
principal control associated with the amplifier is the time gain compensation (TGC), which
allows the user to adjust the gain in relationship to the depth of echo sites within the body.
This function will be considered in much more detail in the next section.
8.9.3 Scan Generator
The scan generator controls the scanning of the ultrasound beam over the body section being
imaged. This is usually done by controlling the sequence in which the electrical pulses are
applied to the piezoelectric elements within the transducer. This is also considered in more
detail later.

8.9.4 Scan Converter


Scan conversion is the function that converts from the format of the scanning ultrasound beam
into a digital image matrix format for processing and display.
8.9.5 Image Processor
The digital image is processed to produce the desired characteristics for display. This includes
giving it specific contrast characteristics and reformatting the image if necessary.
8.9.6 Display
The digital ultrasound images are viewed on the equipment display (monitor) and usually
transferred to the physician display or work station.
One component of the ultrasound imaging system that is not shown is the digital storage
device that is used to store images for later viewing if that process is used.
8.9.7 Things to Consider
For any ultrasonic gauging application, the choice of gauge and transducer will depend on the
material to be measured, thickness range and accuracy requirements, geometry and
temperature, and any special conditions that may be present. Listed below, in order of
importance, are brief descriptions of some of the conditions that should be considered.
8.9.7.1 Thickness Range
Thickness ranges will also dictate the type of gauge and transducer to be selected. In general,
thin materials require high frequency transducers and thick or attenuating materials require
lower frequencies. Very thin material may not be within the range of a gauge utilizing contact
transducers; a delay line transducer may then be the answer. Similarly, gauges with delay line
and immersion transducers have limited maximum thickness capabilities primarily due to
potential interference from a multiple of the interface echo.
8.9.7.2 Geometry
A contact transducer is preferred for most ultrasonic measurements, unless sharp curvature or
small part size makes contact measurements impractical. As the surface curvature of the test


piece increases, the coupling efficiency from the transducer to the test piece is reduced. In
general, as the surface curvature increases, the size of the contact transducer should be
reduced. Extreme curvature or inaccessibility of the test surface requires a system with a delay
line or an immersion transducer.
8.9.7.3 Temperature
When heated above a certain temperature (about 350ºC for PZT), called its "Curie
Temperature", transducers lose their piezoelectric properties. Transducer probes should
obviously not be autoclaved (nor should they be immersed in water unless waterproofed).
Thin slices of naturally occurring quartz crystals also show the piezoelectric effect, and are
used in digital timers and computers.
8.9.7.4 Accuracy
It should be considered that many factors may affect accuracy: sound attenuation and
scattering, sound velocity variations, poor coupling, surface roughness, non-parallelism,
curvature, echo polarity, etc. Selection of the best possible combination of gauge and
transducer should take into account all these factors. With proper calibration, measurements
can usually be made to an accuracy of 0.001 inch or 0.01 mm.
8.10 Ultrasound Modalities
Ultrasound is diagnostically useful in medicine in two modalities, continuous energy and pulsed energy:
 Continuous sound energy uses a steady sound source, and has applications that include
fetal heart beat detectors and monitors. This Doppler ultrasound can also be used to
evaluate blood flow through different structures.
 Pulsed sound energy utilizes a quick blip of sound (like a hand clap), followed by a
relatively long pause, during which time an echo has a chance to bounce off the target
and return to the transducer. Through electronic processing of the returning sounds, a
two-dimensional image can be created that provides information about the tissues and
objects within the tissues.

8.10.1 Ultrasound Pulse Generator


The basic principles of ultrasound pulse production and transmission are illustrated in Figure
8.8. The pulse generator produces the electrical pulses that are applied to the transducer. For
conventional ultrasound imaging the pulses are produced at a rate of approximately 1,000
pulses per second. A typical ultrasound pulse consists of cycles of oscillating amplitudes (see
Figure 8.8), and contains a spectrum of frequencies (bandwidth) dominated by a centre
frequency.
Ultrasound relies on high-frequency sound to image the body and diagnose patients. Ultrasound consists of longitudinal waves that cause particles to oscillate back and forth, producing a series of compressions and rarefactions. The sound source (the piezoelectric transducer element) is a vibrating object in contact with the tissue, causing the tissue to vibrate. The vibrations are passed from the region of tissue next to the transducer to nearby tissues, and this process continues as the vibrations, or sound, pass from one region of tissue to another. The rate at which the tissue structures vibrate back and forth is the frequency of the sound; the rate at which the vibrations move through the tissue is the speed of sound. When the transducer is in contact with a patient (or some other medium) and a few hundred volts DC are suddenly applied to the disc, it instantly expands, thereby compressing the layer of the material in contact with it. Because of the elasticity of the material, the compressed layer expands and compresses an adjacent layer of material.

Figure 8.8: Schematic representation of ultrasound pulse generation, showing the pulse length (PL) and the pulse repetition frequency (PRF, the number of pulses per unit time).
In this way a layer, or wave, of compression travels with a velocity v through the material, followed by a corresponding wave of decompression, or rarefaction. In imaging, such short, regular pulses of ultrasound are used. These mechanical sound waves create alternating zones of increased and decreased pressure as they spread through the body's tissues.
8.10.1.1 Short Pulse
For practical use, most modern ultrasound systems are designed around the pulse-echo technique, which means that the transducer emits only a few cycles of pulses at a time into the human body. When the pulses encounter tissue interfaces, reflection and scattering occur and produce pulse echoes. By detecting these echoes, tissue positioning and identification, as well as diagnosis, can be made.
NOTE: This is the pulse rate (pulses per second) and not the frequency which is the number
of cycles or vibrations per second within each pulse. The principal control associated with the
pulse generator is the size of the electrical pulses that can be used to change the intensity and
energy of the ultrasound beam.
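To make the pulse-echo timing concrete, here is a minimal Python sketch (the timing values are illustrative assumptions, not taken from this book; the depth relation d = v·t/2 follows from the round trip of the pulse):

    # Pulse-echo timing (illustrative sketch).
    V_SOFT_TISSUE = 1540.0  # assumed average speed of sound in soft tissue (m/s)

    def echo_depth(round_trip_time_s, v=V_SOFT_TISSUE):
        """Depth of a reflector: the pulse travels down and back, so d = v*t/2."""
        return v * round_trip_time_s / 2.0

    def max_unambiguous_depth(prf_hz, v=V_SOFT_TISSUE):
        """Deepest reflector whose echo returns before the next pulse is emitted."""
        return v / (2.0 * prf_hz)

    print(echo_depth(130e-6) * 100)       # ~10 cm for a 130 microsecond round trip
    print(max_unambiguous_depth(1000.0))  # ~0.77 m at 1,000 pulses per second

At the rate of roughly 1,000 pulses per second quoted above, echoes from depths shallower than about 0.77 m return before the next pulse is sent, which is ample for imaging.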


8.10.2 Continuous Wave Mode


Frequency, period, wavelength and propagation speed are sufficient to describe continuous-
wave (CW) ultrasound. Cycles repeat indefinitely. Sonography uses pulsed ultrasound, i.e. a
few cycles of ultrasound separated in time with gaps of no signal.
If, instead of DC, an alternating current (AC) voltage is applied, the crystal face pulses
forwards and backwards like a piston, producing successive compressions and rarefactions
(Figure 8.9c). Each compression wave has moved forwards by a distance called the
wavelength (λ) by the time the next one is produced.
As shown in Figure 8.9, the space through which the ultrasound pulse moves is the beam.
In a diagnostic system, pulses are emitted at a rate of approximately 1,000 per second. The
pulse rate (pulses per second) should not be confused with the frequency, which is the rate of
vibration of the tissue within the pulse and is in the range of 2-20 MHz.

Figure 8.9: The production of an ultrasound pulse. (a) An electrical pulse applied to the piezoelectric element makes it vibrate, launching a pulse of compressions (C) and rarefactions (R) that travels along the beam with a certain velocity; successive pulses are separated by the pulse repetition period. (b) The pressure along the beam alternates between a maximum (+ΔP) and a minimum (−ΔP). (c) One cycle corresponds to one wavelength.


The frequency (f) with which compressions pass any given point is the same as the frequency at which the transducer vibrates and the frequency of the AC voltage applied to it. It is measured in megahertz (MHz). Using formula 8.1, it is possible to calculate the velocity, frequency or wavelength of a wave if the other two values are known. For comparison, Figure 8.9c shows the pressure waveform of a pulsed wave a few periods in duration. Pulse duration is the amount of time from the beginning to the end of a single pulse of ultrasound.
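As a concrete illustration of formula 8.1 (v = f·λ), here is a short Python sketch; the frequency values are simply examples:

    # Wave equation: wavelength = velocity / frequency (illustrative sketch).
    V_SOFT_TISSUE = 1540.0  # m/s, the average value assumed by scanners

    def wavelength_mm(frequency_mhz, v=V_SOFT_TISSUE):
        """Wavelength in millimetres for a given frequency in MHz."""
        return v / (frequency_mhz * 1e6) * 1e3

    for f in (2.0, 5.0, 10.0, 20.0):
        print(f, "MHz ->", round(wavelength_mm(f), 3), "mm")
    # 2 MHz -> 0.77 mm; 20 MHz -> 0.077 mm: the higher the frequency,
    # the shorter the wavelength.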
The sound in most diagnostic ultrasound systems is emitted in pulses rather than a
continuous stream of vibrations. At any instant, the vibrations are contained within a relatively
small volume of the material. It is this volume of vibrating material that is referred to as the
ultrasound pulse. As the vibrations are passed from one region of material to another, the
ultrasound pulse, but not the material, moves away from the source.
In soft tissue and fluid materials the direction of vibration is the same as the direction of
pulse movement away from the transducer. This is characterized as longitudinal vibration as
opposed to the transverse vibrations that occur in solid materials. As the longitudinal
vibrations pass through a region of tissue, alternating changes in pressure are produced.
During one half of the vibration cycle the tissue will be compressed with an increased
pressure. During the other half of the cycle there is a reduction in pressure and a condition of
rarefaction. Therefore, as an ultrasound pulse moves through tissue, each location is subjected
to alternating compression and rarefaction pressures.

8.11 Ultrasound Characteristics


Ultrasound pulses have several physical characteristics that should be considered by the user
in order to adjust the imaging procedure for specific diagnostic applications. The most
significant characteristics are illustrated here.

8.11.1 Frequency
Frequency (ƒ) is the number of wavelengths that pass per unit time. It is measured as cycles
(or wavelengths) per second and the unit is hertz (Hz). It is a specific feature of the crystal
used in the ultrasound transducer. It can be varied by the operator within set limits - the higher
the frequency, the better the resolution but the lower the penetration.
Ultrasound pulse frequency:
- The range is 2–20 MHz
- Determined by the transducer
- Affects absorption and penetration
- Affects image detail
The frequency of ultrasound pulses must be carefully selected to provide a proper balance
between image detail and depth of penetration. In general, high frequency pulses produce
higher quality images but cannot penetrate very far into the body.


The frequency of sound is determined by the source. In diagnostic ultrasound equipment, the
source of sound is the transducer. The major element within the transducer is a crystal
designed to vibrate with the desired frequency. A special property of the crystal material is
that it is piezoelectric. This means that the crystal will deform if electricity is applied to it.
Therefore, if an electrical pulse is applied to the crystal it will have essentially the same effect
as the striking of a piano string: the crystal will vibrate. If the transducer is activated by a
single electrical pulse, the transducer will vibrate, or "ring," for a short period of time. This
creates an ultrasound pulse as opposed to a continuous ultrasound wave. The ultrasound pulse
travels into the tissue in contact with the transducer and moves away from the transducer
surface. A given transducer is often designed to vibrate with only one frequency, called its
resonant frequency. Therefore, the only way to change ultrasound frequency is to change
transducers. This is a factor that must be considered when selecting a transducer for a specific
clinical procedure. Certain frequencies are more appropriate for certain types of examinations
than others. Some transducers are capable of producing different frequencies. For these the
ultrasound frequency is determined by the electrical pulses applied to the transducer.

8.11.2 Velocity
Propagation Velocity (v) is the speed that sound waves propagate through a medium and
depends on tissue density and compressibility. The relationship between these variables is
expressed by the Wave Equation (Eq. 8.1). In soft tissue propagation velocity is relatively
constant at 1540m/s and this is the value assumed by ultrasound machines for all human
tissue. Hence wavelength is inversely proportional to frequency.
Factors Related to Ultrasound Pulse Velocity:
- Determined by the material
- Affects the depth dimension in the image
- Average for soft tissue: 1540 m/s, for air: 330 m/s
The importance of the speed of ultrasound is that it is used to locate the depth of structures in the body. The speed with which sound travels through a medium is determined by the characteristics of the medium, not by the characteristics of the sound. Longitudinal, or compression, sound waves, in which the particles of the medium vibrate in the direction of wave propagation, can propagate in liquids such as tissue and in gases. Their velocity is given by:

v = √(stiffness / density)

where "stiffness" is a factor related to the elastic properties of the medium, or the bulk modulus. The velocities of sound through several materials of interest are given in Table 8.1.


Most ultrasound systems are set up to determine distances using an assumed velocity of 1540
m/sec. This means that displayed depths will not be completely accurate in materials that
produce other ultrasound velocities such as fat and fluid.
Table 8.1: Approximate Velocity of Sound in Various Materials
Material Velocity (m/sec)
Fat 1450
Water 1480
Soft tissue (average) 1540
Bone 4100

8.11.3 Wavelength
Wavelength (λ) is the distance sound travels during one vibration period, which is the distance between two areas of maximal compression (or rarefaction) (see Figure 8.10). The importance of wavelength is that the penetration of the ultrasound wave is proportional to the wavelength, and image resolution is no better than 1–2 wavelengths. It is typically measured between two easily identifiable points, such as two adjacent crests or troughs in a waveform.
Wavelength is inversely proportional to frequency. That means if two waves are traveling at
the same speed, the wave with a higher frequency will have a shorter wavelength. Likewise, if
one wave has a longer wavelength than another wave, it will also have a lower frequency if
both waves are traveling at the same speed.
Although wavelength is not a unique property of a given ultrasound pulse, it is of some
significance because it determines the size (length) of the ultrasound pulse. This has an effect
on image quality, as we will see later.

Figure 8.10: Dependence of pulse length on wavelength and frequency: wavelength ≈ 1/frequency, so a high-frequency pulse has a shorter wavelength (and a shorter pulse) than a low-frequency one.


The illustration Figure 8.11 shows both temporal and spatial (length) characteristics related to
the wavelength. A typical ultrasound pulse consists of several wavelengths or vibration cycles.
The number of cycles within a pulse is determined by the damping characteristics of the
transducer. Damping is what keeps the transducer element from continuing to vibrate and
produce a long pulse.
The period is the time required for one vibration cycle. It is the reciprocal of the frequency.
Increasing the frequency decreases the period. In other words, wavelength is simply the ratio
of velocity to frequency or the product of velocity and the period. This means that the
wavelength of ultrasound is determined by the characteristics of both the transducer
(frequency) and the material through which the sound is passing (velocity).
The amplitude of an ultrasound pulse is the range of pressure excursions, as shown in Figure 8.9. The pressure is related to the degree of tissue displacement caused by the vibration.
amplitude is related to the energy content, or "loudness," of the ultrasound pulse. The
amplitude of the pulse as it leaves the transducer is generally determined by how hard the
crystal is "struck" by the electrical pulse.
Figure 8.11: The temporal characteristics (period, pulse duration) and length characteristics (wavelength, pulse length) of an ultrasound pulse.
In ultrasound imaging the significance of wavelength is that short wavelengths are required to
produce short pulses for good anatomical detail (in the depth direction) and this requires
higher frequencies as illustrated below.

8.11.4 Amplitude
Amplitude is the height above the baseline and represents maximal compression. It is expressed in decibels (dB), a logarithmic scale (see Figure 8.12). Most systems have a control on the pulse generator that changes the size of the electrical pulse and hence the ultrasound pulse amplitude. We designate this as the intensity control, although different names are used by various equipment manufacturers.


In diagnostic applications, it is usually necessary to know only the relative amplitude of


ultrasound pulses. For example, it is necessary to know how much the amplitude, A, of a pulse
decreases as it passes through a given thickness of tissue. The relative amplitude of two
ultrasound pulses, or of one pulse after it has undergone an amplitude change, can be
expressed by means of a ratio as follows:
Relative amplitude (ratio) = A2/A1
There are advantages in expressing relative pulse amplitude in terms of the logarithm of the
amplitude ratio. When this is done the relative amplitude is specified in units of decibels (dB).

Figure 8.12: Ultrasound pulse amplitude, intensity and energy: two pulses of amplitudes A1 and A2.

The relative pulse amplitude, in decibels, is related to the actual amplitude ratio by

Relative amplitude (dB) = 20 log A2/A1

When the amplitude ratio is greater than 1 (comparing a large pulse to a smaller one), the
relative pulse amplitude has a positive decibel value; when the ratio is less than 1, the decibel
value is negative. In other words, if the amplitude of a pulse is increased by some means, it
will gain decibels, and if it is reduced, it will lose decibels.
Figure 8.13 compares decibel values to pulse amplitude ratios and percent
values. The first two pulses differ in amplitude by 1 dB. In comparing the second pulse to the
first, this corresponds to an amplitude ratio of 0.89, or a reduction of approximately 11%. If
the pulse is reduced in amplitude by another 11%, it will be 2 dB smaller than the original
pulse. If the pulse is once again reduced in amplitude by 11 % (of 79%), it will have an
amplitude ratio (with respect to the first pulse) of 0.71:1, or will be 3 dB smaller.
Perhaps the best way to establish a "feel" for the relationship between pulse amplitude
expressed in decibels and in percentage is to notice that amplitudes that differ by a factor of 2


differ by 6 dB. A reduction in amplitude of -6 dB divides the amplitude by a factor of 2, or


50%. The doubling of pulse amplitude increases it by +6 dB.

Figure 8.13: Pulse amplitudes expressed in decibels and percentages: reductions of 1, 2, 3, 6, 12, 18 and 24 dB correspond to amplitudes of approximately 89%, 79%, 71%, 50%, 25%, 12.5% and 6% of the original.
During its lifetime, an ultrasound pulse undergoes many reductions in amplitude as it passes
through tissue because of absorption. If the amount of each reduction is known in decibels, the
total reduction can be found by simply adding all of the decibel losses. This is much easier
than multiplying the various amplitude ratios.
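A small Python sketch of these decibel relations (the interface losses in the last lines are made-up numbers, used only to show that dB losses add):

    import math

    def relative_amplitude_db(a2, a1):
        """Relative amplitude in decibels: 20 * log10(A2/A1)."""
        return 20.0 * math.log10(a2 / a1)

    def amplitude_ratio(db):
        """Amplitude ratio corresponding to a decibel value."""
        return 10.0 ** (db / 20.0)

    print(relative_amplitude_db(0.5, 1.0))  # -6.02 dB: halving loses ~6 dB
    print(amplitude_ratio(-1.0))            # 0.891: a 1 dB loss is ~11 %

    # Successive losses simply add in decibels:
    losses_db = (-2.0, -3.0, -1.0)
    print(sum(losses_db), amplitude_ratio(sum(losses_db)))  # -6.0 dB -> 0.50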
8.12 Intensity and Power
Acoustic power is the amount of acoustic energy generated per unit time. Energy is measured in joules (J); power, the rate of energy transfer, is measured in watts (W), with 1 W = 1 J/s. The biological effects of ultrasound involve powers in the milliwatt range. Intensity is the rate at which power passes through a specified area: it is the amount of power per unit area and is expressed in units of watts per square centimeter.
Intensity is the rate at which ultrasound energy is applied to a specific tissue location within
the patient's body. It is the quantity that must be considered with respect to producing
biological effects and safety. The intensity of most diagnostic ultrasound beams at the
transducer surface is on the order of a few milliwatts per square centimeter.
Intensity is the power density, or concentration of power within an area, expressed as W/m² or mW/cm². Intensity varies spatially within the beam and is greatest in the centre. In a pulsed beam it varies temporally as well as spatially. The intensity is therefore related to the pressure amplitude of the individual pulses and to the pulse rate. Since the pulse rate is fixed in most systems, the intensity is determined by the pulse amplitude.
The relative intensity of two pulses (I1 and I2) can be expressed in the units of decibels by:
Relative Intensity = 10 log I2/I1


Note that when intensities are being considered, a factor of 10 appears in the equation rather
than a factor of 20, which is used for relative amplitudes. This is because intensity is
proportional to the square of the pressure amplitude, which introduces a factor of 2 in the
logarithmic relationship.
The intensity of an ultrasound beam is not constant with respect to time nor uniform with
respect to spatial area, as shown in the following figure. This must be taken into consideration
when describing intensity. It must be determined if it is the peak intensity or the average
intensity that is being considered.
8.12.1 Temporal Characteristics
The figure 8.14 shows two sequential pulses. Two important time intervals are the pulse
duration and the pulse repetition period.

Figure 8.14: The temporal characteristics (pulse duration, pulse repetition period, peak and average values) and spatial characteristics of ultrasound pulses that affect intensity values.

The ratio of the pulse duration to the pulse repetition period is the duty factor. The duty factor
is the fraction of time that an ultrasound pulse is actually being produced. If the ultrasound is
produced as a continuous wave (CW), the duty factor will have a value of 1. Intensity and
power are proportional to the duty factor. Duty factors are relatively small, less than 0.01, for
most pulsed imaging applications.
With respect to time there are three possible power (intensity) values. One is the peak
power, which is associated with the time of maximum pressure. Another is the average power
within a pulse. The lowest value is the average power over the pulse repetition period for an
extended time. This is related to the duty factor.
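A minimal sketch of these quantities in Python (the pulse duration and intensity below are assumed, illustrative values):

    # Duty factor and time-averaged intensity (illustrative values).

    def duty_factor(pulse_duration_s, prf_hz):
        """Fraction of time the transducer is actually emitting:
        pulse duration divided by the pulse repetition period (1/PRF)."""
        return pulse_duration_s * prf_hz

    def temporal_average_intensity(pulse_average_intensity, df):
        """Average intensity over the whole pulse repetition period."""
        return pulse_average_intensity * df

    df = duty_factor(pulse_duration_s=1e-6, prf_hz=1000.0)
    print(df)                                    # 0.001, well below 0.01
    print(temporal_average_intensity(10.0, df))  # 1/1000 of the pulse value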


8.12.2 Spatial Characteristics


The energy or intensity is generally not distributed uniformly over the area of an ultrasound pulse. It can be expressed either as the peak intensity, which is often found at the center of the pulse, or as the average intensity over a designated area.
8.12.3 Temporal/Spatial Combinations
There is some significance associated with each of the intensity expressions. However, they
are not all used to express the intensity with respect to potential biological effects.
Thermal effects are most closely related to the spatial-peak and temporal-average intensity
(ISPTA). This expresses the maximum intensity delivered to any tissue averaged over the
duration of the exposure. Thermal effects (increase in temperature) also depend on the
duration of the exposure to the ultrasound. Mechanical effects such as cavitation are more closely related to the spatial-peak, pulse-average intensity (ISPPA).
8.13 Interactions of Ultrasound with Tissue
As an ultrasound pulse passes through matter, such as human tissue, it interacts in several
different ways. Some of these interactions are necessary to form an ultrasound image, whereas
others absorb much of the ultrasound energy or produce artifacts and are generally undesirable
in diagnostic examinations. The ability to conduct and interpret the results of an ultrasound
examination depends on a thorough understanding of these ultrasound interactions.
Ultrasound interacts with a medium, such as human tissue, in several different ways, which can be described as attenuation, reflection, scattering, refraction and diffraction (all well-known optical phenomena). Together with absorption, they cause attenuation of an ultrasound beam on its way through the medium. The total attenuation in a medium is expressed in terms of the distance within the medium at which the intensity of the ultrasound is reduced to 50% of its initial level, called the half-value thickness.
Energy is lost as the wave overcomes the natural resistance of the particles in the medium
to displacement, i.e. the viscosity of the medium. Thus, absorption increases with the viscosity
of the medium and contributes to the attenuation of the ultrasound beam. Absorption increases
with the frequency of the ultrasound.
In soft tissue, attenuation by absorption is approximately 0.5 decibels (dB) per centimeter
of tissue and per megahertz. Attenuation limits the depth at which examination with
ultrasound of a certain frequency is possible; this distance is called the ‘penetration depth’.
In this connection, it should be noted that the reflected ultrasound echoes also have to pass
back out through the same tissue to be detected. Energy loss suffered by distant reflected
echoes must be compensated for in the processing of the signal by the ultrasound unit using
echo gain techniques (depth gain compensation (DGC) or time gain compensation (TGC))
to construct an image with homogeneous density over the varying depth of penetration.
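To illustrate the half-value thickness mentioned above, here is a short Python sketch (the frequencies are examples; the 0.5 dB/cm/MHz rate is the soft-tissue absorption figure quoted above):

    import math

    # A 50% intensity drop corresponds to 10*log10(2) ~ 3.01 dB, so with an
    # attenuation rate of alpha (dB/cm/MHz) at frequency f (MHz):
    #     half-value thickness = 3.01 / (alpha * f)   [cm]

    def half_value_thickness_cm(alpha_db_cm_mhz, f_mhz):
        return 10.0 * math.log10(2.0) / (alpha_db_cm_mhz * f_mhz)

    for f in (2.0, 5.0, 10.0):
        print(f, "MHz:", round(half_value_thickness_cm(0.5, f), 2), "cm")
    # Higher frequency -> smaller half-value thickness -> shallower
    # penetration depth, which is why TGC and frequency choice matter.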


8.13.1 Attenuation
Attenuation generally indicates that the ultrasound wave constantly loses energy (decreasing
intensity) when transmitted through the medium. It is the result of absorption of ultrasound
energy by conversion to heat, as well as reflection, refraction and scattering that occurs
between the boundaries of tissue with different densities. The rate at which an ultrasound
wave is absorbed generally depends on two factors:
(1) the material through which it is passing, and
(2) the frequency of the ultrasound.
This means that the attenuation increases (and hence the penetration of the beam is reduced) with:
 Increased distance from the transducer
 Less homogenous medium to traverse due to increased acoustic impedance mismatch
 Higher frequency (shorter wavelength) transducers
Air forms a virtually impenetrable barrier to ultrasound, while fluid offers the least
resistance.
The attenuation (absorption) rate is described in terms of the attenuation coefficient of the tissue. It relates attenuation to distance, so it is measured in units of decibels per centimeter, and it depends on the tissues traversed and on the frequency of the ultrasound wave. It is therefore necessary to specify the frequency when an attenuation rate is given, because attenuation in tissue increases with frequency. Through a thickness of material, x, the attenuation is given by:

Attenuation (dB) = α · f · x

where α is the attenuation coefficient (in decibels per centimeter at 1 MHz) and f is the ultrasound frequency, in megahertz.
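A short Python sketch of this relation (the coefficient is the approximate soft-tissue value from Table 8.2; the frequency and depth are my own illustrative choices). Note the factor of two for the round trip, which is exactly the loss that TGC must compensate:

    # Attenuation (dB) = alpha * f * x  (illustrative sketch).

    def attenuation_db(alpha_db_cm_mhz, f_mhz, x_cm):
        return alpha_db_cm_mhz * f_mhz * x_cm

    def remaining_intensity_fraction(db_loss):
        return 10.0 ** (-db_loss / 10.0)

    # Average soft tissue (~0.9 dB/cm/MHz), 5 MHz, reflector at 4 cm:
    loss = attenuation_db(0.9, 5.0, 2 * 4.0)  # down and back = 8 cm of tissue
    print(loss, "dB")                          # 36 dB round-trip loss
    print(remaining_intensity_fraction(loss))  # ~2.5e-4 of the intensity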
From the attenuation coefficient values in Table 8.2 it is clear that there is considerable variation in the rate of attenuation from material to material; water produces far less attenuation than all the other materials mentioned. This means that water is a very good conductor of ultrasound, so attenuation is low in fluid-filled structures, as is the case in cysts and the bladder. Most of the soft tissues in the body have attenuation coefficient values of about 1 dB per cm per MHz, with the exception of fat and muscle (see Table 8.2). Muscle has a range of values that depend on the direction of the ultrasound with respect to the muscle fibers. Lung has a much higher attenuation than either air or soft tissue; this is because the small pockets of air in the alveoli are very effective in scattering ultrasound energy, and because of this normal lung structure is very difficult to penetrate with ultrasound. Compared with the soft tissues of the body, the attenuation rate is also high in bone. Higher frequency waves are subject to greater attenuation than lower frequency ones. To compensate for attenuation, returning signals can be amplified by the ultrasound system, a function known as gain.


Table 8.2: Approximate values of the attenuation coefficient for various materials of interest.
Body tissue              Attenuation coefficient (dB/cm at 1 MHz)
Water                    0.002
Blood                    0.18
Fat                      0.66
Liver                    0.5–0.94
Kidney                   1.0
Soft tissue (average)    0.9
Muscle                   1.3–3.3
Air                      12.0
Bone                     20.0
Lung                     40.0

8.13.2 Refraction
Refraction occurs where an ultrasound beam crosses an interface between two tissues at an oblique angle; the refracted wave obeys Snell's law. The angle of refraction depends on two things: the angle at which the sound wave strikes the boundary between the two tissues, and the difference in their propagation velocities. In Figure 8.15, for example, if the propagation velocity of ultrasound is higher in the first medium (v1 > v2), the beam that enters the second medium is refracted at a less oblique (steeper) angle, towards the normal (A). If the velocity of ultrasound is higher in the second medium (v1 < v2), refraction occurs away from the originating beam (B). As the beam emerges from medium 2 and reenters medium 1, it resumes
its original direction of motion. This behavior of ultrasound transmitted obliquely across an
interface is termed refraction. The presence of medium 2 simply displaces the ultrasound
beam laterally for a distance that depends upon the difference in ultrasound velocity and
density in the two media and upon the thickness of medium 2. Suppose a small structure
below medium 2 is visualized by reflected ultrasound. The position of the structure would
appear to the viewer as an extension of the original direction of the ultrasound through
medium 1. In this manner the sound is not reflected directly back to the transducer; refraction adds spatial distortion, and the image being depicted may not be clear, or may be altered, "confusing" the ultrasound system, since it assumes that sound travels in a straight line.
These phenomena can allow for improved image quality by the use of acoustic lenses that
can focus the ultrasound beam and improve resolution.
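A small Python sketch of Snell's law for sound (the tissue velocities are the approximate values from Table 8.1; the incidence angle is an arbitrary example):

    import math

    def refraction_angle_deg(theta1_deg, v1, v2):
        """Snell's law for sound: sin(theta2)/sin(theta1) = v2/v1."""
        s = math.sin(math.radians(theta1_deg)) * v2 / v1
        if abs(s) > 1.0:
            raise ValueError("beyond the critical angle: total reflection")
        return math.degrees(math.asin(s))

    # From soft tissue (1540 m/s) into fat (1450 m/s): bends toward the normal.
    print(refraction_angle_deg(30.0, 1540.0, 1450.0))  # ~28.1 degrees
    # From fat into soft tissue: bends away from the normal.
    print(refraction_angle_deg(30.0, 1450.0, 1540.0))  # ~32.1 degrees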


Figure 8.15: Lateral displacement of an ultrasound beam as it traverses a slab interposed in an otherwise homogeneous medium. The angle of incidence (θi) equals the angle of reflection (θr). Where the sound velocities v1 and v2 differ (v1 > v2 or v1 < v2), the transmitted beam is refracted toward or away from the normal before resuming its original direction of motion.

 Large surface: refraction only happens at a surface that is large compared with the wavelength.
 Velocity mismatch: the acoustic media on the two sides of the surface must have different sound velocities.
 Dependence on angle: the refracted wave obeys Snell's law; the laws of reflection and refraction hold.

8.13.3 Reflection
Ultrasound imaging is based on the "pulse-echo" principle, in which a pulse emitted from a transducer is directed into tissue. When a sound wave is incident on an interface between two tissues, part is reflected from the boundary and part is transmitted (Figure 8.16). According to the law of reflection, the angle of reflection of a reflected wave is equal to its angle of incidence.
Medical ultrasound imaging relies utterly on the fact that biological tissues scatter or
reflect incident sound. Scattering refers to the interaction between sound waves and particles
that are much smaller than the sound's wavelength λ, while reflection refers to such
interaction with particles or objects larger than λ.


Figure 8.16: Reflection of an ultrasound beam can be categorized as either specular or diffuse reflection; in each case part of the beam is reflected and part is transmitted.

Reflection can be categorized as either specular or diffuse. Specular reflectors are large,
smooth surfaces, such as bone (see figure 8.17), where the sound wave is reflected back in a
singular direction. The large smooth surface of the bone causes a uniform reflection because
of the significant difference in the acoustic impedance between it and the adjoining soft tissue.
The greater the acoustic impedance between the two tissue surfaces, the greater the reflection
and the brighter the echo will appear on ultrasound.

Figure 8.17: The tibia (yellow arrows) is a good example of a specular reflector.


Conversely, soft tissue is classified as a diffuse reflector, where adjoining cells create an
uneven surface causing the reflections to return in various directions in relation to the
transmitted beam. This means that the incident sound is spread out over a range of angles. As
shown in figure 8.18, the different acoustic impedances of the structures located within the
muscle result in the various shades of grey seen on the B-Mode image. However, because of
the numerous surfaces, sound is able to get back to the transducer in a relatively uniform
manner.
The difference is that, in the case of reflection, the reflected beam of sound is directed at an angle equal and opposite to the angle of incidence. By comparison, scattering randomises the direction of the sound that emerges from the scattering process.

Figure 8.18: The pectoralis major muscle (PM), located between the white arrows, is an example of diffuse reflection.

8.13.4 Scattering
The scattering or reflections of acoustic waves arise from inhomogeneities in the medium's
density and/or compressibility. Sound is primarily scattered or reflected by a discontinuity in
the medium's mechanical properties, to a degree proportional to the discontinuity. (By
contrast, continuous changes in a medium's material properties cause the direction of
propagation to change gradually.) The elasticity and density of a material are related to its
sound speed, and thus sound is scattered or reflected most strongly by significant
discontinuities in the density and/or sound speed of the medium.
Rayleigh scattering occurs at interfaces involving structures of small dimensions, as shown in Figures 8.19 and 8.20. This is common with red blood cells (RBCs): the average diameter of an RBC is 7 μm, whereas an ultrasound wavelength may be 300 μm (5 MHz). When the sound wavelength is much greater than the structure it encounters, the sound is scattered with roughly uniform amplitude in all directions, with little or no reflection returning to the transducer.

Figure 8.19: Rayleigh scattering occurs at interfaces involving structures of small dimensions.

Figure 8.20: An image of the left saphenous vein (SV), common femoral vein (CFV), and superficial femoral (SFA) and profunda femoris (PFA) arteries; Rayleigh scattering is present within each of the blood vessels.

Scattering is dependent on four different factors:


- the dimension of the scatterer,
- the number of scatterers present,
- the extent to which the scatterer differs from surrounding material, and
- the ultrasound frequency.
In most diagnostic applications of ultrasound, use is made of ultrasound waves reflected from interfaces between different tissues in the patient. The reflected echoes return to the transducer and form the ultrasound image. The amount reflected depends on the difference in acoustic impedance of the two tissues.
8.13.5 Absorption
Absorption is the main form of attenuation. Absorption happens as sound travels through soft tissue: the particles that transmit the waves vibrate, friction arises, sound energy is lost and heat is produced. Sound intensity in soft tissue decreases exponentially with depth (see Figure 8.21).

Figure 8.21: The reduction of pulse amplitude (from A1 to A2) by absorption of its energy; intensity (energy) is proportional to the square of the amplitude.

8.14 Acoustic Impedance


Acoustic impedance (Z) is a measure of the resistance of a medium to the passage of ultrasound; it depends on the density of the medium and on the velocity of ultrasound in the medium. In other words, acoustic impedance is the opposition of a tissue to the passage of ultrasound waves and depends on:
 Density of the medium (ρ)
 Propagation velocity of ultrasound through the medium (v)
Mathematically, the acoustic impedance (Z) of a tissue (or any material) is defined as the product of the tissue density (ρ) and the acoustic velocity in that tissue (v), which can be written as:

Z = ρ (kg/m³) × v (m/s)
Hence, acoustic impedance is measured in kg m⁻² s⁻¹. The acoustic impedance is important in
1. The determination of acoustic transmission and reflection at the boundary of two
materials having different acoustic impedances.
2. The design of ultrasonic transducers.
3. Assessing absorption of sound in a medium.


The ratio of the intensity of the reflected wave (Ir) to that of the incident wave (Ii) at the boundary is called the reflection coefficient (R). For an ultrasound wave incident perpendicularly upon an interface, the fraction R of the incident energy that is reflected is

R = [(Z2 − Z1) / (Z2 + Z1)]²

where Z1 and Z2 are the acoustic impedances of the first and second medium, respectively. The fraction of the incident energy that is transmitted across the interface is described by the transmission coefficient T:

T = 4 Z1 Z2 / (Z1 + Z2)²

Obviously, T + R = 1.
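A short Python sketch of these coefficients (the density and velocity figures are approximate textbook values that I have assumed for illustration):

    # Reflection and transmission at a perpendicular interface.

    def acoustic_impedance(density_kg_m3, velocity_m_s):
        """Z = rho * v, in kg m^-2 s^-1."""
        return density_kg_m3 * velocity_m_s

    def reflection_coefficient(z1, z2):
        """Fraction of incident energy reflected: ((Z2-Z1)/(Z2+Z1))**2."""
        return ((z2 - z1) / (z2 + z1)) ** 2

    z_fat = acoustic_impedance(950.0, 1450.0)      # ~1.38e6
    z_muscle = acoustic_impedance(1070.0, 1585.0)  # ~1.70e6
    z_air = acoustic_impedance(1.2, 330.0)         # ~4.0e2

    for name, z1, z2 in (("fat/muscle", z_fat, z_muscle),
                         ("muscle/air", z_muscle, z_air)):
        r = reflection_coefficient(z1, z2)
        print(name, "R =", round(r, 4), "T =", round(1.0 - r, 4))
    # fat/muscle: R ~ 0.01 (about 1% reflected); muscle/air: R ~ 0.999,
    # which is why coupling gel is needed to exclude air.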
The body consists of a range of materials, such as the air in the lungs and intestinal gas, water, blood, muscle, fat and bone. Each component of the body has a characteristic impedance, which depends on the nature of the material. Gases have a very low density, and therefore a very low acoustic impedance, as shown in Table 8.3.
A large difference in acoustic impedances of the materials on each side of the boundary is
referred to as acoustic impedance mismatch. In substances with a greater the impedance
mismatch there is a greater amount of energy that will be reflected at the interface or boundary
between one medium and another and a less amount of transmitted energy.
Table 8.3: Characteristic acoustic impedance of air and water.
Substance    Characteristic acoustic impedance
Air          429 kg m⁻² s⁻¹
Water        1.43 × 10⁶ kg m⁻² s⁻¹
The contrast in an ultrasound image is generated by acoustic reflections resulting from changes in acoustic impedance (the speed of sound and density).
The brightness of a structure in an ultrasound image depends on the strength of the
reflection, or echo. This in turn depends on how much the two materials differ in terms of
acoustic impedance. The amplitude ratio of the reflected to the incident pulse is related to the
tissue impedance values by

Reflection loss (dB) = 20 log |(Z2 − Z1)/(Z2 + Z1)|


The greater R, the greater the degree of reflection. For a soft tissue interface such as liver/kidney, R is about 0.01; that is, only 1% of the sound is reflected and about 99% of the energy is transmitted across the interface. At a muscle/bone interface about 40% is reflected, and at a soft tissue/air interface about 99% is reflected.


Even though the reflected energy is small, it is often sufficient to reveal the liver border.
Because of the high value of the coefficient of ultrasound reflection (R) at an air–tissue
interface, water paths and various creams and gels are used during ultrasound examinations to
remove air pockets (i.e., to obtain good acoustic coupling) between the ultrasound transducer
and the patient’s skin. With adequate acoustic coupling, the ultrasound waves will enter the
patient with little reflection at the skin surface.
Similarly, strong reflection of ultrasound occurs at the boundary between the chest wall and
the lungs and at the millions of air–tissue interfaces within the lungs. Because of the large
impedance mismatch at these interfaces, efforts to use ultrasound as a diagnostic tool for the
lungs have been unrewarding. The impedance mismatch is also high between soft tissues and
bone, and the use of ultrasound to identify tissue characteristics in regions behind bone has
had limited success (see figure 8.22).
Figure 8.22: Reflection at a boundary between tissues: the ultrasonic probe sends an ultrasound beam into the human body and receives the reflected signal.

This is the basis of ultrasound imaging: different organs in the body have different densities and acoustic impedances, and this creates different reflectors. In some cases the acoustic impedance mismatch can be so great that nearly all of the sound energy is reflected; this happens when sound comes in contact with bone or air. This is the reason why ultrasound is not used as a primary imaging modality for bone, the digestive tract or the lungs.
Basic ultrasound imaging uses only the amplitude information in the reflected signal. One pulse is emitted; the reflected signal, however, is sampled more or less continuously (actually multiple times). As the velocity of sound in tissue is fairly constant, the time between the emission of a pulse and the reception of a reflected signal depends on the distance, i.e. on the depth of the reflecting structure. The reflected pulses are thus sampled at multiple time intervals (multiple range gating), corresponding to multiple depths, and displayed in the image as depth. Different structures reflect different amounts of the emitted energy, and thus the reflected signals from different depths have different amplitudes. At most soft tissue interfaces, only a small fraction of the pulse is reflected.
Therefore, the reflection process produces relatively weak echoes. At interfaces between soft
tissue and materials such as bone, stones, and gas, strong reflections are produced. The
reduction in pulse amplitude during reflection at several different interfaces is given in the
table 8.4.
Table 8.4: Pulse Amplitude loss produced by a reflection.
Interface Amplitude Loss (dB)
Ideal reflector 0.0
Tissue-air -0.01
Bone-soft tissue -3.8
Fat-Muscle -20.0
Tissue-water -26.0
Muscle-blood -30.0

The amplitude of a pulse is attenuated both by absorption and reflection losses. Because of
this, an echo returning to the transducer is much smaller than the original pulse produced by
the transducer.
The discussion of ultrasound reflection above assumes that the ultrasound beam strikes the
reflecting interface at a right angle. In the body, ultrasound impinges upon interfaces at all
angles. For any angle of incidence, the angle at which the reflected ultrasound energy leaves
the interface equals the angle of incidence of the ultrasound beam; that is,
Angle of incidence = Angle of reflection
In a typical medical examination that uses reflected ultrasound and a transducer that both
transmits and detects ultrasound, very little reflected energy will be detected if the ultrasound
strikes the interface at an angle more than about 3 degrees from perpendicular. A smooth
reflecting interface must be essentially perpendicular to the ultrasound beam to permit
visualization of the interface.
8.15 Ultrasound Contrast Agents
Ultrasound contrast agents rely on the different ways in which sound waves are reflected from
interfaces between substances. This may be the surface of a small air bubble or a more
complex structure. Commercially available contrast media are gas-filled microbubbles that are
administered intravenously to the systemic circulation. Microbubbles have a high degree of
echogenicity, which is the ability of an object to reflect the ultrasound waves.
The echogenicity difference between the gas in the microbubbles and the soft tissue
surroundings of the body is immense. Thus, ultrasonic imaging using microbubble contrast agents enhances the ultrasound backscatter, or reflection of the ultrasound waves (see Figure
8.23), to produce a unique sonogram with increased contrast due to the high echogenicity
difference. Contrast-enhanced ultrasound can be used to image blood perfusion in organs,


measure blood flow rate in the heart and other organs, and has other applications as well. (In
physiology, perfusion is the process of a body delivering blood to a capillary bed in its
biological tissue. The word is derived from the French verb "perfuser" meaning to "pour over
or through.")

Figure 8.23: Contrast agent image.


8.16 Spatial Resolution
Spatial resolution is defined as the minimum distance between two objects that are still
distinguishable. The lateral and the axial resolution must be differentiated in ultrasound
images.
8.16.1 Lateral Resolution
Lateral resolution is the minimum separation between structures that the ultrasound beam can distinguish in a plane perpendicular to the ultrasound beam (Figure 8.24).

Figure 8.24: Lateral resolution is the minimum separation between structures that the ultrasound beam can distinguish in a plane perpendicular to the beam; structures separated by less than the beam width would be seen as one structure, while those within the narrower focal zone can be seen as two.
The lateral resolution of an ultrasound beam varies with the beam width (the diameter of the ultrasound beam) and with the area of the focal zone. It also varies in the axial direction, being best in the focal zone. As many array transducers can be focused in only one plane, because the crystals are arranged in a single line, lateral resolution is particularly poor perpendicular to that plane.
8.16.2 Axial Resolution
The axial resolution is the minimum separation between structures that the ultrasound beam can distinguish parallel to the beam path (Figure 8.25); it is sometimes also called depth resolution. Axial resolution is most directly affected by the frequency of the transducer: it depends on the pulse length and improves as the pulse shortens. Wide-band transducers (transducers with a high transmission bandwidth, e.g. 3–7 MHz) are suitable for emitting short pulses down to nearly one wavelength.

Figure 8.25: Axial resolution is the minimum separation between structures that the ultrasound beam can distinguish parallel to the beam path.
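A rough numerical illustration in Python: a common approximation takes the axial resolution as half the spatial pulse length; the two-cycles-per-pulse figure below is my own assumption, consistent with the short pulses described above:

    # Axial resolution ~ spatial pulse length / 2 (common approximation).

    V = 1540.0  # m/s, assumed soft-tissue speed of sound

    def axial_resolution_mm(f_mhz, cycles_per_pulse=2):
        wavelength_mm = V / (f_mhz * 1e6) * 1e3
        spatial_pulse_length_mm = cycles_per_pulse * wavelength_mm
        return spatial_pulse_length_mm / 2.0

    for f in (3.5, 7.5, 12.0):
        print(f, "MHz -> ~", round(axial_resolution_mm(f), 2), "mm")
    # 3.5 MHz -> ~0.44 mm; 12 MHz -> ~0.13 mm: higher frequency, finer
    # axial detail (but poorer penetration).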
Modern transducers use multiple small elements to generate the ultrasound wave. If a single small-element transducer is used, the waves radiate from it in a circular fashion, as do ripples in a pool. If multiple small elements fire simultaneously, however, the individual curved wave fronts combine to form a linear wave front moving perpendicularly away from the transducer face. This system, in which multiple small elements are fired individually with controlled timing, is termed a phased array.
8.17 Beam Forming and Transducers
An important characteristic of an ultrasound pulse is its diameter, which is also the width of
the ultrasound beam. The diameter of a pulse changes as it moves along the beam path. The
effect of pulse size on image detail will be considered in the next section. At this point we will


observe the change in pulse diameter as the pulse moves along the beam and show how it can be controlled.
The diameter of the pulse is determined by the characteristics of the transducer. At the
transducer surface, the diameter of the pulse is the same as the diameter of the vibrating
crystal. As the pulse moves through the body, the diameter generally changes. This is
determined by the focusing characteristics of the transducer.
8.17.1 Ultrasound Field
An important distinction is made between the near-field (called the Fresnel zone) and far
field (called the Fraunhofer zone) as shown in figure 8.26. the near-field is between the
transducer and the focus, since the beam Initially, is of comparable diameter to the transducer
as the series of ultrasound waves that make up the beam travel parallel to each other.

Figure 8.26: Beam width and pulse diameter characteristics of unfocused and focused transducers, showing the near field (Fresnel zone), the far field (Fraunhofer zone) and, for the focused transducer, the focal zone and focal depth.

The far field is the divergent region beyond the focus: at some point distal to the transducer the beam begins to diverge, which reduces the ability to distinguish two objects close together (resolution). The border of the beam is not sharp, as the energy decreases away from its axis.
The focal zone is the area of the ultrasound beam that has the smallest beam diameter, and it is where the user will obtain the best side-to-side (lateral) resolution.
The best detail will be obtained for structures within the focal zone. The distance between the transducer and the focal zone is the focal depth.
The focus zone is the narrowest section of the beam, defined as the section with a diameter
no more than twice the transverse diameter of the beam at the actual focus. If attenuation is
ignored, the focus is also the area of highest intensity. The length of the near field, the position


of the focus and the divergence of the far field depend on the frequency and the diameter (or
aperture) of the active surface of the transducer. In the case of a plane circular transducer of
radius R, the near field length (L) is given by the expression:

L = R²/λ

The divergence angle (x) of the ultrasound beam in the far field is given by the expression:

sin x = 0.61 λ/R
The diameter of the beam in the near field corresponds roughly to the radius of the transducer.
A small aperture and a large wavelength (low frequency) lead to a short near field and greater
divergence of the far field, while a larger aperture or higher frequency gives a longer near field
but less divergence. The focal distance L, as well as the diameter of the beam at the focal point, can be modified by additional focusing.
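Putting numbers into these expressions (the element radius and frequency are illustrative assumptions):

    import math

    # Near-field length L = R^2 / wavelength and far-field divergence
    # sin(x) = 0.61 * wavelength / R for a plane circular transducer.

    V = 1540.0  # m/s, assumed soft-tissue speed of sound

    def near_field_length_mm(radius_mm, f_mhz):
        wavelength_mm = V / (f_mhz * 1e6) * 1e3
        return radius_mm ** 2 / wavelength_mm

    def divergence_angle_deg(radius_mm, f_mhz):
        wavelength_mm = V / (f_mhz * 1e6) * 1e3
        return math.degrees(math.asin(0.61 * wavelength_mm / radius_mm))

    # A 10 mm radius element driven at 3.5 MHz:
    print(round(near_field_length_mm(10.0, 3.5)), "mm near field")      # ~227 mm
    print(round(divergence_angle_deg(10.0, 3.5), 1), "deg divergence")  # ~1.5

A smaller aperture or lower frequency shortens the near field and widens the divergence, as the text states.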
The additional focusing is accomplished either with a concave transducer or an acoustic lens, which is called static focus, or by electronic means: delaying parts of the signal for the different crystals in an array system enables variable focusing of the composite ultrasound beam, adapted to different depths during reception, which is called dynamic focusing.
8.18 Transducer Focusing
Transducers can be designed to produce either a focused or non-focused beam. A focused
beam is desirable for most imaging applications because it produces pulses with a small
diameter which in turn gives better visibility of detail in the image. Therefore, we will explain
with details focused transducer.
It is possible to focus the ultrasound beam to cause convergence and narrowing of the beam, which improves (lateral) resolution. Focusing can be achieved either mechanically or by electronic means in a phased-array element (see Figure 8.27).

Figure 8.27: Design of focused transducers: (a) unfocused, (b) mechanically focused, (c) electric focusing (phased array).

If the transducer face is concave or a concave acoustic lens is placed on the surface of the
transducer, the beam can be narrowed at a predetermined distance from the transducer. The
point at which the beam is at its narrowest is the focal point or focal zone and is the point of
greatest intensity and best lateral resolution.
8.18.1 Dynamic Receive Focus
The focusing of an array transducer can also be changed electronically when it is in the echo
receiving mode. This is achieved by processing the electrical pulses from the individual
transducer elements through different time delays before they are combined to form a
composite electrical pulse. The effect of this is to give the transducer a high sensitivity for
echoes coming from a specific depth along the central axis of the beam. This produces a
focusing effect for the returning echoes.
An important factor is that the receiving focal depth can be changed rapidly. Since echoes
at different depths do not arrive at the transducer at the same time, the focusing can be swept
down through the depth range to pick up the echoes as they occur. This is the major distinction
between dynamic or sweep focusing during the receive mode and adjustable transmit focus.
Any one transmitted pulse can only be focused to one specific depth. However, during the
receive mode, the focus can be swept through a range of depths to pick up the multiple echoes
produced by one transmit pulse.
8.18.2 Ultrasonic Phased Arrays
Ultrasonic phased arrays are widely used in medicine, and most present imaging devices use the 1D or 2D phased array technique. (To steer the ultrasonic beam through a volume rather than a plane, two-dimensional (2D) arrays must be used.) The conventional phased array transducer is a multi-element piezoelectric device, the elements of which are electrically isolated from each other and arranged in a single row. The elements transmit a sound pulse into the body and receive the echoes that return from scattering structures, each element being driven, and its echoes read out, at a controlled delay (phase) with respect to the transmit initiation time; hence the term phased array. The time delays applied across the array steer and focus each of a series of acoustic pulses through the plane or volume to be imaged in the body. This process of steering and focusing the acoustic pulses is known as beamforming, and is shown schematically in Figure 8.28.
The form and especially the diameter of the beam strongly influence the lateral resolution
and thus the quality of the ultrasound image. The focus zone is the zone of best resolution and
should always be positioned to coincide with the region of interest. This is another reason for
using different transducers to examine different regions of the body; for example, transducers
with higher frequencies and mechanical focusing should be used for short distances (small-


part scanner). Most modern transducers have electronic focusing to allow adaptation of the aperture to specific requirements (dynamic focusing, Figure 8.28).

Figure 8.28: A conceptual diagram of phased array beam forming. (Top)
Appropriately delayed pulses are transmitted from an array of
piezoelectric elements to achieve steering and focusing at the point of
interest. (For simplicity, only focusing delays are shown here.) (Bottom)
The echoes returning are likewise delayed before they are summed
together to form a strong echo signal from the region of interest.
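To make the delay arithmetic behind Figure 8.28 concrete, the sketch below computes transmit-focusing delays for a linear array; the element count, pitch and focal depth are illustrative assumptions, not values from any particular scanner. The same geometric delays, applied to the received echoes before summation, are what implement the dynamic receive focus described above.

```python
import numpy as np

# Transmit-focusing delays for a linear phased array (assumed parameters).
# Outer elements have the longest path to an on-axis focal point, so they
# fire first and the centre element fires last, producing a converging
# wave front as in Figure 8.28 (top).
c = 1540.0             # speed of sound in soft tissue (m/s)
pitch = 0.3e-3         # element spacing (m), assumed
n_elements = 64        # assumed
focus_depth = 40e-3    # focal depth (m), assumed

x = (np.arange(n_elements) - (n_elements - 1) / 2) * pitch  # element positions
path = np.sqrt(focus_depth**2 + x**2)   # element-to-focus path lengths (m)
delays = (path.max() - path) / c        # firing delays (s); edge elements = 0

print(f"centre element fires {delays.max() * 1e9:.0f} ns after the edge elements")
```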

8.18.3 Unfocused Transducers


An unfocused transducer produces a beam with two distinct regions, as shown in the previous
figure. One is the so-called near field or Fresnel zone and the other is the far field or
Fraunhofer zone. In the near field, the ultrasound pulse maintains a relatively constant
diameter that can be used for imaging and that is determined by the diameter of the transducer.
The length of the near field is related to the diameter, D, of the transducer and the wavelength, λ, of the ultrasound by:
Near field length = D²/(4λ).

Recall that the wavelength is inversely related to frequency. Therefore, for a given transducer
size, the length of the near field is proportional to frequency. Another characteristic of the near
field is that the intensity along the beam axis is not constant; it oscillates between maximum
and zero several times between the transducer surface and the boundary between the near and
far field. This is because of the interference patterns created by the sound waves from the
transducer surface. An intensity of zero at a point along the axis simply means that the sound


vibrations are concentrated around the periphery of the beam. A picture of the ultrasound
pulse in that region would look more like concentric rings or "donuts" than the disk that has
been shown in various illustrations.
The major characteristic of the far field is that the beam diverges. This causes the ultrasound
pulses to be larger in diameter but to have less intensity along the central axis. The
approximate angle of divergence is related to the diameter of the transducer, D, and the wavelength, λ, by
Divergence angle (degrees) = 70λ/D.
Because of the inverse relationship between wavelength and frequency, divergence is
decreased by increasing frequency. The major advantage of using the higher ultrasound
frequencies (shorter wavelengths) is that the beams are less divergent and generally produce
less blur and better detail.
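As a quick numerical check of these two rules, the sketch below evaluates the near field length and divergence angle for an assumed 13 mm diameter transducer at two frequencies; the values are illustrative:

```python
# Near field length and far-field divergence from the rules above,
# for an assumed 13 mm diameter transducer.
c = 1540.0                         # speed of sound in soft tissue (m/s)
d = 13e-3                          # transducer diameter (m), assumed

for f_mhz in (3.5, 7.5):
    wavelength = c / (f_mhz * 1e6)            # metres
    near_field_cm = 100 * d**2 / (4 * wavelength)
    divergence_deg = 70 * wavelength / d
    print(f"{f_mhz} MHz: near field {near_field_cm:.1f} cm, "
          f"divergence {divergence_deg:.2f} degrees")
```

The higher frequency lengthens the near field and narrows the far-field divergence, as stated above.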
The previous figure is a representation of the ideal ultrasound beam. However, some
transducers produce beams with side lobes. These secondary beams fan out around the
primary beam. The principal concern is that under some conditions echoes will be produced
by the side lobes and produce artifacts in the image.
8.18.4 Fixed Focus
A transducer can be designed to produce a focused ultrasound beam by using a concave
piezoelectric element or an acoustic lens in front of the element. Transducers are designed
with different degrees of focusing. Relatively weak focusing produces a longer focal zone and
greater focal depth. A strongly focused transducer will have a shorter focal zone and a shorter
focal depth.
Fixed focus transducers have the obvious disadvantages of not being able to produce the
same image detail at all depths within the body.
8.18.5 Adjustable Transmit Focus
The focusing of some transducers can be adjusted to a specific depth for each transmitted
pulse. This concept is illustrated in the following figure. The transducer is made up of an array
of several piezoelectric elements rather than a single element as in the fixed focus transducer.
There are two basic array configurations: linear and annular. In the linear array the
elements are arranged in either a straight or curved line. The annular array transducer
consists of concentric transducer elements as shown. Although these two designs have
different clinical applications, the focusing principles are similar.
Focusing is achieved by not applying the electrical pulses to all of the transducer elements
simultaneously. The pulse to each element is passed through an electronic delay. Now let us
observe the sequence in which the transducer elements are pulsed, as shown in Figure 8.29. The
outermost element (annular) or elements (linear) will be pulsed first. This produces ultrasound
that begins to move away from the transducer. The other elements are then pulsed in sequence,


working toward the center of the array. The centermost element will receive the last pulse. The
pulses from the individual elements combine in a constructive manner to create a curved
composite pulse, which will converge on a focal point at some specific distance (depth) from
the transducer.

Figure 8.29: The principle of electronic focusing with an array transducer (linear and annular arrays; electronic focusing compared with mechanical focusing by an acoustic lens; side and plan views).
The focal depth is determined by the time delay between the electrical pulses. This can be
changed electronically to focus pulses to give good image detail at various depths within the
body rather than just one depth as with the fixed focus transducer. One approach is to create
an image by using a sequence of pulses, each one focused to a different depth or zone within
the body.
One distinction between the two transducer designs illustrated here is that the annular array focuses the pulse in two dimensions, whereas the linear array can focus in only one dimension, that is, in the plane of the transducer.

8.19 Time Gain Compensation (TGC)


Due to attenuation in the tissue, the amplitude of the sound pulse diminishes as it travels into
the body, and the echo pulse is similarly attenuated as it travels back toward the transducer. A
particular interface, or 'reflector', deep in the body therefore produces a much weaker echo
than an identical interface near the surface, as seen in Figure 8.30.
Attenuation is compensated and the echoes equalized electronically by swept gain or time
gain compensation (TGC). As soon as the transducer is pulsed, the decibel gain of the
amplifier is steadily and automatically increased, in proportion to the time that has elapsed and


the distance that has been traveled by the sound. In this way, all echoes from identical
interfaces are rendered the same, independent of their depth. Swept gain is typically varied
from 0 to 50 dB.
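A minimal sketch of how such a swept gain can be computed, assuming a typical soft-tissue attenuation of about 0.5 dB/cm/MHz (the factor of two accounts for the round trip); the frequency and times are illustrative:

```python
# Swept (time) gain: ramp the receiver gain with echo arrival time so that
# identical interfaces give identical outputs regardless of depth.
c_cm_per_us = 0.154    # speed of sound, cm per microsecond
alpha = 0.5            # attenuation, dB per cm per MHz (typical assumed value)
f_mhz = 3.5            # transducer frequency (MHz), assumed

for t_us in (13, 26, 52, 104):                 # echo arrival times (us)
    depth_cm = c_cm_per_us * t_us / 2          # depth from t = 2r/c
    gain_db = 2 * alpha * f_mhz * depth_cm     # round-trip loss to restore
    print(f"t = {t_us:3d} us -> depth {depth_cm:4.1f} cm, gain {gain_db:4.1f} dB")
```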

Figure 8.30: Illustration of an echo pulse with and without TGC.


8.20 Ultrasound Techniques
The echo principle forms the basis of all common ultrasound techniques. The distance between
the transducer and the reflector or scatterer in the tissue is measured by the time between the
emission of a pulse and reception of its echo. Additionally, the intensity of the echo can be
measured. With Doppler techniques, comparison of the Doppler shift of the echo with the
emitted frequency gives information about any movement of the reflector. The various
ultrasound techniques used are described below.
8.21 Ultrasound Modes
The time lag, t, between emitting and receiving a pulse is the time it takes for sound to travel the distance to the scatterer and back, i.e. twice the range, r, to the scatterer at the speed of sound, c, in the tissue. Thus:

t = 2r/c
The pulse is thus emitted, and the system is set to await the reflected signals, calculating the
depth of the scatterer on the basis of the time from emission to reception of the signal. The
total time for awaiting the reflected ultrasound is determined by the preset depth desired in the
image.
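As a concrete illustration of this relation, the short sketch below converts round-trip echo times into reflector depths; the times chosen are illustrative:

```python
# Pulse-echo ranging with t = 2r/c, assuming the usual average
# soft-tissue speed of sound.
c = 1540.0                        # m/s

def echo_depth_cm(t_us: float) -> float:
    """Reflector depth (cm) for a round-trip echo time given in microseconds."""
    return 100 * c * (t_us * 1e-6) / 2

for t in (13, 65, 130):           # example round-trip times (microseconds)
    print(f"{t:3d} us -> {echo_depth_cm(t):4.1f} cm")
```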
The received energy at a certain time, i.e. from a certain depth, can be displayed as energy amplitude, in A-mode (Amplitude). A-mode shows the depth of, and the reflected energy from, each scatterer (figure 8.31).

Figure 8.31: Transferring the reflected echoes into images: A-mode (amplitude versus depth), B-mode (brightness), and M-mode (motion over time).

The amplitude can also be displayed as the brightness of a point representing the scatterer, in a B-mode (Brightness) plot. B-mode shows the energy or signal amplitude as the brightness of the point (in this case the higher energy is shown darker, against a light background). If some of the scatterers are moving, the motion curve can be traced by letting the B-mode image sweep across a screen or paper. This is called the M-mode or T-M mode (Motion or Time-Motion mode). If the depth is shown in a time plot, the motion is seen as a curve (with horizontal lines for the non-moving scatterers) in an M-mode plot. These modes are illustrated in Fig. 8.31 and will be explained in detail in the following sections.
The ratio of the amplitude (energy) of the reflected pulse to that of the incident pulse is called the reflection coefficient. The ratio of the amplitude of the transmitted pulse to that of the incident pulse is called the transmission coefficient. Both depend on the difference in acoustic impedance between the two materials. The acoustic impedance (Z) of a medium is the speed (c) of sound in the material × the density (ρ):

Z = c × ρ

Thus, if the acoustic impedances of two materials are very different, the reflection will be close to total, and almost no energy will pass into the deeper material. This occurs at boundary zones between, e.g., soft tissue and bone, and soft tissue and air. This means that the deeper material can be considered to lie in an acoustic shadow.
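The following sketch evaluates the intensity reflection coefficient that follows from these definitions, R = ((Z2 − Z1)/(Z2 + Z1))², using typical textbook impedance values (assumed, in MRayl):

```python
# Intensity reflection coefficient at an interface between two media,
# R = ((Z2 - Z1) / (Z2 + Z1))**2, with typical assumed impedances (MRayl).
impedances = {"soft tissue": 1.63, "bone": 7.8, "air": 0.0004}

def reflection(z1: float, z2: float) -> float:
    return ((z2 - z1) / (z2 + z1)) ** 2

z_tissue = impedances["soft tissue"]
for name in ("bone", "air"):
    r = reflection(z_tissue, impedances[name])
    print(f"soft tissue -> {name}: {100 * r:.1f}% of the intensity reflected")
```

The near-total reflection at the tissue/air boundary is why the deeper material lies in shadow.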
8.21.1 A-mode
A-mode (A-scan, amplitude modulation) was in use as late as the 1960s and is now obsolete in medical imaging. It is chiefly of historical importance and rarely used today, as it conveys limited information, e.g. measurement of distances. A-mode is sometimes used


only in ophthalmology or for showing midline displacement in the brain (see figure 8.32). A-mode is a one-dimensional examination technique in which a transducer with a single crystal is used. It is the simplest form of ultrasound imaging and shows only the position of tissue interfaces. As an imaging technique it has been largely superseded by B-mode imaging and other imaging techniques, such as computed tomography (CT). However, A-mode illustrates the basic principles of ultrasound imaging.

Figure 8.32: A-mode imaging is sometimes used for examining the eye.
In the A-mode, the echoes returning from the body are displayed as signals on the screen
along a time (distance) axis as peaks proportional to the intensity (amplitude) of each signal.
A-mode displays the voltage produced across the transducer on an oscilloscope screen
representing echo amplitude (hence the term “A-mode”) on the ordinate, or y-axis, as a
function of time on the abscissa, or x-axis. Time can be presented on the x-axis as distance
from the ultrasound transducers with the assumption of a constant speed of sound. The
amplitude of the reflected sound is shown by the height of the vertical deflection on the
oscilloscope.
For example, if a sound wave is transmitted through the side of the head, the reflected beam
is a line with three distinct peaks (amplitude) that form where the sound is reflected off three
hard formations as shown in figure 8.33. One is the skull closest to the transducer, next is the
midline structure in the brain, the falx cerebri, which is not as hard as bone, but is hard enough
to deflect sound, and the third peak is formed from the skull bone on the opposite side of the
head. This type of scan was used as a crude check for a brain tumor or for bleeding from a
vessel within the brain, which could be inferred if the falx was shifted from the midline.
There were (and are) several problems with this simple system:
• You don't know the exact direction the echo came from.
• You don't know for sure what the echo bounced off.
• You don't know what the object generating the echo looks like.

Figure 8.33: A-mode (amplitude mode) of ultrasound display. An
oscilloscope display records the amplitude of echoes as a function of time
or depth.

8.21.2 B-Mode
B-Mode refers to Brightness mode and was the first practical application of ultrasound for diagnostic purposes. Modern medical ultrasound is performed primarily using a pulse-echo approach with B-mode display. B-mode imaging collects the same information as A-mode but, unlike A-mode, it adds a sense of direction (where the echo is coming from in a two-dimensional plane) as well as the memory to recall all the different echoes, strong and weak. This is the mainstay of ultrasound imaging, providing a real-time, gray-scale display in which the variations in intensity and brightness indicate reflected signals of different amplitudes (the brighter parts indicate larger reflections of sound). The basic principles of B-mode imaging involve transmitting small pulses of ultrasound from a transducer into the body. The direction of ultrasound propagation along the beam line is called the axial direction, and the direction in the image plane perpendicular to axial is called the lateral direction. As the ultrasound waves penetrate the tissues of the body, which present different acoustic impedances along the transmission path, a small fraction of the pulse returns to the transducer as a reflected echo at each tissue interface (the echo signals), while the remainder of the pulse continues along the beam line to greater tissue depths. The echo signals returned from many sequential coplanar pulses are processed and combined to generate an image. This image becomes recognizable, particularly with practice, and can then be evaluated for abnormalities and measured.


The first B-mode images were simple black or white pictures, with no shades of grey. Grey-
scale images were a huge step forward in the quality of ultrasound pictures. In modern
ultrasound scanners, the transducer produces a series of dots of variable brightness on the display screen by sampling multiple scan lines, which build up a two-dimensional representation of the echoes returning from the different body parts being scanned.
On a black background, the signals of greatest intensity are white and the absence of echoes is black, with all the intermediate intensities appearing as shades of gray, each intensity of the reflected sound being assigned a gray-scale value.
The scanner has a digital memory (typically 512 × 512 pixels), which is used to store the values associated with the echo intensities; these are then sent to the video monitor to display the gamut of shades for that particular ultrasound image. The operator can then adjust the dynamic range of the display; using as wide a dynamic range as possible is best for differentiating the slightest changes in tissue echogenicity.
I. A short pulse is generated by the ultrasound source (transducer) and propagates into
the body at the speed of sound (~1540 m/s).
II. At an interface a small fraction of the pulse (echo) is reflected back to the transducer.
III. The echo signals are displayed as a function of time. This display is called an “A” scan.
IV. When the transducer is coupled to an arm and the signals are displayed as bright spots
representing the tomographic image of the tissue, the image is called a “B” scan.
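As an illustration of how echo amplitudes become gray levels (steps III and IV above), here is a sketch of logarithmic compression to an 8-bit display; the 40 dB dynamic range and the sample amplitudes are assumed values:

```python
import numpy as np

# Log-compress echo amplitudes to 8-bit gray levels over an assumed
# 40 dB display dynamic range; the strongest echo maps to white (255).
DYNAMIC_RANGE_DB = 40.0

def gray_level(amplitude: float, a_max: float) -> int:
    db = 20 * np.log10(amplitude / a_max)        # 0 dB for the strongest echo
    db = np.clip(db, -DYNAMIC_RANGE_DB, 0.0)     # clip to the display range
    return int(round(255 * (db + DYNAMIC_RANGE_DB) / DYNAMIC_RANGE_DB))

echoes = [1.0, 0.1, 0.01, 0.0001]                # relative echo amplitudes
print([gray_level(a, max(echoes)) for a in echoes])   # -> [255, 128, 0, 0]
```

Widening the dynamic range compresses more of the weak-echo detail into visible gray shades, which is why the text above recommends using as wide a range as possible.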
8.21.3 M-mode or TM-mode
M-mode [Motion mode – sometimes called TM-mode (Time-Motion Mode)] provides a one-
dimensional view of moving objects over time. It is a particularly useful modality in
echocardiography where it displays the echo amplitude from the beating heart, including the
motion of the heart valves.
The transducer is positioned over the heart and is kept stationary. It records the returning
echoes over the same line of sight repeatedly. What is changing is the position of the heart
wall and valves from one moment to the next. The brightness of the display indicates the
intensity of the reflected signal.
A limitation of M-mode imaging is the difficulty in achieving consistent and accurate beam
placement for standard measurements and calculations. Beam placement guided by 2-D
imaging can be used, but accurate placement of the M-mode beam at the appropriate locations within the heart, as well as on the endocardial surfaces, is crucial for obtaining accurate measurements and calculations.
8.21.4 B-scan, Two-dimensional
The arrangement of many (e.g. 256) one-dimensional lines in one plane makes it possible to
build up a two-dimensional (2D) ultrasound image (2D B-scan). The single lines are generated
one after the other by moving (rotating or swinging) transducers or by electronic multi-


element transducers. Rotating transducers with two to four crystals mounted on a wheel and
swinging transducers (‘wobblers’) produce a sector image with diverging lines (mechanical
sector scanner; Figure 8.34).

Figure 8.34: Two-dimensional B-scan; the ultrasound image is still visible.
Electronic transducers are made from a large number of separate elements arranged on a plane
(linear array) or a curved surface (curved array). A group of elements is triggered
simultaneously to form a single composite ultrasound beam that will generate one line of the
image. The whole two-dimensional image is constructed step-by-step, by stimulating one
group after the other over the whole array (Fig. 8.35).

Figure 8.35: Linear and curved array transducers, showing ultrasound beams generated by groups of elements.
The lines can run parallel to form a rectangular (linear array) or a divergent image (curved
array). The phased array technique requires use of another type of electronic multielement


transducer, mainly for echocardiography. In this case, exactly delayed electronic excitation of
the elements is used to generate successive ultrasound beams in different directions so that a
sector image results (electronic sector scanner).
Construction of the image in fractions of a second allows direct observation of movements
in real time. A sequence of at least 15 images per second is needed for real-time observation,
which limits the number of lines for each image (up to 256) and, consequently, the width of
the images, because of the relatively slow velocity of sound. The panoramic-scan technique
was developed to overcome this limitation. With the use of high-speed image processors,
several real-time images are constructed to make one large (panoramic) image of an entire
body region without loss of information, but no longer in real time.
Many technical advances have been made in the electronic focusing of array transducers
(beam forming) to improve spatial resolution, by elongating the zone of best lateral resolution
and suppressing side lobes (points of higher sound energy falling outside the main beam).
Furthermore, use of complex pulses from wide-band transducers can improve axial resolution
and penetration depth. The elements of the array transducers are stimulated individually by
precisely timed electronic signals to form a synthetic antenna for transmitting composite
ultrasound pulses and receiving echoes adapted to a specific depth. Parallel processing allows
complex image construction without delay.
For years, two-dimensional (2D) ultrasound images were the norm, showing black and
white images. Now, though, technology has advanced to a point where three-dimensional (3D)
and four-dimensional (4D) ultrasounds can be produced, instead of just relying on 2D
ultrasounds. The standard obstetric diagnostic mode is 2D scanning. 2D images show size and shape, without depth, but can be used to see the internal organs of the baby. This is helpful in diagnosing heart defects, issues with the kidneys and other internal problems. In recent years 2D scanning has been supplemented by the newer 3D imaging technique.
8.21.5 Three- and Four-Dimensional Ultrasound techniques
3D ultrasound is a medical ultrasound technique, often used in obstetric ultrasonography (during pregnancy), providing three-dimensional images of the fetus. This technique can show details such as facial features. In 3D imaging, instead of the sound waves being sent straight down and reflected back, they are sent at different angles. That is, the ultrasound takes images from a few different angles and combines them into one three-dimensional image that conveys size, shape and depth. In other words, the 3D images show three-dimensional external views, which may be helpful in diagnosing issues such as a cleft lip. The returning echoes are processed by a sophisticated computer program, resulting in a reconstructed three-dimensional volume image of the fetus's surface or internal organs, in much the same way as a CT machine constructs an image from multiple x-rays.


Now 4D imaging is also used, which takes 3D to the next step by combining numerous 3D images into a real-time movie: the image is continuously updated, so it becomes a moving image, like a movie. 4D ultrasounds are similar to 3D scans, the difference being associated with time: 4D provides a three-dimensional picture in real time, rather than delayed by the lag associated with the computer-constructed image, as in classic three-dimensional ultrasound. It is worth mentioning that if the system is used only in obstetric applications, the ultrasound intensity is limited to less than 100 mW/cm² (specifically, a maximum of 94 mW/cm²), whether scanning two-, three- or four-dimensionally.
The main prerequisite for construction of 3D ultrasound images is very fast data
acquisition. The transducer is moved by hand or mechanically perpendicular to the scanning
plane over the region of interest. The collected data are processed at high speed, so that real-
time presentation on the screen is possible. This is called the four-dimensional (4D)
technique (4D = 3D + real time). The 3D image can be displayed in various ways, such as transparent views of the entire volume of interest or images of surfaces, as used in obstetrics and not only for medical purposes. It is also possible to select two-dimensional images in any plane, especially those that cannot be obtained by a 2D B-scan.
3D or 4D ultrasound has been developed and researched in two major ways. One is to
overcome the limitations of 2D ultrasound by providing an imaging technique that reduces the
variability of the 2D technique and allows the clinician to view the anatomy in 3D, the other is
to provide better spatial guidance for various interventional procedures, such as biopsy, focal
ablative therapy or image-guided surgery. In the field of diagnostic radiology, various 3D
ultrasound techniques, such as ultrasound cholangiography using minimum intensity
projection and volume contrast imaging, have shown excellent performance in achieving
better spatial resolution and have reduced inherent noise in comparison with conventional 2D
ultrasound. As guidance for interventional procedures, 3D ultrasound was proved to be useful
in improving the depiction and understanding of the geometric relationships of needles and
probes to tumors and other nearby structures, so as to optimize delivery of the needle or
ablative agent. Furthermore, 4D ultrasound, which is a dynamic 3D ultrasound, provides real-
time feature of volume datasets instead of “static” 3D ultrasound images, and so enables more
intuitive recognition of the 3D spatial relationship between the needle and the target lesion and
allows easy alteration in the orientation of the needle under real-time monitoring. The
advantages of 3D ultrasound are primarily derived from the fundamental properties of 2D
ultrasound. Ultrasound has many advantages over computed tomography and magnetic
resonance imaging, including real-time imaging with vessel visualization, decreased procedure
time and cost, portability, and lack of ionizing radiation. With continuing technological
improvements including computer technology and visualization techniques, 3D ultrasound
imaging is beginning to migrate from the research laboratory to the examination room.
Therefore, radiologists and sonographers should be ready to accept the paradigm shift of viewing 3D images on a computer monitor.


8.21.6 Harmonic Imaging


To understand the benefits of Harmonic Imaging, consider how it differs from conventional
ultrasound imaging. With conventional imaging, the ultrasound system transmits and receives
a sound pulse of a specific frequency (Figure 8.36). The figure presents a simple comparison between conventional imaging (Figure 8.36A) and Harmonic Imaging (Figure 8.36B). In conventional imaging, the transducer transmits "a" and receives "b", sound waves of a given frequency; the received signal is lower in intensity because it is attenuated by the tissue. In harmonic imaging, the returning signal "c" is actually a combination of frequencies: it contains not only the fundamental signal that was originally transmitted, "a", but also the harmonic signal, at twice the frequency of "a". The ultrasound system processes these signals separately.

Figure 8.36: (A) conventional imaging, (B) Harmonic Imaging.
The difference between the transmitted and returned signal is that the returned signal is less
intense, losing strength as it passes through tissue. With Harmonic Imaging, on the other hand,
the signal returned by the tissue includes not only the transmitted “fundamental” frequency,
but also signals of other frequencies – most notably, the “harmonic” frequency, which is twice
the fundamental frequency (Figure 8.36B). Once this combined fundamental/harmonic signal
is received, the ultrasound system separates out the two components and then processes the
harmonic signal alone.
8.21.7 B-flow
B-Flow is a new imaging technique which utilizes Digitally Encoded Ultrasound technology
to provide direct visualization of blood echoes to image blood flow and tissue simultaneously.
It is a special B-scan technique for blood flow imaging that can be used to show movement
without relying upon the Doppler Effect. This technique is effective in showing exceptional
clarity and speed in the display of blood flow and vessel walls but, unlike Doppler methods, it provides no information about flow velocity (Figure 8.37). This technique therefore promises to expand the clinical applications of ultrasound and improve the early detection of peripheral vascular diseases. B-flow technology is available with convex array and sector transducers
suitable for abdominal examination. The echoes from moving scatterers (particularly blood
cells in blood vessels) are separated from stationary scatterers by electronic comparison of
echoes from successive pulses (autocorrelation). These very weak echoes are amplified and
depicted as moving dots on the screen. B-Flow images are formed by using:
(1) Coded Excitation to improve blood echo sensitivity and
(2) Tissue equalization to reduce the relative brightness of tissue such that both tissue and
blood can be simultaneously visualized.

Figure 8.37: B-flow image of an aorta with arteriosclerosis. This technique gives a clear delineation of the inner surface of the vessel (+…+ measures the outer diameter of the aorta).
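The stationary-echo cancellation that underlies this comparison of successive pulses can be sketched in a few lines with synthetic data; this is a deliberate simplification of real B-flow processing, which also uses coded excitation:

```python
import numpy as np

# Two successive echo lines: strong tissue echoes repeat exactly, while the
# weak blood echoes decorrelate between pulses. Subtracting the lines
# cancels the stationary tissue and leaves only the moving-blood signal.
rng = np.random.default_rng(0)
n = 200
tissue = rng.normal(0.0, 1.0, n)                  # stationary scatterers (strong)
line1 = tissue + 0.05 * rng.normal(0.0, 1.0, n)   # pulse 1: tissue + blood
line2 = tissue + 0.05 * rng.normal(0.0, 1.0, n)   # pulse 2: blood has changed

moving_only = line2 - line1                       # tissue cancels; blood remains
print(f"line power {np.mean(line1**2):.3f}, "
      f"moving-scatterer power {np.mean(moving_only**2):.4f}")
```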

8.22 Doppler Effect


The Doppler Effect was discovered by Christian Andreas Doppler (1803-1853); it describes how the frequency of an emitted wave changes with the velocity of the emitter or observer. The theory was presented to the Royal Bohemian Society of Sciences on 25 May 1842 (with only five listeners at the occasion!) and published in 1843.
8.23 Basic Principles
In ultrasound imaging, echoes from most tissues will be at the same frequency as the transmitted beam. However, if the echoes received are from tissue or blood cells that move, the send and receive frequencies will not be the same. This "shifted" frequency can be used to determine the relative speed and direction of these moving tissues. This principle is known as the Doppler Effect. Basically, the greater the frequency shift, the higher the speed of the moving object. According to the Doppler principle, movement of blood cells or tissue towards the transducer makes the received frequency higher; vice versa, movement away from the transducer makes the received frequency lower. The Doppler shift is the apparent change in wavelength (or frequency) of an acoustic wave when there is relative movement between the transmitter (or frequency source) and the receiver. The Doppler Effect is used to provide further information in various ways, as discussed below; it is especially important for examining blood flow.
The basis for the Doppler Effect is that the propagation velocity of the waves in a medium
is constant, so the waves propagate with the same velocity in all directions, and thus there is
no addition of the velocity of the waves and the velocity of the source. Thus, as the source
moves in the direction of the propagation of the waves, this does not increase the propagation
velocity of the waves, but instead increases the frequency.
• In ultrasound imaging, if echoes received are from tissues or blood cells that are moving,
the transmitted and received frequencies will not be the same.
• This “shifted” frequency can be used to determine the relative velocity and the direction
of these moving tissues.
• This effect is known as the Doppler Effect.
Doppler principle can be summarized generally as follows:
If the reflector is moving toward the transmitter, the received frequency will be higher than the
transmit frequency. If the reflector is traveling away from the transmitter, the received
frequency will be lower than the transmit frequency.
This same principle applies to blood flow. If the blood moves toward the probe (antegrade), the sound bouncing off it increases in frequency; the source of these sound waves is the probe itself, since it both emits the waves and detects them. If the blood moves away from the probe (retrograde), the frequency decreases. The Doppler shift can thus be used to detect blood flow and measure blood flow velocity.
Ultrasound images of flow, whether color flow or spectral Doppler, are essentially
obtained from measurements of movement. In ultrasound scanners, a series of pulses are
transmitted to detect movement of blood. Echoes from stationary tissue are the same from
pulse to pulse. Echoes from moving scatterers exhibit slight differences in the time for the
signal to be returned to the receiver (Figure 8.38A). These differences can be measured as a
direct time difference or, more usually, in terms of a phase shift from which the "Doppler
frequency" is obtained (Figure 8.38B). They are then processed to produce either a color flow
display or a Doppler sonogram.

Figure 8.38: Ultrasound velocity measurement. The diagram shows a scatterer S moving at velocity v with a beam/flow angle θ. The velocity can be calculated from the difference between the transmit-to-receive times of the first pulse (t1) and the second (t2) as the scatterer moves through the beam.

8.24 The Doppler Equation


In ultrasound, the wave is sent from a stationary transducer; the moving red blood cell first acts as a moving receiver and then, in re-emitting the reflected wave towards the transducer, as a moving source, so the Doppler shift is approximately twice as great as for one-way transmission. In the case of reflected ultrasound, the Doppler shift (or change in frequency) is:

fD = 2 ft v cos θ / c          (8.1)

Where ft : transmitted frequency
v : velocity of the moving blood
θ : the "insonation angle", the angle between the direction of motion of the red blood cells and the beam of ultrasound transmitted from the transducer; best when below 20° (note that cos 0° = 1 and cos 90° = 0)
c : the velocity of sound in blood, an important part of the Doppler equation, which is constant.
The Doppler Effect in tissues may thus be expressed as equation (8.1). The echo frequency (returned frequency, fD) is also called the "frequency shift" or "Doppler shift". Thus, in the case of reflected ultrasound, the velocity of blood or tissue can be measured from the Doppler shift of the reflected ultrasound. As can be seen from figure 8.38A, B and equation (8.1), there has to be motion in the direction of the beam; if the flow is perpendicular to the beam, there is no relative motion from pulse to pulse.
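A small sketch evaluating equation (8.1) and its inverse, the angle-corrected velocity estimate; the transmit frequency, velocity and angles are illustrative assumptions:

```python
import math

# Doppler shift fD = 2*ft*v*cos(theta)/c and the inverse used to recover
# the blood velocity from a measured shift.
c = 1540.0                       # speed of sound in blood (m/s)

def doppler_shift(ft: float, v: float, theta_deg: float) -> float:
    return 2 * ft * v * math.cos(math.radians(theta_deg)) / c

def velocity(fd: float, ft: float, theta_deg: float) -> float:
    return fd * c / (2 * ft * math.cos(math.radians(theta_deg)))

ft = 3.5e6                                        # transmit frequency (Hz)
print(f"{doppler_shift(ft, 1.0, 0):.0f} Hz")      # ~4545 Hz for 1 m/s head-on
print(f"{velocity(2273, ft, 60):.2f} m/s")        # angle correction at 60 degrees
```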


The size of the Doppler signal is dependent on:
1- Blood velocity: as velocity increases, so does the Doppler frequency;
2- Ultrasound frequency: higher ultrasound frequencies give an increased Doppler frequency. As in B-mode, lower ultrasound frequencies have better penetration, so the choice of frequency is a compromise between better sensitivity to flow and better penetration;
3- The angle of insonation: the Doppler frequency increases as the Doppler ultrasound beam becomes more aligned to the flow direction (the angle θ between the beam and the direction of flow becomes smaller). This is of the utmost importance in the use of Doppler ultrasound.
The implications are illustrated schematically in Figure 8.39. A higher-frequency Doppler signal is obtained if the beam is aligned more closely to the direction of flow. In the diagram, beam (A) is more aligned than (B) and produces higher-frequency Doppler signals. The beam/flow angle at (C) is almost 90° and gives a very poor Doppler signal. The flow at (D) is away from the beam and gives a negative signal.

Figure 8.39: Effect of the Doppler angle in the sonogram.

All types of Doppler ultrasound equipment employ filters to cut out the high amplitude, low-
frequency Doppler signals resulting from tissue movement, for instance due to vessel wall
motion. Filter frequency can usually be altered by the user, for example, to exclude
frequencies below 50, 100 or 200 Hz. This filter frequency limits the minimum flow velocities
that can be measured.
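Inverting equation (8.1) for the filter cut-off shows directly how the setting limits the minimum measurable velocity; this sketch assumes cos θ = 1 and an illustrative 3.5 MHz probe:

```python
# Minimum measurable velocity implied by the wall filter cut-off,
# from fD = 2*ft*v/c with cos(theta) = 1 (assumed).
c = 1540.0          # m/s
ft = 3.5e6          # transmit frequency (Hz), assumed

for f_filter in (50, 100, 200):               # filter settings from the text (Hz)
    v_min_cm_s = 100 * f_filter * c / (2 * ft)
    print(f"filter {f_filter:3d} Hz -> slowest measurable flow {v_min_cm_s:.1f} cm/s")
```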
8.25 Spectral Doppler
Spectral Doppler, which is of high value in ultrasound diagnosis and is used for the evaluation of blood flow, includes three kinds:
- Pulse Doppler (PW)


- High Pulse Repetition Frequency Pulse Doppler (HPRF)


- Continuous Wave Doppler (CW).
8.26 Pulsed and Continuous Wave Doppler
At the present time there are two main types of Doppler echocardiographic systems that are
commonly used:
• continuous wave and
• pulsed wave.
These two systems differ in transducer design, operating features, signal processing procedures and the types of information provided. Each system has its own advantages and disadvantages; in our opinion, the current practice of Doppler echocardiography requires some capability for both forms (see figure 8.40).
Figure 8.40: Schematic representation of the principle of continuous and pulsed wave Doppler. CW: accurate measurement of high velocities, but poor range resolution. PW: good range resolution, but a limitation on the maximum measurable velocity.

8.26.1 Continuous Wave Doppler


Continuous Wave Doppler (CWD) is an ultrasound imaging mode, which records blood flow
velocities along the length of the beam. It is the older and electronically simpler of the two
kinds. Continuous wave Doppler works basically with two different piezoelectric crystals, one permanently emitting ultrasound and the other receiving all the echoes. These crystals are placed at a slight angle to each other so that the reflected ultrasound waves can be picked up by the receiver.
The transducer accomplishes dual function with one half of the elements devoted to each
function. One half of the elements are continuously sending sound waves of a single
frequency while the other half is continuously receiving the reflected signals.
8.26.1.1 The advantage of CW Doppler
The important advantage of continuous wave Doppler is its ability to measure high blood
velocities along the ultrasound line (for example in aortic stenosis).


8.26.1.2 The Disadvantage of CW Doppler


The disadvantage is its lack of selectivity or depth discrimination. Since CW Doppler is
constantly transmitting and receiving from two different transducer heads (crystals) there is no
provision for imaging or range gating to allow selective placing of a given Doppler sample
volume in space. As a consequence, it reflects the ultrasound data from every red cell
reflecting ultrasound back to the transducer along the course of the ultrasound beam.
Thus, true CW Doppler is functionally a stand-alone technique whether or not the capability is
housed within a two-dimensional imaging transducer. The absence of anatomic information
during CW examination may lead to interpretive difficulties, particularly if more than one
heart chamber or blood vessel lies in the path of the ultrasound beam.
It is possible, however, to program a phased array system to perform both two-dimensional
and CW Doppler functions almost simultaneously. The quasi-simultaneous CW-imaging uses
a time sharing arrangement in which the transducer rapidly switches back and forth from one
type of examination to the other. Because this switching is done at very high speeds, the
operator gets the impression that both studies are being done continuously and in real-time.
During the imaging period, no Doppler data is being collected, so an estimate is generated,
usually from the preceding data. During the Doppler collection period, previously stored
image data is displayed. This arrangement usually degrades the quality of both the image and
Doppler data.

8.27 Color Flow Mapping


Color Flow Mapping (CFM) combines B-mode image format and Pulsed Doppler to provide a
two dimensional representation of blood flow in Real Time.
The Doppler ultrasound lines, like B-mode lines, are sequentially scanned through the
frame. Multiple range gates are taken along the Doppler lines. The calculated velocity data is
assigned a color to represent a certain velocity and direction, and then displayed combining
with the B-mode image at the original location.
8.28 Pulsed Wave Doppler
The pulse is sent out, and the frequency shift in the reflected pulse is measured after a certain
time. This will correspond to a certain depth (range gating), i.e. velocity is measured at a
specific depth, which can be adjusted. The width is the same as the beam width, and the length
of the sample volume is equal to the length of the pulse. The same transducer is used both for
transmitting and receiving.
A problem in pulsed Doppler is that the Doppler shift is very small compared to the ultrasound frequency. This makes it problematic to estimate the Doppler shift from a single pulse without increasing the pulse length too far. A velocity of 100 cm/s with an ultrasound frequency of 3.5 MHz results in a maximum Doppler shift of about 4.5 kHz. The solution to this problem is to fire multiple pulses in the same direction and produce a new signal with one sample from each pulse; the Doppler curve from this signal will be a new curve with a frequency equal to the Doppler shift. (This means that a full package of pulses is considered
one pulse in the sampling frequency sense).
The pulsed mode results in a practical limit on the maximum velocity that can be
measured. In order to measure velocity at a certain depth, the next pulse cannot be sent out
before the signal is returned. The Doppler shift is thus sampled once for every pulse that is
transmitted, and the sampling frequency is thus equal to the pulse repetition frequency (PRF), i.e. the number of pulses per second.
One main advantage of pulsed Doppler is its ability to provide Doppler shift data
selectively from a small volume of sample along the ultrasound beam (for example mitral
valve inflow). The location of this sample volume is operator controlled. The main
disadvantage of PW Doppler is its inability to accurately measure high blood flow velocities
(velocities above 1.5 to 2 m/s), known as aliasing. Frequency aliasing occurs at a Doppler shift equal to half of the PRF:

fD(max) = PRF/2

This limit is called the "Nyquist limit": the highest detectable velocity is limited by one half of the rate at which the ultrasound lines are fired. We will give more detail on aliasing later.
Pulsed wave (PW) Doppler systems use a transducer that alternates transmission and reception
of ultrasound in a way similar to the M-mode transducer. One main advantage of pulsed
Doppler is its ability to provide Doppler shift data selectively from a small segment along the
ultrasound beam, referred to as the "sample volume".
The sample volume is really a three-dimensional, teardrop shaped portion of the ultrasound
beam (Figure 8.41). Its volume varies with different Doppler machines, different size and
frequency transducers and different depths into the tissue. Its width is determined by the width
of the ultrasound beam at the selected depth. Its length is determined by the length of each
transmitted ultrasound pulse.


Figure 8.41: The sample volume of PW Doppler is actually a three-dimensional volume whose size changes as its location relative to the transducer is changed. When placed in the far field, it becomes very large.


Therefore, the farther into the heart the sample volume is moved, the larger it effectively
becomes. This happens because the ultrasound beam diverges as it gets farther away from the
transducer.
The main disadvantage of PW Doppler is its inability to accurately measure high blood
flow velocities, such as may be encountered in certain types of valvular and congenital heart
disease. This limitation is technically known as "aliasing" and results in an inability of pulsed
Doppler to faithfully record velocities above 1.5 to 2 m/sec when the sample volume is located
at standard ranges in the heart (figure 8.42).

Figure 8.42: Schematic rendering of the full spectral display of a high velocity profile fully recorded by CW Doppler. The PW display is aliased, or cut off, and the top of the profile is placed at the bottom.

The spectral outputs from PW and CW appear different, as shown in figure 8.43. The transducer is located at the apex and diastolic flow is toward the transducer (positive). Note the laminar appearance of the PW display. The CW does not usually display the same laminar flow pattern, as it receives flow information from all portions of the ultrasound beam. When there is no turbulence,
PW will generally show a laminar (narrow band) spectral output. CW, on the other hand,
rarely displays such a neat narrow band of flow velocities even with laminar flow because all
the various velocities encountered by the ultrasound beams are detected by CW.

Figure 8.43: Spectral displays of diastolic flow through the mitral orifice.


It can usually be said that when an operator wants to know where a specific area of abnormal flow is located, pulsed wave Doppler is indicated. When accurate measurement of an elevated flow velocity is required, CW Doppler should be used. The various differences between pulsed and continuous wave Doppler are summarized in Table 8.5.
Table 8.5: Summarizing the advantages and disadvantages of pulsed
and continuous wave Doppler echocardiography.
                     Range resolution    Limitation on maximum velocity
Pulsed wave                yes                        yes
Continuous wave            no                         no

In pulsed Doppler, a single ultrasound line is repeatedly fired. Echoes reflected from moving structures, including blood cells, experience a Doppler shift in frequency. Using the Doppler
equation, the echo information obtained within the Sample Volume is analyzed for shifted
frequency content and amplitude, rather than transmit frequency amplitude. From this, the
blood velocity can be determined.
In order to obtain enough data to calculate the frequency components of the sampled
volume, many ultrasound lines must be fired. The frequency data is converted to velocity, and
displayed in a scrolling strip format on the monitor.
8.29 Angle of Incidence
When the motion of the object and the transmitted beam are not parallel, it is necessary to
correct for the angular difference. Motion that occurs at an angle to the beam axis will result in
a decrease in the magnitude of the frequency shift and a lower calculated velocity. Therefore, the transmitted beam needs to be as nearly parallel to the flow as possible for the most accurate velocity. An equation is used to correct for the angle offset, since the transducer receives only the velocity component parallel to the beam (V cos θ).
The location of the sample volume is operator controlled. An ultrasound pulse is
transmitted into the tissues travels for a given time (time t) until it is reflected back by a
moving red cell. It then returns to the transducer over the same time interval but at a shifted
frequency. The total transit time to and from the area is 2t. Since the speed of ultrasound in the
tissues is constant, there is a simple relationship between roundtrip travel time and the location
of the sample volume relative to the transducer face (i.e., the distance to the sample volume equals the ultrasound speed multiplied by half the round-trip travel time). This process is repeated
through many transmit-receive cycles each second.
This range gating is therefore dependent on a timing mechanism that only samples the
returning Doppler shift data from a given region. It is calibrated so that as the operator chooses
a particular location for the sample volume, the range gate circuit will permit only Doppler
shift data from inside that area to be displayed as output. All other returning ultrasound
information is essentially "ignored".


Another main advantage of PW Doppler is the fact that some imaging may be carried on
alternately with the Doppler and thus the sample volume may be shown on the actual two-
dimensional display for guidance. PW Doppler capability is possible in combination with
imaging from a mechanical or phased array imaging system. It is also generally steerable
through the two-dimensional field of view, although not all systems have this capability.
In reality, since the speed of sound in body tissues is constant, it is not possible to
simultaneously carry on both imaging and Doppler functions at full capability in the same
ultrasound system. In mechanical systems, the cursor and sample volume are positioned
during real-time imaging, and the two-dimensional image is then frozen when the Doppler is
activated. With most phased array imaging systems the Doppler is variably programmed to
allow periodic update of a single frame two-dimensional image every few beats (figure 8.44).
In other phased arrays, two-dimensional frame rate and line density are significantly decreased
to allow enough time for the PW Doppler to sample effectively. This latter arrangement gives
the appearance of near simultaneity.

Figure 8.44: When the PW Doppler operates, it causes the two-dimensional
image to be held in a frozen frame. The image is periodically updated and
will usually appear as a blank on the spectral display (dashed lines).

8.30 Aliasing
Pulsed wave systems suffer from a fundamental limitation: the aliasing phenomenon. With pulsed ultrasound it is not possible to measure very high flow velocities with accuracy. If the flow is too fast it will be shown in the wrong direction and its velocity will be underestimated. This artifact shows as 'wrap-around' at the top and bottom of the sonogram and is known as 'aliasing'. The upper limit of the Doppler shift (maximum Doppler frequency fD) that a pulsed wave system can record properly, or display unambiguously, is known as the Nyquist limit and is half the pulse repetition frequency (PRF/2). In other words, when pulses are transmitted at a given sampling frequency (known as the pulse repetition frequency), the maximum Doppler frequency fD that can be measured


unambiguously is half the pulse repetition frequency. In the case of pulsed wave spectral Doppler (or color Doppler), when an abnormal velocity exceeds this upper limit, Doppler shifts greater than the Nyquist limit are generated. This leads to aliasing, which is displayed as bright, turbulent-appearing flow in color Doppler and as blood flow profiles that "wrap around" the displayed scale in pulsed wave spectral Doppler, as shown in figure 8.45. In other
words, the frequency with which the Doppler pulses are repeated must be at least twice the
maximum Doppler shift frequency produced by the flow. Thus the fastest flow that can be
measured with accuracy is the velocity which produces a Doppler shift frequency equal to half
the PRF being used. A greater flow velocity than this produces 'aliasing'. Aliasing does not
occur with continuous-wave Doppler. It is therefore particularly difficult to measure fast flow
in deep blood vessels. The deeper the gate has to be set, the smaller the PRF that can be used,
and so the smaller the fastest flow that can be measured without aliasing.
For example, if the Doppler shift frequency produced by the fast blood flow associated with
a stenosis is 8 kHz, the PRF must be at least 16 kHz. This allows a listening time of only 60 µs
between pulses, in which time the sound can travel to and fro through a depth of view of only
5 cm. The depth of the sampling volume determines the PRF needed, and the PRF determines
the maximum velocity that can be measured without aliasing. Thus:
maximum velocity (cm s-1) × range (cm) × transducer frequency (MHz) = 4000
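The depth-PRF-velocity trade-off can be sketched directly; with these assumptions (cos θ = 1, c = 1540 m/s) the product of velocity, range and frequency comes out near 3000 in the units above, the same order of magnitude as the quoted rule of thumb:

```python
# Depth limits the PRF (the echo must return before the next pulse), and
# the Nyquist criterion fD <= PRF/2 then limits the measurable velocity.
c = 1540.0

def max_prf(depth_m: float) -> float:
    return c / (2 * depth_m)                 # unambiguous-ranging limit

def max_velocity(depth_m: float, ft: float) -> float:
    return (max_prf(depth_m) / 2) * c / (2 * ft)   # invert fD = 2*ft*v/c

ft = 3.5e6                                   # transmit frequency (Hz), assumed
for depth_cm in (5, 10, 15):
    v = max_velocity(depth_cm / 100, ft)
    print(f"depth {depth_cm:2d} cm -> max velocity without aliasing {v*100:.0f} cm/s")
```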
The risk of aliasing can be reduced by reducing the Doppler shift, either by (a) using a probe of lower frequency f or (b) increasing the angle θ, but both increase the error in the measured flow. The risk can also be reduced by (c) increasing the PRF, but this causes problems, as we shall now see.

Figure 8.45: Aliased spectral display of aortic insufficiency (left arrow) in PW mode detected from the ventricular apex. Abnormal flow is toward the transducer. After 3 beats, the system is switched to CW and the full profile is seen.


The pulse repetition frequency is itself constrained by the range of the sample volume. The
time interval between sampling pulses must be sufficient for a pulse to make the return
journey from the transducer to the reflector and back. If a second pulse is sent before the first
is received, the receiver cannot discriminate between the reflected signal from both pulses and
ambiguity in the range of the sample volume ensues. As the depth of investigation increases,
the journey time of the pulse to and from the reflector is increased, reducing the pulse
repetition frequency for unambiguous ranging. The result is that the maximum fD measurable
decreases with depth.
Low pulse repetition frequencies are employed to examine low velocities (e.g. venous
flow). The longer interval between pulses allows the scanner a better chance of identifying
slow flow. Aliasing will occur if low pulse repetition frequencies or velocity scales are used
and high velocities are encountered (figure 8.46 and 8.47). Conversely, if a high pulse
repetition frequency is used to examine high velocities; low velocities may not be identified.



Figure 8.46: (a) Aliasing and color artifacts in color Doppler imaging; the color image shows regions of aliased flow (yellow arrows). (b) The same view after reducing the color gain and increasing the pulse repetition frequency.

Figure 8.47: (a, b) Example of aliasing and its correction. (a) Waveforms with aliasing: abrupt termination of the systolic peaks, which are displayed below the baseline. (b) Correction: the pulse repetition frequency is increased and the baseline adjusted (moved down), giving a clear sonogram without aliasing.

CHAPTER 9

MAGNETIC RESONANCE
IMAGING
Rationale
Magnetic resonance imaging (MRI) is a test that uses a magnetic field and pulses of radio wave energy to make pictures of organs and structures inside the body. In many cases MRI gives different information about structures in the body than can be seen with an X-ray, ultrasound, or computed tomography (CT) scan. MRI may also show problems that cannot be seen with other imaging methods. This highlights the great importance of studying this technology and understanding the fundamentals upon which it is built.

Performance Objectives

After studying this chapter, the student will be able to:

1. Explain the principles of MRI


2. Understand the role of MRI for the detection and characterization of
malignant liver lesions and to learn about the relevant MR imaging features
3. Explain how different tissues have different T1 relaxation times and how
this affects the overall image that is created.
4. Define the meaning of TR (repetition time) and TE (echo delay time).
5. Discuss the differences between T1-weighted images, proton density-
weighted images, and T2-weighted images.

CHAPTER NINE: MAGNETIC RESONANCE IMAGING


CHAPTER CONTENTS
9.1. Historical Introduction 212
9.2. The Hardware 214
9.3. Magnet Types 214
9.3.1. Permanent Magnets 214
9.3.2. Electromagnets 215
9.3.2.1. Resistive Magnets 215
9.3.2.2. Superconducting Magnets 215
9.4. Shimming 217
9.5. RF Coils 217
9.5.1. Volume RF Coils 217
9.5.2. Surface Coils 218
9.5.3. Quadrature Coils 219
9.5.4. Phased Array Coils 219
9.6. Other Hardware 219
9.6.1. Faraday shield 219
9.7. Atomic Structure 220
9.8. Magnetization 221
9.9. Magnetic Moments 222
9.10. In-Phase and Dephase 226
9.11. RF Pulse 227
9.12. Excitation 227
9.13. Relaxation 229
9.13.1. T1 Relaxation 229
9.13.1.1 T1 Relaxation Curves 230
9.13.2. T2 Relaxation 232
9.13.2.1. T2 Relaxation Curves 233
9.13.2.2. T2* Relaxation 234
9.14. Acquisition 236
9.15. Computing and Display 238
9.16. Fourier Transformation 239
9.17. Gradient Coils 240
9.18. Magnetization Gradients 241
9.19. Signal Coding 243
9.19.1 Slice Encoding Gradient 244
9.19.1.1. Slice Location 246
9.19.1.2. Slice Thickness 246
9.19.1.3. Receiver Bandwidth 249
9.19.2. Frequency Encoding Gradient 250
9.19.2.1. Application of the Frequency Encoding Gradient 251
9.19.2.2. Frequency Encoding in Both Directions 252
9.19.3. Phase Encoding Gradient 252
9.20. K-Space 254
9.21. Gradient Echo Pulse Sequence Diagram 255
9.22. Gradient Specifications 257


9.23. MRI Image Quality, Artifacts, and Imaging Parameters 258


9.23.1. Signal to Noise and Contrast Resolution 258
9.23.2. Pixel, Voxel, Matrix 259
9.23.3. Inter-Slice Gap 260
9.23.4. Size of the (Image) Matrix 261
9.23.5 Scan Parameters (TR, TE, Flip Angle) 262
9.23.6. Number of Acquisitions 264
9.23.7. Field of View 264
9.23.8 Selection of the Transmit and Receive Coil (RF Coil) 265
9.24. MRI Contrast Agents 265
9.25. Special Applications 267
9.25.1. Magnetic Resonance Angiography and Venography 267
9.25.2. Magnetic Resonance Myelography 268
9.25.3. Magnetic Resonance Cholangiopancreatography 268
9.25.4. Chemical Shift Imaging 269
9.25.5. Diffusion-Weighted Imaging 269

9.1 Historical Introduction


Magnetic resonance imaging (MRI) is an imaging technique used primarily in medical settings
to produce high quality images of the inside of the human body. MRI is based on the
principles of nuclear magnetic resonance (NMR), a spectroscopic technique used by scientists
to obtain microscopic chemical and physical information about molecules. The technique was
called magnetic resonance imaging rather than nuclear magnetic resonance imaging (NMRI)
because of the negative connotations associated with the word nuclear in the late 1970's. MRI
started out as a tomographic imaging technique, that is, it produced an image of the NMR
signal in a thin slice through the human body. MRI has advanced beyond a tomographic
imaging technique to a volume imaging technique. This chapter presents a comprehensive
picture of the basic principles of MRI.
Before beginning a study of the science of MRI, it will be helpful to reflect on the brief
history of MRI. Felix Bloch and Edward Purcell, both of whom were awarded the Nobel Prize
in 1952, discovered the magnetic resonance phenomenon independently in 1946. In the period
between 1950 and 1970, NMR was developed and used for chemical and physical molecular
analysis.
In 1971 Raymond Damadian showed that the nuclear magnetic relaxation times of tissues
and tumors differed, thus motivating scientists to consider magnetic resonance for the
detection of disease. In 1973 the x-ray-based computerized tomography (CT) was introduced
by Hounsfield. This date is important to the MRI timeline because it showed hospitals were
willing to spend large amounts of money for medical imaging hardware. Magnetic resonance
imaging was first demonstrated on small test tube samples that same year by Paul Lauterbur.
He used a back projection technique similar to that used in CT. In 1975 Richard Ernst
proposed magnetic resonance imaging using phase and frequency encoding, and the Fourier
Transform. This technique is the basis of current MRI techniques. A few years later, in 1977,


Raymond Damadian demonstrated an MRI technique called field-focusing nuclear magnetic resonance. In


this same year, Peter Mansfield developed the echo-planar imaging (EPI) technique. This
technique would be developed in later years to produce images at video rates (30 ms/image).
Edelstein and coworkers demonstrated imaging of the body using Ernst's technique in
1980. A single image could be acquired in approximately five minutes by this technique. By
1986, the imaging time was reduced to about five seconds, without sacrificing too much image
quality. The same year people were developing the NMR microscope, which allowed
approximately 10 µm resolution on approximately one cm samples. In 1987 echo-planar
imaging was used to perform real-time movie imaging of a single cardiac cycle. In this same
year Charles Dumoulin was perfecting magnetic resonance angiography (MRA), which
allowed imaging of flowing blood without the use of contrast agents.
In 1991, Richard Ernst was awarded the Nobel Prize in Chemistry for his achievements in pulsed Fourier transform NMR and MRI. In 1992 functional MRI (fMRI) was
developed. This technique allows the mapping of the function of the various regions of the
human brain. Five years earlier many clinicians thought echo-planar imaging's primary
applications were to be in real-time cardiac imaging. The development of fMRI opened up a
new application for EPI in mapping the regions of the brain responsible for thought and motor
control. In 1994, researchers at the State University of New York at Stony Brook and
Princeton University demonstrated the imaging of hyperpolarized 129Xe gas for respiration
studies.
In 2003, Paul C. Lauterbur of the University of Illinois and Sir Peter Mansfield of the
University of Nottingham were awarded the Nobel Prize in Medicine for their discoveries
concerning magnetic resonance imaging. MRI is clearly a young, but growing science.
Why MRI?
When using x-rays to image the body one doesn't see very much. The image is gray and
flat. The overall contrast resolution of an x-ray image is poor. In order to increase the image
contrast one can administer some sort of contrast medium, such as barium or iodine based
contrast media. By manipulating the x-ray parameters kV and mAs one can try to optimize the
image contrast further but it will remain sub optimal. With CT scanners one can produce
images with a lot more contrast, which helps in detecting lesions in soft tissue. The principal advantage of MRI is its excellent contrast resolution. With MRI it is possible to detect minute
contrast differences in (soft) tissue, even more so than with CT images. By manipulating the
MR parameters one can optimize the pulse sequence for certain pathology. Another advantage
of MRI is the possibility to make images in every imaginable plane, something which is quite impossible with x-rays or CT (although with CT it is possible to reconstruct other planes from an axially acquired data set).
However, the spatial resolution of x-ray images is, when using special x-ray film, excellent.
This is particularly useful when looking at bone structures. The spatial resolution of MRI
compared to that of x-ray is poor. In general one can use x-ray and CT to visualize bone


structures whereas MRI is extremely useful for detecting soft tissue lesions. Before beginning a study of the science of MRI, it will be helpful to review briefly the hardware of MRI.
9.2 The Hardware
Magnetic resonance imaging (MRI) scanners come in many varieties: permanent magnet, resistive or superconducting types; open or bore designs; with or without helium; high or low field strength. The choice of magnet is governed mainly by what you intend to do with it, and by the cost. High-field magnets offer better image quality, faster scanning and a wider range of applications, but they cost more than their low-field counterparts.
9.3 Magnet Types
The static magnetic field (Bo) in MRI systems can be created by: Permanent magnets and
Electromagnets.
9.3.1 Permanent Magnets
A permanent magnet is made from permanently magnetized ferromagnetic material; it does not lose its magnetic field, which remains over time without weakening. Due to weight
considerations, these types of magnets are usually limited to maximum field strengths
of 0.4 T (the unit for magnetic field strength is Tesla: 1 Tesla = 10000 Gauss). Permanent
magnets have usually an open design system (see Figure 9.1) which has ample open space
which is more comfortable for the patient. So, the open design accommodates extremely large
patients and dramatically reduces anxiety for all patients especially those who have
claustrophobic tendencies or have larger body structures.

ADVANTAGES DISADVANTAGES
Low power consumption Limited field strength (<0.3T)
Low operating cost Very heavy
Small fringe field No quench possibility
No cryogen
Figure 9.1: Open MRI system "OPER"


9.3.2 Electromagnets
There are two categories can be used in MR scanner: Resistive and Superconducting Magnets

9.3.2.1. Resistive Magnets


Resistive magnets are made from loops of wire wrapped around a cylinder through which a large electric current is passed. These magnets are very large and use the principles of electromagnetism to generate the magnetic field, like the ones used in scrap yards to pick up cars. They are lower in cost, but need a lot of power to run: a large current must flow through the loops of wire against the natural resistance of the wire. They therefore produce a lot of heat, which requires significant cooling of the magnet coils. Resistive magnets come in two general categories: iron-core and air-core. They are typically limited to maximum field strengths of about 0.6 Tesla. They usually have an open design, which reduces claustrophobia. Figure 9.2 shows Hitachi's Airis 0.3 Tesla (air-core) system.



ADVANTAGES DISADVANTAGES
Low capital cost High power consumption
Light weight Limited field strength (<0.2T)
Can be shut off Water cooling required
Large fringe field
Figure 9.2: Hitachi's Airis 0.3 Tesla (air-core) system.
9.3.2.2 Superconducting Magnets
Superconducting magnets are the most commonly used in MRI today. Superconductors such as niobium-tin and niobium-titanium are used to make the coil windings for superconducting magnets. The magnetic field is generated by passing an electrical current through coils of wire. The wire is surrounded with a coolant, such as liquid helium, to reduce the electric resistance of the wire. At 4 Kelvin (-269 °C) the wire loses its resistance. Once a system is energized, it will not lose its magnetic field. Superconductivity allows for systems with very high field strengths, up to 12 Tesla. The ones most used in clinical environments run at 1.5 Tesla. Most superconducting magnets are bore type magnets. A


number of vacuum vessels, which act as temperature shields, surround the core. These shields
are necessary to prevent the helium from boiling off too quickly. Another advantage of
superconducting magnets is the high magnetic field homogeneity.

ADVANTAGES DISADVANTAGES

High field strength High capital costs


High field homogeneity High cryogen costs
Low power consumption Acoustic noise
High SNR Motion artifacts
Fast scanning Technical complexity
Figure 9.3: bore type magnets.
In 1997 Toshiba introduced the world's first open superconducting magnet. The system uses a special metal alloy which maintains the low temperature needed for superconductivity. The advantage of this is that the system does not need any helium refills, which dramatically reduces running costs. The open design reduces anxiety and claustrophobia. Figure 9.4 shows Toshiba's OPART 0.35 Tesla system, which combines an open design with the advantages of superconducting magnets.

Figure 9.4: Toshiba's OPART 0.35 Tesla system, which combines an open design with a superconducting magnet.


The current trend in magnet design is low field open design versus high field bore design.
Obviously it would be desirable to combine the two, and only time will tell whether this can
be done within reasonable manufacturing costs and technical/structural limitations.

9.4 Shimming
MRI requires a highly homogeneous static magnetic field. In order to produce high-resolution images, the magnetic field inhomogeneity of a high-performance MRI scanner must be maintained to the order of several ppm. After manufacturing, the magnet must be adjusted at certain points to produce a more uniform field by making small mechanical
and/or electrical adjustments to the overall field. This process is known as shimming. Because
the magnet itself is not adequately homogeneous, it is necessary to improve or “shim” the
homogeneity of the static magnetic field (Bo). A shim is a device used to adjust the
homogeneity of a magnetic field.
Shimming (or adjustment of the static magnetic field homogeneity) is accomplished by two
methods: (1) Passive shimming (2) Active shimming
Passive shimming: The mechanical adjustments, which add small pieces of iron or
magnetized materials, are typically called passive shimming. Passive shimming involves
pieces of steel with good magnetic qualities. The steel pieces are placed near the permanent or
superconducting magnet. They become magnetized and produce their own magnetic field.
Active shimming: The electrical adjustments, which use extra exciting currents, are known
as active shimming. Active shimming is performed with coils with adjustable current. Active
shimming requires passage of electric current through coils with unique geometric
configurations. The shim coils are designed to correct inhomogeneities of specific geometries.
In both cases (active and passive shimming), the additional magnetic fields (produced by
coils or steel) add to the overall magnetic field of the superconducting magnet in such a way
as to increase the homogeneity of the total field.
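As a minimal illustration of what shimming is trying to minimize (the field values below are hypothetical), homogeneity is commonly expressed as the peak-to-peak field variation in parts per million of Bo:

```python
# Sketch: expressing static-field homogeneity in ppm of Bo (illustrative values).
def homogeneity_ppm(b_max: float, b_min: float, b0: float) -> float:
    """Peak-to-peak field variation across the imaging volume, in ppm of Bo."""
    return (b_max - b_min) / b0 * 1e6

# e.g. a 1.5 T magnet whose field varies by 15 microtesla across the volume:
print(homogeneity_ppm(1.500010, 1.499995, 1.5), "ppm")  # -> 10.0 ppm
```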
9.5 Radio Frequency Coils
Radio Frequency (RF) coils are needed to receive and/or transmit the RF signals used in MRI scanners. The RF coil system comprises the set of components for transmitting and receiving the radiofrequency waves involved in exciting the nuclei, selecting slices, applying gradients and in signal acquisition. RF coils are a vital component in the performance of the radiofrequency system; they are among the most important components affecting image quality and the ability to obtain clear images of the human body. RF coils for MRI can be categorized into two different
categories: volume coils and surface coils.
9.5.1 Volume RF Coils
A volume coil is designed to provide a homogeneous RF field inside the coil, which is highly desirable for transmitting but less ideal when the region of interest is small. The large field of view of volume coils means that they receive noise from the


whole body, not just the region of interest. Volume coils need to have the area of examination
inside the coil. They can be used for transmit and receive, although sometimes they are used
for receive only. For most clinical applications a volume coil is built to perform whole-body imaging, and smaller volume coils have been constructed for the head and the extremities. These coils require a great deal of RF power because of their size, so they are often driven in quadrature in order to halve the RF power requirements. Figure 9.5 shows
two volume coils. The head coil is a transmit/receive coil; the knee coil is receive only.

(a) (b)

Figure 9.5: shows two volume coils (a) Head coil (b) Knee coil

9.5.2 Surface Coils


Surface coils have very high RF sensitivity over a small area of interest. As the name implies, surface coils are placed directly over or around the surface of the anatomy of interest, such as the temporo-mandibular joint, the orbits or the shoulder. The coil consists of single or multi-turn loops of copper wire. Surface coils have a high Signal to Noise Ratio (SNR) and allow very high-resolution imaging because of their small field of view; hence they only detect noise from the region of interest. The disadvantage is that they lose signal uniformity very quickly as you move away from the coil. In the case of a circular surface coil,
the depth penetration is about half its diameter. Surface coils make poor transmit coils because
they have poor RF homogeneity, even over their region of interest. Figure 9.6 shows a few
examples of surface coils.

Figure 9.6: A few examples of surface coils: shoulder coil, neck coil and spine coil.


9.5.3 Quadrature Coils


The quadrature coil consists of two coils placed at right angles to one another, i.e. oriented 90 degrees relative to each other. The MRI signals received by the two coils are therefore 90 degrees out of phase with each other. The advantage of this design is that it produces √2 times more signal than a single loop coil. The quadrature coil operates in the circular polarization mode. The quadrature coil can generate three types of images: a real image, an imaginary image, and a magnitude image. Nowadays, most volume coils are quadrature coils. The coils shown in Figure 9.7 are quadrature coils.
9.5.4 Phased Array Coils
Phased array coils consist of multiple small-diameter surface coils (the coil elements of the phased array) which are combined to record the signal simultaneously and independently, so that a larger area can be covered. Small surface coils deliver a higher signal-to-noise ratio (SNR) than one large-diameter coil, but they have a limited sensitive area. By combining 4 or 6 surface coils it is possible to create a coil with a large sensitive area.

Figure 9.7: QD Body Array coil and Spine Array coil.


Figure 9.7 shows the design of two phased array coils. The QD Body Array coil is a volume
coil, while the Spine Array coil is a surface coil. Phased array coils produce on average √2 times more signal than quadrature coils. Today most MRI systems come with quadrature and
phased array coils.
9.6 Other Hardware
There is more hardware needed to make an MRI system work. A very important part is the
Radio Frequency (RF) chain, which produces the RF signal transmitted into the patient, and
receives the RF signal from the patient (see figure 9.8). Actually, the receive coil is a part of
the RF chain.
9.6.1 Faraday shield
The frequency range used in MRI is the same as that used for radio transmissions. That is why MRI scanners are placed in a Faraday cage: to prevent radio waves from entering the scanner room, where they could cause artifacts in the MRI image. Someone once said: “MRI is like watching
television with a radio”. To function properly, an MRI scanner needs to sit in a specialized
room or chamber shielded against Radio Frequency (RF) interference. Without such


protection, the very weak RF signals that emanate from the patient when scanned would be overwhelmed. The cage also stops the radio frequencies produced by the scanner from interfering with equipment outside the cage.

Figure 9.8: MRI Scanner Cutaway


Furthermore, one needs a processor to process the received signal, as well as to control the
complex business of scanning.
9.7 Atomic Structure
All things are made of atoms, including the human body. Atoms are very small. Half a million
lined up together are narrower than a human hair. Atoms are organized in molecules, which
are two or more atoms arranged together. The most abundant atom in the body
is hydrogen. This is most commonly found in molecules of water (where two hydrogen atoms
are arranged with one oxygen atom, H2O) and fat (where hydrogen atoms are arranged with
carbon and oxygen atoms; the number of each depends on the type of fat).
The atom consists of a central nucleus and orbiting electrons. The nucleus is very small, one millionth of a billionth of the total volume of an atom, but it contains nearly all of the atom's mass. This mass comes mainly from particles called nucleons, which are subdivided
into protons and neutrons. Atoms are characterized in two ways. The atomic number is the
sum of the protons in the nucleus. This number gives an atom its chemical identity. The mass
number is the sum of the protons and neutrons in the nucleus. The number of neutrons and protons in a nucleus is usually balanced, so that the mass number is an even number. In some
atoms, however, there are slightly more or fewer neutrons than protons. Atoms of elements
with the same number of protons but a different number of neutrons are called isotopes. Nuclei
with an odd mass number (a different number of protons to neutrons) are important in MRI
(see later).
Electrons are particles that spin around the nucleus. Traditionally this is thought of as being
analogous to planets orbiting around the sun. In reality, electrons exist around the nucleus in a
cloud; the outermost dimension of the cloud is the edge of the atom. The position of an


electron in the cloud is not predictable as it depends on the energy of an individual electron at
any moment in time (physicists call this Heisenberg’s Uncertainty Principle). The number of
electrons, however, is usually the same as the number of protons in the nucleus.
Protons have a positive electrical charge, neutrons have no net charge and electrons are
negatively charged. So atoms are electrically stable if the number of negatively charged
electrons equals the number of positively charged protons. This balance is sometimes altered
by applying external energy to knock out electrons from the atom. This causes a deficit in the
number of electrons compared with protons and causes electrical instability. Atoms, in which
this has occurred, are called ions.
9.8 Magnetization
The earth is an electrically charged, spinning ball floating in space. Quite happily: nothing to worry about. From our physics lessons in school we may remember that a rotating electrical charge creates a magnetic field. And sure enough, the earth has a magnetic field, which we use to find our way from one place to another by means of a compass. The magnetic field strength of the earth is rather small: roughly 30 µT at the equator and 70 µT near the poles (the tesla, T, is the unit of magnetic field strength). In short we can establish that the earth is a giant spinning bar magnet, with a north and a south pole (Figure 9.9).

Figure 9.9: The north and south poles of the earth.


We have much in common with the earth. If we took a small piece of our body and put it under an electron microscope, we would see things that look rather familiar: tiny little balls which rotate around their own axes, carry an electrical charge and have particles floating around them. These balls are atoms. And atoms have everything to do with MRI, because we use them to generate our MR image. Another thing we have in common with the earth is water: our body consists largely of water.
The moving charges give rise to magnetic fields. Consequently, we might expect that
something that is charged and spinning would also possess a magnetic field. This is indeed true.


It is well known in chemistry that there are many different elements; to be precise, there are more than 110. The human body consists mainly of water, and water consists of one oxygen and two hydrogen atoms. Consider the simplest atom, hydrogen (the first element in the periodic table): its nucleus contains a single proton, orbited by one electron. This proton carries a positive electric charge and rotates (spins) about its axis.
Also the hydrogen proton behaves as if it were a tiny bar magnet with a north and a south
pole (Figure 9.10). Hydrogen protons in the body thus act like many tiny magnets. The
nucleus is said to be a magnetic dipole, and the name for its magnetism is magnetic moment.
It is essential that there be a source of protons (protons in the nuclei of hydrogen atoms, which
are associated with fat molecules and water) in order to form the MR signal.

Figure 9.10: The hydrogen proton. The positively charged hydrogen proton (+), a spinning charged particle, rotates about its axis and acts like a tiny magnet. N = north, S = south.
We conclude from the above that there are two reasons for taking hydrogen as a source to
form the MR signal or MR imaging source.
 First of all, we have a lot of them in our body; hydrogen is actually the most abundant element we have.
 Secondly, in quantum physics there is a quantity called the gyromagnetic ratio. What it represents is beyond the scope of this book; suffice it to know that this ratio is different for each nucleus. It just so happens that the gyromagnetic ratio for hydrogen is the largest: 42.57 MHz/Tesla.
9.9 Magnetic Moments
In most materials, such as soft tissue, these little magnetic moments are all oriented randomly
(see figure 9.11). That is, if one nucleus has its spin and therefore its magnetic moment pointed
up, there will be another nearby nucleus with its spin pointed down. Other magnetic moments
will be oriented in various directions. This random orientation causes all the spins and magnetic
moments to cancel, so that the net magnetization is zero. Net magnetization is symbolized by M.


Figure 9.11: The magnetic moments of protons, oriented randomly in all directions.
If the patient however, is placed in a strong magnetic field, the magnetic moments will align
themselves much as a compass needle aligns itself with the earth's magnetic field. Although all the
magnetic moments are illustrated as being aligned in the same direction as the external magnetic
field, in fact nearly as many align against the field as with it. It is a result of quantum
mechanics that the moments must align either with the field or against it. A small excess of
moments aligned with the field gives the patient a net magnetization, M as shown in figure 9.12.

Figure 9.12: Hydrogen protons align (a) parallel or (b) anti-parallel to Bo, and (c) precess about it, giving a net magnetization M with longitudinal component Mz and transverse component mxy.
The atoms in any material are in constant thermal motion, and thus the nuclei are being continually banged out of alignment. At any particular time however, slightly more of the nuclei
will align with the field than against it, creating net magnetization of the patient. The patient
becomes a magnet.
For those who really want to know: hydrogen is not the only element we can use for MRI. In fact any element which has an odd number of particles in the nucleus can be used. Some elements which can be used are listed in Table 9.1.
The protons in the hydrogen of our molecules are like a lot of tiny bar magnets spinning about their own axes. As is well known, two north poles or two south poles of two magnets repel each other, while two poles of opposite sign attract each other. In the human body these tiny bar magnets are ordered in such a way that the magnetic forces cancel out. Human bodies are, magnetically speaking, in balance.
As we saw at the beginning of this chapter in the section about the hardware, the magnets used in MR imaging come in different field strengths. For example, the magnetic field of a 1 Tesla magnet is about 20,000 times stronger than the Earth's magnetic field! This shows that we are working with potentially dangerous equipment.
Table 9.1: MRI-friendly elements

Isotope      Symbol   Spin Quantum Number   Gyro Magnetic Ratio (MHz/T)
Hydrogen     1H       1/2                   42.6
Carbon       13C      1/2                   10.7
Oxygen       17O      5/2                   5.8
Fluorine     19F      1/2                   40.0
Sodium       23Na     3/2                   11.3
Magnesium    25Mg     5/2                   2.6
Phosphorus   31P      1/2                   17.2
Sulphur      33S      3/2                   3.3
Iron         57Fe     1/2                   1.4

If a person is placed in the MRI scanner, some interesting things happen to the hydrogen protons:
1. They align with the magnetic field. This happens in two ways: parallel or anti-parallel.
2. They precess or “wobble” around the direction of the external magnetic field (the z-axis) due to the magnetic moment of the atom (see figure 9.12).
They precess at a frequency called the Larmor frequency, which, given its importance, needs to be explained further. The Larmor frequency can be calculated from the following equation:

ωo = γ × Bo

Where ωo = precessional or Larmor frequency (MHz)
γ = Gyro Magnetic Ratio (MHz/T)
Bo = Magnetic field strength (T)
Here the gyromagnetic ratio and the magnetic field strength, the two quantities discussed before, come together. The importance of this equation lies in the fact that the Larmor frequency is needed to calculate the operating frequency of the magnetic resonance imaging system. For example, for a 1.5 Tesla MRI system the Larmor or precessional frequency is:


42.57 × 1.5 = 63.855 MHz


The precessional frequencies of 1.0 T, 0.5 T, 0.35 T and 0.2 T systems work out to be 42.57 MHz, 21.285 MHz, 14.8995 MHz and 8.514 MHz respectively. When the strong magnetic field of the scanner is applied to the protons, they can align with the field in two ways: parallel and anti-parallel.
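As a minimal sketch, these operating frequencies follow directly from the Larmor equation, using the 42.57 MHz/T gyromagnetic ratio quoted above:

```python
# Sketch of the Larmor equation wo = gamma * Bo, reproducing the operating
# frequencies quoted in the text for common field strengths.
GAMMA_H = 42.57  # gyromagnetic ratio of hydrogen (MHz/T), as given in the text

def larmor_mhz(b0_tesla: float) -> float:
    """Precessional (Larmor) frequency in MHz for protons at field Bo."""
    return GAMMA_H * b0_tesla

for b0 in (1.5, 1.0, 0.5, 0.35, 0.2):
    print(f"{b0:>4} T -> {larmor_mhz(b0):.4f} MHz")
# 1.5 T -> 63.855 MHz, matching the worked example above.
```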
The two cases can also be called the low-energy state (parallel) and the high-energy state (anti-parallel). The distribution of protons between the two states is not equal. As a mental picture, the protons can be compared to people, most of whom are lazy: they prefer to be in a low-energy state. More protons align parallel to the direction of the applied magnetic field (the low-energy state) than anti-parallel (the high-energy state) (Figure 9.13). However, the difference between the two states is not large.

Figure 9.13: Hydrogen atoms in the magnetic field (Bo): slightly more align parallel than anti-parallel, leaving a small excess of parallel atoms.
For example, the excess number of protons aligned parallel (the low-energy state) in a 0.5 T field is only 3 per million (3 ppm = 3 parts per million); in a 1.0 T system there are 6 per million, and in a 1.5 T system 9 per million. So the excess number of protons is proportional to Bo. This is also the reason why 1.5 T systems make better images than systems with lower field strength.
The excess of 9 protons per million at 1.5 T does not seem very many, but in real life it adds up to quite a number. For example, let us calculate how many excess protons there are in a single voxel (volume element) at 1.5 T.
 Assume a voxel is 2 x 2 x 5 mm = 0.02 ml
 Avogadro's Number says that there are 6.02 x 10^23 molecules per mole.
 1 mole of water weighs 18 grams (O-16 + 2 H-1), has 2 moles of Hydrogen and fills 18 ml, so…………
 1 voxel of water has 2 x 6.02 x 10^23 x 0.02 / 18 = 1.338 x 10^21 total protons


 The total number of excess protons is therefore 1.338 x 10^21 x 9 / 10^6 ≈ 1.2 x 10^16.

In the end we see that there is a net magnetization (the sum of all tiny magnetic fields of each
proton) pointing in the same direction as the system's magnetic field.
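The same arithmetic can be written as a short sketch, using the voxel, Avogadro and 9 ppm figures above:

```python
# Sketch of the voxel calculation above: total protons in a 0.02 ml voxel of
# water, and the excess aligned parallel at 1.5 T (9 per million, per the text).
AVOGADRO = 6.02e23               # molecules per mole
VOXEL_ML = 2 * 2 * 5 / 1000.0    # 2 x 2 x 5 mm voxel = 0.02 ml
MOLAR_VOLUME_ML = 18.0           # 1 mole of water weighs 18 g and fills 18 ml
PROTONS_PER_MOLECULE = 2         # two hydrogen nuclei per H2O molecule

total = PROTONS_PER_MOLECULE * AVOGADRO * VOXEL_ML / MOLAR_VOLUME_ML
excess = total * 9e-6            # 9 excess protons per million at 1.5 T

print(f"total protons per voxel:  {total:.3e}")   # ~1.338e21
print(f"excess protons per voxel: {excess:.3e}")  # ~1.2e16
```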
Now we can represent the net magnetization by an easy-to-use vector, in order to see what happens to it in MRI. A vector (the red arrow in Figure 9.14) has a direction and a magnitude. We imagine a frame of rotation, which is a set of axes called X, Y and Z. The Z-axis always points in the direction of the main magnetic field, while X and Y point at right angles to Z. Here we see the (red) net magnetization vector pointing in the same direction as the Z-axis. The net magnetization is now called MZ, or longitudinal magnetization.

Bo
Z Net Z
Magnetization
(MZ)
Y Y

X X

Overall magnetization of nuclei = Sum


of vectors from individual nuclei
Mahmood & Haider

Figure 9.14: Direction and a force of net magnetization


To obtain an image from a patient it is not enough simply to put him into the magnet. We have to do a little more than that. What else we have to do is discussed in the following pages. The following steps can be divided into Excitation, Relaxation, Acquisition, Computing and Display. Before that, we should understand the meaning of some important expressions: in-phase and de-phase.
9.10 In-Phase and De-Phase
First, what do we mean by phase? The following simple example illustrates it.
 In Figure 9.15 we see two wheels, each with an arrow. (a) The wheels rotate at the same speed and at the same angle; the arrows will therefore point in the same direction at any time. We say the wheels rotate in the same phase (in-phase). (b) Another two wheels rotate at a different angle from one another; therefore, we say they are out of phase (de-phased).


Figure 9.15: (a) Two wheels rotating in-phase: sine waves of the same frequency with simultaneous peaks. (b) Two wheels rotating de-phased: sine waves of the same frequency which are a quarter cycle (phase angle θ = 90°) out of phase.

9.11 RF Pulse
As we said in section 9.9, if a person is placed in the MRI scanner, the first thing that happens is the precession of the spins around the direction of the external magnetic field (the z-axis). Now what happens if another magnetic field is temporarily switched on in a different direction (in the direction of the x- or y-axis, say)?
Precession will occur around the direction of that magnetic field also.
If the second applied magnetic field is static, the resultant movement of the net magnetization
of a spin isochromat will be a complicated motion due to precession from the two static
fields. However, if the second magnetic field which is temporarily applied is oscillating with
the frequency of precession of the precessing spins a simple rotation of the net magnetization
vector results. (Rotation of magnetization into the x-y-plane is a "90° pulse". The dephasing of
the components of magnetization in the x-y-plane starts to occur straight away, as does the re-
growth of magnetization in the z-direction as shown in next section.)
9.12 Excitation
Before the system starts to acquire data, it must perform a quick measurement (also called a pre-scan) to determine the frequency at which the protons are spinning (the Larmor frequency). Determining this frequency is important because the system uses it for the next step.
Once the Larmor frequency is determined the system will start the acquisition. For now we only send a radio frequency (RF) pulse (an RF pulse is a magnetic field, the direction of which oscillates at the Larmor frequency) into the patient and we look at what happens.
The oscillating magnetic field at the Larmor frequency is switched on for a very small
amount of time (a few milliseconds) to achieve such a rotation. This magnetic field is called
an RF pulse; it is short (a burst or pulse) and the Larmor frequency for MRI is in the radio


frequency range (tens of MHz). This process is sometimes called RF excitation of the spin
system. Different amounts of rotation can be achieved by applying the oscillating magnetic
field for different durations.
To understand it more deeply can through the following example:
Let us assume we work with a 1.5 Tesla system. The centre or operating frequency of the system is 63.855 MHz (calculated using the Larmor equation: ωo = γ × Bo = 42.57 MHz/T × 1.5 T). In order to manipulate the net magnetization we will therefore have to send a Radio Frequency (RF) pulse with a frequency that matches the centre frequency of the system: 63.855 MHz. This is where the Resonance comes from in the name Magnetic Resonance Imaging. You know resonance from the opera singer who sings a high note and the crystal glass shatters to pieces. MRI
works with the same principle. Only protons that spin with the same frequency as the RF pulse
will respond to that RF pulse. If we would send an RF pulse with a different frequency, let's
say 59.347 MHz, nothing would happen. Therefore, by sending an RF pulse at the Larmor
Frequency, with certain strength (amplitude) and for a certain period of time it is possible to
rotate the net magnetization into a plane perpendicular to the Z-axis, in this case the X-Y
plane (Figure 9.16).

Figure 9.16: An RF pulse at the Larmor frequency (63.855 MHz at a constant Bo of 1.5 T) rotates the net magnetization through a flip angle (FA) of 90° into the plane perpendicular to the Z-axis.

(Note how the use of vectors makes it easy to imagine what is happening; without vectors it would be quite impossible to draw this event.)
We just “flipped” the net magnetization 90o. Later we will see that there is a parameter in
our pulse sequence, called the Flip Angle (FA), which indicates the amount of degrees we
rotate the net magnetization. It is possible to flip the net magnetization any degree in the range
from 1o to 180o. For now we only use an FA of 90o. This process is called excitation.
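The flip angle is set by the amplitude and duration of the RF pulse. As a sketch of the standard NMR relation FA = γ × B1 × τ (with γ expressed in rad/s per tesla; this formula is standard physics rather than derived in the text, and the B1 value below is purely illustrative):

```python
# Sketch (standard NMR relation, not from the text): the flip angle produced
# by an on-resonance RF pulse is FA = gamma * B1 * tau, where B1 is the RF
# field amplitude and tau its duration.
import math

GAMMA_RAD = 2 * math.pi * 42.57e6  # hydrogen gyromagnetic ratio (rad/s per T)

def flip_angle_deg(b1_tesla: float, duration_s: float) -> float:
    """Rotation of the net magnetization, in degrees."""
    return math.degrees(GAMMA_RAD * b1_tesla * duration_s)

# e.g. how long must an illustrative 5.9 microtesla B1 field last for 90 deg?
b1 = 5.9e-6
tau = math.radians(90) / (GAMMA_RAD * b1)
print(f"{tau * 1e6:.0f} microseconds -> {flip_angle_deg(b1, tau):.1f} degrees")
```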


9.13 Relaxation
Now it becomes interesting. The net magnetization has been rotated 90 degrees into the x-y plane, which is the same as saying that the protons have been raised to a higher energy state. This occurs because the protons absorbed energy from the RF pulse; this is called perturbation. The protons do not “like” to continue in the high-energy situation (excitation); they tend to return to the normal, low-energy situation (equilibrium). This can be compared with the abnormal situation of walking on your hands: it is possible, but you do not want to keep it up for long, and you inevitably prefer the natural state of walking on your feet. A general principle of thermodynamics is that every system seeks its lowest energy level. The same holds for the protons: they prefer to line up with the main magnetic field or, in other words, to be in a low-energy state. Relaxation means the return of a perturbed system to its original situation (equilibrium), and each relaxation process can be characterized by a relaxation time. The relaxation process can be divided into two parts: T1 and T2 relaxation.
9.13.1 T1 Relaxation
T1 is the spin-lattice relaxation time, which relates to the recovery of the magnetization along the z direction after the RF pulse. We can think of it as the time it takes tissue to recover from an RF pulse so that another pulse can be given and still produce signal. T1 is called the spin-lattice relaxation
time because it refers to the time it takes for the spins to give the energy they obtained from
the RF pulse back to the surrounding tissue (lattice) in order to go back to their equilibrium
state. T1 relaxation describes what happens in the Z direction. So, after a little while, the
situation is exactly as before we sent an RF pulse into the patient. In other words,
immediately after the 90° pulse, the magnetization Mxy precesses within the x–y plane,
oscillating around the z-axis with all protons rotating in-phase. After the magnetization has
been flipped 90° into the x–y plane, the RF pulse is turned off. Therefore, after the RF pulse
is turned off, two things will occur:
1. The spins will go back to the lowest energy state.
2. The spins will get out of phase with each other.
The protons return to their original situation (equilibrium) by releasing the absorbed energy in the form of (very little) warmth and RF waves. That means that, in principle, the net magnetization rotates back to align itself with the Z-axis. After the RF excitation pulse stops, the net magnetization will re-grow along the Z-axis, while emitting radio-frequency waves (Figure 9.17).
T1 relaxation describes what happens in the Z direction. So, after a little while, the situation
is exactly as before we sent an RF pulse into the patient. T1 relaxation is also known as Spin-
Lattice relaxation, because the energy is released to the surrounding tissue (lattice). So far, so
good! This process is relatively easy to understand because one can, somehow, picture this in
one's mind.


Figure 9.17: After the RF excitation pulse stops, the net magnetization (MZ) re-grows along the Z-axis (stages 1-4).

9.13.1.1 T1 Relaxation Curves


T1 relaxation happens to the protons in the volume that experienced the 90° excitation pulse. However, not all protons are bound in their molecules in the same way; this is different for each tissue. One 1H atom may be bound very tightly, such as in fat tissue, while another has a much looser bond, such as in water. Tightly bound protons will release their energy to their surroundings much more quickly than loosely bound protons. The rate at which they release their energy is therefore different. The rate of T1 relaxation is depicted in Figure 9.18.
The curve shows that at time = 0, right after the RF pulse, there is no magnetization in the Z-direction. But immediately MZ starts to recover along the Z-axis. T1 is a time constant: it is defined as the time it takes for the longitudinal magnetization (MZ) to reach 63% of the original magnetization. In other words, before the pulse is sent the magnetization MZ lies along the Z-axis at its maximum value (100%); after the pulse is sent, MZ goes down to zero on the Z-axis and appears fully in the XY plane (i.e. a 90-degree turn along a spiral path, as shown in Figure 9.17). This means that the protons are in an excited state. Immediately after the pulse is cut off, the magnetization begins declining in the XY plane and at the same time grows along the Z-axis, reaching 63% of its original value after a time called T1 (i.e. the protons begin to return to equilibrium, their primary status), according to the following equation:

Mz = Mo (1 − e^(−t/T1))


For example: after t = T1, Mz = Mo (1 − e^(−1)) ≈ 0.63 Mo, or 63% of the original magnetization; after t = 3 T1, Mz = Mo (1 − e^(−3)) ≈ 0.95 Mo, or 95%.

Figure 9.18: The rate of T1 relaxation: MZ recovers from zero toward Mo (100%), reaching 63% of Mo after time T1.


A similar curve can be drawn for each tissue, as shown in Figure 9.19, which illustrates four tissues found in the head. Each tissue releases energy (relaxes) at a different rate, and that is why MRI has such good contrast resolution.

Figure 9.19: Example T1 curves for four tissues found in the head (fat, white matter, gray matter, CSF), plotted as MZ against time (msec). Short T1: tissue with a "tighter" molecular structure; long T1: tissue with a "looser" molecular structure.
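As a minimal sketch, curves like those in Figure 9.19 can be computed directly from the T1 equation above. The T1 values below are approximate, taken from Table 9.2 further on (with CSF approximated by water, and the garbled gray matter entry assumed to be about 405 ms), and are illustrative only:

```python
# Sketch of T1 recovery, Mz(t) = Mo * (1 - exp(-t / T1)), for four tissues.
# T1 values are approximate/illustrative (see Table 9.2).
import math

def mz(t_ms: float, t1_ms: float, m0: float = 100.0) -> float:
    """Longitudinal magnetization (% of Mo) recovered t_ms after a 90 deg pulse."""
    return m0 * (1.0 - math.exp(-t_ms / t1_ms))

tissues = {"fat": 360, "white matter": 345, "gray matter": 405, "CSF": 2700}
for name, t1 in tissues.items():
    print(f"{name:>12}: Mz(T1) = {mz(t1, t1):.0f}%,  Mz(500 ms) = {mz(500, t1):.0f}%")
# Each tissue reaches 63% of Mo after its own T1 -- the source of T1 contrast.
```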


9.13.2 T2 Relaxation
As mentioned before, the relaxation process is divided into two parts. The second part, T2 relaxation, is a bit more complicated. We have found that students in general, and even some radiologists, have difficulty understanding T2, as well as the relationship between T1 and T2.
First of all, it is very important to realize that T1 and T2 relaxation are two independent
processes. The one has nothing to do with the other. The only thing they have in common is
that both processes happen simultaneously. T1 relaxation describes what happens in the Z
direction, while T2 relaxation describes what happens in the X-Y plane. That's why they have
nothing to do with one another. I cannot emphasize this enough.
Let's go back one step and have a look at the net magnetization vector before we apply
the 90o RF pulse. The net magnetization vector is the sum of all the small magnetic fields of
the protons, which are aligned along the Z-axis.
Each individual proton is spinning around its own axis. Although they may be rotating with
the same speed, they are not spinning in-phase or, in other words, there is no phase
coherence. The arrows of the two wheels from the previous example would point in different
directions. When we apply the 90o RF pulse something interesting happens. Apart from
flipping the magnetization into the X-Y plane, the protons will also start spinning in-phase!!
So, right after the 90o RF pulse the net magnetization vector (now called transverse
magnetization) is rotating in the X-Y plane around the Z-axis at the Larmor frequency
(Figure 9.20A).

Figure 9.20: De-phasing and free induction decay (FID). (A) Right after the 90° pulse the transverse magnetization Mxy rotates in the X-Y plane with all spins in phase, giving maximum signal. (B, C) T2 de-phasing progressively reduces the phase coherence. (D) Coherence is lost and the signal falls to a minimum; the decaying signal over time is the free induction decay (FID).


That is, transverse magnetization is formed by tilting the longitudinal magnetization into the transverse plane using a radiofrequency pulse. The transverse magnetization induces an MR signal in the radiofrequency coil; immediately after its formation it has a maximum magnitude and all of the protons are in phase. The vectors therefore all point in the same direction, because they are in phase. However, they do not stay like this. The transverse magnetization starts decreasing in magnitude immediately as the protons begin to go out of phase. This process of de-phasing and reduction in the amount of transverse magnetization is called transverse relaxation.
This is similar to a group of soldiers marching one behind the other in step (in-phase). If one of them stumbles, a state of mini-chaos results among the other soldiers, who then walk off in different directions: the soldiers have gone out of phase (de-phased).
A similar situation happens with the vectors in MRI. Remember that each proton can be thought of as a tiny bar magnet with a north and a south pole, and that two poles of the same sign repel each other. Because the magnetic fields of the vectors influence one another, one vector is slowed down while another might speed up. The vectors rotate at different speeds and are therefore no longer able to point in the same direction: they start to de-phase. At first the amount of de-phasing is small (Figure 9.20B, C), but it quickly increases until there is no phase coherence left: no vector points in the same direction as another anymore.
Meanwhile the whole lot is still rotating around the Z-axis in the X-Y plane (Figure 9.20D). The characteristic time representing the decay of the signal to 1/e, or 37%, is called the T2 relaxation time; 1/T2 is referred to as the transverse relaxation rate. This process of going from a totally in-phase situation to a totally out-of-phase situation is called T2 relaxation.

9.13.2.1 T2 Relaxation Curves


Just like T1 relaxation, T2 relaxation does not happen at once. Again, it depends on how the
Hydrogen proton is bound in its molecule and that again is different for each tissue.
Right after the 90° RF pulse all the magnetization is “flipped” into the XY-plane. The net magnetization changes name and is now called MXY. At time = 0 all spins are in phase, but they immediately start to de-phase. T2 relaxation is also a time constant: T2 is defined as the time it takes for the transverse magnetization to de-phase (decay) to 37% of its original value. Immediately after the pulse is cut off, the magnetization begins declining in the XY plane according to the following equation:

Mxy = Mo e^(−t/T2)

The rate of de-phasing is different for each tissue. Fat tissue will de-phase quickly, while water de-phases much more slowly. Here too we can draw a curve (Figure 9.21).


Figure 9.21: Transverse magnetization decay (T2): the signal falls from 100%, and each tissue (A, B) reaches 37% of the original value after its own T2 (time in ms).
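A corresponding sketch of the T2 decay equation, using approximate T2 values from Table 9.2 below (illustrative only):

```python
# Sketch of transverse decay, Mxy(t) = Mo * exp(-t / T2), for a few tissues.
import math

def mxy(t_ms: float, t2_ms: float, m0: float = 100.0) -> float:
    """Remaining transverse magnetization (% of Mo) t_ms after the 90 deg pulse."""
    return m0 * math.exp(-t_ms / t2_ms)

for name, t2 in {"fat": 30, "liver": 50, "water": 2700}.items():  # Table 9.2
    print(f"{name:>6}: Mxy(20 ms) = {mxy(20, t2):5.1f}%,  Mxy(T2) = {mxy(t2, t2):.0f}%")
# Fat de-phases quickly, water much more slowly, as stated above; every tissue
# is down to 37% of its original signal after its own T2.
```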

One more remark about T2: it happens much faster than T1 relaxation. T2 relaxation happens
in tens of milliseconds, while T1 can take up to seconds. T2 relaxation is also called spin–spin
relaxation because it describes interactions between protons in their immediate surroundings
(molecules).
Table 9.2: Approximate spin density (SD) and relaxation times (T1, T2) for various tissues.

Tissue            SD     T1 (ms)   T2 (ms)
Water             100    2700      2700
Skeletal muscle   79     720       55
Cardiac muscle    80     725       60
Liver             71     290       50
Fat               -      360       30
Bone              <12    <100      <10
Spleen            79     570       50
Kidney            81     505       50
Gray matter       84     405       105
White matter      70     345       65

9.13.2.2 T2* Relaxation


All relaxation mechanisms mentioned so far are heavily influenced by temperature and
molecular environment. Transverse relaxation is the result of random interactions at the
atomic and molecular levels. Transverse relaxation is primarily related to the intrinsic field
caused by adjacent protons (spins) and hence is called spin-spin relaxation. Transverse
relaxation causes irreversible de-phasing of the transverse magnetization.
By contrast, the so-called T2* relaxation (a variant of T2) is the result of de-phasing processes due to an inhomogeneous magnetic field, which can be minimized by manual adjustment ("shimming"). Since T2* is usually much smaller than T2, the signal decay of an FID (see Figure 9.20) is almost completely caused by T2* effects. In general, T1 > T2 > T2*.
There is also a reversible bulk field de-phasing effect caused by local field
inhomogeneities, and its characteristic time is referred to as T2* relaxation. These additional
de-phasing fields come from the main magnetic field inhomogeneity, the differences in
magnetic susceptibility among various tissues or materials, chemical shift, and gradients
applied for spatial encoding. This de-phasing can be eliminated by using a 180° pulse, as in a
spin-echo sequence. Hence, in a spin-echo sequence, only the “true” T2 relaxation is seen.
In gradient-echo (GRE) sequences, there is no 180° refocusing pulse, and these de-phasing
effects are not eliminated. Hence, transverse relaxation in GRE sequences (i.e., T2*
relaxation) is a combination of “true” T2 relaxation and relaxation caused by magnetic field
inhomogeneities. T2* is shorter than T2 (Figure 9.22), and their relationship can be expressed
by the following equation, where γ is the gyromagnetic ratio:
1/T2* = 1/T2 + γ ΔBinhom, or
1/T2* = 1/T2 + 1/T2′
Where 1/T2′ = γ ΔBinhom, and ΔBinhom is the magnetic field inhomogeneity across a voxel.
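A quick numerical sketch of this relation (taking γ in Hz/T for illustration, and an assumed 1 µT inhomogeneity across the voxel; both values are illustrative, not from the text):

```python
# Sketch of the T2* relation above: relaxation rates add,
# 1/T2* = 1/T2 + gamma * dB_inhom.
GAMMA_HZ = 42.57e6  # gyromagnetic ratio for hydrogen (Hz/T)

def t2_star(t2_s: float, db_inhom_tesla: float) -> float:
    """Effective transverse relaxation time from the true T2 and the
    field inhomogeneity across a voxel (seconds and tesla)."""
    rate = 1.0 / t2_s + GAMMA_HZ * db_inhom_tesla
    return 1.0 / rate

# e.g. T2 = 60 ms with an assumed 1 microtesla inhomogeneity across the voxel:
print(f"T2* = {t2_star(0.060, 1e-6) * 1000:.1f} ms")  # much shorter than T2
```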
Remember this:
 T1 and T2 relaxation are two independent processes, which happen simultaneously.
 T1 happens along the Z-axis; T2 happens in the X-Y plane.
 T2 is much quicker than T1
When both relaxation processes are finished, the net magnetization vector is aligned with the main magnetic field (B0) again and the protons are spinning out of phase: the situation before we transmitted the 90° RF pulse.

Figure 9.22: T2 and T2* relaxation curves of signal against echo time; both decay to 37%, but T2* decay is faster and T2* is shorter than T2.


9.14 Acquisition
During the relaxation processes the spins shed their excess energy, which they acquired from
the 90o RF pulse, in the shape of radio frequency waves. In order to produce an image we need
to pick up these waves before they disappear into space. This can be done with a Receive coil.
The receive coil can be the same as the Transmit coil or a different one. An interesting, but
ever so important, fact is the position of the receive coil.
The receive coil must be positioned at right angles to the main magnetic field (B0). Failing
to do so will result in an image without signal. This is why: if we open up a coil we see it is
basically nothing but a loop of copper wire. When a magnetic field goes through the loop, a
current is induced (Figure 9.23).

Bo

Mahmood & Haider


Figure 9.23: A magnetic field goes through the loop, a current is induced.
Bo is a very strong magnetic field; much stronger than the RF signal we are about to receive.
That means if we position the coil such that Bo goes through the coil an enormous current is
induced, and the tiny current induced by the RF wave is overwhelmed. We will only see a lot
of speckles (called: noise) in our image. Therefore, we have to make sure that the receive coil
is positioned in such a way that B0 can't go through the coil. The only way to achieve this is to
position the receive coil at right angles to B0 as shown in Figure 9.24.
It is quite interesting to try this for yourself with your scanner. Just make a series of scans
where you position the receive coil at different angles. Start with the coil at a right angle with
B0, and then turn it a bit such that B0 is allowed to run through the coil. Next turn it a bit
further until B0 runs entirely through the coil. You will see your image degrade very quickly.
At some stage the system is probably not able to “tune” the coil anymore and won't be able to
make a scan.


Figure 9.24: The receive coil positioned at right angles to B0.

Remember this:
 The only proper way to position the receive coil is at right angles to Bo.
Note: Many coils are specifically designed for a certain body part. Take for instance the Head
coil; if you fix the coil on the scanner table it seems that B0 runs through the coil. This is only an optical illusion. The coil is designed such that the loops of copper wire, which make up the coil, are at right angles to B0. Designing a coil for a bore type magnet, where B0 runs through the length of the body, is exceptionally difficult. If you open up a head coil you will probably see two copper wire loops, which are saddle shaped and positioned at right angles to one another. There are two coils in order to receive enough signal, because saddle-shaped coils are relatively inefficient.
According to Faraday, a radio frequency wave has an electric and a magnetic component, which are at right angles to one another, oscillate together, and both move in the same direction at the speed of light (Figure 9.25).

Figure 9.25: The electric field (E) and magnetic field (B) components of a radio frequency wave, showing the propagation direction and wavelength (λ).


It is the magnetic component in which we are interested, because that is what induces the current in the receive coil.
Positioning the coils so that they form a right angle with B0 means we can only receive signals from processes that happen at right angles to B0, which happens to be T2 relaxation. T2 relaxation is a decaying process, which means phase coherence is strong in the beginning but rapidly becomes less until there is no phase coherence left. Consequently, the signal that is received is strong in the beginning and quickly becomes weaker due to T2 relaxation (Figure 9.26).

Figure 9.26: Free induction decay (FID): the MXY signal following the 90° pulse.

The signal is called the Free Induction Decay (FID). The FID is the signal we would receive in the absence of any magnetic field inhomogeneity. In the presence of field inhomogeneity, T2 decay goes much faster due to local (microscopic) magnetic field variations and chemical shift, which are known as T2* effects. The signal we receive is therefore much shorter than T2: the actual signal decays very rapidly, reducing to practically zero in about 40 milliseconds. This poses a problem, as we will see later.

9.15 Computing and Display


In general, an MRI system consists of five major components: the magnet, the gradient system, the RF coil system, the receiver, and the computer system. Figure 9.27 shows the entire process graphically.
The received signal (Figure 9.28) is then fed into a computer; the computer system itself is beyond the scope of this book.


Figure 9.27: General major components of an MRI system.


Figure 9.28: The entire process graphically: excitation, relaxation, receiving, computing and viewing.


So far we have learned how magnetic resonance imaging (MRI) works. This can be likened to watching something happen without taking part. Now it is time to go a little deeper and take part in what is happening.
9.16 Fourier Transformation
The Fourier transform (FT) is a mathematical technique for converting time domain data to frequency domain data, and vice versa. In order to transfer MR data from the time domain (the FID) to the frequency domain (the spectrum), we have to apply an FT. Using the FT, all signals can be separated by their frequencies; the intensities in the spectrum are defined by the maximum FID amplitudes of the respective frequency components. Application of an FT to the signal in Figure 9.29a leads to a three-line spectrum with more intensity on the higher frequency signal (Figure


9.29c). If there were no signal damping in the FID, all lines in the spectrum would be infinitely narrow.

Figure 9.29: The Fourier transform (FT) converts time domain data (a, b) into frequency domain data (c), and vice versa (intensity against time or frequency).
In real life there is always relaxation leading to such damping (see Figure 9.30), and therefore we always find line broadening in the spectra, which depends on the exponential decay of the FID: the faster the decay, the broader the lines.

Figure 9.30: The decay of the FID due to relaxation follows an exponential function (intensity against time).
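This behaviour is easy to reproduce numerically. The following sketch (with illustrative frequencies and decay constants, not values from the text) builds an FID from two damped cosines and Fourier-transforms it with NumPy; the faster-decaying component yields the lower, broader spectral line:

```python
# Sketch of Figures 9.29/9.30: an FID is a damped oscillation in the time
# domain; its Fourier transform is a peak in the frequency domain, and faster
# decay (shorter T2) gives a lower, broader line.
import numpy as np

fs = 2000.0                        # sampling rate (Hz), illustrative
t = np.arange(0, 1.0, 1.0 / fs)    # 1 s of signal

def fid(freq_hz: float, t2_s: float) -> np.ndarray:
    """Free induction decay: oscillation at freq_hz damped with time constant T2."""
    return np.cos(2 * np.pi * freq_hz * t) * np.exp(-t / t2_s)

# Two components: 100 Hz decaying quickly, 300 Hz decaying slowly.
spectrum = np.abs(np.fft.rfft(fid(100.0, 0.05) + fid(300.0, 0.2)))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

for f0 in (100.0, 300.0):          # report the height of each spectral line
    i = np.argmin(np.abs(freqs - f0))
    print(f"line near {f0:.0f} Hz: relative height {spectrum[i]:.1f}")
```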
9.17 Gradient Coils
As we explained previously, to produce an image you must stimulate the hydrogen nuclei in the body and then determine the location of those nuclei within the body. These tasks are accomplished using the gradient coils.


To make this clearer, consider the following thought experiment. If we assume a completely homogeneous magnetic field (this ideal situation does not exist), then all the protons in the body will spin at the same Larmor frequency. This also means that all protons, when returning to equilibrium, give the same signal. In this case we would not know whether the signal came from the head or the foot, so we would not get a clear image.
The solution to our problem can be found in the characteristics of the RF-wave, which are: phase, frequency and amplitude.
First, we will divide the body up into volume elements, also known as voxels.
 The protons within a given voxel will emit an RF wave with a known phase and frequency.
 The amplitude of the signal depends on the amount of protons in the voxel.
The answer to our problem is: Gradient Coils
The gradient coils are resistive electromagnets, which enable us to create additional magnetic fields that are, in a way, superimposed on the main magnetic field B0. The gradient coils are used to spatially encode the positions of the MRI spins by varying the magnetic field linearly across the imaging volume, such that the Larmor frequency varies as a function of position.
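
As a hedged sketch of this spatial encoding (the 10 mT/m gradient strength is an assumption for illustration; 63.6 MHz is the 1.5 T Larmor frequency used elsewhere in this chapter):

    # Larmor frequency as a function of position along a linear gradient.
    gamma = 42.58e6        # gyromagnetic ratio for protons, Hz per tesla
    f_iso = 63.6e6         # Larmor frequency at the iso-centre (1.5 T), Hz
    Gz = 10e-3             # assumed gradient strength, T/m

    def larmor_frequency(z_m):
        """Precession frequency (Hz) at z metres from the iso-centre."""
        return f_iso + gamma * Gz * z_m

    # 10 cm toward the head: about 42.6 kHz above the iso-centre frequency.
    print(larmor_frequency(0.10) - f_iso)   # ~42580.0 Hz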
To achieve adequate image quality and frame rates, the gradient coils in the MRI imaging
system must rapidly change the strong static magnetic field by approximately 5% in the area
of interest. High-voltage (operating at a few kilovolts) and high-current (100s of amps) power
electronics are required to drive these gradient coils. Notwithstanding the large power
requirements, low noise and stability are key performance metrics since any ripple in the coil
current causes noise in the subsequent RF pickup. That noise directly affects the integrity of
the images. To differentiate tissue types, the MRI systems analyze the magnitude of the
received signals. Excited nuclei continue to emit a signal until the energy absorbed during the
excitation phase has been released. The time constant of these exponentially decaying signals
ranges from tens of milliseconds to over a second; the recovery time is a function of field
strength and the type of tissue. It is the variations in this time constant that allow different
tissue types to be identified.
9.18 Magnetization Gradients
There are three individual gradients, as shown in figure 9.31. An MRI system must have 3 sets
of wires (x, y, and z gradient coils) to produce gradients in three dimensions and thereby
create an image slice over any plane within the patient's body. Each set can create a magnetic
field in a specific direction: x, y or z. When a current is fed into the Z gradient, then a
magnetic field is generated in the Z direction (Figure 9.31A). The same goes for the other
gradients. (Figure 9.31B, C).
The application of each gradient field and the excitation pulses must be properly
sequenced, or timed, to allow the collection of an image data set. Therefore, in a transverse


image, the Z gradient would be used in “selecting” a slice of tissue to image. That means the
spatial location of the 2D plane to be imaged is controlled by changing the excitation
frequency.

Figure 9.31: MRI scanner gradient magnet sets (the individual X, Y, and Z gradient coils, each superimposing a field Gx, Gy or Gz on B0).

After the excitation sequence is complete, the X gradient may be used for frequency encoding: a properly applied gradient in the X direction spatially changes the resonant frequency of the nuclei as they return to their static position, and the frequency information of the signal can then be used to locate the position of the nuclei in the X direction. Similarly, a gradient field properly applied in the Y direction can be used to spatially change the phase of the resonant signals and hence to detect the location of the nuclei in the Y direction; that is, the Y gradient provides phase encoding. In short, by applying gradient and RF-excitation signals in the proper sequence and at the proper frequency, the MRI system maps out a 3-D section of the body. Figure 9.32 shows schematically how the 3 gradient coils form a cylinder. This cylinder is then placed in the bore of the main magnet coil.


Figure 9.32: Diagram of coils within the bore of the magnet that are used to create gradients. The main magnet coil creates a uniform magnetic field; the X, Y, and Z magnetic coils create magnetic fields varying from left to right, from top to bottom, and from head to toe, respectively; the radio frequency transmitter and receiver sends and receives radio signals.
Let's move on and discuss how the gradients are used to code the signal.
9.19 Signal Coding
To explain this subject clearly, let us make some assumptions: we consider an axial image of the brain using a 1.5 tesla magnet, and we assume a homogeneous magnetic field covering the whole body from head to toe. (This is quite different in reality, where the field is homogeneous only within a sphere of about 40 cm diameter at the iso-centre, the centre of the MRI bore, but the assumption simplifies the explanation.) Just as important as the strength of the main magnet is its precision. The straightness of the magnetic field lines within the centre (or, as it is technically known, the iso-centre) of the magnet needs to be near-perfect. This is known as homogeneity. Fluctuations (inhomogeneities in the field strength) within the scan region should be less than three parts per million (3 ppm). When we put a patient in the magnet, all the protons, from head to toe, align with B0. They spin at the Larmor frequency of 63.6 MHz (Figure 9.33).

Figure 9.33: With the patient in the magnet, all the protons align with B0 and spin at the Larmor frequency (63.6 MHz at 1.5 T).


If we use a 90o excitation RF-pulse to flip the magnetization into the x-y plane, then all the
protons would react and return a signal. We would have no clue where the signal comes from:
from head or toe.

9.19.1 Slice Encoding Gradient


The magnetic field gradient (e.g. Z-gradient) is temporarily applied (Z-gradient is switched
on) at the same time as the RF pulse (see figure 9.34).

Figure 9.34: Slice selection pulse sequence: a gradient magnetic field (Gs) is applied at the same time as the 90° RF pulse.
This will generate an additional magnetic field in the Z-direction, which is superimposed on B0. The indication +Gz in Figure 9.34 means there is a slightly stronger B0 field in the head than in the iso-centre of the magnet. A stronger B0 field means a higher Larmor frequency. Along the slope of the gradient there is a different B0 field at every position, and consequently the protons spin at slightly different frequencies. Therefore, the protons in the head will spin slightly faster than the ones in the iso-centre; the reverse goes for the protons in the feet. Figure 9.35 shows that the protons in the feet now spin at 63.5 MHz, the ones in the iso-centre of the magnet still at 63.6 MHz, and the ones in the head at 63.7 MHz.
This means we can "pick out" the section which we want to excite by choosing the right
frequency range of RF excitation pulse. The section which contains Larmor frequencies which
match the frequencies of the oscillating magnetic field will respond. An MRI signal will be
generated only from that section of the patient. This is called Slice-Encoding or Slice-
Selection. Usually the slice selection gradient is applied in the z-axis—the head-foot direction
in the scanner. But because a gradient magnetic field may be applied in any orientation, slices
may be acquired at literally any angle or orientation in the patient. This is one of the strengths of MRI.
Now, if we apply an RF-pulse with a frequency of 63.7 MHz, only the protons in a thin slice in the head will react, because they are the only ones spinning at that frequency. In this example Gz is the slice-encoding gradient. If we stopped here, the returned signal we receive would come from that single slice in the head; that is, we have identified the site by using the Z-gradient (Gz) (figure 9.35).
These frequencies are only used for this example; in reality the differences are much
smaller.

Figure 9.35: Generating an additional magnetic field in the Z-direction. Along the gradient the field, and hence the precessional frequency, is slightly lower toward the feet (63.5 MHz) and slightly higher toward the head (63.7 MHz); the Larmor equation applies, and the transmitted RF carrier frequency and bandwidth select the image slice (here an RF pulse at 63.7 MHz).


9.19.1.1 Slice Location


 The slice to be imaged is moved to the centre of the scanner bore (the iso-centre).
If there's time between imaging different slices, we can simply move the patient-table so
that the section of interest within the patient is at the iso-centre. This is preferable because
placing the section of interest in the part of the main magnetic field which is most
homogenous will give us better images.

Figure 9.36: Process of choosing the slice location

 The carrier frequency of the RF excitation pulse may be changed as shown in figure 9.36.
The carrier frequency of the transmitted RF pulse determines which spins along the patient
will resonate (because they have a matching Larmor frequency). If multiple slices are to be
acquired in quick sequence, the carrier frequency can be set to determine the location of the
imaging slice in the patient.

9.19.1.2 Slice Thickness


Slice thickness helps determine the resolution and the level of detail in the image. The slice thickness is governed by the following equation:

Δz = Δf / (γ · GSS)

where Δz is the slice thickness, Δf is the transmitted RF bandwidth (the range of frequencies it covers), γ is the gyromagnetic ratio, and GSS is the magnitude of the slice selection magnetic field gradient.

Figure 9.37A shows the effect of varying the steepness of the gradient while keeping the RF-pulse bandwidth the same; in Figure 9.37B, alternatively, the steepness of the gradient is kept the same while the bandwidth of the RF-pulse is varied. Either change alters the slice thickness: the slice thickness may be reduced by increasing the gradient of the magnetic field (dashed line in figure 9.37A) or by decreasing the RF pulse width (the transmit bandwidth, figures 9.37B & 9.38). A thinner slice produces better anatomical detail, the partial volume effect being less, but it takes longer to excite.
In practice, the slice thickness is determined by a combination of both gradient steepness
and RF-pulse bandwidth.
The total magnetic field at a position ZSS (SS = slice selection) during application of GZ is
given by: B0+ZSS·GZ and the spatially selective excitation energy or frequency, respectively,
can be easily calculated using the Larmor equation.


(A) Fixed RF, variable gradient: a high gradient gives a small slice thickness, a low gradient a large slice thickness. (B) Variable RF, fixed gradient: a narrow bandwidth gives a small slice thickness, a wide bandwidth a large slice thickness.

Figure 9.37: Slice thickness is dependent on RF bandwidth and gradient strength. (A)
For a fixed gradient strength, the RF bandwidth determines the slice thickness, (B) for a
fixed RF bandwidth, gradient strength determines the slice thickness.

Figure 9.38: Reduction of slice thickness by decreasing the RF (or transmit) bandwidth.

A typical slice thickness is 2-10 mm. The RF pulse inevitably contains a certain amount of electromagnetic energy at frequencies slightly higher or lower than the intended bandwidth, thus mildly exciting tissues on either side of the desired slice. To prevent this affecting the image slice, a gap (say 10% of the slice thickness) may be left between slices, although this is not necessary when the slices are interleaved.
For example, for a 10 mm slice thickness using a gradient magnetic field strength of 10 mT/m, the transmitted RF pulse bandwidth would be about 4.3 kHz (using γ = 42.58 MHz/T).
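
This numerical example can be checked in a few lines (a sketch; the relation Δf = γ · GSS · Δz is simply the slice thickness equation above rearranged):

    # Transmit bandwidth needed for a 10 mm slice at 10 mT/m.
    gamma = 42.58e6          # gyromagnetic ratio, Hz/T
    G_ss = 10e-3             # slice-selection gradient, T/m
    delta_z = 0.010          # desired slice thickness, m

    bandwidth = gamma * G_ss * delta_z      # Δf = γ · G_SS · Δz
    print(f"{bandwidth/1e3:.2f} kHz")       # ≈ 4.26 kHz, i.e. about 4.3 kHz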
In order to get optimal image resolution we need very thin slices with a high SNR. However, the thinner the slice, the more the noise dominates: the SNR decreases while the spatial resolution increases. Spatial resolution is the ability to distinguish one structure from another. Conversely, increasing the slice thickness increases the signal-to-noise ratio but reduces spatial resolution, and thicker slices bring other problems such as an increase in partial volume effects.
The poorer SNR of thin slices can be addressed to some extent by increasing the number of acquisitions or by a longer TR. Yet this is accomplished only at the expense of the overall image acquisition time (the period of time required to collect the image data, not including the time necessary to reconstruct the image) and reduces the cost efficiency of the MR imaging system. The slice thickness is determined by: (1) the steepness of the slope of the gradient field (Gss) and (2) the bandwidth of the 90° RF-pulse.

9.19.1.3 Receiver Bandwidth


The receiver bandwidth is the range of frequencies collected by an MR system during frequency encoding. The bandwidth is either set automatically or can be changed by the operator. A wide receiver bandwidth enables faster data acquisition and minimizes chemical shift artifacts but also reduces SNR as more noise is included. Halving the bandwidth improves SNR by about 40%, since SNR scales with the inverse square root of the bandwidth. With a narrow bandwidth, on the other hand, there will be more chemical shift and motion artifacts, and the number of slices that can be acquired for a given TR is limited.
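
A one-line check of this scaling (a sketch based on the standard SNR ∝ 1/√bandwidth relation):

    import math

    # SNR scales with the inverse square root of the receiver bandwidth.
    def snr_gain(bw_new, bw_old):
        return math.sqrt(bw_old / bw_new)

    print(f"{snr_gain(0.5, 1.0):.2f}x")   # halving the bandwidth: ~1.41x SNR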
Is this enough? Certainly not, and we will see immediately why.
Figure 9.39 shows the axial slice, which has just been created by the Gz gradient. If we take a
closer look at proton 1 and 2 in this slice we see that they both spin with the same frequency
and have the same phase.


Figure 9.39: The axial slice just created by the Gz gradient: all protons within the slice (e.g. protons 1 and 2) spin at the same frequency and with the same phase.
Within the slice there are still an awful lot of protons, and we still don't know where within the slice the signal is coming from: whether it comes from anterior, posterior, left or right. Further encoding is therefore required in order to allow us to pinpoint the exact origin of the signals.
Important Notes:
 The thickness of the slice can be changed by varying the steepness of the magnetic field gradient, or by changing the transmitted RF pulse bandwidth, as discussed in detail in section 9.19.1.2.
 The RF pulse and the magnetic field gradient have to be applied together. This process may be depicted in a pulse sequence timing diagram.
 The shape of the RF excitation pulse in time is not a square (on/off) shape: to excite a discrete range of frequencies (a slice), a sinc-shaped pulse, sinc(x) = sin(x)/x, is used.
9.19.2 Frequency Encoding Gradient
Figure 9.39 shows the axial slice, which has just been created by the Gz gradient. The protons in this slice spin with the same frequency and have the same phase. Within the slice there are still an awful lot of protons, and we still don't know where within the slice the signal is coming from: whether it comes from anterior, posterior, left or right.
Further encoding is therefore required in order to allow us to pinpoint the exact origin of the signals. The frequency encoding gradient is a static gradient field, just like the slice selection magnetic field gradient. It does the same thing: it causes a range of Larmor frequencies to exist in the direction in which it is applied (according to the Larmor equation).
To encode in the left-right direction, a second gradient (Gx) is switched on. This creates an additional gradient magnetic field in the left-right direction. What we need now is one more encoding step to determine whether the signal comes from the left, the centre or the right side of the head. The protons on the left hand side spin with a lower frequency than the ones on the right (Figure 9.40).

Figure 9.40: The different frequencies of protons across the patient within the transmitted bandwidth (Gx gradient: frequency encoding gradient).
By causing this range of frequencies to exist, we can use the Fourier transform to separate the frequencies out after we measure an MRI signal (which is a mix of all signals from the slice), as shown in Figure 9.41. The protons will accumulate an additional phase shift because of their different frequencies, but (and this is utterly important) the already acquired phase difference, generated by the phase encoding gradient in the previous step, will remain. Now it is possible to determine whether the signal comes from the left, centre or right hand side of the slice. We can pinpoint the exact origin of the signals received by the coil.


Figure 9.41: Fourier transform separating the frequencies: with the Gx gradient off, all signal shares one frequency; with Gx on, each column contributes its own frequency to the image.

9.19.2.1 Application of the Frequency Encoding Gradient


The frequency encoding gradient is applied during the recording of the MRI signal. It causes the precession of net magnetizations within the slice to be position dependent in that direction whilst the signal is being recorded (see figure 9.42).

Figure 9.42: The frequency encoding gradient (Gf) is applied during signal measurement (the echo), following the 90° RF pulse and the slice selection gradient (Gs).

If the frequency encoding gradient is not applied at the same time as measuring the MRI
signal, the signals from the different columns in the imaging slice will all have the same
frequency. We cannot, therefore, use the Fourier transform to separate them out.
9.19.2.2 Frequency Encoding in Both Directions
It has been shown that frequency encoding allows us to localize the signal from within an
imaging slice into columns. The next step is to encode the columns (into rows) so that we can
"plot" unique signal values into an array of pixels to get an image of the slice. One might think


that we can simply apply a third magnetic field in a direction perpendicular to the frequency
encoding gradient within the imaging slice as shown in figure 9.43.

Figure 9.43: Example nine-pixel image slice. Frequency encoding in two directions within an image slice does not produce unique Larmor frequencies related to position: the field offsets add along both axes, so different pixels share the same total frequency, and it becomes impossible to deduce unique signal intensity values for image pixels.

That is to say, why not just frequency-encode in the other direction too? Unfortunately, that
doesn't work. If frequency encoding is performed in two directions, it becomes impossible to
deduce signal intensity values for unique image pixels. This is because the Fourier transform
can tell us the total amplitude of the signal at a particular frequency, but when that amplitude
is the sum of multiple voxels in the image slice, we don't have enough information to plot
signal intensity values in unique pixels in an image.
9.19.3 Phase Encoding Gradient
In order to encode the image, or the imaged object, along the so-called phase encoding direction, here the y direction (though it could also be the z or x direction), a gradient along the y direction is applied, and thereafter the signal is sampled. The phase encoding gradient is a magnetic field gradient that allows the encoding of the spatial signal location along a second dimension by different spin phases. The phase encoding gradient is applied after slice selection and excitation (before the frequency encoding gradient), orthogonally to the other two gradients. The spatial resolution is directly related to the number of phase encoding steps (gradients). In fact it is necessary to apply this gradient several times, each time increasing the gradient by an equidistant amount.


This is done by briefly switching a gradient field on and then off again at the beginning of the pulse sequence, right after the radio frequency pulse; the magnetization of the peripheral voxels will then precess either faster or slower relative to that of the central voxels.
During readout of the signal, the phase of the x-y-magnetization vector in different columns will thus systematically differ. When the x- or y-component of the signal is plotted as a function of the phase encoding step number n, and thus of time n·TR, it varies sinusoidally: fast at the left and right edges and slowly at the center of the image. Voxels at the image edges along the phase encoding direction are thus characterized by a higher 'frequency' of rotation of their magnetization vectors than those towards the center.
As each signal component has experienced a different phase encoding gradient pulse, its exact spatial origin can be specifically and precisely located by Fourier transform analysis. Spatial resolution is directly related to the number of phase encoding
levels (gradients) used. The phase encoding direction can be chosen, e.g. whenever oblique
MR images are acquired or when exchanging frequency and phase encoding directions to
control wrap around artifacts.
In an MRI sequence diagram this procedure is indicated by the phase encoding table (see Figure 9.44). As seen in this figure, it consists of 32 steps in this example, each lasting 0.250 ms, with an increment of 0.734085 mT/m (millitesla per metre). One could go through this table from the bottom to the top, or from the top to the bottom; this is called linear phase encoding. One could also go from the middle outward, like 0, 1, -1, 2, -2 etc.; this is called low-high phase encoding.

Figure 9.44: Phase encoding table - symmetrical (Gy in mT/m versus time in ms; increment 0.734085 mT/m).
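
The two step orderings can be sketched as follows (the 32 steps and the 0.734085 mT/m increment follow the book's example; the exact start value is an assumption):

    # Linear versus low-high ("centric") phase-encoding step orderings.
    n_steps = 32
    dG = 0.734085                      # increment, mT/m (book's example)

    # Linear: sweep from the most negative to the most positive gradient.
    linear = [(k - n_steps // 2) * dG for k in range(n_steps)]

    # Low-high: start at zero and work outward: 0, +1, -1, +2, -2, ...
    low_high = [0.0]
    for k in range(1, n_steps // 2 + 1):
        low_high.extend([k * dG, -k * dG])
    low_high = low_high[:n_steps]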

Note that one particular step corresponds to applying no gradient at all; for the low-high encoding, this would be the first step. In order to code the protons further, the gradient is switched on very briefly. During the time the gradient is switched on, an additional gradient magnetic field is created in the anterior-posterior direction.
As you can see, small volumes (voxels) have been created. Each voxel has a unique combination of frequency and phase, and the amount of protons in each voxel determines how strong the signal is (its amplitude). The signal received contains a complex mix of frequencies, phases and amplitudes, each from a different location (voxel) within the brain.
The computer receives this massive amount of information and then a "Miracle" occurs. In
about 0.25 seconds the computer can analyze all this and creates an image. The "Miracle" is a
mathematical process, known as Two-Dimensional Fourier Transform (2DFT), which enables
the computer to calculate the exact location and intensity (brightness) of each voxel.
This can be summarized (slice selection, phase encoding and frequency encoding) as
follows:
After slice selection the encoding of spatial information has only to be performed in two
dimensions. This can be accomplished by magnetic field gradients in the respective directions.
These are differentiated by the time of gradient switching, i.e. before or during data acquisition. The first case is the so-called phase encoding discussed above. In the second case (frequency encoding), a readout gradient is switched on during data acquisition and the gradient direction is therefore called the 'readout direction'. The readout gradient produces an additional, linearly varying magnetic field and, due to the proportionality between magnetic field and frequency, the Larmor frequency also varies linearly. Spins at different positions therefore emit radiation with different frequencies which can be distinguished after Fourier transformation. Each frequency is related to a specific position on the readout axis, and the intensity of the radiation at this frequency is proportional to the number of spins emitting at this position.
9.20 K-Space
K-space is a formalism widely used in magnetic resonance imaging, introduced in 1979 by Likes and in 1983 by Ljunggren and Twieg. It forms the raw data matrix in MRI, which can be converted into an image using Fourier transformation (Figure 9.45).

Figure 9.45: Raw data matrix in MRI converted into an image using FT.

The value k defines the number of phase cycles per metre of distance from the origin (x = 0) that a magnetization vector passes through due to application of the magnetic field gradient.


By analogy to frequency given as cycles per unit time, k is called the "spatial frequency". To illustrate the idea of k-space, let us start with the following question: why is k-space so important? The answer is that it helps us to understand how an MRI image is acquired and how various pulse sequences work.
In MRI physics, k-space is the 2D or 3D Fourier transform of the MR image measured. Its
complex values are sampled during an MR measurement, in a premeditated scheme controlled
by a pulse sequence, i.e. an accurately timed sequence of radiofrequency and gradient pulses.
In practice, k-space often refers to the temporary image space, usually a matrix, in which data
from digitized MR signals are stored during data acquisition. When k-space is full (at the end
of the scan) the data are mathematically processed to produce a final image. Thus k-space
holds raw data before reconstruction.
“The MRI data prior to becoming an image (raw or unprocessed data) is what makes up k-space”. Synonyms for k-space are the raw data matrix and the time domain. The task of an MRI scanner is to recognize and collect MR signals and store them in a specific order which is recognizable
for further analysis. At each RF excitation, combinations of different excitations are collected
as one complex signal. The read-out MR signal is stored in a 2D array called k-space,
containing samples of the continuous Fourier transform of the object's magnetization.
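
Since k-space is the 2D Fourier transform of the image, a toy reconstruction can be sketched in a few lines (a synthetic square "phantom", not real scanner data):

    import numpy as np

    # k-space of a toy image, and its reconstruction by inverse 2D FFT.
    image = np.zeros((64, 64))
    image[24:40, 24:40] = 1.0                       # square "phantom"
    k_space = np.fft.fftshift(np.fft.fft2(image))   # raw data matrix (centred)
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k_space)))
    assert np.allclose(recon, image)                # image is fully recovered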
9.21 Gradient Echo Pulse Sequence Diagram
A pulse sequence is the implementation of the hardware components necessary for excitation,
spatial encoding, and data acquisition in MR imaging.
It is described by a scheme in which all RF pulses and gradient amplitudes as well as the
acquisition window are displayed as functions of time for the smallest repetition interval. This
will be explained in detail for the pulse sequence of a simple 2D gradient echo method (Figure
9.46). The upper trace (RF) shows the radiofrequency excitation pulse with a variable flip
angle θ which is usually considerably smaller than the 90° shown in Figure 9.46. This means
that
(i) The signal measured will be smaller, but
(ii) Less time must be allowed to pass before we can take the next measurement
(shorter TR).
This reduces the scan time to a reasonable duration. The three traces indicated by GS, GPh, and readout or echo (RO) represent the individual gradient amplitudes, while ACQ shows the acquisition window.
Slice selection (GS) is realized by switching on the appropriate gradient at the same time as the RF pulse. Because different phases are generated within the excited slice by the GS gradient, a rephasing pulse with negative amplitude is necessary to compensate for this effect. Subsequently, the phase encoding (GPh) and readout dephasing (RO) gradients are switched on and off before GRO is applied simultaneously with data acquisition.


It must be noted that, due to the orthogonality of the three directions, all phase manipulations can be applied independently. Hence, GPh and the negative readout gradient can also be switched at different times; alternatively, both gradients can be applied at the time of slice selection rephasing. These details depend on the implementation of the pulse sequence or on the timing parameters selected by the user. Differences specific to the individual sequences are mentioned explicitly.

Figure 9.46: Gradient echo pulse sequence diagram: RF pulse with flip angle θ (e.g. 15°), slice selection gradient Gs, phase encoding gradient GPh incremented by ΔGPh, readout gradient GRO (Gf) with acquisition window ACQ (TACQ), and the timing parameters TE and TR.


The timing diagram shown in Figure 9.46 covers the acquisition of one k-space line which is
repeated (almost identically) until the end of the measurement. Thereby, the RF pulse and the
gradients in slice selection and readout directions remain unchanged, whereas the phase
encoding gradient is incremented by a constant amount ΔGPh between the cycles. This leads to
a ladder-like depiction of GPh within the pulse sequence diagram (cf. the stepped table of Figure 9.44).
This basic pulse sequence is a mainstay of MRI, and the majority of advanced pulse
sequences are based on a gradient echo. It is repeated for many different levels of phase
encoding gradient (e.g. 256 times), until the k-space matrix is full. A 2D Fourier transform is
performed on this data set, from which a single MRI image results.
The echo time (TE) and the repetition time (TR) are very important timing parameters (they will be discussed in detail later). TE is defined as the time between excitation and the maximum amplitude of the echo; TR is the duration of a phase encoding cycle. Both quantities are


imaging parameters defined by the user, and they can be used e.g. to produce contrast between
different tissues due to their individual relaxation properties. As indicated by the axis breaks,
the length of TR can be raised to any value by increasing the time interval between data
acquisition and the next excitation pulse.
9.22 Gradient Specifications
When you are shopping for an MRI scanner, it is very important to pay special attention to the gradient sub-system. Ideally, when a gradient is switched on it immediately reaches maximum power, and when it is switched off the power immediately returns to zero (Figure 9.47A). Unfortunately this is not the case, as we do not live in an ideal world. In reality the gradient needs a little time to reach maximum power and to power down (Figure 9.47B). The time it takes to reach maximum power is called the rise time (Figure 9.47C). When we divide the maximum power by the rise time we get a number called the slew rate. These are the specifications of a gradient system.
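
For example, using the specification values quoted in Figure 9.47 (a sketch; the units mT/m and ms are assumed):

    # Slew rate = maximum gradient strength / rise time.
    max_strength = 17.0     # mT/m (example value from Figure 9.47)
    rise_time = 0.7         # ms (example value)

    slew_rate = max_strength / rise_time    # (mT/m)/ms equals T/m/s
    print(f"{slew_rate:.0f} T/m/s")         # ≈ 24 T/m/s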

Figure 9.47: Gradient switching. (A) Ideal waveform: when a gradient is switched on it immediately reaches maximum strength. (B) Realistic waveform: reaching maximum strength takes a finite rise time. (C) Example specifications: maximum strength 17 mT/m and rise time 0.7 ms give a slew rate of 17/0.7 ≈ 24 T/m/s.
You should compare these values because they are different for each system:

1. Maximum strength: as high as possible (minimum FOV and maximum Matrix).


2. Rise time: as short as possible.
3. Slew rate: as big as possible (min. TR, TE and ETS).

The performance, and therefore the range of applications that can be run, is mainly determined by the performance of the gradient system. Other issues to look for are the field strength B0, the computer system, and the ease of use of the user interface.


9.23 MRI Image Quality, Artifacts, and Imaging Parameters


This part covers the factors that affect the quality of an MR image. Image quality depends on several factors:
 Signal-to-noise ratio (SNR) is the ratio of the MR signal received from the tissue being imaged to the background noise.
 Contrast resolution refers to the subtle differences in signal intensity between the tissues being imaged.
 The spatial resolution of the image is the visualization of detail in the MR image; in other words, it is the ability to distinguish two points as separate and distinct. It is controlled by the voxel size, and may be increased by selecting thin slices, fine matrices and a small FOV.
 Sequence parameters such as Repetition time (TR), TE, slice thickness, field of view,
and matrix size can affect these image quality issues adversely. Conversely, such
parameters can have a positive effect on the quality of the image. Artifacts in MR are
also a big issue with regards to image quality. Artifacts can be caused by a variety of
things, from the equipment to the patient. These different artifacts can be assessed and
resolutions found to correct for them.

9.23.1 Signal to Noise and Contrast Resolution


Noise can be likened to interference, presenting as an irregular granular pattern that degrades the image information (the MR signal). Image noise results from a number of different factors, but it comes mainly from the tissue of the patient's body (RF emission due to thermal motion) and from the electronics inherent in the imaging process. These factors can be classified into two classes: those that are beyond the operator's control (the MR scanner specifications and pulse sequence design) and those that the user can change:
 Fixed factors: Imperfections of the MR system such as magnetic field
inhomogeneities, thermal noise from the RF coils, pulse sequence design, patient-related
factors resulting from body movement or respiratory motion.
 Factors under the operator's control
o RF coil to be used
o Sequence parameters: voxel size (limiting spatial resolution), number of averages, receiver bandwidth
The relationship between the MR signal and the amount of image noise present is expressed as
the signal-to-noise ratio (SNR). The signal is the voltage induced in the receiver coil by the net
magnetization when moved into the transverse plane. In other words, the signal comes from
the excited protons on the selected slice plane. The SNR has a direct effect on the contrast
resolution. The definition of contrast resolution is the difference in SNR between two adjacent
areas. If the SNR is improved, then the contrast resolution of the image is improved; if the


SNR is low, then the contrast resolution of the image is poor. Mathematically, SNR can be
expressed as the intensity of the signal measured in the region of interest divided by the
standard deviation of the signal intensity in a region outside the anatomy or the object being
imaged (i.e. a region from which no tissue signal is obtained). The SNR is dependent on the
following parameters:
 Slice thickness and receiver bandwidth
 Field of view
 Size of the (image) matrix
 Number of acquisitions
 Scan parameters (TR, TE, flip angle)
 Magnetic field strength
 Selection of the transmit and receive coil (RF coil)
Repetition time (TR) is the interval between two successive excitations of the same slice. That
means, it is the length of the relaxation period between two excitation pulses and is therefore
crucial for T1 contrast.
Before we discuss the effects of each of these parameters, it is first necessary to clarify
some concepts.
9.23.2 Pixel, Voxel, Matrix
The images we get from MRI are digital images consisting of a matrix of pixels (picture elements). Mathematically, the matrix is a two-dimensional grid of rows and columns. Each square of the grid is a pixel, which is assigned a value corresponding to the signal intensity. Each pixel of the MR image corresponds to a three-dimensional volume element called a voxel, and therefore provides information on its corresponding voxel (Figure 9.48).

Figure 9.48: A voxel is the tissue volume represented by a pixel in the two-dimensional MR image.
Voxel size determines the spatial resolution of the MR image. The size of a voxel can be
calculated from the field of view, the matrix size, and the slice thickness. In general, the
resolution of an MR image increases as the voxel size decreases.
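
A small sketch of this calculation with assumed scan parameters:

    # Voxel size from FOV, matrix size and slice thickness (assumed values).
    fov_mm = 240.0          # field of view
    matrix = 256            # 256 x 256 image matrix
    slice_mm = 5.0          # slice thickness

    pixel_mm = fov_mm / matrix                   # in-plane pixel size
    voxel_mm3 = pixel_mm ** 2 * slice_mm         # voxel volume
    print(f"{pixel_mm:.2f} mm pixels, {voxel_mm3:.1f} mm^3 voxels")
    # 240/256 ≈ 0.94 mm pixels, about 4.4 mm^3 voxels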


9.23.3 Inter-Slice Gap


Spacing, or the inter-slice gap, is the small space between two adjacent slices, measured in millimetres. It provides a method to compensate for an imperfect RF excitation pulse profile. The inter-slice gap allows the technologist to control the size of the imaging volume by increasing or decreasing the space between slices. Because the resultant slice profiles are not perfectly rectangular (Figure 9.49), two adjacent slices overlap at their edges when closely spaced. It would therefore be desirable to acquire contiguous slices, but inter-slice gaps are necessary in spin echo (SE) imaging.

Figure 9.49: (a) Ideal slice profile. (b) Distorted, nonrectangular slice profile in SE imaging; inadvertent excitation of adjacent slices reduces SNR. (c) With inter-slice gaps, the drop in SNR is minimized.

The RF pulse for one slice also excites protons in the adjacent slices. Such interference is known as cross-talk: the radio frequency pulse for one slice stimulates protons in the adjacent slices, which leads to a reduction in SNR (Figure 9.49b). Because the resultant slice profiles are not perfectly rectangular (Figure 9.49), small gaps of about thirty percent of the slice thickness are inserted between slices to minimize this artifact and improve the signal-to-noise ratio. In selecting an appropriate inter-slice gap one has to find a compromise between an optimal SNR, which requires a gap large enough to completely eliminate cross-talk, and the desire to reduce the amount of information that is missed when the inter-slice gap is too large. In most practical applications small inter-slice gaps of 25-50% of the slice thickness are inserted. An alternative way to reduce the saturation of protons in adjacent slices, which is undesirable in multi-slice imaging, is to acquire the slices in an interleaved order.


9.23.4 Size of the (image) Matrix


Another factor affecting signal-to-noise and contrast resolution is the voxel volume (the three-dimensional volume of tissue), which is represented on the image matrix by a pixel (picture element). Spatial resolution corresponds to the size of the smallest detectable detail: the smaller the voxels, the higher the potential spatial resolution.
Three parameters affect the Voxel volume (size of the voxel):
 Pixel size, which is established when the matrix size is chosen (256 × 256 or 512 ×
512 etc...)
 Field of view (FOV), the area of interest (10 cm, 20 cm, etc.; a small FOV is usually less than 18 cm and a large FOV more than 30 cm). The FOV defines the image size in two (for 2D scans) or three (for 3D scans) dimensions, and
 Slice thickness.
Matrices are of two types:
 Coarse matrices, which have a small number of pixels in the field of view, and
 Fine matrices, which have a large number of pixels in the field of view.
An example of a coarse matrix is 128 × 128, whereas a fine matrix is 512 × 512 (see figure
9.50).

Figure 9.50: Types of matrix: (A) fine matrix, (B) coarse matrix.
Larger voxels have an increased signal to noise and better contrast resolution because there are
more hydrogen nuclei in the voxel to contribute to the signal. Larger voxels are therefore
represented on the image matrix by larger pixels.
The matrix size chosen establishes the pixel size and therefore the size of the voxel it represents. Another way to alter voxel size is through the slice thickness used. Assuming the field of view is square, doubling the slice thickness doubles the voxel volume and the signal-to-noise ratio, thereby increasing contrast resolution; halving the slice thickness halves the signal-to-noise ratio and therefore the contrast resolution. The field of view also influences the voxel volume. Doubling the field of view doubles the voxel dimension along both in-plane axes and increases the signal-to-noise ratio fourfold, which also increases the contrast resolution of the image. This is the single most efficient way to increase signal-to-noise ratio and contrast resolution. Halving the field of view reduces the voxel volume and reduces the signal-to-noise ratio to a quarter, and the contrast resolution is decreased.
The background noise that comes from the system is roughly constant during an examination but differs from patient to patient, so its impact varies from one patient to another. Factors affecting the signal amplitude from the tissue therefore affect the effective noise level. The best pulse sequence for signal amplitude is the classic spin echo (SE) sequence: its 180 degree radio frequency pulse re-phases all of the hydrogen protons to create an echo, which allows for the best signal amplitude. Other sequences, such as the variations of gradient echo, do not re-phase the hydrogen nuclei as effectively, and signal is lost. The number of hydrogen protons in the area of tissue to be scanned also affects SNR and contrast resolution: if there is a large number of hydrogen protons in the area, the signal amplitude and hence the contrast resolution will be increased; if the number of protons is low, the signal will be low and the contrast resolution will be poor.
9.23.5 Scan Parameters (TR, TE, Flip Angle)
Two controls determine tissue contrast: TR (repetition time) and TE (echo time) of the scan.
They can be used for example to produce contrast between different tissues due to their
individual relaxation properties. TR and TE both affect signal-to-noise and contrast resolution.
(a) Repetition time (TR) is the time between successive RF pulses, that is, the duration of a
phase encoding cycle. A long TR allows the protons in all of the tissues to relax back into
alignment with the main magnetic field. A short repetition time will result in the protons
from some tissues not having fully relaxed back into alignment before the next measurement
is made, decreasing the signal from those tissues. In other words, TR controls the T1 relaxation
time of the tissue by allowing a certain amount of the net magnetization to re-grow into the
longitudinal plane, back to equilibrium before a signal is read. A long TR will increase signal
to noise ratio because more net magnetization has re-grown back to equilibrium and is
available to be excited and flipped once again into the transverse plane. A short TR decreases
the signal to noise ratio because not as much of the net magnetization has recovered and is
not there to be excited and flipped again into the transverse plane.
(b) Echo time (TE) is the time at which the electrical signal induced by the spinning protons is
measured. That is, the time between giving the RF pulse (excitation) and the peak (maximum
amplitude) of the echo signal (Fig 9.51). During this time interval, the transverse
magnetization decays, e.g. signal decays, due to the T2 relaxation effects. So TE directly
determines how much the transverse signal decays. For a T2 weighted image, use a TE that
is longer than the T2 of some tissues but shorter than the T2 of other tissues. A long TE
results in reduced signal in tissues like white matter and gray matter since the protons are
more likely to become out of phase. Protons in a fluid will remain in phase for a longer time
since they are not constrained by structures such as axons and neurons. A short echo time
reduces the amount of dephasing that can occur in tissue like white matter and gray matter.


In other words, TE controls the T2 relaxation time of the tissue by allowing a certain amount
of the net magnetization to decay in the transverse plane before a signal is read. A long TE
decreases signal to noise because all of the net magnetization has decayed when the signal is
read. A short TE increases signal-to-noise because there is net magnetization in the
transverse plane to contribute to the signal.

Figure 9.51: Echo time (TE) and repetition time (TR), shown for a sequence producing a first and a second echo.
A long TR and a short TE increase the signal-to-noise ratio and contrast resolution of MR images; conversely, the combination of a short TR and a long TE decreases the signal-to-noise ratio and contrast resolution. The effects of the parameters discussed so far, together with several other important variables, are summarized in Figure 9.52.

Figure 9.52: Summary of TR-TE combinations: long TR with short TE gives proton density weighting, long TR with long TE gives T2 weighting, short TR with short TE gives T1 weighting, and short TR with long TE gives poor contrast; brain images impressively demonstrate the effects of relaxation weighting on the contrast.
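
The quadrants of Figure 9.52 can be expressed as a simple rule of thumb (the TR/TE cut-off values below are illustrative assumptions; real values depend on field strength and sequence):

    # Rough TR/TE weighting rules from Figure 9.52 (illustrative cut-offs).
    def weighting(tr_ms, te_ms, tr_cut=700.0, te_cut=40.0):
        long_tr, long_te = tr_ms > tr_cut, te_ms > te_cut
        if long_tr and long_te:
            return "T2-weighted"
        if long_tr:
            return "proton density-weighted"
        if long_te:
            return "poor contrast"
        return "T1-weighted"

    print(weighting(2500, 90))   # T2-weighted
    print(weighting(500, 15))    # T1-weighted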

Another sequence parameter affecting the signal to noise ratio is the flip angle. The flip angle
is how far the net magnetization has been moved into the transverse plane. A larger flip angle


will increase the signal to noise ratio because there is more net magnetization being moved
into the transverse plane for a better signal. A small flip angle puts less net magnetization into the transverse plane, so a strong signal is not possible.

9.23.6 Number of Acquisitions


The NEX, averages, or acquisitions represent how many times the tissue is sampled per TR. The number of excitations (NEX) or number of signal averages (NSA) denotes how many times the signal from a given slice is measured. The averages control the amount of data per line of k-space: if the averages are doubled, the data are doubled, so the signal-to-noise ratio increases. Increasing the averages when using a low field strength magnet is necessary to maintain the signal-to-noise ratio. The SNR, which is proportional to the square root of the NEX, improves as the NEX increases, but scan time also increases linearly with the NEX. Doubling the NEX or averages increases the signal-to-noise ratio (SNR) only by the square root of 2, i.e. about 1.41 times. Manipulating the NEX or averages is therefore not the best way to increase the signal-to-noise ratio and the contrast resolution; increasing the NEX also increases the scan time and allows motion artifacts to appear on the images.
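
A sketch of this trade-off (SNR grows as the square root of NEX while scan time grows linearly):

    import math

    # Averaging: SNR grows as sqrt(NEX), scan time grows linearly.
    for nex in (1, 2, 4, 8):
        print(f"NEX={nex}: SNR x{math.sqrt(nex):.2f}, scan time x{nex}")
    # Doubling NEX buys only ~1.41x SNR for twice the scan time.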
A parameter that is not often manipulated to increase image quality is the receive bandwidth. The receive bandwidth is the range of frequencies sampled during the readout gradient application. If the receive bandwidth is decreased, the signal-to-noise ratio is increased because less noise is picked up inherently by the readout gradient. If the receive bandwidth is cut in half, the signal-to-noise ratio is increased by about forty percent. However, the sampling time, or the time the readout gradient is left on, must be increased. A decreased bandwidth can also increase the minimum TE that can be chosen because of the increased time the readout gradient is left on. The readout or frequency encoding gradient is usually turned on during the re-phasing, peak, and de-phasing portion of the echo. If the readout or frequency gradient must be left on longer, then a very short TE cannot be used, because all of these pieces of the cycle have to occur. A decreased receive bandwidth increases signal-to-noise and also contrast resolution.
9.23.7 Field of View
There is a close relationship between field of view (FOV) and SNR. When matrix size is held
constant, the FOV determines the size of the pixels. Pixel size in the frequency-encoding
direction is calculated as:
FOV in mm divided by the matrix in the frequency-encoding direction and
Pixel size in the phase-encoding direction is calculated as:
FOV in mm divided by the matrix in the phase-encoding direction.
Another limiting factor is image acquisition or scan time, which increases in direct proportion
to the matrix size. Scan time is the key to the economic efficiency of all MR systems and can
be calculated by a simple equation.


Scan time = TR × number of phase-encoding steps × number of signal averages (NSA), divided by the echo train length (ETL) for fast spin echo sequences.
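
A sketch of this formula with assumed parameters (ETL = 1 corresponds to a conventional spin echo):

    # Scan time = TR x phase-encoding steps x NSA / ETL (assumed values).
    TR = 0.5             # repetition time, s
    phase_steps = 256
    NSA = 2
    ETL = 1              # echo train length (1 for conventional SE)

    scan_time_s = TR * phase_steps * NSA / ETL
    print(f"{scan_time_s/60:.1f} min")     # 256 s, about 4.3 min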
9.23.8 Selection of the Transmit and Receive Coil (RF Coil)
Surface coils are pads or pieces of equipment that are placed next to a body part in order to enhance the signal from the tissue. The size of the coil must match the size of the body part being imaged for optimal image quality. Quadrature coils have a better signal-to-noise ratio than most because they are made up of two receiver coils for detecting the signal. Surface coils in general increase signal-to-noise as well as contrast resolution if used properly.
The most beneficial ways to increase SNR and contrast resolution in MR images are the use of spin echo imaging, a long TR and short TE, coarse matrices with a large field of view, thick slices, and an increased NEX or number of acquisitions.

9.24 MRI Contrast Agents


'Contrast' refers to the signal differences between adjacent regions, which could be 'tissue and tissue', 'tissue and vessel', or 'tissue and bone'. The inherent difference in T1 relaxation time between biological tissues, or between normal and pathologic tissue, is not always large enough to obtain a detectable contrast in the MR image (cf. Figure 9.53a). In order for pathology (or any process for that matter) to be visible in MRI, there must be contrast, i.e. a difference in signal intensity, between it and the adjacent tissue. The contrast mechanism in MRI is more complicated than that of contrast agents for X-ray and CT, which show contrast effects according to electron-density differences and produce direct contrast effects at their positions. For MRI, the contrast enhancement occurs as a result of the interaction between the contrast agents and neighbouring water protons, which can be affected by many intrinsic and extrinsic factors such as proton density and MRI pulse sequences.
from previous subjects, the signal contrasts can arise in MRI from differences in four basic
physical parameters:
SD: The spin density of the various tissues/fluids being analyzed
T1: The time constant with which the spin magnetization of a given tissue will build up
after being saturated/inverted/pulsed-away
T2*: The time constant with which the spins' signals arising from a given tissue will dephase due to inhomogeneous broadening; this is the kind of signal decay that can be echoed away by π pulses, for instance the decay arising from field inhomogeneities or susceptibility differences
T2: The (longer) time constant with which the spins’ signals arising from a given tissue
will decay away due to homogeneous broadening – this is the kind of irreversible
decay that can’t be echoed away, arising from microscopic random fluctuations in the
magnetic field.


Contrast in most MR images is actually a mixture of all these effects; but careful design of the
imaging pulse sequence allows one contrast mechanism to be emphasized while the others are
minimized. The ability to choose different contrast mechanisms by tailoring the appropriate
pulse sequence and choosing the right pulse sequence parameters is what gives MRI its
tremendous flexibility.
In certain cases, the intrinsic differences in T1, T2, T2*, etc, may not be sufficient to
achieve the desired degree or kind of contrast. In those cases additional differences can be
introduced by adding contrast agents (see figure 9.53b): paramagnetic chemicals that localize
in certain tissues/fluids, and artificially change their spin relaxation properties.

Figure 9.53: MR image used to diagnose a brain tumor without contrast agent (a) and with contrast agent (b).

In order for an excited spin system to return to its equilibrium magnetization, energy must be
transferred from the spin system to the lattice (surrounding), as discussed in section 9.13. The
return to equilibrium is described by the spin-lattice relaxation time, T1. When T1-weighted
sequences are used, the magnitude of the MR-signal increases with decreasing T1-relaxation
times. Further, the contrast between two tissues will of course also increase with increasing
difference in T1 relaxation times between the two tissues. Sufficient contrast is of particular
importance in differentiating pathological tissue from normal surrounding tissue. Exogenous
MR contrast agents were therefore developed shortly after the first commercial MR systems
became available in the early 1980's. Today, MR contrast agents are typically used in a significant proportion of MR examinations, with the highest usage in CNS applications (tumor diagnosis). MR contrast agents are also widely used in MR angiography (MRA). MR contrast agents act by selectively reducing the T1 (and T2) relaxation times of tissue water through spin interaction between the electron spins of the metal-containing contrast agent and water protons in tissue.
There are two classes of MRI contrast agents available,


(1) T1-weighted contrast agents (e.g., gadolinium (Gd3+) and manganese (Mn2+) chelates) are paramagnetic in nature and shorten the T1 relaxation time, resulting in bright contrast on T1-weighted images; and
(2) T2-weighted contrast agents are superparamagnetic materials (e.g., magnetite (Fe3O4) nanoparticles) which reduce T2 relaxation times, giving rise to dark contrast on T2-weighted images. The efficiency of a contrast agent in reducing the T1 or T2 of water protons is referred to as its relaxivity and is defined by the following equation:

1/T(1,2) = 1/T0(1,2) + r(1,2)·C

where 1/T(1,2) is the observed relaxation rate (longitudinal or transverse) in the presence of the contrast agent, 1/T0(1,2) is the corresponding relaxation rate of pure water, C is the concentration of the contrast agent, and r1 and r2 are the longitudinal and transverse relaxivities, respectively.
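
A numerical sketch of the relaxivity equation (the r1 and concentration values below are illustrative assumptions in the range typical of Gd chelates):

    # T1 shortening by a contrast agent: 1/T1 = 1/T1_0 + r1 * C.
    T1_0 = 3.0      # s, assumed T1 of a fluid without contrast agent
    r1 = 4.0        # assumed longitudinal relaxivity, 1/(mM*s)
    C = 0.5         # assumed agent concentration, mM

    T1 = 1.0 / (1.0 / T1_0 + r1 * C)
    print(f"T1: {T1_0:.1f} s -> {T1:.2f} s")   # ~0.43 s, much brighter on T1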
Various inorganic nanoparticles have been used as magnetic resonance imaging (MRI)
contrast agents due to their unique properties, such as large surface area and efficient
contrasting effect. Since the first use of superparamagnetic iron oxide (SPIO) as a liver
contrast agent, nanoparticulate MRI contrast agents have attracted a lot of attention. Magnetic
iron oxide nanoparticles have been extensively used as MRI contrast agents due to their ability
to shorten T2* relaxation times in the liver, spleen, and bone marrow. More recently, uniform
ferrite nanoparticles with high crystallinity have been successfully employed as new T2 MRI
contrast agents with improved relaxation properties. Iron oxide nanoparticles functionalized
with targeting agents have been used for targeted imaging via the site-specific accumulation of
nanoparticles at the targets of interest. Recently, extensive research has been conducted to
develop nanoparticle-based T1 contrast agents to overcome the drawbacks of iron oxide
nanoparticle-based negative T2 contrast agents. Here we have summarized the recent progress in inorganic nanoparticle-based MRI contrast agents.

9.25 Special Applications


There are countless variants of MRI, usually specialized for the measurement of certain chemical or physical properties, such as chemical shift or diffusion-weighted imaging.
9.25.1 Magnetic Resonance Angiography and Venography
Magnetic Resonance Angiography (MRA) and Magnetic Resonance Venography (MRV)
examinations are similar to an MRI, but they focus exclusively on the venous and arterial
blood vessels in the head and neck area and other parts of the human body. A strong magnetic
field is used to evaluate blood flow patterns and blood vessel abnormalities. MRA is a
medical procedure that helps physicians to diagnose and then treat different medical
conditions related to blood vessels throughout the body by creating images for blood vessels
(such as those in the brain, neck, heart, lungs, kidneys, abdomen, pelvis, and legs). MRA is
a safer alternative and a less invasive procedure than a traditional catheter angiogram (in which a small cut is made in the skin, a thin plastic tube called a catheter is inserted into an artery to release an X-ray-sensitive contrast material, and then an X-ray is taken). It can also produce

268
CHAPTER 9 MAGNETIC RESONANCE IMAGING

much higher quality images of the body's arteries and veins than other methods. MRA is the
depiction of vessels and of flowing blood. In MRA, flowing blood is depicted as bright and
stationary tissue as dark. In MRA, coherent gradient echo (GRE) pulse sequences are used; these sequences use a gradient reversal to rephase the spins, as opposed to the 180° RF refocusing pulse used in spin echo sequences. GRE sequences should be used when T2*-weighted images (bright blood/water/CSF) are required with good temporal resolution. MR imaging is thereby inherently capable of depicting flowing blood in spatially confined vessel regions with a bright signal without administering a contrast agent; quantification of blood flow is also possible. The intravenous application of gadolinium (Gd)-based MR contrast agents in conjunction with fast MRA sequences enables the spatially (3D) or temporally (4D) resolved depiction of the vasculature. MRV is a type of magnetic resonance
imaging (MRI) used to visualize veins, the blood vessels that bring blood from the body's
internal organs back into the systemic circulation. The MRV uses the same machine as an
MRI, but special computer software extracts only the signal generated by blood as it flows through the veins. These images give doctors an idea of whether the blood flow through
a vein of interest is affected by blood clots or other disease processes.
9.25.2 Magnetic Resonance Myelography
MR myelography is a noninvasive technique for studying the spinal canal and subarachnoid space by high-resolution MRI. The sequences used are T2-weighted fast spin echo pulse sequences or refocused gradient echo pulse sequences with strong T2 weighting, which provide high contrast between the spinal cord and its nerves, which appear dark, and the surrounding cerebrospinal fluid (CSF), which appears bright. MR myelography, as part of an entire MR examination, has virtually replaced x-ray myelography in localizing CSF leaks and disc protrusions, and can identify spinal canal and foraminal stenosis.
9.25.3 Magnetic Resonance Cholangiopancreatography
Magnetic resonance cholangiopancreatography (MRCP), introduced in 1991, is a 3D fast spin echo (FSE)-based noninvasive technique designed to produce detailed images of the hepatobiliary and pancreatic systems, including the liver, gallbladder, bile ducts, pancreas and pancreatic duct, without the need for contrast injection. This high-resolution volumetric acquisition is combined with respiratory triggering and acceleration to help achieve excellent image quality in a short scan time. The automatically generated maximum intensity projection (MIP) images benefit from bright fluid enhancement and effective background suppression, resulting in clear, high-resolution 3D structural images.

9.25.4 Chemical Shift Imaging


Chemical Shift Imaging (CSI) is a combination of MR imaging and NMR spectroscopy and is
therefore also called "spectroscopic imaging" or "localized spectroscopy".


In tomographic images a signal can be assigned to each pixel after two-dimensional Fourier
transform of the raw data. The integral of the signal is proportional to the intensity (grey scale
value) in the image.
In CSI a complete NMR spectrum is assigned to each pixel instead of a single value, but the grid is usually much coarser, so that the number of pixels is significantly reduced. Using this method, signals that stem from outside the heart can be eliminated and regional differences within the heart may be detected.
To obtain the spectroscopic information within the FID or echo, we cannot use frequency
encoding to encode spatial information (as in standard MR imaging methods) and therefore,
no readout gradient is used. As in NMR spectroscopy an FID is acquired with frequency
components stemming from nuclei with different environments.
Using no readout gradient means that spatial encoding is done with phase encoding steps only, so that for an N×N matrix, N² measurement cycles are required. This is one of the reasons for the considerably lower matrix size in CSI applications compared to standard imaging data sets.
In summary, a (2D) CSI sequence provides three-dimensional data sets with two
dimensions for phase encoding of the spatial coordinates and one for the spectroscopic
component.
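The practical consequence of the N² phase-encoding requirement is easiest to see from the scan-time arithmetic. The sketch below is illustrative only; the repetition time TR of 1.5 s is an assumption.

    # Minimal sketch of CSI scan-time arithmetic: with no readout gradient,
    # an N x N grid needs N^2 phase-encoding cycles (one per TR).
    def csi_scan_time_min(n, tr_s):
        return n * n * tr_s / 60.0          # N^2 measurement cycles

    tr = 1.5                                 # assumed repetition time, seconds
    for n in (8, 16, 32):
        print(n, csi_scan_time_min(n, tr))   # 1.6, 6.4 and 25.6 minutes

    # For comparison, a standard 256x256 image with frequency encoding
    # needs only 256 phase-encoding cycles: 256 * 1.5 s = 6.4 minutes.
    print(256 * tr / 60.0)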
9.25.5 Diffusion-Weighted Imaging
Diffusion weighted imaging (DWI) is a form of MR imaging based upon measuring the
random Brownian motion of water molecules within a voxel of tissue, and is particularly
useful in cerebral ischaemia and tumour characterisation.
Diffusion-weighted (spin echo) sequences are widely used in medical applications, since they allow, for example, very early detection of infarcted areas in the brain (stroke diagnostics). They show a signal decrease at places of high diffusion and a relative signal increase in areas where motion is hindered.
A great deal of confusion exists in the way clinicians and radiologists refer to diffusion restriction, with both groups often appearing not to fully understand what they are referring to. The first problem is that the term "diffusion weighted imaging" is used to denote a number of different things:
1. the isotropic diffusion map (what most radiologists will refer to as DWI);
2. the sequence which results in the generation of DWI, b=0 and apparent diffusion coefficient (ADC) maps;
3. a more general term encompassing all diffusion techniques, including diffusion tensor imaging (DTI).
Additional confusion exists in how to refer to abnormal restricted diffusion. This largely stems from the initial popularization of DWI in brain stroke, which presented infarcted tissue as high signal on isotropic maps and described it merely as "restricted diffusion", implying that the rest of the brain did not demonstrate restricted diffusion, which is clearly not true. Unfortunately this short-hand is appealing and widespread, rather than the more accurate but clumsier "diffusion demonstrates greater restriction than one would expect for this tissue". To make matters worse, many are not aware of the idea of T2 shine-through, another cause of high signal on DWI.
A much safer and more accurate way of referring to diffusion restriction is to remember that we are referring to actual apparent diffusion coefficient (ADC) values; the ADC is a measure of the magnitude of diffusion (of water molecules) within tissue. Wording such as "the region demonstrates abnormally low ADC values (abnormal diffusion restriction)" or "high signal on isotropic images (DWI) is confirmed to represent abnormal restricted diffusion on ADC maps" should be used.
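For the common two-image case (a b = 0 acquisition and one diffusion-weighted acquisition), the ADC is obtained per voxel as ADC = ln(S0/Sb)/b. The sketch below assumes this two-point scheme; the signal values are synthetic and for illustration only.

    # Minimal sketch: two-point ADC estimate, ADC = ln(S0/Sb)/b,
    # with b in s/mm^2. Signal values are synthetic illustrations.
    import numpy as np

    def adc(s0, sb, b):
        return np.log(np.asarray(s0, float) / np.asarray(sb, float)) / b

    b = 1000                         # typical diffusion weighting, s/mm^2
    print(adc(1000.0, 450.0, b))     # ~8.0e-4 mm^2/s: normal brain tissue
    print(adc(1000.0, 700.0, b))     # ~3.6e-4 mm^2/s: abnormally low ADC,
                                     # i.e. abnormal restricted diffusion

Note that a region with a long T2 can stay bright on DWI without a low ADC (T2 shine-through), which is why the ADC map, not the DWI image itself, should be checked.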

CHAPTER 10
RADIATION HAZARDS AND PROTECTION

Rationale
This chapter provides an overview of the broad field of radiation protection as it might be relevant to the generalist Occupational Health and Safety (OHS) professional. It assumes the reader has a basic understanding of the principles of nuclear science, including atomic structure and isotopes, and of the structure and function of the human body at the cellular and organ level, sufficient to understand how radiation causes damage to the human body.

Performance Objectives
After studying Chapter Ten, the student should be able to:
1. Explain the basic concepts of radiation.
2. Describe the biological effects of ionizing radiation.
3. Estimate, and explain the basis for, the possible risk of injury, illness, or death resulting from occupational radiation exposure.
4. Compare radiation risk with other types of risk.
5. List and discuss the three cardinal principles of radiation protection.
6. Discuss why distance is the best method of limiting radiation exposure.
7. Discuss the mandate of ALARA as a means of radiation protection.
8. State the proper application and limitations of radiation exposure survey instruments.
9. Identify commonly used dosimetry devices.

CHAPTER TEN: RADIATION HAZARDS AND PROTECTION


CHAPTER CONTENTS
10.1 Introduction 273
10.2 Sources of Ionizing Radiation 273
10.2.1 Natural sources of Ionizing Radiation 273
10.2.2 Man-Made Sources of Ionizing Radiation 275
10.3 Mechanisms of Radiation Damage 275
10.3.1 Direct Action 276
10.3.2 Indirect Action 277
10.4 Understanding Radiation Risk 278
10.5 Health Effects of Exposure to Radiation 278
10.6 Risks from Occupational Radiation Exposure 279
10.7 Radiation Risk Estimates 279
10.8 Biological Effects of Ionizing Radiation 279
10.8.1 Somatic Effects 280
10.8.2 Genetic Effects 281
10.8.3 Developmental Effects 281
10.9 Radiation Protection Principles 282
10.9.1 Justification 282
10.9.2 Optimization 283
10.9.3 Limitation 283
10.10 Factors in Dose Uptake 283
10.10.1 Time 284
10.10.2 Distance 284
10.10.3 Shielding by Barriers 284
10.11 Maximum Permissible Dose 284
10.12 Whole-Body Occupational Exposure 285
10.13 Exposure of Pregnant Women 286
10.13.1 Pregnant Patients 286
10.13.2 Pregnant Staff 286
10.13.3 Protection of Staff and Members of the Public 286
10.14 Design of Protective Barriers 286
10.15 Sources of X-Ray Exposure 286
10.15.1 Primary Radiation 287
10.15.2 Secondary Radiation 287
10.15.2.1 Scattered Radiation 287
10.15.2.2 Leakage Radiation 287
10.16 Protection of Patients 288
10.17 Dosimetry Devices 288
10.17.1 Self Reading Dosimeters 288
10.17.2 Electronic Dosimeters 289
10.17.3 Thermoluminescent Dosimeters 290


10.1 Introduction
Fear of radioactivity and of radiation in general is present in the wider community, in spite of the significantly increased use of radiation in medicine, the military, food preparation, power generation, and industry; this fear is due largely to a lack of knowledge of the subject. As with many other agents, determining the harmful effects (if any) of the typically low doses of radiation received in routine daily activities is a very difficult field of science, and one that can cause divisions between sections of the community. Further, as new technologies that utilize the physical properties of radiation emerge, the health effect data from exposure can be incomplete, and heightened levels of anxiety may occur.
Understanding the nature of radiation and radioactivity requires a solid grasp of
fundamental scientific knowledge commensurate with the increasing complexity of the
possible exposure scenarios. When ionizing radiation, such as X- and gamma rays, passes through living tissue without transferring energy, no biological effect and no radiological image is produced. If the radiation deposits energy in the tissue as it passes through, tissue damage can result. Whenever radiation energy is absorbed, chemical changes are produced virtually immediately, and subsequent molecular damage follows in a short space of time (seconds to minutes). It is after this, during a much longer time span of
hours to decades that the biological damage becomes evident. We live in a naturally
radioactive world. But how much do physicians, nurses and medical technicians who may
have to respond in a radiation emergency know about what radiation is, what it does and how
to protect against it? This chapter is directed at medical personnel and outlines basic concepts
of radiation and radiation protection.
10.2 Sources of Ionizing Radiation
Ionizing radiation enters our lives in a variety of ways. It arises from natural processes, such
as the decay of uranium in the Earth, and from artificial procedures like the use of x-rays in
medicine. So we can classify radiation as natural or artificial according to its origin.
10.2.1 Natural sources of Ionizing Radiation
Radiation has always been present and is all around us in many forms (cosmic, radon, plants,
our bodies, radioactive soil and rocks etc.). Life has evolved in a world with significant levels
of ionizing radiation, and our bodies have adapted to it. Many radioisotopes are naturally
occurring, and originated during the formation of the solar system and through the interaction
of cosmic rays with molecules in the atmosphere. Tritium is an example of a radioisotope
formed by cosmic rays’ interaction with atmospheric molecules. Some radioisotopes (such as
uranium and thorium) that were formed when our solar system was created have half-lives of
billions of years, and are still present in our environment. Background radiation is the ionizing radiation constantly present in the natural environment. So natural radiation is everywhere and
everyone is exposed daily to various kinds of radiation: heat, light, ultraviolet, microwave,
ionizing, and so on. Actually, all human activities involve exposure to radiation. People are
exposed to different amounts of natural "background" ionizing radiation depending on where
they live. Radon gas in homes is a problem of growing concern. Exposure of a person may be
external or internal and may be incurred by various exposure pathways. External exposure
may be due to direct irradiation from a sealed source or due to contamination, i.e. airborne
radionuclides or radionuclides deposited onto the ground or onto clothing and skin. Internal
exposure may result from the inhalation of radioactive material in air, the ingestion of
contaminated food or water, or contamination of an open wound. The total worldwide average effective dose from background radiation is approximately 3 mSv per year, distributed according to the table below, which comes from the four following sources:

Average Annual Dose
Terrestrial (radiation from soil and rocks) .............. 21 millirem (0.21 mSv)
Cosmic (radiation from outer space) ...................... 33 millirem (0.33 mSv)
Radioactivity normally found within the human body ....... 29 millirem (0.29 mSv)
Radon .................................................... 228 millirem (2.28 mSv)
This chart (Figure 10.1) shows that, of the total dose of about 360 millirem/year (millirem and millisievert are units of radiation dose; see chapter one), natural sources of radiation account for about 82% of all public exposure, while man-made sources account for the remaining 18%. Individual exposures will vary depending on factors such as altitude (cosmic radiation), local soils (radon and thoron), and the number of nuclear medicine procedures or x-rays received.
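As a quick consistency check, the four contributions in the table above can be summed and converted to millisieverts (100 mrem = 1 mSv); the result is close to the roughly 3 mSv worldwide average quoted earlier.

    # Minimal sketch: summing the natural background contributions above
    # and converting millirem to millisievert (100 mrem = 1 mSv).
    contributions_mrem = {"terrestrial": 21, "cosmic": 33,
                          "internal (body)": 29, "radon": 228}
    total_mrem = sum(contributions_mrem.values())
    print(total_mrem, "mrem =", total_mrem / 100.0, "mSv/yr")   # 311 mrem ~ 3.1 mSv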

[Figure 10.1 is a pie chart. Natural radiation sources (radon, internal, terrestrial, cosmic) total about 82% of public exposure; man-made sources (medical x-rays, nuclear medicine, consumer products) total about 18%, of which less than 1% comes from other sources: occupational (0.3%), fallout (<0.3%), the nuclear fuel cycle (0.1%) and miscellaneous (0.1%).]
Figure 10.1: Relative contributions of natural and artificial sources of ionizing radiation exposure to the public
The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR)
identifies four major sources of public exposure to natural radiation:


• Cosmic radiation: this is the term given to various high-energy particles arriving from outer space and striking the Earth; the dose from it therefore increases with altitude. Cosmic rays include many different types of particle: mainly (89%) protons – nuclei of hydrogen, the lightest and most common
element in the universe – but they also include nuclei of helium (10%) and heavier nuclei
(1%), all the way up to uranium. When they arrive at Earth, they collide with the nuclei of
atoms in the upper atmosphere, creating more particles, mainly pions. The charged pions
can swiftly decay, emitting particles called muons. Unlike pions, these do not interact
strongly with matter, and can travel through the atmosphere to penetrate below ground.
While space is full of radiation, the earth’s magnetic field generally protects the planet
and people in low earth orbit from these particles.
• Terrestrial radiation: some regions receive more terrestrial radiation from soils that
contain greater quantities of uranium. The average effective dose from the radiation
emitted from the soil (and the construction materials that come from the ground) is
approximately 0.5 mSv per year. However, this dose varies depending on location and
geology, with doses reaching as high as 260 mSv in Northern Iran or 90 mSv in Nigeria.
• Inhalation: (example: Radon) the earth’s crust produces radon gas, which is present in the
air we breathe. Radon has four decay products that will irradiate the lungs if inhaled. The
worldwide average annual effective dose of radon radiation is approximately 1.3 mSv.
• Ingestion: (example: food and drink) Natural radiation from many sources enters our
bodies through the food we eat, the air we breathe and the water we drink. Potassium-40
is the main source of internal irradiation (aside from radon decay). The average effective
dose from these sources is approximately 0.3 mSv a year.
10.2.2 Man-Made Sources of Ionizing Radiation
There is no difference between the effects caused by natural or man-made radiation. Man-
made sources of radiation (from commercial and industrial activities) account for
approximately 0.2 μSv of our annual radiation exposure. Sources of radiation in medicine include x-ray machines and radioactive materials used in diagnostic and therapeutic medical procedures; these account for approximately 1.2 mSv a year. Most medical exposure comes
from the use of standard x-rays and CT scans to diagnose injuries and diseases in patients.
Drugs with radioactive material attached, known as radiopharmaceuticals, also are used to
diagnose some diseases. These procedures are an important tool to help doctors save lives
through quick and accurate diagnoses. Also, other procedures, such as radiation therapy, use
radiation to treat patients. Overall, natural radiation accounts for approximately 60% of our
annual radiation dose, with medical procedures accounting for the remaining 40%.
10.3 Mechanisms of Radiation Damage
The exact mechanism of these complex events is incompletely understood, but biological
damage following exposure to ionizing radiations has been well documented at a variety of
levels. At a molecular level, macromolecules such as DNA, RNA, and enzymes are damaged;
at the subcellular level, cell membranes, nuclei, chromosomes, etc., are affected; and at the cellular level, cell division can be inhibited, cell death brought about, or transformation to a
malignant state induced. Cell repair can also occur, and is an important mechanism when there
is sufficient time for recovery between irradiation events.
The fact that ionizing radiation produces biological damage has been known for many
years. During radiation exposures it is the ionization process that causes the majority of
immediate chemical changes in tissue. The critical molecules for radiation damage are
believed to be the proteins (such as enzymes) and nucleic acid (principally DNA). The
mechanism by which the radiation damage occurs can happen in one of two basic ways: by the direct or indirect action of radiation on the DNA molecules (see Figure 10.2).

Figure 10.2: Direct and indirect actions of radiation. (a) Single-strand break (b) Double-strand break
When the DNA is attacked, either via direct or indirect action, damage is caused to the strands
of molecules that make up the double-helix structure. Most of this damage consists of breaks
in only one of the two strands and is easily repaired by the cell, using the opposing strand as a
template. If, however, a double-strand break occurs, the cell has much more difficulty
repairing the damage and may make mistakes. This can result in mutations, or changes to the
DNA code, which can result in consequences such as cancer or cell death. Double-strand breaks occur at a rate of about one double-strand break to 25 single-strand breaks. Thus, most of the radiation damage that occurs in the DNA may be repaired.
10.3.1 Direct Action
Direct action can be visualized as a "direct hit" by the radiation on the DNA (see Figure 10.2), and is thus a fairly uncommon occurrence due to the small size of the target; the
diameter of the DNA helix is only about 2 nm. Radiation damage starts at the cellular level,
and radiation may impact the DNA directly, causing ionization of the atoms in the DNA molecule. Radiation which is absorbed in a cell has the potential to impact a variety of critical
targets in the cell, the most important of which is the DNA. The damage to the DNA is what
causes cell death, mutation, and carcinogenesis. Evidence indicates that damaged cells that
survive may later induce carcinogenesis or other abnormalities. This process becomes
predominant with high radiation doses. It is nowadays accepted that the detrimental effects of ionizing radiation are not restricted to the irradiated cells, but extend to non-irradiated bystander or even distant cells, which manifest various biological effects.

10.3.2 Indirect Action


In this way, the radiation interacts with non-critical target atoms or molecules, usually water.
It has been found that the majority of radiation-induced damage results from the indirect
action mechanism because water constitutes nearly 70% of the composition of the cell. This
results in the production of free radicals, which are atoms or molecules that have an unpaired
electron and thus are highly reactive. These free radicals can then attack critical targets such as the DNA (Figure 10.2). Because they are able to diffuse some distance in the cell, the initial
ionization event does not have to occur so close to the DNA in order to cause damage. Thus,
damage from indirect action is much more common than damage from direct action,
especially for radiation that has a low specific ionization. In addition to the damages caused by
water radiolysis products (i.e. the indirect effect), cellular damage may also involve reactive
nitrogen species (RNS) and other species, and can occur also as a result of ionization of atoms
on constitutive key molecules (e.g. DNA).
Ionization of water causes it to lose an electron and produces a positively charged ion radical (H2O+) and a free electron (e-). Through a chain of chemical reactions, H2O+ and e- generate highly reactive free radicals such as the hydroxyl radical (OH*) and the hydrogen radical (H*). Radiolysis of water:
H2O + radiation → H2O+ + e-
H2O+ + H2O → H3O+ + OH*
H2O+ → H+ + OH*
These free radicals can diffuse to the target molecule, leading to DNA or sugar radicals that eventually produce single- and double-strand breaks of the DNA helix:
RH + OH* → R* + H2O
The highly reactive hydroxyl radical (OH*) is believed to be responsible for more than 2/3 of mammalian cell damage. Radiation protection drugs typically work by scavenging free radicals. These drugs are generally less effective for radiations that release a large amount of energy over the length of the attenuation track (i.e. high linear energy transfer radiations).
The hydroxyl free radical OH* is a highly reactive and powerful oxidizing agent which
produces chemical modifications in solute organic molecules. These interactions, which occur
in microseconds or less after exposure, are one way in which a sequence of complex chemical events can be started, but the free radical species formed can lead to many biologically
harmful products and can produce damaging chain reactions in tissue.
10.4 Understanding Radiation Risk
Risk can be defined in general as the probability or chance of injury, illness, or death resulting
from radiation exposure. However, the perception of risk is affected by how the individual
views its probability and its severity. Radiation can damage living tissue by changing cell
structure and damaging DNA. The amount of damage depends upon the type of radiation, its
energy and the total amount of radiation absorbed. Also, some cells are more sensitive to
radiation. Because damage is at the cellular level, the effect from small or even moderate
exposure may not be noticeable. Most cellular damage is repaired. Some cells, however, may
not recover as well as others and could become cancerous. Radiation also can kill cells. The
most important risk from exposure to radiation is cancer. Much of our knowledge about the
risks from radiation is based on studies of more than 100,000 survivors of the atomic bombs at
Hiroshima and Nagasaki, Japan, at the end of World War II. Other studies of radiation
industry workers and studies of people receiving large doses of medical radiation also have
been an important source of knowledge. Scientists learned many things from these studies. The most important are: the higher the radiation dose, the greater the chance of developing cancer; it is the chance of developing cancer, not the seriousness of the cancer, that increases as the radiation dose increases; cancers caused by radiation do not appear until years after the radiation exposure; and some people are more likely to develop cancer from radiation exposure than others. Radiation can damage health in ways other than cancer. It is less likely, but
damage to genetic material in reproductive cells can cause genetic mutations, which could be
passed on to future generations. Exposing a developing embryo or fetus to radiation can
increase the risk of birth defects. Although such levels of exposure rarely happen, a person
who is exposed to a large amount of radiation all at one time could become sick or even die
within hours or days. This level of exposure would be rare and can happen only in extreme
situations, such as a serious nuclear accident or a nuclear attack.
10.5 Health Effects of Exposure to Radiation
There are many technical factors that must be considered to understand the complex nature of
exposure to ionizing radiation. Some of the health effects that exposure to radiation may cause
are cancer (including leukemia), birth defects in the future children of exposed parents, and
cataracts. These effects (with the exception of genetic effects) have been observed in studies
of medical radiologists, uranium miners, radium workers, and radiotherapy patients who have
received large doses of radiation. Studies of people exposed to radiation from atomic weapons
have also provided data on radiation effects. In addition, radiation effects studies with
laboratory animals have provided a large body of data on radiationinduced health effects,
including genetic effects.
The observations and studies mentioned above, however, involve levels of radiation
exposure that are much higher (hundreds of rem) than those permitted occupationally today
(<5 rem per year). Although studies have not shown a cause-effect relationship between health effects and current levels of occupational radiation exposure, it is prudent to assume that some health effects do occur at lower exposure levels.
10.6 Risks from Occupational Radiation Exposure
The safety problems are related to ionizing radiation exposure from x-ray devices, particle accelerators, naturally occurring radionuclides and accelerator-produced radioactivity. The Environmental Protection Agency, in its radiation protection guidance for occupational exposure, urges that workers be clearly informed of the biological implications of radiation exposure. It is intended that the following information will enable you to develop an attitude of healthy respect for the risk associated with radiation exposure, rather than an unnecessary fear or lack of concern.
10.7 Radiation Risk Estimates
A measure of the biological damage sustained by tissue due to ionizing radiation is expressed by
the tissue's dose equivalent (often referred to by just "dose"), the traditional unit of which is the
rem (see chapter one). The total effective dose equivalent (TEDE) represents the sum of the
deep dose due to external radiation and the effective dose equivalent (EDE) due to internal
contamination. The natural "background radiation" level delivers a total effective dose equivalent to all persons every year, due primarily to:
 radiation reaching earth from outer space,
 the radioactive content of all terrestrial materials, and
 exposure to naturally occurring radon gas.
10.8 Biological Effects of Ionizing Radiation
Radiation effects can be early or late, depending on the amount of dose. Early (prompt) effects are observable shortly after receiving a very large dose in a short period of
time. For example, a whole-body dose of 450 rem (90 times the annual dose limit for routine
occupational exposure) in an hour to an average adult will cause vomiting and diarrhea within
a few hours; loss of hair, fever and weight loss within a few weeks; and about a 50 percent
chance of death within 60 days without medical treatment.
At the low levels of occupational exposure it is difficult to demonstrate the relationship
between dose and effect. The changes induced by radiation often require many years or
decades before being evident and, thus, a very long follow up period is necessary to define late
(delayed) effects. Studies of human populations exposed to low level radiation are the
appropriate basis for defining risk. Yet the number of such investigations, from which the
relationship between radiation dose and response can be determined, is limited, the best being
those of the bomb survivors in Nagasaki and Hiroshima. Accordingly, there is considerable
uncertainty and controversy regarding the best estimates of the radiation risk of low level
doses.
The biological effects of ionizing radiation can depend, among other factors, on:
 the type of radiation,
 the amount of the dose and the rate at which it is received,


 the type of tissues irradiated, and
 the age and sex of the exposed person.
The biological damage is primarily due to the fact that the charged particles (ion pairs) that
result from ionization, particularly in the water of body cells, yield highly reactive free
radicals. The radicals then readily interact with molecules in the irradiated cells to break
chemical bonds or produce other chemical changes. The resultant biological effects can be
classified into three categories:
10.8.1 Somatic Effects
Somatic effects are those occurring in the exposed person. The manifestation may be prompt or
delayed. The period of time between exposure and demonstration of the delayed effect is
referred to as the latent period. Somatic effects may be deterministic or stochastic (statistical).
In the deterministic case a threshold dose applies, although it may be different for different
persons. Once the threshold is exceeded, the severity of the effect increases with dose.
Examples of deterministic (sometimes called nonstochastic) effects are cataracts, skin
damage, bone marrow cell loss, and sterility. Biological damage cannot be identified at doses
less than 50 mSv and in general, thresholds are greater than 500 mSv. The amount of
radiation damage increases with the radiation dose, but it also increases with the volume of
tissue irradiated and the rate at which the dose is given.
Radiation doses which produce deterministic effects when only specific organs are
irradiated will have a completely different effect if given as the same single absorbed dose, but
this time to the whole body. For example, of a group of people exposed to 1.5 Sv whole body
dose, half will show signs of radiation sickness (vomiting and diarrhea) while the others may
seem unaffected. Of a group exposed to 5 Sv, half will probably die within 30 days but half
will recover if not exposed to infection. This value is known as the lethal dose (50% in 30
days) or LD50/30. However, a whole body dose of 10 Sv or more is 100% lethal to humans,
death being caused by a complete breakdown of the central nervous system, as well as the
gastrointestinal tract. The somatic effects of interest are cataract and cancer induction.
Cataract Induction: Damaging the lens of the eye is one of the highest deterministic risks.
The lens of the eye differs from other organs in that dead and injured cells are not removed and it has no repair mechanism, so the damage is cumulative. Single doses of several
hundred rem have induced opacities that interfere with vision within a year. When the dose is
fractionated over a period of a few years, larger doses are required and the cataract appears
several years after the last exposure. Cataract induction should not be a concern at the doses currently permitted for radiation workers. Cataracts are produced above a threshold of about
5 Sv to the lens. Computed tomography (CT) scans can give very high eye doses, and the
possibility of damaging the eye lens in patients should not be forgotten. For skin damage the
threshold for erythema and dry desquamation is about 4 Sv, and has been seen in some
patients after interventional radiological procedures. Impairment of fertility varies with age, but 1 Sv to the gonads has a measurable effect and 4 Sv will cause sterilization. If, however,
the threshold dose for testicular damage is given in small amounts, say 1 mSv per week over a
number of years, no deterministic effects will be observed.
Cancer Induction: Cancers arising in a variety of organs and tissues are thought to be the
principal somatic effect of low and moderate radiation exposure. Organs and tissues differ
greatly in their susceptibility to cancer induction by radiation. Induction of leukemia by
radiation stands out because of the natural rarity of the disease, the relative ease of its radiation
induction and its short latent period (2−4 years). However, the combined risk of induced solid
tumors exceeds that of leukemia. It is currently thought that cancer induction is the only
possible somatic effect due to exposure to low levels of ionizing radiation. The risk of
radiation doses of the order of the natural background level (300 millirem average in the US)
may be zero. However, for chronic radiation doses of less than 10 rem, the upper-limit estimate of the total fatal risk is about 4×10⁻⁴ per rem of effective dose equivalent (or 4 chances in 10,000 per rem), when averaged over an adult population of radiation workers.
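To make this figure concrete, a hypothetical career dose can be converted into an upper-limit lifetime risk; the annual dose and career length below are assumptions for illustration only.

    # Minimal sketch: upper-limit fatal risk from chronic low-level exposure,
    # using the ~4e-4 per rem figure quoted above. Doses are assumptions.
    RISK_PER_REM = 4e-4

    def lifetime_risk(annual_dose_rem, years):
        return annual_dose_rem * years * RISK_PER_REM

    # Assumed career: 0.5 rem/yr for 30 years (a tenth of the 5 rem/yr limit)
    print(lifetime_risk(0.5, 30))   # 0.006, i.e. about 6 chances in 1000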
10.8.2 Genetic Effects
Genetic effects are abnormalities occurring in the future children of exposed persons and in
subsequent generations. Genetic effects can occur when there is radiation damage to the genetic
material and can result in mutation. These effects may show up as birth defects or other
conditions in the future children of the exposed individual and succeeding generations, as
demonstrated in animal experiments. A mutation is an inheritable change in the genetic
material within chromosomes. Generally speaking, mutations are of two types, dominant and
recessive. The effects of dominant mutations usually appear in the first and subsequent
generations while the effects of recessive mutations do not appear until a child receives a
similarly changed gene for that trait from both parents. This may not occur for many
generations or it may never occur. Mutations can cause harmful effects which range from
undetectable to fatal. Thus, the possibility exists that genetic effects can be caused in humans
by low doses even though no direct evidence exists as yet. It is difficult to assess the genetic
risk in humans, as even the descendants of those exposed at Hiroshima and Nagasaki have
shown no additional genetic or cytogenetic effects. The frequency of congenital defects,
fecundity, and life expectancy appear to be no different than for the children of nonirradiated
parents. However, in view of the paucity of data, a safety margin is included in all risk
estimates. The risk of hereditary ill-health in subsequent children or grandchildren is estimated
to be at worst 10 extra cases from a million individual parents exposed to 1 mGy, whereas the normal incidence, without irradiation, is about 70,000 cases.
10.8.3 Developmental Effects
Developmental effects (or teratogenic effects) are those observed in children who were exposed during the fetal or embryonic stages of development. An exposed unborn child may
be subjected to more risk from a given dose of radiation than is either of its parents. The
developmental effects of radiation on the embryo and fetus are strongly related to the stage at which exposure occurs. The greatest concerns are of inducing malformations and functional
impairments during early development and an increased incidence of cancer during childhood.
The most frequent radiation−induced human malformations are small size at birth, stunted
postnatal growth, microcephaly (small head size), microencephaly (small brain), certain eye
defects, skeletal malformations and cataracts. Fortunately, these effects are observed only for
radiation doses much larger than those permitted radiation workers.
The current knowledge regarding developmental effects, according to the International
Commission on Radiological Protection (ICRP), is as follows:
• Exposure of the embryo during the first 3 weeks following conception may result in a
failure to implant or an undetectable death of the conceptus. Otherwise, the pregnancy
continues in normal fashion with no deleterious effects. This "all or nothing" response is
thought to occur only for acute doses greater than several rem,
• After 3 weeks, malformations may occur which are radiation dose dependent but with a
threshold dose estimated to be about 10 rem of acute exposure,

• From 3 weeks to the end of pregnancy it is possible that radiation exposure can result in
an increased chance of childhood cancer with a risk factor of, at most, a few times
(probably 2 to 3) that for the whole population, and
• Irradiation during the development of the forebrain, in the period of 8−15 weeks after
conception, may reduce the child's IQ by 0.3 point per rem, on the average, for relatively
large doses. 
These conclusions are reassuring for individuals who incur small work−related doses since the
possible developmental effects are thought to occur only at much higher doses or to occur with
very low probability, if at all.

10.9 Radiation Protection Principles


Fundamental to radiation protection are the reduction of expected dose and the measurement of human dose uptake. The system of radiation protection proposed by the International Commission on Radiological Protection (ICRP) sets out three fundamental principles that are used as the basis of radiation protection internationally: justification, optimization of protection, and dose limitation.
10.9.1 Justification
Justification refers to the requirement that the benefit of the radiation exposure be greater than the risk of using it, whether this applies to staff, visitors or patients. In other words, no practice
involving exposures to radiation should be adopted unless it produces sufficient benefit to the
exposed individual or to society to offset the detriment it causes. Even though there are more
staff using medical radiation than for any other use of radiation, they are not, in general, exposed to such an extent as to offset the medical benefits either to the patient or to society. For example, a chest x-ray to follow up pneumonia must be justified both as a general procedure and then as regards the individual patient before the latter undergoes the procedure.
Clearly, some exposures are easier to justify than others, while some are obviously unjustified.
An example of the unjustified would be mammography screening in 20-30-year-old well-
women, because it would probably cause more harm than benefit. Sometimes an individual
exposure is unjustified as the diagnosis can be made otherwise, for example using ultrasound,
magnetic resonance imaging (MRI), or endoscopy, or would not actually contribute to the
patient's management, for example in coccydynia.
10.9.2 Optimization
Optimization can be defined as a process or method used to make a system of protection as
effective as possible within the given criteria and constraints. That means, consider how best
to use resources in reducing radiation risks to individuals and populations so far as is reasonably achievable, social and economic factors being taken into account. This is the
principle of radiation safety to reduce the radiation doses and releases of radioactive
materials by employing all reasonable means. That is the basis of the ALARA principle or
ALARP principle (but ALARP does not consider social and economic factors and is
developed from Case Law). ALARA is an acronym for "As Low As Reasonably
Achievable" and is a regulatory requirement for all radiation safety programs. For members
of staff or visitors the effective dose should be as low as reasonably practicable as
constrained by the working procedures. For a patient, the radiation exposure should be as
low as compatible with providing the diagnostic information required. This can be achieved
by reducing the number of images taken of a patient.
10.9.3 Limitation
Dose limitation, together with justification and optimization, is used for controlling radiation risks and as the basis of radiation protection internationally. There are legal dose limits for workers and members of the public, based on ensuring that no deterministic effects are produced and that the probability of stochastic effects is reasonably low. Limitation means that the effective dose to individuals shall not exceed the recommended dose limits; it requires that deterministic effects are avoided and that probabilistic/stochastic effects are as low as reasonably achievable (ALARA). Limits are not appropriate for patients, although 'reference values' have been published to indicate levels above which exposures should be reviewed.

10.10 Factors in Dose Uptake


The radiology technician can control the amount, or dose, of radiation received from a source
and limit exposure to penetrating radiation by taking advantage of time, distance, and
shielding. In general the basic means of reducing your exposure to radiation and keeping your
exposure ALARA regardless of the specific source of radiation are as follows:


10.10.1 Time
Keep the time of exposure to a minimum. By reducing the time of exposure to a radiation source, the effective dose to the worker is reduced in direct proportion to that time. Time directly influences the dose received: if you minimize the time spent near the source, the dose received is minimized. This can be done by improving the training of operators to reduce the time they take to handle a source. The dose to an individual is directly related to the duration of exposure. The equation for this relationship is:
Total dose = Exposure rate × Exposure time
During radiography the time of exposure is kept to a minimum to reduce motion blur. During fluoroscopy the time of exposure should also be kept to a minimum to reduce patient and personnel exposure. Most fluoroscopic examinations take less than 5 minutes; only during difficult special procedures should it be necessary to exceed 5 minutes of exposure time.
10.10.2 Distance
Maintain distance from the source. The exposure rate from a radiation source falls off as the inverse of the distance squared. If a problem arises during a procedure, do not stand next to the source while discussing your options with others present. Move away from the source, or return it to storage if possible.
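The inverse square law can be written I2 = I1(d1/d2)². A minimal sketch, with an assumed starting exposure rate, shows how quickly stepping back pays off:

    # Minimal sketch of the inverse square law: exposure rate ~ 1/d^2.
    # The 400 mR/hr value at 1 m is an assumption for illustration.
    def rate_at(rate_d1, d1_m, d2_m):
        return rate_d1 * (d1_m / d2_m) ** 2

    print(rate_at(400.0, 1.0, 2.0))   # 100.0 mR/hr at 2 m (one quarter)
    print(rate_at(400.0, 1.0, 4.0))   # 25.0 mR/hr at 4 m (one sixteenth)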

10.10.3 Shielding by Barriers


Where appropriate, place shielding between yourself and the source. The term shield refers to a mass of absorbing material placed around a source to reduce the radiation to a level safe for humans. Shielding used in diagnostic radiology usually consists of lead, although often conventional building materials are used. In x-ray facilities, the plaster on the rooms with the x-ray machine contains barium sulfate, and the operators stay behind a leaded glass screen and wear lead aprons. Almost any material can act as a shield from gamma or x-rays if used in sufficient amounts. The amount by which a protective barrier reduces radiation intensity can be estimated if the Half-Value Layer (HVL) or the Tenth-Value Layer (TVL) of the barrier material is known.
 HVL: the thickness of absorbing material necessary to reduce the radiation intensity to half of its original value.
 TVL: the thickness of material that will reduce the radiation intensity to one-tenth of its original value. The relationship between HVL and TVL is: TVL = 3.3 HVL
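These definitions correspond to the exponential attenuation relation I = I0 (1/2)^(x/HVL); the exact HVL-TVL factor is log2(10) ≈ 3.32, which the text rounds to 3.3. A minimal sketch, with an assumed HVL for lead at a diagnostic beam quality:

    # Minimal sketch: barrier attenuation from the HVL,
    # I = I0 * 0.5**(x / HVL), plus the HVL-TVL relationship.
    import math

    def attenuated(i0, x_mm, hvl_mm):
        return i0 * 0.5 ** (x_mm / hvl_mm)

    hvl_pb = 0.25                           # assumed HVL of lead, mm
    print(attenuated(100.0, 1.0, hvl_pb))   # 100 -> 6.25 mR/hr behind 1 mm Pb
    print(math.log2(10) * hvl_pb)           # TVL = 3.32 * HVL ~ 0.83 mm Pb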
10.11 Maximum Permissible Dose
It is the maximum dose of radiation that in light of present knowledge would be expected to
produce no significant radiation effects.
The Maximum Permissible Dose (MPD) is specified not only for whole-body exposure but
also for partial-body exposure, organ exposure, and exposure of the general population (again,
excluding medical exposure as a patient). These MPD values are known as dose-limiting
recommendations and are summarized in table below:


GROUP                                           MPD
Radiation workers
  Combined whole-body occupational exposure
    Prospective annual limit                    5 rem in any given year
    Retrospective annual limit                  10 to 15 rem in any given year
    Long-term accumulation to age N years       5(N − 18) rem
  Skin                                          15 rem in any given year
  Hands                                         75 rem in any given year (25 rem per quarter)
  Forearms                                      30 rem in any given year (10 rem per quarter)
  Other organs, tissues, and organ systems      15 rem in any given year (5 rem per quarter)
  Pregnant women (with respect to fetus)        0.5 rem in gestation period
Public or occasionally exposed individuals      0.5 rem in any given year
Students                                        0.1 rem in any given year
General population
  Genetic                                       0.17 rem average per year
  Somatic                                       0.17 rem average per year

10.12 Whole-Body Occupational Exposure


The cumulative MPD for occupationally exposed persons is determined by:
Cumulative MPD = 5(N − 18) rem
where N is age in years. This results in an annual MPD of 5 rem, or 5000 mrem (50 mSv).
One consequence of this specification for MPD is that persons less than 18 years of age should
not be employed in radiation occupations. There are several special situations associated with
the whole-body occupational MPD. Students under the age of 18 may not receive more than
100 mrem/yr (1 mSv/yr) during the course of their educational activities. This is included in
and not in addition to the 500 mrem (5 mSv) permitted each year as a nonoccupational
exposure. Consequently, student radiologic technologists under the age of 18 may be engaged
in departments of radiology, but their personnel exposure must be monitored and should
remain below 100 mrem/yr. Because of this, it is general practice not to accept underage
persons into schools of radiologic technology unless their eighteenth birthday is only a few
months away. The MPD established for non-occupationally exposed persons is one tenth of
that for the radiation worker. Individuals in the general population are limited to 500 mrem/yr
(5 mSv/yr).
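The cumulative formula and these annual limits are easily tabulated; a minimal sketch (the ages are chosen only for illustration):

    # Minimal sketch: cumulative whole-body MPD, 5(N - 18) rem at age N,
    # alongside the annual limit of 5 rem (5000 mrem, 50 mSv).
    def cumulative_mpd_rem(age_years):
        if age_years < 18:
            raise ValueError("persons under 18 should not be radiation workers")
        return 5 * (age_years - 18)

    for age in (18, 30, 65):
        print(age, cumulative_mpd_rem(age), "rem")   # 0, 60 and 235 rem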
In addition to a limitation of 5000 mrem/yr (50 mSv/yr) for the whole body, the dose to any
of the parts of the body (head, neck, trunk, lens of the eye, blood-forming organs, and gonads)
may not exceed the same limit. This interpretation is accepted because irradiation of any of
these parts carries a presumed risk of late effects equal to the risk associated with whole-body
irradiation. Some organs of the body have a higher MPD than the whole-body MPD.
The MPD for the skin is 15,000 mrem/yr (150 mSv/yr); for the forearms, 30,000 mrem/yr (300 mSv/yr); and for the hands, 75,000 mrem/yr (750 mSv/yr).


10.13 Exposure of Pregnant Women


Special care must be taken for pregnant women, whether pregnant patients or pregnant staff.
10.13.1 Pregnant Patients
A special case where individual justification is needed is for patients who are or might be pregnant. This follows the "28 day rule". It applies to the radiographic
examination of any area between the knees and the diaphragm and to the injection of
radionuclides. It is based on the principle that there is little or no risk to the live-born child
from irradiation during the first 3 weeks or so of gestation, i.e. before the first missed period;
except possibly from high-dose procedures, such as barium enemas and abdominal or pelvic
computed tomography (CT).
10.13.2 Pregnant Staff
There is a requirement to ensure that the fetus of a pregnant employee is not exposed to a
significant risk. Female staff should not be exposed to spasmodic high doses and on average
should not receive more than about 1 mSv a month. A dose limit is set that is equal to the limit
for a member of the public. The limit applies over the declared term of the pregnancy; that is,
from the date that the employee informs her employer in writing that she is pregnant. For
diagnostic x-rays, it can be assumed that the fetal dose is no greater than 50% of the dose on
the surface of the abdomen, i.e. of the dose recorded on the dose monitor. Very few staff exceed these limits even now.
10.13.3 Protection of Staff and Members of the Public
The legislation is enacted to ensure that individual doses are as low as reasonably practicable.
This is achieved by ensuring premises and practices are designed so that people are most
unlikely to exceed a proportion of the set dose limits.
10.14 Design of Protective Barriers
It is usually necessary to insert protective barriers, usually sheets of lead, in the walls of x-ray
examining rooms. If the radiology facility is located on an upper floor, then it may be
necessary to shield the floor as well. A great number of factors should be considered in
designing a protective barrier. This discussion will touch on only the fundamentals and some
basic definitions. Any time new x-ray facilities are being designed or old ones renovated, a
medical physicist must be consulted for assistance in designing proper radiation shielding.
10.15 Sources of X-Ray Exposure
For the purpose of designing protective barriers, three types of radiation must be considered: the useful beam, leakage radiation, and scatter radiation (see Figure 10.3).


[Figure 10.3 is a diagram of an x-ray room labeling the useful beam, the leakage radiation from the tube housing, and the scatter radiation from the patient.]
Figure 10.3: Three types of radiation: the useful beam, leakage radiation, and scatter radiation
10.15.1 Primary Radiation
Primary radiation (or the direct beam) is the useful beam; it is the most intense, and therefore the most hazardous and the most difficult to protect against. Exposure of anyone other than the patient to the direct beam should therefore be avoided.

10.15.2 Secondary Radiation


There are two types of secondary radiation:
10.15.2.1 Scattered Radiation
X-rays are scattered in all directions when the useful beam strikes any object, including the
patient, who is therefore a source of scattered rays whenever the tube is energized. The
radiologist and radiographer should be as far away from the patient as practicable for any given
procedure. It should be remembered that the high kilovoltage used produces a high side scatter.
Lead-rubber aprons and curtains, glass screens, etc., should be used to protect staff from scatter.
With CT scanners, scatter is high close to the aperture, and this area should be avoided when
injecting the patient with contrast medium during exposure. During both radiography and
fluoroscopy, the patient is the single most important scattering object. As a general rule, the
intensity of scatter radiation 1m from the patient is 0.1% of the intensity of the useful beam at
the patient.
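Combining this 0.1% rule of thumb with the inverse square law gives a quick estimate of scatter levels near the patient; the useful-beam exposure rate below is an assumed value for illustration.

    # Minimal sketch: scatter estimate from the rule of thumb that scatter
    # at 1 m from the patient is ~0.1% of the useful-beam intensity.
    SCATTER_FRACTION_AT_1M = 0.001

    def scatter_rate(beam_rate_at_patient, distance_m):
        # 0.1% at 1 m, then inverse square fall-off with distance
        return beam_rate_at_patient * SCATTER_FRACTION_AT_1M / distance_m ** 2

    beam = 3000.0                    # assumed useful-beam rate at patient, mR/hr
    print(scatter_rate(beam, 1.0))   # 3.0 mR/hr at 1 m
    print(scatter_rate(beam, 2.0))   # 0.75 mR/hr at 2 m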
10.15.2.2 Leakage Radiation
This is radiation emitted from the x-ray tube housing assembly in all directions other than that of the useful beam. When the tube housing is designed properly, the leakage radiation will never exceed 100 mR/hr (26 μC/kg-hr) at 1 meter. Although in practice leakage radiation levels are much less than this limit, 100 mR/hr at 1 m is used for barrier calculations. Barriers
designed to protect areas from secondary radiation are called secondary radiation barriers; they are always thinner than primary protective barriers. Lead is rarely required for secondary barriers because the computation usually results in less than 0.4 mm Pb. The table below contains equivalent thicknesses for secondary barrier materials.
Table: Equivalent material thicknesses for secondary barriers

Computed lead          Substitutes
required (mm)          Steel (mm)   Glass (mm)   Gypsum (mm)   Wood (mm)
0.1                    0.5          1.2          2.8           19
0.2                    1.2          2.5          5.9           33
0.3                    1.8          3.7          8.8           44
0.4                    1.5          4.8          12            53

10.16 Protection of Patients


The regulations are mainly concerned with the appropriate use of radiation procedures on patients by staff who have been properly trained. The regulations outline the theoretical knowledge that staff should have, and specify the need for practical instruction for all staff, whether they are irradiating the patient directly or carrying clinical responsibility for the patient's exposure. In the legislation, these latter two aspects are referred to, respectively,
as 'physically' or 'clinically' directing the exposure. An individual not deemed to be fully trained
may physically direct an exposure as part of their training only while under the supervision of a
person who is adequately trained. The radiologist, for example, should ensure that only
accepted diagnostic practices are used and that persons who are physically directing the
exposure select procedures which ensure that the dose to the patient is as low as reasonably
practicable, consistent with the requirements for diagnosis; these are the justification and
optimization principles. Particular care should be taken over pregnant patients.
10.17 Dosimetry Devices
Dosimetry devices are useful for keeping track of the total accumulated radiation dose. A dosimeter is like the odometer in a car: where the odometer measures total miles traveled, the dosimeter measures the total amount of dose you have received. Most medical facilities use routine personnel dosimetry systems on which occupational doses of record are based. These dosimetry systems must meet certain criteria that have been established to demonstrate that they are able to read and interpret the dosimeter responses with sufficient accuracy and precision to allow an acceptable level of confidence in the results. There are several different types of dosimeters available. We will address three different types of personal dosimeter in common use, usually worn in the pocket.
10.17.1 Self Reading Dosimeters
A self-reading pocket dosimeter (SRD) is called by many names: direct reading dosimeter (DRD), pocket ion chamber (PIC), and pencil dosimeter are a few common ones. It comes in a variety of sizes and shapes, usually the size and shape of a fountain pen. It measures the exposure to incident radiation in roentgens (R) or milliroentgens (mR). Generally, SRDs only measure gamma and x-ray radiation, and they can be read immediately upon finishing a job involving external exposure to radiation.
The dosimeter has a small ionization chamber with a volume of approximately two milliliters. Inside the ionization chamber is a central anode wire, to which a metal-coated quartz fiber is attached. When the dosimeter is charged to a positive potential, the charge is distributed between the anode wire and the quartz fiber. Electrical repulsion deflects the quartz fiber; the greater the charge, the greater the deflection. Radiation incident on the chamber produces ionization inside the active volume of the chamber. The electrons produced by this ionization are attracted to, and collected by, the positively charged central anode. This collection of electrons reduces the net positive charge and allows the quartz fiber to return toward its original position. The amount of movement is directly proportional to the amount of ionization that occurs. To read the dosimeter, hold it up to a light source and look
To read the dosimeter, hold it up to a light source and look through the eyepiece. Always record the SRD reading before you enter a radiation field (hot zone), and read the SRD periodically (at 15 to 30 minute intervals) while working in the hot zone and again when leaving it. If a higher-than-expected reading is indicated, or if the SRD reading is off-scale, you should:
- notify others in the hot zone;
- have them check their SRDs;
- exit the hot zone immediately;
- follow local reporting procedures.
If you are using a low-range dosimeter (e.g., 0 to 200 mR), you should consider exiting the hot zone if the dosimeter reads greater than 75% of full scale. This prevents the dosimeter from going off scale; once off scale, it no longer keeps a record of the dose you have received. A dosimeter can be recharged, or 'zeroed', after each use. Record the final reading upon leaving the hot zone. Exercise care when using an SRD: they are sensitive instruments, and rough handling, static electricity, or dropping the dosimeter may result in erroneous or off-scale readings.
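The working rules above reduce to a simple decision procedure, sketched below. The 0 to 200 mR range and the 75% threshold come from the text; the function and variable names are illustrative assumptions:

```python
# Minimal sketch of the periodic SRD check and the 75%-of-full-scale rule.

FULL_SCALE_MR = 200.0      # example low-range SRD, 0-200 mR
EXIT_FRACTION = 0.75       # leave before the dosimeter can go off scale

def check_srd(reading_mr, entry_reading_mr=0.0):
    """Return advice for a periodic SRD check inside the hot zone."""
    if reading_mr > FULL_SCALE_MR:
        return "OFF SCALE: exit immediately and follow local reporting procedures"
    if reading_mr >= EXIT_FRACTION * FULL_SCALE_MR:
        return "Above 75% of full scale: consider exiting the hot zone"
    accrued = reading_mr - entry_reading_mr
    return f"OK: {accrued:.0f} mR accrued this entry, continue monitoring"

print(check_srd(40.0, entry_reading_mr=5.0))   # routine 15-30 min check
print(check_srd(160.0))                        # trips the 75% rule
```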
10.17.2 Electronic Dosimeters
The electronic dosimeter serves the same basic function as the SRD, except that it has a digital readout that displays the total dose received by the wearer in milliroentgens (mR) or millirem (mrem). Devices based on Geiger-Muller tubes or single silicon diodes can provide effective alarms and immediate dose readings in areas where there are real risks of high exposure. However, these devices have a very poor response to photon energies below around 80 keV, which makes them unable to detect low-energy gamma radiation and diagnostic X-rays. The Siemens electronic personal dosimeter overcomes that problem with a linear response down to below 20 keV, which makes it suitable for radiodiagnostic staff. Electronic dosimeters are now available from various manufacturers in a variety of sizes and shapes, with many options depending on the required or desired response. Many electronic dosimeters also give an audible indication of the exposure rate through a series of chirping noises: the chirp rate rises and falls with the dose rate, so the sound provides an audible warning when dose rates are high.
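The chirp behaviour just described can be sketched as a simple proportional mapping. The numbers below (base chirp rate, alarm threshold) are illustrative assumptions, not any manufacturer's specification:

```python
# Illustrative sketch: an audible chirp whose rate rises with dose rate.

BASE_RATE_MR_PER_H = 1.0        # assumed dose rate giving one chirp per minute
ALARM_MR_PER_H = 100.0          # assumed high-dose-rate alarm threshold

def chirps_per_minute(dose_rate_mr_per_h):
    """Chirp rate proportional to dose rate, floored at zero."""
    return max(0.0, dose_rate_mr_per_h / BASE_RATE_MR_PER_H)

for rate in (0.5, 5.0, 50.0, 150.0):
    cpm = chirps_per_minute(rate)
    alarm = "  <-- ALARM" if rate >= ALARM_MR_PER_H else ""
    print(f"{rate:6.1f} mR/h -> {cpm:5.1f} chirps/min{alarm}")
```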
10.17.3 Thermoluminescent Dosimeters
Thermoluminescent dosimeters (TLDs) do not provide a direct reading of accumulated dose as the previously mentioned dosimeters do. They are available in many different forms and materials. One simple form is a small crystal of lithium fluoride containing a trace quantity of manganese as an impurity, mounted in a plastic holder. As with any phosphor, when X- or gamma rays fall on the lithium fluoride and are absorbed, atomic electrons are raised to higher energy levels.
A thermoluminescent (TL) material such as lithium fluoride absorbs and stores some of the energy of the radiation that strikes it. The excited electrons in the TLD crystal, or "chip", remain in their higher energy states indefinitely, so the material retains a record of the radiation exposure: the greater the absorbed dose, the more electrons are left in excited states. After the badge has been worn for the prescribed period it is returned to an approved dosimetry laboratory, where it is processed. The TL chip is inserted in a light-tight chamber and its temperature is raised to 300-400°C at a carefully controlled rate. This causes the excited electrons to fall back to their ground states; in doing so, the stored energy is released from the chip in the form of photons of light, a phenomenon called thermoluminescence.

The amount of light emitted from a TLD chip is very small, so the chip is read in a dark chamber equipped with a photomultiplier tube. As the chip is heated, the photomultiplier converts the emitted light into an electronic signal, which is then amplified. The output of the TLD reader, collected and measured by this photoelectric device as the temperature is raised, is called the glow curve. The total light energy emitted, that is, the area under the glow curve, is directly proportional to the dose of X- or gamma rays originally absorbed in the chip. Calibration is performed as with film dosemeters.

TLD chips are almost always used in extremity monitors because of their small size, their energy response characteristics, and their linearity through a wide range of exposures and exposure rates. When used as a finger monitor, the TLD chip is placed in a small cavity in the face of a plastic ring and covered with a protective identification label.

The dose is indicated on a digital read-out, and may be digitized and stored in a computer together with the glow curve characterizing the dose received. The chip is then annealed using another controlled heating program to remove any residual stored energy and any 'memory' of the previous exposure. Having been returned to its original condition, it can then be reissued for reuse.
In other forms, TL dosemeters are also used to measure patient dose in radiological
procedures; mounted in rings to measure staff finger doses; or, in sachets, placed on the forehead
to estimate eye doses.
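Since the area under the glow curve is proportional to the absorbed dose, a TLD reader's dose computation can be sketched as a numerical integration followed by a calibration factor. Everything in the fragment below (the sample glow-curve values and the calibration constant) is a hypothetical illustration, not data from an actual reader:

```python
# Minimal sketch: integrate the glow curve (trapezoidal rule), then apply
# a calibration factor determined as for film dosemeters.

def glow_curve_area(temps_C, pmt_signal):
    """Trapezoidal integral of the photomultiplier signal vs temperature."""
    area = 0.0
    for i in range(1, len(temps_C)):
        dT = temps_C[i] - temps_C[i - 1]
        area += 0.5 * (pmt_signal[i] + pmt_signal[i - 1]) * dT
    return area

# Hypothetical glow-curve samples during a controlled readout up to 400 C
temps = [250, 275, 300, 325, 350, 375, 400]        # degrees C
signal = [0.0, 0.4, 1.8, 3.6, 2.9, 1.1, 0.2]       # arbitrary PMT units

CAL_MGY_PER_UNIT_AREA = 0.012   # assumed calibration factor (mGy per unit area)
area = glow_curve_area(temps, signal)
print(f"glow-curve area = {area:.1f}; dose ~ {area * CAL_MGY_PER_UNIT_AREA:.2f} mGy")
```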
APPENDIX
ACRONYMS AND UNITS
CHAPTER ONE
1 A Mass Number
2 Z Atomic Number
3 n Shell Number
4 EK,L,M Binding Energies of Electron Shells (K, L, M, etc.)
5 keV Kilo-electron-volt (1000 eV)
6 MeV Million-electron-volt (10^6 eV)
7 f Frequency (cycles per second, cps)
8 h Planck's Constant
9 λ Wavelength
10 FM Frequency Modulation
11 AM Amplitude Modulation
12 nm Nanometer
13 μm Micrometer
14 A Amplitude
15 γ Gamma
16 α Alpha
17 β Beta Particles
18 DNA Deoxyribonucleic Acid
19 CT Computerized Tomography
20 CAT Computerized Axial Tomography
21 UV Ultraviolet
22 R or r Roentgen
23 C/kg Coulomb per Kilogram
24 Rad Radiation Absorbed Dose
25 Gy Gray
26 Rem Roentgen Equivalent Man
27 DE Dose Equivalent
28 Ci Curie
29 Bq Becquerel
30 q Charge
31 J Joule
32 Sv Sievert
34 DT,R Dose to Tissue T due to Radiation R
36 WT Tissue Weighting Factor
37 SI unit Système International Unit
38 BRET Background Radiation Equivalent Time

CHAPTER TWO
1 kVp Peak Kilovoltage
2 mA Milliamperage
3 mAs Milliamperage-second
4 Ko Initial Kinetic Energy
5 Characteristic X-Ray (X-ray photon of energy ...)
6 Characteristic X-Ray (X-ray photon of energy ...)
7 c Speed of Light
8 ∆K Change in Energy
9 I X-ray Intensity

CHAPTER THREE
1 θ Scattering Angle
2 ∆λ Change in a Photon's Wavelength
3 λ′ Wavelength after Scattering
4 m0 Electron Rest Mass
5 Å Angstrom
6 KE Kinetic Energy
7 μ Linear Attenuation Coefficient
8 μm Mass Attenuation Coefficient
9 ρ Density
10 Zeff Effective Atomic Number
11 HVL Half-Value Layer
12 LET Linear Energy Transfer

CHAPTER FOUR
1 S Amount of Scattered Radiation
2 P Amount of Primary Radiation
3 ∅ Angle of Acceptance
4 FFD Focal-Film Distance
5 OFD Object-Film Distance

CHAPTER FIVE
1 CCD Video Camera: Charge-Coupled Device
2 GI The Gastrointestinal (Digestive) Tract
3 AC Alternating Current
4 DC Direct Current

CHAPTER SIX
1 FOV Field of View
2 SFOV Scan FOV
3 DFOV Display FOV
4 EBCT Electron-Beam Computed Tomography
5 EBT Electron-Beam Tomography
6 HU Hounsfield Units
7 ROC Receiver Operator Characteristics
8 DAS Data Acquisition System
9 FBP Filtered Backprojection
10 ART Algebraic Reconstruction Technique

CHAPTER SEVEN
1 t½ Half-Life
2 Ao Activity
3 SPECT Single-Photon Emission Computed Tomography
4 PET Positron Emission Tomography
5 PHA Pulse Height Analysis
6 CRO Cathode Ray Oscilloscope
7 AP Antero-Posterior
8 PA Postero-Anterior

CHAPTER EIGHT
1 US Ultrasound
2 SONAR Sound Navigation and Ranging
3 B-mode Brightness Modulation
4 A-mode Amplitude Modulation
5 M-mode Motion Mode
6 T-M mode Time-Motion Mode
7 AF Audio Frequencies
8 Hz Hertz
9 PZT Lead (from Latin: Plumbum) Zirconate Titanate
10 TGC Time Gain Compensation
11 DGC Depth Gain Compensation
12 CW Continuous-Wave
13 dB Decibels
14 W Watt
15 ISPTA Spatial-Peak, Temporal-Average Intensity
16 RBC Red Blood Cells
17 Z Acoustic Impedance
18 R Reflection Coefficient
19 Intensity of the Reflected ...
20 T Transmission Coefficient
21 2D Two-Dimensional
22 3D Three-Dimensional
23 4D Four-Dimensional
24 PW Pulse Doppler
25 PRF Pulse Repetition Frequency
26 HPRF High Pulse Repetition Frequency Pulse Doppler
27 PWD Pulse Wave Doppler
28 CWD Continuous Wave Doppler
29 CFM Color Flow Mapping
30 Doppler Frequency

CHAPTER NINE
1 ADC Apparent Diffusion Coefficient
2 Bo Static Magnetic Field
3 CSF Cerebrospinal Fluid
4 CSI Chemical Shift Imaging
5 de-phase Out of Phase
6 DWI Diffusion Weighted Imaging
7 EPI Echo-Planar Imaging
8 ETL Echo Train Length
9 FA Flip Angle
10 FID Free Induction Decay
11 FT Fourier Transform
12 GPH Phase Encoding Gradient
13 GRE Gradient-Echo
14 GRO Readout Gradient Direction
15 Gs Slice Selection Magnetic Field Gradient
16 in-phase Same Phase
17 M Net Magnetization
18 MIP Maximum Intensity Projection
19 MRA Magnetic Resonance Angiography
20 MRI Magnetic Resonance Imaging
21 MRV Magnetic Resonance Venography
22 MXY Transverse Magnetization (Rotating in the X-Y Plane)
23 MZ Longitudinal Magnetization
24 NEX Number of Excitations
25 NMR Nuclear Magnetic Resonance
26 NMRI Nuclear Magnetic Resonance Imaging
27 NSA Number of Signal Averages
28 QD Quadrature Coils
29 RF Radio Frequency
30 RO Readout or Echo
31 SD Spin Density
32 SNR Signal-to-Noise Ratio
33 SPIO Superparamagnetic Iron Oxide
34 SS Slice Selection
35 T Tesla
36 T1 Spin-Lattice Relaxation Time
37 T2 Spin-Spin Relaxation Time
38 TE Echo Time
39 Thk Slice Thickness
40 TR Repetition Time
41 γ Gyromagnetic Ratio
42 ωo Precessional (Larmor) Frequency

CHAPTER TEN
1 UNSCEAR United Nations Scientific Committee on the Effects of Atomic Radiation
2 LD50/30 Lethal Dose (50% in 30 Days)
3 ICRP International Commission on Radiological Protection
4 ALARP As Low As Reasonably Practicable
5 ALARA As Low As Reasonably Achievable
6 HVL Half-Value Layer
7 TVL Tenth-Value Layer
8 MPD Maximum Permissible Dose
9 SRD Self-Reading Pocket Dosimeter
10 DRD Direct Reading Dosimeter
11 PIC Pocket Ion Chamber
12 TLD Thermoluminescent Dosimeter
13 TL Thermoluminescent
Curriculum Vitae
1. Name : Dr.Mahmood Radhi Al- Qurayshi
2. Nationality: Iraqi
3. Religion: Muslim
4. Birth: Wasit, July 1964
5. Certifications:
A. Bachelor of Science in Physics
B. Master in Nuclear Physics
C. Ph.D. in Solid State Electronics
6. Current Grade: Assistant Professor
7. Contact Info: E-mail: mradhi64@yahoo.com
8. Languages (fluent): English and Arabic
9. Positions:
A. Head of Radiological Techniques Department / College of Health and
Medical Technologies / Baghdad
B. Associate Dean for Administrative Affairs; College of Health and
Medical Technologies / Baghdad

Curriculum Vitae
1. Name : Dr. Haider Qasim Al-Mosawi
2. Nationality: Iraqi
3. Religion: Muslim
4. Birth: Baghdad, April 1970
5. Certifications:
A. Bachelor of Medicine and Surgery (MBChB)
B. Higher Diploma in Diagnostic Radiology (DMRD)
C. Board (Ph.D.) in Diagnostic Radiology (FIBMS)
6. Current Grade : Assistant Professor
7. Contact Info: E-mail: haiderdo@yahoo.com
8. Languages (fluent): English and Arabic
9. Positions:
A. Head of Radiological Techniques Department / College of Health and
Medical Technologies / Baghdad
B. Dean of the College of Health and Medical Technology / Baghdad
C. Dean of the Institute of Medical Technology / Baghdad
D. Dean of the Institute of Technology / Suwaira
E. Dean of the Institute of Medical Technology / Mansour