All content following this page was uploaded by Haider Qasim Hamood on 31 December 2017.
2015
BOOK CONTENTS
Rationale
Atoms are far too small to see directly, even with the most powerful optical microscopes. It is through the "language of light" that we communicate with the world of the atom. This chapter will introduce you to the rudiments of this language.
Performance Objectives
After studying chapter one, the student will be able to:
CHAPTER 1 RADIATION AND ATOM
used most, it is best to start by discussing the structure of the atom and the production of X-
rays.
Table 1.1: Fundamental properties of particulate radiation
Particle    Symbol    Relative charge    Mass (amu)    Approximate energy equivalent (MeV)
The nucleons comprise Z protons, where Z is the atomic number of the element, and so (A-Z)
neutrons.
A nuclide is a species of nucleus characterized by the two numbers Z and A. The atomic
number is synonymous with the name of the element. The electron limit per shell can be
calculated from the expression 2n², where n is the shell number.
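The 2n² rule above can be checked with a short calculation; this is a minimal sketch (the function name and the shell letters K-N are our illustrative labels):

```python
def shell_capacity(n: int) -> int:
    """Maximum number of electrons in shell n (K = 1, L = 2, ...): 2n^2."""
    return 2 * n ** 2

# The K, L, M, N shells hold 2, 8, 18 and 32 electrons respectively.
for name, n in zip("KLMN", range(1, 5)):
    print(name, shell_capacity(n))
```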
In each atom, the outermost or valence shell is concerned with the chemical, thermal,
optical and electrical properties of the element. X-rays involve the inner shells, and
radioactivity concerns the nucleus.
1.1.3 Binding Energy
Binding energy is the amount of energy required to separate a particle from a system of particles or to disperse all the particles of the system. Binding energy is especially applicable to
subatomic particles in atomic nuclei, to electrons bound to nuclei in atoms, and to atoms and
ions bound together in crystals.
Nuclear binding energy is the energy required to separate an atomic nucleus completely
into its constituent protons and neutrons, or, equivalently, the energy that would be liberated
by combining individual protons and neutrons into a single nucleus. The hydrogen-2 nucleus,
for example, composed of one proton and one neutron, can be separated completely by
supplying 2.23 million electron volts (MeV) of energy. Conversely, when a slowly moving
neutron and proton combine to form a hydrogen-2 nucleus, 2.23 MeV are liberated in the
form of gamma radiation. The total mass of the bound particles is less than the sum of the
masses of the separate particles by an amount equivalent (as expressed in Einstein’s mass–
energy equation) to the binding energy. Electron binding energy, also called ionization
potential, is the energy required to remove an electron from an atom, a molecule, or an ion. In
general, the binding energy of a single proton or neutron in a nucleus is approximately a
million times greater than the binding energy of a single electron in an atom.
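The mass-defect calculation for the hydrogen-2 example can be sketched numerically; the particle masses and the amu-to-MeV conversion are standard values:

```python
AMU_TO_MEV = 931.494   # energy equivalent of 1 amu, in MeV (Einstein's relation)

m_proton = 1.007276    # amu
m_neutron = 1.008665   # amu
m_deuteron = 2.013553  # amu (the hydrogen-2 nucleus)

# The bound nucleus is lighter than its separated parts;
# the missing mass is the binding energy.
mass_defect = m_proton + m_neutron - m_deuteron
binding_energy_mev = mass_defect * AMU_TO_MEV
print(round(binding_energy_mev, 2))  # ~2.22 MeV, matching the ~2.23 MeV quoted
```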
An atom is said to be ionized when one of its electrons has been completely removed. The
detached electron is a negative ion and the remnant atom a positive ion. Together they form an
ion pair.
The binding energy depends on the shell, and on the element, increasing as the atomic number increases.
An atom is excited when an electron is raised from one shell to another farther out.
Because of the wave-particle duality of light, the energy of a wave can be related to the wave's frequency by the equation:

E = hf

where E is the photon energy, h is Planck's constant and f the frequency.
This relation is true of all kinds of wave motion, including sound, although for sound the velocity is about a million times less. More usefully, since frequency is inversely proportional to wavelength, so also is photon energy:

E = hc/λ

where c is the velocity of the wave and λ its wavelength.
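These relations (E = hf and E = hc/λ) can be checked numerically; the constants are standard values, and the 700 nm and 400 nm examples reproduce the visible-light energies in Table 1.2:

```python
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron volt

def photon_energy_ev(wavelength_m: float) -> float:
    """Photon energy in eV from wavelength: E = hc / lambda."""
    return H * C / wavelength_m / EV

print(round(photon_energy_ev(700e-9), 2))  # red light: ~1.77 eV
print(round(photon_energy_ev(400e-9), 2))  # violet light: ~3.1 eV
```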
At any point, the graph of field strength against time is a sine wave, depicted as a solid curve in Figure 1.2. The peak field strength is called the amplitude (A). The interval between successive crests of the wave is called the period (T). The frequency f is the number of crests passing a point in a second, and f = 1/T. The dashed curve refers to a later instant, showing how the wave has travelled forward with velocity c.
At any instant, the graph of field strength against distance is also a sine wave. The distance
between successive crests of the wave is called the wavelength (λ).
Figure 1.2: Electromagnetic wave.
The types of radiation are listed in Table 1.2, in order of increasing photon energy, increasing frequency, and decreasing wavelength (see Figure 1.3). When the energy is less than 1 keV the
radiation is usually described in terms of its frequency, except that visible light is usually
described in terms of its wavelength. It is curious that only radiations at the ends of the
spectrum, radio waves and X- or gamma rays, penetrate the human body sufficiently to be
used in transmission imaging.
Table 1.2: Electromagnetic spectrum
Radiation           Wavelength     Frequency             Photon energy
Radio waves         30-6 m         10-50 MHz             40-200 neV
Infrared            10-0.7 µm      30-430 THz            0.2-1.8 eV
Visible light       700-400 nm     430-750 THz           1.8-3 eV
Ultraviolet         400-100 nm     750-3000 THz          3-12 eV
X- and gamma rays   60-2.5 pm      5×10⁶-120×10⁶ THz     20-500 keV
1.3 Radiation
Radiation is a fact of life: all around us, all the time. Radiation is energy moving in the form
of waves or streams of particles. Understanding radiation requires basic knowledge of atomic
structure, energy and how radiation may damage cells in the human body. There are many
kinds of radiation all around us. When people hear the word radiation, they often think of
atomic energy, nuclear power and radioactivity, but radiation has many other forms. Sound
and visible light are familiar forms of radiation; other types include ultraviolet radiation (that
produces a suntan), infrared radiation (a form of heat energy), and radio and television
signals. Figure 1.3 presents an overview of the electromagnetic spectrum.
Electromagnetic radiation is a form of energy. Electromagnetic energy is the term given to
energy traveling across empty space and used to describe all the different kinds of energies
released into space by stars such as the Sun. All forms of electromagnetic radiation (which include radio waves, light, cosmic rays, etc.) move through empty space with the same velocity of 299,792 km per second (very close to 3×10⁸ m s⁻¹), and not significantly less in air. These kinds of energies include some that you will recognize and
some that will sound strange. They include:
Radio Waves
TV waves
Radar waves
Heat (infrared radiation)
Light
Ultraviolet Light (This is what causes Sunburns)
X-rays (emitted by X-ray tubes)
Short waves
Microwaves, like in a microwave oven
Gamma rays (emitted by radioactive nuclei), which have essentially the same properties as X-rays and differ only in their origin.
All these waves do different things (for example, light waves make things visible to the
human eye, while heat waves make molecules move and warm up, and x rays can pass
through a person and land on film, allowing us to take a picture inside someone's body) but
they have some things in common. They all travel in waves. The fact that electromagnetic radiation travels in waves lets us measure the different kinds by wavelength, or how long the waves are. That is one way we can tell the kinds of radiation apart from each other.
Although all kinds of electromagnetic radiation are released from the Sun, our atmosphere
stops some kinds from getting to us. For example, the ozone layer stops a lot of harmful
ultraviolet radiation from getting to us, and that's why people are so concerned about the hole
in it.
We humans have learned uses for a lot of different kinds of electromagnetic radiation and
have learned how to make it using other kinds of energy when we need to.
much greater distances than alpha or beta radiation, and it can penetrate bodily tissues and
organs when the radiation source is outside the body. Photon radiation can also be hazardous
if photon-emitting nuclear substances are taken into the body. An example of a nuclear
substance that undergoes photon emission is cobalt-60, which decays to nickel-60. There are
several types of ionizing radiation.
1.4.1 Particle Radiation
Particle radiation consists of a stream of charged or neutral particles, both charged ions and
subatomic elementary particles. This includes solar wind, cosmic radiation, and neutron flux
in nuclear reactors.
1.4.1.1 Alpha Particles
Alpha particles (α), helium nuclei, are the least penetrating. Some unstable atoms emit alpha
particles. Alpha particles are positively charged and made up of two protons and two neutrons
from the atom’s nucleus. Alpha particles come from the decay of the heaviest radioactive
elements, such as uranium, radium and polonium. Even very energetic alpha particles can be
stopped by a single sheet of paper. They are so heavy that they use up their energy over short
distances and are unable to travel very far from the atom.
The health effect from exposure to alpha particles depends greatly on how a person is
exposed. Alpha particles lack the energy to penetrate even the outer layer of skin, so exposure
to the outside of the body is not a major concern. Inside the body, however, they can be very
harmful. If alpha-emitters are inhaled, swallowed, or get into the body through a cut, the alpha
particles can damage sensitive living tissue. The way these large, heavy particles cause
damage makes them more dangerous than other types of radiation. The ionizations they cause are very close together; they can release all their energy in a few cells. This results in more severe damage to cells and DNA.
of the sphere. Being strictly geometric in its origin, the inverse square law applies to diverse
phenomena.
Point sources of gravitational force, electric field, light, sound or radiation obey the inverse
square law. When light is emitted from a source such as the sun or a light bulb, the intensity
decreases rapidly with the distance from the source. X-rays exhibit precisely the same
property. The intensity of the radiation is inversely proportional to the square of the
distance from a point source (see figure 1.6).
It naturally decreases with the square of the distance as the size of the radiative spherical wave front increases with distance. So, the luminous intensity I on a spherical surface at a distance r from a source radiating a total power P is:

I = P / (4πr²)

As P and 4π remain constant, the luminous intensity is proportional to the inverse square of the distance:

I ∝ 1/r²
Thus, if I double the distance to a light source the observed intensity is decreased to (1/2)² = 1/4 of its original value. Generally, the ratio of intensities I1 and I2 at distances r1 and r2 is:

I1/I2 = (r2/r1)²
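The inverse square law is easy to verify with a short helper (a sketch; the function name is ours):

```python
def intensity_ratio(r1: float, r2: float) -> float:
    """Ratio I1/I2 of intensities at distances r1 and r2 from a point source."""
    return (r2 / r1) ** 2

# Doubling the distance cuts the intensity to one quarter:
# I1/I2 = (2/1)^2 = 4, i.e. the far intensity is 1/4 of the near one.
print(intensity_ratio(1.0, 2.0))
```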
not just air. An absorbed dose of 1 rad means that 1 gram of material absorbed 100 ergs of
energy (a small but measurable amount) as a result of exposure to radiation.
1 rad = 100 erg/g (10⁻² Gy)
Where the erg (joule) is a unit of energy, and the gram (kilogram) is a unit of mass. The
related international system unit is the gray (Gy), where 1 Gy is equivalent to 100 rad.
1.7.3 Rem
The rem (Roentgen equivalent man) is the traditional unit of dose equivalent (DE) or
occupational exposure. It is used to express the quantity of radiation received by radiation
workers. Some types of radiation produce more damage than x-rays. The rem accounts for
these differences in biologic effectiveness. This is particularly important to persons working
near nuclear reactors or particle accelerators.
1.7.4 Curie
The curie (Ci) is the original unit used to express the decay rate of a sample of radioactive material. The curie is equal to that quantity of radioactive material (not the radiation emitted by that material) in which the number of atoms decaying per second is equal to 37 billion (3.7×10¹⁰). In other words, one curie is that quantity of material in which 3.7×10¹⁰ atoms disintegrate every second (3.7×10¹⁰ becquerel, Bq). It was based on the rate of decay of atoms within
one gram of radium. It is named for Marie and Pierre Curie who discovered radium in 1898.
The curie is the basic unit of radioactivity used in the system of radiation units in the United
States, referred to as "traditional" units. Becquerel (Bq) or Curie (Ci) is a measure of the rate
(not energy) of radiation emission from a source.
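The curie-becquerel relationship can be sketched as a pair of converters; the 15 mCi sample is an illustrative value, not one from the text:

```python
CI_TO_BQ = 3.7e10  # disintegrations per second in one curie

def curies_to_becquerels(ci: float) -> float:
    """Convert activity in curies to becquerels (decays per second)."""
    return ci * CI_TO_BQ

def becquerels_to_curies(bq: float) -> float:
    """Convert activity in becquerels to curies."""
    return bq / CI_TO_BQ

# An illustrative 15 mCi sample expressed in SI units:
print(curies_to_becquerels(15e-3))  # 5.55e8 Bq, i.e. 555 MBq
```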
1.7.5 Electron Volt
The electron volt (eV) is the amount of energy gained by the charge of a single electron moved across an electric potential difference of one volt. A more fundamental unit of energy is the joule (J). That means a particle with charge q has energy E = qV after passing through the potential V. Therefore, one electron volt is equal to 1.602×10⁻¹⁹ J. The energy of an x-ray is measured in electron volts or, more often, thousands of electron volts (keV). An electron that is accelerated by an electric potential of one volt will acquire an energy of one eV. Most x-rays used in diagnostic radiology have energies up to 150 keV, whereas those in radiotherapy are measured in MeV. Other radiologically important energies, such as electron and nuclear binding energies and mass-energy equivalence, are also expressed in eV.
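The E = qV relation and the eV-to-joule conversion can be sketched as follows (the 150 kV example is the diagnostic upper limit mentioned above):

```python
EV_IN_JOULES = 1.602e-19  # charge of one electron times one volt

def ev_to_joules(ev: float) -> float:
    """Convert an energy in electron volts to joules."""
    return ev * EV_IN_JOULES

def kinetic_energy_joules(charge_coulombs: float, volts: float) -> float:
    """E = qV: energy gained by charge q crossing potential difference V."""
    return charge_coulombs * volts

# An electron accelerated through 150 kV gains 150 keV of kinetic energy:
print(ev_to_joules(150e3))  # ~2.4e-14 J
```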
Note: because diagnostic radiology is concerned primarily with x-rays, for our purposes we
may consider:
1 R = 1 rad = 1 rem
With other types of ionizing radiation this generalization is not true.
The sievert (Sv) is the SI unit of equivalent dose. Although it has the same units as the
gray, J/kg, it measures something different. For a given type and dose of radiation(s)
applied to a certain body part(s) of a certain organism, it measures the magnitude of an X-ray or gamma radiation dose applied to the whole body of the organism such that the probabilities of the two scenarios inducing cancer are the same according to current statistics.
1 sievert = 100 rem. Because the rem is a relatively large unit, typical equivalent dose is
measured in millirem (mrem), 10−3 rem, or in microsievert (μSv), 10−6 Sv. 1 mrem = 10
μSv.
A unit sometimes used as a measure of low-level radiation exposure is the BRET (Background Radiation Equivalent Time). This is the number of days of an average person's background radiation exposure the dose is equivalent to. That means one BRET is the equivalent of one day's worth of average human exposure to background radiation. This unit is not standardized, and depends on the value used for the average background radiation dose.
For comparison, the average 'background' dose of natural radiation received by a person per day makes one BRET 6.6 μSv (660 μrem). However local exposures vary, with the yearly average
in the US being around 3.6 mSv (360 mrem), and in a small area in India as high as 30 mSv (3
rem). The lethal full-body dose of radiation for a human is around 4–5 Sv (400–500 rem).
The health hazards of low doses of ionizing radiation are unknown and controversial,
because the effects, mainly cancer and genetic damage, take many years to appear, and the
incidence due to radiation exposure can't be statistically separated from the many other causes
of these diseases. The purpose of the BRET measure is to allow a low level dose to be easily
compared with a universal yardstick: the average dose of background radiation, mostly from
natural sources, that every human unavoidably receives during daily life. Background
radiation level is widely used in radiological health fields as a standard for setting exposure
limits. Presumably, a dose of radiation which is equivalent to what a person would receive in
a few days of ordinary life will not increase his rate of disease measurably.
The definition of the BRET unit is apparently unstandardized, and depends on what value is
used for the average annual background radiation dose, which differs in different countries
and regions. The United Nations Scientific Committee on the Effects of Atomic Radiation
(UNSCEAR 2000) estimate for worldwide background radiation dose is 2.4 mSv (240 mrem).
Using this value each BRET unit equals 6.6 μSv. BRET values range from 2 BRET for a
dental x-ray to around 400 for a barium enema study.
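The BRET conversion is simple enough to sketch; the 6.6 µSv/day figure is the UNSCEAR-derived value quoted above, and the 13.2 µSv dental dose is illustrative (chosen to reproduce the 2-BRET figure for a dental x-ray):

```python
BRET_USV = 6.6  # one day of average background (UNSCEAR 2.4 mSv/yr / 365), in uSv

def bret_days(dose_usv: float) -> float:
    """Express a dose (uSv) as days of average background radiation."""
    return dose_usv / BRET_USV

# A dental x-ray of roughly 13 uSv comes out at about 2 BRET:
print(round(bret_days(13.2)))
```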
For practical purposes, one gray and one sievert are essentially equal, and the roentgen, rad and rem are equivalent.
For example, suppose someone's lungs and thyroid are exposed separately to radiation, and the equivalent doses to the organs are 2 mSv (the lungs have a weighting factor of 0.12) and 1 mSv (the thyroid has a weighting factor of 0.05) respectively.
The effective dose is: (2 mSv × 0.12) + (1 mSv × 0.05) = 0.29 mSv.
The risk of harmful effects from this radiation would be equal to a 0.29 mSv dose delivered uniformly throughout the whole body. This model says that the cancer risk from the whole body getting 0.29 mSv uniformly is the same as the lungs getting 2 mSv and the thyroid getting 1 mSv (and no other organ getting a significant dose).
Figure 1.8 presents an overview of the relationship between effective, equivalent and
absorbed doses.
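The weighted sum behind the effective dose can be sketched in a few lines, using the tissue weighting factors quoted in the example (0.12 for lungs, 0.05 for thyroid):

```python
def effective_dose_msv(organ_doses: dict, weighting_factors: dict) -> float:
    """Sum of organ equivalent doses weighted by tissue weighting factors WT."""
    return sum(dose * weighting_factors[organ]
               for organ, dose in organ_doses.items())

# The lung/thyroid example: equivalent doses in mSv and their WT values.
doses = {"lungs": 2.0, "thyroid": 1.0}
wt = {"lungs": 0.12, "thyroid": 0.05}
print(round(effective_dose_msv(doses, wt), 2))  # 0.29 mSv
```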
Figure 1.8: Absorbed dose, multiplied by the radiation weighting factor (WR) for the radiation used, gives the equivalent dose (weighted for the susceptibility to the effects of different radiations); the equivalent dose, multiplied by the tissue weighting factor (WT) for the tissue or organ concerned, gives the effective dose (weighted for the susceptibility of different tissues).
CHAPTER 2
PRODUCTION OF X-RAYS
Performance Objectives
After studying chapter two, the student will be able to:
high temperature. But because the filament gives off electrons in all directions, some means
must be used to focus them on a target. A reflector or focusing cup within the cathode
structure, into which the filament is centered, serves to focus the electron beam much as light
is focused by a flashlight reflector.
2.2.2 Anode
As mentioned previously, there must be a target for the electron beam to strike before X-rays
are actually produced. In radiographic tubes the target material is generally made of tungsten.
The choice of tungsten as a target for industrial radiography is based on four material
characteristics:
1. High atomic number (74). The higher the atomic number of a material the more
efficient is the conversion from electrical energy into X-ray energy.
2. High melting point (about 3422 °C, or 6192 °F). Most of the energy in the electrons bombarding the
target is dissipated in the form of heat. The extremely high melting point of tungsten
permits operation of the target at very high temperatures.
3. High thermal conductivity. Permits rapid removal of heat from the target, allowing
maximum energy input for a given area size.
4. Low vapor pressure. This reduces the amount of target material vaporized during
operation.
The tungsten target material is usually embedded in a massive copper rod. Copper is an
excellent thermal conductor and is used to remove the heat from the target for dissipation by
air, oil, or water cooling, depending on tube design and operation. The target and its copper
support are the anode. To produce x-rays it must be at a positive potential (voltage) with
respect to the cathode in order to attract the electrons available at the cathode.
small, lightweight components may only require a system capable of producing only a few
tens of kilovolts.
2.3.2 Focal Spot
Another important consideration is the focal spot size of the tube, since this factors into the geometric unsharpness of the image produced. The focal spot is the area of the target that is
bombarded by the electrons from the cathode. The shape and size of the focusing cup of the
cathode and the length and diameter of the filament all determine the size and shape of the
focal spot. The size of the focal spot has a very important effect upon the quality of the x-ray
image. Generally, the smaller the focal spot the better the detail of the image. But as the
electron stream is focused to a smaller area, the power of the tube must be reduced to prevent
overheating at the tube anode. Therefore, the focal spot size becomes a tradeoff of resolving
capability and power. Generators can be classified as conventional, minifocus, and microfocus systems. Conventional units have focal spots larger than about 0.5 mm, minifocus units have focal spots ranging from 50 microns to 500 microns (0.050 mm to 0.5 mm), and
microfocus systems have focal-spots smaller than 50 microns. Smaller spot sizes are
especially advantageous in instances where the magnification of an object or region of an
object is necessary. The cost of a system typically increases as the spot size decreases and
some microfocus tubes exceed $100,000. Some manufacturers combine two filaments of
different sizes to make a dual-focus tube. This usually involves a conventional and a
minifocus spot-size and adds flexibility to the system.
The electron stream from the filament is focused as a narrow rectangle on the anode target.
The typical target face is made at an angle of about 20 degrees to the cathode. When the
rectangular focal spot is viewed from below, in the position of the film, it appears more nearly
a small square. Thus, the effective area of the focal spot is only a fraction of its actual area. By using the X-rays that emerge at this angle, a small focal spot is created, improving
radiographic definition. Because the electron stream is spread over a greater area of the target,
heat dissipation by the anode is improved.
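The foreshortening described above (the line-focus principle) can be sketched numerically; the 20-degree target angle is the figure from the text, while the 4 mm actual focal-track length is purely illustrative:

```python
import math

def effective_focal_length(actual_length_mm: float,
                           target_angle_deg: float) -> float:
    """Projected (effective) focal-spot length as seen from the film side:
    effective length = actual length * sin(target angle)."""
    return actual_length_mm * math.sin(math.radians(target_angle_deg))

# An illustrative 4 mm focal track viewed through a 20-degree anode angle:
print(round(effective_focal_length(4.0, 20.0), 2))  # ~1.37 mm
```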
2.4 Inherent Filtration
Inherent filtration is the filtration provided by the permanent components of the X-ray tube itself. The primary purpose of these filters is to reduce the number of low-energy x-rays that reach the patient. The target itself, the glass wall of the tube, the
material necessary to provide the vacuum, mechanical rigidity and other materials collectively
referred to as the filtration substantially absorb the lower-energy photons. There is therefore a
low-energy cut-off, at about 20 keV, as well as a maximum energy. Low-energy X-rays
contribute nothing to diagnostic quality and serve only to increase patient dose unnecessarily
because they are absorbed in superficial tissues and do not penetrate to reach the film. The
latter depends only on the kVp and the former on the filtration added to the tube. Peak kilovoltage (kVp) is the maximum voltage applied across an X-ray tube. It determines the kinetic energy of the electrons accelerated in the X-ray tube and the peak energy of the X-ray emission spectrum. The actual voltage across the tube may fluctuate.
In construction of some glass x-ray tubes, the port is reduced in thickness to provide less
inherent filtration. In some other tubes the port is made of beryllium which is a light metal of
low atomic number and low x-ray absorption. Because of tremendous pressures exerted by the
atmosphere on large evacuated containers, x-ray ports must be designed with sufficient
thickness to withstand these pressures without implosion. In center-grounded x-ray
equipment, it is also necessary to provide gas (e.g., sulfur hexafluoride, SF6) and solid
insulation for electrical isolation of the x-ray tube. Excessive inherent filtration reduces the x-
ray output as well as the radiographic contrast on equipment of a given rating.
X-ray machines have metal filters positioned in the useful beam, usually in normal practice
it is acceptable to tolerate inherent filtration equivalent to 1 mm of aluminum up to 100 kVp
(kilovolts peak); 3 mm of aluminum up to 175 kVp; 5 mm of aluminum equivalent up to 250
kVp; and higher filtration in 1,000 to 2,000 kVp units. Inherent filtration above these
tolerances reduces contrast, and hence, sensitivity of radiographic inspection, and as a result,
limits the sensitivity of inspection, especially on thin sections and light alloys. For this reason,
during radiographic inspections using kilovoltage of 150 or less, the tube head shall be
configured so that generated radiation will travel from the target through a beryllium window
without passing through any media other than air or insulating gas.
2.5 Cooling Requirements
The product of mA and kV equals watts of electrical power in the electron beam striking the X-ray target. One watt of electrical power is equal to one volt-ampere. Therefore, in an X-ray tube operating at 10 mA (0.01 amperes) and 140 kV (140,000 volts), 1400 watts of electrical power are in the electron beam. Only a very small amount of the energy in the electron beam is converted into X radiation. This ranges from about 0.05 percent at 30 kV to approximately 10 percent in the megavoltage range. Most of the electron beam energy is
converted into heat. This generation of heat in the X-ray tube target material is one of the
limiting factors in the capabilities of the X-ray tube. It is necessary to remove this heat from
the target as rapidly as possible. Various techniques are used for removal of heat. In some
instances, the target is comparatively thin, and suitable oil is circulated on the back surface to
remove heat. Others (where the anode is being operated at ground potential) use water-
antifreeze mixture to conduct heat away from the target. Most X-ray targets are mounted in
copper, using the copper as a heat sink. Some units have no external method of heat removal,
but depend upon heat dissipation into the atmosphere by fins of a thermal radiator. Some
totally enclosed tubes depend upon the heat storage capacity of the anode structure to absorb
the heat generated during X-ray exposure. This heat is then dissipated after the unit is turned
off. These units usually have a duty cycle as a limiting factor of operation that is dependent
upon the heat storage capacity of the anode structure and the rate of heat dissipation by
thermal radiation. The rate of heat removal from the X-ray target is the primary limiting factor
in X-ray tube operation.
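The power bookkeeping above can be sketched; the 1% conversion fraction used here is illustrative (the text quotes 0.05% at 30 kV rising toward 10% at much higher energies):

```python
def beam_power_watts(ma: float, kv: float) -> float:
    """Electrical power in the electron beam: milliamperes x kilovolts = watts."""
    return ma * kv

def heat_watts(ma: float, kv: float, xray_fraction: float) -> float:
    """Power left behind as heat after the (small) X-ray conversion fraction."""
    return beam_power_watts(ma, kv) * (1.0 - xray_fraction)

# The text's example: 10 mA at 140 kV gives 1400 W in the beam;
# with an assumed ~1% X-ray yield, ~1386 W must be removed as heat.
print(beam_power_watts(10, 140))
print(heat_watts(10, 140, 0.01))
```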
Figure 2.2: Fundamentals of the X-ray tube: cathode, rotating anode with rotor, bearings and stator windings, high-voltage cables (80-140 kV), vacuum envelope, surrounding oil, lead shielding, and the x-ray beam port.
Point two can be understood from the kinetic energy, E = eV, of the incoming electron, corresponding to the potential energy of the electric potential V between the cathode and the target. There is no more energy available for the outgoing x-ray. The quantization of this radiation implies that the x-ray photon has energy

hf ≤ eV

from which the cutoff (minimum) wavelength is

λmin = hc/(eV)
The cutoff wavelength is totally independent of the target material. However, the other
characteristics of the spectrum depend on the target material.
Very rarely, an electron arriving at the target is immediately and completely stopped in this way and produces a single photon of energy equivalent to the kVp. This is the largest photon energy that can be produced at this kilovoltage. X-ray production is by energy conversion: events 1, 2, and 3 in Figure 2.3 depict incident electrons interacting in the vicinity of the target nucleus, resulting
in bremsstrahlung production caused by the deceleration and change of momentum, with the
emission of a continuous energy spectrum of x-ray photons.
It is more likely that the bombarding electron first loses some of its energy as heat and
then, when it interacts with the nucleus, it loses only part of its remaining energy, with
emission of bremsstrahlung of lower photon energy.
Figure 2.3: Bremsstrahlung production. Incident electrons are scattered near the target nucleus: (1) impact with the nucleus gives maximum energy, (2) a close interaction gives moderate energy, and (3) a distant interaction gives low energy.
The X-rays may be emitted in any direction (although mainly sideways to the electron beam) and with any energy up to the maximum. Figure 2.3 plots the relative number of photons having each photon energy (kiloelectronvolts). The bremsstrahlung forms a continuous spectrum. The maximum photon energy (in kiloelectronvolts) is equivalent to the kVp.
Figure 2.4: Characteristic x-ray production. An incident electron ejects an inner-shell electron, leaving a "hole"; electrons falling in from the outer shells emit K-shell, L-shell, M-shell, N-shell and O-shell photons (characteristic x-rays). The accompanying energy-level diagram shows the K (n = 1), L (n = 2), M (n = 3) and N (n = 4) shells, with the Kα and Kβ transitions ending on the K shell and energies marked in keV.
There is also L-radiation, produced when a hole created in the L-shell is filled by an electron falling in from farther out. Even in the case of tungsten these photons have too little energy to leave the X-ray tube assembly, and so they play no part in radiology.
The X-ray photons produced in an X-ray tube in this way have a few discrete or separate
photon energies and constitute a line spectrum.
The photon energy of the K-radiation therefore increases as the atomic number of the
target increases. It is characteristic of the target material and is unaffected by the tube voltage.
A K-electron cannot be ejected and the K-radiation is not produced at all if the peak tube
voltage is less than EK, i.e. 70 kV in the case of a tungsten target. The rate of production of
the characteristic radiation increases as the kV is increased above this value.
Figure 2.5: Wavelength distribution of X-ray production in a molybdenum target at 35 kV, showing the continuous "bremsstrahlung" spectrum beginning at the minimum wavelength λmin, with the characteristic Kα and Kβ peaks superimposed (wavelength axis in 10⁻¹² m).
The two peaks of Figure 2.5, labeled Kα and Kβ, are part of what is called the characteristic X-ray spectrum. Similar peaks appear at greater wavelengths, or smaller frequencies.
The emission of characteristic X-rays involves the following processes:
1. The energetic electrons collide with an atom of the target, knocking out one of the innermost electrons of the atom (n small) and creating a hole in the atomic structure of the atom.
2. The hole is filled when an electron from a higher energy level in a middle shell of the atom (n of middle value) jumps down to the lower energy shell, emitting a high-energy photon (a characteristic X-ray).
The electron from the middle shell is subsequently replaced by an electron from an upper energy shell, which in the transition emits a low-energy photon.
The average or effective energy of the continuous spectrum lies between these two, and is typically one-third to one-half of the kVp. Thus, an X-ray tube operated at 90 kVp can be
thought of as emitting, effectively, 45 keV X-rays. As this peak kV is greater than the K-shell binding energy, characteristic X-rays are also produced. They are shown as lines superimposed on the continuous spectrum (see Figure 2.8).
–The intensity of X-rays emitted is proportional to kV² × mA.
–The efficiency of X-ray production is the ratio of X-ray energy output to electrical energy input, and increases with the kV. The efficiency is greater the higher the atomic number of the target.
Figure 2.8: Effect of tube kilovoltage on X-ray spectra. Continuous "bremsstrahlung" spectra are shown for 40, 80 and 120 kV, with the characteristic Kα and Kβ lines superimposed; the horizontal axis is photon energy (keV).
peak value throughout the exposure. In Figure 2.9 it is below peak value during the greater part of each half cycle. A single-phase generator produces useful X-rays in pulses, each lasting about 3 ms during the middle of each 10 ms half cycle of the mains.
Figure 2.9: Relationship between kV and mA and wavelength (intensity versus wavelength for high kVp and for high mA).
rays. That is, kilovoltage (or kilovolt peak, "kVp") is the component that controls the quality
of the X-ray beam produced. It also controls the contrast or grey scale in the produced
X-ray film: the higher the kVp, the lower the contrast. When the kV is set on the control
console, the maximum kilovoltage that will be achieved is the number you have selected. For
example, if you set the kVp at "60", the maximum kilovoltage that will be produced is 60 kV, or
60,000 volts. The reason we call it kilovolts peak, or kVp, is that you will also get some voltages
that are less than the kilovolts peak, or maximum kV set on the control console. You will get
some voltages at 58 kV or 59 kV, etc.
The change in X-ray quantity is proportional to the square of the ratio of the kVp; in other
words, if the kVp were doubled, the X-ray intensity would increase by a factor of four:

I1 / I2 = (kVp1 / kVp2)²

where I1 and I2 are the X-ray intensities at kVp1 and kVp2, respectively.
X-ray quantity is directly proportional to the mAs. When the mAs is doubled, the number of
electrons striking the tube target is doubled, and therefore the number of x-rays emitted is
doubled.
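The two proportionality rules above (intensity ∝ kVp² at fixed mAs, and intensity ∝ mAs at fixed kVp) can be sketched as follows; the function and its name are illustrative, not a calibrated tube model:

```python
def relative_intensity(kvp_ratio, mas_ratio):
    """Relative change in X-ray quantity: (kVp ratio) squared times the mAs ratio."""
    return kvp_ratio ** 2 * mas_ratio

# Doubling kVp at fixed mAs quadruples the output intensity:
print(relative_intensity(2.0, 1.0))  # 4.0
# Doubling mAs at fixed kVp doubles it:
print(relative_intensity(1.0, 2.0))  # 2.0
```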
CHAPTER 3
INTERACTION OF
X-RAYS
Interaction processes of X-rays are very important subjects to be studied
in order to have a full knowledge of the mechanisms of interaction
Rationale that lead to X-ray attenuation.
Subjects such as effective atomic number, absorption edges, the
relative importance of Compton and photoelectric attenuation,
secondary electrons, properties of X- and gamma rays, and attenuation
of X-rays by the patient are considered; students will need them for a
deep comprehension of radiation.
Filtration is also a very important subject to be studied in order to have
a full knowledge of the structure of matter, which is needed to
understand how filtration works.
Performance Objectives
After studying chapter three, the student will be able to:
1. Define attenuation.
2. Name and describe mechanisms of x-rays interaction.
3. Contrast between the types of attenuation.
4. Define Compton process and photoelectric absorption.
5. Compare the Compton process and photoelectric absorption.
6. Define absorption edges.
7. Determine the relative importance of Compton and photoelectric
attenuation.
8. Determine the sources of attenuation of x-rays by the patient.
9. Define filtration.
10. Determine the advantage of filtration.
11. Explain the effects of filtration.
X- and gamma-rays have many modes of interaction with matter. Some of those which are not
important to radiography or nuclear medicine imaging are:
Mössbauer Effect
Coherent Scattering
Pair Production
and will not be described here.
Those which are very important to radiography are the Photoelectric Effect and the Compton
Effect. We will consider each of these in turn below. Note that the effects described here are
also of relevance to the interaction of gamma-rays with matter since as we have noted before
X-rays and gamma-rays are essentially the same entities and differ only in their origin. So
the treatment below is also of relevance to nuclear medicine imaging.
However, the total fraction of photons passing through an absorber decreases
exponentially with the thickness of the absorber.
As a beam of X- or gamma rays passes through matter, three possible fates await each
photon, as listed below:
a. Penetrate (transmitted): the photon can pass through the section of matter without
interacting.
b. Absorbed: the photon can interact with the matter and transfer to it all of its
energy (the photon is completely absorbed by depositing its energy) or some of it (partial
absorption).
c. Produce scattered radiation: the photon can interact and be diverted in a new direction,
i.e. scattered or deflected from its original direction, depositing part of its energy.
Photons Entering the Human Body will either Penetrate, be Absorbed, or Produce Scattered
Radiation (see figure 3.1).
(Figure 3.1: diagram showing an absorbed X-ray and a scattered X-ray as a beam passes
through the body.)
There are two kinds of interactions through which photons deposit their energy; both are with
electrons. In one type of interaction the photon loses all its energy; in the other, it loses a
portion of its energy, and the remainder is scattered. X-ray absorption and scattering
processes are stochastic processes, governed by the statistical laws of chance. It is impossible to
predict which of the individual photons in a beam will be transmitted by 1 mm of a material,
but it is possible to be quite precise about the fraction of them that will be, on account of the
large numbers of photons the beam contains.
Although a large number of possible interaction mechanisms are known for radiation (X- or
gamma rays) in matter, only two major types play an important role in diagnostic radiology:
photoelectric absorption, and Compton scattering.
3.2 Compton Scattering (Modified Scatter)
A Compton interaction is one in which only a part of the energy of the photon is transferred to a
valence electron which is essentially free, and a photon of reduced energy is produced; i.e. the
electron recoils and takes away some of the energy of the photon as kinetic energy. In other
words, Compton scattering is the process whereby an X- or gamma ray interacts with a free
or weakly bound electron and transfers part of its energy to the electron. Notice that the
electron leaves the atom and may act like a beta-particle, and that the X-ray photon leaves the
site of the interaction in a direction different from that of the original photon, i.e. it is diverted
in a new direction with reduced energy, as shown in figure 3.2. Because of the change in photon
direction, this type of interaction is classified as a scattering process, sometimes called
Compton scattering, also known as incoherent scattering. This deflected or scattered X-ray can
undergo further Compton interactions within the material.
Figure 3.2: A schematic representation of the Compton Scattering. (An incident X-ray ejects a
Compton electron; the scattered X-ray leaves at an angle of deflection θ.)
In effect, a portion of the incident radiation "bounces off" or is scattered by the material. This is
significant in some situations because the material within the primary X-ray beam becomes a
secondary radiation source. The most significant object producing scattered radiation in an X-
ray procedure is the patient's body. The portion of the patient's body that is within the primary
X-ray beam becomes the actual source of scattered radiation. This has two undesirable
consequences.
The scattered radiation that continues in the forward direction and reaches the image
receptor decreases the quality (contrast) of the image.
The radiation that is scattered from the patient is the predominant source of radiation
exposure to the personnel conducting the examination.
The angle of deflection (scatter) θ is the angle between the scattered ray and the incident ray.
Photons may be scattered in all directions. The electrons are projected only in sideways and
forward directions.
3.2.1 Direction of Scatter
In Compton interactions, the relationship of the electron energy to that of the photon depends
on the angle of scatter and the original photon energy.
It is possible for photons to scatter in any direction. The direction in which an individual
photon will scatter is purely a matter of chance. There is no way in which the angle of scatter
for a specific photon can be predicted. However, there are certain directions that are more
probable and that will occur with a greater frequency than others. The factor that can alter the
overall scatter direction pattern is the energy of the original photon. In diagnostic examinations,
the most significant scatter will be in the forward direction. This would be an angle of scatter of
only a few degrees. However, especially at the lower end of the energy spectrum, there is a
significant amount of scatter in the reverse direction, i.e., backscatter. For the diagnostic photon
energy range, the number of photons that scatter at right angles to the primary beam is in the
range of one-third to one-half of the number that scatter in the forward direction. Increasing
primary photon energy causes a general shift of scatter to the forward direction. However, in
diagnostic procedures, there is always a significant amount of back- and side-scatter radiation.
3.2.2 Energy of Scattered Radiation
When a photon undergoes a Compton interaction, its energy is divided between the scattered
secondary photon and the electron with which it interacts. The electron's kinetic energy is
quickly absorbed by the material along its path. In other words, in a Compton interaction, part
of the original photon's energy is absorbed and part is converted into scattered radiation.
The manner in which the energy is divided between scattered and absorbed radiation depends
on two factors: the angle of scatter and the energy of the original photon. The relationship
between the energy of the scattered radiation and the angle of scatter is a little complex and
should be considered in two steps. The photon characteristic that is specifically related to a
given scatter angle is its change in wavelength. It should be recalled that a photon's wavelength
(λ) and energy (E) are inversely related, as given by:

E (keV) = 12.4 / λ (Å)
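This inverse relationship can be sketched numerically, using the constant 12.4 (keV·Å) quoted in the text; the function name is ours:

```python
def photon_energy_kev(wavelength_angstrom):
    """E (keV) = 12.4 / wavelength (angstrom): shorter wavelength, higher energy."""
    return 12.4 / wavelength_angstrom

print(photon_energy_kev(0.124))  # 100.0 keV
print(photon_energy_kev(12.4))   # 1.0 keV
```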
Conservation of energy and momentum allows only a partial energy transfer when the electron
is not bound tightly enough for the atom to absorb recoil energy. This interaction involves the
outer, least tightly bound electrons in the scattering atom. The electron becomes a free electron
with kinetic energy equal to the difference between the energy lost by the X-ray and the electron
binding energy. Because the electron binding energy is very small compared to the X-ray
energy, the kinetic energy of the electron is very nearly equal to the energy lost by the X-ray:

KE of electron = E − E′ − Eb ≈ E − E′

where E is the incident photon energy, E′ the scattered photon energy, and Eb the electron
binding energy. Two particles leave the interaction site: the freed electron and the scattered
X-ray. The directions of the electron and the scattered X-ray depend on the amount of energy
transferred to the electron during the interaction. The energy of the scattered X-ray is given by:

E′ = E / [1 + (E / 511 keV)(1 − cos θ)]

It was observed that when X-rays of a known wavelength interact with atoms, the X-rays are
scattered through an angle θ and emerge at a different wavelength related to θ.
Since photons lose energy in a Compton interaction, the wavelength always increases. The
relationship between the wavelength shift, λ′ − λ, and the angle of scatter is given by:

λ′ − λ = (h / m₀c)(1 − cos θ)

where
λ is the initial wavelength,
λ′ is the wavelength after scattering,
h is the Planck constant,
m₀ is the electron rest mass,
c is the speed of light, and
θ is the scattering angle.
The wavelength shift is at least zero (for θ = 0°) and at most twice the Compton wavelength,
h/m₀c (for θ = 180°).
(Diagram: back-scattered photon, θ ≈ 180°; side-scattered photon, θ = 90°; forward-scattered
photon, θ ≈ 0°; each with its recoil electron.)
Thus:
The back-scattered photon has minimum energy, for a head-on collision in which the X-ray is
scattered through 180° and the electron moves forward in the direction of the incident X-ray.
For very small scattering angles (θ ≈ 0), the energy of the forward-scattered X-ray is only
slightly less than the energy of the incident X-ray, and the scattered electron takes very little
energy away from the interaction.
The higher the initial photon energy:
the greater the remaining photon energy of the scattered radiation, and the more
penetrating it is; also
the greater the energy that is carried off by the recoil electron, and the greater its range.
This is seen in the following examples:

Incident photon    Back-scattered photon    Recoil electron
25 keV             22 keV                   3 keV
150 keV            100 keV                  50 keV
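A minimal sketch of the Compton energy formula confirms the examples above, whose values are rounded; 511 keV is the electron rest energy, and the binding energy of the recoil electron is neglected:

```python
import math

def scattered_energy_kev(e_kev, theta_deg):
    """Compton-scattered photon energy: E' = E / (1 + (E/511)*(1 - cos(theta)))."""
    return e_kev / (1 + (e_kev / 511.0) * (1 - math.cos(math.radians(theta_deg))))

for e in (25, 150):
    e_scat = scattered_energy_kev(e, 180)  # head-on collision: backscatter
    print(f"{e} keV -> photon {e_scat:.1f} keV, electron {e - e_scat:.1f} keV")
# 25 keV -> photon 22.8 keV, electron 2.2 keV
# 150 keV -> photon 94.5 keV, electron 55.5 keV
```

The computed 94.5 keV backscatter photon for a 150 keV incident X-ray shows how rough the table's 100 keV figure is; the qualitative trend (a larger energy transfer at higher incident energy) is the same.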
The softening effect of Compton scatter is therefore greatest with large scattering
angles as well as with high energy X-rays.
3.3 Photoelectric Absorption

Figure 3.4: A schematic representation of the photoelectric absorption process. (An incident
X-ray ejects a photoelectron from an inner shell.)
The energy transfer is a two-step process:
The first step is the photoelectric interaction in which the photon transfers its energy to
the electron.
The second step is the depositing of the energy in the surrounding matter by the
electron.
Photoelectric interactions usually occur with electrons that are tightly bound to the atom, that
is, those with a relatively high binding energy. Photoelectric interactions are most probable
when the electron binding energy is only slightly less than the energy of the photon. If the
binding energy is more than the energy of the photon, a photoelectric interaction cannot occur.
This interaction is possible only when the photon has sufficient energy to overcome the binding
energy and remove the electron from the atom.
On the basis of the principle of conservation of energy, we can deduce that the photon's
energy is divided into two parts by the interaction. A portion of the energy is used to overcome
the electron's binding energy and to remove it from the atom. The remaining energy is
transferred to the electron as kinetic energy and is deposited near the interaction site. Since the
interaction creates a vacancy in one of the electron shells, typically the K or L, an electron
moves down to fill it. The ejected electron leaves the atom with a kinetic energy equal to the
energy of the X-ray less the orbital binding energy. The electrons so ejected are called
photoelectrons.
Note that an ion results when the photoelectron leaves the atom. Also note that the X-ray energy
is totally absorbed in the process.
The photon disappears: Part of its energy, equal to the binding energy of the K-shell, is
expended in removing the electron from the atom, and the remainder becomes the kinetic
energy (KE) of that electron:
KE of the electron = photon energy - EK
Less often, the X- or gamma ray photon may interact with an electron in the L-shell of an atom.
The electron is then ejected from the atom with
KE = photon energy - EL
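The kinetic-energy relations above can be sketched numerically; the 33 keV iodine K-shell binding energy used in the example is the value quoted later in this chapter, and the function name is ours:

```python
def photoelectron_ke_kev(photon_kev, binding_kev):
    """KE of photoelectron = photon energy - shell binding energy.

    A photoelectric interaction is only possible when the photon energy
    exceeds the binding energy of the shell.
    """
    if photon_kev < binding_kev:
        raise ValueError("photon energy below binding energy: no photoelectric interaction")
    return photon_kev - binding_kev

# A 50 keV photon ejecting a K-electron from iodine (E_K = 33 keV):
print(photoelectron_ke_kev(50.0, 33.0))  # 17.0 keV
```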
Two subsequent points should also be noted.
The photoelectron can cause ionizations along its track in a similar manner to a beta-particle.
The vacancy ("hole") created in the atomic shell is filled by an electron from an outer shell
(farther out) of the atom, with the emission of a series of photons of characteristic radiation
(see figure 3.5). In this way the whole of the original photon energy is accounted for.
Figure 3.5: Production of characteristic radiation.
The drop in energy of the filling electron often produces a characteristic x-ray photon. The
energy of the characteristic radiation depends on the binding energy of the electrons involved.
Characteristic radiation initiated by an incoming photon is referred to as fluorescent radiation.
Fluorescence, in general, is a process in which some of the energy of a photon is used to create
a second photon of less energy. This process sometimes converts x-rays into light photons.
Whether the fluorescent radiation is in the form of light or x-rays depends on the binding
energy levels in the absorbing material.
In the case of air, tissue, and other light-atom materials, the characteristic radiation is so soft
that it is absorbed immediately with the ejection of a further, low-energy, photoelectron or
"Auger electron". Thus, all the original photon energy is converted into the energy of electronic
motion and is said to have been absorbed by the material. Photoelectric absorption in such
materials is complete absorption.
On the other hand, the characteristic rays from barium and iodine in contrast media are
sufficiently energetic to leave the patient. In this respect they act like Compton scattered rays.
3.4 Coherent Scatter
The incident photon interacts with an electron which is tightly bound to its parent atom and
excites it, causing it to vibrate. The vibration causes the photon to scatter. The atom is too
massive to recoil, and the photon is scattered with no loss of energy, depositing no energy in
the material. No secondary electron is set moving and no ionization or other effect is
produced in the material. This process occurs only with low-energy photons and at very small
angles of scattering, in which case the scattered radiation does not leave the beam. This type
of interaction generally has little significance in most diagnostic procedures. It is
variously called coherent, classical, elastic, or Thomson scattering.
3.5 Attenuation
Attenuation refers to the reduction in intensity of an X-ray beam as it passes
through a material, due to interaction events between the X-rays and matter (i.e. absorption and
scattering of photons). Some of the photons interact with the material, and some pass on
through. These interactions, mainly the photoelectric effect and Compton scattering,
remove photons from the beam in a process known as attenuation. The degree of
attenuation depends on the intensity of the original X-ray beam and the physical density of the
material through which the X-ray beam passes. Under specific conditions, a certain percentage
of the photons will interact, or be attenuated, in a 1-unit thickness of material.
In clinical applications we are generally not concerned with the fate of an individual photon
but rather with the collective interaction of the large number of photons. In most instances we
are interested in the overall rate at which photons interact as they make their way through a
specific material.
For a narrow beam of mono-energetic photons, the change in X-ray beam intensity at some
distance in a material can be expressed in the form of an equation as:

dI = −n·s·I·dx

where the minus sign indicates that the intensity is reduced by the absorber. The experimental
set-up is illustrated in figure 3.6. We refer to the intensity of the radiation which strikes the
absorber as the incident intensity, I0, and the intensity of the radiation which gets through the
absorber as the transmitted intensity, Ix. Notice also that the thickness of the absorber is
denoted by x.

When this equation is integrated, it becomes:

Ix = I0 e^(−n·s·x)

The number of atoms/cm³ (n) and the proportionality constant (s) are usually combined to yield
the linear attenuation coefficient (μ). Therefore the equation becomes:

Ix = I0 e^(−μx)

where:
Ix = the transmitted intensity,
I0 = the incident intensity,
μ = the linear attenuation coefficient (cm⁻¹), and
x = the absorber thickness (cm).
This final expression tells us that the radiation intensity will decrease in an exponential fashion
with the thickness of the absorber with the rate of decrease being controlled by the Linear
Attenuation Coefficient.
Figure 3.7: Graphical representation of the dependence of radiation
intensity on the thickness of absorber: Intensity versus thickness on the left
and the natural logarithm of the intensity versus thickness on the right.
3.5.1 Linear Attenuation Coefficient
The linear attenuation coefficient (µ) measures the probability that a photon (x-rays or gamma
rays) interacts (i.e. is absorbed or scattered) per unit length of the path it travels in a specified
material. In our example the fraction that interacts in a 1-cm thickness is 0.1, or 10%, and the
value of the linear attenuation coefficient is 0.1 per cm. This value essentially accounts for the
number of atoms in a cubic centimetre of material and the probability of a photon being
scattered or absorbed by the nucleus or by an electron of one of these atoms.
Using the transmitted intensity equation above, linear attenuation coefficients can be used to
make a number of calculations. These include:
the intensity of the energy transmitted through a material when the incident x-ray
intensity, the material and the material thickness are known.
the intensity of the incident x-ray energy when the transmitted x-ray intensity, material,
and material thickness are known.
the thickness of the material when the incident and transmitted intensity, and the
material are known.
the value of µ, and hence the material, can be determined when the incident and transmitted
intensity and the material thickness are known.
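The calculations listed above all follow from the transmitted intensity equation Ix = I0·e^(−μx); a minimal sketch in Python, with illustrative values (μ = 0.1 cm⁻¹ is the 10%-per-cm example used in the text):

```python
import math

def transmitted(i0, mu, x):
    """Transmitted intensity: I_x = I_0 * exp(-mu * x), mu in cm^-1, x in cm."""
    return i0 * math.exp(-mu * x)

def thickness(i0, ix, mu):
    """Solve I_x = I_0 * exp(-mu * x) for x: x = ln(I_0 / I_x) / mu."""
    return math.log(i0 / ix) / mu

print(transmitted(100.0, 0.1, 1.0))  # ~90.48 transmitted out of 100
print(thickness(100.0, 50.0, 0.1))   # ~6.93 cm of material halves the beam
```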
Linear attenuation coefficient values indicate the rate at which photons interact as they move
through material and are inversely related to the average distance photons travel before
interacting. The rate at which photons interact (attenuation coefficient value) is determined by
the energy of the individual photons and the atomic number and density of the material.
The influence of the Linear Attenuation Coefficient can be seen in the Figure 3.8. All three
curves here are exponential in nature, only the Linear Attenuation Coefficient is different.
Notice that when the Linear Attenuation Coefficient has a low value the curve decreases
relatively slowly and when the Linear Attenuation Coefficient is large the curve decreases very
quickly.
Figure 3.8: Exponential attenuation expressed using a small, medium
and large value of the Linear Attenuation Coefficient, μ.
Therefore, if we were to double the atomic number of our absorber, we would increase the
attenuation by a factor of two cubed, that is 8; if we were to triple the atomic number, we would
increase the attenuation by a factor of three cubed, that is 27; and so on. It is for this reason that
high-atomic-number materials (e.g. lead, Pb) are used for radiation protection.
Density: A low-density absorber will give rise to less attenuation than a high-density absorber,
since the chances of an interaction between the radiation and the atoms of the absorber are
relatively lower. That is, the change in X-ray beam intensity is proportional to the density of
the absorber.
Thickness: A third factor which we can vary is the thickness of the absorber. As you should
be able to predict, the thicker the absorber, the greater the attenuation.
X-ray energy: Attenuation increases or decreases as the energy of the X-rays is decreased or
increased. Since the attenuation characteristics of materials are important in the
development of contrast in a radiograph, an understanding of the relationship between material
thickness, absorption properties, and photon energy is fundamental to producing a quality
radiograph. A radiograph with higher contrast provides a greater probability of detection of a
given discontinuity. An understanding of absorption is also necessary when designing X-ray and
gamma-ray shielding, cabinets, or exposure vaults. Thus, the higher the energy of the X-rays,
the less the attenuation.
3.6 Photoelectric Rates
The probability, and thus attenuation coefficient value, for photoelectric interactions depends
on how well the photon energies and electron binding energies match. This can be considered
from two perspectives.
In a specific material with a fixed binding energy, a change in photon energy alters
the match and the chance for photoelectric interactions.
On the other hand, with photons of a specific energy, the probability of
photoelectric interactions is affected by the atomic number of the material, which
changes the binding energy.
relationship. One is that the coefficient value, or the probability of photoelectric interactions,
decreases rapidly with increased photon energy. It is generally said that the probability of
photoelectric interactions is inversely proportional to the cube of the photon energy (1/E³).
This general relationship can be used to compare the photoelectric attenuation coefficients at
two different photon energies. The significant point is that the probability of photoelectric
interactions occurring in a given material drops drastically as the photon energy is increased.
The other important feature of the attenuation coefficient-photon energy relationship shown
in Figure 3.9 is that it changes abruptly at one particular energy: the binding energy of the
shell electrons. The K-electron binding energy is 33 keV for iodine. This feature of the
attenuation coefficient curve is generally designated as the K, L, or M edge.
(Figure: linear attenuation coefficient, cm⁻¹, versus photon energy, keV, for iodine, bone,
muscle, and fat.)
The probability of photoelectric interactions occurring is also dependent on the atomic number
of the material. An explanation for the increase in photoelectric interactions with atomic
number is that as atomic number is increased, the binding energies move closer to the photon
energy. The general relationship is that the probability of photoelectric interactions (attenuation
coefficient value) is proportional to Z³. In general, the conditions that increase the probability
of photoelectric interactions are low photon energies and high-atomic-number materials.
To summarize
As the photon energy is increased, photoelectric attenuation decreases according to the
following formula until the binding energy EK of the particular material is reached:

photoelectric attenuation ∝ Z³ / E³
At this energy, the photoelectric absorption jumps to a higher value and then decreases
again as the photon energy further increases.
This discontinuity is an exception to the general rule that attenuation decreases with
increasing energy. The reason is that photons with less energy than EK can only eject
L-electrons and can only be absorbed in that shell. Photons with greater energy than
EK can eject K-electrons as well, and can therefore be absorbed in both shells.
The sudden change of absorption coefficient is called the K-absorption edge, and occurs
at different photon energies with different materials. The higher the atomic number of
the material, the greater is EK and the greater is the photon energy at which the edge
occurs.
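The 1/E³ behaviour summarized above, which holds between absorption edges, can be illustrated with a relative-probability sketch (the function name and energies are illustrative):

```python
def photoelectric_ratio(e1_kev, e2_kev):
    """Ratio of photoelectric attenuation at e2 relative to e1 for the same
    material: (e1/e2)**3. Valid only away from absorption edges."""
    return (e1_kev / e2_kev) ** 3

# Doubling the photon energy cuts the photoelectric probability by a factor of 8:
print(photoelectric_ratio(30.0, 60.0))  # 0.125
```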
3.7 Effective Atomic Number
Effective atomic number is a term that is similar to atomic number but is used for compounds
(e.g. water) and mixtures of different materials (such as tissue and bone) rather than for
single elements. To take account of photoelectric absorption, the effective atomic number is
defined, approximately, as the cube root of the weighted average of the cubes of the atomic
numbers of the constituents. One proposed formula for the effective atomic number, Zeff, is:

Zeff = (f1·Z1^2.94 + f2·Z2^2.94 + … + fn·Zn^2.94)^(1/2.94)

where
fn is the fraction of the total number of electrons associated with each element, and
Zn is the atomic number of each element. (The exponent 2.94 ≈ 3, hence the "cube"
description.)
Some examples are listed in table 3.1:
Table 3.1: Physical characteristics of contrast-producing materials (material, effective atomic
number, density).
Effective atomic number is important for predicting how X-rays interact with a substance, as
certain types of X-ray interactions depend on the atomic number.
The Compton and photoelectric interactions depend on the effective atomic number but not
on the molecular configuration.
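As a sketch of the proposed Zeff formula above, applied to water (2 of water's 10 electrons belong to hydrogen, Z = 1, and 8 to oxygen, Z = 8); the function name is ours:

```python
def z_effective(components, power=2.94):
    """Z_eff = (sum of f_n * Z_n**p) ** (1/p), where f_n is the electron
    fraction and Z_n the atomic number of each constituent element."""
    return sum(f * z ** power for f, z in components) ** (1 / power)

# Water: electron fractions 0.2 for hydrogen (Z=1) and 0.8 for oxygen (Z=8)
print(round(z_effective([(0.2, 1), (0.8, 8)]), 2))  # 7.42
```

The result, about 7.4, is the commonly quoted effective atomic number of water.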
3.8 Compton Rates
Compton interactions can occur with very loosely bound electrons. All electrons in low-
atomic-number materials and the majority of electrons in high-atomic-number materials are in
this category. The characteristic of the material that affects the probability of Compton
interactions is the number of available electrons. It was shown earlier that all materials, with the
exception of hydrogen, have approximately the same number of electrons per gram of material.
Since the concentration of electrons in a given volume is proportional to the density of the
material, the probability of Compton interactions is proportional only to the physical density
and not to the atomic number, as in the case of photoelectric interactions. The major exception
is materials with a significant proportion of hydrogen: in these materials, with more electrons
per gram, the probability of Compton interactions is enhanced.
Although the chances of Compton interactions decrease slightly with photon energy, the
change is not so rapid as for photoelectric interactions, which are inversely related to the cube
of the photon energy.
3.9 Mass Attenuation Coefficient
The mass attenuation coefficient is a measure of how strongly a substance absorbs or
scatters radiation of a given energy, per unit mass. Since the linear attenuation coefficient is
dependent on the density of a material, the mass attenuation coefficient is often reported for
convenience. The linear attenuation coefficient is useful when we are considering an
absorbing material of the same density but of different thicknesses. Material density has
a direct effect on linear attenuation coefficient values, and confusion often arises as to its
effect on attenuation coefficient values.
Consider water for example. The linear attenuation for water vapor is much lower than it is for
ice because the molecules are more spread out in vapor so the chance of a photon encounter
with a water particle is less. Normalizing by dividing the linear attenuation coefficient by the
density of the element or compound produces a value that is constant for a particular element
or compound. This constant (μ/ρ) is known as the mass attenuation coefficient; it has units of
cm²/g and therefore does not change with changes in density. This coefficient is of value when
we wish to include the density, ρ, of the absorber in our analysis. The mass attenuation
coefficient is defined as:

μm = μ / ρ

To convert a mass attenuation coefficient (μ/ρ) to a linear attenuation coefficient (μ), simply
multiply it by the density (ρ) of the material.

The measurement unit used for the linear attenuation coefficient is cm⁻¹, and a common unit
of density is g·cm⁻³. You might like to verify for yourself on this basis that cm²·g⁻¹ is the
corresponding unit of the mass attenuation coefficient.
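The conversion rule μ = (μ/ρ) × ρ can be sketched as follows; the coefficient value 0.2 cm²/g is illustrative, not a measured value, and the density contrast echoes the water/vapor discussion above:

```python
def linear_mu(mass_mu_cm2_per_g, density_g_per_cm3):
    """Linear attenuation coefficient: mu = (mu/rho) * rho, result in cm^-1."""
    return mass_mu_cm2_per_g * density_g_per_cm3

# The same mass attenuation coefficient applied to liquid water (1.0 g/cm^3)
# and to water vapor (~0.0006 g/cm^3):
print(linear_mu(0.2, 1.0))     # 0.2 cm^-1
print(linear_mu(0.2, 0.0006))  # far smaller linear coefficient, same mu/rho
```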
The total attenuation rate depends on the individual rates associated with photoelectric and
Compton interactions. The respective attenuation coefficients are related as follows:

μ(total) = μ(photoelectric) + μ(Compton)
Let us now consider the factors that affect attenuation rates and the competition between
photoelectric and Compton interactions. Both types of interactions occur with electrons within
the material. The chance that a photon will interact as it travels a 1-unit distance depends on
two factors.
One factor is the concentration, or density, of electrons in the material. Increasing the
concentration of electrons increases the chance of a photon coming close enough to an electron
to interact. The electron concentration is determined by the physical density of the material.
Therefore, density affects the probability of both photoelectric and Compton interactions.
All electrons are not equally attractive to a photon. What makes an electron more or less
attractive is its binding energy. The two general rules are:
1. Photoelectric interactions occur most frequently when the electron binding energy is
slightly less than the photon energy.
2. Compton interactions occur most frequently with electrons with relatively low binding
energies.
The electrons with binding energies within the energy range of diagnostic X-ray photons are
the K-shell electrons of intermediate- and high-atomic-number materials. Since an atom can
have at most two electrons in the K shell, the majority of the electrons are located in the
other shells and have relatively low binding energies.
Figure 3.9: Linear attenuation coefficients versus radiation energy for tungsten (W), lead (Pb),
copper (Cu), and aluminum (Al).
The HVL is inversely proportional to the attenuation coefficient. If an incident intensity of 1
and a transmitted intensity of 0.5 are substituted into the attenuation equation introduced
earlier, it can be seen that the HVL multiplied by μ must equal 0.693.

If x is the HVL, then μ times the HVL must equal 0.693, since 0.693 is the exponent value that
gives a transmitted fraction of 0.5: e^(−0.693) = 0.5, i.e. ln 2 = 0.693. Hence HVL = 0.693 / μ.
These last two equations express the relationship between the Linear Attenuation Coefficient
and the Half Value Layer. They are very useful when solving numerical questions relating to
attenuation and frequently form the first step in solving a numerical problem.
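As an illustration, the conversion between the linear attenuation coefficient and the HVL can be sketched in a few lines of Python; the copper value used below is the HVL quoted later in this chapter.

```python
import math

def hvl_from_mu(mu_per_cm):
    """Half value layer (cm) from linear attenuation coefficient (cm^-1)."""
    return math.log(2) / mu_per_cm  # ln(2) ≈ 0.693

def mu_from_hvl(hvl_cm):
    """Linear attenuation coefficient (cm^-1) from HVL (cm)."""
    return math.log(2) / hvl_cm

# Copper at 100 keV: HVL ≈ 0.18 cm, so mu ≈ 3.85 cm^-1
print(round(mu_from_hvl(0.18), 2))  # 3.85
```

Either quantity can be recovered from the other, which is why these relations are usually the first step in a numerical attenuation problem.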
Figure 3.10: Relationship between penetration and object thickness expressed in HVLs. The percent transmitted falls from 100% through 50%, 25%, 12.5%, 6.25%, 3.1%, 1.6%, and 0.8% after 1 through 7 half value layers.
The HVL is often used in radiography simply because it is easier to remember values and
perform simple calculations. In a shielding calculation, such as illustrated in Figure 3.10, it can
be seen that if the thickness of one HVL is known, it is possible to quickly determine how
much material is needed to reduce the intensity to less than 1%.
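A minimal sketch of such a shielding estimate, assuming only that the intensity halves with each HVL of material:

```python
import math

def transmitted_fraction(n_hvls):
    """Fraction of incident intensity transmitted through n half value layers."""
    return 0.5 ** n_hvls

def hvls_needed(target_fraction):
    """Smallest whole number of HVLs reducing intensity to target_fraction or less."""
    return math.ceil(math.log(target_fraction) / math.log(0.5))

# Seven HVLs transmit (1/2)^7, i.e. less than 1% (cf. Figure 3.10)
print(transmitted_fraction(7))  # 0.0078125
print(hvls_needed(0.01))        # 7
```

Multiplying the number of HVLs by the thickness of one HVL then gives the required shield thickness.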
Note: The values presented on this page are intended for educational purposes. Other sources
of information should be consulted when designing shielding for radiation sources.
The first point to note is that the Half Value Layer is highly
dependent on the atomic number of the absorbing material: it decreases as the atomic
number increases. For example, lead is more effective than either aluminum or air at
absorbing X-rays because of its higher density and atomic number, so its
attenuation is relatively large. The HVL for air at 100 keV is about 35 meters, and
it falls to just 0.12 mm for lead at this energy. In other words, 35 m of air is needed
to reduce the intensity of a 100 keV x-ray beam by a factor of two, whereas just 0.12 mm
of lead can do the same thing.
The second thing to note is that the Half Value Layer increases with increasing x-ray
energy; for example, the HVL of copper increases from 0.18 cm at 100 keV to about 0.5 cm at 200 keV.
Thirdly, note that, as the data in the previous table show, there is a reciprocal relationship
between the Half Value Layer and the Linear Attenuation Coefficient.
Figure 3.12: Comparison of photoelectric and Compton interaction rates for different materials and photon energies (up to about 300 keV). The Compton curve is the same for all materials.
The total attenuation coefficient value for materials involved in x-ray and gamma interactions
can vary tremendously if photoelectric interactions are involved. A minimum value of
approximately 0.15 cm2/g is established by Compton interactions. Photoelectric interactions
can cause the total attenuation to increase to very high values. For example, at 30 keV, lead
(Z=82) has a mass attenuation coefficient of 30 cm2/g.
From the above, the photoelectric coefficient is proportional to Z³/E³, and is particularly high
when the photon energy is just greater than EK. The Compton coefficient is independent of Z
and little affected by E. Accordingly, photoelectric absorption is more important than the
Compton process with high-Z materials as well as with relatively low-energy photons.
Conversely, the Compton process is more important than photoelectric absorption with low-Z
materials as well as with high-energy photons. As regards diagnostic imaging with X-rays (20-140
keV), therefore:
1. The Compton process is the predominant process for air, water, and soft tissues;
2. The photoelectric absorption predominates for contrast media, lead, and the materials used in
films, screens, and other imaging devices; while
3. Both are important for bone.
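These scaling rules can be turned into a rough numerical sketch. The function below uses only the proportionality Z³/E³ in arbitrary units; it is not an absolute cross-section, and the effective atomic number for soft tissue (≈7.4) is the approximate value used elsewhere in this text.

```python
def photoelectric_relative(Z, E_keV):
    """Relative photoelectric attenuation, proportional to Z^3 / E^3 (arbitrary units)."""
    return Z**3 / E_keV**3

# Doubling the photon energy cuts the photoelectric contribution eightfold,
# while the Compton contribution is nearly unchanged:
print(round(photoelectric_relative(82, 30) / photoelectric_relative(82, 60), 2))  # 8.0

# At a fixed energy, lead (Z=82) versus soft tissue (Z_eff ≈ 7.4):
print(round(photoelectric_relative(82, 50) / photoelectric_relative(7.4, 50)))  # 1361
```

The huge Z ratio in the second result is why lead so strongly outperforms tissue-like materials wherever photoelectric absorption dominates.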
The total distance an electron travels in a material before losing all its energy is generally
referred to as its range. When it has lost the whole of its initial energy in this way, the electron
comes to the end of its range. The two factors that determine the range are
(1) The initial energy of the electrons; the greater the initial energy of the electron, the
greater its range.
(2) The density of the material; the range is inversely proportional to the density of the
material.
One important characteristic of electron interactions is that all electrons of the same energy
have the same range in a specific material.
In general, the range of electron radiation in materials such as tissue is a fraction of a
millimeter. This means that essentially all electron radiation energy is absorbed in the body
very close to the sites of the interactions.
For example, when 140 keV photons are absorbed in soft tissue, some of the secondary
electrons are photoelectrons having energy of 140 keV, able to produce some 4000 ion pairs
and having a range of about 0.2 mm.
Most of the secondary electrons are recoil electrons with a spectrum of energies averaging
25 keV and an average range of about 0.02 mm. The ranges in air are some 800 times greater
than in tissue. Due to their continual 'collisions' with the atoms, the tracks of secondary
electrons are somewhat tortuous.
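The 4000 ion pairs quoted above follow from assuming roughly 35 eV deposited per ion pair, a commonly quoted textbook figure (the exact value varies with the medium):

```python
# A 140 keV photoelectron, with an assumed ~35 eV deposited per ion pair
photon_energy_eV = 140e3
energy_per_ion_pair_eV = 35.0

ion_pairs = photon_energy_eV / energy_per_ion_pair_eV
print(int(ion_pairs))  # 4000
```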
The effectiveness of a particular radiation in producing biological damage is often related to the
LET of the radiation. The actual relationship of the efficiency in producing damage to LET
values depends on the biological effect considered. For some effects, the efficiency increases
with an increase in LET, for some it decreases, and for others it increases up to a point and then
decreases with additional increases in LET. For a given biological effect, there is an LET value
that produces an optimum energy concentration within the tissue. Radiation with lower LET
values does not produce an adequate concentration of energy. Radiations with higher LET
values tend to deposit more energy than is needed to produce the effect; this tends to waste
energy and decrease efficiency.
the radiation reaching the film, and so the resulting image. This dose reduction is achieved by
interposing between the X-ray tube and patient a uniform flat sheet of metal, usually aluminum,
and called the added or additional filtration. The predominant attenuation process should be
photoelectric absorption, which varies inversely as the cube of the photon energy. The filter
will therefore attenuate the lower-energy photons (which mainly contribute to patient dose)
much more than it does the higher-energy photons (which are mainly responsible for the
image).
The X-ray photons produced in the target are first filtered by the window of the tube
housing, the insulating oil, the glass insert, and, principally, the target material itself. The
combined effect of these disparate components is expressed as an equivalent thickness of
aluminum, typically 0.5-1 mm Al, called the inherent filtration. The light beam diaphragm
mirror also adds to the filtration. When inherent filtration must be minimized, a tube with a
window of beryllium (Z = 4) instead of glass may be used.
The total filtration is the sum of the added filtration and the inherent filtration. For general
diagnostic radiology it should be at least 2.5 mm Al equivalent. (This will produce an HVL of
about 2.5 mm Al at 70 kV, and 4.0 mm at 120 kV.) Mammography is a special case.
3.20 Choice of Filter Material
The atomic number should be sufficiently high to make the energy-dependent attenuating
process, photoelectric absorption, predominate. It should not be too high, since the whole of the
useful X-ray spectrum should lie on the high-energy side of the absorption edge. If not, the
filter might actually soften the beam.
Aluminum (Z = 13) is generally used, as it has a sufficiently high atomic number to be
suitable for most diagnostic X-ray beams. With the higher kV values, copper (Z = 29) is
sometimes used, being a more efficient filter, but it emits 9 keV characteristic X-rays. These
must be absorbed by a 'backing filter' of aluminum on the patient side of the 'compound filter'.
Other filter materials (molybdenum or palladium) have absorption edges (20 or 24 keV,
respectively) favorable for mammography. Erbium (58 keV) has been used at moderate kV
values, and is another so-called 'K-edge filter'.
- It increases the minimum and effective photon energies but does not affect the
maximum photon energy.
- It reduces the area of the spectrum and the total output of X-rays. Finally, it increases
the exit dose/entry dose ratio, or film dose/skin dose ratio.
Above a certain thickness, there is no gain from increasing the filtration, as the output is
further reduced with little further improvement in patient dose or HVL.
Figure 3.13: Schematic effect of increasing filtration on the X-ray spectrum (relative number of photons versus photon energy, keV).
CHAPTER 4
Rationale
X-ray imaging is one of the fastest and easiest ways for a physician
to view the internal organs and structures of the body. This chapter
highlights the importance of effects such as scattered radiation,
grid ratio, and the direct rays in obtaining the highest
quality of image. These topics should be taught in depth to give
full knowledge of such effects.
Performance Objectives
After studying chapter four, the student will be able to:
CHAPTER 4 IMAGING WITH X-RAYS
4.1 Contrast
Any medical image can be described in terms of three basic concepts that we have already
mentioned: spatial resolution, noise, and contrast.
Spatial resolution, or clarity, refers to the spatial extent of small objects within the image.
Noise refers to the precision with which the signal is received; a noisy image will have
large fluctuations in the signal across a uniform object, while a precise signal will have
very small fluctuations.
Image contrast refers to the difference in brightness or darkness in the image between an area
of interest and its surrounding background. For example, if gray and white circles are painted
onto a black canvas, the white circle presents a larger contrast with respect to the
background than the gray circle (Figure 4.1). The information in a medical image usually is
presented in "shades of gray". (Color is avoided because it creates false borders that can distract
the observer.) One uses the differences in gray shades to distinguish different tissue types,
analyze anatomical relationships, and sometimes assess physiological function. The larger the
difference in gray shades between two different tissue types, the easier it is to make important
clinical distinctions. It is often the objective of an imaging system to maximize the contrast in
the image for a particular object of interest, although this is not always true since there may be
design compromises where noise and spatial resolution are also very important. The contrast in
an image depends on both material characteristics of the object being imaged as well as
properties of the device(s) used to image the object. In this chapter we will detail the concept of
contrast and describe the physical determinants of contrast, including material properties, x-ray
spectra, detector response, and the role of perturbations such as scatter radiation and image
intensifier veiling glare.
attenuated by various materials in the patient (muscle, fat, bone, air, and contrast agents) along
the path between the source and detector.
The photon attenuation of each material depends on its elemental composition as well as the
energy of the beam. This effect is assessed using its linear attenuation coefficient (μ), which
gives the fraction of photons absorbed by a unit thickness of the material. An equation (eq. 6 in
chapter 3) is useful as mass attenuation coefficients are commonly tabulated rather than linear
attenuation coefficients.
I = I₀ e^−(μ/ρ)ρx        (4.1)
Where: I is the transmitted intensity, I₀ the incident intensity, μ/ρ the mass attenuation
coefficient (cm²/g), ρ the physical density (g/cm³), and x the thickness (cm).
The energy of the x-ray beam determines the value of the mass attenuation coefficient and is
one of the most important factors in controlling both the radiographic and overall (image)
contrast. The mass attenuation coefficient generally decreases as the photon energy increases
except at points of discontinuity called absorption edges, mostly the K-edge or L-edge. At these
energies, photoelectric interactions between the photon and inner shell electrons cause large
increases in the photoelectric cross-section as the photon energy slightly exceeds the binding
energy of the orbital electrons. Correspondingly, contrast tends to decrease as the photon
energy increases except when K-edge or L-edge discontinuities cause large increases in
contrast.
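Equation 4.1 can be applied directly once a tabulated mass attenuation coefficient is looked up. The numbers below are rough illustrative values for soft tissue, assumed for this sketch rather than taken from the tables in this chapter:

```python
import math

def transmitted_fraction(mu_over_rho, density, thickness_cm):
    """I/I0 = exp(-(mu/rho) * rho * x), with mu/rho in cm^2/g, rho in g/cm^3, x in cm."""
    return math.exp(-mu_over_rho * density * thickness_cm)

# Roughly mu/rho ≈ 0.2 cm^2/g for soft tissue near 60 keV, rho ≈ 1.0 g/cm^3,
# through 10 cm of tissue:
print(round(transmitted_fraction(0.2, 1.0, 10.0), 3))  # 0.135
```

Tabulated mass attenuation coefficients thus only need to be multiplied by the material's density to recover the linear coefficient used in the exponent.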
of each tissue type in turn are determined by its elemental and chemical composition, as it is for
all other (i.e. biologic and non-biologic) chemical compounds and mixtures.
For purposes of our discussion, we can consider the body as composed of three major tissue
types, together with two other attenuating materials:
- Fat
- Soft tissue (lean)
- Bone
- Air found in the lungs (and in the gastrointestinal tract)
- Contrast agents that may be introduced into the body
The chemical composition of the three major tissue types are given in Table 4.1 and will be
used in the description of attenuation properties presented in the following section.
Some of their relevant physical properties are given in Table 4.2 and graphs of their mass
attenuation coefficients as a function of energy are presented in Figure 4.2.
Figure 4.2: Mass attenuation coefficients as a function of photon energy (dotted curve: water and soft tissue).
4.3.2. Fat
Along with the energy of the x-ray beam, electron density, physical density, and atomic number
determine the attenuation of any material through their impact on the attenuation coefficient.
Due to the presence of low atomic number elements, fat has a lower physical density and lower
effective atomic number, and therefore a lower photoelectric attenuation coefficient, than either
soft tissue or bone. For this reason, fat has a lower attenuation coefficient than other materials
in the body (except air) at low energies where the photoelectric interactions are the dominant
effect.
However, at higher energies, fat has a somewhat higher Compton mass attenuation
coefficient than other tissues found in the body. Unlike other elements, the nucleus of hydrogen
is free of neutrons, giving hydrogen a higher electron density (electrons/mass) than other
elements. Because hydrogen contributes a larger proportion of the mass in fat than it does in
soft tissue and bone, fats have a larger electron density than other tissues. This becomes
particularly important at higher energies where Compton interactions dominate attenuation. In
fact, inspection of a table of mass attenuation coefficients (Figure 4.2) shows that at higher
energies the mass attenuation coefficient of fat slightly exceeds that of bone or soft tissue,
precisely due to the higher electron density of fat. However, due to its low density it does not
have a higher linear attenuation coefficient.
As Tables 4.1 and 4.2 show, the differences in atomic number, physical density, and
electron density between soft tissue and fat are slight. The differences in the linear attenuation
coefficients, and therefore in radiographic contrast, between fat and soft tissue are small. One
must depend on the energy dependence of the photoelectric effect to produce contrast between
these two materials. This is particularly true in mammography where one uses an x-ray beam
with an effective energy of about 18 keV. Such a low energy spectrum maximizes the contrast
between glandular tissue, connective tissue, skin, and fat, all of which have increasingly similar
attenuation coefficients at higher photon energies.
4.3.3 Bone
The mineral component of bone gives it excellent contrast properties for x-ray photons in the
diagnostic range. This is due to two properties. First, its physical density is 60% to 80% higher
than soft tissue. This increases the linear attenuation coefficient of bone by a proportionate
fraction over that of soft tissue. Second, its effective atomic number (about 11.6) is
significantly higher than that of soft tissue (about 7.4). Since the photoelectric mass attenuation
coefficient varies with the cube of the atomic number, the photoelectric mass attenuation
coefficient for bone is about [11.6/7.4]3 = 3.85 times that of soft tissue. The combined effect of
its greater physical density and its larger effective atomic number gives bone a photoelectric
linear attenuation coefficient approximately 6 times greater than that of soft tissue or fat. This
difference decreases at higher energies where the Compton Effect becomes more dominant.
However, even at higher energies, the higher density of bone still allows it to have excellent
contrast with respect to both soft tissue and fat. Therefore when imaging bone, one can resort to
higher energies to minimize patient exposure while maintaining reasonable contrast instead of
resorting to low x-ray beam energies as one is compelled to do when attempting to differentiate
fat from soft tissue.
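The figures quoted above are easy to verify; the effective atomic numbers and the 60% to 80% density difference are the approximate values given in the text:

```python
z_bone, z_soft = 11.6, 7.4

# Photoelectric mass attenuation scales roughly with the cube of atomic number:
pe_mass_ratio = (z_bone / z_soft) ** 3
print(round(pe_mass_ratio, 2))  # 3.85

# Folding in a physical density 60% to 80% higher than soft tissue recovers the
# roughly sixfold photoelectric linear attenuation advantage of bone:
print(round(pe_mass_ratio * 1.6, 1), round(pe_mass_ratio * 1.8, 1))  # 6.2 6.9
```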
Iodine and barium are used as contrast agents for several reasons.
The first is that they can be incorporated into chemicals that are not toxic even when
rather large quantities are introduced into the body.
Second, to be useful as a contrast agent, the material must have a linear attenuation
coefficient that is different from that of most materials found in the human body. When
iodinated contrast agents are used in angiography, the iodine must provide sufficient x-
ray attenuation to provide discernible contrast from surrounding soft tissues when
imaged with x-rays.
Both barium (Z = 56) and iodine (Z = 53) meet these requirements. A common iodinated
contrast agent is Hypaque (Table 4.3), a cubic centimeter of which contains 0.25 grams
Air
One can maximize the contrast of iodine and barium contrast media by imaging with x-ray
photons just slightly above the k-edge of the contrast agent. This is achieved by "shaping" the
x-ray spectrum by an additional metallic filter with a k-edge higher than the k-edge of the
contrast agent. The metallic filter attenuates photons at energies higher than its k-edge but
transmits a larger proportion of photons with energies just below its k-edge. If the k-edge of the
filter is higher than the k-edge of the contrast agent, a large proportion of the transmitted
photons will fall just above the k-edge of the contrast agent. Since these photon energies are chosen
to fall in a region of maximum attenuation for the contrast agent, they will maximize its
contrast. As you might deduce, metals with slightly higher atomic numbers than the contrast
agent are useful as x-ray beam filters in these applications since they also have slightly higher
k-edges. Therefore rare earth metals such as samarium (Sm) and cerium (Ce) are commonly
used to filter the x-ray beam in contrast studies involving iodine or barium. Because of the
principle of their operation, they also are called "k-edge" filters.
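The pairing of filter and contrast agent can be checked against published K-edge energies. The values below are approximate standard figures quoted for illustration; they are not given in this text:

```python
# Approximate K-edge energies in keV (standard published values, illustrative only)
k_edge_keV = {"I": 33.2, "Ba": 37.4, "Ce": 40.4, "Sm": 46.8}

def suitable_filter(agent, filter_metal):
    """A k-edge filter should have its edge slightly above the contrast agent's."""
    return k_edge_keV[filter_metal] > k_edge_keV[agent]

print(suitable_filter("I", "Ce"))   # True: cerium suits iodine studies
print(suitable_filter("Ba", "Sm"))  # True: samarium suits barium studies
```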
Figure 4.3: Mass attenuation coefficients (cm²/g) of iodine, barium, water, and air versus photon energy.
uniform over the image, it acts like a veil and reduces the contrast which would otherwise be
produced by the primary rays by the factor (1 + S/P), which may be anything up to 10.
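The veiling effect of scatter can be quantified with this factor; a brief sketch:

```python
def contrast_reduction_factor(scatter_to_primary):
    """Contrast is reduced by the factor (1 + S/P), where S/P is the
    scatter-to-primary ratio at the image receptor."""
    return 1.0 + scatter_to_primary

# An S/P ratio of 9 (plausible for a thick body part) gives a tenfold contrast loss
for sp in (1, 4, 9):
    print(sp, contrast_reduction_factor(sp))
```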
4.6 Scatter reduction and Contrast Improvement
The image formation process in diagnostic radiology essentially captures a radiographic
"shadow" created by the body of x-rays from a point source. The accuracy of this "shadow"
depends on the photons being highly directional. However, scattered radiation is not emitted by
a single point source. Rather, it strikes the film-screen cassette from random directions and
carries little useful information, unlike the directional primary photons that arise from the
source.
There are a number of ways to reduce the scattered radiation produced by the patient
(relative to primary) and so improve radiographic contrast.
4.6.1 Field size
Reducing the field area, by the use of cones or the light beam diaphragm, reduces the volume
of scattering tissue and so decreases scatter and improves contrast as shown in figure 4.4.
Figure 4.4: Reducing the field size of the x-ray fan beam reduces the scatter radiation produced.
4.6.2 Kilovoltage
Using a lower kV produces less forward scatter and more side scatter. At the same time it
produces less penetrating scatter, so scatter produced at some distance from the film is less
likely to reach it. In practice, these effects may not be very significant. Reducing the kV does
increase the contrast, but primarily because of the increased differential photoelectric
absorption.
The amount of scatter (relative to the primary rays) reaching the film-screen may be reduced
and contrast increased by interposing between it and the patient:
4.6.3 Grid
An 'anti-scatter' grid, seen in cross-section in Figure 4.5, consists of thin (0.07 mm) strips of a
heavy metal (such as lead) sandwiched between thicker (0.18 mm) strips of inter-space material
(plastic, carbon, fiber, or aluminum, which are transparent to X-rays), encased in aluminum or
carbon fiber. The lead strips absorb (say, 90% of) the scattered rays which hit the grid
obliquely, while allowing (say, 70%) of the primary rays to pass through the gaps and reach the
film.
Figure 4.5: Cross-section of a radiographic (anti-scatter) grid intercepting scatter radiation from the x-ray fan beam.
(1) a small reduction in the intensity of the primary radiation which comes from the anode,
some distance away, but
(2) a large reduction in the intensity of the scattered radiation, since that comes from points
within the patient, much nearer. Use of an air gap increases contrast but necessitates an
increase in the kV or mAs, and also results in a magnified image.
Figure 4.7: Scanning slit system using a fan beam to reduce scatter: fore and aft slits define narrow beam geometries.
strips and B the interspace material. It will be seen that the grid has only a small angle of
acceptance within which scattered rays can reach a point on the film.
Figure 4.8: (a) Construction of a grid: the primary x-ray beam passes between lead strips A, separated by interspace material B, to reach the film. (b) Grid cut-off is caused by placing a focused grid upside down on the film.
cut-off of the primary rays will occur. With a linear (i.e. uncrossed) grid, the X-ray tube can be
angled along the length of the grid without cutting off the primary radiation. This can be useful
for certain examinations. If the tube is angled the other way, or if a focused grid is accidentally
placed the wrong way round or upside down on the film, the primary beam will be absorbed,
leaving perhaps only one small central area of the film exposed. This is known as 'grid cut-off'
(see Fig. 4.8b), and is more restrictive with high-ratio grids.
Unfocused grids, in which the strips are completely parallel, may be used at any focus
distance but suffer severely from cut-off. The effect can be reduced by using a longer FFD or a
grid with a lower grid ratio.
4.9.3 Stationary and Moving Grids
- Grid lines (grid lattice) are shadows of the lead strips of a stationary grid
superimposed on the radiological image. If the line density (number of grid lines per
millimeter) is sufficiently high they may not be noticeable at the normal viewing distance but
they nevertheless reduce the definition of fine detail.
- A moving grid (‘Bucky’) has typically five lines per millimeter. During the
exposure it moves for a short distance, perpendicular to the grid lines. It can move to and fro
(reciprocating) or in a circular fashion (oscillating). Such movement blurs out the grid lines. It
is important that the grid starts to move before the exposure starts, moves steadily during the
exposure, and does not stop moving until after the exposure is over.
- A multiline grid has seven or more lines per millimeter together with a high grid
ratio, and can be used as a stationary grid without the lines being visible. It is used when a
moving grid cannot be used and, being thinner, incurs a lower dose to the patient.
4.9.4 Speed and Selectivity
The two tasks of a grid - to transmit primary radiation and absorb scattered radiation - may be
judged by its selectivity, the ratio of the fraction of primary radiation transmitted to the
fraction of scatter transmitted.
Typical figures range from 6 to 12, depending on the grid ratio and tube kV.
The use of grids necessitates increased radiographic exposure for the same film density,
because of the removal of some of the direct rays and most of the scatter. Speed or exposure
factor is the ratio of the exposure required with a grid to that required without it,
and is typically 3-5, depending on the grid structure, patient thickness, etc. Using a grid and,
consequently, an increased exposure obviously increases patient dose.
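Using the illustrative transmission figures quoted earlier for a grid (70% of primary passes; 90% of scatter absorbed, i.e. 10% transmitted), the selectivity comes out within the typical 6-12 range:

```python
def selectivity(primary_transmitted, scatter_transmitted):
    """Selectivity = fraction of primary transmitted / fraction of scatter transmitted."""
    return primary_transmitted / scatter_transmitted

print(round(selectivity(0.70, 0.10), 1))  # 7.0
```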
Figure 4.9: Magnification and blurring. (a) Formation of the magnified image of a structure S at height h above the film, below the focal spot F. (b) A plot of film density along a line across the film.
If the diagram is redrawn with larger or smaller values for F and h, it will be seen that
magnification is reduced by using a longer focal-film distance (FFD) F or by decreasing the
object-film distance (OFD) h. When positioning the patient, the film is therefore usually placed
close to the structures of interest. If the tissues are compressed this will also reduce patient
dose. On the other hand, advantage is taken of increased magnification in macroradiography.
The magnification M is equal to F/(F − h), where F is the focal-film distance and h the object-film distance.
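With F the focal-film distance and h the object-film distance of Figure 4.9, the similar-triangles result can be checked numerically; the distances below are arbitrary examples:

```python
def magnification(ffd, ofd):
    """M = FFD / (FFD - OFD), from similar triangles at a point focal spot."""
    return ffd / (ffd - ofd)

print(round(magnification(100, 20), 2))  # 1.25
print(round(magnification(180, 20), 2))  # longer FFD -> less magnification
print(round(magnification(100, 5), 2))   # smaller OFD -> less magnification
```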
4.10.1 Distortion
This refers to a difference between the shape of a structure in the image and in the subject. It
may be due to foreshortening of the shadow of a tilted object, e.g. a tilted circle projects as an
ellipse. It may also be caused by differential magnification of the parts of a structure nearer to
and farther away from the film–screen, an effect which is familiar to photographers. It can be
reduced by using a longer FFD.
4.12.1 Kilovoltage
In general, as high as possible a kV will be used so as to (a) increase the penetration of the
beam and reduce patient dose, (b) increase the latitude of exposure and range of tissues
displayed, and (c) reduce the mAs needed and thus allow shorter exposure times, within the
rating of the tube; but (d) not so high a kV that insufficient contrast results in the area of
diagnostic interest.
4.12.2 Milliampere-seconds
Having chosen the kV for a particular examination, this determines the required mAs, which is
then subdivided into (a) as short as possible an exposure time (to arrest motion) and (b) a
correspondingly high mA, just within the rating of the tube.
4.12.3 Exposure time
The necessary exposure time can be reduced by selecting (a) a higher kV and (b) a larger focal
spot. However, as explained in Section 3.6, using a larger focal spot to reduce the exposure
time may sometimes increase the overall blurring. Focal spot size, exposure time and screen
speed should be chosen together, to give minimal total blurring, which occurs when the
separate blurring components are approximately equal. The necessary exposure time can also
be reduced by using:
a three-phase generator, rather than single phase; full-wave single-phase rather than
self-rectification; and a higher speed and larger-diameter anode disk.
4.13 Macroradiography
Macroradiography is a radiographic imaging technique used to increase the size of the image
relative to that of the object. Where a magnified image is required, the anode-object distance is
decreased relative to the object-film distance, which is increased. Figure 4.10 shows that the
image of a structure S is larger when the film F2 is some distance from the patient than when it
is at F1, close to the patient. This has the following implications for technique:
Focal spot
A very small focal spot must be used (e.g. 0.1 mm) otherwise geometric blurring would
be unacceptable. As a result, the heat rating is reduced.
Exposure time
Exposure times may have to be increased to several seconds, to keep within the rating
of the focal spot and as a result increased movement blurring may result.
Immobilization is therefore important.
Figure 4.10: Macroradiography: the image of a structure S is larger at film position F2, some distance from the patient, than at F1, close to the patient.
Patient dose
Patient dose is increased because of the increased exposure needed.
Contrast
No grid is needed as the air gap reduces the scatter reaching the film. This helps to
reduce the additional exposure needed.
Screen blurring
The relative effect of screen blurring is reduced. This means that fast screens may be
used, which again helps to reduce the additional exposure needed.
Geometrical blurring
Geometrical blurring is increased, relative to the size of the image.
Quantum mottle
Quantum mottle is not increased, since the same number of X-ray photons are absorbed
in the screen, for the same film blackening.
4.14 Mammography
Mammography aims to demonstrate both microcalcifications as small as 100 μm in size, of high
inherent contrast, and larger areas of tissue having much lower contrast, on the same film.
The breast does not attenuate the beam greatly, allowing the use of the low kV that is needed
to obtain sufficient photoelectric absorption in order to differentiate between normal and
abnormal breast tissues. Ideally, monoenergetic X-rays of about 18-20 keV would produce
optimal contrast and penetration in the case of a small breast.
This can be approximated by operating a tube having a beryllium window and a
molybdenum target at 28 kV (constant potential) and using a 0.03 mm molybdenum filter
which has an absorption edge at 20 keV. This transmits most of the characteristic radiation
(17.9 and 19.5 keV) but removes most of the continuous spectrum. Figure 4.11 compares (A) the
spectrum produced in this way by a molybdenum-molybdenum combination with (B) the
spectrum that would be produced by operating a tube with a tungsten target at 30 kV and using
a 0.5 mm Al filter.
Figure 4.11: X-ray spectra using a molybdenum target and filter (A) and a tungsten target with aluminum filter (B).
With the thicker breast, higher-energy radiation is preferred. Better results, with a significant
decrease in absorbed dose to the breast tissue, are obtained with a rhodium or a palladium filter,
having absorption edges at 23 and 24 keV, respectively. Better still is a tube with a rhodium
target, giving 20.2 and 22.7 keV characteristic rays, used with a rhodium filter.
The small focal spot and the inefficient production of X-rays consequent on the low kV and
target atomic number incur problems of tube loading.
Films with a value of gamma (γ) of about 3 are used. Contrast is also improved (but the exposure and
patient dose are increased) by the use of a grid or air gap. A very fine grid is the preferred
option.
CHAPTER 5
FLUOROSCOPY
Performance Objectives
After studying chapter five, the student will be able to:
5.2 Description
Fluoroscopy is a type of medical imaging that uses a continuous X-ray beam to produce a
video x-ray, displaying an organ or part (internal structures) of a patient in real time on a computer
screen or television monitor. Fluoroscopes are used for interventional procedures such as
guiding the placement of a catheter during an arteriography, for assessing stomach and bowel
movement and function, and for detecting obstructions in the airway or blood vessels. A
contrast agent may also be used to enhance the images. During a fluoroscopy procedure, an
X-ray beam is passed through the body. The image is transmitted to a monitor so the
movement of a body part or of an instrument or contrast agent (“X-ray dye”) through the body
can be seen in detail.
Fluoroscopy is most often used to view the upper gastrointestinal (GI) tract, which includes
the stomach, esophagus, duodenum, and the upper small intestine. It is also used to view the
lower GI tract.
In its simplest form, a fluoroscope consists of an X-ray source and a fluorescent (phosphor)
screen between which a patient is placed; the screen converts the pattern of x-rays leaving
the patient into a pattern of light, with the intensity of the light proportional to the
intensity of the x-rays. Modern fluoroscopes couple the screen to an X-ray image intensifier
and CCD video camera, allowing the images to be recorded and played on a monitor. The x-ray
tube is usually located under the table and is attached to the fluoroscopic tower, but in
some types of fluoroscopy the x-ray tube is located above the table (see figure 5.2).
Figure 5.2: (a) x-ray tube above the table; (b) x-ray tube under the table
5.3 How Fluoroscopy Works
The fluoroscope is a type of x-ray machine that can use either a continuous or a pulsing x-ray
beam. The x-ray machine has an x-ray tube that is constructed of glass or metal and has a
vacuum seal inside. The x-ray tube is usually located under the table and is attached to the
fluoroscopic tower. It generates x-rays by converting electricity from its power line (AC,
120-480 volts) into high voltage in the 25-150 kilovolt range. This creates
a stream of electrons that are shot against a tungsten target. When the electrons hit this target
(called an anode) the atomic structure of the tungsten stops the electrons, causing a release of
x-ray energy. This energy is focused by the x-ray tube onto the area of the body to be imaged.
These very energetic electromagnetic waves can pass through the body and create images
of internal structures. Because the different tissues within the body are of different densities,
those waves are attenuated (weakened) at differing rates as they pass through. Bone, for
example, is very dense and absorbs a lot of the x-rays, while the tissues surrounding the bone
are less dense and absorb less of the x-ray. It is this difference in the absorption of the waves
that creates variations in the exposures and allows the detail of the image to be formed.
With a fluoroscope, when the beam passes through the body it hits an image intensifier that
increases the brightness of the image many times (e.g. x1000 to x5000) so that it can be
viewed on a display screen. The image intensifier itself is coupled to a video camera that
captures and encodes the two-dimensional patterns of light as a video signal from the x-ray
machine. The signal is converted back into a pattern of light seen as the image on the monitor.
The camera output can be digitized for computer image enhancements.
Because fluoroscopy involves the use of X-rays, a form of ionizing radiation, all fluoroscopic
procedures carry some risk of radiation-induced cancer to the patient. The radiation dose
the patient receives depends greatly on the size of the patient as well as the length of the
procedure. Fluoroscopy can result in relatively high radiation doses, especially for complex
interventional procedures (such as placing stents or other devices inside the body) which
require fluoroscopy be administered for a long period of time. The use of X-rays, a form of
ionizing radiation, requires the potential risks from a procedure to be carefully balanced with
the benefits of the procedure to the patient. While physicians always try to use low dose rates
during fluoroscopic procedures, the length of a typical procedure often results in a relatively
high absorbed dose to the patient. Recent advances include the digitization of the images
captured and flat panel detector systems which reduce the radiation dose to the patient still
further. Radiation-related risks associated with fluoroscopy include:
radiation-induced injuries to the skin and underlying tissues (“burns”), which occur
shortly after the exposure, and
radiation-induced cancers, which may occur sometime later in life.
The probability that a person will experience these effects from a fluoroscopic procedure is
statistically very small. Therefore, if the procedure is medically needed, the radiation risks are
outweighed by the benefit to the patient. In fact, the radiation risk is usually far less than other
risks not associated with radiation, such as anesthesia or sedation, or risks from the treatment
itself. To minimize the radiation risk, fluoroscopy should always be performed with the
lowest acceptable exposure for the shortest time necessary.
5.5 Components of Fluoroscope
Simple fluoroscopes consist of nothing more than a fluorescent screen and an X-ray source; a
patient is placed in between them. Modern fluoroscopes can be more complicated.
Components, as shown in figure 5.3, of the fluoroscopic imaging chain are:
x-ray generator
x-ray tube
collimator
filters
patient table
grid
image intensifier
optical coupling
television system
image recording
The components pertinent to orthopedic surgery are the x-ray generator, x-ray tube,
collimator, patient table and pad, and image intensifier.
5.5.1 X-ray Generator
The generator produces electrical energy and allows selection of the kilovolt peak (kVp) and
tube current (mA) delivered to the x-ray tube.
5.5.3 Collimator
Collimator contains multiple sets of shutter blades that define the shape of the x-ray beam.
There is a rectangular and a round set of blades. By further collimating the beam, or "coning
down" to the area of interest, the exposed volume of tissue is reduced, which results in less
scatter production and better image contrast. It also reduces the overall patient and surgeon
radiation dose by minimizing scatter and direct exposure.
5.5.4 Patient Table and Pad
The table must balance adequate strength to support the patient's body weight with minimal
x-ray attenuation. This can be accomplished with carbon fiber composite materials. Thin foam
pads are better than thick gel pads.
Figure 5.3: Components of the fluoroscopic imaging chain
Major components include a curved input layer to convert x-rays to electrons. The CsI
crystals are grown as tiny needles (see figure 5.5) and are tightly packed in a layer of
approximately 300 μm; each crystal is approximately 5 μm in diameter. This results in
microlight pipes with little dispersion and improved spatial resolution.
[Image-intensifier diagram: x-rays in → CsI input phosphor (X-rays → Light) → SbCs3
photocathode (Light → Electrons) → ~25 keV acceleration → ZnCdS:Ag output phosphor
(Electrons → Light, ~5000× amplification) → video or CCD camera and ADC to digital image or
recorder.]
Figure 5.5: Structured phosphor; cesium iodide (CsI) crystals grow in long
columns that act as light pipes
X-rays that exit the patient and are incident on the image-intensifier tube are transmitted
through the glass envelope and interact with and deposit energy into the layer of phosphor
(which is composed of cesium iodide, CsI); a portion of this energy is converted into light.
The light from the phosphor is then absorbed by the photocathode layer of the image
intensifier, which uses the light energy to emit electrons: The number of electrons emitted is
in direct proportion to the amount of light that was absorbed. The electrons are then
accelerated by a high voltage (25,000–35,000 V) applied between the input cathode of the
image intensifier and the output phosphor; the emitted electrons gain substantial kinetic
energy and travel at a high velocity. Electrostatic plates are used to focus the electrons and
direct them to the output phosphor, which has a much smaller surface area.
5.6 Image Intensifier Fluoroscopy Systems
Upon impacting the output phosphor, a portion of the energy is converted back to a light
image. This is similar to the effect of radiographic intensifying screens. Because the electron
flux from a large input surface area is concentrated onto a much smaller output surface area at
the output phosphor, the light image that emerges from the output phosphor is much brighter
than it would be at the input phosphor layer (magnification gain). Moreover, the high kinetic
energy gained by the electrons, a result of the high voltage applied across the image
intensifier, also increases the emitted light from the output phosphor (flux gain). After passing
through a lens system and an aperture, the television camera tube intercepts this light image
and converts the light pattern into a series of electrical signals that may be displayed on the
television monitor. The tube components are contained within a glass or metal envelope that
provides structural support but more importantly maintains a vacuum. When installed, the
tube is mounted inside a metal container to protect it from rough handling and breakage.
Tubes are available in different diameters to accommodate body parts of various sizes.
The next active element of the image-intensifier tube is the photocathode, which is
bonded directly to the input phosphor with a thin, transparent adhesive layer. The
photocathode is a thin metal layer usually composed of cesium and antimony compounds that
respond to stimulation of input phosphor light by the emission of electrons. The photocathode
emits electrons when illuminated by the input phosphor. This process is known as
photoemission. The term is similar to thermionic emission, which refers to electron
emission that follows heat stimulation. Photoemission is electron emission that follows light
stimulation.
For the image pattern to be accurate, the electron path from the photocathode to the output
phosphor must be precise. The engineering aspects of maintaining proper electron travel are
called electron optics because the pattern of electrons emitted from the large cathode end of
the image-intensifier tube must be reduced to the small output phosphor.
The devices responsible for this control, called electrostatic focusing lenses, are located
along the length of the image-intensifier tube. The electrons arrive at the output phosphor with
high kinetic energy and contain the image of the input phosphor in minified form.
The increased illumination of the image is due to the multiplication of light photons at the
output phosphor compared with x-rays at the input phosphor and the image magnification
from input phosphor to output phosphor. The ability of the image intensifier to increase the
illumination level of the image is called its brightness gain. The brightness gain is simply the
product of the magnification gain and the flux gain.
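The brightness-gain relationship above can be sketched numerically. The input/output diameters and flux gain below are illustrative values chosen for the example, not the specification of any particular tube.

```python
def minification_gain(input_diameter_mm: float, output_diameter_mm: float) -> float:
    """Magnification (minification) gain: ratio of input to output phosphor areas."""
    return (input_diameter_mm / output_diameter_mm) ** 2

def brightness_gain(input_diameter_mm: float, output_diameter_mm: float,
                    flux_gain: float) -> float:
    """Brightness gain = magnification gain x flux gain."""
    return minification_gain(input_diameter_mm, output_diameter_mm) * flux_gain

# Example (assumed values): 230 mm input phosphor, 25 mm output phosphor,
# flux gain of 50 -> overall brightness gain of about 4200.
gain = brightness_gain(230, 25, 50)
```

Because the gain depends on the square of the diameter ratio, selecting a smaller input field (magnification mode) reduces the brightness gain.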
Chapter 6
Computed Tomography
Performance Objectives
After studying chapter six, the student will be able to:
1. Describe the scan-and-step slice acquisition method and the general
characteristics of the data sets it produces.
2. Describe the helical/spiral volume acquisition method and the general
characteristics of the data set it produces.
3. Describe and illustrate the general concept of the back-projection method of
image reconstruction.
4. Explain what is meant by "filtered" back projection.
5. Sketch a slice of tissue and illustrate the concept of voxels that are formed
during image reconstruction.
6. Describe and illustrate the general range of CT numbers for tissue and
materials in a human body.
7. Explain how windowing contributes to high contrast sensitivity.
6.1 Introduction
Computed Tomography (CT) is a medical imaging technique that uses X-rays to obtain
structural and functional information about the human body. The digital geometry processing
can be used to generate a three-dimensional image of the internal structures of the human
body from a large series of two-dimensional X-ray images taken around a single axis of
rotation. It is also called a computed axial tomography scan. The word "tomography" is
derived from the Greek tomos, meaning "slice" or "section," and graphein, meaning "to
draw." A CT imaging system uses computer-processed
X-rays to produce tomographic images or 'slices' of specific areas of the body, like the slices
in a loaf of bread. Computed tomography provides a distinct form of high-quality,
cross-sectional imaging, used for diagnostic procedures and for visualization to guide
therapeutic procedures in various medical disciplines. On the other hand, the radiation dose
imparted to the patient's body during a procedure is relatively high compared to
radiography. In spite of that, computed tomography is now one of the most effective imaging
methods and is of great value to medical diagnosis and the guidance of therapeutic
procedures.
In CT scanning, both the image quality characteristics and the radiation dose depend on
and are controlled by the specific imaging protocol selected for each patient. The image
quality has a complex combination of many adjustable imaging factors and is influenced by
many technical parameters for each procedure. Therefore, there is the need to manage the
radiation dose for each patient and balance it with respect to the image quality requirements.
This can be achieved by adjusting the protocol factors for each procedure.
The general objective for each imaging procedure is to visualize the various anatomical
structures and any signs of pathology if they are present. The image characteristics should
therefore be adjusted to provide the required visualization while limiting the radiation
dose to no more than that required to produce the necessary image quality. Optimized imaging
protocols for a specific clinical study must adjust these factors to provide the proper
balance between the necessary image quality and the patient's radiation exposure.
6.2 History of Computed Tomography
CT became feasible in 1972 with the development of modern computer technology in the 1960s,
but some of the ideas on which it is based can be traced back to the first half of the
twentieth century. In 1917 the Bohemian mathematician Johann Radon proved in a research paper of
fundamental importance that the distribution of a material or material property in an object
layer can be calculated if the integral values along any number of lines passing through the
same layer are known. The first applications of this theory were developed for
radioastronomy by Bracewell (1956), but they met with little response and were not exploited
for medical purposes.
It is quite remarkable that image reconstruction from projections was attempted as early as
1940. Needless to say, these attempts were made without the benefits of modern computer
technology. In a patent granted in 1940, Gabriel Frank described the basic idea of today’s
tomography.
Twenty-one years later, William H. Oldendorf, an American neurologist from Los
Angeles, performed a series of experiments based on principles similar to those later used in
CT. The objective of his work was to determine whether internal structures within dense
structures could be identified by transmission measurements.
In 1963, David E. Kuhl and Roy Q. Edwards introduced transverse tomography using
radioisotopes, which was further developed and evolved into today’s emission computed
tomography. A sequence of scans was acquired at uniform steps and regular angular intervals
with two opposing radiation detectors. At each angle, the film was exposed to a narrow line of
light moving across the face of a cathode ray tube with a location and orientation
corresponding to the detectors’ linear position. This is essentially the analog version of the
backprojection operation. The process was repeated at 15-deg angular increments (the film
was rotated accordingly to sum up the backprojected views). In later experiments, the film
was replaced by a computer-based backprojection process. What was lacking in these
attempts was an exact reconstruction technique.
In 1963, the physicist Allan M. Cormack reported the findings from investigations of
perhaps the first CT scanner actually built. Cormack carried out the first experiments on
medical applications of this type of reconstructive tomography. His work can be traced back to
1955 when he was asked to spend one and a half days a week at Groote Schuur Hospital
attending to the use of isotopes after the resignation of the hospital physicist (Cormack was
the only nuclear physicist in Cape Town, South Africa). While observing the planning of
radiotherapy treatments, Cormack came to realize the importance of knowing the x-ray
attenuation coefficient distribution inside the body. He wanted to reconstruct attenuation
coefficients of tissues to improve the accuracy of radiation treatment.
The development of the first clinical CT scanner began in 1967 with English engineer
Godfrey N. Hounsfield at the Central Research Laboratories of EMI, Ltd. in England. The first
successful practical implementation of the theory was achieved in 1972 by Hounsfield, who is
now generally recognized as the inventor of CT. While investigating pattern recognition
techniques, he deduced, independent of Cormack, that x-ray measurements of a body taken
from different directions would allow the reconstruction of its internal structure.
Preliminary calculations by Hounsfield indicated that this approach could attain a 0.5%
accuracy of the x-ray attenuation coefficients in a slice. This is an improvement of nearly a
factor of 100 over the conventional radiograph. For their pioneering work in CT, Cormack
and Hounsfield shared the 1979 Nobel Prize in Physiology or Medicine.
The first laboratory scanner was built in 1967. Linear scans were performed on a rotating
specimen in 1-deg steps (the specimen remained stationary during each scan). Because of the
low-intensity americium gamma source, it took nine days to complete the data acquisition and
produce a picture. Unlike the reconstruction method used by Cormack, a total of 28,000
simultaneous equations had to be solved by a computer, which took 2.5 hours.
The use of a modified interpolation method, a higher-intensity x-ray tube, and a crystal
detector with a photomultiplier reduced the scan time to nine hours and improved the
accuracy from 4% to 0.5%.
The first clinically available CT device was installed at London’s Atkinson-Morley
Hospital in September 1971, after further refinement on the data acquisition and
reconstruction techniques. Images could be produced in 4.5 minutes. On October 4, 1971, the
first patient, who had a large cyst, was scanned and the pathology was clearly visible in the
image.
The table, on which the patient lies, is at the front of the scanner. The table then moves into
the central bore of the CT scanner. Within the scanner gantry is a rotating ring with both the
X-ray tube and the detector array.
A high-voltage x-ray generator supplies electric power to the x-ray tube, which usually has
a rotating anode and is capable of withstanding the high heat loads generated during rapid
multiple-slice acquisition. The x-ray tube, generator, detector array, collimators, and
rotational frame are housed in a movable, ring-shaped unit called the gantry (Fig. 6.1).
6.4.2 Second-Generation
Although clinical results from the first-generation scanners were promising, there remained a
serious image quality issue associated with patient motion during the 4.5-min data acquisition.
The data acquisition time had to be reduced. This need led to the development of the second-
generation scanner illustrated in Fig. 6.3. Although this was still a translation-rotation
scanner, the number of rotation steps was reduced by the use of multiple pencil beams. The
figure depicts a design in which six detector modules were used. The angle between the pencil
beams was 1 deg. Therefore, for each translation scan, projections were acquired from six
different angles. This allowed the x-ray tube and detector to rotate 6 deg at a time for data
acquisition, representing a reduction factor of 6 in acquisition time. In late 1975, EMI
introduced a 30-detector scanner that was capable of acquiring a complete scan in under 20 sec.
This was an important milestone for body scanning, since the scan interval fell within the
breath-holding range for most patients.
6.4.3 Third-generation CT
One of the most popular scanner types is the third-generation CT scanner illustrated in Fig.
6.4. In this configuration, many detector cells are located on an arc concentric to the x-ray
source. The size of each detector is sufficiently large so that the entire object is within each
detector’s field-of-view (FOV) at all times. The x-ray source and the detector remain
stationary with respect to each other while the entire apparatus rotates about the patient. Linear motion is
eliminated to significantly reduce the data acquisition time. In the early models of the third-
generation scanners, both the x-ray tube power and the detector signals were transmitted by
cables.
Limitations on the length of the cables forced the gantry to rotate both clockwise and
counterclockwise to acquire adjacent slices. The acceleration and deceleration of the gantry,
which typically weighed several hundred kilograms, restricted the scan speed to roughly 2 sec
per rotation. Later models used slip rings for power and data transmission. Since the gantry
could rotate at a constant speed during successive scans, the scan time was reduced to 0.5 sec
or less. The introduction of slip ring technology was also a key to the success of helical or
spiral CT (Chapter 9 is devoted to this topic). Because of the inherent advantages of the third-
generation technology, nearly all of the state-of-the-art scanners on the market today are third
generation.
6.4.4 Fourth-Generation
Several technology challenges in the design of the third-generation CT, including detector
stability and aliasing, led to investigations of the fourth generation concept depicted in Fig.
6.5. In this design, the detector forms an enclosed ring and remains stationary during the
entire scan, while the x-ray tube rotates about the patient. Unlike the third-generation scanner,
a projection is formed with signals measured on a single detector as the x-ray beam sweeps
across the object. The projection, therefore, forms a fan with its apex at the detector, as shown
by the shaded area in Figure 6.5 (a projection in a third generation scanner forms a fan with
the x-ray source as its apex). One advantage of the fourth-generation scanner is the fact that
the spacing between adjacent samples in a projection is determined solely by the rate at which
the measurements are taken. This is in contrast to third-generation scanning in which the
sample spacing is determined by the detector cell size. A higher sampling density can
eliminate potential aliasing artifacts. In addition, since at some point during every rotation
each detector cell is exposed directly to the x-ray source without any attenuation, the detector
can be recalibrated dynamically during the scan. This significantly reduces the stability
requirements of the detector.
A potential drawback of the fourth-generation design is scattered radiation. Because each
detector cell must receive x-ray photons over a wide angle, no effective and practical scatter
rejection can be performed by a post-patient collimator. Although other scatter correction
schemes, such as the use of a set of reference detectors or software algorithms, are useful, the
coplanar) to make room for the overlapped portion. When multiple target tracks and detector
rings are used, coverage of 8 cm along the patient long axis can be obtained for the heart.
Since the system has no mechanical moving parts, scan times as fast as 50 ms can be
achieved. However, for noise considerations, multiple scans are often averaged to produce the
final image.
[Diagram labels: electron gun, magnetic focus and deflection coils, vacuum drift tube,
target rings, x-ray collimators, x-ray beams, detectors, data acquisition system, patient
couch.]
Figure 6.6: Geometry of an electron-beam scanner.
volume) element (voxel) extending through the thickness of the tissue section. In addition, in
a real CT image, all tissues within a single pixel would be the same shade of gray (see figure
6.7). The image can be stored for retrieval and use later.
[Figure 6.7: the x-ray tube and detectors view a tissue section made up of voxels, each
0.5 mm × 0.5 mm in the plane and 10 mm thick, which map to the pixels of the image.]
When x-rays pass through matter they are subject to attenuation, the removal of a fraction
of the x-ray photons from the beam by tissue absorption and scatter. An attenuation
measurement quantifies the fraction of radiation removed in passing through a given amount
of a specific material of thickness Δx (Fig. 6.8, A). Attenuation is expressed as:

It = Io e^(−μΔx)    (6.1)
where, It and Io are the x-ray intensities measured with and without the material in the x-ray
beam path, respectively, and μ is the linear attenuation coefficient of the specific material.
Taking the natural logarithm of formula (6.1) gives:

ln(Io/It) = μΔx    (6.2)
The image reconstruction process derives the average attenuation coefficient (μ) values for
each voxel in the cross section by using many rays from many different rotational angles
around the cross section. The specific attenuation of a voxel (μ) increases with the density and
the atomic numbers of tissues averaged through the volume of the voxel and declines with
increasing x-ray energy.
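Equations 6.1 and 6.2 can be sketched directly in code. The function names and the sample values (Io = 1000, μ = 0.2 cm⁻¹, Δx = 1 cm) are illustrative assumptions, not values from the text.

```python
import math

def transmitted_intensity(i0: float, mu_per_cm: float, dx_cm: float) -> float:
    """Eq. 6.1: It = Io * exp(-mu * dx)."""
    return i0 * math.exp(-mu_per_cm * dx_cm)

def linear_attenuation_coefficient(i0: float, it: float, dx_cm: float) -> float:
    """Eq. 6.2 rearranged: mu = ln(Io/It) / dx."""
    return math.log(i0 / it) / dx_cm

# Forward: what intensity survives 1 cm of material with mu = 0.2 /cm?
i0 = 1000.0
it = transmitted_intensity(i0, mu_per_cm=0.2, dx_cm=1.0)

# Inverse: this is the measurement a CT scanner actually makes -
# from Io and It it recovers mu for the ray path.
mu = linear_attenuation_coefficient(i0, it, dx_cm=1.0)
```

The inverse step is the one that matters for CT: the scanner measures Io and It for many rays and reconstruction distributes the recovered μ values among the voxels along each path.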
Figure 6.8: Principles of CT. Diagram shows the x-ray attenuation through
a specific material of finite thickness (Δx) (Eq. 6.2) (A) and through a
material considered as a stack of voxels, each voxel of finite thickness
(Δx) (Eq. 6.2) (B).
Mathematically, the attenuation value (μ) for each voxel could be determined algebraically
with a very large number of simultaneous equations by using all ray sums that intersect the
voxel. However, a much more elegant and simpler method called filtered back-projection
was used in the early CT scanners and remains in use today. That means, the filtered back-
projection method and many other methods are applied to derive the average attenuation
coefficient (μ) values for each voxel in the cross section, using many rays from many
different rotational angles around the cross section. Rays are collected in sets called
projections, which are made across the patient in a particular direction in the section plane.
There may be from 500 to 1,000 or more rays in a single projection. To reconstruct the image
from the ray measurements, each voxel must be viewed from multiple different directions. A
complete data set requires many projections at rotational intervals of 1° or less around the
cross section. Back-projection effectively reverses the attenuation process by adding the
attenuation value of each ray in each projection back through the reconstruction matrix.
Because this process generates a blurred image, the data from each projection are
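The back-projection step described above can be sketched as a toy example. This is plain (unfiltered) back-projection on a 3×3 grid with only two projection angles, so it deliberately shows the blurring that filtering corrects; real scanners use hundreds of rays per projection and projections every degree or less.

```python
# Unfiltered back-projection on a tiny grid, two angles only (0 and 90 deg).
# project() computes ray sums; back_project() smears each ray sum back
# along the path over which it was measured.

def project(image, angle):
    """Ray sums: angle 0 -> sum along rows, angle 90 -> sum along columns."""
    n = len(image)
    if angle == 0:
        return [sum(row) for row in image]
    return [sum(image[r][c] for r in range(n)) for c in range(n)]

def back_project(projections, n):
    """Add each ray sum back through the reconstruction matrix."""
    recon = [[0.0] * n for _ in range(n)]
    for angle, rays in projections:
        for i, ray_sum in enumerate(rays):
            for j in range(n):
                if angle == 0:
                    recon[i][j] += ray_sum / n   # smear along row i
                else:
                    recon[j][i] += ray_sum / n   # smear along column i
    return recon

phantom = [[0, 0, 0],
           [0, 9, 0],
           [0, 0, 0]]
projections = [(0, project(phantom, 0)), (90, project(phantom, 90))]
recon = back_project(projections, 3)
# The central voxel comes out brightest, but counts smear along both
# ray paths - the blur that the "filtered" step removes.
```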
6.6 CT Image
The computed tomography number (CT number) is based on the Hounsfield
scale. Each elemental region of the CT image (pixel) is expressed in terms of
Hounsfield units (HU) corresponding to the x-ray attenuation (or tissue density). CT
numbers are displayed as gray-scale pixels on the viewing monitor. White represents
pixels with higher CT numbers (bone). Varying shades of gray are assigned to
intermediate CT numbers e.g., soft tissues, fluid and fat. Black represents regions with
lower CT numbers like lungs and air-filled organs.
For radiologists, the most important output from a CT scanner is the image itself. The
variable signal intensity in CT results from tissue discrimination based on the variations in
attenuation between “voxels,” which depends on differences in voxel density and atomic
number of elements present and is influenced by the detected mean photon energy. The CT
image is composed of pixels (picture elements), and each pixel represents the average x-ray
attenuation in a small volume (voxel) that extends through the tissue section. In addition,
all tissues within a single pixel in a real CT image will be the same shade of gray.
As a final step, the individual voxel attenuation values are scaled to more convenient
integers and normalized to the value for voxels containing water (μwater). The CT image does
not show μ values directly, but the intensity scale (called the CT number) used in the
reconstructed CT image is defined by:

CT number = 1000 × (μ − μwater) / μwater

where μ is the measured attenuation of the material in the voxel and μwater is the linear
attenuation coefficient of water. This unit is often called the Hounsfield unit (HU), honoring
the inventor of CT. Voxels containing materials that attenuates more than water (e.g. muscle
tissue, liver, and bone) have positive CT numbers, whereas materials with less attenuation
than water (e.g. lung or adipose tissues) have negative CT numbers. With the exception of
water and air, the CT numbers for a given material will vary with changes in the x-ray tube
potential and from manufacturer to manufacturer.
By definition, water has a CT number of zero. The CT number for air is –1000 HU,
since μair ≈ 0. Soft tissues (including fat, muscle, and other body tissues) have CT numbers
ranging from –100 HU to 60 HU. Cortical bones are more attenuating and have CT numbers
from 250 HU to over 1000 HU. The linear attenuation coefficient scale is magnified by a
factor of over 1000 (note the division by μwater). Medical scanners typically work in a range of –1024
HU to +3071 HU.
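The CT number definition above is easy to check numerically. The value used for μwater below is an assumed, illustrative figure for a typical CT effective energy; it cancels out for the water and air cases.

```python
# Approximate linear attenuation coefficient of water (cm^-1) at a typical
# CT effective energy - an assumed illustrative value, not a standard.
MU_WATER = 0.195

def ct_number(mu: float, mu_water: float = MU_WATER) -> float:
    """CT number in Hounsfield units: 1000 * (mu - mu_water) / mu_water."""
    return 1000.0 * (mu - mu_water) / mu_water

water_hu = ct_number(MU_WATER)  # water is 0 HU by definition
air_hu = ct_number(0.0)         # air (mu ~ 0) gives -1000 HU
```

Materials attenuating more than water yield positive CT numbers; materials attenuating less yield negative ones, matching the ranges quoted in the text.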
Contrast agents and metal objects have values from several hundred to several thousand HU.
Because of the large dynamic range of the CT number, it is impossible to adequately visualize
it without modification on a standard grayscale monitor or film. Typical display devices use
eight-bit grayscales, representing 256 different shades of gray. If a CT image is displayed
without transformation, the original dynamic range of well over 2000 HU must be
compressed by a factor of at least 8.
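The windowing that compresses the HU range onto an 8-bit gray scale can be sketched as a linear window; the soft-tissue window settings below (center 40 HU, width 400 HU) are assumed typical values for illustration.

```python
def window_to_grayscale(hu: float, window_center: float, window_width: float) -> int:
    """Map a CT number to an 8-bit gray level with a linear window.

    Values below the window clip to black (0), values above clip to
    white (255); only HU inside the window spread across the gray scale.
    """
    low = window_center - window_width / 2
    high = window_center + window_width / 2
    if hu <= low:
        return 0
    if hu >= high:
        return 255
    return round(255 * (hu - low) / window_width)

# Soft-tissue window (assumed settings): center 40 HU, width 400 HU.
# Air and dense bone saturate; soft tissues use the full gray scale.
black = window_to_grayscale(-1000, 40, 400)  # air clips to 0
white = window_to_grayscale(1000, 40, 400)   # dense bone clips to 255
```

This is how a dynamic range of well over 2000 HU is made visible on a 256-level display: the radiologist chooses the window, not the scanner.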
6.7 Principles of Helical CT Scanning Operation
There are two modes for a CT scan: conventional (or slice-to-slice) scanning and helical (or
spiral) CT. A slice-to-slice scan consists of two alternating stages:
Data acquisition: During this stage, the patient remains stationary and the x-ray tube rotates
about the patient to acquire a complete set of projections at a prescribed scanning location.
Patient positioning: During this stage, no data are acquired and the patient is transported to
the next prescribed scanning location.
The data acquisition stage typically takes one second or less while the patient positioning
stage is around one second. Thus, the duty cycle of the slice-to-slice CT is 50% at best. This
poor scanning efficiency directly limits the volume coverage speed performance and
therefore the scan throughput of the step-and-shoot CT.
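The 50% duty-cycle figure follows directly from the two stages described above; a one-line sketch makes the arithmetic explicit (the one-second timings are the ones quoted in the text).

```python
def duty_cycle(acquisition_s: float, positioning_s: float) -> float:
    """Fraction of total scan time spent actually acquiring data."""
    return acquisition_s / (acquisition_s + positioning_s)

# Step-and-shoot: ~1 s acquiring, ~1 s repositioning -> 50% at best.
step_and_shoot = duty_cycle(1.0, 1.0)

# Helical: data are acquired continuously with no positioning pause,
# so the duty cycle approaches 100%.
helical = duty_cycle(1.0, 0.0)
```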
Helical (or spiral) CT scanners use slip-ring technology, which was introduced around
1990. Slip-ring scanners can perform a scan in which the patient moves slowly through the
gantry at a rate referred to as the table speed, while the X-ray tube and detector rotate in a
plane perpendicular to the major axis of the patient’s body. In this technique the data are
continuously acquired or collected without pausing while the patient is simultaneously
transported at a constant speed through the gantry. For this reason the duty cycle of the helical
scan is improved to nearly 100% and the volume coverage speed performance can be
substantially improved. This means that the X-ray tube and detector perform a ‘spiral’ or
‘helical’ movement with respect to the patient, generally at a rate of one revolution per
second. This technique allows fast and continuous acquisition of the data from a complete
volume. Many coarse data sets, each of one rotation, are created by interpolation of the spiral
data, after which the axial images are generated using the standard reconstruction techniques.
The helical scanning is characterized by continuous gantry rotation and continuous data
acquisition while the patient table is moving at constant speed; see Fig. 6.9. The acquired
volume of data can be reconstructed at any point during the scan. All modern CT scanners are
multi-slice which refers to a special CT system equipped with a multiple-row detector array to
simultaneously collect data at different slice locations. The data from each full rotation are
mathematically reconstructed by computer to produce an image of one slice.
size refers to how many pixels are present in the grid. A 512 matrix will have 512 pixels
across the rows and 512 pixels down the columns. The most common matrix sizes used
in CT are 256, 512, and 1024. Because the perimeter of the grid is held constant, a larger
matrix size (i.e., 1024 as opposed to 512) will contain smaller individual pixels.
Therefore, matrix size is one of the factors that control pixel size.
Each pixel has a width X and a length Y. The two-dimensional pixel represents a three-
dimensional portion of patient tissue. The pixel value represents the proportional amount
of x-ray energy that passes through anatomy and strikes the detector. The information
contained in each pixel is averaged so that one density number (or Hounsfield unit
"HU") is assigned to each pixel. If an object is smaller than a pixel, its density will be
averaged with the information in the remainder of the pixel. This phenomenon is
referred to as the partial volume effect or volume averaging. It results in a less accurate
image.
A large pixel size will make it more likely that multiple objects are contained within a
pixel (Figure 6.11). Because no object smaller than a pixel can be accurately displayed
due to volume averaging, the pixel size affects the spatial resolution. When pixels are
smaller, it is less likely that they will contain different densities, therefore decreasing
the likelihood of volume averaging (Figure 6.12). Hence, smaller pixel size will improve
spatial resolution.
Because no object smaller than a pixel can be accurately displayed due to volume
averaging (and the matrix size influences the size of the pixel), it follows that matrix
size affects spatial resolution.
Figures 6.11 and 6.12: a small matrix produces large pixels; a large matrix produces small pixels.
why part of the anatomy is cut off in scanning larger patients. On the other hand, DFOV is the
area of the reconstructed image that is displayed. A smaller DFOV results in a larger displayed image.
The DFOV and matrix size determine the physical dimensions of the image pixel. A 10-cm FOV in a 512 × 512
matrix results in pixel dimensions of approximately 0.2 mm, and a 35-cm FOV produces pixel
widths of about 0.7 mm (Figure 6.13).
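The pixel-size arithmetic can be sketched in a line of Python (dividing the field of view by the matrix size):

```python
def pixel_size_mm(dfov_cm, matrix):
    """Pixel width in mm: display field of view divided by matrix size."""
    return dfov_cm * 10.0 / matrix

print(round(pixel_size_mm(10, 512), 2))  # 0.2 mm
print(round(pixel_size_mm(35, 512), 2))  # 0.68 mm, i.e. about 0.7 mm
```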
Figure 6.13: for a fixed matrix size, a large field of view produces large pixels; a small field of view produces small pixels.
A voxel (volume element) represents a volume of patient data. Voxel size also plays a
role in volume averaging. As stated earlier, in order to create an image, the system must
break up the patient data into segments. We have seen how a matrix is used to divide the
data into pixels with X and Y dimensions. This allows the system to create a two-
dimensional image. However, it is important to keep in mind that a three -dimensional
object is being represented. By accounting for the slice thickness, the voxel represents a
volume of patient data. Thus, instead of a square of data, as is the case with a pixel, the
voxel is a cube of data. All of the data within the voxel are averaged together to result in
one HU.
Figure 6.14: The depth of the voxel, or Z axis, correlates to the slice
thickness.
The depth of the voxel correlates to the operator’s selection of slice thickness. This dimension
is referred to as the Z axis (Figure 6.14). When comparing the X, Y, and Z dimensions, even
with a relatively large matrix and a small field of view, the slice thickness–or Z axis–will be
longer than either the X or Y dimensions. Therefore, the slice thickness will play an even
larger role in volume averaging (as well as the subsequent spatial resolution) than either
display field or matrix size. In fact, slice thickness is the primary factor affecting the degree of
volume averaging in the image.
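A quick sketch of the three voxel dimensions, using hypothetical scan settings, shows why the Z axis usually dominates:

```python
def voxel_dims_mm(dfov_cm, matrix, slice_thickness_mm):
    """X and Y come from DFOV / matrix; Z is the selected slice thickness."""
    xy = dfov_cm * 10.0 / matrix
    return xy, xy, slice_thickness_mm

# Hypothetical settings: 25-cm DFOV, 512 matrix, 5-mm slices.
x, y, z = voxel_dims_mm(25, 512, 5.0)
print(round(x, 2), round(y, 2), z)  # 0.49 0.49 5.0 -> Z is by far the longest
```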
Decreasing the slice thickness affects the resolution in two ways. First, it reduces the
amount of tissue averaged together. Second, it will increase the image noise if the technique is not
increased to compensate for the smaller number of photons passing the tighter collimation.
Figures 6.15A and B illustrate how a wide slice thickness will affect the amount of volume
averaging in the image. Assuming a 2-mm object is contained in a 10-mm slice (Figure
6.15A), 8 mm of normal tissue will be averaged in to produce a less accurate image. By
decreasing the slice thickness, as shown in Figure 6.15B, the amount of normal tissue that is
averaged in with data from the abnormality is reduced. In this way, an image is created that
more closely represents the actual object scanned.
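The averaging in Figure 6.15 can be sketched as a thickness-weighted mean; the HU values below are hypothetical, chosen only to show how the lesion contrast is diluted:

```python
def averaged_hu(hu_object, hu_background, object_mm, slice_mm):
    """Partial volume effect: the voxel reports a thickness-weighted mean."""
    f = object_mm / slice_mm
    return f * hu_object + (1.0 - f) * hu_background

# Hypothetical 2-mm, 80-HU lesion surrounded by 40-HU tissue:
print(averaged_hu(80, 40, 2, 10))  # 48.0 -> close to background, hard to see
print(averaged_hu(80, 40, 2, 3))   # thinner slice -> value closer to 80
```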
To understand this theorem, it is again important to recall the way an object may be
segmented by the system. If the object in question is the same size as a pixel, it is possible that
by chance the object may fall entirely within a single pixel. Figure 6.16A represents this
possibility. However, random chance will dictate that it is much more likely the object will
straddle two different pixels. Figure 6.16B illustrates this possibility. It should be apparent
that the image resulting from the case portrayed by Figure 6.16B would be inferior to that of
the image resulting from Figure 6.16A. A third possibility could also arise. In Figure 6.16C,
the object falls at the junction of four separate pixels. Each of the four pixels then contains
only a fourth of the object, averaged in with three fourths of a pixel of normal tissue. This is
the worst-case scenario and results in the image with the worst spatial resolution.
Furthermore, the Nyquist sampling theorem states that we can reduce the likelihood of our
worst case scenario occurring by reducing the size of the pixel. We can see that in Figure
6.16D, the smaller pixel size will improve spatial resolution by allowing four pixels to
represent the object, with no normal tissue enclosed in the pixel. However, even with the
smaller pixel size, cases such as Figure 6.16E will still arise. In this case, the object will fall
so that only two of the pixels accurately represent the object, whereas the four surrounding
will have some degree of volume averaging. However, we can see from our illustrations that
the situation depicted by Figure 6.16E would be preferable to that of Figure 6.16C. To review,
by reducing the size of the pixel, we can increase our chance of accurately representing a
small object. The theorem further states that in order to best increase our chances, the pixel
size should be half the size of the object.
6.10 Low-Contrast Resolution
Low-contrast resolution is the ability to differentiate objects with slightly different densities.
This factor is the second major aspect of image quality. In order to discern an object on an
image, there must be a density difference between the object and its background. This is the
type of contrast we are concerned with in the case of a liver lesion that is nearly the same
density as normal liver tissue. The term low-contrast detectability is used when discussing the
ability to see an object that is nearly the same density as its background. Often, intravascular
or oral contrast agents are used to create or increase a density difference, thereby increasing
an image’s low-contrast resolution. Low-contrast resolution may also be referred to as the
sensitivity of the system; hence, the term low-contrast sensitivity is also used.
The size of the object that is visible depends on three factors.
The first is the level of contrast in the object. For example, consider a calcified nodule.
It will be much easier to see if the nodule is in the lung, where the air within the lung
will provide a substantial amount of contrast. On the other hand, imagine the difficulty
in differentiating the nodule if it were to lie next to the iliac crest. The level of contrast
that is related to the density of the objects being scanned is often called subject contrast
(or sometimes inherent contrast).
The second factor that influences the size of the object that is visible is image noise. We
can recognize noise as the grainy appearance, or salt-and-pepper look, on an underexposed
image.
The third factor is the window setting used to display an image. Narrow window widths
will improve low-contrast discrimination in the image.
progressively smaller low-contrast circles will appear on the resulting image. Counting the
number of circles clearly visible will determine the level of low-contrast resolution on the
image. However, different observers may look at the same image and evaluate it differently.
Some individuals may say they see six circles clearly, whereas other persons evaluating the
same image may feel they can only see four circles. Therefore, the degree of contrast
measured on an image is somewhat subjective.
Quantum noise produces visible fluctuations in the image (i.e., a salt-and-pepper look).
This factor will degrade images, particularly their low-contrast resolution. Quantum noise is
the result of too few x-ray photons reaching the detectors. Therefore, noise and radiation dose
are linked; as radiation dose increases, image noise is suppressed. As the noise decreases,
small low-contrast objects are more visible. Smoothing algorithms can help to reduce the
visibility of noise by averaging each pixel with its neighbors. Similarly, wide window widths
also help disguise noise. For this reason, it is a common practice in CT to increase the window
width on images of obese patients.
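The dose-noise link follows from Poisson counting statistics, under the standard assumption that relative quantum noise scales as 1/√N for N detected photons:

```python
import math

def relative_noise(n_photons):
    """Relative quantum noise under Poisson statistics: 1 / sqrt(N)."""
    return 1.0 / math.sqrt(n_photons)

# Quadrupling the detected photons (roughly, the dose) halves the noise:
print(relative_noise(1_000_000) / relative_noise(4_000_000))  # 2.0
```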
6.12 Basic CT scanner components
6.12.1 Scanning Unit (Gantry)
The largest component of the CT installation is the gantry, which houses an x-ray unit that
functions as a transmitter and a data acquisition unit with its detector array that functions as
a receiver. The gantry also contains the control electronics and the mechanical components
required for the scanning motions, including collimators and filters, the detectors, the data
acquisition system (DAS), rotational components including the slip-ring system, and
associated electronics such as gantry angulation motors and positioning laser lights. In
commercial CT systems these components are housed in a movable, ring-shaped frame (see Figure 6.17).
A CT gantry can be angled up to 30 degrees toward a forward or backward position.
Gantry angulation is determined by the manufacturer and varies among CT systems. Gantry
angulation allows the operator to align pertinent anatomy with the scanning plane. The
opening through which a patient passes is referred to as the gantry aperture. Gantry aperture
diameters generally range from 50-85 cm. Generally, larger gantry aperture diameters, 70-85
cm, are necessary for CT departments that do a large volume of biopsy procedures. The
larger gantry aperture allows for easier manipulation of biopsy equipment and reduces the
risk of injury when the patient is scanned with the biopsy needle in place. The diameter of
the gantry aperture is different from the diameter of the scanning circle or scan field of view.
If a CT system has a gantry aperture of 70 cm diameter, it does not mean that patient data
can be acquired over a 70-cm diameter. Generally, the
scanning diameter in which patient or projection data is acquired is less than the size of the
gantry aperture. Lasers or high intensity lights are included within or mounted on the gantry.
The lasers or high intensity lights serve as anatomical positioning guides that reference the
center of the axial, coronal, and sagittal planes.
create a computed tomography image. Obtaining a single view does not give the entire
perspective of the object being scanned. Therefore, we can say that the detector is "seeing" an
insufficient amount of information. The attenuation properties of each ray sum are accounted
for and correlated with the position of each ray. At this point, the detector has "collected" the
projection or raw data. The more photons collected, the stronger and more accurate the
detector signal. This is essential for accurate image reconstruction. The detector accomplishes
this task by adding together all the photon energy it has received. The detector receives all the
projection data and subsequently generates an electrical or analog signal. The signal
represents an absorption or attenuation profile. An attenuation profile is obtained for each
view or projection. Every detector in the detector array is responsible for this task. Detector
efficiency describes the percent of incoming photons that a detector converts to a useable
electrical signal. The two primary factors that determine how efficiently a detector captures
photons are the width of each detector and the distance between adjacent detectors. It is
important that detectors are placed as close to one another as possible. A detector is a crystal
or ionizing gas that when struck by an x-ray photon produces light or electrical energy.
The two types of detectors utilized in CT systems are scintillation or solid state and xenon
gas detectors. Scintillation detectors convert 99-100 percent of the attenuated photons into a
useable electrical signal. Scintillation detectors utilize a crystal that fluoresces when struck by
an x-ray photon which produces light energy. A photodiode is attached to the scintillation
portion of the detector. The photodiode transforms the light energy into electrical or analog
energy. The strength of the detector signal is proportional to the number of attenuated
photons that are successfully converted to light energy and then to an electrical or analog
signal. The most frequently used scintillation crystals are made of bismuth germanate
(Bi4Ge3O12) and cadmium tungstate (CdWO4). Earlier designs utilized sodium iodide and
cesium iodide as the light-producing agents. One of the problems associated with these
materials was that at times they would fluoresce longer than necessary. The afterglow
problems associated with sodium iodide and cesium iodide altered the strength of the detector signal, which
could cause inaccuracies during computer reconstruction.
The second type of detector utilized for CT imaging system is a gas detector. The gas
detector is usually constructed utilizing a chamber made of a ceramic material with long thin
ionization plates, usually made of tungsten, immersed in xenon gas. Xenon gas detectors are
less efficient, converting 60-90 percent of the photons that enter the chambers. The long thin
tungsten plates act as electron collection plates. When attenuated photons enter the chamber,
they ionize the xenon gas, and the resulting ions produce an electrical current between the
charged plates. Xenon is the gas of choice because of its ability to remain stable under
extreme amounts of pressure. Utilizing more gas in a detector increases the number of
molecules that can be ionized; therefore, the strength of the detector signal or response is
increased. The long thin tungsten plates of the gas detector are highly directional. Ionization
of the plates and the resultant detector signal rely on attenuated photons entering the chamber
and ionizing the gas. If the xenon gas detectors are not positioned properly there is a chance
that the ability of the detector to produce an accurate signal is compromised because the
photons may miss the chamber. The xenon gas detectors are generally fixed with the position
of the x-ray tube which occurs with 3rd generation scanner geometry designs. The term
detector refers to a single element or a single type of detector used in a CT system. The term
detector array is used to describe the total number of detectors that a CT system utilizes for
collecting attenuated information. 3rd generation CT imaging systems employ 800-1000
detectors while 4th generation scanners include 4000-5000 individual detectors in a detector
array.
The efficiency of the xenon gas detector is compromised by the absorption of some of the
photons by the ionization plates. Additionally, photons may pass through the chamber
without interacting with the gas molecules. However, one advantage to this situation may be
that some of the photons absorbed by the plates were scattered photons. As in conventional
radiography, scatter adversely affects the CT image. Therefore, it is reasonable to
conclude that the gas detectors have low scatter acceptability. Scintillation detectors convert
almost all of the photons they receive, including scattered photons; therefore, these detectors
have high scatter acceptability.
6.12.4 Data-Acquisition System
This part of the CT scanner connects the detectors to the system computer, and
may consist of a preamplifier, integrator, multiplexer, logarithmic amplifier, and analog-to-
digital converter.
Once the detector generates the analog or electrical signal it is directed to the data
acquisition system (DAS). The analog signal generated by the detector is a weak signal and
must be amplified to further be analyzed. Amplifying the electrical signal is one of the tasks
performed by the data acquisition system (DAS). The DAS is located in the gantry right after
or above the detector system. In some modern CT scanning systems the signal amplification
occurs within the detector itself. Before the projection or raw data, which is currently in the
form of an electrical or analog signal, goes to the computer it must be converted to digital
information. The computer does not "understand" analog signals therefore; the information
must be converted to digital information. This task is accomplished by an analog to digital
converter which is an essential component of the DAS. The digital signal is transferred to an
array processor. The array processor solves the statistical information using algorithmic
calculations essential for mathematical reconstruction of a CT image. An array processor is a
specialized high speed computer designed to execute mathematical algorithms for the
purpose of reconstruction. The array processor solves reconstruction mathematics faster than
a standard microprocessor. It is important to note that special algorithms may require several
seconds to several minutes for a standard microprocessor to compute. Recently, processors
that compute CT reconstruction mathematics faster than array processors have been utilized
to solve reconstruction mathematics essential to the development of CT fluoroscopy. The
term image or reconstruction generator is used to describe this type of computer.
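Schematically, one DAS channel can be sketched as follows; the logarithmic step and the analog-to-digital conversion come from the component list above, while the bit depth and full-scale value are illustrative assumptions, not specifications of any particular scanner:

```python
import math

def das_sample(i_transmitted, i_reference, adc_bits=16, full_scale=12.0):
    """One DAS channel, schematically: the logarithmic stage recovers the
    line integral -ln(I/I0), and the ADC quantizes it to an integer code."""
    line_integral = -math.log(i_transmitted / i_reference)
    code = round(line_integral / full_scale * (2 ** adc_bits - 1))
    return line_integral, code

li, code = das_sample(0.25, 1.0)  # three quarters of the beam attenuated
print(round(li, 4))               # 1.3863 = ln(4)
```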
spatial coordinate system. However, if the image data sets are translated into the frequency
domain, the filtering operation becomes trivial.
radiation dose. It is the responsibility of CT users to select the most appropriate reconstruction
kernel and slice thickness for each clinical application so that the radiation dose can be
minimized consistent with the image quality needed for the examination.
Iterative reconstruction has recently received much attention in CT because it has many
advantages compared with conventional FBP techniques. Important physical factors including
focal spot and detector geometry, photon statistics, X-ray beam spectrum, and scattering can
be more accurately incorporated into iterative reconstruction, yielding lower image noise and
higher spatial resolution compared with FBP. In addition, iterative reconstruction can reduce
image artifacts such as beam hardening, windmill, and metal artifacts. A recent clinical study
on an early version of iterative reconstruction demonstrated a potential dose reduction of up to
65% compared with FBP-based reconstruction algorithms. Due to the intrinsic difference in
data handling between FBP and iterative reconstruction, images from iterative reconstruction
may have a different appearance (e.g., noise texture) from those using FBP reconstruction.
Careful clinical evaluation and reconstruction parameter optimization will be required before
iterative reconstruction can be accepted into mainstream clinical practice. High computation
load has always been the greatest challenge for iterative reconstruction and has impeded its
use in clinical CT imaging. Software and hardware methods are being investigated to
accelerate iterative reconstruction. With further advances in computational technology,
iterative reconstruction may be incorporated into routine clinical practice in the future.
Consider a four-block object with unknown attenuation values μ1 and μ2 in the top row and
μ3 and μ4 in the bottom row. Measuring two horizontal ray sums, one vertical ray sum, and
one diagonal ray sum gives
μ1 + μ2 = p1
μ3 + μ4 = p2
μ1 + μ3 = p3
μ1 + μ4 = p4 (6.1)
Here, four independent equations are established for the four unknowns. From elementary
algebraic knowledge, we know that there is a unique solution to the problem, since the
number of equations equals the number of unknowns. If we generalize the problem to the case
where the object is divided into N by N small elements, we could easily reach the conclusion
that as long as enough independent measurements (N^2) are taken, we can always uniquely
solve the attenuation coefficient distribution of the object.
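As a sketch of this existence argument, the four-block case can be solved directly with linear algebra. The ray labeling below (two horizontal sums, one vertical, one diagonal) is assumed for illustration, since the original figure is not reproduced here:

```python
import numpy as np

# Hypothetical 2x2 object: mu1, mu2 in the top row; mu3, mu4 in the bottom.
A = np.array([[1.0, 1.0, 0.0, 0.0],   # p1 = mu1 + mu2 (top row)
              [0.0, 0.0, 1.0, 1.0],   # p2 = mu3 + mu4 (bottom row)
              [1.0, 0.0, 1.0, 0.0],   # p3 = mu1 + mu3 (left column)
              [1.0, 0.0, 0.0, 1.0]])  # p4 = mu1 + mu4 (diagonal)
mu_true = np.array([1.0, 2.0, 3.0, 4.0])
p = A @ mu_true                  # simulated projection measurements
mu = np.linalg.solve(A, p)       # unique solution, since rank(A) = 4
print(np.allclose(mu, mu_true))  # True
```

Replacing the diagonal ray with the remaining column sum would make the system rank-deficient, which is exactly the dependency problem discussed next.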
Many techniques are readily available to solve linear sets of equations. Direct matrix
inversion was the method used on the very first CT apparatus in 1967. Over 28,000 equations
were simultaneously solved. When the object is divided into finer and finer elements
(corresponding to higher spatial resolutions), the task of solving simultaneous sets of
equations becomes quite a challenge, even with today’s computer technology. In addition, to
ensure that enough independent equations are formed, we often need to take more than N^2
measurements, since some of the measurements may not be independent. A good example is
the one given in Figure 6.19. Assume that four measurements, p1, p2, p3, and p5, are taken in
the horizontal and vertical directions. It can be shown that these measurements are not linearly
independent (p5 = p1 + p2 - p3). A diagonal measurement must be added to obtain a fourth
independent equation. When the number of equations exceeds the number of unknowns, a
straightforward solution may not always be available. This is even more problematic when we
consider the inevitable possibility that errors exist in some of the measurements. Therefore,
different reconstruction techniques need to be explored. Despite its limited usefulness, the
linear algebraic approach proves the existence of a mathematical solution to the CT problem.
One possible remedy to solve this problem is the so-called iterative reconstruction approach.
For ease of illustration, we again start with an oversimplified example. Consider the four-block
object problem discussed previously. This time, we assign specific attenuation values for each
block, as shown in Figure 6.20a. The corresponding projection measurements are depicted in
the same figure. We will start with an initial guess of the object’s attenuation distribution.
Since we have no a priori knowledge of the object itself, we assume that it is homogeneous.
We can start with an initial estimate using the average of the projection samples. The sum of
the projection samples (3 + 7 = 10 or 4 + 6 = 10) evenly distributed over the four blocks
results in an average value of 2.5 (10/4 =2.5). Next, we calculate the line integrals of our
estimated distribution along the same paths as the original projection measurement. For
example, we can calculate the projection samples along the horizontal direction and obtain the
calculated projection values of 5 (2.5 + 2.5) and 5, as shown in Figure 6.20b.
By comparing the calculated projections against the measured values of 3 and 7 (Figure
6.20a), we observe that the top row is overestimated by 2 (5 − 3) and the bottom row is
underestimated by 2 (5 − 7). Since we have no a priori knowledge of the object, we again
assume that the difference between the measured and the calculated projections must be split
evenly among all pixels along each ray path. Therefore, we decrease the value of each block
in the top row by 1 and increase the bottom row by 1, as shown in Figure 6.20c. The
calculated projections in the horizontal direction are now consistent with the measured
projections. We repeat the same process for projections in the vertical direction and reach the
conclusion that each element in the first column must be decreased by 0.5 and each element in
the second column increased by 0.5, as shown in Figure 6.20d. The calculated projections in
all directions are now consistent with the measured projections (including the diagonal
measurement), and the reconstruction process stops. The object is correctly reconstructed.
This reconstruction process is called the algebraic reconstruction technique (ART). Based on
the above discussion, it is clear that iterative reconstruction methods are computationally
intensive because forward projections (based on the estimated reconstruction) must be
performed repeatedly. This is in addition to the updates required of the reconstructed pixels
based on the difference between the measured projection and the calculated projection. All of
the iterative reconstruction algorithms require several iterations before they converge to the
desired results. Given the fact that the state-of-the-art CT scanner can acquire a complete
projection data set in a fraction of a second, and each CT examination typically contains
several hundred images, the routine clinical usage of ART is still a long way from reality.
Despite its limited utility, ART does provide some insight into the reconstruction process.
Recall the image-update process that was used in Figure 6.20. When there is no a priori
knowledge of the object, we always assume that the intensity of the object is uniform along
the ray path. In other words, we distribute the projection intensity evenly among all pixels
along the ray path. This process leads to the concept of backprojection.
Figure 6.20: Illustration of iterative reconstruction (a) original object and its projections (b) initial
estimate of the object and its projection (c) updated estimation of object and its projection and (d) final
estimation and projections.
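The row-and-column correction described above can be sketched directly; the numbers are those of Figure 6.20 (row sums 3 and 7, column sums 4 and 6), and a single pass already reproduces the object:

```python
import numpy as np

def art_2x2(row_sums, col_sums, n_iter=1):
    """ART sketch for the 2x2 example: start from the mean, then split each
    projection error evenly among the pixels along the ray path."""
    est = np.full((2, 2), sum(row_sums) / 4.0)  # initial uniform guess (2.5)
    for _ in range(n_iter):
        for i in range(2):  # horizontal rays: correct each row
            est[i, :] += (row_sums[i] - est[i, :].sum()) / 2.0
        for j in range(2):  # vertical rays: correct each column
            est[:, j] += (col_sums[j] - est[:, j].sum()) / 2.0
    return est

# Measured projections from Figure 6.20: rows 3 and 7, columns 4 and 6.
print(art_2x2([3, 7], [4, 6]))  # recovers the object [[1, 2], [3, 4]]
```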
Consider a simple case in which the object of interest is an isolated point. The corresponding
projection is an impulse function with its peak centered at the location where a parallel ray
intersects the point, as shown in Figure 6.21. Similar to the reasoning used for ART, we do
not know, a priori, the location of the point other than the fact that it is located on that line.
Therefore, we have to assume a uniform probability distribution for its location. We paint the
entire ray path with the same intensity as the measured projection, as shown in Figure 6.21
(a). In this example, the first projection is oriented vertically. The next projection is again an
impulse function, and again we paint the entire ray path that intersects the impulse with the
same intensity as the measurement. This time, however, the ray path is slightly rotated relative
to the first one because of the difference in projection angles. This process is repeated for all
projection samples. Figures 6.21 (b)–(i) depict the results obtained over different angular
ranges in 22.5-deg increments.
Note that the painting procedure essentially reverses the projection process and formulates a
2D object from a set of 1D line integrals. As a result, this process is called backprojection,
and is one of the key image reconstruction steps used in many commercial CT scanners. From
Figure 6.21(i), it is clear that by backprojecting over the range of 0 to 180 deg, a rough
estimate of the original object (a point) can be obtained. By examining the intensity profile of
the reconstructed point (Figure 6.22), we conclude that the reconstructed point is a blurred
version of the true object.
Degradation of the spatial resolution is obvious. From the linear system theory, we know that
Figure 6.21(i) is essentially the impulse response of the backprojection process. Therefore, we
should be able to recover the original object by simply deconvolving the backprojected
images with the inverse of the impulse response. This approach is often called the
backprojection-filtering approach.
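A minimal numerical sketch of the painting procedure: backproject rays through a single point and observe that the point is recovered but blurred. The grid size and the number of angles below are arbitrary choices:

```python
import numpy as np

def backproject_point(n=65, n_angles=8):
    """Paint each ray through the center point across the image and sum.
    The point is recovered, but surrounded by the streaky blur that is the
    point-spread function of plain backprojection."""
    c = n // 2
    y, x = np.mgrid[0:n, 0:n]
    img = np.zeros((n, n))
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        # Perpendicular distance from each pixel to the ray through the
        # center at angle theta; "paint" pixels within half a pixel of it.
        d = np.abs((x - c) * np.cos(theta) + (y - c) * np.sin(theta))
        img += (d < 0.5).astype(float)
    return img

bp = backproject_point()
print(bp[32, 32] == bp.max())  # True: the true point is the brightest pixel
```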
1200
1000
Intensity (HU)
800
600
400
200
0
0 20 40 60 80 100
Pixels Mahmood & Haider
Figure 6.22: Profile of a reconstructed point. Solid black line:
reconstruction with backprojection; thick gray line: ideal reconstruction.
6.17 The Filtered Backprojection Algorithm
Although the Fourier slice theorem provides a straightforward solution for tomographic
reconstruction, it presents some challenges in actual implementation. First, the sampling
pattern produced in the Fourier space is non-Cartesian. The Fourier slice theorem states that
the Fourier transform of a projection is a line through the origin in 2D Fourier space. As a
result, samples from different projections fall on a polar coordinate grid, as shown in Figure
6.23.
To perform a 2D inverse Fourier transform, these samples must be interpolated or
regridded to a Cartesian coordinate. Interpolation in the frequency domain is not as
straightforward as interpolation in real space. In real space, an interpolation error is localized
to the small region where the pixel is located. This property does not hold, however, for
interpolation in the Fourier domain, since each sample in a 2D Fourier space represents
certain spatial frequencies (in the horizontal and vertical directions). Therefore, an error
produced on a single sample in Fourier space affects the appearance of the entire image (after
the inverse Fourier transform).
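The underlying Fourier slice relation is easy to verify numerically for the zero-degree projection, where the line integral reduces to a column sum (the test array and its size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((64, 64))  # arbitrary test "object"

# Projection at angle zero: line integrals along the vertical direction.
proj = f.sum(axis=0)

# Fourier slice theorem: the 1D FFT of that projection equals the line
# through the origin of the 2D FFT of the object.
slice_through_origin = np.fft.fft2(f)[0, :]
print(np.allclose(np.fft.fft(proj), slice_through_origin))  # True
```

For all other angles the sampled slice is rotated, which is exactly why the samples land on the polar grid of Figure 6.23 and must be regridded before an inverse 2D FFT.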
6.18.3 Windowing
In the CT image, density values are represented as gray scale values. However, since the
human eye can discern only approx. 80 gray scale values, not all possible density values can
be displayed in discernible shades of gray. For this reason, the density range of diagnostic
relevance is assigned the whole range of discernible gray values. This process is called
windowing. To set the window, it is first defined which CT number the central gray scale
value (the window center) is assigned to. The window width then defines which CT numbers
above and below the central gray value can still be discriminated by varying shades of gray,
with black representing tissue of the lowest density and white representing tissue of the
highest density within the window.
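As a minimal sketch of this mapping (the function name and the 8-bit output range are our choices, not from the text), the window center and width define a linear ramp from black to white:

```python
import numpy as np

def apply_window(hu, center, width):
    """Map CT numbers (Hounsfield units) to 8-bit gray values.

    CT numbers below center - width/2 are clipped to black (0), those
    above center + width/2 to white (255); values in between get a
    linearly increasing shade of gray.
    """
    low = center - width / 2.0
    gray = (np.asarray(hu, dtype=float) - low) / width * 255.0
    return np.clip(gray, 0, 255).astype(np.uint8)

# Illustrative soft-tissue window: center 40 HU, width 400 HU.
print(apply_window([-1000, 40, 3000], center=40, width=400))  # 0, 127, 255
```

Air (-1000 HU) falls below the window and maps to black, dense bone above it maps to white, and the full range of discernible grays is spent on the 400 HU of diagnostic interest.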
6.18.4 Volume Visualization
Traditionally, CT images are viewed in a slice-by-slice mode. A series of reconstructed CT
images are placed on films, and radiologists are trained to form, in their heads, the volume
information from multiple 2D images. Although the ability to generate 3D images by
computer has been available nearly since the beginning of CT, the use of this capability has
only recently become popular. This change is mainly due to three factors. The first is related
to the quality and efficiency of 3D image generation. Because of the slow acquisition speed of
the early CT scanners, thicker slices were typically acquired in order to cover the entire
volume of an organ. The large mismatch between the in-plane and cross-plane resolution
produced undesirable 3D image quality and artifacts. In addition, the amount of time needed
to generate 3D images from a 2D dataset was quite long, due to computer hardware
limitations and a lack of advanced and efficient algorithms.
The second factor is the greater productivity of radiologists. Historically, CT scans were
"organ centric"; each CT examination covered only a specific organ, such as the liver, lung, or
head, with either thick slices or sparsely placed thin slices. The number of images produced
by a single examination was well below 100. With the recent advances in CT technology
(helical and multi-slice), a large portion of the human body can be easily covered with thin
slices. For example, some of the CT angiographic applications typically cover a 120-cm range
from the celiac artery to a foot with thin slices. The number of images produced by a single
examination can be several hundred to over 1000. To view these images sequentially would
be an insurmountable task. The third factor that influences the image presentation format is
related to clinical applications. CT images have become increasingly useful tools for surgical
planning, therapy treatment, and other applications outside the radiology department. It is
more convenient and desirable to present these images in a format that can be understood by
people who do not have radiological training.
6.19 Image Quality Characteristics
CT image quality can be described by one comprehensive term, visibility: the visibility of
anatomical structures, various tissues, and signs of pathology. Visibility depends on the
characteristics of the imaging system, which are a somewhat complex combination of factors.
That means image quality is not a single factor but a composite of at least five factors:
• Contrast sensitivity
• Visibility of detail, as affected by blurring (sometimes called spatial resolution)
• Visual noise
Figure 6.24: The physical contrast for bones, bullets, and barium (high) relative
to the soft tissues and fluids (low).
CT excels at imaging the very low density differences between and among soft tissues, and
that is the real challenge. Contrast sensitivity determines the range of visibility with respect
to physical contrast. The CT procedure has high contrast sensitivity, so tissues with small
differences in density can be visualized. In procedures with low contrast sensitivity, whether
because of limitations of the specific imaging modality or of the adjustment of the imaging
protocol factors, only objects with high physical contrast are visible, while tissues with small
differences in density (physical contrast) are not.
Chapter 7
Nuclear Medicine
Imaging Systems
Rationale
In this chapter, we give a simple explanation of the relevant topics (radioactivity) and the
gamma camera in a manner that should be understandable to those without a formal physics
background. The chapter thus serves as an introduction to the process of radioactivity and its
uses in the gamma camera, for those who encounter radioactive materials in their work and
would like to better understand the phenomenon, but whose education did not include physics
to the appropriate level.
Performance Objectives
After studying chapter seven, the student will be able to:
7.1 Introduction
Radioactivity is the spontaneous, random transformation (radioactive decay) of unstable
atoms, resulting in new elements or a lower energy state of the
atoms. Radioactivity is a phenomenon that occurs naturally in a number of substances. Atoms
of the substance spontaneously emit invisible but energetic radiations, which can penetrate
materials that are opaque to visible light. The effects of these radiations can be harmful to
living cells but, when used in the right way, they have a wide range of beneficial applications,
particularly in medicine. Radioactivity has been present in natural materials on the earth since
its formation (for example in potassium-40 which forms part of all our bodies). However,
because its radiations cannot be detected by any of the body's five senses, the phenomenon
was only discovered about a century ago, when radiation detectors were developed. Nowadays we
have also found ways of creating new man made sources of radioactivity; some (like iodine-
131 and molybdenum-99) are incidental waste products of the nuclear power industry which
nevertheless have important medical applications, whilst others (for example fluorine-18) are
specifically produced for the benefits of their medical use.
7.2 Definition of “Activity”
Activity is the shortened term commonly used for radioactivity.
Activity is the number of atoms that decay per unit time.
7.3 Isotopes and Nuclides
While all atoms of the same element contain the same number of protons, the number of
neutrons may be different. For example, carbon atoms have six protons. If a carbon atom also
has six neutrons, it is Carbon-12. If it has seven neutrons, it is Carbon-13. A carbon atom
containing six protons and eight neutrons is Carbon-14. This form or isotope of carbon is
radioactive. Carbon-14 is radioactive while Carbon-12 and Carbon-13 are stable. The term
nuclide is used to refer to any type of atom, so that Carbon-12 and Hydrogen-2 are nuclides.
They are not isotopes of each other because they differ in the number of protons that they each
have in their nucleus. The prefix “radio-” can be added to either term, making radioisotope or
radionuclide, whenever the atom referred to is radioactive.
7.4 Half-life
The activity of a radioactive source decreases over time, at a rate that is different for each
substance. The time required for half of a large number of identical radioactive atoms to decay
is called the half-life which is denoted by the symbol t½. In other words, the time taken for the
activity to fall to half of its original value is called the half-life of the source. The physical
characteristic associated with a radioactive material is the half-life. The definition of the half-
life is:
The period of time it takes for the number of radioactive atoms to be reduced to one-half
of its original amount.
The mathematical symbol used to identify half-life is t½.
Each radioactive isotope has its own characteristic half-life.
However the activity does not fall at a steady rate, so it is not the case that the activity will
have fallen to nothing after two half-lives. Instead the activity falls at an ever decreasing rate
so that in every half-life the activity will halve.
Figure 7.1 shows a graph of how the activity of a source changes with time. If the activity
starts out at a value Ao then after one half-life the activity will have fallen to half of Ao. After
two half-lives the activity will have fallen to one quarter of Ao and after three half-lives to one
eighth of Ao. It can be seen that the activity is falling more and more slowly and, in principle,
it will never actually reach zero.
Figure 7.1: The decay of activity with time: after successive half-lives the
activity falls to 1/2 Ao, 1/4 Ao and 1/8 Ao.
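The behavior shown in Figure 7.1 follows directly from the decay law. A short sketch (the 6-hour half-life and starting activity of 800 are illustrative values, the half-life being roughly that of technetium-99m):

```python
def activity(a0, t, half_life):
    """Activity remaining after time t: A = Ao * (1/2)**(t / half_life).
    t and half_life must be in the same units."""
    return a0 * 0.5 ** (t / half_life)

# Starting from Ao = 800 (arbitrary units), after 1, 2 and 3 half-lives
# the activity falls to 1/2, 1/4 and 1/8 of Ao -- but never reaches zero.
a0, t_half = 800.0, 6.0  # hours
for n in (1, 2, 3):
    print(n, activity(a0, n * t_half, t_half))  # 400.0, 200.0, 100.0
```

The same function written with the decay constant, `a0 * exp(-ln(2) * t / half_life)`, gives identical values; the half-life form simply makes the halving per half-life explicit.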
7.5 Radioactivity
Radioactivity is generally administered to the patient in the form of a radiopharmaceutical
(the term radiotracer is also used). This follows some physiological pathway and accumulates for
a short period of time in some part of the body. A good example is 99mTc-tin colloid which
following intravenous injection accumulates mainly in the patient's liver. The substance emits
gamma-rays while it is in the patient's liver and we can produce an image of its distribution
using a nuclear medicine imaging system. This image can tell us whether the function of the
liver is normal or abnormal or if sections of it are damaged from some form of disease.
Different radiopharmaceuticals are used to produce images from almost all regions of the
body:
Note that the form of information obtained using this imaging method is mainly related to the
physiological functioning of an organ as opposed to the mainly anatomical information which
is obtained using X-ray imaging systems. Nuclear medicine therefore provides a different
perspective on a disease condition and generates additional information to that obtained from
X-ray images. Our purpose here is to concentrate on the imaging systems used to produce the
images.
Early forms of imaging system used in this field consisted of a radiation detector (a
scintillation detector for example) which was scanned slowly over a region of the patient in
order to measure the radiation intensity emitted from individual points within the region. One
such device was called the Rectilinear Scanner. Such imaging systems have been replaced
since the 1970s by more sophisticated devices which produce images much more rapidly. The
most common of these modern devices is called the Gamma Camera and we will consider its
construction and mode of operation below.
7.6 Radioactive Decay
Isotopes that are not stable and emit radiation are called radioisotopes. A radioisotope is an
isotope of an element that undergoes spontaneous decay and emits radiation as it decays.
During the decay process, it becomes less radioactive over time, eventually becoming stable.
Once an atom reaches a stable configuration, it no longer gives off radiation. For this
reason, radioactive sources – or sources that spontaneously emit energy in the form of ionizing
radiation as a result of the decay of an unstable atom – become weaker with time. As more and
more of the source’s unstable atoms become stable, less radiation is produced and the activity
of the material decreases over time to zero.
The time it takes for a radioisotope to decay to half of its starting activity is called the
radiological half-life. Each radioisotope has a unique half-life, and it can range from a fraction
of a second to billions of years. For example, iodine-131 has an eight-day half-life, whereas
plutonium-239 has a half-life of 24,000 years. A radioisotope with a short half-life is more
radioactive than a radioisotope with a long half-life, and therefore will give off more radiation
during a given time period.
There are three main types of radioactive decay:
• Alpha decay: Alpha decay occurs when the atom ejects a particle from the nucleus,
which consists of two neutrons and two protons. When this happens, the atomic number
decreases by 2 and the mass decreases by 4. Examples of alpha emitters include radium,
radon, uranium and thorium.
• Beta decay: In basic beta decay, a neutron is turned into a proton and an electron is
emitted from the nucleus. The atomic number increases by one, but the mass only decreases
slightly. Examples of pure beta emitters include strontium-90, carbon-14, tritium and
sulphur-35.
• Gamma decay: Gamma decay takes place when there is residual energy in the nucleus
following alpha or beta decay, or after neutron capture (a type of nuclear reaction) in a
nuclear reactor. The residual energy is released as a photon of gamma radiation. Gamma
decay generally does not affect the mass or atomic number of a radioisotope. Examples of
gamma emitters include iodine-131, cesium-137, cobalt-60, radium-226, and technetium-
99m. Gamma rays are produced by unstable nuclei when protons and neutrons rearrange into
a more stable configuration. Gamma decay usually follows an alpha or beta decay and does
not change the element.
Many isotopes can decay by more than one method. For example, when actinium-226
(Z = 89) decays, 83% of the rate is through β− decay:
226Ac → 226Th + e− + ν̄
This voltage pulse is commonly called the Z-pulse (or zee-pulse in American English!) which
following pulse height analysis (PHA) is fed as the unblank pulse to the cathode ray
oscilloscope (CRO).
Figure 7.4: A block diagram of the basic components of a gamma camera: collimator,
crystal and PM tube array over the organ containing the radiopharmaceutical, a position
circuit producing the ±X and ±Y signals, and a summation (Σ) producing the Z-pulse
that passes through the PHA to unblank the CRO.
So we end up with four position signals and an un-blank pulse sent to the CRO. Let us briefly
review the operation of a CRO before we continue. The core of a CRO consists of an
evacuated tube with an electron gun at one end and a phosphor-coated screen at the other end.
The electron gun generates an electron beam which is directed at the screen and the screen
emits light at those points struck by the electron beam. The position of the electron beam can
be controlled by vertical and horizontal deflection plates and with the appropriate voltages fed
to these plates the electron beam can be positioned at any point on the screen. The normal
mode of operation of an oscilloscope is for the electron beam to remain switched on. In the
case of the gamma camera the electron beam of the CRO is normally switched off - it is said
to be blanked.
When an un-blank pulse is generated by the PHA circuit the electron beam of the CRO is
switched on for a brief period of time so as to display a flash of light on the screen. In other
words, the voltage pulse from the PHA circuit is used to un-blank the electron beam of the
CRO.
So where does this flash of light occur on the screen of the CRO? The position of the flash
of light is dictated by the ±X and ±Y signals generated by the position circuit. These signals as
you might have guessed are fed to the deflection plates of the CRO so as to cause the un-
blanked electron beam to strike the screen at a point related to where the scintillation was
originally produced in the NaI(Tl) crystal. Simple!
The gamma camera can therefore be considered to be a sophisticated arrangement of
electronic circuits used to translate the position of a flash of light in a scintillation crystal to a
flash of light at a related point on the screen of an oscilloscope. In addition the use of a pulse
height analyzer in the circuitry allows us to accept only the scintillations related to
photoelectric events in the crystal, by rejecting all voltage pulses except those occurring within
the photopeak of the gamma-ray energy spectrum.
Let us summarize where we have got to before we proceed. A radiopharmaceutical is
administered to the patient and it accumulates in the organ of interest. Gamma-rays are
emitted in all directions from the organ and those heading in the direction of the gamma
camera enter the crystal and produce scintillations (note that there is a device in front of the
crystal called a collimator which we will discuss later). The scintillations are detected by an
array of PM tubes whose outputs are fed to a position circuit which generates four voltage
pulses related to the position of scintillation within the crystal. These voltage pulses are fed to
the deflection circuitry of the CRO. They are also fed to a summation circuit whose output
(the Z-pulse) is fed to the PHA and the output of the PHA is used to switch on (that is, un-
blank) the electron beam of the CRO. A flash of light appears on the screen of the CRO at a
point related to where the scintillation occurred within the NaI(Tl) crystal. An image of the
distribution of the radiopharmaceutical within the organ is therefore formed on the screen of
the CRO when the gamma-rays emitted from the organ are detected by the crystal.
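The position arithmetic summarized above can be sketched as a signal-weighted centroid over the PM-tube array (this is the classic Anger logic; the tube layout, signal values and photopeak window below are hypothetical, for illustration only):

```python
def anger_position(signals, positions):
    """Estimate the scintillation (x, y) as the signal-weighted centroid
    of the PM-tube positions; the summed Z-pulse goes to the PHA."""
    z = sum(signals)  # Z-pulse: total light collected by all tubes
    x = sum(s * px for s, (px, _) in zip(signals, positions)) / z
    y = sum(s * py for s, (_, py) in zip(signals, positions)) / z
    return x, y, z

# Four tubes on a unit square; the scintillation occurred nearest tube 0,
# so that tube sees the most light.
positions = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
signals = [40.0, 20.0, 20.0, 20.0]
x, y, z = anger_position(signals, positions)
print(x, y, z)  # 0.4 0.4 100.0

# The PHA would un-blank the CRO only if z lies inside the photopeak
# window (hypothetical bounds):
accept = 90.0 < z < 110.0
print("unblank:", accept)  # unblank: True
```

The estimated position (0.4, 0.4) is pulled toward the tube with the strongest signal, which is exactly how the ±X and ±Y deflection voltages reproduce the scintillation position on the CRO screen.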
What we have described above is the operation of a fairly traditional gamma camera.
Modern designs are a good deal more complex but the basic design has remained much the
same as has been described. One area where major design improvements have occurred is the
area of image formation and display. The most basic approach to image formation is to
photograph the screen of the CRO over a period of time to allow integration of the light
flashes to form an image on photographic film. A stage up from this is to use a storage
oscilloscope which allows each flash of light to remain on the screen for a reasonable period
of time.
The most modern approach is to feed the position signals into the memory circuitry of a
computer for storage. The memory contents can therefore be displayed on a computer monitor
and can also be manipulated (that is processed) in many ways. For example various colors can
be used to represent different concentrations of a radiopharmaceutical within an organ.
The use of such digital image processing is now widespread in nuclear medicine in that it
can be used to rapidly and conveniently control image acquisition and display as well as to
analyze an image or sequences of images, to annotate images with the patient's name and
examination details, to store the images for subsequent retrieval and to communicate the
image data to other computers over a network. We will continue with our description of the
gamma camera by considering the construction and purpose of the collimator.
7.10 Collimation
The collimator is a device which is attached to the front of the gamma camera head. It
functions something like a lens used in a photographic camera but this analogy is not quite
correct because it is rather difficult to focus gamma-rays. Nevertheless in its simplest form it
is used to block out all gamma rays which are heading towards the crystal except those which
are travelling at right angles to the plane of the crystal:
Figure 7.5 illustrates a magnified view of a parallel-hole collimator attached to a crystal.
The collimator simply consists of a large number of small holes drilled in a lead plate. Notice
that gamma-rays entering at an angle to the crystal get absorbed by the lead and that only
those entering along the direction of the holes get through to cause scintillations in the crystal.
If the collimator was not in place these obliquely incident gamma-rays would blur the images
produced by the gamma camera. In other words the images would not be very clear.
Figure 7.5: A parallel-hole lead (Pb) collimator attached to the crystal.
A pin-hole collimator, formed by a single hole in a lead cone, is shown in the figure. It operates in a similar fashion to a pin-hole photographic camera
and produces an inverted image of an object - an arrow is used in the figure to illustrate this
inversion. This type of collimator has been found useful for imaging small objects such as the
thyroid gland.
Figure 7.6: Diagram of a pin-hole collimator illustrating the inversion
of acquired images.
CHAPTER 8
Rationale
This chapter will explain the basic physics of how sound waves can produce images of the
human body. The starting idea: all the various techniques of diagnostic ultrasound involve
the detection and display of the acoustic energy reflected off different tissues in the body.
Performance Objectives
After studying chapter eight, the student will be able to:
1. State the Physical and Medical Definition of ultrasound.
2. Explain how the piezoelectric effect operates.
3. Identify the Properties of Ultrasound.
5. Describe the basic function of a transducer and how it forms an ultrasound
pulse.
7. Describe the modes of ultrasound.
8. Discuss physical factors that determine ultrasound wavelength and its
significance in imaging.
9. Describe the general relationship between wavelength and image quality.
10. Describe the physical conditions in the body that produce ultrasound
reflections or echoes.
11. Describe the factors that determine the intensity of a reflected pulse.
12. Identify the three physical factors that determine the total attenuation of
an ultrasound pulse passing through a section of tissue.
13. Identify the factors that determine ultrasound velocity and state the
approximate velocity value in tissue
14. State the basic principles of Doppler Effect.
CHAPTER 8 IMAGING WITH SOUND
Safe in pregnancy
Has no known side effects
Inexpensive
Portable
Minimal preparation of patients
Painless
Gives direct vision for biopsies
One problem with ultrasound imaging is that diagnostic images sometimes cannot be
obtained because of the size of the patient, or because the ultrasound beam cannot traverse
air-filled or bony areas; in such cases, cross-sectional imaging with CT or MRI can be used
instead.
Medical treatment can be given only after a proper diagnosis correctly identifies the disease.
After the first use of ionizing radiation (X-rays) by Roentgen in 1895 to visualize the interior
of the body, X-rays remained the only way to do so for decades. During the second half of the
twentieth century, however, new imaging methods quite different from X-rays were
discovered. One of the most important of these is ultrasound, which showed particular
potential and, in some respects, greater benefit than imaging that relies on X-rays.
During the last decade of the twentieth century, the use of ultrasound in medical practice
and hospitals became increasingly common in all parts of the world. Much scientific research
proved the benefit of ultrasound, and in many cases its superiority over commonly used X-ray
techniques, resulting in significant changes in diagnostic imaging procedures.
Sound is a physical phenomenon that carries energy from one point to another. In this
respect it is similar to radiation, but it differs in that it cannot pass through a vacuum: it
needs matter in order to travel from one place to another. This is because sound waves are
actually vibrations that pass through a material. If there is no substance, nothing can vibrate
and sound cannot exist.
One of the most important characteristics of sound is frequency, defined as the rate of
vibration of the sound source and of the material the sound passes through. Sound frequency
is measured in a basic unit called the hertz (Hz), defined as one vibration, or cycle, per second.
Pitch is the term commonly used as a synonym for sound frequency.
Medical ultrasound uses high-frequency sound waves to look inside the body. These sound
waves are too high in frequency for the human ear to hear. The human ear can hear or
respond to only a specific range of frequencies; in young adults this range extends from 20 Hz
to 20,000 Hz. Frequencies greater than this limit are called ultrasonic frequencies
(ultrasound). Frequencies in the range of 2 MHz (million cycles per second) to 20 MHz, far
too high for the human ear, are used in diagnostic ultrasound. Ultrasound is used as a
diagnostic tool because it can be focused into small, well-defined beams that can probe the
human body and interact with the tissue structures to form images. The acoustic waves are
directed at the internal organs through a small
hand-held scanner called a transducer, which is placed in direct contact with the patient's
skin over the area to be imaged. The transducer contains a vibrating crystal and detects the
reflected sound, or echo, to form an image.
The transducer (probe) is the small hand-held component of the ultrasound imaging
equipment that resembles a microphone and it performs several functions as will be described
in detail later. Its first function is to produce and send the ultrasound pulses when electrical
pulses are applied to it. A short time later, while the transducer is pressed against the skin, it
receives the echoing waves, which are converted back into electrical pulses that are then
processed by the system and formed into an image.
Echoes are produced by the surfaces, or boundaries, between two different types of tissue,
and appear in the form of bright white spots in the image. Many surfaces in general produce
a white or gray background that can be seen in the image. In the absence of reflecting
surfaces within a fluid, such as a cyst, dark spots appear in the image. For this reason the
ultrasound image is sometimes called a brightness-modulation ("B mode") image, which is a
display of the echo-producing sites within the anatomical area.
Another physical characteristic that can be imaged with ultrasound devices is blood flow, by
processing the echoes produced by blood flowing through the blood vessels. This special
application of ultrasound uses the Doppler principle, which measures the direction and speed
of blood cells as they move through vessels. A computer collects and processes the sounds
and constructs graphs or images, with different colors representing the different flow
velocities and directions of the blood through the blood vessels.
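The Doppler shift underlying this measurement can be sketched as follows (this is the standard pulse-echo form; the 5 MHz frequency, 0.5 m/s velocity and 60-degree angle are illustrative values, not from the text):

```python
import math

def doppler_shift_hz(f0_hz, v_mps, angle_deg, c_mps=1540.0):
    """Pulse-echo Doppler shift: f_d = 2 * f0 * v * cos(theta) / c.
    The factor 2 accounts for the transmit and receive paths; c defaults
    to the average speed of sound in soft tissue (1540 m/s)."""
    return 2.0 * f0_hz * v_mps * math.cos(math.radians(angle_deg)) / c_mps

# A 5 MHz beam, blood moving at 0.5 m/s, insonation angle of 60 degrees:
fd = doppler_shift_hz(5e6, 0.5, 60.0)
print(round(fd))  # 1623 -- an audible frequency, which is why the shift
                  # can also be played through a loudspeaker
```

Note that the shift vanishes at 90 degrees (cos 90° = 0): flow straight across the beam produces no measurable Doppler signal, which is why the insonation angle matters clinically.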
8.2 Definition of Ultrasound
Physical definition: Ultrasound (ultrasonic) is the term used to describe sound of
frequencies above 20,000 hertz (Hz), beyond the range of human hearing. The term
"ultrasonic" applied to sound refers to anything above the frequencies of audible sound.
Medical definition: Diagnostic medical ultrasound is the use of high-frequency sound
to aid in the diagnosis and treatment of patients; the frequency range used in
medical ultrasound imaging is 2-15 MHz.
8.3 Properties of Ultrasound
Sound is a pressure disturbance (vibration) transmitted through all forms of matter: gases,
liquids, solids, and plasmas as mechanical pressure waves that carry kinetic energy. A medium
must therefore be present for the propagation of these waves, since they cannot travel
through a vacuum.
8.3.1 Type of Waves Depends on the Medium
Ultrasound and sound waves propagate in a fluid (gases and liquids) as longitudinal waves,
in which the particles of the medium vibrate to and fro along the direction of propagation,
alternately compressing and rarefying the material.
In hard tissues like bone, ultrasound can be transmitted as both longitudinal (compression)
and transverse (shear) waves; in the latter case, the particles move perpendicularly to the
direction of propagation.
As known from basic physics, the characteristic variables describing the propagation of a
monochromatic wave in time and space, the frequency (f) or period (T), the velocity (v) and
the wavelength (λ), are related to each other as follows:
v = f λ, where f = 1/T
Waves in the electromagnetic spectrum, such as light waves, are usually measured in
millimeters or nanometers instead of centimeters or meters, because their wavelengths are
much shorter than those of sound waves.
As it does in water, ultrasound propagates in biological soft tissues as longitudinal waves; the
average speed of ultrasound propagation in soft tissue is approximately 1540 m/s (fatty
tissue, 1470 m/s; muscle, 1570 m/s). The construction of ultrasound images depends very
much on the measurement of distances, which depends on this almost constant propagation
speed. The speed in bone (3600 m/s) and cartilage is, however, much higher and can create
misleading effects in images.
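The dependence of the image on this nearly constant speed can be sketched numerically (the 3.5 MHz frequency and 65 µs round-trip time below are illustrative values, not from the text):

```python
C_SOFT_TISSUE = 1540.0  # average speed of sound in soft tissue, m/s

def wavelength_mm(frequency_mhz, c_mps=C_SOFT_TISSUE):
    """Wavelength = c / f, converted to millimeters."""
    return c_mps / (frequency_mhz * 1e6) * 1e3

def echo_depth_mm(round_trip_us, c_mps=C_SOFT_TISSUE):
    """Reflector depth from the pulse-echo round-trip time; the factor
    1/2 is there because the pulse travels to the reflector and back."""
    return c_mps * (round_trip_us * 1e-6) / 2.0 * 1e3

print(round(wavelength_mm(3.5), 2))   # 0.44 mm at 3.5 MHz
print(round(echo_depth_mm(65.0), 1))  # an echo arriving after 65 us
                                      # comes from about 50 mm deep
```

If the scanner assumed 1540 m/s but the pulse actually traveled through bone at roughly 3600 m/s, the same round-trip time would correspond to more than twice the depth, which is the source of the misleading effects mentioned above.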
The wavelength of ultrasound is closely related to the ultrasound frequency, and both
influence the resolution of the images. Better resolution is associated with a higher
ultrasound frequency and thus a shorter wavelength, but absorption of the sound energy by
tissue also increases with frequency. Resolution determines the degree of image clarity: it is
the ability of the ultrasound machine to distinguish two structures (reflectors or scatterers)
that are close together as separate.
The kinetic energy of the sound waves is converted to heat (thermal energy) in the medium
when sound waves are absorbed. The applications of ultrasound to bring heat or agitation into
the body (thermotherapy) were the first use of ultrasound in medicine.
8.4 Diagnostic Ultrasound
Today, ultrasound (US) is one of the most commonly used imaging technologies in medicine.
Sounds in the range of 2 to 18 megahertz (MHz) are typically used for diagnostic ultrasound.
The accuracy of ultrasound diagnosis is based on computerized analysis of reflected
ultrasound waves, which non-invasively builds up fine images of internal body structures.
The best resolution is achieved by using shorter wavelengths, the wavelength being inversely
proportional to the frequency. However, the use of high frequencies is limited by increased
attenuation (loss of signal strength) in the tissues: higher frequencies are more easily
absorbed and thus have a shorter depth of penetration. For this reason, specific probes are
used for different frequency ranges to examine different parts of the body:
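This frequency-penetration trade-off can be sketched with the common rule of thumb that soft-tissue attenuation is roughly 0.5 dB per cm per MHz (an approximation; actual values vary by tissue, and the frequencies and depth below are illustrative):

```python
def round_trip_loss_db(freq_mhz, depth_cm, alpha=0.5):
    """Round-trip attenuation in dB, with alpha in dB/(cm*MHz).
    The factor 2 covers the path to the reflector and back."""
    return alpha * freq_mhz * 2.0 * depth_cm

# At the same 10 cm depth, raising the frequency from 3.5 to 10 MHz
# nearly triples the signal loss -- hence low-frequency probes for deep
# organs and high-frequency probes for superficial structures.
print(round_trip_loss_db(3.5, 10.0))   # 35.0
print(round_trip_loss_db(10.0, 10.0))  # 100.0
```

Because loss grows linearly with both frequency and depth, each probe's frequency range effectively fixes the maximum depth at which it can still return a usable echo.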
Because of the heterogeneity of the different tissues within the body, with their different
densities, the penetration of the ultrasonic waves varies accordingly. Bone absorbs
ultrasound much more than soft tissue, so that, in general, ultrasound is suitable for
examining only the surfaces of bones. For this reason, ultrasound images show a black zone
behind the bones, called an acoustic shadow, due to the inability of the ultrasound energy to
reach those areas.
8.5 Piezoelectric Materials
The word piezoelectricity means the ability of some materials (notably crystals and certain
ceramics, including bone) to generate an electric charge, or an electric polarity in dielectric
crystals, in response to applied mechanical stress, and conversely to deform when such
crystals are subjected to an applied voltage. The prefix piezo- is derived from the Greek word 'piezein' (πιέζειν), which means to
squeeze or press, and electric or electron (ήλεκτρον), which stands for amber, an ancient
source of electric charge. The piezoelectric effect was first discovered in 1880 by brothers
Pierre Curie and Jacques Curie (French physicists). The Curie brothers only found that
piezoelectric materials can produce electricity. The next development was the discovery by
Gabriel Lippmann that electricity can deform piezoelectric materials. It was not until the early
twentieth century that practical devices began to appear. Today, it is known that many
materials such as quartz, topaz, cane sugar, Rochelle salt, and bone have this effect.
In summary;
- Piezoelectricity discovered by the Curies in 1880 using natural quartz
- SONAR (originally an acronym for SOund Navigation And Ranging), a technique
that uses sound propagation (usually underwater, as in submarine navigation), was
first used in war-time in the 1940s
- Diagnostic medical applications in use since the late 1950s
depicted in Figure 8.2 to the right. The voltage built up in each SiO2 unit is very small, but
since millions of them line up in the crystal structure, their voltage adds up to a measurable
amount.
Piezoelectric crystals have the property that the polarization density within the material's
volume changes when a voltage is applied. Thus applying an alternating current (AC) across
the material causes it to oscillate at very high frequencies, thus producing very high
frequency sound waves. A good example of a piezoelectric material is lead (from Latin:
plumbum) zirconate titanate, more commonly known as PZT. PZT is an inorganic compound
with the chemical formula Pb[ZrxTi1-x]O3 (0 ≤ x ≤ 1) that shows a marked
piezoelectric effect, which finds practical applications in the area of electro-ceramics. PZT is a
white solid that is insoluble in all solvents. PZT crystals are the most widely-used
piezoelectric material used for energy harvesting. PZT will generate measurable
piezoelectricity when their static structure is deformed by about 0.1% of the original
dimension. Conversely, those same crystals will change about 0.1% of their static dimension
when an external electric field is applied to the material. A key advantage of PZT materials is
that they can be optimized to suit specific applications through their ability to be manufactured
in any shape or size. Moreover, PZT materials characterized by their ability to resilient, and
resistance to high temperatures and various air pressures and chemically inert.
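As a rough numerical illustration of the ~0.1% deformation figure quoted above (a minimal sketch; the 1 mm disc thickness is a hypothetical example value, not taken from the text):

```python
# Illustrative arithmetic for the ~0.1% strain figure quoted above.
# The 1 mm PZT disc thickness is a hypothetical example value.
thickness_m = 1e-3  # assumed disc thickness: 1 mm
strain = 0.001      # ~0.1% deformation, as stated in the text

displacement_m = thickness_m * strain
print(f"{displacement_m * 1e6:.1f} micrometres")  # 1.0 micrometres
```

Even this microscopic displacement is enough to launch a detectable pressure wave when repeated millions of times per second.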
Figure 8.2: Arrangement of silicon and oxygen atoms in a quartz (SiO2) crystal.
Quartz is an electrical insulator, but it has a number of unique physical properties that are very
different from those of other solid substances such as plastic, wood, concrete, bone, or glass. For
example, it may show interesting behavior when exposed to electric fields, or become
electrically charged when put under stress. This is technically one of the most important
properties of quartz and it has many applications. The property can be found in certain types of
crystals, but not in amorphous substances: such crystals react differently depending on the
direction of the external force, a behavior called anisotropy. To explain this, we have to look at
the individual molecules of a crystal, which is made up of an ordered, repeating pattern of the
same atom or molecule. Each molecule is polarized, since one end is more negatively charged
and the other end is positively charged, and it is called a dipole. This is a result of the atoms that
make up the molecule and the way the molecules are shaped.
8.6 Piezoelectric Effect
The piezoelectric effect is the ability of certain non-conducting materials, such as quartz
crystals and ceramics, to generate an electric current when they are exposed to mechanical stress
(such as pressure or vibration). The effect also works in the opposite direction: a slight
mechanical deformation (the substance shrinks or expands), or the generation of vibrations, is
produced when such materials are subjected to an AC voltage. The vibration or oscillation
caused by the applied AC is transmitted as ultrasonic waves into the surrounding medium. The
piezoelectric crystal therefore serves as a transducer, which converts electrical energy into
mechanical energy and vice versa.
The piezoelectric effect occurs only in crystals with a special crystal structure, one that
lacks a centre of symmetry; all piezoelectric crystal classes lack a centre of symmetry. Under an
applied force, the centres of mass of the positive and negative ions are shifted, which results in a
net dipole moment. When the force is along a different direction, there may be no resulting
net dipole moment in that direction, though there may be a net dipole moment along a different
direction. In the absence of an applied force, the centre of mass of the positive ions
coincides with that of the negative ions, and there is no resulting dipole moment or
polarization.
8.7 Reverse Piezoelectric Effect
The piezoelectric effect is a reversible process: materials exhibiting the direct
piezoelectric effect (the internal generation of electrical charge resulting from an applied
mechanical force) also exhibit the reverse piezoelectric effect (the internal generation of a
mechanical strain resulting from an applied electric field). The effect arises in crystals
that have no centre of symmetry. To explain this, we have to look at the individual molecules
that make up the crystal (cf. Figure 8.2). Each molecule has a polarization: one end is more
negatively charged and the other end is positively charged, and it is called a dipole. This is a
result of the atoms that make up the molecule and the way the molecules are shaped. The polar
axis is an imaginary line that runs through the centre of both charges on the molecule. In a
mono-crystal the polar axes of all of the dipoles lie in one direction. The crystal is said to be
symmetrical because if you were to cut the crystal at any point, the resultant polar axes of the
two pieces would lie in the same direction as the original. In a poly-crystal, there are different
regions within the material that have different polar axes. It is asymmetrical because there is
no point at which the crystal could be cut that would leave the two remaining pieces with the
same resultant polar axis.
In summary:
Ultrasound waves are generated by piezoelectric crystals. Piezoelectric means "pressure
electricity". An electric current applied to a quartz crystal produces a mechanical
deformation of its shape and a change of polarity. Thus applying an alternating current (AC)
across the material causes expansion and contraction, which in turn produces the
compressions and rarefactions of sound waves. It also works in the opposite direction: an
electrical current is generated on exposure to returning echoes, which is processed to generate a
display. Hence the piezoelectric crystals are both transmitter (a small proportion of the time)
and receiver (most of the time). Many materials are known to show the piezoelectric effect
(e.g. topaz, cane sugar, Rochelle salt, and bone), and the frequency of the generated wave is a
specific feature of the crystal used.
8.8 Detection of Ultrasound
As we have seen previously, the piezoelectric effect works in reverse. If the crystal is
squeezed or stretched, an electric field is produced across it. So if ultrasound hits the crystal
from outside, it will cause the crystal to vibrate in and out, and this will produce an alternating
electric field. The resulting electrical signal can be amplified and processed in a number of
ways. A second crystal can therefore be used to detect any returning ultrasound that has been
reflected from an obstacle.
Normally the transmitting and receiving crystals are built into the same hand-held unit,
which is called an ultrasonic transducer (in general, a transducer is any device that converts
energy from one form to another, usually to or from electrical energy).
A question may come to mind: what is the material used by doctors and placed on the skin prior
to the examination?
Ultrasound gel is a type of conductive medium that is used in ultrasound diagnostic
techniques and treatment therapies. It is placed on the patient’s skin at the beginning of the
ultrasound examination or therapy. The transducer, which is the device used to send and
receive sound waves, is then placed on top of it. Ultrasound gel is also used with a fetal
Doppler, which can be employed to allow parents and doctors to listen to the heart beat of an
unborn child.
Many doctors, hospitals, clinics, and other facilities use ultrasound technology for
diagnostic purposes. It works by passing sound waves into a person’s body. Once there, they
don’t remain for long. Instead, they bounce off the organ or other part of the body the doctors
are trying to view. The sound waves then move back through the transducer, and they are
ultimately analyzed by a computer, which allows the analyzed sound waves to be viewed on a
monitor or even printed out for doctor or patient use.
Figure 8.3: The principal functional components of an ultrasound imaging system
(Mahmood & Haider): pulse generator, transducer, amplifier (gain), processor
(contrast, intensity), display, and the reflecting interface in the patient.
We will now consider some of these functions in more detail and how they contribute to
image formation.
8.9.1 Ultrasound Transducers
In general, a transducer is a device that converts energy from one form to another, in the case
of Ultrasound transducers; the conversion is from electrical to mechanical energy (or vice
versa).
8.9.1.1 Ultrasonic Transducer Structures
Transducers for ultrasound imaging consist of one or more piezoelectric crystals or elements.
The basic properties of ultrasound transducers (resonance, frequency response, focusing, etc.)
can be illustrated in terms of single-element transducers. However, imaging is often performed
with multiple-element "arrays" of piezoelectric crystals.
A piezoelectric transducer comprises a "crystal" sandwiched between two metal plates.
When a sound wave strikes one or both of the plates, the plates vibrate. The crystal picks up
this vibration, which it translates into a weak AC voltage. Therefore, an AC voltage arises
between the two metal plates, with a waveform similar to that of the sound waves. Conversely,
if an AC signal is applied to the plates, it causes the crystal to vibrate in sync with the signal
voltage. As a result, the metal plates vibrate also, producing an acoustic disturbance.
Piezoelectric transducers are common in ultrasonic applications, such as intrusion detectors
and alarms. Piezoelectric devices are employed at AF (audio frequencies) as pickups,
microphones, earphones, beepers, and buzzers. In wireless applications, piezoelectricity makes
it possible to use crystals and ceramics as oscillators that generate predictable and stable
signals at RF (radio frequencies).
Ultrasound transducers are usually made of thin discs of an artificial ceramic perovskite
material such as PZT. The basic design of a plain transducer is shown in Figure 8.4.
Figure 8.4: Basic design of (a) a probe containing hundreds of transducer elements and (b) a
single-element transducer, showing the piezoelectric crystal, the rear and front electrodes
(which apply an alternating potential difference), the backing material (acoustic absorber),
the acoustic insulator, the acoustic matching layers, the acoustic lens, and the power cable.
The crystal is cut into a slice with a thickness equal to half a wavelength of the desired
ultrasound frequency, as this thickness ensures most of the energy is emitted at the
fundamental frequency. Generally, the thickness of the thin disc (usually 0.1–1 mm) determines
the ultrasound frequency. In most diagnostic applications, ultrasound is emitted in extremely
short pulses as a narrow beam, comparable to that of a flashlight. When not emitting a pulse
(as much as 99% of the time), the same piezoelectric crystal can act as a receiver; that is, the
transducer can act as both a transmitter and a receiver. The transducer (or probe) contains
multiple piezoelectric crystals, which are interconnected electronically and vibrate in
response to the applied voltage (see Figure 8.5). By the reverse piezoelectric effect, any
application of electricity to the crystal likewise causes it to vibrate.
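The half-wavelength rule above can be sketched numerically. This assumes a longitudinal sound speed in the crystal of about 4000 m/s, a ballpark figure for PZT that is not taken from the text:

```python
# Half-wavelength crystal thickness sketch. The ~4000 m/s longitudinal
# sound speed in the crystal is an assumed ballpark for PZT, not a
# figure from the text.
V_CRYSTAL_M_PER_S = 4000.0

def resonant_thickness_mm(f_hz: float) -> float:
    """Thickness equal to half a wavelength at the drive frequency."""
    wavelength_m = V_CRYSTAL_M_PER_S / f_hz
    return wavelength_m / 2 * 1000  # metres -> millimetres

print(f"{resonant_thickness_mm(5e6):.2f} mm")  # ~0.4 mm for a 5 MHz disc
```

The result falls within the 0.1–1 mm range quoted above, and it shows why higher-frequency transducers need thinner crystals.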
Figure 8.5: The transducer (or probe) contains multiple piezoelectric crystals.
There are three types of transducers that are most often used in critical ultrasound imaging:
linear, sector, and convex (standard or micro-convex), as shown in Figure 8.6.
large aperture. Note that any type of transducer can be phase arrayed to produce a beam of
sound that can be steered and focused by the ultrasound controller.
- piezoelectric crystal arrangement: curvilinear, along the aperture
- footprint size: large (small for the micro-convex transducers)
- operating frequency (bandwidth): 1–5 MHz (usually 3.5–5 MHz)
- ultrasound beam shape: sector; the ultrasound beam shape and size vary with distance
from the transducer, which causes a loss of lateral resolution at greater depths
- use: useful in all ultrasound types except echocardiography; typically abdominal,
pelvic and lung (micro-convex transducer) ultrasound
Medical ultrasonic transducers (probes) also come in a variety of shapes according to their
intended use, as shown in Figure 8.7. The transducer may be passed over the surface of
the body or inserted into a body opening such as the rectum or vagina. In other words, the
transducer is the component of the ultrasound system that is placed in direct contact
with the patient's body. The transducer contains one or more piezoelectric elements. It also
focuses the beam of pulses to give it a specific size and shape at various depths within the
body, and scans the beam over the anatomical area being imaged.
Figure 8.7: Ultrasound transducer types: a 3.5 MHz convex probe (abdominal transducer,
general purpose); a 7.5 MHz linear probe (small-parts transducer: muscles, tendons, skin,
thyroid, breast, scrotum); a 6.5 MHz micro-convex probe (pediatric, cardiac); and a 6.5 MHz
trans-vaginal probe (intravaginal transducer: uterus, ovaries, pregnancy).
Attention!
An ultrasound transducer is the most important and usually the most expensive element of the
ultrasound machine, so it should be handled carefully, which means the following:
- do not throw, drop or knock the transducer,
- do not allow the transducer's cable to be damaged,
- wipe the gel from the transducer after each use,
- do not clean it with alcohol-based solutions.
8.9.2 Amplification
Amplification is used to increase the size of the electrical pulses coming from the transducer
after an echo is received. The amount of amplification is determined by the gain setting. The
principal control associated with the amplifier is the time gain compensation (TGC), which
allows the user to adjust the gain in relationship to the depth of echo sites within the body.
This function will be considered in much more detail in the next section.
8.9.3 Scan Generator
The scan generator controls the scanning of the ultrasound beam over the body section being
imaged. This is usually done by controlling the sequence in which the electrical pulses are
applied to the piezoelectric elements within the transducer. This is also considered in more
detail later.
piece increases, the coupling efficiency from the transducer to the test piece is reduced. In
general, as the surface curvature increases, the size of the contact transducer should be
reduced. Extreme curvature or inaccessibility of the test surface requires a system with a delay
line or an immersion transducer.
8.9.7.3 Temperature
When heated above a certain temperature (about 350ºC for PZT), called its "Curie
Temperature", transducers lose their piezoelectric properties. Transducer probes should
obviously not be autoclaved (nor should they be immersed in water unless waterproofed).
Thin slices of naturally occurring quartz crystals also show the piezoelectric effect, and are
used in digital timers and computers.
8.9.7.4 Accuracy
It should be considered that many factors may affect accuracy: sound attenuation and
scattering, sound velocity variations, poor coupling, surface roughness, non-parallelism,
curvature, echo polarity, etc. Selection of the best possible combination of gauge and
transducer should take all these factors into account. With proper calibration, measurements
can usually be made to an accuracy of 0.001 inch or 0.01 mm.
8.10 Ultrasound Modalities
Ultrasound is diagnostically useful in medicine in two modalities, continuous energy and
pulsed energy:
Continuous sound energy uses a steady sound source, and has applications that include
fetal heart beat detectors and monitors. This Doppler ultrasound can also be used to
evaluate blood flow through different structures.
Pulsed sound energy utilizes a quick blip of sound (like a hand clap), followed by a
relatively long pause, during which time an echo has a chance to bounce off the target
and return to the transducer. Through electronic processing of the returning sounds, a
two-dimensional image can be created that provides information about the tissues and
objects within the tissues.
element) is a vibrating object in contact with the tissue, causing the tissue to vibrate. The
vibrations are passed from the area of tissue next to the transducer to nearby tissues, and this
process continues as the vibrations, or sound, pass from one area of tissue to another. The rate
at which the tissue structures vibrate back and forth is the frequency of the sound; the rate at
which the vibrations move through the tissue is the speed of sound. When the transducer is in
contact with a patient (or some other medium) and a few hundred volts DC are suddenly
applied to the disk, it instantly expands, thereby compressing the layer of material in contact
with it. Owing to the elasticity of the material, the compressed layer expands and compresses
an adjacent layer of material.
Figure 8.8: Schematic representation of ultrasound pulse generation, showing individual
pulses and the pulse repetition frequency (PRF), the number of pulses emitted per unit time
(three in the example shown).
In this way a layer, or wave, of compression travels with a velocity v through the material,
followed by a corresponding wave of decompression, or rarefaction. In imaging, such short,
regular pulses of ultrasound are used. These mechanical sound waves create alternating zones
of increased and decreased pressure as they spread through the body's tissues.
8.10.1.1 Short Pulse
For practical use, most modern ultrasound systems are designed on the principle of the
pulse-echo technique, which means that the transducer emits only a few cycles of pulses at a
time into the human body. When the pulse encounters tissue interfaces, reflection and
scattering occur and produce pulse echoes. By detecting these echoes, tissue positioning and
identification, as well as diagnosis, can be carried out.
NOTE: This is the pulse rate (pulses per second), not the frequency, which is the number
of cycles or vibrations per second within each pulse. The principal control associated with the
pulse generator is the size of the electrical pulses, which can be used to change the intensity
and energy of the ultrasound beam.
Figure 8.9: (a) An ultrasound pulse produced by the vibrating piezoelectric element, shown as
alternating zones of compression (C) and rarefaction (R) travelling along the beam at the
propagation velocity; one wavelength spans one compression–rarefaction pair. (b) The
corresponding pressure waveform with distance, oscillating between a maximum pressure
(+ΔP) and a minimum pressure (−ΔP); the pressure excursion defines the amplitude.
The frequency (f) with which compressions pass any given point is the same as the
frequency at which the transducer vibrates and the frequency of the AC voltage applied to it. It
is measured in megahertz (MHz). Using formula 8.1, it is possible to calculate the velocity,
frequency or wavelength of a wave if the other two values are known. For comparison, Figure
8.9c shows the pressure waveform of a pulsed wave a few periods in duration. Pulse duration
is the amount of time from the beginning to the end of a single pulse of ultrasound.
The sound in most diagnostic ultrasound systems is emitted in pulses rather than a
continuous stream of vibrations. At any instant, the vibrations are contained within a relatively
small volume of the material. It is this volume of vibrating material that is referred to as the
ultrasound pulse. As the vibrations are passed from one region of material to another, the
ultrasound pulse, but not the material, moves away from the source.
In soft tissue and fluid materials the direction of vibration is the same as the direction of
pulse movement away from the transducer. This is characterized as longitudinal vibration as
opposed to the transverse vibrations that occur in solid materials. As the longitudinal
vibrations pass through a region of tissue, alternating changes in pressure are produced.
During one half of the vibration cycle the tissue will be compressed with an increased
pressure. During the other half of the cycle there is a reduction in pressure and a condition of
rarefaction. Therefore, as an ultrasound pulse moves through tissue, each location is subjected
to alternating compression and rarefaction pressures.
8.11.1 Frequency
Frequency (ƒ) is the number of wavelengths that pass per unit time. It is measured as cycles
(or wavelengths) per second and the unit is hertz (Hz). It is a specific feature of the crystal
used in the ultrasound transducer. It can be varied by the operator within set limits - the higher
the frequency, the better the resolution but the lower the penetration.
Ultrasound Pulse Frequency:
- range: 2–20 MHz
- determined by the transducer
- affects absorption and penetration
- affects image detail
The frequency of ultrasound pulses must be carefully selected to provide a proper balance
between image detail and depth of penetration. In general, high frequency pulses produce
higher quality images but cannot penetrate very far into the body.
The frequency of sound is determined by the source. In diagnostic ultrasound equipment, the
source of sound is the transducer. The major element within the transducer is a crystal
designed to vibrate with the desired frequency. A special property of the crystal material is
that it is piezoelectric. This means that the crystal will deform if electricity is applied to it.
Therefore, if an electrical pulse is applied to the crystal it will have essentially the same effect
as the striking of a piano string: the crystal will vibrate. If the transducer is activated by a
single electrical pulse, the transducer will vibrate, or "ring," for a short period of time. This
creates an ultrasound pulse as opposed to a continuous ultrasound wave. The ultrasound pulse
travels into the tissue in contact with the transducer and moves away from the transducer
surface. A given transducer is often designed to vibrate with only one frequency, called its
resonant frequency. Therefore, the only way to change ultrasound frequency is to change
transducers. This is a factor that must be considered when selecting a transducer for a specific
clinical procedure. Certain frequencies are more appropriate for certain types of examinations
than others. Some transducers are capable of producing different frequencies. For these the
ultrasound frequency is determined by the electrical pulses applied to the transducer.
8.11.2 Velocity
Propagation Velocity (v) is the speed that sound waves propagate through a medium and
depends on tissue density and compressibility. The relationship between these variables is
expressed by the Wave Equation (Eq. 8.1). In soft tissue propagation velocity is relatively
constant at 1540 m/s, and this is the value assumed by ultrasound machines for all human
tissue. Hence wavelength is inversely proportional to frequency.
Factors Related to Ultrasound Pulse Velocity:
- Determined by the material
- Affects the depth dimension in the image
- Average for soft tissue: 1540 m/s, for air: 330 m/s
The importance of the speed of ultrasound is that it is used to locate the depth of structures
in the body. The speed with which sound travels through a medium is determined by the
characteristics of the medium, not by the characteristics of the sound. The velocity of
longitudinal, or compression, sound waves, in which the particles of the medium vibrate in the
direction of wave propagation and which can propagate in liquids such as tissue and in gases,
is given by:

velocity = √(stiffness / density)

where "stiffness" is a factor related to the elastic properties of the medium, or the bulk
modulus. The velocities of sound through several materials of interest are given in Table
8.1.
Most ultrasound systems are set up to determine distances using an assumed velocity of 1540
m/sec. This means that displayed depths will not be completely accurate in materials that
produce other ultrasound velocities such as fat and fluid.
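How the assumed 1540 m/s velocity locates depth can be sketched as follows. The echo's round-trip travel time is halved because the pulse travels to the interface and back; the 130 µs timing is a hypothetical example value:

```python
# Depth calculation from echo arrival time, using the 1540 m/s
# soft-tissue velocity that ultrasound systems assume. The 130
# microsecond round-trip time is a hypothetical example value.
V_SOFT_TISSUE_M_PER_S = 1540.0

def echo_depth_cm(round_trip_time_s: float) -> float:
    """Depth of a reflecting interface; the factor of 2 accounts for
    the pulse travelling to the interface and back."""
    return V_SOFT_TISSUE_M_PER_S * round_trip_time_s / 2 * 100  # m -> cm

print(f"{echo_depth_cm(130e-6):.1f} cm")  # ~10 cm
```

If the pulse actually passed through fat (about 1450 m/s), the true depth would be slightly shallower than the machine displays, which is the point made above.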
Table 8.1: Approximate Velocity of Sound in Various Materials
Material Velocity (m/sec)
Fat 1450
Water 1480
Soft tissue (average) 1540
Bone 4100
8.11.3 Wavelength
Wavelength (λ) is the distance sound travels during the period of one vibration, which is the
distance between two areas of maximal compression (or rarefaction) (see Figure 8.10). The
importance of wavelength is that the penetration of the ultrasound wave is proportional to
wavelength, and image resolution is no finer than 1–2 wavelengths. Wavelength is typically
measured between two easily identifiable points, such as two adjacent crests or troughs in a
waveform. Wavelength is inversely proportional to frequency. That means that if two waves
are traveling at the same speed, the wave with the higher frequency will have the shorter
wavelength. Likewise, if one wave has a longer wavelength than another and both are
traveling at the same speed, it will also have a lower frequency.
Although wavelength is not a unique property of a given ultrasound pulse, it is of some
significance because it determines the size (length) of the ultrasound pulse. This has an effect
on image quality, as we will see later.
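As a quick sketch of the inverse relationship, using the 1540 m/s soft-tissue velocity given earlier:

```python
# Wavelength = velocity / frequency, using the 1540 m/s average
# soft-tissue velocity given in this chapter.
V_M_PER_S = 1540.0

def wavelength_mm(f_hz: float) -> float:
    return V_M_PER_S / f_hz * 1000  # metres -> millimetres

# Higher frequency -> shorter wavelength (finer detail, less penetration):
for f_hz in (2e6, 5e6, 10e6):
    print(f"{f_hz / 1e6:.0f} MHz -> {wavelength_mm(f_hz):.2f} mm")
```

At 5 MHz the wavelength is about 0.3 mm, so with resolution limited to 1–2 wavelengths, structures smaller than roughly half a millimetre cannot be distinguished.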
Figure 8.10: Dependence of pulse length on wavelength and frequency (wavelength ≈
1/frequency): a high-frequency wave has a shorter wavelength than a low-frequency wave.
Figure 8.11 shows both temporal and spatial (length) characteristics related to the
wavelength. A typical ultrasound pulse consists of several wavelengths or vibration cycles.
The number of cycles within a pulse is determined by the damping characteristics of the
transducer. Damping is what keeps the transducer element from continuing to vibrate and
produce a long pulse.
The period is the time required for one vibration cycle. It is the reciprocal of the frequency.
Increasing the frequency decreases the period. In other words, wavelength is simply the ratio
of velocity to frequency or the product of velocity and the period. This means that the
wavelength of ultrasound is determined by the characteristics of both the transducer
(frequency) and the material through which the sound is passing (velocity).
The amplitude of an ultrasound pulse is the range of pressure excursions, as shown in Figure
8.9. The pressure is related to the degree of tissue displacement caused by the vibration. The
amplitude is related to the energy content, or "loudness," of the ultrasound pulse. The
amplitude of the pulse as it leaves the transducer is generally determined by how hard the
crystal is "struck" by the electrical pulse.
Figure 8.11: Temporal characteristics of an ultrasound pulse (the period of one cycle and the
pulse duration) and, through the velocity, the corresponding spatial characteristics (the
wavelength and the pulse length), with the compression and rarefaction pressure excursions
defining the amplitude.
8.11.4 Amplitude
Amplitude is the height above the baseline and represents maximal compression. It is
expressed in decibels, a logarithmic scale (see Figure 8.12). Most systems have a
control on the pulse generator that changes the size of the electrical pulse and the ultrasound
pulse amplitude. We designate this as the intensity control, although different names are used
by various equipment manufacturers.
Figure 8.12: Ultrasound pulse amplitude, intensity, and energy, comparing two pulses of
amplitudes A1 and A2.
The relative pulse amplitude, in decibels, is related to the actual amplitude ratio by:

Relative Amplitude = 20 log A2/A1
When the amplitude ratio is greater than 1 (comparing a large pulse to a smaller one), the
relative pulse amplitude has a positive decibel value; when the ratio is less than 1, the decibel
value is negative. In other words, if the amplitude of a pulse is increased by some means, it
will gain decibels, and if it is reduced, it will lose decibels.
Figure 8.13 compares decibel values to pulse amplitude ratios and percent
values. The first two pulses differ in amplitude by 1 dB. Comparing the second pulse to the
first, this corresponds to an amplitude ratio of 0.89, or a reduction of approximately 11%. If
the pulse is reduced in amplitude by another 11%, it will be 2 dB smaller than the original
pulse. If the pulse is once again reduced in amplitude by 11% (of the 79%), it will have an
amplitude ratio (with respect to the first pulse) of 0.71:1, that is, it will be 3 dB smaller.
Perhaps the best way to establish a "feel" for the relationship between pulse amplitude
expressed in decibels and in percentage is to notice that amplitudes that differ by a factor of 2
differ by 6 dB.

Figure 8.13: Pulse amplitudes expressed in decibels and percentages (1 dB ≈ 89%,
2 dB ≈ 79%, 3 dB ≈ 71%, 6 dB = 50%, 12 dB = 25%, 18 dB = 12.5%, 24 dB ≈ 6%).
During its lifetime, an ultrasound pulse undergoes many reductions in amplitude as it passes
through tissue because of absorption. If the amount of each reduction is known in decibels, the
total reduction can be found by simply adding all of the decibel losses. This is much easier
than multiplying the various amplitude ratios.
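The decibel bookkeeping described above can be sketched as follows; the factor of 20 for amplitudes is the one given in the text, and the 0.89 ratio is the 1 dB step from Figure 8.13:

```python
import math

# Relative pulse amplitude in decibels: 20 * log10(amplitude ratio),
# matching the factor of 20 used for amplitudes in this chapter.
def relative_amplitude_db(ratio: float) -> float:
    return 20 * math.log10(ratio)

# One 11% reduction (ratio 0.89) is about -1 dB; three in a row give
# a ratio of about 0.71 and about -3 dB.
print(round(relative_amplitude_db(0.89), 2))       # -1.01
print(round(relative_amplitude_db(0.89 ** 3), 2))  # -3.04

# Summing the dB losses equals taking the dB value of the multiplied ratios:
total_db = 3 * relative_amplitude_db(0.89)
assert abs(total_db - relative_amplitude_db(0.89 ** 3)) < 1e-9
```

The final assertion is exactly the point of the paragraph above: addition of decibel losses replaces multiplication of amplitude ratios.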
8.12 Intensity and Power
Acoustic power is the amount of acoustic energy generated per unit time. Energy is measured
in joules (J); power, the rate of energy transfer, is measured in watts (W), with 1 W = 1 J/s.
The biological effects of ultrasound occur at powers in the milliwatt range. Intensity is the
rate at which power passes through a specified area: it is the amount of power per unit area
and is expressed in units of watts per square centimeter.
Intensity is the rate at which ultrasound energy is applied to a specific tissue location within
the patient's body. It is the quantity that must be considered with respect to producing
biological effects and safety. The intensity of most diagnostic ultrasound beams at the
transducer surface is on the order of a few milliwatts per square centimeter.
Intensity is the power density, or concentration of power within an area, expressed as W/m2
or mW/cm2. Intensity varies spatially within the beam and is greatest in the centre. In a pulsed
beam it varies temporally as well as spatially. The intensity is therefore related to the pressure
amplitude of the individual pulses and to the pulse rate. Since the pulse rate is fixed in most
systems, the intensity is determined by the pulse amplitude.
The relative intensity of two pulses (I1 and I2) can be expressed in the units of decibels by:
Relative Intensity = 10 log I2/I1
Note that when intensities are being considered, a factor of 10 appears in the equation rather
than a factor of 20, which is used for relative amplitudes. This is because intensity is
proportional to the square of the pressure amplitude, which introduces a factor of 2 in the
logarithmic relationship.
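A short sketch of why the factor of 10 for intensity and the factor of 20 for amplitude agree, with intensity taken as proportional to amplitude squared, as stated above:

```python
import math

# Intensity is proportional to the square of the pressure amplitude,
# so 10 * log10 of an intensity ratio equals 20 * log10 of the
# corresponding amplitude ratio.
def relative_intensity_db(i2: float, i1: float) -> float:
    return 10 * math.log10(i2 / i1)

def relative_amplitude_db(a2: float, a1: float) -> float:
    return 20 * math.log10(a2 / a1)

a1, a2 = 1.0, 0.5          # amplitude halved
i1, i2 = a1 ** 2, a2 ** 2  # intensity falls to one quarter

print(round(relative_amplitude_db(a2, a1), 2))  # -6.02
print(round(relative_intensity_db(i2, i1), 2))  # -6.02, identical
```

Halving the amplitude quarters the intensity, yet both formulas report the same −6 dB, which is why the two conventions can be mixed freely.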
The intensity of an ultrasound beam is not constant with respect to time nor uniform with
respect to spatial area, as shown in the following figure. This must be taken into consideration
when describing intensity. It must be determined if it is the peak intensity or the average
intensity that is being considered.
8.12.1 Temporal Characteristics
Figure 8.14 shows two sequential pulses. Two important time intervals are the pulse
duration and the pulse repetition period.
Figure 8.14: Two sequential pulses, showing the pulse duration, the pulse repetition period,
and the peak and average intensities.
The ratio of the pulse duration to the pulse repetition period is the duty factor. The duty factor
is the fraction of time that an ultrasound pulse is actually being produced. If the ultrasound is
produced as a continuous wave (CW), the duty factor will have a value of 1. Intensity and
power are proportional to the duty factor. Duty factors are relatively small, less than 0.01, for
most pulsed imaging applications.
With respect to time there are three possible power (intensity) values. One is the peak
power, which is associated with the time of maximum pressure. Another is the average power
within a pulse. The lowest value is the average power over the pulse repetition period for an
extended time. This is related to the duty factor.
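The duty-factor arithmetic above can be sketched as follows; the pulse duration and repetition period are hypothetical but imaging-range values:

```python
# Duty factor: fraction of time an ultrasound pulse is actually being
# produced. The pulse duration and repetition period below are
# hypothetical but typical imaging-range values.
pulse_duration_s = 1e-6             # assumed: 1 microsecond pulse
pulse_repetition_period_s = 200e-6  # assumed: one pulse every 200 microseconds

duty_factor = pulse_duration_s / pulse_repetition_period_s
print(duty_factor)  # well under the 0.01 typical of pulsed imaging

# The long-term average power is the within-pulse average scaled by
# the duty factor (continuous-wave operation would have duty factor 1):
avg_power_within_pulse_w = 1.0  # assumed, watts
long_term_avg_power_w = avg_power_within_pulse_w * duty_factor
```

This is why the time-averaged power delivered to tissue can be in the milliwatt range even when the power within each pulse is much higher.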
8.13.1 Attenuation
Attenuation means that the ultrasound wave continuously loses energy (decreasing
intensity) as it is transmitted through the medium. It is the result of absorption of ultrasound
energy by conversion to heat, as well as of the reflection, refraction and scattering that occur
at the boundaries between tissues of different densities. The rate at which an ultrasound
wave is absorbed generally depends on two factors:
(1) the material through which it is passing, and
(2) the frequency of the ultrasound.
This means that attenuation increases (and hence penetration of the beam is reduced) with:
Increased distance from the transducer
A less homogeneous medium to traverse, owing to increased acoustic impedance mismatch
Higher-frequency (shorter-wavelength) transducers
Air forms a virtually impenetrable barrier to ultrasound, while fluid offers the least resistance.
The attenuation (absorption) rate is described in terms of the attenuation coefficient of the tissue. Because it relates attenuation to distance, it is measured in decibels per centimeter, and it depends on the tissues traversed and on the frequency of the ultrasound wave. Since attenuation in tissue increases with frequency, it is necessary to specify the frequency whenever an attenuation rate is given. Through a thickness of material, x, the attenuation is given by:
Attenuation (dB) = (α)(f)(x)
where α is the attenuation coefficient (in decibels per centimeter at 1 MHz), f is the ultrasound frequency in megahertz, and x is the thickness in centimeters.
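As a worked example of this formula, the coefficient below is the typical soft-tissue value of about 1 dB/cm/MHz discussed in the text; the frequency and depth are illustrative choices:

```python
# Attenuation (dB) = alpha * f * x
alpha = 1.0   # dB per cm per MHz (typical soft tissue)
f = 3.5       # ultrasound frequency, MHz (illustrative)
x = 10.0      # tissue thickness traversed, cm (illustrative)

attenuation_db = alpha * f * x
print(f"attenuation = {attenuation_db:.1f} dB")   # 35.0 dB

# Intensity falls by a factor of 10 for every 10 dB:
remaining_fraction = 10.0 ** (-attenuation_db / 10.0)
print(f"remaining intensity fraction = {remaining_fraction:.2e}")  # 3.16e-04
```

Only about three parts in ten thousand of the original intensity survive a 10 cm path at 3.5 MHz, which is why returning echoes need substantial amplification (gain).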
From the attenuation coefficient values in table 8.2, it is clear that there is considerable variation in the rate of attenuation from material to material. Water produces far less attenuation than the other materials listed; water is therefore a very good conductor of ultrasound, and attenuation is low in fluid-filled structures such as cysts and the bladder. Most of the soft tissues in the body have attenuation coefficient values of about 1 dB per cm per MHz, with the exception of fat and muscle, which have higher attenuation rates. Muscle has a range of values that depend on the direction of the ultrasound with respect to the muscle fibers. Lung has a much higher attenuation than either air or soft tissue, because the small pockets of air in the alveoli are very effective at scattering ultrasound energy. For this reason, normal lung structure is very difficult to penetrate with ultrasound. Compared with the soft tissues of the body, the attenuation rate in bone is also high. Higher-frequency waves are subject to greater attenuation than lower-frequency ones. To compensate for attenuation, returning signals can be amplified by the ultrasound system; this amplification is known as gain.
8.13.2 Refraction
Refraction occurs where an ultrasound beam crosses an interface between two tissues at an oblique angle; the refracted wave obeys Snell's law. The angle of refraction depends on two things: the angle at which the sound wave strikes the boundary between the two tissues, and the difference in their propagation velocities. In figure 8.15, for example, if the propagation velocity of ultrasound is higher in the first medium (v1 > v2), the beam entering the second medium is refracted at a less oblique (steeper) angle, towards the normal (A). If the velocity of ultrasound is higher in the second medium (v1 < v2), refraction occurs away from the original beam direction (B). As the beam emerges from medium 2 and re-enters medium 1, it resumes its original direction of travel. This behavior of ultrasound transmitted obliquely across an interface is termed refraction. The presence of medium 2 simply displaces the ultrasound beam laterally, by a distance that depends on the difference in ultrasound velocity and density between the two media and on the thickness of medium 2. Suppose a small structure below medium 2 is visualized by reflected ultrasound. Its position would appear to the viewer along an extension of the original direction of the ultrasound through medium 1. Because the sound is not reflected directly back to the transducer, refraction adds spatial distortion: the depicted image may be unclear or displaced, "confusing" the ultrasound system, which assumes that sound travels in straight lines.
Refraction can also be exploited to improve image quality: acoustic lenses use it to focus the ultrasound beam and improve resolution.
Figure 8.15: Reflection and refraction at an interface. The angle of incidence (θi) equals the angle of reflection (θr). The transmitted beam is refracted towards the normal when v1 > v2 (A) and away from it when v1 < v2 (B), resuming its original direction of motion on re-entering medium 1.
Refraction requires the following conditions:
Large surface: refraction only occurs at a surface that is large compared with the wavelength.
Velocity mismatch: the acoustic media on the two sides of the surface must have different sound velocities.
Dependence on angle: the refracted wave obeys Snell's law, and the laws of reflection and refraction both hold.
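Snell's law for this situation can be sketched as follows. The sound velocities used are representative textbook values for soft tissue, fat and bone, included here only as an illustration:

```python
import math

def refraction_angle(theta_i_deg, v1, v2):
    """Snell's law for sound: sin(theta_t) / sin(theta_i) = v2 / v1.
    Returns the refraction angle in degrees, or None beyond the
    critical angle (total reflection)."""
    s = math.sin(math.radians(theta_i_deg)) * v2 / v1
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Soft tissue (v1 = 1540 m/s) into fat (v2 = 1450 m/s): v1 > v2,
# so the beam bends toward the normal (the angle decreases).
print(refraction_angle(30.0, 1540.0, 1450.0))

# Soft tissue into bone (v2 ~ 4080 m/s): v1 < v2, so the beam
# bends away from the normal (the angle increases).
print(refraction_angle(10.0, 1540.0, 4080.0))
```

The `None` branch corresponds to total reflection beyond the critical angle, the extreme case of the impedance-mismatch behavior described later for bone and air interfaces.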
8.13.3 Reflection
Ultrasound imaging is based on the "pulse-echo" principle: a pulse emitted from a transducer is directed into tissue. When a sound wave is incident on an interface between two tissues, part is reflected from the boundary and part is transmitted (figure 8.16). According to the law of reflection, the angle of reflection of a reflected wave is equal to its angle of incidence.
Medical ultrasound imaging relies entirely on the fact that biological tissues scatter or reflect incident sound. Scattering refers to the interaction between sound waves and particles much smaller than the sound's wavelength λ, while reflection refers to such interactions with particles or objects larger than λ.
Figure 8.16: At an interface, part of the incident wave is reflected and part is transmitted.
Figure 8.17: Specular reflection from a smooth surface and diffuse reflection from an irregular surface.
Reflection can be categorized as either specular or diffuse. Specular reflectors are large,
smooth surfaces, such as bone (see figure 8.17), where the sound wave is reflected back in a
singular direction. The large smooth surface of the bone causes a uniform reflection because
of the significant difference in the acoustic impedance between it and the adjoining soft tissue.
The greater the acoustic impedance difference between the two tissues, the greater the reflection and the brighter the echo will appear on ultrasound.
Conversely, soft tissue is classified as a diffuse reflector, where adjoining cells create an
uneven surface causing the reflections to return in various directions in relation to the
transmitted beam. This means that the incident sound is spread out over a range of angles. As
shown in figure 8.18, the different acoustic impedances of the structures located within the
muscle result in the various shades of grey seen on the B-Mode image. However, because of
the numerous surfaces, sound is able to get back to the transducer in a relatively uniform
manner.
The difference is that in reflection, the reflected beam of sound is directed at an angle equal and opposite to the angle of incidence, whereas scattering randomises the direction of the sound that emerges from the scattering process.
Figure 8.18: The pectoralis major muscle (PM), located between the white arrows, is an example of diffuse reflection.
8.13.4 Scattering
The scattering or reflections of acoustic waves arise from inhomogeneities in the medium's
density and/or compressibility. Sound is primarily scattered or reflected by a discontinuity in
the medium's mechanical properties, to a degree proportional to the discontinuity. (By
contrast, continuous changes in a medium's material properties cause the direction of
propagation to change gradually.) The elasticity and density of a material are related to its
sound speed, and thus sound is scattered or reflected most strongly by significant
discontinuities in the density and/or sound speed of the medium.
Rayleigh scattering occurs at interfaces involving structures of small dimensions, as shown in figures 8.19 and 8.20. This is common with red blood cells (RBCs): the average diameter of an RBC is 7 μm, whereas the ultrasound wavelength may be 300 μm (at 5 MHz). When the sound's wavelength is much greater than the structure it comes in contact with,
it creates uniform amplitude in all directions with little or no reflection returning to the
transducer.
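The size comparison quoted above follows directly from λ = c/f; the speed of sound in soft tissue is taken as the usual 1540 m/s:

```python
# Wavelength in soft tissue: lambda = c / f.
c = 1540.0             # m/s, speed of sound in soft tissue
f = 5.0e6              # Hz (a 5 MHz transducer)
rbc_diameter_um = 7.0  # average red blood cell diameter, micrometres

wavelength_um = c / f * 1e6
print(f"wavelength = {wavelength_um:.0f} um")  # 308 um, close to the 300 um quoted
print(f"wavelength / RBC diameter = {wavelength_um / rbc_diameter_um:.0f}")  # 44
```

With the wavelength some forty times larger than the scatterer, the interaction is firmly in the Rayleigh regime.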
Figure 8.19: Scattering of the incident wave in all directions by a structure much smaller than the wavelength.
Figure 8.20: Image of the left saphenous vein (SV), common femoral vein (CFV), superficial femoral artery (SFA) and profunda femoris artery (PFA); Rayleigh scattering is present within each of the blood vessels.
The amount of energy reflected depends on the difference in acoustic impedance between the two tissues.
8.13.5 Absorption
Absorption is the main form of attenuation. As sound travels through soft tissue, the particles that transmit the wave vibrate, causing friction; sound energy is lost and heat is produced. Sound intensity in soft tissue therefore decreases exponentially with depth (see figure 8.21).
Figure 8.21: The reduction of pulse amplitude by absorption of its energy.
The ratio of the intensity of the reflected wave (Ir) to that of the incident wave (Ii) at the boundary is called the reflection coefficient (R). For an ultrasound wave incident perpendicularly upon an interface, the fraction R of the incident energy that is reflected is
R = [(Z2 − Z1) / (Z2 + Z1)]²
where Z1 and Z2 are the acoustic impedances of the first and second medium, respectively. The fraction of the incident energy that is transmitted across the interface is described by the transmission coefficient T:
T = 4 Z1 Z2 / (Z1 + Z2)²
Obviously T + R = 1.
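A minimal sketch of these two coefficients, using the air and water impedances from table 8.3:

```python
def reflection_coefficient(z1, z2):
    """Fraction of incident intensity reflected at normal incidence:
    R = ((Z2 - Z1) / (Z2 + Z1))^2."""
    return ((z2 - z1) / (z2 + z1)) ** 2

def transmission_coefficient(z1, z2):
    """Fraction of incident intensity transmitted:
    T = 4*Z1*Z2 / (Z1 + Z2)^2."""
    return 4.0 * z1 * z2 / (z1 + z2) ** 2

z_water = 1.43e6  # kg m^-2 s^-1 (table 8.3)
z_air = 429.0     # kg m^-2 s^-1 (table 8.3)

R = reflection_coefficient(z_water, z_air)
T = transmission_coefficient(z_water, z_air)
print(f"R = {R:.4f}, T = {T:.4f}")  # nearly total reflection at a water-air interface
assert abs(R + T - 1.0) < 1e-12     # T + R = 1, as stated
```

Over 99.8% of the energy is reflected at a water-air interface, which anticipates the later discussion of why coupling gel is needed between transducer and skin.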
The body consists of a range of materials, such as the air in the lungs and intestinal gas, water, blood, muscle, fat and bone. Each component of the body has a characteristic impedance, which depends on the nature of the matter within it. Gases have a very low density, and therefore a very low acoustic impedance, as shown in table 8.3.
A large difference between the acoustic impedances of the materials on each side of a boundary is referred to as an acoustic impedance mismatch. The greater the impedance mismatch, the greater the amount of energy reflected at the interface between the two media, and the smaller the amount transmitted.
Table 8.3: Characteristic acoustic impedance of air and water.
Substance Characteristic acoustic impedance
Air 429 kg m-2 s-1
Water 1.43×10^6 kg m-2 s-1
The contrast in an ultrasound image is generated by acoustic reflections resulting from changes in acoustic impedance (the product of the speed of sound and the density).
The brightness of a structure in an ultrasound image depends on the strength of the reflection, or echo. This in turn depends on how much the two materials differ in acoustic impedance. The amplitude ratio of the reflected to the incident pulse is related to the tissue impedance values by
r = (Z2 − Z1) / (Z2 + Z1)
Even though the reflected energy is small, it is often sufficient to reveal the liver border.
Because of the high value of the coefficient of ultrasound reflection (R) at an air–tissue
interface, water paths and various creams and gels are used during ultrasound examinations to
remove air pockets (i.e., to obtain good acoustic coupling) between the ultrasound transducer
and the patient’s skin. With adequate acoustic coupling, the ultrasound waves will enter the
patient with little reflection at the skin surface.
Similarly, strong reflection of ultrasound occurs at the boundary between the chest wall and
the lungs and at the millions of air–tissue interfaces within the lungs. Because of the large
impedance mismatch at these interfaces, efforts to use ultrasound as a diagnostic tool for the
lungs have been unrewarding. The impedance mismatch is also high between soft tissues and
bone, and the use of ultrasound to identify tissue characteristics in regions behind bone has
had limited success (see figure 8.22).
Figure 8.22: Reflection at a boundary between tissues; the ultrasonic probe transmits the beam into the body and receives the reflected signal.
This is the basis of ultrasound imaging: different organs in the body have different densities and acoustic impedances, and these create different reflectors. In some cases the acoustic impedance mismatch can be so great that nearly all of the sound wave's energy is reflected; this happens when sound comes in contact with bone or air. This is why ultrasound is not used as a primary imaging modality for bone, the digestive tract or the lungs.
Basic ultrasound imaging uses only the amplitude information in the reflected signal. One pulse is emitted; the reflected signal, however, is sampled more or less continuously (in practice, multiple times). As the velocity of sound in tissue is fairly constant, the time between the emission of a pulse and the reception of a reflected signal depends on the distance, i.e. the depth, of the reflecting structure. The reflected pulses are thus sampled at multiple time intervals (multiple range gating), corresponding to multiple depths, and displayed in the image as depth. Different structures will reflect different amounts of the
emitted energy, and thus the reflected signal from different depths will have different
amplitudes. At most soft tissue interfaces, only a small fraction of the pulse is reflected.
Therefore, the reflection process produces relatively weak echoes. At interfaces between soft
tissue and materials such as bone, stones, and gas, strong reflections are produced. The
reduction in pulse amplitude during reflection at several different interfaces is given in the
table 8.4.
Table 8.4: Pulse amplitude loss produced by a reflection.
Interface Amplitude Loss (dB)
Ideal reflector 0.0
Tissue-air -0.01
Bone-soft tissue -3.8
Fat-Muscle -20.0
Tissue-water -26.0
Muscle-blood -30.0
The amplitude of a pulse is attenuated both by absorption and reflection losses. Because of
this, an echo returning to the transducer is much smaller than the original pulse produced by
the transducer.
The discussion of ultrasound reflection above assumes that the ultrasound beam strikes the
reflecting interface at a right angle. In the body, ultrasound impinges upon interfaces at all
angles. For any angle of incidence, the angle at which the reflected ultrasound energy leaves
the interface equals the angle of incidence of the ultrasound beam; that is,
Angle of incidence = Angle of reflection
In a typical medical examination that uses reflected ultrasound and a transducer that both
transmits and detects ultrasound, very little reflected energy will be detected if the ultrasound
strikes the interface at an angle more than about 3 degrees from perpendicular. A smooth
reflecting interface must be essentially perpendicular to the ultrasound beam to permit
visualization of the interface.
8.15 Ultrasound Contrast Agents
Ultrasound contrast agents rely on the different ways in which sound waves are reflected from
interfaces between substances. This may be the surface of a small air bubble or a more
complex structure. Commercially available contrast media are gas-filled microbubbles that are
administered intravenously to the systemic circulation. Microbubbles have a high degree of
echogenicity, which is the ability of an object to reflect the ultrasound waves.
The echogenicity difference between the gas in the microbubbles and the soft-tissue surroundings of the body is immense. Ultrasonic imaging using microbubble contrast agents thus enhances the ultrasound backscatter, or reflection of the ultrasound waves (see figure 8.23), producing a sonogram with increased contrast due to the high echogenicity difference. Contrast-enhanced ultrasound can be used to image blood perfusion in organs,
measure blood flow rate in the heart and other organs, and has other applications as well. (In
physiology, perfusion is the process of a body delivering blood to a capillary bed in its
biological tissue. The word is derived from the French verb "perfuser" meaning to "pour over
or through.")
Figure: Outside the focal zone, two adjacent structures would be seen as one; within the focal zone, where the beam is narrowest, they would be seen as two separate structures.
observe the change in pulse diameter as it moves along the beam and shows how it can be
controlled.
The diameter of the pulse is determined by the characteristics of the transducer. At the
transducer surface, the diameter of the pulse is the same as the diameter of the vibrating
crystal. As the pulse moves through the body, the diameter generally changes. This is
determined by the focusing characteristics of the transducer.
8.17.1 Ultrasound Field
An important distinction is made between the near field (called the Fresnel zone) and the far field (called the Fraunhofer zone), as shown in figure 8.26. The near field lies between the transducer and the focus; initially the beam has a diameter comparable to that of the transducer, since the series of ultrasound waves that make up the beam travel parallel to each other.
Figure 8.26: Unfocused and focused ultrasound beams, showing the near field (Fresnel zone), the focal zone and the far field (Fraunhofer zone); the distance from the transducer to the focal zone is the focal depth.
The far field is the divergent region beyond the focus: at some point distal to the transducer the beam begins to diverge, which reduces the ability to distinguish two objects close together (resolution). The border of the beam is not sharply defined, as the energy decreases away from the beam axis.
The focal zone is the region of the ultrasound beam with the smallest beam diameter, and it is where the user obtains the best side-to-side (lateral) resolution.
The best detail will be obtained for structures within the focal zone. The distance between the
transducer and the focal zone is the focal depth.
The focus zone is the narrowest section of the beam, defined as the section with a diameter
no more than twice the transverse diameter of the beam at the actual focus. If attenuation is
ignored, the focus is also the area of highest intensity. The length of the near field, the position
183
CHPTER 8 IMAGING WITH SOUND
of the focus and the divergence of the far field depend on the frequency and the diameter (or
aperture) of the active surface of the transducer. In the case of a plane circular transducer of
radius R, the near field length (L) is given by the expression:
L = R² / λ
where λ is the wavelength. The divergence angle (θ) of the ultrasound beam in the far field is given by the expression:
sin θ = 0.61 λ / R
The diameter of the beam in the near field corresponds roughly to the radius of the transducer.
A small aperture and a large wavelength (low frequency) lead to a short near field and greater
divergence of the far field, while a larger aperture or higher frequency gives a longer near field
but less divergence. The focal distance, L as well as the diameter of the beam at the focal point
can be modified by additional focusing.
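These expressions can be evaluated numerically. The transducer radius and frequency below are illustrative choices, not values from the text:

```python
import math

c = 1540.0   # m/s, speed of sound in soft tissue
f = 3.5e6    # Hz, transducer frequency (illustrative)
R = 6.5e-3   # m, radius of a plane circular transducer (illustrative)

wavelength = c / f                  # 0.44 mm
L = R ** 2 / wavelength             # near field length, L = R^2 / lambda
theta = math.degrees(math.asin(0.61 * wavelength / R))  # far-field divergence

print(f"near field length = {L * 100:.1f} cm")     # 9.6 cm
print(f"divergence angle  = {theta:.1f} degrees")  # 2.4 degrees
```

Halving the radius would quarter the near field length and roughly double the divergence, which is the trade-off described in the surrounding text.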
This additional focusing is accomplished either with a concave transducer or an acoustic lens (static focus), or by electronic means: delaying parts of the signal for the different crystals in an array system enables variable focusing of the composite ultrasound beam, adapted to different depths during reception (dynamic focusing).
8.18 Transducer Focusing
Transducers can be designed to produce either a focused or a non-focused beam. A focused beam is desirable for most imaging applications because it produces pulses with a small diameter, which in turn gives better visibility of detail in the image. Therefore, focused transducers are described in detail below.
It is possible to focus the ultrasound beam so that it converges and narrows, which improves (lateral) resolution. Focusing can be achieved either mechanically or by electronic means in a phased-array element (see figure 8.27).
If the transducer face is concave or a concave acoustic lens is placed on the surface of the
transducer, the beam can be narrowed at a predetermined distance from the transducer. The
point at which the beam is at its narrowest is the focal point or focal zone and is the point of
greatest intensity and best lateral resolution.
8.18.1 Dynamic Receive Focus
The focusing of an array transducer can also be changed electronically when it is in the echo
receiving mode. This is achieved by processing the electrical pulses from the individual
transducer elements through different time delays before they are combined to form a
composite electrical pulse. The effect of this is to give the transducer a high sensitivity for
echoes coming from a specific depth along the central axis of the beam. This produces a
focusing effect for the returning echoes.
An important factor is that the receiving focal depth can be changed rapidly. Since echoes
at different depths do not arrive at the transducer at the same time, the focusing can be swept
down through the depth range to pick up the echoes as they occur. This is the major distinction
between dynamic or sweep focusing during the receive mode and adjustable transmit focus.
Any one transmitted pulse can only be focused to one specific depth. However, during the
receive mode, the focus can be swept through a range of depths to pick up the multiple echoes
produced by one transmit pulse.
8.18.2 Ultrasonic Phased Arrays
Ultrasonic phased arrays are widely used in medicine, and most present imaging devices use the 1D or 2D phased-array technique. To steer the ultrasonic beam throughout a volume, two-dimensional (2D) arrays are used. The conventional phased-array transducer is a multi-element piezoelectric device whose elements are electrically isolated from each other and arranged in a row. It transmits a sound pulse into the body, and the echoes that return from scattering structures are received by the array elements at delay times phased with respect to the transmit initiation time, hence the term phased array. Time delays applied to the individual elements steer and focus the transmitted beam. This is done electronically, steering and focusing each of a series of acoustic pulses through the plane or volume to be imaged in the body. This process of steering and focusing the acoustic pulses is known as beamforming, and is shown schematically in figure 8.28.
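The delay arithmetic can be sketched for a small linear array focused on its own axis; the element pitch, element count and focal depth here are assumptions chosen only for the example:

```python
import math

c = 1540.0           # m/s, speed of sound in tissue
pitch = 0.3e-3       # m, element spacing (assumed)
n = 8                # number of elements (assumed)
focus_depth = 40e-3  # m, desired focal depth on the array axis (assumed)

# Element positions across the array face, centred on the axis.
xs = [(i - (n - 1) / 2.0) * pitch for i in range(n)]

# Path from each element to the focus.  Outer elements are farther
# from the focus, so they must fire FIRST; the per-element delay
# makes all pulses arrive at the focus at the same instant.
paths = [math.hypot(x, focus_depth) for x in xs]
longest = max(paths)
delays_ns = [(longest - p) / c * 1e9 for p in paths]

for x, d in zip(xs, delays_ns):
    print(f"element at {x * 1e3:+.2f} mm: fire delay {d:5.1f} ns")
```

The resulting pattern (zero delay at the outermost elements, maximum at the centre) matches the pulsing sequence described later for adjustable transmit focus.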
The form and especially the diameter of the beam strongly influence the lateral resolution
and thus the quality of the ultrasound image. The focus zone is the zone of best resolution and
should always be positioned to coincide with the region of interest. This is another reason for
using different transducers to examine different regions of the body; for example, transducers
with higher frequencies and mechanical focusing should be used for short distances (small-
part scanner). Most modern transducers have electronic focusing to allow adaptation of the aperture to specific requirements (dynamic focusing, figure 8.28).
Figure 8.28: A conceptual diagram of phased array beam forming. (Top)
Appropriately delayed pulses are transmitted from an array of
piezoelectric elements to achieve steering and focusing at the point of
interest. (For simplicity, only focusing delays are shown here.) (Bottom)
The echoes returning are likewise delayed before they are summed
together to form a strong echo signal from the region of interest.
Recall that the wavelength is inversely related to frequency. Therefore, for a given transducer
size, the length of the near field is proportional to frequency. Another characteristic of the near
field is that the intensity along the beam axis is not constant; it oscillates between maximum
and zero several times between the transducer surface and the boundary between the near and
far field. This is because of the interference patterns created by the sound waves from the
transducer surface. An intensity of zero at a point along the axis simply means that the sound
186
CHPTER 8 IMAGING WITH SOUND
vibrations are concentrated around the periphery of the beam. A picture of the ultrasound
pulse in that region would look more like concentric rings or "donuts" than the disk that has
been shown in various illustrations.
The major characteristic of the far field is that the beam diverges. This causes the ultrasound
pulses to be larger in diameter but to have less intensity along the central axis. The
approximate angle of divergence is related to the diameter of the transducer, D, and the wavelength, λ, by
Divergence angle (degrees) = 70 λ / D.
Because of the inverse relationship between wavelength and frequency, divergence is
decreased by increasing frequency. The major advantage of using the higher ultrasound
frequencies (shorter wavelengths) is that the beams are less divergent and generally produce
less blur and better detail.
The previous figure is a representation of the ideal ultrasound beam. However, some
transducers produce beams with side lobes. These secondary beams fan out around the
primary beam. The principal concern is that under some conditions echoes will be produced
by the side lobes and produce artifacts in the image.
8.18.4 Fixed Focus
A transducer can be designed to produce a focused ultrasound beam by using a concave piezoelectric element or an acoustic lens in front of the element. Transducers are designed
with different degrees of focusing. Relatively weak focusing produces a longer focal zone and
greater focal depth. A strongly focused transducer will have a shorter focal zone and a shorter
focal depth.
Fixed-focus transducers have the obvious disadvantage of not being able to produce the same image detail at all depths within the body.
8.18.5 Adjustable Transmit Focus
The focusing of some transducers can be adjusted to a specific depth for each transmitted
pulse. This concept is illustrated in the following figure. The transducer is made up of an array
of several piezoelectric elements rather than a single element as in the fixed focus transducer.
There are two basic array configurations: linear and annular. In the linear array the
elements are arranged in either a straight or curved line. The annular array transducer
consists of concentric transducer elements as shown. Although these two designs have
different clinical applications, the focusing principles are similar.
Focusing is achieved by not applying the electrical pulses to all of the transducer elements
simultaneously. The pulse to each element is passed through an electronic delay. Now let's
observe the sequence in which the transducer elements are pulsed in the figure 8.29. The
outermost element (annular) or elements (linear) will be pulsed first. This produces ultrasound
that begins to move away from the transducer. The other elements are then pulsed in sequence,
187
CHPTER 8 IMAGING WITH SOUND
working toward the center of the array. The centermost element will receive the last pulse. The
pulses from the individual elements combine in a constructive manner to create a curved
composite pulse, which will converge on a focal point at some specific distance (depth) from
the transducer.
Figure 8.29: An annular array and an acoustic lens each produce a curved wave front that converges on the focal point.
the distance that has been traveled by the sound. In this way, all echoes from identical
interfaces are rendered the same, independent of their depth. Swept gain is typically varied
from 0 to 50 dB.
The pulse is thus emitted, and the system is set to await the reflected signals, calculating the
depth of the scatterer on the basis of the time from emission to reception of the signal. The
total time for awaiting the reflected ultrasound is determined by the preset depth desired in the
image.
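The echo-ranging arithmetic works out as follows; a 1540 m/s speed of sound is the usual soft-tissue assumption:

```python
c = 1540.0  # m/s, assumed speed of sound in soft tissue

def echo_depth_mm(round_trip_us):
    """Depth of a reflector from the echo's round-trip time in microseconds.
    The pulse travels down AND back, hence the division by 2."""
    return c * round_trip_us * 1e-6 / 2.0 * 1e3

print(f"{echo_depth_mm(13.0):.2f} mm")  # 10.01 mm: ~13 us per cm of depth

# Listening time needed per scan line for a preset depth of 15 cm:
t_line_us = 2.0 * 0.15 / c * 1e6
print(f"{t_line_us:.0f} us")            # 195 us
```

The second value shows why the preset depth limits the pulse repetition frequency: a deeper image needs a longer wait before the next pulse can be emitted.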
The received energy at a certain time, i.e. from a certain depth, can be displayed as an energy amplitude in A-mode (amplitude mode). A-mode shows the depth of, and the reflected energy from, each scatterer (figure 8.31).
Figure 8.31: Display of the received signal as amplitude versus depth (time).
The amplitude can also be displayed as the brightness of a point representing the scatterer, in a B-mode (brightness mode) plot. B-mode shows the energy or signal amplitude as the brightness of the point (in this case the higher energy is shown darker, against a light background). If some of the scatterers are moving, the motion curve can be traced by letting the B-mode image sweep across a screen or paper. This is called M-mode or T-M mode (motion, or time-motion, mode). If the depth is shown in a time plot, the motion is seen as a curve (and non-moving scatterers as horizontal lines) in the M-mode plot. These modes are illustrated in figure 8.31 and will be explained in detail in the following sections.
The ratio of the amplitude (energy) of the reflected pulse to that of the incident pulse is called the reflection coefficient. The ratio of the amplitude of the transmitted pulse to that of the incident pulse is called the transmission coefficient. Both depend on the difference in acoustic impedance between the two materials. The acoustic impedance (Z) of a medium is the speed of sound in the material (c) multiplied by its density (ρ):
Z = c × ρ
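As a check on this formula, the air value quoted in table 8.3 is recovered from round-number properties of air (c ≈ 330 m/s, ρ ≈ 1.3 kg/m³, both approximate):

```python
# Acoustic impedance Z = c * rho.
c_air = 330.0  # m/s, approximate speed of sound in air
rho_air = 1.3  # kg/m^3, approximate density of air

z_air = c_air * rho_air
print(f"Z(air) = {z_air:.0f} kg m^-2 s^-1")  # 429, as in table 8.3
```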
Thus, if the velocities of sound in two materials are very different, the reflection will be close to total, and almost no energy will pass into the deeper material. This occurs at boundary zones between, for instance, soft tissue and bone, or soft tissue and air. The deeper material can then be considered to lie in a shadow.
8.21.1 A-mode
As late as the 1960s, A-mode (A-scan, amplitude modulation) was in general use; it is now obsolete in medical imaging. It is mainly of historical importance and rarely used today, as it conveys limited information, e.g. measurement of distances. A-mode is sometimes still used in ophthalmology, or for showing midline displacement in the brain (see figure 8.32). A-mode is a one-dimensional examination technique in which a transducer with a single crystal is used. It is the simplest form of ultrasound imaging, showing only the positions of tissue interfaces. As an imaging technique it has been largely superseded by B-mode imaging and by other imaging techniques, such as computed tomography (CT). However, A-mode illustrates the basic principles of ultrasound imaging.
Figure 8.33: A-mode (amplitude mode) of ultrasound display. An
oscilloscope display records the amplitude of echoes as a function of time
or depth.
8.21.2 B-Mode
B-mode refers to brightness mode and was the first practical application of ultrasound for diagnostic purposes. Modern medical ultrasound is performed primarily using a pulse-echo approach with B-mode display. B-mode ultrasound imaging collects the same information as A-mode, but, unlike A-mode, it adds a sense of direction (where the echo is coming from in a two-dimensional plane) as well as the memory to recall all the different echoes, strong and weak. This is the mainstay of ultrasound imaging, providing a real-time, gray-scale display in which the variations in intensity and brightness indicate reflected signals of different amplitudes (the brighter parts indicate larger reflections of sound). The basic principle of B-mode imaging is to transmit small pulses of ultrasound from a transducer into the body. The direction of ultrasound propagation along the beam line is called the axial direction, and the direction in the image plane perpendicular to axial is called the lateral direction. As the ultrasound waves penetrate the tissues of the body, they encounter different acoustic impedances along the transmission path; at each tissue interface a small fraction of the ultrasound pulse returns to the transducer as a reflected echo (the echo signal), while the remainder of the pulse continues along the beam line to greater tissue depths. The echo signals returned from many sequential coplanar pulses are processed and combined to generate an image. This image becomes recognizable, particularly with practice, and can then be evaluated for abnormalities and measured.
The first B-mode images were simple black or white pictures, with no shades of grey. Grey-
scale images were a huge step forward in the quality of ultrasound pictures. In modern
ultrasound scanners, the transducer position produces a series of dots of variable brightness on
the display screen by sampling multiple lines, which build up a 2-dimensional representation
of echoes returning from the different body parts being scanned.
On a black background, the signals of greatest intensity appear white and the absence of
echoes appears black, with all intermediate intensities appearing as shades of gray, each
intensity of the reflected sound being assigned a gray-scale value.
The scanner has a digital memory (typically 512 × 512 or 512 × 640 pixels) used to store
the values associated with the echo intensities; these are then sent to the video monitor to
display the gamut of shades for that particular ultrasound image. The operator can then adjust
the dynamic range of the display; it is best to use as wide a dynamic range as possible to
differentiate the slightest changes in tissue echogenicity.
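The gray-scale assignment described above can be sketched as a logarithmic (decibel) compression of echo amplitude over an operator-selected dynamic range. The function name and the 60 dB default below are illustrative assumptions, not taken from any particular scanner:

```python
import math

def echo_to_gray(amplitude, max_amplitude, dynamic_range_db=60.0):
    """Map an echo amplitude to an 8-bit gray level (0-255).

    The strongest echo maps to white (255); echoes more than
    dynamic_range_db below it map to black (0).
    """
    if amplitude <= 0:
        return 0
    level_db = 20.0 * math.log10(amplitude / max_amplitude)      # 0 dB or below
    fraction = (level_db + dynamic_range_db) / dynamic_range_db  # clipped to 0..1
    return round(255 * min(max(fraction, 0.0), 1.0))
```

Widening `dynamic_range_db` devotes more gray levels to weak echoes, which is why a wide dynamic range helps differentiate subtle changes in tissue echogenicity.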
I. A short pulse is generated by the ultrasound source (transducer) and propagates into
the body at the speed of sound (~1540 m/s).
II. At an interface a small fraction of the pulse (echo) is reflected back to the transducer.
III. The echo signals are displayed as a function of time. This display is called an “A” scan.
IV. When the transducer is coupled to an arm and the signals are displayed as bright spots
representing the tomographic image of the tissue, the image is called a “B” scan.
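Steps I-III above rest on the pulse-echo range equation: the reflector depth follows from the round-trip echo time, assuming the usual average soft-tissue sound speed of about 1540 m/s. A minimal sketch:

```python
SPEED_OF_SOUND = 1540.0  # m/s, assumed average speed in soft tissue

def depth_from_echo_time(round_trip_time_s, c=SPEED_OF_SOUND):
    """Reflector depth in metres; divide by 2 because the pulse travels the path twice."""
    return c * round_trip_time_s / 2.0

def echo_time_from_depth(depth_m, c=SPEED_OF_SOUND):
    """Round-trip echo time in seconds for a reflector at the given depth."""
    return 2.0 * depth_m / c
```

For example, an interface 5 cm deep returns its echo after roughly 65 microseconds.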
8.21.3 M-mode or TM-mode
M-mode [Motion mode – sometimes called TM-mode (Time-Motion Mode)] provides a one-
dimensional view of moving objects over time. It is a particularly useful modality in
echocardiography where it displays the echo amplitude from the beating heart, including the
motion of the heart valves.
The transducer is positioned over the heart and is kept stationary. It records the returning
echoes over the same line of sight repeatedly. What is changing is the position of the heart
wall and valves from one moment to the next. The brightness of the display indicates the
intensity of the reflected signal.
A limitation of M-mode imaging is the difficulty of achieving consistent and accurate beam
placement for standard measurements and calculations. Beam placement guided by 2-D
imaging can be used, but accurate placement of the M-mode beam at the appropriate locations
within the heart, as well as on the endocardial surfaces, is crucial to obtaining accurate
measurements and calculations.
8.21.4 B-scan, Two-dimensional
The arrangement of many (e.g. 256) one-dimensional lines in one plane makes it possible to
build up a two-dimensional (2D) ultrasound image (2D B-scan). The single lines are generated
one after the other by moving (rotating or swinging) transducers or by electronic multi-
element transducers. Rotating transducers with two to four crystals mounted on a wheel and
swinging transducers (‘wobblers’) produce a sector image with diverging lines (mechanical
sector scanner; Figure 8.34).
The phased-array transducer is used mainly for echocardiography. In this case, exactly
delayed electronic excitation of the elements is used to generate successive ultrasound beams
in different directions so that a sector image results (electronic sector scanner).
Construction of the image in fractions of a second allows direct observation of movements
in real time. A sequence of at least 15 images per second is needed for real-time observation,
which limits the number of lines for each image (up to 256) and, consequently, the width of
the images, because of the relatively slow velocity of sound. The panoramic-scan technique
was developed to overcome this limitation. With the use of high-speed image processors,
several real-time images are constructed to make one large (panoramic) image of an entire
body region without loss of information, but no longer in real time.
Many technical advances have been made in the electronic focusing of array transducers
(beam forming) to improve spatial resolution, by elongating the zone of best lateral resolution
and suppressing side lobes (points of higher sound energy falling outside the main beam).
Furthermore, use of complex pulses from wide-band transducers can improve axial resolution
and penetration depth. The elements of the array transducers are stimulated individually by
precisely timed electronic signals to form a synthetic antenna for transmitting composite
ultrasound pulses and receiving echoes adapted to a specific depth. Parallel processing allows
complex image construction without delay.
For years, two-dimensional (2D) ultrasound images were the norm, showing black and
white pictures. Technology has now advanced to the point where three-dimensional (3D)
and four-dimensional (4D) ultrasound images can be produced as well. The standard obstetric
diagnostic mode is 2D scanning, which shows size and shape without depth but can be used to
see the internal organs of the baby. This is helpful in diagnosing heart defects, issues with the
kidneys and other internal problems. In recent years it has been supplemented by the newer
3D imaging technique.
8.21.5 Three- and Four-Dimensional Ultrasound techniques
3D ultrasound is a medical ultrasound technique, often used in obstetric ultrasonography
(during pregnancy), that provides three-dimensional images of the fetus and can show details
such as facial features. In 3D imaging, instead of the sound waves being sent straight down
and reflected back, they are sent at different angles: the ultrasound takes images from several
different angles and combines them into one three-dimensional image that conveys size,
shape and depth. In other words, 3D images show three-dimensional external views, which
may be helpful in diagnosing issues such as a cleft lip. The returning echoes are processed by
a sophisticated computer program, resulting in a reconstructed three-dimensional volume
image of the fetus's surface or internal organs, in much the same way as a CT machine
constructs an image from multiple X-rays.
4D imaging takes 3D to the next step by combining numerous 3D images into a real-time
movie: the image is continuously updated, so it becomes a moving image. 4D ultrasounds are
thus similar to 3D scans, with the difference being time: 4D provides a three-dimensional
picture in real time rather than delayed, avoiding the lag associated with the computer-
constructed image of classic three-dimensional ultrasound. It is worth mentioning that when
the system is used in the obstetrics application, the ultrasound intensity is limited to less than
100 mW/cm² (specifically a maximum of 94 mW/cm²), whether scanning in 2, 3 or 4
dimensions.
The main prerequisite for construction of 3D ultrasound images is very fast data
acquisition. The transducer is moved by hand or mechanically perpendicular to the scanning
plane over the region of interest. The collected data are processed at high speed, so that real-
time presentation on the screen is possible. This is called the four-dimensional (4D)
technique (4D = 3D + real time). The 3D image can be displayed in various ways, such as
transparent views of the entire volume of interest or images of surfaces, as used in obstetrics
and not only for medical purposes. It is also possible to select two-dimensional images in any
plane, especially those that cannot be obtained by a 2D B-scan.
3D or 4D ultrasound has been developed and researched in two major ways. One is to
overcome the limitations of 2D ultrasound by providing an imaging technique that reduces the
variability of the 2D technique and allows the clinician to view the anatomy in 3D, the other is
to provide better spatial guidance for various interventional procedures, such as biopsy, focal
ablative therapy or image-guided surgery. In the field of diagnostic radiology, various 3D
ultrasound techniques, such as ultrasound cholangiography using minimum intensity
projection and volume contrast imaging, have shown excellent performance in achieving
better spatial resolution and have reduced inherent noise in comparison with conventional 2D
ultrasound. As guidance for interventional procedures, 3D ultrasound was proved to be useful
in improving the depiction and understanding of the geometric relationships of needles and
probes to tumors and other nearby structures, so as to optimize delivery of the needle or
ablative agent. Furthermore, 4D ultrasound, which is a dynamic 3D ultrasound, provides real-
time feature of volume datasets instead of “static” 3D ultrasound images, and so enables more
intuitive recognition of the 3D spatial relationship between the needle and the target lesion and
allows easy alteration in the orientation of the needle under real-time monitoring. The
advantages of 3D ultrasound are primarily derived from the fundamental properties of 2D
ultrasound. Ultrasound has many advantages over computed tomography and magnetic
resonance imaging, including real-time imaging with vessel visualization, decreased procedure
time and cost, portability, and lack of ionizing radiation. With continuing technological
improvements including computer technology and visualization techniques, 3D ultrasound
imaging is beginning to migrate from the research laboratory to the examination room.
Therefore, radiologists and sonographers should be ready to accept the paradigm shift of
viewing 3D images on a computer monitor.
Figure 8.36: (A) Conventional imaging; (B) harmonic imaging.
The difference between the transmitted and returned signal is that the returned signal is less
intense, losing strength as it passes through tissue. With Harmonic Imaging, on the other hand,
the signal returned by the tissue includes not only the transmitted “fundamental” frequency,
but also signals of other frequencies – most notably, the “harmonic” frequency, which is twice
the fundamental frequency (Figure 8.36B). Once this combined fundamental/harmonic signal
is received, the ultrasound system separates out the two components and then processes the
harmonic signal alone.
8.21.7 B-flow
B-Flow is a new imaging technique which utilizes Digitally Encoded Ultrasound technology
to provide direct visualization of blood echoes to image blood flow and tissue simultaneously.
It is a special B-scan technique for blood flow imaging that can be used to show movement
without relying upon the Doppler effect. This technique shows exceptional clarity and speed
in the display of blood flow and vessel walls but, unlike Doppler methods, provides no
information about flow velocity (Figure 8.37).
Basically, the greater the frequency shift, the higher the speed of the moving object.
According to the Doppler principle, movement of blood cells or tissue towards the transducer
raises the received frequency, while movement away from the transducer lowers it. The
Doppler effect is the apparent change in wavelength (or frequency) of an acoustic wave when
there is relative movement between the transmitter (or frequency source) and the receiver. It
is used to provide further information in various ways, as discussed below; these are
especially important for examining blood flow.
The basis for the Doppler effect is that the propagation velocity of the waves in a medium
is constant: the waves propagate with the same velocity in all directions, and the velocity of
the source does not add to the velocity of the waves. Thus, as the source moves in the
direction of propagation of the waves, it does not increase their propagation velocity but
instead increases their frequency.
• In ultrasound imaging, if echoes received are from tissues or blood cells that are moving,
the transmitted and received frequencies will not be the same.
• This “shifted” frequency can be used to determine the relative velocity and the direction
of these moving tissues.
• This effect is known as the Doppler effect.
Doppler principle can be summarized generally as follows:
If the reflector is moving toward the transmitter, the received frequency will be higher than the
transmit frequency. If the reflector is traveling away from the transmitter, the received
frequency will be lower than the transmit frequency.
This same principle applies to blood flow. If blood moves toward the probe (antegrade),
the sound bouncing off it increases in frequency; the source of these sound waves is the probe
itself, which both emits the waves and detects them. If the blood moves away from the probe
(retrograde), the frequency decreases. The Doppler shift can therefore be used to detect blood
flow and measure blood flow velocity.
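The relationship summarized above is the standard pulse-echo Doppler equation, fD = 2 f0 v cos(θ) / c, where f0 is the transmitted frequency, v the blood velocity, θ the beam/flow angle and c the speed of sound. A minimal sketch, assuming the usual 1540 m/s soft-tissue value:

```python
import math

SPEED_OF_SOUND = 1540.0  # m/s, assumed average in soft tissue

def doppler_shift_hz(f0_hz, velocity_m_s, angle_deg=0.0, c=SPEED_OF_SOUND):
    """Doppler shift for a pulse-echo system; the factor 2 accounts for the
    shift occurring on both transmission and reception. A positive result
    means flow toward the probe (antegrade), negative means away (retrograde)."""
    return 2.0 * f0_hz * velocity_m_s * math.cos(math.radians(angle_deg)) / c
```

For example, a 5 MHz probe insonating blood moving straight toward it at 0.5 m/s produces a shift of about 3.2 kHz, conveniently in the audible range.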
Ultrasound images of flow, whether color flow or spectral Doppler, are essentially
obtained from measurements of movement. In ultrasound scanners, a series of pulses are
transmitted to detect movement of blood. Echoes from stationary tissue are the same from
pulse to pulse. Echoes from moving scatterers exhibit slight differences in the time for the
signal to be returned to the receiver (Figure 8.38A). These differences can be measured as a
direct time difference or, more usually, in terms of a phase shift from which the "Doppler
frequency" is obtained (Figure 8.38B). They are then processed to produce either a color flow
display or a Doppler sonogram.
Figure 8.38: Ultrasound velocity measurement. The diagram shows a
scatterer S moving at velocity v with a beam/flow angle θ. The velocity can
be calculated from the change in transmit-to-receive time between the first
pulse (t1) and the second (t2), as the scatterer moves through the beam.
All types of Doppler ultrasound equipment employ filters to cut out the high amplitude, low-
frequency Doppler signals resulting from tissue movement, for instance due to vessel wall
motion. Filter frequency can usually be altered by the user, for example, to exclude
frequencies below 50, 100 or 200 Hz. This filter frequency limits the minimum flow velocities
that can be measured.
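The wall filter described above can be pictured as a high-pass threshold on the detected Doppler shifts. The list-based representation below is an illustrative simplification of the real signal processing, with an assumed function name and cutoff:

```python
def wall_filter(doppler_shifts_hz, cutoff_hz=100.0):
    """Keep only Doppler shifts whose magnitude exceeds the cutoff,
    discarding the strong low-frequency components caused by vessel-wall
    and other tissue motion."""
    return [f for f in doppler_shifts_hz if abs(f) >= cutoff_hz]
```

Raising the cutoff suppresses more wall motion but also removes genuine slow flow, which is the trade-off behind the 50/100/200 Hz filter settings mentioned above.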
8.25 Spectral Doppler
Spectral Doppler, of high value in ultrasound diagnosis, can be used for the evaluation of
blood flow and includes three kinds:
- Pulsed-wave Doppler (PW)
frequency equal to the Doppler shift. (This means that a full package of pulses is considered
one pulse in the sampling-frequency sense.)
The pulsed mode results in a practical limit on the maximum velocity that can be
measured. In order to measure velocity at a certain depth, the next pulse cannot be sent out
before the signal from the previous one has returned. The Doppler shift is thus sampled once
for every pulse that is transmitted, and the sampling frequency is therefore equal to the pulse
repetition frequency (PRF), the number of pulses per second.
One main advantage of pulsed Doppler is its ability to provide Doppler shift data
selectively from a small sample volume along the ultrasound beam (for example, mitral valve
inflow). The location of this sample volume is operator controlled. The main disadvantage of
PW Doppler is its inability to accurately measure high blood flow velocities (above about 1.5
to 2 m/s), a limitation known as aliasing. Frequency aliasing occurs at a Doppler shift equal
to half the PRF. This threshold is called the Nyquist limit: the highest detectable velocity is
limited by one half of the rate at which the ultrasound lines are fired. More detail on aliasing
is given later.
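The Nyquist limit described above translates directly into a maximum measurable velocity: setting the Doppler shift to PRF/2 in the Doppler equation and solving for velocity gives v = c·(PRF/2)/(2·f0·cos θ). A sketch, with the usual 1540 m/s soft-tissue assumption:

```python
import math

SPEED_OF_SOUND = 1540.0  # m/s, assumed average in soft tissue

def nyquist_velocity_m_s(prf_hz, f0_hz, angle_deg=0.0, c=SPEED_OF_SOUND):
    """Highest velocity measurable without aliasing.

    The maximum unambiguous Doppler shift is PRF/2 (the Nyquist limit);
    inverting the Doppler equation gives the matching velocity."""
    f_nyquist = prf_hz / 2.0
    return c * f_nyquist / (2.0 * f0_hz * math.cos(math.radians(angle_deg)))
```

For example, a 3.5 MHz probe pulsed at a PRF of 8 kHz with the beam parallel to flow can measure velocities only up to 0.88 m/s before aliasing sets in, consistent with the 1.5-2 m/s practical ceiling quoted above for typical cardiac settings.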
Pulsed wave (PW) Doppler systems use a transducer that alternates transmission and reception
of ultrasound in a way similar to the M-mode transducer. One main advantage of pulsed
Doppler is its ability to provide Doppler shift data selectively from a small segment along the
ultrasound beam, referred to as the "sample volume".
The sample volume is really a three-dimensional, teardrop shaped portion of the ultrasound
beam (Figure 8.41). Its volume varies with different Doppler machines, different size and
frequency transducers and different depths into the tissue. Its width is determined by the width
of the ultrasound beam at the selected depth. Its length is determined by the length of each
transmitted ultrasound pulse.
Therefore, the farther into the heart the sample volume is moved, the larger it effectively
becomes. This happens because the ultrasound beam diverges as it gets farther away from the
transducer.
The main disadvantage of PW Doppler is its inability to accurately measure high blood
flow velocities, such as may be encountered in certain types of valvular and congenital heart
disease. This limitation is technically known as "aliasing" and results in an inability of pulsed
Doppler to faithfully record velocities above 1.5 to 2 m/sec when the sample volume is located
at standard ranges in the heart (figure 8.42).
The spectral outputs from PW and CW appear different, as shown in figure 8.43, where the
transducer is located at the apex and diastolic flow is toward the transducer (positive). Note
the laminar appearance of the PW display. The CW display does not usually show the same
laminar flow pattern, as it receives flow information from all portions of the ultrasound beam.
When there is no turbulence, PW will generally show a laminar (narrow-band) spectral
output. CW, on the other hand, rarely displays such a neat narrow band of flow velocities,
even with laminar flow, because all the various velocities encountered by the ultrasound
beam are detected by CW.
Figure 8.43: Spectral displays of diastolic flow through the mitral orifice.
It can generally be said that when an operator wants to know where a specific area of
abnormal flow is located, pulsed-wave Doppler is indicated; when accurate measurement of
elevated flow velocity is required, CW Doppler should be used. The various differences
between pulsed and continuous-wave Doppler are summarized in Table 8.5.
Table 8.5: Summarizing the advantages and disadvantages of pulsed
and continuous wave Doppler echocardiography.
                    Range resolution    Limitation on maximum velocity
Pulsed wave              yes                        yes
Continuous wave          no                         no
In pulsed Doppler, a single ultrasound line is repeatedly fired. Echoes reflected from moving
structures, including blood cells, experience a Doppler shift in frequency. Using the Doppler
equation, the echo information obtained within the sample volume is analyzed for shifted
frequency content and amplitude rather than transmit-frequency amplitude. From this, the
blood velocity can be determined.
In order to obtain enough data to calculate the frequency components of the sampled
volume, many ultrasound lines must be fired. The frequency data is converted to velocity, and
displayed in a scrolling strip format on the monitor.
8.29 Angle of Incidence
When the motion of the object and the transmitted beam are not parallel, it is necessary to
correct for the angular difference. Motion that occurs at an angle to the beam axis results in a
decrease in the magnitude of the frequency shift and a lower calculated velocity, because the
transducer receives only the velocity component parallel to the beam (V cos θ). Therefore, the
transmitted beam should be as nearly parallel to the flow as possible for the most accurate
velocity, and an equation is used to correct for the angle offset.
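The angle correction can be sketched as follows: since the measured component is V cos θ, the true velocity is recovered by dividing by cos θ. The function name is an illustrative assumption:

```python
import math

def angle_corrected_velocity(measured_m_s, angle_deg):
    """Recover the true flow velocity from the beam-parallel component
    (measured = true * cos(theta))."""
    if angle_deg >= 90.0:
        raise ValueError("no velocity component along the beam at >= 90 degrees")
    return measured_m_s / math.cos(math.radians(angle_deg))
```

Note that the correction amplifies any error in the angle estimate as θ grows, which is why a beam as nearly parallel to the flow as possible gives the most accurate velocity.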
The location of the sample volume is operator controlled. An ultrasound pulse transmitted
into the tissues travels for a given time (time t) until it is reflected by a moving red cell; it
then returns to the transducer over the same time interval but at a shifted frequency. The total
transit time to and from the area is 2t. Since the speed of ultrasound in the tissues is constant,
there is a simple relationship between round-trip travel time and the location of the sample
volume relative to the transducer face (i.e., the distance to the sample volume equals the
ultrasound speed multiplied by half the round-trip travel time). This process is repeated
through many transmit-receive cycles each second.
This range gating is therefore dependent on a timing mechanism that only samples the
returning Doppler shift data from a given region. It is calibrated so that as the operator chooses
a particular location for the sample volume, the range gate circuit will permit only Doppler
shift data from inside that area to be displayed as output. All other returning ultrasound
information is essentially "ignored".
Another main advantage of PW Doppler is the fact that some imaging may be carried on
alternately with the Doppler and thus the sample volume may be shown on the actual two-
dimensional display for guidance. PW Doppler capability is possible in combination with
imaging from a mechanical or phased array imaging system. It is also generally steerable
through the two-dimensional field of view, although not all systems have this capability.
In reality, since the speed of sound in body tissues is constant, it is not possible to
simultaneously carry on both imaging and Doppler functions at full capability in the same
ultrasound system. In mechanical systems, the cursor and sample volume are positioned
during real-time imaging, and the two-dimensional image is then frozen when the Doppler is
activated. With most phased array imaging systems the Doppler is variably programmed to
allow periodic update of a single frame two-dimensional image every few beats (figure 8.44).
In other phased arrays, two-dimensional frame rate and line density are significantly decreased
to allow enough time for the PW Doppler to sample effectively. This latter arrangement gives
the appearance of near simultaneity.
Figure 8.44: When the PW Doppler operates, it causes the two-dimensional
image to be held in a frozen frame. The image is periodically updated and
will usually appear as a blank on the spectral display (dashed lines).
8.30 Aliasing
There are fundamental limitations suffered pulsed wave systems. The aliasing phenomenon
occurs with pulsed ultrasound it is not possible to measure very high-flow velocities with
accuracy. If the flow is too fast it will be shown in the wrong direction and its velocity
underestimated. This artifact shows as 'wrap-round' top and bottom in the sonogram, and is
known as 'aliasing'. The upper limit of the Doppler shift (maximum Doppler frequency fD)
which can be the pulsed wave system can record it properly or displayed unambiguously is
known as the Nyquist limit and is half the pulse repetition frequency (PRF/2). In other
words, when pulses are transmitted at a given sampling frequency (known as the pulse
repetition frequency), the maximum Doppler frequency fD that can be measured
unambiguously is half the pulse repetition frequency. In the case of pulsed-wave spectral
Doppler (or color Doppler), when an abnormal velocity exceeds this upper limit, Doppler
shifts greater than the Nyquist limit are generated. This leads to aliasing, which is displayed
as bright, turbulent-appearing flow in color Doppler, and as blood flow profiles that 'wrap
around' the displayed scale in pulsed-wave spectral Doppler, as shown in figure 8.45. In other
words, the frequency with which the Doppler pulses are repeated must be at least twice the
maximum Doppler shift frequency produced by the flow. Thus the fastest flow that can be
measured with accuracy is the velocity which produces a Doppler shift frequency equal to
half the PRF being used; a greater flow velocity produces aliasing. Aliasing does not occur
with continuous-wave Doppler. It is therefore particularly difficult to measure fast flow in
deep blood vessels: the deeper the gate has to be set, the smaller the PRF that can be used,
and so the smaller the fastest flow that can be measured without aliasing.
For example, if the Doppler shift frequency produced by the fast blood flow associated with
a stenosis is 8 kHz, the PRF must be at least 16 kHz. This allows a listening time of only 60 µs
between pulses, in which time the sound can travel to and fro through a depth of view of only
5 cm. The depth of the sampling volume determines the PRF needed, and the PRF determines
the maximum velocity that can be measured without aliasing. Thus:
maximum velocity (cm s-1) × range (cm) × transducer frequency (MHz) = 4000
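The worked example above can be reproduced step by step; the variable names below are illustrative, and the 1540 m/s sound speed is the usual soft-tissue assumption:

```python
SPEED_OF_SOUND = 1540.0   # m/s, assumed average in soft tissue

doppler_shift = 8000.0               # Hz, from the fast jet at the stenosis
min_prf = 2.0 * doppler_shift        # Nyquist: PRF must be at least 2 * fD -> 16 kHz
listening_time = 1.0 / min_prf       # time between pulses -> 62.5 microseconds
max_depth = SPEED_OF_SOUND * listening_time / 2.0  # unambiguous depth -> ~4.8 cm
```

The computed listening time of 62.5 µs and unambiguous depth of about 4.8 cm match the "only 60 µs" and "only 5 cm" figures quoted above.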
The risk of aliasing can be reduced by reducing the Doppler shift: (a) using a probe of
lower frequency f, or (b) increasing the angle θ; but both increase the error in the measured
flow. The risk can also be reduced by (c) increasing the PRF, but this causes problems, as we
shall now see.
The pulse repetition frequency is itself constrained by the range of the sample volume. The
time interval between sampling pulses must be sufficient for a pulse to make the return
journey from the transducer to the reflector and back. If a second pulse is sent before the first
is received, the receiver cannot discriminate between the reflected signal from both pulses and
ambiguity in the range of the sample volume ensues. As the depth of investigation increases,
the journey time of the pulse to and from the reflector is increased, reducing the pulse
repetition frequency for unambiguous ranging. The result is that the maximum fD measurable
decreases with depth.
Low pulse repetition frequencies are employed to examine low velocities (e.g. venous
flow). The longer interval between pulses allows the scanner a better chance of identifying
slow flow. Aliasing will occur if low pulse repetition frequencies or velocity scales are used
and high velocities are encountered (figures 8.46 and 8.47). Conversely, if a high pulse
repetition frequency is used to examine high velocities, low velocities may not be identified.
Figure 8.47: (a, b) Example of aliasing and its correction. (a) Waveforms
with aliasing: the peak systolic waveform terminates abruptly and the peaks
are displayed below the baseline. (b) Correction: the pulse repetition
frequency was increased and the baseline moved down; the sonogram is
clear, without aliasing.
CHAPTER 9
MAGNETIC RESONANCE
IMAGING
Rationale
Magnetic resonance imaging (MRI) is a test that uses a magnetic
field and pulses of radio-wave energy to make pictures of organs
and structures inside the body. In many cases MRI gives different
information about structures in the body than can be seen with an
X-ray, ultrasound, or computed tomography (CT) scan. MRI may
also show problems that cannot be seen with other imaging
methods. Hence the great importance of studying this technology
and understanding the fundamentals upon which it is built.
Performance Objectives
After studying chapter nine, the student will be able to:
structures, whereas MRI is extremely useful for detecting soft-tissue lesions. Before beginning
a study of the science of MRI, it will be helpful to review briefly the hardware of MRI.
9.2 The Hardware
Magnetic resonance imaging (MRI) scanners come in many varieties: permanent-magnet,
resistive and superconducting types; open or bore designs; with or without helium; high or
low field strength. The choice of magnet is governed mainly by what you intend to do, and by
cost. High-field magnets offer better image quality, faster scanning and a wider range of
applications, but they cost more than their low-field counterparts.
9.3 Magnet Types
The static magnetic field (Bo) in MRI systems can be created by permanent magnets or by
electromagnets.
9.3.1 Permanent Magnets
A permanent magnet is made from permanently ferromagnetic materials, so its magnetic
field remains over time without weakening. Due to weight considerations, these magnets are
usually limited to maximum field strengths of about 0.4 T (the unit of magnetic field strength
is the tesla: 1 tesla = 10,000 gauss). Permanent magnets usually have an open design (see
Figure 9.1) with ample open space, which is more comfortable for the patient. The open
design accommodates extremely large patients and dramatically reduces anxiety for all
patients, especially those who have claustrophobic tendencies or larger body structures.
ADVANTAGES                 DISADVANTAGES
Low power consumption      Limited field strength (<0.3 T)
Low operating cost         Very heavy
Small fringe field         No quench possibility
No cryogen
Figure 9.1: Open MRI system "OPER"
9.3.2 Electromagnets
Two categories of electromagnet can be used in an MR scanner: resistive and superconducting magnets.
A number of vacuum vessels, which act as temperature shields, surround the core. These
shields are necessary to prevent the helium from boiling off too quickly. Another advantage
of superconducting magnets is their high magnetic field homogeneity.
The current trend in magnet design is low field open design versus high field bore design.
Obviously it would be desirable to combine the two, and only time will tell whether this can
be done within reasonable manufacturing costs and technical/structural limitations.
9.4 Shimming
MRI requires a highly homogeneous static magnetic field. In order to produce high-
resolution images, the magnetic field inhomogeneity in a high-performance MRI scanner
must be kept to the order of a few ppm. After manufacture, the field must be made more
uniform by small mechanical and/or electrical adjustments to the overall field. This process is
known as shimming: because the magnet itself is not adequately homogeneous, it is necessary
to improve or "shim" the homogeneity of the static magnetic field (Bo). A shim is a device
used to adjust the homogeneity of a magnetic field.
Shimming (or adjustment of the static magnetic field homogeneity) is accomplished by two
methods: (1) Passive shimming (2) Active shimming
Passive shimming: The mechanical adjustments, which add small pieces of iron or
magnetized materials, are typically called passive shimming. Passive shimming involves
pieces of steel with good magnetic qualities. The steel pieces are placed near the permanent or
superconducting magnet. They become magnetized and produce their own magnetic field.
Active shimming: The electrical adjustments, which use extra exciting currents, are known
as active shimming. Active shimming is performed with coils with adjustable current. Active
shimming requires passage of electric current through coils with unique geometric
configurations. The shim coils are designed to correct inhomogeneities of specific geometries.
In both cases (active and passive shimming), the additional magnetic fields (produced by
coils or steel) add to the overall magnetic field of the superconducting magnet in such a way
as to increase the homogeneity of the total field.
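As a rough numerical illustration of the few-ppm tolerance quoted above, field homogeneity is usually expressed as the peak-to-peak field deviation relative to B0, in parts per million. The sketch below is illustrative only: the field readings and the helper name `inhomogeneity_ppm` are invented for this example.

```python
# A minimal sketch (illustrative values, not measured data) of how field
# homogeneity is expressed in parts per million (ppm).

def inhomogeneity_ppm(field_samples, b0):
    """Peak-to-peak deviation of the field over the region, relative to B0, in ppm."""
    return (max(field_samples) - min(field_samples)) / b0 * 1e6

b0 = 1.5  # tesla, nominal field strength
# Hypothetical field readings (tesla) across the imaging volume after shimming
samples = [1.4999985, 1.5000010, 1.4999995, 1.5000005]

print(f"inhomogeneity: {inhomogeneity_ppm(samples, b0):.2f} ppm")
```

For these invented readings the result is under 2 ppm, i.e. within the tolerance the text describes.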
9.5 Radio Frequency Coils
Radio Frequency (RF) coils are needed to receive and/or transmit the RF signals used in MRI scanners. The RF coil system comprises the set of components for transmitting and receiving the radiofrequency waves involved in exciting the nuclei, selecting slices, applying gradients and acquiring the signal. RF coils are vital components of the radiofrequency system and are among the most important components affecting image quality and the ability to obtain clear images of the human body. RF coils for MRI can be divided into two categories: volume coils and surface coils.
9.5.1 Volume RF Coils
A volume coil is designed to provide a homogeneous RF field inside the coil, which is highly desirable for transmission but less ideal when the region of interest is small. The large field of view of volume coils means that they receive noise from the whole body, not just from the region of interest. Volume coils need to have the area of examination inside the coil. They can be used for transmit and receive, although sometimes they are used for receive only. In most clinical applications a volume coil is built to perform whole-body imaging, and smaller volume coils have been constructed for the head and the extremities.
These coils require a great deal of RF power because of their size, so they are often driven in quadrature in order to halve the RF power requirements. Figure 9.5 shows two volume coils. The head coil is a transmit/receive coil; the knee coil is receive only.
Figure 9.5: Two volume coils: (a) head coil, (b) knee coil.
Without this protection, the very weak RF signals that emanate from the patient during scanning would be overwhelmed. The shielding also stops the radio frequencies produced by the scanner from interfering with equipment outside the cage.
The exact position of an electron in the cloud is not predictable, as it depends on the energy of the individual electron at any moment in time (physicists call this Heisenberg's Uncertainty Principle). The number of electrons, however, is usually the same as the number of protons in the nucleus.
Protons have a positive electrical charge, neutrons have no net charge and electrons are negatively charged. Atoms are therefore electrically stable if the number of negatively charged electrons equals the number of positively charged protons. This balance is sometimes altered by applying external energy to knock electrons out of the atom, causing a deficit in the number of electrons compared with protons and hence electrical instability. Atoms in which this has occurred are called ions.
9.8 Magnetization
The earth electrically charged and spinning ball is floating in space. Quite happily: nothing to
worry about. From our physics lessons in school we may remember that a rotating electrical
charge creates a magnetic field. And sure enough, the earth has a magnetic field, which we use
to find our way from one place to another by means of a compass. The magnetic field strength
of the earth is rather small: 30 T at the poles and 70T at the equator. (Tesla is unit for magnetic
fields). In short we can establish that the earth is a giant spinning bar magnet, with a north and
a south pole (Figure 9.9).
It is well known in chemistry that there are many different elements; more than 110 have been identified. The human body consists largely of water, and water consists of one oxygen and two hydrogen atoms. Consider the simplest nucleus: the hydrogen atom (the first element in the periodic table) has a nucleus containing a single proton, with one orbital electron. This proton carries a positive electrical charge and rotates (spins) about its axis.
The hydrogen proton also behaves as if it were a tiny bar magnet with a north and a south pole (Figure 9.10). Hydrogen protons in the body thus act like many tiny magnets. The nucleus is said to be a magnetic dipole, and its magnetism is called its magnetic moment. A source of protons (the protons in the nuclei of hydrogen atoms, which are associated with fat and water molecules) is essential in order to form the MR signal.
Figure 9.10: The hydrogen proton. The positively charged proton (+) spins about its axis and acts like a tiny magnet. N = north, S = south.
We conclude from the above that there are two reasons for taking hydrogen as the source of the MR signal.
First of all, we have a lot of it in our body; it is actually the most abundant element we have.
Secondly, in quantum physics there is a quantity called the gyromagnetic ratio. What it represents is beyond the scope of this book; suffice it to know that this ratio is different for each nuclear species. It just so happens that the gyromagnetic ratio for hydrogen is the largest: 42.57 MHz/Tesla.
9.9 Magnetic Moments
In most materials, such as soft tissue, these little magnetic moments are all oriented randomly
(see figure 9.11). That is, if one nucleus has its spin and therefore its magnetic moment pointed
up, there will be another nearby nucleus with its spin pointed down. Other magnetic moments
will be oriented in various directions. This random orientation causes all the spins and magnetic
moments to cancel, so that the net magnetization is zero. Net magnetization is symbolized by M.
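The cancellation of randomly oriented moments can be checked numerically. The hedged sketch below (sample size and all names are illustrative) sums a large number of randomly directed unit moments and shows that the net magnetization per spin is very close to zero:

```python
# Sketch: randomly oriented magnetic moments nearly cancel, so net M is ~ 0.
import math
import random

random.seed(0)  # deterministic run for reproducibility
n = 100_000     # number of simulated moments (illustrative)
mx = my = mz = 0.0
for _ in range(n):
    # Draw a uniformly random direction on the unit sphere
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    mx += r * math.cos(phi)
    my += r * math.sin(phi)
    mz += z

# Net magnetization per spin, compared with a single moment of magnitude 1
net = math.sqrt(mx**2 + my**2 + mz**2) / n
print(f"net magnetization per spin: {net:.5f}")
```

The residual is of order 1/√n of a single moment, which is why, without an external field, soft tissue shows essentially no net magnetization.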
Figure 9.12: Individual magnetic moments (m) precess about Bo; their sum forms the net magnetization M, with a longitudinal component Mz and a transverse component mxy.
These protons can be thought of as tiny bar magnets spinning about their own axes. As is well known, two north poles or two south poles of two magnets repel each other, while two poles of opposite sign attract each other. In the human body these tiny bar magnets are ordered in such a way that the magnetic forces equalize: human bodies are, magnetically speaking, in balance.
As we saw at the beginning of this chapter in the section on hardware, the magnets used in MR imaging come in different field strengths. For example, the field of a 1 Tesla magnet is roughly 20,000 times stronger than the Earth's magnetic field! This shows that we are working with potentially dangerous equipment.
Table 9.1: MRI friendly elements

Isotope      Symbol   Spin quantum number   Gyromagnetic ratio (MHz/T)
Hydrogen     1H       1/2                   42.6
Carbon       13C      1/2                   10.7
Oxygen       17O      5/2                   5.8
Fluorine     19F      1/2                   40.0
Sodium       23Na     3/2                   11.3
Magnesium    25Mg     5/2                   2.6
Phosphorus   31P      1/2                   17.2
Sulphur      33S      3/2                   3.3
Iron         57Fe     1/2                   1.4
If a person is placed in the MRI scanner, some interesting things happen to the hydrogen protons:
1. They align with the magnetic field. This happens in one of two ways: parallel or anti-parallel.
2. They precess, or "wobble", around the direction of the external magnetic field (the z-axis) due to the magnetic moment of the atom (see Figure 9.12).
They precess at a frequency called the Larmor frequency which, given its importance, needs to be explained further. The Larmor frequency can be calculated from the following equation:

f = γ · B0

where f is the Larmor (precessional) frequency in MHz, γ is the gyromagnetic ratio (42.57 MHz/T for hydrogen) and B0 is the strength of the external magnetic field in tesla.
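As a small numeric sketch of the Larmor relation f = γ·B0, using the hydrogen gyromagnetic ratio of 42.57 MHz/T quoted in this chapter (the function name is invented for illustration):

```python
# Sketch: the Larmor equation f = gamma * B0 for hydrogen.
GAMMA_H = 42.57  # MHz/T, gyromagnetic ratio of hydrogen (1H)

def larmor_mhz(gamma_mhz_per_t, b0_tesla):
    """Precessional (Larmor) frequency in MHz for a given field strength."""
    return gamma_mhz_per_t * b0_tesla

for b0 in (0.5, 1.0, 1.5, 3.0):
    print(f"B0 = {b0:>3} T -> f = {larmor_mhz(GAMMA_H, b0):.3f} MHz")
```

At 1.5 T this gives 63.855 MHz, the operating frequency used in the worked example later in this chapter.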
Figure 9.13: Protons align parallel or anti-parallel with the magnetic field (Bo); a small excess of extra atoms aligns parallel.
In the end we see that there is a net magnetization (the sum of all the tiny magnetic fields of the individual protons) pointing in the same direction as the system's magnetic field.
It is convenient to represent the net magnetization by a vector in order to see what happens to it during MRI. A vector (the red arrow in Figure 9.14) has a direction and a magnitude. We imagine a frame of reference: a set of axes called X, Y and Z. The Z-axis always points in the direction of the main magnetic field, while X and Y point at right angles to Z. Here we see the (red) net magnetization vector pointing in the same direction as the Z-axis. The net magnetization is now called Mz, or longitudinal magnetization.
Figure 9.14: The net magnetization vector (Mz) points along the Z-axis, in the same direction as Bo.
Figure 9.15: Sine waves of the same frequency: in phase (left), and a quarter of a cycle (90°) out of phase (right).
9.11 RF Pulse
As we said in the section 9.9, if the person placed in the MRI scanner, the first thing which
happens is the precession of spins around the direction of the external magnetic field (the z-
axis). Now what happen, if another magnetic field is temporarily switched on in a different
direction (in the direction of the x- or y-axis, say)?
Precession will occur around the direction of that magnetic field also.
If the second applied magnetic field is static, the resultant movement of the net magnetization
of a spin isochromat will be a complicated motion due to precession from the two static
fields. However, if the second magnetic field which is temporarily applied is oscillating with
the frequency of precession of the precessing spins a simple rotation of the net magnetization
vector results. (Rotation of magnetization into the x-y-plane is a "90° pulse". The dephasing of
the components of magnetization in the x-y-plane starts to occur straight away, as does the re-
growth of magnetization in the z-direction as shown in next section.)
9.12 Excitation
Before the system starts to acquire the data must be perform a quick measurements (also,
called pre-scan) to determine the frequency of protons which are spinning (Larmor frequency).
Selecting this frequency is important because it uses the system for the next step.
Once the Larmor frequency is determined, the system will start the acquisition. For now we simply send a radio frequency (RF) pulse (an RF pulse is a magnetic field whose direction oscillates at the Larmor frequency) into the patient and look at what happens.
The oscillating magnetic field at the Larmor frequency is switched on for a very short time (a few milliseconds) to achieve such a rotation. This magnetic field is called an RF pulse; it is short (a burst, or pulse), and the Larmor frequency for MRI is in the radio
frequency range (tens of MHz). This process is sometimes called RF excitation of the spin
system. Different amounts of rotation can be achieved by applying the oscillating magnetic
field for different durations.
To understand this more deeply, consider the following example.
Let us assume we work with a 1.5 Tesla system. The centre (operating) frequency of the system is 63.855 MHz, calculated using the Larmor equation: f = 42.57 MHz/T × 1.5 T = 63.855 MHz. In order to
manipulate the net magnetization we will therefore have to send a Radio Frequency (RF) pulse
with a frequency that matches the centre frequency of the system: 63.855 MHz. This is where
the Resonance comes from in the name Magnetic Resonance Imaging. Resonance you know
from the opera singer who sings a high note and the crystal glass shatters to pieces. MRI
works with the same principle. Only protons that spin with the same frequency as the RF pulse
will respond to that RF pulse. If we would send an RF pulse with a different frequency, let's
say 59.347 MHz, nothing would happen. Therefore, by sending an RF pulse at the Larmor
Frequency, with certain strength (amplitude) and for a certain period of time it is possible to
rotate the net magnetization into a plane perpendicular to the Z-axis, in this case the X-Y
plane (Figure 9.16).
Figure 9.16: A 90° RF pulse at 63.855 MHz (with B0 constant) rotates the net magnetization from the Z-axis into the X-Y plane; the flip angle (FA) is 90°.
(Note how the use of vectors makes it easy to picture what is happening; without vectors this event would be almost impossible to draw.)
We just “flipped” the net magnetization 90o. Later we will see that there is a parameter in
our pulse sequence, called the Flip Angle (FA), which indicates the amount of degrees we
rotate the net magnetization. It is possible to flip the net magnetization any degree in the range
from 1o to 180o. For now we only use an FA of 90o. This process is called excitation.
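The flip angle depends on the amplitude (B1) and duration (τ) of the RF pulse. A commonly used form of this relationship is θ = 360° · γ · B1 · τ; the sketch below assumes that form, and the B1 value and function names are purely illustrative:

```python
# Sketch: flip angle from RF pulse amplitude and duration,
# assuming theta (degrees) = 360 * gamma * B1 * tau.
GAMMA_H = 42.57e6  # Hz/T, gyromagnetic ratio of hydrogen

def flip_angle_deg(b1_tesla, tau_s):
    """Flip angle in degrees produced by an RF pulse of amplitude B1 and duration tau."""
    return 360.0 * GAMMA_H * b1_tesla * tau_s

def tau_for_flip(b1_tesla, theta_deg):
    """Pulse duration needed to reach a given flip angle at amplitude B1."""
    return theta_deg / (360.0 * GAMMA_H * b1_tesla)

b1 = 5e-6  # tesla, illustrative RF amplitude
tau90 = tau_for_flip(b1, 90.0)
print(f"90-degree pulse duration at B1 = 5 uT: {tau90 * 1e3:.2f} ms")
```

Doubling the pulse duration at the same B1 would double the flip angle, which is how different amounts of rotation are achieved, as described above.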
9.13 Relaxation
Now it becomes interesting. If the net magnetization rotated 90 degrees in x-y plane and this
means the same thing if we say that the protons raised to a higher energy state. This occurs
because the protons absorbed energy from the RF pulse. This is called the perturbation that
protons do not "like or want" continue in high energy situation "excitation" they tend to return
to the normal or low energy situation "equilibrium". This can be compared with the abnormal
situation in the case of walking on your hands, this is possible, but you do not want to
continue this case for a long time and you inevitably you prefer the natural state is walking on
your feet. A general principle of thermodynamics is that every system seeks its lowest energy
level. The same thing for the protons, and they prefer the lineup with the main magnetic field
or, in other words, they would be in a low power state. The relaxation means the return of a
perturbed system into the original situation "equilibrium" and each relaxation process can be
characterized by a relaxation time. The relaxation process can be divided into two parts: T1
and T2 relaxation.
9.13.1 T1 Relaxation
T1 is the spin-lattice relaxation time, which relates to the recovery of the magnetization along the z direction after the RF pulse. We can think of it as the time it takes tissue to recover from an RF pulse so that another pulse can be given and still produce signal. T1 is called the spin-lattice relaxation time because it refers to the time it takes for the spins to give the energy they obtained from the RF pulse back to the surrounding tissue (the lattice) in order to return to their equilibrium state. T1 relaxation describes what happens in the Z direction: after a little while, the situation is exactly as it was before we sent the RF pulse into the patient. Immediately after the 90° pulse, the magnetization Mxy precesses in the x-y plane, rotating around the z-axis with all protons in phase. After the magnetization has been flipped 90° into the x-y plane, the RF pulse is turned off. Two things will then occur:
1. The spins will go back to the lowest energy state.
2. The spins will get out of phase with each other.
The protons return to their original equilibrium by releasing the absorbed energy in the form of (very little) warmth and RF waves. In principle this means the net magnetization rotates back to align itself with the Z-axis: after the RF excitation pulse stops, the net magnetization re-grows along the Z-axis while emitting radio-frequency waves (Figure 9.17).
T1 relaxation is also known as spin-lattice relaxation, because the energy is released to the surrounding tissue (the lattice). So far, so good: this process is relatively easy to understand because one can, somehow, picture it in one's mind.
Figure 9.17: The net magnetization re-grows along the Z-axis after the RF excitation pulse stops.
The regrowth of the longitudinal magnetization is exponential:

Mz(t) = M0 (1 − e^(−t/T1))

where M0 is the equilibrium magnetization (100%) and T1 is the time constant of the recovery. After a time t = T1, 63% of M0 has recovered.
Figure 9.19: Example T1 curves for four tissues found in the head (including fat, white matter and CSF), plotted against time (msec). A short T1 corresponds to tissue with a "tighter" molecular structure, such as fat; a long T1 to tissue with a "looser" molecular structure, such as CSF.
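The T1 recovery curves of this kind follow Mz(t) = M0(1 − e^(−t/T1)), with the 63% figure falling out at t = T1. A minimal sketch (the T1 value is illustrative):

```python
# Sketch: exponential T1 recovery, Mz(t) = M0 * (1 - exp(-t/T1)).
import math

def mz(t_ms, t1_ms, m0=1.0):
    """Longitudinal magnetization recovered at time t after a 90-degree pulse."""
    return m0 * (1.0 - math.exp(-t_ms / t1_ms))

t1 = 500.0  # ms, illustrative tissue T1
for t in (0.0, t1, 3 * t1, 5 * t1):
    print(f"t = {t:6.0f} ms -> Mz = {mz(t, t1):.3f} of M0")
```

At t = T1 the recovery is 1 − e^(−1) ≈ 0.63, i.e. the 63% mark on the curve; after about 5·T1 the recovery is essentially complete.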
9.13.2 T2 Relaxation
As I mentioned before, the relaxation process is divided into two parts. The second part, T2 relaxation, is a bit more complicated. We have found that students in general, and even some radiologists, have difficulty understanding T2, as well as the relationship between T1 and T2.
First of all, it is very important to realize that T1 and T2 relaxation are two independent
processes. The one has nothing to do with the other. The only thing they have in common is
that both processes happen simultaneously. T1 relaxation describes what happens in the Z
direction, while T2 relaxation describes what happens in the X-Y plane. That's why they have
nothing to do with one another. I cannot emphasize this enough.
Let's go back one step and have a look at the net magnetization vector before we apply
the 90o RF pulse. The net magnetization vector is the sum of all the small magnetic fields of
the protons, which are aligned along the Z-axis.
Each individual proton is spinning around its own axis. Although they may be rotating with
the same speed, they are not spinning in-phase or, in other words, there is no phase
coherence. The arrows of the two wheels from the previous example would point in different
directions. When we apply the 90o RF pulse something interesting happens. Apart from
flipping the magnetization into the X-Y plane, the protons will also start spinning in-phase!!
So, right after the 90o RF pulse the net magnetization vector (now called transverse
magnetization) is rotating in the X-Y plane around the Z-axis at the Larmor frequency
(Figure 9.20A).
Figure 9.20: (A) Immediately after the 90° pulse the transverse magnetization Mxy rotates in the X-Y plane with all spins in phase, giving maximum signal. (B, C) The spins progressively de-phase. (D) Phase coherence is lost and the signal decays with time: the Free Induction Decay (FID).
Transverse magnetization is formed by tilting the longitudinal magnetization into the transverse plane with a radiofrequency pulse. The transverse magnetization induces an MR signal in the radiofrequency coil. Immediately after its formation it has its maximum magnitude and all of the protons are in phase, so the vectors all point in the same direction. However, they do not stay like this: the transverse magnetization starts decreasing in magnitude immediately as the protons go out of phase. This process of de-phasing, and the accompanying reduction of the transverse magnetization, is called transverse relaxation.
This is similar to a group of soldiers marching in step (in phase). If one of them stumbles, a state of mini-chaos results: the others bump into him and end up walking in different directions. The soldiers have got out of phase; they have de-phased.
A similar situation happens with the vectors in MRI. Remember that each proton can be
thought of as a tiny bar magnet with a north and a south pole. And two poles of the same sign
repel each other. Because the magnetic fields of each vector are influenced by one another the
situation will occur that one vector is slowed down while the other vector might speed up. The
vectors will rotate at different speeds and therefore they are not able to point into the same
direction anymore: they will start to de-phase. At first the amount of de-phasing will be small
(Figure 9.20B, C), but quickly that will increase until there is no more phase coherence left:
there is not one vector pointing in the same direction anymore.
In the meanwhile the whole lot is still rotating around the Z-axis in the X-Y plane (Figure
9.20D). A characteristic time representing the decay of the signal by 1/e, or 37%, is called the
T2 relaxation time. 1/T2 is referred to as the transverse relaxation rate. This process of getting
from a total in-phase situation to a total out-of-phase situation is called T2 relaxation.
The rate of de-phasing is different for each tissue. Fat tissue de-phases quickly, while water de-phases much more slowly.
Here, too, we can draw a curve (Figure 9.21).

Figure 9.21: Example T2 decay curves for two tissues. Tissue A de-phases faster than tissue B; after a time t = T2 the transverse magnetization has decayed to 37% of its initial value.
One more remark about T2: it happens much faster than T1 relaxation. T2 relaxation happens
in tens of milliseconds, while T1 can take up to seconds. T2 relaxation is also called spin–spin
relaxation because it describes interactions between protons in their immediate surroundings
(molecules).
Table 9.2: Approximate spin density (SD) and relaxation times (T1, T2) for various tissues.

Tissue            SD     T1 (ms)   T2 (ms)
Water             100    2700      2700
Skeletal muscle   79     720       55
Cardiac muscle    80     725       60
Liver             71     290       50
Fat               -      360       30
Bone              <12    <100      <10
Spleen            79     570       50
Kidney            81     505       -
Gray matter       84     405       105
White matter      70     345       65
There is also a reversible bulk de-phasing effect caused by local field inhomogeneities; its characteristic time is referred to as the T2* relaxation time. These additional de-phasing fields come from the main magnetic field inhomogeneity, differences in magnetic susceptibility among tissues or materials, chemical shift, and the gradients applied for spatial encoding. Since T2* is usually much smaller than T2, the signal decay of an FID is almost completely caused by T2* effects; in general, T1 > T2 > T2*. This de-phasing can be eliminated by using a 180° pulse, as in a spin-echo sequence. Hence, in a spin-echo sequence only the "true" T2 relaxation is seen.
In gradient-echo (GRE) sequences, there is no 180° refocusing pulse, and these de-phasing
effects are not eliminated. Hence, transverse relaxation in GRE sequences (i.e., T2*
relaxation) is a combination of “true” T2 relaxation and relaxation caused by magnetic field
inhomogeneities. T2* is shorter than T2 (Figure 9.22), and their relationship can be expressed
by the following equation, where γ is the gyromagnetic ratio:
1/T2* = 1/T2 + γ·ΔBinhom

or, equivalently,

1/T2* = 1/T2 + 1/T2′

where 1/T2′ = γ·ΔBinhom, γ is the gyromagnetic ratio and ΔBinhom is the magnetic field inhomogeneity across a voxel.
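A small numeric sketch of the relation 1/T2* = 1/T2 + γ·ΔBinhom (all values are illustrative; γ is taken in Hz/T, one common convention, and the function name is invented):

```python
# Sketch: combining "true" T2 relaxation with field-inhomogeneity de-phasing,
# 1/T2* = 1/T2 + gamma * dB_inhom. Illustrative values only.
GAMMA_H = 42.57e6  # Hz/T, gyromagnetic ratio of hydrogen

def t2_star(t2_s, db_inhom_tesla):
    """Effective T2* (seconds) given the true T2 and the voxel field inhomogeneity."""
    return 1.0 / (1.0 / t2_s + GAMMA_H * db_inhom_tesla)

t2 = 0.060   # s, "true" T2 (illustrative)
db = 1e-6    # T of field inhomogeneity across a voxel (illustrative)
print(f"T2* = {t2_star(t2, db) * 1e3:.2f} ms (vs T2 = {t2 * 1e3:.0f} ms)")
```

Even a microtesla of inhomogeneity shortens the effective decay time dramatically, which is why the FID decays with T2* rather than T2.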
Remember this:
T1 and T2 relaxation are two independent processes, which happen simultaneously.
T1 happens along the Z-axis; T2 happens in the X-Y plane.
T2 is much quicker than T1
When both relaxation processes are finished, the net magnetization vector is aligned with the main magnetic field (B0) again and the protons are spinning out of phase: the situation before we transmitted the 90° RF pulse.
Figure 9.22: T2 and T2* relaxation curves plotted against echo time. T2* is shorter than T2; each signal has decayed to 37% after its characteristic time.
9.14 Acquisition
During the relaxation processes the spins shed the excess energy they acquired from the 90° RF pulse in the form of radio frequency waves. In order to produce an image we need to pick up these waves before they disappear into space. This is done with a receive coil. The receive coil can be the same as the transmit coil or a different one. An interesting, and ever so important, fact is the position of the receive coil.
The receive coil must be positioned at right angles to the main magnetic field (B0). Failing
to do so will result in an image without signal. This is why: if we open up a coil we see it is
basically nothing but a loop of copper wire. When a magnetic field goes through the loop, a
current is induced (Figure 9.23).
Figure 9.23: A magnetic field (Bo) passing through a loop of wire induces a current in the loop.
Figure 9.24: The receive coil is positioned at right angles to Bo (the Z-axis).
Remember this:
The only proper way to position the receive coil is at right angles to Bo.
Note: Many coils are specifically designed for a certain body part. Take for instance the head coil: when it is fixed on the scanner table it seems that B0 runs through the coil. This is only an optical illusion. The coil is designed such that the loops of copper wire which make it up are at right angles to B0. Designing a coil for a bore-type magnet, where B0 runs through the length of the body, is exceptionally difficult. If you open up a head coil you will probably see two copper wire loops, saddle-shaped and positioned at right angles to one another. There are two coils in order to receive enough signal, because saddle-shaped coils are relatively inefficient.
A radio frequency wave, like any electromagnetic wave, has an electric and a magnetic component. The two components are at right angles to one another and travel together in the same direction at the speed of light (Figure 9.25).
Figure 9.25: An electromagnetic wave, with wavelength λ, moving in the propagation direction.
It is the magnetic component we are interested in, because that is what induces the current in the receive coil. Because the receive coil is positioned at right angles to B0, it picks up only the magnetization rotating in the X-Y plane, which is what T2 relaxation describes. T2 relaxation is a decaying process: phase coherence is strong in the beginning but rapidly diminishes until none is left. Consequently, the received signal is strong in the beginning and quickly becomes weaker due to T2 relaxation (Figure 9.26).
Figure 9.26: Free Induction Decay (FID). After the 90° pulse the transverse magnetization Mxy produces a rapidly decaying signal.
This signal is called the Free Induction Decay (FID). The FID is the signal we would receive in the absence of any magnetic field inhomogeneity. In practice the decay goes much faster, due to local (microscopic) magnetic field inhomogeneity and chemical shift, which are known as T2* effects. The signal we receive therefore decays much faster than T2 alone would predict: in roughly 40 milliseconds it is reduced to practically zero. This poses a problem, as we will see later.
If there were no signal damping in the FID, all lines in the spectrum (Figure 9.29c) would be infinitely narrow.
Figure 9.29: Fourier transforms (FT) converting time domain data to
frequency domain data, and vice versa.
In real life, there is always relaxation leading to such a damping (see Figure. 9.30) and
therefore, we always find line broadening in the spectra which depends on the exponential
decay of the FID: the faster the decay, the broader the lines.
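This inverse relationship between decay time and linewidth can be made concrete: for an exponential decay e^(−t/T2), the resulting Lorentzian line has a full width at half maximum of 1/(π·T2). A hedged sketch (decay times are illustrative):

```python
# Sketch: the faster the FID decays, the broader the spectral line.
# For an exponential decay exp(-t/T2), the Lorentzian full width at
# half maximum is FWHM = 1 / (pi * T2).
import math

def fwhm_hz(t2_s):
    """Lorentzian linewidth (Hz) for an exponential decay with time constant T2."""
    return 1.0 / (math.pi * t2_s)

for t2 in (0.100, 0.010):  # seconds, illustrative decay times
    print(f"T2 = {t2 * 1e3:4.0f} ms -> linewidth ~ {fwhm_hz(t2):6.2f} Hz")
```

Shortening the decay by a factor of ten broadens the line by the same factor, which is exactly the behaviour the text describes.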
Figure 9.30: The exponential decay of the FID (intensity versus time) due to relaxation.
9.17 Gradient Coils
As we explained previously, to produce an image you must stimulate the hydrogen nuclei in the body and then determine the location of those nuclei within the body. These tasks are accomplished using the gradient coils.
To make this clearer, consider the following thought experiment. If we assume a completely homogeneous magnetic field (this ideal situation does not exist), then all the protons in the body will spin at the same Larmor frequency. This also means that, when returning to equilibrium, all protons give the same signal. In that case we cannot know whether the signal comes from the head or from the foot, so we will not get a meaningful image.
The solution to our problem can be found in the characteristics of the RF wave: phase, frequency and amplitude.
First, we divide the body up into volume elements, also known as voxels.
The protons within a voxel emit an RF wave with a known phase and frequency.
The amplitude of the signal depends on the number of protons in the voxel.
The answer to our problem is: gradient coils.
The gradient coils are resistive electromagnets that enable us to create additional magnetic fields, which are superimposed on the main magnetic field B0. The gradient coils are used to spatially encode the positions of the MRI spins by varying the magnetic field linearly across the imaging volume, so that the Larmor frequency varies as a function of position.
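This position-dependence of the Larmor frequency can be sketched as f(z) = γ·(B0 + Gz·z). The gradient strength below and the function name are illustrative:

```python
# Sketch: a linear gradient makes the Larmor frequency a function of position,
# f(z) = gamma * (B0 + Gz * z). Illustrative gradient strength.
GAMMA_H = 42.57  # MHz/T, gyromagnetic ratio of hydrogen

def larmor_at(z_m, b0_t=1.5, gz_t_per_m=0.010):
    """Larmor frequency (MHz) at position z along the gradient axis."""
    return GAMMA_H * (b0_t + gz_t_per_m * z_m)

for z in (-0.20, 0.0, 0.20):  # metres along the bore
    print(f"z = {z:+.2f} m -> f = {larmor_at(z):.4f} MHz")
```

Spins at different positions now precess at slightly different frequencies, which is exactly what makes spatial encoding possible.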
To achieve adequate image quality and frame rates, the gradient coils in the MRI system must rapidly change the strong static magnetic field by approximately 5% in the area of interest. High-voltage (a few kilovolts) and high-current (hundreds of amps) power electronics are required to drive these gradient coils. Notwithstanding the large power
requirements, low noise and stability are key performance metrics since any ripple in the coil
current causes noise in the subsequent RF pickup. That noise directly affects the integrity of
the images. To differentiate tissue types, the MRI systems analyze the magnitude of the
received signals. Excited nuclei continue to emit a signal until the energy absorbed during the
excitation phase has been released. The time constant of these exponentially decaying signals
ranges from tens of milliseconds to over a second; the recovery time is a function of field
strength and the type of tissue. It is the variations in this time constant that allow different
tissue types to be identified.
9.18 Magnetization Gradients
There are three individual gradients, as shown in figure 9.31. An MRI system must have three sets of wires (the x, y and z gradient coils) to produce gradients in three dimensions and thereby create an image slice over any plane within the patient's body. Each set creates a magnetic field in a specific direction: x, y or z. When a current is fed into the Z gradient coil, a magnetic field is generated in the Z direction (Figure 9.31A). The same goes for the other gradients (Figure 9.31B, C).
The application of each gradient field and the excitation pulses must be properly
sequenced, or timed, to allow the collection of an image data set. Therefore, in a transverse
image, the Z gradient would be used in “selecting” a slice of tissue to image. That means the
spatial location of the 2D plane to be imaged is controlled by changing the excitation
frequency.
Figure 9.31: The three gradient coil sets, each superimposing a linear field variation on the main field Bo: (A) the X gradient coil (Gx), (B) the Y gradient coil (Gy), (C) the Z gradient coil (Gz).
After the excitation sequence is complete, the X gradient may be used for frequency encoding: a properly applied gradient in the x direction spatially changes the resonant frequency of the nuclei as they return to their equilibrium state, and the frequency content of the signal can then be used to locate the nuclei in the X direction. Similarly, a gradient field properly applied in the Y direction spatially changes the phase of the resonant signals and can hence be used to detect the location of the nuclei in the Y direction; the Y gradient is thus used for phase encoding. In short, by applying gradient and RF excitation signals in the proper sequence and at the proper frequencies, the MRI system maps out a 3-D section of the body. Figure 9.32 shows schematically how the three gradient coils form a cylinder. This cylinder is placed in the bore of the main magnet coil.
Figure 9.32: Diagram of the coils within the bore of the magnet that are used to create gradients. The X coils create a field varying from left to right, the Y coils from top to bottom, and the Z coils from head to toe.
Let's move on and discuss how the gradients are used to code the signal.
9.19 Signal Coding
To explain this subject clearly, let us make some assumptions: consider an axial image of the brain using a 1.5 Tesla magnet, and assume the magnetic field is homogeneous over the whole body from head to toe. (Reality is quite different: the field is homogeneous only within a sphere of about 40 cm diameter at the iso-centre, the centre of the MRI bore, but this assumption simplifies the explanation.) Just as important as the strength of the main magnet is its precision. The straightness of the magnetic field lines within the centre (or, as it is technically known, the iso-centre) of the magnet needs to be near-perfect. This is known as homogeneity. Fluctuations (inhomogeneities in the field strength) within the scan region should be less than three parts per million (3 ppm). When we put a patient in the magnet, all the protons, from head to toe, align with B0. They spin at the Larmor frequency of 63.855 MHz (Figure 9.33).
Figure 9.33: Patient in the magnet: all the protons align with B0 and spin at the Larmor frequency.
If we use a 90o excitation RF-pulse to flip the magnetization into the x-y plane, then all the
protons would react and return a signal. We would have no clue where the signal comes from:
from head or toe.
Figure 9.34: A 90° RF excitation pulse applied together with the slice-selection gradient (Gs).
receive the returned signal, which comes from the single slice in the head. That is, we have identified the site by using the Z-gradient (Gz) (Figure 9.35).
These frequencies are only used for this example; in reality the differences are much smaller.
Figures 9.35 and 9.36: Under the Z-gradient, the precessional frequency is slightly lower at one end of the field of view and slightly higher at the other, passing through a null at the isocentre. The transmitted RF pulse has a finite bandwidth centred on its carrier frequency (63.7 MHz in this example).
The carrier frequency of the RF excitation pulse may be changed as shown in figure 9.36.
The carrier frequency of the transmitted RF pulse determines which spins along the patient
will resonate (because they have a matching Larmor frequency). If multiple slices are to be
acquired in quick sequence, the carrier frequency can be set to determine the location of the
imaging slice in the patient.
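The relation between carrier frequency and slice position can be sketched numerically. A minimal Python sketch, assuming illustrative values (B0 = 1.5 T, Gz = 10 mT/m) and the γ0 = 42.58 MHz/T used later in this chapter; `slice_position` is a hypothetical helper, not scanner software:

```python
# Sketch: which z-position resonates for a given RF carrier frequency?
# Assumed illustrative values: B0 = 1.5 T, Gz = 10 mT/m.

GAMMA_BAR = 42.58e6   # Hz/T, gyromagnetic ratio of 1H divided by 2*pi
B0 = 1.5              # T, main magnetic field
GZ = 10e-3            # T/m, slice-selection gradient

def slice_position(carrier_hz):
    """Invert the Larmor equation f = GAMMA_BAR * (B0 + GZ * z) for z."""
    return (carrier_hz / GAMMA_BAR - B0) / GZ

f0 = GAMMA_BAR * B0                              # isocentre frequency (~63.9 MHz)
print(slice_position(f0))                        # 0.0 -> the isocentre
print(round(slice_position(f0 + 42.58e3), 6))    # +42.58 kHz offset -> z = 0.1 m
```

Shifting the carrier by γ0·Gz·Δz hertz moves the excited slice by Δz metres, which is exactly how the carrier frequency "selects" the imaging slice in the text above.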
The slice thickness is given by: Δz = Δf/(γ0·GSS), where Δz is the slice thickness, Δf the transmitted RF bandwidth, γ0 the gyromagnetic ratio (42.58 MHz T-1 for protons), and GSS the slice-selection gradient strength.
Figure 9.37A shows the effect of varying the steepness of the gradient while keeping the RF-pulse bandwidth the same; in Figure 9.37B, alternatively, the steepness of the gradient is kept the same while the bandwidth of the RF-pulse is varied. Either change alters the slice thickness: it may be reduced by increasing the gradient of the magnetic field (dashed line in Figure 9.37A) or by decreasing the RF pulse bandwidth (or transmit bandwidth, Figures 9.37B & 9.38). A thinner slice produces better anatomical detail, the partial volume effect being less, but it takes longer to excite.
In practice, the slice thickness is determined by a combination of both gradient steepness and RF-pulse bandwidth.
The total magnetic field at a position ZSS (SS = slice selection) during application of GZ is
given by: B0+ZSS·GZ and the spatially selective excitation energy or frequency, respectively,
can be easily calculated using the Larmor equation.
Figure 9.37: Slice thickness depends on RF bandwidth and gradient strength: a low gradient or a wide bandwidth gives a large slice thickness. (A) For a fixed gradient strength, the RF bandwidth determines the slice thickness; (B) for a fixed RF bandwidth, the gradient strength determines the slice thickness.
Figure 9.38: Reduction of slice thickness by decreasing the RF (or transmit) bandwidth.
A typical slice thickness is 2-10 mm. The RF pulse inevitably contains a certain amount of
electromagnetic energy at frequencies slightly higher or lower than the intended bandwidth,
thus mildly exciting tissues either side of the desired slice. To prevent this affecting the image
slice, a gap (say 10% of the slice thickness) may be left between slices, although this is not
necessary when the slices are interleaved.
For example, for a 10 mm slice thickness using a gradient magnetic field strength of 10 mT m-1, the transmitted RF pulse bandwidth would be about 4.3 kHz (using γ0 = 42.58 MHz T-1).
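The worked example above can be checked directly from the Larmor relation Δf = γ0·GSS·Δz. A minimal Python sketch (`transmit_bandwidth` is a hypothetical helper name):

```python
# Sketch of the worked example: transmit bandwidth needed for a 10 mm slice
# with a 10 mT/m slice-selection gradient, using gamma0 = 42.58 MHz/T.

GAMMA_BAR = 42.58e6  # Hz/T

def transmit_bandwidth(gradient_t_per_m, slice_thickness_m):
    """Delta-f = gamma0 * G_SS * delta-z (Larmor equation across the slice)."""
    return GAMMA_BAR * gradient_t_per_m * slice_thickness_m

bw = transmit_bandwidth(10e-3, 10e-3)
print(round(bw))  # 4258 Hz, i.e. about 4.3 kHz as stated in the text
```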
In order to get optimal image resolution, very thin slices with a high SNR are needed. But the thinner the slice, the fewer protons contribute to the signal: the SNR decreases while the spatial resolution increases. Spatial resolution is the ability to distinguish one structure from another. Conversely, increasing the slice thickness increases the signal to noise ratio but reduces spatial resolution, and thicker slices bring other problems such as an increase in partial volume effects.
The poorer SNR of thin slices can be compensated for to some extent by increasing the number of acquisitions or by a longer TR. Yet this is accomplished only at the expense of the overall image acquisition time (the period of time required to collect the image data, not including the time necessary to reconstruct the image) and reduces the cost efficiency of the MR imaging system. The slice thickness is determined by: (1) the steepness of the slope of the gradient field (GSS) and (2) the bandwidth of the 90° RF-pulse.
Figure 9.39: The axial slice just created by the Gz gradient: all protons within the slice precess at the same frequency and at the same phase.
Within the slice there are still an awful lot of protons and we still don't know from where the
signal is coming from within the slice. Whether it comes from anterior, posterior, left or right.
Further encoding is therefore required in order to allow us to pinpoint the exact origin of the
signals.
Important Notes:
The thickness of the slice can be changed by varying the steepness of the magnetic
field gradient, or by changing the transmitted RF pulse bandwidth as will be discussed
later in detail (section 9.8.2).
The RF pulse and the magnetic field gradient must be applied together. This process may
be depicted in a pulse sequence timing diagram.
The shape of the RF excitation pulse in time is not a square (on/off) shape: to excite
a discrete range of frequencies (a slice), a sinc-shaped pulse, sin(x)/x, is used.
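The last note can be sketched numerically with NumPy (illustrative sampling values, not scanner code): the spectrum of a sinc-shaped pulse is concentrated in an approximately flat band, which is what makes the excited slice well defined.

```python
# Sketch: why a sinc-shaped RF pulse excites a sharp slice. The Fourier
# transform of sin(x)/x is approximately a rectangle of frequencies,
# i.e. a well-defined band and hence a well-defined slice.
import numpy as np

n = 4096
dt = 1e-5                          # s, sample spacing (illustrative)
t = (np.arange(n) - n // 2) * dt   # time axis centred on the pulse
bw = 2000.0                        # Hz, desired excitation bandwidth

pulse = np.sinc(bw * t)            # np.sinc(x) = sin(pi*x)/(pi*x)

spectrum = np.abs(np.fft.fftshift(np.fft.fft(pulse)))
freqs = np.fft.fftshift(np.fft.fftfreq(n, dt))

# Average spectral magnitude well inside versus well outside the band
# (the band edges sit at +/- bw/2):
inside = spectrum[np.abs(freqs) < 0.4 * bw].mean()
outside = spectrum[np.abs(freqs) > 0.6 * bw].mean()
print(inside / outside)  # large ratio: energy is concentrated in the band
```

A truncated sinc is not a perfect rectangle, which is one reason real slice profiles are not perfectly rectangular (a point the cross-talk discussion later in the chapter relies on).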
9.19.2 Frequency Encoding Gradient
Figure 9.39 shows the axial slice, which has just been created by the Gz gradient. The protons in this slice spin at the same frequency and have the same phase, so we still cannot tell whether a signal originates from anterior, posterior, left or right within the slice. Further encoding is therefore required to pinpoint the exact origin of the signals. The frequency encoding gradient is a static gradient field, just like the slice-selection magnetic field gradient. It does the same thing: it causes a range of Larmor frequencies to exist in the direction in which it is applied (according to the Larmor equation).
To encode in the left-right direction, the second gradient (Gx) is switched on, creating an additional gradient magnetic field in the left-right direction. What we need now is one more encoding to determine whether the signal comes from the left, the centre or the right side of the head. The protons on the left-hand side spin at a lower frequency than the ones on the right (Figure 9.40).
Figure 9.40: Under the frequency encoding gradient (Gx), protons across the patient precess at different frequencies within the receiver bandwidth.
By causing this range of frequencies to exist, we can use the Fourier transform to separate
them out after we measure an MRI signal as shown in Figure 9.41 (which is a mix of all
signals from a slice).
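This separation step can be sketched in a few lines of NumPy. A hypothetical mixed signal from two "columns" at 200 Hz and 300 Hz (illustrative frequencies and amplitudes, far below real Larmor frequencies) is taken apart by the Fourier transform:

```python
# Sketch: the received MR signal is a mixture of frequencies, one per
# column of the slice; the Fourier transform separates them out.
import numpy as np

fs = 2000                      # Hz, sampling rate
t = np.arange(0, 1, 1 / fs)    # 1 s of signal, 2000 samples
# Two hypothetical columns with different proton densities (amplitudes):
signal = 1.0 * np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

spectrum = np.abs(np.fft.rfft(signal)) / (len(t) / 2)  # normalized magnitude
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The two peaks recover both position (frequency) and amplitude:
peaks = freqs[spectrum > 0.25]
print(peaks)  # [200. 300.]
```

Each recovered frequency maps to a column position, and the recovered amplitude to how many protons contribute from that column, which is exactly the role frequency encoding plays.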
The protons will accumulate an additional phase shift because of their different frequencies, but, and this is utterly important, the phase difference already acquired, generated by the phase encoding gradient in the previous step, will remain. Now it is possible to determine whether the signal comes from the left, centre or right-hand side of the slice. We can pinpoint the exact origin of the signals, which are received by the coil.
Figure 9.42: Pulse sequence with the frequency encoding gradient (Gf) switched on during the echo, following the 90° RF pulse and slice-selection gradient (Gs).
If the frequency encoding gradient is not applied at the same time as measuring the MRI
signal, the signals from the different columns in the imaging slice will all have the same
frequency. We cannot, therefore, use the Fourier transform to separate them out.
9.19.2.2 Frequency Encoding in Both Directions
It has been shown that frequency encoding allows us to localize the signal from within an
imaging slice into columns. The next step is to encode the columns (into rows) so that we can
"plot" unique signal values into an array of pixels to get an image of the slice. One might think
that we can simply apply a third magnetic field in a direction perpendicular to the frequency
encoding gradient within the imaging slice as shown in figure 9.43.
Figure 9.43: Example nine-pixel image slice. Frequency encoding in two directions within an image slice does not produce unique Larmor frequencies related to position; it becomes impossible to deduce unique signal intensity values for image pixels.
That is to say, why not just frequency-encode in the other direction too? Unfortunately, that
doesn't work. If frequency encoding is performed in two directions, it becomes impossible to
deduce signal intensity values for unique image pixels. This is because the Fourier transform
can tell us the total amplitude of the signal at a particular frequency, but when that amplitude
is the sum of multiple voxels in the image slice, we don't have enough information to plot
signal intensity values in unique pixels in an image.
9.19.3 Phase Encoding Gradient
In order to encode the image or the imaged object along the so-called phase encoding direction, here the y direction (but the direction could also be the z or x direction), a gradient along the y direction is applied, and thereafter the signal is sampled. The phase encoding gradient is a magnetic field gradient that allows the encoding of the spatial signal location along a second dimension by different spin phases. The phase encoding gradient is applied after slice selection and excitation (before the frequency encoding gradient), orthogonally to the other two gradients. The spatial resolution is directly related to the number of phase encoding steps (gradients). In fact it is necessary to apply this gradient several times, each time increasing the gradient by an equidistant amount.
This is done by briefly switching a gradient field on and then off again at the beginning of the pulse sequence, right after the radio frequency pulse; the magnetization of the outer voxels will then precess either faster or slower relative to that of the central voxels.
During readout of the signal, the phase of the x-y magnetization vector in different columns will thus systematically differ. When the x- or y-component of the signal is plotted as a function of the phase encoding step number n, and thus of time n·TR, it varies sinusoidally: fast at the left and right edges and slowly at the centre of the image. Voxels at the image edges along the phase encoding direction are thus characterized by a higher 'frequency' of rotation of their magnetization vectors than those towards the centre.
As each signal component has experienced a different phase encoding gradient pulse, its exact spatial origin can be specifically and precisely located by Fourier transformation analysis. Spatial resolution is directly related to the number of phase encoding levels (gradients) used. The phase encoding direction can be chosen freely, e.g. whenever oblique MR images are acquired, or frequency and phase encoding directions can be exchanged to control wrap-around artifacts.
In an MRI sequence diagram this procedure is indicated by the phase encoding table (see Figure 9.44). As seen in this figure, it consists of 32 steps in this example, each lasting 0.250 ms, with an increment of 0.734085 mT/m (millitesla per metre). One could go through this figure from the bottom to the top, or from the top to the bottom; this is called linear phase encoding. One could also go from the middle and outward, like 0, 1, -1, 2, -2 etc; this is called low-high phase encoding.
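The two orderings can be sketched as short Python generators (hypothetical helper names, shown here for a small 8-step example rather than the 32 steps of the figure):

```python
# Sketch of the two phase-encoding step orders described above.

def linear_order(n):
    """Linear ordering: steps from -n/2 up to n/2 - 1, one increment apart."""
    return list(range(-n // 2, n // 2))

def low_high_order(n):
    """Centre-out (low-high) ordering: 0, 1, -1, 2, -2, ... (n steps total)."""
    order = [0]
    k = 1
    while len(order) < n:
        order.append(k)
        if len(order) < n:
            order.append(-k)
        k += 1
    return order

print(linear_order(8))    # [-4, -3, -2, -1, 0, 1, 2, 3]
print(low_high_order(8))  # [0, 1, -1, 2, -2, 3, -3, 4]
```

Low-high ordering acquires the centre of k-space (the zero-gradient step) first, which is why it is attractive when contrast-defining data must be collected early.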
Figure 9.44: The phase encoding gradient table: Gy (mT/m) plotted against time (ms), with 32 steps in increments of 0.734085 mT/m.
Note that one particular step corresponds to applying no gradient at all; for low-high encoding, this would be the first step. In order to code the protons further, the gradient is switched on very briefly. During the time the gradient is switched on, an additional gradient magnetic field is created in the anterior-posterior direction.
As you can see, small volumes (voxels) have been created. Each voxel has a unique combination of frequency and phase, and the number of protons in each voxel determines how strong the signal is (its amplitude). The signal received contains a complex mix of frequencies, phases and amplitudes, each from a different location (voxel) within the brain.
The computer receives this massive amount of information and then a "Miracle" occurs. In about 0.25 seconds the computer analyzes all of it and creates an image. The "Miracle" is a mathematical process known as the Two-Dimensional Fourier Transform (2DFT), which enables the computer to calculate the exact location and intensity (brightness) of each voxel.
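The 2DFT reconstruction step can be demonstrated on a tiny hypothetical image (illustrative 8×8 array, not real scan data): forward-transforming it plays the role of the measured k-space data, and the inverse 2D transform recovers the image.

```python
# Sketch of the "miracle": a 2D inverse Fourier transform turns k-space
# data back into an image.
import numpy as np

image = np.zeros((8, 8))
image[2, 3] = 1.0   # a bright hypothetical "voxel"
image[5, 5] = 0.5   # a dimmer one

kspace = np.fft.fft2(image)            # stands in for the measured raw data
reconstructed = np.fft.ifft2(kspace)   # the 2DFT reconstruction step

# Both location and intensity of each voxel are recovered exactly:
print(np.allclose(reconstructed.real, image))  # True
```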
This can be summarized (slice selection, phase encoding and frequency encoding) as
follows:
After slice selection the encoding of spatial information has only to be performed in two
dimensions. This can be accomplished by magnetic field gradients in the respective directions.
These are differentiated by the time of gradient switching, i.e. before or during data acquisition. The first case, the so-called phase encoding, was discussed above. In the second case (frequency encoding), a readout gradient is switched on during data acquisition, and the gradient direction is therefore called the 'readout direction'. The readout gradient produces an additional, linearly varying magnetic field and, due to the proportionality between magnetic field and frequency, the latter also varies linearly. Spins at different positions therefore emit radiation with different frequencies, which can be distinguished after Fourier transformation. Each frequency is related to a specific position on the readout axis, and the intensity of the radiation at this frequency is proportional to the number of spins emitting at this position.
9.20 K-Space
K-space is a formalism widely used in magnetic resonance imaging, introduced in 1979 by Likes and in 1983 by Ljunggren and Twieg. It forms the raw data matrix in MRI, which can be converted into an image using the Fourier transformation (Figure 9.45).
Figure 9.45: Raw data matrix in MRI converted into an image using FT.
The value of k defines the number of phase cycles per metre of distance from the origin (x = 0) that a magnetization vector passes through due to application of the magnetic field gradient (G); for a constant gradient applied for a time t, k = γ0·G·t.
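As a quick numerical sketch of this definition (illustrative gradient strength and duration; `k_value` is a hypothetical helper):

```python
# Sketch: spatial frequency k in cycles per metre accumulated under a
# constant gradient, k = gamma0 * G * t.
GAMMA_BAR = 42.58e6  # Hz/T

def k_value(gradient_t_per_m, duration_s):
    """Cycles of phase per metre after applying gradient G for time t."""
    return GAMMA_BAR * gradient_t_per_m * duration_s

print(round(k_value(10e-3, 1e-3), 3))  # 425.8 cycles/m for 10 mT/m over 1 ms
```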
By analogy to frequency given as cycles per unit time, k is called the "spatial frequency". To illustrate the idea of k-space, start with the following question: why is k-space so important? The answer: it helps us to understand how an MRI image is acquired and how various pulse sequences work.
In MRI physics, k-space is the 2D or 3D Fourier transform of the MR image measured. Its
complex values are sampled during an MR measurement, in a premeditated scheme controlled
by a pulse sequence, i.e. an accurately timed sequence of radiofrequency and gradient pulses.
In practice, k-space often refers to the temporary image space, usually a matrix, in which data
from digitized MR signals are stored during data acquisition. When k-space is full (at the end
of the scan) the data are mathematically processed to produce a final image. Thus k-space
holds raw data before reconstruction.
“The MRI data prior to becoming an image (raw or unprocessed data) is what makes up k-space”. Synonyms for k-space are matrix and time domain. The task of an MRI scanner is to recognize and collect MR signals and store them in a specific order which is recognizable for further analysis. At each RF excitation, combinations of different excitations are collected as one complex signal. The read-out MR signal is stored in a 2D array called k-space, containing samples of the continuous Fourier transform of the object's magnetization.
9.21 Gradient Echo Pulse Sequence Diagram
A pulse sequence is the implementation of the hardware components necessary for excitation,
spatial encoding, and data acquisition in MR imaging.
It is described by a scheme in which all RF pulses and gradient amplitudes as well as the
acquisition window are displayed as functions of time for the smallest repetition interval. This
will be explained in detail for the pulse sequence of a simple 2D gradient echo method (Figure
9.46). The upper trace (RF) shows the radiofrequency excitation pulse with a variable flip
angle θ which is usually considerably smaller than the 90° shown in Figure 9.46. This means
that
(i) The signal measured will be smaller, but
(ii) Less time must be allowed to pass before we can take the next measurement
(shorter TR).
This reduces the scan time to a reasonable duration. The three traces indicated by GS, GPh, and readout (RO) represent the individual gradient amplitudes; ACQ shows the acquisition window.
Slice selection (GS) is realized by switching on the appropriate gradient at the same time as the RF pulse. Because different phases are generated within the excited slice by the GS gradient, a rephasing pulse with negative amplitude is necessary to compensate for this effect. Subsequently, the phase encoding (GPh) and readout dephasing gradients are switched on and off before GRO is applied simultaneously with data acquisition.
It must be noted that, due to the orthogonality of the three directions, all phase manipulations can be applied independently. Hence, GPh and the negative readout gradient can also be switched at different times; alternatively, both gradients can be applied at the time of slice-selection rephasing. These details depend on the implementation of the pulse sequence or on the timing parameters selected by the user. By contrast, differences specific to the individual sequences are mentioned explicitly.
Figure 9.46: Pulse sequence diagram of a 2D gradient echo sequence: the RF pulse, the slice-selection gradient GS, the phase encoding gradient GPh (stepped by ΔGPh), the readout gradient GRO, and the acquisition window ACQ (duration TACQ) are shown as functions of time, with TE and TR indicated.
imaging parameters defined by the user, and they can be used e.g. to produce contrast between
different tissues due to their individual relaxation properties. As indicated by the axis breaks,
the length of TR can be raised to any value by increasing the time interval between data
acquisition and the next excitation pulse.
9.22 Gradient Specifications
When you are shopping for an MRI scanner, it is very important to pay special attention to the gradient sub-system. Ideally, when a gradient is switched on it immediately reaches maximum strength, and when switched off the strength immediately returns to zero (Figure 9.47A). Unfortunately this is not the case, as we do not live in an ideal world. In reality the gradient needs a little time to reach maximum strength and to power down (Figure 9.47B). The time it takes to reach maximum strength is called the rise time (Figure 9.47C), and dividing the maximum strength by the rise time gives the slew rate. These are the key specifications of a gradient system.
Figure 9.47: (A) ideal and (B) realistic gradient waveforms; (C) the rise time is the time needed to reach maximum strength. With a maximum strength of 17 mT/m and a rise time of 0.7 ms, the slew rate is 17/0.7 ≈ 24 T/m/s.
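The slew-rate arithmetic can be sketched in a couple of lines. Assuming the figure's example values are 17 mT/m maximum strength and a 0.7 ms rise time (magnitudes typical of clinical systems; `slew_rate` is a hypothetical helper):

```python
# Sketch: slew rate = maximum gradient strength / rise time.

def slew_rate(max_gradient_mt_per_m, rise_time_ms):
    """Returns T/m/s: converts mT/m -> T/m and ms -> s, then divides."""
    return (max_gradient_mt_per_m * 1e-3) / (rise_time_ms * 1e-3)

print(round(slew_rate(17, 0.7), 1))  # 24.3 T/m/s
```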
The performance, and therefore the range of applications that can be done, is mainly determined by the performance of the gradient system. Other issues to look for are the field strength B0, the computer system and the ease of use of the user interface.
If the SNR is low, then the contrast resolution of the image is poor. Mathematically, SNR can be
expressed as the intensity of the signal measured in the region of interest divided by the
standard deviation of the signal intensity in a region outside the anatomy or the object being
imaged (i.e. a region from which no tissue signal is obtained). The SNR is dependent on the
following parameters:
Slice thickness and receiver bandwidth
Field of view
Size of the (image) matrix
Number of acquisitions
Scan parameters (TR, TE, flip angle)
Magnetic field strength
Selection of the transmit and receive coil (RF coil)
Repetition time (TR) is the interval between two successive excitations of the same slice. That
means, it is the length of the relaxation period between two excitation pulses and is therefore
crucial for T1 contrast.
Before we discuss the effects of each of these parameters, it is first necessary to clarify
some concepts.
9.23.2 Pixel, Voxel, Matrix
Images that we get from MRI are digital images consisting of a matrix of pixels (picture elements). The matrix is simply a two-dimensional grid of rows and columns. Each square of the grid is a pixel, which is assigned a value corresponding to the signal intensity. Each pixel of the MR image corresponds to a three-dimensional volume element called a voxel, so each pixel provides information on its corresponding voxel (Figure 9.48).
The RF pulse for one slice also excites protons in adjacent slices, because the resulting slice profiles are not perfectly rectangular (Figure 9.49). Such interference is known as cross-talk: the radio frequency pulse for one slice stimulates protons in the adjacent slices. Cross-talk leads to a reduction in SNR (Figure 9.49b), so small gaps (around thirty percent of the slice thickness) are inserted between slices to minimize the artifact and improve the signal to noise ratio. In selecting an appropriate inter-slice gap one has to find a compromise between an optimal SNR, which requires a gap large enough to completely eliminate cross-talk, and the desire to reduce the amount of information that is missed when the inter-slice gap is too large. In most practical applications small inter-slice gaps of 25-50% of the slice thickness are used. An alternative is to acquire the slices in an interleaved order, which reduces the saturation of protons in adjacent slices, a situation which is undesirable in multi-slice imaging.
Figure 9.50: Types of matrix: (A) Fine Matrix (B) Coarse Matrix.
Larger voxels have an increased signal to noise ratio and better contrast resolution because there are more hydrogen nuclei in the voxel to contribute to the signal. Larger voxels are therefore represented on the image matrix by larger pixels.
The matrix size chosen establishes a pixel size and therefore the size of the voxel it represents. Another way to alter voxel size is the slice thickness used. Assuming the field of view is square, doubling the slice thickness doubles the voxel volume and the signal to noise ratio, thereby increasing contrast resolution; halving the slice thickness halves the signal to noise ratio and therefore decreases the contrast resolution. The field of view also influences the voxel volume. Doubling the field of view doubles the voxel dimension along both in-plane axes, quadrupling the voxel volume and increasing the signal-to-noise ratio by a factor of four; this also increases the contrast resolution of the image, and is the single best and most efficient way to increase signal to noise ratio and contrast resolution. Halving the field of view reduces the voxel volume and reduces the signal to noise ratio to a quarter, and the contrast resolution is decreased.
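These scaling rules can be sketched with a voxel-volume calculation, under the common first-order assumption that SNR scales with voxel volume (illustrative FOV and matrix values; `voxel_volume` is a hypothetical helper):

```python
# Sketch: voxel volume from FOV, matrix size and slice thickness, assuming
# a square field of view and SNR proportional to voxel volume.

def voxel_volume(fov_mm, matrix, slice_mm):
    """Volume in mm^3 of one voxel."""
    pixel = fov_mm / matrix        # in-plane pixel dimension
    return pixel * pixel * slice_mm

base = voxel_volume(240, 256, 5)   # illustrative baseline acquisition
print(round(voxel_volume(240, 256, 10) / base, 2))  # 2.0: double slice -> 2x volume
print(round(voxel_volume(480, 256, 5) / base, 2))   # 4.0: double FOV -> 4x volume
```

The factor of four from doubling the FOV follows because the pixel dimension doubles along both in-plane axes, matching the SNR statement in the text.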
The background noise that comes from the system is a constant amount during a scan, but it differs from patient to patient. Factors affecting the signal amplitude from the tissue also affect the SNR. The best pulse sequence for signal amplitude is the classic spin echo (SE) sequence: its use of the 180° radio frequency pulse to re-phase all of the hydrogen protons in order to create an echo allows for the best signal amplitude. Other sequences, such as the variations of gradient echo, do not re-phase the hydrogen nuclei as effectively and signal is lost. The number of hydrogen protons in the area of tissue to be scanned also has an effect on SNR and contrast resolution. If there are a large number of hydrogen protons in the area, then the signal amplitude will be increased and therefore the contrast resolution will be increased; if the number of protons in the area is low, then the signal will be low and the contrast resolution will be poor.
9.23.5 Scan Parameters (TR, TE, Flip Angle)
Two controls determine tissue contrast: TR (repetition time) and TE (echo time) of the scan.
They can be used for example to produce contrast between different tissues due to their
individual relaxation properties. TR and TE both affect signal-to-noise and contrast resolution.
(a) Repetition time (TR) is the time between successive RF pulses, that is, the duration of a
phase encoding cycle. A long TR allows the protons in all of the tissues to relax back into
alignment with the main magnetic field. A short repetition time will result in the protons
from some tissues not having fully relaxed back into alignment before the next measurement
is made, decreasing the signal from this tissue. In other words, TR controls the T1 weighting of the image by allowing a certain amount of the net magnetization to re-grow along the longitudinal axis, back towards equilibrium, before a signal is read. A long TR increases the signal to noise ratio because more net magnetization has re-grown back to equilibrium and is available to be excited and flipped once again into the transverse plane. A short TR decreases the signal to noise ratio because not as much of the net magnetization has recovered to be excited and flipped again into the transverse plane.
(b) Echo time (TE) is the time at which the electrical signal induced by the spinning protons is
measured. That is, the time between giving the RF pulse (excitation) and the peak (maximum
amplitude) of the echo signal (Fig 9.51). During this time interval, the transverse
magnetization decays, e.g. signal decays, due to the T2 relaxation effects. So TE directly
determines how much the transverse signal decays. For a T2 weighted image, use a TE that
is longer than the T2 of some tissues but shorter than the T2 of other tissues. A long TE
results in reduced signal in tissues like white matter and gray matter since the protons are
more likely to become out of phase. Protons in a fluid will remain in phase for a longer time
since they are not constrained by structures such as axons and neurons. A short echo time
reduces the amount of dephasing that can occur in tissue like white matter and gray matter.
In other words, TE controls the T2 weighting of the tissue signal by allowing a certain amount of the net magnetization to decay in the transverse plane before the signal is read. A long TE decreases the signal to noise ratio because much of the net magnetization has decayed by the time the signal is read. A short TE increases the signal-to-noise ratio because there is still net magnetization in the transverse plane to contribute to the signal.
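The combined effect of TR and TE can be sketched with the standard first-order signal model S ∝ (1 − e^(−TR/T1))·e^(−TE/T2). A minimal Python sketch with illustrative, white-matter-like tissue values (T1 = 800 ms, T2 = 80 ms; `relative_signal` is a hypothetical helper):

```python
# Sketch: relative spin-echo signal as a function of TR and TE for a
# tissue with given T1 and T2 (all times in ms).
import math

def relative_signal(tr_ms, te_ms, t1_ms, t2_ms):
    """Longitudinal recovery over TR times transverse decay over TE."""
    return (1 - math.exp(-tr_ms / t1_ms)) * math.exp(-te_ms / t2_ms)

# Illustrative tissue: T1 = 800 ms, T2 = 80 ms.
print(round(relative_signal(2000, 20, 800, 80), 3))  # long TR, short TE: high signal
print(round(relative_signal(400, 120, 800, 80), 3))  # short TR, long TE: low signal
```

Evaluating the model for two tissues with different T1 or T2 values shows how a chosen TR/TE pair trades signal against contrast, which is the point of Figure 9.52.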
Figure 9.51: Echo time (TE) and repetition time (TR): TE is measured from the RF excitation pulse to the peak of each echo (1st and 2nd), TR between successive excitation pulses.
A long TR and a short TE increase the signal to noise ratio and contrast resolution of the MR images, whereas the combination of a short TR and a long TE decreases the signal-to-noise ratio and contrast resolution. The results of the three parameters discussed so far are summarized in Figure 9.52, together with several other important variables.
Figure 9.52: Summary of TR-TE combinations, with brain images demonstrating the effect of relaxation weighting on contrast: a long TR with a short TE gives proton density weighting; a long TR with a long TE gives T2 weighting; a short TR with a short TE gives T1 weighting; a short TR with a long TE gives poor contrast.
Another sequence parameter affecting the signal to noise ratio is the flip angle: how far the net magnetization has been moved into the transverse plane. A larger flip angle will increase the signal to noise ratio because there is more net magnetization being moved into the transverse plane for a better signal. A small flip angle puts less net magnetization into the transverse plane, so a strong signal is not possible.
Contrast in most MR images is actually a mixture of all these effects; but careful design of the
imaging pulse sequence allows one contrast mechanism to be emphasized while the others are
minimized. The ability to choose different contrast mechanisms by tailoring the appropriate
pulse sequence and choosing the right pulse sequence parameters is what gives MRI its
tremendous flexibility.
In certain cases, the intrinsic differences in T1, T2, T2*, etc, may not be sufficient to
achieve the desired degree or kind of contrast. In those cases additional differences can be
introduced by adding contrast agents (see figure 9.53b): paramagnetic chemicals that localize
in certain tissues/fluids, and artificially change their spin relaxation properties.
Figure 9.53: (a), (b).
In order for an excited spin system to return to its equilibrium magnetization, energy must be
transferred from the spin system to the lattice (surrounding), as discussed in section 9.13. The
return to equilibrium is described by the spin-lattice relaxation time, T1. When T1-weighted
sequences are used, the magnitude of the MR-signal increases with decreasing T1-relaxation
times. Further, the contrast between two tissues will of course also increase with increasing
difference in T1 relaxation times between the two tissues. Sufficient contrast is of particular
importance in differentiating pathological tissue from normal surrounding tissue. Exogenous
MR contrast agents were therefore developed shortly after the first commercial MR systems
became available in the early 1980’s. Today, MR contrast agents are used in a significant proportion of MR examinations, with the highest usage in CNS applications (tumor diagnosis). MR contrast agents are also widely used in MR angiography (MRA). MR contrast agents act by selectively reducing the T1 (and T2) relaxation times of tissue water through spin interaction between the electron spins of the metal-containing contrast agent and water protons in tissue.
There are two classes of MRI contrast agents available,
(1) T1-weighted contrast agents (e.g., gadolinium (Gd3+) and manganese (Mn2+) chelates) are paramagnetic in nature and reduce the T1 relaxation time, resulting in bright contrast in T1-weighted images; and
(2) T2-weighted contrast agents are superparamagnetic materials (e.g., magnetite (Fe3O4) nanoparticles) which reduce T2 relaxation times, giving rise to dark contrast in T2-weighted images. The efficiency of a contrast agent in reducing the T1 or T2 of water protons is referred to as its relaxivity and is defined by the following equation:
1/T1,2 = 1/T01,2 + r1,2·C
where 1/T1,2 is the observed relaxation rate in the presence of the contrast agent, 1/T01,2 is the relaxation rate of pure water, C is the concentration of the contrast agent, and r1 and r2 are the longitudinal and transverse relaxivities, respectively.
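The relaxivity equation can be sketched numerically. A minimal Python sketch with illustrative values (r1 = 4 s⁻¹ mM⁻¹, a magnitude typical of Gd chelates, and a baseline T1 of 1 s; `t1_with_agent` is a hypothetical helper):

```python
# Sketch of 1/T1 = 1/T1_0 + r1 * C: observed T1 after adding a contrast
# agent at concentration C (mM), given baseline T1_0 (s) and relaxivity r1.

def t1_with_agent(t1_0_s, r1_per_s_per_mM, conc_mM):
    """Solve the relaxivity equation for the observed T1."""
    return 1.0 / (1.0 / t1_0_s + r1_per_s_per_mM * conc_mM)

print(round(t1_with_agent(1.0, 4.0, 0.5), 3))  # T1 drops from 1.0 s to ~0.333 s
```

The shortened T1 is what produces the bright signal on T1-weighted images described in point (1).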
Various inorganic nanoparticles have been used as magnetic resonance imaging (MRI)
contrast agents due to their unique properties, such as large surface area and efficient
contrasting effect. Since the first use of superparamagnetic iron oxide (SPIO) as a liver
contrast agent, nanoparticulate MRI contrast agents have attracted a lot of attention. Magnetic
iron oxide nanoparticles have been extensively used as MRI contrast agents due to their ability
to shorten T2* relaxation times in the liver, spleen, and bone marrow. More recently, uniform
ferrite nanoparticles with high crystallinity have been successfully employed as new T2 MRI
contrast agents with improved relaxation properties. Iron oxide nanoparticles functionalized
with targeting agents have been used for targeted imaging via the site-specific accumulation of
nanoparticles at the targets of interest. More recently, extensive research has been conducted
to develop nanoparticle-based T1 contrast agents to overcome the drawbacks of iron oxide
nanoparticle-based negative T2 contrast agents.
9.25.1 Magnetic Resonance Angiography
MRA produces much higher quality images of the body's arteries and veins than other methods.
MRA is the depiction of vessels and of flowing blood: flowing blood is depicted as bright and
stationary tissue as dark. In MRA, coherent gradient echo (GRE) pulse sequences are used;
these use a gradient to reduce magnetic inhomogeneity effects, as opposed to the 180° RF
pulse used in spin echo sequences. GRE sequences should be used when T2*-weighted images
(bright blood/water/CSF) are required with good temporal resolution. MR imaging is thereby
inherently capable of depicting flowing blood in spatially confined vessel regions with a
bright signal without administering a contrast agent, and a quantification of blood flow is also
possible. The intravenous application of gadolinium (Gd)-based MR contrast agents in
conjunction with fast MRA sequences enables the spatially (3D) or temporally (4D) resolved
depiction of MRA. Magnetic resonance venography (MRV) is the type of magnetic resonance
imaging used to visualize veins, the blood vessels that bring blood from the body's internal
organs back into the systemic circulation. MRV uses the same machine as MRI, but special
computer software allows it to extract only the images generated by blood as it flows through
the veins. These images give doctors an idea of whether the blood flow through a vein of
interest is affected by blood clots or other disease processes.
9.25.2 Magnetic Resonance Myelography
MR myelography is a noninvasive technique for studying the spinal canal and subarachnoid
space by high-resolution MRI. The sequences used are T2-weighted fast spin echo pulse
sequences or a refocused gradient echo pulse sequence with strong T2 weighting, providing
high contrast between the spinal cord and its nerves, which appear dark, and the surrounding
cerebrospinal fluid (CSF), which appears bright. MR myelography as part of an entire MR
examination has virtually replaced X-ray myelography in localizing CSF leaks and disc
protrusions, and it can identify spinal canal and foraminal stenosis.
9.25.3 Magnetic Resonance Cholangiopancreatography
Magnetic resonance cholangiopancreatography (MRCP), introduced in 1991, is a 3D fast spin
echo (FSE)-based non-invasive technique designed to produce detailed images of the
hepatobiliary and pancreatic systems, including the liver, gallbladder, bile ducts, pancreas and
pancreatic duct, without the need for contrast injection. This high-resolution volumetric
acquisition is combined with respiratory triggering and acceleration to help achieve excellent
image quality in a short scan time. The automatically generated maximum intensity projection
(MIP) images benefit from bright fluid enhancement and effective background suppression,
resulting in clear, high-resolution 3D structural images.
9.25.4 Chemical Shift Imaging (CSI)
In tomographic images a signal can be assigned to each pixel after a two-dimensional Fourier
transform of the raw data. The integral of the signal is proportional to the intensity (grey scale
value) in the image.
In CSI a complete NMR spectrum is assigned to each pixel instead of a single value, but
the grid is usually much coarser, so that the number of pixels reduces significantly. By using
this method signals that stem from outside the heart can be eliminated and regional differences
within the heart may be detected.
To obtain the spectroscopic information within the FID or echo, we cannot use frequency
encoding to encode spatial information (as in standard MR imaging methods) and therefore,
no readout gradient is used. As in NMR spectroscopy an FID is acquired with frequency
components stemming from nuclei with different environments.
Using no readout gradient means that spatial encoding is performed with phase encoding steps
only, so that for an N×N matrix, N² measurement cycles are required. This is one of the
reasons for the considerably lower matrix size in CSI applications compared to standard
imaging data sets.
In summary, a (2D) CSI sequence provides three-dimensional data sets with two
dimensions for phase encoding of the spatial coordinates and one for the spectroscopic
component.
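Because each of the N² phase encoding steps costs one repetition time, the minimum acquisition time of a 2D CSI experiment grows quadratically with matrix size. A minimal sketch of that relationship (the TR and matrix values below are hypothetical example numbers, not from this chapter):

```python
# Minimum scan time of a 2D CSI acquisition: N^2 phase encoding cycles,
# each taking one repetition time TR; averages multiply the total.
def csi_scan_time(N, TR, averages=1):
    """Return the minimum acquisition time in seconds for an N x N CSI matrix."""
    return N * N * TR * averages

# Hypothetical example: 16 x 16 matrix, TR = 1.5 s, one average.
t = csi_scan_time(16, 1.5)
print(f"{t:.0f} s = {t / 60:.1f} min")   # 384 s = 6.4 min
```

Doubling N to 32 quadruples the time to about 25 minutes, which is why CSI matrices are kept far smaller than standard imaging matrices.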
9.25.5 Diffusion-Weighted Imaging
Diffusion weighted imaging (DWI) is a form of MR imaging based upon measuring the
random Brownian motion of water molecules within a voxel of tissue, and is particularly
useful in cerebral ischaemia and tumour characterisation.
Diffusion-weighted (spin echo) sequences are widely used in medical applications, since they
allow, for example, very early detection of infarcted areas in the brain (stroke diagnostics).
They are accompanied by a signal decrease at places of high diffusion and a relative increase
in areas where motion is hindered.
A great deal of confusion exists in the way the clinicians and radiologists refer to diffusion
restriction, with both groups often appearing to not actually understand what they are referring
to. The first problem is that the term "diffusion weighted imaging" is used to denote a number
of different things:
1. isotropic diffusion map (what most radiologists will refer to as DWI)
2. sequence which results in generation of DWI, b=0 and apparent diffusion coefficient
(ADC) maps
3. a more general term to encompass all diffusion techniques, including diffusion tensor
imaging (DTI).
Additional confusion exists in how to refer to abnormal restricted diffusion. This largely
stems from the initial popularization of DWI in brain stroke, which presented infarcted tissue
as high signal on isotropic maps and described it merely as "restricted diffusion", implying
that the rest of the brain did not demonstrate restricted diffusion, which is clearly not true.
Unfortunately this short-hand is appealing and widespread, rather than the more accurate but
clumsier "diffusion demonstrates greater restriction than one would expect for this tissue". To
make matters worse, many are not aware of the idea of T2 shine-through, another cause of
high signal on DWI.
A much safer and more accurate way of referring to diffusion restriction is to remember
that we are referring to actual apparent diffusion coefficient (ADC) values (the ADC is a
measure of the magnitude of diffusion of water molecules within tissue), and to use wording
such as "the region demonstrates abnormally low ADC values (abnormal diffusion
restriction)" or even "high signal on isotropic images (DWI) is confirmed to represent
abnormal restricted diffusion on ADC maps".
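The ADC values referred to above are computed voxel by voxel from the diffusion-weighted signal decay S(b) = S0·exp(−b·ADC). A minimal sketch with made-up signal intensities (illustrative values, not patient data):

```python
import math

# ADC from a two-point DWI measurement: S(b) = S0 * exp(-b * ADC),
# so ADC = ln(S0 / Sb) / b, with b the diffusion weighting in s/mm^2.
def adc(S0, Sb, b):
    """Apparent diffusion coefficient in mm^2/s from signals at b=0 and b."""
    return math.log(S0 / Sb) / b

# Hypothetical voxels: normal brain tissue vs. an area of restricted diffusion.
normal = adc(S0=1000.0, Sb=400.0, b=1000.0)      # ~9.2e-4 mm^2/s
restricted = adc(S0=1000.0, Sb=700.0, b=1000.0)  # ~3.6e-4 mm^2/s
print(f"normal ADC = {normal:.2e}, restricted ADC = {restricted:.2e}")
```

Note how the restricted voxel is brighter on the high-b image (700 versus 400) yet has the lower ADC; that is exactly the distinction the paragraph above urges radiologists to keep in mind.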
CHAPTER 10
Performance Objectives
After studying Chapter Ten, the student should be able to:
1. Explain the basic concepts of radiation.
2. Know the biological effects of ionizing radiation.
3. Estimate, and explain the basis for, the possible risk of injury, illness, or
death resulting from occupational radiation exposure.
4. Estimate radiation risk and compare it with other types of risk.
5. List and discuss the three cardinal principles of radiation protection.
6. Discuss why distance is the best method of limiting radiation exposure.
7. Discuss the mandate of ALARA as a means of radiation protection.
8. State the proper application and limitations of radiation exposure survey
instruments.
9. Identify commonly used dosimetry devices.
CHAPTER 10 RADIATION HAZARDS AND PROTECTION
10.1 Introduction
Fear of radioactivity, and of radiation in general, persists in the wider community in spite of
the significantly increased use of radiation in medicine, the military, food preparation, power
generation and industry; this is due largely to a lack of knowledge of the subject. As with
many other agents, determining the harmful effects (if any) of the typically low doses of
radiation received in routine daily activities is a very difficult field of science, and one that
can cause divisions between sections of the community. Further, as new technologies emerge
that utilize the physical properties of radiation, the health effect data from exposure can be
incomplete and heightened levels of anxiety may occur.
Understanding the nature of radiation and radioactivity requires a solid grasp of
fundamental scientific knowledge commensurate with the increasing complexity of the
possible exposure scenarios. When ionizing radiation, such as X- and gamma rays, passes
through living tissue without transmitting any energy, no biological effects (and no
radiological image) are produced. When ionizing radiation passes through living tissue and
energy is absorbed, tissue damage can result. Whenever radiation energy is absorbed,
chemical changes are produced virtually immediately, and subsequent molecular damage
follows in a short space of time (seconds to minutes). It is after this, during a much longer
time span of hours to decades, that the biological damage becomes evident. We live in a
naturally radioactive world. But how much do physicians, nurses and medical technicians
who may have to respond in a radiation emergency know about what radiation is, what it does
and how to protect against it? This chapter is directed at medical personnel and outlines basic
concepts of radiation and radiation protection.
10.2 Sources of Ionizing Radiation
Ionizing radiation enters our lives in a variety of ways. It arises from natural processes, such
as the decay of uranium in the Earth, and from artificial procedures like the use of x-rays in
medicine. So we can classify radiation as natural or artificial according to its origin.
10.2.1 Natural sources of Ionizing Radiation
Radiation has always been present and is all around us in many forms (cosmic, radon, plants,
our bodies, radioactive soil and rocks etc.). Life has evolved in a world with significant levels
of ionizing radiation, and our bodies have adapted to it. Many radioisotopes are naturally
occurring, and originated during the formation of the solar system and through the interaction
of cosmic rays with molecules in the atmosphere. Tritium is an example of a radioisotope
formed by cosmic rays’ interaction with atmospheric molecules. Some radioisotopes (such as
uranium and thorium) that were formed when our solar system was created have half-lives of
billions of years, and are still present in our environment. Background radiation is the ionizing
radiation constantly present in the natural environment. So natural radiation is everywhere and
everyone is exposed daily to various kinds of radiation: heat, light, ultraviolet, microwave,
ionizing, and so on. Actually, all human activities involve exposure to radiation. People are
exposed to different amounts of natural "background" ionizing radiation depending on where
they live. Radon gas in homes is a problem of growing concern. Exposure of a person may be
external or internal and may be incurred by various exposure pathways. External exposure
may be due to direct irradiation from a sealed source or due to contamination, i.e. airborne
radionuclides or radionuclides deposited onto the ground or onto clothing and skin. Internal
exposure may result from the inhalation of radioactive material in air, the ingestion of
contaminated food or water, or contamination of an open wound. The total worldwide average
effective dose from background radiation is approximately 3 mSv per year, distributed among
the four following sources:
Average Annual Dose
Terrestrial (radiation from soil and rocks): 21 millirem (0.21 mSv)
Cosmic (radiation from outer space): 33 millirem (0.33 mSv)
Internal (radioactivity normally found within the human body): 29 millirem (0.29 mSv)
Radon: 228 millirem (2.28 mSv)
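Summing the four contributions is a quick sanity check on the worldwide average quoted above (values taken directly from the table; 100 mrem = 1 mSv):

```python
# Average annual background dose contributions in millirem (from the table).
sources = {
    "terrestrial": 21,
    "cosmic": 33,
    "internal": 29,
    "radon": 228,
}

total_mrem = sum(sources.values())
total_msv = total_mrem / 100.0          # 100 mrem = 1 mSv
radon_share = sources["radon"] / total_mrem

print(f"total = {total_mrem} mrem = {total_msv:.2f} mSv")
print(f"radon fraction = {radon_share:.0%}")
```

The total of about 3.1 mSv matches the approximate 3 mSv per year figure, and radon alone contributes roughly three quarters of it.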
This chart (Figure 10.1) shows that, of the total dose of about 360 millirem/year (millirem
and millisievert are units of radiation dose; see chapter one), natural sources of radiation
account for about 82% of all public exposure, while man-made sources account for the
remaining 18%. Individual exposures will vary depending on factors such as altitude (space),
local soils (radon and thoron), and the number of nuclear medicine procedures or x-rays
received.
• Cosmic radiation: the term given to various high-energy particles arriving from outer
space that strike the Earth; the dose therefore increases with altitude. It includes many
different types of particle: mainly (89%) protons (nuclei of hydrogen, the lightest and most
common element in the universe), but also nuclei of helium (10%) and heavier nuclei
(1%), all the way up to uranium. When they arrive at Earth, they collide with the nuclei of
atoms in the upper atmosphere, creating more particles, mainly pions. The charged pions
swiftly decay, emitting particles called muons. Unlike pions, these do not interact
strongly with matter, and can travel through the atmosphere and even penetrate below ground.
While space is full of radiation, the earth’s magnetic field generally protects the planet
and people in low earth orbit from these particles.
• Terrestrial radiation: some regions receive more terrestrial radiation from soils that
contain greater quantities of uranium. The average effective dose from the radiation
emitted from the soil (and the construction materials that come from the ground) is
approximately 0.5 mSv per year. However, this dose varies depending on location and
geology, with doses reaching as high as 260 mSv in Northern Iran or 90 mSv in Nigeria.
• Inhalation: (example: Radon) the earth’s crust produces radon gas, which is present in the
air we breathe. Radon has four decay products that will irradiate the lungs if inhaled. The
worldwide average annual effective dose of radon radiation is approximately 1.3 mSv.
• Ingestion: (example: food and drink) Natural radiation from many sources enters our
bodies through the food we eat, the air we breathe and the water we drink. Potassium-40
is the main source of internal irradiation (aside from radon decay). The average effective
dose from these sources is approximately 0.3 mSv a year.
10.2.2 Man-Made Sources of Ionizing Radiation
There is no difference between the effects caused by natural and man-made radiation. Man-
made sources of radiation (from commercial and industrial activities) account for
approximately 0.2 μSv of our annual radiation exposure. Sources of radiation in medicine,
including x-ray machines and radioactive materials used in diagnostic and therapeutic
medical procedures, account for approximately 1.2 mSv a year. Most medical exposure comes
from the use of standard x-rays and CT scans to diagnose injuries and diseases in patients.
Drugs with radioactive material attached, known as radiopharmaceuticals, are also used to
diagnose some diseases. These procedures are an important tool to help doctors save lives
through quick and accurate diagnoses. Other procedures, such as radiation therapy, use
radiation to treat patients. Overall, natural radiation accounts for approximately 60% of our
annual radiation dose, with medical procedures accounting for the remaining 40%.
10.3 Mechanisms of Radiation Damage
The exact mechanism of these complex events is incompletely understood, but biological
damage following exposure to ionizing radiations has been well documented at a variety of
levels. At a molecular level, macromolecules such as DNA, RNA, and enzymes are damaged;
at the subcellular level, cell membranes, nuclei, chromosomes, etc., are affected; and at the
cellular level, cell division can be inhibited, cell death brought about, or transformation to a
malignant state induced. Cell repair can also occur, and is an important mechanism when there
is sufficient time for recovery between irradiation events.
The fact that ionizing radiation produces biological damage has been known for many
years. During radiation exposures it is the ionization process that causes the majority of
immediate chemical changes in tissue. The critical molecules for radiation damage are
believed to be the proteins (such as enzymes) and nucleic acid (principally DNA). The
mechanism by which the radiation damage occurs can happen in one of two basic ways: by
the direct or the indirect action of radiation on the DNA molecules (see Figure 10.2).
Figure 10.2: Direct and indirect actions of radiation. (a) Single-strand break; (b) double-strand break.
When the DNA is attacked, either via direct or indirect action, damage is caused to the strands
of molecules that make up the double-helix structure. Most of this damage consists of breaks
in only one of the two strands and is easily repaired by the cell, using the opposing strand as a
template. If, however, a double-strand break occurs, the cell has much more difficulty
repairing the damage and may make mistakes. This can result in mutations, or changes to the
DNA code, which can result in consequences such as cancer or cell death. Double-strand
breaks occur at a rate of about one double-strand break to 25 single-strand breaks. Thus, most
of the radiation damage that occurs in the DNA may be repaired.
10.3.1 Direct Action
Direct action can be visualized as a “direct hit” by the radiation on the DNA itself (see
Figure 10.2) and is thus a fairly uncommon occurrence due to the small size of the target; the
diameter of the DNA helix is only about 2 nm. Radiation damage starts at the cellular level,
and radiation may impact the DNA directly, causing ionization of the atoms in the DNA
molecule. Radiation which is absorbed in a cell has the potential to impact a variety of critical
targets in the cell, the most important of which is the DNA. The damage to the DNA is what
causes cell death, mutation, and carcinogenesis. Evidence indicates that damaged cells that
survive may later induce carcinogenesis or other abnormalities. This process becomes
predominant with high radiation doses. It is nowadays accepted that the detrimental effects of
ionizing radiation are not restricted to the irradiated cells alone, but extend to non-irradiated
bystander or even distant cells, which manifest various biological effects.
events can be started, but the free radical species formed can lead to many biologically
harmful products and can produce damaging chain reactions in tissue.
10.4 Understanding Radiation Risk
Risk can be defined in general as the probability or chance of injury, illness, or death resulting
from radiation exposure. However, the perception of risk is affected by how the individual
views its probability and its severity. Radiation can damage living tissue by changing cell
structure and damaging DNA. The amount of damage depends upon the type of radiation, its
energy and the total amount of radiation absorbed. Also, some cells are more sensitive to
radiation. Because damage is at the cellular level, the effect from small or even moderate
exposure may not be noticeable. Most cellular damage is repaired. Some cells, however, may
not recover as well as others and could become cancerous. Radiation also can kill cells. The
most important risk from exposure to radiation is cancer. Much of our knowledge about the
risks from radiation is based on studies of more than 100,000 survivors of the atomic bombs at
Hiroshima and Nagasaki, Japan, at the end of World War II. Other studies of radiation
industry workers and studies of people receiving large doses of medical radiation also have
been an important source of knowledge. Scientists learned many things from these studies.
The most important are:
• The higher the radiation dose, the greater the chance of developing cancer.
• It is the chance of developing cancer, not the seriousness of the cancer, that increases as
the radiation dose increases.
• Cancers caused by radiation do not appear until years after the radiation exposure.
• Some people are more likely to develop cancer from radiation exposure than others.
Radiation can damage health in ways other than cancer. It is less likely, but
damage to genetic material in reproductive cells can cause genetic mutations, which could be
passed on to future generations. Exposing a developing embryo or fetus to radiation can
increase the risk of birth defects. Although such levels of exposure rarely happen, a person
who is exposed to a large amount of radiation all at one time could become sick or even die
within hours or days. This level of exposure would be rare and can happen only in extreme
situations, such as a serious nuclear accident or a nuclear attack.
10.5 Health Effects of Exposure to Radiation
There are many technical factors that must be considered to understand the complex nature of
exposure to ionizing radiation. Some of the health effects that exposure to radiation may cause
are cancer (including leukemia), birth defects in the future children of exposed parents, and
cataracts. These effects (with the exception of genetic effects) have been observed in studies
of medical radiologists, uranium miners, radium workers, and radiotherapy patients who have
received large doses of radiation. Studies of people exposed to radiation from atomic weapons
have also provided data on radiation effects. In addition, radiation effects studies with
laboratory animals have provided a large body of data on radiation-induced health effects,
including genetic effects.
The observations and studies mentioned above, however, involve levels of radiation
exposure that are much higher (hundreds of rem) than those permitted occupationally today
(<5 rem per year). Although studies have not shown a cause-effect relationship between health
effects and current levels of occupational radiation exposure, it is prudent to assume that some
health effects do occur at lower exposure levels.
10.6 Risks from Occupational Radiation Exposure
The safety problems are related to ionizing radiation exposure from x-ray devices, particle
accelerators, naturally occurring radionuclides and accelerator-produced radioactivity. The
US Environmental Protection Agency, in its radiation protection guidance for occupational
exposure, urges that workers be clearly informed of the biological implications of radiation
exposure. It is intended that the following information will enable you to develop an attitude
of healthy respect for the risk associated with radiation exposure, rather than an unnecessary
fear or lack of concern.
10.7 Radiation Risk Estimates
A measure of the biological damage sustained by tissue due to ionizing radiation is expressed by
the tissue's dose equivalent (often referred to by just "dose"), the traditional unit of which is the
rem (see chapter one). The total effective dose equivalent (TEDE) represents the sum of the
deep dose due to external radiation and the effective dose equivalent (EDE) due to internal
contamination. Natural "background radiation" delivers a total effective dose equivalent to all
persons every year, due primarily to:
• radiation reaching earth from outer space,
• the radioactive content of all terrestrial materials, and
• exposure to naturally occurring radon gas.
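The TEDE definition above is a simple sum; the sketch below illustrates it with a hypothetical monitoring record for one worker (the dose values are invented for illustration):

```python
# Total effective dose equivalent (TEDE) = deep dose equivalent from
# external radiation + effective dose equivalent from internal contamination.
def tede(deep_dose_rem, internal_ede_rem):
    """Return the TEDE in rem."""
    return deep_dose_rem + internal_ede_rem

# Hypothetical annual monitoring record: 0.8 rem external, 0.3 rem internal.
annual = tede(0.8, 0.3)
print(f"TEDE = {annual:.1f} rem")
assert annual < 5.0  # below the 5 rem/year occupational limit cited in this chapter
```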
10.8 Biological Effects of Ionizing Radiation
Radiation effects can be early or late, depending on the amount of dose. Early
(prompt) effects are observable shortly after receiving a very large dose in a short period of
time. For example, a whole-body dose of 450 rem (90 times the annual dose limit for routine
occupational exposure) in an hour to an average adult will cause vomiting and diarrhea within
a few hours; loss of hair, fever and weight loss within a few weeks; and about a 50 percent
chance of death within 60 days without medical treatment.
At the low levels of occupational exposure it is difficult to demonstrate the relationship
between dose and effect. The changes induced by radiation often require many years or
decades before being evident and, thus, a very long follow up period is necessary to define late
(delayed) effects. Studies of human populations exposed to low level radiation are the
appropriate basis for defining risk. Yet the number of such investigations, from which the
relationship between radiation dose and response can be determined, is limited, the best being
those of the bomb survivors in Nagasaki and Hiroshima. Accordingly, there is considerable
uncertainty and controversy regarding the best estimates of the radiation risk of low level
doses.
The biological effects of ionizing radiation can depend, among other factors, on:
the type of radiation,
but 1 Sv to the gonads has a measurable effect and 4 Sv will cause sterilization. If, however,
the threshold dose for testicular damage is given in small amounts, say 1 mSv per week over a
number of years, no deterministic effects will be observed.
Cancer Induction: Cancers arising in a variety of organs and tissues are thought to be the
principal somatic effect of low and moderate radiation exposure. Organs and tissues differ
greatly in their susceptibility to cancer induction by radiation. Induction of leukemia by
radiation stands out because of the natural rarity of the disease, the relative ease of its radiation
induction and its short latent period (2 to 4 years). However, the combined risk of induced
solid tumors exceeds that of leukemia. It is currently thought that cancer induction is the only
possible somatic effect of exposure to low levels of ionizing radiation. The risk from
radiation doses of the order of the natural background level (300 millirem average in the US)
may be zero. However, an upper-limit estimate of the risk from chronic radiation doses of less
than 10 rem puts the total fatal risk at about 4×10⁻⁴ per rem of effective dose equivalent (or 4
chances in 10,000 per rem) when averaged over an adult population of radiation workers.
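The 4×10⁻⁴ per rem coefficient lets one estimate lifetime fatal cancer risk from a cumulative occupational dose, under the usual linear no-threshold assumption. The career dose below is a hypothetical example, not data from this chapter.

```python
# Lifetime fatal cancer risk from chronic low-level exposure, using the
# upper-limit coefficient quoted above (linear no-threshold assumption).
RISK_PER_REM = 4e-4   # fatal risk per rem of effective dose equivalent

def fatal_risk(total_dose_rem):
    """Estimated lifetime fatal cancer risk for a cumulative dose in rem."""
    return RISK_PER_REM * total_dose_rem

# Hypothetical career: 0.3 rem/year for 40 years = 12 rem total.
risk = fatal_risk(0.3 * 40)
print(f"estimated lifetime risk = {risk:.4f} ({risk * 1e4:.0f} in 10,000)")
```

For this example the estimate works out to roughly 5 chances in 1,000, well below the normal lifetime incidence of fatal cancer.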
10.8.2 Genetic Effects
Genetic effects are abnormalities occurring in the future children of exposed persons and in
subsequent generations. Genetic effects can occur when there is radiation damage to the genetic
material and can result in mutation. These effects may show up as birth defects or other
conditions in the future children of the exposed individual and succeeding generation, as
demonstrated in animal experiments. A mutation is an inheritable change in the genetic
material within chromosomes. Generally speaking, mutations are of two types, dominant and
recessive. The effects of dominant mutations usually appear in the first and subsequent
generations while the effects of recessive mutations do not appear until a child receives a
similarly changed gene for that trait from both parents. This may not occur for many
generations or it may never occur. Mutations can cause harmful effects which range from
undetectable to fatal. Thus, the possibility exists that genetic effects can be caused in humans
by low doses even though no direct evidence exists as yet. It is difficult to assess the genetic
risk in humans, as even the descendants of those exposed at Hiroshima and Nagasaki have
shown no additional genetic or cytogenetic effects. The frequency of congenital defects,
fecundity, and life expectancy appear to be no different than for the children of nonirradiated
parents. However, in view of the paucity of data, a safety margin is included in all risk
estimates. The risk of hereditary ill-health in subsequent children or grandchildren is estimated
to be at worst 10 extra cases per million individual parents exposed to 1 mGy, whereas the
normal incidence, without irradiation, is about 70,000 cases.
10.8.3 Developmental Effects
Developmental (or teratogenic) effects are those observed in children who were
exposed during the fetal or embryonic stages of development. An exposed unborn child may
be subjected to more risk from a given dose of radiation than is either of its parents. The
developmental effects of radiation on the embryo and fetus are strongly related to the stage at
which exposure occurs. The greatest concerns are of inducing malformations and functional
impairments during early development and an increased incidence of cancer during childhood.
The most frequent radiation−induced human malformations are small size at birth, stunted
postnatal growth, microcephaly (small head size), microencephaly (small brain), certain eye
defects, skeletal malformations and cataracts. Fortunately, these effects are observed only for
radiation doses much larger than those permitted for radiation workers.
The current knowledge regarding developmental effects, according to the International
Commission on Radiological Protection (ICRP), is as follows:
• Exposure of the embryo during the first 3 weeks following conception may result in a
failure to implant or an undetectable death of the conceptus. Otherwise, the pregnancy
continues in normal fashion with no deleterious effects. This "all or nothing" response is
thought to occur only for acute doses greater than several rem,
• After 3 weeks, malformations may occur which are radiation dose dependent but with a
threshold dose estimated to be about 10 rem of acute exposure,
• From 3 weeks to the end of pregnancy it is possible that radiation exposure can result in
an increased chance of childhood cancer with a risk factor of, at most, a few times
(probably 2 to 3) that for the whole population, and
• Irradiation during the development of the forebrain, in the period of 8−15 weeks after
conception, may reduce the child's IQ by 0.3 point per rem, on the average, for relatively
large doses.
These conclusions are reassuring for individuals who incur small work−related doses since the
possible developmental effects are thought to occur only at much higher doses or to occur with
very low probability, if at all.
For example, a chest x-ray to follow up pneumonia must be justified both as a general
procedure and then as regards the individual patient before the latter undergoes the procedure.
Clearly, some exposures are easier to justify than others, while some are obviously unjustified.
An example of the unjustified would be mammography screening in 20-30-year-old well-
women, because it would probably cause more harm than benefit. Sometimes an individual
exposure is unjustified as the diagnosis can be made otherwise, for example using ultrasound,
magnetic resonance imaging (MRI), or endoscopy, or would not actually contribute to the
patient's management, for example in coccydynia.
10.9.2 Optimization
Optimization can be defined as a process or method used to make a system of protection as
effective as possible within the given criteria and constraints. That means considering how
best to use resources in reducing radiation risks to individuals and populations so far as is
reasonably achievable, social and economic factors being taken into account. This is the
principle of radiation safety of reducing radiation doses and releases of radioactive materials
by employing all reasonable means. It is the basis of the ALARA principle, or of the ALARP
principle (which, however, does not consider social and economic factors and was developed
from case law). ALARA is an acronym for "As Low As Reasonably Achievable" and is a
regulatory requirement for all radiation safety programs. For members of staff or visitors, the
effective dose should be as low as reasonably practicable as
constrained by the working procedures. For a patient, the radiation exposure should be as
low as compatible with providing the diagnostic information required. This can be achieved
by reducing the number of images taken of a patient.
10.9.3 Limitation
Dose limitation, together with justification and optimization, is used for controlling
radiation risks and forms the basis of radiation protection internationally. There are legal
dose limits for workers and members of the public, based on ensuring that no deterministic
effects are produced and that the probability of stochastic effects is reasonably low.
Limitation requires that the effective dose to individuals shall not exceed the recommended
dose limits, so that deterministic effects are avoided and probabilistic (stochastic) effects
are as low as reasonably achievable (ALARA). Limits are not appropriate for patients,
although 'reference values' have been published to indicate levels above which exposures
should be reviewed.
10.10.1 Time
Keep the time of exposure to a minimum. By reducing the time of exposure to a radiation
source, the effective dose to the worker is reduced in direct proportion to that time. Time
directly influences the dose received: minimizing the time spent near the source minimizes
the dose received. This can be done, for example, by improving the training of operators so
that they handle a source more quickly. The dose to an individual is directly related to the
duration of exposure. The equation for this relationship is:

Dose = Dose rate × Exposure time

During radiography the time of exposure is kept to a minimum to reduce motion blur. During
fluoroscopy the time of exposure should also be kept to a minimum to reduce patient and
personnel exposure. Most fluoroscopic examinations take less than 5 minutes; only during
difficult special procedures should it be necessary to exceed 5 minutes of exposure time.
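The time-dose relationship above can be sketched in a few lines of Python; the 200 mrem/hr field used here is an illustrative value, not a figure from the text.

```python
def dose_mrem(dose_rate_mrem_per_hr: float, hours: float) -> float:
    """Dose accumulated at a constant dose rate: Dose = Dose rate x Time."""
    return dose_rate_mrem_per_hr * hours

# Illustrative: a worker near a source producing 200 mrem/hr.
print(dose_mrem(200.0, 0.5))   # 30 minutes of exposure -> 100.0 mrem
print(dose_mrem(200.0, 0.25))  # halving the handling time -> 50.0 mrem
```

Halving the handling time halves the dose, which is why improved operator training is listed above as a dose-reduction measure.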
10.10.2 Distance
Maintain distance from the source. The exposure rate from a radiation source falls off as the
inverse of the square of the distance. If a problem arises during a procedure, do not stand
next to the source while discussing your options with others present; move away from the
source or return it to storage, if possible.
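The inverse square fall-off described above can be expressed as a short function; the 100 mR/hr reference value is illustrative, not taken from the text.

```python
def exposure_rate(rate_at_d1: float, d1: float, d2: float) -> float:
    """Inverse square law: the rate at distance d2 is I1 * (d1 / d2)**2."""
    return rate_at_d1 * (d1 / d2) ** 2

# Illustrative: 100 mR/hr measured at 1 m from a source.
print(exposure_rate(100.0, 1.0, 2.0))  # doubling the distance -> 25.0 mR/hr
print(exposure_rate(100.0, 1.0, 0.5))  # halving the distance -> 400.0 mR/hr
```

Doubling the distance quarters the exposure rate, which is why stepping back is the quickest protective action available.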
GROUP                                                      MPD
Radiation workers
  Combined whole-body occupational exposure:
    Prospective annual limit                               5 rem in any given year
    Retrospective annual limit                             10 to 15 rem in any given year
    Long-term accumulation to age N years                  5(N - 18) rem
  Skin                                                     15 rem in any given year
  Hands                                                    75 rem in any given year (25 rem per quarter)
  Forearms                                                 30 rem in any given year (10 rem per quarter)
  Other organs, tissues, and organ systems                 15 rem in any given year (5 rem per quarter)
  Pregnant women (with respect to fetus)                   0.5 rem in gestation period
Public or occasionally exposed individuals                 0.5 rem in any given year
Students                                                   0.1 rem in any given year
General population
  Genetic                                                  0.17 rem average per year
  Somatic                                                  0.17 rem average per year
where N is age in years. This results in an annual MPD of 5 rem, or 5000 mrem (50 mSv).
One consequence of this specification for the MPD is that persons less than 18 years of age
should not be employed in radiation occupations. There are several special situations
associated with the whole-body occupational MPD. Students under the age of 18 may not receive
more than 100 mrem/yr (1 mSv/yr) during the course of their educational activities. This is
included in, and not in addition to, the 500 mrem (5 mSv) permitted each year as a
non-occupational exposure. Consequently, student radiologic technologists under the age of 18
may work in departments of radiology, but their personal exposure must be monitored and
should remain below 100 mrem/yr. Because of this, it is general practice not to accept
underage persons into schools of radiologic technology unless their eighteenth birthday is
only a few months away. The MPD established for non-occupationally exposed persons is one
tenth of that for the radiation worker: individuals in the general population are limited to
500 mrem/yr (5 mSv/yr).
In addition to the limitation of 5000 mrem/yr (50 mSv/yr) for the whole body, the dose to any
of the major parts of the body (head, neck, trunk, lens of the eye, blood-forming organs, and
gonads) may not exceed the same limit. This interpretation is accepted because irradiation of
any of these parts carries a presumed risk of late effects equal to the risk associated with
whole-body irradiation. Some organs of the body have a higher MPD than the whole-body MPD.
The MPDs for these organs are:
• Skin: 15000 mrem/yr (150 mSv/yr)
• Forearms: 30000 mrem/yr (300 mSv/yr)
• Hands: 75000 mrem/yr (750 mSv/yr)
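The long-term accumulation formula 5(N - 18) rem can be checked with a small helper; the ages used below are illustrative.

```python
def cumulative_mpd_rem(age_years: int) -> float:
    """Long-term whole-body accumulation limit: MPD = 5 * (N - 18) rem."""
    if age_years < 18:
        raise ValueError("persons under 18 should not be occupationally exposed")
    return 5.0 * (age_years - 18)

print(cumulative_mpd_rem(18))  # 0.0 rem: no occupational accumulation before age 18
print(cumulative_mpd_rem(30))  # 60.0 rem accumulated limit by age 30
```

The guard clause mirrors the consequence noted above: the formula gives no allowance at all before age 18, which is why underage persons are not employed in radiation occupations.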
[Figure: types of radiation exposure in the radiographic room — the useful beam, leakage radiation, and scatter radiation]
barriers because the computation usually results in less than 0.4 mm Pb. The table below
contains equivalent thicknesses for secondary barrier materials.

Table: Equivalent material thicknesses for secondary barriers

Computed Lead      Substitutes
Required (mm)      Steel (mm)   Glass (mm)   Gypsum (mm)   Wood (mm)
0.1                0.5          1.2          2.8           19
0.2                1.2          2.5          5.9           33
0.3                1.8          3.7          8.8           44
0.4                2.5          4.8          12            53
Generally, SRDs only measure gamma and X-ray radiation and can be read out immediately
upon finishing a job involving external exposure to radiation.
The dosimeter contains a small ionization chamber with a volume of approximately two
milliliters. Inside the ionization chamber is a central anode wire, to which a metal-coated
quartz fiber is attached. When a positive potential is applied to the anode (charging), the
charge is distributed between the anode wire and the quartz fiber. Electrostatic repulsion
deflects the quartz fiber away from the wire; the greater the charge, the larger the
deflection. Radiation incident on the chamber produces ionization inside its active volume.
The electrons produced by this ionization are attracted to, and collected by, the positively
charged central anode. The collected electrons reduce the net positive charge, allowing the
quartz fiber to return toward its original position. The amount of movement is directly
proportional to the amount of ionization that occurs. To read the dosimeter, hold it up to a
light source and look through the eyepiece.
You should always record the SRD reading before you enter a radiation field (hot zone). Read
the SRD periodically (at 15- to 30-minute intervals) while working in the hot zone and again
after leaving it. If a higher-than-expected reading is indicated, or if the SRD reading is
off-scale, you should:
• Notify others in the hot zone
• Have them check their SRDs
• Exit the hot zone immediately
• Follow local reporting procedures
If you are using a low-range dosimeter (e.g., 0 to 200 mR), you should consider exiting the
hot zone if the dosimeter reads greater than 75% of full scale. The reason for this is to
prevent your dosimeter from going off-scale; if your dosimeter goes off-scale, it will no
longer keep a record of the dose you received. A dosimeter can be recharged or "zeroed" after
each use. Record the final reading upon leaving the hot zone. Exercise care when using an
SRD; these are sensitive instruments, and rough handling, static electricity, or dropping a
dosimeter may result in erroneous or off-scale readings.
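The 75%-of-full-scale guideline for low-range dosimeters amounts to a simple threshold check; the scale and readings below are illustrative.

```python
def should_exit_hot_zone(reading_mR: float, full_scale_mR: float = 200.0) -> bool:
    """True when an SRD reading passes 75% of full scale, risking an off-scale loss of record."""
    return reading_mR > 0.75 * full_scale_mR

print(should_exit_hot_zone(160.0))  # True: 160 mR exceeds 150 mR (75% of a 0-200 mR scale)
print(should_exit_hot_zone(100.0))  # False: still within the recommended margin
```

Checking against 75% rather than 100% of full scale leaves a margin, so the dosimeter never goes off-scale and the dose record is preserved.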
10.17.2 Electronic Dosimeters
The electronic dosimeter serves the same basic function as the SRD, except that it has a
digital readout that displays the total dose received by the wearer in milliroentgens (mR) or
millirem (mrem). Devices based on Geiger-Müller tubes or single silicon diodes can provide
effective alarms and immediate dose readings in areas where there are real risks of high
exposures. However, these devices have a very poor response to photon energies below about
80 keV, which makes them unable to detect low-energy gamma radiation and diagnostic X-rays.
The Siemens electronic personal dosimeter overcomes that problem with a linear response down
to below 20 keV, which makes it suitable for radiodiagnostic staff. Electronic dosimeters are
now available from various manufacturers in a variety of sizes and shapes, with many options
depending on the required or desired response. Many electronic dosimeters indicate the
exposure rate audibly through a series of chirps: the chirp rate rises and falls with the
dose rate, providing an audible warning when dose rates are high.
APPENDIX
ACRONYMS AND UNITS
CHAPTER ONE
1   A        Mass number
2   Z        Atomic number
3   n        Shell number
4   E K,L,M  Binding energies of electron shells (K, L, M, etc.)
5   keV      Kilo-electron-volt (1000 eV)
6   MeV      Million-electron-volt (10^6 eV)
7   f        Frequency (cycles per second, cps)
8   h        Planck's constant
9   λ        Wavelength
10  FM       Frequency modulation
11  AM       Amplitude modulation
12  nm       Nanometer
13  μm       Micrometer
14  A        Amplitude
15  γ        Gamma
16  α        Alpha
17  β        Beta particles
18  DNA      Deoxyribonucleic acid
19  CT       Computerized tomography
20  CAT      Computerized axial tomography
21  UV       Ultraviolet
22  R or r   Roentgen
23  C/kg     Coulomb per kilogram
24  rad      Radiation absorbed dose
25  Gy       Gray
26  rem      Roentgen equivalent man
27  DE       Dose equivalent
28  Ci       Curie
29  Bq       Becquerel
30  q        Charge
31  J        Joule
32  Sv       Sievert
34  DT,R     Dose to tissue T due to radiation R
36  WT       Tissue weighting factor
37  SI unit  Système International unit
38  BRET     Background radiation equivalent time
CHAPTER TWO
1   kVp      Peak kilovoltage
2   mA       Milliamperage
3   mAs      Milliamperage-second
4   Ko       Initial kinetic energy
5            Characteristic X-ray (X-ray photon of energy )
6            Characteristic X-ray (X-ray photon of energy )
7   c        Speed of light
8   ΔK       Change in energy
9   I        X-ray intensity
CHAPTER THREE
1            Scattering angle
2            Change in a photon's wavelength
3            Wavelength after scattering
4            Electron rest mass
5   Å        Angstrom
6   KE       Kinetic energy
7   μ        Linear attenuation coefficient
8            Mass attenuation coefficient
9   ρ        Density
10           Effective atomic number
Curriculum Vitae
1. Name: Dr. Mahmood Radhi Al-Qurayshi
2. Nationality: Iraqi
3. Religion: Muslim
4. Birth: Wasit, July 1964
5. Certifications:
A. Bachelor of Science in Physics
B. Master in Nuclear Physics
C. Ph.D. in Solid State Electronics
6. Current Grade: Assistant Professor
7. Contact Info: E-mail: mradhi64@yahoo.com
8. Languages (fluent): English and Arabic
9. Positions:
A. Head of Radiological Techniques Department / College of Health and
Medical Technologies / Baghdad
B. Associate Dean for Administrative Affairs; College of Health and
Medical Technologies / Baghdad
Curriculum Vitae
1. Name: Dr. Haider Qasim Al-Mosawi
2. Nationality: Iraqi
3. Religion: Muslim
4. Birth: Baghdad, April 1970
5. Certifications:
A. Bachelor of Medicine and Surgery (MBChB)
B. Higher Diploma in Diagnostic Radiology (DMRD)
C. Board (Ph.D.) in Diagnostic Radiology (FIBMS)
6. Current Grade: Assistant Professor
7. Contact Info: E-mail: haiderdo@yahoo.com
8. Languages (fluent): English and Arabic
9. Positions:
A. Head of Radiological Techniques Department / College of Health and
Medical Technologies / Baghdad
B. Dean of the College of Health and Medical Technology / Baghdad
C. Dean of the Institute of Medical Technology / Baghdad
D. Dean of the Institute of Technology / Suwaira
E. Dean of the Institute of Medical Technology / Mansour