MIT Notes S7
TECHNIQUES
PART A
Answer any two full questions, each carries 15 marks.
1 a) Explain the acoustic impedance and its role in ultrasound imaging. (5)
b) Discuss the terminologies (i) Characteristic Impedance (ii) Velocity of
Propagation (5)
c) Explain the acoustic impedance and its role in ultrasound imaging. (5)
2 a) Summarize the X-ray tubes used in CT. Explain the types of X-ray tubes that have been
utilized for computed tomography (5)
b) With neat sketch, explain the block diagram of ultrasound machine. Also discuss
the functions of each part. (10)
3 a) Explain Fourier method for 2D image reconstruction (5)
b) Summarize the types of CT used. Explain their need and principle (6)
c) What is the principle of sectional imaging? (4)
PART B
Answer any two full questions, each carries 15 marks.
4 a) Nuclear magnetic resonance tomography is a powerful imaging technique used
in the medical field. With relevant schematics, explain the principle of NMR (10)
b) Explain the decay in magnetization (5)
5 Based on MRI technique, explain :
a) spin density weighted imaging (7)
b) gradient echo imaging (8)
6 a) What are the reconstruction techniques used in NMR imaging? What is
commonly used method in modern scanners? (10)
b) What are the clinical applications of functional MRI? (5)
PART C
Answer any two full questions, each carries 20 marks.
7 a) Explain the principles of SPECT imaging with suitable diagrams (10)
b) Draw the block diagram and explain multi crystal Gamma camera. (10)
8 a) Discuss the Anger position network and pulse height analyzers with suitable
diagrams (10)
b) What are the clinical applications of thermal imaging in Rheumatology,
oncology and physiotherapy? (10)
9 a) What is the principle of thermography? (4)
b) What is the Stefan-Boltzmann Law? (4)
c) Explain (i) Resistive bolometers (ii) thermo electric detectors (iii) pyroelectric (12)
****
E G192097 Pages:2
1 a) How are the frequency and penetration of ultrasound related? Give an example of (3)
choosing the frequency based on the penetration required at different parts of the human body
b) Write a short note on colour flow mapping (4)
c) Describe the constructional details of an ultrasound transducer (8)
2 a) Explain the data acquisition system used in X-ray CT (7)
b) i) What is M-mode scan? (5)
ii) Mention any one example of the application of M-mode scan (3)
3 a) Explain convolution back projection method of CT image reconstruction (9)
b) List the features of multi-slice CT (6)
PART B
Answer any two full questions, each carries 15 marks.
4 a) Compare T1 weighted imaging and T2 weighted imaging (5)
b) Explain how the magnetic fields and radiofrequency signals are used to obtain (10)
anatomical information?
5 a) Give details about different magnets used in MRI (8)
b) Write a short note on Functional MRI and mention any two applications (7)
6 a) Explain diffusion tensor imaging and list any five applications (12)
b) Define BOLD signal (3)
PART C
Answer any two full questions, each carries 20 marks.
7 a) What is a scintillation detector (5)
b) Explain Emission Computed Tomography (15)
8 a) Explain the concept of thermal imaging (6)
b) Based on the detector arrangement classify the design types of Positron Emission (4)
Tomography
E G1098 Pages: 2
PART A
Answer any two full questions, each carries 15 marks.
PART B
Answer any two full questions, each carries 15 marks.
4 a) Explain the relaxation mechanisms associated with excited nuclear spins (5)
b) How pulse sequence is employed in spin-echo imaging technique (4)
c) With relevant block diagram generalize the detection system used in MRI. (6)
5 a) Based on NMR detection system generalize the types of coils commonly
available at receiver side. (4)
b) Explain the principle of diffusion tensor imaging. (5)
c) Describe the instrumentation systems used in functional MRI. (6)
6 a) Explain Fourier reconstruction methods used in MRI. (10)
b) What are the clinical applications of functional MRI (5)
PART C
Answer any two full questions, each carries 20 marks.
7 a) With a simplified diagram, explain a SPECT system consisting of dual large field-of-view
scintillation cameras mounted on a rotatable gantry (8)
b) List out the characteristics of Radio nuclides for imaging (5)
c) Generalize the block diagram of a typical rectilinear scanner. (7)
8 a) Discuss the common radio nuclides used for PET (10)
b) What is the physics behind thermography? Explain the terminologies with
respect to Infrared Imaging. (10)
9 a) Explain the principle of microbolometer thermal detector (6)
b) What is spectral radiant emissivity? What is its behaviour as the wavelength
increases? (6)
c) Explain the various parts of pyroelectric vidicon thermographic camera? (8)
****
MIT MODULE 1 - SCET
1. Ultrasound Imaging
• The frequency of the sound waves used in medical ultrasound is in the range of
millions of cycles per second (megahertz, MHz). In contrast, the upper range of
audible frequencies for humans is around 20 thousand cycles per second (20 kHz).
• An ultrasound transducer sends an ultrasound pulse into tissue and then receives echoes
back. The echoes contain spatial and contrast information.
• The concept is analogous to sonar used in nautical applications, but the technique in
medical ultrasound is more sophisticated, gathering enough data to form a rapidly moving
two-dimensional grayscale image.
Advantages
• ultrasound uses non-ionizing sound waves and has not been associated with
carcinogenesis. This is particularly important for the evaluation of fetal and gonadal
tissue.
• in most centers, ultrasound is more readily available than more advanced cross-sectional
modalities such as CT or MRI.
• ultrasound examination is less expensive to conduct than CT or MRI.
• there are few (if any) contraindications to use of ultrasound, compared with MRI or
contrast-enhanced CT.
• the real-time nature of ultrasound imaging is useful for the evaluation of physiology as
well as anatomy (e.g. fetal heart rate).
• Doppler evaluation of organs and vessels adds a dimension of physiologic data, not
available on other modalities (with the exception of some MRI sequences).
• ultrasound images may not be as adversely affected by metallic objects, as opposed to CT or MRI.
Disadvantages
• training is required to accurately and efficiently conduct an ultrasound exam and there is
non-uniformity in the quality of examinations ("operator dependence").
• ultrasound is not capable of evaluating tissue types with high acoustical impedance (e.g.
bone). It is also limited in evaluating structures encased in bone (e.g. cerebral parenchyma
inside the calvaria).
• the high frequencies of ultrasound result in a potential risk of thermal heating or
mechanical injury to tissue at a micro level. This is of most concern in fetal imaging.
• ultrasound has its own set of unique artifacts (US artifacts), which can potentially degrade
image quality or lead to misinterpretation.
• some ultrasound exams may be limited by abnormally large body habitus
• Sound waves are characterized by the properties of frequency (f) or number of cycles
per second, amplitude (A or loudness), and wavelength (λ) or distance between two
adjacent ultrasound cycles.
• Using the average propagation velocity of ultrasound through tissue (c) or 1,540 m/s, the
relation between frequency (f) and wavelength (λ) is characterized as:
• λ (mm) = c (1,540 m/s)/f (cycles/s)
• Ultrasound vibrations (or waves) are produced by a very small but rapid push–pull action
of a probe (transducer) held against a material (medium) such as tissue.
• The push–pull action of the transducer causes regions of compression and rarefaction to
pass out from the transducer face into the tissue.
• These regions have increased or decreased tissue density.
• In tissue, if we could look closely at a particular point, we would see that the tissue is
oscillating rapidly back and forth about its rest position.
• As noted above, the number of oscillations per second is the frequency of the wave.
The speed with which the wave passes through the tissue is very high, close to 1,540 m/s
for most soft tissue.
Speed of sound
• The speed of sound, c, is simply related to the frequency, f, and the wavelength, λ, by the
formula: λ = c/f
• where c is medium dependent, but for water and most soft tissues it is approximately 1,500 m/s.
Therefore, typical wavelengths encountered in medical diagnosis range from 1.5 mm at
1 MHz to 0.1 mm at 15 MHz
• The wavelength and frequency of ultrasound are inversely related, i.e., ultrasound of high
frequency has a short wavelength and vice versa.
• Medical ultrasound devices use sound waves in the range of 1–20 MHz.
• Proper selection of transducer frequency is an important concept for providing optimal
image resolution in diagnostic and procedural US.
• High-frequency ultrasound waves (short wavelength) generate images of high axial
resolution.
• Increasing the number of waves of compression and rarefaction for a given distance can
more accurately discriminate between two separate structures along the axial plane of
wave propagation.
• However, high-frequency waves are more attenuated than lower frequency waves for a
given distance; thus, they are suitable for imaging mainly superficial structures.
• Conversely, low-frequency waves (long wavelength) offer images of lower resolution
but can penetrate to deeper structures due to a lower degree of attenuation
• For this reason, it is best to use high-frequency transducers (up to 10–15 MHz range)
to image superficial structures (such as for stellate ganglion blocks) and low-frequency
transducers (typically 2–5 MHz) for imaging the lumbar neuraxial structures that are
deep in most adults.
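As a quick numerical check of the wavelength values quoted above (a minimal sketch in Python; the 1,540 m/s propagation speed is the value used in these notes and the example frequencies are chosen only for illustration):

c = 1540.0  # average propagation speed in soft tissue, m/s

for f_mhz in (1, 2, 5, 10, 15):
    f = f_mhz * 1e6                  # frequency in Hz
    wavelength_mm = c / f * 1e3      # wavelength = c / f, converted to mm
    print(f"{f_mhz:>2} MHz -> wavelength = {wavelength_mm:.2f} mm")

The 1 MHz and 15 MHz lines reproduce the approximately 1.5 mm and 0.1 mm figures quoted above.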
Velocity of propagation
• Ultrasound, like sound in gases, propagates largely as a longitudinal pressure wave
• the energy of a moving source is transferred to the medium either as a local compression,
when the source pushes on the medium, or as a local stretching (rarefaction) when the
source pulls on the medium.
• In either case, potential energy is stored elastically in the layer of the medium that is
immediately adjacent to the source and is released by motion of the adjacent layer of the
medium
• The bulk elastic modulus is a property of the medium that relates the applied pressure
needed to achieve a given fractional change in volume.
• It is, therefore, clear that the speed of the wave depends on the density ρ0 and the bulk
elastic modulus K; specifically, c = √(K/ρ0).
• Piezoelectric material can be arranged in a number of different ways within the transducer,
with each arrangement having a different purpose. The arrangement of the piezoelectric
material will affect how the transducer sweeps the tissue to produce a two-dimensional
image, as well as the geometry of the ultrasound beam in order to maximize resolution.
Piezoelectric material
• Piezoelectric ceramic (lead zirconate titanate, PZT)
• Plastic (polyvinylidine difluoride, PVDF)
• Silver is deposited on the faces for electrical connections and application of the driving potential
• The applied voltage produces a proportional change in thickness
• The crystal has a natural resonant frequency f0 = ccrystal/(2d), where
• ccrystal is the speed of sound through the crystal, and
• d is the crystal thickness
Damping blocks
• Damping blocks are attached to the back of the piezoelectric material to accomplish a
wide range of tasks.
• Firstly, these damping blocks absorb stray sound waves, as well as absorb backward
directed ultrasound energy.
• They also dampen the vibrations to create short spatial pulse lengths (SPL) to help
improve axial resolution.
• A short SPL is desirable to minimize interference.
• If the length of the pulses is too long, it can interfere with the sound waves that are on
their way back.
• In a more extreme example, the outgoing pulse may actually collide with the returning
waves if it is too long.
• Thus, a good damping material is important to maximize sensitivity to signal.
Lens
• The acoustic lens is used to help focus the beam to improve resolution.
• In the same way that light refracts through the lens in a magnifying glass or a pair of
eyeglasses, this helps aim the ultrasound beam at a particular distance depending on the
application.
• An unfocused ultrasound beam will have a broader beam geometry that will have a lower
resolution.
• Thus, an acoustic lens is very beneficial in order to provide a sharper image. It will also
reduce blurring from side lobe interference produced by the piezoelectric material.
Matching layer
• To provide acoustic coupling between the crystal and patient
• This layer helps to overcome the acoustic mismatch between the element and the tissue.
• Zmatching = (Zelement × Ztissue)^(1/2)
• Thickness of the matching layer, tmatching = λ/4
• Example: aluminium powder in Araldite
• The impedance matching layer is a crucial part of the transducer.
• This layer usually contains oil and helps maximize the amount of wave energy
transmitted into the skin and onward.
• The basic idea of this layer is to help the area of the transducer touching the skin better
mimic properties of the tissue.
• Ultrasound beams can more effectively travel from tissue to tissue of similar density.
• A coupling gel is also applied to the surface where the transducer will be imaging in
order to reduce interference from air bubbles
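As a rough worked example (typical textbook values assumed here, not taken from the notes): a PZT element with ccrystal of about 4,000 m/s and thickness d = 0.4 mm has f0 = ccrystal/(2d) of about 5 MHz; with Zelement of about 30 MRayl and Ztissue of about 1.5 MRayl, the ideal matching-layer impedance is Zmatching = (30 × 1.5)^(1/2), i.e. roughly 6.7 MRayl, and the layer is made one quarter of a wavelength thick in the matching-layer material.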
The probe is the main part of the ultrasound machine. It sends ultrasound waves into the body
and receives the echoes produced by the waves when it is placed on or over the body part being
imaged. The shape of the probe determines its field of view. Probes are generally described by
the size & the shape of their footprint. Selecting the right probe for the situation is essential to
get good images.
There are four basic types of probes used:
1. Linear probes – are generally high frequency better for imaging superficial structures
& vessels also called vascular probes.
2. Curvilinear probes – have widened footprint and lower frequency for transabdominal
imaging & widen the field of view.
3. Phased array probes – for getting between ribs, such as in cardiac ultrasound.
4. Endocavitary probes – high-frequency probes giving better imaging, such as transvaginal &
transrectal probes.
3D & 4D ultrasound probes help in more detailed imaging in terms of volume data acquisition,
volume display and analysis, and multiplanar imaging of organs of interest, i.e. assessing
multiple 2-D image planes simultaneously. Probes emit ultrasound waves that pass through the
skin. These waves are reflected by the various structures that they encounter. The time taken by the
waves to return and their strength form the foundation for interpreting the echoes into a clear image.
Sophisticated computer software performs this processing.
Figure shows a generalised scheme for signal generation and processing in a pulse echo
imaging system, with a pictorial representation of the signals to be expected at each stage.
• High repetition rates are desirable, however, for fast scanning or for following moving
structures. The maximum pulse repetition rate, usually referred to as the pulse repetition
frequency (PRFmax), is limited by the maximum depth (Dmax) to which one wishes to
image.
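This limit can be written explicitly (a standard pulse-echo relation, quoted here for reference rather than taken from the notes): an echo from the maximum depth needs a time 2 Dmax/c to return before the next pulse can be transmitted, so

PRFmax = c / (2 Dmax)

For example, with c = 1,540 m/s and Dmax = 15 cm, PRFmax is approximately 5.1 kHz.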
Transmitter
• The voltage pulse applied to the transducer is usually below 500V and often in the range
100–200V.
• Its shape is equipment dependent, but it must have sufficient frequency components to
excite the transducer properly; for example, for frequency components up to 10MHz, a
pulse rise time of less than 25ns is required.
Linear RF Amplifier
• This is an important component, since noise generated at this point may well limit the
performance of the complete instrument.
• Clever design is required, since, whilst maintaining low noise and high gain, the input
must be protected from the high-voltage pulse generated by the transmitter.
• Any circuits that do this must have a short recovery time, and the amplifier itself should
possess a large dynamic range and good linearity
Time Gain Control
• This is provided by a voltage-controlled attenuator.
• Some form of time-varying function, synchronised with the main clock and triggered
via a delay, is used as a control voltage so that the system gain roughly compensates
for attenuation of sound in the tissue.
• The simplest function used is a logarithmic voltage ramp, usually set to compensate for
some mean value of attenuation.
• The effect that this has on the time-varying gain of the system is illustrated
schematically in Figure.
• The delay period indicated is often adjustable, so that the attenuation compensation
does not become active until echoes begin returning from attenuating tissue.
Compression Amplifier
• There is a wide range of gain characteristics in use but a general feature is that the gain
decreases with increasing input signal.
• One example is an amplifier with a logarithmic response.
• This allows the remaining 40–50dB echo range to be displayed as a grey scale on a
screen that might typically have only a 20–30dB dynamic range.
Demodulation
• At this point the envelope of the RF echo signal is extracted, for example, by
rectification followed by smoothing with a time constant of about 1.5λ, although more
accurate techniques of complex demodulation are often used.
• The signal depicted at this point is known as a detected A-scan (representing the
amplitude of the echoes).
• There are two types of demodulation used in ultrasound signal processing, amplitude
demodulation as described here for anatomical imaging, and frequency demodulation
for detecting the Doppler shift due to target motion.
Pre- and Postprocessing
• It is common to preprocess the A-scan echo signal both before and after it is digitised
and stored in the display memory
• Examples are various forms of edge detection (usually differentiation), and further
adjustments to the gain characteristic, such as suppression, which is dynamic-range
restriction by rejecting from the display echoes that are below an operator-defined
threshold (noise suppression)
Digitisation and Display
• As the sound beam is scanned across the object being imaged, a sequence of A-scans
is generated, each of which provides one line in the final image.
• This sequence of scan lines must be stored and geometrically reconstructed to form an
image, which is achieved by a process known as a scan conversion
• Older systems demodulated the signal to very low frequencies before digitization, but
recent advances in digital receivers allow direct digitization of the signal at the
fundamental frequency, giving a higher image SNR.
• Dynamic receiver filtering, a process in which there is a progressive reduction in the
receiver frequency bandwidth as a function of time after pulse transmission, can also
be used. Since the high-frequency content of the backscattered signal decreases with
depth due to the greater attenuation in tissue at higher frequencies, the receiver
bandwidth can be reduced accordingly.
• This results in an improved SNR in the image because the noise level is proportional to
the square root of the receiver bandwidth. After the signal has been digitized, it can be
processed via envelope detection, edge detection, or whichever algorithm is appropriate
for the particular application, and then displayed as a gray-scale image.
• A motion (M)-mode scan provides information on tissue movement within the body,
and essentially displays a continuous series of A-mode scans.
• The brightness of the displayed signal is proportional to the amplitude of the
backscattered echo, with a continuous time ramp being applied to the horizontal axis of
the display
• The maximum time resolution of the M-mode scan is dictated by how long it takes for
the echoes from the deepest tissue to return to the transducer.
• M-mode scanning is used most commonly to detect motion of the heart valves and heart
wall in echocardiography.
Brightness (B)-mode scanning
• Brightness (B)-mode scanning produces a two-dimensional image through a cross
section of tissue.
• Each line in the image consists of an A-mode scan with the brightness of the signal
being proportional to the amplitude of the backscattered echo.
• B-mode scanning can be used to study both stationary and moving structures, such as
the heart, because complete images can be acquired very rapidly.
• For example, in the case of an image with a 10-cm depth-of-view, it takes 130 us after
transmission of the ultrasound pulse for the most distant echo to return to the transducer.
• If the image consists of 120 lines, then the total time to acquire one frame is 15.6 ms
and the frame rate is 64 Hz. If the depth-of-view is increased, then the number of lines
must be reduced in order to maintain the same frame rate.
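The frame-rate arithmetic above can be checked with a few lines of Python (a sketch using the same 10 cm depth of view and 120 scan lines as the example):

c = 1540.0     # speed of sound in soft tissue, m/s
depth = 0.10   # depth of view, m (10 cm)
lines = 120    # scan lines per frame

line_time = 2 * depth / c        # round-trip time per line, s (about 130 us)
frame_time = lines * line_time   # time to acquire one frame, s (about 15.6 ms)
frame_rate = 1.0 / frame_time    # frames per second (about 64 Hz)
print(line_time * 1e6, frame_time * 1e3, frame_rate)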
2D ultrasound
Traditional ultrasound scanning is 2D, meaning it sends and receives ultrasound waves in just
one plane. The reflected waves then provide a flat, black-and-white image of the fetus through
that plane. Moving the transducer enables numerous planes of viewing, and when the right
plane is achieved, as judged by the image on the monitor, a still film can be developed from
the recording. Most of the detailed evaluation of fetal anatomy and morphology so far has been
done using 2D ultrasound.
3D ultrasound
Further development of ultrasound technology led to the acquisition of volume data, i.e.,
slightly differing 2D images caused by reflected waves which are at slightly different angles to
each other. These are then integrated by high-speed computing software. This provides a 3-
dimensional image. The technology behind 3D ultrasound thus has to deal with image volume
data acquisition, volume data analysis and finally volume display.
Outside clinical indications, 3D/4D ultrasound is mostly used to provide fetal keepsake videos, a use
which is discouraged by most medical watchdog sites.
This is because unregulated centers offer it as entertainment ultrasound. Such use violates the
ALARA (As Low As Reasonably Achievable) principle governing the medical use of
diagnostic imaging.
Disadvantages of non-medical use are:
• The machines may use higher-than-usual levels of ultrasound energy with potential
side-effects on the fetus.
• The ultrasound sessions may be prolonged.
• Uncertified or untrained operators may lead to missed or inadequate diagnosis since
they are not required to be certified by law.
Advantages of 3D/4D ultrasound
Disadvantages
• Expensive machinery.
• Longer training required to operate.
• Volume data acquired may be lower-quality in the presence of fetal movements of any
kind, which will affect all later planes of viewing.
• If the fetal spine is not at the bottom of the scanned field sound shadows may hinder
the view.
11. Discuss the relation between the frequency of ultrasound and the
depth of penetration.
• Sound waves are characterized by the properties of frequency (f) or number of cycles
per second, amplitude (A or loudness), and wavelength (λ) or distance between two
adjacent ultrasound cycles.
• Using the average propagation velocity of ultrasound through tissue (c) or 1,540 m/s,
the relation between frequency (f) and wavelength (λ) is characterized as:
• λ (mm) = c (1,540 m/s)/f (cycles/s)
• Ultrasound frequency is above 20,000 Hz (20 kHz).
• Medical ultrasound is in the range of 3-15 MHz.
• The average speed of sound through most soft human tissues is 1,540 meters per second;
it equals the product of the wavelength and the frequency.
• The wavelength and frequency of ultrasound are inversely related, i.e., ultrasound of
high frequency has a short wavelength and vice versa.
• For example, the wavelength at 2.5 MHz is about 0.62 mm, whereas that at 15 MHz is about 0.1 mm
• High-frequency waves are more attenuated than lower frequency waves for a given
distance; thus, they are suitable for imaging mainly superficial structures.
• Conversely, low-frequency waves (long wavelength) offer images of lower resolution
but can penetrate to deeper structures due to a lower degree of attenuation
• For this reason, it is best to use high-frequency transducers (up to 10–15 MHz range)
to image superficial structures (such as for stellate ganglion blocks) and low-frequency
transducers (typically 2–5 MHz) for imaging the lumbar neuraxial structures that are
deep in most adults.
The spatial resolution of any imaging system is defined as its ability to distinguish two points
as separate in space. Spatial resolution is measured in units of distance such as mm. The higher
the spatial resolution, the smaller the distance which can be distinguished.
Spatial resolution is commonly further sub-categorized into axial resolution and lateral
resolution.
• The time to scan one frame is equal to the pulse repetition period x number of scan lines
per frame.
• Common means of improving frame rate include 1) narrowing the imaging sector, which
decreases the time it takes to scan one frame 2) decreasing the depth which decreases the
PRP 3) decreasing the line density, which requires fewer lines to scan one frame (at the
cost of spatial resolution)
13. Discuss the relation between the frequency of ultrasound and the
resolution of images.
• Image resolution determines the clarity of the image.
• Such spatial resolution depends on axial and lateral resolution.
• Both of these depend on the frequency of the ultrasound.
• Axial resolution is the ability to see two structures that lie one behind the other along the
direction of the beam as separate and distinct. A higher frequency and a shorter pulse length
will therefore provide a better axial image.
• Lateral resolution is the ability to distinguish two structures lying side by side,
perpendicular to the beam. It is directly related to the width of the ultrasound beam.
• Proper selection of transducer frequency is an important concept for providing optimal
image resolution in diagnostic and procedural US.
• High-frequency ultrasound waves (short wavelength) generate images of high axial
resolution.
• Increasing the number of waves of compression and rarefaction for a given distance can
more accurately discriminate between two separate structures along the axial plane of
wave propagation.
• The narrower the beam, the better the resolution.
• The width of the beam is inversely related to the frequency: the higher the frequency,
the narrower the beam. If the beam is wide, the echoes from two adjacent structures
will overlap and they will appear as one.
• Temporal resolution is the ability to detect that an object has moved over time.
• temporal resolution is synonymous with frame rate.
• Typical frame rates in echo imaging systems are 30-100 Hz.
• The temporal resolution or frame rate = 1/(time to scan 1 frame).
• The time to scan one frame is equal to the pulse repetition period x number of scan lines
per frame.
• Common means of improving frame rate include 1) narrowing the imaging sector,
which decreases the time it takes to scan one frame 2) decreasing the depth which
decreases the PRP 3) decreasing the line density, which requires fewer lines to scan one
frame (at the cost of spatial resolution)
• A short pulse of energy, typically 1-5 us long, is transmitted into the body using an
ultrasound transducer.
• The transducer is focused to produce a narrow ultrasound beam, which propagates as a
pressure wave through tissue at a speed of approximately 1540 m/s.
• The initial trajectory of this beam is represented by line 1 in Figure 3.1.
• When the ultrasound wave encounters tissue surfaces; boundaries between tissues, or
structures within organs , a part of the energy of the pulse is scattered in all directions,
with a certain fraction of the energy being backscattered along the original transmission
path and returning to the transducer.
• As well as transmitting energy into tissue, the transducer also acts as the signal receiver,
and converts the backscattered pressure waves into voltages, which, after amplification
and filtering, are digitized.
• Using the measured time delay between pulse transmission and echo reception and the
propagation velocity of 1540m/s, one can estimate the depth of the feature.
• After all of the echoes have been received from the first beam trajectory, the direction
of the beam is electronically steered to acquire a second line of data adjacent to the first.
• This process is repeated to acquire between 64 and 256 lines, typically, per image. The
time required to acquire the echoes for each line is sufficiently short, on the order of
100-300 us, depending on the required depth of view, that complete ultrasonic images
can be acquired in tens of milliseconds, allowing dynamic imaging studies to be
performed.
• A well-designed transducer will do this with high fidelity, with good conversion
efficiency and with little introduction of noise or other artefacts.
• Also, it is primarily through transducer design that one has control over the system
resolution and its spatial variation
• There are two types: single element and multielement transducers.
Piezoelectric material
• Piezoelectric ceramic (lead zirconate titanate, PZT)
• Plastic (polyvinylidine difluoride, PVDF)
• Silver is deposited on the faces for electrical connections and application of the driving potential
• The applied voltage produces a proportional change in thickness
• The crystal has a natural resonant frequency f0 = ccrystal/(2d), where
• ccrystal is the speed of sound through the crystal, and
• d is the crystal thickness
Damping blocks
• Damping blocks are attached to the back of the piezoelectric material to accomplish a
wide range of tasks.
• Firstly, these damping blocks absorb stray sound waves, as well as absorb backward
directed ultrasound energy.
• They also dampen the vibrations to create short spatial pulse lengths (SPL) to help
improve axial resolution.
• A short SPL is desirable to minimize interference.
• If the length of the pulses is too long, it can interfere with the sound waves that are on
their way back.
• In a more extreme example, the outgoing pulse may actually collide with the returning
waves if it is too long.
• Thus, a good damping material is important to maximize sensitivity to signal.
Lens
• The acoustic lens is used to help focus the beam to improve resolution.
• In the same way that light refracts through the lens in a magnifying glass or a pair of
eyeglasses, this helps aim the ultrasound beam at a particular distance depending on the
application.
• An unfocused ultrasound beam will have a broader beam geometry that will have a lower
resolution.
• Thus, an acoustic lens is very beneficial in order to provide a sharper image. It will also
reduce blurring from side lobe interference produced by the piezoelectric material.
Matching layer
• To provide acoustic coupling between the crystal and patient
• This layer helps to overcome the acoustic mismatch between the element and the tissue.
• Zmatching = (Zelement × Ztissue)^(1/2)
• Thickness of the matching layer, tmatching = λ/4
• Example: aluminium powder in Araldite
• The impedance matching layer is a crucial part of the transducer.
• This layer usually contains oil and helps maximize the amount of wave energy
transmitted into the skin and onward.
• The basic idea of this layer is to help the area of the transducer touching the skin better
mimic properties of the tissue.
• Ultrasound beams can more effectively travel from tissue to tissue of similar density.
• A coupling gel is also applied to the surface where the transducer will be imaging in
order to reduce interference from air bubbles
Multiple-Element Transducers (transducer array)
• Single-element transducers, as described earlier, are not often used in modern scanning
equipment, although the other basic aspects of design still apply to multiple-element
systems.
• More than one element may be required (in the simplest case) to permit the use of
continuous waves (as in a Doppler system), where separate transmitting and receiving
elements are required.
• In pulse-echo imaging, multiple element arrays may be used for electronic (and rapidly
changing) beam focusing, translation and steering.
• The general principles of transmit beam forming for focusing, scanning and steering
are illustrated schematically, and in two dimensions only, in Figure
• In order to focus (Figure 6.25c) or steer (Figure 6.25b) the sound beam, one must be
able to excite each element via a variable delay. Systems using this technology create
a sector-shaped image by ‘phase steering’ the sound beam.
16. Explain how the blood velocity measurements can be done using
ultrasound?
• The Doppler effect is familiar as, for example, the higher pitch of an ambulance siren
as it approaches the observer than when it has passed.
• Similarly, blood flow, either toward or away from the transducer, alters the frequency
of the backscattered echoes, as shown in Figure 3.21.
• Because blood contains a high proportion of red blood cells (RBC), which have a
diameter of 7-10 µm, the interaction between ultrasound and blood is a scattering
process.
• The wavelength of the ultrasound is much greater than the dimensions of the scatterer
and therefore the wave is scattered in all directions.
• This means that the backscattered, Doppler-shifted signals have low signal intensities.
• The signal intensity is proportional to the fourth power of the ultrasound frequency, and
so higher operating frequencies are often used for blood velocity measurements.
• The Doppler shift can be increased by using higher ultrasound frequencies, but in this
case the maximum depth at which vessels can be measured decreases due to increased
attenuation of the beam at the higher operating frequencies.
• Equation (3.43), the Doppler equation (written out after this list), also shows that an accurate
measurement of blood velocity can only be achieved if the beam-to-flow angle is known.
• This angle is usually estimated from simultaneously acquired B-mode scans using
"duplex imaging"
• Doppler measurements can be performed either in Continuous Wave or pulsed mode,
depending upon the particular application.
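The equation referred to above is the standard Doppler-shift expression, which in its usual form reads

fD = 2 f0 v cos θ / c

where f0 is the transmitted frequency, v is the blood velocity, θ is the angle between the beam and the direction of flow, and c is the speed of sound in tissue (the equation number follows the source textbook). The cos θ factor is why the beam-to-vessel angle must be known before a velocity can be reported.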
Continuous Wave Doppler Measurements
• CW Doppler measurements are used when there is no need to localize exactly the
source of the Doppler shifts.
• A continuous wave of ultrasound is transmitted by one transducer and the backscattered
signal is detected by a second one: usually both transducers are housed in the same
physical structure.
• The transducers are fabricated with only a small degree of mechanical damping in order
to increase the intensity of the signal transmitted at the fundamental frequency
• The region of overlap of the sensitive regions of the two transducers defines the area in
which blood flow is detected.
• This area is often quite large, and problems in interpretation can occur when there is
more than one blood vessel within this region.
• The measured blood velocity is the average value over the entire sensitive region. The
advantages of CW Doppler over pulsed Doppler methods, in which exact localization
is possible, are that the method is neither limited to a maximum depth nor to a maximum
measurable velocity
Clinical applications of Doppler ultrasound include:
• Finding clots
• Check blood flow in your veins, arteries, and heart
• Look for narrowed or blocked arteries
• See how blood flows after treatment
• Look for bulging in an artery which is called an aneurysm
18. What is Colour flow mapping/ Color Doppler / color flow Doppler /
duplex ultrasonography?
• The term "duplex" refers to the fact that two modes of ultrasound are used, Doppler and
B-mode. The B-mode transducer (like a microphone) obtains an image of the vessel
being studied.
• The Doppler probe within the transducer evaluates the velocity and direction of blood
flow in the vessel.
• Color Doppler or color flow Doppler is the presentation of the velocity by color scale.
• Color Doppler images are generally combined with grayscale (B-mode) images to
display duplex ultrasonography images, allowing for simultaneous visualization of
the anatomy of the area.
• The mean velocity, its sign (positive or negative), and its variance are represented by
the hue, the saturation, and the luminance, respectively, of the color plot.
• Efficient computation of the mean and the variance values is important so that the frame
rate can be as high as possible.
When an ultrasound wave reaches an interface between two materials, several things will happen:
• attenuation,
• reflection,
• refraction.
Reflection
• Earlier, it was explained that the change in acoustic impedance between tissue at an
interface can affect how much of the ultrasound wave is reflected.
• This could be the surface transitioning from fat to muscle tissue or it could be the surface
of a cyst or mass within a soft tissue.
• The angle of the reflected beam is equal to the angle of the incident beam assuming the
surface is smooth. In a perfect 90 degree angle, the beams will reflect back directly
towards the transducer
Refraction
• Refraction is governed by Snell’s Law and describes the change in direction of the transmitted
sound when it strikes the boundary of two tissues at an oblique angle.
• Refraction takes place at an interface due to the different velocities of the acoustic waves
within the two materials.
• The velocity of sound in each material is determined by the material properties (elastic
modulus and density) for that material.
If the wave propagates faster in the second medium, then the angle will decrease such that it
will bend towards the X-axis. If the wave propagates slower in the second medium, then the
angle will increase such that it will bend away from the X-axis.
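For reference, Snell's law for this situation can be written as

sin θi / sin θt = c1 / c2

where θi and θt are the angles of the incident and transmitted (refracted) beams measured from the normal to the interface, and c1 and c2 are the velocities of sound in the first and second media.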
Attenuation
• When sound travels through a medium, its intensity diminishes with distance.
• This weakening results from scattering and absorption.
• Scattering is the reflection of the sound in directions other than its original direction of
propagation.
• Absorption is the conversion of the sound energy to other forms of energy.
• The combined effect of scattering and absorption is called attenuation.
• Ultrasonic attenuation is the decay rate of the wave as it propagates through material.
• All media attenuate ultrasound, so that the intensity of a plane wave propagating in the
x direction decreases exponentially with distance as
I(x) = I0 exp[-(μa + μs)x]
where
• μa is the intensity absorption coefficient, and
• μs is the intensity scattering coefficient.
Absorption
• Absorption results in the conversion of the wave energy to heat, and is responsible for
the temperature rise made use of in physiotherapy, ultrasound-induced hyperthermia
and high-intensity focused ultrasound (HIFU) therapy.
• There are many mechanisms by which heat conversion may occur, although they are often
discussed in terms of three classes.
• Classical mechanisms, which for tissues are small and involve mainly viscous (frictional)
losses, give rise to an f² frequency dependence.
• Molecular relaxation, in which the temperature or pressure fluctuations associated with
the wave cause reversible alterations in molecular configuration, are thought to be
predominantly responsible for absorption in tissue (except bone and lung), and, because
there are likely to be many such mechanisms simultaneously in action, produce a variable
frequency dependence close to, or slightly greater than, f.
• Finally, relative motion losses, in which the wave induces movement of small-scale
structural elements of tissue, are thought to be potentially important. A number of such
loss mechanisms might also produce a frequency dependence of absorption somewhere
between f and f².
• Generally, however, one can say that, increasing molecular complexity results in
increasing absorption. For tissues, a higher protein content (especially structural proteins
such as collagen), or a lower water content, is associated with greater absorption of sound.
Scattering
• Pulse repetition frequency (PRF) indicates the number of ultrasound pulses emitted
by the transducer over a designated period of time.
• It is typically measured as cycles per second or hertz (Hz). In medical ultrasound the
typically used range of PRF varies between 1 and 10 kHz.
21. State the principle that determines the shape of the ultrasound beam
and discuss on how the shape of the beam and wavelength are
related.
Beam shape
• The area through which the sound energy emitted from the ultrasound transducer
travels is known as the ultrasound beam.
• The beam is three-dimensional and is symmetrical around its central axis. It can be
subdivided into two regions: a near field (or Fresnel zone) which is cylindrical in
shape, and a far field (or Fraunhofer zone) where it diverges and becomes cone-
shaped.
• The actual shape of the beam depends on a number of factors, including the diameter
of the crystal, the frequency and wavelength, the design of the transducer, and the
amount of focusing applied to the beam.
• Increasing the frequency will result in a longer near field and less far field divergence.
A narrow crystal diameter will result in a narrower beam in the near field, but the
disadvantage is that the near field is shorter and there is more divergence in the far
field.
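For an unfocused circular element these trends are often summarised by the standard textbook expressions (quoted here for reference; they are not given explicitly in the notes):

Near-field (Fresnel zone) length: N = D²/(4λ)
Far-field divergence: sin θ = 1.22 λ/D

where D is the element diameter and λ is the wavelength. A higher frequency (smaller λ) therefore lengthens the near field and reduces the far-field divergence, while a smaller diameter shortens the near field and increases the divergence, consistent with the points above.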
MIT-MODULE2-SCET
2. Computed Tomography
• Motion of the X-ray source and the detector occurred in two ways, linear and rotational.
• In Figure 1.24, M linear steps were taken with the intensity of the transmitted X-rays
being detected at each step.
• This produced a single projection with M data points. Then both the source and detector
were rotated by (180/N) degrees, where N is the number of rotations in the complete
scan, and a further M translational lines were acquired at this angle.
• The total data matrix acquired was therefore M x N points.
• Image reconstruction takes place in parallel with data acquisition in order to minimize
the delay between the end of data acquisition and the display of the images on the
operator's console.
• As the signals corresponding to one projection are being acquired, those from the
previous projection are being amplified and digitized, and those from the projection
previous to that are being filtered and processed.
• In order to illustrate the issues involved in image reconstruction, consider the raw
projection data that would be acquired from a simple object such as an ellipse with a
uniform attenuation coefficient, as shown in Figure 1.27.
• The reconstruction goal is illustrated on the right of Figure 1.27 for a simple 2 x 2
matrix of tissue attenuation coefficients: given a series of measured intensities I1, I2, I3 and I4,
the attenuation coefficients µ1, µ2, µ3 and µ4 can be calculated.
• For each projection, the signal intensity recorded by each detector depends upon the
attenuation coefficient and the thickness of each tissue that lies between the X-ray
source and that particular detector.
• For the simple case shown on the right of Figure 1.27, two projections are acquired,
each consisting of two data points: projection 1 (I1, I2) and projection 2 (I3, I4). If the
image to be reconstructed is also a two-by-two matrix, then the intensities of the
projections can be expressed in terms of the linear attenuation coefficients by
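(A representative form of these expressions is reconstructed below, assuming µ1 and µ2 form the top row of the 2 x 2 matrix, µ3 and µ4 the bottom row, and that projection 1 is acquired vertically and projection 2 horizontally.)

I1 = I0 exp[-(µ1 + µ3)x]    I2 = I0 exp[-(µ2 + µ4)x]
I3 = I0 exp[-(µ1 + µ2)x]    I4 = I0 exp[-(µ3 + µ4)x]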
• where x is the dimension of each pixel. It might seem that this problem could be solved
by matrix inversion or similar techniques.
• These approaches are not feasible, however, first due to the presence of noise in the
projections (high noise levels can cause direct inversion techniques to become
unstable), and second because of the large amount of data collected.
• If the data matrix size is, for example, 1024 x 1024, then matrix inversion techniques
become very slow. Image reconstruction, in practice, is carried out using either
backprojection algorithms or iterative techniques.
• Since there was only one detector, calibration was easy and there was no problem
with having to balance multiple detectors; also, costs were minimised.
• The scatter rejection of this first-generation system was higher than that of any other
generation because of the 2D collimation at both source and detector.
• The system was slow, however, with typical acquisition times of 4 min per section,
even for relatively low-resolution images.
Gantry
• The gantry houses and provides support for the rotation motors, HV generator, X-ray
tube (one or two), detector array and preamplifiers (one or two), temperature control
system and the slip rings.
• Slip rings enable the X-ray tube (and detectors in a third-generation system) to rotate
continuously. The HV cables have been replaced with conductive tracks on the slip
rings that maintain continuous contact with the voltage supply via graphite or silver
brushes.
• Two slip-ring designs were initially used in commercial scanners: (1) low-voltage slip
rings, in which the HV transformer was mounted on the rotating part of the gantry with
the X-ray tube, and only low voltages were present on the stationary and moving parts
and (2) high-voltage slip rings, in which the HV was generated in the stationary part of
the gantry, thus reducing the inertia of the moving parts. The low-voltage design is now
universally adopted.
X-Ray Generation
• The basic components of the X-ray source, also referred to as the X-ray tube, used for
clinical diagnoses are shown in Figure 1.3.
• The production of X-rays involves accelerating a beam of electrons to strike the surface
of a metal target.
• The X-ray tube has two electrodes, a negatively charged cathode, which acts as the
electron source, and a positively charged anode, which contains the metal target.
• A potential difference of between 15 and 150 kV is applied between the cathode and
the anode; the exact value depends upon the particular application.
• This potential difference is in the form of a rectified alternating voltage, which is
characterized by its maximum value, the kilovolts peak (kVp).
• The maximum value of the voltage is also referred to as the accelerating voltage. The
cathode consists of a filament of tungsten wire coiled to form a spiral approximately 2 mm in
diameter and less than 1 cm in height.
• An electric current from a power source passes through the cathode, causing it to heat
up. When the cathode temperature reaches approximately 2,200°C the thermal energy
absorbed by the tungsten atoms allows a small number of electrons to move away from
the metallic surface, a process termed thermionic emission.
• A dynamic equilibrium is set up, with electrons having sufficient energy to escape from
the surface of the cathode, but also being attracted back to the metal surface.
• The large positive voltage applied to the anode causes these free electrons created at
the cathode surface to accelerate toward the anode.
• The spatial distribution of these electrons striking the anode correlates directly with the
geometry of the X-ray beam that enters the patient.
• Since the spatial resolution of the image is determined by the effective spot size, shown
in Figure, the cathode is designed to produce a tight, uniform beam of electrons. In
order to achieve this, a negatively charged focusing cup is placed around the cathode to
reduce divergence of the electron beam.
• The larger the negative potential applied to the cup, the narrower is the electron beam.
If an extremely large potential (about 2 kV) is applied, then the flux of electrons can be
switched off completely.
• At the anode, X-rays are produced as the accelerated electrons penetrate a few tens of
micrometers into the metal target and lose their kinetic energy. This energy is converted
into X-rays.
Collimators
• The geometry of the X-ray beam emanating from the source is a divergent beam.
• Often, the dimensions of the beam when it reaches the patient are larger than the desired
FOV of the image.
• This has two undesirable effects, the first of which is that the patient dose is increased
unnecessarily. The second effect is that the number of Compton-scattered X-rays
contributing to the image is greater than if the extent of the beam had been matched to
the image FOV
• In order to restrict the dimensions of the beam, a collimator, also called a beam
restrictor, is placed between the X-ray source and the patient.
• The collimator consists of sheets of lead, which can be slid over one another to restrict
the beam in either one or two dimensions.
Antiscatter Grids
• Ideally, all of the X-rays reaching the detector would be primary radiation, with no
contribution from Compton-scattered X-rays.
• In this case, image contrast would be affected only by differences in attenuation from
photoelectric interactions in the various tissues.
• However, in practice, a large number of X-rays that have undergone Compton
scattering reach the detector.
• As mentioned previously, the contrast between tissues from Compton-scattered X-rays
is inherently low.
• In addition, secondary radiation contains no useful spatial information and is distributed
randomly over the film, thus reducing image contrast further.
• The effect of scattered radiation on the X-ray image is shown schematically in Figure
1.11.
• Collimators can be used to restrict the beam dimensions to the image FOV and therefore
decrease the number of scattered X-rays contributing to the image, but even with a
collimator in place secondary radiation can represent between 50% and 90% of the X-
rays reaching the detector.
• Additional measures, therefore, are necessary to reduce the contribution of Compton-
scattered X-rays.
• One method is to place an antiscatter grid between the patient and the X-ray detector.
• This grid consists of strips of lead foil interspersed with aluminum as a support, with
the strips oriented parallel to the direction of the primary radiation, as shown in Figure
1.12.
• The properties of the grid are defined in terms of the grid ratio and the strip line density,
where h, t, and d are the length of the lead strips, their thickness, and the distance
between the centers of the strips, respectively.
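One common textbook form of these definitions (an assumed form, quoted for reference; in these formulas d is taken as the gap between adjacent strips) is:

grid ratio = h/d
strip line density = 1/(t + d)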
• The most common detectors for CT scanners are xenon-filled ionization chambers,
shown in Figure 1.26.
• Because xenon has a high atomic number of 54, there is a high probability of
photoelectric interactions between the gas and the incoming X-rays.
• The xenon is kept under pressure at approximately 20 atm to increase further the number of
interactions between the X-rays and the gas molecules.
• An array of interlinked ionization chambers, typically 768 in number (although some
commercial scanners have up to 1000), is filled with gas, with metal electrodes
separating the individual chambers.
• X-rays transmitted through the body ionize the gas in the detector, producing electron-
ion pairs.
• These are attracted to the electrodes by an applied voltage difference between the
electrodes, and produce a current which is proportional to the number of incident X-
rays .
• Each detector electrode is connected to a separate amplifier, and the outputs of the
amplifiers are multiplexed through a switch to a single AD converter.
• The digitized signals are logarithmically amplified and stored for subsequent image
reconstruction. In this design of the ionization chamber, the metal electrode plates also
perform the role of an antiscatter grid, with the plates being angled to align with the
focal spot of the X-ray tube. The plates are typically 10 cm in length, with a gap of 1
mm between adjacent plates.
Patient Couch
• The patient couch must be able not only to support the weight of the patient (which may
exceed 150kg), but also translate the patient into the gantry aperture without flexing
whilst achieving a positional accuracy of the order of 1mm.
• In addition, it must be radiolucent, safe for the patient and easy to clean.
• Most modern couches are made of a carbon fibre composite which can provide the
required rigidity and radiolucency.
Computer System
• During the course of a CT scan, a multitude of tasks take place: interfacing with the
operator, gantry and couch movement control; acquiring, correcting, reconstructing and
storing the data; image display and archive; generating hard copies, and pushing images
to PACS and networked workstations.
• For a clinical scanner to work efficiently, as many as possible of these tasks must take
place concurrently.
• This was originally achieved with a multiprocessor system and parallel architecture,
but more recently with a multitasking workstation.
Subquestions:
1) What is backprojection?
2) What is filtered backprojection?
3) Explain iterative algorithm with an example?
4) Discuss the radon transform?
5) Discuss the Fourier slice theorem?
6) Explain the Fourier method of image reconstruction
For each projection, the signal intensity recorded by each detector depends upon the
attenuation coefficient and the thickness of each tissue that lies between the X-ray
source and that particular detector.
For the simple case shown on the right of Figure, two projections are acquired, each
consisting of two data points: projection 1 and projection 2.
where x is the dimension of each pixel. It might seem that this problem could be
solved by matrix inversion or similar techniques. These approaches are not feasible,
however, first due to the presence of noise in the projections (high noise levels can
cause direct inversion techniques to become unstable), and second because of the
large amount of data collected. If the data matrix size is, for example, 1024 x 1024,
then matrix inversion techniques become very slow.
Image reconstruction, in practice, is carried out using either backprojection algorithms
or iterative techniques.
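As a concrete illustration of backprojection-based reconstruction (a minimal sketch, assuming NumPy and scikit-image are installed; the uniform test ellipse and the 180 projection angles are illustrative choices, not values from the notes):

import numpy as np
from skimage.transform import radon, iradon

# Simple test object: a uniform ellipse, loosely analogous to Figure 1.27.
image = np.zeros((128, 128))
y, x = np.ogrid[-64:64, -64:64]
image[(x / 50.0) ** 2 + (y / 30.0) ** 2 <= 1.0] = 1.0

# Acquire the projections (the sinogram) at 180 angles over 180 degrees.
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=angles)

# Filtered backprojection: iradon applies its default ramp filter to each
# projection and smears it back across the image plane along its own direction.
reconstruction = iradon(sinogram, theta=angles)

print("max absolute reconstruction error:", np.abs(reconstruction - image).max())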
Radon transform:
o Each X-ray projection p(r, Ф) can therefore be expressed in terms of the Radon transform
of the object being studied, where p(r, Ф) refers to the projection data acquired as a function
of r, the distance along the projection, and Ф, the rotation angle of the X-ray source and detector.
o Reconstruction of the image therefore requires computation of the inverse Radon transform
of the acquired projection data.
o The most common methods of implementing the inverse Radon transform use
backprojection or filtered backprojection.
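In its standard form, using the (r, s) measurement coordinates described in the next paragraph (with s measured along the ray), the Radon transform referred to above can be written as

p(r, Ф) = ∫ f(x, y) ds

i.e. each projection value is the line integral of the object function along the ray at detector position r and source/detector angle Ф.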
A number of one-dimensional projections, p1, p2, ..., pn, are acquired with the
detector oriented at different angles with respect to the object, as shown in Figure.
In the following analyses, the object is represented as a function f(x, y), in which the
spatially dependent values of f correspond to attenuation coefficients in X-ray CT.
The coordinate system in the measurement frame is represented by (r, s), where r
is the direction parallel to the detector and s is the direction along the ray at 90° to the
r dimension.
The angle between the x and r axes is denoted as Ф, and so by simple trigonometry:
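r = x cos Ф + y sin Ф and s = -x sin Ф + y cos Ф

(these are the standard rotation relations between the fixed (x, y) frame and the rotated (r, s) measurement frame).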
2) Iterative reconstruction
Steps:
• These algorithms start with an initial estimate of the two-dimensional matrix of
attenuation coefficients.
• By comparing the projections predicted from this initial estimate with those that are
actually acquired, changes are made to the estimated matrix.
• This process is repeated for each projection, and then a number of times for the whole
dataset, until the residual error between the measured data and those from the estimated
matrix falls below a predesignated value.
Example:
• The value of the MSE per pixel after the first iteration is approximately 0.0325 I0².
• The next iteration forces the estimated data to agree with the measured vertical
projection.
• Consider the component that passes through pixels µI, µ5, µ9, and µ13
• The measured data is 0.4I0, but the calculated data using the first iteration is 0.22I0,
• The values of the attenuation coefficients have been overestimated and must be
reduced.
• The exact amount by which the attenuation coefficients µI, µ5, µ9, and µ13 should
be reduced is unknown, and again the simple assumption is made that each value
should be reduced by an equal amount.
• Applying this procedure to all four components of the horizontal projection gives
the estimated matrix shown on the right of Figure
• Now, of course, the estimated projection data do not agree with the measured data of
the horizontal projection, but the MSE per pixel has been reduced to 0.005I0².
• In a practical realization of a full ray-by-ray iterative reconstruction, many more
projections would be acquired and processed.
• After a full iteration of all of the projections, the process can be repeated a number of
times until the desired accuracy is reached or further iterations produce no significant
improvements in the value of the MSE.
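A toy version of this ray-by-ray correction, written in Python for a 2 x 2 matrix of attenuation coefficients, is sketched below. Following the simple assumption described above, the difference between each measured and estimated ray sum is shared equally among the pixels on that ray; the numerical values are illustrative only.

```python
import numpy as np

true_mu = np.array([[0.2, 0.1],
                    [0.4, 0.3]])              # "unknown" attenuation coefficients

rays = [  # (pixels crossed by the ray, measured ray sum)
    ([(0, 0), (0, 1)], true_mu[0].sum()),     # horizontal rays
    ([(1, 0), (1, 1)], true_mu[1].sum()),
    ([(0, 0), (1, 0)], true_mu[:, 0].sum()),  # vertical rays
    ([(0, 1), (1, 1)], true_mu[:, 1].sum()),
]

estimate = np.zeros((2, 2))                   # initial estimate of the matrix
for _ in range(10):                           # repeat for the whole dataset
    for pixels, measured in rays:
        current = sum(estimate[p] for p in pixels)
        correction = (measured - current) / len(pixels)   # share the error equally
        for p in pixels:
            estimate[p] += correction

print(np.round(estimate, 3))                               # recovers the true values here
print("MSE per pixel:", np.mean((estimate - true_mu) ** 2))
```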
3) Analytical methods
The projection-slice theorem, central slice theorem or Fourier slice theorem in two
dimensions states that the results of the following two calculations are equal:
• Take a two-dimensional function f(r), project (e.g. using the Radon transform) it
onto a (one-dimensional) line, and do a Fourier transform of that projection.
• Take that same function, but do a two-dimensional Fourier transform first, and
then slice it through its origin, which is parallel to the projection line.
In operator terms, if F1 and F2 denote the one- and two-dimensional Fourier transform
operators, P1 is the projection operator (which projects a 2-D function onto a 1-D line),
and S1 is the slice operator (which extracts a 1-D central slice through the origin), then
F1 P1 = S1 F2.
This idea can be extended to higher dimensions.
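The theorem is easy to check numerically. The short NumPy sketch below uses the angle Ф = 0, for which the projection is simply a sum along one image axis, and compares the 1-D Fourier transform of that projection with the corresponding central line of the 2-D Fourier transform.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((64, 64))                  # an arbitrary test "image" f(x, y)

# Route 1: project onto a line (sum over y), then take a 1-D Fourier transform.
projection = f.sum(axis=0)
ft_of_projection = np.fft.fft(projection)

# Route 2: 2-D Fourier transform first, then slice through the origin
# parallel to the projection line (the ky = 0 row).
central_slice = np.fft.fft2(f)[0, :]

print(np.allclose(ft_of_projection, central_slice))   # True
```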
Conventional CT Systems:
• Tube Rotates Around Stationary Patient (Table is Incremented Between Acquisitions)
• All Views in a Slice are at Same Table Position
• Power to X-Ray Tube via Cord
• Scan CW and CCW to Wind/Unwind Cord
• Interscan Delays: 3.5 Seconds Between Slices
Differences of spiral CT from Conventional:
• Continuous Tube Rotation - No Interscan Delays (Power to X-ray Tube via Slip Ring)
• Continuous Table Motion as Tube Rotates
• Each View is at a Different Table Position
• Images are Formed by Synthesizing Projection Data via Interpolation
• MSCT also differs from conventional single-slice CT in terms of the image
reconstruction algorithm it employs.
• Helical scanning with 4-row detectors provides data that are several times as
large as those of conventional single-slice CT and have higher density.
• These data are used to calculate a high-resolution section image of the target site using
a technique called Z-axis multiple-point weighted interpolation.
• To obtain high image quality with multislice helical scanning, it is important to
determine the distance moved by the patient table during one rotation of the scanner.
• A concept known as pitch (helical pitch) is usually used as an index of the distance of
table movement. The helical pitch is determined by dividing the distance of the table
movement per rotation of the X-ray tube by the detector width equivalent to 1 slice.
• In conventional helical scanning, the table moves for a distance equal to the slice width
during 1 rotation of the X-ray tube (that is, pitch 1).
• In contrast, the pitch can be adjusted up to 6 in MSCT: the table can be moved by a half
channel per rotation (pitch 3.5 or 4.5) or by 1 channel per rotation (pitch 3).
• The pitch is closely correlated with image quality and exposure dose.
• In general, image quality is improved and exposure dose is increased as the pitch is
reduced, while image quality is worsened and the exposure dose is reduced as the pitch
is increased.
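A short worked example of this pitch definition, using assumed (illustrative) values for a 4-row scanner:

```python
# Helical pitch = table travel per tube rotation / detector width equivalent to 1 slice.
table_feed_per_rotation_mm = 30.0   # assumed table movement during one rotation
single_slice_width_mm = 5.0         # assumed detector width equivalent to one slice

pitch = table_feed_per_rotation_mm / single_slice_width_mm
print(f"helical pitch = {pitch:.1f}")   # 6.0: faster coverage and lower dose,
                                        # at the cost of reduced image quality
```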
8. Summarize the X-ray tubes used in CT. Explain the types of X-ray tubes that have
been utilized for computed tomography.
• The X-ray source is tightly collimated to interrogate a thin "slice" through the patient.
• The source and detectors rotate together around the patient, producing a series of one-
dimensional projections at a number of different angles
• These data are reconstructed to give a two-dimensional image, as shown on the right of
Figure 1.2. CT images have a very high spatial resolution (approx. 1 mm) and provide
reasonable contrast between soft tissues.
• In addition to anatomical imaging, CT is the imaging method that can produce the
highest resolution angiographic images, that is, images that show blood flow in vessels.
• Recent developments in spiral and multislice CT have enabled the acquisition of full
three-dimensional images in a single patient breath-hold.
• The major disadvantage of both X-ray and CT imaging is the fact that the technique
uses ionizing radiation. Because ionizing radiation can cause tissue damage, there is a
limit on the total radiation dose per year to which a patient can be subjected. Radiation
dose is of particular concern in pediatric and obstetric radiology.
• In general, the temporal resolution is much slower than for ultrasound or computed
tomography, with scans typically lasting between 3 and 10 min, and MRI is therefore
much more susceptible to patient motion.
• The cost of MRI scanners is relatively high, with the price of a typical clinical 1.5-T
whole-body imager on the order of $1.5 million.
• The major uses of MRI are in the areas of assessing brain disease, spinal disorders,
angiography, cardiac function, and musculoskeletal damage.
• The MRI signal arises from protons in the body, primarily water, but also lipid.
• The patient is placed inside a strong magnet, which produces a static magnetic field
typically more than 10,000 times stronger than the earth's magnetic field.
• Each proton, being a charged particle with angular momentum, can be considered as
acting as a small magnet.
• The protons align in two configurations, with their internal magnetic fields aligned
either parallel or antiparallel to the direction of the large static magnetic field, with
slightly more found in the parallel state.
• The protons precess around the direction of the static magnetic field, in an analogous
way to a spinning gyroscope under the influence of gravity.
• The frequency of precession is proportional to the strength of the static magnetic field.
• Application of a weak radiofrequency (RF) field causes the protons to precess
coherently.
• The sum of all of the protons precessing is detected as an induced voltage in a tuned
detector coil.
• Spatial information is encoded into the image using magnetic field gradients.
• These impose a linear variation in all three dimensions in the magnetic field present
within the patient
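Because the precession (Larmor) frequency is proportional to the static field strength, it can be computed directly. The sketch below uses the standard proton value γ/2π ≈ 42.58 MHz/T; the field strengths are typical clinical examples.

```python
GAMMA_BAR_MHZ_PER_T = 42.58            # proton gyromagnetic ratio / 2*pi (MHz per tesla)

for b0_tesla in (0.5, 1.5, 3.0):
    larmor_mhz = GAMMA_BAR_MHZ_PER_T * b0_tesla
    print(f"B0 = {b0_tesla:.1f} T  ->  Larmor frequency ~ {larmor_mhz:.1f} MHz")
# 1.5 T gives ~63.9 MHz, the frequency quoted later for signal demodulation.
```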
• In MRI the patient is placed inside a very strong magnet for scanning.
• A typical value of the magnetic field, denoted Bo, is 1.5 T (15,000 G), which can be
compared to the earth's magnetic field of approximately 50 µT (0.5 G).
• The MRI signal arises from the interaction between the magnetic field and hydrogen
nuclei, or protons, which are found primarily as water in tissue and also lipid.
• All nuclei with an odd atomic weight and/or an odd atomic number possess a
fundamental quantum mechanical property termed "spin."
• For MRI the most important nucleus is the hydrogen nucleus, or proton.
• Although not a rigorously accurate model, the property of spin can be viewed as a
proton spinning around an internal axis of rotation, giving it a certain value of angular
momentum P.
• Because the proton is a charged particle, this rotation gives the proton a magnetic
moment µ.
• This magnetic moment produces an associated magnetic field, which has a
configuration similar to that of a bar magnet, as shown in Figure.
• In the absence of an external magnetic field the orientation of the individual magnetic
moments is random.
• In order to obtain an MRI signal, transitions must be induced between the protons in
the parallel and the antiparallel energy levels.
• Before the rf pulse is switched on the net magnetisation, Mo, is at equilibrium, aligned
along the z-axis in the same direction as Bo
• When the rf pulse is switched on, the net magnetisation begins to move away from its
alignment with the Bo field and rotate around it.
• The net magnetisation is the result of the sum of many individual magnetic moments.
• So long as they rotate together (a condition known as coherence) they will produce a
net magnetisation that is rotating.
• The greater the amount of energy applied by the rf pulse, the greater the angle that the
net magnetisation makes with the Bo field (the z axis).
• Once the rf pulse has caused the net magnetisation to make an angle with the z-axis, it
can be split into two components.
• One component is parallel to the z-axis. This is known as the z-component of the
magnetisation, Mz, also known as the longitudinal component.
• The other component lies at right angles to the z axis within the plane of the x and y
axes and is known as the x-y component of the net magnetisation, Mxy, or the
transverse component.
• The transverse component rotates at the Larmor frequency within the xy plane and as
it rotates, it generates its own small, oscillating magnetic field which is detected as an
MR signal by the rf receiver coil.
• The 90° rf excitation pulse delivers just enough energy to rotate the net magnetisation
through 90°
• This transfers all of the net magnetisation from the z-axis into the xy (transverse) plane,
leaving no component of magnetisation along the z-axis immediately after the pulse.
• When applied once, a 90° rf pulse produces the largest possible transverse
magnetisation and MR signal.
• Low flip angle rf excitation pulses rotate the net magnetisation through a pre-defined
angle of less than 90°
• A low flip angle is represented by the symbol α or can be assigned a specific value, e.g. 30°.
Only a proportion of the net magnetisation is transferred from the z axis into the xy
plane, with some remaining along the z axis.
• While a low flip angle rf pulse produces an intrinsically lower signal than the 90°
excitation pulse described above, it can be repeated more rapidly as some of the
magnetisation remains along the z-axis immediately after the pulse.
• This excitation pulse is used to generate the signal in gradient echo pulse sequences to
control the amount of magnetisation that is transferred between the z-axis and the xy
plane for fast imaging applications.
• The 180° refocusing pulse is used in spin echo pulse sequences after the 90° excitation
pulse, where the net magnetisation has already been transferred into the x-y plane.
• It flips the direction of the magnetisation in the x-y plane through 180°
• Immediately after the rf pulse the spin system starts to return back to its original state,
at equilibrium. This process is known as relaxation.
• There are two distinct relaxation processes that relate to the two components of the Net
Magnetisation, the longitudinal (z) and transverse (xy) components
• longitudinal relaxation/ T1 relaxation- The first relaxation process, longitudinal
relaxation, commonly referred to as T1 relaxation is responsible for the recovery of the
z component along the longitudinal (z) axis to its original value at equilibrium.
• Longitudinal and transverse relaxation both occur at the same time; however, transverse
relaxation is typically a much faster process in human tissue.
• Over time, for reasons explained in a moment, the phase angles gradually spread out,
there is a loss of coherence and the magnetic moments no longer rotate together and
they are said to move ‘out of phase’.
• The net sum of the magnetic moments is thus reduced, resulting in a reduction in the
measured net (transverse) magnetisation.
• The signal that the receiver coil detects (if no further rf pulses or magnetic field
gradients are applied) is therefore seen as an oscillating magnetic field that gradually
decays, known as a free induction decay (FID).
• There are two causes of this loss of coherence. Firstly, the presence of interactions
between neighbouring protons causes a loss of phase coherence known as T2
relaxation
• This arises from the fact that the rate of precession for an individual proton depends on
the magnetic field it experiences at a particular instant.
• While the applied magnetic field Bo is constant, it is however possible for the magnetic
moment of one proton to slightly modify the magnetic field experienced by a
neighbouring proton.
• As the protons are constituents of atoms within molecules, they are moving rapidly and
randomly and so such effects are transient and random.
• The net effect is for the Larmor frequency of the individual protons to fluctuate in a
random fashion, leading to a loss of coherence across the population of protons.
• i.e. the spins gradually acquire different phase angles, pointing in different directions
to one another and are said to move out of phase with one another (this is often referred
to as de-phasing).
• The resultant decay of the transverse component of the magnetisation (Mxy) has an
exponential form with a time constant, T2, hence this contribution to transverse
relaxation is known as T2 relaxation
• As it is caused by interactions between neighbouring proton spins it is also sometimes
known as spin-spin relaxation.
• Due to the random nature of the spin-spin interactions, the signal decay caused by T2
relaxation is irreversible.
T2* relaxation
• The second cause for the loss of coherence (de-phasing) relates to local static variations
(inhomogeneities) in the applied magnetic field, Bo which are constant in time.
• If this field varies between different locations, then so does the Larmor frequency.
• Protons at different spatial locations will therefore rotate at different rates, causing
further de-phasing so that the signal decays more rapidly.
• In this case, as the cause of the variation in Larmor frequency is fixed, the resultant de-
phasing is potentially reversible.
• The combined effect of T2 relaxation and the effect of magnetic field non-uniformities
is referred to as T2* relaxation, and this determines the actual rate of decay observed
when measuring an FID signal.
• T2* relaxation is also an exponential process with a time constant T2*.
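The two time constants can be compared directly, since both decays are exponential. The sketch below evaluates Mxy(t) = M0 exp(-t/T2) and M0 exp(-t/T2*) for assumed values of T2 and T2* (T2* is always the shorter of the two).

```python
import numpy as np

M0 = 1.0
T2_ms, T2_star_ms = 100.0, 40.0        # assumed, illustrative time constants

t_ms = np.linspace(0, 200, 5)
Mxy_T2 = M0 * np.exp(-t_ms / T2_ms)            # irreversible spin-spin (T2) decay
Mxy_T2_star = M0 * np.exp(-t_ms / T2_star_ms)  # faster decay actually seen in the FID

for t, a, b in zip(t_ms, Mxy_T2, Mxy_T2_star):
    print(f"t = {t:5.1f} ms   Mxy(T2) = {a:.3f}   Mxy(T2*) = {b:.3f}")
```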
• The shorter the T1 time constant is, the faster the relaxation process and the return to
equilibrium.
• T1 relaxation involves the release of energy from the proton spin population as it returns
to its equilibrium state.
• The rate of relaxation is related to the rate at which energy is released to the
surrounding molecular structure.
• This in turn is related to the size of the molecule that contains the hydrogen nuclei and
in particular the rate of molecular motion, known as the tumbling rate of the
particular molecule.
• As molecules tumble or rotate they give rise to a fluctuating magnetic field which is
experienced by protons in adjacent molecules.
• For example, lipid molecules are of a size that gives rise to a tumbling rate which is
close to the Larmor frequency and therefore extremely favourable for energy exchange.
• Fat therefore has one of the fastest relaxation rates of all body tissues and therefore the
shortest T1 relaxation time
• Larger molecules have much slower tumbling rates that are unfavourable for energy
exchange, giving rise to long relaxation times.
• Free water, because of its small molecular size, has a much faster molecular tumbling
rate, which is also unfavourable for energy exchange, and it therefore has a long T1
relaxation time.
• Whilst the FID can be detected as a MR signal, for MR imaging it is more common to
generate and measure the MR signal in the form of an echo
• The two most common types of echo used for MR imaging are gradient echoes and
spin echoes.
• Gradient echoes are generated by the controlled application of magnetic field gradients.
• When a magnetic field gradient is switched on it causes proton spins to lose coherence
or de-phase rapidly along the direction of the gradient
• This de-phasing causes the amplitude of the FID signal to rapidly drop to zero
• de-phasing caused by one magnetic field gradient can however be reversed by applying
a second magnetic field gradient along the same direction with a slope of equal
amplitude but in the opposite direction.
• If the second gradient is applied for the same amount of time as the first gradient, the
de-phasing caused by the first gradient is cancelled and the FID re-appears
• It reaches a maximum amplitude at the point at which the spins de-phased by the first
gradient have moved back into phase, or ‘re-phased’.
• If the second gradient then continues to be applied, the FID signal de-phases and
disappears once more.
• The signal that is re-phased through the switching of the gradient direction is known as
a gradient echo.
• The time from the point at which the transverse magnetisation (the FID) is generated
by the rf pulse, to the point at which the gradient echo reaches its maximum amplitude,
is known as the echo time (abbreviated TE).
• Tissues that have recovered less quickly will have a smaller longitudinal magnetisation
before the next rf pulse, resulting in a smaller transverse magnetisation after the rf pulse.
• The short TE limits the influence of the different T2 decay rates. The resultant contrast
is therefore said to be T1-weighted.
• T1-weighted spin echo images are typically characterised by bright fat signal and a low
signal from fluid and are useful for anatomical imaging where high contrast is required
between fat, muscle and fluid.
• T2 decay is the decay of the transverse magnetisation (Mxy) after application of the 90°
RF pulse.
• The longer the time after the 90° RF pulse, the more the Mxy decays and the smaller
the transverse signal.
• As we saw in the spin echo sequence, TE is the "time to echo". If we leave a long TE
we give more time for the Mxy to decay and we get a smaller signal.
• The longer the TE
• The longer the time allowed for Mxy to decay
• The smaller the transverse (T2) signal
• i.e. it is the TE that determines the T2 signal
• The time constant, T2, is the time it takes for the transverse magnetisation Mxy of the
hydrogen nuclei to decay to 37% of its initial value.
• Hydrogen nuclei in different molecules have different T2s.
• Those with a short T2 will take a shorter time to decay than those with a long T2.
• The parameter choice for T2-weighted spin echo is a long TR and long TE.
• The choice of a long TR allows the z-magnetisation to recover close to the equilibrium
values for most of the tissues, therefore reducing the influence of differences in T1
relaxation time.
• The longer echo time however allows more decay of the xy component of the
magnetisation.
• The differential rate of decay between a tissue with a short T2 (e.g. muscle) and a tissue
with a long T2 (e.g. fluid), leads to a difference in signal that is said to be T2-weighted.
• The short T2 leads to a reduced signal intensity, while the long T2 leads to an increased
signal intensity.
• These images are characterised by bright fluid and are useful for the depiction of fluid
collections and the characterisation of cardiac masses and oedema.
Discuss the Proton density-weighted spin echo/ spin density weighted imaging
• The parameter choice for proton density-weighted spin echo is a long TR and short TE .
• The choice of long TR allows recovery of the z-magnetisation for most tissues, therefore
reducing the influence of differences in T1 relaxation time and the 90° excitation pulse
therefore transfers a similar amount of signal into the xy plane for all tissues.
• The choice of a short TE limits the amount of T2 decay for any tissue at the time of
measurement.
• This results in a high signal from all tissues, with little difference between them.
• So the signal amplitude is not particularly affected by the T1 relaxation properties, or by the
T2 relaxation properties.
• The primary determinant of the signal amplitude is therefore the equilibrium magnetisation
of the tissue and the image contrast is said to be ‘proton density’-weighted.
• This type of weighting is useful where the depiction of anatomical structure is required,
without the need to introduce soft tissue contrast.
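The three weightings can be illustrated with the approximate spin-echo signal model S ≈ PD·(1 - exp(-TR/T1))·exp(-TE/T2). The tissue values in the sketch below are rough, assumed orders of magnitude, used only to show how the choice of TR and TE changes the contrast.

```python
import numpy as np

tissues = {               # name: (relative proton density, T1 in ms, T2 in ms) - assumed values
    "fat":    (1.0, 260.0,  80.0),
    "muscle": (0.9, 870.0,  45.0),
    "fluid":  (1.0, 3000.0, 1500.0),
}

def spin_echo_signal(pd, t1, t2, tr, te):
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

settings = {"T1-weighted (short TR, short TE)": (500.0, 15.0),
            "T2-weighted (long TR, long TE)":   (3000.0, 100.0),
            "PD-weighted (long TR, short TE)":  (3000.0, 15.0)}

for label, (tr, te) in settings.items():
    sig = {name: spin_echo_signal(*params, tr, te) for name, params in tissues.items()}
    print(label, {k: round(v, 2) for k, v in sig.items()})
# Fat is brightest on the T1-weighted setting, fluid is brightest on the T2-weighted
# setting, and the differences between tissues are smallest on the PD-weighted setting.
```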
Discuss the steps in localising and encoding MR signals to make an image.
Explain the image acquisition in MRI.
• The MR echo signals produced can be localised and encoded by applying magnetic
field gradients as they are generated to produce an image.
Step 1 - Selection of an image slice
• First, the resonance of protons is confined to a slice of tissue.
• This is done by applying a gradient magnetic field at the same time as the rf excitation
pulse is transmitted
• The frequency of the rf pulse corresponds to the Larmor frequency at a chosen point
along the direction of the applied gradient.
• The result is for resonance only to occur for protons in a plane that cuts through that
point at right angles to the gradient direction, effectively defining a slice of tissue.
• This process is known as slice selection and the gradient is known as the slice selection
gradient, Gs
• Rather than just a single frequency, the transmitted rf pulse is comprised of a small
range of frequencies, known as the transmit bandwidth of the rf pulse.
• This gives the slice a thickness.
• The thickness of the slice is determined by the combination of the rf pulse bandwidth
and the steepness (or strength) of the gradient.
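A small worked example of this relationship, slice thickness ≈ rf bandwidth / (γ/2π × gradient strength), with assumed values:

```python
GAMMA_BAR_HZ_PER_T = 42.58e6      # proton gyromagnetic ratio / 2*pi, in Hz per tesla

rf_bandwidth_hz = 1000.0          # assumed transmit bandwidth of the rf pulse
gradient_t_per_m = 4.7e-3         # assumed slice-selection gradient strength (T/m)

thickness_m = rf_bandwidth_hz / (GAMMA_BAR_HZ_PER_T * gradient_t_per_m)
print(f"slice thickness ~ {thickness_m * 1e3:.1f} mm")   # ~5 mm; a steeper gradient or a
                                                          # narrower bandwidth gives a thinner slice
```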
• The amplitude of each frequency component can be mapped onto a location along the frequency
encoding gradient to determine the relative amount of signal at each location.
• The Fourier Transform can only analyse a signal that changes over time.
• To enable this, a number of signal echoes are generated by repeating the above three-step
process (slice selection, phase encoding and frequency encoding), each time applying the same
slice selection and frequency encoding gradient, but a different amount of phase encoding
• This is done by increasing the strength (or slope) of the phase encoding gradient for each
repetition by equal increments or steps.
• For each phase encoding step the signal echo is measured, digitised and stored in a raw data
matrix.
• Once all the signals for a prescribed number of phase encoding steps have been acquired and
stored, they are analysed together by a two-dimensional (2D) Fourier transform to decode both
the frequency and the phase information (Figure 12).
k-space
• a single pixel in the image may have contributions from all of the MR signals collected.
• Just as each pixel occupies a unique location in image space, each point of an MR signal echo
belongs to a particular location in a related space known as k-space
• There is an inverse relationship between the image space and k-space (Figure 12). Whereas the
coordinates of the image represent spatial position (x and y), the coordinates of k-space
represent 1/x and 1/y, sometimes referred to as spatial frequencies, kx and ky.
• The value of each point in k-space represents how much of a particular spatial frequency is
contained within the corresponding image.
• To make an image that is a totally faithful representation of the imaged subject, it is important
that the whole range of spatial frequencies is acquired (up to a maximum that defines the spatial
resolution of the image), i.e. that the whole of k-space is covered.
• For standard imaging this is done by filling k-space with equally spaced parallel lines of signal
data, line by line, along the kx direction. This is known as a Cartesian acquisition
• The phase encoding gradient determines the position of the line being filled in the ky direction.
• Usually the amplitude of the phase encoding gradient is incremented in steps such that the next
adjacent line in k-space is filled with each successive repetition, starting at one edge of k-space
and finishing at the opposite edge.
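The inverse relationship between k-space and image space can be demonstrated with a two-dimensional Fourier transform. The sketch below builds a toy image, converts it to "k-space", and recovers it exactly because every spatial frequency is present (a fully sampled Cartesian acquisition).

```python
import numpy as np

image = np.zeros((128, 128))
image[40:90, 50:80] = 1.0                        # a simple rectangular "object"

kspace = np.fft.fftshift(np.fft.fft2(image))     # each point = one spatial frequency
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

print(np.allclose(recon, image, atol=1e-10))     # True: full k-space coverage
# The centre of k-space (low spatial frequencies) sets the overall contrast;
# the outer points carry the fine detail that defines spatial resolution.
```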
• Three basic components make up the MRI scanner: the magnet, three magnetic field
gradient coils, and an RF coil.
• The magnet polarizes the protons in the patient
• the magnetic field gradient coils impose a linear variation on the proton Larmor
frequency as a function of position
• the RF coil produces the oscillating magnetic field necessary for creating phase
coherence between protons, and also receives the MRI signal via Faraday induction
• Each MRI system has a number of different-sized RF coils, used according to the
particular part of the body being imaged, which are placed on or around the patient.
• The gradient coils are fixed permanently inside the bore of the superconducting magnet
• In addition to these three elements there is a series of electronic components used to
turn the gradients on and off, to pulse the B1 field, and to amplify and digitize the signal.
• A simplified block diagram of a system is shown in Figure.
• Various components are discussed further in the following sections
Magnet Design
• The purpose of the magnet is to produce a strong, temporally stable, and homogeneous
magnetic field within the patient.
• A strong magnetic field increases the amplitude of the MRI signal, a homogeneous
magnetic field is required so that the tissue T2 value is not too short and images are not
distorted by Bo inhomogeneities, and high stability is necessary to avoid introducing
unwanted artifacts into the image.
• There are three basic types of magnet: permanent, resistive, and superconducting.
1. Permanent magnets
2. Resistive magnets
• In resistive magnets, the magnetic field is created by the passage of a constant current
through a conductor such as copper.
• The strength of the magnetic field is directly proportional to the magnitude of the
current
• thus high currents are necessary to create high magnetic fields.
• However, the amount of power dissipated in the wire is proportional to the resistance
of the conductor and the square of the current.
• Because the power is dissipated in the form of heat, cooling the conductors is a major
problem, and ultimately limits the maximum current, and therefore magnetic field
strength, that can be achieved with a resistive magnet.
• As with permanent magnets, the field homogeneity and the temporal stability of
resistive magnets are highly temperature-dependent.
3. Superconducting magnets
• The solution to the problem of conductor heating is to minimize the resistance of the
conductor by using the phenomenon of superconductivity, in which the resistance of
many conductors becomes zero at very low temperatures.
• In order to create high static magnetic fields, it is still necessary for the conductor to
carry a large current when it is superconducting, and this capability is only possessed
by certain alloys, particularly those made from niobium-titanium.
• The superconducting alloy is usually fashioned into multistranded filaments within a
conducting matrix because this arrangement can support a higher critical current than a
single, larger-diameter superconducting wire.
• This superconducting matrix is housed in a stainless steel can containing liquid helium
at a temperature of 4.2 K, as shown in Figure.
• This can is surrounded by a series of radiation shields and vacuum vessels to minimize
the boil-off of the liquid helium.
• Finally, an outer container of liquid nitrogen is used to cool the outside of the vacuum
chamber and the radiation shields.
• Because heat losses cannot be completely contained, liquid nitrogen and liquid helium
must be replenished on a regular basis.
Magnetic Field Gradient Coils
• the basic principle of MRI requires the generation of magnetic field gradients, in
addition to the static magnetic field, so that the proton resonant frequencies within the
patient are spatially dependent.
• Such gradients are achieved using "magnetic field gradient coils," a term usually
shortened to simply "gradient coils." Three separate gradient coils are required to
encode the x, y, and z dimensions of the image.
• The requirements for gradient coil design are that the gradients are linear over the
region being imaged, that they are efficient in terms of producing high gradient
strengths per unit current, and that they have fast switching times for use in rapid
imaging techniques.
• As in the case of magnet design, a magnetic field gradient is produced by the passage
of current through conducting wires.
• Unlike the design of the magnet, however, the geometry of the conductors for the three
gradient coils must be optimized to produce a linear gradient, rather than a uniform
field.
• Copper at room temperature can therefore be used as the conductor, with chilled-water
cooling being sufficient to remove the heat generated by the current. Because the
gradient coils fit directly inside the bore of the cylindrical magnet, the geometrical
design is usually cylindrical.
Radiofrequency Coils
• in order to produce an MRI signal, magnetic energy must be supplied to the protons at
the Larmor frequency in order to stimulate transitions between the parallel and the
antiparallel nuclear energy levels, thus creating precessing transverse magnetization.
• The particular piece of hardware that delivers this energy is called an RF coil, which is
usually placed directly around, or next to, the tissue to be imaged.
• The same RF coil is also usually used to detect the NMR signal via Faraday induction
• The power needed to generate the RF pulses for clinical systems can be many kilowatts,
whereas the receiver is designed to detect signals only on the order of 1-10 µV.
• Examples of RF coil geometries for imaging different body parts are shown in Figure
• The "birdcage" coil, is a "volume coil" designed to give a spatially uniform magnetic
field over the entire volume of the coil.
• It is typically used for brain, abdominal, and knee studies.
• The circular loop coil , "surface" coil, used to image objects at the surface of the body
with high sensitivity.
• The third type of coil, "phased array," which consists of a series of surface coils.
• These coils are typically used to image large structures such as the spine.
Signal Demodulation, Digitization, and Fourier Transformation
• The oscillating voltage induced in the receiver coil using a standard 1.5-T scanner has
a magnitude between several tens of microvolts and a few millivolts
• Because it is difficult to digitize a signal at this high frequency, 63.9 MHz, using a
high-dynamic-range A/D converter, the signal must be "demodulated" to a lower
frequency before it can be digitized.
• A schematic for the typical components of a receiver used in magnetic resonance is
shown in Figure.
• The voltage induced in the RF coil first passes through a low-noise preamplifier, with
a typical gain factor of 100 and a noise figure of ~0.6 dB.
• If the signal from only the water protons is considered initially, then the induced voltage
s(t) is given by
• The first demodulation step uses a mixer to reduce the frequency of the signal from the
Larmor frequency ωo to an intermediate frequency ωIF, where the value of ωIF is
typically 67.2 × 10⁶ rad s⁻¹ (10.7 MHz).
• A simple circuit for the demodulator is shown in Figure, where the mixer effectively
acts as a multiplier.
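A numerical sketch of this mixing (demodulation) step is shown below: multiplying the received signal by a reference oscillation produces sum and difference frequencies, and a low-pass filter would keep the difference (intermediate) frequency. The frequencies used are illustrative, not actual scanner settings.

```python
import numpy as np

fs = 400e6                              # sampling rate for this simulation (Hz)
n = 8000
t = np.arange(n) / fs                   # 20 microseconds of signal

f_larmor = 63.9e6                       # received signal frequency (Hz)
f_ref = 53.2e6                          # reference chosen so the difference is 10.7 MHz

signal = np.cos(2 * np.pi * f_larmor * t)
mixed = signal * np.cos(2 * np.pi * f_ref * t)   # the mixer acts as a multiplier

# The spectrum of the mixer output shows peaks near 10.7 MHz (kept by the
# low-pass filter that follows the mixer) and near 117.1 MHz (removed).
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(n, 1 / fs)
for f in freqs[spectrum > 0.25 * spectrum.max()]:
    print(f"component near {f / 1e6:.1f} MHz")
```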
What is the BOLD signal? / Explain the principles of functional MRI / Discuss the
applications.
• The change in blood flow actually exceeds that which is needed so that, at the capillary
level, there is a net increase in the balance of oxygenated arterial blood to deoxygenated
venous blood.
• Essentially, the change in tissue perfusion exceeds the additional metabolic demand, so
the concentration of deoxyhemoglobin within tissues decreases.
• This decrease has a direct effect on the signals used to produce magnetic resonance
images.
• While blood that contains oxyhemoglobin is not very different, in terms of its magnetic
susceptibility, from other tissues or water, deoxyhemoglobin is significantly
paramagnetic (like the agents used for MRI contrast materials, such as gadolinium),
and thus deoxygenated blood differs substantially in its magnetic properties from
surrounding tissues.
• When oxygen is not bound to hemoglobin, the difference between the magnetic field
applied by the MRI machine and that experienced close to a molecule of the blood
protein is much greater than when the oxygen is bound.
• The result of having lower levels of deoxyhemoglobin present in blood in a region of
brain tissue is therefore that the MRI signal from that region decays less rapidly and so
is stronger when it is recorded in a typical magnetic resonance image acquisition.
• Image intensities are inversely related to the relative mobility of water molecules in
tissue and the direction of the motion.
• Bright regions – decreased water diffusion
• Dark regions – increased water diffusion
Applications:
• Cancer
• Epilepsy
Patients with refractory temporal-lobe epilepsy exhibit increased diffusivity suggesting the
presence of structural disorganization
• Stroke
With relevant block diagram generalize the detection system used in MRI.
An RF receiver is used to process the signals from the receiver coils. Most modern MRI
systems have six or more receivers to process the signals from multiple coils. The signals range
from approximately 1 MHz to 300 MHz, with the frequency range highly dependent on
the applied static magnetic field strength. The bandwidth of the received signal is small, typically
less than 20 kHz, and dependent on the magnitude of the gradient field. The receiver is also a
detection system whose function is to detect the nuclear magnetization and generate an output
signal for processing by the computer. A block diagram of a typical receiver is shown in Fig.
22.22.
The receiver coil usually surrounds the sample and acts as an antenna to pick up the fluctuating
nuclear magnetization of the sample and converts it to a fluctuating output voltage V(t).
5. Nuclear Medicine
• Decay of the radioactive element produces γ-rays, which emanate in all directions.
• Attenuation of γ-rays in tissue occurs via exactly the same mechanisms as for X-rays,
namely coherent scattering, Compton scattering, and photoelectric interactions.
• In order to determine the position of the source of the γ-rays, a collimator is placed
between the patient and the detector so that only those components of radiation that
have a trajectory at an angle close to 90° to the detector plane are recorded.
• Rather than using film, as in planar X-ray imaging, to record the image, a scintillation
crystal is used to convert the energy of the γ-rays that pass through the collimator into
light. These light photons are in turn converted into an electrical signal by
photomultiplier tubes (PMTs).
• The image is formed by analyzing the spatial distribution and the magnitude of the
electrical signals from each PMT.
• Planar nuclear medicine images are characterized, in general, as having a poor SNR
and low spatial resolution (~5 mm), but extremely high sensitivity, being able to detect
very small amounts of radioactive material, and very high specificity because there is
no background radiation in the body.
• Three-dimensional nuclear medicine images can be produced using the principle of
tomography.
• A rotating gamma camera is used in a technique called single photon emission
computed tomography (SPECT).
• Alpha decay occurs when the nucleus ejects an alpha particle (helium nucleus).
• Beta decay occurs in two ways;
o (i) beta-minus decay, when the nucleus emits an electron and an
antineutrino in a process that changes a neutron to a proton.
o (ii) beta-plus decay, when the nucleus emits a positron and a neutrino
in a process that changes a proton to a neutron, this process is also
known as positron emission.
• Radioactive elements can decay via a number of mechanisms, of which the most
common and important in nuclear medicine are α-particle decay, β-particle
emission, γ-ray emission, and electron capture.
• The most useful radionuclides for diagnostic imaging are those that emit γ-rays
or X-rays because these forms of radiation can pass through tissue and reach a
detector situated outside the body. A useful parameter in quantifying the
attenuation of radiation as it travels through tissue is the half-value layer (HVL),
which corresponds to the thickness of tissue that absorbs one-half of the
radioactivity produced (see the short worked example after this list).
• An α-particle consists of a helium nucleus (two protons and two neutrons) with
a net positive charge. Typical particle energies are between 4 and 8 MeV. The
α-particle has a tissue HVL of only a few millimeters, and is therefore not used
for diagnostic imaging.
• The energy of the emitted γ-ray is 140 keV. Below an energy of 100 keV most
γ-rays are absorbed in the body via photoelectric interactions, in direct analogy
to X-ray attenuation in tissue, and so radionuclides used in nuclear medicine
should emit γ-rays with energies greater than this value. Above an energy of
200 keV, γ-rays penetrate the thin collimator septa used in gamma cameras to
reject unwanted, scattered γ-rays. Therefore, the ideal energy of a γ-ray for
imaging lies somewhere between 100 and 200 keV.
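A short worked example of the half-value layer idea introduced above, using the 4.6 cm HVL quoted in the next section for the 140-keV γ-rays of 99mTc and an assumed source depth:

```python
# Each half-value layer of tissue halves the number of gamma-rays reaching the detector.
hvl_cm = 4.6                     # HVL for 140 keV gamma-rays (quoted below for 99mTc)
source_depth_cm = 10.0           # assumed depth of the radiopharmaceutical in tissue

transmitted_fraction = 0.5 ** (source_depth_cm / hvl_cm)
print(f"fraction reaching the surface ~ {transmitted_fraction:.2f}")   # ~0.22
```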
• 99mTc, which can be produced from an on-site generator.
• The radionuclide 99mTc has a half-life of 6.02 h, is generated from a long-lived
parent, 99Mo, emits a monochromatic 140-keV γ-ray with very minor β-particle
emission, and has an HVL of 4.6 cm.
• In combination with the advantages afforded by on-site production, these
properties result in 99mTc being used in more than 90% of nuclear medicine
studies
• The on-site technetium generator consists of an alumina ceramic column with
radioactive 99Mo absorbed on its surface in the form of ammonium
molybdenate.
• The 99mTc eluted from the generator is in the form of sodium pertechnetate,
NaTcO4.
• If this compound is injected into the body, it concentrates in the thyroid,
salivary glands, and stomach, and can be used for scanning these organs.
• The majority of radiopharmaceuticals, however, are prepared by reducing the
pertechnetate to ionic technetium (Tc4+) and then complexing it with a
chemical ligand that binds to the metal ion.
• The properties of this ligand are chosen to have high selectivity for the organ
of interest with minimal distribution in other tissues.
• The ligand must bind the metal ion tightly so that the radiopharmaceutical
does not fragment in the body.
• General factors which affect the biodistribution of a particular agent include
the strength of the binding to blood proteins such as human serum albumin
(HSA), and the lipophilicity and ionization of the chemical ligand.
• The gamma camera, shown in Figure, is the instrumental basis for all nuclear
medicine imaging studies.
1 Collimators
• Many types of collimator are used in nuclear medicine, but the most common
geometry is a parallel-hole collimator, which is designed such that only γ -rays
traveling at angles close to 90° to the collimator surface are detected.
• The collimator thus reduces the contribution from γ -rays that have been Compton-
scattered in tissue; these contain no useful spatial information, and reduce the image
CNR.
• The collimator is usually constructed from thin strips of lead, through which
transmission of γ -rays is negligible. The normal pattern of the lead strips is a
hexagonally based "honeycomb" geometry
• The dimensions and the arrangement of the lead strips determine the contribution
made by the collimator to the overall spatial resolution of the gamma camera. In
Figure, if two point sources are placed a distance less than R apart, then they cannot
be resolved. The value of R is given approximately by R = d(L + z)/L,
where L is the length of the septa, d is the distance between septa, and z is the distance
between the γ-ray source and the front of the collimator. Therefore, the spatial
resolution can be improved by increasing the length of the septa in the collimator, or by
minimizing the value of z, that is, positioning the gamma camera as close to the patient
as possible (a worked example follows this list).
• There are a number of other types of collimators, shown in Figure 2.6, that can
be used to magnify, or alternatively reduce the size of, the image.
• A converging collimator, for example, can be used for imaging small organs
close to the surface of the body.
• An extreme form of the converging collimator is a "pinhole collimator," which
is used for imaging very small organs.
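A worked example of the resolution expression R ≈ d(L + z)/L, with assumed but realistic collimator dimensions; it shows why the gamma camera is positioned as close to the patient as possible.

```python
septa_length_L_mm = 25.0          # assumed length of the lead septa
hole_spacing_d_mm = 2.0           # assumed distance between septa

for z_mm in (0.0, 50.0, 100.0):   # source-to-collimator distance
    R_mm = hole_spacing_d_mm * (septa_length_L_mm + z_mm) / septa_length_L_mm
    print(f"z = {z_mm:5.1f} mm  ->  R ~ {R_mm:.1f} mm")
# Resolution degrades (R grows) as the source moves away from the collimator face.
```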
• When a γ-ray strikes the NaI(Tl) crystal, light is produced from a very small volume
determined by the range, typically 1 mm, of the photoelectrons or Compton-scattered
electrons.
• The thicker the crystal, the broader is the light spread function and the poorer is the
spatial resolution.
• For obtaining 99mTc nuclear medicine images, the optimal crystal thickness is
approximately 0.6 cm.
• However, this value is too small for detecting, with high sensitivity, the higher energy
γ -rays associated with radiopharmaceuticals containing gallium, iodine, and indium
and so a compromise crystal thickness of 1 cm is generally used in these cases
3 Photomultiplier Tubes
• The second step in forming the nuclear medicine image involves detection of the light
photons emitted by the crystal by hexagonal PMTs, which are closely coupled to the
scintillation crystal.
• This geometry gives efficient packing, and also has the property that the distance from
the center of one PMT to that of each neighboring PMT is the same: this property is
important for determination of the spatial location of the scintillation event using an
Anger position network
• Arrays of 61, 75, or 91 PMTs, each with a diameter of between 25 and 30 mm, are
typically used. The basic design of a PMT is shown in Figure
• Light photons pass through the transparent window of the PMT and strike the
photocathode, which is made of a bialkali material with a spectral sensitivity matched
to the light-emission characteristics of the scintillation crystal.
• Provided that the photon energy is greater than the photoelectric work function of the
photocathode, free electrons are generated in the photocathode via photoelectric
interactions.
• These electrons have energies between 0.1 and 1 eV.
• A bias voltage of between 300 and 5000 V applied between the first anode (also
called a dynode) and the photocathode attracts these electrons toward the anode.
• If the kinetic energy of this incident electron is above a certain value, typically
100-200 eV, when it strikes the anode a large number of electrons are emitted from the
anode for every incident electron: the result is effectively noise-free amplification.
• A series of 10 successive accelerating dynodes produces between 10⁵ and 10⁶
electrons for each photoelectron, creating an amplified current at the output of the
PMTs.
• This current then passes through a series of low-noise preamplifiers and is digitized
using an A/D converter.
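The overall multiplication can be estimated from the number of dynodes. In the sketch below, δ is the assumed number of secondary electrons released per incident electron at each dynode; values of roughly 3 to 4 reproduce the 10⁵-10⁶ electrons per photoelectron quoted above.

```python
NUMBER_OF_DYNODES = 10

for delta in (3.2, 4.0):                      # assumed secondary-emission factors
    overall_gain = delta ** NUMBER_OF_DYNODES
    print(f"delta = {delta:.1f}  ->  gain ~ {overall_gain:.2e} electrons per photoelectron")
```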
• In practice, rather than a single threshold value being used, a range of values
of the z-signal is accepted.
• The reason is that even monochromatic 140-keV γ-rays that do not undergo
significant scattering in the patient give a statistical distribution in the size of
the z-signal.
• The energy resolution of the system is defined as the full-width half-maximum
(FWHM) of the photopeak, shown in Figure, and typically is about 14 keV (or
10%) for most gamma cameras.
• The narrower the FWHM of the system, the better it is at discriminating
between unscattered and scattered γ-rays.
• The threshold level for accepting the "photopeak" is set to a slightly larger
value, typically 15%.
• For example, a 15% window around a 140-keV photopeak means that values
of 129.5-150.5 keV are accepted as corresponding to unscattered γ-rays.
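The window quoted above can be computed explicitly; it is centred on the photopeak energy and its full width is 15% of that energy.

```python
photopeak_kev = 140.0
window_fraction = 0.15

half_width = 0.5 * window_fraction * photopeak_kev
lower, upper = photopeak_kev - half_width, photopeak_kev + half_width
print(f"accepted z-signal range: {lower:.1f} - {upper:.1f} keV")   # 129.5 - 150.5 keV
```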
• SPECT scans use radioactive material called tracers. The tracers mix with your
blood and are taken up by the body tissue.
• SPECT radio-nuclides do not require an on-site cyclotron.
• However, the isotopes of Tc, Tl, In, and Xe are not normally found in the body.
For example, it is extremely difficult to label a biologically active pharmaceutical with
Tc-99m without altering its biochemical behaviour.
• Presently, SPECT has been used mainly in the detection of tumours and other
lesions, as well as in the evaluation of myocardial function using Tl-201. However,
certain pharmaceuticals have been labelled with iodine and technetium and provide
information on blood perfusion within the brain and the heart.
• Gamma-ray photons emitted from the internal distributed radiopharmaceutical
penetrate through the animal’s or patient’s body and are detected by a single or a set of
collimated radiation detectors.
• A special “gamma” camera picks up signals from the tracer as it moves around
the subject.
• The tracer’s signals are converted into images by a computer.
• Most of the detectors used in current SPECT systems are based on a single or
multiple NaI(Tl) scintillation detectors.
▪ SPECT can be performed using either multidetector or rotating gamma camera systems. In the
former, a large number of scintillation crystals and associated electronics are placed around the
patient.
▪ The primary advantage of the multidetector system is its high sensitivity, resulting in high
spatial resolution and rapid imaging.
▪ However, system complexity and associated cost have meant that these types of systems are
not widely used in the clinic.
▪ The latter approach, using a rotating gamma camera, is preferred for routine clinical imaging
because it also can be used for planar scintigraphy.
▪ Data are collected from multiple views obtained as the detector rotates about the patient's head.
▪ The simplest setup involves a single gamma camera which rotates in a plane around the patient,
collecting a series of signal projections, which, after correction for scatter and attenuation, can
be filtered and backprojected to form the image
▪ Because the array of PMTs is two-dimensional in nature, the data can be reconstructed as a
series of adjacent slices
▪ (Here, include the description of the gamma camera from the previous answer.)
▪ SPECT systems with multiple camera heads are also available.
▪ In a dual-head system, two 180° opposed camera heads are used, and acquisition time
is reduced by half with no loss in sensitivity.
▪ A triple-head SPECT system further improves sensitivity.
▪ Some suppliers also offer variable-angle dual-head systems for improved positioning
during cardiac, brain and whole-body imaging.
▪ Imaging times can be decreased by using another SPECT configuration—a ring of
detectors completely surrounding the patient.
▪ Although multiple camera heads reduce acquisition time, they do not significantly
shorten procedure/exam time because of factors such as patient preparation and data
processing. Several approaches are being investigated to improve SPECT sensitivity and
resolution. Novel acquisition geometries are being evaluated for both discrete detector and
camera-based SPECT systems (Fig 21.13).
The sensitivity of a SPECT system is mainly determined by the total area of the detector surface
that is viewing the organ of interest. Of course, there is tradeoff of sensitivity versus spatial
resolution. Kuhl (1976) recognized that the use of banks of discrete detectors [Fig. 21.13(a)]
could be used to improve SPECT performance. The system [Fig. 21.13(b)] developed by
Hirose et al. (1982) consists of a stationary ring of detectors. This system uses a unique fan-
beam collimator that rotates in front of the stationary detectors. Another approach using multi-
detector brain system [Fig. 21.13(c)] uses a set of 12 scintillation detectors coupled with a
complex scanning motion to produce tomographic images (Moore et al., 1984). An advantage
of discrete detector SPECT systems is that they typically have a high sensitivity for a single
slice of the source. However, a disadvantage has been that typically only one or at most a few
non-contiguous sections could be imaged at a time. In order to overcome this deficiency,
Rogers et al. (1984) described a ring system that is capable of imaging several contiguous slices
simultaneously.
• Because two "antiparallel" y-rays are produced and both must be detected, a PET
system consists of a complete ring of scintillation crystals surrounding the patient, as
shown in Figure 2.21.
• Because the two γ-rays are created simultaneously, both are detected within a certain
time window, the value of which is determined by the diameter of the detector ring and
the location of the radiopharmaceutical within the body.
• The location of the two crystals that actually detect the two antiparallel γ-rays defines
a line along which the annihilation must have occurred.
• This process of line definition is referred to as annihilation coincidence detection
(ACD) and forms the basis of signal localization in PET.
• This process should be contrasted with that in SPECT, which requires collimation of
single γ-rays.
• The difference in these localization methods is the major reason for the much higher
detection efficiency (typically 1000-fold) in PET than in SPECT.
• Image reconstruction in PET is via filtered backprojection
• Because the γ-ray energy of 511 keV in PET is much higher than the 140 keV of γ-rays
in conventional nuclear medicine, different materials such as bismuth germanate are
used for the scintillation crystals.
• The higher γ-ray energy means that less attenuation of the γ-rays occurs in tissue, a
second factor which results in the high sensitivity of PET.
• The spatial resolution in PET depends upon a number of factors including the number
and size of the individual crystal detectors: typical values of the overall system spatial
resolution are 3-5 mm.
• The major differences in PET instrumentation compared to that in SPECT are the
scintillation crystals needed to detect 511-keV γ-rays efficiently and the additional
circuitry needed for coincidence detection.
1. Scintillation Crystals
• Detection of the antiparallel γ-rays uses a large number of scintillation crystals, which
are usually formed from bismuth germanate (BGO).
• The crystals are placed in a circular arrangement surrounding the patient, and the
crystals are coupled to a smaller number of PMTs.
• Coupling each crystal to a single PMT would give the highest possible spatial
resolution, but would also increase the cost prohibitively.
• Typically, each "block" of scintillation crystals consists of an 8 × 8 array cut from a
single BGO crystal, with the cuts filled with light-reflecting material.
• The dimensions of each block are roughly 6.5 mm in width and height and 30 mm in
depth.
• Localization of the detected γ-ray to a particular crystal within the block is performed in the same way as in the Anger gamma camera, i.e., from the relative magnitudes of the signals in the PMTs coupled to that block.
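A minimal sketch (in Python, with illustrative values) of this Anger-type localization: the event position within a block is estimated as the signal-weighted centroid of the PMT positions. The PMT layout and signal amplitudes below are assumptions for illustration, not data from a real scanner.

# Anger-type centroid localization of a scintillation event within one block.
# Positions and signals are hypothetical, illustrative values.
pmt_positions = [(-1.0, -1.0), (1.0, -1.0), (-1.0, 1.0), (1.0, 1.0)]  # (x, y) in block units
signals = [0.15, 0.45, 0.10, 0.30]                                    # relative light seen by each PMT

total = sum(signals)
x = sum(s * p[0] for s, p in zip(signals, pmt_positions)) / total
y = sum(s * p[1] for s, p in zip(signals, pmt_positions)) / total
print(f"Estimated event position within the block: x = {x:+.2f}, y = {y:+.2f}")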
The ideal detector crystal would:
1. Have a high density, which results in a large effective cross-section for Compton
scattering and a correspondingly high γ-ray detection efficiency
2. Have a large effective atomic number, which also results in a high γ-ray detection
efficiency due to γ-ray absorption via photoelectric interactions
3. Have a short decay time to allow a short coincidence time to be used, with a reduction in
accidental coincidences and an increased SNR in the reconstructed PET image
4. Have a high light output (emission intensity) to allow more crystals to be coupled to a
single PMT, reducing the complexity and cost of the PET scanner
5. Have an emission wavelength near 400 nm; this wavelength represents the point of
maximum sensitivity for standard PMTs
6. Have an index of refraction near 1.5 to ensure efficient transmission of light between the
crystal and the PMT; optical transparency at the emission wavelength is also important
7. Be non-hygroscopic to simplify the design and construction of the many thousands of
crystals needed in the complete system
2. Annihilation Coincidence Detection Circuitry
• In Figure 2.23, with the first γ-ray having been detected by crystal 2, only those crystals
numbered between 7 and 13 can detect the second γ-ray from the annihilation.
• When the second γ-ray is detected and produces a voltage that is accepted by the
associated PHA, a second logic pulse is sent to the coincidence detector.
• The coincidence detector adds the two logic pulses together and passes the summed
signal through a separate PHA, which has a threshold set to a value just less than twice
the amplitude of each individual logic pulse.
• If the logic pulses overlap in time, then the system accepts the two γ-rays as having
originated from one annihilation and records a line integral between the two crystals.
• The PET system can be characterized by its "coincidence resolving time," which is
defined as twice the length of the logic pulse, and usually has a value between 12 and
20 ns.
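A minimal sketch of the coincidence logic described above, in Python: two single events are accepted as a coincidence only if their detection times differ by less than the coincidence resolving time. The event list, crystal numbers and the 16 ns resolving time are illustrative assumptions, not values from a real scanner.

# Minimal annihilation coincidence detection (ACD) sketch with assumed data.
RESOLVING_TIME_NS = 16.0   # assumed value within the 12-20 ns range quoted above

singles = [                # (detection time in ns, crystal index) - hypothetical events
    (100.0, 2), (107.0, 10),   # a true coincidence -> line between crystals 2 and 10
    (500.0, 5), (900.0, 40),   # unpaired singles -> rejected
]

coincidences = []
for i in range(len(singles)):
    for j in range(i + 1, len(singles)):
        t1, c1 = singles[i]
        t2, c2 = singles[j]
        if abs(t1 - t2) <= RESOLVING_TIME_NS:
            coincidences.append((c1, c2))   # record a line integral between the two crystals

print(coincidences)   # -> [(2, 10)]

A shorter resolving time reduces accidental coincidences (unrelated singles that happen to fall within the window), which is why a fast crystal decay time is listed among the ideal detector properties.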
3. Image Reconstruction
• Basic image reconstruction in PET is essentially identical to that in SPECT, with
both iterative algorithms and those based on filtered backprojection being used to
form the image from individual line projections.
• However, prior to reconstruction, the data must be corrected for attenuation effects
and, more importantly, for accidental and multiple coincidences.
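The reconstruction step itself can be summarised in a short sketch. The following is a minimal, illustrative ramp-filtered backprojection in Python (assuming NumPy); it deliberately omits the attenuation and accidental/multiple-coincidence corrections mentioned above and uses crude nearest-neighbour interpolation, so it is a teaching sketch rather than a scanner algorithm.

import numpy as np

def filtered_backprojection(sinogram, angles_deg):
    """Minimal ramp-filtered backprojection.
    sinogram: 2D array of shape (num_angles, num_detector_bins)."""
    n_ang, n_det = sinogram.shape

    # Ramp filter applied along the detector axis in the frequency domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Backproject each filtered projection over the image grid
    image = np.zeros((n_det, n_det))
    centre = (n_det - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(n_det) - centre, np.arange(n_det) - centre)
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = xs * np.cos(theta) + ys * np.sin(theta) + centre   # detector coordinate per pixel
        t_idx = np.clip(np.round(t).astype(int), 0, n_det - 1)  # nearest detector bin
        image += proj[t_idx]
    return image * np.pi / (2 * n_ang)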
Clinical Applications of PET
1. Cancer diagnosis
18F-Fluorodeoxyglucose (FDG), an analog of glucose that allows assessment of glucose
metabolism in body tissues, is the most commonly used PET tracer. FDG enters cells by the
same transport mechanism as glucose and is intracellularly phosphorylated by hexokinase to
FDG-6-phosphate (FDG-6-P). Intracellularly FDG-6-P does not get metabolized further and
accumulates in proportion to the glycolytic rate of the cells. Most malignant cells have higher
rate of glycolysis and higher levels of glucose transporter proteins (GLUT) than do normal
cells and therefore accumulate FDG-6-P to higher levels than do the normal tissues.
2. Study of cellular proliferation
Carbon-11 thymidine and F-18 fluorothymidine (FLT), an analog of thymidine, are markers of
cellular proliferation. Uptake of FLT has considerable analogy with FDG uptake in that FLT
is taken up by actively proliferating cells but is not incorporated further into DNA synthesis
and hence accumulates intracellularly in tumor cells. It has shown promising ability to predict
tumor grade in lung cancers, evaluate brain tumors and may be a good predictor of tumor
response. 11C-methionine, an amino acid, has shown great promise in evaluating brain tumors
and other cancers. 11C-choline and 11C-acetate have been used in prostate cancer to evaluate
primary and metastatic disease.
6. Thermal Imaging
• Infrared radiation is electromagnetic radiation with frequencies higher than radio
frequencies and lower than those of visible light.
• The infrared region of the electromagnetic spectrum is usually taken as 0.77 to 100
μm.
• For convenience, it is often split into near infrared (0.77 to 1.5 μm), middle infrared
(1.5 to 6 μm), far infrared (6 to 40 μm) and far-far infrared (40 to 100 μm).
• Infrared rays are radiated spontaneously by all objects at a temperature above
absolute zero. The total energy W emitted by an object is related to its absolute
temperature T by the Stefan-Boltzmann law, W = εσT⁴, where σ is the Stefan-Boltzmann
constant and ε is the emissivity of the surface.
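A minimal worked sketch of the Stefan-Boltzmann relation for skin; the emissivity and the temperatures used below are typical assumed values, not figures taken from the text.

# Stefan-Boltzmann sketch: power radiated per unit area of skin (assumed values).
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
emissivity = 0.98     # assumed emissivity of skin in the infrared
T_skin = 305.0        # assumed skin temperature, K (about 32 deg C)
T_room = 293.0        # assumed surroundings temperature, K (about 20 deg C)

emitted = emissivity * SIGMA * T_skin**4                  # total emitted power per unit area
net = emissivity * SIGMA * (T_skin**4 - T_room**4)        # net exchange with the surroundings
print(f"Emitted: {emitted:.0f} W/m^2, net loss to surroundings: {net:.0f} W/m^2")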
• The medical thermograph is a sensitive infrared camera which presents a video image
of the temperature distribution over the surface of the skin.
• This image enables temperature differences to be seen instantaneously, providing fairly
good evidence of any abnormality.
• However, thermography still cannot be considered as a diagnostic technique
comparable to radiography.
• Radiography provides essential information on anatomical structures and abnormalities
while thermography indicates metabolic process and circulation changes, so the two
techniques are complementary.
• The human body absorbs infrared radiation almost without reflection, and at the same
time, emits part of its own thermal energy in the form of infrared radiation.
• The intensity of this radiant energy corresponds to the temperature of the radiant
surface. It is, therefore, possible to measure the varying intensity of radiation at a certain
distance from the body and thus determine the surface temperature.
• Fig. 24.1 shows spectral distribution of Infrared emission from human skin.
• In a normal healthy subject, the body temperature may vary considerably from time to
time, but the skin temperature pattern generally demonstrates characteristic features,
and a remarkably consistent bilateral symmetry.
• Thermography is the science of visualizing these patterns and determining any
deviations from the normal brought about by pathological changes.
• Thermography often facilitates detection of pathological changes before any other
method of investigation, and in some circumstances, is the only diagnostic aid available.
• Thermography has a number of distinct advantages over other imaging systems. It is
completely non-invasive, there is no contact between the patient and system as with
ultrasonography, and there is no radiation hazard as with X-rays. In addition,
thermography is a real-time system.
• The examination of the female breast as a reliable aid for diagnosing breast cancer is
probably the best known application of thermography.
• It is assumed that since cancer tissue metabolizes more actively than other tissues and
thus has a higher temperature, the heat produced is conveyed to the skin surface
resulting in a higher temperature in the skin directly over the malignancy than in other
regions
2. Discuss the physical factors which affect the amount of infrared radiation
from the human body?
• Infrared detectors are used to convert infrared energy into electrical signals. Basically,
there are two types of detectors: thermal detectors and photo-detectors.
• Thermal detectors include thermocouples and thermistor bolometers. They feature
constant sensitivity over a long wavelength region. However, they have a long time
constant and thus show a slow response. The wavelength at which emission from the
human body is maximum is 9–10 μm; therefore, the detector should ideally have a
constant spectral sensitivity over the 3–20 μm infrared range.
• However, the spectral response of the photodetectors is highly limited. Most of the
infrared cameras use InSb (indium antimonide) detector which detects infrared rays in
the range 2–6 μm. Only 2.4% of the energy emitted by the human body falls within the
region detected by InSb detectors. But they are highly sensitive and are capable of
detecting small temperature variations as compared to a thermistor. Another detector,
made from an alloy of cadmium, mercury and tellurium (cadmium mercury telluride, CMT)
and cooled with liquid nitrogen, has a peak response at 10–12 μm.
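The figures quoted above can be checked with a short sketch, assuming a skin temperature of about 305 K: Wien's displacement law gives the wavelength of peak emission, and a crude numerical integration of Planck's law estimates the fraction of emitted power in an assumed 2-6 μm InSb band. The result is of the same order as the 2.4% quoted above; the exact value depends on the assumed temperature and band edges.

import numpy as np

# Peak emission wavelength and in-band fraction for skin at an assumed 305 K.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
T = 305.0                                   # assumed skin temperature, K

lambda_peak = 2.898e-3 / T                  # Wien's displacement law, metres
print(f"Peak emission near {lambda_peak*1e6:.1f} um")   # about 9.5 um

def planck(lam, T):
    """Blackbody spectral radiance per unit wavelength."""
    return (2*h*c**2 / lam**5) / (np.exp(h*c/(lam*kB*T)) - 1.0)

lam = np.linspace(0.5e-6, 100e-6, 200000)   # uniform wavelength grid
band = (lam >= 2e-6) & (lam <= 6e-6)        # assumed InSb detection band
fraction = planck(lam[band], T).sum() / planck(lam, T).sum()
print(f"Fraction of emitted power in 2-6 um: {fraction*100:.1f} %")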
• In photon detectors, incident photons interact with the electrons in the material and
change the electronic charge distribution.
• This perturbation of the charge distribution generates a current or a voltage that can be
measured by an electrical circuit.
• Because the photon-electron interaction is "instantaneous", the response speed of
photon detectors is much higher than that of thermal detectors.
• Indeed, by contrast to thermal detectors, quantum or photon detectors respond to
incident radiation through the excitation of electrons into a non-equilibrium state.
Photoconductive Detectors
• Here, the absorption of incident light creates non-equilibrium electrical carriers, and
that reduces the electrical resistance across two electrodes.
• There are also some exotic cases with negative photoconductivity, i.e., with an
increase of resistance caused by illumination.
• While for the detection of visible light one would usually prefer photodiodes due to
their clearly superior performance (except sometimes in cost-critical applications),
photoconductive detectors are often used as infrared detectors.
Photovoltaic detectors
• These sensors are called photovoltaic cells because of their voltage-generating capacity,
but the cells actually convert EM energy into electrical energy.
1. Pyroelectric Detectors
• Pyroelectric detectors are sensors for light which are based on the pyroelectric effect.
• They are widely used for detecting laser pulses often in the infrared spectral region,
and with the potential for a very broad spectral response.
• Pyroelectric detectors are used as the central parts of many optical energy meters, and
are typically operated at room temperature (i.e., not cooled).
• Compared with energy meters based on photodiodes, they can have a much broader
spectral response.
• There are various other applications of pyroelectric sensors, for example fire detection,
satellite-based infrared detection, and the detection of persons via their infrared
emission (motion detectors).
Pyroelectricity
• Pyroelectricity is a property of certain crystals which are naturally electrically
polarized and as a result contain large electric fields.
• Pyroelectricity can be described as the ability of certain materials to generate a
temporary voltage when they are heated or cooled.
• The change in temperature modifies the positions of the atoms slightly within
the crystal structure, such that the polarization of the material changes.
• This polarization change gives rise to a voltage across the crystal.
• If the temperature stays constant at its new value, the pyroelectric voltage gradually
disappears due to leakage current.
• The pyroelectric coefficient may be described as the change in the spontaneous
polarization vector with temperature
Operation Principle
• We first consider the basic operation principle. A pyroelectric detector contains a piece
of ferroelectric crystal material with electrodes on two sides – essentially a capacitor.
• One of those electrodes has a black coating, which is exposed to the incident
radiation.
MIT-MODULE 6-SCET
• The incident light is absorbed on the coating and thus also causes some heating of the
crystal, because the heat is conducted through the electrode into the crystal.
• As a result, the crystal produces some pyroelectric voltage; one can electronically detect
that voltage or alternatively the current when the voltage is held constant.
• For a constant optical power, that pyroelectric signal would eventually fade away; the
device would therefore not be suitable for measuring the intensity of continuous-wave
radiation.
• Instead, such a detector is usually used with light pulses; in that case, one obtains a
bipolar pulse structure, where one initially obtains a voltage in one direction and after
the pulse a voltage in the opposite direction.
• Due to that operation principle, pyroelectric detectors belong to the thermal detectors:
they do not directly respond to radiation, but only to the generated heat.
• In the simple form explained above, the detector would be relatively sensitive to
fluctuations of the ambient temperature.
• Therefore, one often uses an additional compensating crystal, which is exposed to
essentially the same temperature fluctuations but not to the incoming light.
• By taking the difference of signals from both crystals, one can effectively reduce the
sensitivity to external temperature changes.
• The pyroelectric charges are typically detected with an operational amplifier (OpAmp)
based on field-effect transistors (JFETs) with very low leakage current.
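A minimal simulation sketch of the bipolar pulse described above, assuming a simple first-order thermal model and illustrative component values; the pyroelectric coefficient, electrode area, heating rate and time constant below are all assumptions, not data from the text.

import numpy as np

# Bipolar response of a pyroelectric element to a rectangular light pulse:
# heating during the pulse gives one current polarity, cooling afterwards the other,
# since i = lambda_p * A * dT/dt.
lambda_p = 3e-4      # assumed pyroelectric coefficient, C m^-2 K^-1
area = 1e-6          # assumed electrode area, m^2
tau_th = 10e-3       # assumed thermal time constant, s
k_heat = 50.0        # assumed heating rate while illuminated, K/s

dt = 1e-5
t = np.arange(0.0, 0.05, dt)
pulse_on = (t >= 0.005) & (t < 0.020)       # light pulse from 5 ms to 20 ms

T = np.zeros_like(t)                        # temperature rise above ambient
for n in range(1, len(t)):
    dT = (-T[n-1] / tau_th + (k_heat if pulse_on[n-1] else 0.0)) * dt
    T[n] = T[n-1] + dT

current = lambda_p * area * np.gradient(T, dt)   # pyroelectric current, A
print(f"Peak +ve current {current.max()*1e9:.1f} nA, peak -ve current {current.min()*1e9:.1f} nA")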
Pyroelectric detector element
2. Thermoelectric detectors
Thermoelectricity
A thermoelectric device creates a voltage when its two sides are at different temperatures.
Conversely, when a voltage is applied to it, heat is transferred from one side to the other,
creating a temperature difference.
This effect can be used to generate electricity, measure temperature or change the temperature
of objects.
Because the direction of heating and cooling is determined by the polarity of the applied
voltage, thermoelectric devices can be used as temperature controllers.
The term "thermoelectric effect" encompasses three separately identified effects: the Seebeck
effect, Peltier effect, and Thomson effect.
1) Seebeck effect: The Seebeck effect is the conversion of heat directly into electricity at the
junction of different types of wire.
2) Peltier effect: When an electric current is passed through a circuit of a thermocouple, heat
is evolved at one junction and absorbed at the other junction. This is known as Peltier Effect.
The Peltier effect is the presence of heating or cooling at an electrified junction of two different
conductors
3) Thomson effect: According to the Thomson effect, when two unlike metals are joined together
forming two junctions, a potential exists within the circuit due to the temperature gradient along
the entire length of the conductors in the circuit.
In most cases the emf due to the Thomson effect is very small and can be neglected by proper
selection of the metals. The Peltier effect plays a prominent role in the working principle of the
thermocouple.
i) Thermocouple
The general circuit for the working of a thermocouple is shown in figure 1 above. It comprises
two dissimilar metals, A and B, joined together to form two junctions, p and q, which are
maintained at temperatures T1 and T2 respectively. Remember that a thermocouple cannot be
formed unless there are two junctions. Since the two junctions are maintained at different
temperatures, the Peltier emf is generated within the circuit and it is a function of the
temperatures of the two junctions.
If the temperature of both junctions is the same, equal and opposite emfs are generated at the
two junctions and the net current flowing through the circuit is zero. If the junctions are
maintained at different temperatures, the emfs do not cancel and there is a net current flowing
through the circuit. The total emf in the circuit depends on the metals used as well as on the
temperatures of the two junctions. The total emf, or the current flowing through the circuit, can
be measured easily by a suitable device.
The device for measuring the current or emf is connected within the thermocouple circuit. It
measures the emf developed in the circuit due to the two junctions of the two dissimilar metals
maintained at different temperatures. In figure 2 the two junctions of the thermocouple and the
device used for measurement of the emf (a potentiometer) are shown.
Now, the temperature of the reference junction is already known, while the temperature of the
measuring junction is unknown. The output obtained from the thermocouple circuit is calibrated
directly against the unknown temperature; thus the voltage or current output of the circuit gives
the value of the unknown temperature directly.
The emf developed in the thermocouple circuit is very small, usually in millivolts, so highly
sensitive instruments must be used to measure it. Two devices commonly used are the ordinary
galvanometer and the voltage-balancing potentiometer. Of the two, a manually or automatically
balancing potentiometer is used most often.
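A minimal sketch of how a potentiometer reading is converted to the unknown junction temperature, assuming a constant (linearised) Seebeck sensitivity; the coefficient and the measured emf below are illustrative values only, not taken from the text.

# Thermocouple readout sketch with assumed, illustrative values.
SEEBECK_UV_PER_C = 41.0      # assumed linearised sensitivity, uV per deg C
T_reference_C = 25.0         # known temperature of the reference junction
emf_measured_uV = 615.0      # hypothetical potentiometer reading, microvolts

delta_T = emf_measured_uV / SEEBECK_UV_PER_C
T_measuring_C = T_reference_C + delta_T
print(f"Measuring-junction temperature ~= {T_measuring_C:.1f} deg C")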
ii) Thermopile
A thermopile is an electronic device that converts thermal energy into electrical energy. It is
composed of several thermocouples connected usually in series or, less commonly, in parallel.
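A minimal sketch of the series connection: for N thermocouple pairs the output emf is simply N times the single-pair emf, V = N·S·ΔT (all values below are assumed for illustration).

# Thermopile output sketch with assumed values.
N = 50                      # assumed number of thermocouple pairs in series
SEEBECK_UV_PER_C = 41.0     # assumed sensitivity per pair, uV per deg C
delta_T = 0.2               # assumed hot/cold junction temperature difference, deg C

V_out_uV = N * SEEBECK_UV_PER_C * delta_T
print(f"Thermopile output ~= {V_out_uV:.0f} uV")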
3. Ferroelectric detectors
Ferroelectric materials possess a spontaneous electric polarization. By inverting the direction of
the applied electric field, the direction of polarization of these materials can be inverted or
changed (figure 1).
This is called switching. The material also maintains its polarisation even after the field is
removed. These materials have some similarities to ferromagnetic materials, which exhibit a
permanent magnetic moment; because of this similarity, the two classes of material share the
same prefix. However, a ferroelectric material need not contain iron (ferrum). All ferroelectric
materials exhibit the piezoelectric effect. The opposite behaviour is seen in antiferroelectric
materials (the electrical analogue of antiferromagnetic materials).
Curie Temperature
The properties of these materials exist only below a definite phase-transition temperature.
Above this temperature the material becomes paraelectric, i.e., it loses its spontaneous
polarization. This definite temperature is called the Curie temperature (TC). Above TC most of
these materials also lose the piezoelectric property. The variation of the dielectric constant with
temperature in the non-polar, paraelectric state is given by the Curie-Weiss law:
ε = ε∞ + A / (T − TC), or equivalently, for the susceptibility, χ = C / (T − TC)
where
ε → dielectric constant
ε∞ → value of ε at temperature T >> TC
A → constant
TC → Curie point
T → temperature
χ → susceptibility
C → Curie constant of the material
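A minimal numerical sketch of the Curie-Weiss expression above; the values of ε∞, A and TC used here are assumed purely for illustration and are not material data from the text.

# Dielectric constant in the paraelectric phase from the Curie-Weiss law.
EPS_INF = 10.0     # assumed high-temperature dielectric constant
A = 1.5e5          # assumed Curie constant, K
T_C = 400.0        # assumed Curie temperature, K

for T in (420.0, 500.0, 700.0):        # temperatures above T_C
    eps = EPS_INF + A / (T - T_C)
    print(f"T = {T:.0f} K -> epsilon ~= {eps:.0f}")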
• PbTiO3 (lead titanate) is an example of such a ferroelectric material.
Ferroelectric detectors
• The value of the spontaneous polarization depends on the temperature, i.e., a change
in the temperature of the crystal produces a change in its polarization, which can be
detected.
• This is called the pyroelectric effect .
• The pyroelectric effect can be described in terms of the pyroelectric coefficient λ.
• A small, gradual change in the temperature ΔT of the crystal leads to a change in the
spontaneous polarization vector ΔPs given by
ΔPs = λΔT
• It should be noted that both piezoelectricity and pyroelectricity are inherent properties
of a crystal due, entirely, to its atomic arrangement or crystal structure.
• Ferroelectricity, on the other hand, is an effect produced in a pyroelectric crystal by
the application of an external electric field.
• It has been observed that all ferroelectrics are pyroelectric and piezoelectric.
• All pyroelectrics are piezoelectrics, but the converse is not true.
4. Resistive microbolometers
• A microbolometer pixel contains an infrared-absorbing element whose electrical
resistance changes as incident radiation heats it. This resistance change is measured
and processed into temperatures which can be used to create an image (see the sketch
at the end of this section).
• Unlike other types of infrared detecting equipment, microbolometers do not require
cooling.
• A microbolometer is an uncooled thermal sensor.
• Previous high-resolution thermal sensors required exotic and expensive cooling
methods, including Stirling cycle coolers and liquid nitrogen coolers.
• These methods of cooling made early thermal imagers expensive to operate and
unwieldy to move.
• Also, older thermal imagers required a cool down time in excess of 10 minutes before
being usable.
• A microbolometer consists of an array of pixels, each pixel being made up of several
layers.
• The cross-sectional diagram shown in Figure 1 provides a generalized view of the
pixel. Each company that manufactures microbolometers has its own unique
procedure for producing them, and they even use a variety of different absorbing
materials.
• In this example the bottom layer consists of a silicon substrate and a readout integrated
circuit (ROIC).
• Electrical contacts are deposited and then selectively etched away. A reflector, for
example, a titanium mirror, is created beneath the IR absorbing material.
• Since some light is able to pass through the absorbing layer, the reflector redirects this
light back up to ensure the greatest possible absorption, hence allowing a stronger signal
to be produced.
• Next, a sacrificial layer is deposited so that later in the process a gap can be created to
thermally isolate the IR absorbing material from the ROIC.
• A layer of absorbing material is then deposited and selectively etched so that the final
contacts can be created.
• To create the final bridge-like structure shown in Figure 1, the sacrificial layer is
removed so that the absorbing material is suspended approximately 2 μm above the
readout circuit. Because microbolometers do not undergo any cooling, the absorbing
material must be thermally isolated from the bottom ROIC, and the bridge-like
structure allows for this.
• After the array of pixels is created the microbolometer is encapsulated under a vacuum
to increase the longevity of the device. In some cases the entire fabrication process is
done without breaking vacuum.
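As noted at the start of this section, the readout converts each pixel's resistance change into a temperature. A minimal sketch, assuming a linear temperature coefficient of resistance (TCR); the TCR, nominal resistance and readout value below are illustrative assumptions, not data from the text.

# Converting a pixel's resistance change into a temperature change: dR/R = alpha * dT.
alpha = -0.02        # assumed TCR of the absorbing film, per deg C (VOx-like)
R_nominal = 100e3    # assumed nominal pixel resistance, ohms
R_measured = 99.8e3  # hypothetical readout value, ohms

delta_T = (R_measured - R_nominal) / (R_nominal * alpha)
print(f"Estimated pixel temperature rise ~= {delta_T:.2f} deg C")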
Advantages
• They are small and lightweight. For applications requiring relatively short ranges, the
physical dimensions of the camera are even smaller. This property enables, for example,
the mounting of uncooled microbolometer thermal imagers on helmets.
• Provide real video output immediately after power on.
• Low power consumption relative to cooled detector thermal imagers.
• Very long MTBF.
• Less expensive compared to cameras based on cooled detectors.
Disadvantages
• Less sensitive (due to higher noise) than cooled thermal and photon detector imagers, and
as a result have not been able to match the resolution of cooled semiconductor based
approaches.
Medical thermography has a large bibliography and the technique has been used to
investigate a wide variety of clinical conditions.
Most important among these are the following, and we now consider briefly each of these in
turn:
(i) the assessment of inflammatory conditions such as rheumatoid arthritis;
(ii) vascular-disorder studies, including: (a) the assessment of deep vein thrombosis
(DVT), (b) the localisation of varicosities, (c) the investigation of vascular
disturbance syndromes, and (d) the assessment of arterial disease;
(iii) metabolic studies;
(iv) the assessment of pain and trauma;
(v) oncological investigations; and
(vi) physiological studies
10. How is thermal imaging used for the investigation of vascular disorders?
• Venography and ultrasound are the investigations most commonly used for DVT.
Venography is invasive, time consuming and labour intensive.
• It also exposes the patient to the risks of contrast allergy. Doppler ultrasound avoids the
risks of venography but is operator dependent and can be time consuming.
• An increase in limb temperature is one of the clinical signs of DVT, and thermography
can be used to image the suspected limb.
• The thermal activity is thought to be caused by the local release of vasoactive chemicals
associated with the formation of the venous thrombosis, which cause an increase in the
resting blood flow
• The thermal test is based on the observation of delayed cooling of the affected limb.
• The patient is examined in a resting position and the limbs cooled, usually with a fan.
Recent calf-vein thrombosis produces a diffuse increase in temperature of about 2°C.
Localisation of Varicosities
• The localisation of incompetent perforating veins in the leg prior to surgery may be
found thermographically.
• The limb is first cooled with a wet towel and fan and the veins are drained by raising
the leg, to an angle of 30°–40° with the patient lying supine.
• A tourniquet is applied around the upper third of the thigh to occlude superficial veins
and the patient then stands and exercises the leg.
• The sites of incompetent perforating veins are identified by areas of ‘rapid rewarming’
below the level of the tourniquet.
• Varicosities occurring in the scrotum can alter testicular temperatures.
• It is known that the testes are normally kept at about 5°C lower than core-body
temperature: if this normal temperature differential is abolished, spermatogenesis is
depressed.
• The incidence of subfertility in man is high and the cause is often not known, but the
ligation of varicocele in subfertile men has been shown to result in improvements in
fertility, with pregnancies in up to 55% of partners.
• Thermography has been used extensively to determine the thermal effect and extent of
clinical varicocele, to investigate infertile men or subfertile men who might have
unsuspected varicocele, to examine patients who have had corrective surgery and to
determine whether any residual veins are of significance after ligation of the varicocele.
• In a cool environment of 20°C, the surface temperature of the normal scrotum is 32°C,
whereas a varicocele can increase this to 34°C–35°C.
Vascular Disturbance Syndromes
• Patients with Raynaud’s disease and associated disorders have cold and poorly perfused
extremities.
• The viability of therapeutic intervention will depend upon whether the vasculature has
the potential for increased blood flow.
• Howell et al. (1997) have measured thermographically the temperature of toes in
Raynaud’s phenomenon.
• These authors concluded that the baseline mean toe temperature and medial lateral toe
temperature difference are good diagnostic indicators of this condition in the feet.
• Cold challenge can enhance temperature differentials since these recover markedly
better over a 10min period in healthy individuals than in Raynaud’s patients.
• Temperature changes also occur in Paget’s disease.
• This is a disease of bone that results in increased blood flow through bone tissue which,
in turn, can increase the bone temperature and overlying skin.
Assessment of Arterial Disease
11. How is thermal imaging used for metabolic studies?
• Skin temperature is influenced by the proximity of the skin and superficial tissues to
the body core and the effects of subcutaneous heat production, blood perfusion and the
thermal properties of the tissues themselves.
• Subcutaneous fat modifies surface temperature patterns, as does muscular exercise.
• The complex interplay of these factors limits the role of IR imaging in metabolic
investigations to the study of the most superficial parts of the body surface.
• For example, in the case of newborn infants, it has been postulated that the tissue over
the nape of the neck and interscapular region consists of brown adipose tissue, which
plays an important role in heat production.
• Thermal imaging has been used to study this heat distribution directly after birth
(Rylander 1972).
• The presence of brown adipose tissue in man has also been investigated by this means:
it has been observed that metabolic stimulation by adrenaline (ephedrine) produces an
increase in skin temperature in the neck and upper back.
• Thermal imaging has also been used in the study of metabolic parameters in diabetes
mellitus
12. How is thermal imaging used for the assessment of pain and trauma?
• The assessment of pain and trauma depends largely upon the finding that in healthy
subjects thermal patterns are symmetrical.
• Asymmetrical heat production at dermatomes and myotomes can be identified
thermographically
• Temperature changes are probably related to reflex sympathetic vasoconstriction within
affected extremity dermatomes and to metabolic changes or muscular spasm in
corresponding paraspinal myotomes.
• Thermography has also found a place in physical medicine in the assessment of such
conditions as ‘frozen shoulder’
• Frozen shoulder is usually characterised by a lowering of skin temperature, which is
probably due to decreased muscular activity resulting from a decreased range of motion.
• Thermography can be used to assess tissue damage caused by a burn or frostbite.
• The treatment of a burn depends upon the depth of injury and the surface area affected.
• Whereas a first-degree burn shows a skin erythema, a third-degree burn is deeper and
shows a complete absence of circulation.
• Identification of a second-degree burn is sometimes difficult, and temperature
measurements are used to assist with this assessment.
• Third-degree burns have been found to be on average 3°C colder than surrounding
normal skin. In conditions of chronic stress (such as bed sores, poorly fitting prosthetic
devices), thermal imaging can be used to assess irritated tissue prior to frank
breakdown.
Oncological Investigations
• Thermal imaging has been used as an adjunct in the diagnosis of malignant disease, to
assess tumour prognosis and to monitor the efficacy of therapy.
• Malignant tumours tend to be warmer than benign tumours due to increased metabolism
and, more importantly, due to vascular changes surrounding the tumour.
• It has been observed that surface temperature patterns are accentuated and temperature
differences increased by cooling the skin surface in an ambient 20°C.
• This procedure reduces blood flow in the skin and subcutaneous tissues and, since blood
flow through tumour vasculature is less well controlled than through normal
vasculature, the effects of cooling the skin surface are less effective over the tumour
than over normal tissue.
• Imaging has been used to determine the extent of skin lesions and to differentiate
between benign and malignant pigmented lesions.
• The method has been used by many investigators as an aid in the diagnosis of malignant
breast disease, but it lacks sensitivity and specificity.
• It has been advocated as a means of identifying and screening ‘high-risk groups’ and
for selecting patients for further investigation by mammography.
• Breast tumours that cause large temperature changes (more than 2.5°C) tend to have a
poor prognosis
• The treatment of malignant disease by radiotherapy, chemotherapy or hormone therapy
can be monitored thermographically.
• To study the physiology of cutaneous perfusion, Anbar et al. (1997) used dynamic area
telethermometry.
• In this technique, 256 IR images were captured at 66 images per second with a 256 ×
256 FPA gallium-arsenide quantum-well photon-detector camera.
• The authors studied different areas of skin on the human forearm and observed a
temperature modulation at about 1 Hz with an amplitude of about 0.03 K due to the
cardiac cycle.
• Their findings demonstrated the potential use of high-speed imaging in peripheral
haemodynamic studies.
• Other workers have shown that imaging is useful in experimental respiratory function
studies (Perks et al. 1983), and for studying hyperthermia (Jones and Carnochan 1986).
• Improvements in examination techniques and the advantages of digital imaging have
improved the reliability of thermal imaging significantly in the study of diseases such
as diabetes mellitus and coronary heart disease (Marcinkowska-Gapinska and Kowal
2006, Ring 2010).
• The development of FPA systems allowed thermography to be used for public
screening such as for SARS and pandemic influenza.
• Outgoing or incoming travellers have been screened at airports to identify individual
travellers with a raised body temperature caused by SARS or Avian influenza.
• In practice, real-time screening of facial temperature patterns is used to identify
individuals for further investigation. Adequate digital data can be captured within 2 s.
• The positive threshold temperature is usually taken to be 38°C.
• The control of infectious diseases is an international problem and various international
organisations are involved in establishing international standards for temperature
screening of the public (Ring 2006).
• Cooled FPA detector systems give good thermal and spatial resolution but suffer from
the limited length of time the cooling system can operate.
• Uncooled camera systems are more suited for continuous use over long time periods
but are best used in conjunction with an external reference source.
• A further problem is the lack of published data relating facial temperature patterns and
deep body temperatures. This problem is accentuated by possible ethnic differences
caused by facial topography.