MIT Notes S7

BM409 - MEDICAL IMAGING TECHNIQUES

Dr. Jeethu Raveendran


Assistant Professor,
Department of Biomedical Engineering,
Sahrdaya College of Engineering & Technology,
Kodakara, P.B.No.17, Thrissur (Dt), Pin 680684, Kerala, India.
Course code: BM409
Course Name: MEDICAL IMAGING TECHNIQUES
L-T-P-Credits: 3-0-0-3
Year of Introduction: 2016
Prerequisite : Nil
Course Objectives
 To expose students to imaging methods used in medicine and biology.
Syllabus
Ultrasonic imaging, X-Ray computed tomography, Magnetic Resonance Imaging - Radio
isotope imaging - SPECT & PET, Infrared Imaging.
Expected Outcome
At the end of the course the student will be able to
i. Learn the different methods and modalities used for medical imaging.
ii. Learn the preferred medical imaging methods for routine clinical applications
iii. Understand the engineering models used to describe and analyze medical images
iv. Apply these tools to different problems in medical imaging.
v. Implement methods to analyze medical images.
Text Books:
1. Webb, The Physics of Medical Imaging, IOP Publishing Ltd., 1988.
2. Peter Fish, The Physics of Diagnostic Ultrasound, John Wiley & Sons, England, 1990.
3. A. C. Kak, Principles of Computed Tomography, IEEE Press, New York.
4. H. H. Schild, MRI Made Easy, Schering AG, 2003.
Reference Books:
1. Douglas A. Christensen, Ultrasonic Bioinstrumentation, John Wiley, New York, 1988.
2. M. N. Rehani, Physics of Medical Imaging, Macmillan India Ltd., 1991.
3. D. L. Hykes, W. R. Hedrick & D. E. Starchman, Ultrasound Physics & Instrumentation, Churchill Livingstone, Melbourne, 1985.
4. Atam Dhawan, Medical Image Analysis, Wiley-IEEE Press, 2003.
Course Plan

Module I (15% of semester exam marks)
- Basic physics of ultrasound – characteristic impedance, wavelength, frequency and velocity of propagation, absorption, beam width, resolution. Generation and detection. (2 hours)
- Ultrasound in medicine – transducers – types, block diagram of an ultrasound machine. Principles of image formation, capture and display – A-mode, B-mode and M-mode displays – applications. (3 hours)
- Doppler ultrasound and colour flow mapping. (1 hour)
- Introduction to 3D and 4D ultrasound and its applications. (1 hour)

Module II (15%)
- X-ray computed tomography – principles of sectional imaging – generations – data acquisition system – components – image formation principles – conversion of X-ray data into scan image. (3 hours)
- Image reconstruction from projections, CT reconstruction – Radon transform, inverse Radon transform, back projection operator, convolution back projection – parallel beam geometry – fan beam geometry. (3 hours)
- 2D image reconstruction techniques – iteration and Fourier methods. (2 hours)
- Types of CT scanners – spiral CT, multi-slice CT. (1 hour)

FIRST INTERNAL EXAM

Module III (15%)
- Basic principles of magnetic resonance – magnetic moment, FID, excitation and emission – principles of image formation. (2 hours)
- Basic MRI techniques: T1 weighted imaging, T2 weighted imaging, spin density weighted imaging, gradient echo imaging. Pulse sequences – spin echo, gradient echo imaging sequence. (4 hours)
- MRI instrumentation – magnets – gradient system – RF coils – receiver system. (1 hour)

Module IV (15%)
- Image acquisition and reconstruction techniques – MRI Fourier reconstruction. (2 hours)
- Functional MRI – BOLD signal – clinical applications. (1 hour)
- Diffusion tensor imaging – principle and application. (1 hour)
- Topics for seminar only: magnetic resonance elastography – magnetic resonance angiography – magnetic resonance spectroscopy – magnetic resonance microscopy – hybrid imaging – PET-MRI & EEG-MRI, applications – magnetic resonance perfusion weighted imaging. (3 hours)

SECOND INTERNAL EXAM

Module V (20%)
- Emission computed tomography – radioisotope imaging – radionuclides for imaging – rectilinear & linear scanners, PET & SPECT – principle – gamma camera. (6 hours)

Module VI (20%)
- Infrared imaging – physics of thermography – IR detectors – photon & thermal detectors – thermal uncooled IR detectors: resistive microbolometers, pyroelectric and ferroelectric detectors, thermoelectric detectors. (6 hours)
- Pyroelectric vidicon camera – camera characterization – thermographic image processing – clinical applications of thermography in rheumatology, neurology, oncology and physiotherapy.

END SEMESTER EXAM
QUESTION PAPER PATTERN:
Maximum Marks: 100 Exam Duration: 3 Hours
There shall be three parts for the question paper.
Part A includes Modules 1 & 2 and shall have three questions of fifteen marks out of which
two are to be answered. There can be subdivisions, limited to a maximum of 4, in each
question.
Part B includes Modules 3 & 4 and shall have three questions of fifteen marks out of which
two are to be answered. There can be subdivisions, limited to a maximum of 4, in each
question.
Part C includes Modules 5 & 6 and shall have three questions of twenty marks out of which
two are to be answered. There can be subdivisions, limited to a maximum of 4, in each
question.
Note: Each part shall have questions uniformly covering both the modules in it.
E G4903 Pages: 2

Reg No.:_______________ Name:__________________________


APJ ABDUL KALAM TECHNOLOGICAL UNIVERSITY
SEVENTH SEMESTER B.TECH DEGREE EXAMINATION, DECEMBER 2018
Course Code: BM409
Course Name: MEDICAL IMAGING TECHNIQUES
Max. Marks: 100 Duration: 3 Hours

PART A
Answer any two full questions, each carries 15 marks. Marks
1 a) Explain the acoustic impedance and its role in ultrasound imaging. (5)
b) Discuss the terminologies (i) Characteristic Impedance (ii) Velocity of
Propagation (5)
c) Explain the acoustic impedance and its role in ultrasound imaging. (5)
2 a) Summarize the X-ray tubes used in CT. Explain the types of X-ray tubes that have been
utilized for computed tomography (5)
b) With neat sketch, explain the block diagram of ultrasound machine. Also discuss
the functions of each part. (10)
3 a) Explain Fourier method for 2D image reconstruction (5)
b) Summarize the types of CT used. Explain their need and principle (6)
c) What is the principle of sectional imaging? (4)

PART B
Answer any two full questions, each carries 15 marks.
4 a) Nuclear magnetic resonance tomography is the powerful imaging technique used
in medical field, with relevant schematics explain the principle of NMR (10)
b) Explain the decay in magnetization (5)
5 Based on MRI technique, explain :
a) spin density weighted imaging (7)
b) gradient echo imaging (8)
6 a) What are the reconstruction techniques used in NMR imaging? What is
commonly used method in modern scanners? (10)
b) What are the clinical applications of functional MRI? (5)

PART C
Answer any two full questions, each carries 20 marks.
7 a) Explain the principles of SPECT imaging with suitable diagrams (10)


b) Draw the block diagram and explain multi crystal Gamma camera. (10)
8 a) Discuss the Anger position network and pulse height analyzers with suitable
diagrams (10)
b) What are the clinical applications of thermal imaging in Rheumatology,
oncology and physiotherapy? (10)
9 a) What is the principle of thermography? (4)
b) What is Stefan-Boltzmann Law? (4)
c) Explain (i) resistive bolometers (ii) thermoelectric detectors (iii) pyroelectric detectors (12)
****

E G192097 Pages:2

Reg No.:_______________ Name:__________________________

APJ ABDUL KALAM TECHNOLOGICAL UNIVERSITY


SEVENTH SEMESTER B.TECH DEGREE EXAMINATION(R&S), DECEMBER 2019
Course Code: BM409
Course Name: MEDICAL IMAGING TECHNIQUES
Max. Marks: 100 Duration: 3 Hours
PART A
Answer any two full questions, each carries 15 marks. Marks

1 a) How are the frequency and penetration of ultrasound related? Give an example of (3)
choosing frequency based on penetration at different parts of the human body
b) Write a short note on colour flow mapping (4)
c) Describe the constructional details of an ultrasound transducer (8)
2 a) Explain the data acquisition system used in X-ray CT (7)
b) i) What is M-mode scan? (5)
ii) Mention any one example for the application of M-mode scan (3)
3 a) Explain convolution back projection method of CT image reconstruction (9)
b) List the features of multi-slice CT (6)

PART B
Answer any two full questions, each carries 15 marks.
4 a) Compare T1 weighted imaging and T2 weighted imaging (5)
b) Explain how the magnetic fields and radiofrequency signals are used to obtain (10)
anatomical information?
5 a) Give details about different magnets used in MRI (8)
b) Write a short note on Functional MRI and mention any two applications. (7)
6 a) Explain diffusion tensor imaging and list any five applications (12)
b) Define BOLD signal (3)

PART C
Answer any two full questions, each carries 20 marks.
7 a) What is a scintillation detector (5)
b) Explain Emission Computed Tomography (15)
8 a) Explain the concept of thermal imaging (6)
b) Based on the detector arrangement classify the design types of Positron Emission (4)
Tomography


c) Write a short note on (10)


i) Thermo electric detector
ii) Pyroelectric detector
9 a) List different physical factors which affect the amount of IR radiation from (4)
human body
b) With the help of neat diagram describe about pyroelectric vidicon camera (12)
c) What are the applications of thermography in oncology (4)
****

E G1098 Pages: 2

Reg No.:_______________ Name:__________________________


APJ ABDUL KALAM TECHNOLOGICAL UNIVERSITY
SEVENTH SEMESTER B.TECH DEGREE EXAMINATION(S), MAY 2019
Course Code: BM409
Course Name: MEDICAL IMAGING TECHNIQUES
Max. Marks: 100 Duration: 3 Hours

PART A
Answer any two full questions, each carries 15 marks. Marks

1 a) What is ultrasound? Describe a method to produce the ultrasound. (5)


b) Discuss the terminologies (i) Characteristic Impedance (ii) Velocity of
Propagation (5)
c) What is resolution and the different types of resolution with respect to ultrasound
waves (5)
2 a) With the help of an ultrasound machine, explain A scan mode. (5)
b) Draw the block diagram of a real time ultrasound scanner and explain the
functions of each block (6)
c) Discuss the effect of the formation of the image of a small reflector during linear
scan. (4)
3 a) What are the basic features of X-Ray Computed Tomography (6)
b) Explain the 2-D image reconstruction (5)
c) Why is spiral CT more effective than other generations of CT? (4)

PART B
Answer any two full questions, each carries 15 marks.
4 a) Explain the relaxation mechanisms associated with excited nuclear spins (5)
b) How pulse sequence is employed in spin-echo imaging technique (4)
c) With relevant block diagram generalize the detection system used in MRI. (6)
5 a) Based on NMR detection system generalize the types of coils commonly
available at receiver side. (4)
b) Explain the principle of diffusion tensor imaging. (5)
c) Describe the instrumentation systems used in functional MRI. (6)
6 a) Explain Fourier reconstruction methods used in MRI. (10)
b) What are the clinical applications of functional MRI (5)


PART C
Answer any two full questions, each carries 20 marks.
7 a) With a simplified diagram explain SPECT system consisting of dual large field-of
view scintillation cameras mounted on a rotatable gantry (8)
b) List out the characteristics of Radio nuclides for imaging (5)
c) Generalize the block diagram of a typical rectilinear scanner. (7)
8 a) Discuss the common radio nuclides used for PET (10)
b) What is the physics of thermography? Explain the terminologies with
respect to Infrared Imaging. (10)
9 a) Explain the principle of microbolometer thermal detector (6)
b) What is spectral radiant emissivity? How does it behave as the wavelength (6)
increases?
c) Explain the various parts of pyroelectric vidicon thermographic camera? (8)
****

00000BM409121901
E Pages: 1

Reg No.:_______________ Name:__________________________


APJ ABDUL KALAM TECHNOLOGICAL UNIVERSITY
Seventh semester B.Tech examinations (S), September 2020

Course Code: BM409


Course Name: MEDICAL IMAGING TECHNIQUES
Max. Marks: 100 Duration: 3 Hours
PART A
Answer any two full questions, each carries 15 marks. Marks
1 a) What is Radon transform? Describe the various steps involved in the (10)
reconstruction of CT images.
b) State the principle that determines the shape of the ultrasound beam and discuss (5)
how the shape of the beam and the wavelength are related.
2 a) What is a Pulse Repetition Frequency (PRF) generator? With the help of a diagram, (10)
mention the constraints on PRF.
b) Define Fourier Slice theorem. Explain its significance in CT reconstruction. (5)
3 a) Validate: “The amplitude of the display signal can be changed by altering (7)
either the amplitude of the transmitted pulse or the gain of the amplifier”.
b) Illustrate and explain the basic principle of producing CT images. (8)
PART B
Answer any two full questions, each carries 15 marks.
4 a) Explain the terms (i) Free Induction Decay (ii) Magnetic Moment (6)
b) What is f-MRI? Discuss the scope of this technique. (9)
5 a) Explain the different image reconstruction technique in MRI. (9)
b) Distinguish between MRI and CT technologies. (6)
6 a) What is gradient echo imaging and mention its applications? (6)
b) With the help of a well-labelled diagram, explain the MRI machine. (9)
PART C
Answer any two full questions, each carries 20 marks.
7 a) What are the different types of IR Detectors? (10)
b) With the help of schematic diagram explain the principle of sectional imaging in (10)
SPECT.
8 a) Draw and explain the block diagram of the scanning and displaying (10)
arrangements of infra-red imaging.
b) Describe the Gamma camera along with associated circuits to produce a nuclear (10)
image.
9 a) What is clinical application of thermography in Oncology? Elucidate with proper (10)
example.
b) Describe emission computed tomography. (10)
****

MIT MODULE 1 - SCET

1. Ultrasound Imaging

1. Discuss the basic principles of ultrasound imaging

• Ultrasound (US) is an imaging technology that uses high-frequency sound waves to


characterize tissue.
• It is a useful and flexible modality in medical imaging, and often provides an additional
or unique characterization of tissues, compared with other modalities such as
conventional radiography or CT.

• In order to understand diagnostic ultrasound, sound should be thought of as more than


just the familiar sense of hearing
• Rather, sound should be thought of as the interaction of energy and matter.
• Sound is mechanical energy transmitted by pressure waves in a medium, which means
that sound exists as the coordinated motion of particles in that medium.
• If a sound wave is moving from left to right through air, then particles of air will be
displaced both rightward and leftward as the energy of the sound wave passes through
it. The motion of the particles is parallel to the direction of the energy transport.
• This is what characterizes sound waves in air as longitudinal waves.
• Because of the longitudinal motion of the air particles, there are regions in the air where
the air particles are compressed together and other regions where the air particles are
spread apart.
• These regions are known as compressions and rarefactions respectively. The
compressions are regions of high air pressure while the rarefactions are regions of low
air pressure.

• Ultrasound relies on properties of acoustic physics (compression/rarefaction, reflection,


impedance, etc) to localize and characterize different tissue types.


• The frequency of the sound waves used in medical ultrasound is in the range of
millions of cycles per second (megahertz, MHz). In contrast, the upper range of
audible frequencies for humans is around 20 thousand cycles per second (20 kHz).
• An ultrasound transducer sends an ultrasound pulse into tissue and then receives echoes
back. The echoes contain spatial and contrast information.
• The concept is analogous to sonar used in nautical applications, but the technique in
medical ultrasound is more sophisticated, gathering enough data to form a rapidly moving
two-dimensional grayscale image.

• Steps of ultrasound production:

• Diagnostic imaging is generally performed using ultrasound in the frequency range


from 2 to 15 MHz
• The choice of frequency is dictated by a trade-off between spatial resolution and
penetration depth, since higher frequency waves can be focused more tightly but are
attenuated more rapidly by tissue.


2. Discuss the advantages and disadvantages of ultrasound imaging?

Advantages
• ultrasound uses non-ionizing sound waves and has not been associated with
carcinogenesis. This is particularly important for the evaluation of fetal and gonadal
tissue.
• in most centers, ultrasound is more readily available than more advanced cross-sectional
modalities such as CT or MRI.
• ultrasound examination is less expensive to conduct than CT or MRI.
• there are few (if any) contraindications to use of ultrasound, compared with MRI or
contrast-enhanced CT.
• the real-time nature of ultrasound imaging is useful for the evaluation of physiology as
well as anatomy (e.g. fetal heart rate).
• Doppler evaluation of organs and vessels adds a dimension of physiologic data, not
available on other modalities (with the exception of some MRI sequences).
• ultrasound images may not be as adversely affected by metallic objects, as opposed to
CT or MRI.
Disadvantages
• training is required to accurately and efficiently conduct an ultrasound exam and there is
non-uniformity in the quality of examinations ("operator dependence").
• ultrasound is not capable of evaluating tissue types with high acoustical impedance (e.g.
bone). It is also limited in evaluating structures encased in bone (e.g. cerebral parenchyma
inside the calvaria).
• the high frequencies of ultrasound result in a potential risk of thermal heating or
mechanical injury to tissue at a micro level. This is of most concern in fetal imaging.
• ultrasound has its own set of unique artifacts (US artifacts), which can potentially degrade
image quality or lead to misinterpretation.
• some ultrasound exams may be limited by abnormally large body habitus


3. What is acoustic impedance discuss its significance?

• Acoustic impedance (Z) is a physical property of tissue. It describes how much


resistance an ultrasound beam encounters as it passes through a tissue.
• Acoustic impedance depends on:
the density of the tissue (ρ, in kg/m3)
the speed of the sound wave (c, in m/s)
and they are related by:
Z = ρ × c
• So, if the density of a tissue increases, impedance increases. Similarly, but less intuitively,
if the speed of sound increases, then impedance also increases.
• The effect of acoustic impedance in medical ultrasound becomes noticeable at
interfaces between different tissue types.
• The ability of an ultrasound wave to transfer from one tissue type to another depends
on the difference in impedance of the two tissues.
• If the difference is large, then the sound is reflected.
• We grasp this intuitively at a macroscopic level. If you were to yell into a canyon, you
would expect an echo to return to you. The sound wave in air meets the dense rocky
canyon wall and reverberates off it back to you; the sound wave does not just pass into
the rock. This is due to the difference in impedance.
• Similarly, when an ultrasound beam passes through muscle tissue and encounters bone,
much of it is reflected because of the large difference in acoustic impedance between the two tissues.
• The amount of reflection that occurs at perpendicular (normal) incidence is expressed by:
Reflection fraction = [(Z₂ − Z₁) / (Z₂ + Z₁)]²
• Where Z1 and Z2 represent the impedance in tissue 1 and tissue 2, respectively.
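As a quick numerical illustration of the reflection-fraction formula above, the short Python sketch below plugs in typical textbook impedance values (assumed here for illustration; exact figures vary by source):

# Sketch: intensity reflection fraction at tissue interfaces.
# Impedance values (MRayl = 1e6 kg m^-2 s^-1) are assumed typical
# textbook figures, not taken from this text.
Z = {"air": 0.0004, "fat": 1.38, "muscle": 1.70, "bone": 7.80}

def reflection_fraction(z1, z2):
    """Fraction of incident intensity reflected at normal incidence."""
    return ((z2 - z1) / (z2 + z1)) ** 2

for a, b in [("fat", "muscle"), ("muscle", "bone"), ("muscle", "air")]:
    r = reflection_fraction(Z[a], Z[b])
    print(f"{a}/{b}: {100 * r:.1f}% reflected")

# fat/muscle  -> ~1% reflected (most energy transmitted)
# muscle/bone -> ~41% reflected (strong echo)
# muscle/air  -> ~99.9% reflected (why coupling gel is needed)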


4. Discuss the physical properties of ultrasound like frequency,


wavelength and velocity?

Wavelength and frequency

• Sound waves are characterized by the properties of frequency (f) or number of cycles
per second, amplitude (A or loudness), and wavelength (λ) or distance between two
adjacent ultrasound cycles.
• Using the average propagation velocity of ultrasound through tissue (c) or 1,540 m/s, the
relation between frequency (f) and wavelength (λ) is characterized as:
• λ (mm) = c (1,540 m/s)/f (cycles/s)

• Ultrasound vibrations (or waves) are produced by a very small but rapid push–pull action
of a probe (transducer) held against a material (medium) such as tissue.
• The push–pull action of the transducer causes regions of compression and rarefaction to
pass out from the transducer face into the tissue.
• These regions have increased or decreased tissue density.


• In tissue if we could look closely at a particular point, we would see that the tissue is
oscillating rapidly back and forward about its rest position.
• As noted above, the number of oscillations per second is the frequency of the wave.
The speed with which the wave passes through the tissue is very high close to 1540 m/s
for most soft tissue.
Speed of sound
• The speed of sound, c, is simply related to the frequency, f, and the wavelength, λ, by the
formula: c/f = λ
• where c is medium dependent, but for water and most soft tissues it is ~1500 m/s.
Therefore, typical wavelengths encountered in medical diagnosis range from 1.5 mm at
1 MHz to 0.1 mm at 15 MHz.
• The wavelength and frequency of ultrasound are inversely related, i.e., ultrasound of high
frequency has a short wavelength and vice versa.
• Medical ultrasound devices use sound waves in the range of 1–20 MHz.
• Proper selection of transducer frequency is an important concept for providing optimal
image resolution in diagnostic and procedural US.
• High-frequency ultrasound waves (short wavelength) generate images of high axial
resolution.
• Increasing the number of waves of compression and rarefaction for a given distance can
more accurately discriminate between two separate structures along the axial plane of
wave propagation.
• However, high-frequency waves are more attenuated than lower frequency waves for a
given distance; thus, they are suitable for imaging mainly superficial structures.
• Conversely, low-frequency waves (long wavelength) offer images of lower resolution
but can penetrate to deeper structures due to a lower degree of attenuation
• For this reason, it is best to use high-frequency transducers (up to 10–15 MHz range)
to image superficial structures (such as for stellate ganglion blocks) and low-frequency
transducers (typically 2–5 MHz) for imaging the lumbar neuraxial structures that are
deep in most adults.
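A minimal Python sketch of the λ = c/f relation, using the soft-tissue average speed of 1,540 m/s quoted above (the list of frequencies is illustrative):

# Sketch: wavelength versus transducer frequency in soft tissue,
# assuming the average propagation velocity c = 1540 m/s quoted above.
c = 1540.0  # m/s

for f_mhz in [2, 3.5, 5, 10, 15]:
    wavelength_mm = c / (f_mhz * 1e6) * 1e3
    print(f"{f_mhz:>4} MHz -> wavelength = {wavelength_mm:.2f} mm")

# Higher frequency -> shorter wavelength -> better axial resolution,
# but stronger attenuation and hence shallower penetration.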
Velocity of propagation
• Ultrasound, like sound in gases, propagates largely as a longitudinal pressure wave
• the energy of a moving source is transferred to the medium either as a local compression,
when the source pushes on the medium, or as a local stretching (rarefaction) when the
source pulls on the medium.
• In either case, potential energy is stored elastically in the layer of the medium that is
immediately adjacent to the source and is released by motion of the adjacent layer of the
medium
• The bulk elastic modulus is a property of the medium that relates the applied pressure
needed to achieve a given fractional change in volume.

• The longitudinal wave speed is c = √(K/ρ₀).
• It is, therefore, clear that the speed of the wave depends on the density ρ0 and the bulk
elastic modulus K


5. Discuss the components of an Ultrasound Transducer?

• Piezoelectric material can be arranged in a number of different ways within the transducer,
with each arrangement having a different purpose. The arrangement of the piezoelectric
material will affect how the transducer sweeps the tissue to produce a two-dimensional
image, as well as the geometry of the ultrasound beam in order to maximize resolution.

Piezoelectric material
• Piezoelectric ceramic (lead zirconate titanate, PZT)
• Plastic (polyvinylidine difluoride, PVDF)
• Silver material is deposited in the faces – for electric connections-application of
potential
• Voltage applied produces a proportional change in thickness
• The crystal has a natural resonant frequency f₀ = c_crystal / (2d)
• c_crystal – speed of sound through the crystal
• d – crystal thickness
Damping blocks
• Damping blocks are attached to the back of the piezoelectric material to accomplish a
wide range of tasks.
• Firstly, these damping blocks absorb stray sound waves, as well as absorb backward
directed ultrasound energy.
• They also dampen the vibrations to create short spatial pulse lengths (SPL) to help
improve axial resolution.
• A short SPL is desirable to minimize interference.
• If the length of the pulses is too long, it can interfere with the sound waves that are on
their way back.
• In a more extreme example, the outgoing pulse may actually collide with the returning
waves if it is too long.
• Thus, a good damping material is important to maximize sensitivity to signal.
Lens
• The acoustic lens is used to help focus the beam to improve resolution.
• In the same way that light diffracts off the lens in a magnifying glass or a pair of
eyeglasses, this helps aim the ultrasound beam at a particular distance depending on the
application.
• An unfocused ultrasound beam will have a broader beam geometry that will have a lower
resolution.
• Thus, an acoustic lens is very beneficial in order to provide a sharper image. It will also
reduce blurring from side lobe interference produced by the piezoelectric material.
Matching layer
• To provide acoustic coupling between the crystal and patient
• This layer helps to overcome the acoustic mismatch between the element and the tissue.
• Z_matching = (Z_element × Z_tissue)^(1/2)


• T_matching (matching layer thickness) = λ/4
• Example- aluminium powder in araldite
• The impedance matching layer is a crucial part of the transducer.
• This layer usually contains oil and helps maximize the amount of wave energy
transmitted into the skin and onward.
• The basic idea of this layer is to help the area of the transducer touching the skin better
mimic properties of the tissue.
• Ultrasound beams can more effectively travel from tissue to tissue of similar density.
• A coupling gel is also applied to the surface where the transducer will be imaging in
order to reduce interference from air bubbles
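The design formulas above (resonant frequency, matching-layer impedance and quarter-wave thickness) can be checked numerically; all material values in this sketch are assumed round figures, not taken from this text:

import math

# Sketch: transducer design numbers from the formulas above.
# All material values are assumed round figures for illustration.
c_crystal = 4000.0  # m/s, speed of sound in the piezoelectric crystal (assumed)
d = 0.5e-3          # m, crystal thickness (assumed)
Z_element = 30e6    # kg m^-2 s^-1, PZT impedance (assumed)
Z_tissue = 1.5e6    # kg m^-2 s^-1, soft-tissue impedance (assumed)

f0 = c_crystal / (2 * d)                   # natural resonant frequency
Z_match = math.sqrt(Z_element * Z_tissue)  # matching-layer impedance
wavelength = c_crystal / f0                # wavelength (layer sound speed assumed = c_crystal)
t_match = wavelength / 4                   # quarter-wave matching thickness

print(f"f0 = {f0 / 1e6:.1f} MHz, Z_match = {Z_match / 1e6:.1f} MRayl, "
      f"t_match = {t_match * 1e3:.2f} mm")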

6. Discuss Various types of ultrasound transducers or probes?

The probe is the main part of the ultrasound machine. It sends ultrasound waves into the body
and receives the echoes produced by the waves when it is placed on or over the body part being
imaged. The shape of the probe determines its field of view. Probes are generally described by
the size & the shape of their footprint. Selecting the right probe for the situation is essential to
get good images.
There are four basic types of probes used:
1. Linear probes – are generally high frequency better for imaging superficial structures
& vessels also called vascular probes.
2. Curvilinear probes – have widened footprint and lower frequency for transabdominal
imaging & widen the field of view.
3. Phased array probes – for imaging between the ribs, such as in cardiac ultrasound.
4. Endocavitary probes – high-frequency probes that give better imaging from within body
cavities, e.g. transvaginal & transrectal probes.
3D & 4D ultrasound probes help in more detailed imaging in terms of volume data acquisition,
volume display and analysis, and multiplanar imaging of organs of interest, i.e., assessing
multiple 2D image planes simultaneously. Probes emit ultrasound waves that pass through the
skin. These waves are reflected by the various structures they encounter. The time taken by the
waves to return and their strength form the foundation for interpreting the details into a clear image.
Sophisticated computer software performs this processing.


7. Discuss the Block diagram/constructional details of an ultrasound


machine?

Figure shows a generalised scheme for signal generation and processing in a pulse echo
imaging system, with a pictorial representation of the signals to be expected at each stage.

Transducer and Frequency


• The choice of frequency is a compromise between good resolution and deep
penetration.
• One often finds frequencies in the range 3–5MHz used to image the liver and other
abdominal organs, the uterus and the heart, whereas more superficial structures such as
thyroid, carotid artery, breast, testis and various organs in infants would warrant the use
of somewhat higher frequencies (5–15MHz). The eye, being extremely superficial and
exhibiting low attenuation, has been imaged with frequencies in the range 10–30MHz.
The short pulse of sound that emerges from a diagnostic transducer or array is generally
no more than 2–3 cycles in length, and is generated (at the transmitter) by applying to
the transducer either a momentary voltage step or a brief (e.g. single-cycle) gated sine
wave of frequency equal to the resonant frequency of the transducer
Clock Pulse
• This triggers the excitation of the transducer and acts, in some circumstances, to
synchronise the display


• High repetition rates are desirable, however, for fast scanning or for following moving
structures. The maximum pulse repetition rate, usually referred to as the pulse repetition
frequency (PRFmax), is limited by the maximum depth (Dmax) to which one wishes to
image.
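A short sketch of this depth constraint: an echo from depth D_max needs a round-trip time of 2·D_max/c, so PRF_max ≈ c/(2·D_max). The depths below are illustrative:

# Sketch: maximum pulse repetition frequency allowed by imaging depth.
# The next pulse cannot be fired until the deepest echo from the
# previous pulse has returned, so PRF_max ~ c / (2 * D_max).
c = 1540.0  # m/s, assumed soft-tissue speed of sound

for depth_cm in [5, 10, 20]:
    d_max = depth_cm / 100.0
    round_trip_us = 2 * d_max / c * 1e6
    prf_max_khz = c / (2 * d_max) / 1e3
    print(f"D_max = {depth_cm:>2} cm: round trip = {round_trip_us:.0f} us, "
          f"PRF_max = {prf_max_khz:.1f} kHz")

# 10 cm depth -> ~130 us round trip -> PRF_max ~ 7.7 kHz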
Transmitter
• The voltage pulse applied to the transducer is usually below 500V and often in the range
100–200V.
• Its shape is equipment dependent, but it must have sufficient frequency components to
excite the transducer properly; for example, for frequency components up to 10MHz, a
pulse rise time of less than 25ns is required.
Linear RF Amplifier
• This is an important component, since noise generated at this point may well limit the
performance of the complete instrument.
• Clever design is required, since, whilst maintaining low noise and high gain, the input
must be protected from the high-voltage pulse generated by the transmitter.
• Any circuits that do this must have a short recovery time, and the amplifier itself should
possess a large dynamic range and good linearity
Time Gain Control
• This is provided by a voltage-controlled attenuator.
• Some form of time-varying function, synchronised with the main clock and triggered
via a delay, is used as a control voltage so that the system gain roughly compensates
for attenuation of sound in the tissue.
• The simplest function used is a logarithmic voltage ramp, usually set to compensate for
some mean value of attenuation.
• The effect that this has on the time-varying gain of the system is illustrated
schematically in Figure.

• The delay period indicated is often adjustable, so that the attenuation compensation
does not become active until echoes begin returning from attenuating tissue.
Compression Amplifier
• There is a wide range of gain characteristics in use but a general feature is that the gain
decreases with increasing input signal.
• One example is an amplifier with a logarithmic response.
• This allows the remaining 40–50dB echo range to be displayed as a grey scale on a
screen that might typically have only a 20–30dB dynamic range.
Demodulation
• At this point the envelope of the RF echo signal is extracted, for example, by
rectification followed by smoothing with a time constant of about 1.5λ, although more
accurate techniques of complex demodulation are often used.


• The signal depicted at this point is known as a detected A-scan (representing the
amplitude of the echoes).
• There are two types of demodulation used in ultrasound signal processing, amplitude
demodulation as described here for anatomical imaging, and frequency demodulation
for detecting the Doppler shift due to target motion.
Pre- and Postprocessing
• It is common to preprocess the A-scan echo signal both before and after it is digitised
and stored in the display memory
• Examples are various forms of edge detection (usually differentiation), and further
adjustments to the gain characteristic, such as suppression, which is dynamic-range
restriction by rejecting from the display echoes that are below an operator-defined
threshold (noise suppression)
Digitisation and Display
• As the sound beam is scanned across the object being imaged, a sequence of A-scans
is generated, each of which provides one line in the final image.
• This sequence of scan lines must be stored and geometrically reconstructed to form an
image, which is achieved by a process known as a scan conversion

8. Explain the Time-Gain Compensation in ultrasound imaging?

• The voltages corresponding to the backscattered echoes have a large range of


amplitudes: very strong signals appear from reflectors close to the transducer and very
weak signals from low concentrations of scatterers deep within the body.
• The total range of signal amplitudes may be as high as 80-100 dB.
• Radiofrequency (RF) amplifiers typically cannot amplify signals with a dynamic range
greater than about 40-50 dB with a linear gain.
• Nonlinear amplification would result in the signals from the weaker echoes being
attenuated severely. The solution to this problem is to use time-gain compensation
(TGC) of the acquired signals, a process in which the amplification factor is increased
as a function of time.
• Signals arising from structures close to the transducer are therefore amplified by a
smaller factor than those from greater depths.
• Various linear or nonlinear functions of gain versus time can be used, and these
functions can be chosen on-line by the operator.
• The net effect of TGC is to compress the dynamic range of the backscattered echoes,
as shown in Figure 3.16. After TGC, the signals pass through a logarithmic compression
amplifier, which further reduces the dynamic range to 20-30dB.


• Older systems demodulated the signal to very low frequencies before digitization, but
recent advances in digital receivers allow direct digitization of the signal at the
fundamental frequency, giving a higher image SNR.
• Dynamic receiver filtering, a process in which there is a progressive reduction in the
receiver frequency bandwidth as a function of time after pulse transmission, can also
be used. Since the high-frequency content of the backscattered signal decreases with
depth due to the greater attenuation in tissue at higher frequencies, the receiver
bandwidth can be reduced accordingly.
• This results in an improved SNR in the image because the noise level is proportional to
the square root of the receiver bandwidth. After the signal has been digitized, it can be
processed via envelope detection, edge detection, or whichever algorithm is appropriate
for the particular application, and then displayed as a gray-scale image.
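A minimal sketch of a TGC ramp, assuming a soft-tissue attenuation of about 0.5 dB/cm/MHz (an assumed round figure) and a gain that simply tracks the round-trip attenuation at each echo's depth:

# Sketch: a simple time-gain compensation (TGC) ramp.
# Assumes attenuation ~0.5 dB per cm per MHz (round textbook figure)
# and a gain, in dB, that matches the round-trip loss at each depth.
c = 1540.0   # m/s
alpha = 0.5  # dB / (cm * MHz), assumed
f_mhz = 5.0  # transducer frequency, assumed

def tgc_gain_db(t_us):
    """Gain (dB) applied to an echo received t_us after transmission."""
    depth_cm = c * (t_us * 1e-6) / 2 * 100  # echo depth from time of flight
    return 2 * alpha * f_mhz * depth_cm     # compensate round-trip attenuation

for t in [13, 65, 130]:  # microseconds, roughly 1, 5 and 10 cm depth
    print(f"t = {t:>3} us -> gain ~ {tgc_gain_db(t):.0f} dB")

# The ramp reaches ~50 dB at 10 cm, which is why TGC plus logarithmic
# compression is needed to fit the echoes onto a 20-30 dB display.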

9. What are the different modes of scans in ultrasound? Differentiate


them?

Amplitude (A)-mode scanning


• Amplitude (A)-mode scanning refers to the acquisition of a one-dimensional scan.
• An A-mode scan simply plots the amplitude of the backscattered echo versus the time
after transmission of the ultrasound pulse.
• Some detectors use the unrectified, rather than rectified, digitized signal because the
leading edge is better defined.
• A-mode scanning is used most often in ophthalmology to determine the relative
distance between different regions of the eye, and can be used, for example, to detect
corneal detachment.
• High-frequency (>10 MHz) ultrasound is used to produce very high axial resolution
• Tissue attenuation, even at this high frequency, is not problematic because the
dimensions of the eye are so small.
M-mode scanning


• A motion (M)-mode scan provides information on tissue movement within the body,
and essentially displays a continuous series of A-mode scans.
• The brightness of the displayed signal is proportional to the amplitude of the
backscattered echo, with a continuous time ramp being applied to the horizontal axis of
the display
• The maximum time resolution of the M-mode scan is dictated by how long it takes for
the echoes from the deepest tissue to return to the transducer.
• M-mode scanning is used most commonly to detect motion of the heart valves and heart
wall in echocardiography.
Brightness (B)-mode scanning
• Brightness (B)-mode scanning produces a two-dimensional image through a cross
section of tissue.
• Each line in the image consists of an A-mode scan with the brightness of the signal
being proportional to the amplitude of the backscattered echo.
• B-mode scanning can be used to study both stationary and moving structures, such as
the heart, because complete images can be acquired very rapidly.
• For example, in the case of an image with a 10-cm depth-of-view, it takes 130 µs after
transmission of the ultrasound pulse for the most distant echo to return to the transducer.
• If the image consists of 120 lines, then the total time to acquire one frame is 15.6 ms
and the frame rate is 64 Hz. If the depth-of-view is increased, then the number of lines
must be reduced in order to maintain the same frame rate.
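The frame-rate arithmetic in this example can be reproduced in a few lines (values taken from the text above):

# Sketch: B-mode frame rate from depth-of-view and lines per frame,
# reproducing the 10 cm / 120 line example in the text.
c = 1540.0    # m/s
depth = 0.10  # m, depth of view
lines_per_frame = 120

time_per_line = 2 * depth / c  # ~130 microseconds round trip
time_per_frame = time_per_line * lines_per_frame
frame_rate = 1.0 / time_per_frame

print(f"time per line  = {time_per_line * 1e6:.0f} us")
print(f"time per frame = {time_per_frame * 1e3:.1f} ms")
print(f"frame rate     = {frame_rate:.0f} Hz")  # ~64 Hz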

10. Discuss and differentiate 3D/4D ultrasound

2D ultrasound
Traditional ultrasound scanning is 2D, meaning it sends and receives ultrasound waves in just
one plane. The reflected waves then provide a flat, black-and-white image of the fetus through


that plane. Moving the transducer enables numerous planes of viewing, and when the right
plane is achieved, as judged by the image on the monitor, a still film can be developed from
the recording. Most of the detailed evaluation of fetal anatomy and morphology so far has been
done using 2D ultrasound.
3D ultrasound
Further development of ultrasound technology led to the acquisition of volume data, i.e.,
slightly differing 2D images caused by reflected waves which are at slightly different angles to
each other. These are then integrated by high-speed computing software. This provides a 3-
dimensional image. The technology behind 3D ultrasound thus has to deal with image volume
data acquisition, volume data analysis and finally volume display.

Volume data is acquired using three techniques:


• Freehand movements of the probe, with or without position sensors to form the images.
• Mechanical sensors built into the probe head.
• Matrix array sensors which uses one single sweep to acquire a lot of data. This
incorporates a whole series of 2D frames taken in succession. Data analysis then
provides a 3-D image. The operator can then extract any view or plane of interest. This
helps to visualize the structures in terms of their morphology, size and relationship with
each other.
Data can be displayed using either multiplanar format or rendering of images, which is a
computerized process filling in the gaps to create a smooth 3D image. There is also a
tomographic mode which allows the viewing of numerous parallel slices in the transverse plane
from the 3D or 4D data set.
Advantages
• The use of virtual planes helps in better visualization of fetal heart structures by
allowing views not attainable by 2D imaging, possibly adding a 6% chance of detecting
defects.
• Diagnosis of fetal face defects like cleft lip.
• Diagnosis of fetal skeletal or neural tube defects.
• Less time for standard plane visualization.
• Less dependent on operator skill and experience for diagnosis of common fetal
anomalies.
• The recorded volume data can be made available for remote expert review for better
diagnosis.
4D ultrasound
3D imaging allows fetal structures and internal anatomy to be visualized as static 3D images.
However, 4D ultrasound allows us to add live streaming video of the images, showing the
motion of the fetal heart wall or valves, or blood flow in various vessels. It is thus 3D ultrasound
in live motion. It uses either a 2D transducer that rapidly acquires 20-30 volumes or a matrix
array 3D transducer.
4D ultrasound has the same advantages as 3D, while also allowing us to study the motion of
various moving organs of the body. Its clinical applications are still being studied. At present


it is mostly used to provide fetal keepsake videos, a use which is discouraged by most medical
watchdog sites.
This is because unregulated centers offer it as entertainment ultrasounds. Such use violates the
ALARA (As Low As Reasonably Achievable) principle governing the medical use of
diagnostic imaging.
Disadvantages of non-medical use are:
• The machines may use higher-than-usual levels of ultrasound energy with potential
side-effects on the fetus.
• The ultrasound sessions may be prolonged.
• Uncertified or untrained operators may lead to missed or inadequate diagnosis since
they are not required to be certified by law.
Advantages of 3D/4D ultrasound

• Shorter time for fetal heart screening and diagnosis.


• Volume data storage for screening, expert review, remote diagnosis as in remote areas,
and teaching.
• Enhanced parental bonding with the baby.
• Healthier behavior during pregnancy as a result of seeing the baby in real-time and in
3D.
• More support by the father after visualizing the baby’s form and movement.
• Possibly more accurate identification of fetal anomalies especially of the face, heart,
limbs, neural tube and skeleton.
• In addition it shares the benefits of 2D ultrasound, namely:
o Assessment of fetal growth.
o Evaluation of fetal well-being.
o Placental localization and assessment.
o Seeing and hearing the fetal heartbeat.
o Capturing images of the baby which bond the family and friends with the baby
before birth.

Disadvantages

• Expensive machinery.
• Longer training required to operate.
• Volume data acquired may be lower-quality in the presence of fetal movements of any
kind, which will affect all later planes of viewing.
• If the fetal spine is not at the bottom of the scanned field sound shadows may hinder
the view.

11. Discuss the relation between frequency of the ultrasound and the
depth of penetration?

• Sound waves are characterized by the properties of frequency (f) or number of cycles
per second, amplitude (A or loudness), and wavelength (λ) or distance between two
adjacent ultrasound cycles.


• Using the average propagation velocity of ultrasound through tissue (c) or 1,540 m/s,
the relation between frequency (f) and wavelength (λ) is characterized as:
• λ (mm) = c (1,540 m/s)/f (cycles/s)
• Ultrasound frequency is above 20,000 Hz or 20 kHz.
• Medical ultrasound is in the range of 3–15 MHz.
• The average speed of sound through most soft human tissues is 1,540 metres per second;
it can be calculated by multiplying the wavelength by the frequency.
• The wavelength and frequency of ultrasound are inversely related, i.e., ultrasound of
high frequency has a short wavelength and vice versa.
• The higher the frequency, the shorter the wavelength; for example, the wavelength at
2 MHz is 0.77 mm, whereas that at 15 MHz is about 0.1 mm.
• High-frequency waves are more attenuated than lower frequency waves for a given
distance; thus, they are suitable for imaging mainly superficial structures.
• Conversely, low-frequency waves (long wavelength) offer images of lower resolution
but can penetrate to deeper structures due to a lower degree of attenuation
• For this reason, it is best to use high-frequency transducers (up to 10–15 MHz range)
to image superficial structures (such as for stellate ganglion blocks) and low-frequency
transducers (typically 2–5 MHz) for imaging the lumbar neuraxial structures that are
deep in most adults.

12. Discuss the different resolutions in ultrasound imaging?

The spatial resolution of any imaging system is defined as its ability to distinguish two points
as separate in space. Spatial resolution is measured in units of distance such as mm. The higher
the spatial resolution, the smaller the distance which can be distinguished.
Spatial resolution is commonly further sub-categorized into axial resolution and lateral
resolution.

17 | P a g e
MIT MODULE 1 - SCET

*SPL-Spatial Pulse Length

18 | P a g e
MIT MODULE 1 - SCET

19 | P a g e
MIT MODULE 1 - SCET

• temporal resolution is synonymous with frame rate.


• Typical frame rates in echo imaging systems are 30-100 Hz.
• The temporal resolution or frame rate = 1/(time to scan 1 frame).


• The time to scan one frame is equal to the pulse repetition period x number of scan lines
per frame.
• Common means of improving frame rate include: (1) narrowing the imaging sector, which
decreases the time it takes to scan one frame; (2) decreasing the depth, which decreases the
PRP; and (3) decreasing the line density, which requires fewer lines to scan one frame (at the
cost of spatial resolution).

13. Discuss the relation between frequency of the ultrasound and the
resolution of images?


• Image resolution determines the clarity of the image.
• Spatial resolution comprises axial and lateral resolution, and both depend on the frequency
of the ultrasound.
• Axial resolution is the ability to see two structures lying one behind the other along the
beam axis (parallel to the beam) as separate and distinct. A higher frequency and a shorter
pulse length provide a better axial image.
• Lateral resolution is the ability to distinguish two structures lying side by side,
perpendicular to the beam. This is directly related to the width of the ultrasound beam.
• Proper selection of transducer frequency is an important concept for providing optimal
image resolution in diagnostic and procedural US.
• High-frequency ultrasound waves (short wavelength) generate images of high axial
resolution.
• Increasing the number of waves of compression and rarefaction for a given distance can
more accurately discriminate between two separate structures along the axial plane of
wave propagation.
• The narrower the beam, the better the lateral resolution.
• Beam width is inversely related to frequency: the higher the frequency, the narrower the
beam. If the beam is wide, the echoes from two adjacent structures will overlap and the two
will appear as one.

• Temporal resolution is the ability to detect that an object has moved over time.
• temporal resolution is synonymous with frame rate.
• Typical frame rates in echo imaging systems are 30-100 Hz.
• The temporal resolution or frame rate = 1/(time to scan 1 frame).
• The time to scan one frame is equal to the pulse repetition period x number of scan lines
per frame.
• Common means of improving frame rate include: (1) narrowing the imaging sector, which
decreases the time it takes to scan one frame; (2) decreasing the depth, which decreases the
PRP; and (3) decreasing the line density, which requires fewer lines to scan one frame (at the
cost of spatial resolution).
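As a rough numerical illustration of the axial-resolution argument above: axial resolution is commonly taken as half the spatial pulse length (SPL), i.e. (cycles per pulse × λ)/2. The cycle count below is an assumed typical value for a short diagnostic pulse:

# Sketch: axial resolution ~ SPL / 2 = (cycles_per_pulse * wavelength) / 2.
# A 3-cycle pulse is assumed (diagnostic pulses are typically 2-3 cycles).
c = 1540.0            # m/s
cycles_per_pulse = 3  # assumed short excitation pulse

for f_mhz in [2, 5, 10]:
    wavelength_mm = c / (f_mhz * 1e6) * 1e3
    axial_res_mm = cycles_per_pulse * wavelength_mm / 2
    print(f"{f_mhz:>2} MHz: wavelength {wavelength_mm:.2f} mm, "
          f"axial resolution ~ {axial_res_mm:.2f} mm")

# Higher frequency -> shorter pulse -> finer axial resolution.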


14. Discuss the Physical Principles and Theory of Image formation

• A short pulse of energy, typically 1-5 µs long, is transmitted into the body using an
ultrasound transducer.
• The transducer is focused to produce a narrow ultrasound beam, which propagates as a
pressure wave through tissue at a speed of approximately 1540 m/s.
• The initial trajectory of this beam is represented by line 1 in Figure 3.1.

• When the ultrasound wave encounters tissue surfaces, boundaries between tissues, or
structures within organs, a part of the energy of the pulse is scattered in all directions,
with a certain fraction of the energy being backscattered along the original transmission
path and returning to the transducer.
• As well as transmitting energy into tissue, the transducer also acts as the signal receiver,
and converts the backscattered pressure waves into voltages, which, after amplification
and filtering, are digitized.
• Using the measured time delay between pulse transmission and echo reception and the
propagation velocity of 1540m/s, one can estimate the depth of the feature.
• After all of the echoes have been received from the first beam trajectory, the direction
of the beam is electronically steered to acquire a second line of data adjacent to the first.
• This process is repeated to acquire between 64 and 256 lines, typically, per image. The
time required to acquire the echoes for each line is sufficiently short, on the order of
100-300 µs, depending on the required depth of view, that complete ultrasonic images
can be acquired in tens of milliseconds, allowing dynamic imaging studies to be
performed.


• Figure shows Elements of a simplified ultrasound pulse echo instrument.


• The received RF echo signals are presented on the vertical axis, where the horizontal
axis defines the time of flight which is converted to equivalent depth of penetration.
• RF signals are then demodulated representing the A-mode signals and the B-mode
image lines.
• The envelope of the echo signals is seen to the right, which yields 1D-information about
the tissue.

15. Discuss the Generation and Reception of Ultrasound by Transducers?

• The transducer is probably the single-most important component in an ultrasonic


imaging system.
• Its function is to convert applied electrical signals to pressure waves, which propagate
through the medium, and to generate an electrical replica of any received acoustic
waveform.


• A well-designed transducer will do this with high fidelity, with good conversion
efficiency and with little introduction of noise or other artefacts.
• Also, it is primarily through transducer design that one has control over the system
resolution and its spatial variation
• There are two types: single element and multielement transducers.

Conventional Construction (Single Element/single crystal)

Piezoelectric material
• Piezoelectric ceramic (lead zirconate titanate, PZT)
• Plastic (polyvinylidine difluoride, PVDF)
• Silver material is deposited in the faces – for electric connections-application of potential
• Voltage applied produces a proportional change in thickness
• The crystal has a natural resonant frequency f₀ = c_crystal / (2d)
• c_crystal – speed of sound through the crystal
• d – crystal thickness
Damping blocks
• Damping blocks are attached to the back of the piezoelectric material to accomplish a
wide range of tasks.
• Firstly, these damping blocks absorb stray sound waves, as well as absorb backward
directed ultrasound energy.
• They also dampen the vibrations to create short spatial pulse lengths (SPL) to help
improve axial resolution.
• A short SPL is desirable to minimize interference.


• If the length of the pulses is too long, it can interfere with the sound waves that are on
their way back.
• In a more extreme example, the outgoing pulse may actually collide with the returning
waves if it is too long.
• Thus, a good damping material is important to maximize sensitivity to signal.
Lens
• The acoustic lens is used to help focus the beam to improve resolution.
• In the same way that light diffracts off the lens in a magnifying glass or a pair of
eyeglasses, this helps aim the ultrasound beam at a particular distance depending on the
application.
• An unfocused ultrasound beam will have a broader beam geometry that will have a lower
resolution.
• Thus, an acoustic lens is very beneficial in order to provide a sharper image. It will also
reduce blurring from side lobe interference produced by the piezoelectric material.
Matching layer
• To provide acoustic coupling between the crystal and patient
• This layer helps to overcome the acoustic mismatch between the element and the tissue.
• Z_matching = (Z_element × Z_tissue)^(1/2)
• T_matching (matching layer thickness) = λ/4
• Example- aluminium powder in araldite
• The impedance matching layer is a crucial part of the transducer.
• This layer usually contains oil and helps maximize the amount of wave energy
transmitted into the skin and onward.
• The basic idea of this layer is to help the area of the transducer touching the skin better
mimic properties of the tissue.
• Ultrasound beams can more effectively travel from tissue to tissue of similar density.
• A coupling gel is also applied to the surface where the transducer will be imaging in
order to reduce interference from air bubbles
Multiple-Element Transducers (transducer array)
• Single-element transducers, as described earlier, are not often used in modern scanning
equipment, although the other basic aspects of design still apply to multiple-element
systems.
• More than one element may be required (in the simplest case) to permit the use of
continuous waves (as in a Doppler system), where separate transmitting and receiving
elements are required.
• In pulse-echo imaging, multiple element arrays may be used for electronic (and rapidly
changing) beam focusing, translation and steering.
• The general principles of transmit beam forming for focusing, scanning and steering
are illustrated schematically, and in two dimensions only, in Figure
• In order to focus (Figure 6.25c) or steer (Figure 6.25b) the sound beam, one must be
able to excite each element via a variable delay. Systems use this technology to create
a sector-shaped image by ‘phase steering’ the sound beam.


16. Explain how the blood velocity measurements can be done using
ultrasound?

OR Discuss the Doppler Effect in ultrasound imaging

• The Doppler effect is familiar as, for example, the higher pitch of an ambulance siren
as it approaches the observer than when it has passed.


• Similarly, blood flow, either toward or away from the transducer, alters the frequency
of the backscattered echoes, as shown in Figure 3.21.
• Because blood contains a high proportion of red blood cells (RBC), which have a
diameter of 7-10 µm, the interaction between ultrasound and blood is a scattering
process.
• The wavelength of the ultrasound is much greater than the dimensions of the scatterer
and therefore the wave is scattered in all directions.
• This means that the backscattered, Doppler-shifted signals have low signal intensities.
• The signal intensity is proportional to the fourth power of the ultrasound frequency, and
so higher operating frequencies are often used for blood velocity measurements.


• The Doppler shift can be increased by using higher ultrasound frequencies, but in this
case the maximum depth at which vessels can be measured decreases due to increased
attenuation of the beam at the higher operating frequencies.
• Equation (3.43) also shows that an accurate measurement of blood velocity can only be
achieved if the angle is known.
• This angle is usually estimated from simultaneously acquired B-mode scans using
"duplex imaging"
• Doppler measurements can be performed either in Continuous Wave or pulsed mode,
depending upon the particular application.
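The angle dependence referred to above (the text's Equation 3.43) is the standard Doppler relation f_d = 2·f₀·v·cosθ/c; the sketch below evaluates it for illustrative, assumed values of transmit frequency, blood velocity and insonation angle:

import math

# Sketch: Doppler shift f_d = 2 * f0 * v * cos(theta) / c for backscatter
# from moving blood. Frequency, velocity and angle values are assumed.
c = 1540.0  # m/s, speed of sound in tissue
f0 = 5e6    # Hz, transmitted frequency (assumed)
v = 0.5     # m/s, blood velocity (assumed)

for theta_deg in [0, 30, 60]:
    fd = 2 * f0 * v * math.cos(math.radians(theta_deg)) / c
    print(f"theta = {theta_deg:>2} deg -> Doppler shift ~ {fd:.0f} Hz")

# The shift falls in the audible kHz range and vanishes as theta -> 90 deg,
# which is why the beam-to-vessel angle must be estimated (duplex imaging).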
Continuous Wave Doppler Measurements
• CW Doppler measurements are used when there is no need to localize exactly the
source of the Doppler shifts.
• A continuous wave of ultrasound is transmitted by one transducer and the backscattered
signal is detected by a second one: usually both transducers are housed in the same
physical structure.
• The transducers are fabricated with only a small degree of mechanical damping in order
to increase the intensity of the signal transmitted at the fundamental frequency
• The region of overlap of the sensitive regions of the two transducers defines the area in
which blood flow is detected.


• This area is often quite large, and problems in interpretation can occur when there is
more than one blood vessel within this region.
• The measured blood velocity is the average value over the entire sensitive region. The
advantages of CW Doppler over pulsed Doppler methods, in which exact localization
is possible, are that the method is neither limited to a maximum depth nor to a maximum
measurable velocity

Pulsed-Mode Doppler Measurements


• In pulsed-mode Doppler systems, only one transducer is used, which transmits pulses
and receives backscattered signals a number of times in order to estimate the blood
velocity.
• The major advantage of pulsed-mode over CW Doppler is the ability to measure
Doppler shifts in a specific region of interest at a defined depth within the body.
• This volume can be chosen using the following variables: (1) the transducer diameter
and focusing scheme, which define the cross section of the ultrasound beam, (2) the
time delay after pulse transmission before acquisition of the backscattered signal is
started (defining the minimum depth), and (3) the time for which the signal is acquired
(defining the maximum depth).
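As a simple illustration of point (2) above (numbers assumed for illustration): the depth d of the sample volume is set by the delay t between pulse transmission and the start of signal acquisition through d = c·t/2, so with c ≈ 1540 m/s a delay of 65 µs places the gate at a depth of about 1540 × 65×10⁻⁶ / 2 ≈ 5 cm.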

17. Applications of Doppler ultrasound imaging?

• Finding clots
• Check blood flow in your veins, arteries, and heart
• Look for narrowed or blocked arteries
• See how blood flows after treatment
• Look for bulging in an artery which is called an aneurysm


When it is performed on the abdomen, it can help find:

• Blood flow problems with your liver, kidneys, pancreas, or spleen


• Abdominal aortic aneurysm

18. What is Colour flow mapping/ Color Doppler / color flow Doppler /
duplex ultrasonography?

• The term "duplex" refers to the fact that two modes of ultrasound are used, Doppler and
B-mode. The B-mode transducer (like a microphone) obtains an image of the vessel
being studied.
• The Doppler probe within the transducer evaluates the velocity and direction of blood
flow in the vessel.
• Color Doppler or color flow Doppler is the presentation of the velocity by color scale.
• Color Doppler images are generally combined with grayscale (B-mode) images to
display duplex ultrasonography images, allowing for simultaneous visualization of
the anatomy of the area.
• The mean velocity, its sign (positive or negative), and its variance are represented by
the hue, the saturation, and the luminance, respectively, of the color plot.
• Efficient computation of the mean and the variance values is important so that the frame
rate can be as high as possible.
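A minimal sketch of how such a mean-velocity estimate is commonly obtained from the demodulated (complex) echo ensemble at each colour-flow pixel, using a lag-one autocorrelation (Kasai-type) estimator, is given below; the function and variable names are illustrative assumptions and are not taken from the source text.

import numpy as np

def colour_flow_estimates(iq, prf, f0, c=1540.0):
    # iq : 1-D complex (demodulated) ensemble for one pixel, one sample per transmitted pulse
    r0 = np.mean(np.abs(iq) ** 2)                    # zero-lag autocorrelation (signal power)
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))          # lag-one autocorrelation
    f_mean = np.angle(r1) * prf / (2 * np.pi)        # mean Doppler frequency (Hz)
    v_mean = f_mean * c / (2 * f0)                   # Doppler equation, cos(theta) = 1 assumed
    var = (prf / (2 * np.pi)) ** 2 * 2 * (1 - np.abs(r1) / (r0 + 1e-12))  # spectral variance (Hz^2)
    return v_mean, var

Because only one complex multiplication per pulse pair is needed, this kind of estimator can keep up with the high frame rates mentioned above.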


• This is particularly useful in cardiovascular studies (sonography of the vascular system
and heart) and essential in many areas, such as determining reverse blood flow in the
liver vasculature in portal hypertension.

19. Discuss the Interaction of ultrasound with tissue?

When an ultrasound wave reaches an interface between two materials, three main interactions can occur:
• reflection,
• refraction,
• attenuation (scattering and absorption).

Reflection

• Earlier, it was explained that the change in acoustic impedance between tissues at an
interface determines how much of the ultrasound wave is reflected.
• This could be the surface transitioning from fat to muscle tissue, or it could be the surface
of a cyst or mass within soft tissue.
• The angle of the reflected beam is equal to the angle of the incident beam, assuming the
surface is smooth. At perpendicular (normal) incidence, the beam is reflected directly
back towards the transducer.
Refraction

• Refraction is governed by Snell's law and describes the change in direction of the
transmitted beam when sound strikes the boundary between two tissues at an oblique angle.
• Refraction takes place at an interface because of the different velocities of the acoustic
waves within the two materials.

• The velocity of sound in each material is determined by the material properties (elastic
modulus and density) for that material.

If the wave propagates faster in the second medium, the transmitted beam bends away from
the normal to the interface; if it propagates more slowly in the second medium, the beam
bends towards the normal.

Attenuation

• When sound travels through a medium, its intensity diminishes with distance.
• This weakening results from scattering and absorption.
• Scattering is the reflection of the sound in directions other than its original direction of
propagation.
• Absorption is the conversion of the sound energy to other forms of energy.
• The combined effect of scattering and absorption is called attenuation.
• Ultrasonic attenuation is the decay rate of the wave as it propagates through the material.
• All media attenuate ultrasound, so that the intensity of a plane wave propagating in the
x direction decreases exponentially with distance as

I(x) = I0 exp(−μx)

where μ is the intensity attenuation coefficient.


• The attenuation coefficient has contributions from absorption and scattering; thus,

μ = μa + μs

where
• μa is the intensity absorption coefficient, and
• μs is the intensity scattering coefficient.
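As a rough worked example (typical values assumed for illustration): in decibel terms the loss over a distance x is 10·log10(I0/I) = 4.34·μx, and soft-tissue attenuation is commonly quoted as roughly 0.5–1 dB cm⁻¹ MHz⁻¹. A 5 MHz beam traversing 4 cm of tissue and returning (8 cm total path) at 0.7 dB cm⁻¹ MHz⁻¹ therefore loses about 0.7 × 5 × 8 ≈ 28 dB, which is one reason lower frequencies are preferred for imaging deep structures despite their poorer resolution.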


Absorption

• Absorption results in the conversion of the wave energy to heat, and is responsible for
the temperature rise made use of in physiotherapy, ultrasound-induced hyperthermia
and high-intensity focused ultrasound (HIFU) therapy.
• There are many mechanisms by which heat conversion may occur, although they are often
discussed in terms of three classes.
• Classical mechanisms, which for tissues are small and involve mainly viscous (frictional)
losses, give rise to an f² frequency dependence.
• Molecular relaxation, in which the temperature or pressure fluctuations associated with
the wave cause reversible alterations in molecular configuration, are thought to be
predominantly responsible for absorption in tissue (except bone and lung), and, because
there are likely to be many such mechanisms simultaneously in action, produce a variable
frequency dependence close to, or slightly greater than, f.
• Finally, relative motion losses, in which the wave induces movement of small-scale
structural elements of tissue, are thought to be potentially important. A number of such
loss mechanisms might also produce a frequency dependence of absorption somewhere
between f and f².
• Generally, however, one can say that, increasing molecular complexity results in
increasing absorption. For tissues, a higher protein content (especially structural proteins
such as collagen), or a lower water content, is associated with greater absorption of sound.

Scattering

Different kinds of scattering phenomena occur at different levels of structure.


20. What is PRF in ultrasound imaging? What is Pulse Repetition


Frequency (PRF) generator? With help of diagram mention the
constraints on PRF.

• Pulse repetition frequency (PRF) is the number of ultrasound pulses emitted by the
transducer over a designated period of time.
• It is typically measured in pulses per second, i.e. hertz (Hz). In medical ultrasound the
typically used range of PRF is between 1 and 10 kHz.

• A number of artifacts are directly influenced by the pulse repetition frequency: increasing
it diminishes the aliasing artifact commonly encountered during color and spectral Doppler
imaging, while decreasing it facilitates, for example, the display of the useful twinkling
artifact occurring behind stones and calcifications.
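The two competing constraints on the PRF can be made concrete with a small sketch (all values are assumed for illustration): all echoes from the deepest target must return before the next pulse is transmitted, which sets an upper limit on the PRF, while alias-free Doppler measurement requires the Doppler shift to stay below PRF/2 (the Nyquist limit).

c = 1540.0          # speed of sound in soft tissue, m/s (assumed)
d_max = 0.10        # maximum imaging depth, m (assumed: 10 cm)
f0 = 3.0e6          # transmit frequency, Hz (assumed)

prf_max = c / (2 * d_max)             # echoes from d_max must return before the next pulse
f_doppler_max = prf_max / 2           # Nyquist limit on the measurable Doppler shift
v_max = f_doppler_max * c / (2 * f0)  # maximum unaliased axial velocity (cos(theta) = 1)

print(f"PRF_max = {prf_max:.0f} Hz")  # 7700 Hz
print(f"v_max   = {v_max:.2f} m/s")   # about 0.99 m/s

Increasing the imaging depth therefore lowers the usable PRF and, with it, the maximum velocity that can be measured without aliasing.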

21. State the principle that determines the shape of the ultrasound beam
and discuss on how the shape of the beam and wavelength are
related.

Beam shape
• The area through which the sound energy emitted from the ultrasound transducer
travels is known as the ultrasound beam.
• The beam is three-dimensional and is symmetrical around its central axis. It can be
subdivided into two regions: a near field (or Fresnel zone) which is cylindrical in
shape, and a far field (or Fraunhofer zone) where it diverges and becomes cone-
shaped.


• The actual shape of the beam depends on a number of factors, including the diameter
of the crystal, the frequency and wavelength, the design of the transducer, and the
amount of focusing applied to the beam.
• Increasing the frequency will result in a longer near field and less far field divergence.
A narrow crystal diameter will result in a narrower beam in the near field, but the
disadvantage is that the near field is shorter and there is more divergence in the far
field.
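The relationship can be made concrete with the usual expression for the near-field (Fresnel zone) length of an unfocused circular element, N ≈ D²/(4λ) = D²f/(4c), where D is the element diameter (a standard result quoted here for illustration). For example, with D = 10 mm at 3.5 MHz in soft tissue (c ≈ 1540 m/s, so λ ≈ 0.44 mm), N ≈ 10²/(4 × 0.44) ≈ 57 mm; doubling the frequency (halving λ) roughly doubles the near-field length and, since the far-field divergence angle scales as λ/D, also reduces the divergence, consistent with the statements above.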


2. Computed Tomography

1. Discuss the Principles of Sectional Imaging in computed tomography?

• CT enables the acquisition of two-dimensional X-ray images of thin "slices" through


the body.
• Multiple images from adjacent slices can be obtained in order to reconstruct a three-
dimensional volume.
• CT images show reasonable contrast between soft tissues such as kidney, liver, and
muscle because the X-rays transmitted through each organ are no longer superimposed
on one another at the detector, as is the case in planar X-ray radiography.
• The basic principle behind CT is that the two-dimensional internal structure of an
object can be reconstructed from a series of one-dimensional "projections" of the
object acquired at different angles.
• In order to obtain an image from a thin slice of tissue, the X-ray beam is collimated to
give a thin beam.
• The detectors, which are situated opposite the X-ray source, record the total number of
X-rays that are transmitted through the patient, producing a one-dimensional projection.
• The signal intensities in this projection are dictated by the two-dimensional distribution
of tissue attenuation coefficients within the slice.
• The X-ray source and the detectors are then rotated by a certain angle and the
measurements repeated.
• This process continues until sufficient data have been collected to reconstruct an image
with high spatial resolution. Reconstruction of the image involves a process termed
backprojection

• Motion of the X-ray source and the detector occurred in two ways, linear and rotational.
• In Figure 1.24, M linear steps were taken with the intensity of the transmitted X-rays
being detected at each step.


• This produced a single projection with M data points. Then both the source and detector
were rotated by (180/N) degrees, where N is the number of rotations in the complete
scan, and a further M translational lines were acquired at this angle.
• The total data matrix acquired was therefore M x N points.
• Image reconstruction takes place in parallel with data acquisition in order to minimize
the delay between the end of data acquisition and the display of the images on the
operator's console.
• As the signals corresponding to one projection are being acquired, those from the
previous projection are being amplified and digitized, and those from the projection
previous to that are being filtered and processed.
• In order to illustrate the issues involved in image reconstruction, consider the raw
projection data that would be acquired from a simple object such as an ellipse with a
uniform attenuation coefficient, as shown in Figure 1.27.
• The reconstruction goal is illustrated on the right of Figure 1.27 for a simple 2 x 2
matrix of tissue attenuation coefficients: given a series of measured intensities I1, I2, I3
and I4, the attenuation coefficients µ1, µ2, µ3 and µ4 must be calculated.
• For each projection, the signal intensity recorded by each detector depends upon the
attenuation coefficient and the thickness of each tissue that lies between the X-ray
source and that particular detector.
• For the simple case shown on the right of Figure 1.27, two projections are acquired,
each consisting of two data points: projection 1 (I1, I2) and projection 2 (I3, I4). If the
image to be reconstructed is also a two-by-two matrix, then the intensities of the
projections can be expressed in terms of the linear attenuation coefficients by
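(The expressions themselves are missing from the scanned notes; assuming, for illustration, that projection 1 passes vertically through the pixel columns {µ1, µ3} and {µ2, µ4} and projection 2 horizontally through the rows {µ1, µ2} and {µ3, µ4}, with I0 the incident intensity, they take the form)

I1 = I0 exp[−(µ1 + µ3)x]    I2 = I0 exp[−(µ2 + µ4)x]
I3 = I0 exp[−(µ1 + µ2)x]    I4 = I0 exp[−(µ3 + µ4)x]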


• where x is the dimension of each pixel. It might seem that this problem could be solved
by matrix inversion or similar techniques.
• These approaches are not feasible, however, first due to the presence of noise in the
projections (high noise levels can cause direct inversion techniques to become
unstable), and second because of the large amount of data collected.
• If the data matrix size is, for example, 1024 x 1024, then matrix inversion techniques
become very slow. Image reconstruction, in practice, is carried out using either
backprojection algorithms or iterative techniques.

2. Discuss different Source- Detector Geometries in CT?


OR
Different generations of CT?

First generation scanners


• In a first-generation scanner, a finely collimated source defined a pencil beam of X-
rays, which was then measured by a well-collimated detector.
• This source–detector combination measured parallel projections, one sample at a
time, by stepping linearly across the patient.
• After each projection, the gantry rotated to a new position for the next projection (see
Figure 3.5).


• Since there was only one detector, calibration was easy and there was no problem
with having to balance multiple detectors; also, costs were minimised.
• The scatter rejection of this first-generation system was higher than that of any other
generation because of the 2D collimation at both source and detector.
• The system was slow, however, with typical acquisition times of 4 min per section,
even for relatively low-resolution images.

Second generation scanners


• Data gathering was speeded up considerably in the second generation.
• Here a single source illuminated an array of detectors with a narrow (∼10°) fan
beam of X-rays (see Figure 3.6).
• This assembly traversed the patient and measured N parallel projections
simultaneously (N being the number of detectors).
• The gantry angle was incremented by an angle equal to the fan angle between
consecutive traverses. These machines could complete the data gathering in about 20
s.


Third generation scanners


• The third generation of scanner geometry became available in 1975.
• In these systems, the fan beam was enlarged to cover the whole field of view (see
Figure 3.7), typically 50 cm.
• Consequently, the gantry needed only to rotate, which it could do without stopping,
and the data gathering could be done in less than 5s.
• It is relatively easy for a patient to remain still for this length of time.


Fourth generation scanners


• Fourth-generation systems became available a year or so after the third-generation
geometry was introduced.
• In this design, a stationary detector ring was used and only the source rotated along a
circular path contained within the detector ring (see Figure 3.8).
• Scan speeds were comparable to that of third-generation scanners.
• For the present-day CT scanner, manufacturers have almost exclusively adopted third
generation geometry. For over two decades, third- and fourth-generation systems
competed for technical supremacy in the clinic.

3. Discuss the scanner operation and components in computed


tomography?
OR
Discuss the block diagram of computed tomography?

• The typical modern CT scanner is based on third-generation geometry, and can be of


single-slice or, more commonly, multi-slice design.
• CT equipment can be divided into three major components: gantry, patient couch and
computer system.
• The gantry houses the imaging components. These are similar to those that would be
encountered in a conventional radiographic system: X-ray source, additional filters,
collimators, scatter removal device and detectors. However, tomography imposes
special requirements on these components


Gantry

• The gantry houses and provides support for the rotation motors, HV generator, X-ray
tube (one or two), detector array and preamplifiers (one or two), temperature control
system and the slip rings.
• Slip rings enable the X-ray tube (and detectors in a third-generation system) to rotate
continuously. The HV cables have been replaced with conductive tracks on the slip
rings that maintain continuous contact with the voltage supply via graphite or silver
brushes.
• Two slip-ring designs were initially used in commercial scanners: (1) low-voltage slip
rings, in which the HV transformer was mounted on the rotating part of the gantry with
the X-ray tube, and only low voltages were present on the stationary and moving parts
and (2) high-voltage slip rings, in which the HV was generated in the stationary part of
the gantry, thus reducing the inertia of the moving parts. The low-voltage design is now
universally adopted.
X-Ray Generation

• The basic components of the X-ray source, also referred to as the X-ray tube, used for
clinical diagnoses are shown in Figure 1.3.
• The production of X-rays involves accelerating a beam of electrons to strike the surface
of a metal target.
• The X-ray tube has two electrodes, a negatively charged cathode, which acts as the
electron source, and a positively charged anode, which contains the metal target.
• A potential difference of between 15 and 150 kV is applied between the cathode and
the anode; the exact value depends upon the particular application.
• This potential difference is in the form of a rectified alternating voltage, which is
characterized by its maximum value, the kilovolts peak (kVp).


• The maximum value of the voltage is also referred to as the accelerating voltage. The
cathode consists of a filament of tungsten wire coiled to form a spiral approximately 2 mm
in diameter and less than 1 cm in height.
• An electric current from a power source passes through the cathode, causing it to heat
up. When the cathode temperature reaches approximately 2200 °C, the thermal energy
absorbed by the tungsten atoms allows a small number of electrons to move away from
the metallic surface, a process termed thermionic emission.
• A dynamic equilibrium is set up, with electrons having sufficient energy to escape from
the surface of the cathode, but also being attracted back to the metal surface.

• The large positive voltage applied to the anode causes these free electrons created at
the cathode surface to accelerate toward the anode.
• The spatial distribution of these electrons striking the anode correlates directly with the
geometry of the X-ray beam that enters the patient.
• Since the spatial resolution of the image is determined by the effective spot size, shown
in Figure, the cathode is designed to produce a tight, uniform beam of electrons. In
order to achieve this, a negatively charged focusing cup is placed around the cathode to
reduce divergence of the electron beam.


• The larger the negative potential applied to the cup, the narrower is the electron beam.
If a sufficiently large potential (approximately 2 kV) is applied, then the flux of electrons
can be switched off completely.
• At the anode, X-rays are produced as the accelerated electrons penetrate a few tens of
micrometers into the metal target and lose their kinetic energy. This energy is converted
into X-rays.

Collimators

• The geometry of the X-ray beam emanating from the source is a divergent beam.
• Often, the dimensions of the beam when it reaches the patient are larger than the desired
FOV of the image.
• This has two undesirable effects, the first of which is that the patient dose is increased
unnecessarily. The second effect is that the number of Compton-scattered X-rays
contributing to the image is greater than if the extent of the beam had been matched to
the image FOV
• In order to restrict the dimensions of the beam, a collimator, also called a beam
restrictor, is placed between the X-ray source and the patient.
• The collimator consists of sheets of lead, which can be slid over one another to restrict
the beam in either one or two dimensions.


Antiscatter Grids

• Ideally, all of the X-rays reaching the detector would be primary radiation, with no
contribution from Compton-scattered X-rays.
• In this case, image contrast would be affected only by differences in attenuation from
photoelectric interactions in the various tissues.
• However, in practice, a large number of X-rays that have undergone Compton
scattering reach the detector.
• As mentioned previously, the contrast between tissues from Compton-scattered X-rays
is inherently low.
• In addition, secondary radiation contains no useful spatial information and is distributed
randomly over the film, thus reducing image contrast further.
• The effect of scattered radiation on the X-ray image is shown schematically in Figure
1.11.

• Collimators can be used to restrict the beam dimensions to the image FOV and therefore
decrease the number of scattered X-rays contributing to the image, but even with a
collimator in place secondary radiation can represent between 50% and 90% of the
X-rays reaching the detector.
• Additional measures, therefore, are necessary to reduce the contribution of Compton-
scattered X-rays.
• One method is to place an antiscatter grid between the patient and the X-ray detector.
• This grid consists of strips of lead foil interspersed with aluminum as a support, with
the strips oriented parallel to the direction of the primary radiation, as shown in Figure
1.12.


• The properties of the grid are defined in terms of the grid ratio and the strip line density,
where h, t and d are the length and the thickness of the lead strips and the distance
between the centers of the strips, respectively.
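The defining expressions are missing from the scanned notes; the commonly used definitions (assumed here, with the exact form depending on whether d is taken as the gap between strips or the centre-to-centre spacing) are grid ratio ≈ h/d and strip line density ≈ 1/(d + t), so that taller, more closely spaced strips reject obliquely incident (scattered) X-rays more effectively, at the cost of absorbing a larger fraction of the primary beam.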

Detectors for Computed Tomography

• The most common detectors for CT scanners are xenon-filled ionization chambers,
shown in Figure 1.26.
• Because xenon has a high atomic number (54), there is a high probability of
photoelectric interactions between the gas and the incoming X-rays.
• The xenon is kept under a pressure of approximately 20 atm to increase further the number of
interactions between the X-rays and the gas molecules.
• An array of interlinked ionization chambers, typically 768 in number (although some
commercial scanners have up to 1000), is filled with gas, with metal electrodes
separating the individual chambers.
• X-rays transmitted through the body ionize the gas in the detector, producing electron-
ion pairs.
• These are attracted to the electrodes by an applied voltage difference between the
electrodes, and produce a current which is proportional to the number of incident X-rays.
• Each detector electrode is connected to a separate amplifier, and the outputs of the
amplifiers are multiplexed through a switch to a single AD converter.
• The digitized signals are logarithmically amplified and stored for subsequent image
reconstruction. In this design of the ionization chamber, the metal electrode plates also
perform the role of an antiscatter grid, with the plates being angled to align with the


focal spot of the X-ray tube. The plates are typically 10 cm in length, with a gap of 1
mm between adjacent plates.

Other detectors that are used are solid-state (scintillation) detectors, in which a ceramic scintillator coupled to a photodiode converts the transmitted X-rays into light and then into an electrical signal.


Patient Couch

• The patient couch must be able not only to support the weight of the patient (which may
exceed 150kg), but also translate the patient into the gantry aperture without flexing
whilst achieving a positional accuracy of the order of 1mm.
• In addition, it must be radiolucent, safe for the patient and easy to clean.
• Most modern couches are made of a carbon fibre composite which can provide the
required rigidity and radiolucency.

Computer System

• During the course of a CT scan, a multitude of tasks take place: interfacing with the
operator; controlling gantry and couch movement; acquiring, correcting, reconstructing and
storing the data; displaying and archiving images; generating hard copies; and pushing
images to PACS and networked workstations.
• For a clinical scanner to work efficiently, as many as possible of these tasks must take
place concurrently.
• This was originally achieved with a multiprocessor system and parallel architecture,
but more recently with a multitasking workstation.

Image Reconstruction and Display

• The CT image is reconstructed using 2D filtered backprojection (or, in multi-slice
systems, cone-beam reconstruction).
• The user can specify the reconstruction filter from a selection offered by the
manufacturer

4. Explain the 2D Image Reconstruction techniques used in computed


tomography?

Subquestions:

1) What is backprojection?
2) What is filtered backprojection?
3) Explain iterative algorithm with an example?
4) Discuss the radon transform?
5) Discuss the Fourier slice theorem?
6) Explain the Fourier method of image reconstruction


 For each projection, the signal intensity recorded by each detector depends upon the
attenuation coefficient and the thickness of each tissue that lies between the X-ray
source and that particular detector.
 For the simple case shown on the right of the figure, two projections are acquired, each
consisting of two data points: projection 1 and projection 2. The intensities can be written
in terms of the attenuation coefficients as in question 1 above, where x is the dimension
of each pixel.
 It might seem that this problem could be solved by matrix inversion or similar techniques.
These approaches are not feasible, however, first because of the presence of noise in the
projections (high noise levels can cause direct inversion techniques to become unstable),
and second because of the large amount of data collected. If the data matrix size is, for
example, 1024 x 1024, then matrix inversion techniques become very slow.
 Image reconstruction, in practice, is carried out using either backprojection algorithms
or iterative techniques.


Image reconstruction methods:

1) Backprojection method:

Backprojection is carried out with the help of the Radon transform.


Radon transform:

 The mathematical basis for the reconstruction of an image from a series of projections is
the Radon transform. For an arbitrary function f(x, y), its Radon transform R is defined
as the integral of f(x, y) along a line L (the standard expressions are given after this list).
 Each X-ray projection p(r, Ф) can therefore be expressed in terms of the Radon
transform of the object being studied.
 Here p(r, Ф) refers to the projection data acquired as a function of r, the distance
along the projection, and Ф, the rotation angle of the X-ray source and detector.
 Reconstruction of the image therefore requires computation of the inverse Radon
transform of the acquired projection data.
 The most common methods of implementing the inverse Radon transform use
backprojection or filtered backprojection.
 A number of one-dimensional projections, P1, P2, ..., Pn, are acquired with the
detector oriented at different angles with respect to the object, as shown in the figure.
 In the following analyses, the object is represented as a function f(x, y), in which the
spatially dependent values of f correspond to attenuation coefficients in X-ray CT.
 The coordinate system in the measurement frame is represented by (r, s), where r
is the direction parallel to the detector and s is the direction along the ray, at 90° to the
r direction.
 The angle between the x and r axes is denoted as Ф, and the two coordinate systems are
related by simple trigonometry.
 The measured projection is denoted by p(r, Ф).
 Projections are acquired at different values of Ф until coverage over a range of 180°
or 360°, depending upon the particular application, has been reached.
 A number of schemes exist for the reconstruction of the image.
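In conventional notation (the equations themselves are not reproduced in the scan, so the standard forms are given here), the Radon transform of f(x, y) along the line L defined by the angle Ф and detector position r is

R f(r, Ф) = ∫L f(x, y) ds = ∫∫ f(x, y) δ(x cos Ф + y sin Ф − r) dx dy

so that each measured projection is p(r, Ф) = R f(r, Ф), and the rotated (measurement) coordinates are related to (x, y) by

r = x cos Ф + y sin Ф,    s = −x sin Ф + y cos Ф.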


• Simple backprojection of the acquired projections corresponds to direct implementation


of the inverse Radon transform.
• Backprojection assigns an equal weighting to the pixels contributing to each point in a
particular projection.
• This process is repeated for all of the projections, and the pixel intensities are summed
to give the reconstructed image f(x, y).
• Mathematically, f(x, y) can be represented as
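In conventional notation (the expression is missing from the scan), simple backprojection can be written as

f(x, y) ≈ ∫0^π p(x cos Ф + y sin Ф, Ф) dФ ≈ Σj p(x cos Фj + y sin Фj, Фj) ΔФ

i.e. each projection value is 'smeared' back across the image along the ray over which it was measured, and the contributions from all projection angles are summed.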

• Figure shows the typical artifacts associated with simple backprojection.


2) Iterative reconstruction


Steps:

 These algorithms start with an initial estimate of the two dimensional matrix of
attenuation coefficients
 By comparing the projections predicted from this initial estimate with those that are
actually acquired, changes are made to the estimated matrix.
 This process is repeated for each projection, and then a number of times for the whole
dataset until the residual error between the measured data and those from the estimated
matrix falls below a predesignated value.

Example:

• Figure shows two four-point projections from a two-dimensional matrix of tissue
attenuation coefficients, µ1–µ16.
• In generating an initial estimate, the components of the horizontal projection, 0.2I0,
0.4I0, 0.3I0 and 0.1I0, are considered first (this choice is arbitrary).
• In the absence of prior knowledge, an initial estimate is formed by assuming that each
pixel has the same X-ray attenuation coefficient.
• If the pixel dimensions are assumed to be square, with height = length = 1 for simplicity


• The value of the MSE per pixel after the first iteration is approximately 0.0325I0².
• The next iteration forces the estimated data to agree with the measured vertical
projection.
• Consider the component that passes through pixels µ1, µ5, µ9 and µ13.
• The measured data value is 0.4I0, but the value calculated from the first iteration is
0.22I0, so the values of the attenuation coefficients have been overestimated and must be
reduced.
• The exact amount by which the attenuation coefficients µ1, µ5, µ9 and µ13 should
be reduced is unknown, and again the simple assumption is made that each value
should be reduced by an equal amount.
• Applying this procedure to all four components of the vertical projection gives
the estimated matrix shown on the right of the figure.


• Now, of course, the estimated projection data no longer agree with the measured data of
the horizontal projection, but the MSE per pixel has been reduced to 0.005I0².
• In a practical realization of a full ray-by-ray iterative reconstruction, many more
projections would be acquired and processed.
• After a full iteration through all of the projections, the process can be repeated a number
of times until the desired accuracy is reached or further iterations produce no significant
improvement in the value of the MSE.
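A minimal code sketch of this ray-by-ray procedure (a simplified additive, ART-style correction; the data layout and names are assumptions for illustration and do not come from the source figure) is:

import numpy as np

def art_iteration(mu, rays, measured, pixel_size=1.0, n_sweeps=5):
    # mu       : 2-D array, current estimate of the attenuation coefficients
    # rays     : list of (rows, cols) index arrays, the pixels crossed by each ray
    # measured : measured line integrals for each ray, e.g. ln(I0/I) values
    for _ in range(n_sweeps):                                # repeat over the whole data set
        for k, (rows, cols) in enumerate(rays):
            calc = mu[rows, cols].sum() * pixel_size         # predicted projection for this ray
            err = measured[k] - calc                         # residual for this ray
            mu[rows, cols] += err / (len(rows) * pixel_size)  # spread the correction equally
    return mu

For the 4 x 4 example above, rays would contain the four horizontal and four vertical rays and measured the eight projection values; after a few sweeps the mean squared error between predicted and measured projections falls in the way described in the text.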

3) Analytical methods

3.A) Filtered Backprojection / Convolution back projection method


• The widely implemented method of filtered backprojection consists of applying a
filter to each projection before backprojection, in order to reduce the artifacts associated
with simple backprojection.
• One of the most common implementations uses the Ramachandran-Lakshminarayanan
(Ram-Lak) filter.
• If the filter is applied in the spatial domain, then the filtered projection p'(r, Ф) can be
represented as

• Convolution in the spatial domain is equivalent to multiplication in the spatial


frequency domain, and multiplication can be performed much faster computationally
than convolution.
• Each projection p(r, Ф) is Fourier transformed along the r dimension to give P(k,Ф),
and then P(k,Ф) is multiplied by H(k), the Fourier transform of h(r), to give P‘(k, Ф)
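Written out, the spatial-domain filtering step referred to above is a convolution, p'(r, Ф) = p(r, Ф) ⊛ h(r), with h(r) the filter kernel. A minimal parallel-beam filtered backprojection sketch in Python/NumPy is given below; it uses a simple Ram-Lak (ramp) filter and nearest-neighbour interpolation, and is an illustrative assumption of one possible implementation rather than the algorithm of any particular scanner.

import numpy as np

def ram_lak(n, d=1.0):
    # Ram-Lak (ramp) filter sampled in the spatial-frequency domain
    return np.abs(np.fft.fftfreq(n, d=d))

def filtered_backprojection(sinogram, angles_deg, d=1.0):
    # sinogram   : 2-D array, one row per projection p(r, phi), parallel-beam geometry
    # angles_deg : projection angles in degrees; returns the reconstructed image
    n_angles, n_det = sinogram.shape
    H = ram_lak(n_det, d)

    # 1. filter each projection: multiply its FFT by the ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * H, axis=1))

    # 2. backproject the filtered projections over the image grid
    centre = (n_det - 1) / 2.0
    x = np.arange(n_det) - centre
    X, Y = np.meshgrid(x, -x)                      # image coordinates
    img = np.zeros((n_det, n_det))
    for p, phi in zip(filtered, np.deg2rad(angles_deg)):
        r = X * np.cos(phi) + Y * np.sin(phi)      # detector coordinate of each pixel
        idx = np.clip(np.round(r + centre).astype(int), 0, n_det - 1)
        img += p[idx]                              # nearest-neighbour interpolation
    return img * np.pi / n_angles                  # approximate amplitude normalisation

In practice the ramp filter is usually apodised (e.g. with a Hann or Shepp-Logan window) to limit the amplification of noise at high spatial frequencies.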


3.B) Fourier Filtering


The projection-slice theorem, central slice theorem or Fourier slice theorem in two
dimensions states that the results of the following two calculations are equal:

• Take a two-dimensional function f(r), project (e.g. using the Radon transform) it
onto a (one-dimensional) line, and do a Fourier transform of that projection.
• Take that same function, but do a two-dimensional Fourier transform first, and
then slice it through its origin, which is parallel to the projection line.
In operator terms, if

• F1 and F2 are the 1- and 2-dimensional Fourier transform operators mentioned above,
• P1 is the projection operator (which projects a 2-D function onto a 1-D line),
• S1 is a slice operator (which extracts a 1-D central slice from a function),

then F1 P1 = S1 F2, i.e. the one-dimensional Fourier transform of a projection equals the central slice, through the origin, of the two-dimensional Fourier transform of the object.
This idea can be extended to higher dimensions.
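A short numerical check of the theorem (the array size and the choice of projection axis are arbitrary assumptions for illustration):

import numpy as np

f = np.random.rand(128, 128)        # arbitrary 2-D "object"

# Calculation 1: project onto the x-axis (sum over y), then 1-D Fourier transform
lhs = np.fft.fft(f.sum(axis=0))

# Calculation 2: 2-D Fourier transform, then take the central slice along the same axis
rhs = np.fft.fft2(f)[0, :]          # the ky = 0 row is the central slice

print(np.allclose(lhs, rhs))        # True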


5. Discuss the Fan-Beam Reconstruction technique?

• The backprojection reconstruction methods described above assume that each line integral
corresponds to a parallel X-ray path from the source to the detector.
• In third- and fourth-generation scanners, the geometry of the X-rays is a fan beam.
• Since the X-ray beams are no longer parallel to one another, image reconstruction
requires modification of the backprojection algorithms to avoid introducing image
artifacts.
• The simplest modification is to "rebin" the acquired data into a series of parallel
projections, which can then be processed as described previously. In the figure, the
X-ray beam from source position S1 to detector D3 is clearly not parallel to the beam
from S1 to detector D1.
• However, when the source is rotated to position S2, the X-ray beam from S2 to D3 is
parallel to that from S1 to D1.
• By re-sorting the data into a series of composite datasets consisting of parallel X-ray
paths, for example S1D1, S2D3, etc., one can reconstruct the image using standard
backprojection algorithms.


6. Discuss the operational principles of spiral CT/Helical CT?


Conventional CT Systems:
• Tube Rotates Around Stationary Patient (Table is Incremented Between Acquisitions)
• All Views in a Slice are at Same Table Position
• Power to X-Ray Tube via Cord
• Scan CW and CCW to Wind/Unwind Cord
• Interscan Delays: 3.5 Seconds Between Slices
Differences of spiral CT from Conventional:
• Continuous Tube Rotation - No Interscan Delays (Power to X-ray Tube via Slip Ring)
• Continuous Table Motion as Tube Rotates
• Each View is at a Different Table Position
• Images are Formed by Synthesizing Projection Data via Interpolation


7. Discuss the operational principles of multislice CT?

• The multislice CT (MSCT), or multi-detector row CT (MDCT), is a CT system
equipped with multiple rows of CT detectors to create images of multiple sections.
• This CT system has different characteristics from conventional CT systems, which have
only one row of CT detectors.
• The introduction of this advanced detector system, and its combination with helical
scanning, has markedly improved the performance of CT in terms of imaging range,
examination time, and image resolution.
• The most important aspect of MSCT is the detectors (multislice detectors).
• Multislice CT systems with 2 or 4 rows of detectors are widely used; systems with
8 and 16 rows of detectors are now being released.
• The most basic 4-row MSCT detectors are divided into 3 types (Fig. 1), although they
are structurally divided into 2 types. One is known as the Matrix/Fixed type in which
small detectors (cells) are arranged at equal intervals in grid formation. The other is the
Adaptive array type, in which detector units with increasing widths toward both ends
are arranged symmetrically. Multiple slice widths can be selected with both types, by
combining the number of rows of detectors used.

• MSCT is also different from conventional single slice CT in terms of the image
(reconstruction algorithm) calculation method it employs.
• The helical scanning with 4-row detectors provides data which are several times as
large as those of conventional single slice CT and have higher density.
• These data are used to calculate a high-resolution section image of the target site using
a technique called Z-axis multiple-point weighted interpolation.
• To obtain high image quality with multislice helical scanning, it is important to
determine the distance moved by the patient table during one rotation of the scanner.


• A concept known as pitch (helical pitch) is usually used as an index of the distance of
table movement. The helical pitch is determined by dividing the distance of the table
movement per rotation of the X-ray tube by the detector width equivalent to 1 slice.
• In conventional helical scanning, the table moves for a distance equal to the slice width
during 1 rotation of the X-ray tube (that is, pitch 1).
• In contrast, the pitch can be adjusted up to 6 in MSCT: the table can be moved by a half
channel per rotation (pitch 3.5 or 4.5) or by 1 channel per rotation (pitch 3).
• The pitch is closely correlated with image quality and exposure dose.
• In general, image quality is improved and exposure dose is increased as the pitch is
reduced, while image quality is worsened and the exposure dose is reduced as the pitch
is increased.
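As a small worked example (numbers assumed for illustration, using the per-slice definition of pitch given above): with four active detector rows of 2.5 mm each, a table feed of 7.5 mm per tube rotation gives a pitch of 7.5 / 2.5 = 3, while a feed of 15 mm per rotation gives a pitch of 15 / 2.5 = 6; the higher pitch halves the scan time for the same coverage, at the cost of image quality and interpolation accuracy as noted above. (Some manufacturers instead quote pitch relative to the total collimated beam width, which would give 0.75 and 1.5 for the same two settings.)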

8. Summarize the X-ray tubes used in CT. Explain the types of X-ray tubes that have been
utilized for computed tomography.


9. Explain the data acquisition system used in X-ray CT?


10. What are the basic features of X-Ray Computed Tomography?

• X-ray imaging is a transmission-based technique in which X-rays from a source pass


through the patient and are detected either by film or an ionization chamber on the
opposite side of the body
• Planar X-ray radiography of overlapping layers of soft tissue or complex bone
structures can often be difficult to interpret, even for a skilled radiologist. In these cases,
X-ray computed tomography (CT) is used.
• The basic principles of CT are shown in Figure 1.2.

• The X-ray source is tightly collimated to interrogate a thin "slice" through the patient.
• The source and detectors rotate together around the patient, producing a series of one-
dimensional projections at a number of different angles
• These data are reconstructed to give a two-dimensional image, as shown on the right of
Figure 1.2. CT images have a very high spatial resolution (approx. 1 mm) and provide
reasonable contrast between soft tissues.
• In addition to anatomical imaging, CT is the imaging method that can produce the
highest resolution angiographic images, that is, images that show blood flow in vessels.


• Recent developments in spiral and multislice CT have enabled the acquisition of full
three-dimensional images in a single patient breath-hold.
• The major disadvantage of both X-ray and CT imaging is the fact that the technique
uses ionizing radiation. Because ionizing radiation can cause tissue damage, there is a
limit on the total radiation dose per year to which a patient can be subjected. Radiation
dose is of particular concern in pediatric and obstetric radiology.

11. Discuss different acquisition geometries in CT?


3 & 4. Magnetic Resonance Imaging

Discuss the basic principles of MRI

• Magnetic resonance imaging (MRI) is a nonionizing technique with full three-


dimensional capabilities, excellent soft-tissue contrast, and high spatial resolution (1
mm).

• In general, the temporal resolution is much slower than for ultrasound or computed
tomography, with scans typically lasting between 3 and 10 min, and MRI is therefore
much more susceptible to patient motion.

• The cost of MRI scanners is relatively high, with the price of a typical clinical 1.5-T
whole-body imager on the order of $1.5 million.

• The major uses of MRI are in the areas of assessing brain disease, spinal disorders,
angiography, cardiac function, and musculoskeletal damage.

• The MRI signal arises from protons in the body, primarily water, but also lipid.
• The patient is placed inside a strong magnet, which produces a static magnetic field
typically more than 10,000 times stronger than the earth's magnetic field.

• Each proton, being a charged particle with angular momentum, can be considered as
acting as a small magnet.

• The protons align in two configurations, with their internal magnetic fields aligned
either parallel or antiparallel to the direction of the large static magnetic field, with
slightly more found in the parallel state.

• The protons precess around the direction of the static magnetic field, in an analogous
way to a spinning gyroscope under the influence of gravity.

• The frequency of precession is proportional to the strength of the static magnetic field.
• Application of a weak radiofrequency (RF) field causes the protons to precess
coherently.

• The sum of all of the protons precessing is detected as an induced voltage in a tuned
detector coil.

• Spatial information is encoded into the image using magnetic field gradients.

• These impose a linear variation in all three dimensions in the magnetic field present
within the patient


• In MRI the patient is placed inside a very strong magnet for scanning.

• A typical value of the magnetic field, denoted Bo, is 1.5 T (15,000 G), which can be
compared to the earth's magnetic field of approximately 50 µT (0.5 G).

• The MRI signal arises from the interaction between the magnetic field and hydrogen
nuclei, or protons, which are found primarily as water in tissue and also lipid.

• All nuclei with an odd atomic weight and/or an odd atomic number possess a
fundamental quantum mechanical property termed "spin."
• For MRI the most important nucleus is the hydrogen nucleus, or proton.


• Although not a rigorously accurate model, the property of spin can be viewed as a
proton spinning around an internal axis of rotation, giving it a certain value of angular
momentum P.
• Because the proton is a charged particle, this rotation gives the proton a magnetic
moment µ.
• This magnetic moment produces an associated magnetic field, which has a
configuration similar to that of a bar magnet, as shown in Figure.

• In the absence of an external magnetic field the orientation of the individual magnetic
moments is random.


• In order to obtain an MRI signal, transitions must be induced between the protons in
the parallel and the antiparallel energy levels.

• The energy required to do this is supplied by an oscillating electromagnetic field


• Because there is a specific energy gap ∆E in equation (4.9) between the two energy
levels, the electromagnetic field must be applied at a specific frequency, called the
resonance frequency.
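As a worked example (typical clinical field strengths assumed for illustration): the resonance (Larmor) frequency is proportional to the field strength, f = (γ/2π)·Bo, with γ/2π ≈ 42.58 MHz/T for protons, so at Bo = 1.5 T the RF field must be applied at approximately 42.58 × 1.5 ≈ 63.9 MHz, and at about 128 MHz for a 3 T magnet.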


• Before the rf pulse is switched on the net magnetisation, Mo, is at equilibrium, aligned
along the z-axis in the same direction as Bo

• When the rf pulse is switched on, the net magnetisation begins to move away from its
alignment with the Bo field and rotate around it.

• The net magnetisation is the result of the sum of many individual magnetic moments.

• So long as they rotate together (a condition known as coherence) they will produce a
net magnetisation that is rotating.

• The greater the amount of energy applied by the rf pulse, the greater the angle that the
net magnetisation makes with the Bo field (the z axis).

• Once the rf pulse has caused the net magnetisation to make an angle with the z-axis, it
can be split into two components.

• One component is parallel to the z-axis. This is known as the z-component of the
magnetisation, Mz, also known as the longitudinal component.

• The other component lies at right angles to the z axis within the plane of the x and y
axes and is known as the x-y component of the net magnetisation, Mxy, or the
transverse component.

• The transverse component rotates at the Larmor frequency within the xy plane and as
it rotates, it generates its own small, oscillating magnetic field which is detected as an
MR signal by the rf receiver coil.

What are Excitation pulses?

• Radiofrequency pulses that generate an MR signal, by delivering energy to the
hydrogen spin population and causing the magnetisation to move away from its
equilibrium position, are known as excitation pulses.


What is saturation pulse and flip angle?

• The 90° rf excitation pulse delivers just enough energy to rotate the net magnetisation
through 90°

• This transfers all of the net magnetisation from the z-axis into the xy (transverse) plane,
leaving no component of magnetisation along the z-axis immediately after the pulse.

• When applied once, a 90° rf pulse produces the largest possible transverse
magnetisation and MR signal.

• The 90° rf pulse is sometimes referred to as a saturation pulse.


• This pulse is used to initially generate the signal for spin echo-based pulse sequences.
• When an rf pulse is applied, Mo makes an angle with the z-axis, known as the flip angle

• Low flip angle rf excitation pulses rotate the net magnetisation through a pre-defined
angle of less than 90°

• A low flip angle is represented by the symbol α or can be assigned a specific value, e.g. 30°.
Only a proportion of the net magnetisation is transferred from the z-axis into the xy
plane, with some remaining along the z-axis.

• While a low flip angle rf pulse produces an intrinsically lower signal than the 90°
excitation pulse described above, it can be repeated more rapidly as some of the
magnetisation remains along the z-axis immediately after the pulse.

• This excitation pulse is used to generate the signal in gradient echo pulse sequences to
control the amount of magnetisation that is transferred between the z-axis and the xy
plane for fast imaging applications.

• The 180° refocusing pulse is used in spin echo pulse sequences after the 90° excitation
pulse, where the net magnetisation has already been transferred into the x-y plane.

• It flips the direction of the magnetisation in the x-y plane through 180°


• This pulse is used in spin echo based techniques

Discuss the relaxation process in MRI?

• Immediately after the rf pulse the spin system starts to return back to its original state,
at equilibrium. This process is known as relaxation.

• There are two distinct relaxation processes that relate to the two components of the Net
Magnetisation, the longitudinal (z) and transverse (xy) components
• longitudinal relaxation/ T1 relaxation- The first relaxation process, longitudinal
relaxation, commonly referred to as T1 relaxation is responsible for the recovery of the
z component along the longitudinal (z) axis to its original value at equilibrium.

• transverse relaxation/T2 relaxation- The second relaxation process, transverse


relaxation, is responsible for the decay of the xy component as it rotates about the z
axis, causing a corresponding decay of the observed MR signal.

• Longitudinal and transverse relaxation both occur at the same time; however, transverse
relaxation is typically a much faster process in human tissue.

What is free induction decay?

• Transverse relaxation can be understood by remembering that the net magnetisation is


the result of the sum of the magnetic moments (spins) of a whole population of protons.


• Immediately after the rf pulse they rotate together in a coherent fashion.


• The angle of the direction they point at any instant is known as the phase angle and the
spins having similar phase angles are said at this initial stage to be ‘in phase’

• Over time, for reasons explained below, the phase angles gradually spread out: there is a
loss of coherence, the magnetic moments no longer rotate together, and they are said to
move ‘out of phase’.

• The net sum of the magnetic moments is thus reduced, resulting in a reduction in the
measured net (transverse) magnetisation.

• The signal that the receiver coil detects (if no further rf pulses or magnetic field
gradients are applied) is therefore seen as an oscillating magnetic field that gradually
decays, known as a Free Induction Decay or FID.
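In equation form (a standard result, stated here because no equation appears in the scan), the envelope of the FID decays approximately exponentially, Mxy(t) = Mxy(0)·exp(−t/T2*), while the signal itself oscillates at the Larmor frequency; T2* is the effective transverse relaxation time discussed in the next answer.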


Discuss the T2 and T2* relaxation in detail (spin-spin relaxation / transverse relaxation)

• As described above for the free induction decay, immediately after the rf pulse the
magnetic moments of the proton population rotate together, in phase; over time they
gradually move out of phase, the net transverse magnetisation is reduced, and the
detected signal decays.

• There are two causes of this loss of coherence. Firstly, the presence of interactions
between neighbouring protons causes a loss of phase coherence known as T2
relaxation

• This arises from the fact that the rate of precession for an individual proton depends on
the magnetic field it experiences at a particular instant.

• While the applied magnetic field Bo is constant, it is however possible for the magnetic
moment of one proton to slightly modify the magnetic field experienced by a
neighbouring proton.

• As the protons are constituents of atoms within molecules, they are moving rapidly and
randomly and so such effects are transient and random.

• The net effect is for the Larmor frequency of the individual protons to fluctuate in a
random fashion, leading to a loss of coherence across the population of protons.

• i.e. the spins gradually acquire different phase angles, pointing in different directions
to one another and are said to move out of phase with one another (this is often referred
to as de-phasing).
• The resultant decay of the transverse component of the magnetisation (Mxy) has an
exponential form with a time constant, T2, hence this contribution to transverse
relaxation is known as T2 relaxation
• As it is caused by interactions between neighbouring proton spins it is also sometimes
known as spin-spin relaxation.


• Due to the random nature of the spin-spin interactions, the signal decay caused by T2
relaxation is irreversible.
T2* relaxation

• The second cause for the loss of coherence (de-phasing) relates to local static variations
(inhomogeneities) in the applied magnetic field, Bo which are constant in time.

• If this field varies between different locations, then so does the Larmor frequency.
• Protons at different spatial locations will therefore rotate at different rates, causing
further de-phasing so that the signal decays more rapidly.

• In this case, as the cause of the variation in Larmor frequency is fixed, the resultant de-
phasing is potentially reversible.

• The combined effect of T2 relaxation and the effect of magnetic field non-uniformities
is referred to as T2* relaxation and this determines the actual rate of decay observed
when measuring an FID signal .
• T2* relaxation is also an exponential process with a time constant T2*.
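These two contributions are commonly combined (standard relation, assumed here since no equation is given in the notes) as 1/T2* = 1/T2 + 1/T2', where T2' describes the additional, reversible de-phasing due to the static field inhomogeneities (often written 1/T2' ≈ γ·ΔBo); T2* is therefore always shorter than T2.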


Discuss the T1 relaxation in detail. (spin-Lattice relaxation/ Longitudinal


relaxation)

• T1 relaxation is an exponential process with a time constant T1.


• For example, if a 90° pulse (a saturation pulse) is applied at equilibrium, the z-
magnetisation is saturated (reduced to zero) immediately after the pulse, but then
recovers along the z-axis towards its equilibrium value, initially rapidly, slowing down
as it approaches its equilibrium value (Figure 3).

• The shorter the T1 time constant is, the faster the relaxation process and the return to
equilibrium.

• Recovery of the z-magnetisation after a 90° rf pulse is sometimes referred to as
saturation recovery.
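In equation form (standard expression, not shown in the scan), the recovery after a 90° pulse applied at t = 0 is Mz(t) = Mo·(1 − exp(−t/T1)); at t = T1 the longitudinal magnetisation has recovered to 1 − e⁻¹ ≈ 63% of Mo, which is the origin of the 63% figure used later in the discussion of T1-weighted imaging.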


Discuss the Significance of the T1 value

• T1 relaxation involves the release of energy from the proton spin population as it returns
to its equilibrium state.
• The rate of relaxation is related to the rate at which energy is released to the
surrounding molecular structure.
• This in turn is related to the size of the molecule that contains the hydrogen nuclei and
in particular the rate of molecular motion, known as the tumbling rate of the
particular molecule.
• As molecules tumble or rotate they give rise to a fluctuating magnetic field which is
experienced by protons in adjacent molecules.
• For example, lipid molecules are of a size that gives rise to a tumbling rate which is
close to the Larmor frequency and therefore extremely favourable for energy exchange.
• Fat therefore has one of the fastest relaxation rates of all body tissues and therefore the
shortest T1 relaxation time
• Larger molecules have much slower tumbling rates that are unfavourable for energy
exchange, giving rise to long relaxation times.
• For free water, the smaller molecular size gives a much faster molecular tumbling rate,
which is also unfavourable for energy exchange, and therefore free water has a long T1
relaxation time.

Discuss the Significance of the T2 value

• T2 relaxation is related to the amount of spin-spin interaction that takes place.


• Free water contains small molecules that are relatively far apart and moving rapidly;
therefore spin-spin interactions are less frequent and T2 relaxation is slow.
• Water molecules bound to large molecules are slowed down and are more likely to
interact, leading to faster T2 relaxation and shorter T2 relaxation times.
• Water- based tissues with a high macromolecular content (e.g. muscle) tend to have
shorter T2 values.
• Conversely, when the water content is increased, for example by an inflammatory
process, the T2 value also increases.


Discuss the gradient echo in MRI

• Whilst the FID can be detected as a MR signal, for MR imaging it is more common to
generate and measure the MR signal in the form of an echo
• The two most common types of echo used for MR imaging are gradient echoes and
spin echoes.
• Gradient echoes are generated by the controlled application of magnetic field gradients.
• When a magnetic field gradient is switched on it causes proton spins to lose coherence
or de-phase rapidly along the direction of the gradient

• This de-phasing causes the amplitude of the FID signal to rapidly drop to zero


• de-phasing caused by one magnetic field gradient can however be reversed by applying
a second magnetic field gradient along the same direction with a slope of equal
amplitude but in the opposite direction.
• If the second gradient is applied for the same amount of time as the first gradient, the
de-phasing caused by the first gradient is cancelled and the FID re-appears
• It reaches a maximum amplitude at the point at which the spins de-phased by the first
gradient have moved back into phase, or ‘re-phased’.
• If the second gradient then continues to be applied, the FID signal de-phases and
disappears once more.
• The signal that is re-phased through the switching of the gradient direction is known as
a gradient echo.
• The time from the point at which the transverse magnetisation (the FID) is generated
by the rf pulse to the point at which the gradient echo reaches its maximum amplitude
is known as the echo time (abbreviated TE).

Discuss the Spin echo in MRI

Pulse sequence is explained below:


• A- The vertical red arrow is the average magnetic moment of a group of spins, such as
protons. All are vertical in the vertical magnetic field and spinning on their long axis,
but this illustration is in a rotating reference frame where the spins are stationary on
average.
• B- A 90 degree pulse has been applied that flips the arrow into the horizontal (x-y)
plane.


• C--Due to local magnetic field inhomogeneities (variations in the magnetic field at


different parts of the sample that are constant in time), as the net moment precesses,
some spins slow down due to lower local field strength (and so begin to progressively
trail behind) while some speed up due to higher field strength and start getting ahead of
the others. This makes the signal decay.
• E- A 180 degree pulse is now applied so that the slower spins lead ahead of the main
moment and the fast ones trail behind.
• F- Progressively, the fast moments catch up with the main moment and the slow
moments drift back toward the main moment.
• G--Complete refocusing has occurred and at this time, an accurate T2 echo can be
measured with all T2* effects removed. Quite separately, return of the red arrow
towards the vertical (not shown) would reflect the T1 relaxation. 180 degrees is π
radians so 180° pulses are often called π pulses.


What is T1-weighted spin echo

• T1 relaxation is the recovery of the longitudinal magnetisation (Mz).


• The higher the Mz at the time of applying the 90° RF pulse the greater the transverse
signal (Mxy)

• The longer the TR (repetition time)


• The longer the time to the next 90° RF pulse
• The more time Mz will have had to recover
• The higher the transverse signal when the 90° RF pulse is applied
• i.e. it is the TR that determines the T1 signal
• The time constant, T1, is a measure of the time it takes for the nuclei to recover 63% of
their original Mz.
• Hydrogen nuclei in different molecules have different T1s.
• Those with a short T1 will recover their Mz quicker than those with a long T1.
• The parameter choice for T1-weighted spin echo is a short TR and a short TE (echo time)
• The choice of a short TR determines that tissues with a long T1 (e.g. fluid) will recover
less than those with a short T1 (e.g. fat).
• This determines the initial value of the transverse magnetisation, Mxy, when the next
rf pulse is applied.


• Tissues that have recovered less quickly will have a smaller longitudinal magnetisation
before the next rf pulse, resulting in a smaller transverse magnetisation after the rf pulse.
• The short TE limits the influence of the different T2 decay rates. The resultant contrast
is therefore said to be T1-weighted.
• T1 weighted spin echo images are typically characterised by bright fat signal and a low
signal from fluid and are useful for anatomical imaging where high contrast is required
between fat, muscle and fluid.


What is T2-weighted spin echo

• T2 decay is the decay of the transverse magnetisation (Mxy) after application of the 90°
RF pulse.
• The longer the time after the 90° RF pulse, the more the Mxy decays and the smaller
the transverse signal.
• As we saw in the spin echo sequence, TE is the "time to echo". If we leave a long TE
we give more time for the Mxy to decay and we get a smaller signal.
• The longer the TE
• The longer the time allowed for Mxy to decay
• The smaller the transverse (T2) signal
• i.e. it is the TE that determines the T2 signal
• The time constant, T2, is the time it takes for the transverse magnetisation (Mxy) of the
hydrogen nuclei to decay to 37% of its initial value.
• Hydrogen nuclei in different molecules have different T2s.
• Those with a short T2 will take a shorter time to decay than those with a long T2.
• The parameter choice for T2-weighted spin echo is a long TR and long TE.
• The choice of a long TR allows the z-magnetisation to recover close to the equilibrium
values for most of the tissues, therefore reducing the influence of differences in T1
relaxation time.
• The longer echo time however allows more decay of the xy component of the
magnetisation.
• The differential rate of decay between a tissue with a short T2 (e.g. muscle) and a tissue
with a long T2 (e.g. fluid), leads to a difference in signal that is said to be T2-weighted.
• The short T2 leads to a reduced signal intensity, while the long T2 leads to an increased
signal intensity.
• These images are characterised by bright fluid and are useful for the depiction of fluid
collections and the characterisation of cardiac masses and oedema.


Discuss the Proton density-weighted spin echo/ spin density weighted imaging

• The parameter choice for proton density-weighted spin echo is a long TR and short TE .

• The choice of long TR allows recovery of the z-magnetisation for most tissues, therefore
reducing the influence of differences in T1 relaxation time and the 90° excitation pulse
therefore transfers a similar amount of signal into the xy plane for all tissues.

• The choice of a short TE limits the amount of T2 decay for any tissue at the time of
measurement.

• This results in a high signal from all tissues, with little difference between them.

• So the signal amplitude is not particularly affected by the T1 relaxation properties, or by the
T2 relaxation properties.

• The primary determinant of the signal amplitude is therefore the equilibrium magnetisation
of the tissue and the image contrast is said to be ‘proton density’-weighted.


• This type of weighting is useful where the depiction of anatomical structure is required,
without the need to introduce soft tissue contrast.


Differentiate T1, T2 and Proton density weighted imaging
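The notes do not reproduce a comparison table here, so the sketch below is only an illustrative numerical summary based on the standard spin echo signal relation S ≈ PD · (1 − e^(−TR/T1)) · e^(−TE/T2) implied by the sections above; the tissue PD, T1 and T2 values are rough, assumed textbook-style numbers at 1.5 T used purely for illustration.

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Standard spin echo signal relation: S ~ PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Rough illustrative tissue values (relative PD, T1 in ms, T2 in ms)
tissues = {"fat":    (1.0,  260.0,   80.0),
           "muscle": (0.8,  870.0,   45.0),
           "fluid":  (1.0, 3000.0, 1500.0)}

# Parameter choices described in the notes
protocols = {"T1-weighted (short TR, short TE)": (500.0, 15.0),
             "T2-weighted (long TR, long TE)":   (3000.0, 90.0),
             "PD-weighted (long TR, short TE)":  (3000.0, 15.0)}

for name, (tr, te) in protocols.items():
    print(name)
    for tissue, (pd, t1, t2) in tissues.items():
        s = spin_echo_signal(pd, t1, t2, tr, te)
        print(f"  {tissue:7s}: relative signal = {s:.2f}")
```

With these assumed values, fat is brightest and fluid darkest for the T1-weighted choice, fluid is brightest for the T2-weighted choice, and the proton density choice gives relatively high signal from all tissues with comparatively little contrast, in line with the descriptions in the sections above.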

Discuss the steps in the Localising and encoding MR signals to make an image?
Explain the image acquisition in MRI?

• The MR echo signals produced can be localised and encoded by applying magnetic
field gradients as they are generated to produce an image.
Step 1 - Selection of an image slice
• First, the resonance of protons is confined to a slice of tissue.
• This is done by applying a gradient magnetic field at the same time as the rf excitation
pulse is transmitted
• The frequency of the rf pulse corresponds to the Larmor frequency at a chosen point
along the direction of the applied gradient.


• The result is that resonance only occurs for protons in a plane that cuts through that
point at right angles to the gradient direction, effectively defining a slice of tissue.
• This process is known as slice selection and the gradient is known as the slice selection
gradient, Gs
• Rather than just a single frequency, the transmitted rf pulse is comprised of a small
range of frequencies, known as the transmit bandwidth of the rf pulse.
• This gives the slice a thickness.
• The thickness of the slice is determined by the combination of the rf pulse bandwidth
and the steepness (or strength) of the gradient.
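A small sketch (an illustration, not from the notes) of the relation just described: expressing the Larmor equation in frequency units, the excited slice thickness is approximately the RF transmit bandwidth divided by (gyromagnetic ratio × gradient strength). The bandwidth and gradient values below are assumed for illustration.

```python
gamma_bar = 42.58e6        # proton gyromagnetic ratio / 2*pi, Hz per tesla

def slice_thickness(rf_bandwidth_hz, gradient_t_per_m):
    """Thickness (m) of the excited slice for a given RF transmit bandwidth and gradient strength."""
    return rf_bandwidth_hz / (gamma_bar * gradient_t_per_m)

bw = 1.0e3                 # assumed RF transmit bandwidth, Hz
Gs = 10e-3                 # assumed slice-selection gradient, T/m
print(f"Slice thickness ~ {slice_thickness(bw, Gs)*1e3:.2f} mm")
# A steeper (stronger) gradient gives a thinner slice for the same RF bandwidth:
print(f"With 20 mT/m     ~ {slice_thickness(bw, 2*Gs)*1e3:.2f} mm")
```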


Step 2 - Phase encoding


• Following slice selection, a phase encoding gradient, Gp, is applied for a specified
period.
• This causes the protons to rotate at different frequencies according to their relative
position along the gradient.
• Where the gradient increases the magnetic field, the protons acquire a higher frequency
of precession, while where the gradient decreases the magnetic field, the protons
acquire a lower frequency of precession
• The protons therefore also constantly change their relative phase according to their
position along the gradient.
• When the gradient is switched off, the protons will have changed their relative phase
by an amount depending on their position along the gradient.
• This process is known as phase encoding and the direction of the applied gradient is
known as the phase encoding direction.
Step 3 - Frequency encoding
• Following the phase encoding gradient, the frequency encoding gradient, GF, is applied
in a direction at right angles to it and in a similar way causes the protons to rotate at
different frequencies according to their relative position along that gradient direction.
• This gradient is applied for longer, and at the same time the signal is measured or
digitally sampled.
• The signal is comprised of a range of frequencies (or bandwidth), corresponding to the
Larmor frequencies of the proton magnetic moments at their different locations along
the gradient.
• This process is known as frequency encoding, and the direction of the frequency
encoding gradient defines the frequency encoding direction.
• In summary, to localise the MR signal in three dimensions, three separate magnetic
field gradients are applied in a three step process.
• For the examples in Figures 7 and 8 these gradients are applied in sequence, with the
slice-selection gradient, Gs, applied along the z-axis, the phase-encoding gradient, GP,
applied along the y-axis and the frequency-encoding gradient, GF, applied along the x-
axis (Figure 9).
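To make the frequency-encoding step concrete, the following is a minimal 1D sketch (illustrative values only, not from the notes): two point sources at different positions acquire different Larmor frequencies under the read gradient, and a Fourier transform of the sampled, demodulated signal recovers their positions. The gradient strength, field of view and source positions are assumptions.

```python
import numpy as np

gamma_bar = 42.58e6             # proton gyromagnetic ratio / 2*pi, Hz per tesla
G = 10e-3                       # frequency-encoding gradient, T/m
fov = 0.2                       # field of view along the read direction, m
n = 256                         # number of samples in the digitised echo

# Two point sources (position in m, relative amplitude); simple demodulated signal model
sources = [(-0.05, 1.0), (0.03, 0.5)]
bw = gamma_bar * G * fov        # total signal bandwidth across the FOV, Hz
dt = 1.0 / bw                   # complex sampling interval matched to the FOV
t = np.arange(n) * dt
signal = sum(a * np.exp(2j * np.pi * gamma_bar * G * x * t) for x, a in sources)

# Fourier transform: each frequency maps back to a position x = f / (gamma_bar * G)
spectrum = np.fft.fftshift(np.fft.fft(signal))
positions = np.fft.fftshift(np.fft.fftfreq(n, d=dt)) / (gamma_bar * G)

for i in sorted(np.argsort(np.abs(spectrum))[-2:]):     # two largest spectral peaks
    print(f"peak at x = {positions[i]*100:+.2f} cm (|FFT| = {np.abs(spectrum[i]):.0f})")
print("true sources were at -5.00 cm and +3.00 cm")
```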


Explain the Image reconstruction in MRI

• The frequency encoded signal is analysed using a Fourier transform.


• This is a mathematical tool that transforms the time-dependent MR signal into its
different frequency components


• The amplitude of each frequency component can be mapped onto a location along the frequency
encoding gradient to determine the relative amount of signal at each location.
• A single echo, however, only localises the signal along the frequency encoding direction.
• To resolve the signal along the phase encoding direction as well, a number of signal echoes are
generated by repeating the above three-step process (slice selection, phase encoding and
frequency encoding), each time applying the same slice selection and frequency encoding
gradients, but a different amount of phase encoding.
• This is done by increasing the strength (or slope) of the phase encoding gradient for each
repetition by equal increments or steps.
• For each phase encoding step the signal echo is measured, digitised and stored in a raw data
matrix.
• Once all the signals for a prescribed number of phase encoding steps have been acquired and
stored, they are analysed together by a two-dimensional (2D) Fourier transform to decode both
the frequency and the phase information (Figure 12).
k-space
• a single pixel in the image may have contributions from all of the MR signals collected.
• Just as each pixel occupies a unique location in image space, each point of an MR signal echo
belongs to a particular location in a related space known as k-space


• There is an inverse relationship between the image space and k-space (Figure 12). Whereas the
coordinates of the image represent spatial position (x and y), the coordinates of k-space
represent 1/x and 1/y, sometimes referred to as spatial frequencies, kx and ky.
• The value of each point in k-space represents how much of a particular spatial frequency is
contained within the corresponding image.
• To make an image that is a totally faithful representation of the imaged subject, it is important
that the whole range of spatial frequencies is acquired (up to a maximum that defines the spatial
resolution of the image), i.e. that the whole of k-space is covered.
• For standard imaging this is done by filling k-space with equally spaced parallel lines of signal
data, line by line, along the kx direction. This is known as a Cartesian acquisition
• The phase encoding gradient determines the position of the line being filled in the ky direction.
• Usually the amplitude of the phase encoding gradient is incremented in steps such that the next
adjacent line in k-space is filled with each successive repetition, starting at one edge of k-space
and finishing at the opposite edge.
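A minimal sketch of the image/k-space relationship described above (the "acquisition" here is simulated simply by Fourier transforming a toy phantom, so this is only an illustration): filling k-space line by line and applying a 2D inverse Fourier transform recovers the image, while keeping only the central lines gives a blurred, low-resolution version because image detail lives in the outer parts of k-space.

```python
import numpy as np

# Toy image (phantom): a bright rectangle on a dark background
image = np.zeros((64, 64))
image[24:40, 16:48] = 1.0

# Simulated Cartesian acquisition: each phase-encoding step fills one line of k-space
kspace_true = np.fft.fftshift(np.fft.fft2(image))
kspace = np.zeros_like(kspace_true)
for ky in range(kspace.shape[0]):          # one line per repetition (phase-encoding step)
    kspace[ky, :] = kspace_true[ky, :]

# Reconstruction: 2D inverse Fourier transform of the fully filled k-space
recon = np.fft.ifft2(np.fft.ifftshift(kspace)).real
print("max reconstruction error with full k-space:", np.abs(recon - image).max())

# Keeping only the central (low spatial frequency) lines blurs the image
low_only = np.zeros_like(kspace_true)
low_only[24:40, :] = kspace_true[24:40, :]
blurred = np.fft.ifft2(np.fft.ifftshift(low_only)).real
print("max error with only central k-space lines:", np.abs(blurred - image).max())
```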


Discuss the various components of MR systems?

• Three basic components make up the MRI scanner: the magnet, three magnetic field
gradient coils, and an RF coil.
• The magnet polarizes the protons in the patient
• the magnetic field gradient coils impose a linear variation on the proton Larmor
frequency as a function of position
• the RF coil produces the oscillating magnetic field necessary for creating phase
coherence between protons, and also receives the MRI signal via Faraday induction
• Each MRI system has a number of different-sized RF coils, used according to the
particular part of the body being imaged, which are placed on or around the patient.
• The gradient coils are fixed permanently inside the bore of the superconducting magnet
• In addition to these three elements there is a series of electronic components used to
turn the gradients on and off, to pulse the B1 field, and to amplify and digitize the signal.
Magnet Design
• The purpose of the magnet is to produce a strong, temporally stable, and homogeneous
magnetic field within the patient.
• A strong magnetic field increases the amplitude of the MRI signal, a homogeneous
magnetic field is required so that the tissue T2 value is not too short and images are not
distorted by B0 inhomogeneities, and high stability is necessary to avoid introducing
unwanted artifacts into the image.
• There are three basic types of magnet: permanent, resistive, and superconducting.
1. Permanent magnets

• For magnetic fields of approximately 0.35 T or less, either resistive or permanent
magnets can be used.
• Permanent magnet systems are usually constructed of rare earth alloys such as cobalt-
samarium.
• Their advantages include relatively low cost, the lack of a requirement for cooling the
magnet, and a reduced susceptibility to patient claustrophobia due to their open nature
• The disadvantages of permanent magnets are the very large weight of such magnets and
the fact that the field homogeneity and temporal stability are highly temperature-
dependent, meaning that sophisticated thermal regulation must be used.
2. Resistive magnets

• In resistive magnets, the magnetic field is created by the passage of a constant current
through a conductor such as copper.


• The strength of the magnetic field is directly proportional to the magnitude of the
current
• thus high currents are necessary to create high magnetic fields.
• However, the amount of power dissipated in the wire is proportional to the resistance
of the conductor and the square of the current.
• Because the power is dissipated in the form of heat, cooling the conductors is a major
problem, and ultimately limits the maximum current, and therefore magnetic field
strength, that can be achieved with a resistive magnet.
• As with permanent magnets, the field homogeneity and the temporal stability of
resistive magnets are highly temperature-dependent.
3. Super conducting magnets

• The solution to the problem of conductor heating is to minimize the resistance of the
conductor by using the phenomenon of superconductivity, in which the resistance of
many conductors becomes zero at very low temperatures.
• In order to create high static magnetic fields, it is still necessary for the conductor to
carry a large current when it is superconducting, and this capability is only possessed
by certain alloys, particularly those made from niobium-titanium.
• The superconducting alloy is usually fashioned into multistranded filaments within a
conducting matrix because this arrangement can support a higher critical current than a
single, larger-diameter superconducting wire.
• This superconducting matrix is housed in a stainless steel can containing liquid helium
at a temperature of 4.2 K, as shown in Figure.
• This can is surrounded by a series of radiation shields and vacuum vessels to minimize
the boil-off of the liquid helium.
• Finally, an outer container of liquid nitrogen is used to cool the outside of the vacuum
chamber and the radiation shields.
• Because heat losses cannot be completely contained, liquid nitrogen and liquid helium
must be replenished on a regular basis.


Magnetic Field Gradient Coils


• the basic principle of MRI requires the generation of magnetic field gradients, in
addition to the static magnetic field, so that the proton resonant frequencies within the
patient are spatially dependent.
• Such gradients are achieved using "magnetic field gradient coils," a term usually
shortened to simply "gradient coils." Three separate gradient coils are required to
encode the x, y, and z dimensions of the image.
• The requirements for gradient coil design are that the gradients are linear over the
region being imaged, that they are efficient in terms of producing high gradient
strengths per unit current, and that they have fast switching times for use in rapid
imaging techniques.
• As in the case of magnet design, a magnetic field gradient is produced by the passage
of current through conducting wires.
• Unlike the design of the magnet, however, the geometry of the conductors for the three
gradient coils must be optimized to produce a linear gradient, rather than a uniform
field.
• Copper at room temperature can therefore be used as the conductor, with chilled-water
cooling being sufficient to remove the heat generated by the current. Because the
gradient coils fit directly inside the bore of the cylindrical magnet, the geometrical
design is usually cylindrical.


Radiofrequency Coils
• in order to produce an MRI signal, magnetic energy must be supplied to the protons at
the Larmor frequency in order to stimulate transitions between the parallel and the
antiparallel nuclear energy levels, thus creating precessing transverse magnetization.
• The particular piece of hardware that delivers this energy is called an RF coil, which is
usually placed directly around, or next to, the tissue to be imaged.
• The same RF coil is also usually used to detect the NMR signal via Faraday induction
• The power needed to generate the RF pulses for clinical systems can be many kilowatts,
and the receiver is designed to detect signals only on the order of 1-10 μV.
• Examples of RF coil geometries for imaging different body parts are shown in Figure
• The "birdcage" coil is a "volume coil" designed to give a spatially uniform magnetic
field over the entire volume of the coil.
• It is typically used for brain, abdominal, and knee studies.
• The circular loop coil, or "surface" coil, is used to image objects at the surface of the body
with high sensitivity.
• The third type of coil is the "phased array," which consists of a series of surface coils.
• These coils are typically used to image large structures such as the spine.


Discuss the Block diagram of MRI?

/Discuss the instrumentation of MRI?


• Three basic components make up the MRI scanner: the magnet, three magnetic field
gradient coils, and an RF coil.
• The magnet polarizes the protons in the patient
• the magnetic field gradient coils impose a linear variation on the proton Larmor
frequency as a function of position
• the RF coil produces the oscillating magnetic field necessary for creating phase
coherence between protons, and also receives the MRI signal via Faraday induction
• Each MRI system has a number of different-sized RF coils, used according to the
particular part of the body being imaged, which are placed on or around the patient.
• The gradient coils are fixed permanently inside the bore of the superconducting magnet
• In addition to these three elements there is a series of electronic components used to
turn the gradients on and off, to pulse the B1 field, and to amplify and digitize the signal.
• A simplified block diagram of a system is shown in Figure.
• Various components are discussed further in the following sections
• The magnet (permanent, resistive and superconducting designs), the magnetic field
gradient coils and the radiofrequency (RF) coils are as described in detail in the
previous answer on the components of MR systems.
Signal Demodulation, Digitization, and Fourier Transformation
• The oscillating voltage induced in the receiver coil using a standard 1.5-T scanner has
a magnitude between several tens of microvolts and a few millivolts
• Because it is difficult to digitize a signal at this high frequency, 63.9 MHz, using a
high-dynamic-range A/D converter, the signal must be "demodulated" to a lower frequency
before it can be digitized.
• A schematic for the typical components of a receiver used in magnetic resonance is
shown in Figure.

• The voltage induced in the RF coil first passes through a low-noise preamplifier, with
a typical gain factor of 100 and a noise figure of about 0.6 dB.
• If the signal from only the water protons is considered initially, then the induced voltage
s(t) is given by


• The first demodulation step uses a mixer to reduce the frequency of the signal from the
Larmor frequency ω0 to an intermediate frequency ωIF, where the value of ωIF is
typically 67.2 × 10^6 rad s^-1 (10.7 MHz).
• A simple circuit for the demodulator is shown in Figure, where the mixer effectively
acts as a multiplier.
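A hedged numeric sketch of the mixing step just described (demonstration only; the sampling rate, duration and local oscillator frequency of f_Larmor − f_IF = 53.2 MHz are assumed): multiplying the 63.9 MHz signal by the reference produces difference (10.7 MHz) and sum (117.1 MHz) components, and the 10.7 MHz intermediate frequency is the one that is retained.

```python
import numpy as np

fs = 1.0e9                        # sampling rate for the demonstration, Hz
n = 20000
t = np.arange(n) / fs             # 20 microseconds of data

f_larmor = 63.9e6                 # received MR signal frequency at 1.5 T, Hz
f_lo = f_larmor - 10.7e6          # local oscillator chosen to give a 10.7 MHz IF

mixed = np.cos(2 * np.pi * f_larmor * t) * np.cos(2 * np.pi * f_lo * t)   # mixer = multiplier

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(n, d=1/fs)
peaks = np.sort(freqs[np.argsort(spectrum)[-2:]])     # the two strongest components
print("dominant frequencies in the mixer output:", peaks / 1e6, "MHz")
# A low-pass filter would then keep only the 10.7 MHz difference component.
```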


What is BOLD Signal? / Explain the principles of functional MRI/ Discuss the
applications?

fMRI is a technique for measuring metabolic correlates of neuronal activity.


• Uses a standard MRI scanner
• Acquires a series of images (numbers)
• Measures changes in blood oxygenation
• Use non-invasive, non-ionizing radiation
• Can be repeated many times; can be used for a wide range of subjects
• Combines good spatial and reasonable temporal resolution
• fMRI uses principles of BOLD signal
• fMRI detects the blood oxygen level–dependent (BOLD) changes in the MRI signal
that arise when changes in neuronal activity occur following a change in brain state,
such as may be produced, for example, by a stimulus or task.
• it is well established that an increase in neural activity in a region of cortex stimulates
an increase in the local blood flow in order to meet the larger demand for oxygen and
other substrates.


• The change in blood flow actually exceeds that which is needed so that, at the capillary
level, there is a net increase in the balance of oxygenated arterial blood to deoxygenated
venous blood.
• Essentially, the change in tissue perfusion exceeds the additional metabolic demand, so
the concentration of deoxyhemoglobin within tissues decreases.
• This decrease has a direct effect on the signals used to produce magnetic resonance
images.
• While blood that contains oxyhemoglobin is not very different, in terms of its magnetic
susceptibility, from other tissues or water, deoxyhemoglobin is significantly
paramagnetic (like the agents used for MRI contrast materials, such as gadolinium),
and thus deoxygenated blood differs substantially in its magnetic properties from
surrounding tissues.
• When oxygen is not bound to hemoglobin, the difference between the magnetic field
applied by the MRI machine and that experienced close to a molecule of the blood
protein is much greater than when the oxygen is bound.
• The result of having lower levels of deoxyhemoglobin present in blood in a region of
brain tissue is therefore that the MRI signal from that region decays less rapidly and so
is stronger when it is recorded in a typical magnetic resonance image acquisition.


Discuss the Principle and Applications of Diffusion Tensor Imaging

• Diffusion tensor imaging (DTI) is an emerging magnetic resonance imaging (MRI)
technology.
• Using this technique, we can characterize the way water diffuses inside imaging
objects. For example, water molecules inside a cup can diffuse freely in all directions
(“free diffusion” or “isotropic diffusion”).
• On the other hand, water molecules inside living systems often experience numerous
“obstacles”, such as protein fibers, membrane, and organelles. If the water diffusion
is restricted by these structures it is called “restricted diffusion.”
• If water molecules are in an environment with highly ordered (or aligned) structure,
they tend to diffuse along the structure, resulting in so-called “anisotropic diffusion.”
• In other words, the water diffusion has “directionality”. The water diffusion, thus,
carries a wealth of information on the micro-architecture of the imaging object.
• Using the DTI, we can characterize the water diffusion process.
• DTI can answer questions about diffusion like, “is it free or restricted?” or “is it
isotropic or anisotropic?”
• Using the DTI technique, the water diffusion process can be characterized on a pixel-
by-pixel basis.
• Application of the DTI to the brain has revealed that the water diffusion in the
brain white matter is highly anisotropic, which is attributed to the highly
ordered axonal tracts.


• The characterization of the anisotropic diffusion can provide detailed
information on the white matter architecture, which cannot be obtained by any
other radiological tool.

• WATER protons = signal in DTI


• Diffusion property of water molecules (D)
• D = diffusion constant
• Move by Brownian motion / Random thermal motion

• Image intensities inversely related to the relative mobility of water molecules in tissue
and the direction of the motion
• Bright regions – decreased water diffusion
• Dark regions – increased water diffusion
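The notes give no equations here, so the following is a hedged sketch using the standard mono-exponential diffusion attenuation S = S0 · exp(−b · g^T D g) and the usual fractional anisotropy (FA) definition; the tensor values are hypothetical, white-matter-like numbers chosen only to illustrate anisotropic diffusion and the resulting image intensities.

```python
import numpy as np

# Hypothetical diffusion tensor for a white-matter-like voxel (units: mm^2/s).
# Diffusion is much faster along the assumed fibre axis (x) than across it.
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])

b = 1000.0        # diffusion weighting ("b-value"), s/mm^2
S0 = 1.0          # signal with no diffusion weighting

# Mono-exponential attenuation along a diffusion-gradient direction g: S(g) = S0 * exp(-b * g^T D g)
for label, g in [("along fibre (x)", np.array([1.0, 0.0, 0.0])),
                 ("across fibre (y)", np.array([0.0, 1.0, 0.0]))]:
    adc = g @ D @ g
    print(f"{label}: ADC = {adc:.2e} mm^2/s, signal = {S0*np.exp(-b*adc):.2f}")

# Fractional anisotropy from the tensor eigenvalues
lam = np.linalg.eigvalsh(D)
md = lam.mean()
fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
print(f"mean diffusivity = {md:.2e} mm^2/s, FA = {fa:.2f}")
```

Note how the direction with faster diffusion gives the lower signal, consistent with the bright/dark convention listed above, and how the unequal eigenvalues give a high FA, as expected for ordered axonal tracts.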

Applications:
• Cancer

• Neurodegenerative disorders: Parkinson's disease and Alzheimer's dementia


Some studies have also demonstrated that it is possible to use DTI as a diagnostic tool for
differentiating patients with PD or Parkinsonian syndrome from healthy participants, by
analyzing changes in white matter fiber connections.

• Epilepsy
Patients with refractory temporal-lobe epilepsy exhibit increased diffusivity suggesting the
presence of structural disorganization

• Stroke

• Traumatic brain injury


In mild TBI, microscopic traumatic axonal injury is not always detectable in conventional brain
imaging. Unrecognized damage to the white matter is likely to increase the risks of long-term
cognitive and functional impairments, and so DTI is not only useful for detecting hidden lesions
but also for understanding the pathophysiology of mild TBI.

Discuss different image reconstruction techniques used in MRI?

WRITE FOURIER RECONSTRUCTION METHOD HERE


With relevant block diagram generalize the detection system used in MRI.

An RF receiver is used to process the signals from the receiver coils. Most modern MRI
systems have six or more receivers to process the signals from multiple coils. The signals range
from approximately 1 MHz to 300 MHz, with the frequency range highly dependent on
the applied static magnetic field strength. The bandwidth of the received signal is small, typically
less than 20 kHz, and dependent on the magnitude of the gradient field. The receiver is also a
detection system whose function is to detect the nuclear magnetization and generate an output
signal for processing by the computer. A block diagram of a typical receiver is shown in Fig.
22.22.

The receiver coil usually surrounds the sample and acts as an antenna to pick up the fluctuating
nuclear magnetization of the sample and converts it to a fluctuating output voltage V(t).


5. Nuclear Medicine

1. Discuss the general principles of nuclear medicine?

/principles of Emission Tomography

• In contrast to X-ray, ultrasound, and magnetic resonance, nuclear medicine imaging
techniques do not produce an anatomical map of the body, but instead image the spatial
distribution of radiopharmaceuticals introduced into the body.
• Nuclear medicine detects these early indicators of disease by imaging the uptake and
biodistribution of radioactive compounds introduced into the body in very small
amounts (typically nanograms) via inhalation into the lungs, direct injection into the
bloodstream, subcutaneous administration or oral administration.
• These "radiopharmaceuticals," also termed radiotracers, are compounds consisting of a
chemical substrate linked to a radioactive element.
• The chemical structure of the particular radiopharmaceutical determines the
biodistribution of the complex within the body, and a large number of
radiopharmaceuticals are used clinically in order to target specific organs.
• Abnormal tissue distribution or an increase or decrease in the rate at which the
radiopharmaceutical accumulates in a particular tissue is a strong indicator of disease.
Radiation, usually in the form of γ-rays, from the radioactive decay of the
radiopharmaceutical is detected using an imaging device called a gamma camera.


• Decay of the radioactive element produces γ-rays, which emanate in all directions.
• Attenuation of γ-rays in tissue occurs via exactly the same mechanisms as for X-rays,
namely coherent scattering, Compton scattering, and photoelectric interactions.
• In order to determine the position of the source of the γ-rays, a collimator is placed
between the patient and the detector so that only those components of radiation that
have a trajectory at an angle close to 90° to the detector plane are recorded.
• Rather than using film, as in planar X-ray imaging, to record the image, a scintillation
crystal is used to convert the energy of the γ-rays that pass through the collimator into
light. These light photons are in turn converted into an electrical signal by
photomultiplier tubes (PMTs).
• The image is formed by analyzing the spatial distribution and the magnitude of the
electrical signals from each PMT.
• Planar nuclear medicine images are characterized, in general, as having a poor SNR
and low spatial resolution (~5 mm), but extremely high sensitivity, being able to detect
very small amounts of radioactive material, and very high specificity because there is
no background radiation in the body
• Three-dimensional nuclear medicine images can be produced using the principle of
tomography.
• A rotating gamma camera is used in a technique called single photon emission
computed tomography (SPECT).

• The most recently developed technique in nuclear medicine is positron emission


tomography (PET), which is based on positron-emitting radiopharmaceuticals.
• Due to the nature of the processes involved in positron annihilation and subsequent
emission of two γ-rays, PET has a sensitivity advantage over SPECT of between two
and three orders of magnitude.


2. What is radioactivity and half-life period?

• Radioactive decay (also known as nuclear decay, radioactivity, radioactive
disintegration or nuclear disintegration) is the process by which an unstable atomic
nucleus loses energy by radiation.
• A material containing unstable nuclei is considered radioactive. Three of the most
common types of decay are alpha decay, beta decay, and gamma decay, all of which
involve emitting one or more particles or photons.

• Alpha decay occurs when the nucleus ejects an alpha particle (helium nucleus).
• Beta decay occurs in two ways;
o (i) beta-minus decay, when the nucleus emits an electron and an
antineutrino in a process that changes a neutron to a proton.
o (ii) beta-plus decay, when the nucleus emits a positron and a neutrino
in a process that changes a proton to a neutron, this process is also
known as positron emission.

• In gamma decay a radioactive nucleus first decays by the emission of an alpha or
beta particle. The daughter nucleus that results is usually left in an excited state
and it can decay to a lower energy state by emitting a gamma ray photon.

• The half-life of a radioactive substance is a characteristic constant. It measures the time
it takes for a given amount of the substance to become reduced by half as a consequence
of decay, and therefore, the emission of radiation.


• The decay constant is the constant of proportionality between the size of a population
of radioactive atoms and the rate at which the population decreases because of radioactive decay.
• Suppose N is the size of a population of radioactive atoms at a given time t, and dN is
the amount by which the population decreases in time dt; then the rate of change is
given by the equation dN/dt = −λN, where λ is the decay constant. Integration of this
equation yields N = N0·e^(−λt), where N0 is the size of the initial population of radioactive
atoms at time t = 0.
• This shows that the population decays exponentially at a rate that depends on the decay
constant. The time required for half of the original population of radioactive atoms to
decay is called the half-life. The relationship between the half-life, T1/2, and the decay
constant is given by T1/2 = 0.693/λ.
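A small sketch (illustrative only) implementing the decay law just stated, using the 6.02 h half-life of 99mTc quoted later in these notes:

```python
import math

def activity_remaining(half_life_h, elapsed_h):
    """Fraction of radioactive atoms remaining after elapsed_h hours: N/N0 = exp(-lambda*t)."""
    decay_const = math.log(2) / half_life_h          # lambda = 0.693 / T_1/2
    return math.exp(-decay_const * elapsed_h)

t_half = 6.02                                        # 99mTc half-life, hours
for t in (6.02, 12.04, 24.0):
    print(f"after {t:5.2f} h: {activity_remaining(t_half, t)*100:5.1f} % remaining")
```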

3. Discuss and differentiate types of radioactive decay?

• Radioactive elements can decay via a number of mechanisms, of which the most
common and important in nuclear medicine are α-particle decay, β-particle
emission, γ-ray emission, and electron capture
• The most useful radionuclides for diagnostic imaging are those that emit γ-rays
or X-rays because these forms of radiation can pass through tissue and reach a
detector situated outside the body. A useful parameter in quantifying the
attenuation of radiation as it travels through tissue is the half-value layer (HVL),
which corresponds to the thickness of tissue that absorbs one-half of the
radioactivity produced.
• An α-particle consists of a helium nucleus (two protons and two neutrons) with
a net positive charge. Typical particle energies are between 4 and 8 MeV. The
α-particle has a tissue HVL of only a few millimeters, and is therefore not
directly detected in nuclear medicine. This form of radioactive decay occurs
mainly for radionuclides with a mass number greater than 150.
• A β -particle is an electron, and is emitted with a continuous range of energies.
Radioactive decay occurs via the conversion of a neutron into a proton, with
emission of a high-energy β -particle and an antineutrino. Kinetic energy is
shared in a random manner between the β-particle
and antineutrino, and hence the electron has a continuous range of energies.
• No radionuclide can decay solely by γ -ray emission, but certain decay schemes
result in the formation of an intermediate species that exists in a metastable
state, with a reasonably long half-life. The radionuclide 99mTc, which is the most
widely used radionuclide, used in over 90% of studies, exists in such a
metastable state. It is formed from 99Mo according to the scheme shown below.
Roughly 90% of the metastable 99mTc nuclei follow this decay path:

• The energy of the emitted γ-ray is 140 keV. Below an energy of 100 keV most
γ-rays are absorbed in the body via photoelectric interactions, in direct analogy
to X-ray attenuation in tissue, and so radionuclides used in nuclear medicine
should emit γ-rays with energies greater than this value. Above an energy of
200 keV, γ-rays penetrate the thin collimator septa used in gamma cameras to
reject unwanted, scattered γ-rays. Therefore, the ideal energy of a γ-ray for
imaging lies somewhere between 100 and 200 keV.


4. Discuss the working of technetium generator?

• 99mTc can be produced from an on-site generator.
• The radionuclide 99mTc has a half-life of 6.02 h, is generated from a long-lived
parent, 99Mo, emits a monochromatic 140-keV γ-ray with very minor β-particle
emission, and has an HVL of 4.6 cm.
• In combination with the advantages afforded by on-site production, these
properties result in 99mTc being used in more than 90% of nuclear medicine
studies
• The on-site technetium generator consists of an alumina ceramic column with
radioactive 99Mo absorbed on its surface in the form of ammonium
molybdate.

• The column is housed within a lead shield for safety considerations.


• The 99mTc is obtained by flowing an eluting solution of saline through the
generator. The solution washes out the 99mTc, which binds very weakly to the
alumina, leaving the 99Mo behind.
• Suitable radioassays are then carried out to determine the concentration and the
purity of the eluted 99mTc.
• Typically, the technetium is eluted every 24 h and the generator is replaced once
a week.
• A simple mathematical model, presented below, describes the dynamic
operation of the technetium generator. The number of 99Mo atoms, denoted by
N1, decreases with time from an initial maximum value N0 at time t = 0.
• This radioactive decay produces N2 atoms of 99mTc, which decay to form N3
atoms of 99Tc, the final stable product:
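The equations of this model are not reproduced in these notes, so the following is a hedged sketch of the standard parent-daughter (Bateman) solution, assuming a 99Mo half-life of about 66 h and the 99mTc half-life of 6.02 h quoted above, and ignoring the small fraction of 99Mo decays that bypass the metastable state.

```python
import numpy as np

lam1 = np.log(2) / 66.0      # assumed 99Mo (parent) decay constant, 1/h
lam2 = np.log(2) / 6.02      # 99mTc (daughter) decay constant, 1/h
N0 = 1.0                     # initial number of 99Mo atoms (normalised)

def n1(t):                   # parent: dN1/dt = -lam1 * N1
    return N0 * np.exp(-lam1 * t)

def n2(t):                   # daughter: dN2/dt = lam1*N1 - lam2*N2, with N2(0) = 0
    return N0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))

for t in (6, 12, 24, 48):
    print(f"t = {t:2d} h: 99Mo = {n1(t):.2f} N0, 99mTc = {n2(t):.3f} N0")
# Under these assumptions the 99mTc content grows back towards its maximum within
# roughly a day of each elution, consistent with eluting the generator every 24 h.
```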


5. Discuss the biodistribution of technetium-based agents within the body?

• The 99mTc eluted from the generator is in the form of sodium pertechnetate,
NaTcO4.
• If this compound is injected into the body, it concentrates in the thyroid,
salivary glands, and stomach, and can be used for scanning these organs.
• The majority of radiopharmaceuticals, however, are prepared by reducing the
pertechnetate to ionic technetium (Tc4+) and then complexing it with a
chemical ligand that binds to the metal ion.
• The properties of this ligand are chosen to have high selectivity for the organ
of interest with minimal distribution in other tissues.
• The ligand must bind the metal ion tightly so that the radiopharmaceutical
does not fragment in the body.
• General factors which affect the biodistribution of a particular agent include
the strength of the binding to blood proteins such as human serum albumin
(HSA), and the lipophilicity and ionization of the chemical ligand.


6. Explain the instrumentation of gamma camera?

• A gamma camera (γ-camera), also called a scintillation camera or Anger
camera, is a device used to image gamma radiation emitting radioisotopes, a
technique known as scintigraphy.
• The applications of scintigraphy include early drug development and nuclear
medical imaging to view and analyse images of the human body or the
distribution of medically injected, inhaled, or
ingested radionuclides emitting gamma rays.

• The gamma camera, shown in Figure, is the instrumental basis for all nuclear
medicine imaging studies.

• The roles of each of the separate components are covered below.


1 Collimators

• Many types of collimator are used in nuclear medicine, but the most common
geometry is a parallel-hole collimator, which is designed such that only γ -rays
traveling at angles close to 90° to the collimator surface are detected.
• The collimator thus reduces the contribution from γ -rays that have been Compton-
scattered in tissue; these contain no useful spatial information, and reduce the image
CNR.
• The collimator is usually constructed from thin strips of lead, through which
transmission of γ -rays is negligible. The normal pattern of the lead strips is a
hexagonally based "honeycomb" geometry
• The dimensions and the arrangement of the lead strips determine the contribution
made by the collimator to the overall spatial resolution of the gamma camera. In
Figure, if two point sources are placed a distance less than R apart, then they cannot
be resolved. The value of R is given by

where L is the length of the septa, d is the distance between septa, and z is the distance
between the γ-ray source and the front of the collimator. Therefore, the spatial
resolution can be improved by increasing the length of the septa in the collimator, or
minimizing the value of z, that is, positioning the gamma camera as close to the patient
as possible.
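Since the expression for R is not reproduced in these notes, the sketch below uses the commonly quoted parallel-hole collimator approximation R ≈ d·(L + z)/L, an assumption that is at least consistent with the dependence on L and z described above; the dimensions are illustrative.

```python
def collimator_resolution(d, L, z):
    """Commonly quoted parallel-hole collimator resolution approximation: R ~ d * (L + z) / L."""
    return d * (L + z) / L

d = 2.0    # assumed septal spacing (hole size), mm
L = 25.0   # assumed septa length, mm
for z in (0.0, 50.0, 100.0, 200.0):   # source-to-collimator distance, mm
    print(f"z = {z:5.1f} mm -> R = {collimator_resolution(d, L, z):5.1f} mm")
# Longer septa improve (reduce) R for the same source distance:
print(f"Doubling septa length at z = 100 mm: R = {collimator_resolution(d, 2*L, 100.0):5.1f} mm")
```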

• There are a number of other types of collimators, shown in Figure 2.6, that can
be used to magnify, or alternatively reduce the size of, the image.
• A converging collimator, for example, can be used for imaging small organs
close to the surface of the body.
• An extreme form of the converging collimator is a "pinhole collimator," which
is used for imaging very small organs.


• A pinhole collimator increases significantly the magnification and the spatial


resolution of the image, but also results in some geometric distortion,
particularly at the edges of the image.
• It is used primarily for thyroid and parathyroid imaging.
• In contrast, a diverging collimator reduces the size of the image compared to
the physical dimensions of the object and is used to image a structure larger than
the size of the detector.

2 The Scintillation Crystal

• The most common γ-ray detector is based on a single crystal of thallium-activated
sodium iodide, NaI(Tl).
• The thallium creates imperfections in the crystal structure of the NaI such that atoms
within the crystal can be excited to elevated energy levels.
• When a γ-ray strikes the crystal, it loses energy through photoelectric and Compton
interactions with the crystal.
• The electrons ejected by these interactions lose energy in a short distance by ionizing
and exciting the scintillation molecules.
• De-excitation of these excited states within the scintillation crystal occurs via emission
of photons with a wavelength of 415 nm (visible blue light), corresponding to a
photon energy of roughly 3 eV.
• The intensity of the light is proportional to the energy of the incident γ-ray. The light
emission decay constant, which is the time for the excited states to return to
equilibrium, is 230 ns for NaI(Tl).
• Overall, approximately 13% of the energy deposited in the crystal via γ -ray
absorption is emitted as visible light.
• One disadvantage of the NaI(Tl) crystal is that it is hygroscopic, and so must be
hermetically sealed.


• When a γ-ray strikes the NaI(Tl) crystal, light is produced from a very small volume
determined by the range, typically 1 mm, of the photoelectrons or Compton-scattered
electrons.
• The thicker the crystal, the broader is the light spread function and the poorer is the
spatial resolution.
• For obtaining 99mTc nuclear medicine images, the optimal crystal thickness is
approximately 0.6 cm.
• However, this value is too small for detecting, with high sensitivity, the higher energy
γ -rays associated with radiopharmaceuticals containing gallium, iodine, and indium
and so a compromise crystal thickness of 1 cm is generally used in these cases

3 Photomultiplier Tubes

• The second step in forming the nuclear medicine image involves detection of the light
photons emitted by the crystal by hexagonal PMTs, which are closely coupled to the
scintillation crystal.
• This geometry gives efficient packing, and also has the property that the distance from
the center of one PMT to that of each neighboring PMT is the same: this property is
important for determination of the spatial location of the scintillation event using an
Anger position network
• Arrays of 61, 75, or 91 PMTs, each with a diameter of between 25 and 30 mm, are
typically used. The basic design of a PMT is shown in Figure
• Light photons pass through the transparent window of the PMT and strike the
photocathode, which is made of a bialkali material with a spectral sensitivity matched
to the light-emission characteristics of the scintillation crystal.
• Provided that the photon energy is greater than the photoelectric work function of the
photocathode, free electrons are generated in the photocathode via photoelectric
interactions.
• These electrons have energies between 0.1 and 1 eV.
• A bias voltage of between 300 and 5000 V applied between the first anode (also
called a dynode) and the photocathode attracts these electrons toward the anode.
• If the kinetic energy of this incident electron is above a certain value, typically 100-
200 eV, when it strikes the anode a large number of electrons are emitted from the
anode for every incident electron: the result is effectively noise-free amplification.
• A series of 10 successive accelerating dynodes produces between 10^5 and 10^6
electrons for each photoelectron, creating an amplified current at the output of the
PMTs.
• This current then passes through a series of low-noise preamplifiers and is digitized
using an A/D converter.
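A tiny sketch (with assumed secondary-emission factors) of the dynode multiplication just described: with a gain of a few electrons per dynode, ten dynode stages give an overall gain in the 10^5-10^6 range quoted above.

```python
def pmt_gain(electrons_per_dynode, n_dynodes=10):
    """Overall PMT multiplication: gain = (electrons emitted per incident electron) ** number of dynodes."""
    return electrons_per_dynode ** n_dynodes

for delta in (3.2, 4.0):        # assumed secondary-emission factors per dynode
    print(f"gain per dynode {delta}: overall gain ~ {pmt_gain(delta):.2e}")
```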


4 The Anger Position Network

The PMTs situated closest to a given γ-ray-induced scintillation in the crystal
produce the largest output current.
By comparing the magnitudes of the currents from all of the PMTs, the location of
individual scintillations within the crystal can be estimated.
This calculation is most easily carried out using an Anger logic circuit, named after
one of the pioneers in the development of the gamma camera, Hal Anger.
This network produces four output signals, X+, X-, Y+ and Y-, the relative
magnitudes and signs of which define the location of the scintillation event in the
crystal.
Figure shows two of the four channels of such a network.

5 Pulse Height Analyzer


• In addition to recording the individual components X+, X-, Y+, and Y-, the
summed signal (X+ + X- + Y+ + Y-), termed the "z-signal", is sent to a
pulse-height analyzer (PHA).
• The PHA compares the z-signal to a threshold value, which for a 99mTc scan
corresponds to that produced by a γ-ray with energy 140 keV.
• If the z-signal is below this threshold, it is rejected as having originated from a
γ-ray that has been Compton-scattered in the body and therefore has no useful
spatial information.

• In practice, rather than a single threshold value being used, a range of values
of the z-signal is accepted.
• The reason is that even monochromatic 140-keV γ-rays that do not undergo
significant scattering in the patient give a statistical distribution in the size of
the z-signal.
• The energy resolution of the system is defined as the full-width half-maximum
(FWHM) of the photopeak, shown in Figure, and is typically about 14 keV (or
10%) for most gamma cameras.
• The narrower the FWHM of the system, the better it is at discriminating
between unscattered and scattered γ-rays.
• The threshold level for accepting the "photopeak" is set to a slightly larger
value, typically 15%.
• For example, a 15% window around a 140-keV photopeak means that values
of 129.5-150.5 keV are accepted as corresponding to unscattered γ-rays.
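A one-line check (illustration only) of the window arithmetic quoted above:

```python
def energy_window(photopeak_kev, window_fraction):
    """Symmetric acceptance window: photopeak +/- (window_fraction/2) * photopeak."""
    half_width = photopeak_kev * window_fraction / 2.0
    return photopeak_kev - half_width, photopeak_kev + half_width

lo, hi = energy_window(140.0, 0.15)
print(f"A 15% window around 140 keV accepts {lo:.1f}-{hi:.1f} keV")   # 129.5-150.5 keV
```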


7. Discuss the principles of single photon emission computed tomography (SPECT)?

• SPECT scans use radioactive material called tracers. The tracers mix with your
blood and are taken up by the body tissue.
• SPECT radio-nuclides do not require an on-site cyclotron.
• However, the isotopes of Tc, Tl, In, and Xe are not normally found in the body.
For example, it is extremely difficult to label a biologically active pharmaceutical with
Tc-99m without altering its biochemical behaviour.
• Presently, SPECT has been used mainly in the detection of tumours and other
lesions, as well as in the evaluation of myocardial function using Tl-201. However,
certain pharmaceuticals have been labelled with iodine and technetium and provide
information on blood perfusion within the brain and the heart.
• Gamma-ray photons emitted from the internal distributed radiopharmaceutical
penetrate through the animal’s or patient’s body and are detected by a single or a set of
collimated radiation detectors.
• A special “gamma” camera picks up signals from the tracer as it moves around
the subject.
• The tracer’s signals are converted into images by a computer.
• Most of the detectors used in current SPECT systems are based on a single or
multiple NaI(Tl) scintillation detectors.


▪ SPECT can be performed using either multidetector or rotating gamma camera systems. In the
former, a large number of scintillation crystals and associated electronics are placed around the
patient.
▪ The primary advantage of the multidetector system is its high sensitivity, resulting in high
spatial resolution and rapid imaging.
▪ However, system complexity and associated cost have meant that these types of systems are
not widely used in the clinic.
▪ The latter approach, using a rotating gamma camera, is preferred for routine clinical imaging
because it also can be used for planar scintigraphy.
▪ Data are collected from multiple views obtained as the detector rotates about the patient's head.
▪ The simplest setup involves a single gamma camera which rotates in a plane around the patient,
collecting a series of signal projections, which, after correction for scatter and attenuation, can
be filtered and backprojected to form the image
▪ Because the array of PMTs is two-dimensional in nature, the data can be reconstructed as a
series of adjacent slices
▪ HERE Write about the gamma camera from the previous
answer
▪ SPECT systems with multiple camera heads are also available.
▪ In a dual-head system, two 180° opposed camera heads are used, and acquisition time
is reduced by half with no loss in sensitivity.
▪ A triple-head SPECT system further improves sensitivity.


▪ Some suppliers also offer variable-angle dual-head systems for improved positioning
during cardiac, brain and whole-body imaging.
▪ Imaging times can be decreased by using another SPECT configuration—a ring of
detectors completely surrounding the patient.
▪ Although multiple camera heads reduce acquisition time, they do not significantly
shorten procedure/exam time because of factors such as patient preparation and data
processing. Several approaches are being investigated to improve SPECT sensitivity and
resolution. Novel acquisition geometries are being evaluated for both discrete detector and
camera-based SPECT systems (Fig 21.13).

The sensitivity of a SPECT system is mainly determined by the total area of the detector surface
that is viewing the organ of interest. Of course, there is tradeoff of sensitivity versus spatial
resolution. Kuhl (1976) recognized that the use of banks of discrete detectors [Fig. 21.13(a)]
could be used to improve SPECT performance. The system [Fig. 21.13(b)] developed by
Hirose et al. (1982) consists of a stationary ring of detectors. This system uses a unique fan-
beam collimator that rotates in front of the stationary detectors. Another approach using multi-
detector brain system [Fig. 21.13(c)] uses a set of 12 scintillation detectors coupled with a
complex scanning motion to produce tomographic images (Moore et al., 1984). An advantage
of discrete detector SPECT systems is that they typically have a high sensitivity for a single
slice of the source. However, a disadvantage has been that typically only one or at most a few
non-contiguous sections could be imaged at a time. In order to overcome this deficiency,
Rogers et al. (1984) described a ring system that is capable of imaging several contiguous slices
simultaneously.


8. Discuss the principles of PET imaging

• PET is a diagnostic imaging technique used to map the biodistribution of positron-
emitting radiopharmaceuticals within the body.
• These radiopharmaceuticals must be synthesized using a cyclotron, and are structural
analogs of a biologically active molecule, such as glucose, in which one or more of the
atoms has been replaced by a radioactive atom.
• Examples of such radiopharmaceuticals include fluorodeoxyglucose (FDG), which
contains 18F, and [11C]palmitate. Isotopes such as 11C, 15O, 18F, and 13N undergo
radioactive decay by emitting a positron, that is, a positively charged electron (e+),
and a neutrino (ν).

• Because two "antiparallel" γ-rays are produced and both must be detected, a PET
system consists of a complete ring of scintillation crystals surrounding the patient, as
shown in Figure 2.21.


• Because the two γ-rays are created simultaneously, both are detected within a certain
time window, the value of which is determined by the diameter of the detector ring and
the location of the radiopharmaceutical within the body (see the timing sketch after this
list).
• The location of the two crystals that actually detect the two antiparallel γ-rays defines
a line along which the annihilation must have occurred.
• This process of line definition is referred to as annihilation coincidence detection
(ACD) and forms the basis of signal localization in PET.
• This process should be contrasted with that in SPECT, which requires collimation of
single γ-rays.
• The difference in these localization methods is the major reason for the much higher
detection efficiency (typically 1000-fold) of PET compared with SPECT.
• Image reconstruction in PET is via filtered backprojection.
• Because the γ-ray energy of 511 keV in PET is much higher than the 140 keV of γ-rays
in conventional nuclear medicine, different materials such as bismuth germanate are
used for the scintillation crystals.
• The higher γ-ray energy means that less attenuation of the γ-rays occurs in tissue, a
second factor which results in the high sensitivity of PET.
• The spatial resolution in PET depends upon a number of factors, including the number
and size of the individual crystal detectors; typical values of the overall system spatial
resolution are 3–5 mm.
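
The minimum usable coincidence window is set by the largest possible difference in photon
arrival times across the ring. A rough sketch in Python, assuming an 80 cm ring diameter (an
illustrative value, not taken from the text):

    C_M_PER_S = 3.0e8        # speed of light
    RING_DIAMETER_M = 0.8    # assumed ring diameter (illustrative)

    # Worst case: the annihilation occurs near the edge of the field of view, so one
    # photon travels almost a full ring diameter further than the other.
    max_delta_t_ns = RING_DIAMETER_M / C_M_PER_S * 1e9
    print(f"maximum arrival-time difference ~ {max_delta_t_ns:.1f} ns")
    # ~2.7 ns, comfortably inside the 6-20 ns windows used in practice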

9. List common Radionuclides Used for PET


10. Discuss the Instrumentation for PET

• The major differences in PET instrumentation compared to that in SPECT are the
scintillation crystals needed to detect 511-keV γ-rays efficiently and the additional
circuitry needed for coincidence detection.

1. Scintillation Crystals

• Detection of the antiparallel γ-rays uses a large number of scintillation crystals, which
are usually formed from bismuth germanate (BGO).
• The crystals are placed in a circular arrangement surrounding the patient, and the
crystals are coupled to a smaller number of PMTs.
• Coupling each crystal to a single PMT would give the highest possible spatial
resolution, but would also increase the cost prohibitively.
• Typically, each "block" of scintillation crystals consists of an 8 × 8 array cut from a
single BGO crystal, with the cuts filled with light-reflecting material.
• The dimensions of each block are roughly 6.5 mm in width and height and 30 mm in
depth.

• Each block of 64 individual crystals is coupled to four PMTs, as shown in Figure 2.22.
• Localization of the detected γ-ray to a particular crystal is performed in the same way
as in the Anger gamma camera, by comparing the relative magnitudes of the four PMT
signals.
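
A minimal numerical sketch of this Anger-type light sharing; the PMT labels, the linear
position mapping and the test values below are assumptions for illustration only, since real
scanners use measured crystal look-up tables:

    import numpy as np

    def locate_crystal(A, B, C, D, n=8):
        """Estimate which crystal of an n x n block absorbed the gamma-ray.
        A, B, C, D are the four PMT signals (A top-left, B top-right,
        C bottom-left, D bottom-right)."""
        total = A + B + C + D
        x = (B + D - A - C) / total          # normalised x position, roughly -1..+1
        y = (A + B - C - D) / total          # normalised y position
        col = int(np.clip((x + 1) / 2 * n, 0, n - 1))
        row = int(np.clip((y + 1) / 2 * n, 0, n - 1))
        return row, col

    print(locate_crystal(0.9, 1.3, 0.5, 0.7))   # example event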
The ideal detector crystal would:
1. Have a high density, which results in a large effective cross-section for Compton
scattering, and a correspondingly high γ-ray detection efficiency
2. Have a large effective atomic number, which also results in a high γ-ray detection
efficiency due to γ-ray absorption via photoelectric interactions
3. Have a short decay time to allow a short coincidence time to be used, with a reduction in
accidental coincidences and an increased SNR in the reconstructed PET image
4. Have a high light output (emission intensity) to allow more crystals to be coupled to a
single PMT, reducing the complexity and cost of the PET scanner
5. Have an emission wavelength near 400 nm; this wavelength represents the point of
maximum sensitivity for standard PMTs
6. Have an index of refraction near 1.5 to ensure efficient transmission of light between the
crystal and the PMT; optical transparency at the emission wavelength is also important
7. Be non-hygroscopic to simplify the design and construction of the many thousands of
crystals needed in the complete system
2. Annihilation Coincidence Detection Circuitry

• In a PET scan a large number of annihilation coincidences are detected.


• Some of these are true coincidences, but there are many mechanisms by which "false"
coincidences can be recorded.
• The ACD circuitry is designed to maximize the ratio of true-to-false recorded
coincidences.


• In Figure 2.23 an injected radiopharmaceutical is located in the forward right part of
the brain.
• A positron is emitted and annihilates with an electron, and two antiparallel γ-rays are
produced.
• The first γ-ray reaches crystal number 2 and produces a number of photons.
• These photons are converted into an amplified electrical signal at the output of the
PMT, which is fed into a PHA. If the voltage is within a predetermined range, then the
PHA generates a "logic pulse," which is sent to the coincidence detector.
• Typically, this logic pulse is 6–10 ns long.
• Variations in the exact location of the γ-ray within the crystal, and also the decay time
of the excited states created in the BGO crystal, create a random "time jitter" in the
delay between the γ-ray striking the crystal and the leading edge of the logic pulse
being sent to the coincidence detection circuitry.
• In Figure 2.23, the first γ-ray having been detected by crystal 2, only those crystals
numbered between 7 and 13 can detect the second γ-ray from the annihilation.
• When the second γ-ray is detected and produces a voltage that is accepted by the
associated PHA, a second logic pulse is sent to the coincidence detector.
• The coincidence detector adds the two logic pulses together and passes the summed
signal through a separate PHA, which has a threshold set to a value just less than twice
the amplitude of each individual logic pulse.
• If the logic pulses overlap in time, then the system accepts the two γ-rays as having
arisen from one annihilation and records a line integral between the two crystals (a
small simulation sketch follows this list).
• The PET system can be characterized by its "coincidence resolving time," which is
defined as twice the length of the logic pulse, and usually has a value between 12 and
20 ns.
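
A toy simulation of this pulse-overlap test, with the pulse width and jitter values assumed
purely for illustration:

    import random

    PULSE_WIDTH_NS = 8.0     # assumed logic-pulse length (the text quotes 6-10 ns)

    def is_coincidence(t1_ns, t2_ns):
        """Accept the event if the two logic pulses overlap in time, i.e. the summed
        signal would exceed a threshold just below twice one pulse amplitude."""
        return abs(t1_ns - t2_ns) < PULSE_WIDTH_NS

    # two gamma-rays from one annihilation, each with a small random time jitter
    t1 = random.uniform(0.0, 3.0)
    t2 = random.uniform(0.0, 3.0)
    print("accepted as a coincidence:", is_coincidence(t1, t2))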


3. Image Reconstruction
• Basic image reconstruction in PET is essentially identical to that in SPECT, with
both iterative algorithms and those based on filtered backprojection being used to
form the image from individual line projections.
• However, prior to reconstruction, the data must be corrected for attenuation effects
and, more importantly, for accidental and multiple coincidences.
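
A minimal numpy sketch of filtered backprojection from a parallel-ray sinogram; the
attenuation, accidental and multiple-coincidence corrections mentioned above are omitted,
and the normalisation is only approximate:

    import numpy as np

    def filtered_backprojection(sinogram, angles_deg):
        """sinogram: one row per projection angle, one column per detector bin."""
        n_angles, n_bins = sinogram.shape
        # ramp filter applied row by row in the frequency domain
        ramp = np.abs(np.fft.fftfreq(n_bins))
        filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
        # backproject every filtered profile across the image grid
        image = np.zeros((n_bins, n_bins))
        centre = n_bins // 2
        xs, ys = np.meshgrid(np.arange(n_bins) - centre, np.arange(n_bins) - centre)
        for profile, theta in zip(filtered, np.deg2rad(angles_deg)):
            t = xs * np.cos(theta) + ys * np.sin(theta) + centre   # detector coordinate
            image += np.interp(t.ravel(), np.arange(n_bins), profile).reshape(n_bins, n_bins)
        return image * np.pi / (2 * n_angles)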

11. Discuss the radiotracers used in PET and their applications


12. Discuss the operation of rectilinear scanner

• A rectilinear scanner is an imaging device used to capture emission from
radiopharmaceuticals in nuclear medicine.
• The image is created by physically moving a radiation detector over the surface of a
radioactive patient. The rectilinear scanner is now obsolete in medical imaging, having
been largely replaced by the gamma camera since the late 1960s.
Components
• Cassen's original rectilinear scanner used a calcium tungstate (CaWO4) crystal as the
radiation detector. Later systems used a sodium iodide (NaI) scintillator, as in a gamma
camera.
• The detector must be connected by mechanical or electronic means to an output system.
This could be a simple light source over photographic film, a dot matrix printer, an
oscilloscope or a television screen.
Mechanism
• The patient is administered a radioactive pharmaceutical agent, such as iodine,
which will naturally collect in the thyroid.
• The detector moves in a raster pattern over the studied area of the patient, recording
the count rate as it goes (see the sketch after this list).
• A collimator restricts detection to a small area directly below its position, so that by the
end of the scan emission from the whole study area has been detected.
• The output method is designed so that positional and detection information is
maintained. For example, when using a light source and film, the light source is moved in
tandem with the detector, and the intensity of light produced increases with increasing
activity, producing dark areas on the film.
• Disadvantages include the very long imaging time (several minutes), because each
target area must be covered separately, unlike a gamma camera, which has a much larger
field of view; the long scan times also make the method prone to motion artefacts.
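
A toy sketch of such a raster acquisition, in which the grid size, step and activity function
are all assumed for illustration:

    import numpy as np

    def rectilinear_scan(count_rate_at, nx=64, ny=64, step_mm=5.0):
        """Move a collimated detector row by row over a grid, storing one count-rate
        reading per position so that positional information is preserved."""
        image = np.zeros((ny, nx))
        for j in range(ny):
            cols = range(nx) if j % 2 == 0 else range(nx - 1, -1, -1)  # back-and-forth path
            for i in cols:
                image[j, i] = count_rate_at(i * step_mm, j * step_mm)
        return image

    # usage with a made-up "hot spot" activity distribution
    hot_spot = lambda x, y: np.exp(-((x - 160.0) ** 2 + (y - 160.0) ** 2) / 2000.0)
    img = rectilinear_scan(hot_spot)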


Schematic of a basic rectilinear scanning system

13. Discuss the clinical applications of PET

1. Cancer diagnosis
18F-Fluorodeoxyglucose (FDG), an analog of glucose that allows assessment of glucose
metabolism in body tissues, is the most commonly used PET tracer. FDG enters cells by the
same transport mechanism as glucose and is intracellularly phosphorylated by hexokinase to
FDG-6-phosphate (FDG-6-P). Intracellularly, FDG-6-P is not metabolized further and
accumulates in proportion to the glycolytic rate of the cells. Most malignant cells have a higher
rate of glycolysis and higher levels of glucose transporter proteins (GLUT) than normal cells,
and therefore accumulate FDG-6-P to higher levels than do normal tissues.
2. Study of cellular proliferation
Carbon-11 thymidine and F-18 fluorothymidine (FLT), an analog of thymidine, are markers of
cellular proliferation. The uptake of FLT has considerable analogy with FDG uptake, in that FLT
is taken up by actively proliferating cells but is not incorporated further into DNA synthesis,
and hence accumulates intracellularly in tumor cells. It has shown promising ability to predict
tumor grade in lung cancers and to evaluate brain tumors, and may be a good predictor of tumor
response. 11C-methionine, an amino acid tracer, has shown great promise in evaluating brain
tumors and other cancers. 11C-choline and 11C-acetate have been used in prostate cancer to
evaluate primary and metastatic disease.


3. Myocardial perfusion studies
Rubidium-82 is a potassium analog and is used as a first-pass extraction agent to assess
myocardial perfusion in the same way as thallium-201 or technetium-99m labelled compounds.
Nitrogen-13 labelled ammonia is another PET tracer used for myocardial perfusion studies.
4. Skeletal imaging
F-18 sodium fluoride has shown great promise as a bone scan agent, comparable or even
superior to technetium-99m labelled MDP.
5. Brain Imaging
A large number of PET tracers have been investigated to study the brain


14. Discuss the clinical applications of SPECT



6. Thermal Imaging

1. Explain the physics of thermography

• Infrared radiation is electromagnetic radiation with frequencies higher than radio
frequencies and lower than those of visible light.
• The infrared region of the electromagnetic spectrum is usually taken as 0.77 to 100 μm.
• For convenience, it is often split into near infrared (0.77 to 1.5 μm), middle infrared
(1.5 to 6 μm), far infrared (6 to 40 μm) and far–far infrared (40 to 100 μm).
• Infrared rays are radiated spontaneously by all objects having a temperature above
absolute zero. The total energy W emitted by the object and its temperature are related
by the Stefan–Boltzmann formula, W = εσT⁴, where ε is the emissivity of the surface,
σ (≈ 5.67 × 10⁻⁸ W m⁻² K⁻⁴) is the Stefan–Boltzmann constant and T is the absolute
temperature.

• The medical thermograph is a sensitive infrared camera which presents a video image
of the temperature distribution over the surface of the skin.
• This image enables temperature differences to be seen instantaneously, providing fairly
good evidence of any abnormality.
• However, thermography still cannot be considered as a diagnostic technique
comparable to radiography.
• Radiography provides essential information on anatomical structures and abnormalities
while thermography indicates metabolic process and circulation changes, so the two
techniques are complementary.

• The human body absorbs infrared radiation almost without reflection, and at the same
time, emits part of its own thermal energy in the form of infrared radiation.
• The intensity of this radiant energy corresponds to the temperature of the radiant
surface. It is, therefore, possible to measure the varying intensity of radiation at a certain
distance from the body and thus determine the surface temperature.
• Fig. 24.1 shows spectral distribution of Infrared emission from human skin.

• In a normal healthy subject, the body temperature may vary considerably from time to
time, but the skin temperature pattern generally demonstrates characteristic features,
and a remarkably consistent bilateral symmetry.
• Thermography is the science of visualizing these patterns and determining any
deviations from the normal brought about by pathological changes.
• Thermography often facilitates detection of pathological changes before any other
method of investigation, and in some circumstances, is the only diagnostic aid available.
• Thermography has a number of distinct advantages over other imaging systems. It is
completely non-invasive, there is no contact between the patient and the system as there
is with ultrasonography, and there is no radiation hazard as with X-rays. In addition,
thermography is a real-time system.
• The examination of the female breast as an aid to diagnosing breast cancer is probably
the best-known application of thermography.
• It is assumed that, since cancer tissue metabolizes more actively than other tissues and
thus has a higher temperature, the heat produced is conveyed to the skin surface,
resulting in a higher temperature in the skin directly over the malignancy than in other
regions.

2. Discuss the physical factors which affect the amount of infrared radiation
from the human body?

These factors are emissivity, reflectivity and transmittance or absorption.



3. Explain the Stefan–Boltzmann law for total radiation
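
A brief numerical sketch of the law, W = εσT⁴, using an assumed skin emissivity of 0.98 and
assumed skin and room temperatures:

    # Stefan-Boltzmann law: W = emissivity * sigma * T^4  (watts per square metre)
    SIGMA = 5.67e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
    EMISSIVITY = 0.98               # assumed value for human skin
    t_skin, t_room = 305.0, 293.0   # assumed temperatures, kelvin

    emitted = EMISSIVITY * SIGMA * t_skin ** 4
    net_loss = EMISSIVITY * SIGMA * (t_skin ** 4 - t_room ** 4)
    print(f"emitted: {emitted:.0f} W/m^2, net radiative loss: {net_loss:.0f} W/m^2")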

4. Discuss different types of infrared detectors



• Infrared detectors are used to convert infrared energy into electrical signals. Basically,
there are two types of detectors: thermal detectors and photon detectors.
• Thermal detectors include thermocouples and thermistor bolometers. They feature
constant sensitivity over a long wavelength region. However, they are characterized by
a long time constant, and thus show a slow response. The wavelength at which the human
body's emission is maximum is 9–10 μm. Therefore, the detector should ideally have a
constant spectral sensitivity in the 3–20 μm infrared range.
• However, the spectral response of photodetectors is highly limited. Most infrared
cameras use an InSb (indium antimonide) detector, which detects infrared rays in the
range 2–6 μm. Only about 2.4% of the energy emitted by the human body falls within
the region detected by InSb detectors, but these detectors are highly sensitive and can
detect smaller temperature variations than a thermistor. Another detector, made from
cadmium mercury telluride (CMT) and cooled with liquid nitrogen, has a peak response
at 10–12 μm.
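
As a rough cross-check of that 2.4% figure, the fraction of skin emission falling in the 2–6 μm
InSb band can be estimated by integrating Planck's law; the skin temperature of 305 K assumed
below is illustrative, and the exact fraction depends on it and on the band edges.

    import numpy as np
    from scipy.integrate import quad

    H, C, K_B = 6.626e-34, 3.0e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant
    SIGMA = 5.67e-8                            # Stefan-Boltzmann constant

    def spectral_exitance(lam_m, t_k):
        """Planck spectral radiant exitance, W m^-2 per metre of wavelength."""
        return (2 * np.pi * H * C ** 2 / lam_m ** 5) / np.expm1(H * C / (lam_m * K_B * t_k))

    t_skin = 305.0                                           # assumed skin temperature, K
    in_band, _ = quad(spectral_exitance, 2e-6, 6e-6, args=(t_skin,))
    print(f"fraction in 2-6 um band: {in_band / (SIGMA * t_skin ** 4):.1%}")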

5. Explain the types of photon detectors

• In photon detectors, incident photons interact with the electrons in the material and
change the electronic charge distribution.
• This perturbation of the charge distribution generates a current or a voltage that can be
measured by an electrical circuit.
• Because the photon-electron interaction is "instantaneous", the response speed of
photon detectors is much higher than that of thermal detectors.
• Indeed, by contrast to thermal detectors, quantum or photon detectors respond to
incident radiation through the excitation of electrons into a non-equilibrium state.

Photoconductive Detectors

• Photoconductive detectors are a type of photodetector based on photoconductive
semiconductor materials.
• Here, the absorption of incident light creates non-equilibrium electrical carriers, which
reduces the electrical resistance across two electrodes.
• There are also some exotic cases of negative photoconductivity, i.e., an increase of
resistance caused by illumination.
• Alternative terms for photoconductive detectors are photoresistors, light-dependent
resistors and photocells.

• While for the detection of visible light one would usually prefer photodiodes due to
their clearly superior performance (except sometimes in cost-critical applications),
photoconductive detectors are often used as infrared detectors.

• In principle, one could consider photodiodes to be photoconductors, at least when
used in photoconductive mode, i.e., with a negative bias voltage.

Photovoltaic detectors

• An important type of photodetector is the photovoltaic cell, which generates a voltage
that is proportional to the incident EM radiation intensity.
• These sensors are called photovoltaic cells because of their voltage-generating capacity,
but the cells actually convert EM energy into electrical energy.
• Photovoltaic cells are very important in instrumentation and control applications
because they are used both as light detectors and in power sources that convert solar
radiation into electrical power for remote-measuring systems.

6. Discuss different uncooled thermal IR detectors

Sub Questions: 1. What are pyroelectric detectors?



2. What are Thermoelectric detectors

3. What are Ferroelectric detectors

4. What are Resistive microbolometers

1. Pyroelectric Detectors
• Pyroelectric detectors are sensors for light which are based on the pyroelectric effect.
• They are widely used for detecting laser pulses often in the infrared spectral region,
and with the potential for a very broad spectral response.
• Pyroelectric detectors are used as the central parts of many optical energy meters, and
are typically operated at room temperature (i.e., not cooled).
• Compared with energy meters based on photodiodes, they can have a much broader
spectral response.
• There are various other applications of pyroelectric sensors, for example fire detection,
satellite-based infrared detection, and the detection of persons via their infrared
emission (motion detectors).
Pyroelectricity
• Pyroelectricity is a property of certain crystals which are naturally electrically
polarized and as a result contain large electric fields.
• Pyroelectricity can be described as the ability of certain materials to generate a
temporary voltage when they are heated or cooled.
• The change in temperature modifies the positions of the atoms slightly within
the crystal structure, such that the polarization of the material changes.
• This polarization change gives rise to a voltage across the crystal.
• If the temperature stays constant at its new value, the pyroelectric voltage gradually
disappears due to leakage current.
• The pyroelectric coefficient may be described as the change in the spontaneous
polarization vector with temperature:
p = dPs/dT
where p (C m⁻² K⁻¹) is the pyroelectric coefficient vector and Ps is the spontaneous
polarization.

Operation Principle
• We first consider the basic operation principle. A pyroelectric detector contains a piece
of ferroelectric crystal material with electrodes on two sides – essentially a capacitor.
• One of those electrodes has a black coating, which is exposed to the incident
radiation.

• The incident light is absorbed on the coating and thus also causes some heating of the
crystal, because the heat is conducted through the electrode into the crystal.
• As a result, the crystal produces some pyroelectric voltage; one can electronically detect
that voltage or alternatively the current when the voltage is held constant.
• For a constant optical power, that pyroelectric signal would eventually fade away; the
device would therefore not be suitable for measuring the intensity of continuous-wave
radiation.
• Instead, such a detector is usually used with light pulses; in that case, one obtains a
bipolar pulse structure, where one initially obtains a voltage in one direction and after
the pulse a voltage in the opposite direction.
• Due to that operation principle, pyroelectric detectors belong to the thermal detectors:
they do not directly respond to radiation, but only to the generated heat.
• In this simple form, the detector would be relatively sensitive to fluctuations
of the ambient temperature.
• Therefore, one often uses an additional compensating crystal, which is exposed to
essentially the same temperature fluctuations but not to the incoming light.
• By taking the difference of signals from both crystals, one can effectively reduce the
sensitivity to external temperature changes.
• The pyroelectric charges are typically detected with an operational amplifier (OpAmp)
based on field-effect transistors (JFETs) with very low leakage current.
Pyroelectric detector element
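
A minimal sketch of the resulting signal: the short-circuit pyroelectric current is
i = p · A · dT/dt, so a steady temperature gives no output, which is why chopped or pulsed
radiation is used. The coefficient and electrode area below are assumed, illustrative values.

    # Pyroelectric detector signal: i = p * A * dT/dt
    P_COEFF = 3.0e-4    # assumed pyroelectric coefficient, C m^-2 K^-1 (illustrative)
    AREA_M2 = 4.0e-6    # assumed electrode area (2 mm x 2 mm)

    def pyro_current(dT_dt_k_per_s):
        """Short-circuit current for a given rate of change of element temperature."""
        return P_COEFF * AREA_M2 * dT_dt_k_per_s

    print(pyro_current(0.0))     # constant temperature -> no signal
    print(pyro_current(10.0))    # a pulse heating the element at 10 K/s -> 1.2e-8 A (~12 nA)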

2. Thermoelectric detectors

Thermoelectricity

The thermoelectric effect is the direct conversion of temperature differences to electric voltage,
and vice versa, via a thermocouple.

A thermoelectric device creates a voltage when there is a different temperature on each side.
Conversely, when a voltage is applied to it, heat is transferred from one side to the other,
creating a temperature difference.

This effect can be used to generate electricity, measure temperature or change the temperature
of objects.
Because the direction of heating and cooling is determined by the polarity of the applied
voltage, thermoelectric devices can be used as temperature controllers.
The term "thermoelectric effect" encompasses three separately identified effects: the Seebeck
effect, the Peltier effect and the Thomson effect.
1) Seebeck effect: The Seebeck effect is the conversion of heat directly into electricity at the
junction of different types of wire.

2) Peltier effect: When an electric current is passed through a circuit of a thermocouple, heat
is evolved at one junction and absorbed at the other junction. This is known as Peltier Effect.
The Peltier effect is the presence of heating or cooling at an electrified junction of two different
conductors

3) Thomson effect: As per the Thomson effect, when two unlike metals are joined together
forming two junctions, the potential exists within the circuit due to temperature gradient along
the entire length of the conductors within the circuit.

In most cases the emf arising from the Thomson effect is very small and can be neglected by
proper selection of the metals. The Peltier effect plays a prominent role in the working
principle of the thermocouple.

i)Thermocouple

The general circuit for the working of a thermocouple is shown in figure 1 above. It comprises
two dissimilar metals, A and B, joined together to form two junctions, p and q, which are
maintained at temperatures T1 and T2 respectively. Remember that a thermocouple cannot be
formed without two junctions. Since the two junctions are maintained at different temperatures,
the Peltier emf is generated within the circuit and is a function of the temperatures of the two
junctions.

If the temperature of both junctions is the same, equal and opposite emfs are generated at the
two junctions and the net current flowing through the circuit is zero. If the junctions are
maintained at different temperatures, the emfs do not cancel and there is a net current flowing
through the circuit. The total emf in the circuit depends on the metals used as well as on the
temperatures of the two junctions, and the resulting emf or current can be measured easily with
a suitable device.

The measuring device is connected within the thermocouple circuit. It measures the emf
produced by the two junctions of the two dissimilar metals maintained at different
temperatures. In figure 2 the two junctions of the thermocouple and the device used for
measurement of the emf (a potentiometer) are shown.

Now, the temperature of the reference junctions is already known, while the temperature of the
measuring junction is unknown. The output obtained from the thermocouple circuit is
calibrated directly against the unknown temperature; thus the voltage or current output from
the thermocouple circuit gives the value of the unknown temperature directly.
The emf developed within the thermocouple circuit is very small, usually of the order of
millivolts; therefore, highly sensitive instruments must be used to measure it. Two devices
commonly used are the ordinary galvanometer and the voltage-balancing potentiometer; of
these, a manually or automatically balancing potentiometer is used most often.

ii) Thermopile

A thermopile is an electronic device that converts thermal energy into electrical energy. It is
composed of several thermocouples connected usually in series or, less commonly, in parallel.

A thermopile is an array of several thermocouples connected in series. A thermopile with N
thermocouples will output a voltage N times bigger than the one produced by a single
thermocouple, increasing the sensitivity of the transducer. With enough elements in the
thermopile, a useful voltage can be generated in order to control another process. This type of
transducer is often used to measure heat flux.
Thermopiles do not respond to absolute temperature, but generate an output voltage
proportional to a temperature difference or temperature gradient.
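
A small sketch of this series addition, V = N·S·ΔT; the junction count and Seebeck coefficient
below are assumed, illustrative values:

    # Thermopile output: V = N * S * (T_hot - T_cold)
    N_JUNCTIONS = 60            # assumed number of thermocouple junction pairs
    SEEBECK_UV_PER_K = 40.0     # assumed effective Seebeck coefficient, microvolts per kelvin

    def thermopile_voltage_uV(t_hot_k, t_cold_k):
        """Output voltage in microvolts for a given temperature difference."""
        return N_JUNCTIONS * SEEBECK_UV_PER_K * (t_hot_k - t_cold_k)

    # a 0.5 K rise of the absorber above the cold junctions gives 1200 uV (1.2 mV)
    print(thermopile_voltage_uV(300.5, 300.0))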

3. Ferroelectric detectors

The first ferroelectric material to be discovered was Rochelle salt (Valasek, 1921).

These materials exhibit spontaneous polarization.

By inverting the direction of the applied electric field, the direction of polarization of these
materials can be inverted or changed (figure 1). This is called switching. The material also
maintains its polarisation once the field is removed.

These materials have some similarities to ferromagnetic materials, which exhibit a permanent
magnetic moment; the hysteresis loop is almost the same for both. Because of these similarities,
the same prefix 'ferro' is used for both classes of material, although a ferroelectric material
need not contain iron (ferrum). All ferroelectric materials exhibit the piezoelectric effect. The
opposite behaviour is seen in antiferroelectric materials.

Polarization and Hysteresis Loop


If we take a linear dielectric material and apply an external electric field, the polarization is
always directly proportional to the applied field. If we polarise a paraelectric material, we get
a nonlinear polarization.
If we take a ferroelectric material and apply an electric field, we again get a nonlinear
polarization; in addition, the material exhibits a nonzero spontaneous polarization without any
external field, and by inverting the direction of the applied field the direction of polarization
can be inverted or changed.
Thus the polarization depends on the present as well as the previous state of the electric field,
and the hysteresis loop shown in the figure is obtained.

Curie Temperature
The properties of these materials exist only below a definite phase-transition temperature.
Above this temperature the material becomes paraelectric, i.e., the spontaneous polarization is
lost. This temperature is called the Curie temperature (TC). Above TC most of these materials
also lose their piezoelectric properties. The variation of the dielectric constant with temperature
in the non-polar, paraelectric state is given by the Curie–Weiss law:

ε = A + C/(T − TC)   (equivalently, χ = C/(T − TC))

where
ε → dielectric constant
A → constant (approximately the value of ε at temperatures T >> TC)
TC → Curie point
T → temperature
χ → susceptibility
C → Curie constant of the material

The dielectric constant versus temperature characteristic of a ferroelectric material is
represented below.
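
A small numerical sketch of the Curie–Weiss behaviour above TC; the constants are assumed,
illustrative values (TC ≈ 393 K corresponds roughly to BaTiO3):

    # Curie-Weiss estimate of the dielectric constant above the Curie point
    A_CONST = 4.0             # assumed constant term (illustrative)
    CURIE_CONSTANT_K = 1.5e5  # assumed Curie constant, in kelvin (illustrative)
    T_CURIE_K = 393.0         # roughly the Curie point of BaTiO3 (~120 C)

    def dielectric_constant(t_k):
        """Curie-Weiss law, valid for temperatures above T_CURIE_K."""
        return A_CONST + CURIE_CONSTANT_K / (t_k - T_CURIE_K)

    for t in (400.0, 450.0, 500.0):
        print(t, round(dielectric_constant(t)))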

Examples of Ferroelectric Materials


• BaTiO3
• PbTiO3
• Lead zirconate titanate (PZT)
• Triglycine sulphate
• PVDF
• Lithium tantalate, etc.

Ferroelectric detectors

• The value of the spontaneous polarization depends on the temperature, i.e., a change
in the temperature of the crystal produces a change in its polarization, which can be
detected.
• This is called the pyroelectric effect .
• The pyroelectric effect can be described in terms of the pyroelectric coefficient λ.
• A small, gradual change in temperature ΔT of the crystal leads to a change in the
spontaneous polarization vector ΔPs given by ΔPs = λΔT.

• If it is possible to reverse the polarization direction of a pyroelectric crystal by applying
a sufficiently intense external field, then the crystal is said to be a ferroelectric.

• It should be noted that both piezoelectricity and pyroelectricity are inherent properties
of a crystal due, entirely, to its atomic arrangement or crystal structure.
• Ferroelectricity, on the other hand, is an effect produced in a pyroelectric crystal by
the application of an external electric field.
• It has been observed that all ferroelectrics are pyroelectric and piezoelectric.
• All pyroelectrics are piezoelectric, but the converse is not true.

4. Resistive microbolometers

• A microbolometer is a specific type of bolometer used as a detector in a thermal
camera. Infrared radiation with wavelengths between 7.5 and 14 μm strikes the detector
material, heating it and thus changing its electrical resistance.

• This resistance change is measured and processed into temperatures which can be used
to create an image.
• Unlike other types of infrared detecting equipment, microbolometers do not require
cooling.
• A microbolometer is an uncooled thermal sensor.
• Previous high-resolution thermal sensors required exotic and expensive cooling
methods, including Stirling-cycle coolers and liquid-nitrogen coolers.
• These methods of cooling made early thermal imagers expensive to operate and
unwieldy to move.
• Also, older thermal imagers required a cool down time in excess of 10 minutes before
being usable.


• A microbolometer consists of an array of pixels, each pixel being made up of several
layers.
• The cross-sectional diagram shown in Figure 1 provides a generalized view of the
pixel. Each company that manufactures microbolometers has its own unique
procedure for producing them, and they even use a variety of different absorbing
materials.
• In this example the bottom layer consists of a silicon substrate and a readout integrated
circuit (ROIC).
• Electrical contacts are deposited and then selectively etched away. A reflector, for
example, a titanium mirror, is created beneath the IR absorbing material.
• Since some light is able to pass through the absorbing layer, the reflector redirects this
light back up to ensure the greatest possible absorption, hence allowing a stronger signal
to be produced.
• Next, a sacrificial layer is deposited so that later in the process a gap can be created to
thermally isolate the IR absorbing material from the ROIC.
• A layer of absorbing material is then deposited and selectively etched so that the final
contacts can be created.
• To create the final bridge-like structure shown in Figure 1, the sacrificial layer is
removed so that the absorbing material is suspended approximately 2 μm above the
readout circuit. Because microbolometers do not undergo any cooling, the absorbing
material must be thermally isolated from the bottom ROIC, and the bridge-like structure
allows for this.
• After the array of pixels is created the microbolometer is encapsulated under a vacuum
to increase the longevity of the device. In some cases the entire fabrication process is
done without breaking vacuum.

The two most commonly used IR-radiation-detecting materials in microbolometers are
amorphous silicon and vanadium oxide. Much research has been done to test the feasibility of
other materials, including Ti, YBaCuO, GeSiO, poly-SiGe, BiLaSrMnO and protein-based
cytochrome c and bovine serum albumin.
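
A rough sketch of how the resistance change is turned into a temperature reading; the
temperature coefficient of resistance (about −2 %/K, typical of vanadium oxide films) and the
pixel resistance below are assumed, illustrative values:

    # Microbolometer read-out sketch: Delta R / R0 ~= TCR * Delta T
    TCR_PER_K = -0.02    # assumed temperature coefficient of resistance (about -2 %/K)
    R0_OHM = 100e3       # assumed pixel resistance at the reference temperature

    def pixel_delta_t(measured_r_ohm):
        """Estimate the pixel temperature rise from its measured resistance."""
        return (measured_r_ohm - R0_OHM) / (R0_OHM * TCR_PER_K)

    # a pixel reading 99.8 kOhm corresponds to a temperature rise of about 0.1 K
    print(pixel_delta_t(99.8e3))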

Advantages

• They are small and lightweight. For applications requiring relatively short ranges, the
physical dimensions of the camera are even smaller. This property enables, for example,
the mounting of uncooled microbolometer thermal imagers on helmets.
• Provide real video output immediately after power on.
• Low power consumption relative to cooled detector thermal imagers.
• Very long MTBF.
• Less expensive compared to cameras based on cooled detectors.

Disadvantages

• Less sensitive (due to higher noise) than cooled thermal and photon detector imagers, and
as a result have not been able to match the resolution of cooled semiconductor based
approaches.

7. Discuss the operation of Pyroelectric vidicon based thermographic camera



8. Discuss the Application of Clinical Thermography

Medical thermography has a large bibliography and the technique has been used to
investigate a wide variety of clinical conditions.
Most important among these are the following, and we now consider briefly each of these in
turn:
(i) the assessment of inflammatory conditions such as rheumatoid arthritis;
(ii) vascular-disorder studies, including (a) the assessment of deep vein thrombosis
(DVT), (b) the localisation of varicosities, (c) the investigation of vascular
disturbance syndromes, and (d) the assessment of arterial disease;
(iii) metabolic studies;
(iv) the assessment of pain and trauma;
(v) oncological investigations; and
(vi) physiological studies

9. How thermal imaging is used for the assessment of inflammatory conditions

• Arthritis frequently involves a chronic inflammatory lesion that results in overperfusion
of tissue and a consequent increase in skin temperature.
• Thermography is able to distinguish between deep-seated inflammation and more
cutaneous involvement.
• Furthermore, it is useful for evaluating and monitoring the effects of drug and physical
therapy. By standardising conditions and cooling peripheral joints so that the skin is
within a specific temperature range (26°C–32°C for the lower limbs and 28°C–34°C
for the joints of the upper limbs), it is possible to quantify the thermal pattern in the form
of a thermographic index on a scale from 1.0 to 6.0, on which healthy subjects are
usually found to be below 2.5 while inflamed joints can be raised to 6.0.
• This quantitative analysis is a very effective means of assessing the efficacy of anti-
inflammatory drugs used in the treatment of rheumatic conditions.

10. How thermal imaging is used for the investigation of vascular disorders

Deep Vein Thrombosis

• Venography and ultrasound are the investigations most commonly used for diagnosing
DVT. Venography is invasive, time consuming and labour intensive.

• It also exposes the patient to the risks of contrast allergy. Doppler ultrasound avoids the
risks of venography but is operator dependent and can be time consuming.
• An increase in limb temperature is one of the clinical signs of DVT, and thermography
can be used to image the suspected limb.
• The thermal activity is thought to be caused by the local release of vasoactive chemicals
associated with the formation of the venous thrombosis, which cause an increase in the
resting blood flow
• The thermal test is based on the observation of delayed cooling of the affected limb.
• The patient is examined in a resting position and the limbs cooled, usually with a fan.
Recent calf-vein thrombosis produces a diffuse increase in temperature of about 2°C.
Localisation of Varicosities

• The localisation of incompetent perforating veins in the leg prior to surgery may be
found thermographically.
• The limb is first cooled with a wet towel and fan and the veins are drained by raising
the leg, to an angle of 30°–40° with the patient lying supine.
• A tourniquet is applied around the upper third of the thigh to occlude superficial veins
and the patient then stands and exercises the leg.
• The sites of incompetent perforating veins are identified by areas of ‘rapid rewarming’
below the level of the tourniquet.
• Varicosities occurring in the scrotum can alter testicular temperatures.
• It is known that the testes are normally kept at about 5°C lower than core-body
temperature: if this normal temperature differential is abolished, spermatogenesis is
depressed.
• The incidence of subfertility in man is high and the cause is often not known, but the
ligation of varicocele in subfertile men has been shown to result in improvements in
fertility, with pregnancies in up to 55% of partners .
• Thermography has been used extensively to determine the thermal effect and extent of
clinical varicocele, to investigate infertile men or subfertile men who might have
unsuspected varicocele, to examine patients who have had corrective surgery and to
determine whether any residual veins are of significance after ligation of the varicocele.
• In a cool environment of 20°C, the surface temperature of the normal scrotum is 32°C,
whereas a varicocele can increase this to 34°C–35°C.
Vascular Disturbance Syndromes

• Patients with Raynaud’s disease and associated disorders have cold and poorly perfused
extremities.
• The viability of therapeutic intervention will depend upon whether the vasculature has
the potential for increased blood flow.
• Howell et al. (1997) have measured thermographically the temperature of toes in
Raynaud’s phenomenon.
• These authors concluded that the baseline mean toe temperature and medial lateral toe
temperature difference are good diagnostic indicators of this condition in the feet.

• Cold challenge can enhance temperature differentials since these recover markedly
better over a 10min period in healthy individuals than in Raynaud’s patients.
• Temperature changes also occur in Paget’s disease.
• This is a disease of bone that results in increased blood flow through bone tissue which,
in turn, can increase the bone temperature and overlying skin.
Assessment of Arterial Disease

• Peripheral arterial disease can cause ischaemia, necessitating amputation of an affected
limb.
• Tissue viability of the limb has to be assessed to determine the optimal level of
amputation; this will depend upon skin blood flow, perfusion pressure and arterial
pressure gradients.
• Skin-temperature studies that show hypothermal patterns can indicate the presence of
arterial stenosis
• The non-invasive nature of the thermal imaging allows the method to be used even on
patients who are in extreme pain

11. How thermal imaging is used for metabolic studies

• Skin temperature is influenced by the proximity of the skin and superficial tissues to
the body core and the effects of subcutaneous heat production, blood perfusion and the
thermal properties of the tissues themselves.
• Subcutaneous fat modifies surface temperature patterns, as does muscular exercise.
• The complex interplay of these factors limits the role of IR imaging in metabolic
investigations to the study of the most superficial parts of the body surface.
• For example, in the case of newborn infants, it has been postulated that the tissue over
the nape of the neck and interscapular region consists of brown adipose tissue, which
plays an important role in heat production.
• Thermal imaging has been used to study this heat distribution directly after birth
(Rylander 1972).
• The presence of brown adipose tissue in man has also been investigated by this means:
it has been observed that metabolic stimulation by adrenaline (ephedrine) produces an
increase in skin temperature in the neck and upper back.
• Thermal imaging has also been used in the study of metabolic parameters in diabetes
mellitus

12. How thermal imaging is used for the assessment of pain and trauma

• The localisation of temperature changes due to spinal-root-compression syndromes,
impaired sympathetic function in peripheral nerve injuries and chronic-pain syndromes
depends largely upon the finding that, in healthy subjects, thermal patterns are
symmetrical.
• Asymmetrical heat production at dermatomes and myotomes can be identified
thermographically
• Temperature changes are probably related to reflex sympathetic vasoconstriction within
affected extremity dermatomes and to metabolic changes or muscular spasm in
corresponding paraspinal myotomes.
• Thermography has also found a place in physical medicine in the assessment of such
conditions as ‘frozen shoulder’
• Frozen shoulder is usually characterised by a lowering of skin temperature, which is
probably due to decreased muscular activity resulting from a decreased range of motion.
• Thermography can be used to assess tissue damage caused by a burn or frostbite.
• The treatment of a burn depends upon the depth of injury and the surface area affected.
• Whereas a first-degree burn shows a skin erythema, a third-degree burn is deeper and
shows a complete absence of circulation.
• Identification of a second-degree burn is sometimes difficult, and temperature
measurements are used to assist with this assessment.
• Third-degree burns have been found to be on average 3°C colder than surrounding
normal skin. In conditions of chronic stress (such as bed sores, poorly fitting prosthetic
devices), thermal imaging can be used to assess irritated tissue prior to frank
breakdown.
Oncological Investigations

• Thermal imaging has been used as an adjunct in the diagnosis of malignant disease, to
assess tumour prognosis and to monitor the efficacy of therapy.
• Malignant tumours tend to be warmer than benign tumours due to increased metabolism
and, more importantly, due to vascular changes surrounding the tumour.
• It has been observed that surface temperature patterns are accentuated and temperature
differences increased by cooling the skin surface in an ambient 20°C.
• This procedure reduces blood flow in the skin and subcutaneous tissues and, since blood
flow through tumour vasculature is less well controlled than through normal
vasculature, the effects of cooling the skin surface are less effective over the tumour
than over normal tissue.
• Imaging has been used to determine the extent of skin lesions and to differentiate
between benign and malignant pigmented lesions.
• The method has been used by many investigators as an aid in the diagnosis of malignant
breast disease, but it lacks sensitivity and specificity.
• It has been advocated as a means of identifying and screening ‘high-risk groups’ and
for selecting patients for further investigation by mammography.
• Breast tumours that cause large temperature changes (more than 2.5°C) tend to have a
poor prognosis
• The treatment of malignant disease by radiotherapy, chemotherapy or hormone therapy
can be monitored thermographically.

• Serial temperature measurements indicating temperature drops of 1°C or more are
usually consistent with tumour regression.
Physiological Investigations

• In the physiology of cutaneous perfusion, Anbar et al. (1997) used dynamic area
telethermometry.
• In this technique, 256 images at 66 IR images per second were captured with a 256 ×
256 FPA gallium-arsenide quantum-well photon-detector camera.
• The authors studied different areas of skin on the human forearm and observed
temperature modulation about 1Hz with an amplitude of about 0.03K due to the cardiac
cycle.
• Their findings demonstrated the potential use of high-speed imaging in peripheral
haemodynamic studies.
• Other workers have shown that imaging is useful in experimental respiratory function
studies (Perks et al. 1983), and for studying hyperthermia (Jones and Carnochan 1986).
• Improvements in examination techniques and the advantages of digital imaging have
improved the reliability of thermal imaging significantly in the study of diseases such
as diabetes mellitus and coronary heart disease (Marcinkowska-Gapinska and Kowal
2006, Ring 2010).
• The development of FPA systems allowed thermography to be used for public
screening such as for SARS and pandemic influenza.
• Outgoing or incoming travellers have been screened at airports to identify individual
travellers with a raised body temperature caused by SARS or Avian influenza.
• In practice, real-time screening of facial temperature patterns is used to identify
individuals for further investigation. Adequate digital data can be captured within 2 s.
• The positive threshold temperature is usually taken to be 38°C.
• The control of infectious diseases is an international problem and various international
organisations are involved in establishing international standards for temperature
screening of the public (Ring 2006).
• Cooled FPA detector systems give good thermal and spatial resolution but suffer from
the limited length of time the cooling system can operate.
• Uncooled camera systems are more suited for continuous use over long time periods
but are best used in conjunction with an external reference source.
• A further problem is the lack of published data relating facial temperature patterns and
deep body temperatures. This problem is accentuated by possible ethnic differences
caused by facial topography.
