
Dr. Mohannad K. Sabir
Fifth Class / Biomedical Engineering
• An image is an artifact, such as a two-dimensional picture, that has a
similar appearance to some subject, such as a physical object or a
person.
• Image processing is any form of signal processing for which the
input is an image and the output is either an image or a set of
characteristics or parameters related to the image.
• Image processing is used in areas such as multimedia,
computing, secure image communication, biomedical imaging,
pattern recognition, remote sensing, image compression and
retrieval, etc.
• The need/motivation for image processing:
• The enhancement/improvement of pictorial information for:
• human interpretation
• automatic management (identification, storage, transmission,
quantification, ...)
• What is digital image processing?
• Processing of an image by means of digital computers.

Image analysis - Image processing - Computer vision


One of the first application areas of digital images was the newspaper
industry (images sent by cable between London and New York), where it
was important to reduce transfer time. Digital computers appeared around
1940; the first computers able to perform digital image manipulations
appeared in the early 1960s.
Principal energy source for images today: electromagnetic
energy spectrum.
• Gamma rays:
• Nuclear medicine (injection of a radioactive tracer)
• Astronomical observations (objects generate gamma rays)
PET = Positron Emission Tomography: imaging at the molecular level


• X-rays (the oldest radiation-type imaging)
• Discovered in 1895 by the German physicist Wilhelm Röntgen
(Nobel Prize in Physics, 1901)
• Used in medicine/industry/astronomy
[Figure: an X-ray picture (radiograph) taken by Röntgen of Albert von
Kölliker's hand at a public lecture on 23 January 1896]

• X-ray tube (cathode/anode, controlled by voltage) emits the rays;
part is absorbed by the object, the rest is captured on a film and
digitized.
• C.A.T. (Computerized Axial Tomography) uses X-rays.
• Ultraviolet band:
• Microscopy (fluorescence): the excited electron jumps to another
energy level, emitting light as a lower-energy photon in the red region
• Lasers
• Biological imaging
• Astronomical imaging
• Industrial inspection
[Figure: a fluorescent tracer is bound to a molecular target]
NASA/Landsat: Mount Everest
Mount Everest is the highest mountain on Earth, rising 29,029
feet above sea level. It is located on the border of Nepal and
Tibet in the Himalayan mountain range. In Tibet the mountain
is known as Chomolungma and in Nepal it is called Sagarmatha.

This image of Mount Everest was taken from the International
Space Station on November 26, 2003.
In this image you can see Mount Everest covered in white
snow with Lhotse, the fourth highest mountain on Earth
connected via the South Col — the saddle point between the
two peaks. Vegetation appears green and rock and soil
appear brown in the image.
This natural color Landsat 5 image was collected on June 11,
2005. It was created using bands 3, 2 and 1. Mount Everest is
found on Landsat WRS-2 Path 140 Row 41.
NASA/Landsat: Mono Lake, California
This Landsat 7 image of Mono Lake was
acquired on July 27, 2000. This image is
a false-color composite made from the
mid-infrared, near-infrared, and green
spectral channels of the Landsat 7 ETM+
sensor – it also includes the
panchromatic 15-meter band for spatial
sharpening purposes. In this image, the
waters of Mono Lake appear a bluish-
black and vegetation appears bright
green. You will notice the vegetation to
the west of the lake and following the
tributaries that enter the lake.
Visible range:
automated inspection tasks
Radio band:
MRI imaging
(Nobel Prizes: Bloch 1952, …, 2003)

A strong magnet is used; radio waves are transmitted in short
pulses, and each pulse causes a response pulse (an echo) from
the tissue.
Retina: consists of receptors
- cones: highly sensitive to color; photopic or bright-light vision
- rods: give an overall picture with reduced detail; scotopic or
dim-light vision
The rods in the retina respond to low intensity levels (scotopic
or dim-light vision) and the cones to higher intensity levels
(photopic or bright-light vision); between them they can adapt to
a huge range of light intensities, on the order of 10^10, known as
the dynamic range.

Although the eye has a huge dynamic range, it cannot
simultaneously distinguish all these intensity levels; instead, it
adapts to regions within the total dynamic range by a process known
as brightness adaptation or accommodation.
The human visual response is limited to
detecting brightness changes of about 2–3%,
so that typically it can distinguish only around
25–30 brightness levels in a scene.
Thus, if the range between black and white
were divided into more than 30 equal levels of
brightness, the eye would be unable to
distinguish between adjacent levels.
Classical optical theory: the distance between the lens center and the
retina varies between 14 and 17 mm, depending on the lens's focusing.
A ray passes through the center C of the lens, so the two triangles
formed by the object and its retinal image are proportional:

H/D = h/d

where H is the height of the object, D its distance from the lens,
h the height of the image on the retina (note that it is located
close to the fovea), and d the lens-to-retina distance.
Perceived intensity is not a simple function of actual intensity:
- the eye under/overshoots around the boundary of regions of
different intensity (Mach bands);
- a region's perceived brightness also depends on the background
intensity (simultaneous contrast).

© 1992–2008 R. C. Gonzalez & R. E. Woods


Optical illusions and perception:
The eye "fills in" non-existing information or wrongly perceives
geometrical properties of objects.

If an image is analog, for example a film radiograph, it can be digitized
to obtain a digital image; the same considerations apply in mapping a
real object directly to a digital image.

There are two steps involved: spatial quantization and intensity
quantization. The term quantization means that a variable is not allowed
to take any value, but only certain allowable (quantized) values; for
example, only integer values but not the non-integer values between
them.
The rate at which samples are taken is the sampling rate or sampling
frequency, fs, and is expressed as the number of samples taken per unit
distance, in units of samples per centimeter or dots (samples) per inch,
(dpi). The distance between samples, d, is the inverse of the sampling
frequency.
Usually the sampling frequency is the same in both directions, and the
small, square, area around each position, with sides equal to the distance
between samples, d, make up a single pixel in the digitized image.
The sampling frequency determines the distance between samples, and this
distance becomes the linear pixel size.
Each pixel represents not a point in the image, but rather an elementary
cell of the grid with its own individual brightness; the image has become
spatially quantized.
Distance along the x and y directions is no longer continuous; instead it
proceeds in discrete increments, each given by the size of a pixel.
Nyquist–Shannon sampling theorem:

fs ≥ 2 · fmax

The sampling theorem can be expressed equivalently in terms of distances
rather than spatial frequencies: the sampling distance, d (the pixel size),
must be less than or equal to half of the inverse of the maximum spatial
frequency in the image, fmax. Thus, it must be less than or equal to half
the size of the smallest detail in the image (Lmin) in order to digitize
the image accurately. That is,

d ≤ 1/(2 · fmax)
or
d ≤ Lmin/2
Worked example
A chest radiograph is 14 inches by 17 inches (36 cm × 43 cm). If we want
to preserve all the detail in the image, to a spatial resolution of
5 cycles mm⁻¹, how many pixels would be required?
To preserve the spatial resolution of 5 cycles mm⁻¹, we need to sample 10
pixels mm⁻¹ (i.e. 2 pixels per cycle), resulting in pixels of size 0.1 mm.
This would require 3600 × 4300 pixels to cover the radiograph. If each
pixel is 8 bits deep (i.e. 1 byte per pixel), this would require a file of
size 14.8 MB; if we required more gray levels and used 16 bits per pixel,
the file size would be twice as big.
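As a sketch, the arithmetic of this worked example can be checked in Python (the `radiograph_storage` helper is illustrative, not from the text):

```python
# Sketch of the worked example: pixel count and file size for a chest
# radiograph digitized to preserve 5 cycles/mm of detail.

def radiograph_storage(width_mm, height_mm, f_max_cycles_per_mm, bits_per_pixel):
    """Return (pixels_x, pixels_y, file_size_bytes) for Nyquist-rate sampling."""
    fs = 2 * f_max_cycles_per_mm          # sampling theorem: fs >= 2 * fmax
    pixel_size_mm = 1 / fs                # d = 1/fs
    nx = int(width_mm / pixel_size_mm)
    ny = int(height_mm / pixel_size_mm)
    size_bytes = nx * ny * bits_per_pixel // 8   # b = M x N x k (in bits)
    return nx, ny, size_bytes

nx, ny, size = radiograph_storage(360, 430, 5, 8)
print(nx, ny, size / 1e6)   # 3600 4300 15.48  (15,480,000 bytes ~ 14.8 MiB)
```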
The discrete pixels formed around the sampled locations comprise the
spatially quantized image, but the values within the pixels are still the
sampled values measured from the original analog (i.e. continuous) image.

In order to form a digital image, these values need to be assigned to a
finite set of discrete values. This is the second step in the process of
digitizing an analog image, and is known as intensity (or brightness)
quantization.

The number of bits, b, required to store a digitized M × N image with k
bits per pixel is

b = M × N × k

Many digital images are 8 bits deep, i.e. they allocate 8 bits to each
pixel, resulting in 256 possible gray levels spanning black to white. In
general, allocating n bits per pixel gives 2^n shades of gray.
The coordinate system in Fig. 2.1(a) and the preceding discussion lead to
the following representation for a digitized image:

f(x, y) = | f(0, 0)     f(0, 1)     ...  f(0, N-1)   |
          | f(1, 0)     f(1, 1)     ...  f(1, N-1)   |
          | ...         ...              ...         |
          | f(M-1, 0)   f(M-1, 1)   ...  f(M-1, N-1) |
The right side of this equation is a digital image by definition. Each element
of this array is called an image element, picture element, pixel, or pel. The
terms image and pixel are used throughout the rest of our discussions to
denote a digital image and its elements.
• A given picture can be represented with different numbers of pixels
and various numbers of bits per pixel.
• Fewer pixels produce lower quality (spatial/pixel resolution, DPI).
• Fewer bits per pixel produce lower quality (gray-level resolution).
• Spectral resolution.
• There is a tradeoff between quality and picture storage
requirements.
Classification of images
• The most common 2-D image formats are listed in the following table.
Types of Images Based on Attributes
• In true colour images, each pixel has a colour obtained by mixing the
primary colours red, green, and blue. Each colour component is
represented like a grey-scale image using eight bits, so true colour
images mostly use 24 bits to represent all the colours.
• An indexed image is a special category of colour image. In most images
the full range of colours is not used, so it is better to reduce the
number of bits by maintaining a colour map, gamut, or palette with the
image.
• Like true colour images, pseudocolour images are also used widely in
image processing. True colour images are called three-band images.
However, in remote sensing applications, multi-band or multi-spectral
images are generally used. These images, which are captured by
satellites, contain many bands.
RGB or true-color images are 3-D
arrays that assign three numerical
values to each pixel, each value
corresponding to the red, green and
blue (RGB) image channel
component respectively.
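The true-colour and indexed representations described above can be sketched with NumPy arrays (the 4×4 toy image and two-entry palette are made up for illustration):

```python
import numpy as np

# Sketch: a true-colour image is an M x N x 3 array (R, G, B channels),
# while an indexed image stores one palette index per pixel plus a colour map.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[..., 0] = 255                      # pure red image: R=255, G=B=0

palette = np.array([[0, 0, 0],         # index 0 -> black
                    [255, 0, 0]],      # index 1 -> red
                   dtype=np.uint8)
indexed = np.ones((4, 4), dtype=np.uint8)   # every pixel uses palette entry 1
reconstructed = palette[indexed]            # expand back to true colour
print(np.array_equal(reconstructed, rgb))   # True
```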
Types of Images Based on Data Types (Classes)
Ex: single, double, signed or unsigned integer
What is digital image processing?
Low-level: input and output are images
 Primitive operations such as image preprocessing to reduce noise,
contrast enhancement, and image sharpening
Mid-level: inputs may be images, outputs are attributes extracted from
those images
 Segmentation
 Description of objects
 Classification of individual objects
High-level:
 Image analysis
 Image Acquisition:
 An image is captured by a sensor (such as a monochrome or color TV
camera) and digitized.
 If the output of the camera or sensor is not already in digital form,
an analog-to-digital converter digitizes it.
Camera:
• A camera consists of two parts:
1. A lens that collects the appropriate type of radiation emitted from
the object of interest and forms an image of the real object.
2. A semiconductor device, the so-called charge-coupled device (CCD),
which converts the image into an electrical signal.
Frame Grabber:
• A frame grabber only needs circuits to digitize the electrical signal
from the imaging sensor and store the image in the memory (RAM) of the
computer.
 Image Enhancement:
• To highlight certain features of interest in an image.
 Image Restoration:
• Improving the appearance of an image.
• Tends to be based on mathematical or probabilistic models of image
degradation.
Color Image Processing:
• Gaining in importance because of the significant increase in the use of digital
images over the Internet.
Wavelets:
• Foundation for representing images in various degrees of resolution.
• Used in image data compression and pyramidal representation.
 Compression:
• Reducing the storage required to save an image or the bandwidth required to
transmit it.
• Ex. JPEG (Joint Photographic Experts Group) image compression standard.
Morphological processing:
• Tools for extracting image components that are useful in the
representation and description of shape.
Image Segmentation:
• Computer tries to separate objects from the image background.
• It is one of the most difficult tasks in DIP.
• Output of the segmentation stage is raw pixel data, constituting
either the boundary of a region or all the points in the region itself.
Representation & Description:
• Representation: deciding whether the data should be represented as a
boundary or as a complete region.
Boundary representation focuses on external shape characteristics, such
as corners and inflections.
Region representation focuses on internal properties, such as texture or
skeletal shape.
Representation & Description:
Representation and description transform raw data into a form suitable
for the subsequent recognition processing.
[Figure: one connected component with 1 hole; one with 2 holes]
Recognition & Interpretation:
• Recognition: the process that assigns a label to an object based on
the information provided by its descriptors.
• Interpretation: assigning meaning to an ensemble of recognized
objects.
Knowledge base:
• Details the problem domain, e.g. the regions of an image where the
information of interest is known to be located.
• Helps to limit the search.
Not all the processes are needed in every application (e.g. the postal
code reading problem).
Spatial resolution: the smallest discernible detail in an image.
 Gray-level resolution: the smallest discernible change in gray level;
measuring discernible changes in gray level is a highly subjective
process.
• Due to hardware considerations, the number of gray levels is usually an
integer power of 2, The most common number is 8 bits, with 16 bits being
used in some applications where enhancement of specific gray-level ranges
is necessary. Sometimes we find systems that can digitize the gray levels of
an image with 10 or 12 bits of accuracy, but these are the exception rather
than the rule.
• It is not uncommon to refer to an L-level digital image of size M × N as
having a spatial resolution of M × N pixels and a gray-level resolution of
L levels.
• Figure below shows an image of size 1024*1024 pixels whose gray levels are
represented by 8 bits. The other images shown in below are the results of
subsampling the 1024*1024 image. The subsampling was accomplished by
deleting the appropriate number of rows and columns from the original image.
• For example, the 512*512 image was obtained by deleting every other row and
column from the 1024*1024 image. The number of allowed gray levels was kept
at 256.
• The simplest way to compare the effect of subsampling is to bring all the
subsampled images up to size 1024*1024 by row and column pixel replication.
The results are shown in the figures below, (b) through (f). Image (a) is
the same 1024*1024, 256-level image shown above; it is repeated to
facilitate comparisons.
• Next, the number of samples is kept constant and the number of gray
levels is reduced from 256 to 2, in integer powers of 2. Figure (a) below
is a 452*374 CAT projection image, displayed with k = 8 (256 gray levels).
Images such as this are obtained by fixing the X-ray source in one
position, thus producing a 2-D image in any desired direction.
• Figures (b) through (h) were obtained by reducing the number of bits
from k = 7 to k = 1 while keeping the spatial resolution constant at
452*374 pixels. The 256-, 128-, and 64-level images are visually identical
for all practical purposes.
• The 32-level image, however, has an almost imperceptible set of very
fine ridge-like structures in areas of smooth gray levels (particularly in
the skull). This effect, caused by the use of an insufficient number of
gray levels in smooth areas of a digital image, is called false contouring.
• The results in from above examples illustrate the effects produced on image
quality by varying N and k independently.
• An early study by Huang [1965] attempted to quantify experimentally the effects
on image quality produced by varying N and k simultaneously.
• Images similar to those shown in the figure below were used. The woman's
face is representative of an image with relatively little detail; the
picture of the cameraman contains an intermediate amount of detail; and
the crowd picture contains, by comparison, a large amount of detail.
• Sets of these three types of images were generated by varying N and k, and
observers were then asked to rank them according to their subjective quality.
Results were summarized in the form of so-called isopreference curves in the
Nk-plane
• The key point of interest in the present discussion is that
isopreference curves tend to become more vertical as the detail in the
image increases. This result suggests that for images with a large amount
of detail only a few gray levels may be needed.
• For example, the isopreference curve in Fig. above corresponding to the crowd
is nearly vertical. This indicates that, for a fixed value of N, the perceived quality
for this type of image is nearly independent of the number of gray levels used.
• It is also of interest to note that perceived quality in the other two image
categories remained the same in some intervals in which the spatial
resolution was increased, but the number of gray levels actually
decreased.
• The most likely reason for this result is that a decrease in k tends to increase
the apparent contrast of an image, a visual effect that humans often
perceive as improved quality in an image.
Aliasing and Moiré Patterns:
• The Shannon sampling theorem tells us that, if the function is sampled
at a rate equal to or greater than twice its highest frequency, it is
possible to recover the original function completely from its samples.
• If the function is undersampled, then a phenomenon called aliasing corrupts the
sampled image.
• The corruption is in the form of additional frequency components being
introduced into the sampled function. These are called aliased frequencies.
• Note that the sampling rate in images is the number of samples taken (in both
spatial directions) per unit distance.
• We can only work with sampled data that are finite in duration. We can
model the process of converting a function of unlimited duration into a
function of finite duration simply by multiplying the unlimited function by
a “gating function” that is valued 1 for some interval and 0 elsewhere.
• Unfortunately, this function itself has frequency components that extend
to infinity.
• However, aliasing is always present in a sampled image. The effect of
aliased frequencies can be seen under the right conditions in the form of
so called Moiré patterns.
• Figure below shows two identical periodic patterns of equally spaced
vertical bars, rotated in opposite directions and then superimposed on
each other by multiplying the two images.
• Moiré pattern, caused by a breakup of the periodicity, is seen in Fig. below
as a 2-D sinusoidal (aliased) waveform (which looks like a corrugated tin
roof) running in a vertical direction.
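Aliasing can be sketched in one dimension: sampling a sinusoid below the Nyquist rate makes its samples indistinguishable from those of a lower-frequency signal (the specific frequencies here are chosen purely for illustration):

```python
import numpy as np

# Sketch: undersampling a 1-D sinusoid produces an aliased (lower) frequency.
# A 9-cycle/unit signal sampled at 10 samples/unit (< 2*9) aliases to 1 cycle.
f_true = 9.0
fs = 10.0                               # below the Nyquist rate 2*f_true
n = np.arange(0, 10)
samples = np.sin(2 * np.pi * f_true * n / fs)
alias = np.sin(2 * np.pi * (fs - f_true) * n / fs)   # apparent 1-cycle signal
print(np.allclose(samples, -alias))      # sin(2*pi*9n/10) = -sin(2*pi*n/10)
```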
Zooming and Shrinking Digital Images:
• How to zoom and shrink a digital image, is related to image
sampling and quantization because zooming may be viewed as
oversampling, while shrinking may be viewed as
undersampling.
• The key difference between these two operations and sampling and
quantizing an original continuous image is that zooming and
shrinking are applied to a digital image.
• Zooming requires two steps: the creation of new pixel locations, and
the assignment of gray levels to those new locations.
• Suppose that we have an image of size 500*500 pixels and we want to enlarge it
1.5 times to 750*750 pixels.
• One of the easiest ways to visualize zooming is laying an imaginary
750*750 grid over the original image.
• In order to perform gray-level assignment for any point in the overlay, we look for
the closest pixel in the original image and assign its gray level to the new pixel in
the grid.
• This method of gray-level assignment is called nearest
neighbor interpolation.
• Pixel replication is a special case of nearest neighbor interpolation.
• Pixel replication is applicable when we want to increase the size of an image an
integer number of times.
• For instance, to double the size of an image, we can duplicate each column.
• This doubles the image size in the horizontal direction.
• Then, we duplicate each row of the enlarged image to double the size in the
vertical direction.
• The same procedure is used to enlarge the image by any integer number of
times (triple, quadruple, and so on).
• Although nearest neighbor interpolation is fast, it has the undesirable feature
that it produces a checkerboard effect that is particularly objectionable at
high factors of magnification.
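A minimal sketch of the zooming procedure described above (the `nn_zoom` helper is illustrative; it uses a floor mapping from output grid to source pixels, and for integer factors it reduces to pixel replication):

```python
import numpy as np

# Sketch: zoom by laying a new grid over the original image and copying
# the gray level of the corresponding original pixel into each new cell.
def nn_zoom(img, new_h, new_w):
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h   # source row for each output row
    cols = np.arange(new_w) * w // new_w   # source column for each output column
    return img[rows][:, cols]

img = np.array([[10, 20],
                [30, 40]], dtype=np.uint8)
doubled = nn_zoom(img, 4, 4)     # integer factor: identical to pixel replication
print(doubled[0].tolist())       # [10, 10, 20, 20]
```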
• A slightly more sophisticated way of accomplishing gray-level
assignments is bilinear interpolation using the four nearest
neighbors of a point.
• Let (x', y') denote the coordinates of a point in the zoomed image and
let v(x', y') denote the gray level assigned to it.
• For bilinear interpolation, the assigned gray level is given by

v(x', y') = a·x' + b·y' + c·x'·y' + d

• where the four coefficients are determined from the four equations in
four unknowns that can be written using the four nearest neighbors of
point (x', y').
• Image shrinking is done in a similar manner as just described for zooming.
• The equivalent process of pixel replication is row-column deletion.
• For example, to shrink an image by one-half, we delete every other row and
column.
• We can use the zooming grid analogy to visualize the concept of shrinking by
a non integer factor, except that we now expand the grid to fit over the
original image, do gray-level nearest neighbor or bilinear interpolation, and
then shrink the grid back to its original specified size.
• f(x,y) must be nonzero and finite, that is

0 < f(x,y) < ∞

The function f(x,y) may be characterized by two components:
1. The amount of source illumination incident on the scene being viewed,
called the illumination component and denoted by i(x,y); and
2. The amount of illumination reflected by the objects in the scene,
called the reflectance component and denoted by r(x,y).

f(x,y) = i(x,y) · r(x,y)

where
0 < i(x,y) < ∞
and
0 < r(x,y) < 1
• The nature of i(x,y) is determined by the illumination source, and
r(x,y) is determined by the characteristics of the imaged objects.
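The illumination-reflectance model can be sketched numerically (the illumination level and reflectance values are illustrative assumptions; reflectances near 0.01 and 0.93 are typical of black velvet and snow, respectively):

```python
import numpy as np

# Sketch: forming f(x, y) = i(x, y) * r(x, y) from an illumination field
# (0 < i < inf) and a reflectance map (0 < r < 1).
i = np.full((2, 2), 500.0)               # uniform illumination (assumed value)
r = np.array([[0.01, 0.65],              # e.g. black velvet vs. a mid gray
              [0.80, 0.93]])             # ... vs. a snow-like surface
f = i * r
print(np.allclose(f, [[5.0, 325.0], [400.0, 465.0]]))   # True
```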
Radiography (X-Rays)

Image processing for BME 1_Lec 5_ 2020 102


Radiography (X-Rays)



Radiography (X-Rays Properties)



X-Ray Spectra
• In general, there are three ways to characterize the "quality" of
electromagnetic waves:
– Wavelength
– Frequency
– Photon energy

λ = c / f
E = h · f

• f - frequency, hertz (Hz)
• λ - wavelength, meters (m)
• E - photon energy, electron volts (eV)
• c - speed of light, 3×10^8 m/s
• h - Planck's constant, 4.1×10^-15 eV·s (eV/Hz)
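A sketch of these relations, computing the photon energy for an assumed 0.02 nm wavelength in the X-ray band (the constants follow the values listed above):

```python
# Sketch: relating wavelength, frequency, and photon energy
# (lambda = c/f, E = h*f) for a typical diagnostic X-ray photon.
C = 3.0e8            # speed of light, m/s
H = 4.1e-15          # Planck's constant, eV*s (i.e. eV/Hz)

def photon_energy_eV(wavelength_m):
    f = C / wavelength_m          # frequency from lambda = c/f
    return H * f                  # E = h*f

E = photon_energy_eV(0.02e-9)     # assumed 0.02 nm wavelength, X-ray band
print(round(E / 1000, 1))         # 61.5  (about 61.5 keV)
```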



What are X-Rays?
o "X" stands for "unknown".
o X-ray imaging is also known as radiography or Röntgen imaging.
o X-rays are electromagnetic radiation that can ionize the matter through
which they pass, as they have a high energy content.
o The ionization can damage DNA and cells in human tissues. However,
X-rays can penetrate the body to allow noninvasive visualization of the
internal anatomy of the human body. To reduce the ill effects of
ionization during radiography, new X-ray techniques are being developed
to minimize the radiation dose.
o If the electron beam is accelerated with enough energy by applying a
suitable voltage, the radiation produced is in the X-ray portion of the
electromagnetic spectrum.



Physics of X-Ray Radiography: X-Ray Interaction with Tissue



X-Ray Imaging: How does it work?



Generation of X-Rays

Cathode (electron source) → Anode (X-ray source)

- X-rays emanate from a small point source and pass through a portion of
the body onto a detector, which records the X-rays that reach it as an
image called a radiograph.



How does it work (Operating Principles)

- X-rays are absorbed by the body in relation to the specific density and
atomic number of the various tissues.
- In irradiating a volume of interest, these absorption differences are
recorded on an image receptor.
- A high-voltage generator supplies the essential power to the X-ray tube.


How does it work (Operating Principles)

- The X-ray exposure is kept to a precise and finite duration by an
electronic timer switch.
- The exposure is also automatically terminated after a certain amount of
radiation has been received by the image receptor, with the help of a
phototiming circuit.
- The operator selects all operating parameters, such as exposure and
radiation dose, from the operator's console.


Restriction (Collimator)

A collimator is used at the exit port of the X-Ray tube to adjust the size
and the shape of the X-Ray field.



Filtration



Compensation Filters



Grids



Generic System Description

Source → Restrictor (Collimator) → Subject → Anti-scatter grid → Detector

- Source: produces X-rays from electrical energy.
- Restrictor (collimator): determines the size and shape of the beam.
- Anti-scatter grid: selectively removes scattered photons.
- Detector: converts X-rays to light and records the image.


Generic System Description

- A high-voltage generator supplies the essential power to the X-ray tube.
- A collimator is used at the exit port of the X-ray tube to adjust the
size and the shape of the X-ray field.
- The X-ray exposure is kept to a precise and finite duration by an
electronic timer switch.
- The exposure is also automatically terminated after a certain amount of
radiation has been received by the image receptor, with the help of a
phototiming circuit.
- X-rays are absorbed by the body in relation to the specific density and
atomic number of the various tissues. In irradiating a volume of interest,
these absorption differences are recorded on an image receptor.
- The operator selects all operating parameters, such as exposure and
radiation dose, from the operator's console.
Generic System Description
An image receptor (detector) is a device that can detect and record an
X-ray image. It is placed below the patient so that the X-rays, after
passing through the patient, fall on the image receptor.
The patient's anatomy modulates the intensity of the X-ray field as it
passes through the body. The differential X-ray absorption and
transmission by the tissues of the body result in an exit radiation beam
that varies in intensity in two dimensions.



X-Ray Attenuation
• For medical imaging, we can assume that X-rays travel along straight
lines (rays).
• As the beam passes through matter, some photons are absorbed or
scattered, i.e. X-rays are removed from the beam. This process is called
attenuation.



Interaction of Photons with Matter: Attenuation (Homogeneous Slab)

I = I₀ · e^(−μx)

where μ is the linear attenuation coefficient, in cm⁻¹, and x is the
slab thickness.
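Assuming the standard Beer-Lambert form for attenuation through a homogeneous slab, the computation can be sketched as (the value μ = 0.5 cm⁻¹ is an assumed, illustrative figure, not from the text):

```python
import math

# Sketch: Beer-Lambert attenuation through a homogeneous slab,
# I = I0 * exp(-mu * x), with mu the linear attenuation coefficient (cm^-1).
def transmitted_intensity(I0, mu_per_cm, thickness_cm):
    return I0 * math.exp(-mu_per_cm * thickness_cm)

# Illustrative (assumed) value: mu = 0.5 cm^-1.
I = transmitted_intensity(1000.0, 0.5, 2.0)
print(round(I, 1))   # 367.9 -> about 37% of the photons survive a 2 cm slab
```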



Interaction of Photons with Matter: Attenuation (Homogeneous Slab)



Interaction of Photons with Matter: Attenuation (Non-
homogeneous Slab)



Interaction of Photons with Matter: Attenuation

Lower energies distinguish different materials better than higher energies.
Interaction of Photons with Matter: Attenuation



Contrast Agents



Projection Radiography
Generic System Description



Projection vs. Tomography



Projection vs. Tomography



Radiography (X-Ray Imaging)
What does the image show?
X-Ray Image of Hand



Radiography (X-Ray Imaging)
What is it?
• Two X-ray views of the right hand are shown in the first image; the same
hand is shown in the second image with high contrast.
• A fracture of the middle finger is seen on both views, though it is
clearer on the view on the left. This image can be used for diagnosis: to
distinguish between a sprain and a fracture, and to choose a course of
treatment.



X-Rays at Present

• Clear images of bones
• Some indication of tissue
• No tissue detail (tendon, muscle, skin)
• Negative image: bone is white, air is black



Chest X-Ray

• Clear images of bone
– ribs, vertebrae, clavicles
• Soft tissue
– shoulder muscles, heart, abdomen
• Pattern of passages in lungs



Abdominal X-Ray

• Visible: bony structures
– vertebrae, pelvic bones, legs, ribs
• Soft tissues
– liver, stomach, leg muscles
• Confusing image of intestines
– intestinal gas, walls
• Cannot see:
– details of liver, back muscles, kidneys



Abdomen - more

• Abdomen after barium contrast enema (real-time radiography)
• Large intestine easily visible



Another Abdomen

• Contrast medium in aorta (angiography)
• Visible:
– descending aorta
– renal arteries
– iliac arteries



Pelvic X-Ray

• Can see
– Fracture in pelvis
– Femur

• Cannot see
– Soft tissues



Skull

• Can see bones, scalp
• Cannot see ventricles, blood vessels



X-Rays



Radiography (recap)
o Radiography was the first medical imaging technology, made possible when the physicist Wilhelm
Röntgen discovered X-Rays.
o Radiography defined the field of radiology and gave rise to radiologists, physicians who specialize
in the interpretation of medical images.
o Radiography is performed with an X-Ray source on one side of the patient and a (typically flat)
X-Ray detector on the other side. A short-duration (typically less than ½ second) pulse of X-Rays
is emitted by the X-Ray tube, a large fraction of the X-Rays interact in the patient, and some of
the X-Rays pass through the patient and reach the detector, where a radiographic image is
formed.
o The homogeneous distribution of X-Rays that enters the patient is modified by the degree to
which the X-Rays are removed from the beam (i.e., attenuated) by scattering and absorption
within the tissues.
o The attenuation properties of tissues such as bone, soft tissue, and air inside the patient are very
different, resulting in a heterogeneous distribution of X-Rays that emerges from the patient.
o The radiographic image is a picture of this X-ray distribution. The detector used in radiography
can be photographic film (e.g., screen-film radiography) or an electronic detector system (i.e.,
digital radiography).



Radiography

o Transmission imaging refers to imaging in which the energy source is outside the
body on one side, and the energy passes through the body and is detected on the
other side of the body.
Radiography is a transmission imaging modality.
o Projection imaging refers to the case when each point on the image corresponds
to information along a straight line trajectory through the patient.
Radiography is also a projection imaging modality.
o Radiographic images are useful for a very wide range of medical indications,
including the diagnosis of broken bones, lung cancer, cardiovascular disorders,
etc.
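The straight-line-trajectory idea can be sketched numerically: each detector pixel records exp of minus the line integral of the attenuation map along its ray. A toy parallel-beam model (the attenuation values and grid are hypothetical):

```python
import numpy as np

# Toy 2D attenuation map (1/cm per voxel): low-attenuation background
# with a dense inclusion. All values are illustrative assumptions.
mu = np.full((8, 8), 0.02)
mu[3:5, 2:6] = 0.5            # dense "bone-like" block

voxel_cm = 1.0
# Parallel vertical rays: each detector column records the line
# integral of mu along its ray, and the measured intensity follows
# I/I0 = exp(-sum(mu * dl)).
line_integrals = mu.sum(axis=0) * voxel_cm
intensity = np.exp(-line_integrals)   # I/I0 per detector pixel

print(intensity.round(3))  # dimmer where rays crossed the inclusion
```

Columns whose rays pass through the inclusion come out darker, illustrating why every point in the projection mixes together all structures along its ray.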



Fluoroscopy

https://youtu.be/-DJiW1YADoE



Radiography (Fluoroscopy)
o Fluoroscopy refers to the continuous acquisition of a sequence of X-Ray
images over time, essentially a real-time X-Ray movie of the patient. It is a
transmission projection imaging modality and is, in essence, just real-time
radiography.
o Fluoroscopic systems use X-Ray detector systems capable of producing
images in rapid temporal sequence.
o Fluoroscopy is used for positioning catheters in arteries, visualizing
contrast agents in the GI tract, and for other medical applications such as
invasive therapeutic procedures where real-time image feedback is
necessary. It is also used to make X-Ray movies of anatomic motion, such
as of the heart or the esophagus.



Radiography (Mammography)
o Mammography is radiography of the breast, and is thus a transmission projection type of imaging. To
accentuate contrast in the breast, mammography makes use of much lower X-Ray energies than
general-purpose radiography, and consequently the X-Ray sources and detector systems are designed
specifically for breast imaging.
o Mammography is used to screen asymptomatic women for breast cancer (screening mammography)
and is also used to aid in the diagnosis of women with breast symptoms such as the presence of a
lump (diagnostic mammography).
o Digital mammography has eclipsed the use of screen-film mammography, and the use of computer-
aided detection is widespread in digital mammography.
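The benefit of lower X-Ray energies can be illustrated with a simple subject-contrast calculation: the gap between the attenuation coefficients of background tissue and a lesion narrows as energy rises, so the contrast C = 1 − exp(−(μ_lesion − μ_bg)·t) narrows with it. The coefficients below are hypothetical placeholders chosen only to show the trend:

```python
import math

def contrast(mu_bg, mu_lesion, t_lesion_cm):
    """Subject contrast between a ray through background tissue only and
    a ray that also crosses a lesion of thickness t:
    C = 1 - exp(-(mu_lesion - mu_bg) * t)."""
    return 1.0 - math.exp(-(mu_lesion - mu_bg) * t_lesion_cm)

# Hypothetical attenuation coefficients (1/cm), illustrative only:
# the tissue/lesion difference is larger at low energy.
low_kev  = contrast(mu_bg=0.80, mu_lesion=1.00, t_lesion_cm=0.5)
high_kev = contrast(mu_bg=0.22, mu_lesion=0.25, t_lesion_cm=0.5)

print(f"contrast at low energy:  {low_kev:.3f}")
print(f"contrast at high energy: {high_kev:.3f}")   # much smaller
```

Under these assumed numbers the low-energy contrast is several times larger, which is why mammography accepts the higher absorbed dose of a soft beam in exchange for visibility of subtle lesions.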



Mammography



3D-Mammography (Tomosynthesis)

o Some digital mammography systems are now capable of tomosynthesis, whereby the X-Ray tube
(and in some cases the detector) moves in an arc from approximately 7 to 40 degrees around the
breast. This limited-angle tomographic method leads to the reconstruction of tomosynthesis images,
which are parallel to the plane of the detector and can reduce the superimposition of anatomy above
and below the in-focus plane.

https://youtu.be/KU8Uz1x9xWM
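A simple way to see how limited-angle tomosynthesis brings one plane into focus is the classic shift-and-add method: each projection is shifted so the chosen depth plane aligns across tube angles, then the projections are averaged, reinforcing in-plane structures and smearing everything else. A 1D sketch with an idealized geometry (the shift values are assumptions, not a real acquisition model):

```python
import numpy as np

def shift_and_add(projections, shifts_px):
    """Naive tomosynthesis reconstruction of one depth plane: shift each
    projection so the plane of interest lines up, then average. In-plane
    structures reinforce; out-of-plane structures smear out."""
    aligned = [np.roll(p, s) for p, s in zip(projections, shifts_px)]
    return np.mean(aligned, axis=0)

# Three simulated 1D projections of a point feature whose apparent
# detector position shifts as the tube moves through its arc.
n = 16
shifts = [-2, 0, 2]                   # assumed shift per tube angle
projections = []
for s in shifts:
    p = np.zeros(n)
    p[8 + s] = 1.0                    # in-plane feature
    projections.append(p)

# Undo each projection's shift to focus on the feature's depth plane.
recon = shift_and_add(projections, [-s for s in shifts])
print(recon.argmax())  # prints 8: the feature's true position
```

A feature at a different depth would need different per-angle shifts, so it fails to align and is spread across the reconstructed plane, which is the "reduced superimposition" effect described above.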
Recording X-Rays
The image detectors used in diagnostic radiology include:
Screen-Film
 Photographic film (X-Ray films):
Direct film recording (as Roentgen did)
• X-Ray film is the classic medium used to "decode" the information carried by
the X-Ray beam after it has been attenuated by the tissues.
• It renders the invisible image in visible form.
• Very low efficiency: the film is thin, so most X-Rays pass through the
emulsion without interacting.



X-Ray Detectors: Screen-Film

 Radiographic Film Structure



Base: Supports the fragile photographic emulsion.



Emulsion: Photosensitive Layer of the film.
(Thickness not more than 0.5 mm)
Key ingredients:
- Gelatin
- Silver Halide



Adhesive Layer
• Provides firm attachment between the emulsion layer and the film base.
• Preserves the integrity of the film during processing and fixing.



Supercoating
• Thin layer of Gelatin.
• Protects the emulsion from mechanical damage.
• Prevents scratches and pressure marks.
• Makes the film smooth and slick.



The Latent Image Formation

[Figure: X-Ray source (focus) → primary collimation → patient (bone, soft tissue, air) → grid → film with fluorescent (intensifying) screen, or image intensifier. Scattered radiation also reaches the detector; the beam intensity profile at the screen/detector level forms the "latent" radiological image on the film.]



The Latent Image
• Remnant radiation interacts with the silver halide crystals
• Mainly by the photoelectric interaction
• The energy deposited into the film is in the same pattern as the subject
that was exposed to radiation
• This invisible image is known as the latent image
• A latent image on photographic (radiographic) film is an invisible image
produced by the exposure of the film to light (radiation).



Film Processing
• Series of events after the film is exposed to X-rays



4 Steps of Film Processing

1. Developing – formation of the image
2. Fixing – stopping of development, permanent fixing of image on film
3. Washing – removal of residual fixer
4. Drying – warm air blowing over film

• In manual processing there is an additional stage, rinsing, between
development and fixing.



Latent image to Manifest image
- A conventional system uses X-Ray film to create a latent image.
- The film is then processed, creating a manifest image that can be
interpreted by a physician.



Latent Image to Manifest Image

[Figure: film processing chain – exposed film (latent image) → developing → fixing → washing → drying → manifest image]
Processor (Top View)



Summary: X-Ray Imaging
• X-Rays are more than 100 years old
• X-Ray imaging is a successful modality
• Created a revolution in medicine
• Useful for many diagnostic tasks
– Limitation: cannot distinguish between soft tissues
– Limitations can be overcome under some conditions with contrast media
• Oldest non-invasive imaging of internal structures
• Rapid, short exposure time, inexpensive
• Real time X-Ray imaging is possible and used during interventional procedures.
• Ionizing radiation: risk of cancer.



Developments in X-Rays

• Digital recording systems are replacing film
– Improvement in sensitivity
– More convenient
• Computer interpretation of X-Rays
– Now assists mammography
– Procedures for cardiography may be next

