Preface
This short Introduction is designed to provide the fundamental knowledge
necessary to understand the elementary principles of digital image processing
and analysis, and to give some remarks about where and how this technology can
be used, particularly in the field of light microscopy. The main principles of
digital image acquisition, processing and analysis are presented and illustrated
with examples. The last Chapter provides an information resource for those who
wish to obtain additional insight into this topic.
Introduction to Digital Image Processing and Analysis 3
order to exclude false products from the production line. All these tasks can
be carried out today using sophisticated hardware and software, with a higher
degree of speed and accuracy. It is important to know that the cost of such
sophisticated and complex equipment is no longer as high as it was until
recently. A few years ago such equipment was affordable only to well-funded
research laboratories, but today even small routine laboratories can afford it
and use all the benefits of digital image processing and analysis.
• Illumination,
• Image formation and focusing,
• Image detection or sensing,
• Formatting the camera's analog electric signal,
• Digitization, the transformation of the analog electric signal into a set of
numeric data suitable for processing in a digital computer.
lighting, which enhances the object boundaries; strobe lighting, which
eliminates the influence of ambient light; or structured lighting, with a
special pattern or grid used to facilitate object recognition.
state sensor element located at the sensor plane produces an analog electric
signal representing the visible image. Today, in most cases, CCD
(charge-coupled device) solid-state sensors are used. They are characterized by
their diagonal size (1/3", 1/2" or 2/3"), number of pixel elements (typically
500 x 582 pixel elements for standard PAL cameras), sensitivity (usually between
2 and 25 lux), and resolution (from 300 to 600 horizontal lines). The analog
signal generated by the sensor is then formatted according to one of the video
standards.
Each pixel of a black & white image, sometimes called a binary image,
can have one of two possible values, for example 0 or 255, but usually it
12 Darko Stipaničev
is expressed by the logical values 0 and 1. This is the simplest kind of image,
in which objects are represented by their areas. Most object measurements can
be performed only on this kind of digital image. A binary image can be obtained
from a gray-value or color image by a process called segmentation (see Image
Analysis).
[Figure: a binary (b&w) digital image shown as an intensity function of x, and
a color digital image decomposed into its red, green and blue planes]
typically 256 intensities for each color. By combining various intensities of
the RGB components, each image pixel can take one of 256 x 256 x 256 =
16,777,216 different color values. When defining the value of a color image
pixel at location (x,y), a triplet must be given in the form (red, green,
blue), for example (0, 255, 255), which means minimum intensity for red and
maximal intensities for green and blue, and corresponds to a pure cyan hue. A
color image pixel will appear as a gray-value pixel if all primary colors have
the same intensity value: black corresponds to (R,G,B) = (0,0,0), white to
(R,G,B) = (255, 255, 255), and, for example, (R,G,B) = (127, 127, 127) gives a
medium gray intensity.
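The triplet arithmetic above can be checked with a short sketch (plain Python, written for this Introduction; the helper name is ours, not from any imaging library):

```python
def is_gray(pixel):
    """True if the pixel will appear gray (all primaries equal)."""
    r, g, b = pixel
    return r == g == b

cyan = (0, 255, 255)        # minimum red, maximal green and blue
mid_gray = (127, 127, 127)  # a medium gray intensity

assert not is_gray(cyan)
assert is_gray(mid_gray)

# Number of representable colors with 256 intensities per primary:
n_colors = 256 ** 3         # 16,777,216
```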
The RGB color space can be schematically shown by a color cube (see
Fig.4). The cube corners correspond to the elementary colors (red, green,
blue), their complements (cyan, magenta, yellow), and black and white. Gray
values are located on the main diagonal. The HSI color space can be
schematically shown by a double cone (see Fig.4). The bottom and top vertices
correspond to black and white, and gray values are located on the line between
them (saturation = 0). Pure colors are located on the outer circles.
Each color image pixel value can be found inside the RGB cube or the HSI
double cone, as Fig.7 shows for one typical color image. If the image is a
gray-value one, all pixel values lie on the gray-value line. Pure colors are
located on the outer surfaces of the RGB cube or the HSI double cone.
One of the weaknesses of the RGB model, and partly of the HSI model, for
specifying color images is its nonuniform nature: equal distances in the RGB
color space do not generally correspond to equal differences in color
perception. Because of that, some other color models have been proposed. One of
them is the YIQ color model of the American NTSC standard for composite video
signals, which is essentially a linear transformation of the RGB model with
luminance information coded into the Y component and chrominance into I and Q.
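The RGB-to-YIQ transformation can be sketched as follows (a Python illustration written for this text; the coefficients are the commonly quoted NTSC ones, rounded to three decimals):

```python
def rgb_to_yiq(r, g, b):
    """Linear RGB -> YIQ transform (inputs scaled to 0..1),
    using the commonly quoted NTSC coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    i = 0.596 * r - 0.274 * g - 0.322 * b   # chrominance
    q = 0.211 * r - 0.523 * g + 0.312 * b   # chrominance
    return y, i, q

# A gray input carries no chrominance: I and Q come out (near) zero.
y, i, q = rgb_to_yiq(0.5, 0.5, 0.5)
```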
[Fig.4. The RGB color cube, with corners red, green, blue, cyan, magenta,
yellow, black (0) and white (1), and the HSI double cone, with hue circles and
saturation lines between the black and white vertices]
Video memory for storing color digital images must be three times larger
than video memory for storing gray-value digital images with the same number
of intensity levels. For example, one gray-value digital image of 512 x 512
pixels with 256 levels of gray needs approximately 262 Kbytes of video memory,
while a color digital image needs 786 Kbytes. Processing time is also three
times longer for color images, and because of that color digital video systems
are more complicated and more expensive.
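The memory figures above follow directly from the pixel count (a small arithmetic sketch, written for this text):

```python
def image_bytes(width, height, bytes_per_pixel):
    """Video memory needed for an uncompressed image, in bytes."""
    return width * height * bytes_per_pixel

gray  = image_bytes(512, 512, 1)   # 8-bit gray-value image
color = image_bytes(512, 512, 3)   # 24-bit true-color image

# 262,144 bytes ~ 262 Kbytes; the color image needs three times as much.
assert color == 3 * gray
```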
another kind of color image has been proposed, named color-mapped images.
They have a color palette reduced from 16,777,216 different colors to 256
different colors. This means that for storing each pixel value, color-mapped
images need only 8 bits of memory, while true-color images need three times
more, 24 bits. Color-mapped images need the same memory space as gray-level
images; in the example given before, that is 262 Kbytes for a 512 x 512 pixel
image. Fig.5 shows the principle of converting true-color images to
color-mapped images. In true-color images each pixel value is given by a red,
green, blue triplet, and in color-mapped images by a palette index (sometimes
called a palette number). Each palette index has a corresponding color value
which is shown on the display screen. When a color palette has been defined
and a true-color image is to be converted into a color-mapped image, a special
procedure looks for the palette color which is closest to the true-image
color. For example, in Fig.5 that is the palette color with palette index 3.
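The nearest-palette-color lookup can be sketched as below (a Python illustration written for this text; the palette entries are made up, except index 3, chosen to mirror Fig.5):

```python
def nearest_palette_index(pixel, palette):
    """Index of the palette color closest (squared RGB distance) to the pixel."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(palette)), key=lambda i: dist2(palette[i], pixel))

palette = [(0, 0, 0), (255, 255, 255), (200, 30, 30), (31, 31, 95)]
index = nearest_palette_index((30, 32, 92), palette)   # index 3, as in Fig.5
```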
Fig.5. Conversion of a true-color image to a color-mapped image: the pixel RGB
value (30, 32, 92) is replaced by palette index 3, whose palette value
(31, 31, 95) is the closest palette color.
An example of a color image, a gray-level image, and a binary image is shown
in the color plates at the end of this Introduction.
will increase their speed, because all pixels outside the region of interest
(ROI) are excluded from processing.
histograms for the same image, together with the pixel positions in the RGB
cube and the HSI double cone.
4. Image Processing
The term "image processing" refers to the procedures whereby the
information contained in an image is altered, usually to visually restore or
optimize the image. Typical examples are correction of image blur caused by
poor focusing, correction of lens optical errors, correction of contrast,
intensity or brightness, color correction, image structure enhancement to
emphasize elements which are not easily seen in the original image,
subtraction of background noise, and the like. Image processing prepares
images for image analysis, whether manual or automatic. This means that images
can be analyzed even in cases where this was previously impossible due to
their poor quality or complexity.
[Figure: an analog video signal over time, showing its offset, range and
minimal value]
transformation table used to transform one intensity value into another. Color
systems have three LUTs, one for each primary color. The output LUT is located
between the video memory and the display unit. It is used for changing
intensities before they are displayed on the auxiliary monitor; in this way it
does not affect the values in the video memory. Input and output LUTs are
hardware devices, but they can exist in software form, too. Monadic image
processing operations, or point-by-point operations, are software LUTs. All
point-by-point operations described below in the chapter Digital Image
Processing can be applied to a hardware LUT, too.
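A software LUT is simply a 256-entry table applied point by point; a minimal sketch (written for this text, using the "inverse" operation as the example table):

```python
# A software LUT: 256-entry table mapping each input intensity to an output one.
invert_lut = [255 - p for p in range(256)]   # the "inverse" point operation

def apply_lut(image, lut):
    """Monadic (point-by-point) operation via a look-up table."""
    return [[lut[p] for p in row] for row in image]

image = [[0, 127, 255],
         [64, 128, 192]]
out = apply_lut(image, invert_lut)
```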
• Monadic operations, which act on one image pixel value at a time, and
• Dyadic operations, which act on multiple pixel values, from one image or
from several images, at a time.
[Figure: a point-by-point transformation q = f(p), with input intensities p
(0 to 255) on the x-axis, output intensities q (0 to 255) on the y-axis, and
clipping values p1 and p2 marked]
intensity values are not uniformly spread over the whole intensity range; they
are limited to a portion of the possible range. That can easily be noticed
from the input image histogram, as Fig.10 shows. The input image histogram is
used for the calculation of the clipping values p1 and p2: p1 corresponds to
the darkest value in the image and p2 to the brightest value. By applying the
contrast enhancement operator, input image intensities between p1 and p2 are
spread in the output image over the whole intensity range. The resulting
dynamic range of the output image is thus distributed over the maximum
possible intensity values of the system. For color images, contrast
enhancement with different clipping values for each primary color can be
performed separately, or the same clipping values, calculated from the
intensity histogram, can be applied to all primary color planes. The effect of
applying the gray-value and color contrast enhancement operator is shown on
the plates at the end of this Introduction.
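The contrast enhancement operator described above amounts to a linear stretch between the clipping values; a minimal sketch (written for this text, with assumed clipping values 50 and 200):

```python
def stretch(p, p1, p2, out_max=255):
    """Spread input intensities between p1 and p2 over the full output range."""
    if p <= p1:
        return 0
    if p >= p2:
        return out_max
    return round((p - p1) * out_max / (p2 - p1))

# Clipping values taken from the histogram: darkest pixel 50, brightest 200.
assert stretch(50, 50, 200) == 0      # darkest value maps to black
assert stretch(200, 50, 200) == 255   # brightest value maps to white
```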
[Figure: point transforms for contrast increase and contrast reduction]
contrast enhancement and clipping is that in the first case, input image
intensities lower than p1 and higher than p2 are deleted in the output image.
In the clipping operation they are not deleted, but linearly transformed.
Clipping is used when some intensity ranges have to be attenuated and some
enhanced.
Pixels p1, p2, p3, ... can either belong to two or more different images or to
the same image.
XOR, or any other function that can be devised. Care must be taken that the
function contains an appropriate scaling factor to keep the magnitude of the
output value within the intensity range, in order to avoid an overflow, a
negative value or a non-integer value. Image addition can be used to reduce
the effect of noise in the data, because it can average the data in two input
images. If one of the input images is constant, the result will be an overall
lighter image, which will appear as a shift in the image histogram. Image
subtraction can be used to filter out differences between images, to detect
changes between two images, to eliminate the influence of the background, and
the like. If two images are taken at different times, then subtraction can be
used to detect movement. Image multiplication is used to correct sensor
nonlinearities or to extract specific areas of an image by a
region-of-interest window. Image pixels are multiplied by window pixels, which
are 0 outside the window and 1 inside the window. Logical operations are
particularly used for manipulating binary images.
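Dyadic operations can be sketched generically (a Python illustration written for this text; note the 1/2 scaling factor that keeps the sum inside the intensity range, as discussed above):

```python
def dyadic(img_a, img_b, op):
    """Apply a pixel-by-pixel (dyadic) operation to two equally sized images."""
    return [[op(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(img_a, img_b)]

a = [[100, 120], [140, 160]]
b = [[110, 130], [130, 150]]

# Addition with a 1/2 scaling factor averages the images (noise reduction).
avg  = dyadic(a, b, lambda p, q: (p + q) // 2)
# Subtraction (absolute value, to stay non-negative) detects changes.
diff = dyadic(a, b, lambda p, q: abs(p - q))
```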
where k is a reduction constant and c_ij are the convolution matrix values.
The output image pixel value at location (x,y) is obtained as

    q(x, y) = (1/k) · Σ_{i=1..3} Σ_{j=1..3} c_ij · p(x + i − 2, y + j − 2)
". . . . .%
$. p(x ! 1, y ! 1) p(x ! 1, y) p(x ! 1, y + 1) .'
$ '
$. p(x, y ! 1) p(x, y) p(x, y + 1) .'
$. p(x + 1, y ! 1) p(x + 1, y) p(x + 1, y + 1) .'
$ '
#. . . . .&
                  | 1  1  1 |                |  0  −1   0 |
    Lowpass = 1/9 | 1  1  1 |     Highpass = | −1   5  −1 |
                  | 1  1  1 |                |  0  −1   0 |
    q(3,3) = (1/9) · [1·p(2,2) + 1·p(2,3) + 1·p(2,4) + 1·p(3,2) + 1·p(3,3)
                      + 1·p(3,4) + 1·p(4,2) + 1·p(4,3) + 1·p(4,4)]
    Input image:              Output image:
    10  14  14  17  56  78     .   .   .   .   .   .
    27  34  17  13  67  43     .   .   .   .   .   .
    34  21  17  86  12 156     .   .  43   .   .   .
     5  98  32  65  32  87     .   .   .   .   .   .
    21  65  87  32  43  34     .   .   .   .   .   .

    43 = (34 + 17 + 13 + 21 + 17 + 86 + 98 + 32 + 65) / 9
The same procedure is applied to all input image pixel locations. It
should be noted that a 3x3 convolution or filtering cannot be applied to pixel
values of the input image located on the image boundaries. Usually these keep
the same value in the output image as they have in the input image.
Convolution filtering is an important digital image processing procedure, used
extensively in image optimization, but also in image analysis. Some edge
enhancement filters are used in edge extraction procedures and image
segmentation. The CHRONOLAB Color Vision in Image Processing option Users
Filter offers the 30 most important convolution matrices.
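The 3x3 convolution described above can be sketched directly (a Python illustration written for this text; it reproduces the worked lowpass example, including the boundary rule):

```python
def convolve3x3(image, c, k=1):
    """3x3 convolution: q(x,y) = (1/k) * sum of c[i][j] * p(x+i-1, y+j-1)
    (0-based kernel indices). Boundary pixels keep their input values."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]          # boundaries stay unchanged
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            s = sum(c[i][j] * image[x + i - 1][y + j - 1]
                    for i in range(3) for j in range(3))
            out[x][y] = round(s / k)
    return out

lowpass = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]  # with k = 9: smoothing filter
image = [[10, 14, 14, 17],
         [27, 34, 17, 13],
         [34, 21, 17, 86],
         [ 5, 98, 32, 65]]
smoothed = convolve3x3(image, lowpass, k=9)  # smoothed[2][2] == 43, as above
```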
Low pass filters are usually known as smoothing filters. They average
the image over a 3 x 3 neighborhood, which smooths the image and blurs
intricate details, somewhat like a photographer's soft-focus lens. Smoothing
is used to blend contrasting details and give the image a soft, blurred look.
The high pass or sharpening filter is more often used. Its effect is to
sharpen edges, but unfortunately it also increases the amount of noise in the
image, making it appear more grainy. Fig.14 and the plates at the end of the
Introduction show the effects of applying smoothing and sharpening filters.
Special mention should be made of the rank order operators, of which
the minimum, maximum and median filters are particularly important.
These filters are used to suppress noise, eliminate artifacts, and dilate or
erode objects in gray or color images. The rank order operator principle is as
follows: an input image array of, let us say, 3x3 pixels is processed in such
a way that the central element is replaced with the minimum, maximum or median
value of all 9 pixels belonging to that array.
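The rank order principle can be sketched as follows (a Python illustration written for this text; rank 0 gives the minimum filter, 4 the median, 8 the maximum):

```python
def rank3x3(image, rank):
    """Rank order operator: replace each interior pixel by the value at
    position `rank` in its sorted 3x3 neighborhood (0=min, 4=median, 8=max).
    Boundary pixels keep their input values."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            neigh = sorted(image[x + i][y + j]
                           for i in (-1, 0, 1) for j in (-1, 0, 1))
            out[x][y] = neigh[rank]
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],   # a single bright noise pixel
         [10, 10, 10]]
cleaned = rank3x3(noisy, rank=4)   # the median filter removes the outlier
```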
32 Darko Stipaničev
Erosion and dilation are usually combined into more complex binary image
processing operations. A typical example is the operation open, which carries
out an erosion operation followed by a dilation operation. This procedure
eliminates all small white objects from the image. The operation close is
carried out in the opposite sequence: a dilation operation is followed by an
erosion operation. This operation eliminates all small black holes in objects
from the image. Open is used to clean a noisy binary image, and close to
connect structures which have become separated, or to close small gaps and
holes. By specific combinations of erosion and dilation, the edges of binary
objects can be extracted with various edge thicknesses.
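Open and close can be sketched from first principles (a Python illustration written for this text; for simplicity this erosion sets boundary pixels to 0, which a production implementation would handle differently):

```python
def erode(img):
    """Binary erosion: a pixel stays 1 only if its whole 3x3 neighborhood is 1.
    (Boundary pixels are set to 0 here for simplicity.)"""
    h, w = len(img), len(img[0])
    return [[1 if 0 < x < h - 1 and 0 < y < w - 1 and all(
                img[x + i][y + j] for i in (-1, 0, 1) for j in (-1, 0, 1))
             else 0 for y in range(w)] for x in range(h)]

def dilate(img):
    """Binary dilation: a pixel becomes 1 if any 3x3 neighbor is 1."""
    h, w = len(img), len(img[0])
    return [[1 if any(img[x + i][y + j]
                      for i in (-1, 0, 1) for j in (-1, 0, 1)
                      if 0 <= x + i < h and 0 <= y + j < w)
             else 0 for y in range(w)] for x in range(h)]

def open_(img):   # erosion followed by dilation: removes small white objects
    return dilate(erode(img))

def close_(img):  # dilation followed by erosion: fills small black holes
    return erode(dilate(img))

speck = [[0] * 5 for _ in range(5)]
speck[2][2] = 1                 # an isolated white noise pixel
opened = open_(speck)           # open removes it entirely
```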
Rank order operators, which are actually color and gray-value
operators, can also be suitable for binary image processing, particularly for
binary image cleaning (noise suppression). Compared to erosion and dilation,
these methods have the advantage that the size and the shape of the main
objects are not greatly affected.
The final results of digital image analysis are normally numeric, such as
image topological characteristics (number of separate objects in the image),
geometrical characteristics of image objects (area, perimeter, circularity),
and densitometric and texture characteristics of image objects or of the image
itself, but the result of digital image analysis can also be another image
with identified image structures (edges, boundaries, regions, objects). This
second kind of digital image analysis is usually performed before quantitative
image evaluation. Digital image analysis usually consists of a number of
procedures which first prepare the image for quantitative evaluation; after
that, the image is quantitatively analyzed. For example, for certain geometric
or topological image analyses, the color or gray-scale image is first
converted into a binary image, using one of the segmentation methods, and
after that, when objects are marked as white areas against a black background,
the image is quantitatively analyzed and evaluated. Some other image
measurements, like the measurement of distances and angles, or of single
object characteristics when the object's boundaries are manually or
automatically traced, do not need transformation of the image into a binary
one. They can be performed on the original color or gray-scale image. The same
holds for the measurement of image luminescence features, like luminescence
profiles. These must not undergo any kind of transformation, because the
original luminescence information would otherwise be lost.
digital image. If necessary, the same procedure can be done for the vertical
dimension. Transformation ratios can be defined in advance if the system and
its magnification are well known, for example in the case of light microscopy
workstations.
Fig.15. The geometric calibration procedure. A calibration ratio of 0.042
means that each pixel in the horizontal dimension corresponds to 0.042 mm of
the real horizontal line.
After geometric calibration, image linear features, angles, and features of
objects whose boundaries are traced and marked can be measured in real
dimensions.
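Geometric calibration is a simple scaling; a minimal sketch (written for this text, using the ratio from Fig.15 as the assumed calibration):

```python
def calibrate(pixels, ratio_mm_per_pixel):
    """Convert a distance measured in pixels to real-world millimeters."""
    return pixels * ratio_mm_per_pixel

# With the calibration ratio from Fig.15 (0.042 mm per pixel),
# a horizontal line 100 pixels long corresponds to 4.2 mm.
length_mm = calibrate(100, 0.042)
```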
image pixels whose values lie between the thresholds are considered "white
objects", and the "black background" consists of all other pixels, whose
values are lower than the low threshold or higher than the high threshold.
Fig.17 shows the "threshold interval" operator function. For the final binary
image, all three color planes have to be combined by an appropriate logical
connection.
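The threshold interval operator on a single plane can be sketched as follows (a Python illustration written for this text, with assumed thresholds 100 and 200):

```python
def threshold_interval(image, low, high):
    """Segment a gray-value image into a binary one: pixels with values
    inside [low, high] become white objects (1), all others background (0)."""
    return [[1 if low <= p <= high else 0 for p in row] for row in image]

image = [[12, 130, 240],
         [128, 135, 20]]
binary = threshold_interval(image, 100, 200)   # extract the middle gray level
```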
Fig.18. Threshold interval adjustment for images with dark and light
backgrounds and for extraction of a middle gray level (three transfer
functions of output vs. input intensity, with thresholds p1 and p2).
For image segmentation, edges have to be not only enhanced but also extracted.
Appropriate convolution filters for edge extraction, the ones mostly used in
practice, are the Sobel and Prewitt filters for horizontal and vertical edge
extraction. Their convolution matrices are given in Fig.19. In the edge
extraction procedure, the horizontal and vertical filters are combined to
extract all edges. By defining an appropriate threshold level, the operator
can choose what kind of edges to display. For a low threshold level, all
edges, strong and weak, will be displayed; for a high threshold level, only
strong edges will be displayed. Plate AXVIII shows edge extraction by the
Sobel filter for two different threshold levels.
Segmentation by edges and boundaries gives as a result a black and
white (binary) image where white areas correspond to object edges, and black
areas both to object exteriors and object interiors. Because of that, it is
not so appropriate for extracting objects which have to be measured; it is
more appropriate for qualitative estimation of image content. Plate AXXII
shows an image segmented by edges and boundaries.
Regions are image parts which have similar gray or color values. Similar
means that inside one region the gray or color values may differ a little.
This allowed difference is usually defined as a gradient threshold. The region
extraction procedure equalizes the gray or color values inside each region.
The result is an image of regions with uniform gray or color values. Plate
AXIX shows an image with region extraction.
Fig.20. Object group parameters for the image from Plate AXXIV
[Figure: typical geometric parameters of an object — centroid, hole centroid,
orientation, minor axis direction, object's length, object's orthogonal
length, horizontal width, minimal radii, and Feret diameters at 0º, 22.5º and
90º — together with a radial boundary plot of radius (Min to Max) against
angle (0º to 360º)]
Fig.22. Typical object's geometric parameter values for object No.12 from
Plate AXXV.
The measuring units in this example were pixels. That means that the object
covers 2432 pixels and its perimeter is 190.953 pixels long.
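Area and perimeter measurements on a binary image can be sketched as follows (a Python illustration written for this text; this simple perimeter estimate counts object pixels touching the background, which is only one of several possible definitions):

```python
def area(binary):
    """Object area: the number of object (1) pixels."""
    return sum(sum(row) for row in binary)

def perimeter(binary):
    """Simple perimeter estimate: object pixels with at least one
    4-connected background (or out-of-image) neighbor."""
    h, w = len(binary), len(binary[0])
    count = 0
    for x in range(h):
        for y in range(w):
            if binary[x][y] and any(
                    not (0 <= x + dx < h and 0 <= y + dy < w)
                    or not binary[x + dx][y + dy]
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                count += 1
    return count

square = [[1] * 4 for _ in range(4)]   # a 4x4 object: area 16, border 12
```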
[Figure: an intensity profile along the line, with numbered peaks 1-7 and the
background level marked]
Fig.25. Line profile of intensity values for a line drawn in Plate AXXVII
Projection signatures can be calculated for each color plane (red, green
and blue) or, what is more appropriate, only for the image intensity. Fig.27
shows one horizontal projection signature of intensity values for the area
specified by the ROI on Plate AXXVII.
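A horizontal projection signature is just the column-wise sum of intensities; a minimal sketch (written for this text, on a made-up 3x3 intensity image):

```python
def horizontal_projection(image):
    """Horizontal projection signature: the sum of intensities in each
    column, giving one value per horizontal (x) position."""
    w = len(image[0])
    return [sum(row[y] for row in image) for y in range(w)]

image = [[0, 10, 0],
         [0, 20, 5],
         [0, 30, 0]]
signature = horizontal_projection(image)   # peaks where the object lies
```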
[Figure: intensity values from 0 (black) to 195 plotted over the area]
Fig.29. Area profile of intensity values for the area specified by the ROI on
Plate AXXVII
PLATE I - Original gray-level image of a blood smear and its intensity
histogram. Intensity values between 0 and 255 are on the x-axis and the number
of pixels on the y-axis.
PLATE III - Brightness increasing. Resulting image, input image histogram with
applied point-by-point function and resulting image histogram.
PLATE XIII - Level reduction. Resulting image, input image histogram with
applied point-by-point function and resulting image histogram.
PLATE XIV - Inverse. Resulting image, input image histogram with applied
point-by-point function and resulting image histogram.
PLATE XVII - Edge extraction by Prewitt filter with threshold level 23.
PLATE XXV - Binary operation OPEN applied on binary black & white image
from plate XXI.
PLATE XXVI - Binary operation CLOSE applied on binary black & white
image from plate XXI.
PLATE XXXV - Starch grains image enhancement and analysis: a) input image,
b) image after image enhancement, c) binary image after segmentation and
automatic counting, and d) area histogram of starch grains.
PLATE XXXVII - Unsupervised classification of grains: a) input image, b) image
after segmentation and field measurements, c) measure image, d) calibration
data, and e) correlation diagram between minimal and maximal radii from the
object's centroid.
PLATE XXXVIII - Detailed analysis of grain shape: a) binary image of grains
after automatic single object measurements, b) measured data, c) radial
boundary, and d) Feret diagram of the white paper grain; e) measured data,
f) radial boundary, and g) Feret diagram of the rice grain.
PLATE XXXIX - Object recognition and analysis: a) input image, b) binary image
after segmentation and object recognition, c) measured data of recognised
casts, d) image of measure, and e) calibration data.