Chapter 04: Image Enhancement
Gray Level Transformations,
Histogram Processing,
Spatial Filtering: Introduction, Smoothing and Sharpening Filters,
Colour Image Enhancement
12/2010 1c) Differentiate between spatial resolution and tonal resolution.
12/2010 3b) Given an image of size (3 x 3):
f (m, n) =
128 212 255
54 62 124
140 152 156
Determine the output image g (m, n) using the logarithmic transformation
g (m, n) = C log10[1 + f (m, n)], choosing C = L / log10[1 + L],
where L is the maximum pixel value in the image.
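A small NumPy sketch of this transformation (my own illustration; it assumes L is taken as the maximum pixel value in the image, 255 here):

```python
import numpy as np

# Worked sketch of the problem above: g = C log10(1 + f),
# with C = L / log10(1 + L) and L the maximum pixel value.
f = np.array([[128, 212, 255],
              [ 54,  62, 124],
              [140, 152, 156]], dtype=float)

L = f.max()                      # L = 255
C = L / np.log10(1 + L)          # C ~ 105.89, chosen so that g(L) = L
g = np.round(C * np.log10(1 + f))
```

Note that the constant C is chosen so the maximum input level maps back to itself, i.e. g(255) = 255.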
06/2011 2a) Differentiate between point operations and neighbourhood operations.
06/2011 2b) Given the histogram:

Grey level:    0    1    2    3    4    5    6    7
No. of pixels: 100  90   85   70   0    0    0    0

Perform histogram stretching so that the new image has a dynamic range of [0, 7].
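The stretching asked for can be sketched as follows (an illustration, not part of the question; it assumes the standard linear mapping of the occupied range onto [0, 7]):

```python
import numpy as np

# Histogram stretching sketch: occupied grey levels 0..3 are mapped
# linearly onto the full dynamic range [0, 7] via
# s = round((r - r_min) * 7 / (r_max - r_min)).
counts = np.array([100, 90, 85, 70, 0, 0, 0, 0])
occupied = np.nonzero(counts)[0]
r_min, r_max = occupied.min(), occupied.max()     # 0 and 3

r = np.arange(r_min, r_max + 1)
s = np.round((r - r_min) * 7 / (r_max - r_min)).astype(int)
# levels 0, 1, 2, 3 map to s = 0, 2, 5, 7
```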
1). Plot the histogram for the following image. Perform histogram equalization,
then plot the equalized histogram and the histogram-equalized image.
1 1 5 3 3
1 2 6 6 7
1 4 0 6 2
4 4 2 5 2
7 4 0 2 2
(10)
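A hedged sketch of the requested equalization, using the standard mapping s = round((L-1) x CDF(r)) for a 3-bit image (L = 8):

```python
import numpy as np

# Histogram equalization sketch for the 5x5, 3-bit image above (L = 8).
img = np.array([[1, 1, 5, 3, 3],
                [1, 2, 6, 6, 7],
                [1, 4, 0, 6, 2],
                [4, 4, 2, 5, 2],
                [7, 4, 0, 2, 2]])

L = 8
hist = np.bincount(img.ravel(), minlength=L)   # [2, 4, 6, 2, 4, 2, 3, 2]
cdf = np.cumsum(hist) / img.size               # cumulative distribution
eq_map = np.round((L - 1) * cdf).astype(int)   # equalized level for each r
equalized = eq_map[img]                        # histogram-equalized image
```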
2). Distinguish between point operations and spatial operations in spatial-domain
processing for image enhancement
(5)
3). State whether the following statements are true or false. Briefly explain
the reason for your contention:
a. The principal function of the median filter is to force points with distinct intensities
to be more like their neighbours.
(5)
b. The median filter is the best solution for removing salt-and-pepper noise.
(5)
c. Image subtraction is used for scene matching and detection.
(5)
5. State and explain the point operations in the spatial domain for image enhancement,
giving at least one application for each.
(10)
6. Equalize the following histogram:

Grey level r:  0    1     2    3    4    5    6    7
No. of pixels: 790  1023  850  656  329  245  122  81
7. Explain in detail the following image enhancement techniques in the spatial domain:
a. Image negative
b. Bit-plane slicing
c. Contrast stretching
d. Low-pass filter
( 5 marks each)
Image Enhancement in the Spatial Domain
Objective:
- Image enhancement is the processing of an original image
so that the resultant image is more suitable than the original
for a specific application
- Enhancing an X-ray image
for better viewing of the bone structure
- Enhancing images transmitted by a space probe
for studying the earth's resources
Image enhancement can be done in two domains:
1. Spatial-domain image enhancement:
- Manipulation of images in the image plane
- Direct manipulation of the pixels in an image
2. Frequency-domain image enhancement:
- Manipulation of images after converting them
into the frequency domain
- The frequency components of the Fourier transform of
an image are processed
Spatial Domain methods
- Spatial-domain procedures operate directly in image space,
i.e. on pixel values
(working directly with the raw image data)
- There is no general theory of image enhancement
- The quality of the enhancement
is judged by the viewer
- Thus spatial-domain image processing is highly subjective
- The spatial domain refers to the aggregate of pixels composing an image
- A spatial-domain process is denoted by the expression:
g (x, y) = T [ f (x, y) ]
where f (x, y) is the input image and
g (x, y) is the processed / output image
x, y are the coordinates of an image pixel
f (x, y) is the pixel value at coordinates (x, y)
T is an operator that operates on the input image f
- The operator T is defined over
some neighborhood of (x, y)
- It can also operate on a set of images
T is called the transfer function that operates on the image f (x, y).
f (x, y) is the original image,
where f is the grey level at coordinate (x, y).
For an 8-bit image, the function f at any (x, y) coordinate
can take values from 0 to 255:
0 for black and 255 for white;
intermediate values represent shades of grey
- The transfer function T can also operate on
a set of input images
(e.g. addition of the pixel values of two or more images
for noise reduction)
y coordinate: 0 to N-1
x coordinate: 0 to M-1
A pixel (picture element) at any coordinate (x, y)
represents the average value of light in
the square area of the pixel at (x, y)
- The image starts from the top left corner (scanning left to right and top to bottom)
- The value of the grey scale at any coordinate (x, y) is f (x, y):

f(0,0)    f(0,1)    f(0,2)    ...  f(0, N-1)
f(1,0)    f(1,1)    f(1,2)    ...  f(1, N-1)
f(2,0)                        ...  f(2, N-1)
...
f(M-1,0)                      ...  f(M-1, N-1)

x varies from 0 to M-1,
y varies from 0 to N-1
Principal approach to spatial-domain image processing:
- Define a subimage in the input image:
a rectangular or square region (3 x 3, 5 x 5, . . .)
with a pixel (x, y) as its centre
- The centre of the subimage is moved
from pixel to pixel
(starting from the top left corner,
left to right, top to bottom)
- The operator T is applied at each location (x, y)
to yield the output pixel value g (x, y) at that location
- Take a square subimage, a 3 x 3 neighborhood about
a point (x, y) as the centre.
The T operator for averaging intensities:
- Compute the average intensity of the 9 pixels of the
3 x 3 neighborhood centred on (x, y)
- The new value of the centre pixel (x, y) is 1/9 x the sum of the intensities of the 9 pixels
- The averaging procedure starts from the top left and scans horizontally over every pixel
Spatial-domain enhancement can be carried out in two different ways:
1. Point processing
2. Neighborhood processing /
mask processing or spatial filtering
1. Point Processing
When the subimage size is taken as 1 x 1 (a single pixel),
processing is carried out on a single pixel at a time.
- Enhancement at any point (the new pixel value) in an image depends only
on the original grey level value at that point
(not on the neighborhood pixels)
- Such operations are referred to as single-pixel operations /
point processing
- The transfer function T in such cases is called
a gray-level or intensity mapping function
- The transfer function is of the form
s = T (r)
where r is the gray level of the input image pixel, f (x, y),
and s is the gray level of the output image pixel, g (x, y)
Mapping from r (grey level of the pixels) in input image
to s (grey level of the pixels) in the output image
- Can be done through a table lookup
(256 entries for 8-bit)
or
- Through three types of functions:
- Linear
(negative and identity transformations)
- Logarithmic
(log and inverse-log transformations)
- Power law
(nth power / nth root transformations)
The identity function, where s = r,
is a trivial case
(output pixel = input pixel)
2. Neighborhood/ Mask processing or spatial filtering
Here the neighborhood size of the subimage is more than 1 x 1
(3 x 3, 5 x 5, 7 x 7, etc.)
- It gives more flexibility
- Enhancement techniques with a 3 x 3 or larger neighborhood
are referred to as
filter, kernel, or template processing
- The approach is to use a function of the intensity values
in a predefined neighborhood (3 x 3, 5 x 5, . . .)
of the pixel (x, y)
to determine the new value
at the pixel (x, y)
- The mask can be 3 x 3, 5 x 5, 7 x 7, . . .
(a 2-D array)
- The mask defines the coefficients
- These mask / filter processing techniques
are used in image sharpening and smoothing
Common types of point processing:
1. Image negative
(digital negative)
2. Contrast enhancement / contrast stretching /
piecewise linear transformation
3. Thresholding
4. Dynamic range compression (log transformation)
5. Power-law transformations
6. Gray / intensity level slicing
7. Bit-plane slicing
Identity transformation (trivial case)
s = T (r)
s = r
New grey level = original grey level

[Plot: modified grey level s (output) vs original grey level r (input),
both 0 to 255; the transformation T is the straight line s = r]

In the identity transform
the image is not modified:
the new image has the same grey levels as the original image
1. Image Negative:
- Produces the equivalent of a photographic negative
- The output gray level at any pixel is
s = (L - 1) - r
where gray levels range from 0 to L - 1 and
r is the pixel value before processing:
s = rmax - r
= 255 - r
(rmax = L - 1 = 256 - 1 = 255)
- Suitable for enhancing white or grey-level details embedded in dark
regions of an image
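The negative transformation can be sketched in one line with NumPy:

```python
import numpy as np

# Digital negative for an 8-bit image: s = (L - 1) - r = 255 - r.
r = np.array([[  0,  10, 125],
              [200, 245, 255]], dtype=np.uint8)
s = 255 - r      # black and white are swapped
```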
[Plot: the transformation s = 255 - r over 0..255; e.g. r = 245 maps to
s = 255 - 245 = 10]
Digital mammogram showing a small lesion,
which can be seen much more clearly in the negative image.
2. Contrast Enhancement / Contrast Stretching:
- Pixels above a certain level are brightened and
pixels below that level are darkened
Contrast stretching / enhancement:
- Brightening / stretching the values of grey levels > m, and
- Darkening / compressing the values of grey levels < m
Maximum contrast:
thresholding / producing a two-level (binary) image:
grey level > m maps to 1,
grey level < m maps to 0
(two levels)
Contrast stretching
(piecewise linear transformation):
- The advantage of this transform is that the form of
the piecewise function can be arbitrarily complex
- It increases the contrast of the image by making
the dark portions darker and
the bright portions brighter
A low-contrast image may be due to:
- Poor illumination
- Lack of dynamic range in the sensor
- Wrong setting of the aperture / lens
- Contrast stretching expands the range of intensity levels in
an image so that it spans the full intensity range of the
recording medium or display device
(e.g. from [100, 150] to [50, 200])
- This transformation increases the dynamic range of an image
Piecewise contrast stretching:
- The control points (r1, s1) and (r2, s2)
control the shape of the transfer function
- If r1 = s1 and r2 = s2, the transformation is linear (identity):
no change
- If r1 = r2, s1 = 0 and s2 = L - 1, the transformation is thresholding
Dark grey levels are made darker;
bright grey levels are made brighter
[Figure: low-contrast image, and the image after contrast stretching]
3. Thresholding
Extreme contrast stretching yields thresholding.
If, in the contrast-stretching diagram,
the first and the last slopes are made zero and
the centre slope is made maximum,
i.e. r1 = r2, s1 = 0 and s2 = L - 1, then:
s = 0      if r < a
s = L - 1  if r >= a
where L is the number of grey levels
and a is the threshold

[Plot: s = T(r), a step from 0 to L - 1 at r = a]
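A minimal sketch of the thresholding transformation (the threshold a = 128 is an arbitrary illustrative choice):

```python
import numpy as np

# Thresholding as extreme contrast stretching:
# s = 0 for r < a, and s = L - 1 = 255 otherwise.
r = np.array([10, 100, 127, 128, 200, 255])
a = 128
s = np.where(r < a, 0, 255)      # two-level (binary) output
```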
[Figure: low-contrast image, and the result of thresholding]
4. Log Transformation (Dynamic range compression)
s = c log (1 + r)
where c is a constant and
r is the original pixel value
- Widens the range of the lower-level values
(dark pixels) in the input image
- Compresses the range of the higher-level values
(light pixels) in the input image
- Compresses the dynamic range of pixel values
(which sometimes becomes very large after
processing, e.g. in a Fourier spectrum)
- Scales down a very large range of levels to 0 to 255
(a Fourier spectrum may range from 0 to 10^6)
[Figure: Fourier spectrum with range 0 to 1.5 x 10^6, and the result of
applying the log transformation with c = 1; the range of values after
processing becomes 0 to 6.2]
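A quick numeric check of this compression, with c = 1:

```python
import numpy as np

# Dynamic-range compression with s = c*log10(1 + r), c = 1:
# a Fourier-spectrum range of 0 .. 1.5e6 compresses to roughly 0 .. 6.2.
r = np.array([0.0, 1.0e3, 1.5e6])
s = np.log10(1 + r)
```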
5. Power-law transformations:
s = C r^γ
where r is the pixel value, or
g (x, y) = C [f (x, y)]^γ
Gamma correction:
- If γ < 1: dark values are mapped to a wider range (the image brightens)
- If γ > 1: values are compressed to a lower range (the image darkens)
It involves finding the correct value of γ for various devices:
- In a cathode ray tube, the intensity is a power function of
voltage, with γ = 1.8 to 2.5
- For scanners and cameras, the gamma value will vary
- If gamma correction is not proper, the image will bleach out or
become too dark
Gamma correction is also useful for general-purpose
contrast manipulation.
- For colour images gamma correction is even more important:
it is used for changing the ratio of the blue, green and red proportions
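A minimal sketch of the power-law transformation, assuming intensities normalized to [0, 1] and γ = 0.4 as an illustrative brightening value:

```python
import numpy as np

# Power-law (gamma) transformation s = c * r**gamma with c = 1.
# gamma = 0.4 (< 1) lifts the dark values, brightening the image.
r = np.linspace(0, 1, 6)        # normalized grey levels 0, 0.2, ..., 1
gamma = 0.4
s = r ** gamma
```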
[Plot: power-law curves s = r^γ; curves with 0 < γ < 1 lie above the
identity line, curves with γ > 1 lie below it]
[Figure: gamma correction examples, C = 1]
Contrast enhancement using the power law, s = C r^γ:
[Figure: MRI of a spine fracture; C = 1 with γ = 0.6, 0.4 and 0.3]
6. Intensity (grey) level slicing
Highlights a specific range of gray levels in an image
(e.g. enhancing water bodies in satellite images, or features in X-ray images).
Approaches:
1. Display all values in the range of interest at one grey value (say white)
and all other intensities at another value (say black)
- Results in a binary image
(the values of interest at the white level)
2. Brighten (or darken) the desired range of
intensity levels and leave all other intensities
unchanged
Approach 1: brightens the range [A, B], darkens all other values:
s = High  for A <= r <= B
s = Low   otherwise
[Figure: original image, and the image after applying the transfer function]
Approach 2: brightens the range [A, B], preserving the other grey levels:
s = High  for A <= r <= B
s = r     otherwise
7. Bit-plane slicing
- The intensity of each pixel in a 256-level grey scale image
is composed of 8 bits
- Number of bits required to represent the grey level: 8
Bits 7 6 5 4 3 2 1 0
(bit 7 is the most significant bit,
bit 0 is the least significant bit)
Each pixel is represented by 8 bits:
black as 00000000,
white as 11111111
- The bit-plane slicing technique shows
the importance / contribution of each bit
to the final image appearance
- This can be done as follows:
- Consider the LSB (least significant bit) of each pixel and
plot the image using the LSBs only
- Continue for each bit up to the MSB
- We get 8 different images, all binary (one for each bit)
[Figure: one 8-bit byte sliced into bit plane 8 (most significant, bit 7)
down to bit plane 1 (least significant, bit 0)]
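Bit-plane extraction can be sketched with shifts and masks:

```python
import numpy as np

# Bit-plane slicing sketch: plane k of an 8-bit image is (pixel >> k) & 1.
img = np.array([[0b10110101, 0b00000001],
                [0b11111111, 0b10000000]], dtype=np.uint8)

planes = [(img >> k) & 1 for k in range(8)]  # planes[0] = LSB ... planes[7] = MSB
msb_image = planes[7] * 255                  # display the MSB plane as a binary image
```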
[Figure: the original image and its eight bit planes, bit 7 (MSB) through
bit 0 (LSB)]
- The higher-order bits contain the most visually significant data
- The lower-order bits contain subtler details
- Instead of highlighting a range of levels,
the contribution of each bit to the image can be highlighted
Neighborhood processing (enhancement in the spatial domain):
- In neighborhood processing we consider not only
the intensity value of a pixel,
but also the intensity values of
its immediate neighborhood pixels
- The value of the pixel f (x, y) is changed based on the values
of its immediate neighbors
(8 neighbors in the case of a 3 x 3 neighborhood)
- The neighborhood can be 3 x 3, 5 x 5, 7 x 7, . . .
- In neighborhood processing a mask is placed on the image
- Each coefficient of the mask is multiplied by
the corresponding image pixel value
- The sum of all the products
gives the new value of the pixel
at the centre of the mask
After the new value of a pixel at the center of the mask is calculated,
the mask is shifted by one step (pixel)
(From left to right and top to bottom)
The operation is the same as the convolution of two signals:
one signal is flipped, then moved across the other signal step by step.
- The same is done in neighborhood processing,
except the mask is not flipped, as it is symmetrical
The 3 x 3 neighborhood of (x, y):

f (x-1, y-1)  f (x-1, y)  f (x-1, y+1)
f (x,   y-1)  f (x,   y)  f (x,   y+1)
f (x+1, y-1)  f (x+1, y)  f (x+1, y+1)

The 3 x 3 mask / window / template:

w1  w2  w3
w4  w5  w6
w7  w8  w9

The new value of the centre pixel:
g (x, y) = f (x-1, y-1).w1 + f (x-1, y).w2 + f (x-1, y+1).w3
         + f (x,   y-1).w4 + f (x,   y).w5 + f (x,   y+1).w6
         + f (x+1, y-1).w7 + f (x+1, y).w8 + f (x+1, y+1).w9
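The weighted sum above can be sketched directly (interior pixels only; border handling is discussed later):

```python
import numpy as np

# 3x3 mask response: g(x, y) = sum of w(i, j) * f(x+i, y+j) over the
# neighborhood, computed here for interior pixels only.
def apply_mask(f, w):
    M, N = f.shape
    g = np.zeros_like(f, dtype=float)
    for x in range(1, M - 1):
        for y in range(1, N - 1):
            g[x, y] = np.sum(w * f[x - 1:x + 2, y - 1:y + 2])
    return g

f = np.arange(25, dtype=float).reshape(5, 5)
w = np.full((3, 3), 1 / 9)       # box (averaging) mask as an example
g = apply_mask(f, w)
```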
Neighborhood processing can be used for image filtering such as:
- Low-pass, high-pass and band-pass filtering
Frequencies in an image:
- Just as signals have high and low frequencies,
images have frequencies too
- High frequency in a signal means that the number of oscillations
per unit time is high
- If the signal is a voltage, high frequency means that
the voltage is changing at a high rate
- In images, the rate of change of grey levels corresponds to frequency:
if an image has only one grey level, the change of grey level
across the image is zero,
so the frequency is zero (DC)
At edges in an image
the rate of change of grey levels is high,
so edges represent high-frequency regions
[Figure: an intensity profile with a low-frequency region, a high-frequency
region (edge), and another low-frequency region]
In most images the background is considered a low-frequency region,
whereas the edges are considered high-frequency regions.
A low-pass filter removes / blurs /
smoothens the high-frequency components,
such as edges and noise.
NOISE:
- The principal sources of noise in digital images arise during the
acquisition and transmission of images.
Based on the shape of the noise probability density function (PDF),
noise is classified as:
1. Gaussian noise
2. Salt-and-pepper noise /
impulse noise / speckle noise
3. Rayleigh noise
4. Gamma noise
5. Exponential noise
6. Uniform noise
The first two, Gaussian and salt-and-pepper noise, are the most common.
Gaussian noise
Statistical noise with a probability density function
equal to that of the normal distribution,
also known as the Gaussian distribution:

p (z) = [1 / (σ √(2π))] e^(-(z - μ)² / (2σ²))

where
z is the grey level,
μ is the mean (average) value of z,
σ is the standard deviation, and
σ² is the variance

[Plot: bell curve of p(z) vs grey level, with peak 1/(σ √(2π)) at z = μ
and value 0.607/(σ √(2π)) at z = μ ± σ]
About 70 % of the values lie in the range [μ - σ, μ + σ].
Gaussian noise has its maximum at z = μ.
Salt-and-Pepper Noise
The PDF of salt-and-pepper (bipolar impulse) noise is:
p (z) = Pa  for z = a
p (z) = Pb  for z = b
p (z) = 0   otherwise
Generally a and b are the black and white grey levels, respectively
(for an 8-bit image, a = 0 and b = 255;
white dots are salt noise and black dots are pepper noise).
[Plot: impulses of height Pa at grey level a and Pb at grey level b]
Spatial filtering
- Filtering, in terms of signal frequencies,
refers to accepting (passing) or rejecting certain
frequency components:
a low-pass filter passes low frequencies (rejects high frequencies);
a high-pass filter passes high frequencies (rejects low frequencies)
- The filtering effect on images can be achieved with spatial filters,
also called spatial masks, kernels, templates or windows
- Filtering operations that are performed directly on the pixels of
an image are called SPATIAL FILTERING.
A spatial filter consists of:
1. A neighborhood (a rectangle of pixels) around a pixel
2. A predefined operation performed on the image pixels
encompassed by the neighborhood
- The filtering process produces a new pixel value for the pixel at
the centre of the neighborhood;
the new value is the result of the filtering operation.
A filtered image is generated
as the centre of the filter visits each pixel of the image.
Linear spatial filter:
- If the operation performed on the image pixels is linear,
the filter is called a linear filter;
- otherwise, if the operations are nonlinear, the filter
is called a nonlinear filter
Linear spatial filter
- The response is the sum of the products of the filter coefficients and
the image pixel values encompassed by the filter.
- Spatial operations work with the values of the image pixels
in the neighborhood of a pixel and
the corresponding values of a
subimage having the same dimensions as the neighborhood.
- This subimage is the filter, mask, kernel, template or window.
The values in the filter subimage are referred to as coefficients.
- The mask is generally of odd size, e.g. 3 x 3, 5 x 5, 7 x 7, etc.
Spatial filters can also be used for nonlinear filtering,
which is not possible in frequency-domain filtering.
[Diagram: image f (x, y) with origin at (0, 0); a 3 x 3 mask of coefficients
w(-1, -1) . . . w(1, 1) placed on the image, with the centre coefficient
w(0, 0) aligned with the pixel at location (x, y)]

Response R, the new value of pixel (x, y):
g (x, y) = w(-1, -1).f(x-1, y-1) + w(-1, 0).f(x-1, y) + . . . + w(0, 0).f(x, y)
         + . . . + w(1, 1).f(x+1, y+1)
Linear filtering of an image
Image size: M x N, with x = 0, 1, . . ., M-1 and y = 0, 1, . . ., N-1.
Filter mask size: m x n, with a = (m-1)/2 and b = (n-1)/2.

g (x, y) = Σ (s = -a..a) Σ (t = -b..b) w (s, t) . f (x + s, y + t)

for x = 0, 1, 2, . . ., M-1 and y = 0, 1, 2, . . ., N-1.

It is also called a convolution mask,
as linear filtering is similar to convolution in the frequency domain.
Response R = w1 z1 + w2 z2 + . . . + w(mn) z(mn),
where the w's are the coefficients of an m x n filter
and the z's are the corresponding image intensities
of the pixels encompassed by the filter:

R = Σ (i = 1..mn) wi . zi

For a 3 x 3 mask, m.n = 9; therefore R = Σ (i = 1..9) wi . zi
Coefficients of the mask:     Pixel values encompassed by the mask:

w1  w2  w3                    z1  z2  z3
w4  w5  w6                    z4  z5  z6
w7  w8  w9                    z7  z8  z9

For a 3 x 3 mask with the coefficients embedded as above:

R = w1 z1 + w2 z2 + . . . + w9 z9 = Σ (k = 1..9) wk zk

where w and z are 9-dimensional vectors.
At the boundary of an image:
1. Simplest solution:
- Don't take the mask to the border pixels;
limit the centre to (n-1)/2 pixels away from the border
- The resultant image will be smaller than the original
2. Pad the image with extra rows and columns of:
a) 0 values,
b) some other constant value, or
c) replicas of the last columns and rows
Smoothing Spatial Filters (Low-pass or Averaging filters)
- Remove the high-frequency content of an image,
keeping (passing) the low-frequency components
- Used for blurring and noise reduction
(noise is a high-frequency component of an image)
- Blurring is used as a preprocessing step for:
1. Removal of small details from an image prior to
object extraction
2. Bridging small gaps in lines or curves
- Noise reduction can be accomplished by blurring with:
- a linear filter, as well as
- a nonlinear filter
- The output of the filter is simply the average value of
the pixels contained in the neighborhood of the filter mask:
it replaces the value of every pixel in an image by
the average of the gray levels in the neighborhood
defined by the filter mask
- This filter reduces sharp transitions in gray levels.
Sharp transitions are of two types:
1. Random noise in the image
2. Edges of objects in the image
- Thus smoothing can blur edges;
this is an undesirable effect of the averaging filter
Smoothing spatial filter
Used for blurring or noise removal, and for
removing small details prior to object extraction;
the resulting image has less sharp edges.
3 x 3 average (smoothing) filter:

R = (1/9) Σ (i = 1..9) zi

A spatial averaging filter with all coefficients equal is also called a
box filter:

        1  1  1
1/9  x  1  1  1
        1  1  1

(9 is the sum of the 9 coefficients, for finding the average;
for a 5 x 5 mask the multiplying factor is 1/25)

In terms of the general formula R = Σ (k = 1..9) wk zk,
the mask is:

1/9  1/9  1/9
1/9  1/9  1/9
1/9  1/9  1/9
Weighted average filter:
Some pixels have higher weightage than others: the centre most,
and the 4-neighbors (N4) more than the diagonal neighbors (ND):

         1  2  1
1/16  x  2  4  2
         1  2  1

(16 is the sum of all 9 coefficients)
The blurring effect increases as the mask is made larger.
Some other low-pass averaging masks:

        0  1  0              1  1  1
1/6  x  1  2  1     1/10  x  1  2  1
        0  1  0              1  1  1
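A quick check of the 1/16-weighted mask (the flat-patch example is my own illustration): because the coefficients sum to 1, flat regions pass through unchanged.

```python
import numpy as np

# The weighted-average mask sums to 1 (via the 1/16 factor), so a
# constant region is left unchanged while local noise is attenuated.
w = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]]) / 16.0

flat_patch = np.full((3, 3), 100.0)          # a constant 3x3 region
new_center = float(np.sum(w * flat_patch))   # stays 100
```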
[Figure: smoothing with square averaging filters. Original image (500 x 500
pixels) and results with average filters of size 3 x 3, 5 x 5, 9 x 9, 15 x 15
and 35 x 35; blurring increases with the size of the mask]
Application of the spatial averaging filter:
- A gross representation of the object of interest can be obtained:
- the intensity of smaller objects blends with the background,
- while larger objects become blob-like and easy to detect
- The size of the mask decides the relative size of the objects that will be
blended with the background
[Figure: original image (528 x 485 pixels); image processed with a 15 x 15
average filter; image thresholded at 25 % of the highest intensity]
Order-statistics (nonlinear) filters
The response of these nonlinear filters is based on
ordering (ranking) the pixels, by grey level value,
contained in the image area encompassed by the filter;
- the result of the ranking replaces the value of the centre pixel
- Useful order-statistics filters include:
1. the Median filter
2. the Max filter, and
3. the Min filter
Median filter: response R = median {zk | k = 1, 2, . . ., n x n}
Max filter:    response R = max {zk | k = 1, 2, . . ., n x n}
Min filter:    response R = min {zk | k = 1, 2, . . ., n x n}
where zk is the value of the kth pixel of the image under the mask
and n x n is the size of the mask.
Median filter (also called the 50th percentile filter)
- Replaces the value of the pixel by the median of the grey levels of the
neighborhood pixels
- Provides excellent noise reduction
- Reduces impulse noise (salt-and-pepper noise)
(salt-and-pepper noise appears as white and black dots
superimposed on the image)
The median of a set of values is such that
half of the values are less than the median
and the other half are greater than the median:

f (x, y) = median {g (s, t)},  (s, t) in Sxy
The steps to perform median filtering:
1. Place the mask on the image,
with the centre of the mask at the top left-hand corner pixel of the image
2. Sort the neighborhood pixels encompassed by the mask by value
(ascending or descending order)
3. Determine the median value
4. Assign the median value to the pixel at the centre of the mask
5. Move the centre of the mask to each pixel of the image
(from left to right, top to bottom)
Median value:
in a 3 x 3 neighborhood, the 5th largest value is the median;
in a 5 x 5 neighborhood, the 13th largest value is the median.
Example: neighborhood values in a 3 x 3 mask:
{10, 20, 20, 20, 15, 20, 20, 25, 100}

10  20  20
20  15  20
20  25  100

Sorted values: {10, 15, 20, 20, 20, 20, 20, 25, 100}
The median is the 5th value = 20;
20 replaces the centre pixel 15.
- The principal function of the median filter is
to make pixels with distinct intensities more like their neighbors
- Isolated light or dark clusters of pixels are eliminated:
clusters with area less than (m x n)/2
(half the area of the filter) are eliminated,
where m x n is the size of the mask
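The worked example above can be checked with NumPy:

```python
import numpy as np

# Median filtering of the 3x3 example neighborhood above.
patch = np.array([[10, 20, 20],
                  [20, 15, 20],
                  [20, 25, 100]])

# sorted: 10, 15, 20, 20, 20, 20, 20, 25, 100 -> 5th value is the median
median = int(np.median(patch))
# 20 replaces the centre value 15; the outlier 100 does not skew the result
```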
[Figure: original image; image corrupted by salt-and-pepper noise; noise
reduction by a 3 x 3 average filter; noise reduction by a 3 x 3 median filter]
Max. and Min. Filters
Max. filter (100th percentile filter):
- Useful for finding the brightest points in an image
- Pepper noise (black spots), having very low values,
is reduced by this filter:
f (x, y) = max {g (s, t)},  (s, t) in Sxy
Min. filter (0th percentile filter):
- Useful for finding the darkest points in an image
- Salt noise (white spots), having very high values,
is reduced by this filter:
f (x, y) = min {g (s, t)},  (s, t) in Sxy
Sharpening Spatial Filters (High-pass filters)
- Highlight fine detail and transitions in intensity in an image
- Enhance detail that has been blurred:
- either due to an error, or
- as a natural effect of a particular method of
image acquisition
- Sharpening can be accomplished by spatial differentiation
(averaging is analogous to integration and results in blurring)
- Image differentiation:
- enhances edges and other discontinuities,
such as noise, and
- deemphasizes slowly varying intensities
Sharpening filters are based on first- and second-order derivatives.
First derivative:
- Must be zero in areas of constant intensity
- Must be nonzero along ramps
- Must be nonzero at the onset of
a grey-level ramp or step
Second derivative:
- Must be zero in flat segments
(areas of constant grey-level values)
- Must be zero along ramps of constant slope
- Must be nonzero at the onset and the end of
a grey-level step or ramp, with a change in sign
For a digital image f (x, y):
- The maximum possible intensity change is finite (the number of grey levels, e.g. 256)
- The shortest distance over which a change can occur is between adjacent pixels
- Basic definitions:
The first-order derivative of a one-dimensional function f (x):
∂f/∂x = f (x+1) - f (x)
= next pixel value - current pixel value
e.g. for the sequence 5, 6, 8, 0 the first differences are 1, 2, -8
The second-order derivative of a one-dimensional function:
∂²f/∂x² = f (x+1) + f (x-1) - 2 f (x)
= previous pixel value + next pixel value
- 2 x the present pixel value
e.g. for the sequence 5, 6, 8, 0 the second differences are 1, -10
At any edge:
- the first derivative has a nonzero value;
- the second derivative produces a double edge,
one pixel thick, with opposite signs.
The second derivative sharpens the image:
- it enhances fine detail much better
than the first-order derivative
[Figure: an intensity profile containing a ramp, an isolated point, a thin
line and a step, together with its first derivative f (x+1) - f (x) and
second derivative f (x+1) + f (x-1) - 2 f (x)]
Comparing the responses of the first- and second-order derivatives:
- First-order derivatives:
- generally produce thicker edges in an image
- generally have a stronger response to a gray-level step
- are used mainly for edge extraction
- Second-order derivatives:
- have a stronger response to fine detail,
such as thin lines and isolated points
- produce a double response at step changes in gray level
- are better than first-order derivatives
because of their much better ability to enhance fine detail
- are easier to implement
Thus only second-order derivatives are considered here for high-pass filtering.
Using the second-order derivative for image sharpening: the Laplacian
- The filter should be isotropic,
so that its response is
independent of the direction of the discontinuities
in the image
- Isotropic filters are rotation invariant:
- rotating the image and then applying the filter
gives the same result as
- applying the filter first and then rotating the result
The simplest isotropic derivative operator is the Laplacian.
The Laplacian of an image f (x, y) of two variables is defined as:
∇²f = ∂²f/∂x² + ∂²f/∂y²
∇, the inverted delta (nabla), represents differentiation
Because derivatives of any order are linear operations,
the Laplacian is a linear operator.
Second derivative in the x direction:
∂²f/∂x² = f (x+1, y) + f (x-1, y) - 2 f (x, y)
Similarly in the y direction:
∂²f/∂y² = f (x, y+1) + f (x, y-1) - 2 f (x, y)
Thus the discrete Laplacian of two variables is:
∇²f (x, y) = f (x+1, y) + f (x-1, y) + f (x, y+1) + f (x, y-1)
- 4 f (x, y)
This equation can be implemented by the Laplacian mask.
The Laplacian mask for the second-order derivative:

0   1   0
1  -4   1
0   1   0

The diagonal directions can be included
by adding two more terms for the diagonals and
subtracting 4 more from the centre.
All-directional filter:

1   1   1
1  -8   1
1   1   1

- The above mask yields isotropic results in increments of 45 degrees.
In practice, Laplacian masks with the opposite signs are also seen.
The two other masks are:

 0  -1   0          -1  -1  -1
-1   4  -1    and   -1   8  -1
 0  -1   0          -1  -1  -1

- These are obtained from definitions of the derivatives that are the
negatives of the ones used earlier.
- They yield equivalent results, but the difference in sign must be kept in
mind when combining a Laplacian-filtered image with another image.
Sharpening with the Laplacian filter (second-order derivative):
∇²f (x, y) along x, y and both diagonal directions
= f (x+1, y) + f (x-1, y) - 2 f (x, y)
+ f (x, y+1) + f (x, y-1) - 2 f (x, y)
+ f (x+1, y+1) + f (x-1, y-1) - 2 f (x, y)
+ f (x-1, y+1) + f (x+1, y-1) - 2 f (x, y)
(= sum of all 8 neighbors - 8 x the current pixel value)
Since the Laplacian is a derivative operator:
- it highlights discontinuities in an image and
deemphasizes regions with slowly varying intensities,
resulting in an image with greyish edges and
other discontinuities superimposed on a
dark, featureless background
Background features can be recovered
while preserving the sharpening effect:
- add or subtract the Laplacian image to/from the original image
(add or subtract depending on the sign of the centre coefficient
of the mask: if it is negative, we subtract the Laplacian image)
The basic way the Laplacian is used for image sharpening is:
g (x, y) = f (x, y) + c . [∇²f (x, y)]
where c is a constant:
c = -1 if the Laplacian filters
with -4 or -8 at the centre are used;
otherwise c = +1 for the other two filters, with +4 and +8 at the centre
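A minimal numeric sketch of Laplacian sharpening with the centre-negative 4-neighbour mask (the 3 x 3 test image is my own illustration):

```python
import numpy as np

# Laplacian sharpening, g = f + c * lap, with the centre-negative
# 4-neighbour mask (so c = -1).
lap_mask = np.array([[0,  1, 0],
                     [1, -4, 1],
                     [0,  1, 0]], dtype=float)

f = np.array([[10, 10, 10],
              [10, 50, 10],
              [10, 10, 10]], dtype=float)   # an isolated bright point

lap = float(np.sum(lap_mask * f))           # 4*10 - 4*50 = -160
g_center = f[1, 1] + (-1) * lap             # 50 + 160 = 210: the point is boosted
```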
Scaling
The Laplacian contains both positive and negative values.
- Negative values can be set to zero for display, but then the
result will be mostly black
- A better method is to scale the Laplacian:
add the minimum value to all pixels of the image
to bring the minimum pixel value to zero,
then scale to the full 255 levels by
multiplying by 255 / (maximum pixel value in the image)
- After scaling, the result will have a mostly grey background
[Figure: blurred image of the moon; Laplacian-filtered image without scaling
(mostly dark); Laplacian image scaled for display (grey background); image
enhanced by adding/subtracting the Laplacian to/from the original]
Unsharp Masking and High-boost Filtering
A process used in printing to sharpen images.
It consists of subtracting
an unsharp (smoothed) version of an image
from the original image.
This process is called unsharp masking.
The steps for unsharp masking:
1. Blur the original image (blurring is obtained with an averaging filter)
2. Subtract the blurred/smoothed image from the original image;
the resulting difference is called the mask
3. Add the mask to the original image
Let the original image be f (x, y)
and the blurred image be f̄ (x, y).
The mask: gmask (x, y) = f (x, y) − f̄ (x, y)
A weighted portion of the mask is added to the original image.
Sharpened image:
g (x, y) = f (x, y) + k · gmask (x, y)
where k is a weight (k ≥ 0).
Unsharp masking:
If k = 1, we have unsharp masking.
High-boost filtering:
If k > 1, the process is referred to as
high-boost filtering.
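The unsharp-masking steps can be sketched as follows (a minimal Python illustration, assuming a 3 × 3 average filter for the blur; k = 1 gives unsharp masking, k > 1 high boost):

```python
# Sketch of unsharp masking / high-boost filtering:
# blur, subtract to form the mask, then add k times the mask back.
import numpy as np

def box_blur(f):
    """3x3 average filter with edge replication."""
    fp = np.pad(f, 1, mode='edge')
    out = np.zeros_like(f, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            out[i, j] = fp[i:i+3, j:j+3].mean()
    return out

def unsharp(f, k=1.0):
    """k = 1: unsharp masking; k > 1: high-boost filtering."""
    mask = f - box_blur(f)         # g_mask = f - blurred f
    return f + k * mask            # g = f + k * g_mask
```

A flat image is unchanged (its mask is zero); detail such as an isolated bright pixel is amplified.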
Mechanics of unsharp masking (figure): original signal; blurred signal with the original shown dashed; unsharp mask; sharpened signal obtained by adding the unsharp mask to the original.
Figure: original image; result of blurring with a Gaussian filter; unsharp mask; result of unsharp masking (k = 1); result of high-boost filtering (k > 1).
Composite Laplacian mask (a):
center coefficient is 4 + 1 = 5.
A second composite mask (b):
center coefficient is 8 + 1 = 9.
High-boost filtering
can be implemented
by composite masks with
center coefficient A + 4 or A + 8 (which reduce to 5 or 9 for A = 1).
Figure: original image; result of filtering with mask (a); result of filtering with mask (b).
Using the first-order derivative for (nonlinear) image sharpening  the Gradient
Let z1 . . . z9 be the grey-level values in a 3 × 3 region of an image:

z1 z2 z3     f(x−1, y−1)   f(x−1, y)   f(x−1, y+1)
z4 z5 z6  =  f(x, y−1)     f(x, y)     f(x, y+1)
z7 z8 z9     f(x+1, y−1)   f(x+1, y)   f(x+1, y+1)

The center point z5 denotes f (x, y) at an arbitrary location (x, y);
z1 denotes f (x−1, y−1), z2 denotes f (x−1, y), z3 denotes f (x−1, y+1), and so on.
The simplest approximations to the first-order derivatives:
gx = z8 − z5
gy = z6 − z5
Two other definitions (Roberts), using cross differences:
gx = z9 − z5
gy = z8 − z6
We can compute the magnitude (length) of the gradient vector:
M (x, y) = [(z9 − z5)² + (z8 − z6)²]^(1/2)
≈ |z9 − z5| + |z8 − z6|
Masks used to compute the gradient at the point labeled z5:
the Roberts cross-gradient operators.
Masks of even size are awkward to implement,
as they do not have a center of symmetry.
The smallest such filter mask is a 3 × 3 neighborhood centered at z5.
Approximations to gx and gy using a 3 × 3 neighborhood
centered at z5:
gx = ∂f/∂x = (z7 + 2z8 + z9) − (z1 + 2z2 + z3)
gy = ∂f/∂y = (z3 + 2z6 + z9) − (z1 + 2z4 + z7)
M (x, y) ≈ |(z7 + 2z8 + z9) − (z1 + 2z2 + z3)|
+ |(z3 + 2z6 + z9) − (z1 + 2z4 + z7)|
These are the Sobel operators
for the gradient in the x and y directions.
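The Sobel approximation above can be sketched as follows (a minimal Python/NumPy illustration, not part of the slides; edge pixels are handled by replication, and the magnitude uses the absolute-value form M ≈ |gx| + |gy|):

```python
# Sketch of the Sobel gradient magnitude using the 3x3 approximations
# gx = (z7 + 2*z8 + z9) - (z1 + 2*z2 + z3), gy = (z3 + 2*z6 + z9) - (z1 + 2*z4 + z7).
import numpy as np

def sobel_magnitude(f):
    fp = np.pad(f, 1, mode='edge').astype(float)
    rows, cols = f.shape
    m = np.zeros((rows, cols))
    for x in range(rows):
        for y in range(cols):
            z = fp[x:x+3, y:y+3]        # z[0,0] = z1 ... z[2,2] = z9
            gx = (z[2, 0] + 2*z[2, 1] + z[2, 2]) - (z[0, 0] + 2*z[0, 1] + z[0, 2])
            gy = (z[0, 2] + 2*z[1, 2] + z[2, 2]) - (z[0, 0] + 2*z[1, 0] + z[2, 0])
            m[x, y] = abs(gx) + abs(gy)
    return m
```

A flat image gives zero everywhere; a vertical step edge gives a strong response along the edge.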
Histogram Modelling
 Histograms of images provide a global description of
the appearance of an image.
 By definition, the histogram of an image represents
the relative frequency of occurrence of
the various grey levels in the image.
 A histogram can be plotted in two ways:
1. The x-axis has the grey levels and
the y-axis the number of pixels nk
at each grey level.
2. The x-axis represents the grey levels and
the y-axis the probability of occurrence of each grey level:
p (rk) = nk / n
where p (rk) is the probability of a pixel having grey level rk,
nk is the number of pixels at the kth level, and
n is the total number of pixels in the image.
Method 1:
Grey level:           0    1    2    3    4    5    6
Number of pixels nk:  40   20   10   15   10   3    2
(Bar plot: number of pixels nk versus grey level.)
Method 2:
 In this method, in place of the number of pixels,
the probability is plotted:
p (rk) = nk / n
where rk is the kth grey level,
nk is the number of pixels at the kth level, and
n is the total number of pixels.
Grey level:           0     1     2     3      4     5      6
Number of pixels nk:  40    20    10    15     10    3      2      (n = 100)
Probability p (rk):   0.4   0.2   0.1   0.15   0.1   0.03   0.02
(Bar plot: probability p (rk) versus grey level.)
The probability histogram is known as the normalized histogram.
 The advantage of this is that the values always lie between 0 and 1.
 Generally black is 0 and white is the maximum, 1;
other grey levels are in the range 0 to 1.
Normalized Histogram:
 It is a plot of pixel probability p (rk) versus grey level rk.
Probability of a pixel being at grey level rk:
p (rk) = nk / n
where k = 0, 1, 2, . . . L−1 (grey levels),
n is the total number of pixels in the image, and
L is the number of grey levels.
The normalized values sum to 1 over k from 0 to L−1:
p (r0) + p (r1) + p (r2) + . . . + p (rL−1) = 1
Thus, loosely speaking, the normalized histogram p (rk)
gives an estimate of the probability
of occurrence of grey level rk in the image.
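As a small illustration of p(rk) = nk / n (a sketch, not from the slides; it assumes non-negative integer grey levels and uses `np.bincount` to count pixels per level):

```python
# Sketch: histogram n_k and normalized histogram p(r_k) = n_k / n for an image
# with integer grey levels 0 .. levels-1.
import numpy as np

def normalized_histogram(img, levels=8):
    img = np.asarray(img)
    nk = np.bincount(img.ravel(), minlength=levels)  # pixel count per grey level
    return nk, nk / img.size                          # counts and p(r_k)
```

The probabilities returned always sum to 1, matching the identity above.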
Histograms of some simple images (figure):
 A completely white image: all pixels at the maximum grey level (255).
 A completely black image: all pixels at level 0.
 A darker image: histogram concentrated at the lower grey levels.
 A brighter image: histogram concentrated at the higher grey levels.
Histograms can be used for:
1. Image enhancement (spatial domain)
2. Image statistics (compression/segmentation)
Histogram processing can be implemented in
1. Software or
2. Hardware
Figure: histograms of a dark image, a bright image, a low-contrast image, and a high-contrast image (equally spread out at all levels).
An image whose pixels tend to occupy the entire range of grey levels
and tend to be distributed uniformly will have
an appearance of high contrast and will
exhibit a large variety of grey tones.
 Such an image shows a great amount of grey-level detail and has a high
dynamic range.
 So there is a need for:
1. Increasing the dynamic range of grey levels,
and
2. A uniform distribution of pixels among the grey levels.
For increasing the dynamic range, we can use histogram stretching.
However, for getting the uniform distribution, we have to use
histogram equalization.
Histogram Stretching:
 Used for increasing the dynamic range of an image.
 The shape of the histogram is not altered, but
it is spread so as
to cover the entire dynamic range.
The stretching is done by a straight-line equation with slope
(smax − smin) / (rmax − rmin):
s = T (r) = [(smax − smin) / (rmax − rmin)] × (r − rmin) + smin
where
smax = maximum grey level of the output,
smin = minimum grey level of the output,
rmax = maximum grey level of the input,
rmin = minimum grey level of the input.
Linear stretching (figure: s versus r):
s = T (r) = [(smax − smin) / (rmax − rmin)] × (r − rmin) + smin, for every r.
In this technique, spreading of the grey levels is obtained,
but the shape of the histogram remains the same.
However, there are many applications where we need a flat histogram.
Example:
Grey level:     0     1    2    3    4   5   6   7
No. of pixels:  100   90   85   70   0   0   0   0
Perform histogram stretching so that the new image has a dynamic range of [0, 7].
rmin = 0, rmax = 3, smin = 0, smax = 7
s = [(smax − smin) / (rmax − rmin)] × (r − rmin) + smin
= (7 − 0)/(3 − 0) × (r − 0) + 0 = (7/3) × r
Thus for r = 0:  s = 7/3 × 0 = 0
r = 1:  s = 7/3 × 1 = 7/3 ≈ 2.3 → 2
r = 2:  s = 7/3 × 2 = 14/3 ≈ 4.7 → 5
r = 3:  s = 7/3 × 3 = 21/3 = 7
Stretched histogram:
r:       0     1    2    3
s:       0     2    5    7
pixels:  100   90   85   70
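The stretching example above can be reproduced in a few lines of Python (a sketch; rounding to the nearest integer matches the example's mapping 0, 2, 5, 7):

```python
# Sketch of the linear stretching formula applied to the worked example:
# r_min = 0, r_max = 3 mapped onto the output range [0, 7].
def stretch(r, rmin, rmax, smin, smax):
    s = (smax - smin) / (rmax - rmin) * (r - rmin) + smin
    return int(round(s))            # round to the nearest output level

new_levels = [stretch(r, 0, 3, 0, 7) for r in range(4)]   # → [0, 2, 5, 7]
```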
A perfect image is one which has an equal number of pixels at all its
grey levels.
Thus, to get a perfect image, the objectives are:
1. To spread the dynamic range, and
2. To have equal pixels at all the grey levels.
This technique is known as HISTOGRAM EQUALIZATION.
Histogram Equalization:
The histogram of a low-contrast image is narrow.
 If the histogram is distributed over a wider range, the quality of the image
will be improved.
This can be achieved by adjusting the probability density function of the
original histogram of the image
so that the probability of grey levels occurring spreads equally.
Histogram equalization spreads the dynamic range and
generates an image with
an equal number of pixels at all grey levels.
Histogram equalization:
 In linear stretching, the histogram stretches,
but the shape remains the same.
 In histogram equalization, the histogram spreads to all levels and also becomes flat
(equal number of pixels at all levels).
(Sketch: nk versus r after a linear stretch, and after histogram equalization.)
Histogram equalization
 The objective is to find a transformation which would
transform a bad histogram into an equalized histogram.
We know that
s = T (r)
where s is the output grey level,
T is the transform, and
r is the grey level of the input image, with range 0 to 1:
level r = 0 is black,
level r = 1 is white;
thus any level r lies between 0 and 1.
Let s = T (r) be the equalized output.
The objective is to find T (r) so that s is uniform (a flat histogram):
T (r) should give an equal number of pixels at all levels.
The transfer function T (r) must satisfy two conditions:
a. T (r) should be a single-valued function and
monotonically increasing for the values of r
in the interval 0 to 1:
0 ≤ r ≤ 1.
b. s = T (r), the transformed values, should lie between 0 and 1:
0 ≤ T (r) ≤ 1 for 0 ≤ r ≤ 1,
i.e.
0 ≤ s ≤ 1 for 0 ≤ r ≤ 1.
The range of grey levels is taken as 0 to 1 (called the normalized range),
in place of 0 to 255, for simplicity.
If T (r) is not single-valued,
multiple levels of r can be mapped into a single output level s,
which is a big drawback; thus T (r) should be single-valued.
(Figure: a non-monotonic T (r) where r1 and r2 give the same value s1.)
(Figure: a T (r) that is both single-valued and monotonically increasing.)
 If condition b is not satisfied, then
the mapping will not be consistent with the allowed range of
grey values
(s will go beyond the highest/lowest permitted levels).
The transfer function T (rk) is both single-valued and
monotonically increasing in the interval 0 ≤ r ≤ 1,
and its value lies between 0 and 1:
0 ≤ s ≤ 1.
Condition a:
Single-valued (a one-to-one relationship) guarantees
that the inverse transformation exists.
The monotonicity condition preserves
the increasing order from black to white in the output image.
Condition b:
0 ≤ T (r) ≤ 1 for 0 ≤ r ≤ 1
guarantees that the output grey levels are in
the same range as the input levels.
Therefore the inverse transformation from s back to r exists, and it is
r = T⁻¹ (s), where 0 ≤ s ≤ 1.
The grey levels for continuous variables r and s can be characterized by
their probability density functions (PDFs) pr (r) and ps (s).
(Any intensity level in an image may be viewed as a random variable
in the interval [0, 1].)
From probability theory it is known that
if pr (r) and T (r) are known,
and
if T⁻¹ (s) satisfies condition a
(single-valued and monotonically increasing),
then the PDF of the transformed grey level s is:
ps (s) = [pr (r) · dr/ds] evaluated at r = T⁻¹ (s)    . . . (1)
i.e. the probability density of the transformed image is equal to
the probability density of the original image multiplied by
the inverse slope of the transformation.
Thus the PDF of the output intensities is determined by
the PDF of the input intensities and the transformation function,
since s and r are related by T (r).
We need to find a transformation T (r) which will give a flat histogram.
Let us consider the CDF (cumulative distribution function).
The CDF is obtained by simply integrating the PDF
(probability density function):
s = T (r) = ∫₀ʳ pr (w) dw,   0 ≤ r ≤ 1
(in discrete form, the sum p(0) + p(1) + p(2) + . . .)
Differentiating with respect to r:
ds/dr = pr (r),  so  dr/ds = 1 / pr (r)
Substituting this in relation (1):
ps (s) = [pr (r) · dr/ds]
ps (s) = pr (r) / pr (r) = 1
which is nothing but the UDF (uniform density function).
(Figure: ps (s) = 1 for 0 ≤ s ≤ 1, the uniform density, obtained from pr (r) via the cumulative distribution function ∫ pr (r) dr.)
A bad histogram becomes a flat histogram
if the cumulative distribution function is used as the transformation.
Thus the cumulative function (CDF) gives the equalized histogram:
s = ∫₀ʳ pr (w) dw,   0 ≤ r ≤ 1,  gives  ps (s) = 1.
The above technique of histogram equalization
makes use of the continuous mathematical domain.
For image processing we operate in the discrete domain.
Thus, if n is the total number of pixels,
pr (rk) = nk / n
and, in the discrete domain,
sk = Σⱼ₌₀ᵏ pr (rj)
which is the CDF (cumulative distribution function).
sk gives the equalized (linearized) histogram:
sk = T (rk) = Σⱼ₌₀ᵏ pr (rj) = Σⱼ₌₀ᵏ nj / n,   k = 0, 1, 2, . . . (L−1)
(summed up to k).
Map each pixel with level rk into the corresponding pixel with level sk
(pr (rk) versus rk is the original histogram).
sk = new value of the grey level for the grey level rk in the original image
= Σⱼ₌₀ᵏ pr (rj)   (cumulative distribution function),   k = 0, 1, 2, . . . L−1.
New value of level sk for the original level rk in the original image:
sk = T (rk) = Σⱼ₌₀ᵏ pr (rj) = pr (r0) + pr (r1) + . . . + pr (rk) = Σⱼ₌₀ᵏ nj / n
where n is the total number of pixels. Thus:
s0 = pr (r0) = n0/n                                      (j = 0 to 0, as k = 0)
s1 = pr (r0) + pr (r1) = n0/n + n1/n                     (j = 0 to 1, as k = 1)
s2 = pr (r0) + pr (r1) + pr (r2) = n0/n + n1/n + n2/n    (j = 0 to 2, as k = 2)
. . .
sk = pr (r0) + pr (r1) + . . . + pr (rk), up to k = L−1,   k = 0, 1, 2, . . . L−1
Equalize the following histogram (n = 4096):

Grey level rk:         0      1      2      3      4      5      6      7
nk:                    790    1023   850    656    329    245    122    81
PDF pr (rk) = nk/n:    0.19   0.25   0.21   0.16   0.08   0.06   0.03   0.02
CDF sk = Σ pr (rj):    0.19   0.44   0.65   0.81   0.89   0.95   0.98   1
(L−1) × sk = 7 × sk:   1.33   3.08   4.55   5.67   6.23   6.65   6.86   7
Rounded off:           1      3      5      6      6      7      7      7

PDF = probability density function; CDF = cumulative distribution function.
Taking only the first (old grey level), second (nk), and last (rounded) columns:

Old grey level:  0     1      2     3     4     5     6     7
nk:              790   1023   850   656   329   245   122   81
New grey level:  1     3      5     6     6     7     7     7

Equalized histogram (new grey level versus total pixels at that level):

New grey level:    0   1     2   3      4   5     6             7
Number of pixels:  0   790   0   1023   0   850   985           448
                                                  (656 + 329)   (245 + 122 + 81)

(Figures: original histogram, old grey level versus pixels; equalized histogram.)
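The worked example above can be checked with a short Python sketch (not from the slides; `np.rint` rounds to the nearest integer):

```python
# Sketch of discrete histogram equalization: s_k = round((L-1) * CDF_k),
# applied to the worked example with L = 8 and n = 4096.
import numpy as np

def equalize_levels(counts, L=8):
    """Return the new (equalized) grey level for each old level k."""
    counts = np.asarray(counts, dtype=float)
    cdf = np.cumsum(counts) / counts.sum()   # s_k = sum_{j<=k} n_j / n
    return np.rint((L - 1) * cdf).astype(int)

old_counts = [790, 1023, 850, 656, 329, 245, 122, 81]
mapping = equalize_levels(old_counts)        # new level for old levels 0..7
# Equalized histogram: total pixels landing on each new level.
new_hist = np.bincount(mapping, weights=np.array(old_counts, float), minlength=8)
```

This reproduces the mapping 1, 3, 5, 6, 6, 7, 7, 7 and the equalized histogram 0, 790, 0, 1023, 0, 850, 985, 448.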
Figure: original images (dark, bright, low-contrast, high-contrast) and the corresponding histogram-equalized images.
Histogram Matching or Histogram Specification
 Histogram equalization is automatic, not interactive,
and gives only an approximation to a uniform histogram.
 Histogram equalization is not always the best approach.
 Sometimes the shape of the histogram is specified to
get better enhancement.
 The method of generating an image with a specified histogram is called
histogram matching or histogram specification.
Inverse transform:
rk = T⁻¹ (sk),   k = 0, 1, 2, 3, . . . L−1
(the reverse transform satisfies conditions a and b
if none of the levels is missing)
[a: single-valued and monotonically increasing;
b: the range of s is the same as that of r, 0 to 1].
Using the inverse transform we can get back the original histogram pr (r).
Suppose
pr (r) is the input PDF (probability density function) and
ps (s) is the output PDF (to be matched).
Let k represent the grey level of some intermediate (equalized) result.
Then
k = T1 (r) = ∫₀ʳ pr (w) dw   and   k = T2 (s) = ∫₀ˢ ps (w) dw.
In both cases pk (k) is uniform.
Thus
s = T2⁻¹ (k) = T2⁻¹ (T1 (r)),  i.e.  s = T2⁻¹ ( ∫₀ʳ pr (w) dw ).
Steps for histogram specification:
1. Equalize the levels of the original histogram.
2. Use the specified density function and
obtain its transfer function.
3. Apply the inverse transformation function.
Given histogram (a):

Grey level:     0     1      2     3     4     5     6     7
No. of pixels:  790   1023   850   656   329   245   122   81

and histogram (b):

Grey level:     0   1   2   3     4     5      6     7
No. of pixels:  0   0   0   614   819   1230   819   614

Modify histogram (a) as specified by (b).
First equalizing (a), we get:

Grey level:     0   1     2   3      4   5     6     7
No. of pixels:  0   790   0   1023   0   850   985   448
Now equalize (b) (N = 4096):

Grey level:            0     1     2     3       4      5      6      7
nk:                    0     0     0     614     819    1230   819    614
PDF pr (rk) = nk/N:    0.0   0.0   0.0   0.149   0.20   0.30   0.20   0.15
CDF sk = Σ pr (rj):    0.0   0.0   0.0   0.149   0.35   0.65   0.85   1
(L−1) × sk = 7 × sk:   0.0   0.0   0.0   1.05    2.45   4.55   5.97   7
Rounded off:           0     0     0     1       3      5      6      7

From the above, the reverse-level mapping is:
1 → 3,  3 → 4,  5 → 5,  6 → 6,  7 → 7
To obtain the specified histogram, we apply the inverse transform,
comparing equalized (a) and (b).
Substituting the reverse-mapping values (1 → 3, 3 → 4, 5 → 5, 6 → 6, 7 → 7)
into the equalized histogram of (a):

Equalized (a) level:  0   1     2   3      4   5     6     7
No. of pixels:        0   790   0   1023   0   850   985   448
Substituted level:        3         4          5     6     7

The final histogram:

Grey level:     0   1   2   3     4      5     6     7
No. of pixels:  0   0   0   790   1023   850   985   448
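The three specification steps can be sketched in Python on the worked example (an illustration, not the slides' code; the inverse transform is implemented as a nearest-value lookup into the equalized levels of (b), a common discrete approximation):

```python
# Sketch of histogram specification: equalize (a), equalize the specified
# histogram (b), then map each equalized level of (a) to the nearest
# equalized level of (b).
import numpy as np

def equalize_levels(counts, L=8):
    counts = np.asarray(counts, dtype=float)
    return np.rint((L - 1) * np.cumsum(counts) / counts.sum()).astype(int)

def specify(counts_a, counts_b, L=8):
    sa = equalize_levels(counts_a, L)     # step 1: equalize (a)
    sb = equalize_levels(counts_b, L)     # step 2: equalize specified (b)
    # step 3: inverse transform via nearest equalized level of (b)
    return np.array([int(np.argmin(np.abs(sb - v))) for v in sa])

a = [790, 1023, 850, 656, 329, 245, 122, 81]
b = [0, 0, 0, 614, 819, 1230, 819, 614]
final_map = specify(a, b)                 # new level for old levels 0..7
final_hist = np.bincount(final_map, weights=np.array(a, float), minlength=8)
```

This reproduces the final histogram 0, 0, 0, 790, 1023, 850, 985, 448 of the worked example (exact intermediate rounding may differ slightly from the slides' table).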
Local Histogram Processing
 The earlier methods are global, applied to the entire image.
 Sometimes it is necessary to enhance a small area of the image
(local enhancement):
 Define a square or rectangular neighborhood/mask.
 Move the center of this mask from pixel to pixel.
 At each point, calculate the histogram of the neighborhood.
 Either histogram-equalize it or
match it to a specified histogram
to obtain the transfer function.
 Use the transfer function to map the center point.
 At each step only one column or row of the neighborhood changes,
so this method is fast.
Alternatively, a non-overlapping method can be applied,
but it produces a checkerboard effect.
(Figure: original image; image after global histogram equalization;
image after local histogram equalization using a 3 × 3 neighborhood.)
Use of Histogram Statistics for Image Enhancement
Let r be a discrete random variable representing grey level
(range 0 to L−1),
and let p (ri) denote the normalized histogram component
corresponding to value ri.
p (ri) can be viewed as an estimate of the probability
that intensity ri occurs in the image
from which the histogram was drawn.
The nth moment of r about its mean is defined as
μn (r) = Σᵢ₌₀^(L−1) (ri − m)ⁿ · p (ri)
where m is the mean value of r:
m = Σᵢ₌₀^(L−1) ri · p (ri)
The zeroth moment is 1 (the probabilities sum to 1), and the first moment
about the mean is 0.
The second moment
μ2 (r) = Σᵢ₌₀^(L−1) (ri − m)² · p (ri)
is the intensity variance of r, denoted σ² (r);
the standard deviation σ is the square root of the variance.
The global mean and global variance
can be used for image enhancement.
For an image where the contrast of only the darker portion
is to be changed,
the local mean can be compared with the global mean, and thus
only the darker portion can be identified.
For an image f (x, y):
Mean:      m = (1/MN) Σₓ₌₀^(M−1) Σᵧ₌₀^(N−1) f (x, y)
Variance:  σ² = (1/MN) Σₓ₌₀^(M−1) Σᵧ₌₀^(N−1) [f (x, y) − m]²
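The global mean and variance formulas above can be sketched directly (a minimal illustration):

```python
# Sketch of the global image statistics:
# m = (1/MN) * sum f(x, y);  var = (1/MN) * sum (f(x, y) - m)^2
import numpy as np

def image_stats(f):
    f = np.asarray(f, dtype=float)
    m = f.mean()                      # global mean over all M*N pixels
    var = ((f - m) ** 2).mean()       # global variance about the mean
    return m, var
```

The same statistics computed over a small neighborhood give the local mean and variance used to isolate, e.g., the darker region.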
(Figure: original image with a dark portion on the lower right; image where the contrast of only the darker side is isolated and enhanced.)
END
Histogram processing
Enhancement using arithmetic/ logic operations
Image subtraction.
Image Averaging
From equation (1):
ps (s) = [pr (r) · dr/ds]
A transformation function of particular importance in image processing
has the form
s = T (r) = (L−1) ∫₀ʳ pr (w) dw    . . . (2)
where w is a dummy variable of integration.
The right side of the above equation is
recognised as the cumulative distribution function (CDF) of the
random variable r, scaled by (L−1).
As PDFs are always positive,
and since this is an integral,
the value of s will increase as r increases (satisfies condition a).
When r = L−1, the integral equals 1,
so the maximum value of s is L−1 (satisfies condition b also).
ds/dr = (L−1) d/dr [∫₀ʳ pr (w) dw] = (L−1) pr (r)
ps (s) = pr (r) · dr/ds = pr (r) · 1/[(L−1) pr (r)] = 1/(L−1)
For discrete values, we deal with probabilities (histogram values)
and summations
instead of PDFs and integrations.
The probability of occurrence of intensity rk can be approximated by
pr (rk) = nk / N,   k = 0, 1, 2, . . . (L−1)
where nk is the number of pixels at rk;
the plot of pr (rk) versus rk is the histogram.
The discrete form of transformation (2):
sk = T (rk) = (L−1) Σⱼ₌₀ᵏ pr (rj) = [(L−1)/N] Σⱼ₌₀ᵏ nj
For L = 8:
s0 = 7 Σⱼ₌₀⁰ pr (rj) = 7 · pr (r0)
s1 = 7 · pr (r0) + 7 · pr (r1)
. . .