
Image Enhancement

K.PRAVEEN
AP/DECE
CEG,AU

12/18/2018 1
Image Enhancement
• Accentuation, or sharpening of image features such
as edges, boundaries, or contrast
• Does not increase the inherent information
content in the data
• In general, image enhancement is used to generate a
visually desirable image.
– It can be used as a preprocess or a postprocess.
– Highly application dependent. A technique that works for
one application may not work for another.

Image Enhancement (Contd…)
• The image enhancement methods are based on
either spatial or frequency domain techniques
– spatial domain approaches : direct manipulation of
pixels in an image
– frequency domain approach : modify the Fourier
transform of an image

Spatial Domain Method
• Image processing function may be
expressed as
g(x, y) = T[f(x, y)]
• f(x, y): input image
• g(x, y): processed image
• T : operator on f defined over some
neighborhood of (x, y)
• Neighborhood shape : square or rectangular
arrays are the most predominant due to the
ease of implementation (figure: a 3×3
neighborhood about a point (x, y) in an image)
• mask processing / filtering
• masks (filters, windows, templates)
• e.g. Image sharpening
Spatial Domain Method
• Simplest form of T : a 1×1 neighborhood, where g depends only on
the value of f at (x, y)
– T : gray-level transformation function
– s = T(r)
(r and s are variables denoting the gray levels of f(x, y) and g(x, y) at any
point (x, y))

What is a histogram?
• A graph indicating the number of times each gray level occurs
in the image, i.e. frequency of the brightness value in the
image
• The histogram of an image with L gray levels is represented by
a one-dimensional array with L elements
• Algorithm:
– Assign zero values to all
elements of the array hf;
– For all pixels (x,y) of the
image f, increment
hf[f(x,y)] by 1.
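The two-step algorithm above can be sketched as follows (Python is assumed here; the slides themselves are language-neutral):

```python
def histogram(image, L=256):
    """Count how often each gray level occurs in a 2-D image (list of rows)."""
    h = [0] * L                  # step 1: assign zero to all L elements of hf
    for row in image:
        for pixel in row:
            h[pixel] += 1        # step 2: increment hf[f(x,y)] by 1
    return h
```

For an image with gray levels in [0, L−1], the bins always sum to the total number of pixels.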

Frequency Domain Method
• Convolution theorem
g(x, y) = h(x, y) * f(x, y)

G(u, v) = H(u, v) F(u, v)

where G, H, and F are the Fourier transforms of g, h, and f

H(u, v) : transfer function (optical transfer function)

In practice, f(x, y) is given and the goal, after computation
of F(u, v), is to select H(u, v) so that the desired image

g(x, y) = F⁻¹[H(u, v) F(u, v)]

exhibits some highlighted feature of f(x, y)
Types of Image Enhancement
• There are three types of image enhancement
techniques:

– Point operations: each pixel is modified according to a
particular equation, independent of the other pixels.
– Mask operations: each pixel is modified according to the
values of the pixel’s neighbors.
– Global operations: all the pixel values in the image or
subimage are taken into consideration.
Point Operations
Point operations are zero-memory operations
where a given gray level u ∈ [0, L] is mapped into a
gray level v ∈ [0, L] according to a transformation

v = f(u)

1. Contrast Stretching
2. Clipping and Thresholding
3. Digital Negative
4. Intensity Level Slicing
5. Bit Plane Slicing
6. Log Transformation
7. Power Law Transformation
Contrast stretching
• Increase the dynamic range of
the gray levels in the image
• Before the stretching can be
performed it is necessary to
specify the upper and lower
pixel value limits over which the
image is to be normalized.
• Often these limits will just be
the minimum and maximum
pixel values that the image type
concerned allows

Contrast stretching
• Call the lower and the upper limits a and b respectively. Scan
the image to find the lowest and highest pixel values currently
present in the image. Call these c and d. Then each pixel P is
scaled using the following function:

Pout = (Pin − c) × ((b − a) / (d − c)) + a

Values below 0 are set to 0 and values above 255 are set to
255
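A minimal Python sketch of this scaling (assuming d > c, with the 8-bit limits a = 0 and b = 255 as defaults):

```python
def contrast_stretch(image, a=0, b=255):
    """Linearly map the observed range [c, d] onto the target range [a, b]."""
    c = min(min(row) for row in image)   # lowest pixel value present
    d = max(max(row) for row in image)   # highest pixel value present
    scale = (b - a) / (d - c)
    # scale each pixel, then clamp to the 8-bit range as on the slide
    return [[min(255, max(0, round((p - c) * scale + a))) for p in row]
            for row in image]
```

With c = 79 and d = 136, as in the example on the next slide, the extremes map to 0 and 255.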

Source image, its histogram, and the result after contrast
stretching using a simple linear interpolation between c = 79
and d = 136.

Although this result is a significant improvement over the
original, the enhanced image itself still appears somewhat flat.
• Alternatively, we can achieve
better results by contrast
stretching the image over a
narrower range of gray-level
values from the original image
• For example, by setting the
cutoff fraction parameter to
0.03, we obtain the contrast-
stretched image

• Setting the cutoff fraction to
a higher value, e.g. 0.125,
yields the contrast stretched
image

Contrast Stretching

v = α·u,             0 ≤ u < a
v = β·(u − a) + v_a, a ≤ u < b
v = γ·(u − b) + v_b, b ≤ u < L

The gray-scale intervals where pixels occur most frequently
are stretched to improve the overall visibility of the scene
Clipping and Thresholding
• Clipping-
• This is useful for noise reduction when the input
signal is known to lie in the range [a,b]
A special case of contrast stretching, where α = γ = 0

• Thresholding:

v = L − 1 for u ≥ T
v = 0 for u < T

A special case of clipping
Thresholding
• Separate out the regions of the image corresponding to objects
in which we are interested, from the regions of the image that
correspond to background
• perform this segmentation on the basis of the different
intensities or colors in the foreground and background regions
of an image
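A minimal Python sketch of global thresholding (the ≥ convention for the foreground is an assumption; the slide's piecewise rule leaves the boundary case ambiguous):

```python
def threshold(image, T, L=256):
    """Map pixels at or above T to the foreground value L-1, the rest to 0."""
    return [[L - 1 if p >= T else 0 for p in row] for row in image]
```

With T = 120, as in the example that follows, every pixel becomes either pure black or pure white.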

• A) shows a classic bi-modal intensity distribution. This image
can be successfully segmented using a single threshold T1. B)
is slightly more complicated. Here we suppose the central
peak represents the objects we are interested in and so
threshold segmentation requires two thresholds: T1 and T2.
In C), the two peaks of a bi-modal distribution have run
together and so it is almost certainly not possible to
successfully segment this image using a single global
threshold

Input image; output using a single threshold at a pixel
intensity value of 120
Adaptive Thresholding
• Whereas the conventional thresholding
operator uses a global threshold for all pixels,
adaptive thresholding changes the threshold
dynamically over the image
• This more sophisticated version of
thresholding can accommodate changing
lighting conditions in the image, e.g. those
occurring as a result of a strong illumination
gradient or shadows
Adaptive Thresholding
• For each pixel in the image, a threshold has to be
calculated. If the pixel value is below the
threshold it is set to the background value,
otherwise it assumes the foreground value
• Two main approaches to finding the threshold:
– the Chow and Kaneko approach
– local thresholding
Assumptions: smaller image regions are more likely to have
approximately uniform illumination, thus being more suitable
for thresholding
Chow and Kaneko approach
• Divide an image into an array of overlapping
subimages and then find the optimum
threshold for each subimage by investigating
its histogram
• The threshold for each single pixel is found by
interpolating the results of the subimages
• The drawback of this method is that it is
computational expensive and, therefore, is not
appropriate for real-time applications
Local Thresholding
• Finding the local threshold is to statistically examine the
intensity values of the local neighborhood of each pixel
• The statistic which is most appropriate depends largely on
the input image. Simple and fast functions include the mean
of the local intensity distribution,
T  mean
the median value,
T  median

or the mean of the minimum and maximum values,


min  max
T 
12/18/2018
29 2
• Source image: the image contains a
strong illumination gradient, so global
thresholding produces a very poor result
• Result of adaptive thresholding using the mean of a 7×7
neighborhood.

• However, the plain mean of the local area is not suitable
as a threshold, because the range of intensity values
within a local neighborhood is very small and their mean
is close to the value of the center pixel
• Improvement: if the threshold
employed is not the mean but
(mean − C), where C is a constant,
then all pixels which exist in a uniform
neighborhood (e.g. along the margins)
are set to background

The result for a 7×7 neighborhood and C = 7
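A Python sketch of the local-mean variant with the (mean − C) statistic (an assumption here: the bright output value 255 plays the role of the uniform background/paper value, and window pixels falling outside the image are simply omitted):

```python
def adaptive_threshold(image, size=7, C=7):
    """Threshold each pixel against (local mean - C) over a size x size window.
    Pixels at or above the local threshold map to 255, the rest to 0."""
    h, w = len(image), len(image[0])
    r = size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # collect the part of the window that lies inside the image
            vals = [image[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            T = sum(vals) / len(vals) - C
            out[y][x] = 255 if image[y][x] >= T else 0
    return out
```

A pixel in a uniform region equals its local mean, so it always lands above (mean − C) and maps to 255, matching the slide's observation about uniform neighborhoods.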

• The larger window yields the poorer result, because it is more
adversely affected by the illumination gradient

Left to right: mean of a 7×7 neighborhood with C = 7;
mean of a 75×75 neighborhood with C = 10;
median of a 7×7 neighborhood with C = 4
Image negatives
• Display medical images
and photographing a screen
with monochrome positive
film
• Reverse the order from
black to white, so that the
intensity of the output
image decreases as the
intensity of the input
increases: N = L − 1 − r,
where L is the number of gray levels
and r and N denote the input and
output gray levels
Digital Negative
Applications:
• display of medical images
• produce negative prints of images
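The negative transformation from the previous slide can be sketched directly (Python is an assumption; any language works):

```python
def negative(image, L=256):
    """Digital negative: map each gray level u to (L - 1) - u."""
    return [[(L - 1) - p for p in row] for row in image]
```

Black (0) and white (255) swap, and mid-grays reflect about the middle of the scale.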

An image and its negative
Gray/Intensity-level slicing
• Highlighting a specific range of gray levels is
often desired
• Various ways to accomplish this:
• Highlight some range and reduce all others to a
constant level
• Highlight some range but preserve all other levels

Intensity Level Slicing
Without background:

v = L, a ≤ u ≤ b
v = 0, otherwise

With background:

v = L, a ≤ u ≤ b
v = u, otherwise
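Both variants can be written as one small function (a Python sketch; the flag name `keep_background` is an illustrative choice, not from the slides):

```python
def slice_levels(image, a, b, L=255, keep_background=True):
    """Highlight gray levels in [a, b]; keep or suppress all other levels."""
    return [[L if a <= p <= b else (p if keep_background else 0) for p in row]
            for row in image]
```

With `keep_background=False`, everything outside [a, b] is forced to a constant low level, as in transformation (a) on the following slide.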
Intensity-level slicing:
(a) a transformation function that highlights a range [A, B] of intensities while
diminishing all others to a constant, low level
(b) a transformation that highlights a range [A, B] of intensities but preserves all others
(c) original image
(d) result of using the transformation in (a)
These transformations permit segmentation of certain gray-level regions from
the rest of the image
Examples of display transfer functions

Manipulation of the grey scale transfer function:


a) an original, moderately low-contrast transmission light microscope image
(prepared slide of a head louse)
b) expanded linear transfer function adjusted to the minimum and maximum
brightness values
c) positive gamma (log) function
d) negative gamma (log) function
e) negative linear transfer function
f) nonlinear transfer function (high-slope linear contrast over the central portion of
the brightness range, with negative slope or solarization for the dark and bright portions)
Bit-plane slicing
• Highlighting the contribution made to the
total image appearance by specific bits
• Higher-order :
– The majority of the visually significant data
• Lower-order :
– Subtle details
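Extracting one bit plane is a shift-and-mask operation (a Python sketch):

```python
def bit_plane(image, k):
    """Extract bit plane k (0 = least significant) as a binary image."""
    return [[(p >> k) & 1 for p in row] for row in image]
```

For an 8-bit image, planes 7 down to 0 reconstruct the pixel as sum of bit·2^k, which is why the high-order planes carry most of the visually significant data.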

Bit-plane slicing

An original image and its eight bit planes (labeled 0 through 7)
Bit-plane slicing
• Plane 7 contains the most significant bits,
and plane 0 contains the least significant
bits of the pixels in the original image
Log Transformation
v  c lo g 1 0  1  u 

Fourier Spectrum Log Transformed image
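A Python sketch of the log transformation, with c chosen (an assumed but common convention) so the maximum input level maps to the maximum output level:

```python
import math

def log_transform(image, L=256):
    """v = c * log10(1 + u), with c scaled so u = L-1 maps to v = L-1."""
    c = (L - 1) / math.log10(1 + (L - 1))
    return [[round(c * math.log10(1 + p)) for p in row] for row in image]
```

This compresses the huge dynamic range of a Fourier spectrum so that its low-magnitude detail becomes visible.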

Power Law Transformation
• Power law transformations
have the following form

v  c * u
• Map a narrow range
of dark input values
into a wider range of
output values or vice
versa
• Varying γ gives a whole
family of curves
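A Python sketch of the power-law family, normalizing intensities to [0, 1] before applying γ and scaling back (a standard convention, assumed here):

```python
def gamma_transform(image, gamma, L=256):
    """v = c * u**gamma: normalize u to [0, 1], raise to gamma, rescale."""
    return [[round((L - 1) * (p / (L - 1)) ** gamma) for p in row]
            for row in image]
```

γ < 1 maps a narrow range of dark input values into a wider output range; γ > 1 does the opposite.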

Gamma Correction
• A variety of devices used for image capture,
printing, and display respond according to a
power law.
• By convention, the exponent in the power-law
equation is referred to as gamma
• The process used to correct this power-law
response phenomenon is called gamma
correction.

Example
• cathode ray tube (CRT) devices have an
intensity-to-voltage response that is a power
function, with exponents varying from
approximately 1.8 to 2.5
• With reference to the curve for γ = 2.5, we
see that such display systems would tend to
produce images that are darker
than intended

Histogram Processing
• Histogram of a digital image with gray levels
in the range [0,L-1] is a discrete function
h(rk) = nk
where
 rk : the kth gray level
 nk : the number of pixels in the image having gray
level rk
 h(rk) : histogram of a digital image with gray levels
rk

Normalized Histogram
dividing each of histogram at gray level rk by
the total number of pixels in the image, n
p(rk) = nk / n
for k = 0,1,…,L-1
p(rk) gives an estimate of the probability of
occurrence of gray level rk
The sum of all components of a normalized
histogram is equal to 1

Histogram
An image histogram is a plot of the gray-level frequencies.

Properties of Image Histogram
• Histograms with small spread correspond to low contrast
images (i.e., mostly dark, mostly bright, or mostly gray).
• Histograms with wide spread correspond to high contrast
images.

Properties of Image Histogram
Histograms clustered at the low end correspond to
dark images.
Histograms clustered at the high end correspond to
bright images.

Example
Dark Image

Components of
histogram are
concentrated on
the low side of the
gray scale.

Bright Image

Components of
histogram are
concentrated on the
high side of the
gray scale.

Example
Low contrast image
histogram is narrow and
centered toward the
middle of the gray scale

High contrast image


histogram covers broad
range of the gray scale
and the distribution of
pixels is not too far from
uniform, with very few
vertical lines being much
higher than the others

Histogram Processing
• Histograms corresponding to
four basic image types

Dark image

Bright image

Low-contrast image

High-contrast image
Histogram equalization
• Goal: to produce an image with equally
distributed brightness levels over the whole
brightness scale
• Effect: enhancing contrast for brightness values
close to histogram maxima, and decreasing
contrast near minima.
• Result is better than just stretching, and method
is fully automatic

Histogram Equalization
• As the low-contrast image’s histogram is
narrow and centered toward the middle of
the gray scale, if we distribute the histogram
to a wider range the quality of the image will
be improved
• We can do this by adjusting the probability
density function of the original histogram of
the image so that the probability is spread
equally
Histogram Equalization

Histogram equalization is an approach to enhance a given image.
The approach is to design a transformation T(·) such that the gray
values in the output are uniformly distributed in [0, 1].

Let us assume for the moment that the input image to be
enhanced has continuous gray values, with r = 0 representing
black and r = 1 representing white.

We need to design a gray-value transformation s = T(r), based
on the histogram of the input image, which will enhance the
image.
As before, we assume that:
(1) T(r) is a monotonically increasing function for 0 ≤ r ≤ 1
(preserves order from black to white).
(2) T(r) maps [0, 1] into [0, 1] (preserves the range of allowed
gray values).
Let us denote the inverse transformation by r = T⁻¹(s). We
assume that the inverse transformation also satisfies the above
two conditions.

We consider the gray values in the input image and output
image as random variables in the interval [0, 1].

Let pin(r) and pout(s) denote the probability densities of the
gray values in the input and output images.
If pin(r) and T(r) are known, and r = T⁻¹(s) satisfies condition (1), we can
write (a result from probability theory):

pout(s) = pin(r) · (dr/ds), evaluated at r = T⁻¹(s)

One way to enhance the image is to design a transformation
T(·) such that the gray values in the output are uniformly
distributed in [0, 1], i.e. pout(s) = 1, 0 ≤ s ≤ 1

In terms of histograms, the output image will have all
gray values in “equal proportion”.

This technique is called histogram equalization.
Next we show that the gray values in the output are uniformly
distributed in [0, 1].

Consider the transformation

s = T(r) = ∫₀ʳ pin(w) dw, 0 ≤ r ≤ 1

Note that this is the cumulative distribution function (CDF) of pin(r)
and satisfies the previous two conditions.

From the previous equation and using the fundamental
theorem of calculus,

ds/dr = pin(r)
Therefore, the output histogram is given by

pout(s) = [pin(r) · (1/pin(r))] at r = T⁻¹(s) = 1, 0 ≤ s ≤ 1

The output probability density function is uniform, regardless of
the input.

Thus, using a transformation function equal to the CDF of input
gray values r, we can obtain an image with uniform gray values.

This usually results in an enhanced image, with an increase in
the dynamic range of pixel values.
How to implement histogram equalization?

Step 1: For images with discrete gray values, compute:

pin(r_k) = n_k / n, 0 ≤ r_k ≤ 1, 0 ≤ k ≤ L − 1

L: total number of gray levels

n_k: number of pixels with gray value r_k

n: total number of pixels in the image

Step 2: Based on the CDF, compute the discrete version of the previous
transformation:

s_k = T(r_k) = Σ(j=0..k) pin(r_j), 0 ≤ k ≤ L − 1
Example: s_k = T(r_k) = Σ(j=0..k) pin(r_j), 0 ≤ k ≤ L − 1

Consider an 8-level 64 × 64 image with gray values (0, 1, …, 7). The
normalized gray values are (0, 1/7, 2/7, …, 1). The normalized
histogram is given below (table columns: gray value, normalized
gray value, number of pixels, fraction of pixels).

NB: The gray values in the output are also (0, 1/7, 2/7, …, 1).
Applying the transformation s_k = T(r_k) = Σ(j=0..k) pin(r_j),
we obtain the table of output levels.
Notice that there are only five distinct gray levels (1/7,
3/7, 5/7, 6/7, 1) in the output image. We will relabel them
as (s0, s1, …, s4).

With this transformation, the output image will have the
histogram shown on the next slide.
Histogram of the output image (number of pixels vs. gray values).

Note that the histogram of the output image is only approximately, and not exactly,
uniform. This should not be surprising, since there is no result that claims
uniformity in the discrete case.
Example Original image and its histogram

Histogram equalized image and its histogram

Comments:
Histogram equalization may not always produce desirable
results, particularly if the given histogram is very narrow. It
can produce false edges and regions. It can also increase
image “graininess” and “patchiness.”
Histogram Equalization
• Form the cumulative histogram
• Normalize the value by dividing it by the total
number of pixels
• Multiply these values by the maximum gray
level value and round off the value
• Map the original value to the result of step 3
by a one-one correspondence
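The four steps above can be sketched in Python (rounding to the nearest level is an assumed detail; texts vary between rounding and truncation):

```python
def equalize(image, L=256):
    """Histogram equalization: map each level through the scaled CDF."""
    n = sum(len(row) for row in image)          # total pixel count
    hist = [0] * L
    for row in image:
        for p in row:
            hist[p] += 1                        # histogram
    cdf, running = [], 0
    for count in hist:                          # step 1: cumulative histogram
        running += count
        cdf.append(running)
    # steps 2-3: normalize by n, multiply by max gray level, round
    lut = [round((L - 1) * c / n) for c in cdf]
    # step 4: one-to-one mapping of each original value
    return [[lut[p] for p in row] for row in image]
```

On a tiny 4-level image occupying only levels 0 and 1, the mapping pushes the occupied levels toward the top of the scale.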

Histogram (Matching) Specification
• Histogram equalization has the disadvantage
that it can generate only one type of output
image.

• With Histogram Specification, we can specify the


shape of the histogram that we wish the output
image to have.

• It doesn’t have to be a uniform histogram

Histogram specification / matching
• Motivation:
– Sometimes, the ability to specify particular histogram
shapes capable of highlighting certain gray-level ranges in
an image is desirable
• The aim is to produce an image with desired
distributed brightness levels over the whole
brightness scale, as opposed to uniform
distribution

Histogram specification
p_r(r): the original probability density function
p_z(z): the desired probability density function
Example

Illustration of the histogram specification method:
(a) original image; (b) image after histogram equalization;
(c) image enhanced by histogram specification;
(d) histograms (original, equalized, specified, resulting)
Histogram Specification
• Find the mapping table of the histogram
equalization
• Specify the desired histogram. Equalize the
desired histogram
• Perform the mapping process so that the
values of step 1 can be mapped to the results
of step 2.
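The three steps above can be sketched as follows (a Python sketch; breaking ties by taking the closest equalized value of the desired histogram is an assumed detail):

```python
def match_histogram(image, target_hist, L=256):
    """Histogram specification: equalize the input, equalize the desired
    histogram, then map through the closest match in the second table."""
    n = sum(len(row) for row in image)
    hist = [0] * L
    for row in image:
        for p in row:
            hist[p] += 1

    def cdf_lut(h, total):
        # equalization mapping table: scaled cumulative histogram
        lut, running = [], 0
        for c in h:
            running += c
            lut.append(round((L - 1) * running / total))
        return lut

    s = cdf_lut(hist, n)                         # step 1: equalize the input
    g = cdf_lut(target_hist, sum(target_hist))   # step 2: equalize the target
    # step 3: map each equalized input value to the target level whose
    # equalized value is closest
    inv = [min(range(L), key=lambda z: abs(g[z] - v)) for v in s]
    return [[inv[p] for p in row] for row in image]
```

Specifying a uniform target histogram reduces this to ordinary equalization (up to rounding).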

Local enhancement
• Motivation:
– To enhance details over small areas in an image
– The computation of a global transformation does not guarantee the
desired local enhancement
• Solution:
– Define a square / rectangular neighborhood
– Move the center of this area from pixel to pixel
– Histogram equalization in each neighborhood region

Image Subtraction
• The difference between two images f(x, y)
and h(x, y), expressed as
g(x, y) = f(x, y) − h(x, y)
– h(x, y) : mask - an x-ray image of a region of a patient’s body
– f(x, y) : image of the same anatomical region but acquired after
injection of a dye into the bloodstream
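Pixel-wise subtraction is one line per pixel (a Python sketch; clamping negative differences at 0 is an assumed convention, some systems rescale instead):

```python
def subtract(f, h):
    """Pixel-wise difference g(x, y) = f(x, y) - h(x, y), clamped at 0."""
    return [[max(0, fp - hp) for fp, hp in zip(frow, hrow)]
            for frow, hrow in zip(f, h)]
```

Pixels identical in both images vanish, so only the change (e.g. the injected dye) survives.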

(a) mask image; (b) image (after injection of dye into the
bloodstream) with the mask subtracted out
Example

Showing image differences by subtraction:
a) original image; b) image after moving one coin; c) difference image after pixel-by-pixel subtraction

Difference images for quality control: a master image is subtracted from
images of each subsequent part. In this example, the missing chip on a printed
circuit board is evident in the difference image
Applications of Image Subtraction and
change detection
• Medical imaging application : display blood-
flow paths
• Automated inspection of printed circuits
• Security monitoring

Two frames from a videotape sequence of free-swimming
single-celled animals in a drop of pond water, and the
difference image. The length of the white region divided by
the time interval gives the velocity.

Analysis of motion in a more complex situation: where the
paths of the swimming microorganisms cross, they are
sorted out by assuming that each path continues in a nearly
straight direction. (Gualtieri & Coltelli, 1992)
Image Averaging
• Motivation:
– Imaging with very low light levels is routine,
causing sensor noise frequently to render single
images virtually useless for analysis
– Solution: image averaging

Image Averaging
• Noisy image: g(x, y) = f(x, y) + n(x, y)
– g(x, y) = noisy image
– f(x, y) = original image
– n(x, y) = noise
• Assumption: at every pair of coordinates (x, y), the noise is
uncorrelated and has zero average value
– uncorrelated: covariance E[(x_i − m_i)(x_j − m_j)] = 0
• If an image g̅(x, y) is formed by averaging K different noisy
images,

g̅(x, y) = (1/K) Σ(i=1..K) g_i(x, y)

then

E{g̅(x, y)} = f(x, y) and σ²_g̅(x,y) = (1/K) σ²_n(x,y)

• As K increases, the variability (noise) of the pixel values at
each location (x, y) decreases
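The averaging formula can be sketched directly (a Python sketch over lists of equally sized frames):

```python
def average_images(noisy_images):
    """Average K noisy frames pixel-by-pixel; noise variance drops as 1/K."""
    K = len(noisy_images)
    h, w = len(noisy_images[0]), len(noisy_images[0][0])
    return [[sum(img[y][x] for img in noisy_images) / K
             for x in range(w)] for y in range(h)]
```

Since the noise has zero mean, the average converges to f(x, y) as K grows.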
Spatial Filtering -Contents

– What is spatial filtering?


– Smoothing Spatial filters.
– Sharpening Spatial Filters.
– Combining Spatial Enhancement Methods

Mask Operation

• Linear systems and linear filtering
• Smoothing operations
• Median filtering
• Sharpening operations
• Derivative operations
• Correlation
Neighbourhood Operations
Neighbourhood operations simply operate on a
larger neighbourhood of pixels than point
operations.
– Neighbourhoods are mostly a rectangle
around a central pixel (x, y)
– Any size rectangle and any shape filter
are possible
Neighbourhood Operations
For each pixel in the origin image, the outcome
is written at the same location in the target
image.
Simple Neighbourhood Operations
Simple neighbourhood operation examples:

– Min: set the pixel value to the minimum in the
neighbourhood

– Max: set the pixel value to the maximum in the
neighbourhood
Image Enhancement: Spatial
Filtering
Image enhancement in the spatial domain can be represented
as:

g(m, n) = T[f(m, n)]

(enhanced image = transformation applied to the given image)

The transformation T may be linear or nonlinear. We will mainly study linear
operators T but will see one important nonlinear operation.

There are two closely related concepts that must be understood when
performing linear spatial filtering. One is correlation; the other is convolution.
How to specify T
If the operator T is linear and shift invariant (LSI), characterized by
the point-spread sequence (PSS) h(m, n), then (recall convolution)

g(m, n) = h(m, n) * f(m, n)
        = Σ_l Σ_k h(m − k, n − l) f(k, l)
        = Σ_l Σ_k f(m − k, n − l) h(k, l)

In practice, to reduce computations, h(m, n) is of “finite extent”:

h(m, n) = 0, for (k, l) ∉ D

where D is a small set (called the neighborhood). D is also called the support
of h.
If h(m, n) is a 3 by 3 mask with weights

w1 w2 w3
w4 w5 w6
w7 w8 w9

centered at (m = 0, n = 0), then

g(m, n) = w1·f(m−1, n−1) + w2·f(m−1, n) + w3·f(m−1, n+1)
        + w4·f(m, n−1)   + w5·f(m, n)   + w6·f(m, n+1)
        + w7·f(m+1, n−1) + w8·f(m+1, n) + w9·f(m+1, n+1)
The output g(m, n) is computed by sliding the mask over each pixel of the
image f(m, n). This filtering procedure is sometimes referred to as moving
average filter.

Special care is required for the pixels at the border of image f(m, n). This
depends on the so-called boundary condition. Common choices are:
(1) The mask is truncated at the border (free boundary).

For one dimension:

f̃(x) = f(x), 0 ≤ x < N
f̃(x) = 0,    −(L/2)+1 ≤ x < 0 and N ≤ x ≤ N+(L/2)−1

In MATLAB this corresponds to zero padding (e.g. filter2 with the
'same' shape option).
(2) The image is extended by appending extra rows/columns at the boundaries. The
extension is done by repeating the first/last row/column or by setting them to some
constant (fixed boundary).

For one dimension:

f̃(x) = f(0),   −(L/2)+1 ≤ x < 0
f̃(x) = f(x),   0 ≤ x < N
f̃(x) = f(N−1), N ≤ x ≤ N+(L/2)−1

MATLAB (imfilter) option is 'replicate'.

(3) The boundaries “wrap around” (periodic boundary).

For one dimension:

f̃(x) = f((x + N) mod N), −(L/2)+1 ≤ x < 0
f̃(x) = f(x),             0 ≤ x < N
f̃(x) = f(x mod N),       N ≤ x ≤ N+(L/2)−1

MATLAB (imfilter) option is 'circular'; the 'symmetric' option instead
mirrors the image across the border.
In any case, the final output g(m, n) is restricted to the support of the original
image f(m, n).

The mask operation can be implemented in MATLAB using the filter2
command, which is based on the conv2 command.
The Spatial Filtering Process

A 3×3 filter with weights

j k l
m n o
p q r

is placed over a 3×3 neighbourhood of pixels

a b c
d e f
g h i

and the centre pixel e is replaced by

e_processed = n·e + j·a + k·b + l·c + m·d + o·f + p·g + q·h + r·i

The above is repeated for every pixel in the
original image to generate the filtered image
Spatial Filtering: Equation Form

g(x, y) = Σ(s=−a..a) Σ(t=−b..b) w(s, t) f(x + s, y + t)

Filtering can be given in equation form as shown above,
for a mask of size (2a + 1) × (2b + 1).
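The double sum above translates directly into nested loops (a Python sketch; skipping the border pixels, i.e. a free boundary that shrinks the output, is an assumed choice):

```python
def correlate(image, mask):
    """Spatial filtering: g(x, y) = sum over (s, t) of w(s, t) * f(x+s, y+t).
    Border pixels without a full neighbourhood are skipped, so the
    output is smaller than the input."""
    m, n = len(mask), len(mask[0])
    a, b = m // 2, n // 2
    h, w = len(image), len(image[0])
    out = []
    for y in range(a, h - a):
        row = []
        for x in range(b, w - b):
            acc = 0
            for s in range(-a, a + 1):
                for t in range(-b, b + 1):
                    acc += mask[s + a][t + b] * image[y + s][x + t]
            row.append(acc)
        out.append(row)
    return out
```

With a 1/9 box mask this reproduces the smoothing example worked later in the deck: the 3×3 neighbourhood centred on 106 averages to 98.3333.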
Smoothing Filters
Image smoothing refers to any image-to-image transformation designed to
“smooth” or flatten the image by reducing the rapid pixel-to-pixel variation in
gray values.

Smoothing filters are used for:


(1) Blurring: This is usually a preprocessing step for removing small
(unwanted) details before extracting the relevant (large) object, bridging gaps
in lines/curves,
(2)Noise reduction: Mitigate the effect of noise by linear or nonlinear
operations.

Image smoothing by averaging (lowpass spatial filtering)

Smoothing is accomplished by applying an averaging mask.

An averaging mask is a mask with positive weights which sum to 1. It
computes a weighted average of the pixel values in a neighborhood. This
operation is sometimes called neighborhood averaging.

Some 3 x 3 averaging masks:

(1/5)·[0 1 0; 1 1 1; 0 1 0]    (1/8)·[0 1 0; 1 4 1; 0 1 0]
(1/9)·[1 1 1; 1 1 1; 1 1 1]    (1/32)·[1 3 1; 3 16 3; 1 3 1]

This operation is equivalent to lowpass filtering.
Smoothing Spatial Filters
One of the simplest spatial filtering operations
we can perform is a smoothing operation
– Simply average all of the pixels in a
neighbourhood around a central value
– Especially useful in removing noise
from images
– Also useful for highlighting gross detail

Simple averaging filter:

(1/9) (1/9) (1/9)
(1/9) (1/9) (1/9)
(1/9) (1/9) (1/9)
Smoothing Spatial Filtering

Applying the simple 3*3 smoothing filter to the 3*3
neighbourhood

104 100 108
 99 106  98
 95  90  85

gives

e = (1/9)·106 + (1/9)·104 + (1/9)·100 + (1/9)·108
  + (1/9)·99 + (1/9)·98 + (1/9)·95 + (1/9)·90 + (1/9)·85
  = 98.3333

The above is repeated for every pixel in the
original image to generate the smoothed image
Image Smoothing Example
The image at the top left
is an original image of
size 500*500 pixels
The subsequent images
show the image after
filtering with an averaging
filter of increasing sizes
– 3, 5, 9, 15 and 35
Notice how detail begins
to disappear
Weighted Smoothing Filters
More effective smoothing filters can be
generated by allowing different pixels in the
neighbourhood different weights in the
averaging function
– Pixels closer to the
central pixel are more
important
– Often referred to as
weighted averaging

Weighted averaging filter:

(1/16) (2/16) (1/16)
(2/16) (4/16) (2/16)
(1/16) (2/16) (1/16)
Another Smoothing Example
By smoothing the original image we get rid of
lots of the finer detail which leaves only the
gross features for thresholding

Original Image Smoothed Image Thresholded Image

* Image taken from Hubble Space Telescope


Averaging Filter Vs. Median Filter Example

Original Image Image After Image After


With Noise Averaging Filter Median Filter

Filtering is often used to remove noise from


images
Sometimes a median filter works better than an
averaging filter
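The median filter replaces each pixel by the median of its neighbourhood, which discards outliers (salt-and-pepper noise) instead of blending them in. A Python sketch (copying the border pixels unchanged is an assumed choice):

```python
def median_filter(image, size=3):
    """Replace each interior pixel by the median of its size x size window."""
    r = size // 2
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]           # border pixels copied unchanged
    for y in range(r, h - r):
        for x in range(r, w - r):
            vals = sorted(image[y + j][x + i]
                          for j in range(-r, r + 1) for i in range(-r, r + 1))
            out[y][x] = vals[len(vals) // 2]  # middle of the sorted window
    return out
```

A single noise spike of 255 in a flat region is removed entirely, whereas an averaging filter would smear it across the neighbourhood.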
Strange Things Happen At The Edges!
At the edges of an image we are missing
pixels to form a neighbourhood.
Strange Things Happen At The Edges!
(cont…)
There are a few approaches to dealing with
missing edge pixels:
– Omit missing pixels
• Only works with some filters
• Can add extra code and slow down processing
– Pad the image
• Typically with either all white or all black pixels
– Replicate border pixels
– Truncate the image
Correlation & Convolution
The filtering we have been talking about so far is referred to as correlation, with the filter itself referred to as the correlation kernel
Convolution is a similar operation, with just one subtle difference: the kernel is rotated by 180° before it is applied

Original Image Pixels:    Filter:
a b c                     r s t
d e f                     u v w
g h i                     x y z

e_processed = z*a + y*b + x*c + w*d + v*e + u*f + t*g + s*h + r*i

For symmetric filters it makes no difference
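The difference can be demonstrated at a single pixel. This sketch (function names are illustrative) shows that correlation and convolution disagree for an asymmetric kernel and agree for a symmetric one:

```python
import numpy as np

def correlate_at(img, kernel, y, x):
    """Correlation: overlay the kernel as-is on the neighbourhood."""
    return np.sum(kernel * img[y-1:y+2, x-1:x+2])

def convolve_at(img, kernel, y, x):
    """Convolution: rotate the kernel 180 degrees first."""
    return correlate_at(img, kernel[::-1, ::-1], y, x)

img = np.arange(9, dtype=float).reshape(3, 3)
asym = np.array([[1., 0., 0.],
                 [0., 0., 0.],
                 [0., 0., 0.]])
sym = np.ones((3, 3)) / 9.0

# Asymmetric kernel: results differ; symmetric kernel: identical
assert correlate_at(img, asym, 1, 1) != convolve_at(img, asym, 1, 1)
assert correlate_at(img, sym, 1, 1) == convolve_at(img, sym, 1, 1)
```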
Sharpening Spatial Filters
Previously we have looked at smoothing filters
which remove fine detail
Sharpening spatial filters seek to highlight fine
detail
– Remove blurring from images
– Highlight edges
Sharpening filters are based on spatial
differentiation

Image Sharpening
This involves highlighting fine details or enhancing details that have been blurred.

Basic highpass spatial filtering
This can be accomplished by a linear shift-invariant operator, implemented by means of a mask with positive and negative coefficients.
This is called a sharpening mask, since it tends to enhance abrupt gray level changes in the image.
The mask should have a positive coefficient at the center and negative coefficients at the periphery. The coefficients should sum to zero. Example:

        -1  -1  -1
(1/9) × -1   8  -1
        -1  -1  -1

This is equivalent to highpass filtering.
A highpass filtered image g can be thought of as the difference between the original image f and a lowpass filtered version of f:
g(m,n) = f(m,n) − lowpass(f(m,n))
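The relation g = f − lowpass(f) can be checked directly. A sketch (not from the slides; a 3×3 box average stands in for the lowpass filter):

```python
import numpy as np

def box_blur(img):
    """3x3 averaging (lowpass); border pixels left unchanged."""
    out = img.astype(float).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = img[y-1:y+2, x-1:x+2].mean()
    return out

def highpass(img):
    """g(m,n) = f(m,n) - lowpass(f(m,n))"""
    return img.astype(float) - box_blur(img)

# A flat region contains no detail, so the highpass response is zero there
flat = np.full((5, 5), 7.0)
assert highpass(flat)[2, 2] == 0.0
```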
Example:

High-boost filtering
This is a filter whose output g is produced by subtracting a lowpass (blurred) version of f from an amplified version of f:
g(m,n) = A·f(m,n) − lowpass(f(m,n))
This is also referred to as unsharp masking.

Observe that
g(m,n) = A·f(m,n) − lowpass(f(m,n))
       = (A−1)·f(m,n) + f(m,n) − lowpass(f(m,n))
       = (A−1)·f(m,n) + highpass(f(m,n))

For A > 1, part of the original image is added back to the highpass filtered version of f.
The result is the original image with the edges enhanced relative to the original image.

Example:
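A minimal sketch of high-boost filtering, using a 3×3 box average as the lowpass step (the function names and choice of blur are illustrative, not from the slides):

```python
import numpy as np

def box_blur(img):
    """3x3 averaging (lowpass); border pixels left unchanged."""
    out = img.astype(float).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = img[y-1:y+2, x-1:x+2].mean()
    return out

def high_boost(img, A=1.5):
    """g = A*f - lowpass(f) = (A-1)*f + highpass(f)."""
    return A * img.astype(float) - box_blur(img)

flat = np.full((5, 5), 4.0)
# With A = 1 this reduces to plain highpass filtering (zero on flat regions)
assert high_boost(flat, A=1.0)[2, 2] == 0.0
# With A > 1 a scaled copy of the original is retained
assert high_boost(flat, A=2.0)[2, 2] == 4.0
```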
Spatial Differentiation
Differentiation measures the rate of change of a
function
Let’s consider a simple 1 dimensional example

1st Derivative
The formula for the 1st derivative of a function is as follows:

∂f/∂x = f(x + 1) − f(x)

It’s just the difference between subsequent values and measures the rate of change of the function
1st Derivative (cont…)

Image strip f(x):     5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7
1st derivative f′(x): 0 -1 -1 -1 -1 -1 0 0 6 -6 0 0 0 1 2 -2 -1 0 0 0 7 0 0 0
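The derivative values for the strip above can be reproduced with a one-line difference (the code is illustrative, not from the slides):

```python
strip = [5, 5, 4, 3, 2, 1, 0, 0, 0, 6, 0, 0, 0, 0,
         1, 3, 1, 0, 0, 0, 0, 7, 7, 7, 7]

# f'(x) = f(x+1) - f(x): the difference between subsequent values
first_deriv = [strip[x + 1] - strip[x] for x in range(len(strip) - 1)]

# The ramp gives a constant -1; the isolated spike gives +6 then -6;
# the step up to 7 gives a single +7.
assert first_deriv[8:10] == [6, -6]
assert first_deriv[20] == 7
```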
2nd Derivative
The formula for the 2nd derivative of a function is as follows:

∂²f/∂x² = f(x + 1) + f(x − 1) − 2f(x)

Simply takes into account the values both before and after the current value
2nd Derivative (cont…)

Image strip f(x):      5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7
2nd derivative f″(x):  -1 0 0 0 0 1 0 6 -12 6 0 0 1 1 -4 1 1 0 0 7 -7 0 0
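The second-derivative row can likewise be reproduced directly (illustrative code, not from the slides):

```python
strip = [5, 5, 4, 3, 2, 1, 0, 0, 0, 6, 0, 0, 0, 0,
         1, 3, 1, 0, 0, 0, 0, 7, 7, 7, 7]

# f''(x) = f(x+1) + f(x-1) - 2*f(x), defined at interior points
second_deriv = [strip[x + 1] + strip[x - 1] - 2 * strip[x]
                for x in range(1, len(strip) - 1)]

# The isolated spike produces the strong 6, -12, 6 "double response"
assert second_deriv[7:10] == [6, -12, 6]
```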
1st and 2nd Derivative
(Plots of f(x), f′(x) and f″(x) for the same image strip, shown together for comparison.)
Using Second Derivatives For Image
Enhancement
The 2nd derivative is more useful for image
enhancement than the 1st derivative
– Stronger response to fine detail
– Simpler implementation
– We will come back to the 1st order derivative later
on
The first sharpening filter we will look at is the
Laplacian
– Isotropic
– One of the simplest sharpening filters
– We will look at a digital implementation
The Laplacian
The Laplacian is defined as follows:

∇²f = ∂²f/∂x² + ∂²f/∂y²

where the partial 2nd order derivative in the x direction is defined as follows:

∂²f/∂x² = f(x + 1, y) + f(x − 1, y) − 2f(x, y)

and in the y direction as follows:

∂²f/∂y² = f(x, y + 1) + f(x, y − 1) − 2f(x, y)
The Laplacian (cont…)
So, the Laplacian can be given as follows:

∇²f = [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1)] − 4f(x, y)

We can easily build a filter based on this:

0   1   0
1  -4   1
0   1   0

Laplacian Mask
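The Laplacian mask can be applied with a short sketch (illustrative code; border handling is an assumption):

```python
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian(img):
    """Apply the Laplacian mask; border pixels set to zero."""
    out = np.zeros_like(img, dtype=float)
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.sum(LAPLACIAN * img[y-1:y+2, x-1:x+2])
    return out

# Flat regions give zero; an isolated bright pixel gives a strong response
img = np.zeros((5, 5))
img[2, 2] = 1.0
resp = laplacian(img)
assert resp[2, 2] == -4.0
assert resp[2, 1] == 1.0
```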
The Laplacian (cont…)
Applying the Laplacian to an image we get a new image that highlights edges and other discontinuities

Original Image | Laplacian Filtered Image | Laplacian Filtered Image Scaled for Display
But That Is Not Very Enhanced!
The result of a Laplacian filtering is not an enhanced image
We have to do more work in order to get our final image
Subtract the Laplacian result from the original image to generate our final sharpened enhanced image:

g(x, y) = f(x, y) − ∇²f

• Background features can be “recovered” while still preserving the sharpening effect of the Laplacian operation simply by adding the original and Laplacian images
Laplacian Image Enhancement

Original Image − Laplacian Filtered Image = Sharpened Image

In the final sharpened image edges and fine detail are much more obvious
Simplified Image Enhancement
The entire enhancement can be combined into a single filtering operation:

g(x, y) = f(x, y) − ∇²f
        = f(x, y) − [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4f(x, y)]
        = 5f(x, y) − f(x + 1, y) − f(x − 1, y) − f(x, y + 1) − f(x, y − 1)

Simplified Image Enhancement (cont…)
This gives us a new filter which does the whole job for us in one step:

 0  -1   0
-1   5  -1
 0  -1   0
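The one-step composite filter can be verified with a sketch (illustrative, not from the slides). Note its coefficients sum to 1, so flat regions pass through unchanged:

```python
import numpy as np

SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

def sharpen(img):
    """One-step sharpening: g = f - laplacian(f). Borders unchanged."""
    out = img.astype(float).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.sum(SHARPEN * img[y-1:y+2, x-1:x+2])
    return out

# Coefficients sum to 1: a flat region is reproduced exactly
flat = np.full((5, 5), 3.0)
assert sharpen(flat)[2, 2] == 3.0
```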
Variants On The Simple Laplacian – Composite Laplacian Mask
There are lots of slightly different versions of the Laplacian that can be used:

Simple Laplacian:    Variant of Laplacian:
0   1   0            1   1   1
1  -4   1            1  -8   1
0   1   0            1   1   1

Composite Laplacian mask:
-1  -1  -1
-1   9  -1
-1  -1  -1
Unsharp Masking and Highboost Filtering
• Unsharp masking
Sharpening an image consists of subtracting an unsharp (smoothed) version of the image from the original image
• e.g., printing and publishing industry
Steps:
1. Blur the original image
2. Subtract the blurred image from the original
3. Add the mask to the original

Unsharp masking

f_s(x, y) = f(x, y) − f̄(x, y)

f_s(x, y) – sharpened image obtained by unsharp masking
f̄(x, y) – blurred version of f(x, y)
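The three steps above can be sketched directly. This is an illustrative implementation (a box average stands in for the blur; names are assumptions), following the blur → subtract → add-back sequence:

```python
import numpy as np

def box_blur(img):
    """3x3 averaging; border pixels left unchanged."""
    out = img.astype(float).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = img[y-1:y+2, x-1:x+2].mean()
    return out

def unsharp_mask(img):
    """1. blur, 2. subtract the blur (the mask), 3. add the mask back."""
    blurred = box_blur(img)
    mask = img.astype(float) - blurred
    return img.astype(float) + mask

# An edge pixel is pushed further from the local mean, exaggerating the edge
step = np.zeros((5, 5))
step[:, 2:] = 9.0
assert unsharp_mask(step)[2, 2] > 9.0
```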
High-Boost Filtering
• A generalization of unsharp masking is called high-boost filtering:

f_hb(x, y) = A·f(x, y) − f̄(x, y)
f_hb(x, y) = A·f(x, y) − f(x, y) + f(x, y) − f̄(x, y)
f_hb(x, y) = (A − 1)·f(x, y) + f_s(x, y)
f_hb(x, y) = (A − 1)·f(x, y) + f(x, y) − ∇²f(x, y)
f_hb(x, y) = A·f(x, y) − ∇²f(x, y)
1st Derivative Filtering – The Gradient
Implementing 1st derivative filters is difficult in practice
For a function f(x, y) the gradient of f at coordinates (x, y) is given as the column vector:

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
1st Derivative Filtering (cont…)
The magnitude of this vector is given by:

∇f = mag(∇f) = [Gx² + Gy²]^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)

For practical reasons this can be simplified as:

∇f ≈ |Gx| + |Gy|
1st Derivative Filtering (cont…)
There is some debate as to how best to calculate these gradients but we will use:

∇f ≈ |(z7 + 2z8 + z9) − (z1 + 2z2 + z3)| + |(z3 + 2z6 + z9) − (z1 + 2z4 + z7)|

which is based on these coordinates:

z1  z2  z3
z4  z5  z6
z7  z8  z9
Sobel Operators
Based on the previous equations we can derive the Sobel operators:

-1  -2  -1        -1   0   1
 0   0   0        -2   0   2
 1   2   1        -1   0   1

To filter an image it is filtered using both operators, the results of which are added together
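A sketch of the combined-operator gradient magnitude at a single pixel (illustrative code; the kernel names are assumptions). The row-difference kernel responds to horizontal edges and the column-difference kernel to vertical ones:

```python
import numpy as np

SOBEL_ROW = np.array([[-1, -2, -1],
                      [ 0,  0,  0],
                      [ 1,  2,  1]], dtype=float)  # horizontal edges
SOBEL_COL = np.array([[-1, 0, 1],
                      [-2, 0, 2],
                      [-1, 0, 1]], dtype=float)    # vertical edges

def sobel_magnitude(img, y, x):
    """|Gx| + |Gy| at one pixel, per the simplified gradient magnitude."""
    win = img[y-1:y+2, x-1:x+2]
    return abs(np.sum(SOBEL_ROW * win)) + abs(np.sum(SOBEL_COL * win))

# A vertical step edge: strong response near the edge, none in flat areas
img = np.zeros((5, 5))
img[:, 3:] = 1.0
assert sobel_magnitude(img, 2, 2) > 0
assert sobel_magnitude(img, 2, 1) == 0
```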
Sobel Example
An image of a contact lens which is enhanced in order to make defects (at four and five o’clock in the image) more obvious

Sobel filters are typically used for edge detection
1st & 2nd Derivatives
Comparing the 1st and 2nd derivatives we can conclude the following:
– 1st order derivatives generally produce thicker edges
– 2nd order derivatives have a stronger response to fine detail e.g. thin lines
– 1st order derivatives have a stronger response to a grey level step
– 2nd order derivatives produce a double response at step changes in grey level
Combining Spatial Enhancement Methods
Successful image
enhancement is typically not
achieved using a single
operation
Rather we combine a range of
techniques in order to achieve
a final result
This example will focus on
enhancing the bone scan to
the right
Combining Spatial Enhancement Methods (cont…)
(a) Bone scan image
(b) Laplacian filter of bone scan (a)
(c) Sharpened version of bone scan achieved by adding (a) and (b)
(d) Sobel filter of bone scan (a)
Combining Spatial Enhancement Methods (cont…)
(e) Image (d) smoothed with a 5×5 averaging filter
(f) The product of (c) and (e), which will be used as a mask
(g) Sharpened image, which is the sum of (a) and (f)
(h) Result of applying a power-law transformation to (g)
Combining Spatial Enhancement Methods
(cont…)
Compare the original and final images

Filtering in Frequency Domain

Notch Filter

H(u, v) = 0  if (u, v) = (M/2, N/2)
          1  otherwise
Transfer Function of Ideal Lowpass Filter

H(u, v) = 1  if D(u, v) ≤ D0
          0  if D(u, v) > D0

D0 is the cutoff frequency

D(u, v) = [(u − M/2)² + (v − N/2)²]^(1/2)
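The ideal lowpass filter can be sketched end to end with NumPy's FFT: transform, multiply by H(u, v), transform back. This is an illustrative implementation (not from the slides):

```python
import numpy as np

def ideal_lowpass(img, d0):
    """Frequency-domain filtering: G(u,v) = H(u,v) F(u,v)."""
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))      # centre the spectrum
    u = np.arange(M)[:, None] - M / 2
    v = np.arange(N)[None, :] - N / 2
    D = np.sqrt(u**2 + v**2)                   # distance from the centre
    H = (D <= d0).astype(float)                # ideal LPF transfer function
    return np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))

# A cutoff beyond the highest frequency passes everything: output = input
img = np.random.rand(8, 8)
out = ideal_lowpass(img, d0=100)
assert np.allclose(out, img)
```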
Image power spectrum:

P(u, v) = |F(u, v)|² = R²(u, v) + I²(u, v)

P_T = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} P(u, v)

α = 100 × [Σ_u Σ_v P(u, v) / P_T]
Ideal Lowpass Filter

Butterworth Lowpass Filter

H(u, v) = 1 / [1 + (D(u, v)/D0)^(2n)]
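The BLPF transfer function is easy to evaluate directly (illustrative code). Note that at the cutoff frequency it always drops to 0.5, whatever the order n:

```python
import numpy as np

def butterworth_lpf(D, d0, n):
    """BLPF transfer function: H = 1 / (1 + (D/D0)^(2n))."""
    return 1.0 / (1.0 + (D / d0) ** (2 * n))

# At the cutoff frequency H = 0.5 regardless of the order n
assert butterworth_lpf(10.0, d0=10.0, n=2) == 0.5
# Higher order -> sharper roll-off beyond the cutoff
assert butterworth_lpf(20.0, 10.0, 4) < butterworth_lpf(20.0, 10.0, 1)
```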
Results of Filtering with BLPF

Gaussian Lowpass Filter

H(u, v) = e^(−D²(u, v) / 2D0²)
High Pass Filter

Transfer Function of HPF

Ideal HPF:
H(u, v) = 0  if D(u, v) ≤ D0
          1  if D(u, v) > D0
D0 is the cutoff frequency

Butterworth HPF:
H(u, v) = 1 / [1 + (D0/D(u, v))^(2n)]

Gaussian HPF:
H(u, v) = 1 − e^(−D²(u, v) / 2D0²)
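The Gaussian pair illustrates the general rule that each highpass transfer function is one minus its lowpass counterpart (illustrative code):

```python
import numpy as np

def gaussian_lpf(D, d0):
    """GLPF: H = exp(-D^2 / (2 D0^2))."""
    return np.exp(-(D ** 2) / (2.0 * d0 ** 2))

def gaussian_hpf(D, d0):
    """GHPF is simply one minus the lowpass transfer function."""
    return 1.0 - gaussian_lpf(D, d0)

# The DC component (D = 0) is fully passed by the LPF and blocked by the HPF
assert gaussian_lpf(0.0, 10.0) == 1.0
assert gaussian_hpf(0.0, 10.0) == 0.0
```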
Results of HPF using Butterworth HPF

Homomorphic Filtering Approach

Homomorphic Filtering

f(x, y) = i(x, y) r(x, y)
F{f(x, y)} ≠ F{i(x, y)} F{r(x, y)}

z(x, y) = ln f(x, y) = ln i(x, y) + ln r(x, y)

F{z(x, y)} = F{ln f(x, y)} = F{ln i(x, y)} + F{ln r(x, y)}

Z(u, v) = F_i(u, v) + F_r(u, v)
If we process Z(u, v) by means of a filter function H(u, v):

S(u, v) = H(u, v) Z(u, v) = H(u, v) F_i(u, v) + H(u, v) F_r(u, v)

In the spatial domain:

s(x, y) = F⁻¹{S(u, v)} = F⁻¹{H(u, v) F_i(u, v)} + F⁻¹{H(u, v) F_r(u, v)}

i′(x, y) = F⁻¹{H(u, v) F_i(u, v)}
r′(x, y) = F⁻¹{H(u, v) F_r(u, v)}

s(x, y) = i′(x, y) + r′(x, y)

g(x, y) = e^{s(x, y)} = e^{i′(x, y)} · e^{r′(x, y)} = i0(x, y) r0(x, y)
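The whole pipeline (log → FFT → H(u, v) → inverse FFT → exp) can be sketched as below. This is an illustrative implementation; the particular high-frequency-emphasis form of H and the parameter names (gamma_l, gamma_h, d0) are assumptions:

```python
import numpy as np

def homomorphic(img, gamma_l=0.5, gamma_h=2.0, d0=10.0):
    """Homomorphic filtering sketch: log -> FFT -> H(u,v) -> IFFT -> exp.

    gamma_l < 1 suppresses low frequencies (illumination),
    gamma_h > 1 boosts high frequencies (reflectance).
    """
    M, N = img.shape
    z = np.log(img + 1.0)                      # z = ln f (offset avoids ln 0)
    Z = np.fft.fftshift(np.fft.fft2(z))
    u = np.arange(M)[:, None] - M / 2
    v = np.arange(N)[None, :] - N / 2
    D2 = u**2 + v**2
    # Filter rising from gamma_l at DC towards gamma_h at high frequencies
    H = (gamma_h - gamma_l) * (1 - np.exp(-D2 / (2 * d0**2))) + gamma_l
    s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))
    return np.exp(s) - 1.0                     # g = e^s, undo the offset

out = homomorphic(np.random.rand(16, 16) + 1.0)
assert out.shape == (16, 16)
```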
Filter Function H(u,v)

The filter function tends to decrease the contribution made by the low
frequencies (illumination) and amplify the contributions made by the
high frequencies (reflectance)

Thank You
