
Digital Image Processing

Intensity Transformations
(Unit-II)
(CO-R16C402.2; PO’s-1, 2, 5, 12; PSO’s-1, 2)
N.Nagaraju
Assistant Professor
Department of ECE

1
Contents
• Background
• Some basic intensity transformation functions
• Histogram processing
• Fundamentals of spatial filtering
• Smoothing spatial filters, sharpening spatial
filters
• Combining spatial enhancement methods

2
Principal Objective of
Enhancement
• Process an image so that the result will be more suitable
than the original image for a specific application

• Suitability is judged with respect to the specific application

• A method which is quite useful for enhancing one image
may not necessarily be the best approach for enhancing
another image

3
2 Domains
• Spatial Domain : (image plane)
• Techniques are based on direct manipulation of pixels in
an image
• Frequency Domain :
• Techniques are based on modifying the Fourier
transform of an image
• There are some enhancement techniques based on
various combinations of methods from these two
categories.

4
Spatial Domain
• Procedures that operate directly on pixels.
g(x,y) = T[f(x,y)]
• where
• f(x,y) is the input image
• g(x,y) is the processed image
• T is an operator on f defined
over some neighborhood
of (x,y)

5
Mask/Filter
 Neighborhood of a point (x,y) can be defined by using a
square/rectangular (commonly used) or circular
subimage area centered at (x,y)

 The center of the subimage is moved from pixel to pixel,
starting at the top-left corner

6
Point Processing
 Neighborhood = 1x1 pixel
 g depends on only the value of f at (x,y)
 T = gray level (or intensity or mapping)
transformation function
s = T(r)
 Where
 r = gray level of f(x,y)
 s = gray level of g(x,y)

7
Contrast Stretching
 Produce higher contrast than the original by
darkening the levels below m in the original image

 Brightening the levels above m in the original image

8
Thresholding
 Produce a two-level (binary) image

9
Digital Image Processing
Intensity Transformation functions
(Unit-II)
(CO-R16C402.2; PO’s-1, 2, 5, 12; PSO’s-1, 2)
N.Nagaraju
Assistant Professor
Department of ECE

10
3 basic gray-level transformation functions
• Linear function
• Negative and identity transformations
• Logarithm function
• Log and inverse-log transformations
• Power-law function
• nth power and nth root transformations
• Identity function
• output intensities are identical to
input intensities

11
Image Negatives
• An image with gray levels in the range [0, L-1]
where L = 2^n ; n = 1, 2, …
• Negative transformation:
s = L − 1 − r
• Reversing the intensity levels of
an image.
• Suitable for enhancing white or gray
detail embedded in dark regions of an
image, especially when the black areas
are dominant in size.
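A minimal sketch of the negative transformation in NumPy (not from the original slides; the array name and 8-bit range are assumptions):

```python
import numpy as np

def negative(img, L=256):
    """Image negative: s = (L - 1) - r for an 8-bit grayscale image."""
    return ((L - 1) - img.astype(np.int32)).astype(np.uint8)

# Tiny synthetic example: gray detail embedded in a dark region
img = np.zeros((64, 64), dtype=np.uint8)
img[28:36, 28:36] = 180
neg = negative(img)          # dark background becomes bright, the detail becomes dark
```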

12
Example of Negative Image
• Original mammogram showing a small lesion of a breast
• Negative image: gives a clearer view for analysing the image

13
Log Transformations
• s = c log (1+r)
• c is a constant and r ≥ 0
• Log curve maps a narrow range of low
gray-level values in the input image
into a wider range of output levels.
• Used to expand the values of dark
pixels in an image while compressing
the higher-level values.

14
Log Transformations (Contd.)
• It compresses the dynamic range of images with large
variations in pixel values
• Example of an image with large dynamic range: a Fourier spectrum
image
• It can have an intensity range from 0 to 10^6 or higher
• We can’t see a significant degree of detail directly, as it will be lost
in the display
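A small sketch of the log transformation, assuming a floating-point array with a large dynamic range (for example the magnitude of a Fourier spectrum); names and sizes are illustrative:

```python
import numpy as np

def log_transform(values, L=256):
    """s = c * log(1 + r), with c chosen so the output spans [0, L-1]."""
    r = np.asarray(values, dtype=np.float64)
    c = (L - 1) / np.log1p(r.max())
    return (c * np.log1p(r)).astype(np.uint8)

spectrum = np.abs(np.fft.fft2(np.random.rand(128, 128)))   # values from near 0 up to ~10**4
display = log_transform(spectrum)                          # dark values expanded, peaks compressed
```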

15
Example of Log Transformations
• Fourier spectrum with range 0 to 1.5 × 10^6
• Result after applying the log transformation with c = 1; range becomes
0 to 6.2

16
Power-Law Transformations
s = c·r^γ
• c and γ are positive constants
• Power-law curves with
fractional values of γ map a
narrow range of dark input
values into a wider range of
output values, with the
opposite being true for
higher values of input levels.
• c = γ = 1: identity function

17
Gamma correction
• Cathode ray tube (CRT)
devices have an intensity-to-
voltage response that is
a power function, with γ
varying from 1.8 to 2.5
• The displayed picture therefore
becomes darker than intended
• Gamma correction is done
by pre-processing the image
before inputting it to the
monitor with s = c·r^(1/γ)
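A minimal sketch of power-law (gamma) correction, assuming an 8-bit image normalized to [0, 1]; the display gamma of 2.2 is just an example value:

```python
import numpy as np

def power_law(img, gamma, c=1.0, L=256):
    """s = c * r**gamma, with r normalized to [0, 1] and the result rescaled to [0, L-1]."""
    r = img.astype(np.float64) / (L - 1)
    s = c * np.power(r, gamma)
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

ramp = np.tile(np.arange(256, dtype=np.uint8), (32, 1))   # gray-ramp test image
corrected = power_law(ramp, gamma=1 / 2.2)                # pre-compensate a display gamma of 2.2
darker = power_law(ramp, gamma=3.0)                       # gamma > 1 compresses bright levels
```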

18
Another example : MRI
• (a) magnetic resonance image of an upper
thoracic human spine with a fracture
dislocation and spinal cord impingement
• The picture is predominantly dark
• An expansion of gray levels is
desirable, which needs γ < 1
• (b) result after power-law transformation
with γ = 0.6, c = 1
• (c) transformation with γ = 0.4 (best result)
• (d) transformation with γ = 0.3
(below the acceptable level)

19
Effect of decreasing gamma
• When γ is reduced too much, contrast drops to the point where
the image starts to take on a very slight “washed-out” look,
especially in the background

20
Another Example
• (a) image has a washed-out appearance;
it needs a compression of gray levels,
i.e. γ > 1
• (b) result after power-law transform-
ation with γ = 3.0 (suitable)
• (c) transformation with γ = 4.0
(suitable)
• (d) transformation with γ = 5.0
(high contrast; the image has
areas that are too dark, and some detail
is lost)

21
Contrast Stretching
• Increase the dynamic range of the gray levels in the image
• (b) a low-contrast image:
can result from poor illumination,
lack of dynamic range in the
imaging sensor, or even a wrong
lens-aperture setting during
image acquisition
• (c) result of contrast stretching:
(r1,s1) = (rmin,0) and (r2,s2) =
(rmax,L-1)
• (d) result of thresholding
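A short sketch of piecewise-linear contrast stretching through the points (r1, s1) and (r2, s2), with a thresholded two-level image shown for comparison; helper names and values are illustrative:

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear mapping through (0,0), (r1,s1), (r2,s2), (L-1,L-1)."""
    s = np.interp(img.astype(np.float64), [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return s.astype(np.uint8)

low_contrast = np.random.randint(90, 160, size=(64, 64), dtype=np.uint8)
stretched = contrast_stretch(low_contrast,
                             r1=low_contrast.min(), s1=0,
                             r2=low_contrast.max(), s2=255)     # (rmin,0) and (rmax,L-1)
binary = np.where(low_contrast > 127, 255, 0).astype(np.uint8)  # thresholding (two-level image)
```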

22
Intensity (gray-level) slicing
• Highlighting a specific range of gray levels in an image
• Display a high value for all gray levels in the range of interest
and a low value for all other gray levels
• (a) transformation highlights
range [A,B] of gray level and
reduces all others to a constant
level
• (b) transformation highlights
range [A,B] but preserves all
other levels

23
Bit-plane slicing
• Highlighting the contribution made to total image appearance by
specific bits
• Suppose each pixel is represented by 8 bits
• Higher-order bits contain the majority of the visually significant
data
• Useful for analyzing the relative
importance of each bit
of the image
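A minimal sketch of bit-plane slicing for an 8-bit image (array names are illustrative):

```python
import numpy as np

def bit_planes(img):
    """Return the 8 bit-planes of an 8-bit image; planes[0] is the LSB, planes[7] the MSB."""
    return [((img >> b) & 1).astype(np.uint8) * 255 for b in range(8)]

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
planes = bit_planes(img)
top_two = img & 0b11000000        # keeping only the two highest-order planes retains most detail
```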

24
8 Bit-planes

25
Digital Image Processing
Histogram Processing
(Unit-II)
(CO-R16C402.2; PO’s-1, 2, 5, 12; PSO’s-1, 2)
N.Nagaraju
Assistant Professor
Department of ECE

26
Histogram Processing
• Histogram of a digital image with gray levels in the range [0,L-1] is
a discrete function

h(rk) = nk

Where
• rk : the kth gray level
• nk : the number of pixels in the image having gray level rk
• h(rk) : histogram of a digital image with gray levels rk

27
Normalized Histogram
• Divide each histogram value at gray level rk by the total number of
pixels in the image, n

p(rk) = nk /n

• For k = 0,1,…,L-1
• p(rk) gives an estimate of the probability of occurrence of gray
level rk
• The sum of all components of a normalized histogram is equal to
1
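A minimal sketch of computing h(rk) and the normalized histogram p(rk) for an 8-bit image (NumPy-based; names are illustrative):

```python
import numpy as np

def histograms(img, L=256):
    """Return h(r_k) = n_k and p(r_k) = n_k / n for k = 0 .. L-1."""
    h = np.bincount(img.ravel(), minlength=L)   # n_k: number of pixels at each gray level
    p = h / img.size                            # normalized histogram
    return h, p

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
h, p = histograms(img)
assert abs(p.sum() - 1.0) < 1e-12               # components of p sum to 1
```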

28
A simple image and its histogram
Histogram Example
• Dark image
• Components of histogram are concentrated on the
low side of the gray scale.

• Bright image
• Components of histogram
are concentrated on the
high side of the gray scale.

30
Histogram Example (Contd.)
• Low-contrast image
• Histogram is narrow and centered toward the middle of the
gray scale
• High-contrast image
• Histogram covers broad
range of the gray scale
and the distribution of
pixels is not too far from
uniform, with very few
vertical lines being much
higher than the others
31
Histogram Equalization
• As the low-contrast image’s histogram is narrow and centered
toward the middle of the gray scale, if we distribute the histogram
to a wider range the quality of the image will be improved.

• We can do this by adjusting the probability density function of the
original histogram of the image so that the probability is spread
equally

32
Histogram transformation
s = T(r)
Where 0 ≤ r ≤ L-1
• T(r) satisfies
• (a) T(r) is single-valued
and monotonically
increasing in the
interval 0 ≤ r ≤ L-1
• (b) 0 ≤ T(r) ≤ L-1 for
0 ≤ r ≤ L-1

33
2 Conditions of T(r)
• Single-valued (one-to-one relationship) guarantees that
the inverse transformation will exist
• Monotonicity condition preserves the increasing order
from black to white in the output image, so it won’t
produce a negative image
• 0 ≤ T(r) ≤ L-1 for 0 ≤ r ≤ L-1 guarantees that the output
gray levels will be in the same range as the input levels.
• The inverse transformation from s back to r is

r = T⁻¹(s) ; 0 ≤ s ≤ L-1

34
Probability Density Function
• The gray levels in an image may be viewed as random
variables in the interval [0,L-1]

• PDF is one of the fundamental descriptors of a random


variable

35
Random Variables
• If a random variable x is transformed by a monotonic
transformation function T(x) to produce a new random
variable y
• The probability density function of y can be obtained
from knowledge of T(x) and the probability density
function of x, as follows:

p_y(y) = p_x(x) · |dx/dy|

• where the vertical bars signify the absolute value.

36
Monotonic Transformations of a Random Variable

37
Applied to Image
• Let
• pr(r) denote the PDF of random variable r
• ps (s) denote the PDF of random variable s
• If pr(r) and T(r) are known, and T⁻¹(s) satisfies condition
(a), then ps(s) can be obtained using the formula:

p_s(s) = p_r(r) · |dr/ds|
43
Transformation function
• A transformation function is a cumulative distribution
function (CDF) of random variable r :

s = T(r) = (L−1) ∫₀^r p_r(w) dw

• where w is a dummy variable of integration


• Note: T(r) depends on pr(r)

44
Histogram Equalization

s = T(r) = (L−1) ∫₀^r p_r(w) dw

ds/dr = dT(r)/dr = (L−1) · d/dr [ ∫₀^r p_r(w) dw ] = (L−1) p_r(r)

p_s(s) = p_r(r) · |dr/ds| = p_r(r) · 1 / [(L−1) p_r(r)] = 1 / (L−1)
45
ps(s)
• As ps(s) is a probability function, it must be zero outside the
interval [0,L-1] in this case because its integral over all values of s
must equal 1.
• ps(s) is called a uniform probability density function.
• ps(s) is always uniform, independent of the form of pr(r)

46
Histogram Equalization

47
Histogram Equalization

Continuous case:
s = T(r) = (L−1) ∫₀^r p_r(w) dw

Discrete values:
s_k = T(r_k) = (L−1) Σ_{j=0}^{k} p_r(r_j)
             = (L−1)/(MN) Σ_{j=0}^{k} n_j ,   k = 0, 1, …, L−1

48
Example: Histogram Equalization
Suppose that a 3-bit image (L=8) of size 64 × 64 pixels (MN = 4096)
has the intensity distribution shown in following table.

Get the histogram equalization transformation function and give
ps(sk) for each sk.

49
Example: Histogram Equalization

s₀ = T(r₀) = 7 Σ_{j=0}^{0} p_r(r_j) = 7 × 0.19 = 1.33 → 1

s₁ = T(r₁) = 7 Σ_{j=0}^{1} p_r(r_j) = 7 × (0.19 + 0.25) = 3.08 → 3

s₂ = 4.55 → 5    s₃ = 5.67 → 6
s₄ = 6.23 → 6    s₅ = 6.65 → 7
s₆ = 6.86 → 7    s₇ = 7.00 → 7
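A small sketch that reproduces the mapping above; the p_r values below are inferred from the s_k results shown on the slide (only 0.19 and 0.25 are stated explicitly), so treat them as assumptions:

```python
import numpy as np

L = 8
p_r = np.array([0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02])  # assumed 3-bit histogram
s_exact = (L - 1) * np.cumsum(p_r)          # 1.33, 3.08, 4.55, 5.67, 6.23, 6.65, 6.86, 7.00
s_k = np.round(s_exact).astype(int)         # -> [1 3 5 6 6 7 7 7], matching the rounded values above
```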

50
Example: Histogram Equalization

51
Histogram Matching
Histogram matching (histogram specification)
—A processed image has a specified histogram

Let pr(r) and pz(z) denote the continuous probability
density functions of the variables r and z; pz(z) is the
specified probability density function.

Let s be a random variable with the property
s = T(r) = (L−1) ∫₀^r p_r(w) dw

Obtain a transformation function G such that
G(z) = (L−1) ∫₀^z p_z(t) dt = s

52
Histogram Matching

s = T(r) = (L−1) ∫₀^r p_r(w) dw

G(z) = (L−1) ∫₀^z p_z(t) dt = s

z = G⁻¹(s) = G⁻¹(T(r))

53
Histogram Matching: Procedure
• Obtain pr(r) from the input image and then obtain the values of s
s = (L−1) ∫₀^r p_r(w) dw

• Use the specified PDF and obtain the transformation function G(z)
G(z) = (L−1) ∫₀^z p_z(t) dt = s

• Mapping from s to z
z = G⁻¹(s)

54
Histogram Matching: Example
Assuming continuous intensity values, suppose that an image has the
intensity PDF
p_r(r) = 2r / (L−1)² ,   for 0 ≤ r ≤ L−1
p_r(r) = 0 ,             otherwise

Find the transformation function that will produce an image whose
intensity PDF is

p_z(z) = 3z² / (L−1)³ ,  for 0 ≤ z ≤ L−1
p_z(z) = 0 ,             otherwise

55
Histogram Matching: Example
Find the histogram equalization transformation for the input image
s = T(r) = (L−1) ∫₀^r p_r(w) dw = (L−1) ∫₀^r 2w/(L−1)² dw = r²/(L−1)

Find the histogram equalization transformation for the specified
histogram
G(z) = (L−1) ∫₀^z p_z(t) dt = (L−1) ∫₀^z 3t²/(L−1)³ dt = z³/(L−1)² = s

The transformation function
z = [ (L−1)² s ]^(1/3) = [ (L−1)² · r²/(L−1) ]^(1/3) = [ (L−1) r² ]^(1/3)

56
Histogram Matching: Discrete Cases
• Obtain pr(rj) from the input image and then obtain the values of sk,
round the value to the integer range [0, L-1].
s_k = T(r_k) = (L−1) Σ_{j=0}^{k} p_r(r_j) = (L−1)/(MN) Σ_{j=0}^{k} n_j

• Use the specified PDF and obtain the transformation function
G(zq), round the value to the integer range [0, L-1].
G(z_q) = (L−1) Σ_{i=0}^{q} p_z(z_i) = s_k

• Mapping from sk to zq
z_q = G⁻¹(s_k)
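A minimal sketch of this discrete procedure, assuming an 8-bit input image and a specified PDF of length L; following the worked example that comes next, each s_k is mapped to the z_q whose G(z_q) is closest (names are illustrative):

```python
import numpy as np

def match_histogram(img, target_pdf, L=256):
    """Histogram specification: map gray levels so the output histogram approximates target_pdf."""
    p_r = np.bincount(img.ravel(), minlength=L) / img.size
    s = np.round((L - 1) * np.cumsum(p_r)).astype(int)          # step 1: s_k from equalization
    G = np.round((L - 1) * np.cumsum(target_pdf)).astype(int)   # step 2: G(z_q) from the target PDF
    z = np.argmin(np.abs(G[None, :] - s[:, None]), axis=1)      # step 3: z_q closest to each s_k
    return z.astype(np.uint8)[img]                              # apply the r_k -> z_q lookup table

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
flat = np.full(256, 1 / 256)                  # e.g. specify a flat (uniform) target histogram
matched = match_histogram(img, flat)
```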

57
Example: Histogram Matching
Suppose that a 3-bit image (L=8) of size 64 × 64 pixels (MN = 4096) has the
intensity distribution shown in the following table (on the left). Get the
histogram transformation function and make the output image with the
specified histogram, listed in the table on the right.

58
Example: Histogram Matching
Obtain the scaled histogram-equalized values,

Compute all the values of the transformation function G,


G(z₀) = 7 Σ_{j=0}^{0} p_z(z_j) = 0.00 → 0

G(z₁) = 0.00 → 0    G(z₂) = 0.00 → 0
G(z₃) = 1.05 → 1    G(z₄) = 2.45 → 2
G(z₅) = 4.55 → 5    G(z₆) = 5.95 → 6
G(z₇) = 7.00 → 7

59
Example: Histogram Matching

60
Example: Histogram Matching
Obtain the scaled histogram-equalized values,

Compute all the values of the transformation function G,


G(z₀) = 7 Σ_{j=0}^{0} p_z(z_j) = 0.00 → 0

G(z₁) = 0.00 → 0         G(z₂) = 0.00 → 0
G(z₃) = 1.05 → 1 ← s₀    G(z₄) = 2.45 → 2 ← s₁
G(z₅) = 4.55 → 5 ← s₂    G(z₆) = 5.95 → 6 ← s₃, s₄
G(z₇) = 7.00 → 7 ← s₅, s₆, s₇

61
Example: Histogram Matching

rk → zq
0 → 3
1 → 4
2 → 5
3 → 6
4 → 6
5 → 7
6 → 7
7 → 7

62
Digital Image Processing
Fundamentals of spatial filtering
(Unit-II)
(CO-R16C402.2; PO’s-1, 2, 5, 12; PSO’s-1, 2)
N.Nagaraju
Assistant Professor
Department of ECE

67
Spatial Filtering
• Use a filter (also called a mask, kernel, template, or window)
• The values in a filter sub-image are referred to as coefficients,
rather than pixels.

68
Spatial Filtering (Contd.)
• The process of moving the center point creates new
neighborhoods, one for each pixel in the input image.
The two principal terms used to identify this operation
are neighborhood processing or spatial filtering, with the
second term being more common.
• If the computations performed on the pixels of the
neighborhoods are linear, the operation is called linear
spatial filtering; otherwise it is called nonlinear spatial
filtering.
The mechanics of Spatial Filtering

The mechanics
of linear spatial
filtering
Linear Spatial Filtering
• For linear spatial filtering, the response is given by a sum of
products of the filter coefficients and the corresponding
image pixels in the area spanned by the filter mask. For the 3
X 3 mask shown in the previous figure, the result (response),
R, of linear filtering with the filter mask at a point (x,y) in the
image is:

R = w(-1,-1) f(x-1, y-1) + w(-1,0) f(x-1, y) + … + w(0,0) f(x, y) +
… + w(1,0) f(x+1, y) + w(1,1) f(x+1, y+1)

which we see is the sum of products of the mask coefficients
with the corresponding pixels directly under the mask.
Local enhancement through spatial filtering:
the mechanics of spatial filtering
For an image of size M x N and
a mask of size m x n
The resulting output gray level
for any coordinates x and y is
given by

g(x, y) = Σ_{s=−a}^{a} Σ_{t=−b}^{b} w(s, t) f(x+s, y+t)

where a = (m−1)/2, b = (n−1)/2,
x = 0, 1, 2, …, M−1,  y = 0, 1, 2, …, N−1
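A direct (unoptimized) sketch of this sum-of-products operation, assuming an odd-sized mask and replicate padding at the borders (both assumptions; libraries such as SciPy offer faster equivalents):

```python
import numpy as np

def spatial_filter(f, w):
    """g(x,y) = sum_s sum_t w(s,t) * f(x+s, y+t), i.e. correlation of image f with mask w."""
    m, n = w.shape                              # mask size m x n, both odd
    a, b = (m - 1) // 2, (n - 1) // 2
    fp = np.pad(f.astype(np.float64), ((a, a), (b, b)), mode='edge')   # replicate border pixels
    g = np.zeros(f.shape, dtype=np.float64)
    for s in range(-a, a + 1):
        for t in range(-b, b + 1):
            g += w[s + a, t + b] * fp[a + s : a + s + f.shape[0], b + t : b + t + f.shape[1]]
    return g

f = np.random.randint(0, 256, size=(64, 64))
w = np.ones((3, 3)) / 9.0                       # 3 x 3 averaging mask
g = spatial_filter(f, w)
```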

72
Basics of spatial filtering
• Given the 3×3 mask with coefficients: w1, w2, …, w9
• The mask covers the pixels with gray levels: z1, z2, …, z9

• The response z = w1·z1 + w2·z2 + … + w9·z9 gives the output intensity value for the processed
image (to be stored in a new array) at the location of z5 in the input image

73
Basics of spatial filtering
Mask operation near the image border
Problem arises when part of the mask is located outside the image
plane; to handle the problem:
1. Discard the problem pixels (e.g. 512×512 input → 510×510 output if the mask
size is 3×3)
2. Zero padding: expand the input image by padding zeros
(512×512 input → 514×514 output)
• Zero padding is not good: it creates artificial lines or edges on
the border
3. We normally use the gray levels of the border pixels to fill up the
expanded region (for a 3×3 mask). For larger masks, a border
region equal to half of the mask size is mirrored onto the expanded
region.

74
Mask operation near the image border

75
Spatial filtering for Smoothing
• For blurring/noise reduction;
• Blurring is usually used in preprocessing steps,
e.g., to remove small details from an image prior to object extraction,
or to bridge small gaps in lines or curves
• Equivalent to Low-pass spatial filtering in frequency domain because
smaller (high frequency) details are removed based on neighborhood
averaging (averaging filters)

Implementation: The simplest spatial filter for averaging is a
square mask (assume an m×m mask) with identical coefficients 1/m²,
which preserves the overall gray level (averaging).
Applications: Reduce noise; smooth false contours
Side effect: Edge blurring
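A minimal sketch of an m×m averaging (box) filter followed by thresholding, as described above; the mask sizes and threshold value are illustrative:

```python
import numpy as np

def box_smooth(f, m=3):
    """Average over an m x m neighborhood (coefficients 1/m**2), replicate-padded at the borders."""
    a = m // 2
    fp = np.pad(f.astype(np.float64), a, mode='edge')
    windows = [fp[s:s + f.shape[0], t:t + f.shape[1]] for s in range(m) for t in range(m)]
    return (sum(windows) / (m * m)).astype(np.uint8)

noisy = np.clip(128 + 30 * np.random.randn(64, 64), 0, 255).astype(np.uint8)
mild = box_smooth(noisy, 3)
heavy = box_smooth(noisy, 9)                       # larger masks blur more, including edges
objects = np.where(heavy > 140, 255, 0)            # smoothing followed by thresholding
```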

76
Smoothing filters

77
General form: Smoothing mask
• Filter of size mxn (m and n odd)

78
Spatial filtering for Smoothing
(example)

79
Spatial filtering for Smoothing
(example)
Original image Smoothed by
size: 500 x 500 3 x 3 box filter

Smoothed by Smoothed by
5 x 5 box filter 9 x 9 box filter

Smoothed by Smoothed by
15 x 15 box filter 35 x 35 box filter

80
Spatial filtering for Smoothing
(example)

 After smoothing and thresholding, what remains are the largest and
brightest objects in the image.

81
Order-statistics filtering
• Nonlinear spatial filters
• Output is based on order of gray levels in the masked area (sub-
image)
• Examples: Median filtering, Max & Min filtering
Median filtering
• Assigns the median value of all the gray levels in the mask to the
center of the mask
• Particularly effective when
• the noise pattern consists of strong, spiky components
(impulse noise, salt-and-pepper) and edges are to be preserved
• Forces points with distinct gray levels to be more like their
neighbors
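A small sketch of median filtering on salt-and-pepper-style impulse noise (NumPy only; the noise level and mask size are illustrative):

```python
import numpy as np

def median_filter(f, m=3):
    """Replace each pixel by the median gray level of its m x m neighborhood."""
    a = m // 2
    fp = np.pad(f, a, mode='edge')
    stack = np.stack([fp[s:s + f.shape[0], t:t + f.shape[1]]
                      for s in range(m) for t in range(m)])
    return np.median(stack, axis=0).astype(f.dtype)

img = np.full((64, 64), 120, dtype=np.uint8)
img[np.random.rand(64, 64) < 0.05] = 255          # impulse (salt) noise
cleaned = median_filter(img, 3)                    # spikes removed, edges largely preserved
```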

82
Median Filtering

83
Median Filtering (example)

84
Spatial filtering for image
sharpening
Background: to highlight fine detail in an image or to enhance
blurred detail
Applications: electronic printing, medical imaging, industrial
inspection, autonomous target detection (smart
weapons)......
Foundation:
• Blurring/smoothing is performed by spatial averaging
(equivalent to integration)
• Sharpening is performed by noting only the gray-level changes
in the image, that is, by differentiation

85
Spatial filtering for image sharpening
Operation of Image Differentiation
• Enhance edges and discontinuities (magnitude of output
gray level >>0)
• De-emphasize areas with slowly varying gray-level values
(output gray level: 0)

Mathematical Basis of Filtering for Image Sharpening


• First-order and second-order derivatives

86
First and second order derivatives

87
Example for discrete derivatives

88
Various situations encountered for
derivatives

89
Various situations encountered for
derivatives

90
Various situations encountered for derivatives
• Thin lines

• Isolated point

91
First and Second-order derivative of
f(x,y)
• when we consider an image function of two variables,
f(x,y), we will be dealing with partial
derivatives along the two spatial axes.

92
Laplacian for image enhancement

93
Laplacian for image enhancement

94
Laplacian for image enhancement

95
Effect of Laplacian Operator
• As it is a derivative operator,
• It highlights gray-level discontinuities in an image
• It deemphasizes regions with slowly varying gray levels
• Tends to produce images that have
• Grayish edge lines and other discontinuities, all
• Superimposed on a dark, featureless background.

96
Correct the effect of featureless background
• Easily corrected by adding the original and the Laplacian image
• Be careful with the Laplacian filter used
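A minimal sketch of Laplacian sharpening with the 4-neighbour mask [[0,1,0],[1,−4,1],[0,1,0]]; because this mask has a negative centre coefficient, the Laplacian is subtracted from the original (the mask choice is an assumption, other variants exist):

```python
import numpy as np

def laplacian_sharpen(f):
    """g = f - laplacian(f), using the 3x3 mask with centre -4 and 4-neighbours +1."""
    fp = np.pad(f.astype(np.float64), 1, mode='edge')
    lap = (fp[:-2, 1:-1] + fp[2:, 1:-1] + fp[1:-1, :-2] + fp[1:-1, 2:]
           - 4.0 * fp[1:-1, 1:-1])
    g = f.astype(np.float64) - lap                 # subtract because the centre coefficient is negative
    return np.clip(g, 0, 255).astype(np.uint8)

img = np.tile(np.linspace(0, 255, 64), (64, 1)).astype(np.uint8)
sharpened = laplacian_sharpen(img)
```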

97
Laplacian for image
enhancement(example)

98
Mask of Laplacian + addition

99
Laplacian for image
enhancement(example)

100
Unsharp Masking and Highboost
Filtering
► Unsharp masking
Sharpening an image consists of subtracting an unsharp (smoothed)
version of the image from the original image
e.g., printing and publishing industry

► Steps
1. Blur the original image
2. Subtract the blurred image from the original
3. Add the mask to the original

101
Unsharp Masking and Highboost
Filtering
Let f̄(x, y) denote the blurred image; the unsharp mask is
g_mask(x, y) = f(x, y) − f̄(x, y)
Then add a weighted portion of the mask back to the original
g(x, y) = f(x, y) + k · g_mask(x, y),   k ≥ 0

When k > 1, the process is referred to as highboost filtering.
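A small sketch of unsharp masking / highboost filtering, using a 5×5 box blur as the smoothed version (the blur choice and k values are illustrative):

```python
import numpy as np

def unsharp_mask(f, k=1.0, m=5):
    """g = f + k * (f - blurred(f)); k = 1 is unsharp masking, k > 1 is highboost filtering."""
    a = m // 2
    fp = np.pad(f.astype(np.float64), a, mode='edge')
    blur = sum(fp[s:s + f.shape[0], t:t + f.shape[1]]
               for s in range(m) for t in range(m)) / (m * m)
    mask = f.astype(np.float64) - blur             # g_mask = f - f_blurred
    return np.clip(f + k * mask, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
sharper = unsharp_mask(img, k=1.0)                 # unsharp masking
boosted = unsharp_mask(img, k=2.5)                 # highboost filtering
```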

102
Unsharp Masking: Demo

103
Unsharp Masking and Highboost Filtering:
Example

104
Image Sharpening based on First-Order
Derivatives
For a function f(x, y), the gradient of f at coordinates (x, y)
is defined as the vector

∇f = grad(f) = [ g_x , g_y ]ᵀ = [ ∂f/∂x , ∂f/∂y ]ᵀ

The magnitude of the vector ∇f, denoted M(x, y), gives the
gradient image:

M(x, y) = mag(∇f) = √(g_x² + g_y²)

105
Image Sharpening based on First-Order
Derivatives
The magnitude of the vector ∇f, denoted M(x, y):
M(x, y) = mag(∇f) = √(g_x² + g_y²)

A common approximation:
M(x, y) ≈ |g_x| + |g_y|

For a 3×3 neighborhood with gray levels z1 … z9 (z5 at the center):
M(x, y) ≈ |z8 − z5| + |z6 − z5|
106
Image Sharpening based on First-Order
Derivatives
Roberts cross-gradient operators:
M(x, y) ≈ |z9 − z5| + |z8 − z6|

Sobel operators (3×3 neighborhood z1 … z9):
M(x, y) ≈ |(z7 + 2z8 + z9) − (z1 + 2z2 + z3)|
        + |(z3 + 2z6 + z9) − (z1 + 2z4 + z7)|
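A minimal sketch of the Sobel gradient magnitude using the |g_x| + |g_y| approximation above (NumPy only; the test image is illustrative):

```python
import numpy as np

def sobel_magnitude(f):
    """M(x,y) ~= |g_x| + |g_y| with the 3x3 Sobel masks."""
    fp = np.pad(f.astype(np.float64), 1, mode='edge')
    z = lambda di, dj: fp[1 + di:1 + di + f.shape[0], 1 + dj:1 + dj + f.shape[1]]  # f(x+di, y+dj)
    gx = (z(1, -1) + 2 * z(1, 0) + z(1, 1)) - (z(-1, -1) + 2 * z(-1, 0) + z(-1, 1))
    gy = (z(-1, 1) + 2 * z(0, 1) + z(1, 1)) - (z(-1, -1) + 2 * z(0, -1) + z(1, -1))
    return np.abs(gx) + np.abs(gy)

img = np.zeros((64, 64), dtype=np.uint8)
img[:, 32:] = 200                                  # vertical step edge
edges = sobel_magnitude(img)                       # strong response along the edge column
```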

107
Image Sharpening based on First-Order
Derivatives

108
Example

109
Example:

Combining
Spatial
Enhancement
Methods

Goal:

Enhance the
image by
sharpening it and
by bringing out
more of the
skeletal detail

110
Example:

Combining
Spatial
Enhancement
Methods

Goal:

Enhance the
image by
sharpening it and
by bringing out
more of the
skeletal detail

111
Digital Image Processing
Filtering in Frequency Domain
(Unit-II)
(CO-R16C402.2; PO’s-1, 2, 5, 12; PSO’s-1, 2)
N.Nagaraju
Assistant Professor
Department of ECE

112
Contents
• The Basics of filtering in the frequency domain
• Image smoothing using frequency domain filters
• Image Sharpening using frequency domain filters
• Selective filtering

113
Basics of filtering in frequency domain

114
Some Basic filters and their functions
• Multiply all values of F(u,v) by the filter function (notch filter):
H(u, v) = 0  if (u, v) = (M/2, N/2)
H(u, v) = 1  otherwise

• All this filter would do is set F(0,0) to zero (force the average
value of an image to zero) and leave all frequency components
of the Fourier transform untouched.

115
Some Basic filters and their functions

Lowpass filter

Highpass filter

116
Some Basic filters and their functions

117
Smoothing frequency-domain filters
• The basic model for filtering in the frequency domain

G(u, v) = H(u, v) · F(u, v)

where F(u, v): the Fourier transform of the image to be smoothed
H(u, v): a filter transfer function
• Smoothing is fundamentally a lowpass operation in the
frequency domain.
• There are several standard forms of lowpass filters (LPF).
• Ideal lowpass filter
• Butterworth lowpass filter
• Gaussian lowpass filter
118
Ideal Lowpass filters (ILPFs)
• The simplest lowpass filter is a filter that “cuts off” all high-
frequency components of the Fourier transform that are at a
distance greater than a specified distance D0 from the origin
of the transform.
• The transfer function of an ideal lowpass filter
H(u, v) = 1  if D(u, v) ≤ D₀
H(u, v) = 0  if D(u, v) > D₀

where D(u, v): the distance from point (u, v) to the center of
the frequency rectangle

D(u, v) = [ (u − M/2)² + (v − N/2)² ]^(1/2)
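A minimal sketch of ideal lowpass filtering in the frequency domain (G = H·F on a centered spectrum); the cutoff D0 and the test image are illustrative:

```python
import numpy as np

def ideal_lowpass(img, D0=30):
    """Multiply the centered spectrum by H(u,v) = 1 inside radius D0, 0 outside."""
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))                     # centered Fourier transform
    u = np.arange(M)[:, None] - M / 2
    v = np.arange(N)[None, :] - N / 2
    H = (np.sqrt(u**2 + v**2) <= D0).astype(np.float64)       # ideal LPF transfer function
    g = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    return np.clip(g, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, size=(128, 128)).astype(np.uint8)
blurred = ideal_lowpass(img, D0=30)        # sharp cutoff; expect visible ringing
```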

119
Ideal Lowpass filters (ILPFs)
(contd.)

120
Ideal Lowpass filters (ILPFs)
(contd.)

121
Ideal Lowpass filters (ILPFs)
(contd.)

122
Butterworth Lowpass filters (BLPFs) with
order n
H(u, v) = 1 / [ 1 + ( D(u, v) / D₀ )^(2n) ]

123
Butterworth Lowpass filters (BLPFs)
(contd.)

n=2
D0=5,15,30,80,and 230

124
Butterworth Lowpass filters (BLPFs)
(contd.) Spatial Representation

n=1 n=2 n=5 n=20

125
Gaussian Lowpass filters (GLPFs)
H(u, v) = e^( −D²(u, v) / 2D₀² )

126
Gaussian Lowpass filters (GLPFs)
(Contd.)

D0=5,15,30,80,and 230

127
Additional Examples of Lowpass
filtering

128
Additional Examples of Lowpass
filtering

129
Sharpening Frequency Domain Filter
H_hp(u, v) = 1 − H_lp(u, v)

Ideal highpass filter
H(u, v) = 0  if D(u, v) ≤ D₀
H(u, v) = 1  if D(u, v) > D₀

Butterworth highpass filter
H(u, v) = 1 / [ 1 + ( D₀ / D(u, v) )^(2n) ]

Gaussian highpass filter
H(u, v) = 1 − e^( −D²(u, v) / 2D₀² )
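A companion sketch for highpass filtering, using the Gaussian transfer function H = 1 − exp(−D²/2D₀²); D0 and the input image are illustrative:

```python
import numpy as np

def gaussian_highpass(img, D0=30):
    """Attenuate low frequencies with H(u,v) = 1 - exp(-D^2(u,v) / (2 * D0^2))."""
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    u = np.arange(M)[:, None] - M / 2
    v = np.arange(N)[None, :] - N / 2
    H = 1.0 - np.exp(-(u**2 + v**2) / (2.0 * D0**2))          # Gaussian highpass transfer function
    return np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))

img = np.random.randint(0, 256, size=(128, 128)).astype(np.float64)
detail = gaussian_highpass(img, D0=30)     # keeps edges and fine detail, removes smooth regions
```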

130
Highpass filters
Spatial Representations

131
Ideal Highpass Filters
H(u, v) = 0  if D(u, v) ≤ D₀
H(u, v) = 1  if D(u, v) > D₀

132
Butterworth Highpass Filters
H(u, v) = 1 / [ 1 + ( D₀ / D(u, v) )^(2n) ]

133
Gaussian Highpass Filters
H(u, v) = 1 − e^( −D²(u, v) / 2D₀² )

134
The Laplacian in the Frequency Domain
• The Laplacian filter
H(u, v) = −4π² (u² + v²)
• Shifted to the center of the frequency rectangle:
H(u, v) = −4π² [ (u − M/2)² + (v − N/2)² ]

(Figure: frequency-domain and spatial-domain representations of the Laplacian)
135
The Laplacian in the Frequency Domain

g(x, y) = f(x, y) − ∇²f(x, y)
where
∇²f(x, y): the Laplacian-filtered
image in the spatial domain

For display
purposes only

136
Selective Filtering
Non-Selective Filters:
operate over the entire frequency rectangle

Selective Filters
operate over some part, not entire frequency rectangle
• bandreject or bandpass: process specific bands
• notch filters: process small regions of the frequency rectangle

137
Selective Filtering:
Bandreject and Bandpass Filters

H_BP(u, v) = 1 − H_BR(u, v)

138
Selective Filtering:
Bandreject and Bandpass Filters

139
Selective Filtering: Notch Filters
Zero-phase-shift filters must be symmetric about the origin.
A notch with center at (u0, v0) must have a corresponding notch at
location (-u0,-v0).

Notch reject filters are constructed as products of highpass filters


whose centers have been translated to the centers of the notches.

H_NR(u, v) = Π_{k=1}^{Q} H_k(u, v) · H_{−k}(u, v)

where H_k(u, v) and H_{−k}(u, v) are highpass filters whose centers are
at (u_k, v_k) and (−u_k, −v_k), respectively.

140
Selective Filtering: Notch Filters
H_NR(u, v) = Π_{k=1}^{Q} H_k(u, v) · H_{−k}(u, v)

where H_k(u, v) and H_{−k}(u, v) are highpass filters whose centers are
at (u_k, v_k) and (−u_k, −v_k), respectively.

A Butterworth notch reject filter of order n with three notch pairs:

H_NR(u, v) = Π_{k=1}^{3} [ 1 / (1 + [D₀k / D_k(u, v)]^(2n)) ] · [ 1 / (1 + [D₀k / D_{−k}(u, v)]^(2n)) ]

D_k(u, v)   = [ (u − M/2 − u_k)² + (v − N/2 − v_k)² ]^(1/2)
D_{−k}(u, v) = [ (u − M/2 + u_k)² + (v − N/2 + v_k)² ]^(1/2)

141
Examples:
Notch
Filters (1)

A Butterworth notch
reject filter with D₀ = 3
and n = 4 for all
notch pairs

142
Examples:
Notch Filters
(2)

143
144
Thank you

145
