
Objectives

 Necessity of image enhancement
 Image enhancement in the spatial domain
  - point processing
  - histogram-based techniques
  - mask processing
 Image enhancement in the frequency domain
What is Image Enhancement?
 It is a technique of processing an image to enhance certain features of the image.
 It is designed to improve the quality of an image as perceived by a human being.

Principal Objective of Enhancement
 Process an image so that the result is more suitable than the original image for a specific application.
 Suitability is judged with respect to each application.
 A method that is quite useful for enhancing one image may not be the best approach for enhancing another image.

Domains
 Spatial domain (image plane):
  Techniques are based on direct manipulation of pixels in an image.
 Frequency domain:
  Techniques are based on modifying the Fourier transform of an image.
 There are also enhancement techniques based on various combinations of methods from these two categories.

Good images
 For human vision
  The visual evaluation of image quality is a highly subjective process.
  It is hard to standardize the definition of a good image.
 For machine perception
  The evaluation task is easier.
  A good image is one that gives the best machine recognition results.
 A certain amount of trial and error is usually required before a particular image enhancement approach is selected.

Spatial Domain
 Procedures that operate directly on pixels:
  g(x,y) = T[f(x,y)]
 where
  f(x,y) is the input image
  g(x,y) is the processed image
  T is an operator on f defined over some neighborhood of (x,y)

Point Processing
 Neighborhood = 1x1 pixel
 g depends only on the value of f at (x,y)
 T = gray-level (intensity or mapping) transformation function that operates on one pixel:
  s = T(r)
 where
  r = gray level of f(x,y)
  s = gray level of g(x,y)

Types of point operation
 1. Brightness modification
 2. Contrast manipulation
 3. Histogram manipulation

1. Brightness modification
 Brightness depends on the value associated with each pixel of the image.
 To change brightness, a constant is added to or subtracted from the luminance of all sample values:
  Increasing the brightness: g(x,y) = f(x,y) + c
  Decreasing the brightness: g(x,y) = f(x,y) - c

2. Contrast adjustment
 Done by scaling all the pixels of the image by a constant c:
  g(x,y) = f(x,y) * c
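A minimal MATLAB sketch (not from the slides) of these two point operations; the Image Processing Toolbox demo image 'cameraman.tif' is an assumption.

% Brightness and contrast point operations on a uint8 grayscale image.
I  = imread('cameraman.tif');
br = I + 50;                         % brightness increase: g = f + c (saturates at 255)
dk = I - 50;                         % brightness decrease: g = f - c (saturates at 0)
ct = uint8(double(I) * 1.5);         % contrast scaling:    g = f * c
subplot(2,2,1); imshow(I);  title('original');
subplot(2,2,2); imshow(br); title('brighter (+50)');
subplot(2,2,3); imshow(dk); title('darker (-50)');
subplot(2,2,4); imshow(ct); title('contrast x1.5');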

Contrast Stretching
 Produces higher contrast than the original by
  darkening the levels below m in the original image
  brightening the levels above m in the original image

Thresholding
 Produces a two-level (binary) image.

Mask Processing or Filter
 The neighborhood is bigger than 1x1 pixel.
 Use a function of the values of f in a predefined neighborhood of (x,y) to determine the value of g at (x,y).
 The values of the mask coefficients determine the nature of the process.
 Used in techniques such as
  image sharpening
  image smoothing

Mask/Filter
 The neighborhood of a point (x,y) can be defined by using a square/rectangular (commonly used) or circular subimage area centered at (x,y).
 The center of the subimage is moved from pixel to pixel, starting at the top corner.

3 basic gray-level transformation functions
 Linear functions
  negative and identity transformations
 Logarithmic functions
  log and inverse-log transformations
 Power-law functions
  nth power and nth root transformations
 [Figure: plots of the negative, nth root, log, nth power, identity and inverse-log curves against input gray level r]

Identity function
 Output intensities are identical to input intensities.
 It is included in the graph only for completeness.

Linear Grey Level Transformation: Image Negatives
 For an image with gray levels in the range [0, L-1], where L = 2^n; n = 1, 2, …
 Negative transformation: s = L - 1 - r (for an 8-bit image, s = 255 - r)
 Reverses the intensity levels of an image.
 Suitable for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.

Example of Negative Image
 [Figure: original image and its negative]
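A minimal MATLAB sketch (not from the slides) of the negative transformation s = 255 - r for an 8-bit image; the test image is an assumption.

I   = imread('cameraman.tif');       % uint8, so L = 256
neg = 255 - I;                       % s = L - 1 - r
figure; imshowpair(I, neg, 'montage'); title('original vs. negative');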

Non-Linear Grey Level Transformation: Log Transformations
 s = c log(1 + r)
  c is a constant and r >= 0
 The log curve maps a narrow range of low gray-level values in the input image into a wider range of output levels.
 Used to expand the values of dark pixels in an image while compressing the higher-level values.
 It compresses the dynamic range of images with large variations in pixel values.
 Example of an image with a large dynamic range: a Fourier spectrum image.
  Its intensity range can extend from 0 to 10^6 or higher.
  Without compression we cannot see a significant degree of detail, as it is lost in the display.

Example of Logarithm Image
 [Figure: Fourier spectrum with range 0 to 1.5 x 10^6, and the result after applying the log transformation with c = 1, range 0 to 6.2]

Inverse Logarithm Transformations
 Do the opposite of the log transformation.
 Used to expand the values of bright pixels in an image while compressing the darker-level values.
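A minimal MATLAB sketch (not from the slides) of the log transformation applied to a Fourier spectrum; the test image and the rescaling back to [0,255] for display are assumptions.

I = imread('cameraman.tif');
F = abs(fftshift(fft2(double(I))));      % Fourier spectrum with a huge dynamic range
S = log(1 + F);                          % s = c*log(1 + r), with c = 1
S = uint8(255 * S / max(S(:)));          % rescale into a displayable range
figure; imshowpair(uint8(255 * F / max(F(:))), S, 'montage');
title('raw spectrum vs. log-transformed spectrum');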

Non-Linear Grey Level Transformation: Power-Law Transformations
 s = c r^γ
  c and γ are positive constants
 The intensity of light generated by a physical device such as a CRT is not a linear function of the applied signal.
 The intensity produced at the surface of the display is approximately the applied voltage raised to the power 2.5.
 The numerical value of the exponent of this power function is termed gamma (γ).
 This nonlinearity must be compensated in order to achieve correct reproduction of intensity.
 Used in devices whose power-law characteristics must be followed, for example
  - display devices
  - printer devices
  - CRT devices
 [Figure: plots of s = c r^γ for various values of γ (c = 1 in all cases) against input gray level r]

Gamma correction
 Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels.
 c = γ = 1 gives the identity function.
 Cathode ray tube (CRT) devices have an intensity-to-voltage response that is a power function, with γ varying from 1.8 to 2.5.
  With γ = 2.5 the displayed picture becomes darker.
 Gamma correction is done by preprocessing the image before inputting it to the monitor with s = c r^(1/γ), e.g. 1/γ = 1/2.5 = 0.4.
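A minimal MATLAB sketch (not from the slides) of gamma correction s = c r^γ on a normalized image; the exponent 1/2.5 = 0.4 follows the slide, while the test image is an assumption.

I = im2double(imread('cameraman.tif'));  % r in [0,1]
g = 0.4;                                 % correction exponent, 1/2.5
S = I .^ g;                              % s = r^gamma (c = 1)
figure; imshowpair(I, S, 'montage'); title('original vs. gamma-corrected');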

Another example: MRI (figure panels a-d)
 (a) A magnetic resonance image of an upper thoracic human spine with a fracture dislocation and spinal cord impingement. The picture is predominantly dark, so an expansion of gray levels is desirable, which needs γ < 1.
 (b) Result after power-law transformation with γ = 0.6, c = 1.
 (c) Transformation with γ = 0.4 (best result).
 (d) Transformation with γ = 0.3 (below an acceptable level).

Effect of decreasing gamma
 When γ is reduced too much, the image begins to lose contrast to the point where it starts to have a very slight "washed-out" look, especially in the background.

Another example (figure panels a-d)
 (a) The image has a washed-out appearance; it needs a compression of gray levels, which needs γ > 1.
 (b) Result after power-law transformation with γ = 3.0 (suitable).
 (c) Transformation with γ = 4.0 (suitable).
 (d) Transformation with γ = 5.0 (high contrast; the image has areas that are too dark and some detail is lost).

Contrast Stretching
 When an image appears very dark because of a wrong lens aperture setting, use a contrast stretching operation.

Contrast Stretching
 (a) Increases the dynamic range of the gray levels in the image.
 (b) A low-contrast image: the result of poor illumination, lack of dynamic range in the imaging sensor, or even a wrong setting of the lens aperture during image acquisition.
 (c) Result of contrast stretching: (r1,s1) = (rmin,0) and (r2,s2) = (rmax,L-1).
 (d) Result of thresholding.
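A minimal MATLAB sketch (not from the slides) of the contrast stretch that maps (r1,s1) = (rmin,0) and (r2,s2) = (rmax,L-1); 'pout.tif' is an assumed low-contrast demo image.

I    = im2double(imread('pout.tif'));
rmin = min(I(:));  rmax = max(I(:));
S    = (I - rmin) / (rmax - rmin);       % linear stretch of [rmin,rmax] to [0,1]
figure; imshowpair(I, S, 'montage'); title('low contrast vs. stretched');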

Gray-level slicing
 Highlights a specific range of gray levels in an image.
 Displays a high value for all gray levels in the range of interest and a low value for all other gray levels.
 (a) The transformation highlights range [A,B] of gray levels and reduces all others to a constant level: gray-level slicing without background.
 (b) The transformation highlights range [A,B] but preserves all other levels: gray-level slicing with background.

Bit-plane slicing
 Bit-plane slicing is a method of representing an image with one or more bits of the byte used for each pixel.
 Main goals:
  1. Convert a gray-level image to a binary image.
  2. Represent an image with fewer bits and compress the image to a smaller size.
 Highlights the contribution made to the total image appearance by specific bits.
 Suppose each pixel is represented by 8 bits: one 8-bit byte spans bit-plane 7 (most significant) down to bit-plane 0 (least significant).
 Higher-order bits contain the majority of the visually significant data.
 Useful for analyzing the relative importance played by each bit of the image.

8 bit planes
 Bit 0: results in a binary image, i.e. odd and even pixel values are distinguished.
 Bit 1: displays all pixels with bit 1 set: 0000.0010
 Bit 2: displays all pixels with bit 2 set: 0000.0100
 Bit 3: displays all pixels with bit 3 set: 0000.1000
 Bit 4: displays all pixels with bit 4 set: 0001.0000
 Bit 5: displays all pixels with bit 5 set: 0010.0000
 Bit 6: displays all pixels with bit 6 set: 0100.0000
 Bit 7: displays all pixels with bit 7 set: 1000.0000
 [Figure: the eight bit planes of an example image, bit-plane 7 (10000000) down to bit-plane 0]

% Bit-plane slicing: zero out individual bit planes of an 8-bit image
I = imread('cameraman.tif');                 % 256x256 uint8 image
subplot(3,2,1); imshow(I); title('original image');

b = double(I);

% MSB = 0 (clear bit-plane 7)
b1 = dec2bin(b, 8);                          % 8-char binary string per pixel (column-major order)
b1(:,1) = '0';                               % column 1 = most significant bit
c1 = uint8(bin2dec(b1));
k  = reshape(c1, 256, 256);
subplot(3,2,2); imshow(k); title('MSB=0');

% 6th bit = 0 (clear bit-plane 5)
b3 = dec2bin(b, 8);
b3(:,3) = '0';
c2 = uint8(bin2dec(b3));
k1 = reshape(c2, 256, 256);
subplot(3,2,3); imshow(k1); title('6th bit=0');

% 4th bit = 0 (clear bit-plane 3)
b5 = dec2bin(b, 8);
b5(:,5) = '0';
c3 = uint8(bin2dec(b5));
k2 = reshape(c3, 256, 256);
subplot(3,2,4); imshow(k2); title('4th bit=0');

% 2nd bit = 0 (clear bit-plane 1)
b7 = dec2bin(b, 8);
b7(:,7) = '0';
c4 = uint8(bin2dec(b7));
k3 = reshape(c4, 256, 256);
subplot(3,2,5); imshow(k3); title('2nd bit=0');

% LSB = 0 (clear bit-plane 0)
b9 = dec2bin(b, 8);
b9(:,8) = '0';
c5 = uint8(bin2dec(b9));
k4 = reshape(c5, 256, 256);
subplot(3,2,6); imshow(k4); title('LSB=0');

Image Histogram
 A plot of the number of occurrences of grey levels in the image against the grey-level values.
 The shape of a histogram provides useful information for contrast enhancement.
 Example 5x5 image:
   0  4  8 10 12
  12 16  5  0 16
   4 16  8  5 10
  10  0  4 12 16
  12  0 16  0  8
 [Figure: its histogram, frequency of occurrence vs. gray level 0, 4, 5, 8, 10, 12, 16]

Some Typical Histograms
 Dark image
 Bright image
 High contrast image
 Low contrast image
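A minimal MATLAB sketch (not from the slides) for computing and displaying an image histogram; imhist (Image Processing Toolbox) and the test image are assumptions.

I = imread('cameraman.tif');
figure;
subplot(1,2,1); imshow(I); title('image');
subplot(1,2,2); imhist(I); title('histogram');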

Histogram Processing
 The histogram of a digital image with gray levels in the range [0,L-1] is a discrete function
  h(rk) = nk
 where
  rk : the kth gray level
  nk : the number of pixels in the image having gray level rk
  h(rk) : histogram of a digital image with gray levels rk
 The histogram of an image tells a lot about the distribution of grey levels within the image.

Normalized Histogram
 Divide each histogram count at gray level rk by the total number of pixels in the image, n:
  p(rk) = nk / n, for k = 0,1,…,L-1
 p(rk) gives an estimate of the probability of occurrence of gray level rk.
 The sum of all components of a normalized histogram is equal to 1.

Histogram Processing
 Basis for numerous spatial-domain processing techniques.
 Used effectively for image enhancement.
 Information inherent in histograms is also useful in image compression and segmentation.

Example (histograms h(rk) or p(rk) vs. rk)
 Dark image: components of the histogram are concentrated on the low side of the gray scale.
 Bright image: components of the histogram are concentrated on the high side of the gray scale.

Example (continued)
 Low-contrast image: the histogram is narrow and centered toward the middle of the gray scale.
 High-contrast image: the histogram covers a broad range of the gray scale and the distribution of pixels is not too far from uniform, with very few vertical lines being much higher than the others.

Histogram Equalization
 Since a low-contrast image's histogram is narrow and centered toward the middle of the gray scale, if we distribute the histogram over a wider range, the quality of the image will be improved.
 We can do this by adjusting the probability density function of the original histogram of the image so that the probability spreads equally.

Histogram transformation
 s = T(r), where 0 ≤ r ≤ 1
 T(r) satisfies
  (a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1
  (b) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1
 [Figure: the curve s = T(r), mapping rk to sk = T(rk)]

Conditions of T(r)
 Single-valued (one-to-one relationship) guarantees that the inverse transformation will exist.
 The monotonicity condition preserves the increasing order from black to white in the output image, so it won't produce a negative image.
 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1 guarantees that the output gray levels will be in the same range as the input levels.
 The inverse transformation from s back to r is r = T^-1(s), 0 ≤ s ≤ 1.

Probability Density Function
 The gray levels in an image may be viewed as random variables in the interval [0,1].
 The PDF is one of the fundamental descriptors of a random variable.

Random Variables
 The probability density function (pdf, or simply the density function) of a random variable x is defined as the derivative of the cdf:
  p(x) = dF(x)/dx

Random Variables
 The pdf satisfies the standard properties: it is non-negative and integrates to 1 over all x.
 If a random variable x is transformed by a monotonic transformation function T(x) to produce a new random variable y, the probability density function of y can be obtained from knowledge of T(x) and the probability density function of x as follows:
  py(y) = px(x) |dx/dy|
 where the vertical bars signify the absolute value.

Random Variables
 A function T(x) is monotonically increasing if T(x1) < T(x2) for x1 < x2, and
 a function T(x) is monotonically decreasing if T(x1) > T(x2) for x1 < x2.
 The preceding equation is valid if T(x) is an increasing or decreasing monotonic function.

Applied to Images
 Let
  pr(r) denote the PDF of random variable r
  ps(s) denote the PDF of random variable s
 If pr(r) and T(r) are known and T^-1(s) satisfies condition (a), then ps(s) can be obtained using the formula
  ps(s) = pr(r) |dr/ds|

Applied to Images
 The PDF of the transformed variable s is determined by
  the gray-level PDF of the input image, and
  the chosen transformation function.

Transformation function
 The transformation function used is the cumulative distribution function (CDF) of random variable r:
  s = T(r) = ∫0^r pr(w) dw
 where w is a dummy variable of integration.
 Note: T(r) depends on pr(r).

Cumulative Distribution Function
 The CDF is an integral of a probability function (always positive): it is the area under the function.
 Thus the CDF is always single-valued and monotonically increasing.
 Thus the CDF satisfies condition (a).
 We can therefore use the CDF as a transformation function.

Finding ps(s) from a given T(r)
 ds/dr = dT(r)/dr = d/dr [ ∫0^r pr(w) dw ] = pr(r)
 Substituting into ps(s) = pr(r) |dr/ds| yields
  ps(s) = pr(r) * 1/pr(r) = 1, where 0 ≤ s ≤ 1

ps(s)
 Because ps(s) is a probability function, it must be zero outside the interval [0,1]; its integral over all values of s must equal 1.
 ps(s) is therefore called a uniform probability density function.
 ps(s) is always uniform, independent of the form of pr(r).
 In other words, s = T(r) = ∫0^r pr(w) dw yields a random variable s characterized by a uniform probability density function.
 [Figure: Ps(s) = 1 on 0 ≤ s ≤ 1]

Discrete transformation function
 The probability of occurrence of gray level rk in an image is approximated by
  pr(rk) = nk / n, where k = 0, 1, ..., L-1
 The discrete version of the transformation is
  sk = T(rk) = Σ(j=0..k) pr(rj) = Σ(j=0..k) nj/n, where k = 0, 1, ..., L-1
 The result is mapped back to integer gray levels by
  s' = integer[ (s - smin) / (1 - smin) * (L - 1) + 0.5 ]

 Example:
  Suppose we have r -> 0,1,…,7 and s -> 0,1,…,7 with
  pr(0)=0, pr(1)=0.1=pr(2), pr(3)=0.3, pr(4)=0, pr(5)=0, pr(6)=0.4, pr(7)=0.1
  Find the mapping function T(r).

Histogram Equalization
 Thus, an output image is obtained by mapping each pixel with level rk in the input image into a corresponding pixel with level sk in the output image.
 In discrete space, it cannot be proved in general that this discrete transformation will produce the discrete equivalent of a uniform probability density function, which would be a uniform histogram.

Example
 [Figure: image before and after histogram equalization]
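A minimal MATLAB sketch (not from the slides) of whole-image histogram equalization using the toolbox function histeq; 'pout.tif' is an assumed low-contrast demo image.

I  = imread('pout.tif');
Ie = histeq(I);                       % equalized image
figure; imshowpair(I, Ie, 'montage'); title('before vs. after equalization');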

Example
 [Figure: image before and after histogram equalization]
 The quality is not improved much because the original image already has a broad gray-level scale.

Example
 4x4 image, gray scale = [0,9]:
  2 3 3 2
  4 2 4 3
  3 2 3 5
  2 4 2 4
 [Figure: its histogram, number of pixels vs. gray level 0..9]

Example (continued)
 Output image after histogram equalization, gray scale = [0,9]:
  3 6 6 3
  8 3 8 6
  6 3 6 9
  3 8 3 8
 [Figure: its histogram, number of pixels vs. gray level 0..9]

Note
 It is clearly seen that histogram equalization distributes the gray levels so as to reach the maximum gray level (white), because the cumulative distribution function equals 1 when 0 ≤ r ≤ L-1.
 If the cumulative counts of gray levels are only slightly different, they will be mapped to slightly different or even the same gray levels, since we have to round the processed gray level of the output image to an integer.
 Thus the discrete transformation function cannot guarantee a one-to-one mapping relationship.

Histogram Matching (Specification)
 Histogram equalization has a disadvantage: it can generate only one type of output image.
 With histogram specification, we can specify the shape of the histogram that we wish the output image to have.
 It does not have to be a uniform histogram.
 Used to enhance a specific portion of the gray scale.

Consider the continuous domain
 Let pr(r) denote the continuous probability density function of the gray level of the input image, r.
 Let pz(z) denote the desired (specified) continuous probability density function of the gray level of the output image, z.
 Let s be a random variable with the property
  s = T(r) = ∫0^r pr(w) dw   (histogram equalization)
 where w is a dummy variable of integration.

 Next, we define a random variable z with the property
  G(z) = ∫0^z pz(t) dt = s
 where t is a dummy variable of integration; thus
  s = T(r) = G(z)
 Therefore, z must satisfy the condition
  z = G^-1(s) = G^-1[T(r)]
 Assume G^-1 exists and satisfies conditions (a) and (b).
 We can then map an input gray level r to an output gray level z.

Procedure Conclusion
 1. Obtain the transformation function T(r) by calculating the histogram equalization of the input image:
  s = T(r) = ∫0^r pr(w) dw
 2. Obtain the transformation function G(z) by calculating the histogram equalization of the desired density function:
  G(z) = ∫0^z pz(t) dt = s

Procedure Conclusion (continued)
 3. Obtain the inverse transformation function G^-1:
  z = G^-1(s) = G^-1[T(r)]
 4. Obtain the output image by applying the processed gray level from the inverse transformation function to all the pixels in the input image.

Example
 Assume an image has the gray-level probability density function pr(r) shown below:
  pr(r) = -2r + 2 for 0 ≤ r ≤ 1; 0 elsewhere
 with ∫ pr(w) dw = 1 over [0,1].
 [Figure: plot of pr(r), decreasing linearly from 2 at r = 0 to 0 at r = 1]

Example (continued)
 We would like to apply histogram specification with the desired probability density function pz(z) shown below:
  pz(z) = 2z for 0 ≤ z ≤ 1; 0 elsewhere
 with ∫ pz(w) dw = 1 over [0,1].
 [Figure: plot of pz(z), increasing linearly from 0 at z = 0 to 2 at z = 1]

Discrete formulation
 sk = T(rk) = Σ(j=0..k) pr(rj),   k = 0,1,2,...,L-1   (1)
            = Σ(j=0..k) nj/n,     k = 0,1,2,...,L-1   (2)
 G(zk) = Σ(i=0..k) pz(zi) = sk,   k = 0,1,2,...,L-1   (3)
 zk = G^-1[T(rk)] = G^-1[sk],     k = 0,1,2,...,L-1   (4)

Implementation
 Since we do not have the z's, we must resort to some sort of iterative scheme to find z from s.
 We are dealing with integers, which makes this a very simple process.
 Basically vk = sk, so the z's for which we are looking must satisfy the equation G(zk) = sk.

Implementation (continued)
 Thus, all we have to do to find the value of zk corresponding to sk is to iterate on values of z such that this equation is satisfied for k = 0, 1, 2, ..., L-1.
 This is the same thing as Eq. (4), except that we do not have to find the inverse of G because we are going to iterate on z.
 Since we are dealing with integers, the closest we can get to satisfying the equation is to let zk = z' for each value of k, where z' is the smallest integer in the interval [0, L-1] such that
  G(z') - sk ≥ 0    (5)

Implementation summary
 The procedure we have just developed for histogram matching may be summarized as follows:
 1. Obtain the histogram of the given image.
 2. Use Eq. (1) to precompute a mapped level sk for each level rk.
 3. Obtain the transformation function G from the given pz(z) using Eq. (3).
 4. Precompute zk for each value of sk using the iterative scheme defined in connection with Eq. (5).
 5. For each pixel in the original image, if the value of that pixel is rk, map this value to its corresponding level sk; then map level sk into the final level zk. Use the precomputed values from steps (2) and (4) for these mappings.

Example
 Image of the Mars moon: the image is dominated by large, dark areas, resulting in a histogram characterized by a large concentration of pixels in the dark end of the gray scale.
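A minimal MATLAB sketch (not from the slides) of histogram specification. Instead of the iterative G-inverse search summarized above, it uses the toolbox call histeq(I, hgram), which maps the image so its histogram approximates the target hgram; the target shape pz(z) = 2z and the test image are assumptions.

I     = imread('cameraman.tif');
z     = linspace(0, 1, 256);
hgram = 2 * z;                        % desired histogram shape pz(z) = 2z (up to scale)
J     = histeq(I, hgram);             % histogram-specified output
figure; imshowpair(I, J, 'montage'); title('original vs. histogram-specified');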

Image Equalization (Mars moon example)
 [Figure: result image after histogram equalization, its histogram, and the transformation function used for histogram equalization]
 Histogram equalization does not make the result image look better than the original image. Considering the histogram of the result image, the net effect of this method is to map a very narrow interval of dark pixels into the upper end of the gray scale of the output image. As a consequence, the output image is light and has a washed-out appearance.

Solve the problem
 Since the problem with the transformation function of histogram equalization was caused by a large concentration of pixels in the original image with levels near 0, a reasonable approach is to modify the histogram of that image so that it does not have this property: use histogram specification.
 [Figure: histogram equalization vs. histogram specification transformation functions]

Histogram Specification
 (1) The transformation function G(z) obtained from
  G(zk) = Σ(i=0..k) pz(zi) = sk,   k = 0,1,2,...,L-1
 (2) The inverse transformation G^-1(s).

Result image and its histogram
 [Figure: original image, the image after applying the histogram transformation, and the output image's histogram]
 Notice that the output histogram's low end has shifted right toward the lighter region of the gray scale, as desired.

Note
 Histogram specification is a trial-and-error process.
 There are no rules for specifying histograms, and one must resort to analysis on a case-by-case basis for any given enhancement task.

Note
 Histogram processing methods are global processing, in the sense that pixels are modified by a transformation function based on the gray-level content of an entire image.
 Sometimes we may need to enhance details over small areas in an image, which is called local enhancement.

Local Enhancement
 Define a square or rectangular neighborhood and move the center of this area from pixel to pixel.
 At each location, the histogram of the points in the neighborhood is computed and either a histogram equalization or a histogram specification transformation function is obtained.
 Another approach used to reduce computation is to utilize nonoverlapping regions, but it usually produces an undesirable checkerboard effect.
 Figure:
  (a) Original image (slightly blurred to reduce noise).
  (b) Global histogram equalization (enhances noise and slightly increases contrast, but the structure is not changed).
  (c) Local histogram equalization using a 7x7 neighborhood (reveals the small squares inside the larger ones of the original image).

Explain the result in (c)
 Basically, the original image consists of many small squares inside the larger dark ones.
 However, the small squares were too close in gray level to the larger ones, and their sizes were too small to influence global histogram equalization significantly.
 So, when we use the local enhancement technique, it reveals the small areas.
 Note also that the finer noise texture results from the local processing using relatively small neighborhoods.

For every pixel, histogram equalization is done based on the neighborhood values. A 3-by-3 window matrix is used here for explanation. By changing the window matrix size, the local equalization can be tuned; in the sketch below, the window size is set by the values of M and N.
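The code the paragraph above refers to is not present in the extracted text; the following is an assumed MATLAB reconstruction of sliding-window local histogram equalization with an M-by-N window.

I  = imread('cameraman.tif');
M  = 3;  N = 3;                           % window size; change M and N to tune the effect
pm = floor(M/2);  pn = floor(N/2);
P  = padarray(I, [pm pn], 'replicate');   % pad borders so every pixel has a full window
J  = zeros(size(I), 'uint8');
for r = 1:size(I,1)
    for c = 1:size(I,2)
        w = P(r:r+M-1, c:c+N-1);          % M-by-N neighborhood centered at (r,c)
        e = histeq(w);                    % equalize the local window
        J(r,c) = e(pm+1, pn+1);           % keep only the center pixel
    end
end
figure; imshowpair(I, J, 'montage'); title('original vs. local equalization');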

Enhancement using Arithmetic/Logic Operations
 Arithmetic/logic operations are performed on a pixel-by-pixel basis between two or more images.
 The exception is the NOT operation, which is performed on a single image.
 NOT operation = negative transformation.

Logic Operations
 Logic operations are performed on gray-level images; the pixel values are processed as binary numbers.
 Light represents a binary 1, and dark represents a binary 0.

Example of AND Operation
 [Figure: original image, AND image mask, result of AND operation]

Example of OR Operation
 [Figure: original image, OR image mask, result of OR operation]

Image Subtraction
 g(x,y) = f(x,y) - h(x,y)
 Enhances the differences between images.
 Used in medical imaging (called mask mode radiography) and in PCB fault detection.
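A minimal MATLAB sketch (not from the slides) of image subtraction g(x,y) = f(x,y) - h(x,y); using a blurred copy of the image as the second image is an assumption for illustration.

f = imread('cameraman.tif');
h = imfilter(f, fspecial('average', 5));  % second image: a blurred copy of f
g = imsubtract(f, h);                     % pixel-wise difference (clipped at 0 for uint8)
figure; imshow(g, []); title('difference image f - h');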

Image Subtraction
 [Figure: example of image subtraction (mask mode radiography)]

Spatial Filtering
 Uses a filter (also called a mask, kernel, template, or window).
 The values in a filter subimage are referred to as coefficients, rather than pixels.
 Our focus will be on masks of odd sizes, e.g. 3x3, 5x5, …

Spatial Filtering Process
 Simply move the filter mask from point to point in an image.
 At each point (x,y), the response of the filter at that point is calculated using a predefined relationship:
  R = w1*z1 + w2*z2 + ... + wmn*zmn = Σ(i=1..mn) wi*zi

Linear Filtering
 Linear filtering of an image f of size MxN with a filter mask of size mxn is given by the expression
  g(x,y) = Σ(s=-a..a) Σ(t=-b..b) w(s,t) f(x+s, y+t)
 where a = (m-1)/2 and b = (n-1)/2.
 To generate a complete filtered image, this equation must be applied for x = 0, 1, 2, …, M-1 and y = 0, 1, 2, …, N-1.

Spatial Filtering: Equation Form
 g(x,y) = Σ(s=-a..a) Σ(t=-b..b) w(s,t) f(x+s, y+t)
 Filtering can be given in equation form as shown above; notations are based on the accompanying figure.
 (Images taken from Gonzalez & Woods, Digital Image Processing, 2002)

Smoothing Spatial Filters
 Used for blurring and for noise reduction.
 Blurring is used in preprocessing steps, such as
  removal of small details from an image prior to object extraction
  bridging of small gaps in lines or curves
 Noise reduction can be accomplished by blurring with a linear filter and also with a nonlinear filter.

Smoothing Linear Filters
 The output is simply the average of the pixels contained in the neighborhood of the filter mask.
 Called averaging filters or lowpass filters.
 Replacing the value of every pixel in an image by the average of the gray levels in the neighborhood will reduce the "sharp" transitions in gray levels.
 Sharp transitions include
  random noise in the image
  edges of objects in the image
 Thus, smoothing can reduce noise (desirable) and blur edges (undesirable).

3x3 Smoothing Linear Filters
 Box filter: all nine coefficients equal 1, with a 1/9 normalization factor.
 Weighted average: the center is the most important, and other pixels are inversely weighted as a function of their distance from the center of the mask.

Image smoothing: 3x3 mean filter
 [Figure: example of smoothing with a 3x3 mean filter]

Illustration of Spatial Filtering
 Original 3x3 image:
   7  9 11
  10 50  8
   9  5  6
 3x3 averaging mask:
  1/9 * [ 1 1 1
          1 1 1
          1 1 1 ]
 Input image after zero padding:
  0  0  0  0 0
  0  7  9 11 0
  0 10 50  8 0
  0  9  5  6 0
  0  0  0  0 0

Result of Averaging Filter
 Original image:            Image after spatial averaging:
   7  9 11                   8.4 10.7  8.8
  10 50  8                  10.3 12.9  5.7
   9  5  6                   4.1  4.6  3.2
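A minimal MATLAB sketch (not from the slides) that reproduces the small averaging example above with zero padding.

f = [ 7  9 11;
     10 50  8;
      9  5  6];
w = ones(3,3) / 9;            % 3x3 averaging mask
g = conv2(f, w, 'same');      % 'same' keeps the 3x3 size and zero-pads the border
disp(g)                       % approximately the table shown above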

Spatial Averaging
 [Figure: original image and smoothed image using a 3x3 smoothing filter]
 [Figure: original image and smoothed image using a 5x5 smoothing filter]

Convolution Examples
 Original images and the results of convolving them with box blur masks of increasing size:
  3x3 blur: all coefficients 1, scaled by 1/9
  5x5 blur: all coefficients 1, scaled by 1/25
  9x9 blur: all coefficients 1, scaled by 1/81
  17x17 blur: all coefficients 1, scaled by 1/289
 [Figures: the original images and their 3x3, 5x5, 9x9 and 17x17 blurred versions]

Weighted average filter
 The basic strategy behind weighting the center point the highest and then reducing the value of the coefficients as a function of increasing distance from the origin is simply an attempt to reduce blurring in the smoothing process.

General form: smoothing mask
 For a filter of size mxn (m and n odd):
  g(x,y) = [ Σ(s=-a..a) Σ(t=-b..b) w(s,t) f(x+s, y+t) ] / [ Σ(s=-a..a) Σ(t=-b..b) w(s,t) ]
 The denominator is the summation of all coefficients of the mask.

Example (figure panels a-f)
 (a) Original image, 500x500 pixels.
 (b)-(f) Results of smoothing with square averaging filter masks of size n = 3, 5, 9, 15 and 35, respectively.
 Note:
  A big mask is used to eliminate small objects from an image.
  The size of the mask establishes the relative size of the objects that will be blended with the background.

Order-Statistics Filters (Nonlinear Filters)
 The response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter.
 Example: median filter, R = median{zk | k = 1,2,…,n x n}
 Note: n x n is the size of the mask.

Median Filters
 Replace the value of a pixel by the median of the gray levels in the neighborhood of that pixel (the original value of the pixel is included in the computation of the median).
 Quite popular because, for certain types of random noise (impulse noise, i.e. salt-and-pepper noise), they provide excellent noise-reduction capabilities with considerably less blurring than linear smoothing filters of similar size.

Median Filters (continued)
 Force points with distinct gray levels to be more like their neighbors.
 Isolated clusters of pixels that are light or dark with respect to their neighbors, and whose area is less than n^2/2 (one-half the filter area), are eliminated by an n x n median filter.
  Eliminated = forced to take a value equal to the median intensity of the neighbors.
 Larger clusters are affected considerably less.

Median Filters: procedure
 Median filters perform the following tasks to find each pixel value in the processed image:
 1. All pixels in the neighborhood of the pixel in the original image that are identified by the mask are sorted in ascending or descending order.
 2. The median of the sorted values is computed and chosen as the pixel value in the processed image.

Example: Median Filter
 [Figure: original image, image corrupted with salt-and-pepper noise, and the median-filtered image]
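A minimal MATLAB sketch (not from the slides) of median filtering of salt-and-pepper noise, compared with 3x3 averaging; the noise level and test image are assumptions.

I  = imread('cameraman.tif');
In = imnoise(I, 'salt & pepper', 0.05);     % corrupt about 5% of the pixels
Im = medfilt2(In, [3 3]);                   % 3x3 median filter
Ia = imfilter(In, fspecial('average', 3));  % 3x3 averaging filter for comparison
figure;
subplot(1,3,1); imshow(In); title('corrupted');
subplot(1,3,2); imshow(Ia); title('3x3 average');
subplot(1,3,3); imshow(Im); title('3x3 median');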

Image Sharpening (High-pass filtering)
 To highlight fine detail in an image, or to enhance detail that has been blurred, either in error or as a natural effect of a particular method of image acquisition.
 Image sharpening is achieved in the same fashion as image smoothing, except that a different mask, called a high-pass filter, is used.
 [Figure: an 8x8 example matrix used to demonstrate the image sharpening operation]

High-pass filtering: Derivative operators
 The strength of the response of a derivative operator is proportional to the degree of discontinuity of the image at the point at which the operator is applied.
 Edges are typically extracted by computing the derivative of the image function.
 Thus, image differentiation
  enhances edges and other discontinuities (including noise)
  deemphasizes areas with slowly varying gray-level values.

First-order derivative
 The derivative of a digital pixel grid can be defined in terms of differences.
 A basic definition of the first-order derivative of a one-dimensional function f(x) is the difference
  ∂f/∂x = f(x+1) - f(x)

Second-order derivative
 Similarly, we define the second-order derivative of a one-dimensional function f(x) as the difference
  ∂²f/∂x² = [f(x+1) - f(x)] - [f(x) - f(x-1)]
          = f(x+1) + f(x-1) - 2f(x)

Spatial Differentiation
 [Figure: profiles A and B illustrating spatial differentiation]

Using Second Derivatives For Image Enhancement
 The 2nd derivative is more useful for image enhancement than the 1st derivative:
  - stronger response to fine detail
  - simpler implementation
  - we will come back to the 1st order derivative later on
 The first sharpening filter we will look at is the Laplacian:
  - isotropic
  - one of the simplest sharpening filters
  - we will look at a digital implementation

The Laplacian
 The Laplacian is defined as follows:
  ∇²f = ∂²f/∂x² + ∂²f/∂y²
 where the partial 2nd order derivative in the x direction is defined as
  ∂²f/∂x² = f(x+1,y) + f(x-1,y) - 2f(x,y)
 and in the y direction as
  ∂²f/∂y² = f(x,y+1) + f(x,y-1) - 2f(x,y)

The Laplacian (cont…)
 So, the Laplacian can be given as follows:
  ∇²f = [f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)] - 4f(x,y)
 We can easily build a filter based on this:
   0  1  0
   1 -4  1
   0  1  0

The Laplacian (cont…)
 Applying the Laplacian to an image, we get a new image that highlights edges and other discontinuities.
 [Figure: original image, Laplacian-filtered image, and Laplacian-filtered image scaled for display]
 (Images taken from Gonzalez & Woods, Digital Image Processing, 2002)
But That Is Not Very Enhanced!
 The result of Laplacian filtering is not an enhanced image.
 We have to do more work in order to get our final image.
 Subtract the Laplacian result from the original image to generate our final sharpened enhanced image:
  g(x,y) = f(x,y) - ∇²f

Laplacian Image Enhancement
 [Figure: original image minus Laplacian-filtered image equals sharpened image]
 In the final sharpened image, edges and fine detail are much more obvious.
 (Images taken from Gonzalez & Woods, Digital Image Processing, 2002)
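A minimal MATLAB sketch (not from the slides) of Laplacian sharpening g(x,y) = f(x,y) - Laplacian(f), using the 3x3 mask above; the test image and the border handling are assumptions.

f   = im2double(imread('cameraman.tif'));
w   = [0 1 0; 1 -4 1; 0 1 0];          % Laplacian mask with a negative center
lap = imfilter(f, w, 'replicate');     % Laplacian-filtered image
g   = f - lap;                         % subtract because the mask center is negative
figure; imshowpair(f, g, 'montage'); title('original vs. Laplacian-sharpened');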

Laplacian Image Enhancement
 [Figure: further example of Laplacian image enhancement]
 (Images taken from Gonzalez & Woods, Digital Image Processing, 2002)

First and second-order derivatives of f(x,y)
 When we consider an image function of two variables, f(x,y), we deal with partial derivatives along the two spatial axes.
 Gradient operator: ∇f is built from the first partial derivatives ∂f(x,y)/∂x and ∂f(x,y)/∂y.
 Laplacian operator (a linear operator):
  ∇²f = ∂²f(x,y)/∂x² + ∂²f(x,y)/∂y²

Discrete Form of Laplacian
 From
  ∂²f/∂x² = f(x+1,y) + f(x-1,y) - 2f(x,y)
  ∂²f/∂y² = f(x,y+1) + f(x,y-1) - 2f(x,y)
 we obtain
  ∇²f = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y)

Result Laplacian mask
 [Figure: the resulting 3x3 Laplacian mask]

Laplacian mask extended to the diagonal neighbors
 [Figure: 3x3 Laplacian mask that also includes the diagonal terms]

Other implementations of Laplacian masks
 These masks give the same result, but we have to keep the sign convention in mind when combining (adding/subtracting) a Laplacian-filtered image with another image.

Effect of the Laplacian Operator
 As it is a derivative operator,
  it highlights gray-level discontinuities in an image
  it deemphasizes regions with slowly varying gray levels
 It tends to produce images that have grayish edge lines and other discontinuities, all superimposed on a dark, featureless background.

Example
 (a) Image of the North pole of the moon.
 (b) Laplacian-filtered image with the mask
   1  1  1
   1 -8  1
   1  1  1
 (c) Laplacian image scaled for display purposes.
 (d) Image enhanced by addition with the original image.

Mask of Laplacian + addition
 To simplify the computation, we can create a mask which does both operations, the Laplacian filter and the addition of the original image:
  g(x,y) = f(x,y) - [f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y)]
         = 5f(x,y) - [f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)]
 The resulting mask:
   0 -1  0
  -1  5 -1
   0 -1  0
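A minimal MATLAB sketch (not from the slides) showing that the single combined mask gives the sharpening in one pass; the test image and border handling are assumptions.

f = im2double(imread('cameraman.tif'));
w = [0 -1 0; -1 5 -1; 0 -1 0];         % combined Laplacian + addition mask
g = imfilter(f, w, 'replicate');       % equivalent to f - Laplacian(f)
figure; imshowpair(f, g, 'montage'); title('original vs. sharpened (single mask)');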

Unsharp masking
 Used for edge enhancement.
 In this approach, a smoothed version of the image is subtracted from the original image, hence tipping the image balance towards the sharper content of the image.

Unsharp masking: procedure
 1. Blur (low-pass filter) the image.
 2. Subtract the result obtained in step 1 from the original image.
 3. Multiply the result obtained in step 2 by some weighting fraction.
 4. Add the result obtained in step 3 to the original image.
 Mathematically,
  g(x,y) = F(x,y) + α [F(x,y) - F'(x,y)]
 where F'(x,y) is the blurred version of the image; subtracting a blurred version of an image produces a sharpened output image.
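A minimal MATLAB sketch (not from the slides) of the four unsharp-masking steps; the Gaussian blur parameters and the weight alpha = 0.8 are assumed values.

F     = im2double(imread('cameraman.tif'));
Fb    = imfilter(F, fspecial('gaussian', [5 5], 2), 'replicate');  % step 1: blurred F'
mask  = F - Fb;                                                    % step 2: unsharp mask
alpha = 0.8;                                                       % step 3: weighting fraction
g     = F + alpha * mask;                                          % step 4: add back to original
figure; imshowpair(F, g, 'montage'); title('original vs. unsharp-masked');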

High-boost filtering
 Also known as a high-frequency emphasis filter.
 Used to retain some of the low-frequency components to aid in the interpretation of the image.
 High boost = A*f(x,y) - lowpass
 Adding and subtracting f(x,y) with gain factor A:
  High boost = (A-1) f(x,y) + f(x,y) - lowpass
  High boost = (A-1) f(x,y) + highpass
 In terms of the Laplacian:
  fhb(x,y) = A f(x,y) - ∇²f(x,y)   if the center coefficient of the Laplacian mask is negative
  fhb(x,y) = A f(x,y) + ∇²f(x,y)   if the center coefficient of the Laplacian mask is positive

High-boost Masks
 A ≥ 1
 If A = 1, it becomes "standard" Laplacian sharpening.
 [Figure: the high-boost masks]
Edges and Derivative Filters
 The change in the gray values with respect to distance in the x and y directions gives the sharpness content of the image.
 This measurement enhances the edge content and other high-level feature content.
 For every pixel, we use the magnitude of the partial derivative vector, defined on the next slide.
Gradient Operator
 The gradient is a 2-D vector that points in the direction in which the image intensity grows fastest:
  ∇f = [Gx, Gy]^T = [∂f/∂x, ∂f/∂y]^T
 First derivatives are implemented using the magnitude of the gradient:
  ∇f = mag(∇f) = [Gx² + Gy²]^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)
 Commonly approximated as
  ∇f ≈ |Gx| + |Gy|

Gradient Mask
 Using the 3x3 neighborhood labels
  z1 z2 z3
  z4 z5 z6
  z7 z8 z9
 The gradient magnitude gives the amount of difference between pixels in the neighborhood, which gives the strength of the edge.
 The gradient orientation gives the direction of the greatest change, which presumably is the direction across the edge.
 Simplest approximation, 2x2:
  Gx = (z8 - z5) and Gy = (z6 - z5)
  ∇f = [Gx² + Gy²]^(1/2) = [(z8 - z5)² + (z6 - z5)²]^(1/2)
  ∇f ≈ |z8 - z5| + |z6 - z5|

Gradient Mask: Roberts cross-gradient operators, 2x2
 Gx = (z9 - z5) and Gy = (z8 - z6)
 ∇f = [Gx² + Gy²]^(1/2) = [(z9 - z5)² + (z8 - z6)²]^(1/2)
 ∇f ≈ |z9 - z5| + |z8 - z6|

Gradient Mask: Sobel operators, 3x3
 Gx = (z7 + 2z8 + z9) - (z1 + 2z2 + z3)
 Gy = (z3 + 2z6 + z9) - (z1 + 2z4 + z7)
 ∇f ≈ |Gx| + |Gy|
 The weight value 2 is used to achieve smoothing by giving more importance to the center point.
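A minimal MATLAB sketch (not from the slides) of the Sobel gradient magnitude |Gx| + |Gy|; the test image and border handling are assumptions.

f  = im2double(imread('cameraman.tif'));
wx = [-1 -2 -1; 0 0 0; 1 2 1];            % Gx mask: (z7+2z8+z9) - (z1+2z2+z3)
wy = wx';                                 % Gy mask: (z3+2z6+z9) - (z1+2z4+z7)
G  = abs(imfilter(f, wx, 'replicate')) + abs(imfilter(f, wy, 'replicate'));
figure; imshow(G, []); title('Sobel gradient magnitude');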

Note
 The summation of coefficients in all of these masks equals 0, indicating that they would give a response of 0 in an area of constant gray level.

Prewitt operator
 [Figure: Prewitt masks]

Roberts operator
 [Figure: Roberts masks]

Sobel operator
 [Figure: Sobel masks]

Example: Using the Prewitt operator
 [Figures: example images filtered with the Prewitt operator]

Image sharpening based on first-order derivatives: using the Sobel operator
 [Figure: example image filtered with the Sobel operator]

Sobel Example
 An image of a contact lens which is enhanced in order to make defects (at four and five o'clock in the image) more obvious.
 Sobel filters are typically used for edge detection.
 (Images taken from Gonzalez & Woods, Digital Image Processing, 2002)

Combining Spatial Enhancement Methods
 Successful image enhancement is typically not achieved using a single operation.
 Rather, we combine a range of techniques in order to achieve a final result.
 This example will focus on enhancing the bone scan shown in the figure.
 (Images taken from Gonzalez & Woods, Digital Image Processing, 2002)

Example of Combining Spatial Enhancement Methods
 Solution:
 1. Laplacian to highlight fine detail.
 2. Gradient to enhance prominent edges.
 3. Gray-level transformation to increase the dynamic range of gray levels.

Combining Spatial Enhancement Methods (cont…)
 (a) The original bone scan.
 (b) Laplacian filter of bone scan (a).
 (c) Sharpened version of the bone scan achieved by subtracting (a) and (b).
 (d) Sobel filter of bone scan (a).
 (e) Image (d) smoothed with a 5x5 averaging filter.
 (f) The product of (c) and (e), which will be used as a mask.
 (g) Sharpened image, which is the sum of (a) and (f).
 (h) Result of applying a power-law transformation to (g).
 (Images taken from Gonzalez & Woods, Digital Image Processing, 2002)
Combining Spatial Enhancement Methods (cont…)
 Compare the original and final images.
 (Images taken from Gonzalez & Woods, Digital Image Processing, 2002)
