
EE-333 Digital Image Processing

Image Enhancement in Spatial Domain

Dr Tahir Nawaz
Website: https://www.tahirnawaz.com
Email: tahir.nawaz@ceme.nust.edu.pk
Histogram specification/matching
• Histogram equalization does not allow interactive image
enhancement and generates only one result: an
approximation to a uniform histogram

• Sometimes, though, we may need to be able to specify
and achieve particular histogram shapes capable of
highlighting certain gray-level ranges

• This is referred to as histogram specification-based
enhancement
Histogram specification/matching
• Procedure
– Equalize the levels of the original image using:

  s_k = T(r_k) = \sum_{j=0}^{k} \frac{n_j}{n}, \qquad k = 0, 1, \ldots, L-1

– n: total number of pixels
– n_j: number of pixels with gray level r_j
– L: number of discrete gray levels
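
A minimal NumPy sketch of this equalization step (the 3-bit toy image below is an assumption for illustration):

```python
import numpy as np

# Hypothetical 8x8 image with 3-bit gray levels (0..7)
img = np.random.randint(0, 8, size=(8, 8))
L = 8                                          # number of discrete gray levels

n_j = np.bincount(img.ravel(), minlength=L)    # n_j: pixels with gray level r_j
cdf = np.cumsum(n_j) / img.size                # running sum of n_j / n
s_k = np.round(cdf * (L - 1)).astype(int)      # equalized output level for each r_k
equalized = s_k[img]                           # remap every pixel
```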


Histogram specification/matching
• Procedure
– Specify the desired density function and obtain the
  transformation function G(z):

  G(z_k) = \sum_{i=0}^{k} p_z(z_i), \qquad k = 0, 1, \ldots, L-1

– where the new processed version of the original image is aimed
  to be as close as possible to the specified/target pdf, p_z(z_k),
  where z_k refers to the gray levels of the target image
Histogram specification/matching
• Implementation with the help of a numerical example
– Given a histogram of an 8x8 3-bit image

rk 0 1 2 3 4 5 6 7
nk 8 10 10 2 12 16 4 2

– Target histogram

rk 0 1 2 3 4 5 6 7
nk 0 0 0 0 20 20 16 8
Histogram specification/matching
• Implementation with the help of a numerical example
– Histogram equalization of the given image

rk   nk   pdf     cdf x (L-1)   round(cdf x (L-1))
0    8    0.125   0.125 x 7     1
1    10   0.156   0.281 x 7     2
2    10   0.156   0.437 x 7     3
3    2    0.031   0.468 x 7     3
4    12   0.188   0.656 x 7     5
5    16   0.250   0.906 x 7     6
6    4    0.062   0.968 x 7     7
7    2    0.031   1.000 x 7     7
Histogram specification/matching
• Implementation with the help of a numerical example
– Histogram equalization of the target image

rk   nk   pdf     cdf x (L-1)   round(cdf x (L-1))
0    0    0       0.000 x 7     0
1    0    0       0.000 x 7     0
2    0    0       0.000 x 7     0
3    0    0       0.000 x 7     0
4    20   0.312   0.312 x 7     2
5    20   0.312   0.624 x 7     4
6    16   0.250   0.874 x 7     6
7    8    0.125   1.000 x 7     7
Histogram specification/matching
• Implementation with the help of a numerical example
– Mapping

Gray level (original)   Equalized h (original)   Equalized h (target)   Mapped gray level
0                       1                        0                      4
1                       2                        0                      4
2                       3                        0                      5
3                       3                        0                      5
4                       5                        2                      6
5                       6                        4                      6
6                       7                        6                      7
7                       7                        7                      7
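
The numbers in the three tables above can be reproduced with a short NumPy sketch; the mapping rule used here (smallest z with G(z) >= s) is one common convention for the final step:

```python
import numpy as np

L = 8
n_orig   = np.array([8, 10, 10, 2, 12, 16, 4, 2])   # histogram of the 8x8 image
n_target = np.array([0, 0, 0, 0, 20, 20, 16, 8])    # target histogram

def equalized_levels(hist, L):
    """Rounded equalized output level for each input gray level."""
    cdf = np.cumsum(hist) / hist.sum()
    return np.round(cdf * (L - 1)).astype(int)

s = equalized_levels(n_orig, L)     # [1, 2, 3, 3, 5, 6, 7, 7]
G = equalized_levels(n_target, L)   # [0, 0, 0, 0, 2, 4, 6, 7]

# For each s_k, pick the smallest z such that G(z) >= s_k
mapping = np.array([int(np.argmax(G >= sk)) for sk in s])
print(mapping)                      # [4, 4, 5, 5, 6, 6, 7, 7] -- matches the table
```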
Histogram specification/matching
• Example
Histogram specification/matching
• Example
Histogram specification/matching
• Example
Local histogram processing
• Define a neighborhood and move its center from pixel to
pixel

• At each location, the histogram of the points in the
neighborhood is computed, and either a histogram equalization
or a histogram specification transformation function is
obtained

• Map the intensity of the pixel centered in the
neighborhood

• Move to the next location and repeat the procedure
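
A naive (unoptimized) sketch of the procedure above, assuming an 8-bit grayscale image and an odd window size:

```python
import numpy as np

def local_hist_equalize(img, win=3):
    """Equalize each win x win neighborhood and keep only the remapped
    centre pixel. img: 2-D uint8 array."""
    r = win // 2
    padded = np.pad(img, r, mode='edge')    # replicate border pixels
    out = np.empty_like(img)
    L = 256
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            block = padded[y:y + win, x:x + win]
            cdf = np.cumsum(np.bincount(block.ravel(), minlength=L)) / block.size
            out[y, x] = np.round(cdf[img[y, x]] * (L - 1))
    return out
```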


Local histogram processing
Spatial filtering
• The output intensity value at (x,y) depends not only on
the input intensity value at (x,y) but also on the specified
number of neighboring intensity values around (x,y)

• Spatial masks (also called windows, filters, kernels, or
templates) are used and convolved over the entire image
for local enhancement (spatial filtering)

• The size of a mask determines the number of
neighboring pixels that influence the output value at (x,y)

• The values (coefficients) of the mask determine the
nature and properties of the enhancing technique
Spatial filtering
• Given a 3×3 mask with coefficients w1, w2, …, w9
• The mask covers the pixels with gray levels z1, z2, …, z9, and its response is

  z = w_1 z_1 + w_2 z_2 + \cdots + w_9 z_9

• z gives the output intensity value for the processed
image (to be stored in a new array) at the location of z5
in the input image
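
A direct (loop-based) sketch of this response, assuming a 3×3 mask and leaving the border pixels untouched for now (border handling is discussed later):

```python
import numpy as np

def apply_mask(img, w):
    """Correlate a 3x3 mask w with img; border pixels are copied unchanged."""
    out = img.astype(float).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            region = img[y - 1:y + 2, x - 1:x + 2]   # z1..z9 under the mask
            out[y, x] = np.sum(w * region)           # z = w1*z1 + ... + w9*z9
    return out
```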
Spatial filtering
Spatial filtering
Spatial filtering
Spatial filtering
Spatial filtering
Spatial filtering
Spatial filtering
Spatial filtering
Spatial filtering
• Mask operation near the image border: Problem arises
when part of the mask is located outside the image plane
Spatial filtering
• How to address the problem?

– Discard the problem pixels (e.g., for a 3x3 mask with a 512x512
input image, the output image becomes 510x510)

– Zero padding: Expand the input image by padding zeros
(512x512 original image, 514x514 padded image, 512x512
output). Zero padding may at times be undesirable as it creates
artificial lines or edges on the border

– Pixel replication: We normally use the gray levels of the border
pixels to fill up the expanded region (for a 3x3 mask). For larger
masks, a border region equal to half of the mask size is mirrored
onto the expanded region
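
The three strategies map directly onto np.pad modes; a small sketch (the toy image and 3×3 mask size are assumptions):

```python
import numpy as np

img = np.arange(25).reshape(5, 5)   # toy 5x5 image
r = 1                                # border width needed by a 3x3 mask

zero_padded      = np.pad(img, r, mode='constant', constant_values=0)  # zero padding
replicate_padded = np.pad(img, r, mode='edge')      # repeat the border pixels
mirrored_padded  = np.pad(img, r, mode='reflect')   # mirror a border region (larger masks)
```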
Spatial filtering
• Mask operation near the border in the form of pixel
replication
Smoothing spatial filters
• Linear filtering
Smoothing spatial filters
• Linear filtering
– Averaging is a linear filtering approach generally used for
blurring/noise reduction

– Blurring is usually used in pre-processing steps, e.g., to
remove (undesired) fine details in an image prior to object
extraction, or to bridge small gaps in lines or curves

– Equivalent to low-pass spatial filtering in the frequency domain,
because fine (high-frequency) details are removed based on
neighborhood averaging (averaging filters) – more on this in the
next lecture!
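
A minimal sketch of a 3×3 averaging (box) filter using scipy.ndimage; the toy image is an assumption:

```python
import numpy as np
from scipy import ndimage

img = np.random.randint(0, 256, size=(64, 64)).astype(float)   # toy image
kernel = np.ones((3, 3)) / 9.0        # all nine coefficients equal to 1/9
smoothed = ndimage.convolve(img, kernel, mode='nearest')        # border replicated
```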
Smoothing spatial filters
• Linear filtering
Smoothing spatial filters
• Linear filtering
Smoothing spatial filters
• Linear filtering
Smoothing spatial filters
• Linear filtering
Smoothing spatial filters
• Order-statistic filtering
– Non-linear filtering

– Based on ordering (ranking) the pixels contained in
the filter mask

– Replacing the value of the center pixel with the value
determined by the ranking result

– E.g., median filter, max filter, min filter

Smoothing spatial filters
• Order-statistic (non-linear) filtering
– Use of median filtering helps in removing salt-and-pepper
noise
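
A sketch of median filtering on synthetic salt-and-pepper noise (the noise fraction and toy image are assumptions):

```python
import numpy as np
from scipy import ndimage

img = np.full((64, 64), 128, dtype=np.uint8)     # flat toy image
u = np.random.rand(*img.shape)
noisy = img.copy()
noisy[u < 0.05] = 0                               # pepper
noisy[u > 0.95] = 255                             # salt

denoised = ndimage.median_filter(noisy, size=3)   # 3x3 median filter
```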
Sharpening spatial filters
• Previously we have looked at smoothing filters which
remove fine detail

• Sharpening spatial filters seek to highlight fine detail
– Remove blurring from images
– Highlight edges

• Sharpening filters are based on the concept of spatial
differentiation
Sharpening spatial filters
• Foundation – Spatial Differentiation
– Let's consider the following example
Sharpening spatial filters
• Foundation – Spatial Differentiation
– The first-order derivative of a one-dimensional function f(x) is the
difference

  \frac{\partial f}{\partial x} = f(x+1) - f(x)

– It's just the difference between subsequent intensity values and
measures the rate of change of the function
Sharpening spatial filters
• Foundation – Spatial Differentiation
Sharpening spatial filters
• Foundation – Spatial Differentiation
– The second-order derivative of f(x) is given as follows:

  \frac{\partial^2 f}{\partial x^2} = f(x+1) + f(x-1) - 2f(x)

– It simply takes into account the values both before and after the
current value
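
Both difference formulas are easy to check on a 1-D intensity profile; the values below are made up for illustration:

```python
import numpy as np

f = np.array([6, 6, 6, 5, 4, 3, 2, 1, 1, 1, 6, 6], dtype=float)  # toy scan line

first  = f[1:] - f[:-1]                   # f(x+1) - f(x)
second = f[2:] + f[:-2] - 2 * f[1:-1]     # f(x+1) + f(x-1) - 2 f(x)
```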
Sharpening spatial filters
• Foundation – Spatial Differentiation
Sharpening spatial filters
• The 2nd derivative is, at times, more useful for image
enhancement than the 1st derivative because of its stronger
response to fine detail

• We will come back to the 1st order derivative later on

• The first sharpening filter we will look at is the Laplacian


Sharpening spatial filters
• Laplacian Filter
– The Laplacian for a function (image) f(x,y) is defined as follows:

  \nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}

  \frac{\partial^2 f}{\partial x^2} = f(x+1, y) + f(x-1, y) - 2f(x, y)

  \frac{\partial^2 f}{\partial y^2} = f(x, y+1) + f(x, y-1) - 2f(x, y)
Sharpening spatial filters
• Laplacian Filter
– The implementation of the 2-D Laplacian is obtained by summing
the two components:

  \nabla^2 f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)

– Can we implement the above using a mask?
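
Yes: the sum above corresponds to a 3×3 mask with 1 at the four horizontal/vertical neighbours and -4 at the centre; a sketch of applying it (the toy image is an assumption):

```python
import numpy as np
from scipy import ndimage

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

img = np.random.randint(0, 256, size=(64, 64)).astype(float)   # toy image
lap = ndimage.convolve(img, laplacian, mode='nearest')          # Laplacian of the image
```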


Sharpening spatial filters
• Laplacian Filter
Sharpening spatial filters
• Laplacian Filter
– Applying the Laplacian leads to highlighting edges and other
discontinuities in the images

Original image (left); detected edges after applying the Laplacian (right)
Sharpening spatial filters
• Laplacian Filter
– The result of Laplacian filtering is not itself an enhanced image

– To obtain the final enhanced image g(x,y), we incorporate the
Laplacian-filtered image into the original image
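
With the 4-neighbour Laplacian above (negative centre coefficient), one common form of this combination is:

  g(x, y) = f(x, y) - \nabla^2 f(x, y)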
Sharpening spatial filters
• Laplacian Filter
– In the final sharpened image, edges and fine detail are much
more obvious
Sharpening spatial filters
• Laplacian Filter
– Original image (left) vs Sharpened image (right)
Sharpening spatial filters
• Laplacian Filter
– The entire enhancement can be combined into a single filtering
operation
Sharpening spatial filters
• Laplacian Filter
– This gives us a new filter which does the whole job for us in one
step
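
Folding g(x, y) = f(x, y) - \nabla^2 f(x, y) into one mask gives the composite kernel below; a sketch, assuming the 4-neighbour Laplacian:

```python
import numpy as np
from scipy import ndimage

# identity kernel minus the 4-neighbour Laplacian, collected into one mask
composite = np.array([[ 0, -1,  0],
                      [-1,  5, -1],
                      [ 0, -1,  0]], dtype=float)

img = np.random.randint(0, 256, size=(64, 64)).astype(float)   # toy image
sharpened = ndimage.convolve(img, composite, mode='nearest')    # one-step sharpening
```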
Unsharp masking
• Sharpen images by subtracting an unsharp (smoothed)
version of an image from the original image, a technique
used historically in the printing and publishing industry

• Steps
– Blur the original image
– Subtract the blurred image from the original (this difference is the mask)
– Add the mask to the original
Unsharp masking
• Let \bar{f}(x, y) denote the blurred image. The unsharp mask is

  g_{mask}(x, y) = f(x, y) - \bar{f}(x, y)

  Then add a weighted portion of the mask back to the original:

  g(x, y) = f(x, y) + k \cdot g_{mask}(x, y), \qquad k \ge 0

  When k > 1, the process is referred to as highboost filtering.
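
A sketch of the three steps with a Gaussian blur from scipy.ndimage; the blur width sigma and the weight k are assumptions:

```python
import numpy as np
from scipy import ndimage

img = np.random.randint(0, 256, size=(64, 64)).astype(float)   # toy image

blurred = ndimage.gaussian_filter(img, sigma=2)   # step 1: blur the original
g_mask = img - blurred                            # step 2: unsharp mask
k = 1.5                                           # k > 1 gives highboost filtering
sharpened = img + k * g_mask                      # step 3: add the weighted mask back
```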


Unsharp masking
• Example
First-order derivatives (Gradient) operator
• The gradient of a function f(x,y) is defined as

  \nabla f = \begin{bmatrix} G_x \\ G_y \end{bmatrix}
           = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}

• where Gx and Gy are the gradients along the x and y
directions
First-order derivatives (Gradient) operator
• Gradient operators
– The most common differentiation operator is the gradient
vector

– Magnitude: \nabla f = \sqrt{G_x^2 + G_y^2} \approx |G_x| + |G_y|

– Direction: \theta(x, y) = \tan^{-1}(G_y / G_x)
First-order derivatives (Gradient) operator
• Gradient operators

– Roberts cross gradient operator

– Sobel operator: one mask extracts horizontal edges, the other
extracts vertical edges
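
A sketch of the Sobel gradient and its magnitude; the kernels below are the standard 3×3 Sobel masks (whether a mask is labelled "horizontal" or "vertical" depends on the edge/derivative convention used):

```python
import numpy as np
from scipy import ndimage

img = np.random.randint(0, 256, size=(64, 64)).astype(float)   # toy image

sobel_y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)  # derivative along y (responds to horizontal edges)
sobel_x = sobel_y.T                               # derivative along x (responds to vertical edges)

Gx = ndimage.convolve(img, sobel_x, mode='nearest')
Gy = ndimage.convolve(img, sobel_y, mode='nearest')

magnitude = np.hypot(Gx, Gy)           # sqrt(Gx^2 + Gy^2)
approx    = np.abs(Gx) + np.abs(Gy)    # common cheaper approximation
```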
First-order derivatives (Gradient) operator
• Gradient operators

Original image; vertical edge detection; horizontal edge detection
Combining spatial enhancement methods
• Successful image enhancement is typically not achieved
using a single operation

• Rather, we combine a range of techniques in order to
achieve a final result

• Let's see an example that will focus on enhancing a
bone scan by combining different spatial enhancement
methods
Combining spatial enhancement methods
Combining spatial enhancement methods
Acknowledgement/References
• “Digital Image Processing”, Rafael C. Gonzalez & Richard E. Woods,
Addison-Wesley, 2002
• Statistical Pattern Recognition: A Review – A.K. Jain et al., PAMI (22)
2000
• Pattern Recognition and Analysis Course – A.K. Jain, MSU
• “Pattern Classification” by Duda et al., John Wiley & Sons.
• “Machine Vision: Automated Visual Inspection and Robot Vision”,
David Vernon, Prentice Hall, 1991
• www.eu.aibo.com/
• Advances in Human Computer Interaction, Shane Pinder, InTech,
Austria, October 2008
• https://www.cs.nmt.edu/~ip/lectures.html
• https://web.stanford.edu/class/ee368/handouts.html
