
Digital Image Processing

Experiment 11 & 12

Instructor: Engr. Firdous Kanwal

Objective
To get familiar with histogram matching, local histogram processing and spatial filtering.

1 Histogram Matching

1.1 Background

Histogram equalization produces a transformation function that is adaptive in the sense
that it is based on the histogram of the given image. It enhances the image by spreading the
intensities of the input image over a wider range of the intensity scale. We will see below that
this does not always lead to a successful result. In some applications we want to specify the
shape of the histogram that the output image should have. The method used to generate an
image with a specified histogram is called histogram matching.
Histogram matching can be done using histogram equalisation. Suppose r is the intensity
of the input image and z is the intensity of the output image. Let probability density functions
of the input and output image be pr (r) and pz (z). We know for histogram equalization

s = T(r) = ∫₀^r p_r(w) dw

where s has uniform probability density function. Let us define a variable z with the property


H(z) = ∫₀^z p_z(w) dw = s

where z is the intensity of the histogram matched image and pz (z) is its histogram. From the
preceding two equations it follows that

z = H⁻¹(s) = H⁻¹[T(r)]

So we can use the preceding equation to find the transformed level z. The toolbox uses the
following syntax to implement histogram matching

g = histeq(f, hspec);

where f is the input image, hspec is the specified histogram (a row vector whose elements give
the desired counts for equally spaced intensity bins) and g is the output image whose histogram
approximates the specified histogram, hspec.
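The mapping z = H⁻¹(T(r)) can also be worked through numerically. The following pure-Python sketch (illustrative only — the helper names `cdf` and `match_histogram` are made up for this demo, and this is not how the toolbox implements histeq) composes the two discrete CDFs on a tiny 4-level example:

```python
def cdf(hist):
    """Cumulative distribution of a histogram, normalised to [0, 1]."""
    total = sum(hist)
    acc, out = 0, []
    for h in hist:
        acc += h
        out.append(acc / total)
    return out

def match_histogram(pixels, src_hist, spec_hist):
    """Map each input level r to z = H^-1(T(r)) using discrete CDFs."""
    T = cdf(src_hist)   # equalisation transform of the input
    H = cdf(spec_hist)  # equalisation transform of the specified histogram
    # invert H: for each s = T[r], take the smallest z with H[z] >= s
    lut = [next(z for z, hz in enumerate(H) if hz >= s) for s in T]
    return [lut[p] for p in pixels]

# input concentrated in the dark levels; specified histogram favours bright levels
pixels = [0, 0, 0, 1, 1, 2, 3]
src_hist = [3, 2, 1, 1]    # histogram of `pixels`
spec_hist = [1, 1, 2, 3]   # desired shape
print(match_histogram(pixels, src_hist, spec_hist))
```

Note how the input levels, concentrated at the dark end, are pushed toward the bright levels that the specified histogram favours.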
Read the image moon.tif into a variable f. Now create a histogram equalised image g
from f (refer to Experiment 5 for histogram equalisation). Show both the original and the
histogram equalised image in the same figure. The figure will show that histogram equalisation
did not produce a very good enhancement in this case. The reason can be seen by studying the
histograms of the two images. Now plot image f, its histogram, image g and its histogram, all
in one figure. You will see that most of the intensity levels are shifted to the upper half of the
grey scale, giving the image a washed-out appearance. The cause of this shift is the large
concentration of dark intensities in the original image.
The solution to this problem is histogram matching, with a desired histogram that has a
lower concentration of intensity levels at the dark end of the grey scale while maintaining the
general shape of the original histogram. Unfortunately, although histeq accepts a specified
histogram, there is no straightforward way in MATLAB to construct a suitable hspec: the user
has to write his or her own function files to build the desired histogram, which is beyond the
scope of this undergraduate course. There is, however, another solution to this problem: local
histogram processing.

2 Local Histogram Processing
Local histogram processing can be used when we want to enhance the contrast of small
areas of an image. We saw in the previous section that the global histogram method failed
to produce good results when enhancement was needed in small areas. The syntax for local
histogram processing in MATLAB is

g = adapthisteq(f, param1, val1, param2, val2, ...)

Now apply the following command to the image moon.tif and plot both the original and the
modified image in the same figure.

>> g = adapthisteq(f);

Do you see the improvement in the new image? Now get another image by using the following
command and plot all three images in the same figure.

>> g = adapthisteq(f,'NumTiles',[25 25]);


Now get another image by using the following command and plot all four images in the same
figure.

>> g = adapthisteq(f,'NumTiles',[25 25],'ClipLimit',0.02);


You can see how the result changes as we tune the parameters: NumTiles controls the size of
the local regions and ClipLimit limits how strongly the contrast is amplified. You can explore
the remaining parameters by typing help adapthisteq at the command prompt.
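adapthisteq operates on small tiles of the image rather than on the whole histogram. The following pure-Python sketch (illustrative only; the real algorithm, CLAHE, additionally clips each tile's histogram and bilinearly interpolates between neighbouring tile mappings, and the helper names here are made up) simply equalises each tile independently on a toy 4x4, 8-level image:

```python
def equalise(tile, levels=8):
    """Plain histogram equalisation of one tile (a list of rows)."""
    flat = [p for row in tile for p in row]
    hist = [flat.count(v) for v in range(levels)]
    total, acc, cdf = len(flat), 0, []
    for h in hist:
        acc += h
        cdf.append(acc / total)
    # s = round((L - 1) * T(r)): the discrete equalisation transform
    return [[round((levels - 1) * cdf[p]) for p in row] for row in tile]

def local_equalise(img, tile_size, levels=8):
    """Equalise each tile_size x tile_size block of a square image independently."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for i in range(0, n, tile_size):
        for j in range(0, n, tile_size):
            tile = [row[j:j + tile_size] for row in img[i:i + tile_size]]
            for di, row in enumerate(equalise(tile, levels)):
                for dj, v in enumerate(row):
                    out[i + di][j + dj] = v
    return out

# dark tiles (levels 0-1) and bright tiles (levels 4-5) in one image
img = [[0, 1, 4, 5],
       [1, 0, 5, 4],
       [4, 5, 0, 1],
       [5, 4, 1, 0]]
print(local_equalise(img, 2))
```

Both the dark tiles and the bright tiles are stretched to the full intensity range, which is exactly the local contrast enhancement a single global transform cannot provide.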

3 Spatial Filtering
Spatial filtering consists of the following steps: 1) defining a centre point (x, y); 2) performing
an operation that involves only the pixels in a predefined neighbourhood about that point;
3) letting the result of that operation be the response of the process at that point; 4) repeating
the process for every point in the image.
The process of moving the centre creates new neighbourhoods, one for each pixel in the
input image. The two terms used for this process are neighbourhood operation and spatial
filtering.


3.1 Linear Filtering

If the computations performed on the neighbourhood of a pixel are linear, the operation is
called linear filtering. The linear operations of interest in this experiment are limited to
multiplying each pixel in the neighbourhood by a weight and summing the results to obtain the
response at point (x, y). The weights are arranged in the form of a matrix, also called a filter,
mask, kernel, template or window. The filtering process consists of simply moving the centre
of the filter mask w from point to point in the image f(x, y). At each point (x, y), the response
of the filter is the sum of the products of the filter weights and the corresponding
neighbourhood pixels in the area spanned by the filter mask.
Two closely related concepts arise in linear filtering: correlation and convolution.
Correlation is the process of moving the filter mask over the image and computing the sum of
products at each location, as described above. The mechanics of convolution are exactly the
same, except that the filter is first rotated by 180 degrees.
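The relationship between the two operations can be checked with a small pure-Python sketch (illustrative only; the function names are made up for this demo and are not toolbox functions). Correlating a unit impulse with a mask produces a 180-degree-rotated copy of the mask, while convolving reproduces the mask itself:

```python
def correlate2d(img, w):
    """'Valid' correlation: sum of products of w with each fully contained neighbourhood."""
    m, n = len(w), len(w[0])
    H, W = len(img), len(img[0])
    return [[sum(w[a][b] * img[i + a][j + b]
                 for a in range(m) for b in range(n))
             for j in range(W - n + 1)] for i in range(H - m + 1)]

def rotate180(w):
    """Rotate the mask by 180 degrees (reverse rows and columns)."""
    return [list(reversed(row)) for row in reversed(w)]

def convolve2d(img, w):
    """Convolution = correlation with the rotated mask."""
    return correlate2d(img, rotate180(w))

# a 5x5 unit impulse and a 3x3 mask with values 1..9
impulse = [[0] * 5 for _ in range(5)]
impulse[2][2] = 1
w = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(correlate2d(impulse, w))  # rotated copy of w
print(convolve2d(impulse, w))   # w itself
```

This is the standard sanity check: correlation "stamps" a rotated copy of the mask at the impulse location, while convolution stamps the mask unchanged.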
The toolbox implements the linear filtering using the following function

g = imfilter(f, w, filtering_mode, boundary_options, size_options);

where f is the input image, w is the filter mask and g is the filtered result. The filtering mode
specifies whether to perform correlation ('corr', the default) or convolution ('conv'). The
boundary options specify how to pad the image boundary during filtering ('replicate',
'circular', 'symmetric', or a constant value P), and the size options specify whether to return
the full filtered (padded) result ('full') or an output of the same size as the input, with the
padded values truncated ('same', the default).
Q1 Make a matrix of size 5x5 such that only the centre pixel is 1 and all other values are zero.
Now make a filter of size 3x3 with values from 1 to 9 (as discussed during the lecture). Use the
MATLAB filtering function to perform both convolution and correlation. Also explore the size
options parameter. Do the results match the theory?
A popular application of linear filtering is blurring, which can be achieved using an averaging
filter. Averaging can be achieved from correlation: after taking the sum of products, divide the
result by the number of elements of the mask. It can also be achieved by using a mask in which
each element equals 1/(number of elements of the mask).
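The equivalence of the two approaches can be sketched in pure Python (illustrative only; `correlate_valid` is a made-up helper, not a toolbox function):

```python
def correlate_valid(img, w):
    """Sum of products of mask w with each fully contained neighbourhood."""
    k = len(w)
    H, W = len(img), len(img[0])
    return [[sum(w[a][b] * img[i + a][j + b]
                 for a in range(k) for b in range(k))
             for j in range(W - k + 1)] for i in range(H - k + 1)]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]

# approach 1: correlate with an all-ones mask, then divide by the element count
ones = [[1] * 3 for _ in range(3)]
a = [[v / 9 for v in row] for row in correlate_valid(img, ones)]

# approach 2: build a mask whose entries are all 1/9 and correlate directly
avg_mask = [[1 / 9] * 3 for _ in range(3)]
b = correlate_valid(img, avg_mask)

print(a, b)  # both give (up to floating-point rounding) the 3x3 neighbourhood mean
```

Both results equal the mean of the nine neighbourhood pixels; the second approach is exactly the averaging mask described above.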
Q2 Create an image f of size 512x512 such that, if the image is divided into four squares, the
first and last squares are black while the second and third squares are white (Hint: use the
MATLAB built-in functions zeros and ones to create the image). Now create an averaging mask
w of size 31x31 (Hint: create a matrix of ones of size 31x31 and divide it by 31²). Apply the
MATLAB filtering operation to the image with different values of the boundary options. Plot
all the images in the same figure. What difference do you see with different values of the
boundary option?
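The effect of the boundary option can be previewed with a pure-Python sketch (illustrative only; here 'zero' plays the role of imfilter's default constant padding and 'replicate' of its 'replicate' option, and both helper names are made up):

```python
def pad(img, p, mode):
    """Pad img by p pixels on each side: 'zero' or 'replicate' boundary."""
    H, W = len(img), len(img[0])
    def px(i, j):
        if mode == 'replicate':
            # clamp the index to the nearest valid pixel
            return img[min(max(i, 0), H - 1)][min(max(j, 0), W - 1)]
        return img[i][j] if 0 <= i < H and 0 <= j < W else 0
    return [[px(i, j) for j in range(-p, W + p)] for i in range(-p, H + p)]

def average_same(img, k, mode):
    """'same'-size k x k averaging with the chosen boundary option."""
    p = k // 2
    big = pad(img, p, mode)
    H, W = len(img), len(img[0])
    return [[sum(big[i + a][j + b] for a in range(k) for b in range(k)) / k**2
             for j in range(W)] for i in range(H)]

# a small all-white image: zero padding darkens the border, replicate does not
img = [[1.0] * 4 for _ in range(4)]
print(average_same(img, 3, 'zero')[0])       # border row is dimmed
print(average_same(img, 3, 'replicate')[0])  # border row stays 1.0
```

With zero padding, the averaging window overlaps the black padding near the border, darkening it; replicate padding extends the edge pixels outward and leaves the white image unchanged. This is the difference you should observe in Q2.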
Q3 Now read the images moon.tif and cameraman.tif. Make an averaging filter of size 11x11.
Apply this filter to both images and show all four images in the same figure. Are the new
images blurred?

