CSE 440 Digital Image Processing Lab 7

The process of spatial filtering using python

Name: _________________________________

Enrollment #: _________________________________

Class: _________________________________

Objective

The purpose of today’s lab is to introduce you to the process of filtering. This lab spotlights
OpenCV’s built-in Python functions for smoothing and sharpening filters. It also gives a brief
overview of implementing these filters in Python on mathematical grounds. By the end of this
lab, you should be able to use smoothing/sharpening filters and code them on your own.

Submission Requirements

You are expected to complete the assigned tasks within the lab session and show them to
the lab engineer/instructor. Get the lab journal signed by your instructor and submit it by the
end of lab session.

Spatial Filtering
Filtering is the process of applying a transformation matrix h(u, v) to an input intensity f(x,
y) to obtain a transformed intensity g(x, y), while considering the effect of the 4 or 8
neighbours of f(x, y):

g(x, y) = Σu Σv h(u, v) · f(x − u, y − v)

where the sums run over the kernel indices. The transformation matrix h(u, v) is called the
kernel, mask, or filter, and g(x, y) is the resultant intensity obtained after h(u, v) is
convolved with f(x, y).

Kernels are small pixel blocks that slide over the whole image, applying a mathematical
operation to the pixels they currently cover. This operation is called “2D convolution”. It
works in the following manner:

1. Start by picking a kernel size; 3x3 pixels is a common choice. The kernel size changes
the feature-localization property: with a larger kernel, the extracted features are more
likely to be global rather than local. In addition, larger kernels are less computationally
efficient due to the increased number of individual calculations. Larger kernels also tend
to reduce noise better, although they may introduce artifacts into the image. So, finding
a correct and balanced kernel size is quite important when applying a filter to an image.
2. Next, fill the kernel with filter-specific values (for example, a 3x3 mean kernel holds
1/9 in every cell). These values determine the filter’s behaviour; they can be chosen for
many purposes such as blurring, sharpening/unsharpening, or edge detection.
3. Once the filter values are decided, place the kernel over the top-left pixel of the
image, with the centre of the kernel corresponding to that pixel. Each kernel element is
then multiplied by its corresponding image pixel and the results are summed. To preserve
the original brightness, the result should be normalized: divide the sum by the sum of the
kernel’s weights (for a mean kernel, the number of elements).
4. Finally, repeat the previous step across the image from the top-left pixel to the
bottom-right pixel in a row-by-row manner. In this step it is important not to overwrite
the input image, because earlier results would interfere with the following operations.
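The four steps above can be sketched directly in NumPy. Note that, like OpenCV’s filter2D, this computes correlation (the kernel is not flipped), which makes no difference for the symmetric kernels used in this lab; the small test image and kernel here are purely illustrative.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution following the steps above (zero padding at the borders)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    # Step 4: write into a separate output array — never overwrite the input
    out = np.zeros(image.shape, dtype=np.float32)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            # Step 3: multiply the window by the kernel element-wise and sum
            region = padded[y:y + kh, x:x + kw]
            out[y, x] = np.sum(region * kernel)
    return out

# 3x3 mean kernel, already normalized so brightness is preserved
mean_kernel = np.ones((3, 3), np.float32) / 9
img = np.full((5, 5), 90, dtype=np.float32)
print(convolve2d(img, mean_kernel)[2, 2])  # interior of a flat image stays at 90.0
```

At the borders the zero padding darkens the result (e.g. a corner pixel averages only 4 real pixels against 9 kernel weights); OpenCV instead reflects the border pixels by default.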

As the steps above show, the algorithm looks simple, but implementing it manually is rather
time consuming. It is therefore usually better to use OpenCV, as in the example below.

import cv2
import numpy as np

img = cv2.imread("HeliView.jpg")
img = cv2.resize(img, (0, 0), None, .25, .25)

# The Gaussian weights sum to 16, so divide by 16 to preserve brightness
gaussianBlurKernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32) / 16
# Standard sharpen kernel; its weights already sum to 1, so no division is needed
sharpenKernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], np.float32)
meanBlurKernel = np.ones((3, 3), np.float32) / 9

gaussianBlur = cv2.filter2D(src=img, kernel=gaussianBlurKernel, ddepth=-1)
meanBlur = cv2.filter2D(src=img, kernel=meanBlurKernel, ddepth=-1)
sharpen = cv2.filter2D(src=img, kernel=sharpenKernel, ddepth=-1)

horizontalStack = np.concatenate((img, gaussianBlur, meanBlur, sharpen), axis=1)
cv2.imwrite("Output.jpg", horizontalStack)

cv2.imshow("2D Convolution Example", horizontalStack)

cv2.waitKey(0)
cv2.destroyAllWindows()

####################################################################
# The same example in Google Colab, where cv2_imshow replaces cv2.imshow
import cv2
import numpy as np
from google.colab.patches import cv2_imshow

img = cv2.imread("/content/sample_data/images/hill.png")
img = cv2.resize(img, (0, 0), None, .25, .25)

gaussianBlurKernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32) / 16
sharpenKernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], np.float32)
meanBlurKernel = np.ones((3, 3), np.float32) / 9

gaussianBlur = cv2.filter2D(src=img, kernel=gaussianBlurKernel, ddepth=-1)
meanBlur = cv2.filter2D(src=img, kernel=meanBlurKernel, ddepth=-1)
sharpen = cv2.filter2D(src=img, kernel=sharpenKernel, ddepth=-1)

horizontalStack = np.concatenate((img, gaussianBlur, meanBlur, sharpen), axis=1)

cv2_imshow(horizontalStack)

Filters in Python
The filter2D operation convolves an image with a kernel. In Python you perform this operation
with the cv2.filter2D() function. The basic form of its syntax is:
dst = cv2.filter2D(src, ddepth, kernel)
This function accepts the following parameters:
• src − the source (input) image, as a NumPy array.
• ddepth − an integer giving the desired depth of the output image; −1 means the output has
the same depth as the source.
• kernel − the convolution kernel, as a NumPy array.
It returns dst, the filtered (output) image.
Averaging Filtering
OpenCV provides a function, cv2.filter2D(), to convolve a kernel with an image. As an example,
we will try an averaging filter on an image. A 5x5 averaging filter kernel can be defined as
follows:

K = (1/25) × a 5×5 matrix of ones (i.e., every entry of K is 1/25)

Filtering with the above kernel results in the following being performed: for each pixel, a 5x5
window is centered on this pixel, all pixels falling within this window are summed up, and the
result is then divided by 25. This equates to computing the average of the pixel values inside
that window. This operation is performed for all the pixels in the image to produce the output
filtered image.

import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('/content/sample_data/images/cameraman.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; matplotlib expects RGB
kernel = np.ones((5,5),np.float32)/25
dst = cv2.filter2D(img,-1,kernel)
plt.figure(figsize=(10,10))
plt.subplot(121),plt.imshow(img),plt.title('Original')
plt.subplot(122),plt.imshow(dst),plt.title('Averaging')
plt.show()

Using blur Function


import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('/content/sample_data/images/cameraman.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; matplotlib expects RGB
blur = cv2.blur(img,(5,5))
plt.figure(figsize=(10,10))
plt.subplot(121),plt.imshow(img),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(blur),plt.title('Blurred')
plt.xticks([]), plt.yticks([])
plt.show()

Gaussian Filtering
In this approach, instead of a box filter consisting of equal filter coefficients, a Gaussian kernel is
used. It is done with the function, cv2.GaussianBlur(). We should specify the width and height
of the kernel which should be positive and odd. We also should specify the standard deviation
in the X and Y directions, sigmaX and sigmaY respectively. If only sigmaX is specified, sigmaY is
taken as equal to sigmaX. If both are given as zeros, they are calculated from the kernel size.
Gaussian filtering is highly effective in removing Gaussian noise from the image.

If you want, you can create a Gaussian kernel with the function, cv2.getGaussianKernel().
Median Filtering
The function cv2.medianBlur() computes the median of all the pixels under the kernel window and
the central pixel is replaced with this median value. This is highly effective in removing salt-and-
pepper noise. One interesting thing to note is that, in the Gaussian and box filters, the filtered value
for the central element can be a value which may not exist in the original image. However this is not
the case in median filtering, since the central element is always replaced by some pixel value in the
image. This reduces the noise effectively. The kernel size must be a positive odd integer.
In this demo, we add 50% salt-and-pepper noise to our original image and apply a median
filter. Check the result: median = cv2.medianBlur(img, 5)

Exercise 1

Read the image ‘coins.png’. Apply average and median filters of size 5x5 individually and
identify the differences between their results.

Exercise 2
Read the image ‘Lines.gif’ uploaded on your course page. Apply masks to detect horizontal,
vertical, and diagonal lines, and compare the results of the four given line-detection masks.

Exercise 3
Read the image ‘coin’ uploaded on your course page. Choose kernel/mask values so as to obtain
the output image shown in the figure.

Exercise 4

Read the image ‘moon.tif’. Write a function named ‘mylaplacian’ to MANUALLY
code/implement the 2nd-order derivative of the image read above, in order to extract
horizontal and vertical edges collectively. Also, compare your results with the ‘Sobel’
filter and state your findings.
[HINT]: You need to perform filtering with the following masks.

Vertical edges:
g(x, y) = f(x + 1, y) + f(x − 1, y) − 2f(x, y)
Horizontal edges:
g(x, y) = f(x, y + 1) + f(x, y − 1) − 2f(x, y)
