
192911_Mohammed Yahya

MT390 STCS (Digital Image Processing)

Tutor-Marked Assignment, Summer 2021

Cut-Off Date: TBA, Total Marks: 100

Question 1 (25 marks)

Question 2 (25 marks)

Question 3 (25 marks)

Question 4 (25 marks)

Plagiarism Warning:

As per AOU rules and regulations, all students are required to submit their own TMA work and
avoid plagiarism. The AOU has implemented sophisticated techniques for plagiarism detection.
You must provide all references in case you use and quote another person's work in your TMA.
You will be penalized for any act of plagiarism as per the AOU's rules and regulations.

Declaration of No Plagiarism by Student (to be signed and submitted by student with TMA
work):

I hereby declare that this submitted TMA work is a result of my own efforts and I have not
plagiarized any other person's work. I have provided all references of information that I have
used and quoted in my TMA work.

Name of Student: Mohammed Yahya

Signature: 192911

Date: 17/8/2021


The main focus of this TMA is to enable students to appreciate both the
theoretical/general and the applied nature of Digital Image Processing
(DIP) techniques.

Part 1: Theoretical/General Concepts (50 Marks)

Question 1: (25 Marks)


The theme of this question is usage of Satellite Imagery for Remote Sensing and
Monitoring of Earth’s resources. You should visit the Earth observatory website at
NASA and read the article “Blazes Rage in British Columbia” and answer the
following questions related to the article:

a) Briefly describe the VIIRS technology. 6 marks


VIIRS (Visible Infrared Imaging Radiometer Suite) is an instrument that collects
visible and infrared imagery and global observations of land, atmosphere,
cryosphere, and oceans. Currently flying on the Suomi NPP satellite mission, it
generates many critical environmental products covering snow and ice cover, clouds, fog,
aerosols, fire, smoke plumes, dust, vegetation health, phytoplankton abundance, and
chlorophyll. It also flies on the JPSS-1 (NOAA-20) mission and is planned for JPSS-2. It has
a mass of 280 kilograms and an average power draw of 319 watts. The instrument contractor
is the Raytheon Company of El Segundo, California. Its imagery bands have a nadir resolution
of 375 m (750 m for the moderate-resolution bands).

b) Briefly describe any 3 benefits of the VIIRS technology. 9 marks


Benefits of VIIRS:

1. It generates products for the operational weather community, improving
weather, flood, and storm forecasting, which helps protect life and property.

2. It helps the agricultural industry, which benefits from vegetation-index
monitoring along with weather warnings.

3. It helps commercial fishing and maritime industries by making fishery
management more efficient.

4. It produces higher-resolution and more accurate measurements of sea surface
temperature, as well as an operational capability for ocean-color observations
and products.

c) Describe in your own words the benefits of using satellite imagery for
monitoring large-scale fires or disasters. 10 marks
Benefits of satellite imagery:

VIIRS provides global coverage twice a day at 750 m resolution across its entire
scan. This is a substantial improvement for ocean ecology and carbon research, as
well as for establishing accurate estimates of sea surface temperature, which are
essential for predicting hurricanes and other types of severe weather.

It produces higher-resolution and more accurate measurements of sea surface
temperature. Ocean color is an indicator of water quality, supporting a wide range
of decisions from fishing to tourism.

The VIIRS Day/Night Band also provides nighttime imagery, which is essential
for Alaska during the winter months.

Question 2: (25 Marks)


a) List at least 10 free Satellite Imagery sources. The list must have at least one
source from Europe and one from Asia. 10 marks

1. USGS Earth Explorer - U.S.
2. LandViewer
3. Copernicus Open Access Hub - Europe (ESA)
4. Sentinel Hub
5. NASA Earthdata Search
6. INPE Image Catalog
7. Google Earth
8. Earth on AWS
9. Bhuvan - Asia (India)
10. ISRO's Geoportal - Asia (India)

b) Obtain at least two free Satellite images from the above sources and include
them in your answer. 5 marks

The first image is from USGS Earth Explorer.

The second image is from Sentinel Hub.

c) Briefly describe the importance of open-source Satellite Imagery sources
compared to commercial Satellite companies' sources. 5 marks

Satellite imagery solves many commercial problems and data-asymmetry
challenges. Agriculture, geological and hydrological research, forestry,
environmental protection, land planning, intelligence, and the military are
among the fields that profit the most. In the past, Earth Observation (EO)
was mostly available to government-hired analysts interested in obtaining
alternative data on what was going on across the world. Some sources offer
free live satellite photos, while others offer historical data; some portals
collect high-resolution satellite imagery for scientific research, while
others are suitable for amateur use.

Open-source satellite images are used in flood monitoring. During the last three to
four decades, short-sighted consumption of land and water resources, together with
fast urbanization, has turned the environment into a house of cards, where any minor
alteration can have a butterfly effect on the environmental equilibrium. Droughts and
floods occur in the Indian subcontinent on a yearly basis; in India, about one-eighth of
the total geographical area is prone to flooding. Natural disasters result in the loss of
life and property, and this is deemed "normal." Every year, flooding affects over a
million people in India's north-eastern states between July and August. It is an annual
event for the people of Assam, and the death toll and agony are generally overlooked.

Open-source imagery is available at the international level, with an ever-growing
amount of information, while commercial satellite imagery offers much more detailed
information than is currently available from the Landsat and SPOT systems.

The disadvantage of commercial imagery is that data acquisition costs money; open-source
imagery, on the other hand, is available to everyone around the globe with a keen
interest in the data. Open-source imagery also provides seamless access and the ability
to process spatiotemporal image sequences at a pronounced scale.

Part 2: Matlab/Python Part (50 Marks): For this part, you must insert your
Matlab or Python code (either screenshots of the code or the actual code) inside
your MS Word answer file. You must also submit all figures and relevant images
as part of your work in the same file. You must submit your work as only one
MS Word file.

Question 3: (25 marks)


Consider the image given to you with the TMA, named 'Q3Mystery.jpg'.

Use at least three image processing techniques that you have studied in this course
to extract or reveal the original image from the mystery image. Display your results


in one figure showing all the 3 results. Which technique has performed better and
why?

import matplotlib.pyplot as plt
import cv2
import numpy as np

def convolve2D(image, kernel, padding=0, strides=1):
    # Flip the kernel so this is a true convolution rather than a cross-correlation
    kernel = np.flipud(np.fliplr(kernel))

    # Gather shapes of kernel, image, and padding
    xKernShape = kernel.shape[0]
    yKernShape = kernel.shape[1]
    xImgShape = image.shape[0]
    yImgShape = image.shape[1]

    # Shape of the output convolution
    xOutput = int(((xImgShape - xKernShape + 2 * padding) / strides) + 1)
    yOutput = int(((yImgShape - yKernShape + 2 * padding) / strides) + 1)
    output = np.zeros((xOutput, yOutput))

    # Apply equal padding to all sides
    if padding != 0:
        imagePadded = np.zeros((image.shape[0] + padding * 2,
                                image.shape[1] + padding * 2))
        imagePadded[padding:-padding, padding:-padding] = image
    else:
        imagePadded = image

    # Iterate through the image
    for y in range(image.shape[1]):
        # Exit the convolution once the kernel runs out of columns
        if y > image.shape[1] - yKernShape:
            break
        # Only convolve if y has moved by the specified strides
        if y % strides == 0:
            for x in range(image.shape[0]):
                # Go to the next column once the kernel is out of bounds
                if x > image.shape[0] - xKernShape:
                    break
                # Only convolve if x has moved by the specified strides
                if x % strides == 0:
                    output[x, y] = (kernel * imagePadded[x: x + xKernShape,
                                                         y: y + yKernShape]).sum()

    return output

def processImage(path):
    # Load the image and convert it to grayscale
    img = cv2.imread(path)
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

if __name__ == '__main__':
    # Grayscale image
    image = processImage('Q3Mystery.jpg')

    # Smoothing (low-pass) kernels
    filter_1 = (1/9) * np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]])  # averaging low-pass filter
    filter_2 = (1/8) * np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])  # circular low-pass filter
    filter_3 = (1/6) * np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]])  # Gaussian-like low-pass filter

    # Convolve with each kernel
    image_1 = convolve2D(image, filter_1, padding=2)
    image_2 = convolve2D(image, filter_2, padding=2)
    image_3 = convolve2D(image, filter_3, padding=2)

    # Display all results in one figure
    plt.subplot(411), plt.imshow(image, cmap="gray"), plt.title('original')
    plt.subplot(412), plt.imshow(image_1, cmap="gray"), plt.title('averaging')
    plt.subplot(413), plt.imshow(image_2, cmap="gray"), plt.title('circular')
    plt.subplot(414), plt.imshow(image_3, cmap="gray"), plt.title('gaussian')
    plt.tight_layout()
    plt.show()
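As a sanity check on the manual convolution above, the averaging kernel can be exercised on a tiny synthetic image with plain NumPy. The 5x5 intensity ramp below is a made-up test input, not part of the TMA data; on a linear ramp, a 3x3 average reproduces the centre pixel of each window, so the output is easy to verify by hand:

```python
import numpy as np

# Hypothetical 5x5 test image: a simple intensity ramp (made up for this check)
img = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0  # the averaging low-pass filter, as in filter_1

# Valid-mode convolution: the same arithmetic as convolve2D with padding=0,
# strides=1 (flipping this symmetric kernel changes nothing)
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = (img[i:i + 3, j:j + 3] * kernel).sum()

# On a linear ramp, the 3x3 average equals the centre pixel of each window
print(np.allclose(out, img[1:4, 1:4]))  # → True
```

A check like this, run once on a small array, catches shape and indexing mistakes (such as using `image.shape[0]` for both dimensions) before the routine is applied to a full image.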

Question 4: (25 marks)

Consider the image of the Space Shuttle given to you.


Space Shuttle

Use at least three image processing techniques that you have studied in this course
to highlight the edges in the given image. Display your results in one figure
showing all the 3 results. Which technique has performed better and why?

Solution

import matplotlib.pyplot as plt
import cv2
import numpy as np

def convolve2D(image, kernel, padding=0, strides=1):
    # Flip the kernel so this is a true convolution rather than a cross-correlation
    kernel = np.flipud(np.fliplr(kernel))

    # Gather shapes of kernel, image, and padding
    xKernShape = kernel.shape[0]
    yKernShape = kernel.shape[1]
    xImgShape = image.shape[0]
    yImgShape = image.shape[1]

    # Shape of the output convolution
    xOutput = int(((xImgShape - xKernShape + 2 * padding) / strides) + 1)
    yOutput = int(((yImgShape - yKernShape + 2 * padding) / strides) + 1)
    output = np.zeros((xOutput, yOutput))

    # Apply equal padding to all sides
    if padding != 0:
        imagePadded = np.zeros((image.shape[0] + padding * 2,
                                image.shape[1] + padding * 2))
        imagePadded[padding:-padding, padding:-padding] = image
    else:
        imagePadded = image

    # Iterate through the image
    for y in range(image.shape[1]):
        # Exit the convolution once the kernel runs out of columns
        if y > image.shape[1] - yKernShape:
            break
        # Only convolve if y has moved by the specified strides
        if y % strides == 0:
            for x in range(image.shape[0]):
                # Go to the next column once the kernel is out of bounds
                if x > image.shape[0] - xKernShape:
                    break
                # Only convolve if x has moved by the specified strides
                if x % strides == 0:
                    output[x, y] = (kernel * imagePadded[x: x + xKernShape,
                                                         y: y + yKernShape]).sum()

    return output

def processImage(path):
    # Load the image and convert it to grayscale
    img = cv2.imread(path)
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

if __name__ == '__main__':
    # Grayscale image
    image = processImage('template.jpg')  # replace 'template.jpg' with the filename of the given image

    # Edge detection kernels
    sobel_1 = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])   # horizontal edge detection
    sobel_2 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # vertical edge detection
    laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])   # Laplacian (all-direction) edge detection

    # Convolve with each kernel
    image_1 = convolve2D(image, sobel_1, padding=2)
    image_2 = convolve2D(image, sobel_2, padding=2)
    image_3 = convolve2D(image, laplacian, padding=2)

    # Display all results in one figure
    plt.subplot(411), plt.imshow(image, cmap="gray"), plt.title('original image')
    plt.subplot(412), plt.imshow(image_1, cmap="gray"), plt.title('horizontal edge detection')
    plt.subplot(413), plt.imshow(image_2, cmap="gray"), plt.title('vertical edge detection')
    plt.subplot(414), plt.imshow(image_3, cmap="gray"), plt.title('Laplacian edge detection')

    plt.tight_layout()
    plt.show()
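The behaviour of the Sobel kernels above can be checked on a tiny synthetic image containing a single vertical step edge. The 5x6 array below is a made-up test input; the vertical-edge kernel responds strongly at the columns straddling the intensity jump and gives zero in the flat regions (convolve2D additionally flips the kernel, which for this antisymmetric kernel only flips the sign of the response, so the magnitude is the same):

```python
import numpy as np

# Synthetic test image: left half dark, right half bright (one vertical edge)
img = np.zeros((5, 6))
img[:, 3:] = 1.0

sobel_v = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # vertical edges

# Valid-mode sliding-window correlation with the Sobel kernel
h, w = img.shape
out = np.zeros((h - 2, w - 2))
for i in range(h - 2):
    for j in range(w - 2):
        out[i, j] = (img[i:i + 3, j:j + 3] * sobel_v).sum()

# Strong response only at the columns straddling the edge
print(out[0])  # → [0. 4. 4. 0.]
```

The same idea applies to the horizontal kernel with a horizontal step edge: a quick synthetic test like this confirms each kernel responds only to edges in its intended orientation.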
