
Experiment No.6

Aim:- Implementation of Image Negative, Gray Level Slicing and Thresholding

Theory:-

A positive image is a normal image; a negative image is a total inversion, in which light areas appear dark and vice versa. When negative film images are brought into the digital realm, their contrast may be adjusted at the time of scanning or, more usually, during subsequent post-processing. A negative color image is additionally color-reversed, with red areas appearing cyan, greens appearing magenta, and blues appearing yellow, and vice versa.

Grey level slicing is comparable to band-pass filtering: it highlights a specific range of intensity levels in an image, either by diminishing all other levels or by leaving them unchanged. This transformation is useful in medical and satellite imagery, for example to highlight flaws in X-ray images or structures in CT scans.
The grey level or grey value indicates the brightness of a pixel. The minimum grey level is 0, while the maximum grey level depends on the digitisation depth of the image; in an 8-bit greyscale or colour image a pixel can take on any value between 0 and 255.
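As a minimal illustrative sketch of grey level slicing without background (an addition for clarity, not part of the original lab code; it assumes an 8-bit greyscale image such as the 'food.jpeg' used below), pixels inside the band [T1, T2] are set to white and all others to black:

import cv2
import numpy as np

img = cv2.imread('food.jpeg', 0)

T1, T2 = 100, 180  # assumed band of interest
# Keep only the chosen intensity band; everything else is diminished to 0
sliced = np.where((img >= T1) & (img <= T2), 255, 0).astype(np.uint8)
cv2.imwrite('Sliced_No_Background.png', sliced)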
Thresholding is a type of image segmentation in which we change the pixels of an image to make it easier to analyze. In thresholding, we convert an image from color or grayscale into a binary image, i.e., one that is simply black and white.

In image processing, thresholding is used to split an image into smaller segments, or chunks, using at least one color or gray scale value to define their boundary. The advantage of obtaining a binary image first is that it reduces the complexity of the data and simplifies the process of recognition and classification.
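For reference, OpenCV also provides a built-in cv2.threshold that produces the same binary result as the manual loop in the code below; a brief sketch under the same assumptions (greyscale input, threshold 150):

import cv2

img = cv2.imread('food.jpeg', 0)
# Pixels above the threshold become 255 (white); the rest become 0 (black)
ret, binary = cv2.threshold(img, 150, 255, cv2.THRESH_BINARY)
cv2.imwrite('Thresh_Builtin.png', binary)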

Code:-
import cv2
import numpy as np

# Image negative
img = cv2.imread('food.jpeg', 0)

# To ascertain the total number of rows and columns of the image (its size)
m, n = img.shape

# To find the maximum grey level value in the image
L = img.max()

# Maximum grey level value minus the original image gives the negative image
img_neg = L - img

# Convert the np array img_neg to a png image
cv2.imwrite('Cameraman_Negative.png', img_neg)

# Thresholding without background
# Let threshold = T
# Let pixel value in the original be denoted by r
# Let pixel value in the new image be denoted by s
# If r < T, s = 0
# If r >= T, s = 255
T = 150

# Create an array of zeros
img_thresh = np.zeros((m, n), dtype=np.uint8)

for i in range(m):
    for j in range(n):
        if img[i, j] < T:
            img_thresh[i, j] = 0
        else:
            img_thresh[i, j] = 255

# Convert array to png image
cv2.imwrite('Cameraman_Thresh.png', img_thresh)

# Grey level slicing with background
T1 = 100  # the lower threshold value
T2 = 180  # the upper threshold value

# Create an array of zeros
img_thresh_back = np.zeros((m, n), dtype=np.uint8)

for i in range(m):
    for j in range(n):
        if T1 < img[i, j] < T2:
            img_thresh_back[i, j] = 255
        else:
            img_thresh_back[i, j] = img[i, j]

# Convert array to png image
cv2.imwrite('Cameraman_Thresh_Back.png', img_thresh_back)

Original Input Image:-


Output:-

1) Image Negative

2) Image with Thresholding:


3) Image with Grey Level Slicing with Background

Conclusion:- Thus we studied and performed image negative, gray level slicing
and thresholding on the given input image.
Experiment No.7
Aim:- Implementation of Contrast Stretching, Dynamic Range Compression & Bit Plane
Slicing
Theory:-

Contrast stretching (often called normalization) is a simple image enhancement technique that attempts to improve the contrast in an image by 'stretching' the range of intensity values it contains to span a desired range of values, e.g. the full range of pixel values that the image type concerned allows.
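As a worked sketch of the underlying idea (an illustrative addition; the lab code below uses a piecewise-linear lookup table instead), linear min-max stretching maps each pixel r to s = (r - r_min) * 255 / (r_max - r_min):

import cv2
import numpy as np

img = cv2.imread('messi.jpg', 0)  # greyscale for simplicity
r_min, r_max = float(img.min()), float(img.max())
# Linearly map [r_min, r_max] onto the full 8-bit range [0, 255]
stretched = ((img - r_min) * 255.0 / (r_max - r_min)).astype(np.uint8)
cv2.imwrite('Stretched.png', stretched)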

Bit plane slicing is a method of representing an image with one or more bits of the byte used for each pixel. One can use only the MSB to represent a pixel, which reduces the original gray level image to a binary image; converting a gray level image to a binary image in this way is one of the main goals of bit plane slicing.

Instead of highlighting gray level ranges, highlighting the contribution made to the total image appearance by specific bits might be desired. Suppose that each pixel in an image is represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from bit plane 0 (LSB) to bit plane 7 (MSB).

In terms of 8-bit bytes, plane 0 contains all the lowest-order bits of the bytes comprising the pixels in the image and plane 7 contains all the highest-order bits.
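A compact way to extract a single bit plane (a sketch using NumPy bitwise operations, offered as an alternative to the string-based lab code below) is to shift and mask each pixel:

import cv2
import numpy as np

img = cv2.imread('D:/Downloads/reference.jpg', 0)
k = 7  # plane index: 0 = LSB, 7 = MSB
# Shift bit k into the lowest position, mask it, and scale by 2^k for display
plane = (((img >> k) & 1) * (2 ** k)).astype(np.uint8)
cv2.imwrite('bit_plane_7.png', plane)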

The purpose of dynamic range compression is to map the natural dynamic range of a signal to a smaller range. For images, this is achieved by modifying the illumination component of the image.

The term comes from audio, where dynamic range compression (DRC), or simply compression, is an audio signal processing operation that reduces the volume of loud sounds or amplifies quiet sounds, thus reducing or compressing an audio signal's dynamic range. A limiter, for example, is a compressor with a high ratio and, generally, a short attack time.
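A common image-domain realization is the log transform s = c * log(1 + r), with c chosen so the output exactly spans [0, 255]; a minimal sketch (an illustrative addition, related in spirit to the lab's DRC code further below):

import cv2
import numpy as np

img = cv2.imread('img.jpg', 0).astype(np.float32)
# Scale factor so the brightest input pixel maps exactly to 255
c = 255 / np.log(1 + img.max())
compressed = (c * np.log(1 + img)).astype(np.uint8)
cv2.imwrite('log_compressed.png', compressed)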

Code:-
import cv2
import numpy as np

img = cv2.imread('messi.jpg')
original = img.copy()

# Piecewise-linear mapping: input breakpoints xp are mapped to output values fp
xp = [0, 64, 128, 192, 255]
fp = [0, 16, 128, 240, 255]

# Build a 256-entry lookup table by interpolating between the breakpoints
x = np.arange(256)
table = np.interp(x, xp, fp).astype('uint8')

# Apply the lookup table to every pixel
img = cv2.LUT(img, table)

cv2.imshow("original", original)
cv2.imshow("Output", img)
cv2.waitKey(0)
cv2.destroyAllWindows()

Output:-
Original Image:-

Contrast Stretch image:-

Code:-
import numpy as np
import cv2

img = cv2.imread('D:/Downloads/reference.jpg', 0)
lst = []
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        lst.append(np.binary_repr(img[i][j], width=8))  # width = no. of bits

# We have a list of strings where each string represents a binary pixel value.
# To extract bit planes we need to iterate over the strings and store the
# characters corresponding to each bit plane in lists.
# Multiply with 2^(n-1) and reshape to reconstruct the bit image.
eight_bit_img = (np.array([int(i[0]) for i in lst], dtype=np.uint8) * 128).reshape(img.shape[0], img.shape[1])
seven_bit_img = (np.array([int(i[1]) for i in lst], dtype=np.uint8) * 64).reshape(img.shape[0], img.shape[1])
six_bit_img = (np.array([int(i[2]) for i in lst], dtype=np.uint8) * 32).reshape(img.shape[0], img.shape[1])
five_bit_img = (np.array([int(i[3]) for i in lst], dtype=np.uint8) * 16).reshape(img.shape[0], img.shape[1])
four_bit_img = (np.array([int(i[4]) for i in lst], dtype=np.uint8) * 8).reshape(img.shape[0], img.shape[1])
three_bit_img = (np.array([int(i[5]) for i in lst], dtype=np.uint8) * 4).reshape(img.shape[0], img.shape[1])
two_bit_img = (np.array([int(i[6]) for i in lst], dtype=np.uint8) * 2).reshape(img.shape[0], img.shape[1])
one_bit_img = (np.array([int(i[7]) for i in lst], dtype=np.uint8) * 1).reshape(img.shape[0], img.shape[1])

# Concatenate these images horizontally for ease of display using cv2.hconcat()
finale = cv2.hconcat([eight_bit_img, seven_bit_img, six_bit_img, five_bit_img])
final = cv2.hconcat([four_bit_img, three_bit_img, two_bit_img, one_bit_img])

# Vertically concatenate the two rows of bit planes
final = cv2.vconcat([finale, final])

# Display the images
cv2.imshow('a', final)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:-

Original Image:-

Bit plane Slicing Image:-

Code:-

import cv2
import numpy as np
from skimage.io import imread
from google.colab.patches import cv2_imshow  # needed for display in Colab

image = imread("img.jpg")
# skimage.io.imread returns RGB order, so convert with COLOR_RGB2GRAY
image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
image = cv2.resize(image, (250, 250))
print("Original Image")
cv2_imshow(image)

image = image.astype('float32')
alpha = 5
# Scale factor chosen so the transformed output spans [0, 255]
# (otherwise values above 255 would wrap around when cast to uint8)
a = 255 / np.log(1 + (np.exp(alpha) - 1) * np.max(image))
transformed_image = a * np.log(1 + (np.exp(alpha) - 1) * image)
transformed_image = transformed_image.astype(np.uint8)

print("\nDynamic Range Compression Image")
cv2_imshow(transformed_image)

Output:-

Original Image:-

Dynamic Range Compressed Image:-

Conclusion:- Thus we studied different techniques of contrast stretching, bit plane
slicing and dynamic range compression of an image.

Experiment No.8
Aim:- Implementation of Histogram Processing
Theory:-

In digital image processing, the histogram is used for graphical representation of a digital image. The graph plots the number of pixels for each tonal value. Nowadays, image histograms are present in digital cameras; photographers use them to see the distribution of tones captured.

In the graph, the horizontal axis represents the tonal variations, whereas the vertical axis represents the number of pixels with that particular tonal value. Black and dark areas are represented on the left side of the horizontal axis, medium grey in the middle, and bright areas on the right.

Applications of Histograms

1. In digital image processing, histograms are used for simple calculations in software.
2. It is used to analyze an image. Properties of an image can be predicted by a detailed study of its histogram.
3. The brightness of the image can be adjusted by having the details of its histogram.
4. The contrast of the image can be adjusted according to the need by having details of the x-axis of a histogram.
5. It is used for image equalization. Gray level intensities are expanded along the x-axis to produce a high contrast image.
6. Histograms are used in thresholding as it improves the appearance of the image.
7. If we have the input and output histograms of an image, we can determine which type of transformation was applied in the algorithm.

Histogram Processing Techniques


1) Histogram Sliding

In histogram sliding, the complete histogram is shifted rightwards or leftwards. When a histogram is shifted towards the right or left, clear changes are seen in the brightness of the image. The brightness of the image is defined by the intensity of light which is emitted by a particular light source.
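For example (an illustrative sketch, assuming an 8-bit greyscale array; not part of the original lab code), sliding the histogram 50 levels to the right just adds a constant with saturation:

import cv2
import numpy as np

image = cv2.imread('img.jpg', 0)
# Add 50 to every pixel, clipping at 255 so values do not wrap around
slid = np.clip(image.astype(np.int16) + 50, 0, 255).astype(np.uint8)
cv2.imwrite('slid_right.png', slid)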
2) Histogram Stretching

In histogram stretching, the contrast of an image is increased. The contrast of an image is defined by the difference between the maximum and minimum values of pixel intensity.

If we want to increase the contrast of an image, the histogram of that image is stretched to cover the full dynamic range.

From the histogram of an image, we can check whether the image has low or high contrast.
3) Histogram Equalization

Histogram equalization is used for equalizing all the pixel values of an image. The transformation is done in such a way that a uniform, flattened histogram is produced.

Histogram equalization increases the dynamic range of pixel values and aims for an approximately equal count of pixels at each level, which produces a flat histogram and a high-contrast image.

While stretching preserves the shape of the histogram, histogram equalization changes the shape of the histogram; it has no free parameters and generates only one output image.
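A short sketch of equalization using OpenCV's built-in function (an illustrative addition to the histogram-plotting lab code below; cv.equalizeHist expects an 8-bit single-channel image):

import cv2 as cv

image = cv.imread('img.jpg', 0)
# Remap intensities so the cumulative histogram becomes approximately linear
equalized = cv.equalizeHist(image)
cv.imwrite('equalized.png', equalized)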

Code:-
import cv2 as cv
import numpy as np
from google.colab.patches import cv2_imshow
from matplotlib import pyplot as plt
from skimage.io import imread

image = imread("img.jpg")
# skimage.io.imread returns RGB order, so convert with COLOR_RGB2GRAY
image = cv.cvtColor(image, cv.COLOR_RGB2GRAY)
image = cv.resize(image, (250, 250))

print("Original Image")
cv2_imshow(image)

row, col = image.shape

# 256-bin histogram of the single grey channel over the range [0, 256)
histo = cv.calcHist([image], [0], None, [256], [0, 256])

plt.plot(histo)
print("\nHistogram")
plt.show()

Output:-
Original image:-

Histogram:-
Conclusion:- Thus we studied histogram processing and plotted the histogram of the given
image.
Experiment No.9
Aim:- Implementation of Image Smoothing / Image Sharpening

Theory:-

Image enhancement is usually obtained by removing noise while sharpening details and improving edge contrast. Smoothing refers to the case of denoising when the noise follows a Gaussian distribution. The two operations, smoothing noise and sharpening, have opposite natures.

Smoothing and sharpening functions use the pixels in an N x N neighborhood about each pixel to modify an image. For both smoothing and sharpening filters, the larger the N x N neighborhood, the stronger the smoothing or sharpening effect.

For example, a box smoothing filter smooths an image by calculating the average of all
pixels in the N x N neighborhood of a pixel and replaces the pixel at the center of the N
x N neighborhood with the average value. Gaussian blur and unsharp mask are also
examples of non-adaptive filters.
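For instance, a minimal unsharp-mask sketch (an illustrative addition, assuming a greyscale image; the kernel-based lab code follows below) blurs the image and then adds back the scaled difference between the original and the blur:

import cv2

img = cv2.imread('img.jpg', 0)
# Smooth with a 5x5 Gaussian, then overshoot the original against the blur:
# sharp = 1.5*img - 0.5*blurred  ==  img + 0.5*(img - blurred)
blurred = cv2.GaussianBlur(img, (5, 5), 0)
sharp = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)
cv2.imwrite('unsharp.png', sharp)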

Non-adaptive filters are represented as an N x N convolution kernel. The dimensions of the N x N convolution kernel are typically 3x3, 5x5, 7x7, 9x9, or larger.

A convolution is a one-to-one linear function F that maps an M x N image Z and an N x N convolution kernel C onto a new M x N image W. The function F has the following properties: 1) a pixel from Z is mapped to the same position in W; 2) each pixel of W is the weighted sum of the N x N neighborhood of the corresponding pixel of Z, with the weights given by the entries of the kernel C.
Example:-
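A brief sketch of applying a 3x3 box (averaging) kernel with cv2.filter2D (illustrative; the full lab code with several kernels follows):

import cv2
import numpy as np

img = cv2.imread('img.jpg', 0)
# 3x3 box kernel: every output pixel is the mean of its 3x3 neighborhood
kernel = np.ones((3, 3), np.float32) / 9
smoothed = cv2.filter2D(img, -1, kernel)
cv2.imwrite('smoothed.png', smoothed)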

Code:-

import cv2 as cv
import numpy as np
from google.colab.patches import cv2_imshow
from skimage.io import imread

image = imread("img.jpg")
# skimage.io.imread returns RGB order, so convert with COLOR_RGB2GRAY
image = cv.cvtColor(image, cv.COLOR_RGB2GRAY)
image = cv.resize(image, (250, 250))
print("Original Image")
cv2_imshow(image)

# Identity (impulse) kernel: leaves the image unchanged
impulse = np.array(([0, 0, 0], [0, 1, 0], [0, 0, 0]), np.float32)

# Large blur: 21x21 averaging kernel
largeBlur = np.ones((21, 21), dtype="float") * (1.0 / (21 * 21))

# Sharpening kernel
sharpen = np.array(([0, -1, 0], [-1, 5, -1], [0, -1, 0]), np.float32)

# Box blur: 3x3 averaging kernel
boxblur = np.ones((3, 3), np.float32) / 9

output1 = cv.filter2D(image, -1, impulse)
output2 = cv.filter2D(image, -1, largeBlur)
output3 = cv.filter2D(image, -1, sharpen)
output4 = cv.filter2D(image, -1, boxblur)

print("\nIdentity")
cv2_imshow(output1)

print("\nSharpen")
cv2_imshow(output3)

print("\nBox Blur")
cv2_imshow(output4)

print("\nLarge Blur")
cv2_imshow(output2)

Output:-

Original Image

Identity
Sharpen

Box Blur
Large Blur

Conclusion:- Thus we studied image smoothing and sharpening with different
techniques.
Experiment No.10
Aim:- Implementation of Edge detection using Sobel and Prewitt masks.
Theory:-
Significant transitions in an image are called edges; we can also say that sudden
discontinuities in an image are edges.

Types of edges

Generally edges are of three types:

● Horizontal edges
● Vertical Edges
● Diagonal Edges

Why detect edges

Most of the shape information of an image is enclosed in edges. We first detect the
edges in an image using edge-detection filters; then, by enhancing those areas of the
image which contain edges, the sharpness of the image increases and the image
becomes clearer.

Some Edge detection types:-

● Prewitt Mask
● Sobel Mask

Prewitt operator is used for edge detection in an image. It detects two types of edges

● Horizontal edges
● Vertical Edges

Edges are calculated by using the difference between corresponding pixel intensities of an image. All the masks that are used for edge detection are also known as derivative masks, because an image is a signal, and changes in a signal can only be calculated using differentiation. That is why these operators are also called derivative operators or derivative masks.

All the derivative masks should have the following properties:

● Opposite signs should be present in the mask.


● Sum of the masks should be equal to zero.
● More weight means more edge detection.

The Sobel operator is very similar to the Prewitt operator. It is also a derivative mask
and is used for edge detection. Like the Prewitt operator, the Sobel operator detects
two kinds of edges in an image (the standard 3x3 masks for both operators are sketched after this list):

● Vertical direction
● Horizontal direction
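For reference, the standard 3x3 Prewitt and Sobel masks, applied here with cv2.filter2D as an illustrative sketch (the lab code below uses scikit-image's built-in filters instead):

import cv2
import numpy as np

# Prewitt masks: uniform weights along the averaging direction
prewitt_x = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], np.float32)
prewitt_y = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], np.float32)

# Sobel masks: the centre row/column gets double weight
# ("more weight means more edge detection")
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)
sobel_y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], np.float32)

img = cv2.imread('img.jpg', 0).astype(np.float32)
gx = cv2.filter2D(img, -1, sobel_x)  # responds to vertical edges
gy = cv2.filter2D(img, -1, sobel_y)  # responds to horizontal edges
magnitude = np.sqrt(gx ** 2 + gy ** 2)  # combined gradient magnitude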

Code:-
import numpy as np
import matplotlib.pyplot as plt
from skimage import filters
from skimage.util import compare_images

x, y = np.ogrid[:100, :100]

# Creating a rotation-invariant image with different spatial frequencies
image_rot = np.exp(1j * np.hypot(x, y) ** 1.3 / 20.).real

edge_sobel = filters.sobel(image_rot)
edge_scharr = filters.scharr(image_rot)
edge_prewitt = filters.prewitt(image_rot)

# Pairwise differences between the three edge maps
diff_scharr_prewitt = compare_images(edge_scharr, edge_prewitt)
diff_scharr_sobel = compare_images(edge_scharr, edge_sobel)
max_diff = np.max(np.maximum(diff_scharr_prewitt, diff_scharr_sobel))

fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True, figsize=(8, 8))
axes = axes.ravel()

axes[0].imshow(image_rot, cmap=plt.cm.gray)
axes[0].set_title('Original image')

axes[1].imshow(edge_scharr, cmap=plt.cm.gray)
axes[1].set_title('Scharr Edge Detection')

axes[2].imshow(diff_scharr_prewitt, cmap=plt.cm.gray, vmax=max_diff)
axes[2].set_title('Scharr - Prewitt')

axes[3].imshow(diff_scharr_sobel, cmap=plt.cm.gray, vmax=max_diff)
axes[3].set_title('Scharr - Sobel')

for ax in axes:
ax.axis('off')

plt.tight_layout()
plt.show()

Output:-

Original Image:-

Scharr Edge Detection:-


Scharr-Prewitt:-

Scharr - Sobel:-

Conclusion:- Thus we performed edge detection using the Sobel and Prewitt methods.
