Theory:-
A negative image is a total inversion, in which light areas appear dark and vice versa.
When negative film images are brought into the digital realm, their contrast may be
adjusted at the time of scanning or, more usually, during subsequent post-processing.
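As a minimal NumPy sketch of the negative transformation s = 255 - r (the pixel values here are made up for illustration):

```python
import numpy as np

# Negative of an 8-bit image: s = 255 - r for each pixel
img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
negative = 255 - img
print(negative)  # [[255 155]
                 #  [ 55   0]]
```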
Grey level slicing is comparable to band-pass filtering: it highlights a specific range of
intensity levels in an image, either by suppressing all other levels or by leaving them
unchanged. This transformation is useful in medical and satellite imagery, for example to
highlight flaws in X-ray images or regions of interest in CT scans.
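Grey level slicing with background can be sketched as below; the highlighted range [100, 180] and the pixel values are made up for illustration:

```python
import numpy as np

# Grey level slicing with background: highlight intensities in [100, 180],
# leave all other intensity levels unchanged
img = np.array([[50, 120], [160, 220]], dtype=np.uint8)
sliced = np.where((img >= 100) & (img <= 180), 255, img).astype(np.uint8)
print(sliced)  # [[ 50 255]
               #  [255 220]]
```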
The grey level or grey value indicates the brightness of a pixel. The minimum grey level
is 0, while the maximum grey level depends on the digitisation depth of the image; in an
8-bit greyscale or colour image, a pixel can take on any value between 0 and 255.
Thresholding is a type of image segmentation, where we change the pixels of an image
to make the image easier to analyze. In thresholding, we convert an image from color or
grayscale into a binary image, i.e., one that is simply black and white.
Code:-
import cv2
import numpy as np

# Thresholding
img = cv2.imread('food.jpeg', 0)
m, n = img.shape
img_thresh = np.zeros((m, n), np.uint8)
T = 150
for i in range(m):
    for j in range(n):
        if img[i, j] < T:
            img_thresh[i, j] = 0
        else:
            img_thresh[i, j] = 255
cv2.imwrite('Cameraman_Thresh_Back.png', img_thresh)
1) Image Negative
Conclusion:- Thus we studied and performed image negative, grey level slicing
and thresholding on the given input image.
Experiment No.7
Aim:- Implementation of Contrast Stretching, Dynamic Range Compression & Bit Plane
Slicing
Theory:-
Bit plane slicing is a method of representing an image with one or more bits of the byte
used for each pixel. One can use only the MSB to represent each pixel, which reduces the
original grey-level image to a binary image. The main goals of bit plane slicing are:
converting a grey-level image to a binary image, representing an image with fewer bits,
and compressing the image.
Instead of highlighting grey-level ranges, highlighting the contribution made to total
image appearance by specific bits might be desired. Suppose that each pixel in an
image is represented by 8 bits. Imagine the image is composed of eight 1-bit planes,
ranging from bit plane 0 (LSB) to bit plane 7 (MSB).
In terms of 8-bit bytes, plane 0 contains all the lowest-order bits in the bytes comprising
the pixels in the image and plane 7 contains all the highest-order bits.
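The MSB plane described above can be extracted with a bitwise shift; a small sketch with made-up pixel values:

```python
import numpy as np

# Bit plane 7 (MSB): 1 wherever the pixel value is >= 128, scaled to 255 for display
img = np.array([[200, 60], [130, 255]], dtype=np.uint8)
msb = ((img >> 7) & 1) * 255
print(msb)  # [[255   0]
            #  [255 255]]
```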
The purpose of dynamic range compression is to map the natural dynamic range of a
signal to a smaller range. This is achieved by modifying the illumination component of
the image.
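A common way to compress dynamic range is the log transform s = c * log(1 + r); a minimal sketch with made-up sample values:

```python
import numpy as np

# Log transform: large input values are compressed into [0, 255]
r = np.array([0, 10, 1000, 100000], dtype=np.float64)
c = 255 / np.log(1 + r.max())
s = c * np.log(1 + r)
print(np.round(s))  # [  0.  53. 153. 255.]
```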
Code:-
import cv2
import numpy as np
img = cv2.imread('messi.jpg')
original = img.copy()
# Piecewise-linear contrast stretching: map input levels xp to output levels fp
xp = [0, 64, 128, 192, 255]
fp = [0, 16, 128, 240, 255]
x = np.arange(256)
table = np.interp(x, xp, fp).astype('uint8')
img = cv2.LUT(img, table)
cv2.imshow("original", original)
cv2.imshow("Output", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:-
Original Image:-
Code:-
import numpy as np
import cv2

img = cv2.imread('D:/Downloads/reference.jpg', 0)

lst = []
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        lst.append(np.binary_repr(img[i][j], width=8))  # width = no. of bits

# We have a list of strings where each string represents a binary pixel value.
# To extract the bit planes we iterate over the strings and store the digit
# corresponding to each bit plane, scaled back to 0-255 for display.
planes = []
for k in range(8):
    plane = np.array([int(px[7 - k]) for px in lst], dtype=np.uint8) * 255
    planes.append(plane.reshape(img.shape))

# Horizontally concatenate planes 7-4 and 3-0, then vertically concatenate
row1 = cv2.hconcat(planes[7:3:-1])
row2 = cv2.hconcat(planes[3::-1])
final = cv2.vconcat([row1, row2])
Original Image:-
Code:-
import cv2
import numpy as np
from skimage.io import imread
from google.colab.patches import cv2_imshow

image = imread("img.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
image = cv2.resize(image, (250, 250))
print("Original Image")
cv2_imshow(image)

# Log transformation for dynamic range compression
image = image.astype('float32')
a = 255 / np.log(1 + np.max(image))
alpha = 5
transformed_image = a * np.log(1 + (np.exp(alpha) - 1) * image)
transformed_image = np.clip(transformed_image, 0, 255).astype(np.uint8)
cv2_imshow(transformed_image)
Output:-
Original Image:-
Experiment No.8
Aim:- Implementation of Histogram Processing
Theory:-
In a histogram, the horizontal axis represents the tonal values, whereas the vertical axis
represents the number of pixels at each tonal value. Black and dark areas are
represented on the left side of the horizontal axis, medium grey in the middle, and light
and white areas on the right; the height of each bar indicates how many pixels have that
value.
Applications of Histograms
If we want to increase the contrast of an image, its histogram must be stretched to
cover the full dynamic range.
From the histogram of an image, we can check whether the image has low or high contrast.
3) Histogram Equalization
Histogram equalization is used for equalizing all the pixel values of an image.
Transformation is done in such a way that uniform flattened histogram is produced.
Histogram equalization increases the dynamic range of pixel values and makes an
equal count of pixels at each level which produces a flat histogram with high contrast
image.
While stretching a histogram, its shape remains the same, whereas histogram
equalization changes the shape of the histogram and generates a single, fully
determined output image.
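The equalization transform can be sketched directly from the CDF, s = round((L - 1) * CDF(r)), on a tiny made-up image:

```python
import numpy as np

# Histogram equalization: map each grey level through the normalized CDF
img = np.array([[52, 55], [61, 59]], dtype=np.uint8)
hist = np.bincount(img.ravel(), minlength=256)
cdf = hist.cumsum() / img.size
table = np.round(255 * cdf).astype(np.uint8)
equalized = table[img]
print(equalized)  # [[ 64 128]
                  #  [255 191]]
```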
Code:-
import cv2 as cv
import numpy as np
from google.colab.patches import cv2_imshow
from matplotlib import pyplot as plt
from skimage.io import imread

image = imread("img.jpg")
image = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
image = cv.resize(image, (250, 250))
print("Original Image")
cv2_imshow(image)

# Equalize the histogram and plot the grey-level histogram
equalized = cv.equalizeHist(image)
print("Equalized Image")
cv2_imshow(equalized)
plt.hist(image.ravel(), 256, [0, 256])
plt.show()
Output:-
Original image:-
Histogram:-
Conclusion:- Thus we studied histogram processing and plotted the histogram of the
given image.
Experiment No.9
Aim:- Implementation of Image smoothing/ Image sharpening
Theory:-
Image enhancement is usually obtained by removing noise while sharpening details and
improving edge contrast. Smoothing refers to the case of denoising in which the noise
follows a Gaussian distribution. The two operations, smoothing noise and sharpening,
have opposite natures.
Smoothing and sharpening functions use the pixels in an N x N neighbourhood about each
pixel to modify an image. For both smoothing and sharpening filters, the larger the N x N
neighbourhood, the stronger the smoothing or sharpening effect.
For example, a box smoothing filter smooths an image by calculating the average of all
pixels in the N x N neighborhood of a pixel and replaces the pixel at the center of the N
x N neighborhood with the average value. Gaussian blur and unsharp mask are also
examples of non-adaptive filters.
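The box-filter averaging described above can be checked at a single pixel; this sketch uses a made-up 3 x 3 neighbourhood:

```python
import numpy as np

# Box filter at one pixel: the centre is replaced by the mean of its 3 x 3 neighbourhood
patch = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]], dtype=np.float64)
print(patch.mean())  # 50.0
```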
Code:-
import cv2 as cv
import numpy as np
from google.colab.patches import cv2_imshow

image = cv.imread('img.jpg')

# Identity (impulse) kernel
impulse = np.array(([0,0,0],[0,1,0],[0,0,0]), np.float32)
# Large blur: 21 x 21 averaging kernel
largeBlur = np.ones((21, 21), dtype="float") * (1.0 / (21 * 21))
# Sharpening kernel
sharpen = np.array(([0,-1,0],[-1,5,-1],[0,-1,0]), np.float32)
# Box blur: 3 x 3 averaging kernel
boxblur = np.ones((3, 3), np.float32) / 9

output1 = cv.filter2D(image, -1, impulse)
output2 = cv.filter2D(image, -1, largeBlur)
output3 = cv.filter2D(image, -1, sharpen)
output4 = cv.filter2D(image, -1, boxblur)

print("\nIdentity")
cv2_imshow(output1)
print("\nSharpen")
cv2_imshow(output3)
print("\nBox Blur")
cv2_imshow(output4)
print("\nLarge Blur")
cv2_imshow(output2)
Output:-
Original Image
Identity
Sharpen
Box Blur
Large Blur
Types of edges
● Horizontal edges
● Vertical Edges
● Diagonal Edges
Most of the shape information of an image is enclosed in its edges. We first detect these
edges using edge-detection filters; by then enhancing the areas of the image that contain
edges, the sharpness of the image increases and the image becomes clearer.
● Prewitt Mask
● Sobel Mask
The Prewitt operator is used for edge detection in an image. It detects two types of edges:
● Horizontal edges
● Vertical Edges
The Sobel operator is very similar to the Prewitt operator. It is also a derivative mask
and is used for edge detection. Like the Prewitt operator, the Sobel operator detects
two kinds of edges in an image:
● Vertical direction
● Horizontal direction
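For reference, the standard 3 x 3 Prewitt and Sobel masks for vertical edges look like this (their horizontal-edge counterparts are the transposes):

```python
import numpy as np

# Prewitt mask for vertical edges: uniform weights in each column
prewitt_x = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]])
# Sobel mask for vertical edges: the centre row is weighted more heavily
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
print(prewitt_x.sum(), sobel_x.sum())  # 0 0
```

Both masks sum to zero, so they respond only to intensity changes, not to flat regions.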
Code:-
import numpy as np
import matplotlib.pyplot as plt
from skimage import filters
from skimage.data import camera
from skimage.transform import rotate

image_rot = rotate(camera(), angle=15)

edge_sobel = filters.sobel(image_rot)
edge_scharr = filters.scharr(image_rot)
edge_prewitt = filters.prewitt(image_rot)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(image_rot, cmap=plt.cm.gray)
axes[0].set_title('Original image')
axes[1].imshow(edge_scharr, cmap=plt.cm.gray)
axes[1].set_title('Scharr Edge Detection')
for ax in axes:
    ax.axis('off')
plt.tight_layout()
plt.show()
Output:-
Original Image:-
Scharr- Sobel
Conclusion:- Thus we performed edge detection using the Sobel, Scharr and Prewitt methods.