PRACTICAL FILE

Student Name Diksha Sharma


UID 20BCS1458
Section & Group 20BCS-17 & Group-B
Department Computer Science & Engineering
Session Aug-Dec 2023
Course Name Computer Vision Lab
Course Code 20CSP-422
Semester 7th
INDEX

S. N. | Experiment | Date of Experiment | Conduct (12 marks) | Viva (10 marks) | Record (08 marks) | Total (30 marks) | Teacher's Signature

1. Write a program to implement various feature extraction techniques for image classification.
2. Write a program to assess various feature matching algorithms for object recognition.
3. Write a program to analyze the impact of refining feature detection for image segmentation.
4. Write a program to evaluate the efficacy of human-guided control point selection for image alignment.
5. Write a program to compare the performance of different classification models in image recognition.
6. Write a program to interpret the effectiveness of Bag of Features in enhancing image classification performance.
7. Write a program to analyze various object detection algorithms with machine learning.
8. Write a program to determine the effectiveness of incorporating optical flow analysis into object tracking algorithms.
9. Write a program to examine the performance of various pretrained deep learning models for real-time object tracking tasks.
10. Write a program to interpret the effectiveness of template matching techniques for video stabilization tasks.
Course Name: Computer Vision Lab Course Code: 20CSP-422

Experiment: 1.1

Aim: Write a program to implement various feature extraction techniques for image
classification.

Software Required:
• Python 3.9
• PyCharm IDE

Description:
Feature extraction refers to the process of transforming raw data into numerical
features that can be processed while preserving the information in the original data
set.
There are several feature extraction techniques commonly used in image
classification tasks. These techniques aim to capture relevant information from
images and transform them into meaningful representations that can be used by
machine learning algorithms for classification. Some popular feature extraction
techniques are:
• Scale-Invariant Feature Transform (SIFT): SIFT is a widely used technique that
identifies key points and extracts local invariant descriptors from images. It is
robust to changes in scale, rotation, and illumination.
• Oriented FAST and Rotated BRIEF (ORB): ORB is a feature detection and
description algorithm designed for efficiency, often used in real-time applications.
It identifies key points using the FAST algorithm, computes rotation-invariant
descriptors with binary patterns, and is well-suited for tasks like object tracking
and robotics. While ORB is faster, its descriptors may be less distinctive than
those of methods like SIFT or SURF.
• Histogram of Oriented Gradients (HOG): HOG computes the distribution of
gradient orientations in an image. It captures the shape and edge information and
has been particularly successful in object detection and pedestrian recognition
tasks.

Name: Diksha Sharma UID: 20BCS1458


• Local Binary Patterns (LBP): LBP encodes the texture information by comparing
each pixel's intensity value with its neighboring pixels. It is commonly used in
texture analysis tasks and has shown good performance in various image
classification applications.
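As an illustration of the LBP encoding, the 8-bit code for a single pixel can be computed directly from its 3x3 neighbourhood. This is a minimal sketch; the pixel values below are made up for the example:

```python
# Toy 3x3 neighbourhood (values are illustrative, not from a real image)
patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
center = patch[1][1]

# Clockwise neighbours starting at the top-left corner
neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
              patch[2][2], patch[2][1], patch[2][0], patch[1][0]]

# Each neighbour >= centre sets one bit of the 8-bit LBP code
code = sum(1 << bit for bit, n in enumerate(neighbours) if n >= center)
print(code)  # 241 = 0b11110001
```

Repeating this for every pixel and histogramming the resulting codes yields the LBP texture descriptor for the whole image.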

Steps:
1. Import necessary libraries
2. Load the dataset
3. Preprocess the images
4. Extract features using a specific technique
5. Split the dataset into training and testing sets
6. Train a classifier using the extracted features and labels
7. Evaluate the performance of the classifier on the testing set
8. Compare the performance of different techniques
9. Repeat steps 4-8 for other techniques
10. Document the observations and conclusions
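Steps 5-7 can be sketched with a minimal nearest-centroid classifier on made-up feature vectors. The vectors here are random stand-ins for the HOG/LBP descriptors extracted above, and in practice a library such as scikit-learn would normally supply the split and the classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up 8-dimensional feature vectors for two classes
# (stand-ins for real HOG/LBP descriptors)
class0 = rng.normal(0.0, 1.0, size=(20, 8))
class1 = rng.normal(3.0, 1.0, size=(20, 8))
features = np.vstack([class0, class1])
labels = np.array([0] * 20 + [1] * 20)

# Step 5: split into training and testing sets
idx = rng.permutation(len(features))
train, test = idx[:30], idx[30:]

# Step 6: "train" a nearest-centroid classifier (one mean per class)
centroids = np.array([features[train][labels[train] == c].mean(axis=0)
                      for c in (0, 1)])

# Step 7: evaluate on the testing set by assigning each sample
# to its nearest class centroid
dists = np.linalg.norm(features[test][:, None, :] - centroids[None, :, :], axis=2)
predictions = dists.argmin(axis=1)
accuracy = (predictions == labels[test]).mean()
print(f"Accuracy: {accuracy:.2f}")
```

Swapping in descriptors from a different extractor (step 9) and comparing the resulting accuracies carries out step 8.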

Implementation:
import cv2
import numpy as np

def extract_orb_features(image):
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(image, None)
    return keypoints, descriptors

def extract_hog_features(image):
    win_size = (64, 64)  # Adjust this window size based on your requirement
    block_size = (16, 16)
    block_stride = (8, 8)
    cell_size = (8, 8)
    nbins = 9
    hog = cv2.HOGDescriptor(win_size, block_size, block_stride,
                            cell_size, nbins)
    # HOG expects a grayscale image matching the detection window size
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, win_size)
    descriptors = hog.compute(resized)
    return descriptors

def extract_sift_features(image):
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)
    return keypoints, descriptors

def extract_lbp_features(image):
    # Compute the basic 3x3 LBP code for every pixel: each of the eight
    # neighbours that is >= the centre pixel sets one bit of an 8-bit code
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.int32)
    lbp = np.zeros_like(gray)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        shifted = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
        lbp += (shifted >= gray).astype(np.int32) << bit
    # Summarise the texture with a 256-bin histogram of the LBP codes
    histogram, _ = np.histogram(lbp, bins=256, range=(0, 256))
    return lbp.astype(np.uint8), histogram

if __name__ == "__main__":
    # Load the image
    image_path = 'sunflower.jpg'
    image = cv2.imread(image_path)

    # Extract ORB features
    orb_keypoints, orb_descriptors = extract_orb_features(image)

    # Extract HOG features
    hog_descriptors = extract_hog_features(image)

    # Extract SIFT features
    sift_keypoints, sift_descriptors = extract_sift_features(image)

    # Extract LBP features
    lbp_image, lbp_histogram = extract_lbp_features(image)

    # Display images with keypoints
    image_with_orb = cv2.drawKeypoints(
        image, orb_keypoints, None,
        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    image_with_sift = cv2.drawKeypoints(
        image, sift_keypoints, None,
        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

    # Print the number of keypoints and descriptors for each method
    print(f"SIFT: Keypoints: {len(sift_keypoints)}, Descriptors: "
          f"{sift_descriptors.shape if sift_descriptors is not None else (0, 0)}")
    print(f"ORB: Keypoints: {len(orb_keypoints)}, Descriptors: "
          f"{orb_descriptors.shape if orb_descriptors is not None else (0, 0)}")
    print(f"HOG: Descriptors: {hog_descriptors.shape}")
    print(f"LBP: Histogram bins: {lbp_histogram.shape[0]}")

    cv2.imshow("Image with ORB Keypoints", image_with_orb)
    cv2.imshow("Image with SIFT Keypoints", image_with_sift)
    cv2.imshow("LBP Image", lbp_image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()


Output:
