
Experiment: 2

Aim: Write a program to compare the performance of different classification models in image recognition.

Software Required: Google Colab.

Description:
1. Convolutional Neural Networks (CNN): CNNs have revolutionized image
recognition and achieved state-of-the-art performance in various tasks. They consist
of multiple convolutional layers that automatically learn hierarchical features from
images. Popular CNN architectures include AlexNet, VGGNet, GoogLeNet
(Inception), ResNet, and DenseNet.
2. Support Vector Machines (SVM): SVMs are supervised learning models that can
be used for image classification. They find an optimal hyperplane that separates the
classes in feature space. SVMs are often combined with handcrafted features or with
features extracted from a CNN (a sketch of this combination follows the list).
3. Random Forests: Random Forests are ensemble learning models that consist of
multiple decision trees. They can be used for image classification by combining
features extracted from images and making predictions based on the majority voting
of the trees.
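
The sketch below ties these ideas together: a small CNN is trained on a handful of images, its convolutional layers are then reused as a feature extractor, and an SVM and a Random Forest are fitted on the extracted features. The dataset choice (MNIST), the layer sizes, and the sample counts are illustrative assumptions, not requirements of this experiment.

# Illustrative sketch: CNN feature extractor + SVM / Random Forest classifiers
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Small subsets keep the sketch quick to run
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train[:2000].reshape(-1, 28, 28, 1).astype("float32") / 255.0
X_test = X_test[:500].reshape(-1, 28, 28, 1).astype("float32") / 255.0
y_train, y_test = y_train[:2000], y_test[:500]

# CNN: convolutional layers learn hierarchical features from the images
cnn = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(name='features'),
    Dense(10, activation='softmax')
])
cnn.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
cnn.fit(X_train, y_train, epochs=3, batch_size=64, verbose=0)

# Reuse the trained convolutional layers as a feature extractor
extractor = Model(inputs=cnn.input, outputs=cnn.get_layer('features').output)
train_feats = extractor.predict(X_train, verbose=0)
test_feats = extractor.predict(X_test, verbose=0)

# Classical models trained on the CNN-extracted features
svm = SVC().fit(train_feats, y_train)
rf = RandomForestClassifier().fit(train_feats, y_train)

print("CNN accuracy:", cnn.evaluate(X_test, y_test, verbose=0)[1])
print("SVM on CNN features:", svm.score(test_feats, y_test))
print("Random Forest on CNN features:", rf.score(test_feats, y_test))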



Pseudo code/Algorithms/Flowchart/Steps:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten

# Generate a synthetic dataset for demonstration
# Replace this with your actual dataset loading code
def load_dataset():
    # Generate a synthetic dataset with 1000 samples and 20 features
    np.random.seed(0)
    X = np.random.rand(1000, 20)
    y = np.random.randint(0, 2, size=(1000,))
    return X, y

# Preprocess the dataset (you can replace this with your actual preprocessing code)
def preprocess_dataset(X, y):
    # Your dataset preprocessing code here (if needed)
    return X, y

# Load the dataset
X, y = load_dataset()

# Preprocess the dataset
X, y = preprocess_dataset(X, y)

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define and initialize classification models
svm_model = SVC()
rf_model = RandomForestClassifier()
cnn_model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),  # Input shape matches the number of features
    Dense(32, activation='relu'),
    Dense(1, activation='sigmoid')  # For binary classification, use 'sigmoid' activation
])
cnn_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the models on the training set
svm_model.fit(X_train, y_train)
rf_model.fit(X_train, y_train)
cnn_model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=0)

# Evaluate the models
svm_accuracy = svm_model.score(X_test, y_test)
rf_accuracy = rf_model.score(X_test, y_test)
cnn_loss, cnn_accuracy = cnn_model.evaluate(X_test, y_test)

# Print the output in the specified format
print(f"SVM accuracy: {svm_accuracy:.3f}")
print(f"Random Forest accuracy: {rf_accuracy:.3f}")
print(f"CNN accuracy: {cnn_accuracy:.3f}")
Output:
