
DSA LAB
Experiment 6

GROUP MEMBERS          ROLL NO.
Shrishail Dolle        201090013
Arnav                  201090026
Shashank Patvekar      201090072
Atharva Yesansure      201090051

Aim: To implement an SVM (Support Vector Machine) classifier with kernel techniques – Linear, RBF and Polynomial kernel classifiers.
Software Used: Python 3, Jupyter Notebook
Theory:
• Support Vector Machines (SVM) is a machine learning algorithm which can be used for many different tasks.
• This experiment demonstrates how the algorithm works for binary classification purposes.
• Support Vector Machine (SVM) is a supervised machine learning algorithm used for both classification and regression. Though it can solve regression problems as well, it is best suited for classification. The objective of the SVM algorithm is to find a hyperplane in an N-dimensional space that distinctly classifies the data points. The dimension of the hyperplane depends upon the number of features.
• If the number of input features is two, then the hyperplane is just a line. If the number of input features is three, then the hyperplane becomes a 2-D plane.
• It becomes difficult to imagine when the number of features exceeds three.
• A simple linear SVM classifier works by drawing a straight line between two classes.
• That means all of the data points on one side of the line represent one category and the data points on the other side of the line are put into a different category. This means there can be an infinite number of lines to choose from.
• What makes the linear SVM algorithm better than some of the other algorithms, like k-nearest neighbors, is that it chooses the best line to classify your data points.
• It chooses the line that separates the data and is as far away from the closest data points as possible. Basically, one has some data points on a grid. We are trying to separate these data points by the category they should fit in, but we don't want to have any data in the wrong category.
• That means we are trying to find the line between the two closest points that keeps the other data points separated.
• So the two closest data points give you the support vectors you'll use to find that line. That line is called the decision boundary. The decision boundary doesn't have to be a line. It's also referred to as a hyperplane because you can find the decision boundary with any number of features, not just two.
Hyperplane:
• There can be multiple lines/decision boundaries to segregate the classes in n-dimensional space, but we need to find the best decision boundary that helps to classify the data points.
• This best boundary is known as the hyperplane of the SVM. The dimensions of the hyperplane depend on the features present in the dataset, which means if there are 2 features, then the hyperplane will be a straight line.
• If there are 3 features, then the hyperplane will be a 2-dimensional plane. We always choose the hyperplane that has the maximum margin, which means the maximum distance from the nearest data points. A sketch of the standard formulation follows.
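For reference, a textbook sketch of the maximum-margin idea (standard SVM formulation, not derived in this report), in LaTeX notation:

    % Separating hyperplane, with weight vector w and bias b
    w \cdot x + b = 0
    % SVM maximizes the margin 2/\lVert w \rVert, or equivalently:
    \min_{w,\,b} \tfrac{1}{2}\lVert w \rVert^2
    \quad \text{s.t.} \quad y_i\,(w \cdot x_i + b) \ge 1 \;\; \forall i

Here the labels y_i are taken as -1 or +1, and the constraint forces every training point to lie on the correct side of the margin.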
Support Vectors:
• The data points or vectors that are closest to the hyperplane and which affect the position of the hyperplane are termed Support Vectors. Since these vectors support the hyperplane, they are called support vectors.
Types of SVMs: There are two different types of SVMs, each used for different things:
• Simple SVM: Typically used for linear regression and classification problems.
• Kernel SVM: Has more flexibility for non-linear data because you can add more features to fit a hyperplane in a higher-dimensional space instead of a two-dimensional space.
Advantages:
• Effective on datasets with multiple features, like financial or medical data.
• Effective in cases where the number of features is greater than the number of data points.
• Uses a subset of training points in the decision function called support vectors, which makes it memory efficient.
• Different kernel functions can be specified for the decision function. You can use common kernels, but it's also possible to specify custom kernels.
Disadvantages:
• If the number of features is a lot bigger than the number of data points, avoiding over-fitting when choosing kernel functions and the regularization term is crucial.
• SVMs don't directly provide probability estimates. Those are calculated using an expensive five-fold cross-validation.
• Works best on small sample sets because of its high training time.

Code:
• Importing all necessary libraries and required modules, as sketched below.
• The dataset used is a custom dataset – seeds.csv.
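A minimal sketch of this setup, assuming a standard pandas/scikit-learn stack; the exact imports in the notebook may differ:

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    import seaborn as sns
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score, classification_report

    # Load the custom dataset (file assumed to sit next to the notebook)
    df = pd.read_csv('seeds.csv')
    print(df.head())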

• After doing exploration and data analysis, we come to the conclusion that the two features we choose and tweak are area and asymmetry coefficient (see the sketch below).
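A sketch of the feature selection; the column names 'Area' and 'AsymmetryCoeff' are assumptions, as the real headers in our file may differ:

    # Hypothetical column names for the two chosen features
    features = ['Area', 'AsymmetryCoeff']
    print(df[features].head())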
• Describing the dataframe and exploring its correlation parameters, as sketched below.
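A minimal version of this exploration step:

    # Summary statistics for every numeric column
    print(df.describe())
    # Pairwise correlations between the numeric columns
    print(df.corr(numeric_only=True))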

• Generating an sns heatmap subplot of the correlations with respective annotations (see the sketch below).
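A sketch of this plot, assuming the heatmap shows the correlation matrix:

    # Heatmap of the correlation matrix, annotated with the coefficients
    fig, ax = plt.subplots(figsize=(8, 6))
    sns.heatmap(df.corr(numeric_only=True), annot=True, cmap='coolwarm', ax=ax)
    plt.show()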

• The dataframe needs to be segregated by selective labels based on Type. The to_numpy() method is used to convert the type from dataframe to an array (see the sketch below).
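A sketch of this step, assuming the label column is called 'Type' and that two of the seed types (label values 1 and 2 here, purely illustrative) are kept for binary classification:

    # Keep two classes for a binary problem; the label values are assumed
    mask = df['Type'].isin([1, 2])
    X = df.loc[mask, features].to_numpy()   # feature matrix as a NumPy array
    y = df.loc[mask, 'Type'].to_numpy()     # labels as a NumPy array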
• Splitting the dataframe into two datasets – training and testing datasets, as shown below.
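A common 80/20 split; the actual ratio and random seed used in the notebook are assumptions here:

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)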

Case 1: Linear Kernel
• Using scikit-learn for loading and processing the kernels.
• Using StandardScaler() to transform and scale the dataset.
• This is a linear kernel SVM classifier.
• Precision, accuracy and other parameters are computed below.
• Decision boundaries for the Linear Kernel SVM classifier.
• Displaying the Linear classifier support vectors (see the sketch after this list).
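A hedged end-to-end sketch of this case: scaling, fitting a linear-kernel SVC, printing the metrics, and plotting the decision regions with the support vectors circled. The helper plot_boundary() is an illustrative name, not from the original notebook, and is reused for the other kernels below:

    # Scale the features, then fit a linear-kernel SVM
    scaler = StandardScaler()
    X_train_s = scaler.fit_transform(X_train)
    X_test_s = scaler.transform(X_test)

    clf_linear = SVC(kernel='linear', C=1.0)
    clf_linear.fit(X_train_s, y_train)

    # Precision, recall, accuracy and the other parameters
    y_pred = clf_linear.predict(X_test_s)
    print(accuracy_score(y_test, y_pred))
    print(classification_report(y_test, y_pred))

    # Illustrative helper: decision regions over a mesh, support vectors circled
    def plot_boundary(clf, X, y, title):
        x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
        y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
        xx, yy = np.meshgrid(np.linspace(x_min, x_max, 200),
                             np.linspace(y_min, y_max, 200))
        Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
        plt.contourf(xx, yy, Z, alpha=0.3)
        plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='k')
        plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
                    s=120, facecolors='none', edgecolors='r',
                    label='support vectors')
        plt.title(title)
        plt.legend()
        plt.show()

    plot_boundary(clf_linear, X_train_s, y_train, 'Linear Kernel SVM')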


Case 2: RBF Kernel
• This is an RBF kernel SVM classifier.
• Precision, accuracy and other parameters are computed below.
• Decision boundaries for the RBF Kernel SVM classifier.
• Displaying the RBF classifier support vectors (see the sketch after this list).
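The same pattern with an RBF kernel, reusing the scaled splits and the plot_boundary() helper from Case 1; gamma='scale' is scikit-learn's default and an assumption here:

    clf_rbf = SVC(kernel='rbf', C=1.0, gamma='scale')
    clf_rbf.fit(X_train_s, y_train)
    print(accuracy_score(y_test, clf_rbf.predict(X_test_s)))
    print(classification_report(y_test, clf_rbf.predict(X_test_s)))
    plot_boundary(clf_rbf, X_train_s, y_train, 'RBF Kernel SVM')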

Case 3: Polynomial Kernel
• This is a Polynomial kernel SVM classifier.
• Precision, accuracy and other parameters are computed below.
• Decision boundaries for the Polynomial Kernel SVM classifier.
• Displaying the Polynomial classifier support vectors (see the sketch after this list).
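The same pattern with a polynomial kernel; degree=3 is scikit-learn's default and an assumption about what the notebook used:

    clf_poly = SVC(kernel='poly', degree=3, C=1.0)
    clf_poly.fit(X_train_s, y_train)
    print(accuracy_score(y_test, clf_poly.predict(X_test_s)))
    print(classification_report(y_test, clf_poly.predict(X_test_s)))
    plot_boundary(clf_poly, X_train_s, y_train, 'Polynomial Kernel SVM')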

Conclusion:
• The usage, meaning and theory behind using SVM classifiers are understood.
• Using a custom dataset, the accuracies have been found for different kernels for SVM classifiers.
• Linear, RBF and Polynomial Kernel SVM classifiers were implemented successfully using scikit-learn and other libraries.
• The performance, accuracy, decision boundary plots and support vectors are determined.
