
Bayesian Decision Theory 

Bayesian decision theory is a simple but fundamental approach to a variety of problems such as pattern classification. It helps us decide whether to select a class with probability 'x' or the opposite class with probability 'y', based on a certain feature.

Basic Decision
Let us take an example where an electronics store company wants to know
whether a customer is going to buy a computer or not. So we have the
following two buying classes:
w1 – Yes (Customer will buy a computer)
w2 – No (Customer will not buy a computer)

Now, we will look into the past records of our customer database. We will
note down the number of customers buying computers and also the number
of customers not buying a computer. Now, we will calculate the probabilities
of customers buying a computer. Let it be P(w1). Similarly, the probability of
customers not buying a computer is P(w2).

Now we will do a basic comparison for our future customers.


For a new customer,
If P(w1) > P(w2), then the customer will buy a computer (w1)
And, if P(w2) > P(w1), then the customer will not buy a computer (w2)

The main problem with this basic decision method is that it will always give the
same decision for all future customers, based only on previous records. This
is illogical and absurd.
So we need something that will help us in making better decisions for future
customers. We do that by introducing some features. Let’s say we add a
feature ‘x’ where ‘x’ denotes the age of the customer. Now with this added
feature, we will be able to make better decisions.

For our class w1 and feature ‘x’, we have:  

P(w1 | x) = [P(x | w1) * P(w1)] / P(x)

There are 4 terms in this formula that we need to understand:
1. Prior – P(w1) is the Prior Probability that w1 is true before the data is
observed
2. Posterior – P(w1 | x) is the Posterior Probability that w1 is true after the data
is observed.
3. Evidence – P(x) is the total probability of observing the data 'x', regardless of class.
4. Likelihood – P(x | w1) is the probability of observing 'x' given that w1 is true.
P(w1 | x) is read as "the probability of w1 given x".

More precisely, it is the probability that a customer will buy a computer, given that customer's age. Now we are ready to make our decision:
For a new customer,
If P(w1 | x) > P(w2 | x), then the customer will buy a computer (w1)
And, if P(w2 | x) > P(w1 | x), then the customer will not buy a computer (w2)

Now, from the formula, you can see that for both our classes w1 and w2, the
denominator P(x) is the same constant. So we can use this fact and form an
equivalent decision rule as below:

If P(x | w1)*P(w1) > P(x | w2)*P(w2), then the customer will buy a computer
(w1)
And, if P(x | w2)*P(w2) > P(x | w1)*P(w1), then the customer will not buy a
computer (w2)
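
As a quick illustration, here is a minimal Python sketch of this comparison. The prior and likelihood values used below are invented for illustration and are not taken from any real customer database.

# Minimal sketch of the Bayes decision rule for two classes.
# The priors and class-conditional likelihoods below are illustrative
# values, not taken from any real customer data.

def decide(p_x_given_w1, p_w1, p_x_given_w2, p_w2):
    """Return the class with the larger value of P(x | w) * P(w)."""
    score_w1 = p_x_given_w1 * p_w1
    score_w2 = p_x_given_w2 * p_w2
    return "w1 (will buy)" if score_w1 > score_w2 else "w2 (will not buy)"

# Example: prior of buying is 0.4, and for this customer's age x,
# P(x | w1) = 0.25 and P(x | w2) = 0.10 (hypothetical numbers).
print(decide(0.25, 0.4, 0.10, 0.6))   # -> "w1 (will buy)"
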
Example
Consider a table of people who either got sunburned or not:

Name  | Hair   | Height  | Weight  | Lotion | Result
Sarah | blonde | average | light   | yes    | sunburned
Dana  | blonde | tall    | average | yes    | none
Alex  | brown  | short   | average | yes    | none
Annie | blonde | short   | average | yes    | sunburned
Emily | red    | average | heavy   | yes    | sunburned
Pete  | brown  | tall    | heavy   | no     | none
John  | brown  | average | heavy   | no     | none
Kate  | blonde | short   | light   | yes    | none

Table 1

Task: when a new record arrives, our model should predict whether that person is
sunburned or not. The new record is as follows:

hair=red, height=short, weight=light, lotion=yes, Result = ? (SB / None)

(We have to find whether the new person is sunburned or not.)

Solution:
Bayes' theorem states that

P(w1 | x) = [P(x | w1) * P(w1)] / P(x)

There are two possible results in this problem (Sunburned or None), so we need to
evaluate P(w1 | x) for two cases:

Case 1: P(SB | x) – probability of Sunburned, given x

Case 2: P(None | x) – probability of None, given x

Here x = (hair=red, height=short, weight=light, lotion=yes).

We need to find the probability of each condition one by one, as shown in Table 2.

Case 1.
First we need to find P(w1 | x) = P(SB | x), i.e. the probability of Sunburned, given x.

Here w1 = SB and x = (hair=red, height=short, weight=light, lotion=yes), so

P(w1 | x) = P(SB | hair=red, height=short, weight=light, lotion=yes).

That is, we want the probability of sunburned given the condition "hair=red,
height=short, weight=light, lotion=yes".

From the Bayes equation, we need to find P(x | w1):

P(x | w1) = P(hair=red, height=short, weight=light, lotion=yes | SB)

From Table 2, which is built from Table 1, we can read off the probability of each
condition separately.


Hair   | SB  | None
blonde | 2/3 | 2/5
brown  | 0   | 3/5
red    | 1/3 | 0

Height | SB  | None
avg    | 2/3 | 1/5
tall   | 0   | 2/5
short  | 1/3 | 2/5

Weight | SB  | None
light  | 1/3 | 1/5
avg    | 1/3 | 2/5
heavy  | 1/3 | 2/5

Lotion | SB  | None
no     | 0   | 2/5
yes    | 3/3 | 3/5

Table 2.

P(hair=red | SB) = 1/3. It means that out of the 3 sunburned people, one has red hair
[see Table 1]. Similarly,

P(height=short | SB) = 1/3

P(weight=light | SB) = 1/3

P(lotion=yes | SB) = 3/3

So

P(x | w1) = P(hair=red | SB) * P(height=short | SB) * P(weight=light | SB) * P(lotion=yes | SB)

= 1/3 * 1/3 * 1/3 * 3/3 = 3/81 = 1/27 ≈ 0.0370

P(w1) = P(SB) – the probability of sunburned

= 3/8 = 0.375 (out of 8 records, 3 are sunburned; refer to Table 1)

Applying Bayes' theorem:

P(w1 | x) = [P(x | w1) * P(w1)] / P(x)

P(x | w1) = 0.0370, P(w1) = 0.375

Here P(x) is the probability of the new data. It is the same constant for both classes,
so we can skip the denominator when comparing them.

P(x | w1) * P(w1) = 0.0370 * 0.375 ≈ 0.0139

Case 2.
We need to find P(None | x), i.e. P(None | hair=red, height=short, weight=light, lotion=yes).

The procedure is the same as above, using the None columns of Table 2:

P(x | None) = P(hair=red | None) * P(height=short | None) * P(weight=light | None) * P(lotion=yes | None)

= 0 * 2/5 * 1/5 * 3/5 = 0 (no "None" record has red hair)

P(None) = 5/8 = 0.625, so

P(x | None) * P(None) = 0

Finally,

P(x | SB) * P(SB) = 0.0139
P(x | None) * P(None) = 0

Result

Of these two values, the one for Sunburned is larger, so the model predicts Sunburned
for the given x = (hair=red, height=short, weight=light, lotion=yes).
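
For reference, here is a short Python sketch that reproduces this computation directly from Table 1, under the same assumption used above that the features are conditionally independent given the class (the naive Bayes assumption).

# Naive Bayes computation for the sunburn example (Table 1).
records = [
    ("blonde", "average", "light",   "yes", "sunburned"),
    ("blonde", "tall",    "average", "yes", "none"),
    ("brown",  "short",   "average", "yes", "none"),
    ("blonde", "short",   "average", "yes", "sunburned"),
    ("red",    "average", "heavy",   "yes", "sunburned"),
    ("brown",  "tall",    "heavy",   "no",  "none"),
    ("brown",  "average", "heavy",   "no",  "none"),
    ("blonde", "short",   "light",   "yes", "none"),
]
new_x = ("red", "short", "light", "yes")   # hair, height, weight, lotion

def score(label):
    rows = [r for r in records if r[4] == label]
    prior = len(rows) / len(records)                     # P(w)
    likelihood = 1.0
    for i, value in enumerate(new_x):                    # P(x | w), feature by feature
        likelihood *= sum(r[i] == value for r in rows) / len(rows)
    return prior * likelihood                            # P(x | w) * P(w)

for label in ("sunburned", "none"):
    print(label, round(score(label), 4))
# sunburned -> 0.375 * (1/3 * 1/3 * 1/3 * 3/3) ≈ 0.0139
# none      -> 0, because no "none" record has red hair
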

Support Vector Machine(SVM)


Support Vector Machine or SVM is one of the most popular Supervised Learning algorithms,
which is used for Classification as well as Regression problems. However, primarily, it is used
for Classification problems in Machine Learning.

The goal of the SVM algorithm is to create the best line or decision boundary that can segregate
n-dimensional space into classes so that we can easily put the new data point in the correct
category in the future. This best decision boundary is called a hyperplane.

SVM chooses the extreme points/vectors that help in creating the hyperplane. These extreme
cases are called support vectors, and hence the algorithm is termed Support Vector Machine.
Consider the below diagram, in which two different categories are classified using a
decision boundary or hyperplane:
Example: SVM can be understood with the example that we used for the KNN classifier.
Suppose we see a strange cat that also has some features of a dog; if we want a model that
can accurately identify whether it is a cat or a dog, such a model can be created by using
the SVM algorithm. We will first train our model with lots of images of cats and dogs so
that it can learn the different features of cats and dogs, and then test it with this
strange creature. The SVM creates a decision boundary between these two classes (cat and
dog) using the extreme cases (support vectors) of each, and on the basis of these support
vectors it will classify the creature as a cat. Consider the below diagram:
SVM algorithm can be used for Face detection, image classification, text categorization, etc.

Types of SVM
SVM can be of two types:

o Linear SVM: Linear SVM is used for linearly separable data, which means that if a dataset
can be classified into two classes by using a single straight line, then such data is termed
linearly separable data, and the classifier used is called a Linear SVM classifier.
o Non-linear SVM: Non-Linear SVM is used for non-linearly separable data, which means that
if a dataset cannot be classified by using a straight line, then such data is termed non-
linear data, and the classifier used is called a Non-linear SVM classifier.

Hyperplane and Support Vectors in the SVM algorithm:


Hyperplane: There can be multiple lines/decision boundaries to segregate the classes in n-
dimensional space, but we need to find out the best decision boundary that helps to classify the
data points. This best boundary is known as the hyperplane of SVM.

The dimensions of the hyperplane depend on the number of features present in the dataset:
if there are 2 features (as shown in the image), then the hyperplane is a straight line,
and if there are 3 features, then the hyperplane is a 2-dimensional plane.

We always create the hyperplane that has the maximum margin, which means the maximum
distance between the hyperplane and the nearest data points of either class.
Support Vectors:

The data points or vectors that are closest to the hyperplane and which affect the position
of the hyperplane are termed support vectors. Since these vectors support the hyperplane,
they are called support vectors.

How does SVM work?


Linear SVM:

The working of the SVM algorithm can be understood by using an example. Suppose we have a
dataset that has two tags (green and blue), and the dataset has two features, x1 and x2. We
want a classifier that can classify the pair (x1, x2) of coordinates as either green or
blue. Consider the below image:

As it is a 2-D space, by just using a straight line we can easily separate these two
classes. But there can be multiple lines that can separate these classes. Consider the
below image:
Hence, the SVM algorithm helps to find the best line or decision boundary; this best
boundary or region is called a hyperplane. The SVM algorithm finds the points of both
classes that are closest to the boundary. These points are called support vectors. The
distance between these vectors and the hyperplane is called the margin, and the goal of
SVM is to maximize this margin.
The hyperplane with maximum margin is called the optimal hyperplane.
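
A minimal sketch of a linear SVM using scikit-learn is shown below; the tiny two-feature dataset is invented purely to illustrate the fit/predict workflow and the support vectors.

# Minimal linear SVM sketch using scikit-learn on a tiny invented dataset.
import numpy as np
from sklearn.svm import SVC

# Two features (x1, x2); labels 0 = blue, 1 = green (toy data).
X = np.array([[1, 2], [2, 3], [3, 3], [6, 5], [7, 7], [8, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear")        # linear kernel -> straight-line hyperplane
clf.fit(X, y)

print(clf.support_vectors_)       # the extreme points that define the margin
print(clf.predict([[4, 4]]))      # classify a new point
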
Non-Linear SVM:

If data is linearly arranged, then we can separate it by using a straight line, but for non-linear
data, we cannot draw a single straight line. Consider the below image:
So to separate these data points, we need to add one more dimension. For linear data, we have
used two dimensions x and y, so for non-linear data, we will add a third dimension z. It can be
calculated as:

z = x² + y²

By adding the third dimension, the sample space will become as below image:
Now SVM will divide the datasets into classes in the following way. Consider the below
image:
Since we are in 3-D space, the separating hyperplane looks like a plane parallel to the
x-axis. If we convert it back to 2-D space with z = 1, it becomes a circular boundary
around the inner class.
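
A small sketch of this idea is shown below: each 2-D point is mapped to (x, y, z) with z = x² + y², after which a simple plane (for example z = 1) can separate points that were not linearly separable in 2-D. The sample points are invented for illustration.

# Sketch of the manual feature map z = x^2 + y^2.
import numpy as np

# Invented 2-D points: class 0 lies near the origin, class 1 on an outer ring,
# so no straight line in (x, y) separates them.
inner = np.array([[0.2, 0.1], [-0.3, 0.2], [0.1, -0.25]])     # class 0
outer = np.array([[2.0, 0.1], [-1.8, 0.9], [0.3, -2.1]])      # class 1

def add_z(points):
    z = (points ** 2).sum(axis=1, keepdims=True)   # z = x^2 + y^2
    return np.hstack([points, z])

print(add_z(inner))   # small z values
print(add_z(outer))   # large z values -> a plane such as z = 1 separates the classes
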
K-Nearest Neighbor (K-NN)
o K-Nearest Neighbour is one of the simplest Machine Learning algorithms based on
Supervised Learning technique.
o K-NN algorithm assumes the similarity between the new case/data and available cases
and puts the new case into the category that is most similar to the available categories.
o K-NN algorithm stores all the available data and classifies a new data point based on
similarity. This means that when new data appears, it can be easily classified into a
well-suited category by using the K-NN algorithm.
o K-NN algorithm can be used for Regression as well as for Classification but mostly it is
used for the Classification problems.
o K-NN is a non-parametric algorithm, which means it does not make any assumption on
underlying data.
o It is also called a lazy learner algorithm because it does not learn from the training
set immediately; instead, it stores the dataset and, at the time of classification,
performs an action on the dataset.
o The KNN algorithm at the training phase just stores the dataset, and when it gets new
data, it classifies that data into the category that is most similar to the new data.
o Example: Suppose we have an image of a creature that looks similar to both a cat and a
dog, but we want to know whether it is a cat or a dog. For this identification, we can use
the KNN algorithm, as it works on a similarity measure. Our KNN model will compare the
features of the new image with those of the cat and dog images and, based on the most
similar features, put it in either the cat or the dog category.

Why do we need a K-NN Algorithm?


Suppose there are two categories, Category A and Category B, and we have a new data point
x1; in which of these categories will this data point lie? To solve this type of problem,
we need a K-NN algorithm. With the help of K-NN, we can easily identify the category or
class of a particular data point. Consider the below diagram:
How does K-NN work?
The K-NN working can be explained on the basis of the below algorithm:

o Step-1: Select the number K of the neighbors


o Step-2: Calculate the Euclidean distance of K number of neighbors
o Step-3: Take the K nearest neighbors as per the calculated Euclidean distance.
o Step-4: Among these k neighbors, count the number of the data points in each category.
o Step-5: Assign the new data point to the category for which the number of neighbors
is maximum.
o Step-6: Our model is ready.

Suppose we have a new data point and we need to put it in the required category. Consider the
below image:
o Firstly, we will choose the number of neighbors, so we will choose k = 5.
o Next, we will calculate the Euclidean distance between the data points. The Euclidean
distance is the distance between two points, which we have already studied in geometry.
It can be calculated as:
  d = sqrt((x2 - x1)² + (y2 - y1)²)
o By calculating the Euclidean distance we get the nearest neighbors: three nearest
neighbors in category A and two nearest neighbors in category B. Consider the below
image:
o As 3 of the 5 nearest neighbors are from category A, this new data point must belong
to category A.
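
The steps above can be written as a short from-scratch sketch; the 2-D training points and the query point below are invented, and in practice scikit-learn's KNeighborsClassifier performs the same procedure.

# From-scratch K-NN sketch: Euclidean distance + majority vote among k neighbors.
import math
from collections import Counter

# Invented training points: (x1, x2, category)
train = [(1, 1, "A"), (1, 2, "A"), (2, 2, "A"), (2, 1, "A"),
         (6, 6, "B"), (7, 6, "B"), (6, 7, "B")]

def knn_predict(point, k=5):
    nearest = sorted(train, key=lambda r: math.dist(point, (r[0], r[1])))
    votes = Counter(r[2] for r in nearest[:k])      # count categories among k nearest
    return votes.most_common(1)[0][0]

print(knn_predict((3, 3)))   # closer to the A cluster -> "A"
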

How to select the value of K in the K-NN Algorithm?


Below are some points to remember while selecting the value of K in the K-NN algorithm:

o There is no particular way to determine the best value for "K", so we need to try some
values to find the best out of them (see the sketch after this list). The most preferred
value for K is 5.
o A very low value for K such as K=1 or K=2, can be noisy and lead to the effects of
outliers in the model.
o Large values for K reduce the effect of noise, but values that are too large can smooth
over the boundaries between the classes.
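
One common way to try several values of K is cross-validation; the sketch below uses scikit-learn's built-in iris dataset purely as stand-in data for illustration.

# Sketch: trying several values of K with cross-validation to pick the best one.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

for k in (1, 3, 5, 7, 9):
    knn = KNeighborsClassifier(n_neighbors=k)
    score = cross_val_score(knn, X, y, cv=5).mean()   # mean accuracy over 5 folds
    print(f"k={k}: accuracy={score:.3f}")
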

Advantages of KNN Algorithm:


o It is simple to implement.
o It is robust to noisy training data.
o It can be more effective if the training data is large.
Disadvantages of KNN Algorithm:

o It always needs to determine the value of K, which can be complex at times.
o The computation cost is high because the distance to every training sample must be
calculated for each new data point.

Decision Tree
o Decision Tree is a Supervised learning technique that can be used for both
classification and Regression problems, but mostly it is preferred for solving
Classification problems. It is a tree-structured classifier, where internal nodes represent
the features of a dataset, branches represent the decision rules and each leaf node
represents the outcome.
o In a Decision tree, there are two types of nodes, which are the Decision Node and the Leaf
Node. Decision nodes are used to make any decision and have multiple branches,
whereas Leaf nodes are the outputs of those decisions and do not contain any further
branches.
o The decisions or the test are performed on the basis of features of the given dataset.
o It is a graphical representation for getting all the possible solutions to a
problem/decision based on given conditions.
o It is called a decision tree because, similar to a tree, it starts with the root node, which
expands on further branches and constructs a tree-like structure.
o In order to build a tree, we use the CART algorithm, which stands for Classification
and Regression Tree algorithm.
o A decision tree simply asks a question and, based on the answer (Yes/No), further splits
the tree into subtrees.
o Below diagram explains the general structure of a decision tree:

Note: A decision tree can contain categorical data (YES/NO) as well as numeric data.
Why use Decision Trees?
There are various algorithms in Machine learning, so choosing the best algorithm for the given
dataset and problem is the main point to remember while creating a machine learning model.
Below are the two reasons for using the Decision tree:

o Decision Trees usually mimic human thinking ability while making a decision, so it is
easy to understand.
o The logic behind the decision tree can be easily understood because it shows a tree-like
structure.

Decision Tree Terminologies


 Root Node: Root node is from where the decision tree starts. It represents the entire dataset,
which further gets divided into two or more homogeneous sets.
 Leaf Node: Leaf nodes are the final output node, and the tree cannot be segregated further
after getting a leaf node.
 Splitting: Splitting is the process of dividing the decision node/root node into sub-nodes
according to the given conditions.
 Branch/Sub Tree: A tree formed by splitting the tree.
 Pruning: Pruning is the process of removing the unwanted branches from the tree.
 Parent/Child node: A node that splits into sub-nodes is called the parent node of those
sub-nodes, and the sub-nodes are called its child nodes.

How does the Decision Tree algorithm Work?

In a decision tree, for predicting the class of the given dataset, the algorithm starts from the root
node of the tree. This algorithm compares the values of root attribute with the record (real
dataset) attribute and, based on the comparison, follows the branch and jumps to the next node.

For the next node, the algorithm again compares the attribute value with the other sub-nodes
and moves further. It continues the process until it reaches a leaf node of the tree. The
complete process can be better understood using the below algorithm:

o Step-1: Begin the tree with the root node, say S, which contains the complete dataset.
o Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM).
o Step-3: Divide S into subsets that contain the possible values of the best attribute.
o Step-4: Generate the decision tree node, which contains the best attribute.
o Step-5: Recursively make new decision trees using the subsets of the dataset created in Step-3.
Continue this process until a stage is reached where the nodes cannot be classified further;
the final nodes are called leaf nodes.

Example: Suppose there is a candidate who has a job offer and wants to decide whether he
should accept the offer or not. To solve this problem, the decision tree starts with the root
node (the Salary attribute, chosen by ASM). The root node splits further into the next decision
node (distance from the office) and one leaf node based on the corresponding labels. The next
decision node further splits into one decision node (cab facility) and one leaf node. Finally,
the decision node splits into two leaf nodes (Accepted offer and Declined offer). Consider the
below diagram:
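
A minimal scikit-learn sketch of training such a tree is shown below; the 0/1 encoding of salary, distance and cab facility, and the labels, are invented purely for illustration.

# Decision tree sketch with scikit-learn on an invented, encoded job-offer dataset.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [salary_ok, distance_short, cab_facility] encoded as 0/1 (invented data).
X = [[1, 1, 1], [1, 1, 0], [1, 0, 1], [1, 0, 0], [0, 1, 1], [0, 0, 0]]
y = [1, 1, 1, 0, 0, 0]   # 1 = accept offer, 0 = decline

tree = DecisionTreeClassifier(criterion="entropy")   # split using information gain
tree.fit(X, y)

print(export_text(tree, feature_names=["salary_ok", "distance_short", "cab_facility"]))
print(tree.predict([[1, 0, 1]]))   # predict for a new candidate
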
Attribute Selection Measures
While implementing a decision tree, the main issue that arises is how to select the best
attribute for the root node and for the sub-nodes. To solve such problems there is a
technique called the Attribute Selection Measure, or ASM. With this measure, we can easily
select the best attribute for the nodes of the tree. There are two popular techniques for ASM, which are:

o Information Gain
o Gini Index

1. Information Gain:
o Information gain is the measurement of changes in entropy after the segmentation of a dataset
based on an attribute.
o It calculates how much information a feature provides us about a class.
o According to the value of information gain, we split the node and build the decision tree.
o A decision tree algorithm always tries to maximize the value of information gain, and a
node/attribute having the highest information gain is split first. It can be calculated using the
below formula:
Information Gain = Entropy(S) - [(Weighted Avg) * Entropy(each feature)]

Entropy: Entropy is a metric to measure the impurity in a given attribute. It specifies the
randomness in the data. Entropy can be calculated as:

Entropy(S) = -P(yes) log2(P(yes)) - P(no) log2(P(no))

Where,

o S= Total number of samples


o P(yes)= probability of yes
o P(no)= probability of no
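
The two formulas above can be combined into a small sketch that computes the information gain of a split; the yes/no counts used below are invented for illustration.

# Sketch: entropy and information gain for a yes/no labelled dataset.
import math

def entropy(yes, no):
    total = yes + no
    result = 0.0
    for count in (yes, no):
        if count:                        # 0 * log(0) is treated as 0
            p = count / total
            result -= p * math.log2(p)
    return result

# Invented example: 9 yes / 5 no overall, split by a feature into
# a branch with (6 yes, 2 no) and a branch with (3 yes, 3 no).
parent = entropy(9, 5)
weighted = (8 / 14) * entropy(6, 2) + (6 / 14) * entropy(3, 3)
print("Information gain:", parent - weighted)
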

2. Gini Index:
o Gini index is a measure of impurity or purity used while creating a decision tree in the
CART(Classification and Regression Tree) algorithm.
o An attribute with a low Gini index should be preferred over one with a high Gini index.
o It only creates binary splits, and the CART algorithm uses the Gini index to create binary splits.
o Gini index can be calculated using the below formula:

Gini Index = 1 - Σj (Pj)²
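
A matching sketch for the Gini index of a node, following the formula above (the class counts are invented):

# Sketch: Gini index of a node from its class counts.
def gini(counts):
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

print(gini([6, 2]))   # impure node -> 0.375
print(gini([4, 0]))   # pure node   -> 0.0
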

Pruning: Getting an Optimal Decision tree


Pruning is a process of deleting the unnecessary nodes from a tree in order to get the optimal
decision tree.

A tree that is too large increases the risk of overfitting, while a small tree may not
capture all the important features of the dataset. Pruning is a technique that decreases
the size of the learned tree without reducing accuracy. There are mainly two types of
tree pruning techniques used:

o Cost Complexity Pruning


o Reduced Error Pruning.
Advantages of the Decision Tree
o It is simple to understand, as it follows the same process that a human follows while
making any decision in real life.
o It can be very useful for solving decision-related problems.
o It helps to think about all the possible outcomes for a problem.
o There is less requirement of data cleaning compared to other algorithms.

Disadvantages of the Decision Tree


o The decision tree contains lots of layers, which makes it complex.
o It may have an overfitting issue, which can be resolved using the Random Forest algorithm.
o For more class labels, the computational complexity of the decision tree may increase.
