o According to the value of information gain, we split the node and build the decision
tree.
o A decision tree algorithm always tries to maximize the value of information gain, and
a node/attribute having the highest information gain is split first. It can be calculated
using the below formula:
1. Information Gain = Entropy(S) - [(Weighted Avg) × Entropy(each feature)]
Entropy: Entropy is a metric to measure the impurity in a given attribute. It specifies
randomness in data. Entropy can be calculated as:
Entropy(S) = -P(yes) log2 P(yes) - P(no) log2 P(no)
Where,
o S= Total number of samples
o P(yes)= probability of yes
o P(no)= probability of no
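These two formulas translate directly into code. Below is a minimal Python sketch (the helper names and the use of NumPy label arrays are illustrative assumptions, not part of the dataset implementation later in these notes):

import numpy as np

def entropy(labels):
    # Entropy(S) = -P(yes) log2 P(yes) - P(no) log2 P(no),
    # generalized here to any number of classes
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, feature_values):
    # Information Gain = Entropy(S) - (Weighted Avg) * Entropy(each feature subset)
    total = len(labels)
    weighted_entropy = 0.0
    for v in np.unique(feature_values):
        subset = labels[feature_values == v]
        weighted_entropy += (len(subset) / total) * entropy(subset)
    return entropy(labels) - weighted_entropy

# Example: a feature that perfectly separates the labels has maximal gain
y = np.array(['yes', 'yes', 'no', 'no'])
x = np.array(['a', 'a', 'b', 'b'])
print(information_gain(y, x))  # 1.0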
2. Gini Index:
o Gini index is a measure of impurity or purity used while creating a decision tree in the
CART(Classification and Regression Tree) algorithm.
o An attribute with a low Gini index should be preferred over one with a high Gini index.
o It creates only binary splits; the CART algorithm uses the Gini index to produce them.
o Gini index can be calculated using the below formula:
Gini Index = 1 - Σj (Pj)²
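The same formula in code, as a minimal sketch (assuming class labels in a NumPy array, matching the entropy sketch above):

import numpy as np

def gini_index(labels):
    # Gini Index = 1 - sum over classes j of (P_j)^2
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1 - np.sum(p ** 2)

# Example: a pure node has Gini 0, a 50/50 split has Gini 0.5
print(gini_index(np.array(['yes', 'yes', 'no', 'no'])))  # 0.5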
Pruning: Getting an Optimal Decision tree
Pruning is a process of deleting the unnecessary nodes from a tree in order to get the optimal
decision tree.
A too-large tree increases the risk of overfitting, while a too-small tree may not capture all the important features of the dataset. A technique that decreases the size of the learning tree without reducing accuracy is therefore known as pruning. There are mainly two types of tree pruning techniques:
o Cost Complexity Pruning (sketched in code below)
o Reduced Error Pruning
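In scikit-learn, cost complexity pruning is exposed through the ccp_alpha parameter of DecisionTreeClassifier. A minimal sketch, assuming x_train and y_train are training arrays like those built in the implementation later in these notes (the alpha value is illustrative):

from sklearn.tree import DecisionTreeClassifier

# Compute the effective alphas of the pruning path for the training data
clf = DecisionTreeClassifier(random_state=0)
path = clf.cost_complexity_pruning_path(x_train, y_train)
print(path.ccp_alphas)  # candidate alphas, from no pruning to pruning everything

# A larger ccp_alpha prunes more aggressively; 0.01 is an arbitrary example value
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.01)
pruned.fit(x_train, y_train)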
Advantages of the Decision Tree
o It is simple to understand, as it follows the same process that a human follows when making a decision in real life.
o It can be very useful for solving decision-related problems.
o It helps to think about all the possible outcomes for a problem.
o It requires less data cleaning compared to other algorithms.
Disadvantages of the Decision Tree
o The decision tree contains lots of layers, which makes it complex.
o It may have an overfitting issue, which can be resolved using the Random Forest
algorithm.
o For more class labels, the computational complexity of the decision tree may increase.
Python Implementation of Decision Tree
Now we will implement the Decision Tree using Python. For this, we will use the dataset "user_data.csv", which we have used in previous classification models. By using the same dataset, we can compare the Decision Tree classifier with other classification models such as KNN, SVM, Logistic Regression, etc.
Steps will also remain the same, which are given below:
o Data Pre-processing step
o Fitting a Decision-Tree algorithm to the Training set
o Predicting the test result
o Test accuracy of the result (creation of the confusion matrix)
o Visualizing the test set result.
1. Data Pre-Processing Step:
Below is the code for the pre-processing step:
# importing libraries
import numpy as nm
import matplotlib.pyplot as mtp
import pandas as pd

# importing datasets
data_set = pd.read_csv('user_data.csv')

# extracting independent and dependent variables
x = data_set.iloc[:, [2, 3]].values
y = data_set.iloc[:, 4].values

# splitting the dataset into training and test set
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)

# feature scaling
from sklearn.preprocessing import StandardScaler
st_x = StandardScaler()
x_train = st_x.fit_transform(x_train)
x_test = st_x.transform(x_test)
In the above code, we have pre-processed the data: we loaded the dataset, extracted the features, split it into training and test sets, and applied feature scaling.
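Steps 2 to 4 (fitting the classifier, predicting the test result, and creating the confusion matrix) follow the same pattern as the previous classification models. A minimal sketch using scikit-learn's DecisionTreeClassifier; the entropy criterion here is an assumption, chosen to match the information-gain discussion above:

# Fitting a Decision Tree classifier to the Training set
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion='entropy', random_state=0)  # entropy criterion assumed
classifier.fit(x_train, y_train)

# Predicting the test set result
y_pred = classifier.predict(x_test)

# Creating the confusion matrix from real vs. predicted labels
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)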
In the below output image, the predicted output and real test output are given. We can clearly see that some values in the prediction vector differ from the real vector values; these are prediction errors.
In the above output image, we can see the confusion matrix, which has 6 + 3 = 9 incorrect predictions and 62 + 29 = 91 correct predictions. Therefore, we can say that, compared to other classification models, the Decision Tree classifier made a good prediction.
5. Visualizing the training set result:
Here we will visualize the training set result. To do so, we will plot a graph for the decision tree classifier. The classifier will predict Yes or No for the users who have either purchased or not purchased the SUV car, as we did in Logistic Regression. Below is the code for it:
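A sketch following the same plotting pattern as the test set code further below (the colors and titles are assumptions consistent with the earlier classifiers):

#Visualizing the training set result
from matplotlib.colors import ListedColormap
x_set, y_set = x_train, y_train
x1, x2 = nm.meshgrid(nm.arange(start = x_set[:, 0].min() - 1, stop = x_set[:, 0].max() + 1, step = 0.01),
                     nm.arange(start = x_set[:, 1].min() - 1, stop = x_set[:, 1].max() + 1, step = 0.01))
# color each region by the class the tree predicts there
mtp.contourf(x1, x2, classifier.predict(nm.array([x1.ravel(), x2.ravel()]).T).reshape(x1.shape),
             alpha = 0.75, cmap = ListedColormap(('purple', 'green')))
# overlay the actual training points, colored by their true class
for i, j in enumerate(nm.unique(y_set)):
    mtp.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1],
                color = ('purple', 'green')[i], label = j)
mtp.title('Decision Tree Algorithm (Training set)')
mtp.xlabel('Age')
mtp.ylabel('Estimated Salary')
mtp.legend()
mtp.show()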
The above output is completely different from the other classification models. It has both vertical and horizontal lines that split the dataset according to the age and estimated salary variables.
As we can see, the tree is trying to capture each data point, which is a case of overfitting.
6. Visualizing the test set result:
Visualization of test set result will be similar to the visualization of the training set except that
the training set will be replaced with the test set.
#Visualizing the test set result
from matplotlib.colors import ListedColormap
x_set, y_set = x_test, y_test
x1, x2 = nm.meshgrid(nm.arange(start = x_set[:, 0].min() - 1, stop = x_set[:, 0].max() + 1, step = 0.01),
                     nm.arange(start = x_set[:, 1].min() - 1, stop = x_set[:, 1].max() + 1, step = 0.01))
# color each region by the class the tree predicts there
mtp.contourf(x1, x2, classifier.predict(nm.array([x1.ravel(), x2.ravel()]).T).reshape(x1.shape),
             alpha = 0.75, cmap = ListedColormap(('purple', 'green')))
# overlay the actual test points, colored by their true class
for i, j in enumerate(nm.unique(y_set)):
    mtp.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1],
                color = ('purple', 'green')[i], label = j)
mtp.title('Decision Tree Algorithm (Test set)')
mtp.xlabel('Age')
mtp.ylabel('Estimated Salary')
mtp.legend()
mtp.show()
As we can see in the above image, there are some green data points within the purple region and vice versa. These are the incorrect predictions that we discussed for the confusion matrix.
Unstructured Pruning:
Unstructured pruning removes individual parameters anywhere in the model and so does not preserve the underlying structure of the model, meaning that the pruned model will have an irregular sparsity pattern in comparison to the original model. Unstructured pruning is suitable for models without a structured architecture, such as fully connected neural networks, where the parameters are organized into a single matrix. It tends to be more effective than structured pruning since it allows for more fine-grained pruning; however, it can also be more difficult to implement.
Criteria for Selecting a Pruning Technique:
The decision of which pruning technique to use depends on several factors, such as the type of model, the available computational resources, and the degree of accuracy desired. For instance, structured pruning is more suitable for convolutional neural networks, while unstructured pruning is more relevant for fully connected networks. The decision to prune should also consider the trade-off between model size and accuracy. Other factors to consider include the complexity of the model, the size of the training data, and the performance metrics of the model.
Pruning in Neural Networks:
Neural networks are a kind of machine learning model that can benefit greatly from pruning. The objective of pruning in neural networks is to reduce the number of parameters in the network, thereby producing a smaller and faster model without sacrificing accuracy.
There are several types of pruning techniques that can be applied to neural networks, including weight pruning, neuron pruning, channel pruning, and filter pruning.
1. Weight Pruning
Weight pruning is the most common pruning technique used in neural networks. It involves setting some of the weights in the network to zero or eliminating them. This results in a sparser network that is faster and more efficient than the original network. Weight pruning can be done in more than one way, including magnitude-based pruning, which removes the smallest-magnitude weights (sketched below), and iterative pruning, which removes weights during training.
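A minimal NumPy sketch of magnitude-based weight pruning (the 20% fraction and the helper name are illustrative assumptions):

import numpy as np

def magnitude_prune(weights, fraction=0.2):
    # Zero out the given fraction of smallest-magnitude weights
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0  # ties may prune slightly more than k
    return pruned

# Example: prune 20% of a random 4x4 weight matrix
w = np.random.randn(4, 4)
print(magnitude_prune(w))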
2. Neuron Pruning
Neuron pruning involves eliminating whole neurons from the network. This can be useful for reducing the size of the network and improving its speed and efficiency. Neuron pruning can be done in several ways, including threshold-based pruning, which removes neurons with small activation values, and sensitivity-based pruning, which removes neurons that only slightly affect the result.
3. Channel Pruning
Channel pruning is a technique used in convolutional neural networks (CNNs) that involves eliminating whole channels from the network. A channel in a CNN corresponds to a group of filters that learn to detect a particular feature. Eliminating unnecessary channels can decrease the size of the network and improve its speed and efficiency without sacrificing accuracy.
4. Filter Pruning
Filter pruning involves eliminating whole filters from the network. A filter in a CNN corresponds to a set of weights that learn to detect a particular feature. Eliminating unnecessary filters can decrease the size of the network and improve its speed and efficiency without sacrificing accuracy, as sketched below.
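A minimal NumPy sketch of the idea behind filter pruning: rank a convolutional layer's filters by L1 norm and drop the weakest ones (the layer shape and the number of filters dropped are illustrative assumptions):

import numpy as np

# A conv layer's weights: (num_filters, channels, height, width)
filters = np.random.randn(8, 3, 3, 3)

# Rank filters by L1 norm; small norms contribute least to the output
l1_norms = np.abs(filters).reshape(filters.shape[0], -1).sum(axis=1)
keep = np.sort(np.argsort(l1_norms)[2:])  # drop the 2 weakest filters

pruned_filters = filters[keep]  # smaller layer: shape (6, 3, 3, 3)
print(pruned_filters.shape)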
Pruning in Decision Trees:
Pruning can also be applied to decision trees, a kind of machine learning model that learns a series of binary decisions based on the input features. Decision trees can become very large and complex, leading to overfitting and reduced generalization ability. Pruning can be used to eliminate unnecessary branches and nodes from the decision tree, resulting in a smaller and simpler model that is less likely to overfit.
Pruning in Support Vector Machines:
Pruning can also be applied to support vector machines (SVMs), a type of machine learning model that separates data points into classes using a hyperplane. SVMs can become very large and complicated, resulting in slow and inefficient predictions. Pruning can be used to eliminate unnecessary support vectors from the model, resulting in a smaller and faster model that is still accurate.
Advantages
o Decreased model size and complexity. Pruning can significantly reduce the number of parameters in a machine learning model, leading to a smaller and simpler model that is easier to train and deploy.
o Faster inference. Pruning can decrease the computational cost of making predictions, leading to faster and more efficient predictions.
o Improved generalization. Pruning can prevent overfitting and improve the generalization ability of the model by reducing its complexity.
o Increased interpretability. Pruning can result in a simpler and more interpretable model, making it easier to understand and explain the model's decisions.
Disadvantages
o Possible loss of accuracy. Pruning can sometimes result in a loss of accuracy, especially if too many parameters are pruned or if pruning is not done carefully.
o Increased training time. Pruning can increase the training time of the model, especially if it is done iteratively during training.
o Difficulty in choosing the right pruning technique. Choosing the right pruning technique can be challenging and may require domain expertise and experimentation.
o Risk of over-pruning. Over-pruning can lead to an overly simplified model that is not accurate enough for the task.
Pruning vs Other Regularization Techniques: