(Accredited by NBA)
LAB MANUAL
Ms. S. Priyadharshini
Vision and Mission - Institute
Vision
To carve the youth as dynamic, competent, valued and knowledgeable technocrats through
research, innovation and entrepreneurial development for accomplishing the global
expectations.
Mission
M1: Inculcate academic excellence in engineering education to create talented
professionals
M2: Promote research in basic sciences and applied engineering among faculty and
students to fulfill the societal expectations.
M3: Holistic development of students through meaningful interaction with industry and
academia.
M4: Foster the students on par with sustainable development goals thereby
contributing to the process of nation building.
M5: To nurture and retain conducive lifelong learning environment towards professional
excellence
Mission - Department
M1: Provide quality education to the students in core and allied fields by implementing
advanced pedagogies.
M2: Create ardor among faculty as well as students to achieve excellence in emerging
research areas
M3: Imbibe industry relevant skills to the students through industry interaction, thereby
bridging the campus-to-corporate gap.
M4: To endow the students with broad intellectual spectra pertaining to the sustainable
development goals.
M5: To instill the thirst for lifelong learning among students to excel in their field of
interest.
Program Specific Outcomes (PSOs)
PSO1: Design and develop electronic circuits assimilating futuristic technologies of Signal
Processing, Communication, VLSI and Embedded Systems using modern hardware and software
tools to cater to the expectations of solving real-time problems.
PSO2: Instill the professional skill sets with ethical principles and tools for Networking,
Communication and Integrated Circuits to provide solutions for societal benefit.
Course Code & Name : CS3491 ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
Semester / Year : VI / III
Regulations : R-2021
Branch of the Students : B.E - ECE
Academic Year : 2023-24
Batch / Section : 2021-2025 / III ECE - A & B
Introduction
Machine learning
Machine learning is a subset of artificial intelligence in the field of computer science that often
uses statistical techniques to give computers the ability to "learn" (i.e., progressively improve
performance on a specific task) with data, without being explicitly programmed. In the past
decade, machine learning has given us self-driving cars, practical speech recognition, effective
web search, and a vastly improved understanding of the human genome.
Machine learning tasks are typically classified into two broad categories, depending on whether
there is a learning "signal" or "feedback" available to a learning system:
1. Supervised learning: The computer is presented with example inputs and their desired
outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.
As special cases, the input signal can be only partially available, or restricted to special feedback:
2. Semi-supervised learning: the computer is given only an incomplete training signal: a training
set with some (often many) of the target outputs missing.
3. Active learning: the computer can only obtain training labels for a limited set of instances
(based on a budget), and also has to optimize its choice of objects to acquire labels for. When
used interactively, these can be presented to the user for labeling.
4. Reinforcement learning: training data (in form of rewards and punishments) is given only as
feedback to the program's actions in a dynamic environment, such as driving a vehicle or playing
a game against an opponent.
5. Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to
find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden
patterns in data) or a means towards an end (feature learning).
In classification, inputs are divided into two or more classes, and the learner must produce a
model that assigns unseen inputs to one or more (multi-label classification) of these classes. This
is typically tackled in a supervised manner. Spam filtering is an example of classification, where
the inputs are email (or other) messages and the classes are "spam" and "not spam".
In regression, also a supervised problem, the outputs are continuous rather than discrete. In
clustering, a set of inputs is to be divided into groups. Unlike in classification, the groups are not
known beforehand, making this typically an unsupervised task. Density estimation finds the
distribution of inputs in some space.
Dimensionality reduction simplifies inputs by mapping them into a lower dimensional space.
Topic modeling is a related problem, where a program is given a list of human language
documents and is tasked with finding out which documents cover similar topics.
4. Deep learning
Falling hardware prices and the development of GPUs for personal use in the last few years have
contributed to the development of the concept of deep learning which consists of multiple hidden
layers in an artificial neural network. This approach tries to model the way the human brain
processes light and sound into vision and hearing. Some successful applications of deep learning
are computer vision and speech recognition.
6. Support vector machines
Support vector machines (SVMs) are a set of related supervised learning methods used for
classification and regression. Given a set of training examples, each marked as belonging to one
of two categories, an SVM training algorithm builds a model that predicts whether a new
example falls into one category or the other.
7. Clustering
Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that
observations within the same cluster are similar according to some pre designated criterion or
criteria, while observations drawn from different clusters are dissimilar. Different clustering
techniques make different assumptions on the structure of the data, often defined by some
similarity metric and evaluated for example by internal compactness (similarity between
members of the same cluster) and separation between different clusters. Other methods are based
on estimated density and graph connectivity. Clustering is a method of unsupervised learning,
and a common technique for statistical data analysis.
8. Bayesian networks
A Bayesian network, belief network or directed acyclic graphical model is a probabilistic
graphical model that represents a set of random variables and their conditional independencies
via a directed acyclic graph (DAG). For example, a Bayesian network could represent the
probabilistic relationships between diseases and symptoms. Given symptoms, the network can be
used to compute the probabilities of the presence of various diseases. Efficient algorithms exist
that perform inference and learning.
9. Reinforcement learning
Reinforcement learning is concerned with how an agent ought to take actions in an environment
so as to maximize some notion of long-term reward. Reinforcement learning algorithms attempt
to find a policy that maps states of the world to the actions the agent ought to take in those states.
Reinforcement learning differs from the supervised learning problem in that correct input/output
pairs are never presented, nor sub-optimal actions explicitly corrected.
10. Similarity and metric learning
In this problem, the learning machine is given pairs of examples that are considered similar and
pairs of less similar objects. It then needs to learn a similarity function (or a distance metric
function) that can predict if new objects are similar. It is sometimes used in Recommendation
systems.
1.1 AIM:
To Implement Uninformed search algorithms ( BFS and DFS )
1.3 ALGORITHM
BFS Algorithm
Breadth-First Search (BFS) is an algorithm used for traversing graphs or trees.
Traversing means visiting each node of the graph. Breadth-First Search explores all
the vertices of a graph or a tree level by level, using a queue. BFS in Python can be
implemented using data structures such as a dictionary and lists. Breadth-First Search in
a tree and a graph is almost the same. The only difference is that a graph may contain
cycles, so we must avoid traversing the same node again.
Step 1: Enqueue the starting node into a queue data structure.
Step 2: Dequeue a node and mark it as visited.
Step 3: Enqueue all adjacent nodes of the dequeued node that are not yet visited.
Step 4: Repeat steps 2-3 until the queue is empty.
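A minimal runnable sketch of these steps is given below. The entries for nodes '2', '4' and '8' and the bfs(visited, graph, '5') call mirror the driver-code fragment later in this experiment; the remaining adjacency entries are assumed example values.

# Breadth-First Search sketch (assumed example graph)
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = []   # list of visited nodes
queue = []     # FIFO queue of nodes still to be explored

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:                        # loop until the queue is empty
        m = queue.pop(0)                # dequeue the next node
        print(m, end=" ")
        for neighbour in graph[m]:      # enqueue unvisited neighbours
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

print("Following is the Breadth-First Search")
bfs(visited, graph, '5')                # visits 5 3 7 2 4 8 for this graph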
DFS Algorithm
The recursive method of the Depth-First Search algorithm is implemented using stack.
A standard Depth-First Search implementation puts every vertex of the graph into one
in all 2 categories: 1) Visited 2) Not Visited. The only purpose of this algorithm is to
visit all the vertex of the graph avoiding cycles.
Step 1: Start by putting any one of the graph's vertices on top of the stack.
Step 2: Take the top item of the stack and add it to the visited list.
Step 3: Create a list of that vertex's adjacent nodes. Push the ones that are not in the
visited list onto the top of the stack.
Step 4: Keep repeating steps 2 and 3 until the stack is empty.
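A minimal recursive DFS sketch following these steps is given below; it uses the same assumed example graph as the BFS sketch, and the dfs(visited, graph, '5') call matches the driver-code fragment that follows.

# Depth-First Search sketch (assumed example graph)
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = set()   # keeps track of visited nodes

def dfs(visited, graph, node):
    if node not in visited:
        print(node, end=" ")
        visited.add(node)
        for neighbour in graph[node]:   # recurse into each unvisited neighbour
            dfs(visited, graph, neighbour)

print("Following is the Depth-First Search")
dfs(visited, graph, '5')                # visits 5 3 2 4 8 7 for this graph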
queue.append(node)
# Driver Code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5') # function calling
OUTPUT
'2' : [],
'4' : ['8'],
'8' : []
}
# Driver Code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')
OUTPUT
1.5 PROCEDURE
1.6 RESULT
By doing this experiment I Implemented Uninformed search algorithms (BFS and DFS) and observed its output and application.
2.1 AIM:
To Implement Informed search algorithms ( A* and AO* )
2.3 ALGORITHM
A* Search Algorithm:
A* Search Algorithm is a path-finding algorithm. It is similar to Breadth First Search (BFS). It
searches for the shortest path using the heuristic value assigned to each node together with the
actual cost from Source_node to Dest_node.
Real-life Examples
Maps
Games
AO* Search Algorithm is a path-finding algorithm similar to A*, except that AND arcs are used
between nodes in addition to OR arcs. After finding the shortest path it backtracks to the root
node and updates its heuristic value. In this respect it is similar to Depth First Search (DFS). It
searches for the shortest path using the heuristic value assigned to each node together with the
actual cost from Source_node to Dest_node.
Real-life Examples
Maps
Games
Formula for the A* / AO* Algorithms
h(n) = heuristic value of node n
g(n) = actual cost from the start node to n
f(n) = g(n) + h(n)
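For reference, a self-contained sketch of the aStarAlgo() function used in this experiment is given below. The heuristic table matches the H_dist in the program listing; the weighted adjacency list Graph_nodes is an assumed example chosen to be consistent with the aStarAlgo('A', 'J') call.

def aStarAlgo(start_node, stop_node):
    open_set = {start_node}
    closed_set = set()
    g = {start_node: 0}                 # actual cost from start to each node
    parents = {start_node: start_node}  # parent map used to rebuild the path

    while open_set:
        # pick the open node with the lowest f(n) = g(n) + h(n)
        n = min(open_set, key=lambda v: g[v] + heuristic(v))
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        open_set.remove(n)
        closed_set.add(n)
        for (m, weight) in Graph_nodes.get(n, []):
            if m in closed_set:
                continue
            if m not in open_set or g[m] > g[n] + weight:
                g[m] = g[n] + weight    # update g(m)
                parents[m] = n          # change parent of m to n
                open_set.add(m)
    print('Path does not exist!')
    return None

def heuristic(n):
    H_dist = {'A': 10, 'B': 8, 'C': 5, 'D': 7, 'E': 3,
              'F': 6, 'G': 5, 'H': 3, 'I': 1, 'J': 0}
    return H_dist[n]

# assumed weighted graph (illustrative example)
Graph_nodes = {
    'A': [('B', 6), ('F', 3)],
    'B': [('C', 3), ('D', 2)],
    'C': [('D', 1), ('E', 5)],
    'D': [('C', 1), ('E', 8)],
    'E': [('I', 5), ('J', 5)],
    'F': [('G', 1), ('H', 7)],
    'G': [('I', 3)],
    'H': [('I', 2)],
    'I': [('E', 5), ('J', 3)],
}

aStarAlgo('A', 'J')   # for this graph the path found is A -> F -> G -> I -> J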
Program :
n = None
#for each node m, compare its distance from start, i.e. g(m), to the
#distance from start through node n, i.e. g(n) + weight
else:
if g[m] > g[n] + weight:
#update g(m)
g[m] = g[n] + weight
#change parent of m to n
parents[m] = n
if n == None:
print('Path does not exist!')
return None
while parents[n] != n:
path.append(n)
n = parents[n]
path.append(start_node)
path.reverse()
print('Path found: {}'.format(path))
return path
def heuristic(n):
H_dist = {
'A': 10,
'B': 8,
'C': 5,
'D': 7,
'E': 3,
'F': 6,
'G': 5,
'H': 3,
'I': 1,
'J': 0
}
return H_dist[n]
aStarAlgo('A', 'J')
Output
Program
class Graph:
def __init__(self, graph, heuristicNodeList, startNode): # instantiate graph object with graph topology, heuristic values and start node
self.graph = graph
self.H=heuristicNodeList
self.start=startNode
self.parent={}
self.status={}
self.solutionGraph={}
self.status[v]=val
def printSolution(self):
print("FOR GRAPH SOLUTION, TRAVERSE THE GRAPH FROM THE
STARTNODE:",self.start)
print("------------------------------------------------------------")
print(self.solutionGraph)
print("------------------------------------------------------------")
if flag==True: # initialize Minimum Cost with the cost of first set of child node/s
minimumCost=cost
costToChildNodeListDict[minimumCost]=nodeList # set the Minimum Cost child node/s
flag=False
else: # checking the Minimum Cost nodes with the current Minimum Cost
if minimumCost>cost:
minimumCost=cost
costToChildNodeListDict[minimumCost]=nodeList # set the Minimum Cost child node/s
def aoStar(self, v, backTracking): # AO* algorithm for a start node and backTracking status flag
print("-----------------------------------------------------------------------------------------")
if solved==True: # if the Minimum Cost nodes of v are solved, set the current node status as
solved(-1)
self.setStatus(v,-1)
if v!=self.start: # check the current node is the start node for backtracking the current node
value
self.aoStar(self.parent[v], True) # backtracking the current node value with backtracking
status set to true
h1 = {'A': 1, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J':1, 'T': 3}
graph1 = {
'A': [[('B', 1), ('C', 1)], [('D', 1)]],
'B': [[('G', 1)], [('H', 1)]],
'C': [[('J', 1)]],
'D': [[('E', 1), ('F', 1)]],
'G': [[('I', 1)]]
}
G1= Graph(graph1, h1, 'A')
G1.applyAOStar()
G1.printSolution()
h2 = {'A': 1, 'B': 6, 'C': 12, 'D': 10, 'E': 4, 'F': 4, 'G': 5, 'H': 7} # Heuristic values of Nodes
graph2 = { # Graph of Nodes and Edges
'A': [[('B', 1), ('C', 1)], [('D', 1)]], # Neighbors of Node 'A': B, C & D with respective weights
'B': [[('G', 1)], [('H', 1)]], # Neighbors are included in a list of lists
'D': [[('E', 1), ('F', 1)]] # Nodes inside a sublist are AND nodes; separate sublists are OR alternatives
}
G2 = Graph(graph2, h2, 'A') # Instantiate Graph object with graph, heuristic values and start Node
G2.applyAOStar() # Run the AO* algorithm
G2.printSolution() # print the solution graph as AO* Algorithm search
Output:
HEURISTIC VALUES : {'A': 1, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : A
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : B
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : A
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : G
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : B
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 10, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : A
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : I
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 0, 'J': 1, 'T': 3}
SOLUTION GRAPH : {'I': []}
PROCESSING NODE : G
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1, 'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I']}
PROCESSING NODE : B
-----------------------------------------------------------------------------------------
HEURISTIC VALUES : {'A': 12, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1, 'T': 3}
2.5 PROCEDURE
Open Python 3 IDLE / Colab
Write the program
Run the program
Observe the output and take a hard copy
Write the program for various examples / applications, observe the output and take a hard copy
2.6 RESULT
By doing this experiment I Implemented Informed search algorithms ( A* and AO* ) and
observed its output and application .
3.1 AIM
To Implement Naïve Bayes Models
3.3 ALGORITHM
Conditional probability is defined as the likelihood of an event or outcome occurring,
based on the occurrence of a previous event or outcome. Conditional probability is
calculated by multiplying the probability of the preceding event by the updated
probability of the succeeding, or conditional, event
Bayes’ Rule
Bayes’ theorem, which was given by Thomas Bayes, a British mathematician, in 1763, provides
a means for calculating the probability of an event given some information.
Mathematically, Bayes’ theorem can be stated as:
P(H|E) = P(E|H) x P(H) / P(E)
Naive Bayes
Bayes’ rule provides us with the formula for the probability of Y given some feature X. In
real-world problems we hardly find any case where there is only one feature. When the
features are independent, we can extend Bayes’ rule to what is called Naive Bayes. It
assumes that the features are independent, meaning that changing the value of one feature
does not influence the values of the other features; this is why the algorithm is called
“naive”. Naive Bayes can be used for various tasks such as face recognition, weather
prediction, medical diagnosis, news classification, sentiment analysis, and a lot more.
When there are multiple X variables, we simplify the computation by assuming that the X’s are independent, so
P(X1, X2, ..., Xn | Y) = P(X1|Y) x P(X2|Y) x ... x P(Xn|Y).
When a feature is continuous, its class-conditional likelihood P(Xi|Y) is commonly modelled
with a Gaussian (normal) density.
Problem statement:
– Given features X1 ,X2 ,…,Xn
– Predict a label Y
X = (Rainy, Hot, High, False), y = No
Or
Consider a random experiment of tossing 2 coins. The sample space here will be:
S = {HH, HT, TH, TT}
P(H) is the probability of hypothesis H being true. This is known as the prior
probability.
P(E) is the probability of the evidence(regardless of the hypothesis).
P(E|H) is the probability of the evidence given that hypothesis is true.
P(H|E) is the probability of the hypothesis given that the evidence is there.
##import library
import math
import random
import pandas as pd
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix, ConfusionMatrixDisplay

# Generate a synthetic classification dataset
X, y = make_classification(
    n_features=6,
    n_classes=3,
    n_samples=800,
    n_informative=2,
    random_state=1,
    n_clusters_per_class=1,
)

# Split into training and test sets (67/33 split and random_state are assumed values)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=125)

# Build the Gaussian Naive Bayes model
model = GaussianNB()

# Model training
model.fit(X_train, y_train)

# Predict Output
predicted = model.predict([X_test[6]])
y_pred = model.predict(X_test)

accuracy = accuracy_score(y_pred, y_test)
f1 = f1_score(y_pred, y_test, average="weighted")
print("Accuracy:", accuracy)
print("F1 Score:", f1)

labels = [0, 1, 2]
cm = confusion_matrix(y_test, y_pred, labels=labels)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels)
disp.plot()
OUTPUT
Accuracy: 0.8484848484848485
F1 Score: 0.8491119695890328
3.5 PROCEDURE
Open Python 3 IDLE / Colab
Write the program
Run the program
Observe the output and take a hard copy
Write the program for various examples / applications, observe the output and take a hard copy
3.6 RESULT
By doing this experiment I Implemented Naïve Bayes Models and observed its output and application .
4.1 AIM:
To Implement Bayesian Networks
4.3 ALGORITHM
This section will be about obtaining a Bayesian network, given a set of sample data.
Learning a Bayesian network can be split into two problems:
Parameter learning: Given a set of data samples and a DAG that captures the
dependencies between the variables, estimate the (conditional) probability distributions of
the individual variables.
Structure learning: Given a set of data samples, estimate a DAG that captures the
dependencies between the variables.
This notebook aims to illustrate how parameter learning and structure learning can be
done with pgmpy. Currently, the library supports:
The Bayesian Parameter Estimator starts with already existing prior CPDs, that
express our beliefs about the variables before the data was observed. Those
"priors" are then updated, using the state counts from the observed data.
One can think of the priors as consisting of pseudo state counts that are added to
the actual counts before normalization. Unless one wants to encode specific
beliefs about the distributions of the variables, one commonly chooses uniform
priors, i.e. ones that deem all states equiprobable.
A very simple prior is the so-called K2 prior, which simply adds 1 to the count of
every single state. A somewhat more sensible choice of prior is BDeu (Bayesian
Dirichlet equivalent uniform prior). For BDeu we need to specify an equivalent
sample size N and then the pseudo-counts are the equivalent of having observed N
uniform samples of each variable (and each parent configuration).
Parameter Learning
Parameter learning is the task to estimate the values of the conditional probability
distributions (CPDs), for the variables fruit, size, and tasty.
Program :
!pip install pgmpy
!pip install pandas
!pip install numpy
import pandas as pd
data = pd.DataFrame(data={'fruit': ["banana", "apple", "banana", "apple", "banana", "apple", "banana",
                                    "apple", "apple", "apple", "banana", "banana", "apple", "banana"],
                          'tasty': ["yes", "no", "yes", "yes", "yes", "yes", "yes",
                                    "yes", "yes", "yes", "yes", "no", "no", "no"],
                          'size': ["large", "large", "large", "small", "large", "large", "large",
                                   "small", "large", "large", "large", "large", "small", "small"]})
print(data)
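The CPD tables shown in the OUTPUT below come from the parameter-learning step. A sketch of that step, assuming the network structure fruit -> tasty <- size described above and using pgmpy's documented estimators, is:

# Parameter learning with pgmpy (sketch; older pgmpy versions name the model class BayesianModel)
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator, BayesianEstimator

model = BayesianNetwork([('fruit', 'tasty'), ('size', 'tasty')])

# Maximum-likelihood estimation: CPDs are the relative frequencies of the observed state counts
mle = MaximumLikelihoodEstimator(model, data)
print(mle.estimate_cpd('fruit'))
print(mle.estimate_cpd('tasty'))

# Bayesian estimation with a BDeu prior (pseudo-counts equivalent to 10 uniform samples)
est = BayesianEstimator(model, data)
print(est.estimate_cpd('tasty', prior_type='BDeu', equivalent_sample_size=10))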
OUTPUT
fruit
apple 7
banana 7
+---------------+-----+
| fruit(apple) | 0.5 |
+---------------+-----+
| fruit(banana) | 0.5 |
+---------------+-----+
+------------+--------------+--------------------+---------------------+---------------+
| fruit      | fruit(apple) | fruit(apple)       | fruit(banana)       | fruit(banana) |
+------------+--------------+--------------------+---------------------+---------------+
| size       | size(large)  | size(small)        | size(large)         | size(small)   |
+------------+--------------+--------------------+---------------------+---------------+
| tasty(no)  | 0.25         | 0.3333333333333333 | 0.16666666666666666 | 1.0           |
+------------+--------------+--------------------+---------------------+---------------+
| tasty(yes) | 0.75         | 0.6666666666666666 | 0.8333333333333334  | 0.0           |
+------------+--------------+--------------------+---------------------+---------------+
+------------+---------------------+--------------------+--------------------+---------------------+
| fruit | fruit(apple) | fruit(apple) | fruit(banana) | fruit(banana) |
+------------+---------------------+--------------------+--------------------+---------------------+
| size | size(large) | size(small) | size(large) | size(small) |
+------------+---------------------+--------------------+--------------------+---------------------+
| tasty(no) | 0.34615384615384615 | 0.4090909090909091 | 0.2647058823529412 | 0.6428571428571429 |
+------------+---------------------+--------------------+--------------------+---------------------+
| tasty(yes) | 0.6538461538461539 | 0.5909090909090909 | 0.7352941176470589 | 0.35714285714285715 |
+------------+---------------------+--------------------+--------------------+---------------------+
4.5 PROCEDURE
4.6 RESULT
By doing this experiment I Implemented Bayesian Networks and observed its output and application.
5.1 AIM:
To Build Regression Models
5.3 ALGORITHM
Regression analysis is a commonly used statistical technique for predicting the relationship
between a dependent variable and one or more independent variables. In the field of
machine learning, regression algorithms are used to make predictions about continuous
variables, such as housing prices, student scores, or medical outcomes. Python, being one
of the most widely used programming languages in data science and machine learning, has
a variety of powerful libraries for implementing regression algorithms.
independent variables. Notice how we could expand this by choosing higher orders
of polynomials (to some order k), and we could also have included interaction terms.
6. Decision tree based regression is a method that uses decision trees to model the
relationship between the features and a continuous target by recursively splitting the data
into regions and predicting a value within each region.
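The reg.coef_ / reg.predict() fragment that follows is the tail of the standard scikit-learn LinearRegression example; a self-contained sketch of that example, whose predict call matches the array([16.]) shown in the OUTPUT, is:

# 1. Linear regression (sketch of the scikit-learn documentation example)
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
y = np.dot(X, np.array([1, 2])) + 3       # y = 1*x_0 + 2*x_1 + 3

reg = LinearRegression().fit(X, y)
print(reg.coef_)                          # approximately [1. 2.]
print(reg.intercept_)                     # approximately 3.0
print(reg.predict(np.array([[3, 5]])))    # [16.]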
reg.coef_
reg.intercept_
reg.predict(np.array([[3, 5]]))
OUTPUT:
array([16.])
2. Polynomial regression
# polynomial
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
X = np.arange(6).reshape(3, 2)
X
poly = PolynomialFeatures(2)
poly.fit_transform(X)
poly = PolynomialFeatures(interaction_only=True)
poly.fit_transform(X)
OUTPUT:
array([[ 1., 0., 1., 0.],
[ 1., 2., 3., 6.],
[ 1., 4., 5., 20.]])
3. Ridge regression
from sklearn.linear_model import Ridge
import numpy as np
n_samples, n_features = 10, 5
rng = np.random.RandomState(0)
y = rng.randn(n_samples)
X = rng.randn(n_samples, n_features)
clf = Ridge(alpha=1.0)
clf.fit(X, y)
OUTPUT:
Ridge
Ridge()
4. Lasso regression
#lasso
from sklearn import linear_model
clf = linear_model.Lasso(alpha=0.1)
clf.fit([[0,0], [1, 1], [2, 2]], [0, 1, 2])
print(clf.coef_)
print(clf.intercept_)
OUTPUT:
[0.85 0. ]
0.15000000000000002
Elastic Net regression
#elastic net (imports added; this follows the scikit-learn ElasticNet documentation example)
from sklearn.linear_model import ElasticNet
from sklearn.datasets import make_regression
X, y = make_regression(n_features=2, random_state=0)
regr = ElasticNet(random_state=0)
regr.fit(X, y)
print(regr.coef_)
print(regr.intercept_)
print(regr.predict([[0, 0]]))
OUTPUT:
[18.83816048 64.55968825]
1.4512607561653996
[1.45126076]
OUTPUT:
array([-0.39292219, -0.46749346, 0.02768473, 0.06441362, -0.50323135,
0.16437202, 0.11242982, -0.73798979, -0.30953155, -0.00137327])
#SVR
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
import numpy as np
n_samples, n_features = 10, 5
rng = np.random.RandomState(0)
y = rng.randn(n_samples)
X = rng.randn(n_samples, n_features)
regr = make_pipeline(StandardScaler(), SVR(C=1.0, epsilon=0.2))
regr.fit(X, y)
OUTPUT:
Application _ LR
# Linear regression applied to the Boston housing data
# (the imports, train/test split parameters and residual scatter plots are assumed additions;
#  the data loading, fitting and printing lines follow the original listing)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import linear_model
from sklearn.model_selection import train_test_split

data_url = "http://lib.stat.cmu.edu/datasets/boston"
raw_df = pd.read_csv(data_url, sep="\s+", skiprows=22, header=None)
X = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
y = raw_df.values[1::2, 2]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=1)

reg = linear_model.LinearRegression()
reg.fit(X_train, y_train)
# regression coefficients
print('Coefficients: ', reg.coef_)

# residual error scatter plots (assumed, so the legend and title below have content)
plt.scatter(reg.predict(X_train), reg.predict(X_train) - y_train, color="green", s=10, label='Train data')
plt.scatter(reg.predict(X_test), reg.predict(X_test) - y_test, color="blue", s=10, label='Test data')
plt.hlines(y=0, xmin=0, xmax=50, linewidth=2)
# plotting legend
plt.legend(loc='upper right')
# plot title
plt.title("Residual errors")
plt.show()
OUTPUT:
5.5 PROCEDURE
5.6 RESULT
By doing this experiment I Built Regression models and observed its output and application
6.1 AIM:
To build Decision Trees and Random Forests
6.3 ALGORITHM
A decision tree is a supervised machine-learning algorithm that can be used for
both classification and regression problems. The algorithm builds its model in the
structure of a tree with decision nodes and leaf nodes. A decision tree is
simply a series of sequential decisions made to reach a specific result.
The Palmer Penguins dataset
This Colab uses the Palmer Penguins dataset, which contains size measurements for
three penguin species:
Chinstrap
Gentoo
Adelie
This is a classification problem—the goal is to predict the species of penguin
based on data in the Palmer's Penguins dataset. Let’s meet the penguins.
https://www.kaggle.com/code/sohamsave/personal-loan-prediction-using-decision-tree
import numpy as np
import pandas as pd
import tensorflow_decision_forests as tfdf
path = "https://storage.googleapis.com/download.tensorflow.org/data/palmer_penguins/penguins.csv"
pandas_dataset = pd.read_csv(path)
label = "species"
classes = list(pandas_dataset[label].unique())
print(f"Label classes: {classes}")
# >> Label classes: ['Adelie', 'Gentoo', 'Chinstrap']
pandas_dataset[label] = pandas_dataset[label].map(classes.index)
np.random.seed(1)
# Use the ~10% of the examples as the testing set
# and the remaining ~90% of the examples as the training set.
test_indices = np.random.rand(len(pandas_dataset)) < 0.1
pandas_train_dataset = pandas_dataset[~test_indices]
pandas_test_dataset = pandas_dataset[test_indices]
tf_train_dataset = tfdf.keras.pd_dataframe_to_tf_dataset(pandas_train_dataset, label=label)
model = tfdf.keras.CartModel()
model.fit(tf_train_dataset)
tfdf.model_plotter.plot_model_in_colab(model, max_depth=10)
# Reading the learned tree (illustrative): the root node splits on bill_depth_mm
# at a threshold of about 16.35 taken from the plotted model above.
bill_depth_mm = 16.35
if bill_depth_mm > 16.35:
    # examples on this side of the split are routed to one subtree of label classes
    classes = list(pandas_dataset[label].unique())
    print(f"Label classes: {classes}")
else:
    # examples with bill_depth_mm <= 16.35 are routed to the other subtree
    classes = list(pandas_dataset[label].unique())
    print(f"Label classes: {classes}")
model.compile("accuracy")
print("Train evaluation: ", model.evaluate(tf_train_dataset,
return_dict=True))
# >> Train evaluation: {'loss': 0.0, 'accuracy': 0.96116}
tf_test_dataset = tfdf.keras.pd_dataframe_to_tf_dataset(pandas_test_dataset, label=label)
print("Test evaluation: ", model.evaluate(tf_test_dataset,
return_dict=True))
# >> Test evaluation: {'loss': 0.0, 'accuracy': 0.97142}
OUTPUT:
tensorflow_decision_forests.component.model_plotter.model_plotter.plot_model_in_colab
def plot_model_in_colab(model: InferenceCoreModel, **kwargs)
/usr/local/lib/python3.10/dist-packages/tensorflow_decision_forests/component/model_plotter/model_plotter.py
Plots a model structure in colab.
Args:
model: The model to plot.
**kwargs: Arguments passed to "plot_model".
Returns:
A Colab HTML element showing the model.
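The listing above trains a single CART model. For the Random Forest part of this experiment, a minimal sketch, assuming the same tf_train_dataset and tf_test_dataset built earlier, is:

# Random forest with TensorFlow Decision Forests (sketch)
import tensorflow_decision_forests as tfdf

rf_model = tfdf.keras.RandomForestModel()   # an ensemble of many decision trees
rf_model.fit(tf_train_dataset)

rf_model.compile(metrics=["accuracy"])
print("Random forest test evaluation:",
      rf_model.evaluate(tf_test_dataset, return_dict=True))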
6.5 PROCEDURE
6.6 RESULT
By doing this experiment I built Decision Trees and Random Forests and observed its
output and application
7.1 AIM:
To build SVM Models
7.3 ALGORITHM
The main objective is to segregate the given dataset in the best possible way. The
distance between the hyperplane and the nearest data points on either side is known as
the margin. The objective is to select a hyperplane with the maximum possible margin
between the support vectors in the given dataset. SVM searches for the maximum
marginal hyperplane in the following steps:
1. Generate hyperplanes that segregate the classes in the best way. The left-hand
figure shows three hyperplanes: black, blue and orange. Here, the blue and orange
hyperplanes have higher classification error, but the black one separates the
two classes correctly.
2. Select the hyperplane with the maximum separation from the nearest data points
on either side, as shown in the right-hand figure.
#Load dataset
cancer = datasets.load_breast_cancer()
#kernel implementation
from math import exp

def K(x, xi):
    # Choose one of the following implementations:
    # Linear kernel:
    # return sum(x * xi)
    # Gaussian (RBF) kernel:
    gamma = 1  # kernel parameter
    return exp(-gamma * sum((x_i - xi_i)**2 for x_i, xi_i in zip(x, xi)))
# print data(feature)shape
cancer.data.shape
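A sketch of the remaining steps of this experiment (train/test split, a linear-kernel SVC, and the metrics printed in the OUTPUT below) is given here; the 70/30 split and the random_state value are assumptions, not taken from the listing.

# SVM classification on the breast cancer dataset (sketch)
from sklearn import datasets, svm, metrics
from sklearn.model_selection import train_test_split

cancer = datasets.load_breast_cancer()
print("Features:", cancer.feature_names)
print("Labels:", cancer.target_names)

# assumed split parameters
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, test_size=0.3, random_state=109)

clf = svm.SVC(kernel='linear')     # linear kernel, as in step 1 of the algorithm
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("Accuracy:", metrics.accuracy_score(y_test, y_pred))
print("Precision:", metrics.precision_score(y_test, y_pred))
print("Recall:", metrics.recall_score(y_test, y_pred))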
OUTPUT:
Features: ['mean radius' 'mean texture' 'mean perimeter' 'mean area'
'mean smoothness' 'mean compactness' 'mean concavity'
'mean concave points' 'mean symmetry' 'mean fractal dimension'
'radius error' 'texture error' 'perimeter error' 'area error'
'smoothness error' 'compactness error' 'concavity error'
'concave points error' 'symmetry error' 'fractal dimension error'
'worst radius' 'worst texture' 'worst perimeter' 'worst area'
'worst smoothness' 'worst compactness' 'worst concavity'
'worst concave points' 'worst symmetry' 'worst fractal dimension']
Labels: ['malignant' 'benign']
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 1 0 1 1 1 1 1 0 0 1 0 0 1 1 1 1 0 1 0 0 1 1 1 1 0 1 0 0
1 0 1 0 0 1 1 1 0 0 1 0 0 0 1 1 1 0 1 1 0 0 1 1 1 0 0 1 1 1 1 0 1 1 0 1 1
1 1 1 1 1 1 0 0 0 1 0 0 1 1 1 0 0 1 0 1 0 0 1 0 0 1 1 0 1 1 0 1 1 1 1 0 1
1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 1 0 1 1 0 0 1 1 0 0 1 1 1 1 0 1 1 0 0 0 1 0
1 0 1 1 1 0 1 1 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 1 1 0 1 0 0 0 0 1 1 0 0 1 1
Accuracy: 0.9649122807017544
Precision: 0.9811320754716981
Recall: 0.9629629629629629
7.5 PROCEDURE
7.6 RESULT
By doing this experiment I Built SVM Models and observed its output and application
8.1 AIM:
To Implement Ensembling Techniques
8.3 ALGORITHM
The steps of the EM algorithm are as follows:
Step 1: Initialize the unknown parameters (here the coin biases theta_A and theta_B) with guessed values.
Step 2: E-step - using the current parameters, compute for each observation its expected (probabilistic) assignment to each hidden component.
Step 3: M-step - re-estimate the parameters by maximizing the likelihood implied by the expected assignments from the E-step.
Step 4: Repeat the E-step and M-step until the parameter estimates converge.
thetas.append((theta_A,theta_B))
return thetas, (theta_A,theta_B)
type(thetas)
thet=thetas[1]
print(thetas)
rolls_p = "HHHTTHTHTH"
numHeads_p = rolls_p.count('H')
print('No. of Heads', numHeads_p)
flips_p = len(rolls_p)
print(flips_p)
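A self-contained sketch of the two-coin EM loop that the theta_A / theta_B fragment above belongs to is given below; the coin-flip sequences, the number of iterations and the initial guesses are assumed illustrative values.

# Two-coin EM sketch (assumed sample data and initial guesses)
import numpy as np
from scipy.stats import binom

rolls = ["HTHHHHHTHH", "HHHHHHHHHH", "HTHHHHHTHH", "HTHTTTHHTT", "THHHTHHHTH"]
theta_A, theta_B = 0.6, 0.5            # initial guesses for the two coin biases

for step in range(10):
    heads_A = tails_A = heads_B = tails_B = 0.0
    for roll in rolls:
        h = roll.count("H")
        t = len(roll) - h
        # E-step: likelihood of this sequence under each coin
        like_A = binom.pmf(h, h + t, theta_A)
        like_B = binom.pmf(h, h + t, theta_B)
        w_A = like_A / (like_A + like_B)   # responsibility of coin A for this sequence
        w_B = 1.0 - w_A
        heads_A += w_A * h; tails_A += w_A * t
        heads_B += w_B * h; tails_B += w_B * t
    # M-step: re-estimate the biases from the expected head/tail counts
    theta_A = heads_A / (heads_A + tails_A)
    theta_B = heads_B / (heads_B + tails_B)

print("theta_A:", theta_A, "theta_B:", theta_B)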
OUTPUT:
9
10
9
10
8
10
8
10
4
10
4
10
7
10
7
10
#8: 0.80 0.52
5
10
5
10
9
10
9
10
8
10
8
10
No. of Heads 6
10
Lilelihood of A coin : 0.0004366017976005356
Lilelihood of B coin : 0.0010483562262602218
Probability of A coin : 0.2940162553991999
Probability of B coin : 0.7059837446008002
8.6 RESULT
By doing this experiment I Implemented Ensembling Techniques and observed its output and application
9.1 AIM:
To Implement Clustering Algorithms
9.3 ALGORITHM
Kmeans and EM algorithm
We can explain K means as an EM algorithm. First we initialize the k means (mk)
of the Kmeans algorithm. In the E Step we assign each point to a Cluster and
during the M Step given the Clusters we refine mean mk of each cluster k. This
process is repeated until the change in means is small.
K-means and Mixture of Gaussians
K-means is essentially a hard classifier: the only parameters we need to fit to the data are
the cluster means µk, as discussed above. When we use a mixture of Gaussians, however,
we have a probability model that defines a "soft" classifier. The parameters to be fitted
are then the means µk and covariances Σk that define the Gaussian distributions, together
with the mixing coefficients πk. Given the data set, we must find the mixing coefficients,
means and covariances. If we knew which component generated each data point, the
maximum likelihood solution would involve fitting each component to the corresponding
cluster. The difficulty is that the data set is unlabelled: the component assignments are hidden.
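The plotting fragment below assumes a fitted K-means model; a minimal sketch of that setup is given here. The six sample points are taken from the OUTPUT of this experiment; the cluster indices may be swapped depending on initialisation.

# K-means clustering sketch (sample points read from the OUTPUT section)
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 2], [2, 4], [10, 12], [11, 15], [3, 2], [12, 13]])
print(X)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)        # approximately [[ 2.  2.667] [11. 13.333]]

label = kmeans.labels_
print("Cluster Labels:", label)
print(kmeans.predict([[10, 10]]))     # the point (10, 10) falls in the larger-valued cluster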
u_labels = np.unique(label)
import matplotlib.pyplot as plt
#plotting the results:
for i in u_labels:
plt.scatter(X[label == i , 0] , X[label == i , 1] , label = i)
plt.legend()
plt.title("K-Means Clustering")
plt.show()
OUTPUT
[[ 1 2]
[ 2 4]
[10 12]
[11 15]
[ 3 2]
[12 13]]
array([[ 2. , 2.66666667],
[11. , 13.33333333]])
array([1], dtype=int32)
Cluster Labels: [0 0 1 1 0 1]
Cluster Centers: [[ 2. 2.66666667]
[11. 13.33333333]]
9.5 PROCEDURE
9.6 RESULT
By doing this experiment I implemented Clustering Algorithms and observed its output and application
10.1 AIM:
To Implement EM for Bayesian Networks
10.3 ALGORITHM
Here the E-step or expectation step is so named because it involves updating our
expectation of which cluster each point belongs to. The M-step or maximization
step is so named because it involves maximizing some fitness function that
defines the locations of the cluster centers—in this case, that maximization is
accomplished by taking a simple mean of the data in each cluster.
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200);
while True:
# 2a. Assign labels based on closest center
labels = pairwise_distances_argmin(X, centers)
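A sketch of the complete E-step/M-step loop that the fragment above is taken from is given below; it follows the widely used find_clusters demonstration, with X and n_clusters as assumed inputs.

# E-M style K-means sketch
import numpy as np
from sklearn.metrics import pairwise_distances_argmin

def find_clusters(X, n_clusters, rseed=2):
    # 1. Randomly choose initial centers
    rng = np.random.RandomState(rseed)
    i = rng.permutation(X.shape[0])[:n_clusters]
    centers = X[i]
    while True:
        # 2a. E-step: assign labels based on the closest center
        labels = pairwise_distances_argmin(X, centers)
        # 2b. M-step: recompute each center as the mean of its assigned points
        new_centers = np.array([X[labels == k].mean(0) for k in range(n_clusters)])
        # 2c. Check for convergence
        if np.all(centers == new_centers):
            break
        centers = new_centers
    return centers, labels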
Figure: 1
Figure: 2
Figure: 3
Figure: 4
Figure: 5
Figure: 6
Figure: 7
10.5 PROCEDURE
10.6 RESULT
By doing this experiment I Implemented EM for bayesian networks and observed its output and
application
11.1 AIM:
To build Neural Network (BP) Models
11.3 ALGORITHM
import numpy as np

# training data (assumed from the Input / Actual Output printed in the OUTPUT section below:
# X is normalised column-wise by its maximum, y is kept on its original scale)
X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)
y = np.array(([92], [86], [89]), dtype=float)
X = X / np.amax(X, axis=0)   # normalise each input feature to [0, 1]

#Sigmoid Function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
#Derivative of Sigmoid Function
def derivatives_sigmoid(x):
return x * (1 - x)
#Variable initialization
epoch=7000 #Setting training iterations
lr=0.1 #Setting learning rate
inputlayer_neurons = 2 #number of features in data set
hiddenlayer_neurons = 3 #number of hidden layers neurons
output_neurons = 1 #number of neurons at output layer
#weight and bias initialization
wh=np.random.uniform(size=(inputlayer_neurons,hiddenlayer_neurons))
bh=np.random.uniform(size=(1,hiddenlayer_neurons))
wout=np.random.uniform(size=(hiddenlayer_neurons,output_neurons))
bout=np.random.uniform(size=(1,output_neurons)) # draws a random range
of numbers uniformly of dim x*y
#Forward Propagation
for i in range(epoch):
hinp1=np.dot(X,wh)
hinp=hinp1 + bh
hlayer_act = sigmoid(hinp)
outinp1=np.dot(hlayer_act,wout)
outinp= outinp1+ bout
output = sigmoid(outinp)
#Backpropagation
EO = y-output
outgrad = derivatives_sigmoid(output)
d_output = EO* outgrad
EH = d_output.dot(wout.T)
hiddengrad = derivatives_sigmoid(hlayer_act)
#how much hidden layer wts contributed to error
d_hiddenlayer = EH * hiddengrad
wout += hlayer_act.T.dot(d_output) *lr
# dotproduct of nextlayererror and currentlayerop
bout+= np.sum(d_output, axis=0,keepdims=True) *lr
wh += X.T.dot(d_hiddenlayer) *lr
bh += np.sum(d_hiddenlayer, axis=0,keepdims=True) *lr
# print the results shown in the OUTPUT section
print("Input: \n" + str(X))
print("Actual Output: \n" + str(y))
print("Predicted Output: \n" + str(output))
OUTPUT:
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[92.]
[86.]
[89.]]
Predicted Output:
[[0.99999894]
[0.99999822]
[0.99999887]]
11.5 PROCEDURE
11.6 RESULT
By doing this experiment I built Neural Network (BP) Models and observed its output and application
12.1 AIM:
To build deep Neural Network Models
12.3 ALGORITHM
Simple Convolutional Neural Network (CNN) to classify CIFAR images
The CIFAR10 dataset contains 60,000 color images in 10 classes, with 6,000 images in each
class. The dataset is divided into 50,000 training images and 10,000 testing images. The classes
are mutually exclusive and there is no overlap between them.
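The listing below assumes the dataset has already been loaded and normalised; a sketch of that setup, following the standard TensorFlow CIFAR-10 tutorial, is:

# Assumed setup for the CIFAR-10 CNN listing below
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
# Normalize pixel values to the range [0, 1]
train_images, test_images = train_images / 255.0, test_images / 255.0

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']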
The 6 lines of code below define the convolutional base using a common pattern: a stack
of Conv2D and MaxPooling2D layers.
plt.figure(figsize=(8,8))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i])
# The CIFAR labels happen to be arrays,
#which is why we need the extra index
plt.xlabel(class_names[train_labels[i][0]])
plt.show()
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32,
3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.summary()
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
model.summary()
# Adam is the best among the adaptive optimizers in most of the cases
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# train the model; the history object is used for the accuracy curves below
# (10 epochs assumed, following the standard CIFAR-10 tutorial)
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
OUTPUT:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 30, 30, 32) 896
=================================================================
Total params: 56320 (220.00 KB)
Trainable params: 56320 (220.00 KB)
Non-trainable params: 0 (0.00 Byte)
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 30, 30, 32) 896
=================================================================
Total params: 122570 (478.79 KB)
Trainable params: 122570 (478.79 KB)
Non-trainable params: 0 (0.00 Byte)
12.5 PROCEDURE
12.6 RESULT
By doing this experiment I built Deep Neural Network Models and observed its
output and application.
13.1 AIM:
To build deep Neural Network for digit classification
model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])
print('Loss:', loss)
print('Accuracy:', accuracy)
pred = model.predict(X[0,:].reshape(1, -1))
print(pred)
print(y[0,:])
dgts = load_digits()
print(dgts.data.shape)
import matplotlib.pyplot as plt
plt.gray()
plt.matshow(dgts.images[0])
plt.show()
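A self-contained sketch of the digit-classification network that the fragments above belong to is given below; the architecture, the pixel scaling and the 80/20 split are assumptions, while the compile settings and the printed metrics follow the listing.

# Digit classification with a dense network on load_digits (sketch)
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

digits = load_digits()                       # 1797 samples, 8x8 images flattened to 64 features
X = digits.data / 16.0                       # scale pixel values to [0, 1] (assumed)
y = to_categorical(digits.target, 10)        # one-hot encode the 10 digit classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = Sequential([
    Dense(64, activation='relu', input_shape=(64,)),
    Dense(32, activation='relu'),
    Dense(10, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5, batch_size=32)

loss, accuracy = model.evaluate(X_test, y_test)
print('Loss:', loss)
print('Accuracy:', accuracy)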
OUTPUT
Epoch 1/5
45/45 [==============================] - 1s 2ms/step - loss: 5.0013 -
accuracy: 0.2289
Epoch 2/5
45/45 [==============================] - 0s 2ms/step - loss: 1.1459 -
accuracy: 0.6354
Epoch 3/5
45/45 [==============================] - 0s 2ms/step - loss: 0.5060 -
accuracy: 0.8462
Epoch 4/5
45/45 [==============================] - 0s 2ms/step - loss: 0.3240 -
accuracy: 0.9040
Epoch 5/5
45/45 [==============================] - 0s 2ms/step - loss: 0.2358 -
accuracy: 0.9283
12/12 [==============================] - 0s 2ms/step - loss: 0.2600 -
accuracy: 0.9250
Loss: 0.2599562704563141
Accuracy: 0.925000011920929
1/1 [==============================] - 0s 52ms/step
[[9.9925131e-01 8.5686344e-07 7.1382141e-07 4.9433766e-06 6.5290674e-06
5.4754998e-04 2.5058553e-06 3.0487461e-06 3.8330847e-05 1.4414983e-04]]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
(1797, 64)
<Figure size 640x480 with 0 Axes>