
MACHINE LEARNING ASSIGNMENT

HOME ASSIGNMENT SOLUTION

Day 3
1. PROBABILITY:
Here are some simple programs related to probability in the context of
machine learning, along with explanations:
1.1 Coin Flip Simulation:
Explanation:
This program simulates a series of coin flips (either "Heads" or "Tails") and calculates the
probability of getting "Heads" based on the outcomes.
It conducts a specified number of coin flips (num_flips) and counts how many times
"Heads" occurs.
The probability of getting "Heads" is then calculated by dividing the count of "Heads" by
the total number of flips.
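Since the code itself is not reproduced here, a minimal sketch consistent with the description might look like this (the value num_flips = 1000 is an assumption):

import random

num_flips = 1000  # number of simulated coin flips (assumed value)
heads_count = sum(1 for _ in range(num_flips)
                  if random.choice(["Heads", "Tails"]) == "Heads")
probability_heads = heads_count / num_flips
print("Probability of Heads:", probability_heads)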
1.2 Dice Roll Simulation:
Explanation:
This program simulates rolling a fair six-sided die and calculates the probability of rolling
a specific target number.
It conducts a specified number of dice rolls (num_rolls) and counts how many times the
target number appears.
The probability of rolling the target number is then calculated by dividing the count of
occurrences by the total number of rolls.
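A minimal sketch of this simulation (num_rolls and the target number are assumed values):

import random

num_rolls = 1000   # number of simulated dice rolls (assumed value)
target = 6         # target number to track (assumed value)
count = sum(1 for _ in range(num_rolls) if random.randint(1, 6) == target)
probability_target = count / num_rolls
print("Probability of rolling", target, ":", probability_target)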
1.3 Bayesian Classifier:
Explanation:
This program calculates the probability of a specific target class in a dataset.
It counts the occurrences of each class in the dataset using the Counter class from the
collections module.
Then, it calculates the probability of the target class by dividing the count of the target
class by the total number of samples.
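A minimal sketch of this calculation (the sample labels and target class are assumptions):

from collections import Counter

dataset = ["spam", "ham", "spam", "ham", "spam"]  # example class labels (assumed)
target_class = "spam"
class_counts = Counter(dataset)
probability = class_counts[target_class] / len(dataset)
print("P(" + target_class + ") =", probability)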
These simple programs illustrate how to work with probabilities in machine learning,
including coin flips, dice rolls, and Bayesian classification.
1.4 Gaussian Probability Density Function (PDF):
Explanation:
This program creates a normal distribution with a specified mean and standard
deviation using the scipy.stats library. It then generates data points and plots the
Probability Density Function (PDF) of the normal distribution using Matplotlib.
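A minimal sketch of this plot (mean 0 and standard deviation 1 are assumed parameters):

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

mean, std_dev = 0, 1  # assumed parameters
x = np.linspace(mean - 4 * std_dev, mean + 4 * std_dev, 200)
plt.plot(x, norm.pdf(x, loc=mean, scale=std_dev))  # PDF of the normal distribution
plt.xlabel("x")
plt.ylabel("Density")
plt.title("Normal Distribution PDF")
plt.show()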
These programs demonstrate various applications of probability in machine learning,
including random sampling, Monte Carlo simulations, Bayesian classification, probability
distributions, and PDF visualization.
1.5 Binomial Distribution:
Explanation:
This program calculates the probability of getting a specific number of
heads in a series of coin tosses using the binomial distribution. It uses the scipy.stats
library to compute the probability mass function (PMF) for the given parameters.
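A minimal sketch of this calculation (the parameter values are assumptions):

from scipy.stats import binom

n = 10    # number of coin tosses (assumed)
k = 3     # target number of heads (assumed)
p = 0.5   # probability of heads on a single toss
print("P(exactly", k, "heads in", n, "tosses) =", binom.pmf(k, n, p))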
1.6 Bayes' Theorem:
This program calculates the posterior probability of event A given event B using Bayes'
theorem.
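A minimal sketch of this calculation (the probability values are assumptions):

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_a = 0.3          # prior probability of A (assumed)
p_b_given_a = 0.8  # likelihood of B given A (assumed)
p_b = 0.5          # probability of B (assumed)
p_a_given_b = (p_b_given_a * p_a) / p_b
print("P(A|B) =", p_a_given_b)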
These simple programs illustrate basic probability concepts that are often used in
machine learning and data analysis. You can modify them and use them as building
blocks for more complex probability calculations in your ML projects.
Day 4-5
2. STATISTICS:
Creating a simple statistical program in machine learning involves using basic statistical
techniques to analyze data. In this example, we'll build a Python program that calculates
and explains basic statistical measures using a sample dataset. We'll use the NumPy
library for numerical operations and the Matplotlib library for visualization.
Explanation of the program:
We import the necessary libraries: NumPy for numerical operations and Matplotlib for
data visualization.
We define a sample dataset called data. You should replace this with your own dataset.
The basic_statistics function takes the dataset as input and calculates and explains
various basic statistical measures:
Mean: Calculated as the sum of all data points divided by the number of data points.
Median: The middle value in the sorted dataset.
Mode: The most frequently occurring value.
Standard Deviation: A measure of data dispersion.
Variance: A measure of how much the data values vary from the mean.
Range: The difference between the maximum and minimum values.
Histogram: A visualization of the data distribution.
We call the basic_statistics function with the sample dataset data.
Finally, we display a histogram of the data using Matplotlib for visualizing the
distribution of values.
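A minimal sketch of this program (the sample dataset is an assumption, and Counter is used for the mode since NumPy has no mode function):

import numpy as np
import matplotlib.pyplot as plt
from collections import Counter

data = [12, 15, 12, 18, 20, 22, 15, 12, 24, 19]  # sample dataset (assumed)

def basic_statistics(data):
    print("Mean:", np.mean(data))
    print("Median:", np.median(data))
    print("Mode:", Counter(data).most_common(1)[0][0])
    print("Standard Deviation:", np.std(data))
    print("Variance:", np.var(data))
    print("Range:", np.max(data) - np.min(data))

basic_statistics(data)

plt.hist(data, bins=5)  # histogram of the data distribution
plt.title("Data Distribution")
plt.show()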
You can modify this program to work with your own datasets and further extend it to
include more advanced statistical techniques or machine learning algorithms as needed.
1. Mean Calculation:
Explanation:
The calculate_mean function computes the mean (average) of a list of data points. It
does this by summing up all the data points and dividing the sum by the total number of
data points. Mean is a fundamental statistic used in various machine learning tasks,
such as data preprocessing, and it provides a measure of central tendency.
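A minimal sketch of this function (the example data are an assumption):

def calculate_mean(data):
    # Sum of all data points divided by the number of data points
    return sum(data) / len(data)

print(calculate_mean([2, 4, 6, 8]))  # 5.0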
2. Standard Deviation Calculation:
Explanation:
The calculate_standard_deviation function calculates the standard deviation, which
measures the dispersion or spread of data points around the mean. It first computes the
mean of the data, then calculates the squared differences between each data point and
the mean, and finally takes the square root of the average of those squared differences.
Standard deviation is useful in understanding the variability within a dataset and is often
used in data analysis and feature scaling in machine learning.
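A minimal sketch of this function, computing the population standard deviation as described (the example data are an assumption):

import math

def calculate_standard_deviation(data):
    mean = sum(data) / len(data)
    squared_diffs = [(x - mean) ** 2 for x in data]
    return math.sqrt(sum(squared_diffs) / len(data))

print(calculate_standard_deviation([2, 4, 6, 8]))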
3. Correlation Coefficient Calculation:
Explanation:
The calculate_correlation_coefficient function computes the Pearson correlation
coefficient, which measures the linear relationship between two variables, x and y. It is
used to understand how changes in one variable relate to changes in another. The
formula involves calculating the mean of each variable, then computing the numerator
as the sum of the product of the deviations from the mean for x and y, and dividing it by
the product of the standard deviations of x and y. A positive correlation coefficient
indicates a positive linear relationship, while a negative coefficient indicates a negative
linear relationship.
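A minimal sketch of this function (the example data are an assumption):

import math

def calculate_correlation_coefficient(x, y):
    mean_x = sum(x) / len(x)
    mean_y = sum(y) / len(y)
    numerator = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    denom_x = math.sqrt(sum((xi - mean_x) ** 2 for xi in x))
    denom_y = math.sqrt(sum((yi - mean_y) ** 2 for yi in y))
    return numerator / (denom_x * denom_y)

print(calculate_correlation_coefficient([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0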
These simple statistical programs are building blocks in machine learning and data
analysis, providing valuable insights into data and helping make informed decisions in
various applications.

Day-6
3. Numbers, Arithmetic, and Printing to the Console:
Explanation:

# Get user input for two numbers: This is a comment that explains what the following
lines of code will do. It's not executed as part of the program.

num1 = float(input("Enter the first number: ")): This line prompts the user to enter the
first number and stores it in the variable num1. input() is used to get input from the
user, and float() is used to convert the user's input to a floating-point number (a number
with decimal points) because we want to allow decimal numbers as well as whole
numbers.

num2 = float(input("Enter the second number: ")): This line is similar to the previous
one but for the second number. It prompts the user to enter the second number and
stores it in the variable num2.
result = num1 + num2: This line calculates the sum of the two numbers entered by the
user and stores the result in the variable result. The + operator is used for addition.

print("The sum of {} and {} is: {}".format(num1, num2, result)): This line displays the
result to the user. It uses the print() function to output a message that includes the
values of num1, num2, and result. The format() method is used to format the output
string and insert the values of these variables into the string.
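Reassembling the lines quoted above, the complete program is:

# Get user input for two numbers
num1 = float(input("Enter the first number: "))
num2 = float(input("Enter the second number: "))
result = num1 + num2
print("The sum of {} and {} is: {}".format(num1, num2, result))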

Here's how the program works:

The user is prompted to enter two numbers.


The program converts the user's input to floating-point numbers.
It then adds the two numbers together and stores the result in the result variable.
Finally, it displays the result with a message indicating what the sum is.

Day-7
4. Variables:
Explanation:
The program starts with comments. Comments in Python start with the # symbol and
are used to provide explanations or notes to make the code more readable.

Variable assignment: We define three variables, name, age, and height, and assign
values to them. name is assigned a string, age is assigned an integer, and height is
assigned a float.

Printing variables: We use the print function to display the values stored in the
variables. This helps us see the current values of name, age, and height.

Modifying variables: We update the age and height variables by performing arithmetic
operations on them. We add 5 to age and 10 to height. Python allows you to change the
value of a variable after it has been assigned.
Printing modified variables: We print the modified values of age and height to see the
changes that occurred after the modifications.
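A minimal sketch of this program (the initial values are assumptions):

# Variable assignment
name = "Alice"    # string (assumed value)
age = 25          # integer (assumed value)
height = 160.5    # float (assumed value)

# Printing variables
print(name, age, height)

# Modifying variables
age += 5       # add 5 to age
height += 10   # add 10 to height

# Printing modified variables
print(age, height)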

Day-8
Strings:
Explanation:
1.This program uses the print() function to display three different strings on the screen.
The first string, "Hello, World!", is a common greeting used in programming to test code
execution.
The second string, "Python is fun.", is a statement expressing a positive opinion about
Python.
The third string, "12345", contains only numeric characters and is treated as a string,
not a numeric value, because it's enclosed in double quotes.
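The three print() calls described above:

print("Hello, World!")
print("Python is fun.")
print("12345")  # numeric characters, but still a string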
2. Program:
This program demonstrates string concatenation by combining different strings.
It defines two variables, first_name and last_name, each containing a string
representing a name.
The program then concatenates these strings with the + operator to create a full_name
string, which contains both the first and last names separated by a space.
Finally, it uses the print() function to display three different strings:
"First Name: " followed by the value of first_name.
"Last Name: " followed by the value of last_name.
"Full Name: " followed by the value of full_name.
These two programs showcase basic string usage in Python, including printing strings
and string concatenation.
Day-8
3. Conditional & Boolean:
Explanation:
The first line of code assigns the value 10 to the variable x, and the second line assigns 5
to the variable y.
Conditional 1 (If-Else Statement): It checks if x is greater than y. If it is, it assigns the
string "x is greater than y" to the result variable; otherwise, it assigns "x is less than or
equal to y". The ternary conditional operator x if condition else y is used here.
Conditional 2 (Boolean Comparison): It checks if x is equal to y and stores the result in
the is_equal variable. This line demonstrates a boolean comparison using the equality
operator ==.
Conditional 3 (Logical Operator): It checks if x is both greater than y and not equal to y
and stores the result in the is_greater_and_not_equal variable. This line demonstrates
the use of logical operators (and in this case) to combine multiple conditions.
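Putting the three conditionals described above together:

x = 10
y = 5

# Conditional 1: ternary if-else
result = "x is greater than y" if x > y else "x is less than or equal to y"

# Conditional 2: boolean comparison
is_equal = x == y

# Conditional 3: logical operator combining two conditions
is_greater_and_not_equal = (x > y) and (x != y)

print(result)
print(is_equal)
print(is_greater_and_not_equal)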
When you run this code, it will output the results of these three conditionals, providing
information about the relationship between x and y.
DAY-
Basic List Programs:
Explanation: Creating a List and Accessing Elements
This program creates a list named my_list containing five integers.
It then uses indexing (my_list[2]) to access and print the element at index 2 of the list
(the third element, since indexing starts at 0), which is 3.
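A minimal sketch of this program (the five integers are an assumption consistent with the description):

my_list = [1, 2, 3, 4, 5]
print(my_list[2])  # prints 3 (indexing starts at 0)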

2. Explanation: Modifying a List


This program creates a list named fruits containing three string elements.
It uses the append method to add the string "orange" to the end of the list.
Finally, it prints the updated list, which now includes "orange" as a fourth element.
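A minimal sketch of this program (the fruit names are assumptions):

fruits = ["apple", "banana", "cherry"]  # assumed values
fruits.append("orange")                 # add to the end of the list
print(fruits)  # ['apple', 'banana', 'cherry', 'orange']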
These basic list operations demonstrate how to create, access, and modify elements in a
Python list.

Day-10
3. Program: Creating and accessing a tuple:
Explanation:
In this program, we create a tuple named fruits containing three elements: 'apple',
'banana', and 'cherry'.
We use indexing to access elements of the tuple. Tuples are zero-indexed, so fruits[0]
gives us the first element ('apple') and fruits[1] gives us the second element ('banana').
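The program as described:

fruits = ('apple', 'banana', 'cherry')
print(fruits[0])  # apple
print(fruits[1])  # banana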

Program: Concatenating tuples:


Explanation:
In this program, we have two tuples, tuple1 and tuple2, containing integer elements.
We can concatenate tuples by using the + operator, which combines the elements from
both tuples to create a new tuple result_tuple.
The resulting tuple result_tuple contains all the elements from tuple1 followed by all
the elements from tuple2.
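A minimal sketch of this program (the integer values are assumptions):

tuple1 = (1, 2, 3)  # assumed values
tuple2 = (4, 5, 6)  # assumed values
result_tuple = tuple1 + tuple2  # concatenation with +
print(result_tuple)  # (1, 2, 3, 4, 5, 6)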
These simple programs demonstrate the basic concepts of creating, accessing, and
manipulating tuples in Python.

DAY-12
1.Loops
For Loop & while loop:
For loop:
Explanation:
The for loop in Python is used to iterate over a sequence of items, which can be a range
of numbers, a list, or any iterable.
In this program, we use the range() function to generate a sequence of numbers from 1
to 5. The call range(1, 6) generates numbers starting from 1 (inclusive) up to 6
(exclusive). The loop variable i takes on each value in the sequence generated by
range(), and we print it using the print() function.
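The loop as described:

for i in range(1, 6):  # 1, 2, 3, 4, 5
    print(i)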
Program to calculate the sum of numbers from 1 to 10 using a for loop:
In this program, we initialize a variable sum to 0. This variable will be used to
accumulate the sum of numbers.
We then use a for loop with the range(1, 11) to iterate through numbers from 1 to 10.
Inside the loop, we add each value of i to the sum variable using the += operator. This
accumulates the sum as we iterate through the numbers.
Finally, we print the sum after the loop is complete.
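The program as described (note that naming the variable sum shadows Python's built-in sum() function):

sum = 0  # accumulator, as in the description (shadows the built-in sum())
for i in range(1, 11):
    sum += i
print(sum)  # 55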
These two examples demonstrate the basic structure of a for loop in Python and how it
can be used for simple iteration and accumulation tasks.

2.While Loop:
1.1 Countdown with a While Loop:
Explanation:
In this program, we initialize a variable count with the value 5.
The while loop continues as long as count is greater than 0.
Inside the loop, we print the value of count and then decrement it by 1 (count -= 1).
This loop will count down from 5 to 1, printing each value along the way, and then
terminate when count becomes 0.
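The countdown as described:

count = 5
while count > 0:
    print(count)
    count -= 1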
1.2 Sum of Numbers with a While Loop:
Explanation:
Here, we have two variables, n initialized to 5 and sum initialized to 0.
The while loop continues as long as n is greater than 0.
Inside the loop, we add the value of n to the sum and then decrement n by 1 (n -= 1).
This loop will add up the numbers from 5 down to 1, and the final sum will be printed
after the loop terminates.
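The program as described (again, sum shadows the built-in function of the same name):

n = 5
sum = 0
while n > 0:
    sum += n
    n -= 1
print(sum)  # 15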
These are basic examples of while loops in Python. They demonstrate how a while loop
continues executing a block of code as long as a specified condition is true and how you
can use it for tasks like counting down or accumulating sums.

DAY-13
Dictionary:
1.1 Creating a Dictionary and Accessing Values:
Explanation:
In this program, we create a dictionary called student_info with key-value pairs
representing information about a student. We then access and print specific values from
the dictionary using the keys. This demonstrates how to create a dictionary and access
its values by key.
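A minimal sketch of this program (the keys and values are assumptions):

student_info = {"name": "Alice", "age": 20, "grade": "A"}  # assumed values
print(student_info["name"])
print(student_info["age"])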

1.2 Adding and Updating Dictionary Values:


Explanation:
In this program, we start with an empty dictionary called employee_info. We then add
key-value pairs to the dictionary using the assignment operator (=). Later, we update the
value associated with the 'age' key. Finally, we print the updated dictionary to see the
changes. This demonstrates how to add and update values in a dictionary.
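A minimal sketch of this program (the keys and values are assumptions):

employee_info = {}                 # start with an empty dictionary
employee_info["name"] = "Bob"      # add key-value pairs (assumed values)
employee_info["age"] = 30
employee_info["age"] = 31          # update an existing value
print(employee_info)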

DAY:14-15
Function:
1.1 Program to calculate the square of a number:
This program defines a function called calculate_square that takes a parameter num.
Inside the function, it returns the square of the input num.
The function is then called with the argument 5, and the result is printed.
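The program as described:

def calculate_square(num):
    return num ** 2

print(calculate_square(5))  # 25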
1.2 Program to greet a person:
This program defines a function called greet that takes a parameter name.
Inside the function, it constructs a greeting message by concatenating "Hello, " with the
input name and an exclamation mark.
The function is then called with the argument "Alice", and the greeting is printed.
In both programs, we define a function to perform a specific task, and then we call that
function with appropriate arguments to get the desired result. Functions help in
organizing and reusing code effectively.
DAY-
Split / Slicing / Join
Program 1: Splitting a Sentence into Words:
Explanation:
In this program, we have a string sentence containing a sentence.
We use the split() method without passing any arguments to split the sentence into
words. By default, split() splits the string at spaces and creates a list of words.
The resulting list of words is stored in the words variable and then printed. This program
will output a list of words from the sentence.
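A minimal sketch of this program (the sentence is an assumption):

sentence = "Python is a great language"  # assumed example
words = sentence.split()  # splits at spaces by default
print(words)  # ['Python', 'is', 'a', 'great', 'language']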
Program 2: Splitting a CSV Line into Values
Explanation:
In this program, we have a string csv_line containing a comma-separated
values (CSV) line.
We use the split(',') method, passing a comma , as the argument, to split
the CSV line into individual values. This is a common technique for parsing
CSV data in Python.
The resulting list of values is stored in the values variable and then printed.
This program will output a list of values extracted from the CSV line.
In both of these examples, the split method is used to break a larger string
into smaller components based on a specified delimiter, such as spaces in
the first program and commas in the second program.
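A minimal sketch of this program (the CSV line is an assumption):

csv_line = "Alice,30,New York"  # assumed example
values = csv_line.split(',')    # split at each comma
print(values)  # ['Alice', '30', 'New York']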
Program 3: Splitting a String Based on a Specific Character:


In this program, we have a string sentence that contains a list of fruits
separated by commas.
We use split(","), passing a comma as the argument to the split() function.
This tells the function to split the string wherever it encounters a comma.
The function returns a list of substrings, where each substring is a part of
the original string separated by commas.
The list of fruits is then printed to the console.
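A minimal sketch of this program (the fruit names are assumptions):

sentence = "apple,banana,cherry"  # assumed example
fruits = sentence.split(",")      # split wherever a comma occurs
print(fruits)  # ['apple', 'banana', 'cherry']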
DAY-
List Comprehensions / Set Comprehensions / Dictionary Comprehensions:
Explanation:
Program 1: Creating and Accessing a Dictionary:
In this program, we create a dictionary called student with key-value pairs
representing the name, age, and grade of a student. We then access and
print these values using their respective keys.
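A minimal sketch of the dictionary described above, plus one-line examples matching the section title (all values are assumptions):

# Program 1: creating and accessing a dictionary
student = {"name": "Alice", "age": 20, "grade": "A"}  # assumed values
print(student["name"], student["age"], student["grade"])

# Comprehension examples matching the section title
squares_list = [x ** 2 for x in range(5)]      # list comprehension
evens_set = {x * 2 for x in range(5)}          # set comprehension
squares_dict = {x: x ** 2 for x in range(5)}   # dictionary comprehension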

DAY-22
Using *args and **kwargs in Machine Learning:
1.Using *args in Machine Learning:
Explanation:
In this example, train_logistic_regression accepts the training data, labels,
and any number of hyperparameters using *args. It then splits the data,
creates a logistic regression model with the specified hyperparameters, and
returns the accuracy score.
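A minimal sketch of this function (the original code is not shown, and scikit-learn estimators expect keyword arguments, so this sketch treats the first optional positional argument as the regularization strength C, which is an assumption):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_logistic_regression(X, y, *args):
    # Treat the first extra positional argument as C (assumed convention)
    C = args[0] if args else 1.0
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    model = LogisticRegression(C=C, max_iter=1000)
    model.fit(X_train, y_train)
    return accuracy_score(y_test, model.predict(X_test))

X, y = load_iris(return_X_y=True)
print(train_logistic_regression(X, y, 0.5))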
2. Using **kwargs in Machine Learning:
Explanation:
In this example, train_classifier accepts the training data and labels, as well as any
number of keyword arguments (e.g., hyperparameters) using **kwargs. It creates a
classifier (in this case, logistic regression) with the specified keyword arguments and
returns the accuracy score.
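A minimal sketch of this function (the dataset and hyperparameter values are assumptions):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_classifier(X, y, **kwargs):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    model = LogisticRegression(**kwargs)  # keyword args map to hyperparameters
    model.fit(X_train, y_train)
    return accuracy_score(y_test, model.predict(X_test))

X, y = load_iris(return_X_y=True)
print(train_classifier(X, y, C=0.5, max_iter=1000))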
Using **kwargs provides a more readable and flexible way to pass arguments, especially
when dealing with a large number of parameters that can vary from one experiment to
another in machine learning. It also makes it easier to understand the purpose of each
argument since they are named.
DAY-27
Iterator:
Batch Iterator for Training Data:
Explanation:
data: This is the input training data, which can be a list or a NumPy array.
batch_size: The size of each mini-batch.
data_size: The total number of data points in the training data.
num_batches: The total number of mini-batches needed to cover the entire dataset.
The for loop iterates through the dataset in mini-batches, and the yield statement
returns each mini-batch.
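A minimal sketch of this iterator (the sample data are an assumption):

import numpy as np

def batch_iterator(data, batch_size):
    data_size = len(data)
    num_batches = int(np.ceil(data_size / batch_size))
    for i in range(num_batches):
        yield data[i * batch_size:(i + 1) * batch_size]  # one mini-batch

for batch in batch_iterator(np.arange(10), batch_size=4):
    print(batch)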
Sequential Iterator for Dataset:
data: This is the input dataset, which can be a list or a NumPy array.
index: The current index within the dataset.
The __iter__ method makes the iterator an iterable, and __next__ defines how to
retrieve the next item in the dataset sequentially.
When there are no more items to iterate over, a StopIteration exception is raised to
signal the end of iteration.
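A minimal sketch of this iterator class:

class SequentialIterator:
    def __init__(self, data):
        self.data = data
        self.index = 0

    def __iter__(self):
        return self  # the iterator is its own iterable

    def __next__(self):
        if self.index >= len(self.data):
            raise StopIteration  # signal the end of iteration
        item = self.data[self.index]
        self.index += 1
        return item

for item in SequentialIterator([10, 20, 30]):
    print(item)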
These are two simple iterator programs used in machine learning. The first one is for
creating mini-batches during training, while the second one is a basic sequential iterator
for looping through a dataset. You can modify and extend these iterators to suit your
specific machine learning tasks and data types.
DAY-30
NumPy:
Explanation:
We generate synthetic data with a linear relationship between X and y, with some
random noise added.
We add a bias term (intercept) to the input features.
We use the normal equation from linear algebra to calculate the best-fit weights (theta)
for the linear regression model.
Finally, we make predictions for new input data.
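A minimal sketch of this program (the synthetic coefficients 4 and 3 are assumptions):

import numpy as np

np.random.seed(0)
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)  # linear relationship plus noise

X_b = np.c_[np.ones((100, 1)), X]        # add bias (intercept) term
theta = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y  # normal equation

X_new = np.array([[0.0], [2.0]])
X_new_b = np.c_[np.ones((2, 1)), X_new]
print(X_new_b @ theta)                   # predictions for new inputs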
Explanation:
We load the Iris dataset and perform binary classification to predict whether an Iris
flower is of the "Iris-Virginica" class or not.
We split the dataset into training and testing sets and standardize the features.
We use the logistic regression model from scikit-learn to train on the training data and
make predictions on the test data.
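A minimal sketch of this program (the split ratio and random seed are assumptions):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

iris = load_iris()
X = iris.data
y = (iris.target == 2).astype(int)  # 1 if Iris-Virginica, else 0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = LogisticRegression()
model.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))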
These two examples showcase the use of NumPy in implementing simple machine
learning algorithms for both regression and classification tasks. NumPy is a powerful
library for numerical computations in Python and is widely used in the machine learning
field for data manipulation and mathematical operations.
DAY-31-32
Pandas:
Program 1: Reading and Displaying Data with Pandas:
We import the pandas library as pd.
We create a dictionary called data containing three lists: 'Name', 'Age', and 'City'.
We use the pd.DataFrame() function to convert the dictionary into a pandas DataFrame,
which is a two-dimensional tabular data structure.
Finally, we print the DataFrame to the console, which will display the data in a nicely
formatted table.
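A minimal sketch of this program (the sample values are assumptions):

import pandas as pd

data = {
    "Name": ["Alice", "Bob", "Charlie"],       # assumed values
    "Age": [25, 30, 35],
    "City": ["New York", "Paris", "London"],
}
df = pd.DataFrame(data)
print(df)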
Program 2: Data Manipulation with Pandas:
We import the pandas library as pd.
We assume there is a CSV file named 'data.csv' containing data, and we use
pd.read_csv() to read this data into a DataFrame called df.
We use df.head() to display the first 5 rows of the DataFrame.
We calculate the mean age of the people in the DataFrame using df['Age'].mean(), and
then we print it.
We filter the DataFrame to select people older than 30 using df[df['Age'] > 30], and we
print this filtered subset.
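A minimal sketch of this program (it assumes a file named data.csv with an 'Age' column, as stated above):

import pandas as pd

df = pd.read_csv("data.csv")          # assumed file with an 'Age' column
print(df.head())                      # first 5 rows
print("Mean age:", df["Age"].mean())  # average age
print(df[df["Age"] > 30])             # people older than 30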
These two simple programs demonstrate basic operations with pandas, including
creating DataFrames from data and CSV files, displaying data, and performing simple
data manipulations like calculating means and filtering data based on conditions.
Day-33
Matplotlib:
Program 1: Creating a Basic Line Plot:
Explanation:
We first import Matplotlib as plt.
We define two lists, x and y, which represent the x and y coordinates of the
points to be plotted.
We create a figure and an axis using plt.subplots(). A figure represents the
entire window or page, while an axis is a specific subplot within that figure.
We use ax.plot(x, y) to create a line plot using the data from x and y.
Labels for the x and y axes are set using ax.set_xlabel() and ax.set_ylabel().
We set a title for the plot using ax.set_title().
Finally, we display the plot using plt.show().
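A minimal sketch of this plot (the coordinates are assumptions):

import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]      # assumed values
y = [2, 4, 6, 8, 10]     # assumed values

fig, ax = plt.subplots()
ax.plot(x, y)
ax.set_xlabel("X values")
ax.set_ylabel("Y values")
ax.set_title("Basic Line Plot")
plt.show()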

Program 2: Creating a Basic Bar Chart:


Explanation:

We again import Matplotlib as plt.


We define two lists, categories and values, where categories represents the
categories or labels for the bars, and values represents the corresponding
values.
We create a figure and an axis using plt.subplots().
We create a bar chart using ax.bar(categories, values).
Labels for the x and y axes are set using ax.set_xlabel() and ax.set_ylabel().
We set a title for the plot using ax.set_title().
Finally, we display the bar chart using plt.show().
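A minimal sketch of this chart (the categories and values are assumptions):

import matplotlib.pyplot as plt

categories = ["A", "B", "C", "D"]  # assumed labels
values = [3, 7, 5, 2]              # assumed values

fig, ax = plt.subplots()
ax.bar(categories, values)
ax.set_xlabel("Category")
ax.set_ylabel("Value")
ax.set_title("Basic Bar Chart")
plt.show()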
These are two simple examples of creating basic line plots and bar charts
using Matplotlib in Python. Matplotlib is a powerful library for data
visualization, and you can customize these plots further to suit your specific
needs.
DAY-34
Seaborn:
Program 1: Creating a Scatter Plot with Seaborn:

Explanation:
First, we import Seaborn as sns and Matplotlib as plt, which is often used
alongside Seaborn for customization.
We create sample data in the form of two lists, x and y.
The sns.scatterplot() function is used to create a scatter plot. It takes the x
and y data as input.
We then add labels to the x and y axes using plt.xlabel() and plt.ylabel(),
and set a title for the plot using plt.title().
Finally, we use plt.show() to display the plot.
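A minimal sketch of this plot (the sample data are assumptions):

import seaborn as sns
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]   # assumed values
y = [5, 3, 6, 2, 7]   # assumed values

sns.scatterplot(x=x, y=y)
plt.xlabel("X values")
plt.ylabel("Y values")
plt.title("Scatter Plot with Seaborn")
plt.show()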

Program 2: Creating a Box Plot with Seaborn:


Explanation:
In this program, we import Seaborn and Matplotlib as before.
We load sample data from Seaborn's built-in datasets using
sns.load_dataset(). In this case, we load the "tips" dataset, which contains
information about restaurant bills and tips.
The sns.boxplot() function is used to create a box plot. We specify the
variables we want to plot on the x and y axes using the x and y parameters
and provide the data using the data parameter.
We add labels and a title to the plot using plt.xlabel(), plt.ylabel(), and
plt.title().
Finally, we use plt.show() to display the box plot.
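A minimal sketch of this plot (plotting total_bill by day is an assumed choice of variables from the tips dataset):

import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")  # built-in sample dataset
sns.boxplot(x="day", y="total_bill", data=tips)  # assumed variable choice
plt.xlabel("Day")
plt.ylabel("Total Bill")
plt.title("Box Plot of Restaurant Bills by Day")
plt.show()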
These are two simple Seaborn programs demonstrating how to create
scatter plots and box plots. Seaborn provides a wide range of customization
options and can be used to create various types of statistical visualizations
with ease.

DAY-41
MACHINE LEARNING
Linear Regression:
1. Simple Linear Regression
Explanation:
We import the necessary libraries: numpy for numerical operations,
sklearn.linear_model for the linear regression model, and matplotlib.pyplot
for visualization.
We create sample data for hours studied and exam scores.
Reshape the input data because sklearn expects it in a specific format (2D
array).
Create a LinearRegression model and fit it to the data using the fit method.
Predict the exam score for a student who studied 7 hours.
Plot the data points and the regression line.
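A minimal sketch of this program (the hours and score values are assumptions):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

hours = np.array([1, 2, 3, 4, 5, 6]).reshape(-1, 1)  # 2D array, as sklearn expects
scores = np.array([50, 55, 65, 70, 80, 85])          # assumed values

model = LinearRegression()
model.fit(hours, scores)
print("Predicted score for 7 hours:", model.predict([[7]])[0])

plt.scatter(hours, scores)                   # data points
plt.plot(hours, model.predict(hours), "r-")  # regression line
plt.xlabel("Hours studied")
plt.ylabel("Exam score")
plt.show()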

2. Multiple Linear Regression:


In multiple linear regression, we consider more than one independent
variable. Let's create a program that performs multiple linear regression to
predict a car's price based on its attributes like mileage and age.
Explanation:
We import the necessary libraries and create sample data for mileage, age,
and car prices.
Create a feature matrix X by stacking the independent variables (mileage
and age) horizontally.
Create a LinearRegression model and fit it to the data.
Predict the price of a car with 8000 miles and 4 years old.
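A minimal sketch of this program (the sample mileage, age, and price values are assumptions):

import numpy as np
from sklearn.linear_model import LinearRegression

mileage = np.array([5000, 10000, 15000, 20000, 25000])  # assumed values
age = np.array([1, 2, 3, 5, 7])                         # assumed values
prices = np.array([20000, 18000, 16000, 13000, 10000])  # assumed values

X = np.column_stack((mileage, age))  # stack independent variables horizontally
model = LinearRegression()
model.fit(X, prices)

print("Predicted price:", model.predict([[8000, 4]])[0])  # 8000 miles, 4 years old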
These two examples demonstrate simple and multiple linear regression in
Python using scikit-learn. Linear regression models aim to find the best-
fitting line (in simple linear regression) or hyperplane (in multiple linear
regression) to predict the dependent variable based on the independent
variables.
DAY-44
KNN - K Nearest Neighbours:

1.KNN for Classification:


Explanation:
We start by importing necessary libraries, including KNeighborsClassifier
from scikit-learn, which provides the KNN classifier.
We load the Iris dataset using load_iris() from scikit-learn.
The dataset is split into training and testing sets using train_test_split.
We create a KNN classifier with n_neighbors=3 (meaning it will consider 3
nearest neighbors for classification).
The classifier is trained on the training data using fit.
Predictions are made on the test data using predict.
We calculate and print the accuracy of the model using accuracy_score.
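A minimal sketch of this program (the split ratio and random seed are assumptions):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

knn = KNeighborsClassifier(n_neighbors=3)  # consider 3 nearest neighbors
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))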

2. KNN for Regression:


Explanation:
We import KNeighborsRegressor from scikit-learn for regression tasks.
We generate some synthetic data for regression, where X is the input
feature, and y is the target variable.
A KNN regressor with n_neighbors=5 is created.
The regressor is trained on the synthetic data using fit.
We make a prediction for a new data point (new_x) using predict.
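A minimal sketch of this program (the synthetic data and the new_x value are assumptions):

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

np.random.seed(0)
X = np.arange(0, 10, 0.5).reshape(-1, 1)     # input feature (assumed)
y = 2 * X.ravel() + np.random.randn(len(X))  # noisy linear target (assumed)

knn_reg = KNeighborsRegressor(n_neighbors=5)
knn_reg.fit(X, y)

new_x = [[4.2]]  # assumed new data point
print("Prediction:", knn_reg.predict(new_x)[0])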
In both examples, KNN is used for either classification or regression. The
key parameter is n_neighbors, which determines the number of nearest
neighbors to consider when making predictions. These examples
demonstrate how to use scikit-learn's KNN implementations for both
classification and regression tasks

DAY-54
Hierarchical Clustering:
Agglomerative Hierarchical Clustering:
Agglomerative hierarchical clustering starts with individual data points as
separate clusters and merges them iteratively into larger clusters until only
one cluster remains. This is a bottom-up approach.
Explanation:
We generate some random data points in this example using NumPy.
We use AgglomerativeClustering from the sklearn.cluster module to
perform agglomerative hierarchical clustering with a specified number of
clusters (n_clusters).
The .fit(X) method fits the clustering model to the data.
Finally, we plot the clusters, with different colors representing different
clusters.
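A minimal sketch of this program (the number of points and clusters are assumptions):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import AgglomerativeClustering

np.random.seed(0)
X = np.random.rand(30, 2)  # random 2D data points

clustering = AgglomerativeClustering(n_clusters=3)  # assumed cluster count
clustering.fit(X)

plt.scatter(X[:, 0], X[:, 1], c=clustering.labels_)  # color points by cluster
plt.title("Agglomerative Hierarchical Clustering")
plt.show()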
2. Divisive Hierarchical Clustering:
Divisive hierarchical clustering is the reverse of agglomerative clustering; it
starts with all data points in a single cluster and recursively splits them into
smaller clusters until each data point forms its own cluster. This is a top-
down approach.
Explanation:
We generate random data points similarly to the previous example.
We use scipy.cluster.hierarchy.linkage to compute the linkage matrix, which
contains information about how clusters are merged at each step using the
"ward" linkage method.
The dendrogram function is used to visualize the hierarchical clustering as a
dendrogram.
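A minimal sketch of this program (the number of points is an assumption):

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

np.random.seed(0)
X = np.random.rand(15, 2)  # random 2D data points

Z = linkage(X, method="ward")  # linkage matrix describing the merges
dendrogram(Z)
plt.title("Hierarchical Clustering Dendrogram")
plt.show()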
In both examples, you can replace the sample data with your own dataset.
Hierarchical clustering is useful for exploring the structure of your data and
can be visualized through dendrograms or cluster assignments.
DAY:60-61
Optimization Techniques:
1.Brute Force Search:
Brute Force Search is a straightforward optimization technique that
exhaustively searches through all possible solutions to find the best one.
While it's not the most efficient method for large problem spaces, it's
simple and reliable for small-scale problems.
Program: Let's use the example of finding the maximum value in an array.
Explanation: This program iterates through each element in the array and
keeps track of the maximum value found so far. It updates max_value
whenever it encounters an element that is greater than the current
maximum.
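A minimal sketch of this program (the example array is an assumption):

def find_max(arr):
    max_value = arr[0]
    for value in arr:
        if value > max_value:
            max_value = value  # update when a larger element is found
    return max_value

print(find_max([3, 7, 1, 9, 4]))  # 9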
2. Binary Search:
Binary Search is an efficient search algorithm for finding a specific element
in a sorted list or array. It repeatedly divides the search space in half until
the target element is found.
Program: Let's use the example of searching for a specific number in a
sorted list.

Explanation:
Binary Search works by repeatedly narrowing down the search range by
comparing the middle element with the target value. If the middle element
is equal to the target, it returns the index. If the target is smaller, it searches
in the left half, and if it's larger, it searches in the right half until the target is
found or the search range becomes empty (indicating the target is not in
the list).
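A minimal sketch of this program (the example list and target are assumptions):

def binary_search(sorted_list, target):
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid               # target found
        elif sorted_list[mid] < target:
            low = mid + 1            # search the right half
        else:
            high = mid - 1           # search the left half
    return -1                        # target not in the list

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3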
These are two simple optimization techniques commonly used in
programming. Brute Force Search is suitable for small-scale problems, while
Binary Search is efficient for searching in sorted data. Depending on the
problem, you can choose the appropriate technique to optimize your code.
DAY-64
Convolutional Neural Networks:
Explanation:
This CNN is designed for image classification on the MNIST dataset, which
contains 28x28 grayscale images of handwritten digits (0-9).
It uses the Keras Sequential API to create a model.
The first layer is a 2D convolutional layer with 32 filters, a 3x3 kernel, and
ReLU activation. It expects input images with a shape of (28, 28, 1).
A max-pooling layer with a 2x2 pool size reduces spatial dimensions.
The output of the convolutional and pooling layers is flattened.
A fully connected layer with 128 neurons and ReLU activation is added.
The output layer has 10 neurons (one for each digit) and uses softmax
activation for multi-class classification.
The model is compiled with the Adam optimizer, sparse categorical cross-
entropy loss, and accuracy as the metric.
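A minimal sketch of this model, following the architecture described above:

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # one neuron per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()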
DAY-64
Convolutional Neural Networks:
CNN #2: Image Classification with Data Augmentation using TensorFlow
and Keras:
Explanation:
This CNN is designed for image classification on the CIFAR-10 dataset,
which contains 32x32 color images in 10 different classes.
It loads the CIFAR-10 dataset and normalizes pixel values to be between 0
and 1.
Similar to the first example, it uses a Keras Sequential model.
Data augmentation is applied to the training data using
ImageDataGenerator. This helps improve model generalization by creating
variations of the training images.
The model architecture includes convolutional layers, max-pooling layers,
and fully connected layers.
It is compiled and trained using the Adam optimizer and sparse categorical
cross-entropy loss.
The fit method is used to train the model with the augmented data.
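A minimal sketch of this program (the augmentation parameters, layer sizes, and epoch count are assumptions):

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # normalize to [0, 1]

datagen = ImageDataGenerator(rotation_range=15,      # assumed parameters
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(datagen.flow(x_train, y_train, batch_size=64),
          epochs=5, validation_data=(x_test, y_test))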
These are two simple CNN examples for image classification tasks using
TensorFlow and Keras. The first example is for MNIST digit classification,
while the second example is for CIFAR-10 image classification with data
augmentation.
DAY 67-68
MLPs Multilayer Perceptrons:
Example 1: MLP for Binary Classification:
Explanation:

We import the necessary TensorFlow libraries and classes.


We create a Sequential model, which is a linear stack of layers.
We add an input layer with 2 neurons and specify the input shape as (2,)
because we have 2 input features. We use the ReLU activation function.
We add a hidden layer with 3 neurons and ReLU activation.
We add an output layer with 1 neuron and sigmoid activation because this
is a binary classification problem.
We compile the model, specifying the loss function as binary cross-entropy
(common for binary classification) and the optimizer as Adam.
Finally, we display a summary of the model architecture.
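The model as described:

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(2, activation="relu", input_shape=(2,)),  # input layer, 2 features
    layers.Dense(3, activation="relu"),                    # hidden layer
    layers.Dense(1, activation="sigmoid"),                 # binary output
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.summary()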

Example 2: MLP for Multiclass Classification:


Explanation:
This example is for multiclass classification with 3 classes.
Similar to the previous example, we create a Sequential model.
We add an input layer with 4 neurons and specify the input shape as (4,)
because we have 4 input features. We use the ReLU activation function.
We add a hidden layer with 5 neurons and ReLU activation.
We add an output layer with 3 neurons and softmax activation because this
is a multiclass classification problem with 3 classes.
The loss function is set to categorical cross-entropy, which is suitable for
multiclass classification.
We use the Adam optimizer.
Finally, we display a summary of the model architecture.
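The model as described:

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(4, activation="relu", input_shape=(4,)),  # 4 input features
    layers.Dense(5, activation="relu"),                    # hidden layer
    layers.Dense(3, activation="softmax"),                 # 3 output classes
])
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.summary()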
These examples provide a basic understanding of how to create simple MLP
architectures for binary and multiclass classification tasks using TensorFlow.
Depending on your specific problem and dataset, you may need to adjust
the architecture and hyperparameters accordingly.
DAY-71-72
Convolutional Neural Networks:
Explanation:
We import the necessary libraries from TensorFlow.
We define a CNN architecture using Sequential model. This CNN has three
convolutional layers followed by max-pooling layers, and two dense layers
for classification.
The first layer is a 2D convolutional layer with 32 filters, each of size (3, 3),
using ReLU activation and input shape of (32, 32, 3) for RGB images.
We add a max-pooling layer with a pool size of (2, 2).
Next, we add two more sets of convolutional and max-pooling layers with
increasing filter sizes (64 filters each).
We then flatten the output to feed it into dense layers.
Two dense layers with ReLU activation are added for classification, with the
final layer using softmax activation for multi-class classification (in this
example, 10 classes for CIFAR-10).
The model is compiled using Adam optimizer, sparse categorical
crossentropy loss (appropriate for multi-class classification), and accuracy
as the evaluation metric.
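A minimal sketch of this architecture (the dense layer width of 64 is an assumption):

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),  # 10 classes, e.g. CIFAR-10
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])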

Program 2: CNN for Binary Image Classification:


Explanation:
This program is similar to the first one, but it is designed for binary image
classification (e.g., cat vs. dog).
The input shape is (64, 64, 1) for grayscale images.
The last dense layer uses a sigmoid activation function for binary
classification.
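A minimal sketch of this variant (the layer sizes are assumptions):

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu",
                  input_shape=(64, 64, 1)),  # grayscale input
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary output (e.g. cat vs. dog)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])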
Both programs follow a similar structure with slight differences based on
the specific task (multi-class vs. binary classification) and dataset
characteristics (input shape, number of classes).
Day -73
Sequence Models:
1. HMM Forward Algorithm:
Explanation:
The forward_algorithm function calculates the probability of observing a
sequence of observations given an HMM. It uses the forward algorithm,
which is a dynamic programming approach to compute these
probabilities.

Observations is the sequence of observed emissions over time.

TransitionMatrix represents the state transition probabilities.

EmissionMatrix represents the emission probabilities.

InitialProbabilities represents the initial state probabilities.

We initialize a matrix forward to store the forward probabilities.

The algorithm computes the forward probabilities recursively for each time step t,
where each entry forward[j][t] represents the probability of being in state j at time t
given the observations up to time t.

The final probability of the observations is the sum of the probabilities of being in
each state at the last time step T.
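A minimal NumPy sketch of the forward algorithm as described (observations are assumed to be integer emission indices):

import numpy as np

def forward_algorithm(observations, transition_matrix, emission_matrix,
                      initial_probabilities):
    num_states = transition_matrix.shape[0]
    T = len(observations)
    forward = np.zeros((num_states, T))

    # Initialization: start in each state and emit the first observation
    forward[:, 0] = initial_probabilities * emission_matrix[:, observations[0]]

    # Recursion: forward[j][t] = sum_i forward[i][t-1] * P(i -> j) * P(obs_t | j)
    for t in range(1, T):
        for j in range(num_states):
            forward[j, t] = (forward[:, t - 1] @ transition_matrix[:, j]) * \
                            emission_matrix[j, observations[t]]

    # Termination: sum over states at the last time step
    return np.sum(forward[:, T - 1])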

2. HMM Viterbi Algorithm:


Explanation:
The viterbi_algorithm function uses the Viterbi algorithm to find the most
likely sequence of hidden states given a sequence of observations.

It is similar to the forward algorithm but also keeps track of the best state
sequence (the Viterbi path).

The Viterbi matrix stores the probabilities of the most likely path to each
state at each time step.

The backpointer matrix stores the best previous state that leads to the
current state at each time step.

After computing the Viterbi probabilities and backpointers, we backtrack from the last
time step to find the best path by selecting the state with the highest Viterbi
probability at each time step.
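A minimal NumPy sketch of the Viterbi algorithm as described:

import numpy as np

def viterbi_algorithm(observations, transition_matrix, emission_matrix,
                      initial_probabilities):
    num_states = transition_matrix.shape[0]
    T = len(observations)
    viterbi = np.zeros((num_states, T))
    backpointer = np.zeros((num_states, T), dtype=int)

    viterbi[:, 0] = initial_probabilities * emission_matrix[:, observations[0]]

    for t in range(1, T):
        for j in range(num_states):
            scores = viterbi[:, t - 1] * transition_matrix[:, j]
            backpointer[j, t] = np.argmax(scores)  # best previous state
            viterbi[j, t] = np.max(scores) * emission_matrix[j, observations[t]]

    # Backtrack from the most probable final state
    best_path = [int(np.argmax(viterbi[:, T - 1]))]
    for t in range(T - 1, 0, -1):
        best_path.insert(0, int(backpointer[best_path[0], t]))
    return best_path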

These two pseudocode examples demonstrate the fundamental algorithms for
calculating probabilities and finding the most likely state sequences in a
Hidden Markov Model. These algorithms are building blocks for various
HMM-based applications like speech recognition, part-of-speech tagging,
and bioinformatics.

DAY-75
Program 1: Text Generation using an RNN:
Explanation:
We start by defining the text data, which in this case is "Hello, how are you
today?"
We create a character mapping to convert characters to numerical indices
and vice versa.
We prepare the training data by creating sequences of characters and their
corresponding target characters.
Next, we define an RNN model using the Sequential API in Keras. The
model consists of a SimpleRNN layer with 64 units and a Dense layer with
softmax activation for character prediction.
We compile the model using sparse categorical cross-entropy loss and the
Adam optimizer.
We train the model on the training data for 100 epochs.
Finally, we use the trained model to generate text starting from a seed text
("Hello, how") by predicting the next character at each step and appending
it to the generated text.
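A minimal sketch of this program (the sequence length, the one-hot input encoding, and the number of generated characters are assumptions):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

text = "Hello, how are you today?"
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}
idx_to_char = {i: c for c, i in char_to_idx.items()}

seq_length = 5  # assumed training sequence length
num_chars = len(chars)

# One-hot encode input sequences; targets are the indices of the next character
X = np.zeros((len(text) - seq_length, seq_length, num_chars), dtype=np.float32)
y = np.zeros(len(text) - seq_length, dtype=np.int64)
for i in range(len(text) - seq_length):
    for t, c in enumerate(text[i:i + seq_length]):
        X[i, t, char_to_idx[c]] = 1.0
    y[i] = char_to_idx[text[i + seq_length]]

model = models.Sequential([
    layers.SimpleRNN(64, input_shape=(seq_length, num_chars)),
    layers.Dense(num_chars, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=100, verbose=0)

# Generate text from a seed by repeatedly predicting the next character
generated = "Hello, how"
for _ in range(10):
    x_pred = np.zeros((1, seq_length, num_chars), dtype=np.float32)
    for t, c in enumerate(generated[-seq_length:]):
        x_pred[0, t, char_to_idx[c]] = 1.0
    next_idx = int(np.argmax(model.predict(x_pred, verbose=0)))
    generated += idx_to_char[next_idx]
print(generated)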

DAY-77
1.Simple GRU Model for Sequence Prediction:
In this program, we first import the necessary libraries, including
TensorFlow.
We create a simple GRU model using the Sequential API from TensorFlow.
The model consists of an input layer that expects sequences of length 10
with 1 feature, a GRU layer with 32 units, and an output layer with 1
neuron.
We compile the model using the Adam optimizer and mean squared error
loss.
Next, we generate synthetic data for training purposes. X represents 100
sequences, each of length 10 with 1 feature, and y represents the
corresponding target values.
Finally, we train the model for 10 epochs using a batch size of 32.
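The model and training loop as described (the synthetic data are random placeholders):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(10, 1)),  # sequences of length 10 with 1 feature
    layers.GRU(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(100, 10, 1)  # 100 synthetic sequences
y = np.random.rand(100, 1)      # corresponding target values

model.fit(X, y, epochs=10, batch_size=32)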
DAY-88
Text Classification using BERT:
In this program, we'll use BERT for text classification. We'll fine-tune a pre-
trained BERT model on a specific classification task, such as sentiment
analysis.
Explanation:
We use the Hugging Face Transformers library to load a pre-trained BERT
model and tokenizer.
The input text is tokenized using the tokenizer, and special tokens for [CLS]
and [SEP] are added.
The tokenized input is converted into a PyTorch tensor and fed to the BERT
model for inference.
The model's output logits represent the predicted class probabilities, and
we select the class with the highest probability as the prediction.
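A minimal inference sketch (the model name, input text, and label count are assumptions; in practice the model would first be fine-tuned on a labeled sentiment dataset, so an off-the-shelf checkpoint gives meaningless predictions here):

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # assumed binary sentiment task

text = "I really enjoyed this movie!"   # assumed example input
inputs = tokenizer(text, return_tensors="pt")  # adds [CLS]/[SEP], returns tensors

with torch.no_grad():
    logits = model(**inputs).logits  # predicted class scores

prediction = int(torch.argmax(logits, dim=1))
print("Predicted class:", prediction)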

DAY-90
NLTK, spaCy:
NLTK Program:
Explanation:
Import the necessary modules from NLTK.
Define a sample text.
Tokenize the text using word_tokenize.
Create a set of stopwords and remove them from the tokens.
Perform stemming on the filtered tokens using PorterStemmer.
Calculate the frequency distribution of stemmed tokens.
Print the original tokens, filtered tokens, stemmed tokens, and the most
common words.
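A minimal sketch of this program (the sample text is an assumption):

import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.probability import FreqDist

nltk.download("punkt")      # required resources on first run
nltk.download("stopwords")

text = "Natural language processing helps computers understand human language."
tokens = word_tokenize(text)

stop_words = set(stopwords.words("english"))
filtered = [t for t in tokens if t.lower() not in stop_words and t.isalpha()]

stemmer = PorterStemmer()
stemmed = [stemmer.stem(t) for t in filtered]

fdist = FreqDist(stemmed)
print("Tokens:", tokens)
print("Filtered:", filtered)
print("Stemmed:", stemmed)
print("Most common:", fdist.most_common(3))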

spaCy Program:
Explanation:
Import the spaCy library.
Load the spaCy English model (en_core_web_sm).
Define a sample text.
Process the text with spaCy, which performs tokenization, part-of-speech
tagging, named entity recognition, and more.
Extract named entities from the processed document.
Print the named entities and their corresponding labels.
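A minimal sketch of this program (the sample text is an assumption, and en_core_web_sm must be installed separately):

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
text = "Apple is looking at buying a U.K. startup for $1 billion."
doc = nlp(text)  # tokenization, POS tagging, NER, etc.

for ent in doc.ents:
    print(ent.text, ent.label_)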
These are simple examples to get you started with NLTK and spaCy. NLTK is
often used for text preprocessing tasks like tokenization and stemming,
while spaCy is a more comprehensive NLP library that provides a wide
range of natural language processing capabilities, including named entity
recognition.
