
Analysis of Algorithms (Lab Experiments)

Name: Bhavik Shah
SAP ID: 60004200044
Division/Batch: A-2

Experiment 1
Aim: Write a program to implement and analyse the time complexity of Insertion Sort and
Selection Sort.

Theory:
Insertion sort is a simple sorting algorithm that works much like sorting playing cards in
your hands. The array is virtually split into a sorted and an unsorted part. Values from the
unsorted part are picked and placed at the correct position in the sorted part.
The selection sort algorithm sorts an array by repeatedly finding the minimum element
(considering ascending order) from the unsorted part and putting it at the beginning. The
algorithm maintains two subarrays in a given array.
• The subarray which is already sorted.
• The remaining subarray which is unsorted.
In every iteration of the selection sort, the minimum element (considering ascending
order) from the unsorted subarray is picked and moved to the sorted subarray.
Time Complexity: The time complexity of an algorithm quantifies the amount of time
taken by the algorithm to run as a function of the length of the input. Note that the running
time is expressed as a function of the input length, not as the actual execution time on the
machine the algorithm runs on.

Algorithm:
• Insertion Sort
Step 1 − If it is the first element, it is already sorted.
Step 2 − Pick next element
Step 3 − Compare with all elements in the sorted sub-list
Step 4 − Shift all the elements in the sorted sub-list that is greater than the value to be
sorted
Step 5 − Insert the value
Step 6 − Repeat until list is sorted

• Selection Sort
Step 1 − Set MIN to location 0
Step 2 − Search the minimum element in the list
Step 3 − Swap with value at location MIN
Step 4 − Increment MIN to point to next element
Step 5 − Repeat until list is sorted
Code:
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

void insertionSort(int arr[], int n)
{
int i, key, j;
for (i = 1; i < n; i++)
{
key = arr[i];
j = i - 1;
while (j >= 0 && arr[j] > key)
{
arr[j + 1] = arr[j];
j = j - 1;
}
arr[j + 1] = key;
}
}
void selectionsort(int arr[], int n)
{
int i, min, j, temp;
for (i = 0; i < n - 1; i++)
{
min = i;
for (j = i + 1; j < n; j++)
{
if (arr[min] > arr[j])
{
min = j;
}
}
temp = arr[min];
arr[min] = arr[i];
arr[i] = temp;
}
}

int main()
{
int num = 50000;
int r, arr[num], arr2[num];
srand(time(NULL));
for (int i = 0; i < num; i++)
{
r = rand() % num;
arr[i] = r;
}
for (int i = 0; i < num; i++)
{
arr2[i] = arr[i];
}

// avg case
clock_t begin1 = clock();
selectionsort(arr, num);
clock_t end1 = clock();
double time_spent1 = (double)(end1 - begin1) / CLOCKS_PER_SEC;
clock_t begin2 = clock();
insertionSort(arr2, num);
clock_t end2 = clock();
double time_spent2 = (double)(end2 - begin2) / CLOCKS_PER_SEC;
printf("Average Case: Time taken by Insertion Sort:%f \n\tTime taken by Selection Sort:%f ",
time_spent2, time_spent1);
// best case
begin1 = clock();
selectionsort(arr, num);
end1 = clock();
time_spent1 = (double)(end1 - begin1) / CLOCKS_PER_SEC;
begin2 = clock();
insertionSort(arr2, num);
end2 = clock();
time_spent2 = (double)(end2 - begin2) / CLOCKS_PER_SEC;
printf("\nBest Case: Time taken by Insertion Sort:%f \n\tTime taken by Selection Sort:%f ", time_spent2,
time_spent1);

// worst case
int temp, j = num - 1;
for (int i = 0; i < num / 2; i++)
{
temp = arr[j];
arr[j] = arr[i];
arr[i] = temp;

temp = arr2[j];
arr2[j] = arr2[i];
arr2[i] = temp;
j = j - 1;
}
begin1 = clock();
selectionsort(arr, num);
end1 = clock();
time_spent1 = (double)(end1 - begin1) / CLOCKS_PER_SEC;
begin2 = clock();
insertionSort(arr2, num);
end2 = clock();
time_spent2 = (double)(end2 - begin2) / CLOCKS_PER_SEC;
printf("\nWorst Case: Time taken by Insertion Sort:%f \n\tTime taken by Selection Sort:%f ",
time_spent2, time_spent1);

return 0;
}

Output:

Conclusion: We have implemented Insertion Sort and Selection Sort and analysed their
time complexity.

Experiment 2
Aim: Write a program to implement and analyse the time complexity of Merge Sort and Quick Sort.
Theory:
Merge Sort is a Divide and Conquer algorithm. It divides the input array into two halves,
calls itself for the two halves, and then it merges the two sorted halves. The merge()
function is used for merging two halves. The merge(arr, l, m, r) is a key process that
assumes that arr[l..m] and arr[m+1..r] are sorted and merges the two sorted sub-arrays
into one.
QuickSort is a Divide and Conquer algorithm. It picks an element as pivot and partitions
the given array around the picked pivot. There are many different versions of quickSort
that pick pivot in different ways.
• Always pick first element as pivot.
• Always pick last element as pivot (implemented below).
• Pick a random element as pivot.
• Pick median as pivot.
The key process in quickSort is partition(). Its goal is, given an array and an element x of
the array as the pivot, to put x at its correct position in the sorted array, with all smaller
elements (smaller than x) before x and all greater elements after x.
All this should be done in linear time.
Algorithm:
 Merge Sort
procedure mergesort( var a as array )
if ( n == 1 ) return a
var l1 as array = a[0] ... a[n/2]
var l2 as array = a[n/2+1] ... a[n]
l1 = mergesort( l1 )
l2 = mergesort( l2 )
return merge( l1, l2 )
end procedure

procedure merge( var a as array, var b as array )


var c as array
while ( a and b have elements )
if ( a[0] > b[0] )
add b[0] to the end of c
remove b[0] from b
else
add a[0] to the end of c
remove a[0] from a
end if
end while
while ( a has elements )
add a[0] to the end of c
remove a[0] from a
end while
while ( b has elements )
add b[0] to the end of c
remove b[0] from b
end while
return c
end procedure

 Quick Sort
Pseudo Code for recursive QuickSort function:
/* low –> Starting index, high –> Ending index */
quickSort(arr[], low, high) {
if (low < high) {
/* pi is partitioning index, arr[pi] is now at right place */
pi = partition(arr, low, high);
quickSort(arr, low, pi – 1); // Before pi
quickSort(arr, pi + 1, high); // After pi
}
}
Pseudo code for partition()
/* This function takes last element as pivot, places the pivot element at its correct position in
sorted array, and places all smaller (smaller than pivot) to left of pivot and all greater elements to
right of pivot */
partition (arr[], low, high)
{
// pivot (Element to be placed at right position)
pivot = arr[high];
i = (low – 1) // Index of smaller element and indicates the
// right position of pivot found so far
for (j = low; j <= high- 1; j++){
// If current element is smaller than the pivot
if (arr[j] < pivot){
i++; // increment index of smaller element
swap arr[i] and arr[j]
}
}
swap arr[i + 1] and arr[high]
return (i + 1)
}
Code:
Merge Sort:-
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

void merge(int arr[], int l, int m, int r)
{
int i, j, k;
int n1 = m - l + 1;
int n2 = r - m;

int L[n1], R[n2];


for (i = 0; i < n1; i++)
L[i] = arr[l + i];
for (j = 0; j < n2; j++)
R[j] = arr[m + 1 + j];

i = 0;
j = 0;
k = l;
while (i < n1 && j < n2)
{
if (L[i] <= R[j])
{
arr[k] = L[i];
i++;
}
else
{
arr[k] = R[j];
j++;
}
k++;
}

while (i < n1)
{
arr[k] = L[i];
i++;
k++;
}

while (j < n2)
{
arr[k] = R[j];
j++;
k++;
}
}

void mergeSort(int arr[], int l, int r)
{
if (l < r)
{

int m = l + (r - l) / 2;

mergeSort(arr, l, m);
mergeSort(arr, m + 1, r);

merge(arr, l, m, r);
}
}
int main()
{
int num = 50000;
int arr[num], r;
srand(time(NULL));
for (int i = 0; i < num; i++)
{
r = rand() % num;
arr[i] = r;
}

// avg case
clock_t begin1 = clock();
mergeSort(arr, 0, num - 1);
clock_t end1 = clock();
double time_spent1 = (double)(end1 - begin1) / CLOCKS_PER_SEC;

printf("Average Case (Random Order): Time taken by Merge Sort:%f \n", time_spent1);

begin1 = clock();
mergeSort(arr, 0, num - 1);
end1 = clock();
time_spent1 = (double)(end1 - begin1) / CLOCKS_PER_SEC;

printf("Sorted Array Case (Increasing Order): Time taken by Merge Sort:%f \n", time_spent1);

int temp, j = num - 1;


for (int i = 0; i < num / 2; i++)
{
temp = arr[j];
arr[j] = arr[i];
arr[i] = temp;
j = j - 1;
}

clock_t begin2 = clock();
mergeSort(arr, 0, num - 1);
clock_t end2 = clock();
double time_spent2 = (double)(end2 - begin2) / CLOCKS_PER_SEC;

printf("Sorted Array Case (Decreasing Order): Time taken by Merge Sort:%f \n", time_spent2);
return 0;
}

Quick Sort:-
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

void swap(int *a, int *b)
{
int t = *a;
*a = *b;
*b = t;
}

int partition(int array[], int low, int high)
{

int pivot = array[high];

int i = (low - 1);


for (int j = low; j < high; j++)
{
if (array[j] <= pivot)
{

i++;

swap(&array[i], &array[j]);
}
}

swap(&array[i + 1], &array[high]);

return (i + 1);
}

void quickSort(int array[], int low, int high)
{
if (low < high)
{

int pi = partition(array, low, high);

quickSort(array, low, pi - 1);

quickSort(array, pi + 1, high);
}
}

int main()
{
int num = 50000;
int arr[num], r;
srand(time(NULL));
for (int i = 0; i < num; i++)
{
r = rand() % num;
arr[i] = r;
}

clock_t begin1 = clock();
quickSort(arr, 0, num - 1);
clock_t end1 = clock();
double time_spent1 = (double)(end1 - begin1) / CLOCKS_PER_SEC;

printf("\nAverage Case (Random Order): Time taken by Quick Sort:%f \n", time_spent1);

begin1 = clock();
quickSort(arr, 0, num - 1);
end1 = clock();
time_spent1 = (double)(end1 - begin1) / CLOCKS_PER_SEC;
printf("\nSorted Array Case (Increasing Order): Time taken by Quick Sort:%f \n", time_spent1);

int temp, j = num - 1;


for (int i = 0; i < num / 2; i++)
{
temp = arr[j];
arr[j] = arr[i];
arr[i] = temp;
j = j - 1;
}

clock_t begin2 = clock();
quickSort(arr, 0, num - 1);
clock_t end2 = clock();
double time_spent2 = (double)(end2 - begin2) / CLOCKS_PER_SEC;

printf("\nSorted Array Case (Decreasing Order): Time taken by Quick Sort:%f \n", time_spent2);
return 0;
}

Output:
• Merge Sort Analysis

• Quick Sort Analysis

Conclusion: We have implemented Merge Sort and Quick Sort and analysed their time
complexity.

Experiment 3
Aim: Write a program to implement Single source shortest path using Dynamic
Programming

Theory:
Bellman–Ford algorithm is a solution for Single source shortest path using Dynamic
Programming.
The idea is to use the Bellman–Ford algorithm to compute the shortest paths from a single
source vertex to all the other vertices in a given weighted digraph. Bellman–Ford algorithm
is slower than Dijkstra’s Algorithm, but it can handle negative-weight edges in the graph,
unlike Dijkstra’s.
If a graph contains a “negative cycle” (i.e., a cycle whose edges sum to a negative value)
that is reachable from the source, then there is no shortest path. Any path that has a point
on the negative cycle can be made cheaper by one more walk around the negative cycle.
Bellman–Ford algorithm can easily detect any negative cycles in the graph.

The algorithm initializes the distance to the source to 0 and all other nodes to INFINITY .
Then for all edges, if the distance to the destination can be shortened by taking the edge,
the distance is updated to the new lower value. At each iteration i that the edges are
scanned, the algorithm finds all shortest paths of at most length i edges. Since the longest
possible path without a cycle can be V-1 edges, the edges must be scanned V-1 times to
ensure that the shortest path has been found for all nodes. A final scan of all the edges is
performed, and if any distance is updated, then a path of length |V| edges have been
found, which can only occur if at least one negative cycle exists in the graph.

Algorithm:
function BellmanFord(list vertices, list edges, vertex source, distance[], parent[])

// Step 1 – initialize the graph. In the beginning, every vertex gets a weight of
// INFINITY and a null parent, except for the source, whose weight is 0

for each vertex v in vertices
distance[v] = INFINITY
parent[v] = NULL

distance[source] = 0
// Step 2 – relax edges repeatedly
for i = 1 to V-1 // V – number of vertices
for each edge (u, v) with weight w
if (distance[u] + w) is less than distance[v]
distance[v] = distance[u] + w
parent[v] = u

// Step 3 – check for negative-weight cycles
for each edge (u, v) with weight w
if (distance[u] + w) is less than distance[v]
return “Graph contains a negative-weight cycle”

return distance[], parent[]


Code:
#include<stdio.h>
int graph[100][100];
int distance[100];
int num;
void InitialiseSingleSource(int source)
{
for (int i = 0; i < num; i++)
{
distance[i] = 9999;
}
distance[source] = 0;
}
void relax(int u, int v)
{
if (distance[v] > distance[u] + graph[u][v])
{
distance[v] = distance[u] + graph[u][v];
}
}

int BellmanFord(int source)
{
InitialiseSingleSource(source);
for (int i = 0; i < num - 1; i++)
{
for (int j = 0; j < num; j++)
{
for (int k = 0; k < num; k++)
{
if (graph[j][k] == 0)
continue;
relax(j, k);
}
}
}
for (int j = 0; j < num; j++)
{
for (int k = 0; k < num; k++)
{
if (graph[j][k] == 0)
continue;
if (distance[k] > distance[j] + graph[j][k])
{
printf("Graph has a Negative Cycle");
return 0;
}
}
}
return 1;
}
int main()
{
int ans = 1;
printf("Enter Number of Vertices: ");
scanf("%d", &num);
printf("Enter Weight edges : \n");
for (int i = 0; i < num; i++)
{
printf("From Vertex %d: ", i + 1);
for (int j = 0; j < num; j++)
{
scanf("%d", &graph[i][j]);
}
}
ans = BellmanFord(0);
if (ans)
{
for (int i = 0; i < num; i++)
{
printf("%d ", distance[i]);
}
}
return 0;
}

Output:

Conclusion: We have implemented single-source shortest path using Dynamic
Programming, i.e. the Bellman–Ford algorithm.
Experiment 4
Aim: Write a program to implement Longest common subsequence.
Theory:
LCS Problem Statement: Given two sequences, find the length of longest subsequence
present in both of them. A subsequence is a sequence that appears in the same relative
order, but not necessarily contiguous. For example, “abc”, “abg”, “bdf”, “aeg”, ‘”acefg”, ..
etc are subsequences of “abcdefg”.
In order to find the complexity of the brute-force approach, we first need the number of
possible subsequences of a string of length n, i.e., the number of subsequences with
lengths ranging from 1 to n. Recall from permutations and combinations that the number
of combinations with 1 element is nC1, with 2 elements nC2, and so forth. We know that
nC0 + nC1 + nC2 + … + nCn = 2^n. So a string of length n has 2^n - 1 different possible
subsequences, since we do not count the subsequence of length 0. This implies that the
time complexity of the brute-force approach is O(n * 2^n), since it takes O(n) time to
check whether a subsequence is common to both strings. This time complexity can be
improved using dynamic programming.
Examples:
LCS for input Sequences “ABCDGH” and “AEDFHR” is “ADH” of length 3.
LCS for input Sequences “AGGTAB” and “GXTXAYB” is “GTAB” of length 4.

Algorithm:
Algorithm: LCS-Length-Table-Formulation (X, Y)
m := length(X)
n := length(Y)
for i = 1 to m do
C[i, 0] := 0
for j = 1 to n do
C[0, j] := 0
for i = 1 to m do
for j = 1 to n do
if xi = yj
C[i, j] := C[i - 1, j - 1] + 1
B[i, j] := ‘D’
else
if C[i -1, j] ≥ C[i, j -1]
C[i, j] := C[i - 1, j]
B[i, j] := ‘U’
else
C[i, j] := C[i, j - 1]
B[i, j] := ‘L’
return C and B

Algorithm: Print-LCS (B, X, i, j)
if i = 0 or j = 0
return
if B[i, j] = ‘D’
Print-LCS(B, X, i-1, j-1)
Print(xi)
else if B[i, j] = ‘U’
Print-LCS(B, X, i-1, j)
else
Print-LCS(B, X, i, j-1)

Code:
#include <stdio.h>
#include <string.h>
int b[25][25], c[25 + 1][25 + 1];
void LCS_Length(char x[], char y[])
{
int m = strlen(x);
int n = strlen(y);
for (int j = 0; j < n + 1; j++)
{
c[0][j] = 0;
}
for (int i = 0; i < m + 1; i++)
{
c[i][0] = 0;
}
for (int i = 0; i < m; i++)
{
for (int j = 0; j < n; j++)
{
if (x[i] == y[j])
{
c[i + 1][j + 1] = c[i][j] + 1;
b[i][j] = 0;
}
else if (c[i][j + 1] >= c[i + 1][j])
{
c[i + 1][j + 1] = c[i][j + 1];
b[i][j] = 1;
}
else
{
c[i + 1][j + 1] = c[i + 1][j];
b[i][j] = 2;
}
}
}

return;
}
void Print_LCS(char x[], int i, int j)
{
if (i == -1 || j == -1)
return;
if (b[i][j] == 0)
{
Print_LCS(x, i - 1, j - 1);
printf("%c ", x[i]);
}
else if (b[i][j] == 1)
{
Print_LCS(x, i - 1, j);
}
else
{
Print_LCS(x, i, j - 1);
}
}

int main()
{
char str1[100],str2[100];
printf("Enter String 1 : ");
scanf("%s", str1);
printf("Enter String 2 : ");
scanf("%s", str2);

LCS_Length(str1, str2);
int m = strlen(str1);
int n = strlen(str2);
Print_LCS(str1, m - 1, n - 1);
return 0;
}

Output:

Conclusion: We have performed a program to implement Longest common subsequence.


Experiment 5
Aim: Write a program to implement Minimum Spanning Tree using Prim's and Kruskal's
algorithms.

Theory:
Prim’s algorithm is a Greedy algorithm. It starts with an empty spanning tree. The idea is to
maintain two sets of vertices. The first set contains the vertices already included in the
MST, the other set contains the vertices not yet included. At every step, it considers all the
edges that connect the two sets, and picks the minimum weight edge from these edges.
After picking the edge, it moves the other endpoint of the edge to the set containing MST.
A group of edges that connects two sets of vertices in a graph is called a cut in graph
theory. So, at every step of Prim’s algorithm, we find a cut (of two sets: one contains the
vertices already included in the MST and the other contains the rest of the vertices), pick
the minimum weight edge from the cut and include this vertex in the MST set (the set
that contains the already included vertices).

Kruskal's algorithm is a minimum spanning tree algorithm that takes a graph as input and
finds the subset of the edges of that graph which
• form a tree that includes every vertex
• has the minimum sum of weights among all the trees that can be formed from the
graph

It falls under a class of algorithms called greedy algorithms that find the local optimum
in the hopes of finding a global optimum.
We start from the edges with the lowest weight and keep adding edges until we reach
our goal.

The steps for implementing Kruskal's algorithm are as follows:

a. Sort all the edges from low weight to high.
b. Take the edge with the lowest weight and add it to the spanning tree. If
adding the edge creates a cycle, then reject this edge.
c. Keep adding edges until we reach all vertices.
Algorithm:
Prims:-
The pseudocode for prim's algorithm shows how we create two sets of vertices U and V-U.
U contains the list of vertices that have been visited and V-U the list of vertices that haven't.
One by one, we move vertices from set V-U to set U by connecting the least weight edge.
T = ∅;
U = { 1 };
while (U ≠ V)
let (u, v) be the lowest cost edge such that u ∈ U and v ∈ V - U;
T = T ∪ {(u, v)}
U = U ∪ {v}

Kruskal:-
Any minimum spanning tree algorithm revolves around checking if adding an edge
creates a loop or not.
The most common way to find this out is an algorithm called Union-Find. The Union-
Find algorithm divides the vertices into clusters and allows us to check whether two
vertices belong to the same cluster, and hence decide whether adding an edge creates a
cycle.
KRUSKAL(G):
A=∅
For each vertex v ∈ G.V:
MAKE-SET(v)
For each edge (u, v) ∈ G.E ordered by increasing order by weight(u, v):
if FIND-SET(u) ≠ FIND-SET(v):
A = A ∪ {(u, v)}
UNION(u, v)
return A

Code:
Prims:-
#include <stdio.h>
#include <stdlib.h>
int distances[100];
int completed[100];
int parent[100];
int ver;
int graph[100][100];
void initializeSource()
{
for (int i = 0; i < ver; i++)
{
distances[i] = 100;
completed[i] = 0;
parent[i] = -1;
}
distances[0] = 0;
parent[0] = 0;
}

void relax(int u, int v)
{
if (distances[v] > graph[u][v])
{
distances[v] = graph[u][v];
parent[v] = u;
}
}

int extractMin()
{
int min;
int some = 0;
for (int i = 0; i < ver; i++)
{
if (completed[i] != 1)
{
some = 1;
min = i;
break;
}
}

for (int i = 0; i < ver; i++)
{
if (completed[i] == 0 && distances[i] <= distances[min])
{
min = i;
}
}

return min;
}

int main()
{

printf("Enter Total Number Of Vertices In Graph: ");
scanf("%d",&ver);
printf("Enter Distances For each Vertex-\n");
for (int i = 0; i < ver; i++)
{
printf("From Vertex %c : ",i + 97);
for(int j=0;j<ver;j++){
scanf("%d",&graph[i][j]);
}
}

initializeSource();

int cur;

for (int i = 0; i < ver; i++)
{
cur = extractMin();
for (int i = 0; i < ver; i++)
{
if (completed[i] == 0 && graph[cur][i] != 0)
{
relax(cur, i);
}
}
completed[cur] = 1;
}
int sum = 0;
printf("Distances\t");
printf("Parent\n");
for (int i = 0; i < ver; i++)
{
printf(" %d\t", distances[i]);
printf(" \t%c\n", parent[i] + 97);
sum = sum + distances[i];
}
printf("Cost Of Minimum Spanning Tree: %d\n", sum);
return 0;
}
Kruskal:-
#include <stdio.h>
int graph[100][100];
int set[100];
int parent[100];
int ver;
int edge_no = 0;
struct edge
{
int pt1;
int pt2;
int wt;
int sol;
};
struct edge edges[100];

void edgesCalulate()
{
int count = 0;
for (int i = 0; i < ver; i++)
{
for (int j = 0; j <= i; j++)
{
if (graph[i][j] != 0)
{
edges[count].pt1 = i;
edges[count].pt2 = j;
edges[count].wt = graph[i][j];
edges[count].sol = 0;
count++;
}
}
}
edge_no = count;
}
void edgesSort()
{
struct edge temp;
for (int i = 0; i < edge_no; i++)
{
for (int j = 0; j < edge_no - 1; j++)
{
if (edges[j].wt > edges[j + 1].wt)
{
temp = edges[j];
edges[j] = edges[j + 1];
edges[j + 1] = temp;
}
}
}
}

int main()
{
printf("Enter Total Number Of Vertices In Graph: ");
scanf("%d",&ver);
printf("Enter Distances For each Vertex-\n");
for (int i = 0; i < ver; i++)
{
printf("From Vertex %c : ",i + 97);
for(int j=0;j<ver;j++){
scanf("%d",&graph[i][j]);
}
}
edgesCalulate();
edgesSort();
for (int i = 0; i < ver; i++)
{
set[i] = i;
}

for (int i = 0; i < edge_no; i++)
{
if (set[edges[i].pt1] != set[edges[i].pt2])
{
edges[i].sol = 1;
int temp;
temp = set[edges[i].pt1];
for (int j = 0; j < ver; j++)
{
if (set[j] == temp)
set[j] = set[edges[i].pt2];
}
}
}
int sum = 0;
for (int i = 0; i < edge_no; i++)
{
if (edges[i].sol)
{
sum = sum + edges[i].wt;
printf(" wt- %d ", edges[i].wt);
printf(" ( %c , %c ) \n", edges[i].pt1 + 97, edges[i].pt2 + 97);
}
}
printf("Minimum Cost Of Spanning Tree: %d ", sum);
return 0;
}

Output:
Prims:-

Kruskal:-

Conclusion: We have implemented Minimum Spanning Tree using Prim's and Kruskal's
algorithms.
Experiment 6
Aim: Write a program to implement Single source shortest path using Greedy Approach.
Theory:
Given a graph and a source vertex in the graph, find the shortest paths from the source to
all vertices in the given graph.
Dijkstra’s algorithm is very similar to Prim’s algorithm for minimum spanning tree. Like
Prim’s MST, we generate a SPT (shortest path tree) with a given source as a root. We
maintain two sets, one set contains vertices included in the shortest-path tree, other set
includes vertices not yet included in the shortest-path tree. At every step of the algorithm,
we find a vertex that is in the other set (set of not yet included) and has a minimum
distance from the source.
Below are the detailed steps and algorithm used in Dijkstra’s algorithm to find the shortest
path from a single source vertex to all other vertices in the given graph.
We need to maintain the path distance of every vertex. We can store that in an array of size
v, where v is the number of vertices.
We also want to be able to get the shortest path, not only know the length of the shortest
path. For this, we map each vertex to the vertex that last updated its path length.
Once the algorithm is over, we can backtrack from the destination vertex to the source
vertex to find the path.
A minimum priority queue can be used to efficiently receive the vertex with least path
distance.

Algorithm:
function dijkstra(G, S)
for each vertex V in G
distance[V] <- infinite
previous[V] <- NULL
If V != S, add V to Priority Queue Q
distance[S] <- 0

while Q IS NOT EMPTY
U <- Extract MIN from Q
for each unvisited neighbour V of U
tempDistance <- distance[U] + edge_weight(U, V)
if tempDistance < distance[V]
distance[V] <- tempDistance
previous[V] <- U
return distance[], previous[]

Code:
#include <stdio.h>
#include <stdlib.h>
int graph[100][100];
int n;
int distances[100];
int completed[100];
int pi[100];
void initializeSource(){
for(int i=0;i<n;i++){
distances[i]=100;
completed[i]=0;
pi[i]=-1;
}
distances[0]=0;
pi[0]=0;
}
void relax(int u,int v){
if(distances[v]>distances[u]+graph[u][v]){
distances[v]=distances[u]+graph[u][v];
pi[v]=u;
}
}
int extractMin(){
int min;
int some=0;

for(int i=0;i<n;i++){
if(completed[i]!=1){
some=1;
min=i;
break;
}
}

for(int i=0;i<n;i++){
if(completed[i]==0 && distances[i]<=distances[min]){
min=i;
}
}

return min;
}
int main(){
printf("Enter Number of Vertices: ");
scanf("%d",&n);
printf("Enter Weight of edges to all vertices : \n");
for(int i=0;i<n;i++){
printf("From Vertex %d: ",i);
for(int j=0;j<n;j++){
scanf("%d",&graph[i][j]);
}
}
initializeSource();
int cur;
for(int i=0;i<n;i++){
cur = extractMin();

for(int i=0;i<n;i++){
if(completed[i]==0 && graph[cur][i]!=0){
relax(cur,i);
}
}
completed[cur]=1;
}
printf("Vertex Distance Parent \n");
for(int i=0;i<n;i++){
printf("%d %d %d\n",i, distances[i],pi[i]);
}
return 0;
}

Output:

Conclusion: We have implemented single-source shortest path using a Greedy approach,
i.e. Dijkstra’s algorithm.
Experiment 7
Aim: Write a program to implement n queens Problem.
Theory:
N - Queens problem is to place n - queens in such a manner on an n x n chessboard that no
queens attack each other by being in the same row, column or diagonal.
It can be seen that for n = 1 the problem has a trivial solution, and no solution exists for
n = 2 and n = 3.

Place (k, i) returns a Boolean value that is true if the kth queen can be placed in column i. It
tests both whether i is distinct from all previous values x1, x2, ..., xk-1 and whether there is
no other queen on the same diagonal.
Using Place, we give a precise solution to the n-queens problem.
Place (k, i) returns true if a queen can be placed in the kth row and ith column; otherwise it
returns false.
x[] is a global array whose first k - 1 values have been set. Abs(r) returns the absolute
value of r.

Algorithm:
Place (k, i)
{
For j ← 1 to k - 1
do if (x [j] = i)
or (Abs (x [j] - i) = Abs (j - k))
then return false;
return true;
}

N - Queens (k, n)
{
For i ← 1 to n
do if Place (k, i) then
{
x [k] ← i;
if (k ==n) then
write (x [1....n]);
else
N - Queens (k + 1, n);
}
}

Code:
#include<stdio.h>
#include<math.h>
int n;
int x[20];

int Place(int k,int i){
int j;
for (j=1;j<=k-1;j++){
if(x[j]==i || abs(x[j]-i)==abs(j-k)){
return 0;
}
}
return 1;
}

void N_Queens(int k){
int i,j;
for(i=1;i<=n;i++){
if(Place(k,i)==1){
x[k]=i;
if(k==n){
printf("Answer: ");
for(j=1;j<=n;j++){
printf(" %d ",x[j]);
}
printf("\n");
return;

}
else{
N_Queens(k+1);
}
}
}
}

int main(){
printf("Enter n: ");
scanf("%d",&n);
N_Queens(1);
return 0;
}

Output:

Conclusion: We have performed a program to implement n queens Problem.


Experiment 8
Aim: Write a program to implement sum of Subsets
Theory:
The subset sum problem is to find a subset of elements, selected from a given set, whose
sum adds up to a given number K. We consider a set containing non-negative values, and
assume the input set is unique (no duplicates are present).
Backtracking Algorithm for Subset Sum
Using exhaustive search we consider all subsets irrespective of whether they satisfy
given constraints or not. Backtracking can be used to make a systematic consideration of
the elements to be selected.
Assume a given set of 4 elements, say w[1] … w[4]. Tree diagrams can be used to design
backtracking algorithms; such a diagram depicts the approach of generating variable-sized
tuples.
The power of backtracking appears when we combine explicit and implicit constraints, and
we stop generating nodes when these checks fail. We can improve the above algorithm by
strengthening the constraint checks and presorting the data. By sorting the initial array, we
need not consider the rest of the array once the sum so far is greater than the target number.
We can backtrack and check other possibilities.
Similarly, assume the array is presorted and we found one subset. We can generate next
node excluding the present node only when inclusion of next node satisfies the constraints.

Algorithm:
Algorithm SUB_SET_PROBLEM(i, sum, W, remSum)
// Description : Solve sub of subset problem using backtracking
// Input :
W: Number for which subset is to be computed
i: Item index
sum : Sum of integers selected so far
remSum : Size of remaining problem i.e. (W – sum)

// Output : Solution tuple X

if FEASIBLE_SUB_SET(i) == 1 then
if (sum == W) then
print X[1…i]
end
else
X[i + 1] ← 1 // Include the (i+1)th item
SUB_SET_PROBLEM(i + 1, sum + w[i + 1], W, remSum – w[i + 1])
X[i + 1] ← 0 // Exclude the (i+1)th item
SUB_SET_PROBLEM(i + 1, sum, W, remSum – w[i + 1])
end

function FEASIBLE_SUB_SET(i)
if (sum + remSum ≥ W) AND ((sum == W) or (sum + w[i + 1] ≤ W)) then
return 1
end
return 0

Code:
#include<stdio.h>
int counter=1;
int n;
int arr[50];
int ans[50];
int m;
int total=0;

void sumOfSubset(int s, int k, int r){
int j;
ans[k]=1;
if(s+arr[k]==m){
printf("Solution %d: ",counter);
counter++;
for(j=0;j<n;j++){
printf(" %d ",ans[j]);
}
printf("\n");
for(j=k;j<n;j++){
ans[j]=0;
}
return;
}
else if(k!=n-1 && s+arr[k]+arr[k+1]<=m){
sumOfSubset(s+arr[k],k+1,r-arr[k] );
}
if(k!=n-1 && s+r-arr[k]>=m && s+arr[k+1]<=m){
ans[k]=0;
sumOfSubset(s,k+1,r-arr[k]);
}
}

int main(){
int i;
printf("Enter Total Number of Numbers to be used : ");
scanf("%d",&n);
printf("Enter Sum to be Calculated : ");
scanf("%d",&m);
printf("Enter Numbers : ");
for ( i = 0; i < n; i++)
{
scanf("%d",&arr[i]);
ans[i]=0;
}
for(i=0;i<n;i++){
total+=arr[i];
}
sumOfSubset(0,0,total);
return 0;
}

Output:

Conclusion: We have implemented the sum of subsets problem.

Experiment 9
Aim: Write a program to implement Graph Colouring
Theory:
In this problem, an undirected graph is given. There is also provided m colors. The problem
is to find if it is possible to assign nodes with m different colors, such that no two adjacent
vertices of the graph are of the same colors. If the solution exists, then display which color
is assigned on which vertex.
Starting from vertex 0, we try to assign colors one by one to the different nodes. But before
assigning, we have to check whether the color is safe. A color is not safe if an adjacent
vertex already has the same color.

Algorithm:
isValid(vertex, colorList, col)
Begin
for all vertices v of the graph, do
if there is an edge between v and i, and col = colorList[i], then
return false
done
return true
End
graphColoring(colors, colorList, vertex)
Begin
if all vertices are checked, then
return true
for all colors col from available colors, do
if isValid(vertex, color, col), then
add col to the colorList for vertex
if graphColoring(colors, colorList, vertex+1) = true, then
return true
remove color for vertex
done
return false
End

Code:
#include <stdio.h>
int graph[100][100];
int m;
int n;
int x[100];

void NextVal(int k){
int j=0;
while(1){
x[k]=(x[k]+1)%(m+1);
if(x[k]==0)
return;
for ( j = 0; j <n; j++) {
if(graph[k][j]!=0 && x[k]==x[j]){
break;
}
}
if(j==n)
return;
}
}

void mColoring(int k){
while(1){
NextVal(k);
if(x[k]==0)
return;
if(k==n-1){
printf("Solution is-\n");
for (int i = 0; i < n; i++) {
printf(" %d ",x[i]);
}
printf("\n");
break;
}
else{
mColoring(k+1);
}
}
}
int main()
{
printf("Enter Number of Vertices: ");
scanf("%d",&n);
printf("Enter Adjacency Matrix For each Vertex-\n");
for (int i = 0; i < n; i++)
{
printf("From Vertex %c : ",i + 97);
for(int j=0;j<n;j++){
scanf("%d",&graph[i][j]);
}
x[i]=0;
}
printf("Enter Number of Colors: ");
scanf("%d",&m);
mColoring(0);
return 0;
}

Output:

Conclusion: We have performed a program to implement Graph Colouring.


Experiment 10
Aim: Write a program to implement Rabin Karp String matching Algorithm and KMP
algorithm

Theory:
Rabin-Karp is another pattern searching algorithm. It is the string matching algorithm that
was proposed by Rabin and Karp to find the pattern in a more efficient way. Like the Naive
Algorithm, it also checks the pattern by moving the window one by one, but without
checking all characters for all cases, it finds the hash value. When the hash value is
matched, only then does it proceed to check each character. In this way, there is only one
hash comparison per text window, making it a more efficient algorithm for pattern
searching.
Preprocessing time- O(m)
The time complexity of the Rabin-Karp Algorithm is O(m+n), but for the worst case, it
is O(mn).

Knuth, Morris and Pratt introduced a linear-time algorithm for the string matching problem.
A matching time of O(n) is achieved by avoiding comparisons with elements of 'S' that
have previously been involved in a comparison with some element of the pattern 'p' to be
matched, i.e., backtracking on the string 'S' never occurs.
1. The Prefix Function (Π): The Prefix Function, Π for a pattern encapsulates knowledge
about how the pattern matches against the shift of itself. This information can be used to
avoid a useless shift of the pattern 'p.' In other words, this enables avoiding backtracking of
the string 'S.'
2. The KMP Matcher: With string 'S,' pattern 'p' and prefix function 'Π' as inputs, find the
occurrence of 'p' in 'S' and returns the number of shifts of 'p' after which occurrences are
found.

Algorithm:
• Rabin Karp
Rabin_Karp (P, T, d, q)
n = T.length
m = P.length
h = d^(m-1) mod q
p = 0
t0 = 0
for i = 1 to m
p = (d * p + P[i]) mod q
t0 = (d * t0 + T[i]) mod q
for s = 0 to n - m
if p = ts
if P[1.....m] = T[s + 1..... s + m]
print "pattern found at position" s
if s < n - m
ts+1 = (d * (ts - T[s + 1] * h) + T[s + m + 1]) mod q

• KMP
COMPUTE- PREFIX- FUNCTION (P)
m ←length [P] //'p' pattern to be matched
Π [1] ← 0
k←0
for q ← 2 to m
do while k > 0 and P [k + 1] ≠ P [q]
do k ← Π [k]
If P [k + 1] = P [q]
then k← k + 1
Π [q] ← k
Return Π
KMP-MATCHER (T, P)
n ← length [T]
m ← length [P]
Π← COMPUTE-PREFIX-FUNCTION (P)
q←0 // numbers of characters matched
for i ← 1 to n // scan S from left to right
do while q > 0 and P [q + 1] ≠ T [i]
do q ← Π [q] // next character does not match
If P [q + 1] = T [i]
then q ← q + 1 // next character matches
If q = m // is all of p matched?
then print "Pattern occurs with shift" i - m
q ← Π [q]

Code:
Rabin Karp-
#include<stdio.h>
#include<string.h>
#include<math.h>

void rabinK(char txt[],char pat[],int d,int q ){
int i,s,m,n,h=1,hashP=0,hashT=0;
n=strlen(txt);
m=strlen(pat);
for(i=0;i<m-1;i++) h=(h*d)%q; /* h = d^(m-1) mod q, computed iteratively to avoid pow() overflow */
for( i=0;i<m;i++){
hashP=(d*hashP + pat[i])%q;
hashT=(d*hashT +txt[i])%q;
}
for( s=0; s<=n-m;s++){
if(hashP==hashT){
for(i=0;i<m;i++){
if(pat[i]!=txt[s+i]){
break;
}
}
if(i==m){
printf("Pattern Occured at index %d. \n",s);
}
}
if(s<n-m){
hashT=(d*(hashT-(txt[s]*h))+txt[s+m])%q;
if(hashT<0){
hashT+=q;
}
}
}
return;
}

int main(){
int d,q;
char txt[ 100],pat[100] ;
printf("Enter Text: ");
scanf("%s", txt);
printf("Enter Pattern: ");
scanf("%s", pat);
d=256;
q=31;
rabinK(txt,pat,d,q);
return 0;
}

KMP-
#include<stdio.h>
#include<string.h>
int pi[100];

void computePrefix(char pat[]){
int q,m,k=0;
m=strlen(pat);
pi[0]=0;
for(q=1;q<m;q++){
while(k>0 && pat[k]!=pat[q]){
k=pi[k-1];
}
if(pat[k]==pat[q]){
k=k+1;
}
pi[q]=k;
}
return;
}

void kmp(char txt[],char pat[]){
int i,m,n,q;
n=strlen(txt);
m=strlen(pat);
computePrefix(pat);
q=0;
for(i=0;i<n;i++){
while(q>0 && pat[q]!=txt[i]){
q=pi[q-1];
}
if (pat[q]==txt[i])
q=q+1;
if (q==m){
printf("Pattern Occurs at index %d \n",(i-(m-1)));
q=pi[q-1];
}
}
return;
}
int main(){
char txt[ 100],pat[100 ] ;
printf("Enter Text: ");
scanf("%s", txt);
printf("Enter Pattern: ");
scanf("%s", pat);
kmp(txt,pat);
return 0;
}

Output:
Rabin Karp-

KMP-

Conclusion: We have implemented the Rabin-Karp string matching algorithm and the
KMP algorithm.
