
Aamir khan

01996203119

Dr. Akhilesh Das Gupta Institute of Technology


& Management

New Delhi – 110053

Algorithm Design and Analysis Lab

Submitted To:                          Submitted By:
Ms. Ashu Jain                          Aamir khan
Assistant Professor                    01996203119
Dept. Of Information Technology        T-19


INDEX

S. NO.  OBJECTIVE                                             DATE    SIGNATURE

1. To implement the following algorithms using an array as the data
   structure and analyse their time complexity:
   a. Merge sort
   b. Quick sort
   c. Bubble sort
   d. Bucket sort
   e. Radix sort
   f. Shell sort
   g. Selection sort
   h. Heap sort
2. To implement Linear search and Binary search and analyse their time
   complexity.
3. To implement Matrix Multiplication and analyse its time complexity.
4. To implement the Longest Common Subsequence problem and analyse its
   time complexity.
5. To implement the Optimal Binary Search Tree problem and analyse its
   time complexity.
6. To implement Huffman Coding and analyse its time complexity.
7. To implement Dijkstra’s algorithm and analyse its time complexity.
8. To implement Bellman Ford algorithm and analyse its time complexity.
9. To implement the naïve String Matching algorithm, Rabin Karp algorithm
   and Knuth Morris Pratt algorithm and analyse their time complexity.


Practical 01
Objective:
To implement the following algorithms using an array as the data structure and
analyse their time complexity.
a. Bubble sort
b. Insertion Sort
c. Selection sort
d. Merge sort
e. Quick sort
f. Heap Sort

Theory and Algorithm:


a. Bubble Sort:
Bubble Sort is the simplest sorting algorithm that works by repeatedly
swapping the adjacent elements if they are in the wrong order.

Algorithm:
First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, algorithm compares the first two elements,
and swaps since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order (8 >
5), the algorithm does not swap them.
Second Pass:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Now, the array is already sorted, but our algorithm does not know if it is
completed. The algorithm needs one whole pass without any swap to know it is
sorted.
Third Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )

Program:

#include<iostream>
using namespace std;

void bubble_sort(int a[], int n)
{
    for(int i = 0; i < n - 1; i++)
    {
        // after i passes, the last i elements are already in place
        for(int j = 0; j < n - i - 1; j++)
        {
            if(a[j] > a[j+1])
            {
                int temp = a[j];
                a[j] = a[j+1];
                a[j+1] = temp;
            }
        }
    }
}

int main()
{
    int i, n;
    cout << "Enter the size of array:" << endl;
    cin >> n;
    int a[100];
    cout << "Enter elements:" << endl;
    for(i = 0; i < n; i++)
    {
        cin >> a[i];
    }
    bubble_sort(a, n);
    for(i = 0; i < n; i++)
    {
        cout << a[i] << ",";
    }
}


Output:

Analysis of Time Complexity:


Worst and Average Case Time Complexity: O(n^2). The worst case occurs
when the array is sorted in reverse order.
Best Case Time Complexity: O(n). The best case occurs when the array is
already sorted; note that achieving O(n) requires an early-exit check that
stops after a pass with no swaps (the plain version above always runs in
O(n^2)).
Auxiliary Space: O(1)
Boundary Cases: Bubble sort takes minimum time (order of n) when the
elements are already sorted.

b. Insertion Sort:
Insertion sort is a simple sorting algorithm that works similar to the way you
sort playing cards in your hands. The array is virtually split into a sorted and an
unsorted part. Values from the unsorted part are picked and placed at the
correct position in the sorted part.

Algorithm
To sort an array of size n in ascending order:
1: Iterate from arr[1] to arr[n-1] over the array.
2: Compare the current element (key) to its predecessor.
3: If the key element is smaller than its predecessor, compare it to the elements
before it. Move the greater elements one position up to make space for the
key element.


Program:
#include<iostream>
using namespace std;

void insertion_sort(int a[], int n)
{
    for(int i = 1; i < n; i++)
    {
        int e = a[i];
        int j = i - 1;
        while(j >= 0 && a[j] > e)
        {
            a[j+1] = a[j];
            j = j - 1;
        }
        a[j+1] = e;
    }
}

int main()
{
    int i, n;
    int a[100];
    cout << "Enter size of array:" << endl;
    cin >> n;
    cout << "Enter Array elements:" << endl;
    for(i = 0; i < n; i++)
    {
        cin >> a[i];
    }
    insertion_sort(a, n);
    cout << "Array after sort:" << endl;
    for(i = 0; i < n; i++)
    {
        cout << a[i] << ",";
    }
}

Output:

Analysis of Time Complexity:


Time Complexity: O(n^2)
Auxiliary Space: O(1)
Boundary Cases: Insertion sort takes maximum time to sort if elements are
sorted in reverse order. And it takes minimum time (Order of n) when elements
are already sorted.
Algorithmic Paradigm: Incremental Approach

c. Selection Sort:


The selection sort algorithm sorts an array by repeatedly finding the minimum
element (considering ascending order) from the unsorted part and putting it at
the beginning. The algorithm maintains two subarrays in a given array.

1) The subarray which is already sorted.

2) Remaining subarray which is unsorted.

In every iteration of selection sort, the minimum element (considering


ascending order) from the unsorted subarray is picked and moved to the sorted
subarray.

Algorithm:

arr[] = 64 25 12 22 11

// Find the minimum element in arr[0...4]


// and place it at beginning
11 25 12 22 64

// Find the minimum element in arr[1...4]


// and place it at beginning of arr[1...4]
11 12 25 22 64

// Find the minimum element in arr[2...4]


// and place it at beginning of arr[2...4]
11 12 22 25 64

// Find the minimum element in arr[3...4]


// and place it at beginning of arr[3...4]
11 12 22 25 64

Program:

#include<iostream>
using namespace std;

void selection_sort(int a[], int n)
{
    for(int i = 0; i < n - 1; i++)
    {
        int minn = i;
        for(int j = i + 1; j < n; j++)
        {
            if(a[j] < a[minn])
            {
                minn = j;
            }
        }
        int temp = a[i];
        a[i] = a[minn];
        a[minn] = temp;
    }
}

int main()
{
    int i, n;
    int a[100];
    cout << "Enter size of the array:" << endl;
    cin >> n;
    cout << "Enter array elements:" << endl;
    for(i = 0; i < n; i++)
    {
        cin >> a[i];
    }
    selection_sort(a, n);
    cout << "Array after sort is:" << endl;
    for(i = 0; i < n; i++)
    {
        cout << a[i] << ",";
    }
}

Output:


Analysis of Time Complexity:


Time Complexity: O(n^2), as there are two nested loops.
Auxiliary Space: O(1)
The good thing about selection sort is it never makes more than O(n) swaps and
can be useful when memory write is a costly operation.

d. Merge Sort:
Merge sort is one of the most efficient sorting algorithms. It works on the
principle of Divide and Conquer. Merge sort repeatedly breaks down a list into
several sublists until each sublist consists of a single element and merges those
sublists in a manner that results in a sorted list.

Algorithm
The MergeSort function repeatedly divides the array into two halves until we
reach a stage where we try to perform MergeSort on a subarray of size 1 i.e.

p == r. After that, the merge function comes into play and combines the sorted
arrays into larger arrays until the whole array is merged.
MergeSort(A, p, r):
    if p >= r
        return
    q = (p + r) / 2
    MergeSort(A, p, q)
    MergeSort(A, q+1, r)
    merge(A, p, q, r)
To sort an entire array, we need to call MergeSort(A, 0, length(A)-1).

Program:

Output:

e. Quick Sort:

Quicksort is a sorting algorithm based on the divide and conquer approach where

1. An array is divided into subarrays by selecting a pivot element (element


selected from the array).

While dividing the array, the pivot element should be positioned


in such a way that elements less than pivot are kept on the left side and
elements greater than pivot are on the right side of the pivot.
2. The left and right subarrays are also divided using the same approach. This
process continues until each subarray contains a single element.
3. At this point, elements are already sorted. Finally, elements are combined to
form a sorted array.

Algorithm:

quickSort(array, leftmostIndex, rightmostIndex)
    if (leftmostIndex < rightmostIndex)
        pivotIndex <- partition(array, leftmostIndex, rightmostIndex)
        quickSort(array, leftmostIndex, pivotIndex - 1)
        quickSort(array, pivotIndex + 1, rightmostIndex)

partition(array, leftmostIndex, rightmostIndex)
    set element at rightmostIndex as pivotElement
    storeIndex <- leftmostIndex - 1
    for i <- leftmostIndex to rightmostIndex - 1
        if element[i] < pivotElement
            storeIndex++
            swap element[i] and element[storeIndex]
    swap pivotElement and element[storeIndex + 1]
    return storeIndex + 1

Program:
import java.util.Arrays;

class Quicksort {

    static int partition(int array[], int low, int high) {
        int pivot = array[high];
        int i = (low - 1);
        for (int j = low; j < high; j++) {
            if (array[j] <= pivot) {
                i++;
                int temp = array[i];
                array[i] = array[j];
                array[j] = temp;
            }
        }
        // place the pivot just after the last smaller element
        int temp = array[i + 1];
        array[i + 1] = array[high];
        array[high] = temp;
        return (i + 1);
    }

    static void quickSort(int array[], int low, int high) {
        if (low < high) {
            int pi = partition(array, low, high);
            quickSort(array, low, pi - 1);
            quickSort(array, pi + 1, high);
        }
    }
}

class Main {
    public static void main(String args[]) {
        int[] data = { 8, 7, 2, 1, 0, 9, 6 };
        System.out.println("Unsorted Array");
        System.out.println(Arrays.toString(data));
        int size = data.length;
        Quicksort.quickSort(data, 0, size - 1);
        System.out.println("Sorted Array in Ascending Order: ");
        System.out.println(Arrays.toString(data));
    }
}

Output:

f. Heap Sort:


Heap Sort is a popular and efficient sorting algorithm in computer programming.


Learning how to write the heap sort algorithm requires knowledge of two types of data
structures - arrays and trees.

The initial set of numbers that we want to sort is stored in an array e.g. [10, 3, 76, 34,
23, 32] and after sorting, we get a sorted array [3,10,23,32,34,76].

Heap sort works by visualizing the elements of the array as a special kind of complete
binary tree called a heap.

Program:
public class Practice01 {

    public void sort(int arr[]) {
        int n = arr.length;

        // Build a max heap from the array
        for (int i = n / 2 - 1; i >= 0; i--)
            heapify(arr, n, i);

        // Repeatedly move the current root to the end and re-heapify
        for (int i = n - 1; i >= 0; i--) {
            int temp = arr[0];
            arr[0] = arr[i];
            arr[i] = temp;
            heapify(arr, i, 0);
        }
    }

    void heapify(int arr[], int n, int i) {
        int largest = i;
        int l = 2 * i + 1;
        int r = 2 * i + 2;

        if (l < n && arr[l] > arr[largest])
            largest = l;
        if (r < n && arr[r] > arr[largest])
            largest = r;

        if (largest != i) {
            int swap = arr[i];
            arr[i] = arr[largest];
            arr[largest] = swap;
            heapify(arr, n, largest);
        }
    }

    static void printArray(int arr[]) {
        int n = arr.length;
        for (int i = 0; i < n; ++i)
            System.out.print(arr[i] + " ");
        System.out.println();
    }

    public static void main(String args[]) {
        int arr[] = { 1, 12, 9, 5, 6, 10 };
        Practice01 hs = new Practice01();
        hs.sort(arr);
        System.out.println("Sorted array is");
        printArray(arr);
    }
}

Output:

Analysis of Time Complexity:

Time Complexity:
    Best    O(n log n)
    Worst   O(n log n)
    Average O(n log n)

Space Complexity: O(1)


Program 02
Objective:
To implement Linear search and Binary search and analyse their time complexity
Theory and Algorithm:
a. Linear Search:
A linear search or sequential search is a method for finding an element
within a list. It sequentially checks each element of the list until a match is
found or the whole list has been searched.
Algorithm:
A simple approach is to do a linear search, i.e

● Start from the leftmost element of arr[] and one by one compare x with each
element of arr[]
● If x matches with an element, return the index.
● If x doesn’t match with any of the elements, return -1.

Program:
#include <iostream>
using namespace std;

int search(int arr[], int n, int x)


{
int i;
for (i = 0; i < n; i++)
if (arr[i] == x)
return i;
return -1;
}

int main(void)
{
int arr[] = {2, 3, 4, 10, 40};
int x = 10;
int n = sizeof(arr) / sizeof(arr[0]);

int result = search(arr, n, x);


(result == -1)
? cout << "Element is not present in array"
: cout << "Element is present at index " << result;
return 0;
}

Output:

Analysis of Time Complexity:

The time complexity of the above algorithm is O(n).

Linear search is rarely used in practice because other search algorithms,
such as binary search and hash tables, allow significantly faster searching
than linear search.

Improving Linear Search's Worst Case (by scanning from both ends of the
array simultaneously):

1. If the element is found at the last position: O(n) improves to O(1).

2. If the element is not found: the scan takes about n/2 iterations instead
of n (still O(n) asymptotically).

b. Binary Search:
Search a sorted array by repeatedly dividing the search interval in half. Begin
with an interval covering the whole array. If the value of the search key is less
than the item in the middle of the interval, narrow the interval to the lower half.


Otherwise, narrow it to the upper half. Repeatedly check until the value is
found or the interval is empty.

Algorithm:

The idea of binary search is to use the information that the array is sorted and reduce
the time complexity to O(Log n).
We basically ignore half of the elements just after one comparison.

1. Compare x with the middle element.


2. If x matches with the middle element, we return the mid index.
3. Else If x is greater than the mid element, then x can only lie in the right half
subarray after the mid element. So we recur for the right half.
4. Else (x is smaller) recur for the left half.

Program:

#include <bits/stdc++.h>
using namespace std;

int binarySearch(int arr[], int l, int r, int x)
{
    while (l <= r) {
        int m = l + (r - l) / 2;

        if (arr[m] == x)
            return m;

        if (arr[m] < x)
            l = m + 1;
        else
            r = m - 1;
    }
    return -1;
}

int main(void)
{
    int arr[] = {2, 3, 4, 10, 40};
    int x = 10;
    int n = sizeof(arr) / sizeof(arr[0]);

    int result = binarySearch(arr, 0, n - 1, x);

    (result == -1) ? cout << "Element is not present in array"
                   : cout << "Element is present at index " << result;
    return 0;
}

Output:

Analysis of Time Complexity:


The time complexity of Binary Search can be written as

T(n) = T(n/2) + c

This recurrence can be solved using the Recurrence Tree method or the
Master method. It falls into case II of the Master method, and the solution
of the recurrence is O(log n).

Auxiliary Space: O(1) in the case of the iterative implementation; the
recursive implementation uses O(log n) call-stack space.


Program 03
Objective: To implement Matrix Multiplication and analyse its time complexity.

Theory and Algorithm:


Given two matrices A and B of compatible dimensions, the task is to multiply them in Java.
Examples:
Input: A[][] = {{1, 2},
{3, 4}}
B[][] = {{1, 1},
{1, 1}}
Output: {{3, 3},
{7, 7}}

Input: A[][] = {{2, 4},


{3, 4}}
B[][] = {{1, 2},
{1, 3}}
Output: {{6, 16},
{7, 18}}

Algorithm:
● Take the two matrices to be multiplied
● Check if the two matrices are compatible to be multiplied

● Create a new Matrix to store the product of the two matrices


● Traverse each element of the two matrices and multiply them. Store this
product in the new matrix at the corresponding index.
● Print the final product matrix

Program:
import java.io.*;
import java.util.*;

public class Main {

    public static void main(String[] args) throws Exception {
        Scanner scan = new Scanner(System.in);
        // 1st matrix
        int r1 = scan.nextInt();
        int c1 = scan.nextInt();
        int[][] one = new int[r1][c1];
        for (int i = 0; i < one.length; i++) {
            for (int j = 0; j < one[0].length; j++) {
                one[i][j] = scan.nextInt();
            }
        }
        // 2nd matrix
        int r2 = scan.nextInt();
        int c2 = scan.nextInt();
        int[][] two = new int[r2][c2];
        for (int i = 0; i < two.length; i++) {
            for (int j = 0; j < two[0].length; j++) {
                two[i][j] = scan.nextInt();
            }
        }
        if (c1 != r2) {
            System.out.println("Invalid input");
            return;
        }

        int[][] prd = new int[r1][c2];
        for (int i = 0; i < prd.length; i++) {
            for (int j = 0; j < prd[0].length; j++) {
                for (int k = 0; k < c1; k++) { // logic to get product values
                    prd[i][j] += one[i][k] * two[k][j];
                }
            }
        }
        // printing output
        for (int i = 0; i < prd.length; i++) {
            for (int j = 0; j < prd[0].length; j++) {
                System.out.print(prd[i][j] + " ");
            }
            System.out.println();
        }
    }
}

Output:

Analysis of Time Complexity:

Time Complexity: O(n^3)

The time complexity is cubic because 3 nested for loops are used.

Space Complexity: O(n^2)

As 2D arrays are used to store the numbers, the space complexity is quadratic.


Practical 4
Objective: To implement the Longest Common Subsequence problem and analyse its
time complexity.

Theory:
The longest subsequence common to all the given sequences is referred to as the
Longest Common Subsequence (LCS). Unlike a substring, the elements of a
subsequence are not required to occupy consecutive positions within the original
sequences.

A subsequence is a sequence that appears in the same relative order, but is not
necessarily contiguous.

For example, for the two sequences "KTEURFJS" and "TKWIDEUJ", the longest
common subsequence is "TEUJ", of length 4.

Algorithm:
X and Y are the two given sequences
Initialize a table LCS of dimension X.length * Y.length
X.label = X
Y.label = Y
LCS[0][] = 0
LCS[][0] = 0
Start from LCS[1][1]
Compare X[i] and Y[j]
    If X[i] = Y[j]
        LCS[i][j] = 1 + LCS[i-1][j-1]
        Point an arrow to LCS[i][j]
    Else
        LCS[i][j] = max(LCS[i-1][j], LCS[i][j-1])
        Point an arrow to max(LCS[i-1][j], LCS[i][j-1])


Program:
class LCS_ALGO {
static void lcs(String S1, String S2, int m, int n) {
int[][] LCS_table = new int[m + 1][n + 1];
for (int i = 0; i <= m; i++) {
for (int j = 0; j <= n; j++) {
if (i == 0 || j == 0)
LCS_table[i][j] = 0;
else if (S1.charAt(i - 1) == S2.charAt(j - 1))
LCS_table[i][j] = LCS_table[i - 1][j - 1] + 1;
else
LCS_table[i][j] = Math.max(LCS_table[i - 1][j], LCS_table[i]
[j - 1]);
}
}

int index = LCS_table[m][n];


int temp = index;

char[] lcs = new char[index + 1];


lcs[index] = '\0';

int i = m, j = n;
while (i > 0 && j > 0) {
if (S1.charAt(i - 1) == S2.charAt(j - 1)) {
lcs[index - 1] = S1.charAt(i - 1);

i--;
j--;
index--;
}

else if (LCS_table[i - 1][j] > LCS_table[i][j - 1])


i--;
else
j--;
}
System.out.print("S1 : " + S1 + "\nS2 : " + S2 + "\nLCS: ");
for (int k = 0; k < temp; k++)
System.out.print(lcs[k]);
System.out.println("");
}


public static void main(String[] args) {


String S1 = "ACADB";
String S2 = "CBDA";
int m = S1.length();
int n = S2.length();
lcs(S1, S2, m, n);
}
}

Output:


Practical 5
Objective: To implement the Optimal Binary Search Tree problem and analyse its
time complexity.

Theory:
Given a sorted array key [0.. n-1] of search keys and an array freq[0.. n-1] of
frequency counts, where freq[i] is the number of searches for keys[i]. Construct a
binary search tree of all keys such that the total cost of all the searches is as small as
possible.

Algorithm:

Input: keys[] = {10, 12}, freq[] = {34, 50}
There can be the following two possible BSTs:

    10           12
      \         /
       12     10
     I           II

The frequencies of searches of 10 and 12 are 34 and 50 respectively.
The cost of tree I  is 34*1 + 50*2 = 134
The cost of tree II is 50*1 + 34*2 = 118

Input: keys[] = {10, 12, 20}, freq[] = {34, 8, 50}
There can be five possible BSTs. Among them, the cost of the fifth BST
(20 as the root, 10 as its left child, and 12 as the right child of 10) is
minimum:
Cost of the fifth BST = 1*50 + 2*34 + 3*8 = 142

Program:
// Dynamic Programming Java code for Optimal Binary Search
// Tree Problem
public class Optimal_BST2 {


/* A Dynamic Programming based function that calculates


minimum cost of a Binary Search Tree. */
static int optimalSearchTree(int keys[], int freq[], int n) {

/* Create an auxiliary 2D matrix to store results of


subproblems */
int cost[][] = new int[n + 1][n + 1];

/* cost[i][j] = Optimal cost of binary search tree that


can be formed from keys[i] to keys[j]. cost[0][n-1]
will store the resultant cost */

// For a single key, cost is equal to frequency of the key


for (int i = 0; i < n; i++)
cost[i][i] = freq[i];

// Now we need to consider chains of length 2, 3, ... .


// L is chain length.
for (int L = 2; L <= n; L++) {

// i is row number in cost[][]


for (int i = 0; i <= n - L + 1; i++) {

// Get column number j from row number i and


// chain length L
int j = i + L - 1;
cost[i][j] = Integer.MAX_VALUE;

// Try making all keys in interval keys[i..j] as root


for (int r = i; r <= j; r++) {

// c = cost when keys[r] becomes root of this subtree
int c = ((r > i) ? cost[i][r - 1] : 0)
        + ((r < j) ? cost[r + 1][j] : 0)
        + sum(freq, i, j);
if (c < cost[i][j])
cost[i][j] = c;
}
}
}
return cost[0][n - 1];

}

// A utility function to get sum of array elements


// freq[i] to freq[j]
static int sum(int freq[], int i, int j) {
int s = 0;
for (int k = i; k <= j; k++) {
if (k >= freq.length)
continue;
s += freq[k];
}
return s;
}

public static void main(String[] args) {

int keys[] = { 10, 12, 20 };


int freq[] = { 34, 8, 50 };
int n = keys.length;
System.out.println("Cost of Optimal BST is "
+ optimalSearchTree(keys, freq, n));
    }
}

Output:

Analysis of Time Complexity


● The time complexity of the above solution is O(n^4). The time complexity can
be easily reduced to O(n^3) by pre-calculating the sum of frequencies instead
of calling sum() again and again.


Practical 6
Objective: To implement Huffman Coding and analyse its time complexity.

Theory:
● Huffman Coding is a technique of compressing data to reduce its size without
losing any of the details. It was first developed by David Huffman.
● Huffman Coding is generally useful to compress the data in which there are
frequently occurring characters.

Algorithm:

Begin
define a node with character, frequency, left and right child of the
node for Huffman tree.
create a list ‘freq’ to store frequency of each character,
initially, all are 0
for each character c in the string do
increase the frequency for character ch in freq list.
done

for all type of character ch do


if the frequency of ch is non zero then
add ch and its frequency as a node of priority queue Q.
done

while Q contains more than one node do
    remove the two minimum-frequency nodes from Q
    make them the left and right children of a new node whose
    frequency is the sum of their frequencies
    add the new node back to Q
done
traverse the tree from the root, assigning '0' to left edges and '1' to
right edges, to find the code assigned to each character
End

Program:
import java.util.PriorityQueue;
import java.util.Comparator;

class HuffmanNode {
int item;

char c;
HuffmanNode left;
HuffmanNode right;
}

// For comparing the nodes


class ImplementComparator implements Comparator<HuffmanNode> {
public int compare(HuffmanNode x, HuffmanNode y) {
return x.item - y.item;
}
}

// Implementing the Huffman algorithm


public class Main {
public static void printCode(HuffmanNode root, String s) {
if (root.left == null && root.right == null &&
Character.isLetter(root.c)) {

System.out.println(root.c + " | " + s);

return;
}
printCode(root.left, s + "0");
printCode(root.right, s + "1");
}

public static void main(String[] args) {

int n = 4;
char[] charArray = { 'A', 'B', 'C', 'D' };
int[] charfreq = { 5, 1, 6, 3 };

PriorityQueue<HuffmanNode> q = new PriorityQueue<HuffmanNode>(n,


new ImplementComparator());

for (int i = 0; i < n; i++) {


HuffmanNode hn = new HuffmanNode();

hn.c = charArray[i];
hn.item = charfreq[i];

hn.left = null;
hn.right = null;


q.add(hn);
}

HuffmanNode root = null;

while (q.size() > 1) {

HuffmanNode x = q.peek();
q.poll();

HuffmanNode y = q.peek();
q.poll();

HuffmanNode f = new HuffmanNode();

f.item = x.item + y.item;


f.c = '-';
f.left = x;
f.right = y;
root = f;

q.add(f);
}
System.out.println(" Char | Huffman code ");
System.out.println("--------------------");
printCode(root, "");
}
}

Output:


Analysis of Time Complexity:


● The time complexity for encoding each unique character based on its frequency
is O(n log n).
● Extracting the minimum frequency from the priority queue takes place 2*(n-1)
times and each extraction costs O(log n). Thus the overall complexity is
O(n log n).


Program: 07
Objective: To implement Dijkstra’s algorithm and analyse its time complexity.
Theory:
● Dijkstra's algorithm allows us to find the shortest path between any two
vertices of a graph.
● It differs from the minimum spanning tree because the shortest distance
between two vertices might not include all the vertices of the graph.

Algorithm:

function dijkstra(G, S)
for each vertex V in G
distance[V] <- infinite
previous[V] <- NULL
If V != S, add V to Priority Queue Q
distance[S] <- 0

while Q IS NOT EMPTY


U <- Extract MIN from Q
for each unvisited neighbour V of U
tempDistance <- distance[U] + edge_weight(U, V)
if tempDistance < distance[V]
distance[V] <- tempDistance
previous[V] <- U
return distance[], previous[]

Program:
// A Java program for Dijkstra's single source shortest path algorithm.
// The program is for adjacency matrix representation of the graph
import java.util.*;
import java.lang.*;
import java.io.*;

class ShortestPath {
// A utility function to find the vertex with minimum distance value,
// from the set of vertices not yet included in the shortest path tree
static final int V = 9;
int minDistance(int dist[], Boolean sptSet[])
{

// Initialize min value
int min = Integer.MAX_VALUE, min_index = -1;

for (int v = 0; v < V; v++)


if (sptSet[v] == false && dist[v] <= min) {
min = dist[v];
min_index = v;
}

return min_index;
}

// A utility function to print the constructed distance array


void printSolution(int dist[])
{
System.out.println("Vertex \t\t Distance from Source");
for (int i = 0; i < V; i++)
System.out.println(i + " \t\t " + dist[i]);
}

// Function that implements Dijkstra's single source shortest path


// algorithm for a graph represented using adjacency matrix
// representation
void dijkstra(int graph[][], int src)
{
int dist[] = new int[V]; // The output array. dist[i] will hold
// the shortest distance from src to i

// sptSet[i] will be true if vertex i is included in the shortest
// path tree or the shortest distance from src to i is finalized
Boolean sptSet[] = new Boolean[V];

// Initialize all distances as INFINITE and stpSet[] as false


for (int i = 0; i < V; i++) {
dist[i] = Integer.MAX_VALUE;
sptSet[i] = false;
}

// Distance of source vertex from itself is always 0


dist[src] = 0;

// Find shortest path for all vertices


for (int count = 0; count < V - 1; count++) {

// Pick the minimum distance vertex from the set of vertices
// not yet processed. u is always equal to src in the first
// iteration.
int u = minDistance(dist, sptSet);

// Mark the picked vertex as processed


sptSet[u] = true;

// Update dist value of the adjacent vertices of the


// picked vertex.
for (int v = 0; v < V; v++)

// Update dist[v] only if v is not in sptSet, there is an
// edge from u to v, and the total weight of the path from
// src to v through u is smaller than the current dist[v]
if (!sptSet[v] && graph[u][v] != 0
        && dist[u] != Integer.MAX_VALUE
        && dist[u] + graph[u][v] < dist[v])
    dist[v] = dist[u] + graph[u][v];
}

// print the constructed distance array


printSolution(dist);
}

// Driver method
public static void main(String[] args)
{
/* Let us create the example graph discussed above */
int graph[][] = new int[][] { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
{ 4, 0, 8, 0, 0, 0, 0, 11, 0 },
{ 0, 8, 0, 7, 0, 4, 0, 0, 2 },
{ 0, 0, 7, 0, 9, 14, 0, 0, 0 },
{ 0, 0, 0, 9, 0, 10, 0, 0, 0 },
{ 0, 0, 4, 14, 10, 0, 2, 0, 0 },
{ 0, 0, 0, 0, 0, 2, 0, 1, 6 },
{ 8, 11, 0, 0, 0, 0, 1, 0, 7 },
{ 0, 0, 2, 0, 0, 0, 6, 7, 0 } };
ShortestPath t = new ShortestPath();
t.dijkstra(graph, 0);
}
}


Output:

Analysis of Algorithm:
● Time Complexity of the implementation is O(V^2). If the input graph is
represented using an adjacency list, it can be reduced to O(E log V) with the
help of a binary heap.


Program: 08
Objective: To implement Bellman Ford algorithm and analyse its time complexity.
Theory:
● Bellman Ford algorithm helps us find the shortest path from a vertex to all
other vertices of a weighted graph.
● It is similar to Dijkstra's algorithm but it can work with graphs in which edges
can have negative weights.

Algorithm:

function bellmanFord(G, S)
for each vertex V in G
distance[V] <- infinite
previous[V] <- NULL
distance[S] <- 0

for each vertex V in G


for each edge (U,V) in G
tempDistance <- distance[U] + edge_weight(U, V)
if tempDistance < distance[V]
distance[V] <- tempDistance
previous[V] <- U

for each edge (U,V) in G


If distance[U] + edge_weight(U, V) < distance[V]
Error: Negative Cycle Exists

return distance[], previous[]

Program:
// Bellman Ford Algorithm in Java

class CreateGraph {

// CreateGraph - it consists of edges


class CreateEdge {
int s, d, w;

CreateEdge() {
s = d = w = 0;

}
};

int V, E;
CreateEdge edge[];

// Creates a graph with V vertices and E edges


CreateGraph(int v, int e) {
V = v;
E = e;
edge = new CreateEdge[e];
for (int i = 0; i < e; ++i)
edge[i] = new CreateEdge();
}

void BellmanFord(CreateGraph graph, int s) {


int V = graph.V, E = graph.E;
int dist[] = new int[V];

// Step 1: fill the distance array and predecessor array


for (int i = 0; i < V; ++i)
dist[i] = Integer.MAX_VALUE;

// Mark the source vertex


dist[s] = 0;

// Step 2: relax edges |V| - 1 times


for (int i = 1; i < V; ++i) {
for (int j = 0; j < E; ++j) {
// Get the edge data
int u = graph.edge[j].s;
int v = graph.edge[j].d;
int w = graph.edge[j].w;
if (dist[u] != Integer.MAX_VALUE && dist[u] + w < dist[v])
dist[v] = dist[u] + w;
}
}

// Step 3: detect negative cycle


// if value changes then we have a negative cycle in the graph
// and we cannot find the shortest distances
for (int j = 0; j < E; ++j) {
int u = graph.edge[j].s;

int v = graph.edge[j].d;
int w = graph.edge[j].w;
if (dist[u] != Integer.MAX_VALUE && dist[u] + w < dist[v]) {
System.out.println("CreateGraph contains negative w cycle");
return;
}
}

// No negative w cycle found!


// Print the distance and predecessor array
printSolution(dist, V);
}

// Print the solution


void printSolution(int dist[], int V) {
System.out.println("Vertex Distance from Source");
for (int i = 0; i < V; ++i)
System.out.println(i + "\t\t" + dist[i]);
}

public static void main(String[] args) {


int V = 5; // Total vertices
int E = 5; // Total edges (five edges are defined below)

CreateGraph graph = new CreateGraph(V, E);

// edge 0 --> 1
graph.edge[0].s = 0;
graph.edge[0].d = 1;
graph.edge[0].w = 5;

// edge 0 --> 2
graph.edge[1].s = 0;
graph.edge[1].d = 2;
graph.edge[1].w = 4;

// edge 1 --> 3
graph.edge[2].s = 1;
graph.edge[2].d = 3;
graph.edge[2].w = 3;

// edge 2 --> 1
graph.edge[3].s = 2;

graph.edge[3].d = 1;
graph.edge[3].w = 6;

// edge 3 --> 2
graph.edge[4].s = 3;
graph.edge[4].d = 2;
graph.edge[4].w = 2;

graph.BellmanFord(graph, 0); // 0 is the source vertex


}
}

Output:

Analysis of Time Complexity:

Time Complexity is O(VE) and space complexity is O(V).


Program: 09
Objective: To implement naïve String Matching algorithm, Rabin Karp
algorithm and Knuth Morris Pratt algorithm and analyse its time complexity.

Theory:
STRING MATCHING PROBLEM

 There are really two forms of string matching. The first, exact string matching, finds
instances of some pattern in a target string.
For example, if the pattern is "go" and the target is "agogo", then two instances of
the pattern appear in the text (at the second and fourth characters, respectively).

 The second, inexact string matching or string alignment, attempts to find the "best"
match of a pattern to some target. Usually, the match of a pattern to a target is either
probabilistic or evaluated based on some fixed criteria.
For example, the pattern "aggtgc" matches the target "agtgcggtg" pretty well in
two places, located at the first character of the string and the sixth character.

Both forms of string matching are used extensively in bioinformatics to isolate
structurally similar regions of DNA or a protein (usually in the context of a gene map
or a protein database).

NAIVE STRING MATCHING :-


The naive string-searching algorithm examines each position i >= 1 in txt, testing for
equality of pat[1..m] with txt[i..i+m-1]. If there is a mismatch, position i+1 is tried, and so on.

Algorithm:-


Analysis:-
 Remember that |P| = m, |T| = n.
 The inner loop takes up to m steps to confirm that the pattern matches at a given position.
 The outer loop takes n-m+1 steps.
 Therefore, the worst case is O((n-m+1)·m) = O(nm).

RABIN-KARP STRING SEARCH


The Rabin-Karp algorithm is a string searching algorithm created by
Michael O. Rabin and Richard M. Karp that seeks a pattern, i.e. a
substring, within a text by using hashing. It is not widely used for single
pattern matching, but is of considerable theoretical importance and is
very effective for multiple pattern matching. For text of length n and
pattern of length m, its average and best case running time is O(n), but
the (highly unlikely) worst case performance is O(nm), which is one of
the reasons why it is not widely used. However, it has the unique
advantage of being able to search for any one of k patterns in O(n) time
on average, regardless of the size of k.
One of the simplest practical applications of Rabin-Karp is in detection of
plagiarism.

The algorithm is as shown:

function RabinKarp(string s[1..n], string sub[1..m])
    hsub := hash(sub[1..m])
    hs := hash(s[1..m])
    for i from 1 to n-m+1
        if hs = hsub
            if s[i..i+m-1] = sub
                return i
        hs := hash(s[i+1..i+m])
    return not found


Program:-

// Rabin-Karp algorithm in C++

#include <cstring>
#include <iostream>
using namespace std;

#define d 10 // radix of the rolling hash (illustrative; 256 is common for ASCII text)

void rabinKarp(char pattern[], char text[], int q) {
    int m = strlen(pattern);
    int n = strlen(text);
    int i, j;
    int p = 0; // hash value of the pattern
    int t = 0; // hash value of the current text window
    int h = 1; // will hold d^(m-1) % q, the weight of the leading character

    for (i = 0; i < m - 1; i++)
        h = (h * d) % q;

    // Calculate hash value for pattern and first text window
    for (i = 0; i < m; i++) {
        p = (d * p + pattern[i]) % q;
        t = (d * t + text[i]) % q;
    }

    // Slide the window over the text
    for (i = 0; i <= n - m; i++) {
        // On a hash match, verify character by character
        if (p == t) {
            for (j = 0; j < m; j++) {
                if (text[i + j] != pattern[j])
                    break;
            }
            if (j == m)
                cout << "Pattern is found at position: " << i + 1 << endl;
        }

        // Roll the hash: drop text[i], append text[i + m]
        if (i < n - m) {
            t = (d * (t - text[i] * h) + text[i + m]) % q;
            if (t < 0)
                t = (t + q);
        }
    }
}

int main() {
    char text[] = "ABCCDDAEFG";
    char pattern[] = "CDD";
    int q = 13;
    rabinKarp(pattern, text, q);
}

Output:-

Time Complexity


 Rabin's algorithm is (almost always) fast, i.e. O(m+n) average-case
time complexity, because hash(txt[i..i+m-1]) can be computed in O(1)
time - i.e. by two multiplications, a subtraction, an addition and a
`mod' - given its predecessor hash(txt[i-1..i-1+m-1]).
 The worst-case time-complexity does however remain at O(m*n)
because of the possibility of false-positive matches on the basis of
the hash numbers, although these are very rare indeed.

**********************************************COMPLETED*******************************************
