Aamir khan
01996203119
INDEX
S. NO. OBJECTIVE DATE SIGNATURE
1. To implement the following algorithms using an array as the data structure and analyse their time complexity:
a. Merge sort
b. Quick sort
c. Bubble sort
d. Bucket sort
e. Radix sort
f. Shell sort
g. Selection sort
h. Heap sort
Practical 01
Objective:
To implement the following algorithms using an array as the data structure and analyse their time complexity.
a. Bubble sort
b. Insertion Sort
c. Selection sort
d. Merge sort
e. Quick sort
f. Heap Sort
a. Bubble Sort:
Bubble sort repeatedly steps through the array, compares adjacent elements and swaps them if they are in the wrong order, until a complete pass is made with no swaps.
Algorithm:
First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, the algorithm compares the first two elements and swaps them since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order (8 >
5), the algorithm does not swap them.
Second Pass:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Now, the array is already sorted, but our algorithm does not know if it is
completed. The algorithm needs one whole pass without any swap to know it is
sorted.
Third Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Program:
#include<iostream>
using namespace std;
void bubble_sort(int a[],int n)
{
int j;
for(int i=0;i<n;i++)
{
for(j=0;j<n-i-1;j++)
{
if(a[j]>a[j+1])
{
int temp=a[j];
a[j]=a[j+1];
a[j+1]=temp;
}
}
}
}
int main()
{
int i,j,t,n;
cout<<"Enter the size of array:"<<endl;
cin>>n;
int a[100];
cout<<"Enter elements:"<<endl;
for(i=0;i<n;i++)
{
cin>>a[i];
}
bubble_sort(a,n);
for(i=0;i<n;i++)
{
cout<<a[i]<<",";
}
}
Output:
b. Insertion Sort:
Insertion sort is a simple sorting algorithm that works similar to the way you
sort playing cards in your hands. The array is virtually split into a sorted and an
unsorted part. Values from the unsorted part are picked and placed at the
correct position in the sorted part.
Algorithm:
To sort an array of size n in ascending order:
1: Iterate from arr[1] to arr[n-1] over the array.
2: Compare the current element (key) to its predecessor.
3: If the key element is smaller than its predecessor, compare it to the elements
before. Move the greater elements one position up to make space for the
swapped element.
Program:
#include<iostream>
using namespace std;
void insertion_sort(int a[],int n)
{
for(int i=1;i<n;i++)
{
int key=a[i];
int j=i-1;
// shift elements greater than key one position up
while(j>=0 && a[j]>key)
{
a[j+1]=a[j];
j--;
}
a[j+1]=key;
}
}
int main()
{
int i,n;
cout<<"Enter the size of array:"<<endl;
cin>>n;
int a[100];
cout<<"Enter elements:"<<endl;
for(i=0;i<n;i++)
{
cin>>a[i];
}
insertion_sort(a,n);
for(i=0;i<n;i++)
{
cout<<a[i]<<",";
}
}
Output:
c. Selection Sort:
The selection sort algorithm sorts an array by repeatedly finding the minimum
element (considering ascending order) from the unsorted part and putting it at
the beginning. The algorithm maintains two subarrays in a given array.
Algorithm:
Example: arr[] = {64, 25, 12, 22, 11}. Repeatedly find the minimum of the unsorted suffix and swap it with the first unsorted element.
Program:
#include<iostream>
using namespace std;
void selection_sort(int a[],int n)
{
for(int i=0;i<n-1;i++)
{
int minn=i;
for(int j=i+1;j<n;j++)
{
if(a[j]<a[minn])
{
minn=j;
}
}
int temp=a[i];
a[i]=a[minn];
a[minn]=temp;
}
}
int main()
{
int i,n;
int a[100];
cout<<"Enter size of the array:"<<endl;
cin>>n;
cout<<"Enter array elements:"<<endl;
for(i=0;i<n;i++)
{
cin>>a[i];
}
selection_sort(a,n);
cout<<"Array after sort is:"<<endl;
for(i=0;i<n;i++)
{
cout<<a[i]<<",";
}
}
Output:
d. Merge Sort:
Merge sort is one of the most efficient sorting algorithms. It works on the
principle of Divide and Conquer. Merge sort repeatedly breaks down a list into
several sublists until each sublist consists of a single element and merges those
sublists in a manner that results in a sorted list.
Algorithm:
The MergeSort function repeatedly divides the array into two halves until we
reach a stage where we try to perform MergeSort on a subarray of size 1 i.e.
p == r. After that, the merge function comes into play and combines the sorted
arrays into larger arrays until the whole array is merged.
MergeSort(A, p, r):
if p >= r
return
q = (p+r)/2
mergeSort(A, p, q)
mergeSort(A, q+1, r)
merge(A, p, q, r)
To sort an entire array, we need to call MergeSort(A, 0, length(A)-1).
Program:
Output:
e. Quick Sort:
Quicksort is a sorting algorithm based on the divide and conquer approach: an element is picked as the pivot, the array is partitioned so that smaller elements come before the pivot and larger ones after it, and the two partitions are then sorted recursively.
Algorithm:
quickSort(array, leftmostIndex, rightmostIndex)
    if leftmostIndex < rightmostIndex
        pivotIndex = partition(array, leftmostIndex, rightmostIndex)
        quickSort(array, leftmostIndex, pivotIndex - 1)
        quickSort(array, pivotIndex + 1, rightmostIndex)

partition(array, leftmostIndex, rightmostIndex)
    pivot = array[rightmostIndex]
    storeIndex = leftmostIndex - 1
    for j = leftmostIndex to rightmostIndex - 1
        if array[j] <= pivot
            storeIndex++
            swap array[storeIndex] and array[j]
    swap array[storeIndex + 1] and array[rightmostIndex]
    return storeIndex + 1
Program:
import java.util.Arrays;

class Quicksort {
  // Lomuto partition: place the pivot (last element) at its sorted position
  static int partition(int array[], int low, int high) {
    int pivot = array[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
      if (array[j] <= pivot) {
        i++;
        int temp = array[i];
        array[i] = array[j];
        array[j] = temp;
      }
    }
    int temp = array[i + 1];
    array[i + 1] = array[high];
    array[high] = temp;
    return (i + 1);
  }

  static void quickSort(int array[], int low, int high) {
    if (low < high) {
      int pi = partition(array, low, high);
      quickSort(array, low, pi - 1);
      quickSort(array, pi + 1, high);
    }
  }
}

class Main {
  public static void main(String args[]) {
    int[] data = { 8, 7, 2, 1, 0, 9, 6 };
    System.out.println("Unsorted Array");
    System.out.println(Arrays.toString(data));
    Quicksort.quickSort(data, 0, data.length - 1);
    System.out.println("Sorted Array");
    System.out.println(Arrays.toString(data));
  }
}
Output:
f. Heap Sort:
The initial set of numbers that we want to sort is stored in an array e.g. [10, 3, 76, 34,
23, 32] and after sorting, we get a sorted array [3,10,23,32,34,76].
Heap sort works by visualizing the elements of the array as a special kind of complete
binary tree called a heap.
Program:
public class Practice01 {
  // build a max heap, then repeatedly move the root to the end of the array
  public void sort(int arr[]) {
    int n = arr.length;
    for (int i = n / 2 - 1; i >= 0; i--)
      heapify(arr, n, i);
    for (int i = n - 1; i > 0; i--) {
      int temp = arr[0];
      arr[0] = arr[i];
      arr[i] = temp;
      heapify(arr, i, 0);
    }
  }

  // sift arr[i] down within the heap of size n
  void heapify(int arr[], int n, int i) {
    int largest = i;
    int l = 2 * i + 1;
    int r = 2 * i + 2;
    if (l < n && arr[l] > arr[largest])
      largest = l;
    if (r < n && arr[r] > arr[largest])
      largest = r;
    if (largest != i) {
      int swap = arr[i];
      arr[i] = arr[largest];
      arr[largest] = swap;
      heapify(arr, n, largest);
    }
  }

  static void printArray(int arr[]) {
    int n = arr.length;
    for (int i = 0; i < n; i++)
      System.out.print(arr[i] + " ");
    System.out.println();
  }

  public static void main(String args[]) {
    int arr[] = { 10, 3, 76, 34, 23, 32 };
    Practice01 hs = new Practice01();
    hs.sort(arr);
    printArray(arr);
  }
}
Output:
Time Complexity:
Best: O(n log n)
Worst: O(n log n)
Average: O(n log n)
Space Complexity: O(1)
Program 02
Objective:
To implement Linear Search and Binary Search and analyse their time complexity.
Theory and Algorithm:
a. Linear Search:
A linear search or sequential search is a method for finding an element
within a list. It sequentially checks each element of the list until a match is
found or the whole list has been searched.
Algorithm:
A simple approach is to do a linear search, i.e.:
● Start from the leftmost element of arr[] and one by one compare x with each
element of arr[]
● If x matches with an element, return the index.
● If x doesn’t match with any of the elements, return -1.
Program:
#include <iostream>
using namespace std;

// return the index of x in arr[0..n-1], or -1 if not present
int search(int arr[], int n, int x)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == x)
            return i;
    return -1;
}

int main(void)
{
    int arr[] = {2, 3, 4, 10, 40};
    int x = 10;
    int n = sizeof(arr) / sizeof(arr[0]);
    int result = search(arr, n, x);
    if (result == -1)
        cout << "Element is not present in array";
    else
        cout << "Element is present at index " << result;
    return 0;
}
Output:
Linear search is rarely used in practice because other search algorithms, such as binary search and hash tables, allow significantly faster searching compared to linear search.
b. Binary Search:
Search a sorted array by repeatedly dividing the search interval in half. Begin
with an interval covering the whole array. If the value of the search key is less
than the item in the middle of the interval, narrow the interval to the lower half.
Otherwise, narrow it to the upper half. Repeatedly check until the value is
found or the interval is empty.
Algorithm:
The idea of binary search is to use the information that the array is sorted and reduce
the time complexity to O(Log n).
We basically ignore half of the elements just after one comparison.
Program:
#include <bits/stdc++.h>
using namespace std;

// iterative binary search over the sorted range arr[l..r]
int binarySearch(int arr[], int l, int r, int x)
{
    while (l <= r)
    {
        int m = l + (r - l) / 2;
        if (arr[m] == x)
            return m;
        if (arr[m] < x)
            l = m + 1;
        else
            r = m - 1;
    }
    return -1;
}

int main(void)
{
    int arr[] = {2, 3, 4, 10, 40};
    int x = 10;
    int n = sizeof(arr) / sizeof(arr[0]);
    int result = binarySearch(arr, 0, n - 1, x);
    if (result == -1)
        cout << "Element is not present in array";
    else
        cout << "Element is present at index " << result;
    return 0;
}
Output:
Time Complexity:
The recurrence for binary search is T(n) = T(n/2) + c.
The above recurrence can be solved using either the Recurrence Tree method or the Master method. It falls in case II of the Master method, and the solution of the recurrence is T(n) = O(log n).
Program 03
Objective: To implement Matrix Multiplication and analyse its time complexity.
Algorithm:
● Take the two matrices to be multiplied
● Check if the two matrices are compatible to be multiplied
● Traverse each element of the two matrices and multiply them. Store this
product in the new matrix at the corresponding index.
● Print the final product matrix
Program:
import java.io.*;
import java.util.*;

// (class name assumed; the original listing was truncated)
class MatrixMultiplication {
    public static void main(String[] args) {
        int[][] a = { { 1, 2 }, { 3, 4 } };
        int[][] b = { { 5, 6 }, { 7, 8 } };
        // product matrix: rows of a x columns of b
        int[][] prd = new int[a.length][b[0].length];
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < b[0].length; j++) {
                for (int k = 0; k < b.length; k++) {
                    prd[i][j] += a[i][k] * b[k][j];
                }
            }
        }
        // printing output
        for (int i = 0; i < prd.length; i++) {
            for (int j = 0; j < prd[0].length; j++) {
                System.out.print(prd[i][j] + " ");
            }
            System.out.println();
        }
    }
}
Output:
Time Complexity:
O(n^3)
This time complexity is cubic because 3 nested for loops are used.
Space Complexity:
O(n^2)
Practical 4
Objective: To implement Longest Common Subsequence problem and analyse its
time complexity.
Theory:
The longest subsequence common to all the given sequences is referred to as the Longest Common Subsequence (LCS). Unlike a substring, a subsequence is not required to occupy consecutive positions within the original sequences. A sequence that appears in the same relative order, in either a contiguous or non-contiguous way, is known as a subsequence.
For example, if we have two sequences, such as "KTEURFJS" and "TKWIDEUJ", the
longest common subsequence will be "TEUJ" of length 4.
Algorithm:
Let X and Y be the two given sequences
Initialize a table LCS of dimension X.length * Y.length
X.label = X
Y.label = Y
LCS[0][] = 0
LCS[][0] = 0
Start from LCS[1][1]
Compare X[i] and Y[j]
If X[i] = Y[j]
LCS[i][j] = 1 + LCS[i-1][j-1]
Point an arrow to LCS[i][j]
Else
LCS[i][j] = max(LCS[i-1][j], LCS[i][j-1])
Point an arrow to max(LCS[i-1][j], LCS[i][j-1])
Program:
class LCS_ALGO {
  static void lcs(String S1, String S2, int m, int n) {
    int[][] LCS_table = new int[m + 1][n + 1];
    // build the LCS length table bottom-up
    for (int i = 0; i <= m; i++) {
      for (int j = 0; j <= n; j++) {
        if (i == 0 || j == 0)
          LCS_table[i][j] = 0;
        else if (S1.charAt(i - 1) == S2.charAt(j - 1))
          LCS_table[i][j] = LCS_table[i - 1][j - 1] + 1;
        else
          LCS_table[i][j] = Math.max(LCS_table[i - 1][j], LCS_table[i][j - 1]);
      }
    }
    // trace back through the table to recover the subsequence
    int index = LCS_table[m][n];
    char[] lcs = new char[index];
    int i = m, j = n;
    while (i > 0 && j > 0) {
      if (S1.charAt(i - 1) == S2.charAt(j - 1)) {
        lcs[index - 1] = S1.charAt(i - 1);
        i--;
        j--;
        index--;
      } else if (LCS_table[i - 1][j] > LCS_table[i][j - 1])
        i--;
      else
        j--;
    }
    System.out.println("LCS: " + new String(lcs));
  }

  public static void main(String[] args) {
    String S1 = "KTEURFJS";
    String S2 = "TKWIDEUJ";
    lcs(S1, S2, S1.length(), S2.length());
  }
}
Output:
Practical 5
Objective: To implement the Optimal Binary Search Tree problem and analyse its
time complexity.
Theory:
Given a sorted array keys[0..n-1] of search keys and an array freq[0..n-1] of frequency counts, where freq[i] is the number of searches for keys[i], construct a binary search tree of all keys such that the total cost of all the searches is as small as possible.
Algorithm:
Let cost[i][j] be the minimum search cost of a BST built from keys[i..j]. Try every key in the range as the root:
cost[i][j] = min over r in [i..j] of (cost[i][r-1] + cost[r+1][j]) + sum of freq[i..j]
cost[0][n-1] is the optimal cost; computing the table takes O(n^3) time.
Program:
// Dynamic Programming Java code for Optimal Binary Search
// Tree Problem
public class Optimal_BST2 {
}
Output:
Practical 6
Objective: To implement Huffman Coding and analyse its time complexity.
Theory:
● Huffman Coding is a technique of compressing data to reduce its size without
losing any of the details. It was first developed by David Huffman.
● Huffman Coding is generally useful to compress the data in which there are
frequently occurring characters.
Algorithm:
Begin
   define a node with character, frequency, and left and right children for the Huffman tree.
   create a list 'freq' to store the frequency of each character, initially all 0.
   for each character c in the string do
      increase the frequency for character c in the freq list.
   done
   create a leaf node for each character with non-zero frequency and insert all of them into a min-priority queue.
   while the queue has more than one node do
      remove the two nodes x and y with the lowest frequencies.
      create a new internal node f with frequency x.freq + y.freq and children x and y.
      insert f into the queue.
   done
   the remaining node is the root; assign 0 to each left edge and 1 to each right edge to read off the codes.
End
Program:
import java.util.PriorityQueue;
import java.util.Comparator;

class HuffmanNode {
  int item;
  char c;
  HuffmanNode left;
  HuffmanNode right;
}

// order nodes by frequency so the queue always yields the two rarest
class ImplementComparator implements Comparator<HuffmanNode> {
  public int compare(HuffmanNode x, HuffmanNode y) {
    return x.item - y.item;
  }
}

// (outer class name assumed; the original listing was truncated)
public class Huffman {
  // walk the tree, appending 0 for a left edge and 1 for a right edge
  public static void printCode(HuffmanNode root, String s) {
    if (root.left == null && root.right == null && Character.isLetter(root.c)) {
      System.out.println(root.c + "   |  " + s);
      return;
    }
    printCode(root.left, s + "0");
    printCode(root.right, s + "1");
  }

  public static void main(String[] args) {
    int n = 4;
    char[] charArray = { 'A', 'B', 'C', 'D' };
    int[] charfreq = { 5, 1, 6, 3 };

    PriorityQueue<HuffmanNode> q = new PriorityQueue<HuffmanNode>(n, new ImplementComparator());
    for (int i = 0; i < n; i++) {
      HuffmanNode hn = new HuffmanNode();
      hn.c = charArray[i];
      hn.item = charfreq[i];
      hn.left = null;
      hn.right = null;
      q.add(hn);
    }

    HuffmanNode root = null;
    while (q.size() > 1) {
      HuffmanNode x = q.peek();
      q.poll();
      HuffmanNode y = q.peek();
      q.poll();
      // internal node whose frequency is the sum of its children
      HuffmanNode f = new HuffmanNode();
      f.item = x.item + y.item;
      f.c = '-';
      f.left = x;
      f.right = y;
      root = f;
      q.add(f);
    }
    System.out.println(" Char | Huffman code ");
    System.out.println("--------------------");
    printCode(root, "");
  }
}
Output:
Program: 07
Objective: To implement Dijkstra’s algorithm and analyse its time complexity.
Theory:
● Dijkstra's algorithm allows us to find the shortest path between any two
vertices of a graph.
● It differs from the minimum spanning tree because the shortest distance
between two vertices might not include all the vertices of the graph.
Algorithm:
function dijkstra(G, S)
    for each vertex V in G
        distance[V] <- infinite
        previous[V] <- NULL
        if V != S, add V to Priority Queue Q
    distance[S] <- 0
    while Q is not empty
        U <- extract the vertex in Q with minimum distance
        for each unvisited neighbour V of U
            tempDistance <- distance[U] + edge_weight(U, V)
            if tempDistance < distance[V]
                distance[V] <- tempDistance
                previous[V] <- U
    return distance[], previous[]
Program:
// A Java program for Dijkstra's single source shortest path algorithm.
// The program is for adjacency matrix representation of the graph
import java.util.*;
import java.lang.*;
import java.io.*;

class ShortestPath {
    static final int V = 9;

    // A utility function to find the vertex with minimum distance value,
    // from the set of vertices not yet included in shortest path tree
    int minDistance(int dist[], Boolean sptSet[])
    {
        // Initialize min value
        int min = Integer.MAX_VALUE, min_index = -1;
        for (int v = 0; v < V; v++)
            if (sptSet[v] == false && dist[v] <= min) {
                min = dist[v];
                min_index = v;
            }
        return min_index;
    }

    void printSolution(int dist[])
    {
        System.out.println("Vertex \t Distance from Source");
        for (int i = 0; i < V; i++)
            System.out.println(i + " \t\t " + dist[i]);
    }

    void dijkstra(int graph[][], int src)
    {
        int dist[] = new int[V];
        Boolean sptSet[] = new Boolean[V];
        for (int i = 0; i < V; i++) {
            dist[i] = Integer.MAX_VALUE;
            sptSet[i] = false;
        }
        dist[src] = 0;
        for (int count = 0; count < V - 1; count++) {
            // Pick the minimum distance vertex from the set of vertices
            // not yet processed. u is always equal to src in first iteration.
            int u = minDistance(dist, sptSet);
            sptSet[u] = true;
            // Update dist values of the adjacent vertices of the picked vertex.
            for (int v = 0; v < V; v++)
                if (!sptSet[v] && graph[u][v] != 0
                    && dist[u] != Integer.MAX_VALUE
                    && dist[u] + graph[u][v] < dist[v])
                    dist[v] = dist[u] + graph[u][v];
        }
        printSolution(dist);
    }

    // Driver method
    public static void main(String[] args)
    {
        /* Let us create the example graph discussed above */
        int graph[][] = new int[][] { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
            { 4, 0, 8, 0, 0, 0, 0, 11, 0 },
            { 0, 8, 0, 7, 0, 4, 0, 0, 2 },
            { 0, 0, 7, 0, 9, 14, 0, 0, 0 },
            { 0, 0, 0, 9, 0, 10, 0, 0, 0 },
            { 0, 0, 4, 14, 10, 0, 2, 0, 0 },
            { 0, 0, 0, 0, 0, 2, 0, 1, 6 },
            { 8, 11, 0, 0, 0, 0, 1, 0, 7 },
            { 0, 0, 2, 0, 0, 0, 6, 7, 0 } };
        ShortestPath t = new ShortestPath();
        t.dijkstra(graph, 0);
    }
}
Output:
Analysis of Algorithm:
● Time Complexity of the implementation is O(V^2). If the input graph is
represented using an adjacency list, it can be reduced to O(E log V) with the
help of a binary heap.
Program: 08
Objective: To implement Bellman Ford algorithm and analyse its time complexity.
Theory:
● Bellman Ford algorithm helps us find the shortest path from a vertex to all
other vertices of a weighted graph.
● It is similar to Dijkstra's algorithm but it can work with graphs in which edges
can have negative weights.
Algorithm:
function bellmanFord(G, S)
    for each vertex V in G
        distance[V] <- infinite
        previous[V] <- NULL
    distance[S] <- 0
    repeat |V| - 1 times
        for each edge (U, V) in G
            tempDistance <- distance[U] + edge_weight(U, V)
            if tempDistance < distance[V]
                distance[V] <- tempDistance
                previous[V] <- U
    for each edge (U, V) in G
        if distance[U] + edge_weight(U, V) < distance[V]
            error: negative cycle exists
    return distance[], previous[]
Program:
// Bellman Ford Algorithm in Java
class CreateGraph {
  // an edge with source s, destination d and weight w
  class CreateEdge {
    int s, d, w;
    CreateEdge() {
      s = d = w = 0;
    }
  };

  int V, E;
  CreateEdge edge[];

  CreateGraph(int v, int e) {
    V = v;
    E = e;
    edge = new CreateEdge[e];
    for (int i = 0; i < e; ++i)
      edge[i] = new CreateEdge();
  }

  void BellmanFord(CreateGraph graph, int src) {
    int V = graph.V, E = graph.E;
    int dist[] = new int[V];
    for (int i = 0; i < V; ++i)
      dist[i] = Integer.MAX_VALUE;
    dist[src] = 0;

    // relax all edges V - 1 times
    for (int i = 1; i < V; ++i) {
      for (int j = 0; j < E; ++j) {
        int u = graph.edge[j].s;
        int v = graph.edge[j].d;
        int w = graph.edge[j].w;
        if (dist[u] != Integer.MAX_VALUE && dist[u] + w < dist[v])
          dist[v] = dist[u] + w;
      }
    }

    // if an edge can still be relaxed, a negative weight cycle exists
    for (int j = 0; j < E; ++j) {
      int u = graph.edge[j].s;
      int v = graph.edge[j].d;
      int w = graph.edge[j].w;
      if (dist[u] != Integer.MAX_VALUE && dist[u] + w < dist[v]) {
        System.out.println("CreateGraph contains negative w cycle");
        return;
      }
    }
    printSolution(dist, V);
  }

  void printSolution(int dist[], int V) {
    System.out.println("Vertex Distance from Source");
    for (int i = 0; i < V; ++i)
      System.out.println(i + "\t\t" + dist[i]);
  }

  public static void main(String[] args) {
    int V = 4, E = 5;
    CreateGraph graph = new CreateGraph(V, E);

    // edge 0 --> 1
    graph.edge[0].s = 0;
    graph.edge[0].d = 1;
    graph.edge[0].w = 5;
    // edge 0 --> 2
    graph.edge[1].s = 0;
    graph.edge[1].d = 2;
    graph.edge[1].w = 4;
    // edge 1 --> 3
    graph.edge[2].s = 1;
    graph.edge[2].d = 3;
    graph.edge[2].w = 3;
    // edge 2 --> 1
    graph.edge[3].s = 2;
    graph.edge[3].d = 1;
    graph.edge[3].w = 6;
    // edge 3 --> 2
    graph.edge[4].s = 3;
    graph.edge[4].d = 2;
    graph.edge[4].w = 2;

    graph.BellmanFord(graph, 0);
  }
}
Output:
Program: 09
Objective: To implement the naïve String Matching algorithm, the Rabin Karp algorithm and the Knuth Morris Pratt algorithm and analyse their time complexity.
Theory:
STRING MATCHING PROBLEM
There are really two forms of string matching. The first, exact string matching, finds
instances of some pattern in a target string.
For example, if the pattern is "go" and the target is "agogo", then two instances of
the pattern appear in the text (at the second and fourth characters, respectively).
The second, inexact string matching or string alignment, attempts to find the "best"
match of a pattern to some target. Usually, the match of a pattern to a target is either
probabilistic or evaluated based on some fixed criteria
For example, the pattern "aggtgc" matches the target "agtgcggtg" pretty well in
two places, located at the first character of the string and the sixth character.
Algorithm:- (naïve string matching)
for s = 0 to n - m
    if P[1..m] == T[s+1 .. s+m]
        report that the pattern occurs at shift s
Analysis:-
Remember that |P| = m, |T| = n.
The inner loop takes at most m steps to confirm whether the pattern matches.
The outer loop takes n - m + 1 steps.
Therefore, the worst case is O((n - m + 1) * m) = O(nm).
Aamir khan
01996203119
Aamir khan
01996203119
Program:-
#include <string.h>
#include <iostream>
using namespace std;
#define d 10

// Rabin-Karp: compare rolling hash values, then verify matches character by character
void rabinKarp(char pattern[], char text[], int q) {
  int m = strlen(pattern);
  int n = strlen(text);
  int i, j;
  int p = 0;  // hash value of the pattern
  int t = 0;  // hash value of the current text window
  int h = 1;

  // h = d^(m-1) % q
  for (i = 0; i < m - 1; i++)
    h = (h * d) % q;

  // initial hash values of the pattern and the first window of text
  for (i = 0; i < m; i++) {
    p = (d * p + pattern[i]) % q;
    t = (d * t + text[i]) % q;
  }

  for (i = 0; i <= n - m; i++) {
    if (p == t) {
      // hashes match; confirm character by character
      for (j = 0; j < m; j++) {
        if (text[i + j] != pattern[j])
          break;
      }
      if (j == m)
        cout << "Pattern is found at position: " << i + 1 << endl;
    }
    // roll the hash to the next window
    if (i < n - m) {
      t = (d * (t - text[i] * h) + text[i + m]) % q;
      if (t < 0)
        t = (t + q);
    }
  }
}

int main() {
  char text[] = "ABCCDDAEFG";
  char pattern[] = "CDD";
  int q = 13;
  rabinKarp(pattern, text, q);
}
Output:-
Time Complexity:
The average and best case running time of the Rabin Karp algorithm is O(n + m); the worst case is O(nm).
**********************************************COMPLETED*******************************************