
PRACTICAL FILE: DAA (DESIGN ANALYSIS AND ALGORITHM)

NAME: NITISH KUMAR


ROLL no: SG19828
BRANCH: BE (IT) 5th SEM
SUBMITTED TO: Mr. Gurpinder Singh
INDEX
S.NO  Topic
1     Quick Sort
2     Merge Sort
3     Binary Search
4     Fractional Knapsack Problem
5     0-1 Knapsack
6     Kruskal's Algorithm
7     Prim's Algorithm
8     Strassen's Matrix Multiplication
9     Dijkstra's Algorithm
10    Bellman-Ford Algorithm
11    Multistage Graph
12    Floyd Warshall Algorithm
13    Travelling Salesman Problem
14    8-Queens Problem

QUICK SORT:
Quick Sort is a Divide and Conquer algorithm. It picks an element as the pivot
and partitions the given array around the picked pivot. There are many
different versions of Quick Sort that pick the pivot in different ways:
1. Always pick the first element as pivot.
2. Always pick the last element as pivot (implemented below).
3. Pick a random element as pivot.
4. Pick the median as pivot.
The key process in Quick Sort is partition(). Given an array and an element x
of the array as pivot, partition() puts x at its correct position in the sorted
array, placing all elements smaller than x before x and all greater elements
after x. All of this is done in linear time.
Pseudo Code for recursive Quick Sort function:
/* low --> Starting index, high --> Ending index */
quickSort(arr[], low, high)
{
if (low < high)
{
/* pi is partitioning index, arr[pi] is now
at right place */
pi = partition(arr, low, high);

quickSort(arr, low, pi - 1); // Before pi


quickSort(arr, pi + 1, high); // After pi
}
}
 

Implementation:
Following is a C++ implementation of Quick Sort:
/* C++ implementation of QuickSort */
#include <bits/stdc++.h>
using namespace std;

// A utility function to swap two elements


void swap(int* a, int* b)
{
int t = *a;
*a = *b;
*b = t;
}

/* This function takes the last element as pivot, places
the pivot element at its correct position in the sorted
array, and places all smaller (smaller than pivot)
elements to the left of the pivot and all greater elements
to the right of the pivot */
int partition (int arr[], int low, int high)
{
int pivot = arr[high]; // pivot
int i = (low - 1); // Index of smaller element; indicates the right position of the pivot found so far

for (int j = low; j <= high - 1; j++)


{
// If current element is smaller than the pivot
if (arr[j] < pivot)
{
i++; // increment index of smaller element
swap(&arr[i], &arr[j]);
}
}
swap(&arr[i + 1], &arr[high]);
return (i + 1);
}

/* The main function that implements QuickSort
arr[] --> Array to be sorted,
low --> Starting index,
high --> Ending index */
void quickSort(int arr[], int low, int high)
{
if (low < high)
{
/* pi is partitioning index, arr[p] is now
at right place */
int pi = partition(arr, low, high);

// Separately sort elements before


// partition and after partition
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}

/* Function to print an array */


void printArray(int arr[], int size)
{
int i;
for (i = 0; i < size; i++)
cout << arr[i] << " ";
cout << endl;
}

// Driver Code
int main()
{
int arr[] = {10, 7, 8, 9, 1, 5};
int n = sizeof(arr) / sizeof(arr[0]);
quickSort(arr, 0, n - 1);
cout << "Sorted array: \n";
printArray(arr, n);
return 0;
}
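The program above always uses the last element as pivot (option 2 in the list of
pivot strategies). A minimal, hedged sketch of option 3 (random pivot) that simply
reuses the swap() and partition() functions from the listing above; the helper name
randomizedPartition is illustrative, not part of the original program:

// Hypothetical helper: pick a random pivot, move it to the end,
// then fall back to the last-element partition shown above.
// (Relies on <bits/stdc++.h>, swap() and partition() from the listing above.)
int randomizedPartition(int arr[], int low, int high)
{
    int r = low + rand() % (high - low + 1); // random index in [low, high]
    swap(&arr[r], &arr[high]);               // move the random pivot to the end
    return partition(arr, low, high);        // reuse the last-element scheme
}

quickSort() would then call randomizedPartition() instead of partition(); this avoids
the worst case on already-sorted inputs with high probability.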

MERGE SORT:
Like Quick Sort, Merge Sort is a Divide and Conquer algorithm. It divides the
input array into two halves, calls itself for the two halves, and then merges
the two sorted halves. The merge() function is used for merging the two
halves. The merge(arr, l, m, r) call is a key process that assumes that arr[l..m]
and arr[m+1..r] are sorted and merges the two sorted sub-arrays into one.
See the following C++ implementation for details.
Merge Sort(arr[], l, r)
If r > l
1. Find the middle point to divide the array into two halves:
middle m = l+ (r-l)/2
2. Call mergeSort for first half:
Call mergeSort(arr, l, m)
3. Call mergeSort for second half:
Call mergeSort(arr, m+1, r)
4. Merge the two halves sorted in steps 2 and 3:
Call merge(arr, l, m, r)

The following diagram shows the complete merge sort process for an
example array {38, 27, 43, 3, 9, 82, 10}. If we take a closer look at the
diagram, we can see that the array is recursively divided into two halves till
the size becomes 1. Once the size becomes 1, the merge processes come
into action and start merging arrays back till the complete array is merged.
// C++ program for Merge Sort
#include <iostream>
using namespace std;

// Merges two subarrays of array[].


// First subarray is arr[begin..mid]
// Second subarray is arr[mid+1..end]
void merge(int array[], int const left, int const mid, int const right)
{
auto const subArrayOne = mid - left + 1;
auto const subArrayTwo = right - mid;

// Create temp arrays


auto *leftArray = new int[subArrayOne],
*rightArray = new int[subArrayTwo];

// Copy data to temp arrays leftArray[] and rightArray[]


for (auto i = 0; i < subArrayOne; i++)
leftArray[i] = array[left + i];
for (auto j = 0; j < subArrayTwo; j++)
rightArray[j] = array[mid + 1 + j];

auto indexOfSubArrayOne = 0, // Initial index of first sub-array


indexOfSubArrayTwo = 0; // Initial index of second sub-array
int indexOfMergedArray = left; // Initial index of merged array

// Merge the temp arrays back into array[left..right]


while (indexOfSubArrayOne < subArrayOne && indexOfSubArrayTwo <
subArrayTwo) {
if (leftArray[indexOfSubArrayOne] <= rightArray[indexOfSubArrayTwo]) {
array[indexOfMergedArray] = leftArray[indexOfSubArrayOne];
indexOfSubArrayOne++;
}
else {
array[indexOfMergedArray] = rightArray[indexOfSubArrayTwo];
indexOfSubArrayTwo++;
}
indexOfMergedArray++;
}
// Copy the remaining elements of
// left[], if there are any
while (indexOfSubArrayOne < subArrayOne) {
array[indexOfMergedArray] = leftArray[indexOfSubArrayOne];
indexOfSubArrayOne++;
indexOfMergedArray++;
}
// Copy the remaining elements of
// right[], if there are any
while (indexOfSubArrayTwo < subArrayTwo) {
array[indexOfMergedArray] = rightArray[indexOfSubArrayTwo];
indexOfSubArrayTwo++;
indexOfMergedArray++;
}
// Free the temporary arrays
delete[] leftArray;
delete[] rightArray;
}

// begin is the left index and end is the
// right index of the sub-array of arr to be sorted
void mergeSort(int array[], int const begin, int const end)
{
if (begin >= end)
return; // Returns recursively

auto mid = begin + (end - begin) / 2;


mergeSort(array, begin, mid);
mergeSort(array, mid + 1, end);
merge(array, begin, mid, end);
}

// UTILITY FUNCTIONS
// Function to print an array
void printArray(int A[], int size)
{
for (auto i = 0; i < size; i++)
cout << A[i] << " ";
}

// Driver code
int main()
{
int arr[] = { 12, 11, 13, 5, 6, 7 };
auto arr_size = sizeof(arr) / sizeof(arr[0]);

cout << "Given array is \n";


printArray(arr, arr_size);
mergeSort(arr, 0, arr_size - 1);

cout << "\nSorted array is \n";


printArray(arr, arr_size);
return 0;
}

BINARY SEARCH:
Binary Search: Search a sorted array by repeatedly dividing the search
interval in half. Begin with an interval covering the whole array. If the value of
the search key is less than the item in the middle of the interval, narrow the
interval to the lower half. Otherwise, narrow it to the upper half. Repeatedly
check until the value is found or the interval is empty. The idea of binary
search is to use the information that the array is sorted and reduce the time
complexity to O(log n).
We basically ignore half of the elements just after one comparison.
1. Compare x with the middle element.
2. If x matches with the middle element, we return the mid index.
3. Else If x is greater than the mid element, then x can only lie in the right
half subarray after the mid element. So we recur for the right half.
4. Else (x is smaller) recur for the left half.

Recursive implementation of Binary Search 

// C++ program to implement recursive Binary Search


#include <bits/stdc++.h>
using namespace std;

// A recursive binary search function. It returns the
// location of x in the given array arr[l..r] if present,
// otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
if (r >= l) {
int mid = l + (r - l) / 2;

// If the element is present at the middle


// itself
if (arr[mid] == x)
return mid;

// If element is smaller than mid, then


// it can only be present in left subarray
if (arr[mid] > x)
return binarySearch(arr, l, mid - 1, x);

// Else the element can only be present


// in right subarray
return binarySearch(arr, mid + 1, r, x);
}

// We reach here when element is not


// present in array
return -1;
}

int main(void)
{
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;
int n = sizeof(arr) / sizeof(arr[0]);
int result = binarySearch(arr, 0, n - 1, x);
(result == -1) ? cout << "Element is not present in array"
: cout << "Element is present at index " << result;
return 0;
}

Output: 
Element is present at index 3

Iterative implementation of Binary Search:

// C++ program to implement iterative Binary Search


#include <bits/stdc++.h>
using namespace std;

// An iterative binary search function. It returns the
// location of x in the given array arr[l..r] if present,
// otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
while (l <= r) {
int m = l + (r - l) / 2;

// Check if x is present at mid


if (arr[m] == x)
return m;

// If x greater, ignore left half


if (arr[m] < x)
l = m + 1;

// If x is smaller, ignore right half


else
r = m - 1;
}

// if we reach here, then element was


// not present
return -1;
}

int main(void)
{
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;
int n = sizeof(arr) / sizeof(arr[0]);
int result = binarySearch(arr, 0, n - 1, x);
(result == -1) ? cout << "Element is not present in array"
: cout << "Element is present at index " << result;
return 0;
}

Output: 
Element is present at index 3
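For comparison, the C++ standard library already provides binary search over a
sorted range; a minimal sketch using std::lower_bound, equivalent in spirit to the
iterative version above:

// Minimal sketch: binary search via the standard library.
#include <algorithm>
#include <iostream>
using namespace std;

int main()
{
    int arr[] = { 2, 3, 4, 10, 40 };
    int n = sizeof(arr) / sizeof(arr[0]);
    int x = 10;

    // lower_bound returns the first position whose value is not less than x
    int* it = lower_bound(arr, arr + n, x);
    if (it != arr + n && *it == x)
        cout << "Element is present at index " << (it - arr);
    else
        cout << "Element is not present in array";
    return 0;
}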

FRACTIONAL KNAPSACK PROBLEM:


Given weights and values of n items, we need to put these items in a
knapsack of capacity W to get the maximum total value in the knapsack.
In the 0-1 Knapsack problem we are not allowed to break items: we either
take the whole item or leave it. In the Fractional Knapsack problem, we are
allowed to break items, so we can take any fraction of an item in order to
maximize the total value in the knapsack.

Input:
Items as (value, weight) pairs
arr[] = {{60, 10}, {100, 20}, {120, 30}}
Knapsack Capacity, W = 50

Output:
Maximum possible value = 240
by taking the items of weight 10 kg and 20 kg completely and a 2/3
fraction of the 30 kg item. Hence the total value is 60 + 100 + (2/3)(120) = 240.

A brute-force solution would be to try all possible subsets with all different
fractions, but that would take far too much time.
An efficient solution is to use the Greedy approach. The basic idea of the
greedy approach is to calculate the ratio value/weight for each item and sort
the items by this ratio in decreasing order. Then take the items with the
highest ratio first, adding each one completely until the next item no longer
fits as a whole; at that point add as much of that item as the remaining
capacity allows. For the fractional problem this greedy strategy is always
optimal.
A simple implementation with our own comparison function is given below;
note that the third argument to sort() is our comparison function, which
orders the items by value/weight ratio in decreasing order.
After sorting we loop over the items and add them to the knapsack
according to the criteria above.
Below is the implementation of the above idea:
// C/C++ program to solve fractional Knapsack Problem
#include <bits/stdc++.h>

using namespace std;

// Structure for an item which stores weight and


// corresponding value of Item
struct Item {
int value, weight;

// Constructor
Item(int value, int weight)
{
this->value=value;
this->weight=weight;
}
};

// Comparison function to sort Item according to val/weight


// ratio
bool cmp(struct Item a, struct Item b)
{
double r1 = (double)a.value / (double)a.weight;
double r2 = (double)b.value / (double)b.weight;
return r1 > r2;
}

// Main greedy function to solve problem


double fractionalKnapsack(int W, struct Item arr[], int n)
{
// sorting Item on basis of ratio
sort(arr, arr + n, cmp);
// Uncomment to see new order of Items with their
// ratio
/*
for (int i = 0; i < n; i++)
{
cout << arr[i].value << " " << arr[i].weight << " :
"
<< ((double)arr[i].value / arr[i].weight) <<
endl;
}
*/

int curWeight = 0; // Current weight in knapsack


double finalvalue = 0.0; // Result (value in Knapsack)

// Looping through all Items


for (int i = 0; i < n; i++) {
// If adding Item won't overflow, add it completely
if (curWeight + arr[i].weight <= W) {
curWeight += arr[i].weight;
finalvalue += arr[i].value;
}

// If we can't add current Item, add fractional part


// of it
else {
int remain = W - curWeight;
finalvalue += arr[i].value
* ((double)remain
/ (double)arr[i].weight);
break;
}
}

// Returning final value


return finalvalue;
}

// Driver code
int main()
{
int W = 50; // Weight of knapsack
Item arr[] = { { 60, 10 }, { 100, 20 }, { 120, 30 } };

int n = sizeof(arr) / sizeof(arr[0]);

// Function call
cout << "Maximum value we can obtain = "
<< fractionalKnapsack(W, arr, n);
return 0;
}
Output
Maximum value we can obtain = 240
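As a hedged alternative sketch, the same greedy idea can be written with
std::vector and a lambda comparator instead of the named cmp() function; the
(value, weight) pairs and capacity below mirror the program above:

// Minimal sketch: fractional knapsack with a lambda comparator.
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<pair<int, int>> items = { {60, 10}, {100, 20}, {120, 30} }; // (value, weight)
    int W = 50;

    // Sort by value/weight ratio, highest ratio first.
    sort(items.begin(), items.end(), [](const pair<int, int>& a, const pair<int, int>& b) {
        return (double)a.first / a.second > (double)b.first / b.second;
    });

    double total = 0.0;
    int remaining = W;
    for (auto& it : items) {
        if (it.second <= remaining) {   // the whole item fits
            remaining -= it.second;
            total += it.first;
        } else {                        // take only the fraction that fits
            total += it.first * ((double)remaining / it.second);
            break;
        }
    }
    cout << "Maximum value we can obtain = " << total << endl; // expected 240
    return 0;
}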

0-1_KNAPSACK PROBLEM:

Given weights and values of n items, put these items in a knapsack of
capacity W to get the maximum total value in the knapsack. In other
words, given two integer arrays val[0..n-1] and wt[0..n-1] which
represent the values and weights associated with n items respectively,
and an integer W which represents the knapsack capacity, find out the
maximum value subset of val[] such that the sum of the weights of this
subset is smaller than or equal to W. You cannot break an item: either
pick the complete item or don't pick it (the 0-1 property).

Method 1: Recursion by Brute-Force algorithm OR Exhaustive Search.


Approach: A simple solution is to consider all subsets of items and calculate
the total weight and value of each subset. Consider only the subsets whose
total weight is smaller than or equal to W and, from all such subsets, pick
the one with the maximum value.
Optimal Sub-structure: To consider all subsets of items, there can be two
cases for every item. 
1. Case 1: The item is included in the optimal subset.
2. Case 2: The item is not included in the optimal set.
Therefore, the maximum value that can be obtained from ‘n’ items is the max
of the following two values. 
1. Maximum value obtained by n-1 items and W weight (excluding nth item).
2. Value of nth item plus maximum value obtained by n-1 items and W
minus the weight of the nth item (including nth item).
If the weight of ‘nth’ item is greater than ‘W’, then the nth item cannot be
included and Case 1 is the only possibility.

/* A Naive recursive implementation of


0-1 Knapsack problem */
#include <bits/stdc++.h>
using namespace std;

// A utility function that returns


// maximum of two integers
int max(int a, int b) { return (a > b) ? a : b; }

// Returns the maximum value that


// can be put in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{

// Base Case
if (n == 0 || W == 0)
return 0;

// If weight of the nth item is more


// than Knapsack capacity W, then
// this item cannot be included
// in the optimal solution
if (wt[n - 1] > W)
return knapSack(W, wt, val, n - 1);

// Return the maximum of two cases:


// (1) nth item included
// (2) not included
else
return max(
val[n - 1]
+ knapSack(W - wt[n - 1],
wt, val, n - 1),
knapSack(W, wt, val, n - 1));
}

// Driver code
int main()
{
int val[] = { 60, 100, 120 };
int wt[] = { 10, 20, 30 };
int W = 50;
int n = sizeof(val) / sizeof(val[0]);
cout << knapSack(W, wt, val, n);
return 0;
}

Output 220

It should be noted that the above function computes the same sub-
problems again and again. See the following recursion tree, K(1, 1) is
being evaluated twice. The time complexity of this naive recursive
solution is exponential (2^n)

In the following recursion tree, K() refers


to knapSack(). The two parameters indicated in the
following recursion tree are n and W.
The recursion tree is for following sample inputs.
wt[] = {1, 1, 1}, W = 2, val[] = {10, 20, 30}
K(n, W)
K(3, 2)
/ \
/ \
K(2, 2) K(2, 1)
/ \ / \
/ \ / \
K(1, 2) K(1, 1) K(1, 1) K(1, 0)
/ \ / \ / \
/ \ / \ / \
K(0, 2) K(0, 1) K(0, 1) K(0, 0) K(0, 1) K(0, 0)
Recursion tree for Knapsack capacity 2
units and 3 items of 1 unit weight.

Complexity Analysis:
 Time Complexity: O(2^n), as there are redundant subproblems.
 Auxiliary Space: O(1), as no extra data structure is used for storing values.
Since subproblems are evaluated again and again, this problem has the
Overlapping Subproblems property. So the 0-1 Knapsack problem has both
properties of a dynamic programming problem.
Method 2: Like other typical Dynamic Programming(DP) problems, re-
computation of same subproblems can be avoided by constructing a
temporary array K[][] in bottom-up manner. Following is Dynamic
Programming based implementation.

Approach: In the dynamic programming solution we consider the same
cases as in the recursive approach. In a DP[][] table, let the columns be all
possible capacities from '0' to 'W' and the rows be the items that can
be kept.
The state DP[i][j] denotes the maximum value obtainable with capacity 'j'
considering all items from '1' to 'i'. So when we consider 'wi' (the weight in
the 'i-th' row) we can fill it in all columns whose capacity is at least 'wi'.
Now two possibilities can take place:
 Include 'wi' in the given column.
 Do not include 'wi' in the given column.
We take the maximum of these two possibilities. Formally, if we do not
include the 'i-th' weight in the 'j-th' column then DP[i][j] is the same as
DP[i-1][j]; but if we do include it, DP[i][j] equals the value of the 'i-th' item
plus the value of the cell with capacity 'j-wi' in the previous row. So we take
the maximum of these two possibilities to fill the current state. The following
visualization makes the concept clear:
Let weight elements = {1, 2, 3}
Let weight values = {10, 15, 40}
Capacity=6

        0   1   2   3   4   5   6
0       0   0   0   0   0   0   0
1       0  10  10  10  10  10  10
2       0  10  15  25  25  25  25
3       0

Explanation:
For filling 'weight = 2' we come across 'j = 3', in which we take the
maximum of (10, 15 + DP[1][3-2]) = 25, where 10 is the value with '2'
not filled and 15 + DP[1][1] is the value with '2' filled.

        0   1   2   3   4   5   6
0       0   0   0   0   0   0   0
1       0  10  10  10  10  10  10
2       0  10  15  25  25  25  25
3       0  10  15  40  50  55  65

Explanation:
For filling 'weight = 3', we come across 'j = 4', in which we take the maximum of (25, 40 + DP[2][4-3]) = 50.
For 'j = 5' we take the maximum of (25, 40 + DP[2][5-3]) = 55.
For 'j = 6' we take the maximum of (25, 40 + DP[2][6-3]) = 65.
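The table above can also be reproduced programmatically; a minimal sketch
(assuming the same small example: weights {1, 2, 3}, values {10, 15, 40}, W = 6)
that fills and prints the DP table row by row. The full implementation for the
original 3-item / W = 50 instance follows below.

// Minimal sketch: build and print the DP table for the worked example.
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    int wt[] = { 1, 2, 3 }, val[] = { 10, 15, 40 };
    int n = 3, W = 6;
    vector<vector<int>> DP(n + 1, vector<int>(W + 1, 0));

    for (int i = 1; i <= n; i++) {
        for (int j = 0; j <= W; j++) {
            DP[i][j] = DP[i - 1][j];                 // item i not filled
            if (wt[i - 1] <= j)                      // item i filled, if it fits
                DP[i][j] = max(DP[i][j], val[i - 1] + DP[i - 1][j - wt[i - 1]]);
        }
    }

    // Print each row; the last row should read 0 10 15 40 50 55 65.
    for (int i = 0; i <= n; i++) {
        for (int j = 0; j <= W; j++)
            cout << DP[i][j] << " ";
        cout << "\n";
    }
    return 0;
}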

// A dynamic programming based


// solution for 0-1 Knapsack problem
#include <bits/stdc++.h>
using namespace std;

// A utility function that returns


// maximum of two integers
int max(int a, int b)
{
return (a > b) ? a : b;
}

// Returns the maximum value that


// can be put in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
int i, w;
vector<vector<int>> K(n + 1, vector<int>(W + 1));

// Build table K[][] in bottom up manner


for(i = 0; i <= n; i++)
{
for(w = 0; w <= W; w++)
{
if (i == 0 || w == 0)
K[i][w] = 0;
else if (wt[i - 1] <= w)
    K[i][w] = max(val[i - 1] + K[i - 1][w - wt[i - 1]],
                  K[i - 1][w]);
else
    K[i][w] = K[i - 1][w];
}
}
return K[n][W];
}

// Driver Code
int main()
{
int val[] = { 60, 100, 120 };
int wt[] = { 10, 20, 30 };
int W = 50;
int n = sizeof(val) / sizeof(val[0]);

cout << knapSack(W, wt, val, n);


return 0;
}

Output: 220

Complexity Analysis:
 Time Complexity: O(N*W), where 'N' is the number of items and 'W' is the
capacity, as for every item we traverse through all capacities 1 <= w <= W.
 Auxiliary Space: O(N*W), for the 2-D array of size 'N*W'.
Method 3: This method uses Memoization Technique (an extension of
recursive approach).
This method is basically an extension of the recursive approach that
overcomes the problem of recalculating redundant cases. We solve the
problem by simply creating a 2-D array that stores the result of a particular
state (n, w) when it is computed for the first time. If we come across the
same state (n, w) again, instead of recomputing it in exponential time we
directly return its result stored in the table in constant time. This gives the
memoized approach an edge over the plain recursive approach.

// Here is the top-down approach of


// dynamic programming
#include <bits/stdc++.h>
using namespace std;

// Returns the value of maximum profit


int knapSackRec(int W, int wt[],
int val[], int i,
int** dp)
{
// base condition
if (i < 0)
return 0;
if (dp[i][W] != -1)
return dp[i][W];

if (wt[i] > W) {

// Store the value of function call


// stack in table before return
dp[i][W] = knapSackRec(W, wt,
val, i - 1,
dp);
return dp[i][W];
}
else {
// Store value in a table before return
dp[i][W] = max(val[i]
+ knapSackRec(W - wt[i],
wt, val,
i - 1, dp),
knapSackRec(W, wt, val,
i - 1, dp));

// Return value of table after storing


return dp[i][W];
}
}

int knapSack(int W, int wt[], int val[], int n)


{
// double pointer to declare the
// table dynamically
int** dp;
dp = new int*[n];

// loop to create the table dynamically


for (int i = 0; i < n; i++)
dp[i] = new int[W + 1];

// Loop to initially fill the
// table with -1
for (int i = 0; i < n; i++)
for (int j = 0; j < W + 1; j++)
dp[i][j] = -1;
return knapSackRec(W, wt, val, n - 1, dp);
}

// Driver Code
int main()
{
int val[] = { 60, 100, 120 };
int wt[] = { 10, 20, 30 };
int W = 50;
int n = sizeof(val) / sizeof(val[0]);
cout << knapSack(W, wt, val, n);
return 0;
}

Output: 220

Method 4: We use the same dynamic programming approach but with optimized
space complexity, keeping only a 1-D array dp[0..W]. The inner loop runs over
capacities from W down to 0 so that each item is counted at most once
(iterating upwards would reuse the current item within the same pass).

#include <bits/stdc++.h>
using namespace std;
int knapSack(int W, int wt[], int val[], int n)
{
// making and initializing dp array
int dp[W + 1];
memset(dp, 0, sizeof(dp));

for (int i = 1; i < n + 1; i++) {


for (int w = W; w >= 0; w--) {

if (wt[i - 1] <= w)
// finding the maximum value
dp[w] = max(dp[w],
dp[w - wt[i - 1]] + val[i - 1]);
}
}
return dp[W]; // returning the maximum value of knapsack
}
int main()
{
int val[] = { 60, 100, 120 };
int wt[] = { 10, 20, 30 };
int W = 50;
int n = sizeof(val) / sizeof(val[0]);
cout << knapSack(W, wt, val, n);
return 0;
}

Output: 220

KRUSKAL'S MINIMUM SPANNING TREE ALGORITHM:

What is a Minimum Spanning Tree?
Given a connected and undirected graph, a spanning tree of that graph is a
subgraph that is a tree and connects all the vertices together. A single graph
can have many different spanning trees. A minimum spanning tree (MST), or
minimum weight spanning tree, for a weighted, connected, undirected graph
is a spanning tree with a weight less than or equal to the weight of every
other spanning tree. The weight of a spanning tree is the sum of the weights
given to each edge of the spanning tree.
How many edges does a minimum spanning tree have?
A minimum spanning tree has (V - 1) edges, where V is the number of
vertices in the given graph.
Below are the steps for finding MST using Kruskal’s algorithm
1. Sort all the edges in non-decreasing order of their weight. 
2. Pick the smallest edge. Check if it forms a cycle with the spanning tree
formed so far. If cycle is not formed, include this edge. Else, discard it. 
3. Repeat step#2 until there are (V-1) edges in the spanning tree.
The algorithm is a Greedy Algorithm. The Greedy Choice is to pick the
smallest weight edge that does not cause a cycle in the MST constructed so
far. Let us understand it with an example: Consider the below input graph.

The graph contains 9 vertices and 14 edges. So, the minimum spanning tree
formed will have (9 - 1) = 8 edges.

After sorting:

Weight Src Dest


1 7 6
2 8 2
2 6 5
4 0 1
4 2 5
6 8 6
7 2 3
7 7 8
8 0 7
8 1 2
9 3 4
10 5 4
11 1 7
14 3 5

Now pick all edges one by one from the sorted list of edges:
1. Pick edge 7-6: No cycle is formed, include it.
2. Pick edge 8-2: No cycle is formed, include it.
3. Pick edge 6-5: No cycle is formed, include it.
4. Pick edge 0-1: No cycle is formed, include it.
5. Pick edge 2-5: No cycle is formed, include it.
6. Pick edge 8-6: Including this edge results in a cycle, discard it.
7. Pick edge 2-3: No cycle is formed, include it.
8. Pick edge 7-8: Including this edge results in a cycle, discard it.
9. Pick edge 0-7: No cycle is formed, include it.
10. Pick edge 1-2: Including this edge results in a cycle, discard it.
11. Pick edge 3-4: No cycle is formed, include it.

Since the number of edges included equals (V - 1), the algorithm stops here.
Below is the implementation of the above idea:

// C++ program for Kruskal's algorithm

// to find Minimum Spanning Tree of a

// given connected, undirected and weighted

// graph

#include <bits/stdc++.h>

using namespace std;


// a structure to represent a

// weighted edge in graph

class Edge {

public:

int src, dest, weight;

};

// a structure to represent a connected,

// undirected and weighted graph

class Graph {

public:

// V-> Number of vertices, E-> Number of edges

int V, E;

// graph is represented as an array of edges.

// Since the graph is undirected, the edge

// from src to dest is also edge from dest

// to src. Both are counted as 1 edge here.

Edge* edge;

};

// Creates a graph with V vertices and E edges

Graph* createGraph(int V, int E)
{
    Graph* graph = new Graph;
    graph->V = V;
    graph->E = E;
    graph->edge = new Edge[E];
    return graph;
}

// A structure to represent a subset for union-find

class subset {

public:

int parent;

int rank;

};

// A utility function to find set of an element i

// (uses path compression technique)

int find(subset subsets[], int i)
{
    // find root and make root as parent of i
    // (path compression)
    if (subsets[i].parent != i)
        subsets[i].parent
            = find(subsets, subsets[i].parent);

    return subsets[i].parent;

}
// A function that does union of two sets of x and y

// (uses union by rank)

void Union(subset subsets[], int x, int y)
{
    int xroot = find(subsets, x);
    int yroot = find(subsets, y);

    // Attach smaller rank tree under root of high
    // rank tree (Union by Rank)
    if (subsets[xroot].rank < subsets[yroot].rank)
        subsets[xroot].parent = yroot;
    else if (subsets[xroot].rank > subsets[yroot].rank)
        subsets[yroot].parent = xroot;

    // If ranks are same, then make one as root and
    // increment its rank by one
    else {
        subsets[yroot].parent = xroot;
        subsets[xroot].rank++;
    }
}

// Compare two edges according to their weights.

// Used in qsort() for sorting an array of edges

int myComp(const void* a, const void* b)
{
    Edge* a1 = (Edge*)a;
    Edge* b1 = (Edge*)b;
    return a1->weight > b1->weight;
}

// The main function to construct MST using Kruskal's

// algorithm

void KruskalMST(Graph* graph)
{
    int V = graph->V;
    Edge result[V]; // This will store the resultant MST
    int e = 0; // An index variable, used for result[]
    int i = 0; // An index variable, used for sorted edges

    // Step 1: Sort all the edges in non-decreasing
    // order of their weight. If we are not allowed to
    // change the given graph, we can create a copy of
    // the array of edges
    qsort(graph->edge, graph->E, sizeof(graph->edge[0]),
          myComp);

    // Allocate memory for creating V subsets
    subset* subsets = new subset[V];

    // Create V subsets with single elements
    for (int v = 0; v < V; ++v) {
        subsets[v].parent = v;
        subsets[v].rank = 0;
    }

    // Number of edges to be taken is equal to V-1
    while (e < V - 1 && i < graph->E) {

        // Step 2: Pick the smallest edge. And increment
        // the index for next iteration
        Edge next_edge = graph->edge[i++];

        int x = find(subsets, next_edge.src);
        int y = find(subsets, next_edge.dest);

        // If including this edge doesn't cause a cycle,
        // include it in result and increment the index
        // of result for the next edge
        if (x != y) {
            result[e++] = next_edge;
            Union(subsets, x, y);
        }
        // Else discard the next_edge
    }

    // print the contents of result[] to display the
    // built MST
    cout << "Following are the edges in the constructed "
            "MST\n";
    int minimumCost = 0;
    for (i = 0; i < e; ++i) {
        cout << result[i].src << " -- " << result[i].dest
             << " == " << result[i].weight << endl;
        minimumCost = minimumCost + result[i].weight;
    }
    cout << "Minimum Cost Spanning Tree: " << minimumCost
         << endl;
}

// Driver code

int main()
{
    /* Let us create the following weighted graph

               10
          0---------1
          |  \      |
         6|   5\    |15
          |      \  |
          2---------3
               4              */

    int V = 4; // Number of vertices in graph
    int E = 5; // Number of edges in graph
    Graph* graph = createGraph(V, E);

    // add edge 0-1
    graph->edge[0].src = 0;
    graph->edge[0].dest = 1;
    graph->edge[0].weight = 10;

    // add edge 0-2
    graph->edge[1].src = 0;
    graph->edge[1].dest = 2;
    graph->edge[1].weight = 6;

    // add edge 0-3
    graph->edge[2].src = 0;
    graph->edge[2].dest = 3;
    graph->edge[2].weight = 5;

    // add edge 1-3
    graph->edge[3].src = 1;
    graph->edge[3].dest = 3;
    graph->edge[3].weight = 15;

    // add edge 2-3
    graph->edge[4].src = 2;
    graph->edge[4].dest = 3;
    graph->edge[4].weight = 4;

    // Function call
    KruskalMST(graph);

    return 0;
}

Output
Following are the edges in the constructed MST
2 -- 3 == 4
0 -- 3 == 5
0 -- 1 == 10
Minimum Cost Spanning Tree: 19

Time Complexity: O(E log E) or O(E log V). Sorting of edges takes O(E log E)
time. After sorting, we iterate through all edges and apply the find-union
algorithm. The find and union operations take at most O(log V) time. So the
overall complexity is O(E log E + E log V). The value of E can be at most
O(V^2), so O(log V) and O(log E) are the same. Therefore, the overall time
complexity is O(E log E) or O(E log V).
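The find()/Union() pair above is a general-purpose disjoint-set (union-find)
structure. A minimal standalone sketch of the same idea, used here only to show
how an edge that would close a cycle is detected (the edges below are illustrative):

// Minimal sketch: disjoint-set union with path compression and union by rank.
#include <iostream>
#include <utility>
#include <vector>
using namespace std;

struct DSU {
    vector<int> parent, rnk;
    DSU(int n) : parent(n), rnk(n, 0) {
        for (int i = 0; i < n; i++) parent[i] = i;
    }
    int find(int x) {                     // path compression
        return parent[x] == x ? x : parent[x] = find(parent[x]);
    }
    bool unite(int a, int b) {            // returns false if a and b are already connected
        a = find(a); b = find(b);
        if (a == b) return false;
        if (rnk[a] < rnk[b]) swap(a, b);  // union by rank
        parent[b] = a;
        if (rnk[a] == rnk[b]) rnk[a]++;
        return true;
    }
};

int main()
{
    DSU dsu(4);
    cout << dsu.unite(2, 3) << "\n"; // 1: edge 2-3 accepted
    cout << dsu.unite(0, 3) << "\n"; // 1: edge 0-3 accepted
    cout << dsu.unite(0, 2) << "\n"; // 0: edge 0-2 would form a cycle
    return 0;
}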

PRIM’S MINIMUM SPANNING TREE ALGORITHM:

We have discussed Kruskal's algorithm for the Minimum Spanning Tree. Like
Kruskal's algorithm, Prim's algorithm is also a Greedy algorithm. It starts with
an empty spanning tree. The idea is to maintain two sets of vertices. The first
set contains the vertices already included in the MST; the other set contains
the vertices not yet included. At every step, it considers all the edges that
connect the two sets and picks the minimum weight edge from these edges.
After picking the edge, it moves the other endpoint of the edge to the set
containing the MST.
A group of edges that connects two sets of vertices in a graph is called a cut
in graph theory. So, at every step of Prim's algorithm, we find a cut (of two
sets, one containing the vertices already included in the MST and the other
containing the rest of the vertices), pick the minimum weight edge from the
cut and include this vertex in the MST set (the set that contains the already
included vertices).
How does Prim's Algorithm work? The idea behind Prim's algorithm is
simple: a spanning tree means all vertices must be connected. So the two
disjoint subsets (discussed above) of vertices must be connected to make
a Spanning Tree, and they must be connected with the minimum weight
edge to make it a Minimum Spanning Tree.
Algorithm 
1) Create a set mstSet that keeps track of vertices already included in MST. 
2) Assign a key value to all vertices in the input graph. Initialize all key values
as INFINITE. Assign key value as 0 for the first vertex so that it is picked
first. 
3) While mstSet doesn’t include all vertices 
….a) Pick a vertex u which is not there in mstSet and has minimum key
value. 
….b) Include u to mstSet. 
….c) Update key value of all adjacent vertices of u. To update the key
values, iterate through all adjacent vertices. For every adjacent vertex v, if
weight of edge u-v is less than the previous key value of v, update the key
value as weight of u-v

The idea of using key values is to pick the minimum weight edge from cut.
The key values are used only for vertices which are not yet included in MST,
the key value for these vertices indicate the minimum weight edges
connecting them to the set of vertices included in MST. 

Let us understand with the following example: 

The set mstSet  is initially empty and keys assigned to vertices are {0, INF,
INF, INF, INF, INF, INF, INF} where INF indicates infinite. Now pick the
vertex with the minimum key value. The vertex 0 is picked, include it
in mstSet. So mstSet  becomes {0}. After including to mstSet, update key
values of adjacent vertices. Adjacent vertices of 0 are 1 and 7. The key
values of 1 and 7 are updated as 4 and 8. Following subgraph shows
vertices and their key values, only the vertices with finite key values are
shown. The vertices included in MST are shown in green color.

Pick the vertex with minimum key value and not already included in MST (not
in mstSET). The vertex 1 is picked and added to mstSet. So mstSet now
becomes {0, 1}. Update the key values of adjacent vertices of 1. The key
value of vertex 2 becomes 8.

Pick the vertex with minimum key value and not already included in MST (not
in mstSet). We can either pick vertex 7 or vertex 2; let vertex 7 be picked. So
mstSet now becomes {0, 1, 7}. Update the key values of adjacent vertices of
7. The key values of vertex 6 and 8 become finite (1 and 7 respectively).
 
Pick the vertex with minimum key value and not already included in MST (not
in mstSET). Vertex 6 is picked. So mstSet now becomes {0, 1, 7, 6}. Update
the key values of adjacent vertices of 6. The key value of vertex 5 and 8 are
updated.
 

We repeat the above steps until mstSet  includes all vertices of given graph.
Finally, we get the following graph.
 

How to implement the above algorithm? 


We use a boolean array mstSet[] to represent the set of vertices included in
MST. If a value mstSet[v] is true, then vertex v is included in MST, otherwise
not. Array key[] is used to store key values of all vertices. Another array
parent[] to store indexes of parent nodes in MST. The parent array is the
output array which is used to show the constructed MST. 
// A C++ program for Prim's Minimum

// Spanning Tree (MST) algorithm. The program is

// for adjacency matrix representation of the graph

#include <bits/stdc++.h>

using namespace std;

// Number of vertices in the graph

#define V 5

// A utility function to find the vertex with

// minimum key value, from the set of vertices

// not yet included in MST

int minKey(int key[], bool mstSet[])
{
    // Initialize min value
    int min = INT_MAX, min_index;

    for (int v = 0; v < V; v++)
        if (mstSet[v] == false && key[v] < min)
            min = key[v], min_index = v;

    return min_index;
}

// A utility function to print the

// constructed MST stored in parent[]

void printMST(int parent[], int graph[V][V])
{
    cout << "Edge \tWeight\n";
    for (int i = 1; i < V; i++)
        cout << parent[i] << " - " << i << " \t"
             << graph[i][parent[i]] << " \n";

}
// Function to construct and print MST for

// a graph represented using adjacency

// matrix representation

void primMST(int graph[V][V])
{
    // Array to store constructed MST
    int parent[V];

    // Key values used to pick minimum weight edge in cut
    int key[V];

    // To represent set of vertices included in MST
    bool mstSet[V];

    // Initialize all keys as INFINITE
    for (int i = 0; i < V; i++)
        key[i] = INT_MAX, mstSet[i] = false;

    // Always include the 1st vertex in MST.
    // Make key 0 so that this vertex is picked as the first vertex.
    key[0] = 0;
    parent[0] = -1; // First node is always root of MST

    // The MST will have V vertices
    for (int count = 0; count < V - 1; count++) {

        // Pick the minimum key vertex from the
        // set of vertices not yet included in MST
        int u = minKey(key, mstSet);

        // Add the picked vertex to the MST Set
        mstSet[u] = true;

        // Update key value and parent index of
        // the adjacent vertices of the picked vertex.
        // Consider only those vertices which are not
        // yet included in MST
        for (int v = 0; v < V; v++)

            // graph[u][v] is non zero only for adjacent vertices of u
            // mstSet[v] is false for vertices not yet included in MST
            // Update the key only if graph[u][v] is smaller than key[v]
            if (graph[u][v] && mstSet[v] == false && graph[u][v] < key[v])
                parent[v] = u, key[v] = graph[u][v];
    }

    // print the constructed MST
    printMST(parent, graph);
}

// Driver code

int main()
{
    /* Let us create the following graph

              2     3
          (0)---(1)---(2)
           |    / \    |
          6|  8/   \5  |7
           |  /     \  |
          (3)---------(4)
                 9            */

    int graph[V][V] = { { 0, 2, 0, 6, 0 },
                        { 2, 0, 3, 8, 5 },
                        { 0, 3, 0, 0, 7 },
                        { 6, 8, 0, 0, 9 },
                        { 0, 5, 7, 9, 0 } };

    // Print the solution
    primMST(graph);

    return 0;
}

Output: 
Edge Weight
0 - 1 2
1 - 2 3
0 - 3 6
1 - 4 5
Time Complexity of the above program is O(V^2). 
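The O(V^2) bound comes from scanning all keys with minKey() on an adjacency
matrix. As a hedged sketch (not the program above), the same algorithm on an
adjacency list with a min-heap (std::priority_queue) runs in O(E log V); the edges
below encode the same 5-vertex example graph:

// Minimal sketch: Prim's MST with an adjacency list and a min-heap, O(E log V).
#include <climits>
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

int main()
{
    int V = 5;
    // adj[u] holds pairs (weight, v); same edges as the matrix example above.
    vector<vector<pair<int, int>>> adj(V);
    auto addEdge = [&](int u, int v, int w) {
        adj[u].push_back({ w, v });
        adj[v].push_back({ w, u });
    };
    addEdge(0, 1, 2); addEdge(0, 3, 6); addEdge(1, 2, 3);
    addEdge(1, 3, 8); addEdge(1, 4, 5); addEdge(2, 4, 7); addEdge(3, 4, 9);

    // Min-heap of (key, vertex); smallest key on top.
    priority_queue<pair<int, int>, vector<pair<int, int>>, greater<pair<int, int>>> pq;
    vector<int> key(V, INT_MAX);
    vector<bool> inMST(V, false);

    int total = 0;
    key[0] = 0;
    pq.push({ 0, 0 });
    while (!pq.empty()) {
        int u = pq.top().second;
        pq.pop();
        if (inMST[u]) continue;   // skip stale heap entries
        inMST[u] = true;
        total += key[u];
        for (auto [w, v] : adj[u])
            if (!inMST[v] && w < key[v]) {
                key[v] = w;
                pq.push({ w, v });
            }
    }
    cout << "Minimum Cost Spanning Tree: " << total << endl; // expected 16
    return 0;
}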

STRASSEN'S MATRIX MULTIPLICATION (DIVIDE AND CONQUER):

Divide and Conquer
Following is a simple Divide and Conquer method to multiply two square
matrices.
1) Divide matrices A and B into 4 sub-matrices of size N/2 x N/2 as shown in
the diagram below.
2) Calculate the following values recursively: ae + bg, af + bh, ce + dg and
cf + dh.

In the above method, we do 8 multiplications for matrices of size N/2 x N/2
and 4 additions. Addition of two matrices takes O(N^2) time. So the time
complexity can be written as
T(N) = 8T(N/2) + O(N^2)

From the Master Theorem, the time complexity of the above method is O(N^3),
which is unfortunately the same as the naive method.

Simple Divide and Conquer also leads to O(N^3); can there be a better way?
In the above divide and conquer method, the main component of the high time
complexity is the 8 recursive calls. The idea of Strassen's method is to reduce
the number of recursive calls to 7. Strassen's method is similar to the simple
divide and conquer method in the sense that it also divides matrices into
sub-matrices of size N/2 x N/2 as shown in the above diagram, but in
Strassen's method, the four sub-matrices of the result are calculated using
the following formulae.
 
Time Complexity of Strassen's Method
Addition and subtraction of two matrices takes O(N^2) time. So the time
complexity can be written as
T(N) = 7T(N/2) + O(N^2)
From the Master Theorem, the time complexity of the above method is
O(N^(log2 7)), which is approximately O(N^2.8074).

Generally Strassen's Method is not preferred for practical applications, for the
following reasons.
1) The constants used in Strassen’s method are high and for a typical
application Naive method works better. 
2) For Sparse matrices, there are better methods especially designed for
them. 
3) The submatrices in recursion take extra space. 
4) Because of the limited precision of computer arithmetic on noninteger
values, larger errors accumulate in Strassen’s algorithm than in Naive
Method.

# Version 3.6
import numpy as np

def split(matrix):
    """
    Splits a given matrix into quarters.
    Input: nxn matrix
    Output: tuple containing 4 n/2 x n/2 matrices corresponding to a, b, c, d
    """
    row, col = matrix.shape
    row2, col2 = row//2, col//2
    return matrix[:row2, :col2], matrix[:row2, col2:], matrix[row2:, :col2], matrix[row2:, col2:]

def strassen(x, y):
    """
    Computes matrix product by divide and conquer approach, recursively.
    Input: nxn matrices x and y
    Output: nxn matrix, product of x and y
    """
    # Base case when size of matrices is 1x1
    if len(x) == 1:
        return x * y

    # Splitting the matrices into quadrants. This will be done recursively
    # until the base case is reached.
    a, b, c, d = split(x)
    e, f, g, h = split(y)

    # Computing the 7 products, recursively (p1, p2...p7)
    p1 = strassen(a, f - h)
    p2 = strassen(a + b, h)
    p3 = strassen(c + d, e)
    p4 = strassen(d, g - e)
    p5 = strassen(a + d, e + h)
    p6 = strassen(b - d, g + h)
    p7 = strassen(a - c, e + f)

    # Computing the values of the 4 quadrants of the final matrix c
    c11 = p5 + p4 - p2 + p6
    c12 = p1 + p2
    c21 = p3 + p4
    c22 = p1 + p5 - p3 - p7

    # Combining the 4 quadrants into a single matrix by stacking horizontally and vertically.
    c = np.vstack((np.hstack((c11, c12)), np.hstack((c21, c22))))

    return c
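
# A small hedged driver (not part of the original listing) to exercise strassen()
# on a power-of-two sized example; the values below are purely illustrative.
if __name__ == "__main__":
    A = np.array([[1, 2, 3, 4],
                  [5, 6, 7, 8],
                  [9, 10, 11, 12],
                  [13, 14, 15, 16]])
    B = np.eye(4, dtype=int)      # identity matrix, so the product should equal A
    print(strassen(A, B))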

DIJKSTRA'S SHORTEST PATH ALGORITHM:
Dijkstra’s algorithm is very similar to Prim’s algorithm for minimum spanning
tree. Like Prim’s MST, we generate a SPT (shortest path tree) with a given
source as a root. We maintain two sets, one set contains vertices included in
the shortest-path tree, other set includes vertices not yet included in the
shortest-path tree. At every step of the algorithm, we find a vertex that is in
the other set (set of not yet included) and has a minimum distance from the
source.
Below are the detailed steps used in Dijkstra’s algorithm to find the shortest
path from a single source vertex to all other vertices in the given graph. 
Algorithm 
1) Create a set sptSet (shortest path tree set) that keeps track of vertices
included in the shortest-path tree, i.e., whose minimum distance from the
source is calculated and finalized. Initially, this set is empty. 
2) Assign a distance value to all vertices in the input graph. Initialize all
distance values as INFINITE. Assign distance value as 0 for the source
vertex so that it is picked first. 
3) While sptSet doesn’t include all vertices 
….a) Pick a vertex u which is not there in sptSet and has a minimum
distance value. 
….b) Include u to sptSet. 
….c) Update distance value of all adjacent vertices of u. To update the
distance values, iterate through all adjacent vertices. For every adjacent
vertex v, if the sum of distance value of u (from source) and weight of edge
u-v, is less than the distance value of v, then update the distance value of v. 
Let us understand with the following example: 

The set sptSet is initially empty and distances assigned to vertices are {0,
INF, INF, INF, INF, INF, INF, INF} where INF indicates infinite. Now pick the
vertex with a minimum distance value. The vertex 0 is picked, include it
in sptSet. So sptSet  becomes {0}. After including 0 to sptSet, update
distance values of its adjacent vertices. Adjacent vertices of 0 are 1 and 7.
The distance values of 1 and 7 are updated as 4 and 8. The following
subgraph shows vertices and their distance values, only the vertices with
finite distance values are shown. The vertices included in SPT are shown in
green colour.
 

Pick the vertex with minimum distance value and not already included in SPT
(not in sptSET). The vertex 1 is picked and added to sptSet. So sptSet now
becomes {0, 1}. Update the distance values of adjacent vertices of 1. The
distance value of vertex 2 becomes 12.

Pick the vertex with minimum distance value and not already included in SPT
(not in sptSET). Vertex 7 is picked. So sptSet now becomes {0, 1, 7}. Update
the distance values of adjacent vertices of 7. The distance value of vertex 6
and 8 becomes finite (15 and 9 respectively). 
 
Pick the vertex with minimum distance value and not already included in SPT
(not in sptSET). Vertex 6 is picked. So sptSet now becomes {0, 1, 7, 6}.
Update the distance values of adjacent vertices of 6. The distance value of
vertex 5 and 8 are updated.

We repeat the above steps until sptSet  includes all vertices of the given
graph. Finally, we get the following Shortest Path Tree (SPT).
 
How to implement the above algorithm?
We use a boolean array sptSet[] to represent the set of vertices included in
SPT. If a value sptSet[v] is true, then vertex v is included in SPT, otherwise
not. Array dist[] is used to store the shortest distance values of all vertices.

// A Java program for Dijkstra's single source shortest path algorithm.

// The program is for adjacency matrix representation of the graph

import java.util.*;

import java.lang.*;

import java.io.*;

class ShortestPath {

// A utility function to find the vertex with minimum distance value,

// from the set of vertices not yet included in shortest path tree

static final int V = 9;

int minDistance(int dist[], Boolean sptSet[])
{
    // Initialize min value
    int min = Integer.MAX_VALUE, min_index = -1;

    for (int v = 0; v < V; v++)
        if (sptSet[v] == false && dist[v] <= min) {
            min = dist[v];
            min_index = v;
        }

    return min_index;
}

// A utility function to print the constructed distance array

void printSolution(int dist[])
{
    System.out.println("Vertex \t\t Distance from Source");
    for (int i = 0; i < V; i++)
        System.out.println(i + " \t\t " + dist[i]);
}

// Function that implements Dijkstra's single source shortest path

// algorithm for a graph represented using adjacency matrix

// representation

void dijkstra(int graph[][], int src)
{
    int dist[] = new int[V]; // The output array. dist[i] will hold
                             // the shortest distance from src to i

    // sptSet[i] will be true if vertex i is included in the shortest
    // path tree or the shortest distance from src to i is finalized
    Boolean sptSet[] = new Boolean[V];

    // Initialize all distances as INFINITE and sptSet[] as false
    for (int i = 0; i < V; i++) {
        dist[i] = Integer.MAX_VALUE;
        sptSet[i] = false;
    }

    // Distance of source vertex from itself is always 0
    dist[src] = 0;

    // Find shortest path for all vertices
    for (int count = 0; count < V - 1; count++) {

        // Pick the minimum distance vertex from the set of vertices
        // not yet processed. u is always equal to src in the first
        // iteration.
        int u = minDistance(dist, sptSet);

        // Mark the picked vertex as processed
        sptSet[u] = true;

        // Update dist value of the adjacent vertices of the
        // picked vertex.
        for (int v = 0; v < V; v++)

            // Update dist[v] only if it is not in sptSet, there is an
            // edge from u to v, and total weight of path from src to
            // v through u is smaller than current value of dist[v]
            if (!sptSet[v] && graph[u][v] != 0 && dist[u] != Integer.MAX_VALUE
                && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }

    // print the constructed distance array
    printSolution(dist);
}

// Driver method
public static void main(String[] args)
{
    /* Let us create the example graph discussed above */
    int graph[][] = new int[][] { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
                                  { 4, 0, 8, 0, 0, 0, 0, 11, 0 },
                                  { 0, 8, 0, 7, 0, 4, 0, 0, 2 },
                                  { 0, 0, 7, 0, 9, 14, 0, 0, 0 },
                                  { 0, 0, 0, 9, 0, 10, 0, 0, 0 },
                                  { 0, 0, 4, 14, 10, 0, 2, 0, 0 },
                                  { 0, 0, 0, 0, 0, 2, 0, 1, 6 },
                                  { 8, 11, 0, 0, 0, 0, 1, 0, 7 },
                                  { 0, 0, 2, 0, 0, 0, 6, 7, 0 } };

    ShortestPath t = new ShortestPath();
    t.dijkstra(graph, 0);
}
}

OUTPUT:
Vertex Distance from Source
0 0
1 4
2 12
3 19
4 21
5 11
6 9
7 8
8 14

Notes: 
1) The code calculates the shortest distance but doesn’t calculate the path
information. We can create a parent array, update the parent array when
distance is updated (like prim’s implementation) and use it to show the
shortest path from source to different vertices.
2) The code is for undirected graphs, the same Dijkstra function can be used
for directed graphs also.
3) The code finds the shortest distances from the source to all vertices. If we
are interested only in the shortest distance from the source to a single target,
we can break the for loop when the picked minimum distance vertex is equal
to the target (Step 3.a of the algorithm).
4) Time Complexity of the implementation is O(V^2). If the input graph is
represented using adjacency list, it can be reduced to O(E log V) with the
help of a binary heap. Please see 
Dijkstra’s Algorithm for Adjacency List Representation for more details.
5) Dijkstra's algorithm doesn't work for graphs with negative weight cycles. It
may give correct results for a graph with negative edges, but only if a vertex
is allowed to be visited multiple times, and that version loses its fast time
complexity. For graphs with negative weight edges and cycles, the Bellman-Ford
algorithm (discussed next) can be used.
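Note 4 mentions the O(E log V) variant; a minimal hedged sketch in C++ using an
adjacency list and std::priority_queue (the small directed graph here is illustrative,
not the 9-vertex example above):

// Minimal sketch: Dijkstra with an adjacency list and a min-heap, O(E log V).
#include <climits>
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

int main()
{
    int V = 5;
    // adj[u] holds pairs (v, weight); an illustrative directed graph.
    vector<vector<pair<int, int>>> adj(V);
    adj[0] = { {1, 4}, {2, 1} };
    adj[1] = { {3, 1} };
    adj[2] = { {1, 2}, {3, 7} };
    adj[3] = { {4, 3} };

    vector<int> dist(V, INT_MAX);
    priority_queue<pair<int, int>, vector<pair<int, int>>, greater<pair<int, int>>> pq;
    dist[0] = 0;
    pq.push({ 0, 0 }); // (distance, vertex)

    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;          // stale entry, already improved
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {    // relax edge u -> v
                dist[v] = dist[u] + w;
                pq.push({ dist[v], v });
            }
    }

    for (int i = 0; i < V; i++)
        cout << i << " \t " << dist[i] << "\n";
    return 0;
}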

BELLMAN FORD ALGORITHM:


We have discussed Dijkstra's algorithm for this problem. Dijkstra's algorithm
is a Greedy algorithm and its time complexity is O(E + V log V) (with the use
of a Fibonacci heap). Dijkstra doesn't work for graphs with negative weight
cycles; Bellman-Ford works for such graphs. Bellman-Ford is also simpler
than Dijkstra and suits distributed systems well. But the time complexity of
Bellman-Ford is O(VE), which is more than Dijkstra's.
 
Algorithm 
Following are the detailed steps.
Input: Graph and a source vertex src 
Output: Shortest distance to all vertices from src. If there is a negative weight
cycle, then shortest distances are not calculated, negative weight cycle is
reported.
1) This step initializes distances from the source to all vertices as infinite and
distance to the source itself as 0. Create an array dist[] of size |V| with all
values as infinite except dist[src] where src is source vertex.
2) This step calculates shortest distances. Do following |V|-1 times where |V|
is the number of vertices in given graph. 
…..a) Do following for each edge u-v 
………………If dist[v] > dist[u] + weight of edge uv, then update dist[v] 
………………….dist[v] = dist[u] + weight of edge uv
3) This step reports if there is a negative weight cycle in graph. Do following
for each edge u-v 
……If dist[v] > dist[u] + weight of edge uv, then “Graph contains negative
weight cycle” 
The idea of step 3 is, step 2 guarantees the shortest distances if the graph
doesn’t contain a negative weight cycle. If we iterate through all edges one
more time and get a shorter path for any vertex, then there is a negative
weight cycle
How does this work? Like other Dynamic Programming Problems, the
algorithm calculates shortest paths in a bottom-up manner. It first calculates
the shortest distances which have at-most one edge in the path. Then, it
calculates the shortest paths with at-most 2 edges, and so on. After the i-th
iteration of the outer loop, the shortest paths with at most i edges are
calculated. There can be at most |V| - 1 edges in any simple path; that is
why the outer loop runs |V| - 1 times. The idea is, assuming that there is no
negative weight cycle, if we have calculated shortest paths with at most i
edges, then one more iteration over all edges guarantees to give shortest
paths with at most (i+1) edges.
Example 
Let us understand the algorithm with following example graph. Let the given
source vertex be 0. Initialize all distances as infinite, except the distance to
the source itself. Total number of vertices in the graph is 5, so all edges
must be processed 4 times.

Let all edges be processed in the following order: (B, E), (D, B), (B, D), (A,
B), (A, C), (D, C), (B, C), (E, D). We get the following distances when all
edges are processed the first time. The first row shows initial distances. The
second row shows distances when edges (B, E), (D, B), (B, D) and (A, B) are
processed. The third row shows distances when (A, C) is processed. The
fourth row shows when (D, C), (B, C) and (E, D) are processed. 
The first iteration guarantees to give all shortest paths which are at most 1
edge long. We get the following distances when all edges are processed
second time (The last row shows final values). 
The second iteration guarantees to give all shortest paths which are at most
2 edges long. The algorithm processes all edges 2 more times. The
distances are minimized after the second iteration, so third and fourth
iterations don’t update the distances.
Implementation: 
// A Java program for Bellman-Ford's single source shortest path

// algorithm.

import java.util.*;

import java.lang.*;

import java.io.*;

// A class to represent a connected, directed and weighted graph

class Graph {

// A class to represent a weighted edge in graph

class Edge {
    int src, dest, weight;
    Edge()
    {
        src = dest = weight = 0;
    }
};

int V, E;

Edge edge[];

// Creates a graph with V vertices and E edges

Graph(int v, int e)
{
    V = v;
    E = e;
    edge = new Edge[e];
    for (int i = 0; i < e; ++i)
        edge[i] = new Edge();

}
// The main function that finds shortest distances from src

// to all other vertices using Bellman-Ford algorithm. The

// function also detects negative weight cycle

void BellmanFord(Graph graph, int src)
{
    int V = graph.V, E = graph.E;
    int dist[] = new int[V];

    // Step 1: Initialize distances from src to all other
    // vertices as INFINITE
    for (int i = 0; i < V; ++i)
        dist[i] = Integer.MAX_VALUE;
    dist[src] = 0;

    // Step 2: Relax all edges |V| - 1 times. A simple
    // shortest path from src to any other vertex can
    // have at most |V| - 1 edges
    for (int i = 1; i < V; ++i) {
        for (int j = 0; j < E; ++j) {
            int u = graph.edge[j].src;
            int v = graph.edge[j].dest;
            int weight = graph.edge[j].weight;
            if (dist[u] != Integer.MAX_VALUE && dist[u] + weight < dist[v])
                dist[v] = dist[u] + weight;
        }
    }

    // Step 3: check for negative-weight cycles. The above
    // step guarantees shortest distances if graph doesn't
    // contain negative weight cycle. If we get a shorter
    // path, then there is a cycle.
    for (int j = 0; j < E; ++j) {
        int u = graph.edge[j].src;
        int v = graph.edge[j].dest;
        int weight = graph.edge[j].weight;
        if (dist[u] != Integer.MAX_VALUE && dist[u] + weight < dist[v]) {
            System.out.println("Graph contains negative weight cycle");
            return;
        }
    }

    printArr(dist, V);
}

// A utility function used to print the solution

void printArr(int dist[], int V)
{
    System.out.println("Vertex Distance from Source");
    for (int i = 0; i < V; ++i)
        System.out.println(i + "\t\t" + dist[i]);
}

// Driver method to test above function

public static void main(String[] args)
{
    int V = 5; // Number of vertices in graph
    int E = 8; // Number of edges in graph

    Graph graph = new Graph(V, E);

    // add edge 0-1 (or A-B in above figure)
    graph.edge[0].src = 0;
    graph.edge[0].dest = 1;
    graph.edge[0].weight = -1;

    // add edge 0-2 (or A-C in above figure)
    graph.edge[1].src = 0;
    graph.edge[1].dest = 2;
    graph.edge[1].weight = 4;

    // add edge 1-2 (or B-C in above figure)
    graph.edge[2].src = 1;
    graph.edge[2].dest = 2;
    graph.edge[2].weight = 3;

    // add edge 1-3 (or B-D in above figure)
    graph.edge[3].src = 1;
    graph.edge[3].dest = 3;
    graph.edge[3].weight = 2;

    // add edge 1-4 (or B-E in above figure)
    graph.edge[4].src = 1;
    graph.edge[4].dest = 4;
    graph.edge[4].weight = 2;

    // add edge 3-2 (or D-C in above figure)
    graph.edge[5].src = 3;
    graph.edge[5].dest = 2;
    graph.edge[5].weight = 5;

    // add edge 3-1 (or D-B in above figure)
    graph.edge[6].src = 3;
    graph.edge[6].dest = 1;
    graph.edge[6].weight = 1;

    // add edge 4-3 (or E-D in above figure)
    graph.edge[7].src = 4;
    graph.edge[7].dest = 3;
    graph.edge[7].weight = -3;

    graph.BellmanFord(graph, 0);
}
}
Output: 

Vertex Distance from Source


0 0
1 -1
2 2
3 -2
4 1

Notes
1) Negative weights are found in various applications of graphs. For
example, instead of paying a cost for a path, we may get some advantage if
we follow the path.
2) Bellman-Ford works better than Dijkstra's for distributed systems. Unlike
Dijkstra's, where we need to find the minimum value over all vertices, in
Bellman-Ford edges are considered one by one.
3) Bellman-Ford does not work with an undirected graph with negative edges,
as any such edge will be reported as a negative cycle.
MULTISTAGE GRAPH (Shortest Path):

A multistage graph is a directed graph in which the nodes can be divided
into a set of stages such that all edges go from one stage to the next stage
only (in other words, there is no edge between vertices of the same stage or
from a vertex of the current stage to a previous stage).
We are given a multistage graph, a source and a destination, and we need to
find the shortest path from source to destination. By convention, we consider
the source to be at stage 1 and the destination at the last stage.
Following is an example graph we will consider in this article:

Now there are various strategies we can apply:
 The Brute force method of finding all possible paths between Source and
Destination and then finding the minimum. That's the WORST possible
strategy.
 Dijkstra's Algorithm for Single Source shortest paths. This method will
find shortest paths from the source to all other nodes, which is not required
in this case. So it will take a lot of time and it doesn't even use the SPECIAL
feature that this MULTI-STAGE graph has.
 Simple Greedy Method: at each node, choose the shortest outgoing
path. If we apply this approach to the example graph given above we get
the solution as 1 + 4 + 18 = 23. But a quick look at the graph shows that
much shorter paths than 23 are available. So the greedy method fails!
 The best option is Dynamic Programming. So we need to find the Optimal
Substructure, the Recursive Equations and the Overlapping Subproblems.
Optimal Substructure and Recursive Equation:
We define the notation M(x, y) as the minimum cost to T (the target node)
from stage x, node y.

The shortest distance from stage 1, node 0 to the destination, i.e., node 7,
is M(1, 0).

// From 0, we can go to 1 or 2 or 3 to
// reach 7.
M(1, 0) = min(1 + M(2, 1),
              2 + M(2, 2),
              5 + M(2, 3))

This means that our problem of 0 --> 7 is now sub-divided into 3 sub-problems.

So if we have a total of 'n' stages and target T, then the stopping condition is:
M(n-1, i) = cost(i --> T) + M(n, T) = cost(i --> T), since M(n, T) = 0.

Recursion Tree and Overlapping Sub-Problems:
So, the hierarchy of M(x, y) evaluations will look something like this
(in M(i, j), i is the stage number and j is the node number):

M(1, 0)
/ | \
/ | \
M(2, 1) M(2, 2) M(2, 3)
/ \ / \ / \
M(3, 4) M(3, 5) M(3, 4) M(3, 5) M(3, 6) M(3, 6)
. . . . . .
. . . . . .
. . . . . .
So, here we have drawn a very small part of the Recursion Tree and we can
already see Overlapping Sub-Problems. We can largely reduce the number
of M(x, y) evaluations using Dynamic Programming.
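
Before the bottom-up implementation below, here is a minimal top-down (memoized) sketch of the same recurrence; it is not part of the original write-up, and the class/method names are chosen only for illustration. To keep it short it indexes sub-problems by node rather than by (stage, node), and it assumes the same 8-node adjacency-matrix representation used by the program that follows, with INF marking a missing edge. Calling M(graph, 0) on that matrix returns 9, the same answer as the bottom-up version.

// Illustrative memoized (top-down) sketch of M(x, y), indexed by node.
class MultistageMemo
{
    static final int N = 8;
    static final int INF = Integer.MAX_VALUE;
    static int[] memo = new int[N];            // memo[i] = shortest distance from i to N-1
    static boolean[] solved = new boolean[N];  // whether memo[i] is already computed

    static int M(int[][] graph, int i)
    {
        if (i == N - 1)
            return 0;                          // destination reached
        if (solved[i])
            return memo[i];                    // overlapping sub-problem: reuse the answer

        int best = INF;
        for (int j = i + 1; j < N; j++)        // edges only go forward in a multistage graph
        {
            if (graph[i][j] == INF)
                continue;                      // no edge i -> j
            int rest = M(graph, j);
            if (rest != INF)
                best = Math.min(best, graph[i][j] + rest);
        }
        solved[i] = true;
        return memo[i] = best;
    }
}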
Implementation details: 
The below implementation assumes that nodes are numbered from 0 to N-1
from first stage (source) to last stage (destination). We also assume that the
input graph is multistage. 

// Java program to find shortest distance
// in a multistage graph.
class GFG
{
    static int N = 8;
    static int INF = Integer.MAX_VALUE;

    // Returns shortest distance from node 0 to node N-1.
    public static int shortestDist(int[][] graph)
    {
        // dist[i] is going to store the shortest
        // distance from node i to node N-1.
        int[] dist = new int[N];
        dist[N - 1] = 0;

        // Calculating shortest path for the rest of the nodes
        for (int i = N - 2; i >= 0; i--)
        {
            // Initialize distance from i to destination (N-1)
            dist[i] = INF;

            // Check all nodes of next stages to find
            // shortest distance from i to N-1.
            for (int j = i; j < N; j++)
            {
                // Reject if no edge exists, or if j cannot reach
                // the destination (this also avoids overflowing INF)
                if (graph[i][j] == INF || dist[j] == INF)
                    continue;

                // Apply the recursive equation: distance to the target
                // through j, compared with the minimum distance so far.
                dist[i] = Math.min(dist[i], graph[i][j] + dist[j]);
            }
        }
        return dist[0];
    }

    // Driver code
    public static void main(String[] args)
    {
        // Graph stored in the form of an adjacency matrix
        int[][] graph = new int[][]{
            {INF, 1,   2,   5,   INF, INF, INF, INF},
            {INF, INF, INF, INF, 4,   11,  INF, INF},
            {INF, INF, INF, INF, 9,   5,   16,  INF},
            {INF, INF, INF, INF, INF, INF, 2,   INF},
            {INF, INF, INF, INF, INF, INF, INF, 18},
            {INF, INF, INF, INF, INF, INF, INF, 13},
            {INF, INF, INF, INF, INF, INF, INF, 2}};
        System.out.println(shortestDist(graph));
    }
}

Output:  9

Time Complexity: O(n^2)


Floyd Warshall Algorithm:

The Floyd Warshall Algorithm solves the All Pairs Shortest Path problem: find
the shortest distances between every pair of vertices in a given
edge-weighted directed graph.
Example: 

Input:
graph[][] = { {0,   5,   INF, 10 },
              {INF, 0,   3,   INF},
              {INF, INF, 0,   1  },
              {INF, INF, INF, 0  } }
which represents the following graph
             10
       (0)------->(3)
        |         /|\
      5 |          |
        |          | 1
       \|/         |
       (1)------->(2)
             3
Note that the value of graph[i][j] is 0 if i is equal to j,
and graph[i][j] is INF (infinite) if there is no edge from vertex i to j.

Output:
Shortest distance matrix
0 5 8 9
INF 0 3 4
INF INF 0 1
INF INF INF 0

Floyd Warshall Algorithm 


We initialize the solution matrix the same as the input graph matrix as a
first step. Then we update the solution matrix by considering all vertices,
one by one, as an intermediate vertex. The idea is to pick each vertex in
turn and update all shortest paths which include the picked vertex as an
intermediate vertex in the shortest path. When we pick vertex number k as an
intermediate vertex, we have already considered vertices {0, 1, 2, .., k-1}
as intermediate vertices. For every pair (i, j) of source and destination
vertices respectively, there are two possible cases.
1) k is not an intermediate vertex in the shortest path from i to j. We keep
the value of dist[i][j] as it is.
2) k is an intermediate vertex in the shortest path from i to j. We update
the value of dist[i][j] to dist[i][k] + dist[k][j] if dist[i][j] > dist[i][k] + dist[k][j].
For example, in the input above, when k = 1 is considered, dist[0][2]
improves from INF to dist[0][1] + dist[1][2] = 5 + 3 = 8. This is the optimal
substructure property that the all-pairs shortest path problem exploits.

Following is an implementation of the Floyd Warshall algorithm.

// A Java program for Floyd Warshall All Pairs Shortest
// Path algorithm.
import java.util.*;
import java.lang.*;
import java.io.*;

class AllPairShortestPath
{
    final static int INF = 99999, V = 4;

    void floydWarshall(int graph[][])
    {
        int dist[][] = new int[V][V];
        int i, j, k;

        /* Initialize the solution matrix same as the input
           graph matrix. Or we can say the initial values of
           shortest distances are based on shortest paths
           considering no intermediate vertex. */
        for (i = 0; i < V; i++)
            for (j = 0; j < V; j++)
                dist[i][j] = graph[i][j];

        /* Add all vertices one by one to the set of
           intermediate vertices.
           ---> Before the start of an iteration, we have the
           shortest distances between all pairs of vertices
           such that the shortest distances consider only the
           vertices in set {0, 1, 2, .. k-1} as intermediate
           vertices.
           ---> After the end of an iteration, vertex no. k is
           added to the set of intermediate vertices and the
           set becomes {0, 1, 2, .. k}. */
        for (k = 0; k < V; k++)
        {
            // Pick all vertices as source one by one
            for (i = 0; i < V; i++)
            {
                // Pick all vertices as destination for the
                // above picked source
                for (j = 0; j < V; j++)
                {
                    // If vertex k is on the shortest path from
                    // i to j, then update the value of dist[i][j]
                    if (dist[i][k] + dist[k][j] < dist[i][j])
                        dist[i][j] = dist[i][k] + dist[k][j];
                }
            }
        }

        // Print the shortest distance matrix
        printSolution(dist);
    }

    void printSolution(int dist[][])
    {
        System.out.println("The following matrix shows the shortest "
                           + "distances between every pair of vertices");
        for (int i = 0; i < V; ++i)
        {
            for (int j = 0; j < V; ++j)
            {
                if (dist[i][j] == INF)
                    System.out.print("INF ");
                else
                    System.out.print(dist[i][j] + " ");
            }
            System.out.println();
        }
    }

    // Driver program to test the above function
    public static void main(String[] args)
    {
        /* Let us create the following weighted graph
                 10
           (0)------->(3)
            |         /|\
          5 |          |
            |          | 1
           \|/         |
           (1)------->(2)
                 3          */
        int graph[][] = { {0,   5,   INF, 10 },
                          {INF, 0,   3,   INF},
                          {INF, INF, 0,   1  },
                          {INF, INF, INF, 0  }
                        };
        AllPairShortestPath a = new AllPairShortestPath();

        // Print the solution
        a.floydWarshall(graph);
    }
}

Output: 
Following matrix shows the shortest distances between every pair of
vertices
0 5 8 9
INF 0 3 4
INF INF 0 1
INF INF INF 0
Time Complexity: O(V^3)

Travelling Salesman Problem:

Travelling Salesman Problem (TSP): Given a set of cities and the distance
between every pair of cities, the problem is to find the shortest possible
route that visits every city exactly once and returns to the starting point.
Note the difference between the Hamiltonian Cycle problem and TSP. The
Hamiltonian cycle problem is to find whether there exists a tour that visits
every city exactly once. Here we know that a Hamiltonian tour exists
(because the graph is complete), and in fact many such tours exist; the
problem is to find a minimum weight Hamiltonian cycle.

For example, consider the complete graph on four cities used in the
implementation below. A TSP tour in that graph is 1-2-4-3-1, and the cost of
the tour is 10 + 25 + 30 + 15 = 80.
Following are different solutions for the traveling salesman problem.
Naive Solution:
1) Consider city 1 as the starting and ending point.
2) Generate all (n-1)! permutations of cities.
3) Calculate cost of every permutation and keep track of minimum cost
permutation.
4) Return the permutation with minimum cost.
Time Complexity: Θ(n!)
Dynamic Programming:
Let the given set of vertices be {1, 2, 3, 4, …, n}. Let us consider 1 as the
starting and ending point of the output. For every other vertex i (other
than 1), we find the minimum cost path with 1 as the starting point, i as
the ending point and all vertices appearing exactly once. Let the cost of
this path be cost(i); the cost of the corresponding cycle would be
cost(i) + dist(i, 1), where dist(i, 1) is the distance from i to 1. Finally,
we return the minimum of all [cost(i) + dist(i, 1)] values. This looks
simple so far. Now the question is how to get cost(i)?
To calculate cost(i) using Dynamic Programming, we need to have some
recursive relation in terms of sub-problems. Let us define a term C(S, i) to
be the cost of the minimum cost path visiting each vertex in set S exactly
once, starting at 1 and ending at i.
We start with all subsets of size 2 and calculate C(S, i) for them, then we
calculate C(S, i) for all subsets S of size 3, and so on. Note that 1 must be
present in every subset.

If size of S is 2, then S must be {1, i}, and
    C(S, i) = dist(1, i)
Else if size of S is greater than 2,
    C(S, i) = min { C(S - {i}, j) + dist(j, i) }  where j belongs to S, j != i and j != 1.

For a set of size n, we consider n-2 subsets, each of size n-1, such that
none of them contains the nth vertex.
Using the above recurrence relation, we can write a dynamic programming based
solution. There are at most O(n * 2^n) sub-problems, and each one takes
linear time to solve. The total running time is therefore O(n^2 * 2^n). The
time complexity is much less than O(n!), but still exponential. The space
required is also exponential, so this approach is infeasible even for a
slightly larger number of vertices.
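
The recurrence above is not implemented in this practical (only the naive approach below is), but for reference here is a compact Held-Karp style sketch of it; the class and method names are chosen only for illustration. It encodes the visited set S as a bitmask and assumes the same 4-city cost matrix and starting city 0 used by the naive program that follows.

// Illustrative Held-Karp (bitmask dynamic programming) sketch for TSP.
import java.util.Arrays;

class TSPHeldKarp
{
    static final int INF = Integer.MAX_VALUE / 2; // large value that is still safe to add to

    // dp[mask][i] = cost of the cheapest path that starts at city 0, visits exactly
    // the cities in 'mask' (mask always contains 0 and i) and ends at city i.
    static int tsp(int[][] dist)
    {
        int n = dist.length;
        int[][] dp = new int[1 << n][n];
        for (int[] row : dp)
            Arrays.fill(row, INF);
        dp[1][0] = 0;                              // only city 0 visited, standing at city 0

        for (int mask = 1; mask < (1 << n); mask++)
        {
            if ((mask & 1) == 0)
                continue;                          // every subset must contain city 0
            for (int i = 0; i < n; i++)
            {
                if ((mask & (1 << i)) == 0 || dp[mask][i] == INF)
                    continue;
                for (int j = 0; j < n; j++)        // try extending the path to city j
                {
                    if ((mask & (1 << j)) != 0)
                        continue;
                    int next = mask | (1 << j);
                    dp[next][j] = Math.min(dp[next][j], dp[mask][i] + dist[i][j]);
                }
            }
        }

        // Close the tour by returning to city 0 from every possible last city.
        int full = (1 << n) - 1, best = INF;
        for (int i = 1; i < n; i++)
            best = Math.min(best, dp[full][i] + dist[i][0]);
        return best;
    }

    public static void main(String[] args)
    {
        int[][] graph = { { 0, 10, 15, 20 },
                          { 10, 0, 35, 25 },
                          { 15, 35, 0, 30 },
                          { 20, 25, 30, 0 } };
        System.out.println(tsp(graph));            // prints 80 for this matrix
    }
}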

The implementation of a simple solution:


Consider city 1 as the starting and ending point. Since the route is cyclic, we
can consider any point as a starting point.
1. Generate all (n-1)! permutations of cities.
2. Calculate the cost of every permutation and keep track of the minimum
cost permutation.
3. Return the permutation with minimum cost.
Below is the implementation of the above idea

// Java program to implement the traveling salesman
// problem using the naive (brute force) approach.
import java.util.*;

class GFG
{
    static int V = 4;

    // implementation of the traveling Salesman Problem
    static int travllingSalesmanProblem(int graph[][], int s)
    {
        // store all vertices apart from the source vertex
        ArrayList<Integer> vertex = new ArrayList<Integer>();
        for (int i = 0; i < V; i++)
            if (i != s)
                vertex.add(i);

        // store minimum weight Hamiltonian Cycle
        int min_path = Integer.MAX_VALUE;
        do
        {
            // store current path weight (cost)
            int current_pathweight = 0;

            // compute current path weight
            int k = s;
            for (int i = 0; i < vertex.size(); i++)
            {
                current_pathweight += graph[k][vertex.get(i)];
                k = vertex.get(i);
            }
            current_pathweight += graph[k][s];

            // update minimum
            min_path = Math.min(min_path, current_pathweight);

        } while (findNextPermutation(vertex));

        return min_path;
    }

    // Function to swap the data present in the left and right indices
    public static ArrayList<Integer> swap(ArrayList<Integer> data,
                                          int left, int right)
    {
        // Swap the data
        int temp = data.get(left);
        data.set(left, data.get(right));
        data.set(right, temp);

        // Return the updated array
        return data;
    }

    // Function to reverse the sub-array starting from left
    // to the right, both inclusive
    public static ArrayList<Integer> reverse(ArrayList<Integer> data,
                                             int left, int right)
    {
        // Reverse the sub-array
        while (left < right)
        {
            int temp = data.get(left);
            data.set(left++, data.get(right));
            data.set(right--, temp);
        }

        // Return the updated array
        return data;
    }

    // Function to find the next permutation of the given integer array
    public static boolean findNextPermutation(ArrayList<Integer> data)
    {
        // If the given dataset is empty or contains only one
        // element, next_permutation is not possible
        if (data.size() <= 1)
            return false;

        int last = data.size() - 2;

        // find the longest non-increasing suffix and find the pivot
        while (last >= 0)
        {
            if (data.get(last) < data.get(last + 1))
                break;
            last--;
        }

        // If there is no increasing pair, there is
        // no higher order permutation
        if (last < 0)
            return false;

        int nextGreater = data.size() - 1;

        // Find the rightmost successor to the pivot
        for (int i = data.size() - 1; i > last; i--)
        {
            if (data.get(i) > data.get(last))
            {
                nextGreater = i;
                break;
            }
        }

        // Swap the successor and the pivot
        data = swap(data, nextGreater, last);

        // Reverse the suffix
        data = reverse(data, last + 1, data.size() - 1);

        // Return true as the next_permutation is done
        return true;
    }

    // Driver Code
    public static void main(String args[])
    {
        // matrix representation of graph
        int graph[][] = {{0, 10, 15, 20},
                         {10, 0, 35, 25},
                         {15, 35, 0, 30},
                         {20, 25, 30, 0}};
        int s = 0;
        System.out.println(travllingSalesmanProblem(graph, s));
    }
}

Output: 80
8-QUEENS PROBLEM:

The eight queens problem is the problem of placing eight queens on an 8×8
chessboard such that none of them attack one another (no two are in the
same row, column, or diagonal). More generally, the n queens problem
places n queens on an n×n chessboard.
There are different solutions for the problem.

The expected output is a binary matrix which has 1s for the squares where
queens are placed. For example, the following is an output matrix for the
4-queens problem (the implementation below uses N = 4).
{ 0, 1, 0, 0}
{ 0, 0, 0, 1}
{ 1, 0, 0, 0}
{ 0, 0, 1, 0}
Naive Algorithm
Generate all possible configurations of queens on board and print a
configuration that satisfies the given constraints.
while there are untried configurations
{
generate the next configuration
if queens don't attack in this configuration then
{
print this configuration;
}
}
Backtracking Algorithm
The idea is to place queens one by one in different columns, starting from
the leftmost column. When we place a queen in a column, we check for
clashes with already placed queens. In the current column, if we find a row
for which there is no clash, we mark this row and column as part of the
solution. If we do not find such a row due to clashes then we backtrack and
return false.
1) Start in the leftmost column
2) If all queens are placed
return true
3) Try all rows in the current column.
Do following for every tried row.
a) If the queen can be placed safely in this row
then mark this [row, column] as part of the
solution and recursively check if placing
queen here leads to a solution.
b) If placing the queen in [row, column] leads to
a solution then return true.
c) If placing queen doesn't lead to a solution then
unmark this [row, column] (Backtrack) and go to
step (a) to try other rows.
4) If all rows have been tried and nothing worked,
return false to trigger backtracking.

Implementation of Backtracking solution

/* C/C++ program to solve N Queen Problem using
   backtracking */
#define N 4
#include <stdbool.h>
#include <stdio.h>

/* A utility function to print the solution */
void printSolution(int board[N][N])
{
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf(" %d ", board[i][j]);
        printf("\n");
    }
}

/* A utility function to check if a queen can
   be placed on board[row][col]. Note that this
   function is called when "col" queens are
   already placed in columns from 0 to col - 1.
   So we need to check only the left side for
   attacking queens */
bool isSafe(int board[N][N], int row, int col)
{
    int i, j;

    /* Check this row on the left side */
    for (i = 0; i < col; i++)
        if (board[row][i])
            return false;

    /* Check the upper diagonal on the left side */
    for (i = row, j = col; i >= 0 && j >= 0; i--, j--)
        if (board[i][j])
            return false;

    /* Check the lower diagonal on the left side */
    for (i = row, j = col; j >= 0 && i < N; i++, j--)
        if (board[i][j])
            return false;

    return true;
}

/* A recursive utility function to solve the N Queen problem */
bool solveNQUtil(int board[N][N], int col)
{
    /* base case: If all queens are placed
       then return true */
    if (col >= N)
        return true;

    /* Consider this column and try placing
       this queen in all rows one by one */
    for (int i = 0; i < N; i++) {
        /* Check if the queen can be placed on board[i][col] */
        if (isSafe(board, i, col)) {
            /* Place this queen in board[i][col] */
            board[i][col] = 1;

            /* recur to place the rest of the queens */
            if (solveNQUtil(board, col + 1))
                return true;

            /* If placing the queen in board[i][col]
               doesn't lead to a solution, then
               remove the queen from board[i][col] */
            board[i][col] = 0; // BACKTRACK
        }
    }

    /* If the queen cannot be placed in any row in
       this column col, then return false */
    return false;
}

/* This function solves the N Queen problem using
   Backtracking. It mainly uses solveNQUtil() to
   solve the problem. It returns false if queens
   cannot be placed, otherwise, returns true and
   prints the placement of queens in the form of 1s.
   Please note that there may be more than one
   solution; this function prints one of the
   feasible solutions. */
bool solveNQ()
{
    int board[N][N] = { { 0, 0, 0, 0 },
                        { 0, 0, 0, 0 },
                        { 0, 0, 0, 0 },
                        { 0, 0, 0, 0 } };

    if (solveNQUtil(board, 0) == false) {
        printf("Solution does not exist");
        return false;
    }

    printSolution(board);
    return true;
}

// driver program to test the above function
int main()
{
    solveNQ();
    return 0;
}

Output: The 1 values indicate placements of queens


0 0 1 0
1 0 0 0
0 0 0 1
0 1 0 0
