
Design & Analysis of Algorithm

Unit 3
Notes

Divide and Conquer is an algorithmic pattern. The design idea is to take a problem on a large input, break the input into smaller pieces, solve the problem on each of the small pieces, and then merge the piecewise solutions into a global solution. This mechanism of solving the problem is called the Divide & Conquer strategy.

A Divide and Conquer algorithm solves a problem using the following three steps.

1. Divide: Break the original problem into a set of subproblems.
2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get the solution to the whole problem.

Examples: The following computer algorithms are based on the Divide & Conquer approach:

1. Maximum and Minimum Problem
2. Binary Search
3. Sorting (merge sort, quick sort)
4. Tower of Hanoi

Applications of Divide and Conquer Approach:

The following algorithms are based on the concept of the Divide and Conquer technique:

1. Binary Search: The binary search algorithm is a searching algorithm, also called half-interval search or logarithmic search. It works by comparing the target value with the middle element of a sorted array. If the values differ, the half that cannot contain the target is eliminated, and the search continues on the other half: we again take the middle element and compare it with the target value. The process keeps repeating until the target value is found. If the remaining half becomes empty, we conclude that the target is not present in the array. (A Python sketch appears after this list.)
2. Quicksort: A highly efficient sorting algorithm, also known as partition-exchange sort. It starts by selecting a pivot value from the array, then divides the remaining elements into two sub-arrays by comparing each element with the pivot to decide whether it is greater or smaller; the sub-arrays are then sorted recursively.
3. Merge Sort: A comparison-based sorting algorithm. It divides the array into sub-arrays, recursively sorts each of them, and then merges the sorted sub-arrays back together.
4. Closest Pair of Points: It is a problem of computational geometry. This algorithm emphasizes
finding out the closest pair of points in a metric space, given n points, such that the distance
between the pair of points should be minimal.
5. Strassen's Algorithm: An algorithm for matrix multiplication, named after Volker Strassen. It has proven to be much faster than the traditional algorithm when it works on large matrices.
6. Cooley-Tukey Fast Fourier Transform (FFT) algorithm: The Fast Fourier Transform algorithm is named after J. W. Cooley and John Tukey. It follows the Divide and Conquer approach and runs in O(n log n) time.
7. Karatsuba algorithm for fast multiplication: One of the fastest classical multiplication algorithms, discovered by Anatoly Karatsuba in 1960 and published in 1962. It multiplies two n-digit numbers by recursively reducing the problem to multiplications of smaller numbers, ultimately down to single-digit products.
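As a minimal Python sketch of the half-interval idea from item 1 (the function name and sample data are illustrative, not from the notes):

    def binary_search(arr, target):
        """Iterative binary search on a sorted list; returns the index of
        target, or -1 if it is absent."""
        lo, hi = 0, len(arr) - 1
        while lo <= hi:
            mid = (lo + hi) // 2          # middle element of the current half
            if arr[mid] == target:
                return mid
            elif arr[mid] < target:
                lo = mid + 1              # discard the left half
            else:
                hi = mid - 1              # discard the right half
        return -1                         # search space empty: not present

    # Example: binary_search([2, 5, 8, 12, 16], 12) returns 3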

Advantages of Divide and Conquer

o Divide and Conquer successfully solves some classically hard problems, such as the Tower of Hanoi, a mathematical puzzle. Complicated problems for which you have no basic idea become tractable, because the approach divides the main problem into subproblems and then solves them recursively; the resulting algorithms are often much faster than naive ones.
o It uses cache memory efficiently, because small subproblems can be solved entirely inside the cache instead of repeatedly accessing the slower main memory.
o It is more efficient than its counterpart, the brute-force technique.
o Since these algorithms exhibit parallelism, they can be handled without modification by systems that support parallel processing.

Disadvantages of Divide and Conquer

o Since most of its algorithms are designed using recursion, it requires careful memory management.
o An explicit stack may use excessive space.
o It may even crash the system if the recursion depth exceeds the capacity of the call stack.

Examples of Divide & Conquer:


1. Sorting Algorithms: Merge Sort
Divide and Conquer paradigm in Merge Sort is as follows:
• Divide: Divide the n-element array which is to be sorted into two subarrays of n/2
elements each.
• Conquer: Sort the two sub-arrays recursively using merge sort.
• Combine: Merge the two sorted sub-arrays to produce the sorted array.
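A minimal Python sketch of these three steps (illustrative, not part of the original notes):

    def merge_sort(arr):
        """Sort arr by divide and conquer; returns a new sorted list."""
        if len(arr) <= 1:                 # base case: already sorted
            return arr
        mid = len(arr) // 2
        left = merge_sort(arr[:mid])      # Divide + Conquer the left half
        right = merge_sort(arr[mid:])     # Divide + Conquer the right half
        return merge(left, right)         # Combine

    def merge(left, right):
        """Merge two sorted lists into one sorted list."""
        result, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i]); i += 1
            else:
                result.append(right[j]); j += 1
        result.extend(left[i:])           # append any leftovers
        result.extend(right[j:])
        return result

    # Example: merge_sort([5, 2, 4, 7, 1, 3]) returns [1, 2, 3, 4, 5, 7]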
Quick Sort:
Divide-and-Conquer paradigm in Quick Sort:
Divide: Partition (rearrange) the array A[p..r] into two (possibly empty) subarrays A[p..q-1] and A[q+1..r] such that each element of A[p..q-1] is less than or equal to A[q], which is, in turn, less than or equal to each element of A[q+1..r]. Compute the index q as part of this partitioning procedure.
Conquer: Sort the two subarrays A[p..q-1] and A[q+1..r] by recursive calls to quicksort.
Combine: Since the subarrays are sorted in place, no work is needed to combine them: the entire array A[p..r] is now sorted.
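A minimal Python sketch of this scheme, using the Lomuto partition with A[r] as the pivot (one common way to compute the index q; the notes do not fix a particular partition method):

    def partition(A, p, r):
        """Lomuto partition: A[r] is the pivot; returns its final index q."""
        pivot = A[r]
        i = p - 1
        for j in range(p, r):
            if A[j] <= pivot:             # element belongs left of the pivot
                i += 1
                A[i], A[j] = A[j], A[i]
        A[i + 1], A[r] = A[r], A[i + 1]   # place the pivot between the halves
        return i + 1

    def quicksort(A, p, r):
        """Sort A[p..r] in place by recursive partitioning."""
        if p < r:
            q = partition(A, p, r)        # Divide
            quicksort(A, p, q - 1)        # Conquer the left subarray
            quicksort(A, q + 1, r)        # Conquer the right subarray

    # Example: A = [2, 8, 7, 1, 3, 5, 6, 4]; quicksort(A, 0, len(A) - 1)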
Matrix Multiplication
Introduction to Greedy Algorithm:
• Greedy Method: In this method, we have to find the best option out of the many available choices.
• This method may or may not give the best output/optimal solution.
• A greedy algorithm solves problems by making the choice that seems best at the particular moment.
• Many optimization problems can be solved using a greedy algorithm.
• Some problems have no efficient exact solution, but a greedy algorithm may provide a solution that is close to optimal.
A greedy algorithm works if a problem exhibits the following two properties:
• Greedy Choice Property: A globally optimal solution can be reached by making locally optimal choices. In other words, an optimal solution can be obtained by making "greedy" choices.
• Optimal substructure: Optimal solutions contain optimal sub-solutions. In other words, the answers to sub-problems of an optimal solution are themselves optimal.

The greedy method is one of the strategies, like divide and conquer, used to solve problems. This method is used for solving optimization problems. An optimization problem is a problem that demands either a maximum or a minimum result. Let's understand this through some terms.

The greedy method is the simplest and most straightforward approach. It is not an algorithm but a technique. Its main feature is that decisions are made on the basis of the currently available information: whatever information is present, the decision is made without worrying about the effect of the current decision on the future.

This technique is used to determine a feasible solution that may or may not be optimal. A feasible solution is one that satisfies the given criteria; if more than one solution satisfies the criteria, all of them are feasible. The optimal solution is the best and most favorable solution among all the feasible solutions.

Characteristics of Greedy method

The following are the characteristics of a greedy method:

o To construct the solution in an optimal way, this algorithm creates two sets where one set
contains all the chosen items, and another set contains the rejected items.
o A Greedy algorithm makes good local choices in the hope that the solution should be either
feasible or optimal.

Components of Greedy Algorithm

The components that can be used in the greedy algorithm are:

o Candidate set: The set of elements from which a solution is created.
o Selection function: This function is used to choose the candidate or subset that can be added to the solution.
o Feasibility function: A function used to determine whether the candidate or subset can be used to contribute to the solution or not.
o Objective function: A function used to assign a value to the solution or a partial solution.
o Solution function: This function is used to indicate whether a complete solution has been reached or not.

Applications of Greedy Algorithm


o It is used in finding the shortest path.
o It is used to find the minimum spanning tree using Prim's algorithm or Kruskal's algorithm.
o It is used in job sequencing with deadlines.
o This algorithm is also used to solve the fractional knapsack problem.

Pseudo code of Greedy Algorithm


1. Algorithm Greedy(a, n)
2. {
3.     solution := 0;
4.     for i = 1 to n do
5.     {
6.         x := select(a);
7.         if feasible(solution, x) then
8.             solution := union(solution, x);
9.     }
10.    return solution;
11. }

The above is the generic greedy algorithm. Initially, the solution is assigned the value zero. We pass the array and the number of elements to the greedy algorithm. Inside the for loop, we select the elements one by one and check whether the solution is feasible or not. If the solution is feasible, we perform the union.

Let's understand through an example.

Suppose there is a problem 'P'. I want to travel from A to B shown as below:

P:A→B

The problem is that we have to travel this journey from A to B. There are various solutions to go from
A to B. We can go from A to B by walk, car, bike, train, aeroplane, etc. There is a constraint in the
journey that we have to travel this journey within 12 hrs. If I go by train or aeroplane then only, I can
cover this distance within 12 hrs. There are many solutions to this problem but there are only two
solutions that satisfy the constraint.

Suppose we also have to cover the journey at the minimum cost, i.e., travel this distance as cheaply as possible; the problem is then a minimization problem. So far we have two feasible solutions, i.e., one by train and one by air. Since travelling by train leads to the minimum cost, it is the optimal solution. An optimal solution is also a feasible solution, but it is the feasible solution providing the best result, here the minimum cost. There is only one optimal solution.

A problem that requires either a minimum or a maximum result is known as an optimization problem. The greedy method is one of the strategies used for solving optimization problems.
Disadvantages of using Greedy algorithm

A greedy algorithm makes decisions based on the information available at each phase without considering the broader problem, so there is a possibility that the greedy solution does not give the best solution for every problem.

It follows the locally optimal choice at each stage with the intent of finding the global optimum. Let's understand this through an example.

Consider the graph which is given below:

We have to travel from the source to the destination at the minimum cost. We have three feasible solutions, with path costs 10, 20, and 5. The path of cost 5 is the minimum-cost path, so it is the optimal solution. This is the local optimum; in this way, we find the local optimum at each stage in order to build the global optimal solution.

Applications of Greedy Algorithm:


1. Activity Selection Problem
2. Huffman Coding Problem
3. Fractional Knapsack Problem
4. Task/Job Scheduling Problem
5. Travelling Salesman Problem
6. Minimum Spanning Tree Problem
An activity-selection problem:
• It is the problem of scheduling several competing activities that require exclusive use of a common resource, with the goal of selecting a maximum-size set of mutually compatible activities.
• Let us take a set S = {a1, a2, a3, ..., an} of n proposed activities that wish to use a resource (e.g., a lecture hall) which can serve only one activity at a time.
• Assume that the activities are sorted in monotonically increasing order of finish time: f1 ≤ f2 ≤ f3 ≤ ... ≤ fn-1 ≤ fn.
• Each activity ai has a start time si and a finish time fi, where 0 ≤ si < fi < ∞.
• Activities ai and aj are compatible if the intervals [si, fi) and [sj, fj) do not overlap (i.e., sj ≥ fi or si ≥ fj).

• In the activity-selection problem, the main goal is to select a maximum-size subset of mutually compatible activities.
Example 1: Given 10 activities with their start and finish time compute a schedule where the
largest number of activities take place.

Activity  a1  a2  a3  a4  a5  a6  a7  a8  a9  a10

si         1   2   3   4   7   8   9   9   11  12

fi         3   5   4   7   10  9   11  13  12  14
Now, schedule A1

Next schedule A3 as A1 and A3 are non-interfering.

Next skip A2 as it is interfering.

Next, schedule A4 as A1 A3 and A4 are non-interfering, then next, schedule A6 as A1 A3 A4 and A6 are
non-interfering.

Skip A5 as it is interfering.

Next, schedule A7 as A1 A3 A4 A6 and A7 are non-interfering.

Next, schedule A9 as A1 A3 A4 A6 A7 and A9 are non-interfering.

Skip A8 as it is interfering.

Next, schedule A10 as A1 A3 A4 A6 A7 A9 and A10 are non-interfering.

Thus the final activity schedule is: {A1, A3, A4, A6, A7, A9, A10}.

Example 2: Find the optimal set in the given activity selection problem.

A recursive greedy algorithm


Recursive-Activity-Selector(s, f, k, n)
1. m = k + 1
2. while (m ≤ n) and (s[m] < f[k])
3.     m = m + 1
4. if (m ≤ n)
5.     return {a_m} ∪ Recursive-Activity-Selector(s, f, m, n)
6. else return Ø

If the activities are not already sorted by finish time, the total complexity is O(n lg n) + Θ(n). Assuming the activities are already sorted, the selector runs in Θ(n).
A non recursive greedy algorithm
The procedure GREEDY-ACTIVITY-SELECTOR is an iterative version of the procedure
RECURSIVE-ACTIVITY-SELECTOR.
Greedy-Activity-Selector(s, f)
1. n = s.length
2. A = {a1}
3. k = 1
4. for m = 2 to n
5.     if (s[m] ≥ f[k])
6.         A = A ∪ {a_m}
7.         k = m
8. return A
The procedure GREEDY-ACTIVITY-SELECTOR is an iterative version of the procedure RECURSIVE-ACTIVITY-SELECTOR. Like the recursive version, it schedules a set of n activities in Θ(n) time.
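A minimal Python version of GREEDY-ACTIVITY-SELECTOR (0-indexed; like the pseudocode, it assumes the start/finish lists are already sorted by finish time, so unsorted data such as the table in Example 1 must be sorted first):

    def greedy_activity_selector(s, f):
        """Return indices of a maximum-size set of mutually compatible
        activities; s and f must be sorted by finish time."""
        A = [0]                     # the first activity is always selected
        k = 0                       # index of the last activity added
        for m in range(1, len(s)):
            if s[m] >= f[k]:        # a_m starts after a_k finishes: compatible
                A.append(m)
                k = m
        return A

    # Usage: sort (s, f) pairs by finish time, then call the selector.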

Task Scheduling problem


• An interesting problem that can be solved using matroids is the problem of optimally scheduling unit-time tasks on a single processor, where each task has a deadline, along with a penalty paid if the task misses its deadline.
• This problem looks complicated, but it can be solved in a surprisingly simple manner by casting it as a matroid and using a greedy algorithm.
• A unit-time task is a job, such as a program to be run on a computer, that requires exactly one unit of time to complete.
• Given a finite set S of unit-time tasks, a schedule for S is a permutation of S specifying the order in which to perform these tasks.
The first task in the schedule begins at time 0 and finishes at time 1, the second task begins at time 1 and finishes at time 2, and so on.
• The problem of scheduling unit-time tasks with deadlines and penalties for a single processor has the following inputs:
• A set S = {a1, a2, a3, ..., an} of n unit-time tasks;
• A set of n integer deadlines d1, d2, d3, ..., dn such that each di satisfies 1 ≤ di ≤ n and task ai is supposed to finish by time di;
• A set of n nonnegative weights or penalties w1, w2, w3, ..., wn, such that a penalty of wi is incurred if task ai is not finished by time di, and no penalty is incurred if the task finishes by its deadline.
Example 1: Find an optimal schedule from the following table, where the tasks with penalties(weight)
and deadlines are given.

Solution: As per the greedy algorithm, first sort the tasks in descending order of their penalties, so that the minimum total penalty is incurred.
In this example the tasks are already sorted.

Example 2: Find an optimal schedule from the following table, where the tasks with penalties(weight)
and deadlines are given.
Solution: As per the greedy algorithm, first sort the tasks in descending order of their penalties, so that the minimum total penalty is incurred. (A generic sketch of this greedy procedure is given below.)
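Since the tables for these two examples are not reproduced here, the following Python sketch uses hypothetical data; the (task, deadline, penalty) format and the sample jobs are assumptions, not from the notes. It places each task, highest penalty first, into the latest free slot at or before its deadline:

    def job_sequencing(jobs):
        """jobs: list of (task, deadline, penalty) tuples (hypothetical).
        Returns (schedule, late tasks, total penalty incurred)."""
        jobs = sorted(jobs, key=lambda j: j[2], reverse=True)  # largest penalty first
        n = len(jobs)
        slot = [None] * n                  # slot[t] holds the task run in [t, t+1)
        late = []
        for task, deadline, penalty in jobs:
            t = min(deadline, n) - 1
            while t >= 0 and slot[t] is not None:
                t -= 1                     # latest free slot before the deadline
            if t >= 0:
                slot[t] = task
            else:
                late.append((task, penalty))   # task misses its deadline
        return slot, late, sum(p for _, p in late)

    # Hypothetical example:
    # job_sequencing([(1, 2, 40), (2, 1, 30), (3, 2, 20), (4, 1, 10)])
    # schedules tasks 2 then 1, leaves tasks 3 and 4 late, total penalty 30.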

Huffman Coding:
• Huffman Coding is a famous Greedy Algorithm.
• It is used for the lossless compression of data.
• It uses variable length encoding.
• It assigns variable length code to all the characters.
• The code length of a character depends on how frequently it occurs in the given text.
• The character which occurs most frequently gets the shortest code.
• The character which occurs least frequently gets the longest code.
• It is also known as Huffman Encoding
• Huffman Coding implements a rule known as a prefix rule.
• This is to prevent the ambiguities while decoding.
• It ensures that the code assigned to any character is not a prefix of the code assigned
to any other character.
Why Huffman Coding:

o Data can be encoded efficiently using Huffman codes.
o It is a widely used and beneficial technique for compressing data.
o Huffman's greedy algorithm uses a table of the frequencies of occurrence of each character to build up an optimal way of representing each character as a binary string.

Suppose we have 10^5 characters in a data file. Normal storage: 8 bits per character (ASCII) gives 8 × 10^5 bits in the file. But we want to compress the file and save it compactly. Suppose only six characters a, b, c, d, e, f appear in the file, with frequencies (in thousands) 45, 13, 12, 16, 9 and 5 respectively:

How can we represent the data in a Compact way?

(i) Fixed-length code: Each letter is represented by an equal number of bits. For six characters, a fixed-length code needs at least 3 bits per character (since 2^2 = 4 < 6 ≤ 8 = 2^3):

For example:

a 000

b 001

c 010

d 011

e 100

f 101

For a file with 10^5 characters, we need 3 × 10^5 bits.

(ii) A variable-length code: It can do considerably better than a fixed-length code by giving frequent characters short codewords and infrequent characters long codewords.

For example:

a 0
b 101

c 100

d 111

e 1101

f 1100
Number of bits = (45 × 1 + 13 × 3 + 12 × 3 + 16 × 3 + 9 × 4 + 5 × 4) × 1000
= 2.24 × 10^5 bits

Thus, 224,000 bits represent the file, a saving of approximately 25%. This is an optimal character code for this file.

Prefix Codes:

In a prefix code, the encoding of one character must never be a prefix of the complete encoding of another character; e.g., 1100 and 11001 cannot both be codewords, because 1100 is a prefix of 11001.

Prefix codes are desirable because they simplify encoding and decoding. Encoding is simple for any binary character code: we concatenate the codewords representing each character of the file. Decoding is also straightforward with a prefix code: since no codeword is a prefix of any other, the codeword that begins an encoded file is unambiguous.

How Can a Huffman Tree Be Constructed from Input Characters?

The frequency of each character in the provided string must first be determined.

Character Frequency

a 4

b 7

c 3

d 2

e 4

1. Sort the characters by frequency, ascending, and keep them in a min-heap priority queue Q.
2. For each distinct character and its frequency in the data stream, create a leaf node.
3. Remove the two nodes with the lowest frequencies from the heap, and create a new internal node whose frequency is the sum of these frequencies.
o Make the first extracted node its left child and the second extracted node its right child, so that the left side always holds the smaller frequency.
o Add this new node back to the min-heap.
4. Repeat step 3 until there is only one node left on the heap, i.e., until all characters are represented by nodes in the tree. The tree is finished when just the root node remains.

Step 1: Build a min-heap containing 5 nodes (one for each unique character in the provided stream of data), where each node represents the root of a tree with a single node.

Step 2: Extract the two minimum-frequency nodes from the min-heap, and add a new internal node with frequency 2 + 3 = 5, created by joining the two extracted nodes.

o Now there are 4 nodes in the min-heap: 3 of them are roots of trees with a single element each, and 1 is the root of a tree with two elements.

Step 3: Similarly, extract the two minimum-frequency nodes from the heap and add a new internal node formed by joining them; its frequency is 4 + 4 = 8.

o Now the min-heap has three nodes: one is the root of a tree with a single element, and two are roots of trees with multiple nodes.

Step 4: Extract the two minimum-frequency nodes and add a new internal node formed by joining them; its frequency is 5 + 7 = 12.

o When creating a Huffman tree, we must ensure that the minimum value is always on the left side and the second value is always on the right side.

Step 5: Extract the next two minimum-frequency nodes and add a new internal node formed by joining them; its frequency is 12 + 8 = 20.

Continue until all of the distinct characters have been added to the tree. This completes the Huffman tree for the specified set of characters.

Now, for each non-leaf node, assign 0 to the left edge and 1 to the right edge to create the code for
each letter.

Rules to follow for determining edge weights:

o We should give the right edges weight 1 if we give the left edges weight 0.
o If the left edges are given weight 1, the right edges must be given weight 0.
o Any of the two aforementioned conventions may be used.
o However, follow the same protocol when decoding the tree as well.
Following the weighting, the modified tree is displayed as follows:

Understanding the Code


o To decode the Huffman code for each character from the resulting Huffman tree, we must traverse the tree until we reach the leaf node where the element is present.
o The edge weights along the path must be recorded during traversal and assigned to the character located at that leaf node.
o The following example illustrates this:
o To obtain the code for each character, we must walk the entire tree (until all leaf nodes are covered).
o As a result, the constructed tree is used to read off the code for each node. Below is a list of the codes for each character:

Character Frequency/count Code

a 4 01

b 7 11

c 3 101

d 2 100

e 4 00
By traversing the Huffman tree, the codes are created and decoded: the bits gathered during the traversal are assigned to the character located at the leaf node. Every unique character in the supplied stream of data can be identified using its Huffman code in this manner. The time complexity is O(n log n), where n is the total number of characters: extractMin() is called 2(n - 1) times if there are n nodes, and since extractMin() calls minHeapify(), each call takes O(log n) time, giving O(n log n) in total. If the input array is already sorted, there is a linear-time algorithm.

Algorithm:

HUFFMAN(C)
1 n ← |C|
2 Q ← C
3 for i ← 1 to n - 1
4     do allocate a new node z
5        left[z] ← x ← EXTRACT-MIN(Q)
6        right[z] ← y ← EXTRACT-MIN(Q)
7        f[z] ← f[x] + f[y]
8        INSERT(Q, z)
9 return EXTRACT-MIN(Q)

The total running time of HUFFMAN on a set of n characters is O (n lg n).
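A minimal Python sketch of HUFFMAN using the standard library's heapq as the min-heap priority queue (the tuple-based tree representation and tie-breaking counter are implementation choices, not from the notes):

    import heapq

    def huffman_codes(freq):
        """freq: dict mapping character -> frequency; returns {char: code}."""
        # Heap entries are (frequency, counter, tree); the unique counter
        # breaks frequency ties so trees themselves are never compared.
        heap = [(f, i, (ch,)) for i, (ch, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)    # EXTRACT-MIN: node x
            f2, _, right = heapq.heappop(heap)   # EXTRACT-MIN: node y
            heapq.heappush(heap, (f1 + f2, counter, (left, right)))  # INSERT z
            counter += 1
        codes = {}
        def walk(node, code):
            if len(node) == 1:                   # leaf: 1-tuple with a character
                codes[node[0]] = code or "0"
            else:                                # internal node: (left, right)
                walk(node[0], code + "0")        # left edge labelled 0
                walk(node[1], code + "1")        # right edge labelled 1
        walk(heap[0][2], "")
        return codes

    # huffman_codes({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5})
    # gives 'a' a 1-bit code and 'e', 'f' 4-bit codes (exact bit patterns may
    # differ from the table above when frequency ties are broken differently).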

Example 2: A file contains the characters a, b, c, d, e and f with the frequencies 45, 13, 12, 16, 9 and 5 respectively. If Huffman coding is used for data compression, determine:

Huffman Code for each character

Average code length

Length of Huffman encoded message (in bits)


[Huffman tree for this example: the root (100) has children a (45) on the left edge 0 and an internal node (55) on the right edge 1; 55 splits into 25 (children c = 12 and b = 13) and 30 (children an internal node 14, holding f = 5 and e = 9, and d = 16).]
Observation:

• Characters occurring less frequently in the text are assigned longer codes.
• Characters occurring more frequently in the text are assigned shorter codes.

Average code length

• = ∑ (frequency_i × code length_i) / ∑ (frequency_i)
• = {(45 × 1) + (13 × 3) + (12 × 3) + (16 × 3) + (9 × 4) + (5 × 4)} / (45 + 13 + 12 + 16 + 9 + 5)
• = 224 / 100 = 2.24

Total number of bits in Huffman encoded message

• = Total number of characters in the message x Average code length per character
• = 100 x 2.24
• = 224 bits

Problems with Huffman Coding

Let's talk about the drawbacks of Huffman coding in this part and why it isn't always the best option:

o It is not considered ideal (i.e., it does not reach the entropy bound) unless all of the characters' probabilities or frequencies are negative powers of 2.
o Although one may get closer to the ideal by grouping symbols and expanding the alphabet, the
blocking method necessitates handling a larger alphabet. Therefore, Huffman coding may not
always be very effective.
o Although there are many effective ways to count the frequency of each symbol or character, reconstructing the entire tree for each change can be very time-consuming. This is typically the case when the alphabet is large and the probability distributions change quickly with each symbol.

Greedy Huffman Code Construction Algorithm


o Huffman developed a greedy technique that generates a Huffman code, an optimal prefix code, for each distinct character in the input data stream.
o The approach combines the lowest-frequency nodes each time, building the Huffman tree from the bottom up.
o Because each character receives a code length based on how frequently it appears in the given stream of data, this method is known as a greedy approach: the shorter the code retrieved, the more frequently the element occurs in the data.

The use of Huffman Coding


o Here, we'll talk about some practical uses for Huffman Coding:
o Conventional compression formats like PKZIP, GZIP, etc. typically employ Huffman coding.
o Huffman Coding is used for data transfer by fax and text because it minimizes file size and
increases transmission speed.
o Huffman encoding (particularly the prefix codes) is used by several multimedia storage formats,
including JPEG, PNG, and MP3, to compress the files.
o Huffman Coding is mostly used for image compression.
o When a string of often recurring characters has to be sent, it can be more helpful.

Conclusion

o In general, Huffman Coding is helpful for compressing data that contains frequently occurring
characters.
o We can see that the character that occurs most frequently has the shortest code, whereas the one that occurs least frequently has the longest code.
o The Huffman Code compression technique is used to create variable-length coding, which uses
a varied amount of bits for each letter or symbol. This method is superior to fixed-length coding
since it uses less memory and transmits data more quickly.
Fractional Knapsack Problem

The fractional knapsack problem is one of the variants of the knapsack problem. In the fractional knapsack, items may be broken into pieces in order to maximize the profit. The problem in which we are allowed to break items is known as the fractional knapsack problem.

This problem can be solved with the help of using two techniques:

o Brute-force approach: The brute-force approach tries all the possible solutions with all the
different fractions but it is a time-consuming approach.
o Greedy approach: In Greedy approach, we calculate the ratio of profit/weight, and accordingly,
we will select the item. The item with the highest ratio would be selected first.

There are basically three approaches to solve the problem:

o The first approach is to select the item based on the maximum profit.
o The second approach is to select the item based on the minimum weight.
o The third approach is to calculate the ratio of profit/weight.

Consider the below example:

Objects: 1 2 3 4 5 6 7

Profit (P): 5 10 15 7 8 9 4

Weight (w): 1 3 5 4 1 3 2

W (capacity of the knapsack): 15

n (number of items): 7

First approach (select by maximum profit):

Object Profit Weight Remaining weight

3 15 5 15 - 5 = 10

2 10 3 10 - 3 = 7

6 9 3 7 - 3 = 4

5 8 1 4 - 1 = 3

4 7 × 3/4 = 5.25 3 3 - 3 = 0

The total profit would be equal to (15 + 10 + 9 + 8 + 5.25) = 47.25

Second approach:

The second approach is to select the item based on the minimum weight.

Object Profit Weight Remaining weight

1 5 1 15 - 1 = 14

5 8 1 14 - 1 = 13

7 4 2 13 - 2 = 11

2 10 3 11 - 3 = 8

6 9 3 8 - 3 = 5

4 7 4 5 - 4 = 1

3 15 × 1/5 = 3 1 1 - 1 = 0

In this case, the total profit would be equal to (5 + 8 + 4 + 10 + 9 + 7 + 3) = 46

Third approach:

In the third approach, we will calculate the ratio of profit/weight.

Objects: 1 2 3 4 5 6 7

Profit (P): 5 10 15 7 8 9 4

Weight(w): 1 3 5 4 1 3 2

In this case, we first calculate the profit/weight ratio.

Object 1: 5/1 = 5

Object 2: 10/3 = 3.33

Object 3: 15/5 = 3

Object 4: 7/4 = 1.75

Object 5: 8/1 = 8

Object 6: 9/3 = 3

Object 7: 4/2 = 2

P:w: 5 3.33 3 1.75 8 3 2

In this approach, we will select the objects based on the maximum profit/weight ratio. Since the P/W of
object 5 is maximum so we select object 5.

Object Profit Weight Remaining weight

5 8 1 15 - 1 = 14

After object 5, object 1 has the maximum profit/weight ratio, i.e., 5. So, we select object 1 shown in the
below table:

Object Profit Weight Remaining weight

5 8 1 15 - 1 = 14

1 5 1 14 - 1 = 13
After object 1, object 2 has the maximum profit/weight ratio, i.e., 3.3. So, we select object 2 having
profit/weight ratio as 3.3.

Object Profit Weight Remaining weight

5 8 1 15 - 1 = 14

1 5 1 14 - 1 = 13

2 10 3 13 - 3 = 10

After object 2, object 3 has the maximum profit/weight ratio, i.e., 3. So, we select object 3 having
profit/weight ratio as 3.

Object Profit Weight Remaining weight

5 8 1 15 - 1 = 14

1 5 1 14 - 1 = 13

2 10 3 13 - 3 = 10

3 15 5 10 - 5 = 5
After object 3, object 6 has the maximum profit/weight ratio, i.e., 3. So we select object 6 having
profit/weight ratio as 3.

Object Profit Weight Remaining weight

5 8 1 15 - 1 = 14

1 5 1 14 - 1 = 13

2 10 3 13 - 3 = 10

3 15 5 10 - 5 = 5

6 9 3 5-3=2

After object 6, object 7 has the maximum profit/weight ratio, i.e., 2. So we select object 7 having
profit/weight ratio as 2.

Object Profit Weight Remaining weight

5 8 1 15 - 1 = 14

1 5 1 14 - 1 = 13

2 10 3 13 - 3 = 10

3 15 5 10 - 5 = 5

6 9 3 5-3=2

7 4 2 2-2=0
As we can observe in the above table, the remaining weight is zero, which means that the knapsack is full: we cannot add more objects to the knapsack. Therefore, the total profit would be equal to (8 + 5 + 10 + 15 + 9 + 4) = 51.

In the first approach, the maximum profit is 47.25; in the second approach it is 46; in the third approach it is 51. Therefore, the third approach, selecting by maximum profit/weight ratio, is the best approach among the three.

Algorithm: Fractional Knapsack Problem

Fractional_Knapsack(v, w, W)
1. load = 0
2. i = 1
3. while (load < W) and (i ≤ n)
4.     if w[i] ≤ W - load
5.         take all of item i
6.     else take (W - load) / w[i] of item i
7.     add what was taken to load
8.     i = i + 1
(Note: Assume that the items are already sorted on the basis of the profit/weight ratio.)

Analysis:
• The main time-consuming step is sorting all items in decreasing order of their profit/weight ratio.
• If the items are already arranged in the required order, the while loop takes O(n) time.
• The average time complexity of quicksort is O(n log n).
• Therefore, the total time taken, including the sort, is O(n log n).
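A minimal Python sketch of this greedy procedure; unlike the pseudocode above, it performs the ratio sort itself rather than assuming pre-sorted items:

    def fractional_knapsack(profits, weights, W):
        """Greedy fractional knapsack: take items in decreasing
        profit/weight order; returns the maximum total profit."""
        order = sorted(range(len(profits)),
                       key=lambda i: profits[i] / weights[i], reverse=True)
        load, total = 0, 0.0
        for i in order:
            if load + weights[i] <= W:     # the whole item fits
                load += weights[i]
                total += profits[i]
            else:                          # take only the fraction that fits
                fraction = (W - load) / weights[i]
                total += profits[i] * fraction
                break
        return total

    # Data from the example above:
    # fractional_knapsack([5, 10, 15, 7, 8, 9, 4], [1, 3, 5, 4, 1, 3, 2], 15)
    # returns 51.0, matching the third approach.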

Minimum Spanning Tree


Spanning Tree
• A tree (i.e., connected, acyclic graph) which contains all the vertices of the graph
Minimum Spanning Tree
• Spanning tree with the minimum sum of weights
Problem Definition
 A town has a set of houses and a set of roads.
 A road connects 2 and only 2 houses.
 A road connecting houses 𝑢 and 𝑣 has a repair cost 𝑤(𝑢, 𝑣)
Goal:
 Repair enough roads such that
1. everyone stays connected: can reach every house from all other houses, and
2. total repair cost is minimum.

Application of Minimum Spanning Tree


1. Consider n stations that are to be linked using a communication network, where laying a communication link between any two stations involves a cost.
The ideal solution extracts a subgraph termed the minimum cost spanning tree.
2. Suppose you want to construct highways or railroads spanning several cities then we can use
the concept of minimum spanning trees.
3. Designing Local Area Networks.
4. Laying pipelines connecting offshore drilling sites, refineries and consumer markets.
5. Suppose you want to supply a set of houses with
o Electric Power
o Water
o Telephone lines
o Sewage lines

To reduce cost, you can connect houses with minimum cost spanning trees.

For example, the problem of laying telephone wire.


Kruskal’s Algorithm:
Most of the cable network companies (Landline cable/ TV cable/ Dish TV Network) use Kruskal’s
algorithm to save a huge amount of cable for connecting different offices/ houses. On our trip to some
tourist locations, we plan to visit all the important places and sites, but we are short on time. Then
Kruskal’s algorithm gives us the way to visit all places in minimum time.

Sort all the edges from low weight to high weight.


Take the edge with the lowest weight and use it to connect the vertices of the graph. If adding an edge
creates a cycle, reject that edge and go for the next least weight edge.

Keep adding edges (repeat step 2) until all the vertices are connected and a Minimum Spanning Tree
(MST) is obtained.

Step 1: Sort all the edges from low weight to high weight:

Edge ( 1, 6 ) ( 3, 4 ) ( 2, 7 ) ( 2, 3 ) ( 4, 7 ) ( 4, 5 ) ( 5, 7 ) ( 5, 6 ) ( 1, 2 )
Weight 10 12 14 16 18 22 24 25 28

Steps 2 - 10: Scan this list in order, adding each edge that does not create a cycle. Edges (1, 6), (3, 4), (2, 7), (2, 3), (4, 5) and (5, 6) are accepted; edges (4, 7), (5, 7) and (1, 2) are rejected because each would close a cycle with the edges already chosen. The resulting minimum spanning tree has total weight 10 + 12 + 14 + 16 + 22 + 25 = 99.

• Kruskal’s Algorithm
MST-KRUSKAL(G, w)
1 A ← Ø
2 for each vertex v ∈ V[G]
3     do MAKE-SET(v)
4 sort the edges of E into nondecreasing order by weight w
5 for each edge (u, v) ∈ E, taken in nondecreasing order by weight
6     do if FIND-SET(u) ≠ FIND-SET(v)
7         then A ← A ∪ {(u, v)}
8              UNION(u, v)
9 return A
Kruskal’s Algorithm (Analysis)
• The running time of Kruskal's algorithm for a graph G = (V, E) depends on the implementation of the disjoint-set data structure.
• Initializing the set A in line 1 takes O(1) time, and the time to sort the edges in line 4 is O(E lg E).
• The for loop of lines 5-8 performs O(E) FIND-SET and UNION operations on the disjoint-set forest. Along with the |V| MAKE-SET operations, these take a total of O(V + E) time.
• The total running time of Kruskal's algorithm is O(E lg E).
• Observing that |E| < |V|², applying lg to both sides gives lg |E| = O(lg V).
• Hence we can state the running time of Kruskal's algorithm as O(E lg V).
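A minimal Python sketch of MST-KRUSKAL with a union-by-rank, path-compressing disjoint-set; the example edge list in the comment re-labels vertices 1..7 of the step table above as 0..6:

    def kruskal(n, edges):
        """n: number of vertices 0..n-1; edges: list of (weight, u, v).
        Returns (mst edge list, total weight)."""
        parent = list(range(n))
        rank = [0] * n

        def find(x):                       # FIND-SET with path compression
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def union(a, b):                   # UNION by rank
            ra, rb = find(a), find(b)
            if rank[ra] < rank[rb]:
                ra, rb = rb, ra
            parent[rb] = ra
            if rank[ra] == rank[rb]:
                rank[ra] += 1

        mst, total = [], 0
        for w, u, v in sorted(edges):      # nondecreasing order by weight
            if find(u) != find(v):         # safe edge: joins two components
                union(u, v)
                mst.append((u, v, w))
                total += w
        return mst, total

    # kruskal(7, [(10,0,5), (12,2,3), (14,1,6), (16,1,2), (18,3,6),
    #             (22,3,4), (24,4,6), (25,4,5), (28,0,1)])  ->  total 99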

Prim’s Algorithm

Prim's algorithm is used to find the minimum spanning tree. It takes a graph as input and gives a minimum spanning tree as output. In Prim's, the tree starts from an arbitrary root vertex r and grows until it spans all the vertices in V. Prim's follows the greedy strategy, since at each step the tree is augmented with an edge that contributes the minimum amount possible to the tree's weight.

Procedure
Step 1: First initialize the starting node of the graph.
Step 2: Initialize the priority queue Q and assign key value ∞ to each vertex except the root or source node, whose key value is 0.
Step 3: Check all adjacent nodes of the current node that belong to Q, and compare the new cost with the existing cost; select the minimum-cost edge among all edges connecting to adjacent nodes. Remove the current node from Q.
Step 4: Move to the minimum-cost adjacent node.
Step 5: Repeat steps 3 and 4 until Q is empty.
• Prim’s Algorithm: (Algorithm)
MST-PRIM(G, w, r)
1 for each u ∈ V[G]
2     do key[u] ← ∞
3        π[u] ← NIL
4 key[r] ← 0
5 Q ← V[G]
6 while Q ≠ Ø
7     do u ← EXTRACT-MIN(Q)
8        for each v ∈ Adj[u]
9            do if v ∈ Q and w(u, v) < key[v]
10               then π[v] ← u
11                    key[v] ← w(u, v)
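A minimal Python sketch of MST-PRIM using a lazy binary heap: since heapq has no DECREASE-KEY, improved keys are pushed as new entries and stale ones are skipped on extraction. The adjacency list in the comment is reconstructed from the walkthrough below:

    import heapq

    def prim(adj, r=0):
        """adj: {u: [(v, weight), ...]} for an undirected graph; r: root.
        Returns (predecessor map pi, total MST weight)."""
        key = {u: float('inf') for u in adj}   # best edge weight into u so far
        pi = {u: None for u in adj}
        key[r] = 0
        in_tree = set()
        heap = [(0, r)]                        # (key, vertex)
        total = 0
        while heap:
            k, u = heapq.heappop(heap)         # EXTRACT-MIN
            if u in in_tree:
                continue                       # stale heap entry
            in_tree.add(u)
            total += k
            for v, w in adj[u]:
                if v not in in_tree and w < key[v]:
                    key[v] = w                 # lazy DECREASE-KEY
                    pi[v] = u
                    heapq.heappush(heap, (w, v))
        return pi, total

    # Graph from the walkthrough below (vertices 0..6):
    # adj = {0: [(5, 10), (1, 28)], 1: [(0, 28), (6, 14), (2, 16)],
    #        2: [(1, 16), (3, 12)], 3: [(2, 12), (4, 22), (6, 18)],
    #        4: [(3, 22), (5, 25), (6, 24)], 5: [(0, 10), (4, 25)],
    #        6: [(1, 14), (3, 18), (4, 24)]}
    # prim(adj, 0)[1] == 99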

Example: Generate minimum cost spanning tree for the following graph using Prim's algorithm.

Solution: In Prim's algorithm, we first initialize the priority queue Q to contain all the vertices, and set the key of each vertex to ∞ except for the root, whose key is set to 0. Suppose vertex 0 is the root, i.e., r. By the EXTRACT-MIN(Q) procedure, now u = r and Adj[u] = {5, 1}.

We remove u from the set Q and add it to the set V - Q of vertices in the tree. Now, we update the key and π fields of every vertex v adjacent to u but not in the tree.

1. Taking 0 as the starting vertex
2. Root = 0
3. Adj[0] = {5, 1}
4. Parent: π[5] = 0 and π[1] = 0
5. key[5] = ∞ and key[1] = ∞
6. w(0, 5) = 10 and w(0, 1) = 28
7. w(u, v) < key[5] and w(u, v) < key[1]
8. key[5] = 10 and key[1] = 28
9. So the updated key values of 5 and 1 are 10 and 28:
Now by EXTRACT_MIN (Q) Removes 5 because key [5] = 10 which is minimum so u = 5.

1. Adj[5] = {0, 4}, and 0 is already in the tree
2. Taking 4: key[4] = ∞, π[4] = 5
3. w(5, 4) = 25
4. w(5, 4) < key[4]
5. So key[4] = 25; update the key value and parent of 4.
Now remove 4 because key [4] = 25 which is minimum, so u =4

1. Adj[4] = {6, 3}
2. key[3] = ∞ and key[6] = ∞
3. w(4, 3) = 22 and w(4, 6) = 24
4. w(4, 3) < key[3] and w(4, 6) < key[6]

Update the key value of key[3] as 22 and key[6] as 24, and the parent of 3 and 6 as 4.

1. π[3] = 4 and π[6] = 4
2. u = EXTRACT-MIN(Q) = 3, since key[3] < key[6], i.e., 22 < 24

Now remove 3 because key [3] = 22 is minimum so u =3.


1. Adj[3] = {4, 6, 2}
2. 4 is already in the tree
3. key[2] = ∞ and key[6] = 24
4. w(3, 2) = 12 and w(3, 6) = 18
5. w(3, 2) < key[2] and w(3, 6) < key[6], so key[6] = 24 now becomes key[6] = 18

Now in Q, key[2] = 12, key[6] = 18, key[1] = 28, and the parent of 2 and 6 is 3.

1. π [2] = 3 π[6]=3

Now by EXTRACT_MIN (Q) Removes 2, because key [2] = 12 is minimum.

1. u = EXTRACT-MIN(Q) = 2, since key[2] < key[6], i.e., 12 < 18
2. Adj[2] = {3, 1}
3. 3 is already in the tree
4. Taking 1: key[1] = 28
5. w(2, 1) = 16
6. w(2, 1) < key[1]

So update key value of key [1] as 16 and its parent as 2.


1. π[1]= 2

Now by EXTRACT_MIN (Q) Removes 1 because key [1] = 16 is minimum.

1. Adj[1] = {0, 6, 2}
2. 0 and 2 are already in the tree
3. Taking 6: key[6] = 18
4. w(1, 6) = 14
5. w(1, 6) < key[6]

Update key value of 6 as 14 and its parent as 1.

1. Π [6] = 1

Now all the vertices have been spanned. Using the above table, we get the minimum spanning tree.

1. 0 → 5 → 4 → 3 → 2 → 1 → 6
2. [Because Π [5] = 0, Π [4] = 5, Π [3] = 4, Π [2] = 3, Π [1] =2, Π [6] =1]

Thus the final spanning Tree is


Total Cost = 10 + 25 + 22 + 12 + 16 + 14 = 99

Time Complexity of Prim’s Algorithm


= Build min-heap + Extract-Min + Decrease-Key
= O(V) + O(V lg V) + O(V²)   {in the case of a dense graph}
= O(V² + V lg V) = O(V²)
Introduction to Shortest Path:

The shortest path problem is to find a path between two vertices such that the sum of the weights of the edges on the path is minimum. We can find the shortest path for a directed, undirected or mixed graph.
Analogy
Consider a road map between two cities: the source city is Greater Noida and the destination city is Meerut. There are different paths between these two cities, which can be visualised using Google Maps.

Google Maps shows several alternatives: one road route of 84.4 km, another of 78 km, and a train route between Greater Noida and Meerut. Google Maps is a good example of finding the shortest path between a source location and a destination location. The locations can be considered as nodes or vertices of a graph, and the roads or paths between the cities as edges between those vertices.

Types of Shortest Path:

Single-source shortest path: find paths from one node to all other nodes.
Dijkstra's algorithm
Bellman-Ford algorithm
All-pairs shortest path: find paths between all pairs of nodes; every node is considered both as a source node and as a destination node.
Floyd-Warshall algorithm

Single-Source Shortest Path

The single-source shortest path algorithm is used to find the shortest paths between a particular node and all other nodes. First we initialize a source node, then find the minimum distance from the source node to every other node of graph G.
Consider a situation in which a person is in city A and wants to go to city D; there are many possible paths between A and D.

We have different path with different cost


A→B→C→D COST= 10 + 2 + 9 = 21
A→B→E→D COST= 10 + 4 + 2 = 16
A→B→E→C→D COST= 10 + 4 + 8 + 9 = 31
A→E→B→C→D COST= 3 + 1 + 2 + 9 = 15
A→E→C→D COST= 3 + 8 + 9 = 20
A→E→D COST= 3 + 2 = 5

We can see that there are many paths between city A and city D. It is complex to find all possible paths from one node to all other nodes; here we listed only the combinations from A to D. If we also found A to B, A to C and A to E, there would be many more paths. After finding all the paths, we have to pick the minimum-cost path from one city to every other city. To solve this problem, we have two algorithms for different cases of the single-source shortest path:

Dijkstra’s Algorithm:
• Dijkstra's algorithm is a very famous greedy algorithm.
• It is used for solving the single-source shortest path problem.
• It computes the shortest paths from one particular source node to all other remaining nodes of the graph.
• Dijkstra's algorithm works
• for connected graphs,
• for graphs that do not contain any negative-weight edge,
• for directed as well as undirected graphs,
• and it provides the value or cost of the shortest paths.
• Dijkstra’s Algorithm (Implementation)
The implementation of Dijkstra Algorithm is executed in the following steps-
Step-01:
• In the first step, two sets are defined:
• One set contains all those vertices which have been included in the shortest path
tree.
• In the beginning, this set is empty.
• Other set contains all those vertices which are still left to be included in the shortest
path tree.
• In the beginning, this set contains all the vertices of the given graph.
Step-02: For each vertex of the given graph, two variables are defined as-
• Π[v] which denotes the predecessor of vertex ‘v’
• d[v] which denotes the shortest path estimate of vertex ‘v’ from the source vertex.
• Initially, the value of these variables is set as-
• The value of variable ‘Π’ for each vertex is set to NIL i.e. Π[v] = NIL
• The value of variable ‘d’ for source vertex is set to 0 i.e. d[S] = 0
• The value of variable ‘d’ for remaining vertices is set to ∞ i.e. d[v] = ∞
Step-03: The following procedure is repeated until all the vertices of the graph are processed-
• Among unprocessed vertices, a vertex with minimum value of variable ‘d’ is chosen.
• Its outgoing edges are relaxed.
• After relaxing the edges for that vertex, the sets created in step-01 are updated.
Dijkstra’s Algorithm:
DIJKSTRA(G, w, s)
1 INITIALIZE-SINGLE-SOURCE(G, s)
2 S ← Ø
3 Q ← V[G]
4 while Q ≠ Ø
5     do u ← EXTRACT-MIN(Q)
6        S ← S ∪ {u}
7        for each vertex v ∈ Adj[u]
8            do RELAX(u, v, w)

INITIALIZE-SINGLE-SOURCE(G, s)
1 for each vertex v ∈ V[G]
2     do d[v] ← ∞
3        π[v] ← NIL
4 d[s] ← 0

RELAX(u, v, w)
1 if d[v] > d[u] + w(u, v)
2     then d[v] ← d[u] + w(u, v)
3          π[v] ← u

Example 1: Construct the Single source shortest path for the given graph using Dijkstra’s Algorithm-
Complexity Analysis
Case-01 (adjacency matrix):
A[i, j] stores the information about edge (i, j).
The time taken for selecting the vertex i with the smallest dist is O(V).
For each neighbor of i, the time taken for updating dist[j] is O(1), and there are at most V - 1 neighbors.
The time taken for each iteration of the loop is O(V), and one vertex is deleted from Q.
Thus, the total time complexity becomes O(V²).
Case-02 (adjacency list with min-heap):
With the adjacency list representation, all vertices of the graph can be traversed using BFS in O(V + E) time.
In a min-heap, operations like extract-min and decrease-key take O(log V) time.
So, the overall time complexity becomes O(E + V) × O(log V), which is O((E + V) log V) = O(E log V).
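A minimal Python sketch of the Case-02 implementation (binary heap); heapq has no decrease-key, so improved distances are pushed as new entries and stale ones are skipped. The example graph in the comment is the A..E graph whose path costs were listed earlier:

    import heapq

    def dijkstra(adj, s):
        """adj: {u: [(v, weight), ...]} with nonnegative weights; s: source.
        Returns (d, pi): shortest-path estimates and predecessors."""
        d = {u: float('inf') for u in adj}
        pi = {u: None for u in adj}
        d[s] = 0
        heap = [(0, s)]
        done = set()
        while heap:
            du, u = heapq.heappop(heap)    # EXTRACT-MIN
            if u in done:
                continue                   # stale heap entry
            done.add(u)
            for v, w in adj[u]:            # RELAX each outgoing edge
                if d[u] + w < d[v]:
                    d[v] = d[u] + w
                    pi[v] = u
                    heapq.heappush(heap, (d[v], v))
        return d, pi

    # adj = {'A': [('B', 10), ('E', 3)], 'B': [('C', 2), ('E', 4)],
    #        'C': [('D', 9)], 'D': [], 'E': [('B', 1), ('C', 8), ('D', 2)]}
    # dijkstra(adj, 'A')[0] == {'A': 0, 'B': 4, 'C': 6, 'D': 5, 'E': 3}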

Advantages of Dijkstra’s Algorithm

 It is used in Google Maps.


 It is employed to identify the shortest path.
 It is very popular in the Geographical Maps.
 It is used to locate the points on the map that correspond to the graph’s vertices.
 In order to identify the Open Shortest Path First, it is needed in IP routing.
 The telephone network makes use of it.
Disadvantages of Dijkstra Algorithm

 It conducts a blind scan, which takes a lot of processing time.
 It is unable to handle negative edge weights; on graphs with negative edges the greedy choice can be wrong, and the ideal shortest path is frequently impossible to find.

Applications of Dijkstra’s Algorithm

 To determine the quickest route


 In applications for social networking
 Within a phone network
 To locate the places on the map
Bellman Ford Algorithm
o The Bellman-Ford algorithm solves the single-source shortest-path problem in the general case, in which edges of a given digraph can have negative weight, as long as G contains no negative cycles.
o Like Dijkstra's algorithm, this algorithm uses the notion of edge relaxation, but it does not use the greedy method. Again, it uses d[v] as an upper bound on the distance from u to v.
o The algorithm progressively decreases the estimate d[v] on the weight of the shortest path from the source vertex s to each vertex v ∈ V until it achieves the actual shortest-path weight.
o The algorithm returns Boolean TRUE if the given digraph contains no negative cycles reachable from the source vertex s; otherwise it returns Boolean FALSE.

The Bellman-Ford algorithm is a single-source shortest path algorithm. It is used to find the shortest distance from a single vertex to all the other vertices of a weighted graph. There are various other algorithms for finding the shortest path, such as Dijkstra's algorithm. If the weighted graph contains negative weight values, Dijkstra's algorithm does not guarantee a correct answer. In contrast, the Bellman-Ford algorithm guarantees the correct answer even if the weighted graph contains negative weight values, provided there is no reachable negative cycle.

Rule of this algorithm

We go on relaxing all the edges (n - 1) times, where n = number of vertices.

Consider the below graph:

As we can observe in the above graph, some of the weights are negative. The graph contains 6 vertices, so we will relax all the edges 5 times; the loop iterates 5 times to get the correct answer. If the loop is iterated more than 5 times, the answer stays the same, i.e., there is no further change in the distances of the vertices.

Relaxing means:

If d(u) + c(u, v) < d(v), then
d(v) = d(u) + c(u, v)

To find the shortest paths in the above graph, the first step is to note down all the edges, which are given below:

(A, B), (A, C), (A, D), (B, E), (C, E), (D, C), (D, F), (E, F), (C, B)

Let's consider the source vertex as 'A'; therefore, the distance value at vertex A is 0 and the distance value at all the other vertices is infinity, as shown below.
Since the graph has six vertices, there will be five iterations.

First iteration

Consider the edge (A, B). Denote vertex 'A' as 'u' and vertex 'B' as 'v'. Now use the relaxing formula:

d(u) = 0

d(v) = ∞

c(u , v) = 6

Since (0 + 6) is less than ∞, so update

1. d(v) = d(u) + c(u , v)

d(v) = 0 + 6 = 6

Therefore, the distance of vertex B is 6.

Consider the edge (A, C). Denote vertex 'A' as 'u' and vertex 'C' as 'v'. Now use the relaxing formula:

d(u) = 0

d(v) = ∞

c(u , v) = 4

Since (0 + 4) is less than ∞, so update

1. d(v) = d(u) + c(u , v)

d(v) = 0 + 4 = 4

Therefore, the distance of vertex C is 4.

Consider the edge (A, D). Denote vertex 'A' as 'u' and vertex 'D' as 'v'. Now use the relaxing formula:
d(u) = 0

d(v) = ∞

c(u , v) = 5

Since (0 + 5) is less than ∞, so update

1. d(v) = d(u) + c(u , v)

d(v) = 0 + 5 = 5

Therefore, the distance of vertex D is 5.

Consider the edge (B, E). Denote vertex 'B' as 'u' and vertex 'E' as 'v'. Now use the relaxing formula:

d(u) = 6

d(v) = ∞

c(u , v) = -1

Since (6 - 1) is less than ∞, so update

1. d(v) = d(u) + c(u , v)

d(v) = 6 - 1= 5

Therefore, the distance of vertex E is 5.

Consider the edge (C, E). Denote vertex 'C' as 'u' and vertex 'E' as 'v'. Now use the relaxing formula:

d(u) = 4

d(v) = 5

c(u , v) = 3

Since (4 + 3) is greater than 5, so there will be no updation. The value at vertex E is 5.

Consider the edge (D, C). Denote vertex 'D' as 'u' and vertex 'C' as 'v'. Now use the relaxing formula:

d(u) = 5

d(v) = 4

c(u , v) = -2

Since (5 -2) is less than 4, so update


1. d(v) = d(u) + c(u , v)

d(v) = 5 - 2 = 3

Therefore, the distance of vertex C is 3.

Consider the edge (D, F). Denote vertex 'D' as 'u' and vertex 'F' as 'v'. Now use the relaxing formula:

d(u) = 5

d(v) = ∞

c(u , v) = -1

Since (5 -1) is less than ∞, so update

1. d(v) = d(u) + c(u , v)

d(v) = 5 - 1 = 4

Therefore, the distance of vertex F is 4.

Consider the edge (E, F). Denote vertex 'E' as 'u' and vertex 'F' as 'v'. Now use the relaxing formula:

d(u) = 5

d(v) = ∞

c(u , v) = 3

Since (5 + 3) is greater than 4, so there would be no updation on the distance value of vertex F.

Consider the edge (C, B). Denote vertex 'C' as 'u' and vertex 'B' as 'v'. Now use the relaxing formula:

d(u) = 3

d(v) = 6

c(u , v) = -2

Since (3 - 2) is less than 6, so update

1. d(v) = d(u) + c(u , v)

d(v) = 3 - 2 = 1

Therefore, the distance of vertex B is 1.

Now the first iteration is completed. We move to the second iteration.


Second iteration:

In the second iteration, we again check all the edges. The first edge is (A, B). Since (0 + 6) is greater
than 1 so there would be no updation in the vertex B.

The next edge is (A, C). Since (0 + 4) is greater than 3 so there would be no updation in the vertex C.

The next edge is (A, D). Since (0 + 5) equals to 5 so there would be no updation in the vertex D.

The next edge is (B, E). Since (1 - 1) equals to 0 which is less than 5 so update:

d(v) = d(u) + c(u, v)

d(E) = d(B) +c(B , E)

=1-1=0

The next edge is (C, E). Since (3 + 3) equals to 6 which is greater than 5 so there would be no updation
in the vertex E.

The next edge is (D, C). Since (5 - 2) equals to 3 so there would be no updation in the vertex C.

The next edge is (D, F). Since (5 - 1) equals to 4 so there would be no updation in the vertex F.

The next edge is (E, F). Since d(E) has just been updated to 0 in this iteration and (0 + 3) equals 3, which is less than 4, update: d(F) = d(E) + c(E, F) = 0 + 3 = 3.

The next edge is (C, B). Since (3 - 2) equals 1, there would be no updation in the vertex B.

Third iteration

We will perform the same steps as we did in the previous iterations. We will observe that there will be
no updation in the distance of vertices.

The following are the distances of the vertices:

A: 0
B: 1
C: 3
D: 5
E: 0
F: 3

• Bellman-Ford Algorithm (Algorithm)

BELLMAN-FORD(G, w, s)
1 INITIALIZE-SINGLE-SOURCE(G, s)
2 for i ← 1 to |V[G]| - 1
3     do for each edge (u, v) ∈ E[G]
4         do RELAX(u, v, w)
5 for each edge (u, v) ∈ E[G]
6     do if d[v] > d[u] + w(u, v)
7         then return FALSE
8 return TRUE

INITIALIZE-SINGLE-SOURCE(G, s)
1 for each vertex v ∈ V[G]
2     do d[v] ← ∞
3        π[v] ← NIL
4 d[s] ← 0

RELAX(u, v, w)
1 if d[v] > d[u] + w(u, v)
2     then d[v] ← d[u] + w(u, v)
3          π[v] ← u
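A minimal Python sketch of BELLMAN-FORD; the example edge list reproduces the weights from the worked example above, and the expected distances match the values computed there:

    def bellman_ford(vertices, edges, s):
        """vertices: iterable of vertex names; edges: list of (u, v, weight).
        Returns (d, pi, ok) where ok is False if a reachable negative
        cycle exists."""
        d = {v: float('inf') for v in vertices}
        pi = {v: None for v in vertices}
        d[s] = 0
        for _ in range(len(d) - 1):        # |V| - 1 passes
            for u, v, w in edges:          # RELAX every edge
                if d[u] + w < d[v]:
                    d[v] = d[u] + w
                    pi[v] = u
        for u, v, w in edges:              # one extra pass detects neg. cycles
            if d[u] + w < d[v]:
                return d, pi, False
        return d, pi, True

    # Graph from the worked example above:
    edges = [('A', 'B', 6), ('A', 'C', 4), ('A', 'D', 5), ('B', 'E', -1),
             ('C', 'E', 3), ('D', 'C', -2), ('D', 'F', -1), ('E', 'F', 3),
             ('C', 'B', -2)]
    # bellman_ford('ABCDEF', edges, 'A')[0] ==
    #     {'A': 0, 'B': 1, 'C': 3, 'D': 5, 'E': 0, 'F': 3}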

Drawbacks of Bellman ford algorithm

o The Bellman-Ford algorithm does not produce a correct answer if the sum of the edge weights of a cycle is negative. Let's understand this property through an example. Consider the graph below.

o In the above graph, we consider vertex 1 as the source vertex and give it the value 0. We give infinity values to the other vertices, as shown below.
The edges can be written as:
(1, 3), (1, 2), (2, 4), (3, 2), (4, 3)

First iteration

Consider the edge (1, 3). Denote vertex '1' as 'u' and vertex '3' as 'v'. Now use the relaxing
formula:

d(u) = 0

d(v) = ∞

c(u , v) = 5

Since (0 + 5) is less than ∞, so update

1. d(v) = d(u) + c(u , v)

d(v) = 0 + 5 = 5

Therefore, the distance of vertex 3 is 5.

Consider the edge (1, 2). Denote vertex '1' as 'u' and vertex '2' as 'v'. Now use the relaxing
formula:

d(u) = 0

d(v) = ∞

c(u , v) = 4

Since (0 + 4) is less than ∞, so update

1. d(v) = d(u) + c(u , v)

d(v) = 0 + 4 = 4

Therefore, the distance of vertex 2 is 4.


Consider the edge (3, 2). Denote vertex '3' as 'u' and vertex '2' as 'v'. Now use the relaxing
formula:

d(u) = 5

d(v) = 4

c(u , v) = 7

Since (5 + 7) is greater than 4, so there would be no updation in the vertex 2.

Consider the edge (2, 4). Denote vertex '2' as 'u' and vertex '4' as 'v'. Now use the relaxing
formula:

d(u) = 4

d(v) = ∞

c(u , v) = 7

Since (4 + 7) equals to 11 which is less than ∞, so update

1. d(v) = d(u) + c(u , v)

d(v) = 4 + 7 = 11

Therefore, the distance of vertex 4 is 11.

Consider the edge (4, 3). Denote vertex '4' as 'u' and vertex '3' as 'v'. Now use the relaxing
formula:

d(u) = 11

d(v) = 5

c(u , v) = -15

Since (11 - 15) equals to -4 which is less than 5, so update

1. d(v) = d(u) + c(u , v)

d(v) = 11 - 15 = -4

Therefore, the distance of vertex 3 is -4.

Now we move to the second iteration.

Second iteration

Now, again we will check all the edges. The first edge is (1, 3). Since (0 + 5) equals to 5 which is greater
than -4 so there would be no updation in the vertex 3.
The next edge is (1, 2). Since (0 + 4) equals to 4 so there would be no updation in the vertex 2.

The next edge is (3, 2). Since (-4 + 7) equals to 3 which is less than 4 so update:

d(v) = d(u) + c(u, v)

d(2) = d(3) +c(3, 2)

= -4 + 7 = 3

Therefore, the value at vertex 2 is 3.

The next edge is (2, 4). Since ( 3+7) equals to 10 which is less than 11 so update

d(v) = d(u) + c(u, v)

d(4) = d(2) +c(2, 4)

= 3 + 7 = 10

Therefore, the value at vertex 4 is 10.

The next edge is (4, 3). Since (10 - 15) equals to -5 which is less than -4 so update:

d(v) = d(u) + c(u, v)

d(3) = d(4) +c(4, 3)

= 10 - 15 = -5

Therefore, the value at vertex 3 is -5.

Now we move to the third iteration.

Third iteration

Now again we will check all the edges. The first edge is (1, 3). Since (0 + 5) equals to 5 which is greater
than -5 so there would be no updation in the vertex 3.

The next edge is (1, 2). Since (0 + 4) equals to 4 which is greater than 3 so there would be no updation
in the vertex 2.

The next edge is (3, 2). Since (-5 + 7) equals to 2 which is less than 3 so update:

d(v) = d(u) + c(u, v)

d(2) = d(3) +c(3, 2)

= -5 + 7 = 2

Therefore, the value at vertex 2 is 2.


The next edge is (2, 4). Since (2 + 7) equals to 9 which is less than 10 so update:

d(v) = d(u) + c(u, v)

d(4) = d(2) +c(2, 4)

=2+7=9

Therefore, the value at vertex 4 is 9.

The next edge is (4, 3). Since (9 - 15) equals to -6 which is less than -5 so update:

d(v) = d(u) + c(u, v)

d(3) = d(4) +c(4, 3)

= 9 - 15 = -6

Therefore, the value at vertex 3 is -6.

Since the graph contains 4 vertices, according to the Bellman-Ford algorithm there should be only 3 iterations. If we perform a 4th iteration on the graph, the distances of the vertices from the given vertex should not change; if a distance does change, it means that the Bellman-Ford algorithm cannot provide the correct answer for this graph.

4th iteration

The first edge is (1, 3). Since (0 +5) equals to 5 which is greater than -6 so there would be no change
in the vertex 3.

The next edge is (1, 2). Since (0 + 4) is greater than 2 so there would be no updation.

The next edge is (3, 2). Since (-6 + 7) equals 1, which is less than 3, update:

d(v) = d(u) + c(u, v)

d(2) = d(3) + c(3, 2)

= -6 + 7 = 1

Therefore, the value at vertex 2 is 1. The value of a vertex is updated even in the 4th iteration, so we conclude that the Bellman-Ford algorithm does not work (i.e., it reports FALSE) when the graph contains a negative weight cycle.
