Unit-4 Dynamic Programming and Backtracking
Dynamic Programming and Backtracking

Basic concepts and comparison with divide & conquer


Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a
recursive solution that makes repeated calls for the same input states, we can optimize it
using Dynamic Programming. The idea is simply to store the results of subproblems so that
we do not have to re-compute them when they are needed later. This simple optimization can
reduce time complexities from exponential to polynomial.
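
To make the idea concrete, here is a minimal Python sketch (the function names are ours, not part of the original text) that contrasts plain recursion with a memoized version for computing Fibonacci numbers; caching each subproblem's result turns an exponential-time recursion into a linear-time one.

```python
from functools import lru_cache

# Plain recursion: fib_naive recomputes the same subproblems over and
# over, so its running time grows exponentially with n.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Dynamic programming via memoization: each subproblem is solved once
# and cached, so the running time is linear in n.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(40))   # 102334155, returned almost instantly
```

The same store-the-subproblem idea underlies all of the problems discussed in this unit.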

Divide and Conquer vs. Dynamic Programming:

1. Divide and conquer follows a top-down approach; dynamic programming follows a bottom-up approach.

2. Divide and conquer is used to solve decision problems; dynamic programming is used to solve optimization problems.

3. In divide and conquer, the solution of a subproblem may be computed recursively more than once; in dynamic programming, the solution of each subproblem is computed once and stored in a table for later use.

4. Divide and conquer is used to obtain a solution to the given problem and does not aim for the optimal solution; dynamic programming always generates an optimal solution.

5. Both are recursive in nature.

6. Divide and conquer is less efficient and slower on problems with overlapping subproblems; dynamic programming is more efficient, though slower than greedy. For instance, single-source shortest path finding using the Bellman-Ford algorithm takes O(VE) time.

7. Divide and conquer requires some memory; dynamic programming requires more memory, to store subproblem results for later use.

8. Examples of divide and conquer: merge sort, quick sort, Strassen's matrix multiplication. Examples of dynamic programming: 0/1 knapsack, all-pairs shortest paths, matrix-chain multiplication.

0/1 Knapsack
Unlike in the fractional knapsack problem, items are always taken whole; a fractional part of
an item cannot be used. Either an item is added to the knapsack or it is not. That is why this
method is known as the 0-1 Knapsack problem.

Hence, in case of 0-1 Knapsack, the value of xi can be either 0 or 1, where other constraints
remain the same.

The 0-1 Knapsack problem cannot be solved by the Greedy approach: Greedy does not
guarantee an optimal solution here. In some instances the Greedy approach may happen to
produce an optimal solution, but it cannot be relied upon to do so.

0-1 Knapsack Algorithm

Problem Statement − A thief is robbing a store and can carry a maximal weight of W into his
knapsack. There are n items and weight of ith item is wi and the profit of selecting this item
is pi. What items should the thief take?

Let i be the highest-numbered item in an optimal solution S for capacity W. Then S' = S − {i}
is an optimal solution for capacity W − wi, and the value of S is pi plus the value of that
sub-problem.

We can express this fact in the following formula: define c[i, w] to be the optimal value for
items 1, 2, …, i and maximum weight w. Then

c[i, w] = 0                                         if i = 0 or w = 0
c[i, w] = c[i−1, w]                                 if wi > w
c[i, w] = max(c[i−1, w], c[i−1, w − wi] + pi)       otherwise

The algorithm takes the following inputs

 The maximum weight W

 The number of items n

 The two sequences v = <v1, v2, …, vn> and w = <w1, w2, …, wn>

The set of items to take can be deduced from the table, starting at c[n, w] and tracing
backwards where the optimal values came from.

If c[i, w] = c[i-1, w], then item i is not part of the solution, and we continue tracing with
c[i-1, w]. Otherwise, item i is part of the solution, and we continue tracing with c[i-1, w - wi].
Example

Let us consider that the capacity of the knapsack is W = 8 and the items are as shown in the
following table.

Item A B C D

Profit 2 4 7 10

Weight 1 3 5 7

Solution

Using a greedy approach on this 0-1 knapsack instance (for example, repeatedly taking the
lightest remaining item), the items stored in the knapsack would be A and B, with total
weight 4 and maximum profit 2 + 4 = 6. But that solution is not the optimal one.

Therefore, dynamic programming must be adopted to solve 0-1 knapsack problems.

Step 1

Construct a table whose rows correspond to the items, with their respective weights and
profits, and whose columns correspond to the knapsack capacities from 0 up to the maximum
weight W.

The values to be stored in the table are the cumulative profits of the items whose weights do
not exceed the capacity of each column.

So we add zeroes to the 0th row and 0th column, because if no items are considered, nothing
can be carried, and if the maximum weight of the knapsack is 0, then no item can be added
into the knapsack.
The remaining values are filled with the maximum profit achievable with respect to the items
and weight per column that can be stored in the knapsack.

The formula to fill in the profit values is −

c[i, w] = max(c[i−1, w], c[i−1, w − w[i]] + P[i]) when w[i] ≤ w; otherwise c[i, w] = c[i−1, w].

By computing all the values using the formula, the table can be filled in completely.
To find the items to be added to the knapsack, locate the maximum profit in the table and
trace back to identify the items that make up that profit; in this example they are the items
with weights {1, 7}, i.e., items A and D.

The optimal solution is {A, D} with maximum profit 2 + 10 = 12.

This algorithm takes Θ(nW) time, as table c has (n+1)·(W+1) entries, where each entry
requires Θ(1) time to compute.
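
The procedure above can be sketched in a few lines of Python (the function and variable names are ours, not part of the original text). It builds the (n+1) × (W+1) table bottom-up using the recurrence, then traces back to recover the chosen items; on the example data it reproduces the profit of 12 with items A and D.

```python
def knapsack_01(profits, weights, W):
    """Bottom-up 0/1 knapsack; returns (max profit, chosen item indices)."""
    n = len(profits)
    # c[i][w] = best profit using the first i items with capacity w
    c = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            c[i][w] = c[i - 1][w]                       # skip item i
            if weights[i - 1] <= w:                     # or take item i
                c[i][w] = max(c[i][w],
                              c[i - 1][w - weights[i - 1]] + profits[i - 1])
    # Trace back: if c[i][w] != c[i-1][w], item i was taken.
    chosen, w = [], W
    for i in range(n, 0, -1):
        if c[i][w] != c[i - 1][w]:
            chosen.append(i - 1)
            w -= weights[i - 1]
    return c[n][W], sorted(chosen)

# Items A, B, C, D from the example above (capacity W = 8).
profit, items = knapsack_01([2, 4, 7, 10], [1, 3, 5, 7], 8)
print(profit, items)   # 12 [0, 3]  -> items A and D
```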

Longest Common Subsequence (LCS)


The longest common subsequence problem is that of finding the longest subsequence that is
present in both of the given sequences.

But before we understand the problem, let us understand what the term subsequence is −

Let us consider a sequence S = <s1, s2, s3, s4, …, sn>. A sequence Z = <z1, z2, z3, …, zm>
over S is called a subsequence of S if and only if it can be derived from S by deleting some
elements (possibly none) without changing the order of the remaining elements. In other
words, the elements of a subsequence need not be consecutive in S; only their relative order
must be preserved.

Common Subsequence

Suppose, X and Y are two sequences over a finite set of elements. We can say that Z is a
common subsequence of X and Y, if Z is a subsequence of both X and Y.

Longest Common Subsequence

If a set of sequences are given, the longest common subsequence problem is to find a
common subsequence of all the sequences that is of maximal length.
Naive Method

Let X be a sequence of length m and Y a sequence of length n. Check for every subsequence
of X whether it is a subsequence of Y, and return the longest common subsequence found.

There are 2^m subsequences of X. Testing whether a single subsequence is also a
subsequence of Y takes O(n) time. Thus, the naive algorithm would take O(n·2^m) time.

Longest Common Subsequence Algorithm

Let X = <x1, x2, x3, …, xn> and Y = <y1, y2, y3, …, ym> be the sequences. To compute the
length of the longest common subsequence, the following algorithm is used.

Step 1 − Construct an empty table of size (n+1) × (m+1), where n = size of sequence X and
m = size of sequence Y. The rows in the table represent the elements of sequence X and the
columns represent the elements of sequence Y.

Step 2 − The zeroth row and column must be filled with zeroes. The remaining values are
filled in based on two cases:

 Case 1 − If the elements X[i] and Y[j] match, set T[i, j] = T[i−1, j−1] + 1, i.e.,
increment the count by 1.

 Case 2 − If the elements do not match, find the maximum value between T[i-1, j]
and T[i, j-1] and fill it in T[i, j].

Step 3 − Once the table is filled, backtrack from the last value in the table. Backtracking here
is done by tracing the path along which the count was incremented.

Step 4 − The longest common subsequence is obtained by noting the matched elements along
the traced path.

Example

In this example, we have two strings X = BDCB and Y = BACDB for which to find the
longest common subsequence.

Following the algorithm, we need to fill in the LCS table and then trace back through it.

Given n = length of X = 4, m = length of Y = 5

X = BDCB, Y = BACDB

Constructing the LCS Tables

In the table below, the zeroth row and column are filled with zeroes. The remaining values
are filled by incrementing and choosing the maximum values according to the algorithm.
Once the values are filled, the path is traced back from the last value in the table at T[4, 5].

From the traced path, the longest common subsequence is found by choosing the values
where the counter is first incremented.

In this example, the final count is 3 so the counter is incremented at 3 places, i.e., B, C, B.
Therefore, the longest common subsequence of sequences X and Y is BCB.
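
A minimal Python sketch of the algorithm follows (names are illustrative, not part of the original text). When several longest common subsequences exist, the traceback's tie-breaking rule determines which one is reported; the rule below reproduces BCB for this example, though BDB is an equally long answer.

```python
def lcs(X, Y):
    """Fill the LCS table, then trace back one longest common subsequence."""
    n, m = len(X), len(Y)
    T = [[0] * (m + 1) for _ in range(n + 1)]      # zeroth row/column stay 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:               # Case 1: elements match
                T[i][j] = T[i - 1][j - 1] + 1
            else:                                  # Case 2: take the larger neighbor
                T[i][j] = max(T[i - 1][j], T[i][j - 1])
    # Backtrack from T[n][m], collecting characters where the count grew.
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i -= 1
            j -= 1
        elif T[i - 1][j] > T[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("BDCB", "BACDB"))   # BCB
```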
Applications

The longest common subsequence problem is a classic computer science problem, the basis
of data comparison programs such as the diff-utility, and has applications in bioinformatics. It
is also widely used by revision control systems, such as SVN and Git, for reconciling
multiple changes made to a revision-controlled collection of files.

Matrix Chain Multiplication


Consider a chain of matrices to be multiplied together. For example, we have matrices A, B,
C, and D. The goal is to find the most efficient order of multiplication that minimizes the
total number of scalar multiplications required. In this case, we can multiply them in five
different orders: ((AB)C)D, (A(BC))D, (AB)(CD), A((BC)D), or A(B(CD)). The Matrix
Chain Multiplication algorithm helps us determine the optimal order.

The rule for matrix chain multiplication states that the order in which matrices are
multiplied together can significantly impact the computational efficiency of the
multiplication process. In other words, the way matrices are grouped and multiplied affects
the total number of scalar multiplications required.

To illustrate this rule, consider a chain of matrices to be multiplied: A1, A2, A3, …, An. The
result of multiplying this chain depends on how the matrices are parenthesized and the order
in which the multiplications are performed.

For example, given matrices A, B, and C, we can have two possible ways to multiply them:

1. (AB)C: In this case, we first multiply matrices A and B, and then multiply the
resulting matrix with matrix C.

2. A(BC): Here, we multiply matrices B and C first, and then multiply matrix A with the
resulting matrix.

The rule is that the order of multiplication can affect the number of scalar multiplications
required to obtain the final result. In some cases, different parenthesizations can lead to
significantly different numbers of scalar multiplications.

For instance, the number of scalar multiplications required for the product (AB)C can be
fewer than the number required for A(BC). The rule emphasizes that the order of
multiplication should be chosen in a way that minimizes the total number of scalar
multiplications in order to optimize the matrix chain multiplication process.

Matrix Chain Multiplication Example

Let’s consider an example to illustrate the matrix chain multiplication.

Suppose we have a chain of four matrices: A, B, C, and D, with dimensions as follows:

A: 10 x 30
B: 30 x 5

C: 5 x 60

D: 60 x 8

The goal is to find the optimal order of multiplying these matrices that minimizes the total
number of scalar multiplications.

To solve this problem, we can apply the Matrix Chain Multiplication algorithm
using dynamic programming. Let’s go step by step:

Step 1: Create the table. We create a 4×4 table M to store the minimum number of scalar
multiplications needed for each subchain; initially the diagonal entries M[i, i] are 0 and all
other entries are empty.

Step 2: Fill in the table. We iterate diagonally through the table, calculating the minimum
number of scalar multiplications for each subchain.

First, we consider subchains of length 2, each of which can be multiplied in only one way:

 AB: multiplying A (10 × 30) by B (30 × 5) takes 10 × 30 × 5 = 1500 scalar
multiplications.

 BC: multiplying B (30 × 5) by C (5 × 60) takes 30 × 5 × 60 = 9000 scalar
multiplications.

 CD: multiplying C (5 × 60) by D (60 × 8) takes 5 × 60 × 8 = 2400 scalar
multiplications.

Next, we consider subchains of length 3, each of which has two possible split points:

 ABC: (AB)C costs 1500 + 10 × 5 × 60 = 4500, while A(BC) costs
9000 + 10 × 30 × 60 = 27000. The minimum is 4500, via (AB)C.

 BCD: (BC)D costs 9000 + 30 × 60 × 8 = 23400, while B(CD) costs
2400 + 30 × 5 × 8 = 3600. The minimum is 3600, via B(CD).

Finally, we consider the entire chain ABCD, which has three possible split points:

 A(BCD): 0 + 3600 + 10 × 30 × 8 = 6000.

 (AB)(CD): 1500 + 2400 + 10 × 5 × 8 = 4300.

 (ABC)D: 4500 + 0 + 10 × 60 × 8 = 9300.

Step 3: Backtrack the optimal solution. To determine the optimal order of multiplication, we
look at M[1, 4], which gives the minimum number of scalar multiplications for the entire
chain. Here M[1, 4] = 4300, achieved by splitting between B and C, so (AB)(CD) is the
optimal order.

Step 4: Return the result. The minimum number of scalar multiplications required to multiply
the entire chain is 4300, and the optimal order is (AB)(CD).

That’s an example of how the Matrix Chain Multiplication algorithm works. By applying
dynamic programming, we can find the optimal order and minimize the computational cost of
multiplying a chain of matrices.

Algorithm to Solve Matrix Chain Multiplication

The matrix chain multiplication algorithm is a dynamic programming approach used to


determine the most efficient order of multiplying a chain of matrices. It aims to minimize the
total number of scalar multiplications required to compute the final matrix product.

Let’s break down the steps of the algorithm:

1. Define the problem:

 Given a chain of matrices A1, A2, A3, …, An, where the dimensions of matrix
Ai are d[i-1] x d[i] for i = 1 to n.

 The goal is to find the optimal order of matrix multiplication to minimize the
total number of scalar multiplications.

2. Create a table:

 Create an n x n table, where n is the number of matrices in the chain.


 The table, let’s call it M, will store the minimum number of scalar
multiplications required to multiply the subchains of matrices.

3. Initialize the table:

 Set the diagonal entries of M to 0, as the subchains of length 1 do not require


any multiplication.

4. Fill in the table:

 Iterate over the table diagonally, starting from the subchains of length 2 and
gradually increasing the length.

 For each subchain length k, starting from matrix i, calculate the minimum
number of scalar multiplications.

 For each possible split point j, where i ≤ j < i + k, calculate the number of
scalar multiplications as follows:

 Compute the number of scalar multiplications required for the left


subchain: M[i][j]

 Compute the number of scalar multiplications required for the right


subchain: M[j+1][i+k]

 Calculate the number of scalar multiplications for the current split


point: d[i-1] * d[j] * d[i+k] + M[i][j] + M[j+1][i+k]

 Update the table entry M[i][i+k] with the minimum value among all
possible split points.

5. Backtrack the optimal solution:

 To determine the optimal order of matrix multiplication, track the split that
gives the minimum number of scalar multiplications for each subchain.

 This can be achieved by storing additional information during the table-filling


step, such as the split points that yield the minimum number of scalar
multiplications for each subchain.

6. Return the result:

 The minimum number of scalar multiplications required to multiply the entire


chain is stored in M[1][n], where n is the number of matrices in the chain.

 The optimal order of matrix multiplication can be obtained by backtracking


through the split points stored during the algorithm execution.
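
A minimal Python sketch of steps 1 through 6 follows (the function and variable names are ours, not part of the original text). It fills M diagonally, records the best split point for each subchain, and reconstructs the parenthesization; on the dimensions from the example above it returns 4300 with the order (AB)(CD).

```python
import sys

def matrix_chain_order(d):
    """d[i-1] x d[i] are the dimensions of matrix Ai; returns (min cost, order)."""
    n = len(d) - 1
    M = [[0] * (n + 1) for _ in range(n + 1)]     # M[i][j]: min cost for Ai..Aj
    split = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):                # subchain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            M[i][j] = sys.maxsize
            for k in range(i, j):                 # try every split point
                cost = M[i][k] + M[k + 1][j] + d[i - 1] * d[k] * d[j]
                if cost < M[i][j]:
                    M[i][j], split[i][j] = cost, k

    def paren(i, j):
        if i == j:
            return "ABCD"[i - 1]                  # assumes up to 4 named matrices
        k = split[i][j]
        return "(" + paren(i, k) + paren(k + 1, j) + ")"

    return M[1][n], paren(1, n)

# A: 10x30, B: 30x5, C: 5x60, D: 60x8 from the example above.
print(matrix_chain_order([10, 30, 5, 60, 8]))   # (4300, '((AB)(CD))')
```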
Optimal Binary Search Tree
As we know that in binary search tree, the nodes in the left subtree have lesser value than the
root node and the nodes in the right subtree have greater value than the root node.

We know the key values of each node in the tree, and we also know the frequency of each
node in terms of searching, i.e., how often each key is searched for. The frequency and key
value together determine the overall cost of searching a node. The cost of searching is a very
important factor in various applications, and the overall cost of searching should be kept low.
The time required to search a node in an arbitrary BST can be greater than in a balanced
binary search tree, as a balanced binary search tree contains fewer levels. Going one step
further, the cost of a binary search tree can be reduced by building what is known as an
optimal binary search tree.

Let's understand through an example.

If the keys are 10, 20, 30, 40, 50, 60, 70

In the above tree, all the nodes in the left subtree are smaller than the value of the root node,
and all the nodes in the right subtree are larger than the value of the root node. The maximum
time required to search a node is proportional to the height of the tree, which is log n for a
balanced tree.

Now we will see how many binary search trees can be made from the given number of keys.

For example: 10, 20, 30 are the keys, and the following are the binary search trees that can be
made out from these keys.

The formula for calculating the number of trees on n keys is the Catalan number:

C(n) = (2n)! / ((n + 1)! · n!)

When we use the above formula with n = 3, we find that a total of 5 trees can be created.
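
This can be verified with a one-line computation (the helper name is ours):

```python
from math import comb

def num_bsts(n):
    # Number of structurally distinct BSTs on n keys: the nth Catalan number.
    return comb(2 * n, n) // (n + 1)

print(num_bsts(3))   # 5
```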

The cost required for searching an element depends on the comparisons to be made to search
an element. Now, we will calculate the average cost of time of the above binary search trees.

In each of the skewed trees, the deepest key requires 3 comparisons, so the average number
of comparisons is (1 + 2 + 3) / 3 = 2.

In the balanced tree (the third case, with 20 at the root), the keys are at depths 1, 2 and 2, so
the average number of comparisons is (1 + 2 + 2) / 3 = 5/3.

In the third case, the number of comparisons is less because the height of the tree is less; it is
a balanced binary search tree.

So far, we have considered only the height-balanced binary search tree. To find the optimal
binary search tree, we must also take into account the frequency with which each key is
searched.

Let's assume that the frequencies associated with the keys 10, 20, 30 are 3, 2, 5 respectively.

The above trees then have different total costs, where each cost is the sum of
frequency × depth over all keys. The tree with the lowest cost is the optimal binary search
tree; here the tree with cost 17 is the lowest, so it is considered the optimal binary search tree.

Dynamic Approach

Consider the following keys and frequencies:

Keys:        10  20  30  40
Frequencies:  4   2   6   3

First, we will calculate the values where j-i is equal to zero.


When i=0, j=0, then j-i = 0

When i = 1, j=1, then j-i = 0

When i = 2, j=2, then j-i = 0

When i = 3, j=3, then j-i = 0

When i = 4, j=4, then j-i = 0

Therefore, c[0, 0] = 0, c[1 , 1] = 0, c[2,2] = 0, c[3,3] = 0, c[4,4] = 0

Now we will calculate the values where j-i equal to 1.

When j=1, i=0 then j-i = 1

When j=2, i=1 then j-i = 1

When j=3, i=2 then j-i = 1

When j=4, i=3 then j-i = 1

Now, to calculate the cost, we consider only the jth key (the single key in the range).

The cost of c[0,1] is 4 (The key is 10, and the cost corresponding to key 10 is 4).

The cost of c[1,2] is 2 (The key is 20, and the cost corresponding to key 20 is 2).

The cost of c[2,3] is 6 (The key is 30, and the cost corresponding to key 30 is 6)

The cost of c[3,4] is 3 (The key is 40, and the cost corresponding to key 40 is 3)

Now we will calculate the values where j-i = 2


When j=2, i=0 then j-i = 2

When j=3, i=1 then j-i = 2

When j=4, i=2 then j-i = 2

In this case, we will consider two keys.

o When i=0 and j=2, then keys 10 and 20. There are two possible trees that can be made
out from these two keys shown below:

In the first binary tree, cost would be: 4*1 + 2*2 = 8

In the second binary tree, cost would be: 4*2 + 2*1 = 10

The minimum cost is 8; therefore, c[0,2] = 8

o When i=1 and j=3, then keys 20 and 30. There are two possible trees that can be made
out from these two keys shown below:

In the first binary tree, cost would be: 1*2 + 2*6 = 14

In the second binary tree, cost would be: 1*6 + 2*2 = 10

The minimum cost is 10; therefore, c[1,3] = 10

o When i=2 and j=4, we will consider the keys at 3 and 4, i.e., 30 and 40. There are two
possible trees that can be made out from these two keys shown as below:
In the first binary tree, cost would be: 1*6 + 2*3 = 12

In the second binary tree, cost would be: 1*3 + 2*6 = 15

The minimum cost is 12, therefore, c[2,4] = 12

Now we will calculate the values when j-i = 3

When j=3, i=0 then j-i = 3

When j=4, i=1 then j-i = 3

o When i=0, j=3 then we will consider three keys, i.e., 10, 20, and 30.

The following are the trees that can be made if 10 is considered as a root node.

In the above tree, 10 is the root node, 20 is the right child of node 10, and 30 is the right child
of node 20.

Cost would be: 1*4 + 2*2 + 3*6 = 26


In the above tree, 10 is the root node, 30 is the right child of node 10, and 20 is the left child
of node 30.

Cost would be: 1*4 + 2*6 + 3*2 = 22

The following tree can be created if 20 is considered as the root node.

In the above tree, 20 is the root node, 30 is the right child of node 20, and 10 is the left child
of node 20.

Cost would be: 1*2 + 4*2 + 6*2 = 22

The following are the trees that can be created if 30 is considered as the root node.
In the above tree, 30 is the root node, 20 is the left child of node 30, and 10 is the left child of
node 20.

Cost would be: 1*6 + 2*2 + 3*4 = 22

In the above tree, 30 is the root node, 10 is the left child of node 30 and 20 is the right child
of node 10.

Cost would be: 1*6 + 2*4 + 3*2 = 20

Therefore, the minimum cost is 20, obtained when the third key, 30, is taken as the root. So, c[0,3] is equal to 20.

o When i=1 and j=4 then we will consider the keys 20, 30, 40

c[1,4] = min{ c[1,1] + c[2,4], c[1,2] + c[3,4], c[1,3] + c[4,4] } + 11

= min{0+12, 2+3, 10+0}+ 11

= min{12, 5, 10} + 11

The minimum value is 5; therefore, c[1,4] = 5+11 = 16

o Now we will calculate the values when j-i = 4

When j=4 and i=0 then j-i = 4

In this case, we will consider four keys, i.e., 10, 20, 30 and 40. The frequencies of 10, 20, 30
and 40 are 4, 2, 6 and 3 respectively.

w[0, 4] = 4 + 2 + 6 + 3 = 15

If we consider 10 as the root node then

C[0, 4] = min{c[0,0] + c[1,4]} + w[0,4]

= min{0 + 16} + 15 = 31

If we consider 20 as the root node then

C[0,4] = min{c[0,1] + c[2,4]} + w[0,4]

= min{4 + 12} + 15

= 16 + 15 = 31

If we consider 30 as the root node then,

C[0,4] = min{c[0,2] + c[3,4]} + w[0,4]

= min{8 + 3} + 15

= 11 + 15 = 26

If we consider 40 as the root node then,

C[0,4] = min{c[0,3] + c[4,4]} + w[0,4]

= min{20 + 0} + 15

= 35

In the above cases, we have observed that 26 is the minimum cost; therefore, c[0,4] is equal
to 26.

The optimal binary search tree can now be constructed: 30 is the root, 10 is its left child, 40 is its right child, and 20 is the right child of node 10.


The general formula for calculating the minimum cost is:

C[i, j] = min{ c[i, k−1] + c[k, j] : i < k ≤ j } + w(i, j)

where k ranges over the possible roots and w(i, j) is the sum of the frequencies of the keys
from i+1 to j.
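
A minimal Python sketch of this procedure (names are ours, not part of the original text), using the same c[i, j] and w(i, j) conventions as the formula above; on the example's keys and frequencies it reproduces the minimum cost of 26 with key 30 (the 3rd key) at the root.

```python
def optimal_bst(freq):
    """Cost table c[i][j] for keys i+1..j, matching the recurrence above."""
    n = len(freq)
    # w[i][j]: total frequency of keys i+1..j
    w = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(i + 1, n + 1):
            w[i][j] = w[i][j - 1] + freq[j - 1]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for d in range(1, n + 1):                 # d = j - i, the range size
        for i in range(n - d + 1):
            j = i + d
            # Try every key k in the range as the root; keep the cheapest.
            best = min((c[i][k - 1] + c[k][j], k) for k in range(i + 1, j + 1))
            c[i][j] = best[0] + w[i][j]
            root[i][j] = best[1]
    return c[0][n], root[0][n]

# Keys 10, 20, 30, 40 with frequencies 4, 2, 6, 3 from the example above.
cost, r = optimal_bst([4, 2, 6, 3])
print(cost, r)   # 26 3  -> minimum cost 26 with the 3rd key (30) at the root
```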

Travelling Salesman Problem


The travelling salesman problem is a graph computational problem where the salesman needs
to visit all cities (represented using nodes in a graph) in a list just once and the distances
(represented using edges in the graph) between all these cities are known. The solution that is
needed to be found for this problem is the shortest possible route in which the salesman visits
all the cities and returns to the origin city.

If you look at the graph below, considering that the salesman starts from the vertex a, they
need to travel through all the remaining vertices b, c, d, e, f and get back to a while making
sure that the cost taken is minimum.
There are various approaches to finding a solution to the travelling salesman problem: the
naive approach, the greedy approach, the dynamic programming approach, etc. Here we will
be learning about solving the travelling salesman problem using the greedy approach.

Travelling Salesperson Algorithm

As the definition of the greedy approach states, we make the best locally optimal choice at
each step in the hope of arriving at a globally optimal solution. The inputs taken by the
algorithm are the graph G(V, E), where V is the set of vertices and E is the set of edges. The
shortest path of graph G starting from one vertex and returning to the same vertex is obtained
as the output.

Algorithm

 The travelling salesman problem takes a graph G(V, E) as an input and declares
another graph as the output (say G') which will record the path the salesman is going
to take from one node to another.

 The algorithm begins by sorting all the edges in the input graph G from the least
distance to the largest distance.

 The first edge selected is the edge with the least distance, one of whose two
endpoints (say A and B) is the origin node (say A).

 Then among the adjacent edges of the node other than the origin node (B), find the
least cost edge and add it onto the output graph.

 Continue the process with further nodes making sure there are no cycles in the output
graph and the path reaches back to the origin node A.

 However, if the origin is mentioned in the given problem, then the solution must
always start from that node only. Let us look at some example problems to understand
this better.

Examples

Consider the following graph with six cities and the distances between them −
From the given graph, since the origin is already mentioned, the solution must always start
from that node. Among the edges leading from A, A → B has the shortest distance.

Then, B → C has the shortest and only edge between, therefore it is included in the output
graph.

There's only one edge between C → D, therefore it is added to the output graph.
There are two outward edges from D. Even though D → B has a lower distance than D → E,
B has already been visited and adding it would form a cycle in the output graph. Therefore,
D → E is added into the output graph.

There's only one edge from E, that is E → F. Therefore, it is added into the output graph.
Again, even though F → C has a lower distance than F → A, C has already been visited and
adding F → C would form a cycle, so F → A is added into the output graph instead.

The shortest path that originates and ends at A is A → B → C → D → E → F → A

The cost of the path is: 16 + 21 + 12 + 15 + 16 + 34 = 114.

The cost of the path could perhaps be decreased if the tour originated from another node, but
the problem as posed fixes the origin at A.
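
For illustration, here is a minimal Python sketch of a greedy tour construction in the nearest-neighbour style, a common variant of the edge-based procedure described above (the function name and the small distance matrix are hypothetical, not the weights of the figure).

```python
def greedy_tsp(dist, start):
    """Nearest-neighbour greedy tour: from each city, take the cheapest edge
    to an unvisited city; finally return to the start."""
    tour, total = [start], 0
    current, unvisited = start, set(dist) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[current][c])
        total += dist[current][nxt]
        tour.append(nxt)
        current = nxt
        unvisited.remove(nxt)
    total += dist[current][start]       # close the cycle back at the origin
    tour.append(start)
    return tour, total

# Hypothetical symmetric distances (not the exact weights of the figure above).
d = {c: {} for c in "ABCD"}
for (a, b), w in {("A", "B"): 4, ("A", "C"): 9, ("A", "D"): 5,
                  ("B", "C"): 3, ("B", "D"): 7, ("C", "D"): 2}.items():
    d[a][b] = d[b][a] = w

print(greedy_tsp(d, "A"))   # (['A', 'B', 'C', 'D', 'A'], 14)
```

Like any greedy method, this produces a tour quickly but does not guarantee that the tour is the overall shortest one.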

Backtracking: N-Queens
Backtracking is a fundamental algorithmic technique used to solve problems that
involve searching for solutions among a set of possible candidates, often where the solution
must satisfy certain constraints. It explores candidate solutions incrementally and abandons
(backtracks from) a partial solution as soon as it can no longer satisfy the constraints. So we
can say that it uses a more refined version of the brute-force searching style for a given
problem. For example, in the 0/1 knapsack problem, backtracking can be used to
systematically assign the 0's and 1's to the corresponding items.

N-Queen Problem:

It is a classic backtracking problem in which queens are placed on an n × n board in such a
way that no two queens attack each other, i.e., no two queens share a row, a column, or a
diagonal.

Algorithm:

1. Place a queen in the first column of the first row.

2. Check if placing the queen in the current position conflicts with any previously placed
queens.

3. If there is no conflict, proceed to the next column. If there is a conflict, backtrack to


the previous column and try placing the queen in the next row.
4. Repeat steps 2 and 3 for all columns.

5. If all queens are placed successfully without conflicts, a solution is found.

6. If no solution is found after exploring all possibilities, backtrack to the previous
column and try placing the queen in the next row (a runnable sketch follows this list).
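
Here is a minimal Python sketch of this backtracking procedure (names are ours, not part of the original text). For simplicity it places one queen per row and tries columns left to right, the mirror image of the column-wise description above; the safety check and the backtracking step are the same.

```python
def solve_n_queens(n):
    """Backtracking N-Queens: place one queen per row; cols[r] is the
    column of the queen in row r. Returns the first solution found."""
    cols = []

    def safe(row, col):
        # Conflict if same column or same diagonal as any earlier queen.
        for r, c in enumerate(cols):
            if c == col or abs(c - col) == abs(r - row):
                return False
        return True

    def place(row):
        if row == n:                      # all queens placed: solution found
            return True
        for col in range(n):
            if safe(row, col):
                cols.append(col)          # tentatively place the queen
                if place(row + 1):
                    return True
                cols.pop()                # conflict downstream: backtrack
        return False

    return cols if place(0) else None

print(solve_n_queens(4))   # [1, 3, 0, 2] -> a valid 4-queens placement
print(solve_n_queens(8))   # a valid 8-queens placement
```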

Time Complexity Analysis

The time complexity of the N-Queens problem using backtracking can be analyzed as
follows:

 Worst-case Time Complexity: In the worst case, the algorithm might explore all
possible configurations of the board. For each row, it tries to place a queen in all
columns and checks for safety. Therefore, the time complexity can be approximated
as O(N!) because there are N! possible permutations of queens on the board.

Practical Performance:

The actual number of configurations explored is much less than N! because the
algorithm prunes many configurations early (when it detects conflicts), making the practical
performance much better than the worst-case scenario. The worst case remains O(N!),
however, so the approach should not be applied to very large n × n boards.

There are two popular categories of the N-Queen Problem:

1. 4-queen problem (4 x 4 = 16 cell board)

2. 8-queen problem (8 x 8= 64 cell board)

For 8-Queens Problem

For the 8 × 8 problem, the search tree is considerably larger: with one queen per row and
eight column choices per row, the unpruned tree has up to 8^8 ≈ 1.7 × 10^7 leaf placements,
although pruning eliminates the vast majority of them.

Graph Colouring
Graph coloring can be described as the process of assigning colors to the vertices of a graph
such that no two adjacent vertices receive the same color. It is also called vertex coloring. In
graph coloring, we have to take care that the graph must not contain any edge whose end
vertices are colored with the same color. Such a graph is known as a properly colored graph.

Example of Graph coloring

In this graph, we are showing the properly colored graph, which is described as follows:
The above graph contains some points, which are described as follows:

o The same color cannot be used to color the two adjacent vertices.

o Hence, we can call it a properly colored graph.

Applications of Graph coloring

There are various applications of graph coloring. Some of their important applications are
described as follows:

o Assignment

o Map coloring

o Scheduling the tasks

o Sudoku

o Prepare time table

o Conflict resolution

Chromatic Number

The chromatic number can be described as the minimum number of colors required to
properly color any graph. In other words, the chromatic number can be described as a
minimum number of colors that are needed to color any graph in such a way that no two
adjacent vertices of a graph will be assigned the same color.
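
As a concrete illustration, the following Python sketch (names are ours, not part of the original text) finds the chromatic number of a small graph by backtracking: it tries m = 1, 2, 3, … colors until a proper coloring exists.

```python
def chromatic_number(adj):
    """Smallest m for which a proper m-coloring exists (backtracking)."""
    n = len(adj)
    color = [0] * n                       # 0 = uncolored

    def ok(v, c):
        # Color c is allowed at v if no neighbor already uses it.
        return all(color[u] != c for u in adj[v])

    def paint(v, m):
        if v == n:
            return True                   # every vertex colored
        for c in range(1, m + 1):
            if ok(v, c):
                color[v] = c
                if paint(v + 1, m):
                    return True
                color[v] = 0              # backtrack
        return False

    for m in range(1, n + 1):             # try 1 color, then 2, ...
        if paint(0, m):
            return m

# Triangle (odd cycle) 0-1-2-0, as adjacency lists.
print(chromatic_number([[1, 2], [0, 2], [0, 1]]))           # 3
# Square (even cycle) 0-1-2-3-0.
print(chromatic_number([[1, 3], [0, 2], [1, 3], [0, 2]]))   # 2
```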

Example of Chromatic number:

To understand the chromatic number, we will consider a graph, which is described as


follows:
The above graph contains some points, which are described as follows:

o The same color is not used to color the two adjacent vertices.

o The minimum number of colors of this graph is 3, which is needed to properly color
the vertices.

o Hence, in this graph, the chromatic number = 3

o To properly color this graph, we require at least 3 colors.

Types of Chromatic Number of Graphs:

There are various types of chromatic number of graphs, which are described as follows:

Cycle Graph:

A graph will be known as a cycle graph if it contains 'n' edges and 'n' vertices (n >= 3), which
form a cycle of length 'n'. Every vertex in a cycle graph has degree exactly 2.

Chromatic number:

1. The chromatic number in a cycle graph will be 2 if the number of vertices in that
graph is even.

2. The chromatic number in a cycle graph will be 3 if the number of vertices in that
graph is odd.

Examples of Cycle graph:


There are various examples of cycle graphs. Some of them are described as follows:

Example 1: In the following graph, we have to determine the chromatic number.

Solution: In the above cycle graph, there are 3 different colors for three vertices, and none of
the adjacent vertices are colored with the same color. In this graph, the number of vertices is
odd. So

Chromatic number = 3

Example 2: In the following graph, we have to determine the chromatic number.

Solution: In the above cycle graph, there are 2 colors for four vertices, and none of the
adjacent vertices are colored with the same color. In this graph, the number of vertices is
even. So

Chromatic number = 2

Example 3: In the following graph, we have to determine the chromatic number.


Solution: In the above graph, there are 4 different colors for five vertices, and two adjacent
vertices are colored with the same color (blue). So this is not a proper coloring, and it does
not demonstrate the chromatic number of the graph.

Example 4: In the following graph, we have to determine the chromatic number.

Solution: In the above graph, there are 2 different colors for six vertices, and none of the
adjacent vertices are colored with the same color. In this graph, the number of vertices is
even. So

Chromatic number = 2

Planar Graph

A graph will be known as a planar graph if it can be drawn in a plane such that its edges do
not cross each other.
Chromatic Number:

1. In a planar graph, the chromatic number is always less than or equal to 4 (by the four
color theorem).

2. All the above cycle graphs except Example 3 are also planar graphs.

Examples of Planar graphs:

There are various examples of planar graphs. Some of them are described as follows:

Example 1: In the following graph, we have to determine the chromatic number.

Solution: In the above graph, there are 3 different colors for three vertices, and none of the
edges of this graph cross each other. So

Chromatic number = 3

Here, the chromatic number is at most 4, which is consistent with the graph being planar.

Example 2: In the following graph, we have to determine the chromatic number.


Solution: In the above graph, there are 2 different colors for four vertices, and none of the
edges of this graph cross each other. So

Chromatic number = 2

Here, the chromatic number is at most 4, which is consistent with the graph being planar.

Example 3: In the following graph, we have to determine the chromatic number.

Solution: In the above graph, there are 5 different colors for five vertices, and every pair of
vertices is adjacent (the complete graph K5). So

Chromatic number = 5

Here, the chromatic number is greater than 4, so by the four color theorem this graph cannot
be a planar graph.

Example 4: In the following graph, we have to determine the chromatic number.


Solution: In the above graph, there are 2 different colors for six vertices, and none of the
edges of this graph cross each other. So

Chromatic number = 2

Here, the chromatic number is at most 4, which is consistent with the graph being planar.

Complete Graph

A graph will be known as a complete graph if exactly one edge joins every two distinct
vertices. Every vertex in a complete graph is connected with every other vertex. In this graph,
every vertex must be colored with a different color; that is, in a complete graph no two
vertices may share the same color.

Chromatic Number

In a complete graph, the chromatic number will be equal to the number of vertices in that
graph.

Examples of Complete graph:

There are various examples of complete graphs. Some of them are described as follows:

Example 1: In the following graph, we have to determine the chromatic number.


Solution: There are 4 different colors for 4 different vertices, and none of the colors are
repeated in the above graph. For a complete graph, the chromatic number equals the number
of vertices. So,

Chromatic number = 4

Example 2: In the following graph, we have to determine the chromatic number.

Solution: There are 5 different colors for 5 different vertices, and none of the colors are
repeated in the above graph. For a complete graph, the chromatic number equals the number
of vertices. So,

Chromatic number = 5

Example 3: In the following graph, we have to determine the chromatic number.


Solution: There are 3 different colors for 4 different vertices, and one color is repeated on
two vertices in the above graph. Such a coloring would not be proper for a complete graph,
so this graph is not a complete graph, and this coloring does not demonstrate its chromatic
number.

Bipartite Graph

A graph will be known as a bipartite graph if its vertices can be divided into two sets, A and
B. A vertex of A can only be joined to vertices of B; that is, no edge may join two vertices
within the same set.

Chromatic Number

In any bipartite graph with at least one edge, the chromatic number is equal to 2.
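
This can be checked with a simple breadth-first 2-coloring; in the sketch below (names are ours, not part of the original text), vertices are assigned alternating sides, and failure is reported if any edge joins two vertices on the same side.

```python
from collections import deque

def is_bipartite(adj):
    """BFS 2-coloring: returns the color list if the graph is bipartite,
    otherwise None (an odd cycle was found)."""
    n = len(adj)
    color = [None] * n
    for s in range(n):                    # handle disconnected graphs
        if color[s] is not None:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if color[u] is None:
                    color[u] = 1 - color[v]   # put neighbor on the other side
                    q.append(u)
                elif color[u] == color[v]:
                    return None               # same color across an edge
    return color

# Two sets A = {0, 1} and B = {2, 3}; edges only cross between the sets.
print(is_bipartite([[2, 3], [2], [0, 1], [0]]))   # [0, 0, 1, 1]
```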

Examples of Bipartite graph:

There are various examples of bipartite graphs. Some of them are described as follows:

Example 1: In the following graph, we have to determine the chromatic number.


Solution: There are 2 different sets of vertices in the above graph. So the chromatic number
of all bipartite graphs will always be 2. So

Chromatic number = 2

Tree:

A connected graph will be known as a tree if there are no circuits in that graph. In a tree, the
chromatic number equals 2 no matter how many vertices are in the tree (as long as it has at
least one edge). Every tree is also a bipartite graph.

Chromatic Number

In any tree, the chromatic number is equal to 2.

Examples of Tree:

There are various examples of a tree. Some of them are described as follows:

Example 1: In the following tree, we have to determine the chromatic number.

Solution: There are 2 different colors for four vertices. A tree with any number of vertices
(at least two) has chromatic number 2, as in the above tree. So,

Chromatic number = 2

Example 2: In the following tree, we have to determine the chromatic number.


Solution: There are 2 different colors for five vertices. A tree with any number of vertices
(at least two) has chromatic number 2, as in the above tree. So,

Chromatic number = 2

Real-life example of Chromatic Number

Suppose Marry is a manager in XYZ Company. The company hires some new employees,
and she has to prepare a training schedule for them. She has to schedule three meetings, and
she is trying to use as few time slots as possible. If there is an employee who has to be at two
different meetings, then the manager needs to use different time slots for those meetings.
Suppose we want a visual representation of this scheduling problem.

For the visual representation, Marry uses a dot to indicate each meeting. If there is an
employee who needs to attend two meetings, the corresponding dots are connected by an
edge. The different time slots are represented with the help of colors. The manager then fills
in the dots with these colors in such a way that no two dots sharing an edge receive the same
color (that means an employee who needs to attend two meetings must not have the same
time slot for both).
