
Module 4

1. Explain the concept of dynamic programming with an example. Differentiate between divide and conquer and dynamic programming techniques.
Answer: Dynamic Programming:
Dynamic Programming is an algorithmic optimization technique used to solve problems by
breaking them down into simpler overlapping subproblems. It involves storing the solutions
to these subproblems so that they are not recomputed, leading to improved time
complexity. Dynamic Programming is applicable when a problem exhibits both optimal
substructure (optimal solution can be constructed from optimal solutions of its
subproblems) and overlapping subproblems (subproblems recur multiple times).
Example of Dynamic Programming:
One classic example is the Fibonacci sequence. To find the nth Fibonacci number, a
straightforward recursive approach computes Fib(n) = Fib(n-1) + Fib(n-2), which leads to
redundant computations: in the recursion tree for Fib(5), for instance, Fib(3) is calculated
twice, and Fib(2) and Fib(1) are calculated multiple times.
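As a minimal Python sketch, the naive recursion and its memoized (dynamic programming) counterpart look like this:

```python
from functools import lru_cache

# Naive recursion: recomputes the same subproblems, exponential time.
def fib_naive(n):
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Dynamic programming via memoization: each Fib(k) is computed once and cached.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(10), fib_memo(10))  # both print 55; fib_memo runs in O(n)
```

The cached version turns an exponential-time recursion into a linear-time one, which is exactly the benefit of storing overlapping subproblem solutions.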
Divide and Conquer:
Divide and Conquer is a technique that breaks down a problem into smaller, non-
overlapping subproblems, solves these subproblems independently, and combines their
solutions to solve the original problem. Unlike dynamic programming, it typically doesn't
store or reuse solutions to subproblems.
Example of Divide and Conquer:
The merge sort algorithm exemplifies the divide and conquer strategy. It breaks down an
array into smaller subarrays, sorts these subarrays, and then merges them back together in
a sorted manner.
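A minimal merge sort sketch in Python, showing the divide step (split into non-overlapping halves) and the combine step (merge):

```python
def merge_sort(arr):
    # Divide: split the array into two non-overlapping halves.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Combine: merge the two independently sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))  # prints [1, 2, 3, 4, 5, 6]
```

Note that the two halves never share subproblems, so nothing needs to be cached.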

Differences Between Divide and Conquer and Dynamic Programming:


Divide and Conquer (D&C):
1. Subproblem Overlap: D&C deals with non-overlapping subproblems.
2. Solutions Storage: Does not store solutions for reuse.
3. Optimal Substructure: Both D&C and DP break problems down, but DP requires
optimal substructure, whereas D&C may not.
4. Merging Step: In D&C, a merging step combines solutions of subproblems.
5. Examples: Classic D&C algorithms include merge sort and binary search.
6. Recursion Emphasis: D&C often involves recursive decomposition of problems.
7. No Solution Reuse: D&C typically does not reuse solutions to subproblems.
8. Time Complexity: Analysis is often straightforward, but applying D&C to a problem
with overlapping subproblems results in redundant computations.
9. Space Complexity: Usually less space-intensive, as it doesn't store solutions.
10. Applicability: Suitable for problems without overlapping subproblems.

Dynamic Programming:

1. Subproblem Overlap: DP involves solving overlapping subproblems.


2. Solutions Storage: Stores solutions to subproblems for reuse.
3. Optimal Substructure: Requires optimal substructure.
4. Memoization or Tabulation: Uses memoization or tabulation to store solutions.
5. Examples: Classic DP problems include Fibonacci, shortest path, and knapsack
problems.
6. Recursion Optimization: Avoids redundant computations through memoization or
tabulation.
7. Solution Reuse: Reuses solutions to overlapping subproblems.
8. Time Complexity: Reduces time complexity significantly for problems with
overlapping subproblems.
9. Space Complexity: Tends to have higher space complexity due to storing solutions.
10. Applicability: Suitable for problems with overlapping subproblems, where it
optimizes time complexity.
2. Briefly explain Multistage Graphs and elaborate the multistage graph algorithm using
the forward approach with an example. (Question 3 is also covered by this answer.)
Answer: Multistage Graphs:
A multistage graph is a directed graph where the nodes are divided into multiple stages or
levels. The graph is acyclic, and edges only exist between nodes of adjacent levels/stages.
These graphs are used to model and solve various real-world optimization problems,
especially problems related to decision-making or processes that happen in multiple stages.
Multistage Graph Algorithm - Forward Approach:
The forward approach is one way to solve multistage graph problems, such as the shortest
path or minimum cost traversal from the starting node to the end node through various
stages. This method involves moving forward through the graph from the initial stage to the
final stage, calculating the minimum cost path by considering the costs associated with
moving between stages.
Algorithm Steps:
1. Initialization:
 Assign costs to the nodes in the first stage.
 Initialize the cost for each node in the first stage as the cost of reaching that
node directly from the source.
2. Forward Movement:
 Move forward through the stages, calculating the minimum cost to reach each
node in subsequent stages.
 For each subsequent stage:
 Consider each node in that stage.
 Calculate the cost to reach that node from its incoming edges (nodes
from the previous stage).
 Choose the minimum cost path among the incoming edges to get to the
current node.
 Update the cost for the current node with this minimum cost.
3. Termination:
 Once the final stage is reached, the minimum cost path can be determined
from the costs assigned to the nodes in the final stage.
Fgraph(graph G, int k, int n, int p[])
{
    float cost[MAXSIZE]; int d[MAXSIZE], r;
    cost[n] = 0.0;
    for (int j = n - 1; j >= 1; j--)
    {
        // let r be a vertex such that <j, r> is an edge of G
        // and c[j][r] + cost[r] is minimum
        cost[j] = c[j][r] + cost[r];
        d[j] = r;
    }
    p[1] = 1; p[k] = n;
    for (int j = 2; j <= k - 1; j++)
        p[j] = d[p[j - 1]];
}
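The forward-approach pseudocode above can be sketched as runnable Python. The stage graph used below is a hypothetical 4-stage example (vertex 1 is the source, vertex 6 the sink); the edge dictionary `c` maps each vertex to its successors and edge costs:

```python
def fgraph(c, n):
    # c: dict mapping vertex j -> {successor r: cost of edge <j, r>}
    # n: number of vertices; vertex 1 is the source, vertex n the sink.
    INF = float('inf')
    cost = [INF] * (n + 1)
    d = [0] * (n + 1)          # d[j] = next vertex on the cheapest path from j
    cost[n] = 0
    for j in range(n - 1, 0, -1):
        for r, w in c.get(j, {}).items():
            if w + cost[r] < cost[j]:
                cost[j] = w + cost[r]
                d[j] = r
    # Recover the minimum-cost path from source to sink.
    path = [1]
    while path[-1] != n:
        path.append(d[path[-1]])
    return cost[1], path

# Hypothetical example: stages {1}, {2, 3}, {4, 5}, {6}.
c = {1: {2: 2, 3: 1}, 2: {4: 2, 5: 3}, 3: {4: 6, 5: 7},
     4: {6: 6}, 5: {6: 5}}
print(fgraph(c, 6))  # prints (10, [1, 2, 4, 6])
```

Because edges only run between adjacent stages, processing vertices in decreasing order guarantees that `cost[r]` is final before `cost[j]` uses it.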
4. Write Warshall’s Algorithm to compute the transitive closure of the graph. Apply
Warshall’s Transitive Closure for the given graph shown below:

Answer: Warshall's algorithm is used to find the transitive closure of a directed graph. The
transitive closure of a graph shows all the vertices reachable from every vertex in the graph.
Warshall's Algorithm:
for k from 1 to V:          // V is the number of vertices
    for i from 1 to V:
        for j from 1 to V:
            tc[i][j] = tc[i][j] OR (tc[i][k] AND tc[k][j])

This algorithm works by considering each vertex as an intermediate vertex and updates the
transitive closure matrix if there exists a path between two vertices through the
intermediate vertex.
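A minimal Python sketch of Warshall's algorithm, applied to a small assumed three-vertex graph (0 -> 1 -> 2), since the question's figure is not reproduced here:

```python
def warshall(adj):
    # adj: 0/1 adjacency matrix as a list of lists.
    V = len(adj)
    tc = [row[:] for row in adj]        # start from a copy of the adjacency matrix
    for k in range(V):                  # k = intermediate vertex under consideration
        for i in range(V):
            for j in range(V):
                tc[i][j] = tc[i][j] or (tc[i][k] and tc[k][j])
    return tc

# Assumed sample graph: edges 0 -> 1 and 1 -> 2.
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
print(warshall(adj))  # prints [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
```

The closure adds the entry (0, 2), since vertex 2 is reachable from vertex 0 through the intermediate vertex 1.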
Example Application:
Let's consider a directed graph given by its adjacency matrix:

Warshall's Algorithm Execution:


Given the initial adjacency matrix as described, we perform the following steps:
1. Initialize the transitive closure matrix tc[][] as a copy of the adjacency matrix.
2. Apply the algorithm to update the transitive closure matrix:
Transitive Closure Matrix (tc[][]):
The resulting transitive closure matrix tc[][] after applying Warshall's algorithm:

5. Make use of Warshall's Algorithm for the following given graph to compute the
transitive closure.

Answer:
6. Apply Floyd’s Algorithm for the following given graph to find all pair shortest path.
Answer:
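The question's graph figure is not reproduced here, so the following is a minimal Python sketch of Floyd's all-pairs shortest-path algorithm applied to an assumed three-vertex weighted digraph; the same procedure applies to the graph in the question:

```python
def floyd(dist):
    # dist: weight matrix; float('inf') where no edge, 0 on the diagonal.
    V = len(dist)
    d = [row[:] for row in dist]
    for k in range(V):                  # allow vertex k as an intermediate
        for i in range(V):
            for j in range(V):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float('inf')
# Assumed sample graph (not the one from the question).
dist = [[0,   4, 11],
        [6,   0,  2],
        [3, INF,  0]]
print(floyd(dist))  # prints [[0, 4, 6], [5, 0, 2], [3, 7, 0]]
```

Unlike Warshall's algorithm, which only records reachability, Floyd's algorithm keeps numeric distances and relaxes them through each intermediate vertex.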

7. Apply the bottom-up dynamic programming algorithm to the following instance of the
knapsack problem with capacity m = 5. Also compute the solution vector.
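The weight/value table for this instance is not reproduced here, so the sketch below assumes a common textbook instance (weights 2, 1, 3, 2; values 12, 10, 20, 15) with capacity m = 5; the bottom-up table and the back-trace for the solution vector work the same way for any instance:

```python
def knapsack(weights, values, m):
    # Bottom-up DP: V[i][j] = best value using the first i items with capacity j.
    n = len(weights)
    V = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(m + 1):
            V[i][j] = V[i - 1][j]                      # item i not taken
            if weights[i - 1] <= j:                     # item i taken, if it fits
                V[i][j] = max(V[i][j],
                              values[i - 1] + V[i - 1][j - weights[i - 1]])
    # Back-trace the solution vector x (x[i] = 1 means item i+1 is taken).
    x, j = [0] * n, m
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            x[i - 1] = 1
            j -= weights[i - 1]
    return V[n][m], x

# Assumed instance: m = 5, weights (2, 1, 3, 2), values (12, 10, 20, 15).
print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # prints (37, [1, 1, 0, 1])
```

For this assumed instance the optimal value is 37, achieved by taking items 1, 2, and 4 (total weight 5).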
8. Apply the Bellman-Ford Algorithm to find the shortest paths from node 1 to every
other node in the following given graph.
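Since the graph figure is not reproduced here, the following is a minimal Python sketch of Bellman-Ford applied to an assumed four-vertex graph with one negative edge; the relaxation procedure is the same for the graph in the question:

```python
def bellman_ford(n, edges, src):
    # edges: list of (u, v, w) triples; vertices are numbered 1..n.
    INF = float('inf')
    dist = [INF] * (n + 1)
    dist[src] = 0
    for _ in range(n - 1):              # relax every edge n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist[1:]

# Assumed sample graph; note the negative edge (3, 4, -3), which
# Bellman-Ford handles but Dijkstra's algorithm would not.
edges = [(1, 2, 6), (1, 3, 7), (2, 4, 5), (3, 4, -3), (2, 3, 2)]
print(bellman_ford(4, edges, 1))  # prints [0, 6, 7, 4]
```

A further pass over all edges that still finds an improvement would indicate a negative-weight cycle; that check is omitted from this sketch.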

9. Apply the Travelling Salesperson Problem (TSP) algorithm for the following given
graph using dynamic programming.
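The graph for this question is not reproduced here, so the sketch below applies the dynamic programming (Held-Karp) formulation to an assumed standard 4-city cost matrix; g(S, j) denotes the cheapest route that starts at city 0, visits every city in S, and ends at j:

```python
from itertools import combinations

def tsp(dist):
    # Held-Karp DP over subsets; dist is a square cost matrix, city 0 is the start.
    n = len(dist)
    g = {}
    for j in range(1, n):                       # base case: 0 -> j directly
        g[(frozenset([j]), j)] = dist[0][j]
    for size in range(2, n):                    # grow the visited set S
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                g[(fs, j)] = min(g[(fs - {j}, k)] + dist[k][j]
                                 for k in S if k != j)
    full = frozenset(range(1, n))               # close the tour back to city 0
    return min(g[(full, j)] + dist[j][0] for j in range(1, n))

# Assumed symmetric 4-city cost matrix (not the graph from the question).
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp(dist))  # prints 80 (tour 0 -> 1 -> 3 -> 2 -> 0)
```

This runs in O(n^2 * 2^n) time, a large improvement over the O(n!) brute-force enumeration of tours, at the cost of storing a table entry per (subset, end-city) pair.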
