
UNIT III

DYNAMIC PROGRAMMING

DYNAMIC PROGRAMMING

• Dynamic programming is an algorithm design method that can be used when the
solution to a problem can be viewed as the result of a sequence of decisions.
• Dynamic programming is a powerful design technique for solving
optimization problems.
• A divide-and-conquer algorithm partitions the problem into disjoint subproblems,
solves the subproblems recursively, and then combines their solutions to solve the
original problem.
• Dynamic programming is used when the subproblems are not independent, i.e.,
when they share sub-subproblems. In this case, divide and conquer may do
more work than necessary, because it solves the same subproblem multiple times.
DIVIDE AND CONQUER

DP VS DIVIDE AND CONQUER

DIFFERENCE BETWEEN DP & RECURSIVE DIVIDE AND CONQUER

FIBONACCI USING DIVIDE AND CONQUER

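The figure for this slide is not reproduced; as a minimal sketch of what it illustrates, the plain divide-and-conquer recursion below solves its two subproblems independently and therefore recomputes the same Fibonacci values over and over (fib_dc(4) computes fib_dc(2) twice), which is exactly the redundancy the memo table below removes:

def fib_dc(n):
    # Plain divide & conquer: exponential time, because the two
    # recursive calls keep recomputing the smaller Fibonacci numbers.
    if n == 0: return 0
    if n == 1: return 1
    return fib_dc(n - 1) + fib_dc(n - 2)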
n        0    1    2    3    4    5    6
memo[n] -1   -1   -1   -1   -1   -1   -1

def fib(n):
    # Memoized (top-down) Fibonacci: memo[n] caches fib(n),
    # so each value is computed only once.
    if n == 0: return 0
    if n == 1: return 1

    if memo[n] != -1:          # answer already in the table?
        return memo[n]

    memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]
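A minimal usage sketch, matching the seven-entry table shown above (for a general n, allocate n + 1 entries):

memo = [-1] * 7            # memo[0..6] initialised to -1, as in the table
print(fib(6))              # prints 8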

THE SHORTEST PATH IN MULTISTAGE
GRAPHS

• E.g.:

• The greedy method cannot be applied to this case: it gives (S, A, D, T), 1 + 4 + 18 = 23.
• The real shortest path is (S, C, F, T), 5 + 2 + 2 = 9.

GREEDY APPROACH VS DYNAMIC
PROGRAMMING
GREEDY APPROACH:
• The greedy approach makes locally optimal choices at each step with the hope of finding a global optimum.
• The greedy approach does not necessarily consider the future consequences of the current choice.
• The greedy approach is generally faster and simpler than dynamic programming.

DYNAMIC PROGRAMMING:
• Dynamic programming is a bottom-up algorithmic approach that builds up the solution to a problem from the
solutions to its subproblems.
• Dynamic programming stores the solutions to subproblems and reuses them when necessary to avoid
solving the same subproblems multiple times.
• Dynamic programming is useful for solving problems where the optimal solution can be obtained by combining
optimal solutions to subproblems.
• Dynamic programming is generally slower and more complex than the greedy approach, but it guarantees
the optimal solution.
• In summary, the main difference between the greedy approach and dynamic programming is that the greedy
approach makes locally optimal choices at each step without considering the future consequences, while
dynamic programming solves subproblems recursively and reuses their solutions to avoid repeated calculations.
The greedy approach is generally faster and simpler, but may not always provide the optimal solution, while
dynamic programming guarantees the optimal solution but is slower and more complex.
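A classic small illustration of this contrast (not from the slides: the coin system {1, 3, 4} and the amount 6 are chosen purely for the example) is the coin-change problem, where the greedy choice of the largest coin fails but bottom-up DP finds the optimum:

def min_coins_greedy(amount, coins=(4, 3, 1)):
    # Greedy: always take the largest coin that still fits.
    count = 0
    for c in coins:
        count += amount // c
        amount %= c
    return count

def min_coins_dp(amount, coins=(1, 3, 4)):
    # DP: best[a] = fewest coins summing to a, built bottom-up
    # from the solutions to the smaller subproblems.
    best = [0] + [float('inf')] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                best[a] = min(best[a], best[a - c] + 1)
    return best[amount]

print(min_coins_greedy(6))   # 3 coins (4 + 1 + 1): locally optimal, globally wrong
print(min_coins_dp(6))       # 2 coins (3 + 3): the guaranteed optimum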
MULTISTAGE GRAPHS
• A multistage graph is a directed, weighted graph in which the nodes can be divided into a
set of stages such that all edges go from one stage to the next stage only (in other words, there is
no edge between vertices of the same stage, and none from a vertex of the current stage back to a
previous stage).
• The vertices of a multistage graph are divided into n disjoint subsets
S = {S1, S2, S3, ..., Sn}, where S1 contains the source and Sn contains the sink (destination). The
cardinalities of S1 and Sn are equal to 1, i.e., |S1| = |Sn| = 1.
• Given a multistage graph, a source and a destination, we need to find the shortest path
from the source to the destination. By convention, we consider the source to be at stage 1 and the
destination at the last stage.
Following is the example graph we will consider:

• The brute force method: find all possible paths between the source and the destination and
then take the minimum. That is the worst possible strategy.
• Dijkstra's algorithm for single-source shortest paths. This method finds the shortest paths
from the source to all other nodes, which is not required here, so it takes more time than
necessary, and it does not even use the special structure that a multistage graph has.
• Simple greedy method: at each node, choose the shortest outgoing edge. If we apply this
approach to the example graph given above we get the solution 1 + 4 + 18 = 23. But a
quick look at the graph shows that much shorter paths than 23 are available. So the greedy
method fails!
• The best option is dynamic programming. So we need to find the optimal substructure,
the recursive equations and the overlapping subproblems.

• The dynamic programming formulation for a k-stage graph problem is obtained by
first noticing that every s-to-t path is the result of a sequence of k − 2 decisions.
The ith decision involves determining which vertex in V(i+1), 1 ≤ i ≤ k − 2, is to
be on the path; it is easy to see that the principle of optimality holds.
• Let p(i, j) be a minimum-cost path from vertex j in Vi to vertex t, and let cost(i, j) be
the cost of this path. Then, using the forward approach, we obtain

cost(i, j) = min{ c(j, l) + cost(i + 1, l) }, minimised over all vertices l in V(i+1) with <j, l> an edge in E.

MULTISTAGE GRAPHS - FORWARD APPROACH

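The slide's pseudocode (Algorithm 5.1, fgraph) is not reproduced above, so here is a minimal Python sketch of the forward approach. It assumes vertices are numbered 0 .. n−1 in stage order, with 0 the source and n−1 the sink, and uses an adjacency-list encoding of our own choosing; only the recurrence itself comes from the slides:

def fgraph(n, edges):
    # edges[j] is a list of (l, c) pairs: an edge <j, l> of weight c.
    # cost[j] = cost of a minimum-cost path from vertex j to the sink,
    # computed right to left via cost(i, j) = min{ c(j, l) + cost(i+1, l) }.
    INF = float('inf')
    cost = [INF] * n
    d = [-1] * n                   # d[j] = vertex that follows j on the best path
    cost[n - 1] = 0                # the sink reaches itself at no cost
    for j in range(n - 2, -1, -1):
        for l, c in edges[j]:
            if c + cost[l] < cost[j]:
                cost[j] = c + cost[l]
                d[j] = l
    path, j = [0], 0               # recover the source-to-sink path from d
    while j != n - 1:
        j = d[j]
        path.append(j)
    return cost[0], path

# A made-up 3-stage graph: 0 -> {1, 2} -> 3.
edges = [[(1, 1), (2, 5)], [(3, 4)], [(3, 2)], []]
print(fgraph(4, edges))            # -> (5, [0, 1, 3])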
MULTISTAGE GRAPHS - BACKWARD APPROACH

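Likewise for the backward approach (the slide's pseudocode is not reproduced; the function name and encoding below are assumptions): bcost(i, j), the cost of a minimum-cost path from the source to vertex j, is filled in left to right.

def bgraph(n, redges):
    # redges[j] is a list of (l, c) pairs: an edge <l, j> of weight c,
    # i.e. the edges *entering* j.  bcost[j] = min{ bcost[l] + c(l, j) }.
    INF = float('inf')
    bcost = [INF] * n
    bcost[0] = 0                   # the source reaches itself at no cost
    for j in range(1, n):
        for l, c in redges[j]:
            bcost[j] = min(bcost[j], bcost[l] + c)
    return bcost[n - 1]            # cost of the best source-to-sink path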
The time complexity analysis of the function fgraph is fairly
straightforward.
• If G is represented by its adjacency lists, then r in line 9 of Algorithm
5.1 can be found in time proportional to the degree of vertex j.
Hence, if G has |E| edges, the time for the for loop of line 7 is
Θ(|V| + |E|). The time for the for loop of line 16 is Θ(k). Hence, the
total time is Θ(|V| + |E|).

DYNAMIC PROGRAMMING APPROACH

• Dynamic programming approach (forward approach):

• d(S, T) = min{1 + d(A, T), 2 + d(B, T), 5 + d(C, T)}

• d(A, T) = min{4 + d(D, T), 11 + d(E, T)}
          = min{4 + 18, 11 + 13} = 22.
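Continuing the same recurrence with the numbers already given on the earlier slide (d(F, T) = 2 and the path (S, C, F, T) of cost 9 imply d(C, T) = 4; d(B, T) is not shown on the slides, but any value ≥ 7 leaves the result unchanged):

• d(C, T) = 2 + d(F, T) = 2 + 2 = 4
• d(S, T) = min{1 + 22, 2 + d(B, T), 5 + 4} = 9, achieved by (S, C, F, T).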
ASSIGNMENT

