Dynamic programming

Dynamic programming is an algorithmic technique applicable to a problem which is
composed of overlapping sub-problems and which has optimal sub-structure.
A problem is said to have overlapping sub-problems if it can be broken down into sub-
problems which are reused several times, or if a recursive algorithm solves the same sub-problem
over and over.
This overlapping nature of the sub-problems helps reduce the complexity by using a technique
called memoization.
Memoization is a technique in which the result of a sub-problem is stored in a data structure when it
is computed for the first time. The next time the same sub-problem is needed, we look up the
data structure before solving the sub-problem.
So the output of any sub-problem is re-used to solve the other sub-problems.
As a result, the computation of the overlapping part of the sub-problems is done only once, thereby
reducing the complexity.
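The store-then-look-up idea above can be written as a minimal sketch in Python, using a dictionary as the data structure (the names `memoize`, `cache`, and `square` are illustrative, not from these notes):

```python
# Minimal memoization sketch: store each result the first time it is
# computed, and look it up on every later call with the same arguments.
def memoize(f):
    cache = {}  # data structure holding already-computed results

    def wrapper(*args):
        if args not in cache:       # computed for the first time?
            cache[args] = f(*args)  # solve and store
        return cache[args]          # otherwise just look it up

    return wrapper

@memoize
def square(n):
    return n * n

print(square(4))  # computed and stored
print(square(4))  # looked up, not recomputed
```

Any pure function of its arguments can be wrapped this way; the trade-off is the extra space used by the cache.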
Optimal Substructure-
A problem is said to have optimal sub-structure if an optimal solution for the problem can be
constructed efficiently from optimal solutions of its sub-problems.
Summary-
• Solution to a sub-problem is obtained by using the results of previously computed sub-
problems.
• All sub-problems are computed using previously computed sub-problems except the
base cases, which we derive before starting to solve the sub-problems.
• So we consider base cases as previously computed sub-problems for solving other sub-
problems.
E.g.- Fibonacci series
In the Fibonacci series, the base cases are the following:-
• We consider the solution for Fibonacci of 0 = 1
• We consider the solution for Fibonacci of 1 = 1
Now, we use these base cases to get the solution to other sub-problems.
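A plain recursive sketch in Python shows the base cases and the overlapping sub-problems this produces (base-case convention fib(0) = fib(1) = 1 follows these notes):

```python
# Naive recursive Fibonacci: base cases stop the recursion, but the
# two recursive calls overlap heavily (both end up recomputing
# fib(n-3), fib(n-4), ... many times).
def fib(n):
    if n == 0 or n == 1:  # base cases, treated as already "computed"
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(5))  # sequence 1, 1, 2, 3, 5, 8 -> fib(5) = 8
```

Without memoization this runs in exponential time, which is exactly what dynamic programming removes.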
Difference between dynamic programming and divide & conquer
Divide and conquer -
Divides the problem into sub-problems.
Solves the sub-problems.
Combines the results of the sub-problems to get the solution to the original problem.
Note:- In divide & conquer the sub-problems are independent of each other (whereas in DP
sub-problems are overlapping in nature, meaning the solution to one sub-problem may depend on
the output of another sub-problem for its computation).
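To make the contrast concrete, here is merge sort as a divide-and-conquer sketch (merge sort is a standard example, not one named in these notes): the two halves are independent sub-problems, so nothing is memoized.

```python
# Divide and conquer: split, solve each half independently, combine.
# Neither half needs the other's result to be solved.
def merge_sort(a):
    if len(a) <= 1:
        return a                    # base case
    mid = len(a) // 2
    left = merge_sort(a[:mid])      # independent sub-problem
    right = merge_sort(a[mid:])     # independent sub-problem
    merged = []                     # combine step
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

Because the halves never share work, memoization would buy nothing here; that is the key difference from DP.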
Key idea behind dynamic programming
• In general, to solve a given problem, we need to solve different parts of the problem
(subproblems), then combine the solutions of the subproblems to reach an overall
solution.
• Often, many of these subproblems are really the same.
• The dynamic programming approach seeks to solve each subproblem only once, thus
reducing the number of computations: once the solution to a given subproblem has been
computed, it is stored or "memoized"; the next time the same solution is needed, it is
simply looked up.
• This approach is especially useful when the number of repeating subproblems grows
exponentially as a function of the size of the input.
Top-Down Approach
Problem Solving Help:- The main task of solving a problem using DP is:
• Breaking down the problem into sub-problems.
• Finding out the base cases and the recurrence relation (a re-occurring relation which tells how
one sub-problem uses another sub-problem).
[Diagram: a Problem is broken down into Sub-Problem 1, Sub-Problem 2, Sub-Problem 3, ..., Sub-Problem n.
Sub-Problem 1 acts as the base case for solving the rest of the sub-problems; it is memoized.
When a new sub-problem composed of Sub-Problem 1 is solved, Sub-Problem 1 is not recalculated;
instead it is looked up in the data structure used for memoization.
A new sub-problem is solved once; when it becomes a part of further sub-problems, it is not
recomputed there. Computation of each sub-problem is done only once.]
Types of dynamic programming Approaches
1) Top-down Approach:- (Memoization Approach)
• In this approach, we start from the top problem, which recurses down to solve the
(overlapping) sub-problems.
• The solution to each sub-problem is computed and stored in a table.
• Whenever a new sub-problem is being solved, we first check whether the sub-problem is
already solved or not. If it is, then we use the same result; otherwise we compute its result
and store it in the table.
• All recursive algorithms that use memoization are examples of top-down dynamic programming.
Cons:-
• It requires us to save all the previously computed results at any point in time.
• If the size of the problem is large, then we end up occupying a large space for the previously
computed results, which is not good.
• Recursion overhead will be there.
• Here any subproblem can be encountered at any time, so we need to store all the results till the
end.
It is a lazy approach (or on-demand approach): sub-problems are solved only when needed.
E.g.:- Solving the Fibonacci series using a recursive algorithm and memoizing the results at
each stage:
Fib(n) = Fib(n-1) + Fib(n-2), called recursively.
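A short Python sketch of this top-down scheme, with the table kept as a dictionary and the base cases of these notes (fib(0) = fib(1) = 1) pre-stored:

```python
# Top-down Fibonacci: recurse from n downward, checking the table
# before solving any sub-problem and storing each result once.
def fib_top_down(n, table=None):
    if table is None:
        table = {0: 1, 1: 1}  # base cases already in the table
    if n not in table:        # not solved yet -> compute and store
        table[n] = fib_top_down(n - 1, table) + fib_top_down(n - 2, table)
    return table[n]           # look up the (possibly just stored) result

print(fib_top_down(10))  # 89 with this base-case convention
```

Each fib value is computed exactly once, so the run is linear in n, at the cost of keeping the whole table and the recursion stack alive.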
2) Bottom-Up Approach:- (dynamic programming)
• In this approach, we solve the sub-problems first, in a particular order, and then use their
solutions to build the solution to the original problem/top problems.
• Here also the solution to sub-problems is memoized.
• The order of solving the sub-problems is of important interest, as this order helps us in
memoizing only the intermediate results and throwing away the rest of the previously
computed results which are of no use (will not be used further).
• Here, since we solve the sub-problems in such an order, only the intermediate results of the
sub-problems are required to build the solution to the original problem/top problems.
Pros:-
• It requires us to store only the intermediate results of sub-problems and throws away the
rest of the results of sub-problems.
• No recursion overhead.
• Space requirement is not much.
For solving sub-problems, the ordering of the sub-problems is important.
E.g.:- Fibonacci series using the following recurrence relation:
Fib(n) = Fib(n-1) + Fib(n-2), looping i from 2 to n.
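The loop above can be sketched in Python; note that only the two intermediate results the recurrence needs are kept, and everything older is thrown away (base cases fib(0) = fib(1) = 1 as in these notes):

```python
# Bottom-up Fibonacci: solve sub-problems in order from 2 to n,
# keeping only the last two results at any moment.
def fib_bottom_up(n):
    if n < 2:
        return 1                       # base cases
    prev, curr = 1, 1                  # fib(0), fib(1)
    for _ in range(2, n + 1):          # solve sub-problems in order
        prev, curr = curr, prev + curr # discard everything older
    return curr

print(fib_bottom_up(10))  # 89, same as the top-down version
```

This is the space advantage the Pros list refers to: O(1) storage and no recursion, versus the top-down version's table plus call stack.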