
Greedy approach vs Dynamic programming

A Greedy algorithm is an algorithmic paradigm that builds up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. Problems where choosing the locally optimal option also leads to a globally optimal solution are the best fit for the greedy approach.
For example, consider the Fractional Knapsack Problem. The locally optimal strategy is to choose the item with the maximum value-to-weight ratio. This strategy also leads to a globally optimal solution because we are allowed to take fractions of an item.
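As a rough illustration, here is a minimal sketch of that greedy strategy in Python; the function name and the sample items are illustrative, not taken from the text.

def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs; capacity: knapsack capacity.
    # Greedy choice: always take the item with the highest value-to-weight ratio.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total_value = 0.0
    for value, weight in items:
        if capacity == 0:
            break
        take = min(weight, capacity)          # take the whole item, or only a fraction
        total_value += value * (take / weight)
        capacity -= take
    return total_value

# Items (value, weight) = (60, 10), (100, 20), (120, 30) with capacity 50 give 240.0.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))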
Dynamic programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming. The idea is simply to store the results of subproblems so that we do not have to re-compute them when needed later. This simple optimization can reduce the time complexity from exponential to polynomial. For example, a simple recursive solution for Fibonacci Numbers has exponential time complexity, but if we optimize it by storing the solutions of subproblems, the time complexity reduces to linear.
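The Fibonacci example looks roughly like the following sketch, assuming a simple top-down memoization with a dictionary; the function names are illustrative.

def fib_naive(n):
    # Repeated calls for the same inputs make this exponential in n.
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, memo=None):
    # Storing each subproblem result once reduces the time to linear in n.
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib_memo(40))   # 102334155, computed without re-solving any subproblem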
Below are some major differences between the Greedy method and Dynamic Programming:
Feasibility:
  Greedy method: We make whatever choice seems best at the moment, in the hope that it will lead to a globally optimal solution.
  Dynamic Programming: We make a decision at each step considering the current problem and the solutions to previously solved subproblems in order to calculate the optimal solution.

Optimality:
  Greedy method: There is sometimes no guarantee of getting an optimal solution.
  Dynamic Programming: It is guaranteed to generate an optimal solution, as it generally considers all possible cases and then chooses the best.

Recursion:
  Greedy method: Follows the problem-solving heuristic of making the locally optimal choice at each stage.
  Dynamic Programming: An algorithmic technique that is usually based on a recurrence formula using previously calculated states.

Memoization:
  Greedy method: More efficient in terms of memory, as it never looks back at or revises previous choices.
  Dynamic Programming: Requires a DP table for memoization, which increases its memory complexity.

Time complexity:
  Greedy method: Generally faster. For example, Dijkstra's shortest path algorithm takes O(E log V + V log V) time.
  Dynamic Programming: Generally slower. For example, the Bellman-Ford algorithm takes O(VE) time.

Fashion:
  Greedy method: Computes its solution by making its choices in a serial forward fashion, never looking back or revising previous choices.
  Dynamic Programming: Computes its solution bottom up or top down by synthesizing it from smaller optimal subsolutions.

Example:
  Greedy method: Fractional Knapsack Problem.
  Dynamic Programming: 0/1 Knapsack Problem (see the sketch below).
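As a contrast to the fractional knapsack sketch above, here is a minimal bottom-up sketch for the 0/1 Knapsack Problem named in the Example row; the function name and the sample items are assumptions for illustration.

def knapsack_01(values, weights, capacity):
    # dp[w] = best value achievable with capacity w using the items seen so far.
    dp = [0] * (capacity + 1)
    for i in range(len(values)):
        # Iterate capacities downwards so each item is used at most once.
        for w in range(capacity, weights[i] - 1, -1):
            dp[w] = max(dp[w], dp[w - weights[i]] + values[i])
    return dp[capacity]

# Same items as the fractional example, but no fractions allowed: 220 instead of 240.
print(knapsack_01([60, 100, 120], [10, 20, 30], 50))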
