
DEPARTMENT OF CS & IT

ANALYSIS AND DESIGN OF ALGORITHMS


(21BCA5C01)

Module 3 : Dynamic Programming

Prepared by
Dr.A.Kannagi
UNIT 3: Dynamic Programming

1. The method - Computing the Binomial Coefficient


2. All pairs shortest path-Floyd’s algorithm
3. Warshall’s algorithm
4. Knapsack problem.

1. The method - Dynamic Programming


Dynamic Programming was invented by Richard Bellman in the 1950s. It is a very general
technique for solving optimization problems. Like the divide-and-conquer method, Dynamic
Programming solves problems by combining the solutions of sub-problems. Unlike
divide-and-conquer, however, a Dynamic Programming algorithm solves each sub-problem
just once and saves its answer in a table, thereby avoiding the work of re-computing the
answer every time.
Two main properties of a problem suggest that the given problem can be solved
using Dynamic Programming. These properties are overlapping sub-problems and
optimal substructure.
1. Overlapping Sub-Problems
Similar to Divide-and-Conquer approach, Dynamic Programming also combines
solutions to sub-problems. It is mainly used where the solution of one sub-problem is
needed repeatedly. The computed solutions are stored in a table, so that these don’t have to
be re-computed. Hence, this technique is needed where overlapping sub-problem exists.
For example, Binary Search does not have overlapping sub-problems, whereas a recursive
program for Fibonacci numbers has many overlapping sub-problems.
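The Fibonacci example can be sketched in Java: the naive recursion recomputes the same sub-problems exponentially many times, while the table-based version computes each one exactly once. This is a minimal illustration of the idea, not code from the notes; the names are my own.

```java
// Illustrative sketch: Fibonacci with and without a sub-problem table.
class FibonacciDemo {
    // Plain recursion: the same sub-problems are recomputed many times.
    static long fibNaive(int n) {
        if (n <= 1) return n;
        return fibNaive(n - 1) + fibNaive(n - 2);
    }

    // Dynamic programming: each sub-problem is solved once and stored in a table.
    static long fibDP(int n) {
        long[] table = new long[n + 1];
        for (int i = 0; i <= n; i++)       // fill bottom-up; table[i] holds F(i)
            table[i] = (i <= 1) ? i : table[i - 1] + table[i - 2];
        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fibDP(40)); // 102334155; fibNaive(40) would make ~10^8 calls
    }
}
```

The table turns an exponential-time recursion into a linear-time loop, which is exactly the saving that overlapping sub-problems make possible.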
2. Optimal Sub-Structure
A given problem has Optimal Substructure Property, if the optimal solution of the
given problem can be obtained using optimal solutions of its sub-problems.
For example, the Shortest Path problem has the following optimal substructure property −

If a node x lies in the shortest path from a source node u to destination node v, then
the shortest path from u to v is the combination of the shortest path from u to x, and the
shortest path from x to v.
The standard All Pair Shortest Path algorithms like Floyd-Warshall and Bellman-Ford
are typical examples of Dynamic Programming.
Steps of Dynamic Programming Approach
Dynamic Programming algorithm is designed using the following four steps −
 Characterize the structure of an optimal solution.
 Recursively define the value of an optimal solution.
 Compute the value of an optimal solution, typically in a bottom-up fashion.
 Construct an optimal solution from the computed information.
Applications of Dynamic Programming Approach
 Matrix Chain Multiplication
 Longest Common Subsequence
 Travelling Salesman Problem
Computing the Binomial Coefficient
Computing binomial coefficients is a non-optimization problem, but it can still be solved
using dynamic programming.
Following are common definitions of Binomial Coefficients.
1. A binomial coefficient C(n, k) can be defined as the coefficient of X^k in the expansion of
(1 + X)^n.
2. A binomial coefficient C(n, k) also gives the number of ways, disregarding order, that k
objects can be chosen from among n objects; more formally, the number of k-element
subsets (or k-combinations) of an n-element set.
Binomial coefficients are represented by C(n, k) or nCk and appear as the coefficients in
the expansion of a binomial:
(a + b)^n = nC0 a^n b^0 + nC1 a^(n-1) b^1 + nC2 a^(n-2) b^2 + … + nCn a^0 b^n
where nC0, nC1, nC2, …, nCn are called binomial coefficients.

The recurrence relates each coefficient to those of the prior power:

C(n, k) = C(n-1, k-1) + C(n-1, k) for n > k > 0

with the initial conditions (IC)

IC: C(n, 0) = C(n, n) = 1
The dynamic programming algorithm constructs an (n+1) × (k+1) table, with the first column
and the diagonal filled out using the IC.
Construct the table:

The table is then filled out iteratively, row by row using the recursive relation.

Algorithm Binomial(n, k)
for i ← 0 to n do // fill out the table row by row
for j ← 0 to min(i, k) do
if j == 0 or j == i then
C[i, j] ← 1 // IC
else
C[i, j] ← C[i-1, j-1] + C[i-1, j] // recurrence relation
return C[n, k]

Complexity Analysis
 The cost of the algorithm is the cost of filling out the table, with addition as the basic
operation. Because k ≤ n, the sum splits into two parts: for the rows with i ≤ k only the
triangle up to the diagonal needs to be filled out, while each remaining row is filled out
across its entire length of k entries.
 T(n, k) = sum for the upper triangle + sum for the lower rectangle = Θ(nk)

Program :
// A Dynamic Programming based solution that uses table C[][] to calculate the Binomial Coefficient
class BinomialCoefficient
{
    // Returns value of Binomial Coefficient C(n, k)
    static int binomialCoeff(int n, int k)
    {
        int C[][] = new int[n + 1][k + 1];
        int i, j;
        // Calculate value of Binomial Coefficient in bottom up manner
        for (i = 0; i <= n; i++)
        {
            for (j = 0; j <= min(i, k); j++)
            {
                // Base Cases
                if (j == 0 || j == i)
                    C[i][j] = 1;
                // Calculate value using previously stored values
                else
                    C[i][j] = C[i - 1][j - 1] + C[i - 1][j];
            }
        }
        return C[n][k];
    }
    // A utility function to return minimum of two integers
    static int min(int a, int b)
    {
        return (a < b) ? a : b;
    }
    /* Driver program to test above function */
    public static void main(String args[])
    {
        int n = 5, k = 2;
        System.out.println("Value of C(" + n + "," + k + ") is " + binomialCoeff(n, k));
    }
}
Output
Value of C(5,2) is 10
Time Complexity: O(n*k)
Auxiliary Space: O(n*k)
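Since row i of the table depends only on row i-1, the O(n*k) auxiliary space can be reduced to O(k) by keeping a single row of Pascal's triangle. A minimal sketch of this space-optimized variant (my own, not part of the notes):

```java
// Space-optimized binomial coefficient: keep only one row of Pascal's triangle.
class BinomialRow {
    static long binomial(int n, int k) {
        long[] c = new long[k + 1];
        c[0] = 1; // C(i, 0) = 1 for every i
        for (int i = 1; i <= n; i++)
            // sweep right-to-left so c[j - 1] still holds the previous row's value
            for (int j = Math.min(i, k); j >= 1; j--)
                c[j] += c[j - 1]; // C(i, j) = C(i-1, j) + C(i-1, j-1)
        return c[k];
    }

    public static void main(String[] args) {
        System.out.println(binomial(5, 2)); // 10
    }
}
```

The right-to-left inner loop is what makes the single row work: updating c[j] before c[j-1] guarantees both operands still belong to row i-1.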
Explanation:
1==========>> n = 0, C(0,0) = 1

1–1========>> n = 1, C(1,0) = 1, C(1,1) = 1
1–2–1======>> n = 2, C(2,0) = 1, C(2,1) = 2, C(2,2) = 1
1–3–3–1====>> n = 3, C(3,0) = 1, C(3,1) = 3, C(3,2) = 3, C(3,3)=1
1–4–6–4–1==>> n = 4, C(4,0) = 1, C(4,1) = 4, C(4,2) = 6, C(4,3)=4, C(4,4)=1
2. All Pairs Shortest Path Problem – Floyd-Warshall Algorithm
Given a directed, connected weighted graph G(V,E), for each edge ⟨u,v⟩∈E, a weight
w(u,v) is associated with the edge. The all pairs of shortest paths problem (APSP) is to find a
shortest path from u to v for every pair of vertices u and v in V.
Algorithms for the APSP problem
 Matrix Multiplication / Repeated Squaring
 The Floyd-Warshall Algorithm
 Transitive Closure of a Graph
Floyd–Warshall algorithm is an algorithm for finding shortest paths in a weighted
graph with positive or negative edge weights (but with no negative cycles). A single
execution of the algorithm will find the lengths (summed weights) of shortest paths between
all pairs of vertices.
The all pair shortest path algorithm is also known as Floyd-Warshall algorithm is used
to find all pair shortest path problem from a given weighted graph. As a result of this
algorithm, it will generate a matrix, which will represent the minimum distance from any
node to all other nodes in the graph.
The algorithm proceeds by allowing an additional intermediate vertex at each step.
For each newly introduced intermediate vertex x, the shortest path between any pair of
vertices u and v (x, u, v ∈ V) is the minimum of the previous best estimate of δ(u, v) and the
combination of the paths u→x and x→v:
δ(u, v) ← min(δ(u, v), δ(u, x) + δ(x, v))
Let the directed graph be represented by a weighted matrix W.

FLOYD-WARSHALL (W)
n ← rows[W]
D(0) ← W
for k ← 1 to n
do for i ← 1 to n
do for j ← 1 to n
do dij(k) ← min(dij(k-1), dik(k-1) + dkj(k-1))
return D(n)
Let dij(k) be the weight of the shortest path from vertex i to vertex j with all intermediate
vertices in the set {1, 2, ..., k}.
A recursive definition is given by

dij(k) = wij, if k = 0
dij(k) = min(dij(k-1), dik(k-1) + dkj(k-1)), if k ≥ 1
The strategy adopted by the Floyd-Warshall algorithm is Dynamic Programming.


The running time of the Floyd-Warshall algorithm is determined by the triply nested for
loops of lines 3-6. Each execution of line 6 takes O(1) time. The algorithm thus runs in time
Θ(n³).
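The pseudocode above maps directly to Java. The following is a minimal sketch, not the notes' own program; the INF sentinel and the sample weight matrix are my own choices (INF is kept small enough that INF + INF does not overflow an int).

```java
// Floyd-Warshall: all-pairs shortest path lengths via dynamic programming.
class FloydWarshall {
    static final int INF = 1_000_000_000; // stands in for "no edge"; INF + INF still fits in int

    // Returns the matrix D(n) of shortest-path lengths between every pair of vertices.
    static int[][] allPairsShortestPaths(int[][] w) {
        int n = w.length;
        int[][] d = new int[n][n];
        for (int i = 0; i < n; i++)
            d[i] = w[i].clone();              // D(0) = W
        for (int k = 0; k < n; k++)           // allow vertex k as an intermediate
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (d[i][k] + d[k][j] < d[i][j])
                        d[i][j] = d[i][k] + d[k][j];
        return d;
    }

    public static void main(String[] args) {
        // A small directed example graph (my own, not the one pictured in the notes).
        int[][] w = {
            {0, 3, INF, 7},
            {8, 0, 2, INF},
            {5, INF, 0, 1},
            {2, INF, INF, 0}
        };
        int[][] d = allPairsShortestPaths(w);
        // Shortest path from vertex 2 to vertex 4 goes 2 -> 3 -> 4 (weights 2 + 1).
        System.out.println(d[1][3]); // 3
    }
}
```

Note the update order: k must be the outermost loop, since D(k) is defined in terms of D(k-1) for the fixed intermediate vertex k.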
EXAMPLE 1: Apply Floyd-Warshall algorithm for constructing the shortest path. Show
that matrices D(k) and π(k) computed by the Floyd-Warshall algorithm for the graph.

Follow the steps below to find the shortest path between all the pairs of vertices.
1. Create a matrix A0 of dimension n*n where n is the number of vertices. The row and
the column are indexed as i and j respectively. i and j are the vertices of the graph.

Each cell A[i][j] is filled with the distance from the ith vertex to the jth vertex. If there is no
path from ith vertex to jth vertex, the cell is left as infinity.

2. Now, create a matrix A1 using matrix A0. The elements in the first column and the
first row are left as they are. The remaining cells are filled in the following way.
Let k be the intermediate vertex allowed in the shortest path from source to destination. In
this step, k is the first vertex. A[i][j] is filled with (A[i][k] + A[k][j]) if (A[i][j] > A[i][k] + A[k][j]).
That is, if the direct distance from the source to the destination is greater than the path
through the vertex k, then the cell is filled with A[i][k] + A[k][j].
In this step, k is vertex 1. We calculate the distance from the source vertex to destination
vertex through this vertex k.

For example: For A1[2, 4], the direct distance from vertex 2 to 4 is 4, and the distance from
vertex 2 to 4 through vertex 1 (i.e. from vertex 2 to 1 and from vertex 1 to 4) is 7.
Since 4 < 7, A1[2, 4] is filled with 4.
3. In a similar way, A2 is created using A1. The elements in the second column and the
second row are left as they are.

In this step, k is the second vertex (i.e. vertex 2). The remaining steps are the same as in step
2.

4. Similarly, A3 and A4 are also created.

5. A4 gives the shortest path between each pair of vertices.

EXAMPLE 2 :

Solution:

Step (i) When k = 0

Step (ii) When k = 1

Step (iii) When k = 2

Step (iv)
When k = 3

Step (v) When k = 4

Step (vi) When k = 5

Time Complexity
There are three nested loops, each running n times, and each iteration does a constant
amount of work. So, the time complexity of the Floyd-Warshall algorithm is O(n³).

Space Complexity
The space complexity of the Floyd-Warshall algorithm is O(n²).
3. Warshall's Algorithm
Warshall's algorithm is used to determine the transitive closure of a directed graph, i.e.
all reachability paths in the graph, by using the adjacency matrix. For this, it generates a
sequence of n + 1 matrices, where n is the number of vertices:
R(0), ..., R(k-1), R(k), ... , R(n)
A path in a simple graph can be defined by a sequence of vertices. In the kth matrix
R(k) = (rij(k)), the element in the ith row and jth column is 1 if there is a path from vi to vj
such that every intermediate vertex wq is among the first k vertices, i.e. 1 ≤ q ≤ k.
The matrix R(0) describes the paths without any intermediate vertices, so it is simply
the adjacency matrix. The matrix R(n) contains a 1 wherever there is a path between the
corresponding vertices using any of the n vertices of the graph as intermediates, so it is the
transitive closure.
Now consider the case in which rij(k) is 1 but rij(k-1) is 0. This can happen only if there is
a path from vi to vj that uses vk. More specifically, the list of vertices has the form
vi, wq (where 1 ≤ q < k), vk, wq (where 1 ≤ q < k), vj

This case occurs only if rik(k-1) = rkj(k-1) = 1 (here k is a superscript index).
Otherwise, rij(k) is 1 if and only if rij(k-1) = 1.
So in summary, we can say that
rij(k) = rij(k-1) or (rik(k-1) and rkj(k-1))
Now we will describe the algorithm of Warshall's Algorithm for computing transitive
closure
Warshall(A[1...n, 1...n]) // A is the adjacency matrix
R(0) ← A
for k ← 1 to n do
for i ← 1 to n do
for j ← 1 to n do
R(k)[i, j] ← R(k-1)[i, j] or (R(k-1)[i, k] and R(k-1)[k, j])
return R(n)
Here,
o The time efficiency of this algorithm is Θ(n³).
o As for space efficiency, each matrix can be written over its predecessor, so only one
matrix needs to be kept at a time.
o Θ(n³) is the worst-case cost. Note that a traversal-based brute force algorithm can be
better than Warshall's algorithm; in fact, it is faster for a sparse graph.
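Warshall's pseudocode maps directly to Java using boolean matrices. This is a minimal sketch of my own; the sample digraph is illustrative, not the one pictured in the notes.

```java
// Warshall's algorithm: transitive closure of a digraph via dynamic programming.
class WarshallClosure {
    // Returns the transitive closure T of the adjacency matrix a.
    static boolean[][] transitiveClosure(boolean[][] a) {
        int n = a.length;
        boolean[][] r = new boolean[n][n];
        for (int i = 0; i < n; i++)
            r[i] = a[i].clone();              // R(0) = A
        for (int k = 0; k < n; k++)           // allow vertex k as an intermediate
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    // R(k)[i, j] = R(k-1)[i, j] or (R(k-1)[i, k] and R(k-1)[k, j])
                    r[i][j] = r[i][j] || (r[i][k] && r[k][j]);
        return r;
    }

    public static void main(String[] args) {
        // Sample digraph with edges a -> b, b -> d, d -> a, d -> c (my own example).
        boolean[][] a = {
            {false, true,  false, false},
            {false, false, false, true},
            {false, false, false, false},
            {true,  false, true,  false}
        };
        boolean[][] t = transitiveClosure(a);
        System.out.println(t[0][2]); // true: a reaches c via b and d
    }
}
```

This is the same triple loop as Floyd-Warshall with (min, +) replaced by (or, and), which is why the two algorithms share their Θ(n³) cost.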
Example of Transitive closure

In this example, we will consider two graphs. The first graph is described as follows:

The matrix of this graph is described as follows:

The second graph is described as follows:

The matrix of this graph is described as follows:

The main idea of these graphs is described as follows:

o There is a path from vertex i to vertex j if
o the graph contains an edge from i to j; or
o the graph contains a path from i to j with the help of vertex 1; or
o the graph contains a path from i to j with the help of vertex 1 and/or vertex 2; or
o the graph contains a path from i to j with the help of vertices 1, 2, and/or 3; or
o the graph contains a path from i to j with the help of any other vertices.

On the kth iteration, the algorithm uses the vertices among 1, …, k (known as the
intermediate vertices) and determines whether or not a path exists between vertices i and j.

In a directed graph with n vertices, the transitive closure is described by the n-by-n
Boolean matrix T, in which the element in the ith row and jth column is 1 if and only if there
is a directed path from the ith vertex to the jth vertex; otherwise, the element is zero. The
transitive closure is described as follows:

The adjacency matrix is a square matrix used to represent a finite graph. Its elements
indicate whether pairs of vertices are adjacent or not. The adjacency matrix is also known as
the connection matrix. Each position holds 0 or 1 on the basis of whether Vi and Vj are
adjacent: if the graph contains an edge between the two nodes, the position is assigned 1; if
there is no edge between them, it is assigned 0. The adjacency matrix is described as follows:

A digraph is a pair G = (V, E) of a vertex set and a set of directed edges. The digraph is
described as follows:
Warshall's Algorithm (Matrix generation)

Recurrence relating elements R(k) to elements of R(k-1) can be described as follows:

R(k)[i, j] = R(k-1)[i, j] or (R(k-1)[i, k] and R(k-1)[k, j])

In order to generate R(k) from R(k-1), the following rules will be implemented:

Rule 1: In row i and column j, if the element is 1 in R(k-1), then in R(k), it will also remain 1.

Rule 2: In row i and column j, if the element is 0 in R(k-1), then the element in R(k) will be
changed to 1 iff the element in its column j and row k, and the element in row i and column
k are both 1's in R(k-1).

Application of Warshall's algorithm to the directed graph

In order to understand this, we will use a graph, which is described as follow:

For this graph R(0) will be looked like this:

Here R(0) is the adjacency matrix. In R(0), we see the paths that have no intermediate
vertices. We obtain R(1) with the help of the boxed row and column in R(0).

In R(1), we see the paths whose intermediate vertices are among the first 1 vertex, i.e.
only vertex a. It contains a new path from d to b. We obtain R(2) with the help of the boxed
row and column in R(1).

In R(2), we see the paths whose intermediate vertices are among the first 2 vertices, i.e.
a and b. It contains two new paths. We obtain R(3) with the help of the boxed row and
column in R(2).

In R(3), we see the paths whose intermediate vertices are among the first 3 vertices, i.e.
a, b and c. It does not contain any new paths. We obtain R(4) with the help of the boxed row
and column in R(3).

In R(4), we see the paths whose intermediate vertices may be any of the 4 vertices a, b,
c and d. It contains five new paths.

Example 1:

In this example, we will assume A = {1, 2, 3, 4} and let R be the relation on A whose matrix is
described as follows:

Now we will use Warshall's algorithm to compute MR∞.

Solution:

Here, the matrix MR is the starting point. It is described as follows:

Now we will assume W0 = MR and then successively compute W1, W2, W3, and W4. (In
general, if MR is an n × n matrix, we assume W0 = MR and compute W1, W2, W3, …, Wn.)
For each k > 0, Wk is calculated with the help of row k and column k of Wk-1. So W1 is
computed with the help of column 1 and row 1 of W0 = MR, W2 is calculated with the help of
column 2 and row 2 of W1, and the same process is followed for W3, W4, and so on. The
step-by-step process to get Wk from Wk-1 is described as follows:

Step 1:

In this step, we transfer all 1's in Wk-1 to the corresponding positions of Wk. Taking
k = 1 for this problem, we get the following table:

Step 2:

Now we make separate lists of all the columns that have 1's in row k and all the rows
that have 1's in column k. After this separation, we get columns 1, 2, and 4, and rows 1, 2,
and 3.

Step 3:

In this step, we pair each of the row numbers with each of the column numbers. If
there is not already a 1 in a paired position, we add a 1 at that position of W1. All remaining
positions are filled with 0.

The pairs which we get are described as follow:

(1, 1), (1, 2), (1, 4), (2, 1), (2, 2), (2, 4), (3, 1), (3, 2) and (3, 4).

The matrix with the help of these pairs is described as follows:

Now we will repeat the same process for k = 2.


The first step will provide us the following matrix:

Using the second step, we will get the row numbers 1, 2, 3, 4 and column numbers 1, 2, 3, 4.
With the help of third step, we will get all possible pairs of row numbers and column
numbers for k = 2, which are described as follows:

(1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (2, 3), (2, 4), (3, 1), (3, 2), (3, 3), (3, 4), (4, 1), (4, 2), (4, 3),
and (4, 4).

The matrix with the help of these pairs is described as follows:

In this example, we would also have to calculate W3 and W4, but W3 and W4 will give the
same result as W2: according to this algorithm, if an entry in Wk-1 is 1, then the corresponding
entry of Wk remains 1, and hence it remains 1 in Wk+1, …, Wn as well.

Thus

And the transitive closure R∞ can be defined as the total relation A × A, which contains
every possible ordered pair of elements of A. So everything in A is related by R∞ to
everything in A.

Now we will use the Boolean operations ∧ and ∨ on {0, 1} to state this algorithm more
formally. If wij(k) is the (i, j) entry of Wk, then on the basis of the steps described above,

wij(k) = wij(k-1) ∨ (wik(k-1) ∧ wkj(k-1))

According to the second step of this algorithm, if i exists in the row list and j exists in
the column list, then wik(k-1) ∧ wkj(k-1) will be 1. By the third step, wij(k) is 1 either because
wij(k-1) was already 1 (handled in the first step), or because (i, j) is one of the pairs formed in
the third step.

4. Knapsack Problem :
 Problem: Given a set of items, each having a weight and a value (profit) associated with
it, find the set of items such that the total weight is less than or equal to the capacity of
the knapsack and the total value earned is as large as possible.
 The knapsack problem is useful in solving resource allocation problems. Let X = <x1, x2,
x3, …, xn> be the set of n items. Sets W = <w1, w2, w3, …, wn> and V = <v1, v2, v3, …, vn>
are the weights and values associated with each item in X. The knapsack capacity is M
units.
 The knapsack problem is to find the set of items which maximizes the profit such that
the collective weight of the selected items does not exceed the knapsack capacity.
 Select items from X and fill the knapsack such that it maximizes the profit. The
knapsack problem has two variations. 0/1 knapsack does not allow breaking of items:
either add an entire item or reject it. It is also known as the binary knapsack.
Fractional knapsack allows breaking of items, with profit earned proportionally.
Mathematical formulation
We can select any item only once. We can put item xi in the knapsack if the knapsack can
accommodate it. If the item is added to the knapsack, the associated profit is accumulated.
The knapsack problem can be formulated as follows:

Maximize Σ (i = 1 to n) vi xi

subject to Σ (i = 1 to n) wi xi ≤ M

xi ∈ {0, 1} for binary knapsack
xi ∈ [0, 1] for fractional knapsack

We will discuss two approaches for solving knapsack using dynamic programming.

First Approach for Knapsack Problem using Dynamic Programming

If the weight of the item is larger than the remaining knapsack capacity, we skip the
item, and the solution of the previous step remains as it is. Otherwise, we should add the
item to the solution set and the problem size will be reduced by the weight of that item.
Corresponding profit will be added for the selected item.

Dynamic programming divides the problem into small sub-problems. Let V be a table of
the solutions of the sub-problems: V[i, j] represents the solution for capacity j using the first
i items. The mathematical notation of the knapsack problem is given as:

V [0 … n, 0 … M] : Size of the table

V (n, M) = Solution

n = Number of items

Algorithm

Algorithm for binary knapsack using dynamic programming is described below :

Algorithm DP_BINARY_KNAPSACK (V, W, M)

// Description: Solve binary knapsack problem using dynamic programming
// Input: Set of items X, set of weights W, profits of items V and knapsack capacity M
// Output: Array V, which holds the solution of the problem
for i ← 1 to n do
V[i, 0] ← 0
end
for i ← 1 to M do
V[0, i] ← 0
end
for i ← 1 to n do
for j ← 0 to M do
if w[i] ≤ j then
V[i, j] ← max{V[i – 1, j], v[i] + V[i – 1, j – w[i]]}
else
V[i, j] ← V[i – 1, j] // w[i] > j
end
end
end
The above algorithm only tells us the maximum value we can earn with dynamic
programming. It does not indicate which items should be selected. We can find the items
that give the optimum result using the following algorithm:
Algorithm TRACE_KNAPSACK(w, v, M)
// w is array of weights of n items
// v is array of values of n items
// M is the knapsack capacity
SW ← { }
SP ← { }
i ← n
j ← M
while (i > 0 and j > 0) do
if (V[i, j] == V[i – 1, j]) then
i ← i – 1
else
V[i, j] ← V[i, j] – v[i]
j ← j – w[i]
SW ← SW + w[i]
SP ← SP + v[i]
i ← i – 1
end
end

Complexity analysis

With n items, there exist 2^n subsets, and the brute force approach examines all subsets
to find the optimal solution. Hence, the running time of the brute force approach is O(2^n).
This is unacceptable for large n.

Dynamic programming finds an optimal solution by constructing a table of size n × M,
where n is the number of items and M is the capacity of the knapsack. This table can be
filled in O(nM) time; the space complexity is the same.

 Running time of the brute force approach is O(2^n).
 Running time using dynamic programming with memoization is O(n × M).
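The table-building and trace-back algorithms above can be sketched together in Java. This is a minimal illustration of my own (names are not from the notes), using the data of the worked example in this section (n = 4, M = 5, weights 2, 3, 4, 5, profits 3, 4, 5, 6); a leading dummy entry keeps the item indexing 1-based as in the pseudocode.

```java
// 0/1 knapsack: build the DP table, then trace back the selected items.
import java.util.ArrayList;
import java.util.List;

class BinaryKnapsack {
    // w[1..n] are weights, v[1..n] are values, m is the capacity; index 0 is unused.
    static int[][] buildTable(int[] w, int[] v, int m) {
        int n = w.length - 1;
        int[][] V = new int[n + 1][m + 1]; // row 0 and column 0 stay 0 (boundary)
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= m; j++)
                if (w[i] <= j) // item i fits: take the better of skipping or adding it
                    V[i][j] = Math.max(V[i - 1][j], v[i] + V[i - 1][j - w[i]]);
                else
                    V[i][j] = V[i - 1][j];
        return V;
    }

    // Walk back through the table to recover which items were chosen.
    static List<Integer> traceItems(int[][] V, int[] w) {
        List<Integer> items = new ArrayList<>();
        int i = V.length - 1, j = V[0].length - 1;
        while (i > 0 && j > 0) {
            if (V[i][j] != V[i - 1][j]) { // value changed, so item i was taken
                items.add(i);
                j -= w[i];
            }
            i--;
        }
        return items;
    }

    public static void main(String[] args) {
        int[] w = {0, 2, 3, 4, 5};
        int[] v = {0, 3, 4, 5, 6};
        int[][] V = buildTable(w, v, 5);
        System.out.println(V[4][5]);          // maximum profit: 7
        System.out.println(traceItems(V, w)); // [2, 1], i.e. items I2 and I1
    }
}
```

The trace-back mirrors the hand computation below: whenever V[i, j] equals V[i-1, j] the item is skipped, otherwise it is selected and the remaining capacity shrinks by its weight.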
Example

Example: Find an optimal solution for the following 0/1 Knapsack problem using dynamic
programming: number of objects n = 4, knapsack capacity M = 5, weights (W1, W2, W3, W4)
= (2, 3, 4, 5) and profits (P1, P2, P3, P4) = (3, 4, 5, 6).

Solution:

The solution of the knapsack problem is defined as V(n, M).

We have the following data about the problem:

Item Weight (wi) Value (vi)

I1 2 3

I2 3 4

I3 4 5

I4 5 6

 Boundary conditions would be V[0, i] = V[i, 0] = 0. The initial configuration of the table
looks like:

j→

Item Detail 0 1 2 3 4 5

i=0 0 0 0 0 0 0

i=1 w1=2 v1=3 0

i=2 w2=3 v2=4 0

i=3 w3=4 v3=5 0

i=4 w4=5 v4=6 0

Filling first column, j = 1

V [1, 1] ⇒ i = 1, j = 1, wi = w1 = 2

As, j < wi, V [i, j] = V [i – 1, j]

V [1, 1] = V [0, 1] = 0

V [2, 1] ⇒ i = 2, j = 1, wi = w2 = 3

As, j < wi, V [i, j] = V [i – 1, j]

V [2, 1] = V [1, 1] = 0

V [3, 1] ⇒ i = 3, j = 1, wi = w3 = 4

As, j < wi, V [i, j] = V [i – 1, j]

V [3, 1] = V [2, 1] = 0
V [4, 1] ⇒ i = 4, j = 1, wi = w4 = 5

As, j < wi, V[i, j] = V[i – 1, j]

V [4, 1] = V [3, 1] = 0

Filling second column, j = 2

V[1, 2] ⇒ i = 1, j = 2, wi = w1 = 2, vi = 3

As, j ≥ wi, V [i, j]=max {V [i – 1, j], vi + V[i – 1, j – wi] }

= max {V [0, 2], 3 + V [0, 0]}

V[1, 2] = max (0, 3) = 3

V[2, 2] ⇒ i = 2, j = 2, wi = w2 = 3, vi = 4

As, j < wi, V [i, j] = V[i – 1, j]

V[2, 2] = V[1, 2] = 3

V[3, 2] ⇒ i = 3, j = 2, wi = w3 = 4, vi = 5

As, j < wi, V[i, j] = V[i – 1, j]

V[3, 2] = V [2, 2] = 3

V[4, 2] ⇒ i = 4, j = 2, wi = w4 = 5, vi = 6

As, j < wi, V[i, j] = V[i – 1, j]

V[4, 2] = V[3, 2] = 3

Filling third column, j = 3

V[1, 3] ⇒ i = 1, j = 3, wi = w1 = 2, vi = 3
As, j ≥ wi, V [i, j]=max {V [i – 1, j], vi + V [i – 1, j – wi] }

= max {V [0, 3], 3 + V [0, 1]}

V[1, 3] = max (0, 3) = 3

V[2, 3] ⇒ i = 2, j = 3, wi = w2 = 3, vi = 4

As, j ≥ wi, V [i, j] = max {V [i – 1, j], vi + V [i – 1, j – wi] }

= max {V [1, 3], 4 + V [1, 0]}

V[2, 3] = max (3, 4) = 4

V[3, 3] ⇒ i = 3, j = 3, wi = w3 = 4, vi = 5

As, j < wi, V [i, j] = V [i – 1, j]

V[3, 3] = V [2, 3] = 4

V[4, 3] ⇒ i = 4, j = 3, wi = w4 = 5, vi = 6

As, j < wi, V[i, j] = V[i – 1, j]

V[4, 3] = V [3, 3] = 4

Filling fourth column, j = 4

V[1, 4] ⇒ i = 1, j = 4, wi = w1 = 2, vi = 3

As, j ≥ wi, V [i, j]=max {V [i – 1, j], vi + V [i – 1, j – wi] }

= max {V [0, 4], 3 + V [0, 2]}

V[1, 4] = max (0, 3) = 3

V[2, 4] ⇒ i = 2, j = 4, wi = w2 = 3 , vi = 4
As, j ≥ wi, V [i, j] =max {V [i – 1, j], vi + V [i – 1, j – wi] }

= max {V [1, 4], 4 + V [1, 1]}

V[2, 4] = max (3, 4 + 0) = 4

V[3, 4] ⇒ i = 3, j = 4, wi = w3 = 4, vi = 5

As, j ≥ wi, V [i, j]=max {V [i – 1, j], vi + V [i – 1, j – wi] }

= max {V [2, 4], 5 + V [2, 0]}

V[3, 4] = max (4, 5 + 0) = 5

V[4, 4] ⇒ i = 4, j = 4, wi = w4 = 5, vi = 6

As, j < wi, V [i, j] = V [i – 1, j]

V[4, 4] = V [3, 4] = 5

Filling fifth column, j = 5

V [1, 5] ⇒ i = 1, j = 5, wi = w1 = 2, vi = 3

As, j ≥ wi, V [i, j] = max {V [i – 1, j], vi + V [i – 1, j – wi] }

= max {V [0, 5], 3 + V [0, 3]}

V[1, 5] = max (0, 3) = 3

V[2, 5] ⇒ i = 2, j = 5, wi = w2 = 3, vi = 4

As, j ≥ wi, V [i, j] =max {V [i – 1, j], vi + V [i – 1, j – wi] }

= max {V [1, 5], 4 + V [1, 2]}

V[2, 5] = max (3, 4 + 3) = 7


V[3, 5] ⇒ i = 3, j = 5, wi = w3 = 4, vi = 5

As, j ≥ wi, V [i, j] = max {V [i – 1, j], vi + V [i – 1, j – wi] }

= max {V [2, 5], 5 + V [2, 1]}

V[3, 5] = max (7, 5 + 0) = 7

V [4, 5] ⇒ i = 4, j = 5, wi = w4 =5, vi = 6

As, j ≥ wi, V [i, j] = max {V [i – 1, j], vi + V [i – 1, j – wi] }

= max {V [3, 5], 6 + V [3, 0]}

V[4, 5] = max (7, 6 + 0) = 7

Final table would be,

j→

Item Detail 0 1 2 3 4 5

i=0 0 0 0 0 0 0

i=1 w1=2 v1=3 0 0 3 3 3 3

i=2 w2=3 v2=4 0 0 3 4 4 7

i=3 w3=4 v3=5 0 0 3 4 5 7

i=4 w4=5 v4=6 0 0 3 4 5 7

Find selected items for M = 5

Step 1 : Initially, i = n = 4, j = M = 5

V[i, j] = V[4, 5] = 7

V[i – 1, j] = V[3, 5] = 7

V[i, j] = V[i – 1, j], so don’t select ith item and check for the previous item.

so i = i – 1 = 4 – 1 = 3

Solution Set S = { }

Step 2 : i = 3, j = 5

V[i, j] = V[3, 5] = 7

V[i – 1, j] = V[2, 5] = 7

V[i, j] = V[i – 1, j], so don’t select ith item and check for the previous item.

so i = i – 1 = 3 – 1 = 2

Solution Set S = { }

Step 3 : i = 2, j = 5

V[i, j] = V[2, 5] = 7

V[i – 1, j] = V[1, 5] = 3

V[i, j] ≠ V[i – 1, j], so add item Ii = I2 in solution set.

Reduce problem size j by wi

j = j – w i = j – w2 = 5 – 3 = 2

i=i–1=2–1=1

Solution Set S = {I2}

Step 4 : i = 1, j = 2

V[i, j] = V[1, 2] = 3

V[i – 1, j] = V[0, 2] = 0

V[i, j] ≠ V[i – 1, j], so add item Ii = I1 in solution set.

Reduce problem size j by wi

j = j – w i = j – w1 = 2 – 2 = 0

Solution Set S = {I1, I2}

Problem size has reached to 0, so final solution is


S = {I1, I2} Earned profit = P1 + P2 = 7

