
Matthew Bowman

CSCI 405

Homework 2

1.

function reconstructLCS(c, X, Y):
    m = length(X)
    n = length(Y)
    LCS = []

    i = m
    j = n

    while i > 0 and j > 0:
        if X[i] == Y[j]:
            // The current elements of X and Y are part of the LCS
            LCS.prepend(X[i])
            i = i - 1
            j = j - 1
        else:
            // Move in the direction of the larger subproblem value
            if c[i-1][j] > c[i][j-1]:
                i = i - 1
            else:
                j = j - 1

    return LCS
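
As a quick sanity check, here is a runnable Python transcription of the same approach (an illustration added alongside the pseudocode, not part of the assignment; the helper names, example strings, and 0-indexing are my own):

def lcs_table(X, Y):
    # c[i][j] = length of an LCS of X[:i] and Y[:j]
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

def reconstruct_lcs(c, X, Y):
    # Walk back from c[m][n], exactly as in the pseudocode above
    lcs = []
    i, j = len(X), len(Y)
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            lcs.append(X[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] > c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(lcs))

X, Y = "ABCBDAB", "BDCABA"
print(reconstruct_lcs(lcs_table(X, Y), X, Y))  # prints "BDAB", an LCS of length 4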

2. The idea is to evaluate the c table one row at a time, keeping only the most
recent row in memory. Each entry c[i][j] depends only on c[i-1][j-1], c[i-1][j],
and c[i][j-1], so a single row of min(m, n) + 1 entries (computed over the
shorter of the two sequences) plus two scalar temporaries is enough. The inner
loop checks whether the corresponding elements of X and Y are equal and updates
the row accordingly.

This algorithm therefore uses only min(m, n) entries of the c table and O(1)
additional space. It doesn't require a separate b table because it computes only
the length of the LCS.

You couldn't reconstruct the solution itself this way: earlier rows of c are
discarded as the computation proceeds, so reconstruction requires either the
full c table (as in problem 1) or some other kind of backtracking table.

function computeLCSLength(X, Y):
    // Make Y the shorter sequence, so the row has min(m, n) + 1 entries
    if length(X) < length(Y):
        swap X and Y
    m = length(X)
    n = length(Y)
    let row[0..n] be an array of zeros    // row[j] holds c[i-1][j]

    for i from 1 to m:
        prev = 0                          // holds c[i-1][j-1] for the current j
        for j from 1 to n:
            temp = row[j]                 // save c[i-1][j] before overwriting it
            if X[i] == Y[j]:
                row[j] = prev + 1
            else:
                row[j] = max(row[j], row[j-1])
            prev = temp

    return row[n]
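
The same computation in runnable Python (my own illustration, with 0-indexed strings):

def lcs_length(X, Y):
    # Keep one row over the shorter sequence: min(m, n) + 1 entries
    if len(X) < len(Y):
        X, Y = Y, X
    n = len(Y)
    row = [0] * (n + 1)
    for x in X:
        prev = 0                  # c[i-1][j-1] for the current j
        for j in range(1, n + 1):
            temp = row[j]         # c[i-1][j], about to be overwritten
            if x == Y[j - 1]:
                row[j] = prev + 1
            else:
                row[j] = max(row[j], row[j - 1])
            prev = temp
    return row[n]

print(lcs_length("ABCBDAB", "BDCABA"))  # prints 4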

3. The dynamic programming recurrence relation is defined as follows:

c[i, j] = 0                                               if i ≥ j
c[i, j] = max over i < k < j of (c[i, k] + c[k, j] + 1)   if f_i ≤ s_k ≤ f_k ≤ s_j for some such k

Here, c[i, j] represents the size of the optimal solution for activities a_i, a_{i+1}, ..., a_j, where s_k and f_k are the start and finish times of activity a_k respectively.

• If i ≥ j, there are no activities in the subproblem, so c[i, j] = 0.

• For i < j, we consider the possibility of including activity a_k in the optimal solution. The condition f_i ≤ s_k ≤ f_k ≤ s_j ensures that activity a_k is compatible with the activities in the subproblem.

• We maximize over all possible choices of k to find the optimal solution.

The dynamic programming algorithm uses a bottom-up approach to fill the table c and reconstructs the solution from the filled table.

function activity_selector(start_times, finish_times):
    // Activities a_1..a_n are assumed sorted by finish time. Sentinel
    // activities a_0 (with f_0 = 0) and a_{n+1} (with s_{n+1} = infinity)
    // bracket the whole problem.
    n = length(start_times)

    // Initialize the c table with zeros
    let c[0..n+1, 0..n+1] be a 2D array of zeros

    // Fill in the c table for subproblems of increasing width
    for l from 2 to n + 1:
        for i from 0 to n + 1 - l:
            j = i + l
            for k from i + 1 to j - 1:
                // a_k fits between a_i and a_j if f_i <= s_k and f_k <= s_j
                if finish_times[i] <= start_times[k] and
                   finish_times[k] <= start_times[j]:
                    c[i][j] = max(c[i][j], c[i][k] + c[k][j] + 1)

    // Reconstruct the solution
    selected_activities = empty list
    reconstruct_solution(0, n + 1, start_times, finish_times, c,
                         selected_activities)

    return c[0][n+1], selected_activities

function reconstruct_solution(i, j, start_times, finish_times, c,
                              selected_activities):
    for k from i + 1 to j - 1:
        if finish_times[i] <= start_times[k] and
           finish_times[k] <= start_times[j]:
            if c[i][j] == c[i][k] + c[k][j] + 1:
                reconstruct_solution(i, k, start_times, finish_times, c,
                                     selected_activities)
                append k to selected_activities
                reconstruct_solution(k, j, start_times, finish_times, c,
                                     selected_activities)
                break
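
A runnable Python transcription of the two procedures (my own illustration; the sentinel handling and the toy instance are assumptions added here):

def activity_selector(s, f):
    # Activities must be sorted by finish time. Add sentinels a_0
    # (finishes at 0) and a_{n+1} (starts at infinity).
    n = len(s)
    s = [0] + list(s) + [float("inf")]
    f = [0] + list(f) + [float("inf")]
    c = [[0] * (n + 2) for _ in range(n + 2)]
    for l in range(2, n + 2):               # subproblem width
        for i in range(0, n + 2 - l):
            j = i + l
            for k in range(i + 1, j):
                # a_k fits between a_i and a_j if f_i <= s_k and f_k <= s_j
                if f[i] <= s[k] and f[k] <= s[j]:
                    c[i][j] = max(c[i][j], c[i][k] + c[k][j] + 1)

    selected = []
    def reconstruct(i, j):
        for k in range(i + 1, j):
            if f[i] <= s[k] and f[k] <= s[j] and c[i][j] == c[i][k] + c[k][j] + 1:
                reconstruct(i, k)
                selected.append(k)          # emit in sorted order
                reconstruct(k, j)
                break
    reconstruct(0, n + 1)
    return c[0][n + 1], selected

print(activity_selector([1, 3, 0, 5], [2, 4, 6, 7]))  # (3, [1, 2, 4])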

4.

Dynamic Programming Approach:

Optimal Substructure: The dynamic programming approach relies on the principle of optimal substructure, meaning that the solution to the overall problem can be constructed from the solutions to its subproblems.

Recurrence Relation: The dynamic programming algorithm builds a table c[i, j], where c[i, j] records the size of a largest set of mutually compatible activities chosen from those that fit between activities a_i and a_j (the recurrence from problem 3). The recurrence checks whether each candidate activity a_k is compatible with the endpoints of its subproblem.

Computing Table Values: The algorithm iteratively fills in the values of the table c based on the recurrence relation, considering the activities in sorted order of finish times and working from narrow subproblems to wider ones.

Retracing the Solution: After constructing the table, the algorithm uses it to reconstruct
the solution, finding the activities that contribute to the maximum-size subset.

Time Complexity: This dynamic programming approach has a time complexity of O(n^3), where n is the number of activities; three nested loops (over subproblem widths, start indices, and split points) are used to fill in the table.

Greedy Approach:

Greedy Choice Property: The greedy algorithm for the activity-selection problem selects
the next activity based on a greedy choice—picking the activity that finishes first among
the remaining compatible activities.

Sort by Finish Times: Before applying the greedy approach, the input activities are sorted
in non-decreasing order of finish times.

Iterative Selection: The algorithm iteratively selects activities one by one, always
choosing the one with the earliest finish time among the remaining compatible activities.

No Backtracking: Once an activity is selected, it is added to the solution, and the algorithm does not revisit the choice. There is no backtracking.

Time Complexity: The greedy approach runs in O(n) time after sorting; the O(n log n) sorting step dominates, for O(n log n) overall.
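
For contrast with the dynamic program, here is a minimal sketch of the greedy procedure just described (my own illustration), assuming the activities are already sorted by finish time:

def greedy_activity_selector(s, f):
    # s, f: start and finish times, sorted by finish time
    selected = [0]               # the first activity finishes earliest
    last_finish = f[0]
    for k in range(1, len(s)):
        if s[k] >= last_finish:  # compatible with everything chosen so far
            selected.append(k)
            last_finish = f[k]
    return selected

print(greedy_activity_selector([1, 3, 0, 5], [2, 4, 6, 7]))  # [0, 1, 3]

On the toy instance above it selects the same three activities the dynamic program found (indices are 0-based here), in a single pass.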

Key Differences:

Optimal Substructure: Dynamic programming explicitly breaks down the problem into subproblems and builds a solution from the bottom up. Greedy algorithms, on the other hand, make locally optimal choices at each step without considering future consequences.

Memory Usage: Dynamic programming often requires more memory to store the table of
solutions for all subproblems. Greedy algorithms usually require less memory as they
make decisions on the fly.

Backtracking: Dynamic programming involves backtracking through the table to reconstruct the solution. Greedy algorithms, in general, do not backtrack; they make a series of irrevocable choices.

Time Complexity: The time complexity of dynamic programming is often higher (O(n^3) in this example) compared with the near-linear time of many greedy algorithms. However, the actual efficiency depends on the specific problem instance and its characteristics.

5. Greedy-Choice Property Proof:

Suppose we have a set of items, each with a weight w_i and a value v_i, and a knapsack with a maximum weight capacity W.

1. Greedy Choice: At each step, the greedy algorithm takes as much as possible of the remaining item with the highest value-to-weight ratio r_i = v_i / w_i.

2. Exchange Argument: Suppose some optimal solution X deviates from the greedy choice: X leaves behind part of an available item i with the highest ratio r_i while devoting weight to some item k with r_k < r_i. If we remove a small amount of weight w from item k and replace it with the same weight of item i, the total weight is unchanged, but the total value increases by w · (r_i - r_k) > 0. This contradicts the assumption that X is optimal.

3. Greedy-Choice Property: By choosing the item with the highest value-to-weight ratio at each step, the algorithm ensures that every unit of capacity contributes the maximum possible value under the weight constraint.

Optimal Solution: Since the algorithm consistently makes choices that contribute optimally at each step, the overall solution obtained is globally optimal.

Therefore, the fractional knapsack problem exhibits the greedy-choice property, justifying the correctness of the greedy algorithm for solving this problem.
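
A minimal Python sketch of this greedy algorithm (my own illustration), assuming each item is a (value, weight) pair with positive weight:

def fractional_knapsack(items, W):
    # items: list of (value, weight) pairs; W: knapsack capacity
    # Take items greedily in decreasing order of value-to-weight ratio.
    total = 0.0
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if W <= 0:
            break
        take = min(w, W)   # the whole item, or the fraction that still fits
        total += v * (take / w)
        W -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
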
6. Greedy Choice: At each step, the algorithm makes the locally optimal choice by selecting the activity that finishes first among those that are compatible with the previously selected activities.

Optimal Substructure: The subproblem after each choice is the set of remaining activities that start after the chosen activity finishes. The algorithm handles each such subproblem independently, again selecting the earliest-finishing compatible activity.

Now, let's prove that this greedy approach yields an optimal solution:

Claim: The greedy algorithm always produces an optimal solution for the activity
selection problem.

Proof:

Consider an optimal solution O, and let A be the activity in O with the earliest finish time. Let B be the first activity selected by the greedy algorithm.

If A = B, then both solutions have made the same choice, and there is nothing to prove.

If A ≠ B, let f(A) be the finish time of A and f(B) be the finish time of B. Since B finishes first among all activities, we have f(B) ≤ f(A).

Every other activity in O starts no earlier than f(A), and hence no earlier than f(B), so every activity in O − {A} is also compatible with B. Therefore O′ = (O − {A}) ∪ {B} is a set of mutually compatible activities with |O′| = |O|, which makes O′ an optimal solution containing the greedy algorithm's choice. This shows that the greedy choice is always compatible with some optimal solution.

After B is selected, the remaining problem is the set of activities that start after f(B), and the same argument applies to each subsequent choice. By induction, the solution produced by the greedy algorithm has the same size as an optimal solution. This completes the proof that the greedy approach yields an optimal solution for the activity selection problem.
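
As an empirical companion to the proof, this small Python check (my own illustration) compares the greedy answer against brute force on random instances:

from itertools import combinations
import random

def max_compatible(acts):
    # Brute force: size of the largest pairwise-compatible subset
    for r in range(len(acts), 0, -1):
        for subset in combinations(sorted(acts), r):
            # sorted by start time, so checking consecutive pairs suffices
            if all(subset[i][1] <= subset[i + 1][0] for i in range(r - 1)):
                return r
    return 0

def greedy_count(acts):
    # Earliest-finish-first greedy
    count, last = 0, float("-inf")
    for s, f in sorted(acts, key=lambda a: a[1]):
        if s >= last:
            count, last = count + 1, f
    return count

for _ in range(200):
    acts = [(s, s + random.randint(1, 5)) for s in random.choices(range(10), k=6)]
    assert greedy_count(acts) == max_compatible(acts)
print("greedy matched brute force on 200 random instances")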

7. Suppose we have a rod of length 4, and the values of pieces of each length are given as follows:

p_1 = 1, p_2 = 5, p_3 = 8, p_4 = 9

where the possible piece lengths are 1, 2, 3, and 4.

Apply the greedy strategy:

1. First cut: Choose the length with the maximum density (value per unit length). The density for each length is:

Density(1) = 1/1 = 1
Density(2) = 5/2 = 2.5
Density(3) = 8/3 ≈ 2.67
Density(4) = 9/4 = 2.25

The greedy strategy would make the first cut a piece of length 3, as it has the maximum density.

2. Second cut: Now the remaining rod has length 4 − 3 = 1. The only option is a piece of length 1, with Density(1) = 1, so the greedy strategy takes a piece of length 1.

The greedy strategy therefore earns p_3 + p_1 = 8 + 1 = 9. However, the optimal solution in this case is to cut the rod into two pieces of length 2 each, for a total value of 5 + 5 = 10.

Therefore, this counterexample demonstrates that the greedy strategy based on the density criterion does not always yield an optimal solution for cutting rods.
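
The gap can be confirmed with a short Python comparison of the density greedy against the standard rod-cutting dynamic program (my own illustration):

def greedy_by_density(p, n):
    # Repeatedly cut off the piece with the highest value-to-length ratio that fits
    total = 0
    while n > 0:
        best = max(range(1, n + 1), key=lambda i: p[i] / i)
        total += p[best]
        n -= best
    return total

def optimal_cut(p, n):
    # Classic bottom-up rod-cutting DP
    r = [0] * (n + 1)
    for j in range(1, n + 1):
        r[j] = max(p[i] + r[j - i] for i in range(1, j + 1))
    return r[n]

p = [0, 1, 5, 8, 9]  # p[i] = value of a piece of length i
print(greedy_by_density(p, 4), optimal_cut(p, 4))  # 9 10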

8. Professor Capulet's approach suggests a greedy strategy for matrix-chain multiplication, where we choose the split point k that minimizes the quantity p_{i-1} · p_k · p_j for each subproduct A_i A_{i+1} ... A_j.

However, this strategy does not always guarantee an optimal solution. Let's consider an instance of the matrix-chain multiplication problem where this greedy approach leads to a suboptimal solution.

Suppose we have three matrices with dimensions:

A_1: 1 × 2
A_2: 2 × 3
A_3: 3 × 100

The dimensions are represented as rows × columns, so the dimension sequence is p_0 = 1, p_1 = 2, p_2 = 3, p_3 = 100. Now, let's calculate the quantity p_{i-1} · p_k · p_j for each possible split point k of the full product (i = 1, j = 3):

For k = 1: p_0 · p_1 · p_3 = 1 · 2 · 100 = 200

For k = 2: p_0 · p_2 · p_3 = 1 · 3 · 100 = 300

The greedy approach would choose k = 1 since it minimizes the quantity. Therefore, the greedy strategy would split the product as A_1 (A_2 A_3), and the cost of this parenthesization is 2 · 3 · 100 + 1 · 2 · 100 = 600 + 200 = 800 scalar multiplications.

However, the optimal solution splits the product at k = 2, resulting in the sequence (A_1 A_2) A_3. The cost of the optimal solution is 1 · 2 · 3 + 1 · 3 · 100 = 6 + 300 = 306 scalar multiplications.

Therefore, in this instance the greedy approach yields a suboptimal solution (800 versus the optimal 306), demonstrating that the strategy proposed by Professor Capulet is not always sufficient to guarantee optimality in the matrix-chain multiplication problem.
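
A short Python check (my own illustration) confirms both costs on this instance:

def capulet_cost(p, i, j):
    # Greedy: split where p[i-1] * p[k] * p[j] is smallest, then recurse
    if i == j:
        return 0
    k = min(range(i, j), key=lambda m: p[i - 1] * p[m] * p[j])
    return capulet_cost(p, i, k) + capulet_cost(p, k + 1, j) + p[i - 1] * p[k] * p[j]

def optimal_cost(p, i, j):
    # Exhaustive recursion over all split points (fine for tiny chains)
    if i == j:
        return 0
    return min(optimal_cost(p, i, k) + optimal_cost(p, k + 1, j) +
               p[i - 1] * p[k] * p[j] for k in range(i, j))

p = [1, 2, 3, 100]  # A_1: 1x2, A_2: 2x3, A_3: 3x100
print(capulet_cost(p, 1, 3), optimal_cost(p, 1, 3))  # 800 306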
