
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 10
Activity Selection Problem
• Input: A set of activities S = {a1, …, an}
• Each activity ai has a start time si and a finish time fi such that 0 ≤ si < fi < ∞
• An activity ai takes place in the half-open interval [si, fi)
• Two activities are compatible if and only if their intervals do not overlap,
  i.e., ai and aj are compatible if [si, fi) and [sj, fj) do not overlap (i.e., si ≥ fj or sj ≥ fi)
• Output: A maximum-size subset of mutually compatible activities
• Also called the Interval Scheduling Problem
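In code, the compatibility test on half-open intervals is a one-liner. The sketch below represents an activity as a (start, finish) tuple; that representation is a choice of this sketch, not fixed by the slides:

```python
def compatible(a, b):
    """Activities a = (sa, fa), b = (sb, fb): compatible iff [sa, fa) and [sb, fb) do not overlap."""
    return a[0] >= b[1] or b[0] >= a[1]
```

Because the intervals are half-open, compatible((1, 4), (4, 7)) holds, while compatible((1, 5), (4, 7)) does not.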
Activity Selection Problem
Example

A = {1, 4, 8, 11} is an optimal solution.
A = {2, 4, 9, 11} is also an optimal solution.
Activity Selection Problem
• Brute Force Solution
  Try all possible subsets.
  Running time: O(2^n)
• “Greedy” Strategies:
  Greedy 1: Pick the shortest activity, eliminate all activities that conflict with it, and recurse.
  Greedy 2: Pick the activity that starts first, eliminate all the activities that conflict with it, and recurse.
  Greedy 3: Pick the activity that ends first, eliminate all the activities that conflict with it, and recurse.
Observe
• Greedy 1 and Greedy 2 do not work
• Greedy 3 seems to work (Why?)
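A small counterexample shows why Greedy 1 fails; the three activities below are invented for this sketch:

```python
def greedy_shortest(activities):
    """Greedy 1: repeatedly pick the shortest remaining activity, drop conflicting ones."""
    chosen = []
    rest = sorted(activities, key=lambda a: a[1] - a[0])   # shortest first
    while rest:
        a = rest.pop(0)
        chosen.append(a)
        # keep only activities compatible with a (half-open intervals)
        rest = [b for b in rest if b[0] >= a[1] or a[0] >= b[1]]
    return chosen

# The short middle activity (9, 11) conflicts with both long ones, so Greedy 1
# selects only 1 activity, while {(0, 10), (10, 20)} is a compatible set of size 2.
print(len(greedy_shortest([(0, 10), (9, 11), (10, 20)])))  # prints 1
```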
Greedy Choice
Why?
• Intuitively, this choice leaves as much opportunity as possible for the remaining activities to be scheduled.
• That is, the greedy choice is the one that maximizes the amount of unscheduled time remaining.
Recipe for Dynamic Programming
Step 1: Identify optimal substructure.
Step 2: Find a recursive formulation for the value of the optimal solution.
Step 3: Use dynamic programming to compute the value of the optimal solution (in a bottom-up manner).
Notation
Sij = subset of activities that start after activity ai finishes and finish before activity aj starts
Optimal substructure
Let Aij be an optimal solution, i.e., a maximum-size set of mutually compatible activities in Sij.
At some point we will need to choose to include some activity ak, with start time sk and finish time fk, in this solution.
This choice leaves two sets of compatible candidates after ak is taken out:
• Sik: activities that start after ai finishes and finish before ak starts
• Skj: activities that start after ak finishes and finish before aj starts
Observe
• Aik = Aij ∩ Sik is an optimal solution to Sik
• Akj = Aij ∩ Skj is an optimal solution to Skj
Recursive Formulation
• Let c[i, j] be the number of activities in a maximum-size subset of mutually compatible activities in Sij.
• With sentinel activities a0 (with f0 = 0) and an+1 (with sn+1 = ∞), the whole problem is S0,n+1, so the answer is c[0, n+1].
• Recurrence relation for c[i, j]:

      c[i, j] = 0                                          if Sij = ∅
      c[i, j] = max { c[i, k] + c[k, j] + 1 : ak ∈ Sij }   otherwise

We can solve this using dynamic programming.
But there is an important property which we can exploit.
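The recurrence can be evaluated bottom-up over index ranges. The sketch below assumes activities are given as parallel start/finish lists sorted by finish time (so that every ak ∈ Sij has i < k < j), and adds the sentinels itself; all names are this sketch's own:

```python
def dp_activity_count(s, f):
    """Bottom-up DP for c[0, n+1]; s and f must be sorted by finish time."""
    n = len(s)
    # Sentinels: a0 finishes at time 0, a_{n+1} starts at infinity.
    s = [0] + list(s) + [float("inf")]
    f = [0] + list(f) + [float("inf")]
    c = [[0] * (n + 2) for _ in range(n + 2)]
    for length in range(2, n + 2):              # window size j - i
        for i in range(0, n + 2 - length):
            j = i + length
            for k in range(i + 1, j):
                if f[i] <= s[k] and f[k] <= s[j]:          # ak is in Sij
                    c[i][j] = max(c[i][j], c[i][k] + c[k][j] + 1)
    return c[0][n + 1]
```

On the standard textbook instance whose optimal sets match the example's A = {1, 4, 8, 11}, this returns 4 in O(n^3) time, which is why the greedy property below is worth exploiting.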
We Show this is a Greedy Problem
Theorem:
Consider any nonempty subproblem Sij, and let am be the activity in Sij with the earliest finish time: fm = min{fk : ak ∈ Sij}. Then:
1. Activity am is used in some maximum-size subset of mutually compatible activities of Sij.
2. The subproblem Sim is empty, so choosing am leaves Smj as the only subproblem that may be nonempty.
Proof
Proof:
Let Aij be an optimal solution to Sij,
and let ak ∈ Aij have the earliest finish time in Aij.
If ak = am we are done.
Otherwise, let A′ij = (Aij − {ak}) ∪ {am}
(substitute am for ak).
Claim: Activities in A′ij are disjoint.
Proof of Claim:
Activities in Aij are disjoint because Aij is a solution.
Since ak is the first activity in Aij to finish, and fm ≤ fk (am finishes earliest in Sij), am cannot overlap with any other activity in A′ij.
Since |A′ij| = |Aij|, we conclude that A′ij is also an optimal solution to Sij, and it includes am.
Greedy Solution
• Repeatedly choose the activity that finishes first,
• remove the activities that are incompatible with it, and
• repeat on the remaining activities until no activities remain.
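The three steps above translate into a short iterative sketch; the function name and the (start, finish) pair representation are choices of this sketch:

```python
def select_activities(activities):
    """Earliest-finish-time-first selection of mutually compatible activities."""
    chosen = []
    last_finish = 0
    for s, f in sorted(activities, key=lambda a: a[1]):   # by finish time
        if s >= last_finish:          # compatible with everything chosen so far
            chosen.append((s, f))
            last_finish = f
    return chosen
```

Sorting dominates the cost, so the whole algorithm runs in O(n log n); on the standard textbook instance it selects the activities of the example's optimal set A = {1, 4, 8, 11}.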
We Show this is a Greedy Problem
Consequences of the Theorem:
1. Normally we have to inspect all subproblems; here we only have to choose one subproblem.
2. The theorem says we only have to find the activity with the smallest finish time in the current subproblem.
3. This means we can solve the problem top-down, by making the greedy choice for the local subproblem, rather than bottom-up as in DP.
Example

To solve Si,j:
1. Choose the activity am with the earliest finish time.
2. Solution of Si,j = {am} ∪ Solution of subproblem Sm,j
To solve S0,12, we select a1, which finishes earliest, and solve S1,12.
To solve S1,12, we select a4, which finishes earliest, and solve S4,12.
To solve S4,12, we select a8, which finishes earliest, and solve S8,12.
And so on (solve the problem in a top-down fashion).
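The top-down process above can be sketched recursively. It assumes parallel start/finish lists sorted by finish time, with i = 0 standing for the sentinel a0 (finish time 0); these conventions are this sketch's own:

```python
def recursive_select(s, f, i, n):
    """1-based indices of activities greedily chosen from the subproblem after a_i."""
    finish_i = f[i - 1] if i >= 1 else 0       # a_0 is the sentinel with f_0 = 0
    m = i + 1
    while m <= n and s[m - 1] < finish_i:
        m += 1                                 # skip activities starting before a_i finishes
    if m > n:
        return []
    # a_m finishes earliest among the compatible ones, so take it and recurse on S_{m, n+1}.
    return [m] + recursive_select(s, f, m, n)
```

On the standard textbook instance this traces exactly the S0,12 → S1,12 → S4,12 → S8,12 chain above and returns [1, 4, 8, 11].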
Elements of greedy strategy
• Determine the optimal substructure.
• Develop a recursive solution.
• Prove that one of the optimal choices is the greedy choice, and that it is safe.
• Show that all but one of the subproblems are empty after the greedy choice.
• Develop a recursive algorithm that implements the greedy strategy.
• Convert the recursive algorithm to an iterative one.

This sequence was designed to show the similarities and differences between dynamic programming and the greedy approach; these steps can be simplified if we apply the greedy approach directly.
Applying Greedy Strategy
Steps in designing a greedy algorithm:
• The optimal substructure property holds (same as in dynamic programming)
Other Greedy Strategies
Greedy 4: Pick the activity that has the fewest conflicts, eliminate all the activities that conflict with it, and recurse.
Any more???
