BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 1
Strong Belief

“How Nadal solved the Djokovic puzzle”


In 2011, Novak Djokovic solved the riddle of Rafael Nadal on all three surfaces.
Nadal looked at once beleaguered and confused by Djokovic's technical and tactical mastery, as if stymied by the puzzle of how to solve the Djokovic problem.
Nadal and his Uncle Toni spent the fall of 2011 pondering the Djokovic problem.
The changes Nadal made to close the gap on Djokovic were subtle, ranging from the psychological to the technical to the tactical.
Algorithms
• We have seen many algorithms:
Sorting
- Insertion Sort
- Bubble Sort
- Merge Sort
- Quick Sort
- Heap Sort
- Radix Sort
- Counting Sort
Graph Searching
- Breadth First Search
- Depth First Search
- Tree Traversal
Graph Algorithms
- Shortest Path
- Minimum Spanning Tree
Searching
- Linear Search
- Binary Search
And many more
Key Questions
Given a problem:
1st Question
Does a solution/algorithm exist? Do we know any such problem?
2nd Question
If a solution exists, is there an alternate, better solution?
3rd Question
What is the least time required to solve the problem? (lower bound results)
4th Question
Does there exist an algorithm solving the problem in the least time?
Key Questions
5th Question
Is the known solution polynomial time? What about primality?
6th Question
If the known solution is not polynomial time, does/will there exist a polynomial time solution?
7th Question
Can we prove that no polynomial time solution will ever exist?
8th Question
If we don't know a polynomial time solution and the answer to the 7th Question is no, then what?
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 2
Algorithms
• We have seen many algorithms:
Sorting
- Insertion Sort
- Bubble Sort
- Merge Sort
- Quick Sort
- Heap Sort
- Radix Sort
- Counting Sort
Graph Searching
- Breadth First Search
- Depth First Search
- Tree Traversal
Graph Algorithms
- Shortest Path
- Minimum Spanning Tree
Searching
- Linear Search
- Binary Search
And many more
Key Questions
Given a problem:
1st Question
Does a solution/algorithm exist? Do we know any such problem?
2nd Question
If a solution exists, is there an alternate, better solution?
3rd Question
What is the least time required to solve the problem? (lower bound results)
4th Question
Does there exist an algorithm solving the problem in the least time?
Key Questions
5th Question
Is the known solution polynomial time? What about primality?
6th Question
If the known solution is not polynomial time, does/will there exist a polynomial time solution?
7th Question
Can we prove that no polynomial time solution will ever exist?
8th Question
If we don't know a polynomial time solution and the answer to the 7th Question is no, then what?
Course Objective 1:
Algorithm Design Techniques
We already know one popular strategy:
Divide & Conquer
Consider the Coin Change Problem with coins of denominations 1, 5, 10 & 25.
The solution is easy.
What is the guarantee that the solution works?
We introduce two more popular & widely applicable problem-solving strategies:
• Dynamic Programming
• Greedy Algorithms
Key Questions
Course Objective 2:
One of the objectives of this course is to look at Question 5 to Question 8 in detail for a class of problems.
Understand the famous P vs NP problem.
We strongly believe that certain important classes of problems will not have polynomial time solutions.
Course Objective - 3
• How to deal with the class of problems for which we strongly believe that no polynomial time algorithm will exist?
• This class consists of important practical problems.
Examples: Traveling Salesman Problem, 0-1 Knapsack Problem, Bin Packing Problem, and many more.
Course Objective - 3
1st Alternative
Try to get a polynomial time solution for an important particular instance of the problem.
2nd Alternative
Backtracking algorithms. With good heuristics this works well for some important particular instances of the problem.
3rd Alternative
Approximation algorithms. As the name indicates, these algorithms give approximate solutions but run in polynomial time.
Many other alternatives.
Course Objective – 3
For a certain class of problems: study, develop and analyze 'good' approximation algorithms.
Text Book
• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest & Clifford Stein,
Introduction to Algorithms,
Third Edition, PHI Learning Private Limited, 2015
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 3
Course Objective 1:
Algorithm Design Techniques
We already know one popular strategy:
Divide & Conquer
Consider the Coin Change Problem with coins of denominations 1, 5, 10 & 25.
The solution is easy.
What is the guarantee that the solution works?
We introduce two more popular & widely applicable problem-solving strategies:
• Dynamic Programming
• Greedy Algorithms
Key Questions
Course Objective 2:
One of the objectives of this course is to look at problems for which the known solution is non-polynomial and we cannot prove that no polynomial time solution will ever exist.
Understand the famous P vs NP problem.
We strongly believe that certain important classes of problems will not have polynomial time solutions.
Course Objective - 3
• How to deal with the class of problems for which we strongly believe that no polynomial time algorithm will exist?
• Approximation Algorithms
• This class consists of important practical problems.
Examples: Traveling Salesman Problem, 0-1 Knapsack Problem, Bin Packing Problem, and many more.
Binary Search Tree
• Binary Search Tree: a well-known, important data structure for efficient search.
If T is a binary tree with n nodes, then
the min height of the tree is ⌊log n⌋
and the max height of the tree is n − 1.
Disadvantage
The tree can be skewed, making search inefficient.
Binary Search Tree
• How to fix the skewness?
Balanced trees:
- AVL Trees
- Red-Black Trees
- Multi-way Search Trees, (2, 4) Trees
- a few more
Technique used for balancing: rotations (left rotation or right rotation)
Binary Search Tree
• Disadvantage with balanced trees:
The number of rotations needed to maintain the balanced structure is O(log n).
Question
Are there types of binary search trees for which the number of rotations needed for balancing is constant (independent of n)?
Course Objective – 4
Study a type of binary search tree called the Treap, for which the expected number of rotations needed for balancing is constant.
We will prove that the expected number of rotations needed for balancing in a Treap is 2 (something really strong).
A Treap is a randomized data structure.
Review - Algorithm Design Strategy
• Divide & Conquer
- binary search
- merge sort
- quick sort
- Matrix Multiplication (Strassen Algorithm)
Many More
Many standard iterative algorithms can be written
as Divide & Conquer Algorithms.
Example: Sum/Max of n numbers
Question: Any advantage in doing so?
Divide & Conquer
Divide-and-conquer algorithms:
1. Dividing the problem into smaller sub-
problems
(independent sub-problems)
2. Solving those sub-problems
3. Combining the solutions for those smaller
sub-problems to solve the original problem
Divide & Conquer
How to analyze Divide & Conquer Algorithms
Generally, using Recurrence Relations
• Let T(n) be the number of operations required to
solve the problem of size n.
Then T(n) = a T(n/b) + c(n) where
- each sub-problem is of size n/b
- There are a such sub-problems
- c(n) extra operations are required to combine the
solutions of sub-problems into a solution of the
original problem
Divide & Conquer
• How to solve recurrence relations?
- Substitution Method: works for simple relations
- Master Theorem: see the text book for details
Coin Change Problem
Given a value N, if we want to make change for N paisa and we have an infinite supply of each of C = {1, 5, 10, 25} valued coins, what is the minimum number of coins needed to make the change?
Solution – easy (we all know this): the greedy solution.
Coin Change Problem
Key Observations
• Optimization problem
• Making change is possible, i.e., a solution exists
• Multiple solutions exist. For example: 60 = 25 + 25 + 10, or 10 taken six times
• The question has two parts:
1. The minimum number of coins required
2. Which coins are part of the solution
• What is the guarantee that the solution works?
Coin Change Problem
Key Observations
Suppose we add a coin of denomination 20 to the set, i.e., C = {1, 5, 10, 20, 25}.
If N = 40, then the greedy solution fails:
the greedy solution gives the answer 3, namely {25, 10, 5},
whereas the correct answer is {20, 20}.
Course Objective
Main Question:
Given a problem (mostly optimization problems):
1. How do we decide whether to use the Greedy strategy or the Dynamic Programming strategy?
2. How do we show that the solution works?
We will explore these ideas in the initial few lectures.
Coin Change Problem
Suppose we want to compute the minimum number of coins with values d[1], d[2], …, d[n], where each d[i] > 0 and the coin of denomination i has value d[i].
Let c[i][j] be the minimum number of coins required to pay an amount of j units, 0 ≤ j ≤ N, using only coins of denominations 1 to i, 1 ≤ i ≤ n.
c[n][N] is the solution to the problem.
Coin Change Problem
In calculating c[i][j], notice that:
• Suppose we do not use the coin with value
d[i] in the solution of the (i,j)-problem,
then c[i][j] = c[i-1][j]
• Suppose we use the coin with value d[i] in the
solution of the (i,j)-problem,
then c[i][j] = 1 + c[i][ j-d[i]]
Since we want to minimize the number of coins,
we choose whichever is the better alternative
Coin Change Problem – Recurrence
Therefore
c[i][j] = min{ c[i−1][j], 1 + c[i][j − d[i]] }
and
c[i][0] = 0 for every i
Alternative 1
Recursive algorithm
Overlapping Subproblems
When a recursive algorithm revisits the same
problem over and over again, we say that the
optimization problem has overlapping
subproblems.
How do we observe/prove that a problem has overlapping subproblems?
Answer – draw the computation tree and observe.
Overlapping Subproblems
Computation Tree (figure on the slide)
Dynamic-programming algorithms typically take advantage of overlapping subproblems by solving each subproblem once and then storing the solution in a table where it can be looked up when needed, using constant time per lookup.
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 4
Coin Change Problem
Suppose we have an infinite supply of n coins with values d[1], d[2], …, d[n], where each d[i] > 0 and the coin of denomination i has value d[i].
Given an amount N, we want to compute the minimum number of coins needed to make change for N.
Let c[i][j] be the minimum number of coins required to pay an amount of j units, 0 ≤ j ≤ N, using only coins of denominations 1 to i, 1 ≤ i ≤ n.
c[n][N] is the solution to the problem.
Coin Change Problem
In calculating c[i][j], notice that:
• Suppose we do not use the coin with value
d[i] in the solution of the (i,j)-problem,
then c[i][j] = c[i-1][j]
• Suppose we use the coin with value d[i] in the
solution of the (i,j)-problem,
then c[i][j] = 1 + c[i][ j-d[i]]
Since we want to minimize the number of coins,
we choose whichever is the better alternative
Coin Change Problem – Recurrence
Therefore
c[i][j] = min{ c[i−1][j], 1 + c[i][j − d[i]] }
and
c[i][0] = 0 for every i
Alternative 1
Recursive algorithm
Overlapping Subproblems
When a recursive algorithm revisits the same
problem over and over again, we say that the
optimization problem has overlapping
subproblems.
How do we observe/prove that a problem has overlapping subproblems?
Answer – draw the computation tree and observe.
Overlapping Subproblems
Computation Tree (figure on the slide)
Dynamic-programming algorithms typically take advantage of overlapping subproblems by solving each subproblem once and then storing the solution in a table where it can be looked up when needed, using constant time per lookup.
Overlapping Subproblems- Fibonacci numbers
• Fibonacci numbers:
F(n) = F(n-1) + F(n-2)
F(0) = 0 & F(1) = 1
Computing the nth Fibonacci number recursively
(top-down):
F(n)
├── F(n−1)
│     ├── F(n−2)
│     └── F(n−3)
└── F(n−2)
      ├── F(n−3)
      └── F(n−4)
...
Subproblems share subsubproblems
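To make the overlap concrete, here is a small Python sketch (ours, not from the slides) contrasting the naive recursion with a memoized version that stores each F(i) the first time it is computed:

# Naive recursion: recomputes the same F(i) exponentially many times.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Top-down dynamic programming: each subproblem is solved once and
# cached, so the running time drops to O(n).
def fib_memo(n, cache=None):
    if cache is None:
        cache = {0: 0, 1: 1}
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]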
Coin Change Problem
• Example
We have to pay 8 units with coins worth 1, 4 & 6 units.
For example, c[2][6] is obtained in this case as the smaller of c[1][6] and 1 + c[2][6 − d[2]] = 1 + c[2][2].
The other entries of the table are obtained similarly.

j:        0  1  2  3  4  5  6  7  8
d[1]=1    0  1  2  3  4  5  6  7  8
d[2]=4    0  1  2  3  1  2  3  4  2
d[3]=6    0  1  2  3  1  2  1  2  2

The table gives the solution to our problem for all the instances involving a payment of 8 units or less.
Analysis
Time Complexity
We have to compute n(N+1) entries
Each entry takes constant time to compute
Running time – O(nN)
Question
• How can you modify the algorithm to actually compute the change (i.e., the multiplicities of the coins)?
• Modify the algorithm to handle the exceptional cases:
if i = 1 and j < d[1]: c[i][j] = +∞
if i = 1 and j ≥ d[1]: c[i][j] = 1 + c[i][j − d[1]]
if i > 1 and j < d[i]: c[i][j] = c[i−1][j]
• In case there is no solution, the algorithm will return +∞.
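To answer both questions at once, here is a minimal bottom-up Python sketch (our illustration, not the slides' code) that fills the table with the +∞ convention and then walks back through it to recover the coin multiplicities:

import math

def min_coins(d, N):
    # d[1..n] are the coin values (d[0] is unused), N is the amount.
    n = len(d) - 1
    INF = math.inf
    c = [[INF] * (N + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        c[i][0] = 0                                   # paying 0 needs 0 coins
    for i in range(1, n + 1):
        for j in range(1, N + 1):
            skip = c[i - 1][j]                        # do not use coin i
            use = 1 + c[i][j - d[i]] if j >= d[i] else INF
            c[i][j] = min(skip, use)
    if c[n][N] == INF:
        return None                                   # no solution exists
    coins, i, j = [], n, N                            # traceback
    while j > 0:
        if i > 1 and c[i][j] == c[i - 1][j]:
            i -= 1                                    # coin i not used
        else:
            coins.append(d[i])                        # use coin i once more
            j -= d[i]
    return c[n][N], coins

For the example above, min_coins([0, 1, 4, 6], 8) returns (2, [4, 4]), matching the table entry for 8 units.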
Optimal Substructure Property
• Why does the solution work?
Optimal Substructure Property/ Principle of
Optimality
• The optimal solution to the original problem
incorporates optimal solutions to the subproblems.
This is a hallmark of problems amenable to dynamic
programming.
• Not all problems have this property.
Optimal Substructure Property
• In our example, though we are interested only in c[n][N], we took it for granted that all the other entries in the table must also represent optimal choices.
• If c[i][j] is the optimal way of making change
for j units using coins of denominations 1 to i,
then c[i-1][j] & c[i][j-d[i]] must also give the
optimal solutions to the instances they
represent
Optimal Substructure Property
How to prove Optimal Substructure Property?
Generally by Cut-Paste Argument or By
Contradiction
Note
Optimal Substructure Property looks obvious
But it does not apply to every problem.
Exercise:
Give a problem which does not exhibit the Optimal Substructure Property.
Dynamic Programming Algorithm
The dynamic-programming algorithm can be broken
into a sequence of four steps.
1. Characterize the structure of an optimal solution.
Optimal Substructure Property
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a
bottom-up fashion.
Overlapping subproblems
4. Construct an optimal solution from computed
information.
(not always necessary)
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 5
Optimal Substructure Property
Optimal Substructure Property/ Principle of
Optimality
• The optimal solution to the original problem
incorporates optimal solutions to the subproblems.
This is a hallmark of problems amenable to dynamic
programming.
• Not all problems have this property.
Optimal Substructure Property
How to prove Optimal Substructure Property?
Generally by Cut-Paste Argument or By
Contradiction
Note
Optimal Substructure Property looks obvious
But it does not apply to every problem.
Exercise:
Give a problem which does not exhibit the Optimal Substructure Property.
Dynamic Programming Algorithm
The dynamic-programming algorithm can be broken
into a sequence of four steps.
1. Characterize the structure of an optimal solution.
Optimal Substructure Property
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a
bottom-up fashion.
Overlapping subproblems
4. Construct an optimal solution from computed
information.
(not always necessary)
The 0-1 Knapsack Problem
Given: A set S of n items, with each item i having
wi - a positive “weight”
vi - a positive “benefit”
Goal:
Choose items with maximum total benefit but with weight
at most W.
And we are not allowed to take fractional amounts
In this case, we let T denote the set of items we take

Objective: maximize Σ_{i∈T} v_i
Constraint: Σ_{i∈T} w_i ≤ W
Greedy approach
Possible greedy approach:
Approach1: Pick item with largest value first
Approach2: Pick item with least weight first
Approach3: Pick item with largest value per
weight first
None of the above approaches work
Exercise:
Prove by giving counterexamples.
Brute Force Approach
• Brute Force
The naive way to solve this problem is to go through all 2^n subsets of the n items and pick the subset with a legal weight that maximizes the value of the knapsack.
Recursive Formulation
Very similar to the Coin Change Problem.
S_k: set of items numbered 1 to k.
Define B[k, w] to be the best selection from S_k with weight at most w. Then

B[k, w] = B[k−1, w]                                if w_k > w
B[k, w] = max{ B[k−1, w], B[k−1, w − w_k] + b_k }  otherwise

Our goal is to find B[n, W], where n is the total number of items and W is the maximal weight the knapsack can carry.
This does have the Optimal Substructure Property.
Exercise – prove the Optimal Substructure Property.
Optimal Substructure
Consider a most valuable load L with total weight w(L) ≤ W.
Suppose we remove item I_j from this optimal load L.
The remaining load L′_j = L − {I_j} must be a most valuable load weighing at most W′_j = W − w_j pounds that the thief can take from S′_j = S − {I_j}.
That is, L′_j should be an optimal solution to the 0-1 Knapsack Problem(S′_j, W′_j).
The 0-1 Knapsack Problem
Exercise
Overlapping Subproblems
Pseudo-code
for w = 0 to W
    B[0, w] = 0
for i = 1 to n
    B[i, 0] = 0
for i = 1 to n
    for w = 0 to W
        if w_i <= w                          // item i can be part of the solution
            if b_i + B[i−1, w − w_i] > B[i−1, w]
                B[i, w] = b_i + B[i−1, w − w_i]
            else
                B[i, w] = B[i−1, w]
        else
            B[i, w] = B[i−1, w]              // w_i > w
Running time – O(nW)
The 0-1 Knapsack Problem
How to find the actual knapsack items?
Start with i = n and k = W; repeat until i = 0:
if B[i, k] ≠ B[i−1, k] then
    mark the i-th item as in the knapsack
    i = i − 1, k = k − w_i
else
    i = i − 1          // the i-th item is not in the knapsack
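Putting the table and this traceback together, a small self-contained Python sketch (ours, using the slides' naming) is:

def knapsack_01(w, b, W):
    # w[1..n] weights, b[1..n] benefits (index 0 unused), capacity W.
    n = len(w) - 1
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for cap in range(W + 1):
            if w[i] <= cap and b[i] + B[i - 1][cap - w[i]] > B[i - 1][cap]:
                B[i][cap] = b[i] + B[i - 1][cap - w[i]]   # take item i
            else:
                B[i][cap] = B[i - 1][cap]                 # skip item i
    items, cap = [], W                                    # traceback
    for i in range(n, 0, -1):
        if B[i][cap] != B[i - 1][cap]:                    # item i was taken
            items.append(i)
            cap -= w[i]
    return B[n][W], items

With the example on the next slide (W = 20, weights 3, 8, 9, 8 and values 10, 4, 9, 11 for I0 to I3), knapsack_01([0, 3, 8, 9, 8], [0, 10, 4, 9, 11], 20) returns (30, [4, 3, 1]), i.e. the set {I0, I2, I3}.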
Optimal Substructure- Alternate
Alternate attempt to characterize a sub-problem as
follows:
Let Ok be the optimal subset of elements from {I0, I1, …,
Ik}.
Observe
Optimal subset from the elements {I0, I1, …, Ik+1} may
not correspond to the optimal subset of elements from
{I0, I1, …, Ik}
That means, the solution to the optimization problem for
Sk+1 might NOT contain the optimal solution of problem
Sk.
where Sk: Set of items numbered 1 to k.
Optimal Substructure
Example: Let W = 20

Item  Weight  Value
I0    3       10
I1    8       4
I2    9       9
I3    8       11

• The best set of items from {I0, I1, I2} is {I0, I1, I2}.
• BUT the best set of items from {I0, I1, I2, I3} is {I0, I2, I3}.
Note
1. The optimal solution {I0, I2, I3} of S3 does NOT contain the optimal solution {I0, I1, I2} of S2.
2. {I0, I2, I3} builds upon the solution {I0, I2}, which is really the optimal subset of {I0, I1, I2} with weight 12 or less.
(We incorporate this idea in the solution.)
Subset Sum
Subset Sum Problem (Decision Problem)
Given a set X = {x1, x2, . . . , xn} of positive integers and a
target value T, we wish to find whether there is a
subset of X with sum exactly equal to T.
Subset Sum Problem (Optimization Problem)
Given a set X = {x1, x2, . . . , xn} of positive integers and a target value T, we wish to find a subset S of X that maximizes Σ_{i∈S} x_i while keeping Σ_{i∈S} x_i ≤ T.
Subset Sum
OPT(j, t): the optimal value for the subproblem consisting of the first j integers and target t, for every 0 ≤ t ≤ T.
OPT(j, t) = max{ OPT(j−1, t), x_j + OPT(j−1, t − x_j) }
(the second option is allowed only when x_j ≤ t)
Exercise
1. Optimal Substructure Property
2. Overlapping Subproblems
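A bottom-up Python sketch of this recurrence (ours), which also answers the decision version by checking whether the optimum equals T:

def subset_sum(x, T):
    # x[1..n] positive integers (x[0] unused), target T.
    # OPT[j][t] = largest achievable sum <= t using the first j integers.
    n = len(x) - 1
    OPT = [[0] * (T + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        for t in range(T + 1):
            OPT[j][t] = OPT[j - 1][t]                  # skip x[j]
            if x[j] <= t:                              # take x[j] if it fits
                OPT[j][t] = max(OPT[j][t], x[j] + OPT[j - 1][t - x[j]])
    return OPT[n][T], OPT[n][T] == T                   # (optimum, decision)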
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 6
Recap - Recursive Formulation
Coin Change Problem
c[i][j] = min{ c[i−1][j], 1 + c[i][j − d[i]] }
0-1 Knapsack problem
B[k, w] = B[k−1, w] if w_k > w
B[k, w] = max{ B[k−1, w], B[k−1, w − w_k] + b_k } otherwise
Subset Sum problem
OPT(j, t) = max{ OPT(j−1, t), x_j + OPT(j−1, t − x_j) }
Optimal Substructure – Cut-Paste Argument
Dynamic Programming
Two Problems
Problem 1: Coin Change Problem
Problem 2: 0 – 1 Knapsack Problem
First Step:
Brute Force Solution
- Exponential Running Time
First Attempt:
“Greedy Strategy”
- Show “Greedy Strategy” fails by giving
counterexample
Observations
Question 1
How many subproblems are used in an optimal solution for the original problem?
• Coin Change Problem: two. Which?
• 0-1 Knapsack Problem: two. Which?
Question 2
How many choices do we have in determining which subproblems to use in an optimal solution?
• Coin Change Problem: two. Why?
• 0-1 Knapsack Problem?
Matrix Chain Multiplication
Recalling Matrix Multiplication
If A is a p × q matrix and B is a q × r matrix, then the product C = AB is a p × r matrix given by
c[i, j] = Σ_{k=1}^{q} a[i, k] · b[k, j], where 1 ≤ i ≤ p and 1 ≤ j ≤ r
OR
the dot product of the i-th row of A with the j-th column of B.
Properties
• Matrix multiplication is associative
i.e., A1(A2A3) = (A1A2 )A3
So parenthesization does not change result
• It may appear that the amount of work done
won’t change if you change the parenthesization
of the expression
• But that is not the case!
Example
• Let us use the following example:
– Let A be a 2x10 matrix
– Let B be a 10x50 matrix
– Let C be a 50x20 matrix
• Consider computing A(BC):
(BC) takes 10·50·20 = 10000 multiplications; then A(BC) takes 2·10·20 = 400.
Total multiplications = 10000 + 400 = 10400
• Consider computing (AB)C:
(AB) takes 2·10·50 = 1000 multiplications; then (AB)C takes 2·50·20 = 2000.
Total multiplications = 1000 + 2000 = 3000
A substantial difference in the cost of computing the product.
Matrix Chain Multiplication
• Thus, our goal today is:
• Given a chain of matrices to multiply, determine the fewest number of multiplications necessary to compute the product.
• Let d_i × d_{i+1} denote the dimensions of matrix A_i.
• Let A = A0 A1 ... An−1.
• Let N_{i,j} denote the minimal number of multiplications necessary to find the product A_i A_{i+1} ... A_j.
• We want to determine N_{0,n−1}, the minimal number of multiplications necessary to find A.
• That is, we want to determine how to parenthesize the multiplications.
Matrix Chain Multiplication
1st Approach –
Brute Force
• Given the matrices A1,A2,A3,A4 Assume the
dimensions of A1=d0×d1 etc
• Five possible parenthesizations of these arrays, along
with the number of multiplications:
(A1A2)(A3A4):d0d1d2+d2d3d4+d0d2d4
((A1A2)A3)A4:d0d1d2+d0d2d3+d0d3d4
(A1(A2A3))A4:d1d2d3+d0d1d3+d0d3d4
A1((A2A3)A4):d1d2d3+d1d3d4+d0d1d4
A1(A2(A3A4)):d2d3d4+d1d2d4+d0d1d4
Matrix Chain Multiplication
Questions
• How many possible parenthesizations are there?
• At least, can we give a lower bound?
The number of parenthesizations is at least Ω(2^n). Exercise: prove it.
The exact number is given by the recurrence relation
T(n) = Σ_{k=1}^{n−1} T(k) · T(n − k)
because the original product can be split into two parts in (n − 1) places, and each split is to be parenthesized optimally.
Matrix Chain Multiplication
Solution to the recurrence is the famous Catalan numbers:
T(n) = Ω(4^n / n^{3/2})
Question: Any better approach?
Greedy approach??
Dynamic Programming
Matrix Chain Multiplication
Step1:
Optimal Substructure Property
If a particular parenthesization of the whole product is
optimal,
then any sub-parenthesization in that product is optimal as
well.
What does it mean?
– If (A (B ((CD) (EF)) ) ) is optimal
– Then (B ((CD) (EF)) ) is optimal as well
How to Prove?
Matrix Chain Multiplication
• Cut - Paste Argument
– Because if it wasn't,
and say ( ((BC) (DE)) F) was better,
then it would also follow that
(A ( ((BC) (DE)) F) ) was better than
(A (B ((CD) (EF)) ) ),
– contradicting its optimality!
Matrix Chain Multiplication
Step 2:
Recursive Formulation
Let M[i, j] represent the minimum number of multiplications required for the matrix product A_i × ⋯ × A_j, for 1 ≤ i ≤ j ≤ n.
High-level parenthesization for A_{i..j}
Notation: A_{i..j} = A_i × ... × A_j
For any optimal multiplication sequence, at the last step we are multiplying two matrices A_{i..k} and A_{k+1..j} for some k, i.e.,
A_{i..j} = (A_i × ... × A_k)(A_{k+1} × ... × A_j) = A_{i..k} A_{k+1..j}
Matrix Chain Multiplication
Thus,
M[i, j] = M[i, k] + M[k+1, j] + d_{i−1} d_k d_j
Thus the problem of determining the optimal sequence of multiplications is broken down to the following question:
How do we decide where to split the chain? (What is k?)
Answer:
Search all possible values of k and take the minimum over them.
Matrix Chain Multiplication
Therefore,
M[i, j] = 0                                                          if i = j
M[i, j] = min_{i ≤ k < j} { M[i, k] + M[k+1, j] + d_{i−1} d_k d_j }  if i < j
Step 3:
Compute the value of an optimal solution in a bottom-up fashion.
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 7
Matrix Chain Multiplication
• Thus, our goal today is:
• Given a chain of matrices to multiply, determine the fewest number of multiplications necessary to compute the product.
• Let d_i × d_{i+1} denote the dimensions of matrix A_i.
• Let A = A0 A1 ... An−1.
• Let N_{i,j} denote the minimal number of multiplications necessary to find the product A_i A_{i+1} ... A_j.
• We want to determine N_{0,n−1}, the minimal number of multiplications necessary to find A.
• That is, we want to determine how to parenthesize the multiplications.
Matrix Chain Multiplication
Questions
• How many possible parenthesizations are there?
• At least, can we give a lower bound?
The number of parenthesizations is at least Ω(2^n). Exercise: prove it.
The exact number is given by the recurrence relation
T(n) = Σ_{k=1}^{n−1} T(k) · T(n − k)
because the original product can be split into two parts in (n − 1) places, and each split is to be parenthesized optimally.
Matrix Chain Multiplication
Solution to the recurrence is the famous Catalan numbers:
T(n) = Ω(4^n / n^{3/2})
Optimal Substructure Property
If a particular parenthesization of the whole product is
optimal,
then any sub-parenthesization in that product is
optimal as well.
Matrix Chain Multiplication
Step 2:
Recursive Formulation
Let M[i, j] represent the minimum number of multiplications required for the matrix product A_i × ⋯ × A_j, for 1 ≤ i ≤ j ≤ n.
High-level parenthesization for A_{i..j}
Notation: A_{i..j} = A_i × ... × A_j
For any optimal multiplication sequence, at the last step we are multiplying two matrices A_{i..k} and A_{k+1..j} for some k, i.e.,
A_{i..j} = (A_i × ... × A_k)(A_{k+1} × ... × A_j) = A_{i..k} A_{k+1..j}
Matrix Chain Multiplication
Thus,
M[i, j] = M[i, k] + M[k+1, j] + d_{i−1} d_k d_j
Thus the problem of determining the optimal sequence of multiplications is broken down to the following question:
How do we decide where to split the chain? (What is k?)
Answer:
Search all possible values of k and take the minimum over them.
Matrix Chain Multiplication
Therefore,
M[i, j] = 0                                                          if i = j
M[i, j] = min_{i ≤ k < j} { M[i, k] + M[k+1, j] + d_{i−1} d_k d_j }  if i < j
Step 3:
Compute the value of an optimal solution in a bottom-up fashion.
Overlapping Subproblem
Matrix Chain Multiplication
Which sub-problems are necessary to solve first?
By Definition M[i,i] = 0
Clearly it's necessary to solve the smaller problems
before the larger ones.
• In particular, we need to know M[i, i+1], the number of
multiplications to multiply any adjacent pair of
matrices before we move onto larger tasks.
Chains of length 1
• The next task we want to solve is finding all the values
of the form M[i, i+2], then M[i, i+3], etc.
Chains of length 2 & then chains of length 3 & so on
Matrix Chain Multiplication
That is, we calculate in the order M[i, i], M[i, i+1], M[i, i+2], ..., diagonal by diagonal.
Matrix Chain Multiplication
• This tells us the order in which to build the
table:
By diagonals
Diagonal indices:
• On diagonal 0, j=i
• On diagonal 1, j=i+1
• On diagonal q, j=i+q
• On diagonal n−1, j=i+n−1
Matrix Chain Multiplication
Example
• Array dimensions:
• A1: 2 × 3, A2: 3 × 5, A3: 5 × 2
• A4: 2 × 4, A5: 4 × 3
M[2, 5] = min{ M[2,2] + M[3,5] + d1 d2 d5,
               M[2,3] + M[4,5] + d1 d3 d5,
               M[2,4] + M[5,5] + d1 d4 d5 }
Matrix Chain Multiplication
Table for M[i, j] (shown as a figure on the slide; it is filled using the recurrence, e.g. the M[2, 5] computation above)
Matrix Chain Multiplication
Optimal locations for parentheses:
Table for s[i, j] (figure on the slide)
The multiplication sequence is recovered as follows:
s[1, 5] = 3, so the last split is (A1 A2 A3)(A4 A5)
s[1, 3] = 1, so the left part splits as (A1 (A2 A3))
Hence the final multiplication sequence is (A1 (A2 A3))(A4 A5)
Matrix Chain Multiplication
Optimal locations for parentheses:
Idea: maintain an array s[1...n, 1...n] where s[i, j] denotes the k of the optimal split in computing A_{i..j} = A_{i..k} A_{k+1..j}.
The array s[1...n, 1...n] can be used recursively to recover the multiplication sequence.
Matrix Chain Multiplication
Pseudo code
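The pseudocode on this slide was a figure; a compact Python sketch of the bottom-up algorithm (our rendering, directly following the recurrence above) is:

import math

def matrix_chain_order(d):
    # d[0..n]: matrix A_i has dimensions d[i-1] x d[i], for i = 1..n.
    n = len(d) - 1
    M = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):               # chain length, i.e. diagonal
        for i in range(1, n - length + 2):
            j = i + length - 1
            M[i][j] = math.inf
            for k in range(i, j):                # try every split point
                cost = M[i][k] + M[k + 1][j] + d[i - 1] * d[k] * d[j]
                if cost < M[i][j]:
                    M[i][j], s[i][j] = cost, k
    return M, s

def parenthesize(s, i, j):
    # Recover the optimal parenthesization from the split table s.
    if i == j:
        return "A%d" % i
    k = s[i][j]
    return "(" + parenthesize(s, i, k) + " " + parenthesize(s, k + 1, j) + ")"

For the dimensions 2 × 3, 3 × 5, 5 × 2, 2 × 4, 4 × 3 used in the example above, matrix_chain_order([2, 3, 5, 2, 4, 3]) gives s[1][5] = 3, and parenthesize(s, 1, 5) yields ((A1 (A2 A3)) (A4 A5)), matching the sequence recovered from the s table.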
Observations
Question 1
How many subproblems are used in an optimal solution for the original problem?
• Coin Change Problem: two. Which?
• Matrix Chain Multiplication: two. Which?
Question 2
How many choices do we have in determining which subproblems to use in an optimal solution?
• Coin Change Problem: two. Why?
• Matrix Chain Multiplication: j − i choices for k (splitting the product)
All-Pairs Shortest Path
Let G = (V, E) be a graph with no negative weight cycles.
Vertices are labeled 1 to n.
If (i, j) is an edge, its weight is denoted by w_ij.
Optimal Substructure Property
Easy to Prove
Recursive formulation - 1
Let l_ij^(m) be the minimum weight of any path from vertex i to vertex j that contains at most m edges.
Then,
l_ij^(0) = 0 if i = j, and ∞ if i ≠ j
l_ij^(m) = min( l_ij^(m−1), min_{1≤k≤n} { l_ik^(m−1) + w_kj } )
(the inner minimum is over all predecessors k of j)
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 8
All-Pairs Shortest Path
Let G = (V, E) be a graph with no negative weight cycles.
Vertices are labeled 1 to n.
If (i, j) is an edge, its weight is denoted by w_ij.
Optimal Substructure Property
Easy to Prove
Recursive formulation - 1
Let l_ij^(m) be the minimum weight of any path from vertex i to vertex j that contains at most m edges.
Then,
l_ij^(0) = 0 if i = j, and ∞ if i ≠ j
l_ij^(m) = min( l_ij^(m−1), min_{1≤k≤n} { l_ik^(m−1) + w_kj } )
(the inner minimum is over all predecessors k of j)
Recursive formulation - 2
Floyd-Warshall Algorithm
Let d_ij^(k) be the weight of the shortest path from vertex i to vertex j for which all the intermediate vertices are in the set {1, 2, ..., k}.
Then
d_ij^(0) = w_ij
d_ij^(k) = min( d_ij^(k−1), d_ik^(k−1) + d_kj^(k−1) ) for k ≥ 1
We have tacitly used the fact that an optimal path through k does not visit k twice.
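A direct Python sketch of this recurrence (ours). It updates a single matrix in place, which is safe because row k and column k do not change during round k:

import math

def floyd_warshall(w):
    # w: n x n matrix with w[i][j] the edge weight (math.inf if no edge)
    # and w[i][i] = 0. Returns all-pairs shortest-path weights.
    n = len(w)
    d = [row[:] for row in w]                  # d^(0) = w
    for k in range(n):                         # allow k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

The running time is O(n^3).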
Optimal Binary Search Tree
We have n keys (represented as k1,k2,…,kn) in sorted order (so
that k1<k2<…<kn),
and we wish to build a binary search tree from these keys.
For each ki ,we have a probability pi that a search will be for ki.
In contrast, some searches may be for values not among the keys, and so we also have n+1 "dummy keys" d0, d1, ..., dn.
In particular, d0 represents all values less than k1, and
dn represents all values greater than kn, and for i = 1,2,…,n-1, the
dummy key di represents all values between ki and ki+1.
For each di ,we have a probability qi that a search will be for di
The dummy keys are leaves (external nodes), and the actual keys are internal nodes.
Optimal Binary Search Tree
• Every search is either successful or unsuccessful, and so we have
Σ_{i=1}^{n} p_i + Σ_{i=0}^{n} q_i = 1
Because we have probabilities of searches for each key
and each dummy key, we can determine the expected
cost of a search in a given binary search tree T.
Let us assume that the actual cost of a search is the number of nodes examined, i.e., the depth of the node found by the search in T, plus 1.
Optimal Binary Search Tree
Then the expected cost of a search in T is
E[search cost in T]
= Σ_{i=1}^{n} (depth_T(k_i) + 1)·p_i + Σ_{i=0}^{n} (depth_T(d_i) + 1)·q_i
= 1 + Σ_{i=1}^{n} depth_T(k_i)·p_i + Σ_{i=0}^{n} depth_T(d_i)·q_i    (1)
where depth_T denotes a node's depth in the tree T.
Example: two binary search trees for the probabilities below (figures on the slide); keys k1, ..., k5 are internal nodes and dummy keys d0, ..., d5 are leaves.

i    0     1     2     3     4     5
pi         0.15  0.10  0.05  0.10  0.20
qi   0.05  0.10  0.05  0.05  0.05  0.10

Figure (a) costs 2.80. Figure (b) costs 2.75.
Optimal Binary Search Tree
We start with a problem regarding binary search trees in an environment in which the probabilities of accessing elements, and the gaps between elements, are known.
Goal:
We want to find the binary search tree that minimizes the expected number of nodes probed in a search.
Optimal Binary Search Tree
Brute Force Approach:
Exhaustive checking of all possibilities.
Question
What is the number of binary search trees on n keys?
Answer
The number of BSTs is at least Ω(2^n).
• Greedy Strategy???
1. Put the most frequent key at the root, and then recursively build the left and right subtrees.
2. Build a balanced tree, which makes the height smallest.
Dynamic Programming Solution
The Dynamic Program for the optimal search tree follows
the same pattern we have seen multiple times now.
Step1 : Optimal Substructure Property
Exercise
Step2 : Recursive Formulation
We pick our subproblem domain as finding an Optimal BST
containing the keys ki ,…,kj , where i ≥1, j ≤ n, and
j ≥ i-1. (It is when j = i-1 that there are no actual keys; we
have just the dummy key di-1.)
Let us define e[i, j] as the expected cost of searching an
Optimal BST containing the keys ki ,…, kj
Ultimately, we wish to compute e[1,n].
Optimal Binary Search Tree
When j = i − 1
Then we have just the dummy key d_{i−1}.
The expected search cost is e[i, i−1] = q_{i−1}.
When j ≥ i, we need to select a root k_r from among k_i, ..., k_j,
and then make an Optimal BST with keys k_i, ..., k_{r−1} as its left subtree
and an Optimal BST with keys k_{r+1}, ..., k_j as its right subtree.
Optimal Binary Search Tree
What happens to the expected search cost of a subtree when it becomes a subtree of a node?
The depth of each node in the subtree increases by 1, so by equation (1) the expected search cost of this subtree increases by the sum of all the probabilities in the subtree.
For a subtree with keys k_i, ..., k_j, let us denote this sum of probabilities as
w(i, j) = Σ_{l=i}^{j} p_l + Σ_{l=i−1}^{j} q_l
Optimal Binary Search Tree
Thus, if k_r is the root of an optimal subtree containing keys k_i, ..., k_j, we have
e[i, j] = p_r + (e[i, r−1] + w(i, r−1)) + (e[r+1, j] + w(r+1, j))
Noting that w(i, j) = w(i, r−1) + p_r + w(r+1, j),
we rewrite e[i, j] as
e[i, j] = e[i, r−1] + e[r+1, j] + w(i, j)
The recursive equation above assumes that we know which node k_r to use as the root.
We choose the root that gives the lowest expected search cost.
Optimal Binary Search Tree
Final recursive formulation:
e[i, j] = q_{i−1}                                                  if j = i − 1
e[i, j] = min_{i ≤ r ≤ j} { e[i, r−1] + e[r+1, j] + w(i, j) }      if i ≤ j
The e[i, j] values give the expected search costs in Optimal BSTs.
To help us keep track of the structure of an Optimal BST, we define root[i, j], for 1 ≤ i ≤ j ≤ n, to be the index r for which k_r is the root of an Optimal BST containing keys k_i, ..., k_j.
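A bottom-up Python sketch of this recurrence (ours), with key probabilities p[1..n] (p[0] unused) and dummy probabilities q[0..n]:

import math

def optimal_bst(p, q):
    n = len(p) - 1
    # e[i][j] for 1 <= i <= n+1 and i-1 <= j <= n; base case e[i][i-1] = q[i-1].
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]                  # only the dummy key d_{i-1}
        w[i][i - 1] = q[i - 1]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            e[i][j] = math.inf
            for r in range(i, j + 1):           # try each root k_r
                cost = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if cost < e[i][j]:
                    e[i][j], root[i][j] = cost, r
    return e, root

For the probability table in the earlier example, e[1][5] evaluates to 2.75, the cost of the better of the two trees.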
Longest Common Subsequence (LCS)
Given two sequences
X = x1, x2, …, xm
Y = y1, y2, …, yn
find a maximum length common subsequence
(LCS) of X and Y
Example
X = A, B, C, B, D, A, B
Subsequences of X:
– A subset of elements in the sequence taken in order
For example, A, B, D, B, C, D, B, etc.
Example
Example
X = A, B, C, B, D, A, B
Y = B, D, C, A, B, A
B, C, B, A is a longest common subsequence of
X and Y (length = 4)
B, D, A, B is also a longest common
subsequence of X and Y (length = 4)
B, D, A, however is not a LCS of X and Y
Brute Force Solution
Let the length of X be m and the length of Y be n.
Brute Force
For every subsequence of X, check whether it is a subsequence of Y.
Question: How many subsequences are there of X?
There are 2^m subsequences of X.
Question: What is the time required to check each subsequence?
Each subsequence takes O(n) time to check:
scan Y for the first letter, then scan for the second, and so on.
Therefore, running time: O(n·2^m) – exponential.
Making the choice
X = A, B, Z, D
Y = Z, B, D
Choice : include one element into the common
sequence (D) and solve the resulting
subproblem
Notations
• Given a sequence X = x1, x2, …, xm
The i-th prefix of X, for i = 0, 1, 2, …, m is
Xi = x1, x2, …, xi

• c[i, j] = the length of a LCS of the sequences


Xi = x1, x2, …, xi and Yj = y1, y2, …, yj
Recursive Solution
Case 1: xi = yj
Example
Xi = ⟨D, B, Z, E⟩
Yj = ⟨Z, B, E⟩
c[i, j] = c[i−1, j−1] + 1
• Append xi = yj to the LCS of Xi−1 and Yj−1
• Must find an LCS of Xi−1 and Yj−1
Recursive Solution
Case 2: xi  yj
Example
Xi = A, B, Z, G & Yj = A, G, Z
c[i, j] = max { c[i - 1, j], c[i, j-1] }
• Must solve two subproblems
1. find a LCS of Xi-1 and Yj:
Xi-1 = A, B, Z and Yj = A, G, Z
2. find a LCS of Xi and Yj-1:
Xi = A, B, Z, G and Yj-1 = A, G
Recursive Solution
c[i, j] = 0                           if i = 0 or j = 0
c[i, j] = c[i−1, j−1] + 1             if i, j > 0 and xi = yj
c[i, j] = max( c[i, j−1], c[i−1, j] ) if i, j > 0 and xi ≠ yj
Overlapping Subproblems
To find an LCS of X and Y,
we may need to find the LCS of X and Yn−1 and that of Xm−1 and Y.
Both of the above subproblems share the subproblem of finding the LCS of Xm−1 and Yn−1.
Subproblems share subsubproblems.
Optimal Substructure
Optimal Substructure Property
Easy to prove that in both the cases
Optimal solution to a problem includes optimal
solutions to subproblems
Cut-Paste Argument
Computing the Length of the LCS
c[i, j] = 0                           if i = 0 or j = 0
c[i, j] = c[i−1, j−1] + 1             if xi = yj
c[i, j] = max( c[i, j−1], c[i−1, j] ) if xi ≠ yj
The values are stored in an (m+1) × (n+1) table whose row 0 and column 0 are all zeros (the slide shows the table schematically); the table is filled row by row (first row, then second, ...), and the answer is c[m, n].
Computing the table
c[i, j] = 0                           if i = 0 or j = 0
c[i, j] = c[i−1, j−1] + 1             if xi = yj
c[i, j] = max( c[i, j−1], c[i−1, j] ) if xi ≠ yj
Along with c[i, j] we also compute and record b[i, j], which tells us what choice was made to obtain the optimal value:
If xi = yj
    b[i, j] = "↖"
Else, if c[i−1, j] ≥ c[i, j−1]
    b[i, j] = "↑"
Else
    b[i, j] = "←"
Pseudo Code for LCS
for i ← 1 to m
    c[i, 0] ← 0
for j ← 0 to n
    c[0, j] ← 0
for i ← 1 to m
    for j ← 1 to n
        if xi = yj
            c[i, j] ← c[i−1, j−1] + 1
            b[i, j] ← "↖"
        else if c[i−1, j] ≥ c[i, j−1]
            c[i, j] ← c[i−1, j]
            b[i, j] ← "↑"
        else
            c[i, j] ← c[i, j−1]
            b[i, j] ← "←"
return c and b
Running time: O(mn)
Example
X = ⟨A, B, C, B, D, A, B⟩, Y = ⟨B, D, C, A, B, A⟩
Filling the table with the recurrence above gives (arrow annotations omitted):

         B  D  C  A  B  A
      0  0  0  0  0  0  0
   A  0  0  0  0  1  1  1
   B  0  1  1  1  1  2  2
   C  0  1  1  2  2  2  2
   B  0  1  1  2  2  3  3
   D  0  1  2  2  2  3  3
   A  0  1  2  2  3  3  4
   B  0  1  2  2  3  4  4
Constructing a LCS
Start at b[m, n] and follow the arrows. Whenever we encounter a "↖" in b[i, j], xi = yj is an element of the LCS.
For the table above, this traces out the LCS ⟨B, C, B, A⟩.
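The table-filling and traceback in runnable form (a Python sketch of the pseudocode above; instead of storing the b table of arrows, it re-derives each arrow during the traceback):

def lcs(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:                 # case x_i = y_j
                c[i][j] = c[i - 1][j - 1] + 1
            else:                                    # case x_i != y_j
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    out, i, j = [], m, n                             # traceback from c[m][n]
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:                     # diagonal arrow
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:             # up arrow
            i -= 1
        else:                                        # left arrow
            j -= 1
    return c[m][n], "".join(reversed(out))

Here lcs("ABCBDAB", "BDCABA") returns (4, "BCBA"), as traced above.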
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 9
DP - Exercise
Exercise:
1. Partition Problem
You have a set of n integers. The problem is to partition these integers into two subsets such that you minimize |S1 − S2|, where S1 and S2 denote the sums of the elements in each of the two subsets.
2. Edit Distance
Compute the minimum number of edits (insertions, deletions, and substitutions of characters) needed to transform the first string into the second.
3. Independent Set
An independent set is a set I ⊆ V of vertices such that for all u, v ∈ I, (u, v) ∉ E. The independent set problem is to find a maximum size independent set in G.
DP - Exercise
4. Palindromes
Given a string s, we are interested in computing the minimum number of palindromes from which one can construct s (that is, the minimum k such that s can be written as w1 w2 ... wk where w1, w2, ..., wk are all palindromes).
5. Longest Palindromic Subsequence
Devise an algorithm that takes a sequence x[1, ..., n] and returns the length of the longest palindromic subsequence.
6. Longest Increasing Subsequence
Given a sequence of numbers, find the longest increasing subsequence.
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 10
Activity Selection Problem
• Input: A set of activities S = {a1,…, an}
• Each activity ai has start time si and a finish time fi
such that 0 ≤ si < fi < ∞
• An activity ai takes place in the half-open interval [si
, fi )
• Two activities are compatible if and only if their intervals do not overlap,
i.e., ai and aj are compatible if [si, fi) and [sj, fj) do not overlap (i.e., si ≥ fj or sj ≥ fi)
• Output: A maximum-size subset of mutually
compatible activities
• Also Called Interval Scheduling Problem
Activity Selection Problem
Example (activities shown on the slide)
A = {1, 4, 8, 11} is an optimal solution.
A = {2, 4, 9, 11} is also an optimal solution.
Activity Selection Problem
• Brute Force Solution
All possible subsets
Running time: O(2n)
• “Greedy” Strategies:
Greedy 1: Pick the shortest activity, eliminate all activities
that conflict with it, and recurse.
Greedy 2: Pick the activity that starts first, eliminate all the
activities that conflict with it, and recurse.
Greedy 3: Pick the activity that ends first, eliminate all the
activities that conflict with it, and recurse.
Observe
• Greedy 1 and Greedy 2 do not work
• Greedy 3 seems to work (Why?)
Greedy Choice
Why?
• Intuitively, this choice leaves as much opportunity as possible for the remaining activities to be scheduled.
• That is, the greedy choice is the one that maximizes the amount of unscheduled time remaining.
Recipe for Dynamic Programming
Step 1: Identify optimal substructure.
Step 2: Find a recursive formulation for the
value of the optimal solution.
Step 3: Use dynamic programming to find the
value of the optimal solution.
(bottom–up manner)
Notation
S_ij = subset of activities that start after activity a_i finishes and finish before activity a_j starts
Optimal substructure
Let A_ij be an optimal solution, i.e., a maximum-size set of mutually compatible activities in S_ij.
At some point we will need to make a choice to include some activity a_k, with start time s_k and finish time f_k, in this solution. This choice leaves two sets of compatible candidates after a_k is taken out:
• S_ik: activities that start after a_i finishes, and finish before a_k starts
• S_kj: activities that start after a_k finishes, and finish before a_j starts
Observe
• Aik = Aij∩ Sik is the optimal solution to Sik
• Akj = Aij ∩ Skj is the optimal solution to Skj
Recursive Formulation
• Let c[i, j] be the number of activities in a maximum-size subset of mutually compatible activities in S_ij.
• The solution to the whole problem is then c[0, n+1], the value for S_{0,n+1}.
• The recurrence relation for c[i, j] is:
c[i, j] = 0 if S_ij = ∅
c[i, j] = max_{a_k ∈ S_ij} { c[i, k] + c[k, j] + 1 } otherwise
We can solve this using dynamic programming, but there is an important property which we can exploit.
We Show this is a Greedy Problem
Theorem:
Consider any nonempty subproblem S_ij, and let a_m be the activity in S_ij with the earliest finish time: f_m = min{ f_k : a_k ∈ S_ij }. Then
1. Activity a_m is used in some maximum-size subset of mutually compatible activities of S_ij.
2. The subproblem S_im is empty, so that choosing a_m leaves S_mj as the only one that may be nonempty.
Proof
Proof:
Let A_ij be an optimal solution to S_ij, and let a_k ∈ A_ij have the earliest finish time in A_ij.
If a_k = a_m we are done.
Otherwise, let A′_ij = (A_ij − {a_k}) ∪ {a_m} (substitute a_m for a_k).
Claim: the activities in A′_ij are disjoint.
Proof of Claim:
The activities in A_ij are disjoint because it was a solution.
Since a_k is the first activity in A_ij to finish, and f_m ≤ f_k (a_m is the earliest in S_ij), a_m cannot overlap with any other activities in A′_ij.
Since |A′_ij| = |A_ij|, we conclude that A′_ij is also an optimal solution to S_ij, and it includes a_m.
Solution
Greedy Solution
• Repeatedly choose the activity that finishes
first
• Remove activities that are incompatible with
it, and
• Repeat on the remaining activities until no
activities remain.
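A short Python sketch of this greedy rule (ours): sort by finish time, then keep every activity that starts no earlier than the finish time of the last activity chosen.

def select_activities(activities):
    # activities: list of (start, finish) pairs, half-open intervals [s, f).
    chosen, last_finish = [], float("-inf")
    for s, f in sorted(activities, key=lambda a: a[1]):  # earliest finish first
        if s >= last_finish:            # compatible with everything chosen
            chosen.append((s, f))
            last_finish = f
    return chosen

Sorting dominates, so the running time is O(n log n).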
We Show this is a Greedy Problem
Consequences of the Theorem?
1. Normally we have to inspect all subproblems; here we only have to choose one subproblem.
2. What this theorem says is that we only have to find the first activity with the smallest finishing time.
3. This means we can solve the problem top-down by selecting the optimal solution to the local subproblem, rather than bottom-up as in DP.
Example

To solve the Si,j:


1. Choose the activity am with the earliest finish time.
2. Solution of Si,j = {am} U Solution of subproblem Sm,j
To solve S0,12, we select a1 that will finish earliest, and solve for
S1,12.
To solve S1,12, we select a4 that will finish earliest, and solve for
S4,12.
To solve S4,12, we select a8 that will finish earliest, and solve for
S8,12
And so on (Solve the problem in a top-down fashion)
Elements of greedy strategy
• Determine the optimal substructure
• Develop the recursive solution
• Prove one of the optimal choices is the greedy choice yet safe
• Show that all but one of subproblems are empty after greedy
choice
• Develop a recursive algorithm that implements the greedy
strategy
• Convert the recursive algorithm to an iterative one.

This was designed to show the similarities and differences


between dynamic programming and the Greedy approach;
These steps can be simplified if we apply the Greedy approach
directly
Applying Greedy Strategy
Steps in designing a greedy algorithm
The optimal substructure property holds
(same as dynamic programming)
Other Greedy Strategies
Other Greedy Strategies:
Greedy 4: Pick the activity that has the fewest conflicts,
eliminate all the activities that conflict with it, and
recurse.
Any More???
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 11
Activity Selection Problem
• Input: A set of activities S = {a1,…, an}
• Each activity ai has start time si and a finish time fi
such that 0 ≤ si < fi < ∞
• An activity ai takes place in the half-open interval [si
, fi )
• Two activities are compatible if and only if their intervals do not overlap,
i.e., ai and aj are compatible if [si, fi) and [sj, fj) do not overlap (i.e., si ≥ fj or sj ≥ fi)
• Output: A maximum-size subset of mutually
compatible activities
• Also Called Interval Scheduling Problem
Activity Selection Problem
• Brute Force Solution
All possible subsets
Running time: O(2n)
• “Greedy” Strategies:
Greedy 1: Pick the shortest activity, eliminate all activities
that conflict with it, and recurse.
Greedy 2: Pick the activity that starts first, eliminate all the
activities that conflict with it, and recurse.
Greedy 3: Pick the activity that ends first, eliminate all the
activities that conflict with it, and recurse.
Observe
• Greedy 1 and Greedy 2 do not work
• Greedy 3 seems to work (Why?)
Notation
S_ij = subset of activities that start after activity a_i finishes and finish before activity a_j starts
Let c[i, j] be the number of activities in a maximum-size subset of mutually compatible activities in S_ij.
We Show this is a Greedy Problem
Theorem:
Consider any nonempty subproblem S_ij, and let a_m be the activity in S_ij with the earliest finish time: f_m = min{ f_k : a_k ∈ S_ij }. Then
1. Activity a_m is used in some maximum-size subset of mutually compatible activities of S_ij.
2. The subproblem S_im is empty, so that choosing a_m leaves S_mj as the only one that may be nonempty.
Solution
Greedy Solution
• Repeatedly choose the activity that finishes
first
• Remove activities that are incompatible with
it, and
• Repeat on the remaining activities until no
activities remain.
We Show this is a Greedy Problem
Consequences of the Theorem?
1. Normally we have to inspect all subproblems; here we only have to choose one subproblem.
2. What this theorem says is that we only have to find the first activity with the smallest finishing time.
3. This means we can solve the problem top-down by selecting the optimal solution to the local subproblem, rather than bottom-up as in DP.
Example

To solve the Si,j:


1. Choose the activity am with the earliest finish time.
2. Solution of Si,j = {am} U Solution of subproblem Sm,j
To solve S0,12, we select a1 that will finish earliest, and solve for
S1,12.
To solve S1,12, we select a4 that will finish earliest, and solve for
S4,12.
To solve S4,12, we select a8 that will finish earliest, and solve for
S8,12
And so on (Solve the problem in a top-down fashion)
Elements of greedy strategy
• Determine the optimal substructure
• Develop the recursive solution
• Prove one of the optimal choices is the greedy choice yet safe
• Show that all but one of subproblems are empty after greedy
choice
• Develop a recursive algorithm that implements the greedy
strategy
• Convert the recursive algorithm to an iterative one.

This was designed to show the similarities and differences


between dynamic programming and the Greedy approach;
These steps can be simplified if we apply the Greedy approach
directly
Applying Greedy Strategy
Steps in designing a greedy algorithm
The optimal substructure property holds
(same as dynamic programming)
Other Greedy Strategies
Other Greedy Strategies:
Greedy 4: Pick the activity that has the fewest conflicts,
eliminate all the activities that conflict with it, and
recurse.
Any More???
Weighted Interval Scheduling
Weighted interval scheduling problem
• Each job ai has start time si and a finish time fi
such that 0 ≤ si < fi < ∞
• and has weight or value vi .
• Two jobs are compatible if they don't overlap.
• Goal: find maximum weight subset of
mutually compatible jobs.
Weighted Interval Scheduling
Recall
• The greedy algorithm works if all weights are 1.
• Consider jobs in ascending order of finish time.
• Add a job to the subset if it is compatible with previously chosen jobs.
Observation
• The greedy algorithm can fail if arbitrary weights are allowed.
Weighted Interval Scheduling
Dynamic Programming
Label jobs by finishing time: f1 ≤ f2 ≤ . . . ≤ fn
Define
p(j) = largest index i < j such that job i is
compatible with j
Notation
S[j] = value of optimal solution to the problem
consisting of job requests 1, 2, ..., j.
Weighted Interval Scheduling
Case 1: j is not selected.
– The optimum must include an optimal solution to the problem consisting of the remaining compatible jobs 1, 2, ..., j−1.
Case 2: j is selected.
– The optimum must include an optimal solution to the problem consisting of the remaining compatible jobs 1, 2, ..., p(j).

S[j] = 0 if j = 0
S[j] = max{ v_j + S[p(j)], S[j−1] } otherwise
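A bottom-up Python sketch of this recurrence (ours); p(j) is computed by binary search over the sorted finish times:

import bisect

def weighted_interval_scheduling(jobs):
    # jobs: list of (start, finish, value) with half-open intervals.
    jobs = sorted(jobs, key=lambda j: j[1])        # f_1 <= ... <= f_n
    finishes = [f for _, f, _ in jobs]
    n = len(jobs)
    S = [0] * (n + 1)
    for j in range(1, n + 1):
        s, f, v = jobs[j - 1]
        p = bisect.bisect_right(finishes, s)       # p(j): jobs ending by s_j
        S[j] = max(S[j - 1], v + S[p])             # skip j, or take j
    return S[n]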
Interval Partitioning
Interval Partitioning
Lecture i starts at si and finishes at fi.
Goal:
Find minimum number of classrooms to schedule all
lectures so that no two occur at the same time in
the same room.
Answer
• Depth – given a set of intervals, the depth of this set is the maximum number of open intervals that contain any single time t.
Interval Partitioning
Lemma
In any instance of interval partitioning we need at
least depth many classrooms to schedule these
courses.
Proof
This is simply because, by the definition of depth, there is a time t and depth many courses all running at time t. These courses are mutually incompatible, i.e., no two of them can be scheduled in the same classroom. So any schedule needs at least depth many classrooms.
Greedy Strategies
Greedy Strategies:
Greedy 1: Pick the shortest activity
- ascending order of fj - sj
Greedy 2: Pick the activity that starts first,
ascending order of sj
Greedy 3: Pick the activity that ends first
ascending order of fj
Greedy algorithm
Greedy algorithm
Consider lectures in increasing order of start time: assign each lecture to any compatible classroom.
Implementation: O(n log n)
• For each classroom k, maintain the finish time of the last job added.
• Keep the classrooms in a priority queue.
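A Python sketch of this O(n log n) implementation (ours), keeping the classrooms in a min-heap keyed by the finish time of the last lecture assigned to each:

import heapq

def interval_partition(lectures):
    # lectures: list of (start, finish). Returns the number of classrooms
    # used, which by the argument below equals the depth of the interval set.
    rooms = []                              # heap of last finish times
    for s, f in sorted(lectures):           # increasing start time
        if rooms and rooms[0] <= s:         # some classroom is free by s
            heapq.heapreplace(rooms, f)     # reuse it
        else:
            heapq.heappush(rooms, f)        # open a new classroom
    return len(rooms)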
Greedy Theorem
Theorem: Greedy algorithm is optimal.
Proof:
Suppose that the greedy algorithm allocates d
classrooms.
• Enough to prove that d ≤ depth.
By the lemma, depth ≤ OPT. So, putting these
together we get d ≤ OPT.
On the other hand, by definition of OPT, we know
OPT ≤ d.
So, we must have d = OPT.
Proof
To show d ≤ depth, it is enough (by the definition of depth) to find a time t* contained in at least d open intervals.
Let t be the time at which we allocate the d-th classroom. At this time we were supposed to schedule, say, the j-th course, but all classrooms were already occupied, so greedy had to allocate the d-th classroom.
Observe that, by the description of the algorithm, every course we have scheduled so far must start before s(j).
Furthermore, since all classrooms are occupied at time t, there must be d − 1 courses which are still running.
Now let t* := t + ε, where ε > 0 is chosen small enough that none of those d − 1 courses, nor course j, ends before or at t*.
Then we have d courses running at time t*, and this implies depth ≥ d.
Graph Problems
Interval scheduling and interval partitioning can be seen as graph problems.
Input
➢ Graph G = (V, E)
➢ Vertices V = jobs/lectures
➢ Edge (i, j) ∈ E if jobs/lectures i and j are incompatible
Interval scheduling → Maximum independent set?
Interval partitioning → Graph colouring?
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 12
Greedy - Exercise
Suppose you were to drive from Mumbai to Pune along the express highway. Suppose your car's petrol tank holds enough petrol to travel M miles. Suppose you have a map that gives distances between petrol stations along the route. Let d1 < d2 < ... < dn be the locations of all the petrol stations along the route, where di is the distance of petrol station i from Mumbai. Assume that the distance between neighbouring petrol stations is less than M miles. Your goal is to accomplish the journey making as few petrol stops as possible along the way.
Design an efficient algorithm to determine at which petrol stations to halt. Prove that your algorithm gives an optimal solution, and analyze the time complexity of the algorithm.
The Fractional Knapsack Problem
Given: A set S of n items, with each item i having
wi - a positive “weight”
vi - a positive “benefit”
Goal:
Choose items with maximum total benefit but with weight
at most W.
And we are allowed to take fractional amounts
The Fractional Knapsack Problem
Possible Greedy Strategies:
• Pick the items in increasing order of weights
• Pick the items in decreasing order of benefits
• Pick the items by decreasing order of value per
pound
Note:
The first two strategies do not give an optimal solution.
Counterexamples – exercise.
Greedy Algorithm
We can solve the fractional knapsack problem with a
greedy algorithm:
• Compute the value per pound (vi/wi) for each item
• Sort (decreasing) the items by value per pound
• Greedy strategy of always taking as much as possible
of the item remaining which has highest value per
pound
Time Complexity:
If there are n items, this greedy algorithm takes
O(nlogn) time
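In code, the greedy algorithm is a sort followed by one pass (a Python sketch, ours):

def fractional_knapsack(items, W):
    # items: list of (weight, value) pairs; capacity W.
    total = 0.0
    # Highest value per pound first.
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if W <= 0:
            break
        take = min(w, W)                # as much of this item as fits
        total += take * (v / w)
        W -= take
    return total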
Greedy Choice Property
• Theorem
Consider a knapsack instance P, and let item 1 be the item of highest value density.
Then there exists an optimal solution to P that uses as much of item 1 as possible (that is, min(w1, W)).
Proof:
Suppose we have an optimal solution Q that uses weight w < min(w1, W) of item 1.
Let w′ = min(w1, W) − w.
Greedy Choice Property
Q must contain at least weight w′ of some other item(s), since it never pays to leave the knapsack partly empty.
Construct Q* from Q by removing w′ worth of other items and replacing them with w′ worth of item 1.
Because item 1 has the maximum value per weight, Q* has total value at least as big as Q.
Alternate Proof - 1
Alternate Proof:
Assume the objects are sorted in order of value per pound.
Let v_i be the value of item i and let w_i be its weight.
Let x_i be the fraction of object i selected by greedy, and let V(X) be the total value obtained by greedy.
(The algebraic details of Alternate Proof 1 were figures on the slides.)
Alternate proof (2) – Proof by contradiction
We can also prove by contradiction.
We start by assuming that there is an optimal solution where we did not take as much of item i as possible, and we also assume that our knapsack is full (if it is not full, just add more of item i).
Since item i has the highest value-to-weight ratio, there must exist an item j in our knapsack such that v_j / w_j < v_i / w_i.
We can take weight x of item j out of our knapsack and add weight x of item i to it (since we take out x weight and put in x weight, we are still within capacity).
The change in the value of our knapsack is x·(v_i / w_i) − x·(v_j / w_j) > 0.
Therefore, we arrive at a contradiction: the "so-called" optimal solution in our starting assumption can in fact be improved by taking out some of item j and adding more of item i. Hence, it is not optimal.
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms (CS F364)
Lecture No. 13
Minimum Spanning Tree (MST)
Optimal substructure for MST
Prim's Algorithm
Choose some v ∈ V and let S = {v}
Let T = Ø
While S ≠ V:
    Choose a least-cost edge e with one endpoint in S and one endpoint in V − S
    Add e to T
    Add both endpoints of e to S
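A heap-based Python sketch of Prim's algorithm (ours); with a lazy priority queue of crossing edges it runs in O(E log E):

import heapq

def prim_mst(graph, start):
    # graph: dict vertex -> list of (weight, neighbour) pairs (undirected).
    S = {start}
    T = []
    frontier = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(frontier)
    while frontier and len(S) < len(graph):
        w, u, v = heapq.heappop(frontier)   # least-cost edge leaving S
        if v in S:
            continue                        # stale entry: both endpoints in S
        S.add(v)
        T.append((u, v, w))
        for w2, x in graph[v]:
            if x not in S:
                heapq.heappush(frontier, (w2, v, x))
    return T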
Minimum Spanning Tree (MST)
Greedy-choice property:

Exercise:
Shortest Path Problem
Minimize time in the system
Problem Statement:
• A single server with N customers to serve
• Customer i will take time ti, 1≤ i ≤ N to be served.
• Goal: minimize the average time that a customer spends in the system,
where time in system for customer i = total waiting time + t_i.
• Since N is fixed, we try to minimize the time spent by all customers to reach our goal:
Minimize T = Σ_{i=1}^{N} (time in system for customer i)
Minimize time in the system
Example:
Assume that we have 3 jobs with t1 = 5, t2 = 3, t3 = 7.

Order      Total time in system
1, 2, 3    5 + (5 + 3) + (5 + 3 + 7) = 28
1, 3, 2    5 + (5 + 7) + (5 + 7 + 3) = 32
2, 3, 1    3 + (3 + 7) + (3 + 7 + 5) = 28
2, 1, 3    3 + (3 + 5) + (3 + 5 + 7) = 26   optimal
3, 1, 2    7 + (7 + 5) + (7 + 5 + 3) = 34
3, 2, 1    7 + (7 + 3) + (7 + 3 + 5) = 32
Minimize time in the system
Brute Force Solution:
Time Complexity: N! possible orders,
which is exponential!
Optimal Substructure Property:
Exercise
Minimize time in the system
Greedy Strategy
At each step, add to the end of the schedule, the
customer requiring the least service time
among those who remain.
So serve least time consuming customer first.
Minimize time in the system
Observe
Let P = p1 p2 ….. pN be any permutation of the integers 1 to N and let s_i = t_{p_i}.
If customers are served in the order corresponding
to P, then
Total time passed in the system by all the
customers is
T(P) = s1 + (s1 + s2) + (s1 + s2 + s3) + …
     = N·s1 + (N – 1)·s2 + (N – 2)·s3 + …
     = Σ_{k=1}^{N} (N – k + 1)·s_k
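A few lines of Python (illustrative only) evaluate T(P) for any order and reproduce the table above:

def total_time(order, t):
    # T(P) = Σ_{k=1}^{N} (N − k + 1)·s_k, where s_k is the service time of the
    # k-th customer served; equivalently the sum of the prefix sums
    s = [t[i] for i in order]
    N = len(s)
    return sum((N - k) * s[k] for k in range(N))   # (N − k) since k is 0-based here

t = [5, 3, 7]   # t1, t2, t3
for order in [(0, 1, 2), (0, 2, 1), (1, 2, 0), (1, 0, 2), (2, 0, 1), (2, 1, 0)]:
    print(order, total_time(order, t))   # minimum 26 at (1, 0, 2), i.e. order 2, 1, 3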
Minimize time in the system
Theorem:
Greedy strategy is optimal.
Proof:
Suppose strategy P does not arrange the customers in
increasing service time.
Then we can find two integers a and b with a < b and s_a > s_b,
i.e., the a-th customer is served before the b-th customer even though the former needs more service time than the latter.
Minimize time in the system
Now, we exchange the position of these two customers to
obtain a new order of service O
Then
T(O) = (N – a + 1)·s_b + (N – b + 1)·s_a + Σ_{k=1, k≠a,b}^{N} (N – k + 1)·s_k
And
T(P) – T(O) = (N – a + 1)(s_a – s_b) + (N – b + 1)(s_b – s_a)
            = (b – a)(s_a – s_b)
            > 0
i.e., the new schedule O is better than the old schedule P
Scheduling with deadlines
Problem Statement:
• We have set of n jobs to execute
• Each job takes unit time to execute
• At any time T=1,2,.. we can execute exactly one
job
• Job i earns us profit gi > 0 if and only if it is
executed no later than time di
• Goal: Find a schedule that maximizes the profit
Scheduling with deadlines
Example,
i 1 2 3 4
gi 50 10 15 30
di 2 1 2 1
Possible schedules & corresponding profits are:
Sequence    Profit      Sequence    Profit
1 50 2, 1 60
2 10 2, 3 25
3 15 3, 1 65
4 30 4, 1 80 optimal
1, 3 65 4, 3 45
The sequence 2, 3, 1 is not considered, as job 1 would be executed at time t=3,
after its deadline
Greedy Algorithm
Definition
A set of jobs is feasible if there exists at least one
sequence (called feasible sequence) that allows all the
jobs in the set to be executed no later than their
respective deadlines.
Greedy Algorithm
Beginning with empty schedule, at each step,
Add the job with the highest profit among those not
yet considered,
provided that the chosen set of jobs remains feasible.
Greedy Algorithm
For our example,
We first choose job 1.
Next we choose job 4, the set {1, 4} is feasible because
it can be executed in the order 4, 1
Next we try the set {1, 3, 4} which is not feasible & so
job 3 is rejected
Finally we try set {1, 2, 4} which is not feasible & so job
2 is rejected
Our optimal solution is the set of jobs {1, 4}
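A small Python sketch of this greedy (illustrative). It uses the standard feasibility test for unit-time jobs: a set is feasible iff, after sorting it by deadline, the i-th job (1-based) has deadline ≥ i:

def feasible(jobs):
    # jobs: list of (profit, deadline) pairs with unit execution times
    deadlines = sorted(d for _, d in jobs)
    return all(d >= i + 1 for i, d in enumerate(deadlines))

def greedy_schedule(jobs):
    chosen = []
    for job in sorted(jobs, key=lambda pd: pd[0], reverse=True):   # highest profit first
        if feasible(chosen + [job]):
            chosen.append(job)
    return chosen

jobs = [(50, 2), (10, 1), (15, 2), (30, 1)]        # (g_i, d_i) from the example
print(greedy_schedule(jobs))                       # [(50, 2), (30, 1)]: jobs {1, 4}, profit 80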
Greedy Proof
Theorem:
The greedy algorithm always finds an optimal solution.
Proof:
Suppose greedy algorithm returns set of jobs I
And suppose the set J is optimal.
Let SI and SJ be the corresponding feasible sequences
Claim:
By rearranging the jobs in SI and those in SJ we can obtain
two feasible sequences 𝑺′𝑰 and 𝑺′𝑱 (which may include gaps)
such that every job common to I and J is scheduled at the
same time in both the sequences.
Example
For example, Suppose
S_I : p y q x r
S_J : r s t p u v q w
Example
After reorganization
S′_I : x y _ p r _ q   (the underscores are gaps)
S′_J : u s t p r v q w
Proof of the claim
Proof of the claim:
Suppose that some job a occurs in both the feasible
sequences SI and SJ where it is scheduled at times
tI and tJ respectively.
Case 1: If tI = tJ there is nothing to prove
Case 2: If tI < tJ
Note, since the sequence S_J is feasible, it follows that the deadline for job a is no earlier than t_J.
Modify sequence S_I as follows:
Case i: Suppose there is a gap in S_I at time t_J; move job a from time t_I into the gap at t_J.
Proof of the claim
Case ii :
Suppose there is some job b scheduled in SI at time tJ ,
then exchange jobs a and b in SI
The resulting sequence is still feasible, since in either
case job a will be executed by its deadline, and in the
second case job b is moved to an earlier time and so
can be executed.
So job a is executed at the same time tJ in both the
modified sequences SI and SJ
Case 3: If tI > tJ
Similar argument works, except in this case SJ is
modified
Greedy Proof
Once job a has been treated in this way, we never need
to move it again.
Therefore, if SI and SJ have m jobs in common, after at
most m modifications of either SI or SJ we can ensure
that all the jobs common to I and J are scheduled at the
same time in both sequences.
𝑺′𝑰 and 𝑺′𝑱 be the resulting sequences.
Suppose there is a time when the job scheduled in 𝑺′𝑰
is different from that scheduled in 𝑺′𝑱
Greedy Proof
Case1:
If some job a is scheduled in 𝑺′𝑰 opposite a gap in 𝑺′𝑱 .
Then a does not belong to J & the set J ∪ {a} is feasible.
So we have a feasible solution more profitable than J.
This is impossible since J is optimal by assumption.
Case2:
If some job b is scheduled in 𝑺′𝑱 opposite a gap in 𝑺′𝑰 .
Then b does not belong to I & the set I ∪ {b} is feasible.
So the greedy algorithm would have included b in I.
This is impossible since it did not do so.
Greedy Proof
Case 3:
Some job a is scheduled in S′_I opposite a different job b in S′_J.
In this case a does not appear in J and b does not appear in I.
Case i: If g_a > g_b
Then we could replace b by a in J and get a better solution.
This is impossible because J is optimal.
Case ii : If ga < gb
The greedy algorithm would have chosen b before considering a
since (I \ {a})∪ {b} would be feasible.
This is impossible because the algorithm did not include b in I.
Case iii: The only remaining possibility is g_a = g_b.
The total profit from I is therefore equal to the profit from the optimal set J, and so I is optimal too.
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms
(CS F364)
Lecture No. 14
Scheduling Problems
1. Activity Selection Problem/Interval Scheduling
Problem
2. Interval Partitioning
3. Minimize time in the system
4. Scheduling with deadlines
5. Weighted interval scheduling problem
Note:
• Greedy Algorithm for Problems 1 – 4
• DP Solution for Problem 5
Scheduling to minimize lateness
Scheduling to minimize lateness:
• Single resource processes one job at a time.
• Job j requires tj units of processing time and is
due at time dj
• If j starts at time sj , it finishes at time fj = sj + tj
Lateness: lj = max {0, fj – dj }
Goal:
Schedule all jobs to minimize the maximum lateness L = max_j l_j
Example
[example figure omitted: jobs with processing times t_j and due times d_j]
Greedy Algorithm
EARLIEST-DEADLINE-FIRST
SORT jobs by due times and renumber so that
d1 ≤ d2 ≤ … ≤ dn
Theorem: The earliest-deadline-first schedule is optimal.
Claims:
1. There exists an optimal schedule with no idle time
2. The earliest-deadline-first schedule has no idle time
Def. Given a schedule S, an inversion is a pair of jobs i and j such that: i < j but
j is scheduled before i.
3. The earliest-deadline-first schedule is the unique idle-free schedule
with no inversions.
4. If an idle-free schedule has an inversion, then it has an adjacent
inversion.
5. Exchanging two adjacent, inverted jobs i and j reduces the number
of inversions by 1 and does not increase the max lateness.
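A minimal Python sketch of earliest-deadline-first (illustrative; by Claims 1 and 2 we may assume no idle time). The job data in the demo line is a standard textbook example, not from these slides:

def edf_max_lateness(jobs):
    # jobs: list of (t_j, d_j); schedule in order of due time, back to back
    jobs = sorted(jobs, key=lambda td: td[1])
    finish, L = 0, 0
    for proc, due in jobs:
        finish += proc                   # f_j = s_j + t_j
        L = max(L, finish - due)         # lateness l_j = max(0, f_j - d_j)
    return L

print(edf_max_lateness([(3, 6), (2, 8), (1, 9), (4, 9), (3, 14), (2, 15)]))   # 1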
Greedy - Exercise
Maximize the payoff
Suppose you are given two sets A and B, each
containing n positive integers. You can choose to
reorder each set however you like. After reordering,
let ai be the ith element of set A, and let bi be
the ith element of set B. You then receive a payoff
Π_{i=1}^{n} a_i^{b_i}. Give an algorithm that will maximize
your payoff. Prove that your algorithm maximizes
the payoff, and state its running time.
Coding
Suppose that we have a 100000 character data file that
we wish to store. The file contains only 6 characters,
appearing with the following frequencies:
                          a    b    c    d    e    f
Frequency in thousands    45   13   12   16    9    5

A binary code encodes each character as a binary string or code word.
Goal: We would like to find a binary code that encodes
the file using as few bits as possible
Fixed Length coding
In a fixed-length code each code word has the same
length.
For our example, a fixed-length code must have at least
3 bits per code word. One possible coding is as follows:
                          a    b    c    d    e    f
Frequency in thousands    45   13   12   16    9    5
Fixed length coding      000  001  010  011  100  101

The fixed-length code requires 300000 bits to store the file.
variable-length code
In a variable-length code, code words may have
different lengths.
One possible coding for our example is as follows:
                          a    b    c    d     e     f
Frequency in thousands    45   13   12   16     9     5
Variable length coding     0  101  100  111  1101  1100

The variable-length code uses only
(45×1 + 13×3 + 12×3 + 16×3 + 9×4 + 5×4) × 1000 = 224000 bits
Much better than the fixed-length code.
Can we do better?
Key Property - Encoding
Key Property
Given an encoded message, decoding is the process of
turning it back into the original message. A message is
uniquely decodable, if it can only be decoded in one way.
Encoding should be done so that message is uniquely
decodable.
Note:
1. Fixed-length codes are always uniquely decodable
2. May not be so for variable length encoding
Example : C = {a= 1; b= 110; c= 10; d= 111}
1101111 is not uniquely decipherable since it could have
encoded either bad or acad.
Variable-length codes may not be uniquely decodable
Prefix Codes
Prefix Code: A code is called a prefix code if no code word
is a prefix of another one.
Example:
{a= 0; b= 110; c= 10; d= 111} is a prefix code.
Important Fact:
Every message encoded by a prefix code is uniquely
decipherable.
Since no code-word is a prefix of any other we can always
find the first code word in a message, peel it off, and
continue decoding.
Example:
0110100 = 0 110 10 0 = abca
We are therefore interested in finding good prefix codes.
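The peel-off decoding just described takes only a few lines of Python (an illustrative sketch, using the prefix code from the example):

def decode(prefix_code, bits):
    # prefix_code: dict character -> code word; since no code word is a prefix
    # of another, the first complete code word we see is unambiguous
    inverse = {v: k for k, v in prefix_code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inverse:               # peeled off one code word
            out.append(inverse[cur])
            cur = ""
    return "".join(out) if cur == "" else None   # leftover bits -> invalid message

code = {"a": "0", "b": "110", "c": "10", "d": "111"}
print(decode(code, "0110100"))           # abca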
Representation – Binary Encoding
Representation
Prefix codes (in general any binary encoding) can be
represented as a binary tree.
Left edge is labeled 0 and the right edge is labeled 1
Label of leaf is frequency of character.
Path from root to leaf is code word associated with
character.
Representation – Binary Tree
Fixed-length prefix code from the example
earlier represented as a binary tree (the
numbers inside the nodes are the sum of
frequencies for characters in each subtree):
[tree figure omitted]

An 'optimal' prefix code from the example earlier represented as a binary tree:
[tree figure omitted]
Cost of Tree
• For each character c in the set C, let c.freq denote the frequency of c in the file
• Given a tree T corresponding to a prefix code
• dT(c) denote the depth of c’s leaf in the tree
• dT(c) is also the length of the codeword for character c
• The number of bits required to encode a file is
B(T) = Σ_{c∈C} c.freq · d_T(c)
which is defined as the cost of the tree T
Full Binary Tree
Key Idea:
An optimal code for a file is always represented by a
full binary tree.
Full binary tree is a binary tree in which every non-leaf
node has two children
Proof: If some internal node had only one child then
we could simply get rid of this node and replace it with
its unique child. This would decrease the total cost of
the encoding.
Greedy Choice Property
Lemma:
Consider the two letters, x and y with the
smallest frequencies. Then there exists an
optimal code tree in which these two letters are
sibling leaves in the tree at the lowest level
Proof (Idea)
Take a tree T representing arbitrary optimal
prefix code and modify it to make a tree
representing another prefix code such that the
resulting tree has the required greedy property.
Greedy Choice Property
Let T be an optimum prefix code tree, and let b and c
be two siblings at the maximum depth of the tree
(must exist because T is full).
Assume without loss of generality that f(b) ≤ f(c) and
f(x) ≤ f(y)
Since x and y have the two smallest frequencies it
follows that f(x) ≤ f(b) and f(y) ≤ f(c)
Since b & c are at the deepest level of the tree d(b) ≥
d(x) and d(c) ≥ d(y)
Now switch the positions of x and b in the tree
resulting in new tree T’
Proof
[illustration of the proof: figure omitted]
Greedy Choice Property
Since T is optimum and the switches (x with b, then y with c) do not increase the cost, the resulting tree is also optimal, and it has x and y as sibling leaves at the lowest level.
Optimal Substructure Property
Optimal Substructure Property
Lemma:
Let T be a full binary tree representing an optimal
prefix code over an alphabet C, where frequency
f[c] is define for each character c belongs to set C.
Consider any two characters x and y that appear as
sibling leaves in the tree T and let z be their parent.
Then, considering character z with frequency f[z] = f[x] + f[y], the tree T′ = T − {x, y} represents an optimal code for the alphabet C′ = (C − {x, y}) ∪ {z}.
Proof : See CLRS
Huffman Coding
Step 1:
Pick two letters x, y from the alphabet C with the smallest frequencies and create a subtree that has these two characters as leaves.
Label the root of this subtree as z.
Step 2:
Set frequency f(z) = f(x) + f(y).
Remove x, y and add z, creating a new alphabet C* = (C ∪ {z}) − {x, y}. Note that |C*| = |C| − 1.
Repeat this procedure, called merge, with the new alphabet C* until an alphabet with only one symbol is left.
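A compact Python sketch of the merge procedure using a min-heap (illustrative; the tie-break counter is an implementation detail so that equal frequencies never compare trees):

import heapq
from itertools import count

def huffman(freq):
    # freq: dict character -> frequency; a tree is a leaf character or a pair
    tie = count()
    heap = [(f, next(tie), ch) for ch, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)                        # two smallest frequencies
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), (t1, t2)))   # merged node z
    codes = {}
    def walk(tree, prefix=""):
        if isinstance(tree, str):
            codes[tree] = prefix or "0"
        else:
            walk(tree[0], prefix + "0")                        # left edge labeled 0
            walk(tree[1], prefix + "1")                        # right edge labeled 1
    walk(heap[0][2])
    return codes

freq = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}
codes = huffman(freq)
print(sum(freq[ch] * len(codes[ch]) for ch in freq) * 1000)    # 224000 bits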
Huffman Code Algorithm
[pseudocode slide omitted]
Huffman Code - Example
[worked example figures omitted]
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms
(CS F364)
Lecture No. 15
NP Completeness
Until now we have been designing algorithms for
specific problems.
We have seen running times O(log n), O(n), O(n log n), O(n²), O(n³), …
We often think about problems we can solve in polynomial time O(n^k) as being easy/practically solvable/tractable.
We have seen a lot of these.
Similarly, we think about problems that need exponential time, like O(2^n), to solve as being hard/practically unsolvable/intractable.
We have seen a few of these.
NP Completeness
Showing that a problem has an efficient
algorithm is relatively easy.
'All' that is needed is to demonstrate an algorithm.
Proving that no efficient algorithm exists for a
particular problem is difficult.
How can we prove the non-existence of
something?
NP Completeness
Goal:
To study interesting class of problems (called NP
Complete)
whose status is unknown.
i.e., no polynomial time algorithm has yet been
discovered, nor has anyone yet been able to prove that
no polynomial time algorithm can exist for any one of
them.
One of the deepest research problems in theoretical computer science.
NP Complete
• The question of whether an efficient solution to an NP-Complete problem exists is known as the 'P ≠ NP?' problem.
• There is currently a US$1,000,000 award offered by the
Clay Institute (http://www.claymath.org/) for its
solution
• In the remainder of this module we will introduce the
notation, terminology and tools needed to discuss NP-
Complete problems and to prove that problems are
NP-complete.
Proving that a problem is NP-Complete does not prove
that the problem is hard.
It does indicate that the problem is very likely to be hard.
NP Complete
Why so difficult?
Several NP-Complete Problems are very similar to problems
that we know already how to solve in polynomial time.
For example:
• Shortest simple path vs. Longest simple path
• Fractional Knapsack vs. 0-1 Knapsack
• Euler Tour vs Hamiltonian Cycle
• 2-CNF Satisfiability vs. 3-CNF Satisfiability
In Boolean logic, a formula is in conjunctive normal
form (CNF) if it is a conjunction of one or more
clauses, where a clause is a disjunction of literals;
otherwise put, it is a product of sums or an AND of
ORs
Class P
Informal discussion
The class P consists of those problems that are solvable
in polynomial time.
More specifically, they are problems that can be solved in time O(n^k) for some constant k, where n is the size of the input to the problem.
Most of the problems we have examined are in class P
Class NP
Consider Hamiltonian cycle problem
Given a directed graph G = (V, E) a ‘certificate’ is a
sequence v = (v1 ,v2 ,v3 ,…..v|V|) of |V| vertices.
We can easily check/verify in polynomial time whether v is a Hamiltonian cycle or not.
Consider the problem of 3-CNF Satisfiability
A certificate here is an assignment of values to the
variables.
We can easily check/verify in polynomial time
whether the assignment satisfies the boolean formula.
Class NP
The class NP consists of those problems that are
verifiable in polynomial time.
Already observed that Hamiltonian Cycle Problem &
3-CNF Satisfiability problem are in class NP.
Note
If a problem is in P then we can solve it in polynomial
time & so given a certificate it is verifiable in
polynomial time
Therefore P  NP
The open question is whether P ⊂ NP?
Class NP-Complete
A problem is in the class NPC (referred as NP-Complete) if
1. It is in NP
2. Is as ‘hard’ as any problem in NP
Note
Most computer scientists believe that NP-complete
problems are intractable.
Because, if any NP-complete problem can be solved in
polynomial time then every problem in NP has a
polynomial time algorithm.
- We will prove this later
NPC
Technique to show that the problem is NP-complete:
Different than what we have been doing:
We are not trying to prove the existence of polynomial
time algorithm
Instead we are trying to show that no efficient or
polynomial time algorithm is likely to exist.
Three key ideas needed to show problem is NPC
1. Decision problem vs. Optimization problem
2. Reductions
3. First NP-complete problem
Decision problem vs. Optimization problem
Decision problem vs. Optimization problem
Many problems of interest are Optimization problems
A Decision problem asks us to check if something is true (possible answers: 'yes' or 'no')
Example:
PRIMES
• Instance: A positive integer n
• Question: Is n prime?
Optimization problem
An optimization problem asks us to find, among all
feasible solutions, one that maximizes or minimizes a
given objective
Example:
Single shortest-path problem (SPP)
• Instance: Given a directed, weighted graph G and two nodes s and t of G
• Problem: find a simple path from s to t of minimum total length
Possible answer: 'a shortest path from s to t'
Decision problem vs. Optimization problem
Remark: An optimization problem usually has a corresponding decision problem,
usually easily defined with the help of a bound on the value of feasible solutions.
Examples:
MST vs. Decision Spanning Tree (DST)
SPP vs. Decision SPP (DSPP)
Decision problem vs. Optimization problem
Optimization problem: Minimum Spanning Tree
Given a weighted graph G, find a minimum spanning tree (MST) of G

Decision problem: Decision Spanning Tree (DST)
Given a weighted graph G and an integer k, does G have a spanning tree of weight at most k?
Decision problem vs. Optimization problem
Optimization SPP
Instance: Given a weighted graph G, two nodes s and t of
G
Problem: find a simple path from s to t of minimum total
length
Decision SPP
Instance: A weighted graph G, two nodes s and t of G,
and a bound b
Question: is there a simple path from s to t of length at
most b?
Decision problem vs. Optimization problem
Observe, if one can solve an optimization problem (in
polynomial time), then one can answer the decision
version (in polynomial time)
Example: If we know how to solve MST we can solve DST
which asks if there is an Spanning Tree with weight at
most k.
How?
First solve the MST problem and then check whether the MST has weight at most k.
If it does, answer Yes.
If it doesn't, answer No.
Decision problem vs. Optimization problem
Observe: if the optimization problem is easy, then the decision version is easy.
Alternately
If we give evidence that a decision problem is hard then
we give evidence that its corresponding optimization
problem is hard.
This is important observation for us.
Reduction
Let us consider a decision problem A
We want to solve it in polynomial time
Suppose we already know how to solve another decision
problem B in polynomial time
Additionally we also have a procedure/algorithm that
transforms any instance x of A into some instance y of B
with the following properties:
• The transformation takes polynomial time
• Answer for x is YES if and only if the answer for y is also
YES
Such a procedure is called a polynomial-time reduction algorithm.
It helps us to solve problem A in polynomial time.
Reduction
Pictorial Representation
[figure: instance x of A → polynomial-time reduction algorithm → instance y of B → polynomial-time algorithm to decide B → yes/no; the yes/no answer for y is the answer for x]

Polynomial time algorithm to decide A
1. Given an instance x of A, use the polynomial-time reduction algorithm to transform it to an instance y of problem B
2. Run the polynomial-time decision algorithm for B on the instance y
3. Use the answer for y as the answer for x
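As a toy illustration of steps 1-3 (the two problems and the reduction below are made up solely to show the composition):

def decide_B(n):              # stand-in polynomial-time decider for B: 'is n even?'
    return n % 2 == 0

def f(n):                     # polynomial-time reduction from A ('is n odd?') to B:
    return n + 1              # n is odd  <=>  n + 1 is even

def decide_A(n):
    return decide_B(f(n))     # the answer for f(n) is the answer for n

print(decide_A(7), decide_A(8))   # True False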
Reduction
Using reduction idea to prove a decision problem is
hard.
Suppose we have to prove that no polynomial time
algorithm can exist for a decision problem B.
Suppose we have a decision problem A for which we
already know that no polynomial time algorithm can
exist.
Suppose further we have a polynomial time reduction
algorithm transforming instances of A to instances of B.
By contradiction we can prove that no polynomial time
algorithm can exist for B.
Reduction
Suppose otherwise, i.e., suppose that B has a polynomial-time algorithm.
Then, as observed earlier, we would have a polynomial-time algorithm for A, which is a contradiction.
1st NP-Complete Problem
The key step in using reduction is to start with an already known NP-Complete problem.
We need a 'first' NP-Complete problem.
The problem we shall use is Circuit-Satisfiability.
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms
(CS F364)
Lecture No. 16
Class P
Informal discussion
The class P consists of those problems that are solvable
in polynomial time.
More specifically, they are problems that can be solved in time O(n^k) for some constant k, where n is the size of the input to the problem.
Most of the problems we have examined are in class P
Class NP
Consider Hamiltonian cycle problem
Given a directed graph G = (V, E) a ‘certificate’ is a
sequence v = (v1 ,v2 ,v3 ,…..v|V|) of |V| vertices.
We can easily check/verify in polynomial time whether v is a Hamiltonian cycle or not.
Consider the problem of 3-CNF Satisfiability
A certificate here is an assignment of values to the
variables.
We can easily check/verify in polynomial time
whether the assignment satisfies the boolean formula.
Class NP
The class NP consists of those problems that are
verifiable in polynomial time.
Already observed that Hamiltonian Cycle Problem &
3-CNF Satisfiability problem are in class NP.
Note
If a problem is in P then we can solve it in polynomial
time & so given a certificate it is verifiable in
polynomial time
Therefore P  NP
The open question is whether P ⊂ NP?
Class NP-Complete
A problem is in the class NPC (referred as NP-Complete) if
1. It is in NP
2. Is as ‘hard’ as any problem in NP
Note
Most computer scientists believe that NP-complete
problems are intractable.
Because, if any NP-complete problem can be solved in
polynomial time then every problem in NP has a
polynomial time algorithm.
- We will prove this later
NPC
Technique to show that the problem is NP-complete:
Different than what we have been doing:
We are not trying to prove the existence of polynomial
time algorithm
Instead we are trying to show that no efficient or
polynomial time algorithm is likely to exist.
Three key ideas needed to show problem is NPC
1. Decision problem vs. Optimization problem
2. Reductions
3. First NP-complete problem
Decision problem vs. Optimization problem
Remark: An optimization problem usually has a
corresponding decision problem.
Usually easily defined with the help of a bound on the value
of feasible solutions
Example
Optimization problem: Minimum Spanning Tree
Given a weighted graph G, find a minimum spanning tree (MST) of G
Decision problem: Decision Spanning Tree (DST)
Given a weighted graph G and an integer k, does G have a spanning tree of weight at most k?
Decision problem vs. Optimization problem
Observe, if one can solve an optimization problem (in
polynomial time), then one can answer the decision
version (in polynomial time)
Example: If we know how to solve MST we can solve DST
which asks if there is an Spanning Tree with weight at
most k.
How?
First solve the MST problem and then check whether the MST has weight at most k.
If it does, answer Yes.
If it doesn't, answer No.
Decision problem vs. Optimization problem
Observe: if the optimization problem is easy, then the decision version is easy.
Alternately
If we give evidence that a decision problem is hard then
we give evidence that its corresponding optimization
problem is hard.
This is important observation for us.
Reduction
Let us consider a decision problem A
We want to solve it in polynomial time
Suppose we already know how to solve another decision
problem B in polynomial time
Additionally we also have a procedure/algorithm that
transforms any instance x of A into some instance y of B
with the following properties:
• The transformation takes polynomial time
• Answer for x is YES if and only if the answer for y is also
YES
Such a procedure is called a polynomial-time reduction algorithm.
It helps us to solve problem A in polynomial time.
Reduction
Pictorial Representation
[figure: instance x of A → polynomial-time reduction algorithm → instance y of B → polynomial-time algorithm to decide B → yes/no; the yes/no answer for y is the answer for x]

Polynomial time algorithm to decide A
1. Given an instance x of A, use the polynomial-time reduction algorithm to transform it to an instance y of problem B
2. Run the polynomial-time decision algorithm for B on the instance y
3. Use the answer for y as the answer for x
Reduction
Using reduction idea to prove a decision problem is
hard.
Suppose we have to prove that no polynomial time
algorithm can exist for a decision problem B.
Suppose we have a decision problem A for which we
already know that no polynomial time algorithm can
exist.
Suppose further we have a polynomial time reduction
algorithm transforming instances of A to instances of B.
By contradiction we can prove that no polynomial time
algorithm can exist for B.
Reduction
Suppose otherwise, i.e., suppose that B has a polynomial-time algorithm.
Then, as observed earlier, we would have a polynomial-time algorithm for A, which is a contradiction.
1st NP-Complete Problem
The key step in using reduction is to start with an already known NP-Complete problem.
We need a 'first' NP-Complete problem.
The problem we shall use is Circuit-Satisfiability.
Notation & Terminology
Abstract problem
An abstract problem Q is a binary relation on a set I of problem instances and a set S of problem solutions.
The Shortest-Path problem statement is:
Given an undirected graph G= (V,E) and two vertices u, v ∈
V, find the shortest path between u and v.
Each instance is a triple <G, u, v> and the corresponding
solution is the shortest path (v1,v2,...,vk)
Since shortest paths are not necessarily unique, a given
problem instance may have more than one solution.
Abstract Problem
Since for NP Completeness, we are interested in
Decision Problems.
An abstract decision problem is a function that maps
the instance set to the solution set {0, 1}
The decision problem version of Shortest-path can be
written as follows.
The Path-problem statement is:
Given an undirected graph G= (V,E), two vertices u, v ∈
V, and a non-negative integer k, is there a path
between u and v with length at most k?
Abstract Problem
The abstract decision problem is represented by the function
Path : I → {0, 1}
If i = <G, u, v, k> ∈ I, then
Path(i) = 1 if there exists a path between u and v with length at most k, and 0 otherwise.
Encodings
In order to solve a problem by computer, instances must be
represented in a way the computer understands.
An encoding of a set I of abstract objects is a mapping e from I to
a set of strings over an alphabet Σ.
If Σ = {0, 1} we can use the standard binary encoding
e : I → Σ*
We know how to map graphs, programs as binary strings.
A problem whose instance set is the set of strings over Σ is a
concrete problem.
We can use encodings to map an abstract problem to a concrete
problem
Encodings
We say that an algorithm solves a concrete problem in O(T(n))
time if when it is provided a problem instance i of length n=|i|,
the algorithm can produce the solution in at most O(T(n)) time.
For an instance i ∈ I, the length of i, denoted by |i|, is the
number of symbols in e(i).
Using the binary encoding,|0|= 1,|1|= 1,|2|= 2,|3|= 2,|4|= 3,
etc.
The length of an instance of the concrete problem depends
heavily on the encoding used. Some encodings are polynomially
related, so the polynomial-time solvability for problems using
one encoding extends to the other.
There are some possible “expensive” encodings, but we will rule
them out.
We will use the binary encoding as the standard one.
Encodings
We say that an algorithm solves a concrete problem in
O(T(n)) time if when it is provided a problem instance i of
length n=|i|, the algorithm can produce the solution in at
most O(T(n)) time.
For an instance i ∈ I, the length of i, denoted by |i|, is the
number of symbols in e(i).
Using the binary encoding,|0|= 1,|1|= 1,|2|= 2,|3|=
2,|4|= 3, etc.
We will use the binary encoding as the standard one.
Formal Language Framework
The formal-language framework allows us to define algorithms
for concrete decision problems as “machines” that operate on
languages.
An alphabet Σ is any finite set of symbols.
A language L over Σ is any set of strings made up of symbols
from Σ.
We denote the empty string by , and the empty language by ∅.
The set/language of all strings over Σ is denoted by Σ*.
So, if Σ = {0,1}, then Σ* = {, 0, 1, 00, 01, 10, 11, 000,...} is the set
of all binary strings.
Every language L over Σ is a subset of Σ*.
Operations on Languages
Union of L1 and L2: L1 ∪ L2
Intersection of L1 and L2: L1 ∩ L2
Complement of L: L̄ = Σ* − L
Concatenation of L1 and L2: L1L2 = {x1x2 : x1 ∈ L1 and x2 ∈ L2}
Kleene star of L: L* = {ε} ∪ L ∪ L² ∪ L³ ∪ …, where L^k = LL…L, k times.
Formal Language Framework
Consider the concrete problem corresponding to the
problem of deciding whether a natural number is prime.
Using binary encoding, Σ = {0,1}, this concrete problem is a
function:
Prime: {0,1}* → {0,1} with
Prime(10) = Prime(11) = Prime(101) = Prime(111) = 1
Prime(0) = Prime(1) = Prime(100) = Prime(110) = 0
We can associate with Prime a language LPRIME
corresponding to all strings s over {0,1} with Prime(s) = 1:
LPRIME = {10,11,101,111,1011,1101,...}
Sometimes it’s convenient to use the same name for
the concrete problem and its associated language :
Prime = {10,11,101,111,1011,1101,...}
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms
(CS F364)
Lecture No. 17
Re Cap
Abstract problem
An abstract problem Q is a binary relation on a set I of problem instances and a set S of problem solutions.
An abstract decision problem is a function that maps
the instance set to the solution set {0, 1}
An encoding of a set I of abstract objects is a mapping e
from I to a set of strings over an alphabet Σ.
We will use the binary encoding as the standard one.
Formal Language Framework
Consider the concrete problem corresponding to the
problem of deciding whether a natural number is prime.
Using binary encoding, Σ = {0,1}, this concrete problem is a
function:
Prime: {0,1}* → {0,1} with
Prime(10) = Prime(11) = Prime(101) = Prime(111) = 1
Prime(0) = Prime(1) = Prime(100) = Prime(110) = 0
We can associate with Prime a language LPRIME
corresponding to all strings s over {0,1} with Prime(s) = 1:
LPRIME = {10,11,101,111,1011,1101,...}
Sometimes it’s convenient to use the same name for
the concrete problem and its associated language :
Prime = {10,11,101,111,1011,1101,...}
Formal Language Framework
The formal-language framework allows us to define algorithms
for concrete decision problems as “machines” that operate on
languages.
An alphabet Σ is any finite set of symbols.
A language L over Σ is any set of strings made up of symbols
from Σ.
We denote the empty string by , and the empty language by ∅.
The set/language of all strings over Σ is denoted by Σ*.
So, if Σ = {0,1}, then Σ* = {, 0, 1, 00, 01, 10, 11, 000,...} is the set
of all binary strings.
Every language L over Σ is a subset of Σ*.
Operations on Languages
Union of L1 and L2: L1 ∪ L2
Intersection of L1 and L2: L1 ∩ L2
Complement of L: L̄ = Σ* − L
Concatenation of L1 and L2: L1L2 = {x1x2 : x1 ∈ L1 and x2 ∈ L2}
Kleene star of L: L* = {ε} ∪ L ∪ L² ∪ L³ ∪ …, where L^k = LL…L, k times.
Formal Language Framework
The formal-language framework allows us to express the relation between decision problems and the algorithms that solve them.
• Algorithm A accepts a string x in Σ* if, for the given input x, the algorithm's output A(x) is 1.
• The algorithm A rejects a string x if A(x) is 0.
• The language accepted by an algorithm A is the set of strings L = {x ∈ {0, 1}* : A(x) = 1}
Formal Language Framework
Note:
Even if a language L is accepted by an algorithm A, the algorithm will not necessarily reject a string x ∉ L provided as input to it.
For example, the algorithm may loop forever.
A language L is decided by an algorithm A if every binary string in L is accepted by A and every binary string not in L is rejected by A.
Formal Language Framework
A language L is accepted in polynomial time by an algorithm A if for any length-n string x ∈ L, the algorithm accepts x in time O(n^k) for some constant k.
A language L is decided in polynomial time by an algorithm A if for any length-n string x ∈ {0, 1}*, the algorithm correctly decides whether x ∈ L in time O(n^k) for some constant k.
Class P
We define the complexity class P as:
P = {L ⊆ {0, 1}* : there exists an algorithm A that decides L in polynomial time}.
Exercise:
Prove that the class P is closed under union,
intersection, complement, concatenation and
Kleene star.
Class NP
A verification algorithm is a two-argument algorithm
A, where one argument is an ordinary input string x
and the other is a binary string y called a certificate.
A two-argument algorithm A verifies an input x if there
exists a certificate y such that A(x, y) = 1.
The language verified by a verification algorithm A is
L = {x ∈ {0, 1}* : ∃ y ∈ {0, 1}* such that A(x, y) = 1}
Complexity class NP
The complexity class NP is the class of languages that
can be verified by a polynomial-time algorithm.
More precise, a language L belongs to NP if and only if
there exists a two-input polynomial-time algorithm A
and a constant c such that
L = {x {0, 1}* |  a certificate y with |y| = O(|x|c)
such that A(x, y) = 1}
Note
• Hamiltonian-Cycle  NP. Therefore, NP ≠ 
• P  NP
Closure properties for NP
Exercise:
Prove that the class NP is closed under union,
intersection, concatenation and Kleene star.
Note:
It is an open problem whether NP is closed
under complement or not.
Class co-NP
Complexity class co-NP
L ∈ co-NP if L^C ∈ NP
Easy to prove: P ⊆ NP ∩ co-NP
Open problems:
• Is P = NP?
• Is NP = co-NP?
• Is P = NP ∩ co-NP?
Relationships amongst classes
Four possibilities:
1. P = NP = co-NP
2. NP = co-NP, but P ≠ NP
3. P = NP ∩ co-NP, but NP ≠ co-NP
4. P ⊊ NP ∩ co-NP, and NP ≠ co-NP
[Venn diagrams omitted] Most researchers regard the last possibility as the most likely.
Reduction Algorithm
A language L1 is polynomial-time reducible to a language L2, written L1 ≤P L2, if there exists a polynomial-time computable function f : {0, 1}* → {0, 1}* such that for all x, x ∈ L1 if and only if f(x) ∈ L2.
We call the function f the reduction function, and a polynomial-time algorithm F that computes f is called a reduction algorithm.
Reduction
Lemma: If L1, L2  {0, 1}*are languages such that
L1 ≤P L2, then L2 P implies L1 P.
Proof:
[figure: on input x, algorithm F computes f(x); algorithm A2 decides whether f(x) ∈ L2, and its yes/no answer is the answer to whether x ∈ L1; the composite algorithm A1 = A2 ∘ F decides L1 in polynomial time]
NP-Completeness
A language L  {0, 1}* is NP-complete if
1. L  NP and
2. L’ ≤P L for every L’  NP

• If a language L satisfies Property 2 but not Property


1, then we say that L is NP-hard
• We also define NPC to be the class of NP-complete
language.
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms
(CS F364)
Lecture No. 18
Re Cap
We define the complexity class P as:
P = { L  {0, 1}*| there exists an algorithm A that
decides L in polynomial time}.
The complexity class NP is the class of languages that
can be verified by a polynomial-time algorithm.
More precise, a language L belongs to NP if and only if
there exists a two-input polynomial-time algorithm A
and a constant c such that
L = {x {0, 1}* |  a certificate y with |y| = O(|x|c)
such that A(x, y) = 1}
• P  NP
Class co-NP
Complexity class co-NP
L ∈ co-NP if L^C ∈ NP
Easy to prove: P ⊆ NP ∩ co-NP
Four possibilities: P = NP = co-NP; NP = co-NP but P ≠ NP; P = NP ∩ co-NP but NP ≠ co-NP; or P ⊊ NP ∩ co-NP and NP ≠ co-NP.
[Venn diagrams omitted] Most researchers regard the last possibility as the most likely.
Reduction Algorithm
A language L1 is polynomial-time reducible to a language L2, written L1 ≤P L2, if there exists a polynomial-time computable function f : {0, 1}* → {0, 1}* such that for all x, x ∈ L1 if and only if f(x) ∈ L2.
We call the function f the reduction function, and a polynomial-time algorithm F that computes f is called a reduction algorithm.
Lemma: If L1, L2 ⊆ {0, 1}* are languages such that L1 ≤P L2, then L2 ∈ P implies L1 ∈ P.
NP-Completeness
A language L  {0, 1}* is NP-complete if
1. L  NP and
2. L’ ≤P L for every L’  NP

• If a language L satisfies Property 2 but not Property


1, then we say that L is NP-hard
• We also define NPC to be the class of NP-complete
language.
NPC
Theorem
If any NP-complete problem is polynomial-time solvable,
then P = NP.
Equivalently, If any problem in NP is not polynomial-time solvable,
then no NP-complete problem is polynomial-time solvable.
Proof:
Suppose that L ∈ P and also that L ∈ NPC.
Let L′ ∈ NP.
Then we have L′ ≤P L by Property 2 of the definition of NP-completeness.
Thus, by the earlier Lemma, L′ ∈ P.
NPC
How to show that L  NPC?
• Direct Proof
We show that L  NP & that L’ ≤P L for every L’  NP
• Indirect proof
We use the following lemma for the indirect proof
Lemma
If L is a language such that L’ ≤P L for some L’  NPC,
then L is NP-hard.
Moreover, if L  NP then L  NPC
NPC
Method for proving a language L is NPC:
• Prove L ∈ NP
• Select a known NPC language L′
• Describe an algorithm that computes a function f mapping every instance x ∈ {0, 1}* of L′ to an instance f(x) of L
• Prove that the function f satisfies: x ∈ L′ if and only if f(x) ∈ L, for all x ∈ {0, 1}*
• Prove that the algorithm computing f runs in polynomial time
NPC
Proof of the Lemma:
Since L′ ∈ NPC, for all L′′ ∈ NP we have L′′ ≤P L′.
By assumption, L′ ≤P L.
Therefore, by transitivity, L′′ ≤P L, which shows that L is NP-hard.
Moreover, if L ∈ NP then we also have L ∈ NPC.
1st NP Complete Problem
• Circuit-satisfiability problem: Given a boolean combinational circuit composed of AND, OR, and NOT gates, is it satisfiable?
• CIRCUIT-SAT = {<C> | C is a satisfiable boolean combinational circuit}
Theorem:
The circuit-satisfiability problem is NP-Complete
Proof:
For time being we will assume this theorem
Circuit satisfiability
[circuit figure omitted]
SAT
Formula satisfiability (SAT)
An instance of SAT is a boolean formula φ composed of:
• n boolean variables: x1, x2, …, xn
• m boolean connectives: any boolean function with one or two inputs and one output
• Parentheses
Wlog, we assume there are no redundant parentheses
SAT = {<φ> | φ is a satisfiable boolean formula}
SAT
Example: [formula example omitted]
SAT is NP Complete
Theorem
Satisfiability of boolean formulas is NP-complete
Proof
• SAT  NP
Easy
• CIRCUIT-SAT ≤P SAT
We show how to reduce any instance of circuit
satisfiability to an instance of formula satisfiability in
polynomial time
SAT is NP Complete
What do we have to do?
• Given an instance C of Circuit-SAT, define
poly-time function f that converts C to
instance φ of SAT
• Argue that f is poly-time
• Argue that f is correct (i.e., C of Circuit-SAT is
satisfiable iff φ of SAT is satisfiable)
Construction
Let C be an instance of circuit satisfiability.
Construction/Algorithm:
Look at the gate that produces the circuit output and inductively express each of the gate's inputs as formulas.
Note:
This approach cannot lead to a polynomial-time reduction: a gate whose output feeds several other gates has its formula duplicated, so the resulting formula can grow exponentially in the size of the circuit.
Exercise: Give an example.
Useful Identity
The reduction relies on the following identity:
‘if and only if’ (denoted by ↔) is a Boolean operator
that follows the following truth table.
Let C = A ↔B
A B C
1 1 1
1 0 0
0 1 0
0 0 1
We know that A ↔ B is equivalent to (¬A ∨ B) ∧ (A ∨ ¬B)
Reduction - Idea
• Reduction for a NOT gate with input A and output B:
Formula φ = B ↔ ¬A
Note: φ is True exactly when the wires are consistent with the gate
• Reduction for an AND gate with inputs A, B and output Z:
Formula φ = Z ↔ (A ∧ B)
• Reduction for an OR gate with inputs A, B and output Y:
Formula φ = Y ↔ (A ∨ B)
An AND/OR gate with any number of input wires can be similarly reduced to a boolean formula
SAT is NP Complete
Let C be an instance of circuit satisfiability.
For each wire xi in the circuit C, the formula φ has a variable xi.
Next, we describe how each gate operates as a formula involving the variables of its incident wires (using the if-and-only-if formula):
• xi ↔ (boolean operation consistent with the gate)
For example: x10 ↔ (x7 ∧ x8 ∧ x9)
Each of these small formulas is called a clause.
SAT is NP Complete
Reduction Algorithm:
The formula φ produced by the reduction algorithm is the AND of the circuit-output variable with the conjunction of clauses describing the operation of each gate.
If x_O denotes the output of the circuit and φ_k is the formula for the k-th gate, then the formula is
φ = x_O ∧ φ1 ∧ φ2 ∧ …
The size of a circuit is the number of gates.
Given a circuit C, the formula φ is produced in polynomial time.
SAT is NP Complete
To prove: CIRCUIT-SAT ≤P SAT
Suppose circuit C has a satisfying assignment.
Then each wire of the circuit has a well-defined value and the output of the circuit is 1.
If we assign the wire values to the variables in φ, then each clause of φ evaluates to 1.
Thus the conjunction of all clauses evaluates to 1.
Hence the formula φ evaluates to 1.
SAT is NP Complete
Conversely, if some assignment causes φ to evaluate to 1,
then all the clauses must be True,
i.e., all the 'if and only if' formulas of the form φ_k will be True.
This assignment preserves the functionality of all corresponding gates.
Also the output x_O is True.
Hence the circuit C is satisfiable.
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms
(CS F364)
Lecture No. 19
NPC
How to show that L  NPC?
• Direct Proof
We show that L  NP & that L’ ≤P L for every L’  NP
• Indirect proof
We use the following lemma for the indirect proof
Lemma
If L is a language such that L’ ≤P L for some L’  NPC,
then L is NP-hard.
Moreover, if L  NP then L  NPC
1st NP Complete Problem
1st NP Complete Problem
• Circuit-satisfiability problem: Given a boolean combinational circuit composed of AND, OR, and NOT gates, is it satisfiable?
Theorem:
The circuit-satisfiability problem is NP-Complete
Theorem
Satisfiability of boolean formulas is NP-complete
Proof
• SAT  NP
• CIRCUIT-SAT ≤P SAT
3 - CNF
3-SAT/3-CNF problem
Given a set of clauses C1, C2, …, Cm in 3-CNF form over variables x1, x2, …, xn.
Example: (x1 ∨ ¬x1 ∨ ¬x2) ∧ (x3 ∨ x2 ∨ x4) ∧ (¬x1 ∨ ¬x3 ∨ ¬x4)
The problem is to check whether all the clauses are simultaneously satisfiable.
3-SAT := {satisfiable 3-CNF formulae}
3-CNF  NPC
Theorem:
Satisfibility of boolean formula in 3-CNF is NP complete.
Proof :
• 3-CNF  NP
Easy to prove
Argue that, given a certificate, you can verify that the
certificate provides a solution in polynomial time
• Claim:
SAT  P 3 − CNF
3-CNF  NPC
What do we have to do?
• Given an instance < φ > of SAT, define poly-time
function f that converts < φ > to instance < φ′′′ > of
3-CNF
• Argue that f is poly-time
• Argue that f is correct (i.e., < φ > of SAT is satisfiable
iff < φ′′′ > of 3-CNF-SAT is satisfiable
3-CNF  NPC
Algorithm for f
• Suppose φ is any boolean formula,
Construct a binary parse tree, with literals as leaves
and connectives as internal nodes.
• Introduce a variable yi for the output of each internal
nodes.
• Rewrite the formula to φ' as the AND of the root
variable and a conjunction of clauses describing the
operation of each node.
Binary Parse Tree
[parse tree figure omitted]
The result is that in φ′, each clause has at most three literals, and φ′ is an AND of clauses.
3-CNF  NPC
It remains to convert φ′ into 3-CNF.
Change each clause into conjunctive normal form as follows:
– Construct a truth table
– Write the disjunctive normal form for all truth-table rows evaluating to 0 (we build a formula in DNF that is equivalent to ¬φ′)
– Use DeMorgan's laws to change it to CNF.
3-CNF  NPC
The resulting φ′′ is in CNF, but each clause has 3 or fewer literals.
3-CNF  NPC
Change 1 or 2-literal clauses into a 3-literal clause φ'''
as follows:
– If a clause has one literal l, change it to
(l∨p∨q)∧(l∨p∨¬q)∧ (l∨¬p∨q)∧ (l∨¬p∨¬q)
– If a clause has two literals (l1∨ l2), change it to
(l1∨ l2 ∨p) ∧ (l1∨ l2 ∨¬p)
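A small Python sketch of this padding step (illustrative; literals are encoded as signed integers, −v meaning ¬x_v, and p, q are indices of fresh variables):

def pad_to_3(clause, p, q):
    if len(clause) == 1:
        (l,) = clause         # (l) becomes four clauses covering all values of p, q
        return [(l, p, q), (l, p, -q), (l, -p, q), (l, -p, -q)]
    if len(clause) == 2:
        l1, l2 = clause       # (l1 ∨ l2) becomes two clauses covering both values of p
        return [(l1, l2, p), (l1, l2, -p)]
    return [clause]           # already has 3 literals

print(pad_to_3((5,), p=6, q=7))        # four 3-literal clauses equivalent to (x5)
print(pad_to_3((1, -2), p=6, q=7))     # two 3-literal clauses equivalent to (x1 ∨ ¬x2)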
3-CNF  NPC
Now, to prove correctness of f:
• We first prove that the reduction is polynomial time:
1. From φ to φ' , we introduce at most 1 variable and
1 clause per connective in φ.
2. From φ' to φ'' , we introduce at most 8 clauses for
each clause in φ'.
3. From φ'' to final 3-CNF, we introduce at most 4
clauses for each clause in φ''
3-CNF  NPC
Next, we prove that reduction is correct
– i.e., φ and resulting 3-CNF formula φ’’’ are equivalent:
1. From φ to φ' , keep equivalence by
construction.
2. From φ' to φ'' , keep equivalence by
construction.
3. From φ'' to final 3-CNF φ''', keep equivalence by
construction.
We have proved:
(1) 3-CNF ∈ NP, and (2) SAT ≤P 3-CNF.
Therefore, 3-CNF is NP-Complete.
k-CNF
Exercise
1. 4-CNF is NP Complete.
2. k-CNF is NP Complete for k  3.
3. MAX-3CNF
Given a Boolean formula in conjunctive normal form,
such that each clause contains 3 literals.
The task is to find an assignment to the variables of the
formula such that a maximum number of clauses is
satisfied.
4. What about DNF?
5. What about 2-CNF? MAX 2-CNF?
We will show that 2-CNF is in P but MAX 2-CNF is NPC
2 - CNF
Instance: A 2-CNF formula φ
Problem: To decide if φ is satisfiable
Example: a 2-CNF formula
(x ∨ y) ∧ (¬y ∨ z) ∧ (¬x ∨ ¬z) ∧ (z ∨ y)
Theorem: 2-CNF is polynomial-time decidable.
Proof: We'll show how to solve this problem efficiently using path searches in graphs.
(Directed) Graph Construction
An implication graph for a given formula has 2 vertices for
each variable.
• Vertex for each variable and for the negation of each variable
Intuitively, each vertex represents a true or not-true literal for each variable in the formula
• Edge (α, β) iff there exists a clause equivalent to (¬α ∨ β)
Alternatively:
For every clause li ∨ lj, add edges ¬li → lj and ¬lj → li
• We think of an edge li → lj as saying: if li = 1 then we must have lj = 1 in any satisfying assignment
Implication Graph - Example
Example: (x ∨ y) ∧ (¬y ∨ z) ∧ (¬x ∨ ¬z) ∧ (z ∨ y)
[implication graph figure omitted: vertices x, ¬x, y, ¬y, z, ¬z, with the edges derived from the clauses]
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms
(CS F364)
Lecture No. 20
Re Cap
Theorem:
Satisfiability of boolean formulas in 3-CNF is NP-complete.
Exercise
1. 4-CNF is NP Complete.
2. k-CNF is NP Complete for k  3.
3. MAX-3CNF
Given a Boolean formula in conjunctive normal form, such
that each clause contains 3 literals.
The task is to find an assignment to the variables of the
formula such that a maximum number of clauses is
satisfied.
4. What about DNF?
5. What about 2-CNF? MAX 2-CNF?
We will show that 2-CNF is in P but MAX 2-CNF is NPC
2 - CNF
Instance: A 2-CNF formula φ
Problem: To decide if φ is satisfiable
Example: a 2-CNF formula
(x ∨ y) ∧ (¬y ∨ z) ∧ (¬x ∨ ¬z) ∧ (z ∨ y)
Theorem: 2-CNF is polynomial-time decidable.
Proof: We'll show how to solve this problem efficiently using path searches in graphs.
(Directed) Graph Construction
An implication graph for a given formula has 2 vertices for
each variable.
• Vertex for each variable and for the negation of each variable
Intuitively, each vertex represents a true or not-true literal for each variable in the formula
• Edge (α, β) iff there exists a clause equivalent to (¬α ∨ β)
Alternatively:
For every clause li ∨ lj, add edges ¬li → lj and ¬lj → li
• We think of an edge li → lj as saying: if li = 1 then we must have lj = 1 in any satisfying assignment
Implication Graph - Example
Example: (x ∨ y) ∧ (¬y ∨ z) ∧ (¬x ∨ ¬z) ∧ (z ∨ y)
[implication graph figure omitted: vertices x, ¬x, y, ¬y, z, ¬z, with the edges derived from the clauses]
Lemma 1
Lemma 1:
If the graph contains a path from α to β, it also contains a path from ¬β to ¬α.
Observe:
If there's an edge (α, β), then there's also an edge (¬β, ¬α).
Proof:
Extend the observation along the path.
Lemma 2
Lemma 2:
A 2-CNF formula φ is unsatisfiable
iff there exists a variable x such that:
1. there is a path from x to ¬x in the graph
2. there is a path from ¬x to x in the graph
Proof
By contradiction
Proof
Suppose there are paths x from x and x from x for
some variable x,
But there’s also a satisfying assignment 
Case1: If (x) = T

 
x . . . x
T T F F
() is false! A contradiction
Case2: If (x) = F
Similar Analysis
Proof
• Suppose there are no such paths.
Construct an assignment as follows:
1. Pick an unassigned literal α with no path from α to ¬α, and assign it T
2. Assign T to all vertices reachable from α
3. Assign F to their negations
4. Repeat until all vertices are assigned
[illustration omitted]
Proof
Claim: The algorithm is well defined.
Proof:
If there were a path from x to both y and ¬y, then by Lemma 1 there would also be a path from y to ¬x, hence a path from x to ¬x (through y), contradicting the choice of x.
2-SATP
Algorithm for 2SAT:
– For each variable x find if there is a path from x to
x and vice-versa.
– Reject if any of these tests succeeded.
– Accept otherwise
 2-SATP
Theorem:
Given a graph G=(V,E) and two vertices s, t ∈ V, finding
if there is a path from s to t in G is polynomial time
algorithm.
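Putting the pieces together, a self-contained Python sketch of the whole 2-SAT decision procedure (illustrative; literals are encoded as signed integers, and the DFS below is the 'path search' the theorem refers to):

from collections import defaultdict

def two_sat_satisfiable(clauses):
    g = defaultdict(set)
    variables = set()
    for a, b in clauses:                 # clause (a ∨ b) gives edges ¬a -> b and ¬b -> a
        g[-a].add(b)
        g[-b].add(a)
        variables |= {abs(a), abs(b)}
    def reachable(s, t):                 # DFS: is there a path from s to t?
        stack, seen = [s], {s}
        while stack:
            u = stack.pop()
            if u == t:
                return True
            for v in g[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return False
    # unsatisfiable iff for some x there are paths from x to ¬x and from ¬x to x
    return not any(reachable(x, -x) and reachable(-x, x) for x in variables)

# (x ∨ y) ∧ (¬y ∨ z) ∧ (¬x ∨ ¬z) ∧ (z ∨ y), with x = 1, y = 2, z = 3
print(two_sat_satisfiable([(1, 2), (-2, 3), (-1, -3), (3, 2)]))   # True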
MAX 2SAT
Theorem : MAX-2SAT is NP-complete
Proof:
• MAX-2SAT is in NP
Guess a truth assignment and verify the count.
The verification takes polynomial time.
• We now reduce 3-CNF to MAX-2SAT.
We show that given an instance of 3-CNF we construct an instance of MAX-2SAT such that a satisfying truth assignment of the 3-CNF instance can be extended to a truth assignment of the MAX-2SAT instance satisfying the target number of clauses.
MAX-2SAT
Let S be the instance of 3CNF where the clauses are C1,
C2,......,Cm
where Ci = (x ∨ y ∨ z)
From S we build an instance S′ of MAX-2SAT as follows:
Each Ci in S corresponds to a clause group C′_i in S′, where C′_i has the following 10 clauses:
(x) ∧ (y) ∧ (z) ∧ (w)
(¬x ∨ ¬y) ∧ (¬y ∨ ¬z) ∧ (¬z ∨ ¬x)
(x ∨ ¬w) ∧ (y ∨ ¬w) ∧ (z ∨ ¬w)
where w is a new variable (one per clause group) and 1 ≤ i ≤ m.
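A quick brute-force check of the counting claims that follow (illustrative): for every assignment to x, y, z, the best choice of w satisfies exactly 7 of the 10 clauses when (x ∨ y ∨ z) is true, and at most 6 when it is false:

from itertools import product

def gadget_count(x, y, z, w):
    clauses = [x, y, z, w,
               not x or not y, not y or not z, not z or not x,
               x or not w, y or not w, z or not w]
    return sum(clauses)

for x, y, z in product([False, True], repeat=3):
    best = max(gadget_count(x, y, z, w) for w in (False, True))
    print(x, y, z, "->", best)    # 7 whenever at least one of x, y, z is True; else 6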
MAX-2SAT
Assume that S is satisfiable.
Then in a typical clause Ci = (x ∨ y ∨ z)
either one or two or all the three variables are true.
• All of x, y, z are true:
By setting w to true, we satisfy 4 + 0 + 3 = 7 clauses of C′_i
• Two of x, y, z are true:
By setting w to true, we satisfy 3 + 2 + 2 = 7 clauses of C′_i
• One of x, y, z is true:
By setting w to false, we satisfy 1 + 3 + 3 = 7 clauses of C′_i
Observe: A satisfying truth assignment of S can be extended to a truth assignment of S′ where exactly seven clauses in each clause group get satisfied.
MAX-2SAT
Now assume that S is not satisfiable.
Then in at least one clause Ci = (x ∨ y ∨ z) in S, we have x = F, y = F, z = F.
By setting w to false, we satisfy 0 + 3 + 3 = 6 clauses of C′_i.
By setting w to true, we satisfy only 1 + 3 + 0 = 4 clauses of C′_i.
That is, if S is not satisfiable, no assignment can make at least seven clauses true in every clause group.
MAX-2SAT
Observe
Each clause in S′ as constructed above has at most two
literals.
It can be seen that the clauses in S′ can be efficiently
generated from the clauses in S in polynomial time.
Next we prove:
S is satisfiable if and only if some truth assignment for S′ satisfies at least k clauses, where k = 7m.
MAX-2SAT
Let φ be an instance of 3-CNF and R(φ) be the corresponding instance of MAX-2SAT.
If φ has m clauses, then R(φ) has 10m clauses.
Claim:
k = 7m clauses of R(φ) can be satisfied if and only if φ is satisfiable.
Therefore 3-CNF ≤P MAX-2SAT,
and so MAX-2SAT ∈ NPC.
Max-Clique
Max-Clique:
Given a graph G, find the largest clique
Clique: set of nodes such that all pairs in the set are
neighbors OR A clique in G of size k is a complete subgraph
of G on k vertices
Decision problem:
Given G and integer k, does G contain a clique of size ≥ k?
Max-Clique is clearly in NP.
Easy
Theorem :
Max-Clique is NP-Complete
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms
(CS F364)
Lecture No. 21
NP Complete Problems
We have proved the following reductions:
CIRCUIT-SAT ≤P SAT ≤P 3-CNF ≤P MAX 2SAT, k-CNF, Max k-CNF
We have also proved that 2-CNF is in P.
Max-Clique
Max-Clique:
Given a graph G, find the largest clique
Clique: set of nodes such that all pairs in the set are
neighbors OR A clique in G of size k is a complete subgraph
of G on k vertices
Decision problem:
Given G and integer k, does G contain a clique of size ≥ k?
Max-Clique is clearly in NP.
Easy
Theorem :
Max-Clique is NP-Complete
Max-Clique
Proof:
We will prove 3-CNF ≤P Max-Clique.
Given a 3-CNF formula φ of m clauses C1, C2, …, Cm over n variables x1, x2, …, xn,
we construct a graph G as follows:
1. For each clause Cr = (l_1^r ∨ l_2^r ∨ l_3^r), create one vertex for each of l_1^r, l_2^r, l_3^r
2. Place an edge between two vertices l_i^r and l_j^s if and only if
• r ≠ s, i.e., the corresponding literals are from different clauses
• l_i^r ≠ ¬l_j^s, i.e., they are consistent
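A short Python sketch of this construction (illustrative; literals are signed integers, and vertices are (clause index, position) pairs):

from itertools import combinations

def clique_graph(clauses):
    V = [(r, i) for r, cl in enumerate(clauses) for i in range(len(cl))]
    E = {(u, v) for u, v in combinations(V, 2)
         if u[0] != v[0]                                    # different clauses
         and clauses[u[0]][u[1]] != -clauses[v[0]][v[1]]}   # consistent literals
    return V, E

# φ = (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ (x1 ∨ x2 ∨ x3)
V, E = clique_graph([(1, -2, -3), (-1, 2, 3), (1, 2, 3)])
print(len(V))   # 9 vertices; φ is satisfiable iff this G has a clique of size 3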
Max-Clique
Example: Suppose φ = C1 ∧ C2 ∧ C3
where C1 = x1 ∨ ¬x2 ∨ ¬x3, C2 = ¬x1 ∨ x2 ∨ x3, and C3 = x1 ∨ x2 ∨ x3
G: [graph figure omitted]
Max-Clique
Proof Contd.
• Reduction takes polynomial time
easy to see
Observe:
Since the 3 vertices corresponding to a clause are not connected to each other, at most one of them can be in a clique at a time.
This means the maximum clique size is at most m.
Claim:
φ is satisfiable if and only if a clique of size m exists in G.
Max-Clique
Suppose  is satisfiable.
Then each clause has at least one literal that's assigned value T.
Pick one such literal per clause, and let V′ be the set of the corresponding vertices.
Then |V′| = m.
Claim: V’ is a clique.
Consider any two vertices of V’
1. These vertices corresponds to literals from different
clauses.
2. Since the corresponding literals are assigned value T, they are consistent.
Therefore, there is an edge between the two vertices.
Hence the claim.
Max-Clique
Conversely, suppose G has a clique V’ of size m.
Since no edges in G connect vertices corresponding to
literals from the same clause,
So V’ contains exactly one vertex corresponding to literals
from each of the m clauses.
Assign value T to each of these m literals.
Observe : Assignment is consistent
Because G contains no edges between inconsistent literals.
Since there is a literal in each clause assigned value T,
each clause is satisfied.
Hence φ is satisfied.
Vertex Cover Problem
Vertex Cover Problem
Let G = (V, E) be an undirected graph.
A subset V’  V is called a vertex cover of G if
(u, v)  E, then u  V’ or v  V’ (or both)
The size of vertex cover is the number of vertices in it.
The Vertex Cover Problem is to find a vertex cover of
minimum size in a given graph.
The Vertex Cover Decision problem is:
Given a graph G and an integer k,
Does G have a vertex cover of size ≤ k?
Vertex Cover Problem
Example: For the following graph G, find a vertex cover:
[graph figure omitted]
Theorem:
Vertex Cover ∈ NPC
Proof:
• Vertex Cover is in NP
Easy to prove.
Vertex Cover Problem
Claim: Vertex Cover is NP-hard.
For proving this we need the notion of a complement graph.
The complement graph of G = (V, E) is defined as Ḡ = (V, Ē), where
Ē = {(u, v) : u, v ∈ V, u ≠ v, (u, v) ∉ E}
(It is the graph with all the edges that are not in E.)
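A one-function Python sketch of the complement construction (illustrative; undirected edges are represented as frozensets):

from itertools import combinations

def complement(V, E):
    all_pairs = {frozenset(p) for p in combinations(sorted(V), 2)}
    return all_pairs - E                 # Ē: every pair of distinct vertices not in E

V = {1, 2, 3}
E = {frozenset({1, 2})}
print(complement(V, E))                  # the two missing edges {1, 3} and {2, 3}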
Vertex Cover Problem
Claim: Clique ≤P Vertex Cover
Reduction Algorithm:
Input is an instance (G, k) of the Clique Problem.
The reduction algorithm computes the complement Ḡ.
This can be done in polynomial time.
Claim: G has a clique of size k if and only if the graph Ḡ has a vertex cover of size |V| − k.
Vertex Cover Problem
Suppose that G has a clique V′ ⊆ V with |V′| = k.
Claim: V − V′ is a vertex cover in Ḡ.
Let (u, v) ∈ Ē.
Then (u, v) ∉ E.
⇒ At least one of u or v does not belong to V′
(because V′ is a clique, every pair of vertices in V′ is connected by an edge of E)
⇒ At least one of u or v belongs to V − V′
⇒ (u, v) is covered by V − V′
⇒ V − V′ is a vertex cover of Ḡ, of size |V| − k
Vertex Cover Problem
Conversely, suppose that 𝑮 ഥ has a vertex cover V’  V,
where |V’| = |V| - k
Claim: V – V’ is a clique in G
Suppose u, v  V – V’
 u, v  V’
 (u, v)  𝑬

Because V’ is an vertex cover of 𝑮 ഥ and so covers every

edge in 𝑬
(u, v)  E
 V – V’ is a clique in G of size k
Independent Set
An independent set is a set I ⊆ V of vertices such that for all u, v ∈ I, (u, v) ∉ E,
OR: no pair of vertices in I is connected by an edge.
The independent set problem is to find a maximum-size independent set in G.
The independent set decision problem is as follows:
Given a graph G and an integer k,
is there an independent set I of size at least k?
Independent Set
Theorem:
Independent Set is NP-Complete.
Proof:
1. Independent Set ∈ NP
Easy to prove.
2. Independent Set is NP-hard.
Claim: Max-Clique ≤P Independent Set
Reduction Algorithm:
Given G and k, compute the complement graph Ḡ.
Claim: G has a clique of size k if and only if the graph Ḡ has an independent set of size k.
Independent Set
Assume that there exists a clique C of size k in G.
Thus, for any u, v ∈ C, (u, v) ∈ E.
Thus, (u, v) ∉ Ē.
Thus, the vertices in C form an independent set of size k
in Ḡ.
Conversely, assume that there exists an independent set I
of size k in Ḡ.
Let u, v ∈ I
Then (u, v) ∉ Ē.
Thus, (u, v) ∈ E.
Thus, the vertices in I form a clique of size k in G.
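Both directions amount to one fact: a set is a clique in G exactly when it is independent in Ḡ. A small illustrative sanity check in Python (representation as before; this is not part of the proof):

from itertools import combinations

def is_clique(edges, C):
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset(p) in edge_set for p in combinations(C, 2))

def is_independent_in_complement(vertices, edges, C):
    # C is independent in the complement graph of (vertices, edges).
    edge_set = {frozenset(e) for e in edges}
    comp = {frozenset(p) for p in combinations(vertices, 2)} - edge_set
    return all(frozenset(p) not in comp for p in combinations(C, 2))

V, E = {1, 2, 3, 4}, [(1, 2), (1, 3), (2, 3)]
print(is_clique(E, {1, 2, 3}), is_independent_in_complement(V, E, {1, 2, 3}))  # True True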
Relationship between Clique, VC, and IS
Theorem:
For a graph G = (V, E) and a subset V’ ⊆ V, the following
are equivalent:
1. V’ is a clique of G
2. V’ is an independent set of Ḡ
3. V − V’ is a vertex cover of Ḡ
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms
(CS F364)
Lecture No. 22
1
NP Complete Problems
We have proved the following reductions:
CIRCUIT-SAT ≤p SAT ≤p 3-CNF
3-CNF ≤p MAX 2SAT, k-CNF, Max k-CNF, Clique
Clique ≤p Vertex Cover
Clique ≤p Independent Set
Max-Clique
Theorem :
Max-Clique is NP-Complete
Proof:
We will prove 3-CNF ≤𝑷 Max-Clique.
Given a 3-CNF formula φ of m clauses C1, C2, ....., Cm
over n variables x1, x2, ....., xn
We construct a graph G as follows:
1. For each clause Cr = (l^r_1 ∨ l^r_2 ∨ l^r_3), create one vertex for each
of l^r_1, l^r_2, l^r_3
2. Place an edge between two vertices l^r_i and l^s_j if and only if
• r ≠ s, i.e., the corresponding literals are from different clauses, and
• l^r_i ≠ ¬l^s_j, i.e., they are consistent
Max-Clique
Example: Suppose φ = C1 ∧ C2 ∧ C3
where C1 = x1 ∨ ¬x2 ∨ ¬x3, C2 = ¬x1 ∨ x2 ∨ x3
and C3 = x1 ∨ x2 ∨ x3
G: (graph figure omitted)
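The construction is mechanical enough to code directly. A minimal Python sketch (my own representation, not from the slides: a literal is a (variable, sign) pair, sign False meaning negated; clauses are 3-tuples of literals):

def cnf3_to_clique_graph(clauses):
    # One vertex (r, i) per occurrence of literal i in clause r;
    # edges join consistent literals from different clauses.
    vertices = [(r, i) for r in range(len(clauses)) for i in range(3)]
    edges = []
    for (r, i) in vertices:
        for (s, j) in vertices:
            if r < s:  # different clauses, each pair considered once
                var_r, sign_r = clauses[r][i]
                var_s, sign_s = clauses[s][j]
                if not (var_r == var_s and sign_r != sign_s):  # consistent
                    edges.append(((r, i), (s, j)))
    return vertices, edges

# The example formula above: phi = C1 ^ C2 ^ C3.
phi = [(("x1", True), ("x2", False), ("x3", False)),
       (("x1", False), ("x2", True), ("x3", True)),
       (("x1", True), ("x2", True), ("x3", True))]
V, E = cnf3_to_clique_graph(phi)
print(len(V), len(E))  # 9 21 -- phi is satisfiable iff G has a clique of size 3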
Vertex Cover Problem
Vertex Cover ∈ NPC
Claim: Clique ≤p Vertex Cover
Claim: G has a clique of size k if and only if the complement
graph Ḡ has a vertex cover of size |V| − k.
Theorem:
Independent Set is NP-Complete.
Claim: Max-Clique ≤p Independent Set
Claim: G has a clique of size k if and only if the
complement graph Ḡ has an independent set of size k.
Set Covering Problem
Example: (figure omitted)
Elements of U are the black circles, and the sets
S1, S2, ....., S6 are indicated by rectangles.
The optimum cover consists of the three sets
{S3, S4, S5}.
Set Covering Problem
Set Covering Problem
We are given a pair Σ = (X,F)
Where X = {x1, x2,…..,xm } is a finite set of objects, called
the universe
and F = {S1, S2,…..,Sn } is a collection of subsets of X,
such that every element of X belongs to at least one
subset in F.
Goal: Compute a set C ⊆ {1, ..., n} of minimum
cardinality such that X = ⋃_{i∈C} Si
Set Covering Problem
Theorem: Set Cover problem is NP Complete
Proof:
• Set Cover is in NP (Easy)
• Claim:
Vertex Cover ≤𝑷 Set Cover.
Given an instance of Vertex Cover (i.e. a graph G=(V, E)
and an integer k)
We will construct an instance of the Set Cover problem.
• Let U = E.
Set Covering Problem
We will define n subsets of U as follows:
Label the vertices of G from 1 to n,
and let Si be the set of edges incident to vertex i.
Then Si ⊆ U and U = ⋃_i Si
This construction can be done in polynomial time in the
size of the Vertex Cover instance
We show that G has a Vertex Cover of size k iff the
system (U, F) has a Set Cover of size k, where F = {Si | i =
1, ..., n}.
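A minimal Python sketch of this construction (illustrative names; the universe is the edge set, each edge stored as an unordered pair):

def vertex_cover_to_set_cover(vertices, edges, k):
    # U = E; S_v = edges incident to vertex v; the bound k is unchanged.
    universe = {frozenset(e) for e in edges}
    family = {v: {e for e in universe if v in e} for v in vertices}
    return universe, family, k

U, F, k = vertex_cover_to_set_cover({1, 2, 3}, [(1, 2), (2, 3)], 1)
print(F[2] == U)  # True: vertex 2 alone covers both edges of the path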
Set Covering Problem
Suppose G has a vertex cover of size k.
Let S be such a set of nodes.
By our construction, S corresponds to a collection C of
subsets of U
C clearly has k subsets.
Claim : The sets belonging to C cover U
Consider any element of U.
Such an element is an edge e in G.
Since S is a vertex cover for G, at least one of e’s endpoints
is in S.
Therefore C contains at least one of the sets associated with
the endpoints of e.
Set Covering Problem
Conversely, suppose there is a set cover C of size k in our
constructed instance.
Since each set in C is associated with a vertex in G,
let S be the set of these vertices.
Then |S|=|C|= k
Consider any edge e.
Since e is in the set U and C is a set cover, C must contain at
least one set that includes e.
But by construction, the only sets that include e correspond
to nodes that are endpoints of e.
Thus, S must contain at least one of the endpoints of e.
Therefore, S is a Vertex Cover of G of size k
NP Complete Problems
We have proved the following reductions:
CIRCUIT-SAT ≤p SAT ≤p 3-CNF
3-CNF ≤p MAX 2SAT, k-CNF, Max k-CNF, Clique
Clique ≤p Vertex Cover
Clique ≤p Independent Set
Vertex Cover ≤p Set Cover
Vertex Cover ≤p Dominating Set
Dominating Set
A dominating set in a graph G = (V, E) is a subset of vertices
V’ such that every vertex in the graph is either in V’ or is
adjacent to some vertex in V’.
Note that we can assume there are no isolated vertices in G,
because a graph G with r isolated vertices has a
dominating set of size k iff the graph with those isolated
vertices removed has a dominating set of size k − r,
since each isolated vertex must be in every dominating set.
Dominating set
Problem:
Given (G, k), does a dominating set of size at most k for
G exist?
Theorem:
Dominating Set is NP-Complete.
Proof:
Dominating set is in NP (Exercise)
Claim:
Vertex Cover ≤p Dominating Set
Algorithm
Reduction Algorithm
• For each edge (v, w) of G, add a vertex vw and
the edges (v, vw) and (w, vw) to G’
• Remove all vertices with no incident edges;
such vertices would always have to go in a
dominating set but are not needed in a vertex
cover of G
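A minimal Python sketch of this reduction (my representation, not from the slides: the new vertex for edge (v, w) is the frozenset {v, w}):

def vc_to_dominating_set(vertices, edges):
    # Add one new vertex per edge, joined to both endpoints.
    new_vertices = set(vertices)
    new_edges = [tuple(e) for e in edges]
    for (v, w) in edges:
        vw = frozenset((v, w))
        new_vertices.add(vw)
        new_edges += [(v, vw), (w, vw)]
    # Remove isolated vertices of the original graph.
    touched = {x for e in edges for x in e}
    new_vertices -= set(vertices) - touched
    return new_vertices, new_edges

Gp_V, Gp_E = vc_to_dominating_set({1, 2, 3}, [(1, 2), (2, 3)])
print(len(Gp_V), len(Gp_E))  # 5 6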
Dominating set
Example: (figure omitted)
Observe: G’ is constructed from G in polynomial time.
Proof
Claim:
G has a Vertex Cover of size k iff G’ has a dominating set of
size k
Let C be a vertex cover of size k in G
• For an old vertex v ∈ G’:
- v is not isolated, so some edge (v, w) is incident to v
- By the definition of VC, that edge is covered, so v ∈ C or w ∈ C
- Either way, v is dominated by C
• For a new vertex uv ∈ G’:
- The edge (u, v) must be covered, so u ∈ C or v ∈ C
- That vertex dominates uv in G’
• Thus, C is a dominating set for G’
Proof
Conversely, let D be a dominating set of size k in G’.
Case 1: D contains only vertices from G.
Every new vertex uv is adjacent to a vertex in D,
i.e., u ∈ D or v ∈ D, so D covers all edges.
D is a valid vertex cover of G.
Case 2: D contains some new vertices (vertices of the form uv).
For each new vertex uv ∈ D, replace it by u (or v);
if u (or v) is already in D, the new vertex is simply dropped.
The edge (u, v) in G is then covered.
The result is a valid vertex cover of G (of size at most k).
Hamiltonian Cycle
• For Today
Assume Hamiltonian Cycle Problem is NP Complete
Travelling Salesman Problem
The travelling salesman problem or TSP :
Given a list of cities and the distances between each pair of
cities, what is the shortest possible route that visits each
city exactly once and returns to the origin city?
The problem was first formulated in 1930 and is one of the
most intensively studied problems in optimization.
The corresponding decision version of the traveling
salesman problem can be described as follows:
TSP = {(G, c, k): G = (V, E) a complete graph,
c is a function from V × V → Z,
k ∈ Z,
G has a traveling salesman tour with cost at most k}.
Travelling Salesman Problem
Theorem:
The traveling salesman problem is NP-complete.
Proof :
First, we have to prove that TSP belongs to NP.
Given a candidate tour, we check that the tour contains
each vertex exactly once.
Then the total cost of the edges of the tour is
calculated and verified to be at most k.
This can be done in polynomial time, thus TSP belongs
to NP.
Travelling Salesman Problem
Secondly, we prove that TSP is NP-hard.
Claim : Hamiltonian cycle ≤p TSP
(Assume Hamiltonian cycle problem is NP complete)
Reduction Algorithm:
Assume G = (V, E) to be an instance of Hamiltonian cycle.
An instance of TSP is then constructed as follows.
• We create the complete graph G’ = (V, E’), where
E′ = {(i, j) : i, j ∈ V and i ≠ j}
• We define the cost function c by
c(i, j) = 0 if (i, j) ∈ E
c(i, j) = 1 if (i, j) ∉ E
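Equivalently, in a short Python sketch (illustrative; the complete graph is left implicit and c is returned as a function):

def ham_cycle_to_tsp(vertices, edges):
    # Cost 0 on original edges, 1 on the added non-edges.
    edge_set = {frozenset(e) for e in edges}
    def c(i, j):
        return 0 if frozenset((i, j)) in edge_set else 1
    return c

c = ham_cycle_to_tsp({1, 2, 3, 4}, [(1, 2), (2, 3), (1, 3)])
print(c(1, 2), c(1, 4))  # 0 1
# G has a Hamiltonian cycle iff the TSP instance has a tour of cost at most 0.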
Travelling Salesman Problem
Claim:
G has a Hamiltonian cycle if and only if G’ has a tour of
cost at most 0.
Suppose that a Hamiltonian cycle h exists in G.
It is clear that the cost of each edge in h is 0 in G’ as
each edge belongs to E.
Therefore, h has a cost of 0 in G′.
Thus, if graph G has a Hamiltonian cycle then graph G’
has a tour of cost 0.
Travelling Salesman Problem
Conversely,
We assume that G’ has a tour h of cost at most 0.
By definition, the costs of edges in E’ are 0 and 1.
Since the cost of h is at most 0, each edge of h must have cost 0.
Thus h contains only edges in E.
Therefore h is a Hamiltonian cycle of G.
TSP
Suppose in the reduction algorithm we define
c(i, j) = 1 if (i, j) ∈ E
c(i, j) = 2 if (i, j) ∉ E
Then the corresponding claim is:
G has a Hamiltonian cycle if and only if G’ has a
tour of cost at most |V|.
Hamiltonian
Exercise:
1. Longest Simple Cycle is NP Complete
BITS, PILANI – K. K. BIRLA GOA CAMPUS
Design & Analysis of Algorithms
(CS F364)
Lecture No. 24
1
NP Complete Problems
We have proved the following reductions:
CIRCUIT-SAT ≤p SAT ≤p 3-CNF
3-CNF ≤p MAX 2SAT, k-CNF, Max k-CNF, Clique
Clique ≤p Vertex Cover
Clique ≤p Independent Set
Vertex Cover ≤p Set Cover
Vertex Cover ≤p Dominating Set
Hamiltonian Cycle ≤p Longest Simple Cycle
Hamiltonian Cycle ≤p TSP
Hamiltonian Path
Theorem
Hamiltonian Path is NP Complete.
Proof
We prove Hamiltonian Cycle ≤p Hamiltonian Path
Hamiltonian Path
Construction
Given an instance G = (V, E) of the Hamiltonian Cycle
problem,
construct a new graph G’ by adding three new vertices
“Start”, “AlmostThere”, and “Finish”.
The vertex “Start” connects to an arbitrary vertex v in V,
“AlmostThere” connects to every vertex in V that is
adjacent to v,
and “Finish” connects to “AlmostThere”.
Clearly this reduction is in polynomial time.
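A minimal Python sketch of this construction (illustrative; Start, AlmostThere, and Finish are string vertices, and v is picked arbitrarily):

def ham_cycle_to_ham_path(vertices, edges):
    v = next(iter(vertices))  # the arbitrary vertex Start attaches to
    neighbours = {w for e in edges for w in e if v in e and w != v}
    new_vertices = set(vertices) | {"Start", "AlmostThere", "Finish"}
    new_edges = list(edges)
    new_edges.append(("Start", v))
    new_edges += [("AlmostThere", w) for w in neighbours]
    new_edges.append(("AlmostThere", "Finish"))
    return new_vertices, new_edges

Gp_V, Gp_E = ham_cycle_to_ham_path({1, 2, 3}, [(1, 2), (2, 3), (1, 3)])
print(len(Gp_V), len(Gp_E))  # 6 7 (in a triangle, every vertex has two neighbours)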
Hamiltonian Path
Claim:
G has a Hamiltonian cycle if and only if G’ has a Hamiltonian
path.
Suppose 𝐺 has a Hamiltonian cycle.
Without loss of generality, suppose the cycle is
v1 → v2 → ⋯ → vn → v1,
and suppose Start connects to v1 in G’.
Since vn is adjacent to v1, the edge (vn, AlmostThere) exists in G’.
Then the following path is a Hamiltonian path in G’:
Start → v1 → v2 → ⋯ → vn → AlmostThere → Finish
Hamiltonian Path
Conversely,
Suppose 𝐺 ′ has a Hamiltonian path.
Then the path must start with Start → u, where u is the
vertex Start connects to, and finish with
v → AlmostThere → Finish, where v is a neighbour of u in G
(AlmostThere is joined only to the neighbours of u and to Finish).
So there is an edge e = (u, v) in E.
The path from u to v, plus this edge e, forms a
Hamiltonian cycle of G.
Hamiltonian
Exercise:
1. Longest Simple Cycle is NP Complete
2. Longest Simple Path is NP Complete
Subset Sum
Subset Sum
• Given a set A of integers and an integer number S,
does there exist a subset of A such that the sum of
its elements is equal to S?
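Before the hardness proof, a note on the problem itself: for non-negative integers, Subset Sum can be decided in pseudo-polynomial time O(|A|·S) by dynamic programming, which is not polynomial in the input size (S is encoded in about log S bits). A short illustrative Python sketch:

def subset_sum(A, S):
    # reachable = set of subset sums achievable so far, capped at S.
    reachable = {0}
    for a in A:
        reachable |= {r + a for r in reachable if r + a <= S}
    return S in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False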
Theorem:
SUBSET SUM is NP-complete.
Assume for now
Set-Partition
Set Partition Problem
Given a set S of integers,
does there exist a set A ⊆ S such that
Σ_{a∈A} a = Σ_{a∈S−A} a ?
In other words, is there a way to partition S into two
parts, so that both parts have equal sum?
Set-Partition
Theorem:
SET-PARTITION is NP-Complete
Proof:
• SET-PARTITION ∈ NP:
Guess the partition into two parts and verify that the two
parts have equal sums; this can be done in polynomial time.
• Claim
SUBSET-SUM ≤𝑷 SET-PARTITION
Set-Partition
Reduction Algorithm:
Given instance (X, t) of Subset Sum
Let s be the sum of members of X.
Let X’ = X ∪ {s − 2t}
This can be done in polynomial (linear) time.
Claim:
⟨X, t⟩ ∈ SUBSET-SUM iff ⟨X′⟩ ∈ SET-PARTITION
That is, X has a subset that sums to t ⇔ X’ can be
partitioned into two sets with equal sums
Note: The sum of members of X′ is 2s−2t.
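The reduction is one line of arithmetic. A Python sketch (illustrative; X is treated as a list, so the new element is simply appended):

def subset_sum_to_set_partition(X, t):
    s = sum(X)
    return X + [s - 2 * t]  # X has a subset summing to t iff X' splits into equal halves

print(subset_sum_to_set_partition([1, 2, 3], 4))  # [1, 2, 3, -2]: {2} and {1, 3, -2} both sum to 2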
Set-Partition
Suppose there exists a set (say Y) of numbers in X that
sum to t.
Then the set of remaining numbers in X sum to s−t.
Let Y’ = Y ∪ {s − 2t}
Then Y’ ⊆ X’ such that
Σ_{a∈Y’} a = Σ_{a∈X’−Y’} a = s − t
Therefore, there exists a partition of X’ into two parts such
that each part sums to s − t
Set-Partition
Conversely, suppose there exists a partition of X′ into
two sets such that the sum over each set is s−t.
Then, one of these sets contains the number s−2t.
Removing this number, we get a set of numbers whose
sum is t, and all of these numbers are in X
0-1 Knapsack problem
0-1 Knapsack Decision Problem
Given n items with weights w1, w2,……, wn
and values v1, v2,……, vn and capacity W and value V
Is there a subset S ⊆ {1, 2, ..., n} such that
Σ_{i∈S} wi ≤ W and Σ_{i∈S} vi ≥ V ?
Theorem:
0-1 Knapsack Problem is NP-complete
0-1 Knapsack problem
Proof:
• 0-1 Knapsack is in NP.
The certificate is the set S of items that are chosen; the
verification process is to compute Σ_{i∈S} wi and Σ_{i∈S} vi.
This can be done in polynomial time in the size of the input.
• Claim
Set-Partition ≤𝑷 0-1 Knapsack
0-1 Knapsack problem
Reduction Algorithm Q:
Suppose we are given a set X = {a1, a2,……, an} for the
Set-Partition Problem
Consider the following Knapsack problem:
wi = ai and vi = ai for i = 1, 2, ..., n, and
W = V = (1/2) Σ_{i=1}^{n} ai
This process of converting the Set-Partition problem to
0-1 Knapsack problem is polynomial in the input size.
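A Python sketch of the reduction algorithm Q (illustrative; Fraction keeps W = V exact even when the total is odd, in which case both instances are trivially 'No'):

from fractions import Fraction

def set_partition_to_knapsack(X):
    # Weights and values both copy the input numbers; W = V = (1/2) * sum(X).
    half = Fraction(sum(X), 2)
    return list(X), list(X), half, half  # (weights, values, W, V)

w, v, W, V = set_partition_to_knapsack([3, 1, 1, 2, 2, 1])
print(W)  # 5: a 'Yes' instance iff some subset has weight <= 5 and value >= 5, e.g. {3, 2}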
0-1 Knapsack problem
Suppose X is a ‘Yes’ instance for the Set-Partition
problem.
Then there exists a subset S of X such that
Σ_{i∈S} ai = Σ_{i∈X−S} ai = (1/2) Σ_{i=1}^{n} ai
Let our Knapsack contain the items in S.
Then Σ_{i∈S} wi = Σ_{i∈S} ai = (1/2) Σ_{i=1}^{n} ai = W
and Σ_{i∈S} vi = Σ_{i∈S} ai = (1/2) Σ_{i=1}^{n} ai = V
0-1 Knapsack problem
Conversely, suppose Q(X) is a ‘Yes’ instance for the 0-1
Knapsack problem.
Let S be the set chosen for the 0-1 Knapsack problem.
We have Σ_{i∈S} wi = Σ_{i∈S} ai ≤ W = (1/2) Σ_{i=1}^{n} ai and
Σ_{i∈S} vi = Σ_{i∈S} ai ≥ V = (1/2) Σ_{i=1}^{n} ai
Therefore Σ_{i∈S} ai = (1/2) Σ_{i=1}^{n} ai = Σ_{i∈X−S} ai
Thus S is the required subset of X.