

CS 341 Course Package — Chris Erbach

Contents
1 Sep 9th, 2008
  1.1 Welcome to CS 341: Algorithms, Fall 2008
  1.2 Marking Scheme
  1.3 Course Outline
  1.4 A Case Study (Convex Hull)
      1.4.1 Algorithm

2 Sep 11th, 2008

3 Sep 16th, 2008
  3.1 Example: Making change
  3.2 Example: Scheduling time
  3.3 Example: Knapsack problem

4 Sep 18, 2008: MISSING

5 Sep 23, 2008: Divide and Conquer
  5.1 Solving Recurrence Relations
      5.1.1 "Unrolling" a recurrence
      5.1.2 Guess an answer, prove by induction
      5.1.3 Changing Variables
      5.1.4 Master Theorem

6 Sep 25, 2008
  6.1 Assignment Info
  6.2 Divide & Conquer Algorithms
      6.2.1 Counting Inversions
      6.2.2 Multiplying Large Numbers

7 Sep 30, 2008
  7.1 D&C: Multiplying Matrices
  7.2 D&C: Closest pair of points
  7.3 Hidden Surface Removal

8 Oct 2nd, 2008
  8.1 Dynamic Programming
  8.2 Second example: optimum binary search trees

9 Oct 7th, 2008
  9.1 Example 2: Minimum Weight Triangulation

10 Oct 9th, 2008
  10.1 Dynamic Programming
  10.2 Certain types of subproblems
  10.3 Memoization

11 Oct 14th, 2008
  11.1 Graph Algorithms
  11.2 Minimum Spanning Trees

12 Oct 16th, 2008
  12.1 Graph Algorithms
      12.1.1 Prim's Algorithm
  12.2 Shortest Paths

13 Oct 21, 2008
  13.1 All Pairs Shortest Path
      13.1.1 Floyd-Warshall Algorithm

14 Oct 23, 2008
  14.1 Dijkstra's Algorithm
  14.2 Connectivity in Graphs
      14.2.1 Finding 2-connected components

15 Oct 28th, 2008
  15.1 Backtracking and Branch/Bound
  15.2 Branch-and-Bound
      15.2.1 Branch and Bound TSP Algorithm

16 Oct 30th, 2008
  16.1 Recall
  16.2 Lower Bounds
      16.2.1 Basic Techniques
      16.2.2 State-of-the-Art in Lower Bounds
  16.3 Polynomial Time
  16.4 Reductions

17 Nov 4th, 2008
  17.1 Decision Problems
  17.2 P or NP?
  17.3 Properties

18 Nov 6th, 2008
  18.1 Recall
  18.2 NP-Complete
      18.2.1 Circuit Satisfiability
      18.2.2 3-SAT

19 Nov 11th, 2008
  19.1 Satisfiability – no restricted form
  19.2 Independent Set
  19.3 Vertex Cover
  19.4 Set-Cover Problem
  19.5 Road map of NP-Completeness
  19.6 Hamiltonian Cycle

20 Nov 13th, 2008
  20.1 Undirected Hamiltonian Cycle
  20.2 TSP is NP-complete
  20.3 Subset-Sum is NP-Complete

21 Nov 18th, 2008
  21.1 Major Open Questions
  21.2 Undecidability
      21.2.1 Examples

22 Nov 20th, 2008
  22.1 Undecidability
  22.2 History of Undecidability

23 Nov 25th, 2008
  23.1 Undecidability
  23.2 Other Undecidable Problems
      23.2.1 Halt-No-Input or Halt-on-Empty
      23.2.2 Program Verification
      23.2.3 Other Problems (no proofs)

24 Nov 27th, 2008
  24.1 What to do with NP-complete problems
  24.2 P vs. NP

1 Sep 9th, 2008


1.1 Welcome to CS 341: Algorithms, Fall 2008
I’m Anna Lubiw, I’ve been in this department/school quite some time. This term I’m teaching both sections of
CS 341. I find the earlier lecture is better though, which may be counterintuitive.
There are fewer assignments this term. There are also fewer grad TAs, so the assignments may be shorter (but quite likely, not any easier!)
The textbook is CLRS: $140 in the bookstore, and on reserve in the library.

1.2 Marking Scheme


25% Midterm
40% Final exam
35% Assignments
We have due dates for assignments already (see the website.) Unlike in 2nd year courses where ISG keeps everything
coordinated, in third year we’re on our own.

1.3 Course Outline


Where does this word come from? From the name of al-Khwārizmī, a mathematician/scientist (not sure what to call him back then) writing in Arabic around 800 AD. Originally, "algorithms" meant his procedures for arithmetic.
In this course, we’re looking for the best algorithmic solutions to problems. Several aspects:

1. How to design algorithms


i.e. what shortest-path algorithm to use for street-level walking directions.

(a) Greedy algorithms


(b) Divide and Conquer
(c) Dynamic Programming
(d) Reductions

2. Basic Algorithms (often domain specific)


Anyone educated in algorithms needs to have a general repertoire of algorithms to apply in solving new
problems

(a) Sorting (from first year)


(b) String Matching (CS 240)

3. How to analyze algorithms


i.e. do we run it on examples, or try a more theoretical approach

(a) How good is an algorithm?


(b) Time, space, goodness (of an approximation)

4. You are expected to know

(a) O notation, worst case/avg. case


(b) Models of computation


5. Lower Bounds
This is not a course on complexity theory, which is where people really get excited about lower bounds, but
you need to know something about this.

(a) Do we have the best algorithm?


(b) Models of computation become crucial here.
(c) NP-completeness (how many of you have secret ambitions to solve this? I started off wanting to solve
it, before it was known it was so hard...)

1.4 A Case Study (Convex Hull)


To bound a set of points in 2D space, we can find the max/min X,Y values and make a box that contains all the
points. A convex hull is the smallest convex shape containing the points (think the smallest set of points that we
can connect in a ring that contains all the other points.) Analogy: putting an elastic band around the points, or
in three dimensions putting shrink-wrap around the points.
Why? This is a basic computational geometry problem. The convex hull gives an approximation to the shape of
a set of points better than a minimum bounding box. Arises when digitizing sculptures in 3D, or maybe while
doing OCR character recognition in 2D.

1.4.1 Algorithm
Definition (better from an algorithmic point of view):
The convex hull is a polygon whose sides lie on lines ℓ that pass through at least two of the points and have no points on one side (i.e. all the other points lie on one side of ℓ).
A straightforward algorithm (sometimes called a brute-force algorithm, but that gives them a bad name, because oftentimes the straightforward algorithms are the way to go): for all pairs of points r, s, find the line through r and s, and if all other points lie on one side only, then the segment rs is part of the convex hull.
Time for n points: O(n^3).
Aside: even with this there are good and bad ways to "see which side points are on." Computing slopes is actually a bad way to do this. Exercise: for r, s, and p, how to do it in the fewest steps, avoiding underflow/overflow/division (see the sketch below).
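One standard way to do this (a Python sketch; my filling-in, not necessarily the intended solution) is the sign of a cross product, which uses only subtractions and multiplications – no slopes, no division:

def side(r, s, p):
    # Sign of the cross product (s - r) x (p - r):
    # > 0 if p is left of the directed line through r and s,
    # < 0 if p is right of it, 0 if the three points are collinear.
    (rx, ry), (sx, sy), (px, py) = r, s, p
    return (sx - rx) * (py - ry) - (sy - ry) * (px - rx)

# rs lies on the hull iff side(r, s, p) has the same sign for all other p
print(side((0, 0), (2, 0), (1, 1)))   # 2 > 0: (1,1) is left of r -> s

With integer coordinates this test is exact; with floating point, overflow and rounding still need thought, which is the point of the exercise.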
Improvement: Given one line ℓ, there is a natural "next" line. Rotate ℓ through s until it hits the next point t.

[Figure: line ℓ through r and s rotating about s to the next point t, giving ℓ′]

t is an "extreme point" (minimum angle α). Finding it is like finding a max (or min) – O(n). Time for n points: O(n^2).
Actually, if h = the number of points on the convex hull, the algorithm takes O(n · h).
Can we do even better? (you bet!)
Repeatedly finding a min/max (which should remind you of sorting.)
Example Sort the points by x coordinate, and then find the ”upper convex hull” and ”lower convex hull” (each of
which comes in sorted order.)
The sorting will cost O(n log n) but the second step is just linear. We don’t quite have a linear algorithm here but
this will be much better. Process from left to right, adding points and each time figuring out whether you need to


go ”up” or ”down” from each point.


This is a case of using a reduction (which we will study a lot in this course)
Time for n points: O(n log n).
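A Python sketch of this sort-then-scan idea (this is the standard "monotone chain" formulation; the details are my filling-in, not verbatim from lecture):

def cross(o, a, b):
    # > 0 for a left turn o -> a -> b, < 0 for a right turn, 0 if collinear
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))              # sort by x (then y): O(n log n)
    if len(pts) <= 2:
        return pts
    def half_hull(seq):
        hull = []
        for p in seq:
            # pop while the last two kept points and p fail to make a left turn
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull
    lower = half_hull(pts)                 # lower hull, left to right
    upper = half_hull(reversed(pts))       # upper hull, right to left
    return lower[:-1] + upper[:-1]         # drop the duplicated endpoints

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]: the interior point (1, 1) is gone

Each scan is linear, so after the sort the remaining work is O(n), matching the claim above.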
One more algorithm
Will not be better than O(n log n). Why not? We’ll show soon, but intuition is that we’ll have to sort the points
somehow. In three-dimensional space you can still get O(n log n) algorithms for this, but not the same way. This
answer uses divide and conquer.

[Figure: two recursively computed hulls joined by an upper bridge and a lower bridge]

1. Divide points in half by vertical line.

2. Recursively find convex hull on each side.

3. Combine by finding upper and lower bridges.

Starting from the edge e joining the point of maximum x coordinate on the left to the point of minimum x coordinate on the right, "walk up" to get the upper bridge, and "walk down" to get the lower bridge.
This will be O(n) to divide, and O(n) to find the upper/lower bridges. We get the recurrence relation

T(n) = 2T(n/2) + O(n)

This is the same as e.g. merge-sort. It comes out to O(n log n).
Never Any Better? Finally, let's talk ever-so-slightly about whether we can beat O(n log n). In some sense, no: if we could find a convex hull faster, we could sort faster.
Technique: put the points on a parabola (or alternatively some other convex curve) with the map x ↦ (x, x^2) and compute the convex hull of these points. From there, recover the sorted order. This is an intuitive argument. To be rigorous, we need to specify the model of computation: we need a restricted model to say that sorting is Ω(n log n), but the reduction needs the power of indirect addressing. (Don't worry if that seems fuzzy. The take-home message is that to be precise we need to spend more time on models of computation.)
Measuring in terms of n, the input size, and h, the output size. We saw an O(n log n) algorithm, an O(n × h)
algorithm. Which is better? Well, depends on whether h > log n or not.
One paper, called "The ultimate convex hull algorithm?" (with a question mark in the name – very unusual), gave an algorithm that's O(n log h).
Challenge Look up the O(n log h) algorithm by Timothy Chan (here in SCS) and try to understand it.


2 Sep 11th, 2008


Missing.

3 Sep 16th, 2008


Assignment 1 is available online.

3.1 Example: Making change


Example: making change. Suppose you want to pay $3.47 in as few coins as possible. This takes seven coins, and I claim this is the minimum number of coins. On the assignment you must prove this is in fact true.

3.2 Example: Scheduling time


Interval scheduling, or ”activity selection.” The goal is to maximize the number of activities we can perform.
Given activities, each with an associated time interval, pick non-overlapping activities.
Greedy Approaches

• Pick the first activity – NO

• Pick the shortest activity – NO

• Pick the one with the fewest overlaps – NO

• Pick the one that ends earliest – YES

We can write the algorithm as

sort activities 1..n by finish time
A <- empty set
for i = 1 .. n
    if activity i doesn't overlap any activities in A
        A <- A union { i }
end

This looks like an O(n log n) algorithm: it takes that long to sort, and then O(n) after that, since once candidates come in finish-time order it suffices to check overlap against the most recently added activity in A. A runnable version follows below.
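A Python sketch (representing each activity as a (start, finish) pair is my choice, with touching intervals counted as non-overlapping):

def max_activities(intervals):
    # Greedy: repeatedly take the activity that ends earliest among
    # those that don't overlap what we've already picked.
    A = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        # candidates arrive in finish-time order, so comparing against
        # the last chosen activity is the whole overlap test
        if start >= last_finish:
            A.append((start, finish))
            last_finish = finish
    return A

print(max_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]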

Correctness Proof
There are three approaches to proving correctness of greedy algorithms.

• Greedy does better at each step.

• Exchange: any optimal solution can be transformed into the greedy solution without making it worse.

• Matroids (a formalization of when greedy approaches work) (in C&O)


Theorem This algorithm returns a maximum size set A of non-overlapping intervals.


Proof: Let A = {a_1, …, a_k}, ordered by finish time (i.e. in the order the greedy algorithm chooses them). Let B = {b_1, …, b_l} be any other set of non-overlapping intervals, ordered by finish time.
We want to show l ≤ k. Suppose that l > k and show that the greedy algorithm would not have stopped at a_k.
Claim: a_1, …, a_i, b_{i+1}, …, b_l is also a solution.
Proof: By induction on i. Base case: i = 0, and b_1, b_2, …, b_l is a solution. Inductive case: a_1, …, a_{i−1}, b_i, …, b_l is a solution; prove that a_1, …, a_i, b_{i+1}, …, b_l is a solution, i.e. we're swapping b_i out and a_i in.
Well, b_i does not overlap a_{i−1} by assumption. So when we chose a_i, b_i was a candidate, and we chose a_i. So finish(a_i) ≤ finish(b_i), therefore a_i doesn't overlap b_{i+1}, …, b_l, so the swap is OK.
Exercise: go through the picture.
That proves the claim. To prove the theorem: if l > k, then by the claim a_1, …, a_k, b_{k+1}, …, b_l is a solution. But then the greedy algorithm would not have stopped at a_k.
Therefore l ≤ k and greedy gives the optimal solution.

3.3 Example: Knapsack problem


We have items 1, …, n. Item i has weight w_i and value v_i. There is a weight limit W for the knapsack. Pick items of total weight ≤ W, maximizing the sum of the values.
There are two versions:

• 0-1 Knapsack: the items are indivisible (e.g. tent)

• Fractional: items are divisible (e.g. oatmeal)

We’ll look at 0-1 Knapsack later (since it’s harder) (and when we study dynamic programming)
So imagine we have a table of items:

Item   Weight w_i   Value v_i
1      6            12
2      4            7
3      4            6

W = 8. Greedy by v_i/w_i. For the 0-1 knapsack:

• Greedy picks item 1 (best ratio), for value 12 – and then neither remaining item fits.

• The optimal solution is items 2 and 3: weight 8, value 13.

For the fractional case:

• Take all of item 1 and half of item 2, for value 12 + 3.5 = 15.5.

Greedy Algorithm
Order items 1, …, n by decreasing v_i/w_i. Let x_i be the amount (weight) of item i that we choose.

free-W <- W
for i = 1..n
    x_i <- min{ w_i, free-W }
    free-W <- free-W - x_i
end


Then Σ_i x_i = W (assuming W < Σ_i w_i), and the value we get is

Σ_{i=1}^{n} x_i (v_i / w_i)

Note: the solution looks almost like a 0-1 solution; the only item we take fractionally is the last one chosen.
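A Python sketch of this greedy algorithm, including the value computation (the (weight, value) pair representation is my assumption):

def fractional_knapsack(items, W):
    # items: list of (weight, value) pairs; returns (total value, choices)
    free_W = W
    total = 0.0
    x = []                                   # (amount taken, item)
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        amount = min(w, free_W)              # x_i <- min{ w_i, free-W }
        free_W -= amount
        total += amount * (v / w)            # value accrues at rate v_i/w_i
        x.append((amount, (w, v)))
        if free_W == 0:
            break
    return total, x

value, choices = fractional_knapsack([(6, 12), (4, 7), (4, 6)], W=8)
print(value)   # 15.5: all of item 1, half of item 2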
Claim: The greedy algorithm gives the optimal solution to the fractional knapsack problem.
Proof: Greedy uses x_1, …, x_n and the optimal solution uses y_1, …, y_n. Let k be the minimum index with x_k ≠ y_k. Then y_k < x_k (because greedy took the maximum possible x_k). Since Σ x_i = Σ y_i = W, there exists an index l > k such that y_l > x_l. Idea: swap excess weight from item l onto item k.
Set y′_k ← y_k + ∆ and y′_l ← y_l − ∆, where ∆ ← min{ y_l, w_k − y_k }; both terms are greater than zero. The sum of the weights is still Σ y′_i = W, and the change in value is

+∆(v_k/w_k) − ∆(v_l/w_l) = ∆(v_k/w_k − v_l/w_l) > 0

because v_k/w_k > v_l/w_l when k < l (items are ordered by decreasing ratio). Thus y′ is an even better solution, and the assumption that the optimum beats greedy fails.

4 Sep 18, 2008: MISSING

5 Sep 23, 2008: Divide and Conquer


I started with Greedy because it’s fun to get to some interesting algorithms right away. Divide and conquer however
is likely the one you’re most familiar with. Sorting and searching are often divide-and-conquer algorithms.

The steps are:

• Divide – break problem into smaller subproblems

• Recurse – solve smaller sets of problems

• Conquer/Combine – ”put together” solutions from smaller subproblems

Some examples are:

• Binary search

– Divide: Pick the middle item
– Recurse: Search in one side only, a single subproblem of size n/2
– Conquer: No work
– Recurrence relation: T(n) = T(n/2) + 1, or more formally T(n) = max{T(⌊n/2⌋), T(⌈n/2⌉)} + 1
– Time: T(n) ∈ O(log n)

• Merge sort

– Divide: basically nothing
– Recurse: Two subproblems of size n/2
– Conquer: n − 1 comparisons
– Recurrence: T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + (n − 1), with T(1) = 0 comparisons
– Time: T(n) ∈ O(n log n)

5.1 Solving Recurrence Relations


Three approaches, all of which are in CLRS.

5.1.1 "Unrolling" a recurrence


Use

T(n) = 2T(n/2) + n − 1 for n even,  T(1) = 0

So for n a power of 2,

T(n) = 2T(n/2) + n − 1
     = 2[2T(n/4) + n/2 − 1] + n − 1
     = 4T(n/4) + 2n − 3
     ⋮
     = 2^i T(n/2^i) + i·n − (2^i − 1)    [the last term is Σ_{j=0}^{i−1} 2^j]

We want n/2^k = 1, i.e. 2^k = n, k = log n. Then

T(n) = 2^k T(n/2^k) + k·n − (2^k − 1)
     = n T(1) + n log n − n + 1
     = n log n − n + 1 ∈ O(n log n)
If our goal is to say that mergesort takes O(n log n) for all n (as opposed to exactly computing T(n)), then we can just add that T(n) ≤ T(n′), where n′ = the smallest power of 2 bigger than n.

If we really did want to compute T(n) exactly, then

T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + n − 1,  T(1) = 0

and the exact solution is

T(n) = n⌈log n⌉ − 2^{⌈log n⌉} + 1


5.1.2 Guess an answer, prove by induction


Again, for the mergesort recurrence, prove that

T(n) ∈ O(n log n)


Be careful: prove by induction that T(n) ≤ c·n log n for some constant c. Often you don't know c until you're working on the problem.
A good trick for avoiding ⌊·⌋ and ⌈·⌉ is to deal separately with n even and n odd.
For n even,

T(n) = 2T(n/2) + n − 1
     ≤ 2[c(n/2) log(n/2)] + n − 1    (by induction)
     = cn(log n − log 2) + n − 1
     = cn log n − cn + n − 1
     ≤ cn log n    if c ≥ 1

I’ll leave the details as an exercise (we need a base case, and need to do the case of n odd) for those of you for
whom this is not entirely intuitive.

Another example:

T(n) = 2T(n/2) + n

Claim: T(n) ∈ O(n). Prove T(n) ≤ cn for some constant c.

Assume by inductive hypothesis that T(n′) ≤ cn′ for n′ < n.

Inductive step:

T(n) = 2T(n/2) + n ≤ 2c(n/2) + n = (c + 1)n

Wait, constants aren't supposed to grow like c + 1 above. This proof is fallacious. Please do not make this kind of mistake on your assignments.

Example 2:

T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 1,  T(1) = 1

Let's guess T(n) ∈ O(n). Prove by induction that T(n) ≤ cn for some c.


Induction step:

T(n) ≤ c⌊n/2⌋ + c⌈n/2⌉ + 1 = cn + 1 – we've got trouble from that +1.

Let's try unrolling for n a power of 2:

T(n) = 2T(n/2) + 1
     = 4T(n/4) + 2 + 1
     ⋮
     = 2^k T(n/2^k) + Σ_{i=0}^{k−1} 2^i    (n = 2^k)
     = n T(1) + 2^k − 1
     = 2n − 1

So try proving by induction that

T(n) ≤ cn − 1

In that case we have

T(n) ≤ (c⌊n/2⌋ − 1) + (c⌈n/2⌉ − 1) + 1 = cn − 1

This matches perfectly.

Message: Sometimes we need to strengthen the inductive hypothesis and lower the bound.

5.1.3 Changing Variables


Suppose we have a mystery algorithm with recurrence

T(n) = 2T(⌊√n⌋) + log n    (and ignore the ⌊·⌋)

Substitute m = log n, n = 2^m, and we have

T(2^m) = 2T(2^{m/2}) + m

Let S(m) = T(2^m); then S(m) = 2S(m/2) + m. We can say

S(m) ∈ O(m log m)
T(2^m) ∈ O(m log m)
T(n) ∈ O(log n · log log n)


5.1.4 Master Theorem


From MATH 239 you know homogeneous linear recurrences, T(n) − a_1 T(n−1) − a_2 T(n−2) − … = 0; they are "homogeneous" because the right-hand side is zero. That never happens in algorithms (because we always have some work to do!)

We need recurrences of the form

T(n) = aT(n/b) + c·n^k

The more general case, with an arbitrary f(n) in place of c·n^k, is handled in the textbook. We'll first look at k = 1:

T(n) = aT(n/b) + cn

Results (exact) are:

a = b:  T(n) ∈ Θ(n log n)
a < b:  T(n) ∈ Θ(n)
a > b:  T(n) ∈ Θ(n^{log_b a})  – the final term dominates n log n

Theorem: If T(n) = aT(n/b) + cn^k with a ≥ 1, b > 1, c > 0, k ≥ 1, then

T(n) ∈ Θ(n^k)           if a < b^k
T(n) ∈ Θ(n^k log n)     if a = b^k
T(n) ∈ Θ(n^{log_b a})   if a > b^k

We’re not going to do a rigorous proof but we’ll do enough to give you some intuition. We’ll use unrolling. The
rigorous way is through induction.

T(n) = aT(n/b) + cn^k
     = a[aT(n/b^2) + c(n/b)^k] + cn^k
     = a^2 T(n/b^2) + ac(n/b)^k + cn^k
     = a^3 T(n/b^3) + a^2 c(n/b^2)^k + ac(n/b)^k + cn^k
     ⋮
     = a^t T(1) + Σ_{i=0}^{log_b n − 1} a^i c (n/b^i)^k
     = n^{log_b a} T(1) + cn^k Σ_{i=0}^{log_b n − 1} (a/b^k)^i

using n = b^t, t = log_b n, and a^{log_b n} = n^{log_b a}. It comes out exactly like that sum in your assignment.
Just to wrap up: if a < b^k, i.e. log_b a < k, the sum is constant and n^k dominates. If a = b^k the sum is log_b n and we get Θ(n^k log n). The third case is a > b^k, and then n^{log_b a} dominates.


6 Sep 25, 2008


6.1 Assignment Info
Assignment 1 is due Friday at 5PM in the assignment boxes.
Q5. US = UC.
Q2a. In CS 240 we learned to take the log of n + 1. "How is the number of bits going to grow" is a much nicer angle. There is a reason that √n and ⌈√n⌉ are in the list.
Q3 (e), (f). See the newsgroup and website. D(i, j, l) is the shortest path length from i to j using at most l edges, but the formula is for exactly l edges. Either assumption is fine; state clearly which one you are using. The same issue arises in (e), but if you use "exactly" you may find that you don't save anything. Use "at most" if you haven't started.
So we aren’t planning on marking every question. We will provide solutions for everything, however. The unmarked
questions are likely to appear on midterms or finals.
Q4. If you want examples of coin systems, go look around the Internet. Don’t get your proof from the Internet,
but examples of systems is fine.
Q5. How efficient? Well, you probably have to sort, so you probably won’t get better than O(n log n). Try to beat
O(n2 ).
Q4,Q5,Q6 are counterexample and a proof.
Please just come to office hours instead of asking too many questions over e-mail.

6.2 Divide & Conquer Algorithms


6.2.1 Counting Inversions
Comparing two people’s rankings of n items – books, music, etc. Useful for web sites giving recommendations
based on similar preferences.
Suppose my ranking is BDCA, and yours is ADBC from best to worst. We’d like a measure of how similar these
lists are. We can count inversions: on how many pairs do we disagree? Here there are four pairs where we disagree:
BD, BA, DA, CA and two where we agree: BC, DC.

Equivalently, we can say: given a_1, a_2, …, a_n, a permutation of 1…n, count the number of inversions, i.e. the number of pairs a_i, a_j with i < j but a_i > a_j.
Brute Force: Check all (n choose 2) pairs, taking O(n^2).
Divide & Conquer: Divide the list in half, with m = ⌊n/2⌋.

A = a_1 … a_m
B = a_{m+1} … a_n

Recursively count:

r_A = # of inversions in A
r_B = # of inversions in B

The final answer is r_A + r_B + r, where r = the number of inversions a_i, a_j with i ≤ m, j ≥ m + 1, and a_i > a_j.

For each j = m+1 … n, let r_j = # of such pairs involving a_j. Then r = Σ_{j=m+1}^{n} r_j.
Strengthen the recursion: sort the list, too. If A and B are sorted, we can compute the r_j's during the merge.


Sort-and-Count(L): returns sorted L and # of inversions

Split L into halves A and B
(r_A, A) <- Sort-and-Count(A)
(r_B, B) <- Sort-and-Count(B)
r <- 0
merge A and B
    when an element is moved from B to the output list
        r <- r + # of elements left in A
end
return (r_A + r_B + r, the merged list)

Runtime:

T(n) = 2T(n/2) + O(n)
Since it’s the same as mergesort, we get O(n log n). Can we do better?
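A Python sketch of Sort-and-Count (merging into a new list rather than in place):

def sort_and_count(L):
    # Returns (# of inversions in L, sorted copy of L).
    if len(L) <= 1:
        return 0, L
    mid = len(L) // 2
    r_A, A = sort_and_count(L[:mid])
    r_B, B = sort_and_count(L[mid:])
    r, merged, i, j = 0, [], 0, 0
    while i < len(A) or j < len(B):
        if j == len(B) or (i < len(A) and A[i] <= B[j]):
            merged.append(A[i]); i += 1
        else:
            r += len(A) - i               # element moved from B inverts
            merged.append(B[j]); j += 1   # with everything left in A
    return r_A + r_B + r, merged

print(sort_and_count([2, 4, 3, 1])[0])   # 4: (2,1), (4,3), (4,1), (3,1)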

6.2.2 Multiplying Large Numbers


The school method:

981
1234
------
3924
2943
1962
981
-------
1210554

O(n^2) for two n-digit numbers (one step is × or + on two digits).
There is a faster way using divide-and-conquer. First pad 981 to 0981:

09 81 × 12 34

Then calculate:

09 × 12 = 108    (shifted by 10^4)
09 × 34 = 306    (shifted by 10^2)
81 × 12 = 972    (shifted by 10^2)
81 × 34 = 2754   (shifted by 10^0)

Total: 108·10^4 + (306 + 972)·10^2 + 2754 = 1210554
The runtime here is

T(n) = 4T(n/2) + O(n)

Apply the Master Method:


T(n) = aT(n/b) + cn^k

Here a = 4, b = 2, k = 1. Compare a with b^k: we see a = 4 > b^k = 2, so the runtime is Θ(n^{log_b a}) = Θ(n^2).
So far we have not made progress!

We can get by with fewer than four multiplications.

(10^2 w + x) × (10^2 y + z) = 10^4 wy + 10^2 (wz + xy) + xz

Note we need wz + xy, not the two terms individually.

Look at

(w + x)(y + z) = wy + wz + xy + xz

We know wy and xz, but we want wz + xy. This leads to:

p = wy = 09 × 12 = 108
q = xz = 81 × 34 = 2754
r = (w + x)(y + z) = 90 × 46 = 4140    [that's (09 + 81) × (12 + 34)]

Answer: 10^4 p + 10^2 (r − p − q) + q

108____
1278__
2754
-------
1210554

We can apply this as the basis for a recursive algorithm. We'll get

T(n) = 3T(n/2) + O(n)

From the master theorem, now we have a = 3, b = 2, k = 1, and since a > b^k,

Θ(n^{log_b a}) = Θ(n^{log_2 3}) ≈ Θ(n^{1.585})
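A Python sketch of the resulting recursive algorithm (splitting on decimal digits; the single-digit base case is arbitrary, and in practice you would stop at the hardware word):

def karatsuba(x, y):
    # Multiply non-negative integers with 3 recursive multiplications.
    if x < 10 or y < 10:
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    base = 10 ** half
    w, x_lo = divmod(x, base)          # x = w * base + x_lo
    y_hi, z = divmod(y, base)          # y = y_hi * base + z
    p = karatsuba(w, y_hi)             # p = wy
    q = karatsuba(x_lo, z)             # q = xz
    r = karatsuba(w + x_lo, y_hi + z)  # r = wy + (wz + xy) + xz
    return p * base * base + (r - p - q) * base + q

print(karatsuba(981, 1234))   # 1210554, as in the worked example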


Practical Issues

• What if n is odd?

• What about two numbers with different digit counts?

• How small do you let the recursion get? (Answer: hardware word)

• What about different bases?

• When is this algorithm useful? (For about 1,000 digits or fewer, don't use it [BB])

  – Schönhage–Strassen is better for very large numbers; it runs in O(n log n log log n)


7 Sep 30, 2008


Assignment 2 is available.

7.1 D&C: Multiplying Matrices:


Multiplying two square matrices. The basic method takes O(n^3) (and in some sense O(n^2) is the best you could hope for, since you need to write n^2 numbers in the result!)

Basic D&C
Divide each matrix into four n/2 × n/2 blocks:

( A B ) ( E F )   ( I J )
( C D ) ( G H ) = ( K L )

with I = AE + BG, etc. Each of the four output blocks takes 2 subproblems, plus O(n^2) additions in total.

T(n) = 8T(n/2) + O(n^2)

By the master theorem, a = 8, b = 2, k = 2, and a = 8 > b^k = 4 (the case where the recursive work dominates), so T(n) ∈ Θ(n^{log_b a}) = Θ(n^3).

Strassen's Algorithm shows how to get by with just seven (a = 7) subproblems. We're not discussing it here, but if you're curious it's in the textbook. This gives

T(n) = 7T(n/2) + O(n^2)

which is Θ(n^{log_2 7}) ≈ O(n^{2.81}). There are more complicated algorithms that get even better results (only for very large n, however).

7.2 D&C: Closest pair of points


Divide and Conquer is very useful for geometric problems. For example, given n points in a plane, select the
closest two by Euclidean distance. (There are other measures, including the ”Manhattan distance” which is the
distance assuming you can’t cross city blocks.)
Generally, we assume that arithmetic is unit cost. For this problem we don’t need to make that assumption.

In one dimension, consider {10, 5, 17, 100}. How would we do this? Sort and compare adjacent numbers.

In a plane, we can use brute force, and that's O(n^2). What about:

• Sorting by position on one axis – Nope!

What's the way?

(1) Divide points into left/right at the median x coordinate. Most efficient to sort once by x coordinate. Then
we can find a line L in O(1) time.


(2) Recurse on Q and R:

    δ = min { closest pair distance in Q, closest pair distance in R }

(3) The solution is the minimum of δ and the distance of the closest pair crossing L.

    We need to find pairs q ∈ Q, r ∈ R with d(q, r) < δ.
    Claim: If q ∈ Q, r ∈ R and d(q, r) < δ, then d(q, L) < δ and d(r, L) < δ (i.e. q and r lie in the strip of width 2δ around L).
    Proof: Otherwise, say q is outside the strip; then d(q, r) ≥ the x-distance from q to r ≥ δ.
    Now let S be the points in the strip of width 2δ. We can restrict our search to S. But S can be all the points!
    Our hope: if we sort S by y coordinate, then any pair q ∈ Q, r ∈ R with d(q, r) < δ are near each other in sorted order.
    Claim: A δ × δ square T left of L can have at most 4 points in it.
    Because every two points in T have distance ≥ δ, we can fit four points, but only at the four corners; therefore you can't fit five.
    Claim: If S is sorted by y coordinate, and q ∈ Q and r ∈ R with d(q, r) < δ, then they are at most seven positions apart in sorted order.

Total algorithm:

    – Sort by x
    – Sort by y
    – T(n) = 2T(n/2) + O(n) ∈ O(n log n)

More general problems – given n points, find closest neighbour of each one. This can be done in O(n log n) (not
obvious)

• Voronoi diagrams

• Delaunay triangulations

Used in mesh generation.

7.3 Hidden Surface Removal


(a baby version of it, at least.) Find ”upper envelope” of a set of n lines in O(n log n) by divide & conquer.

8 Oct 2nd, 2008


8.1 Dynamic Programming
Weighted Interval Scheduling. Recall, interval scheduling aka activity selection aka packing of intervals. Pick the
max. number of disjoint intervals.

Generalization – each interval i has a weight w(i). Pick disjoint intervals to maximize the sum of the weights.
What if we try to use Greedy?


• Pick maximum weight – fails


An even more general problem: given a graph G = (V, E) with weights on the vertices, pick a set of vertices, no two joined by an edge, maximizing the sum of their weights. (Make G with a vertex for each interval and an edge whenever two intervals overlap.)

A general idea: for interval (or vertex) i, either we use it or we don't. Let OPT(I) = a maximum-weight subset of non-overlapping intervals of I, and let W-OPT(I) = the sum of weights of the intervals in OPT(I).
If we don't use i, OPT(I) = OPT(I \ {i}).
If we use i, OPT(I) = {i} ∪ OPT(I′), where I′ = the set of intervals that don't overlap with i.
This leads to a recursive algorithm:
W-OPT(I) = max { W-OPT(I \ {i}), w(i) + W-OPT(I′) }

T(n) = 2T(n − 1) + O(1)

But this is exponential time. Essentially we are trying all possible subsets of n items – all 2^n of them.

For intervals (but not for the general graph problem) we can do better. Order intervals 1, …, n by their right endpoint.
If we choose interval n, then I′ = all intervals disjoint from n, which has the form 1, 2, …, j for some j.
W-OPT(1..n) = max ( W-OPT(1..n−1), w(n) + W-OPT(1..p(n)) )
where p(n) = the max index j such that interval j doesn't overlap n. More generally,
p(i) = the max index j < i such that interval j doesn't overlap i, and
W-OPT(1..i) = max ( W-OPT(1..i−1), w(i) + W-OPT(1..p(i)) )
This leads to an O(n) time algorithm (after sorting). Note: don't use recursion blindly – the same subproblem may be solved many times in your program.

Solution Use memoized recursion (see text.) OR, use an iterative approach.
Let’s look at an algorithm using the second approach.
Notation: M[i] = W-OPT(1 .. i)

M[0] = 0
for i = 1..n
    M[i] = max{ M[i-1], w(i) + M[p(i)] }
end
Runtime is O(n). What about computing p(i) for i = 1..n?
Sorting by right endpoint is O(n log n). To find the p(i), sort by left endpoint as well. Then, exercise: find all p(i), i = 1..n, in O(n) time.
So far this algorithm finds W-OPT but not OPT (i.e. the weight, not the actual set of items).
One possibility: enhance the above loop to keep the set OPT(1..i). The danger is that storing n sets of size n takes n^2 space.
One solution: first compute M as above. Then call OPT(n).
recursive fun OPT(i):
    if i = 0 return the empty set
    if M[i-1] >= w(i) + M[p(i)]
        then return OPT(i-1)
        else return { i } union OPT(p(i))
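Putting the pieces together in Python (a sketch; it computes the p(i) by binary search in O(n log n) rather than the O(n) exercise above, and treats intervals that share only an endpoint as non-overlapping):

import bisect

def weighted_intervals(intervals):
    # intervals: list of (start, finish, weight) triples
    ivs = sorted(intervals, key=lambda iv: iv[1])    # by right endpoint
    finishes = [f for (_, f, _) in ivs]
    n = len(ivs)
    # p[i]: # of intervals finishing by the time interval i starts
    p = [bisect.bisect_right(finishes, ivs[i][0]) for i in range(n)]
    M = [0] * (n + 1)
    for i in range(1, n + 1):
        w = ivs[i - 1][2]
        M[i] = max(M[i - 1], w + M[p[i - 1]])
    opt, i = [], n                                   # recover the set
    while i > 0:
        w = ivs[i - 1][2]
        if M[i - 1] >= w + M[p[i - 1]]:
            i -= 1                                   # interval i not used
        else:
            opt.append(ivs[i - 1])
            i = p[i - 1]
    return M[n], opt[::-1]

print(weighted_intervals([(0, 3, 3), (2, 5, 4), (4, 7, 3)]))
# (6, [(0, 3, 3), (4, 7, 3)])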


8.2 Second example: optimum binary search trees


Store values 1, …, n in the leaves of a binary tree (in order). Given the probability p_i of searching for i, build a binary search tree minimizing the expected search cost

Σ_{i=1}^{n} p_i · depth(i)

Note: In CS 240 you did dynamic binary search trees – insert, delete, and rebalancing to control depth.
This is different in that we have items and probabilities ahead of time.
The difference from Huffman coding (a similar problem) is that for Huffman codes, left-to-right order of leaves is
free.
The heart of dynamic programming for finding an optimum binary search tree: try all possible splits into 1..k and k+1..n.
Subproblem: ∀i, j find the optimum tree for i, i+1, …, j.

M[i, j] = min_{k=i..j−1} { M[i, k] + M[k+1, j] } + Σ_{t=i}^{j} p_t

(every node in the combined tree is one level deeper now, which the Σ p_t term accounts for). Exercise: work this out.

for i = 1..n
    M[i,i] = p_i
for r = 1..n-1
    for i = 1..n-r
        -- solve for M[i, i+r]
        best <- M[i,i] + M[i+1, i+r]
        for k = i+1..i+r-1
            temp <- M[i,k] + M[k+1, i+r]
            if temp < best, best <- temp
        end
        M[i,i+r] <- best + sum_{t=i}^{i+r} p_t
(better: precompute P[j] = sum_{t=1}^{j} p_t, then the sum is P[i+r] - P[i-1])

Runtime? O(n^3).

9 Oct 7th, 2008


Last day, we looked at weighted interval scheduling.
Today, we’ll look at matrix chain multiplication.
The problem: compute the product of n matrices M_1 × M_2 × … × M_n, where M_i is an α_{i−1} × α_i matrix. What is the best order in which to do the multiplications?
Think about this in terms of parenthesizing the matrices in your multiplication, i.e. we could calculate ((M_1 M_2)(M_3 M_4)) or (((M_1 M_2)M_3)M_4). The number of ways to build a binary tree on leaves 1…n satisfies

P_n = Σ_{i=1}^{n−1} P_i P_{n−i}

These are the Catalan numbers, which grow exponentially in n, so trying all orders is hopeless.
Solve subproblems instead:
m_{i,j} = the minimum number of scalar multiplications to compute M_i × … × M_j.

Let m_{i,i} = 0; for i < j, try all splits k: we break into the subproblems M_i…M_k times M_{k+1}…M_j, giving

m_{i,j} = min_{k=i..j−1} { m_{i,k} + m_{k+1,j} + α_{i−1} α_k α_j }
Algorithm pseudocode:

for i = 1..n
    m(i,i) = 0
end
for diff = 1..n-1
    for i = 1..n-diff
        j <- i + diff
        m(i,j) <- infinity
        for k = i..j-1
            temp <- m(i,k) + m(k+1,j) + alpha_{i-1} alpha_k alpha_j
            if temp < m(i,j)
                m(i,j) <- temp
        end
    end
end

The runtime is O(n^3): there are O(n^2) subproblems of O(n) work each. The final answer is m(1, n); exercise: also record the best k for each subproblem to recover the actual parenthesization.
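A Python sketch of the same algorithm (0-indexed lists, with the dimensions given as α_0..α_n):

def matrix_chain(alpha):
    # alpha[0..n]: matrix M_i is alpha[i-1] x alpha[i] (matrices 1-indexed)
    n = len(alpha) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for diff in range(1, n):
        for i in range(1, n - diff + 1):
            j = i + diff
            m[i][j] = float("inf")
            for k in range(i, j):        # split (M_i..M_k)(M_{k+1}..M_j)
                temp = m[i][k] + m[k + 1][j] + alpha[i - 1] * alpha[k] * alpha[j]
                if temp < m[i][j]:
                    m[i][j] = temp
    return m[1][n]

# M1: 10x30, M2: 30x5, M3: 5x60 -- best is (M1 M2) M3 at 4500 mults
print(matrix_chain([10, 30, 5, 60]))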

9.1 Example 2: Minimum Weight Triangulation


Problem: Given a convex polygon with vertices 1 . . . n in clockwise order, divide into triangles by adding ”chords”
– segments from one vertex to another. No two chords are allowed to cross.
The goal is to minimize the lengths of chords we use. Picking the smallest chord does not work.
We will give a dynamic programming algorithm that will also work for non-convex shapes.
A more general problem is to triangulate a set of points. Find the minimum sum of lengths of edges to triangulate.
”Minimum triangulation.”
The dynamic programming approach for the convex polygon case: choosing one chord breaks the polygon down into two subpolygons.
Notice that a chord gives subpolygons on consecutive vertices, so we can get by looking just at subpolygons on vertices i, i+1, …, j.
The edge (1, n) lies in some triangle with third vertex k – try all choices of k. More generally, let m(i, j) = the minimum sum of edge lengths to triangulate the subpolygon on vertices i, i+1, …, j. Then

m(i, j) = min_{k=i+1,…,j−1} { m(i, k) + m(k, j) + ℓ(i, j) }    (ℓ(i, j) = the length of the chord)

Let's count the perimeter as well. This doesn't hurt our optimization and it makes the base cases easier.

Base cases:
m(i, i+2) = ℓ(i, i+1) + ℓ(i+1, i+2) + ℓ(i, i+2)
Note: we'd better add m(i, i+1) = ℓ(i, i+1). And then we don't actually need the case m(i, i+2) – it falls out of the general formula.
Algorithm:

initialize m(i,i+1) = l(i,i+1)
for diff = 2..n-1
    for i = 1..n-diff
        j <- i + diff
        m(i,j) <- infinity
        for k = i+1..j-1
            t <- m(i,k) + m(k,j) + l(i,j)
            if t < m(i,j) then
                m(i,j) <- t
        end
    end
end

Runtime O(n^3): an n × n table, O(n^2) subproblems, and O(n) to solve each one.

10 Oct 9th, 2008


Midterm (Mon Oct 20th): covers material up through today and a bit of next week’s material too.

10.1 Dynamic Programming


Key idea: Bottom-up method: identify subproblems and order them so that you're always relying on previously solved subproblems.
Example (Knapsack/Subset Sum)
Recall the knapsack problem: given items 1…n, where item i has weight w_i and value v_i, both ∈ ℕ, and W, the knapsack capacity, choose a subset S ⊆ {1, …, n} such that Σ_{i∈S} w_i ≤ W and Σ_{i∈S} v_i is maximized.
Recall fractional versus 0-1, and that a greedy algorithm works for the fractional case. For the 0-1 knapsack, no polynomial-time algorithm is known.
Note: the coin changing problem is similar to knapsack, but with multiple copies of each item.

Top-down: Item n is either IN S (leaving items 1…n−1 with capacity W − w_n) or OUT of S (items 1…n−1 with capacity W).

Subproblems are: for each i = 0…n and w = 0…W, find a subset S of items 1…i such that Σ_{j∈S} w_j ≤ w and Σ_{j∈S} v_j is maximized.
How to solve this subproblem?
If w_i > w, then OPT(i, w) ← OPT(i−1, w) (can't use item i); otherwise,

OPT(i, w) ← max { OPT(i−1, w)  (don't include i),  v_i + OPT(i−1, w − w_i)  (include i) }    (*)
Pseudo-code and ordering of subproblems:

store OPT(i,w) in matrix


M[i,w]
i=0..n w=0..W
initialize M[0,w] := 0 w = 0..W
for i=1..n
for w=0..W
compute M[i,w] with (*)
end
end
M[n,W] gives OPT value

EX: Find opt set S.
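A Python sketch of the table computation, including one standard traceback that recovers the set S (the traceback is the exercise; this is my filling-in):

def knapsack(weights, values, W):
    # Fills the table M[i][w] by (*) and traces back to recover S.
    n = len(weights)
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(W + 1):
            if wi > w:
                M[i][w] = M[i - 1][w]                 # can't use item i
            else:
                M[i][w] = max(M[i - 1][w],            # don't include i
                              vi + M[i - 1][w - wi])  # include i
    S, w = [], W
    for i in range(n, 0, -1):
        if M[i][w] != M[i - 1][w]:                    # item i was used
            S.append(i)
            w -= weights[i - 1]
    return M[n][W], sorted(S)

print(knapsack([6, 4, 4], [12, 7, 6], 8))   # (13, [2, 3]), as before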


[KT] has examples.


Runtime: n · W · c (outer loop, inner loop, constant work for (*)), i.e. O(n × W).
Is this good? Does it behave like a polynomial?
That depends on the size of the input. The input is v_1, …, v_n, w_1, …, w_n, and W. Note that w_i ≤ W – else throw out item i. So the size of w_1, …, w_n, W is ≤ (n + 1) log W, and the input size is O(n log W).
So the input size is O(n log W), but the running time is O(nW) = O(n · 2^k), where k is the number of bits of W.
Intuition why this is bad: let's say we have weights .001, .002, 10, and W = 100.
This algorithm is called "pseudo-polynomial" because the runtime is polynomial in the value of W, not the size (number of bits) of W.

10.2 Certain types of subproblems


• Input x1 , . . . , xn and subproblem x1 , . . . , xi . Number of subproblems is O(n).

• Input x1 , . . . , xn and subproblems xi , xi+1 , . . . , xj . Number of subproblems is O(n2 ).

• Input x_1, …, x_n and y_1, …, y_m, with subproblems x_1, …, x_i and y_1, …, y_j. Number of subproblems: O(n × m).

• Input is rooted tree (not necessarily binary) and subproblems are rooted subtrees.

Example: Longest ascending subsequence.

e.g. in 5, 3, 4, 1, 6, 2, a longest ascending subsequence is 3, 4, 6.
Given a_1, …, a_n, find a_{i_1} < a_{i_2} < … < a_{i_j} with i_1 < i_2 < … < i_j, maximizing j.
Can we use subproblems on a_1, …, a_i?
Let l_i = the length of the largest ascending subsequence ending with a_i.
Final answer: max l_i over i = 1..n.
Consider the 2nd-last item a_j: j < i and a_j < a_i. So

l_i = max{ 1 + l_j : j < i, a_j < a_i }    (or 1 if there is no such j)

O(n^2) algorithm: n subproblems, O(n) each.
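A Python sketch of this O(n^2) algorithm, with predecessor links added to recover an actual subsequence:

def longest_ascending(a):
    # l[i]: length of the longest ascending subsequence ending at a[i];
    # prev[i] remembers the 2nd-last index so we can recover a witness.
    n = len(a)
    l, prev = [1] * n, [-1] * n
    for i in range(n):
        for j in range(i):                    # try every 2nd-last item
            if a[j] < a[i] and 1 + l[j] > l[i]:
                l[i], prev[i] = 1 + l[j], j
    i = max(range(n), key=lambda t: l[t])     # answer: max l_i
    seq = []
    while i != -1:
        seq.append(a[i])
        i = prev[i]
    return seq[::-1]

print(longest_ascending([5, 3, 4, 1, 6, 2]))   # [3, 4, 6]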

10.3 Memoization
Using recursion (instead of the explicit bottom-up order of subproblems we have been using) has a danger: the same subproblem is solved over and over, e.g.
T(n) = 2T(n − 1) + O(1) – exponential!
Memoization: store the solution to each subproblem the first time it is solved. An added advantage: this saves time when we don't need solutions to all subproblems.

11 Oct 14th, 2008


Assignment 2 due Friday. Midterm on Mon Oct 20th, 7 PM. Alternate is during class time on Tuesday.

11.1 Graph Algorithms


A graph G = (V, E) has a finite set V of vertices and a set E ⊆ V × V of edges.

• Undirected graph, edge (u, v) = (v, u).

• Directed graph, order matters.

• No loops (i.e. no edge (u, u))


• No multiple edges.

We will use n or |V | for the number of vertices, and m or |E| for the number of edges.

• 0 ≤ m ≤ (n choose 2) = n(n−1)/2 undirected.

• 0 ≤ m ≤ n(n − 1) directed. Either way, m ∈ O(n^2).

What is a path? A sequence of vertices where every consecutive pair is joined by an edge, e.g. 3, 5, 4. A walk allows repetition of vertices and edges; a simple path does not.

If there is a walk from u to v then there is a simple path from u to v.

We say that an undirected graph G is connected if for every pair of vertices, there is a path joining them. For
testing if a graph is connected, we can use DFS or BFS.

For directed graphs there are different notions of connectivity. A graph can be strongly connected: ∀u, v ∈ V there is a directed path from u to v.

Cycle: a path from u to u.

Tree: A graph that is connected but has no cycles. Note: a tree on n vertices has n − 1 edges.

Storing a graph:

• Adjacency matrix: A(i, j) = 1 if there is an edge from i to j, else 0.

• Adjacency list: Vertices down the left, edge destinations in a list on the right.

Advantages and disadvantages?

• Space: n^2 for the matrix, 2m + n for the lists.

• Time to test e ∈ E: O(1) matrix; O(n), or O(log n) with sorted lists.

• Enumerating all edges: O(n^2) versus O(m + n).

We usually use adjacency lists – then we can (sometimes) get algorithms with runtime better than O(n2 ).

11.2 Minimum Spanning Trees


Problem: Given an undirected graph G = (V, E) and weights w : E → R_{≥0}, find a minimum-weight subset of edges that keeps everything connected, i.e. find E′ ⊆ E such that (V, E′) is connected and w(E′) = Σ_{e∈E′} w(e) is minimized.

Claim: E′ will be a tree. Else E′ has a cycle; throw away an edge of the cycle, which leaves a connected graph (if a path from a to b used edge (u, v), replace edge (u, v) with the rest of the cycle).

Almost any Greedy approach will succeed.

• Take a minimum weight edge that creates no cycle.


• Throw away a maximum-weight edge that doesn't disconnect the graph.

• Grow one connected component, always using the minimum-weight edge leaving it.

All of these are justified by one lemma:

Lemma Let V1 , V2 be a partition of V (into two disjoint non-empty sets with union V .) Let e be a minimum-weight
edge from V1 to V2 . Then there is a minimum spanning tree that includes e.

Stronger version: Let X be a set of edges contained in some minimum spanning tree, such that no edge of X goes from V_1 to V_2. Then some minimum spanning tree includes X ∪ {e}.

Proof: Let T be a minimum spanning tree (for the stronger version: one containing X). T has a path P that connects u and v, the endpoints of e. P must use some edge from V_1 to V_2 – say, f.
Let T′ = T ∪ {e} \ {f}: exchange e for f. Claim: T′ is it.
w(e) ≤ w(f), so w(T′) ≤ w(T). And T′ is a spanning tree: P ∪ {(u, v)} makes a cycle, so we can remove f and stay connected.
Note that T′ contains e and X (because f is not in X).
Kruskal's Algorithm:

• Order the edges by weight:

  w(e_1) ≤ w(e_2) ≤ … ≤ w(e_m)

T <- empty set
for i = 1..m
    if e_i does not make a cycle with T
        then T <- T union { e_i }
end

• We add e iff u and v are in different connected components.

• To test this efficiently we use the Union-Find data structure.

– Find(element) – find which set contains element.


– Union – unites two sets.

• Here each set is the vertex set of one connected component.

  – Add edge e = (u, v) iff Find(u) ≠ Find(v)

  – Adding e to T ⇒ Union the connected components of u and v

A simple Union-Find structure: store an array C(1…n), where C(i) is the name of the connected component containing vertex i. Union must rename one of the two sets; rename the smaller one. Then n Unions take O(n log n) total (in CS 466: how to reduce this).

Kruskal's Algorithm takes O(m log m) to sort plus O(n log n) for the Union-Find operations. And O(m log m) = O(m log n), since log m ≤ log n^2 = 2 log n.
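A Python sketch of Kruskal's algorithm with the simple array-based Union-Find just described (renaming the smaller component on each Union):

def kruskal(n, edges):
    # edges: list of (weight, u, v) on vertices 0..n-1.
    # C[i] names i's component; Union relabels the smaller component.
    C = list(range(n))
    members = {i: [i] for i in range(n)}
    T = []
    for w, u, v in sorted(edges):            # order edges by weight
        if C[u] != C[v]:                     # Find(u) != Find(v): no cycle
            T.append((u, v, w))
            a, b = C[u], C[v]
            if len(members[a]) < len(members[b]):
                a, b = b, a
            for x in members[b]:             # relabel the smaller side
                C[x] = a
            members[a].extend(members.pop(b))
    return T

print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))
# [(0, 1, 1), (1, 2, 2), (2, 3, 4)]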


12 Oct 16th, 2008


• Assignment 1 – out of 40.

– Solutions will be on website.


– Marking scheme is in the newsgroup.

• Assignment 2 – due tomorrow.

• Midterm – Monday – covers to the end of today.

• You are allowed one 8.5 × 11 sheet brought to the midterm. Doesn’t have to be hand-written either.

12.1 Graph Algorithms


Minimum Spanning Tree: Given an undirected graph G = (V, E) with weight function w : E → R^+, find a minimum-weight subset of edges E′ ⊆ E such that (V, E′) is connected.

Recall:

• Kruskal’s algorithm orders edges from minimum-maximum weight. Take each edge unless it forms a cycle
with previously chosen edges.

• Lemma: a cheapest edge connecting two groups of vertices can always be used – there is a minimum spanning tree containing it.

12.1.1 Prim’s Algorithm


Also a greedy algorithm; it builds a tree. General structure: let U be the vertices of the tree so far; initially U = {s}. While U ≠ V, find a minimum-weight edge e = {u, v} with u ∈ U and v ∈ V − U. Add e to T and v to U.

Correctness – from lemma last day.

Implementation: we need to (repeatedly) find a minimum-weight edge leaving U (as U changes). Let δ(U) be the set of edges from U to V − U. We want to find the minimum, insert, and delete. We need a priority queue – use a heap.

Exactly how does δ(U) change?

When we do U ← U ∪ {v}, any edge from U to v leaves δ(U). Any other edge incident with v enters δ(U).

For all x adjacent to v:

• if x ∈ U then remove edge (x, v) from the priority queue.

• else insert edge (x, v) into the PQ.

Recall that a heap provides O(log n) for insert and delete, and O(1) for finding a minimum.

For one vertex v, how many PQ inserts/deletes do we need?

• n in the worst case.

• deg(v) = # of edges incident with v.

Total number of PQ insert/delete operations over all vertices v (we hope for better than n × n):
every edge enters δ(U) once and leaves once, so 2m.
Alternatively, Σ_{v∈V} deg(v) = 2m.

Total time for the algorithm is O(n + m log m) = O(m log n), because m ≤ n^2 and so log m ≤ 2 log n. (Check first whether m < n − 1, and if so bail out: the graph can't be connected.)

Improvements

• Store vertices in the PQ instead of edges. Define W(v) = the minimum weight of an edge from U to v.
  When we do U ← U ∪ {v}, we must adjust the weights of some vertices. Gives O(m log n).

• Tweak the PQ to be a "Fibonacci heap," which gives O(1) for a weight change and O(log n) to extract the minimum. Gives O(n log n + m).

• Borůvka's Algorithm: another way to handle this case.

12.2 Shortest Paths


Shortest path from A to D: ABD weight 3 + 2 = 5, A to E: ABE with weight 4. (From diagram in class.)

General input: directed graph G = (V, E) with weights w : E → R. Allow negative weight edges, but disallow
negative weight cycles. (If we have a negative weight cycle, then repeating it potentially gives paths of −∞ weight.)

We might ask instead for the shortest simple path, but this is actually hard (its decision version is NP-complete).

Weight of path = sum of weights of edges.

Versions of shortest path problem:

1. Given u, v ∈ V , find a shortest path from u to v.

2. Given u ∈ V , find shortest paths to all other vertices. ”Single source shortest path problem”

3. Find a shortest u–v path ∀u, v – the "all pairs shortest path problem."

Solving 1 seems to involve solving 2.


Later: Dijkstra’s algorithm for 2. Like Prim’s algorithm. Build a shortest path tree from u

Dynamic Programming solution for problem 3.


Does the shortest u–v path go through x or not? If it does, split into: find a shortest path u–x and a shortest path x–v.
In what way are these subproblems smaller?

• They use fewer edges.


M [u, v, l] = min weight path from u to v using ≤ l edges.
n3 subproblems from l = 1 . . . n − 1.

• The paths u − x and x − v don’t use x as intermediate vertex.


13 Oct 21, 2008


13.1 All Pairs Shortest Path
Given a directed graph G = (V, E) with weights w : E → R, find shortest u − v paths from all u, v ∈ V .
In general, the weight of a path is the sum of weights of edges in path.

[Figure: a weighted directed graph on vertices A, B, C, D with edge weights −1, 5, 6, 2, 11]

e.g. w(ACD) = 8

Assume: no negative weight cycles. Otherwise, minimum length path can be ∞.

Use Dynamic Programming.

[Figure: a u → v path passing through an intermediate vertex x]

Main idea: try all intermediate vertices x. If we use x, we need a shortest u → x path and a shortest x → v path.
How are these subproblems simpler?

1. Fewer edges – gives an efficient dynamic program with M[u, v, ℓ] = the shortest u–v path using ≤ ℓ edges.
   However, we're not using this one: it gives the same runtime but uses more space.

2. The u − x and x − v paths do not use x as an intermediate vertex.


We’ll use this one.

Let V = {1, 2, …, n}, and let D_i[u, v] = the minimum length of a u → v path using intermediate vertices only from the set {1, …, i}. Solve subproblem D_i[u, v] for i = 0, 1, …, n.

Final answer: the matrix D_n[u, v]. Number of subproblems: O(n^3).

How do we initialize? D_0[u, v] = w(u, v) if (u, v) ∈ E, 0 if u = v, and ∞ otherwise.

Main formula:

D_i[u, v] = min{ D_{i−1}[u, v], D_{i−1}[u, i] + D_{i−1}[i, v] }


This leads to:

13.1.1 Floyd-Warshall Algorithm


Initialize D_0 as above


for i = 1..n
for u = 1..n
for v = 1..n
D_i[u,v] = as above in main formula
end
return D_n

Time is O(n^3). The space, however, is also O(n^3), which is extremely undesirable. Notice that to compute D_i we only use D_{i−1}, so we can throw away earlier matrices, bringing the space to O(n^2).
In fact, even better (although not in the degree of n), we can:

Initialize D as D_0

for i = 1..n
    for u = 1..n
        for v = 1..n
            D[u,v] = min { D[u,v], D[u,i] + D[i,v] }   (**)
        end
return D

Note: in the inner loop, D will be a mixture of Di and Di−1 , but this is correct because we don’t go below the
true min by doing this, but we correctly compute the main equation.
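A Python sketch of the O(n^2)-space version (it also fills in the successor matrix S described in the next paragraphs; the example edge weights are loosely based on the figure above):

def floyd_warshall(n, edges):
    # edges: {(u, v): w} on vertices 0..n-1. O(n^2) space, O(n^3) time.
    # S[u][v] = successor of u on a shortest u -> v path.
    INF = float("inf")
    D = [[0 if u == v else INF for v in range(n)] for u in range(n)]
    S = [[None] * n for _ in range(n)]
    for (u, v), w in edges.items():
        D[u][v], S[u][v] = w, v
    for i in range(n):                            # intermediate vertex
        for u in range(n):
            for v in range(n):
                if D[u][i] + D[i][v] < D[u][v]:   # the (**) update
                    D[u][v] = D[u][i] + D[i][v]
                    S[u][v] = S[u][i]
    return D, S

def path(S, u, v):                     # assumes a u -> v path exists
    out, x = [u], u
    while x != v:
        x = S[x][v]
        out.append(x)
    return out

D, S = floyd_warshall(4, {(0, 1): 5, (1, 2): -1, (0, 2): 6, (2, 3): 2, (0, 3): 11})
print(D[0][3], path(S, 0, 3))   # 6 [0, 1, 2, 3]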

How to find the actual shortest path?

• Compute H[u, v] = the highest-numbered vertex on a shortest u → v path.

Note: if we explicitly stored all n^2 paths, we'd be back to O(n^3) space – avoid this. Better:

• S[u, v] = the successor of u on a shortest u → v path.

Initialize S[u, v] = v if (u, v) ∈ E, and nil otherwise.
Modify (**) to become:

if D[u,i] + D[i,v] < D[u,v] then
    D[u,v] <- D[u,i] + D[i,v]
    S[u,v] <- S[u,i]
end
Once we have S with complete paths:

Path(u,v):
    x <- u
    while x != v
        output x
        x <- S[x,v]
    end
    output v

Exercise: Use this algorithm to test if a graph has a negative weight cycle.


14 Oct 23, 2008


Shortest Paths
Last day’s study was the all-pairs shortest path problem, whereas today’s is the single-source shortest path. Find
the shortest path from s to v ∀v.

• In the case with no negative weight edges, we can use Dijkstra’s Algorithm, which is O(m log n).

• With no directed cycles, O(n + m).

• With no negative weight cycles, O(n × m). (This is the most general – still faster than all pairs.)

14.1 Dijkstra’s Algorithm


Input: Directed graph G = (V, E) and weight function w : E → R≥0 and source vertex s.
Output: Shortest s → v path ∀v.

Idea: Grow a tree of shortest paths from s.

[Figure: the tree of shortest paths grown from s inside B, with an edge (x, y) leaving B]

General step: have shortest paths to all vertices in B. Initially, B = {s}. Choose the edge (x, y) where x ∈ B and
y ∈ V \ B that minimizes the following:

d(s, x) + w(x, y)
Call this minimum d:

• d(s, y) ← d

• Add (x, y) to shortest path tree parent(y) ← x

• B ← B ∪ {y}

This is greedy in the sense that y has the next minimum distance from s.

Claim: d = minimum distance from s to y.

Proof: The idea is that any s → y path π has this structure:

• it begins at s;

• π_1: the part of the path before leaving B, ending at some u ∈ B;

• (u, v): the first edge leaving B;

• π_2: the rest of the path (which may re-enter B).

So w(π) = w(π_1) + w(u, v) + w(π_2). Note that w(π_1) + w(u, v) ≥ d and w(π_2) ≥ 0 as edge weights are non-negative.
From the Claim, by induction on |B|, this algorithm finds shortest paths.

Implementation: Make a priority queue (heap) on vertices V \B using value D(v) for v ∈ V such that the minimum
value of D gives the wanted vertex.

D(v) = minimum weight path from s → v using a path in B plus one edge.

• Initialize:

  – D(v) ← ∞, ∀v
  – D(s) ← 0
  – B ← ∅

• While |B| < n:

  – y ← the vertex of V \ B of minimum D(y)
  – B ← B ∪ {y}
  – For each edge (y, z) where z ∈ V \ B:
    ∗ t ← D(y) + w(y, z)
    ∗ If t < D(z) then
      · D(z) ← t
      · parent(z) ← y

Store the D values in a heap. How many times are we extracting the minimum? n times at O(log n) time each.
The ”decrease D value” is done ≤ m times. (Same argument as for Prim.) Each decrease D operation is O(log n)
(done as insert-delete.) Total time is O(n log n + m log n) which is O(m log n) if m ≥ n − 1. Using a Fibonacci
Heap, we can decrease this to O(n log n + m).
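A Python sketch using the standard library heap; instead of a decrease-key operation it pushes a fresh entry and skips stale ones when popped, which still gives ≤ m pushes of O(log n) each, i.e. O(m log n) overall:

import heapq

def dijkstra(adj, s):
    # adj: {u: [(v, w), ...]} with w >= 0 for every edge.
    D = {u: float("inf") for u in adj}
    parent = {s: None}
    D[s] = 0
    B = set()                          # vertices whose distance is final
    pq = [(0, s)]
    while pq:
        d, y = heapq.heappop(pq)       # y: vertex of V \ B with min D
        if y in B:
            continue                   # stale entry, skip it
        B.add(y)
        for z, w in adj[y]:
            t = d + w
            if t < D[z]:
                D[z], parent[z] = t, y
                heapq.heappush(pq, (t, z))
    return D, parent

adj = {"s": [("a", 4), ("b", 1)], "b": [("a", 2)], "a": []}
print(dijkstra(adj, "s")[0])   # {'s': 0, 'a': 3, 'b': 1}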

14.2 Connectivity in Graphs


Testing connectivity, exploring a graph. Recall: Breadth First Search (BFS) and Depth First Search (DFS.)

[Figure: an example graph on vertices 1–8, used for the BFS/DFS orders below]

• BFS: 1,2,3,6,8,4,5,7 (1, adj to 1, adj to 2, etc.)

• DFS: 1,2,4,6,3,5,8,7


Either takes O(n + m). DFS is more useful.


We’ll talk about ”higher connectivity” – for networks, connected isn’t enough. We want connected even with a
few failures (vertices/edges.) What’s bad is a cut vertex – if it fails, the graph becomes disconnected.

We call a graph 2-connected if there are no cut vertices, and it decomposes into 2-connected components. A figure-eight graph made of two triangles (or squares) sharing a vertex has two 2-connected components: the triangles/squares. Similarly, 3-connected means we can remove any two vertices without breaking the graph into components.

By the way, Paul Seymour, a famous name in graph theory, is visiting UW this weekend, and he's speaking tomorrow at 3:30. He's also getting an honorary degree on Saturday at convocation.

14.2.1 Finding 2-connected components


We can use DFS to find cut vertices and 2-connected components in O(n + m) time.

[Figure: a DFS tree on vertices 2–7]

Solid edges are DFS edges, dotted edges are ”back edges.”

Claim: Every non-tree DFS edge goes from some u to an ancestor. e.g. we can’t have edge (5,7). This justifies
the term ”back edge.”

DFS Algorithm:

• Initialize:

  – mark(v) ← not visited, for all v
  – num ← 1
  – DFS(s)

• DFS(v), recursive:

  – mark(v) ← visited
  – DFSnum(v) ← num; num ← num + 1
  – for each edge (v, w)
    ∗ if mark(w) = not visited then
      · (v, w) is a tree edge
      · parent(w) ← v
      · DFS(w)
    ∗ else
      · if parent(v) ≠ w then: (v, w) is a back edge


What do cut vertices look like in a DFS tree?

• A leaf is never a cut vertex.

• The root is a cut vertex iff its number of children is ≥ 2.

Removing an arbitrary (non-root, non-leaf) vertex v of the tree, we have the subtrees T_1, …, T_i of v's children and T_0, the part of the tree connected from above. Are these connected in G \ v? It depends on back edges: if T_j has a back edge into T_0, then T_j stays connected to T_0. Otherwise, it falls away (and is disconnected).

We need one more thing: high(v) = highest (i.e. lowest DFS number) vertex reachable from v by going down tree
edges and then along one back edge.

Claim: v is a cut vertex iff it has a DFS child x such that high(x) ≥ DFSnum(v).

Modifying the DFS code: set high(v) ← DFSnum(v) when v is first visited; on finding a back edge (v, w), set
high(v) ← min { high(v), DFSnum(w) }; on returning from a child w, set high(v) ← min { high(v), high(w) }.
This is still O(n + m).
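Putting the pieces together, here is a sketch of the whole method in Python (my own rendering of the modified DFS above; the variable names are assumptions):

    import sys

    def cut_vertices(adj, s):
        # adj[v] = list of neighbours of v in a connected undirected graph.
        # Returns the set of cut vertices, using DFSnum/high as above.
        sys.setrecursionlimit(10 ** 6)
        num, high, parent = {}, {}, {s: None}
        cuts, counter = set(), [1]

        def dfs(v):
            num[v] = high[v] = counter[0]    # high(v) <- DFSnum(v)
            counter[0] += 1
            children = 0
            for w in adj[v]:
                if w not in num:             # (v, w) is a tree edge
                    parent[w] = v
                    children += 1
                    dfs(w)
                    high[v] = min(high[v], high[w])
                    # claim: non-root v is a cut vertex iff some child w
                    # has high(w) >= DFSnum(v)
                    if parent[v] is not None and high[w] >= num[v]:
                        cuts.add(v)
                elif w != parent[v]:         # (v, w) is a back edge
                    high[v] = min(high[v], num[w])
            if parent[v] is None and children >= 2:
                cuts.add(v)                  # root rule

        dfs(s)
        return cuts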

15 Oct 28th, 2008


Midterm: Think about it as out of 35. (In that case you got an 86%.)

Backtracking: A systematic way to try all possibilities. In the workplace, when you need to find an algorithm,
if you're extremely lucky it'll be one of the ones we encountered. More likely, it'll be similar to one we've
seen. But most likely, it'll be one nobody knows how to solve efficiently, because it's NP-complete. Backtracking is
useful for such hard (e.g. NP-complete) problems.

Options:

• Heuristic approach – run quickly, with no guarantee on the quality of the solution.

• Approximation algorithms – run quickly, but with a guarantee on the quality.

• Exact algorithm – and bear with the fact it (may) take a long time.

Note: to test (experimentally) a heuristic you need an exact algorithm.

15.1 Backtracking and Branch/Bound


Exact, exponential-time algorithms. Search in an implicit graph of partial solutions. General backtracking: we have
a configuration C consisting of the remaining subproblem to be solved, and the choices made to get to this subproblem.
e.g. knapsack: a configuration is the items selected so far and the items discarded so far, together with the capacity remaining.
e.g. trying all permutations of 1 . . . n: a configuration is the partial permutation built so far, and the remaining elements.

Backtracking Algorithm: F = set of active configurations. Initially, one configuration: the whole problem. While
F ≠ ∅, C ← remove a configuration from F , expand it into C1 , . . . , Ct . For each Ci , test for success (solves the whole
problem) and failure (dead end.) Otherwise, add Ci to F .

Storing F :


• Stack: DFS of configuration space


Size: height of tree

• Queue: BFS of configuration space


Size: width of tree

• Priority Queue: explore current best configuration

Usually, height << width, and we should use DFS.

e.g. exploring all subsets of {1, . . . , n}:

S = ∅, R = {1 . . . n}
├─ 1 in:  S = {1}, R = {2 . . . n}
│   ├─ 2 in:  S = {1, 2}, R = {3 . . . n}
│   └─ 2 out: S = {1}, R = {3 . . . n}
└─ 1 out: S = ∅, R = {2 . . . n}

Example: Subset Sum – Knapsack where each item's value equals its weight.

Given items 1 . . . n with weight wi for item i, and capacity W , find a subset S ⊆ {1, . . . , n} with Σi∈S wi ≤ W that
maximizes Σi∈S wi .

Decision Version – can we find S with Σi∈S wi = W ?
A polynomial time algorithm for this decision version gives poly time for the optimization version.

Backtracking for the decision version of Subset Sum:

• Configurations are as above (S so far, R remaining)


• w = Σi∈S wi , r = Σi∈R wi .

Fill in: success when w = W ; failure (of the configuration) when w > W or w + r < W .

Note: if F becomes empty and we haven’t found a solution, then no solution.

This is O(2^n). Earlier, we built a dynamic programming algorithm for Knapsack with O(n × W ) subproblems.
Which is better? It depends on W : e.g. if W has n bits then W ∼ 2^n and backtracking is better.
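A sketch of this backtracking scheme in Python (recursion serves as the stack, so it is the DFS variant):

    def subset_sum_decision(weights, W):
        # Configuration = (next item i, w = sum chosen so far,
        #                  r = sum of remaining undecided items).
        def solve(i, w, r):
            if w == W:
                return True                  # success
            if w > W or w + r < W:
                return False                 # failure: prune this configuration
            if i == len(weights):
                return False
            wi = weights[i]
            # branch: item i in, then item i out
            return solve(i + 1, w + wi, r - wi) or solve(i + 1, w, r - wi)

        return solve(0, 0, sum(weights))

    # e.g. subset_sum_decision([3, 34, 4, 12, 5, 2], 9) -> True (4 + 5 = 9)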


15.2 Branch-and-Bound
• for optimization problems

• we’ll talk about minimizing an objective function

• keep track of minimum solution so far

• not DFS – explore the "most promising" configuration first

• "branch" – generate the children of a configuration (as in backtracking)

• "bound" – for each configuration compute a lower bound on the objective function and prune if it is ≥ the minimum
so far.

General paradigm:

• F = active configurations

• Keep best so far

• While F ≠ ∅

– C ← remove ”best” configuration from F


– Expand C to children C1 , . . . , Ct (”branch”)
– For each Ci ,
∗ If Ci solves the problem and is better than the current best, update best
∗ Else if Ci is infeasible, discard it.
∗ Else, "bound": if lower bound(Ci ) < best so far, add Ci to F .
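The paradigm as a generic Python sketch (everything here is an assumption of the sketch: branch(C) yields children of a configuration, generating only feasible ones; lower_bound(C) underestimates every completion of C; value(C) returns C's objective value if C is a complete solution, else None):

    import heapq
    from itertools import count

    def branch_and_bound(root, branch, lower_bound, value):
        best, best_config = float('inf'), None
        tie = count()                        # tie-breaker so the heap never compares configs
        F = [(lower_bound(root), next(tie), root)]
        while F:
            bound, _, C = heapq.heappop(F)   # remove "best" configuration from F
            if bound >= best:
                continue                     # prune: cannot beat the best so far
            for child in branch(C):          # "branch"
                v = value(child)
                if v is not None:            # complete solution
                    if v < best:
                        best, best_config = v, child
                elif lower_bound(child) < best:    # "bound"
                    heapq.heappush(F, (lower_bound(child), next(tie), child))
        return best, best_config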

15.2.1 Branch and Bound TSP Algorithm


Example: Traveling Salesman problem. Idea here is we have a graph with weights on the edges, and our traveling
salesman wants to start in a home town, visit every city exactly once, and return to the home town.

Given a graph G = (V, E) and edge weights w : E → R≥0 find a cycle C that goes through every vertex once and
has minimum weight.

This is a famous, "hard" problem.

Algorithm: based on enumerating all subsets of edges. Configuration: Ic ⊆ E (included edges) and Xc ⊆ E
(excluded edges), with Ic ∩ Xc = ∅. Undecided edges: E \ (Ic ∪ Xc ).

Necessary conditions: E \ Xc must be connected – in fact it must be 2-connected. Ic must have ≤ 2 edges at each
vertex, and must not contain a (premature) cycle.

How to branch? Take the next edge not yet decided about: from configuration C = (Ic , Xc ), choose e ∈ E \ (Ic ∪ Xc ) and branch on e included / e excluded. But how to bound?
Given Ic , Xc find a lower bound on minimum TSP tour respecting Ic , Xc . We want an efficiently computable lower
bound (so it’s sort of like a heuristic, but we don’t have issues of correctness.)


Instead of finding a tour, we find a 1-tree: a spanning tree on nodes 2, . . . , n (not an MST of the whole graph) together
with two edges from vertex 1 into the tree.

Claim Any TSP-tour is a 1-tree. w(min TSP-tour) ≥ w( min 1-tree ). So use this for lower bound.

Claim We can efficiently find a minimum weight 1-tree given Ic , Xc . (Not proven.)
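For intuition, here is a sketch (assuming a complete graph given as a weight matrix, and ignoring the Ic , Xc constraints for simplicity): the minimum 1-tree is an MST on vertices 2, . . . , n plus the two cheapest edges at vertex 1.

    import heapq

    def min_one_tree_weight(n, w):
        # Vertices 0..n-1; vertex 0 plays the role of "vertex 1" above.
        # w[u][v] = edge weight. First, Prim's algorithm on vertices 1..n-1:
        in_tree, total = {1}, 0
        heap = [(w[1][v], v) for v in range(2, n)]
        heapq.heapify(heap)
        while len(in_tree) < n - 1:
            cost, v = heapq.heappop(heap)
            if v in in_tree:
                continue
            in_tree.add(v)
            total += cost
            for u in range(1, n):
                if u not in in_tree:
                    heapq.heappush(heap, (w[v][u], u))
        # plus the two cheapest edges from vertex 0 into the tree
        return total + sum(sorted(w[0][v] for v in range(1, n))[:2])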

Final Enhancements:

• When we choose the "best" configuration C from F , as our measure of best, use the one with the minimum
1-tree.

• Branch wisely. e.g. find a vertex i in the minimum 1-tree with degree ≥ 3 (a tour has degree exactly 2 everywhere).


Let e = the maximum-weight edge at i, and branch on e.

16 Oct 30th, 2008


16.1 Recall
Course outline:

• Designing algorithms

• Analyzing algorithms

• Lower Bounds – do we have the best algorithm?

16.2 Lower Bounds


If we have a lower bound for a problem P , we claim any algorithm will take at least this much time.

Note: distinction between lower bound for an algorithm and lower bound for a problem. For an example, look at
multiplying large integers. The school method was O(n2 ).

In fact, the school method is Ω(n^2) worst-case run time, because there are example inputs that take ≥ c × n^2 steps.

But there is an algorithm (divide and conquer) with a better worst-case runtime – O(n^k) with k < 2. A lower
bound for the problem, in contrast, says that all algorithms have to take ≥ some amount of time.

Lower bounds for algorithms are hard to prove!

16.2.1 Basic Techniques


1. Lower bound based on output size.
For example, if we ask for all the permutations of 1, 2, . . . , n, there are n! of them and it won’t take less than
n! time to write them all down – Ω(n!).

2. Information-Theoretic Lower Bounds


e.g. Ω(log n) lower bound for searching for an element inside a1 , a2 , . . . , an . This takes log n bits as that is
the information content of distinguishing n possibilities.


In a comparison-based model, each comparison gives one bit of information, and since we need log n bits we
need log n comparisons. Often this argument is presented as a tree.

3. Reductions: showing one problem is easier or harder than another.


e.g. convex hull is harder than sorting. We took a list of numbers and mapped them onto a parabola; the convex
hull then gives the sorted order. "If I could find convex hulls faster than O(n log n) then I
could sort faster than O(n log n)."
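A sketch of that reduction in Python (convex_hull is an assumed subroutine returning hull vertices in counter-clockwise order; the rest is hypothetical glue):

    def sort_via_hull(nums, convex_hull):
        # Lift each number x onto the parabola y = x^2. Every lifted point
        # is a hull vertex, and walking the hull counter-clockwise from the
        # leftmost point visits them in increasing x, i.e. sorted order.
        # (Duplicates removed for simplicity.)
        pts = convex_hull([(x, x * x) for x in set(nums)])
        i = pts.index(min(pts))          # rotate to start at the leftmost point
        return [p[0] for p in pts[i:] + pts[:i]]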

16.2.2 State-of-the-Art in Lower Bounds


• Some problems are undecidable (they don’t have algorithms) e.g. the halting problem. We’ll do this later
in the course (and CS 360.)

• Some problems can only be solved in exponential time.

• (Lower end) some problems have Ω(n log n) lower bounds on special models.

Things we care about, like "is there a TSP algorithm in O(n^6)?" – nobody knows. "Can the O(n^3) dynamic program-
ming algorithms be improved?" – nobody knows.

Major open question: Many practical problems have no polynomial time algorithm and no proved lower bound.

The best that’s known is proving that a large set of problems are all equivalent, and we know that solving one in
polynomial time solves all the others.

In the rest of the course, we’ll fill this in.

16.3 Polynomial Time


Definition An algorithm runs in polynomial time if its worst case runtime is O(nk ) for some k.

What is polynomial?

Θ(n) YES
Θ(n^2) YES
Θ(n log n) YES (because it's better than O(n^2))
Θ(n^100) YES
Θ(2^n) NO
Θ(n!) NO
The algorithms in this course were (mostly) all poly-time, except backtracking and certain dynamic programming
algorithms (specifically 0-1 Knapsack.)

Low-degree polynomials are efficient. High-degree polynomials don't seem to come up in practice.

Jack Edmonds is a retired C&O prof. In the "matching" problem, you are given a graph and want to pair up adjacent
vertices. He first formulated the idea of polynomial time.

In any other algorithms class, you would cover linear programming in algorithms. We have a C&O department
that covers that, but if you’re serious about algorithms, you should be taking courses over there.


Other history:

• In the 50's and 60's, there was a success story: the creation of linear programming and the simplex method – practical
(though not polynomial.)
• Next step, integer linear programming. Seemed promising at the time, and people reduced other problems
to this one, but in the 70’s with the theory of NP-completeness, we found this is actually a hard problem
and people did reductions from integer programming.

Our goal: to attempt to distinguish problems with poly-time algorithms from those that don’t have any. This is
the theory of NP-completeness. (NP = Non-deterministic Polynomial)

16.4 Reductions
Problem A reduces (in polytime) to a problem B (written A ≤ B or A ≤P B) and we can say ”A is easier than
B” if a (polytime) algorithm for B can be used to create a (polytime) algorithm for A. More precisely, there is a
polytime algorithm for A that makes subroutine calls to (polytime) algorithm B.
Note: we can have a reduction without having an algorithm for B.

Consequence of A ≤ B:
An algorithm for B gives an algorithm for A. Conversely, a lower bound showing there is no polytime algorithm for A
implies there is no polytime algorithm for B.
Even without an algorithm for B or a lower bound for A, if we prove reductions A ≤P B and B ≤P A then A and
B are equivalent with respect to polytime (either both have polytime algorithms, or neither does.)

Example: Longest increasing subsequence problem. We will reduce this problem not to shortest path but to longest
path in a graph.

This is a reduction – it reduces the longest increasing subsequence problem to the longest path problem. Is it a
polynomial-time reduction?

How can we solve the longest path problem? Reduction to the shortest path problem: negate the edge weights. (This is safe here because the graph is a DAG, so negating weights creates no negative cycles.)
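Concretely, the reduction builds a DAG on positions 1 . . . n with an edge i → j whenever i < j and ai < aj ; a longest path is a longest increasing subsequence. Since index order is a topological order, longest path here is a single scan (this is also the classic O(n^2) dynamic program). A sketch:

    def lis_length(a):
        n = len(a)
        best = [1] * n                       # best[j] = longest path ending at j, counted in vertices
        for j in range(n):
            for i in range(j):
                if a[i] < a[j]:              # edge i -> j exists in the DAG
                    best[j] = max(best[j], best[i] + 1)
        return max(best, default=0)

    # lis_length([3, 1, 4, 1, 5, 9, 2, 6]) -> 4 (e.g. 1, 4, 5, 6)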

17 Nov 4th, 2008


Permanents are like determinants, except all the terms are positive.

Today’s topics: Reductions (from last class), P and NP, and decision problems.

17.1 Decision Problems


What is a decision problem? A problem with output YES/NO or TRUE/FALSE. We will concentrate on decision
problems to define P/NP. Why? It’s more rigorous, and it seems to be equivalent to optimization anyways.

Examples

• Given a number, is it prime?


• Given a graph, does it have a Hamiltonian cycle? (a cycle visiting every vertex once)


• TSP decision version: given a graph G = (V, E) with w : E → R+ , and given some bound k ∈ R, is there a
TSP tour of length at most k?
• Independent Set: given a graph G = (V, E) and k ∈ N is there an independent set of size ≥ k? Optimization
version: given G, find max independent set.

Usually, decisions and optimization are equivalent with respect to polynomial time. e.g. independent set. In fact,
typically, we can show decision ≤P opt. Input: G, k.

• Give G to algorithm for optimization problem


• Return YES or NO depending on whether the returned set has size ≥ k.

Showing opt ≤P decision: suppose we have a poly-time algorithm for the decision version of independent set. For
k = n, n − 1, . . . , 1, give G, k to the decision algorithm and stop at the first YES – that k is the maximum. Runtime: Assume
decision takes O(n^t). Then this loop takes O(n^(t+1)).

We can find the actual independent set in polytime too. Idea: try vertex 1 in/out of independent set. Exercise:
fill this in and check poly-time.
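A sketch of that exercise in Python (decide is the hypothesized decision oracle; G is a dict mapping each vertex to its set of neighbours — both are assumptions of the sketch):

    def find_independent_set(G, k, decide):
        # decide(G, k) answers: does G have an independent set of size >= k?
        if not decide(G, k):
            return None
        def remove(G, vs):                   # delete the vertices vs from G
            return {u: G[u] - vs for u in G if u not in vs}
        S = set()
        while k > 0:
            for v in list(G):
                H = remove(G, {v} | G[v])    # commit to "v in": drop v and its neighbours
                if decide(H, k - 1):         # the rest of the set lives inside H
                    S.add(v)
                    G, k = H, k - 1
                    break
            else:
                break                        # cannot happen for a correct oracle
        return S

The loop makes at most n oracle calls per chosen vertex, so the whole reduction is polynomial.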

Examples:

• Factoring – find prime factors


• Primality – given number, is it prime?

In some sense, primality is the "decision" version of factoring. But although we can test primality in polynomial
time, no polynomial-time factoring algorithm is known (and finding one would be bad news for cryptography!)

Definition P = { decision problems that have polytime algorithms }.

Notes:

• Must be careful about model of computing and input size – count bits.

17.2 P or NP?
Which problems are in P ? Which are not in P ? We will study a class of "N P -complete" problems that are
equivalently hard (wrt polytime) (i.e. A ≤P B ∀A, B in the class) and none of which seem to be in P .

Definition of NP ("nondeterministic polynomial time"): there is a set of NP problems, which contains the P prob-
lems and the NP-complete problems (which are all equivalent.) NP problems can be solved in polytime if we get some lucky extra
information.

For independent set, it’s easy to verify a graph has an independent set of size ≥ k if you’re given the set. Contrast
with verifying that G has no independent set of size ≥ k, what lucky info would help?

e.g. primes: given n, is it prime? Not clear what info to give (there is some) but for composite numbers (given n,
is it composite (= not prime?)) we could give factors.

A certifier algorithm takes an input plus a certificate (our extra info.) An algorithm B is a certifier for problem
X if:


• B takes two inputs s and t and outputs YES or NO.

• ∀s, s is a YES input for X iff ∃t ”certificate” such that B(s, t) outputs YES.

B is a polytime certifier if

• B runs in polynomial time.

• There is a polynomial bound on size of certificate t in terms of the size of s.

Examples

• Independent Set
Input is a graph G and k ∈ N. Question does G have an independent set of size ≥ k?
Claim: Independent Set ∈ NP.
Proof Certificate u ⊆ V (set of vertices.) Certifier: Check if u is an independent set and check |u| ≥ k.

• Decision version of TSP.


Input: Given G = (V, E) and w : E → R+ , and k ∈ R
Question: Does G have a TSP tour of weight ≤ k?
Certificate: Sequence of edges
Certifier: Check the edges form a cycle visiting every vertex with no repeats, and that the sum of weights is ≤ k.

• Non-TSP
Does G have no TSP tour of length ≤ k?
Is Non-TSP in N P ? Nobody knows.

• Subset-Sum:
Input: w1 , . . . , wn in R+ and a target W . Is there a subset S ⊆ {1, . . . , n} such that the sum Σi∈S wi is exactly W ?
Claim: Subset Sum ∈ N P . Certificate: S. Certifier: add the weights in S.
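For concreteness, a certifier for the first example (Independent Set) might look like this sketch, where the input s is (G, k) with G a dict vertex → set of neighbours and the certificate t is a set of vertices (this representation is an assumption of the sketch):

    def certify_independent_set(G, k, cert):
        # B(s, t): check the certificate is a large-enough independent set.
        return (len(cert) >= k
                and all(v in G for v in cert)
                and all(u not in G[v] for u in cert for v in cert))

Both checks are polynomial in the size of (G, k), and the certificate is at most n vertices, as the definition requires.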

17.3 Properties
Claim P ⊆ N P .
Let X be a decision problem in P . So X has a polytime algorithm. To show X ∈ N P :

• Certificate: nothing

• Certifier Algorithm: original algorithm

Claim: any problem in N P has an exponential algorithm. In particular, the running time is O(2poly(n) ).
Proof idea: try all possible certificates using the certifier. The number of certificates is O(2poly(n) ).
Open Questions
Is P = N P ? co-NP consists of the "no versions" of NP problems; non-TSP is in co-NP. Is co-NP = NP? Is P = NP ∩ co-NP?


18 Nov 6th, 2008


18.1 Recall
A ≤P B – problem A ”reduces (in P olytime) to” problem B if there is a polytime algorithm for A (possibly) using
a polytime algorithm for B. (B is ”harder.”) P = { decision problems with polytime algorithms } and N P = {
decision problems with a polynomial-time certifier algorithm } (i.e. poly-time IF we get extra information.)

18.2 N P -Complete
These are the hardest problems in N P . Definition: A decision problem X is N P -complete if:

1. X ∈ N P

2. For every Y ∈ N P , Y ≤P X.

Two important implications:

1. If X is N P -complete and if X has a polytime algorithm then P = N P . i.e. every Y ∈ N P has a polytime
algorithm.

2. If X is N P -complete, and if X has no polytime algorithm (i.e. lower bound) then no problem in N P -complete
has a polytime algorithm.

The first N P -completeness proof is hard. To show X N P -complete, we must show Y ≤P X for all Y ∈ N P .
Subsequent N P -completeness proofs are easier. If we know X is N P -complete, then to prove Z is N P -complete:

1. Prove Z ∈ N P

2. X ≤P Z

Note that X is a known N P -complete problem and Z is the new problem. Please don’t get this backwards.

18.2.1 Circuit Satisfiability


The first N P -complete problem is called circuit satisfiability.

[Example circuit: one ∨ gate as the output (sink), fed by an ∧ gate and ¬ gates; the inputs are variables x1 , x2 .]


This is a dag with OR, AND, and NOT operations. 0-1 values for variables determine output value. e.g. if x1 = 0
and x2 = 1 then output = 0.

Question: Are there 0-1 values for variables that give 1 as output?

Circuit SAT is a decision problem in NP.

• Certificate – Values for variables.


• Certifier – Go through circuit from sources to sink, computing values. Check output is 1.

Theorem Circuit-SAT is N P -complete.

Proof Sketch: We know Circuit-SAT ∈ N P as above. We must show Y ≤P Circuit-SAT for all Y ∈ N P . The idea is that
an algorithm becomes a circuit computation. A certifier algorithm with an unknown certificate becomes a circuit
with variables as some inputs. The question is, is there a certificate such that the certifier says YES – which leads
to circuit satisfiability.

Essentially, if we had a polynomial time way to test circuit satisfiability, we would have a general way to solve any
problem in N P by turning it into a Circuit-SAT problem.

18.2.2 3-SAT
Satisfiability: (of Boolean formulas).
• Input: a boolean formula.
e.g. (x1 ∧ x2 ) ∨ (¬x1 ∧ ¬x2 )

• Question: is there an assignment of 0, 1 to variables to make the formula TRUE (i.e. 1?)
Well, circuits = formulas, so these satisfiability problems should be equivalent – we will be rigorous. Even a special
form of Satisfiability (SAT) is N P -complete.

3-SAT: e.g. (x1 ∨ ¬x1 ∨ x2 ) ∧ (x2 ∨ x3 ∨ x4 ) ∧ . . .. The formula is the ∧ of "clauses," each the ∨ of three literals. A
literal is a variable or the negation of a variable.

Theorem 3-SAT is N P -complete.

Proof

• 3-SAT ∈ N P :
Certificate: values for variables.
Certifier algorithm: check that each clause has ≥ 1 true literal.
• 3-SAT is harder than another N P -complete problem:
i.e. prove Circuit-SAT ≤P 3-SAT.
Assume we have a polytime algorithm for 3-SAT, so use it to create a polytime algorithm for Circuit-SAT.
Input to algorithm is a circuit C and we want to construct in polytime a 3-SAT formula F to send to the
3-SAT algorithm s.t. C is satisfiable iff F is satisfiable.


We could derive a formula by carrying the inputs up through the tree (i.e. for children f1 and f2 of an ∨ node, just pull
the inputs up and write f1 ∨ f2 .) Caution: the size of the formula doubles at every level (thus this is not a
polynomial time or size reduction.)
Idea: make a variable for every node in the circuit. Rewrite a ≡ b as (a ⇒ b) ∧ (b ⇒ a), and a ⇒ b as
(b ∨ ¬a). So a ≡ (b ∨ c) becomes (a ⇒ (b ∨ c)) ∧ ((b ∨ c) ⇒ a), i.e. (b ∨ c ∨ ¬a) ∧ (a ∨ ¬(b ∨ c)); the second part is (a ∨ (¬b ∧ ¬c)),
which distributes to (a ∨ ¬b) ∧ (a ∨ ¬c). We get (b ∨ c ∨ ¬a) ∧ (a ∨ ¬b) ∧ (a ∨ ¬c).
Note: we can pad these size two clauses by adding new dummy variable t and (a ∨ b ∨ t) ∧ (a ∨ b ∨ ¬t) etc.
There’s a similar padding for size 1.
The final formula for F :

– the ∧ of all clauses for the circuit nodes

– ∧ xi , where i is the output node.

e.g. x7 ∧ (x7 ≡ x5 ∨ x6 ) ∧ (x5 ≡ x1 ∧ x2 ) ∧ (x6 ≡ x3 ∧ x4 ) ∧ (x3 ≡ ¬x1 ) ∧ (x4 ≡ ¬x2 ).


Claim F has a polynomial size and can be constructed in polynomial time.
Claim C is satisfiable iff F is satisfiable.
Proof (⇒) by construction (⇐) . . .
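As a concrete illustration of the clause construction for one ∨ node (a ≡ b ∨ c), here is a sketch with the dummy-variable padding; literals as (variable, sign) pairs are a representation chosen for this sketch:

    def or_gate_clauses(a, b, c, t):
        # Clauses expressing a ≡ (b ∨ c), padded to exactly three literals
        # using a fresh dummy variable t.
        pos = lambda v: (v, True)
        neg = lambda v: (v, False)
        return [
            [neg(a), pos(b), pos(c)],        # a => (b ∨ c)
            [pos(a), neg(b), pos(t)],        # (b => a), padded with t
            [pos(a), neg(b), neg(t)],
            [pos(a), neg(c), pos(t)],        # (c => a), padded with t
            [pos(a), neg(c), neg(t)],
        ]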

19 Nov 11th, 2008


NP is decision problems with a polynomial time certifier algorithm.
P is decision problems with a polynomial time algorithm.
NP-complete problems are the hardest problems in NP.

Definition A decision problem X is NP-complete if:


• X ∈ NP
• Y ≤P X for all Y ∈ N P
Once we know X is NP-complete, we can prove Z is NP-complete by proving:
• Z ∈ NP
• X ≤P Z

19.1 Satisfiability – no restricted form


Recall: 3-SAT is NP-complete. Recall the input is a Boolean formula in a special form (3-conjunctive normal
form, F = (x1 ∨ x2 ∨ ¬x3 ) ∧ . . .)

Question: Are there T/F values for variables that make F true?

Theorem SAT is NP-complete.

Proof:
• SAT ∈ N P
• 3-SAT ≤P SAT


19.2 Independent Set


Input: Graph G = (V, E) and k ∈ N.
Question: Is there a subset U ⊆ V with |U | ≥ k that is independent (i.e. no two vertices joined by an edge)?

Theorem Independent-Set is NP-complete.


Proof Independent-Set is in NP. See previous lecture. We will show 3-SAT reduces to Independent-Set. We
want to give a polytime algorithm for 3-SAT using a hypothesized polytime algorithm for Independent-Set.

Input: Boolean formula F


Goal: Construct a graph G and choose k ∈ N such that F is satisfiable iff G has an independent set ≥ k.

For each clause in F , we'll make a triangle in the graph. For example, (x1 ∨ x2 ∨ ¬x3 ) is drawn as a triangle with
three vertices labelled x1 , x2 and ¬x3 , and edges (x1 , x2 ), (x2 , ¬x3 ), (¬x3 , x1 ). We have m clauses, so 3m vertices.
For example: (x1 ∨ x2 ∨ ¬x3 ) ∧ (x1 ∨ ¬x2 ∨ x3 ) becomes:
[Two triangles: one on vertices x1 , x2 , ¬x3 ; one on x1 , ¬x2 , x3 .]

Connect any vertex labelled xi with any vertex labelled ¬xi .

Claim: G has polynomial size. 3m vertices.

Details of Algorithm:
• Input: 3-SAT formula F

– Construct G
– Call Independent-Set algorithm on G, m
– Return answer

• Runtime: Constructing G takes poly time. Independent set runs in poly time by assumption.

• Correctness: Claim F is satisfiable iff G has an independent set ≥ m.

• Proof: (⇒) Suppose we can assign T/F to variables to satisfy every clause. So, each clause has ≥ 1 true
literal. Pick one corresponding true-literal vertex from each triangle. This gives an independent set of size = m.
(⇐) Independent set in G must use one vertex from each triangle. Set the corresponding literals to be true.
Set any remaining variables arbitrarily. This satisfies all clauses.
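The construction as a Python sketch (a clause is a tuple of three (variable, sign) literals; this representation is an assumption of the sketch):

    def formula_to_graph(clauses):
        # Returns (V, E, k): a triangle per clause, an edge between every
        # xi vertex and every ¬xi vertex, and k = m = number of clauses.
        V, E = [], set()
        for c in range(len(clauses)):
            tri = [(c, 0), (c, 1), (c, 2)]   # vertex = (clause index, position)
            V.extend(tri)
            for i in range(3):
                for j in range(i + 1, 3):
                    E.add((tri[i], tri[j]))  # the triangle's edges
        for u in V:
            for v in V:
                (xu, su), (xv, sv) = clauses[u[0]][u[1]], clauses[v[0]][v[1]]
                if xu == xv and su != sv:    # a literal and its negation
                    E.add(tuple(sorted((u, v))))
        return V, E, len(clauses)

The output has 3m vertices and O(m^2) edges, so it is constructed in polynomial time as the proof requires.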

19.3 Vertex Cover


Input: Graph G = (V, E) and number k ∈ N.
Question: Does G have a vertex cover U ⊆ V with |U | ≤ k?
A vertex cover is a set of vertices that ”hits” all edges – i.e. ∀(u, v) ∈ E, u ∈ U or v ∈ U (or both.)

Theorem Vertex-Cover (VC) is NP-complete.


Proof


• VC ∈ N P
Certificate: the set U . Certifier algorithm: verify U is a vertex cover and |U | ≤ k.
• Ind-Set ≤P VC
Ind-Set and VC are closely related.
Claim U ⊆ V is an independent set iff V − U is a vertex cover.
Suppose that we have a polynomial time algorithm for VC. Here’s an algorithm for independent set. Input
G, k, and call VC algorithm on G, n − k.
Correctness: Claim, G has independent set ≥ k iff G has VC ≤ n − k.

19.4 Set-Cover Problem


Input: a set E of elements and some subsets of E: S1 , . . . , Sm with Si ⊆ E, and k ∈ N.

Question: Can we choose k of the Si 's that still cover all the elements? i.e. are there i1 , . . . , ik such that
Si1 ∪ · · · ∪ Sik = E?

Example: Can we throw away some of a set of overlapping rectangles and still cover the same area?
Theorem Set-Cover is NP-complete.
Please find reduction proof on the Internet.

19.5 Road map of NP-Completeness


Circuit-SAT → 3-SAT
3-SAT → Independent Set → VC → Set-Cover
3-SAT → Hamiltonian Cycle → TSP
3-SAT → Subset-Sum

Note: VC ≤P Set-Cover because VC is a special case, but Set-Cover ≤P VC because VC is NP-complete.

These proofs are from a 1972 paper by Richard Karp.

19.6 Hamiltonian Cycle


Input: Directed Graph G = (V, E)
Q: Does G have a directed cycle that visits every vertex exactly once?

Proof (1) ∈ N P and (2) 3-SAT ≤P Ham.Cycle. Give a polytime algorithm for 3-SAT assuming we have one for
Ham.Cycle.


• Input: 3-SAT formula F


• Idea: Construct digraph G such that F is satisfiable iff G has a Hamiltonian cycle.
F has m clauses and n variables x1 , . . . , xn .
(skipped this section. read online.)
Can you show the undirected ham cycle problem is hard?

20 Nov 13th, 2008


20.1 Undirected Hamiltonian Cycle
Input: Undirected G = (V, E)
Decision: Does this graph have an undirected Hamiltonian cycle that visits every vertex exactly once?

Theorem Undirected H.C. is N P -complete.


Proof
• ∈ NP
• Dir. H.C. ≤P Undir.H.C.
Assume we have a polytime algorithm for the undirected case. Design a polytime algorithm for the directed
case.
Input: Directed graph G
Construct an undirected graph G′ such that G has a directed H.C. iff G′ has an undirected H.C.
First idea – G′ = G with directions erased. (⇒) is OK, but (⇐) fails on a one-directional cycle.
Second idea –
[Gadget: each vertex v becomes the path vin – vmid – vout ; each directed edge (u, v) becomes the undirected edge (uout , vin ).]

For each vertex v create vin , vout , and vmid as shown above. We've created G′.
Claim G′ has polynomial size. Say G has n vertices, m edges. Then G′ has 3n vertices and m + 2n edges.
Claim (Correctness) G has a directed H.C. iff G′ has an undirected H.C.
(⇒) easy
(⇐) vmid has degree two, so the Hamiltonian cycle must use both its incident edges. Then it must use one
incoming edge at v and one outgoing edge at v.
This is the level of N P -completeness proof you’ll be expected to do on your assignment.
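The gadget in Python, as a sketch (G is assumed to be a dict v → set of out-neighbours):

    def directed_to_undirected(G):
        # Returns an undirected graph H (dict of sets) such that G has a
        # directed Ham. cycle iff H has an undirected one.
        H = {}
        def add_edge(a, b):
            H.setdefault(a, set()).add(b)
            H.setdefault(b, set()).add(a)
        for v in G:
            add_edge((v, 'in'), (v, 'mid'))      # the path vin - vmid - vout
            add_edge((v, 'mid'), (v, 'out'))
        for u in G:
            for v in G[u]:
                add_edge((u, 'out'), (v, 'in'))  # directed (u,v) becomes (uout, vin)
        return H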

20.2 TSP is NP-complete


Theorem TSP (decision version) is N P -complete.
Input: G = (V, E) and w : E → R+ with k ∈ R.
Q: Does G have a TSP tour with weight ≤ k?

Proof


• ∈ NP

• Ham. Cycle ≤P TSP.


Ham. Cycle is a special case of TSP when w(e) = 1 ∀e and k = n.

Theorem Hamiltonian Path is NP-complete.


Input: undirected graph G
Question: does G have Ham path that visits each vertex exactly once?
Proof

– ∈ NP
– Ham Cycle ≤P Ham Path
Want an algorithm for Ham. Cycle using an algorithm for Ham Path. Given G, an input for Ham. Cycle,
construct G′ such that G has a H.C. iff G′ has a Ham path.
First idea: G′ ← G. Well, ⇒ is OK but we can find a counterexample for ⇐. Exercise: find a
counterexample.
Second idea: Create three new vertices a, b, c in G′ and connect a and c to all vertices of G.
Third idea: Add a single vertex and connect it to everything in G′.
Fourth idea: erase each vertex from G one at a time and ask for a Hamiltonian path.
Final idea: Take one vertex v and split it into two identical copies. Add new vertices s and t as above.
Claim poly-size.

Again, this is the kind of thing you’ll be expected to do on your assignment.

20.3 Subset-Sum is NP-Complete


This one is not something you’ll be expected to do on your assignment.
Input: Numbers a1 , . . . , an ∈ R and target W .
Question: Is there a subset S ⊆ {1, . . . , n} such that Σi∈S ai = W ?
Recall: Dynamic programming algorithm O(n × W ). Branch-and-bound algorithm was O(2^n).

Proof

1. ∈ N P

2. 3-SAT ≤P Subset-Sum
Give a polynomial-time algorithm for 3-SAT using a polytime algorithm for Subset-Sum.
Input is a 3-SAT formula F with variables x1 , x2 , . . . xn and clauses c1 , . . . , cm . Construct a Subset-Sum input
a1 , . . . , at , W s.t. F is satisfiable iff ∃ a subset of the ai 's with sum = W .
Ex, F = (x1 ∨ ¬x2 ∨ x3 ) ∧ (¬x1 ∨ ¬x2 ∨ x3 ).


            c1   c2    x1   x2   x3
x1           1    0     1    0    0
¬x1          0    1     1    0    0
x2           0    0     0    1    0
¬x2          1    1     0    1    0
x3           1    1     0    0    1
¬x3          0    0     0    0    1
slack 1,1    1    0     0    0    0
slack 1,2    2    0     0    0    0
slack 2,1    0    1     0    0    0
slack 2,2    0    2     0    0    0
target W     4    4     1    1    1

(In general there are rows x1 , ¬x1 , . . . , xn , ¬xn and columns c1 , . . . , cm ; each clause column needs sum ≥ 1 from the literal rows, and the slack rows bring it to exactly 4.)

Make a matrix of digits, interpreting the rows as numbers in base 10 (a bigger base than 2, so that column sums never carry.) Add extra
columns: column xi has 1's in rows xi and ¬xi , and zeros elsewhere.

• We want to choose the x1 row or the ¬x1 row, but not both: the xi column with target 1 forces this.
• We want to handle a clause-column target of "≥ 1." Solution: add two slack rows per clause column ci : row
slack i,1 has a 1 in column ci and row slack i,2 has a 2 in column ci – and 0 everywhere else.

Set the target for column ci to 4.


Finally, each row of the matrix becomes a base-10 number. These are the ai ’s. The target row of the matrix
turns into W in base 10.
Claim Size. How many ai ’s? 2n + 2m. How many base 10 digits in ai ’s and W ? Equal to number of columns,
n + m.
Claim Correctness. Satisfiable iff ∃ subset of ai ’s with sum W .
Proof (⇒) If xi is true, choose row xi ; if false, choose row ¬xi . Then column xi has sum = 1 as required. The
column for clause ci gets sum 1, 2, or 3 from the chosen literal rows (one per true literal); top it up to exactly 4
with slack rows: three true literals – add slack i,1 (+1); two – add slack i,2 (+2); one – add both (+3).
This row set gives sum W .
(⇐) Some subset of rows adds to W .
Column xi ⇒ we use exactly one of rows xi , ¬xi . Set xi = T or F accordingly. That satisfies all clauses: consider cj and sum down
the cj column to get 4. The slacks give ≤ 3, so some literal row in cj must be chosen, i.e. some literal in cj is true.


21 Nov 18th, 2008


NP-Completeness continued.
Theorem Circuit-SAT is NP-Complete.
Recall: Input: Circuit of ∨, ∧ and ¬ gates and variables as some of the inputs. One sink: the final output.
Question: are there 0-1 values for which the circuit outputs 1?
Proof

• ∈ NP

• Y ≤p Circuit-SAT for all Y in NP.


What do we know about Y ? It has a polynomial time certifier algorithm B (input s for Y has YES output
iff there exists a certificate t of poly size such that B(s, t) outputs YES.)
We assume there is a polynomial time algorithm for Circuit-SAT and give a polynomial time algorithm
for Y using that subroutine.
Let n = size(s), the size of the input. Let p(n) be a polynomial bounding size(t), i.e. size(t) ≤ p(n).
We must convert algorithm B to a circuit (to hand to Circuit-SAT subroutine.)
Alg. B (after compiling and assembling) becomes a circuit at lowest hardware level. Because B runs in
polynomial time, the circuit has polynomial size.
Alg B (for input of size n) becomes circuit Cn (of polynomial size in n.)
(Is there a certificate?) becomes (Are there values for variables?)
Correctness:
Input s for Y gets YES output iff there exists a certificate such that B(s, t) outputs YES iff there exist values
for variables t such that Cn outputs 1 iff Cn is satisfiable.
Algorithm for Y :

– Input s
– Convert B to circuit Cn
– Hand Cn to Circuit-SAT subroutine

21.1 Major Open Questions


Is P = N P ? If one N P -complete problem is in P , then they all are.
If P ≠ N P then there are problems in between P and N P -complete (Ladner, 1970s), i.e. A ≤P B but not B ≤P A
(i.e. A <P B).
But what are natural candidates for these? In Garey and Johnson ('79) these were:

• Linear Programming: in P (’80)

• Primality Testing: in P (’02)

• Min. Weight Triangulation of a Point Set: shown N P -complete ('06) (not a famous problem)

• Graph isomorphism: open.

Given two graphs each on n vertices, are they the same after relabeling vertices?


21.2 Undecidability
So far we’ve been talking about efficiency of algorithms. Now, we’ll look at problems with no algorithm whatsoever.
This is also a topic not conventionally covered in an algorithms course. So you won’t find it in textbooks. But
everyone in the School of Computer Science thinks it’s ”absolutely crucial” that everyone graduating with a
Waterloo degree knows this stuff.

21.2.1 Examples
Tiling: Given square tiles with colours on their sides, can I tile the whole plane with copies of these tiles? Must
match colours, and no rotations or flips allowed.

There is, in fact, no algorithm for this. For a finite (k × k) piece of the plane it is decidable: with t tile types I could
just try the t choices in each of the k^2 places, so the problem is O(t^(k^2)).

Program Verification: Given specification of inputs and corresponding outputs of a program (specification is finite,
potential number of inputs is infinite) given a program, does this program give correct corresponding output?

Answer: no. On one hand, this is sad for software engineers, because much of their process attempts to check
exactly this. On the plus side, your skills and ingenuity will always be needed...

Halting Problem: Given a program, does it halt (or go into an infinite loop?)

Sample-Program

while x ≠ 1 do

x←x−2

end

This halts if x is odd and positive.

Sample-Program-2

while x ≠ 1 do

if x is even then x ← x/2
else x ← 3x + 1

end

Assume x > 0. Sample runs: x = 5, 16, 8, 4, 2, 1. x = 9, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1.
Does this program halt for all x? That’s open.


Also, any math question about existence of a number can be turned into a halting question. Idea: There is an x
such that Foo(x): x ← 1. While not Foo(x), x ← x + 1.

Definition A decision problem is undecidable if there's no algorithm for it.

Definition (more general)


A problem is unsolvable if there's no algorithm for it.

What is a problem? Specification of inputs and corresponding outputs.

What is an algorithm? Church-Turing Thesis (not proved.)


An algorithm is a Turing machine.

Theorem The following models of computing are equivalent:

• Turing machines

• Java programs

• RAM

• Circuit families

22 Nov 20th, 2008


22.1 Undecidability
”Which problems have no algorithm?”

Definition A decision problem is undecidable if it has no algorithm. A (general) problem is unsolvable if it has no
algorithm.

22.2 History of Undecidability


• Gottlob Frege - 1900 - one of many who tried to axiomatize mathematics.

• Bertrand Russell (1872-1970) Russell’s paradox (recommend his biography, and some philosophy books)
Let S = the set of sets that do not contain themselves. Is S a member of itself?

– NO: then S satisfies its own defining condition, so S is a member of S. Contradiction.

– YES: then S contains itself, but S contains only sets that do not contain themselves. Contradiction.

Contradiction either way! So what is wrong about this?

First undecidability result (from Turing):

Theorem The Halting Problem is undecidable.

Halting Problem


• Input: Some program or algorithm A and some input string w for A.


• Question: Does A halt on w?
Proof: (by contradiction.) Suppose there is a program H that decides the halting problem. H takes A, w as input
and outputs yes/no.

Construct a new program H′ with input a program B.

begin
call H(B, B)
if no, halt.
else, loop forever.
end

So H′ is like Russell's set S. His question, "does S contain S?" is like asking, "does H′ halt on its own input?"
Suppose yes: then this is a yes case of the halting problem, so H(H′, H′) outputs yes. But look at the code for H′ on
input H′: it loops forever. Contradiction.

Suppose no. Then this is a no case of the halting problem, so H(H′, H′) outputs no. But then (looking at the
code of H′) H′ halts on input H′. Contradiction either way. Therefore, our assumption that H exists is wrong.
Therefore, there is no algorithm to decide the halting problem.

23 Nov 25th, 2008


Assignment 3 – out of 45.
Assignment 4 – due Friday.
Final exam: study sheet is allowed.

23.1 Undecidability
Recall: a decision problem is undecidable if there is no algorithm for it.

Halting Problem: given a program/algorithm A and an input w, does A halt on input w?


To show other problems are undecidable, use reductions.

Theorem: If P and Q are decision problems and P is undecidable and P ≤ Q then Q is undecidable.

Recall A ≤ B or ”A reduces to B” if an algorithm for B can be used to make an algorithm for A.

Proof By contradiction. Suppose Q is decidable. Then it has an algorithm. By the definition of ≤, we get an
algorithm for P . This is contrary to P undecidable.

23.2 Other Undecidable Problems


23.2.1 Halt-No-Input or Halt-on-Empty
Given a program A with no input, does it halt?


Theorem Halt-no-Input is undecidable.


Proof: Halting Problem ≤ Halt-No-Input.
Suppose we have an algorithm X for Halt-no-input. Make an algorithm for the Halting Problem.
Input: program A, input string w.
Algorithm: Make a program A′ that has w hard-coded inside it and then runs A on w. Call X on A′, which outputs
the yes/no answer.

Correctness A halts on w iff A′ halts.

23.2.2 Program Verification


Given a program, and specification of inputs and corresponding outputs, does the program compute the correct
output for each input?

Theorem Program Verification is undecidable.

Proof Halt-No-Input ≤ Program Verification.

Suppose we have an algorithm V to decide Program Verification. Make an algorithm to solve Halt-No-Input.

Input: program A.
Output: does A halt?
Idea: Modify the code of A to get a program A′ with input and output:

A′: read input and discard it; run A; output 1.

Then call V (A′, specs: "for any input, output 1").

Correctness A halts iff V (A′, specs above) answers yes.


Proof: A halts iff A′ produces output 1 for every input iff V (A′, spec above) answers yes.

Program Equivalence (something TA’s would love!)


Given two programs, do they behave the same (i.e. produce the same outputs?)

Theorem Program Equivalence is undecidable.


Proof Program-Verification ≤ Program-Equiv (?)
Suppose we have an algorithm for Program Equivalence. Give an algorithm for Program Verification.

Input: program A, input/specs for A. This will work, but we need more formality about input/output specs.
Let’s try another approach.

Halt-No-Input ≤ Program-Equiv.

Suppose we have an algorithm for Program Equivalence. Make an algorithm for Halt-No-Input. Input: program A.
Algorithm: Make A′ as in the previous proof. Make program B: read input, just output 1. Call the algorithm for Program-Equiv
on A′, B.


Correctness
A′ is equivalent to B iff A halts.

23.2.3 Other Problems (no proofs)


Hilbert’s 10th Problem
Given a polynomial P (x1 , . . . , xn ) with integer coefficients, does P (x1 , . . . , xn ) = 0 have positive integer solutions?

Possible approach: try all positive integers. This will correctly answer "yes" if the answer is "yes" – but solutions
can be huge: e.g. the least integer solution to x^2 = 991y^2 + 1 has a 30-digit x and a 29-digit y.

This was proved undecidable in the 70’s.

Conway’s Game of Life


Rules: cells die with 0–1 or ≥ 4 live neighbours, and are born with exactly 3 live neighbours. Predicting its long-term behaviour is undecidable.

24 Nov 27th, 2008


Final Exam: Wed Dec 10th. Office hours: show webpage. 48 and 49 must be rounded up to 50.

24.1 What to do with NP-complete problems


Sometimes you only want special cases of an NP-complete problem.

• Parameterized Tractability: exponential algorithms that run in polynomial time for special inputs. For
example, parameterize by the maximum degree of a graph: there may be algorithms that run in polytime when you bound
that maximum degree.

• Exact exponential time algorithm: use heuristics to make branch-and-bound explore the most promising
choice first (and run fast sometimes.)

• Approximation Algorithms: CS 466.

– Vertex Cover: Greedy algorithm that finds a good (not necessarily min) vertex cover.

C <- empty set
while E not empty:
    pick any e = (u,v) in E
    C <- C u {u,v}
    remove from E all edges incident to u or v
return C

Claim: this algorithm finds C with |C| ≤ 2 · ( min size of a V.C. ).


Proof: The edges we choose form a matching M (no two share an endpoint), and |C| = 2|M |. Every edge
in M must be hit by a vertex in any V.C., ∴ |M | ≤ min size of V.C., ∴ |C| ≤ 2 × ( min V.C. ).
We call this a "2-approximation algorithm."
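The greedy algorithm in Python, a direct sketch of the pseudocode above:

    def vertex_cover_2approx(edges):
        # Scanning edges and skipping already-covered ones is the same as
        # repeatedly picking an uncovered edge; the picked edges form a matching.
        C = set()
        for u, v in edges:
            if u not in C and v not in C:
                C.add(u)
                C.add(v)
        return C

    # vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]) -> {1, 2, 3, 4}
    # (min V.C. of this path is {2, 3}, so |C| = 4 <= 2 * 2)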
Some NP-complete problems have no constant-factor approximation algorithm (unless P = N P ) such
as Independent Set.


Some NP-complete problems have approximation factors as close to 1 as we like – at the cost of
increasing running time. Limit is approximation factor = 1 (an exact algorithm) with an exponential-
time algorithm.
– Example Subset-Sum
Given w1 , . . . , wn and W , is there S ⊆ {1 . . . n} such that Σi∈S wi = W ?
As optimization, we want Σi∈S wi ≤ W and maximize Σi∈S wi .
Recall: Dynamic programming O(n × W ).
Note: Σi∈S wi ≥ (1/2) · (true max) would be a 2-approximation; Σi∈S wi ≥ (1/(1 + ε)) · (true max) is a
"(1 + ε)-approximation."
Claim: there is a (1 + ε)-approximation algorithm for Subset-Sum with runtime O((1/ε) n^3). As ε → 0
we get a better approximation but a worse runtime.
Idea: apply dynamic programming to rounded input.
Rough rounding – few bits – rough approximation.
Refined rounding – many bits – good approximation.
Rounding parameter b (later b = (ε/n) · max{wi : i = 1 . . . n}).
Round up: w̃i ← ⌈wi /b⌉ · b.
Claim: wi ≤ w̃i ≤ wi + b.
Now all the w̃i 's are multiples of b, so scale and run dynamic programming on the weights w̃i /b with
capacity ⌊W/b⌋. (Note: we should check feasibility of the rounding.)
Runtime: O(n × ⌊W/b⌋), and
⌊W/b⌋ = O(W/b) = O(W n/(ε · max wi )) = O(n^2 /ε), using W ≤ n · (max wi ).
Therefore, our runtime is like O((1/ε) n^3).
How good is our approximation? Each w̃i is off by ≤ b, so for the returned set S:
(true max) ≤ Σi∈S wi + nb ≤ Σi∈S wi + ε (max wi ) ≤ Σi∈S wi + ε Σi∈S wi = (1 + ε) Σi∈S wi .
The second-last step needs max wi ≤ Σi∈S wi ; otherwise just use the single max wi as the solution. (And
assume wi ≤ W ∀i; else throw the item out.)
Therefore, a (1 + ε)-approximation algorithm.
Idea: the dynamic programming algorithm is very good – it only can't handle numbers with lots of bits.
So throw away half the bits and get an approximate answer.
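A sketch of the rounding scheme in Python, following the outline above (the exact feasibility bookkeeping is the detail the notes flag; this version rounds up, so every set it returns is truly feasible):

    import math

    def subset_sum_fptas(weights, W, eps):
        # Round each wi up to a multiple of b = (eps/n) * max(wi), scale by b,
        # and run the O(n x W~) dynamic program; the table has O(n^2 / eps)
        # entries, for roughly O(n^3 / eps) time overall. Assumes eps > 0.
        weights = [w for w in weights if w <= W]     # assume wi <= W; else throw out
        if not weights:
            return 0
        n = len(weights)
        b = eps * max(weights) / n                   # rounding parameter
        scaled = [math.ceil(w / b) for w in weights]
        target = math.floor(W / b)
        dp = {0: 0}   # scaled sum -> max true weight achieving it
        for w, s in zip(weights, scaled):
            for t, val in list(dp.items()):
                if t + s <= target:                  # feasible in the scaled instance
                    dp[t + s] = max(dp.get(t + s, 0), val + w)
        return max(max(dp.values()), max(weights))   # fall back to the single max item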

• Do alternative methods of computing help with NP-complete problems?


Will massively parallel computers help? Only by a factor of the number of CPUs. This is like "a drop in the
bucket" for exponential-time algorithms.

• Randomized algorithms (CS 466?)


If I have access to a RNG, then what can I now do?
Primality: can be tested in polytime with a randomized algorithm (70’s) but also without randomness (2002.)


• Quantum Computing
The hope is that it offers massive parallelism for free. Huge result (Shor, 1994) – efficient factoring on a
quantum computer.
Waterloo is, by the way, the place to be for quantum computing. In Physics, CS, and C&O we have experts
on the subject.
To read a tiny bit more on quantum computing, see [DPV].

24.2 P vs. NP

