
CS 341 Course Package — Chris Erbach
Contents
1 Sep 9th, 2008
  1.1 Welcome to CS 341: Algorithms, Fall 2008
  1.2 Marking Scheme
  1.3 Course Outline
  1.4 A Case Study (Convex Hull)
    1.4.1 Algorithm
2 Sep 11th, 2008
3 Sep 16th, 2008
  3.1 Example: Making change
  3.2 Example: Scheduling time
  3.3 Example: Knapsack problem
4 Sep 18, 2008: MISSING
5 Sep 23, 2008: Divide and Conquer
  5.1 Solving Recurrence Relations
    5.1.1 "Unrolling" a recurrence
    5.1.2 Guess an answer, prove by induction
    5.1.3 Changing Variables
    5.1.4 Master Theorem
6 Sep 25, 2008
  6.1 Assignment Info
  6.2 Divide & Conquer Algorithms
    6.2.1 Counting Inversions
    6.2.2 Multiplying Large Numbers
7 Sep 30, 2008
  7.1 D&C: Multiplying Matrices
  7.2 D&C: Closest pair of points
  7.3 Hidden Surface Removal
8 Oct 2nd, 2008
  8.1 Dynamic Programming
  8.2 Second example: optimum binary search trees
9 Oct 7th, 2008
  9.1 Example 2: Minimum Weight Triangulation
10 Oct 9th, 2008
  10.1 Dynamic Programming
  10.2 Certain types of subproblems
  10.3 Memoization
11 Oct 14th, 2008
  11.1 Graph Algorithms
  11.2 Minimum Spanning Trees
12 Oct 16th, 2008
  12.1 Graph Algorithms
    12.1.1 Prim's Algorithm
  12.2 Shortest Paths
13 Oct 21, 2008
  13.1 All Pairs Shortest Path
    13.1.1 Floyd-Warshall Algorithm
14 Oct 23, 2008
  14.1 Dijkstra's Algorithm
  14.2 Connectivity in Graphs
    14.2.1 Finding 2-connected components
15 Oct 28th, 2008
  15.1 Backtracking and Branch/Bound
  15.2 Branch-and-Bound
    15.2.1 Branch and Bound TSP Algorithm
16 Oct 30th, 2008
  16.1 Recall
  16.2 Lower Bounds
    16.2.1 Basic Techniques
    16.2.2 State-of-the-Art in Lower Bounds
  16.3 Polynomial Time
  16.4 Reductions
17 Nov 4th, 2008
  17.1 Decision Problems
  17.2 P or NP?
  17.3 Properties
18 Nov 6th, 2008
  18.1 Recall
  18.2 NP-Complete
    18.2.1 Circuit Satisfiability
    18.2.2 3-SAT
19 Nov 11th, 2008
  19.1 Satisfiability – no restricted form
  19.2 Independent Set
  19.3 Vertex Cover
  19.4 Set-Cover Problem
  19.5 Road map of NP-Completeness
  19.6 Hamiltonian Cycle
20 Nov 13th, 2008
  20.1 Undirected Hamiltonian Cycle
  20.2 TSP is NP-complete
  20.3 Subset-Sum is NP-Complete
21 Nov 18th, 2008
  21.1 Major Open Questions
  21.2 Undecidability
    21.2.1 Examples
22 Nov 20th, 2008
  22.1 Undecidability
  22.2 History of Undecidability
23 Nov 25th, 2008
  23.1 Undecidability
  23.2 Other Undecidable Problems
    23.2.1 Halt-No-Input or Halt-on-Empty
    23.2.2 Program Verification
    23.2.3 Other Problems (no proofs)
24 Nov 27th, 2008
  24.1 What to do with NP-complete problems
  24.2 P vs. NP
1 Sep 9th, 2008
1.1 Welcome to CS 341: Algorithms, Fall 2008
I’m Anna Lubiw, I’ve been in this department/school quite some time. This term I’m teaching both sections of
CS 341. I find the earlier lecture is better though, which may be counterintuitive.
There are fewer assignments this term. There are also fewer grad TAs, so the assignments may be shorter (but quite likely, not any easier!)
Textbook is CLRS. $140 in the bookstore, on reserve in the library.
1.2 Marking Scheme
25% Midterm
40% Final exam
35% Assignments
We have due dates for assignments already (see the website.) Unlike in 2nd year courses where ISG keeps everything
coordinated, in third year we’re on our own.
1.3 Course Outline
Where does this word come from? From al-Khwārizmī, the 9th-century Persian mathematician/scientist (not sure what to call him back then) whose procedures for arithmetic were the original algorithms.
In this course, we’re looking for the best algorithmic solutions to problems. Several aspects:
1. How to design algorithms
i.e. what shortest-path algorithm to use for street-level walking directions.
(a) Greedy algorithms
(b) Divide and Conquer
(c) Dynamic Programming
(d) Reductions
2. Basic Algorithms (often domain specific)
Anyone educated in algorithms needs to have a general repertoire of algorithms to apply in solving new
problems
(a) Sorting (from first year)
(b) String Matching (CS 240)
3. How to analyze algorithms
i.e. do we run it on examples, or try a more theoretical approach
(a) How good is an algorithm?
(b) Time, space, goodness (of an approximation)
4. You are expected to know
(a) O notation, worst case/avg. case
(b) Models of computation
5. Lower Bounds
This is not a course on complexity theory, which is where people really get excited about lower bounds, but
you need to know something about this.
(a) Do we have the best algorithm?
(b) Models of computation become crucial here.
(c) NP-completeness (how many of you have secret ambitions to solve this? I started off wanting to solve
it, before it was known it was so hard...)
1.4 A Case Study (Convex Hull)
To bound a set of points in 2D space, we can find the max/min X,Y values and make a box that contains all the
points. A convex hull is the smallest convex shape containing the points (think the smallest set of points that we
can connect in a ring that contains all the other points.) Analogy: putting an elastic band around the points, or
in three dimensions putting shrink-wrap around the points.
Why? This is a basic computational geometry problem. The convex hull gives an approximation to the shape of
a set of points better than a minimum bounding box. Arises when digitizing sculptures in 3D, or maybe while
doing OCR character recognition in 2D.
1.4.1 Algorithm
Definition (better from an algorithmic point of view)
A convex hull is a polygon whose sides lie on lines that pass through at least two of the points and have all the remaining points on one side.
A straightforward algorithm (sometimes called a brute force algorithm, but that gives them a bad name because oftentimes the straightforward algorithms are the way to go) – for all pairs of points r, s, find the line through r and s, and if all other points lie on one side only then the segment rs is part of the convex hull.
Time for n points: O(n^3).
Aside: even with this there are good and bad ways to "see which side points are on." Computing the slopes of the lines is actually a bad way to do this. Exercise: for r, s, and p, decide which side p is on in the fewest steps, avoiding underflow/overflow/division.
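A minimal sketch of the standard fix (the helper name is mine, not from the lecture): use the sign of a cross product, which needs only multiplications and subtractions – no slopes, no division.

def side_of_line(r, s, p):
    # Sign of the cross product (s - r) x (p - r):
    # > 0 if p is left of the directed line r -> s, < 0 if right, 0 if collinear.
    # Two multiplications and five subtractions; no division, no slope.
    return (s[0] - r[0]) * (p[1] - r[1]) - (s[1] - r[1]) * (p[0] - r[0])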
Improvement Given one line ℓ (through r and s), there is a natural "next" line ℓ′: rotate about s until the line hits the next point t.

[Figure: line ℓ through r and s rotating about s to ℓ′ through t]

t is an "extreme point" (minimum angle α). Finding it is like finding a max (or min) – O(n). Time for n points: O(n^2).
Actually, if h = the number of points on the convex hull, the algorithm takes O(nh).
Can we do even better? (you bet!)
Repeatedly finding a min/max (which should remind you of sorting.)
Example Sort the points by x coordinate, and then find the ”upper convex hull” and ”lower convex hull” (each of
which comes in sorted order.)
The sorting will cost O(n log n) but the second step is just linear. We don't quite have a linear algorithm here but this will be much better. Process from left to right, adding points and each time figuring out whether you need to
go ”up” or ”down” from each point.
This is a case of using a reduction (which we will study a lot in this course)
Time for n points: O(n log n).
One more algorithm
Will not be better than O(n log n). Why not? We'll show soon, but the intuition is that we'll have to sort the points somehow. In three-dimensional space you can still get O(n log n) algorithms for this, but not the same way. This answer uses divide and conquer.
[Figure: two recursively computed hulls joined by an upper bridge and a lower bridge]
1. Divide points in half by vertical line.
2. Recursively find convex hull on each side.
3. Combine by finding upper and lower bridges.
Starting from the edge e between the point of maximum x coordinate on the left and the point of minimum x coordinate on the right, "walk up" to get the upper bridge, and "walk down" to get the lower bridge.
This will be O(n) to divide, and O(n) to find the upper/lower bridges. Get recurrence relation:
    T(n) = 2T(n/2) + O(n)

This is the same as e.g. merge-sort. It comes out to O(n log n).
Never Any Better Finally, let's talk ever-so-slightly about whether we can do better than O(n log n). In some sense, no: if we could find a convex hull faster, we could sort faster.
Technique: put the points on a parabola (or alternately another shape) with the map x → (x, x^2) and compute the convex hull of these points. From there, recover the sorted order. This is an intuitive argument. To be rigorous, we need to specify the model of computation. We need a restricted model to say that sorting is Ω(n log n) – but we need the power of indirect addressing. (Don't worry if that seems fuzzy. The take-home message is that to be precise we need to spend more time on models of computation.)
Measuring in terms of n, the input size, and h, the output size: we saw an O(n log n) algorithm and an O(nh) algorithm. Which is better? Well, it depends on whether h > log n or not.
One paper, called "The ultimate convex hull algorithm?" (with a question mark in the name, very unusual), gave an algorithm that runs in O(n log h).
Challenge Look up the O(n log h) algorithm by Timothy Chan (here in SCS) and try to understand it.
2 Sep 11th, 2008
Missing.
3 Sep 16th, 2008
Assignment 1 is available online.
3.1 Example: Making change
Suppose you want to pay $3.47 in as few coins as possible. This takes seven coins (with Canadian coins: a toonie, a loonie, a quarter, two dimes, and two pennies), and I claim this is the minimum number of coins. On the assignment you must prove this is in fact true.
3.2 Example: Scheduling time
Interval scheduling, or ”activity selection.” The goal is to maximize the number of activities we can perform.
Given activities, each with an associated time interval, pick non-overlapping activities.
Greedy Approaches
• Pick the first activity
NO
• Pick the shortest activity
NO
• Pick one with the fewest overlaps
NO
• Pick the one that ends earliest
YES
We can write the algorithm as
Sort activities by finish time
A <- empty set
for i = 1..n
    if activity i doesn't overlap any activities in A
        A <- A union { i }
end

This is an O(n log n) algorithm (it takes that long to sort, and then O(n) after that).
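A runnable sketch of the greedy, assuming each activity is a (start, finish) pair (names mine):

def max_activities(intervals):
    # Greedy: repeatedly take the activity that ends earliest among
    # those compatible with what has been chosen so far.
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:        # doesn't overlap anything chosen
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(max_activities([(1, 3), (2, 8), (4, 6)]))   # [(1, 3), (4, 6)]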
Correctness Proof
There are three approaches to proving correctness of greedy algorithms.
• Greedy does better at each step.
• Exchange: suppose there is an optimal solution; it can be transformed into the greedy solution.
• Matroids (a formalization of when Greedy approaches work) (in C&O)
Theorem This algorithm returns a maximum size set A of non-overlapping intervals.
Proof Let A = {a_1, . . . , a_k}, ordered by finish time (i.e. in the order the greedy algorithm chooses them.) Let B = {b_1, . . . , b_l} be any other set of non-overlapping intervals, ordered by finish time.
We want to show l ≤ k. Suppose that l > k; we show that the greedy algorithm would not have stopped at k.
Claim a_1, . . . , a_i, b_{i+1}, . . . , b_l is also a solution.
Proof By induction on i. Base case i = 0: b_1, b_2, . . . , b_l is a solution. Inductive case: a_1, . . . , a_{i-1}, b_i, . . . , b_l is a solution; prove that a_1, . . . , a_i, b_{i+1}, . . . , b_l is a solution, i.e. we're swapping b_i out and a_i in.
Well, b_i does not overlap a_{i-1} by assumption. So when we chose a_i, b_i was a candidate – and we chose a_i. So finish(a_i) ≤ finish(b_i), therefore a_i doesn't overlap b_{i+1}, . . . , b_l, so the swap is OK.
Exercise: go through the picture.
That proves the claim. To prove the theorem: if l > k then by the claim a_1, . . . , a_k, b_{k+1}, . . . , b_l is a solution. But then the greedy algorithm would not have stopped at a_k.
Therefore l ≤ k and greedy gives the optimal solution.
3.3 Example: Knapsack problem
I have items 1, . . . , n. Item i has weight w_i and value v_i. There is a weight limit W for the knapsack. Pick items of total weight ≤ W maximizing the sum of the values.
There are two versions:
• 0-1 Knapsack: the items are indivisible (e.g. tent)
• Fractional: items are divisible (e.g. oatmeal)
We’ll look at 0-1 Knapsack later (since it’s harder) (and when we study dynamic programming)
So imagine we have a table of items, with W = 8:

    Item   Weight w_i   Value v_i
    1      6            12
    2      4            7
    3      4            6

Greedy by v_i/w_i. For the 0-1 knapsack:
• Greedy picks item 1 (and then nothing else fits) – value 12
• Optimal solution is items 2 and 3 – value 13
For the fractional case:
• Take all of item 1, half of item 2
Greedy Algorithm
Order items 1, . . . , n by decreasing v_i/w_i. Let x_i be the amount (weight) of item i that we choose.

free-W <- W
for i = 1..n
    x_i <- min{ w_i, free-W }
    free-W <- free-W - x_i
end
Then Σ x_i = W (assuming W < Σ w_i), and the value we get is

    Σ_{i=1}^n (v_i / w_i) · x_i

Note: the solution looks like it's for 0-1. The only item we take fractionally is the last.
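A runnable sketch of the greedy, with items as (weight, value) pairs (names mine):

def fractional_knapsack(items, W):
    # Take items in decreasing order of density v/w; the last item
    # taken may be fractional.
    free = W
    total_value = 0.0
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        x = min(w, free)                # x_i <- min{ w_i, free-W }
        total_value += (v / w) * x
        free -= x
        if free == 0:
            break
    return total_value

# The table above: W = 8, items (6,12), (4,7), (4,6).
# Greedy takes all of item 1 and half of item 2: 12 + 3.5 = 15.5.
print(fractional_knapsack([(6, 12), (4, 7), (4, 6)], 8))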
Claim The greedy algorithm gives the optimal solution to the fractional knapsack problem.
Proof Greedy uses x_1, . . . , x_n and the optimal uses y_1, . . . , y_n. Let k be the minimum index with x_k ≠ y_k. Then y_k < x_k (because greedy took the maximum possible x_k). Since Σ x_i = Σ y_i = W, there exists an index l > k such that y_l > x_l. Idea: swap excess weight from item l to item k.
Set y′_k ← y_k + ∆ and y′_l ← y_l − ∆, where ∆ ← min{ y_l, w_k − y_k }, both terms of which are greater than zero. The sum of the weights is still Σ y′_i = W, and the value changes by

    ∆(v_k/w_k) − ∆(v_l/w_l) = ∆(v_k/w_k − v_l/w_l) > 0

since v_k/w_k > v_l/w_l because k < l in the greedy (density) order. Thus y′ is an even better solution, so our assumption that the optimum beats greedy fails.
4 Sep 18, 2008: MISSING
5 Sep 23, 2008: Divide and Conquer
I started with Greedy because it’s fun to get to some interesting algorithms right away. Divide and conquer however
is likely the one you’re most familiar with. Sorting and searching are often divide-and-conquer algorithms.
The steps are:
• Divide – break problem into smaller subproblems
• Recurse – solve smaller sets of problems
• Conquer/Combine – ”put together” solutions from smaller subproblems
Some examples are:
• Binary search
  – Divide: Pick the middle item
  – Recurse: Search one side – only one subproblem, of size n/2
  – Conquer: No work
  – Recurrence relation: T(n) = T(n/2) + 1, or more formally T(n) = max{ T(⌈n/2⌉), T(⌊n/2⌋) } + 1
  – Time: T(n) ∈ O(log n)
• Merge sort
  – Divide: basically nothing
  – Recurse: Two subproblems of size n/2
  – Conquer: n − 1 comparisons
  – Recurrence: T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + (n − 1), with T(1) = 0 comparisons
  – Time: T(n) ∈ O(n log n)
5.1 Solving Recurrence Relations
Three approaches, all of which are in CLRS.
5.1.1 ”Unrolling” a recurrence
Use

    T(n) = 2T(n/2) + n − 1   for n even
    T(1) = 0

So for n a power of 2,

    T(n) = 2T(n/2) + n − 1
         = 2( 2T(n/4) + n/2 − 1 ) + n − 1
         = 4T(n/4) + 2n − 3
         ...
         = 2^i T(n/2^i) + i·n − (2^i − 1)     [the last term is Σ_{j=0}^{i−1} 2^j]

We want n/2^k = 1, i.e. 2^k = n, k = log n. Then

    T(n) = 2^k T(n/2^k) + k·n − (2^k − 1)
         = n·T(1) + n log n − n + 1
         = n log n − n + 1 ∈ O(n log n)
If our goal is to say that mergesort takes O(n log n) for all n (as opposed to exactly computing T(n)), then we can just add that T(n) ≤ T(n′) where n′ = the smallest power of 2 bigger than n.
If we really did want to compute T(n) exactly, then

    T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + n − 1,   T(1) = 0

and the exact solution is

    T(n) = n⌈log n⌉ − 2^⌈log n⌉ + 1
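A quick sanity check of that closed form – a sketch, with math.ceil(math.log2(n)) playing the role of ⌈log n⌉:

import math

def T(n):
    # The mergesort comparison recurrence, computed directly.
    if n == 1:
        return 0
    return T(math.ceil(n / 2)) + T(n // 2) + n - 1

def closed_form(n):
    k = math.ceil(math.log2(n))
    return n * k - 2**k + 1

# The recurrence and the closed form agree for all small n.
assert all(T(n) == closed_form(n) for n in range(1, 200))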
5.1.2 Guess an answer, prove by induction
Again for the mergesort recurrence, prove that T(n) ∈ O(n log n).
Be careful: prove by induction that T(n) ≤ c·n log n for some constant c. Often you don't know c until you're working on the problem.
A good trick for avoiding ⌊·⌋ and ⌈·⌉ is to deal separately with n even and n odd.
For n even,

    T(n) = 2T(n/2) + n − 1
         ≤ 2( c·(n/2)·log(n/2) ) + n − 1     (by induction)
         = c·n(log n − log 2) + n − 1
         = c·n log n − c·n + n − 1
         ≤ c·n log n                          if c ≥ 1

I'll leave the details as an exercise (we need a base case, and need to do the case of n odd) for those of you for whom this is not entirely intuitive.
Another example:

    T(n) = 2T(n/2) + n

Claim T(n) ∈ O(n). Prove T(n) ≤ c·n for some constant c. Assume by inductive hypothesis that T(n′) ≤ c·n′ for n′ < n. Inductive step:

    T(n) = 2T(n/2) + n ≤ 2c·(n/2) + n = (c + 1)n

Wait – constants aren't supposed to grow like c + 1 above. This proof is fallacious (and indeed this recurrence is Θ(n log n), not O(n)). Please do not make this kind of mistake on your assignments.
Example 2:

    T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 1,   T(1) = 1

Let's guess T(n) ∈ O(n). Prove by induction that T(n) ≤ c·n for some c. Induction step:

    T(n) ≤ c⌈n/2⌉ + c⌊n/2⌋ + 1 = c·n + 1   – we've got trouble from that +1

Let's try unrolling for n a power of 2:

    T(n) = 2T(n/2) + 1
         = 4T(n/4) + 2 + 1
         ...
         = 2^k T(n/2^k) + Σ_{i=0}^{k−1} 2^i     (n = 2^k)
         = n·T(1) + 2^k − 1
         = 2n − 1

So try proving by induction that T(n) ≤ c·n − 1. In that case we have

    T(n) ≤ ( c⌈n/2⌉ − 1 ) + ( c⌊n/2⌋ − 1 ) + 1 = c·n − 1

This matches perfectly.
Message: Sometimes we need to strengthen the inductive hypothesis by lowering the bound.
5.1.3 Changing Variables
Suppose we have a mystery algorithm with recurrence

    T(n) = 2T(⌊√n⌋) + log n     (and ignore the floor)

Substitute m = log n, n = 2^m, and we have

    T(2^m) = 2T(2^{m/2}) + m

Let S(m) = T(2^m); then S(m) = 2S(m/2) + m. We can say

    S(m) ∈ O(m log m)
    T(2^m) ∈ O(m log m)
    T(n) ∈ O(log n · log log n)
5.1.4 Master Theorem
From MATH 239, you saw linear recurrences T(n) = a_{n−1}T(n−1) + a_{n−2}T(n−2) + · · · + a_1 T(1) + f(n); they are "homogeneous" when the non-recursive term f(n) is zero. That never happens in algorithms (because we always have some work to do!)
We need

    T(n) = a·T(n/b) + c·n^k

The more general case where c·n^k is replaced by f(n) is handled in the textbook. We'll first look at k = 1:

    T(n) = a·T(n/b) + c·n

Results (exact) are:

    a = b:  T(n) ∈ Θ(n log n)
    a < b:  T(n) ∈ Θ(n)
    a > b:  T(n) ∈ Θ(n^{log_b a})   – the final term dominates n log n
Theorem If T(n) = a·T(n/b) + c·n^k with a ≥ 1, b > 1, c > 0, k ≥ 1, then

    T(n) ∈ Θ(n^k)            if a < b^k
    T(n) ∈ Θ(n^k log n)      if a = b^k
    T(n) ∈ Θ(n^{log_b a})    if a > b^k
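A tiny sketch that just mechanizes the three cases (the helper is mine, not from the lecture):

import math

def master_theorem(a, b, k):
    # Classify T(n) = a*T(n/b) + c*n^k by comparing a with b^k.
    if a < b**k:
        return f"Theta(n^{k})"
    if a == b**k:
        return f"Theta(n^{k} log n)"
    return f"Theta(n^{math.log(a, b):.3f})"    # exponent is log_b a

print(master_theorem(2, 2, 1))   # mergesort: Theta(n^1 log n)
print(master_theorem(3, 2, 1))   # Theta(n^1.585)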
We’re not going to do a rigorous proof but we’ll do enough to give you some intuition. We’ll use unrolling. The
rigorous way is through induction.
    T(n) = a·T(n/b) + c·n^k
         = a( a·T(n/b^2) + c·(n/b)^k ) + c·n^k
         = a^2·T(n/b^2) + a·c·(n/b)^k + c·n^k
         = a^3·T(n/b^3) + a^2·c·(n/b^2)^k + a·c·(n/b)^k + c·n^k
         ...
         = a^t·T(1) + Σ_{i=0}^{t−1} a^i · c·(n/b^i)^k       where n = b^t, t = log_b n
         = n^{log_b a}·T(1) + c·n^k · Σ_{i=0}^{t−1} (a/b^k)^i

using a^{log_b n} = n^{log_b a}. It comes out exactly like that sum in your assignment.
Just to wrap up: if a < b^k, i.e. log_b a < k, the sum is bounded by a constant and n^k dominates. If a = b^k, the sum is log_b n and we get Θ(n^k log n). The third case is when a > b^k, and then n^{log_b a} dominates.
6 Sep 25, 2008
6.1 Assignment Info
Assignment 1 is due Friday at 5PM in the assignment boxes.
Q5. US = UC.
Q2a. In CS240 we learned to take the log of n + 1. "How is the number of bits going to grow" is a much nicer angle. There is a reason that √n and ⌊√n⌋ are in the list.
Q3. (e) (f) See the newsgroup and website. D(i, j, l). Shortest path length from i to j using at most l edges but
formula is exactly l edges. Either assumption is fine. State clearly which one you are using. Same issue in (e) but
if you use exactly you may find that you don’t save. Use ”at most” if you haven’t started.
So we aren’t planning on marking every question. We will provide solutions for everything, however. The unmarked
questions are likely to appear on midterms or finals.
Q4. If you want examples of coin systems, go look around the Internet. Don’t get your proof from the Internet,
but examples of systems is fine.
Q5. How efficient? Well, you probably have to sort, so you probably won't get better than O(n log n). Try to beat O(n^2).
Q4, Q5, and Q6 each call for a counterexample and a proof.
Please just come to office hours instead of asking too many questions over e-mail.
6.2 Divide & Conquer Algorithms
6.2.1 Counting Inversions
Comparing two people’s rankings of n items – books, music, etc. Useful for web sites giving recommendations
based on similar preferences.
Suppose my ranking is BDCA, and yours is ADBC from best to worst. We’d like a measure of how similar these
lists are. We can count inversions: on how many pairs do we disagree? Here there are four pairs where we disagree:
BD, BA, DA, CA and two where we agree: BC, DC.
Equivalently, we can say: given a_1, a_2, . . . , a_n, a permutation of 1 . . . n, count the number of inversions, i.e. the number of pairs a_i, a_j with i < j but a_i > a_j.

Brute Force: Check all C(n,2) pairs, taking O(n^2).

Divide & Conquer: Divide the list in half, with m = ⌊n/2⌋:

    A = a_1 . . . a_m        B = a_{m+1} . . . a_n

Recursively count

    r_A = # inversions in A
    r_B = # inversions in B

The final answer is r_A + r_B + r, where r = the number of inversions a_i, a_j with i ≤ m, j ≥ m + 1 and a_i > a_j. For each j = m + 1 . . . n let r_j = # of such pairs involving a_j; then r = Σ_{j=m+1}^n r_j.

Strengthen the recursion – sort the list, too. If A and B are sorted, we can compute the r_j's during the merge.
Sort-and-Count(L): returns sorted L and # of inversions
    Split L into halves A and B
    (r_A, A) <- Sort-and-Count(A)
    (r_B, B) <- Sort-and-Count(B)
    r <- 0
    merge A and B
        when an element is moved from B to the output list:
            r <- r + # elements left in A
    end
    return (r_A + r_B + r, the merged list)
Runtime:

    T(n) = 2T(n/2) + O(n)

Since it's the same as mergesort, we get O(n log n). Can we do better?
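A runnable sketch of Sort-and-Count (names mine):

def sort_and_count(lst):
    # Returns (number of inversions, sorted copy of lst).
    n = len(lst)
    if n <= 1:
        return 0, lst[:]
    r_a, A = sort_and_count(lst[: n // 2])
    r_b, B = sort_and_count(lst[n // 2 :])
    merged, r, i, j = [], 0, 0, 0
    while i < len(A) and j < len(B):
        if A[i] <= B[j]:
            merged.append(A[i]); i += 1
        else:
            r += len(A) - i     # B[j] is inverted with everything left in A
            merged.append(B[j]); j += 1
    merged.extend(A[i:]); merged.extend(B[j:])
    return r_a + r_b + r, merged

print(sort_and_count([2, 4, 1, 3, 5]))   # (3, [1, 2, 3, 4, 5])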
6.2.2 Multiplying Large Numbers
The school method:
981
1234
------
3924
2943
1962
981
-------
1210554
O(n^2) for two n-digit numbers (one step is × or + for two digits).
There is a faster way using divide-and-conquer. First pad 981 to 0981, and split each number into two-digit halves:

    [09 81] × [12 34]

Then calculate four products, each shifted by the appropriate power of 10:

    09 × 12   (shift 10^4)  → 108
    09 × 34   (shift 10^2)  → 306
    81 × 12   (shift 10^2)  → 972
    81 × 34   (shift 10^0)  → 2754
    -----------------------------
                              1210554
The runtime here is

    T(n) = 4T(n/2) + O(n)

Apply the Master Method.
    T(n) = a·T(n/b) + c·n^k

Here a = 4, b = 2, k = 1. Compare a with b^k: we see a = 4 > b^k = 2, so we have runtime Θ(n^{log_b a}) = Θ(n^2).
So far we have not made progress!
We can get by with fewer than four multiplications. Write the numbers as 10^2·w + x and 10^2·y + z:

    (10^2·w + x)(10^2·y + z) = 10^4·wy + 10^2·(wz + xy) + xz

Note we need wz + xy, not the two terms individually. Look at

    (w + x)(y + z) = wy + wz + xy + xz

We know wy and xz, but we want wz + xy. This leads to three multiplications:

    p = wy = 09 × 12 = 108
    q = xz = 81 × 34 = 2754
    r = (w + x)(y + z) = 90 × 46   [that's (09 + 81) × (12 + 34)] = 4140

Answer: 10^4·p + 10^2·(r − p − q) + q:

    108____
     1278__
       2754
    -------
    1210554
We can apply this as a basis for a recursive algorithm. We'll get

    T(n) = 3T(n/2) + O(n)

From the master theorem, now we have a = 3, b = 2, k = 1, and since a > b^k,

    T(n) ∈ Θ(n^{log_2 3}) ≈ Θ(n^{1.585...})
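A minimal sketch of the recursion on Python integers, with base 10 and a one-digit cutoff (names mine):

def karatsuba(x, y):
    # Multiply non-negative integers using 3 half-size products.
    if x < 10 or y < 10:
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    w, x_lo = divmod(x, 10**half)       # x = w * 10^half + x_lo
    y_hi, z = divmod(y, 10**half)       # y = y_hi * 10^half + z
    p = karatsuba(w, y_hi)
    q = karatsuba(x_lo, z)
    r = karatsuba(w + x_lo, y_hi + z)
    return p * 10**(2 * half) + (r - p - q) * 10**half + q

print(karatsuba(981, 1234))   # 1210554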
Practical Issues
• What if n is odd?
• What about two numbers with different digit counts?
• How small do you let the recursion get? (Answer: hardware word)
• What about different bases?
• When is this algorithm useful? (For about 1,000 digits or fewer, don’t use it [BB])
– Schönhage–Strassen is better for very large numbers; it runs in O(n log n log log n)
7 Sep 30, 2008
Assignment 2 is available.
7.1 D&C: Multiplying Matrices:
Multiplying two square matrices. The basic method takes n^3 operations (and in some sense n^2 is the best you could hope for, since you need to write n^2 numbers in the result!)
Basic D&C
Divide each matrix into n/2 × n/2 blocks:

    [ A  B ] [ E  F ]   [ I  J ]
    [ C  D ] [ G  H ] = [ K  L ]

where I = AE + BG, etc. Each of the four output blocks needs 2 subproblems, plus O(n^2) additions:

    T(n) = 8T(n/2) + O(n^2)

By the master theorem, a = 8, b = 2, k = 2, and a = 8 > b^k = 4 (the case where the recursive work dominates), so T(n) ∈ Θ(n^{log_b a}) = Θ(n^3).
Strassen's Algorithm shows how to get by with just seven (a = 7) subproblems. Not discussing it here, but if you're curious it's in the textbook. This gives

    T(n) = 7T(n/2) + O(n^2)

This is Θ(n^{log_2 7}) ≈ O(n^{2.8...}). There are more complicated algorithms that get even better results (only for very large n, however).
7.2 D&C: Closest pair of points
Divide and Conquer is very useful for geometric problems. For example, given n points in a plane, select the
closest two by Euclidean distance. (There are other measures, including the ”Manhattan distance” which is the
distance assuming you can’t cross city blocks.)
Generally, we assume that arithmetic is unit cost. For this problem we don’t need to make that assumption.
In one dimension, consider {10, 5, 17, 100}. How would we do this? Sort and compare adjacent numbers.
In a plane, we can use brute force, and that's O(n^2). What about
• Sorting by position on one axis?
  Nope!
What's the way?
(1) Divide points into left/right at the median x coordinate. Most efficient to sort once by x coordinate. Then we can find a line L in O(1) time.
(2) Recurse on the two halves Q and R:

    δ = min { closest pair in Q, closest pair in R }

The solution is the minimum of δ and the closest pair crossing L. We need to find pairs q ∈ Q, r ∈ R with d(q, r) < δ.
Claim If q ∈ Q, r ∈ R and d(q, r) < δ then d(q, L) < δ and d(r, L) < δ (i.e. q, r lie in a strip of width 2δ around L.)
Proof If otherwise, suppose q is outside its strip. Then d(q, r) ≥ the horizontal distance from q to r ≥ δ.
Now let S be the points in the strip of width 2δ. We can restrict our search to S. But S can be all the points! Our hope is that if we sort S by y coordinate, then any pair q ∈ Q, r ∈ R with d(q, r) < δ are near each other in sorted order.
Claim A δ × δ square to the left of L can have at most 4 points in it.
Because every two points in the square have distance ≥ δ, we can fit four points, but only in the four corners. Therefore you can't fit five.
Claim If S is sorted by y coordinate and q ∈ Q and r ∈ R with d(q, r) < δ, then they are at most seven positions apart in sorted order.
Total algorithm (see the sketch below):
• Sort by x
• Sort by y
• T(n) = 2T(n/2) + O(n) ∈ O(n log n)
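A compact runnable sketch. It re-sorts the strip by y at every level, so it runs in O(n log^2 n) rather than the O(n log n) of the pre-sorted version (names mine):

import math

def closest_pair(points):
    # Smallest pairwise distance among >= 2 points.
    pts = sorted(points)                          # sort once by x
    def solve(P):
        if len(P) <= 3:                           # brute force small cases
            return min(math.dist(p, q)
                       for i, p in enumerate(P) for q in P[i + 1:])
        m = len(P) // 2
        x_mid = P[m][0]
        d = min(solve(P[:m]), solve(P[m:]))
        # Points within d of the dividing line, sorted by y.
        strip = sorted((p for p in P if abs(p[0] - x_mid) < d),
                       key=lambda p: p[1])
        for i, p in enumerate(strip):
            for q in strip[i + 1:i + 8]:          # <= 7 positions apart
                d = min(d, math.dist(p, q))
        return d
    return solve(pts)

print(closest_pair([(0, 0), (5, 4), (3, 1), (7, 7)]))   # sqrt(10) ~ 3.162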
A more general problem – given n points, find the closest neighbour of each one. This can be done in O(n log n) (not obvious):
• Voronoi diagrams
• Delaunay triangulations
Used in mesh generation.
7.3 Hidden Surface Removal
(a baby version of it, at least.) Find the "upper envelope" of a set of n lines in O(n log n) by divide & conquer.
8 Oct 2nd, 2008
8.1 Dynamic Programming
Weighted Interval Scheduling. Recall, interval scheduling aka activity selection aka packing of intervals. Pick the
max. number of disjoint intervals.
Generalization – each interval i has a weight w(i). Pick disjoint intervals to maximize the sum of the weights.
What if we try to use Greedy?
• Pick maximum weight – fails
An even more general problem: given a graph G = (V, E) with weights on the vertices, pick a set of vertices, no two joined by an edge, maximizing the sum of the weights. (Make G with a vertex for each interval and an edge when two intervals overlap.)
A general idea: for interval (or vertex) i, either we use it or we don't. Let OPT(I) = a max-weight non-overlapping subset of the intervals I, and W-OPT(I) = the sum of the weights of the intervals in OPT(I).
If we don't use i: OPT(I) = OPT(I \ {i}).
If we use i: OPT(I) = {i} ∪ OPT(I′), where I′ = the set of intervals that don't overlap i.
This leads to a recursive algorithm:

    W-OPT(I) = max{ W-OPT(I \ {i}), w(i) + W-OPT(I′) }
    T(n) = 2T(n − 1) + O(1)

But this is exponential time.
Essentially we are trying all possible subsets of n items – all 2^n of them.
For intervals (but not for the general graph problem) we can do better. Order the intervals 1, . . . , n by their right endpoints.
If we choose interval n, then I′ = all intervals disjoint from n – this has the form 1, 2, . . . , j for some j.

    W-OPT(1..n) = max( W-OPT(1..n−1), w(n) + W-OPT(1..p(n)) )

where p(n) = the max index j such that interval j doesn't overlap n. More generally,

    p(i) = max index j < i such that interval j doesn't overlap i
    W-OPT(1..i) = max( W-OPT(1..i−1), w(i) + W-OPT(1..p(i)) )

This leads to an O(n) time algorithm. Note: don't use recursion blindly – the same subproblem may be solved many times in your program.
Solution Use memoized recursion (see text.) OR, use an iterative approach.
Let’s look at an algorithm using the second approach.
notation M[i] = W-OPT(1..i)
M[0] = 0
for i = 1..n
    M[i] = max{ M[i-1], w(i) + M[p(i)] }
end
Runtime is O(n). What about computing p(i) for i = 1..n? Sorting by right endpoint is O(n log n). To find the p(i), sort by left endpoint as well. Then – Exercise: in O(n) time find all p(i), i = 1..n.
So far this algorithm finds W-OPT but not OPT (i.e. the weight, not the actual set of items.)
One possibility: enhance the above loop to keep the set OPT(1..i). The danger here is that storing n sets of size n needs n^2 space.
One solution: first compute M as above. Then call OPT(n).
recursive fun OPT(i)
    if i = 0 then return {}            -- base case
    if M[i-1] >= w(i) + M[p(i)]
        then return OPT(i-1)           -- interval i is not needed
        else return { i } union OPT(p(i))
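A runnable sketch putting the pieces together, with intervals as (start, finish, weight) triples. It finds p(i) by binary search instead of the two-sort trick, so the total is still O(n log n); touching intervals are treated as compatible (names mine):

import bisect

def weighted_interval_scheduling(intervals):
    # Returns (best total weight, the chosen intervals).
    ivs = sorted(intervals, key=lambda t: t[1])   # by finish time
    finishes = [f for _, f, _ in ivs]
    # p[i] = # of intervals finishing no later than ivs[i]'s start.
    p = [bisect.bisect_right(finishes, s) for s, _, _ in ivs]
    n = len(ivs)
    M = [0] * (n + 1)                             # M[i] = W-OPT(1..i)
    for i in range(1, n + 1):
        s, f, w = ivs[i - 1]
        M[i] = max(M[i - 1], w + M[p[i - 1]])
    chosen, i = [], n                             # trace back, as in OPT(n)
    while i > 0:
        s, f, w = ivs[i - 1]
        if M[i - 1] >= w + M[p[i - 1]]:
            i -= 1
        else:
            chosen.append(ivs[i - 1])
            i = p[i - 1]
    return M[n], chosen

print(weighted_interval_scheduling([(1, 4, 3), (3, 5, 2), (4, 7, 3)]))
# (6, [(4, 7, 3), (1, 4, 3)])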
8.2 Second example: optimum binary search trees
Store values 1, . . . , n in the leaves of a binary tree (in order.) Given the probability p_i of searching for i, build a binary search tree minimizing the expected search cost

    Σ_{i=1}^n p_i · depth(i)
Note: In CS 240 you did dynamic binary search trees – insert, delete, and rebalancing to control depth. This is different in that we have the items and probabilities ahead of time.
The difference from Huffman coding (a similar problem) is that for Huffman codes, left-to-right order of leaves is
free.
The heart of the dynamic programming to find the optimum binary search tree: try all possible splits into 1..k and k+1..n. Subproblem: ∀ i, j, find the optimum tree for i, i+1, . . . , j.

    M[i, j] = min_{k=i..j−1} ( M[i, k] + M[k + 1, j] ) + Σ_{t=i}^{j} p_t

(the Σ p_t term accounts for every node in the subtree being one level deeper.)
Exercise: work this out.
for i = 1..n
    M[i,i] = p_i
for r = 1..n-1
    for i = 1..n-r
        -- solve for M[i, i+r]
        best <- M[i,i] + M[i+1, i+r]
        for k = i+1..i+r-1
            temp <- M[i,k] + M[k+1, i+r]
            if temp < best, best <- temp
        end
        M[i,i+r] <- best + sum_{t=i}^{i+r} p_t
(better: precompute prefix sums P[j] = sum_{t=1}^{j} p_t, then use P[i+r] - P[i-1])
Runtime? O(n^3).
9 Oct 7th, 2008
Last day, we looked at weighted interval scheduling.
Today, we’ll look at matrix chain multiplication.
The problem: compute the product of n matrices M_1 M_2 · · · M_n, where M_i is an α_{i−1} × α_i matrix.
What is the best order in which to do the multiplications?
Think about this in terms of parenthesizing the matrices in your multiplication, i.e. we could calculate ((M_1 M_2)(M_3 M_4)) or (((M_1 M_2) M_3) M_4). The number of ways to build a binary tree on leaves 1 . . . n is

    P_n = Σ_{i=1}^{n−1} P_i · P_{n−i}

These are the Catalan numbers, and P_n ∈ Ω(4^n / n^2), which is exponential.
Solve subproblems:

    m_{i,j} = min number of scalar multiplications to compute M_i · · · M_j

Let m_{ii} = 0. For m_{ij}, take the min over k = i . . . j−1: the idea is to break into the subproblems M_i · · · M_k times M_{k+1} · · · M_j, so

    m_{i,j} = min_{k=i..j−1} { m_{i,k} + m_{k+1,j} + α_{i−1} α_k α_j }
Algorithm pseudocode:
for i = 1..n
    m(i,i) = 0
end
for diff = 1..n-1
    for i = 1..n-diff
        j <- i + diff
        m(i,j) <- infinity
        for k = i..j-1
            temp <- m(i,k) + m(k+1,j) + alpha_{i-1} * alpha_k * alpha_j
            if temp < m(i,j)
                m(i,j) <- temp
        end
    end
end
The runtime is O(n^3), for the O(n^2) subproblems of O(n) each. The final answer is m(1, n); exercise: keep the best k for each (i, j) to recover the actual parenthesization.
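A runnable sketch, where dims[i] plays the role of α_i, so M_i is dims[i-1] × dims[i] (names mine):

def matrix_chain(dims):
    # Min scalar multiplications to compute M_1 ... M_n (1-indexed).
    n = len(dims) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for diff in range(1, n):
        for i in range(1, n - diff + 1):
            j = i + diff
            m[i][j] = min(m[i][k] + m[k + 1][j]
                          + dims[i - 1] * dims[k] * dims[j]
                          for k in range(i, j))
    return m[1][n]

# Matrices 10x30, 30x5, 5x60: best is (M1 M2) M3 = 1500 + 3000 = 4500.
print(matrix_chain([10, 30, 5, 60]))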
9.1 Example 2: Minimum Weight Triangulation
Problem: Given a convex polygon with vertices 1 . . . n in clockwise order, divide into triangles by adding ”chords”
– segments from one vertex to another. No two chords are allowed to cross.
The goal is to minimize the total length of the chords we use. Greedily picking the smallest chord does not work.
We will give a dynamic programming algorithm that will also work for non-convex shapes.
A more general problem is to triangulate a set of points. Find the minimum sum of lengths of edges to triangulate.
”Minimum triangulation.”
The dynamic programming approach for the convex polygon case: choosing one chord breaks down into two
subpolygons.
Notice a subset of the vertices gives a subpolygon, and we can get by looking just at subpolygons on consecutive vertices i, i+1, . . . , j. The edge (1, n) lies in some triangle with third vertex k – try all choices for k. More generally, let m(i, j) = min sum of edge lengths to triangulate the subpolygon on vertices i, i+1, . . . , j:

    m(i, j) = min_{k=i+1,...,j−1} { m(i, k) + m(k, j) } + ℓ(i, j)

where ℓ(i, j) is the length of the chord (i, j).
Let's count the perimeter as well. This doesn't hurt our optimization and it makes the base cases easier.
Base cases:

    m(i, i + 2) = ℓ(i, i + 1) + ℓ(i + 1, i + 2) + ℓ(i, i + 2)

Note: we'd better add m(i, i + 1) = ℓ(i, i + 1). And then we don't actually need the case m(i, i + 2) – it falls out of the general formula.
Algorithm:
initialize m(i,i+1)
for diff = 2..n-1
    for i = 1..n-diff
        j <- i + diff
        m(i,j) <- infinity
        for k = i+1..j-1
            t <- m(i,k) + m(k,j) + l(i,j)
            if t < m(i,j) then
                m(i,j) <- t
        end
    end
end
Runtime O(n^3): an n × n table, O(n^2) subproblems, O(n) to solve each one.
10 Oct 9th, 2008
Midterm (Mon Oct 20th): covers material up through today and a bit of next week’s material too.
10.1 Dynamic Programming
Key idea: Bottom-up method: identify subproblems and order them so that you're relying only on previously solved subproblems.
Example (Knapsack/Subset Sum)
Recall the knapsack problem: given items 1 . . . n, where item i has weight w_i and value v_i (both ∈ N), and W, the knapsack capacity, choose a subset S ⊆ {1, . . . , n} such that Σ_{i∈S} w_i ≤ W and Σ_{i∈S} v_i is maximized.
Recall fractional versus 0-1, and that a greedy algorithm works for the fractional case. For the 0-1 knapsack, no polynomial-time algorithm is known.
Note: the coin changing problem is similar to knapsack, but with multiple copies of each item allowed.
Top-down: Item n can either be IN (leaving items 1 . . . n−1 with capacity W − w_n) or OUT (items 1 . . . n−1 with capacity W) of S.
The subproblems are: for each i = 0 . . . n and w = 0 . . . W, find a subset S of items 1 . . . i such that Σ_{i∈S} w_i ≤ w and Σ_{i∈S} v_i is maximized.
How do we solve this subproblem? If w_i > w then OPT(i, w) ← OPT(i−1, w) (we can't use item i); otherwise,

    OPT(i, w) ← max{ OPT(i−1, w),               don't include i
                     v_i + OPT(i−1, w − w_i) }  include i           (*)
Pseudo-code and ordering of subproblems:

store OPT(i,w) in a matrix M[i,w], i = 0..n, w = 0..W
initialize M[0,w] := 0 for w = 0..W
for i = 1..n
    for w = 0..W
        compute M[i,w] with (*)
    end
end
M[n,W] gives the OPT value

EX: Find the opt set S.
[KT] has examples.
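A runnable sketch of the table-filling, with items as (weight, value) pairs (names mine):

def knapsack_01(items, W):
    # M[i][w] = best value using items 1..i with capacity w.
    n = len(items)
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = items[i - 1]
        for w in range(W + 1):
            M[i][w] = M[i - 1][w]                 # don't include item i
            if wi <= w:                           # include item i
                M[i][w] = max(M[i][w], vi + M[i - 1][w - wi])
    return M[n][W]

# The Sep 16 example: W = 8, items (6,12), (4,7), (4,6) -> 13 (items 2 and 3).
print(knapsack_01([(6, 12), (4, 7), (4, 6)], 8))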
Runtime: n·W·c (outer loop, inner loop, constant for (*)), i.e. O(nW).
Is this good? Does it behave like a polynomial? That depends on the size of the input. The input is v_1, . . . , v_n and w_1, . . . , w_n and W. Note that w_i ≤ W – else throw out item i. So the input size is ≤ (n + 1) log W, i.e. O(n log W) – but the running time O(nW) = O(n·2^k), where k = log W is the number of bits of W.
Intuition why this is bad: let's say we have weights .001, .002, 10, and W = 100 – the precision blows up the table.
This algorithm is called "pseudo-polynomial" because the runtime is polynomial in the value of W, not the size (number of bits) of W.
10.2 Certain types of subproblems
• Input x_1, . . . , x_n; subproblems on x_1, . . . , x_i. Number of subproblems: O(n).
• Input x_1, . . . , x_n; subproblems on x_i, x_{i+1}, . . . , x_j. Number of subproblems: O(n^2).
• Inputs x_1, . . . , x_n and y_1, . . . , y_m; subproblems on x_1, . . . , x_i and y_1, . . . , y_j. Number of subproblems: O(nm).
• Input is a rooted tree (not necessarily binary); subproblems are rooted subtrees.
Example Longest ascending subsequence.
In 5, 3, 4, 1, 6, 2 a longest ascending subsequence is 3, 4, 6.
Given a_1, . . . , a_n, find a_{i_1} < a_{i_2} < · · · < a_{i_j} with i_1 < i_2 < · · · < i_j, maximizing j.
Can we use subproblems on a_1, . . . , a_i? Let l_i = the length of the largest ascending subsequence ending with a_i. The final answer is max l_i over i = 1..n.
Consider the 2nd-last item a_j of such a subsequence: j < i and a_j < a_i. So

    l_i = max{ 1 + l_j : j < i, a_j < a_i }   (or 1 if there is no such j)

O(n^2) algorithm: n subproblems, O(n) each.
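A runnable sketch (names mine):

def longest_ascending(a):
    # l[i] = length of the longest ascending subsequence ending at a[i].
    n = len(a)
    l = [1] * n
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:
                l[i] = max(l[i], 1 + l[j])
    return max(l, default=0)

print(longest_ascending([5, 3, 4, 1, 6, 2]))   # 3, e.g. 3, 4, 6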
10.3 Memoization
Use recursion (rather than the explicit bottom-up ordering of subproblems we have used), storing each solved subproblem. The danger of plain recursion is solving the same subproblem over and over:

    T(n) = 2T(n − 1) + O(1) – exponential!

Advantage: storing solved subproblems saves time if we don't need solutions to all subproblems.
11 Oct 14th, 2008
Assignment 2 due Friday. Midterm on Mon Oct 20th, 7 PM. Alternate is during class time on Tuesday.
11.1 Graph Algorithms
A graph G = (V, E), with V a finite set of vertices and E ⊆ V × V the edges.
• Undirected graph, edge (u, v) = (v, u).
• Directed graph, order matters.
• No loops (i.e. no edge (u, u))
• No multiple edges.
We will use n or |V| for the number of vertices, and m or |E| for the number of edges.
• 0 ≤ m ≤ C(n,2) = n(n−1)/2 undirected.
• 0 ≤ m ≤ n(n−1) directed. Either way, m ∈ O(n^2).
What is a path? A sequence of vertices where every consecutive pair is joined by an edge, e.g. 3, 5, 4. A walk allows repetition of vertices and edges; a simple path does not. If there is a walk from u to v then there is a simple path from u to v.
We say that an undirected graph G is connected if for every pair of vertices, there is a path joining them. For
testing if a graph is connected, we can use DFS or BFS.
For directed graphs there are different notions of connectivity. A graph can be strongly connected – ∀u, v ∈ V there is a directed path from u to v.
Cycle: a path from u to u.
Tree: A graph that is connected but has no cycles. Note: a tree on n vertices has n −1 edges.
Storing a graph:
• Adjacency matrix: A(i, j) = 1 if there is an edge from i to j, else 0.
• Adjacency list: Vertices down the left, edge destinations in a list on the right.
Advantages and disadvantages?
• Space: n^2 for the matrix, 2m + n for the lists.
• Time to test (u, v) ∈ E: O(1) matrix; O(n), or O(log n) if sorted, in lists.
• Enumerating all edges: O(n^2) versus O(m + n).
We usually use adjacency lists – then we can (sometimes) get algorithms with runtime better than O(n^2).
11.2 Minimum Spanning Trees
Problem Given an undirected graph G = (V, E) and weights w : E → R_{≥0}, find a minimum weight subset of edges that is connected, i.e. find E′ ⊆ E such that (V, E′) is connected and w(E′) = Σ_{e∈E′} w(e) is minimized.
Claim E′ will be a tree. Otherwise E′ has a cycle; throw away an edge (u, v) of the cycle, which leaves a connected graph: any path a–b that used edge (u, v) can be rerouted along the rest of the cycle.
Almost any Greedy approach will succeed:
• Take a minimum weight edge that creates no cycle.
• Throw away a maximum weight edge that doesn't disconnect.
• Grow one connected component, always adding the minimum weight edge leaving it.
All of these are justified by one lemma:
Lemma Let V_1, V_2 be a partition of V (into two disjoint non-empty sets with union V.) Let e be a minimum-weight edge from V_1 to V_2. Then there is a minimum spanning tree that includes e.
Stronger version Let X be a set of edges contained in some minimum spanning tree, with no edge of X going from V_1 to V_2. Then some minimum spanning tree includes X together with e.
Proof Let T be a minimum spanning tree (stronger version: one containing X.) Say e = (u, v). T has a path P that connects u and v, and P must use an edge from V_1 to V_2 – say, f.
Let T′ = T ∪ {e} \ {f}: exchange e for f. Claim: T′ is it.
w(e) ≤ w(f), so w(T′) ≤ w(T). T′ is a spanning tree: P ∪ {(u, v)} makes a cycle, so we can remove f and stay connected.
Note that T′ contains e and X (because f is not in X.)
Following Kruskal's Algorithm,
• Order edges by weight: w(e_1) ≤ w(e_2) ≤ · · · ≤ w(e_m)

T <- empty set
for i = 1..m
    if e_i does not make a cycle with T
        then T <- T union { e_i }
end
• We add e = (u, v) iff u and v are in different connected components.
• To test this efficiently we use the Union-Find data structure.
  – Find(element) – find which set contains the element.
  – Union – unite two sets.
• Here each set is a connected component of vertices.
  – Add edge e iff Find(u) ≠ Find(v)
  – Adding edge e to T ⇒ unite the connected components of u and v
A simple Union-Find structure: store an array C(1 . . . n) where C(i) is the name of the connected component containing vertex i. For Union we must rename one of the two sets: rename the smaller one; then n unions take O(n log n) in total. (In CS 466: reduce this.)
Kruskal's Algorithm takes O(m log m) to sort plus O(n log n) for the Union-Find operations. And O(m log m) = O(m log n) since log m ≤ log n^2 = 2 log n.
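A runnable sketch of Kruskal with the simple renaming Union-Find, with edges as (w, u, v) and vertices 0..n-1 (names mine):

def kruskal(n, edges):
    # comp[v] = name of v's component; members[c] = vertices in component c.
    comp = list(range(n))
    members = [[v] for v in range(n)]
    T = []
    for w, u, v in sorted(edges):                 # order edges by weight
        a, b = comp[u], comp[v]
        if a == b:                                # would make a cycle
            continue
        T.append((u, v, w))
        if len(members[a]) < len(members[b]):
            a, b = b, a                           # rename the smaller set
        for x in members[b]:
            comp[x] = a
        members[a].extend(members[b])
        members[b] = []
    return T

print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))
# [(0, 1, 1), (1, 2, 2), (2, 3, 4)]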
12 Oct 16th, 2008
• Assignment 1 – out of 40.
– Solutions will be on website.
– Marking scheme is in the newsgroup.
• Assignment 2 – due tomorrow.
• Midterm – Monday – covers to the end of today.
• You are allowed one 8.5 × 11 sheet brought to the midterm. It doesn't have to be hand-written, either.
12.1 Graph Algorithms
Minimum Spanning Tree: Given an undirected graph G = (V, E) with weight function w : E → R_+, find a minimum-weight subset of edges E′ ⊆ E such that (V, E′) is connected.
Recall:
• Kruskal's algorithm orders edges from minimum to maximum weight; take each edge unless it forms a cycle with previously chosen edges.
• Lemma: the cheapest edge connecting two groups is indeed in some minimum spanning tree.
12.1.1 Prim’s Algorithm
Also a greedy algorithm; it builds a tree. General structure: let U be the vertices of the tree so far. Initially, U = {s}. While U ≠ V, find a minimum weight edge e = (u, v) where u ∈ U and v ∈ V − U. Add e to T and v to U.
Correctness – from lemma last day.
Implementation: we need to (repeatedly) find a minimum-weight edge leaving U (as U changes.) Let δ(U) be the set of edges from U to V − U. We want to find the minimum, insert, and delete – we need a priority queue; use a heap.
Exactly how does δ(U) change? When we do U ← U ∪ {v}, any edge from U to v leaves δ(U), and any other edge incident with v enters δ(U). So, for all x adjacent to v:
• if x ∈ U then remove edge (x, v) from the priority queue.
• else insert edge (x, v) into the PQ.
Recall that a heap provides O(log n) for insert and delete, and O(1) for finding a minimum.
For one vertex v, how many PQ inserts/deletes do we need?
• n in the worst case.
• More precisely, deg(v) = the number of edges incident with v.
Total number of PQ insert/delete operations over all vertices v (we hope for better than n · n): every edge enters δ(U) once and leaves once, so 2m. Alternatively, Σ_{v∈V} deg v = 2m.
Total time for the algorithm is O(n + m log m) = O(m log m), because m ≤ n^2 and so log m ≤ 2 log n. (If m = 0 this fails: check first whether m < n − 1 and if so bail out – the graph is disconnected.)
Improvements
• Store vertices in the PQ instead of edges. Define w(v) = minimum weight of an edge from U to v. When we do U ← U ∪ {v}, we must adjust the weights of some vertices. Gives O(m log n).
• Tweak the PQ to be a "Fibonacci heap," which gives O(1) for a weight change and O(log n) to extract the minimum. Gives O(n log n + m).
• Borůvka's Algorithm: another way to handle this case.
12.2 Shortest Paths
Shortest path from A to D: ABD weight 3 + 2 = 5, A to E: ABE with weight 4. (From diagram in class.)
General input: directed graph G = (V, E) with weights w : E → R. We allow negative weight edges, but disallow negative weight cycles. (If we have a negative weight cycle, then repeating it gives paths of weight approaching −∞.)
We might ask for the shortest simple path, but this is actually hard (NP-complete.)
Weight of a path = the sum of the weights of its edges.
Versions of shortest path problem:
1. Given u, v ∈ V , find a shortest path from u to v.
2. Given u ∈ V , find shortest paths to all other vertices. ”Single source shortest path problem”
3. Find shortest u–v paths ∀u, v – the "all pairs shortest path problem."
Solving 1 seems to involve solving 2.
Later: Dijkstra's algorithm for 2 (like Prim's algorithm – build a shortest path tree from u), and a dynamic programming solution for problem 3.
Does the shortest u–v path go through x or not? Split into: find a shortest path u–x and a shortest path x–v. In what way are these subproblems smaller?
• They use fewer edges.
  M[u, v, l] = min weight of a u–v path using ≤ l edges; n^3 subproblems, for l = 1 . . . n − 1.
• The paths u–x and x–v don't use x as an intermediate vertex.
13 Oct 21, 2008
13.1 All Pairs Shortest Path
Given a directed graph G = (V, E) with weights w : E → R, find shortest u–v paths for all u, v ∈ V.
In general, the weight of a path is the sum of the weights of the edges in the path.
[Figure: example digraph on vertices A, B, C, D with edge weights 5, −1, 6, 11, 2; e.g. w(ACD) = 8]
Assume: no negative weight cycles. Otherwise the minimum length of a path can be −∞.
Use Dynamic Programming.
[Figure: a u → v path through an intermediate vertex x]
Main idea: try all intermediate vertices x. If we use x, we need a shortest u → x path and a shortest x → v path.
How are these subproblems simpler?
1. Fewer edges – get an efficient dynamic program M[u, v, l] giving the shortest u–v path with ≤ l edges.
However, we're not using this. It gives the same runtime, but uses more space.
2. The u −x and x −v paths do not use x as an intermediate vertex.
We’ll use this one.
Let V = {1, 2, . . . , n}. Let D_i[u, v] = min length of a u → v path using intermediate vertices only from the set {1, . . . , i}. Solve subproblem D_i[u, v] for i = 0, 1, . . . , n.
Final answer: the matrix D_n[u, v]. Number of subproblems: O(n^3).
How do we initialize? D_0[u, v] = w(u, v) if (u, v) ∈ E; ∞ otherwise.
Main formula:

    D_i[u, v] = min{ D_{i−1}[u, v], D_{i−1}[u, i] + D_{i−1}[i, v] }
This leads to:
13.1.1 Floyd-Warshall Algorithm
Initialize D_0 as above
for i = 1..n
    for u = 1..n
        for v = 1..n
            D_i[u,v] = as above in the main formula
        end
return D_n
Time is O(n^3). The space however is also O(n^3), which is extremely undesirable. Notice that to compute D_i we only use D_{i−1}, so we can throw away any previous matrices, bringing the space to O(n^2).
In fact, even better (although not in the degree of n), we can work in a single matrix:

Initialize D full of D_0
for i = 1..n
    for u = 1..n
        for v = 1..n
            D[u,v] = min { D[u,v], D[u,i] + D[i,v] }   (**)
        end
return D
Note: in the inner loop, D will be a mixture of D_i and D_{i−1}, but this is correct because we never go below the true minimum by doing this, and we still correctly compute the main equation.
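A runnable sketch of the in-place version, with INF marking "no edge" (names mine):

INF = float("inf")

def floyd_warshall(D):
    # In-place: D[u][v] = w(u,v) or INF, with D[u][u] = 0 on entry.
    n = len(D)
    for i in range(n):                 # allow i as an intermediate vertex
        for u in range(n):
            for v in range(n):
                if D[u][i] + D[i][v] < D[u][v]:
                    D[u][v] = D[u][i] + D[i][v]
    return D

D = [[0, 5, INF], [INF, 0, -1], [2, INF, 0]]
print(floyd_warshall(D))   # e.g. D[0][2] becomes 5 + (-1) = 4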
How do we find the actual shortest paths?
• We could compute H[u, v] = the highest numbered vertex on a shortest u → v path.
Note: if we explicitly stored all n^2 paths, we'd be back to O(n^3) space – avoid this. Better:
• S[u, v] = the successor of u on a shortest u–v path.
Initialize S[u, v] = v if (u, v) ∈ E and φ otherwise. Modify (**) to become:

if D[u,i] + D[i,v] < D[u,v] then
    D[u,v] <- D[u,i] + D[i,v]
    S[u,v] <- S[u,i]
end
Once we have S, output the complete path:

Path(u,v)
    x <- u
    while x ≠ v
        output x
        x <- S[x,v]
    end
    output v
Exercise: Use this algorithm to test if a graph has a negative weight cycle.
14 Oct 23, 2008
Shortest Paths
Last day's study was the all-pairs shortest path problem, whereas today's is the single-source shortest path: find the shortest path from s to v, ∀v.
• In the case with no negative weight edges, we can use Dijkstra's Algorithm, which is O(m log n).
• With no directed cycles, O(n + m).
• With no negative weight cycles, O(n·m). (This is the most general – still faster than all pairs.)
14.1 Dijkstra’s Algorithm
Input: Directed graph G = (V, E), weight function w : E → R_{≥0}, and source vertex s.
Output: Shortest s → v paths, ∀v.
Idea: Grow a tree of shortest paths from s.
[Figure: the set B containing s, with a candidate edge (x, y) leaving B]
General step: we have shortest paths to all vertices in B. Initially, B = {s}. Choose the edge (x, y) with x ∈ B and y ∈ V \ B that minimizes d(s, x) + w(x, y). Call this minimum d:
• d(s, y) ← d
• Add (x, y) to the shortest path tree: parent(y) ← x
• B ← B ∪ {y}
This is greedy in the sense that y has the next minimum distance from s.
Claim: d = the minimum distance from s to y.
Proof: The idea is that any s → y path π has this structure:
• it begins at s;
• π_1: the prefix inside B, ending at some u;
• (u, v): the first edge leaving B;
• π_2: the rest of the path (which may re-enter B).
So w(π) = w(π_1) + w(u, v) + w(π_2). Note that w(π_1) + w(u, v) ≥ d (by the choice of (x, y)) and w(π_2) ≥ 0 as edge weights are non-negative.
From the Claim, by induction on |B|, this algorithm finds the shortest paths.
Implementation: Make a priority queue (heap) on the vertices V \ B using a value D(v) for each v, such that the minimum value of D gives the wanted vertex:
D(v) = minimum weight of an s → v route consisting of a path inside B plus one edge.
• Initialize:
  – D(v) ← ∞, ∀v
  – D(s) ← 0
  – B ← φ
• While |B| < n:
  – y ← vertex of V \ B of minimum D(y)
  – B ← B ∪ {y}
  – For each edge (y, z) where z ∈ V \ B:
      ∗ t ← D(y) + w(y, z)
      ∗ If t < D(z) then
          D(z) ← t
          parent(z) ← y
Store the D values in a heap. How many times are we extracting the minimum? n times, at O(log n) time each. The "decrease D value" is done ≤ m times (same argument as for Prim), and each decrease operation is O(log n) (done as insert + delete.) Total time is O(n log n + m log n), which is O(m log n) if m ≥ n − 1. Using a Fibonacci heap, we can decrease this to O(n log n + m).
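A runnable sketch using Python's heapq, with lazy deletion standing in for the decrease-key operation (names mine):

import heapq

def dijkstra(adj, s):
    # adj: {u: [(v, w), ...]} with w >= 0. Returns distances from s.
    dist = {s: 0}
    pq = [(0, s)]                      # entries are (D(v), v)
    done = set()                       # the set B
    while pq:
        d, y = heapq.heappop(pq)
        if y in done:                  # stale entry: skip (lazy deletion)
            continue
        done.add(y)
        for z, w in adj.get(y, []):
            t = d + w
            if t < dist.get(z, float("inf")):
                dist[z] = t
                heapq.heappush(pq, (t, z))
    return dist

adj = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
print(dijkstra(adj, "s"))   # {'s': 0, 'a': 2, 'b': 3}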
14.2 Connectivity in Graphs
Testing connectivity, exploring a graph. Recall: Breadth First Search (BFS) and Depth First Search (DFS.)
[Figure: an example graph on vertices 1–8]
• BFS: 1, 2, 3, 6, 8, 4, 5, 7 (1, then vertices adjacent to 1, then adjacent to 2, etc.)
• DFS: 1, 2, 4, 6, 3, 5, 8, 7
Either takes O(n +m). DFS is more useful.
We'll talk about "higher connectivity" – for networks, connected isn't enough; we want the network to stay connected even with a few failures (of vertices/edges.) What's bad is a cut vertex – if it fails, the graph becomes disconnected.
We call a graph 2-connected if there are no cut vertices. A figure-eight graph made of two triangles or squares joined at a vertex has two 2-connected components, the triangles/squares. Similarly, 3-connected means we can remove any two vertices without breaking the graph into components.
By the way, Paul Seymour, a famous name in graph theory, is visiting UW this weekend, and he’s speaking
tomorrow at 3:30. He's also getting an honorary degree on Saturday at convocation.
14.2.1 Finding 2-connected components
We can use DFS to find cut vertices and 2-connected components in O(n +m) time.
[Figure: a DFS tree on vertices 1–7; solid edges are DFS tree edges, dotted edges are "back edges."]
Claim: Every non-tree DFS edge goes from some u to an ancestor. e.g. we can’t have edge (5,7). This justifies
the term ”back edge.”
DFS Algorithm:
• Initialize:
  – mark(v) ← not visited, ∀v
  – num ← 1
  – DFS(s)
• DFS(v), recursive:
  – mark(v) ← visited
  – DFSnum(v) ← num; num ← num + 1
  – for each edge (v, w):
      ∗ if mark(w) = not visited then
          (v, w) is a tree edge
          parent(w) ← v
          DFS(w)
        else if parent(v) ≠ w then: (v, w) is a back edge
What do cut vertices look like in a DFS tree?
• A leaf is never a cut vertex.
• The root is a cut vertex iff its number of children is ≥ 2.
Removing an arbitrary (non-root, non-leaf) tree node v leaves T_1, . . . , T_i, the subtrees of v's children, and T_0, the part of the tree connected from above. Are these connected in G \ v? It depends on back edges: if T_j has a back edge to T_0 then T_j is connected to T_0. Otherwise, it falls away (and is disconnected.)
We need one more thing: high(v) = highest (i.e. lowest DFS number) vertex reachable from v by going down tree
edges and then along one back edge.
Claim: v is a cut vertex iff it has a DFS child x such that high(x) ≥ DFSnum(v).
Modifying the DFS code: set high(v) ← DFSnum(v) at the start of DFS(v); on seeing a back edge (v, w), set high(v) ← min{ high(v), DFSnum(w) }; and when returning from a child w, set high(v) ← min{ high(v), high(w) }.
This is still O(n +m).
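A runnable sketch of cut-vertex detection with this high(v) bookkeeping (recursive, so suited to small graphs; names mine):

def cut_vertices(adj):
    # adj: {v: [neighbours]} for an undirected graph.
    num, high, parent = {}, {}, {}
    cuts, counter = set(), [1]
    def dfs(v):
        num[v] = high[v] = counter[0]; counter[0] += 1
        children = 0
        for w in adj[v]:
            if w not in num:                     # tree edge
                parent[w] = v
                children += 1
                dfs(w)
                high[v] = min(high[v], high[w])
                if v in parent and high[w] >= num[v]:
                    cuts.add(v)                  # non-root cut vertex test
            elif w != parent.get(v):             # back edge
                high[v] = min(high[v], num[w])
        if v not in parent and children >= 2:
            cuts.add(v)                          # root: cut iff >= 2 children
    for v in adj:
        if v not in num:
            dfs(v)
    return cuts

# Two triangles sharing vertex 3: the only cut vertex is 3.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(cut_vertices(adj))   # {3}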
15 Oct 28th, 2008
Midterm: Think about it as out of 35. (In that case you got an 86%.)
Backtracking: A systematic way to try all possibilities. In the workplace, when you need to find an algorithm, if you’re extremely lucky it’ll be one of the ones we encountered. More likely, it’ll be similar to one we’ve seen. But most likely, it’ll be one nobody knows how to solve efficiently, and it’s NP-complete. Backtracking is useful for exactly these hard problems.
Options:
• Heuristic approach – run quickly, with no guarantee on the quality of the solution.
• Approximation algorithms – run quickly, but with a guarantee on the quality.
• Exact algorithm – and bear with the fact it (may) take a long time.
Note: to test (experimentally) a heuristic you need an exact algorithm.
15.1 Backtracking and Branch/Bound
Exact, exponential-time algorithms. Search in an implicit graph of partial solutions. General backtracking: we have a configuration C consisting of the remaining subproblem to be solved and the choices made to get to this subproblem.
e.g. knapsack: a configuration is the items selected so far and the items discarded so far, together with the capacity remaining.
e.g. trying all permutations of 1 . . . n: a configuration is the partial permutation built so far and the remaining elements.
Backtracking Algorithm: F = set of active configurations. Initially, one configuration: the whole problem. While F ≠ ∅: C ← remove a configuration from F, and expand it into C_1, . . . , C_t. For each C_i, test for success (solves the whole problem) and failure (dead end.) Otherwise, add C_i to F.
Storing F:
• Stack: DFS of configuration space
Size: height of tree
• Queue: BFS of configuration space
Size: width of tree
• Priority Queue: explore current best configuration
Usually, height << width, and we should use DFS.
e.g. exploring all subsets of {1, . . . , n}, the configuration tree starts:

S = ∅, R = {1 . . . n}
– 1 out: S = ∅, R = {2 . . . n}
– 1 in: S = {1}, R = {2 . . . n}
– – 2 out: S = {1}, R = {3 . . . n}
– – 2 in: S = {1, 2}, R = {3 . . . n}
Example: Subset Sum – Knapsack where each item’s value equals its weight.
Given items 1 . . . n with weight w_i for item i, and W, find a subset S ⊆ {1, . . . , n} with Σ_{i∈S} w_i ≤ W, maximizing Σ_{i∈S} w_i.
Decision version – can we find S with Σ_{i∈S} w_i = W?
A polynomial time algorithm for this decision version gives poly time for the optimization version.
Backtracking for the decision version of Subset Sum:
• Configurations are as above (S so far, R remaining)
• w = Σ_{i∈S} w_i, r = Σ_{i∈R} w_i.
Need to fill in: success when w = W, and failure (of the configuration) when w > W or w + r < W.
Note: if F becomes empty and we haven’t found a solution, then there is no solution.
This is O(2^n). Before, we built a dynamic programming algorithm for Knapsack with O(n W) subproblems. Which is better? Depends on W. e.g. if W has n bits then W ∼ 2^n and backtracking is better.
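A sketch of this search in Python (my code; the explicit stack makes it a DFS of the configuration space):

def subset_sum_decision(w, W):
    n = len(w)
    suffix = [0] * (n + 1)                 # suffix[i] = r, the sum still available
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + w[i]
    F = [(0, 0)]                           # configurations: (next item i, sum so far)
    while F:
        i, s = F.pop()
        if s == W:
            return True                    # success
        if s > W or s + suffix[i] < W:
            continue                       # failure: prune this configuration
        F.append((i + 1, s))               # branch: item i out
        F.append((i + 1, s + w[i]))        # branch: item i in
    return False                           # F empty: no solution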
15.2 Branch-and-Bound
• for optimization problems
• we’ll talk about minimizing an objective function
• keep track of minimum solution so far
• not DFS – explore ”most promising” configuration first
• ”branch” generate children of configuration (as in backtracking)
• ”bound” – for each configuration compute a lower bound on the objective function and prune if ≥ minimum
so far.
General paradigm:
• F = active configurations
• Keep the best (solution) so far
• While F ≠ ∅
– C ← remove the ”best” configuration from F
– Expand C to children C_1, . . . , C_t (”branch”)
– For each C_i:
∗ If C_i solves the whole problem: if better than the current best, update best
∗ Else if C_i is infeasible, discard it
∗ Else, ”bound”: if lower-bound(C_i) < best so far, add C_i to F
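A generic sketch of this loop in Python; expand, is_complete, value and lower_bound are problem-specific stand-ins (my names), with lower_bound returning infinity for infeasible configurations:

import heapq, itertools

def branch_and_bound(root, expand, is_complete, value, lower_bound):
    best, best_val = None, float("inf")
    tie = itertools.count()                 # tie-breaker so heapq never compares configs
    F = [(lower_bound(root), next(tie), root)]
    while F:
        lb, _, C = heapq.heappop(F)         # "best" = smallest lower bound
        if lb >= best_val:
            continue                        # bound: prune entries made stale later
        for Ci in expand(C):                # branch
            if is_complete(Ci):
                if value(Ci) < best_val:
                    best, best_val = Ci, value(Ci)
            elif lower_bound(Ci) < best_val:   # infeasible => lower bound is infinity
                heapq.heappush(F, (lower_bound(Ci), next(tie), Ci))
    return best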
15.2.1 Branch and Bound TSP Algorithm
Example: Traveling Salesman problem. Idea here is we have a graph with weights on the edges, and our traveling
salesman wants to start in a home town, visit every city exactly once, and return to the home town.
Given a graph G = (V, E) and edge weights w : E → R_{≥0}, find a cycle C that goes through every vertex once and has minimum weight.
This is a famous, ”hard” problem.
Algorithm: based on enumerating all subsets of edges. Configuration: I_c ⊆ E (included edges) and X_c ⊆ E (excluded edges), with I_c ∩ X_c = ∅. The undecided edges are E \ (I_c ∪ X_c).
Necessary conditions: E \ X_c must be connected – in fact it must be 2-connected. I_c must have ≤ 2 edges at each vertex, and must not contain a cycle (short of a full tour.)
How to branch? Take the next edge not decided about yet: from configuration C = (I_c, X_c), choose an undecided edge e ∈ E \ (I_c ∪ X_c) and branch on including or excluding it. But how to bound?
Given I_c, X_c, find a lower bound on the minimum TSP tour respecting I_c, X_c. We want an efficiently computable lower bound (so it’s sort of like a heuristic, but we don’t have issues of correctness.)
Instead of finding a tour, we find a 1-tree: a spanning tree on nodes 2, . . . , n (not an MST) plus two edges from vertex 1 to leaves of the tree.
Claim Any TSP tour is a 1-tree, so w(min TSP tour) ≥ w(min 1-tree). Use this as the lower bound.
Claim We can efficiently find a minimum weight 1-tree given I_c, X_c. (Not proven.)
Final Enhancements:
• When we choose the ”best” configuration C from F, as our measure of best, use the one with the minimum 1-tree.
• Branch wisely: e.g. find a vertex i in the minimum 1-tree with degree ≥ 3 (a tour is impossible there), let e = the maximum weight edge at i, and branch on it.
16 Oct 30th, 2008
16.1 Recall
Course outline:
• Designing algorithms
• Analyzing algorithms
• Lower Bounds – do we have the best algorithm?
16.2 Lower Bounds
If we have a lower bound for a problem P, we claim any algorithm will take at least this much time.
Note: there is a distinction between a lower bound for an algorithm and a lower bound for a problem. For an example, look at multiplying large integers. The school method was O(n^2).
In fact, the school method is Ω(n^2) in the worst case, because there are example inputs that take ≥ c n^2 steps. But there is an algorithm (divide and conquer) with a better worst-case runtime – O(n^k) with k < 2. A lower bound for the problem, in contrast, says that all algorithms have to take ≥ some amount of time.
Lower bounds for problems are hard to prove!
16.2.1 Basic Techniques
1. Lower bound based on output size.
For example, if we ask for all the permutations of 1, 2, . . . , n, there are n! of them and it won’t take less than
n! time to write them all down – Ω(n!).
2. Information-Theoretic Lower Bounds
e.g. the Ω(log n) lower bound for searching for an element in a_1, a_2, . . . , a_n. This takes log n bits, as that is the information content of distinguishing n possibilities.
In a comparison-based model, each comparison gives one bit of information, and since we need log n bits we
need log n comparisons. Often this argument is presented as a tree.
3. Reductions: showing one problem is easier or harder than another.
e.g. convex hull is harder than sorting. We took a list of numbers and mapped them onto a curve, and then the convex hull would tell us the sorted order. ”If I could find convex hulls faster than O(n log n) then I could sort faster than O(n log n).”
16.2.2 State-of-the-Art in Lower Bounds
• Some problems are undecidable (they don’t have algorithms) e.g. the halting problem. We’ll do this later
in the course (and CS 360.)
• Some problems can only be solved in exponential time.
• (Lower end) some problems have Ω(n log n) lower bounds in special models.
Things we care about, like ”is there a TSP algorithm in O(n^6)?” – nobody knows. ”Can the O(n^3) dynamic programming algorithms be improved?” – nobody knows.
Major open question: Many practical problems have no polynomial time algorithm and no proved lower bound.
The best that’s known is proving that a large set of problems are all equivalent, and we know that solving one in
polynomial time solves all the others.
In the rest of the course, we’ll fill this in.
16.3 Polynomial Time
Definition An algorithm runs in polynomial time if its worst case runtime is O(n^k) for some k.
What is polynomial?
Θ(n) YES
Θ(n^2) YES
Θ(n log n) YES (it is better than Θ(n^2))
Θ(n^100) YES
Θ(2^n) NO
Θ(n!) NO
The algorithms in this course were (mostly) all poly-time, except backtracking and certain dynamic programming algorithms (specifically 0-1 Knapsack.)
Low-degree polynomials are efficient. High-degree polynomials don’t seem to come up in practice.
Jack Edmonds is a retired C&O prof. In the ”matching” problem, you are given a graph and want to assign vertices to pairs. He first formulated the idea of polynomial time.
In any other algorithms class, you would cover linear programming. We have a C&O department that covers that, but if you’re serious about algorithms, you should be taking courses over there.
Other history:
• In the 50’s and 60’s, there was a success story creating a linear programming and simplex method – practical
(though not polynomial.)
• Next step, integer linear programming. Seemed promising at the time, and people reduced other problems
to this one, but in the 70’s with the theory of NP-completeness, we found this is actually a hard problem
and people did reductions from integer programming.
Our goal: to attempt to distinguish problems with poly-time algorithms from those that don’t have any. This is
the theory of NP-completeness. (NP = Non-deterministic Polynomial)
16.4 Reductions
Problem A reduces (in polytime) to problem B (written A ≤ B or A ≤_P B) – and we can say ”A is easier than B” – if a (polytime) algorithm for B can be used to create a (polytime) algorithm for A. More precisely, there is a polytime algorithm for A that makes subroutine calls to a (polytime) algorithm for B.
Note: we can have a reduction without having an algorithm for B.
Consequence of A ≤ B: an algorithm for B gives an algorithm for A. And if we have a lower bound showing there is no polytime algorithm for A, then there is no polytime algorithm for B either.
Even without an algorithm for B or a lower bound for A, if we prove the reductions A ≤_P B and B ≤_P A, then A and B are equivalent with respect to polytime (either both have polytime algorithms, or both don’t.)
Example: the longest increasing subsequence problem. We will reduce this problem not to the shortest path problem but to the longest path problem in a graph.
This is a reduction – it reduces the longest increasing subsequence problem to the longest path problem. Is it a polynomial-time reduction?
How can we solve the longest path problem? Reduction to the shortest path problem: negate the edge weights. (The graph built from an increasing-subsequence instance is a DAG, so negative weights cause no trouble there.)
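A sketch of this chain of reductions in Python (my code). Element i becomes a vertex, with an edge i → j whenever i < j and a_i < a_j; that graph is a DAG, and we solve longest path on it by negating weights and taking shortest paths in index (topological) order:

def lis_length(a):
    n = len(a)
    if n == 0:
        return 0
    neg = [-1] * n               # negated length of the best path ending at j
    for j in range(n):
        for i in range(j):
            if a[i] < a[j]:      # edge i -> j in the DAG
                neg[j] = min(neg[j], neg[i] - 1)
    return -min(neg)             # longest path, counting vertices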
17 Nov 4th, 2008
Permanents are like determinants, except all the terms are positive.
Today’s topics: Reductions (from last class), P and NP, and decision problems.
17.1 Decision Problems
What is a decision problem? A problem with output YES/NO or TRUE/FALSE. We will concentrate on decision
problems to define P/NP. Why? It’s more rigorous, and it seems to be equivalent to optimization anyways.
Examples
• Given a number, is it prime?
• Given a graph, does it have a Hamiltonian cycle? (a cycle visiting every vertex once)
• TSP decision version: given a graph G = (V, E) with w : E → R_+, and given some bound k ∈ R, is there a TSP tour of length at most k?
• Independent Set: given a graph G = (V, E) and k ∈ N, is there an independent set of size ≥ k? Optimization version: given G, find a maximum independent set.
Usually, decision and optimization are equivalent with respect to polynomial time, e.g. for independent set. In fact, typically, we can show decision ≤_P opt. Input: G, k.
• Give G to the algorithm for the optimization problem
• Return YES or NO depending on whether the returned set has size ≥ k.
Showing opt ≤_P decision: suppose we have a poly-time algorithm for the decision version of independent set. For k = n . . . 1, give G, k to the decision algorithm and stop at the first YES – that k is the optimum. Runtime: assume decision takes O(n^t). Then this loop takes O(n^{t+1}).
We can find the actual independent set in polytime too. Idea: try vertex 1 in/out of independent set. Exercise:
fill this in and check poly-time.
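The k-loop above, as a sketch (has_ind_set is the hypothesized decision algorithm, not a real routine):

def max_independent_set_size(G, n, has_ind_set):
    for k in range(n, 0, -1):        # k = n, ..., 1
        if has_ind_set(G, k):        # each call O(n^t) by assumption
            return k                 # first YES is the optimum
    return 0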
Examples:
• Factoring – find prime factors
• Primality – given number, is it prime?
In some sense, primality is the ”decision” version of factoring. But although we can test primality in polynomial time, we don’t know how to factor in polynomial time (and finding out how would be bad news for cryptography!)
Definition P = { decision problems that have polytime algorithms }.
Notes:
• Must be careful about model of computing and input size – count bits.
17.2 P or NP?
Which problems are in P? Which are not in P? We will study a class of ”NP-complete” problems that are equivalently hard (wrt polytime) (i.e. A ≤_P B ∀A, B in the class), none of which seem to be in P.
Definition of NP (”nondeterministic polynomial time”): there is a set of NP problems, which contains the P problems and the NP-complete problems (which are all equivalent.) NP problems are polytime-solvable if we get some lucky extra information.
For independent set, it’s easy to verify a graph has an independent set of size ≥ k if you’re given the set. Contrast with verifying that G has no independent set of size ≥ k – what lucky info would help?
e.g. primes: given n, is it prime? Not clear what info to give (there is some), but for composite numbers (given n, is it composite (= not prime)?) we could give the factors.
A certifier algorithm takes an input plus a certificate (our extra info.) An algorithm B is a certifier for problem
X if:
• B takes two inputs s and t, and outputs YES or NO.
• ∀s: s is a YES input for X iff ∃t (a ”certificate”) such that B(s, t) outputs YES.
B is a polytime certifier if:
• B runs in polynomial time.
• There is a polynomial bound on the size of the certificate t in terms of the size of s.
Examples
• Independent Set
Input is a graph G and k ∈ N. Question: does G have an independent set of size ≥ k?
Claim: Independent-Set ∈ NP.
Proof Certificate: U ⊆ V (a set of vertices.) Certifier: check that U is an independent set and check |U| ≥ k.
• Decision version of TSP.
Input: G = (V, E), w : E → R_+, and k ∈ R.
Question: Does G have a TSP tour of weight ≤ k?
Certificate: a sequence of edges.
Certifier: check the edges form a tour with no repeated vertices, and check the sum of weights ≤ k.
• Non-TSP
Does G have no TSP tour of length ≤ k?
Is Non-TSP in NP? Nobody knows.
• Subset-Sum:
Input: w_1, . . . , w_n in R_+ and W. Is there a subset S ⊆ {1, . . . , n} such that the sum is exactly W?
Claim: Subset-Sum ∈ NP. Certificate: S. Certifier: add up the weights in S.
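For instance, the Independent-Set certifier B(s, t), with s = (G, k) and t = U, might look like this sketch:

def certify_independent_set(G, k, U):
    # G: dict vertex -> set of neighbours; U: the certificate
    if len(U) < k:
        return False                                  # |U| >= k fails
    return all(v not in G[u] for u in U for v in U)   # no edge inside U

This runs in polynomial time, and |U| ≤ |V| bounds the certificate size, as the definition requires.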
17.3 Properties
Claim P ⊆ NP.
Let X be a decision problem in P. So X has a polytime algorithm. To show X ∈ NP:
• Certificate: nothing (empty)
• Certifier algorithm: the original algorithm (ignoring the certificate)
Claim: any problem in NP has an exponential algorithm. In particular, one with running time O(2^{poly(n)}).
Proof idea: try all possible certificates using the certifier. The number of certificates is O(2^{poly(n)}).
Open Questions
Is P = NP? co-NP consists of the ”no versions” of NP problems; non-TSP is in co-NP. Is co-NP = NP? Is P = NP ∩ co-NP?
18 Nov 6th, 2008
18.1 Recall
A ≤_P B – problem A ”reduces (in polytime) to” problem B if there is a polytime algorithm for A (possibly) using a polytime algorithm for B. (B is ”harder.”) P = { decision problems with polytime algorithms } and NP = { decision problems with a polynomial-time certifier algorithm } (i.e. poly-time IF we get extra information.)
18.2 NP-Complete
These are the hardest problems in NP. Definition: A decision problem X is NP-complete if:
1. X ∈ NP
2. For every Y ∈ NP, Y ≤_P X.
Two important implications:
1. If X is NP-complete and X has a polytime algorithm, then P = NP, i.e. every Y ∈ NP has a polytime algorithm.
2. If X is NP-complete and X has no polytime algorithm (i.e. a lower bound), then no NP-complete problem has a polytime algorithm.
The first NP-completeness proof is hard: to show X NP-complete, we must show Y ≤_P X for all Y ∈ NP. Subsequent NP-completeness proofs are easier. If we know X is NP-complete, then to prove Z is NP-complete:
1. Prove Z ∈ NP
2. Prove X ≤_P Z
Note that X is a known NP-complete problem and Z is the new problem. Please don’t get this backwards.
18.2.1 Circuit Satisfiability
The first NP-complete problem is called circuit satisfiability.
(Figure: an example circuit computing (x_1 ∧ x_2) ∨ (¬x_1 ∧ ¬x_2) – inputs x_1 and x_2 with variables, two ¬ gates, two ∧ gates, and one ∨ output (sink).)
This is a dag with OR, AND, and NOT operations. 0-1 values for the variables determine the output value. e.g. if x_1 = 0 and x_2 = 1 then the output is 0.
Question: Are there 0-1 values for variables that give 1 as output?
Circuit SAT is a decision problem in NP.
• Certificate – Values for variables.
• Certifier – Go through circuit from sources to sink, computing values. Check output is 1.
Theorem Circuit-SAT is NP-complete.
Proof Sketch: We know Circuit-SAT ∈ NP as above. We must show Y ≤_P Circuit-SAT for all Y ∈ NP. The idea is that an algorithm becomes a circuit computation. A certifier algorithm with an unknown certificate becomes a circuit with variables as some of the inputs. The question ”is there a certificate such that the certifier says YES?” becomes circuit satisfiability.
Essentially, if we had a polynomial time way to test circuit satisfiability, we would have a general way to solve any
problem in NP by turning it into a Circuit-SAT problem.
18.2.2 3-SAT
Satisfiability (of Boolean formulas):
• Input: a Boolean formula, e.g. (x_1 ∧ x_2) ∨ (¬x_1 ∧ ¬x_2)
• Question: is there an assignment of 0, 1 to the variables that makes the formula TRUE (i.e. 1)?
Well, circuits and formulas are essentially the same, so these satisfiability problems should be equivalent; we will be rigorous. Even a special form of Satisfiability (SAT) is NP-complete.
3-SAT: e.g. (x_1 ∨ ¬x_1 ∨ x_2) ∧ (x_2 ∨ ¬x_3 ∨ x_4) ∧ . . .. The ”formula” is the ∧ of ”clauses,” each of which is the ∨ of three literals. A literal is a variable or the negation of a variable.
Theorem 3-SAT is NP-complete.
Proof
• 3-SAT ∈ NP:
Certificate: values for the variables.
Certifier algorithm: check that each clause has ≥ 1 true literal.
• 3-SAT is harder than another NP-complete problem, i.e. prove Circuit-SAT ≤_P 3-SAT.
Assume we have a polytime algorithm for 3-SAT, and use it to create a polytime algorithm for Circuit-SAT. The input to the algorithm is a circuit C, and we want to construct in polytime a 3-SAT formula F to send to the 3-SAT algorithm, such that C is satisfiable iff F is satisfiable.
We could derive a formula by carrying the inputs up through the tree (i.e. for subformulas f_1 and f_2 feeding an ∨ gate, just pull the inputs up and write f_1 ∨ f_2.) Caution: the size of the formula doubles at every level (thus this is not a polynomial time or size reduction.)
Idea: make a variable for every node in the circuit. Rewrite a ≡ b as (a ⇒ b) ∧ (b ⇒ a), and a ⇒ b as (b ∨ ¬a). Then a ≡ (b ∨ c) becomes (a ⇒ (b ∨ c)) ∧ ((b ∨ c) ⇒ a), i.e. (b ∨ c ∨ ¬a) ∧ (a ∨ ¬(b ∨ c)), and the second part splits into (a ∨ ¬b) ∧ (a ∨ ¬c). We get (b ∨ c ∨ ¬a) ∧ (a ∨ ¬b) ∧ (a ∨ ¬c).
Note: we can pad these size-two clauses by adding a new dummy variable t: (a ∨ ¬b) becomes (a ∨ ¬b ∨ t) ∧ (a ∨ ¬b ∨ ¬t). There’s a similar padding for size 1.
The final formula F is:
– the ∧ of all the clauses for the circuit nodes,
– ∧ x_i, where i is the output node.
e.g. for the example circuit: x_7 ∧ (x_7 ≡ x_5 ∨ x_6) ∧ (x_5 ≡ x_1 ∧ x_2) ∧ (x_6 ≡ x_3 ∧ x_4) ∧ (x_3 ≡ ¬x_1) ∧ (x_4 ≡ ¬x_2).
Claim F has a polynomial size and can be constructed in polynomial time.
Claim C is satisfiable iff F is satisfiable.
Proof (⇒) by construction (⇐) . . .
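A sketch of the per-gate clause rules in Python, writing a literal as a signed variable index (a common SAT convention, my choice, not the lecture's):

def or_gate(a, b, c):
    # clauses for a ≡ (b ∨ c): (¬a ∨ b ∨ c) ∧ (a ∨ ¬b) ∧ (a ∨ ¬c)
    return [[-a, b, c], [a, -b], [a, -c]]

def and_gate(a, b, c):
    # clauses for a ≡ (b ∧ c): (a ∨ ¬b ∨ ¬c) ∧ (¬a ∨ b) ∧ (¬a ∨ c)
    return [[a, -b, -c], [-a, b], [-a, c]]

def not_gate(a, b):
    # clauses for a ≡ ¬b: (a ∨ b) ∧ (¬a ∨ ¬b)
    return [[a, b], [-a, -b]]

def pad(clause, t):
    # pad a size-2 clause up to size 3 with a fresh dummy variable t
    return [clause + [t], clause + [-t]]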
19 Nov 11th, 2008
NP is decision problems with a polynomial time certifier algorithm.
P is decision problems with a polynomial time algorithm.
NP-complete problems are the hardest problems in NP.
Definition A decision problem X is NP-complete if:
• X ∈ NP
• Y ≤_P X for all Y ∈ NP
Once we know X is NP-complete, we can prove Z is NP-complete by proving:
• Z ∈ NP
• X ≤_P Z
19.1 Satisfiability – no restricted form
Recall: 3-SAT is NP-complete. Recall the input is a Boolean formula in a special form (3-conjunctive normal form, F = (x_1 ∨ ¬x_2 ∨ x_3) ∧ . . .)
Question: Are there T/F values for variables that make F true?
Theorem SAT is NP-complete.
Proof:
• SAT ∈ NP
• 3-SAT ≤
P
SAT
19.2 Independent Set
Input: Graph G = (V, E) and k ∈ N.
Question: Is there a subset U ⊆ V with |U| ≥ k that is independent (i.e. no two of its vertices joined by an edge)?
Theorem Independent-Set is NP-complete.
Proof Independent-Set is in NP. See previous lecture. We will show 3-SAT reduces to Independent-Set. We
want to give a polytime algorithm for 3-SAT using a hypothesized polytime algorithm for Independent-Set.
Input: Boolean formula F
Goal: Construct a graph G and choose k ∈ N such that F is satisfiable iff G has an independent set ≥ k.
For each clause in F, we’ll make a triangle in the graph. For example, the clause (x_1 ∨ x_2 ∨ ¬x_3) is drawn as a triangle with three vertices labelled x_1, x_2, ¬x_3 and edges between each pair. We have m clauses, so 3m vertices.
For example: (x_1 ∨ x_2 ∨ ¬x_3) ∧ (x_1 ∨ ¬x_2 ∨ x_3) becomes:
(Figure: two triangles, one with vertices x_1, x_2, ¬x_3 and one with vertices x_1, ¬x_2, x_3.)
Connect any vertex labelled x_i with any vertex labelled ¬x_i.
Claim: G has polynomial size. 3m vertices.
Details of Algorithm:
• Input: 3-SAT formula F
– Construct G
– Call the Independent-Set algorithm on G, m
– Return its answer
• Runtime: Constructing G takes poly time. Independent set runs in poly time by assumption.
• Correctness: Claim F is satisfiable iff G has an independent set of size ≥ m.
• Proof: (⇒) Suppose we can assign T/F to the variables to satisfy every clause. So each clause has ≥ 1 true literal; pick the corresponding vertex from each triangle. This gives an independent set of size m.
(⇐) An independent set of size m in G must use exactly one vertex from each triangle. Set the corresponding literals to be true. Set any remaining variables arbitrarily. This satisfies all clauses.
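A sketch of the whole construction in Python, with literals as signed integers (my encoding):

def sat_to_ind_set(clauses):
    # clauses: list of 3-tuples of signed ints, e.g. [(1, 2, -3), (1, -2, 3)]
    # F satisfiable iff the returned graph has an independent set of size m.
    vertices, edges = [], set()
    for j, clause in enumerate(clauses):
        tri = [(j, lit) for lit in clause]       # one vertex per literal occurrence
        vertices.extend(tri)
        edges.update({(tri[0], tri[1]), (tri[1], tri[2]), (tri[0], tri[2])})
    for u in vertices:                           # connect x_i with ¬x_i
        for v in vertices:
            if u[1] == -v[1]:
                edges.add((u, v))
    return vertices, edges, len(clauses)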
19.3 Vertex Cover
Input: Graph G = (V, E) and a number k ∈ N.
Question: Does G have a vertex cover U ⊆ V with |U| ≤ k?
A vertex cover is a set of vertices that ”hits” all edges – i.e. ∀(u, v) ∈ E, u ∈ U or v ∈ U (or both.)
Theorem Vertex-Cover (VC) is NP-complete.
Proof
• VC ∈ NP
Certificate: the set U. Certifier algorithm: verify U is a vertex cover and |U| ≤ k.
• Ind-Set ≤_P VC
Ind-Set and VC are closely related.
Claim U ⊆ V is an independent set iff V − U is a vertex cover.
Suppose that we have a polynomial time algorithm for VC. Here’s an algorithm for independent set: on input G, k, call the VC algorithm on G, n − k.
Correctness: Claim, G has an independent set of size ≥ k iff G has a VC of size ≤ n − k.
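So the reduction is a single call, has_vertex_cover being the hypothesized VC decision algorithm:

def has_ind_set(G, n, k, has_vertex_cover):
    # U independent iff V - U is a vertex cover, so:
    return has_vertex_cover(G, n - k)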
19.4 Set-Cover Problem
Input: a set E of elements, some subsets S_1, . . . , S_m ⊆ E, and k ∈ N.
Question: Can we choose k of the S_i’s that still cover all the elements? i.e. are there i_1, . . . , i_k such that ∪_{j=1...k} S_{i_j} = E?
Example: Can we throw away some intersecting rectangles and still cover some area?
Theorem Set-Cover is NP-complete.
Please find reduction proof on the Internet.
19.5 Road map of NP-Completeness
Circuit-SAT → 3-SAT; 3-SAT → Subset-Sum, Independent Set, and Hamiltonian Cycle; Independent Set → VC → Set-Cover; Hamiltonian Cycle → TSP.
Note: VC ≤_P Set-Cover because VC is a special case, but also Set-Cover ≤_P VC because VC is NP-complete (and Set-Cover ∈ NP.)
These proofs are from a 1972 paper by Richard Karp.
19.6 Hamiltonian Cycle
Input: Directed Graph G = (V, E)
Q: Does G have a directed cycle that visits every vertex exactly once?
Proof (1) ∈ NP, and (2) 3-SAT ≤_P Ham-Cycle: give a polytime algorithm for 3-SAT assuming we have one for Ham-Cycle.
• Input: 3-SAT formula F
• Idea: Construct a digraph G such that F is satisfiable iff G has a Hamiltonian cycle.
F has m clauses and n variables x_1, . . . , x_n.
(skipped this section. read online.)
Can you show the undirected ham cycle problem is hard?
20 Nov 13th, 2008
20.1 Undirected Hamiltonian Cycle
Input: Undirected G = (V, E)
Decision: Does this graph have an undirected Hamiltonian cycle that visits every vertex exactly once?
Theorem Undirected H.C. is NP-complete.
Proof
• ∈ NP
• Dir. H.C. ≤_P Undir. H.C.
Assume we have a polytime algorithm for the undirected case. Design a polytime algorithm for the directed case.
Input: a directed graph G.
Construct an undirected graph G′ such that G has a directed H.C. iff G′ has an undirected H.C.
First idea – G′ = G with directions erased. (⇒) is OK, but (⇐) fails, e.g. on a one-directional cycle.
Second idea – for each vertex v create three vertices v_in, v_mid, v_out joined in a path v_in – v_mid – v_out, and turn each directed edge (u, v) of G into the undirected edge (u_out, v_in). We’ve created G′.
Claim G′ has polynomial size. Say G has n vertices and m edges. Then G′ has 3n vertices and m + 2n edges.
Claim (Correctness) G has a directed H.C. iff G′ has an undirected H.C.
(⇒) easy
(⇐) v_mid has degree two, so the Hamiltonian cycle must use both of its incident edges. Then at each v it must use one incoming edge and one outgoing edge of G.
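The gadget is mechanical enough to write down; a sketch in Python (my encoding of the split vertices):

def directed_to_undirected_hc(vertices, arcs):
    # Each v becomes the path v_in - v_mid - v_out;
    # each arc (u, v) becomes the undirected edge (u_out, v_in).
    V2, E2 = [], []
    for v in vertices:
        V2 += [(v, "in"), (v, "mid"), (v, "out")]
        E2 += [((v, "in"), (v, "mid")), ((v, "mid"), (v, "out"))]
    for u, v in arcs:
        E2.append(((u, "out"), (v, "in")))
    return V2, E2          # 3n vertices and m + 2n edges, as claimed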
This is the level of NP-completeness proof you’ll be expected to do on your assignment.
20.2 TSP is NP-complete
Theorem TSP (decision version) is NP-complete.
Input: G = (V, E), w : E → R_+, and k ∈ R.
Q: Does G have a TSP tour of weight ≤ k?
Proof
• ∈ NP
• Ham-Cycle ≤_P TSP.
Ham-Cycle is the special case of TSP with w(e) = 1 ∀e and k = n.
Theorem Hamiltonian Path is NP-complete.
Input: an undirected graph G
Question: does G have a Hamiltonian path, one that visits each vertex exactly once?
Proof
– ∈ NP
– Ham-Cycle ≤_P Ham-Path
We want an algorithm for Ham-Cycle using an algorithm for Ham-Path. Given G, the input for Ham-Cycle, construct G′ such that G has a H.C. iff G′ has a Ham path.
First idea: G′ ← G. Well, ⇒ is OK but we can find a counterexample for ⇐. Exercise: find a counterexample.
Second idea: Create three new vertices a, b, c in G′ and connect a and c to all vertices in G. This aims to give: G has a Ham cycle iff G′ has a Ham path.
Third idea: Add a single vertex and connect it to everything in G′.
Fourth idea: erase each vertex from G one at a time and ask for a Hamiltonian path.
Final idea: Take one vertex v and split it into two identical copies. Add new vertices s and t adjacent only to the two copies.
Claim: poly-size.
Again, this is the kind of thing you’ll be expected to do on your assignment.
20.3 Subset-Sum is NP-Complete
This one is not something you’ll be expected to do on your assignment.
Input: numbers a_1, . . . , a_n ∈ R and a target W.
Question: Is there a subset S ⊆ {1, . . . , n} such that Σ_{i∈S} a_i = W?
Recall: the dynamic programming algorithm is O(n W); the branch-and-bound algorithm was O(2^n).
Proof
1. ∈ NP
2. 3-SAT ≤_P Subset-Sum
Give a polynomial-time algorithm for 3-SAT using a polytime algorithm for Subset-Sum.
The input is a 3-SAT formula F with variables x_1, x_2, . . . , x_n and clauses c_1, . . . , c_m. Construct a Subset-Sum input a_1, . . . , a_t, W such that F is satisfiable iff there is a subset of the a_i’s with sum W.
E.g., F = (x_1 ∨ ¬x_2 ∨ x_3) ∧ (¬x_1 ∨ ¬x_2 ∨ x_3).
            c_1  c_2  . . .  c_m | x_1  x_2  x_3  . . .
x_1          1    0             |  1    0    0
¬x_1         0    1             |  1    0    0
x_2          0    0             |  0    1    0
¬x_2         1    1             |  0    1    0
x_3          1    1             |  0    0    1
¬x_3         0    0             |  0    0    1
. . .
slack 1,1    1    0             |  0    0    0
slack 1,2    2    0             |  0    0    0
slack 2,1    0    1             |  0    0    0
slack 2,2    0    2             |  0    0    0
(want)      ≥ 1  ≥ 1            |
target       4    4             |  1    1    1
Make a 0-1 matrix, interpreting the rows as numbers (in base 10 rather than base 2, so that there are no carries.) Add extra columns: column x_i has 1’s in rows x_i and ¬x_i, and zeros elsewhere.
• We want to choose the x_i row or the ¬x_i row, but not both: the target of 1 in column x_i forces this.
• We want to deal with the clause-column targets of ”≥ 1.” Solution: add two slack rows per clause c_i – slack i,1 with a 1 in column c_i, and slack i,2 with a 2 in column c_i, and 0 everywhere else. Set the target for each column c_i to 4.
Finally, each row of the matrix becomes a base-10 number. These are the a_i’s. The target row of the matrix turns into W in base 10.
Claim (Size.) How many a_i’s? 2n + 2m. How many base-10 digits in the a_i’s and W? Equal to the number of columns, n + m.
Claim (Correctness.) F is satisfiable iff ∃ a subset of the a_i’s with sum W.
Proof (⇒) If x_i is true, choose row x_i; if false, choose row ¬x_i. Then column x_i has sum 1 as required. For the column of clause c_i, there are 1, 2, or 3 true literals: with 3, add slack i,1 for a total of 4; with 2, add slack i,2 for a total of 4; with only a single true literal, use both slack i,1 and slack i,2 for again 4.
This row set gives sum W.
(⇐) Some subset of rows adds to W.
The column for x_i ⇒ we use exactly one of rows x_i, ¬x_i; set x_i = T or F accordingly. This satisfies all clauses: consider c_j, and sum down the c_j column to get 4. The slacks give ≤ 3, so some literal in c_j must be true.
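A sketch of the number construction in Python (signed-integer literals and base-10 digits, my encoding; columns are c_1 . . . c_m then x_1 . . . x_n):

def sat_to_subset_sum(n, clauses):
    # clauses: list of 3-tuples of signed ints over variables 1..n
    m = len(clauses)
    digits = m + n
    def num(row):                        # row: dict column -> digit, column 0 leftmost
        return sum(d * 10 ** (digits - 1 - c) for c, d in row.items())
    a = []
    for i in range(1, n + 1):
        for lit in (i, -i):              # rows x_i and ¬x_i
            row = {m + i - 1: 1}         # the 1 in column x_i
            for j, clause in enumerate(clauses):
                if lit in clause:
                    row[j] = 1           # 1 in each clause column it satisfies
            a.append(num(row))
    for j in range(m):                   # slack rows for clause c_j
        a.append(num({j: 1}))
        a.append(num({j: 2}))
    W = num(dict([(j, 4) for j in range(m)] + [(m + i, 1) for i in range(n)]))
    return a, W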
21 Nov 18th, 2008
NP-Completeness continued.
Theorem Circuit-SAT is NP-Complete.
Recall the input: a circuit of ∨, ∧, ¬ gates, with variables as some of the inputs, and one sink: the final output.
Question: are there 0-1 values for the variables for which the circuit outputs 1?
Proof
• ∈ NP
• Y ≤_P Circuit-SAT for all Y in NP.
What do we know about Y ? It has a polynomial time certifier algorithm B (an input s for Y has YES output iff there exists a certificate t of poly size such that B(s, t) outputs YES.)
We assume there is a polynomial time algorithm for Circuit-SAT and give a polynomial time algorithm for Y using that subroutine.
Let n = size(s), the size of the input. Let p(n) be a polynomial bounding size(t), i.e. size(t) ≤ p(n).
We must convert the algorithm B to a circuit (to hand to the Circuit-SAT subroutine.)
Algorithm B (after compiling and assembling) becomes a circuit computation at the lowest hardware level. Because B runs in polynomial time, the circuit has polynomial size.
Algorithm B (for inputs of size n) becomes a circuit C_n (of polynomial size in n.)
”Is there a certificate?” becomes ”are there values for the variables?”
Correctness:
Input s for Y gets a YES output iff there exists a certificate t such that B(s, t) outputs YES iff there exist values for the certificate variables such that C_n outputs 1 iff C_n is satisfiable.
Algorithm for Y :
– Input s
– Convert B to the circuit C_n
– Hand C_n to the Circuit-SAT subroutine
21.1 Major Open Questions
Is P = NP? If one NP-complete problem is in P, then they all are.
If P ≠ NP then there are problems in between P and NP-complete (Ladner, 1970s), i.e. A ≤_P B but B not ≤_P A (i.e. A <_P B.)
But what are natural candidates for these? In Garey and Johnson (’79) these were:
• Linear Programming: in P (’80)
• Primality Testing: in P (’02)
• Min. Weight Triangulation for a Point Set: NP-complete (’06) (not a famous problem)
• Graph Isomorphism: open.
Given two graphs each on n vertices, are they the same after relabeling vertices?
21.2 Undecidability
So far we’ve been talking about efficiency of algorithms. Now, we’ll look at problems with no algorithm whatsoever.
This is also a topic not conventionally covered in an algorithms course. So you won’t find it in textbooks. But
everyone in the School of Computer Science thinks it’s ”absolutely crucial” that everyone graduating with a
Waterloo degree knows this stuff.
21.2.1 Examples
Tiling: Given square tiles with colours on their sides, can I tile the whole plane with copies of these tiles? Colours must match, and no rotations or flips are allowed.
The answer is, actually, that no algorithm exists. For a finite (k × k) piece of the plane it is possible: with t tile types I could just try all t choices in each of the k^2 places, so that problem is O(t^{k^2}).
Program Verification: Given a specification of the inputs and corresponding outputs of a program (the specification is finite, the potential number of inputs is infinite), and given a program, does the program give the correct corresponding outputs?
Answer: no algorithm. On one hand, this is sad for software engineers, because their processes attempt to check exactly this. On the plus side, your skills and ingenuity will always be needed...
Halting Problem: Given a program, does it halt (or go into an infinite loop?)
Sample-Program
while x ≠ 1 do
x ← x − 2
end
This halts iff x is odd and positive.
Sample-Program-2
while x ≠ 1 do
if x is even then x ← x/2
else x ← 3x + 1
end
Assume x > 0. Sample runs: x = 5, 16, 8, 4, 2, 1. x = 9, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1.
Does this program halt for all x? That’s open.
Also, any math question about the existence of a number can be turned into a halting question. Idea: to ask whether there is an x such that Foo(x), run: x ← 1; while not Foo(x), x ← x + 1.
Definition A decision problem is undecidable if there’s no algorithm for it.
Definition (more general) A problem is unsolvable if there’s no algorithm for it.
What is a problem? A specification of inputs and corresponding outputs.
What is an algorithm? Church-Turing Thesis (not proved): an algorithm is a Turing machine.
Theorem The following models of computing are equivalent:
• Turing machines
• Java programs
• RAM
• Circuit families
22 Nov 20th, 2008
22.1 Undecidability
”Which problems have no algorithm?”
Definition A decision problem is undecidable if it has no algorithm. A (general) problem is unsolvable if it has no algorithm.
22.2 History of Undecidability
• Gottlob Frege - 1900 - one of many who tried to axiomatize mathematics.
• Bertrand Russell (1872-1970): Russell’s paradox (I recommend his biography, and some of his philosophy books.)
Let S = the set of sets that do not contain themselves. Is S a member of itself?
– NO: then S meets the defining condition, so S is a member of S. Contradiction.
– YES: then S contains itself, so it fails the defining condition. Contradiction.
Contradiction either way! So what is wrong about this?
First undecidability result (from Turing):
Theorem The Halting Problem is undecidable.
Halting Problem
• Input: Some program or algorithm A and some input string w for A.
• Question: Does A halt on w?
Proof: (by contradiction.) Suppose there is a program H that decides the halting problem. H takes A, w as input
and outputs yes/no.
Construct a new program H′ whose input is a program B:
begin
call H(B, B)
if no, halt
else, loop forever
end
So H′ is like Russell’s set S. The question ”does S contain S?” is like asking ”does H′ halt on its own input?”
Suppose yes. Then this is a yes case of the halting problem, so H(H′, H′) outputs yes. But look at the code for H′ on input H′: it loops forever. Contradiction.
Suppose no. Then this is a no case of the halting problem, so H(H′, H′) outputs no. But then (looking at the code of H′) H′ halts on input H′. Contradiction either way. Therefore, our assumption that H exists is wrong.
Therefore, there is no algorithm to decide the halting problem.
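In modern dress the diagonal program is tiny. A sketch, where H is the hypothesized (and, as just shown, impossible) halting decider:

def H_prime(B):
    if not H(B, B):          # H says: B does not halt on input B
        return               # ...then halt
    while True:              # H says: B halts on input B
        pass                 # ...then loop forever

# Asking H(H_prime, H_prime) reproduces the contradiction above:
# neither answer can be consistent with H_prime's own behaviour.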
23 Nov 25th, 2008
Assignment 3 – out of 45.
Assignment 4 – due Friday.
Final exam: study sheet is allowed.
23.1 Undecidability
Recall: a decision problem is undecidable if there is no algorithm for it.
Halting Problem: given a program/algorithm A and an input w, does A halt on input w?
To show other problems are undecidable, use reductions.
Theorem: If P and Q are decision problems and P is undecidable and P ≤ Q then Q is undecidable.
Recall A ≤ B or ”A reduces to B” if an algorithm for B can be used to make an algorithm for A.
Proof By contradiction. Suppose Q is decidable. Then it has an algorithm. By the definition of ≤, we get an
algorithm for P. This is contrary to P undecidable.
23.2 Other Undecidable Problems
23.2.1 Halt-No-Input or Halt-on-Empty
Given a program A with no input, does it halt?
Theorem Halt-No-Input is undecidable.
Proof: Halting Problem ≤ Halt-No-Input.
Suppose we have an algorithm X for Halt-No-Input. Make an algorithm for the Halting Problem.
Input: a program A and an input string w.
Algorithm: Make a program A′ that has w hard-coded inside it and runs A on it. Call X on A′, which outputs the yes/no answer.
Correctness A halts on w iff A′ halts.
23.2.2 Program Verification
Given a program, and specification of inputs and corresponding outputs, does the program compute the correct
output for each input?
Theorem Program Verification is undecidable.
Proof Halt-No-Input ≤ Program Verification.
Suppose we have an algorithm V to decide Program Verification. Make an algorithm to solve Halt-No-Input.
Input: program A.
Output: does A halt?
Idea: Modify the code of A to get a program A′ with input and output:
A′: read the input, discard it; run A; output 1.
Then call V(A′, spec: ”for any input, output 1”).
Correctness A halts iff V(A′, the spec above) answers yes.
Proof: A halts iff A′ produces output 1 for every input iff V(A′, the spec above) answers yes.
Program Equivalence (something TA’s would love!)
Given two programs, do they behave the same (i.e. produce the same outputs?)
Theorem Program Equivalence is undecidable.
Proof attempt: Program-Verification ≤ Program-Equiv (?)
Suppose we have an algorithm for Program Equivalence; give an algorithm for Program Verification. Input: a program A and input/output specs for A. This can work, but we need more formality about input/output specs. Let’s try another approach.
Halt-No-Input ≤ Program-Equiv.
Suppose we have an algorithm for Program Equivalence. Make an algorithm for Halt-No-Input. Input: a program A.
Algorithm: Make A′ as before. Make a program B: read the input, discard it, just output 1. Call the algorithm for Program-Equiv on A′, B.
Correctness
A′ is equivalent to B iff A halts.
23.2.3 Other Problems (no proofs)
Hilbert’s 10th Problem
Given a polynomial P(x_1, . . . , x_n) with integer coefficients, does P(x_1, . . . , x_n) = 0 have positive integer solutions?
Possible approach: try all integers. This will correctly answer ”yes” if the answer is ”yes.” But solutions can be huge: e.g. the least integer solution to x^2 = 991y^2 + 1 has a 30-digit x and a 29-digit y.
This was proved undecidable in the 70’s.
Conway’s Game of Life
Rules: cells die with 0-1 or 4+ neighbours, and are born with exactly three neighbours. Undecidable.
24 Nov 27th, 2008
Final Exam: Wed Dec 10th. Office hours: show webpage. 48 and 49 must be rounded up to 50.
24.1 What to do with NP-complete problems
Sometimes you only want special cases of an NP-complete problem.
• Parameterized Tractability: exponential algorithms that work in polynomial time for special inputs. For
example, maximum degree in a graph. There may be algorithms that work in polytime when you bound
that maximum degree.
• Exact exponential time algorithm: use heuristics to make branch-and-bound explore the most promising
choice first (and run fast sometimes.)
• Approximation Algorithms: CS 466.
– Vertex Cover: Greedy algorithm that finds a good (not necessarily min) vertex cover.
C ← ∅
while E ≠ ∅:
pick any edge e = (u, v) in E
C ← C ∪ {u, v}
remove from E all edges incident to u or v
end
Claim: this algorithm finds |C| ≤ 2 (min size of a V.C.)
Proof: The edges we pick form a matching M (no two share an endpoint), and |C| = 2|M|. Every edge in M must be hit by some vertex in any V.C., and these vertices are distinct, ∴ |M| ≤ min size of a V.C., ∴ |C| ≤ 2 (min V.C.)
We call this a ”2-approximation algorithm.”
Some NP-complete problems have no constant-factor approximation algorithm (unless P = NP) such
as Independent Set.
Some NP-complete problems have approximation factors as close to 1 as we like – at the cost of
increasing running time. Limit is approximation factor = 1 (an exact algorithm) with an exponential-
time algorithm.
– Example Subset-Sum
Given w_1, . . . , w_n and W: is there S ⊆ {1 . . . n} such that Σ_{i∈S} w_i = W?
As optimization: we want Σ_{i∈S} w_i ≤ W, maximizing Σ_{i∈S} w_i.
Recall: Dynamic programming O(n W).
Note: a solution with Σ_{i∈S} w_i ≥ (1/2)(true max) would be a 2-approximation; one with Σ_{i∈S} w_i ≥ (1/(1+ε))(true max) is a ”(1+ε)-approximation.”
Claim: there is a (1+ε)-approximation algorithm for Subset-Sum with runtime O((1/ε) n^3). As ε → 0 we get a better approximation but a worse runtime.
Idea: apply dynamic programming to rounded input.
Rough rounding – few bits – rough approximation.
Refined rounding – many bits – good approximation.
Rounding parameter b (later b = (ε/n)(max w_i for i = 1 . . . n)).
Round each weight up to a multiple of b: w̃_i ← ⌈w_i / b⌉ b.
Claim: w_i ≤ w̃_i ≤ w_i + b.
Now all the w̃_i’s are multiples of b, so scale and run dynamic programming: ŵ_i ← w̃_i / b, and Ŵ ← ⌊W / b⌋.
Note: we should check feasibility of rounding.
Runtime: O(n Ŵ). And Ŵ ≤ O(W/b) = O(W n / (ε max w_i)) ≤ O((1/ε) n^2), since W ≤ n (max w_i).
Therefore, our runtime is O((1/ε) n^3).
How good is our approximation? Each w̃_i is off by ≤ b. So the true maximum ≤ Σ_{i∈S} w̃_i ≤ Σ_{i∈S} w_i + nb = Σ_{i∈S} w_i + ε (max w_i) ≤ Σ_{i∈S} w_i + ε Σ_{i∈S} w_i = (1 + ε) Σ_{i∈S} w_i.
Second-last step: this assumes max w_i ≤ Σ_{i∈S} w_i; otherwise, use the single item max w_i as the solution.
Therefore, a (1 + ε)-approximation algorithm.
(And assume w_i ≤ W for all i; else throw that item out.)
Idea: dynamic programming algorithm is very good – it only can’t handle having lots of bits in a
number. So throw away half the bits and get an approximate answer.
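A sketch of the whole scheme in Python (my code, following the lecture’s parameters; weights are rounded up, so any set the DP finds is genuinely feasible, modulo the boundary effects of rounding noted above):

import math

def subset_sum_approx(w, W, eps):
    # Aim: a feasible sum S with (true max) <= (1 + eps) * S, in time ~ O(n^3 / eps).
    w = [x for x in w if x <= W]           # assume w_i <= W; else throw the item out
    if not w:
        return 0
    n = len(w)
    b = eps * max(w) / n                   # the rounding parameter
    cap = int(W // b)                      # scaled capacity, ~ (1/eps) n^2
    best = {0: 0}                          # reachable scaled sum -> largest true sum
    for x in w:
        s = math.ceil(x / b)               # scaled (rounded-up) weight
        for r, true_sum in list(best.items()):     # snapshot: each item used once
            if r + s <= cap and true_sum + x > best.get(r + s, -1):
                best[r + s] = true_sum + x
    return max(best.values())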
• Do alternative methods of computing help with NP-complete problems?
Will massively parallel computers help? Only by a factor of number of CPUs. This is like ”a drop in the
bucket” for exponential time algorithms.
• Randomized algorithms (CS 466?)
If I have access to a RNG, then what can I now do?
Primality: can be tested in polytime with a randomized algorithm (70’s) but also without randomness (2002.)
• Quantum Computing
The hope is that it offers massive parallelism for free. Huge result (Shor, 1994) – efficient factoring on a
quantum computer.
Waterloo is, by the way, the place to be for quantum computing. In Physics, CS, and C&O we have experts
on the subject.
To read a tiny bit more on quantum computing, see [DPV].
24.2 P vs. NP
53

CONTENTS

CONTENTS

11 Oct 14th, 2008 11.1 Graph Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2 Minimum Spanning Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 Oct 16th, 2008 12.1 Graph Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.1.1 Prim’s Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.2 Shortest Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Oct 21, 2008 13.1 All Pairs Shortest Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.1.1 Floyd-Warshall Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Oct 23, 2008 14.1 Dijkstra’s Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.2 Connectivity in Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.2.1 Finding 2-connected components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 Oct 28th, 2008 15.1 Backtracking and Branch/Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.2 Branch-and-Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.2.1 Branch and Bound TSP Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Oct 30th, 2008 16.1 Recall . . . . . . . . . . . . . . . . . . . . 16.2 Lower Bounds . . . . . . . . . . . . . . . . 16.2.1 Basic Techniques . . . . . . . . . . 16.2.2 State-of-the-Art in Lower Bounds . 16.3 Polynomial Time . . . . . . . . . . . . . . 16.4 Reductions . . . . . . . . . . . . . . . . .

20 20 21 23 23 23 24 25 25 25 27 27 28 29 30 30 32 32 33 33 33 33 34 34 35 35 35 36 37 38 38 38 38 39 40 40 41 41 42 42 42

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

17 Nov 4th, 2008 17.1 Decision Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.2 P or NP? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Nov 6th, 2008 18.1 Recall . . . . 18.2 N P -Complete 18.2.1 Circuit 18.2.2 3-SAT

. . . . . . . . . . . . . . . . Satisfiability . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

19 Nov 11th, 2008 19.1 Satisfiability – no restricted form 19.2 Independent Set . . . . . . . . . 19.3 Vertex Cover . . . . . . . . . . . 19.4 Set-Cover Problem . . . . . . . . 19.5 Road map of NP-Completeness . 19.6 Hamiltonian Cycle . . . . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

ii

CONTENTS

CONTENTS

20 Nov 13th, 2008 20.1 Undirected Hamiltonian Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.2 TSP is NP-complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.3 Subset-Sum is NP-Complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Nov 18th, 2008 21.1 Major Open Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.2 Undecidability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.2.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 Nov 20th, 2008 22.1 Undecidability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22.2 History of Undecidability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 Nov 25th, 2008 23.1 Undecidability . . . . . . . . . . . . . . 23.2 Other Undecidable Problems . . . . . . 23.2.1 Half-No-Input or Halt-on-Empty 23.2.2 Program Verification . . . . . . . 23.2.3 Other Problems (no proofs) . . .

43 43 43 44 46 46 47 47 48 48 48 49 49 49 49 50 51 51 51 53

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

24 Nov 27th, 2008 24.1 What to do with NP-complete problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24.2 P vs. NP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

iii

1 SEP 9TH, 2008

1
1.1

Sep 9th, 2008
Welcome to CS 341: Algorithms, Fall 2008

I’m Anna Lubiw, I’ve been in this department/school quite some time. This term I’m teaching both sections of CS 341. I find the earlier lecture is better though, which may be counterintuitive. The number of assignments is fewer this term. There are fewer grad TA’s this term, so the assignments may be shorter (but quite likely, not any easier!) Textbook is CLRS. $140 in the bookstore, on reserve in the library.

1.2

Marking Scheme

25% Midterm 40% Final exam 35% Assignments We have due dates for assignments already (see the website.) Unlike in 2nd year courses where ISG keeps everything coordinated, in third year we’re on our own.

1.3

Course Outline

Where does this word come from? An Arabic scientist from 600 AD. Originally, algorithms for arithmetic, developed by the mathematician/scientist (not sure what to call him back then.) In this course, we’re looking for the best algorithmic solutions to problems. Several aspects: 1. How to design algorithms i.e. what shortest-path algorithm to use for street-level walking directions. (a) Greedy algorithms (b) Divide and Conquer (c) Dynamic Programming (d) Reductions 2. Basic Algorithms (often domain specific) Anyone educated in algorithms needs to have a general repertoire of algorithms to apply in solving new problems (a) Sorting (from first year) (b) String Matching (CS 240) 3. How to analyze algorithms i.e. do we run it on examples, or try a more theoretical approach (a) How good is an algorithm? (b) Time, space, goodness (of an approximation) 4. You are expected to know (a) O notation, worst case/avg. case (b) Models of computation

1

1 SEP 9TH, 2008

1.4 A Case Study (Convex Hull)

5. Lower Bounds This is not a course on complexity theory, which is where people really get excited about lower bounds, but you need to know something about this. (a) Do we have the best algorithm? (b) Models of computation become crucial here. (c) NP-completeness (how many of you have secret ambitions to solve this? I started off wanting to solve it, before it was known it was so hard...)

1.4

A Case Study (Convex Hull)

To bound a set of points in 2D space, we can find the max/min X,Y values and make a box that contains all the points. A convex hull is the smallest convex shape containing the points (think the smallest set of points that we can connect in a ring that contains all the other points.) Analogy: putting an elastic band around the points, or in three dimensions putting shrink-wrap around the points. Why? This is a basic computational geometry problem. The convex hull gives an approximation to the shape of a set of points better than a minimum bounding box. Arises when digitizing sculptures in 3D, or maybe while doing OCR character recognition in 2D. 1.4.1 Algorithm

Definition (better from an algorithmic point of view) A convex hull is a polygon and its sides are formed by lines that connect at least two points and have no points on one side. A straightforward algorithm (sometimes called a brute force algorithm, but that gives them a bad names because oftentimes the straightforward algorithms are the way to go) – for all pairs of points r, s find the line between r, s and if all other points lie on one side only then the line is part of the convex hull. Time for n points: O(n3 ). Aside: even with this there are good and bad ways to ”see which side points are on.” Computing the slope between the lines is actually a bad way to do this. Exercise: for r, s, and p, how to do it in the least steps, avoiding underflow/overflow/division. Improvement Given one line , there is a natural ”next” line. Rotate through s until it hits the next point.

s l r t l'

t is an ”extreme point” (min angle α). Finding it is like ginding a max (or min) – O(n). Time for n points: O(n2 ). Actually, if h = the number of points on the convex hull, the algorithm takes O(n × h) Can we do even better? (you bet!) Repeatedly finding a min/max (which should remind you of sorting.) Example Sort the points by x coordinate, and then find the ”upper convex hull” and ”lower convex hull” (each of which comes in sorted order.) The sorting will cost O(n log n) but the second step is just linear. We don’t quite have a linear algorithm here but this will be much better. Process from left to right, adding points and each time figuring out whether you need to 2

The take-home message is that to be precise we need to spend more time on models of computation. and ”walk down” to get the lower bridge. One paper written called ”The ultimate convex hull algorithm?” (with a question mark in the name. and O(n) to find the upper/lower bridges.) Measuring in terms of n. We need a restricted model to say that sorting is Ω(n log n) – but need the power of indirect addressing. no.1 SEP 9TH. we could sort faster. This will be O(n) to divide. Which is better? Well. If we could find a convex hull faster.g. an O(n × h) algorithm. We saw an O(n log n) algorithm. but intuition is that we’ll have to sort the points somehow. Get recurrence relation: n + O(n) 2 This is the same as e. Why not? We’ll show soon. and h. This answer uses divide and conquer. In three-dimensional space you can still get O(n log n) algorithms for this. 2. From there recover the sorted order. It comes out to O(n log n). Never Any Better Finally let’s talk ever-so-slightly about not getting better than O(n log n). edge from max x coordinate on the left to minimum x coordinate on the right. upper bridge lower bridge 1. depends on whether h > log n or not. From e. very unusual) gave an algorithm that’s O(n log h). To be rigorous. T (n) = 2T 3 . This is a case of using a reduction (which we will study a lot in this course) Time for n points: O(n log n). Technique: put points on a parabola (or alternately other shape) with a map x → (x. Combine by finding upper and lower bridges. In some sense. 3. This is an intuitive argument. 2008 1. merge-sort. the output size. Challenge Look up the O(n log h) algorithm by Timothy Chan (here in SCS) and try to understand it. but not the same way. (Don’t worry if that seems fuzzy. Divide points in half by vertical line. One more algorithm Will not be better than O(n log n). ”walk up” to get upper bridge. we need to specify the model of computation. the input size. Recursively find convex hull on each side.4 A Case Study (Convex Hull) go ”up” or ”down” from each point. x2 ) and compute the convex hull of these points.

On the assignment you must prove this is in fact true. This takes seven coins.. 2008 Missing. • Suppose there is an optimal solution.” The goal is to maximize the number of activities we can perform. or ”activity selection. 3. • Greedy does better at each step.47 in as few coins as possible. 3. Then the Greedy approach can be made into this solution. 2008 2 Sep 11th. Suppose you want to pay $ 3. Greedy Approaches • Pick the first activity NO • Pick the shortest activity NO • Pick one with the fewest overlaps NO • Pick the one that ends earliest YES We can write the algorithm as A <. and I claim this is the minimum number of coins. pick non-overlapping activities. • Metroids (a formalization of when Greedy approaches work) (in C&O) 4 . 3 Sep 16th. Given activities.A union { i } end This looks like an O(n log n) algorithm (as it takes that long to sort. and then O(n) after that) Correctness Proof There are three approaches to proving correctness of greedy algorithms.1 Example: Making change Example: for making change. each with an associated time interval.empty set for i = 1 . 2008 Assignment 1 is available online. n if activity i doesn’t overlap any activities in A A <.2 Example: Scheduling time Interval scheduling.3 SEP 16TH.

. . Exercise. half of item 2 Greedy Algorithm Order items 1. Therefore l ≤ k and greedy gives the optimal solution. . xi is the weight of item i that we chose. bi was a candidate – we chose ai . . . bl so swap is OK. . . n by vi wi . . Proof By induction on i. bl is a solution. . .e. . . oatmeal) We’ll look at 0-1 Knapsack later (since it’s harder) (and when we study dynamic programming) So imagine we have a table of items: Weight wi 6 4 4 Value vi 12 7 6 vi wi . tent) • Fractional: items are divisible (e. .3 Example: Knapsack problem I have items i. free-W } free-w <.g. free-w <. . . . . bl is a solution. Prove a1 . . . . bi does not overlap ai−1 by assumption. Claim a1 . . in the order greedy alg. Base case i = 0 and b1 . we’re swapping bi out and ai in. .free-w . .x_i end 5 . chooses them. bk+1 . . Item i has weight wi and i has values vi .min{ w_i. . There are two versions: • 0-1 Knapsack: the items are indivisible (e. if l > k then by claim a1 . That proves claim. . ak . . . Suppose that l > k and show that greedy algorithm would not have stopped at k. ak } ordered by finish time (i. . . .g. We want to show l ≤ k. . .) Let B = {b1 . Pick items of total weight ≤ W maximizing the sum of V . 2008 3. ai bi+1 . bl } be any other set of non-overlapping intervals ordered by finish time. . . i.3 SEP 16TH.W for i=1. Inductive case a1 .3 Example: Knapsack problem Theorem This algorithm returns a maximum size set A of non-overlapping intervals. . .n x_i <. But then the Greedy algorithm would not have stopped at ak . go through the picture. b2 . .e. 3.. . . . . 1 2 3 W = 8. To proce theorem. Weight limit W for the knapsack. bi+1 . . . ai . . Greedy by For the 0 − 1 knapsack: • Greedy picks item 1 – value 12 • Optimal solution For the fractional case: • Take all of item 1. Well. . So when we choose ai . . . n. ai−1 bi . bl is a solution. bl is a solution. . bl is also a solution. Proof Let A = {a1 . . . So finish (ai ) ≤ finish (bi ) ∴ ai doesn’t overlap bi+1 .

4 5 Sep 18. 2008: MISSING Sep 23. Let k be the minimum index with xk = yk . Sorting and searching are often divide-and-conquer algorithms. . .T +1 .) xi = yi = W . Then yk < xk (because greedy took max xk . So the sum of the weights yi = W +∆(vk /wk ) − ∆(vl /wl ) = ∆(vk /wk − vl /wl ) vk vl > because k > l wk wl Thus yi is an even better solution. yn .5 SEP 23. . yk ←k +∆ and yl ← yl − ∆. . Proof We use x1 . The steps are: • Divide – break problem into smaller subproblems • Recurse – solve smaller sets of problems • Conquer/Combine – ”put together” solutions from smaller subproblems Some examples are: • Binary search – Divide: Pick the middle item – Recurse: Search in each side. Thus own assumption that opt is better than greedy fails. Ida: swap excess item l for item k. with only one subproblem of size – Conquer: No work – Recurrence relation: T (n) = T – Time: T (n) ∈ O(log n) • Merge sort – Divide: basically nothing 6 n 2 n 2 n 2 n 2 + 1 or more formally T (n) = max T . So there exists an index l > k such that yl > xl . Claim Greedy algorithm gives the optimal solution to fractional knapsack problem. The only item we take fractionally is the last. 2008: Divide and Conquer I started with Greedy because it’s fun to get to some interesting algorithms right away. . Divide and conquer however is likely the one you’re most familiar with. ∆ ← min{yl . wk − yk }. . 2008: DIVIDE AND CONQUER xi = W (assuming W < The value we get is wi ) n i=1 vi wi xi Note: solution looks like it’s for 0-1. . . both terms of which are greater than zero. xn and the optimal uses y1 . Well.

  – Recurse: two subproblems, of size ⌈n/2⌉ and ⌊n/2⌋
  – Conquer: n − 1 comparisons
  – Recurrence: T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + (n − 1) comparisons, with T(1) = 0
  – Time: T(n) ∈ O(n log n)

5.1 Solving Recurrence Relations

Three approaches, all of which are in CLRS. If we really did want to compute T(n) exactly for merge sort, then T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + n − 1 with T(1) = 0, and the exact solution is T(n) = n⌈log n⌉ − 2^⌈log n⌉ + 1.

5.1.1 "Unrolling" a recurrence

Use T(n) = 2T(n/2) + n − 1 for n even, with T(1) = 0. So for n a power of 2:

T(n) = 2T(n/2) + n − 1
     = 2( 2T(n/4) + n/2 − 1 ) + n − 1
     = 4T(n/4) + 2n − 3
     ...
     = 2^i T(n/2^i) + i·n − (2^i − 1)

We want n/2^k = 1, i.e. 2^k = n, so k = log n:

T(n) = 2^k T(n/2^k) + k·n − (2^k − 1)
     = n·T(1) + n log n − n + 1
     = n log n − n + 1 ∈ O(n log n)

If our goal is just to say that mergesort takes O(n log n) for all n (as opposed to exactly computing T(n)), then we can add that T(n) ≤ T(n′), where n′ = the smallest power of 2 bigger than n.
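A quick sanity check of the closed form (a throwaway script, not part of the course notes): compute T(n) from the recurrence and compare with n log n − n + 1 at powers of 2.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        # Mergesort comparison count: T(1) = 0, T(n) = T(ceil) + T(floor) + n - 1
        if n == 1:
            return 0
        return T((n + 1) // 2) + T(n // 2) + n - 1

    for k in range(1, 11):
        n = 2 ** k
        assert T(n) == n * k - n + 1   # n log n - n + 1 at n = 2^k
    print("closed form matches at powers of 2")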

5.1.2 Guess an answer, prove by induction

Again for the mergesort recurrence, prove that T(n) ∈ O(n log n). Be careful: prove by induction that T(n) ≤ c·n log n for some constant c. Often you don't know c until you're working on the problem. For n even,

T(n) = 2T(n/2) + n − 1
     ≤ 2( c·(n/2) log(n/2) ) + n − 1     (by induction)
     = cn(log n − log 2) + n − 1
     = cn log n − cn + n − 1
     ≤ cn log n   if c ≥ 1

I'll leave the details as an exercise (we need a base case, and we need to do the case of n odd). A good trick for avoiding floors and ceilings, for those of you for whom this is not entirely intuitive, is to deal separately with n even and n odd.

Another example

Claim  T(n) = 2T(n/2) + n is in O(n).
Prove  T(n) ≤ cn for some constant c.
Assume by inductive hypothesis that T(n′) ≤ cn′ for all n′ < n. Inductive step:

T(n) = 2T(n/2) + n ≤ 2c(n/2) + n = (c + 1)n

Wait – this proof is fallacious: constants aren't supposed to grow like c + 1 above. Please do not make this kind of mistake on your assignments.

Example 2

T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 1, with T(1) = 1.

Let's guess T(n) ∈ O(n) and try to prove T(n) ≤ cn.

Induction step:

T(n) ≤ c⌊n/2⌋ + c⌈n/2⌉ + 1 = cn + 1

– we've got trouble from that +1. Let's try unrolling for n a power of 2 (n = 2^k):

T(n) = 2T(n/2) + 1
     = 4T(n/4) + 2 + 1
     = 2^k T(n/2^k) + Σ_{i=0}^{k−1} 2^i
     = n·T(1) + 2^k − 1
     = 2n − 1

So try proving by induction that T(n) ≤ c·n − 1. In that case we have

T(n) ≤ ( c(n/2) − 1 ) + ( c(n/2) − 1 ) + 1 = cn − 1

This matches perfectly. Message: sometimes we need to strengthen the inductive hypothesis and lower the bound.

5.1.3 Changing Variables

Suppose we have a mystery algorithm with recurrence T(n) = 2T(√n) + log n. Substitute m = log n, n = 2^m (and ignore the rounding); then

T(2^m) = 2T(2^{m/2}) + m

Let S(m) = T(2^m); then S(m) = 2S(m/2) + m. We can say

S(m) ∈ O(m log m)
T(2^m) ∈ O(m log m)
T(n) ∈ O(log n · log log n)

5.1.4 Master Theorem

From MATH 239: linear recurrences T(n) = a_{n−1}T(n − 1) + a_{n−2}T(n − 2) + ... + a_1 T(1) + f(n) with f(n) = 0 are "homogeneous" because they're equal to zero. That never happens in algorithms (because we always have some work to do!). We need

T(n) = aT(n/b) + c·n^k

The more general case, where c·n^k is replaced by an arbitrary f(n), is handled in the textbook. We'll first look at k = 1:

T(n) = aT(n/b) + cn

Results (exact) are:

a = b    T(n) ∈ Θ(n log n)
a < b    T(n) ∈ Θ(n)
a > b    T(n) ∈ Θ(n^{log_b a})   – the final term dominates

Theorem If T(n) = aT(n/b) + c·n^k with a ≥ 1, b > 1, c > 0, k ≥ 1, then

T(n) ∈ Θ(n^k)            if a < b^k
T(n) ∈ Θ(n^k log n)      if a = b^k
T(n) ∈ Θ(n^{log_b a})    if a > b^k

We're not going to do a rigorous proof, but we'll do enough to give you some intuition; the rigorous way is through induction. We'll use unrolling:

T(n) = aT(n/b) + cn^k
     = a( aT(n/b²) + c(n/b)^k ) + cn^k
     = a²T(n/b²) + ac(n/b)^k + cn^k
     = a³T(n/b³) + a²c(n/b²)^k + ac(n/b)^k + cn^k
     ...
     = a^{log_b n} T(1) + c·n^k Σ_{i=0}^{log_b n − 1} (a/b^k)^i     (n = b^t, t = log_b n)

Note that a^{log_b n} = n^{log_b a}. It comes out exactly like that sum in your assignment. If a < b^k, i.e. log_b a < k, the sum is constant and n^k dominates. If a = b^k, the sum is log_b n and we get Θ(n^k log n). The third case is when a > b^k: the final term dominates, and then n^{log_b a} dominates.
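Here is a small Python helper applying the three cases of the theorem; the function and its output format are my own illustration, not from the notes.

    from math import log, isclose

    def master(a, b, k):
        # Classify T(n) = a*T(n/b) + c*n^k by comparing a with b^k.
        if isclose(a, b ** k):
            return f"Theta(n^{k} log n)"
        if a < b ** k:
            return f"Theta(n^{k})"
        return f"Theta(n^{log(a, b):.3f})"   # exponent log_b a

    print(master(2, 2, 1))  # mergesort: Theta(n^1 log n)
    print(master(4, 2, 1))  # naive big-integer multiplication: Theta(n^2.000)
    print(master(3, 2, 1))  # Karatsuba (below): Theta(n^1.585)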

n. you probably have to sort. j ≥ m + 1 and ai > aj . the number of pairs ai . Shortest path length from i to j using at most l edges but formula is exactly l edges. Q4. Q3. a2 . . a permutation of 1 . If you want examples of coin systems. Either assumption is fine. go look around the Internet. Q5. r= n j=m+1 rj Strengthen recursion – sort the list. we can compute rj ’s 11 . In CS240 we learned to take the log of n + 1. . taking O(n2 ).2 6. .e. Q4. count the number of inversions i. Try to beat O(n2 ). Suppose my ranking is BDCA. Same issue in (e) but if you use exactly you may find that you don’t save. aj with i < j but ai > aj .6 SEP 25. 2008 Assignment Info Assignment 1 is due Friday at 5PM in the assignment boxes. am B = am+1 . Don’t get your proof from the Internet. music. i ≤ m. with m = 1 2 . . ”How is the number of bits going to grow” is a much nicer √ √ angle. Use ”at most” if you haven’t started. . 6.Q6 are counterexample and a proof. There is a reason that n and n are in the list.2. an Recursively count rA = # inversions in A rB = # inversions in B Final answer is rA + rB + r where r = number of inversions ai aj . Useful for web sites giving recommendations based on similar preferences. Equivalently. For each j = m + 1 . CA and two where we agree: BC. so you probably won’t get better than O(n log n).Q5. 2 Divide & Conquer: Divide the list in half. DC. but examples of systems is fine. If A and B are sorted. US = UC.1 Sep 25. We’d like a measure of how similar these lists are. Please just come to office hours instead of asking too many questions over e-mail. and yours is ADBC from best to worst. State clearly which one you are using. too. The unmarked questions are likely to appear on midterms or finals. We will provide solutions for everything. Q5. BA. an . etc. . How efficient? Well. . (e) (f) See the newsgroup and website. l). . . j. So we aren’t planning on marking every question. Brute Force: Check all n pairs. however.1 Divide & Conquer Algorithms Counting Inversions Comparing two people’s rankings of n items – books. Q2a. we can say given a1 . We can count inversions: on how many pairs do we disagree? Here there are four pairs where we disagree: BD. . 2008 6 6. DA. D(i. A = a1 . n let rj = # of pairs involving aj . .

Sort-and-Count(L): returns sorted L and the # of inversions.

    Sort-and-Count(L):
        Split L into A and B
        (r_A, A) <- Sort-and-Count(A)
        (r_B, B) <- Sort-and-Count(B)
        r <- 0
        merge A and B; when an element is moved from B to the output list:
            r <- r + # elements left in A
        return r_A + r_B + r

Runtime: T(n) = 2T(n/2) + O(n). Since it's the same recurrence as mergesort, we get O(n log n). Can we do better?
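A runnable Python version of Sort-and-Count (my own transcription of the pseudocode above):

    def sort_and_count(L):
        # Returns (number of inversions, sorted copy of L).
        if len(L) <= 1:
            return 0, L
        m = len(L) // 2
        rA, A = sort_and_count(L[:m])
        rB, B = sort_and_count(L[m:])
        r, out, i, j = 0, [], 0, 0
        while i < len(A) or j < len(B):
            if j == len(B) or (i < len(A) and A[i] <= B[j]):
                out.append(A[i]); i += 1
            else:
                out.append(B[j]); j += 1
                r += len(A) - i        # B[j] is inverted with everything left in A
        return rA + rB + r, out

    print(sort_and_count([2, 4, 1, 3, 5]))  # (3, [1, 2, 3, 4, 5])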

6.2.2 Multiplying Large Numbers

The school method:

         981
        1234
      ------
        3924
       2943
      1962
      981
     -------
     1210554

This is O(n²) for two n-digit numbers (one step is a × or + for two digits). There is a faster way using divide-and-conquer. First pad 981 to 0981, and split:

09 81 × 12 34

Then calculate:

09 × 12 = 108
09 × 34 = 306
81 × 12 = 972
81 × 34 = 2754

and combine to get 1210554. The runtime here is T(n) = 4T(n/2) + O(n). Apply the Master Method with a = 4, b = 2, k = 1: we see a = 4 > b^k = 2, so the runtime is Θ(n^{log_b a}) = Θ(n²). So far we have not made progress!

We can get by with fewer than four multiplications. Write

(10² w + x) × (10² y + z) = 10⁴ wy + 10² (wz + xy) + xz

Note we need wz + xy, not the two terms individually. Look at

(w + x)(y + z) = wy + wz + xy + xz

We know wy and xz, but we want wz + xy. This leads to:

p = wy = 09 × 12 = 108
q = xz = 81 × 34 = 2754
r = (w + x)(y + z) = 90 [that's 09 + 81] × 46

Answer: 10⁴ p + 10² (r − p − q) + q:

     108____
      1278__
        2754
     -------
     1210554

We can apply this as the basis for a recursive algorithm. We'll get

T(n) = 3T(n/2) + O(n)

From the master theorem, now a = 3, b = 2, k = 1, and since a > b^k,

Θ(n^{log_b a}) = Θ(n^{log₂ 3}) ≈ Θ(n^{1.585})

Practical Issues
• What if n is odd?
• What about two numbers with different digit counts?
• How small do you let the recursion get? (Answer: the hardware word.)
• What about different bases?
• When is this algorithm useful? (For about 1,000 digits or fewer, don't use it [BB].)
  – Schönhage and Strassen is better for very large numbers; it runs in O(n log n log log n).
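A compact Python sketch of this three-multiplication recursion, splitting on bits rather than decimal digits (my choice for simplicity; the notes use base 10):

    def karatsuba(x, y):
        # Multiply non-negative integers using 3 recursive products.
        if x < 10 or y < 10:          # base case: small enough for hardware
            return x * y
        n = max(x.bit_length(), y.bit_length()) // 2   # split position, in bits
        hi_x, lo_x = x >> n, x & ((1 << n) - 1)
        hi_y, lo_y = y >> n, y & ((1 << n) - 1)
        p = karatsuba(hi_x, hi_y)                      # wy
        q = karatsuba(lo_x, lo_y)                      # xz
        r = karatsuba(hi_x + lo_x, hi_y + lo_y)        # (w+x)(y+z)
        return (p << (2 * n)) + ((r - p - q) << n) + q

    print(karatsuba(981, 1234))  # 1210554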

7 Sep 30, 2008

Assignment 2 is available. Generally we assume that arithmetic is unit cost; for this problem we don't need to make that assumption.

7.1 D&C: Multiplying Matrices

Multiplying two square matrices. The basic method takes n³ steps (and Ω(n²) is in some sense the best you could hope for, since you need to write n² numbers in the result!).

Basic D&C: divide each matrix into four n/2 × n/2 blocks:

[A B] [E F]   [I J]
[C D] [G H] = [K L]

with I = AE + BG, etc. Each of the four output blocks needs two subproblems, plus O(n²) additions:

T(n) = 8T(n/2) + O(n²)

By the master theorem, a = 8, b = 2, k = 2, and a = 8 > b^k = 4 (the case where the recursive work overwhelms the rest), so T(n) ∈ Θ(n^{log_b a}) = Θ(n³). No progress.

Strassen's Algorithm shows how to get by with just seven (a = 7) subproblems. This gives

T(n) = 7T(n/2) + O(n²)

which is Θ(n^{log₂ 7}) ≈ O(n^{2.8...}). We're not discussing it here, but if you're curious it's in the textbook. There are more complicated algorithms that get even better results (only for very large n, however).
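A minimal sketch of the eight-product block recursion in Python (assuming square matrices whose side is a power of 2; the helper names are mine):

    def block_multiply(X, Y):
        # Divide-and-conquer matrix product: 8 recursive half-size products.
        n = len(X)
        if n == 1:
            return [[X[0][0] * Y[0][0]]]
        h = n // 2
        def quad(M, r, c):
            return [row[c:c + h] for row in M[r:r + h]]
        def add(P, Q):
            return [[P[i][j] + Q[i][j] for j in range(h)] for i in range(h)]
        A, B, C, D = quad(X, 0, 0), quad(X, 0, h), quad(X, h, 0), quad(X, h, h)
        E, F, G, H = quad(Y, 0, 0), quad(Y, 0, h), quad(Y, h, 0), quad(Y, h, h)
        I = add(block_multiply(A, E), block_multiply(B, G))
        J = add(block_multiply(A, F), block_multiply(B, H))
        K = add(block_multiply(C, E), block_multiply(D, G))
        L = add(block_multiply(C, F), block_multiply(D, H))
        top = [I[i] + J[i] for i in range(h)]     # concatenate block rows
        bot = [K[i] + L[i] for i in range(h)]
        return top + bot

    print(block_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]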

7.2 D&C: Closest pair of points

Divide and conquer is very useful for geometric problems. Here, given n points in a plane, select the closest two by Euclidean distance. (There are other measures, including the "Manhattan distance," which is the distance assuming you can't cross city blocks.)

In one dimension, how would we do this? Sort and compare adjacent numbers – for example, in {10, 17, 5, 100} sorting puts the closest pair next to each other. In a plane, what about sorting by position on one axis and comparing neighbours? Nope! What's the way?

(1) Divide the points into a left half Q and a right half R at the median x coordinate. It is most efficient to sort once by x coordinate at the start; then we can find the dividing line L in O(1) time.

(2) Recurse on Q and R:

δ = min( closest pair in Q, closest pair in R )

The solution is the minimum of δ and the closest pair crossing L, so we need to find pairs q ∈ Q, r ∈ R with d(q, r) < δ.

Claim If q ∈ Q and r ∈ R with d(q, r) < δ, then d(q, L) < δ and d(r, L) < δ (i.e., q and r lie in the strip of width 2δ around L).

Proof If otherwise – suppose q lies outside its strip – then d(q, r) is at least the horizontal distance from q to r, which is ≥ δ.

So we can restrict our search to S, the points in the strip of width 2δ. But S can be all the points! Our hope: if we sort S by y coordinate, then any pair q ∈ Q, r ∈ R with d(q, r) < δ are near each other in sorted order.

Claim A δ × δ square T on the left of L can contain at most 4 points. (Every two points in T are both in Q, so they have distance ≥ δ; we can fit four points, but only at the four corners, and therefore you can't fit five.)

Claim If S is sorted by y coordinate, and q ∈ Q and r ∈ R have d(q, r) < δ, then they are at most seven positions apart in sorted order.

Total algorithm:
– Sort by x
– Sort by y
– T(n) = 2T(n/2) + O(n) ∈ O(n log n)

More general problems – given n points, find the closest neighbour of each one; this can be done in O(n log n) (not obvious). Related structures:
• Voronoi diagrams
• Delaunay triangulations
These are used in mesh generation.
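A sketch of the divide-and-conquer closest-pair computation in Python (simplified: it re-sorts the strip by y inside each call instead of maintaining a merged y-order, which costs an extra log factor but keeps the code short):

    from math import dist, inf

    def closest_pair(points):
        # points: list of (x, y); returns the smallest pairwise Euclidean distance.
        pts = sorted(points)                      # sort once by x
        def solve(P):
            if len(P) <= 3:                       # brute force small cases
                return min((dist(p, q) for i, p in enumerate(P) for q in P[i+1:]),
                           default=inf)
            m = len(P) // 2
            x_mid = P[m][0]                       # dividing line L
            d = min(solve(P[:m]), solve(P[m:]))
            strip = sorted((p for p in P if abs(p[0] - x_mid) < d),
                           key=lambda p: p[1])    # strip of width 2*delta, by y
            for i, p in enumerate(strip):         # only nearby strip neighbours matter
                for q in strip[i+1:i+8]:
                    d = min(d, dist(p, q))
            return d
        return solve(pts)

    print(closest_pair([(0, 0), (5, 5), (1, 1), (9, 0)]))  # sqrt(2)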

. notation M[i] = W-OPT(1 .. Make G with a vertex for each interval an edge when two intervals overlap. Leads to a recursive algorithm. Let OPT(I) = max weight of non-overlapping subset. . weight sum of weights of intervals in OPT(I).n? Sorting by right endpoint is O(n log n). recurse fun OPT(i) if M[i] >= w(i) + M[p(i)] then return OPT(i-1) else return { i } union OPT (p( i)) 16 .. One solution: first compute M as above. . . not the actual set of items. If we choose interval n. If we don’t use i. E) with weights on vertices pick a set of vertices. If we use i.e. either we use it or we don’t.. .. Let’s look at an algorithm using the second approach. i) M[0] = 0 for i = 1. n) = max ( W-OPT(1 . 2008 • Pick maximum weight – fails 8. Solution Use memoized recursion (see text.n. . W-OPT(1 . OPT(I) = OPT(I \ { I } )..p(i))) This leads to an O(n) time algorithm. w(i) + M(p(i)) } end Runtime is O(n).. .8 OCT 2ND.. More generally. no two joined by an edge to maximize a sum of weights. (i. w(i) + W-OPT(I’) } T (n) = 2T (n − 1) + O(1) But this is exponential time. Essentially we are trying all possible subsets of n items – all 2n of them. What about computing p(i) with i = 1... . n-1 ). Order intervals 1. Note: don’t use recursion blindly. The same subproblem may be solved many times in your program.i). p(i) = max index j ¿ i such that interval j doesn’t overlap i. OPT(I) = w(i) + OPT(I’) where I’ = the set of intervals that don’t overlap with i. Then call OPT(n). 2.) One possibility: enhance above loop to keep set OPT(1. W-OPT(I) = max { W-OPT(I { i } ) . then l = all intervals disjoint from n – has form 1. n by their right endpoint.. p(n) = max index j such that interval j doesn’t overlap n. For intervals (but not for the general graph problem) we can do better.p(n)) ). use an iterative approach. A general idea: for interval (or vertex) i. So far this algorithm finds W-OPT but not OPT.) OR.. the weight. w(n) + W-OPT(1. Then-Exercise: in O(n) time find p(i) i = 1. Danger here is that storing n sets of size n for n2 size. j for some j.n M[i] = max{ M[i-1]. i-1). To find p(i) sort by the left endpoint as well. w (i) + W-OPT(1.1 Dynamic Programming An even more general program: given a graph G = (V. W-OPT(I) is the opt. i) = max ( W-OPT(1 . .. W-OPT(1 .

8.2 Second example: optimum binary search trees

Store the values 1...n in the leaves of a binary tree, in order (the left-to-right order of the leaves is fixed). Given the probability p_i of searching for i, build a binary search tree minimizing the expected search cost

Σ_{i=1}^{n} p_i · depth(i)

Note: in CS 240 you did dynamic binary search trees – insert, delete, and rebalancing to control depth. This is different in that we have the items and probabilities ahead of time. The difference from Huffman coding (a similar problem) is that for Huffman codes, the left-to-right order of the leaves is free.

The number of ways to build a binary tree on leaves 1...n satisfies

P_n = Σ_i P_i · P_{n−i}

These are the Catalan numbers, which grow exponentially (like 4^n up to a polynomial factor), so trying all trees is hopeless.

The heart of the dynamic programming solution for the optimum binary search tree: try all possible splits into 1...k and k+1...n. Joining two subtrees under a new root makes each node one deeper, which adds the subtree's probability mass once. Subproblems: ∀i, j, find the optimum tree for i...j, with cost M[i, j]:

M[i, j] = min_{k=i..j−1} ( M[i, k] + M[k+1, j] ) + Σ_{t=i}^{j} p_t

    for i=1..n
        M[i,i] = p_i
    end
    for r=1..n-1
        for i=1..n-r
            -- solve for M[i, i+r]
            best <- infinity
            for k=i..i+r-1
                temp <- M[i,k] + M[k+1,i+r]
                if temp < best, best <- temp
            end
            M[i,i+r] <- best + sum_{t=i}^{i+r} p_t
        end
    end

(Better: precompute the prefix sums P[j] = Σ_{t=1}^{j} p_t, then use P[i+r] − P[i−1] for the last term.)

Runtime? O(n³).
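A direct Python transcription of this table-filling (a sketch: it returns only the optimum cost, keeps the notes' convention M[i, i] = p_i, and uses prefix sums as suggested):

    from itertools import accumulate

    def optimum_bst(p):
        # p[i] = probability of key i (0-indexed); returns min expected cost.
        n = len(p)
        pre = [0] + list(accumulate(p))            # pre[j] = p_0 + ... + p_{j-1}
        M = [[0.0] * n for _ in range(n)]
        for i in range(n):
            M[i][i] = p[i]
        for r in range(1, n):                      # subproblem size
            for i in range(n - r):
                j = i + r
                best = min(M[i][k] + M[k + 1][j] for k in range(i, j))
                M[i][j] = best + (pre[j + 1] - pre[i])
        return M[0][n - 1]

    print(optimum_bst([0.5, 0.25, 0.25]))  # 2.5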

9 Oct 7th, 2008

Last day we looked at weighted interval scheduling and optimum binary search trees. Today, we'll look at matrix chain multiplication. The problem is to compute the product of n matrices M_1 × M_2 × ... × M_n, where M_i is an α_{i−1} × α_i matrix. What is the best order in which to do the multiplications? Think about this in terms of parenthesizing the product: we could calculate ((M_1 M_2)(M_3 M_4)) or (((M_1 M_2) M_3) M_4), and so on.

The idea: break into subproblems, computing M_i ... M_k and M_{k+1} ... M_j, then multiplying the two results – an α_{i−1} × α_k matrix times an α_k × α_j matrix, costing α_{i−1} α_k α_j scalar multiplications. Let m(i, j) = the minimum number of scalar multiplications to compute M_i ... M_j. Then m(i, i) = 0 and

m(i, j) = min_{k=i..j−1} { m(i, k) + m(k+1, j) + α_{i−1} α_k α_j }

    for i=1..n
        m(i,i) <- 0
    end
    for diff=1..n-1
        for i=1..n-diff
            j <- i + diff
            m(i,j) <- infinity
            for k=i..j-1
                temp <- m(i,k) + m(k+1,j) + alpha_{i-1} alpha_k alpha_j
                if temp < m(i,j), m(i,j) <- temp
            end
        end
    end

The runtime is O(n³): O(n²) subproblems of O(n) each. Exercise: keep a table of the best split k, and use it to recover the actual parenthesization.
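A Python sketch of the matrix-chain table (dimensions alpha[0..n], so matrix i is alpha[i−1] × alpha[i]; returns the minimum number of scalar multiplications):

    def matrix_chain(alpha):
        # alpha: list of n+1 dimensions; m[i][j] = min cost for matrices i..j (1-indexed).
        n = len(alpha) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]
        for diff in range(1, n):
            for i in range(1, n - diff + 1):
                j = i + diff
                m[i][j] = min(m[i][k] + m[k + 1][j] + alpha[i - 1] * alpha[k] * alpha[j]
                              for k in range(i, j))
        return m[1][n]

    # Three matrices: 10x30, 30x5, 5x60
    print(matrix_chain([10, 30, 5, 60]))  # 4500: (M1 M2) M3 is best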

9.1 Example 2: Minimum Weight Triangulation

Problem: given a convex polygon with vertices 1...n in clockwise order, divide it into triangles by adding "chords" – segments from one vertex to another. No two chords are allowed to cross. The goal is to minimize the total length of the chords we use ("minimum triangulation"). A more general problem is to triangulate a set of points; the dynamic programming algorithm below will also work for non-convex shapes.

Greedily picking the smallest chord does not work. Let's count the perimeter as well as the chords: this doesn't hurt the optimization, and it makes the base cases easier.

The dynamic programming approach for the convex polygon case: choosing one chord breaks the polygon into two subpolygons, and we can get by looking just at subpolygons on consecutive vertices i, i+1, ..., j. Define

m(i, j) = min sum of edge lengths to triangulate the subpolygon on vertices i, i+1, ..., j

The edge (i, j) lies in some triangle with a third vertex k – try all choices of k:

m(i, j) = min_{k=i+1..j−1} { m(i, k) + m(k, j) } + d(i, j)

where d(i, j) is the length of the chord (i, j). Base cases: m(i, i+1) = d(i, i+1), and m(i, i+2) = d(i, i+1) + d(i+1, i+2) + d(i, i+2) – and we don't actually need the case m(i, i+2), since it falls out of the general formula. The final answer is m(1, n).

Algorithm:

    for i=1..n-1
        m(i,i+1) <- d(i,i+1)
    end
    for diff=2..n-1
        for i=1..n-diff
            j <- i + diff
            m(i,j) <- infinity
            for k=i+1..j-1
                t <- m(i,k) + m(k,j) + d(i,j)
                if t < m(i,j), m(i,j) <- t
            end
        end
    end

Runtime O(n³): an n × n table, O(n²) subproblems, O(n) each.

10 Oct 9th, 2008

Midterm (Mon Oct 20th): covers material up through today, and a bit of next week's material too.

10.1 Dynamic Programming

Key idea, the bottom-up method: identify the subproblems and order them so that each relies only on previously solved subproblems.

Example (Knapsack/Subset Sum) Recall the knapsack problem: given items 1...n, where item i has weight w_i and value v_i (both w_i and W in N), and W, the knapsack capacity, choose a subset S ⊆ {1, ..., n} such that Σ_{i∈S} w_i ≤ W and Σ_{i∈S} v_i is maximized. Recall the fractional versus 0-1 versions: a greedy algorithm works for the fractional case, while for the 0-1 knapsack no polynomial-time algorithm is known.

Top-down view: item n can either be IN (items 1...n−1 with capacity W − w_n) or OUT (items 1...n−1 with capacity W) of S. So the subproblems are – for each i = 0...n and w = 0...W:

OPT(i, w) = max value of a subset of items 1...i of total weight ≤ w

How do we solve a subproblem? If w_i > w, then OPT(i, w) = OPT(i−1, w) (we can't use item i); otherwise

OPT(i, w) = max{ OPT(i−1, w) (don't include i), v_i + OPT(i−1, w − w_i) (include i) }    (*)

Pseudocode and ordering of the subproblems – store OPT(i, w) in a matrix M[i, w]:

    initialize M[0,w] := 0 for w = 0..W
    for i=1..n
        for w=0..W
            compute M[i,w] with (*)
        end
    end
    M[n,W] gives the OPT value

EX: find the optimal set S as well, not just its value. Note: the coin changing problem is similar to knapsack, but with multiple copies of each item allowed.
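A Python sketch of this table, returning the best value and one optimal set (recovered by walking the table backwards):

    def knapsack(items, W):
        # items: list of (weight, value) with integer weights; W: integer capacity.
        n = len(items)
        M = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            w_i, v_i = items[i - 1]
            for w in range(W + 1):
                M[i][w] = M[i - 1][w]
                if w_i <= w:
                    M[i][w] = max(M[i][w], v_i + M[i - 1][w - w_i])
        S, w = [], W
        for i in range(n, 0, -1):              # recover the chosen set
            if M[i][w] != M[i - 1][w]:
                S.append(i)                    # item i (1-indexed) was taken
                w -= items[i - 1][0]
        return M[n][W], S

    print(knapsack([(6, 12), (4, 7), (4, 6)], 8))  # (13, [3, 2]): items 2 and 3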

Runtime: O(nW) – outer loop n, inner loop W, constant work for (*). Is this good? Does it behave like a polynomial? That depends on the size of the input. The input w_1, ..., w_n, v_1, ..., v_n, W has size O(n log W) – roughly (n + 1) log W bits – but the table has size O(nW) = O(n·2^k) when W is a k-bit number. This algorithm is called "pseudo-polynomial" because the runtime is polynomial in the value of W, not in the size (number of bits) of W. Intuition for why this is bad: if the weights are given to three decimal places (.001, .002, ...) with W = 100, then after scaling to integers W becomes 100000, and the table grows with the precision of the weights, not the number of items. Note that we may assume each w_i ≤ W – else throw out item i.

10.2 Certain types of subproblems

[KT] has examples.
• Input x_1, ..., x_n; subproblems x_1, ..., x_i. Number of subproblems: O(n).
• Input x_1, ..., x_n; subproblems x_i, x_{i+1}, ..., x_j. Number of subproblems: O(n²).
• Inputs x_1, ..., x_n and y_1, ..., y_m; subproblems x_1, ..., x_i and y_1, ..., y_j. Number of subproblems: O(n × m).
• Input a rooted tree (not necessarily binary); subproblems are rooted subtrees.

Example (Longest ascending subsequence) Given a_1, ..., a_n, find a_{i_1} < a_{i_2} < ... < a_{i_j} with i_1 < i_2 < ... < i_j, maximizing j. Can we use subproblems on a_1, ..., a_i? Yes: find the largest ascending subsequence ending with a_i,

l_i = max{ 1 + l_j : j < i, a_j < a_i }

and then the answer is max l_i over i = 1..n. This gives an O(n²) algorithm: n subproblems, O(n) each; see the sketch below.
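A short Python sketch of the O(n²) recurrence for the longest ascending subsequence:

    def longest_ascending(a):
        # l[i] = length of the longest ascending subsequence ending at a[i].
        n = len(a)
        l = [1] * n
        for i in range(n):
            for j in range(i):
                if a[j] < a[i]:
                    l[i] = max(l[i], 1 + l[j])
        return max(l, default=0)

    print(longest_ascending([3, 1, 4, 1, 5, 9, 2, 6]))  # 4, e.g. 3, 4, 5, 9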

10.3 Memoization

Use recursion (rather than the explicit bottom-up ordering of subproblems we have used). Danger: the same sub-subproblem may be solved over and over – e.g., T(n) = 2T(n − 1) + O(1), which is exponential! Memoization stores the solved subproblems so each is computed once. Advantage: storing solved subproblems saves time, and we only solve the subproblems we actually need.

11 Oct 14th, 2008

Assignment 2 due Friday, 7 PM. Midterm on Mon Oct 20th; the alternate sitting is during class time on Tuesday.

11.1 Graph Algorithms

A graph is G = (V, E), with V a finite set of vertices and E ⊆ V × V a set of edges.
• Directed graph: order matters, so edge (u, v) ≠ (v, u).
• Undirected graph: (u, v) = (v, u).
• No loops (i.e., no edge (u, u)).
• No multiple edges.
• 0 ≤ m ≤ n(n−1) for directed graphs; 0 ≤ m ≤ C(n, 2) = n(n−1)/2 for undirected graphs; so m ∈ O(n²).

We will use n or |V| for the number of vertices, and m or |E| for the number of edges. Storing a graph:
• Adjacency matrix: A(i, j) = 1 if there is an edge from i to j, else 0.
• Adjacency list: vertices down the left, edge destinations in a list on the right.

Advantages and disadvantages?
• Space: n² for the matrix; 2m + n for the lists.
• Time to test e ∈ E: O(1) in the matrix; O(n) (or O(log n) with sorted lists) in the lists.
• Enumerating all edges: O(n²) versus O(m + n).

We usually use adjacency lists – then we can (sometimes) get algorithms with runtime better than O(n²).

What is a path? A sequence of vertices where every consecutive pair is joined by an edge. A simple path does not allow repeated vertices; a walk allows repetition of vertices and edges. If there is a walk from u to v, then there is a simple path from u to v. A cycle is a path from u back to u. We say an undirected graph G is connected if for every pair of vertices there is a path joining them; to test whether a graph is connected, we can use DFS or BFS. For directed graphs there are different notions of connectivity: a graph is strongly connected if ∀u, v ∈ V there is a directed path from u to v.

Tree: a graph that is connected but has no cycles. Note: a tree on n vertices has n − 1 edges.
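For concreteness, a minimal adjacency-list representation in Python (the vertex numbering and helper name are my own):

    def make_adj_list(n, edges):
        # Adjacency lists for an undirected graph on vertices 0..n-1.
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)        # undirected: store both directions
        return adj

    adj = make_adj_list(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
    print(adj[1])          # neighbours of vertex 1: [0, 2]
    print(2 in adj[1])     # membership test, O(deg): True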

11.2 Minimum Spanning Trees

Problem Given an undirected graph G = (V, E) and weights w : E → R with w ≥ 0, find a minimum weight subset of edges that is connected: find E′ ⊆ E such that (V, E′) is connected and w(E′) = Σ_{e∈E′} w(e) is minimized.

Claim E′ will be a tree. Otherwise E′ has a cycle; throw away an edge (u, v) of the cycle, which leaves a connected graph (any path a−b that used edge (u, v) can be rerouted along the rest of the cycle).

Almost any greedy approach will succeed:
• Take a minimum weight edge that creates no cycle.
• Throw away a maximum weight edge that doesn't disconnect the graph.
• Grow one connected component, always using the minimum weight edge leaving it.

All of these are justified by one lemma:

Lemma Let V_1, V_2 be a partition of V (into two disjoint non-empty sets with union V). Let e be a minimum-weight edge from V_1 to V_2. Then there is a minimum spanning tree that includes e.

Stronger version Let X be a set of edges contained in some minimum spanning tree, with no edge of X going from V_1 to V_2, and let e be a minimum-weight edge from V_1 to V_2. Then there is a minimum spanning tree that includes X ∪ {e}.

Proof Let T be a minimum spanning tree containing X. T has a path P connecting the two endpoints of e; P must use an edge from V_1 to V_2 – say f. Let T′ = T ∪ {e} \ {f}: exchange e for f. T′ is a spanning tree, since P ∪ {e} makes a cycle, so we can remove f and stay connected. Also w(e) ≤ w(f), so w(T′) ≤ w(T). And T′ contains e and X, because f ∉ X (no edge of X goes from V_1 to V_2). Claim: T′ is it.

Kruskal's Algorithm Order the edges by weight: w(e_1) ≤ w(e_2) ≤ ... ≤ w(e_m). Then:

    T <- empty set
    for i=1..m
        if e_i does not make a cycle with T then T <- T union {e_i}
    end

Correctness: following Kruskal's Algorithm, each added edge satisfies the (stronger) lemma.

• We add e = {u, v} iff u and v are in different connected components.
• To test this efficiently we use the Union-Find data structure, where each set is a connected component of vertices:
  – Find(element) – find which set contains the element.
  – Union – unite two sets.
  – Add edge e iff Find(u) ≠ Find(v); adding e to T unites the connected components of u and v.

A simple Union-Find structure: store an array C(1..n), where C(i) is the name of the connected component containing vertex i. Find is O(1). Union must rename one of the two sets – rename the smaller one – and then all n Unions together take O(n log n). (In CS 466: how to reduce this.)

Kruskal's Algorithm takes O(m log m) to sort plus O(n log n) for the Union-Find operations. And O(m log m) = O(m log n), since log m ≤ log n² = 2 log n.
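A Python sketch of Kruskal with the simple array-based Union-Find described above (renaming the smaller component on each Union):

    def kruskal(n, edges):
        # edges: list of (weight, u, v) on vertices 0..n-1; returns the MST edges.
        comp = list(range(n))                 # C(i) = component name of vertex i
        members = [[i] for i in range(n)]     # vertices in each component
        T = []
        for w, u, v in sorted(edges):
            a, b = comp[u], comp[v]
            if a == b:
                continue                      # this edge would make a cycle
            if len(members[a]) < len(members[b]):
                a, b = b, a                   # rename the smaller component, b
            for x in members[b]:
                comp[x] = a
            members[a] += members[b]
            members[b] = []
            T.append((u, v, w))
        return T

    print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]))
    # [(0, 1, 1), (1, 2, 2), (2, 3, 3)]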

Implementation: we need to (repeatedly) find a minimum-weight edge leaving U (as U changes. – Solutions will be on website. – Marking scheme is in the newsgroup. • Lemma. Recall: • Kruskal’s algorithm orders edges from minimum-maximum weight. 12. 2008 12 Oct 16th. Any other edge incident with v enters δ(u). find a minimum weight edge e = {u. v) into PQ. We need a priority queue – use a heap.) Let S(U ) be a set of edges from U to V − U . and O(1) for finding a minimum. While U = V . how many PQ inserts/deletes do we need? • n in the worst case. E ) is connected. Doesn’t have to be hand-written either. 2008 • Assignment 1 – out of 40. • else insert edge (x. Add e to T and v to U . Initially.5 × 11 sheet brought to the midterm. and delete. any edge from U to v leaves δ(u). 23 . the cheapest two edges connecting two groups is indeed the best. Correctness – from lemma last day. find a subset of edges E ∈ E such that (V. For all x incident to v. U = {s}. Recall that a heap provides O(log n) for insert and delete. • Midterm – Monday – covers to the end of today. • You are allowed one 8.1. General structure: let u be vertices of the tree so far. Builds a tree. insert. • Assignment 2 – due tomorrow. Take each edge unless it forms a cycle with previously chosen edges. v} where u ∈ U and v ∈ V − U . Exactly how does δ(u) change? When we do U ← U ∪ {v}.1 Graph Algorithms Minimum Spanning Tree: Given an undirected graph G = (V. v) from priority queue. • if x ∈ U then remove edge (x. E) with weight function w : E → R+ .12 OCT 16TH.1 Prim’s Algorithm Also a greedy algorithm. 12. For one r. We want to find the minimum.

Here deg(v) = the number of edges incident with v, and Σ_{v∈V} deg(v) = 2m. So the total number of PQ insert/delete operations over all vertices (hoping for better than n × n) is 2m: every edge enters δ(U) once and leaves once. Total time for the algorithm: O(n + m log m) = O(m log m) = O(m log n), because m ≤ n² gives log m ≤ 2 log n.

Improvements
• Store vertices in the PQ instead of edges. For one vertex v, how many PQ inserts/deletes would we need? Up to n in the worst case – so instead keep, for each v ∉ U, the value w(v) = minimum weight of an edge from U to v. When we do U ← U ∪ {v}, we must adjust the w-values of some of v's neighbours. This gives O(m log n).
• Tweak the PQ to be a "Fibonacci heap," which gives O(1) for a weight decrease and O(log n) to extract the minimum. This gives O(n log n + m).
• Borůvka's Algorithm: another way to handle this case.
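A Python sketch of vertex-based Prim using the standard heapq module (with lazy deletion instead of an explicit decrease-key, which keeps the code short at the cost of stale heap entries):

    import heapq

    def prim(adj, s=0):
        # adj[u] = list of (v, w) pairs; returns the list of tree edges (u, v, w).
        in_U = [False] * len(adj)
        in_U[s] = True
        heap = [(w, s, v) for v, w in adj[s]]
        heapq.heapify(heap)
        T = []
        while heap:
            w, u, v = heapq.heappop(heap)
            if in_U[v]:
                continue                      # stale entry: v already in the tree
            in_U[v] = True
            T.append((u, v, w))
            for x, wx in adj[v]:
                if not in_U[x]:
                    heapq.heappush(heap, (wx, v, x))
        return T

    adj = [[(1, 1), (2, 4)], [(0, 1), (2, 2)], [(0, 4), (1, 2)]]
    print(prim(adj))  # [(0, 1, 1), (1, 2, 2)]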

12.2 Shortest Paths

(From the diagram in class: the shortest path from A to D is ABD, with weight 3 + 2 = 5; from A to E it is ABE, with weight 4.)

General input: a directed graph G = (V, E) with weights w : E → R. The weight of a path is the sum of the weights of its edges. We allow negative weight edges, but disallow negative weight cycles: if we had a negative weight cycle, then repeating it would potentially give paths of weight −∞. (We might instead ask for the shortest simple path, but this is actually hard – NP-complete.)

Versions of the shortest path problem:
1. Given u, v ∈ V, find a shortest path from u to v.
2. Given u ∈ V, find shortest paths to all other vertices – the "single source shortest path problem." Build a shortest path tree from u.
3. Find a shortest u, v path ∀u, v – the "all pairs shortest path problem."

Solving 1 seems to involve solving 2. Later: Dijkstra's algorithm for 2.

13 Oct 21st, 2008

13.1 All Pairs Shortest Path

Given a directed graph G = (V, E) with weights w : E → R, find shortest u − v paths for all u, v ∈ V. (Example graph from class, with vertices A–D and edge weights 5, 11, 6, 2, −1; e.g. w(ACD) = 8.) Assume there are no negative weight cycles – otherwise a minimum weight path can be −∞.

One dynamic programming solution uses M[u, v, l] = min weight of a u → v path using ≤ l edges; these subproblems are smaller in that they use fewer edges, and there are n³ of them, for l = 1, ..., n − 1. This works, but the algorithm below achieves the same runtime with less space, so we're not using it.

Instead, use dynamic programming on the intermediate vertices. Let V = {1, 2, ..., n}. Main idea: try all intermediate vertices x. Does the shortest u − v path go through x or not? If we use x, we need a shortest u → x path and a shortest x → v path. How are these subproblems simpler?
1. The u − x and x − v paths do not use x as an intermediate vertex.
2. They only need intermediate vertices from a smaller set.

Solve the subproblems, for i = 0, 1, ..., n:

D_i[u, v] = min length of a u → v path using intermediate vertices only from {1, 2, ..., i}

How do we initialize? D_0[u, v] = w(u, v) if (u, v) ∈ E, and ∞ otherwise. The main formula:

D_i[u, v] = min{ D_{i−1}[u, v], D_{i−1}[u, i] + D_{i−1}[i, v] }

Final answer: the matrix D_n[u, v]. Number of table entries: O(n³), each O(1) work. This leads to:

13.1.1 Floyd–Warshall Algorithm

Initialize D_0 as above

    for i=1..n
        for u=1..n
            for v=1..n
                D_i[u,v] = min{ D_{i-1}[u,v], D_{i-1}[u,i] + D_{i-1}[i,v] }
            end
        end
    end
    return D_n

Time is O(n³). The space, however, is also O(n³), which is extremely undesirable. Notice that to compute D_i we only use D_{i−1}, so we can throw away all earlier matrices, bringing the space to O(n²). In fact, even better (although not in the degree of n), we can work in a single matrix:

    Initialize D full of D_0
    for i=1..n
        for u=1..n
            for v=1..n
                D[u,v] = min{ D[u,v], D[u,i] + D[i,v] }    (**)
            end
        end
    end
    return D

Note: in the inner loop, D will be a mixture of D_i and D_{i−1}, but we still compute the main equation correctly, because we never go below the true minimum by doing this.

How do we find the actual shortest paths? One idea: compute H[u, v] = the highest numbered vertex on a shortest u → v path. (Note: if we explicitly stored all n² paths, we'd be back to O(n³) space – avoid this.) Better:

• S[u, v] = the successor of u on a shortest u → v path.

Initialize S[u, v] = v if (u, v) ∈ E and φ otherwise, and modify (**) to:

    if D[u,i] + D[i,v] < D[u,v] then
        D[u,v] <- D[u,i] + D[i,v]
        S[u,v] <- S[u,i]
    end

Once we have S, output the complete path from u to v:

    x <- u
    while x != v
        output x
        x <- S[x,v]
    end
    output v

Exercise: use this algorithm to test whether a graph has a negative weight cycle.
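A Python sketch of the in-place version with successor recovery (vertices 0..n−1, INF for missing edges; the path helper assumes a path exists):

    INF = float('inf')

    def floyd_warshall(W):
        # W: n x n matrix, W[u][v] = edge weight or INF; W[u][u] = 0.
        n = len(W)
        D = [row[:] for row in W]
        S = [[v if W[u][v] < INF else None for v in range(n)] for u in range(n)]
        for i in range(n):
            for u in range(n):
                for v in range(n):
                    if D[u][i] + D[i][v] < D[u][v]:
                        D[u][v] = D[u][i] + D[i][v]
                        S[u][v] = S[u][i]        # the first step now heads toward i
        return D, S

    def path(S, u, v):
        out = [u]
        while u != v:
            u = S[u][v]
            out.append(u)
        return out

    W = [[0, 5, INF], [INF, 0, 2], [1, INF, 0]]
    D, S = floyd_warshall(W)
    print(D[0][2], path(S, 0, 2))  # 7 [0, 1, 2]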

14 Oct 23rd, 2008

Shortest Paths

Last day's study was the all-pairs shortest path problem, whereas today's is the single-source shortest path problem: find the shortest path from s to v, ∀v. (This is the most general version that is still faster than all-pairs.)
• In the case with no negative weight edges, we can use Dijkstra's Algorithm, which is O(m log n).
• With no negative weight cycles: O(n × m).
• With no directed cycles: O(n + m).

14.1 Dijkstra's Algorithm

Input: a directed graph G = (V, E), a weight function w : E → R≥0, and a source vertex s.
Output: a shortest s → v path ∀v.

Idea: grow a tree of shortest paths from s. Initially, B = {s}. General step: we have shortest paths to all vertices in B. Choose the edge (x, y), where x ∈ B and y ∈ V \ B, that minimizes

d(s, x) + w(x, y)

(Figure from class: the tree B containing s, with the edge (x, y) leaving it.) Call this minimum d:
• d(s, y) ← d
• Add (x, y) to the shortest path tree; parent(y) ← x
• B ← B ∪ {y}

This is greedy in the sense that y has the next minimum distance from s.

Claim d = the minimum distance from s to y.

Proof The idea is that any s → y path π has this structure:
• s: begins here
• π_1: the prefix preceding u, inside B

• (u, v): the first edge leaving B
• π_2: the rest of the path (which may re-enter B)

So w(π) = w(π_1) + w(u, v) + w(π_2). Note that w(π_1) + w(u, v) ≥ d, and w(π_2) ≥ 0 as the edge weights are non-negative, so w(π) ≥ d. From the Claim, by induction on |B|, this algorithm finds the shortest paths.

Implementation: make a priority queue (heap) on the vertices V \ B, using the values

D(v) = minimum weight of a path from s to v consisting of a path in B plus one edge

so that the minimum D value gives the wanted vertex.

• Initialize:
  – D(v) ← ∞, ∀v; D(s) ← 0
  – B ← φ
• While |B| < n:
  – y ← the vertex of V \ B of minimum D(y)
  – B ← B ∪ {y}
  – For each edge (y, z) where z ∈ V \ B:
    ∗ t ← D(y) + w(y, z)
    ∗ If t < D(z), then D(z) ← t and parent(z) ← y

(Same argument as for Prim.) Store the D values in a heap. How many times are we extracting the minimum? n times, at O(log n) time each. The "decrease D value" step is done ≤ m times, and each decrease is O(log n) (done as an insert-delete). Total time is O(n log n + m log n), which is O(m log n) if m ≥ n − 1. Using a Fibonacci heap, we can decrease this to O(n log n + m).
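A Python sketch of heap-based Dijkstra (again using heapq with lazy deletion rather than an explicit decrease-key):

    import heapq

    def dijkstra(adj, s):
        # adj[u] = list of (v, w) with w >= 0; returns (distances, parents).
        n = len(adj)
        D = [float('inf')] * n
        parent = [None] * n
        D[s] = 0
        heap = [(0, s)]
        done = [False] * n
        while heap:
            d, y = heapq.heappop(heap)
            if done[y]:
                continue                  # stale entry
            done[y] = True                # y joins B with final distance d
            for z, w in adj[y]:
                t = d + w
                if t < D[z]:
                    D[z] = t
                    parent[z] = y
                    heapq.heappush(heap, (t, z))
        return D, parent

    adj = [[(1, 3), (2, 7)], [(2, 2), (3, 5)], [(3, 1)], []]
    print(dijkstra(adj, 0))  # D = [0, 3, 5, 6]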

14.2 Connectivity in Graphs

Testing connectivity; exploring a graph. Recall Breadth First Search (BFS) and Depth First Search (DFS). (Example run on the 8-vertex graph drawn in class: BFS visits vertex 1, then the vertices adjacent to 1, then those adjacent to 2, and so on, level by level; DFS instead follows a path as deep as possible before backtracking.) Either takes O(n + m). For what follows, DFS is more useful.

By the way, Paul Seymour, a famous name in graph theory, is visiting UW this weekend. He's speaking tomorrow at 3:30, and he's also getting an honorary degree on Saturday at convocation.

We'll talk about "higher connectivity" – for networks, connected isn't enough: we want the network to stay connected even with a few failures (of vertices or edges). What's bad is a cut vertex: if it fails, the graph becomes disconnected. We call a graph 2-connected if there are no cut vertices. Similarly, 3-connected means we can remove any two vertices without breaking the graph into components. A figure-eight graph made of two triangles (or squares) sharing a vertex has two 2-connected components: the triangles/squares themselves.

14.2.1 Finding 2-connected components

We can use DFS to find cut vertices and 2-connected components in O(n + m) time. (Figure from class: a DFS tree on vertices 1–7; solid edges are DFS tree edges, dotted edges are "back edges.")

DFS Algorithm:
• Initialize:
  – mark(v) ← not visited, for all v
  – num ← 1
  – DFS(s)
• DFS(v), recursive:
  – mark(v) ← visited
  – DFSnum(v) ← num; num ← num + 1
  – for each edge (v, w):
    ∗ if mark(w) = not visited then
      · (v, w) is a tree edge
      · parent(w) ← v
      · DFS(w)
    ∗ else if parent(v) ≠ w then (v, w) is a back edge

Claim Every non-tree DFS edge goes from some vertex to one of its ancestors. This justifies the term "back edge." (In the class figure, for instance, there can be no edge like (5, 7) joining two different subtrees.)
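A Python transcription of this DFS numbering with tree/back edge classification (undirected adjacency lists as before; the num[w] < num[v] guard keeps each back edge from being recorded twice):

    def dfs_classify(adj, s=0):
        # Returns (DFSnum, tree edges, back edges) for the component of s.
        n = len(adj)
        num = [0] * n            # 0 = not visited; DFS numbers start at 1
        parent = [None] * n
        tree, back = [], []
        counter = [1]
        def dfs(v):
            num[v] = counter[0]; counter[0] += 1
            for w in adj[v]:
                if num[w] == 0:
                    tree.append((v, w))
                    parent[w] = v
                    dfs(w)
                elif parent[v] != w and num[w] < num[v]:
                    back.append((v, w))      # goes up to an ancestor
        dfs(s)
        return num, tree, back

    adj = [[1, 2], [0, 2], [0, 1]]           # a triangle
    print(dfs_classify(adj))                  # one back edge: (2, 0)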

What do cut vertices look like in a DFS tree?
• A leaf is never a cut vertex.
• The root is a cut vertex iff it has ≥ 2 children.

Removing an arbitrary (non-root, non-leaf) node v leaves the subtrees T_1, ..., T_t rooted at v's children, plus T_0, the rest of the tree, connected from above. Are these still connected in G \ v? It depends on the back edges: if T_j has a back edge to T_0, then T_j stays connected to T_0; otherwise it falls away (and is disconnected).

We need one more thing:

high(v) = the highest (i.e., lowest DFS number) vertex reachable from v by going down tree edges and then along one back edge

Claim v is a cut vertex iff it has a DFS child x such that high(x) ≥ DFSnum(v).

Modifying the DFS code: set high(v) ← DFSnum(v) when v is first visited; on finding a back edge (v, w), set high(v) ← min{ high(v), DFSnum(w) }; and when returning from a child w, set high(v) ← min{ high(v), high(w) }. This is still O(n + m).

15 Oct 28th, 2008

Midterm: think about it as out of 35. (In that case you got an 86%.)

15.1 Backtracking and Branch/Bound

Exact, exponential-time algorithms. In the workplace, you'll need to find an algorithm. If you're extremely lucky, it'll be one of the ones we encountered. More likely, it'll be similar to one we've seen. But most likely, it'll be one nobody knows how to solve – and it's NP-complete. Options:
• Heuristic approach – run quickly, with no guarantee on the quality of the solution.
• Approximation algorithms – run quickly, but with a guarantee on the quality.
• Exact algorithm – and bear with the fact that it (may) take a long time.

Note: to test a heuristic experimentally, you need an exact algorithm. Backtracking is also useful for problems that are not NP-complete.

Backtracking: a systematic way to try all possibilities – a search in the implicit graph of partial solutions. General backtracking: we have a configuration C consisting of the remaining subproblem to be solved and the choices made to get to this subproblem. E.g., when trying all permutations of 1...n, a configuration is the permutation so far and the remaining elements. E.g., for knapsack, a configuration is the items selected so far and the items discarded so far, together with the capacity remaining. We can test a configuration for success (it solves the whole problem) and failure (a dead end).

Backtracking Algorithm: F = the set of active configurations; initially F holds one configuration, the whole problem. While F ≠ φ: C ← remove a configuration from F; expand it into children C_1, ..., C_t; for each C_i, test for success and failure; otherwise add C_i to F.

Storing F:

• Stack: DFS of the configuration space. Size: the height of the tree.
• Queue: BFS of the configuration space. Size: the width of the tree.
• Priority Queue: explore the current best configuration.

Usually height << width, and we should use DFS.

Example: Subset Sum – knapsack where the value of each item equals its weight. Given items 1...n with weight w_i for item i, and W, find a subset S ⊆ {1, ..., n} with Σ_{i∈S} w_i ≤ W, maximizing Σ_{i∈S} w_i. Decision version – can we find S with Σ_{i∈S} w_i = W exactly? A polynomial time algorithm for the decision version gives poly time for the optimization version.

Backtracking explores all subsets of {1, ..., n} in a tree of configurations:

    S = {},  R = {1 .. n}
        1 in:   S = {1},   R = {2 .. n}
            2 in:   S = {1,2}, R = {3 .. n}
            2 out:  S = {1},   R = {3 .. n}
        1 out:  S = {},    R = {2 .. n}
            ...

This is O(2^n). Before, we built a dynamic programming algorithm for Knapsack with O(n × W) subproblems. Which is better? It depends on W: e.g., if W has n bits, then W ∼ 2^n and backtracking is better.

visit every city exactly once.) Ic ∩ Xc = φ. Ic must have ≥ 2 edges at each vertex. . ”bound:” If lower bound (Ci ) < best so far. 2008 15. . Xc choose e ∈ E \ (Ic ∪ Xc ). add Ci to F . Xc . General paradigm: • F = active configurations • Keep best so far • While F = φ – C ← remove ”best” configuration from F – Expand C to children C1 . Necessary conditions: E \ Xc must be connected. E) and edge weights w : E → R≥0 find a cycle C that goes through every vertex once and has minimum weight.15 OCT 28TH. Idea here is we have a graph with weights on the edges. Given a graph G = (V. if better than current best. Ct (”branch”) – For each Ci . ”hard” problem. Configuration: Ic ∈ E (included edges) and Ec ∈ E (excluded edges. 15. . ∗ Else. update best ∗ Else if Ci is infeasible. Algorithm: based on enumerating all subsets of edges. and return to the home town. In fact it must be 2-connected. and our traveling salesman wants to start in a home town.2. must not contain a cycle.2 Branch-and-Bound • for optimization problems • we’ll talk about minimizing an objective function • keep track of minimum solution so far • not DFS – explore ”most promising” configuration first • ”branch” generate children of configuration (as in backtracking) • ”bound” – for each configuration compute a lower bound on the objective function and prune if ≥ minimum so far. discard it. We want an efficiently computable lower bound (so it’s sort of like a heuristic. Undecided edges E \ (Ic ∪ Xi ). Xc find a lower bound on minimum TSP tour respecting Ic . But how to bound? Given Ic .1 Branch and Bound TSP Algorithm Example: Traveling Salesman problem. How to branch? Take the next edge not decided about yet. .2 Branch-and-Bound 15.) 32 . but we don’t have issues of correctness. C − Ic . ∗ If Ci solves the problem. This is a famous.

15.2 Branch-and-Bound

• For optimization problems; we'll talk about minimizing an objective function.
• Keep track of the best (minimum) solution so far.
• Not DFS – explore the "most promising" configuration first.
• "Branch": generate the children of a configuration (as in backtracking).
• "Bound": for each configuration, compute a lower bound on the objective function, and prune if it is ≥ the minimum so far.

General paradigm:
• F = active configurations; keep the best solution so far.
• While F ≠ φ:
  – C ← remove the "best" configuration from F.
  – Expand C into children C_1, ..., C_t ("branch").
  – For each C_i:
    ∗ If C_i solves the problem: update best if it's better than the current best.
    ∗ Else if C_i is infeasible, discard it.
    ∗ Else, "bound": if lower-bound(C_i) < the best so far, add C_i to F; otherwise prune it.

We want an efficiently computable lower bound – so it's sort of like a heuristic, but we don't have issues of correctness.

15.2.1 Branch and Bound TSP Algorithm

Example: the Traveling Salesman problem. The idea here is that we have a graph with weights on the edges, and our traveling salesman wants to start in a home town, visit every city exactly once, and return to the home town. This is a famous, "hard" problem. Given a graph G = (V, E) and edge weights w : E → R≥0, find a cycle C that goes through every vertex exactly once and has minimum weight.

The algorithm is based on enumerating subsets of edges. Configuration: I_C ⊆ E (included edges) and X_C ⊆ E (excluded edges), with I_C ∩ X_C = φ; the edges in E \ (I_C ∪ X_C) are undecided. How to branch? Take the next edge not yet decided about: choose e ∈ E \ (I_C ∪ X_C), and branch on including or excluding e. Necessary conditions for feasibility: E \ X_C must be connected – in fact 2-connected, with ≥ 2 edges at each vertex – and I_C must not contain a cycle.

But how to bound? Given I_C, X_C, we need a lower bound on the minimum TSP tour respecting I_C, X_C. Instead of finding a tour, we find a "1-tree": a spanning tree on the nodes 2, ..., n (not necessarily a minimum one), plus two edges joining vertex 1 into the tree.

Claim Any TSP tour is a 1-tree: removing vertex 1 leaves a Hamiltonian path on 2, ..., n (a special spanning tree), and vertex 1's two tour edges go to the leaves of that path. So

w(min TSP tour) ≥ w(min 1-tree)

and we use the minimum 1-tree as the lower bound.

Claim We can efficiently find a minimum weight 1-tree respecting a given I_C, X_C. (Not proven.)

Final Enhancements:
• When we choose the "best" configuration C from F, use the one with the minimum 1-tree as our measure of best.
• Branch wisely: e.g., find a vertex i in the minimum 1-tree with too high a degree (> 2), and branch on a maximum weight edge there.

16 Oct 30th, 2008

Recall the course outline:
• Designing algorithms
• Analyzing algorithms
• Lower Bounds – do we have the best algorithm?

16.1 Basic Techniques

1. Information-theoretic lower bounds. E.g., the Ω(log n) lower bound for searching for an element among a_1, a_2, ..., a_n: distinguishing n possibilities takes log n bits, as that is the information content of the answer.
2. Lower bounds based on output size. E.g., if we ask for all the permutations of 1, 2, ..., n, there are n! of them, and it won't take less than n! time to write them all down – Ω(n!).

16.2 Lower Bounds

If we have a lower bound for a problem P, we claim any algorithm for P will take at least that much time. Note the distinction between a lower bound for an algorithm and a lower bound for a problem. For example, the school method of multiplication has worst case runtime Ω(n²), because there are example inputs that take ≥ c × n² steps; but there is an algorithm (divide and conquer) with a better worst-case runtime, O(n^k) with k < 2. A lower bound for the problem says that all algorithms have to take ≥ some time. Lower bounds for problems are hard to prove!

For an example of such a lower bound by reduction: we took a list of numbers and mapped them onto a curve (one point per number on a parabola); the convex hull of those points then tells us the sorted order. So "if I could find convex hulls faster than O(n log n), then I could sort faster than O(n log n)" – convex hull is harder than sorting. Reductions: showing that one problem is easier or harder than another.

In a comparison-based model, each comparison gives one bit of information, and since we need log n bits we need log n comparisons. Often this argument is presented as a tree.

16.2.2 State-of-the-Art in Lower Bounds

• Some problems are undecidable (they don't have algorithms at all), e.g. the halting problem.
• Some problems can only be solved in exponential time.
• (At the lower end) some problems have Ω(n log n) lower bounds on special models. We'll do this later in the course (and in CS 360).

Major open question: many practical problems have no known polynomial time algorithm and no proved lower bound. "Is there a TSP algorithm in O(n^6)?" – nobody knows. "Can the O(n³) dynamic programming algorithms be improved?" – nobody knows. The best that's known is proving that a large set of problems are all equivalent, and that solving one in polynomial time solves all the others. In the rest of the course, we'll fill this in.

16.3 Polynomial Time

Definition An algorithm runs in polynomial time if its worst case runtime is O(n^k) for some k.

What is polynomial?

Θ(n)        YES
Θ(n²)       YES
Θ(n log n)  YES (it lies between Θ(n) and Θ(n²))
Θ(n^100)    YES
Θ(2^n)      NO
Θ(n!)       NO

The algorithms in this course were (mostly) all poly-time, except backtracking and certain dynamic programming algorithms (specifically 0-1 Knapsack). Things we care about: low-degree polynomials are efficient; high-degree polynomials don't seem to come up in practice.

Jack Edmonds, a retired C&O prof, first formulated the idea of polynomial time, in connection with the "matching" problem: given a graph, you want to assign the vertices to pairs. In any other algorithms class you would cover linear programming; we have a C&O department that covers that, but if you're serious about algorithms, you should be taking courses over there.

Other history:
• In the 50's and 60's there was a success story: linear programming and the simplex method – practical (though not polynomial).
• Next step: integer linear programming, and people reduced other problems to this one. It seemed promising at the time, but in the 70's, with the theory of NP-completeness, we found this is actually a hard problem, and people did reductions from integer programming instead. (Permanents are like determinants, except all of their terms are positive – another problem that turned out hard.)

Our goal: to attempt to distinguish problems with poly-time algorithms from those that don't have any. This is the theory of NP-completeness. (NP = Non-deterministic Polynomial.)

16.4 Reductions

Problem A reduces (in polytime) to problem B – written A ≤ B or A ≤_P B, and we can say "A is easier than B" – if a (polytime) algorithm for B can be used to create a (polytime) algorithm for A. More precisely, there is a polytime algorithm for A that makes subroutine calls to a (polytime) algorithm for B.

Consequence of A ≤ B: an algorithm for B yields an algorithm for A. (Note: we can have a reduction without having an algorithm for B.) In the other direction, a lower bound showing A has no polytime algorithm implies that B has no polytime algorithm. Even without an algorithm for B or a lower bound for A: if we prove the reductions A ≤_P B and B ≤_P A, then A and B are equivalent with respect to polytime (either both have polytime algorithms, or both don't).

Example: the longest increasing subsequence problem. We will reduce this problem – not to shortest path, but – to the longest path problem in a graph (one vertex per element, with an edge i → j whenever i < j and a_i < a_j). Is it a polynomial-time reduction? And how can we solve the longest path problem? Reduce it to the shortest path problem: negate the edge weights.

17 Nov 4th, 2008

Today's topics: reductions (from last class), P and NP, and decision problems. We will concentrate on decision problems to define P/NP. Why? It's more rigorous, and it seems to be equivalent to optimization anyways.

17.1 Decision Problems

What is a decision problem? A problem with output YES/NO or TRUE/FALSE. Examples:
• Given a number, is it prime?
• Given a graph, does it have a Hamiltonian cycle (a cycle visiting every vertex once)?

17.2 P or NP?

Which problems are in P? Which are not in P? We will study a class of "NP-complete" problems that are equivalently hard with respect to polytime (i.e., A ≤_P B for all A, B in the class), and none of which seems to be in P.

• TSP, decision version: given a graph G = (V, E) with w : E → R+, and a bound k ∈ R, is there a TSP tour of length at most k?
• Independent Set: given a graph G = (V, E) and k ∈ N, is there an independent set of size ≥ k? Optimization version: given G, find a maximum independent set.

Typically, the decision and optimization versions are equivalent with respect to polynomial time. Usually we can show decision ≤_P opt:
• give G to the algorithm for the optimization problem;
• return YES or NO depending on whether the returned set has size ≥ k.

Showing opt ≤_P decision: suppose we have a poly-time algorithm for the decision version of independent set. For k = n, n − 1, ..., give G, k to the decision algorithm, and stop as soon as the answers switch from NO to YES. Runtime: if the decision algorithm takes O(n^t), this loop takes O(n^{t+1}). We can find the actual independent set in polytime too – idea: try vertex 1 in/out of the independent set. Exercise: fill this in and check that it is poly-time.

Definition P = { decision problems that have polytime algorithms }.

Notes:
• We must be careful about the model of computing and the input size – count bits.
• E.g. primality: given n, is it prime? In some sense, primality is the "decision" version of factoring (find the prime factors). But although we can test primality in polynomial time, we don't know how to factor in polynomial time (and finding out how would be bad news for cryptography!).

Toward the definition of NP ("nondeterministic polynomial time"): NP problems are the ones solvable in polytime IF we get some lucky extra information. For independent set, it's easy to verify that a graph has an independent set of size ≥ k if you're given the set. (Contrast with verifying that G has no independent set of size ≥ k.) For primes – given n, is it prime? – it's not clear what information would help (there is some); but for compositeness (given n, is it composite, i.e. not prime?) we could give the factors.
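A sketch of the opt ≤_P decision loop in Python, assuming a hypothetical decision oracle has_independent_set(G, k) – the oracle is not implemented here, it stands in for the assumed polytime decision algorithm:

    def max_independent_set_size(G, has_independent_set):
        # has_independent_set(G, k) -> bool is the assumed decision algorithm.
        n = len(G)                       # number of vertices
        for k in range(n, 0, -1):        # at most n calls: O(n^(t+1)) overall
            if has_independent_set(G, k):
                return k
        return 0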

A certifier algorithm takes an input plus a certificate (our lucky extra information). An algorithm B is a certifier for problem X if:
• B takes two inputs, s and t, and outputs YES or NO;
• ∀s: s is a YES input for X iff ∃t (a "certificate") such that B(s, t) outputs YES.

B is a polytime certifier if:
• B runs in polynomial time, and
• there is a polynomial bound on the size of the certificate t in terms of the size of s.

Definition NP = { decision problems that have a polytime certifier }.

Examples
• Independent Set. Input: a graph G = (V, E) and k ∈ N. Question: does G have an independent set of size ≥ k?
  Claim: Independent Set ∈ NP. Proof: certificate: U ⊆ V (a set of vertices). Certifier: check that U is an independent set, and check |U| ≥ k.
• Decision version of TSP. Input: G = (V, E), w : E → R+, and k ∈ R. Question: does G have a TSP tour of weight ≤ k?
  Certificate: a sequence of edges. Certifier: check the edges form a tour with no repeated vertices, and that the sum of the weights is ≤ k.
• Subset-Sum. Input: w_1, ..., w_n ∈ R+ and W. Question: is there a subset S ⊆ {1, ..., n} whose sum is exactly W?
  Claim: Subset-Sum ∈ NP. Certificate: S. Certifier: add up the weights in S.

17.3 Properties

Claim P ⊆ NP.
Proof Let X be a decision problem in P, so X has a polytime algorithm. Certificate: nothing. Certifier algorithm: the original algorithm. So X ∈ NP.

Claim Any problem in NP has an exponential-time algorithm.
Proof idea: try all possible certificates, using the certifier on each. The number of certificates is 2^{poly(n)}; in particular, the running time is O(2^{poly(n)}).

Open Questions
• Is P = NP?
• co-NP consists of the "no versions" of NP problems. E.g. Non-TSP: does G have no TSP tour of length ≤ k? Is Non-TSP ∈ NP? Nobody knows. Is co-NP = NP? Is P = NP ∩ co-NP?
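The independent-set certifier above is easy to make concrete (a sketch, with G given as adjacency sets):

    def certify_independent_set(G, k, U):
        # G: dict vertex -> set of neighbours; U: the certificate (set of vertices).
        if len(U) < k:
            return False
        return all(v not in G[u] for u in U for v in U)   # no edge inside U

    G = {1: {2}, 2: {1, 3}, 3: {2}}
    print(certify_independent_set(G, 2, {1, 3}))  # True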

18 Nov 6th, 2008

Recall A ≤_P B – problem A "reduces (in polytime) to" problem B – if there is a polytime algorithm for A (possibly) using a polytime algorithm for B. (B is the "harder" one.) P = { decision problems with polytime algorithms } and NP = { decision problems with a polynomial-time certifier algorithm } (i.e., solvable in poly-time IF we get extra information).

18.1 NP-Complete

These are the hardest problems in NP.

Definition A decision problem X is NP-complete if:
1. X ∈ NP, and
2. Y ≤_P X for every Y ∈ NP.

Two important implications:
1. If X is NP-complete and X has a polytime algorithm, then P = NP – every Y ∈ NP has a polytime algorithm.
2. If X is NP-complete and X has no polytime algorithm (i.e., a lower bound), then no NP-complete problem has a polytime algorithm.

Please don't get this backwards.

To show a problem X NP-complete from scratch, we must show Y ≤_P X for all Y ∈ NP, so the first NP-completeness proof is hard. Subsequent NP-completeness proofs are easier: if we know X is NP-complete, then to prove Z is NP-complete:
1. prove Z ∈ NP, and
2. prove X ≤_P Z.

Note that X is a known NP-complete problem and Z is the new problem.

18.2 Circuit Satisfiability

The first NP-complete problem is called circuit satisfiability. (Figure from class: a circuit with inputs x_1, x_2, ..., gates such as ∧ and ¬ computing values, and one output, the sink v; variables appear among the inputs.)

A circuit is a dag with OR, AND, and NOT operations computing values; 0-1 values for the variables determine the output value. E.g., for the circuit computing (x_1 ∧ x_2) ∨ (¬x_1 ∧ ¬x_2): if x_1 = 0 and x_2 = 1, then the output is 0. Question: are there 0-1 values for the variables that give 1 as the output?

Circuit-SAT is a decision problem in NP:
• Certificate – values for the variables.
• Certifier – go through the circuit from the sources to the sink, computing values; check that the output is 1.

Theorem Circuit-SAT is NP-complete.

Proof sketch We know Circuit-SAT ∈ NP, as above. We must show Y ≤_P Circuit-SAT for every Y ∈ NP. The idea is that an algorithm becomes a circuit computation: a certifier algorithm with an unknown certificate becomes a circuit with variables as some of its inputs. The question "is there a certificate such that the certifier says YES" then becomes exactly circuit satisfiability. So if we had a polynomial-time way to test circuit satisfiability, we would have a general way to solve any problem in NP, by turning its certifier into a Circuit-SAT instance.

18.2.2 3-SAT

Satisfiability (of Boolean formulas): even a special form of Satisfiability (SAT) is NP-complete. 3-SAT:
• Input: a Boolean formula that is the ∧ of "clauses," each clause the ∨ of three literals; a literal is a variable or the negation of a variable. E.g.

(x_1 ∨ ¬x_1 ∨ x_2) ∧ (x_2 ∨ x_3 ∨ x_4) ∧ ...

• Question: is there an assignment of 0, 1 to the variables that makes the formula TRUE (i.e., 1)?
• Certificate – values for the variables.
• Certifier algorithm – check that each clause has ≥ 1 true literal.

Theorem 3-SAT is NP-complete.

Proof
• 3-SAT ∈ NP, as above.
• 3-SAT is harder than another NP-complete problem: i.e., prove Circuit-SAT ≤_P 3-SAT.

We will be rigorous. Assume we have a polytime algorithm for 3-SAT, and use it to create a polytime algorithm for Circuit-SAT. The input to our algorithm is a circuit C, and we want to construct, in polytime, a 3-SAT formula F to send to the 3-SAT algorithm, such that C is satisfiable iff F is satisfiable.

We could derive a formula by carrying the inputs up through the circuit (i.e., for subformulas f_1 and f_2 feeding an ∨ node, just write f_1 ∨ f_2). Caution: the size of the formula doubles at every level, so this is not a polynomial time (or size) reduction.

Idea: make a variable for every node in the circuit, e.g.

x_7 ∧ (x_7 ≡ x_5 ∨ x_6) ∧ (x_5 ≡ x_1 ∧ x_2) ∧ (x_6 ≡ x_3 ∧ x_4) ∧ (x_3 ≡ ¬x_1) ∧ (x_4 ≡ ¬x_2)

Then rewrite each equivalence as clauses: rewrite a ≡ b as (a ⇒ b) ∧ (b ⇒ a), and a ⇒ b as (b ∨ ¬a). For instance, a ≡ (b ∨ c) becomes (a ⇒ (b ∨ c)) ∧ ((b ∨ c) ⇒ a), i.e. (b ∨ c ∨ ¬a) ∧ (a ∨ ¬(b ∨ c)) = (b ∨ c ∨ ¬a) ∧ (a ∨ (¬b ∧ ¬c)), and distributing gives

(b ∨ c ∨ ¬a) ∧ (a ∨ ¬b) ∧ (a ∨ ¬c)

Note: we can pad these size-two clauses by adding a new dummy variable t: replace (a ∨ b) by (a ∨ b ∨ t) ∧ (a ∨ b ∨ ¬t), etc. There's a similar padding for clauses of size 1.

The final formula F is the ∧ of all the clauses for the circuit nodes, together with ∧ x_i, where i is the output node.

Claim F has polynomial size and can be constructed in polynomial time.
Claim C is satisfiable iff F is satisfiable.
Proof (⇒) by construction. (⇐) A satisfying assignment for F must give every node variable the value the circuit computes there, and it sets the output variable to 1.

19 Nov 11th, 2008

NP is the decision problems with a polynomial time certifier algorithm; P is the decision problems with a polynomial time algorithm. NP-complete problems are the hardest problems in NP.

Definition A decision problem X is NP-complete if:
• X ∈ NP
• Y ≤_P X for all Y ∈ NP

Once we know X is NP-complete, we can prove Z is NP-complete by proving:
• Z ∈ NP
• X ≤_P Z

19.1 Satisfiability – no restricted form

Recall: 3-SAT is NP-complete; there the input is a Boolean formula in a special form (three-conjunctive normal form), e.g. F = (x_1 ∨ x_2 ∨ ¬x_3) ∧ .... In general SAT, the input is an arbitrary Boolean formula. Question: are there T/F values for the variables that make F true?

Theorem SAT is NP-complete.
Proof
• SAT ∈ NP (certificate and certifier as for 3-SAT).
• 3-SAT ≤_P SAT, since 3-SAT is a special case.

19.2 Independent Set

Input: a graph G = (V, E) and k ∈ N. Question: is there a subset U ⊆ V with |U| ≥ k that is independent (i.e., no two of its vertices joined by an edge)?

Theorem Independent-Set is NP-complete.

Proof Independent-Set is in NP – see the previous lecture. We will show that 3-SAT reduces to Independent-Set: we want to give a polytime algorithm for 3-SAT using a hypothesized polytime algorithm for Independent-Set.

Input: a Boolean formula F with m clauses. Goal: construct a graph G and choose k ∈ N such that F is satisfiable iff G has an independent set of size ≥ k.

For each clause in F, we'll make a triangle in the graph. For example, (x_1 ∨ x_2 ∨ ¬x_3) is drawn as a triangle with three vertices labelled x_1, x_2 and ¬x_3, with edges between them. Then connect any vertex labelled x_i with any vertex labelled ¬x_i. For example, (x_1 ∨ x_2 ∨ ¬x_3) ∧ (x_1 ∨ ¬x_2 ∨ x_3) becomes two triangles – {x_1, x_2, ¬x_3} and {x_1, ¬x_2, x_3} – plus the conflict edges (x_2, ¬x_2) and (¬x_3, x_3). Claim: G has polynomial size – we have m clauses, so 3m vertices. Take k = m.

Details of the Algorithm:
• Input: 3-SAT formula F.
• Construct G; call the Independent-Set algorithm on G, m; return its answer.
• Runtime: constructing G takes poly time; Independent-Set runs in poly time by assumption.
• Correctness: Claim – F is satisfiable iff G has an independent set of size ≥ m.

Proof (⇒) Suppose we can assign T/F to the variables to satisfy every clause, so each clause has ≥ 1 true literal. Pick the corresponding vertex from each triangle. This gives an independent set of size m: we pick one vertex per triangle, and conflict edges are avoided because we never pick both x_i and ¬x_i. (⇐) An independent set of size m must use exactly one vertex from each triangle. Set the corresponding literals to be true – this is consistent, because the x_i and ¬x_i vertices are adjacent, so we never chose both – and set any remaining variables arbitrarily. This satisfies all the clauses.
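A sketch of this clause-triangle construction in Python (literals encoded as +i for x_i and −i for ¬x_i; the encoding is my own):

    def three_sat_to_independent_set(clauses):
        # clauses: list of 3-tuples of nonzero ints, e.g. (1, 2, -3) for (x1 v x2 v ¬x3).
        # Returns (edges on vertices 0..3m-1, k); vertex 3c+t is literal t of clause c.
        edges = []
        for c, clause in enumerate(clauses):
            base = 3 * c
            edges += [(base, base + 1), (base, base + 2), (base + 1, base + 2)]  # triangle
        lits = [lit for clause in clauses for lit in clause]
        for i in range(len(lits)):
            for j in range(i + 1, len(lits)):
                if lits[i] == -lits[j]:
                    edges.append((i, j))       # conflict edge between x_i and ¬x_i
        return edges, len(clauses)

    print(three_sat_to_independent_set([(1, 2, -3), (1, -2, 3)]))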

19.3 Vertex Cover

Input: graph G = (V, E) and k ∈ N.
Question: Does G have a vertex cover U ⊆ V with |U| ≤ k? A vertex cover is a set of vertices that "hits" all edges – i.e. ∀(u, v) ∈ E, u ∈ U or v ∈ U (or both).

Theorem Vertex-Cover (VC) is NP-complete.
Proof
• VC ∈ NP. Certificate: a set U. Certifier algorithm: verify that U is a vertex cover and that |U| ≤ k.
• Ind-Set ≤P VC. Ind-Set and VC are closely related. Claim: U ⊆ V is an independent set iff V − U is a vertex cover (U contains no edge iff every edge has an endpoint outside U). So G has an independent set of size ≥ k iff G has a vertex cover of size ≤ n − k.

Suppose that we have a polynomial-time algorithm for VC. Here's an algorithm for Independent-Set: on input G, k, call the VC algorithm on G with bound n − k and return its answer. Correctness: the claim above. (A small code sketch of this complementation appears below, after Section 19.5.)

19.4 Set-Cover Problem

Input: a set E of elements and some subsets of E: S1, . . . , Sm with Si ⊆ E, and k ∈ N.
Question: Can we choose k of the Si's that still cover all the elements, i.e. indices i1, . . . , ik such that ∪j=1..k Sij = E?

Example: Can we throw away some of a collection of intersecting rectangles and still cover the same area?

Theorem Set-Cover is NP-complete.
Note: VC ≤P Set-Cover because VC is a special case (take the elements to be the edges, with one set per vertex consisting of its incident edges), but also Set-Cover ≤P VC, because VC is NP-complete. Please find the reduction proof on the Internet.

19.5 Road map of NP-completeness

(Diagram: Circuit-SAT reduces to 3-SAT; from 3-SAT there are reductions to Subset-Sum, to Hamiltonian Cycle and on to TSP, and to Independent Set, then VC, then Set-Cover.)

These proofs are from a 1972 paper by Richard Karp.
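The Ind-Set-to-VC complementation is easy to state in code. In this sketch, vc_exists is a brute-force stand-in for the hypothesized polytime VC algorithm, and all names are assumptions made for illustration; the point is only the n − k complement trick.

    from itertools import combinations

    def vc_exists(V, E, k):
        # Stand-in for the hypothesized polytime VC algorithm (brute force).
        return any(all(u in S or v in S for (u, v) in E)
                   for size in range(k + 1)
                   for S in map(set, combinations(V, size)))

    def independent_set_exists(V, E, k):
        # U is independent iff V - U is a vertex cover, so:
        return vc_exists(V, E, len(V) - k)

    # A triangle has minimum vertex cover 2 and maximum independent set 1.
    V, E = [1, 2, 3], [(1, 2), (2, 3), (3, 1)]
    print(independent_set_exists(V, E, 1), independent_set_exists(V, E, 2))  # True False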

• Input: 3-SAT formula F
• Idea: construct a digraph G such that F is satisfiable iff G has a Hamiltonian cycle. Here F has m clauses and n variables x1, . . . , xn. (We skipped this construction in this section; read it online.) Can you show the undirected Ham cycle problem is hard?

20 Nov 13th, 2008

20.1 Undirected Hamiltonian Cycle

Input: undirected G = (V, E).
Decision: Does this graph have an undirected Hamiltonian cycle, one that visits every vertex exactly once?

Theorem Undirected H.C. is NP-complete.
Proof
• ∈ NP
• Dir. H.C. ≤P Undir. H.C. Assume we have a polytime algorithm for the undirected case; design a polytime algorithm for the directed case. Input: directed graph G. Construct an undirected graph G′ such that G has a directed H.C. iff G′ has an undirected H.C.

First idea – G′ = G with direction erased. (⇒) is OK, but (⇐) fails: the undirected cycle may traverse edges against their direction (e.g. reverse one edge of a directed cycle).

Second idea – for each vertex v create vin, vmid, and vout. (Figure: v becomes the path vin – vmid – vout; each directed edge (u, v) of G becomes the undirected edge {uout, vin}.) We've created G′. Say G has n vertices and m edges; then G′ has 3n vertices and m + 2n edges. Claim G′ has polynomial size.

Claim (Correctness) G has a directed H.C. iff G′ has an undirected H.C.
(⇒) easy. (⇐) vmid has degree two, so the Hamiltonian cycle must use both of its incident edges. Then at each original vertex the cycle must use one incoming edge (at vin) and one outgoing edge (at vout), so it can be read off as a directed Hamiltonian cycle in G. (The construction is sketched in code below.)

This is the level of NP-completeness proof you'll be expected to do on your assignment.
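A minimal sketch of the vin/vmid/vout splitting construction; the edge-set representation is an assumption chosen for this example:

    # Sketch: reduce directed Hamiltonian cycle to undirected Hamiltonian cycle.
    # Each vertex v is split into a path (v,'in') - (v,'mid') - (v,'out');
    # a directed edge (u, v) becomes the undirected edge {(u,'out'), (v,'in')}.
    def directed_to_undirected(vertices, arcs):
        und_vertices = [(v, tag) for v in vertices for tag in ("in", "mid", "out")]
        und_edges = {frozenset([(v, "in"), (v, "mid")]) for v in vertices}
        und_edges |= {frozenset([(v, "mid"), (v, "out")]) for v in vertices}
        und_edges |= {frozenset([(u, "out"), (v, "in")]) for (u, v) in arcs}
        return und_vertices, und_edges

    # n vertices and m arcs give 3n vertices and m + 2n edges, as claimed.
    V, E = directed_to_undirected([1, 2, 3], [(1, 2), (2, 3), (3, 1)])
    print(len(V), len(E))   # 9 and 9 (= m + 2n with n = 3, m = 3)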

20.2 TSP is NP-complete

Theorem TSP (decision version) is NP-complete.
Input: G = (V, E), weights w : E → R+, and k ∈ R.
Q: Does G have a TSP tour with total weight ≤ k?
Proof
1. ∈ NP
2. Ham. Cycle ≤P TSP: Ham. Cycle is the special case of TSP in which w(e) = 1 for all e and k = n. (A hedged code sketch of this reduction appears below, after the Hamiltonian Path discussion.)

Theorem Hamiltonian Path is NP-complete.
Input: undirected graph G. Question: does G have a Ham path, i.e. a path that visits each vertex exactly once?
Proof
– ∈ NP
– Ham. Cycle ≤P Ham. Path: we want an algorithm for Ham. Cycle using an algorithm for Ham. Path. Given G, construct G′ such that G has a H.C. iff G′ has a Ham path.

First idea: G′ ← G. ⇒ is OK, but we can find a counterexample for ⇐. Exercise: find a counterexample.
Second idea: create three new vertices a, b, c in G′ and connect a and c to all vertices in G. Again, a counterexample exists.
Third idea: add a single new vertex and connect it to everything in G. This gives: G has a Ham path iff G′ has a Ham cycle – a correct reduction, but in the wrong direction.
Fourth idea: erase each vertex from G one at a time and ask for a Hamiltonian path; but a Ham path in G − v need not have both endpoints adjacent to v, so it need not close into a cycle.
Final idea: take one vertex v and split it into two identical copies, then add new degree-one vertices s and t, attached one to each copy. Then G has a Ham cycle iff G′ has a Ham path. Again, this is the kind of thing you'll be expected to do on your assignment.
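To make the TSP reduction concrete, here is a minimal sketch. It uses the common complete-graph variant (weight 1 on edges of G, weight 2 on non-edges, bound k = n); that variant is my assumption, chosen so the TSP instance is a complete weighted graph, and it matches the spirit of the w(e) = 1, k = n special case in the notes.

    # Sketch: reduce Hamiltonian Cycle to TSP (decision version).
    # G has a Hamiltonian cycle iff the complete graph below has a tour
    # of total weight <= n.
    def ham_cycle_to_tsp(vertices, edges):
        n = len(vertices)
        edge_set = {frozenset(e) for e in edges}
        weight = {frozenset((u, v)): 1 if frozenset((u, v)) in edge_set else 2
                  for i, u in enumerate(vertices) for v in vertices[i + 1:]}
        return weight, n   # TSP instance: complete-graph weights, bound k = n

    weights, k = ham_cycle_to_tsp([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
    print(k, sorted(weights.values()))   # 4 [1, 1, 1, 1, 2, 2]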

20.3 Subset-Sum is NP-complete

(This one is not something you'll be expected to do on your assignment.)

Input: numbers a1, . . . , an ∈ R and a target W.
Question: Is there a subset S ⊆ {1, . . . , n} such that Σi∈S ai = W?
Recall: the dynamic programming algorithm is O(n × W); the branch-and-bound algorithm was O(2^n).

3-SAT ≤P Subset-Sum: give a polynomial-time algorithm for 3-SAT using a polytime algorithm for Subset-Sum. The input is a 3-SAT formula F with variables x1, . . . , xn and clauses c1, . . . , cm. Construct a Subset-Sum input a1, . . . , at, W such that F is satisfiable iff some subset of the ai's sums to W.

Ex. F = (x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ ¬x2 ∨ x3).

Make a 0–1 matrix with one row per literal (x1, ¬x1, . . . , xn, ¬xn) and one column per clause: put a 1 in column cj when the row's literal appears in clause cj. Each row, read as digits, becomes a number – not in base 2 but in base 10, so that column sums cannot carry. These are the ai's, and the target row of the matrix turns into W in base 10.

• We want to force choosing the x1 row or the ¬x1 row, but not both. Add extra columns: column xi has 1s in rows xi and ¬xi but zeros elsewhere, with target digit 1.
• We want the clause columns to have target "≥ 1" (at least one true literal), but Subset-Sum demands an exact sum. Solution: add two rows per clause column ci – slack i.1 with a 1 in column ci, and slack i.2 with a 2 in column ci – and 0 everywhere else. Set the target for column ci to 4.

For the example:

                x1  x2  x3 | c1  c2
    x1           1   0   0 |  1   0
    ¬x1          1   0   0 |  0   1
    x2           0   1   0 |  0   0
    ¬x2          0   1   0 |  1   1
    x3           0   0   1 |  1   1
    ¬x3          0   0   1 |  0   0
    slack 1.1    0   0   0 |  1   0
    slack 1.2    0   0   0 |  2   0
    slack 2.1    0   0   0 |  0   1
    slack 2.2    0   0   0 |  0   2
    target W     1   1   1 |  4   4

Claim Correctness. F is satisfiable iff some subset of the ai's has sum W.
Proof (⇒) If xi is true, choose row xi; if false, choose row ¬xi. Then column xi has sum 1, as required. Consider column cj: the chosen literal rows contribute 1, 2 or 3. If only a single literal of cj is true, use slack j.1 and slack j.2 (1 + 1 + 2 = 4); with two true literals, use slack j.2 (2 + 2 = 4); with three, use slack j.1 (3 + 1 = 4). Summing down each cj column gives 4, so this row set gives sum W.
(⇐) Suppose some subset of rows adds to W. Column xi forces us to use row xi or row ¬xi, but not both; set xi = T or F accordingly. Consider column cj: its target is 4, and the slacks give ≤ 3, so some literal in cj must be true. Hence every clause is satisfied.

Claim Size. How many ai's? 2n + 2m. How many base-10 digits in the ai's and W? Equal to the number of columns, n + m. (The construction is sketched in code below.)
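A small sketch of the construction. The clause representation (lists of literals ±i) and helper names are assumptions made for illustration:

    # Sketch: build the Subset-Sum instance (the a_i's and W) from a 3-CNF formula.
    # Digit lists are read in base 10: n variable columns, then m clause columns.
    def threesat_to_subset_sum(n, clauses):
        m = len(clauses)
        def num(digits):
            return int("".join(map(str, digits)))   # base 10, so no carries occur
        nums = []
        for i in range(1, n + 1):
            for lit in (i, -i):                     # a row for x_i and one for ¬x_i
                digits = [0] * (n + m)
                digits[i - 1] = 1                   # variable column x_i
                for j, c in enumerate(clauses):
                    if lit in c:
                        digits[n + j] = 1           # clause columns containing this literal
                nums.append(num(digits))
        for j in range(m):                          # slack rows: digits 1 and 2 in column c_j
            for d in (1, 2):
                digits = [0] * (n + m)
                digits[n + j] = d
                nums.append(num(digits))
        return nums, num([1] * n + [4] * m)         # target W: 1 per variable, 4 per clause

    # (x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ ¬x2 ∨ x3), as in the notes:
    nums, W = threesat_to_subset_sum(3, [[1, -2, 3], [-1, -2, 3]])
    print(len(nums), W)   # 10 numbers (2n + 2m), W = 11144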

21 Nov 18th, 2008

NP-Completeness continued.

Theorem Circuit-SAT is NP-Complete.
Recall the input: a circuit of ∨, ∧ and ¬ gates, with variables as some of the inputs and one sink, the final output. Question: are there 0–1 values for the variables for which the circuit outputs 1?

Proof
• ∈ NP
• Y ≤P Circuit-SAT for all Y in NP. We assume there is a polynomial-time algorithm for Circuit-SAT and give a polynomial-time algorithm for Y using that subroutine. What do we know about Y? It has a polynomial-time certifier algorithm B: input s for Y has a YES output iff there exists a certificate t of polynomial size such that B(s, t) outputs YES. Let n = size(s), and let p(n) be a polynomial bounding size(t), i.e. size(t) ≤ p(n).

We must convert the algorithm B to a circuit (to hand to the Circuit-SAT subroutine). B (after compiling and assembling) becomes a circuit at the lowest hardware level. Because B runs in polynomial time, the circuit has polynomial size: for inputs of size n, algorithm B becomes a circuit Cn of size polynomial in the input size. The bits of s are hard-wired as constants and the bits of t become the circuit's variables, so "Is there a certificate?" becomes "Are there values for the variables?".

Algorithm for Y:
– Input s
– Convert B to the circuit Cn
– Hand Cn to the Circuit-SAT subroutine

Correctness: input s for Y gets a YES output iff there exists a certificate t such that B(s, t) outputs YES, iff there exist values for the variables t such that Cn outputs 1, iff Cn is satisfiable.

21.1 Major Open Questions

Is P = NP? If one NP-complete problem is in P, then they all are. If P ≠ NP, then there are problems strictly in between P and NP-complete (Ladner, 70's), i.e. problems with A ≤P B but not B ≤P A (written A <P B). But what are natural candidates for these? In Garey and Johnson ('79) the candidates were:
• Linear Programming: in P ('80)
• Primality Testing: in P ('02)
• Minimum Weight Triangulation for a point set: NP-complete ('06) (not a famous problem)
• Graph Isomorphism: open. Given two graphs, each on n vertices, are they the same after relabeling vertices? (A brute-force check appears below.)
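For concreteness, here is the obvious brute-force isomorphism check; it tries all n! relabelings, and whether anything fundamentally faster exists is exactly the open question. The representation and names are assumptions for illustration.

    from itertools import permutations

    def isomorphic(edges1, edges2, n):
        e1 = {frozenset(e) for e in edges1}
        e2 = {frozenset(e) for e in edges2}
        if len(e1) != len(e2):
            return False
        for perm in permutations(range(n)):          # a candidate relabeling
            if all(frozenset((perm[u], perm[v])) in e2 for (u, v) in e1):
                return True
        return False

    # A path 0-1-2 is isomorphic to the path with centre 0, not to a triangle.
    print(isomorphic([(0, 1), (1, 2)], [(1, 0), (0, 2)], 3))            # True
    print(isomorphic([(0, 1), (1, 2)], [(0, 1), (1, 2), (2, 0)], 3))    # False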

21.2 Undecidability

So far we've been talking about the efficiency of algorithms. Now we'll look at problems with no algorithm whatsoever. This is a topic not conventionally covered in an algorithms course, so you won't find it in textbooks. But everyone in the School of Computer Science thinks it's "absolutely crucial" that everyone graduating with a Waterloo degree knows this stuff.

21.2.1 Examples

Tiling: Given square tiles with colours on their sides, can I tile the whole plane with copies of these tiles? Colours must match, and no rotations or flips are allowed. For a finite piece (k × k) of the plane it's possible to decide, as I could just try each of the t tile types in each of the k² places, so the problem is O(t^(k²)). For the whole plane there is no algorithm.

Program Verification: Given a specification of the inputs and corresponding outputs of a program (the specification is finite; the number of potential inputs is infinite) and given a program, does the program give the correct corresponding output for each input? The answer is, actually, no algorithm can decide this. On one hand this is sad for software engineers, because part of what their processes do attempts to check exactly this; on the plus side, your skills and ingenuity will always be needed.

Halting Problem: Given a program, does it halt (or go into an infinite loop)?

Sample-Program
    while x ≠ 1 do
      x ← x − 2
    end
This halts iff x is odd and positive.

Sample-Program-2
    while x ≠ 1 do
      if x is even then x ← x/2 else x ← 3x + 1
    end
Assume x > 0. Sample runs: x = 5: 16, 8, 4, 2, 1. x = 9: 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1. Does this program halt for all x? That's open. (A Python version for experimentation appears just below.)
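Here is Sample-Program-2 in Python, a direct transcription; the trajectory printing is an addition of mine for experimentation.

    # Sample-Program-2 (the Collatz iteration): nobody knows whether this
    # halts for every positive starting x -- that is exactly the open question.
    def sample_program_2(x):
        trajectory = [x]
        while x != 1:
            x = x // 2 if x % 2 == 0 else 3 * x + 1
            trajectory.append(x)
        return trajectory

    print(sample_program_2(5))   # [5, 16, 8, 4, 2, 1]
    print(sample_program_2(9))   # [9, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13,
                                 #  40, 20, 10, 5, 16, 8, 4, 2, 1]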

22 Nov 20th, 2008

22.1 Undecidability

"Which problems have no algorithm?"

Definition A decision problem is undecidable if it has no algorithm.
Definition (more general) A problem is unsolvable if there's no algorithm for it.

But what is a problem? A specification of inputs and corresponding outputs. And what is an algorithm? Church–Turing Thesis (not proved): an algorithm is a Turing machine.

Theorem The following models of computing are equivalent:
• Turing machines
• Java programs
• RAM
• Circuit families

22.2 History of Undecidability

• Gottlob Frege – around 1900 – one of many who tried to axiomatize mathematics.
• Bertrand Russell (1872–1970) – Russell's paradox (I recommend his biography, and some philosophy books). Let S = the set of sets that do not contain themselves. Is S a member of itself?
  – NO: then S meets the defining condition, so S is a member of S. Contradiction.
  – YES: then S contains itself, so it fails the defining condition. Contradiction.
Contradiction either way! So what is wrong about this?

First undecidability result (from Turing):
Theorem The Halting Problem is undecidable.

Also, any math question about the existence of a number can be turned into a halting question. Idea: to ask "is there an x such that Foo(x)?", run: x ← 1; while not Foo(x), x ← x + 1. This program halts iff such an x exists. (A tiny sketch follows.)
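A tiny illustration of the existence-to-halting idea; Foo here is an arbitrary stand-in predicate of my choosing, since the remark in the notes is the general principle.

    # Turning "is there an x with Foo(x)?" into a halting question:
    # this loop halts iff such an x exists.
    def exists_search(foo):
        x = 1
        while not foo(x):
            x += 1
        return x   # reached only if the search halts

    print(exists_search(lambda x: x * x > 100))   # halts: prints 11
    # For a predicate that is never true, the loop runs forever -- and
    # deciding which behaviour occurs is exactly the halting problem.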

Halting Problem
• Input: some program or algorithm A, and some input string w for A.
• Question: Does A halt on w?

Proof (by contradiction.) Suppose there is a program H that decides the halting problem: H takes A, w as input and outputs yes/no. Construct a new program H′ whose input is a program B:

    begin
      call H(B, B)
      if no: halt
      else: loop forever
    end

So H′ is like Russell's set S: his question "does S contain S?" is like asking "does H′ halt on its own input?"

Look at the code for H′ on input H′. Suppose yes, it halts: then this is a yes case of the halting problem, so H(H′, H′) outputs yes – and H′ loops forever. Suppose no: then this is the no case of the halting problem, so H(H′, H′) outputs no – but then (looking at the code of H′) H′ halts on input H′. Contradiction either way. Therefore our assumption that H exists is wrong, and there is no algorithm to decide the halting problem. (A code rendering of H′ appears after Section 23.1 below.)

23 Nov 25th, 2008

Assignment 3 – out of 45. Assignment 4 – due Friday. Final exam: a study sheet is allowed.

23.1 Undecidability

Recall: a decision problem is undecidable if there is no algorithm for it. Halting Problem: given a program/algorithm A and an input w, does A halt on input w?

To show other problems are undecidable, use reductions. Recall A ≤ B, or "A reduces to B", if an algorithm for B can be used to make an algorithm for A.

Theorem If P and Q are decision problems, P is undecidable, and P ≤ Q, then Q is undecidable.
Proof By contradiction: suppose Q is decidable. Then it has an algorithm, and by the definition of ≤ we get an algorithm for P. This is contrary to P being undecidable.
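The diagonal construction above can be mimicked in Python purely as an illustration; halts is the hypothesized decider, which of course cannot actually be implemented (that is the theorem).

    # Illustration of the diagonal argument. 'halts' stands for the
    # hypothesized decider H(program, input); no such function can exist.
    def halts(program, data):
        raise NotImplementedError("no such decider exists -- that's the point")

    def h_prime(b):           # the program H' built from H in the proof
        if not halts(b, b):   # call H(B, B)
            return            # if no: halt
        while True:           # else: loop forever
            pass

    # Feeding H' to itself reproduces the contradiction: if h_prime(h_prime)
    # halts, halts() says yes and it loops; if it loops, halts() says no
    # and it halts.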

23.2 Other Undecidable Problems

23.2.1 Halt-No-Input (or Halt-on-Empty)

Given a program A with no input, does it halt?

Theorem Halt-No-Input is undecidable.
Proof Halting Problem ≤ Halt-No-Input. Suppose we have an algorithm X for Halt-No-Input; make an algorithm to solve the Halting Problem.
Input: program A, input string w.
Algorithm: make a program A′ that has w hard-coded inside it and then runs A on it. Call X on A′, which outputs the yes/no answer.
Correctness: A halts on w iff A′ halts. (The hard-coding step is sketched in code below, after the Program Equivalence discussion.)

23.2.2 Program Verification

Given a program, and a specification of inputs and corresponding outputs, does the program compute the correct output for each input?

Theorem Program Verification is undecidable.
Proof Halt-No-Input ≤ Program Verification. Suppose we have an algorithm V to decide Program Verification; make an algorithm for Halt-No-Input.
Input: program A. Output: does A halt?
Idea: modify the code of A to get a program A′ with input and output:

    A′: read the input, discard it; run A; output 1

Then call V(A′, spec: "for any input, output 1").
Correctness: A halts iff A′ produces output 1 for every input iff V(A′, the spec above) answers yes.

23.2.3 Program Equivalence

(Something TA's would love!) Given two programs, do they behave the same – i.e. produce the same outputs on the same inputs?

Theorem Program Equivalence is undecidable.
Proof Program-Verification ≤ Program-Equiv(?): suppose we have an algorithm for Program Equivalence and give an algorithm for Program Verification. This would work, but we need more formality about input/output specs. Let's try another approach.

Halt-No-Input ≤ Program-Equiv. Suppose we have an algorithm for Program Equivalence; make an algorithm for Halt-No-Input. Input: program A. Make a program B: "read the input; just output 1." Make A′ as in the previous proof. Call the Program-Equiv algorithm on A′ and B. Correctness: A′ is equivalent to B iff A halts.
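The "hard-code w into A" step is just source-to-source program construction. A minimal sketch, treating programs as Python source strings and assuming (my assumption, for illustration) that A is given as a function run_A(w):

    # Sketch of the reduction Halting Problem <= Halt-No-Input: from A and w,
    # build A', which ignores any input and runs A on the hard-coded w.
    # A' halts iff A halts on w.
    def hardcode(a_source, w):
        return (
            f"w = {w!r}\n"       # the input w, baked into the program
            + a_source           # the definition of run_A(w)
            + "\nrun_A(w)\n"     # run A on the hard-coded input
        )

    a_source = "def run_A(w):\n    while w != '': w = w[1:]   # always halts"
    print(hardcode(a_source, "abc"))
    # Feeding the resulting no-input program to a Halt-No-Input decider X
    # would answer whether A halts on w -- so no such X can exist.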

23.3 Other Problems (no proofs)

Hilbert's 10th Problem Given a polynomial P(x1, . . . , xn) with integer coefficients, does P(x1, . . . , xn) = 0 have positive integer solutions? This was proved undecidable in the 70's. Possible approach: try all integers. This will correctly answer "yes" if the answer is "yes" – but it never halts when the answer is "no", and solutions can be enormous: for example, the least integer solution to x² = 991y² + 1 has a 30-digit x and a 29-digit y.

Conway's Game of Life Rules: spots die with 0–1 neighbours or with 4 or more, and are born with exactly three neighbours. Questions about the long-term behaviour of a configuration are undecidable.

24 Nov 27th, 2008

Final exam: Wed Dec 10th. Office hours: see webpage. Marks of 48 and 49 must be rounded up to 50.

24.1 What to do with NP-complete problems

Sometimes you only want special cases of an NP-complete problem, e.g. bounding the maximum degree in a graph: there may be algorithms that work in polytime when you bound that maximum degree.

• Parameterized tractability: exponential algorithms that work in polynomial time for special inputs.
• Exact exponential-time algorithms: use heuristics to make branch-and-bound explore the most promising choice first (and run fast, sometimes).
• Approximation algorithms: CS 466.
  – Vertex Cover: a greedy algorithm that finds a good (not necessarily minimum) vertex cover.

        C ← ∅
        while E ≠ ∅
          pick any e = (u, v) ∈ E
          C ← C ∪ {u, v}
          remove from E all edges incident to u or v
        end

    Claim: this algorithm finds |C| ≤ 2 · (min size of a V.C.).
    Proof: the edges we choose form a matching M (no two share an endpoint), and |C| = 2|M|. Every edge in M must be hit by a vertex in any V.C., and no single vertex hits two edges of M, ∴ |M| ≤ min size of a V.C., and ∴ |C| ≤ 2 × (min V.C.). We call this a "2-approximation algorithm." (A runnable version follows.)

Some NP-complete problems have no constant-factor approximation algorithm (unless P = NP), such as Independent Set.
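The greedy 2-approximation as runnable Python – a direct transcription of the pseudocode above, with an edge-list representation assumed for the example:

    # Greedy 2-approximation for Vertex Cover: repeatedly take both endpoints
    # of an uncovered edge. The chosen edges form a matching, so the result
    # is at most twice the minimum vertex cover.
    def vertex_cover_2approx(edges):
        cover = set()
        remaining = list(edges)
        while remaining:
            u, v = remaining[0]                  # pick any remaining edge
            cover |= {u, v}
            remaining = [e for e in remaining
                         if u not in e and v not in e]   # drop covered edges
        return cover

    # Star graph: the optimum cover is {0} (size 1); greedy returns 2 vertices.
    print(vertex_cover_2approx([(0, 1), (0, 2), (0, 3)]))   # e.g. {0, 1}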

• Do alternative methods of computing help with NP-complete problems? Will massively parallel computers help? Only by a factor of the number of CPUs – "a drop in the bucket" against exponential-time algorithms.

• Randomized algorithms (CS 466?): if I have access to a RNG, then what can I now do? Primality can be tested in polytime with a randomized algorithm (70's), but also without randomness (2002).

Some NP-complete problems have approximation factors as close to 1 as we like – at the cost of increasing running time.

– Example: Subset-Sum. Given w1, . . . , wn and W, is there S ⊆ {1, . . . , n} such that Σi∈S wi = W? As an optimization problem: we want S with Σi∈S wi ≤ W maximizing Σi∈S wi. (And assume wi < W ∀i.) Recall: the dynamic programming algorithm, O(n × W).

Note: Σi∈S wi ≥ ½ · (true max) would be a "2-approximation"; Σi∈S wi ≥ (1/(1+ε)) · (true max) is a "(1+ε)-approximation."

Claim There is a (1+ε)-approximation algorithm for Subset-Sum with runtime O((1/ε) n³). As ε → 0 we get better approximation but worse runtime; the limit, approximation factor 1, is an exact algorithm with exponential running time.

Idea: the dynamic programming algorithm is very good – its only weakness is that it can't handle numbers with lots of bits. So throw away half the bits and get an approximate answer: rough rounding – few bits – rough approximation; refined rounding – many bits – good approximation. Apply dynamic programming to the rounded input.

Rounding parameter b (later b = (ε/n) · maxi wi). Round each weight up to a multiple of b: w̃i ← ⌈wi/b⌉ · b, so that wi ≤ w̃i ≤ wi + b, and set W̃ ← W. Now all the w̃i's are multiples of b, so scale everything down by b and run the dynamic programming algorithm. (Note: we should check feasibility after rounding.)

Runtime: O(n × W̃/b), and W̃/b = O(W/b) = O(W n / (ε · maxi wi)) ≤ O((1/ε) n²), using W ≤ n · maxi wi. Therefore our runtime is like O((1/ε) n³).

How good is our approximation? Each w̃i is off by ≤ b, so for the returned set S:

    true max ≤ Σi∈S w̃i ≤ Σi∈S wi + nb = Σi∈S wi + ε (maxi wi) ≤ Σi∈S wi + ε Σi∈S wi = (1+ε) Σi∈S wi.

Second-to-last step: we may assume Σi∈S wi ≥ maxi wi – else just use the single largest wi as the solution. (A code sketch of the whole scheme follows.)
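A compact sketch of the rounding scheme just described, with a set-based exact DP as the subroutine. The helper names are mine, and b is taken as (ε/n) · max wi as in the notes; per the "check feasibility" caution above, the set found on the rounded instance can overshoot W by up to nb = ε · max wi, and a final repair step is omitted here.

    import math

    def exact_best_sum(weights, target):
        # Exact DP idea: the largest achievable subset sum <= target.
        reachable = {0}
        for w in weights:
            reachable |= {s + w for s in reachable if s + w <= target}
        return max(reachable)

    def approx_subset_sum(weights, W, eps):
        n = len(weights)
        b = eps * max(weights) / n                       # rounding parameter
        scaled = [math.ceil(w / b) for w in weights]     # the w~_i, in units of b
        target = math.floor(W / b) + n                   # slack keeps the true optimum feasible
        return b * exact_best_sum(scaled, target)        # approximate value, original units

    print(approx_subset_sum([100, 100, 100, 100], 250, eps=0.2))   # 200.0

Smaller ε means more digits survive the rounding, a larger DP table, and a better guarantee – exactly the accuracy/runtime trade-off described above.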

• Quantum Computing: the hope is that it offers massive parallelism for free. Huge result (Shor, 1994) – efficient factoring on a quantum computer. Waterloo is, by the way, the place to be for quantum computing: in Physics, CS, and C&O we have experts on the subject. To read a tiny bit more on Quantum Computing, see [DPV].

24.2 P vs. NP