CS 341 Course Package — Chris Erbach

Contents

1 Sep 9th, 2008
  1.1 Welcome to CS 341: Algorithms, Fall 2008
  1.2 Marking Scheme
  1.3 Course Outline
  1.4 A Case Study (Convex Hull)
      1.4.1 Algorithm
2 Sep 11th, 2008
3 Sep 16th, 2008
  3.1 Example: Making change
  3.2 Example: Scheduling time
  3.3 Example: Knapsack problem
4 Sep 18, 2008: MISSING
5 Sep 23, 2008: Divide and Conquer
  5.1 Solving Recurrence Relations
      5.1.1 "Unrolling" a recurrence
      5.1.2 Guess an answer, prove by induction
      5.1.3 Changing Variables
      5.1.4 Master Theorem
6 Sep 25, 2008
  6.1 Assignment Info
  6.2 Divide & Conquer Algorithms
      6.2.1 Counting Inversions
      6.2.2 Multiplying Large Numbers
7 Sep 30, 2008
  7.1 D&C: Multiplying Matrices
  7.2 D&C: Closest pair of points
  7.3 Hidden Surface Removal
8 Oct 2nd, 2008
  8.1 Dynamic Programming
  8.2 Second example: optimum binary search trees
9 Oct 7th, 2008
  9.1 Example 2: Minimum Weight Triangulation
10 Oct 9th, 2008
  10.1 Dynamic Programming
  10.2 Certain types of subproblems
  10.3 Memoization
11 Oct 14th, 2008
  11.1 Graph Algorithms
  11.2 Minimum Spanning Trees
12 Oct 16th, 2008
  12.1 Graph Algorithms
      12.1.1 Prim's Algorithm
  12.2 Shortest Paths
13 Oct 21, 2008
  13.1 All Pairs Shortest Path
      13.1.1 Floyd-Warshall Algorithm
14 Oct 23, 2008
  14.1 Dijkstra's Algorithm
  14.2 Connectivity in Graphs
      14.2.1 Finding 2-connected components
15 Oct 28th, 2008
  15.1 Backtracking and Branch/Bound
  15.2 Branch-and-Bound
      15.2.1 Branch and Bound TSP Algorithm
16 Oct 30th, 2008
  16.1 Recall
  16.2 Lower Bounds
      16.2.1 Basic Techniques
      16.2.2 State-of-the-Art in Lower Bounds
  16.3 Polynomial Time
  16.4 Reductions
17 Nov 4th, 2008
  17.1 Decision Problems
  17.2 P or NP?
  17.3 Properties
18 Nov 6th, 2008
  18.1 Recall
  18.2 NP-Complete
      18.2.1 Circuit Satisfiability
      18.2.2 3-SAT
19 Nov 11th, 2008
  19.1 Satisfiability – no restricted form
  19.2 Independent Set
  19.3 Vertex Cover
  19.4 Set-Cover Problem
  19.5 Road map of NP-Completeness
  19.6 Hamiltonian Cycle
20 Nov 13th, 2008
  20.1 Undirected Hamiltonian Cycle
  20.2 TSP is NP-complete
  20.3 Subset-Sum is NP-Complete
21 Nov 18th, 2008
  21.1 Major Open Questions
  21.2 Undecidability
      21.2.1 Examples
22 Nov 20th, 2008
  22.1 Undecidability
  22.2 History of Undecidability
23 Nov 25th, 2008
  23.1 Undecidability
  23.2 Other Undecidable Problems
      23.2.1 Halt-No-Input or Halt-on-Empty
      23.2.2 Program Verification
      23.2.3 Other Problems (no proofs)
24 Nov 27th, 2008
  24.1 What to do with NP-complete problems
  24.2 P vs. NP
1 Sep 9th, 2008
1.1 Welcome to CS 341: Algorithms, Fall 2008
I’m Anna Lubiw, I’ve been in this department/school quite some time. This term I’m teaching both sections of
CS 341. I find the earlier lecture is better though, which may be counterintuitive.
There are fewer assignments this term. There are also fewer grad TAs, so the assignments may be shorter (but quite likely, not any easier!)
Textbook is CLRS. $140 in the bookstore, on reserve in the library.
1.2 Marking Scheme
25% Midterm
40% Final exam
35% Assignments
We have due dates for assignments already (see the website.) Unlike in 2nd year courses where ISG keeps everything
coordinated, in third year we’re on our own.
1.3 Course Outline
Where does this word come from? From al-Khwārizmī, a mathematician/scientist (not sure what to call him back then) writing around 800 AD. Originally it meant his algorithms for arithmetic.
In this course, we’re looking for the best algorithmic solutions to problems. Several aspects:
1. How to design algorithms
i.e. what shortest-path algorithm to use for street-level walking directions.
(a) Greedy algorithms
(b) Divide and Conquer
(c) Dynamic Programming
(d) Reductions
2. Basic Algorithms (often domain specific)
Anyone educated in algorithms needs to have a general repertoire of algorithms to apply in solving new
problems
(a) Sorting (from first year)
(b) String Matching (CS 240)
3. How to analyze algorithms
i.e. do we run it on examples, or try a more theoretical approach
(a) How good is an algorithm?
(b) Time, space, goodness (of an approximation)
4. You are expected to know
(a) O notation, worst case/avg. case
(b) Models of computation
5. Lower Bounds
This is not a course on complexity theory, which is where people really get excited about lower bounds, but
you need to know something about this.
(a) Do we have the best algorithm?
(b) Models of computation become crucial here.
(c) NP-completeness (how many of you have secret ambitions to solve this? I started off wanting to solve
it, before it was known it was so hard...)
1.4 A Case Study (Convex Hull)
To bound a set of points in 2D space, we can find the max/min X,Y values and make a box that contains all the
points. A convex hull is the smallest convex shape containing the points (think the smallest set of points that we
can connect in a ring that contains all the other points.) Analogy: putting an elastic band around the points, or
in three dimensions putting shrink-wrap around the points.
Why? This is a basic computational geometry problem. The convex hull gives an approximation to the shape of
a set of points better than a minimum bounding box. Arises when digitizing sculptures in 3D, or maybe while
doing OCR character recognition in 2D.
1.4.1 Algorithm
Definition (better from an algorithmic point of view)
The convex hull is a polygon whose sides lie on lines that pass through at least two of the points and have all the remaining points on one side.
A straightforward algorithm (sometimes called a brute-force algorithm, though that gives them a bad name, because oftentimes the straightforward algorithm is the way to go): for every pair of points r, s, find the line through r and s, and if all other points lie on one side only then the segment rs is part of the convex hull.
Time for n points: O(n^3).
Aside: even here there are good and bad ways to "see which side points are on." Computing slopes is actually a bad way to do this. Exercise: for r, s, and a query point p, how do you decide in the fewest steps, avoiding underflow/overflow/division?
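One standard answer to this exercise (a minimal sketch, assuming exact integer coordinates; not the lecture's code) is the sign of a cross product, which needs only subtraction and multiplication:

    # Sign of the cross product (s - r) x (p - r):
    # > 0 if p is left of the directed line r->s, < 0 if right, 0 if collinear.
    def side(r, s, p):
        return (s[0] - r[0]) * (p[1] - r[1]) - (s[1] - r[1]) * (p[0] - r[0])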
Improvement Given one hull edge ℓ through r and s, there is a natural "next" edge ℓ′: rotate a line about s until it hits the next point t.
[Figure: edge ℓ through r, s rotating about s to the next edge ℓ′ through s, t.]
t is an "extreme point" (minimum angle α). Finding it is like finding a max (or min) – O(n) per edge. Time for n points: O(n^2).
Actually, if h = the number of points on the convex hull, the algorithm takes O(nh).
Repeatedly finding a min/max (which should remind you of sorting.)
Example Sort the points by x coordinate, and then find the ”upper convex hull” and ”lower convex hull” (each of
which comes in sorted order.)
The sorting will cost O(n log n) but the second step is just linear. We don't quite have a linear algorithm here, but this is much better. Process points from left to right, adding each point and deciding whether the chain needs to go "up" or "down" at each step.
This is a case of using a reduction (which we will study a lot in this course).
Time for n points: O(n log n).
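A hedged Python sketch of this sort-then-scan approach (Andrew's "monotone chain" formulation – my rendering, not necessarily the lecture's exact algorithm):

    # Returns hull vertices in counterclockwise order; assumes points are tuples.
    def convex_hull(points):
        pts = sorted(set(points))                 # sort by x (then y)
        if len(pts) <= 2:
            return pts
        def cross(o, a, b):                       # the orientation test from above
            return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])
        def chain(seq):                           # build one hull chain (lower/upper)
            c = []
            for p in seq:
                while len(c) >= 2 and cross(c[-2], c[-1], p) <= 0:
                    c.pop()                       # last point is not a left turn
                c.append(p)
            return c
        lower, upper = chain(pts), chain(reversed(pts))
        return lower[:-1] + upper[:-1]            # endpoints shared; drop duplicates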
One more algorithm
It will not be better than O(n log n). Why not? We'll show this soon, but the intuition is that we'd have to sort the points somehow. In three-dimensional space you can still get O(n log n) algorithms for this, but not the same way. This one uses divide and conquer.
[Figure: two recursively computed hulls joined by an upper bridge and a lower bridge.]
1. Divide points in half by vertical line.
2. Recursively find convex hull on each side.
3. Combine by finding upper and lower bridges.
Starting from the edge e joining the maximum-x point on the left to the minimum-x point on the right, "walk up" to get the upper bridge, and "walk down" to get the lower bridge.
This will be O(n) to divide, and O(n) to find the upper/lower bridges. We get the recurrence

    T(n) = 2T(n/2) + O(n)

This is the same as e.g. merge-sort. It comes out to O(n log n).
Never Any Better Finally, let's talk ever-so-slightly about whether we can beat O(n log n). In some sense, no: if we could find a convex hull faster, we could sort faster.
Technique: put the points on a parabola (or another convex curve) with the map x → (x, x^2) and compute the convex hull of these points. From the hull, recover the sorted order. This is an intuitive argument; to be rigorous, we need to specify the model of computation. We need a restricted model to say that sorting is Ω(n log n) – but we also need the power of indirect addressing. (Don't worry if that seems fuzzy. The take-home message is that to be precise we need to spend more time on models of computation.)
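As a hedged illustration of this reduction, reusing the convex_hull sketch above (an assumed helper; inputs must be distinct):

    def sort_via_hull(xs):
        hull = convex_hull([(x, x * x) for x in xs])  # points on a parabola
        return [p[0] for p in hull]                   # hull order = sorted order

    print(sort_via_hull([3, 1, 2]))  # [1, 2, 3]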
Measuring in terms of n, the input size, and h, the output size: we saw an O(n log n) algorithm and an O(nh) algorithm. Which is better? Well, it depends on whether h > log n or not.
One paper, called "The ultimate convex hull algorithm?" (with a question mark in the name – very unusual), gave an algorithm that's O(n log h).
Challenge Look up the O(n log h) algorithm by Timothy Chan (here in SCS) and try to understand it.
2 Sep 11th, 2008
Missing.
3 Sep 16th, 2008
Assignment 1 is available online.
3.1 Example: Making change
Example: making change. Suppose you want to pay $3.47 in as few coins as possible. This takes seven coins, and I claim this is the minimum number of coins. On the assignment you must prove this is in fact true.
3.2 Example: Scheduling time
Interval scheduling, or ”activity selection.” The goal is to maximize the number of activities we can perform.
Given activities, each with an associated time interval, pick non-overlapping activities.
Greedy Approaches
• Pick the first activity – NO
• Pick the shortest activity – NO
• Pick the one with the fewest overlaps – NO
• Pick the one that ends earliest – YES
We can write the algorithm as

    sort activities by finish time
    A <- empty set
    for i = 1..n
        if activity i doesn't overlap any activities in A
            A <- A union { i }
    end

This is an O(n log n) algorithm: O(n log n) to sort, then O(n) for the scan, since we only ever need to compare against the last activity added to A.
Correctness Proof
There are three approaches to proving correctness of greedy algorithms:
• Greedy does at least as well at each step ("greedy stays ahead").
• Exchange: suppose there is an optimal solution; show it can be transformed into the greedy solution.
• Matroids (a formalization of when greedy approaches work) (in C&O)
Theorem This algorithm returns a maximum-size set A of non-overlapping intervals.
Proof Let A = {a_1, . . . , a_k}, ordered by finish time (i.e. in the order the greedy algorithm chooses them). Let B = {b_1, . . . , b_l} be any other set of non-overlapping intervals, ordered by finish time.
We want to show l ≤ k. Suppose that l > k; we show the greedy algorithm would not have stopped at k.
Claim a_1, . . . , a_i, b_{i+1}, . . . , b_l is also a solution.
Proof By induction on i. Base case i = 0: b_1, b_2, . . . , b_l is a solution. Inductive case: a_1, . . . , a_{i−1}, b_i, . . . , b_l is a solution; prove that a_1, . . . , a_i, b_{i+1}, . . . , b_l is a solution, i.e. we're swapping b_i out and a_i in.
Well, b_i does not overlap a_{i−1}, by assumption. So when we chose a_i, b_i was a candidate – and we chose a_i. So finish(a_i) ≤ finish(b_i), hence a_i doesn't overlap b_{i+1}, . . . , b_l, so the swap is OK.
Exercise: go through the picture.
That proves the claim. To prove the theorem: if l > k then by the claim a_1, . . . , a_k, b_{k+1}, . . . , b_l is a solution. But then the greedy algorithm would not have stopped at a_k.
Therefore l ≤ k and greedy gives an optimal solution.
3.3 Example: Knapsack problem
We have items 1, . . . , n. Item i has weight w_i and value v_i. There is a weight limit W for the knapsack. Pick items of total weight ≤ W maximizing the sum of the values.
There are two versions:
• 0-1 Knapsack: the items are indivisible (e.g. tent)
• Fractional: items are divisible (e.g. oatmeal)
We’ll look at 0-1 Knapsack later (since it’s harder) (and when we study dynamic programming)
So imagine we have a table of items:

    Item  Weight w_i  Value v_i
    1     6           12
    2     4           7
    3     4           6
W = 8. Greedy by v_i/w_i. For the 0-1 knapsack:
• Greedy picks item 1 – value 12
• The optimal solution is items 2 and 3 – weight 8, value 13
For the fractional case:
• Take all of item 1 and half of item 2 – value 12 + 3.5 = 15.5
Greedy Algorithm
Order items 1, . . . , n by decreasing v_i/w_i. Let x_i be the amount (weight) of item i that we choose.

    free-w <- W
    for i = 1..n
        x_i <- min{ w_i, free-w }
        free-w <- free-w - x_i
    end
Σ x_i = W (assuming W < Σ w_i).
The value we get is

    Σ_{i=1}^{n} (v_i / w_i) · x_i

Note: the solution looks like a 0-1 solution. The only item we take fractionally is the last one chosen.
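A minimal runnable sketch of this greedy rule (my own rendering, using the item data from the table above):

    def fractional_knapsack(items, W):
        """items: list of (weight, value); returns the total value taken."""
        items = sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True)
        free, value = W, 0.0
        for w, v in items:
            take = min(w, free)          # x_i <- min{ w_i, free-w }
            value += v * (take / w)
            free -= take
        return value

    print(fractional_knapsack([(6, 12), (4, 7), (4, 6)], 8))  # 15.5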
Claim The greedy algorithm gives an optimal solution to the fractional knapsack problem.
Proof Greedy uses x_1, . . . , x_n and the optimal uses y_1, . . . , y_n. Let k be the minimum index with x_k ≠ y_k. Then y_k < x_k (because greedy took the maximum possible x_k). Since Σ x_i = Σ y_i = W, there exists an index l > k such that y_l > x_l. Idea: swap excess weight from item l to item k:

    y′_k ← y_k + ∆  and  y′_l ← y_l − ∆,  where ∆ = min{ y_l, w_k − y_k }

both terms of which are greater than zero. The sum of the weights is unchanged, Σ y′_i = W, and the change in value is

    ∆(v_k/w_k) − ∆(v_l/w_l) = ∆(v_k/w_k − v_l/w_l) ≥ 0

since v_k/w_k ≥ v_l/w_l because k < l (items are in decreasing order of value per weight).
Thus y′ is at least as good a solution, and it agrees with greedy in one more index. Repeating the swap, the assumption that the optimum is better than greedy fails.
4 Sep 18, 2008: MISSING
5 Sep 23, 2008: Divide and Conquer
I started with Greedy because it’s fun to get to some interesting algorithms right away. Divide and conquer however
is likely the one you’re most familiar with. Sorting and searching are often divide-and-conquer algorithms.
The steps are:
• Divide – break problem into smaller subproblems
• Recurse – solve smaller sets of problems
• Conquer/Combine – ”put together” solutions from smaller subproblems
Some examples are:
• Binary search
  – Divide: Pick the middle item
  – Recurse: Search in one side – only one subproblem, of size n/2
  – Conquer: No work
  – Recurrence relation: T(n) = T(n/2) + 1, or more formally T(n) = max{ T(⌊n/2⌋), T(⌈n/2⌉) } + 1
  – Time: T(n) ∈ O(log n)
• Merge sort
  – Divide: basically nothing
  – Recurse: Two subproblems of size n/2
  – Conquer: n − 1 comparisons
  – Recurrence: T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + (n − 1), with T(1) = 0 comparisons
  – Time: T(n) ∈ O(n log n)
5.1 Solving Recurrence Relations
Three approaches, all of which are in CLRS.
5.1.1 "Unrolling" a recurrence
Use

    T(n) = 2T(n/2) + n − 1   for n even
    T(1) = 0

So for n a power of 2,

    T(n) = 2T(n/2) + n − 1
         = 2(2T(n/4) + n/2 − 1) + n − 1
         = 4T(n/4) + 2n − 3
         ...
         = 2^i T(n/2^i) + in − (2^i − 1)    [the 2^i − 1 is Σ_{j=0}^{i−1} 2^j]

We want n/2^k = 1, i.e. 2^k = n, k = log n:

    T(n) = 2^k T(n/2^k) + kn − (2^k − 1)
         = n T(1) + n log n − n + 1
         = n log n − n + 1 ∈ O(n log n)
If our goal is just to say that mergesort takes O(n log n) for all n (as opposed to exactly computing T(n)), then we can simply add that T(n) ≤ T(n′), where n′ = the smallest power of 2 bigger than n.
If we really did want to compute T(n) exactly, then

    T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + n − 1
    T(1) = 0

and the exact solution is

    T(n) = n⌈log n⌉ − 2^⌈log n⌉ + 1
5.1.2 Guess an answer, prove by induction
Again for the mergesort recurrence, prove that T(n) ∈ O(n log n).
Be careful: prove by induction that T(n) ≤ c n log n for some constant c. Often you don't know c until you're working on the problem.
A good trick for avoiding ⌊·⌋, ⌈·⌉ is to deal separately with n even and n odd. For n even,

    T(n) = 2T(n/2) + n − 1 ≤ 2(c (n/2) log(n/2)) + n − 1    (by induction)
         = cn(log n − log 2) + n − 1
         = cn log n − cn + n − 1
         ≤ cn log n    if c ≥ 1

I'll leave the details as an exercise (we need a base case, and need to do the case of n odd) for those of you for whom this is not entirely intuitive.
Another example:

    T(n) = 2T(n/2) + n

Claim T(n) ∈ O(n).
"Prove" T(n) ≤ cn for some constant c. Assume by inductive hypothesis that T(n′) ≤ cn′ for n′ < n. Inductive step:

    T(n) = 2T(n/2) + n ≤ 2c(n/2) + n = (c + 1)n

Wait – constants aren't supposed to grow like c + 1 above. This proof is fallacious. Please do not make this kind of mistake on your assignments.
Example 2

    T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 1
    T(1) = 1

Let's guess T(n) ∈ O(n), and prove by induction that T(n) ≤ cn for some c. Induction step:

    T(n) = c⌈n/2⌉ + c⌊n/2⌋ + 1 = cn + 1    – we've got trouble from that +1

Let's try unrolling for n a power of 2:

    T(n) = 2T(n/2) + 1
         = 4T(n/4) + 2 + 1
         ...
         = 2^k T(n/2^k) + Σ_{i=0}^{k−1} 2^i    (n = 2^k)
         = n T(1) + 2^k − 1
         = 2n − 1

So try proving by induction that T(n) ≤ cn − 1. In that case we have

    T(n) = (c⌈n/2⌉ − 1) + (c⌊n/2⌋ − 1) + 1 = cn − 1

This matches perfectly.
Message: Sometimes we need to strengthen the inductive hypothesis and lower the bound.
5.1.3 Changing Variables
Suppose we have a mystery algorithm with recurrence

    T(n) = 2T(⌊√n⌋) + log n    (and ignore the ⌊·⌋)

Substitute m = log n, n = 2^m, and we have

    T(2^m) = 2T(2^{m/2}) + m

Let S(m) = T(2^m); then S(m) = 2S(m/2) + m. We can say

    S(m) ∈ O(m log m)
    T(2^m) ∈ O(m log m)
    T(n) ∈ O(log n · log log n)
5.1.4 Master Theorem
From MATH 239, linear recurrences of the form T(n) − a_1 T(n − 1) − a_2 T(n − 2) − · · · = 0 are "homogeneous" because the right-hand side is zero. That never happens in algorithms (because we always have some work f(n) to do!)
We need

    T(n) = aT(n/b) + c n^k

The more general case, where c n^k is replaced by an arbitrary f(n), is handled in the textbook. We'll first look at k = 1:

    T(n) = aT(n/b) + cn
Results (exact) are:

    a = b:  T(n) ∈ Θ(n log n)
    a < b:  T(n) ∈ Θ(n)
    a > b:  T(n) ∈ Θ(n^{log_b a})    – this final term dominates n log n
Theorem If T(n) = aT(n/b) + cn^k with a ≥ 1, b > 1, c > 0, k ≥ 1, then

    T(n) ∈ Θ(n^k)          if a < b^k
    T(n) ∈ Θ(n^k log n)    if a = b^k
    T(n) ∈ Θ(n^{log_b a})  if a > b^k
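A small sketch that mechanically applies the three cases (my own illustration for checking examples, not part of the lecture):

    import math

    # Classify T(n) = a T(n/b) + c n^k by the three master-theorem cases above.
    def master(a, b, k):
        if a < b ** k:
            return f"Theta(n^{k})"
        if a == b ** k:
            return f"Theta(n^{k} log n)"
        return f"Theta(n^{math.log(a, b):.3f})"   # exponent log_b a

    print(master(2, 2, 1))  # mergesort: Theta(n^1 log n)
    print(master(4, 2, 1))  # four-subproblem multiply: Theta(n^2.000)
    print(master(3, 2, 1))  # three-subproblem multiply: Theta(n^1.585)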
We're not going to do a rigorous proof, but we'll do enough to give you some intuition. We'll use unrolling; the rigorous way is through induction.

    T(n) = aT(n/b) + cn^k
         = a(aT(n/b^2) + c(n/b)^k) + cn^k
         = a^2 T(n/b^2) + ac(n/b)^k + cn^k
         = a^3 T(n/b^3) + a^2 c(n/b^2)^k + ac(n/b)^k + cn^k
         ...
         = a^{log_b n} T(1) + Σ_{i=0}^{log_b n − 1} a^i c (n/b^i)^k
         = n^{log_b a} T(1) + c n^k Σ_{i=0}^{log_b n − 1} (a/b^k)^i

using n = b^t, t = log_b n, and a^{log_b n} = n^{log_b a}. It comes out exactly like that sum in your assignment.
Just to wrap up: if a < b^k, i.e. log_b a < k, the sum is bounded by a constant and n^k dominates. If a = b^k, the sum is log_b n and we get Θ(n^k log n). The third case is a > b^k, and then n^{log_b a} dominates.
6 Sep 25, 2008
6.1 Assignment Info
Assignment 1 is due Friday at 5PM in the assignment boxes.
Q5. US = UC.
Q2a. In CS 240 we learned to take the log of n + 1. "How is the number of bits going to grow" is a much nicer angle. There is a reason that √n and ⌊√n⌋ are both in the list.
Q3. (e), (f): See the newsgroup and website. D(i, j, l) is the shortest path length from i to j using at most l edges, but the formula given is for exactly l edges. Either assumption is fine; state clearly which one you are using. The same issue arises in (e), but if you use "exactly" you may find that you don't save anything. Use "at most" if you haven't started.
So we aren’t planning on marking every question. We will provide solutions for everything, however. The unmarked
questions are likely to appear on midterms or finals.
Q4. If you want examples of coin systems, go look around the Internet. Don’t get your proof from the Internet,
but examples of systems is fine.
Q5. How efficient? Well, you probably have to sort, so you probably won't get better than O(n log n). Try to beat O(n^2).
Q4, Q5, Q6 each require a counterexample and a proof.
Please just come to office hours instead of asking too many questions over e-mail.
6.2 Divide & Conquer Algorithms
6.2.1 Counting Inversions
Comparing two people’s rankings of n items – books, music, etc. Useful for web sites giving recommendations
based on similar preferences.
Suppose my ranking is BDCA, and yours is ADBC from best to worst. We’d like a measure of how similar these
lists are. We can count inversions: on how many pairs do we disagree? Here there are four pairs where we disagree:
BD, BA, DA, CA and two where we agree: BC, DC.
Equivalently, we can say: given a_1, a_2, . . . , a_n, a permutation of 1 . . . n, count the number of inversions, i.e. the number of pairs a_i, a_j with i < j but a_i > a_j.
Brute Force: Check all C(n, 2) pairs, taking O(n^2).
Divide & Conquer: Divide the list in half, with m = ⌊n/2⌋:

    A = a_1 . . . a_m        B = a_{m+1} . . . a_n

Recursively count

    r_A = # inversions in A
    r_B = # inversions in B

The final answer is r_A + r_B + r, where r = the number of inversions a_i, a_j with i ≤ m, j ≥ m + 1 and a_i > a_j.
For each j = m + 1 . . . n, let r_j = # of such pairs involving a_j. Then r = Σ_{j=m+1}^{n} r_j.
Strengthen the recursion – sort the list, too. If A and B are sorted, we can compute the r_j's during the merge.
    Sort-and-Count(L): returns sorted L and # of inversions
        split L into halves A and B
        (r_A, A) <- Sort-and-Count(A)
        (r_B, B) <- Sort-and-Count(B)
        r <- 0
        merge A and B
            when an element is moved from B to the output list
                r <- r + # elements left in A
        end
        return (r_A + r_B + r, merged list)
Runtime:

    T(n) = 2T(n/2) + O(n)

Since it's the same recurrence as mergesort, we get O(n log n). Can we do better?
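A hedged Python rendering of Sort-and-Count (my translation of the pseudocode above):

    # Returns (#inversions, sorted list).
    def sort_and_count(L):
        if len(L) <= 1:
            return 0, L
        m = len(L) // 2
        rA, A = sort_and_count(L[:m])
        rB, B = sort_and_count(L[m:])
        r, out, i, j = 0, [], 0, 0
        while i < len(A) and j < len(B):
            if A[i] <= B[j]:
                out.append(A[i]); i += 1
            else:                       # element moved from B to the output:
                out.append(B[j]); j += 1
                r += len(A) - i         # it inverts with everything left in A
        out += A[i:] + B[j:]
        return rA + rB + r, out

    print(sort_and_count([5, 3, 4, 1, 6, 2])[0])  # 9 inversions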
6.2.2 Multiplying Large Numbers
The school method:

            981
         × 1234
        -------
           3924
          2943
         1962
        981
        -------
        1210554
O(n^2) for two n-digit numbers (one step is a × or + of two digits).
There is a faster way using divide-and-conquer. First pad 981 to 0981 and split each number into halves: 0981 = (09, 81), 1234 = (12, 34). Then calculate:

    09 × 12, shifted 4 digits → 1080000
    09 × 34, shifted 2 digits →   30600
    81 × 12, shifted 2 digits →   97200
    81 × 34, shifted 0 digits →    2754
    sum                        → 1210554
The runtime here is

    T(n) = 4T(n/2) + O(n)

Apply the Master Method.
    T(n) = aT(n/b) + cn^k

Here a = 4, b = 2, k = 1. Compare a with b^k: a = 4 > b^k = 2, so the runtime is Θ(n^{log_b a}) = Θ(n^{log_2 4}) = Θ(n^2).
So far we have not made progress!
We can get by with fewer than four multiplications.
    (10^2 w + x)(10^2 y + z) = 10^4 wy + 10^2 (wz + xy) + xz

Note we need the sum wz + xy, not the two terms individually. Look at

    (w + x)(y + z) = wy + wz + xy + xz

We know wy and xz but we want wz + xy. This leads to:

    p = wy = 09 × 12 = 108
    q = xz = 81 × 34 = 2754
    r = (w + x)(y + z) = 90 × 46 = 4140    [90 is 09 + 81; 46 is 12 + 34]

Answer: 10^4 p + 10^2 (r − p − q) + q, with r − p − q = 1278:

    108____
     1278__
       2754
    -------
    1210554
We can apply this as the basis for a recursive algorithm. We'll get

    T(n) = 3T(n/2) + O(n)

From the master theorem, now we have a = 3, b = 2, k = 1, and since a > b^k,

    Θ(n^{log_b a}) = Θ(n^{log_2 3}) ≈ Θ(n^{1.585})
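A hedged sketch of the resulting recursive algorithm (this trick is Karatsuba's algorithm), working in base 10 for clarity:

    def karatsuba(u, v):
        if u < 10 or v < 10:                 # recursion bottom: single digits
            return u * v
        half = max(len(str(u)), len(str(v))) // 2
        shift = 10 ** half
        w, x = divmod(u, shift)              # u = 10^half * w + x
        y, z = divmod(v, shift)              # v = 10^half * y + z
        p = karatsuba(w, y)                  # wy
        q = karatsuba(x, z)                  # xz
        r = karatsuba(w + x, y + z)          # wy + wz + xy + xz
        return p * shift**2 + (r - p - q) * shift + q

    print(karatsuba(981, 1234))  # 1210554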
Practical Issues
• What if n is odd?
• What about two numbers with different digit counts?
• How small do you let the recursion get? (Answer: hardware word)
• What about different bases?
• When is this algorithm useful? (For about 1,000 digits or fewer, don’t use it [BB])
– Schönhage–Strassen is better for very large numbers; it runs in O(n log n · log log n)
7 Sep 30, 2008
Assignment 2 is available.
7.1 D&C: Multiplying Matrices
Multiplying two square matrices. The basic method takes O(n^3). (In some sense n^2 is the best you can do, since you need to write n^2 numbers in the result!)
Basic D&C
Divide each matrix into four n/2 × n/2 blocks:

    [ A B ] [ E F ]   [ I J ]
    [ C D ] [ G H ] = [ K L ]

with I = AE + BG, etc. Each of the four output blocks takes 2 subproblems, plus O(n^2) additions:

    T(n) = 8T(n/2) + O(n^2)

By the master theorem, a = 8, b = 2, k = 2; a = 8 > b^k = 4 (the case where the recursive work dominates), so T(n) ∈ Θ(n^{log_b a}) = Θ(n^3).
Strassen's Algorithm shows how to get by with just seven (a = 7) subproblems. We're not discussing it here, but if you're curious it's in the textbook. This gives

    T(n) = 7T(n/2) + O(n^2)

which is Θ(n^{log_2 7}) ≈ O(n^{2.81}). There are more complicated algorithms that get even better exponents (only for very large n, however).
7.2 D&C: Closest pair of points
Divide and Conquer is very useful for geometric problems. For example, given n points in a plane, select the
closest two by Euclidean distance. (There are other measures, including the ”Manhattan distance” which is the
distance assuming you can’t cross city blocks.)
Generally we assume that arithmetic is unit cost; for this problem we don't need that assumption.
In one dimension, consider {10, 5, 17, 100}. How would we do this? Sort and compare adjacent numbers.
In the plane, we can use brute force, and that's O(n^2). What about
• sorting by position on one axis? Nope!
What's the way?
(1) Divide the points into left and right halves Q and R at the median x coordinate. It is most efficient to sort once by x coordinate up front; then we can find the dividing line L in O(1) time.
(2) Recurse on Q and R:

    δ = min( closest pair distance in Q, closest pair distance in R )

The solution is the minimum of δ and the closest pair crossing L. We need to find pairs q ∈ Q, r ∈ R with d(q, r) < δ.
Claim If q ∈ Q, r ∈ R and d(q, r) < δ, then d(q, L) < δ and d(r, L) < δ (i.e. q and r lie in the strip of width 2δ centred on L).
Proof Otherwise – suppose q lies outside its strip – then d(q, r) ≥ (horizontal distance from q to r) ≥ δ.
Now let S be the points in the strip of width 2δ. We can restrict our search to S. But S can be all the points! Our hope is that if we sort S by y coordinate, then any pair q ∈ Q, r ∈ R with d(q, r) < δ are near each other in sorted order.
Claim A δ × δ square on one side of L can contain at most 4 points.
Because every two points in the square have distance ≥ δ, we can fit four points only at the four corners; therefore you can't fit five.
Claim If S is sorted by y coordinate, and q ∈ Q and r ∈ R with d(q, r) < δ, then q and r are at most seven positions apart in sorted order.
Total algorithm:
– Sort by x
– Sort by y
– T(n) = 2T(n/2) + O(n) ∈ O(n log n)
More general problems: given n points, find the closest neighbour of each one. This can be done in O(n log n) (not obvious).
• Voronoi diagrams
• Delaunay triangulations
Used in mesh generation.
7.3 Hidden Surface Removal
(a baby version of it, at least.) Find ”upper envelope” of a set of n lines in O(nlog n) by divide & conquer.
8 Oct 2nd, 2008
8.1 Dynamic Programming
Weighted Interval Scheduling. Recall, interval scheduling aka activity selection aka packing of intervals. Pick the
max. number of disjoint intervals.
Generalization – each interval i has a weight w(i). Pick disjoint intervals to maximize the sum of the weights.
What if we try to use Greedy?
• Pick maximum weight – fails
An even more general problem: given a graph G = (V, E) with weights on vertices, pick a set of vertices, no two joined by an edge, to maximize the sum of weights. Model intervals this way by making G with a vertex for each interval and an edge when two intervals overlap.
A general idea: for interval (or vertex) i, either we use it or we don't. Let OPT(I) = a max-weight non-overlapping subset of the intervals I, and W-OPT(I) = the weight of OPT(I), i.e. the sum of the weights of the intervals in OPT(I).
If we don't use i, then OPT(I) = OPT(I \ {i}).
If we use i, then OPT(I) = {i} ∪ OPT(I′), where I′ = the set of intervals that don't overlap i.
This leads to a recursive algorithm:

    W-OPT(I) = max{ W-OPT(I \ {i}), w(i) + W-OPT(I′) }

with T(n) = 2T(n − 1) + O(1). But this is exponential time: essentially we are trying all possible subsets of n items – all 2^n of them.
For intervals (but not for the general graph problem) we can do better. Order the intervals 1, . . . , n by their right endpoint.
If we choose interval n, then I′ = all intervals disjoint from n, which has the form 1, 2, . . . , j for some j:

    W-OPT(1..n) = max( W-OPT(1..n−1), w(n) + W-OPT(1..p(n)) )

where p(n) = the max index j such that interval j doesn't overlap n. More generally,
p(i) = the max index j < i such that interval j doesn't overlap i, and

    W-OPT(1..i) = max( W-OPT(1..i−1), w(i) + W-OPT(1..p(i)) )

This leads to an O(n) time algorithm (after sorting). Note: don't use recursion blindly – the same subproblem may be solved many times in your program.
Solution Use memoized recursion (see text.) OR, use an iterative approach.
Let’s look at an algorithm using the second approach.
    notation: M[i] = W-OPT(1..i)
    M[0] = 0
    for i = 1..n
        M[i] = max{ M[i-1], w(i) + M[p(i)] }
    end
Runtime is O(n). What about computing p(i) for i = 1..n?
Sorting by right endpoint is O(n log n). To find the p(i), sort by left endpoint as well. Then – exercise: in O(n) time find all p(i), i = 1..n.
So far this algorithm finds W-OPT but not OPT. (i.e. the weight, not the actual set of items.)
One possibility: enhance the above loop to keep the set OPT(1..i). The danger is that storing n sets of size up to n takes n^2 space.
One solution: first compute M as above. Then call OPT(n).
    recursive fun OPT(i)
        if i = 0 then return {}
        if M[i-1] >= w(i) + M[p(i)]
            then return OPT(i-1)
        else
            return { i } union OPT(p(i))
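Putting the pieces together, a hedged Python sketch of the whole pipeline – sort by finish time, compute p(i) by binary search, fill M, recover the set. The interval format (start, finish, weight) is my assumption:

    import bisect

    def weighted_interval_scheduling(intervals):
        ivs = sorted(intervals, key=lambda t: t[1])      # by finish time
        finishes = [f for _, f, _ in ivs]
        n = len(ivs)
        # p[i] = number of intervals finishing at or before the start of interval i
        p = [bisect.bisect_right(finishes, ivs[i][0]) for i in range(n)]
        M = [0] * (n + 1)
        for i in range(1, n + 1):
            M[i] = max(M[i - 1], ivs[i - 1][2] + M[p[i - 1]])
        chosen, i = [], n
        while i > 0:                                      # trace back through M
            if M[i - 1] >= ivs[i - 1][2] + M[p[i - 1]]:
                i -= 1                                    # interval i excluded
            else:
                chosen.append(ivs[i - 1]); i = p[i - 1]   # interval i included
        return M[n], chosen[::-1]

    # (6, [(0, 3, 2), (4, 7, 4)])
    print(weighted_interval_scheduling([(0, 3, 2), (2, 5, 4), (4, 7, 4)]))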
8.2 Second example: optimum binary search trees
Store values 1, . . . , n in the leaves of a binary tree (in order). Given the probability p_i of searching for i, build a binary search tree minimizing the expected search cost

    Σ_{i=1}^{n} p_i · depth(i)
Note: In CS 240 you did dynamic binary search trees – insert, delete, and rebalancing to control depth.
This is different in that we have items and probabilities ahead of time.
The difference from Huffman coding (a similar problem) is that for Huffman codes, left-to-right order of leaves is
free.
The heart of the dynamic programming for the optimum binary search tree: try all possible splits into 1..k and k+1..n.
Subproblem: ∀i, j find the optimum tree for i, i + 1, . . . , j:

    M[i, j] = min_{k=i..j−1} { M[i, k] + M[k + 1, j] } + Σ_{t=i}^{j} p_t

(each node is one level deeper in the combined tree, which is what the Σ p_t term accounts for).
Exercise: work this out.
    for i=1..n
        M[i,i] = p_i
    for r=1..n-1
        for i = 1..n-r
            -- solve for M[i, i+r]
            best <- M[i,i] + M[i+1, i+r]
            for k=i+1..i+r-1
                temp <- M[i,k] + M[k+1, i+r]
                if temp < best, best <- temp
            end
            M[i,i+r] <- best + sum_{t=i}^{i+r} p_t
    (better: precompute prefix sums P[j] = sum_{t=1}^{j} p_t, then use P[i+r] - P[i-1])
Runtime? O(n^3).
9 Oct 7th, 2008
Last day, we looked at weighted interval scheduling.
Today, we’ll look at matrix chain multiplication.
The problem is to compute the product of n matrices M_1 M_2 · · · M_n, where M_i is a d_{i−1} × d_i matrix.
What is the best order in which to do multiplications?
Think about this in terms of parenthesizing the matrices in your multiplication, i.e. we could calculate ((M_1 M_2)(M_3 M_4)) or (((M_1 M_2) M_3) M_4). The number of ways to build a binary tree on leaves 1 . . . n satisfies

    P_n = Σ_{i=1}^{n−1} P_i · P_{n−i}

These are the Catalan numbers, which are Ω(4^n / n^{3/2}) – exponential.
Solve subproblems:

    m_{i,j} = min # of scalar multiplications to compute M_i · M_{i+1} · · · M_j

Let m_{i,i} = 0 and, for i < j,

    m_{i,j} = min_{k=i..j−1} { m_{i,k} + m_{k+1,j} + d_{i−1} d_k d_j }

The idea is we break into the subproducts M_i..M_k and M_{k+1}..M_j, then multiply the resulting d_{i−1} × d_k and d_k × d_j matrices.
Algorithm pseudocode:

    for i=1..n
        m(i,i) = 0
    end
    for diff = 1..n-1
        for i = 1..n-diff
            j <- i + diff
            m(i,j) <- infinity
            for k = i..j-1
                temp <- m(i,k) + m(k+1,j) + d_{i-1} d_k d_j
                if temp < m(i,j)
                    m(i,j) <- temp
            end
        end
    end
The runtime is O(n^3): O(n^2) subproblems, O(n) each. The final answer is m(1, n); exercise: also record the best k for each (i, j) to recover the actual parenthesization.
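A hedged Python rendering of the recurrence (the dimensions d_0 . . . d_n are given as a list; the example values are mine):

    from math import inf

    def matrix_chain(d):
        """d: list of n+1 dimensions; matrix M_i is d[i-1] x d[i]."""
        n = len(d) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]
        k_best = [[0] * (n + 1) for _ in range(n + 1)]
        for diff in range(1, n):
            for i in range(1, n - diff + 1):
                j = i + diff
                m[i][j] = inf
                for k in range(i, j):
                    temp = m[i][k] + m[k + 1][j] + d[i - 1] * d[k] * d[j]
                    if temp < m[i][j]:
                        m[i][j], k_best[i][j] = temp, k
        return m[1][n], k_best      # cost, plus split points for the parenthesization

    cost, _ = matrix_chain([10, 30, 5, 60])   # M1: 10x30, M2: 30x5, M3: 5x60
    print(cost)                                # 4500: (M1 M2) M3 is cheaper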
9.1 Example 2: Minimum Weight Triangulation
Problem: Given a convex polygon with vertices 1 . . . n in clockwise order, divide into triangles by adding ”chords”
– segments from one vertex to another. No two chords are allowed to cross.
The goal is to minimize the lengths of chords we use. Picking the smallest chord does not work.
We will give a dynamic programming algorithm that will also work for non-convex shapes.
A more general problem is to triangulate a set of points. Find the minimum sum of lengths of edges to triangulate.
”Minimum triangulation.”
The dynamic programming approach for the convex polygon case: choosing one chord breaks down into two
subpolygons.
Notice a chord cuts off a subpolygon, and we can get by looking just at subpolygons on consecutive vertices i, i + 1, . . . , j.
The edge (1, n) lies in some triangle with third vertex k – try all choices for k. More generally, let m(i, j) = the min sum of edge lengths to triangulate the subpolygon on vertices i, i + 1, . . . , j. Then

    m(i, j) = min_{k=i+1..j−1} { m(i, k) + m(k, j) } + ℓ(i, j)

where ℓ(i, j) is the length of the chord (or edge) from i to j.
Let's count the perimeter as well: this doesn't hurt the optimization and it makes base cases easier.
Base cases:

    m(i, i + 1) = ℓ(i, i + 1)
    m(i, i + 2) = ℓ(i, i + 1) + ℓ(i + 1, i + 2) + ℓ(i, i + 2)

Note: with m(i, i + 1) defined, we don't actually need the case m(i, i + 2) – it falls out of the general formula.
Algorithm:

    initialize m(i,i+1)
    for diff = 2..n-1
        for i = 1..n-diff
            j <- i + diff
            m(i,j) <- infinity
            for k = i+1..j-1
                t <- m(i,k) + m(k,j) + l(i,j)
                if t < m(i,j) then
                    m(i,j) <- t
            end
        end
    end
Runtime O(n^3): an n × n table, O(n^2) subproblems, O(n) to solve each one.
10 Oct 9th, 2008
Midterm (Mon Oct 20th): covers material up through today and a bit of next week’s material too.
10.1 Dynamic Programming
Key idea: the bottom-up method – identify the subproblems and order them so that each relies only on previously solved subproblems.
Example (Knapsack/Subset Sum)
Recall the knapsack problem: given items 1 . . . n, where item i has weight w_i and value v_i (both ∈ N), and W, the knapsack capacity, choose a subset S ⊆ {1, . . . , n} such that Σ_{i∈S} w_i ≤ W and Σ_{i∈S} v_i is maximized.
Recall fractional versus 0-1, and that a greedy algorithm works for the fractional case. For the 0-1 knapsack, no polynomial-time algorithm is known.
Note: the coin changing problem is similar to knapsack but allows multiple copies of items.
Top-down: Item n is either IN (leaving items 1 . . . n − 1 with capacity W − w_n) or OUT (items 1 . . . n − 1 with capacity W) of S.
The subproblems are: for each i = 0 . . . n and w = 0 . . . W, find a subset S of items 1 . . . i such that Σ_{i∈S} w_i ≤ w and Σ_{i∈S} v_i is maximized.
How do we solve this subproblem? If w_i > w then OPT(i, w) ← OPT(i − 1, w) (we can't use item i); otherwise

    OPT(i, w) ← max{ OPT(i − 1, w),               (don't include i)
                     v_i + OPT(i − 1, w − w_i) }  (include i)          (*)
Pseudo-code and ordering of subproblems:

    store OPT(i,w) in a matrix M[i,w], i = 0..n, w = 0..W
    initialize M[0,w] := 0 for w = 0..W
    for i = 1..n
        for w = 0..W
            compute M[i,w] with (*)
        end
    end
    M[n,W] gives the OPT value

EX: Find the opt set S.
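A hedged Python rendering of the table-filling with (*) (using the item data from the Sep 16 example):

    def knapsack(weights, values, W):
        n = len(weights)
        M = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for w in range(W + 1):
                M[i][w] = M[i - 1][w]                    # don't include item i
                if weights[i - 1] <= w:                  # include item i if it fits
                    M[i][w] = max(M[i][w],
                                  values[i - 1] + M[i - 1][w - weights[i - 1]])
        return M[n][W]

    print(knapsack([6, 4, 4], [12, 7, 6], 8))  # 13: items 2 and 3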
[KT] has examples.
Runtime: nWc (outer loop, inner loop, constant work for (*)), i.e. O(nW).
Is this good? Does it behave like a polynomial?
That depends on the size of the input. The input is v_1, . . . , v_n, w_1, . . . , w_n, and W. Note that w_i ≤ W – else throw out item i. So the size of w_1, . . . , w_n, W is ≤ (n + 1) log W, and the input size is O(n log W). But the running time is O(nW) = O(n 2^k), where k = log W is the number of bits of W.
Intuition why this is bad: let's say we have weights .001, .002, 10, and W = 100 – scaling everything to integers blows W (and hence the table) up by a factor of 1000.
This algorithm is called "pseudo-polynomial" because the runtime is polynomial in the value of W, not in the size (number of bits) of W.
10.2 Certain types of subproblems
• Input x_1, . . . , x_n; subproblems x_1, . . . , x_i. Number of subproblems: O(n).
• Input x_1, . . . , x_n; subproblems x_i, x_{i+1}, . . . , x_j. Number of subproblems: O(n^2).
• Inputs x_1, . . . , x_n and y_1, . . . , y_m; subproblems x_1, . . . , x_i and y_1, . . . , y_j. Number of subproblems: O(nm).
• Input is a rooted tree (not necessarily binary); subproblems are rooted subtrees.
Example Longest ascending subsequence.
E.g. in 5, 3, 4, 1, 6, 2 the longest ascending subsequence is 3, 4, 6.
Given a_1, . . . , a_n, find indices i_1 < i_2 < . . . < i_j with a_{i_1} < a_{i_2} < . . . < a_{i_j}, maximizing j.
Can we use subproblems on a_1, . . . , a_i? Let l_i = the length of the largest ascending subsequence ending with a_i. The final answer is max l_i over i = 1..n.
Considering the 2nd-last item a_j (j < i, a_j < a_i):

    l_i = max{ 1 + l_j : j < i, a_j < a_i }    (and l_i = 1 if no such j exists)

This gives an O(n^2) algorithm: n subproblems, O(n) each.
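A hedged Python rendering of the l_i recurrence:

    def lis_length(a):
        n = len(a)
        l = [1] * n                       # l[i] = best length ending at a[i]
        for i in range(n):
            for j in range(i):
                if a[j] < a[i]:
                    l[i] = max(l[i], 1 + l[j])
        return max(l, default=0)

    print(lis_length([5, 3, 4, 1, 6, 2]))  # 3 (e.g. 3, 4, 6)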
10.3 Memoization
Use recursion (rather than the explicit bottom-up ordering of subproblems we have used), but store solved subproblems. The danger with plain recursion is solving the same subproblem over and over, e.g.

    T(n) = 2T(n − 1) + O(1)    – exponential!

Advantage of memoization: storing solved subproblems saves time, and if we don't need solutions to all subproblems it can beat bottom-up.
11 Oct 14th, 2008
Assignment 2 due Friday. Midterm on Mon Oct 20th, 7 PM. Alternate is during class time on Tuesday.
11.1 Graph Algorithms
A graph G = (V, E) with V a finite set of vertices and E ⊆ V × V a set of edges.
• Undirected graph: edge (u, v) = (v, u).
• Directed graph: order matters.
• No loops (i.e. no edge (u, u)).
• No multiple edges.
We will use n or |V| for the number of vertices, and m or |E| for the number of edges.
• 0 ≤ m ≤ C(n, 2) = n(n−1)/2 undirected.
• 0 ≤ m ≤ n(n−1) directed. Either way, m ∈ O(n^2).
What is a path? A sequence of vertices where every consecutive pair is joined by an edge, e.g. 3, 5, 4. A walk allows repetition of vertices and edges; a simple path does not.
If there is a walk from u to v then there is a simple path from u to v.
We say that an undirected graph G is connected if for every pair of vertices there is a path joining them. To test if a graph is connected, we can use DFS or BFS.
For directed graphs there are different notions of connectivity. A graph can be strongly connected: ∀u, v ∈ V there is a directed path from u to v.
Cycle: a path from u to u.
Tree: a graph that is connected but has no cycles. Note: a tree on n vertices has n − 1 edges.
Storing a graph:
• Adjacency matrix: A(i, j) = 1 if there is an edge from i to j, else 0.
• Adjacency list: vertices down the left, each with a list of edge destinations.
Advantages and disadvantages?
• Space: n^2 matrix, 2m + n list.
• Time to test (u, v) ∈ E: O(1) matrix; O(n), or O(log n) with sorted lists.
• Enumerating all edges: O(n^2) versus O(m + n).
We usually use adjacency lists – then we can (sometimes) get algorithms with runtime better than O(n^2).
11.2 Minimum Spanning Trees
Problem Given an undirected graph G = (V, E) and weights w : E → R_{≥0}, find a minimum-weight subset of edges that is connected, i.e. find E′ ⊆ E such that (V, E′) is connected and w(E′) = Σ_{e∈E′} w(e) is minimized.
Claim E′ will be a tree. Otherwise E′ has a cycle: throw away an edge of the cycle, which leaves a connected graph – if some path a–b used the edge (u, v), replace (u, v) with the rest of the cycle.
Almost any greedy approach will succeed:
• Take a minimum-weight edge that creates no cycle.
• Throw away a maximum-weight edge that doesn't disconnect the graph.
• Grow one connected component, always using the minimum-weight edge leaving it.
All of these are justified by one lemma:
Lemma Let V_1, V_2 be a partition of V (into two disjoint non-empty sets with union V). Let e = (u, v) be a minimum-weight edge from V_1 to V_2. Then there is a minimum spanning tree that includes e.
Stronger version Let X be a set of edges contained in some minimum spanning tree, with no edge of X going from V_1 to V_2. Then some minimum spanning tree includes X ∪ {e}.
Proof Let T be a minimum spanning tree (stronger version: one containing X). T has a path P that connects u and v. P must use an edge from V_1 to V_2 – say, f.
Let T′ = T ∪ {e} \ {f} – exchange e for f. Claim: T′ is it.
w(e) ≤ w(f), so w(T′) ≤ w(T). T′ is a spanning tree: P ∪ {(u, v)} makes a cycle, so we can remove f and stay connected.
Note that T′ contains e and X (because f is not in X).
Following Kruskal's Algorithm,
• Order edges by weight: w(e_1) ≤ w(e_2) ≤ . . . ≤ w(e_m)

    T <- empty set
    for i = 1..m
        if e_i does not make a cycle with T
            then T <- T union { e_i }
    end

• We add e_i = (u, v) iff u and v are in different connected components.
• To test this efficiently we use the Union-Find data structure:
  – Find(element) – find which set contains the element.
  – Union – unite two sets.
• Here each set is a connected component of vertices:
  – Add edge e iff Find(u) ≠ Find(v).
  – Adding edge e to T ⇒ unite the connected components of u and v.
A simple Union-Find structure: store an array C(1 . . . n), where C(i) is the name of the connected component containing vertex i. Union must rename one of the two sets – rename the smaller one. Then n Unions take O(n log n) total. (In CS 466: reduce this.)
Kruskal's Algorithm takes O(m log m) to sort plus O(n log n) for the Union-Find operations. And O(m log m) = O(m log n), since log m ≤ log n^2 = 2 log n.
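A hedged Python sketch of Kruskal with the simple relabel-the-smaller-set Union-Find described above (the (w, u, v) edge format is my assumption):

    def kruskal(n, edges):
        """edges: list of (w, u, v) on vertices 0..n-1; returns the tree edges."""
        comp = list(range(n))                 # comp[i] = component name of vertex i
        members = {i: [i] for i in range(n)}  # vertices of each component
        T = []
        for w, u, v in sorted(edges):         # minimum weight first
            cu, cv = comp[u], comp[v]
            if cu == cv:
                continue                      # would make a cycle
            if len(members[cu]) < len(members[cv]):
                cu, cv = cv, cu               # rename the smaller component cv
            for x in members[cv]:
                comp[x] = cu
            members[cu] += members.pop(cv)
            T.append((u, v, w))
        return T

    print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))
    # [(0, 1, 1), (1, 2, 2), (2, 3, 4)]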
12 Oct 16th, 2008
• Assignment 1 – out of 40.
– Solutions will be on website.
– Marking scheme is in the newsgroup.
• Assignment 2 – due tomorrow.
• Midterm – Monday – covers to the end of today.
• You are allowed one 8.5 × 11 sheet brought to the midterm. It doesn't have to be hand-written either.
12.1 Graph Algorithms
Minimum Spanning Tree: Given an undirected graph G = (V, E) with weight function w : E → R_{≥0}, find a subset of edges E′ ⊆ E such that (V, E′) is connected, of minimum total weight.
Recall:
• Kruskal's algorithm orders edges from minimum to maximum weight, and takes each edge unless it forms a cycle with previously chosen edges.
• The lemma: the cheapest edge connecting two groups is indeed safe to take.
12.1.1 Prim’s Algorithm
Also a greedy algorithm; it builds a tree. General structure: let U be the set of vertices of the tree so far. Initially U = {s}. While U ≠ V: find a minimum-weight edge e = (u, v) with u ∈ U and v ∈ V − U; add e to T and v to U.
Correctness – from the lemma last day.
Implementation: we need to (repeatedly) find a minimum-weight edge leaving U (as U changes). Let δ(U) be the set of edges from U to V − U. We want to find the minimum, insert, and delete, so we need a priority queue – use a heap.
Exactly how does δ(U) change? When we do U ← U ∪ {v}, any edge from U to v leaves δ(U), and any other edge incident with v enters δ(U). So, for all x adjacent to v:
• if x ∈ U then remove edge (x, v) from the priority queue;
• else insert edge (x, v) into the PQ.
Recall that a heap provides O(log n) for insert and delete, and O(1) for finding a minimum.
For one vertex v, how many PQ inserts/deletes do we need?
• n in the worst case;
• more precisely, deg(v) = the # of edges incident with v.
Total number of PQ insert/delete operations over all vertices v (we hope for better than n · n): every edge enters δ(U) once and leaves once, so 2m. Alternatively, Σ_{v∈V} deg(v) = 2m.
Total time for the algorithm is O(n + m log m) = O(m log m) = O(m log n), because m ≤ n^2 gives log m ≤ 2 log n. (If the graph might not be connected, check first whether m < n − 1 and if so bail out.)
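A hedged Python sketch of Prim with a heap of candidate edges; it skips stale heap entries ("lazy deletion") instead of explicitly removing edges, a common shortcut rather than the lecture's exact scheme:

    import heapq

    def prim(adj, s=0):
        """adj: {u: [(w, v), ...]} for a connected graph; returns tree edges."""
        U, T = {s}, []
        pq = [(w, s, v) for w, v in adj[s]]
        heapq.heapify(pq)
        while pq and len(U) < len(adj):
            w, u, v = heapq.heappop(pq)
            if v in U:
                continue                      # stale: both endpoints now in U
            U.add(v)
            T.append((u, v, w))
            for w2, x in adj[v]:
                if x not in U:                # edges entering delta(U)
                    heapq.heappush(pq, (w2, v, x))
        return T

    adj = {0: [(1, 1), (4, 2)], 1: [(1, 0), (2, 2)], 2: [(4, 0), (2, 1)]}
    print(prim(adj))   # [(0, 1, 1), (1, 2, 2)]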
Improvements
• Store vertices in the PQ instead of edges. Define w(v) = the minimum weight of an edge from U to v. When we do U ← U ∪ {v}, we must adjust the weights of some vertices. Gives O(m log n).
• Tweak the PQ to be a "Fibonacci heap," which gives O(1) amortized for a weight decrease and O(log n) to extract the minimum. Gives O(n log n + m).
• Borůvka's Algorithm: another way to handle this.
12.2 Shortest Paths
Shortest path from A to D: ABD, weight 3 + 2 = 5; A to E: ABE with weight 4. (From diagram in class.)
General input: directed graph G = (V, E) with weights w : E → R. We allow negative-weight edges, but disallow negative-weight cycles. (If we have a negative-weight cycle, then repeating it gives paths of weight approaching −∞.) We might instead ask for the shortest simple path, but that is actually hard (NP-complete).
Weight of a path = sum of the weights of its edges.
Versions of the shortest path problem:
1. Given u, v ∈ V, find a shortest path from u to v.
2. Given u ∈ V, find shortest paths to all other vertices – the "single source shortest path" problem.
3. Find a shortest u–v path ∀u, v – the "all pairs shortest path" problem.
Solving 1 seems to involve solving 2.
Later: Dijkstra's algorithm for 2 (like Prim's algorithm – build a shortest-path tree from u). Today: a dynamic programming solution for problem 3.
Does the shortest u–v path go through x or not? Split into: find a shortest path u–x and a shortest path x–v. In what way are these subproblems smaller?
• They use fewer edges: M[u, v, l] = min weight of a u–v path using ≤ l edges; n^3 subproblems, l = 1 . . . n − 1.
• The paths u–x and x–v don't use x as an intermediate vertex.
13 Oct 21, 2008
13.1 All Pairs Shortest Path
Given a directed graph G = (V, E) with weights w : E → R, find shortest u–v paths for all u, v ∈ V.
In general, the weight of a path is the sum of the weights of its edges.
[Figure: a four-vertex example graph on A, B, C, D with edge weights 5, −1, 6, 11, 2; e.g. w(ACD) = 8.]
Assume: no negative-weight cycles. Otherwise, the minimum path weight can be −∞.
Use Dynamic Programming.
Main idea: try all intermediate vertices x. If we use x, we need a shortest u → x path and a shortest x → v path. How are these subproblems simpler?
1. Fewer edges – an efficient dynamic program with M[u, v, l] = shortest u–v path using ≤ l edges. However, we're not using this: it gives the same runtime but uses more space.
2. The u–x and x–v paths do not use x as an intermediate vertex. We'll use this one.
Let V = {1, 2, . . . , n}, and let D_i[u, v] = the min length of a u → v path using intermediate vertices only from the set {1, . . . , i}. Solve subproblem D_i[u, v] for i = 0, 1, . . . , n.
Final answer: the matrix D_n[u, v]. Number of subproblems: O(n^3).
How do we initialize? D_0[u, v] = w(u, v) if (u, v) ∈ E; ∞ otherwise.
Main formula:

    D_i[u, v] = min{ D_{i−1}[u, v], D_{i−1}[u, i] + D_{i−1}[i, v] }
This leads to:
13.1.1 Floyd-Warshall Algorithm
    Initialize D_0 as above
    for i = 1..n
        for u = 1..n
            for v = 1..n
                D_i[u,v] = as above in main formula
    end
    return D_n
Time is O(n^3). The space, however, is also O(n^3), which is extremely undesirable. Notice that to compute D_i we only use D_{i−1}, so we can throw away all earlier matrices, bringing the space to O(n^2).
In fact, even better (although it doesn't change the O(n^2) bound), we can update a single matrix in place:

    Initialize D to D_0
    for i = 1..n
        for u = 1..n
            for v = 1..n
                D[u,v] = min { D[u,v], D[u,i] + D[i,v] }    (**)
    return D

Note: in the inner loop, D will be a mixture of D_i and D_{i−1}, but this is correct: we never go below the true minimum by doing this, and we still compute everything the main equation requires.
How do we find the actual shortest paths?
• We could compute H[u, v] = the highest-numbered vertex on a shortest u → v path. Note: if we explicitly stored all n^2 paths, we'd be back to O(n^3) space – avoid this. Better:
• S[u, v] = the successor of u on a shortest u–v path.
Initialize S[u, v] = v if (u, v) ∈ E and φ otherwise, and modify (**) to become:

    if D[u,i] + D[i,v] < D[u,v] then
        D[u,v] <- D[u,i] + D[i,v]
        S[u,v] <- S[u,i]
    end
Once we have S, we can output complete paths:

    Path(u,v):
        x <- u
        output x
        while x ≠ v
            x <- S[x,v]
            output x
        end

Exercise: Use this algorithm to test if a graph has a negative-weight cycle.
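A hedged Python sketch of Floyd-Warshall with the successor matrix S (the edge-dictionary input format is my assumption):

    from math import inf

    def floyd_warshall(n, edges):
        """edges: {(u, v): w} on vertices 0..n-1; returns (D, path function)."""
        D = [[0 if u == v else edges.get((u, v), inf) for v in range(n)]
             for u in range(n)]
        S = [[v if (u, v) in edges else None for v in range(n)] for u in range(n)]
        for i in range(n):
            for u in range(n):
                for v in range(n):
                    if D[u][i] + D[i][v] < D[u][v]:       # the (**) update
                        D[u][v] = D[u][i] + D[i][v]
                        S[u][v] = S[u][i]
        def path(u, v):                                   # follow successors
            out = [u]
            while u != v:
                u = S[u][v]
                out.append(u)
            return out
        return D, path

    D, path = floyd_warshall(3, {(0, 1): 5, (1, 2): -1, (0, 2): 6})
    print(D[0][2], path(0, 2))   # 4 [0, 1, 2]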
14 OCT 23, 2008
14 Oct 23, 2008
Shortest Paths
Last day’s study was the all-pairs shortest path problem, whereas today’s is the single-source shortest path. Find
the shortest path from s to v ∀v.
• In the case with no negative weight edges, we can use Dijkstra’s Algorithm, which is O(mlog n).
• With no directed cycles, O(n +m).
• With no negative weight cycles, O(n m). (This is the most general – still faster than all pairs.)
14.1 Dijkstra’s Algorithm
Input: directed graph G = (V, E), weight function w : E → R_{≥0}, and source vertex s.
Output: shortest s → v path, ∀v.
Idea: grow a tree of shortest paths from s.
[Figure: the tree region B around s, with a candidate edge (x, y) leaving B.]
General step: we have shortest paths to all vertices in B. Initially B = {s}. Choose the edge (x, y), with x ∈ B and y ∈ V \ B, that minimizes

    d(s, x) + w(x, y)

Call this minimum d:
• d(s, y) ← d
• add (x, y) to the shortest-path tree: parent(y) ← x
• B ← B ∪ {y}
This is greedy in the sense that y has the next minimum distance from s.
Claim: d = the minimum distance from s to y.
Proof: The idea is that any s → y path π has this structure:
• s: begins here
• π_1: the part before leaving B
• (u, v): the first edge leaving B
• π_2: the rest of the path (which may re-enter B)
So w(π) = w(π_1) + w(u, v) + w(π_2). Note that w(π_1) + w(u, v) ≥ d and w(π_2) ≥ 0, as edge weights are non-negative.
From the Claim, by induction on |B|, this algorithm finds the shortest paths.
Implementation: make a priority queue (heap) on the vertices V \ B, keyed by a value D(v), such that the minimum value of D gives the wanted vertex:
D(v) = the minimum weight of a path from s to v using a path inside B plus one edge.
• Initialize:
  – D(v) ← ∞, ∀v
  – D(s) ← 0
  – B ← φ
• While |B| < n:
  – y ← vertex of V \ B with minimum D(y)
  – B ← B ∪ {y}
  – for each edge (y, z) with z ∈ V \ B:
    ∗ t ← D(y) + w(y, z)
    ∗ if t < D(z) then
        D(z) ← t
        parent(z) ← y
Store the D values in a heap. How many times do we extract the minimum? n times, at O(log n) each. The "decrease D value" operation is done ≤ m times (same argument as for Prim), and each decrease is O(log n) (done as delete + insert). Total time is O(n log n + m log n), which is O(m log n) if m ≥ n − 1. Using a Fibonacci Heap, we can decrease this to O(n log n + m).
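A hedged Python sketch of Dijkstra; it uses lazy deletion in the heap instead of an explicit decrease-key, a standard shortcut rather than the lecture's exact implementation:

    import heapq
    from math import inf

    def dijkstra(adj, s):
        """adj: {u: [(w, v), ...]} with w >= 0; returns distances from s."""
        D = {v: inf for v in adj}
        D[s] = 0
        pq = [(0, s)]
        done = set()                      # the set B of finished vertices
        while pq:
            d, y = heapq.heappop(pq)
            if y in done:
                continue                  # stale heap entry
            done.add(y)
            for w, z in adj[y]:
                t = d + w
                if t < D[z]:              # the decrease-D step
                    D[z] = t
                    heapq.heappush(pq, (t, z))
        return D

    adj = {'s': [(1, 'a'), (4, 'b')], 'a': [(2, 'b')], 'b': []}
    print(dijkstra(adj, 's'))   # {'s': 0, 'a': 1, 'b': 3}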
14.2 Connectivity in Graphs
Testing connectivity, exploring a graph. Recall: Breadth First Search (BFS) and Depth First Search (DFS).
[Figure: an 8-vertex example graph used to trace both searches.]
• BFS order: 1, 2, 3, 6, 8, 4, 5, 7 (1, then everything adjacent to 1, then adjacent to 2, etc.)
• DFS order: 1, 2, 4, 6, 3, 5, 8, 7
Either takes O(n + m). DFS is more useful here.
We'll talk about "higher connectivity" – for networks, connected isn't enough. We want the network to stay connected even with a few failures (of vertices/edges). What's bad is a cut vertex – if it fails, the graph becomes disconnected.
We call a graph 2-connected if there are no cut vertices, and a graph decomposes into 2-connected components: a figure-eight graph made of two triangles (or squares) sharing a vertex has two 2-connected components, the triangles/squares. Similarly, 3-connected means we can remove any two vertices without breaking the graph into components.
By the way, Paul Seymour, a famous name in graph theory, is visiting UW this weekend, and he's speaking tomorrow at 3:30. He's also getting an honorary degree on Saturday at convocation.
14.2.1 Finding 2-connected components
We can use DFS to find cut vertices and 2-connected components in O(n + m) time.
[Figure: a DFS tree on vertices 1–7; solid edges are DFS tree edges, dotted edges are "back edges".]
Claim: every non-tree edge in a DFS of an undirected graph goes from some vertex u to an ancestor of u. E.g. we can't have an edge (5, 7) between unrelated subtrees. This justifies the term "back edge."
DFS Algorithm:
• Initialize:
  – mark(v) ← not visited, ∀v
  – num ← 1
  – DFS(s)
• DFS(v), recursive:
  – mark(v) ← visited
  – DFSnum(v) ← num; num ← num + 1
  – for each edge (v, w):
    ∗ if mark(w) = not visited then
        (v, w) is a tree edge
        parent(w) ← v
        DFS(w)
      else if parent(v) ≠ w then: (v, w) is a back edge
What do cut vertices look like in a DFS tree?
• A leaf is never a cut vertex.
• The root is a cut vertex iff its number of children is ≥ 2.
Removing an arbitrary (non-root, non-leaf) tree vertex v leaves the subtrees T_1, . . . , T_i of its children and T_0, the part of the tree connected from above. Are these connected in G \ v? It depends on back edges: if T_j has a back edge to T_0 then T_j stays connected to T_0. Otherwise, it falls away (and is disconnected).
We need one more thing: high(v) = the highest (i.e. lowest DFS number) vertex reachable from v by going down tree edges and then along one back edge.
Claim: a non-root v is a cut vertex iff it has a DFS child x such that high(x) ≥ DFSnum(v).
Modifying the DFS code: set high(v) ← DFSnum(v) at the start of DFS(v); on seeing a back edge (v, w), set high(v) ← min{ high(v), DFSnum(w) }; and when returning from a child w, set high(v) ← min{ high(v), high(w) }.
This is still O(n + m).
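A hedged Python sketch of the DFS/high() computation of cut vertices (recursive; variable names are mine):

    def cut_vertices(adj):
        """adj: {v: [neighbours]}; returns the set of cut vertices."""
        num, high, parent = {}, {}, {}
        cuts, counter = set(), [1]

        def dfs(v):
            num[v] = high[v] = counter[0]; counter[0] += 1
            children = 0
            for w in adj[v]:
                if w not in num:                      # tree edge
                    parent[w] = v; children += 1
                    dfs(w)
                    high[v] = min(high[v], high[w])
                    if parent.get(v) is not None and high[w] >= num[v]:
                        cuts.add(v)                   # non-root cut vertex
                elif w != parent.get(v):              # back edge
                    high[v] = min(high[v], num[w])
            if parent.get(v) is None and children >= 2:
                cuts.add(v)                           # root with >= 2 children

        for v in adj:
            if v not in num:
                dfs(v)
        return cuts

    # Two triangles sharing vertex 3 (the figure-eight example): 3 is cut.
    adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4, 5], 4: [3, 5], 5: [3, 4]}
    print(cut_vertices(adj))   # {3}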
15 Oct 28th, 2008
Midterm: Think about it as out of 35. (In that case you got an 86%.)
Backtracking: A systematic way to try all possibilities. In the workplace, when you need to find an algorithm, if you're extremely lucky it'll be one of the ones we encountered. More likely, it'll be similar to one we've seen. But most likely, it'll be one nobody knows how to solve efficiently – it's NP-complete. Backtracking is useful for exactly such problems.
Options:
• Heuristic approach – run quickly, with no guarantee on the quality of the solution.
• Approximation algorithms – run quickly, but with a guarantee on the quality.
• Exact algorithm – and bear with the fact it (may) take a long time.
Note: to test (experimentally) a heuristic you need an exact algorithm.
15.1 Backtracking and Branch/Bound
Exact, exponential time algorithms. Search in implicit graph of partial solutions. General backtracking: we have
a configuration C that is the remaining subproblem to be solved, and choices made to get to this subproblem.
e.g. knapsack: a configuration is the items selected so far and the items discarded so far, together with the capacity remaining.
e.g. trying all permutations of 1 . . . n: a configuration is the permutation built so far, and the remaining elements.
Backtracking Algorithm: F = set of active configurations. Initially, one configuration: the whole problem. While F ≠ ∅: C ← remove a configuration from F, expand it into C_1, . . . , C_t. For each C_i, test for success (solves the whole problem) and failure (dead end.) Otherwise, add C_i to F.
Storing F:
• Stack: DFS of configuration space
Size: height of tree
• Queue: BFS of configuration space
Size: width of tree
• Priority Queue: explore current best configuration
Usually, height << width, and we should use DFS.
e.g. exploring all subsets of {1, . . . , n}: the root configuration is S = ∅, R = {1 . . . n}. Branching on element 1 ("1 in" / "1 out") gives the children S = {1}, R = {2 . . . n} and S = ∅, R = {2 . . . n}; branching the first of these on element 2 gives S = {1, 2}, R = {3 . . . n} and S = {1}, R = {3 . . . n}; and so on.
Example: Subset Sum – Knapsack where each item's value equals its weight.
Given items 1 . . . n with weight w_i for item i, and a capacity W, find a subset S ⊆ {1, . . . , n} with Σ_{i∈S} w_i ≤ W, maximizing Σ_{i∈S} w_i.
Decision Version – can we find S with Σ_{i∈S} w_i = W?
A polynomial time algorithm for this decision version gives poly time for the optimization version.
Backtracking for the decision version of Subset Sum:
• Configurations are as above (S chosen so far, R remaining)
• w = Σ_{i∈S} w_i, r = Σ_{i∈R} w_i.
Need to fill in: success when w = W, and failure (of the configuration) when w > W or w + r < W.
Note: if F becomes empty and we haven't found a solution, then there is no solution. A minimal sketch of the procedure follows below.
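A short Python sketch of this backtracking search (the recursion gives the DFS/stack order; the function names are illustrative, not from the notes):

def subset_sum_decision(weights, W):
    """Decision version: is there S with sum exactly W?
    Configuration = (next item i, w = sum taken so far, r = sum still available)."""
    def expand(i, w, r):
        if w == W:
            return True               # success
        if w > W or w + r < W:
            return False              # failure: dead end, prune
        if i == len(weights):
            return False              # no items left
        wi = weights[i]
        # branch: item i in, then item i out
        return expand(i + 1, w + wi, r - wi) or expand(i + 1, w, r - wi)

    return expand(0, 0, sum(weights))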
This is O(2^n). Earlier, we built a dynamic programming algorithm for Knapsack with O(nW) subproblems. Which is better? It depends on W: e.g. if W has n bits then W ∼ 2^n and backtracking is better.
15.2 Branch-and-Bound
• for optimization problems
• we’ll talk about minimizing an objective function
• keep track of minimum solution so far
• not DFS – explore the "most promising" configuration first
• "branch" – generate the children of a configuration (as in backtracking)
• "bound" – for each configuration compute a lower bound on the objective function, and prune if it is ≥ the minimum so far.
General paradigm (a sketch follows below):
• F = active configurations
• Keep the best solution so far
• While F ≠ ∅:
– C ← remove the "best" configuration from F
– Expand C to children C_1, . . . , C_t ("branch")
– For each C_i:
∗ If C_i solves the whole problem: if better than the current best, update best.
∗ Else if C_i is infeasible, discard it.
∗ Else ("bound"): if lower-bound(C_i) < best so far, add C_i to F.
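A generic minimization skeleton in Python, assuming the three problem-specific pieces (branch, lower_bound, solution_value) are supplied by the caller; none of these names come from the notes.

import heapq

def branch_and_bound(root, branch, lower_bound, solution_value):
    """branch(C): children of configuration C (infeasible ones filtered out).
    lower_bound(C): lower bound on the objective of any completion of C.
    solution_value(C): objective if C solves the whole problem, else None."""
    best = float('inf')
    counter = 0                      # tie-breaker so the heap never compares configs
    F = [(lower_bound(root), counter, root)]
    while F:
        bound, _, C = heapq.heappop(F)   # "best" = smallest lower bound first
        if bound >= best:
            continue                     # prune: this bound can't beat best
        for child in branch(C):
            val = solution_value(child)
            if val is not None:          # a complete solution
                best = min(best, val)
            else:
                lb = lower_bound(child)
                if lb < best:            # "bound": keep only promising children
                    counter += 1
                    heapq.heappush(F, (lb, counter, child))
    return best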
15.2.1 Branch and Bound TSP Algorithm
Example: Traveling Salesman problem. Idea here is we have a graph with weights on the edges, and our traveling
salesman wants to start in a home town, visit every city exactly once, and return to the home town.
Given a graph G = (V, E) and edge weights w : E → R≥0, find a cycle C that goes through every vertex exactly once and has minimum weight.
This is a famous, ”hard” problem.
Algorithm: based on enumerating all subsets of edges. Configuration: I_c ⊆ E (included edges) and X_c ⊆ E (excluded edges), with I_c ∩ X_c = ∅. The undecided edges are E \ (I_c ∪ X_c).
Necessary conditions: E \ X_c must be connected – in fact, it must be 2-connected. I_c must have ≤ 2 edges at each vertex, and must not contain a (non-Hamiltonian) cycle.
How to branch? Take the next edge not yet decided about: from the configuration C = (I_c, X_c), choose an undecided edge e ∈ E \ (I_c ∪ X_c). But how to bound?
Given I_c, X_c, find a lower bound on the minimum TSP tour respecting I_c, X_c. We want an efficiently computable lower bound (so it's sort of like a heuristic, but we don't have issues of correctness.)
Instead of finding a tour, we find a 1-tree: a spanning tree on the nodes 2, . . . , n (not necessarily an MST) plus two edges joining vertex 1 to the tree.
Claim Any TSP tour is a 1-tree, so w(min TSP tour) ≥ w(min 1-tree). Use this as the lower bound.
Claim We can efficiently find a minimum weight 1-tree given I_c, X_c. (Not proven.)
Final Enhancements:
• When we choose the "best" configuration C from F, as our measure of best, use the one with the minimum 1-tree.
• Branch wisely. e.g. find a vertex i in the minimum 1-tree with degree ≥ 3 (in a tour every vertex has degree exactly 2), let e = its maximum weight edge, and branch on e.
16 Oct 30th, 2008
16.1 Recall
Course outline:
• Designing algorithms
• Analyzing algorithms
• Lower Bounds – do we have the best algorithm?
16.2 Lower Bounds
If we have a lower bound for a problem P, we claim any algorithm will take at least this much time.
Note the distinction between a lower bound for an algorithm and a lower bound for a problem. For an example, look at multiplying large integers. The school method is O(n^2).
In fact, the school method has Ω(n^2) worst-case run time, because there are example inputs that take ≥ c·n^2 steps. But there is an algorithm (divide and conquer) with a better worst-case runtime – O(n^k) with k < 2. A lower bound for the problem, by contrast, says that all algorithms have to take ≥ some amount of time.
Lower bounds for problems are hard to prove!
16.2.1 Basic Techniques
1. Lower bound based on output size.
For example, if we ask for all the permutations of 1, 2, . . . , n, there are n! of them and it won’t take less than
n! time to write them all down – Ω(n!).
2. Information-Theoretic Lower Bounds
e.g. the Ω(log n) lower bound for searching for an element among a_1, a_2, . . . , a_n. This takes log n bits, as that is the information content of distinguishing n possibilities.
In a comparison-based model, each comparison gives one bit of information, and since we need log n bits we
need log n comparisons. Often this argument is presented as a tree.
3. Reductions: showing one problem is easier or harder than another.
e.g. convex hull is harder than sorting. We took a list of numbers and mapped them onto a curve, and then the convex hull would tell us the sorted order. "If I could find convex hulls faster than O(n log n), then I could sort faster than O(n log n)."
16.2.2 State-of-the-Art in Lower Bounds
• Some problems are undecidable (they don’t have algorithms) e.g. the halting problem. We’ll do this later
in the course (and CS 360.)
• Some problems can only be solved in exponential time.
• (Lower end) some problems have Ω(n log n) lower bounds in special models.
For the things we care about, like "is there a TSP algorithm in O(n^6)?" – nobody knows. "Can the O(n^3) dynamic programming algorithms be improved?" – nobody knows.
Major open question: Many practical problems have no polynomial time algorithm and no proved lower bound.
The best that’s known is proving that a large set of problems are all equivalent, and we know that solving one in
polynomial time solves all the others.
In the rest of the course, we’ll fill this in.
16.3 Polynomial Time
Definition An algorithm runs in polynomial time if its worst case runtime is O(n^k) for some constant k.
What is polynomial?
Θ(n) YES
Θ(n^2) YES
Θ(n log n) YES (because it's bounded by, say, O(n^2))
Θ(n^100) YES
Θ(2^n) NO
Θ(n!) NO
The algorithms in this course were (mostly) all poly-time, except backtracking and certain dynamic programming algorithms (specifically 0-1 Knapsack.)
Low-degree polynomials are efficient. High-degree polynomials don't seem to come up in practice.
Jack Edmonds is a retired C&O prof. In the "matching" problem, you are given a graph and you want to pair up vertices. He first formulated the idea of polynomial time.
In any other algorithms class, you would cover linear programming in algorithms. We have a C&O department
that covers that, but if you’re serious about algorithms, you should be taking courses over there.
Other history:
• In the 50's and 60's, there was a success story in creating linear programming and the simplex method – practical (though not polynomial.)
• Next step, integer linear programming. Seemed promising at the time, and people reduced other problems
to this one, but in the 70’s with the theory of NP-completeness, we found this is actually a hard problem
and people did reductions from integer programming.
Our goal: to attempt to distinguish problems with poly-time algorithms from those that don’t have any. This is
the theory of NP-completeness. (NP = Non-deterministic Polynomial)
16.4 Reductions
Problem A reduces (in polytime) to a problem B (written A ≤ B or A ≤_P B), and we can say "A is easier than B," if a (polytime) algorithm for B can be used to create a (polytime) algorithm for A. More precisely, there is a polytime algorithm for A that makes subroutine calls to a (polytime) algorithm for B.
Note: we can have a reduction without having an algorithm for B.
Consequence of A ≤ B:
An algorithm for B gives an algorithm for A. Conversely, if we have a lower bound showing there is no polytime algorithm for A, then there is no polytime algorithm for B either.
Even without an algorithm for B or a lower bound for A, if we prove the reductions A ≤_P B and B ≤_P A, then A and B are equivalent with respect to polytime (either both have polytime algorithms, or neither does.)
Example: Longest increasing subsequence problem. We will reduce this problem to not shortest path but longest
path in a graph.
This is a reduction – it reduces the longest increasing subsequence problem to the longest path problem. Is it a
polynomial-time reduction?
How can we solve the longest path problem? Reduction to shortest path problem. Negate the edge weights.
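To see the first reduction concretely, here is a short Python sketch: build a DAG with an edge i → j whenever i < j and a_i < a_j; the longest path in it (counted in vertices) is the longest increasing subsequence. Negating weights and using shortest paths is safe here precisely because the graph is acyclic. The DP below just computes the longest path directly:

def longest_increasing_subsequence(a):
    """Vertices 0..n-1; edge i -> j iff i < j and a[i] < a[j].
    The longest path, counted in vertices, equals the LIS length."""
    n = len(a)
    longest = [1] * n                 # longest path ending at vertex j
    for j in range(n):
        for i in range(j):
            if a[i] < a[j]:           # edge i -> j exists
                longest[j] = max(longest[j], longest[i] + 1)
    return max(longest, default=0)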
17 Nov 4th, 2008
Permanents are like determinants, except all the terms are positive.
Today’s topics: Reductions (from last class), P and NP, and decision problems.
17.1 Decision Problems
What is a decision problem? A problem with output YES/NO or TRUE/FALSE. We will concentrate on decision
problems to define P/NP. Why? It’s more rigorous, and it seems to be equivalent to optimization anyways.
Examples
• Given a number, is it prime?
• Given a graph, does it have a Hamiltonian cycle? (a cycle visiting every vertex once)
• TSP decision version: given a graph G = (V, E) with w : E → R+, and given some bound k ∈ R, is there a TSP tour of length at most k?
• Independent Set: given a graph G = (V, E) and k ∈ N, is there an independent set of size ≥ k? Optimization version: given G, find a maximum independent set.
Usually, decision and optimization are equivalent with respect to polynomial time, e.g. independent set. In fact, typically, we can show decision ≤_P opt. Input: G, k.
• Give G to the algorithm for the optimization problem
• Return YES or NO depending on whether the returned set has size ≥ k.
Showing opt ≤_P decision: suppose we have a poly-time algorithm for the decision version of independent set. For k = n . . . 1, give G, k to the decision algorithm and stop at the first YES; that k is the optimum. Runtime: Assume decision takes O(n^t). Then this loop takes O(n^{t+1}).
We can find the actual independent set in polytime too. Idea: try vertex 1 in/out of independent set. Exercise:
fill this in and check poly-time.
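As a sketch of the opt ≤_P decision loop above (has_ind_set stands for the assumed polytime decision algorithm and is hypothetical; finding the actual set is left as the exercise):

def max_independent_set_size(G, n, has_ind_set):
    """n = number of vertices of G; has_ind_set(G, k) is the assumed
    polytime decision oracle. At most n calls, so the loop is polytime
    whenever the oracle is."""
    for k in range(n, 0, -1):
        if has_ind_set(G, k):
            return k                  # first YES going down is the optimum
    return 0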
Examples:
• Factoring – find prime factors
• Primality – given number, is it prime?
In some sense, primality is the ”decision” version of factoring. But although we can test primality in polynomial
time, we can’t factor in polynomial time (and to find one would be bad news for cryptography!)
Definition P = { decision problems that have polytime algorithms }.
Notes:
• Must be careful about model of computing and input size – count bits.
17.2 P or NP?
Which problems are in P? Which are not in P? We will study a class of "NP-complete" problems that are equivalently hard (wrt polytime) (i.e. A ≤_P B for all A, B in the class), and none of which seem to be in P.
Definition of NP ("nondeterministic polynomial time"): there's a set of NP problems, which contains the P problems and the NP-complete problems (which are all equivalent.) NP problems are polytime if we get some lucky extra information.
For independent set, it’s easy to verify a graph has an independent set of size ≥ k if you’re given the set. Contrast
with verifying that G has no independent set of size ≥ k, what lucky info would help?
e.g. primes: given n, is it prime? Not clear what info to give (there is some) but for composite numbers (given n,
is it composite (= not prime?)) we could give factors.
A certifier algorithm takes an input plus a certificate (our extra info.) An algorithm B is a certifier for problem
X if:
• B takes two inputs s and t and outputs YES or NO.
• ∀s, s is a YES input for X iff ∃t (a "certificate") such that B(s, t) outputs YES.
B is a polytime certifier if
• B runs in polynomial time.
• There is a polynomial bound on size of certificate t in terms of the size of s.
Examples
• Independent Set
Input: a graph G and k ∈ N. Question: does G have an independent set of size ≥ k?
Claim: Independent Set ∈ NP.
Proof Certificate: U ⊆ V (a set of vertices.) Certifier: Check that U is an independent set and check |U| ≥ k.
• Decision version of TSP.
Input: Given G = (V, E) and w : E → R+, and k ∈ R
Question: Does G have a TSP tour of weight ≤ k?
Certificate: Sequence of edges
Certifier: Check edges, and check no repeated vertices (sum of weights ≤ k).
• Non-TSP
Does G have no TSP tour of length ≤ k?
Is Non-TSP in NP? Nobody knows.
• Subset-Sum:
Input: w_1, . . . , w_n ∈ R+ and a target W. Is there a subset S ⊆ {1, . . . , n} such that the sum is exactly W?
Claim: Subset Sum ∈ NP. Certificate: S. Certifier: add the weights in S.
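As a concrete illustration of what a certifier B(s, t) looks like, here is the Independent Set example in Python (the instance encoding – an edge list plus k – is an assumption of the sketch):

def certify_independent_set(s, t):
    """B(s, t): s = (edges, k) encodes the instance, t = a set of vertices
    (the certificate). Outputs YES iff t is independent and |t| >= k.
    s is a YES instance iff SOME certificate t makes this return True."""
    edges, k = s
    U = set(t)
    if len(U) < k:
        return False
    for (u, v) in edges:
        if u in U and v in U:
            return False              # an edge inside U: not independent
    return True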
17.3 Properties
Claim P ⊆ NP.
Let X be a decision problem in P, so X has a polytime algorithm. To show X ∈ NP:
• Certificate: nothing
• Certifier Algorithm: original algorithm
Claim: any problem in NP has an exponential algorithm. In particular, the running time is O(2^{poly(n)}).
Proof idea: try all possible certificates using the certifier. The number of certificates is O(2^{poly(n)}).
Open Questions
Is P = NP? co-NP consists of the "no versions" of NP problems; non-TSP is in co-NP. Is co-NP = NP? Is P = NP ∩ co-NP?
18 Nov 6th, 2008
18.1 Recall
A ≤_P B – problem A "reduces (in polytime) to" problem B if there is a polytime algorithm for A (possibly) using a polytime algorithm for B. (B is "harder.") P = { decision problems with polytime algorithms } and NP = { decision problems with a polynomial-time certifier algorithm } (i.e. poly-time IF we get extra information.)
18.2 NP-Complete
These are the hardest problems in NP. Definition: A decision problem X is NP-complete if:
1. X ∈ NP
2. For every Y ∈ NP, Y ≤_P X.
Two important implications:
1. If X is NP-complete and X has a polytime algorithm, then P = NP, i.e. every Y ∈ NP has a polytime algorithm.
2. If X is NP-complete and X has no polytime algorithm (i.e. a lower bound), then no NP-complete problem has a polytime algorithm.
The first NP-completeness proof is hard: to show X NP-complete, we must show Y ≤_P X for all Y ∈ NP.
Subsequent NP-completeness proofs are easier. If we know X is NP-complete, then to prove Z is NP-complete:
1. Prove Z ∈ NP
2. Prove X ≤_P Z
Note that X is a known NP-complete problem and Z is the new problem. Please don’t get this backwards.
18.2.1 Circuit Satisfiability
The first NP-complete problem is called circuit satisfiability.
(Figure: an example circuit. The inputs carry the variables x_1, x_2, which feed ¬ and ∧ gates leading to a single ∨ gate – the one output (sink).)
This is a dag with OR, AND, and NOT operations. 0-1 values for the variables determine the output value: e.g. if x_1 = 0 and x_2 = 1 then output = 0.
Question: Are there 0-1 values for variables that give 1 as output?
Circuit SAT is a decision problem in NP.
• Certificate – Values for variables.
• Certifier – Go through circuit from sources to sink, computing values. Check output is 1.
Theorem Circuit-SAT is NP-complete.
Proof Sketch: We know Circuit-SAT ∈ NP as above. We must show Y ≤_P Circuit-SAT for all Y ∈ NP. The idea is that an algorithm becomes a circuit computation. A certifier algorithm with an unknown certificate becomes a circuit with variables as some inputs. The question "is there a certificate such that the certifier says YES?" then becomes circuit satisfiability.
Essentially, if we had a polynomial time way to test circuit satisfiability, we would have a general way to solve any
problem in NP by turning it into a Circuit-SAT problem.
18.2.2 3-SAT
Satisfiability: (of Boolean formulas).
• Input: a boolean formula.
e.g. (x_1 ∧ x_2) ∨ (¬x_1 ∧ ¬x_2)
• Question: is there an assignment of 0, 1 to variables to make the formula TRUE (i.e. 1?)
Well, circuits ≈ formulas, so these satisfiability problems should be equivalent. We will be rigorous. Even a special form of Satisfiability (SAT) is NP-complete.
3-SAT: e.g. (x_1 ∨ ¬x_1 ∨ x_2) ∧ (x_2 ∨ x_3 ∨ ¬x_4) ∧ . . .. The "formula" is the ∧ of "clauses," each clause the ∨ of three literals. A literal is a variable or the negation of a variable.
Theorem 3-SAT is NP-complete.
Proof
• 3-SAT ∈ NP:
Certificate: values for variables.
Certifier algorithm: check that each clause has ≥ 1 true literal.
• 3-SAT is harder than another NP-complete problem:
i.e. prove Circuit-SAT ≤_P 3-SAT.
Assume we have a polytime algorithm for 3-SAT, and use it to create a polytime algorithm for Circuit-SAT. The input to the algorithm is a circuit C, and we want to construct in polytime a 3-SAT formula F to send to the 3-SAT algorithm s.t. C is satisfiable iff F is satisfiable.
We could derive a formula by carrying the inputs up through the tree (i.e. for subformulas f_1 and f_2 feeding an ∨ gate, just pull the inputs up and write f_1 ∨ f_2.) Caution: the size of the formula doubles at every level (thus this is not a polynomial time or size reduction.)
Idea: make a variable for every node in the circuit. Rewrite a ≡ b as (a ⇒ b) ∧ (b ⇒ a), and a ⇒ b as (¬a ∨ b). Then a ≡ (b ∨ c) becomes (a ⇒ (b ∨ c)) ∧ ((b ∨ c) ⇒ a), i.e. (¬a ∨ b ∨ c) ∧ ((¬b ∧ ¬c) ∨ a).
We get (¬a ∨ b ∨ c) ∧ (¬b ∨ a) ∧ (¬c ∨ a).
Note: we can pad these size-two clauses by adding a new dummy variable t: (¬b ∨ a) becomes (¬b ∨ a ∨ t) ∧ (¬b ∨ a ∨ ¬t), etc. There's a similar padding for size 1.
The final formula F:
– the ∧ of all the clauses for the circuit nodes
– ∧ x_i, where i is the output node.
e.g. x_7 ∧ (x_7 ≡ x_5 ∨ x_6) ∧ (x_5 ≡ x_1 ∧ x_2) ∧ (x_6 ≡ x_3 ∧ x_4) ∧ (x_3 ≡ ¬x_1) ∧ (x_4 ≡ ¬x_2).
Claim F has a polynomial size and can be constructed in polynomial time.
Claim C is satisfiable iff F is satisfiable.
Proof (⇒) by construction (⇐) . . .
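A small Python sketch of this node-by-node rewriting (often called the Tseitin transformation; the signed-integer literal encoding, +i for x_i and −i for ¬x_i, is an assumption chosen for the example). The size-one and size-two clauses produced here would then be padded to size three as noted above.

def or_gate_clauses(a, b, c):
    """Clauses for a ≡ (b ∨ c): (¬a ∨ b ∨ c), (¬b ∨ a), (¬c ∨ a)."""
    return [(-a, b, c), (-b, a), (-c, a)]

def and_gate_clauses(a, b, c):
    """Clauses for a ≡ (b ∧ c): (¬a ∨ b), (¬a ∨ c), (¬b ∨ ¬c ∨ a)."""
    return [(-a, b), (-a, c), (-b, -c, a)]

def not_gate_clauses(a, b):
    """Clauses for a ≡ ¬b: (a ∨ b), (¬a ∨ ¬b)."""
    return [(a, b), (-a, -b)]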
19 Nov 11th, 2008
NP is decision problems with a polynomial time certifier algorithm.
P is decision problems with a polynomial time algorithm.
NP-complete problems are the hardest problems in NP.
Definition A decision problem X is NP-complete if:
• X ∈ NP
• Y ≤_P X for all Y ∈ NP
Once we know X is NP-complete, we can prove Z is NP-complete by proving:
• Z ∈ NP
• X ≤_P Z
19.1 Satisfiability – no restricted form
Recall: 3-SAT is NP-complete. Recall the input is a Boolean formula in a special form (3-conjunctive normal form, F = (x_1 ∨ x_2 ∨ x_3) ∧ . . .)
Question: Are there T/F values for variables that make F true?
Theorem SAT is NP-complete.
Proof:
• SAT ∈ NP
• 3-SAT ≤_P SAT
19.2 Independent Set
Input: Graph G = (V, E) and k ∈ N.
Question: Is there a subset U ⊆ V with |U| ≥ k that is independent (i.e. no two of its vertices joined by an edge)?
Theorem Independent-Set is NP-complete.
Proof Independent-Set is in NP. See previous lecture. We will show 3-SAT reduces to Independent-Set. We
want to give a polytime algorithm for 3-SAT using a hypothesized polytime algorithm for Independent-Set.
Input: Boolean formula F
Goal: Construct a graph G and choose k ∈ N such that F is satisfiable iff G has an independent set ≥ k.
For each clause in F, we'll make a triangle in the graph. For example, (x_1 ∨ x_2 ∨ x_3) is drawn as a graph with three vertices x_1, x_2 and x_3, and edges (x_1, x_2), (x_2, x_3), (x_3, x_1). We have m clauses, so 3m vertices.
For example: (x_1 ∨ x_2 ∨ ¬x_3) ∧ (x_1 ∨ ¬x_2 ∨ x_3) becomes:
(Figure: two triangles, one with vertices x_1, x_2, ¬x_3 and one with vertices x_1, ¬x_2, x_3, with complementary literals joined.)
Connect any vertex labelled x_i with any vertex labelled ¬x_i.
Claim: G has polynomial size. 3m vertices.
Details of the Algorithm:
• Input: 3-SAT formula F
– Construct G
– Call the Independent-Set algorithm on G, m
– Return its answer
• Runtime: Constructing G takes poly time. Independent set runs in poly time by assumption.
• Correctness: Claim F is satisfiable iff G has an independent set of size ≥ m.
• Proof: (⇒) Suppose we can assign T/F to the variables to satisfy every clause, so each clause has ≥ 1 true literal. From each triangle, pick a vertex corresponding to a true literal. This gives an independent set of size m.
(⇐) An independent set of size m in G must use exactly one vertex from each triangle. Set the corresponding literals to be true; set any remaining variables arbitrarily. This satisfies all clauses.
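A Python sketch of the construction (clauses as triples of signed integers, +i for x_i and −i for ¬x_i; the encoding is an assumption of the example):

def sat3_to_independent_set(clauses):
    """Returns (vertices, edges, k): the formula is satisfiable iff the graph
    has an independent set of size >= k = number of clauses."""
    vertices, edges = [], []
    for ci, clause in enumerate(clauses):
        tri = [(ci, lit) for lit in clause]    # one vertex per literal occurrence
        vertices.extend(tri)
        # the triangle for this clause
        edges += [(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])]
    # join every x_i vertex to every ¬x_i vertex
    for u in vertices:
        for v in vertices:
            if u < v and u[1] == -v[1]:
                edges.append((u, v))
    return vertices, edges, len(clauses)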
19.3 Vertex Cover
Input: Graph G = (V, E) and number k ∈ N.
Question: Does G have a vertex cover U ⊆ V with |U| ≤ k?
A vertex cover is a set of vertices that "hits" all edges – i.e. ∀(u, v) ∈ E, u ∈ U or v ∈ U (or both.)
Theorem Vertex-Cover (VC) is NP-complete.
Proof
• VC ∈ NP
Certificate: a set U. Certifier algorithm: verify that U is a vertex cover and |U| ≤ k.
• Ind-Set ≤_P VC
Ind-Set and VC are closely related.
Claim U ⊆ V is an independent set iff V − U is a vertex cover.
Suppose that we have a polynomial time algorithm for VC. Here's an algorithm for independent set: on input G, k, call the VC algorithm on G, n − k.
Correctness: Claim, G has an independent set of size ≥ k iff G has a vertex cover of size ≤ n − k.
19.4 Set-Cover Problem
Input: a set E of elements, some subsets S_1, . . . , S_m ⊆ E of it, and k ∈ N.
Question:
Can we choose k of the S_i's that still cover all the elements? i.e. are there indices i_1, . . . , i_k such that ∪_{j=1...k} S_{i_j} = E?
Example: Can we throw away some of a set of overlapping rectangles and still cover the same area?
Theorem Set-Cover is NP-complete.
Please find reduction proof on the Internet.
19.5 Road map of NP-Completeness
(Diagram: Circuit-SAT → 3-SAT; from 3-SAT, arrows to Independent Set, Hamiltonian Cycle, and Subset-Sum; Independent Set → VC → Set-Cover; Hamiltonian Cycle → TSP.)
Note: VC ≤_P Set-Cover because VC is a special case; but also Set-Cover ≤_P VC, because VC is NP-complete.
These proofs are from a 1972 paper by Richard Karp.
19.6 Hamiltonian Cycle
Input: Directed Graph G = (V, E)
Q: Does G have a directed cycle that visits every vertex exactly once?
Proof (1) ∈ NP and (2) 3-SAT ≤_P Ham. Cycle. Give a polytime algorithm for 3-SAT assuming we have one for Ham. Cycle.
• Input: 3-SAT formula F
• Idea: Construct digraph G such that F is satisfiable iff G has a Hamiltonian cycle.
F has m clauses and n variables x_1, . . . , x_n.
(skipped this section. read online.)
Can you show the undirected ham cycle problem is hard?
20 Nov 13th, 2008
20.1 Undirected Hamiltonian Cycle
Input: Undirected G = (V, E)
Decision: Does this graph have an undirected Hamiltonian cycle that visits every vertex exactly once?
Theorem Undirected H.C. is NP-complete.
Proof
• ∈ NP
• Dir. H.C. ≤_P Undir. H.C.
Assume we have a polytime algorithm for the undirected case. Design a polytime algorithm for the directed case.
Input: directed graph G.
Construct an undirected graph G′ such that G has a directed H.C. iff G′ has an undirected H.C.
First idea – G′ = G with the directions erased. (⇒) is OK, but (⇐) fails in a one-directional cycle.
Second idea – for each vertex v, create three vertices v_in, v_mid, v_out, joined in a path v_in – v_mid – v_out, and replace each directed edge (u, v) of G by the undirected edge (u_out, v_in). (Figure: a vertex v split into v_in, v_mid, v_out.) We've created G′.
Claim G′ has polynomial size: say G has n vertices and m edges; then G′ has 3n vertices and m + 2n edges.
Claim (Correctness) G has a directed H.C. iff G′ has an undirected H.C.
(⇒) easy.
(⇐) Each v_mid has degree two, so the Hamiltonian cycle must use both of its incident edges. Then at each v it must use one incoming edge and one outgoing edge.
This is the level of NP-completeness proof you’ll be expected to do on your assignment.
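A sketch of the vertex-splitting construction in Python (the graph encodings are assumptions of the example):

def directed_to_undirected(vertices, arcs):
    """Each v becomes v_in - v_mid - v_out; each arc (u, v) becomes the
    undirected edge (u_out, v_in). The result has 3n vertices and m + 2n
    edges, and has an undirected H.C. iff G has a directed H.C."""
    new_vertices, new_edges = [], []
    for v in vertices:
        vin, vmid, vout = (v, 'in'), (v, 'mid'), (v, 'out')
        new_vertices += [vin, vmid, vout]
        new_edges += [(vin, vmid), (vmid, vout)]   # the degree-2 path at v
    for (u, v) in arcs:
        new_edges.append(((u, 'out'), (v, 'in')))
    return new_vertices, new_edges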
20.2 TSP is NP-complete
Theorem TSP (decision version) is NP-complete.
Input: G = (V, E) and w : E → R+ with k ∈ R.
Q: Does G have a TSP tour of weight ≤ k?
Proof
• ∈ NP
• Ham. Cycle ≤_P TSP.
Ham. Cycle is a special case of TSP when w(e) = 1 ∀e and k = n.
Theorem Hamiltonian Path is NP-complete.
Input: undirected graph G
Question: does G have Ham path that visits each vertex exactly once?
Proof
– ∈ NP
– Ham. Cycle ≤_P Ham. Path
We want an algorithm for Ham. Cycle using an algorithm for Ham. Path. Given G, the input for Ham. Cycle, construct G′ such that G has a H.C. iff G′ has a Ham. path.
First idea: G′ ← G. Well, ⇒ is OK but we can find a counterexample for ⇐. Exercise: find a counterexample.
Second idea: Create three new vertices a, b, c in G′ and connect a and c to all vertices of G. This gives: G has a Ham. path iff G′ has a Ham. cycle – a reduction, but in the wrong direction.
Third idea: Add a single new vertex and connect it to everything in G.
Fourth idea: erase each vertex of G one at a time and ask for a Hamiltonian path each time.
Final idea: Take one vertex v and split it into two identical copies. Add new vertices s and t as above, attached to the two copies.
Claim: poly-size.
Again, this is the kind of thing you’ll be expected to do on your assignment.
20.3 Subset-Sum is NP-Complete
This one is not something you’ll be expected to do on your assignment.
Input: Numbers a_1, . . . , a_n ∈ R and a target W.
Question: Is there a subset S ⊆ {1, . . . , n} such that Σ_{i∈S} a_i = W?
Recall: the dynamic programming algorithm is O(nW). The branch-and-bound algorithm was O(2^n).
Proof
1. ∈ NP
2. 3-SAT ≤_P Subset-Sum
Give a polynomial-time algorithm for 3-SAT using a polytime algorithm for Subset-Sum.
The input is a 3-SAT formula F with variables x_1, x_2, . . . , x_n and clauses c_1, . . . , c_m. Construct a Subset-Sum input a_1, . . . , a_t, W s.t. F is satisfiable iff there is a subset of the a_i's with sum = W.
Ex: F = (x_1 ∨ ¬x_2 ∨ x_3) ∧ (¬x_1 ∨ ¬x_2 ∨ x_3).
           c_1  c_2  ...  c_m   x_1  x_2  x_3  ...
x_1         1    0               1    0    0
¬x_1        0    1               1    0    0
x_2         0    0               0    1    0
¬x_2        1    1               0    1    0
x_3         1    1               0    0    1
¬x_3        0    0               0    0    1
...        (similar rows for x_n and ¬x_n)
slack 1,1   1
slack 1,2   2
slack 2,1        1
slack 2,2        2
(want)     ≥1   ≥1               1    1    1
target W    4    4               1    1    1
Make a 0-1 matrix and interpret the rows as numbers (in base 10, big enough that no carries occur.) Add extra columns: column x_i has 1's in rows x_i and ¬x_i, and zeros elsewhere.
• We want to choose the x_i row or the ¬x_i row, but not both: column x_i with target 1 forces this.
• We want to deal with the "≥ 1" targets on the clause columns. Solution: add two rows per clause column c_i – row slack i,1 with a 1 in c_i, and row slack i,2 with a 2 in c_i – and 0 everywhere else. Set the target for column c_i to 4.
Finally, each row of the matrix becomes a base-10 number. These are the a_i's. The target row of the matrix turns into W in base 10.
Claim (Size.) How many a_i's? 2n + 2m. How many base-10 digits in the a_i's and W? Equal to the number of columns, n + m.
Claim (Correctness.) F is satisfiable iff ∃ a subset of the a_i's with sum W.
Proof (⇒) If x_i is true, choose row x_i; if false, choose row ¬x_i. Then column x_i has sum = 1, as required. For the column of clause c_i, it has 1, 2, or 3 true literals:
• 3 true literals: use slack i,1 (= 1), so the total = 4.
• 2 true literals: use slack i,2 (= 2), total = 4.
• only a single true literal: use slack i,1 and slack i,2, for again 4.
This row set gives sum W.
(⇐) Some subset of rows adds to W. Column x_i ⇒ we use exactly one of the rows x_i, ¬x_i; set x_i = T or F accordingly. This satisfies all clauses: consider c_j, and sum down the c_j column to get 4. The slacks give ≤ 3, so some literal in c_j must be true.
21 Nov 18th, 2008
NP-Completeness continued.
Theorem Circuit-SAT is NP-Complete.
Recall: Input: a circuit of ∨, ∧, ¬ gates, with variables as some of the inputs. One sink: the final output.
Question: are there 0-1 values for which the circuit outputs 1?
Proof
• ∈ NP
• Y ≤_P Circuit-SAT for all Y in NP.
What do we know about Y? It has a polynomial time certifier algorithm B (an input s for Y has a YES answer iff there exists a certificate t of poly size such that B(s, t) outputs YES.)
We assume there is a polynomial time algorithm for Circuit-SAT and give a polynomial time algorithm for Y using that subroutine.
Let n = size(s), the input size. Let p(n) be a polynomial bounding size(t), i.e. size(t) ≤ p(n).
We must convert algorithm B to a circuit (to hand to Circuit-SAT subroutine.)
Alg. B (after compiling and assembling) becomes a circuit at lowest hardware level. Because B runs in
polynomial time, the circuit has polynomial size.
Alg. B (for inputs of size n) becomes a circuit C_n (of polynomial size in n.)
("Is there a certificate?" becomes "Are there values for the variables?")
Correctness:
An input s for Y gets a YES output iff there exists a certificate t such that B(s, t) outputs YES, iff there exist values for the certificate variables such that C_n outputs 1, iff C_n is satisfiable.
Algorithm for Y:
– Input s
– Convert B (with s plugged into its input gates) to the circuit C_n
– Hand C_n to the Circuit-SAT subroutine
21.1 Major Open Questions
Is P = NP? If one NP-complete problem is in P, then they all are.
If P ≠ NP then there are problems strictly in between P and NP-complete (Ladner, 1970s), i.e. A ≤_P B but B not ≤_P A (i.e. A <_P B).
But what are natural candidates for these? In Garey and Johnson ('79) the candidates included:
• Linear Programming: in P ('80)
• Primality Testing: in P ('02)
• Min. Weight Triangulation of a Point Set: shown NP-hard ('06) (not a famous problem)
• Graph isomorphism: open.
Given two graphs each on n vertices, are they the same after relabeling vertices?
21.2 Undecidability
So far we’ve been talking about efficiency of algorithms. Now, we’ll look at problems with no algorithm whatsoever.
This is also a topic not conventionally covered in an algorithms course, so you won't find it in the textbook. But everyone in the School of Computer Science thinks it's "absolutely crucial" that everyone graduating with a Waterloo degree knows this stuff.
21.2.1 Examples
Tiling: Given square tiles with colours on their sides, can I tile the whole plane with copies of these tiles? Must
match colours, and no rotations or flips allowed.
The answer is, actually, that no algorithm exists. For a finite (k × k) piece of the plane it is possible: with t tile types I could just try all t choices in each of the k^2 places, so the problem is O(t^{k^2}).
Program Verification: Given a specification of the inputs and corresponding outputs of a program (the specification is finite, the potential number of inputs is infinite) and given a program, does the program give the correct corresponding outputs?
Answer: no algorithm exists. On one hand, this is sad for software engineers, because their processes attempt to check exactly this. On the plus side, your skills and ingenuity will always be needed...
Halting Problem: Given a program, does it halt (or go into an infinite loop?)
Sample-Program
while x ≠ 1 do
x ← x − 2
end
This halts iff x is odd and positive.
Sample-Program-2
while x ≠ 1 do
if x is even then x ← x/2
else x ← 3x + 1
end
Assume x > 0. Sample runs: x = 5: 5, 16, 8, 4, 2, 1. x = 9: 9, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1.
Does this program halt for all x? That’s open.
Also, any math question about the existence of a number can be turned into a halting question. Idea: to decide "is there an x such that Foo(x)?", run: x ← 1; while not Foo(x), x ← x + 1.
Definition A decision problem is undecidable if there's no algorithm for it.
Definition (more general)
A problem is unsolvable if there's no algorithm for it.
What is a problem? Specification of inputs and corresponding outputs.
What is an algorithm? Church-Turing Thesis (not proved.)
An algorithm is a Turing machine.
Theorem The following models of computing are equivalent:
• Turing machines
• Java programs
• RAM
• Circuit families
22 Nov 20th, 2008
22.1 Undecidability
”Which problems have no algorithm?”
Definition A decision problem is undecidable if it has no algorithm. A (general) problem is unsolvable if it has no algorithm.
22.2 History of Undecidability
• Gottlob Frege - 1900 - one of many who tried to axiomatize mathematics.
• Bertrand Russell (1872-1970) Russell’s paradox (recommend his biography, and some philosophy books)
Let S = the set of sets that do not contain themselves. Is S a member of itself?
– NO: then S meets its own defining condition, so S is a member of S. Contradiction.
– YES: then S contains itself, so it fails the defining condition, and S is not a member of S. Contradiction.
A contradiction either way! So what is wrong about this?
First undecidability result (from Turing):
Theorem The Halting Problem is undecidable.
Halting Problem
• Input: Some program or algorithm A and some input string w for A.
• Question: Does A halt on w?
Proof: (by contradiction.) Suppose there is a program H that decides the halting problem. H takes A, w as input
and outputs yes/no.
Construct a new program H′ whose input is a program B:
begin
call H(B, B)
if the answer is no: halt.
else: loop forever.
end
So H′ is like Russell's set S. His question, "does S contain S?", is like asking, "does H′ halt on its own input?"
Suppose yes. Then this is a yes case of the halting problem, so H(H′, H′) outputs yes. But look at the code of H′ on input H′: it loops forever. Contradiction.
Suppose no. Then this is a no case of the halting problem, so H(H′, H′) outputs no. But then (looking at the code of H′) H′ halts on input H′. A contradiction either way. Therefore, our assumption that H exists is wrong.
Therefore, there is no algorithm to decide the halting problem.
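The diagonal construction can even be written out as (hypothetical) code; halts below stands for the assumed decider H, and the whole point of the proof is that no such function can exist:

def make_H_prime(halts):
    """Given a (hypothetical!) decider halts(program_source, input_string),
    build H'. Feeding H' its own source then yields the contradiction:
    H' halts on itself iff it doesn't."""
    def H_prime(B_source):
        if halts(B_source, B_source):   # H says: B halts on its own source
            while True:                 # ...so loop forever
                pass
        else:
            return                      # ...otherwise, halt
    return H_prime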
23 Nov 25th, 2008
Assignment 3 – out of 45.
Assignment 4 – due Friday.
Final exam: study sheet is allowed.
23.1 Undecidability
Recall: a decision problem is undecidable if there is no algorithm for it.
Halting Problem: given a program/algorithm A and an input w, does A halt on input w?
To show other problems are undecidable, use reductions.
Theorem: If P and Q are decision problems and P is undecidable and P ≤ Q then Q is undecidable.
Recall A ≤ B or ”A reduces to B” if an algorithm for B can be used to make an algorithm for A.
Proof By contradiction. Suppose Q is decidable. Then it has an algorithm. By the definition of ≤, we get an
algorithm for P. This is contrary to P undecidable.
23.2 Other Undecidable Problems
23.2.1 Halt-No-Input or Halt-on-Empty
Given a program A with no input, does it halt?
Theorem Halt-No-Input is undecidable.
Proof: Halting Problem ≤ Halt-No-Input.
Suppose we have an algorithm X for Halt-no-input. Make an algorithm for the Halting Problem.
Input: program A, input string w.
Algorithm: Make a program A′ that has w hard-coded inside it and runs A on w. Call X on A′, which outputs the yes/no answer.
Correctness: A halts on w iff A′ halts.
23.2.2 Program Verification
Given a program, and specification of inputs and corresponding outputs, does the program compute the correct
output for each input?
Theorem Program Verification is undecidable.
Proof Halt-No-Input ≤ Program Verification.
Suppose we have an algorithm V to decide Program Verification. Make an algorithm to solve Halt-No-Input.
Input: program A.
Output: does A halt?
Idea: Modify the code of A to get a program A′ with input and output:
A′: read the input, discard it; run A; output 1.
Then call V(A′, specs: for any input, output 1).
Correctness: A halts iff V(A′, the specs above) answers yes.
Proof: A halts iff A′ produces output 1 for every input iff V(A′, the spec above) answers yes.
Program Equivalence (something TA’s would love!)
Given two programs, do they behave the same (i.e. produce the same outputs?)
Theorem Program Equivalence is undecidable.
Proof Program-Verification ≤ Program-Equiv (?)
Suppose we have an algorithm for Program Equivalence. Give an algorithm for Program Verification.
Input: program A, input/specs for A. This will work, but we need more formality about input/output specs.
Let’s try another approach.
Halt-No-Input ≤ Program-Equiv.
Suppose we have an algorithm for Program Equivalence. Make an algorithm for Halt-no-Input. Input: program A.
Algorithm: Make A′ as before. Make a program B that reads its input and just outputs 1. Call the Program-Equiv algorithm on A′, B.
Correctness: A′ is equivalent to B iff A halts.
23.2.3 Other Problems (no proofs)
Hilbert’s 10th Problem
Given a polynomial P(x_1, . . . , x_n) with integer coefficients, does P(x_1, . . . , x_n) = 0 have positive integer solutions?
Possible approach: try all integers. This will correctly answer "yes" if the answer is "yes." But it may run forever, and solutions can be huge: e.g. the least integer solution to x^2 = 991y^2 + 1 has a 30-digit x and a 29-digit y.
This was proved undecidable in the 70’s.
Conway’s Game of Life
Rules: spots die with 0–1 or ≥ 4 neighbours, and are born with three neighbours. Undecidable.
24 Nov 27th, 2008
Final Exam: Wed Dec 10th. Office hours: show webpage. 48 and 49 must be rounded up to 50.
24.1 What to do with NP-complete problems
Sometimes you only want special cases of an NP-complete problem.
• Parameterized Tractability: exponential algorithms that work in polynomial time for special inputs. For
example, maximum degree in a graph. There may be algorithms that work in polytime when you bound
that maximum degree.
• Exact exponential time algorithm: use heuristics to make branch-and-bound explore the most promising
choice first (and run fast sometimes.)
• Approximation Algorithms: CS 466.
– Vertex Cover: Greedy algorithm that finds a good (not necessarily min) vertex cover.
C <- empty set
while E not empty:
    pick any edge e = (u, v) in E
    C <- C ∪ {u, v}
    remove from E all edges incident to u or v
end
return C

Claim: this algorithm finds |C| ≤ 2 · (min size of a V.C.).
Proof: The edges we choose form a matching M (no two share an endpoint), and |C| = 2|M|. Every edge in M must be hit by a vertex of any V.C., ∴ |M| ≤ min size of V.C., and ∴ |C| ≤ 2 · (min V.C.).
We call this a ”2-approximation algorithm.”
Some NP-complete problems have no constant-factor approximation algorithm (unless P = NP) such
as Independent Set.
Some NP-complete problems have approximation factors as close to 1 as we like – at the cost of
increasing running time. Limit is approximation factor = 1 (an exact algorithm) with an exponential-
time algorithm.
– Example: Subset-Sum
Given w_1, . . . , w_n and W, is there S ⊆ {1, . . . , n} such that Σ_{i∈S} w_i = W?
As optimization: we want Σ_{i∈S} w_i ≤ W, maximizing Σ_{i∈S} w_i.
Recall: Dynamic programming takes O(nW).
Note: a solution with Σ_{i∈S} w_i ≥ (1/2)·(true max) would be a 2-approximation; a solution with Σ_{i∈S} w_i ≥ (1/(1+ε))·(true max) is a "(1 + ε)-approximation."
Claim: there is a (1 + ε)-approximation algorithm for Subset-Sum with runtime O((1/ε)·n^3). As ε → 0 we get a better approximation but a worse runtime.
Idea: apply dynamic programming to rounded input.
Rough rounding – few bits – rough approximation.
Refined rounding – many bits – good approximation.
Rounding parameter b (later, b = (ε/n)·max{w_i : i = 1 . . . n}):
round each weight up to a multiple of b, i.e. w̃_i ← ⌈w_i/b⌉·b.
Claim: w_i ≤ w̃_i ≤ w_i + b.
Now all the w̃_i's are multiples of b, so scale down and run the dynamic programming on integers: use the weights w̃_i/b and the target Ŵ ← ⌊W/b⌋.
Note: we should check feasibility of rounding.
Runtime: O(n·Ŵ). We have Ŵ ≤ O(W/b) = O(W/((ε/n)·max w_i)) ≤ O((1/ε)·n^2), using W ≤ n·(max w_i). Therefore, our runtime is O((1/ε)·n^3).
How good is our approximation? Each w̃_i is off by ≤ b. So, with S the set returned:
(true max) ≤ Σ_{i∈S} w_i + n·b ≤ Σ_{i∈S} w_i + ε·(max w_i) ≤ Σ_{i∈S} w_i + ε·Σ_{i∈S} w_i = (1 + ε)·Σ_{i∈S} w_i.
The second-last step needs max w_i ≤ Σ_{i∈S} w_i; otherwise, use the single item of weight max w_i as the solution.
(And assume w_i ≤ W for all i; else throw the item out.)
Therefore, this is a (1 + ε)-approximation algorithm.
Idea: dynamic programming algorithm is very good – it only can’t handle having lots of bits in a
number. So throw away half the bits and get an approximate answer.
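A rough Python sketch of this rounding scheme, under the notes' assumptions (not a tuned implementation): we round weights up, so any subset fitting the scaled target ⌊W/b⌋ genuinely fits W, and the accuracy argument is the one above.

import math

def exact_subset_dp(weights, cap):
    """O(n * cap) subset-sum DP over integer weights: returns the indices of
    a subset whose sum is as large as possible while <= cap."""
    best = {0: ()}                      # reachable sum -> chosen indices
    for i, wi in enumerate(weights):
        for s, items in list(best.items()):   # snapshot: each item used once
            t = s + wi
            if t <= cap and t not in best:
                best[t] = items + (i,)
    return best[max(best)]

def subset_sum_approx(w, W, eps):
    """(1 + eps)-approximation for max sum(S) <= W, by rounding and scaling."""
    w = [wi for wi in w if wi <= W]     # assume w_i <= W; else throw out
    if not w:
        return 0
    b = eps * max(w) / len(w)           # rounding parameter b = (eps/n) max w_i
    scaled = [math.ceil(wi / b) for wi in w]   # the integers w~_i / b
    chosen = exact_subset_dp(scaled, math.floor(W / b))
    return sum(w[i] for i in chosen)    # report the true (unrounded) weight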
• Do alternative methods of computing help with NP-complete problems?
Will massively parallel computers help? Only by a factor of the number of CPUs. This is like "a drop in the bucket" for exponential time algorithms.
• Randomized algorithms (CS 466?)
If I have access to a RNG, then what can I now do?
Primality: can be tested in polytime with a randomized algorithm (70’s) but also without randomness (2002.)
• Quantum Computing
The hope is that it offers massive parallelism for free. Huge result (Shor, 1994) – efficient factoring on a
quantum computer.
Waterloo is, by the way, the place to be for quantum computing. In Physics, CS, and C&O we have experts
on the subject.
To read a tiny bit more on quantum computing, see [DPV].
24.2 P vs. NP
53

CONTENTS

CONTENTS

11 Oct 14th, 2008 11.1 Graph Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2 Minimum Spanning Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 Oct 16th, 2008 12.1 Graph Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.1.1 Prim’s Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.2 Shortest Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Oct 21, 2008 13.1 All Pairs Shortest Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.1.1 Floyd-Warshall Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Oct 23, 2008 14.1 Dijkstra’s Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.2 Connectivity in Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.2.1 Finding 2-connected components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 Oct 28th, 2008 15.1 Backtracking and Branch/Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.2 Branch-and-Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.2.1 Branch and Bound TSP Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Oct 30th, 2008 16.1 Recall . . . . . . . . . . . . . . . . . . . . 16.2 Lower Bounds . . . . . . . . . . . . . . . . 16.2.1 Basic Techniques . . . . . . . . . . 16.2.2 State-of-the-Art in Lower Bounds . 16.3 Polynomial Time . . . . . . . . . . . . . . 16.4 Reductions . . . . . . . . . . . . . . . . .

20 20 21 23 23 23 24 25 25 25 27 27 28 29 30 30 32 32 33 33 33 33 34 34 35 35 35 36 37 38 38 38 38 39 40 40 41 41 42 42 42

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

17 Nov 4th, 2008 17.1 Decision Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.2 P or NP? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Nov 6th, 2008 18.1 Recall . . . . 18.2 N P -Complete 18.2.1 Circuit 18.2.2 3-SAT

. . . . . . . . . . . . . . . . Satisfiability . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

19 Nov 11th, 2008 19.1 Satisfiability – no restricted form 19.2 Independent Set . . . . . . . . . 19.3 Vertex Cover . . . . . . . . . . . 19.4 Set-Cover Problem . . . . . . . . 19.5 Road map of NP-Completeness . 19.6 Hamiltonian Cycle . . . . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

ii

CONTENTS

CONTENTS

20 Nov 13th, 2008 20.1 Undirected Hamiltonian Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.2 TSP is NP-complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.3 Subset-Sum is NP-Complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Nov 18th, 2008 21.1 Major Open Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.2 Undecidability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.2.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 Nov 20th, 2008 22.1 Undecidability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22.2 History of Undecidability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 Nov 25th, 2008 23.1 Undecidability . . . . . . . . . . . . . . 23.2 Other Undecidable Problems . . . . . . 23.2.1 Half-No-Input or Halt-on-Empty 23.2.2 Program Verification . . . . . . . 23.2.3 Other Problems (no proofs) . . .

43 43 43 44 46 46 47 47 48 48 48 49 49 49 49 50 51 51 51 53

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

24 Nov 27th, 2008 24.1 What to do with NP-complete problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24.2 P vs. NP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

iii

1 SEP 9TH, 2008

1
1.1

Sep 9th, 2008
Welcome to CS 341: Algorithms, Fall 2008

I’m Anna Lubiw, I’ve been in this department/school quite some time. This term I’m teaching both sections of CS 341. I find the earlier lecture is better though, which may be counterintuitive. The number of assignments is fewer this term. There are fewer grad TA’s this term, so the assignments may be shorter (but quite likely, not any easier!) Textbook is CLRS. $140 in the bookstore, on reserve in the library.

1.2

Marking Scheme

25% Midterm 40% Final exam 35% Assignments We have due dates for assignments already (see the website.) Unlike in 2nd year courses where ISG keeps everything coordinated, in third year we’re on our own.

1.3

Course Outline

Where does this word come from? An Arabic scientist from 600 AD. Originally, algorithms for arithmetic, developed by the mathematician/scientist (not sure what to call him back then.) In this course, we’re looking for the best algorithmic solutions to problems. Several aspects: 1. How to design algorithms i.e. what shortest-path algorithm to use for street-level walking directions. (a) Greedy algorithms (b) Divide and Conquer (c) Dynamic Programming (d) Reductions 2. Basic Algorithms (often domain specific) Anyone educated in algorithms needs to have a general repertoire of algorithms to apply in solving new problems (a) Sorting (from first year) (b) String Matching (CS 240) 3. How to analyze algorithms i.e. do we run it on examples, or try a more theoretical approach (a) How good is an algorithm? (b) Time, space, goodness (of an approximation) 4. You are expected to know (a) O notation, worst case/avg. case (b) Models of computation

1

1 SEP 9TH, 2008

1.4 A Case Study (Convex Hull)

5. Lower Bounds This is not a course on complexity theory, which is where people really get excited about lower bounds, but you need to know something about this. (a) Do we have the best algorithm? (b) Models of computation become crucial here. (c) NP-completeness (how many of you have secret ambitions to solve this? I started off wanting to solve it, before it was known it was so hard...)

1.4

A Case Study (Convex Hull)

To bound a set of points in 2D space, we can find the max/min X,Y values and make a box that contains all the points. A convex hull is the smallest convex shape containing the points (think the smallest set of points that we can connect in a ring that contains all the other points.) Analogy: putting an elastic band around the points, or in three dimensions putting shrink-wrap around the points. Why? This is a basic computational geometry problem. The convex hull gives an approximation to the shape of a set of points better than a minimum bounding box. Arises when digitizing sculptures in 3D, or maybe while doing OCR character recognition in 2D. 1.4.1 Algorithm

Definition (better from an algorithmic point of view) A convex hull is a polygon and its sides are formed by lines that connect at least two points and have no points on one side. A straightforward algorithm (sometimes called a brute force algorithm, but that gives them a bad names because oftentimes the straightforward algorithms are the way to go) – for all pairs of points r, s find the line between r, s and if all other points lie on one side only then the line is part of the convex hull. Time for n points: O(n3 ). Aside: even with this there are good and bad ways to ”see which side points are on.” Computing the slope between the lines is actually a bad way to do this. Exercise: for r, s, and p, how to do it in the least steps, avoiding underflow/overflow/division. Improvement Given one line , there is a natural ”next” line. Rotate through s until it hits the next point.

s l r t l'

t is an ”extreme point” (min angle α). Finding it is like ginding a max (or min) – O(n). Time for n points: O(n2 ). Actually, if h = the number of points on the convex hull, the algorithm takes O(n × h) Can we do even better? (you bet!) Repeatedly finding a min/max (which should remind you of sorting.) Example Sort the points by x coordinate, and then find the ”upper convex hull” and ”lower convex hull” (each of which comes in sorted order.) The sorting will cost O(n log n) but the second step is just linear. We don’t quite have a linear algorithm here but this will be much better. Process from left to right, adding points and each time figuring out whether you need to 2

upper bridge lower bridge 1. 2008 1. but not the same way. 3. Recursively find convex hull on each side. and h. the input size. This answer uses divide and conquer.) Measuring in terms of n. From there recover the sorted order. T (n) = 2T 3 . In three-dimensional space you can still get O(n log n) algorithms for this.g. In some sense. Combine by finding upper and lower bridges. Which is better? Well. We need a restricted model to say that sorting is Ω(n log n) – but need the power of indirect addressing. Technique: put points on a parabola (or alternately other shape) with a map x → (x. If we could find a convex hull faster. we need to specify the model of computation. This will be O(n) to divide. Why not? We’ll show soon. This is an intuitive argument. Challenge Look up the O(n log h) algorithm by Timothy Chan (here in SCS) and try to understand it. and ”walk down” to get the lower bridge. merge-sort. Get recurrence relation: n + O(n) 2 This is the same as e. and O(n) to find the upper/lower bridges. The take-home message is that to be precise we need to spend more time on models of computation. Divide points in half by vertical line. we could sort faster. but intuition is that we’ll have to sort the points somehow.1 SEP 9TH. One paper written called ”The ultimate convex hull algorithm?” (with a question mark in the name. no. It comes out to O(n log n). depends on whether h > log n or not. From e. (Don’t worry if that seems fuzzy. We saw an O(n log n) algorithm. the output size. 2. This is a case of using a reduction (which we will study a lot in this course) Time for n points: O(n log n).4 A Case Study (Convex Hull) go ”up” or ”down” from each point. edge from max x coordinate on the left to minimum x coordinate on the right. One more algorithm Will not be better than O(n log n). an O(n × h) algorithm. To be rigorous. x2 ) and compute the convex hull of these points. Never Any Better Finally let’s talk ever-so-slightly about not getting better than O(n log n). very unusual) gave an algorithm that’s O(n log h). ”walk up” to get upper bridge.

pick non-overlapping activities.2 Example: Scheduling time Interval scheduling.47 in as few coins as possible.. 2008 Assignment 1 is available online. 3.1 Example: Making change Example: for making change. 2008 Missing. On the assignment you must prove this is in fact true.empty set for i = 1 . each with an associated time interval. and I claim this is the minimum number of coins.A union { i } end This looks like an O(n log n) algorithm (as it takes that long to sort. 2008 2 Sep 11th. 3. or ”activity selection. Given activities. Suppose you want to pay $ 3. This takes seven coins. and then O(n) after that) Correctness Proof There are three approaches to proving correctness of greedy algorithms. 3 Sep 16th. • Greedy does better at each step. • Metroids (a formalization of when Greedy approaches work) (in C&O) 4 . Then the Greedy approach can be made into this solution. n if activity i doesn’t overlap any activities in A A <. • Suppose there is an optimal solution.3 SEP 16TH. Greedy Approaches • Pick the first activity NO • Pick the shortest activity NO • Pick one with the fewest overlaps NO • Pick the one that ends earliest YES We can write the algorithm as A <.” The goal is to maximize the number of activities we can perform.

. . n.. chooses them. bi does not overlap ai−1 by assumption. . . bl is a solution.x_i end 5 .3 Example: Knapsack problem I have items i. Therefore l ≤ k and greedy gives the optimal solution. . . . . . . if l > k then by claim a1 . . . free-W } free-w <. Weight limit W for the knapsack. b2 . That proves claim. free-w <. . So finish (ai ) ≤ finish (bi ) ∴ ai doesn’t overlap bi+1 . .3 SEP 16TH. . . Claim a1 . Item i has weight wi and i has values vi . . ak . . . . Proof By induction on i. ai bi+1 . Exercise. go through the picture. bl is also a solution. . xi is the weight of item i that we chose. . oatmeal) We’ll look at 0-1 Knapsack later (since it’s harder) (and when we study dynamic programming) So imagine we have a table of items: Weight wi 6 4 4 Value vi 12 7 6 vi wi . So when we choose ai . We want to show l ≤ k. bl so swap is OK. There are two versions: • 0-1 Knapsack: the items are indivisible (e. Suppose that l > k and show that greedy algorithm would not have stopped at k. . . bk+1 . . .free-w . ak } ordered by finish time (i.e. Prove a1 . bl } be any other set of non-overlapping intervals ordered by finish time. . To proce theorem. . . . Proof Let A = {a1 . Well. . . 3. we’re swapping bi out and ai in. . n by vi wi .min{ w_i. i. Pick items of total weight ≤ W maximizing the sum of V . . .g. ai . . . . Inductive case a1 .e. . . 2008 3.3 Example: Knapsack problem Theorem This algorithm returns a maximum size set A of non-overlapping intervals.) Let B = {b1 . tent) • Fractional: items are divisible (e. bl is a solution. .n x_i <. Base case i = 0 and b1 . .W for i=1. half of item 2 Greedy Algorithm Order items 1. bl is a solution. . . . Greedy by For the 0 − 1 knapsack: • Greedy picks item 1 – value 12 • Optimal solution For the fractional case: • Take all of item 1. But then the Greedy algorithm would not have stopped at ak . . . ai−1 bi . bi was a candidate – we chose ai . . 1 2 3 W = 8. in the order greedy alg.g. bi+1 . . bl is a solution. . .

yk ←k +∆ and yl ← yl − ∆. with only one subproblem of size – Conquer: No work – Recurrence relation: T (n) = T – Time: T (n) ∈ O(log n) • Merge sort – Divide: basically nothing 6 n 2 n 2 n 2 n 2 + 1 or more formally T (n) = max T . . 2008: MISSING Sep 23. Claim Greedy algorithm gives the optimal solution to fractional knapsack problem. . Let k be the minimum index with xk = yk . . Then yk < xk (because greedy took max xk . 2008: DIVIDE AND CONQUER xi = W (assuming W < The value we get is wi ) n i=1 vi wi xi Note: solution looks like it’s for 0-1.T +1 . yn . So there exists an index l > k such that yl > xl . 4 5 Sep 18. . Proof We use x1 . Ida: swap excess item l for item k. xn and the optimal uses y1 . Well. Sorting and searching are often divide-and-conquer algorithms. both terms of which are greater than zero. The steps are: • Divide – break problem into smaller subproblems • Recurse – solve smaller sets of problems • Conquer/Combine – ”put together” solutions from smaller subproblems Some examples are: • Binary search – Divide: Pick the middle item – Recurse: Search in each side. ∆ ← min{yl . 2008: Divide and Conquer I started with Greedy because it’s fun to get to some interesting algorithms right away. So the sum of the weights yi = W +∆(vk /wk ) − ∆(vl /wl ) = ∆(vk /wk − vl /wl ) vk vl > because k > l wk wl Thus yi is an even better solution. . . The only item we take fractionally is the last. wk − yk }. Thus own assumption that opt is better than greedy fails. .) xi = yi = W . .5 SEP 23. Divide and conquer however is likely the one you’re most familiar with.

  – Recurse: Two subproblems of size n/2
  – Conquer: n − 1 comparisons
  – Recurrence: T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + (n − 1), with T(1) = 0 comparisons
  – Time: T(n) ∈ O(n log n)

5.1 Solving Recurrence Relations

Three approaches, all of which are in CLRS.

5.1.1 "Unrolling" a recurrence

Use T(n) = 2T(n/2) + n − 1, T(1) = 0. For n a power of 2:

    T(n) = 2T(n/2) + n − 1
         = 2( 2T(n/4) + n/2 − 1 ) + n − 1
         = 4T(n/4) + 2n − 3
         ...
         = 2^i T(n/2^i) + i·n − (2^i − 1)      (using sum_{j=0}^{i−1} 2^j = 2^i − 1)
         ...
         = 2^k T(n/2^k) + k·n − (2^k − 1)      (we want n/2^k = 1, i.e. 2^k = n, k = log n)
         = n·T(1) + n log n − n + 1
         = n log n − n + 1 ∈ O(n log n)

If our goal is to say that mergesort takes O(n log n) for all n (as opposed to exactly computing T(n)), then we can just add that T(n) ≤ T(n′) where n′ = the smallest power of 2 bigger than n. If we really did want to compute T(n) exactly, we would keep T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + n − 1 (which is 2T(n/2) + n − 1 for n even); the exact solution is T(n) = n⌈log n⌉ − 2^⌈log n⌉ + 1.
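A quick check of the closed form (my own sanity test, not from the notes): evaluate the recurrence directly for powers of two and compare.

    # Check: T(n) = 2T(n/2) + n - 1, T(1) = 0 equals n log n - n + 1
    # for n a power of 2.
    from functools import lru_cache
    from math import log2

    @lru_cache(maxsize=None)
    def T(n):
        return 0 if n == 1 else 2 * T(n // 2) + n - 1

    for k in range(1, 11):
        n = 2 ** k
        assert T(n) == n * round(log2(n)) - n + 1
    print("closed form matches for n = 2, 4, ..., 1024")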

5.1.2 Guess an answer, prove by induction

Again for the mergesort recurrence, prove that T(n) ∈ O(n log n). Be careful: prove by induction that T(n) ≤ cn log n for some constant c. Often you don't know c until you're working on the problem. For n even:

    T(n) = 2T(n/2) + n − 1
         ≤ 2( c (n/2) log(n/2) ) + n − 1      (by induction)
         = cn(log n − log 2) + n − 1
         = cn log n − cn + n − 1
         ≤ cn log n      if c ≥ 1

I'll leave the details as an exercise (we need a base case, and we need to do the case of n odd) for those of you for whom this is not entirely intuitive. A good trick for avoiding floors and ceilings is to deal separately with n even and n odd.

Another example:

    T(n) = 2T(n/2) + n

Claim: T(n) ∈ O(n). Prove: T(n) ≤ cn for some constant c. Assume by inductive hypothesis that T(n′) ≤ cn′ for n′ < n. Inductive step:

    T(n) = 2T(n/2) + n ≤ 2c(n/2) + n = (c + 1)n

Wait – constants aren't supposed to grow like that c + 1 above. This proof is fallacious. Please do not make this kind of mistake on your assignments.

Example 2:

    T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 1,    T(1) = 1

Let's guess T(n) ∈ O(n), and try to prove by induction that T(n) ≤ cn for some c.

Induction step:

    T(n) = c⌈n/2⌉ + c⌊n/2⌋ + 1 = cn + 1

– we've got trouble from that +1. Let's try unrolling for n a power of 2:

    T(n) = 2T(n/2) + 1
         = 4T(n/4) + 2 + 1
         ...
         = 2^k T(n/2^k) + sum_{i=0}^{k−1} 2^i      (n = 2^k)
         = n·T(1) + 2^k − 1 = 2n − 1

So try proving by induction that T(n) ≤ c·n − 1. In that case we have

    T(n) = (c⌈n/2⌉ − 1) + (c⌊n/2⌋ − 1) + 1 = cn − 1

This matches perfectly. Message: Sometimes we need to strengthen the inductive hypothesis and lower the bound.

5.1.3 Changing Variables

Suppose we have a mystery algorithm with recurrence T(n) = 2T(√n) + log n (and ignore the rounding). Substitute m = log n, n = 2^m, and we have T(2^m) = 2T(2^{m/2}) + m. Let S(m) = T(2^m); then S(m) = 2S(m/2) + m. We can say:

    S(m) ∈ O(m log m)
    T(2^m) ∈ O(m log m)
    T(n) ∈ O(log n log log n)

5.1.4 Master Theorem

From MATH 239: recurrences T(n) = a_{n−1}T(n − 1) + a_{n−2}T(n − 2) + ... + a_1 T(1) + f(n) with f(n) = 0 are "homogeneous" linear recurrences, because they're equal to zero. That never happens in algorithms (because we always have some work to do!). We need

    T(n) = aT(n/b) + c·n^k

The more general case, where c·n^k is replaced by an arbitrary f(n), is handled in the textbook. We'll first look at k = 1, T(n) = aT(n/b) + cn. Results (exact) are:

    a = b    T(n) ∈ Θ(n log n)
    a < b    T(n) ∈ Θ(n)
    a > b    T(n) ∈ Θ(n^{log_b a})    – the final term dominates

Theorem: If T(n) = aT(n/b) + cn^k with a ≥ 1, b > 1, c > 0, k ≥ 1, then

    T(n) ∈  Θ(n^k)            if a < b^k
            Θ(n^k log n)      if a = b^k
            Θ(n^{log_b a})    if a > b^k

We're not going to do a rigorous proof (the rigorous way is through induction), but we'll do enough to give you some intuition. We'll use unrolling:

    T(n) = aT(n/b) + cn^k
         = a( aT(n/b^2) + c(n/b)^k ) + cn^k = a^2 T(n/b^2) + ac(n/b)^k + cn^k
         = a^3 T(n/b^3) + a^2 c(n/b^2)^k + ac(n/b)^k + cn^k
         ...
         = a^t T(1) + c n^k · sum_{i=0}^{t−1} (a/b^k)^i      (n = b^t, t = log_b n)
         = n^{log_b a} T(1) + c n^k · sum_{i=0}^{log_b n − 1} (a/b^k)^i

using a^{log_b n} = n^{log_b a}. If a < b^k, i.e. log_b a < k, the sum is a constant and n^k dominates. If a = b^k the sum is log_b n and we get Θ(n^k log n). The third case is when a > b^k: the sum comes out exactly like that sum in your assignment, and then n^{log_b a} dominates.

6 Sep 25, 2008

6.1 Assignment Info

Assignment 1 is due Friday at 5PM in the assignment boxes. We will provide solutions for everything, but we aren't planning on marking every question; the unmarked questions are likely to appear on midterms or finals. Please just come to office hours instead of asking too many questions over e-mail. Some notes:

• Q2a: D(i, j, l) is the shortest path length from i to j using at most l edges, but the formula is for exactly l edges. Either assumption is fine; state clearly which one you are using. (Use "at most" if you haven't started.) Same issue in (e), but if you use "exactly" you may find that you don't save.
• Q3: Try to beat O(n^2).
• Q4: In CS 240 we learned to take the log of n + 1. "How is the number of bits going to grow" is a much nicer angle.
• Q5: If you want examples of coin systems, go look around the Internet (US = UC). Don't get your proof from the Internet, but examples of systems is fine. For (e) and (f), see the newsgroup and website.
• Q5, Q6 are a counterexample and a proof.

6.2 Divide & Conquer Algorithms

6.2.1 Counting Inversions

Comparing two people's rankings of n items – books, music, etc. We'd like a measure of how similar the lists are. Suppose my ranking is BDCA, and yours is ADBC, from best to worst. We can count inversions: on how many pairs do we disagree? Here there are four pairs where we disagree – BD, DA, BA, CA – and two where we agree: BC, DC. This is useful for web sites giving recommendations based on similar preferences.

Formally: given a_1, a_2, ..., a_n, a permutation of 1 ... n, count the number of inversions, i.e. the number of pairs a_i, a_j with i < j but a_i > a_j. How efficient can we be? Well, you probably have to sort, so you probably won't get better than O(n log n). (There is a reason that n log n and n^2 are in the list.)

Brute Force: Check all (n choose 2) pairs, taking O(n^2).

Divide & Conquer: Divide the list in half, A = a_1 ... a_m and B = a_{m+1} ... a_n, with m = ⌊n/2⌋. Recursively count r_A = # inversions in A and r_B = # inversions in B. The final answer is r_A + r_B + r, where r = the number of inversions a_i, a_j with i ≤ m, j ≥ m + 1 and a_i > a_j. Equivalently, for each j = m + 1 ... n let r_j = # of such pairs involving a_j; then r = sum_{j=m+1}^{n} r_j. Strengthen the recursion – sort the list as we go. If A and B are sorted, we can compute the r_j's as part of the merge.

    Sort-and-Count(L):  returns sorted L and # of inversions
        Split L into A and B
        (r_A, A) <- Sort-and-Count(A)
        (r_B, B) <- Sort-and-Count(B)
        r <- 0
        merge A and B; when an element is moved from B to the output list,
            r <- r + # elements left in A
        return r_A + r_B + r  (and the merged list)

Runtime: T(n) = 2T(n/2) + O(n). Since it's the same as mergesort, we get O(n log n). Can we do better?
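A runnable Python version of Sort-and-Count (a sketch; names mine). For the BDCA vs ADBC example, writing your list in terms of my ranks gives the sequence 4, 2, 1, 3, and the count is 4:

    # A sketch of Sort-and-Count in Python (function name is mine).
    def sort_and_count(L):
        if len(L) <= 1:
            return 0, L
        m = len(L) // 2
        r_A, A = sort_and_count(L[:m])
        r_B, B = sort_and_count(L[m:])
        # Merge; each element taken from B is inverted with everything left in A.
        r, i, j, out = 0, 0, 0, []
        while i < len(A) and j < len(B):
            if A[i] <= B[j]:
                out.append(A[i]); i += 1
            else:
                out.append(B[j]); j += 1
                r += len(A) - i
        out += A[i:] + B[j:]
        return r_A + r_B + r, out

    print(sort_and_count([4, 2, 1, 3])[0])  # 4 inversions, as in BDCA vs ADBC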

6.2.2 Multiplying Large Numbers

The school method:

       981
    x 1234
    ------
      3924
     2943
    1962
    981
    -------
    1210554

This is O(n^2) for two n-digit numbers (one step is a × or + of two digits). There is a faster way using divide-and-conquer. First pad 981 to 0981, then split each number in half:

    09 81  ×  12 34

Then calculate the four products, each with its shift:

    09 × 12 = 108      (shift 4)
    09 × 34 = 306      (shift 2)
    81 × 12 = 972      (shift 2)
    81 × 34 = 2754     (shift 0)

and add: 108·10^4 + 306·10^2 + 972·10^2 + 2754 = 1210554. The runtime here is T(n) = 4T(n/2) + O(n). Apply the Master Method to T(n) = aT(n/b) + cn^k: here a = 4, b = 2, k = 1. We see a = 4 > b^k = 2, so the runtime is Θ(n^{log_b a}) = Θ(n^2). So far we have not made progress!

We can get by with fewer than four multiplications. Writing (10^2 w + x) × (10^2 y + z) = 10^4 wy + 10^2 (wz + xy) + xz, note we need wz + xy, not the two terms individually. Look at (w + x)(y + z) = wy + wz + xy + xz. We know wy and xz, but we want wz + xy. This leads to:

    p = wy = 09 × 12 = 108
    q = xz = 81 × 34 = 2754
    r = (w + x)(y + z) = 90 (that's 09 + 81) × 46 = 4140

Answer: 10^4 p + 10^2 (r − p − q) + q:

    108____
     1278__
       2754
    -------
    1210554

We can apply this as the basis for a recursive algorithm. We'll get T(n) = 3T(n/2) + O(n). From the master theorem, now a = 3, b = 2, k = 1, and since a > b^k:

    Θ(n^{log_b a}) = Θ(n^{log_2 3}) ≈ Θ(n^{1.585})

Practical Issues
• What if n is odd?
• What about two numbers with different digit counts?
• How small do you let the recursion get? (Answer: the hardware word)
• What about different bases?
• When is this algorithm useful? (For about 1,000 digits or fewer, don't use it [BB])
  – Schönhage and Strassen is better for very large numbers; it runs in O(n log n log log n)
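A recursive sketch of the three-multiplication idea on Python integers (my own transcription; a real implementation would recurse down to the hardware word rather than to single digits):

    # Karatsuba-style multiplication sketch (names mine).
    def karatsuba(a, b):
        if a < 10 or b < 10:
            return a * b                 # base case: one digit
        half = max(len(str(a)), len(str(b))) // 2
        shift = 10 ** half
        w, x = divmod(a, shift)          # a = w*shift + x
        y, z = divmod(b, shift)          # b = y*shift + z
        p = karatsuba(w, y)              # wy
        q = karatsuba(x, z)              # xz
        r = karatsuba(w + x, y + z)      # wy + wz + xy + xz
        return p * shift * shift + (r - p - q) * shift + q

    print(karatsuba(981, 1234))  # 1210554, as in the worked example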

7 Sep 30, 2008

Assignment 2 is available.

7.1 D&C: Multiplying Matrices

Multiplying two square matrices. The basic method takes n^3 steps, and in some sense n^2 is the best you can hope for, since you need to write n^2 numbers in the result! Generally, we assume that arithmetic is unit cost (for this problem we don't need to make that assumption).

Basic D&C: Divide each matrix into (n/2) × (n/2) blocks:

    [A B] [E F]   [I J]
    [C D] [G H] = [K L]

with I = AE + BG, etc. Each of the four output blocks takes 2 subproblems, plus O(n^2) additions:

    T(n) = 8T(n/2) + O(n^2)

By the master theorem, a = 8, b = 2, k = 2, and a = 8 > b^k = 4 (the case where the recursive work overwhelms the rest), so T(n) ∈ Θ(n^{log_b a}) = O(n^3) – no progress.

Strassen's Algorithm shows how to get by with just seven (a = 7) subproblems. This gives

    T(n) = 7T(n/2) + O(n^2)

which is Θ(n^{log_2 7}) ≈ O(n^{2.8}). We're not discussing it here, but if you're curious it's in the textbook. There are more complicated algorithms that get even better results (only for very large n, however).

7.2 D&C: Closest pair of points

Divide and conquer is very useful for geometric problems. The problem: given n points in a plane, select the closest two by Euclidean distance. (There are other measures, including the "Manhattan distance," which is the distance assuming you can't cut across city blocks.)

In one dimension we can use brute force, and that's O(n^2). How would we do better? Sort and compare adjacent numbers – for example, consider {10, 17, 5, 100}.

In a plane, what about sorting by position on one axis and comparing neighbours? Nope! What's the way?

(1) Divide the points into left/right halves Q and R at the median x coordinate. It is most efficient to sort once by x coordinate up front; then we can find the dividing line L in O(1) time.

(2) Recurse on Q and R:

    δ = min( closest pair in Q, closest pair in R )

The solution is the minimum of δ and the closest pair crossing L. So we need to find pairs q ∈ Q, r ∈ R with d(q, r) < δ. If d(q, r) < δ then d(q, L) < δ and d(r, L) < δ, i.e. q and r lie in the strip of width 2δ around L. We can restrict our search to S = the points in this strip. But S can be all the points! Our hope is that if we sort S by y coordinate, then any pair q ∈ Q, r ∈ R with d(q, r) < δ is near each other in sorted order.

Claim: If S is sorted by y coordinate, and q ∈ Q and r ∈ R with d(q, r) < δ, then they are at most seven positions apart in sorted order.

Proof: (Go through the picture from class.) If q lies vertically more than δ away from r, then d(q, r) ≥ the vertical distance from q to r ≥ δ; so the relevant points fit in a δ × 2δ rectangle, and it suffices to bound the points in a δ × δ square on one side of L.

Claim: A δ × δ square T left of L can have at most 4 points in it. Because every two points in T have distance ≥ δ, we can fit four points, but only in the four corners. Therefore you can't fit five.

Total algorithm:
  – Sort by x
  – Sort by y
  – T(n) = 2T(n/2) + O(n) ∈ O(n log n)

More general problems – given n points, find the closest neighbour of each one. This can be done in O(n log n) (not obvious). Also:
• Voronoi diagrams
• Delaunay triangulations
These are used in mesh generation.

7.3 Hidden Surface Removal

(A baby version of it.) Find the "upper envelope" of a set of n lines in O(n log n) by divide & conquer.

8 Oct 2nd, 2008

8.1 Dynamic Programming

Weighted Interval Scheduling. Recall interval scheduling, aka activity selection, aka packing of intervals: pick the maximum number of disjoint intervals. Generalization – each interval i has a weight w(i); pick disjoint intervals to maximize the sum of the weights. What if we try to use Greedy?

• Pick maximum weight first – fails.

An even more general problem: given a graph G = (V, E) with weights on the vertices, pick a set of vertices, no two joined by an edge, to maximize the sum of weights (maximum weight independent set). Make G with a vertex for each interval and an edge when two intervals overlap, and weighted interval scheduling becomes this problem.

Let OPT(I) = a maximum weight set of non-overlapping intervals, and W-OPT(I) its weight (the sum of the weights of the intervals in OPT(I)). A general idea: for interval (or vertex) i, either we use it or we don't.

• If we don't use i: OPT(I) = OPT(I \ {i}).
• If we use i: OPT(I) = {i} ∪ OPT(I′), where I′ = the set of intervals that don't overlap with i.

So W-OPT(I) = max{ W-OPT(I \ {i}), w(i) + W-OPT(I′) }. This leads to a recursive algorithm with

    T(n) = 2T(n − 1) + O(1)

But this is exponential time: essentially we are trying all possible subsets of the n items – all 2^n of them. The same subproblem may be solved many times in your program, so don't use recursion blindly. Solution: use memoized recursion (see text), OR use an iterative approach. Let's look at an algorithm using the second approach.

For intervals (but not for the general graph problem) we can do better. Order intervals 1, ..., n by their right endpoints. Let p(i) = the max index j < i such that interval j doesn't overlap interval i; in particular p(n) = the max index j such that interval j doesn't overlap n. If we choose interval n, then what's left is all intervals disjoint from n – which has the form 1, ..., p(n). So

    W-OPT(1..n) = max( W-OPT(1..n−1), w(n) + W-OPT(1..p(n)) )

With the notation M[i] = W-OPT(1..i):

    M[0] = 0
    for i = 1..n
        M[i] = max{ M[i-1], w(i) + M[p(i)] }
    end

Runtime is O(n), after sorting by right endpoint in O(n log n). What about computing p(i) for i = 1..n? Sort by the left endpoint as well; then, as an exercise, find all p(i), i = 1..n in O(n) time.

So far this algorithm finds W-OPT but not the actual set OPT. One possibility: enhance the above loop to keep the set OPT(1..i). The danger here is that storing n sets of size n needs n^2 space. Better: first compute M as above, then recurse:

    fun OPT(i)
        if M[i] >= w(i) + M[p(i)] then return OPT(i-1)
        else return { i } union OPT(p(i))

(with OPT(0) = the empty set). Then call OPT(n). This leads to an O(n) time algorithm.
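A compact runnable sketch of the M[ ] recurrence (my own transcription; I compute p(i) with binary search rather than via the two-sorted-orders exercise):

    # Weighted interval scheduling via the M[] recurrence (names mine).
    from bisect import bisect_right

    def weighted_intervals(intervals):
        # intervals: (start, finish, weight), sorted here by finish time
        intervals = sorted(intervals, key=lambda t: t[1])
        finishes = [f for _, f, _ in intervals]
        n = len(intervals)
        M = [0] * (n + 1)
        for i in range(1, n + 1):
            s, f, w = intervals[i - 1]
            p_i = bisect_right(finishes, s)    # intervals ending at or before s
            M[i] = max(M[i - 1], w + M[p_i])
        return M[n]

    print(weighted_intervals([(0, 3, 2), (2, 5, 4), (4, 7, 4), (1, 8, 7)]))  # 7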

8.2 Second example: optimum binary search trees

Store values 1, ..., n in the leaves of a binary tree (in order). Given the probability p_i of searching for i, build a binary search tree to minimize the expected search cost

    sum_{i=1}^{n} p_i · depth(i)

Note: In CS 240 you did dynamic binary search trees – insert, delete, and rebalancing to control depth. This is different in that we have the items and probabilities ahead of time. The difference from Huffman coding (a similar problem) is that for Huffman codes, the left-to-right order of the leaves is free; here it is fixed.

The number of ways to build a binary tree on leaves 1 ... n is

    P_n = sum_{i=1}^{n−1} P_i P_{n−i}

These are the Catalan numbers, with P_n ∈ Ω(4^n / n^2), which is exponential. The heart of the dynamic programming solution: try all possible splits into 1..k and k+1..n. Subproblem: ∀i, j, find the optimum tree for the values i, i+1, ..., j.

    for i=1..n
        M[i,i] = p_i
    end
    for r=1..n-1
        for i=1..n-r
            -- solve for M[i, i+r]
            best <- M[i,i] + M[i+1, i+r]
            for k=i+1..i+r-1
                temp <- M[i,k] + M[k+1, i+r]
                if temp < best, best <- temp
            end
            M[i, i+r] <- best + sum_{t=i}^{i+r} p_t
            (better: precompute P[j] = sum_{t=1}^{j} p_t, then use P[i+r] - P[i-1])
        end
    end

Runtime? O(n^3). Exercise: work this out.

9 Oct 7th, 2008

Last day, we looked at weighted interval scheduling and optimum binary search trees. Today, we'll look at matrix chain multiplication. The problem is to compute the product of n matrices M_1 × M_2 × ... × M_n, where M_i is an α_{i−1} × α_i matrix. What is the best order in which to do the multiplications? Think about this in terms of parenthesizing the matrices: e.g. we could calculate ((M_1 M_2)(M_3 M_4)) or (((M_1 M_2)M_3)M_4).

Solve subproblems: m_{i,j} = the min cost, in scalar multiplications, to multiply M_i ... M_j. The idea is to break into subproblems M_i..M_k and M_{k+1}..M_j and combine:

    m[i,j] = min_{k=i..j−1} { m[i,k] + m[k+1,j] + α_{i−1} α_k α_j }

Algorithm pseudocode:

    for i=1..n
        m(i,i) <- 0
    end
    for diff=1..n-1
        for i=1..n-diff
            j <- i + diff
            m(i,j) <- infinity
            for k=i..j-1
                temp <- m(i,k) + m(k+1,j) + α_{i-1} α_k α_j
                if temp < m(i,j), m(i,j) <- temp
            end
        end
    end

The runtime is O(n^3): O(n^2) subproblems of O(n) each. The final answer is m(1, n). Exercise: keep a matrix of the best k values and use it to recover the actual parenthesization.

9.1 Example 2: Minimum Weight Triangulation

Problem: Given a convex polygon with vertices 1 ... n in clockwise order, divide it into triangles by adding "chords" – segments from one vertex to another. No two chords are allowed to cross. The goal is to minimize the sum of the lengths of the chords we use. (A more general problem is to triangulate a set of points, "minimum triangulation." We will give a dynamic programming algorithm that will also work for non-convex shapes.)

Picking the smallest chord greedily does not work. The dynamic programming approach for the convex polygon case: choosing one chord breaks the polygon into two subpolygons. Notice that a set of consecutive vertices gives a subpolygon, so we can get by looking just at subpolygons on vertices i, i+1, ..., j. The edge (1, n) lies in some triangle with third vertex k – try all choices for k.

Let's count the perimeter as well as the chords. This doesn't hurt our optimization and it makes the base cases easier. Let ℓ(i, j) be the length of the segment from i to j, and

    m(i, j) = min sum of edge lengths to triangulate the subpolygon on vertices i, i+1, ..., j

Then m(i, i+1) = ℓ(i, i+1), and

    m(i, j) = min_{k=i+1..j−1} { m(i, k) + m(k, j) } + ℓ(i, j)

For example m(i, i+2) = ℓ(i, i+1) + ℓ(i+1, i+2) + ℓ(i, i+2); we don't actually need a special case for it – it falls out of the general formula. Algorithm:

    initialize m(i,i+1) <- ℓ(i,i+1) for all i
    for diff=2..n-1
        for i=1..n-diff
            j <- i + diff
            m(i,j) <- infinity
            for k=i+1..j-1
                t <- m(i,k) + m(k,j) + ℓ(i,j)
                if t < m(i,j) then m(i,j) <- t
            end
        end
    end

Runtime O(n^3): an n × n table, O(n^2) subproblems, O(n) work each.

10 Oct 9th, 2008

Midterm (Mon Oct 20th): covers material up through today, and a bit of next week's material too.

10.1 Dynamic Programming

Key idea: Bottom-up method: identify the subproblems, and order them so that you only rely on previously solved subproblems.

Example (Knapsack/Subset Sum). Recall the knapsack problem: given items 1, ..., n, where item i has weight w_i and value v_i, and W, the knapsack capacity, all ∈ N, choose a subset S ⊆ {1, ..., n} such that sum_{i∈S} w_i ≤ W and sum_{i∈S} v_i is maximized. Recall fractional versus 0-1: a greedy algorithm works for the fractional case, but for the 0-1 knapsack there is no known polynomial-time algorithm. (Note: the coin changing problem is similar to knapsack, but with multiple copies of the items.)

Top-down view: item n is either IN (leaving items 1 ... n−1 with capacity W − w_n) or OUT (items 1 ... n−1 with capacity W). The subproblems are, for each i = 0..n and w = 0..W:

    OPT(i, w) = the best value using items 1..i and capacity w

How do we solve a subproblem? If w_i > w then OPT(i, w) ← OPT(i−1, w) (can't use item i); otherwise

    OPT(i, w) ← max( OPT(i−1, w),                  [don't include i]
                     v_i + OPT(i−1, w − w_i) )     [include i]        (*)

Pseudo-code and ordering of subproblems: store OPT(i, w) in a matrix M[i, w], i = 0..n, w = 0..W:

    initialize M[0,w] := 0 for w = 0..W
    for i=1..n
        for w=0..W
            compute M[i,w] with (*)
        end
    end

M[n, W] gives the OPT value. Exercise: find the optimal set S as well.
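One way to do the exercise is to walk back through the finished table; here is a runnable sketch (variable names mine):

    # 0-1 knapsack table (*) plus recovery of the chosen set (names mine).
    def knapsack(weights, values, W):
        n = len(weights)
        M = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for w in range(W + 1):
                M[i][w] = M[i - 1][w]                  # don't include item i
                if weights[i - 1] <= w:                # or include it
                    M[i][w] = max(M[i][w],
                                  values[i - 1] + M[i - 1][w - weights[i - 1]])
        # Walk back through the table to recover S.
        S, w = [], W
        for i in range(n, 0, -1):
            if M[i][w] != M[i - 1][w]:                 # item i was used
                S.append(i)
                w -= weights[i - 1]
        return M[n][W], sorted(S)

    print(knapsack([6, 4, 4], [12, 7, 6], 8))  # (13, [2, 3])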

Runtime: O(n × W) – outer loop n, inner loop W, constant work for (*). Is this good? Does it behave like a polynomial? That depends on the size of the input. Note that w_i ≤ W (else throw out item i), so the size in bits of w_1, ..., w_n and W is ≤ (n + 1) log W; the input size is O(n log W). But the running time is O(nW) = O(n 2^k), where k is the number of bits of W. Intuition for why this is bad: let's say we have weights .001, .002, ..., and W = 100 – written as integers, W becomes enormous while the instance hasn't really changed. This algorithm is called "pseudo-polynomial" because the runtime is polynomial in the value of W, not in the size (number of bits) of W.

10.2 Certain types of subproblems

[KT] has examples of the common patterns:

• Input x_1, ..., x_n with subproblems x_1, ..., x_i. Number of subproblems: O(n).
• Input x_1, ..., x_n with subproblems x_i, ..., x_j. Number of subproblems: O(n^2).
• Input x_1, ..., x_n and y_1, ..., y_m with subproblems x_1, ..., x_i and y_1, ..., y_j. Number of subproblems: O(n × m).
• Input is a rooted tree (not necessarily binary) and the subproblems are rooted subtrees.

Example: Longest ascending subsequence. Given a_1, ..., a_n, find a_{i_1} < a_{i_2} < ... < a_{i_j} with i_1 < i_2 < ... < i_j, maximizing j. Can we use subproblems on a_1, ..., a_i? Consider the 2nd last item a_j of a solution: what precedes the last item is itself a longest ascending subsequence ending at a_j. So let l_i = the length of the largest ascending subsequence ending with a_i:

    l_i = max{ 1 + l_j : j < i, a_j < a_i }    (or 1 if there is no such j)

Find the answer: max l_i over i = 1..n. This is an O(n^2) algorithm: n subproblems of O(n) each.

10.3 Memoization

Use recursion (not the explicit bottom-up ordering of subproblems we have used). The danger: solving the same sub-subproblem over and over, e.g. T(n) = 2T(n − 1) + O(1) – exponential! So store the solved subproblems. Advantage: storing solved subproblems saves time when we don't need solutions to all of the subproblems.

11 Oct 14th, 2008

Assignment 2 is due Friday. The midterm is Mon Oct 20th, 7 PM; the alternate sitting is during class time on Tuesday.

11.1 Graph Algorithms

A graph is G = (V, E), with V a finite set of vertices and E ⊆ V × V a set of edges.

• Directed graph: edge (u, v); order matters.
• Undirected graph: (u, v) = (v, u).
• No loops (i.e. no edge (u, u)).

• No multiple edges.
• 0 ≤ m ≤ (n choose 2) = n(n−1)/2 undirected.
• 0 ≤ m ≤ n(n−1) directed.

We will use n or |V| for the number of vertices, and m or |E| for the number of edges; m ∈ O(n^2).

Storing a graph:
• Adjacency matrix: A(i, j) = 1 if there is an edge from i to j, else 0.
• Adjacency list: vertices down the left, edge destinations in a list on the right.

Advantages and disadvantages?
• Space: n^2 for the matrix; 2m + n for the lists.
• Time to test e ∈ E: O(1) for the matrix; O(n) (or O(log n) with a balanced structure) for a list.
• Enumerating edges: O(n^2) versus O(m + n).

We usually use adjacency lists – then we can (sometimes) get algorithms with runtime better than O(n^2).

What is a path? A sequence of vertices where every consecutive pair is joined by an edge. A walk allows repetition of vertices and edges; a simple path does not. A cycle is a path from u back to u. If there is a walk from u to v, then there is a simple path from u to v.

We say an undirected graph G is connected if for every pair of vertices there is a path joining them. For testing if a graph is connected, we can use DFS or BFS. For directed graphs there are different notions of connectivity; a graph can be strongly connected – ∀u, v ∈ V there is a directed path from u to v.

Tree: A graph that is connected but has no cycles. Note: a tree on n vertices has n − 1 edges.

11.2 Minimum Spanning Trees

Problem: Given an undirected graph G = (V, E) and weights w ≥ 0: E → R, find a minimum weight subset of edges that's connected: find E′ ⊆ E such that (V, E′) is connected and w(E′) = sum_{e∈E′} w(e) is minimized.

Claim: E′ will be a tree. Else E′ has a cycle; throw away an edge of the cycle, which leaves a connected graph (if some path a–b used the edge (u, v), replace (u, v) with the rest of the cycle).

Almost any greedy approach will succeed:
• Take a minimum weight edge that creates no cycle.

• Throw away a maximum weight edge that doesn't disconnect the graph.
• Grow one connected component and always use the minimum weight edge leaving it.

All of these are justified by one lemma:

Lemma: Let V_1, V_2 be a partition of V (into two disjoint non-empty sets with union V). Let e be a minimum-weight edge from V_1 to V_2. Then there is a minimum spanning tree that includes e.

Stronger version: Let X be a set of edges contained in some minimum spanning tree, with no edge of X going from V_1 to V_2. Then there is a minimum spanning tree that includes X ∪ {e}.

Proof: Let T be a minimum spanning tree (for the stronger version, one containing X). Suppose e = (u, v) ∉ T. T has a path P that connects u and v, and P must use an edge from V_1 to V_2 – say f. Let T′ = T ∪ {e} \ {f}: exchange e for f. T′ is a spanning tree: P ∪ {(u, v)} makes a cycle, so we can remove f and stay connected. w(e) ≤ w(f), so w(T′) ≤ w(T). Note that T′ contains e and X (because f is not in X: no edge of X crosses the partition).

Kruskal's Algorithm:

• Order the edges by weight: w(e_1) ≤ w(e_2) ≤ ... ≤ w(e_m).

    T <- empty set
    for i = 1..m
        if e_i does not make a cycle with T then T <- T ∪ {e_i}
    end

• We add e = (u, v) iff u and v are in different connected components.
• To test this efficiently we use the Union-Find data structure:
  – Find(element) – find which set contains the element.
  – Union – unite two sets.
  – Add edge e iff Find(u) ≠ Find(v).
  – Adding edge e to T ⇒ unite the connected components of u and v.

A simple Union-Find structure: store an array C(1 .. n), where C(i) is the name of the connected component containing vertex i. Find is O(1). Union must rename one of the two sets: rename the smaller one. Then n Unions take O(n log n) total (in CS 466 you'll see how to reduce this).

Kruskal's Algorithm takes O(m log m) to sort, plus O(n log n) for the Union-Find work. And O(m log m) = O(m log n), since log m ≤ log n^2 = 2 log n.
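A runnable sketch of Kruskal with the simple array-based Union-Find, renaming the smaller component on Union (names mine):

    # Kruskal's algorithm with array-based Union-Find (names mine).
    def kruskal(n, edges):
        # edges: list of (weight, u, v) with vertices 0..n-1
        comp = list(range(n))                # C(i) = component name of vertex i
        members = {i: [i] for i in range(n)}
        tree = []
        for w, u, v in sorted(edges):
            if comp[u] != comp[v]:           # Find(u) != Find(v): no cycle
                tree.append((u, v, w))
                a, b = comp[u], comp[v]
                if len(members[a]) < len(members[b]):
                    a, b = b, a              # rename the smaller component, b
                for x in members[b]:
                    comp[x] = a
                members[a].extend(members.pop(b))
        return tree

    print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))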

12 Oct 16th, 2008

• Assignment 1 – out of 40.
  – The marking scheme is in the newsgroup.
  – Solutions will be on the website.
• Assignment 2 – due tomorrow.
• Midterm – Monday – covers to the end of today.
  – You are allowed one 8.5 × 11 sheet brought to the midterm. It doesn't have to be hand-written, either.

12.1 Graph Algorithms

Minimum Spanning Tree: Given an undirected graph G = (V, E) with weight function w: E → R+, find a subset of edges E′ ⊆ E such that (V, E′) is connected, of minimum total weight. Recall:

• Kruskal's algorithm orders the edges from minimum to maximum weight, and takes each edge unless it forms a cycle with previously chosen edges.
• Lemma (last day): among the edges connecting two groups, the cheapest one is indeed the best. Correctness follows from this.

12.1.1 Prim's Algorithm

Also a greedy algorithm; builds a tree. General structure: let U be the vertices of the tree so far. Initially, U = {s}. While U ≠ V: find a minimum weight edge e = {u, v} where u ∈ U and v ∈ V − U; add e to T and v to U. Correctness – from the lemma last day.

Implementation: we need to (repeatedly) find a minimum-weight edge leaving U (as U changes). Let δ(U) be the set of edges from U to V − U; we want to find the minimum over δ(U). We need a priority queue – use a heap. Recall that a heap provides O(log n) for insert and delete, and O(1) for finding the minimum.

Exactly how does δ(U) change? When we do U ← U ∪ {v}: any edge from U to v leaves δ(U), and any other edge incident with v enters δ(U). So for each x adjacent to v:
• if x ∈ U, remove edge (x, v) from the priority queue;
• else, insert edge (x, v) into the PQ.

For one vertex v, how many PQ inserts/deletes do we need? n in the worst case.

The total number of PQ insert/delete operations over all vertices v (we hope for better than n × n): every edge enters δ(U) once and leaves once, so 2m operations. (deg(v) = the number of edges incident with v, and sum_{v∈V} deg(v) = 2m.) Total time for the algorithm is O(n + m log m) = O(m log n), because m ≤ n^2 gives log m ≤ 2 log n. (Check first if m < n − 1, and if so bail out – the graph is disconnected.)

Improvements:
• Store vertices in the PQ instead of edges. Define w(v) = the minimum weight of an edge from U to v. When we do U ← U ∪ {v}, we must adjust the weights of some vertices. Gives O(m log n).
• Tweak the PQ to be a "Fibonacci heap," which gives O(1) for a weight change and O(log k) to find the minimum. Gives O(n log n + m).
• Borůvka's Algorithm: another way to handle this case.

12.2 Shortest Paths

General input: a directed graph G = (V, E) with weights w: E → R. The weight of a path = the sum of the weights of its edges. (From the diagram in class:) the shortest path from A to D is ABD, weight 3 + 2 = 5; from A to E it is ABE, weight 4.

Versions of the shortest path problem:
1. Given u, v ∈ V, find a shortest path from u to v.
2. Given u ∈ V, find shortest paths to all other vertices – the "single source shortest path problem." Build a shortest path tree from u, like Prim's algorithm. Later: Dijkstra's algorithm for this.
3. Find a shortest u, v path ∀u, v – the "all pairs shortest path problem."

Solving 1 seems to involve solving 2.

We allow negative weight edges, but disallow negative weight cycles. (If we have a negative weight cycle, then repeating it potentially gives paths of −∞ weight. We might instead ask for the shortest simple path, but this is actually hard – NP-complete.)

Dynamic programming solution for problem 3: M[u, v, l] = min weight of a u to v path using ≤ l edges; n^3 subproblems, for l = 1 ... n − 1. In what way are these subproblems smaller? They use fewer edges. Alternatively: does the shortest u − v path go through x or not? Split into: find a shortest path u − x and a shortest path x − v, where the u − x and x − v paths don't use x as an intermediate vertex.

13 Oct 21, 2008

13.1 All Pairs Shortest Path

Given a directed graph G = (V, E) with weights w: E → R, find shortest u − v paths for all u, v ∈ V. In general, the weight of a path is the sum of the weights of the edges in the path; e.g. in the diagram from class, w(ACD) = 8.

Assume: no negative weight cycles. Otherwise, the minimum weight of a path can be −∞.

Use dynamic programming. Two options:

1. Fewer edges – an efficient dynamic program with M[u, v, l] = shortest u, v path with ≤ l edges. However, we're not using this one.
2. Main idea: try all intermediate vertices x. If we use x, we need a shortest u → x path and a shortest x → v path. How are these subproblems simpler? The u → x and x → v paths do not use x as an intermediate vertex.

We'll use the second. Let V = {1, ..., n}, and solve the subproblems

    D_i[u, v] = min length of a u → v path using intermediate vertices only from {1, ..., i}

Number of subproblems: O(n^3). The final answer is the matrix D_n[u, v]. How do we initialize?

    D_0[u, v] = w(u, v) if (u, v) ∈ E;  0 if u = v;  ∞ otherwise

Main formula:

    D_i[u, v] = min{ D_{i−1}[u, v], D_{i−1}[u, i] + D_{i−1}[i, v] }

This leads to:

13.1.1 Floyd-Warshall Algorithm

    Initialize D_0 as above

    for i = 1..n
        for u = 1..n
            for v = 1..n
                D_i[u,v] = min{ D_{i-1}[u,v], D_{i-1}[u,i] + D_{i-1}[i,v] }
    return D_n

Time is O(n^3). The space, however, is also O(n^3), which is extremely undesirable. Notice that to compute D_i we only use D_{i−1}, so we can throw away any earlier matrices, bringing space to O(n^2). In fact, even better (although not in degree of n), we can work in place:

    Initialize D full of D_0
    for i = 1..n
        for u = 1..n
            for v = 1..n
                D[u,v] = min { D[u,v], D[u,i] + D[i,v] }    (**)
    return D

Note: in the inner loop, D will be a mixture of D_i and D_{i−1}, but this is correct: we never go below the true minimum, and we still correctly compute the main equation.

How do we find the actual shortest path? We could compute H[u, v] = the highest numbered vertex on a shortest u → v path. Note: if we explicitly stored all n^2 paths, we'd be back to O(n^3) space – avoid this. Better:

• S[u, v] = the successor of u on a shortest u, v path.

Initialize S[u, v] = v if (u, v) ∈ E, and φ otherwise. Modify (**) to become:

    if D[u,i] + D[i,v] < D[u,v] then
        D[u,v] <- D[u,i] + D[i,v]
        S[u,v] <- S[u,i]

Once we have S with complete paths:

    Path(u,v):
        x <- u
        while x ≠ v
            output x
            x <- S[x,v]
        end
        output v

Exercise: Use this algorithm to test if a graph has a negative weight cycle.
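The in-place (**) version with the successor matrix, as a runnable sketch (names mine):

    # Floyd-Warshall with successor matrix S for path recovery (names mine).
    def floyd_warshall(n, edges):
        INF = float("inf")
        D = [[INF] * n for _ in range(n)]
        S = [[None] * n for _ in range(n)]
        for u in range(n):
            D[u][u] = 0
        for u, v, w in edges:
            D[u][v] = w
            S[u][v] = v
        for i in range(n):                   # intermediate vertex
            for u in range(n):
                for v in range(n):
                    if D[u][i] + D[i][v] < D[u][v]:
                        D[u][v] = D[u][i] + D[i][v]
                        S[u][v] = S[u][i]    # first step now heads toward i
        return D, S

    def path(S, u, v):
        out = [u]
        while u != v:
            u = S[u][v]
            out.append(u)
        return out

    D, S = floyd_warshall(3, [(0, 1, 5), (1, 2, 2), (0, 2, 9)])
    print(D[0][2], path(S, 0, 2))            # 7 [0, 1, 2]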

14 Oct 23, 2008

Shortest Paths. Last day's study was the all-pairs shortest path problem, whereas today's is the single-source shortest path: find the shortest path from s to v, ∀v.

• With no negative weight edges, we can use Dijkstra's Algorithm, which is O(m log n).
• With no directed cycles: O(n + m).
• With no negative weight cycles: O(n × m). (This is the most general – still faster than all pairs.)

14.1 Dijkstra's Algorithm

Input: A directed graph G = (V, E), a weight function w: E → R≥0, and a source vertex s.
Output: A shortest s → v path ∀v.

Idea: Grow a tree of shortest paths from s. Initially, B = {s}. General step: we have shortest paths to all vertices in B. Choose the edge (x, y), where x ∈ B and y ∈ V \ B, that minimizes

    d(s, x) + w(x, y)

Call this minimum d:
• d(s, y) ← d
• Add (x, y) to the shortest path tree; parent(y) ← x
• B ← B ∪ {y}

This is greedy in the sense that y has the next minimum distance from s.

Claim: d = the minimum distance from s to y.

Proof: The idea is that any s → y path π has this structure:
• s: begins here
• π_1: a prefix inside B, ending at some u ∈ B

• (u, v): the first edge leaving B
• π_2: the rest of the path (which may re-enter B)

So w(π) = w(π_1) + w(u, v) + w(π_2). Note that w(π_1) + w(u, v) ≥ d (by the choice of the minimum), and w(π_2) ≥ 0, as edge-weights are non-negative. From the Claim, by induction on |B|, this algorithm finds the shortest paths. (Same argument as for Prim.)

Implementation: Make a priority queue (heap) on the vertices V \ B using the value D(v), where D(v) = the minimum weight of a path from s to v using a path in B plus one edge; the minimum value of D gives the wanted vertex.

• Initialize:
  – D(v) ← ∞, ∀v
  – D(s) ← 0
  – B ← φ
• While |B| < n:
  – y ← vertex of V \ B of minimum D(y)
  – B ← B ∪ {y}
  – For each edge (y, z) where z ∈ V \ B:
    ∗ t ← D(y) + w(y, z)
    ∗ If t < D(z) then
      · D(z) ← t
      · parent(z) ← y

Store the D values in a heap. How many times do we extract the minimum? n times, at O(log n) each. The "decrease D value" is done ≤ m times, and each decrease-D operation is O(log n) (done as an insert and delete). Total time is O(n log n + m log n), which is O(m log n) if m ≥ n − 1. Using a Fibonacci heap, we can decrease this to O(n log n + m).

14.2 Connectivity in Graphs

Testing connectivity; exploring a graph. Recall Breadth First Search (BFS) and Depth First Search (DFS). (For the example graph on vertices 1–8 drawn in class:)
• BFS: visit 1, then everything adjacent to 1, then everything adjacent to 2, etc.
• DFS: keep following edges to unvisited vertices, backtracking when stuck.
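Both traversals as a small runnable sketch on adjacency lists (my own transcription):

    # Minimal BFS and DFS over adjacency lists, both O(n + m) (names mine).
    from collections import deque

    def bfs(adj, s):
        order, seen, q = [], {s}, deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        return order

    def dfs(adj, s, seen=None):
        if seen is None:
            seen = set()
        seen.add(s)
        order = [s]
        for w in adj[s]:
            if w not in seen:
                order.extend(dfs(adj, w, seen))
        return order

    adj = {1: [2, 5], 2: [1, 3], 3: [2, 4], 4: [3], 5: [1]}
    print(bfs(adj, 1), dfs(adj, 1))  # [1, 2, 5, 3, 4] [1, 2, 3, 4, 5]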

Either one takes O(n + m). For exploring, DFS is more useful.

(DFS tree from class on vertices 1–7.) Solid edges are DFS tree edges; dotted edges are "back edges." Claim: Every non-tree DFS edge goes from some vertex to an ancestor – e.g. we can't have the edge (5, 7) between two different branches. This justifies the term "back edge."

DFS Algorithm:

    Initialize:
        mark(v) <- not visited, for all v
        num <- 1
        DFS(s)

    DFS(v):    (recursive)
        mark(v) <- visited
        DFSnum(v) <- num;  num <- num + 1
        for each edge (v, w)
            if mark(w) = not visited then
                (v, w) is a tree edge
                parent(w) <- v
                DFS(w)
            else if parent(v) ≠ w then
                (v, w) is a back edge

By the way, Paul Seymour, a famous name in graph theory, is visiting UW this weekend. He's getting an honorary degree on Saturday at convocation, and he's speaking tomorrow at 3:30.

We'll talk about "higher connectivity" – for networks, connected isn't enough; we want connectivity even with a few failures (of vertices/edges). What's bad is a cut vertex – if it fails, the graph becomes disconnected. We call a graph 2-connected if there are no cut vertices; 3-connected means we can remove any two vertices without breaking the graph into components. A figure-eight graph made of two triangles (or squares) joined at a vertex has two 2-connected components: the triangles/squares.

14.2.1 Finding 2-connected components

We can use DFS to find cut vertices and 2-connected components in O(n + m) time.
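A runnable sketch of the DFS-numbering algorithm, classifying tree and back edges (names mine; the extra num(w) < num(v) test just avoids reporting the same undirected back edge twice):

    # DFS with tree/back edge classification (names mine).
    def dfs_edges(adj, s):
        num = {}                          # DFSnum
        parent = {s: None}
        tree, back = [], []
        def visit(v):
            num[v] = len(num) + 1
            for w in adj[v]:
                if w not in num:
                    parent[w] = v
                    tree.append((v, w))
                    visit(w)
                elif parent[v] != w and num[w] < num[v]:
                    back.append((v, w))   # goes up to an ancestor
        visit(s)
        return tree, back

    adj = {1: [2, 3], 2: [1, 3], 3: [2, 1]}   # a triangle
    print(dfs_edges(adj, 1))  # tree: (1,2),(2,3); back: (3,1)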

What do cut vertices look like in a DFS tree?

• A leaf is never a cut vertex.
• The root is a cut vertex iff its number of children is ≥ 2.

Removing an arbitrary (non-root, non-leaf) vertex v of the tree leaves the subtrees T_1, ..., T_i of v's children, and T_0, the part of the tree connected from above. Are these still connected in G \ v? It depends on back edges: if T_j has a back edge to T_0, then T_j stays connected to T_0; otherwise, it falls away (and is disconnected).

We need one more thing: high(v) = the highest (i.e. lowest DFS number) vertex reachable from v by going down tree edges and then along one back edge.

Claim: v is a cut vertex iff it has a DFS child x such that high(x) ≥ DFSnum(v).

Modifying the DFS code: set high(v) ← DFSnum(v) when we first visit v; for a back edge (v, w), set high(v) ← min{ high(v), DFSnum(w) }; and after returning from a tree child w, set high(v) ← min{ high(v), high(w) }. This is still O(n + m).

15 Oct 28th, 2008

Midterm: Think about it as out of 35. (In that case you got an 86%.)

In the workplace, you'll face a problem and need to find an algorithm. If you're extremely lucky, it'll be one of the ones we encountered. More likely, it'll be similar to one we've seen. But most likely, it'll be one nobody knows how to solve efficiently, and it's NP-complete. Options:

• Exact algorithm – and bear with the fact that it (may) take a long time.
• Heuristic approach – run quickly, with no guarantee on the quality of the solution.
• Approximation algorithms – run quickly, but with a guarantee on the quality.

(Note: to test a heuristic experimentally, you need an exact algorithm.)

15.1 Backtracking and Branch/Bound

Exact, exponential-time algorithms. (Backtracking is also useful for problems that are not NP-complete.) Backtracking: a systematic way to try all possibilities – a search in the implicit graph of partial solutions ("configurations"). A configuration records the choices made so far and the remaining subproblem to be solved. E.g. trying all permutations of 1 ... n: a configuration is the permutation so far and the remaining elements. E.g. knapsack: a configuration is the items selected so far and the items discarded so far, along with the capacity remaining.

Backtracking Algorithm: F = the set of active configurations. Initially F holds one configuration: the whole problem. While F ≠ φ: C ← remove a configuration from F; expand it into C_1, ..., C_t; for each C_i, test for success (solves the whole problem) and failure (dead end); otherwise, add C_i to F.

Storing F:

• Stack: DFS of the configuration space. Size: the height of the tree.
• Queue: BFS of the configuration space. Size: the width of the tree.
• Priority Queue: explore the current best configuration.

Usually height << width, and we should use DFS. Note: if F becomes empty and we haven't found a solution, then there is no solution.

Example: Subset Sum – knapsack where each item's value is its weight. Given items 1 ... n, a weight w_i for item i, and W, find a subset S ⊆ {1, ..., n} with sum_{i∈S} w_i ≤ W, maximizing sum_{i∈S} w_i.

Decision version – can we find S with sum_{i∈S} w_i = W exactly? A polynomial time algorithm for this decision version gives poly time for the optimization version.

Backtracking for the decision version of Subset Sum:
• Configurations are as above: S (chosen so far) and R (remaining).
• Keep w = sum_{i∈S} w_i and r = sum_{i∈R} w_i.

The configuration tree explores all subsets of {1, ..., n}:

                      S = {},  R = {1..n}
              1 in /                    \ 1 out
        S = {1}, R = {2..n}          S = {}, R = {2..n}
       2 in /          \ 2 out
    S = {1,2}, R = {3..n}   S = {1}, R = {3..n}    ...

We need to fill in the success test, w = W, and the failure test (pruning a configuration) when w > W or w + r < W. This is O(2^n), exploring all subsets of {1, ..., n}.

Before, we built a dynamic programming algorithm for Knapsack with O(n × W) subproblems. Which is better? It depends on W: e.g. if W has n bits, then W ≈ 2^n and backtracking is better.
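The decision-version search with both pruning tests, as a runnable sketch (names mine):

    # Backtracking for Subset-Sum, decision version (names mine).
    def subset_sum(weights, W):
        def rec(i, w, r):
            if w == W:
                return True                     # success: w = W
            if w > W or w + r < W or i == len(weights):
                return False                    # dead end: prune
            wi = weights[i]
            # branch: item i in, then item i out
            return rec(i + 1, w + wi, r - wi) or rec(i + 1, w, r - wi)
        return rec(0, 0, sum(weights))

    print(subset_sum([6, 4, 4], 8))   # True  (4 + 4)
    print(subset_sum([6, 4, 4], 9))   # False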

15.2 Branch-and-Bound

• For optimization problems; we'll talk about minimizing an objective function.
• Keep track of the minimum solution so far.
• Not DFS – explore the "most promising" configuration first.
• "Branch" – generate the children of a configuration (as in backtracking).
• "Bound" – for each configuration, compute a lower bound on the objective function, and prune if it is ≥ the minimum so far.

General paradigm:

• F = active configurations
• Keep best so far
• While F ≠ φ
  – C ← remove the "best" configuration from F
  – Expand C to children C_1, ..., C_t ("branch")
  – For each C_i:
    ∗ If C_i solves the problem, update best if it is better than the current best.
    ∗ Else if C_i is infeasible, discard it.
    ∗ Else ("bound"): if lower-bound(C_i) < best so far, add C_i to F.

15.2.1 Branch and Bound TSP Algorithm

Example: the Traveling Salesman Problem. This is a famous "hard" problem. The idea: we have a graph with weights on the edges, and our traveling salesman wants to start in a home town, visit every city exactly once, and return to the home town. Given a graph G = (V, E) and edge weights w: E → R≥0, find a cycle C that goes through every vertex exactly once and has minimum weight.

Algorithm: based on enumerating subsets of edges. Configuration: I_c ⊆ E (included edges) and X_c ⊆ E (excluded edges), with I_c ∩ X_c = φ; the edges of E \ (I_c ∪ X_c) are undecided. How to branch? Take the next edge not yet decided about: given I_c, X_c, choose e ∈ E \ (I_c ∪ X_c) and try both e ∈ I_c and e ∈ X_c. Necessary conditions for feasibility: E \ X_c must be connected, with ≥ 2 edges at each vertex – in fact it must be 2-connected; and I_c must not contain a cycle (short of a full tour) and must have ≤ 2 edges at each vertex.

But how to bound? Given I_c, X_c, find a lower bound on the minimum TSP tour respecting I_c, X_c. We want an efficiently computable lower bound (so it's sort of like a heuristic, but we don't have issues of correctness).

Information-Theoretic Lower Bounds e. 2. we’re finding a 1−tree. . an .g. . . For example. e. . 2008 Recall Course outline: • Designing algorithms • Analyzing algorithms • Lower Bounds – do we have the best algorithm? 16. .1 Oct 30th. we claim any algorithm will take at least this much time.2 Lower Bounds If we have a lower bound for a problem P . there are n! of them and it won’t take less than n! time to write them all down – Ω(n!). Note: distinction between lower bound for an algorithm and lower bound for a problem. 2. This takes log n bits as that is the information content of distinguishing n possibilities. look at multiplying large integers. 2008 Instead of finding a tour. . . a spanning tree on nodes 2. . • Branch wisely. In fact.1 Basic Techniques 1. if we ask for all the permutations of 1.2.) Final Enhancements: • When we choose the ”best” configuration C from F . w(min TSP-tour) ≥ w( min 1-tree ). (Not proven. a2 . But there is an algorithm (divide and conquer) with a better worst-case runtime – O(nk ) with k < 2. . n. find vertex i in minimum 1-tree with degree ≥ 2. Claim We can efficiently find a minimum weight 1-tree given Ic . . Lower bound based on output size. Let e = maximum weight edge 16 16. as our measure of best.g. school method is Ω(n2 ) worst case run time of because there are example inputs that take ≥ c × n2 steps. So use this for lower bound. Xc . Ω(log n) lower bound for searching for an element inside a1 .16 OCT 30TH. . use the one with the minimum 1-tree. Lower bounds for algorithms are hard to prove! 16. Claim Any TSP-tour is a 1-tree. 33 . But a lower bound for the problem says that all algorithms have to take ≥ some time. n (not a MST) and two edges from vertex 1 to leaves of the tree. . For an example. The school method was O(n2 ).

In a comparison-based model, each comparison gives one bit of information, and since we need log n bits, we need log n comparisons. Often this argument is presented as a tree.

3. Reductions: showing one problem is easier or harder than another. E.g. convex hull is harder than sorting: "If I could find convex hulls faster than O(n log n), then I could sort faster than O(n log n)." We take a list of numbers and map them onto a curve; then the convex hull tells us the sorted order. We'll do this later in the course (and in CS 360).

16.2.2 State-of-the-Art in Lower Bounds

• Some problems are undecidable (they don't have algorithms), e.g. the halting problem.
• Some problems can only be solved in exponential time.
• (At the lower end) some problems have Ω(n log n) lower bounds on special models.

Major open question: Many practical problems have no polynomial time algorithm and no proved lower bound. "Is there a TSP algorithm in O(n^6)?" – nobody knows. "Can O(n^3) dynamic programming algorithms be improved?" – nobody knows. The best that's known is proving that a large set of problems are all equivalent, and we know that solving one in polynomial time solves all the others. In the rest of the course, we'll fill this in.

16.3 Polynomial Time

Definition: An algorithm runs in polynomial time if its worst case runtime is O(n^k) for some k. What is polynomial?

    Θ(n)         YES
    Θ(n log n)   YES (because it's better than Θ(n^2))
    Θ(n^2)       YES
    Θ(n^100)     YES
    Θ(2^n)       NO
    Θ(n!)        NO

Low-degree polynomials are efficient; high-degree polynomials don't seem to come up in practice. The algorithms in this course were (mostly) all poly-time, except backtracking and certain dynamic programming algorithms (specifically 0-1 Knapsack).

Jack Edmonds is a retired C&O prof; he first formulated the idea of polynomial time. (The "matching" problem: you are given a graph and you want to assign the vertices into pairs.) In any other algorithms class, you would cover linear programming; we have a C&O department that covers that, but if you're serious about algorithms, you should be taking courses over there.

(Aside: permanents are like determinants, except all the terms are positive.)

16.4 Reductions

Problem A reduces (in polytime) to problem B – written A ≤ B or A ≤_P B, and we can say "A is easier than B" – if a (polytime) algorithm for B can be used to create a (polytime) algorithm for A. More precisely: there is a polytime algorithm for A that makes subroutine calls to a (polytime) algorithm for B.

Example: the longest increasing subsequence problem. We will reduce this problem not to shortest path, but to longest path in a graph. This is a reduction – it reduces the longest increasing subsequence problem to the longest path problem. Is it a polynomial-time reduction? And how can we solve the longest path problem? Reduction to the shortest path problem: negate the edge weights.

Consequences of A ≤ B: An algorithm for B gives an algorithm for A. And if we have a lower bound showing there is no polytime algorithm for A, then this implies there is no polytime algorithm for B. Note: we can have a reduction without having an algorithm for B. Also, if we prove the reductions A ≤_P B and B ≤_P A, then A and B are equivalent with respect to polytime (either both have polytime algorithms, or both don't).

Other history:
• In the 50's and 60's, there was a success story in creating linear programming and the simplex method – practical (though not polynomial) – and people reduced other problems to this one.
• Next step: integer linear programming. It seemed promising at the time, but in the 70's, with the theory of NP-completeness, we found this is actually a hard problem, and people did reductions from integer programming instead.

Our goal: to attempt to distinguish problems with poly-time algorithms from those that don't have any. This is the theory of NP-completeness. (NP = Non-deterministic Polynomial.) Today's topics: reductions (from last class), decision problems, P and NP.

17 Nov 4th, 2008

17.1 Decision Problems

What is a decision problem? A problem with output YES/NO or TRUE/FALSE. We will concentrate on decision problems to define P/NP. Why? It's more rigorous, and it seems to be equivalent to optimization anyways. Examples:

• Given a number, is it prime?
• Given a graph, does it have a Hamiltonian cycle? (a cycle visiting every vertex once)

• TSP decision version: given a graph G = (V, E) with w: E → R+, and given some bound k ∈ R, is there a TSP tour of length at most k?
• Independent Set: given a graph G = (V, E) and k ∈ N, is there an independent set of size ≥ k? (Optimization version: given G, find a maximum independent set.)

Usually, decision and optimization versions are equivalent with respect to polynomial time. Trivially, decision ≤_P optimization:
• Give G to the algorithm for the optimization problem.
• Return YES or NO depending on whether the returned set has size ≥ k.

Showing optimization ≤_P decision: suppose we have a poly-time algorithm for the decision version of independent set. To find the maximum size: for k = 1, 2, ..., n, give G, k to the decision algorithm, and stop when it says NO. Runtime: assume the decision algorithm takes O(n^t); then this loop takes O(n^{t+1}). We can find the actual independent set in polytime too. Idea: try vertex 1 in/out of the independent set, and so on. Exercise: fill this in and check poly-time.

17.2 P or NP?

Which problems are in P? Which are not in P? We will study a class of "NP-complete" problems that are equivalently hard (with respect to polytime), i.e. A ≤_P B for all A, B in the class, and none of which seem to be in P.

Definition: P = { decision problems that have polytime algorithms }.

NP problems are polytime if we get some lucky extra information. For independent set, what lucky info would help? It's easy to verify that a graph has an independent set of size ≥ k if you're given the set. (Contrast with verifying that G has no independent set of size ≥ k.) E.g. primes: given n, is it prime? It's not clear what info would help (there is some); but for the complementary question – given n, is it composite (= not prime)? – we could give the factors. Notes:

• Must be careful about the model of computing and the input size – count bits.
• In some sense, primality is the "decision" version of factoring (Factoring – find the prime factors; Primality – given a number, is it prime?). But although we can test primality in polynomial time, we can't factor in polynomial time (and to find a way would be bad news for cryptography!)

Definition of NP ("nondeterministic polynomial time"): there's a set of NP problems, which contains the P problems and the NP-complete problems (which are all equivalent). An algorithm B is a certifier for problem X if:

• B takes two inputs, s and t, and outputs YES or NO.
• ∀s: s is a YES input for X iff ∃t (a "certificate") such that B(s, t) outputs YES.

B is a polytime certifier if:
• B runs in polynomial time.
• There is a polynomial bound on the size of the certificate t in terms of the size of s.

Examples:

• Independent Set. Input is a graph G and k ∈ N. Question: does G have an independent set of size ≥ k? Claim: Independent Set ∈ NP. Proof: Certificate: u ⊆ V (a set of vertices). Certifier: check that u is an independent set, and check |u| ≥ k.

• Decision version of TSP. Input: G = (V, E), w: E → R+, and k ∈ R. Question: does G have a TSP tour of weight ≤ k? Certificate: a sequence of edges. Certifier: check the edges, check no repeated vertices, and check the sum of the weights is ≤ k.

• Subset-Sum. Input: w_1, ..., w_n in R+, and W. Is there a subset S ⊆ {1, ..., n} such that the sum is exactly W? Claim: Subset Sum ∈ NP. Certificate: S. Certifier: add the weights in S.

• Non-TSP: does G have no TSP tour of length ≤ k? Is Non-TSP in NP? Nobody knows.

Claim: P ⊆ NP. Let X be a decision problem in P.
• Certificate: nothing.
• Certifier algorithm: the original algorithm.
So X has a polytime certifier, which shows X ∈ NP.

Claim: any problem in NP has an exponential-time algorithm. Proof idea: try all possible certificates using the certifier. The number of certificates is O(2^{poly(n)}); in particular, the running time is O(2^{poly(n)}).

Open Questions: Is P = NP? co-NP consists of the "no versions" of NP problems; Non-TSP is in co-NP. Is co-NP = NP? Is P = NP ∩ co-NP?

18 Nov 6th, 2008

Recall A ≤_P B – problem A "reduces (in polytime) to" problem B if there is a polytime algorithm for A (possibly) using a polytime algorithm for B. (B is the "harder" one.) P = { decision problems with polytime algorithms } and NP = { decision problems with a polynomial-time certifier algorithm } (i.e. polytime IF we get extra information).

18.2 NP-Complete

These are the hardest problems in NP.

Definition: A decision problem X is NP-complete if:
1. X ∈ NP
2. For every Y ∈ NP, Y ≤_P X.

Two important implications:
1. If X is NP-complete and X has a polytime algorithm, then P = NP, i.e. every Y ∈ NP has a polytime algorithm.
2. If X is NP-complete and X has no polytime algorithm (i.e. a lower bound), then no NP-complete problem has a polytime algorithm.

If we know X is NP-complete, then to prove Z is NP-complete:
1. Prove Z ∈ NP.
2. Prove X ≤_P Z.
Note that X is the known NP-complete problem and Z is the new problem. Please don't get this backwards.

The first NP-completeness proof is hard: to show X NP-complete from scratch, we must show Y ≤_P X for all Y ∈ NP. Subsequent NP-completeness proofs are easier.

18.2.1 Circuit Satisfiability

The first NP-complete problem is called circuit satisfiability. (Diagram from class: a small circuit with inputs x_1, x_2 feeding ¬ and ∧ gates, combined by an ∨ gate into one output, the sink.)

The circuit is a DAG with OR, AND, and NOT gates; 0-1 values for the variables determine the output value. E.g., in the class example, if x_1 = 0 and x_2 = 1 then output = 0. Question: Are there 0-1 values for the variables that give 1 as output?

Circuit-SAT is a decision problem in NP:
• Certificate – values for the variables.
• Certifier – go through the circuit from the sources to the sink, computing values; check the output is 1.

Theorem: Circuit-SAT is NP-complete.

Proof Sketch: We know Circuit-SAT ∈ NP, as above. We must show Y ≤_P Circuit-SAT for all Y ∈ NP. The idea is that an algorithm becomes a circuit computation; a certifier algorithm with an unknown certificate becomes a circuit with variables as some of its inputs. The question "is there a certificate such that the certifier says YES" then becomes circuit satisfiability. Essentially, if we had a polynomial time way to test circuit satisfiability, we would have a general way to solve any problem in NP by turning it into a Circuit-SAT problem.

18.2.2 3-SAT

Satisfiability (of Boolean formulas):
• Input: a boolean formula, e.g. (x_1 ∧ x_2) ∨ (¬x_1 ∧ ¬x_2).
• Question: is there an assignment of 0, 1 to the variables that makes the formula TRUE (i.e. 1)?

Even a special form of Satisfiability (SAT) is NP-complete. 3-SAT: the "formula" is the ∧ of "clauses," each of which is the ∨ of three literals; a literal is a variable or the negation of a variable. E.g.

    (x_1 ∨ ¬x_1 ∨ x_2) ∧ (x_2 ∨ x_3 ∨ x_4) ∧ ...

Theorem: 3-SAT is NP-complete.

Proof: We will be rigorous.
• 3-SAT ∈ NP. Certificate: values for the variables. Certifier algorithm: check that each clause has ≥ 1 true literal.
• 3-SAT is harder than another NP-complete problem: i.e., prove Circuit-SAT ≤_P 3-SAT. Assume we have a polytime algorithm for 3-SAT, and use it to create a polytime algorithm for Circuit-SAT. The input to the algorithm is a circuit C, and we want to construct, in polytime, a 3-SAT formula F to send to the 3-SAT algorithm, such that C is satisfiable iff F is satisfiable.

We could derive a formula by carrying the inputs up through the circuit (i.e. for subformulas f_1 and f_2 feeding an ∨ gate, just pull the inputs up and write f_1 ∨ f_2). Caution: the size of the formula doubles at every level (thus this is not a polynomial time or size reduction).

Idea: make a variable for every node in the circuit. E.g., for the class example,

    F = x_7 ∧ (x_7 ≡ x_5 ∨ x_6) ∧ (x_5 ≡ x_1 ∧ x_2) ∧ (x_6 ≡ x_3 ∧ x_4) ∧ (x_3 ≡ ¬x_1) ∧ (x_4 ≡ ¬x_2)

Rewrite a ≡ b as (a ⇒ b) ∧ (b ⇒ a), and a ⇒ b as (b ∨ ¬a). For example, a ≡ (b ∨ c) becomes (a ⇒ (b ∨ c)) ∧ ((b ∨ c) ⇒ a), i.e. (b ∨ c ∨ ¬a) ∧ (a ∨ ¬(b ∨ c)), and the second part is (a ∨ (¬b ∧ ¬c)). We get

    (b ∨ c ∨ ¬a) ∧ (a ∨ ¬b) ∧ (a ∨ ¬c)

Note: we can pad these size-two clauses by adding a new dummy variable t: (a ∨ b) becomes (a ∨ b ∨ t) ∧ (a ∨ b ∨ ¬t). There's a similar padding for size 1.

The final formula F is:
– the ∧ of all the clauses for the circuit nodes,
– ∧ x_i, where i is the output node.

Claim: F has polynomial size and can be constructed in polynomial time.
Claim: C is satisfiable iff F is satisfiable. Proof: (⇒) by construction. (⇐) ...

19 Nov 11th, 2008

NP is the decision problems with a polynomial time certifier algorithm; P is the decision problems with a polynomial time algorithm. NP-complete problems are the hardest problems in NP.

Definition: A decision problem X is NP-complete if:
• X ∈ NP
• Y ≤_P X for all Y ∈ NP.

Once we know X is NP-complete, we can prove Z is NP-complete by proving:
• Z ∈ NP
• X ≤_P Z

19.1 Satisfiability – no restricted form

Recall: 3-SAT is NP-complete. There the input is a Boolean formula in a special form (three-conjunctive normal form), F = (x_1 ∨ x_2 ∨ ¬x_3) ∧ .... For SAT the input is any Boolean formula. Question: are there T/F values for the variables that make F true?

Theorem: SAT is NP-complete.
Proof:
• SAT ∈ NP.
• 3-SAT ≤_P SAT.

A 3-SAT formula is already a SAT formula, so given a polytime algorithm for SAT, we can just hand it the 3-SAT input directly.

19.2 Independent Set

Input: a graph G = (V, E) and a number k ∈ N. Question: is there a subset u ⊆ V with |u| ≥ k that is independent (i.e. no two vertices joined by an edge)?

Theorem: Independent-Set is NP-complete.

Proof: Independent-Set is in NP – see the previous lecture. We will show that 3-SAT reduces to Independent-Set: we want to give a polytime algorithm for 3-SAT using a hypothesized polytime algorithm for Independent-Set.

Input: a Boolean formula F. Goal: construct a graph G and choose k ∈ N such that F is satisfiable iff G has an independent set of size ≥ k.

For each clause in F, we'll make a triangle in the graph. For example, the clause (x_1 ∨ x_2 ∨ ¬x_3) is drawn as a graph with three vertices labelled x_1, x_2, ¬x_3 and edges (x_1, x_2), (x_2, ¬x_3), (¬x_3, x_1). Then connect any vertex labelled x_i with any vertex labelled ¬x_i. For example, (x_1 ∨ x_2 ∨ ¬x_3) ∧ (x_1 ∨ ¬x_2 ∨ x_3) becomes two triangles,

    x_1  x_2  ¬x_3        x_1  ¬x_2  x_3

with conflict edges between x_2 and ¬x_2, and between ¬x_3 and x_3. We have m clauses, so 3m vertices; set k = m. Claim: G has polynomial size.

Details of the algorithm:
• Input: 3-SAT formula F. Construct G; call the Independent-Set algorithm on (G, m); return its answer.
• Runtime: constructing G takes poly time, and Independent-Set runs in poly time by assumption.
• Correctness: Claim: F is satisfiable iff G has an independent set of size ≥ m.

Proof: (⇒) Suppose we can assign T/F to the variables to satisfy every clause, so each clause has ≥ 1 true literal. Pick the corresponding vertex in each triangle. This gives an independent set of size m (no conflict edge is used, because an assignment never makes both x_i and ¬x_i true). (⇐) An independent set of size ≥ m must use exactly one vertex from each triangle. Set the corresponding literals to be true – this is consistent, since x_i and ¬x_i vertices are adjacent – and set any remaining variables arbitrarily. This satisfies all the clauses.
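The construction itself is easy to mechanize; here is a sketch that builds (G, k) from a clause list (the literal encoding and names are mine):

    # Build the Independent-Set instance from a 3-SAT formula (names mine).
    def sat_to_independent_set(clauses):
        # clauses: list of 3-tuples of literals, e.g. (1, 2, -3) for x1 v x2 v ¬x3
        vertices = []                        # vertex id -> literal
        edges = []
        for c in clauses:
            base = len(vertices)
            vertices.extend(c)               # one triangle per clause
            edges += [(base, base + 1), (base + 1, base + 2), (base, base + 2)]
        for u in range(len(vertices)):
            for v in range(u + 1, len(vertices)):
                if vertices[u] == -vertices[v]:
                    edges.append((u, v))     # x_i conflicts with ¬x_i
        return len(vertices), edges, len(clauses)   # graph plus k = m

    n, E, k = sat_to_independent_set([(1, 2, -3), (1, -2, 3)])
    print(n, k, E)   # 6 vertices, k = 2, two triangles plus conflict edges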

19.3 Vertex Cover

Input: Graph G = (V, E) and k ∈ N. Question: Does G have a vertex cover U ⊆ V with |U| ≤ k? A vertex cover is a set of vertices that "hits" all edges – i.e. ∀(u, v) ∈ E, u ∈ U or v ∈ U (or both).

Theorem Vertex-Cover (VC) is NP-complete.

Proof
• VC ∈ NP. Certificate: the set U. Certifier algorithm: verify that U is a vertex cover and |U| ≤ k.
• Ind-Set ≤P VC. Ind-Set and VC are closely related. Claim: U ⊆ V is an independent set iff V − U is a vertex cover. Here's an algorithm for Independent-Set, supposing we have a polynomial time algorithm for VC. Input: G, k. Say G has n vertices. Call the VC algorithm on G, n − k. Correctness: Claim G has an independent set of size ≥ k iff G has a VC of size ≤ n − k.

19.4 Set-Cover Problem

Input: a set E of elements, some subsets S1, . . . , Sm of E (each Si ⊆ E), and k ∈ N. Question: Can we choose a subset of k of the Si's that still covers all the elements? I.e. are there i1, . . . , ik such that Si1 ∪ . . . ∪ Sik = E?

Example: Can we throw away some of a collection of intersecting rectangles and still cover the same area?

Theorem Set-Cover is NP-complete. Note: VC ≤P Set-Cover because VC is a special case of Set-Cover, and Set-Cover ≤P VC because VC is NP-complete (so everything in NP reduces to it).

19.5 Road map of NP-Completeness

Circuit-SAT ≤P 3-SAT, and from 3-SAT:
– 3-SAT ≤P Subset-Sum
– 3-SAT ≤P Hamiltonian Cycle ≤P TSP
– 3-SAT ≤P Independent Set ≤P VC ≤P Set-Cover
These proofs are from a 1972 paper by Richard Karp.

19.6 Hamiltonian Cycle

Input: Directed graph G = (V, E). Question: Does G have a directed cycle that visits every vertex exactly once?

Theorem Directed Hamiltonian Cycle is NP-complete.
Proof (1) ∈ NP, and (2) 3-SAT ≤P Ham. Cycle: give a polytime algorithm for 3-SAT assuming we have one for Ham. Cycle. Please find the reduction proof on the Internet.
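Looking back at 19.3, the complement trick is a one-liner in code. A hypothetical sketch, where has_vertex_cover stands in for the assumed VC decision algorithm:

def independent_set_at_least(V, E, k, has_vertex_cover):
    # U is independent iff V - U is a vertex cover, so an independent set
    # of size >= k exists iff a vertex cover of size <= n - k does.
    return has_vertex_cover(V, E, len(V) - k)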

F has m clauses and n variables x1, . . . , xn. (We skipped the rest of this construction; read it online.) This is the level of NP-completeness proof you'll be expected to do on your assignment. Can you show the undirected Ham. cycle problem is hard?

20 Nov 13th, 2008

20.1 Undirected Hamiltonian Cycle

Input: Undirected G = (V, E). Decision: Does this graph have an undirected Hamiltonian cycle, one that visits every vertex exactly once?

Theorem Undirected H.C. is NP-complete.
Proof
• ∈ NP
• Dir. H.C. ≤P Undir. H.C.

Assume we have a polytime algorithm for the undirected case; design a polytime algorithm for the directed case. Input: directed graph G. Construct an undirected graph G′ such that G has a directed H.C. iff G′ has an undirected H.C.

First idea – G′ = G with direction erased. (⇒) is OK, but (⇐) fails in a one-directional cycle.

Second idea – for each vertex v create vin, vmid and vout:

vin – vmid – vout

Each directed edge (u, v) of G becomes the undirected edge {uout, vin}. Say G has n vertices and m edges. Then G′ has 3n vertices and m + 2n edges, so G′ has polynomial size.

Claim (Correctness) G has a directed H.C. iff G′ has an undirected H.C. (⇒) easy. (⇐) Each vmid has degree two, so the Hamiltonian cycle must use both of its incident edges. Then at each v it must use one incoming edge (at vin) and one outgoing edge (at vout).
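A sketch of the second idea in code; the {uout, vin} edge rule is my reading of the gadget, as stated above:

def directed_to_undirected(V, E):
    # Each vertex v becomes v_in -- v_mid -- v_out; each directed edge
    # (u, v) becomes the undirected edge {u_out, v_in}.
    V2 = [(v, t) for v in V for t in ("in", "mid", "out")]
    E2 = ([((v, "in"), (v, "mid")) for v in V]
          + [((v, "mid"), (v, "out")) for v in V]
          + [((u, "out"), (v, "in")) for (u, v) in E])
    return V2, E2      # 3n vertices and m + 2n edges, as claimed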

20.2 TSP is NP-complete

Theorem TSP (decision version) is NP-complete. Input: G = (V, E), weights w : E → R+ and k ∈ R. Question: Does G have a TSP tour of total weight ≤ k?

Proof
• ∈ NP
• Ham. Cycle ≤P TSP: Ham. Cycle is a special case of TSP, with w(e) = 1 ∀e and k = n.

Theorem Hamiltonian Path is NP-complete. Input: undirected graph G. Question: does G have a Ham. path, one that visits each vertex exactly once?

Proof
– ∈ NP
– Ham. Cycle ≤P Ham. Path

We want an algorithm for Ham. Cycle using an algorithm for Ham. Path. Given G, an input for Ham. Cycle, construct G′ such that G has a Ham. cycle iff G′ has a Ham. path.

First idea: G′ ← G. ⇒ is OK, but we can find a counterexample for ⇐.
Second idea: Create three new vertices a, b, c in G′ and connect a and c to all vertices in G. Exercise: find a counterexample.
Third idea: Add a single vertex and connect it to everything in G′. Again, exercise: find a counterexample.
Fourth idea: Erase each vertex from G one at a time and ask for a Hamiltonian path.
Final idea: Take one vertex v and split it into two identical copies. Add new vertices s and t, one joined to each copy. This gives: G′ has a Ham. path iff G has a Ham. cycle.

Well, this is the kind of thing you'll be expected to do on your assignment.
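A sketch of the final idea in code (representation assumed: edges as 2-tuples over hashable vertices):

def ham_cycle_to_ham_path(V, E, v):
    # Split v into v and a twin v2 with the same neighbours, then attach
    # new degree-1 vertices s (to v) and t (to v2).
    v2, s, t = (v, "copy"), "s", "t"
    nbrs = [u for (a, b) in E for u in (a, b) if v in (a, b) and u != v]
    V2 = list(V) + [v2, s, t]
    E2 = list(E) + [(v2, u) for u in nbrs] + [(s, v), (t, v2)]
    return V2, E2    # has a Ham. path iff (V, E) has a Ham. cycle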

20.3 Subset-Sum is NP-Complete

Input: numbers a1, . . . , an ∈ R and a target W. Question: Is there a subset S ⊆ {1, . . . , n} such that Σi∈S ai = W? Recall: the dynamic programming algorithm is O(n × W); the branch-and-bound algorithm was O(2^n).

Theorem Subset-Sum is NP-complete. (This one is not something you'll be expected to do on your assignment.)

Proof
1. ∈ NP
2. 3-SAT ≤P Subset-Sum: give a polynomial-time algorithm for 3-SAT using a polytime algorithm for Subset-Sum.

Input: a 3-SAT formula F with variables x1, . . . , xn and clauses c1, . . . , cm. Ex. F = (x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ ¬x2 ∨ x3). Construct a Subset-Sum input a1, . . . , at, W such that F is satisfiable iff some subset of the ai's sums to W. Claim: the construction has polynomial size.

Make a 0-1 matrix with a row for each literal x1, ¬x1, . . . , xn, ¬xn and a column for each variable and each clause. Column xi has 1s in rows xi and ¬xi, and 0 everywhere else. Column cj has a 1 in the row of each literal appearing in cj. Then add two slack rows per clause column cj: slack j.1 with a 1 in column cj, and slack j.2 with a 2 in column cj, and zeros elsewhere. For the example F:

            x1  x2  x3  c1  c2
x1           1   0   0   1   0
¬x1          1   0   0   0   1
x2           0   1   0   0   0
¬x2          0   1   0   1   1
x3           0   0   1   1   1
¬x3          0   0   1   0   0
slack 1.1    0   0   0   1   0
slack 1.2    0   0   0   2   0
slack 2.1    0   0   0   0   1
slack 2.2    0   0   0   0   2
target       1   1   1   4   4

Interpret each row as a number – not in binary but with a bigger base, 10, so there are no carries. These are the ai's, and the target row becomes W in base 10. Set the target to 1 for each column xi and 4 for each column cj.

Claim (Size) How many ai's? 2n + 2m. How many base-10 digits in the ai's and W? Equal to the number of columns, n + m.

Claim (Correctness) F is satisfiable iff some subset of the ai's sums to W.

Proof (⇒) If xi is true, choose row xi; if false, choose row ¬xi. Then column xi has sum 1, as required. Now sum down column cj: the chosen literal rows contribute 1 for each true literal of cj, i.e. between 1 and 3. Top up with the slack rows to reach the target 4: with a single true literal, use slack j.1 and slack j.2 (1 + 1 + 2 = 4); with two true literals, use slack j.2 (2 + 2 = 4, again); with three, use slack j.1 (3 + 1 = 4). This row set sums to W.
(⇐) Suppose some subset of rows adds to W. Column xi forces us to use row xi or row ¬xi, but not both; set xi = T or F accordingly. Consider column cj: the slacks give at most 3, so some literal row appearing in cj must be chosen, i.e. some literal in cj is true. That satisfies all clauses.
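A sketch of the whole construction in code (my encoding: literal ±i for xi/¬xi):

def sat_to_subset_sum(n, clauses):
    # Rows of the 0-1 matrix, read as base-10 numbers; no carries occur,
    # since each column's total over all rows is at most 3 + 1 + 2 = 6 < 10.
    m = len(clauses)
    as_number = lambda digits: int("".join(map(str, digits)))
    rows = []
    for i in range(1, n + 1):                    # literal rows xi and ¬xi
        for lit in (i, -i):
            d = [0] * (n + m)
            d[i - 1] = 1                         # variable column xi
            for j, cl in enumerate(clauses):     # clause columns
                if lit in cl:
                    d[n + j] = 1
            rows.append(as_number(d))
    for j in range(m):                           # slack rows j.1 and j.2
        for val in (1, 2):
            d = [0] * (n + m)
            d[n + j] = val
            rows.append(as_number(d))
    W = as_number([1] * n + [4] * m)             # the target row
    return rows, W

Here sat_to_subset_sum(3, [(1, -2, 3), (-1, -2, 3)]) reproduces the table above.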

21 Nov 18th, 2008

NP-Completeness, continued.

Theorem Circuit-SAT is NP-complete. Recall the input: a circuit of ∨, ∧ and ¬ gates, with variables as some of the inputs and one sink, the final output. Question: are there 0-1 values for the variables for which the circuit outputs 1?

Proof
• ∈ NP
• Y ≤P Circuit-SAT for all Y in NP.

We assume there is a polynomial time algorithm for Circuit-SAT and give a polynomial time algorithm for Y using that subroutine. What do we know about Y? It has a polynomial time certifier algorithm B: input s for Y gets a Yes output iff there exists a certificate t of poly size such that B(s, t) outputs YES. Let n = size(s), and let p(n) be a polynomial bounding size(t), i.e. size(t) ≤ p(n).

We must convert algorithm B to a circuit (to hand to the Circuit-SAT subroutine). Algorithm B (after compiling and assembling) becomes a circuit at the lowest hardware level. Because B runs in polynomial time, the circuit has polynomial size. So algorithm B, for inputs of size n, becomes a circuit Cn of polynomial size in n, and "Is there a certificate?" becomes "Are there values for the variables?"

Algorithm for Y:
– Input s
– Convert B to circuit Cn
– Hand Cn to the Circuit-SAT subroutine

Correctness: Input s for Y gets a YES output iff there exists a certificate t such that B(s, t) outputs YES iff there exist values for the variables t such that Cn outputs 1 iff Cn is satisfiable.

21.1 Major Open Questions

Is P = NP? If one NP-complete problem is in P, then they all are. If P ≠ NP then there are problems in between P and NP-complete (Ladner, 70's), i.e. A ≤P B but B not ≤P A (i.e. A <P B). But what are natural candidates for these? In Garey and Johnson ('79) the candidates were:
• Linear Programming: in P ('80)
• Primality Testing: in P ('02)
• Min. Weight Triangulation for a Point Set: NP-complete ('06) (not a famous problem)
• Graph Isomorphism: open. Given two graphs each on n vertices, are they the same after relabeling vertices?
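Back to the Circuit-SAT proof: to make its central object concrete, a circuit in the (assumed) triple representation from earlier can be evaluated gate by gate, and an exponential-time stand-in for the hypothesized subroutine just tries all 0-1 assignments. A sketch:

from itertools import product

def eval_circuit(nodes, out, values):
    # nodes in topological order: (node, gate, inputs); values: dict
    # assigning 0/1 to the variable inputs.
    v = dict(values)
    for a, op, ins in nodes:
        if op == "AND":
            v[a] = v[ins[0]] & v[ins[1]]
        elif op == "OR":
            v[a] = v[ins[0]] | v[ins[1]]
        elif op == "NOT":
            v[a] = 1 - v[ins[0]]
    return v[out]

def circuit_sat_bruteforce(nodes, out, variables):
    # NOT the polytime subroutine the proof hypothesizes; just a tiny
    # reference implementation of the question being asked.
    return any(eval_circuit(nodes, out, dict(zip(variables, bits)))
               for bits in product((0, 1), repeat=len(variables)))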

21.2 Undecidability

So far we've been talking about the efficiency of algorithms. Now we'll look at problems with no algorithm whatsoever. This is a topic not conventionally covered in an algorithms course, so you won't find it in textbooks. But everyone in the School of Computer Science thinks it's "absolutely crucial" that everyone graduating with a Waterloo degree knows this stuff.

21.2.1 Examples

Tiling: Given square tiles with colours on their sides, can I tile the whole plane with copies of these tiles? Colours must match, and no rotations or flips are allowed. For a finite (k × k) piece of the plane it's possible to decide: with t tile types I could just try the t choices in each of the k² places, so the problem is O(t^(k²)). For the whole plane, it is undecidable.

Program Verification: Given a specification of the inputs and corresponding outputs of a program (the specification is finite; the number of potential inputs is infinite) and given a program, does this program give the correct corresponding output? The answer is, actually, no: there is no algorithm for this. On one hand, this is sad for software engineers, because what their processes do attempts to check this. On the plus side, your skills and ingenuity will always be needed.

Halting Problem: Given a program, does it halt (or go into an infinite loop)?

Sample-Program
while x ≠ 1 do
x ← x − 2
end

This halts if x is odd and positive.

Sample-Program-2
while x ≠ 1 do
if x is even then x ← x/2
else x ← 3x + 1
end

Assume x > 0. Sample runs: x = 5: 16, 8, 4, 2, 1. x = 9: 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1. Does this program halt for all x? That's open.
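Sample-Program-2 in runnable form, a direct transcription (whether it halts for all x > 0 is the open question above, known as the Collatz problem):

def sample_program_2(x):
    # Prints the trajectory of x under the rule above.
    assert x > 0
    while x != 1:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        print(x)

# sample_program_2(5) prints 16, 8, 4, 2, 1
# sample_program_2(9) prints 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, ...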

Also, any math question about the existence of a number can be turned into a halting question. Idea: to ask "is there an x such that Foo(x)?", run: x ← 1; while not Foo(x), x ← x + 1. This halts iff such an x exists.

22 Nov 20th, 2008

22.1 Undecidability

"Which problems have no algorithm?"

Definition A decision problem is undecidable if it has no algorithm.
Definition (more general) A problem is unsolvable if there's no algorithm for it.

What is a problem? A specification of inputs and corresponding outputs. What is an algorithm? The Church-Turing Thesis (not proved): an algorithm is a Turing machine.

Theorem The following models of computing are equivalent:
• Turing machines
• Java programs
• RAM
• Circuit families

22.2 History of Undecidability

• Gottlob Frege, ~1900 – one of many who tried to axiomatize mathematics.
• Bertrand Russell (1872–1970) – Russell's paradox (I recommend his biography, and some philosophy books). Let S = the set of sets that do not contain themselves. Is S a member of itself? – NO: then S does not contain itself, so S meets its own defining condition, so S is a member of S. Contradiction. – YES: then S contains itself, but every member of S must not contain itself. Contradiction. Contradiction either way! So what is wrong about this?

First undecidability result (from Turing):

Theorem The Halting Problem is undecidable.

Halting Problem
• Input: Some program or algorithm A and some input string w for A.
• Question: Does A halt on w?

Proof (by contradiction.) Suppose there is a program H that decides the halting problem: H takes A, w as input and outputs yes/no. Construct a new program H′ with input a program B:

begin
call H(B, B)
if no: halt
else: loop forever
end

So H′ is like Russell's set S. His question "does S contain S?" is like asking "does H′ halt on its own input?" Suppose yes, i.e. H′ halts on input H′. Then H(H′, H′) outputs yes, so (looking at the code of H′) H′ loops forever on input H′. Contradiction. Suppose no. Then H(H′, H′) outputs no, so (looking at the code of H′) H′ halts on input H′. Contradiction. Contradiction either way. Therefore our assumption that H exists is wrong: there is no algorithm to decide the halting problem.

23 Nov 25th, 2008

Assignment 3 – out of 45. Assignment 4 – due Friday. Final exam: study sheet is allowed.

23.1 Undecidability

Recall: a decision problem is undecidable if there is no algorithm for it. Halting Problem: given a program/algorithm A and an input w, does A halt on input w?

To show other problems are undecidable, use reductions. Recall A ≤ B, or "A reduces to B", if an algorithm for B can be used to make an algorithm for A.

Theorem If P and Q are decision problems, P is undecidable and P ≤ Q, then Q is undecidable.
Proof By contradiction. Suppose Q is decidable. Then it has an algorithm. By the definition of ≤, we get an algorithm for P. This is contrary to P being undecidable.
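The diagonalization program H′ from the proof above, sketched in Python; H here is the hypothetical halting decider, which does not actually exist, so this is illustration only:

def make_H_prime(H):
    # H(program, input) -> True/False is the supposed halting decider.
    def H_prime(B):
        if H(B, B):
            while True:    # H says B halts on itself, so loop forever
                pass
        # else: fall through and halt immediately
    return H_prime

# Feeding H_prime to itself -- H_prime(H_prime) -- recreates the
# contradiction in the proof.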

23.2 Other Undecidable Problems

Halt-No-Input (or Halt-on-Empty): Given a program A with no input, does it halt?

Theorem Halt-No-Input is undecidable.
Proof Halting Problem ≤ Halt-No-Input. Suppose we have an algorithm X for Halt-No-Input; make an algorithm for the Halting Problem. Input: program A and input string w. Algorithm: make a program A′ that has w hard-coded inside it and then runs A on it; call X on A′, which outputs the yes/no answer. Correctness: A halts on w iff A′ halts.

Program Verification: Given a program and a specification of inputs and corresponding outputs, does the program compute the correct output for each input?

Theorem Program Verification is undecidable.
Proof Halt-No-Input ≤ Program Verification. Suppose we have an algorithm V to decide Program Verification; make an algorithm to solve Halt-No-Input. Input: program A. Output: does A halt? Idea: modify the code of A to get a program A′ with input and output:

A′: read input, discard it; run A; output 1.

Then call V(A′, specs: "for any input, output 1"). Correctness: A halts iff A′ produces output 1 for every input iff V(A′, the spec above) answers yes.

Program Equivalence (something TAs would love!): Given two programs, do they behave the same (i.e. produce the same outputs)?

Theorem Program Equivalence is undecidable.
Proof Program-Verification ≤ Program-Equiv? Suppose we have an algorithm for Program Equivalence and try to give an algorithm for Program Verification – this can work, but we need more formality about input/output specs. Let's try another approach.

Halt-No-Input ≤ Program-Equiv. Suppose we have an algorithm for Program Equivalence. Make an algorithm for Halt-No-Input. Input: program A. Algorithm: make A′ as in the previous proof, and make program B: "read input; just output 1". Call the algorithm for Program-Equiv on A′ and B. This will work. Correctness: A′ is equivalent to B iff A halts.
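The wrapper constructions above are easy to make concrete in a language with first-class functions. A sketch of mine, treating programs as Python callables (which glosses over the "program as source string" formality):

def hard_code(A, w):
    # Halting Problem -> Halt-No-Input: a no-input program that runs A
    # on the fixed input w; it halts iff A halts on w.
    def A_prime():
        return A(w)
    return A_prime

def discard_and_one(A):
    # Halt-No-Input -> Program Verification / Equivalence: reads (and
    # discards) input, runs the no-input program A, then outputs 1.
    def A_prime(_input):
        A()              # runs forever iff A does
        return 1
    return A_prime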

23.3 Other Problems (no proofs)

Hilbert's 10th Problem: Given a polynomial P(x1, . . . , xn) with integer coefficients, does P(x1, . . . , xn) = 0 have positive integer solutions? Possible approach: try all integers. But solutions can be huge; e.g. the least integer solution to x² = 991y² + 1 has a 30-digit x and a 29-digit y. This was proved undecidable in the 70's.

Conway's Game of Life: Rules: spots die with 0–1 or ≥ 4 neighbours, and are born with three neighbours. Undecidable.

24 Nov 27th, 2008

Final Exam: Wed Dec 10th. Office hours: see the webpage.

24.1 What to do with NP-complete problems

Sometimes you only want special cases of an NP-complete problem, e.g. bounding the maximum degree in a graph. There may be algorithms that work in polytime when you bound that maximum degree.

• Parameterized Tractability: exponential algorithms that work in polynomial time for special inputs.
• Exact exponential time algorithms: use heuristics to make branch-and-bound explore the most promising choice first (and run fast, sometimes).
• Approximation Algorithms (more in CS 466):
– Vertex Cover: a greedy algorithm that finds a good (not necessarily minimum) vertex cover; a runnable version follows below.

C ← ∅
while E ≠ ∅
  pick e = (u, v) ∈ E
  C ← C ∪ {u, v}
  remove from E all edges incident to u or v
end

Claim: this algorithm finds |C| ≤ 2 × (min size of a V.C.).
Proof: The edges we choose form a matching M (no two share an endpoint), and |C| = 2|M|. Every edge in M must be hit by a vertex in any V.C., and ∴ |M| ≤ min size of a V.C., and ∴ |C| ≤ 2 × (min V.C.).

We call this a "2-approximation algorithm." Some NP-complete problems have no constant-factor approximation algorithm (unless P = NP), such as Independent Set.
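The greedy routine above as runnable code, with edges given as 2-tuples over hashable vertices:

def vertex_cover_2approx(edges):
    # Repeatedly pick any remaining edge, take BOTH endpoints, and drop
    # every edge they hit; the picked edges form a matching.
    C, E = set(), set(edges)
    while E:
        u, v = E.pop()
        C |= {u, v}
        E = {e for e in E if u not in e and v not in e}
    return C

# vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]) returns a cover of size
# at most twice the minimum (here the minimum, {2, 3}, has size 2).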

Some NP-complete problems have approximation factors as close to 1 as we like – at the cost of increased running time.

– Example: Subset-Sum. Given w1, . . . , wn and W, is there S ⊆ {1, . . . , n} such that Σi∈S wi = W? As an optimization problem: we want Σi∈S wi ≤ W, maximizing Σi∈S wi. (And assume wi < W ∀i.) A solution with Σi∈S wi ≥ ½ × (true max) would be a 2-approximation; one with Σi∈S wi ≥ 1/(1+ε) × (true max) is a "(1 + ε)-approximation."

Claim There is a (1 + ε)-approximation algorithm for Subset-Sum with runtime O((1/ε) n³). As ε → 0 we get better approximation but worse runtime; the limit is approximation factor 1 (an exact algorithm) with an exponential-time algorithm.

Idea: the dynamic programming algorithm (recall: O(n × W)) is very good – it only can't handle numbers with lots of bits. So throw away half the bits and get an approximate answer: rough rounding – few bits – rough approximation; refined rounding – many bits – good approximation. Apply dynamic programming to the rounded input. (A code sketch of the whole scheme follows below.)

Rounding parameter b (later b = (ε/n) max wi). Round each wi up to a multiple of b: w̃i ← ⌈wi/b⌉ b. Claim: wi ≤ w̃i ≤ wi + b. (E.g. with b = 50, both 48 and 49 must be rounded up to 50.) Now all the w̃i's are multiples of b, so scale down and run dynamic programming with W̃ ← ⌊W/b⌋; the runtime is O(n × W̃). (Note: we should check feasibility of the rounded solution against the true weights; else throw it out.)

How good is our approximation? Each w̃i is off by ≤ b. The true maximum ≤ Σi∈S w̃i ≤ Σi∈S wi + nb = Σi∈S wi + ε (max wi) ≤ Σi∈S wi + ε Σi∈S wi = (1 + ε) Σi∈S wi. The second-last step needs max wi ≤ Σi∈S wi; else use max wi by itself as the solution.

Runtime: W̃ = W/b = W n / (ε max wi) ≤ O((1/ε) n²), since W ≤ n (max wi). Therefore our runtime is like O(n × W̃) = O((1/ε) n³).
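A sketch of the rounding scheme under the notes' assumptions (0 < ε, all wi < W); to honour the feasibility note above, it only keeps subsets that remain feasible for the true capacity:

def subset_sum_approx(w, W, eps):
    # (1 + eps)-approximation sketch: round weights up to multiples of
    # b = (eps / n) * max(w), scale down, and run the dynamic program on
    # the small scaled capacity W~ = O(n^2 / eps).
    n = len(w)
    b = max(1, int(eps * max(w) / n))        # rounding parameter
    scaled = [-(-wi // b) for wi in w]       # ceil(wi / b)
    cap = W // b
    best = {0: 0}                            # scaled sum -> best true sum
    for wi, si in zip(w, scaled):
        for s, t in list(best.items()):
            s2, t2 = s + si, t + wi
            if s2 <= cap and t2 <= W and t2 > best.get(s2, -1):
                best[s2] = t2
    return max(max(best.values()), max(w))   # fall back to the largest item

The loop is O(n × cap) = O((1/ε) n³) overall, matching the runtime claim above.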

• Randomized algorithms (CS 466?): If I have access to a random number generator, what can I now do? Primality can be tested in polytime with a randomized algorithm (70's), but also without randomness (2002).
• Do alternative methods of computing help with NP-complete problems? Will massively parallel computers help? Only by a factor of the number of CPUs. This is like "a drop in the bucket" for exponential time algorithms.
• Quantum Computing: The hope is that it offers massive parallelism for free. Huge result (Shor, 1994): efficient factoring on a quantum computer. Waterloo is, by the way, the place to be for quantum computing: in Physics, CS, and C&O we have experts on the subject. To read a tiny bit more on quantum computing, see [DPV].

24.2 P vs. NP
