Design and Analysis of Computer Algorithms

David M. Mount
Department of Computer Science
University of Maryland
Fall 2003

Copyright, David M. Mount, 2004, Dept. of Computer Science, University of Maryland, College Park, MD, 20742. These lecture notes were prepared by David Mount for the course CMSC 451, Design and Analysis of Computer Algorithms, at the University of Maryland. Permission to use, copy, modify, and distribute these notes for educational purposes and without fee is hereby granted, provided that this copyright notice appear in all copies.

Lecture Notes 1 CMSC 451

Lecture 1: Course Introduction

Read: (All readings are from Cormen, Leiserson, Rivest and Stein, Introduction to Algorithms, 2nd Edition.) Review Chapts. 1–5 in CLRS.

What is an algorithm? Our text defines an algorithm to be any well-defined computational procedure that takes some values as input and produces some values as output. Like a cooking recipe, an algorithm provides a step-by-step method for solving a computational problem. Unlike programs, algorithms are not dependent on a particular programming language, machine, system, or compiler. They are mathematical entities, which can be thought of as running on some sort of idealized computer with an infinite random access memory and an unlimited word size. Algorithm design is all about the mathematical theory behind the design of good programs.

Why study algorithm design? Programming is a very complex task, and there are a number of aspects of programming that make it so complex. The first is that most programming projects are very large, requiring the coordinated efforts of many people. (This is the topic of a course like software engineering.) The next is that many programming projects involve storing and accessing large quantities of data efficiently. (This is the topic of courses on data structures and databases.) The last is that many programming projects involve solving complex computational problems, for which simplistic or naive solutions may not be efficient enough. The complex problems may involve numerical data (the subject of courses on numerical analysis), but often they involve discrete data. This is where the topic of algorithm design and analysis is important.

Although the algorithms discussed in this course will often represent only a tiny fraction of the code that is generated in a large software system, this small fraction may be very important for the success of the overall project. An unfortunately common approach to this problem is to first design an inefficient algorithm and data structure to solve the problem, and then take this poor design and attempt to fine-tune its performance. The problem is that if the underlying design is bad, then often no amount of fine-tuning is going to make a substantial difference.

The focus of this course is on how to design good algorithms, and how to analyze their efficiency. This is among the most basic aspects of good programming.

Course Overview: This course will consist of a number of major sections. The first will be a short review of some preliminary material, including asymptotics, summations, recurrences, and sorting. These have been covered in earlier courses, and so we will breeze through them pretty quickly. We will then discuss approaches to designing optimization algorithms, including dynamic programming and greedy algorithms. The next major focus will be on graph algorithms. This will include a review of breadth-first and depth-first search and their application in various problems related to connectivity in graphs. Next we will discuss minimum spanning trees, shortest paths, and network flows. We will briefly discuss algorithmic problems arising from geometric settings, that is, computational geometry.

Most of the emphasis of the first portion of the course will be on problems that can be solved efficiently; in the latter portion we will discuss intractability and NP-hard problems. These are problems for which no efficient solution is known. Finally, we will discuss methods to approximate NP-hard problems, and how to prove how close these approximations are to the optimal solutions.

Issues in Algorithm Design: Algorithms are mathematical objects (in contrast to the much more concrete notion of a computer program implemented in some programming language and executing on some machine). As such, we can reason about the properties of algorithms mathematically. When designing an algorithm there are two fundamental issues to be considered: correctness and efficiency.

It is important to justify an algorithm's correctness mathematically. For very complex algorithms, this typically requires a careful mathematical proof, which may require the proof of many lemmas and properties of the solution, upon which the algorithm relies. For simple algorithms (BubbleSort, for example) a short intuitive explanation of the algorithm's basic invariants is sufficient. (For example, in BubbleSort, the principal invariant is that on completion of the ith iteration, the last i elements are in their proper sorted positions.)


Establishing efficiency is a much more complex endeavor. Intuitively, an algorithm's efficiency is a function of the amount of computational resources it requires, measured typically as execution time and the amount of space, or memory, that the algorithm uses. The amount of computational resources can be a complex function of the size and structure of the input set. In order to reduce matters to their simplest form, it is common to consider efficiency as a function of input size. Among all inputs of the same size, we consider the maximum possible running time. This is called worst-case analysis. It is also possible, and often more meaningful, to perform average-case analysis. Average-case analyses tend to be more complex, and may require that some probability distribution be defined on the set of inputs. To keep matters simple, we will usually focus on worst-case analysis in this course.

Throughout this course, when you are asked to present an algorithm, this means that you need to do three things:

• Present a clear, simple and unambiguous description of the algorithm (in pseudo-code, for example). The key here is "keep it simple." Uninteresting details should be kept to a minimum, so that the key computational issues stand out. (For example, it is not necessary to declare variables whose purpose is obvious, and it is often simpler and clearer to simply say, "Add X to the end of list L" than to present code to do this or use some arcane syntax, such as "L.insertAtEnd(X).")

• Present a justification or proof of the algorithm's correctness. Your justification should assume that the reader is someone of similar background as yourself, say another student in this class, and should be convincing enough to make a skeptic believe that your algorithm does indeed solve the problem correctly. Avoid rambling about obvious or trivial elements. A good proof provides an overview of what the algorithm does, and then focuses on any tricky elements that may not be obvious.

• Present a worst-case analysis of the algorithm's efficiency, typically its running time (but also its space, if space is an issue). Sometimes this is straightforward, but if not, concentrate on the parts of the analysis that are not obvious.

Note that the presentation does not need to be in this order. Often it is good to begin with an explanation of how you derived the algorithm, emphasizing particular elements of the design that establish its correctness and efficiency. Then, once this groundwork has been laid down, present the algorithm itself. If this seems to be a bit abstract now, don't worry. We will see many examples of this process throughout the semester.

Lecture 2: Mathematical Background

Read: Review Chapters 1–5 in CLRS.

Algorithm Analysis: Today we will review some of the basic elements of algorithm analysis, which were covered in previous courses. These include asymptotics, summations, and recurrences.

Asymptotics: Asymptotics involves O-notation ("big-Oh") and its many relatives, Ω, Θ, o ("little-Oh"), and ω. Asymptotic notation provides us with a way to simplify the functions that arise in analyzing algorithm running times by ignoring constant factors and concentrating on the trends for large values of n. For example, it allows us to reason that for three algorithms with the respective running times

    n^3 log n + 4n^2 + 52n log n ∈ Θ(n^3 log n)
    15n^2 + 7n log^3 n ∈ Θ(n^2)
    3n + 4 log^5 n + 19n^2 ∈ Θ(n^2),

the first algorithm is significantly slower for large n, while the other two are comparable, up to a constant factor.
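To see this concretely, we can evaluate each running time at a large n and divide by its claimed dominant term; each ratio settles near a constant. (An illustrative check, not part of the notes; it uses the natural logarithm, though any base gives the same asymptotics.)

```python
import math

def f1(n): return n**3 * math.log(n) + 4 * n**2 + 52 * n * math.log(n)
def f2(n): return 15 * n**2 + 7 * n * math.log(n)**3
def f3(n): return 3 * n + 4 * math.log(n)**5 + 19 * n**2

n = 10**6
# Each ratio is close to the constant of the dominant term.
print(f1(n) / (n**3 * math.log(n)))  # near 1
print(f2(n) / n**2)                  # near 15
print(f3(n) / n**2)                  # near 19
```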

Since asymptotics were covered in earlier courses, I will assume that this is familiar to you. Nonetheless, here are a few facts to remember about asymptotic notation:


Ignore constant factors: Multiplicative constant factors are ignored. For example, 347n is Θ(n). Constant factors appearing in exponents cannot be ignored. For example, 2^(3n) is not O(2^n).

Focus on large n: Asymptotic analysis means that we consider trends for large values of n. Thus, the fastest growing function of n is the only one that needs to be considered. For example, 3n^2 log n + 25n log n + (log n)^7 is Θ(n^2 log n).

Polylog, polynomial, and exponential: These are the most common functions that arise in analyzing algorithms:

Polylogarithmic: Powers of log n, such as (log n)^7. We will usually write this as log^7 n.

Polynomial: Powers of n, such as n^4 and √n = n^(1/2).

Exponential: A constant (not 1) raised to the power n, such as 3^n.

An important fact is that polylogarithmic functions are strictly asymptotically smaller than polynomial functions, which are strictly asymptotically smaller than exponential functions (assuming the base of the exponent is bigger than 1). For example, if we let ≺ mean "asymptotically smaller," then

    log^a n ≺ n^b ≺ c^n

for any a, b, and c, provided that b > 0 and c > 1.

Logarithm Simplification: It is a good idea to first simplify terms involving logarithms. For example, the following formulas are useful. Here a, b, c are constants:

    log_b n = (log_a n) / (log_a b) = Θ(log_a n)
    log_a(n^c) = c log_a n = Θ(log_a n)
    b^(log_a n) = n^(log_a b).

Avoid using log n in exponents. The last rule above can be used to achieve this. For example, rather than saying 3^(log_2 n), express this as n^(log_2 3) ≈ n^1.585.
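These identities are easy to verify numerically. A small Python check (illustrative, not part of the notes):

```python
import math

n, a, b, c = 1024.0, 2, 10, 3

# Change of base: log_b n = (log_a n) / (log_a b)
assert math.isclose(math.log(n, b), math.log(n, a) / math.log(b, a))
# Power rule: log_a(n^c) = c log_a n
assert math.isclose(math.log(n**c, a), c * math.log(n, a))
# Exponent swap: b^(log_a n) = n^(log_a b)
assert math.isclose(b ** math.log(n, a), n ** math.log(b, a))
# 3^(log_2 n) equals n^(log_2 3), i.e., roughly n^1.585
assert math.isclose(3 ** math.log2(n), n ** math.log2(3))
```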

Following the conventional sloppiness, I will often say O(n^2), when in fact the stronger statement Θ(n^2) holds. (This is just because it is easier to say "oh" than "theta.")

Summations: Summations naturally arise in the analysis of iterative algorithms. Also, more complex forms of analysis, such as recurrences, are often solved by reducing them to summations. Solving a summation means reducing it to a closed-form formula, that is, one having no summations, recurrences, integrals, or other complex operators. In algorithm design it is often not necessary to solve a summation exactly, since an asymptotic approximation or close upper bound is usually good enough. Here are some common summations and some tips to use in solving summations.

Constant Series: For integers a and b,

    Σ_{i=a}^{b} 1 = max(b − a + 1, 0).

Notice that when b = a − 1, there are no terms in the summation (since the index is assumed to count upwards only), and the result is 0. Be careful to check that b ≥ a − 1 before applying this formula blindly.

Arithmetic Series: For n ≥ 0,

    Σ_{i=0}^{n} i = 1 + 2 + ⋯ + n = n(n + 1)/2.

This is Θ(n^2). (The starting bound could have just as easily been set to 1 as 0.)


Geometric Series: Let x ≠ 1 be any constant (independent of n), then for n ≥ 0,

    Σ_{i=0}^{n} x^i = 1 + x + x^2 + ⋯ + x^n = (x^(n+1) − 1) / (x − 1).

If 0 < x < 1 then this is Θ(1). If x > 1, then this is Θ(x^n), that is, the entire sum is proportional to the last element of the series.

Quadratic Series: For n ≥ 0,

    Σ_{i=0}^{n} i^2 = 1^2 + 2^2 + ⋯ + n^2 = (2n^3 + 3n^2 + n) / 6.
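As with the previous series, the closed form can be sanity-checked by brute force (illustrative, not part of the notes):

```python
def quadratic_series(n):
    # sum over i = 0..n of i^2 = (2n^3 + 3n^2 + n) / 6
    return (2 * n**3 + 3 * n**2 + n) // 6

assert quadratic_series(10) == sum(i * i for i in range(11)) == 385
assert quadratic_series(100) == sum(i * i for i in range(101))
```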

Linear-geometric Series: This arises in some algorithms based on trees and recursion. Let x ≠ 1 be any constant, then for n ≥ 0,

    Σ_{i=0}^{n−1} i·x^i = x + 2x^2 + 3x^3 + ⋯ + (n − 1)x^(n−1) = ((n − 1)x^(n+1) − nx^n + x) / (x − 1)^2.

As n becomes large, this is asymptotically dominated by the term (n − 1)x^(n+1)/(x − 1)^2. The multiplicative term n − 1 is very nearly equal to n for large n, and, since x is a constant, we may multiply this by the constant (x − 1)^2/x without changing the asymptotics. What remains is Θ(nx^n).
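The closed form and the Θ(nx^n) behavior can both be checked numerically (an illustrative sketch, not part of the notes):

```python
import math

def linear_geometric(x, n):
    # sum over i = 0..n-1 of i * x^i
    # = ((n - 1) x^(n+1) - n x^n + x) / (x - 1)^2
    return ((n - 1) * x ** (n + 1) - n * x ** n + x) / (x - 1) ** 2

x = 2.0
assert math.isclose(linear_geometric(x, 10), sum(i * x ** i for i in range(10)))

# The sum is Theta(n x^n): the ratio to n x^n approaches a constant.
assert math.isclose(linear_geometric(x, 60) / (60 * x ** 60), 1.0, rel_tol=0.05)
```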

Harmonic Series: This arises often in probabilistic analyses of algorithms. It does not have an exact closed-form solution, but it can be closely approximated. For n ≥ 0,

    H_n = Σ_{i=1}^{n} 1/i = 1 + 1/2 + 1/3 + ⋯ + 1/n = (ln n) + O(1).

There are also a few tips to learn about solving summations.

Summations with general bounds: When a summation does not start at 1 or 0, as most of the above formulas assume, you can just split it up into the difference of two summations. For example, for 1 ≤ a ≤ b,

    Σ_{i=a}^{b} f(i) = Σ_{i=0}^{b} f(i) − Σ_{i=0}^{a−1} f(i).

Linearity of Summation: Constant factors and added terms can be split out to make summations simpler.

    Σ (4 + 3i(i − 2)) = Σ (4 + 3i^2 − 6i) = Σ 4 + 3 Σ i^2 − 6 Σ i.

Now the formulas can be applied to each summation individually.

Approximate using integrals: Integration and summation are closely related. (Integration is in some sense a continuous form of summation.) Here is a handy formula. Let f(x) be any monotonically increasing function (the function increases as x increases). Then

    ∫_0^n f(x) dx ≤ Σ_{i=1}^{n} f(i) ≤ ∫_{1}^{n+1} f(x) dx.
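For instance, taking f(x) = ln(x + 1), which is increasing and has the antiderivative (x + 1) ln(x + 1) − (x + 1), both inequalities can be checked directly (an illustrative sketch, not part of the notes):

```python
import math

def f(x):
    # A monotonically increasing function.
    return math.log(x + 1)

def F(x):
    # Antiderivative of f: integral of ln(x + 1) dx.
    return (x + 1) * math.log(x + 1) - (x + 1)

n = 1000
s = sum(f(i) for i in range(1, n + 1))
# integral from 0 to n  <=  sum  <=  integral from 1 to n+1
assert F(n) - F(0) <= s <= F(n + 1) - F(1)
```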

Example: Right Dominant Elements: As an example of the use of summations in algorithm analysis, consider the following simple problem. We are given a list L of numeric values. We say that an element of L is right dominant if it is strictly larger than all the elements that follow it in the list. Note that the last element of the list is always right dominant, as is the last occurrence of the maximum element of the array. For example, consider the following list.

    L = ⟨10, 9, 5, 13, 2, 7, 1, 8, 4, 6, 3⟩

The sequence of right dominant elements is ⟨13, 8, 6, 3⟩.

In order to make this more concrete, we should think about how L is represented. It will make a difference whether L is represented as an array (allowing for random access), a doubly linked list (allowing for sequential access in both directions), or a singly linked list (allowing for sequential access in only one direction). Among the three possible representations, the array representation seems to yield the simplest and clearest algorithm. However, we will design the algorithm in such a way that it only performs sequential scans, so it could also be implemented using a singly linked or doubly linked list. (This is common in algorithms. Choose your representation to make the algorithm as simple and clear as possible, but give thought to how it may actually be implemented. Remember that algorithms are read by humans, not compilers.) We will assume here that the array L of size n is indexed from 1 to n.

Think for a moment how you would solve this problem. Can you see an O(n) time algorithm? (If not, think a little harder.) To illustrate summations, we will first present a naive O(n^2) time algorithm, which operates by simply checking for each element of the array whether all the subsequent elements are strictly smaller. (Although this example is pretty stupid, it will also serve to illustrate the sort of style that we will use in presenting algorithms.)

Right Dominant Elements (Naive Solution)

// Input: List L of numbers given as an array L[1..n]
// Returns: List D containing the right dominant elements of L
RightDominant(L) {
    D = empty list
    for (i = 1 to n) {
        isDominant = true
        for (j = i+1 to n)
            if (L[i] <= L[j]) isDominant = false
        if (isDominant) append L[i] to D
    }
    return D
}

If I were programming this, I would rewrite the inner (j) loop as a while loop, since we can terminate the loop as soon as we find that L[i] is not dominant. Again, this sort of optimization is good to keep in mind in programming, but will be omitted since it will not affect the worst-case running time.
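For concreteness, here is the same naive algorithm as runnable Python, with the early-exit optimization just mentioned. (Python lists are 0-indexed, unlike the pseudocode; the asymptotics are unchanged.)

```python
def right_dominant(L):
    """Return the right dominant elements of L (naive O(n^2) version)."""
    D = []
    n = len(L)
    for i in range(n):
        is_dominant = True
        for j in range(i + 1, n):
            if L[i] <= L[j]:
                is_dominant = False
                break            # early exit: L[i] is already not dominant
        if is_dominant:
            D.append(L[i])
    return D

print(right_dominant([10, 9, 5, 13, 2, 7, 1, 8, 4, 6, 3]))  # [13, 8, 6, 3]
```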

The time spent in this algorithm is dominated (no pun intended) by the time spent in the inner (j) loop. On the ith iteration of the outer loop, the inner loop is executed from i + 1 to n, for a total of n − (i + 1) + 1 = n − i times. (Recall the rule for the constant series above.) Each iteration of the inner loop takes constant time. Thus, up to a constant factor, the running time, as a function of n, is given by the following summation:

    T(n) = Σ_{i=1}^{n} (n − i).

To solve this summation, let us expand it, and put it into a form such that the above formulas can be used.

    T(n) = (n − 1) + (n − 2) + ⋯ + 2 + 1 + 0
         = 0 + 1 + 2 + ⋯ + (n − 2) + (n − 1)
         = Σ_{i=0}^{n−1} i = (n − 1)n / 2.


The last step comes from applying the formula for the linear series (using n − 1 in place of n in the formula).

As mentioned above, there is a simple O(n) time algorithm for this problem. As an exercise, see if you can find it. As an additional challenge, see if you can design your algorithm so it only performs a single left-to-right scan of the list L. (You are allowed to use up to O(n) working storage to do this.)

Recurrences: Another useful mathematical tool in algorithm analysis will be recurrences. They arise naturally in the analysis of divide-and-conquer algorithms. Recall that these algorithms have the following general structure.

Divide: Divide the problem into two or more subproblems (ideally of roughly equal sizes),
Conquer: Solve each subproblem recursively, and
Combine: Combine the solutions to the subproblems into a single global solution.

How do we analyze recursive procedures like this one? If there is a simple pattern to the sizes of the recursive calls, then the best way is usually by setting up a recurrence, that is, a function which is defined recursively in terms of itself. Here is a typical example. Suppose that we break the problem into two subproblems, each of size roughly n/2. (We will assume exactly n/2 for simplicity.) The additional overhead of splitting and merging the solutions is O(n). When the subproblems are reduced to size 1, we can solve them in O(1) time. We will ignore constant factors, writing O(n) just as n, yielding the following recurrence:

    T(n) = 1               if n = 1,
    T(n) = 2T(n/2) + n     if n > 1.

Note that, since we assume that n is an integer, this recurrence is not well defined unless n is a power of 2 (since otherwise n/2 will at some point be a fraction). To be formally correct, I should either write ⌈n/2⌉ or restrict the domain of n, but I will often be sloppy in this way.
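Before solving this recurrence formally, it is easy to tabulate: for n = 2^k the values come out to exactly n·k + n = n log_2 n + n, which is Θ(n log n). (A small Python check, not part of the notes.)

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(1) = 1; T(n) = 2 T(n/2) + n, for n a power of 2
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# For n = 2^k, T(n) = n*k + n = n log2(n) + n, i.e., Theta(n log n).
for k in range(1, 16):
    n = 2 ** k
    assert T(n) == n * k + n
```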

There are a number of methods for solving the sort of recurrences that show up in divide-and-conquer algorithms. The easiest method is to apply the Master Theorem, given in CLRS. Here is a slightly more restrictive version, but adequate for a lot of instances. See CLRS for the more complete version of the Master Theorem and its proof.

Theorem: (Simplified Master Theorem) Let a ≥ 1, b > 1 be constants and let T(n) be the recurrence

    T(n) = aT(n/b) + cn^k,

defined for n ≥ 0.

    Case 1: if a > b^k then T(n) is Θ(n^(log_b a)).
    Case 2: if a = b^k then T(n) is Θ(n^k log n).
    Case 3: if a < b^k then T(n) is Θ(n^k).

Using this version of the Master Theorem we can see that in our recurrence a = 2, b = 2, and k = 1, so a = b^k and Case 2 applies. Thus T(n) is Θ(n log n).

There are many recurrences that cannot be put into this form. For example, the following recurrence is quite common: T(n) = 2T(n/2) + n log n. This solves to T(n) = Θ(n log^2 n), but the Master Theorem (either this form or the one in CLRS) will not tell you this. For such recurrences, other methods are needed.
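For this last recurrence, a numeric check is at least consistent with the Θ(n log^2 n) claim: unrolling for n = 2^k gives T(n)/n = 1 + Σ_{j=1}^{k} j, so T(n)/(n log_2^2 n) tends to the constant 1/2. An illustrative sketch (not part of the notes):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(1) = 1; T(n) = 2 T(n/2) + n log2(n), for n a power of 2
    if n == 1:
        return 1.0
    return 2 * T(n // 2) + n * math.log2(n)

# The ratio T(n) / (n log2(n)^2) settles near 1/2 as n grows.
for k in (5, 10, 20):
    n = 2 ** k
    print(k, T(n) / (n * math.log2(n) ** 2))
```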

Lecture 3: Review of Sorting and Selection

Read: Review Chapts. 6–9 in CLRS.


Review of Sorting: Sorting is among the most basic problems in algorithm design. We are given a sequence of items, each associated with a given key value. The problem is to permute the items so that they are in increasing (or decreasing) order by key. Sorting is important because it is often the first step in more complex algorithms. Sorting algorithms are usually divided into two classes, internal sorting algorithms, which assume that data is stored in an array in main memory, and external sorting algorithms, which assume that data is stored on disk or some other device that is best accessed sequentially. We will only consider internal sorting.

You are probably familiar with one or more of the standard simple Θ(n^2) sorting algorithms, such as InsertionSort, SelectionSort and BubbleSort. (By the way, these algorithms are quite acceptable for small lists of, say, fewer than 20 elements.) BubbleSort is the easiest one to remember, but it is widely considered to be the worst of the three.

The three canonical efficient comparison-based sorting algorithms are MergeSort, QuickSort, and HeapSort. All run in Θ(n log n) time. Sorting algorithms often have additional properties that are of interest, depending on the application. Here are two important properties.

In-place: The algorithm uses no additional array storage, and hence (other than perhaps the system's recursion stack) it is possible to sort very large lists without the need to allocate additional working storage.

Stable: A sorting algorithm is stable if two elements that are equal remain in the same relative position after sorting is completed. This is of interest, since in some sorting applications you sort first on one key and then on another. It is nice to know that two items that are equal on the second key remain sorted on the first key.

Here is a quick summary of the fast sorting algorithms. If you are not familiar with any of these, check out the descriptions in CLRS. They are shown schematically in Fig. 1.

QuickSort: It works recursively, by first selecting a random "pivot value" from the array. Then it partitions the array into elements that are less than and greater than the pivot. Then it recursively sorts each part. QuickSort is widely regarded as the fastest of the fast sorting algorithms (on modern machines). One explanation is that its inner loop compares elements against a single pivot value, which can be stored in a register for fast access. The other algorithms compare two elements in the array. This is considered an in-place sorting algorithm, since it uses no other array storage. (It does implicitly use the system's recursion stack, but this is usually not counted.) It is not stable. There is a stable version of QuickSort, but it is not in-place. This algorithm is Θ(n log n) in the expected case, and Θ(n^2) in the worst case. If properly implemented, the probability that the algorithm takes asymptotically longer (assuming that the pivot is chosen randomly) is extremely small for large n.

Fig. 1: Common O(n log n) comparison-based sorting algorithms. (The figure shows each schematically: QuickSort partitions about a pivot x into elements < x and > x, then sorts each part; MergeSort splits the array, sorts each half, and merges; HeapSort calls buildHeap and then repeatedly calls extractMax.)


MergeSort: MergeSort also works recursively. It is a classical divide-and-conquer algorithm. The array is split into two subarrays of roughly equal size. They are sorted recursively. Then the two sorted subarrays are merged together in Θ(n) time. MergeSort is the only stable sorting algorithm of these three. The downside is that MergeSort is the only algorithm of the three that requires additional array storage (ignoring the recursion stack), and thus it is not in-place. This is because the merging process merges the two arrays into a third array. Although it is possible to merge arrays in-place, it cannot be done in Θ(n) time.

HeapSort: HeapSort is based on a nice data structure, called a heap, which is an efficient implementation of a priority queue data structure. A priority queue supports the operations of inserting a key, and deleting the element with the smallest key value. A heap can be built for n keys in Θ(n) time, and the minimum key can be extracted in Θ(log n) time. HeapSort is an in-place sorting algorithm, but it is not stable.

HeapSort works by building the heap (ordered in reverse order so that the maximum can be extracted efficiently) and then repeatedly extracting the largest element. (Why it extracts the maximum rather than the minimum is an implementation detail, but this is the key to making this work as an in-place sorting algorithm.)

If you only want to extract the k smallest values, a heap can allow you to do this in Θ(n + k log n) time. A heap has the additional advantage of being useful in contexts where the priority of elements changes. Each change of priority (key value) can be processed in Θ(log n) time.

Which sorting algorithm should you use when implementing your programs? The correct answer is probably "none of them." Unless you know that your input has some special properties that suggest a much faster alternative, it is best to rely on the library sorting procedure supplied on your system. Presumably, it has been engineered to produce the best performance for your system, and saves you from debugging time. Nonetheless, it is important to learn about sorting algorithms, since the fundamental concepts covered there apply to much more complex algorithms.

Selection: A simpler problem, related to sorting, is selection. The selection problem is, given an array A of n numbers (not sorted) and an integer k, where 1 ≤ k ≤ n, to return the kth smallest value of A. Although selection can be solved in O(n log n) time, by first sorting A and then returning the kth element of the sorted list, it is possible to select the kth smallest element in O(n) time. The algorithm is a variant of QuickSort.
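The notes do not spell the algorithm out here, but the QuickSort variant (often called QuickSelect) can be sketched as follows: partition about a random pivot, then recurse into only the one side that contains the kth element, which is what brings the expected time down to O(n). This sketch uses O(n) extra space for clarity; the CLRS version partitions in place.

```python
import random

def quickselect(A, k):
    # Return the k-th smallest element of A, for 1 <= k <= len(A).
    pivot = random.choice(A)
    less = [x for x in A if x < pivot]
    equal = [x for x in A if x == pivot]
    greater = [x for x in A if x > pivot]
    if k <= len(less):
        return quickselect(less, k)          # answer lies left of the pivot
    elif k <= len(less) + len(equal):
        return pivot                         # answer is the pivot itself
    else:
        return quickselect(greater, k - len(less) - len(equal))

A = [31, 4, 15, 92, 65, 35, 8]
assert [quickselect(A, k) for k in range(1, 8)] == sorted(A)
```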

Lower Bounds for Comparison-Based Sorting: The fact that O(n log n) sorting algorithms have been the fastest known for many years suggests that this may be the best that we can do. Can we sort faster? The claim is no, provided that the algorithm is comparison-based. A comparison-based sorting algorithm is one in which the algorithm permutes the elements based solely on the results of the comparisons that the algorithm makes between pairs of elements.

All of the algorithms we have discussed so far are comparison-based. We will see that exceptions exist in special cases. This does not preclude the possibility of sorting algorithms whose actions are determined by other operations, as we shall see below. The following theorem gives the lower bound on comparison-based sorting.

Theorem: Any comparison-based sorting algorithm has worst-case running time Ω(n log n).

We will not present a proof of this theorem, but the basic argument follows from a simple analysis of the number of possibilities and the time it takes to distinguish among them. There are n! ways to permute a given set of n numbers. Any sorting algorithm must be able to distinguish between each of these different possibilities, since two different permutations need to be treated differently. Since each comparison leads to only two possible outcomes, the execution of the algorithm can be viewed as a binary tree. (This is a bit abstract, but given a sorting algorithm it is not hard, though quite tedious, to trace its execution and set up a new node each time a decision is made.) This binary tree, called a decision tree, must have at least n! leaves, one for each of the possible input permutations. Such a tree, even if perfectly balanced, must have height at least lg(n!). By Stirling's approximation, n! is, up to constant factors, roughly (n/e)^n. Plugging this in and simplifying yields the Ω(n log n) lower bound. This can also be generalized to show that the average-case time to sort is also Ω(n log n).

Linear Time Sorting: The Ω(n log n) lower bound implies that if we hope to sort numbers faster than in O(n log n) time, we cannot do it by making comparisons alone. In some special cases, it is possible to sort without the use of comparisons. This leads to the possibility of sorting in linear (that is, O(n)) time. Here are three such algorithms.

Counting Sort: Counting sort assumes that each input is an integer in the range from 1 to k. The algorithm sorts in Θ(n + k) time. Thus, if k is O(n), this implies that the resulting sorting algorithm runs in Θ(n) time. The algorithm requires an additional Θ(n + k) working storage but has the nice feature that it is stable. The algorithm is remarkably simple, but deceptively clever. You are referred to CLRS for the details.
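For reference, here is one way the algorithm can be written, a sketch consistent with the CLRS description (inputs are integers in 1..k):

```python
def counting_sort(A, k):
    # Stable sort of integers in the range 1..k, in Theta(n + k) time.
    count = [0] * (k + 1)
    for x in A:
        count[x] += 1
    for i in range(1, k + 1):       # prefix sums: count[i] = # of elements <= i
        count[i] += count[i - 1]
    out = [None] * len(A)
    for x in reversed(A):           # scanning in reverse keeps the sort stable
        count[x] -= 1
        out[count[x]] = x
    return out

assert counting_sort([3, 1, 4, 1, 5, 2], 5) == [1, 1, 2, 3, 4, 5]
```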

Radix Sort: The main shortcoming of CountingSort is that (due to space requirements) it is only practical for a very small range of integers. If the integers are in the range from, say, 1 to a million, we may not want to allocate an array of a million elements. RadixSort provides a nice way around this by sorting numbers one digit, or one byte, or generally, some group of bits, at a time. As the number of bits in each group increases, the algorithm is faster, but the space requirements go up.

The idea is very simple. Let's think of our list as being composed of n integers, each having d decimal digits (or digits in any base). To sort these integers we simply sort repeatedly, starting at the lowest order digit, and finishing with the highest order digit. Since the sorting algorithm is stable, we know that if the numbers are already sorted with respect to low order digits, and then later we sort with respect to high order digits, numbers having the same high order digit will remain sorted with respect to their low order digit. An example is shown in Fig. 2.

    Input                                      Output
     576      49[4]      9[5]4      [1]76      176
     494      19[4]      5[7]6      [1]94      194
     194      95[4]      1[7]6      [2]78      278
     296  ⇒  57[6]  ⇒  2[7]8  ⇒  [2]96  ⇒  296
     278      29[6]      4[9]4      [4]94      494
     176      17[6]      1[9]4      [5]76      576
     954      27[8]      2[9]6      [9]54      954

Fig. 2: Example of RadixSort.

The running time is Θ(d(n + k)) where d is the number of digits in each value, n is the length of the list, and k is the number of distinct values each digit may have. The space needed is Θ(n + k).

A common application of this algorithm is for sorting integers over some range that is larger than n, but still polynomial in n. For example, suppose that you wanted to sort a list of integers in the range from 1 to n^2. First, you could subtract 1 so that they are now in the range from 0 to n^2 − 1. Observe that any number in this range can be expressed as a 2-digit number, where each digit is over the range from 0 to n − 1. In particular, given any integer L in this range, we can write L = an + b, where a = ⌊L/n⌋ and b = L mod n. Now, we can think of L as the 2-digit number (a, b). So, we can radix sort these numbers in time Θ(2(n + n)) = Θ(n). In general this works to sort any n numbers over the range from 1 to n^d, in Θ(dn) time.
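A sketch of this 2-digit scheme in Python (illustrative, not part of the notes; each pass here is a stable bucket-based counting sort keyed on one base-n digit):

```python
import random

def radix_sort_square_range(A):
    # Sort n integers in the range 0..n^2 - 1 in Theta(n) time, treating
    # each value as the 2-digit base-n number (x // n, x % n).
    n = len(A)

    def stable_pass(A, digit):
        buckets = [[] for _ in range(n)]
        for x in A:
            buckets[digit(x)].append(x)     # appending preserves stability
        return [x for b in buckets for x in b]

    A = stable_pass(A, lambda x: x % n)      # low-order digit first
    return stable_pass(A, lambda x: x // n)  # then high-order digit

n = 50
A = [random.randrange(n * n) for _ in range(n)]
assert radix_sort_square_range(A) == sorted(A)
```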

BucketSort: CountingSort and RadixSort are only good for sorting small integers, or at least objects (like characters) that can be encoded as small integers. What if you want to sort a set of floating-point numbers? In the worst case you are pretty much stuck with using one of the comparison-based sorting algorithms, such as QuickSort, MergeSort, or HeapSort. However, in special cases where you have reason to believe that your numbers are roughly uniformly distributed over some range, it is possible to do better. (Note that this is a strong assumption. This algorithm should not be applied unless you have good reason to believe that this is the case.)

Suppose that the numbers to be sorted range over some interval, say [0, 1). (It is possible in O(n) time

to ﬁnd the maximum and minimum values, and scale the numbers to ﬁt into this range.) The idea is

the subdivide this interval into n subintervals. For example, if n = 100, the subintervals would be

[0, 0.01), [0.01, 0.02), [0.02, 0.03), and so on. We create n different buckets, one for each interval. Then

we make a pass through the list to be sorted, and using the ﬂoor function, we can map each value to its

bucket index. (In this case, the index of element x would be ¸100x|.) We then sort each bucket in as-

cending order. The number of points per bucket should be fairly small, so even a quadratic time sorting

algorithm (e.g. BubbleSort or InsertionSort) should work. Finally, all the sorted buckets are concatenated

together.

The analysis relies on the fact that, assuming that the numbers are uniformly distributed, the number of

elements lying within each bucket on average is a constant. Thus, the expected time needed to sort each

bucket is O(1). Since there are n buckets, the total sorting time is Θ(n). An example illustrating this idea

is given in Fig. 3.

[Figure: the input array A = (.81, .17, .59, .38, .86, .14, .10, .71, .42, .56) distributed into buckets B[0..9] by the floor function, then concatenated in sorted order.]

Fig. 3: BucketSort.
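A minimal sketch of this procedure (my own code, not from the notes): distribute the values into n buckets via the floor function, sort each bucket with InsertionSort, and concatenate.

```python
def bucket_sort(a):
    """Sort numbers in [0, 1), assumed roughly uniform, in expected Theta(n) time."""
    n = len(a)
    buckets = [[] for _ in range(n)]
    for x in a:
        buckets[int(n * x)].append(x)    # element x goes to bucket floor(n*x)
    out = []
    for b in buckets:
        # insertion sort: fine, since each bucket holds O(1) elements on average
        for i in range(1, len(b)):
            key, j = b[i], i - 1
            while j >= 0 and b[j] > key:
                b[j + 1] = b[j]
                j -= 1
            b[j + 1] = key
        out.extend(b)
    return out
```

Running it on the input of Fig. 3, `bucket_sort([.81, .17, .59, .38, .86, .14, .10, .71, .42, .56])`, returns the values in increasing order.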

Lecture 4: Dynamic Programming: Longest Common Subsequence

Read: Introduction to Chapt 15, and Section 15.4 in CLRS.

Dynamic Programming: We begin discussion of an important algorithm design technique, called dynamic programming (or DP for short). The technique is among the most powerful for designing algorithms for optimization problems. Dynamic programming solutions are based on a few common elements. Dynamic programming problems are typically optimization problems (find the minimum or maximum cost solution, subject to various constraints). The technique is related to divide-and-conquer, in the sense that it breaks problems down into smaller problems that it solves recursively. However, because of the somewhat different nature of dynamic programming problems, standard divide-and-conquer solutions are not usually efficient. The basic elements that characterize a dynamic programming algorithm are:

Substructure: Decompose your problem into smaller (and hopefully simpler) subproblems. Express the solution of the original problem in terms of solutions for smaller problems.

Table-structure: Store the answers to the subproblems in a table. This is done because subproblem solutions are reused many times.

Bottom-up computation: Combine solutions on smaller subproblems to solve larger subproblems. (Our text also discusses a top-down alternative, called memoization.)

The most important question in designing a DP solution to a problem is how to set up the subproblem structure. This is called the formulation of the problem. Dynamic programming is not applicable to all optimization problems. There are two important elements that a problem must have in order for DP to be applicable.

Optimal substructure: (Sometimes called the principle of optimality.) It states that for the global problem to be solved optimally, each subproblem should be solved optimally. (Not all optimization problems satisfy this. Sometimes it is better to lose a little on one subproblem in order to make a big gain on another.)

Polynomially many subproblems: An important aspect to the efficiency of DP is that the total number of subproblems to be solved should be at most a polynomial number.

Strings: One important area of algorithm design is the study of algorithms for character strings. There are a number of important problems here. Among the most important has to do with efficiently searching for a substring or more generally a pattern in a large piece of text. (This is what text editors and programs like “grep” do when you perform a search.) In many instances you do not want to find a piece of text exactly, but rather something that is similar. This arises for example in genetics research and in document retrieval on the web. One common method of measuring the degree of similarity between two strings is to compute their longest common subsequence.

Longest Common Subsequence: Let us think of character strings as sequences of characters. Given two sequences X = ⟨x_1, x_2, . . . , x_m⟩ and Z = ⟨z_1, z_2, . . . , z_k⟩, we say that Z is a subsequence of X if there is a strictly increasing sequence of k indices ⟨i_1, i_2, . . . , i_k⟩ (1 ≤ i_1 < i_2 < . . . < i_k ≤ m) such that Z = ⟨x_{i_1}, x_{i_2}, . . . , x_{i_k}⟩. For example, let X = ⟨ABRACADABRA⟩ and let Z = ⟨AADAA⟩; then Z is a subsequence of X.

Given two strings X and Y, the longest common subsequence of X and Y is a longest sequence Z that is a subsequence of both X and Y. For example, let X = ⟨ABRACADABRA⟩ and let Y = ⟨YABBADABBADOO⟩. Then the longest common subsequence is Z = ⟨ABADABA⟩. See Fig. 4.

[Figure: X = ABRACADABRA and Y = YABBADABBADOO, with the common subsequence LCS = ABADABA marked in both strings.]

Fig. 4: An example of the LCS of two strings X and Y.

The Longest Common Subsequence Problem (LCS) is the following. Given two sequences X = ⟨x_1, . . . , x_m⟩ and Y = ⟨y_1, . . . , y_n⟩, determine a longest common subsequence. Note that it is not always unique. For example, the LCS of ⟨ABC⟩ and ⟨BAC⟩ is either ⟨AC⟩ or ⟨BC⟩.

DP Formulation for LCS: The simple brute-force solution to the problem would be to try all possible subsequences from one string, and search for matches in the other string, but this is hopelessly inefficient, since there are an exponential number of possible subsequences.

Instead, we will derive a dynamic programming solution. In typical DP fashion, we need to break the problem into smaller pieces. There are many ways to do this for strings, but it turns out for this problem that considering all pairs of prefixes will suffice for us. A prefix of a sequence is just an initial string of values, X_i = ⟨x_1, x_2, . . . , x_i⟩. X_0 is the empty sequence.

The idea will be to compute the longest common subsequence for every possible pair of prefixes. Let c[i, j] denote the length of the longest common subsequence of X_i and Y_j. For example, in the above case we have X_5 = ⟨ABRAC⟩ and Y_6 = ⟨YABBAD⟩. Their longest common subsequence is ⟨ABA⟩. Thus, c[5, 6] = 3.

Which of the c[i, j] values do we compute? Since we don’t know which will lead to the final optimum, we compute all of them. Eventually we are interested in c[m, n] since this will be the LCS of the two entire strings. The idea is to compute c[i, j] assuming that we already know the values of c[i′, j′], for i′ ≤ i and j′ ≤ j (but not both equal). Here are the possible cases.

Basis: c[i, 0] = c[0, j] = 0. If either sequence is empty, then the longest common subsequence is empty.

Last characters match: Suppose x_i = y_j. For example: Let X_i = ⟨ABCA⟩ and let Y_j = ⟨DACA⟩. Since both end in A, we claim that the LCS must also end in A. (We will leave the proof as an exercise.) Since the A is part of the LCS we may find the overall LCS by removing A from both sequences and taking the LCS of X_{i−1} = ⟨ABC⟩ and Y_{j−1} = ⟨DAC⟩, which is ⟨AC⟩, and then adding A to the end, giving ⟨ACA⟩ as the answer. (At first you might object: But how did you know that these two A’s matched with each other? The answer is that we don’t, but it will not make the LCS any smaller if we do.) This is illustrated at the top of Fig. 5.

if x_i = y_j then c[i, j] = c[i − 1, j − 1] + 1

[Figure: top, the last characters match (x_i = y_j = A): add the matched character to the LCS and recurse on X_{i−1} and Y_{j−1}; bottom, the last characters do not match: take the max of skipping x_i and skipping y_j.]

Fig. 5: The possible cases in the DP formulation of LCS.

Last characters do not match: Suppose that x_i ≠ y_j. In this case x_i and y_j cannot both be in the LCS (since they would both have to be the last character of the LCS). Thus either x_i is not part of the LCS, or y_j is not part of the LCS (and possibly both are not part of the LCS).

At this point it may be tempting to try to make a “smart” choice. By analyzing the last few characters of X_i and Y_j, perhaps we can figure out which character is best to discard. However, this approach is doomed to failure (and you are strongly encouraged to think about this, since it is a common point of confusion). Instead, our approach is to take advantage of the fact that we have already precomputed smaller subproblems, and use these results to guide us.

In the first case (x_i is not in the LCS) the LCS of X_i and Y_j is the LCS of X_{i−1} and Y_j, which is c[i − 1, j]. In the second case (y_j is not in the LCS) the LCS is the LCS of X_i and Y_{j−1}, which is c[i, j − 1]. We do not know which is the case, so we try both and take the one that gives us the longer LCS. This is illustrated at the bottom half of Fig. 5.

if x_i ≠ y_j then c[i, j] = max(c[i − 1, j], c[i, j − 1])

Combining these observations we have the following formulation:

c[i, j] =  0                               if i = 0 or j = 0,
           c[i − 1, j − 1] + 1             if i, j > 0 and x_i = y_j,
           max(c[i, j − 1], c[i − 1, j])   if i, j > 0 and x_i ≠ y_j.

Implementing the Formulation: The task now is to simply implement this formulation. We concentrate only on computing the maximum length of the LCS. Later we will see how to extract the actual sequence. We will store some helpful pointers in a parallel array, b[0..m, 0..n]. The code is shown below, and an example is illustrated in Fig. 6.

[Figure: two tables for X = ⟨BACDB⟩ (rows) and Y = ⟨BDCB⟩ (columns); the left shows the LCS length table c[i, j], and the right shows the same table with back pointers included, with the traceback starting at c[m, n] = 3 and yielding LCS = ⟨BCB⟩.]

Fig. 6: Longest common subsequence example for the sequences X = ⟨BACDB⟩ and Y = ⟨BDCB⟩. The numeric table entries are the values of c[i, j] and the arrow entries are used in the extraction of the sequence.

Build LCS Table
LCS(x[1..m], y[1..n]) {                      // compute LCS table
    int c[0..m, 0..n]
    int b[0..m, 0..n]                        // back pointers
    for i = 0 to m                           // init column 0
        c[i,0] = 0; b[i,0] = skipX
    for j = 0 to n                           // init row 0
        c[0,j] = 0; b[0,j] = skipY
    for i = 1 to m                           // fill rest of table
        for j = 1 to n
            if (x[i] == y[j])                // take X[i] (= Y[j]) for LCS
                c[i,j] = c[i-1,j-1]+1; b[i,j] = addXY
            else if (c[i-1,j] >= c[i,j-1])   // X[i] not in LCS
                c[i,j] = c[i-1,j]; b[i,j] = skipX
            else                             // Y[j] not in LCS
                c[i,j] = c[i,j-1]; b[i,j] = skipY
    return c[m,n]                            // return length of LCS
}

Extracting the LCS
getLCS(x[1..m], y[1..n], b[0..m,0..n]) {
    LCSstring = empty string
    i = m; j = n                             // start at lower right
    while (i != 0 && j != 0)                 // go until upper left
        switch b[i,j]
            case addXY:                      // add X[i] (= Y[j])
                add x[i] (or equivalently y[j]) to front of LCSstring
                i--; j--; break
            case skipX: i--; break           // skip X[i]
            case skipY: j--; break           // skip Y[j]
    return LCSstring
}


The running time of the algorithm is clearly O(mn) since there are two nested loops with m and n iterations, respectively. The algorithm also uses O(mn) space.

Extracting the Actual Sequence: Extracting the final LCS is done by using the back pointers stored in b[0..m, 0..n]. Intuitively b[i, j] = addXY means that X[i] and Y[j] together form the last character of the LCS. So we take this common character, and continue with entry b[i − 1, j − 1] to the northwest (↖). If b[i, j] = skipX, then we know that X[i] is not in the LCS, and so we skip it and go to b[i − 1, j] above us (↑). Similarly, if b[i, j] = skipY, then we know that Y[j] is not in the LCS, and so we skip it and go to b[i, j − 1] to the left (←). Following these back pointers, and outputting a character with each diagonal move, gives the final subsequence.
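The pseudocode above translates directly into a short runnable sketch (my own Python, not code from the notes, using the same tie-breaking rule as the pseudocode: prefer skipping X[i] on ties):

```python
def lcs(x, y):
    """Return (length, string) of a longest common subsequence of x and y."""
    m, n = len(x), len(y)
    # c[i][j] = LCS length of the prefixes x[:i] and y[:j]; row/column 0 are empty prefixes
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[None] * (n + 1) for _ in range(m + 1)]     # back pointers
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:                 # last characters match
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "addXY"
            elif c[i - 1][j] >= c[i][j - 1]:         # x[i-1] not in LCS
                c[i][j] = c[i - 1][j]
                b[i][j] = "skipX"
            else:                                    # y[j-1] not in LCS
                c[i][j] = c[i][j - 1]
                b[i][j] = "skipY"
    # follow the back pointers from the lower right corner
    out, i, j = [], m, n
    while i != 0 and j != 0:
        if b[i][j] == "addXY":
            out.append(x[i - 1]); i -= 1; j -= 1
        elif b[i][j] == "skipX":
            i -= 1
        else:
            j -= 1
    return c[m][n], "".join(reversed(out))
```

For the example of Fig. 6, `lcs("BACDB", "BDCB")` returns length 3 with the subsequence BCB.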

Lecture 5: Dynamic Programming: Chain Matrix Multiplication

Read: Chapter 15 of CLRS, and Section 15.2 in particular.

Chain Matrix Multiplication: This problem involves the question of determining the optimal sequence for performing a series of operations. This general class of problem is important in compiler design for code optimization and in databases for query optimization. We will study the problem in a very restricted instance, where the dynamic programming issues are easiest to see.

Suppose that we wish to multiply a series of matrices

A_1 A_2 . . . A_n

Matrix multiplication is an associative but not a commutative operation. This means that we are free to parenthesize the above multiplication however we like, but we are not free to rearrange the order of the matrices. Also recall that when two (nonsquare) matrices are being multiplied, there are restrictions on the dimensions. A p × q matrix has p rows and q columns. You can multiply a p × q matrix A times a q × r matrix B, and the result will be a p × r matrix C. (The number of columns of A must equal the number of rows of B.) In particular for 1 ≤ i ≤ p and 1 ≤ j ≤ r,

C[i, j] = Σ_{k=1}^{q} A[i, k] B[k, j].

This corresponds to the (hopefully familiar) rule that the [i, j] entry of C is the dot product of the ith (horizontal) row of A and the jth (vertical) column of B. Observe that there are pr total entries in C and each takes O(q) time to compute; thus the total time to multiply these two matrices is proportional to the product of the dimensions, pqr.

[Figure: A (p × q) times B (q × r) equals C (p × r); multiplication time = pqr.]

Fig. 7: Matrix Multiplication.

Note that although any legal parenthesization will lead to a valid result, not all involve the same number of operations. Consider the case of 3 matrices: let A_1 be 5 × 4, A_2 be 4 × 6 and A_3 be 6 × 2.

multCost[((A_1 A_2) A_3)] = (5 · 4 · 6) + (5 · 6 · 2) = 180,
multCost[(A_1 (A_2 A_3))] = (4 · 6 · 2) + (5 · 4 · 2) = 88.

Even for this small example, considerable savings can be achieved by reordering the evaluation sequence.


Chain Matrix Multiplication Problem: Given a sequence of matrices A_1, A_2, . . . , A_n and dimensions p_0, p_1, . . . , p_n where A_i is of dimension p_{i−1} × p_i, determine the order of multiplication (represented, say, as a binary tree) that minimizes the number of operations.

Important Note: This algorithm does not perform the multiplications; it just determines the best order in which to perform the multiplications.

Naive Algorithm: We could write a procedure which tries all possible parenthesizations. Unfortunately, the number of ways of parenthesizing an expression is very large. If you have just one or two matrices, then there is only one way to parenthesize. If you have n items, then there are n − 1 places where you could break the list with the outermost pair of parentheses, namely just after the 1st item, just after the 2nd item, etc., and just after the (n − 1)st item. When we split just after the kth item, we create two sublists to be parenthesized, one with k items, and the other with n − k items. Then we could consider all the ways of parenthesizing these. Since these are independent choices, if there are L ways to parenthesize the left sublist and R ways to parenthesize the right sublist, then the total is L · R. This suggests the following recurrence for P(n), the number of different ways of parenthesizing n items:

P(n) =  1                                  if n = 1,
        Σ_{k=1}^{n−1} P(k) P(n − k)        if n ≥ 2.

This is related to a famous function in combinatorics called the Catalan numbers (which in turn is related to the number of different binary trees on n nodes). In particular P(n) = C(n − 1), where C(n) is the nth Catalan number:

C(n) = (1 / (n + 1)) · (2n choose n).

Applying Stirling’s formula (which is given in our text), we find that C(n) ∈ Ω(4^n / n^{3/2}). Since 4^n is exponential and n^{3/2} is just polynomial, the exponential will dominate, implying that the function grows very fast. Thus, this will not be practical except for very small n. In summary, brute force is not an option.
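The recurrence and the closed form can be checked against one another in a few lines of code. This is my own sketch, not from the notes; `math.comb` computes the binomial coefficient.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n):
    """Number of ways to parenthesize n items, by the recurrence above."""
    if n == 1:
        return 1
    return sum(P(k) * P(n - k) for k in range(1, n))

def catalan(n):
    """C(n) = (1/(n+1)) * binom(2n, n); always an integer."""
    return math.comb(2 * n, n) // (n + 1)
```

For example, P(10) = C(9) = 4862, and by P(20) the count is already over a billion, which makes the hopelessness of brute force concrete.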

Dynamic Programming Approach: This problem, like other dynamic programming problems, involves determining a structure (in this case, a parenthesization). We want to break the problem into subproblems, whose solutions can be combined to solve the global problem. As is common to any DP solution, we need to find some way to break the problem into smaller subproblems, and we need to determine a recursive formulation, which represents the optimum solution to each problem in terms of solutions to the subproblems. Let us think of how we can do this.

Since matrices cannot be reordered, it makes sense to think about sequences of matrices. Let A_{i..j} denote the result of multiplying matrices i through j. It is easy to see that A_{i..j} is a p_{i−1} × p_j matrix. (Think about this for a second to be sure you see why.) Now, in order to determine how to perform this multiplication optimally, we need to make many decisions. What we want to do is to break the problem into problems of a similar structure. In parenthesizing the expression, we can consider the highest level of parenthesization. At this level we are simply multiplying two matrices together. That is, for any k, 1 ≤ k ≤ n − 1,

A_{1..n} = A_{1..k} A_{k+1..n}.

Thus the problem of determining the optimal sequence of multiplications is broken up into two questions: how do we decide where to split the chain (what is k?) and how do we parenthesize the subchains A_{1..k} and A_{k+1..n}? The subchain problems can be solved recursively, by applying the same scheme.

So, let us think about the problem of determining the best value of k. At this point, you may be tempted to consider some clever ideas. For example, since we want matrices with small dimensions, pick the value of k that minimizes p_k. This is not a bad idea, in principle. (After all it might work. It just turns out that it doesn’t in this case. This takes a bit of thinking, which you should try.) Instead, as is true in almost all dynamic programming solutions, we will do the dumbest thing of simply considering all possible choices of k, and taking the best of them. Usually trying all possible choices is bad, since it quickly leads to an exponential number of total possibilities. What saves us here is that there are only O(n^2) different sequences of matrices. (There are (n choose 2) = n(n − 1)/2 ways of choosing i and j to form A_{i..j}, to be precise.) Thus, we do not encounter the exponential growth.

Notice that our chain matrix multiplication problem satisfies the principle of optimality, because once we decide to break the sequence into the product A_{1..k} A_{k+1..n}, we should compute each subsequence optimally. That is, for the global problem to be solved optimally, the subproblems must be solved optimally as well.

Dynamic Programming Formulation: We will store the solutions to the subproblems in a table, and build the table in a bottom-up manner. For 1 ≤ i ≤ j ≤ n, let m[i, j] denote the minimum number of multiplications needed to compute A_{i..j}. The optimum cost can be described by the following recursive formulation.

Basis: Observe that if i = j then the sequence contains only one matrix, and so the cost is 0. (There is nothing to multiply.) Thus, m[i, i] = 0.

Step: If i < j, then we are asking about the product A_{i..j}. This can be split by considering each k, i ≤ k < j, as A_{i..k} times A_{k+1..j}. The optimum times to compute A_{i..k} and A_{k+1..j} are, by definition, m[i, k] and m[k + 1, j], respectively. We may assume that these values have been computed previously and are already stored in our array. Since A_{i..k} is a p_{i−1} × p_k matrix, and A_{k+1..j} is a p_k × p_j matrix, the time to multiply them is p_{i−1} p_k p_j. This suggests the following recursive rule for computing m[i, j]:

m[i, i] = 0
m[i, j] = min_{i ≤ k < j} (m[i, k] + m[k + 1, j] + p_{i−1} p_k p_j)   for i < j.

[Figure: the chain A_i . . . A_j split at some k, i ≤ k < j, into the subproducts A_{i..k} and A_{k+1..j}.]

Fig. 8: Dynamic Programming Formulation.

It is not hard to convert this rule into a procedure, which is given below. The only tricky part is arranging the order in which to compute the values. In the process of computing m[i, j] we need to access values m[i, k] and m[k + 1, j] for k lying between i and j. This suggests that we should organize our computation according to the number of matrices in the subsequence. Let L = j − i + 1 denote the length of the subchain being multiplied. The subchains of length 1 (m[i, i]) are trivial to compute. Then we build up by computing the subchains of lengths 2, 3, . . . , n. The final answer is m[1, n]. We need to be a little careful in setting up the loops. If a subchain of length L starts at position i, then j = i + L − 1. Since we want j ≤ n, this means that i + L − 1 ≤ n, or in other words, i ≤ n − L + 1. So our loop for i runs from 1 to n − L + 1 (in order to keep j in bounds). The code is presented below.

The array s[i, j] will be explained later. It is used to extract the actual sequence. The running time of the procedure is Θ(n^3). We’ll leave this as an exercise in solving sums, but the key is that there are three nested loops, and each can iterate at most n times.

Extracting the final Sequence: Extracting the actual multiplication sequence is a fairly easy extension. The basic idea is to leave a split marker indicating what the best split is, that is, the value of k that leads to the minimum


Chain Matrix Multiplication
Matrix-Chain(array p[0..n]) {
    array m[1..n,1..n]                       // m[i,j] = min cost of computing A_{i..j}
    array s[1..n-1,2..n]                     // s[i,j] = best split point
    for i = 1 to n do m[i,i] = 0;            // initialize
    for L = 2 to n do {                      // L = length of subchain
        for i = 1 to n-L+1 do {
            j = i + L - 1;
            m[i,j] = INFINITY;
            for k = i to j-1 do {            // check all splits
                q = m[i, k] + m[k+1, j] + p[i-1]*p[k]*p[j]
                if (q < m[i, j]) {
                    m[i,j] = q;
                    s[i,j] = k;
                }
            }
        }
    }
    return m[1,n] (final cost) and s (splitting markers);
}

value of m[i, j]. We can maintain a parallel array s[i, j] in which we will store the value of k providing the optimal split. For example, suppose that s[i, j] = k. This tells us that the best way to multiply the subchain A_{i..j} is to first multiply the subchain A_{i..k}, then multiply the subchain A_{k+1..j}, and finally multiply these together. Intuitively, s[i, j] tells us what multiplication to perform last. Note that we only need to store s[i, j] when we have at least two matrices, that is, if j > i.

The actual multiplication algorithm uses the s[i, j] value to determine how to split the current sequence. Assume that the matrices are stored in an array of matrices A[1..n], and that s[i, j] is global to this recursive procedure. The recursive procedure Mult, given below, performs this computation and returns a matrix.

Extracting Optimum Sequence
Mult(i, j) {
    if (i == j)                  // basis case
        return A[i];
    else {
        k = s[i,j]
        X = Mult(i, k)           // X = A[i]...A[k]
        Y = Mult(k+1, j)         // Y = A[k+1]...A[j]
        return X*Y;              // multiply matrices X and Y
    }
}

In the figure below we show an example. This algorithm is tricky, so it would be a good idea to trace through this example (and the one given in the text). The initial set of dimensions are ⟨5, 4, 6, 2, 7⟩, meaning that we are multiplying A_1 (5 × 4) times A_2 (4 × 6) times A_3 (6 × 2) times A_4 (2 × 7). The optimal sequence is ((A_1 (A_2 A_3)) A_4).
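The whole pipeline can be made runnable in a few lines. This is my own sketch, not the notes’ code: it combines Matrix-Chain with the extraction step, but instead of multiplying matrices the recursive walk returns the parenthesization as a string. Indices follow the pseudocode, with dimensions p[0..n].

```python
def matrix_chain(p):
    """Given dimensions p[0..n] (A_i is p[i-1] x p[i]), return (min cost, split table s)."""
    n = len(p) - 1
    INF = float("inf")
    m = [[0] * (n + 1) for _ in range(n + 1)]    # m[i][j] = min cost of A_{i..j}
    s = [[0] * (n + 1) for _ in range(n + 1)]    # s[i][j] = best split point k
    for L in range(2, n + 1):                    # L = length of subchain
        for i in range(1, n - L + 2):
            j = i + L - 1
            m[i][j] = INF
            for k in range(i, j):                # check all splits
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m[1][n], s

def order(s, i, j):
    """Recover the optimal parenthesization of A_i..A_j from the split markers."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return f"({order(s, i, k)}{order(s, k + 1, j)})"
```

For the dimensions ⟨5, 4, 6, 2, 7⟩ of the example, `matrix_chain([5, 4, 6, 2, 7])` gives cost 158, and `order` yields ((A1(A2A3))A4), matching Fig. 9.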

Lecture 6: Dynamic Programming: Minimum Weight Triangulation

Read: This is not covered in CLRS.


[Figure: the tables m[i, j] and s[i, j] for dimensions p = ⟨5, 4, 6, 2, 7⟩, with entries m[1, 2] = 120, m[2, 3] = 48, m[3, 4] = 84, m[1, 3] = 88, m[2, 4] = 104, and m[1, 4] = 158; the split markers give the final order ((A_1 (A_2 A_3)) A_4).]

Fig. 9: Chain Matrix Multiplication Example.

Polygons and Triangulations: Let’s consider a geometric problem that outwardly appears to be quite different from chain-matrix multiplication, but actually has remarkable similarities. We begin with a number of definitions. Define a polygon to be a piecewise linear closed curve in the plane. In other words, we form a cycle by joining line segments end to end. The line segments are called the sides of the polygon and the endpoints are called the vertices. A polygon is simple if it does not cross itself, that is, if the sides do not intersect one another except for two consecutive sides sharing a common vertex. A simple polygon subdivides the plane into its interior, its boundary and its exterior. A simple polygon is said to be convex if every interior angle is at most 180 degrees. Vertices with interior angle equal to 180 degrees are normally allowed, but for this problem we will assume that no such vertices exist.

[Figure: a (nonsimple) polygon, a simple polygon, and a convex polygon.]

Fig. 10: Polygons.

Given a convex polygon, we assume that its vertices are labeled in counterclockwise order P = ⟨v_1, . . . , v_n⟩. We will assume that indexing of vertices is done modulo n, so v_0 = v_n. This polygon has n sides, v_{i−1} v_i.

Given two nonadjacent vertices v_i and v_j, where i < j − 1, the line segment v_i v_j is a chord. (If the polygon is simple but not convex, we include the additional requirement that the interior of the segment must lie entirely in the interior of P.) Any chord subdivides the polygon into two polygons: ⟨v_i, v_{i+1}, . . . , v_j⟩ and ⟨v_j, v_{j+1}, . . . , v_i⟩.

A triangulation of a convex polygon P is a subdivision of the interior of P into a collection of triangles with disjoint interiors, whose vertices are drawn from the vertices of P. Equivalently, we can define a triangulation as a maximal set T of nonintersecting chords. (In other words, every chord that is not in T intersects the interior of some chord in T.) It is easy to see that such a set of chords subdivides the interior of the polygon into a collection of triangles with pairwise disjoint interiors (and hence the name triangulation). It is not hard to prove (by induction) that every triangulation of an n-sided polygon consists of n − 3 chords and n − 2 triangles. Triangulations are of interest for a number of reasons. Many geometric algorithms operate by first decomposing a complex polygonal shape into triangles.

In general, given a convex polygon, there are many possible triangulations. In fact, the number is exponential in n, the number of sides. Which triangulation is the “best”? There are many criteria that are used depending on the application. One criterion is to imagine that you must “pay” for the ink you use in drawing the triangulation, and you want to minimize the amount of ink you use. (This may sound fanciful, but minimizing wire length is an important condition in chip design. Further, this is one of many properties which we could choose to optimize.) This suggests the following optimization problem:

Minimum-weight convex polygon triangulation: Given a convex polygon, determine the triangulation that minimizes the sum of the perimeters of its triangles. (See Fig. 11.)

[Figure: a triangulation and a lower weight triangulation of the same convex polygon.]

Fig. 11: Triangulations of convex polygons, and the minimum weight triangulation.

Given three distinct vertices v_i, v_j, v_k, we define the weight of the associated triangle by the weight function

w(v_i, v_j, v_k) = |v_i v_j| + |v_j v_k| + |v_k v_i|,

where |v_i v_j| denotes the length of the line segment v_i v_j.

Dynamic Programming Solution: Let us consider an (n + 1)-sided polygon P = ⟨v_0, v_1, . . . , v_n⟩. Let us assume that these vertices have been numbered in counterclockwise order. To derive a DP formulation we need to define a set of subproblems from which we can derive the optimum solution. For 0 ≤ i < j ≤ n, define t[i, j] to be the weight of the minimum weight triangulation for the subpolygon that lies to the right of the directed chord v_i v_j, that is, the polygon with the counterclockwise vertex sequence ⟨v_i, v_{i+1}, . . . , v_j⟩. Observe that if we can compute this quantity for all such i and j, then the weight of the minimum weight triangulation of the entire polygon can be extracted as t[0, n]. (As usual, we only compute the minimum weight. But, it is easy to modify the procedure to extract the actual triangulation.)

As a basis case, we define the weight of the trivial “2-sided polygon” to be zero, implying that t[i, i + 1] = 0. In general, to compute t[i, j], consider the subpolygon ⟨v_i, v_{i+1}, . . . , v_j⟩, where j > i + 1. One of the chords of this polygon is the side v_i v_j. We may split this subpolygon by introducing a triangle whose base is this chord, and whose third vertex is any vertex v_k, where i < k < j. This subdivides the polygon into the subpolygons ⟨v_i, v_{i+1}, . . . , v_k⟩ and ⟨v_k, v_{k+1}, . . . , v_j⟩ whose minimum weights are already known to us as t[i, k] and t[k, j]. In addition we should consider the weight of the newly added triangle △v_i v_k v_j. Thus, we have the following recursive rule:

t[i, j] =  0                                                     if j = i + 1,
           min_{i < k < j} (t[i, k] + t[k, j] + w(v_i, v_k, v_j))   if j > i + 1.

The final output is the overall minimum weight, which is t[0, n]. This is illustrated in Fig. 12.

Note that this has almost exactly the same structure as the recursive definition used in the chain matrix multiplication algorithm (except that some indices are different by 1). The same Θ(n^3) algorithm can be applied with only minor changes.
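Since the notes leave the implementation as a small modification of the chain matrix code, here is one possible sketch (my own Python, not from the notes), using Euclidean edge lengths as the weight function:

```python
from math import dist

def min_weight_triangulation(v):
    """v = list of (x, y) vertices v_0..v_n of a convex polygon, in ccw order.
    Returns t[0][n], the minimum total triangle perimeter, per the recurrence above."""
    n = len(v) - 1
    def w(i, k, j):                      # perimeter of triangle v_i v_k v_j
        return dist(v[i], v[k]) + dist(v[k], v[j]) + dist(v[j], v[i])
    # t[i][j] starts at 0, which covers the basis t[i][i+1] = 0
    t = [[0.0] * (n + 1) for _ in range(n + 1)]
    for L in range(2, n + 1):            # gap j - i, smallest subpolygons first
        for i in range(0, n - L + 1):
            j = i + L
            t[i][j] = min(t[i][k] + t[k][j] + w(i, k, j) for k in range(i + 1, j))
    return t[0][n]
```

As a sanity check, for a unit square either diagonal is optimal, so the answer is the square’s perimeter plus twice the diagonal, 4 + 2√2 ≈ 6.83.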

Relationship to Binary Trees: One explanation behind the similarity of triangulations and the chain matrix multiplication algorithm is to observe that both are fundamentally related to binary trees. In the case of the chain matrix multiplication, the associated binary tree is the evaluation tree for the multiplication, where the leaves of the tree correspond to the matrices, and each node of the tree is associated with a product of a sequence of two or more matrices. To see that there is a similar correspondence here, consider an (n + 1)-sided convex polygon P = ⟨v_0, v_1, . . . , v_n⟩, and fix one side of the polygon (say v_0 v_n). Now consider a rooted binary tree whose root node is the triangle containing side v_0 v_n, whose internal nodes are the nodes of the dual tree, and whose leaves

[Figure: the subpolygon to the right of chord v_i v_j split by the triangle v_i v_k v_j; the two pieces are triangulated at cost t[i, k] and t[k, j], plus the new triangle cost w(v_i, v_k, v_j).]

Fig. 12: Triangulations and tree structure.

correspond to the remaining sides of the polygon. Observe that partitioning the polygon into triangles is equivalent to a binary tree with n leaves, and vice versa. This is illustrated in Fig. 13. Note that every triangle is associated with an internal node of the tree and every edge of the original polygon, except for the distinguished starting side v_0 v_n, is associated with a leaf node of the tree.

[Figure: a triangulated convex polygon with vertices v_0, . . . , v_11, its triangles labeled A_1, . . . , A_11, and the corresponding rooted binary tree, rooted at the triangle on side v_0 v_11, whose leaves are the remaining polygon sides.]

Fig. 13: Triangulations and tree structure.

Once you see this connection, the following two observations follow easily. Observe that the associated binary tree has n leaves, and hence (by standard results on binary trees) n − 1 internal nodes. Since each internal node other than the root has one edge entering it, there are n − 2 edges between the internal nodes. Each internal node corresponds to one triangle, and each edge between internal nodes corresponds to one chord of the triangulation.

Lecture 7: Greedy Algorithms: Activity Selection and Fractional Knapack

Read: Sections 16.1 and 16.2 in CLRS.

Greedy Algorithms: In many optimization algorithms a series of selections need to be made. In dynamic program-

ming we saw one way to make these selections. Namely, the optimal solution is described in a recursive manner,

and then is computed “bottom-up”. Dynamic programming is a powerful technique, but it often leads to algo-

rithms with higher than desired running times. Today we will consider an alternative design technique, called

greedy algorithms. This method typically leads to simpler and faster algorithms, but it is not as powerful or as

widely applicable as dynamic programming. We will give some examples of problems that can be solved by

greedy algorithms. (Later in the semester, we will see that this technique can be applied to a number of graph

problems as well.) Even when greedy algorithms do not produce the optimal solution, they often provide fast

heuristics (nonoptimal solution strategies), and are often used in finding good approximations.


Activity Scheduling: Activity scheduling is a very simple scheduling problem. We are given a set S = {1, 2, . . . , n} of n activities that are to be scheduled to use some resource, where each activity must be started at a given start time s_i and ends at a given finish time f_i. For example, these might be lectures that are to be given in a lecture hall, where the lecture times have been set up in advance, or requests for boats to use a repair facility while they are in port.

Because there is only one resource, and some start and finish times may overlap (and two lectures cannot be given in the same room at the same time), not all the requests can be honored. We say that two activities i and j are noninterfering if their start-finish intervals do not overlap; more formally, [s_i, f_i) ∩ [s_j, f_j) = ∅. (Note that by making the intervals half open, two consecutive activities are not considered to interfere.) The activity scheduling problem is to select a maximum-size set of mutually noninterfering activities for use of the resource. (Notice that the goal here is the maximum number of activities, not maximum utilization. Of course different criteria could be considered, but the greedy approach may not be optimal in general.)

How do we schedule the largest number of activities on the resource? Intuitively, we do not like long activities, because they occupy the resource and keep us from honoring other requests. This suggests the following greedy strategy: repeatedly select the activity with the smallest duration (f_i − s_i) and schedule it, provided that it does not interfere with any previously scheduled activities. Although this seems like a reasonable strategy, it turns out to be nonoptimal. (See Problem 17.1-4 in CLRS.) Sometimes the design of a correct greedy algorithm requires trying a few different strategies, until hitting on one that works.

Here is a greedy strategy that does work. The intuition is the same. Since we do not like activities that take a long time, let us select the activity that finishes first and schedule it. Then, we skip all activities that interfere with this one, and schedule the next one that has the earliest finish time, and so on. To make the selection process faster, we assume that the activities have been sorted by their finish times, that is,

f_1 ≤ f_2 ≤ . . . ≤ f_n.

Assuming this sorting, the pseudocode for the rest of the algorithm is presented below. The output is the list A

of scheduled activities. The variable prev holds the index of the most recently scheduled activity at any time, in

order to determine interferences.

Greedy Activity Scheduler

schedule(s[1..n], f[1..n]) { // given start and finish times

// we assume f[1..n] already sorted

List A = <1> // schedule activity 1 first

prev = 1

for i = 2 to n

if (s[i] >= f[prev]) { // no interference?

append i to A; prev = i // schedule i next

}

return A

}
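The pseudocode above translates into runnable Python directly. This sketch also performs the sorting step that the pseudocode assumes has already been done; the input format (a list of (start, finish) pairs) is an assumption for illustration.

```python
def schedule(activities):
    """Greedy activity scheduler.  activities is a list of (start, finish)
    pairs.  Returns 1-based indices (to match the notes) of a maximum-size
    set of mutually noninterfering activities."""
    # Sort indices by finish time; the notes assume this has been done.
    order = sorted(range(len(activities)), key=lambda i: activities[i][1])
    A = []                       # scheduled activities
    prev_finish = float("-inf")  # finish time of the last scheduled one
    for i in order:
        s, f = activities[i]
        if s >= prev_finish:     # no interference (half-open intervals)?
            A.append(i + 1)      # schedule activity i next
            prev_finish = f
    return A
```

As noted below, the sort dominates, so the total running time is Θ(n log n).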

It is clear that the algorithm is quite simple and efficient. The most costly activity is that of sorting the activities by finish time, so the total running time is Θ(n log n). Fig. 14 shows an example. Each activity is represented by its start-finish time interval. Observe that the intervals are sorted by finish time. Activity 1 is scheduled first. It interferes with activities 2 and 3. Then activity 4 is scheduled. It interferes with activities 5 and 6. Finally, activity 7 is scheduled, and it interferes with the remaining activity. The final output is {1, 4, 7}. Note that this is not the only optimal schedule; {2, 4, 7} is also optimal.

Proof of Optimality: Our proof of optimality is based on showing that the ﬁrst choice made by the algorithm is the

best possible, and then using induction to show that the rest of the choices result in an optimal schedule. Proofs

of optimality for greedy algorithms follow a similar structure. Suppose that you have any nongreedy solution.


Fig. 14: An example of the greedy algorithm for activity scheduling. The final schedule is {1, 4, 7}. (Input; Add 1: schedule 1, skip 2 and 3; Add 4: schedule 4, skip 5 and 6; Add 7: schedule 7, skip 8.)

Show that its cost can be reduced by being “greedier” at some point in the solution. This proof is complicated a

bit by the fact that there may be multiple solutions. Our approach is to show that any schedule that is not greedy

can be made more greedy, without decreasing the number of activities.

Claim: The greedy algorithm gives an optimal solution to the activity scheduling problem.

Proof: Consider any optimal schedule A that is not the greedy schedule. We will construct a new optimal schedule A′ that is in some sense “greedier” than A. Order the activities in increasing order of finish time. Let A = ⟨x_1, x_2, . . . , x_k⟩ be the activities of A. Since A is not the same as the greedy schedule, consider the first activity x_j where these two schedules differ. That is, the greedy schedule is of the form G = ⟨x_1, x_2, . . . , x_{j−1}, g_j, . . .⟩ where g_j ≠ x_j. (Note that k ≥ j, since otherwise G would have more activities than the optimal schedule, which would be a contradiction.) The greedy algorithm selects the activity with the earliest finish time that does not conflict with any earlier activity. Thus, we know that g_j does not conflict with any earlier activity, and it finishes before x_j.

Consider the modified “greedier” schedule A′ that results by replacing x_j with g_j in the schedule A. (See Fig. 15.) That is,

A′ = ⟨x_1, x_2, . . . , x_{j−1}, g_j, x_{j+1}, . . . , x_k⟩.

Fig. 15: Proof of optimality for the greedy schedule (j = 3). (Replacing x_3 in A with g_3 from G yields A′.)

This is a feasible schedule. (Since g_j cannot conflict with the earlier activities, and it does not conflict with later activities, because it finishes before x_j.) It has the same number of activities as A, and therefore A′ is also optimal. By repeating this process, we will eventually convert A into G, without decreasing the number of activities. Therefore, G is also optimal.

Fractional Knapsack Problem: The classical (0-1) knapsack problem is a famous optimization problem. A thief is robbing a store, and finds n items which can be taken. The ith item is worth v_i dollars and weighs w_i pounds, where v_i and w_i are integers. He wants to take as valuable a load as possible, but has a knapsack that can only carry W total pounds. Which items should he take? (The reason that this is called 0-1 knapsack is that each item must be left (0) or taken entirely (1). It is not possible to take a fraction of an item or multiple copies of an item.) This optimization problem arises in industrial packing applications. For example, you may want to ship some subset of items on a truck of limited capacity.

In contrast, in the fractional knapsack problem the setup is exactly the same, but the thief is allowed to take any

fraction of an item for a fraction of the weight and a fraction of the value. So, you might think of each object as

being a sack of gold, which you can partially empty out before taking.

The 0-1 knapsack problem is hard to solve, and in fact it is an NP-complete problem (meaning that there

probably doesn’t exist an efﬁcient solution). However, there is a very simple and efﬁcient greedy algorithm for

the fractional knapsack problem.

As in the case of the other greedy algorithms we have seen, the idea is to find the right order in which to process the items. Intuitively, it is good to have high value and bad to have high weight. This suggests that we first sort the items according to some function that increases with value and decreases with weight. There are a few choices that you might try here, but only one works. Let ρ_i = v_i/w_i denote the value-per-pound ratio. We sort the items in decreasing order of ρ_i, and add them in this order. If the item fits, we take it all. At some point there is an item that does not fit in the remaining space. We take as much of this item as possible, thus filling the knapsack entirely. This is illustrated in Fig. 16.

Fig. 16: Example for the fractional knapsack problem. (Knapsack capacity 60; item weights 5, 10, 20, 30, 40 with values $30, $20, $100, $90, $160, so ρ = 6.0, 2.0, 5.0, 3.0, 4.0. The greedy fractional solution is worth $270, the greedy 0-1 solution $220, and the optimal 0-1 solution $260.)
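The sort-by-ratio strategy just described can be sketched in a few lines of Python. This is illustrative only; the usage check uses the item values reconstructed from the Fig. 16 example (weights 5, 10, 20, 30, 40; values $30, $20, $100, $90, $160; capacity 60).

```python
def fractional_knapsack(items, W):
    """Greedy fractional knapsack.  items is a list of (value, weight)
    pairs and W is the capacity; returns the maximum achievable value."""
    # Sort by value-per-pound ratio rho = v/w, in decreasing order.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for v, w in items:
        if W <= 0:
            break                # knapsack is full
        take = min(w, W)         # take the whole item, or as much as fits
        total += v * take / w
        W -= take
    return total
```

On the Fig. 16 instance this takes the weight-5 and weight-20 items whole, plus 35 pounds of the weight-40 item, for $30 + $100 + $140 = $270.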

Correctness: It is intuitively easy to see that the greedy algorithm is optimal for the fractional problem. Given a room

with sacks of gold, silver, and bronze, you would obviously take as much gold as possible, then take as much

silver as possible, and then as much bronze as possible. But it would never beneﬁt you to take a little less gold

so that you could replace it with an equal volume of bronze.

More formally, suppose to the contrary that the greedy algorithm is not optimal. This would mean that there is an alternate selection that is optimal. Sort the items of the alternate selection in decreasing order by ρ values. Consider the first item i on which the two selections differ. By definition, greedy takes a greater amount of item i than the alternate (because the greedy always takes as much as it can). Let us say that greedy takes x more units of item i than the alternate does. All the subsequent elements of the alternate selection are of lesser value than v_i. By replacing x units of any such items with x units of item i, we would increase the overall value of the alternate selection. However, this implies that the alternate selection is not optimal, a contradiction.

Nonoptimality for the 0-1 Knapsack: Next we show that the greedy algorithm is not generally optimal in the 0-1 knapsack problem. Consider the example shown in Fig. 16. If you were to sort the items by ρ_i, then you would first take the item of weight 5, then 20, and then (since the item of weight 40 does not fit) you would settle for the item of weight 30, for a total value of $30 + $100 + $90 = $220. On the other hand, if you had been less greedy, and ignored the item of weight 5, then you could take the items of weights 20 and 40 for a total value of $100 + $160 = $260. This feature of “delaying gratification” in order to come up with a better overall solution is your indication that the greedy solution is not optimal.
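The gap can be checked concretely. Below, a greedy-by-ratio rule for the 0-1 problem is compared against a brute-force optimum (feasible only for tiny instances); both functions are illustrative sketches, not from the notes.

```python
from itertools import combinations

def greedy_01(items, W):
    """Greedy-by-ratio for the 0-1 knapsack (not optimal in general).
    items is a list of (value, weight) pairs."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0
    for v, w in items:
        if w <= W:               # take the whole item only if it fits
            total += v
            W -= w
    return total

def optimal_01(items, W):
    """Brute-force 0-1 optimum: try every subset of the items."""
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= W:
                best = max(best, sum(v for v, _ in subset))
    return best
```

On the Fig. 16 items with W = 60, greedy_01 returns $220 while optimal_01 returns $260, matching the discussion above.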

Lecture 8: Greedy Algorithms: Huffman Coding

Read: Section 16.3 in CLRS.

Huffman Codes: Huffman codes provide a method of encoding data efﬁciently. Normally when characters are coded

using standard codes like ASCII, each character is represented by a ﬁxed-length codeword of bits (e.g. 8 bits

per character). Fixed-length codes are popular, because it is very easy to break a string up into its individual

characters, and to access individual characters and substrings by direct indexing. However, ﬁxed-length codes

may not be the most efﬁcient from the perspective of minimizing the total quantity of data.

Consider the following example. Suppose that we want to encode strings over the (rather limited) 4-character

alphabet C = ¦a, b, c, d¦. We could use the following ﬁxed-length code:

Character a b c d

Fixed-Length Codeword 00 01 10 11

A string such as “abacdaacac” would be encoded by replacing each of its characters by the corresponding binary

codeword.

a b a c d a a c a c

00 01 00 10 11 00 00 10 00 10

The ﬁnal 20-character binary string would be “00010010110000100010”.

Now, suppose that you knew the relative probabilities of characters in advance. (This might happen by analyzing

many strings over a long period of time. In applications like data compression, where you want to encode one

ﬁle, you can just scan the ﬁle and determine the exact frequencies of all the characters.) You can use this

knowledge to encode strings differently. Frequently occurring characters are encoded using fewer bits and less

frequent characters are encoded using more bits. For example, suppose that characters are expected to occur

with the following probabilities. We could design a variable-length code which would do a better job.

Character a b c d

Probability 0.60 0.05 0.30 0.05

Variable-Length Codeword 0 110 10 111

Notice that there is no requirement that the alphabetical order of the characters correspond to any sort of ordering

applied to the codewords. Now, the same string would be encoded as follows.

a b a c d a a c a c

0 110 0 10 111 0 0 10 0 10


Thus, the resulting 17-character string would be “01100101110010010”, and we have achieved a savings of 3 characters by using this alternative code. More generally, what would be the expected savings for a string of

length n? For the 2-bit ﬁxed-length code, the length of the encoded string is just 2n bits. For the variable-length

code, the expected length of a single encoded character is equal to the sum of code lengths times the respective

probabilities of their occurrences. The expected encoded string length is just n times the expected encoded

character length.

n(0.60 · 1 + 0.05 · 3 + 0.30 · 2 + 0.05 · 3) = n(0.60 + 0.15 + 0.60 + 0.15) = 1.5n.

Thus, this would represent a 25% savings in expected encoding length. The question that we will consider today

is how to form the best code, assuming that the probabilities of character occurrences are known.
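The expected-length computation above is easy to check mechanically. The sketch below evaluates the sum of p(x) · |codeword(x)| for both codes from the example.

```python
def expected_length(probs, code):
    """Expected bits per character: sum of p(x) * len(codeword(x))."""
    return sum(probs[ch] * len(code[ch]) for ch in probs)

# The probability table and the two codes from the example above.
probs = {"a": 0.60, "b": 0.05, "c": 0.30, "d": 0.05}
fixed = {"a": "00", "b": "01", "c": "10", "d": "11"}
variable = {"a": "0", "b": "110", "c": "10", "d": "111"}
```

Here expected_length(probs, fixed) is 2.0 bits per character and expected_length(probs, variable) is 1.5, the 25% savings derived above.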

Preﬁx Codes: One issue that we didn’t consider in the example above is whether we will be able to decode the string,

once encoded. In fact, this code was chosen quite carefully. Suppose that instead of coding the character ‘a’

as 0, we had encoded it as 1. Now, the encoded string “111” is ambiguous. It might be “d” and it might be

“aaa”. How can we avoid this sort of ambiguity? You might suggest that we add separation markers between

the encoded characters, but this will tend to lengthen the encoding, which is undesirable. Instead, we would like

the code to have the property that it can be uniquely decoded.

Note that in both of the codes given in the example above, no codeword is a prefix of another. This

turns out to be the key property. Observe that if two codewords did share a common preﬁx, e.g. a → 001 and

b → 00101, then when we see 00101 . . . how do we know whether the ﬁrst character of the encoded message

is a or b? Conversely, if no codeword is a prefix of any other, then as soon as we see a codeword appearing as

a preﬁx in the encoded text, then we know that we may decode this without fear of it matching some longer

codeword. Thus we have the following deﬁnition.

Preﬁx Code: An assignment of codewords to characters so that no codeword is a preﬁx of any other.

Observe that any binary prefix coding can be described by a binary tree in which the codewords are the leaves of the tree, and where a left branch means “0” and a right branch means “1”. The length of a codeword is just its depth in the tree. The code given earlier is a prefix code, and its corresponding tree is shown in Fig. 17.

Fig. 17: Prefix codes. (The tree for the code a → 0, c → 10, b → 110, d → 111.)

Decoding a prefix code is simple. We just traverse the tree from root to leaf, letting each input bit tell us which branch to take. On reaching a leaf, we output the corresponding character, and return to the root to continue the process.
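Because no codeword is a prefix of another, a decoder can equivalently accumulate bits until they match some codeword; this is the same root-to-leaf walk expressed over a {character: codeword} table (a sketch, not from the notes).

```python
def decode(bits, code):
    """Decode a bit string under a prefix code given as {char: codeword}."""
    inverse = {cw: ch for ch, cw in code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b                 # take one more branch down the tree
        if cur in inverse:       # reached a leaf: emit, return to the root
            out.append(inverse[cur])
            cur = ""
    return "".join(out)
```

For example, decode("01100101110010010", {"a": "0", "b": "110", "c": "10", "d": "111"}) recovers "abacdaacac" from the earlier example.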

Expected encoding length: Once we know the probabilities of the various characters, we can determine the total length of the encoded text. Let p(x) denote the probability of seeing character x, and let d_T(x) denote the length of the codeword (depth in the tree) relative to some prefix tree T. The expected number of bits needed to encode a text with n characters is given by the following formula:

B(T) = n Σ_{x∈C} p(x) d_T(x).


This suggests the following problem:

Optimal Code Generation: Given an alphabet C and the probabilities p(x) of occurrence for each character

x ∈ C, compute a preﬁx code T that minimizes the expected length of the encoded bit-string, B(T).

Note that the optimal code is not unique. For example, we could have complemented all of the bits in our earlier code without altering the expected encoded string length. There is a very simple algorithm for finding such a code. It was invented in the mid-1950s by David Huffman, and is called a Huffman code. By the way, this code is used by the Unix utility pack for file compression. (There are better compression methods, however. For example, compress, gzip and many others are based on a more sophisticated method called Lempel-Ziv coding.)

Huffman’s Algorithm: Here is the intuition behind the algorithm. Recall that we are given the occurrence probabilities for the characters. We are going to build the tree up from the leaf level. We will take two characters x and y, and “merge” them into a single super-character called z, which then replaces x and y in the alphabet. The character z will have a probability equal to the sum of x and y’s probabilities. Then we continue recursively building the code on the new alphabet, which has one fewer character. When the process is completed, we know the code for z, say 010. Then, we append a 0 and a 1 to this codeword, giving 0100 for x and 0101 for y.

Another way to think of this is that we merge x and y as the left and right children of a node called z. Then the subtree for z replaces x and y in the list of characters. We repeat this process until only one super-character remains. The resulting tree is the final prefix tree. Since x and y will appear at the bottom of the tree, it seems most logical to select the two characters with the smallest probabilities to perform the operation on. The result is Huffman’s algorithm. It is illustrated in Fig. 18.

The pseudocode for Huffman’s algorithm is given below. Let C denote the set of characters. Each character

x ∈ C is associated with an occurrence probability x.prob. Initially, the characters are all stored in a priority

queue Q. Recall that this data structure can be built initially in O(n) time, and we can extract the element with

the smallest key in O(log n) time and insert a new element in O(log n) time. The objects in Q are sorted by

probability. Note that with each execution of the for-loop, the number of items in the queue decreases by one.

So, after n − 1 iterations, there is exactly one element left in the queue, and this is the root of the ﬁnal preﬁx

code tree.

Correctness: The big question that remains is why this algorithm is correct. Recall that the cost of any encoding tree T is B(T) = Σ_x p(x) d_T(x). Our approach will be to show that any tree that differs from the one constructed by Huffman’s algorithm can be converted into one that is equal to Huffman’s tree without increasing its cost. First, observe that the Huffman tree is a full binary tree, meaning that every internal node has exactly two children. It would never pay to have an internal node with only one child (since such a node could be deleted), so we may limit consideration to full binary trees.

Claim: Consider the two characters, x and y with the smallest probabilities. Then there is an optimal code tree

in which these two characters are siblings at the maximum depth in the tree.

Proof: Let T be any optimal prefix code tree, and let b and c be two siblings at the maximum depth of the tree. Assume without loss of generality that p(b) ≤ p(c) and p(x) ≤ p(y) (if this is not true, then rename these characters). Now, since x and y have the two smallest probabilities it follows that p(x) ≤ p(b) and p(y) ≤ p(c). (In both cases they may be equal.) Because b and c are at the deepest level of the tree we know that d(b) ≥ d(x) and d(c) ≥ d(y). (Again, they may be equal.) Thus, we have p(b) − p(x) ≥ 0 and d(b) − d(x) ≥ 0, and hence their product is nonnegative. Now switch the positions of x and b in the tree, resulting in a new tree T′. This is illustrated in Fig. 19.
**

Next let us see how the cost changes as we go from T to T′. Almost all the nodes contribute the same to the expected cost. The only exceptions are nodes x and b. By subtracting the old contributions of these


Fig. 18: Huffman’s Algorithm. (Character frequencies a: 05, b: 48, c: 07, d: 17, e: 10, f: 13; the two smallest subtrees are repeatedly merged to produce the final tree.)


Huffman’s Algorithm

Huffman(int n, character C[1..n]) {

Q = C; // priority queue

for i = 1 to n-1 {

z = new internal tree node;

z.left = x = Q.extractMin(); // extract smallest probabilities

z.right = y = Q.extractMin();

z.prob = x.prob + y.prob; // z’s probability is their sum

Q.insert(z); // insert z into queue

}

return the last element left in Q as the root;

}
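The pseudocode above can be sketched as runnable Python, using the standard heapq module as the priority queue. Representing a tree as either a character or a (left, right) tuple, and using a counter to break ties between equal probabilities, are implementation choices, not part of the notes.

```python
import heapq
from itertools import count

def huffman(probs):
    """Build a Huffman code for probs = {char: probability};
    returns {char: codeword}."""
    tie = count()  # tiebreaker so the heap never compares trees directly
    Q = [(p, next(tie), ch) for ch, p in probs.items()]
    heapq.heapify(Q)
    while len(Q) > 1:
        p1, _, x = heapq.heappop(Q)   # extract smallest probabilities
        p2, _, y = heapq.heappop(Q)
        heapq.heappush(Q, (p1 + p2, next(tie), (x, y)))
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):    # internal node: recurse on children
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                          # leaf: record the codeword
            code[tree] = prefix or "0"
    walk(Q[0][2], "")
    return code
```

For the earlier probabilities {a: 0.60, b: 0.05, c: 0.30, d: 0.05} this produces codeword lengths 1, 3, 2, 3 respectively, matching the expected length of 1.5 bits per character.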

Fig. 19: Correctness of Huffman’s Algorithm. (Going from T to T′ by switching x with b changes the cost by −(p(b) − p(x))(d(b) − d(x)) ≤ 0; going from T′ to T′′ by switching y with c changes it by −(p(c) − p(y))(d(c) − d(y)) ≤ 0.)

nodes and adding in the new contributions we have

B(T′) = B(T) − p(x)d(x) + p(x)d(b) − p(b)d(b) + p(b)d(x)
      = B(T) + p(x)(d(b) − d(x)) − p(b)(d(b) − d(x))
      = B(T) − (p(b) − p(x))(d(b) − d(x))
      ≤ B(T),    because (p(b) − p(x))(d(b) − d(x)) ≥ 0.

Thus the cost does not increase, implying that T′ is an optimal tree. By switching y with c we get a new tree T′′, which by a similar argument is also optimal. The final tree T′′ satisfies the statement of the claim.

The above theorem asserts that the ﬁrst step of Huffman’s algorithm is essentially the proper one to perform.

The complete proof of correctness for Huffman’s algorithm follows by induction on n (since with each step, we

eliminate exactly one character).

Claim: Huffman’s algorithm produces the optimal preﬁx code tree.

Proof: The proof is by induction on n, the number of characters. For the basis case, n = 1, the tree consists of

a single leaf node, which is obviously optimal.

Assume inductively that when there are strictly fewer than n characters, Huffman’s algorithm is guaranteed to produce the optimal tree. We want to show that it is true with exactly n characters. Suppose we have exactly n characters. The previous claim states that we may assume that in the optimal tree, the two characters of lowest probability x and y will be siblings at the lowest level of the tree. Remove x and y, replacing them with a new character z whose probability is p(z) = p(x) + p(y). Thus n − 1 characters remain.

Consider any prefix code tree T made with this new set of n − 1 characters. We can convert it into a prefix code tree T′ for the original set of characters by undoing the previous operation and replacing z with x and y (adding a “0” bit for x and a “1” bit for y). The cost of the new tree is

B(T′) = B(T) − p(z)d(z) + p(x)(d(z) + 1) + p(y)(d(z) + 1)
      = B(T) − (p(x) + p(y))d(z) + (p(x) + p(y))(d(z) + 1)
      = B(T) + (p(x) + p(y))(d(z) + 1 − d(z))
      = B(T) + p(x) + p(y).

Since the change in cost depends in no way on the structure of the tree T, to minimize the cost of the final tree T′, we need to build the tree T on n − 1 characters optimally. By induction, this is exactly what Huffman’s algorithm does. Thus the final tree is optimal.

Lecture 9: Graphs: Background and Breadth First Search

Read: Review Sections 22.1 and 22.2 CLR.

Graph Algorithms: We are now beginning a major new section of the course. We will be discussing algorithms for

both directed and undirected graphs. Intuitively, a graph is a collection of vertices or nodes, connected by a

collection of edges. Graphs are extremely important because they are a very ﬂexible mathematical model for

many application problems. Basically, any time you have a set of objects, and there is some “connection” or “re-

lationship” or “interaction” between pairs of objects, a graph is a good way to model this. Examples of graphs in

application include communication and transportation networks, VLSI and other sorts of logic circuits, surface

meshes used for shape description in computer-aided design and geographic information systems, precedence

constraints in scheduling systems. The list of applications is almost too long to enumerate.

Most of the problems in computational graph theory that we will consider arise because they are of importance

to one or more of these application areas. Furthermore, many of these problems form the basic building blocks

from which more complex algorithms are then built.

Graphs and Digraphs: Most of you have encountered the notions of directed and undirected graphs in other courses,

so we will give a quick overview here.

Deﬁnition: A directed graph (or digraph) G = (V, E) consists of a ﬁnite set V , called the vertices or nodes,

and E, a set of ordered pairs, called the edges of G. (Another way of saying this is that E is a binary

relation on V .)

Observe that self-loops are allowed by this deﬁnition. Some deﬁnitions of graphs disallow this. Multiple edges

are not permitted (although the edges (v, w) and (w, v) are distinct).

Fig. 20: Digraph and graph example.

Deﬁnition: An undirected graph (or graph) G = (V, E) consists of a ﬁnite set V of vertices, and a set E of

unordered pairs of distinct vertices, called the edges. (Note that self-loops are not allowed).


Note that directed graphs and undirected graphs are different (but similar) objects mathematically. Certain

notions (such as path) are deﬁned for both, but other notions (such as connectivity) may only be deﬁned for one,

or may be deﬁned differently.

We say that vertex v is adjacent to vertex u if there is an edge (u, v). In a directed graph, given the edge

e = (u, v), we say that u is the origin of e and v is the destination of e. In undirected graphs u and v are the

endpoints of the edge. The edge e is incident (meaning that it touches) both u and v.

In a digraph, the number of edges coming out of a vertex is called the out-degree of that vertex, and the number

of edges coming in is called the in-degree. In an undirected graph we just talk about the degree of a vertex as

the number of incident edges. By the degree of a graph, we usually mean the maximum degree of its vertices.

When discussing the size of a graph, we typically consider both the number of vertices and the number of edges.

The number of vertices is typically written as n or V , and the number of edges is written as m or E or e. Here

are some basic combinatorial facts about graphs and digraphs. We will leave the proofs to you. Given a graph

with V vertices and E edges then:

In a graph:

Number of edges: 0 ≤ E ≤

_

n

2

_

= n(n −1)/2 ∈ O(n

2

).

Sum of degrees:

v∈V

deg(v) = 2E.

In a digraph:

Number of edges: 0 ≤ E ≤ n

2

.

Sum of degrees:

v∈V

in-deg(v) =

v∈V

out-deg(v) = E.
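The degree-sum identity is easy to sanity-check on a small example; the helper below (illustrative only) tallies degrees from an undirected edge list, and the total always equals 2E because each edge contributes to exactly two degrees.

```python
def degree_sum(edges):
    """Sum of vertex degrees for an undirected graph given as (u, v) pairs."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1   # each edge touches two vertices,
        deg[v] = deg.get(v, 0) + 1   # so it adds 2 to the total
    return sum(deg.values())
```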

Notice that generally the number of edges in a graph may be as large as quadratic in the number of vertices.

However, the large graphs that arise in practice typically have much fewer edges. A graph is said to be sparse if

E ∈ Θ(V ), and dense, otherwise. When giving the running times of algorithms, we will usually express it as a

function of both V and E, so that the performance on sparse and dense graphs will be apparent.

Paths and Cycles: A path in a graph or digraph is a sequence of vertices ⟨v_0, v_1, . . . , v_k⟩ such that (v_{i−1}, v_i) is an edge for i = 1, 2, . . . , k. The length of the path is the number of edges, k. A path is simple if all vertices and all the edges are distinct. A cycle is a path containing at least one edge and for which v_0 = v_k. A cycle is simple if its vertices (except v_0 and v_k) are distinct, and all its edges are distinct.

A graph or digraph is said to be acyclic if it contains no simple cycles. An acyclic connected graph is called a

free tree or simply tree for short. (The term “free” is intended to emphasize the fact that the tree has no root, in

contrast to a rooted tree, as is usually seen in data structures.) An acyclic undirected graph (which need not be

connected) is a collection of free trees, and is (naturally) called a forest. An acyclic digraph is called a directed

acyclic graph, or DAG for short.

Fig. 21: Illustration of some graph terms (free tree, simple cycle, nonsimple cycle, DAG, forest).

We say that w is reachable from u if there is a path from u to w. Note that every vertex is reachable from itself

by a trivial path that uses zero edges. An undirected graph is connected if every vertex can reach every other

vertex. (Connectivity is a bit messier for digraphs, and we will deﬁne it later.) The subsets of mutually reachable

vertices partition the vertices of the graph into disjoint subsets, called the connected components of the graph.


Representations of Graphs and Digraphs: There are two common ways of representing graphs and digraphs. First we show how to represent digraphs. Let G = (V, E) be a digraph with n = |V| and e = |E|. We will assume that the vertices of G are indexed {1, 2, . . . , n}.

Adjacency Matrix: An n × n matrix defined for 1 ≤ v, w ≤ n by

A[v, w] = 1 if (v, w) ∈ E, and 0 otherwise.

If the digraph has weights we can store the weights in the matrix. For example, if (v, w) ∈ E then A[v, w] = W(v, w) (the weight on edge (v, w)). If (v, w) ∉ E then generally W(v, w) need not be defined, but often we set it to some “special” value, e.g. A(v, w) = −1 or ∞. (By ∞ we mean (in practice) some number which is larger than any allowable weight. In practice, this might be some machine-dependent constant like MAXINT.)

Adjacency List: An array Adj[1 . . . n] of pointers where for 1 ≤ v ≤ n, Adj[v] points to a linked list contain-

ing the vertices which are adjacent to v (i.e. the vertices that can be reached from v by a single edge). If

the edges have weights then these weights may also be stored in the linked list elements.

Fig. 22: Adjacency matrix and adjacency list for digraphs.

We can represent undirected graphs using exactly the same representation, but we will store each edge twice. In particular, we represent the undirected edge {v, w} by the two oppositely directed edges (v, w) and (w, v).

Notice that even though we represent undirected graphs in the same way that we represent digraphs, it is impor-

tant to remember that these two classes of objects are mathematically distinct from one another.

This can cause some complications. For example, suppose you write an algorithm that operates by marking

edges of a graph. You need to be careful when you mark edge (v, w) in the representation that you also mark

(w, v), since they are both the same edge in reality. When dealing with adjacency lists, it may not be convenient

to walk down the entire linked list, so it is common to include cross links between corresponding edges.

Fig. 23: Adjacency matrix and adjacency list for graphs.

An adjacency matrix requires Θ(V²) storage and an adjacency list requires Θ(V + E) storage. The V arises because there is one entry for each vertex in Adj. Since each list has out-deg(v) entries, when this is summed over all vertices, the total number of adjacency list records is Θ(E). For sparse graphs the adjacency list representation is more space efficient.


Graph Traversals: There are a number of approaches used for solving problems on graphs. One of the most important approaches is based on the notion of systematically visiting all the vertices and edges of a graph. The reason

for this is that these traversals impose a type of tree structure (or generally a forest) on the graph, and trees are

usually much easier to reason about than general graphs.

Breadth-ﬁrst search: Given a graph G = (V, E), breadth-ﬁrst search starts at some source vertex s and “discovers”

which vertices are reachable from s. Deﬁne the distance between a vertex v and s to be the minimum number

of edges on a path from s to v. Breadth-ﬁrst search discovers vertices in increasing order of distance, and hence

can be used as an algorithm for computing shortest paths. At any given time there is a “frontier” of vertices that

have been discovered, but not yet processed. Breadth-ﬁrst search is named because it visits vertices across the

entire “breadth” of this frontier.

Initially all vertices (except the source) are colored white, meaning that they are undiscovered. When a vertex

has ﬁrst been discovered, it is colored gray (and is part of the frontier). When a gray vertex is processed, then it

becomes black.

The search makes use of a queue, a ﬁrst-in ﬁrst-out list, where elements are removed in the same order they

are inserted. The ﬁrst item in the queue (the next to be removed) is called the head of the queue. We will also

maintain arrays color[u], which holds the color of vertex u (either white, gray or black); pred[u], which points to the predecessor of u (i.e. the vertex that first discovered u); and d[u], the distance from s to u. Only the color

is really needed for the search (in fact it is only necessary to know whether a node is nonwhite). We include all

this information, because some applications of BFS use this additional information.

Breadth-First Search

BFS(G,s) {

for each u in V { // initialization

color[u] = white

d[u] = infinity

pred[u] = null

}

color[s] = gray // initialize source s

d[s] = 0

Q = {s} // put s in the queue

while (Q is nonempty) {

u = Q.Dequeue() // u is the next to visit

for each v in Adj[u] {

if (color[v] == white) { // if neighbor v undiscovered

color[v] = gray // ...mark it discovered

d[v] = d[u]+1 // ...set its distance

pred[v] = u // ...and its predecessor

Q.Enqueue(v) // ...put it in the queue

}

}

color[u] = black // we are done with u

}

}
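The pseudocode above translates almost line for line into Python. This is a sketch using a dictionary-of-lists adjacency structure; the small undirected graph at the end is a hypothetical example (each edge stored in both directions, as discussed earlier).

```python
from collections import deque

def bfs(adj, s):
    """The BFS pseudocode above in Python. adj maps vertex -> neighbor list.
    Returns (d, pred): distances from s and predecessor pointers."""
    d = {u: float('inf') for u in adj}
    pred = {u: None for u in adj}
    color = {u: 'white' for u in adj}
    color[s], d[s] = 'gray', 0
    Q = deque([s])                         # put s in the queue
    while Q:
        u = Q.popleft()                    # u is the next to visit
        for v in adj[u]:
            if color[v] == 'white':        # neighbor v undiscovered
                color[v] = 'gray'          # ...mark it discovered
                d[v] = d[u] + 1            # ...set its distance
                pred[v] = u                # ...and its predecessor
                Q.append(v)                # ...put it in the queue
        color[u] = 'black'                 # we are done with u
    return d, pred

# Hypothetical undirected 4-cycle s-a-b-c-s, each edge stored twice:
adj = {'s': ['a', 'c'], 'a': ['s', 'b'], 'c': ['s', 'b'], 'b': ['a', 'c']}
d, pred = bfs(adj, 's')
print(d['b'])   # 2
```

A deque gives the O(1) enqueue/dequeue operations the analysis below assumes.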

Observe that the predecessor pointers of the BFS search deﬁne an inverted tree (an acyclic directed graph in

which the source is the root, and every other node has a unique path to the root). If we reverse these edges we

get a rooted unordered tree called a BFS tree for G. (Note that there are many potential BFS trees for a given

graph, depending on where the search starts, and in what order vertices are placed on the queue.) These edges

of G are called tree edges and the remaining edges of G are called cross edges.

It is not hard to prove that if G is an undirected graph, then cross edges always go between two nodes that are at

most one level apart in the BFS tree. (Can you see why this must be true?) Below is a sketch of a proof that on


Fig. 24: Breadth-first search: Example.

termination, d[v] is equal to the distance from s to v. (See the CLRS for a detailed proof.)

Theorem: Let δ(s, v) denote the length (number of edges) of the shortest path from s to v. Then, on termination

of the BFS procedure, d[v] = δ(s, v).

Proof: (Sketch) The proof is by induction on the length of the shortest path. Let u be the predecessor of v on

some shortest path from s to v, and among all such vertices the ﬁrst to be processed by the BFS. Thus,

δ(s, v) = δ(s, u) + 1. When u is processed, we have (by induction) d[u] = δ(s, u). Since v is a neighbor

of u, we set d[v] = d[u] + 1. Thus we have

d[v] = d[u] + 1 = δ(s, u) + 1 = δ(s, v),

as desired.

Analysis: The running time analysis of BFS is similar to the running time analysis of many graph traversal algorithms.

As in CLRS, let V = |V| and E = |E|. Observe that the initialization portion requires Θ(V ) time. The real

meat is in the traversal loop. Since we never visit a vertex twice, the number of times we go through the while

loop is at most V (exactly V assuming each vertex is reachable from the source). The number of iterations

through the inner for loop is proportional to deg(u) + 1. (The +1 is because even if deg(u) = 0, we need to

spend a constant amount of time to set up the loop.) Summing up over all vertices we have the running time

T(V) = V + Σ_{u∈V} (deg(u) + 1) = V + Σ_{u∈V} deg(u) + V = 2V + 2E ∈ Θ(V + E).

The analysis is essentially the same for directed graphs.


Lecture 10: Depth-First Search

Read: Sections 23.2 and 23.3 in CLR.

Depth-First Search: The next traversal algorithm that we will study is called depth-ﬁrst search, and it has the nice

property that nontree edges have a good deal of mathematical structure.

Consider the problem of searching a castle for treasure. To solve it you might use the following strategy. As

you enter a room of the castle, paint some grafﬁti on the wall to remind yourself that you were already there.

Successively travel from room to room as long as you come to a place you haven’t already been. When you

return to the same room, try a different door leaving the room (assuming it goes somewhere you haven’t already

been). When all doors have been tried in a given room, then backtrack.

Notice that this algorithm is described recursively. In particular, when you enter a new room, you are beginning

a new search. This is the general idea behind depth-ﬁrst search.

Depth-First Search Algorithm: We assume we are given a directed graph G = (V, E). The same algorithm works

for undirected graphs (but the resulting structure imposed on the graph is different).

We use four auxiliary arrays. As before we maintain a color for each vertex: white means undiscovered, gray

means discovered but not ﬁnished processing, and black means ﬁnished. As before we also store predecessor

pointers, pointing back to the vertex that discovered a given vertex. We will also associate two numbers with

each vertex. These are time stamps. When we first discover a vertex u we store a counter in d[u], and when we are

ﬁnished processing a vertex we store a counter in f[u]. The purpose of the time stamps will be explained later.

(Note: Do not confuse the discovery time d[v] with the distance d[v] from BFS.) The algorithm is shown in code

block below, and illustrated in Fig. 25. As with BFS, DFS induces a tree structure. We will discuss this tree

structure further below.

Depth-First Search

DFS(G) { // main program

for each u in V { // initialization

color[u] = white;

pred[u] = null;

}

time = 0;

for each u in V

if (color[u] == white) // found an undiscovered vertex

DFSVisit(u); // start a new search here

}

DFSVisit(u) { // start a search at u

color[u] = gray; // mark u visited

d[u] = ++time;

for each v in Adj(u) do

if (color[v] == white) { // if neighbor v undiscovered

pred[v] = u; // ...set predecessor pointer

DFSVisit(v); // ...visit v

}

color[u] = black; // we’re done with u

f[u] = ++time;

}
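Here is the same procedure as a Python sketch, with the time stamps d[u] and f[u] recorded exactly as in the pseudocode. The 3-vertex path at the end is a hypothetical example.

```python
def dfs(adj):
    """The DFS pseudocode above in Python; adj maps each vertex to its
    list of out-neighbors. Returns discovery/finish times and predecessors."""
    color = {u: 'white' for u in adj}
    pred = {u: None for u in adj}
    d, f = {}, {}
    time = 0

    def visit(u):                    # DFSVisit(u)
        nonlocal time
        color[u] = 'gray'            # mark u visited
        time += 1
        d[u] = time                  # discovery time
        for v in adj[u]:
            if color[v] == 'white':  # neighbor v undiscovered
                pred[v] = u
                visit(v)
        color[u] = 'black'           # we're done with u
        time += 1
        f[u] = time                  # finish time

    for u in adj:                    # restart at every undiscovered vertex
        if color[u] == 'white':
            visit(u)
    return d, f, pred

# Hypothetical 3-vertex path a -> b -> c:
d, f, _ = dfs({'a': ['b'], 'b': ['c'], 'c': []})
print(d['a'], f['a'])   # 1 6
```

Note that every vertex gets one discovery and one finish stamp, so the counter runs from 1 to 2|V|, matching the figure's x/y labels.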

Analysis: The running time of DFS is Θ(V +E). This is somewhat harder to see than the BFS analysis, because the

recursive nature of the algorithm obscures things. Normally, recurrences are good ways to analyze recursively


Fig. 25: Depth-First search tree.

deﬁned algorithms, but that approach does not work here, because there is no good notion of “size” that we can attach to each

recursive call.

First observe that if we ignore the time spent in the recursive calls, the main DFS procedure runs in O(V ) time.

Observe that each vertex is visited exactly once in the search, and hence the call DFSVisit() is made exactly

once for each vertex. We can just analyze each one individually and add up their running times. Ignoring the

time spent in the recursive calls, we can see that each vertex u can be processed in O(1+outdeg(u)) time. Thus

the total time used in the procedure is

T(V) = V + Σ_{u∈V} (outdeg(u) + 1) = V + Σ_{u∈V} outdeg(u) + V = 2V + E ∈ Θ(V + E).

A similar analysis holds if we consider DFS for undirected graphs.

Tree structure: DFS naturally imposes a tree structure (actually a collection of trees, or a forest) on the structure

of the graph. This is just the recursion tree, where the edge (u, v) arises when processing vertex u we call

DFSVisit(v) for some neighbor v. For directed graphs the other edges of the graph can be classiﬁed as

follows:

Back edges: (u, v) where v is a (not necessarily proper) ancestor of u in the tree. (Thus, a self-loop is consid-

ered to be a back edge).

Forward edges: (u, v) where v is a proper descendent of u in the tree.

Cross edges: (u, v) where u and v are not ancestors or descendents of one another (in fact, the edge may go

between different trees of the forest).

It is not difﬁcult to classify the edges of a DFS tree by analyzing the values of colors of the vertices and/or

considering the time stamps. This is left as an exercise.
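One way to do that exercise is to compare the start–finish intervals from the Parenthesis Lemma below. The following sketch classifies a directed edge (u, v) from the time stamps and predecessor pointers; the timestamps at the end come from a hypothetical DFS of the path a → b → c.

```python
def classify(u, v, d, f, pred):
    """Classify directed edge (u, v) using DFS discovery times d[],
    finish times f[], and predecessor pointers (a sketch)."""
    if pred.get(v) == u:
        return 'tree'
    if d[v] <= d[u] < f[u] <= f[v]:   # u's interval inside v's: v is an ancestor
        return 'back'
    if d[u] < d[v] < f[v] < f[u]:     # v's interval inside u's: v is a descendent
        return 'forward'
    return 'cross'                    # disjoint intervals

# Timestamps from a hypothetical DFS of a -> b -> c, plus edges (c, a), (a, c):
d = {'a': 1, 'b': 2, 'c': 3}
f = {'a': 6, 'b': 5, 'c': 4}
pred = {'a': None, 'b': 'a', 'c': 'b'}
print(classify('c', 'a', d, f, pred))   # back
print(classify('a', 'c', d, f, pred))   # forward
```

The equality in the back-edge test also makes a self-loop (u, u) come out as a back edge, consistent with the convention above.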

With undirected graphs, there are some important differences in the structure of the DFS tree. First, there is really no distinction between forward and back edges; by convention, they are all called back edges. Furthermore, it can be shown that there can be no cross edges. (Can you see why not?)


Time-stamp structure: There is also a nice structure to the time stamps. In CLR this is referred to as the parenthesis

structure. In particular, the following are easy to observe.

Lemma: (Parenthesis Lemma) Given a digraph G = (V, E), and any DFS tree for G and any two vertices

u, v ∈ V .

• u is a descendent of v if and only if [d[u], f[u]] ⊆ [d[v], f[v]].

• u is an ancestor of v if and only if [d[u], f[u]] ⊇ [d[v], f[v]].

• u is unrelated to v if and only if [d[u], f[u]] and [d[v], f[v]] are disjoint.

Fig. 26: Parenthesis Lemma.

Cycles: The time stamps given by DFS allow us to determine a number of things about a graph or digraph. For

example, suppose you are given a graph or digraph. You run DFS. You can determine whether the graph

contains any cycles very easily. We do this with the help of the following two lemmas.

Lemma: Given a digraph G = (V, E), consider any DFS forest of G, and consider any edge (u, v) ∈ E. If this

edge is a tree, forward, or cross edge, then f[u] > f[v]. If the edge is a back edge then f[u] ≤ f[v].

Proof: For tree, forward, and back edges, the proof follows directly from the parenthesis lemma. (E.g. for a

forward edge (u, v), v is a descendent of u, and so v’s start-ﬁnish interval is contained within u’s, implying

that v has an earlier ﬁnish time.) For a cross edge (u, v) we know that the two time intervals are disjoint.

When we were processing u, v was not white (otherwise (u, v) would be a tree edge), implying that v was

started before u. Because the intervals are disjoint, v must have also ﬁnished before u.

Lemma: Consider a digraph G = (V, E) and any DFS forest for G. G has a cycle if and only if the DFS forest has a back edge.

Proof: (⇐) If there is a back edge (u, v), then v is an ancestor of u, and by following tree edges from v to u

we get a cycle.

(⇒) We show the contrapositive. Suppose there are no back edges. By the lemma above, each of the

remaining types of edges, tree, forward, and cross all have the property that they go from vertices with

higher ﬁnishing time to vertices with lower ﬁnishing time. Thus along any path, ﬁnish times decrease

monotonically, implying there can be no cycle.

Beware: No back edges means no cycles. But you should not infer that there is some simple relationship

between the number of back edges and the number of cycles. For example, a DFS tree may only have a single

back edge, and there may be anywhere from one up to an exponential number of simple cycles in the graph.

A similar theorem applies to undirected graphs, and is not hard to prove.
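The lemma gives an immediate cycle test for digraphs: run DFS and watch for an edge into a gray vertex (a back edge). A sketch, with two hypothetical digraphs as examples:

```python
def has_cycle(adj):
    """Cycle test for a digraph via the lemma above: a cycle exists iff
    some DFS encounters a back edge (an edge into a gray vertex)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in adj}

    def visit(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY:          # back edge found
                return True
            if color[v] == WHITE and visit(v):
                return True
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and visit(u) for u in adj)

print(has_cycle({'a': ['b'], 'b': ['c'], 'c': ['a']}))   # True
print(has_cycle({'a': ['b'], 'b': ['c'], 'c': []}))      # False
```

Only the colors are needed here, which illustrates the earlier remark that for some applications the time stamps can be stripped away.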


Lecture 11: Topological Sort and Strong Components

Read: Sects. 22.3–22.5 in CLRS.

Directed Acyclic Graph: A directed acyclic graph is often called a DAG for short. DAGs arise in many applications

where there are precedence or ordering constraints. For example, there may be a series of tasks to be performed, where certain tasks must precede other tasks (e.g. in construction you have to build the first floor before you build

the second ﬂoor, but you can do the electrical wiring while you install the windows). In general a precedence

constraint graph is a DAG in which vertices are tasks and the edge (u, v) means that task u must be completed

before task v begins.

A topological sort of a DAG is a linear ordering of the vertices of the DAG such that for each edge (u, v), u

appears before v in the ordering. Note that in general, there may be many legal topological orders for a given

DAG.

To compute a topological ordering is actually very easy, given DFS. By the previous lemma, for every edge

(u, v) in a DAG, the ﬁnish time of u is greater than the ﬁnish time of v. Thus, it sufﬁces to output the vertices

in reverse order of ﬁnishing time. To do this we run a (stripped down) DFS, and when each vertex is ﬁnished

we add it to the front of a linked list. The ﬁnal linked list order will be the ﬁnal topological order. This is given

below.

Topological Sort

TopSort(G) {

for each (u in V) color[u] = white; // initialize

L = new linked_list; // L is an empty linked list

for each (u in V)

if (color[u] == white) TopVisit(u);

return L; // L gives final order

}

TopVisit(u) { // start a search at u

color[u] = gray; // mark u visited

for each (v in Adj(u))

if (color[v] == white) TopVisit(v);

Append u to the front of L; // on finishing u add to list

}
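The pseudocode above can be rendered in Python as follows; prepending each vertex to L as it finishes yields the reverse order of finish times. The dressing-style example at the end is a hypothetical 3-item chain, not Bumstead's full DAG.

```python
def top_sort(adj):
    """The TopSort pseudocode above in Python: on finishing each vertex,
    prepend it to L; L is then a valid topological order."""
    color = {u: 'white' for u in adj}
    L = []

    def visit(u):                       # TopVisit(u)
        color[u] = 'gray'               # mark u visited
        for v in adj[u]:
            if color[v] == 'white':
                visit(v)
        L.insert(0, u)                  # on finishing u, add to front of L

    for u in adj:
        if color[u] == 'white':
            visit(u)
    return L

order = top_sort({'shirt': ['tie'], 'tie': ['jacket'], 'jacket': []})
print(order)   # ['shirt', 'tie', 'jacket']
```

In practice one would append and reverse at the end (each insert at the front of a Python list costs O(n)), but the prepend mirrors the linked-list pseudocode directly.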

This is a typical example of how DFS is used in applications. Observe that the structure is essentially the same as the

basic DFS procedure, but we only include the elements of DFS that are needed for this application.

As an example we consider the DAG presented in CLRS for Professor Bumstead’s order of dressing. Bumstead

lists the precedences in the order in which he puts on his clothes in the morning. We do our depth-ﬁrst search in

a different order from the one given in CLRS, and so we get a different ﬁnal ordering. However both orderings

are legitimate, given the precedence constraints. As with depth-ﬁrst search, the running time of topological sort

is Θ(V +E).

Strong Components: Next we consider a very important connectivity problem with digraphs. When digraphs are

used in communication and transportation networks, people want to know that their networks are complete in

the sense that from any location it is possible to reach any other location in the digraph. A digraph is strongly

connected if for every pair of vertices, u, v ∈ V , u can reach v and vice versa.

We would like to write an algorithm that determines whether a digraph is strongly connected. In fact we will

solve a generalization of this problem, of computing the strongly connected components (or strong components

for short) of a digraph. In particular, we partition the vertices of the digraph into subsets such that the induced

subgraph of each subset is strongly connected. (These subsets should be as large as possible, and still have this


Final order: socks, shirt, tie, shorts, pants, shoes, belt, jacket

Fig. 27: Topological sort.

property.) More formally, we say that two vertices u and v are mutually reachable if u can reach v and vice

versa. It is easy to see that mutual reachability is an equivalence relation. This equivalence relation partitions

the vertices into equivalence classes of mutually reachable vertices, and these are the strong components.

Observe that if we merge the vertices in each strong component into a single supervertex, and join two supervertices (A, B) if and only if there are vertices u ∈ A and v ∈ B such that (u, v) ∈ E, then the resulting digraph, called the component digraph, is necessarily acyclic. (Can you see why?) Thus, we may accurately refer to it as the component DAG.

Fig. 28: Strong Components.

The algorithm that we will present is an algorithm designer’s “dream” (and an algorithm student’s nightmare).

It is amazingly simple and efﬁcient, but it is so clever that it is very difﬁcult to even see how it works. We will

give some of the intuition that leads to the algorithm, but will not prove the algorithm’s correctness formally.

See CLRS for a formal proof.

Strong Components and DFS: By way of motivation, consider the DFS of the digraph shown in the following ﬁgure

(left). By deﬁnition of DFS, when you enter a strong component, every vertex in the component is reachable,

so the DFS does not terminate until all the vertices in the component have been visited. Thus all the vertices

in a strong component must appear in the same tree of the DFS forest. Observe that in the ﬁgure each strong

component is just a subtree of the DFS forest. Is this always true for any DFS? Unfortunately the answer is

no. In general, many strong components may appear in the same DFS tree. (See the DFS on the right for a

counterexample.) Does there always exist a way to order the DFS such that it is true? Fortunately, the answer is

yes.

Suppose that you knew the component DAG in advance. (This is ridiculous, because you would need to know

the strong components, and that is the problem we are trying to solve. But humor me for a moment.) Further


suppose that you computed a reversed topological order on the component digraph. That is, if (u, v) is an edge in the component digraph, then v comes before u in this reversed order (not after, as it would in a normal topological

ordering). Now, run DFS, but every time you need a new vertex to start the search from, select the next available

vertex according to this reverse topological order of the component digraph.

Here is an informal justiﬁcation. Clearly once the DFS starts within a given strong component, it must visit

every vertex within the component (and possibly some others) before finishing. If we do not start in reverse topological order, then the search may “leak out” into other strong components, and put them in the same DFS tree. For example, in the figure below right, when the search is started at vertex a, not only does it visit its component with b and c, but it also visits the other components as well. However, by visiting components in reverse

topological order of the component tree, each search cannot “leak out” into other components, because other components would already have been visited earlier in the search.

Fig. 29: Two depth-first searches.

This leaves us with the intuition that if we could somehow order the DFS, so that it hits the strong components

according to a reverse topological order, then we would have an easy algorithm for computing strong compo-

nents. However, we do not know what the component DAG looks like. (After all, we are trying to solve the

strong component problem in the ﬁrst place). The “trick” behind the strong component algorithm is that we

can ﬁnd an ordering of the vertices that has essentially the necessary property, without actually computing the

component DAG.

The Plumber’s Algorithm: I call this algorithm the plumber’s algorithm (because it avoids leaks). Unfortunately it

is quite difﬁcult to understand why this algorithm works. I will present the algorithm, and refer you to CLRS

for the complete proof. First recall that G^R (what CLRS calls G^T) is the digraph with the same vertex set as G but in which all edges have been reversed in direction. Given an adjacency list for G, it is possible to compute G^R in Θ(V + E) time. (I'll leave this as an exercise.)

Observe that the strongly connected components are not affected by reversing all the digraph’s edges. If u and v

are mutually reachable in G, then certainly this is still true in G^R. All that changes is that the component DAG is completely reversed. The ordering trick is to order the vertices of G according to their finish times in a DFS. Then visit the nodes of G^R in decreasing order of finish times. All the steps of the algorithm are quite easy to

implement, and all operate in Θ(V +E) time. Here is the algorithm.

Correctness: Why visit vertices in decreasing order of ﬁnish times? Why use the reversal digraph? It is difﬁcult

to justify these elements formally. Here is some intuition, though. Recall that the main intent is to visit the


Strong Components

StrongComp(G) {

Run DFS(G), computing finish times f[u] for each vertex u;

Compute R = Reverse(G), reversing all edges of G;

Sort the vertices of R (by CountingSort) in decreasing order of f[u];

Run DFS(R) using this order;

Each DFS tree is a strong component;

}
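The four steps of StrongComp can be sketched directly in Python (this is the algorithm usually attributed to Kosaraju and Sharir). Rather than counting-sort by f[u], the sketch simply records vertices in finish order during the first DFS and walks that list backwards, which achieves the same decreasing-finish-time order. The 3-vertex digraph at the end is a hypothetical example.

```python
def strong_components(adj):
    """The StrongComp pseudocode in Python: DFS for finish order, build
    the reversal G^R, DFS again in decreasing finish order; each tree of
    the second search is one strong component."""
    order, seen = [], set()

    def visit(u):
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                visit(v)
        order.append(u)                 # record u at its finish time

    for u in adj:
        if u not in seen:
            visit(u)

    radj = {u: [] for u in adj}         # G^R: reverse every edge
    for u in adj:
        for v in adj[u]:
            radj[v].append(u)

    comps, seen = [], set()

    def rvisit(u, comp):
        seen.add(u)
        comp.append(u)
        for v in radj[u]:
            if v not in seen:
                rvisit(v, comp)

    for u in reversed(order):           # decreasing order of finish time
        if u not in seen:
            comp = []
            rvisit(u, comp)
            comps.append(comp)
    return comps

comps = strong_components({'a': ['b'], 'b': ['a', 'c'], 'c': []})
print([sorted(c) for c in comps])   # [['a', 'b'], ['c']]
```

All four phases are Θ(V + E), matching the claim above.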

Fig. 30: Strong Components Algorithm.

strong components in a reverse topological order. The question is how to order the vertices so that this is true.

Recall from the topological sorting algorithm, that in a DAG, ﬁnish times occur in reverse topological order

(i.e., the ﬁrst vertex in the topological order is the one with the highest ﬁnish time). So, if we wanted to visit

the components in reverse topological order, this suggests that we should visit the vertices in increasing order

of ﬁnish time, starting with the lowest ﬁnishing time. This is a good starting idea, but it turns out that it doesn’t

work. The reason is that there are many vertices in each strong component, and they all have different ﬁnish

times. For example, in the ﬁgure above observe that in the ﬁrst DFS (on the left) the lowest ﬁnish time (of 4) is

achieved by vertex c, and its strong component is ﬁrst, not last, in topological order.

It is tempting to give up in frustration at this point. But there is something to notice about the ﬁnish times. If

we consider the maximum ﬁnish time in each component, then these are related to the topological order of the

component DAG. In particular, given any strong component C, deﬁne f(C) to be the maximum ﬁnish time

among all vertices in this component.

f(C) = max_{u∈C} f[u].

Lemma: Consider a digraph G = (V, E) and let C and C′ be two distinct strong components. If there is an edge (u, v) of G such that u ∈ C and v ∈ C′, then f(C) > f(C′).

See the book for a complete proof. Here is a quick sketch. If the DFS visits C first, then the DFS will leak into C′ (along edge (u, v) or some other edge), and then will visit everything in C′ before finally returning to C. Thus, some vertex of C will finish later than every vertex of C′. On the other hand, suppose that C′ is visited first. Because there is an edge from C to C′, we know from the definition of the component DAG that there cannot be a path from C′ to C. So C′ will completely finish before we even start C. Thus all the finish times of C will be larger than the finish times of C′.

For example, in the previous figure, the maximum finish times for each component are 18 (for {a, b, c}), 17 (for {d, e}), and 12 (for {f, g, h, i}). The order ⟨18, 17, 12⟩ is a valid topological order for the component digraph.


This is a big help. It tells us that if we run DFS and compute ﬁnish times, and then run a new DFS in decreasing

order of ﬁnish times, we will visit the components in topological order. The problem is that this is not what

we wanted. We wanted a reverse topological order for the component DAG. So, the ﬁnal trick is to reverse

the digraph, by forming G^R. This does not change the strong components, but it reverses the edges of the component graph, and so reverses the topological order, which is exactly what we wanted. In conclusion we have:

Theorem: Consider a digraph G on which DFS has been run. Sort the vertices by decreasing order of finish time. Then a DFS of the reversed digraph G^R visits the strong components according to a reversed topological order of the component DAG of G^R.

Lecture 12: Minimum Spanning Trees and Kruskal’s Algorithm

Read: Chapt 23 in CLRS, up through 23.2.

Minimum Spanning Trees: A common problem in communications networks and circuit design is that of connect-

ing together a set of nodes (communication sites or circuit components) by a network of minimal total length

(where length is the sum of the lengths of connecting wires). We assume that the network is undirected. To

minimize the length of the connecting network, it never pays to have any cycles (since we could break any

cycle without destroying connectivity and decrease the total length). Since the resulting connection graph is

connected, undirected, and acyclic, it is a free tree.

The computational problem is called the minimum spanning tree problem (MST for short). More formally, given

a connected, undirected graph G = (V, E), a spanning tree is an acyclic subset of edges T ⊆ E that connects

all the vertices together. Assuming that each edge (u, v) of G has a numeric weight or cost w(u, v) (which may be zero or negative), we define the cost of a spanning tree T to be the sum of the weights of its edges:

w(T) = Σ_{(u,v)∈T} w(u, v).

A minimum spanning tree (MST) is a spanning tree of minimum weight. Note that the minimum spanning tree

may not be unique, but it is true that if all the edge weights are distinct, then the MST will be unique (this is a

rather subtle fact, which we will not prove). Fig. 31 shows three spanning trees for the same graph, where the

shaded rectangles indicate the edges in the spanning tree. The one on the left is not a minimum spanning tree,

and the other two are. (An interesting observation is that not only do the edges sum to the same value, but in

fact the same set of edge weights appear in the two MST’s. Is this a coincidence? We’ll see later.)

Fig. 31: Spanning trees (the middle and right are minimum spanning trees).

Steiner Minimum Trees: Minimum spanning trees are actually mentioned in the U.S. legal code. The reason is

that AT&T was a government supported monopoly at one time, and was responsible for handling all telephone

connections. If a company wanted to connect a collection of installations by a private internal phone system,


AT&T was required (by law) to connect them in the minimum cost manner, which is clearly a spanning tree

. . . or is it?

Some companies discovered that they could actually reduce their connection costs by opening a new bogus

installation. Such an installation served no purpose other than to act as an intermediate point for connections.

An example is shown in Fig. 32. On the left, consider four installations that lie at the corners of a 1 × 1 square.

Assume that all edge lengths are just Euclidean distances. It is easy to see that the cost of any MST for this

conﬁguration is 3 (as shown on the left). However, if you introduce a new installation at the center, whose

distance to each of the other four points is 1/

√

2. It is now possible to connect these ﬁve points with a total cost

of 4/

√

2 = 2

√

2 ≈ 2.83. This is better than the MST.
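The arithmetic is quick to check: four spokes of length 1/√2 from the center point beat the three unit edges of the MST.

```python
import math

# Unit square: MST uses 3 edges of length 1; the Steiner tree uses four
# spokes of length 1/sqrt(2) through the added center point.
mst_cost = 3.0
steiner_cost = 4 / math.sqrt(2)     # = 2*sqrt(2)
print(round(steiner_cost, 2))       # 2.83
print(steiner_cost < mst_cost)      # True
```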

Fig. 32: Steiner Minimum tree.

In general, the problem of determining the lowest cost interconnection tree between a given set of nodes, assum-

ing that you are allowed additional nodes (called Steiner points) is called the Steiner minimum tree (or SMT

for short). An interesting fact is that although there is a simple greedy algorithm for MST’s (as we will see

below), the SMT problem is much harder, and in fact is NP-hard. (Luckily for AT&T, the US Legal code is

rather ambiguous on the point as to whether the phone company was required to use MST’s or SMT’s in making

connections.)

Generic approach: We will present two greedy algorithms (Kruskal's and Prim's algorithms) for computing a minimum spanning tree. Recall that a greedy algorithm is one that builds a solution by repeatedly selecting the

cheapest (or generally locally optimal choice) among all options at each stage. An important characteristic of

greedy algorithms is that once they make a choice, they never “unmake” this choice. Before presenting these

algorithms, let us review some basic facts about free trees. They are all quite easy to prove.

Lemma:

• A free tree with n vertices has exactly n −1 edges.

• There exists a unique path between any two vertices of a free tree.

• Adding any edge to a free tree creates a unique cycle. Breaking any edge on this cycle restores a free

tree.

Let G = (V, E) be an undirected, connected graph whose edges have numeric edge weights (which may be positive, negative, or zero). The intuition behind the greedy MST algorithms is simple: we maintain a subset of edges A, which will initially be empty, and we will add edges one at a time, until A equals the MST. We say that a subset A ⊆ E is viable if A is a subset of edges in some MST. (We cannot say “the” MST, since it is not necessarily unique.) We say that an edge (u, v) ∈ E − A is safe if A ∪ {(u, v)} is viable. In other words, (u, v) is a safe choice to add, in the sense that A can still be extended to form an MST. Note that if A is viable it cannot contain a cycle. A generic greedy algorithm operates by repeatedly adding any safe edge to the current spanning tree. (Note that viability is a property of subsets of edges and safety is a property of a single edge.)

When is an edge safe? We consider the theoretical issues behind determining whether an edge is safe or not. Let S ⊆ V be a subset of the vertices. A cut (S, V − S) is just a partition of the vertices into two disjoint subsets. An edge (u, v) crosses the cut if one endpoint is in S and the other is in V − S. Given a subset of edges A, we say that a cut respects A if no edge in A crosses the cut. It is not hard to see why cuts that respect A are important to this problem. If we have computed a partial MST, and we wish to know which edges can be added without inducing a cycle in the current MST, then any edge that crosses a respecting cut is a possible candidate.

An edge of E is a light edge crossing a cut if, among all edges crossing the cut, it has the minimum weight (the light edge may not be unique if there are duplicate edge weights). Intuition says that since no edge crossing a respecting cut induces a cycle, the lightest edge crossing the cut is a natural choice. The main theorem which drives both algorithms is the following. It essentially says that we can always augment A by adding the minimum weight edge that crosses a cut which respects A. (It is stated in complete generality, so that it can be applied to both algorithms.)

MST Lemma: Let G = (V, E) be a connected, undirected graph with real-valued weights on the edges. Let

A be a viable subset of E (i.e. a subset of some MST), let (S, V − S) be any cut that respects A, and let

(u, v) be a light edge crossing this cut. Then the edge (u, v) is safe for A.

Proof: It will simplify the proof to assume that all the edge weights are distinct. Let T be any MST for G (see Fig. 33). If T contains (u, v) then we are done. Suppose that no MST contains (u, v). We will derive a contradiction.

Fig. 33: Proof of the MST Lemma (left: T + (u, v); right: T′ = T − (x, y) + (u, v)). Edge (u, v) is the light edge crossing the cut (S, V − S).

Add the edge (u, v) to T, thus creating a cycle. Since u and v are on opposite sides of the cut, and since

any cycle must cross the cut an even number of times, there must be at least one other edge (x, y) in T that

crosses the cut.

The edge (x, y) is not in A (because the cut respects A). By removing (x, y) we restore a spanning tree, call it T′. We have

w(T′) = w(T) − w(x, y) + w(u, v).

Since (u, v) is the lightest edge crossing the cut, we have w(u, v) < w(x, y). Thus w(T′) < w(T). This contradicts the assumption that T was an MST.

Kruskal’s Algorithm: Kruskal’s algorithm works by attempting to add edges to A in increasing order of weight (lightest edges first). If the next edge does not induce a cycle among the current set of edges, then it is added to

A. If it does, then this edge is passed over, and we consider the next edge in order. Note that as this algorithm

runs, the edges of A will induce a forest on the vertices. As the algorithm continues, the trees of this forest are

merged together, until we have a single tree containing all the vertices.

Observe that this strategy leads to a correct algorithm. Why? Consider the edge (u, v) that Kruskal’s algorithm seeks to add next, and suppose that this edge does not induce a cycle in A. Let A′ denote the tree of the forest A that contains vertex u. Consider the cut (A′, V − A′). Every edge crossing the cut is not in A, and so this cut respects A, and (u, v) is the light edge across the cut (because any lighter edge would have been considered earlier by the algorithm). Thus, by the MST Lemma, (u, v) is safe.


The only tricky part of the algorithm is how to detect efﬁciently whether the addition of an edge will create a

cycle in A. We could perform a DFS on the subgraph induced by the edges of A, but this would take too much time.

We want a fast test that tells us whether u and v are in the same tree of A.

This can be done by a data structure (which we have not studied) called the disjoint set Union-Find data structure.

This data structure supports three operations:

Create-Set(u): Create a set containing the single item u.

Find-Set(u): Find the set that contains a given item u.

Union(u, v): Merge the set containing u and the set containing v into a common set.

You are not responsible for knowing how this data structure works (which is described in CLRS). You may

use it as a “black-box”. For our purposes it sufﬁces to know that each of these operations can be performed in

O(log n) time, on a set of size n. (The Union-Find data structure is quite interesting, because it can actually

perform a sequence of n operations much faster than O(n log n) time. However, we will not go into this here.

O(log n) time is fast enough for its use in Kruskal’s algorithm.)
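For concreteness, here is one possible union–find sketch supporting the three operations (using union by rank and path compression; this particular implementation is an illustration of my own, not a transcription from CLRS):

```python
class UnionFind:
    """Disjoint-set structure supporting Create-Set, Find-Set, and Union."""

    def __init__(self):
        self.parent = {}
        self.rank = {}

    def create_set(self, u):
        self.parent[u] = u
        self.rank[u] = 0

    def find_set(self, u):
        # Path compression: point u directly at the root of its tree.
        if self.parent[u] != u:
            self.parent[u] = self.find_set(self.parent[u])
        return self.parent[u]

    def union(self, u, v):
        ru, rv = self.find_set(u), self.find_set(v)
        if ru == rv:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1

# Example: create {a}, {b}, {c}, then merge the sets of a and b.
uf = UnionFind()
for x in "abc":
    uf.create_set(x)
uf.union("a", "b")
print(uf.find_set("a") == uf.find_set("b"))  # True
print(uf.find_set("a") == uf.find_set("c"))  # False
```
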

In Kruskal’s algorithm, the vertices of the graph will be the elements to be stored in the sets, and the sets will be

the vertices in each tree of A. The set A can be stored as a simple list of edges. The algorithm is shown below, and

an example is shown in Fig. 34.

Kruskal’s Algorithm

Kruskal(G=(V,E), w) {
    A = {}                                 // initially A is empty
    for each (u in V) Create_Set(u)        // create set for each vertex
    Sort E in increasing order by weight w
    for each ((u,v) from the sorted list) {
        if (Find_Set(u) != Find_Set(v)) {  // u and v in different trees
            Add (u,v) to A
            Union(u, v)
        }
    }
    return A
}
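The pseudocode above translates almost line for line into a runnable sketch (Python is used here for illustration; the small input graph is an assumption of mine, not the example of Fig. 34):

```python
def kruskal(vertices, edges):
    """edges: list of (weight, u, v) tuples. Returns the MST edge list."""
    parent = {v: v for v in vertices}          # Create_Set for each vertex

    def find(u):                               # Find_Set with path compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    A = []
    for w, u, v in sorted(edges):              # lightest edges first
        if find(u) != find(v):                 # u and v in different trees?
            A.append((u, v, w))
            parent[find(u)] = find(v)          # Union(u, v)
    return A

# A small example graph (assumed, for illustration only).
edges = [(1, "a", "b"), (4, "a", "c"), (2, "b", "c"), (5, "c", "d")]
mst = kruskal("abcd", edges)
print(sum(w for _, _, w in mst))   # 8 = 1 + 2 + 5
```
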


Fig. 34: Kruskal’s Algorithm. Each vertex is labeled according to the set that contains it.

Analysis: How long does Kruskal’s algorithm take? As usual, let V be the number of vertices and E be the number of

edges. Since the graph is connected, we may assume that E ≥ V − 1. Observe that it takes Θ(E log E) time to sort the edges. The for-loop is iterated E times, and each iteration involves a constant number of accesses to the Union-Find data structure on a collection of V items. Thus each access takes O(log V ) time, for a total of O(E log V ). Thus the total running time is the sum of these, which is Θ((V + E) log V ). Since V is asymptotically no larger than E, we can write this more simply as Θ(E log V ).

Lecture 13: Prim’s and Baruvka’s Algorithms for MSTs

Read: Chapt 23 in CLRS. Baruvka’s algorithm is not described in CLRS.

Prim’s Algorithm: Prim’s algorithm is another greedy algorithm for minimum spanning trees. It differs from Kruskal’s algorithm only in how it selects the next safe edge to add at each step. Its running time is essentially the same as Kruskal’s algorithm, O((V + E) log V ). There are two reasons for studying Prim’s algorithm. The first is to

show that there is more than one way to solve a problem (an important lesson to learn in algorithm design), and

the second is that Prim’s algorithm looks very much like another greedy algorithm, called Dijkstra’s algorithm,

that we will study for a completely different problem, shortest paths. Thus, not only is Prim’s a different way to

solve the same MST problem, it is also the same way to solve a different problem. (Whatever that means!)

Different ways to grow a tree: Kruskal’s algorithm worked by ordering the edges, and inserting them one by one

into the spanning tree, taking care never to introduce a cycle. Intuitively Kruskal’s works by merging or splicing

two trees together, until all the vertices are in the same tree.

In contrast, Prim’s algorithm builds the tree up by adding leaves one at a time to the current tree. We start with

a root vertex r (it can be any vertex). At any time, the subset of edges A forms a single tree (in Kruskal’s it

formed a forest). We look to add a single vertex as a leaf to the tree. The process is illustrated in the following

ﬁgure.


Fig. 35: Prim’s Algorithm.

Observe that if we consider the set of vertices S currently part of the tree, and its complement (V −S), we have

a cut of the graph and the current set of tree edges A respects this cut. Which edge should we add next? The

MST Lemma from the previous lecture tells us that it is safe to add the light edge. In the ﬁgure, this is the edge

of weight 4 going to vertex u. Then u is added to the vertices of S, and the cut changes. Note that some edges

that crossed the cut before are no longer crossing it, and others that were not crossing the cut are.

It is easy to see that the key questions in the efficient implementation of Prim’s algorithm are how to update the cut efficiently and how to determine the light edge quickly. To do this, we will make use of a priority queue

data structure. Recall that this is the data structure used in HeapSort. This is a data structure that stores a set of

items, where each item is associated with a key value. The priority queue supports three operations.

insert(u, key): Insert u with the key value key in Q.

extractMin(): Extract the item with the minimum key value in Q.


decreaseKey(u, new key): Decrease the value of u’s key value to new key.

A priority queue can be implemented using the same heap data structure used in heapsort. All of the above

operations can be performed in O(log n) time, where n is the number of items in the heap.

What do we store in the priority queue? At ﬁrst you might think that we should store the edges that cross the

cut, since this is what we are removing with each step of the algorithm. The problem is that when a vertex is

moved from one side of the cut to the other, this results in a complicated sequence of updates.

There is a much more elegant solution, and this is what makes Prim’s algorithm so nice. For each vertex u ∈ V − S (not part of the current spanning tree) we associate u with a key value key[u], which is the weight of the lightest edge going from u to any vertex in S. We also store in pred[u] the end vertex of this edge in S. If there is no edge from u to a vertex in S, then we set its key value to +∞. We will also need to know

which vertices are in S and which are not. We do this by coloring the vertices in S black.

Here is Prim’s algorithm. The root vertex r can be any vertex in V .

Prim’s Algorithm

Prim(G, w, r) {
    for each (u in V) {                  // initialization
        key[u] = +infinity;
        color[u] = white;
    }
    key[r] = 0;                          // start at root
    pred[r] = nil;
    Q = new PriQueue(V);                 // put vertices in Q
    while (Q.nonEmpty()) {               // until all vertices in MST
        u = Q.extractMin();              // vertex with lightest edge
        for each (v in Adj[u]) {
            if ((color[v] == white) && (w(u,v) < key[v])) {
                key[v] = w(u,v);         // new lighter edge out of v
                Q.decreaseKey(v, key[v]);
                pred[v] = u;
            }
        }
        color[u] = black;
    }
    [The pred pointers define the MST as an inverted tree rooted at r]
}
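Here is a runnable sketch of the same idea (Python). Since Python’s heapq module has no decreaseKey operation, this version pushes a fresh entry for each key improvement and skips stale entries on extraction — a standard substitute, not a literal transcription of the pseudocode. The example graph is an assumption for illustration:

```python
import heapq

def prim(adj, r):
    """adj: {u: [(v, w), ...]} undirected graph. Returns the pred map."""
    key = {u: float("inf") for u in adj}
    pred = {u: None for u in adj}
    key[r] = 0
    in_tree = set()
    pq = [(0, r)]                        # (key, vertex) pairs
    while pq:
        k, u = heapq.heappop(pq)
        if u in in_tree:
            continue                     # stale entry; skip it
        in_tree.add(u)                   # color[u] = black
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                key[v] = w               # new lighter edge into v
                pred[v] = u
                heapq.heappush(pq, (w, v))   # stands in for decreaseKey
    return pred

# A small example graph (assumed for illustration).
adj = {
    "a": [("b", 4), ("c", 1)],
    "b": [("a", 4), ("c", 2)],
    "c": [("a", 1), ("b", 2)],
}
tree = prim(adj, "a")
print(tree)   # {'a': None, 'b': 'c', 'c': 'a'}
```
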

The following ﬁgure illustrates Prim’s algorithm. The arrows on edges indicate the predecessor pointers, and

the numeric label in each vertex is the key value.

To analyze Prim’s algorithm, we account for the time spent on each vertex as it is extracted from the priority

queue. It takes O(log V ) to extract this vertex from the queue. For each incident edge, we spend potentially

O(log V ) time decreasing the key of the neighboring vertex. Thus the time is O(log V + deg(u) log V ) time.

The other steps of the update are constant time. So the overall running time is

T(V, E) = Σ_{u∈V} (log V + deg(u) log V) = Σ_{u∈V} (1 + deg(u)) log V

= (log V) Σ_{u∈V} (1 + deg(u)) = (log V)(V + 2E) = Θ((V + E) log V).

Since G is connected, V is asymptotically no greater than E, so this is Θ(E log V ). This is exactly the same as

Kruskal’s algorithm.



Fig. 36: Prim’s Algorithm.

Baruvka’s Algorithm: We have seen two ways (Kruskal’s and Prim’s algorithms) for solving the MST problem. So,

it may seem like complete overkill to consider yet another algorithm. This one is called Baruvka’s algorithm.

It is actually the oldest of the three algorithms (invented in 1926, well before the ﬁrst computers). The reason

for studying this algorithm is that of the three algorithms, it is the easiest to implement on a parallel computer.

Unlike Kruskal’s and Prim’s algorithms, which add edges one at a time, Baruvka’s algorithm adds a whole set

of edges all at once to the MST.

Baruvka’s algorithm is similar to Kruskal’s algorithm, in the sense that it works by maintaining a collection

of disconnected trees. Let us call each subtree a component. Initially, each vertex is by itself in a one-vertex

component. Recall that with each stage of Kruskal’s algorithm, we add the lightest-weight edge that connects

two different components together. To prove Kruskal’s algorithm correct, we argued (from the MST Lemma)

that the lightest such edge will be safe to add to the MST.

In fact, a closer inspection of the proof reveals that the cheapest edge leaving any component is always safe.

This suggests a more parallel way to grow the MST. Each component determines the lightest edge that goes

from inside the component to outside the component (we don’t care where). We say that such an edge leaves the

component. Note that two components might select the same edge by this process. By the above observation,

all of these edges are safe, so we may add them all at once to the set A of edges in the MST. As a result, many

components will be merged together into a single component. We then apply DFS to the edges of A, to identify

the new components. This process is repeated until only one component remains. A fairly high-level description

of Baruvka’s algorithm is given below.

Baruvka’s Algorithm

Baruvka(G=(V,E), w) {
    initialize each vertex to be its own component;
    A = {};                                    // A holds edges of the MST
    do {
        for (each component C) {
            find the lightest edge (u,v) with u in C and v not in C;
            add {u,v} to A (unless it is already there);
        }
        apply DFS to graph H=(V,A), to compute the new components;
    } while (there are 2 or more components);
    return A;                                  // return final MST edges
}
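A runnable sketch along these lines (Python). Instead of relabeling components by DFS as the notes describe, this version tracks components with a union–find structure — a substitution of my own. The example graph, and the assumption that all edge weights are distinct (which keeps the batch of added edges cycle-free), are also mine:

```python
def boruvka(vertices, edges):
    """edges: list of (w, u, v) with distinct weights. Returns MST edges."""
    parent = {v: v for v in vertices}

    def find(u):                          # union-find with path compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    A = []
    num_components = len(vertices)
    while num_components > 1:
        # For each component, find the lightest edge leaving it.
        cheapest = {}
        for w, u, v in edges:
            cu, cv = find(u), find(v)
            if cu == cv:
                continue                  # edge stays inside one component
            if cu not in cheapest or w < cheapest[cu][0]:
                cheapest[cu] = (w, u, v)
            if cv not in cheapest or w < cheapest[cv][0]:
                cheapest[cv] = (w, u, v)
        # Add all selected edges at once (two components may pick the same one).
        for w, u, v in set(cheapest.values()):
            if find(u) != find(v):        # may already be merged this round
                A.append((u, v, w))
                parent[find(u)] = find(v)
                num_components -= 1
    return A

# A small example cycle graph (assumed for illustration).
edges = [(1, "a", "b"), (2, "b", "c"), (3, "c", "d"), (4, "a", "d")]
mst = boruvka("abcd", edges)
print(sum(w for _, _, w in mst))   # 6 = 1 + 2 + 3
```
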


There are a number of unspeciﬁed details in Baruvka’s algorithm, which we will not spell out in detail, except to

note that they can be solved in Θ(V +E) time through DFS. First, we may apply DFS, but only traversing the

edges of A to compute the components. Each DFS tree will correspond to a separate component. We label each

vertex with its component number as part of this process. With these labels it is easy to determine which edges

go between components (since their endpoints have different labels). Then we can traverse each component

again to determine the lightest edge that leaves the component. (In fact, with a little more cleverness, we can do

all this without having to perform two separate DFS’s.) The algorithm is illustrated in the ﬁgure below.


Fig. 37: Baruvka’s Algorithm.

Analysis: How long does Baruvka’s algorithm take? Observe that because each iteration involves doing a DFS, each

iteration (of the outer do-while loop) can be performed in Θ(V +E) time. The question is how many iterations

are required in general? We claim that there are never more than O(log n) iterations needed. To see why, let m

denote the number of components at some stage. Each of the m components will merge with at least one other component. Afterwards the number of remaining components could be as low as 1 (if they all merge together), but never higher than m/2 (if they merge in pairs). Thus, the number of components decreases by at least half with each iteration. Since we start with V components, this can happen at most lg V times, until only one

component remains. Thus, the total running time is Θ((V +E) log V ) time. Again, since G is connected, V is

asymptotically no larger than E, so we can write this more succinctly as Θ(E log V ). Thus all three algorithms

have the same asymptotic running time.

Lecture 14: Dijkstra’s Algorithm for Shortest Paths

Read: Chapt 24 in CLRS.

Shortest Paths: Consider the problem of computing shortest paths in a directed graph. We have already seen that

breadth-ﬁrst search is an O(V +E) algorithm for ﬁnding shortest paths from a single source vertex to all other

vertices, assuming that the graph has no edge weights. Suppose that the graph has edge weights, and we wish

to compute the shortest paths from a single source vertex to all other vertices in the graph.

By the way, there are other formulations of the shortest path problem. One may want just the shortest path

between a single pair of vertices. Most algorithms for this problem are variants of the single-source algorithm

that we will present. There is also a single sink problem, which can be solved in the transpose digraph (that is,

by reversing the edges). Computing all-pairs shortest paths can be solved by iterating a single-source algorithm

over all vertices, but there are other global methods that are faster.

Think of the vertices as cities, and the weights represent the cost of traveling from one city to another (nonexistent edges can be thought of as having infinite cost). When edge weights are present, we define the length of a path to be the sum of the edge weights along the path. Define the distance between two vertices u and v, δ(u, v), to be the length of the minimum length path from u to v. (δ(u, u) = 0, by considering the path of 0 edges from u to itself.)

Single Source Shortest Paths: The single source shortest path problem is as follows. We are given a directed graph

with nonnegative edge weights G = (V, E) and a distinguished source vertex, s ∈ V . The problem is to

determine the distance from the source vertex to every vertex in the graph.

It is possible to have graphs with negative edges, but in order for the shortest path to be well deﬁned, we need to

add the requirement that there be no cycles whose total cost is negative (otherwise you make the path inﬁnitely

short by cycling forever through such a cycle). The text discusses the Bellman-Ford algorithm for ﬁnding

shortest paths assuming negative weight edges but no negative-weight cycles are present. We will discuss a

simple greedy algorithm, called Dijkstra’s algorithm, which assumes there are no negative edge weights.

We will stress the task of computing the minimum distance from the source to each vertex. Computing the

actual path will be a fairly simple extension. As in breadth-ﬁrst search, for each vertex we will have a pointer

pred[v] which points back to the source. By following the predecessor pointers backwards from any vertex, we

will construct the reversal of the shortest path to v.

Shortest Paths and Relaxation: The basic structure of Dijkstra’s algorithm is to maintain an estimate of the shortest

path for each vertex, call this d[v]. (NOTE: Don’t confuse d[v] with the d[v] in the DFS algorithm. They are

completely different.) Intuitively d[v] will be the length of the shortest path that the algorithm knows of from

s to v. This value will always be greater than or equal to the true shortest path distance from s to v. Initially we know of no paths, so d[s] = 0 and all the other d[v] values are set to +∞. As the algorithm

goes on, and sees more and more vertices, it attempts to update d[v] for each vertex in the graph, until all the

d[v] values converge to the true shortest distances.

The process by which an estimate is updated is called relaxation. Here is how relaxation works. Intuitively, if

you can see that your solution has not yet reached an optimum value, then push it a little closer to the optimum.

In particular, if you discover a path from s to v shorter than d[v], then you need to update d[v]. This notion is

common to many optimization algorithms.

Consider an edge from a vertex u to v whose weight is w(u, v). Suppose that we have already computed current

estimates on d[u] and d[v]. We know that there is a path from s to u of weight d[u]. By taking this path and

following it with the edge (u, v) we get a path to v of length d[u] +w(u, v). If this path is better than the existing

path of length d[v] to v, we should update d[v] to the value d[u] + w(u, v). This is illustrated in Fig. 38. We

should also remember that the shortest path to v passes through u, which we do by updating v’s predecessor

pointer.


Fig. 38: Relaxation.

Relaxing an edge

Relax(u, v) {
    if (d[u] + w(u,v) < d[v]) {   // is the path through u shorter?
        d[v] = d[u] + w(u,v)      // yes, then take it
        pred[v] = u               // record that we go through u
    }
}


Observe that whenever we set d[v] to a ﬁnite value, there is always evidence of a path of that length. Therefore

d[v] ≥ δ(s, v). If d[v] = δ(s, v), then further relaxations cannot change its value.

It is not hard to see that if we perform Relax(u, v) repeatedly over all edges of the graph, the d[v] values will

eventually converge to the ﬁnal true distance value from s. The cleverness of any shortest path algorithm is

to perform the updates in a judicious manner, so the convergence is as fast as possible. In particular, the best

possible would be to order relaxation operations in such a way that each edge is relaxed exactly once. Dijkstra’s

algorithm does exactly this.

Dijkstra’s Algorithm: Dijkstra’s algorithm is based on the notion of performing repeated relaxations. Dijkstra’s

algorithm operates by maintaining a subset of vertices, S ⊆ V , for which we claim we “know” the true distance,

that is d[v] = δ(s, v). Initially S = ∅, the empty set, and we set d[s] = 0 and all others to +∞. One by one we

select vertices from V −S to add to S.

The set S can be implemented using an array of vertex colors. Initially all vertices are white, and we set

color[v] = black to indicate that v ∈ S.

How do we select which vertex among the vertices of V − S to add next to S? Here is where greedy selection

comes in. Dijkstra recognized that the best way in which to perform relaxations is by increasing order of distance

from the source. This way, whenever a relaxation is being performed, it is possible to infer that the result of the relaxation yields the final distance value. To implement this, for each vertex u ∈ V − S, we maintain a

distance estimate d[u]. The greedy thing to do is to take the vertex of V − S for which d[u] is minimum, that

is, take the unprocessed vertex that is closest (by our estimate) to s. Later we will justify why this is the proper

choice.

In order to perform this selection efﬁciently, we store the vertices of V − S in a priority queue (e.g. a heap),

where the key value of each vertex u is d[u]. Note the similarity with Prim’s algorithm, although a different

key value is used there. Also recall that if we implement the priority queue using a heap, we can perform the

operations Insert(), Extract Min(), and Decrease Key(), on a priority queue of size n each in O(log n) time.

Each vertex “knows” its location in the priority queue (e.g. has a cross reference link to the priority queue entry),

and each entry in the priority queue “knows” which vertex it represents. It is important when implementing the

priority queue that this cross reference information is updated.

Here is Dijkstra’s algorithm. (Note the remarkable similarity to Prim’s algorithm.) An example is presented in

Fig. 39.

Notice that the coloring is not really used by the algorithm, but it has been included to make the connection with

the correctness proof a little clearer. Because of the similarity between this and Prim’s algorithm, the running

time is the same, namely Θ(E log V ).

Correctness: Recall that d[v] is the distance value assigned to vertex v by Dijkstra’s algorithm, and let δ(s, v) denote

the length of the true shortest path from s to v. To see that Dijkstra’s algorithm correctly gives the ﬁnal true

distances, we need to show that d[v] = δ(s, v) when the algorithm terminates. This is a consequence of the

following lemma, which states that once a vertex u has been added to S (i.e. colored black), d[u] is the true

shortest distance from s to u. Since at the end of the algorithm, all vertices are in S, then all distance estimates

are correct.

Lemma: When a vertex u is added to S, d[u] = δ(s, u).

Proof: It will simplify the proof conceptually if we assume that all the edge weights are strictly positive (the

general case of nonnegative edges is presented in the text).

Suppose to the contrary that at some point Dijkstra’s algorithm ﬁrst attempts to add a vertex u to S for

which d[u] ≠ δ(s, u). By our observations about relaxation, d[u] is never less than δ(s, u), thus we have

d[u] > δ(s, u). Consider the situation just prior to the insertion of u. Consider the true shortest path from

s to u. Because s ∈ S and u ∈ V − S, at some point this path must ﬁrst jump out of S. Let (x, y) be the

edge taken by the path, where x ∈ S and y ∈ V −S. (Note that it may be that x = s and/or y = u).


Dijkstra’s Algorithm

Dijkstra(G, w, s) {
    for each (u in V) {                 // initialization
        d[u] = +infinity
        color[u] = white
        pred[u] = null
    }
    d[s] = 0                            // dist to source is 0
    Q = new PriQueue(V)                 // put all vertices in Q
    while (Q.nonEmpty()) {              // until all vertices processed
        u = Q.extractMin()              // select u closest to s
        for each (v in Adj[u]) {
            if (d[u] + w(u,v) < d[v]) { // Relax(u,v)
                d[v] = d[u] + w(u,v)
                Q.decreaseKey(v, d[v])
                pred[v] = u
            }
        }
        color[u] = black
    }
    [The pred pointers define an "inverted" shortest path tree]
}
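A runnable sketch of the algorithm (Python). As in the Prim's algorithm sketch earlier, heapq lacks decreaseKey, so stale queue entries are simply skipped on extraction; the example digraph is an assumption for illustration:

```python
import heapq

def dijkstra(adj, s):
    """adj: {u: [(v, w), ...]} digraph with nonnegative weights w."""
    d = {u: float("inf") for u in adj}
    pred = {u: None for u in adj}
    d[s] = 0
    pq = [(0, s)]                       # (distance estimate, vertex)
    done = set()
    while pq:
        du, u = heapq.heappop(pq)
        if u in done:
            continue                    # stale queue entry; skip it
        done.add(u)                     # color[u] = black
        for v, w in adj[u]:
            if du + w < d[v]:           # Relax(u, v)
                d[v] = du + w
                pred[v] = u
                heapq.heappush(pq, (d[v], v))   # stands in for decreaseKey
    return d

# A small example digraph (assumed for illustration).
adj = {
    "s": [("a", 2), ("b", 7)],
    "a": [("b", 3), ("c", 8)],
    "b": [("c", 1)],
    "c": [],
}
dist = dijkstra(adj, "s")
print(dist)   # {'s': 0, 'a': 2, 'b': 5, 'c': 6}
```
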


Fig. 39: Dijkstra’s Algorithm example.


Fig. 40: Correctness of Dijkstra’s Algorithm.


We argue that y ≠ u. Why? Since x ∈ S we have d[x] = δ(s, x). (Since u was the first vertex added to

S which violated this, all prior vertices satisfy this.) Since we applied relaxation to x when it was added,

we would have set d[y] = d[x] + w(x, y) = δ(s, y). Thus d[y] is correct, and by hypothesis, d[u] is not

correct, so they cannot be the same.

Now observe that since y appears somewhere along the shortest path from s to u (but not at u) and all

subsequent edges following y are of positive weight, we have δ(s, y) < δ(s, u), and thus

d[y] = δ(s, y) < δ(s, u) < d[u].

Thus y would have been added to S before u, in contradiction to our assumption that u is the next vertex

to be added to S.

Lecture 15: All-Pairs Shortest Paths

Read: Section 25.2 in CLRS.

All-Pairs Shortest Paths: We consider the generalization of the shortest path problem, to computing shortest paths

between all pairs of vertices. Let G = (V, E) be a directed graph with edge weights. If (u, v) ∈ E is an edge

of G, then the weight of this edge is denoted w(u, v). Recall that the cost of a path is the sum of edge weights

along the path. The distance between two vertices δ(u, v) is the cost of the minimum cost path between them.

We will allow G to have negative cost edges, but we will not allow G to have any negative cost cycles.

We consider the problem of determining the cost of the shortest path between all pairs of vertices in a weighted directed graph. We will present a Θ(n^3) algorithm, called the Floyd-Warshall algorithm. This algorithm is based on dynamic programming.

For this algorithm, we will assume that the digraph is represented as an adjacency matrix, rather than the more common adjacency list. Although adjacency lists are generally more efficient for sparse graphs, storing all the inter-vertex distances will require Ω(n^2) storage, so the savings is not justified here. Because the algorithm is matrix-based, we will employ common matrix notation, using i, j and k to denote vertices rather than u, v, and w as we usually do.

Input Format: The input is an n × n matrix w of edge weights, which are based on the edge weights in the digraph. We let w_ij denote the entry in row i and column j of w.

w_ij = 0 if i = j; w(i, j) if i ≠ j and (i, j) ∈ E; +∞ if i ≠ j and (i, j) ∉ E.

Setting w_ij = ∞ if there is no edge intuitively means that there is no direct link between these two nodes, and hence the direct cost is infinite. The reason for setting w_ii = 0 is that there is always a trivial path of length 0 (using no edges) from any vertex to itself. (Note that in digraphs it is possible to have self-loop edges, and so w(i, i) may generally be nonzero. It cannot be negative, since we assume that there are no negative cost cycles, and if it is positive, there is no point in using it as part of any shortest path.)

The output will be an n × n distance matrix D = [d_ij] where d_ij = δ(i, j), the shortest path cost from vertex i to j. Recovering the shortest paths will also be an issue. To help us do this, we will also compute an auxiliary matrix mid[i, j]. The value of mid[i, j] will be a vertex that is somewhere along the shortest path from i to j. If the shortest path travels directly from i to j without passing through any other vertices, then mid[i, j] will be set to null. These intermediate values behave somewhat like the predecessor pointers in Dijkstra’s algorithm, in order to reconstruct the final shortest path in Θ(n) time.


Floyd-Warshall Algorithm: The Floyd-Warshall algorithm dates back to the early 60’s. Warshall was interested in the weaker question of reachability: determine for each pair of vertices u and v, whether u can reach v. Floyd realized that the same technique could be used to compute shortest paths with only minor variations. The Floyd-Warshall algorithm runs in Θ(n^3) time.

As with any DP algorithm, the key is reducing a large problem to smaller problems. A natural way of doing this is by limiting the number of edges of the path, but it turns out that this does not lead to the fastest algorithm (but is an approach worthy of consideration). The main feature of the Floyd-Warshall algorithm is in finding the best formulation for the shortest path subproblem. Rather than limiting the number of edges on the path, they instead limit the set of vertices through which the path is allowed to pass. In particular, for a path p = ⟨v_1, v_2, . . . , v_ℓ⟩ we say that the vertices v_2, v_3, . . . , v_{ℓ−1} are the intermediate vertices of this path. Note that a path consisting of a single edge has no intermediate vertices.

Formulation: Define d^(k)_ij to be the cost of the shortest path from i to j such that any intermediate vertices on the path are chosen from the set {1, 2, . . . , k}.

In other words, we consider a path from i to j which either consists of the single edge (i, j), or it visits some intermediate vertices along the way, but these intermediate vertices can only be chosen from among {1, 2, . . . , k}. The path is free to visit any subset of these vertices, and to do so in any order. For example, in the digraph shown in Fig. 41(a), notice how the value of d^(k)_{5,6} changes as k varies.

[Fig. 41(a) shows a weighted digraph on vertices 1–6; Fig. 41(b) illustrates the update rule, with paths from i to k and from k to j whose intermediate vertices are drawn from {1, . . . , k−1}. For the graph in (a):

    d^(0)_{5,6} = INF  (no path)
    d^(1)_{5,6} = 13   (path ⟨5,1,6⟩)
    d^(2)_{5,6} = 9    (path ⟨5,2,6⟩)
    d^(3)_{5,6} = 8    (path ⟨5,3,2,6⟩)
    d^(4)_{5,6} = 6    (path ⟨5,4,1,6⟩)]

Fig. 41: Limiting intermediate vertices. For example d^(3)_{5,6} can go through any combination of the intermediate vertices {1, 2, 3}, of which ⟨5, 3, 2, 6⟩ has the lowest cost of 8.

Floyd-Warshall Update Rule: How do we compute d^(k)_ij assuming that we have already computed the previous matrix d^(k−1)? There are two basic cases, depending on the ways that we might get from vertex i to vertex j, assuming that the intermediate vertices are chosen from {1, 2, . . . , k}:

Don't go through k at all: Then the shortest path from i to j uses only intermediate vertices {1, . . . , k − 1} and hence the length of the shortest path is d^(k−1)_ij.

Do go through k: First observe that a shortest path does not pass through the same vertex twice, so we can assume that we pass through k exactly once. (The assumption that there are no negative cost cycles is being used here.) That is, we go from i to k, and then from k to j. In order for the overall path to be as short as possible we should take the shortest path from i to k, and the shortest path from k to j. Since each of these paths uses intermediate vertices only in {1, 2, . . . , k − 1}, the length of the path is d^(k−1)_ik + d^(k−1)_kj.


This suggests the following recursive rule (the DP formulation) for computing d^(k)_ij, which is illustrated in Fig. 41(b).

    d^(0)_ij = w_ij,
    d^(k)_ij = min( d^(k−1)_ij , d^(k−1)_ik + d^(k−1)_kj )   for k ≥ 1.

The final answer is d^(n)_ij because this allows all possible vertices as intermediate vertices. We could write a recursive program to compute d^(k)_ij, but this would be prohibitively slow because the same value may be reevaluated many times. Instead, we compute it bottom-up by storing the values in a table, and looking the values up as we need them. Here is the complete algorithm. We have also included mid-vertex pointers, mid[i, j], for extracting the final shortest paths. We will leave the extraction of the shortest path as an exercise.

Floyd-Warshall Algorithm
Floyd_Warshall(int n, int w[1..n, 1..n]) {
    array d[1..n, 1..n], mid[1..n, 1..n]
    for i = 1 to n do {                         // initialize
        for j = 1 to n do {
            d[i,j] = w[i,j]
            mid[i,j] = null
        }
    }
    for k = 1 to n do                           // use intermediates {1..k}
        for i = 1 to n do                       // ...from i
            for j = 1 to n do                   // ...to j
                if (d[i,k] + d[k,j] < d[i,j]) {
                    d[i,j] = d[i,k] + d[k,j]    // new shorter path length
                    mid[i,j] = k                // new path is through k
                }
    return d                                    // matrix of distances
}
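For concreteness, here is a sketch of the same algorithm in Python (an illustration, not part of the original notes). It uses 0-based vertex indices and `math.inf` for missing edges, and the hypothetical helper `get_path` fills in the path-extraction exercise mentioned above using the `mid` pointers.

```python
from math import inf

def floyd_warshall(w):
    """All-pairs shortest paths. w is an n x n list of lists with
    w[i][i] == 0 and w[i][j] == inf when there is no edge (i, j)."""
    n = len(w)
    d = [row[:] for row in w]              # d[i][j] = best distance so far
    mid = [[None] * n for _ in range(n)]   # intermediate vertex on best i-j path
    for k in range(n):                     # allow intermediates {0, ..., k}
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]   # shorter path through k
                    mid[i][j] = k
    return d, mid

def get_path(mid, i, j):
    """Recover the vertex sequence of a shortest path from i to j."""
    k = mid[i][j]
    if k is None:
        return [i, j]                      # direct edge: no intermediates
    return get_path(mid, i, k)[:-1] + get_path(mid, k, j)
```

For example, on the digraph with edges 0→1 (5), 1→2 (3), 0→2 (10), 2→3 (1), the shortest 0-to-3 distance comes out as 9 along the path [0, 1, 2, 3].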

An example of the algorithm’s execution is shown in Fig. 42.

Clearly the algorithm's running time is Θ(n^3). The space used by the algorithm is Θ(n^2). Observe that we deleted all references to the superscript (k) in the code. It is left as an exercise to show that this does not affect the correctness of the algorithm. (Hint: The danger is that values may be overwritten and then used later in the same phase. The entries that might be overwritten and then reused occur in row k and column k. It can be shown that the overwritten values are equal to their original values.)

Lecture 16: NP-Completeness: Languages and NP

Read: Chapt 34 in CLRS, up through section 34.2.

Complexity Theory: At this point of the semester we have been building up your “bag of tricks” for solving algorith-

mic problems. Hopefully when presented with a problem you now have a little better idea of how to go about

solving the problem: What sort of design paradigm should be used (divide-and-conquer, DFS, greedy, dynamic programming)? What sort of data structures might be relevant (trees, heaps, graphs), and what representations would be best (adjacency list, adjacency matrices)? What is the running time of your algorithm?

All of this is fine if it helps you discover an acceptably efficient algorithm to solve your problem. The situation that often arises in practice is that you have tried every trick in the book, and nothing seems to work. Although


[Fig. 42 runs the algorithm on a 4-vertex digraph with edges 1→2 (8), 1→4 (1), 2→3 (1), 3→1 (4), 4→2 (2), 4→3 (9), showing the distance matrices d^(0) through d^(4) (? = infinity). The initial and final matrices are:

    d^(0) = [ 0 8 ? 1 ; ? 0 1 ? ; 4 ? 0 ? ; ? 2 9 0 ]
    d^(4) = [ 0 3 4 1 ; 5 0 1 6 ; 4 7 0 5 ; 7 2 3 0 ]]

Fig. 42: Floyd-Warshall Example. Newly updated entries are circled.


your algorithm can solve small problems reasonably efficiently (e.g. n ≤ 20), on the really large inputs that you want to solve (e.g. n = 1,000 or n = 10,000) your algorithm never terminates. When you analyze its running time, you realize that it is running in exponential time, perhaps n^√n, or 2^n, or 2^(2^n), or n!, or worse!

Near the end of the 60's there was great success in finding efficient solutions to many combinatorial problems, but there was also a growing list of problems for which there seemed to be no known efficient algorithmic solutions. People began to wonder whether there was some unknown paradigm that would lead to a solution to these problems, or perhaps some proof that these problems are inherently hard to solve and no algorithmic solutions exist that run in better than exponential time.

Around the same time a remarkable discovery was made. Many of these hard problems were interrelated in the sense that if you could solve any one of them in polynomial time, then you could solve all of them in polynomial time. This discovery gave rise to the notion of NP-completeness, and created possibly the biggest open problem in computer science: is P = NP? We will be studying this concept over the next few lectures.

This area is a radical departure from what we have been doing because the emphasis will change. The goal is no longer to prove that a problem can be solved efficiently by presenting an algorithm for it. Instead we will be trying to show that a problem cannot be solved efficiently. The question is: how do we do this?

Laying down the rules: We need some way to separate the class of efﬁciently solvable problems from inefﬁciently

solvable problems. We will do this by considering problems that can be solved in polynomial time.

When designing algorithms it has been possible for us to be rather informal with various concepts. We have made use of the fact that an intelligent programmer could fill in any missing details. However, the task of proving that something cannot be done efficiently must be handled much more carefully, since we do not want to leave any “loopholes” that would allow someone to subvert the rules in an unreasonable way and claim to have an efficient solution when one does not really exist.

We have measured the running time of algorithms using worst-case complexity, as a function of n, the size of the input. We have defined input size variously for different problems, but the bottom line is the number of bits (or bytes) that it takes to represent the input using any reasonably efficient encoding. By a reasonably efficient encoding, we assume that there is not some significantly shorter way of providing the same information. For example, you could write numbers in unary notation (11111111_1 = 1000_2 = 8) rather than binary, but that would be unacceptably inefficient. You could describe a graph in some highly inefficient way, such as by listing all of its cycles, but this would also be unacceptable. We will assume that numbers are expressed in binary or some higher base and graphs are expressed using either adjacency matrices or adjacency lists.

We will usually restrict numeric inputs to be integers (as opposed to calling them “reals”), so that it is clear that

arithmetic can be performed efﬁciently. We have also assumed that operations on numbers can be performed in

constant time. From now on, we should be more careful and assume that arithmetic operations require at least

as much time as there are bits of precision in the numbers being stored.

Up until now all the algorithms we have seen have had the property that their worst-case running times are bounded above by some polynomial in the input size, n. A polynomial time algorithm is any algorithm that runs in time O(n^k) where k is some constant that is independent of n. A problem is said to be solvable in polynomial time if there is a polynomial time algorithm that solves it.

Some functions that do not “look” like polynomials (such as O(n log n)) are bounded above by polynomials (such as O(n^2)). Some functions that do “look” like polynomials are not. For example, suppose you have an algorithm which inputs a graph of size n and an integer k and runs in O(n^k) time. Is this a polynomial? No, because k is an input to the problem, so the user is allowed to choose k = n, implying that the running time would be O(n^n), which is not a polynomial in n. The important thing is that the exponent must be a constant independent of n.

Of course, saying that all polynomial time algorithms are “efficient” is untrue. An algorithm whose running time is O(n^1000) is certainly pretty inefficient. Nonetheless, if an algorithm runs in worse than polynomial time (e.g. 2^n), then it is certainly not efficient, except for very small values of n.


Decision Problems: Many of the problems that we have discussed involve optimization of one form or another: ﬁnd

the shortest path, ﬁnd the minimum cost spanning tree, ﬁnd the minimum weight triangulation. For rather tech-

nical reasons, most NP-complete problems that we will discuss will be phrased as decision problems. A problem

is called a decision problem if its output is a simple “yes” or “no” (or you may think of this as True/False, 0/1,

accept/reject).

We will phrase many optimization problems in terms of decision problems. For example, the minimum spanning tree decision problem might be: Given a weighted graph G and an integer k, does G have a spanning tree whose weight is at most k?

This may seem like a less interesting formulation of the problem. It does not ask for the weight of the minimum

spanning tree, and it does not even ask for the edges of the spanning tree that achieves this weight. However,

our job will be to show that certain problems cannot be solved efﬁciently. If we show that the simple decision

problem cannot be solved efﬁciently, then the more general optimization problem certainly cannot be solved

efﬁciently either.

Language Recognition Problems: Observe that a decision problem can also be thought of as a language recognition

problem. We could define a language L

    L = {(G, k) | G has a MST of weight at most k}.

This set consists of pairs, the ﬁrst element is a graph (e.g. the adjacency matrix encoded as a string) followed

by an integer k encoded as a binary number. At ﬁrst it may seem strange expressing a graph as a string, but

obviously anything that is represented in a computer is broken down somehow into a string of bits.

When presented with an input string (G, k), the algorithm would answer “yes” if (G, k) ∈ L implying that G

has a spanning tree of weight at most k, and “no” otherwise. In the ﬁrst case we say that the algorithm “accepts”

the input and otherwise it “rejects” the input.

Given any language, we can ask the question of how hard it is to determine whether a given string is in the

language. For example, in the case of the MST language L, we can determine membership easily in polynomial

time. We just store the graph internally, run Kruskal’s algorithm, and see whether the ﬁnal optimal weight is at

most k. If so we accept, and otherwise we reject.
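The membership test just described can be coded directly. The sketch below (an illustration, not from the notes) runs Kruskal's algorithm with a union-find forest and compares the optimal weight against k; the function name and the (weight, u, v) edge format are assumptions made for this example.

```python
def in_mst_language(n, edges, k):
    """Decide whether (G, k) is in L, i.e. whether the graph on vertices
    0..n-1 has a spanning tree of weight at most k, via Kruskal."""
    parent = list(range(n))                 # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    total, used = 0, 0
    for w, u, v in sorted(edges):           # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components: keep it
            parent[ru] = rv
            total += w
            used += 1
    return used == n - 1 and total <= k     # reject if G is not even connected
```

Since Kruskal's algorithm runs in polynomial time, this decides membership in L in polynomial time.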

Definition: Define P to be the set of all languages for which membership can be tested in polynomial time. (Intuitively, this corresponds to the set of all decision problems that can be solved in polynomial time.)

Note that languages are sets of strings, and P is a set of languages. P is defined in terms of how hard it is computationally to recognize membership in the language. A set of languages that is defined in terms of how hard it is to determine membership is called a complexity class. Since we can compute minimum spanning trees in polynomial time, we have L ∈ P.

Here is a harder one, though.

    M = {(G, k) | G has a simple path of length at least k}.

Given a graph G and integer k, how would you “recognize” whether it is in the language M? You might try searching the graph for simple paths, until finding one of length at least k. If you find one then you can accept and terminate. However, if not then you may spend a lot of time searching (especially if k is large, like n − 1, and no such path exists). So is M ∈ P? No one knows the answer. In fact, we will show that M is NP-complete.
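Although finding such a path may be hard, checking one that is handed to us is easy. The sketch below (an illustration, not from the notes) verifies a claimed witness for M in polynomial time, assuming “length” counts edges, so a path of length k has k + 1 vertices.

```python
def verify_long_path(n, edges, k, path):
    """Check that `path` is a simple path in the graph on vertices
    0..n-1 with at least k edges."""
    if len(path) != len(set(path)) or len(path) < k + 1:
        return False                        # repeats a vertex, or too short
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset((path[i], path[i + 1])) in edge_set
               for i in range(len(path) - 1))
```

This “easy to check, hard to find” behavior is exactly the notion of verification developed below.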

In what follows, we will be introducing a number of classes. We will jump back and forth between the terms “language” and “decision problem”, but for our purposes they mean the same thing. Before giving all the technical definitions, let us say a bit about what the general classes look like at an intuitive level.

P: This is the set of all decision problems that can be solved in polynomial time. We will generally refer to

these problems as being “easy” or “efﬁciently solvable”. (Although this may be an exaggeration in many

cases.)


NP: This is the set of all decision problems that can be verified in polynomial time. (We will give a definition of this below.) This class contains P as a subset. Thus, it contains a number of easy problems, but it also contains a number of problems that are believed to be very hard to solve. The term NP does not mean “not polynomial”. Originally the term meant “nondeterministic polynomial time”. But it is a bit more intuitive to explain the concept from the perspective of verification.

NP-hard: In spite of its name, to say that a problem is NP-hard does not mean that it is hard to solve. Rather it means that if we could solve this problem in polynomial time, then we could solve all NP problems in polynomial time. Note that for a problem to be NP-hard, it does not have to be in the class NP. Since it is widely believed that all NP problems are not solvable in polynomial time, it is widely believed that no NP-hard problem is solvable in polynomial time.

NP-complete: A problem is NP-complete if (1) it is in NP, and (2) it is NP-hard. That is, NPC = NP ∩ NP-hard.

The ﬁgure below illustrates one way that the sets P, NP, NP-hard, and NP-complete (NPC) might look. We

say might because we do not know whether all of these complexity classes are distinct or whether they are all

solvable in polynomial time. There are some problems in the ﬁgure that we will not discuss. One is Graph

Isomorphism, which asks whether two graphs are identical up to a renaming of their vertices. It is known that

this problem is in NP, but it is not known to be in P. The other is QBF, which stands for Quantiﬁed Boolean

Formulas. In this problem you are given a boolean formula with quantiﬁers (∃ and ∀) and you want to know

whether the formula is true or false. This problem is beyond the scope of this course, but may be discussed in

an advanced course on complexity theory.

[Fig. 43 depicts nested regions for P, NP, NPC, and NP-hard, ranging from “easy” to “harder”: MST and strong connectivity lie in P; Graph Isomorphism (marked with a “?”) lies in NP but outside P and NPC; Satisfiability, Knapsack, and Hamiltonian Cycle lie in NPC; QBF and the no-Hamiltonian-cycle problem lie in the NP-hard region outside NP. The diagram is labeled “One way that things ‘might’ be.”]

Fig. 43: The (possible) structure of P, NP, and related complexity classes.

Polynomial Time Verification and Certificates: Before talking about the class of NP-complete problems, it is important to introduce the notion of a verification algorithm. Many language recognition problems may be very hard to solve, but they have the property that it is easy to verify whether a string is in the language.

Consider the following problem, called the Hamiltonian cycle problem. Given an undirected graph G, does G have a cycle that visits every vertex exactly once? (There is a similar problem on directed graphs, and there is also a version which asks whether there is a path that visits all vertices.) We can describe this problem as a language recognition problem, where the language is

    HC = {(G) | G has a Hamiltonian cycle},

where (G) denotes an encoding of a graph G as a string. The Hamiltonian cycle problem seems to be much

harder, and there is no known polynomial time algorithm for this problem. For example, the ﬁgure below shows

two graphs, one which is Hamiltonian and one which is not.

However, suppose that a graph did have a Hamiltonian cycle. Then it would be a very easy matter for someone to convince us of this. They would simply say “the cycle is ⟨v_3, v_7, v_1, . . . , v_13⟩”. We could then inspect the


Fig. 44: Hamiltonian cycle. (Left graph: nonhamiltonian; right graph: Hamiltonian.)

graph, and check that this is indeed a legal cycle and that it visits all the vertices of the graph exactly once. Thus,

even though we know of no efﬁcient way to solve the Hamiltonian cycle problem, there is a very efﬁcient way

to verify that a given graph is in HC. The given cycle is called a certiﬁcate. This is some piece of information

which allows us to verify that a given string is in a language.
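The check just described is easy to code. The sketch below (an illustration, not from the notes) takes the certificate as a claimed ordering of the vertices and verifies in polynomial time that it is a legal Hamiltonian cycle.

```python
def verify_hamiltonian_cycle(n, edges, cycle):
    """Return True iff `cycle` (the certificate) lists each vertex
    0..n-1 exactly once and consecutive vertices, including the
    wrap-around pair, are joined by edges of the graph."""
    if sorted(cycle) != list(range(n)):     # every vertex exactly once
        return False
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset((cycle[i], cycle[(i + 1) % n])) in edge_set
               for i in range(n))
```

The verifier only inspects the n claimed edges of the cycle, so it runs in polynomial time even though finding the cycle may be hard.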

More formally, given a language L and given x ∈ L, a verification algorithm is an algorithm which, given x and a string y called the certificate, can verify that x is in the language L using this certificate as help. If x is not in L then there is nothing to verify.

Note that not all languages have the property that they are easy to verify. For example, consider the following

languages:

    UHC = {(G) | G has a unique Hamiltonian cycle}
    HC̄ = {(G) | G has no Hamiltonian cycle}   (the complement of HC).

Suppose that a graph G is in the language UHC. What information would someone give us that would allow

us to verify that G is indeed in the language? They could give us an example of the unique Hamiltonian cycle,

and we could verify that it is a Hamiltonian cycle, but what sort of certiﬁcate could they give us to convince us

that this is the only one? They could give another cycle that is NOT Hamiltonian, but this does not mean that

there is not another cycle somewhere that is Hamiltonian. They could try to list every other cycle of length n,

but this would not be at all efﬁcient, since there are n! possible cycles in general. Thus, it is hard to imagine

that someone could give us some information that would allow us to efﬁciently convince ourselves that a given

graph is in the language.

The class NP:

Deﬁnition: Deﬁne NP to be the set of all languages that can be veriﬁed by a polynomial time algorithm.

Why is the set called “NP” rather than “VP”? The original term NP stood for “nondeterministic polynomial time”. This referred to a program running on a nondeterministic computer that can make guesses. Basically, such a computer could nondeterministically guess the value of the certificate, and then verify that the string is in the language in polynomial time. We have avoided introducing nondeterminism here. It would be covered in a course on complexity theory or formal language theory.

Like P, NP is a set of languages based on some complexity measure (the complexity of veriﬁcation). Observe

that P ⊆ NP. In other words, if we can solve a problem in polynomial time, then we can certainly verify

membership in polynomial time. (More formally, we do not even need to see a certiﬁcate to solve the problem,

we can solve it in polynomial time anyway).

However it is not known whether P = NP. It seems unreasonable to think that this should be so. In other words, just being able to verify that you have a correct solution does not help you much in finding the actual solution. Most experts believe that P ≠ NP, but no one has a proof of this. Next time we will define the notions of NP-hard and NP-complete.


Lecture 17: NP-Completeness: Reductions

Read: Chapt 34, through Section 34.4.

Summary: Last time we introduced a number of concepts, on the way to deﬁning NP-completeness. In particular,

the following concepts are important.

Decision Problems: are problems for which the answer is either yes or no. NP-complete problems are expressed as decision problems, and hence can be thought of as language recognition problems, assuming that the input has been encoded as a string. For example:

    HC = {G | G has a Hamiltonian cycle}
    MST = {(G, x) | G has a MST of cost at most x}.

P: is the class of all decision problems which can be solved in polynomial time, O(n^k) for some constant k. For example MST ∈ P but HC is not known (and suspected not) to be in P.

Certiﬁcate: is a piece of evidence that allows us to verify in polynomial time that a string is in a given language.

For example, suppose that the language is the set of Hamiltonian graphs. To convince someone that a graph

is in this language, we could supply the certiﬁcate consisting of a sequence of vertices along the cycle. It is

easy to access the adjacency matrix to determine that this is a legitimate cycle in G. Therefore HC ∈ NP.

NP: is deﬁned to be the class of all languages that can be veriﬁed in polynomial time. Note that since all

languages in P can be solved in polynomial time, they can certainly be veriﬁed in polynomial time, so we

have P ⊆ NP. However, NP also seems to have some pretty hard problems to solve, such as HC.

Reductions: The class of NP-complete problems consists of a set of decision problems (languages) (a subset of the

class NP) that no one knows how to solve efﬁciently, but if there were a polynomial time solution for even a

single NP-complete problem, then every problem in NP would be solvable in polynomial time. To establish this,

we need to introduce the concept of a reduction.

Before discussing reductions, let us just consider the following question. Suppose that there are two problems, H and U. We know (or strongly believe at least) that H is hard, that is, it cannot be solved in polynomial time. On the other hand, the complexity of U is unknown, but we suspect that it too is hard. We want to prove that U cannot be solved in polynomial time. How would we do this? We want to show that

    (H ∉ P) ⇒ (U ∉ P).

To do this, we could prove the contrapositive,

(U ∈ P) ⇒ (H ∈ P).

In other words, to show that U is not solvable in polynomial time, we will suppose that there is an algorithm that

solves U in polynomial time, and then derive a contradiction by showing that H can be solved in polynomial

time.

How do we do this? Suppose that we have a subroutine that can solve any instance of problem U in polynomial time. Then all we need to do is to show that we can use this subroutine to solve problem H in polynomial time. Thus we have “reduced” problem H to problem U. It is important to note here that this supposed subroutine is really a fantasy. We know (or strongly believe) that H cannot be solved in polynomial time, thus we are essentially proving that the subroutine cannot exist, implying that U cannot be solved in polynomial time. (Be sure that you understand this; it is the basis behind all reductions.)

Example: 3-Colorability and Clique Cover: Let us consider an example to make this clearer. The following prob-

lem is well-known to be NP-complete, and hence it is strongly believed that the problem cannot be solved in

polynomial time.


3-coloring (3Col): Given a graph G, can each of its vertices be labeled with one of 3 different “colors”, such that no two adjacent vertices have the same label?
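Note that checking a proposed coloring is easy, which is what places 3Col in the class NP from the previous lecture. A sketch (an illustration, not from the notes), with the coloring given as a list indexed by vertex:

```python
def verify_3coloring(n, edges, color):
    """Certificate check for 3Col: color[v] must be one of three colors
    for each vertex v, and no edge may join two like-colored endpoints."""
    if any(color[v] not in (0, 1, 2) for v in range(n)):
        return False
    return all(color[u] != color[v] for u, v in edges)
```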

Coloring arises in various partitioning problems, where there is a constraint that two objects cannot be assigned

to the same set of the partition. The term “coloring” comes from the original application which was in map

drawing. Two countries that share a common border should be colored with different colors. It is well known

that planar graphs can be colored with 4 colors, and there exists a polynomial time algorithm for this. But

determining whether 3 colors are possible (even for planar graphs) seems to be hard and there is no known

polynomial time algorithm. In the ﬁgure below we give two graphs, one is 3-colorable and one is not.

Fig. 45: 3-coloring and Clique Cover. (From left to right: a 3-colorable graph, a graph that is not 3-colorable, and a clique cover of size 3.)

The 3Col problem will play the role of the hard problem H, which we strongly suspect to not be solvable in polynomial time. For our unknown problem U, consider the following problem. Given a graph G = (V, E), we say that a subset of vertices V′ ⊆ V forms a clique if for every pair of vertices u, v ∈ V′, (u, v) ∈ E. That is, the subgraph induced by V′ is a complete graph.

Clique Cover (CCov): Given a graph G = (V, E) and an integer k, can we partition the vertex set into k subsets of vertices V_1, V_2, . . . , V_k, such that ∪_i V_i = V, and each V_i is a clique of G?

The clique cover problem arises in applications of clustering. We put an edge between two nodes if they are

similar enough to be clustered in the same group. We want to know whether it is possible to cluster all the

vertices into k groups.

Suppose that you want to solve the CCov problem, but after a while of fruitless effort, you still cannot find a polynomial time algorithm for it. How can you prove that CCov is likely to not have a polynomial time solution? You know that 3Col is NP-complete, and hence experts believe that 3Col ∉ P. You feel that there is some connection between the CCov problem and the 3Col problem. Thus, you want to show that

    (3Col ∉ P) ⇒ (CCov ∉ P),

which you will show by proving the contrapositive

    (CCov ∈ P) ⇒ (3Col ∈ P).

To do this, you assume that you have access to a subroutine CCov(G, k). Given a graph G and an integer k, this

subroutine returns true if G has a clique cover of size k and false otherwise, and furthermore, this subroutine

runs in polynomial time. How can we use this “alleged” subroutine to solve the well-known hard 3Col problem?

We want to write a polynomial time subroutine for 3Col, and this subroutine is allowed to call the subroutine

CCov(G, k) for any graph G and any integer k.

Both problems involve partitioning the vertices up into groups. The only difference here is that in one problem

the number of cliques is speciﬁed as part of the input and in the other the number of color classes is ﬁxed at 3.

In the clique cover problem, for two vertices to be in the same group they must be adjacent to each other. In the

3-coloring problem, for two vertices to be in the same color group, they must not be adjacent. In some sense,

the problems are almost the same, but the requirement adjacent/non-adjacent is exactly reversed.


We claim that we can reduce the 3-coloring problem to the clique cover problem as follows. Given a graph G for which we want to determine its 3-colorability, output the pair (Ḡ, 3), where Ḡ denotes the complement of G. (That is, Ḡ is a graph on the same vertices, but (u, v) is an edge of Ḡ if and only if it is not an edge of G.) We can then feed the pair (Ḡ, 3) into a subroutine for clique cover. This is illustrated in the figure below.

Fig. 46: Clique covers in the complement. (A 3-colorable graph G, whose complement Ḡ is coverable by 3 cliques, and a graph H that is not 3-colorable, whose complement H̄ is not coverable.)

Claim: A graph G is 3-colorable if and only if its complement Ḡ has a clique cover of size 3. In other words, G ∈ 3Col iff (Ḡ, 3) ∈ CCov.

Proof: (⇒) If G is 3-colorable, then let V_1, V_2, V_3 be the three color classes. We claim that this is a clique cover of size 3 for Ḡ, since if u and v are distinct vertices in V_i, then {u, v} ∉ E(G) (since adjacent vertices cannot have the same color), which implies that {u, v} ∈ E(Ḡ). Thus every pair of distinct vertices in V_i is adjacent in Ḡ.

(⇐) Suppose Ḡ has a clique cover of size 3, denoted V_1, V_2, V_3. For i ∈ {1, 2, 3} give the vertices of V_i color i. We assert that this is a legal coloring for G, since if distinct vertices u and v are both in V_i, then {u, v} ∈ E(Ḡ) (since they are in a common clique), implying that {u, v} ∉ E(G). Hence, two vertices with the same color are not adjacent.

Polynomial-time reduction: We now take this intuition of reducing one problem to another through the use of a

subroutine call, and place it on more formal footing. Notice that in the example above, we converted an instance

of the 3-coloring problem (G) into an equivalent instance of the Clique Cover problem (G, 3).

Definition: We say that a language (i.e. decision problem) L_1 is polynomial-time reducible to language L_2 (written L_1 ≤_P L_2) if there is a polynomial time computable function f, such that for all x, x ∈ L_1 if and only if f(x) ∈ L_2.

In the previous example we showed that

    3Col ≤_P CCov.

In particular we have f(G) = (Ḡ, 3). Note that it is easy to complement a graph in O(n^2) (i.e. polynomial) time (e.g. flip 0's and 1's in the adjacency matrix). Thus f is computable in polynomial time.
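As a sketch (an illustration, not from the notes), the reduction f can be coded directly; the function name is an assumption, vertices are 0..n−1, and edges are unordered pairs:

```python
def reduce_3col_to_ccov(n, edges):
    """The mapping f(G) = (complement of G, 3): flip the adjacency
    relation on every pair of distinct vertices, in O(n^2) time."""
    edge_set = {frozenset(e) for e in edges}
    complement = [(u, v) for u in range(n) for v in range(u + 1, n)
                  if frozenset((u, v)) not in edge_set]
    return complement, 3
```

A 3Col solver would then answer by passing the pair that f returns to the (hypothetical) polynomial-time CCov subroutine.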

Intuitively, saying that L_1 ≤_P L_2 means that “if L_2 is solvable in polynomial time, then so is L_1.” This is because a polynomial time subroutine for L_2 could be applied to f(x) to determine whether f(x) ∈ L_2, or equivalently whether x ∈ L_1. Thus, in the sense of polynomial time computability, L_1 is “no harder” than L_2.

The way in which this is used in NP-completeness is exactly the converse. We usually have strong evidence that L_1 is not solvable in polynomial time, and hence the reduction is effectively equivalent to saying “since L_1 is not likely to be solvable in polynomial time, then L_2 is also not likely to be solvable in polynomial time.” Thus, this is how polynomial time reductions can be used to show that problems are as hard to solve as known difficult problems.

Lemma: If L_1 ≤_P L_2 and L_2 ∈ P then L_1 ∈ P.


Lemma: If L_1 ≤_P L_2 and L_1 ∉ P then L_2 ∉ P.

One important fact about reducibility is that it is transitive. In other words

Lemma: If L_1 ≤_P L_2 and L_2 ≤_P L_3 then L_1 ≤_P L_3.

The reason is that if two functions f(x) and g(x) are computable in polynomial time, then their composition

f(g(x)) is computable in polynomial time as well. It should be noted that our text uses the term “reduction”

where most other books use the term “transformation”. The distinction is subtle, but people taking other courses

in complexity theory should be aware of this.

NP-completeness: The set of NP-complete problems consists of all problems in the complexity class NP for which it is known that if any one is solvable in polynomial time, then they all are, and conversely, if any one is not solvable in polynomial time, then none are. This is made mathematically formal using the notion of polynomial time reductions.

Definition: A language L is NP-hard if:
L′ ≤P L for all L′ ∈ NP.
(Note that L does not need to be in NP.)
Definition: A language L is NP-complete if:
(1) L ∈ NP and
(2) L is NP-hard.

An alternative (and usually easier) way to show that a problem is NP-complete is to use transitivity.
Lemma: L is NP-complete if
(1) L ∈ NP and
(2) L′ ≤P L for some known NP-complete language L′.
The reason is that all L′′ ∈ NP are reducible to L′ (since L′ is NP-complete and hence NP-hard) and hence by transitivity L′′ is reducible to L, implying that L is NP-hard.

This gives us a way to prove that problems are NP-complete, once we know that one problem is NP-complete.

Unfortunately, it appears to be almost impossible to prove that one problem is NP-complete, because the deﬁni-

tion says that we have to be able to reduce every problem in NP to this problem. There are inﬁnitely many such

problems, so how can we ever hope to do this? We will talk about this next time with Cook’s theorem. Cook

showed that there is one problem called SAT (short for boolean satisﬁability) that is NP-complete. To prove a

second problem is NP-complete, all we need to do is to show that our problem is in NP (and hence it is reducible

to SAT), and then to show that we can reduce SAT (or generally some known NPC problem) to our problem. It

follows that our problem is equivalent to SAT (with respect to solvability in polynomial time). This is illustrated

in the ﬁgure below.

Lecture 18: Cook’s Theorem, 3SAT, and Independent Set

Read: Chapter 34, through 34.5. The reduction given here is similar, but not the same as the reduction given in the

text.

Recap: So far we introduced the deﬁnitions of NP-completeness. Recall that we mentioned the following topics:

P: is the set of decision problems (or languages) that are solvable in polynomial time.

NP: is the set of decision problems (or languages) that can be verified in polynomial time.


Fig. 47: Structure of NPC and reductions (proving a problem is in NP, proving a problem is NP-hard, and the resulting structure).

Polynomial reduction: L1 ≤P L2 means that there is a polynomial time computable function f such that x ∈ L1 if and only if f(x) ∈ L2. A more intuitive way to think about this is that if we had a subroutine to solve L2 in polynomial time, then we could use it to solve L1 in polynomial time.
Polynomial reductions are transitive, that is, if L1 ≤P L2 and L2 ≤P L3, then L1 ≤P L3.

NP-Hard: L is NP-hard if for all L′ ∈ NP, L′ ≤P L. Thus, if we could solve L in polynomial time, we could solve all NP problems in polynomial time.
NP-Complete: L is NP-complete if (1) L ∈ NP and (2) L is NP-hard.

The importance of NP-complete problems should now be clear. If any NP-complete problem (and generally any NP-hard problem) is solvable in polynomial time, then every NP-complete problem (and in fact every problem in NP) is also solvable in polynomial time. Conversely, if we can prove that any NP-complete problem (and generally any problem in NP) cannot be solved in polynomial time, then every NP-complete problem (and generally every NP-hard problem) cannot be solved in polynomial time. Thus all NP-complete problems are equivalent to one another (in that they are either all solvable in polynomial time, or none are).

An alternative way to show that a problem is NP-complete is to use transitivity of ≤P.
Lemma: L is NP-complete if
(1) L ∈ NP and
(2) L′ ≤P L for some NP-complete language L′.
Note: The known NP-complete problem L′ is reduced to the candidate NP-complete problem L. Keep this order in mind.

Cook’s Theorem: Unfortunately, to use this lemma, we need to have at least one NP-complete problem to start the

ball rolling. Stephen Cook showed that such a problem existed. Cook’s theorem is quite complicated to prove,

but we’ll try to give a brief intuitive argument as to why such a problem might exist.

For a problem to be in NP, it must have an efficient verification procedure. Thus virtually all NP problems can be stated in the form, “does there exist X such that P(X)?”, where X is some structure (e.g. a set, a path, a

partition, an assignment, etc.) and P(X) is some property that X must satisfy (e.g. the set of objects must ﬁll

the knapsack, or the path must visit every vertex, or you may use at most k colors and no two adjacent vertices

can have the same color). In showing that such a problem is in NP, the certiﬁcate consists of giving X, and the

veriﬁcation involves testing that P(X) holds.

In general, any set X can be described by choosing a set of objects, which in turn can be described as choosing

the values of some boolean variables. Similarly, the property P(X) that you need to satisfy, can be described as

a boolean formula. Stephen Cook was looking for the most general possible property he could, since this should

represent the hardest problem in NP to solve. He reasoned that computers (which represent the most general


type of computational devices known) could be described entirely in terms of boolean circuits, and hence in

terms of boolean formulas. If any problem were hard to solve, it would be one in which X is an assignment of

boolean values (true/false, 0/1) and P(X) could be any boolean formula. This suggests the following problem,

called the boolean satisﬁability problem.

SAT: Given a boolean formula, is there some way to assign truth values (0/1, true/false) to the variables of the formula, so that the formula evaluates to true?
A boolean formula is a logical formula which consists of variables xi, and the logical operations ¬x meaning the negation of x, boolean-or (x ∨ y) and boolean-and (x ∧ y). Given a boolean formula, we say that it is satisfiable if there is a way to assign truth values (0 or 1) to the variables such that the final result is 1. (As opposed to the case where no matter how you assign truth values the result is always 0.)

For example,
(x1 ∧ (x2 ∨ ¬x3)) ∧ ((¬x2 ∧ x3) ∨ x1)
is satisfiable by the assignment x1 = 1, x2 = 0, x3 = 0. On the other hand,
(x1 ∨ (x2 ∧ x3)) ∧ (¬x1 ∨ (¬x2 ∧ ¬x3)) ∧ (x2 ∨ x3) ∧ (¬x2 ∨ ¬x3)
is not satisfiable. (Observe that the last two clauses imply that one of x2 and x3 must be true and the other must be false. This implies that neither of the subclauses involving x2 and x3 in the first two clauses can be satisfied, but x1 cannot be set to satisfy them either.)
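Claims like these can always be checked by brute force over all 2^n truth assignments (which of course takes exponential time in general, which is precisely why SAT is believed to be hard). A sketch, with the two example formulas (as we read them) written as Python predicates:

```python
from itertools import product

def satisfiable(formula, num_vars):
    """Brute-force SAT test: try all 2^n truth assignments."""
    return any(formula(*bits) for bits in product([False, True], repeat=num_vars))

# The two example formulas, using Python's 'not', 'and', 'or':
f1 = lambda x1, x2, x3: (x1 and (x2 or not x3)) and ((not x2 and x3) or x1)
f2 = lambda x1, x2, x3: ((x1 or (x2 and x3)) and (not x1 or (not x2 and not x3))
                         and (x2 or x3) and (not x2 or not x3))
```

Here satisfiable(f1, 3) is True (the assignment x1 = 1, x2 = 0, x3 = 0 works), while satisfiable(f2, 3) is False.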

Cook's Theorem: SAT is NP-complete.
We will not prove this theorem. The proof would take about a full lecture (not counting the week or so of background on Turing machines). In fact, it turns out that an even more restricted version of the satisfiability problem is NP-complete. A literal is a variable or its negation, x or ¬x. A formula is in 3-conjunctive normal form (3-CNF) if it is the boolean-and of clauses, where each clause is the boolean-or of exactly 3 literals. For example,
(x1 ∨ x2 ∨ ¬x3) ∧ (x1 ∨ ¬x3 ∨ x4) ∧ (x2 ∨ ¬x3 ∨ x4)

is in 3-CNF form. 3SAT is the problem of determining whether a formula in 3-CNF is satisﬁable. It turns out that

it is possible to modify the proof of Cook’s theorem to show that the more restricted 3SAT is also NP-complete.

As an aside, note that if we replace the 3 in 3SAT with a 2, then everything changes. If a boolean formula is

given in 2SAT, then it is possible to determine its satisﬁability in polynomial time. Thus, even a seemingly small

change can be the difference between an efﬁcient algorithm and none.
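The standard polynomial time algorithm for 2SAT (not covered in these notes) builds an "implication graph" with one vertex per literal, where each clause (a ∨ b) contributes the edges ¬a → b and ¬b → a, and answers yes if and only if no variable lies in the same strongly connected component as its own negation. A sketch:

```python
def two_sat(n, clauses):
    """Decide a 2-CNF formula over variables 1..n.  Each clause is a pair
    of literals (+i for x_i, -i for its negation).  Satisfiable iff no
    variable shares a strongly connected component of the implication
    graph with its negation (Kosaraju's two-pass algorithm)."""
    N = 2 * n
    node = lambda lit: 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)
    adj = [[] for _ in range(N)]
    radj = [[] for _ in range(N)]
    for a, b in clauses:              # (a or b): not-a -> b and not-b -> a
        for src, dst in ((node(a) ^ 1, node(b)), (node(b) ^ 1, node(a))):
            adj[src].append(dst)
            radj[dst].append(src)
    seen, order = [False] * N, []
    for s in range(N):                # pass 1: record DFS finish order
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, iter(adj[w])))
                    break
            else:
                order.append(v)
                stack.pop()
    comp, c = [-1] * N, 0
    for s in reversed(order):         # pass 2: sweep the reverse graph
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            for w in radj[stack.pop()]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n))
```

For instance, the clauses (x1 ∨ x2), (¬x1 ∨ x2), (x1 ∨ ¬x2) are satisfiable, while (x1) ∧ (¬x1), written as the degenerate pairs (x1 ∨ x1) and (¬x1 ∨ ¬x1), is not.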

NP-completeness proofs: Now that we know that 3SAT is NP-complete, we can use this fact to prove that other

problems are NP-complete. We will start with the independent set problem.

Independent Set (IS): Given an undirected graph G = (V, E) and an integer k, does G contain a subset V′ of k vertices such that no two vertices in V′ are adjacent to one another?
For example, the graph shown in the figure below has an independent set (shown with shaded nodes) of size 4. The independent set problem arises when there is some sort of selection problem, but there are mutual restrictions: pairs that cannot both be selected. (For example, you want to invite as many of your friends as possible to your party, but many pairs do not get along, represented by edges between them, and you do not want to invite two enemies.)

Note that if a graph has an independent set of size k, then it has an independent set of all smaller sizes. So the

corresponding optimization problem would be to ﬁnd an independent set of the largest size in a graph. Often

the vertices have weights, so we might talk about the problem of computing the independent set with the largest

total weight. However, since we want to show that the problem is hard to solve, we will consider the simplest

version of the problem.


Fig. 48: Independent Set.

Claim: IS is NP-complete.
The proof involves two parts. First, we need to show that IS ∈ NP. The certificate consists of the k vertices of V′. We simply verify that for each pair of vertices u, v ∈ V′, there is no edge between them. Clearly this can be done in polynomial time, by an inspection of the adjacency matrix.

Fig. 49: Reduction of 3SAT to IS (a polynomial time computable function f maps a boolean formula F in 3-CNF to a graph and integer (G, k), preserving yes/no answers).

Secondly, we need to establish that IS is NP-hard, which can be done by showing that some known NP-complete problem (3SAT) is polynomial-time reducible to IS, that is, 3SAT ≤P IS. Let F be a boolean formula in 3-CNF form (the boolean-and of clauses, each of which is the boolean-or of 3 literals). We wish to find a polynomial time computable function f that maps F into an input for the IS problem, a graph G and integer k. That is, f(F) = (G, k), such that F is satisfiable if and only if G has an independent set of size k. This will mean that if we can solve the independent set problem for G and k in polynomial time, then we would be able to solve 3SAT in polynomial time.
An important aspect of reductions is that we do not attempt to solve the satisfiability problem. (Remember: It is NP-complete, and there is not likely to be any polynomial time solution.) So the function f must operate without knowledge of whether F is satisfiable. The idea is to translate the similar elements of the satisfiability problem to corresponding elements of the independent set problem.

What is to be selected?
3SAT: Which variables are assigned to be true. Equivalently, which literals are assigned true.
IS: Which vertices are to be placed in V′.
Requirements:
3SAT: Each clause must contain at least one literal whose value is true.
IS: V′ must contain at least k vertices.
Restrictions:
3SAT: If xi is assigned true, then ¬xi must be false, and vice versa.


IS: If u is selected to be in V′, and v is a neighbor of u, then v cannot be in V′.

We want a function f, which given any 3-CNF boolean formula F, converts it into a pair (G, k) such that the above elements are translated properly. Our strategy will be to create one vertex for each literal that appears within each clause. (Thus, if there are m clauses in F, there will be 3m vertices in G.) The vertices are grouped into clause clusters, one for each clause. Selecting a true literal from some clause corresponds to selecting a vertex to add to V′. We set k to the number of clauses. This forces the independent set to pick one vertex from each clause cluster, and thus one literal from each clause is true. In order to keep the IS subroutine from selecting two literals from some clause (and hence none from some other), we will connect all the vertices in each clause cluster to each other. To keep the IS subroutine from selecting both a literal and its complement, we will put an edge between each literal and its complement. This enforces the condition that if a literal is put in the IS (set to true) then its complement literal cannot also be true. A formal description of the reduction is given below. The input is a boolean formula F in 3-CNF, and the output is a graph G and integer k.

3SAT to IS Reduction

k ← number of clauses in F;

for each clause C in F

create a clause cluster of 3 vertices from the literals of C;

for each clause cluster (x1, x2, x3)

create an edge (xi, xj) between all pairs of vertices in the cluster;

for each vertex xi

create edges between xi and all its complement vertices xi;

return (G, k);

Given any reasonable encoding of F, it is an easy programming exercise to create G (say, as an adjacency matrix)

in polynomial time. We claim that F is satisﬁable if and only if G has an independent set of size k.
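The pseudocode above translates almost line for line into a program. The following sketch encodes a literal xi as +i and ¬xi as -i (an encoding of our own choosing, not from the text), and includes a brute-force independent set checker that is useful only for sanity-checking tiny instances:

```python
from itertools import combinations

def threesat_to_is(clauses):
    """Map a 3-CNF formula to an IS instance (edges, n, k).  A clause is a
    tuple of 3 literals, +i for x_i and -i for its negation; vertex 3j + p
    stands for literal p of clause j."""
    k = len(clauses)
    lits = [lit for clause in clauses for lit in clause]
    n = 3 * k
    edges = set()
    for j in range(k):                       # clause clusters: triangles
        base = 3 * j
        edges |= {(base, base + 1), (base, base + 2), (base + 1, base + 2)}
    for a in range(n):                       # literal/complement edges
        for b in range(a + 1, n):
            if lits[a] == -lits[b]:
                edges.add((a, b))
    return sorted(edges), n, k

def has_independent_set(edges, n, k):
    """Exponential-time check; for sanity tests on tiny graphs only."""
    return any(all(u not in s or v not in s for u, v in edges)
               for s in map(set, combinations(range(n), k)))
```

For instance, on the unsatisfiable formula x1 ∧ ¬x1, written as the two degenerate clauses (x1 ∨ x1 ∨ x1) and (¬x1 ∨ ¬x1 ∨ ¬x1), the resulting 6-vertex graph has no independent set of size 2.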

Example: Suppose that we are given the 3-CNF formula:
(x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ (¬x1 ∨ ¬x2 ∨ ¬x3) ∧ (x1 ∨ x2 ∨ ¬x3).
The reduction produces the graph shown in the following figure and sets k = 4.

Fig. 50: 3SAT to IS reduction for (x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ (¬x1 ∨ ¬x2 ∨ ¬x3) ∧ (x1 ∨ x2 ∨ ¬x3), showing the reduction and its correctness for the assignment x1 = x2 = 1, x3 = 0.

In our example, the formula is satisfied by the assignment x1 = 1, x2 = 1, and x3 = 0. Note that the literal x1 satisfies the first and last clauses, x2 satisfies the second, and ¬x3 satisfies the third. Observe that by selecting the corresponding vertices from the clusters, we get an independent set of size k = 4.


Correctness Proof: We claim that F is satisfiable if and only if G has an independent set of size k. If F is satisfiable, then each of the k clauses of F must have at least one true literal. Let V′ denote the corresponding vertices, one from each clause cluster. Because we take only one vertex from each cluster, there are no cluster edges between them, and because we cannot set a variable and its complement to both be true, there can be no edge of the form (xi, ¬xi) between the vertices of V′. Thus, V′ is an independent set of size k.
Conversely, suppose that G has an independent set V′ of size k. First observe that we must select a vertex from each clause cluster, because there are k clusters, and we cannot take two vertices from the same cluster (because they are all interconnected). Consider the assignment in which we set all of these literals to 1. This assignment is logically consistent, because the independent set cannot contain two vertices labeled xi and ¬xi (there is an edge between every literal and its complement). Finally, the transformation clearly runs in polynomial time. This completes the NP-completeness proof.

Observe that our reduction did not attempt to solve the IS problem nor to solve 3SAT. Also observe that the reduction had no knowledge of the solution to either problem. (We did not assume that the formula was satisfiable, nor did we assume we knew which variables to set to 1.) This is because computing these things would require exponential time (by the best known algorithms). Instead the reduction simply translated the input from one problem into an equivalent input to the other problem, while preserving the critical elements of each problem.

Lecture 19: Clique, Vertex Cover, and Dominating Set

Read: Chapt 34 (up through 34.5). The dominating set proof is not given in our text.

Recap: Last time we gave a reduction from 3SAT (satisﬁability of boolean formulas in 3-CNF form) to IS (indepen-

dent set in graphs). Today we give a few more examples of reductions. Recall that to show that a problem is

NP-complete we need to show (1) that the problem is in NP (i.e. we can verify when an input is in the language),

and (2) that the problem is NP-hard, by showing that some known NP-complete problem can be reduced to this

problem (there is a polynomial time function that transforms an input for one problem into an equivalent input

for the other problem).

Some Easy Reductions: We consider some closely related NP-complete problems next.

Clique (CLIQUE): The clique problem is: given an undirected graph G = (V, E) and an integer k, does G have a subset V′ of k vertices such that for each distinct pair u, v ∈ V′, {u, v} ∈ E? In other words, does G have a k-vertex subset whose induced subgraph is complete?
Vertex Cover (VC): A vertex cover in an undirected graph G = (V, E) is a subset of vertices V′ ⊆ V such that every edge in G has at least one endpoint in V′. The vertex cover problem (VC) is: given an undirected graph G and an integer k, does G have a vertex cover of size k?
Dominating Set (DS): A dominating set in a graph G = (V, E) is a subset of vertices V′ such that every vertex in the graph is either in V′ or is adjacent to some vertex in V′. The dominating set problem (DS) is: given a graph G = (V, E) and an integer k, does G have a dominating set of size k?

Don’t confuse the clique (CLIQUE) problem with the clique-cover (CC) problem that we discussed in an earlier

lecture. The clique problem seeks to ﬁnd a single clique of size k, and the clique-cover problem seeks to partition

the vertices into k groups, each of which is a clique.

We have discussed the fact that cliques are of interest in applications dealing with clustering. The vertex cover problem arises in various servicing applications. For example, you have a computer network and a program that checks the integrity of the communication links. To save the space of installing the program on every computer in the network, it suffices to install it on all the computers forming a vertex cover. From these nodes all the

links can be tested. Dominating set is useful in facility location problems. For example, suppose we want to

select where to place a set of ﬁre stations such that every house in the city is within 2 minutes of the nearest


ﬁre station. We create a graph in which two locations are adjacent if they are within 2 minutes of each other. A

minimum sized dominating set will be a minimum set of locations such that every other location is reachable

within 2 minutes from one of these sites.

The CLIQUE problem is obviously closely related to the independent set problem (IS): given a graph G, does it have a k-vertex subset that is completely disconnected? It is not quite as clear that the vertex cover problem is related. However, the following lemma makes this connection clear as well.

Fig. 51: Clique, Independent Set, and Vertex Cover (V′ is a clique of size k in Ḡ iff V′ is an independent set of size k in G iff V − V′ is a vertex cover of size n − k in G).

Lemma: Given an undirected graph G = (V, E) with n vertices and a subset V′ ⊆ V of size k, the following are equivalent:
(i) V′ is a clique of size k for the complement, Ḡ.
(ii) V′ is an independent set of size k for G.
(iii) V − V′ is a vertex cover of size n − k for G.

Proof:
(i) ⇒ (ii): If V′ is a clique for Ḡ, then for each u, v ∈ V′, {u, v} is an edge of Ḡ, implying that {u, v} is not an edge of G, implying that V′ is an independent set for G.
(ii) ⇒ (iii): If V′ is an independent set for G, then for each u, v ∈ V′, {u, v} is not an edge of G, implying that every edge in G is incident to a vertex in V − V′, implying that V − V′ is a VC for G.
(iii) ⇒ (i): If V − V′ is a vertex cover for G, then for any u, v ∈ V′ there is no edge {u, v} in G, implying that there is an edge {u, v} in Ḡ, implying that V′ is a clique in Ḡ.
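The three-way equivalence in the lemma can be confirmed exhaustively on any small graph; the following sketch checks it for every subset of an arbitrary 5-vertex example (the graph is ours, not from the text):

```python
from itertools import combinations

def is_clique(edges, s):
    es = set(edges)
    return all(pair in es for pair in combinations(sorted(s), 2))

def is_independent(edges, s):
    return all(u not in s or v not in s for u, v in edges)

def is_vertex_cover(edges, s):
    return all(u in s or v in s for u, v in edges)

n = 5
g = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)]     # an arbitrary graph
g_bar = [(u, v) for u in range(n) for v in range(u + 1, n)
         if (u, v) not in set(g)]                        # its complement

for k in range(n + 1):                                   # every subset size
    for s in map(set, combinations(range(n), k)):
        assert (is_clique(g_bar, s)
                == is_independent(g, s)
                == is_vertex_cover(g, set(range(n)) - s))
```

The loop raises no assertion error, exactly as the lemma predicts.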

Thus, if we had an algorithm for solving any one of these problems, we could easily translate it into an algorithm

for the others. In particular, we have the following.

Theorem: CLIQUE is NP-complete.

CLIQUE ∈ NP: The certificate consists of the k vertices in the clique. Given such a certificate we can easily verify in polynomial time that all pairs of vertices in the set are adjacent.
IS ≤P CLIQUE: We want to show that given an instance (G, k) of the IS problem, we can produce an equivalent instance of the CLIQUE problem in polynomial time. The reduction function f inputs G and k, and outputs the pair (Ḡ, k). Clearly this can be done in polynomial time. By the above lemma, this instance is equivalent.

Theorem: VC is NP-complete.
VC ∈ NP: The certificate consists of the k vertices in the vertex cover. Given such a certificate we can easily verify in polynomial time that every edge is incident to one of these vertices.
IS ≤P VC: We want to show that given an instance (G, k) of the IS problem, we can produce an equivalent instance of the VC problem in polynomial time. The reduction function f inputs G and k, computes the number of vertices, n, and then outputs (G, n − k). Clearly this can be done in polynomial time. By the lemma above, these instances are equivalent.

Note: In each of the above reductions, the reduction function did not know whether G has an independent set or not. It must run in polynomial time, and IS is an NP-complete problem, so it does not have time to determine whether G has an independent set or which vertices are in the set.

Dominating Set: As with vertex cover, dominating set is an example of a graph covering problem. Here the condition is a little different: each vertex must be adjacent to some member of the dominating set, as opposed to each edge being incident to some member of the vertex cover. Obviously, if G is connected and has a vertex cover of size k, then it has a dominating set of size k (the same set of vertices), but the converse is not necessarily true. However, the similarity suggests that if VC is NP-complete, then DS is likely to be NP-complete as well. The main result of this section is just this.

Theorem: DS is NP-complete.
As usual the proof has two parts. First we show that DS ∈ NP. The certificate just consists of the subset V′ in the dominating set. In polynomial time we can determine whether every vertex is in V′ or is adjacent to a vertex in V′.

Reducing Vertex Cover to Dominating Set: Next we show that an existing NP-complete problem is reducible to dominating set. We choose vertex cover and show that VC ≤P DS. We want a polynomial time function which, given an instance (G, k) of the vertex cover problem, produces an instance (G′, k′) of the dominating set problem, such that G has a vertex cover of size k if and only if G′ has a dominating set of size k′.

How do we translate between these problems? The key difference is the condition. In VC: “every edge is incident to a vertex in V′.” In DS: “every vertex is either in V′ or is adjacent to a vertex in V′.” Thus the translation must somehow map the notion of “incident” to “adjacent.” Because incidence is a property of edges, and adjacency is a property of vertices, this suggests that the reduction function maps edges of G into vertices in G′, such that an incident edge in G is mapped to an adjacent vertex in G′.

This suggests the following idea (which does not quite work). We will insert a vertex into the middle of each edge of the graph. In other words, for each edge {u, v}, we will create a new special vertex, called w_uv, and replace the edge {u, v} with the two edges {u, w_uv} and {v, w_uv}. The fact that u was incident to edge {u, v} has now been replaced with the fact that u is adjacent to the corresponding vertex w_uv. We still need to dominate the neighbor v. To do this, we will leave the edge {u, v} in the graph as well. Let G′ be the resulting graph.
This is still not quite correct though. Define an isolated vertex to be one that is incident to no edges. If u is isolated it can only be dominated if it is included in the dominating set. Since it is not incident to any edges, it does not need to be in the vertex cover. Let V_I denote the set of isolated vertices in G, and let I denote the number of isolated vertices. The number of vertices to request for the dominating set will be k′ = k + I.

Now we can give the complete reduction. Given the pair (G, k) for the VC problem, we create a graph G′ as follows. Initially G′ = G. For each edge {u, v} in G we create a new vertex w_uv in G′ and add edges {u, w_uv} and {v, w_uv} in G′. Let I denote the number of isolated vertices and set k′ = k + I. Output (G′, k′). This reduction is illustrated in the following figure. Note that every step can be performed in polynomial time.
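This reduction, together with brute-force checkers for the two problems (usable only on tiny instances), might be sketched as follows; all names here are illustrative:

```python
from itertools import combinations

def vc_to_ds(n, edges):
    """VC -> DS reduction: for each edge {u, v} add a new vertex w_uv with
    edges {u, w_uv} and {v, w_uv}, keeping the original edge.  Returns the
    new vertex count, the new edge list, and the number I of isolated
    vertices (so that k' = k + I)."""
    new_edges = list(edges)
    next_v = n
    for (u, v) in edges:
        w = next_v                        # the special vertex w_uv
        next_v += 1
        new_edges += [(u, w), (v, w)]
    touched = {x for e in edges for x in e}
    I = n - len(touched)                  # isolated vertices of G
    return next_v, new_edges, I

def has_vertex_cover(n, edges, k):
    return any(all(u in s or v in s for u, v in edges)
               for s in map(set, combinations(range(n), k)))

def has_dominating_set(n, edges, k):
    nbrs = {v: {v} for v in range(n)}     # closed neighborhoods
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    return any(all(s & nbrs[v] for v in range(n))
               for s in map(set, combinations(range(n), k)))
```

On the path 0–1–2 with an extra isolated vertex 3, the graph has a vertex cover of size k = 1, and the transformed graph has a dominating set of size k′ = 1 + 1 = 2 but none of size 1.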

Correctness of the Reduction: To establish the correctness of the reduction, we need to show that G has a vertex cover of size k if and only if G′ has a dominating set of size k′. First we argue that if V′ is a vertex cover for G, then V′′ = V′ ∪ V_I is a dominating set for G′. Observe that
|V′′| = |V′ ∪ V_I| ≤ k + I = k′.
Note that |V′ ∪ V_I| might be of size less than k + I, if there are any isolated vertices in V′. If so, we can add any vertices we like to make the size equal to k′.


Fig. 52: Dominating set reduction (f maps (G, k) with k = 3 to (G′, k′) with k′ = 3 + 1 = 4).

To see that V′′ is a dominating set, first observe that all the isolated vertices are in V′′ and so they are dominated. Second, each of the special vertices w_uv in G′ corresponds to an edge {u, v} in G, implying that either u or v is in the vertex cover V′. Thus w_uv is dominated by the same vertex in V′′. Finally, each of the nonisolated original vertices v is incident to at least one edge in G, and hence either it is in V′ or else all of its neighbors are in V′. In either case, v is either in V′′ or adjacent to a vertex in V′′. This is shown in the top part of the following figure.

Fig. 53: Correctness of the VC to DS reduction (where k = 3 and I = 1); top: a vertex cover for G and a dominating set for G′; bottom: a vertex cover for G using original vertices and a dominating set for G′.

Conversely, we claim that if G′ has a dominating set V′′ of size k′ = k + I, then G has a vertex cover V′ of size k. Note that all I isolated vertices of G′ must be in the dominating set. Let V′ = V′′ − V_I be the remaining k vertices. We might try to claim something like: V′ is a vertex cover for G. But this will not necessarily work, because V′ may have vertices that are not part of the original graph G.
However, we claim that we never need to use any of the newly created special vertices in V′. In particular, if some vertex w_uv ∈ V′, then modify V′ by replacing w_uv with u. (We could have just as easily replaced it with v.) Observe that the vertex w_uv is adjacent only to u and v, so it dominates itself and these two other vertices. By using u instead, we still dominate u, v, and w_uv (because u has edges going to v and w_uv). Thus by replacing w_uv with u we dominate the same vertices (and potentially more). Let V′ denote the resulting set after this modification. (This is shown in the lower middle part of the figure.)
We claim that V′ is a vertex cover for G. If, to the contrary, there were an edge {u, v} of G that was not covered (neither u nor v was in V′) then the special vertex w_uv would not be adjacent to any vertex of V′ in G′, contradicting the hypothesis that V′ was a dominating set for G′.


Lecture 20: Subset Sum

Read: Section 34.5.5 in CLRS.
Subset Sum: The Subset Sum problem (SS) is the following. Given a finite set of positive integers S = {w1, w2, . . . , wn} and a target value t, we want to know whether there exists a subset S′ ⊆ S that sums exactly to t.

This problem is a simplified version of the 0-1 Knapsack problem, presented as a decision problem. Recall that in the 0-1 Knapsack problem, we are given a collection of objects, each with an associated weight wi and associated value vi. We are given a knapsack of capacity W. The objective is to take as many objects as can fit in the knapsack's capacity so as to maximize the value. (In the fractional knapsack we could take a portion of an object. In the 0-1 Knapsack we either take an object entirely or leave it.) In the simplest version, suppose that the value is the same as the weight, vi = wi. (This would occur for example if all the objects were made of the same material, say, gold.) Then, the best we could hope to achieve would be to fill the knapsack entirely. By setting t = W, we see that the subset sum problem is equivalent to this simplified version of the 0-1 Knapsack problem. It follows that if we can show that this simpler version is NP-complete, then certainly the more general 0-1 Knapsack problem (stated as a decision problem) is also NP-complete.
Consider the following example.
S = {3, 6, 9, 12, 15, 23, 32} and t = 33.
The subset S′ = {6, 12, 15} sums to t = 33, so the answer in this case is yes. If t = 34 the answer would be no.

Dynamic Programming Solution: There is a dynamic programming algorithm which solves the Subset Sum problem in O(nt) time. (See the footnote below for the formulation.)
The quantity nt is a polynomial function of n. This would seem to imply that the Subset Sum problem is in P. But there is an important catch. Recall that in all NP-complete problems we assume (1) running time is measured as a function of input size (number of bits) and (2) inputs must be encoded in a reasonably succinct manner. Let us assume that the numbers wi and t are all b-bit numbers represented in base 2, using the fewest number of bits possible. Then the input size is O(nb). The value of t may be as large as 2^b. So the resulting algorithm has a running time of O(n·2^b). This is polynomial in n, but exponential in b. Thus, this running time is not polynomial as a function of the input size.
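The table-based formulation can be collapsed to a single boolean row that is updated once per element; a sketch:

```python
def subset_sum(weights, t):
    """Return True iff some subset of weights sums exactly to t.
    O(n*t) time: pseudo-polynomial, since t can be exponential in the
    number of bits used to write down the input."""
    reachable = [True] + [False] * t      # reachable[s]: some subset sums to s
    for w in weights:
        # sweep downward so that each element is used at most once
        for s in range(t, w - 1, -1):
            if reachable[s - w]:
                reachable[s] = True
    return reachable[t]
```

On the example above, subset_sum([3, 6, 9, 12, 15, 23, 32], 33) returns True, while changing the target to 34 yields False.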

Note that an important consequence of this observation is that the SS problem is not hard when the numbers

involved are small. If the numbers involved are of a ﬁxed number of bits (a constant independent of n), then

the problem is solvable in polynomial time. However, we will show that in the general case, this problem is

NP-complete.

SS is NP-complete: The proof that Subset Sum (SS) is NP-complete involves the usual two elements.
(i) SS ∈ NP.
(ii) Some known NP-complete problem is reducible to SS. In particular, we will show that Vertex Cover (VC) is reducible to SS, that is, VC ≤P SS.
To show that SS is in NP, we need to give a verification procedure. Given S and t, the certificate is just the indices of the numbers that form the subset S′. We can add two b-bit numbers together in O(b) time. So, in polynomial time we can compute the sum of elements in S′, and verify that this sum equals t.

For the remainder of the proof we show how to reduce vertex cover to subset sum. We want a polynomial time

computable function f that maps an instance of the vertex cover (a graph G and integer k) to an instance of the

subset sum problem (a set of integers S and target integer t) such that G has a vertex cover of size k if and only

if S has a subset summing to t. Thus, if subset sum were solvable in polynomial time, so would vertex cover.

2. We will leave this as an exercise, but the formulation is, for 0 ≤ i ≤ n and 0 ≤ t′ ≤ t, S[i, t′] = 1 if there is a subset of {w1, w2, . . . , wi} that sums to t′, and 0 otherwise. The ith row of this table can be computed in O(t) time, given the contents of the (i − 1)-st row.


How can we encode the notion of selecting a subset of vertices that covers all the edges as selecting a subset of numbers that sums to t? In the vertex cover problem we are selecting vertices, and in the subset sum problem we are selecting numbers, so it seems logical that the reduction should map vertices into numbers. The constraint that these vertices should cover all the edges must be mapped to the constraint that the sum of the numbers should equal the target value.

An Initial Approach: Here is an idea, which does not work, but gives a sense of how to proceed. Let E denote the number of edges in the graph. First number the edges of the graph from 1 through E. Then represent each vertex v_i as an E-element bit vector, where the j-th bit from the left is set to 1 if and only if the edge e_j is incident to vertex v_i. (Another way to think of this is that these bit vectors form the rows of an incidence matrix for the graph.) An example is shown below, in which k = 3.

Fig. 54: Encoding a graph as a collection of bit vectors.

Now, suppose we take any subset of vertices and form the logical-or of the corresponding bit vectors. If the subset is a vertex cover, then every edge will be covered by at least one of these vertices, and so the logical-or will be a bit vector of all 1's, 1111 . . . 1. Conversely, if the logical-or is a bit vector of 1's, then each edge has been covered by some vertex, implying that the vertices form a vertex cover. (Later we will consider how to encode the constraint that only k vertices are allowed in the cover.)

Fig. 55: The logical-or of a vertex cover equals 1111 . . . 1.

Since bit vectors can be thought of as just a way of representing numbers in binary, this is starting to feel more like the subset sum problem. The target would be the number whose bit vector is all 1's. There are a number of problems, however. First, logical-or is not the same as addition. For example, if both of the endpoints of some edge are in the vertex cover, then its value in the corresponding column would be 2, not 1. Second, we have no way of controlling how many vertices go into the vertex cover. (We could just take the logical-or of all the vertices, and then the logical-or would certainly be a bit vector of 1's.)


There are two ways in which addition differs significantly from logical-or. The first is the issue of carries. For example, 1101 ∨ 0011 = 1111, but in binary 1101 + 0011 = 10000. To fix this, we recognize that we do not have to use a binary (base-2) representation. In fact, we can assume any base system we want. Observe that each column of the incidence matrix has at most two 1's, because each edge is incident to at most two vertices. Thus, if we use any base that is at least 3, we will never generate a carry to the next position. In fact we will use base 4 (for reasons to be seen below). Note that the base of the number system is just for our own convenience of notation. Once the numbers have been formed, they will be converted into whatever form our machine assumes for its input representation, e.g. decimal or binary.

The second difference between logical-or and addition is that an edge may generally be covered either once or twice in the vertex cover. So, the final sum of these numbers will be a number whose digits are 1's and 2's, e.g. 1211 . . . 112. This does not provide us with a unique target value t; all we know is that no digit of our sum can be a zero. To fix this problem, we will create a set of E additional slack values. For 1 ≤ i ≤ E, the i-th slack value will consist of all 0's, except for a single 1-digit in the i-th position, e.g., 00000100000. Our target will be the number 2222 . . . 222 (all 2's). To see why this works, observe that from the numbers of our vertex cover, we will get a sum consisting of 1's and 2's. For each position where there is a 1, we can supplement this value by adding in the corresponding slack value. Thus we can boost any value consisting of 1's and 2's to all 2's. On the other hand, note that if there are any 0 values in the final sum, we will not have enough slack values to convert that position into a 2.

There is one last issue: we are allowed to place only k vertices in the vertex cover. We will handle this by adding an additional column. For each number arising from a vertex, we will put a 1 in this additional column. For each slack variable we will put a 0. In the target, we will require that this column sum to the value k, the size of the vertex cover. Thus, to form the desired sum, we must select exactly k of the vertex values. Note that since we only have a base-4 representation, there might be carries out of this last column (if k ≥ 4). But since this is the last column, it will not affect any of the other aspects of the construction.

The Final Reduction: Here is the final reduction, given the graph G = (V, E) and integer k for the vertex cover problem.

(1) Create a set of n vertex values, x_1, x_2, . . . , x_n, using base-4 notation. The value x_i is equal to a 1 followed by a sequence of E base-4 digits. The j-th digit is a 1 if edge e_j is incident to vertex v_i, and 0 otherwise.

(2) Create E slack values y_1, y_2, . . . , y_E, where y_i is a 0 followed by E base-4 digits. The i-th digit of y_i is 1 and all others are 0.

(3) Let t be the base-4 number whose first digit is k (this may actually span multiple base-4 digits), and whose remaining E digits are all 2.

(4) Convert the x_i's, the y_j's, and t into whatever base notation is used for the subset sum problem (e.g. base 10). Output the set S = {x_1, . . . , x_n, y_1, . . . , y_E} and t.

Observe that this can be done in polynomial time, in O(E^2) in fact. The construction is illustrated in Fig. 56.
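Assuming vertices are numbered 0 to n − 1 and edges are given as a list of pairs, the four steps can be sketched as follows (the function name and input format are our own):

```python
def vc_to_ss(n, edges, k):
    """Reduce Vertex Cover (n vertices, an edge list, bound k) to
    Subset Sum, following steps (1)-(4) above.  Digits are base 4:
    one leading digit that counts chosen vertices, then one digit
    per edge.  Returns (S, t) as ordinary integers."""
    E = len(edges)
    xs = []
    for v in range(n):
        x = 4 ** E  # the leading 1 of a vertex value
        for j, (a, b) in enumerate(edges):
            if v in (a, b):
                x += 4 ** (E - 1 - j)  # digit j: edge j touches v
        xs.append(x)
    # Slack values: a single 1 in edge digit i.
    ys = [4 ** (E - 1 - i) for i in range(E)]
    # Target: leading digit k, remaining E digits all 2.
    t = k * 4 ** E + sum(2 * 4 ** (E - 1 - j) for j in range(E))
    return xs + ys, t
```

On the triangle graph with edges (0,1), (0,2), (1,2) and k = 2, the cover {0, 1} leaves edges (0,2) and (1,2) covered once each; the two vertex values plus those two slack values sum exactly to t.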

Correctness: We claim that G has a vertex cover of size k if and only if S has a subset that sums to t. If G has a vertex cover V' of size k, then we take the vertex values x_i corresponding to the vertices of V', and for each edge that is covered only once in V', we take the corresponding slack variable. It follows from the comments made earlier that the lower-order E digits of the resulting sum will be of the form 222 . . . 2, and because there are k elements in V', the leftmost digit of the sum will be k. Thus, the resulting subset sums to t.

Conversely, if S has a subset S' that sums to t, then we assert that it must select exactly k values from among the vertex values, since the first digit must sum to k. We claim that these vertices V' form a vertex cover. In particular, no edge can be left uncovered by V', since (because there are no carries) the corresponding column would be 0 in the sum of vertex values. Thus, no matter what slack values we add, the resulting digit position could not be equal to 2, and so this cannot be a solution to the subset sum problem.


Fig. 56: Vertex cover to subset sum reduction.

Fig. 57: Correctness of the reduction.


It is worth noting again that in this reduction, we needed to have large numbers. For example, the target value t is at least as large as 4^E ≥ 4^n (where n is the number of vertices in G). In our dynamic programming solution W = t, so the DP algorithm would run in Ω(n4^n) time, which is not polynomial time.

Lecture 21: Approximation Algorithms: VC and TSP

Read: Chapt 35 (up through 35.2) in CLRS.

Coping with NP-completeness: With NP-completeness we have seen that there are many important optimization problems that are likely to be quite hard to solve exactly. Since these are important problems, we cannot simply give up at this point, since people do need solutions to these problems. How do we cope with NP-completeness?

Use brute-force search: Even on the fastest parallel computers this approach is viable only for the smallest instances of these problems.

Heuristics: A heuristic is a strategy for producing a valid solution, but there are no guarantees on how close it is to optimal. This is worthwhile if all else fails, or if lack of optimality is not really an issue.

General Search Methods: There are a number of very powerful techniques for solving general combinatorial optimization problems that have been developed in the areas of AI and operations research. These go under names such as branch-and-bound, A∗-search, simulated annealing, and genetic algorithms. The performance of these approaches varies considerably from problem to problem and instance to instance. But in some cases they can perform quite well.

Approximation Algorithms: This is an algorithm that runs in polynomial time (ideally), and produces a solution that is within a guaranteed factor of the optimum solution.

Performance Bounds: Most NP-complete problems have been stated as decision problems for theoretical reasons. However, underlying most of these problems is a natural optimization problem. For example, the TSP optimization problem is to find the simple cycle of minimum cost in a digraph, the VC optimization problem is to find the vertex cover of minimum size, and the clique optimization problem is to find the clique of maximum size. Note that sometimes we are minimizing and sometimes we are maximizing. An approximation algorithm is one that returns a legitimate answer, but not necessarily an optimal one.

How do we measure how good an approximation algorithm is? We define the ratio bound of an approximation algorithm as follows. Given an instance I of our problem, let C(I) be the cost of the solution produced by our approximation algorithm, and let C∗(I) be the cost of the optimal solution. We will assume that costs are strictly positive values. For a minimization problem we want C(I)/C∗(I) to be small, and for a maximization problem we want C∗(I)/C(I) to be small. For any input size n, we say that the approximation algorithm achieves ratio bound ρ(n) if for all I with |I| = n we have

    max ( C(I)/C∗(I), C∗(I)/C(I) ) ≤ ρ(n).

Observe that ρ(n) is always greater than or equal to 1, and it is equal to 1 if and only if the approximate solution is the true optimum solution.

Some NP-complete problems can be approximated arbitrarily closely. Such an algorithm is given both the input and a real value ε > 0, and returns an answer whose ratio bound is at most (1 + ε). Such an algorithm is called a polynomial time approximation scheme (or PTAS for short). The running time is a function of both n and ε. As ε approaches 0, the running time increases beyond polynomial time. For example, the running time might be O(n^(1/ε)). If the running time is polynomial in both n and 1/ε, then it is called a fully polynomial-time approximation scheme. For example, a running time like O((1/ε)^2 n^3) would be such an example, whereas O(n^(1/ε)) and O(2^(1/ε) n) are not.

Although NP-complete problems are equivalent with respect to whether they can be solved exactly in polynomial

time in the worst case, their approximability varies considerably.


• For some NP-complete problems, it is very unlikely that any approximation algorithm exists. For example, if the graph TSP problem had an approximation algorithm with a ratio bound of any value less than ∞, then P = NP.

• Many NP-complete problems can be approximated, but the ratio bound is a (slowly growing) function of n. For example, the set cover problem (a generalization of the vertex cover problem) can be approximated to within a factor of ln n. We will not discuss this algorithm, but it is covered in CLRS.

• Some NP-complete problems can be approximated to within a fixed constant factor. We will discuss two examples below.

• Some NP-complete problems have PTAS's. Examples include the subset sum problem (described in CLRS) and the Euclidean TSP problem.

In fact, much like NP-complete problems, there are collections of problems which are "believed" to be hard to approximate and are equivalent in the sense that if any one can be approximated in polynomial time then they all can be. This class is called Max-SNP complete. We will not discuss this further; suffice it to say that the topic of approximation algorithms would fill another course.

Vertex Cover: We begin by showing that there is an approximation algorithm for vertex cover with a ratio bound of 2; that is, this algorithm will be guaranteed to find a vertex cover whose size is at most twice that of the optimum. Recall that a vertex cover is a subset of vertices such that every edge in the graph is incident to at least one of these vertices. The vertex cover optimization problem is to find a vertex cover of minimum size.

How does one go about finding an approximation algorithm? The first approach is to try something that seems like a "reasonably" good strategy, a heuristic. It turns out that many simple heuristics, when not optimal, can often be proved to be close to optimal.

Here is a very simple algorithm that guarantees an approximation within a factor of 2 for the vertex cover problem. It is based on the following observation. Consider an arbitrary edge (u, v) in the graph. One of its two vertices must be in the cover, but we do not know which one. The idea of this heuristic is to simply put both vertices into the vertex cover. (You cannot get much stupider than this!) Then we remove all edges that are incident to u and v (since they are now all covered), and recurse on the remaining edges. For every one vertex that must be in the cover, we put two into our cover, so it is easy to see that the cover we generate is at most twice the size of the optimum cover. The algorithm is given in the figure below. Here is a more formal proof of its approximation bound.

Fig. 58: The 2-for-1 heuristic for vertex cover.

Claim: ApproxVC yields a factor-2 approximation for Vertex Cover.

Proof: Consider the set C output by ApproxVC. Let C∗ be the optimum VC. Let A be the set of edges selected by the line marked with "(*)" in the figure. Observe that the size of C is exactly 2|A| because we add two vertices for each such edge. However, note that in the optimum VC one of these two vertices must have been added to the VC, and thus the size of C∗ is at least |A|. Thus we have

    |C| / 2 = |A| ≤ |C∗|   ⇒   |C| / |C∗| ≤ 2.


2-for-1 Approximation for VC
ApproxVC {
    C = empty-set
    while (E is nonempty) do {
(*)     let (u,v) be any edge of E
        add both u and v to C
        remove from E all edges incident to either u or v
    }
    return C;
}
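A runnable rendering of this pseudocode in Python might look like the following (our own sketch; skipping edges that are already covered substitutes for deleting incident edges):

```python
def approx_vc(edges):
    """2-for-1 heuristic: both endpoints of each uncovered edge
    enter the cover."""
    C = set()
    for u, v in edges:
        if u not in C and v not in C:  # (*) an arbitrary uncovered edge
            C.add(u)
            C.add(v)
    return C
```

On the star graph with edges (0,1), (0,2), (0,3) it returns {0, 1}: twice the optimum cover {0}, matching the factor-2 bound exactly.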

This proof illustrates one of the main features of the analysis of any approximation algorithm. Namely, that we

need some way of ﬁnding a bound on the optimal solution. (For minimization problems we want a lower bound,

for maximization problems an upper bound.) The bound should be related to something that we can compute in

polynomial time. In this case, the bound is related to the set of edges A, which form a maximal independent set

of edges.

The Greedy Heuristic: It seems that there is a very simple way to improve the 2-for-1 heuristic, which simply selects any edge and adds both vertices to the cover. Why not concentrate instead on vertices of high degree, since a vertex of high degree covers the maximum number of edges? This is a greedy strategy. We saw in the minimum spanning tree and shortest path problems that greedy strategies were optimal.

Here is the greedy heuristic. Select the vertex with the maximum degree. Put this vertex in the cover. Then

delete all the edges that are incident to this vertex (since they have been covered). Repeat the algorithm on the

remaining graph, until no more edges remain. This algorithm is illustrated in the ﬁgure below.

Fig. 59: The greedy heuristic for vertex cover.

Greedy Approximation for VC
GreedyVC(G=(V,E)) {
    C = empty-set;
    while (E is nonempty) do {
        let u be the vertex of maximum degree in G;
        add u to C;
        remove from E all edges incident to u;
    }
    return C;
}
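A runnable rendering of GreedyVC (our own sketch; degrees are recomputed from the remaining edges in each round for simplicity):

```python
def greedy_vc(edges):
    """Greedy vertex cover: repeatedly pick a vertex of maximum degree
    among the edges that remain uncovered."""
    E = {frozenset(e) for e in edges}
    C = set()
    while E:
        deg = {}
        for e in E:
            for u in e:
                deg[u] = deg.get(u, 0) + 1
        u = max(deg, key=deg.get)  # vertex of maximum remaining degree
        C.add(u)
        E = {e for e in E if u not in e}
    return C
```

On the star graph above, greedy immediately picks the center and returns the optimum cover {0}.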

It is interesting to note that on the example shown in the figure, the greedy heuristic actually succeeds in finding the optimum vertex cover. Can we prove that the greedy heuristic always outperforms the stupid 2-for-1 heuristic? The surprising answer is an emphatic "no". In fact, it can be shown that the greedy heuristic does not even have a constant performance bound; it can perform arbitrarily poorly compared to the optimal algorithm. It can be shown that its ratio bound grows as Θ(log n), where n is the number of vertices. (We leave this as a moderately difficult exercise.) However, it should also be pointed out that the vertex cover constructed by the greedy heuristic is (for typical graphs) smaller than the one computed by the 2-for-1 heuristic, so it would probably be wise to run both algorithms and take the better of the two.

Traveling Salesman Problem: In the Traveling Salesperson Problem (TSP) we are given a complete undirected

graph with nonnegative edge weights, and we want to ﬁnd a cycle that visits all vertices and is of minimum

cost. Let c(u, v) denote the weight on edge (u, v). Given a set of edges A forming a tour we deﬁne c(A) to be

the sum of edge weights in A. Last time we mentioned that TSP (posed as a decision problem) is NP-complete.

For many of the applications of TSP, the problem satisfies something called the triangle inequality. Intuitively, this says that the direct path from u to w is never longer than an indirect path. More formally, for all u, v, w ∈ V,

    c(u, w) ≤ c(u, v) + c(v, w).

There are many examples of graphs that satisfy the triangle inequality. For example, given any weighted graph, if we define c(u, v) to be the shortest path length between u and v (computed, say, by the Floyd-Warshall algorithm), then it will satisfy the triangle inequality. Another example: if we are given a set of points in the plane and define a complete graph on these points, where c(u, v) is the Euclidean distance between these points, then the triangle inequality is also satisfied.
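The first construction can be checked directly: Floyd-Warshall produces a cost function that satisfies the triangle inequality. A minimal sketch (our own, not from the notes):

```python
def shortest_path_metric(W):
    """Floyd-Warshall.  W[u][v] is an edge weight (W[u][u] = 0,
    float('inf') for absent edges).  The returned matrix c of
    shortest-path lengths satisfies c[u][w] <= c[u][v] + c[v][w]."""
    n = len(W)
    c = [row[:] for row in W]
    for m in range(n):          # allow intermediate vertex m
        for i in range(n):
            for j in range(n):
                if c[i][m] + c[m][j] < c[i][j]:
                    c[i][j] = c[i][m] + c[m][j]
    return c
```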

When the underlying cost function satisﬁes the triangle inequality there is an approximation algorithm for TSP

with a ratio-bound of 2. (In fact, there is a slightly more complex version of this algorithm that has a ratio bound

of 1.5, but we will not discuss it.) Thus, although this algorithm does not produce an optimal tour, the tour that

it produces cannot be worse than twice the cost of the optimal tour.

The key insight is to observe that a TSP tour with one edge removed is a spanning tree. However, it is not necessarily a minimum spanning tree. Therefore, the cost of the minimum TSP tour is at least as large as the cost of the MST. We can compute MSTs efficiently using, for example, either Kruskal's or Prim's algorithm. If we can find some way to convert the MST into a TSP tour while increasing its cost by at most a constant factor, then we will have an approximation for TSP. We shall see that if the edge weights satisfy the triangle inequality, then this is possible.

Here is how the algorithm works. Given any free tree there is a tour of the tree called a twice around tour that

traverses the edges of the tree twice, once in each direction. The ﬁgure below shows an example of this.

Fig. 60: TSP Approximation.

This path is not simple because it revisits vertices, but we can make it simple by short-cutting, that is, we skip

over previously visited vertices. Notice that the ﬁnal order in which vertices are visited using the short-cuts is

exactly the same as a preorder traversal of the MST. (In fact, any subsequence of the twice-around tour which

visits each vertex exactly once will sufﬁce.) The triangle inequality assures us that the path length will not

increase when we take short-cuts.

TSP Approximation
ApproxTSP(G=(V,E)) {
    T = minimum spanning tree for G
    r = any vertex
    L = list of vertices visited by a preorder walk of T starting with r
    return L
}

Claim: ApproxTSP has a ratio bound of 2.

Proof: Let H denote the tour produced by this algorithm and let H∗ be the optimum tour. Let T be the minimum spanning tree. As we said before, we can remove any edge of H∗, resulting in a spanning tree, and

since T is the minimum cost spanning tree we have

    c(T) ≤ c(H∗).

Now observe that the twice-around tour of T has cost 2c(T), since every edge in T is traversed twice. By the triangle inequality, when we short-cut an edge of T to form H we do not increase the cost of the tour, and so we have

    c(H) ≤ 2c(T).

Combining these we have

    c(H) / 2 ≤ c(T) ≤ c(H∗)   ⇒   c(H) / c(H∗) ≤ 2.
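Putting the pieces together, here is a runnable sketch of the whole approximation (our own rendering; it uses Prim's algorithm for the MST and an explicit preorder walk for the shortcut tour):

```python
import heapq

def approx_tsp(c):
    """TSP 2-approximation for a complete graph whose cost matrix c
    satisfies the triangle inequality: build an MST with Prim's
    algorithm, then shortcut its twice-around tour via a preorder walk."""
    n = len(c)
    children = {u: [] for u in range(n)}  # MST as a rooted tree
    in_tree = [False] * n
    pq = [(0, 0, 0)]                      # (edge cost, vertex, parent)
    while pq:
        _, u, p = heapq.heappop(pq)
        if in_tree[u]:
            continue
        in_tree[u] = True
        if u != p:
            children[p].append(u)
        for v in range(n):
            if not in_tree[v]:
                heapq.heappush(pq, (c[u][v], v, u))
    tour, stack = [], [0]                 # preorder walk from the root
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour
```

For four points on a line with distances |i − j|, the MST is the path 0-1-2-3 and the preorder walk yields the tour 0, 1, 2, 3, whose cost (closing the cycle) is within twice the optimum.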

Lecture 22: The k-Center Approximation

Read: Today’s material is not covered in CLR.

Facility Location: Imagine that Blockbuster Video wants to open 50 stores in some city. The company asks you to determine the best locations for these stores. The condition is that you are to minimize the maximum distance that any resident of the city must drive in order to arrive at the nearest store.

If we model the road network of the city as an undirected graph whose edge weights are the distances between

intersections, then this is an instance of the k-center problem. In the k-center problem we are given an undirected

graph G = (V, E) with nonnegative edge weights, and we are given an integer k. The problem is to compute

a subset of k vertices C ⊆ V , called centers, such that the maximum distance between any vertex in V and its

nearest center in C is minimized. (The optimization problem seeks to minimize the maximum distance and the

decision problem just asks whether there exists a set of centers that are within a given distance.)

More formally, let G = (V, E) denote the graph, and let w(u, v) denote the weight of edge (u, v). (We have w(u, v) = w(v, u), because G is undirected.) We assume that all edge weights are nonnegative. For each pair of vertices u, v ∈ V, let d(u, v) denote the distance between u and v, that is, the length of the shortest path from u to v. (Note that the shortest path distance satisfies the triangle inequality. This will be used in our proof.)

Consider a subset C ⊆ V of vertices, the centers. For each vertex v ∈ V we can associate it with its nearest center in C. (This is the nearest Blockbuster store to your house.) For each center c_i ∈ C we define its neighborhood to be the subset of vertices for which c_i is the closest center. (These are the houses that are closest to this center. See Fig. 61.) More formally, define

    V(c_i) = { v ∈ V | d(v, c_i) ≤ d(v, c_j), for j ≠ i }.

Let us assume for simplicity that there are no ties for the distances to the closest center (or that any such ties have been broken arbitrarily). Then V(c_1), V(c_2), . . . , V(c_k) forms a partition of the vertex set of G. The bottleneck distance associated with each center c_i is the distance to its farthest vertex in V(c_i), that is,

    D(c_i) = max_{v ∈ V(c_i)} d(v, c_i).


Fig. 61: The k-center problem with optimum centers c_i and neighborhood sets V(c_i).

Finally, we define the overall bottleneck distance to be

    D(C) = max_{c_i ∈ C} D(c_i).

This is the maximum distance of any vertex from its nearest center. This distance is critical because it represents the customer that must travel farthest to get to the nearest facility, the bottleneck vertex. Given this notation, we can now formally define the problem.

k-center problem: Given a weighted undirected graph G = (V, E) and an integer k ≤ |V|, find a subset C ⊆ V of size k such that D(C) is minimized.

The decision-problem formulation of the k-center problem is NP-complete (reduction from dominating set). A brute-force solution to this problem would involve enumerating all k-element subsets of V and computing D(C) for each one. However, letting n = |V|, the number of possible subsets is C(n, k) = Θ(n^k). If k is a function of n (which is reasonable), then this is an exponential number of subsets. Given that the problem is NP-complete, it is highly unlikely that a significantly more efficient exact algorithm exists in the worst case. We will show that there does exist an efficient approximation algorithm for the problem.

Greedy Approximation Algorithm: Our approximation algorithm is based on a simple greedy algorithm that produces a bottleneck distance D(C) that is not more than twice the optimum bottleneck distance. We begin by letting the first center c_1 be any vertex in the graph (the lower left vertex, say, in the figure below). Compute the distances between this vertex and all the other vertices in the graph (Fig. 62(b)). Consider the vertex that is farthest from this center (the upper right vertex at distance 23 in the figure). This is the bottleneck vertex for {c_1}. We would like to select the next center so as to reduce this distance. So let us just make it the next center, called c_2. Then again we compute the distances from each vertex in the graph to the closer of c_1 and c_2. (See Fig. 62(c), where dashed lines indicate which vertices are closer to which center.) Again we consider the bottleneck vertex for the current centers {c_1, c_2}. We place the next center at this vertex (see Fig. 62(d)). Again we compute the distances from each vertex to its nearest center. Repeat this until all k centers have been selected. In Fig. 62(d), the final three greedy centers are shaded, and the final bottleneck distance is 11.

Although the greedy approach has a certain intuitive appeal (because it attempts to ﬁnd the vertex that gives the

bottleneck distance, and then puts a center right on this vertex), it is not optimal. In the example shown in the

ﬁgure, the optimum solution (shown on the right) has a bottleneck cost of 9, which beats the 11 that the greedy

algorithm gave.

Fig. 62: Greedy approximation to k-center.

Here is a summary of the algorithm. For each vertex u, let d[u] denote the distance to the nearest center.

Greedy Approximation for k-center
KCenterApprox(G, k) {
    C = empty_set
    for each u in V do                // initialize distances
        d[u] = INFINITY
    for i = 1 to k do {               // main loop
        Find the vertex u such that d[u] is maximum
        Add u to C                    // u is the current bottleneck vertex
        // update distances:
        Compute the distance from each vertex v to its closest
            vertex in C, denoted d[v]
    }
    return C                          // final centers
}

We know from Dijkstra's algorithm how to compute the shortest path from a single source to all other vertices in the graph. One way to carry out the distance computation step above would be to invoke Dijkstra's algorithm i times. But there is an easier way: we can modify Dijkstra's algorithm to operate as a multiple-source algorithm. In particular, the initialization of Dijkstra's single-source algorithm sets d[s] = 0 and pred[s] = null; in the modified multiple-source version, we do this for all the vertices of C. The final greedy algorithm involves running Dijkstra's algorithm k times (once for each pass through the for-loop). Recall that the running time of Dijkstra's algorithm is O((V + E) log V). Under the reasonable assumption that E ≥ V, this is O(E log V). Thus, the overall running time is O(kE log V).
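When all pairwise distances are already available as a matrix (so no Dijkstra computation is needed), the greedy loop can be sketched as follows (our own rendering):

```python
def k_center_greedy(d, k):
    """Greedy k-center on a full distance matrix d (d[u][v] = d(u, v)).
    Returns the chosen centers and the bottleneck distance D(C)."""
    n = len(d)
    centers = [0]         # the first center is an arbitrary vertex
    dist = list(d[0])     # dist[u] = distance from u to nearest center
    for _ in range(k - 1):
        u = max(range(n), key=lambda v: dist[v])  # bottleneck vertex
        centers.append(u)
        dist = [min(dist[v], d[u][v]) for v in range(n)]
    return centers, max(dist)
```

On a 5-vertex path with unit edges and k = 2, greedy (starting at the left end) picks the two endpoints for a bottleneck of 2, while the optimum (vertices 1 and 3) achieves 1: the factor-2 bound is tight here.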

Approximation Bound: How bad could greedy be? We will argue that it has a ratio bound of 2. To see that we can

get a factor of 2, consider a set of n + 1 vertices arranged in a linear graph, in which all edges are of weight 1.

The greedy algorithm might pick any initial vertex that it likes. Suppose it picks the leftmost vertex. Then the

maximum (bottleneck) distance is the distance to the rightmost vertex which is n. If we had instead chosen the

vertex in the middle, then the maximum distance would only be n/2, which is better by a factor of 2.

Fig. 63: Worst-case for greedy.

We want to show that this approximation algorithm always produces a final distance D(C) that is within a factor of 2 of the distance of the optimal solution.

Let O = {o_1, o_2, . . . , o_k} denote the centers of the optimal solution (shown as black dots in Fig. 64; the lines show the partition into the neighborhoods for each of these points). Let D∗ = D(O) be the optimal bottleneck distance.

Let G = {g_1, g_2, . . . , g_k} be the centers found by the greedy approximation (shown as white dots in the figure below). Also, let g_{k+1} denote the center that would have been added next, that is, the bottleneck vertex for G. Let D(G) denote the bottleneck distance for G. Notice that the distance from g_{k+1} to its nearest center is equal to D(G). The proof involves a simple application of the pigeonhole principle.

Fig. 64: Analysis of the greedy heuristic for k = 5. The greedy centers are shown as white dots and the optimal centers as black dots. The regions represent the neighborhood sets V(o_i) for the optimal centers.

Theorem: The greedy approximation has a ratio bound of 2, that is, D(G)/D∗ ≤ 2.

Proof: Let G' = {g_1, g_2, . . . , g_k, g_{k+1}} be the (k + 1)-element set consisting of the greedy centers together with the next greedy center g_{k+1}. First observe that for i ≠ j, d(g_i, g_j) ≥ D(G). This follows as a result of our greedy selection strategy. As each center is selected, it is selected to be at the maximum (bottleneck) distance from all the previous centers. As we add more centers, the maximum distance between any pair of centers decreases. Since the final bottleneck distance is D(G), all the centers are at least this far apart from one another.


Each g_i ∈ G′ is associated with its closest center in the optimal solution, that is, each belongs to V(o_m) for some m. Because there are k centers in O, and k + 1 elements in G′, it follows from the pigeonhole principle that at least two centers of G′ are in the same set V(o_m) for some m. (In the figure, the greedy centers g_4 and g_5 are both in V(o_2).) Let these be denoted g_i and g_j.

Since D* is the bottleneck distance for O, we know that the distance from g_i to o_m is of length at most D*, and similarly the distance from o_m to g_j is at most D*. By concatenating these two paths, it follows that there exists a path of length at most 2D* from g_i to g_j, and hence we have d(g_i, g_j) ≤ 2D*. But from the comments above we have d(g_i, g_j) ≥ D(G). Therefore,

    D(G) ≤ d(g_i, g_j) ≤ 2D*,

from which the desired ratio follows.
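As a concrete sketch, the farthest-point greedy heuristic analyzed above can be written in a few lines of Python. The function name and the one-dimensional example are illustrative, not from the notes; the distance function is assumed to be a metric.

```python
def greedy_k_center(points, k, dist):
    """Greedy (farthest-point) heuristic for k-center.
    Returns the chosen centers and the bottleneck distance D(G)."""
    centers = [points[0]]                       # arbitrary first center
    d = [dist(p, centers[0]) for p in points]   # distance to nearest center
    while len(centers) < k:
        # the bottleneck vertex is the point farthest from all current centers
        i = max(range(len(points)), key=lambda j: d[j])
        centers.append(points[i])
        d = [min(d[j], dist(points[j], points[i])) for j in range(len(points))]
    return centers, max(d)

# Vertices on a line, as in the worst-case example at the start of the lecture:
pts = list(range(11))    # points 0, 1, ..., 10
centers, D = greedy_k_center(pts, 2, lambda a, b: abs(a - b))
print(centers, D)        # picks 0 first, then the farthest point 10
```

On this instance the optimal 2-center solution has bottleneck distance 3 (centers near 2 and 7), while greedy achieves 5, comfortably within the factor-2 guarantee.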

Lecture 23: Approximations: Set Cover and Bin Packing

Read: Set cover is covered in Chapt. 35.3. Bin packing is covered as an exercise in CLRS.

Set Cover: The set cover problem is a very important optimization problem. You are given a pair (X, F) where X = {x_1, x_2, . . . , x_m} is a finite set (a domain of elements) and F = {S_1, S_2, . . . , S_n} is a family of subsets of X, such that every element of X belongs to at least one set of F.

Consider a subset C ⊆ F. (This is a collection of sets over X.) We say that C covers the domain if every element of X is in some set of C, that is

    X = ∪_{S_i ∈ C} S_i.

The problem is to find the minimum-sized subset C of F that covers X. Consider the example shown below. The optimum set cover consists of the three sets {S_3, S_4, S_5}.

Fig. 65: Set cover. (The domain elements are covered by the sets S_1, . . . , S_6; the optimum cover is {S_3, S_4, S_5}.)

Set cover arises in a number of applications. For example, suppose you want to set up security cameras

to cover a large art gallery. From each possible camera position, you can see a certain subset of the paintings.

Each such subset of paintings is a set in your system. You want to put up the fewest cameras to see all the

paintings.

Complexity of Set Cover: We have seen special cases of the set cover problems that are NP-complete. For example,

vertex cover is a type of set cover problem. The domain to be covered consists of the edges, and each vertex covers the

subset of incident edges. Thus, the decision-problem formulation of set cover (“does there exist a set cover of

size at most k?”) is NP-complete as well.

There is a factor-2 approximation for the vertex cover problem, but it cannot be applied to generate a factor-

2 approximation for set cover. In particular, the VC approximation relies on the fact that each element of the

domain (an edge) is in exactly 2 sets (one for each of its endpoints). Unfortunately, this is not true for the general


set cover problem. In fact, it is known that there is no constant factor approximation to the set cover problem,

unless P = NP. This is unfortunate, because set cover is one of the most powerful NP-complete problems.

Today we will show that there is a reasonable approximation algorithm, the greedy heuristic, which achieves an approximation bound of ln m, where m = |X| is the size of the underlying domain. (The book proves a somewhat stronger result, that the approximation factor is ln m′, where m′ ≤ m is the size of the largest set in F. However, their proof is more complicated.)

Greedy Set Cover: A simple greedy approach to set cover works by at each stage selecting the set that covers the greatest number of "uncovered" elements.

Greedy Set Cover
Greedy-Set-Cover(X, F) {
    U = X                     // U are the items to be covered
    C = empty                 // C will be the sets in the cover
    while (U is nonempty) {   // there is someone left to cover
        select S in F that covers the most elements of U
        add S to C
        U = U - S
    }
    return C
}
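The pseudocode translates almost line for line into Python. Here is a minimal sketch; the set representation and tie-breaking rule (Python's max keeps the first maximizer) are my own choices, not from the notes.

```python
def greedy_set_cover(X, F):
    """Greedy set cover: X is the domain, F is a list of subsets of X.
    Returns the indices of the chosen sets, in the order selected."""
    U = set(X)      # uncovered elements
    C = []          # indices of the sets chosen so far
    while U:
        # select the set in F covering the most uncovered elements
        i = max(range(len(F)), key=lambda j: len(U & F[j]))
        C.append(i)
        U -= F[i]   # remove the newly covered elements
    return C

F = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {6}]
print(greedy_set_cover({1, 2, 3, 4, 5, 6}, F))   # chooses sets 0 and 2
```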

For the example given earlier the greedy set cover algorithm would select S_1 (since it covers 6 out of 12 elements), then S_6 (since it covers 3 out of the remaining 6), then S_2 (since it covers 2 of the remaining 3) and finally S_3. Thus, it would return a set cover of size 4, whereas the optimal set cover has size 3.

What is the approximation factor? The problem with the greedy set cover algorithm is that it can be "fooled" into picking the wrong set, over and over again. Consider the following example. The optimal set cover consists of sets S_5 and S_6, each of size 16. Initially all three sets S_1, S_5, and S_6 have 16 elements. If ties are broken in the worst possible way, the greedy algorithm will first select set S_1. We remove all the covered elements. Now S_2, S_5 and S_6 all cover 8 of the remaining elements. Again, if we choose poorly, S_2 is chosen. The pattern repeats, choosing S_3 (size 4), S_4 (size 2) and finally S_5 and S_6 (each of size 1).

Thus, the optimum cover consisted of two sets, but we picked roughly lg m, where m = |X|, for a ratio bound of (lg m)/2. (Recall that lg denotes logarithm base 2.) There were many cases where ties were broken badly here, but it is possible to redesign the example such that there are no ties and yet the algorithm has essentially the same ratio bound.

Fig. 66: An example in which greedy set cover performs poorly. Optimum: {S_5, S_6}; Greedy: {S_1, S_2, S_3, S_4, S_5, S_6}.
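This pathological behavior is easy to reproduce. The sketch below builds such an instance with m = 32; the concrete sets are my own construction, with the wasteful sets listed first so that Python's max (which keeps the first maximizer) breaks ties in the worst possible way.

```python
def greedy_set_cover(X, F):
    U, C = set(X), []
    while U:
        # max returns the first maximizer, so earlier sets win ties
        i = max(range(len(F)), key=lambda j: len(U & F[j]))
        C.append(i)
        U -= F[i]
    return C

# Optimal cover: S5 = {0..15} and S6 = {16..31}, just 2 sets.
S5 = set(range(16))
S6 = set(range(16, 32))
S1 = set(range(0, 8)) | set(range(16, 24))    # 16 elements, half from each side
S2 = set(range(8, 12)) | set(range(24, 28))   # 8 elements
S3 = set(range(12, 14)) | set(range(28, 30))  # 4 elements
S4 = {14, 30}                                 # 2 elements
F = [S1, S2, S3, S4, S5, S6]

print(greedy_set_cover(range(32), F))   # greedy takes all 6 sets
```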

However we will show that the greedy set cover heuristic never performs worse than a factor of ln m. (Note that this is natural log, not base 2.)

Before giving the proof, we need one important mathematical inequality.

Lemma: For all c > 0,

    (1 − 1/c)^c ≤ 1/e,

where e is the base of the natural logarithm.


Proof: We use the fact that for all x, 1 + x ≤ e^x. (The two functions are equal when x = 0.) Now, if we substitute −1/c for x we have (1 − 1/c) ≤ e^(−1/c), and if we raise both sides to the cth power, we have the desired result.
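The lemma is also easy to sanity-check numerically; (1 − 1/c)^c increases toward 1/e from below as c grows:

```python
import math

# (1 - 1/c)^c stays below 1/e and approaches it as c grows
for c in [2, 5, 10, 100, 10**6]:
    assert (1 - 1/c) ** c <= 1 / math.e
print((1 - 1/10**6) ** 10**6, 1 / math.e)   # the two values nearly agree
```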

The approximation bound for greedy set cover proven here is a bit weaker than the one in CLRS, but I think it is easier to understand.

Theorem: Greedy set cover has a ratio bound of at most ln m, where m = |X|.

Proof: Let c denote the size of the optimum set cover, and let g denote the size of the greedy set cover minus 1.

We will show that g/c ≤ ln m. (This is not quite what we wanted, but we are correct to within 1 set.)

Initially, there are m_0 = m elements left to be covered. We know that there is a cover of size c (the optimal cover) and therefore, by the pigeonhole principle, there must be at least one set that covers at least m_0/c elements. (Otherwise, if every set covered fewer than m_0/c elements, then no collection of c sets could cover all m_0 elements.) Since the greedy algorithm selects the largest set, it will select a set that covers at least this many elements. The number of elements that remain to be covered is at most

    m_1 = m_0 − m_0/c = m_0(1 − 1/c).

Applying the argument again, we know that we can cover these m_1 elements with a cover of size c (the optimal cover), and hence there exists a subset that covers at least m_1/c elements, leaving at most

    m_2 = m_1(1 − 1/c) = m_0(1 − 1/c)^2

elements remaining.

If we apply this argument g times, then since each time we leave at most a (1 − 1/c) fraction of the remaining elements uncovered, the number of elements that remain uncovered after g sets have been chosen by the greedy algorithm is at most

    m_g = m_0(1 − 1/c)^g.

How long can this go on? Consider the largest value of g such that after removing all but the last set of the greedy cover, we still have some element remaining to be covered. Thus, we are interested in the largest value of g such that

    1 ≤ m(1 − 1/c)^g.

We can rewrite this as

    1 ≤ m((1 − 1/c)^c)^(g/c).

By the inequality above we have

    1 ≤ m(1/e)^(g/c).

Now, if we multiply by e^(g/c) and take natural logs we get that g satisfies:

    e^(g/c) ≤ m  ⟹  g/c ≤ ln m.

This completes the proof.

Even though the greedy set cover has this relatively bad ratio bound, it seems to perform reasonably well in

practice. Thus, the example shown above in which the approximation bound is (lg m)/2 is not “typical” of set

cover instances.

Bin Packing: Bin packing is another well-known NP-complete problem, which is a variant of the knapsack problem. We are given a set of n objects, where s_i denotes the size of the ith object. It will simplify the presentation to assume that 0 < s_i < 1. We want to put these objects into a set of bins. Each bin can hold a subset of objects whose total size is at most 1. The problem is to partition the objects among the bins so as to use the fewest possible bins. (Note that if your bin size is not 1, then you can reduce the problem to this form by simply dividing all sizes by the size of the bin.)


Bin packing arises in many applications. Many of these applications involve not only the size of the object but

their geometric shape as well. For example, these include packing boxes into a truck, or cutting the maximum

number of pieces of certain shapes out of a piece of sheet metal. However, even if we ignore the geometry,

and just consider the sizes of the objects, the decision problem is still NP-complete. (The reduction is from the

knapsack problem.)

Here is a simple heuristic algorithm for the bin packing problem, called the first-fit heuristic. We start with an unlimited number of empty bins. We take each object in turn, and find the first bin that has space to hold this object. We put this object in this bin. The algorithm is illustrated in the figure below. We claim that first-fit uses at most twice as many bins as the optimum, that is, if the optimal solution uses b* bins, and first-fit uses b_ff bins, then

    b_ff / b* ≤ 2.

Fig. 67: First-fit heuristic (objects s_1, . . . , s_7 placed into bins in order).
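A sketch of first-fit in Python follows; the bin capacity is normalized to 1, as in the text, and the example instance is my own. Real implementations should beware floating-point rounding when summing sizes.

```python
def first_fit(sizes):
    """First-fit bin packing: place each object into the first bin
    that has room for it; open a new bin if none does."""
    bins = []                    # each bin is a list of object sizes
    for s in sizes:
        for b in bins:
            if sum(b) + s <= 1:  # first bin with enough space
                b.append(s)
                break
        else:
            bins.append([s])     # no existing bin fits: open a new one
    return bins

print(first_fit([0.4, 0.7, 0.5, 0.3, 0.6]))   # uses 3 bins
```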

Theorem: The first-fit heuristic achieves a ratio bound of 2.

Proof: Consider an instance {s_1, . . . , s_n} of the bin packing problem. Let S = Σ_i s_i denote the sum of all the object sizes. Let b* denote the optimal number of bins, and b_ff denote the number of bins used by first-fit.

First observe that b* ≥ S. This is true, since no bin can hold a total capacity of more than 1 unit, and even if we were to fill each bin exactly to its capacity, we would need at least S bins. (In fact, since the number of bins is an integer, we would need at least ⌈S⌉ bins.)

Next, we claim that b_ff ≤ 2S. To see this, let t_i denote the total size of the objects that first-fit puts into bin i. Consider bins i and i + 1 filled by first-fit. Assume that indexing is cyclical, so if i is the last index (i = b_ff) then i + 1 = 1. We claim that t_i + t_{i+1} ≥ 1. If not, then the contents of bins i and i + 1 could both be put into the same bin, and hence first-fit would never have started to fill the second bin, preferring to keep everything in the first bin. Thus we have:

    Σ_{i=1}^{b_ff} (t_i + t_{i+1}) ≥ b_ff.

But this sum adds up all the elements twice, so it has a total value of 2S. Thus we have 2S ≥ b_ff.

Combining this with the fact that b* ≥ S we have

    b_ff ≤ 2S ≤ 2b*,

implying that b_ff/b* ≤ 2, as desired.

There are in fact a number of other heuristics for bin packing. Another example is best ﬁt, which attempts to

put the object into the bin in which it ﬁts most closely with the available space (assuming that there is sufﬁcient

available space). There is also a variant of ﬁrst-ﬁt, called ﬁrst ﬁt decreasing, in which the objects are ﬁrst sorted

in decreasing order of size. (This makes intuitive sense, because it is best to ﬁrst load the big items, and then try

to squeeze the smaller objects into the remaining space.)


A more careful proof establishes that first fit has an approximation ratio that is a bit smaller than 2; in fact 17/10 is possible. Best fit has a very similar bound. First fit decreasing has a significantly better bound of 11/9 = 1.222 . . ..
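First fit decreasing is then just a sort in front of first fit. The sketch below, with an instance of my own choosing, shows a case where plain first fit uses 4 bins but first fit decreasing uses only 3:

```python
def first_fit(sizes):
    bins = []
    for s in sizes:
        for b in bins:
            if sum(b) + s <= 1:
                b.append(s)
                break
        else:
            bins.append([s])
    return bins

def first_fit_decreasing(sizes):
    # load the big items first, then squeeze the small ones into the gaps
    return first_fit(sorted(sizes, reverse=True))

items = [0.4, 0.4, 0.4, 0.6, 0.6, 0.6]
print(len(first_fit(items)), len(first_fit_decreasing(items)))   # 4 vs. 3
```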

Lecture 24: Final Review

Overview: This semester we have discussed general approaches to algorithm design. The intent has been to investigate basic algorithm design paradigms (dynamic programming, greedy algorithms, depth-first search, etc.) and to consider how these techniques can be applied to a number of well-defined problems. We have also discussed the class of NP-complete problems, which are believed to be very hard to solve, and finally some examples of approximation algorithms.

How to use this information: In some sense, the algorithms you have learned here are rarely immediately applicable

to your later work (unless you go on to be an algorithm designer) because real world problems are always

messier than these simple abstract problems. However, there are some important lessons to take out of this

class.

Develop a clean mathematical model: Most real-world problems are messy. An important ﬁrst step in solving

any problem is to produce a simple and clean mathematical formulation. For example, this might involve

describing the problem as an optimization problem on graphs, sets, or strings. If you cannot clearly

describe what your algorithm is supposed to do, it is very difﬁcult to know when you have succeeded.

Create good rough designs: Before jumping in and starting coding, it is important to begin with a good rough

design. If your rough design is based on a bad paradigm (e.g. exhaustive enumeration, when depth-ﬁrst

search could have been applied) then no amount of additional tuning and reﬁning will save this bad design.

Prove your algorithm correct: Many times you come up with an idea that seems promising, only to ﬁnd out

later (after a lot of coding and testing) that it does not work. Prove that your algorithm is correct before

coding. Writing proofs is not always easy, but it may save you a few weeks of wasted programming time.

If you cannot see why it is correct, chances are that it is not correct at all.

Can it be improved?: Once you have a solution, try to come up with a better one. Is there some reason why a

better algorithm does not exist? (That is, can you establish a lower bound?) If your solution is exponential

time, then maybe your problem is NP-hard.

Prototype to generate better designs: We have attempted to analyze algorithms from an asymptotic perspective, which hides many of the details of the running time (e.g. constant factors), but gives a general perspective for separating good designs from bad ones. After you have isolated the good designs, then it is time to start prototyping and doing empirical tests to establish the real constant factors. A good profiling tool can tell you which subroutines are taking the most time, and those are the ones you should work on improving.

Still too slow?: If your problem has an unacceptably high execution time, you might consider an approximation

algorithm. The world is full of heuristics, both good and bad. You should develop a good heuristic, and if

possible, prove a ratio bound for your algorithm. If you cannot prove a ratio bound, run many experiments

to see how good the actual performance is.

There is still much more to be learned about algorithm design, but we have covered a great deal of the basic

material. One direction is to specialize in some particular area, e.g. string pattern matching, computational

geometry, parallel algorithms, randomized algorithms, or approximation algorithms. It would be easy to devote

an entire semester to any one of these topics.

Another direction is to gain a better understanding of average-case analysis, which we have largely ignored.

Still another direction might be to study numerical algorithms (as covered in a course on numerical analysis),

or to consider general search strategies such as simulated annealing. Finally, an emerging area is the study of

algorithm engineering, which considers how to design algorithms that are both efﬁcient in a practical sense, as

well as a theoretical sense.


Material for the ﬁnal exam:

Old Material: Know general results, but I will not ask too many detailed questions. Do not forget DFS and DP. You will likely see an algorithm design problem that will involve one of these two techniques.

All-Pairs Shortest Paths: (Chapt. 25.2.)

Floyd-Warshall Algorithm: All-pairs shortest paths, arbitrary edge weights (no negative cost cycles). Running time O(V^3).

NP-completeness: (Chapt 34.)

Basic concepts: Decision problems, polynomial time, the class P, certiﬁcates and the class NP, polynomial

time reductions.

NP-completeness reductions: You are responsible for knowing the following reductions.

• 3-coloring to clique cover.

• 3SAT to Independent Set (IS).

• Independent Set to Vertex Cover and Clique.

• Vertex Cover to Dominating Set.

• Vertex Cover to Subset Sum.

It is also a good idea to understand all the reductions that were used in the homework solutions, since

modiﬁcations of these will likely appear on the ﬁnal.

NP-complete reductions can be challenging. If you cannot see how to solve the problem, here are some

suggestions for maximizing partial credit.

All NP-complete proofs have a very speciﬁc form. Explain that you know the template, and try to ﬁll in as

many aspects as possible. Suppose that you want to prove that some problem B is NP-complete.

• B ∈ NP. This is almost always easy, so don't blow it. This basically involves specifying the certificate.

The certiﬁcate is almost always the thing that the problem is asking you to ﬁnd.

• For some known NP-complete problem A, A ≤_P B. This means that you want to find a polynomial time function f that maps an instance of A to an instance of B. (Make sure to get the direction correct!)

• Show the correctness of your reduction, by showing that x ∈ A if and only if f(x) ∈ B. First suppose that you have a solution to x and show how to map this to a solution for f(x). Then suppose that you have a solution to f(x) and show how to map this to a solution for x.

If you cannot ﬁgure out what f is, at least tell me what you would like f to do. Explain which elements

of problem A will likely map to which elements of problem B. Remember that you are trying to translate

the elements of one problem into the common elements of the other problem.

I try to make at least one reduction on the exam similar to one that you have seen before, so make sure that you understand the ones that we have done either in class or on homework problems.

Approximation Algorithms: (Chapt. 35, up through 35.2.)

Vertex cover: Ratio bound of 2.

TSP with triangle inequality: Ratio bound of 2.

Set Cover: Ratio bound of ln m, where m = |X|.

Bin packing: Ratio bound of 2.

k-center: Ratio bound of 2.

Many approximation algorithms are simple. (Most are based on simple greedy heuristics.) The key to proving many ratio bounds is first coming up with a lower bound on the optimal solution (e.g., TSP_opt ≥ MST). Next, provide an upper bound on the cost of your heuristic relative to this same quantity (e.g., the shortcut twice-around tour for the MST is at most twice the MST cost).


Supplemental Lecture 1: Asymptotics

Read: Chapters 2–3 in CLRS.

Asymptotics: The formulas that are derived for the running times of programs may often be quite complex. When

designing algorithms, the main purpose of the analysis is to get a sense for the trend in the algorithm’s running

time. (An exact analysis is probably best done by implementing the algorithm and measuring CPU seconds.) We

would like a simple way of representing complex functions, which captures the essential growth rate properties.

This is the purpose of asymptotics.

Asymptotic analysis is based on two simplifying assumptions, which hold in most (but not all) cases. But it is

important to understand these assumptions and the limitations of asymptotic analysis.

Large input sizes: We are most interested in how the running time grows for large values of n.

Ignore constant factors: The actual running time of the program depends on various constant factors in the im-

plementation (coding tricks, optimizations in compilation, speed of the underlying hardware, etc). There-

fore, we will ignore constant factors.

The justiﬁcation for considering large n is that if n is small, then almost any algorithm is fast enough. People are

most concerned about running times for large inputs. For the most part, these assumptions are reasonable when

making comparisons between functions that have signiﬁcantly different behaviors. For example, suppose we

have two programs, one whose running time is T

1

(n) = n

3

and another whose running time is T

2

(n) = 100n.

(The latter algorithm may be faster because it uses a more sophisticated and complex algorithm, and the added

sophistication results in a larger constant factor.) For small n (e.g., n ≤ 10) the ﬁrst algorithm is the faster of

the two. But as n becomes larger the relative differences in running time become much greater. Assuming one

million operations per second.

    n        T_1(n)      T_2(n)     T_1(n)/T_2(n)
    10       0.001 sec   0.001 sec  1
    100      1 sec       0.01 sec   100
    1000     17 min      0.1 sec    10,000
    10,000   11.6 days   1 sec      1,000,000

The clear lesson is that as input sizes grow, the performance of the asymptotically poorer algorithm degrades

much more rapidly.

These assumptions are not always reasonable. For example, in any particular application, n is a ﬁxed value. It

may be the case that one function is smaller than another asymptotically, but for your value of n, the asymptot-

ically larger value is ﬁne. Most of the algorithms that we will study this semester will have both low constants

and low asymptotic running times, so we will not need to worry about these issues.

Asymptotic Notation: To represent the running times of algorithms in a simpler form, we use asymptotic notation,

which essentially represents a function by its fastest growing term and ignores constant factors. For example,

suppose we have an algorithm whose (exact) worst-case running time is given by the following formula:

    T(n) = 13n^3 + 5n^2 − 17n + 16.

As n becomes large, the 13n^3 term dominates the others. By ignoring constant factors, we might say that the running time grows "on the order of" n^3, which we will express mathematically as T(n) ∈ Θ(n^3). This intuitive definition is fine for informal use. Let us consider how to make this idea mathematically formal.

Definition: Given any function g(n), we define Θ(g(n)) to be a set of functions:

    Θ(g(n)) = { f(n) | there exist strictly positive constants c_1, c_2, and n_0 such that
                0 ≤ c_1 g(n) ≤ f(n) ≤ c_2 g(n) for all n ≥ n_0 }.


Let's dissect this definition. Intuitively, what we want to say with "f(n) ∈ Θ(g(n))" is that f(n) and g(n) are asymptotically equivalent. This means that they have essentially the same growth rates for large n. For example, functions such as

    4n^2,  (8n^2 + 2n − 3),  (n^2/5 + √n − 10 log n),  and  n(n − 3)

are all intuitively asymptotically equivalent, since as n becomes large, the dominant (fastest growing) term is some constant times n^2. In other words, they all grow quadratically in n. The portion of the definition that allows us to select c_1 and c_2 is essentially saying "the constants do not matter because you may pick c_1 and c_2 however you like to satisfy these conditions." The portion of the definition that allows us to select n_0 is essentially saying "we are only interested in large n, since you only have to satisfy the condition for all n bigger than n_0, and you may make n_0 as big a constant as you like."

An example: Consider the function f(n) = 8n^2 + 2n − 3. Our informal rule of keeping the largest term and throwing away the constants suggests that f(n) ∈ Θ(n^2) (since f grows quadratically). Let's see why the formal definition bears out this informal observation.

We need to show two things: first, that f(n) does grow asymptotically at least as fast as n^2, and second, that f(n) grows no faster asymptotically than n^2. We'll do both very carefully.

Lower bound: f(n) grows asymptotically at least as fast as n^2: This is established by the portion of the definition that reads (paraphrasing): "there exist positive constants c_1 and n_0, such that f(n) ≥ c_1 n^2 for all n ≥ n_0." Consider the following (almost correct) reasoning:

    f(n) = 8n^2 + 2n − 3 ≥ 8n^2 − 3 = 7n^2 + (n^2 − 3) ≥ 7n^2.

Thus, if we set c_1 = 7, then we are done. But in the above reasoning we have implicitly made the assumptions that 2n ≥ 0 and n^2 − 3 ≥ 0. These are not true for all n, but they are true for all sufficiently large n. In particular, if n ≥ √3, then both are true. So let us select n_0 = √3, and now we have f(n) ≥ c_1 n^2 for all n ≥ n_0, which is what we need.

Upper bound: f(n) grows asymptotically no faster than n^2: This is established by the portion of the definition that reads "there exist positive constants c_2 and n_0, such that f(n) ≤ c_2 n^2 for all n ≥ n_0." Consider the following reasoning (which is almost correct):

    f(n) = 8n^2 + 2n − 3 ≤ 8n^2 + 2n ≤ 8n^2 + 2n^2 = 10n^2.

This means that if we let c_2 = 10, then we are done. We have implicitly made the assumption that 2n ≤ 2n^2. This is not true for all n, but it is true for all n ≥ 1. So, let us select n_0 = 1, and now we have f(n) ≤ c_2 n^2 for all n ≥ n_0, which is what we need.

From the lower bound, we have n_0 ≥ √3 and from the upper bound we have n_0 ≥ 1, and so combining these we let n_0 be the larger of the two: n_0 = √3. Thus, in conclusion, if we let c_1 = 7, c_2 = 10, and n_0 = √3, then we have

    0 ≤ c_1 g(n) ≤ f(n) ≤ c_2 g(n) for all n ≥ n_0,

and this is exactly what the definition requires. Since we have shown (by construction) the existence of constants c_1, c_2, and n_0, we have established that f(n) ∈ Θ(n^2). (Whew! That was a lot more work than just the informal notion of throwing away constants and keeping the largest term, but it shows how this informal notion is implemented formally in the definition.)
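The constants found above are easy to check mechanically. Here is a quick sanity check of c_1 = 7 and c_2 = 10 over integers n ≥ 2 (so that n ≥ √3); the range tested is arbitrary:

```python
def f(n):
    return 8 * n**2 + 2 * n - 3

# check 7n^2 <= f(n) <= 10n^2 for all integers n >= 2 (i.e., n >= sqrt(3))
for n in range(2, 10_000):
    assert 7 * n * n <= f(n) <= 10 * n * n
print("bounds hold on the tested range")
```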

Now let's show why f(n) is not in some other asymptotic class. First, let's show that f(n) ∉ Θ(n). If this were true, then we would have to satisfy both the upper and lower bounds. It turns out that the lower bound is satisfied (because f(n) grows at least as fast asymptotically as n). But the upper bound is false. In particular, the upper bound requires us to show "there exist positive constants c_2 and n_0, such that f(n) ≤ c_2 n for all n ≥ n_0." Informally, we know that as n becomes large enough f(n) = 8n^2 + 2n − 3 will eventually exceed c_2 n no matter how large we make c_2 (since f(n) is growing quadratically and c_2 n is only growing linearly). To show this formally, suppose towards a contradiction that constants c_2 and n_0 did exist, such that 8n^2 + 2n − 3 ≤ c_2 n for all n ≥ n_0. Since this is true for all sufficiently large n, then it must be true in the limit as n tends to infinity. If we divide both sides by n we have:

    lim_{n→∞} (8n + 2 − 3/n) ≤ c_2.

It is easy to see that in the limit the left side tends to ∞, and so no matter how large c_2 is, this statement is violated. This means that f(n) ∉ Θ(n).

Let's show that f(n) ∉ Θ(n^3). Here the idea will be to violate the lower bound: "there exist positive constants c_1 and n_0, such that f(n) ≥ c_1 n^3 for all n ≥ n_0." Informally this is true because f(n) is growing quadratically, and eventually any cubic function will exceed it. To show this formally, suppose towards a contradiction that constants c_1 and n_0 did exist, such that 8n^2 + 2n − 3 ≥ c_1 n^3 for all n ≥ n_0. Since this is true for all sufficiently large n, then it must be true in the limit as n tends to infinity. If we divide both sides by n^3 we have:

    lim_{n→∞} (8/n + 2/n^2 − 3/n^3) ≥ c_1.

It is easy to see that in the limit the left side tends to 0, and so the only way to satisfy this requirement is to set c_1 = 0, but by hypothesis c_1 is positive. This means that f(n) ∉ Θ(n^3).

O-notation and Ω-notation: We have seen that the definition of Θ-notation relies on proving both a lower and upper asymptotic bound. Sometimes we are only interested in proving one bound or the other. The O-notation allows us to state asymptotic upper bounds and the Ω-notation allows us to state asymptotic lower bounds.

Definition: Given any function g(n),

    O(g(n)) = { f(n) | there exist positive constants c and n_0 such that
                0 ≤ f(n) ≤ cg(n) for all n ≥ n_0 }.

Definition: Given any function g(n),

    Ω(g(n)) = { f(n) | there exist positive constants c and n_0 such that
                0 ≤ cg(n) ≤ f(n) for all n ≥ n_0 }.

Compare this with the definition of Θ. You will see that O-notation only enforces the upper bound of the Θ definition, and Ω-notation only enforces the lower bound. Also observe that f(n) ∈ Θ(g(n)) if and only if f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)). Intuitively, f(n) ∈ O(g(n)) means that f(n) grows asymptotically at the same rate or slower than g(n). Whereas, f(n) ∈ Ω(g(n)) means that f(n) grows asymptotically at the same rate or faster than g(n).

For example, f(n) = 3n^2 + 4n ∈ Θ(n^2), but it is not in Θ(n) or Θ(n^3). But f(n) ∈ O(n^2) and in O(n^3), but not in O(n). Finally, f(n) ∈ Ω(n^2) and in Ω(n), but not in Ω(n^3).

The Limit Rule for Θ: The previous examples which used limits suggest an alternative way of showing that f(n) ∈ Θ(g(n)).

Limit Rule for Θ-notation: Given positive functions f(n) and g(n), if

    lim_{n→∞} f(n)/g(n) = c,

for some constant c > 0 (strictly positive but not infinity), then f(n) ∈ Θ(g(n)).

Limit Rule for O-notation: Given positive functions f(n) and g(n), if

    lim_{n→∞} f(n)/g(n) = c,

for some constant c ≥ 0 (nonnegative but not infinite), then f(n) ∈ O(g(n)).

Limit Rule for Ω-notation: Given positive functions f(n) and g(n), if

    lim_{n→∞} f(n)/g(n) ≠ 0

(either a strictly positive constant or infinity), then f(n) ∈ Ω(g(n)).

This limit rule can be applied in almost every instance (that I know of) where the formal definition can be used, and it is almost always easier to apply than the formal definition. The only exceptions that I know of are strange instances where the limit does not exist (e.g. f(n) = n^(1+sin n)). But since most running times are fairly well-behaved functions this is rarely a problem.

For example, recall the function f(n) = 8n^2 + 2n − 3. To show that f(n) ∈ Θ(n^2) we let g(n) = n^2 and compute the limit. We have

    lim_{n→∞} (8n^2 + 2n − 3)/n^2 = lim_{n→∞} (8 + 2/n − 3/n^2) = 8

(since the two fractional terms tend to 0 in the limit). Since 8 is a nonzero constant, it follows that f(n) ∈ Θ(g(n)).
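Numerically the ratio settles near the constant 8 quite quickly; a quick check (the sample values of n are arbitrary):

```python
def f(n):
    return 8 * n**2 + 2 * n - 3

for n in [10, 100, 10_000]:
    print(n, f(n) / n**2)    # the ratio approaches the constant 8
```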

You may recall the important rules from calculus for evaluating limits. (If not, dredge out your calculus book to remember.) Most of the rules are pretty self-evident (e.g., the limit of a finite sum is the sum of the individual limits). One important rule to remember is the following:

L'Hôpital's rule: If f(n) and g(n) both approach 0 or both approach ∞ in the limit, then

    lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n),

where f′(n) and g′(n) denote the derivatives of f and g relative to n.

Exponentials and Logarithms: Exponentials and logarithms are very important in analyzing algorithms. The following are nice to keep in mind. The terminology lg^b n means (lg n)^b.

Lemma: Given any positive constants a > 1, b, and c:

lim_{n→∞} n^b/a^n = 0,    lim_{n→∞} (lg^b n)/n^c = 0.

We won't prove these, but they can be shown by taking appropriate powers, and then applying L'Hôpital's rule. The important bottom line is that polynomials always grow more slowly than exponentials whose base is greater than 1. For example:

n^500 ∈ O(2^n).

For this reason, we will try to avoid exponential running times at all costs. Conversely, logarithmic powers (sometimes called polylogarithmic functions) grow more slowly than any polynomial. For example:

lg^500 n ∈ O(n).


For this reason, we will usually be happy to allow any number of additional logarithmic factors, if it means avoiding any additional powers of n.

At this point, it should be mentioned that these last observations are really asymptotic results. They are true in the limit for large n, but you should be careful just how high the crossover point is. For example, by my calculations, lg^500 n ≤ n only for n > 2^6000 (which is much larger than any input size you'll ever see). Thus, you should take this with a grain of salt. But, for small powers of logarithms, this applies to all reasonably large input sizes. For example, lg^2 n ≤ n for all n ≥ 16.
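Crossover claims like this are easy to check by machine; the following sketch (mine, not from the notes) verifies the lg^2 n ≤ n claim over a range of values:

```python
import math

# Check the claim that lg^2 n <= n for all n >= 16, and that it fails
# just below the crossover (e.g. at n = 8, lg^2 8 = 9 > 8).

def lg2(n):
    return math.log2(n) ** 2

assert lg2(8) > 8                                   # fails below the crossover
assert all(lg2(n) <= n for n in range(16, 100000))  # holds from 16 onward
print("crossover at n = 16 confirmed")
```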

Asymptotic Intuition: To get an intuitive feeling for what common asymptotic running times map into in terms of practical usage, here is a little list.

• Θ(1): Constant time; you can't beat it!

• Θ(log n): This is typically the speed at which the most efficient data structures operate for a single access (e.g., inserting a key into a balanced binary tree). Also it is the time to find an object in a sorted list of length n by binary search.

• Θ(n): This is about the fastest that an algorithm can run, given that you need Θ(n) time just to read in all the data.

• Θ(n log n): This is the running time of the best sorting algorithms. Since many problems require sorting the inputs, this is still considered quite efficient.

• Θ(n^2), Θ(n^3), . . .: Polynomial time. These running times are acceptable either when the exponent is small or when the data size is not too large (e.g. n ≤ 1,000).

• Θ(2^n), Θ(3^n): Exponential time. This is only acceptable when either (1) you know that your inputs will be of very small size (e.g. n ≤ 50), or (2) you know that this is a worst-case running time that will rarely occur in practical instances. In case (2), it would be a good idea to try to get a more accurate average-case analysis.

• Θ(n!), Θ(n^n): Acceptable only for really small inputs (e.g. n ≤ 20).

Are there even bigger functions? Definitely! For example, if you want to see a function that grows inconceivably fast, look up the definition of Ackermann's function in our text.

Max Dominance Revisited: Returning to our Max Dominance algorithms, recall that one had a running time of T1(n) = n^2 and the other had a running time of T2(n) = n log n + n(n − 1)/2. Expanding the latter function and grouping terms in order of their growth rate we have

T2(n) = n^2/2 + n log n − n/2.

We will leave it as an easy exercise to show that both T1(n) and T2(n) are Θ(n^2). Although the second algorithm is twice as fast for large n (because of the 1/2 factor multiplying the n^2 term), this does not represent a significant improvement.
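A small computation (my own, assuming base-2 logs) makes the limiting factor of 1/2 visible:

```python
import math

# Compare the two running-time functions from the text; the ratio
# T2(n)/T1(n) tends to 1/2 as n grows (base-2 log is my assumption).

def T1(n):
    return n**2

def T2(n):
    return n * math.log2(n) + n * (n - 1) / 2

for n in [100, 10000, 10**6]:
    print(n, T2(n) / T1(n))   # approaches 0.5
```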

Supplemental Lecture 2: Max Dominance

Read: Review Chapters 1–4 in CLRS.

Faster Algorithm for Max-Dominance: Recall the max-dominance problem from the last two lectures. So far we have introduced a simple brute-force algorithm that ran in O(n^2) time, which operated by comparing all pairs of points. Last time we considered a slight improvement, which sorted the points by their x-coordinate, and then compared each point against the subsequent points in the sorted order. However, this improvement only improved matters by a constant factor. The question we consider today is whether there is an approach that is significantly better.


A Major Improvement: The problem with the previous algorithm is that, even though we have cut the number of comparisons roughly in half, each point is still making lots of comparisons. Can we save time by making only one comparison for each point? The inner while loop is testing to see whether any point that follows P[i] in the sorted list has a larger y-coordinate. This suggests that if we knew which point among P[i + 1, . . . , n] had the maximum y-coordinate, we could just test against that point.

How can we do this? Here is a simple observation. For any set of points, the point with the maximum y-coordinate is the maximal point with the smallest x-coordinate. This suggests that we can sweep the points backwards, from right to left. We keep track of the index j of the most recently seen maximal point. (Initially the rightmost point is maximal.) When we encounter the point P[i], it is maximal if and only if P[i].y ≥ P[j].y. This suggests the following algorithm.

Max Dominance: Sort and Reverse Scan

MaxDom3(P, n) {
    Sort P in ascending order by x-coordinate;
    output P[n];                   // last point is always maximal
    j = n;
    for i = n-1 downto 1 {
        if (P[i].y >= P[j].y) {    // is P[i] maximal?
            output P[i];           // yes..output it
            j = i;                 // P[i] has the largest y so far
        }
    }
}
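For concreteness, here is a runnable Python version of this pseudocode (the (x, y) tuple representation and the function name are my own choices, not from the notes):

```python
def max_dom3(points):
    """Return the maximal points of a list of (x, y) tuples, left to right."""
    P = sorted(points)            # sort ascending by x-coordinate
    maxima = [P[-1]]              # the rightmost point is always maximal
    best_y = P[-1][1]
    for x, y in reversed(P[:-1]): # sweep backwards, right to left
        if y >= best_y:           # is this point maximal?
            maxima.append((x, y))
            best_y = y            # it has the largest y seen so far
    maxima.reverse()              # report in left-to-right order
    return maxima

pts = [(2, 14), (4, 11), (5, 1), (7, 7), (7, 13), (9, 10),
       (11, 5), (12, 12), (13, 3), (14, 10), (15, 7), (16, 4)]
print(max_dom3(pts))
```

On the twelve points of Fig. 68 this reports (2,14), (7,13), (12,12), (14,10), (15,7), (16,4).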

The running time of the for-loop is obviously O(n), because there is just a single loop that is executed n − 1 times, and the code inside takes constant time. The total running time is dominated by the O(n log n) sorting time, for a total of O(n log n) time.

How much of an improvement is this? Probably the most accurate way to find out would be to code the two up, and compare their running times. But just to get a feeling, let's look at the ratio of the running times, ignoring constant factors:

n^2/(n lg n) = n/(lg n).

(I use the notation lg n to denote the logarithm base 2, ln n to denote the natural logarithm (base e), and log n when I do not care about the base. Note that a change in base only affects the value of a logarithm function by a constant amount, so inside of O-notation, we will usually just write log n.)

For relatively small values of n (e.g. less than 100), both algorithms are probably running fast enough that the difference will be practically negligible. (Rule 1 of algorithm optimization: Don't optimize code that is already fast enough.) On larger inputs, say, n = 1,000, the ratio of n to log n is about 1000/10 = 100, so there is a 100-to-1 ratio in running times. Of course, we would need to factor in constant factors, but since we are not using any really complex data structures, it is hard to imagine that the constant factors will differ by more than, say, 10. For even larger inputs, say, n = 1,000,000, we are looking at a ratio of roughly 1,000,000/20 = 50,000. This is quite a significant difference, irrespective of the constant factors.
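The quoted ratios are easy to reproduce (my own arithmetic, using base-2 logs):

```python
import math

# Reproduce the back-of-the-envelope ratios n / lg n quoted above.
for n in [1000, 1000000]:
    print(n, round(n / math.log2(n)))
```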

Divide and Conquer Approach: One problem with the previous algorithm is that it relies on sorting. This is nice

and clean (since it is usually easy to get good code for sorting without troubling yourself to write your own).

However, if you really wanted to squeeze the most efﬁciency out of your code, you might consider whether you

can solve this problem without invoking a sorting algorithm.

One of the basic maxims of algorithm design is to ﬁrst approach any problem using one of the standard algorithm

design paradigms, e.g. divide and conquer, dynamic programming, greedy algorithms, depth-ﬁrst search. We

will talk more about these methods as the semester continues. For this problem, divide-and-conquer is a natural

method to choose. What is this paradigm?

Lecture Notes 96 CMSC 451

Divide: Divide the problem into two subproblems (ideally of approximately equal sizes),

Conquer: Solve each subproblem recursively, and

Combine: Combine the solutions to the two subproblems into a global solution.

How shall we divide the problem? I can think of a couple of ways. One is similar to how MergeSort operates.

Just take the array of points P[1..n], and split into two subarrays of equal size P[1..n/2] and P[n/2 + 1..n].

Because we do not sort the points, there is no particular relationship between the points in one side of the list

from the other.

Another approach, which is more reminiscent of QuickSort, is to select a random element from the list, called a pivot, x = P[r], where r is a random integer in the range from 1 to n, and then partition the list into two sublists: those elements whose x-coordinates are less than or equal to x and those that are greater than x. This will not be guaranteed to split the list into two equal parts, but on average it can be shown that it does a pretty good job.

Let's consider the first method. (The quicksort method will also work, but leads to a tougher analysis.) Here is a more concrete outline. We will describe the algorithm at a very high level. The input will be a point array, and a point array will be returned. The key ingredient is a function that takes the maxima of two sets, and merges them into an overall set of maxima.

Max Dominance: Divide-and-Conquer

MaxDom4(P, n) {
    if (n == 1) return {P[1]};        // one point is trivially maximal
    m = n/2;                          // midpoint of list
    M1 = MaxDom4(P[1..m], m);         // solve for first half
    M2 = MaxDom4(P[m+1..n], n-m);     // solve for second half
    return MaxMerge(M1, M2);          // merge the results
}

The general process is illustrated below.

The main question is how the procedure MaxMerge() is implemented, because it does all the work. Let us assume that it returns a list of points in sorted order according to the x-coordinates of the maximal points. Observe that if a point is to be maximal overall, then it must be maximal in one of the two sublists. However, just because a point is maximal in some list does not imply that it is globally maximal. (Consider point (7, 10) in the example.) However, if it dominates all the points of the other sublist, then we can assert that it is maximal.

I will describe the procedure at a very high level. It operates by walking through each of the two sorted lists of maximal points. It maintains two pointers, one pointing to the next unprocessed item in each list. Think of these as fingers. Take the finger pointing to the point with the smaller x-coordinate. If its y-coordinate is larger than the y-coordinate of the point under the other finger, then this point is maximal, and is copied to the next position of the result list. Otherwise it is not copied. In either case, we move to the next point in the same list, and repeat the process. The result list is returned.

The details will be left as an exercise. Observe that because we spend a constant amount of time processing each point (either copying it to the result list or skipping over it) the total execution time of this procedure is O(n).
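A possible Python rendering of this two-finger walk (the name max_merge and the tuple representation are my assumptions; ties on coordinates are ignored since the points are assumed distinct):

```python
def max_merge(M1, M2):
    """Merge two x-sorted maximal-point lists into the overall maximal list."""
    result = []
    i = j = 0
    while i < len(M1) and j < len(M2):
        a, b = M1[i], M2[j]
        if a[0] < b[0]:          # a is the finger with the smaller x-coordinate
            if a[1] > b[1]:      # y beats the other finger's y: a is maximal
                result.append(a)
            i += 1
        else:
            if b[1] > a[1]:
                result.append(b)
            j += 1
    result.extend(M1[i:])        # leftovers lie to the right of everything seen
    result.extend(M2[j:])
    return result
```

For instance, merging [(2,14), (7,13), (9,10)] with [(12,12), (14,10), (15,7), (16,4)] drops (9,10), which is dominated by (12,12).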

Recurrences: How do we analyze recursive procedures like this one? If there is a simple pattern to the sizes of the recursive calls, then the best way is usually by setting up a recurrence, that is, a function which is defined recursively in terms of itself.

We break the problem into two subproblems of size roughly n/2 (we will say exactly n/2 for simplicity), and the additional overhead of merging the solutions is O(n). We will ignore constant factors, writing O(n) just as n, giving:

T(n) = 1              if n = 1,
T(n) = 2T(n/2) + n    if n > 1.


[Figure: the twelve input points (2,14), (4,11), (5,1), (7,7), (7,13), (9,10), (11,5), (12,12), (13,3), (14,10), (15,7), (16,4) shown in three panels: the input and initial partition, the solutions to the two subproblems, and the merged solution (2,14), (7,13), (12,12), (14,10), (15,7), (16,4).]

Fig. 68: Divide and conquer approach.


Solving Recurrences by The Master Theorem: There are a number of methods for solving the sort of recurrences

that show up in divide-and-conquer algorithms. The easiest method is to apply the Master Theorem that is given

in CLRS. Here is a slightly more restrictive version, but adequate for a lot of instances. See CLRS for the more

complete version of the Master Theorem and its proof.

Theorem: (Simplified Master Theorem) Let a ≥ 1, b > 1 be constants and let T(n) be the recurrence

T(n) = aT(n/b) + cn^k,

defined for n ≥ 0.

Case (1): if a > b^k then T(n) is Θ(n^{log_b a}).
Case (2): if a = b^k then T(n) is Θ(n^k log n).
Case (3): if a < b^k then T(n) is Θ(n^k).

Using this version of the Master Theorem we can see that in our recurrence a = 2, b = 2, and k = 1, so a = b^k and case (2) applies. Thus T(n) is Θ(n log n).
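The three cases can be captured in a few lines of Python (my own sketch, not from the text):

```python
import math

def simplified_master(a, b, k):
    """Classify T(n) = a T(n/b) + c n^k by the simplified Master Theorem."""
    if a > b**k:
        return "Theta(n^%.3f)" % math.log(a, b)   # case (1): Theta(n^(log_b a))
    elif a == b**k:
        return "Theta(n^%d log n)" % k            # case (2)
    else:
        return "Theta(n^%d)" % k                  # case (3)

print(simplified_master(2, 2, 1))   # our recurrence: case (2)
```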

There are many recurrences that cannot be put into this form. For example, the following recurrence is quite common: T(n) = 2T(n/2) + n log n. This solves to T(n) = Θ(n log^2 n), but the Master Theorem (either this form or the one in CLRS) will not tell you this. For such recurrences, other methods are needed.

Expansion: A more basic method for solving recurrences is that of expansion (which CLRS calls iteration). This is

a rather painstaking process of repeatedly applying the deﬁnition of the recurrence until (hopefully) a simple

pattern emerges. This pattern usually results in a summation that is easy to solve. If you look at the proof in

CLRS for the Master Theorem, it is actually based on expansion.

Let us consider applying this to the following recurrence. We assume that n is a power of 3.

T(1) = 1
T(n) = 2T(n/3) + n    if n > 1

First we expand the recurrence into a summation, until seeing the general pattern emerge.

T(n) = 2T(n/3) + n
     = 2(2T(n/9) + n/3) + n = 4T(n/9) + (n + 2n/3)
     = 4(2T(n/27) + n/9) + (n + 2n/3) = 8T(n/27) + (n + 2n/3 + 4n/9)
     . . .
     = 2^k T(n/3^k) + Σ_{i=0}^{k−1} 2^i n/3^i = 2^k T(n/3^k) + n Σ_{i=0}^{k−1} (2/3)^i.

The parameter k is the number of expansions (not to be confused with the value of k we introduced earlier on the overhead). We want to know how many expansions are needed to arrive at the basis case. To do this we set n/3^k = 1, meaning that k = log_3 n. Substituting this in and using the identity a^{log b} = b^{log a} we have:

T(n) = 2^{log_3 n} T(1) + n Σ_{i=0}^{log_3 n − 1} (2/3)^i = n^{log_3 2} + n Σ_{i=0}^{log_3 n − 1} (2/3)^i.


Next, we can apply the formula for the geometric series and simplify to get:

T(n) = n^{log_3 2} + n (1 − (2/3)^{log_3 n})/(1 − (2/3))
     = n^{log_3 2} + 3n(1 − (2/3)^{log_3 n}) = n^{log_3 2} + 3n(1 − n^{log_3 (2/3)})
     = n^{log_3 2} + 3n(1 − n^{(log_3 2)−1}) = n^{log_3 2} + 3n − 3n^{log_3 2}
     = 3n − 2n^{log_3 2}.

Since log_3 2 ≈ 0.631 < 1, T(n) is dominated by the 3n term asymptotically, and so it is Θ(n).
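We can check this closed form against the recurrence directly (a sanity check of mine, for powers of 3):

```python
import math

# Verify that T(n) = 3n - 2 n^(log_3 2) satisfies T(1) = 1, T(n) = 2T(n/3) + n.

def T(n):
    return 1 if n == 1 else 2 * T(n // 3) + n

for n in [1, 3, 9, 27, 81, 243]:
    closed_form = 3 * n - 2 * n ** math.log(2, 3)
    assert abs(T(n) - closed_form) < 1e-6, (n, T(n), closed_form)
print("closed form agrees with the recurrence")
```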

Induction and Constructive Induction: Another technique for solving recurrences (and this works for summations as well) is to guess the solution, or the general form of the solution, and then attempt to verify its correctness through induction. Sometimes there are parameters whose values you do not know. This is fine. In the course of the induction proof, you will usually find out what these values must be. We will consider a famous example, that of the Fibonacci numbers.

F_0 = 0
F_1 = 1
F_n = F_{n−1} + F_{n−2}    for n ≥ 2.

The Fibonacci numbers arise in data structure design. If you study AVL (height balanced) trees in data structures, you will learn that the minimum-sized AVL trees are produced by the recursive construction given below. Let L(i) denote the number of leaves in the minimum-sized AVL tree of height i. To construct a minimum-sized AVL tree of height i, you create a root node whose children consist of minimum-sized AVL trees of heights i − 1 and i − 2. Thus the number of leaves obeys L(0) = L(1) = 1, L(i) = L(i − 1) + L(i − 2). It is easy to see that L(i) = F_{i+1}.

[Figure: L(0) = 1, L(1) = 1, L(2) = 2, L(3) = 3, L(4) = 5.]

Fig. 69: Minimum-sized AVL trees.

If you expand the Fibonacci series for a number of terms, you will observe that F_n appears to grow exponentially, but not as fast as 2^n. It is tempting to conjecture that F_n ≤ φ^{n−1}, for some real parameter φ, where 1 < φ < 2. We can use induction to prove this and derive a bound on φ.

Lemma: For all integers n ≥ 1, F_n ≤ φ^{n−1} for some constant φ, 1 < φ < 2.

Proof: We will try to derive the tightest bound we can on the value of φ.

Basis: For the basis cases we consider n = 1. Observe that F_1 = 1 ≤ φ^0, as desired.

Induction step: For the induction step, let us assume that F_m ≤ φ^{m−1} whenever 1 ≤ m < n. Using this induction hypothesis we will show that the lemma holds for n itself, whenever n ≥ 2.

Since n ≥ 2, we have F_n = F_{n−1} + F_{n−2}. Now, since n − 1 and n − 2 are both strictly less than n, we can apply the induction hypothesis, from which we have

F_n ≤ φ^{n−2} + φ^{n−3} = φ^{n−3}(1 + φ).


We want to show that this is at most φ^{n−1} (for a suitable choice of φ). Clearly this will be true if and only if (1 + φ) ≤ φ^2. This is not true for all values of φ (for example it is not true when φ = 1 but it is true when φ = 2).

At the critical value of φ this inequality will be an equality, implying that we want to find the roots of the equation

φ^2 − φ − 1 = 0.

By the quadratic formula we have

φ = (1 ± √(1 + 4))/2 = (1 ± √5)/2.

Since √5 ≈ 2.24, observe that one of the roots is negative, and hence would not be a possible candidate for φ. The positive root is

φ = (1 + √5)/2 ≈ 1.618.

There is a very subtle bug in the preceding proof. Can you spot it? The error occurs in the case n = 2. Here we claim that F_2 = F_1 + F_0 and then we apply the induction hypothesis to both F_1 and F_0. But the induction hypothesis only applies for m ≥ 1, and hence cannot be applied to F_0! To fix it we could include F_2 as part of the basis case as well.

Notice that not only did we prove the lemma by induction, but we actually determined the value of φ which makes the lemma true. This is why this method is called constructive induction.
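As a numeric spot-check (not part of the proof), the bound with φ = (1 + √5)/2 indeed holds for the first forty Fibonacci numbers:

```python
# Check the lemma F_n <= phi^(n-1) for n = 1..40, with phi the golden ratio.
phi = (1 + 5 ** 0.5) / 2

fibs = [0, 1]                       # F_0, F_1
while len(fibs) <= 40:
    fibs.append(fibs[-1] + fibs[-2])

assert all(fibs[n] <= phi ** (n - 1) for n in range(1, 41))
print("F_n <= phi^(n-1) holds for n = 1..40")
```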

By the way, the value φ = (1 + √5)/2 is a famous constant in mathematics, architecture and art. It is the golden ratio. Two numbers A and B satisfy the golden ratio if

A/B = (A + B)/A.

It is easy to verify that A = φ and B = 1 satisfies this condition. This proportion occurs throughout the world of art and architecture.

Supplemental Lecture 3: Recurrences and Generating Functions

Read: This material is not covered in CLR. There is a good description of generating functions in D. E. Knuth, The Art of Computer Programming, Vol. 1.

Generating Functions: The method of constructive induction provided a way to get a bound on F_n, but we did not get an exact answer, and we had to generate a good guess before we were even able to start.

Let us consider an approach to determine an exact representation of F_n, which requires no guesswork. This method is based on a very elegant concept, called a generating function. Consider any infinite sequence:

a_0, a_1, a_2, a_3, . . .

If we would like to "encode" this sequence succinctly, we could define a polynomial function such that these are the coefficients of the function:

G(z) = a_0 + a_1 z + a_2 z^2 + a_3 z^3 + . . .

This is called the generating function of the sequence. What is z? It is just a symbolic variable. We will (almost)

never assign it a speciﬁc value. Thus, every inﬁnite sequence of numbers has a corresponding generating func-

tion, and vice versa. What is the advantage of this representation? It turns out that we can perform arithmetic

Lecture Notes 101 CMSC 451

transformations on these functions (e.g., adding them, multiplying them, differentiating them) and this has a

corresponding effect on the underlying transformations. It turns out that some nicely-structured sequences (like

the Fibonacci numbers, and many sequences arising from linear recurrences) have generating functions that are

easy to write down and manipulate.

Let's consider the generating function for the Fibonacci numbers:

G(z) = F_0 + F_1 z + F_2 z^2 + F_3 z^3 + . . .
     = z + z^2 + 2z^3 + 3z^4 + 5z^5 + . . .

The trick in dealing with generating functions is to figure out how various manipulations of the generating function produce algebraically equivalent forms. For example, notice that if we multiply the generating function by a factor of z, this has the effect of shifting the sequence to the right:

G(z)     = F_0 + F_1 z + F_2 z^2 + F_3 z^3 + F_4 z^4 + . . .
zG(z)    =       F_0 z + F_1 z^2 + F_2 z^3 + F_3 z^4 + . . .
z^2 G(z) =               F_0 z^2 + F_1 z^3 + F_2 z^4 + . . .

Now, let's try the following manipulation. Compute G(z) − zG(z) − z^2 G(z), and see what we get:

(1 − z − z^2)G(z) = F_0 + (F_1 − F_0)z + (F_2 − F_1 − F_0)z^2 + (F_3 − F_2 − F_1)z^3
                    + . . . + (F_i − F_{i−1} − F_{i−2})z^i + . . .
                  = z.

Observe that every term except the second is equal to zero by the definition of F_i. (The particular manipulation we picked was chosen to cause this cancellation to occur.) From this we may conclude that

G(z) = z/(1 − z − z^2).

So, now we have an alternative representation for the Fibonacci numbers, as the coefficients of this function if expanded as a power series. So what good is this? The main goal is to get at the coefficients of its power series expansion. There are certain common tricks that people use to manipulate generating functions.

The first is to observe that there are some functions for which it is very easy to get a power series expansion. For example, the following is a simple consequence of the formula for the geometric series. If 0 < c < 1 then

Σ_{i=0}^{∞} c^i = 1/(1 − c).

Setting z = c, we have

1/(1 − z) = 1 + z + z^2 + z^3 + . . .

(In other words, 1/(1 − z) is the generating function for the sequence (1, 1, 1, . . .).) In general, given a constant a we have

1/(1 − az) = 1 + az + a^2 z^2 + a^3 z^3 + . . .

This is the generating function for (1, a, a^2, a^3, . . .). It would be great if we could modify our generating function to be in the form of 1/(1 − az) for some constant a, since then we could extract the coefficients of the power series easily.

In order to do this, we would like to rewrite the generating function in the following form:

G(z) = z/(1 − z − z^2) = A/(1 − az) + B/(1 − bz),


for some A, B, a, b. We will skip the steps in doing this, but it is not hard to verify that the roots of (1 − az)(1 − bz) (which are 1/a and 1/b) must be equal to the roots of 1 − z − z^2. We can then solve for a and b by taking the reciprocals of the roots of this quadratic. Then by some simple algebra we can plug these values in and solve for A and B, yielding:

G(z) = z/(1 − z − z^2) = (1/√5)/(1 − φz) + (−1/√5)/(1 − φ̂z) = (1/√5)(1/(1 − φz) − 1/(1 − φ̂z)),

where φ = (1 + √5)/2 and φ̂ = (1 − √5)/2. (In particular, to determine A, multiply the equation by 1 − φz, and then consider what happens when z = 1/φ. A similar trick can be applied to get B. In general, this is called the method of partial fractions.)

Now we are in good shape, because we can extract the coefficients for these two fractions from the above function. From this we have the following:

G(z) = (1/√5)((1 + φz + φ^2 z^2 + . . .) − (1 + φ̂z + φ̂^2 z^2 + . . .)).

Combining terms we have

G(z) = (1/√5) Σ_{i=0}^{∞} (φ^i − φ̂^i) z^i.

We can now read off the coefficients easily. In particular it follows that

F_n = (1/√5)(φ^n − φ̂^n).

This is an exact result, and no guesswork was needed. The only parts that involved some cleverness (beyond the invention of generating functions) were (1) coming up with the simple closed form formula for G(z) by taking appropriate differences and applying the rule for the recurrence, and (2) applying the method of partial fractions to get the generating function into one for which we could easily read off the final coefficients.

This is rather remarkable, because it says that we can express the integer F_n as the sum of powers of two irrational numbers φ and φ̂. You might try this for a few specific values of n to see why this is true. By the way, when you observe that |φ̂| < 1, it is clear that the first term is the dominant one. Thus we have, for large enough n, F_n = φ^n/√5, rounded to the nearest integer.
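This rounding claim is easy to confirm numerically (my check, using floating point, which is exact enough for small n):

```python
import math

# Check that rounding phi^n / sqrt(5) recovers F_n for n = 0..30.
sqrt5 = math.sqrt(5)
phi = (1 + sqrt5) / 2
phi_hat = (1 - sqrt5) / 2

fib = [0, 1]
for _ in range(29):
    fib.append(fib[-1] + fib[-2])

for n, F in enumerate(fib):
    assert round((phi**n - phi_hat**n) / sqrt5) == F   # the exact formula
    assert round(phi**n / sqrt5) == F                  # the rounding shortcut
print("exact formula verified for n = 0..30")
```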

Supplemental Lecture 4: Medians and Selection

Read: Chapter 9 of CLRS.

Selection: We have discussed recurrences and the divide-and-conquer method of solving problems. Today we will

give a rather surprising (and very tricky) algorithm which shows the power of these techniques.

The problem that we will consider is very easy to state, but surprisingly difﬁcult to solve optimally. Suppose

that you are given a set of n numbers. Deﬁne the rank of an element to be one plus the number of elements

that are smaller than this element. Since duplicate elements make our life more complex (by creating multiple

elements of the same rank), we will make the simplifying assumption that all the elements are distinct for now.

It will be easy to get around this assumption later. Thus, the rank of an element is its ﬁnal position if the set is

sorted. The minimum is of rank 1 and the maximum is of rank n.

Of particular interest in statistics is the median. If n is odd then the median is defined to be the element of rank (n + 1)/2. When n is even there are two natural choices, namely the elements of ranks n/2 and (n/2) + 1. In statistics it is common to return the average of these two elements. We will define the median to be either of these elements.

Medians are useful as measures of the central tendency of a set, especially when the distribution of values is highly skewed. For example, the median income in a community is likely to be a more meaningful measure of the central tendency than the average is, since if Bill Gates lives in your community then his gigantic income may significantly bias the average, whereas it cannot have a significant influence on the median. They are also useful since, in divide-and-conquer applications, it is often desirable to partition a set about its median value, into two sets of roughly equal size. Today we will focus on the following generalization, called the selection problem.

Selection: Given a set A of n distinct numbers and an integer k, 1 ≤ k ≤ n, output the element of A of rank k.

The selection problem can easily be solved in Θ(n log n) time, simply by sorting the numbers of A, and then returning A[k]. The question is whether it is possible to do better. In particular, is it possible to solve this problem in Θ(n) time? We will see that the answer is yes, and the solution is far from obvious.

The Sieve Technique: The reason for introducing this algorithm is that it illustrates a very important special case of divide-and-conquer, which I call the sieve technique. We think of divide-and-conquer as breaking the problem into a small number of smaller subproblems, which are then solved recursively. The sieve technique is a special case, where the number of subproblems is just 1.

The sieve technique works in phases as follows. It applies to problems where we are interested in finding a single item from a larger set of n items. We do not know which item is of interest; however, after doing some amount of analysis of the data, taking say Θ(n^k) time, for some constant k, we find that we do not know what the desired item is, but we can identify a large enough number of elements that cannot be the desired value, and can be eliminated from further consideration. In particular "large enough" means that the number of items is at least some fixed constant fraction of n (e.g. n/2, n/3, 0.0001n). Then we solve the problem recursively on whatever items remain. Each of the resulting recursive solutions then does the same thing, eliminating a constant fraction of the remaining set.

Applying the Sieve to Selection: To see more concretely how the sieve technique works, let us apply it to the selec-

tion problem. Recall that we are given an array A[1..n] and an integer k, and want to ﬁnd the k-th smallest

element of A. Since the algorithm will be applied inductively, we will assume that we are given a subarray

A[p..r] as we did in MergeSort, and we want to ﬁnd the kth smallest item (where k ≤ r − p + 1). The initial

call will be to the entire array A[1..n].

There are two principal algorithms for solving the selection problem, but they differ only in one step, which

involves judiciously choosing an item from the array, called the pivot element, which we will denote by x. Later

we will see how to choose x, but for now just think of it as a random element of A. We then partition A into three parts. A[q] contains the element x, subarray A[p..q − 1] will contain all the elements that are less than x, and A[q + 1..r] will contain all the elements that are greater than x. (Recall that we assumed that all the elements are distinct.) Within each subarray, the items may appear in any order. This is illustrated below.

It is easy to see that the rank of the pivot x is q − p + 1 in A[p..r]. Let xRank = q − p + 1. If k = xRank, then the pivot is the kth smallest, and we may just return it. If k < xRank, then we know that we need to recursively search in A[p..q − 1], and if k > xRank then we need to recursively search A[q + 1..r]. In this latter case we have eliminated the pivot and the xRank − 1 smaller elements, so we want to find the element of rank k − xRank. Here is the complete pseudocode.

Notice that this algorithm satisﬁes the basic form of a sieve algorithm. It analyzes the data (by choosing the pivot

element and partitioning) and it eliminates some part of the data set, and recurses on the rest. When k = xRank

then we get lucky and eliminate everything. Otherwise we either eliminate the pivot and the right subarray or

the pivot and the left subarray.

We will discuss the details of choosing the pivot and partitioning later, but assume for now that they both take

Θ(n) time. The question that remains is how many elements did we succeed in eliminating? If x is the largest


[Figure: the array before and after partitioning about the pivot x, giving <A[p..q−1] < x, x, A[q+1..r] > x>, followed by a trace with k = 6: partitioning about the pivot 4 gives xRank = 4, so we recurse on the right part with k = 6 − 4 = 2; partitioning about the pivot 7 gives xRank = 3, so we recurse on the left with k = 2; partitioning about the pivot 6 gives xRank = 2 = k, and we are done.]

Fig. 70: Selection Algorithm.

Selection by the Sieve Technique

Select(array A, int p, int r, int k) {        // return kth smallest of A[p..r]
    if (p == r) return A[p]                   // only 1 item left, return it
    else {
        x = ChoosePivot(A, p, r)              // choose the pivot element
        q = Partition(A, p, r, x)             // partition <A[p..q-1], x, A[q+1..r]>
        xRank = q - p + 1                     // rank of the pivot
        if (k == xRank) return x              // the pivot is the kth smallest
        else if (k < xRank)
            return Select(A, p, q-1, k)       // select from left subarray
        else
            return Select(A, q+1, r, k-xRank) // select from right subarray
    }
}
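Here is a runnable Python sketch of the same algorithm; it copies into sublists rather than partitioning in place, and it uses a random pivot since the careful pivot choice is deferred (both simplifications are mine):

```python
import random

def select(A, k):
    """Return the element of rank k (1-based) among the distinct numbers in A."""
    if len(A) == 1:
        return A[0]                          # only 1 item left, return it
    x = random.choice(A)                     # ChoosePivot: a random element
    smaller = [a for a in A if a < x]        # plays the role of A[p..q-1]
    larger = [a for a in A if a > x]         # plays the role of A[q+1..r]
    x_rank = len(smaller) + 1                # rank of the pivot
    if k == x_rank:
        return x                             # the pivot is the kth smallest
    elif k < x_rank:
        return select(smaller, k)            # sieve away the pivot and larger
    else:
        return select(larger, k - x_rank)    # sieve away the pivot and smaller

print(select([5, 9, 2, 6, 4, 1, 3, 7], 6))  # the 6th smallest, namely 6
```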


or smallest element in the array, then we may only succeed in eliminating one element with each phase. In fact,

if x is one of the smallest elements of A or one of the largest, then we get into trouble, because we may only

eliminate it and the few smaller or larger elements of A. Ideally x should have a rank that is neither too large

nor too small.

Let us suppose for now (optimistically) that we are able to design the procedure Choose Pivot in such a
way that it eliminates exactly half the array with each phase, meaning that we recurse on the remaining n/2

elements. This would lead to the following recurrence.

    T(n) =  1             if n = 1,
            T(n/2) + n    otherwise.

We can solve this either by expansion (iteration) or the Master Theorem. If we expand this recurrence level by

level we see that we get the summation

    T(n) = n + n/2 + n/4 + · · · ≤ Σ_{i=0}^{∞} n/2^i = n Σ_{i=0}^{∞} (1/2)^i.

Recall the formula for the infinite geometric series: for any c such that |c| < 1, Σ_{i=0}^{∞} c^i = 1/(1 − c). Using this we have

    T(n) ≤ 2n ∈ O(n).

(This only proves the upper bound on the running time, but it is easy to see that it takes at least Ω(n) time, so

the total running time is Θ(n).)

This is a bit counterintuitive. Normally you would think that in order to design a Θ(n) time algorithm you could

only make a single, or perhaps a constant number of passes over the data set. In this algorithm we make many

passes (it could be as many as lg n). However, because we eliminate a constant fraction of elements with each

phase, we get this convergent geometric series in the analysis, which shows that the total running time is indeed

linear in n. This lesson is well worth remembering. It is often possible to achieve running times in ways that

you would not expect.

Note that the assumption of eliminating half was not critical. If we eliminated even one per cent, then the

recurrence would have been T(n) = T(99n/100) +n, and we would have gotten a geometric series involving

99/100, which is still less than 1, implying a convergent series. Eliminating any constant fraction would have

been good enough.
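To make the sieve concrete, here is a short runnable Python sketch of the selection algorithm. It uses a random pivot and a simple list-based partition purely for illustration; the deterministic pivot rule and in-place partitioning are treated later in the lecture, and the helper names are not the notes' official code.

```python
import random

def partition(A, p, r, x):
    """Three-way partition of A[p..r] about pivot value x; returns pivot index q."""
    less = [a for a in A[p:r+1] if a < x]
    equal = [a for a in A[p:r+1] if a == x]
    greater = [a for a in A[p:r+1] if a > x]
    A[p:r+1] = less + equal + greater
    return p + len(less)                       # index of (a copy of) the pivot

def select(A, p, r, k):
    """Return the k-th smallest (1-based) element of A[p..r]."""
    if p == r:
        return A[p]                            # only 1 item left, return it
    x = random.choice(A[p:r+1])                # choose the pivot element
    q = partition(A, p, r, x)                  # A[p..q-1] < x == A[q] < A[q+1..r]
    x_rank = q - p + 1                         # rank of the pivot within A[p..r]
    if k == x_rank:
        return A[q]                            # the pivot is the kth smallest
    elif k < x_rank:
        return select(A, p, q - 1, k)          # recurse on the left subarray
    else:
        return select(A, q + 1, r, k - x_rank) # eliminate pivot and left side
```

Whatever pivot is chosen, each call eliminates the pivot and one side, which is exactly the sieve behavior analyzed above.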

Choosing the Pivot: There are two issues that we have left unresolved. The ﬁrst is how to choose the pivot element,

and the second is how to partition the array. Both need to be solved in Θ(n) time. The second problem is a

rather easy programming exercise. Later, when we discuss QuickSort, we will discuss partitioning in detail.

For the rest of the lecture, let’s concentrate on how to choose the pivot. Recall that before we said that we might

think of the pivot as a random element of A. Actually this is not such a bad idea. Let’s see why.

The key is that we want the procedure to eliminate at least some constant fraction of the array after each parti-

tioning step. Let’s consider the top of the recurrence, when we are given A[1..n]. Suppose that the pivot x turns

out to be of rank q in the array. The partitioning algorithm will split the array into A[1..q − 1] < x, A[q] = x

and A[q + 1..n] > x. If k = q, then we are done. Otherwise, we need to search one of the two subarrays. They

are of sizes q − 1 and n − q, respectively. The subarray that contains the kth smallest element will generally

depend on what k is, so in the worst case, k will be chosen so that we have to recurse on the larger of the two

subarrays. Thus if q > n/2, then we may have to recurse on the left subarray of size q −1, and if q < n/2, then

we may have to recurse on the right subarray of size n −q. In either case, we are in trouble if q is very small, or

if q is very large.

If we could select q so that it is roughly of middle rank, then we will be in good shape. For example, if

n/4 ≤ q ≤ 3n/4, then the larger subarray will never be larger than 3n/4. Earlier we said that we might think

of the pivot as a random element of the array A. Actually this works pretty well in practice. The reason is that


roughly half of the elements lie between ranks n/4 and 3n/4, so picking a random element as the pivot will

succeed, about half the time, in eliminating at least n/4 of the elements. Of course, we might be continually unlucky, but a careful

analysis will show that the expected running time is still Θ(n). We will return to this later.

Instead, we will describe a rather complicated method for computing a pivot element that achieves the desired

properties. Recall that we are given an array A[1..n], and we want to compute an element x whose rank is

(roughly) between n/4 and 3n/4. We will have to describe this algorithm at a very high level, since the details

are rather involved. Here is the description for Select Pivot:

Groups of 5: Partition A into groups of 5 elements, e.g. A[1..5], A[6..10], A[11..15], etc. There will be exactly

m = ⌈n/5⌉ such groups (the last one might have fewer than 5 elements). This can easily be done in Θ(n)

time.

Group medians: Compute the median of each group of 5. There will be m group medians. We do not need an

intelligent algorithm to do this, since each group has only a constant number of elements. For example, we

could just BubbleSort each group and take the middle element. Each will take Θ(1) time, and repeating

this ⌈n/5⌉ times will give a total running time of Θ(n). Copy the group medians to a new array B.

Median of medians: Compute the median of the group medians. For this, we will have to call the selection
algorithm recursively on B, e.g. Select(B, 1, m, k), where m = ⌈n/5⌉ and k = ⌊(m + 1)/2⌋.
Let x be this median of medians. Return x as the desired pivot.
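The pivot rule can be sketched in a few lines of Python. One simplification: this sketch finds the median of the group medians by sorting, whereas the real algorithm obtains it with a recursive call to Select; the rank guarantee being illustrated is the same.

```python
def choose_pivot(A):
    """Groups-of-5 pivot rule (illustration only).

    Returns an element whose rank in A is roughly between n/4 and 3n/4.
    For brevity the median of the group medians is found by sorting;
    the real algorithm calls Select recursively instead.
    """
    # Partition A into groups of 5 (the last group may be smaller).
    groups = [A[i:i + 5] for i in range(0, len(A), 5)]
    # Median of each group: a constant amount of work per group.
    medians = [sorted(g)[(len(g) - 1) // 2] for g in groups]
    m = len(medians)
    return sorted(medians)[(m - 1) // 2]       # median of the medians
```

On a 100-element array, for example, the returned pivot is guaranteed to have rank between 25 and 75, matching the lemma below.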

The algorithm is illustrated in the ﬁgure below. To establish the correctness of this procedure, we need to argue

that x satisﬁes the desired rank properties.

Fig. 71: Choosing the Pivot. 30 is the final pivot. (The figure shows the array arranged into groups of 5, the computation of the group medians, and the median of the medians; the sorting of the group medians is not really performed.)

Lemma: The element x is of rank at least n/4 and at most 3n/4 in A.

Proof: We will show that x is of rank at least n/4. The other part of the proof is essentially symmetrical. To

do this, we need to show that there are at least n/4 elements that are less than or equal to x. This is a bit

complicated, due to the ﬂoor and ceiling arithmetic, so to simplify things we will assume that n is evenly

divisible by 5. Consider the groups shown in the tabular form above. Observe that at least half of the group

medians are less than or equal to x. (Because x is their median.) And for each group median, there are

three elements that are less than or equal to this median within its group (because it is the median of its
group). Therefore, there are at least 3(n/5)/2 = 3n/10 ≥ n/4 elements that are less than or equal to x
in the entire array.

Analysis: The last order of business is to analyze the running time of the overall algorithm. We achieved the main

goal, namely that of eliminating a constant fraction (at least 1/4) of the remaining list at each stage of the

algorithm. The recursive call in Select() will be made to a list no larger than 3n/4. However, in order

to achieve this, within Select Pivot() we needed to make a recursive call to Select() on an array B

consisting of ⌈n/5⌉ elements. Everything else took only Θ(n) time. As usual, we will ignore floors and ceilings,

and write the Θ(n) as n for concreteness. The running time is

    T(n) ≤  1                          if n = 1,
            T(n/5) + T(3n/4) + n       otherwise.

This is a very strange recurrence because it involves a mixture of different fractions (n/5 and 3n/4). This

mixture will make it impossible to use the Master Theorem, and difﬁcult to apply iteration. However, this is a

good place to apply constructive induction. We know we want an algorithm that runs in Θ(n) time.

Theorem: There is a constant c, such that T(n) ≤ cn.

Proof: (by strong induction on n)

Basis: (n = 1) In this case we have T(n) = 1, and so T(n) ≤ cn as long as c ≥ 1.

Step: We assume that T(n′) ≤ cn′ for all n′ < n. We will then show that T(n) ≤ cn. By definition we
have

    T(n) = T(n/5) + T(3n/4) + n.

Since n/5 and 3n/4 are both less than n, we can apply the induction hypothesis, giving

    T(n) ≤ c(n/5) + c(3n/4) + n = cn(1/5 + 3/4) + n = cn(19/20) + n = n((19c/20) + 1).

This last expression will be ≤ cn, provided that we select c such that c ≥ (19c/20) + 1. Solving for
c we see that this is true provided that c ≥ 20.

Combining the constraints that c ≥ 1 and c ≥ 20, we see that by letting c = 20, we are done.

A natural question is why did we pick groups of 5? If you look at the proof above, you will see that it works for

any value that is strictly greater than 4. (You might try it replacing the 5 with 3, 4, or 6 and see what happens.)

Supplemental Lecture 5: Analysis of BucketSort

Probabilistic Analysis of BucketSort: We begin with a quick-and-dirty analysis of bucketsort. Since there are n

buckets, and the items fall uniformly between them, we would expect a constant number of items per bucket.

Thus, the expected insertion time for each bucket is only a constant. Therefore the expected running time of

the algorithm is Θ(n). This quick-and-dirty analysis is probably good enough to convince yourself of this

algorithm’s basic efﬁciency. A careful analysis involves understanding a bit about probabilistic analyses of

algorithms. Since we haven’t done any probabilistic analyses yet, let’s try doing this one. (This one is rather

typical.)

The ﬁrst thing to do in a probabilistic analysis is to deﬁne a random variable that describes the essential quantity

that determines the execution time. A discrete random variable can be thought of as variable that takes on some

set of discrete values with certain probabilities. More formally, it is a function that maps some discrete
sample space (the set of possible values) onto the reals (the probabilities). For 0 ≤ i ≤ n − 1, let X_i denote the
random variable that indicates the number of elements assigned to the i-th bucket.

Since the distribution is uniform, all of the random variables X_i have the same probability distribution, so we
may as well talk about a single random variable X, which will work for any bucket. Since we are using a
quadratic time algorithm to sort the elements of each bucket, we are interested in the expected sorting time,
which is Θ(X^2). So this leads to the key question: what is the expected value of X^2, denoted E[X^2]?

Because the elements are assumed to be uniformly distributed, each element has an equal probability of going

into any bucket, or in particular, it has a probability of p = 1/n of going into the ith bucket. So how many items

do we expect will wind up in bucket i? We can analyze this by thinking of each element of Aas being represented

by a coin ﬂip (with a biased coin, which has a different probability of heads and tails). With probability p = 1/n

the number goes into bucket i, which we will interpret as the coin coming up heads. With probability 1 − 1/n

the item goes into some other bucket, which we will interpret as the coin coming up tails. Since we assume

that the elements of A are independent of each other, X is just the total number of heads we see after making n

tosses with this (biased) coin.

The number of times that a heads event occurs, given n independent trials in which each trial has two possible

outcomes is a well-studied problem in probability theory. Such trials are called Bernoulli trials (named after the

Swiss mathematician James Bernoulli). If p is the probability of getting a head, then the probability of getting k

heads in n tosses is given by the following important formula

    P(X = k) = C(n, k) p^k (1 − p)^(n−k),    where C(n, k) = n! / (k!(n − k)!)

is the binomial coefficient ("n choose k"). Although this looks messy, it is not too hard to see where it comes from. Basically p^k is the probability of
tossing k heads, (1 − p)^(n−k) is the probability of tossing n − k tails, and C(n, k) is the total number of different
ways that the k heads could be distributed among the n tosses. This probability distribution (as a function of k,
for a given n and p) is called the binomial distribution, and is denoted b(k; n, p).

If you consult a standard textbook on probability and statistics, then you will see the two important facts that we

need to know about the binomial distribution. Namely, that its mean value E[X] and its variance Var[X] are

    E[X] = np    and    Var[X] = E[X^2] − E^2[X] = np(1 − p).

We want to determine E[X^2]. By the above formulas and the fact that p = 1/n we can derive this as

    E[X^2] = Var[X] + E^2[X] = np(1 − p) + (np)^2 = (n/n)(1 − 1/n) + (n/n)^2 = 2 − 1/n.

Thus, for large n the time to insert the items into any one of the linked lists is just a shade less than 2. Summing
up over all n buckets gives a total running time of Θ(2n) = Θ(n). This is exactly what our quick-and-dirty
analysis gave us, but now we know it is true with confidence.
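The closed form E[X^2] = 2 − 1/n can also be double-checked numerically from the binomial distribution. A quick sketch (math.comb supplies exact binomial coefficients; the function name is illustrative):

```python
from math import comb

def expected_X_squared(n):
    """E[X^2] for X ~ Binomial(n, 1/n), computed directly from the pmf."""
    p = 1.0 / n
    return sum(k * k * comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1))

# Matches the closed form 2 - 1/n derived above (up to floating-point error).
for n in (2, 5, 10, 100):
    assert abs(expected_X_squared(n) - (2 - 1 / n)) < 1e-9
```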

Supplemental Lecture 6: Long Integer Multiplication

Read: This material on integer multiplication is not covered in CLRS.

Long Integer Multiplication: The following little algorithm shows a bit more about the surprising applications of

divide-and-conquer. The problem that we want to consider is how to perform arithmetic on long integers, and

multiplication in particular. The reason for doing arithmetic on long numbers stems from cryptography. Most

techniques for encryption are based on number-theoretic techniques. For example, the character string to be

encrypted is converted into a sequence of numbers, and encryption keys are stored as long integers. Efﬁcient


encryption and decryption depends on being able to perform arithmetic on long numbers, typically containing

hundreds of digits.

Addition and subtraction on large numbers is relatively easy. If n is the number of digits, then these algorithms

run in Θ(n) time. (Go back and analyze your solution to the problem on Homework 1). But the standard

algorithm for multiplication runs in Θ(n^2) time, which can be quite costly when lots of long multiplications are

needed.

This raises the question of whether there is a more efﬁcient way to multiply two very large numbers. It would

seem surprising if there were, since for centuries people have used the same algorithm that we all learn in grade

school. In fact, we will see that it is possible.

Divide-and-Conquer Algorithm: We know the basic grade-school algorithm for multiplication. We normally think

of this algorithm as applying on a digit-by-digit basis, but if we partition an n-digit number into two "super
digits" of roughly n/2 digits each, the same multiplication rule still applies.

Fig. 72: Long integer multiplication. (The figure shows A split into super digits w and x and B split into y and z, each of n/2 digits, with the product assembled from wy, wz + xy, and xz.)

To avoid complicating things with ﬂoors and ceilings, let’s just assume that the number of digits n is a power of

2. Let A and B be the two numbers to multiply. Let A[0] denote the least significant digit and let A[n − 1] denote

the most signiﬁcant digit of A. Because of the way we write numbers, it is more natural to think of the elements

of A as being indexed in decreasing order from left to right as A[n −1..0] rather than the usual A[0..n −1].

Let m = n/2. Let

    w = A[n − 1..m]    x = A[m − 1..0]    and
    y = B[n − 1..m]    z = B[m − 1..0].

If we think of w, x, y and z as n/2 digit numbers, we can express A and B as

    A = w · 10^m + x
    B = y · 10^m + z,

and their product is

    mult(A, B) = mult(w, y) · 10^(2m) + (mult(w, z) + mult(x, y)) · 10^m + mult(x, z).

The operation of multiplying by 10^m should be thought of as simply shifting the digits of the number over by m positions (that is, appending m zeros), and so is not really a
multiplication. Observe that all the additions involve numbers of roughly

n/2 digits, and so they take Θ(n) time each. Thus, we can express the multiplication of two long integers as the

result of four products on integers of roughly half the length of the original, and a constant number of additions

and shifts, each taking Θ(n) time. This suggests that if we were to implement this algorithm, its running time

would be given by the following recurrence

    T(n) =  1              if n = 1,
            4T(n/2) + n    otherwise.


If we apply the Master Theorem, we see that a = 4, b = 2, k = 1, and a > b^k, implying that Case 1 holds and
the running time is Θ(n^(lg 4)) = Θ(n^2). Unfortunately, this is no better than the standard algorithm.

Faster Divide-and-Conquer Algorithm: Even though the above exercise appears to have gotten us nowhere, it ac-

tually has given us an important insight. It shows that the critical element is the number of multiplications on

numbers of size n/2. The number of additions (as long as it is a constant) does not affect the running time. So,

if we could ﬁnd a way to arrive at the same result algebraically, but by trading off multiplications in favor of

additions, then we would have a more efﬁcient algorithm. (Of course, we cannot simulate multiplication through

repeated additions, since the number of additions must be a constant, independent of n.)

The key turns out to be an algebraic "trick". The quantities that we need to compute are C = wy, D = xz,

and E = (wz + xy). Above, it took us four multiplications to compute these. However, observe that if instead

we compute the following quantities, we can get everything we want, using only three multiplications (but with

more additions and subtractions).

C = mult(w, y)

D = mult(x, z)

E = mult((w +x), (y +z)) −C −D = (wy +wz +xy +xz) −wy −xz = (wz +xy).

Finally we have

    mult(A, B) = C · 10^(2m) + E · 10^m + D.

Altogether we perform 3 multiplications, 4 additions, and 2 subtractions, all on numbers with n/2 digits. We

still need to shift the terms into their proper ﬁnal positions. The additions, subtractions, and shifts take Θ(n)

time in total. So the total running time is given by the recurrence:

    T(n) =  1              if n = 1,
            3T(n/2) + n    otherwise.

Now when we apply the Master Theorem, we have a = 3, b = 2 and k = 1, yielding T(n) ∈ Θ(n^(lg 3)) ≈ Θ(n^1.585).
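This three-multiplication scheme is commonly known as Karatsuba's algorithm. Here is a minimal runnable sketch in Python, operating directly on nonnegative integers; the single-digit base case and splitting by decimal digit count are illustrative choices, not the only possible ones.

```python
def karatsuba(A, B):
    """Multiply nonnegative integers A and B with 3 recursive products."""
    if A < 10 or B < 10:                     # base case: a single digit
        return A * B
    n = max(len(str(A)), len(str(B)))        # number of digits
    m = n // 2
    w, x = divmod(A, 10**m)                  # A = w*10^m + x
    y, z = divmod(B, 10**m)                  # B = y*10^m + z
    C = karatsuba(w, y)                      # wy
    D = karatsuba(x, z)                      # xz
    E = karatsuba(w + x, y + z) - C - D      # wz + xy, via one multiplication
    return C * 10**(2 * m) + E * 10**m + D   # shifts, additions: Theta(n)
```

(Python's built-in * on large integers is already fast, so this is purely to illustrate the recursion structure.)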

Is this really an improvement? This algorithm carries a larger constant factor because of the overhead of recur-

sion and the additional arithmetic operations. But asymptotics says that if n is large enough, then this algorithm

will be superior. For example, if we assume that the clever algorithm has overheads that are 5 times greater

than the simple algorithm (e.g. 5n^1.585 versus n^2), then this algorithm beats the simple algorithm for n ≥ 50.
If the overhead was 10 times larger, then the crossover would occur for n ≥ 260. Although this may seem like
a very large number, recall that in cryptography applications, encryption keys of this length and longer are quite
reasonable.
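These crossover estimates can be reproduced with a few lines of Python. Note the assumptions: the exponent 1.585 approximates lg 3, and the overhead factors 5 and 10 are the hypothetical values from the discussion above.

```python
def crossover(overhead, fast_exp=1.585, slow_exp=2.0):
    """Smallest n at which overhead * n^fast_exp drops below n^slow_exp."""
    n = 2
    while overhead * n**fast_exp >= n**slow_exp:
        n += 1
    return n
```

With these assumptions, crossover(5) comes out near 50 and crossover(10) near 260, matching the estimates in the text.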

Supplemental Lecture 7: Dynamic Programming: 0–1 Knapsack Problem

Read: The introduction to Chapter 16 in CLR. The material on the Knapsack Problem is not presented in our text, but

is brieﬂy discussed in Section 17.2.

0-1 Knapsack Problem: Imagine that a burglar breaks into a museum and finds n items. Let v_i denote the value of the
i-th item, and let w_i denote the weight of the i-th item. The burglar carries a knapsack capable of holding total
weight W. The burglar wishes to carry away the most valuable subset of items subject to the weight constraint.

For example, a burglar would rather steal diamonds before gold because the value per pound is better. But he

would rather steal gold before lead for the same reason. We assume that the burglar cannot take a fraction of an

object, so he/she must make a decision to take the object entirely or leave it behind. (There is a version of the


problem where the burglar can take a fraction of an object for a fraction of the value and weight. This is much

easier to solve.)

More formally, given ⟨v_1, v_2, . . . , v_n⟩ and ⟨w_1, w_2, . . . , w_n⟩, and W > 0, we wish to determine the subset
T ⊆ {1, 2, . . . , n} (of objects to "take") that maximizes

    Σ_{i∈T} v_i,    subject to    Σ_{i∈T} w_i ≤ W.

Let us assume that the v_i's, w_i's and W are all positive integers. It turns out that this problem is NP-complete,
and so we cannot really hope to find an efficient solution. However if we make the same sort of assumption that
we made in counting sort, we can come up with an efficient solution.

We assume that the w_i's are small integers, and that W itself is a small integer. We show that this problem

can be solved in O(nW) time. (Note that this is not very good if W is a large integer. But if we truncate our

numbers to lower precision, this gives a reasonable approximation algorithm.)

Here is how we solve the problem. We construct an array V[0..n, 0..W]. For 1 ≤ i ≤ n and 0 ≤ j ≤ W, the
entry V[i, j] will store the maximum value of any subset of objects {1, 2, . . . , i} that can fit into a knapsack of
weight j. If we can compute all the entries of this array, then the array entry V[n, W] will contain the maximum
value of all n objects that can fit into the entire knapsack of weight W.

To compute the entries of the array V we will apply an inductive approach. As a basis, observe that V[0, j] = 0

for 0 ≤ j ≤ W since if we have no items then we have no value. We consider two cases:

Leave object i: If we choose to not take object i, then the optimal value will come about by considering how
to fill a knapsack of size j with the remaining objects {1, 2, . . . , i − 1}. This is just V[i − 1, j].

Take object i: If we take object i, then we gain a value of v_i but have used up w_i of our capacity. With the
remaining j − w_i capacity in the knapsack, we can fill it in the best possible way with objects {1, 2, . . . , i − 1}.
This is v_i + V[i − 1, j − w_i]. This is only possible if w_i ≤ j.

Since these are the only two possibilities, we can see that we have the following rule for constructing the array

V . The ranges on i and j are i ∈ [0..n] and j ∈ [0..W].

    V[0, j] = 0
    V[i, j] =  V[i − 1, j]                                    if w_i > j
               max(V[i − 1, j], v_i + V[i − 1, j − w_i])      if w_i ≤ j

The ﬁrst line states that if there are no objects, then there is no value, irrespective of j. The second line

implements the rule above.

It is very easy to take these rules and produce an algorithm that computes the maximum value for the knapsack

in time proportional to the size of the array, which is O((n + 1)(W + 1)) = O(nW). The algorithm is given

below.

An example is shown in the ﬁgure below. The ﬁnal output is V [n, W] = V [4, 10] = 90. This reﬂects the

selection of items 2 and 4, of values $40 and $50, respectively and weights 4 + 3 ≤ 10.

The only missing detail is what items should we select to achieve the maximum. We will leave this as an

exercise. The key is to record for each entry V[i, j] in the matrix whether we got this entry by taking the ith

item or leaving it. With this information, it is possible to reconstruct the optimum knapsack contents.


0-1 Knapsack Problem

KnapSack(v[1..n], w[1..n], n, W) {
    allocate V[0..n][0..W];
    for j = 0 to W do V[0, j] = 0;                   // initialization
    for i = 1 to n do {
        for j = 0 to W do {
            leave_val = V[i-1, j];                   // total value if we leave i
            if (j >= w[i])                           // enough capacity to take i
                take_val = v[i] + V[i-1, j - w[i]];  // total value if we take i
            else
                take_val = -INFINITY;                // cannot take i
            V[i,j] = max(leave_val, take_val);       // final value is max
        }
    }
    return V[n, W];
}

Values of the objects are ⟨10, 40, 30, 50⟩.
Weights of the objects are ⟨5, 4, 6, 3⟩.

                        Capacity → j = 0   1   2   3   4   5   6   7   8   9  10
    Item  Value  Weight
     0      –      –               0   0   0   0   0   0   0   0   0   0   0
     1     10      5               0   0   0   0   0  10  10  10  10  10  10
     2     40      4               0   0   0   0  40  40  40  40  40  50  50
     3     30      6               0   0   0   0  40  40  40  40  40  50  70
     4     50      3               0   0   0  50  50  50  50  90  90  90  90

Final result is V[4, 10] = 90 (for taking items 2 and 4).

Fig. 73: 0–1 Knapsack Example.
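The table-filling rule and the reconstruction exercise can be sketched together in Python. This is one possible approach, not the notes' official solution: rather than storing an explicit take/leave flag, it walks the finished table backwards, observing that item i can safely be left whenever V[i][j] = V[i−1][j], and must have been taken otherwise.

```python
def knapsack(v, w, W):
    """0-1 knapsack; returns (max value, sorted list of 1-based item indices)."""
    n = len(v)
    V = [[0] * (W + 1) for _ in range(n + 1)]      # V[0][j] = 0: no items
    for i in range(1, n + 1):
        for j in range(W + 1):
            V[i][j] = V[i - 1][j]                  # leave object i
            if w[i - 1] <= j:                      # take object i, if it fits
                V[i][j] = max(V[i][j], v[i - 1] + V[i - 1][j - w[i - 1]])
    # Reconstruction: item i was taken iff the value changed at row i.
    taken, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            taken.append(i)
            j -= w[i - 1]
    return V[n][W], sorted(taken)
```

On the example above (values ⟨10, 40, 30, 50⟩, weights ⟨5, 4, 6, 3⟩, W = 10) this returns 90 with items 2 and 4.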


Supplemental Lecture 8: Dynamic Programming: Memoization

Read: Section 15.3 of CLRS.

Recursive Implementation: We have described dynamic programming as a method that involves the “bottom-up”

computation of a table. However, the recursive formulations that we have derived have been set up in a “top-

down” manner. Must the computation proceed bottom-up? Consider the following recursive implementation of

the chain-matrix multiplication algorithm. The call Rec-Matrix-Chain(p, i, j) computes and returns

the value of m[i, j]. The initial call is Rec-Matrix-Chain(p, 1, n). We only consider the cost here.

Recursive Chain Matrix Multiplication

Rec-Matrix-Chain(array p, int i, int j) {
    if (i == j) m[i,j] = 0;                    // basis case
    else {
        m[i,j] = INFINITY;                     // initialize
        for k = i to j-1 do {                  // try all splits
            cost = Rec-Matrix-Chain(p, i, k) +
                   Rec-Matrix-Chain(p, k+1, j) + p[i-1]*p[k]*p[j];
            if (cost < m[i,j]) m[i,j] = cost;  // update if better
        }
    }
    return m[i,j];                             // return final cost
}

(Note that the table m[1..n, 1..n] is not really needed. We show it just to make the connection with the earlier

version clearer.) This version of the procedure certainly looks much simpler, and more closely resembles the

recursive formulation that we gave previously for this problem. So, what is wrong with this?

The answer is that the running time is much higher than that of the Θ(n^3) algorithm that we gave before. In fact, we will
see that its running time is exponential in n. This is unacceptably slow.

Let T(n) denote the running time of this algorithm on a sequence of matrices of length n. (That is, n = j−i+1.)

If i = j then we have a sequence of length 1, and the time is Θ(1). Otherwise, we do Θ(1) work and then

consider all possible ways of splitting the sequence of length n into two sequences, one of length k and the other

of length n −k, and invoke the procedure recursively on each one. So we get the following recurrence, deﬁned

for n ≥ 1. (We have replaced the Θ(1)’s with the constant 1.)

    T(n) =  1                                      if n = 1,
            1 + Σ_{k=1}^{n−1} (T(k) + T(n − k))    if n ≥ 2.

Claim: T(n) ≥ 2^(n−1).

Proof: The proof is by induction on n. Clearly this is true for n = 1, since T(1) = 1 = 2^0. In general, for
n ≥ 2, the induction hypothesis is that T(m) ≥ 2^(m−1) for all m < n. Using this we have

    T(n) = 1 + Σ_{k=1}^{n−1} (T(k) + T(n − k)) ≥ 1 + Σ_{k=1}^{n−1} T(k)
         ≥ 1 + Σ_{k=1}^{n−1} 2^(k−1) = 1 + Σ_{k=0}^{n−2} 2^k = 1 + (2^(n−1) − 1) = 2^(n−1).

In the ﬁrst line we simply ignored the T(n−k) term, in the second line we applied the induction hypothesis,

and in the last line we applied the formula for the geometric series.


Why is this so much worse than the dynamic programming version? If you “unravel” the recursive calls on a

reasonably long example, you will see that the procedure is called repeatedly with the same arguments. The

bottom-up version evaluates each entry exactly once.

Memoization: Is it possible to retain the nice top-down structure of the recursive solution, while keeping the same
O(n^3) efficiency of the bottom-up version? The answer is yes, through a technique called memoization. Here
is the idea. Let's reconsider the function Rec-Matrix-Chain() given above. Its job is to compute m[i, j],

and return its value. As noted above, the main problem with the procedure is that it recomputes the same entries

over and over. So, we will ﬁx this by allowing the procedure to compute each entry exactly once. One way to

do this is to initialize every entry to some special value (e.g. UNDEFINED). Once an entry's value has been
computed, it is never recomputed.

Memoized Chain Matrix Multiplication

Mem-Matrix-Chain(array p, int i, int j) {
    if (m[i,j] != UNDEFINED) return m[i,j];    // already defined
    else if (i == j) m[i,j] = 0;               // basis case
    else {
        m[i,j] = INFINITY;                     // initialize
        for k = i to j-1 do {                  // try all splits
            cost = Mem-Matrix-Chain(p, i, k) +
                   Mem-Matrix-Chain(p, k+1, j) + p[i-1]*p[k]*p[j];
            if (cost < m[i,j]) m[i,j] = cost;  // update if better
        }
    }
    return m[i,j];                             // return final cost
}

This version runs in O(n^3) time. Intuitively, this is because each of the O(n^2) table entries is only computed
once, and the work needed to compute one table entry (most of it in the for-loop) is at most O(n).
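The memoized procedure translates almost line for line into Python. In this sketch a dictionary keyed by (i, j) plays the role of the UNDEFINED-initialized table; p is 0-indexed, so matrix A_i has dimensions p[i−1] × p[i], matching the cost term p[i-1]*p[k]*p[j] above.

```python
def matrix_chain(p):
    """Minimum scalar multiplications to compute A_1 * ... * A_n,
    where A_i has dimensions p[i-1] x p[i]; memoized top-down."""
    n = len(p) - 1
    memo = {}                                  # (i, j) -> m[i, j]

    def m(i, j):
        if (i, j) in memo:                     # already defined
            return memo[(i, j)]
        if i == j:
            best = 0                           # basis case: one matrix
        else:
            best = min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                       for k in range(i, j))   # try all splits
        memo[(i, j)] = best
        return best

    return m(1, n)
```

Each (i, j) pair is computed once and thereafter looked up, giving the same O(n^3) bound as the bottom-up table.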

Memoization is not usually used in practice, since it is generally slower than the bottom-up method. However,

in some DP problems, many of the table entries are simply not needed, and so bottom-up computation may

compute entries that are never needed. In these cases memoization may be a good idea. If you know that

most of the table will not be needed, here is a way to save space. Rather than storing the whole table explicitly

as an array, you can store the “deﬁned” entries of the table in a hash table, using the index pair (i, j) as the hash

key. (See Chapter 11 in CLRS for more information on hashing.)

Supplemental Lecture 9: Articulation Points and Biconnectivity

Read: This material is not covered in CLR (except as Problem 23–2).

Articulation Points and Biconnected Graphs: Today we discuss another application of DFS, this time to a problem

on undirected graphs. Let G = (V, E) be a connected undirected graph. Consider the following deﬁnitions.

Articulation Point (or Cut Vertex): Is any vertex whose removal (together with the removal of any incident

edges) results in a disconnected graph.

Bridge: Is an edge whose removal results in a disconnected graph.

Biconnected: A graph is biconnected if it contains no articulation points. (In general, a graph is k-connected if
at least k vertices must be removed to disconnect the graph.)

Biconnected graphs and articulation points are of great interest in the design of network algorithms, because

these are the “critical” points, whose failure will result in the network becoming disconnected.

Fig. 74: Articulation Points and Bridges. (The figure shows a sample graph with its articulation points and bridges marked, together with its biconnected components.)

Last time we observed that the notion of mutual reachability partitioned the vertices of a digraph into equivalence

classes. We would like to do the same thing here. We say that two edges e_1 and e_2 are cocyclic if either e_1 = e_2
or if there is a simple cycle that contains both edges. It is not too hard to verify that this defines an equivalence
relation on the edges of a graph. Notice that if two edges are cocyclic, then there are essentially two different
ways of getting from one edge to the other (by going around the cycle each way).

Biconnected components: The biconnected components of a graph are the equivalence classes of the cocyclicity
relation.

Notice that unlike strongly connected components of a digraph (which form a partition of the vertex set) the

biconnected components of a graph form a partition of the edge set. You might think for a while why this is so.

We give an algorithm for computing articulation points. An algorithm for computing bridges is a simple modification
to this procedure.

Articulation Points and DFS: In order to determine the articulation points of an undirected graph, we will call depth-

ﬁrst search, and use the tree structure provided by the search to aid us. In particular, let us ask ourselves if a

vertex u is an articulation point, how would we know it by its structure in the DFS tree?

We assume that G is connected (if not, we can apply this algorithm to each individual connected component).

So we may assume there is only one tree in the DFS forest. Because G is undirected, the DFS tree has a simpler structure.

First off, we cannot distinguish between forward edges and back edges, and we just call them back edges. Also,

there are no cross edges. (You should take a moment to convince yourself why this is true.)

For now, let us consider the typical case of a vertex u, where u is not a leaf and u is not the root. Let
v_1, v_2, . . . , v_k be the children of u. For each child there is a subtree of the DFS tree rooted at this child. If for

some child, there is no back edge going to a proper ancestor of u, then if we were to remove u, this subtree

would become disconnected from the rest of the graph, and hence u is an articulation point. On the other hand,

if every one of the subtrees rooted at the children of u have back edges to proper ancestors of u, then if u is

removed, the graph remains connected (the backedges hold everything together). This leads to the following.

Observation 1: An internal vertex u of the DFS tree (other than the root) is an articulation point if and only if there exists a subtree rooted at a child of u such that there is no back edge from any vertex in this subtree to a proper ancestor of u.

Please check this condition carefully to see that you understand it. In particular, notice that the condition for

whether u is an articulation point depends on a test applied to its children. This is the most common source of

confusion for this algorithm.

What about the leaves? If u is a leaf, can it be an articulation point? Answer: No, because when you delete a

leaf from a tree, the rest of the tree remains connected, thus even ignoring the back edges, the graph is connected

after the deletion of a leaf from the DFS tree.


Fig. 75: Articulation Points and DFS

Observation 2: A leaf of the DFS tree is never an articulation point. Note that this is completely consistent

with Observation 1, since a leaf will not have any subtrees in the DFS tree, so we can delete the word

“internal” from Observation 1.

What about the root? Since there are no cross edges between the subtrees of the root, if the root has two or more children then it is an articulation point (since its removal separates these two subtrees). On the other hand, if the root has only a single child, then (as in the case of leaves) its removal does not disconnect the DFS tree, and hence cannot disconnect the graph in general.

Observation 3: The root of the DFS tree is an articulation point if and only if it has two or more children.

Articulation Points by DFS: Observations 1, 2, and 3 provide us with a structural characterization of which vertices

in the DFS tree are articulation points. How can we design an algorithm which tests these conditions? Checking

that the root has multiple children is an easy exercise. Checking Observation 1 is the hardest, but we will exploit

the structure of the DFS tree to help us.

The basic thing we need to check for is whether there is a back edge from some subtree to an ancestor of a given vertex. How can we do this? It would be too expensive to keep track of all the back edges from each subtree (because there may be Θ(e) back edges). A simpler scheme is to keep track of the back edge that goes highest in the tree (in the sense of going closest to the root). If any back edge goes to an ancestor of u, this one will.

How do we know how close a back edge goes to the root? As we travel from u towards the root, observe that

the discovery times of these ancestors of u get smaller and smaller (the root having the smallest discovery time

of 1). So we keep track of the back edge (v, w) that has the smallest value of d[w].

Low: Define Low[u] to be the minimum of d[u] and

{d[w] | (v, w) is a back edge and v is a descendant of u}.

The term "descendant" is used in the nonstrict sense, that is, v may be equal to u. Intuitively, Low[u] is the highest (closest to the root) that you can get in the tree by taking any one back edge from either u or any of its descendants. (Beware of this notation: "Low" means low discovery time, not low in the tree. In fact Low[u] tends to be "high" in the tree, in the sense of being close to the root.)

To compute Low[u] we use the following simple rules. Suppose that we are performing DFS on the vertex u.

Initialization: Low[u] = d[u].

Back edge (u, v): Low[u] = min(Low[u], d[v]). Explanation: We have detected a new back edge coming out of u. If this goes to a lower d value than the previous back edge, then make this the new low.

Tree edge (u, v): Low[u] = min(Low[u], Low[v]). Explanation: Since v is in the subtree rooted at u, any single back edge leaving the subtree rooted at v is also a single back edge for the subtree rooted at u.


Observe that once Low[u] is computed for all vertices u, we can test whether a given nonroot vertex u is an articulation point by Observation 1 as follows: u is an articulation point if and only if it has a child v in the DFS tree for which Low[v] ≥ d[u] (since if there were a back edge from either v or one of its descendants to a proper ancestor of u, then we would have Low[v] < d[u]).

The Final Algorithm: There is one subtlety that we must watch for in designing the algorithm (in particular, this is true for any DFS on undirected graphs). When processing a vertex u, we need to know when a given edge (u, v) is a back edge. How do we do this? An almost correct answer is to test whether v is colored gray (since all gray vertices are ancestors of the current vertex). This is not quite correct, because v may be the parent of u in the DFS tree and we are just seeing the "other side" of the tree edge between v and u (recall that in constructing the adjacency list of an undirected graph we create two directed edges for each undirected edge). To test correctly for a back edge we use the predecessor pointer to check that v is not the parent of u in the DFS tree.

The complete algorithm for computing articulation points is given below. The main procedure for DFS is the

same as before, except that it calls the following routine rather than DFSvisit().

Articulation Points
ArtPt(u) {
    color[u] = gray
    Low[u] = d[u] = ++time
    for each (v in Adj(u)) {
        if (color[v] == white) {              // (u,v) is a tree edge
            pred[v] = u
            ArtPt(v)
            Low[u] = min(Low[u], Low[v])      // update Low[u]
            if (pred[u] == NULL) {            // root: apply Observation 3
                if (this is u's second child)
                    Add u to set of articulation points
            }
            else if (Low[v] >= d[u]) {        // internal node: apply Observation 1
                Add u to set of articulation points
            }
        }
        else if (v != pred[u]) {              // (u,v) is a back edge
            Low[u] = min(Low[u], d[v])        // update Low[u]
        }
    }
}
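The pseudocode above can be turned into a runnable sketch. The Python version below (function and variable names are our own, not from the notes) follows the same Low[·] rules; instead of an explicit color array, membership in the discovery-time dictionary `d` serves as the white/non-white test.

```python
def articulation_points(adj):
    """Return the set of articulation points of an undirected graph.

    adj: dict mapping each vertex to a list of its neighbors.
    Low[u] is the smallest discovery time reachable from u's subtree
    by taking at most one back edge, as defined in the notes.
    """
    d = {}        # discovery times (doubles as the "visited" test)
    low = {}      # Low[u] values
    art = set()   # articulation points found so far
    time = [0]    # mutable DFS clock

    def dfs(u, parent):
        time[0] += 1
        d[u] = low[u] = time[0]
        children = 0
        for v in adj[u]:
            if v not in d:                    # (u,v) is a tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])  # update Low[u]
                if parent is None:            # root: Observation 3
                    if children >= 2:
                        art.add(u)
                elif low[v] >= d[u]:          # internal: Observation 1
                    art.add(u)
            elif v != parent:                 # (u,v) is a back edge
                low[u] = min(low[u], d[v])

    for u in adj:                             # handle every component
        if u not in d:
            dfs(u, None)
    return art
```

For example, in a triangle 0–1–2 with a pendant vertex 3 attached to 2, only vertex 2 is an articulation point. A production implementation might prefer an iterative DFS to avoid Python's recursion limit on deep graphs.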

An example is shown in the following figure. As with all DFS-based algorithms, the running time is Θ(n + e).

There are some interesting problems that we still have not discussed. We did not discuss how to compute the bridges of a graph. This can be done by a small modification of the algorithm above. We'll leave it as an exercise. (Notice that if {u, v} is a bridge then it does not follow that u and v are both articulation points.)

Another question is how to determine which edges are in the biconnected components. A hint here is to store

the edges in a stack as you go through the DFS search. When you come to an articulation point, you can show

that all the edges in the biconnected component will be consecutive in the stack.

Supplemental Lecture 10: Bellman-Ford Shortest Paths

Read: Section 24.1 in CLRS.

Bellman-Ford Algorithm: We saw that Dijkstra’s algorithm can solve the single-source shortest path problem, under

the assumption that the edge weights are nonnegative. We also saw that shortest paths are undeﬁned if you


Fig. 76: Articulation Points. (The figure shows the DFS tree labeled with d and Low values, with the articulation points marked.)

have cycles of total negative cost. What if you have negative edge weights, but no negative cost cycles? We shall present the Bellman-Ford algorithm, which solves this problem. This algorithm is slower than Dijkstra's algorithm, running in Θ(V E) time. In our version we will assume that there are no negative cost cycles. The one presented in CLRS actually contains a bit of code that checks for this. (Check it out.)

Recall that we are given a graph G = (V, E) with numeric edge weights, w(u, v). Like Dijkstra’s algorithm, the

Bellman-Ford algorithm is based on performing repeated relaxations. (Recall that relaxation updates shortest

path information along a single edge. It was described in our discussion of Dijkstra’s algorithm.) Dijkstra’s

algorithm was based on the idea of organizing the relaxations in the best possible manner, namely in increasing

order of distance. Once relaxation is applied to an edge, it need never be relaxed again. This trick doesn’t seem

to work when dealing with graphs with negative edge weights. Instead, the Bellman-Ford algorithm simply

applies a relaxation to every edge in the graph, and repeats this V −1 times.

Bellman-Ford Algorithm
BellmanFord(G, w, s) {
    for each (u in V) {              // standard initialization
        d[u] = +infinity
        pred[u] = null
    }
    d[s] = 0
    for i = 1 to V-1 {               // repeat V-1 times
        for each (u,v) in E {        // relax along each edge
            Relax(u, v)
        }
    }
}
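A minimal runnable sketch of the pseudocode above, in Python. The dictionary representation and the inlined Relax step are our choices, not the notes'; as in the notes, we assume no negative-cost cycles.

```python
import math

def bellman_ford(vertices, edges, s):
    """Single-source shortest paths, allowing negative edge weights
    (but assuming no negative-cost cycles, as in the notes).

    vertices: iterable of vertex names; edges: list of (u, v, w) triples.
    Returns the dict of distances d[u] from s.
    """
    vertices = list(vertices)
    d = {u: math.inf for u in vertices}    # standard initialization
    pred = {u: None for u in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):     # repeat V-1 times
        for (u, v, w) in edges:            # relax along each edge
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                pred[v] = u
    return d
```

The CLRS version would add one extra pass over the edges afterwards to detect a negative cycle (any further improvement signals one); we omit that check here, as the notes do.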

The Θ(V E) running time is pretty obvious, since there are two main nested loops, one iterated V −1 times and

the other iterated E times. The interesting question is how and why it works.

Correctness of Bellman-Ford: I like to think of the Bellman-Ford as a sort of “BubbleSort analogue” for shortest

paths, in the sense that shortest path information is propagated sequentially along each shortest path in the graph.

Consider any shortest path from s to some other vertex u: ⟨v_0, v_1, . . . , v_k⟩ where v_0 = s and v_k = u. Since a shortest path will never visit the same vertex twice, we know that k ≤ V − 1, and hence the path consists of at most V − 1 edges. Since this is a shortest path, δ(s, v_i) (the true shortest path cost from s to v_i) satisfies

δ(s, v_i) = δ(s, v_{i−1}) + w(v_{i−1}, v_i).


Fig. 77: Bellman-Ford Algorithm. (The figure shows the initial configuration and the distance values after the 1st, 2nd, and 3rd relaxation phases.)

We assert that after the ith pass of the "for-i" loop, d[v_i] = δ(s, v_i). The proof is by induction on i. Observe that after the initialization (pass 0) we have d[v_0] = d[s] = 0. In general, prior to the ith pass through the loop, the induction hypothesis tells us that d[v_{i−1}] = δ(s, v_{i−1}). After the ith pass through the loop, we have done a relaxation on the edge (v_{i−1}, v_i) (since we do relaxations along all the edges). Thus after the ith pass we have

d[v_i] ≤ d[v_{i−1}] + w(v_{i−1}, v_i) = δ(s, v_{i−1}) + w(v_{i−1}, v_i) = δ(s, v_i).

Recall from Dijkstra's algorithm that d[v_i] is never less than δ(s, v_i) (since each time we do a relaxation there exists a path that witnesses its value). Thus, d[v_i] is in fact equal to δ(s, v_i), completing the induction proof.

In summary, after i passes through the for loop, all vertices that are i edges away (along the shortest path tree)

from the source have the correct distance values stored in d[u]. Thus, after the (V − 1)st iteration of the for

loop, all vertices u have the correct distance values stored in d[u].

Supplemental Lecture 11: Network Flows and Matching

Read: Chapt 27 in CLR.

Maximum Flow: The Max Flow problem is one of the basic problems of algorithm design. Intuitively we can think of a flow network as a directed graph in which fluid is flowing along the edges of the graph. Each edge has a certain maximum capacity that it can carry. The idea is to find out how much flow we can push from one point to another.

The max flow problem has applications in areas like transportation and routing in networks. It is the simplest problem in a line of many important problems having to do with the movement of commodities through a network. These are often studied in business schools and operations research.

Flow Networks: A flow network G = (V, E) is a directed graph in which each edge (u, v) ∈ E has a nonnegative capacity c(u, v) ≥ 0. If (u, v) ∉ E we model this by setting c(u, v) = 0. There are two special vertices: a source s, and a sink t. We assume that every vertex lies on some path from the source to the sink (for otherwise the vertex is of no use to us). (This implies that the digraph is connected, and hence e ≥ n − 1.)

A flow is a real valued function on pairs of vertices, f : V × V → R, which satisfies the following three properties:

Capacity Constraint: For all u, v ∈ V, f(u, v) ≤ c(u, v).

Skew Symmetry: For all u, v ∈ V, f(u, v) = −f(v, u). (In other words, we can think of backwards flow as negative flow. This is primarily for making algebraic analysis easier.)

Flow conservation: For all u ∈ V − {s, t}, we have

∑_{v∈V} f(u, v) = 0.


(Given skew symmetry, this is equivalent to saying flow-in = flow-out.) Note that flow conservation does NOT apply to the source and sink, since we think of ourselves as pumping flow from s to t. Flow conservation means that no flow is lost anywhere else in the network, thus the flow out of s will equal the flow into t.

The quantity f(u, v) is called the net flow from u to v. The total value of the flow f is defined as

|f| = ∑_{v∈V} f(s, v),

i.e. the flow out of s. It turns out that this is also equal to ∑_{v∈V} f(v, t), the flow into t. We will show this later.

The maximum-flow problem is: given a flow network, and source and sink vertices s and t, find the flow of maximum value from s to t.

Example: Page 581 of CLR.

Multi-source, multi-sink flow problems: It may seem overly restrictive to require that there is only a single source and a single sink vertex. Many flow problems have situations in which there are many source vertices s_1, s_2, . . . , s_k and many sink vertices t_1, t_2, . . . , t_l. This can easily be modelled by just adding a special supersource s′ and a supersink t′, attaching s′ to all the s_i and attaching all the t_j to t′. We let these edges have infinite capacity. Now by pushing the maximum flow from s′ to t′ we are effectively producing the maximum flow from all the s_i to all the t_j's.

Note that we don't care which flow from one source goes to which sink. If you require that the flow from source i goes ONLY to sink i, then you have a tougher problem called the multi-commodity flow problem.
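As a small illustration, the supersource/supersink construction is just a matter of adding infinite-capacity edges. The names `s*` and `t*` and the capacity-dictionary representation below are our own, not from the notes.

```python
def add_super_terminals(cap, sources, sinks):
    """Reduce a multi-source, multi-sink network to a single-source,
    single-sink one, as described above.

    cap: dict mapping directed edges (u, v) to capacities.
    Returns a new capacity dict in which fresh vertices 's*' and 't*'
    (hypothetical names) are attached to every source and every sink
    by infinite-capacity edges.
    """
    new_cap = dict(cap)
    for s_i in sources:
        new_cap[('s*', s_i)] = float('inf')   # supersource -> each source
    for t_j in sinks:
        new_cap[(t_j, 't*')] = float('inf')   # each sink -> supersink
    return new_cap
```

Any max-flow routine can then be run from `s*` to `t*` on the augmented network.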

Set Notation: Sometimes rather than talking about the flow from a vertex u to a vertex v, we want to talk about the flow from a SET of vertices X to another SET of vertices Y. To do this we extend the definition of f to sets by defining

f(X, Y) = ∑_{x∈X} ∑_{y∈Y} f(x, y).

Using this notation we can define flow balance for a vertex u more succinctly by just writing f(u, V) = 0. One important special case of this concept is when X and Y define a cut (i.e., a partition of the vertex set into two disjoint subsets X ⊆ V and Y = V − X). In this case f(X, Y) can be thought of as the net amount of flow crossing over the cut.

From simple manipulations of the deﬁnition of ﬂow we can prove the following facts.

Lemma:

(i) f(X, X) = 0.

(ii) f(X, Y ) = −f(Y, X).

(iii) If X ∩ Y = ∅ then f(X ∪ Y, Z) = f(X, Z) +f(Y, Z) and f(Z, X ∪ Y ) = f(Z, X) +f(Z, Y ).

Ford-Fulkerson Method: The most basic concept on which all network-flow algorithms work is the notion of augmenting flows. The idea is to start with a flow of size zero, and then incrementally make the flow larger and larger by finding a path along which we can push more flow. A path in the network from s to t along which more flow can be pushed is called an augmenting path. This idea is embodied in the simplest method for computing network flows, called the Ford-Fulkerson method.

Almost all network flow algorithms are based on this simple idea. They only differ in how they decide which path or paths along which to push flow. We will prove that when it is impossible to "push" any more flow through the network, we have reached the maximum possible flow (i.e. a locally maximum flow is globally maximum).


Ford-Fulkerson Network Flow
FordFulkerson(G, s, t) {
    initialize flow f to 0;
    while (there exists an augmenting path p) {
        augment the flow along p;
    }
    output the final flow f;
}

Residual Network: To define the notion of an augmenting path, we first define the notion of a residual network. Given a flow network G and a flow f, define the residual capacity of a pair u, v ∈ V to be c_f(u, v) = c(u, v) − f(u, v). Because of the capacity constraint, c_f(u, v) ≥ 0. Observe that if c_f(u, v) > 0 then it is possible to push more flow through the edge (u, v). Otherwise we say that the edge is saturated.

The residual network is the directed graph G_f with the same vertex set as G but whose edges are the pairs (u, v) such that c_f(u, v) > 0. Each edge in the residual network is weighted with its residual capacity.

Example: Page 589 of CLR.

Lemma: Let f be a flow in G and let f′ be a flow in G_f. Then (f + f′) (defined by (f + f′)(u, v) = f(u, v) + f′(u, v)) is a flow in G. The value of the flow is |f| + |f′|.

Proof: Basically the residual network tells us how much additional flow we can push through G. This implies that f + f′ never exceeds the overall edge capacities of G. The other rules for flows are easy to verify.

Augmenting Paths: An augmenting path is a simple path from s to t in G_f. The residual capacity of the path is the MINIMUM capacity of any edge on the path. It is denoted c_f(p). Observe that by pushing c_f(p) units of flow along each edge of the path, we get a flow in G_f, and hence we can use this to augment the flow in G. (Remember when defining this flow that whenever we push c_f(p) units of flow along any edge (u, v) of p, we have to push −c_f(p) units of flow along the reverse edge (v, u) to maintain skew-symmetry.) Since every edge of the residual network has a strictly positive weight, the resulting flow is strictly larger than the current flow for G.

Determining whether there exists an augmenting path from s to t is an easy problem. First we construct the residual network, and then we run DFS or BFS on the residual network starting at s. If the search reaches t then we know that a path exists (and we can follow the predecessor pointers backwards to reconstruct it). Since DFS and BFS take Θ(n + e) time, and it can be shown that the residual network has Θ(n + e) size, the running time of Ford-Fulkerson is basically

Θ((n + e) · (number of augmenting stages)).

Later we will analyze the latter quantity.

Correctness: To establish the correctness of the Ford-Fulkerson algorithm we need to delve more deeply into the theory of flows and cuts in networks. A cut, (S, T), in a flow network is a partition of the vertex set into two disjoint subsets S and T such that s ∈ S and t ∈ T. We define the flow across the cut as f(S, T), and we define the capacity of the cut as c(S, T). Note that in computing f(S, T), flows from T to S are counted negatively (by skew-symmetry), and in computing c(S, T) we ONLY count constraints on edges leading from S to T (ignoring those from T to S).

Lemma: The amount of flow across any cut in the network is equal to |f|.


Proof:

    f(S, T) = f(S, V) − f(S, S)
            = f(S, V)
            = f(s, V) + f(S − s, V)
            = f(s, V)
            = |f|

(The fact that f(S − s, V) = 0 comes from flow conservation: f(u, V) = 0 for all u other than s and t, and since S − s is formed of such vertices the sum of their flows will be zero also.)

Corollary: The value of any flow is bounded from above by the capacity of any cut (i.e. Maximum flow ≤ Minimum cut).

Proof: You cannot push any more flow through a cut than its capacity.

The correctness of the Ford-Fulkerson method is based on the following theorem, called the Max-Flow, Min-Cut Theorem. It basically states that in any flow network the minimum capacity cut acts like a bottleneck to limit the maximum amount of flow. The Ford-Fulkerson algorithm terminates when it finds this bottleneck, and hence it finds the minimum cut and maximum flow.

Max-Flow Min-Cut Theorem: The following three conditions are equivalent.

(i) f is a maximum flow in G,
(ii) the residual network G_f contains no augmenting paths,
(iii) |f| = c(S, T) for some cut (S, T) of G.

Proof: (i) ⇒ (ii): If f is a max flow and there were an augmenting path in G_f, then by pushing flow along this path we would have a larger flow, a contradiction.

(ii) ⇒ (iii): If there are no augmenting paths then s and t are not connected in the residual network. Let S be those vertices reachable from s in the residual network and let T be the rest. (S, T) forms a cut. Because each edge crossing the cut must be saturated with flow, it follows that the flow across the cut equals the capacity of the cut, thus |f| = c(S, T).

(iii) ⇒ (i): Since the flow is never bigger than the capacity of any cut, if the flow equals the capacity of some cut, then it must be maximum (and this cut must be minimum).

Analysis of the Ford-Fulkerson method: The problem with the Ford-Fulkerson algorithm is that depending on how it picks augmenting paths, it may spend an inordinate amount of time arriving at the final maximum flow. Consider the following example (from page 596 in CLR). If the algorithm were smart enough to send flow along the edges of weight 1,000,000, the algorithm would terminate in two augmenting steps. However, if the algorithm were to try to augment using the middle edge, it will continuously improve the flow by only a single unit. 2,000,000 augmenting steps will be needed before we get the final flow. In general, Ford-Fulkerson can take time Θ((n + e)|f*|) where f* is the maximum flow.

An Improvement: We have shown that if the augmenting path was chosen in a bad way the algorithm could run for a very long time before converging on the final flow. It seems (from the example we showed) that a more logical way to push flow is to select the augmenting path which holds the maximum amount of flow. Computing this path is equivalent to determining the path of maximum capacity from s to t in the residual network. (This is exactly the same as the beer transport problem given on the last exam.) It is not known how fast this method works in the worst case, but there is another simple strategy that is guaranteed to give good bounds (in terms of n and e).


Edmonds-Karp Algorithm: The Edmonds-Karp algorithm is Ford-Fulkerson, with one little change. When finding the augmenting path, we use breadth-first search in the residual network, starting at the source s, and thus we find the shortest augmenting path (where the length of the path is the number of edges on the path). We claim that this choice is particularly nice in that, if we do so, the number of flow augmentations needed will be at most O(e · n). Since each augmentation takes O(n + e) time to compute using BFS, the overall running time will be O((n + e) e n) = O(n^2 e + e^2 n) ∈ O(e^2 n) (under the reasonable assumption that e ≥ n). (The best known algorithm runs in essentially O(e n log n) time.)

The fact that Edmonds-Karp uses O(en) augmentations is based on the following observations.

Observation: If the edge (u, v) is an edge on the minimum length augmenting path from s to t in G_f, then δ_f(s, v) = δ_f(s, u) + 1.

Proof: This is a simple property of shortest paths. Since there is an edge from u to v, δ_f(s, v) ≤ δ_f(s, u) + 1, and if δ_f(s, v) < δ_f(s, u) + 1 then u would not be on the shortest path from s to v, and hence (u, v) is not on any shortest path.

Lemma: For each vertex u ∈ V − {s, t}, let δ_f(s, u) be the distance function from s to u in the residual network G_f. Then as we perform augmentations by the Edmonds-Karp algorithm, the value of δ_f(s, u) increases monotonically with each flow augmentation.

Proof: (Messy, but not too complicated. See the text.)

Theorem: The Edmonds-Karp algorithm makes at most O(n · e) augmentations.

Proof: An edge in the augmenting path is critical if the residual capacity of the path equals the residual capacity of this edge. In other words, after augmentation the critical edge becomes saturated, and disappears from the residual graph.

How many times can an edge become critical before the algorithm terminates? Observe that when the edge (u, v) is critical it lies on the shortest augmenting path, implying that δ_f(s, v) = δ_f(s, u) + 1. After this it disappears from the residual graph. In order to reappear, it must be that we reduce flow on this edge, i.e. we push flow along the reverse edge (v, u). For this to be the case we have (at some later flow f′) δ_f′(s, u) = δ_f′(s, v) + 1. Thus we have:

    δ_f′(s, u) = δ_f′(s, v) + 1
               ≥ δ_f(s, v) + 1      (since distances increase with time)
               = (δ_f(s, u) + 1) + 1
               = δ_f(s, u) + 2.

Thus, between the times that an edge becomes critical, its tail vertex increases in distance from the source by two. This can only happen n/2 times, since no vertex can be further than n from the source. Thus, each edge can become critical at most O(n) times, there are O(e) edges, and hence after O(ne) augmentations the algorithm must terminate.

In summary, the Edmonds-Karp algorithm makes at most O(ne) augmentations and runs in O(ne^2) time.
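The whole method can be sketched compactly in Python. The dict-of-dicts capacity representation and the function name are our own choices, not the CLRS presentation; BFS on the residual network gives the shortest augmenting path, as Edmonds-Karp requires.

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Max flow by Ford-Fulkerson with BFS (shortest) augmenting paths.

    cap: dict of dicts, cap[u][v] = capacity of edge (u, v).
    Returns the value of the maximum flow from s to t.
    """
    # residual capacities; make sure reverse edges exist with capacity 0
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u in list(cap):
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS in the residual network for a shortest augmenting path
        pred = {s: None}
        queue = deque([s])
        while queue and t not in pred:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in pred:
                    pred[v] = u
                    queue.append(v)
        if t not in pred:                  # no augmenting path: maximum
            return flow
        # residual capacity of the path = minimum edge capacity on it
        bottleneck = float('inf')
        v = t
        while pred[v] is not None:
            bottleneck = min(bottleneck, res[pred[v]][v])
            v = pred[v]
        # augment: push flow forward, add capacity to reverse edges
        v = t
        while pred[v] is not None:
            u = pred[v]
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
            v = u
        flow += bottleneck
```

For instance, on the network s→a (3), s→b (2), a→b (1), a→t (2), b→t (3), the maximum flow is 5.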

Maximum Matching: One of the important elements of network flow is that it is a very general algorithm which is capable of solving many problems. (An example is problem 3 in the homework.) We will give another example here.

Consider the following problem: you are running a dating service and there are a set of men L and a set of women R. Using a questionnaire you establish which men are compatible with which women. Your task is to pair up as many compatible pairs of men and women as possible, subject to the constraint that each man is paired with at most one woman, and vice versa. (It may be that some men are not paired with any woman.)

This problem is modelled by giving an undirected graph whose vertex set is V = L ∪ R and whose edge set consists of pairs (u, v), u ∈ L, v ∈ R, such that u and v are compatible. The problem is to find a matching, that is, a subset of edges M such that for each v ∈ V, there is at most one edge of M incident to v. The desired matching is the one that has the maximum number of edges, and is called a maximum matching.

Example: See page 601 in CLR.

The resulting undirected graph has the property that its vertex set can be divided into two groups such that all

its edges go from one group to the other (never within a group, unless the dating service is located on Dupont

Circle). This problem is called the maximum bipartite matching problem.

Reduction to Network Flow: We claim that if you have an algorithm for solving the network ﬂow problem, then you

can use this algorithm to solve the maximum bipartite matching problem. (Note that this idea does not work for

general undirected graphs.)

Construct a flow network G′ = (V′, E′) as follows. Let s and t be two new vertices, let V′ = V ∪ {s, t}, and let

E′ = {(s, u) | u ∈ L} ∪ {(v, t) | v ∈ R} ∪ {(u, v) | (u, v) ∈ E}.

Set the capacity of all edges in this network to 1.

Example: See page 602 in CLR.

Now, compute the maximum flow in G′. Although in general it can be that flows are real numbers, observe that the Ford-Fulkerson algorithm will only assign integer value flows to the edges (and this is true of all existing network flow algorithms).

Since each vertex in L has exactly 1 incoming edge, it can have flow along at most 1 outgoing edge, and since each vertex in R has exactly 1 outgoing edge, it can have flow along at most 1 incoming edge. Thus letting f denote the maximum flow, we can define a matching

M = {(u, v) | u ∈ L, v ∈ R, f(u, v) > 0}.

We claim that this matching is maximum because for every matching there is a corresponding flow of equal value, and for every (integer) flow there is a matching of equal value. Thus by maximizing one we maximize the other.
the other.

Supplemental Lecture 12: Hamiltonian Path

Read: The reduction we present for Hamiltonian Path is completely different from the one in Chapt 36.5.4 of CLR.

Hamiltonian Cycle: Today we consider a collection of problems related to finding paths in graphs and digraphs. Recall that given a graph (or digraph) a Hamiltonian cycle is a simple cycle that visits every vertex in the graph (exactly once). A Hamiltonian path is a simple path that visits every vertex in the graph (exactly once). The Hamiltonian cycle (HC) and Hamiltonian path (HP) problems ask whether a given graph (or digraph) has such a cycle or path, respectively. There are four variations of these problems depending on whether the graph is directed or undirected, and depending on whether you want a path or a cycle, but all of these problems are NP-complete.

An important related problem is the traveling salesman problem (TSP). Given a complete graph (or digraph)

with integer edge weights, determine the cycle of minimum weight that visits all the vertices. Since the graph

is complete, such a cycle will always exist. The decision problem formulation is, given a complete weighted

graph G, and integer X, does there exist a Hamiltonian cycle of total weight at most X? Today we will prove

that Hamiltonian Cycle is NP-complete. We will leave TSP as an easy exercise. (It is done in Section 36.5.5 in

CLR.)


Component Design: Up to now, most of the reductions that we have seen (for Clique, VC, and DS in particular) are

of a relatively simple variety. They are sometimes called local replacement reductions, because they operate by

making some local change throughout the graph.

We will present a much more complex style of reduction for the Hamiltonian path problem on directed graphs. This type of reduction is called a component design reduction, because it involves designing special subgraphs, sometimes called components or gadgets (also called widgets), whose job it is to enforce a particular constraint. Very complex reductions may involve the creation of many gadgets. This one involves the construction of only one. (See CLR's presentation of HP for other examples of gadgets.)

The gadget that we will use in the directed Hamiltonian path reduction, called a DHP-gadget, is shown in the figure below. It consists of three incoming edges labeled i_1, i_2, i_3 and three outgoing edges, labeled o_1, o_2, o_3. It was designed to satisfy the following property, which you can verify. Intuitively it says that if you enter the gadget on any subset of 1, 2 or 3 input edges, then there is a way to get through the gadget and hit every vertex exactly once, and in doing so each path must end on the corresponding output edge.

Claim: Given the DHP-gadget:

• For any subset of input edges, there exists a set of paths which join each input edge i_1, i_2, or i_3 to its respective output edge o_1, o_2, or o_3 such that together these paths visit every vertex in the gadget exactly once.

• Any subset of paths that start on the input edges and end on the output edges, and visit all the vertices of the gadget exactly once, must join corresponding inputs to corresponding outputs. (In other words, a path that starts on input i_1 must exit on output o_1.)
.)

The proof is not hard, but involves a careful inspection of the gadget. It is probably easiest to see this on your

own, by starting with one, two, or three input paths, and attempting to get through the gadget without skipping

vertex and without visiting any vertex twice. To see whether you really understand the gadget, answer the

question of why there are 6 groups of triples. Would some other number work?

DHP is NP-complete: This gadget is an essential part of our proof that the directed Hamiltonian path problem is

NP-complete.

Theorem: The directed Hamiltonian Path problem is NP-complete.

Proof: DHP ∈ NP: The certiﬁcate consists of the sequence of vertices (or edges) in the path. It is an easy

matter to check that the path visits every vertex exactly once.
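The polynomial-time certificate check can be sketched in a few lines of Python (a hypothetical verifier; the digraph is assumed to be given as a set of directed edges over a vertex set):

```python
# A polynomial-time verifier for a DHP certificate: given a digraph (as a
# set of directed edges over a vertex set) and a claimed sequence of
# vertices, check that the sequence is a Hamiltonian path.

def is_hamiltonian_path(vertices, edges, path):
    if sorted(path) != sorted(vertices):      # every vertex exactly once
        return False
    return all((u, v) in edges                # consecutive vertices joined
               for u, v in zip(path, path[1:]))
```

For example, over vertices a, b, c with edges (a, b), (b, c), (a, c), the sequence a, b, c is accepted, while a, c, b is rejected because the edge (c, b) is absent.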

3SAT ≤_P DHP: This will be the subject of the rest of this section.

Let us consider the similar elements between the two problems. In 3SAT we are selecting a truth assignment

for the variables of the formula. In DHP, we are deciding which edges will be a part of the path. In 3SAT there

must be at least one true literal for each clause. In DHP, each vertex must be visited exactly once.

We are given a boolean formula F in 3-CNF form (three literals per clause). We will convert this formula into a digraph. Let x_1, x_2, . . . , x_m denote the variables appearing in F. We will construct one DHP-gadget for each clause in the formula. The inputs and outputs of each gadget correspond to the literals appearing in this clause. Thus, the clause (x_2 ∨ x_5 ∨ x_8) would generate a clause gadget with inputs labeled x_2, x_5, and x_8, and the same outputs.

The general structure of the digraph will consist of a series of vertices, one for each variable. Each of these vertices will have two outgoing paths, one taken if x_i is set to true and one if x_i is set to false. Each of these paths will then pass through some number of DHP-gadgets. The true path for x_i will pass through all the clause gadgets for clauses in which x_i appears, and the false path will pass through all the gadgets for clauses in which the negated literal ¬x_i appears. (The order in which the path passes through the gadgets is unimportant.) When the paths for x_i have passed through their last gadgets, then they are joined to the next variable vertex, x_{i+1}. This is illustrated in the following figure. (The figure only shows a portion of the construction. There will be paths coming into

(Figure panels: what it looks like inside the gadget; paths with 3, 2, and 1 entries.)

Fig. 78: DHP-Gadget and examples of path traversals.

these same gadgets from other variables as well.) We add one final vertex x_e, and the last variable's paths are connected to x_e. (If we wanted to reduce to Hamiltonian cycle, rather than Hamiltonian path, we could join x_e back to x_1.)

Fig. 79: General structure of reduction from 3SAT to DHP.

Note that for each variable, the Hamiltonian path must either use the true path or the false path, but it cannot use both. If we choose the true path for x_i to be in the Hamiltonian path, then we will have at least one path passing through each of the gadgets whose corresponding clause contains x_i, and if we choose the false path, then we will have at least one path passing through each gadget for ¬x_i.

For example, consider the following boolean formula in 3-CNF. The construction yields the digraph shown in the following figure.

(x_1 ∨ x_2 ∨ x_3) ∧ (x_1 ∨ x_2 ∨ x_3) ∧ (x_2 ∨ x_1 ∨ x_3) ∧ (x_1 ∨ x_3 ∨ x_2).

Fig. 80: Example of the 3SAT to DHP reduction.

The Reduction: Let us give a more formal description of the reduction. Recall that we are given a boolean formula F in 3-CNF. We create a digraph G as follows. For each variable x_i appearing in F, we create a variable vertex, named x_i. We also create a vertex named x_e (the ending vertex). For each clause c, we create a DHP-gadget whose inputs and outputs are labeled with the three literals of c. (The order is unimportant, as long as each input and its corresponding output are labeled the same.)

We join these vertices with the gadgets as follows. For each variable x_i, consider all the clauses c_1, c_2, . . . , c_k in which x_i appears as a literal (uncomplemented). Join x_i by an edge to the input labeled with x_i in the gadget for c_1, and in general join the output of gadget c_j labeled x_i with the input of gadget c_{j+1} with this same label. Finally, join the output of the last gadget c_k to the next variable vertex, x_{i+1}. (If this is the last variable, then join it to x_e instead.) The resulting chain of edges is called the true path for variable x_i. Form a second chain in exactly the same way, but this time joining the gadgets for the clauses in which ¬x_i appears. This is called the false path for x_i. The resulting digraph is the output of the reduction. Observe that the entire construction can be performed in polynomial time, by simply inspecting the formula, creating the appropriate vertices, and adding the appropriate edges to the digraph. The following lemma establishes the correctness of this reduction.

Lemma: The boolean formula F is satisfiable if and only if the digraph G produced by the above reduction has a Hamiltonian path.


(Upper panel: a satisfying assignment hits all gadgets. Lower panel: a nonsatisfying assignment misses some gadgets.)

Fig. 81: Correctness of the 3SAT to DHP reduction. The upper figure shows the Hamiltonian path resulting from the satisfying assignment, x_1 = 1, x_2 = 1, x_3 = 0, and the lower figure shows the non-Hamiltonian path resulting from the nonsatisfying assignment x_1 = 0, x_2 = 1, x_3 = 0.

Proof: We need to prove both the "only if" and the "if".

⇒: Suppose that F has a satisfying assignment. We claim that G has a Hamiltonian path. This path will start at the variable vertex x_1, then will travel along either the true path or false path for x_1, depending on whether it is 1 or 0, respectively, in the assignment, and then it will continue with x_2, then x_3, and so on, until reaching x_e. Such a path will visit each variable vertex exactly once.

Because this is a satisfying assignment, we know that for each clause, either 1, 2, or 3 of its literals will be true. This means that for each clause, either 1, 2, or 3 paths will attempt to travel through the corresponding gadget. However, we have argued in the above claim that in this case it is possible to visit every vertex in the gadget exactly once. Thus every vertex in the graph is visited exactly once, implying that G has a Hamiltonian path.

⇐: Suppose that G has a Hamiltonian path. We assert that the form of the path must be essentially the same as the one described in the previous part of this proof. In particular, the path must visit the variable vertices in increasing order from x_1 until x_e, because of the way in which these vertices are joined together. Also observe that for each variable vertex, the path will proceed along either the true path or the false path. If it proceeds along the true path, set the corresponding variable to 1 and otherwise set it to 0. We will show that the resulting assignment is a satisfying assignment for F.

Any Hamiltonian path must visit all the vertices in every gadget. By the above claim about DHP-gadgets, if a path visits all the vertices and enters along an input edge, then it must exit along the corresponding output edge. Therefore, once the Hamiltonian path starts along the true or false path for some variable, it must remain on edges with the same label. That is, if the path starts along the true path for x_i, it must travel through all the gadgets with the label x_i until arriving at the variable vertex for x_{i+1}. If it starts along the false path, then it must travel through all gadgets with the label ¬x_i.

Since all the gadgets are visited and the paths must remain true to their initial assignments, it follows that for each corresponding clause, at least one (and possibly 2 or 3) of the literals must be true. Therefore, this is a satisfying assignment.
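The skeleton of the construction can be sketched in Python. This is only an illustration: the internal wiring of each gadget comes from the figure rather than the text, so each clause gadget is abstracted to its labeled input and output ports, and all vertex names (such as g0_in_1) are hypothetical.

```python
# A sketch of the 3SAT -> DHP reduction skeleton. Gadget internals are
# abstracted: each clause gadget is represented only by its labeled
# input/output ports.

def reduce_3sat_to_dhp(clauses, num_vars):
    """clauses: list of 3-tuples of nonzero ints; +i means x_i, -i means
    the negation of x_i. Returns a set of directed edges over string
    vertex names."""
    edges = set()
    var = lambda i: f"x{i}"                  # variable vertex
    inp = lambda c, lit: f"g{c}_in_{lit}"    # gadget input port for a literal
    out = lambda c, lit: f"g{c}_out_{lit}"   # gadget output port, same label

    for i in range(1, num_vars + 1):
        nxt = var(i + 1) if i < num_vars else "xe"  # next variable vertex, or x_e
        for lit in (i, -i):                         # true path, then false path
            occurs = [c for c, cl in enumerate(clauses) if lit in cl]
            prev = var(i)
            for c in occurs:
                edges.add((prev, inp(c, lit)))      # enter gadget on this label
                prev = out(c, lit)                  # leave on the same label
            edges.add((prev, nxt))                  # join to the next vertex
    return edges
```

For the single clause (x_1 ∨ x_2 ∨ ¬x_3), the true path for x_1 threads through the gadget while its false path runs directly to x_2, mirroring the chains described above.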


Supplemental Lecture 13: Subset Sum Approximation

Read: Section 37.4 in CLR.

Polynomial Approximation Schemes: Last time we saw that for some NP-complete problems, it is possible to approximate the problem to within a fixed constant ratio bound. For example, the approximation algorithm produces an answer that is within a factor of 2 of the optimal solution. However, in practice, people would like to control the precision of the approximation. This is done by specifying a parameter ε > 0 as part of the input to the approximation algorithm, and requiring that the algorithm produce an answer that is within a relative error of ε of the optimal solution. It is understood that as ε tends to 0, the running time of the algorithm will increase. Such an algorithm is called a polynomial approximation scheme.

For example, the running time of the algorithm might be O(2^(1/ε) n^2). It is easy to see that in such cases the user pays a big penalty in running time as a function of ε. (For example, to produce a 1% error, the "constant" factor would be 2^100, which would be around 4 quadrillion centuries on your 100 MHz Pentium.) A fully polynomial approximation scheme is one in which the running time is polynomial in both n and 1/ε. For example, a running time of O((n/ε)^2) would satisfy this condition. In such cases, reasonably accurate approximations are computationally feasible.

Unfortunately, there are very few NP-complete problems with fully polynomial approximation schemes. In fact, recently there has been strong evidence that many NP-complete problems do not have polynomial approximation schemes (fully or otherwise). Today we will study one that does.

Subset Sum: Recall that in the subset sum problem we are given a set S = {x_1, x_2, . . . , x_n} of positive integers and a target value t, and we are asked whether there exists a subset S′ ⊆ S that sums exactly to t. The optimization problem is to determine the subset whose sum is as large as possible but not larger than t.

This problem is basic to many packing problems, and is indirectly related to processor scheduling problems that arise in operating systems as well. Suppose we are also given 0 < ε < 1. Let z* ≤ t denote the optimum sum. The approximation problem is to return a value z ≤ t such that

z ≥ z*(1 − ε).

If we think of this as a knapsack problem, we want our knapsack to be within a factor of (1 − ε) of being as full as possible. So, if ε = 0.1, then the knapsack should be at least 90% as full as the best possible.

What do we mean by polynomial time here? Recall that the running time should be polynomial in the size of the input length. Obviously n is part of the input length. But t and the numbers x_i could also be huge binary numbers. Normally we just assume that a binary number can fit into a word of our computer, and do not count their length. In this case we will be on the safe side. Clearly t requires O(log t) digits to be stored in the input. We will take the input size to be n + log t.

Intuitively it is not hard to believe that it should be possible to determine whether we can fill the knapsack to within 90% of optimal. After all, we are used to solving similar sorts of packing problems all the time in real life. But the mental heuristics that we apply to these problems are not necessarily easy to convert into efficient algorithms. Our intuition tells us that we can afford to be a little "sloppy" in keeping track of exactly how full the knapsack is at any point. The value of ε tells us just how sloppy we can be. Our approximation will do something similar. First we consider an exponential time algorithm, and then convert it into an approximation algorithm.

Exponential Time Algorithm: This algorithm is a variation of the dynamic programming solution we gave for the knapsack problem. Recall that there we used a 2-dimensional array to keep track of whether we could fill a knapsack of a given capacity with the first i objects. We will do something similar here. As before, we will concentrate on the question of which sums are possible, but determining the subsets that give these sums will not be hard.

Let L_i denote a list of integers that contains the sums of all 2^i subsets of {x_1, x_2, . . . , x_i} (including the empty set whose sum is 0). For example, for the set {1, 4, 6} the corresponding list of sums contains ⟨0, 1, 4, 5(= 1 + 4), 6, 7(= 1 + 6), 10(= 4 + 6), 11(= 1 + 4 + 6)⟩. Note that L_i can have as many as 2^i elements, but may have fewer, since some subsets may have the same sum.

There are two things we will want to do for efficiency. (1) Remove any duplicates from L_i, and (2) only keep sums that are less than or equal to t. Let us suppose that we have a procedure MergeLists(L1, L2) which merges two sorted lists, and returns a sorted list with all duplicates removed. This is essentially the procedure used in MergeSort but with the added duplicate element test. As a bit of notation, let L + x denote the list resulting by adding the number x to every element of list L. Thus ⟨1, 4, 6⟩ + 3 = ⟨4, 7, 9⟩. This gives the following procedure for the subset sum problem.

Exact Subset Sum

Exact_SS(x[1..n], t) {
    L = <0>;
    for i = 1 to n do {
        L = MergeLists(L, L+x[i]);
        remove from L all elements greater than t;
    }
    return largest element in L;
}

For example, if S = {1, 4, 6} and t = 8 then the successive lists would be

L_0 = ⟨0⟩
L_1 = ⟨0⟩ ∪ ⟨0 + 1⟩ = ⟨0, 1⟩
L_2 = ⟨0, 1⟩ ∪ ⟨0 + 4, 1 + 4⟩ = ⟨0, 1, 4, 5⟩
L_3 = ⟨0, 1, 4, 5⟩ ∪ ⟨0 + 6, 1 + 6, 4 + 6, 5 + 6⟩ = ⟨0, 1, 4, 5, 6, 7, 10, 11⟩.

The last list would have the elements 10 and 11 removed, and the final answer would be 7. The algorithm runs in Ω(2^n) time in the worst case, because this is the number of sums that are generated if there are no duplicates, and no items are removed.
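The pseudocode above translates directly into Python; this is a minimal sketch, with MergeLists rendered via a set union.

```python
# A direct rendering of Exact_SS above (exponential time in the worst case).
# Lists of sums are kept sorted with duplicates removed.

def merge_lists(a, b):
    """Merge two sorted lists, dropping duplicates (as in MergeSort)."""
    return sorted(set(a) | set(b))

def exact_ss(xs, t):
    L = [0]                                     # the empty sum
    for x in xs:
        L = merge_lists(L, [s + x for s in L])  # add x to every existing sum
        L = [s for s in L if s <= t]            # discard sums exceeding t
    return max(L)
```

For the example above, exact_ss([1, 4, 6], 8) returns 7.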

Approximation Algorithm: To convert this into an approximation algorithm, we will introduce a way to "trim" the lists to decrease their sizes. The idea is that if the list L contains two numbers that are very close to one another, e.g. 91,048 and 91,050, then we should not need to keep both of these numbers in the list. One of them is good enough for future approximations. This will reduce the size of the lists that the algorithm needs to maintain. But, how much trimming can we allow and still keep our approximation bound? Furthermore, will we be able to reduce the list sizes from exponential to polynomial?

The answer to both these questions is yes, provided you apply a proper way of trimming the lists. We will trim elements whose values are sufficiently close to each other. But we should define close in a manner that is relative to the sizes of the numbers involved. The trimming must also depend on ε. We select δ = ε/n. (Why? We will see later that this is the value that makes everything work out in the end.) Note that 0 < δ < 1. Assume that the elements of L are sorted. We walk through the list. Let z denote the last untrimmed element in L, and let y ≥ z be the next element to be considered. If

(y − z)/y ≤ δ

then we trim y from the list. Equivalently, this means that the final trimmed list cannot contain two values y and z such that

(1 − δ)y ≤ z ≤ y.

We can think of z as representing y in the list.

For example, given δ = 0.1 and given the list

L = ⟨10, 11, 12, 15, 20, 21, 22, 23, 24, 29⟩,

the trimmed list L′ will consist of

L′ = ⟨10, 12, 15, 20, 23, 29⟩.
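The trimming rule can be sketched in a few lines of Python: keep an element only when it differs from the last kept element by a relative factor of more than δ.

```python
# A minimal sketch of the trimming rule described above. L is assumed
# sorted; delta is the relative-closeness threshold.

def trim(L, delta):
    out = [L[0]]                     # always keep the first element
    last = L[0]                      # last untrimmed element z
    for y in L[1:]:
        if last < (1 - delta) * y:   # y is "different enough" from z
            out.append(y)
            last = y
    return out
```

With δ = 0.1, trim([10, 11, 12, 15, 20, 21, 22, 23, 24, 29], 0.1) reproduces the trimmed list ⟨10, 12, 15, 20, 23, 29⟩ above.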

Another way to visualize trimming is to break the interval [1, t] into a set of buckets of exponentially increasing size. Let d = 1/(1 − δ). Note that d > 1. Consider the intervals [1, d], [d, d^2], [d^2, d^3], . . . , [d^{k−1}, d^k] where d^k ≥ t. If z ≤ y are in the same interval [d^{i−1}, d^i] then

(y − z)/y ≤ (d^i − d^{i−1})/d^i = 1 − 1/d = δ.

Thus, we cannot have more than one item within each bucket. We can think of trimming as a way of enforcing the condition that items in our lists are not relatively too close to one another, by enforcing the condition that no bucket has more than one item.

(L and its trimmed version L′, shown against buckets at 1, 2, 4, 8, 16.)

Fig. 82: Trimming Lists for Approximate Subset Sum.

Claim: The number of distinct items in a trimmed list is O((n log t)/ε), which is polynomial in input size and 1/ε.

Proof: We know that each pair of consecutive elements in a trimmed list differ by a ratio of at least d = 1/(1 − δ) > 1. Let k denote the number of elements in the trimmed list, ignoring the element of value 0. Thus, the smallest nonzero value and maximum value in the trimmed list differ by a ratio of at least d^{k−1}. Since the smallest (nonzero) element is at least as large as 1, and the largest is no larger than t, it follows that d^{k−1} ≤ t/1 = t. Taking the natural log of both sides we have (k − 1) ln d ≤ ln t. Using the facts that δ = ε/n and the log identity that ln(1 + x) ≤ x, we have

k − 1 ≤ ln t / ln d = ln t / (−ln(1 − δ)) ≤ ln t / δ = (n ln t)/ε

k = O((n log t)/ε).

Observe that the input size is at least as large as n (since there are n numbers) and at least as large as log t (since it takes log t digits to write down t on the input). Thus, this function is polynomial in the input size and 1/ε.

The approximation algorithm operates as before, but in addition we call the procedure Trim given below. For example, consider the set S = {104, 102, 201, 101} and t = 308 and ε = 0.20. We have δ = ε/4 = 0.05. Here is a summary of the algorithm's execution.


Approximate Subset Sum

Trim(L, delta) {
    let the elements of L be denoted y[1..m];
    L' = <y[1]>;                       // start with first item
    last = y[1];                       // last item to be added
    for i = 2 to m do {
        if (last < (1-delta) y[i]) {   // different enough?
            append y[i] to end of L';
            last = y[i];
        }
    }
    return L';
}

Approx_SS(x[1..n], t, eps) {
    delta = eps/n;                     // approx factor
    L = <0>;                           // empty sum = 0
    for i = 1 to n do {
        L = MergeLists(L, L+x[i]);     // add in next item
        L = Trim(L, delta);            // trim away "near" duplicates
        remove from L all elements greater than t;
    }
    return largest element in L;
}

init:    L_0 = ⟨0⟩
merge:   L_1 = ⟨0, 104⟩
trim:    L_1 = ⟨0, 104⟩
remove:  L_1 = ⟨0, 104⟩
merge:   L_2 = ⟨0, 102, 104, 206⟩
trim:    L_2 = ⟨0, 102, 206⟩
remove:  L_2 = ⟨0, 102, 206⟩
merge:   L_3 = ⟨0, 102, 201, 206, 303, 407⟩
trim:    L_3 = ⟨0, 102, 201, 303, 407⟩
remove:  L_3 = ⟨0, 102, 201, 303⟩
merge:   L_4 = ⟨0, 101, 102, 201, 203, 302, 303, 404⟩
trim:    L_4 = ⟨0, 101, 201, 302, 404⟩
remove:  L_4 = ⟨0, 101, 201, 302⟩

The final output is 302. The optimum is 307 = 104 + 102 + 101. So our actual relative error in this case is within 2%.

The running time of the procedure is O(n|L|) which is O(n^2 ln t/ε) by the earlier claim.


Approximation Analysis: The final question is why the algorithm achieves a relative error of at most ε over the optimum solution. Let Y* denote the optimum (largest) subset sum and let Y denote the value returned by the algorithm. We want to show that Y is not too much smaller than Y*, that is,

Y ≥ Y*(1 − ε).

Our proof will make use of an important inequality from real analysis.

Lemma: For n > 0 and any real number a,

(1 + a) ≤ (1 + a/n)^n ≤ e^a.

Recall that our intuition was that we would allow a relative error of ε/n at each stage of the algorithm. Since the algorithm has n stages, the total relative error should be (obviously?) n(ε/n) = ε. The catch is that these are relative, not absolute, errors. These errors do not accumulate additively, but rather by multiplication. So we need to be more careful.

Let L*_i denote the i-th list in the exponential time (optimal) solution and let L_i denote the i-th list in the approximate algorithm. We claim that for each y ∈ L*_i there exists a representative item z ∈ L_i whose relative error from y satisfies

(1 − ε/n)^i y ≤ z ≤ y.

The proof of the claim is by induction on i. Initially L_0 = L*_0 = ⟨0⟩, and so there is no error. Suppose by induction that the above equation holds for each item in L*_{i−1}. Consider an element y ∈ L*_{i−1}. We know that y will generate two elements in L*_i: y and y + x_i. We want to argue that there will be a representative that is "close" to each of these items.

By our induction hypothesis, there is a representative element z in L_{i−1} such that

(1 − ε/n)^{i−1} y ≤ z ≤ y.

When we apply our algorithm, we will form two new items to add (initially) to L_i: z and z + x_i. Observe that by adding x_i to the inequality above and a little simplification we get

(1 − ε/n)^{i−1} (y + x_i) ≤ z + x_i ≤ y + x_i.

Fig. 83: Subset sum approximation analysis.

The items z and z + x_i might not appear in L_i because they may be trimmed. Let z′ and z″ be their respective representatives. Thus, z′ and z″ are elements of L_i. We have

(1 − ε/n) z ≤ z′ ≤ z
(1 − ε/n)(z + x_i) ≤ z″ ≤ z + x_i.


Combining these with the inequalities above we have

(1 − ε/n)^{i−1} (1 − ε/n) y ≤ (1 − ε/n)^i y ≤ z′ ≤ y
(1 − ε/n)^{i−1} (1 − ε/n)(y + x_i) ≤ (1 − ε/n)^i (y + x_i) ≤ z″ ≤ y + x_i.

Since z′ and z″ are in L_i, this is the desired result. This ends the proof of the claim.

Using our claim, and the fact that Y* (the optimum answer) is the largest element of L*_n and Y (the approximate answer) is the largest element of L_n, we have

(1 − ε/n)^n Y* ≤ Y ≤ Y*.

This is not quite what we wanted. We wanted to show that (1 − ε)Y* ≤ Y. To complete the proof, we observe from the lemma above (setting a = −ε) that

(1 − ε) ≤ (1 − ε/n)^n.

This completes the approximation analysis.
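The final bound can be checked numerically on the earlier worked example (ε = 0.20, n = 4, Y = 302, Y* = 307):

```python
# Numeric sanity check of (1 - eps) <= (1 - eps/n)^n and of the final
# guarantee Y >= (1 - eps) * Y*, using the worked example above.

eps, n = 0.20, 4
assert (1 - eps) <= (1 - eps / n) ** n   # 0.8 <= 0.95^4

Y_star, Y = 307, 302                     # optimum and returned value
assert Y >= (1 - eps) * Y_star           # 302 >= 245.6
```

Here 0.95^4 ≈ 0.8145, so the per-stage errors multiply out to well within the allowed factor of 1 − ε.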

Lecture Notes 135 CMSC 451

**Lecture 1: Course Introduction
**

Read: (All readings are from Cormen, Leiserson, Rivest and Stein, Introduction to Algorithms, 2nd Edition). Review Chapts. 1–5 in CLRS. What is an algorithm? Our text deﬁnes an algorithm to be any well-deﬁned computational procedure that takes some values as input and produces some values as output. Like a cooking recipe, an algorithm provides a step-by-step method for solving a computational problem. Unlike programs, algorithms are not dependent on a particular programming language, machine, system, or compiler. They are mathematical entities, which can be thought of as running on some sort of idealized computer with an inﬁnite random access memory and an unlimited word size. Algorithm design is all about the mathematical theory behind the design of good programs. Why study algorithm design? Programming is a very complex task, and there are a number of aspects of programming that make it so complex. The ﬁrst is that most programming projects are very large, requiring the coordinated efforts of many people. (This is the topic a course like software engineering.) The next is that many programming projects involve storing and accessing large quantities of data efﬁciently. (This is the topic of courses on data structures and databases.) The last is that many programming projects involve solving complex computational problems, for which simplistic or naive solutions may not be efﬁcient enough. The complex problems may involve numerical data (the subject of courses on numerical analysis), but often they involve discrete data. This is where the topic of algorithm design and analysis is important. Although the algorithms discussed in this course will often represent only a tiny fraction of the code that is generated in a large software system, this small fraction may be very important for the success of the overall project. 
An unfortunately common approach to this problem is to ﬁrst design an inefﬁcient algorithm and data structure to solve the problem, and then take this poor design and attempt to ﬁne-tune its performance. The problem is that if the underlying design is bad, then often no amount of ﬁne-tuning is going to make a substantial difference. The focus of this course is on how to design good algorithms, and how to analyze their efﬁciency. This is among the most basic aspects of good programming. Course Overview: This course will consist of a number of major sections. The ﬁrst will be a short review of some preliminary material, including asymptotics, summations, and recurrences and sorting. These have been covered in earlier courses, and so we will breeze through them pretty quickly. We will then discuss approaches to designing optimization algorithms, including dynamic programming and greedy algorithms. The next major focus will be on graph algorithms. This will include a review of breadth-ﬁrst and depth-ﬁrst search and their application in various problems related to connectivity in graphs. Next we will discuss minimum spanning trees, shortest paths, and network ﬂows. We will brieﬂy discuss algorithmic problems arising from geometric settings, that is, computational geometry. Most of the emphasis of the ﬁrst portion of the course will be on problems that can be solved efﬁciently, in the latter portion we will discuss intractability and NP-hard problems. These are problems for which no efﬁcient solution is known. Finally, we will discuss methods to approximate NP-hard problems, and how to prove how close these approximations are to the optimal solutions. Issues in Algorithm Design: Algorithms are mathematical objects (in contrast to the must more concrete notion of a computer program implemented in some programming language and executing on some machine). As such, we can reason about the properties of algorithms mathematically. 
When designing an algorithm there are two fundamental issues to be considered: correctness and efﬁciency. It is important to justify an algorithm’s correctness mathematically. For very complex algorithms, this typically requires a careful mathematical proof, which may require the proof of many lemmas and properties of the solution, upon which the algorithm relies. For simple algorithms (BubbleSort, for example) a short intuitive explanation of the algorithm’s basic invariants is sufﬁcient. (For example, in BubbleSort, the principal invariant is that on completion of the ith iteration, the last i elements are in their proper sorted positions.) Lecture Notes 2 CMSC 451

Establishing efﬁciency is a much more complex endeavor. Intuitively, an algorithm’s efﬁciency is a function of the amount of computational resources it requires, measured typically as execution time and the amount of space, or memory, that the algorithm uses. The amount of computational resources can be a complex function of the size and structure of the input set. In order to reduce matters to their simplest form, it is common to consider efﬁciency as a function of input size. Among all inputs of the same size, we consider the maximum possible running time. This is called worst-case analysis. It is also possible, and often more meaningful, to measure average-case analysis. Average-case analyses tend to be more complex, and may require that some probability distribution be deﬁned on the set of inputs. To keep matters simple, we will usually focus on worst-case analysis in this course. Throughout out this course, when you are asked to present an algorithm, this means that you need to do three things: • Present a clear, simple and unambiguous description of the algorithm (in pseudo-code, for example). They key here is “keep it simple.” Uninteresting details should be kept to a minimum, so that the key computational issues stand out. (For example, it is not necessary to declare variables whose purpose is obvious, and it is often simpler and clearer to simply say, “Add X to the end of list L” than to present code to do this or use some arcane syntax, such as “L.insertAtEnd(X).”) • Present a justiﬁcation or proof of the algorithm’s correctness. Your justiﬁcation should assume that the reader is someone of similar background as yourself, say another student in this class, and should be convincing enough make a skeptic believe that your algorithm does indeed solve the problem correctly. Avoid rambling about obvious or trivial elements. A good proof provides an overview of what the algorithm does, and then focuses on any tricky elements that may not be obvious. 
• Present a worst-case analysis of the algorithms efﬁciency, typically it running time (but also its space, if space is an issue). Sometimes this is straightforward, but if not, concentrate on the parts of the analysis that are not obvious. Note that the presentation does not need to be in this order. Often it is good to begin with an explanation of how you derived the algorithm, emphasizing particular elements of the design that establish its correctness and efﬁciency. Then, once this groundwork has been laid down, present the algorithm itself. If this seems to be a bit abstract now, don’t worry. We will see many examples of this process throughout the semester.

**Lecture 2: Mathematical Background
**

Read: Review Chapters 1–5 in CLRS. Algorithm Analysis: Today we will review some of the basic elements of algorithm analysis, which were covered in previous courses. These include asymptotics, summations, and recurrences. Asymptotics: Asymptotics involves O-notation (“big-Oh”) and its many relatives, Ω, Θ, o (“little-Oh”), ω. Asymptotic notation provides us with a way to simplify the functions that arise in analyzing algorithm running times by ignoring constant factors and concentrating on the trends for large values of n. For example, it allows us to reason that for three algorithms with the respective running times n3 log n + 4n2 + 52n log n ∈ Θ(n3 log n) 15n2 + 7n log3 n ∈ Θ(n2 ) 3n + 4 log5 n + 19n2 ∈ Θ(n2 ). Thus, the ﬁrst algorithm is signiﬁcantly slower for large n, while the other two are comparable, up to a constant factor. Since asymptotics were covered in earlier courses, I will assume that this is familiar to you. Nonetheless, here are a few facts to remember about asymptotic notation: Lecture Notes 3 CMSC 451

Ignore constant factors: Multiplicative constant factors are ignored. For example, 347n is Θ(n). Constant factors appearing in exponents cannot be ignored. For example, 2^(3n) is not O(2^n).

Focus on large n: Asymptotic analysis means that we consider trends for large values of n. Thus, the fastest growing function of n is the only one that needs to be considered. For example, 3n^2 log n + 25n log n + (log n)^7 is Θ(n^2 log n).

Polylog, polynomial, and exponential: These are the most common functions that arise in analyzing algorithms:

    Polylogarithmic: Powers of log n, such as (log n)^7. We will usually write this as log^7 n.
    Polynomial: Powers of n, such as n^4 and √n = n^(1/2).
    Exponential: A constant (not 1) raised to the power n, such as 3^n.

An important fact is that polylogarithmic functions are strictly asymptotically smaller than polynomial functions, which are strictly asymptotically smaller than exponential functions (assuming the base of the exponent is bigger than 1). For example, if we let ≺ mean "asymptotically smaller", then log^a n ≺ n^b ≺ c^n for any a, b, and c, provided that b > 0 and c > 1.

Logarithm simplification: It is a good idea to first simplify terms involving logarithms. For example, the following formulas are useful. Here a, b, and c are constants:

    log_b n    =  (log_a n)/(log_a b)  =  Θ(log_a n)
    log_a(n^c) =  c log_a n            =  Θ(log_a n)
    b^(log_a n) = n^(log_a b).

Avoid using log n in exponents. The last rule above can be used to achieve this. For example, rather than saying 3^(log_2 n), express this as n^(log_2 3) ≈ n^1.585.

Following the conventional sloppiness, I will often say O(n^2), when in fact the stronger statement Θ(n^2) holds. (This is just because it is easier to say "oh" than "theta".)

Summations: Summations naturally arise in the analysis of iterative algorithms. Also, more complex forms of analysis, such as recurrences, are often solved by reducing them to summations. Solving a summation means reducing it to a closed form formula, that is, one having no summations, recurrences, integrals, or other complex operators. In algorithm design it is often not necessary to solve a summation exactly, since an asymptotic approximation or close upper bound is usually good enough. Here are some common summations and some tips to use in solving summations.

Constant Series: For integers a and b,

    Σ_{i=a}^{b} 1 = max(b − a + 1, 0).

Notice that when b = a − 1, there are no terms in the summation (since the index is assumed to count upwards only), and the result is 0. Be careful to check that b ≥ a − 1 before applying this formula blindly.

Arithmetic Series: For n ≥ 0,

    Σ_{i=0}^{n} i = 1 + 2 + ··· + n = n(n + 1)/2.

This is Θ(n^2). (The starting bound could have just as easily been set to 1 as 0.)
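These closed forms are easy to sanity-check mechanically. Here is a small Python sketch (mine, not part of the notes) that compares the constant and arithmetic series formulas against brute-force summation:

```python
# Numerical check of the constant and arithmetic series closed forms.

def constant_series(a, b):
    """Sum of 1 for i = a..b, via the closed form max(b - a + 1, 0)."""
    return max(b - a + 1, 0)

def arithmetic_series(n):
    """Sum of i for i = 0..n, via the closed form n(n+1)/2."""
    return n * (n + 1) // 2

# Compare against direct summation, including the empty case b = a - 1.
for a, b in [(3, 10), (5, 4), (0, 0)]:
    assert constant_series(a, b) == sum(1 for i in range(a, b + 1))

for n in range(50):
    assert arithmetic_series(n) == sum(range(n + 1))
```

The case (5, 4) exercises the b = a − 1 boundary mentioned above: the closed form correctly yields 0.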

Quadratic Series: For n ≥ 0,

    Σ_{i=0}^{n} i^2 = 1^2 + 2^2 + ··· + n^2 = (2n^3 + 3n^2 + n)/6.

Geometric Series: Let x ≠ 1 be any constant (independent of n), then for n ≥ 0,

    Σ_{i=0}^{n} x^i = 1 + x + x^2 + ··· + x^n = (x^(n+1) − 1)/(x − 1).

If 0 < x < 1 then this is Θ(1). If x > 1, then this is Θ(x^n), that is, the entire sum is proportional to the last element of the series.

Linear-geometric Series: This arises in some algorithms based on trees and recursion. Let x ≠ 1 be any constant, then for n ≥ 0,

    Σ_{i=0}^{n−1} i·x^i = x + 2x^2 + 3x^3 + ··· + (n−1)x^(n−1) = ((n − 1)x^(n+1) − n·x^n + x)/(x − 1)^2.

As n becomes large, this is asymptotically dominated by the term (n − 1)x^(n+1)/(x − 1)^2 (for x > 1). Since we may multiply this times the constant (x − 1)^2/x without changing the asymptotics, and the multiplicative term n − 1 is very nearly equal to n for large n, what remains is Θ(n·x^n).

Harmonic Series: This arises often in probabilistic analyses of algorithms. It does not have an exact closed form solution, but it can be closely approximated. For n ≥ 0,

    H_n = Σ_{i=1}^{n} 1/i = 1 + 1/2 + 1/3 + ··· + 1/n = (ln n) + O(1).

There are also a few tips to learn about solving summations.

Summations with general bounds: When a summation does not start at 1 or 0, as most of the above formulas assume, you can just split it up into the difference of two summations. For example, for 1 ≤ a ≤ b,

    Σ_{i=a}^{b} f(i) = Σ_{i=0}^{b} f(i) − Σ_{i=0}^{a−1} f(i).

Linearity of Summation: Constant factors and added terms can be split out to make summations simpler. For example,

    Σ (4 + 3i(i − 2)) = Σ (4 + 3i^2 − 6i) = Σ 4 + 3 Σ i^2 − 6 Σ i.

Now the formulas can be applied to each summation individually.

Approximate using integrals: Integration and summation are closely related. (Integration is in some sense a continuous form of summation.) Here is a handy formula. Let f(x) be any monotonically increasing function (the function increases as x increases). Then

    ∫_0^n f(x) dx  ≤  Σ_{i=1}^{n} f(i)  ≤  ∫_1^(n+1) f(x) dx.
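The series formulas above can also be verified numerically. The following Python sketch (mine, not from the notes) checks the geometric closed form against direct summation, and checks that H_n − ln n stays bounded, as the harmonic approximation claims:

```python
# Numerical checks for the geometric and harmonic series facts.
import math

def geometric_series(x, n):
    """Sum of x^i for i = 0..n via the closed form (x^(n+1) - 1)/(x - 1)."""
    return (x ** (n + 1) - 1) / (x - 1)

# The closed form agrees with direct summation.
assert math.isclose(geometric_series(3.0, 10), sum(3.0 ** i for i in range(11)))

# Harmonic numbers: H_n = ln(n) + O(1); the difference is bounded
# (it tends to Euler's constant, about 0.5772).
n = 10 ** 5
H_n = sum(1.0 / i for i in range(1, n + 1))
assert abs(H_n - math.log(n)) < 1.0
```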

Example: Right Dominant Elements: As an example of the use of summations in algorithm analysis, consider the following simple problem. We are given a list L of numeric values. We say that an element of L is right dominant if it is strictly larger than all the elements that follow it in the list. Note that the last element of the list is always right dominant, as is the last occurrence of the maximum element of the array. For example, consider the following list:

    L = ⟨10, 9, 5, 13, 2, 7, 1, 8, 4, 6, 3⟩.

The sequence of right dominant elements is ⟨13, 8, 6, 3⟩.

In order to make this more concrete, we should think about how L is represented. It will make a difference whether L is represented as an array (allowing for random access), a doubly linked list (allowing for sequential access in both directions), or a singly linked list (allowing for sequential access in only one direction). Among the three possible representations, the array representation seems to yield the simplest and clearest algorithm. However, we will design the algorithm in such a way that it only performs sequential scans, so it could also be implemented using a singly linked or doubly linked list. (This is common in algorithms. Choose your representation to make the algorithm as simple and clear as possible, but give thought to how it may actually be implemented. Remember that algorithms are read by humans, not compilers.) We will assume here that the array L of size n is indexed from 1 to n.

Think for a moment how you would solve this problem. Can you see an O(n) time algorithm? (If not, think a little harder.) To illustrate summations, we will first present a naive O(n^2) time algorithm, which operates by simply checking, for each element of the array, whether all the subsequent elements are strictly smaller. (Although this example is pretty stupid, it will also serve to illustrate the sort of style that we will use in presenting algorithms.)
Right Dominant Elements (Naive Solution)
    // Input: List L of numbers given as an array L[1..n]
    // Returns: List D containing the right dominant elements of L
    RightDominant(L) {
        D = empty list
        for (i = 1 to n) {
            isDominant = true
            for (j = i+1 to n)
                if (L[i] <= L[j]) isDominant = false
            if (isDominant) append L[i] to D
        }
        return D
    }

If I were programming this, I would rewrite the inner (j) loop as a while loop, since we can terminate the loop as soon as we find that L[i] is not dominant. Again, this sort of optimization is good to keep in mind in programming, but it will be omitted since it will not affect the worst-case running time.

The time spent in this algorithm is dominated (no pun intended) by the time spent in the inner (j) loop. On the ith iteration of the outer loop, the inner loop is executed from i + 1 to n, for a total of n − (i + 1) + 1 = n − i times. (Recall the rule for the constant series above.) Each iteration of the inner loop takes constant time. Thus, the running time, as a function of n, is given by the following summation:

    T(n) = Σ_{i=1}^{n} (n − i).

To solve this summation, let us expand it, and put it into a form such that the above formulas can be used:

    T(n) = (n − 1) + (n − 2) + ··· + 2 + 1 + 0
         = 0 + 1 + 2 + ··· + (n − 2) + (n − 1)
         = Σ_{i=0}^{n−1} i  =  (n − 1)n/2.

The last step comes from applying the formula for the linear series (using n − 1 in place of n in the formula).
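For concreteness, here is the naive pseudocode transcribed into runnable Python (a sketch of mine; the notes use 1-based arrays, while Python is 0-based, and `all(...)` provides the early termination mentioned above):

```python
# Naive O(n^2) right dominant elements, mirroring the pseudocode.

def right_dominant_naive(L):
    """Return the right dominant elements of L, in left-to-right order."""
    D = []
    n = len(L)
    for i in range(n):
        # L[i] is right dominant iff every later element is strictly smaller.
        # all() stops early at the first counterexample, like the while-loop
        # optimization described in the text.
        if all(L[i] > L[j] for j in range(i + 1, n)):
            D.append(L[i])
    return D

# The example list from the notes.
L = [10, 9, 5, 13, 2, 7, 1, 8, 4, 6, 3]
assert right_dominant_naive(L) == [13, 8, 6, 3]
```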

As mentioned above, there is a simple O(n) time algorithm for this problem. As an exercise, see if you can find it. As an additional challenge, see if you can design your algorithm so it only performs a single left-to-right scan of the list L. (You are allowed to use up to O(n) working storage to do this.)

Recurrences: Another useful mathematical tool in algorithm analysis will be recurrences. They arise naturally in the analysis of divide-and-conquer algorithms. Recall that these algorithms have the following general structure.

    Divide: Divide the problem into two or more subproblems (ideally of roughly equal sizes),
    Conquer: Solve each subproblem recursively, and
    Combine: Combine the solutions to the subproblems into a single global solution.

How do we analyze recursive procedures like this one? If there is a simple pattern to the sizes of the recursive calls, then the best way is usually by setting up a recurrence, that is, a function which is defined recursively in terms of itself.

Here is a typical example. Suppose that we break the problem into two subproblems, each of size roughly n/2. (We will assume exactly n/2 for simplicity.) The additional overhead of splitting and merging the solutions is O(n). When the subproblems are reduced to size 1, we can solve them in O(1) time. We will ignore constant factors, writing O(n) just as n, yielding the following recurrence:

    T(n) = 1              if n = 1,
    T(n) = 2T(n/2) + n    if n > 1.

Note that, since we assume that n is an integer, this recurrence is not well defined unless n is a power of 2 (since otherwise n/2 will at some point be a fraction). To be formally correct, I should either write ⌊n/2⌋ or restrict the domain of n, but I will often be sloppy in this way.

There are a number of methods for solving the sort of recurrences that show up in divide-and-conquer algorithms. The easiest method is to apply the Master Theorem, given in CLRS. Here is a slightly more restrictive version, but one that is adequate for a lot of instances. See CLRS for the more complete version of the Master Theorem and its proof.

Theorem: (Simplified Master Theorem) Let a ≥ 1, b > 1 be constants and let T(n) be the recurrence T(n) = aT(n/b) + cn^k, defined for n ≥ 0.

    Case 1: if a > b^k then T(n) is Θ(n^(log_b a)).
    Case 2: if a = b^k then T(n) is Θ(n^k log n).
    Case 3: if a < b^k then T(n) is Θ(n^k).

Using this version of the Master Theorem, we can see that in our recurrence a = 2, b = 2, and k = 1, so a = b^k and Case 2 applies. Thus T(n) is Θ(n log n).

There are many recurrences that cannot be put into this form. For example, the following recurrence is quite common: T(n) = 2T(n/2) + n log n. This solves to T(n) = Θ(n log^2 n), but the Master Theorem (either this form or the one in CLRS) will not tell you this. For such recurrences, other methods are needed.
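The case analysis of the Simplified Master Theorem can be mechanized directly. Here is a small Python sketch (mine, not from the notes) that classifies a recurrence T(n) = aT(n/b) + cn^k and returns its asymptotic class as a string:

```python
# Simplified Master Theorem classifier for T(n) = a T(n/b) + c n^k.
import math

def master(a, b, k):
    """Return the asymptotic class of T(n) = a T(n/b) + c n^k as a string."""
    if a > b ** k:                        # Case 1
        return f"Theta(n^{math.log(a, b):g})"
    elif a == b ** k:                     # Case 2
        return f"Theta(n^{k} log n)"
    else:                                 # Case 3
        return f"Theta(n^{k})"

# Our recurrence T(n) = 2T(n/2) + n: a = 2, b = 2, k = 1, so Case 2.
assert master(2, 2, 1) == "Theta(n^1 log n)"
# By contrast, T(n) = 4T(n/2) + n falls into Case 1: Theta(n^2).
assert master(4, 2, 1) == "Theta(n^2)"
```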
Lecture 3: Review of Sorting and Selection

Read: Review Chapts. 6–9 in CLRS.

Review of Sorting: Sorting is among the most basic problems in algorithm design. We are given a sequence of items, each associated with a given key value. The problem is to permute the items so that they are in increasing (or decreasing) order by key. Sorting is important because it is often the first step in more complex algorithms. Sorting algorithms are usually divided into two classes: internal sorting algorithms, which assume that data is stored in an array in main memory, and external sorting algorithms, which assume that data is stored on disk or some other device that is best accessed sequentially. We will only consider internal sorting.

You are probably familiar with one or more of the standard simple Θ(n^2) sorting algorithms, such as InsertionSort, SelectionSort, and BubbleSort. (By the way, BubbleSort is the easiest one to remember, but it is widely considered to be the worst of the three.) These algorithms are quite acceptable for small lists of, say, fewer than 20 elements. The three canonical efficient comparison-based sorting algorithms are MergeSort, QuickSort, and HeapSort. All run in Θ(n log n) time.

Sorting algorithms often have additional properties that are of interest, depending on the application. Here are two important properties.

In-place: The algorithm uses no additional array storage, and hence (other than perhaps the system's recursion stack) it is possible to sort very large lists without the need to allocate additional working storage.

Stable: A sorting algorithm is stable if two elements that are equal remain in the same relative position after sorting is completed. This is of interest, since in some sorting applications you sort first on one key and then on another. It is nice to know that two items that are equal on the second key remain sorted on the first key.

Here is a quick summary of the fast sorting algorithms. If you are not familiar with any of these, check out the descriptions in CLRS. They are shown schematically in Fig. 1.

[Figure] Fig. 1: Common O(n log n) comparison-based sorting algorithms. (QuickSort: partition about a pivot x into elements < x and > x, then sort each side. MergeSort: split, sort recursively, merge. HeapSort: buildHeap, then repeatedly extractMax.)

QuickSort: It works recursively, by first selecting a random "pivot value" from the array. Then it partitions the array into elements that are less than and greater than the pivot. Then it recursively sorts each part. QuickSort is widely regarded as the fastest of the fast sorting algorithms (on modern machines). (One explanation is that its inner loop compares elements against a single pivot value, which can be stored in a register for fast access.) This algorithm is Θ(n log n) in the expected case, and Θ(n^2) in the worst case. If properly implemented, the probability that the algorithm takes asymptotically longer (assuming that the pivot is chosen randomly) is extremely small for large n. QuickSort is considered an in-place sorting algorithm, since it uses no other array storage. (It does implicitly use the system's recursion stack, but this is usually not counted.) It is not stable. There is a stable version of QuickSort, but it is not in-place.
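To make the recursive scheme of Fig. 1 concrete, here is a minimal QuickSort sketch in Python (mine, not from the notes). Note that, unlike the real algorithm described above, this version builds new lists rather than partitioning in-place, so it is illustrative only:

```python
# A minimal (not in-place) randomized QuickSort sketch.
import random

def quicksort(A):
    if len(A) <= 1:
        return A
    pivot = random.choice(A)                      # random pivot value
    less    = [x for x in A if x < pivot]         # partition step
    equal   = [x for x in A if x == pivot]
    greater = [x for x in A if x > pivot]
    # Recursively sort the two sides and concatenate.
    return quicksort(less) + equal + quicksort(greater)

data = [5, 3, 8, 1, 9, 2, 7]
assert quicksort(data) == sorted(data)
```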

MergeSort: MergeSort also works recursively. It is a classical divide-and-conquer algorithm. The array is split into two subarrays of roughly equal size. They are sorted recursively. Then the two sorted subarrays are merged together in Θ(n) time. MergeSort is the only stable sorting algorithm of these three. The downside is that MergeSort is the only algorithm of the three that requires additional array storage (ignoring the recursion stack), and thus it is not in-place. This is because the merging process merges the two arrays into a third array. Although it is possible to merge arrays in-place, it cannot be done in Θ(n) time.

HeapSort: HeapSort is based on a nice data structure, called a heap, which is an efficient implementation of a priority queue data structure. A priority queue supports the operations of inserting a key and deleting the element with the smallest key value. A heap can be built for n keys in Θ(n) time, and the minimum key can be extracted in Θ(log n) time. HeapSort is an in-place sorting algorithm, but it is not stable. HeapSort works by building the heap (ordered in reverse order so that the maximum can be extracted efficiently) and then repeatedly extracting the largest element. (Why it extracts the maximum rather than the minimum is an implementation detail, but this is the key to making this work as an in-place sorting algorithm.) If you only want to extract the k smallest values, a heap can allow you to do this in Θ(n + k log n) time. A heap has the additional advantage of being usable in contexts where the priority of elements changes. Each change of priority (key value) can be processed in Θ(log n) time.

Which sorting algorithm should you implement when implementing your programs? The correct answer is probably "none of them". Unless you know that your input has some special properties that suggest a much faster alternative, it is best to rely on the library sorting procedure supplied on your system. Presumably, it has been engineered to produce the best performance for your system, and it saves you debugging time. Nonetheless, it is important to learn about sorting algorithms, since the fundamental concepts covered there apply to much more complex algorithms.

Selection: A simpler problem, related to sorting, is selection. The selection problem is: given an array A of n numbers (not sorted), and an integer k, where 1 ≤ k ≤ n, return the kth smallest value of A. Although selection can be solved in O(n log n) time, by first sorting A and then returning the kth element of the sorted list, it is possible (as we shall see later) to select the kth smallest element in O(n) time. The algorithm is a variant of QuickSort.

Lower Bounds for Comparison-Based Sorting: The fact that O(n log n) sorting algorithms have been the fastest around for many years suggests that this may be the best that we can do. Can we sort faster? The claim is no, provided that the algorithm is comparison-based. A comparison-based sorting algorithm is one in which the algorithm permutes the elements based solely on the results of the comparisons that the algorithm makes between pairs of elements. All of the algorithms we have discussed so far are comparison-based. This does not preclude the possibility of sorting algorithms whose actions are determined by other operations; we will see below that exceptions exist in special cases. The following theorem gives the lower bound on comparison-based sorting.

Theorem: Any comparison-based sorting algorithm has worst-case running time Ω(n log n).

We will not present a proof of this theorem, but the basic argument follows from a simple analysis of the number of possibilities and the time it takes to distinguish among them. There are n! ways to permute a given set of n numbers. Any sorting algorithm must be able to distinguish between each of these different possibilities, since two different permutations need to be treated differently. Since each comparison leads to only two possible outcomes, the execution of the algorithm can be viewed as a binary tree. (This is a bit abstract, but given a sorting algorithm it is not hard, though quite tedious, to trace its execution and set up a new node each time a decision is made.) This binary tree, called a decision tree, must have at least n! leaves, one for each of the possible input permutations. Such a tree, even if perfectly balanced, must have height at least lg(n!). By Stirling's approximation, n! is, up to constant factors, roughly (n/e)^n. Plugging this in and simplifying yields the Ω(n log n) lower bound. This can also be generalized to show that the average-case time to sort is also Ω(n log n).
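The key quantitative step in the argument, lg(n!) = Ω(n log n), is easy to check numerically. The following Python sketch (mine, not from the notes) computes lg(n!) exactly and compares it against n lg n; by Stirling, the ratio approaches 1 from below:

```python
# Numerical check of the decision-tree height bound lg(n!) = Omega(n log n).
import math

def lg_factorial(n):
    """log2(n!) computed exactly via the integer factorial."""
    return math.log2(math.factorial(n))

n = 1000
# Stirling: lg(n!) = n lg n - n lg e + O(log n), so for moderate n the
# ratio to n lg n is already close to (and below) 1.
ratio = lg_factorial(n) / (n * math.log2(n))
assert 0.8 < ratio < 1.0
```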

So. or HeapSort. in Θ(dn) time. if k is O(n). What if you want to sort a set of ﬂoating-point numbers? In the worst-case you are pretty much stuck with using one of the comparison-based sorting algorithms. then it is possible to do better. where a = L/n and b = L mod n. up to constant factors. The running time is Θ(d(n + k)) where d is the number of digits in each value. For example. where each digit is over the range from 0 to n − 1. RadixSort provides a nice way around this by sorting numbers one digit. we can radix sort these numbers in time Θ(2(n + n)) = Θ(n). some groups of bits. To sort these integers we simply sort repeatedly. This leads to the possibility of sorting in linear (that is. 2: Example of RadixSort. An example is shown in Figure 2. and ﬁnishing with the highest order digit. we know that if the numbers are already sorted with respect to low order digits.is. given any integer L in this range. the algorithm is faster. The space needed is Θ(n + k). each having d decimal digits (or digits in any base). numbers having the same high order digit will remain sorted with respect to their low order digit. In particular. we may not want to allocate an array of a million elements. O(n)) time. suppose that you wanted to sort a list of integers in the range from 1 to n2 . b). The algorithm sorts in Θ(n + k) time. and k is the number of distinct values each digit may have. The idea is very simple. at a time. (Note Lecture Notes 10 CMSC 451 . but deceptively clever. but the space requirements go up. roughly (n/e)n . If the integers are in the range from say. and then later we sort with respect to high order digits. you could subtract 1 so that they are now in the range from 0 to n2 − 1. or at least objects (like characters) that can be encoded as small integers. In general this works to sort any n numbers over the range from 1 to nd . Linear Time Sorting: The Ω(n log n) lower bound implies that if we hope to sort numbers faster than in O(n log n) time. 
BucketSort: CountingSort and RadixSort are only good for sorting small integers, or at least objects (like characters) that can be encoded as small integers. What if you want to sort a set of floating-point numbers? In the worst case you are pretty much stuck with using one of the comparison-based sorting algorithms, such as QuickSort, MergeSort, or HeapSort. However, in special cases where you have reason to believe that your numbers are roughly uniformly distributed over some range, it is possible to do better. (Note that this is a strong assumption. This algorithm should not be applied unless you have good reason to believe that this is the case.)

Suppose that the numbers to be sorted range over some interval, say [0, 1). (It is possible to find the maximum and minimum values in O(n) time, and scale the numbers to fit into this range.) The idea is to subdivide this interval into n subintervals. For example, if n = 100, the subintervals would be [0, 0.01), [0.01, 0.02), [0.02, 0.03), and so on. We create n different buckets, one for each interval. Then we make a pass through the list to be sorted, and, using the floor function, we can map each value to its bucket index. (In this case, the index of element x would be ⌊100x⌋.) We then sort each bucket in ascending order. The number of points per bucket should be fairly small, so even a quadratic time sorting algorithm (e.g. BubbleSort or InsertionSort) should work. Finally, all the sorted buckets are concatenated together. An example illustrating this idea is given in Fig. 3.

[Figure] Fig. 3: BucketSort. The values (.42, .71, .10, .14, .86, .38, .59, .17, .81, .56, ...) are distributed into buckets 0 through 9, and each bucket is then sorted individually.

The analysis relies on the fact that, assuming that the numbers are uniformly distributed, the number of elements lying within each bucket on average is a constant. Thus, the expected time needed to sort each bucket is O(1). Since there are n buckets, the total sorting time is Θ(n).
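The scheme just described can be sketched in a few lines of Python (mine, not from the notes; for the per-bucket sort it simply uses the library sort in place of InsertionSort):

```python
# BucketSort sketch for values in [0, 1), using n buckets.

def bucket_sort(A):
    n = len(A)
    buckets = [[] for _ in range(n)]
    for x in A:
        buckets[int(n * x)].append(x)   # floor maps x to its bucket index
    for b in buckets:
        b.sort()                        # each bucket holds O(1) items on average
    return [x for b in buckets for x in b]

# Values similar to those in Fig. 3.
data = [0.42, 0.71, 0.10, 0.14, 0.86, 0.38, 0.59, 0.17, 0.81, 0.56]
assert bucket_sort(data) == sorted(data)
```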
Lecture 4: Dynamic Programming: Longest Common Subsequence

Read: Introduction to Chapt 15, and Section 15.4 in CLRS.

Dynamic Programming: We begin discussion of an important algorithm design technique, called dynamic programming (or DP for short). The technique is among the most powerful for designing algorithms for optimization problems. Dynamic programming problems are typically optimization problems (find the minimum or maximum cost solution, subject to various constraints). The technique is related to divide-and-conquer, in the sense that it breaks problems down into smaller problems that it solves recursively. However, because of the somewhat different nature of dynamic programming problems, standard divide-and-conquer solutions are not usually efficient.

The basic elements that characterize a dynamic programming algorithm are:

Substructure: Decompose your problem into smaller (and hopefully simpler) subproblems. Express the solution of the original problem in terms of solutions for smaller problems.

Table-structure: Store the answers to the subproblems in a table. This is done because subproblem solutions are reused many times.

Bottom-up computation: Combine solutions on smaller subproblems to solve larger subproblems. (Our text also discusses a top-down alternative, called memoization.)

The most important question in designing a DP solution to a problem is how to set up the subproblem structure. This is called the formulation of the problem. Dynamic programming is not applicable to all optimization problems. There are two important elements that a problem must have in order for DP to be applicable.

Optimal substructure: (Sometimes called the principle of optimality.) It states that for the global problem to be solved optimally, each subproblem should be solved optimally. (Not all optimization problems satisfy this. Sometimes it is better to lose a little on one subproblem in order to make a big gain on another.)

Polynomially many subproblems: An important aspect of the efficiency of DP is that the total number of subproblems to be solved should be at most a polynomial number.

Strings: One important area of algorithm design is the study of algorithms for character strings. There are a number of important problems here. Among the most important has to do with efficiently searching for a substring, or generally a pattern, in a large piece of text. (This is what text editors and programs like "grep" do when you perform a search.) In many instances you do not want to find a piece of text exactly, but rather something that is similar. This arises, for example, in genetics research and in document retrieval on the web. One common method of measuring the degree of similarity between two strings is to compute their longest common subsequence.

Longest Common Subsequence: Let us think of character strings as sequences of characters. Given two sequences X = ⟨x1, x2, ..., xm⟩ and Z = ⟨z1, z2, ..., zk⟩, we say that Z is a subsequence of X if there is a strictly increasing sequence of k indices ⟨i1, i2, ..., ik⟩ (1 ≤ i1 < i2 < ... < ik ≤ m) such that Z = ⟨x_i1, x_i2, ..., x_ik⟩. For example, let X = ⟨ABRACADABRA⟩ and let Z = ⟨AADAA⟩; then Z is a subsequence of X. Given two strings X and Y, the longest common subsequence of X and Y is a longest sequence Z that is a subsequence of both X and Y. For example, let X = ⟨ABRACADABRA⟩ and let Y = ⟨YABBADABBADOO⟩. Then the longest common subsequence is Z = ⟨ABADABA⟩. See Fig. 4.

[Figure] Fig. 4: An example of the LCS of two strings X = ABRACADABRA and Y = YABBADABBADOO (LCS = ABADABA).

The Longest Common Subsequence Problem (LCS) is the following: given two sequences X = ⟨x1, ..., xm⟩ and Y = ⟨y1, ..., yn⟩, determine a longest common subsequence. Note that it is not always unique. For example, the LCS of ⟨ABC⟩ and ⟨BAC⟩ is either ⟨AC⟩ or ⟨BC⟩.

DP Formulation for LCS: The simple brute-force solution to the problem would be to try all possible subsequences from one string, and search for matches in the other string, but this is hopelessly inefficient, since there are an exponential number of possible subsequences. Instead, we will derive a dynamic programming solution. In typical DP fashion, we need to break the problem into smaller pieces. There are many ways to do this for strings, but it turns out for this problem that considering all pairs of prefixes will suffice for us. A prefix of a sequence is just an initial string of values, X_i = ⟨x1, x2, ..., xi⟩. X_0 is the empty sequence. The idea will be to compute the longest common subsequence for every possible pair of prefixes. Let c[i, j] denote the length of the longest common subsequence of X_i and Y_j. For example, in the above case we have X_5 = ⟨ABRAC⟩ and Y_6 = ⟨YABBAD⟩. Their longest common subsequence is ⟨ABA⟩. Thus, c[5, 6] = 3. Which of the c[i, j] values do we compute? Since we don't know which will lead to the final optimum, we compute all of them. Eventually we are interested in c[m, n], since this will be the LCS of the two entire strings. The idea is to compute c[i, j] assuming that we already know the values of c[i', j'], for i' ≤ i and j' ≤ j (but not both equal). Here are the possible cases.

Basis: c[i, 0] = c[0, j] = 0. If either sequence is empty, then the longest common subsequence is empty.

Last characters match: Suppose xi = yj. For example: let X_i = ⟨ABCA⟩ and let Y_j = ⟨DACA⟩. Since both end in A, we claim that the LCS must also end in A. (We will leave the proof as an exercise.) Since the A is part of the LCS, we may find the overall LCS by removing A from both sequences and taking the LCS of X_{i−1} = ⟨ABC⟩ and Y_{j−1} = ⟨DAC⟩, which is ⟨AC⟩, and then adding A to the end, giving ⟨ACA⟩ as the answer. (At first you might object: But how did you know that these two A's matched with each other? The answer is that we don't, but it will not make the LCS any smaller if we do.) This is illustrated at the top of Fig. 5. Thus, if xi = yj then

    c[i, j] = c[i − 1, j − 1] + 1.

Last characters do not match: Suppose that xi ≠ yj. In this case xi and yj cannot both be in the LCS (since they would both have to be the last character of the LCS). Thus either xi is not part of the LCS, or yj is not part of the LCS (and possibly both are not part of the LCS). In the first case (xi is not in the LCS) the LCS of X_i and Y_j is the LCS of X_{i−1} and Y_j, which is c[i − 1, j]. In the second case (yj is not in the LCS) the LCS is the LCS of X_i and Y_{j−1}, which is c[i, j − 1]. We do not know which is the case, so we try both and take the one that gives us the longer LCS. This is illustrated at the bottom half of Fig. 5. Thus, if xi ≠ yj then

    c[i, j] = max(c[i − 1, j], c[i, j − 1]).

[Figure] Fig. 5: The possible cases in the DP formulation of LCS. (Top: matching last characters are added to the LCS, and we recurse on X_{i−1}, Y_{j−1}. Bottom: otherwise we take the better of skipping xi or skipping yj.)

Combining these observations, we have the following formulation:

    c[i, j] = 0                                  if i = 0 or j = 0,
    c[i, j] = c[i − 1, j − 1] + 1                if i, j > 0 and xi = yj,
    c[i, j] = max(c[i, j − 1], c[i − 1, j])      if i, j > 0 and xi ≠ yj.

At this point it may be tempting to try to make a "smart" choice: by analyzing the last few characters of X_i and Y_j, perhaps we can figure out which character is best to discard. However, this approach is doomed to failure (and you are strongly encouraged to think about this, since it is a common point of confusion). Instead, our approach is to take advantage of the fact that we have already precomputed smaller subproblems, and use these results to guide us.

Implementing the Formulation: The task now is to simply implement this formulation. We concentrate only on computing the maximum length of the LCS; later we will see how to extract the actual sequence. We will store some helpful pointers in a parallel array, b[0..m, 0..n]. The code is shown below, and an example is illustrated in Fig. 6.

Build LCS Table
    LCS(x[1..m], y[1..n]) {                       // compute LCS table
        int c[0..m, 0..n]
        for i = 0 to m                            // init column 0
            c[i,0] = 0; b[i,0] = skipX
        for j = 0 to n                            // init row 0
            c[0,j] = 0; b[0,j] = skipY
        for i = 1 to m                            // fill rest of table
            for j = 1 to n
                if (x[i] == y[j]) {               // take x[i] (= y[j]) for LCS
                    c[i,j] = c[i-1,j-1] + 1; b[i,j] = addXY
                }
                else if (c[i-1,j] >= c[i,j-1]) {  // x[i] not in LCS
                    c[i,j] = c[i-1,j]; b[i,j] = skipX
                }
                else {                            // y[j] not in LCS
                    c[i,j] = c[i,j-1]; b[i,j] = skipY
                }
        return c[m,n]                             // return length of LCS
    }

Extracting the LCS
    getLCS(x[1..m], y[1..n], b[0..m, 0..n]) {
        LCSstring = empty string
        i = m; j = n                              // start at lower right
        while (i != 0 && j != 0)                  // go until upper left
            switch b[i,j]
                case addXY:                       // add x[i] (= y[j])
                    add x[i] (or equivalently y[j]) to front of LCSstring
                    i--; j--; break
                case skipX: i--; break            // skip x[i]
                case skipY: j--; break            // skip y[j]
        return LCSstring
    }

[Figure] Fig. 6: Longest common subsequence example for the sequences X = ⟨BACDB⟩ and Y = ⟨BDCB⟩ (LCS = ⟨BCB⟩, length c[5, 4] = 3). The numeric table entries are the values of c[i, j], and the arrow entries are the back pointers used in extracting the sequence.
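The two routines above can be transcribed into runnable Python as follows (a sketch of mine; 0-based indexing, with the same tie-breaking rule as the pseudocode):

```python
# LCS table construction and back-pointer extraction, following the pseudocode.

def lcs(x, y):
    """Return (length, one longest common subsequence) of strings x and y."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:              # last characters match
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "addXY"
            elif c[i - 1][j] >= c[i][j - 1]:      # x[i] not in LCS
                c[i][j] = c[i - 1][j]
                b[i][j] = "skipX"
            else:                                 # y[j] not in LCS
                c[i][j] = c[i][j - 1]
                b[i][j] = "skipY"
    # Follow the back pointers from the lower right corner.
    out, i, j = [], m, n
    while i != 0 and j != 0:
        if b[i][j] == "addXY":
            out.append(x[i - 1]); i -= 1; j -= 1  # diagonal move outputs a char
        elif b[i][j] == "skipX":
            i -= 1
        else:
            j -= 1
    return c[m][n], "".join(reversed(out))

# The example of Fig. 6, and the ABRACADABRA example from the text.
assert lcs("BACDB", "BDCB") == (3, "BCB")
assert lcs("ABRACADABRA", "YABBADABBADOO")[0] == 7
```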

Consider the case of 3 matrices: A1 be 5 × 4. Similarly. and the result will be a p × r matrix C. 0. pqr. An Matrix multiplication is an associative but not a commutative operation. If b[i. We will study the problem in a very restricted instance. multCost[((A1 A2 )A3 )] = (5 · 4 · 6) + (5 · 6 · 2) = 180. j] = addXY means that X[i] and Y [j] together form the last character of the LCS.m. So we take this common character. A2 be 4 × 6 and A3 be 6 × 2. j] above us (↑). Extracting the Actual Sequence: Extracting the ﬁnal LCS is done by using the back pointers stored in b[0. Lecture Notes 15 CMSC 451 . considerable savings can be achieved by reordering the evaluation sequence. . The algorithm also uses O(mn) space. Even for this small example. Suppose that we wish to multiply a series of matrices A1 A2 . This corresponds to the (hopefully familiar) rule that the [i. not all involve the same number of operations. (The number of columns of A must equal the number of rows of B. and outputting a character with each diagonal move gives the ﬁnal subsequence. where the dynamic programming issues are easiest to see. j − 1] to the northwest ( ). and so we skip it and go to b[i − 1. Also recall that when two (nonsquare) matrices are being multiplied. Lecture 5: Dynamic Programming: Chain Matrix Multiplication Read: Chapter 15 of CLRS. respectively. then we know that Y [j] is not in the LCS. thus the total time to multiply these two matrices is proportional to the product of the dimensions. and Section 15. You can multiply a p × q matrix A times a q × r matrix B.2 in particular. j] = skipX . Following these back pointers. Chain Matrix Multiplication: This problem involves the question of determining the optimal sequence for performing a series of operations. k]B[k. j − 1] to the left (←).The running time of the algorithm is clearly O(mn) since there are two nested loops with m and n iterations. and continue with entry b[i − 1.n]. 
Fig. 7: Matrix Multiplication. Multiplying a p x q matrix A by a q x r matrix B yields the p x r product C; the multiplication time is proportional to pqr.
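The cost comparison for the three-matrix example can be checked directly; this small Python sketch just redoes the arithmetic from the text (the function name is mine):

```python
def mult_cost(p, q, r):
    # Time to multiply a p x q matrix by a q x r matrix.
    return p * q * r

# A1 is 5 x 4, A2 is 4 x 6, A3 is 6 x 2.
cost_left = mult_cost(5, 4, 6) + mult_cost(5, 6, 2)    # ((A1 A2) A3)
cost_right = mult_cost(4, 6, 2) + mult_cost(5, 4, 2)   # (A1 (A2 A3))
```
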

Chain Matrix Multiplication Problem: Given a sequence of matrices A1, A2, . . . , An and dimensions p0, p1, . . . , pn, where Ai is of dimension p(i-1) x pi, determine the order of multiplication (represented, say, as a binary tree) that minimizes the number of operations.

Important Note: This algorithm does not perform the multiplications; it just determines the best order in which to perform the multiplications.

Naive Algorithm: We could write a procedure which tries all possible parenthesizations. Unfortunately, the number of ways of parenthesizing an expression is very large. If you have just one or two matrices, then there is only one way to parenthesize. If you have n items, then there are n - 1 places where you could break the list with the outermost pair of parentheses, namely just after the 1st item, just after the 2nd item, etc., and just after the (n - 1)st item. When we split just after the kth item, we create two sublists to be parenthesized, one with k items and the other with n - k items. Then we could consider all the ways of parenthesizing these. Since these are independent choices, if there are L ways to parenthesize the left sublist and R ways to parenthesize the right sublist, then the total is L * R. This suggests the following recurrence for P(n), the number of different ways of parenthesizing n items:

    P(n) = 1                                     if n = 1,
    P(n) = sum_{k=1}^{n-1} P(k) P(n - k)         if n >= 2.

This is related to a famous function in combinatorics called the Catalan numbers (which in turn is related to the number of different binary trees on n nodes). In particular P(n) = C(n - 1), where C(n) is the nth Catalan number:

    C(n) = (1 / (n + 1)) * (2n choose n).

Applying Stirling's formula (which is given in our text), we find that C(n) is in Omega(4^n / n^(3/2)). Since 4^n is exponential and n^(3/2) is just polynomial, the exponential will dominate, implying that this function grows very fast. Thus, this will not be practical except for very small n. In summary, brute force is not an option.

Dynamic Programming Approach: This problem, like other dynamic programming problems, involves determining a structure (in this case, a parenthesization). We want to break the problem into subproblems, whose solutions can be combined to solve the global problem. As is true in almost all dynamic programming solutions, we need to find some way to break the problem into smaller subproblems, and we need to determine a recursive formulation, which represents the optimum solution to each problem in terms of solutions to the subproblems. Let us think about how we can do this.

Since matrices cannot be reordered, it makes sense to think about sequences of matrices. Let Ai..j denote the result of multiplying matrices i through j. It is easy to see that Ai..j is a p(i-1) x pj matrix. (Think about this for a second to be sure you see why.) Now, in order to determine how to perform this multiplication optimally, we need to make many decisions. What we want to do is to break the problem into problems of a similar structure. In parenthesizing the expression, we can consider the highest level of parenthesization. At this level we are simply multiplying two matrices together. That is, for any k, 1 <= k <= n - 1,

    A1..n = A1..k * A(k+1)..n.

Thus, the problem of determining the optimal sequence of multiplications is broken up into two questions: how do we decide where to split the chain (what is k?), and how do we parenthesize the subchains A1..k and A(k+1)..n? The subchain problems can be solved recursively, by applying the same scheme. So, let us think about the problem of determining the best value of k. At this point, you may be tempted to consider some clever ideas. For example, since we want matrices with small dimensions, pick the value of k that minimizes pk. Although this is not a bad idea in principle, it turns out that it does not work in this case. (After all, it might work. It just turns out that it doesn't. This takes a bit of thinking, which you should try.) Instead, we will do the dumbest thing of simply considering all possible choices of k, and taking the best of them. Usually trying all possible choices is bad, since it quickly leads to an exponential number of total possibilities. What saves us here is that there are only O(n^2) different subsequences of matrices. (There are (n choose 2) = n(n - 1)/2 ways of choosing i and j to form Ai..j, to be precise.) Thus, we do not encounter the exponential growth.
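The recurrence for P(n) and the Catalan closed form can be cross-checked with a short sketch (Python; the function names are mine):

```python
from math import comb

def num_parenthesizations(n):
    # P(n): split after item k, then parenthesize each side independently.
    if n == 1:
        return 1
    return sum(num_parenthesizations(k) * num_parenthesizations(n - k)
               for k in range(1, n))

def catalan(n):
    # C(n) = (1/(n+1)) * (2n choose n)
    return comb(2 * n, n) // (n + 1)
```

The identity P(n) = C(n - 1), and the rapid growth, are both easy to observe numerically.
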

Dynamic Programming Formulation: We will store the solutions to the subproblems in a table, and build the table in a bottom-up manner. For 1 <= i <= j <= n, let m[i, j] denote the minimum number of multiplications needed to compute Ai..j. The final answer is m[1, n]. (An auxiliary array s[i, j] will be explained later; it is used to extract the actual sequence.) The optimum cost can be described by the following recursive formulation.

Basis: Observe that if i = j, then the sequence contains only one matrix, and so the cost is 0. (There is nothing to multiply.) Thus, m[i, i] = 0.

Step: If i < j, then we are asking about the product Ai..j. This can be split by considering each k, i <= k < j, as Ai..k times A(k+1)..j. The optimum times to compute Ai..k and A(k+1)..j are, by definition, m[i, k] and m[k+1, j], respectively. We may assume that these values have been computed previously and are already stored in our array. Since Ai..k is a p(i-1) x pk matrix, and A(k+1)..j is a pk x pj matrix, the time to multiply them is p(i-1) pk pj. This suggests the following recursive rule for computing m[i, j]:

    m[i, i] = 0,
    m[i, j] = min_{i <= k < j} ( m[i, k] + m[k+1, j] + p(i-1) pk pj )    for i < j.

Fig. 8: Dynamic Programming Formulation: Ai..j is split as Ai..k * A(k+1)..j.

Notice that our chain matrix multiplication problem satisfies the principle of optimality: once we decide to break the sequence into the product A1..k * A(k+1)..n, we should compute each subsequence optimally. That is, for the global problem to be solved optimally, the subproblems must be solved optimally as well.

It is not hard to convert this rule into a procedure. The only tricky part is arranging the order in which to compute the values. In the process of computing m[i, j] we need to access values m[i, k] and m[k+1, j] for k lying between i and j. This suggests that we should organize our computation according to the number of matrices in the subsequence. Let L = j - i + 1 denote the length of the subchain being multiplied. The subchains of length 1 (m[i, i]) are trivial to compute. Then we build up by computing the subchains of lengths 2, 3, . . . , n. We need to be a little careful in setting up the loops. If a subchain of length L starts at position i, then j = i + L - 1. Since we want j <= n, this means that i + L - 1 <= n, or in other words, i <= n - L + 1. So our loop for i runs from 1 to n - L + 1 (in order to keep j in bounds). The code is presented below. The key is that there are three nested loops, and each can iterate at most n times, so the running time of the procedure is Theta(n^3). (We'll leave the exact analysis as an exercise in solving sums.)

Extracting the final Sequence: Extracting the actual multiplication sequence is a fairly easy extension. The basic idea is to leave a split marker indicating what the best split is, that is, the value of k that leads to the minimum value of m[i, j].

We can maintain a parallel array s[i, j] in which we will store the value of k providing the optimal split. For example, suppose that s[i, j] = k. This tells us that the best way to multiply the subchain Ai..j is to first multiply the subchain Ai..k, then multiply the subchain A(k+1)..j, and finally multiply these together. Intuitively, s[i, j] tells us what multiplication to perform last. Note that we only need to store s[i, j] when we have at least two matrices, that is, if j > i.

Chain Matrix Multiplication
    Matrix-Chain(array p[1..n]) {
        array s[1..n-1, 2..n]
        for i = 1 to n do m[i,i] = 0            // initialize
        for L = 2 to n do {                     // L = length of subchain
            for i = 1 to n-L+1 do {
                j = i + L - 1
                m[i,j] = INFINITY
                for k = i to j-1 do {           // check all splits
                    q = m[i,k] + m[k+1,j] + p[i-1]*p[k]*p[j]
                    if (q < m[i,j]) {
                        m[i,j] = q
                        s[i,j] = k
                    }
                }
            }
        }
        return m[1,n] (final cost) and s (splitting markers)
    }

The actual multiplication algorithm uses the s[i, j] value to determine how to split the current sequence. Assume that the matrices are stored in an array of matrices A[1..n], and that s[i, j] is global to this recursive procedure. The recursive procedure Mult does this computation and returns a matrix.

Extracting Optimum Sequence
    Mult(i, j) {
        if (i == j)                             // basis case
            return A[i]
        else {
            k = s[i,j]
            X = Mult(i, k)                      // X = A[i]...A[k]
            Y = Mult(k+1, j)                    // Y = A[k+1]...A[j]
            return X*Y                          // multiply matrices X and Y
        }
    }

In the figure below we show an example. This algorithm is tricky, so it would be a good idea to trace through this example (and the one given in the text). The initial set of dimensions are 5, 4, 6, 2, 7, meaning that we are multiplying A1 (5 x 4) times A2 (4 x 6) times A3 (6 x 2) times A4 (2 x 7). The optimal sequence is ((A1 (A2 A3)) A4).
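The Matrix-Chain and Mult procedures can be cross-checked with a runnable sketch (Python; the names are mine). It reproduces the example's optimal cost 158 and order ((A1 (A2 A3)) A4), here returning the parenthesization as a string rather than performing the multiplications:

```python
def matrix_chain(p):
    """p[0..n] are the dimensions; matrix i is p[i-1] x p[i]."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for L in range(2, n + 1):                 # L = length of subchain
        for i in range(1, n - L + 2):
            j = i + L - 1
            m[i][j] = float("inf")
            for k in range(i, j):             # check all splits
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k               # remember the best split
    return m, s

def order(s, i, j):
    """Parenthesization implied by the split markers."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + order(s, i, k) + " " + order(s, k + 1, j) + ")"

m, s = matrix_chain([5, 4, 6, 2, 7])
```
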

Fig. 9: Chain Matrix Multiplication Example. For the input dimensions p0, . . . , p4 = 5, 4, 6, 2, 7, the cost table m[i, j] contains the entries 120, 48, 84, 88, 104 and m[1, 4] = 158; the split table s[i, j] records the best splits, and the final order is ((A1 (A2 A3)) A4).

Lecture 6: Dynamic Programming: Minimum Weight Triangulation

Read: This is not covered in CLRS.

Polygons and Triangulations: Let's consider a geometric problem that outwardly appears to be quite different from chain matrix multiplication, but actually has remarkable similarities. We begin with a number of definitions. Define a polygon to be a piecewise linear closed curve in the plane. In other words, we form a cycle by joining line segments end to end. The line segments are called the sides of the polygon and the endpoints are called the vertices. A polygon is simple if it does not cross itself, that is, if the sides do not intersect one another except for two consecutive sides sharing a common vertex. A simple polygon subdivides the plane into its interior, its boundary and its exterior. A simple polygon is said to be convex if every interior angle is at most 180 degrees. Vertices with interior angle equal to 180 degrees are normally allowed, but for this problem we will assume that no such vertices exist.

Fig. 10: Polygons: a (nonsimple) polygon, a simple polygon, and a convex polygon.

Given a convex polygon, we assume that its vertices are labeled in counterclockwise order, P = v1, . . . , vn. We will assume that indexing of vertices is done modulo n, so v0 = vn. This polygon has n sides, v(i-1)vi. Given two nonadjacent vertices vi and vj, where i < j - 1, the line segment vivj is a chord. (If the polygon is simple but not convex, we include the additional requirement that the interior of the segment must lie entirely in the interior of P.) Any chord subdivides the polygon into two polygons: vi, v(i+1), . . . , vj, and vj, v(j+1), . . . , vi.

A triangulation of a convex polygon P is a subdivision of the interior of P into a collection of triangles with disjoint interiors, whose vertices are drawn from the vertices of P. Equivalently, we can define a triangulation as a maximal set T of nonintersecting chords. (In other words, every chord that is not in T intersects the interior of some chord in T.) It is easy to see that such a set of chords subdivides the interior of the polygon into a collection of triangles with pairwise disjoint interiors (and hence the name triangulation). It is not hard to prove (by induction) that every triangulation of an n-sided polygon consists of n - 3 chords and n - 2 triangles. In general, given a convex polygon, there are many possible triangulations. In fact, the number is exponential in n, the number of sides.

Triangulations are of interest for a number of reasons. Many geometric algorithms operate by first decomposing a complex polygonal shape into triangles. Which triangulation is the "best"? There are many criteria that are used depending on the application. One criterion is to imagine that you must "pay" for the ink you use in drawing the triangulation, and you want to minimize the amount of ink you use. (This may sound fanciful, but minimizing wire length is an important condition in chip design.) This suggests the following optimization problem:

Minimum-weight convex polygon triangulation: Given a convex polygon, determine the triangulation that minimizes the sum of the perimeters of its triangles.

Fig. 11: Triangulations of convex polygons: a triangulation and a lower weight triangulation.

Given three distinct vertices vi, vj and vk, we define the weight of the associated triangle by the weight function

    w(vi, vj, vk) = |vi vj| + |vj vk| + |vk vi|,

where |vi vj| denotes the length of the line segment vi vj. Of course, this is only one of many properties which we could choose to optimize.

Dynamic Programming Solution: Let us consider an (n + 1)-sided convex polygon P = v0, v1, . . . , vn, and let us assume that these vertices have been numbered in counterclockwise order. To derive a DP formulation we need to define a set of subproblems from which we can derive the optimum solution. For 0 <= i < j <= n, define t[i, j] to be the weight of the minimum weight triangulation for the subpolygon that lies to the right of directed chord vi vj, that is, the polygon with the counterclockwise vertex sequence vi, v(i+1), . . . , vj. Observe that if we can compute this quantity for all such i and j, then the weight of the minimum weight triangulation of the entire polygon can be extracted as t[0, n]. (As usual, we only compute the minimum weight; it is easy to modify the procedure to extract the actual triangulation.)

As a basis case, we define the weight of the trivial "2-sided polygon" to be zero, implying that t[i, i + 1] = 0. In general, to compute t[i, j], consider the subpolygon vi, v(i+1), . . . , vj, where j > i + 1. One of the chords of this polygon is the side vi vj. We may split this subpolygon by introducing a triangle whose base is this chord, and whose third vertex is any vertex vk, where i < k < j. This subdivides the polygon into the subpolygons vi, v(i+1), . . . , vk and vk, v(k+1), . . . , vj, whose minimum weights are already known to us as t[i, k] and t[k, j]. In addition we should consider the weight of the newly added triangle vi vk vj. Thus, we have the following recursive rule:

    t[i, j] = 0                                                        if j = i + 1,
    t[i, j] = min_{i < k < j} ( t[i, k] + t[k, j] + w(vi, vk, vj) )    if j > i + 1.

Fig. 12: The recursive structure: the chord vi vj is capped by a triangle vi vk vj of cost w(vi, vk, vj), and the two subpolygons are triangulated at costs t[i, k] and t[k, j].

The final output is the overall minimum weight, which is t[0, n]. Note that this has almost exactly the same structure as the recursive definition used in the chain matrix multiplication algorithm (except that some indices are different by 1). The same Theta(n^3) algorithm can be applied with only minor changes.
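The recurrence for t[i, j] can be cross-checked with a short runnable sketch (Python; the function name is mine, and the weight of a triangle is its perimeter, as above). For a unit square the two possible triangulations are symmetric, and the minimum weight is 4 + 2*sqrt(2):

```python
from math import dist

def min_weight_triangulation(v):
    """v[0..n]: vertices of a convex polygon in counterclockwise order.
    Returns t[0, n], using the perimeter of each triangle as its weight."""
    n = len(v) - 1
    t = [[0.0] * (n + 1) for _ in range(n + 1)]   # t[i][i+1] = 0 (basis)
    for L in range(3, n + 2):                     # vertices in the subpolygon
        for i in range(0, n - L + 2):
            j = i + L - 1
            t[i][j] = min(
                t[i][k] + t[k][j]
                + dist(v[i], v[k]) + dist(v[k], v[j]) + dist(v[j], v[i])
                for k in range(i + 1, j)
            )
    return t[0][n]

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
w = min_weight_triangulation(square)
```
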

Relationship to Binary Trees: One explanation behind the similarity of triangulations and the chain matrix multiplication algorithm is to observe that both are fundamentally related to binary trees. In the case of the chain matrix multiplication, the associated binary tree is the evaluation tree for the multiplication, where the leaves of the tree correspond to the matrices, and each internal node of the tree is associated with a product of a sequence of two or more matrices. To see that there is a similar correspondence here, fix one side of the polygon (say v0 vn). Now consider a rooted binary tree whose root node is the triangle containing side v0 vn, whose internal nodes are the nodes of the dual tree, and whose leaves correspond to the remaining sides of the polygon. Observe that partitioning the polygon into triangles is equivalent to a binary tree with n leaves, and vice versa. This is illustrated in Fig. 13.

Fig. 13: Triangulations and tree structure. A triangulation of a 12-sided polygon with vertices v0, . . . , v11, whose sides other than the root side v0 v11 are labeled A1, . . . , A11, corresponds to a rooted binary tree with leaves A1, . . . , A11.

Note that every triangle is associated with an internal node of the tree, and every edge of the original polygon, except for the distinguished starting side v0 vn, is associated with a leaf node of the tree. Then the following two observations follow easily. The associated binary tree has n leaves, and hence (by standard results on binary trees) n - 1 internal nodes; each internal node corresponds to one triangle. Since each internal node other than the root has one edge entering it, there are n - 2 edges between the internal nodes; each such edge corresponds to one chord of the triangulation.

Lecture 7: Greedy Algorithms: Activity Selection and Fractional Knapsack

Read: Sections 16.1 and 16.2 in CLRS.

Greedy Algorithms: In many optimization algorithms a series of selections need to be made. In dynamic programming we saw one way to make these selections: the optimal solution is described in a recursive manner, and then is computed "bottom-up". Dynamic programming is a powerful technique, but it often leads to algorithms with higher than desired running times. Today we will consider an alternative design technique, called greedy algorithms. This method typically leads to simpler and faster algorithms, but it is not as powerful or as widely applicable as dynamic programming. We will give some examples of problems that can be solved by greedy algorithms. (Later in the semester, we will see that this technique can be applied to a number of graph problems as well.) Even when greedy algorithms do not produce the optimal solution, they often provide fast heuristics (nonoptimal solution strategies), and are often used in finding good approximations.

Activity Scheduling: Activity scheduling is a very simple scheduling problem. We are given a set S = {1, 2, . . . , n} of n activities that are to be scheduled to use some resource, where each activity must be started at a given start time si and ends at a given finish time fi. For example, these might be lectures that are to be given in a lecture hall, where the lecture times have been set up in advance, or requests for boats to use a repair facility while they are in port. Because there is only one resource, and some start and finish times may overlap (and two lectures cannot be given in the same room at the same time), not all the requests can be honored. We say that two activities i and j are noninterfering if their start-finish intervals do not overlap; more formally, [si, fi) and [sj, fj) are disjoint. (Note that by making the intervals half open, two consecutive activities are not considered to interfere.) The activity scheduling problem is to select a maximum-size set of mutually noninterfering activities for use of the resource. (Notice that the goal here is the maximum number of activities, not maximum utilization. Of course different criteria could be considered, but the greedy approach may not be optimal in general.)

How do we schedule the largest number of activities on the resource? Intuitively, we do not like long activities, because they occupy the resource and keep us from honoring other requests. This suggests the following greedy strategy: repeatedly select the activity with the smallest duration (fi - si) and schedule it, provided that it does not interfere with any previously scheduled activities. Although this seems like a reasonable strategy, it turns out to be nonoptimal. (See Problem 17.1-4 in CLRS.) Sometimes the design of a correct greedy algorithm requires trying a few different strategies, until hitting on one that works.

Here is a greedy strategy that does work. The intuition is the same: since we do not like activities that take a long time, let us select the activity that finishes first and schedule it. Then we skip all activities that interfere with this one, schedule the next one that has the earliest finish time, and so on. To make the selection process faster, we assume that the activities have been sorted by their finish times, that is, f1 <= f2 <= . . . <= fn. Assuming this sorting, the pseudocode for the rest of the algorithm is presented below. The output is the list A of scheduled activities; the variable prev holds the index of the most recently scheduled activity at any time.

Greedy Activity Scheduler
    schedule(s[1..n], f[1..n]) {    // given start and finish times
        // we assume f[1..n] already sorted
        List A = <1>                // schedule activity 1 first
        prev = 1
        for i = 2 to n
            if (s[i] >= f[prev]) {  // no interference?
                append i to A       // schedule i next
                prev = i
            }
        return A
    }

It is clear that the algorithm is quite simple and efficient. The most costly activity is that of sorting the activities by finish time, so the total running time is Theta(n log n). Fig. 14 shows an example. Each activity is represented by its start-finish time interval. Observe that the intervals are sorted by finish time. Activity 1 is scheduled first; it interferes with activities 2 and 3. Then activity 4 is scheduled; it interferes with activities 5 and 6. Finally, activity 7 is scheduled, and it interferes with the remaining activity. The final output is {1, 4, 7}. Note that this is not the only optimal schedule; {2, 4, 7} is also optimal.
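The scheduler above can be written as a runnable sketch (Python; the input is a list of (start, finish) pairs already sorted by finish time, and the returned indices are 1-based as in the notes). The sample intervals below are made up, but they reproduce the {1, 4, 7} pattern of the example:

```python
def schedule(activities):
    """activities: list of (start, finish) pairs, sorted by finish time.
    Returns the 1-based indices of the greedily scheduled activities."""
    A = [1]                       # schedule activity 1 first
    prev = 0                      # most recently scheduled activity (0-based)
    for i in range(1, len(activities)):
        if activities[i][0] >= activities[prev][1]:   # no interference?
            A.append(i + 1)       # schedule i next
            prev = i
    return A

acts = [(0, 3), (1, 4), (2, 5), (3, 6), (4, 7), (5, 8), (7, 9), (8, 10)]
result = schedule(acts)
```
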

Fig. 14: An example of the greedy algorithm for activity scheduling. The eight activities are shown as start-finish intervals, sorted by finish time. Activity 1 is scheduled (skipping 2 and 3), then activity 4 (skipping 5 and 6), then activity 7 (skipping 8). The final schedule is {1, 4, 7}.

Proof of Optimality: Our proof of optimality is based on showing that the first choice made by the algorithm is the best possible, and then using induction to show that the rest of the choices result in an optimal schedule. Proofs of optimality for greedy algorithms follow a similar structure: suppose that you have any nongreedy solution, and show that its cost can be reduced by being "greedier" at some point in the solution. This proof is complicated a bit by the fact that there may be multiple optimal solutions. Our approach is to show that any schedule that is not greedy can be made more greedy, without decreasing the number of activities.

Claim: The greedy algorithm gives an optimal solution to the activity scheduling problem.

Proof: Consider any optimal schedule A that is not the greedy schedule. We will construct a new optimal schedule A' that is in some sense "greedier" than A. Order the activities in increasing order of finish time, and let A = (x1, x2, . . . , xk) be the activities of A. Since A is not the same as the greedy schedule, consider the first activity xj where these two schedules differ. That is, the greedy schedule is of the form G = (x1, x2, . . . , x(j-1), gj, . . .), where gj is different from xj. (Note that k >= j, since otherwise G would have more activities than the optimal schedule, which would be a contradiction.) The greedy algorithm selects the activity with the earliest finish time that does not conflict with any earlier activity. Thus, we know that gj does not conflict with any earlier activity, and it finishes before xj does.

Consider the modified "greedier" schedule A' = (x1, x2, . . . , x(j-1), gj, x(j+1), . . . , xk) that results by replacing xj with gj in the schedule A. (See Fig. 15.) This is a feasible schedule. (Since gj cannot conflict with the earlier activities, and it does not conflict with later activities, because it finishes before xj.) It has the same number of activities as A, and therefore A' is also optimal. By repeating this process, we will eventually convert A into G, without decreasing the number of activities. Therefore, G is also optimal.

Fig. 15: Proof of optimality for the greedy schedule (j = 3): A = (x1, x2, x3, x4, x5) is transformed into the greedier schedule A' = (x1, x2, g3, x4, x5).

Fractional Knapsack Problem: The classical (0-1) knapsack problem is a famous optimization problem. A thief is robbing a store, and finds n items which can be taken. The ith item is worth vi dollars and weighs wi pounds, where vi and wi are integers. He wants to take as valuable a load as possible, but has a knapsack that can only carry W total pounds. Which items should he take? (The reason that this is called 0-1 knapsack is that each item must be left (0) or taken entirely (1); it is not possible to take a fraction of an item or multiple copies of an item.) This optimization problem arises in industrial packing applications. For example, you may want to ship some subset of items on a truck of limited capacity.

The 0-1 knapsack problem is hard to solve, and in fact it is an NP-complete problem (meaning that there probably doesn't exist an efficient solution). In the fractional knapsack problem the setup is exactly the same, but the thief is allowed to take any fraction of an item for a fraction of the weight and a fraction of the value. In contrast to the 0-1 problem, there is a very simple and efficient greedy algorithm for the fractional problem.

As in the case of the other greedy algorithms we have seen, the idea is to find the right order in which to process the items. There are a few choices that you might try here, but only one works. Intuitively, it is good to have high value and bad to have high weight. This suggests that we first sort the items according to some function that increases with value and decreases with weight. Let rho_i = vi / wi denote the value-per-pound ratio. We sort the items in decreasing order of rho_i, and add them in this order. If an item fits, we take it all. At some point there is an item that does not fit in the remaining space. We take as much of this item as possible, thus filling the knapsack entirely. This is illustrated in Fig. 16.

Fig. 16: Example for the fractional knapsack problem. The input consists of items of weights 5, 10, 20, 30 and 40 with values $30, $20, $100, $90 and $160 (so rho = 6.0, 2.0, 5.0, 3.0, 4.0), and the knapsack capacity is W = 60. The greedy solution to the fractional problem takes the items of weight 5 and 20 entirely, plus 35 units of the item of weight 40, for a total of $30 + $100 + $140 = $270. The greedy solution to the 0-1 problem takes the items of weights 5, 20 and 30 ($220), while the optimal solution to the 0-1 problem takes the items of weights 20 and 40 ($260).
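A runnable sketch of the greedy fractional algorithm (Python; the function name is mine), using the items from Fig. 16:

```python
def fractional_knapsack(items, W):
    """items: list of (value, weight) pairs. Returns the maximum value
    attainable when fractions of items may be taken."""
    total = 0.0
    remaining = W
    # Sort by value-per-pound ratio, largest first.
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w <= remaining:
            total += v                        # item fits: take it all
            remaining -= w
        else:
            total += v * remaining / w        # take a fraction, knapsack full
            break
    return total

# Weights 5, 10, 20, 30, 40 with values $30, $20, $100, $90, $160; W = 60.
items = [(30, 5), (20, 10), (100, 20), (90, 30), (160, 40)]
best = fractional_knapsack(items, 60)
```
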

Correctness: It is intuitively easy to see that the greedy algorithm is optimal for the fractional problem. Given a room with sacks of gold, silver, and bronze, you would obviously take as much gold as possible, then take as much silver as possible, and then as much bronze as possible. But it would never benefit you to take a little less gold so that you could replace it with an equal volume of bronze. (You might think of each object as being a sack of gold, which you can partially empty out before taking.)

More formally, suppose to the contrary that the greedy algorithm is not optimal. This would mean that there is an alternate selection that is optimal. Sort the items of the alternate selection in decreasing order by rho values. Consider the first item i on which the two selections differ. By definition, greedy takes a greater amount of item i than the alternate (because the greedy always takes as much as it can). Let us say that greedy takes x more units of item i than the alternate does. All the subsequent elements of the alternate selection are of lesser value per pound than item i. By replacing x units of any such items with x units of item i, we would increase the overall value of the alternate selection. However, this implies that the alternate selection is not optimal, a contradiction.

Nonoptimality for the 0-1 Knapsack: Next we show that the greedy algorithm is not generally optimal in the 0-1 knapsack problem. Consider the example shown in Fig. 16. If you were to sort the items by rho_i, then you would first take the item of weight 5, then the item of weight 20, and then (since the item of weight 40 does not fit) you would settle for the item of weight 30, for a total value of $30 + $100 + $90 = $220. On the other hand, if you had been less greedy, and ignored the item of weight 5, then you could take the items of weights 20 and 40 for a total value of $100 + $160 = $260. This feature of "delaying gratification" in order to come up with a better overall solution is your indication that the greedy solution is not optimal.

Lecture 8: Greedy Algorithms: Huffman Coding

Read: Section 16.3 in CLRS.

Huffman Codes: Huffman codes provide a method of encoding data efficiently. Normally when characters are coded using standard codes like ASCII, each character is represented by a fixed-length codeword of bits (e.g. 8 bits per character). Fixed-length codes are popular, because it is very easy to break a string up into its individual characters, and to access individual characters and substrings by direct indexing. However, fixed-length codes may not be the most efficient from the perspective of minimizing the total quantity of data.

Consider the following example. Suppose that we want to encode strings over the (rather limited) 4-character alphabet C = {a, b, c, d}. We could use the following fixed-length code:

    Character:               a    b    c    d
    Fixed-Length Codeword:   00   01   10   11

A string such as "abacdaacac" would be encoded by replacing each of its characters by the corresponding binary codeword:

    a    b    a    c    d    a    a    c    a    c
    00   01   00   10   11   00   00   10   00   10

The final 20-character binary string would be "00010010110000100010".

Now, suppose that you knew the relative probabilities of characters in advance. (This might happen by analyzing many strings over a long period of time. In applications like data compression, where you want to encode one file, you can just scan the file and determine the exact frequencies of all the characters.) You can use this knowledge to encode strings differently: frequently occurring characters are encoded using fewer bits and less frequent characters are encoded using more bits. For example, suppose that characters are expected to occur with the following probabilities. We could design a variable-length code which would do a better job:

    Character:                  a      b      c      d
    Probability:                0.60   0.05   0.30   0.05
    Variable-Length Codeword:   0      110    10     111

Notice that there is no requirement that the alphabetical order of characters correspond to any sort of ordering applied to the codewords. Now, the same string would be encoded as follows:

    a    b    a    c    d    a    a    c    a    c
    0    110  0    10   111  0    0    10   0    10
Nonoptimality for the 0-1 Knapsack: Next we show that the greedy algorithm is not generally optimal in the 0-1 knapsack problem. suppose that characters are expected to occur with the following probabilities. you can just scan the ﬁle and determine the exact frequencies of all the characters. for a total value of $30 + $100 + $90 = $220. Suppose that we want to encode strings over the (rather limited) 4-character alphabet C = {a. Huffman Codes: Huffman codes provide a method of encoding data efﬁciently.3 in CLRS. c. All the subsequent elements of the alternate selection are of lesser value than vi . and then (since the item of weight 40 does not ﬁt) you would settle for the item of weight 30. (This might happen by analyzing many strings over a long period of time. then you could take the items of weights 20 and 40 for a total value of $100 + $160 = $260. ﬁxed-length codes may not be the most efﬁcient from the perspective of minimizing the total quantity of data. On the other hand. we would increase the overall value of the alternate selection. then you would ﬁrst take the items of weight 5. Now. Lecture 8: Greedy Algorithms: Huffman Coding Read: Section 16. However.g. because its is very easy to break a string up into its individual characters.

Thus, the resulting 17-character string would be "01100101110010010", and we have achieved a savings of 3 characters by using this alternative code. More generally, what would be the expected savings for a string of length n? For the 2-bit fixed-length code, the length of the encoded string is just 2n bits. For the variable-length code, the expected length of a single encoded character is equal to the sum of the code lengths times the respective probabilities of their occurrences. The expected encoded string length is just n times the expected encoded character length:

    n(0.60 * 1 + 0.05 * 3 + 0.30 * 2 + 0.05 * 3) = n(0.60 + 0.15 + 0.60 + 0.15) = 1.5n.

Thus, this would represent a 25% savings in expected encoding length. The question that we will consider today is how to form the best code, assuming that the probabilities of character occurrences are known.

Prefix Codes: One issue that we didn't consider in the example above is whether we will be able to decode the string, once encoded. In fact, this code was chosen quite carefully. Suppose that instead of coding the character 'a' as 0, we had encoded it as 1. Now the encoded string "111" is ambiguous: it might be "d" and it might be "aaa". How can we avoid this sort of ambiguity? You might suggest that we add separation markers between the encoded characters, but this will tend to lengthen the encoding, which is undesirable. Instead, we would like the code to have the property that it can be uniquely decoded. Note that in both of the codes given in the example above, no codeword is a prefix of another. This turns out to be the key property. Observe that if two codewords did share a common prefix, e.g. a maps to 001 and b maps to 00101, then when we see 00101 . . . how do we know whether the first character of the encoded message is a or b?
Conversely, if no codeword is a prefix of any other, then as soon as we see a codeword appearing as a prefix in the encoded text, we know that we may decode it without fear of it matching some longer codeword. Thus we have the following definition.

Prefix Code: An assignment of codewords to characters so that no codeword is a prefix of any other.

Observe that any binary prefix coding can be described by a binary tree in which the codewords are the leaves of the tree, and where a left branch means "0" and a right branch means "1". The length of a codeword is just its depth in the tree. The code given earlier is a prefix code, and its corresponding tree is shown in the following figure.

[Figure: the prefix code tree for the codewords a = 0, c = 10, b = 110, d = 111; left branches are labeled 0 and right branches 1.]

Fig. 17: Prefix codes.

Decoding a prefix code is simple. We just traverse the tree from root to leaf, letting each input bit tell us which branch to take. On reaching a leaf, we output the corresponding character, and return to the root to continue the process.

Expected encoding length: Once we know the probabilities of the various characters, we can determine the total length of the encoded text. Let p(x) denote the probability of seeing character x, and let d_T(x) denote the length of the codeword (depth in the tree) relative to some prefix tree T. The expected number of bits needed to encode a text with n characters is given by the following formula:

    B(T) = n Σ_{x∈C} p(x) d_T(x).
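To make the decoding procedure and the expected-length formula concrete, here is a small sketch in Python. The code table and probabilities are the example from the text; representing the code tree with nested dicts is an implementation choice, not something from the notes.

```python
# The example prefix code from the text: a = 0, c = 10, b = 110, d = 111.
code = {"a": "0", "b": "110", "c": "10", "d": "111"}
probs = {"a": 0.60, "b": 0.05, "c": 0.30, "d": 0.05}

# Build the code tree as nested dicts: internal nodes are dicts keyed by
# bit ("0"/"1"); leaves are the characters themselves.
tree = {}
for ch, word in code.items():
    node = tree
    for bit in word[:-1]:
        node = node.setdefault(bit, {})
    node[word[-1]] = ch

def decode(bits):
    """Walk root to leaf; on reaching a leaf, emit it and restart at the root."""
    out, node = [], tree
    for bit in bits:
        node = node[bit]
        if isinstance(node, str):
            out.append(node)
            node = tree
    return "".join(out)

print(decode("01100101110010010"))   # -> abacdaacac (10 characters in 17 bits)

# Expected bits per character, i.e. B(T)/n = sum of p(x) * d_T(x):
expected = sum(probs[ch] * len(code[ch]) for ch in code)
print(round(expected, 6))            # -> 1.5, versus 2 bits for the fixed-length code
```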


This suggests the following problem:

Optimal Code Generation: Given an alphabet C and the probabilities p(x) of occurrence for each character x ∈ C, compute a prefix code T that minimizes the expected length of the encoded bit-string, B(T).

Note that the optimal code is not unique. For example, we could have complemented all of the bits in our earlier code without altering the expected encoded string length. There is a very simple algorithm for finding such a code. It was invented in the mid 1950's by David Huffman, and is called a Huffman code. By the way, this code is used by the Unix utility pack for file compression. (There are better compression methods, however. For example, compress, gzip and many others are based on a more sophisticated method called Lempel-Ziv coding.)

Huffman's Algorithm: Here is the intuition behind the algorithm. Recall that we are given the occurrence probabilities for the characters. We are going to build the tree up from the leaf level. We will take two characters x and y, and "merge" them into a single super-character called z, which then replaces x and y in the alphabet. The character z will have a probability equal to the sum of x and y's probabilities. Then we continue recursively building the code on the new alphabet, which has one fewer character. When the process is completed, we know the code for z, say 010. Then, we append a 0 and 1 to this codeword, giving 0100 for x and 0101 for y. Another way to think of this is that we merge x and y as the left and right children of a root node called z. Then the subtree for z replaces x and y in the list of characters. We repeat this process until only one super-character remains. The resulting tree is the final prefix tree. Since x and y will appear at the bottom of the tree, it seems most logical to select the two characters with the smallest probabilities to perform the operation on. The result is Huffman's algorithm. It is illustrated in the following figure.
The pseudocode for Huffman's algorithm is given below. Let C denote the set of characters. Each character x ∈ C is associated with an occurrence probability x.prob. Initially, the characters are all stored in a priority queue Q, whose objects are sorted by probability. Recall that this data structure can be built initially in O(n) time, and we can extract the element with the smallest key in O(log n) time and insert a new element in O(log n) time. Note that with each execution of the for-loop, the number of items in the queue decreases by one. So, after n − 1 iterations, there is exactly one element left in the queue, and this is the root of the final prefix code tree.

Correctness: The big question that remains is why this algorithm is correct. Recall that the cost of any encoding tree T is B(T) = Σ_x p(x) d_T(x). Our approach will be to show that any tree that differs from the one constructed by Huffman's algorithm can be converted into one that is equal to Huffman's tree without increasing its cost. First, observe that the Huffman tree is a full binary tree, meaning that every internal node has exactly two children. It would never pay to have an internal node with only one child (since such a node could be deleted), so we may limit consideration to full binary trees.

Claim: Consider the two characters, x and y, with the smallest probabilities. Then there is an optimal code tree in which these two characters are siblings at the maximum depth in the tree.

Proof: Let T be any optimal prefix code tree, and let b and c be two siblings at the maximum depth of the tree. Assume without loss of generality that p(b) ≤ p(c) and p(x) ≤ p(y) (if this is not true, then rename these characters). Now, since x and y have the two smallest probabilities it follows that p(x) ≤ p(b) and p(y) ≤ p(c). (In both cases they may be equal.) Because b and c are at the deepest level of the tree we know that d(b) ≥ d(x) and d(c) ≥ d(y). (Again, they may be equal.)
Thus, we have p(b) − p(x) ≥ 0 and d(b) − d(x) ≥ 0, and hence their product is nonnegative. Now switch the positions of x and b in the tree, resulting in a new tree T′. This is illustrated in the following figure. Next let us see how the cost changes as we go from T to T′. Almost all the nodes contribute the same to the expected cost. The only exceptions are nodes x and b.


[Figure: trace of Huffman's algorithm on the probabilities a: 05, b: 48, c: 07, d: 17, e: 10, f: 13, repeatedly merging the two smallest; the final tree assigns the codewords a = 0000, c = 0001, e = 001, d = 010, f = 011, b = 1.]

Fig. 18: Huffman’s Algorithm.
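The merge-the-two-smallest process traced above can be sketched in runnable form. This is a sketch in Python using the standard heapq module as the priority queue; the tuple entries and the counter tie-breaker are implementation choices, not part of the notes.

```python
import heapq
from itertools import count

def huffman(probs):
    """Repeatedly merge the two smallest-probability (super-)characters,
    then read codewords off the resulting tree: left = 0, right = 1."""
    ticket = count()    # tie-breaker so heap entries always compare cleanly
    heap = [(p, next(ticket), ch) for ch, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)    # extract smallest probabilities
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(ticket), (left, right)))
    root = heap[0][2]

    code = {}
    def walk(node, word):
        if isinstance(node, tuple):          # internal node: recurse both ways
            walk(node[0], word + "0")
            walk(node[1], word + "1")
        else:
            code[node] = word or "0"         # degenerate 1-character alphabet
    walk(root, "")
    return code

# The probabilities from the trace above.
code = huffman({"a": 0.05, "b": 0.48, "c": 0.07, "d": 0.17, "e": 0.10, "f": 0.13})
print(sorted((len(w), ch) for ch, w in code.items()))
# b gets a 1-bit codeword; d, e, f get 3 bits; a, c get 4 bits,
# matching the final tree in the figure.
```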


By subtracting the old contributions of these nodes and adding in the new contributions, we have

    B(T′) = B(T) − p(x)d(x) + p(x)d(b) − p(b)d(b) + p(b)d(x)
          = B(T) + p(x)(d(b) − d(x)) − p(b)(d(b) − d(x))
          = B(T) − (p(b) − p(x))(d(b) − d(x))
          ≤ B(T),

because (p(b) − p(x))(d(b) − d(x)) ≥ 0. Thus the cost does not increase, implying that T′ is an optimal tree. By switching y with c we get a new tree T″, which by a similar argument is also optimal. The final tree T″ satisfies the statement of the claim.

[Figure: swapping x with b changes the cost by −(p(b) − p(x))(d(b) − d(x)) ≤ 0, giving T′; then swapping y with c changes the cost by −(p(c) − p(y))(d(c) − d(y)) ≤ 0, giving T″.]

Fig. 19: Correctness of Huffman's Algorithm.

Huffman's Algorithm
Huffman(int n, character C[1..n]) {
    Q = C;                            // priority queue
    for i = 1 to n-1 {
        z = new internal tree node;
        z.left = x = Q.extractMin();  // extract smallest probabilities
        z.right = y = Q.extractMin();
        z.prob = x.prob + y.prob;     // z's probability is their sum
        Q.insert(z);                  // insert z into queue
    }
    return the last element left in Q as the root;
}

The above claim asserts that the first step of Huffman's algorithm is essentially the proper one to perform. The complete proof of correctness for Huffman's algorithm follows by induction on n (since with each step, we eliminate exactly one character).

Claim: Huffman's algorithm produces the optimal prefix code tree.

Proof: The proof is by induction on n, the number of characters. For the basis case, n = 1, the tree consists of a single leaf node, which is obviously optimal. Assume inductively that when given strictly fewer than n characters, Huffman's algorithm is guaranteed to produce the optimal tree. We want to show that this is true with exactly n characters. Suppose we have exactly n characters. The previous claim states that we may assume that in the optimal tree, the two characters of lowest probability x and y will be siblings at the lowest level of the tree. Remove x and y, replacing them with a new character z whose probability is p(z) = p(x) + p(y). Thus n − 1 characters remain.

Consider any prefix code tree T′ made with this new set of n − 1 characters. We can convert it into a prefix code tree T for the original set of characters by undoing the previous operation and replacing z with x and y (adding a "0" bit for x and a "1" bit for y). The cost of the new tree is

    B(T) = B(T′) − p(z)d(z) + p(x)(d(z) + 1) + p(y)(d(z) + 1)
         = B(T′) − (p(x) + p(y))d(z) + (p(x) + p(y))(d(z) + 1)
         = B(T′) + (p(x) + p(y))(d(z) + 1 − d(z))
         = B(T′) + p(x) + p(y).

Since the change in cost depends in no way on the structure of the tree T′, to minimize the cost of the final tree T, we need to build the tree T′ on the n − 1 remaining characters optimally. By induction, this is exactly what Huffman's algorithm does. Thus the final tree is optimal.

Lecture 9: Graphs: Background and Breadth First Search

Read: Review Sections 22.1 and 22.2 in CLR.

Graph Algorithms: We are now beginning a major new section of the course. We will be discussing algorithms for both directed and undirected graphs. Intuitively, a graph is a collection of vertices or nodes, connected by a collection of edges. Graphs are extremely important because they are a very flexible mathematical model for many application problems. Basically, any time you have a set of objects, and there is some "connection" or "relationship" or "interaction" between pairs of objects, a graph is a good way to model this. Examples of graphs in applications include communication and transportation networks, VLSI and other sorts of logic circuits, surface meshes used for shape description in computer-aided design and geographic information systems, and precedence constraints in scheduling systems. The list of applications is almost too long to even consider enumerating it. Most of the problems in computational graph theory that we will consider arise because they are of importance to one or more of these application areas. Furthermore, many of these problems form the basic building blocks from which more complex algorithms are then built.

Graphs and Digraphs: Most of you have encountered the notions of directed and undirected graphs in other courses, so we will give a quick overview here.

Definition: A directed graph (or digraph) G = (V, E) consists of a finite set V of vertices, and a set E of ordered pairs of vertices, called the edges. (Another way of saying this is that E is a binary relation on V.) Observe that self-loops are allowed by this definition. Some definitions of graphs disallow this. Multiple edges are not permitted (although the edges (v, w) and (w, v) are distinct).

Definition: An undirected graph (or graph) G = (V, E) consists of a finite set V of vertices, and a set E of unordered pairs of distinct vertices, called the edges. (Note that self-loops are not allowed.)

[Figure: a 4-vertex digraph and a 4-vertex undirected graph.]

Fig. 20: Digraph and graph example.

Given the edge e = (u, v) in a digraph, we say that u is the origin of e and v is the destination of e. In undirected graphs u and v are the endpoints of the edge. The edge e is incident (meaning that it touches) both u and v. We say that vertex v is adjacent to vertex u if there is an edge (u, v). In a directed graph, the number of edges coming out of a vertex is called the out-degree of that vertex, and the number of edges coming in is called the in-degree. In an undirected graph we just talk about the degree of a vertex as the number of incident edges. By the degree of a graph, we usually mean the maximum degree of its vertices.

When discussing the size of a graph, we typically consider both the number of vertices and the number of edges. The number of vertices is typically written as n or V, and the number of edges is written as m or E or e. Here are some basic combinatorial facts about graphs and digraphs. We will leave the proofs to you. Given a graph with V vertices and E edges then:

In a graph:
    Number of edges: 0 ≤ E ≤ n(n − 1)/2 ∈ O(n²).
    Sum of degrees: Σ_{v∈V} deg(v) = 2E.

In a digraph:
    Number of edges: 0 ≤ E ≤ n².
    Sum of degrees: Σ_{v∈V} in-deg(v) = Σ_{v∈V} out-deg(v) = E.

Notice that generally the number of edges in a graph may be as large as quadratic in the number of vertices. However, the large graphs that arise in practice typically have many fewer edges. A graph is said to be sparse if E ∈ Θ(V), and dense otherwise. When giving the running times of algorithms, we will usually express it as a function of both V and E, so that the performance on sparse and dense graphs will be apparent.

Paths and Cycles: A path in a graph or digraph is a sequence of vertices v0, v1, . . . , vk such that (v_{i−1}, v_i) is an edge for i = 1, 2, . . . , k. The length of the path is the number of edges, k. A path is simple if all vertices and all the edges are distinct. A cycle is a path containing at least one edge and for which v0 = vk. A cycle is simple if its vertices (except v0 and vk) are distinct, and all its edges are distinct. A graph or digraph is said to be acyclic if it contains no simple cycles.

We say that w is reachable from u if there is a path from u to w. Note that every vertex is reachable from itself by a trivial path that uses zero edges. An undirected graph is connected if every vertex can reach every other vertex. (Connectivity is a bit messier for digraphs, and we will define it later.) The subsets of mutually reachable vertices partition the vertices of the graph into disjoint subsets, called the connected components of the graph.

An acyclic connected graph is called a free tree or simply tree for short. (The term "free" is intended to emphasize the fact that the tree has no root, in contrast to a rooted tree, as is usually seen in data structures.) An acyclic undirected graph (which need not be connected) is a collection of free trees, and is (naturally) called a forest. An acyclic digraph is called a directed acyclic graph, or DAG for short.

[Figure: a simple cycle, a nonsimple cycle, a free tree, a forest, and a DAG.]

Fig. 21: Illustration of some graph terms.

Note that directed graphs and undirected graphs are different (but similar) objects mathematically. Certain notions (such as path) are defined for both, but other notions (such as connectivity) may only be defined for one, or may be defined differently.
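The degree facts above are easy to check mechanically. A small sketch in Python; the edge list is an arbitrary illustrative graph, not one from the notes.

```python
# An illustrative undirected graph on n = 4 vertices (edges chosen arbitrarily).
n = 4
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (3, 4)]
E = len(edges)

deg = {v: 0 for v in range(1, n + 1)}
for u, v in edges:          # each undirected edge contributes to two degrees
    deg[u] += 1
    deg[v] += 1

assert sum(deg.values()) == 2 * E     # sum of degrees = 2E
assert 0 <= E <= n * (n - 1) // 2     # at most n(n-1)/2 edges
print(deg)                            # {1: 3, 2: 2, 3: 3, 4: 2}
```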

Representations of Graphs and Digraphs: There are two common ways of representing graphs and digraphs. Let G = (V, E) be a digraph with n = |V| and let e = |E|. We will assume that the vertices of G are indexed {1, 2, . . . , n}. First we show how to represent digraphs.

Adjacency Matrix: An n × n matrix defined for 1 ≤ v, w ≤ n by

    A[v, w] = 1 if (v, w) ∈ E, and 0 otherwise.

If the digraph has weights we can store the weights in the matrix. For example, if (v, w) ∈ E then A[v, w] = W(v, w) (the weight on edge (v, w)). If (v, w) ∉ E then generally W(v, w) need not be defined, but often we set it to some "special" value, e.g. A(v, w) = −1, or ∞. (By ∞ we mean (in practice) some number which is larger than any allowable weight. In practice, this might be some machine-dependent constant like MAXINT.)

Adjacency List: An array Adj[1 . . . n] of pointers where for 1 ≤ v ≤ n, Adj[v] points to a linked list containing the vertices which are adjacent to v (i.e. the vertices that can be reached from v by a single edge). If the edges have weights then these weights may also be stored in the linked list elements.

[Figure: adjacency matrix and adjacency list for a 3-vertex digraph.]

Fig. 22: Adjacency matrix and adjacency list for digraphs.

We can represent undirected graphs using exactly the same representation, but we will store each edge twice. In particular, we represent the undirected edge {v, w} by the two oppositely directed edges (v, w) and (w, v). Notice that even though we represent undirected graphs in the same way that we represent digraphs, it is important to remember that these two classes of objects are mathematically distinct from one another. This can cause some complications. For example, suppose you write an algorithm that operates by marking edges of a graph. You need to be careful when you mark edge (v, w) in the representation that you also mark (w, v), since they are both the same edge in reality. When dealing with adjacency lists, it may not be convenient to walk down the entire linked list, so it is common to include cross links between corresponding edges.

[Figure: adjacency matrix and adjacency list (with cross links) for a 4-vertex undirected graph.]

Fig. 23: Adjacency matrix and adjacency list for graphs.

An adjacency matrix requires Θ(V²) storage and an adjacency list requires Θ(V + E) storage. The V arises because there is one entry for each vertex in Adj. Since each list has out-deg(v) entries, when this is summed over all vertices, the total number of adjacency list records is Θ(E). For sparse graphs the adjacency list representation is more space efficient.
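As a concrete sketch, here are Python stand-ins for the two representations (a list of lists for the matrix, a dict of lists for the adjacency list); the 3-vertex digraph is illustrative.

```python
n = 3
edges = [(1, 2), (1, 3), (2, 3), (3, 1)]   # ordered pairs: a digraph

# Adjacency matrix: Theta(V^2) storage regardless of how many edges exist.
A = [[0] * (n + 1) for _ in range(n + 1)]  # row/column 0 unused (1-indexed)
for v, w in edges:
    A[v][w] = 1

# Adjacency list: Theta(V + E) storage.
Adj = {v: [] for v in range(1, n + 1)}
for v, w in edges:
    Adj[v].append(w)

print(A[1][2], A[2][1])    # 1 0 -- (1,2) is an edge, (2,1) is not
print(Adj[3])              # [1]
```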

Graph Traversals: There are a number of approaches used for solving problems on graphs. One of the most important approaches is based on the notion of systematically visiting all the vertices and edges of a graph. The reason for this is that these traversals impose a type of tree structure (or generally a forest) on the graph, and trees are usually much easier to reason about than general graphs.

Breadth-first search: Given a graph G = (V, E), breadth-first search starts at some source vertex s and "discovers" which vertices are reachable from s. Define the distance between a vertex v and s to be the minimum number of edges on a path from s to v. Breadth-first search discovers vertices in increasing order of distance, and hence can be used as an algorithm for computing shortest paths. At any given time there is a "frontier" of vertices that have been discovered, but not yet processed. Breadth-first search is named because it visits vertices across the entire "breadth" of this frontier.

Initially all vertices (except the source) are colored white, meaning that they are undiscovered. When a vertex has first been discovered, it is colored gray (and is part of the frontier). When a gray vertex is processed, then it becomes black.

The search makes use of a queue, a first-in first-out list, where elements are removed in the same order they are inserted. The first item in the queue (the next to be removed) is called the head of the queue. We will also maintain arrays color[u], which holds the color of vertex u (either white, gray or black), pred[u], which points to the predecessor of u (i.e. the vertex who first discovered u), and d[u], the distance from s to u. Only the color is really needed for the search (in fact it is only necessary to know whether a node is nonwhite). We include all this information because some applications of BFS use this additional information.

Breadth-First Search
BFS(G,s) {
    for each u in V {                 // initialization
        color[u] = white
        d[u] = infinity
        pred[u] = null
    }
    color[s] = gray                   // initialize source s
    d[s] = 0
    Q = {s}                           // put s in the queue
    while (Q is nonempty) {
        u = Q.Dequeue()               // u is the next to visit
        for each v in Adj[u] {
            if (color[v] == white) {  // if neighbor v undiscovered
                color[v] = gray       // ...mark it discovered
                d[v] = d[u]+1         // ...set its distance
                pred[v] = u           // ...and its predecessor
                Q.Enqueue(v)          // ...put it in the queue
            }
        }
        color[u] = black              // we are done with u
    }
}

Observe that the predecessor pointers of the BFS search define an inverted tree (an acyclic directed graph in which the source is the root, and every other node has a unique path to the root). If we reverse these edges we get a rooted unordered tree called a BFS tree for G. (Note that there are many potential BFS trees for a given graph, depending on where the search starts, and in what order vertices are placed on the queue.) These edges of G are called tree edges and the remaining edges of G are called cross edges. It is not hard to prove that if G is an undirected graph, then cross edges always go between two nodes that are at most one level apart in the BFS tree. (Can you see why this must be true?) Below is a sketch of a proof that on termination of the BFS procedure, d[v] is equal to the distance from s to v.

[Figure: BFS example, showing the queue contents Q and the distance labels d as the search proceeds from the source s.]

Fig. 24: Breadth-first search: Example.

Theorem: Let δ(s, v) denote the length (number of edges) of the shortest path from s to v. Then, on termination of the BFS procedure, d[v] = δ(s, v).

Proof: (Sketch) The proof is by induction on the length of the shortest path. Let u be the predecessor of v on some shortest path from s to v, and among all such vertices the first to be processed by the BFS. Thus, δ(s, v) = δ(s, u) + 1. When u is processed, we have (by induction) d[u] = δ(s, u). Since v is a neighbor of u, we set d[v] = d[u] + 1. Thus we have

    d[v] = d[u] + 1 = δ(s, u) + 1 = δ(s, v),

as desired. (See CLRS for a detailed proof.)

Analysis: The running time analysis of BFS is similar to the running time analysis of many graph traversal algorithms. As done in CLR, let V = |V| and E = |E|. Observe that the initialization portion requires Θ(V) time. The real meat is in the traversal loop. Since we never visit a vertex twice, the number of times we go through the while loop is at most V (exactly V, assuming each vertex is reachable from the source). The number of iterations through the inner for loop is proportional to deg(u) + 1. (The +1 is because even if deg(u) = 0, we need to spend a constant amount of time to set up the loop.) Summing up over all vertices we have the running time

    T(V) = V + Σ_{u∈V} (deg(u) + 1) = V + Σ_{u∈V} deg(u) + V = 2V + 2E ∈ Θ(V + E).

The analysis is essentially the same for directed graphs.
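The BFS pseudocode above translates almost line for line into Python. This sketch uses collections.deque as the FIFO queue and None in place of the white/infinity initialization; the sample graph is illustrative.

```python
from collections import deque

def bfs(Adj, s):
    """Return (d, pred) computed as in the BFS pseudocode above."""
    d = {u: None for u in Adj}       # None stands for "undiscovered"/infinity
    pred = {u: None for u in Adj}
    d[s] = 0
    Q = deque([s])                   # put s in the queue
    while Q:
        u = Q.popleft()              # u is the next to visit
        for v in Adj[u]:
            if d[v] is None:         # neighbor v undiscovered
                d[v] = d[u] + 1      # set its distance
                pred[v] = u          # ...and its predecessor
                Q.append(v)
    return d, pred

# An illustrative undirected graph, stored with both directions per edge.
Adj = {"s": ["a", "c"], "a": ["s", "e"], "c": ["s", "b"],
       "b": ["c", "e"], "e": ["a", "b"]}
d, pred = bfs(Adj, "s")
print(d)    # {'s': 0, 'a': 1, 'c': 1, 'b': 2, 'e': 2}
```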

Lecture 10: Depth-First Search

Read: Sections 23.2 and 23.3 in CLR.

Depth-First Search: The next traversal algorithm that we will study is called depth-first search. Consider the problem of searching a castle for treasure. To solve it you might use the following strategy. As you enter a room of the castle, paint some graffiti on the wall to remind yourself that you were already there. Successively travel from room to room as long as you come to a place you haven't already been. When you return to the same room, try a different door leaving the room (assuming it goes somewhere you haven't already been). When all doors have been tried in a given room, then backtrack. This is the general idea behind depth-first search. DFS induces a tree structure, and it has the nice property that nontree edges have a good deal of mathematical structure. We will discuss this tree structure further below.

Depth-First Search Algorithm: We assume we are given a directed graph G = (V, E). The same algorithm works for undirected graphs (but the resulting structure imposed on the graph is different). We use four auxiliary arrays. As before we maintain a color for each vertex: white means undiscovered, gray means discovered but not finished processing, and black means finished. As before we also store predecessor pointers, pointing back to the vertex that discovered a given vertex. We will also associate two numbers with each vertex. These are time stamps. When we first discover a vertex u we store a counter in d[u], and when we are finished processing a vertex we store a counter in f[u]. The purpose of the time stamps will be explained later. (Note: Do not confuse the discovery time d[v] with the distance d[v] from BFS.) The algorithm is shown in the code block below, and illustrated in Fig. 25.

Depth-First Search
DFS(G) {                              // main program
    for each u in V {                 // initialization
        color[u] = white
        pred[u] = null
    }
    time = 0
    for each u in V
        if (color[u] == white)        // found an undiscovered vertex
            DFSVisit(u)               // start a new search here
}

DFSVisit(u) {                         // start a search at u
    color[u] = gray                   // mark u visited
    d[u] = ++time
    for each v in Adj(u) do
        if (color[v] == white) {      // if neighbor v undiscovered
            pred[v] = u               // ...set predecessor pointer
            DFSVisit(v)               // ...visit v
        }
    color[u] = black                  // we're done with u
    f[u] = ++time
}

[Figure: DFS trace; discovery/finish times a: 1/10, b: 2/5, c: 3/4, f: 6/9, g: 7/8, d: 11/14, e: 12/13.]

Fig. 25: Depth-First search tree.

Analysis: The running time of DFS is Θ(V + E). This is somewhat harder to see than the BFS analysis, because the recursive nature of the algorithm obscures things. Normally, recurrences are good ways to analyze recursively defined algorithms, but that is not true here, because there is no good notion of "size" that we can attach to each recursive call. First observe that if we ignore the time spent in the recursive calls, the main DFS procedure runs in O(V) time. Observe that each vertex is visited exactly once in the search, and hence the call DFSVisit() is made exactly once for each vertex. We can just analyze each one individually and add up their running times. Ignoring the time spent in the recursive calls, we can see that each vertex u can be processed in O(1 + outdeg(u)) time. Thus the total time used in the procedure is

    T(V) = V + Σ_{u∈V} (outdeg(u) + 1) = V + Σ_{u∈V} outdeg(u) + V = 2V + E ∈ Θ(V + E).

A similar analysis holds if we consider DFS for undirected graphs.

Tree structure: DFS naturally imposes a tree structure (actually a collection of trees, or a forest) on the structure of the graph. This is just the recursion tree, where the edge (u, v) arises when, while processing vertex u, we call DFSVisit(v) for some neighbor v. For directed graphs the other edges of the graph can be classified as follows:

Back edges: (u, v) where v is a (not necessarily proper) ancestor of u in the tree. (Thus, a self-loop is considered to be a back edge.)

Forward edges: (u, v) where v is a proper descendent of u in the tree.

Cross edges: (u, v) where u and v are not ancestors or descendents of one another (in fact, the edge may go between different trees of the forest).

It is not difficult to classify the edges of a DFS tree by analyzing the values of the colors of the vertices and/or considering the time stamps. This is left as an exercise.

With undirected graphs, there are some important differences in the structure of the DFS tree. First, there is really no distinction between forward and back edges; they are all called back edges by convention. Furthermore, it can be shown that there can be no cross edges. (Can you see why not?)
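The DFS pseudocode with time stamps can be sketched in Python as follows; the example digraph roughly mirrors the trace in the figure, and the vertex names are illustrative.

```python
def dfs(Adj):
    """DFS over the whole digraph, recording discovery (d) and finish (f)
    time stamps as in the pseudocode above."""
    color = {u: "white" for u in Adj}
    pred = {u: None for u in Adj}
    d, f = {}, {}
    time = [0]                        # boxed counter so DFSVisit can update it

    def visit(u):
        color[u] = "gray"             # mark u visited
        time[0] += 1; d[u] = time[0]
        for v in Adj[u]:
            if color[v] == "white":   # neighbor v undiscovered
                pred[v] = u
                visit(v)
        color[u] = "black"            # we're done with u
        time[0] += 1; f[u] = time[0]

    for u in Adj:
        if color[u] == "white":       # start a new search here
            visit(u)
    return d, f, pred

Adj = {"a": ["b", "f"], "b": ["c", "d"], "c": [], "d": ["e"],
       "e": [], "f": ["g"], "g": []}
d, f, pred = dfs(Adj)
print(d["a"], f["a"])    # 1 14 -- a's interval brackets the whole search
```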

Time-stamp structure: There is also a nice structure to the time stamps. In CLR this is referred to as the parenthesis structure, and it is not hard to prove.

Lemma: (Parenthesis Lemma) Given a digraph G = (V, E), and any DFS tree for G and any two vertices u, v ∈ V:
    u is a descendent of v if and only if [d[u], f[u]] ⊆ [d[v], f[v]].
    u is an ancestor of v if and only if [d[u], f[u]] ⊇ [d[v], f[v]].
    u is unrelated to v if and only if [d[u], f[u]] and [d[v], f[v]] are disjoint.

[Figure: a DFS tree with edge types marked (B = back, F = forward, C = cross) and the corresponding nested start-finish intervals on the time line 1 through 14.]

Fig. 26: Parenthesis Lemma.

Lemma: Given a digraph G = (V, E), and any DFS forest for G, consider any edge (u, v) ∈ E. If this edge is a tree, forward, or cross edge, then f[u] > f[v]. If the edge is a back edge then f[u] ≤ f[v].

Proof: For tree, forward, and back edges, the proof follows directly from the parenthesis lemma. (E.g. for a forward edge (u, v), v is a descendent of u, and so v's start-finish interval is contained within u's, implying that v has an earlier finish time.) For a cross edge (u, v) we know that the two time intervals are disjoint. When we were processing u, v was not white (otherwise (u, v) would be a tree edge), implying that v was started before u. Because the intervals are disjoint, v must have also finished before u.

Cycles: The time stamps given by DFS allow us to determine a number of things about a graph or digraph. For example, suppose you are given a graph or digraph, and you run DFS. You can determine whether the graph contains any cycles very easily. We do this with the help of the following lemma.

Lemma: Given a digraph G = (V, E), and any DFS forest for G, G has a cycle if and only if the DFS forest has a back edge.

Proof: (⇐) If there is a back edge (u, v), then v is an ancestor of u, and by following tree edges from v to u we get a cycle.
(⇒) We show the contrapositive. Suppose there are no back edges. By the lemma above, each of the remaining types of edges, tree, forward, and cross, have the property that they go from vertices with higher finishing time to vertices with lower finishing time. Thus along any path, finish times decrease monotonically, implying there can be no cycle.

Beware: No back edges means no cycles. But you should not infer that there is some simple relationship between the number of back edges and the number of cycles. For example, a DFS tree may have only a single back edge, and there may be anywhere from one up to an exponential number of simple cycles in the graph. A similar theorem applies to undirected graphs.
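The cycle lemma yields a simple test. Here is a sketch in Python, where reaching a gray (in-progress) vertex signals a back edge; the sample digraphs are illustrative.

```python
def has_cycle(Adj):
    """Return True iff the digraph has a cycle, by checking for a DFS
    back edge: an edge into a gray (in-progress) vertex."""
    color = {u: "white" for u in Adj}

    def visit(u):
        color[u] = "gray"
        for v in Adj[u]:
            if color[v] == "gray":                 # back edge found
                return True
            if color[v] == "white" and visit(v):
                return True
        color[u] = "black"
        return False

    return any(visit(u) for u in Adj if color[u] == "white")

print(has_cycle({1: [2], 2: [3], 3: [1]}))    # True: 1 -> 2 -> 3 -> 1
print(has_cycle({1: [2, 3], 2: [3], 3: []}))  # False: a DAG
```

Note that the DAG in the second call has no cycle even though it has a forward edge (1, 3), illustrating the "beware" remark above.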

Lecture 11: Topological Sort and Strong Components

Read: Sects. 22.3–22.5 in CLRS.

Directed Acyclic Graph: A directed acyclic graph is often called a DAG for short. DAGs arise in many applications where there are precedence or ordering constraints, for example, if there are a series of tasks to be performed, and certain tasks must precede other tasks (e.g. in construction you have to build the first floor before you build the second floor, but you can do the electrical wiring while you install the windows). In general, a precedence constraint graph is a DAG in which vertices are tasks and the edge (u, v) means that task u must be completed before task v begins.

A topological sort of a DAG is a linear ordering of the vertices of the DAG such that for each edge (u, v), u appears before v in the ordering. Note that in general, there may be many legal topological orders for a given DAG.

To compute a topological ordering is actually very easy, given DFS. By the previous lemma, for every edge (u, v) in a DAG, the finish time of u is greater than the finish time of v. Thus, it suffices to output the vertices in reverse order of finishing time. To do this we run a (stripped down) DFS, and when each vertex is finished we add it to the front of a linked list. The final linked list order will be the final topological order. This is given below.

Topological Sort
TopSort(G) {
    for each (u in V) color[u] = white      // initialize
    L = new linked_list                     // L is an empty linked list
    for each (u in V)
        if (color[u] == white) TopVisit(u)
    return L                                // L gives final order
}

TopVisit(u) {                               // start a search at u
    color[u] = gray                         // mark u visited
    for each (v in Adj(u))
        if (color[v] == white) TopVisit(v)
    Append u to the front of L              // on finishing u, add to list
}

This is a typical example of how DFS is used in applications. Observe that the structure is essentially the same as the basic DFS procedure, but we only include the elements of DFS that are needed for this application. As with depth-first search, the running time of topological sort is Θ(V + E).

As an example we consider the DAG presented in CLRS for Professor Bumstead's order of dressing. Bumstead lists the precedences in the order in which he puts on his clothes in the morning. We do our depth-first search in a different order from the one given in CLRS, and so we get a different final ordering. However, both orderings are legitimate, given the precedence constraints.

Strong Components: Next we consider a very important connectivity problem with digraphs. When digraphs are used in communication and transportation networks, people want to know that their networks are complete in the sense that from any location it is possible to reach any other location in the digraph. A digraph is strongly connected if for every pair of vertices u, v ∈ V, u can reach v and vice versa. We would like to write an algorithm that determines whether a digraph is strongly connected. In fact we will solve a generalization of this problem, of computing the strongly connected components (or strong components for short) of a digraph. In particular, we partition the vertices of the digraph into subsets such that the induced subgraph of each subset is strongly connected. (These subsets should be as large as possible, and still have this property.)
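The TopSort procedure above can be sketched in Python, with an ordinary list standing in for the linked list L; the DAG is a fragment of the Bumstead clothing example, with illustrative edges.

```python
def topsort(Adj):
    """Output vertices in reverse order of DFS finish time, as in TopSort:
    each vertex is prepended to L as it finishes."""
    color = {u: "white" for u in Adj}
    L = []

    def visit(u):
        color[u] = "gray"                 # mark u visited
        for v in Adj[u]:
            if color[v] == "white":
                visit(v)
        L.insert(0, u)                    # append u to the *front* of L

    for u in Adj:
        if color[u] == "white":
            visit(u)
    return L

Adj = {"shorts": ["pants"], "pants": ["belt", "shoes"], "shirt": ["belt", "tie"],
       "belt": ["jacket"], "tie": ["jacket"], "jacket": [],
       "socks": ["shoes"], "shoes": []}
order = topsort(Adj)
# Every edge (u, v) has u before v in the ordering:
print(all(order.index(u) < order.index(v) for u in Adj for v in Adj[u]))  # True
```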

Fig. 27: Topological sort (Professor Bumstead's dressing order, with DFS start/finish times).

More formally, we say that two vertices u and v are mutually reachable if u can reach v and vice versa. It is easy to see that mutual reachability is an equivalence relation. This equivalence relation partitions the vertices into equivalence classes of mutually reachable vertices, and these are the strong components. Observe that if we merge the vertices in each strong component into a single supervertex, and join two supervertices (A, B) if and only if there are vertices u ∈ A and v ∈ B such that (u, v) ∈ E, then the resulting digraph, called the component digraph, is necessarily acyclic. (Can you see why?) Thus, we may accurately refer to it as the component DAG.

Fig. 28: Strong Components (a digraph, its strong components, and the component DAG).

Strong Components and DFS: By way of motivation, consider the DFS of the digraph shown in the following figure (left). By definition of DFS, when you enter a strong component, every vertex in the component is reachable, so the DFS does not terminate until all the vertices in the component have been visited. Thus all the vertices in a strong component must appear in the same tree of the DFS forest. Observe that in the figure each strong component is just a subtree of the DFS forest. Is it always true for any DFS? Unfortunately the answer is no. In general, many strong components may appear in the same DFS tree. (See the DFS on the right for a counterexample.) Does there always exist a way to order the DFS such that it is true? Fortunately, the answer is yes.

The algorithm that we will present is an algorithm designer's "dream" (and an algorithm student's nightmare). It is amazingly simple and efficient, but it is so clever that it is very difficult to even see how it works. We will give some of the intuition that leads to the algorithm, but will not prove the algorithm's correctness formally. See CLRS for a formal proof.

Suppose that you knew the component DAG in advance. (This is ridiculous, because you would need to know the strong components, and that is the problem we are trying to solve. But humor me for a moment.) Further suppose that you computed a reversed topological order on the component digraph. That is, if (u, v) is an edge in the component digraph, then v comes before u in this reversed order (not after as it would in a normal topological ordering). Now, run DFS, but every time you need a new vertex to start the search from, select the next available vertex according to this reverse topological order of the component digraph.

Here is an informal justification. Clearly once the DFS starts within a given strong component, it must visit every vertex within the component (and possibly some others) before finishing. If we do not start in reverse topological order, then the search may "leak out" into other strong components, and put them in the same DFS tree. For example, in the figure below right, when the search is started at vertex a, not only does it visit its component with b and c, but it also visits the other components as well. However, by visiting components in reverse topological order of the component DAG, each search cannot "leak out" into other components, because other components would already have been visited earlier in the search.

Fig. 29: Two depth-first searches.

This leaves us with the intuition that if we could somehow order the DFS, so that it hits the strong components according to a reverse topological order, then we would have an easy algorithm for computing strong components. However, we do not know what the component DAG looks like. (After all, we are trying to solve the strong component problem in the first place.) The "trick" behind the strong component algorithm is that we can find an ordering of the vertices that has essentially the necessary property, without actually computing the component DAG.

The Plumber's Algorithm: I call this algorithm the plumber's algorithm (because it avoids leaks). First recall that GR (what CLRS calls GT) is the digraph with the same vertex set as G but in which all edges have been reversed in direction. Given an adjacency list for G, it is possible to compute GR in Θ(V + E) time. (I'll leave this as an exercise.) Observe that the strongly connected components are not affected by reversing all the digraph's edges. If u and v are mutually reachable in G, then certainly this is still true in GR. All that changes is that the component DAG is completely reversed. The ordering trick is to order the vertices of G according to their finish times in a DFS, and then visit the nodes of GR in decreasing order of finish times. All the steps of the algorithm are quite easy to implement, and all operate in Θ(V + E) time. Here is the algorithm.

Strong Components
StrongComp(G) {
    Run DFS(G), computing finish times f[u] for each vertex u
    Compute R = Reverse(G), reversing all edges of G
    Sort the vertices of R (by CountingSort) in decreasing order of f[u]
    Run DFS(R) using this order
    Each DFS tree is a strong component
}

Fig. 30: Strong Components Algorithm (initial DFS; reversal with new vertex order; final DFS with components).

Correctness: Why visit vertices in decreasing order of finish times? Why use the reversal digraph? It is difficult to justify these elements formally. Here is some intuition, though. Recall that the main intent is to visit the strong components in a reverse topological order. The question is how to order the vertices so that this is true. Recall from the topological sorting algorithm that in a DAG, finish times occur in reverse topological order (i.e., the first vertex in the topological order is the one with the highest finish time). So, if we wanted to visit the components in reverse topological order, this suggests that we should visit the vertices in increasing order of finish time, starting with the lowest finishing time. This is a good starting idea, but it turns out that it doesn't work. The reason is that there are many vertices in each strong component, and they all have different finish times. For example, in the figure above observe that in the first DFS (on the left) the lowest finish time (of 4) is achieved by vertex c, yet its strong component comes first, not last, in a reverse topological order of the component DAG.

It is tempting to give up in frustration at this point. But there is something to notice about the finish times. If we consider the maximum finish time in each component, then these are related to the topological order of the component DAG. In particular, given any strong component C, define f(C) to be the maximum finish time among all vertices in this component:

    f(C) = max_{u ∈ C} f[u].

Lemma: Consider a digraph G = (V, E) and let C and C' be two distinct strong components. If there is an edge (u, v) of G such that u ∈ C and v ∈ C', then f(C) > f(C').

See the book for a complete proof. Here is a quick sketch. If the DFS visits C first, then the DFS will leak into C' (along edge (u, v) or some other edge), and will then visit everything in C' before finally returning to C. Thus, some vertex of C will finish later than every vertex of C'. On the other hand, suppose that C' is visited first. Because there is an edge from C to C', we know from the definition of the component DAG that there cannot be a path from C' to C. So C' will completely finish before we even start C. Thus all the finish times of C will be larger than the finish times of C'.

For example, in the previous figure, the maximum finish times for each component are 18 (for {a, b, c}), 17 (for {d, e}), and 12 (for {f, g, h, i}). The order 18, 17, 12 is a valid topological order for the component digraph.
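The StrongComp procedure above can be sketched in runnable form (an illustration, not the notes' code; the example digraph below is hypothetical, with components {a, b, c}, {d, e}, and {f}). The first DFS records finish times; the second DFS runs on the reversed digraph in decreasing order of finish time:

```python
def strong_components(adj):
    """adj: dict vertex -> list of out-neighbors. Returns a list of components."""
    # Pass 1: DFS on G, recording vertices in increasing order of finish time.
    finish, visited = [], set()
    def dfs1(u):
        visited.add(u)
        for v in adj[u]:
            if v not in visited:
                dfs1(v)
        finish.append(u)                    # u is finished
    for u in adj:
        if u not in visited:
            dfs1(u)

    # Compute R = Reverse(G) by reversing every edge.
    radj = {u: [] for u in adj}
    for u in adj:
        for v in adj[u]:
            radj[v].append(u)

    # Pass 2: DFS on R, starting vertices in decreasing order of finish time.
    visited.clear()
    comps = []
    def dfs2(u, comp):
        visited.add(u)
        comp.append(u)
        for v in radj[u]:
            if v not in visited:
                dfs2(v, comp)
    for u in reversed(finish):
        if u not in visited:
            comp = []
            dfs2(u, comp)
            comps.append(comp)              # each DFS tree is a strong component
    return comps

g = {"a": ["b"], "b": ["c"], "c": ["a", "d"],
     "d": ["e"], "e": ["d", "f"], "f": []}
comps = strong_components(g)
assert sorted(sorted(c) for c in comps) == [["a", "b", "c"], ["d", "e"], ["f"]]
```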

This is a big help. It tells us that if we run DFS and compute finish times, and then run a new DFS in decreasing order of finish times, we will visit the components in topological order. The problem is that this is not what we wanted. We wanted a reverse topological order for the component DAG. So, the final trick is to reverse the digraph, by forming GR. This does not change the strong components, but it reverses the edges of the component graph, and so reverses the topological order, which is exactly what we wanted. In conclusion we have:

Theorem: Consider a digraph G on which DFS has been run, and sort the vertices by decreasing order of finish time. Then a DFS of the reversed digraph GR visits the strong components according to a reversed topological order of the component DAG of GR.

Lecture 12: Minimum Spanning Trees and Kruskal's Algorithm

Read: Chapt 23 in CLRS.

Minimum Spanning Trees: A common problem in communications networks and circuit design is that of connecting together a set of nodes (communication sites or circuit components) by a network of minimal total length (where length is the sum of the lengths of connecting wires). We assume that the network is undirected. To minimize the length of the connecting network, it never pays to have any cycles (since we could break any cycle without destroying connectivity and decrease the total length). Since the resulting connection graph is connected, undirected, and acyclic, it is a free tree.

The computational problem is called the minimum spanning tree problem (MST for short). More formally, given a connected, undirected graph G = (V, E), a spanning tree is an acyclic subset of edges T ⊆ E that connects all the vertices together. Assuming that each edge (u, v) of G has a numeric weight or cost, w(u, v) (which may be zero or negative), we define the cost of a spanning tree T to be the sum of the edge weights in the spanning tree

    w(T) = Σ_{(u,v) ∈ T} w(u, v).

A minimum spanning tree (MST) is a spanning tree of minimum weight. Note that the minimum spanning tree may not be unique, but it is true that if all the edge weights are distinct, then the MST will be distinct (this is a rather subtle fact, which we will not prove). Fig. 31 shows three spanning trees for the same graph, where the shaded rectangles indicate the edges in the spanning tree. The one on the left is not a minimum spanning tree, and the other two are. (An interesting observation is that not only do the edges sum to the same value, but in fact the same set of edge weights appear in the two MSTs. Is this a coincidence? We'll see later.)

Fig. 31: Spanning trees (the middle and right are minimum spanning trees).

Steiner Minimum Trees: Minimum spanning trees are actually mentioned in the U.S. legal code. The reason is that AT&T was a government supported monopoly at one time, and was responsible for handling all telephone connections. If a company wanted to connect a collection of installations by a private internal phone system, AT&T was required (by law) to connect them in the minimum cost manner.

Or is it the minimum cost manner? Some companies discovered that they could actually reduce their connection costs by opening a new bogus installation. Such an installation served no purpose other than to act as an intermediate point for connections. An example is shown in Fig. 32. On the left, consider four installations that lie at the corners of a 1 × 1 square. Assume that all edge lengths are just Euclidean distances. It is easy to see that the cost of any MST for this configuration is 3 (as shown on the left). However, if you introduce a new installation at the center, whose distance to each of the other four points is 1/√2, it is now possible to connect these five points with a total cost of 4/√2 = 2√2 ≈ 2.83. This is better than the MST.

In general, the problem of determining the lowest cost interconnection tree between a given set of nodes, assuming that you are allowed additional nodes (called Steiner points), is called the Steiner minimum tree (or SMT for short). An interesting fact is that although there is a simple greedy algorithm for MSTs (as we will see below), the SMT problem is much harder, and in fact is NP-hard. (Luckily for AT&T, the US Legal code is rather ambiguous on the point as to whether the phone company was required to use MSTs or SMTs in making connections.)

Fig. 32: Steiner Minimum tree (MST cost = 3; SMT cost = 2√2 ≈ 2.83, using one Steiner point).

Generic approach: We will present two greedy algorithms (Kruskal's and Prim's algorithms) for computing a minimum spanning tree. Recall that a greedy algorithm is one that builds a solution by repeatedly selecting the cheapest (or more generally, locally optimal) choice among all options at each stage. An important characteristic of greedy algorithms is that once they make a choice, they never "unmake" this choice. Before presenting these algorithms, let us review some basic facts about free trees. They are all quite easy to prove.

Lemma:
• A free tree with n vertices has exactly n − 1 edges.
• There exists a unique path between any two vertices of a free tree.
• Adding any edge to a free tree creates a unique cycle. Breaking any edge on this cycle restores a free tree.

Let G = (V, E) be an undirected, connected graph whose edges have numeric edge weights (which may be positive, negative or zero). The intuition behind the greedy MST algorithms is simple: we maintain a subset of edges A, which will initially be empty, and we will add edges one at a time, until A equals the MST. We say that a subset A ⊆ E is viable if A is a subset of edges in some MST. (We cannot say "the" MST, since it is not necessarily unique.) We say that an edge (u, v) ∈ E − A is safe if A ∪ {(u, v)} is viable. In other words, (u, v) is a safe choice to add so that A can still be extended to form an MST. Note that if A is viable it cannot contain a cycle. A generic greedy algorithm operates by repeatedly adding any safe edge to the current spanning tree. (Note that viability is a property of subsets of edges, and safety is a property of a single edge.)

When is an edge safe? We consider the theoretical issues behind determining whether an edge is safe or not. Let S be a subset of the vertices, S ⊆ V. A cut (S, V − S) is just a partition of the vertices into two disjoint subsets. An edge (u, v) crosses the cut if one endpoint is in S and the other is in V − S.
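The cut definitions above are easy to operationalize. As a minimal sketch (not from the notes; the edge list below is hypothetical), an edge crosses the cut (S, V − S) exactly when one endpoint lies in S, and the light edge is the minimum-weight crossing edge:

```python
# Finding the light edge crossing a cut (S, V-S): among all edges with exactly
# one endpoint in S, pick one of minimum weight.

def light_edge(edges, S):
    """edges: list of (w, u, v) tuples. S: set of vertices on one side of the cut."""
    crossing = [(w, u, v) for (w, u, v) in edges
                if (u in S) != (v in S)]        # exactly one endpoint in S
    return min(crossing) if crossing else None

edges = [(4, "a", "b"), (8, "a", "c"), (9, "b", "c"), (2, "c", "d")]
assert light_edge(edges, {"a"}) == (4, "a", "b")
assert light_edge(edges, {"a", "b"}) == (8, "a", "c")
```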

Given a subset of edges A, we say that a cut respects A if no edge in A crosses the cut. It is not hard to see why respecting cuts are important to this problem. If we have computed a partial MST, and we wish to know which edges can be added that do not induce a cycle in the current MST, any edge that crosses a respecting cut is a possible candidate.

An edge of E is a light edge crossing a cut if, among all edges crossing the cut, it has the minimum weight (the light edge may not be unique if there are duplicate edge weights). Intuition says that since all the edges that cross a respecting cut do not induce a cycle, the lightest edge crossing a cut is a natural choice. The main theorem which drives both algorithms is the following. It essentially says that we can always augment A by adding the minimum weight edge that crosses a cut which respects A. (It is stated in complete generality, so that it can be applied to both algorithms.)

MST Lemma: Let G = (V, E) be a connected, undirected graph with real-valued weights on the edges. Let A be a viable subset of E (i.e., a subset of some MST), let (S, V − S) be any cut that respects A, and let (u, v) be a light edge crossing this cut. Then the edge (u, v) is safe for A.

Proof: It will simplify the proof to assume that all the edge weights are distinct. Let T be any MST for G (see Fig. 33). If T contains (u, v) then we are done. Suppose that no MST contains (u, v). We will derive a contradiction. Add the edge (u, v) to T, thus creating a cycle. Since u and v are on opposite sides of the cut, and since any cycle must cross the cut an even number of times, there must be at least one other edge (x, y) in T that crosses the cut. The edge (x, y) is not in A (because the cut respects A). By removing (x, y) we restore a spanning tree, call it T'. We have w(T') = w(T) − w(x, y) + w(u, v). Since (u, v) is the light edge crossing the cut, we have w(u, v) < w(x, y). Thus w(T') < w(T). This contradicts the assumption that T was an MST.

Fig. 33: Proof of the MST Lemma.

Kruskal's Algorithm: Kruskal's algorithm works by attempting to add edges to A in increasing order of weight (lightest edges first). If the next edge does not induce a cycle among the current set of edges, then it is added to A. If it does, then this edge is passed over, and we consider the next edge in order. Note that as this algorithm runs, the edges of A will induce a forest on the vertices. As the algorithm continues, the trees of this forest are merged together, until we have a single tree containing all the vertices.

Observe that this strategy leads to a correct algorithm. Why? Consider the edge (u, v) that Kruskal's algorithm seeks to add next, and suppose that this edge does not induce a cycle in A. Let A' denote the tree of the forest A that contains vertex u. Consider the cut (A', V − A'). Every edge crossing the cut is not in A, and so this cut respects A, and (u, v) is the light edge across the cut (because any lighter edge would have been considered earlier by the algorithm). Thus, by the MST Lemma, (u, v) is safe.

The only tricky part of the algorithm is how to detect efficiently whether the addition of an edge will create a cycle in A. We could perform a DFS on the subgraph induced by the edges of A, but this will take too much time. We want a fast test that tells us whether u and v are in the same tree of A.

This can be done by a data structure (which we have not studied) called the disjoint set Union-Find data structure. In Kruskal's algorithm, the vertices of the graph will be the elements to be stored in the sets, and the sets will be the vertices in each tree of A. This data structure supports three operations:

Create-Set(u): Create a set containing a single item u.
Find-Set(u): Find the set that contains a given item u.
Union(u, v): Merge the set containing u and the set containing v into a common set.

You are not responsible for knowing how this data structure works (which is described in CLRS). You may use it as a "black-box". For our purposes it suffices to know that each of these operations can be performed in O(log n) time, on a set of size n. (The Union-Find data structure is quite interesting, because it can actually perform a sequence of n operations much faster than O(n log n) time. However we will not go into this here. O(log n) time is fast enough for its use in Kruskal's algorithm.)

The algorithm is shown below, and an example is shown in Fig. 34.

Kruskal's Algorithm
Kruskal(G=(V,E), w) {
    A = {}                                  // initially A is empty
    for each (u in V) Create_Set(u)         // create set for each vertex
    Sort E in increasing order by weight w
    for each ((u,v) from the sorted list) {
        if (Find_Set(u) != Find_Set(v)) {   // u and v in different trees
            Add (u,v) to A
            Union(u, v)
        }
    }
    return A
}

Fig. 34: Kruskal's Algorithm (each vertex is labeled according to the set that contains it).

Analysis: How long does Kruskal's algorithm take? As usual, let V be the number of vertices and E be the number of edges. Since the graph is connected, we may assume that E ≥ V − 1. Observe that it takes Θ(E log E) time to sort the edges.
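The pseudocode above can be sketched in Python (an illustration, not the notes' code), with Create-Set/Find-Set/Union realized by a small Union-Find; the example graph is hypothetical:

```python
# Kruskal's algorithm with a simple Union-Find (union by size, path compression).

def kruskal(vertices, edges):
    """edges: list of (w, u, v). Returns (total weight, list of MST edges)."""
    parent = {u: u for u in vertices}       # Create-Set for each vertex
    size = {u: 1 for u in vertices}

    def find(u):                            # Find-Set with path compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    def union(u, v):                        # merge the two trees' sets
        ru, rv = find(u), find(v)
        if size[ru] < size[rv]:
            ru, rv = rv, ru
        parent[rv] = ru
        size[ru] += size[rv]

    A, total = [], 0
    for w, u, v in sorted(edges):           # increasing order by weight
        if find(u) != find(v):              # different trees: no cycle induced
            A.append((u, v))
            total += w
            union(u, v)
    return total, A

verts = ["a", "b", "c", "d"]
edges = [(4, "a", "b"), (8, "a", "c"), (9, "b", "c"), (2, "c", "d"), (7, "b", "d")]
total, A = kruskal(verts, edges)
assert total == 13 and len(A) == 3          # a spanning tree has V - 1 edges
```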

The for-loop is iterated E times, and each iteration involves a constant number of accesses to the Union-Find data structure on a collection of V items. Thus each access is Θ(log V) time, for a total of Θ(E log V). Thus the total running time is the sum of these, which is Θ((V + E) log V). Since V is asymptotically no larger than E, we could write this more simply as Θ(E log V).

Lecture 13: Prim's and Baruvka's Algorithms for MSTs

Read: Chapt 23 in CLRS. Baruvka's algorithm is not described in CLRS.

Prim's Algorithm: Prim's algorithm is another greedy algorithm for minimum spanning trees. It differs from Kruskal's algorithm only in how it selects the next safe edge to add at each step. Its running time is essentially the same as Kruskal's algorithm, O((V + E) log V). There are two reasons for studying Prim's algorithm. The first is to show that there is more than one way to solve a problem (an important lesson to learn in algorithm design), and the second is that Prim's algorithm looks very much like another greedy algorithm, called Dijkstra's algorithm, that we will study for a completely different problem, shortest paths. Thus, not only is Prim's a different way to solve the same MST problem, it is also the same way to solve a different problem. (Whatever that means!)

Different ways to grow a tree: Kruskal's algorithm worked by ordering the edges, and inserting them one by one into the spanning tree, taking care never to introduce a cycle. Intuitively Kruskal's works by merging or splicing two trees together, until all the vertices are in the same tree. In contrast, Prim's algorithm builds the tree up by adding leaves one at a time to the current tree. We start with a root vertex r (it can be any vertex). At any time, the subset of edges A forms a single tree (in Kruskal's it formed a forest). We look to add a single vertex as a leaf to the tree. The process is illustrated in the following figure.

Fig. 35: Prim's Algorithm.

Observe that if we consider the set of vertices S currently part of the tree, and its complement (V − S), we have a cut of the graph, and the current set of tree edges A respects this cut. Which edge should we add next? The MST Lemma from the previous lecture tells us that it is safe to add the light edge. In the figure, this is the edge of weight 4 going to vertex u. Then u is added to the vertices of S, and the cut changes. Note that some edges that crossed the cut before are no longer crossing it, and others that were not crossing the cut now are.

It is easy to see that the key questions in the efficient implementation of Prim's algorithm are how to update the cut efficiently, and how to determine the light edge quickly. To do this, we will make use of a priority queue data structure. Recall that this is the data structure used in HeapSort. This is a data structure that stores a set of items, where each item is associated with a key value. The priority queue supports three operations.

insert(u, key): Insert u with the key value key in Q.
extractMin(): Extract the item with the minimum key value in Q.

decreaseKey(u, new_key): Decrease the value of u's key value to new_key.

A priority queue can be implemented using the same heap data structure used in heapsort. All of the above operations can be performed in O(log n) time, where n is the number of items in the heap.

What do we store in the priority queue? At first you might think that we should store the edges that cross the cut, since this is what we are removing with each step of the algorithm. The problem is that when a vertex is moved from one side of the cut to the other, this results in a complicated sequence of updates. There is a much more elegant solution, and this is what makes Prim's algorithm so nice. For each vertex u ∈ V − S (not part of the current spanning tree) we associate u with a key value key[u], which is the weight of the lightest edge going from u to any vertex in S. We also store in pred[u] the end vertex of this edge in S. If there is no edge from u to a vertex in S, then we set its key value to +∞. We will also need to know which vertices are in S and which are not. We do this by coloring the vertices in S black.

Here is Prim's algorithm. The root vertex r can be any vertex in V.

Prim's Algorithm
Prim(G,w,r) {
    for each (u in V) {                     // initialization
        key[u] = +infinity
        color[u] = white
    }
    key[r] = 0                              // start at root
    pred[r] = nil
    Q = new PriQueue(V)                     // put vertices in Q
    while (Q.nonEmpty()) {                  // until all vertices in MST
        u = Q.extractMin()                  // vertex with lightest edge
        for each (v in Adj[u]) {
            if ((color[v] == white) && (w(u,v) < key[v])) {
                key[v] = w(u,v)             // new lighter edge out of v
                Q.decreaseKey(v, key[v])
                pred[v] = u
            }
        }
        color[u] = black
    }
    [The pred pointers define the MST as an inverted tree rooted at r]
}

The following figure illustrates Prim's algorithm. The arrows on edges indicate the predecessor pointers, and the numeric label in each vertex is the key value.

To analyze Prim's algorithm, we account for the time spent on each vertex as it is extracted from the priority queue. It takes O(log V) time to extract this vertex from the queue. For each incident edge, we spend potentially O(log V) time decreasing the key of the neighboring vertex. The other steps of the update are constant time. Thus the time spent processing vertex u is O(log V + deg(u) log V). So the overall running time is

    T(V, E) = Σ_{u ∈ V} (log V + deg(u) log V) = Σ_{u ∈ V} (1 + deg(u)) log V
            = (log V) Σ_{u ∈ V} (1 + deg(u)) = (log V)(V + 2E) = Θ((V + E) log V).

Since G is connected, V is asymptotically no greater than E, so this is Θ(E log V). This is exactly the same as Kruskal's algorithm.
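As an illustration (not the notes' code), here is a Python sketch of Prim's algorithm using `heapq` as the priority queue. Since `heapq` has no decreaseKey operation, the sketch pushes a fresh (key, vertex) pair on each key update and skips stale entries on extraction, a standard workaround; the example graph is hypothetical.

```python
import heapq

def prim(adj, r):
    """adj: dict u -> list of (v, w) pairs. Returns (total MST weight, pred map)."""
    key = {u: float("inf") for u in adj}
    pred = {u: None for u in adj}
    key[r] = 0                              # start at root
    pq = [(0, r)]
    in_tree = set()                         # the "black" vertices S
    while pq:
        k, u = heapq.heappop(pq)            # vertex with lightest edge
        if u in in_tree:
            continue                        # stale entry; skip it
        in_tree.add(u)
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                key[v] = w                  # new lighter edge out of v
                pred[v] = u
                heapq.heappush(pq, (w, v))  # stand-in for decreaseKey
    return sum(key.values()), pred

g = {"a": [("b", 4), ("c", 8)], "b": [("a", 4), ("c", 9), ("d", 7)],
     "c": [("a", 8), ("b", 9), ("d", 2)], "d": [("b", 7), ("c", 2)]}
total, pred = prim(g, "a")
assert total == 13                          # same MST weight as Kruskal finds here
```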

Fig. 36: Prim's Algorithm (example run, showing key values and the queue contents at each step).

Baruvka's Algorithm: We have seen two ways (Kruskal's and Prim's algorithms) for solving the MST problem, so it may seem like complete overkill to consider yet another algorithm. This one is called Baruvka's algorithm. It is actually the oldest of the three algorithms (invented in 1926, well before the first computers). The reason for studying this algorithm is that of the three algorithms, it is the easiest to implement on a parallel computer. Unlike Kruskal's and Prim's algorithms, which add edges one at a time, Baruvka's algorithm adds a whole set of edges all at once to the MST.

Baruvka's algorithm is similar to Kruskal's algorithm, in the sense that it works by maintaining a collection of disconnected trees. Let us call each subtree a component. Recall that with each stage of Kruskal's algorithm, we add the lightest-weight edge that connects two different components together. To prove Kruskal's algorithm correct, we argued (from the MST Lemma) that the lightest such edge will be safe to add to the MST. In fact, a closer inspection of the proof reveals that the cheapest edge leaving any component is always safe. This suggests a more parallel way to grow the MST.

Initially, each vertex is by itself in a one-vertex component. Each component determines the lightest edge that goes from inside the component to outside the component (we don't care where). We say that such an edge leaves the component. Note that two components might select the same edge by this process. By the above observation, all of these edges are safe, so we may add them all at once to the set A of edges in the MST. As a result, many components will be merged together into a single component. We then apply DFS to the edges of A, to identify the new components. This process is repeated until only one component remains. A fairly high-level description of Baruvka's algorithm is given below.

Baruvka's Algorithm
Baruvka(G=(V,E), w) {
    initialize each vertex to be its own component
    A = {}                                  // A holds edges of the MST
    do {
        for (each component C) {
            find the lightest edge (u,v) with u in C and v not in C
            add {u,v} to A (unless it is already there)
        }
        apply DFS to graph H=(V,A), to compute the new components
    } while (there are 2 or more components)
    return A                                // return final MST edges
}
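The high-level description above can be sketched in Python (an illustration, not the notes' code). For brevity this sketch tracks components with a small Union-Find rather than the DFS relabeling the notes describe; edge weights are assumed distinct (tuple comparison breaks ties consistently), and the example graph is hypothetical.

```python
def baruvka(vertices, edges):
    """edges: list of (w, u, v) with distinct weights. Returns (total, MST edges)."""
    parent = {u: u for u in vertices}
    def find(u):                            # component representative
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    A, total, ncomp = set(), 0, len(vertices)
    while ncomp > 1:
        cheapest = {}                       # component root -> lightest leaving edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:                    # edge leaves both components
                for r in (ru, rv):
                    if r not in cheapest or (w, u, v) < cheapest[r]:
                        cheapest[r] = (w, u, v)
        for w, u, v in cheapest.values():   # add all the safe edges at once
            ru, rv = find(u), find(v)
            if ru != rv:                    # two components may pick the same edge
                parent[ru] = rv
                A.add((u, v))
                total += w
                ncomp -= 1
    return total, sorted(A)

verts = ["a", "b", "c", "d"]
edges = [(4, "a", "b"), (8, "a", "c"), (9, "b", "c"), (2, "c", "d"), (7, "b", "d")]
total, A = baruvka(verts, edges)
assert total == 13 and len(A) == 3
```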

There are a number of unspecified details in Baruvka's algorithm, which we will not spell out, except to note that they can be solved in Θ(V + E) time through DFS. First, since G is connected, we may apply DFS to compute the components. Each DFS tree will correspond to a separate component. We label each vertex with its component number as part of this process. With these labels it is easy to determine which edges go between components (since their endpoints have different labels). Then we can traverse each component again to determine the lightest edge that leaves the component. (In fact, with a little more cleverness, we can do all this without having to perform two separate DFS's, but only traversing the edges of A to compute the components.) The algorithm is illustrated in the figure below.

Fig. 37: Baruvka's Algorithm.

Analysis: How long does Baruvka's algorithm take? Observe that because each iteration involves doing a DFS, each iteration (of the outer do-while loop) can be performed in Θ(V + E) time. The question is how many iterations are required in general? We claim that there are never more than O(log n) iterations needed. To see why, let m denote the number of components at some stage. Each of the m components will merge with at least one other component. Afterwards the number of remaining components could be as low as 1 (if they all merge together), but never higher than m/2 (if they merge in pairs). Thus, the number of components decreases by at least half with each iteration. Since we start with V components, this can happen at most lg V times. Thus, the total running time is Θ((V + E) log V). Again, since G is connected, V is asymptotically no larger than E, so we can write this more succinctly as Θ(E log V). Thus all three algorithms have the same asymptotic running time.

Lecture 14: Dijkstra's Algorithm for Shortest Paths

Read: Chapt 24 in CLRS.

Shortest Paths: Consider the problem of computing shortest paths in a directed graph. We have already seen that breadth-first search is an O(V + E) algorithm for finding shortest paths from a single source vertex to all other vertices, assuming that the graph has no edge weights. Suppose that the graph has edge weights, and we wish to compute the shortest paths from a single source vertex to all other vertices in the graph. Think of the vertices as cities, and the weights as representing the cost of traveling from one city to another (nonexistent edges can be thought of as having infinite cost).

By the way, there are other formulations of the shortest path problem. One may want just the shortest path between a single pair of vertices. Most algorithms for this problem are variants of the single-source algorithm that we will present. There is also a single sink problem, which can be solved in the transpose digraph (that is, by reversing the edges). Computing all-pairs shortest paths can be solved by iterating a single-source algorithm over all vertices, but there are other global methods that are faster.

When edge weights are present, we define the length of a path to be the sum of edge weights along the path.

Define the distance between two vertices, u and v, δ(u, v), to be the length of the minimum length path from u to v. (δ(u, u) = 0 by considering the path of 0 edges from u to itself.)

Single Source Shortest Paths: The single source shortest path problem is as follows. We are given a directed graph with nonnegative edge weights, G = (V, E), and a distinguished source vertex, s ∈ V. The problem is to determine the distance from the source vertex to every vertex in the graph.

It is possible to have graphs with negative edges, but in order for the shortest path to be well defined, we need to add the requirement that there be no cycles whose total cost is negative (otherwise you could make the path infinitely short by cycling forever through such a cycle). The text discusses the Bellman-Ford algorithm for finding shortest paths assuming negative weight edges but no negative-weight cycles are present. We will discuss a simple greedy algorithm, called Dijkstra's algorithm, which assumes there are no negative edge weights.

We will stress the task of computing the minimum distance from the source to each vertex. Computing the actual path will be a fairly simple extension. As in breadth-first search, for each vertex we will have a pointer pred[v] which points back toward the source. By following the predecessor pointers backwards from any vertex, we will construct the reversal of the shortest path to v.
Shortest Paths and Relaxation: The basic structure of Dijkstra's algorithm is to maintain an estimate of the shortest path length for each vertex; call this d[v]. (NOTE: Don't confuse d[v] with the d[v] in the DFS algorithm. They are completely different.) Intuitively d[v] will be the length of the shortest path that the algorithm knows of from s to v. This value will always be greater than or equal to the true shortest path distance from s to v. Initially, we know of no paths, so d[v] = ∞. Initially d[s] = 0 and all the other d[v] values are set to ∞. As the algorithm goes on, and sees more and more vertices, it attempts to update d[v] for each vertex in the graph, until all the d[v] values converge to the true shortest distances.

The process by which an estimate is updated is called relaxation. Here is how relaxation works. Consider an edge from a vertex u to v whose weight is w(u, v). Suppose that we have already computed current estimates on d[u] and d[v]. We know that there is a path from s to u of weight d[u]. By taking this path and following it with the edge (u, v), we get a path to v of length d[u] + w(u, v). If this path is better than the existing path of length d[v] to v, we should update d[v] to the value d[u] + w(u, v). We should also remember that the shortest path to v passes through u, which we do by updating v's predecessor pointer. This is illustrated in Fig. 38. This notion is common to many optimization algorithms: if you can see that your solution has not yet reached an optimum value, then push it a little closer to the optimum. In particular, if you discover a path from s to v shorter than d[v], then you need to update d[v].

Relaxing an edge
Relax(u,v) {
    if (d[u] + w(u,v) < d[v]) {    // is the path through u shorter?
        d[v] = d[u] + w(u,v)       // yes, then take it
        pred[v] = u                // record that we go through u
    }
}

Fig. 38: Relaxation. [Figure omitted: before relaxing (u, v) we have d[u] = 3, w(u, v) = 5, and d[v] = 11; afterwards d[v] = 8.]
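The Relax operation translates almost line for line into runnable code. A minimal Python version follows; the dictionaries d, pred, and w are our illustrative representations, not part of the notes' pseudocode. The example mirrors Fig. 38, where d[u] = 3, w(u, v) = 5, and d[v] drops from 11 to 8:

```python
INF = float("inf")

def relax(u, v, w, d, pred):
    """Relax edge (u, v): if the path through u is shorter than the
    best known path to v, update v's distance and predecessor."""
    if d[u] + w[(u, v)] < d[v]:
        d[v] = d[u] + w[(u, v)]    # found a shorter path to v
        pred[v] = u                # the shorter path goes through u

# Mirroring Fig. 38: d[u] = 3, edge weight 5, old d[v] = 11
d = {"s": 0, "u": 3, "v": 11}
pred = {"s": None, "u": "s", "v": None}
relax("u", "v", {("u", "v"): 5}, d, pred)   # d["v"] becomes 8
```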

Dijkstra's Algorithm: Dijkstra's algorithm is based on the notion of performing repeated relaxations. It is not hard to see that if we perform Relax(u, v) repeatedly over all edges of the graph, the d[v] values will eventually converge to the final true distance values from s. The cleverness of any shortest path algorithm lies in performing the updates in a judicious manner, so that the convergence is as fast as possible. The best possible would be to order the relaxation operations in such a way that each edge is relaxed exactly once. Dijkstra's algorithm does exactly this. Dijkstra recognized that the best way in which to perform relaxations is by increasing order of distance from the source.

Dijkstra's algorithm operates by maintaining a subset of vertices, S ⊆ V, for which we claim we "know" the true distance, that is, d[v] = δ(s, v). Initially S = ∅, the empty set, and we set d[s] = 0 and all others to +∞. One by one we select vertices from V − S to add to S. The set S can be implemented using an array of vertex colors. Initially all vertices are white, and we set color[v] = black to indicate that v ∈ S.

How do we select which vertex among the vertices of V − S to add next to S? Here is where greedy selection comes in. For each vertex u ∈ V − S, we maintain a distance estimate d[u]. The greedy thing to do is to take the vertex of V − S for which d[u] is minimum, that is, to take the unprocessed vertex that is closest (by our estimate) to s. Later we will justify why this is the proper choice. In order to perform this selection efficiently, we store the vertices of V − S in a priority queue (e.g. a heap), where the key value of each vertex u is d[u]. Note the similarity with Prim's algorithm, although a different key value is used there. Also recall that if we implement the priority queue using a heap, we can perform the operations Insert(), Extract_Min(), and Decrease_Key() on a priority queue of size n, each in O(log n) time. To implement this, each vertex "knows" its location in the priority queue (e.g. has a cross reference link to the priority queue entry), and each entry in the priority queue "knows" which vertex it represents. It is important when implementing the priority queue that this cross reference information is updated.

Here is Dijkstra's algorithm. (Note the remarkable similarity to Prim's algorithm.) An example is presented in Fig. 39. Notice that the coloring is not really used by the algorithm, but it has been included to make the connection with the correctness proof a little clearer. Because of the similarity between this and Prim's algorithm, the running time is the same, namely Θ(E log V).

Correctness: Recall that d[v] is the distance value assigned to vertex v by Dijkstra's algorithm, and let δ(s, v) denote the length of the true shortest path from s to v. To see that Dijkstra's algorithm correctly gives the final true distances, we need to show that d[v] = δ(s, v) when the algorithm terminates. Observe that whenever we set d[v] to a finite value, there is always evidence of a path of that length, so d[v] ≥ δ(s, v); that is, d[v] is never less than the true shortest path distance. Moreover, once d[v] = δ(s, v), further relaxations cannot change its value. Since at the end of the algorithm all vertices are in S, it suffices to show that all distance estimates are correct when their vertices are added to S. This is a consequence of the following lemma, which states that once a vertex u has been added to S (that is, colored black), d[u] is the true shortest distance from s to u.

Lemma: When a vertex u is added to S, d[u] = δ(s, u).

Proof: It will simplify the proof conceptually if we assume that all the edge weights are strictly positive (the general case of nonnegative edges is presented in the text). Suppose to the contrary that at some point Dijkstra's algorithm first attempts to add a vertex u to S for which d[u] ≠ δ(s, u). By our observations about relaxation, d[u] is never less than δ(s, u); thus we have d[u] > δ(s, u). Consider the situation just prior to the insertion of u, and consider the true shortest path from s to u. Because s ∈ S and u ∈ V − S, at some point this path must first jump out of S. Let (x, y) be the edge taken by the path, where x ∈ S and y ∈ V − S. (Note that it may be that x = s and/or y = u.)

Dijkstra's Algorithm
Dijkstra(G,w,s) {
    for each (u in V) {                // initialization
        d[u] = +infinity
        color[u] = white
        pred[u] = null
    }
    d[s] = 0                           // dist to source is 0
    Q = new PriQueue(V)                // put all vertices in Q
    while (Q.nonEmpty()) {             // until all vertices processed
        u = Q.extractMin()             // select u closest to s
        for each (v in Adj[u]) {
            if (d[u] + w(u,v) < d[v]) {    // Relax(u,v)
                d[v] = d[u] + w(u,v)
                Q.decreaseKey(v, d[v])
                pred[v] = u
            }
        }
        color[u] = black
    }
    [The pred pointers define an ''inverted'' shortest path tree]
}

Fig. 39: Dijkstra's Algorithm example. [Figure omitted: a sequence of snapshots of the algorithm on a small digraph, showing the d[] values as each vertex is processed.]

Fig. 40: Correctness of Dijkstra's Algorithm. [Figure omitted: the set S, the vertex u about to be inserted, and the edge (x, y) by which a hypothetical shorter path from s to u first leaves S.]
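The pseudocode above can be transcribed into runnable form. This Python sketch is ours, not from the notes; since the standard library heap (heapq) has no decreaseKey operation, it uses the common "lazy deletion" alternative: push a fresh entry on each relaxation and skip stale entries at extraction. The asymptotic running time remains O(E log V):

```python
import heapq

def dijkstra(adj, s):
    """adj: dict mapping u -> list of (v, w) pairs with w >= 0.
    Returns (d, pred) as in the pseudocode above."""
    d = {u: float("inf") for u in adj}
    pred = {u: None for u in adj}
    d[s] = 0
    pq = [(0, s)]                      # (distance estimate, vertex)
    done = set()                       # the set S of finished vertices
    while pq:
        du, u = heapq.heappop(pq)      # extractMin
        if u in done:
            continue                   # stale (superseded) entry; skip it
        done.add(u)                    # color[u] = black
        for v, w in adj[u]:            # relax all edges out of u
            if du + w < d[v]:
                d[v] = du + w
                pred[v] = u
                heapq.heappush(pq, (d[v], v))  # in place of decreaseKey
    return d, pred
```

Following the pred pointers backwards from any vertex yields the reversed shortest path, as described earlier.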

We argue that y ≠ u. Why? Since x ∈ S, we have d[x] = δ(s, x). (Since u was the first vertex added to S which violated the lemma, all prior vertices satisfy it.) Since we applied relaxation to x when it was added, we would have set d[y] = d[x] + w(x, y) = δ(s, y). Thus d[y] is correct, and by hypothesis, d[u] is not correct, so they cannot be the same. Now observe that since y appears somewhere along the shortest path from s to u (but not at u) and all subsequent edges following y are of positive weight, we have δ(s, y) < δ(s, u), and hence d[y] = δ(s, y) < δ(s, u) < d[u]. Thus y would have been added to S before u, in contradiction to our assumption that u is the next vertex to be added to S.

Lecture 15: All-Pairs Shortest Paths

Read: Section 25.2 in CLRS.

All-Pairs Shortest Paths: We consider the generalization of the shortest path problem to computing shortest paths between all pairs of vertices, that is, determining the cost of the shortest path between every pair of vertices in a weighted directed graph. Let G = (V, E) be a directed graph with edge weights. If (u, v) ∈ E, then the weight of this edge is denoted w(u, v). Recall that the cost of a path is the sum of edge weights along the path, and the distance between two vertices, δ(u, v), is the cost of the minimum cost path between them. We will allow G to have negative cost edges, but we will not allow G to have any negative cost cycles. We will present a Θ(n³) algorithm, called the Floyd-Warshall algorithm. This algorithm is based on dynamic programming.

For this algorithm, we will assume that the digraph is represented as an adjacency matrix, rather than the more common adjacency list. Although adjacency lists are generally more efficient for sparse graphs, storing all the inter-vertex distances will require Ω(n²) storage anyway, so the savings is not justified here. Because the algorithm is matrix-based, we will employ common matrix notation, using i, j and k to denote vertices rather than u, v, and w as we usually do.

Input Format: The input is an n × n matrix w of edge weights, which are based on the edge weights in the digraph. We let w_ij denote the entry in row i and column j of w:

    w_ij = 0          if i = j,
    w_ij = w(i, j)    if i ≠ j and (i, j) ∈ E,
    w_ij = +∞         if i ≠ j and (i, j) ∉ E.

Setting w_ij = ∞ when there is no edge intuitively means that there is no direct link between these two nodes, and hence the direct cost is infinite. The reason for setting w_ii = 0 is that there is always a trivial path of length 0 (using no edges) from any vertex to itself. (Note that in digraphs it is possible to have self-loop edges, and so w(i, i) may generally be nonzero. It cannot be negative, since we assume that there are no negative cost cycles, and if it is positive, there is no point in using it as part of any shortest path.)

The output will be an n × n distance matrix D = d_ij, where d_ij = δ(i, j), the shortest path cost from vertex i to j. Recovering the shortest paths will also be an issue. To help us do this, we will also compute an auxiliary matrix mid[i, j]. The value of mid[i, j] will be a vertex that is somewhere along the shortest path from i to j. If the shortest path travels directly from i to j without passing through any other vertices, then mid[i, j] will be set to null. These intermediate values behave somewhat like the predecessor pointers in Dijkstra's algorithm, and allow us to reconstruct the final shortest path in Θ(n) time.
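The input conventions above can be packaged in a small helper. This Python function is our illustration of the format (0-indexed), not part of the notes:

```python
INF = float("inf")

def weight_matrix(n, edges):
    """Build the n x n matrix w from a list of (i, j, weight) triples,
    following the convention: w[i][i] = 0, w[i][j] = the edge weight if
    the directed edge (i, j) exists, and +infinity otherwise."""
    w = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, wt in edges:
        w[i][j] = wt                  # directed edge i -> j
    return w
```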

Floyd-Warshall Algorithm: The Floyd-Warshall algorithm dates back to the early 60's. Warshall was interested in the weaker question of reachability: determine for each pair of vertices u and v whether u can reach v. Floyd realized that the same technique could be used to compute shortest paths with only minor variations. The Floyd-Warshall algorithm runs in Θ(n³) time.

As with any DP algorithm, the key is reducing a large problem to smaller problems. A natural way of doing this is by limiting the number of edges of the path, but it turns out that this does not lead to the fastest algorithm (though it is an approach worthy of consideration). The main feature of the Floyd-Warshall algorithm is in finding the best formulation for the shortest path subproblem. Rather than limiting the number of edges on the path, it instead limits the set of vertices through which the path is allowed to pass. For a path p = ⟨v1, v2, ..., vℓ⟩, we say that the vertices v2, ..., v(ℓ−1) are the intermediate vertices of this path. Note that a path consisting of a single edge has no intermediate vertices.

Formulation: Define d^(k)_ij to be the length of the shortest path from i to j such that any intermediate vertices on the path are chosen from the set {1, 2, ..., k}. In other words, we consider a path from i to j which either consists of the single edge (i, j), or visits some intermediate vertices along the way, but these intermediates can only be chosen from among {1, 2, ..., k}. The path is free to visit any subset of these vertices, and to do so in any order. For example, in the digraph shown in Fig. 41(a), notice how the value of d^(k)_{5,6} changes as k varies: d^(3)_{5,6} can go through any combination of the intermediate vertices {1, 2, 3}, one of which achieves the lowest cost of 8.

Fig. 41: Limiting the intermediate vertices. [Figure omitted: (a) a small digraph and the values d^(k)_{5,6} for increasing k; (b) the two cases of the update rule, d^(k−1)_ij versus d^(k−1)_ik + d^(k−1)_kj.]

Floyd-Warshall Update Rule: How do we compute d^(k)_ij, assuming that we have already computed the previous matrix d^(k−1)? There are two basic cases, depending on the ways that we might get from vertex i to vertex j, assuming that the intermediate vertices are chosen from {1, 2, ..., k}:

Don't go through k at all: Then the shortest path from i to j uses only intermediate vertices in {1, ..., k−1}, and hence the length of the shortest path is d^(k−1)_ij.

Do go through k: First observe that a shortest path does not pass through the same vertex twice, so we can assume that we pass through k exactly once. (The assumption that there are no negative cost cycles is being used here.) That is, we go from i to k, and then from k to j. In order for the overall path to be as short as possible, we should take the shortest path from i to k and the shortest path from k to j. Since each of these paths uses intermediate vertices only in {1, 2, ..., k−1}, the length of the path is d^(k−1)_ik + d^(k−1)_kj.

This suggests the following recursive rule (the DP formulation) for computing d^(k):

    d^(0)_ij = w_ij,
    d^(k)_ij = min( d^(k−1)_ij,  d^(k−1)_ik + d^(k−1)_kj )    for k ≥ 1.

The final answer is d^(n)_ij, because this allows all possible vertices as intermediate vertices. The rule is illustrated in Fig. 41(b).

We could write a recursive program to compute d^(k)_ij, but this would be prohibitively slow because the same value may be reevaluated many times. Instead, we compute it by storing the values in a table and looking the values up as we need them. Here is the complete algorithm. We have also included the mid-vertex pointers, mid[i, j], for extracting the final shortest paths. We will leave the extraction of the shortest path as an exercise.

Floyd-Warshall Algorithm
Floyd_Warshall(int n, int W[1..n, 1..n]) {
    array d[1..n, 1..n]
    for i = 1 to n do {                 // initialize
        for j = 1 to n do {
            d[i,j] = W[i,j]
            mid[i,j] = null
        }
    }
    for k = 1 to n do                   // use intermediates {1..k}
        for i = 1 to n do               // ...from i
            for j = 1 to n do           // ...to j
                if (d[i,k] + d[k,j] < d[i,j]) {
                    d[i,j] = d[i,k] + d[k,j]    // new shorter path length
                    mid[i,j] = k                // new path is through k
                }
    return d                            // matrix of distances
}

An example of the algorithm's execution is shown in Fig. 42.

Clearly the algorithm's running time is Θ(n³), and the space used by the algorithm is Θ(n²). Observe that we deleted all references to the superscript (k) in the code. It is left as an exercise to show that this does not affect the correctness of the algorithm. (Hint: The danger is that values may be overwritten and then used later in the same phase. Consider which entries might be overwritten and then reused; they occur in row k and column k. It can be shown that the overwritten values are equal to their original values.)
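The pseudocode translates directly into runnable Python (0-indexed). The path-recovery routine built from the mid[] pointers is our sketch of the extraction that the notes leave as an exercise:

```python
INF = float("inf")

def floyd_warshall(w):
    """w: n x n matrix following the input conventions above.
    Returns (d, mid), where d[i][j] = delta(i, j)."""
    n = len(w)
    d = [row[:] for row in w]              # d^(0) = w
    mid = [[None] * n for _ in range(n)]
    for k in range(n):                     # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
                    mid[i][j] = k          # the shortest path goes through k
    return d, mid

def path(mid, i, j):
    """Recover the intermediate vertices of the shortest i-j path."""
    k = mid[i][j]
    if k is None:
        return []                          # direct edge (no intermediates)
    return path(mid, i, k) + [k] + path(mid, k, j)
```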
Lecture 16: NP-Completeness: Languages and NP

Read: Chapt 34 in CLRS, up through Section 34.2.

Complexity Theory: At this point of the semester we have been building up your "bag of tricks" for solving algorithmic problems. Hopefully when presented with a problem you now have a little better idea of how to go about solving it: what sort of design paradigm should be used (divide-and-conquer, greedy, dynamic programming), what sort of data structures might be relevant (trees, heaps, graphs), what representations would be best (adjacency list, adjacency matrices), and what the running time of your algorithm is. All of this is fine if it helps you discover an acceptably efficient algorithm to solve your problem.

The question that often arises in practice is that you have tried every trick in the book, and nothing seems to work.

Fig. 42: Floyd-Warshall example. Newly updated entries are circled. [Figure omitted: a 4-vertex digraph and the sequence of distance matrices d^(0) through d^(4).]

Although your algorithm can solve small problems reasonably efficiently (e.g. n ≤ 20), for the really large applications that you want to solve (e.g. n = 1,000 or n = 10,000) your algorithm never terminates. When you analyze its running time, you realize that it is running in exponential time, perhaps n^(√n), or 2^n, or 2^(2^n), or n!, or worse!

Near the end of the 60's there was great success in finding efficient solutions to many combinatorial problems, but there was also a growing list of problems for which there seemed to be no known efficient algorithmic solutions. People began to wonder whether there was some unknown paradigm that would lead to a solution to these problems, or perhaps some proof that these problems are inherently hard to solve and no algorithmic solutions exist that run in under exponential time. Near the end of the 60's a remarkable discovery was made: many of these hard problems were interrelated in the sense that if you could solve any one of them in polynomial time, then you could solve all of them in polynomial time. This discovery gave rise to the notion of NP-completeness, and created possibly the biggest open problem in computer science: is P = NP? We will be studying this concept over the next few lectures.

This area is a radical departure from what we have been doing, because the emphasis will change. The goal is no longer to prove that a problem can be solved efficiently by presenting an algorithm for it. Instead we will be trying to show that a problem cannot be solved efficiently. The question is how to do this?

Laying down the rules: We need some way to separate the class of efficiently solvable problems from inefficiently solvable problems. We will do this by considering problems that can be solved in polynomial time. When designing algorithms it has been possible for us to be rather informal with various concepts; we have made use of the fact that an intelligent programmer could fill in any missing details. However, the task of proving that something cannot be done efficiently must be handled much more carefully, since we do not want to leave any "loopholes" that would allow someone to subvert the rules in an unreasonable way and claim to have an efficient solution when one does not really exist.

We have measured the running time of algorithms using worst-case complexity, as a function of n, the size of the input. We have defined input size variously for different problems, but the bottom line is the number of bits (or bytes) that it takes to represent the input using any reasonably efficient encoding. By a reasonably efficient encoding, we assume that there is not some significantly shorter way of providing the same information. For example, you could write the number 8 in unary notation as 11111111 (eight 1's) rather than in binary as 1000, but that would be unacceptably inefficient. You could describe graphs in some highly inefficient way, such as by listing all of their cycles, but this would also be unacceptable. We will assume that numbers are expressed in binary or some higher base and that graphs are expressed using either adjacency matrices or adjacency lists.

We will usually restrict numeric inputs to be integers (as opposed to calling them "reals"), so that it is clear that arithmetic can be performed efficiently. We have also assumed that operations on numbers can be performed in constant time. From now on, we should be more careful and assume that arithmetic operations require at least as much time as there are bits of precision in the numbers being stored.

Up until now all the algorithms we have seen have had the property that their worst-case running times are bounded above by some polynomial in the input size, n. A polynomial time algorithm is any algorithm that runs in time O(n^k), where k is some constant that is independent of n. A problem is said to be solvable in polynomial time if there is a polynomial time algorithm that solves it.

Some functions that do not "look" like polynomials (such as O(n log n)) are bounded above by polynomials (such as O(n²)). Some functions that do "look" like polynomials are not. For example, suppose you have an algorithm which inputs a graph of size n and an integer k and runs in O(n^k) time. Is this a polynomial? No, because k is an input to the problem, so the user is allowed to choose k = n, implying that the running time would be O(n^n), which is not a polynomial in n. The important thing is that the exponent must be a constant independent of n.

Of course, saying that all polynomial time algorithms are "efficient" is untrue. An algorithm whose running time is O(n^1000) is certainly pretty inefficient. Nonetheless, if an algorithm runs in worse than polynomial time (e.g. 2^n), then it is certainly not efficient, except for very small values of n.
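The gap between a unary and a binary encoding is easy to quantify. A small Python illustration (ours, not from the notes): the unary encoding of n takes n symbols, while the binary encoding takes only about log2(n) bits, an exponential difference in input size:

```python
def unary_len(n):
    """Length of the unary encoding of n (a string of n ones)."""
    return n

def binary_len(n):
    """Number of bits in the binary encoding of n."""
    return n.bit_length()

# E.g. encoding n = 2**20 takes over a million symbols in unary
# but only 21 bits in binary.
```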

Decision Problems: Many of the problems that we have discussed involve optimization of one form or another: find the shortest path, find the minimum cost spanning tree, find the minimum weight triangulation. For rather technical reasons, most NP-complete problems that we will discuss will be phrased as decision problems. A problem is called a decision problem if its output is a simple "yes" or "no" (or you may think of this as True/False, 0/1, accept/reject). We will phrase many optimization problems in terms of decision problems. For example, the minimum spanning tree decision problem might be: Given a weighted graph G and an integer k, does G have a spanning tree whose weight is at most k?

This may seem like a less interesting formulation of the problem. It does not ask for the weight of the minimum spanning tree, and it does not even ask for the edges of the spanning tree that achieves this weight. However, our job will be to show that certain problems cannot be solved efficiently. If we show that the simple decision problem cannot be solved efficiently, then the more general optimization problem certainly cannot be solved efficiently either.

Language Recognition Problems: Observe that a decision problem can also be thought of as a language recognition problem. We could define a language L

    L = {(G, k) | G has a MST of weight at most k}.

This set consists of pairs: the first element is a graph (e.g. the adjacency matrix encoded as a string) followed by an integer k encoded as a binary number. At first it may seem strange to express a graph as a string, but obviously anything that is represented in a computer is broken down somehow into a string of bits. When presented with an input string (G, k), the algorithm would answer "yes" if (G, k) ∈ L, implying that G has a spanning tree of weight at most k, and "no" otherwise. In the first case we say that the algorithm "accepts" the input, and otherwise it "rejects" the input.

Given any language, we can ask the question of how hard it is to determine whether a given string is in the language. For example, in the case of the MST language L, we can determine membership easily in polynomial time: we just store the graph internally, run Kruskal's algorithm, and see whether the final optimal weight is at most k. If so we accept, and otherwise we reject.

Definition: Define P to be the set of all languages for which membership can be tested in polynomial time. (Intuitively, this corresponds to the set of all decision problems that can be solved in polynomial time.)

Note that languages are sets of strings, and P is a set of languages. P is defined in terms of how hard it is computationally to recognize membership in the language. A set of languages that is defined in terms of how hard it is to determine membership is called a complexity class. Since we can compute minimum spanning trees in polynomial time, we have L ∈ P. Here is a harder one:

    M = {(G, k) | G has a simple path of length at least k}.

Given a graph G and integer k, how would you "recognize" whether it is in the language M? You might try searching the graph for simple paths, until finding one of length at least k. If you find one then you can accept and terminate. However, if not, then you may spend a lot of time searching (especially if k is large, like n − 1, and no such path exists). So is M ∈ P? No one knows the answer. In fact, we will show that M is NP-complete.

We will jump back and forth between the terms "language" and "decision problem", but for our purposes they mean the same things. In what follows, we will be introducing a number of classes. Before giving all the technical definitions, let us say a bit about what the general classes look like at an intuitive level.

P: This is the set of all decision problems that can be solved in polynomial time. We will generally refer to these problems as being "easy" or "efficiently solvable". (Although this may be an exaggeration in many cases.)
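The membership test for L described above (run an MST algorithm and compare the total weight against k) can be written out directly. This sketch is ours, not from the notes; it uses Kruskal's algorithm with union-find, but any polynomial-time MST algorithm would do:

```python
def in_mst_language(n, edges, k):
    """Decide (G, k) in L: does G (vertices 0..n-1, weighted edge
    list of (w, u, v) triples) have a spanning tree of weight <= k?"""
    parent = list(range(n))

    def find(x):                         # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, used = 0, 0
    for w, u, v in sorted(edges):        # Kruskal: lightest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                     # edge joins two components: keep it
            parent[ru] = rv
            total += w
            used += 1
    if used != n - 1:
        return False                     # G is not even connected: reject
    return total <= k                    # accept iff the MST weight is <= k
```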

NP: This is the set of all decision problems that can be verified in polynomial time. (We will give a definition of this below.) This class contains P as a subset. Thus, it contains a number of easy problems, but it also contains a number of problems that are believed to be very hard to solve. The term NP does not mean "not polynomial". Originally the term meant "nondeterministic polynomial time", but it is a bit more intuitive to explain the concept from the perspective of verification.

NP-hard: In spite of its name, to say that a problem is NP-hard does not mean that it is hard to solve. Rather it means that if we could solve this problem in polynomial time, then we could solve all NP problems in polynomial time. Note that for a problem to be NP-hard, it does not have to be in the class NP. Since it is widely believed that all NP problems are not solvable in polynomial time, it is widely believed that no NP-hard problem is solvable in polynomial time.

NP-complete: A problem is NP-complete if (1) it is in NP, and (2) it is NP-hard. That is, NPC = NP ∩ NP-hard.

The figure below illustrates one way that the sets P, NP, NP-hard, and NP-complete (NPC) might look. We say might because we do not know whether all of these complexity classes are distinct or whether they are all solvable in polynomial time. There are some problems in the figure that we will not discuss. One is Graph Isomorphism, which asks whether two graphs are identical up to a renaming of their vertices. It is known that this problem is in NP, but it is not known to be in P. The other is QBF, which stands for Quantified Boolean Formulas. In this problem you are given a boolean formula with quantifiers (∃ and ∀) and you want to know whether the formula is true or false. This problem is beyond the scope of this course, but may be discussed in an advanced course on complexity theory.

Fig. 43: The (possible) structure of P, NP, and related complexity classes. "One way that things 'might' be." [Figure omitted: nested regions from Easy to Harder, with MST and Strong connectivity in P; Satisfiability, Hamiltonian Cycle, and Knapsack in NPC; Graph Isomorphism marked with a question mark inside NP; and No Ham. Cycle and QBF in NP-hard.]

Polynomial Time Verification and Certificates: Before talking about the class of NP-complete problems, it is important to introduce the notion of a verification algorithm. Many language recognition problems that may be very hard to solve have the property that it is easy to verify whether a given string is in the language. Consider the following problem, called the Hamiltonian cycle problem. Given an undirected graph G, does G have a cycle that visits every vertex exactly once? (There is a similar problem on directed graphs, and there is also a version which asks whether there is a path that visits all vertices.) We can describe this problem as a language recognition problem, where the language is

    HC = {(G) | G has a Hamiltonian cycle},

where (G) denotes an encoding of a graph G as a string. The Hamiltonian cycle problem seems to be much harder, and there is no known polynomial time algorithm for this problem. For example, the figure below shows two graphs, one which is Hamiltonian and one which is not.

However, suppose that a graph did have a Hamiltonian cycle. Then it would be a very easy matter for someone to convince us of this. They would simply say "the cycle is ⟨v3, v7, v1, ..., v13⟩". We could then inspect the graph, and check that this is indeed a legal cycle and that it visits all the vertices of the graph exactly once. Thus, even though we know of no efficient way to solve the Hamiltonian cycle problem, there is a very efficient way to verify that a given graph is in HC. The given cycle is called a certificate. This is some piece of information which allows us to verify that a given string is in a language.
Note that not all languages have the property that they are easy to verify. We have avoided introducing nondeterminism here. Why is the set called “NP” rather than “VP”? The original term NP stood for “nondeterministic polynomial time”. What information would someone give us that would allow us to verify that G is indeed in the language? They could give us an example of the unique Hamiltonian cycle. can verify that x is in the language L using this certiﬁcate as help. a veriﬁcation algorithm is an algorithm which given x and a string y called the certiﬁcate. Like P. Next time we will deﬁne the notions of NP-hard and NP-complete. Thus. we can solve it in polynomial time anyway). For example. The class NP: Deﬁnition: Deﬁne NP to be the set of all languages that can be veriﬁed by a polynomial time algorithm. Thus. NP is a set of languages based on some complexity measure (the complexity of veriﬁcation). there is a very efﬁcient way to verify that a given graph is in HC. The given cycle is called a certiﬁcate. This is some piece of information which allows us to verify that a given string is in a language. In other words. and given x ∈ L. given a language L. Most experts believe that P = NP. Lecture Notes 60 CMSC 451 . However it is not known whether P = NP. They could try to list every other cycle of length n. but what sort of certiﬁcate could they give us to convince us that this is the only one? They could give another cycle that is NOT Hamiltonian. consider the following languages: UHC = {(G) | G has a unique Hamiltonian cycle} HC = {(G) | G has no Hamiltonian cycle}. If x is not in L then there is nothing to verify. since there are n! possible cycles in general. just being able to verify that you have a correct solution does not help you in ﬁnding the actual solution very much. 44: Hamiltonian cycle. even though we know of no efﬁcient way to solve the Hamiltonian cycle problem. It would be covered in a course on complexity theory or formal language theory. 
Fig. 44: Hamiltonian cycle. (One graph shown is nonhamiltonian, the other is Hamiltonian.)

For UHC, exhibiting another cycle that is not Hamiltonian does not mean that there is not another cycle somewhere that is Hamiltonian. For the language HC̄ it is even harder to imagine that someone could give us some information that would allow us to efficiently convince ourselves that a given graph is in the language, and it seems unreasonable to think that this should be so.

Observe that P ⊆ NP: if we can solve a problem in polynomial time, then we can certainly verify membership in polynomial time; basically, we do not even need to see a certificate to solve the problem, since we can solve it in polynomial time anyway. (More formally, the nondeterministic view refers to a program running on a nondeterministic computer that can make guesses: such a computer could nondeterministically guess the value of the certificate, and then verify that the string is in the language in polynomial time.)

To verify that a graph is in HC, given a certificate we check that it is indeed a legal cycle in the graph and that it visits all the vertices of the graph exactly once.
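The verification step just described can be sketched in code. This is a minimal illustration (the function name and the adjacency-matrix encoding are my own, not from the notes): given a graph and a claimed Hamiltonian cycle, the check clearly runs in polynomial time.

```python
# A sketch of the polynomial time verification algorithm for HC: the
# certificate lists the vertices along a claimed Hamiltonian cycle.

def verify_hamiltonian_cycle(adj, cycle):
    n = len(adj)
    # The certificate must name each of the n vertices exactly once.
    if sorted(cycle) != list(range(n)):
        return False
    # Each consecutive pair (including the wrap-around pair) must be an edge.
    return all(adj[cycle[i]][cycle[(i + 1) % n]] == 1 for i in range(n))
```

For example, on the 4-cycle 0-1-2-3-0 this accepts the certificate [0, 1, 2, 3] and rejects [0, 2, 1, 3], since 0 and 2 are not adjacent.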

To convince someone that a graph is in this language, we could supply the certificate consisting of a sequence of vertices along the cycle. It is easy to access the adjacency matrix to determine that this is a legitimate cycle in G.

Lecture 17: NP-Completeness: Reductions

Read: Chapt 34, through Section 34.4.

Summary: Last time we introduced a number of concepts, on the way to defining NP-completeness. We encode inputs as strings, and we consider algorithms that run in polynomial time, that is, in time O(n^k) for some constant k, assuming that the input has been encoded as a string. Note that since all languages in P can be solved in polynomial time, they can certainly be verified in polynomial time; but if there were a polynomial time solution for even a single NP-complete problem, then every problem in NP would be solvable in polynomial time. The following concepts are important.

Certificate: is a piece of evidence that allows us to verify in polynomial time that a string is in a given language. For example:

HC = {(G) | G has a Hamiltonian cycle}
MST = {(G, x) | G has a MST of cost at most x}
Decision Problems: are problems for which the answer is either yes or no. NP-complete problems are expressed as decision problems, and hence can be thought of as language recognition problems.

P: is the class of all decision problems which can be solved in polynomial time. For example, MST ∈ P but HC is not known (and suspected not) to be in P.

NP: is defined to be the class of all languages that can be verified in polynomial time. Since any problem solvable in polynomial time can certainly be verified in polynomial time, we have P ⊆ NP. On the other hand, NP also seems to have some pretty hard problems to solve.

Reductions: The class of NP-complete problems consists of a set of decision problems (languages) (a subset of the class NP) that no one knows how to solve efficiently. Suppose that there are two problems, H and U. We know (or you strongly believe at least) that H is hard, that is, it cannot be solved in polynomial time (H could be HC, for example). On the other hand, the complexity of U is unknown, but we suspect that it too is hard. We want to prove that U cannot be solved in polynomial time. How would we do this? We want to show that (H ∉ P) ⇒ (U ∉ P). To do this, we could prove the contrapositive, (U ∈ P) ⇒ (H ∈ P): suppose that we have a subroutine that can solve any instance of problem U in polynomial time. Then all we need to do is to show that we can use this subroutine to solve problem H in polynomial time, and then derive a contradiction, since we believe H cannot be solved in polynomial time. It is important to note here that this supposed subroutine is really a fantasy; thus we are essentially proving that the subroutine cannot exist, implying that U cannot be solved in polynomial time. Thus we have "reduced" problem H to problem U. (Be sure that you understand this; this is the basis behind all reductions.)

Example: 3-Colorability and Clique Cover: Let us consider an example to make this clearer. The following problem is well-known to be NP-complete.

Fig. 45: 3-coloring and Clique Cover. (Panels: a 3-colorable graph, a graph that is not 3-colorable, and a clique cover of size 3.)

3-coloring (3Col): Given a graph G, can each of its vertices be labeled with one of 3 different "colors", such that no two adjacent vertices have the same label?

Coloring arises in various partitioning problems, where there is a constraint that two objects cannot be assigned to the same set of the partition. The term "coloring" comes from the original application, which was in map drawing: two countries that share a common border should be colored with different colors. It is well known that planar graphs can be colored with 4 colors, and there exists a polynomial time algorithm for this. But determining whether 3 colors are possible (even for planar graphs) seems to be hard, and there is no known polynomial time algorithm; hence experts believe that 3Col ∉ P. In the figure above we gave two graphs: one is 3-colorable and one is not.

Now consider the following problem. Given a graph G = (V, E), we say that a subset of vertices V′ ⊆ V forms a clique if for every pair of vertices u, v ∈ V′, (u, v) ∈ E; that is, the subgraph induced by V′ is a complete graph.

Clique Cover (CCov): Given a graph G = (V, E) and an integer k, can we partition the vertex set into k subsets of vertices V1, V2, . . . , Vk, such that ∪i Vi = V, and each Vi is a clique of G?

The clique cover problem arises in applications of clustering: we put an edge between two nodes if they are similar enough to be clustered in the same group, and we want to know whether it is possible to cluster all the vertices into k groups.

In some sense, the two problems are almost the same. Both involve partitioning the vertices up into groups: in the 3-coloring problem, for two vertices to be in the same color group, they must not be adjacent; in the clique cover problem, for two vertices to be in the same group, they must be adjacent to each other. Thus the requirement adjacent/non-adjacent is exactly reversed, and furthermore, in one problem the number of cliques is specified as part of the input while in the other the number of color classes is fixed at 3.
Suppose that you want to solve the CCov problem, but after a while of fruitless effort, you still cannot find a polynomial time algorithm for the CCov problem. You feel that there is some connection between the CCov problem and the 3Col problem. How can you prove that CCov is likely to not have a polynomial time solution? You know that 3Col is NP-complete, and hence experts believe that 3Col ∉ P. The 3Col problem will play the role of the hard problem H. You want to show that (3Col ∉ P) ⇒ (CCov ∉ P), which you will show by proving the contrapositive, (CCov ∈ P) ⇒ (3Col ∈ P). To do this, you assume that you have access to a subroutine CCov(G, k) for any graph G and any integer k. This subroutine returns true if G has a clique cover of size k and false otherwise, and furthermore, it runs in polynomial time. How can we use this "alleged" subroutine to solve the well-known hard 3Col problem? We want to write a polynomial time subroutine for 3Col, and this subroutine is allowed to call the subroutine CCov(G, k).

Ḡ is a graph on the same vertices as G, such that (u, v) is an edge of Ḡ if and only if it is not an edge of G. We claim that we can reduce the 3-coloring problem to the clique cover problem as follows. Given a graph G for which we want to determine its 3-colorability, output the pair (Ḡ, 3), where Ḡ denotes the complement of G. (Note that it is easy to complement a graph in O(n²), i.e. polynomial, time, e.g. by flipping 0's and 1's in the adjacency matrix.) We can then feed the pair (Ḡ, 3) into a subroutine for clique cover. Thus, we converted an instance of the 3-coloring problem (G) into an equivalent instance of the Clique Cover problem (Ḡ, 3): G ∈ 3Col iff (Ḡ, 3) ∈ CCov.

Claim: A graph G is 3-colorable if and only if its complement Ḡ has a clique-cover of size 3.

Proof: (⇒) If G is 3-colorable, then let V1, V2, V3 be the three color classes. We claim that this is a clique cover of size 3 for Ḡ: if u and v are distinct vertices in Vi, then {u, v} ∉ E(G) (since adjacent vertices cannot have the same color), which implies that {u, v} ∈ E(Ḡ). Thus every pair of distinct vertices in Vi is adjacent in Ḡ, so each Vi is a clique of Ḡ.

(⇐) Suppose Ḡ has a clique cover of size 3, denoted V1, V2, V3. For i ∈ {1, 2, 3} give the vertices of Vi color i. We assert that this is a legal coloring for G: if distinct vertices u and v are both in Vi, then {u, v} ∈ E(Ḡ) (since they are in a common clique), implying that {u, v} ∉ E(G). Thus two vertices with the same color are not adjacent. This is illustrated in the figure below.

Polynomial-time reduction: We now take this intuition of reducing one problem to another through the use of a subroutine call, and place it on more formal footing.

Definition: We say that a language (i.e. decision problem) L1 is polynomial-time reducible to language L2 (written L1 ≤P L2) if there is a polynomial time computable function f, such that for all x, x ∈ L1 if and only if f(x) ∈ L2. In the previous example we showed that 3Col ≤P CCov; in particular we have f(G) = (Ḡ, 3), and f is computable in polynomial time.

Intuitively, saying that L1 ≤P L2 means that "if L2 is solvable in polynomial time, then so is L1"; L1 is "no harder" than L2, in the sense of polynomial time computability. This is because a polynomial time subroutine for L2 could be applied to f(x) to determine whether f(x) ∈ L2, or equivalently whether x ∈ L1.

Lemma: If L1 ≤P L2 and L2 ∈ P then L1 ∈ P.

The way in which this is used in NP-completeness is exactly the converse. We usually have strong evidence that L1 is not solvable in polynomial time,
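To sanity-check the claim and the reduction f(G) = (Ḡ, 3) on tiny graphs, one can pair the O(n²) complement step with a call to the clique-cover subroutine. This is a minimal sketch (all function names are mine); `ccov` stands for the "alleged" black-box subroutine from the text, which is not something we know how to implement efficiently.

```python
# The reduction: G is 3-colorable iff its complement has a clique cover
# of size 3, so 3Col is answered by one call to the CCov subroutine.

def complement(adj):
    """Flip 0's and 1's off the diagonal of an adjacency matrix: O(n^2)."""
    n = len(adj)
    return [[1 if i != j and not adj[i][j] else 0 for j in range(n)]
            for i in range(n)]

def three_col(adj, ccov):
    """ccov(adj, k) is the assumed polynomial time clique-cover oracle."""
    return ccov(complement(adj), 3)
```

For testing on tiny graphs, `ccov` can be replaced by a brute-force stand-in that tries every partition into 3 groups; e.g. a triangle K3 is 3-colorable, while K4 is not.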
and hence the reduction is effectively equivalent to saying "since L1 is not likely to be solvable in polynomial time, then L2 is also not likely to be solvable in polynomial time." Thus, this is how polynomial time reductions can be used to show that problems are as hard to solve as known difficult problems.

Fig. 46: Clique covers in the complement. (G is 3-colorable and Ḡ is coverable by 3 cliques; H is not 3-colorable and H̄ is not coverable.)
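The algorithmic content of the lemma can be stated generically. The following sketch uses hypothetical names of my own: given a polynomial time reduction `f` from L1 to L2 and a polynomial time decider for L2, membership in L1 is decided in polynomial time, since x ∈ L1 iff f(x) ∈ L2.

```python
# Generic use of a polynomial time reduction: to decide L1, transform the
# input and ask the L2 decider. Both f and decide_L2 run in polynomial
# time, so their composition does as well.

def decide_L1(x, f, decide_L2):
    return decide_L2(f(x))
```

As a toy instance, take L1 = even integers, L2 = multiples of 10, and f(x) = 5x: then x ∈ L1 iff f(x) ∈ L2.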

3SAT. The distinction is subtle. then their composition f (g(x)) is computable in polynomial time as well. Recap: So far we introduced the deﬁnitions of NP-completeness. because the deﬁnition says that we have to be able to reduce every problem in NP to this problem. it appears to be almost impossible to prove that one problem is NP-complete. then they all are. Lecture Notes 64 CMSC 451 . and Independent Set Read: Chapter 34. then none are. There are inﬁnitely many such problems. It follows that our problem is equivalent to SAT (with respect to solvability in polynomial time). This is illustrated in the ﬁgure below. Lemma: L is NP-complete if (1) L ∈ N P and (2) L ≤P L for some known NP-complete language L . and conversely. / / One important fact about reducibility is that it is transitive. but people taking other courses in complexity theory should be aware of this. The reason is that if two functions f (x) and g(x) are computable in polynomial time. Deﬁnition: A language L is NP-hard if: L ≤P L for all L ∈ NP. all we need to do is to show that our problem is in NP (and hence it is reducible to SAT). Recall that we mentioned the following topics: P: is the set of decision problems (or languages) that are solvable in polynomial time. This gives us a way to prove that problems are NP-complete. if any one is not solvable in polynomial time. for which it is known that if any one is solvable in polynomial time. and then to show that we can reduce SAT (or generally some known NPC problem) to our problem. once we know that one problem is NP-complete.Lemma: If L1 ≤P L2 and L1 ∈ P then L2 ∈ P . so how can we ever hope to do this? We will talk about this next time with Cook’s theorem. This is made mathematically formal using the notion of polynomial time reductions.5. implying that L is NP-hard. Unfortunately. but not the same as the reduction given in the text. 
The reason is that all L″ ∈ NP are reducible to L′ (since L′ is NP-complete and hence NP-hard), and hence by transitivity L″ is reducible to L, implying that L is NP-hard.

Polynomial reduction: L1 ≤P L2 means that there is a polynomial time computable function f such that x ∈ L1 if and only if f(x) ∈ L2. A more intuitive way to think about this is that if we had a subroutine to solve L2 in polynomial time, then we could use it to solve L1 in polynomial time. Polynomial reductions are transitive, that is, if L1 ≤P L2 and L2 ≤P L3, then L1 ≤P L3.

NP-Hard: L is NP-hard if for all L′ ∈ NP, L′ ≤P L. Thus, if we could solve L in polynomial time, we could solve all NP problems in polynomial time.

NP-Complete: L is NP-complete if (1) L ∈ NP and (2) L is NP-hard.

Lemma: L is NP-complete if (1) L ∈ NP and (2) L′ ≤P L for some NP-complete language L′.

Thus all NP-complete problems are equivalent to one another (in that they are either all solvable in polynomial time, or none are). Note: to use this lemma, the known NP-complete problem L′ is reduced to the candidate NP-complete problem L. Keep this order in mind.

Fig. 47: Structure of NPC and reductions. (Left: proving a problem is in NP. Middle: proving a problem is NP-hard, by reducing SAT or a known NPC problem to your problem via your reduction. Right: the resulting structure of P, NPC, and NP.)

Cook's Theorem: Unfortunately, to use this lemma we need to have at least one NP-complete problem to start the ball rolling. Stephen Cook showed that such a problem existed. Cook's theorem is quite complicated to prove, but we'll try to give a brief intuitive argument as to why such a problem might exist.

For a problem to be in NP, it must have an efficient verification procedure. Thus virtually all NP problems can be stated in the form, "does there exist X such that P(X)?", where X is some structure (e.g. a set, a path, a partition, an assignment, etc.) and P(X) is some property that X must satisfy (e.g. the set of objects must fill the knapsack, or the path must visit every vertex). In showing that such a problem is in NP, the certificate consists of giving X, and the verification involves testing that P(X) holds.

In general, any set X can be described by choosing a set of objects, which in turn can be described as choosing the values of some boolean variables. Similarly, the property P(X) that you need to satisfy can be described as a boolean formula. Stephen Cook was looking for the most general possible property he could, since this should represent the hardest problem in NP to solve. He reasoned that computers (which represent the most general
The importance of NP-complete problems should now be clear. If any NP-complete problem (and generally any NP-hard problem) is solvable in polynomial time, then every NP-complete problem (and in fact every problem in NP) is also solvable in polynomial time. Conversely, if we can prove that any NP-complete problem (and generally any problem in NP) cannot be solved in polynomial time, then every NP-complete problem (and generally every NP-hard problem) cannot be solved in polynomial time. (As a further example of a property P(X): in graph coloring, you may use at most k colors and no two adjacent vertices can have the same color.)

type of computational devices known) could be described entirely in terms of boolean circuits, and hence in terms of boolean formulas. If any problem were hard to solve, it would be one in which X is an assignment of boolean values (true/false, 0/1) and P(X) could be any boolean formula. This suggests the following problem, called the boolean satisfiability problem.

SAT: Given a boolean formula, is there some way to assign truth values (0/1, true/false) to the variables of the formula, so that the formula evaluates to true?

A boolean formula is a logical formula which consists of variables xi, and the logical operations ¬x (meaning the negation of x), boolean-or (x ∨ y), and boolean-and (x ∧ y). Given a boolean formula, we say that it is satisfiable if there is a way to assign truth values (0 or 1) to the variables such that the final result is 1. (As opposed to the case where no matter how you assign truth values the result is always 0.) For example,

(x1 ∧ (x2 ∨ ¬x3)) ∧ ((¬x2 ∧ ¬x3) ∨ ¬x1)

is satisfiable, by the assignment x1 = 1, x2 = 0, x3 = 0. On the other hand,

(x1 ∨ (x2 ∧ x3)) ∧ (¬x1 ∨ (¬x2 ∧ ¬x3)) ∧ (x2 ∨ x3) ∧ (¬x2 ∨ ¬x3)

is not satisfiable. (Observe that the last two clauses imply that one of x2 and x3 must be true and the other must be false. This implies that neither of the subclauses involving x2 and x3 in the first two clauses can be satisfied, but x1 cannot be set to satisfy them either.)

Cook's Theorem: SAT is NP-complete.

We will not prove this theorem; the proof would take about a full lecture (not counting the week or so of background on Turing machines). In fact, it turns out that an even more restricted version of the satisfiability problem is NP-complete. A literal is a variable x or its negation ¬x. A formula is in 3-conjunctive normal form (3-CNF) if it is the boolean-and of clauses, where each clause is the boolean-or of exactly 3 literals. For example,

(x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x3 ∨ x4) ∧ (x2 ∨ ¬x3 ∨ ¬x4)

is in 3-CNF form.
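A brute-force satisfiability check makes the definition concrete. This is my own illustration, not an efficient algorithm: it tries all 2^n truth assignments, which is exactly the exponential blow-up that makes SAT hard. The two formulas below encode the examples above, with the negations as reconstructed here (the overbars were lost in this copy of the notes).

```python
from itertools import product

def satisfiable(formula, n):
    """True iff some of the 2^n assignments makes the formula evaluate true."""
    return any(formula(*bits) for bits in product([False, True], repeat=n))

# (x1 AND (x2 OR NOT x3)) AND ((NOT x2 AND NOT x3) OR NOT x1)
f1 = lambda x1, x2, x3: ((x1 and (x2 or not x3))
                         and ((not x2 and not x3) or not x1))

# (x1 OR (x2 AND x3)) AND (NOT x1 OR (NOT x2 AND NOT x3))
#   AND (x2 OR x3) AND (NOT x2 OR NOT x3)
f2 = lambda x1, x2, x3: ((x1 or (x2 and x3))
                         and (not x1 or (not x2 and not x3))
                         and (x2 or x3) and (not x2 or not x3))
```

Here f1 is satisfied by x1 = 1, x2 = 0, x3 = 0, while f2 is false under every assignment.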
3SAT is the problem of determining whether a formula in 3-CNF is satisfiable. It turns out that it is possible to modify the proof of Cook's theorem to show that the more restricted 3SAT is also NP-complete. As an aside, note that if we replace the 3 in 3SAT with a 2, then everything changes: if a boolean formula is given in 2-CNF, then it is possible to determine its satisfiability in polynomial time. Thus, even a seemingly small change can be the difference between an efficient algorithm and none.

NP-completeness proofs: Now that we know that 3SAT is NP-complete, we can use this fact to prove that other problems are NP-complete. We will start with the independent set problem.

Independent Set (IS): Given an undirected graph G = (V, E) and an integer k, does G contain a subset V′ of k vertices such that no two vertices in V′ are adjacent to one another?

For example, the graph shown in the figure below has an independent set (shown with shaded nodes) of size 4. The independent set problem arises when there is some sort of selection problem, but there are mutual restrictions: pairs that cannot both be selected. (For example, you want to invite as many of your friends as possible to your party, but many pairs do not get along, represented by edges between them, and you do not want to invite two enemies.) Note that if a graph has an independent set of size k, then it has an independent set of all smaller sizes. So the corresponding optimization problem would be to find an independent set of the largest size in a graph. Often the vertices have weights, so we might talk about the problem of computing the independent set with the largest total weight. However, since we want to show that the problem is hard to solve, we will consider the simplest version of the problem.

Fig. 48: Independent Set.

Claim: IS is NP-complete.

The proof involves two parts. First, we need to show that IS ∈ NP. The certificate consists of the k vertices of V′. We simply verify that for each pair of vertices u, v ∈ V′, there is no edge between them. Clearly this can be done in polynomial time, by an inspection of the adjacency matrix.
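The IS ∈ NP verification step can be sketched as follows (a minimal illustration with my own naming): given the adjacency matrix and a certificate listing k vertices, check in polynomial time that no two of them are adjacent.

```python
from itertools import combinations

def verify_independent_set(adj, certificate, k):
    # The certificate must name k distinct vertices, no two adjacent.
    return (len(set(certificate)) == k and
            all(not adj[u][v] for u, v in combinations(certificate, 2)))
```

On the path 0-1-2, for instance, {0, 2} is accepted as an independent set of size 2, while {0, 1} is rejected.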

(Figure: a boolean formula F in 3-CNF is fed to the polynomial time computable function f, producing a graph and integer (G, k); this pair is fed to IS, yielding the yes/no answer.)

Fig. 49: Reduction of 3-SAT to IS.

Secondly, we need to establish that IS is NP-hard, which can be done by showing that some known NP-complete problem (3SAT) is polynomial-time reducible to IS, that is, 3SAT ≤P IS. Let F be a boolean formula in 3-CNF form (the boolean-and of clauses, each of which is the boolean-or of 3 literals). We wish to find a polynomial time computable function f that maps F into an input for the IS problem, a graph G and an integer k. That is, f(F) = (G, k), such that F is satisfiable if and only if G has an independent set of size k. This will mean that if we can solve the independent set problem for G and k in polynomial time, then we would be able to solve 3SAT in polynomial time. An important aspect of reductions is that we do not attempt to solve the satisfiability problem. (Remember: it is NP-complete, and there is not likely to be any polynomial time solution.) So the function f must operate without knowledge of whether F is satisfiable. The idea is to translate the similar elements of the satisfiability problem to corresponding elements of the independent set problem.

What is to be selected?
3SAT: Which variables are assigned to be true. Equivalently, which literals are assigned true.
IS: Which vertices are to be placed in V′.

Requirements:
3SAT: Each clause must contain at least one literal whose value is true.
IS: V′ must contain at least k vertices.

Restrictions:
3SAT: If xi is assigned true, then ¬xi must be false, and vice versa.

IS: If u is selected to be in V′, and v is a neighbor of u, then v cannot be in V′.

We want a function f which, given any 3-CNF boolean formula F, converts it into a pair (G, k) such that the above elements are translated properly. Our strategy will be to create one vertex for each literal that appears within each clause. (Thus, if there are m clauses in F, there will be 3m vertices in G.) The vertices are grouped into clause clusters, one for each clause. Selecting a true literal from some clause corresponds to selecting a vertex to add to V′. We set k to the number of clauses. This forces the independent set to pick one vertex from each clause cluster; thus, one literal from each clause is true. In order to keep the IS subroutine from selecting two literals from some clause (and hence none from some other), we will connect all the vertices in each clause cluster to each other. To keep the IS subroutine from selecting both a literal and its complement, we will put an edge between each literal and its complement. This enforces the condition that if a literal is put in the IS (set to true) then its complement literal cannot also be true. A formal description of the reduction is given below. The input is a boolean formula F in 3-CNF, and the output is a graph G and an integer k.

3SAT to IS Reduction:
    k ← number of clauses in F;
    for each clause C in F
        create a clause cluster of 3 vertices from the literals of C;
    for each clause cluster (x1, x2, x3)
        create an edge (xi, xj) between all pairs of vertices in the cluster;
    for each vertex xi
        create edges between xi and all its complement vertices ¬xi;
    return (G, k);
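The pseudocode translates directly into a program. The encoding below is my own choice (clauses as lists of signed integers: +i for xi, −i for ¬xi; vertices named by (clause, slot) pairs), and it clearly runs in polynomial time.

```python
def sat3_to_is(F):
    """Reduce a 3-CNF formula F (list of 3-literal clauses) to (V, E, k)."""
    k = len(F)
    vertices = [(c, i) for c in range(k) for i in range(3)]
    edges = []
    # Clause clusters: connect all pairs of vertices within a clause.
    for c in range(k):
        edges += [((c, i), (c, j)) for i in range(3) for j in range(i + 1, 3)]
    # Conflict edges: connect each literal to every occurrence of its complement.
    for c1 in range(k):
        for i1 in range(3):
            for c2 in range(c1 + 1, k):
                for i2 in range(3):
                    if F[c1][i1] == -F[c2][i2]:
                        edges.append(((c1, i1), (c2, i2)))
    return vertices, edges, k
```

A formula with m clauses yields 3m vertices and k = m, as described above.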

Given any reasonable encoding of F, it is an easy programming exercise to create G (say as an adjacency matrix) in polynomial time. We claim that F is satisfiable if and only if G has an independent set of size k.

Example: Suppose that we are given the 3-CNF formula:

(x1 ∨ x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ (¬x1 ∨ ¬x2 ∨ ¬x3) ∧ (x1 ∨ ¬x2 ∨ x3).

The reduction produces the graph shown in the following figure and sets k = 4.

(Figure panels: the literal vertices of the four clause clusters, showing the reduction on the left and its correctness for x1 = x2 = 1, x3 = 0 on the right.)

Fig. 50: 3SAT to IS Reduction for (x1 ∨ x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ (¬x1 ∨ ¬x2 ∨ ¬x3) ∧ (x1 ∨ ¬x2 ∨ x3).

In our example, the formula is satisfied by the assignment x1 = 1, x2 = 1, and x3 = 0. Note that the literal x1 satisfies the first and last clauses, x2 satisfies the second, and ¬x3 satisfies the third. Observe that by selecting the corresponding vertices from the clusters, we get an independent set of size k = 4.

Correctness Proof: We claim that F is satisfiable if and only if G has an independent set of size k. If F is satisfiable, then each of the k clauses of F must have at least one true literal; let V′ denote the corresponding vertices from each of the clause clusters (one from each cluster). Conversely, if G has an independent set V′ of size k, first observe that we must select a vertex from each clause cluster. (Note that the reduction did not assume that the formula was satisfiable; computing such things would require exponential time, by the best known algorithms.)

Lecture 19: Clique, Vertex Cover, and Dominating Set

Read: Chapt 34 (up through 34.5). The dominating set proof is not given in our text.

Recap: Last time we gave a reduction from 3SAT (satisfiability of boolean formulas in 3-CNF form) to IS (independent set in graphs). Recall that to show that a problem is NP-complete we need to show (1) that the problem is in NP (i.e. we can verify when an input is in the language), and (2) that the problem is NP-hard. Today we give a few more examples of reductions.

Some Easy Reductions: We consider some closely related NP-complete problems next.

Clique (CLIQUE): The clique problem is: given an undirected graph G = (V, E) and an integer k, does G have a k-vertex subset whose induced subgraph is complete? That is, we ask for a subset V′ of k vertices such that for each distinct u, v ∈ V′, {u, v} ∈ E; the subgraph induced by V′ is a complete graph.

Vertex Cover (VC): A vertex cover in an undirected graph G = (V, E) is a subset of vertices V′ ⊆ V such that every edge in G has at least one endpoint in V′. The vertex cover problem (VC) is: given an undirected graph G and an integer k, does G have a vertex cover of size k?

Dominating Set (DS): A dominating set in a graph G = (V, E) is a subset of vertices V′ such that every vertex in the graph is either in V′ or is adjacent to some vertex in V′. The dominating set problem (DS) is: given a graph G = (V, E) and an integer k, does G have a dominating set of size k?

Don't confuse the clique (CLIQUE) problem with the clique-cover (CC) problem that we discussed in an earlier lecture. The clique problem seeks to find a single clique of size k, while the clique-cover problem seeks to partition the vertices into k groups, each of which is a clique.

We have discussed the facts that cliques are of interest in applications dealing with clustering. The vertex cover problem arises in various servicing applications: for example, you have a computer network and a program that checks the integrity of the communication links. Dominating set is useful in facility location problems: for example, suppose we want to select where to place a set of fire stations such that every house in the city is within 2 minutes of the nearest fire station.
We must take one vertex from each clause cluster, because there are k clusters and we cannot take two vertices from the same cluster (because they are all interconnected). Consider the assignment in which we set all of these literals to 1. This assignment is logically consistent, because we cannot have two vertices labeled xi and ¬xi in the independent set (there is an edge between each literal and its complement). In the other direction, among the vertices chosen from the true literals of a satisfying assignment, there can be no edge of the form (xi, ¬xi), because we cannot set a variable and its complement to both be true; and because we take one vertex from each cluster, there are no cluster edges between them. Thus, V′ is an independent set of size k. This completes the NP-completeness proof.

Observe that our reduction did not attempt to solve the IS problem nor to solve 3SAT. Also observe that the reduction had no knowledge of the solution to either problem, nor did we assume we knew which variables to set to 1. Instead, the reduction simply translated the input from one problem into an equivalent input to the other problem, while preserving the critical elements of each problem. Finally, the transformation clearly runs in polynomial time.
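The direction "independent set ⇒ satisfying assignment" above is constructive. A sketch (helper name and encoding are mine, matching the signed-integer clause encoding used earlier): set every selected literal true.

```python
def assignment_from_is(F, ind_set):
    """F: list of 3-literal clauses (+i for xi, -i for NOT xi).
    ind_set: pairs (clause, slot), one vertex per clause cluster."""
    truth = {}
    for c, i in ind_set:
        lit = F[c][i]
        # Never conflicts: a literal and its complement are adjacent, so
        # they cannot both appear in an independent set.
        truth[abs(lit)] = (lit > 0)
    return truth
```

Each clause then contains at least one true literal, since one of its vertices was selected.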
The latter is done by showing that some known NP-complete problem can be reduced to this problem (that is, there is a polynomial time function that transforms an input for one problem into an equivalent input for the other problem). Returning to the vertex cover application: to save the space of installing the program on every computer in the network, it suffices to install it on all the computers forming a vertex cover. From these nodes all the links can be tested.

We create a graph in which two locations are adjacent if they are within 2 minutes of each other. A minimum sized dominating set will be a minimum set of locations such that every other location is reachable within 2 minutes from one of these sites.

Theorem: CLIQUE is NP-complete.

CLIQUE ∈ NP: The certificate consists of the k vertices in the clique. Given such a certificate, we can easily verify in polynomial time that all pairs of vertices in the set are adjacent.

The CLIQUE problem is obviously closely related to the independent set problem (IS): given a graph G, does it have a k-vertex subset that is completely disconnected? It is not quite as clear that the vertex cover problem is related; however, the following lemma makes this connection clear as well.

Lemma: Given an undirected graph G = (V, E) with n vertices and a subset V′ ⊆ V of size k, the following are equivalent:
(i) V′ is a clique of size k for the complement, Ḡ.
(ii) V′ is an independent set of size k for G.
(iii) V − V′ is a vertex cover of size n − k for G.

Proof:
(i) ⇒ (ii): If V′ is a clique for Ḡ, then for each u, v ∈ V′, {u, v} is an edge of Ḡ, implying that {u, v} is not an edge of G, implying that V′ is an independent set for G.
(ii) ⇒ (iii): If V′ is an independent set for G, then for each u, v ∈ V′, {u, v} is not an edge of G, implying that every edge in G is incident to a vertex in V − V′, implying that V − V′ is a VC for G.
(iii) ⇒ (i): If V − V′ is a vertex cover for G, then for any u, v ∈ V′ there is no edge {u, v} in G, implying that there is an edge {u, v} in Ḡ, implying that V′ is a clique in Ḡ.

Fig. 51: Clique, Independent set, and Vertex Cover. (V′ is a CLIQUE of size k in Ḡ iff V′ is an IS of size k in G iff V − V′ is a VC of size n − k in G.)

Thus, if we had an algorithm for solving any one of these problems, we could easily translate it into an algorithm for the others.
IS ≤P CLIQUE: We want to show that given an instance of the IS problem (G, k), we can produce an equivalent instance of the CLIQUE problem in polynomial time. The reduction function f inputs G and k, computes the complement Ḡ, and outputs the pair (Ḡ, k). By the above lemma, this instance is equivalent: Ḡ has a clique of size k if and only if G has an independent set of size k. Clearly this can be done in polynomial time.

Theorem: VC is NP-complete.

VC ∈ NP: The certificate consists of the k vertices in the vertex cover. Given such a certificate, we can easily verify in polynomial time that every edge is incident to one of these vertices.
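The lemma above can be checked exhaustively on a small graph. This is my own brute-force illustration (exponential in n, so only for tiny examples): for every subset S, the three properties coincide.

```python
from itertools import combinations

def is_clique(adj, S):
    return all(adj[u][v] for u, v in combinations(S, 2))

def is_ind_set(adj, S):
    return all(not adj[u][v] for u, v in combinations(S, 2))

def is_vertex_cover(adj, S):
    n = len(adj)
    return all(u in S or v in S
               for u in range(n) for v in range(u + 1, n) if adj[u][v])

def complement(adj):
    n = len(adj)
    return [[1 if i != j and not adj[i][j] else 0 for j in range(n)]
            for i in range(n)]
```

On the path graph 0-1-2-3, for instance, every subset S satisfies: S is a clique in the complement iff S is an independent set iff V − S is a vertex cover.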

This completes the proof, since IS is an NP-complete problem.

Note: In each of the above reductions, the reduction function did not know whether G has an independent set or not. It must run in polynomial time, so it does not have time to determine whether G has an independent set or which vertices are in the set.

Dominating Set: As with vertex cover, dominating set is an example of a graph covering problem. A dominating set for a graph G = (V, E) is a subset of vertices V′ such that every vertex is either in V′ or is adjacent to a vertex in V′. For example, suppose that we want to select locations for fire stations so that every location in a city is within a 2-minute drive of the nearest station. We create a graph in which two locations are adjacent if they are within 2 minutes of each other. A minimum sized dominating set will be a minimum set of locations such that every other location is reachable within 2 minutes from one of these sites.

The similarity to vertex cover suggests that if VC is NP-complete, then DS is likely to be NP-complete as well. The main result of this section is just this.

Theorem: DS is NP-complete.

As usual the proof has two parts. First we show that DS ∈ NP. The certificate just consists of the subset V′ claimed to be the dominating set. In polynomial time we can determine whether every vertex is in V′ or is adjacent to a vertex in V′.

Reducing Vertex Cover to Dominating Set: Next we show that an existing NP-complete problem is reducible to dominating set. We choose vertex cover and show that VC ≤P DS. We want a polynomial time function which, given an instance (G, k) of the vertex cover problem, produces an instance (G′, k′) of the dominating set problem, such that G has a vertex cover of size k if and only if G′ has a dominating set of size k′. Obviously, the reduction function does not know whether G has a vertex cover or which vertices are in it; it must run in polynomial time, so it does not have time to determine this itself.

How do we translate between these problems? The key difference is the condition. In VC: "every edge is incident to a vertex in V′". In DS: "every vertex is either in V′ or is adjacent to a vertex in V′". Because incidence is a property of edges, and adjacency is a property of vertices, the translation must somehow map the notion of "incident" to "adjacent". In particular, this suggests that the reduction function should map edges of G into vertices in G′, such that an incident edge in G is mapped to an adjacent vertex in G′.

This suggests the following idea (which does not quite work). We will insert a vertex into the middle of each edge of the graph. In particular, for each edge {u, v} in G, we will create a new special vertex, called wuv, and replace the edge {u, v} with the two edges {u, wuv} and {v, wuv}. We will leave the edge {u, v} in the graph as well. Let G′ be the resulting graph. The fact that u was incident to edge {u, v} has now been replaced with the fact that u is adjacent to the corresponding vertex wuv. Observe that if G is connected and has a vertex cover of size k, then it has a dominating set of size k (the same set of vertices), but the converse is not necessarily true.

This is still not quite correct, though. Define an isolated vertex to be one that is incident to no edges. If u is isolated it can only be dominated if it is included in the dominating set. Since it is not incident to any edges, it does not need to be in the vertex cover. Let VI denote the isolated vertices in G, and let I denote the number of isolated vertices. Note that all I isolated vertices of G must be in the dominating set. The number of vertices to request for the dominating set will be k′ = k + I.

Now we can give the complete reduction. The reduction function f inputs G and k. It creates a graph G′ as follows. Initially G′ = G. For each edge {u, v} in G it creates a new vertex wuv in G′ and adds the edges {u, wuv} and {v, wuv} to G′. It computes the number of isolated vertices I, sets k′ = k + I, and outputs (G′, k′). Note that every step can be performed in polynomial time. This reduction is illustrated in the following figure.

Fig. 52: Dominating set reduction (f maps G with k = 3 to G′ with k′ = 3 + 1 = 4).

Correctness of the Reduction: To establish the correctness of the reduction, we need to show that G has a vertex cover of size k if and only if G′ has a dominating set of size k′.

First we argue that if V′ is a vertex cover of size k for G, then V′ ∪ VI is a dominating set for G′. To see this, first observe that all the isolated vertices are in V′ ∪ VI, and so they are dominated. Second, each of the nonisolated original vertices v is incident to at least one edge {u, v} in G; since V′ is a vertex cover, either v ∈ V′ or u ∈ V′, and in either case v is either in the set or adjacent to a vertex in the set. Finally, each of the special vertices wuv in G′ corresponds to an edge {u, v} in G, implying that either u or v is in the vertex cover V′; thus wuv is dominated. Note that |V′ ∪ VI| might be of size less than k + I. If so, we can add any vertices we like to make the size equal to k′. This is shown in the top part of the following figure.

Conversely, we claim that if G′ has a dominating set V′′ of size k′ = k + I, then G has a vertex cover V′ of size k. Note that all I isolated vertices must be in V′′, since an isolated vertex can only be dominated by itself. Let V′ = V′′ − VI be the remaining k vertices.

First, we claim that we never need to use any of the newly created special vertices in V′. In particular, if some vertex wuv ∈ V′, then modify V′ by replacing wuv with u. (We could have just as easily replaced it with v.) Observe that the vertex wuv is adjacent to only u and v, so it dominates itself and these other two vertices. By using u instead, we still dominate u, v (we still need to dominate the neighbor v, and u does so through the edge {u, v}), and wuv (because u has edges going to v and wuv). Thus by replacing wuv with u we dominate the same vertices (and potentially more). Let V′ denote the resulting set after this modification. (This is shown in the lower middle part of the figure.)

Second, we claim that V′ is a vertex cover for G. If, to the contrary, there were an edge {u, v} of G that was not covered (neither u nor v was in V′), then the special vertex wuv would not be adjacent to any vertex of V′′ in G′, contradicting the hypothesis that V′′ was a dominating set for G′.

Fig. 53: Correctness of the VC to DS reduction (where k = 3 and I = 1): a vertex cover for G, the corresponding dominating set for G′, and a dominating set for G′ modified to use only original vertices, yielding a vertex cover for G.
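The reduction and its correctness claim can be sanity-checked by brute force on a small graph. The following sketch (function names are mine, not from the notes) builds G′ and k′ exactly as described — one special vertex per edge, k′ = k + I — and confirms that G has a vertex cover of size k iff G′ has a dominating set of size k′.

```python
from itertools import combinations

def reduce_vc_to_ds(V, E, k):
    """VC -> DS reduction: add a special vertex w_uv inside each edge,
    keep the original edge, and set k' = k + (number of isolated vertices)."""
    V2, E2 = set(V), set(E)
    for (u, v) in E:
        w = ('w', u, v)              # the new special vertex for edge {u,v}
        V2.add(w)
        E2.add((u, w))
        E2.add((v, w))
    isolated = sum(1 for u in V if not any(u in e for e in E))
    return V2, E2, k + isolated

def has_vertex_cover(V, E, k):
    # Brute force: some k-subset touches every edge.
    return any(all(u in C or v in C for (u, v) in E)
               for C in combinations(V, k))

def has_dominating_set(V, E, k):
    # Brute force: some k-subset's closed neighborhoods cover all of V.
    adj = {u: {u} for u in V}
    for (u, v) in E:
        adj[u].add(v)
        adj[v].add(u)
    return any(set().union(*(adj[u] for u in D)) >= set(V)
               for D in combinations(V, k))

V = [1, 2, 3, 4, 5]                  # vertex 5 is isolated, so I = 1
E = [(1, 2), (2, 3), (3, 4)]
for k in range(4):
    V2, E2, k2 = reduce_vc_to_ds(V, E, k)
    assert has_vertex_cover(V, E, k) == has_dominating_set(V2, E2, k2)
```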

Lecture 20: Subset Sum

Read: Section 34.5.5 in CLRS.

Subset Sum: The Subset Sum problem (SS) is the following. Given a finite set S of positive integers S = {w1, w2, ..., wn} and a target value t, we want to know whether there exists a subset S′ ⊆ S that sums exactly to t.

Consider the following example: S = {3, 6, 9, 12, 15, 23, 32} and t = 33. The subset S′ = {6, 12, 15} sums to t = 33, so the answer in this case is yes. If t = 34 the answer would be no.

This problem is a simplified version of the 0-1 Knapsack problem, presented as a decision problem. Recall that in the 0-1 Knapsack problem, we are given a collection of objects, each with an associated weight wi and associated value vi. We are given a knapsack of capacity W. The objective is to take as many objects as can fit in the knapsack's capacity so as to maximize the value. (In the fractional knapsack we could take a portion of an object; in the 0-1 Knapsack we either take an object entirely or leave it.) In the simplest version, suppose that the value is the same as the weight, vi = wi. (This would occur for example if all the objects were made of the same material, say, gold.) Then the best we could hope to achieve would be to fill the knapsack entirely. By setting t = W, we see that the subset sum problem is equivalent to this simplified version of the 0-1 Knapsack problem. It follows that if we can show that this simpler version is NP-complete, then certainly the more general 0-1 Knapsack problem (stated as a decision problem) is also NP-complete.

Dynamic Programming Solution: There is a dynamic programming algorithm which solves the Subset Sum problem in O(n · t) time. Define S[i, t′] = 1 if there is a subset of {w1, w2, ..., wi} that sums to t′, and 0 otherwise, for 0 ≤ i ≤ n and 0 ≤ t′ ≤ t. The i-th row of this table can be computed in O(t) time, given the contents of the (i − 1)-st row. (We will leave this as an exercise.)

This would seem to imply that the Subset Sum problem is in P. But there is an important catch. Recall that in all NP-complete problems we assume (1) running time is measured as a function of input size (number of bits) and (2) inputs must be encoded in a reasonably succinct manner, using the fewest number of bits possible. Let us assume that the numbers wi and t are all b-bit numbers represented in base 2. Then the input size is O(nb). The value of t may be as large as 2^b, so the resulting algorithm has a running time of O(n2^b). This is polynomial in n, but exponential in b. Thus, although the quantity n · t is a polynomial function of n and t, this running time is not polynomial as a function of the input size. Note that an important consequence of this observation is that the SS problem is not hard when the numbers involved are small. If the numbers involved are of a fixed number of bits (a constant independent of n), then the problem is solvable in polynomial time. However, we will show that in the general case this problem is NP-complete.

SS is NP-complete: The proof that Subset Sum (SS) is NP-complete involves the usual two elements: (i) SS ∈ NP, and (ii) some known NP-complete problem is reducible to SS. In particular, we will show that Vertex Cover (VC) is reducible to SS, that is, VC ≤P SS. Note that an important consequence of this reduction is that, if subset sum were solvable in polynomial time, so would vertex cover.

To show that SS is in NP, we need to give a verification procedure. Given S and t, the certificate is just the indices of the numbers that form the subset S′. We can add two b-bit numbers together in O(b) time, so in polynomial time we can compute the sum of elements in S′ and verify that this sum equals t.

For the remainder of the proof we show how to reduce vertex cover to subset sum. We want a polynomial time computable function f that maps an instance of the vertex cover problem (a graph G and integer k) to an instance of the subset sum problem (a set of integers S and target integer t), such that G has a vertex cover of size k if and only if S has a subset summing to t.
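The row-by-row table computation left as an exercise can be sketched as follows. This one-dimensional variant (a standard space-saving of the same O(n · t) dynamic program; names are mine) keeps only the current row S[i, ·] and updates it in place.

```python
def subset_sum(weights, t):
    """O(n*t) dynamic program: reachable[s] is True iff some subset of
    the weights processed so far sums exactly to s."""
    reachable = [False] * (t + 1)
    reachable[0] = True              # the empty subset sums to 0
    for w in weights:
        # Scan downward so each weight is used at most once.
        for s in range(t, w - 1, -1):
            if reachable[s - w]:
                reachable[s] = True
    return reachable[t]

print(subset_sum([3, 6, 9, 12, 15, 23, 32], 33))   # True: 6 + 12 + 15
print(subset_sum([3, 6, 9, 12, 15, 23, 32], 34))   # False
```

The running time is pseudo-polynomial: linear in the numeric value of t, hence exponential in the number of bits of t, exactly as discussed above.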

How can we encode the notion of selecting a subset of vertices that covers all the edges into that of selecting a subset of numbers that sums to t? In the vertex cover problem we are selecting vertices, and in the subset sum problem we are selecting numbers, so it seems logical that the reduction should map vertices into numbers. The constraint that these vertices should cover all the edges must be mapped to the constraint that the sum of the numbers should equal the target value.

An Initial Approach: Here is an idea, which does not work, but gives a sense of how to proceed. Let E denote the number of edges in the graph. First number the edges of the graph from 1 through E. Then represent each vertex vi as an E-element bit vector, where the j-th bit from the left is set to 1 if and only if the edge ej is incident to vertex vi. (Another way to think of this is that these bit vectors form the rows of an incidence matrix for the graph.) An example is shown below.

Fig. 54: Encoding a graph as a collection of bit vectors.

Now, suppose we take any subset of vertices and form the logical-or of the corresponding bit vectors. If the subset is a vertex cover, then every edge will be covered by at least one of these vertices, and so the logical-or will be a bit vector of all 1's, 1111...1. Conversely, if the logical-or is a bit vector of 1's, then each edge has been covered by some vertex, implying that the vertices form a vertex cover. (Later we will consider how to encode the fact that only k vertices are allowed in the cover.)

Fig. 55: The logical-or of a vertex cover equals 1111...1.

Since bit vectors can be thought of as just a way of representing numbers in binary, this is starting to feel more like the subset sum problem. The target would be the number whose bit vector is all 1's. There are a number of problems, however. First, logical-or is not the same as addition. For example, 1101 ∨ 0011 = 1111, but in binary 1101 + 0011 = 10000. In particular, if both of the endpoints of some edge are in the vertex cover, then its value in the corresponding column would be 2, not 1. Second, we have no way of controlling how many vertices go into the vertex cover. (We could just take the logical-or of all the vertices, and then the logical-or would certainly be a bit vector of 1's. But this will not necessarily work, because the cover is only allowed k vertices.)

There are two ways in which addition differs significantly from logical-or. The first is the issue of carries. To fix this, we recognize that we do not have to use a binary (base-2) representation. In fact, we can assume any base system we want. Observe that each column of the incidence matrix has at most two 1's, because each edge is incident to at most two vertices. Thus, if we use any base that is at least as large as base 3, we will never generate a carry to the next position. In fact we will use base 4 (for reasons to be seen below). Note that the base of the number system is just for our own convenience of notation. Once the numbers have been formed, they will be converted into whatever form our machine assumes for its input representation, e.g. decimal or binary.

The second difference between logical-or and addition is that an edge may generally be covered either once or twice in the vertex cover. Thus, the final sum of these numbers will be a number consisting of 1 and 2 digits, e.g. 1211...2. This does not provide us with a unique target value t. We know that no digit of our sum can be a zero, since every edge must be covered. To fix this, we will create a set of E additional slack values, y1, y2, ..., yE. For 1 ≤ i ≤ E, the i-th slack value will consist of all 0's, except for a single 1-digit in the i-th position, e.g. 00000100000. For each edge that is covered only once, we can supplement this value by adding in the corresponding slack value. Thus we can boost any value consisting of 1's and 2's to all 2's, so our target will be the number 2222...2 (all 2's). Note that if any edge is left uncovered entirely, the corresponding digit position will be 0 in the sum of vertex values, and we will not have enough slack values to convert this into a 2, no matter what slack values we add.

There is one last issue. We are only allowed to place k vertices in the vertex cover. We will handle this by adding an additional column. For each number arising from a vertex, we will put a 1 in this additional column. For each slack variable we will put a 0. In the target, we will require that this column sum to the value k, the size of the vertex cover. Thus, to form the target value t, we must select exactly k of the vertex values. Note that since we only have a base-4 representation, there might be carries out of this last column (if k ≥ 4), but since this is the leftmost column, it will not affect any of the other aspects of the construction.

The Final Reduction: Here is the final reduction, given the graph G = (V, E) and integer k for the vertex cover problem.

(1) Create a set of n vertex values, x1, x2, ..., xn using base-4 notation. The value xi is equal to a 1 followed by a sequence of E base-4 digits. The j-th digit is a 1 if edge ej is incident to vertex vi and 0 otherwise.

(2) Create E slack values y1, y2, ..., yE, where yi is a 0 followed by E base-4 digits. The i-th digit of yi is 1 and all others are 0.

(3) Let t be the base-4 number whose first digit is k (this may actually span multiple base-4 digits), and whose remaining E digits are all 2.

(4) Convert the xi's, the yj's, and t into whatever base notation is used for the subset sum problem (e.g. base 10). Output the set S = {x1, ..., xn, y1, ..., yE} and t.

Observe that this can be done in polynomial time, in O(E²), in fact. The construction is illustrated in Fig. 56.

Correctness: We claim that G has a vertex cover of size k if and only if S has a subset that sums to t.

If G has a vertex cover V′ of size k, then we take the vertex values xi corresponding to the vertices of V′, and for each edge that is covered only once in V′, we take the corresponding slack value. It follows from the comments made earlier that the lower-order E digits of the resulting sum will be of the form 222...2, and because there are k elements in V′, the leftmost digit of the sum will be k. Thus, the resulting subset sums to t.

Conversely, if S has a subset S′ that sums to t, then we assert that it must select exactly k values from among the vertex values, since the first digit must sum to k (the slack values are all 0 in this column, and because there are no carries, nothing else can contribute to it). We claim that these k vertices V′ form a vertex cover. To see why, suppose some edge were left uncovered by V′. Then (because there are no carries) the corresponding column would be 0 in the sum of vertex values, and since there is only one slack value per column, the final value in this digit position could be at most 1, not 2, and so this cannot be a solution to the subset sum problem. Thus no edge can be left uncovered by V′, and V′ is a vertex cover of size k.
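The four steps can be sketched and brute-force checked on a small graph (a 4-cycle here; all names are mine). The helper builds the base-4 vertex values, slack values, and target exactly as in steps (1)–(3), and the assertions confirm the if-and-only-if for each k.

```python
from itertools import combinations

def vc_to_subset_sum(V, E, k):
    """Sketch of the reduction: base-4 numbers with one digit per edge,
    plus a leading digit that counts the chosen vertices."""
    m = len(E)
    def num(lead, digits):
        val = lead
        for d in digits:
            val = val * 4 + d        # append one base-4 digit
        return val
    xs = [num(1, [1 if v in e else 0 for e in E]) for v in V]          # vertex values
    ys = [num(0, [1 if j == i else 0 for j in range(m)]) for i in range(m)]  # slack
    t = num(k, [2] * m)                                                # target
    return xs + ys, t

def has_subset_summing_to(S, t):
    return any(sum(c) == t for r in range(len(S) + 1)
               for c in combinations(S, r))

def has_vertex_cover(V, E, k):
    return any(all(u in C or v in C for (u, v) in E)
               for C in combinations(V, k))

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1)]
for k in range(1, 4):
    S, t = vc_to_subset_sum(V, E, k)
    assert has_vertex_cover(V, E, k) == has_subset_summing_to(S, t)
```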

Fig. 56: Vertex cover to subset sum reduction. (The table lists the vertex values x1, ..., x7 and the slack values y1, ..., y8 as base-4 digit strings, one column per edge e1, ..., e8, plus a leading column recording the vertex cover size k = 3; the bottom row is the target t.)

Fig. 57: Correctness of the reduction. (To sum to t, take the vertex values for the vertices in the cover, and one slack value for each edge that has only one endpoint in the cover.)

It is worth noting again that in this reduction, we needed to have large numbers. For example, the target value t is at least as large as 4^E ≥ 4^n (where n is the number of vertices in G). In our dynamic programming solution W = t, so the DP algorithm would run in Ω(n4^n) time, which is not polynomial time.

Lecture 21: Approximation Algorithms: VC and TSP

Read: Chapt 35 (up through 35.2) in CLRS.

Coping with NP-completeness: With NP-completeness we have seen that there are many important optimization problems that are likely to be quite hard to solve exactly. Since these are important problems, we cannot simply give up at this point, since people do need solutions to these problems. How do we cope with NP-completeness?

Use brute-force search: Even on the fastest parallel computers this approach is viable only for the smallest instances of these problems.

Heuristics: A heuristic is a strategy for producing a valid solution, but there are no guarantees how close it is to optimal. This is worthwhile if all else fails, or if lack of optimality is not really an issue.

General Search Methods: There are a number of very powerful techniques for solving general combinatorial optimization problems that have been developed in the areas of AI and operations research. These go under names such as branch-and-bound, A*-search, simulated annealing, and genetic algorithms. The performance of these approaches varies considerably from problem to problem and instance to instance. But in some cases they can perform quite well.

Approximation Algorithms: This is an algorithm that runs in polynomial time (ideally) and produces a solution that is within a guaranteed factor of the optimum solution.

Performance Bounds: Most NP-complete problems have been stated as decision problems for theoretical reasons. However, underlying most of these problems is a natural optimization problem. For example, the TSP optimization problem is to find the simple cycle of minimum cost in a digraph, the VC optimization problem is to find the vertex cover of minimum size, and the clique optimization problem is to find the clique of maximum size. An approximation algorithm is one that returns a legitimate answer, but not necessarily one of optimum size (e.g., a vertex cover, but not necessarily one of the smallest size).

How do we measure how good an approximation algorithm is? We define the ratio bound of an approximation algorithm as follows. Given an instance I of our problem, let C(I) be the cost of the solution produced by our approximation algorithm, and let C*(I) be the optimal solution. We will assume that costs are strictly positive values. Note that sometimes we are minimizing and sometimes we are maximizing: for a minimization problem we want C(I)/C*(I) to be small, and for a maximization problem we want C*(I)/C(I) to be small. For any input size n, we say that the approximation algorithm achieves ratio bound ρ(n) if, for all I with |I| = n, we have

    max( C(I)/C*(I), C*(I)/C(I) ) ≤ ρ(n).

Observe that ρ(n) is always greater than or equal to 1, and it is equal to 1 if and only if the approximate solution is the true optimum solution.

Although NP-complete problems are equivalent with respect to whether they can be solved exactly in polynomial time in the worst case, their approximability varies considerably. Some NP-complete problems can be approximated arbitrarily closely. Such an algorithm is given both the input and a real value ε > 0, and returns an answer whose ratio bound is at most (1 + ε). Such an algorithm is called a polynomial time approximation scheme (or PTAS for short). The running time is a function of both n and ε. As ε approaches 0, the running time increases beyond polynomial time. For example, the running time might be O(n^⌈1/ε⌉). If the running time depends only on a polynomial function of 1/ε then it is called a fully polynomial-time approximation scheme; for example, a running time like O((1/ε)² n³) would be such an example, whereas O(n^(1/ε)) and O(2^(1/ε) n) are not.

• For some NP-complete problems, it is very unlikely that any approximation algorithm exists. For example, if the general graph TSP problem had an approximation algorithm with a ratio bound of any value less than ∞, then P = NP.

• Many NP-complete problems can be approximated, but the ratio bound is a (slow-growing) function of n. For example, the set cover problem (a generalization of the vertex cover problem) can be approximated to within a factor of ln n. We will not discuss this algorithm, but it is covered in CLRS.

• Some NP-complete problems can be approximated to within a fixed constant factor. We will discuss two examples below.

• Some NP-complete problems have PTAS's. One example is the subset sum problem (which we haven't discussed, but is described in CLRS); another is the Euclidean TSP problem.

In fact, there are collections of problems which are "believed" to be hard to approximate and are equivalent in the sense that if any one of them can be approximated in polynomial time then they all can be, much like NP-complete problems. This class is called Max-SNP complete. We will not discuss this further. Suffice it to say that the topic of approximation algorithms would fill another course.

Vertex Cover: We begin by showing that there is an approximation algorithm for vertex cover with a ratio bound of 2, that is, a heuristic that guarantees an approximation within a factor of 2 for the vertex cover problem. Recall that a vertex cover is a subset of vertices such that every edge in the graph is incident to at least one of these vertices. The vertex cover optimization problem is to find a vertex cover of minimum size.

How does one go about finding an approximation algorithm? The first approach is to try something that seems like a "reasonably" good strategy, a heuristic. It turns out that many simple heuristics, when not optimal, can often be proved to be close to optimal.

Here is a very simple algorithm. It is based on the following observation. Consider an arbitrary edge (u, v) in the graph. One of its two vertices must be in the cover, but we do not know which one. The idea of this 2-for-1 heuristic is to simply put both vertices into the vertex cover. (You cannot get much stupider than this!) Then we remove all edges that are incident to u and v (since they are now all covered), and recurse on the remaining edges. For every one vertex that must be in the cover, we put two into our cover, so it is easy to see that the cover we generate is at most twice the size of the optimum cover. The approximation is illustrated in the figure below.

Fig. 58: The 2-for-1 heuristic for vertex cover (the graph G with an optimum VC, and the cover found by the heuristic).

Here is a more formal proof of its approximation bound.

Claim: ApproxVC yields a factor-2 approximation for Vertex Cover.

Proof: Consider the set C output by ApproxVC, and let C* be the optimum VC. Let A be the set of edges selected by the line marked with "(*)" in the figure. Observe that the size of C is exactly 2|A|, because we add two vertices for each such edge. However, note that in the optimum VC, one of these two vertices must have been added to the VC; since no two edges of A share an endpoint, the size of C* is at least |A|. Thus we have:

    |C| = 2|A| ≤ 2|C*|  ⇒  |C|/|C*| ≤ 2.

2-for-1 Approximation for VC
ApproxVC {
    C = empty-set
    while (E is nonempty) do {
(*)     let (u,v) be any edge of E
        add both u and v to C
        remove from E all edges incident to either u or v
    }
    return C
}

This proof illustrates one of the main features of the analysis of any approximation algorithm: namely, that we need some way of finding a bound on the optimal solution. (For minimization problems we want a lower bound, for maximization problems an upper bound.) The bound should be related to something that we can compute in polynomial time. In this case, the bound is related to the set of edges A, which form a maximal independent set of edges.

The Greedy Heuristic: It seems that there is a very simple way to improve the 2-for-1 heuristic. Instead of taking both endpoints of an arbitrary edge, why not concentrate on vertices of high degree? Select the vertex with the maximum degree, since a vertex of high degree covers the maximum number of edges. Put this vertex in the cover. Then delete all the edges that are incident to this vertex (since they have been covered). Repeat the algorithm on the remaining graph, until no more edges remain. This is a greedy strategy. We saw in the minimum spanning tree and shortest path problems that greedy strategies were optimal. Here is the greedy heuristic.

Greedy Approximation for VC
GreedyVC(G=(V,E)) {
    C = empty-set
    while (E is nonempty) do {
        let u be the vertex of maximum degree in G
        add u to C
        remove from E all edges incident to u
    }
    return C
}

This algorithm is illustrated in the figure below.

Fig. 59: The greedy heuristic for vertex cover (the graph G with an optimum VC, and the cover found by the greedy heuristic).

It is interesting to note that on the example shown in the figure, the greedy heuristic actually succeeds in finding the optimum vertex cover. Can we prove that the greedy heuristic always outperforms the stupid 2-for-1 heuristic? The surprising answer is an emphatic "no". In fact, it can be shown that the greedy heuristic does not even have a constant performance bound; it can perform arbitrarily poorly compared to the optimal algorithm. It can be shown that its ratio bound grows as Θ(log n), where n is the number of vertices. (We leave this as a moderately difficult exercise.) However, it should also be pointed out that the vertex cover constructed by the greedy heuristic is (for typical graphs) smaller than the one computed by the 2-for-1 heuristic,
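Both heuristics are only a few lines each; the following sketch (names mine) runs them on a small star-plus-edge graph. Note that the 2-for-1 result depends on which edge happens to be picked first, but it is always a valid cover of at most twice the optimum size; the greedy pass is deterministic here.

```python
def approx_vc(V, edges):
    """2-for-1 heuristic: pick any remaining edge, take both endpoints."""
    E, C = set(edges), set()
    while E:
        u, v = next(iter(E))                 # (*) any edge of E
        C |= {u, v}
        E = {e for e in E if u not in e and v not in e}
    return C

def greedy_vc(V, edges):
    """Greedy heuristic: repeatedly take a vertex of maximum degree."""
    E, C = set(edges), set()
    while E:
        u = max(V, key=lambda x: sum(x in e for e in E))
        C.add(u)
        E = {e for e in E if u not in e}
    return C

def is_cover(edges, C):
    return all(u in C or v in C for (u, v) in edges)

V = [1, 2, 3, 4, 5]
E = [(1, 2), (1, 3), (1, 4), (1, 5), (2, 3)]   # optimum cover: {1, 2}
assert is_cover(E, greedy_vc(V, E))
assert is_cover(E, approx_vc(V, E))
```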

so it would probably be wise to run both algorithms and take the better of the two. Traveling Salesman Problem: In the Traveling Salesperson Problem (TSP) we are given a complete undirected graph with nonnegative edge weights. Here is how the algorithm works. is never longer than an indirect path. w). the cost of the minimum TSP tour is at least as large as the cost of the MST.this as a moderately difﬁcult exercise. (In fact. and Lecture Notes 80 CMSC 451 . v) denote the weight on edge (u. but we will not discuss it. v) + c(v. The ﬁgure below shows an example of this. start start MST Twice−around tour Shortcut tour Optimum tour Fig. For many of the applications of TSP. For example. Proof: Let H denote the tour produced by this algorithm and let H ∗ be the optimum tour. Let T be the minimum spanning tree. 60: TSP Approximation. v) to be the shortest path length between u and v (computed. Therefore. Intuitively. for example. but we can make it simple by short-cutting. once in each direction. Given any free tree there is a tour of the tree called a twice around tour that traverses the edges of the tree twice. although this algorithm does not produce an optimal tour. then it will satisfy the triangle inequality. the problem satisﬁes something called the triangle inequality. that is.5. This path is not simple because it revisits vertices. since we can remove any edge of H ∗ resulting in a spanning tree. If we can ﬁnd some way to convert the MST into a TSP tour while increasing its cost by at most a constant factor. We can compute MST’s efﬁciently.) However. Last time we mentioned that TSP (posed as a decision problem) is NP-complete. say by the Floyd-Warshall algorithm). it should also be pointed out that the vertex cover constructed by the greedy heuristic is (for typical graphs) smaller than that one computed by the 2-for-1 heuristic. v) is deﬁned to be the Euclidean distance between these points. More formally. 
Approximating TSP with the Triangle Inequality: Recall that in the traveling salesman problem (TSP) we are given a complete undirected graph with nonnegative edge weights, and we want to find a cycle that visits all vertices and is of minimum cost. Let c(u, v) denote the weight of the edge (u, v), and given a set of edges A (such as a tour), define c(A) to be the sum of the edge weights in A. We say that the cost function satisfies the triangle inequality if, for all u, v, w ∈ V,

    c(u, w) ≤ c(u, v) + c(v, w).

Intuitively, this says that the direct path from u to w is never longer than a path that makes an intermediate stop at v. There are many examples of cost functions that satisfy the triangle inequality. If we are given any weighted graph and define c(u, v) to be the length of the shortest path from u to v, then the triangle inequality is satisfied. Another example arises if we are given a set of points in the plane and define a complete graph on these points, where c(u, v) is the Euclidean distance between u and v.

We shall see that if the edge weights satisfy the triangle inequality, then there is an approximation algorithm for TSP with a ratio bound of 2; that is, the tour produced cannot be worse than twice the cost of the optimal tour. (In fact, there is a slightly more complex version of this algorithm, due to Christofides, that achieves a ratio bound of 3/2.)

The key insight is to observe that a TSP tour with one edge removed is a spanning tree, although not necessarily a minimum spanning tree. Thus, if H* denotes the optimal tour and T a minimum spanning tree, computed using, say, either Kruskal's or Prim's algorithm, then

    c(T) ≤ c(H*).

Now consider the "twice-around tour" of T, which traverses every edge of T twice, once in each direction; since every edge of T is hit twice, its cost is exactly 2c(T). This is not a legal tour, since it visits vertices more than once, so as we walk around we take short-cuts, skipping over previously visited vertices. (In fact, any subsequence of the twice-around tour that visits each vertex exactly once will suffice.) The triangle inequality assures us that the path length does not increase when we take short-cuts. Notice that the final order in which vertices are visited using the short-cuts is exactly the same as a preorder traversal of the MST, which is what the following algorithm computes.

TSP Approximation
ApproxTSP(G=(V,E)) {
    T = minimum spanning tree for G
    r = any vertex
    L = list of vertices visited by a preorder walk of T starting with r
    return L
}

Claim: ApproxTSP has a ratio bound of 2.

Letting H denote the tour produced by the algorithm, the short-cutting argument gives c(H) ≤ 2c(T), and since T is the minimum cost spanning tree we have c(T) ≤ c(H*). Combining these,

    c(H) ≤ 2c(T) ≤ 2c(H*),   that is,   c(H)/c(H*) ≤ 2.

Lecture 22: The k-Center Approximation

Read: Today's material is not covered in CLR.

Facility Location: Imagine that Blockbuster Video wants to open 50 stores in some city, and the company asks you to determine the best locations for these stores. The condition is that you are to minimize the maximum distance that any resident of the city must drive in order to arrive at the nearest store. If we model the road network of the city as an undirected graph whose edge weights are the distances between intersections, then this is an instance of the k-center problem: we are given an undirected graph G = (V, E) with nonnegative edge weights and an integer k, and we wish to compute a subset C ⊆ V of k vertices, called centers, such that the maximum distance between any vertex in V and its nearest center in C is minimized. (The optimization problem seeks to minimize this maximum distance; the decision problem asks whether there exists a set of k centers that are all within a given distance.)

More formally, let G = (V, E) denote the graph, and let w(u, v) denote the weight of the edge (u, v). (We have w(u, v) = w(v, u) because G is undirected.) We assume that all edge weights are nonnegative. For each pair of vertices u, v ∈ V, let d(u, v) = d(v, u) denote the distance between u and v, that is, the length of the shortest path from u to v. (Note that the shortest path distance satisfies the triangle inequality; this will be used in our proof.)

Consider a subset C ⊆ V of vertices, the centers. Each vertex v ∈ V is associated with its nearest center in C. (This is the nearest Blockbuster store to your house.) Let us assume for simplicity that there are no ties for the distance to the closest center (or that any such ties are broken arbitrarily). For each center ci ∈ C we define its neighborhood V(ci) to be the subset of vertices for which ci is the closest center. (These are the houses that are closest to this center.) More formally,

    V(ci) = {v ∈ V | d(v, ci) ≤ d(v, cj), for all j ≠ i}.

Then V(c1), V(c2), ..., V(ck) forms a partition of the vertex set of G.
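To make the ApproxTSP heuristic concrete, here is a short Python sketch of our own (not part of the original notes; the input format, a symmetric cost matrix, and all names are assumptions). It builds an MST with Prim's algorithm and returns the preorder walk of the tree as the tour.

```python
def approx_tsp(cost):
    """Twice-around-the-MST heuristic (ratio bound 2 under the triangle
    inequality).  `cost` is a symmetric n x n matrix of edge weights."""
    n = len(cost)
    # Prim's algorithm: grow an MST from vertex 0, recording each vertex's
    # parent so we can walk the tree afterwards.
    in_tree = [False] * n
    parent = [0] * n
    best = [float("inf")] * n
    best[0] = 0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and cost[u][v] < best[v]:
                best[v], parent[v] = cost[u][v], u
    # Preorder walk of the MST = twice-around tour with short-cuts.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

def tour_cost(cost, tour):
    """Total cost of the cyclic tour."""
    return sum(cost[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))
```

Under the triangle inequality, the cost of the returned tour is guaranteed to be at most twice the optimal tour cost.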

The bottleneck distance associated with each center ci is the distance to the farthest vertex in its neighborhood, that is,

    D(ci) = max_{v ∈ V(ci)} d(v, ci).

This distance is critical because it represents the customer that must travel farthest to get to the nearest facility. Given this notation, we define the overall bottleneck distance to be

    D(C) = max_{ci ∈ C} D(ci),

the maximum distance of any vertex from its nearest center. We can now formally define the problem. (See Fig. 61.)

k-center problem: Given a weighted undirected graph G = (V, E) and an integer k ≤ |V|, find a subset C ⊆ V of size k such that D(C) is minimized.

(Fig. 61: The k-center problem, showing an input graph with k = 3, the optimum centers ci, and their neighborhood sets V(ci); the optimum cost is 7.)

The decision-problem formulation of the k-center problem is NP-complete (by a reduction from dominating set). A brute-force solution would enumerate all k-element subsets of V and compute D(C) for each one; but letting n = |V|, the number of such subsets is (n choose k) = Θ(n^k), and if k is a function of n (which is reasonable), this is an exponential number of subsets. Given that the problem is NP-complete, it is highly unlikely that a significantly more efficient exact algorithm exists in the worst case. We will show, however, that there does exist an efficient approximation algorithm for the problem.

Greedy Approximation Algorithm: Our approximation algorithm is based on a simple greedy algorithm that produces a bottleneck distance D(C) that is not more than twice the optimum bottleneck distance.

We begin by letting the first center c1 be any vertex in the graph (the lower left vertex, say, in Fig. 62(a)). Compute the distances between this vertex and all the other vertices in the graph (Fig. 62(b)), and consider the vertex that is farthest from this center (the upper right vertex, at distance 23 in the figure). This is the bottleneck vertex for {c1}. We would like to select the next center so as to reduce this distance, so let us just make the bottleneck vertex the next center, c2. Then we again compute the distance from each vertex in the graph to the closer of c1 and c2 (Fig. 62(c), where dashed lines indicate which vertices are closer to which center). Again we consider the bottleneck vertex for the current centers {c1, c2} and place the next center there (Fig. 62(d)). We repeat this until all k centers have been selected. In Fig. 62(d) the final three greedy centers are shaded, and the final bottleneck distance is 11.

Although the greedy approach has a certain intuitive appeal (it attempts to find the vertex that realizes the bottleneck distance, and then puts a center right on this vertex), it is not optimal. In the example shown in the figure, the optimum solution has a bottleneck cost of 9, which beats the 11 that the greedy algorithm gave.

(Fig. 62: Greedy approximation to k-center; panels (a) through (d) show the successive distance computations, with final greedy cost 11.)

One way to perform the distance computation in each round would be to invoke Dijkstra's single-source algorithm once from each center chosen so far; we will see an easier way below. Here is a summary of the algorithm, where for each vertex u, d[u] denotes the distance to the nearest center chosen so far.

Greedy Approximation for k-center
KCenterApprox(G, k) {
    C = empty_set
    for each u in V do                  // initialize distances
        d[u] = INFINITY
    for i = 1 to k do {                 // main loop
        Find the vertex u such that d[u] is maximum
        Add u to C                      // u is the current bottleneck vertex
        // update distances
        Compute the distance from each vertex v to
        its closest vertex in C, denoted d[v]
    }
    return C                            // final centers
}
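The KCenterApprox pseudocode above can be fleshed out in Python; the following is a sketch of ours (the adjacency-list format and the function names are assumptions, not from the notes). The distance update uses the multiple-source variant of Dijkstra's algorithm described in the notes, in which every chosen center is initialized with distance 0.

```python
import heapq

def multi_source_dijkstra(adj, sources):
    """Dijkstra's algorithm modified to run from several sources at once:
    every source starts at distance 0, so afterwards d[v] is the distance
    from v to its nearest source."""
    d = {u: float("inf") for u in adj}
    pq = []
    for s in sources:
        d[s] = 0
        heapq.heappush(pq, (0, s))
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:
            continue                       # stale queue entry
        for v, w in adj[u]:
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

def greedy_k_center(adj, k, start):
    """Greedy 2-approximation: repeatedly place the next center on the
    current bottleneck vertex (the vertex farthest from all centers)."""
    centers = [start]
    for _ in range(k - 1):
        d = multi_source_dijkstra(adj, centers)
        centers.append(max(adj, key=lambda v: d[v]))   # bottleneck vertex
    d = multi_source_dijkstra(adj, centers)
    return centers, max(d.values())                    # (C, D(C))
```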

We know from Dijkstra's algorithm how to compute the shortest paths from a single source to all other vertices in the graph. Recall that in the initialization of Dijkstra's single-source algorithm, we set d[s] = 0 and pred[s] = null for the source s. In the modified multiple-source version, we do this for all the vertices of C; the rest of the algorithm is unchanged, and the resulting d[v] is the distance from v to its nearest vertex in C, which is exactly what the greedy algorithm needs. Recall that the running time of Dijkstra's algorithm is O((V + E) log V); under the reasonable assumption that E ≥ V, this is O(E log V). The final greedy algorithm involves running Dijkstra's algorithm k times (once for each pass through the main loop), so the overall running time is O(kE log V).

Approximation Bound: How bad could greedy be? It can certainly be off by a factor of 2. Consider a set of n + 1 vertices arranged in a linear graph in which all edges have weight 1, with k = 1. The greedy algorithm might pick any initial vertex that it likes; suppose it picks the leftmost. Then the maximum (bottleneck) distance is the distance to the rightmost vertex, which is n. If we had instead chosen the vertex in the middle, the maximum distance would be only n/2.

(Fig. 63: Worst case for greedy on a linear graph: greedy cost n versus optimal cost n/2.)

We now show that this is as bad as it gets: the greedy algorithm always produces a final bottleneck distance D(C) that is within a factor of 2 of the optimum.

Theorem: The greedy approximation has a ratio bound of 2, that is, D(G)/D* ≤ 2, where G denotes the set of greedy centers and D* the optimal bottleneck distance.

Proof: Let G = {g1, g2, ..., gk} be the centers found by the greedy approximation (shown as white dots in Fig. 64), and let O = {o1, o2, ..., ok} denote the centers of the optimal solution (shown as black dots; the regions in the figure represent the neighborhood sets V(oi) of the optimal centers). Let D(G) denote the bottleneck distance for G, and let D* = D(O) be the optimal bottleneck distance.

Let gk+1 denote the center that the greedy algorithm would have added next, that is, the bottleneck vertex for G; notice that the distance from gk+1 to its nearest center equals D(G). Let G' = {g1, g2, ..., gk, gk+1} be the (k + 1)-element set consisting of the greedy centers together with this next center.

First observe that for i ≠ j, d(gi, gj) ≥ D(G). This follows from our greedy selection strategy: when each center is selected, it is at the maximum (bottleneck) distance from all the previous centers, and as more centers are added this maximum distance can only decrease, so every pair of centers in G' is separated by at least the final bottleneck distance D(G).

The rest of the proof is a simple application of the pigeonhole principle. Each element of G' belongs to V(om) for some optimal center om. Since there are k sets V(o1), ..., V(ok) and k + 1 elements in G', it follows that at least two centers of G', say gi and gj, lie in the same set V(om). (In Fig. 64, the greedy centers g4 and g5 both lie in V(o2).) Since D* is the bottleneck distance for O, the distance from gi to om is at most D*, and similarly the distance from om to gj is at most D*. By concatenating these two paths, it follows that there exists a path of length at most 2D* from gi to gj, and hence, by the triangle inequality, d(gi, gj) ≤ 2D*. Combining this with the observation above,

    D(G) ≤ d(gi, gj) ≤ 2D*,

from which the desired ratio bound follows.

(Fig. 64: Analysis of the greedy heuristic for k = 5; the greedy centers are shown as white dots, the optimal centers as black dots, and the lines show the partition into the optimal neighborhoods V(oi).)

Lecture 23: Approximations: Set Cover and Bin Packing

Read: Set cover is covered in Chapt 35.3. Bin packing is covered as an exercise in CLRS.

Set Cover: The set cover problem is a very important optimization problem. You are given a pair (X, F), where X = {x1, x2, ..., xm} is a finite set (a domain of elements) and F = {S1, S2, ..., Sn} is a family of subsets of X, such that every element of X belongs to at least one set of F. Consider a subset C ⊆ F. We say that C covers the domain if every element of X is in some set of C, that is,

    X = ∪_{Si ∈ C} Si.

The problem is to find the minimum-sized subset C of F that covers X. Consider the example shown in Fig. 65; the optimum set cover consists of the three sets {S3, S4, S5}.

(Fig. 65: A set cover instance with sets S1 through S6; the optimum cover is {S3, S4, S5}.)

Set cover can be applied to a number of applications. For example, suppose you want to set up security cameras to cover a large art gallery. From each possible camera position, you can see a certain subset of the paintings. Each such subset of paintings is a set in your system, and you want to put up the fewest cameras that see all the paintings.

Complexity of Set Cover: We have seen special cases of the set cover problem that are NP-complete. In particular, vertex cover is a type of set cover problem: the domain to be covered is the set of edges, and each vertex covers the subset of edges incident to it. Thus, the decision-problem formulation of set cover ("does there exist a set cover of size at most k?") is NP-complete as well.

There is a factor-2 approximation for the vertex cover problem, but it cannot be applied to generate a factor-2 approximation for set cover. The VC approximation relies on the fact that each element of the domain (an edge) is in exactly 2 sets (one for each of its endpoints); this is not true for the general set cover problem. In fact, unless P = NP, it is known that there is no constant-factor approximation to the set cover problem. This is unfortunate, because set cover is one of the most powerful NP-complete problems. Today we will show that there is a reasonable approximation algorithm, the greedy heuristic, which achieves an approximation bound of ln m, where m = |X| is the size of the underlying domain. (The book proves a somewhat stronger result, an approximation factor of ln m', where m' ≤ m is the size of the largest set in F; their proof is more complicated.)
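The edge-covering view of vertex cover mentioned above is easy to make concrete. The following sketch (an illustration of ours; the data representation is an assumption) encodes a vertex cover instance as a set cover instance and checks the "each element lies in exactly two sets" property that the factor-2 VC approximation exploits.

```python
def vertex_cover_as_set_cover(edges):
    """Encode vertex cover as set cover: the domain X is the edge set, and
    each vertex contributes the set of edges incident to it."""
    X = set(edges)
    F = {}
    for (u, v) in edges:
        F.setdefault(u, set()).add((u, v))
        F.setdefault(v, set()).add((u, v))
    return X, F
```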

Greedy Set Cover: A simple greedy approach to set cover works by selecting, at each stage, the set that covers the greatest number of "uncovered" elements.

Greedy Set Cover
Greedy-Set-Cover(X, F) {
    U = X                       // U are the items to be covered
    C = empty                   // C will be the sets in the cover
    while (U is nonempty) {     // there is someone left to cover
        select S in F that covers the most elements of U
        add S to C
        U = U - S
    }
    return C
}

For the example given earlier, the greedy set cover algorithm would select S1 (since it covers 6 out of 12 elements), then S6 (since it covers 3 out of the remaining 6), then S2 (since it covers 2 of the remaining 3), and finally S3. Thus it would return a set cover of size 4, whereas the optimal set cover {S3, S4, S5} has size 3.

What is the approximation factor? The problem with the greedy set cover algorithm is that it can be "fooled" into picking the wrong set, over and over again. Consider the example shown in Fig. 66. The optimal set cover consists of the two sets S5 and S6, each of size 16. Initially all three sets S1, S5, and S6 cover 16 elements. If ties are broken in the worst possible way, the greedy algorithm will first select S1. We remove all the covered elements, and now S2, S5, and S6 all cover 8 of the remaining elements. Again choosing poorly, S2 is selected. The pattern repeats, choosing S3 (covering 4), then S4 (covering 2), and finally S5 and S6 (each covering 1 of the last remaining elements). Thus the optimum cover consists of just two sets, but we picked roughly lg m sets, for a ratio bound of (lg m)/2. (Recall that lg denotes the logarithm base 2.) There were many cases where ties were broken badly here, but it is possible to redesign the example so that there are no ties and the algorithm has essentially the same ratio bound.

(Fig. 66: An example in which greedy set cover performs poorly. Optimum: {S5, S6}; Greedy: {S1, S2, S3, S4, S5, S6}.)

However, we will show that the greedy set cover heuristic never performs worse than a factor of ln m. (Note that this is the natural log, not base 2.) Before giving the proof, we need one important mathematical inequality.

Lemma: For all c > 0,

    (1 − 1/c)^c ≤ 1/e,

where e is the base of the natural logarithm.
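As an aside, the Greedy-Set-Cover pseudocode above is easy to realize in Python. This is a sketch of ours (the dictionary representation of F and the names are assumptions).

```python
def greedy_set_cover(X, F):
    """Greedy-Set-Cover: repeatedly select the set of F covering the most
    still-uncovered elements, until everything is covered.  F maps set
    names to Python sets of elements."""
    U = set(X)                  # uncovered elements
    cover = []
    while U:
        best = max(F, key=lambda name: len(F[name] & U))
        cover.append(best)
        U -= F[best]
    return cover
```

This achieves the ln m ratio bound proved below (m = |X|), provided every element of X appears in some set of F.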

Proof (of the Lemma): We use the fact that for all x, 1 + x ≤ e^x. (The two functions are equal when x = 0.) Substituting −1/c for x gives 1 − 1/c ≤ e^{−1/c}, and raising both sides to the (positive) cth power yields (1 − 1/c)^c ≤ e^{−1} = 1/e, as desired.

Theorem: Greedy set cover has a ratio bound of at most ln m, where m = |X|.

Proof: Let c denote the size of the optimum set cover, and let g denote the size of the greedy set cover minus 1. We will show that g/c ≤ ln m. (This is not quite what we want, since the greedy cover has g + 1 sets, but we are correct to within 1 set.)

Initially, there are m0 = m elements left to be covered. We know that these elements can be covered by c sets (the optimal cover), and therefore, by the pigeonhole principle, there must be at least one set that covers at least m0/c elements. (Otherwise, if every set covered fewer than m0/c elements, then no collection of c sets could cover all m0 elements.) Since the greedy algorithm selects the set covering the largest number of uncovered elements, it selects a set that covers at least this many, so the number of elements that remain to be covered is at most

    m1 = m0 − m0/c = m0(1 − 1/c).

Applying the argument again, the m1 remaining elements can also be covered by the optimal cover of size c, so there is a set that covers at least m1/c of them, leaving at most m2 = m1(1 − 1/c) = m0(1 − 1/c)^2 elements remaining. In general, after g sets have been chosen by the greedy algorithm, the number of elements that remain uncovered is at most

    mg = m0(1 − 1/c)^g = m(1 − 1/c)^g.

How long can this go on? By the definition of g, after the greedy algorithm has chosen g sets (all but the last set of its cover), some element still remains to be covered, so we are interested in the largest g such that

    1 ≤ m(1 − 1/c)^g.

Rewriting the right-hand side and applying the Lemma,

    1 ≤ m((1 − 1/c)^c)^{g/c} ≤ m(1/e)^{g/c}.

Multiplying both sides by e^{g/c} gives e^{g/c} ≤ m, and taking natural logs, g/c ≤ ln m. This completes the proof.

Even though the greedy set cover heuristic has this relatively bad ratio bound, it seems to perform reasonably well in practice; the example shown above, in which the approximation bound is (lg m)/2, is not "typical" of set cover instances.

Bin Packing: Bin packing is another well-known NP-complete problem, a variant of the knapsack problem. We are given a set of n objects, where si denotes the size of the ith object. It will simplify the presentation to assume that 0 < si < 1. We want to put these objects into a set of bins, each of which can hold a subset of objects whose total size is at most 1. (Note that if your bin size is not 1, you can reduce the problem to this form by simply dividing all sizes by the size of the bin.) The problem is to partition the objects among the bins so as to use the fewest possible bins. The theorem on the approximation bound proven here is a bit weaker than the one in CLRS, but I think it is easier to understand.
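Before moving on, here is a quick numerical sanity check (an aside of ours, not in the notes) of the set cover analysis above: the Lemma's inequality, and the consequence that after roughly c ln m greedy steps the bound m(1 − 1/c)^g on the uncovered elements drops below 1.

```python
import math

def lemma_holds(c):
    """Check (1 - 1/c)^c <= 1/e; in the proof, c is the size of the
    optimum cover, so c >= 1."""
    return (1 - 1 / c) ** c <= 1 / math.e

def remaining_bound(m, c):
    """The proof's upper bound m(1 - 1/c)^g on uncovered elements after
    g = ceil(c ln m) greedy steps; it should be below 1."""
    g = math.ceil(c * math.log(m))
    return m * (1 - 1 / c) ** g
```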

Bin packing arises in many applications, such as packing boxes into a truck or cutting the maximum number of pieces of certain shapes out of a piece of sheet metal. Many of these applications involve not only the sizes of the objects but their geometric shapes as well; here we ignore the geometry and consider only the sizes. Even so, the decision problem is still NP-complete. (The reduction is from the knapsack problem.)

Here is a simple heuristic algorithm for the bin packing problem, called the first-fit heuristic. We start with an unlimited number of empty bins. We take each object in turn, find the first bin that has space to hold it, and put the object in that bin. The algorithm is illustrated in Fig. 67.

(Fig. 67: First-fit heuristic.)

Let b* denote the optimal number of bins, and let bff denote the number of bins used by first-fit. We claim that first-fit uses at most twice as many bins as the optimum.

Theorem: The first-fit heuristic achieves a ratio bound of 2, that is, bff/b* ≤ 2.

Proof: Consider an instance {s1, ..., sn} of the bin packing problem, and let S = Σi si denote the sum of all the object sizes. First observe that b* ≥ S. This is true since no bin can hold a total capacity of more than 1 unit, so even if we were to fill each bin exactly to its capacity, we would need at least S bins (and hence at least ⌈S⌉, since the number of bins is an integer).

Next, we claim that bff ≤ 2S. To see this, let ti denote the total size of the objects that first-fit puts into bin i, and consider bins i and i + 1 filled by first-fit. We claim that ti + ti+1 ≥ 1. If not, then the contents of bins i and i + 1 could both be put into the same bin, and hence first-fit would never have started to fill the second bin, preferring to keep everything in the first. Assume that indexing is cyclical, so that if i is the last index (i = bff) then i + 1 = 1. Summing over all bins,

    Σ_{i=1}^{bff} (ti + ti+1) ≥ bff.

But this sum adds up every object's size twice, so it has total value 2S. Thus we have 2S ≥ bff. Combining this with b* ≥ S, we get bff ≤ 2S ≤ 2b*, implying that bff/b* ≤ 2, as desired.

There are in fact a number of other heuristics for bin packing. There is a variant of first fit, called first fit decreasing, in which the objects are first sorted in decreasing order of size. (This makes intuitive sense, because it is best to first load the big items and then try to squeeze the smaller objects into the remaining space.) Another example is best fit, which attempts to put each object into the bin in which it fits most closely with the available space (assuming that there is a bin with sufficient available space).
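The first-fit heuristic above can be sketched in a few lines of Python (our own illustration; the names and list-of-bins representation are assumptions).

```python
def first_fit(sizes):
    """First-fit heuristic: place each object into the first bin that has
    room for it (bins have capacity 1); open a new bin if none does.
    Returns the list of bins, each bin a list of object sizes."""
    bins = []
    for s in sizes:
        for b in bins:
            if sum(b) + s <= 1:
                b.append(s)
                break
        else:                    # no existing bin had room
            bins.append([s])
    return bins
```

By the theorem above, len(first_fit(sizes)) is at most twice the optimal number of bins.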

A more careful proof establishes that first fit actually has an approximation ratio a bit smaller than 2; in fact a bound of 17/10 is possible. First fit decreasing has a significantly better bound of 11/9 ≈ 1.222, and best fit has a bound very similar to that of first fit.

Lecture 24: Final Review

Overview: This semester we have discussed general approaches to algorithm design. The intent has been to investigate the basic algorithm design paradigms, such as dynamic programming, greedy algorithms, and depth-first search, and to consider how these techniques can be applied to a number of well-defined problems. We have also discussed the class of NP-complete problems, problems that are believed to be very hard to solve, and finally some examples of approximation algorithms.

There is still much more to be learned about algorithm design; it would be easy to devote an entire semester to any one of these topics, but we have covered a great deal of the basic material. One direction is to specialize in some particular area, e.g. string pattern matching, computational geometry, parallel algorithms, randomized algorithms, or approximation algorithms. Another direction is to gain a better understanding of average-case analysis, which we have largely ignored, or to consider general search strategies such as simulated annealing. Still another direction might be to study numerical algorithms (as covered in a course on numerical analysis). Finally, an emerging area is the study of algorithm engineering, which considers how to design algorithms that are efficient both in a practical sense and in a theoretical sense.

How to use this information: In some sense, the algorithms you have learned here are rarely immediately applicable to your later work (unless you go on to be an algorithm designer), because real-world problems are always messier than these simple abstract problems. However, there are some important lessons to take out of this class.

Develop a clean mathematical model: Most real-world problems are messy. An important first step in solving any problem is to produce a simple and clean mathematical formulation, for example, describing the problem as an optimization problem on graphs, sets, or strings. If you cannot clearly describe what your algorithm is supposed to do, it is very difficult to know when you have succeeded.

Create good rough designs: Before jumping in and starting coding, it is important to begin with a good rough design. If your rough design is based on a bad paradigm (e.g. exhaustive enumeration, when depth-first search could have been applied), then no amount of additional tuning and refining will save this bad design.

Prove your algorithm correct: Many times you come up with an idea that seems promising, only to find out later (after a lot of coding and testing) that it does not work. Prove that your algorithm is correct before coding. Writing proofs is not always easy, but it may save you a few weeks of wasted programming time. If you cannot see why it is correct, chances are that it is not correct at all.

Can it be improved?: Once you have a solution, try to come up with a better one. Is there some reason why a better algorithm does not exist? (That is, can you establish a lower bound?) If your solution is exponential time, then maybe your problem is NP-hard.

Prototype to generate better designs: We have attempted to analyze algorithms from an asymptotic perspective, which hides many of the details of the running time (e.g. constant factors) but gives a general perspective for separating good designs from bad ones. After you have isolated the good designs, then it is time to start prototyping and doing empirical tests to establish the real constant factors. A good profiling tool can tell you which subroutines are taking the most time, and those are the ones you should work on improving.

Still too slow?: If your problem has an unacceptably high execution time, you might consider an approximation algorithm or develop a good heuristic. The world is full of heuristics, both good and bad. Run many experiments to see how good the actual performance is, and if possible, prove a ratio bound for your algorithm.

Material for the final exam:

Old Material: Know the general results, but I will not ask too many detailed questions. Do not forget DFS and DP; you will likely see an algorithm design problem that involves one of these two techniques.

All-Pairs Shortest Paths: (Chapt 25.2.) Floyd-Warshall algorithm: all-pairs shortest paths with arbitrary edge weights (no negative-cost cycles). Running time O(V^3).

NP-completeness: (Chapt 34.) Basic concepts: decision problems, polynomial time, the class P, certificates and the class NP, polynomial time reductions. NP-completeness reductions: You are responsible for knowing the following reductions:

    • 3-coloring to clique cover.
    • 3SAT to Independent Set (IS).
    • Independent Set to Vertex Cover and Clique.
    • Vertex Cover to Dominating Set.
    • Vertex Cover to Subset Sum.

It is also a good idea to understand all the reductions that were used in the homework solutions, since modifications of these will likely appear on the final.

NP-complete reductions can be challenging. If you cannot see how to solve a reduction problem, here are some suggestions for maximizing partial credit. All NP-completeness proofs have a very specific form, so explain that you know the template and try to fill in as many aspects as possible. Suppose that you want to prove that some problem B is NP-complete.

    • B ∈ NP. This basically involves specifying the certificate. The certificate is almost always the thing that the problem is asking you to find, and this step is almost always easy.
    • For some known NP-complete problem A, show A ≤P B. This means that you want to find a polynomial time function f that maps an instance of A to an instance of B. (Make sure to get the direction correct!)
    • Show the correctness of your reduction, by showing that x ∈ A if and only if f(x) ∈ B. First suppose that you have a solution to x and show how to map this to a solution for f(x); then suppose that you have a solution to f(x) and show how to map this to a solution for x.

If you cannot figure out what f is, at least tell me what you would like f to do. Remember that you are trying to translate the elements of one problem into the elements of the other; explain which elements of problem A will likely map to which elements of problem B. I try to make at least one reduction on the exam similar to one that you have seen before, so don't blow it.

Approximation Algorithms: (Chapt 35, up through 35.2.) Many approximation algorithms are simple; most are based on simple greedy heuristics. The key to proving many ratio bounds is first coming up with a lower bound on the optimal solution (e.g., TSPopt ≥ MST) and then providing an upper bound on the cost of your heuristic relative to this same quantity (e.g., the shortcut twice-around tour for the MST is at most twice the MST cost). Know the following bounds:

    Vertex cover: ratio bound of 2.
    TSP with triangle inequality: ratio bound of 2.
    k-center: ratio bound of 2.
    Set cover: ratio bound of ln m, where m = |X|.
    Bin packing: ratio bound of 2.
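Since the Floyd-Warshall algorithm appears in the review list above, here is a compact reference sketch of ours (the matrix representation is an assumption): three nested loops over an adjacency matrix, O(V^3) time.

```python
def floyd_warshall(W):
    """Floyd-Warshall all-pairs shortest paths in O(V^3) time.  W is an
    adjacency matrix with W[u][u] = 0 and float('inf') for absent edges;
    negative edge weights are allowed provided there is no negative-cost
    cycle."""
    n = len(W)
    d = [row[:] for row in W]
    for k in range(n):                  # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```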

Supplemental Lecture 1: Asymptotics

Read: Chapters 2–3 in CLRS.

Asymptotics: The formulas that are derived for the running times of programs may often be quite complex. (An exact analysis is probably best done by implementing the algorithm and measuring CPU seconds.) When designing algorithms, the main purpose of the analysis is to get a sense for the trend in the algorithm's running time, so we would like a simple way of representing complex functions that captures the essential growth rate properties. This is the purpose of asymptotics. For example, suppose we have an algorithm whose (exact) worst-case running time is given by the formula

    T(n) = 13n^3 + 5n^2 − 17n + 16.

As n becomes large, the 13n^3 term dominates the others, so we might say that the running time grows "on the order of" n^3, which we will express mathematically as T(n) ∈ Θ(n^3).

Asymptotic analysis is based on two simplifying assumptions, which hold in most (but not all) cases. It is important to understand these assumptions and the limitations of asymptotic analysis.

Large input sizes: We are most interested in how the running time grows for large values of n. The justification for considering large n is that if n is small, then almost any algorithm is fast enough; people are most concerned about running times for large inputs.

Ignore constant factors: The actual running time of the program depends on various constant factors in the implementation (coding tricks, optimizations in compilation, speed of the underlying hardware, etc.). Therefore, we will ignore constant factors.

These assumptions are not always reasonable. In a particular application, n may be a fixed value, and it may be the case that one function is smaller than another asymptotically, but for your value of n the asymptotically larger value is fine. For example, suppose we have two programs, one whose running time is T1(n) = n^3 and another whose running time is T2(n) = 100n. (The latter algorithm may be slower for small n because it uses a more sophisticated and complex algorithm, and the added sophistication results in a larger constant factor.) For small n (e.g., n ≤ 10) the first algorithm is the faster of the two, but as n becomes larger the relative differences in running time become much greater. Assuming one million operations per second:

    n        T1(n)       T2(n)      T1(n)/T2(n)
    10       0.001 sec   0.001 sec  1
    100      1 sec       0.01 sec   100
    1000     17 min      0.1 sec    10,000
    10,000   11.6 days   1 sec      1,000,000

The clear lesson is that as input sizes grow, the performance of the asymptotically poorer algorithm degrades much more rapidly. Most of the algorithms that we will study this semester have both low constants and low asymptotic running times, so we will not need to worry about these issues.

Asymptotic Notation: To represent the running times of algorithms in a simpler form, we use asymptotic notation, which essentially represents a function by its fastest growing term and ignores constant factors. Let us consider how to make this idea mathematically formal.

Definition: Given any function g(n), we define Θ(g(n)) to be the set of functions

    Θ(g(n)) = {f(n) | there exist strictly positive constants c1, c2, and n0
                      such that 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0}.

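The running-time table above is easy to reproduce (a small aside of ours, not in the original notes): convert operation counts to seconds at one million operations per second.

```python
# T1(n) = n^3 operations versus T2(n) = 100n operations, at one million
# operations per second, for the input sizes in the table above.
def seconds(ops, ops_per_sec=1_000_000):
    return ops / ops_per_sec

table = [(n, seconds(n**3), seconds(100 * n))
         for n in (10, 100, 1000, 10_000)]
```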
Let's dissect this definition. Informally, what we want to say with "f(n) ∈ Θ(g(n))" is that f(n) and g(n) are asymptotically equivalent. This means that they have essentially the same growth rates for large n. For example, functions such as 4n^2, (8n^2 + 2n − 3), (n^2/5 + √n − 10 log n), and n(n − 3) are all intuitively asymptotically equivalent, since as n becomes large, the dominant (fastest growing) term is some constant times n^2; in other words, they all grow quadratically in n. The portion of the definition that allows us to select c1 and c2 is essentially saying "the constants do not matter, because you may pick c1 and c2 however you like to satisfy these conditions." The portion that allows us to select n0 is essentially saying "we are only interested in large n," since you only have to satisfy the condition for all n bigger than n0, and you may make n0 as big a constant as you like.

An example: Consider the function f(n) = 8n^2 + 2n − 3. Our informal rule of keeping the largest term and throwing away the constants suggests that f(n) ∈ Θ(n^2) (since f grows quadratically). Let's see why the formal definition bears out this informal observation. We need to show two things: first, that f(n) grows asymptotically at least as fast as n^2, and second, that f(n) grows no faster asymptotically than n^2. We'll do both very carefully.

Lower bound: f(n) grows asymptotically at least as fast as n^2. This is established by the portion of the definition that reads (paraphrasing): "there exist positive constants c1 and n0, such that f(n) ≥ c1 n^2 for all n ≥ n0." Consider the following (almost correct) reasoning:

    f(n) = 8n^2 + 2n − 3 ≥ 8n^2 − 3 = 7n^2 + (n^2 − 3) ≥ 7n^2.

Thus, if we set c1 = 7, then we are done. But in the above reasoning we have implicitly made the assumptions that 2n ≥ 0 and n^2 − 3 ≥ 0. These are not true for all n, but they are true for all sufficiently large n; in particular, if n ≥ √3, then both are true. So let us select n0 = √3, and now we have f(n) ≥ c1 n^2 for all n ≥ n0, which is what we need.

Upper bound: f(n) grows asymptotically no faster than n^2. This is established by the portion of the definition that reads: "there exist positive constants c2 and n0, such that f(n) ≤ c2 n^2 for all n ≥ n0." Consider the following reasoning (which is almost correct):

    f(n) = 8n^2 + 2n − 3 ≤ 8n^2 + 2n ≤ 8n^2 + 2n^2 = 10n^2.

This means that if we let c2 = 10, then we are done. We have implicitly made the assumption that 2n ≤ 2n^2. This is not true for all n, but it is true for all n ≥ 1. So let us select n0 = 1, and now we have f(n) ≤ c2 n^2 for all n ≥ n0.

From the lower bound we have n0 ≥ √3, and from the upper bound we have n0 ≥ 1, and so combining these we let n0 be the larger of the two: n0 = √3. Thus, in conclusion, if we let c1 = 7, c2 = 10, and n0 = √3, then for all n ≥ n0 we have

    0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n),

and this is exactly what the definition requires. Since we have shown (by construction) the existence of the constants c1, c2, and n0, we have established that f(n) ∈ Θ(n^2). (Whew! That was a lot more work than just the informal notion of throwing away constants and keeping the largest term, but it shows how this informal notion is implemented formally in the definition.)

Now let's show why f(n) is not in some other asymptotic class. First, let's show that f(n) ∉ Θ(n). If this were true, then we would have to satisfy both the upper and lower bounds. It turns out that the lower bound is satisfied (because f(n) grows asymptotically at least as fast as n). But the upper bound is false. In particular, the upper bound requires us to show that "there exist positive constants c2 and n0, such that f(n) ≤ c2 n for all n ≥ n0." Informally, we know that as n becomes large enough, f(n) = 8n^2 + 2n − 3 will eventually exceed c2 n no matter how large we make c2 (since f(n) is growing quadratically and c2 n is only growing linearly).

To show this formally, suppose towards a contradiction that constants c2 and n0 did exist, such that 8n^2 + 2n − 3 ≤ c2 n for all n ≥ n0. If we divide both sides by n, we have:

    8n + 2 − 3/n ≤ c2.

Since this is true for all sufficiently large n, it must be true in the limit as n tends to infinity:

    lim_{n→∞} (8n + 2 − 3/n) ≤ c2.

It is easy to see that in the limit the left side tends to ∞, and so no matter how large c2 is, this statement is violated. This means that f(n) ∉ Θ(n).

Let's show that f(n) ∉ Θ(n^3). Here the idea will be to violate the lower bound: "there exist positive constants c1 and n0, such that f(n) ≥ c1 n^3 for all n ≥ n0." Informally this must fail because f(n) is only growing quadratically, and eventually any cubic function will exceed it. To show this formally, suppose towards a contradiction that constants c1 and n0 did exist, such that 8n^2 + 2n − 3 ≥ c1 n^3 for all n ≥ n0. If we divide both sides by n^3, we have:

    8/n + 2/n^2 − 3/n^3 ≥ c1.

Since this is true for all sufficiently large n, it must be true in the limit as n tends to infinity. It is easy to see that in the limit the left side tends to 0, and so the only way to satisfy this requirement is to set c1 = 0; but by hypothesis c1 is positive. This means that f(n) ∉ Θ(n^3).

O-notation and Ω-notation: We have seen that the definition of Θ-notation relies on proving both a lower and an upper asymptotic bound. Sometimes we are only interested in proving one bound or the other. The O-notation allows us to state asymptotic upper bounds and the Ω-notation allows us to state asymptotic lower bounds.

Definition: Given any function g(n),

    O(g(n)) = { f(n) | there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c g(n) for all n ≥ n0 }.

Definition: Given any function g(n),

    Ω(g(n)) = { f(n) | there exist positive constants c and n0 such that 0 ≤ c g(n) ≤ f(n) for all n ≥ n0 }.

Compare these with the definition of Θ. You will see that O-notation only enforces the upper bound of the Θ definition, and Ω-notation only enforces the lower bound. Intuitively, f(n) ∈ O(g(n)) means that f(n) grows asymptotically at the same rate or slower than g(n), whereas f(n) ∈ Ω(g(n)) means that f(n) grows asymptotically at the same rate or faster than g(n). For example, f(n) = 3n^2 + 4n ∈ Θ(n^2), but it is not in Θ(n) or Θ(n^3). It is in O(n^2) and O(n^3), but not O(n); and it is in Ω(n^2) and Ω(n), but not Ω(n^3). Also observe that f(n) ∈ Θ(g(n)) if and only if f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)).

The Limit Rule for Θ: The previous examples, which used limits, suggest an alternative way of showing that f(n) ∈ Θ(g(n)).

Limit Rule for Θ-notation: Given positive functions f(n) and g(n), if

    lim_{n→∞} f(n)/g(n) = c,

for some constant c > 0 (strictly positive but not infinity), then f(n) ∈ Θ(g(n)).
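The constants constructed in the example above can be checked mechanically. Here is a small sketch (an illustration of the definition, not a proof, since it only samples finitely many values of n; the function name f is mine):

```python
import math

def f(n):
    # The running example from the text.
    return 8 * n * n + 2 * n - 3

c1, c2, n0 = 7, 10, math.sqrt(3)

# f(n) in Theta(n^2): both bounds hold for every sampled n >= n0.
# (Every integer n >= 2 already satisfies n >= n0, since n0 ~ 1.73.)
for n in range(2, 10_000):
    assert c1 * n * n <= f(n) <= c2 * n * n
```

Trying the same check against c2·n (for the claim f(n) ∉ Θ(n)) fails almost immediately, no matter how large a c2 you pick, which is exactly the informal argument in the text.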

Limit Rule for O-notation: Given positive functions f(n) and g(n), if

    lim_{n→∞} f(n)/g(n) = c,

for some constant c ≥ 0 (nonnegative but not infinite), then f(n) ∈ O(g(n)).

Limit Rule for Ω-notation: Given positive functions f(n) and g(n), if

    lim_{n→∞} f(n)/g(n) ≠ 0

(either a strictly positive constant or infinity), then f(n) ∈ Ω(g(n)).

This limit rule can be applied in almost every instance (that I know of) where the formal definition can be used, and it is almost always easier to apply than the formal definition. The only exceptions that I know of are strange instances where the limit does not exist (e.g., f(n) = n^(1+sin n)). But since most running times are fairly well-behaved functions, this is rarely a problem. For example, recall the function f(n) = 8n^2 + 2n − 3. To show that f(n) ∈ Θ(n^2) we let g(n) = n^2 and compute the limit:

    lim_{n→∞} (8n^2 + 2n − 3)/n^2 = lim_{n→∞} (8 + 2/n − 3/n^2) = 8

(since the two fractional terms tend to 0 in the limit). Since 8 is a nonzero constant, it follows that f(n) ∈ Θ(g(n)).

You may recall the important rules from calculus for evaluating limits. (If not, dredge out your calculus book to remember.) Most of the rules are pretty self-evident (e.g., the limit of a finite sum is the sum of the individual limits). One important rule to remember is the following:

L'Hôpital's rule: If f(n) and g(n) both approach 0 or both approach ∞ in the limit, then

    lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n),

where f′(n) and g′(n) denote the derivatives of f and g relative to n.

Exponentials and Logarithms: Exponentials and logarithms are very important in analyzing algorithms. The following are nice to keep in mind.

Lemma: Given any positive constants a > 1, b, and c:

    lim_{n→∞} n^b / a^n = 0,        lim_{n→∞} (lg^b n) / n^c = 0.

(The terminology lg^b n means (lg n)^b.) We won't prove these, but they can be shown by taking appropriate powers and then applying L'Hôpital's rule. The important bottom line is that polynomials always grow more slowly than exponentials whose base is greater than 1 (for example, n^500 ∈ O(2^n)), and logarithmic powers (sometimes called polylogarithmic functions) grow more slowly than any polynomial (for example, lg^500 n ∈ O(n)).
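The lemma can be watched in action numerically. The following sketch (with the arbitrary choices a = 2, b = 3, c = 1, which are mine) shows both ratios shrinking toward 0 as n grows; again this only illustrates the limits, it does not prove them:

```python
import math

# n^b / a^n for a = 2, b = 3: polynomial over exponential.
poly_over_exp = [n ** 3 / 2 ** n for n in (10, 20, 30, 40, 50)]

# (lg n)^b / n^c for b = 3, c = 1: polylog over polynomial.
polylog_over_poly = [math.log2(n) ** 3 / n for n in (10 ** k for k in range(1, 8))]

# Both sequences decrease monotonically toward 0.
assert poly_over_exp == sorted(poly_over_exp, reverse=True)
assert polylog_over_poly == sorted(polylog_over_poly, reverse=True)
assert poly_over_exp[-1] < 1e-9 and polylog_over_poly[-1] < 1e-2
```

Note how slowly the polylog ratio falls: even at n = 10^7 it is only around 10^-3, which is the "crossover point" caution discussed below.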

(E. given that you need Θ(n) time just to read in all the data. • Θ(n!). Supplemental Lecture 2: Max Dominance Read: Review Chapters 1–4 in CLRS. look up the deﬁnition of Ackerman’s function in our text. n ≤ 50). if you want to see a function that grows inconceivably fast. n ≤ 20). it would be a good idea to try to get a more accurate average case analysis. Expanding the latter function and grouping terms in order of their growth rate we have T2 (n) = n n2 + n log n − . which sorted the points by their x-coordinate. • Θ(n log n): This is the running time of the best sorting algorithms. this is still considered quite efﬁcient. n ≤ 1.. if it means avoiding any additional powers of n. you can’t beat it! • Θ(log n): This is typically the speed that most efﬁcient data structures operate in for a single access. by my calculations. it should be mentioned that these last observations are really asymptotic results.) Also it is the time to ﬁnd an object in a sorted list of length n by binary search. Although the second algorithm is twice as fast for large n (because of the 1/2 factor multiplying the n2 term). • Θ(2n ). Last time we considered a slight improvement. this improvement. this applies to all reasonably large input sizes. for small powers of logarithms. However. Θ(3n ): Exponential time. • Θ(n2 ). we will usually be happy to allow any number of additional logarithmic factors. Faster Algorithm for Max-Dominance: Recall the max-dominance problem from the last two lectures. The question we consider today is whether there is an approach that is signiﬁcantly better. At this point. Max Dominance Revisited: Returning to our Max Dominance algorithms.g. recall that one had a running time of T1 (n) = n2 and the other had a running time of T2 (n) = n log n + n(n − 1)/2. Are their even bigger functions? Deﬁnitely! For example. inserting a key into a balanced binary tree.. 
These running times are acceptable either when the exponent is small or when the data size is not too large (e. So far we have introduced a simple brute-force algorithm that ran in O(n2 ) time. only improved matters by a constant factor. and then compared each point against the subsequent points in the sorted order. • Θ(1): Constant time. Asymptotic Intuition: To get a intuitive feeling for what common asymptotic running times map into in terms of practical usage. Lecture Notes 95 CMSC 451 . For example. Since many problems require sorting the inputs. But. Θ(nn ): Acceptable only for really small inputs (e. • Θ(n): This is about the fastest that an algorithm can run. lg500 n ≤ n only for n > 26000 (which is much larger than input size you’ll ever see). which operated by comparing all pairs of points. you should take this with a grain of salt. Thus. Θ(n3 ). or (2) you know that this is a worst-case running time that will rarely occur in practical instances. . this does not represent a signiﬁcant improvement.g. For example lg2 n ≤ n for all n ≥ 16. . here is a little list.g. This is only acceptable when either (1) your know that you inputs will be of very small size (e. 2 2 We will leave it as an easy exercise to show that both T1 (n) and T2 (n) are Θ(n2 ).: Polynomial time. In case (2). .For this reason. but you should be careful just how high the crossover point is. 000).g. They are true in the limit for large n.
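The claim that T1 and T2 differ only by a constant factor is easy to check numerically. The following sketch computes the ratio T1(n)/T2(n) for growing n; since the n^2/2 term dominates T2, the ratio should climb toward the constant 2:

```python
import math

def T1(n):
    return n * n                                # brute force: all pairs

def T2(n):
    return n * math.log2(n) + n * (n - 1) / 2   # sort, then forward scan

# Both are Theta(n^2): the ratio approaches the constant 2 from below.
ratios = [T1(10 ** k) / T2(10 ** k) for k in range(2, 7)]
assert ratios == sorted(ratios)                 # increasing toward 2
assert abs(ratios[-1] - 2) < 1e-3
```

This is exactly the "twice as fast, but still Θ(n^2)" observation in the text.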

let’s look at the ratio of the running times. for a total of O(n log n) time.) On larger inputs. This is nice and clean (since it is usually easy to get good code for sorting without troubling yourself to write your own). How much of an improvement is this? Probably the most accurate way to ﬁnd out would be to code the two up. output P[n]. but since we are not using any really complex data structures. n] had the maximum y-coordinate. that if we knew which point among P [i + 1.) When we encounter the point P [i]. 000. However. But just to get a feeling.A Major Improvement: The problem with the previous algorithm is that. For any set of points. and compare their running times. n) { Sort P in ascending order by x-coordinate.g. n = 1. (Rule 1 of algorithm optimization: Don’t optimize code that is already fast enough. irrespective of the constant factors. divide and conquer. One of the basic maxims of algorithm design is to ﬁrst approach any problem using one of the standard algorithm design paradigms. and the code inside takes constant time. We will talk more about these methods as the semester continues.y. the ratio of n to log n is about 1000/10 = 100. dynamic programming.y ≥ P [j]. even though we have cut the number of comparisons roughly in half. Can we save time by making only one comparison for each point? The inner while loop is testing to see whether any point that follows P [i] in the sorted list has a larger y-coordinate. so inside of O-notation. from right to left. say. so there is a 100to-1 ratio in running times. 000.y >= P[j]. Divide and Conquer Approach: One problem with the previous algorithm is that it relies on sorting. e. because there is just a single loop that is executed n − 1 times. you might consider whether you can solve this problem without invoking a sorting algorithm. For this problem. we could just test against that point. For even larger inputs. 000. // last point is always maximal j = n. (Initially the rightmost point is maximal. 
we are looking at a ratio of roughly 1. ignoring constant factors: n n2 = . We keep track of the index j of the most recently seen maximal point. less than 100). // P[i] has the largest y so far } } } The running time of the for-loop is obviously O(n). How can we do this? Here is a simple observation.g.. 10. Note that a change in base only affects the value of a logarithm function by a constant amount. it is maximal if and only if P [i]. we will usually just write log n. The total running time is dominated by the O(n log n) sorting time. we would need to factor in constant factors. the point with the maximum ycoordinate is the maximal point with the smallest x-coordiante. What is this paradigm? Lecture Notes 96 CMSC 451 . each point is still making lots of comparisons. This suggests that we can sweep the points backwards. say. it is hard to imagine that the constant factors will differ by more than. . for i = n-1 downto 1 { if (P[i]. Of course. n lg n lg n (I use the notation lg n to denote the logarithm base 2. This is quite a signiﬁcant difference. 000/20 = 50. This suggests the following algorithm. greedy algorithms. if you really wanted to squeeze the most efﬁciency out of your code. .) For relatively small values of n (e. This suggests. . depth-ﬁrst search. 000. Max Dominance: Sort and Reverse Scan MaxDom3(P. both algorithms are probably running fast enough that the difference will be practically negligible. // yes. n = 1. ln n to denote the natural logarithm (base e) and log n when I do not care about the base. divide-and-conquer is a natural method to choose. say.output it j = i. .y) { // is P[i] maximal? output P[i]. 000.
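The sort-and-reverse-scan algorithm above translates directly into runnable code. Here is a Python sketch (the tuple representation of points and the function name are my own choices; like the pseudocode, it assumes distinct coordinates):

```python
def max_dominance_scan(points):
    """Return the maximal points of a list of (x, y) pairs.

    A point (x1, y1) dominates (x2, y2) if x1 > x2 and y1 > y2; a point is
    maximal if no point dominates it. Sort by x, then sweep right to left,
    keeping the largest y seen so far. O(n log n), dominated by the sort.
    """
    pts = sorted(points)            # ascending by x-coordinate
    maxima = []
    best_y = float("-inf")          # max y among points to the right
    for p in reversed(pts):
        if p[1] >= best_y:          # no point to the right has larger y
            maxima.append(p)
            best_y = p[1]
    maxima.reverse()                # report in ascending x order
    return maxima
```

For example, on [(1, 1), (2, 3), (3, 2)] the point (1, 1) is dominated by (2, 3), so only the other two points are reported.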

Divide and Conquer Approach: One problem with the previous algorithm is that it relies on sorting. You might consider whether you can solve this problem without invoking a sorting algorithm. One of the basic maxims of algorithm design is to first approach any problem using one of the standard algorithm design paradigms, e.g., divide and conquer, dynamic programming, greedy algorithms, depth-first search. We will talk more about these methods as the semester continues. For this problem, divide-and-conquer is a natural method to choose. What is this paradigm?

Divide: Divide the problem into two subproblems (ideally of approximately equal sizes),
Conquer: Solve each subproblem recursively, and
Combine: Combine the solutions to the two subproblems into a global solution.

How shall we divide the problem? I can think of a couple of ways. One is similar to how MergeSort operates: just take the array of points P[1..n] and split it into two subarrays of equal size, P[1..n/2] and P[n/2+1..n]. Because we do not sort the points, there is no particular relationship between the points in one side of the list and the other. Another approach, which is more reminiscent of QuickSort, is to select a random element from the list, called a pivot, x = P[r], where r is a random integer in the range from 1 to n, and then partition the list into two sublists: those elements whose x-coordinates are less than or equal to x, and those that are greater than x. This will not be guaranteed to split the list into two equal parts, but on average it can be shown that it does a pretty good job. (The quicksort method will also work, but leads to a tougher analysis.) Let's consider the first method.

Observe that if a point is to be maximal overall, then it must be maximal in one of the two sublists. However, just because a point is maximal in some list does not imply that it is globally maximal. (Consider point (7, 10) in the example.) A point of one sublist is globally maximal only if no point of the other sublist dominates it; in particular, if a maximal point of one sublist dominates (or is not dominated by) everything relevant in the other sublist, then we can assert that it is maximal. This suggests the following algorithm. The key ingredient is a function MaxMerge() that takes the maxima of two sets and merges them into an overall set of maxima. The input will be a point array, and a point array will be returned; let us assume that each returned list of maximal points is in sorted order according to the x-coordinates of the maximal points. We will describe the algorithm at a very high level; here is a more concrete outline.

Max Dominance: Divide-and-Conquer
    MaxDom4(P, n) {
        if (n == 1) return {P[1]};        // one point is trivially maximal
        m = n/2;                          // midpoint of list
        M1 = MaxDom4(P[1..m], m);         // solve for first half
        M2 = MaxDom4(P[m+1..n], n-m);     // solve for second half
        return MaxMerge(M1, M2);          // merge the results
    }

The general process is illustrated below.

The main question is how the procedure MaxMerge() is implemented, because it does all the work. I will describe the procedure at a very high level, and the details will be left as an exercise. It operates by walking through each of the two sorted lists of maximal points. It maintains two pointers, one pointing to the next unprocessed item in each list. Think of these as fingers. Take the finger pointing to the point with the smaller x-coordinate. If its y-coordinate is larger than the y-coordinate of the point under the other finger, then this point is maximal, and is copied to the next position of the result list; otherwise it is not copied. In either case, we move to the next point in the same list, and repeat the process. The result list is returned. Observe that because we spend a constant amount of time processing each point (either copying it to the result list or skipping over it), the total execution time of this procedure is O(n).

Recurrences: How do we analyze recursive procedures like this one? If there is a simple pattern to the sizes of the recursive calls, then the best way is usually by setting up a recurrence, that is, a function which is defined recursively in terms of itself. We break the problem into two subproblems of size roughly n/2 (we will say exactly n/2 for simplicity), and the additional overhead of merging the solutions is O(n). We will ignore constant factors, writing O(n) just as n, giving:

    T(n) = 1              if n = 1,
    T(n) = 2T(n/2) + n    if n > 1.
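Since the text leaves the details of MaxMerge() as an exercise, the following Python sketch is one way of filling them in (my own filling-in, not the text's official solution). It relies on the fact that a maxima list sorted by increasing x necessarily has strictly decreasing y, so the other list's current finger always carries the largest remaining y of that list:

```python
def max_merge(m1, m2):
    """Merge two maxima lists (each sorted by increasing x, hence
    decreasing y) into the maxima list of the union. O(len(m1)+len(m2)).
    Assumes all coordinates are distinct."""
    result = []
    i = j = 0
    while i < len(m1) and j < len(m2):
        if m1[i][0] < m2[j][0]:
            # m1[i] survives iff its y beats the largest y remaining
            # in m2, which is m2[j]'s y (y decreases along the list).
            if m1[i][1] > m2[j][1]:
                result.append(m1[i])
            i += 1
        else:
            if m2[j][1] > m1[i][1]:
                result.append(m2[j])
            j += 1
    result.extend(m1[i:])   # leftovers have larger x than anything
    result.extend(m2[j:])   # already processed, so they all survive
    return result
```

For example, merging the staircases [(1, 5), (3, 4)] and [(2, 6), (4, 3)] drops (1, 5), which is dominated by (2, 6), and keeps the rest.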

[Fig. 68: Divide and conquer approach. Three panels: the input and initial partition; the solutions to the subproblems; and the merged solution.]

Solving Recurrences by The Master Theorem: There are a number of methods for solving the sort of recurrences that show up in divide-and-conquer algorithms. The easiest method is to apply the Master Theorem, given in CLRS. Here is a slightly more restrictive version, but one that is adequate for a lot of instances. See CLRS for the more complete version of the Master Theorem and its proof.

Theorem: (Simplified Master Theorem) Let a ≥ 1 and b > 1 be constants and let T(n) be the recurrence

    T(n) = aT(n/b) + cn^k,

defined for n ≥ 0.

    Case (1): if a > b^k, then T(n) is Θ(n^(log_b a)).
    Case (2): if a = b^k, then T(n) is Θ(n^k log n).
    Case (3): if a < b^k, then T(n) is Θ(n^k).

Using this version of the Master Theorem we can see that in our recurrence a = 2, b = 2, and k = 1, so a = b^k and case (2) applies. Thus T(n) is Θ(n log n).

There are many recurrences that cannot be put into this form. For example, the following recurrence is quite common: T(n) = 2T(n/2) + n log n. This solves to T(n) = Θ(n log^2 n), but the Master Theorem (either this form or the one in CLRS) will not tell you this. For such recurrences, other methods are needed.

Expansion: A more basic method for solving recurrences is that of expansion (which CLRS calls iteration). This is a rather painstaking process of repeatedly applying the definition of the recurrence until (hopefully) a simple pattern emerges. This pattern usually results in a summation that is easy to solve. If you look at the proof in CLRS for the Master Theorem, it is actually based on expansion.

Let us consider applying this to the following recurrence. We assume that n is a power of 3.

    T(1) = 1
    T(n) = 2T(n/3) + n    if n > 1.

First we expand the recurrence into a summation, until seeing the general pattern emerge.

    T(n) = 2T(n/3) + n
         = 2(2T(n/9) + n/3) + n       = 4T(n/9) + (n + 2n/3)
         = 4(2T(n/27) + n/9) + (n + 2n/3)   = 8T(n/27) + (n + 2n/3 + 4n/9)
         . . .
         = 2^k T(n/3^k) + sum_{i=0}^{k-1} (2^i n / 3^i)
         = 2^k T(n/3^k) + n sum_{i=0}^{k-1} (2/3)^i.

The parameter k is the number of expansions (not to be confused with the value of k we introduced earlier in the Master Theorem). We want to know how many expansions are needed to arrive at the basis case. To do this we set n/(3^k) = 1, meaning that k = log_3 n. Substituting this in and using the identity a^(log b) = b^(log a), we have:

    T(n) = 2^(log_3 n) T(1) + n sum_{i=0}^{log_3 n − 1} (2/3)^i
         = n^(log_3 2) + n sum_{i=0}^{log_3 n − 1} (2/3)^i.
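The expansion above leads, after summing the geometric series, to the closed form T(n) = 3n − 2n^(log_3 2), which is Θ(n). A quick numerical sanity check of that claim (my own check, computing the recurrence directly on powers of 3):

```python
import math

def T(n):
    # The recurrence itself: T(1) = 1, T(n) = 2*T(n/3) + n for n a power of 3.
    return 1 if n == 1 else 2 * T(n // 3) + n

def closed_form(n):
    # The claimed solution: T(n) = 3n - 2 * n**(log_3 2), which is Theta(n).
    return 3 * n - 2 * n ** math.log(2, 3)

for k in range(8):
    n = 3 ** k
    # The two agree exactly (up to floating-point error in the exponent).
    assert abs(T(n) - closed_form(n)) < 1e-6 * max(1, n)
```

For instance, T(9) = 2·(2·1 + 3) + 9 = 19, and 3·9 − 2·9^(log_3 2) = 27 − 2·4 = 19.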

Applying the formula for the geometric series and simplifying, we get:

    T(n) = n^(log_3 2) + n · (1 − (2/3)^(log_3 n)) / (1 − 2/3)
         = n^(log_3 2) + 3n (1 − (2/3)^(log_3 n))
         = n^(log_3 2) + 3n (1 − n^(log_3 (2/3)))
         = n^(log_3 2) + 3n (1 − n^((log_3 2) − 1))
         = n^(log_3 2) + 3n − 3n^(log_3 2)
         = 3n − 2n^(log_3 2).

Since log_3 2 ≈ 0.631 < 1, T(n) is dominated by the 3n term asymptotically, and so it is Θ(n).

Induction and Constructive Induction: Another technique for solving recurrences (and this works for summations as well) is to guess the solution, or the general form of the solution, and then attempt to verify its correctness through induction. Sometimes there are parameters whose values you do not know. This is fine: in the course of the induction proof, you will usually find out what these values must be. We will consider a famous example, that of the Fibonacci numbers.

    F_0 = 0
    F_1 = 1
    F_n = F_{n−1} + F_{n−2}    for n ≥ 2.

The Fibonacci numbers arise in data structure design. If you study AVL (height balanced) trees in data structures, you will learn that the minimum-sized AVL trees are produced by the recursive construction given below. To construct a minimum-sized AVL tree of height i, you create a root node whose children consist of minimum-sized AVL trees of heights i − 1 and i − 2.

[Fig. 69: Minimum-sized AVL trees, with L(0) = 1, L(1) = 1, L(2) = 2, L(3) = 3, L(4) = 5.]

Let L(i) denote the number of leaves in the minimum-sized AVL tree of height i. Thus the number of leaves obeys L(0) = L(1) = 1 and L(i) = L(i − 1) + L(i − 2). It is easy to see that L(i) = F_{i+1}.

If you expand the Fibonacci series for a number of terms, you will observe that F_n appears to grow exponentially, but not as fast as 2^n. It is tempting to conjecture that

    F_n ≤ φ^(n−1),

for some real parameter φ, where 1 < φ < 2. We can use induction to prove this and derive a bound on φ.

Lemma: For all integers n ≥ 1, F_n ≤ φ^(n−1) for some constant φ, 1 < φ < 2.

Proof: We will try to derive the tightest bound we can on the value of φ.

Basis: For the basis case we consider n = 1. Observe that F_1 = 1 ≤ φ^0.

Induction step: For the induction step, let us assume that F_m ≤ φ^(m−1) whenever 1 ≤ m < n. Using this induction hypothesis we will show that the lemma holds for n itself, whenever n ≥ 2. Since n ≥ 2, we have F_n = F_{n−1} + F_{n−2}. Now, since n − 1 and n − 2 are both strictly less than n, we can apply the induction hypothesis, from which we have

    F_n ≤ φ^(n−2) + φ^(n−3) = φ^(n−3)(1 + φ).
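The lemma, with the critical value φ = (1 + √5)/2 that the constructive induction below produces, can be spot-checked numerically (an illustration over a finite range, not a proof):

```python
import math

phi = (1 + math.sqrt(5)) / 2     # the critical value derived below

# Check F_n <= phi**(n-1) for n = 1..59.
a, b = 0, 1                      # a = F_0, b = F_1
for n in range(1, 60):
    a, b = b, a + b              # after this line, a == F_n
    assert a <= phi ** (n - 1)
```

Any smaller choice of φ (say 1.6) makes the assertion fail once n is moderately large, which matches the claim that this φ is the tightest bound of this form.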

We want to show that this is at most φ^(n−1) (for a suitable choice of φ). Clearly this will be true if and only if (1 + φ) ≤ φ^2. This is not true for all values of φ (for example, it is not true when φ = 1, but it is true when φ = 2). At the critical value of φ this inequality will be an equality, implying that we want to find the roots of the equation

    φ^2 − φ − 1 = 0.

By the quadratic formula we have

    φ = (1 ± √(1 + 4))/2 = (1 ± √5)/2.

Since √5 ≈ 2.24, observe that one of the roots is negative, and hence would not be a possible candidate for φ. The positive root is

    φ = (1 + √5)/2 ≈ 1.618.

Notice that not only did we prove the lemma by induction, but we actually determined the value of φ which makes the lemma true. This is why this method is called constructive induction.

By the way, the value φ = (1 + √5)/2 is a famous constant in mathematics: it is the golden ratio. Two numbers A and B satisfy the golden ratio if

    (A + B)/A = A/B.

It is easy to verify that A = φ and B = 1 satisfy this condition. This proportion occurs throughout the world of art and architecture.

There is a very subtle bug in the preceding proof. Can you spot it? The error occurs in the case n = 2. Here we claim that F_2 = F_1 + F_0 and then apply the induction hypothesis to both F_1 and F_0. But the induction hypothesis only applies for m ≥ 1, and hence cannot be applied to F_0! To fix it we could include F_2 as part of the basis case as well.

Supplemental Lecture 3: Recurrences and Generating Functions

Read: This material is not covered in CLRS. There is a good description of generating functions in D. E. Knuth, The Art of Computer Programming, Vol. 1.

Generating Functions: The method of constructive induction provided a way to get a bound on F_n, but we did not get an exact answer, and we had to generate a good guess before we were even able to start. Let us consider an approach to determine an exact representation of F_n, which requires no guesswork. This method is based on a very elegant concept, called a generating function. Consider any infinite sequence:

    a_0, a_1, a_2, a_3, . . .

If we would like to "encode" this sequence succinctly, we could define a polynomial function such that these are the coefficients of the function:

    G(z) = a_0 + a_1 z + a_2 z^2 + a_3 z^3 + · · ·

This is called the generating function of the sequence. What is z? It is just a symbolic variable; we will (almost) never assign it a specific value. Thus, every infinite sequence of numbers has a corresponding generating function, and vice versa. What is the advantage of this representation? It turns out that we can perform arithmetic

transformations on these functions (e.g., multiplying them, adding them, differentiating them), and these have corresponding effects on the underlying sequences. The trick in dealing with generating functions is to figure out how various manipulations of the generating function generate algebraically equivalent forms.

There are certain common tricks that people use to manipulate generating functions. The first is to observe that there are some functions for which it is very easy to get a power series expansion. For example, the following is a simple consequence of the formula for the geometric series. If 0 < c < 1 then

    sum_{i=0}^{∞} c^i = 1/(1 − c).

Setting z = c, we have

    1/(1 − z) = 1 + z + z^2 + z^3 + · · ·

(In other words, 1/(1 − z) is the generating function for the sequence (1, 1, 1, . . .).) More generally, given a constant a we have

    1/(1 − az) = 1 + az + a^2 z^2 + a^3 z^3 + · · · ,

the generating function for (1, a, a^2, a^3, . . .).

Let's consider the generating function for the Fibonacci numbers:

    G(z) = F_0 + F_1 z + F_2 z^2 + F_3 z^3 + · · · = z + z^2 + 2z^3 + 3z^4 + 5z^5 + · · ·

The goal is to get at the coefficients of its power series expansion. In order to do this, let's try the following manipulation: compute G(z) − zG(z) − z^2 G(z). Notice that multiplying the generating function by a factor of z has the effect of shifting the sequence to the right:

    G(z)     = F_0 + F_1 z + F_2 z^2 + F_3 z^3 + F_4 z^4 + · · ·
    zG(z)    =       F_0 z + F_1 z^2 + F_2 z^3 + F_3 z^4 + · · ·
    z^2 G(z) =               F_0 z^2 + F_1 z^3 + F_2 z^4 + · · ·

Combining terms, we get

    (1 − z − z^2) G(z) = F_0 + (F_1 − F_0) z + (F_2 − F_1 − F_0) z^2 + · · · + (F_i − F_{i−1} − F_{i−2}) z^i + · · · = z.

Observe that every term except the second is equal to zero by the definition of F_i. (The particular manipulation we picked was chosen to cause this cancellation to occur.) From this we may conclude that

    G(z) = z / (1 − z − z^2).

So, now we have an alternative representation for the Fibonacci numbers: they are the coefficients of this function if expanded as a power series. So what good is this? It would be great if we could modify our generating function to be in the form of 1/(1 − az) for some constant a, since then we could extract the coefficients of the power series easily. In order to do this, we would like to rewrite the generating function in the following form:

    G(z) = z/(1 − z − z^2) = A/(1 − az) + B/(1 − bz),
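The identity (1 − z − z^2) G(z) = z can be verified with truncated power series. The sketch below multiplies the first 20 Fibonacci coefficients by the polynomial 1 − z − z^2 and checks that everything cancels except the single z term:

```python
# First 20 Fibonacci numbers: F_0 = 0, F_1 = 1, F_i = F_{i-1} + F_{i-2}.
fib = [0, 1]
for _ in range(18):
    fib.append(fib[-1] + fib[-2])

# Multiply G(z) = sum F_i z^i by (1 - z - z^2), truncated at degree 19.
prod = [0] * 20
for i, f in enumerate(fib):
    for j, c in ((0, 1), (1, -1), (2, -1)):
        if i + j < 20:
            prod[i + j] += c * f

# Every coefficient cancels except the z^1 term: (1 - z - z^2) G(z) = z.
assert prod == [0, 1] + [0] * 18
```

Each coefficient of degree i ≥ 2 in the product is F_i − F_{i−1} − F_{i−2}, which vanishes by the recurrence, just as in the derivation above.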

B. Since duplicate elements make our life more complex (by creating multiple elements of the same rank). we will make the simplifying assumption that all the elements are distinct for now.for some A.. We will skip the steps in doing this. (In particular.. this is called the method of partial fractions. Fn = φn / 5. b. but surprisingly difﬁcult to solve optimally. multiply the equation by 1 − φz. . This is a rather remarkable. Selection: We have discussed recurrences and the divide-and-conquer method of solving problems. The problem that we will consider is very easy to state. Suppose that you are given a set of n numbers. and no guesswork was needed. The only parts that involved some cleverness (beyond the invention of generating functions) was (1) coming up with the simple closed form formula for G(z) by taking appropriate differences and applying the rule for the recurrence. The minimum is of rank 1 and the maximum is of rank n. rounded to the nearest integer. If n is odd then the median is deﬁned to be the element of rank (n + 1)/2. Thus we have. ˆ when you observe that φ < 1. Today we will give a rather surprising (and very tricky) algorithm which shows the power of these techniques. From this we have the following: G(z) = 1 √ 5 ( 1 −1 + φz ˆ + −φz ∞ + + φ2 z 2 ˆ −φ2 z 2 + + . You might try this for a few speciﬁc values of n to see why this is true. a. Of particular interest in statistics is the median. When n is even there are two natural choices. but it is not hard to verify the roots of (1 − az)(1 − bz) (which are 1/a and 1/b) must be equal to the roots of 1 − z − z 2 . and (2) applying the method of partial fractions to get the generating function into one for which we could easily read off the ﬁnal coefﬁcients. Supplemental Lecture 4: Medians and Selection Read: Chapter 9 of CLRS.) Now we are in good shape. It will be easy to get around this assumption later. 
Define the rank of an element to be one plus the number of elements that are smaller than this element. Thus, the rank of an element is its final position if the set is sorted. The minimum is of rank 1 and the maximum is of rank n. Of particular interest in statistics is the median. If n is odd then the median is defined to be the element of rank (n + 1)/2. When n is even there are two natural choices, namely the elements of ranks n/2 and (n/2) + 1. In statistics it is common to return the average of these two elements; we will define the median to be either of these elements. Medians are useful as measures of the central tendency of a set, especially when the distribution of values is highly skewed. For example, the median income in a community is likely to be a more meaningful measure of the central tendency than the average is, since if Bill Gates lives in your community then his gigantic income may significantly bias the average, whereas it cannot have a significant influence on the median. Medians are also useful in algorithm design, since it is often desirable to partition a set about its median value,

Medians are useful as measures of the central tendency of a set, especially when the distribution of values is highly skewed. For example, the median income in a community is likely to be a more meaningful measure of the central tendency than the average, since if Bill Gates lives in your community then his gigantic income may significantly bias the average, whereas it cannot have a significant influence on the median. Medians are also useful in divide-and-conquer applications, where it is often desirable to partition a set about its median value into two sets of roughly equal size. Today we will focus on the following generalization, called the selection problem.

Selection: Given a set A of n distinct numbers and an integer k, 1 ≤ k ≤ n, output the element of A of rank k.

The selection problem can easily be solved in Θ(n log n) time, simply by sorting the numbers of A and then returning A[k]. The question is whether it is possible to do better. In particular, is it possible to solve this problem in Θ(n) time? We will see that the answer is yes, and the solution is far from obvious. There are two principal algorithms for solving the selection problem, but they differ only in one step, which involves judiciously choosing an item from the array, called the pivot element, which we will denote by x. Later we will see how to choose x, but for now just think of it as a random element of A.

The Sieve Technique: The reason for introducing this algorithm is that it illustrates a very important special case of divide-and-conquer, which I call the sieve technique. We think of divide-and-conquer as breaking the problem into a small number of smaller subproblems, which are then solved recursively. The sieve technique is a special case, where the number of subproblems is just 1.

The sieve technique works in phases as follows. It applies to problems where we are interested in finding a single item from a larger set of n items. We do not know which item is of interest; however, after doing some amount of analysis of the data, taking say Θ(n^k) time for some constant k, we find that we still do not know what the desired item is, but we can identify a large enough number of elements that cannot be the desired value, and these can be eliminated from further consideration. In particular, "large enough" means that the number of eliminated items is at least some fixed constant fraction of n (e.g. n/2, n/3, 0.0001n). We then solve the problem recursively on whatever items remain. Each of the resulting recursive solutions then does the same thing, eliminating a constant fraction of the remaining set.

Applying the Sieve to Selection: To see more concretely how the sieve technique works, let us apply it to the selection problem. Recall that we are given an array A[1..n] and an integer k, and want to find the k-th smallest element of A. Since the algorithm will be applied inductively, we will assume that we are given a subarray A[p..r], as we did in MergeSort, and we want to find the kth smallest item (where k ≤ r − p + 1). The initial call will be to the entire array A[1..n].

The algorithm chooses a pivot x and then partitions A into three parts: A[q] contains the element x, subarray A[p..q−1] contains all the elements that are less than x, and A[q+1..r] contains all the elements that are greater than x. (Recall that we assumed that all the elements are distinct.) Within each subarray, the items may appear in any order. We will discuss the details of choosing the pivot and partitioning later. This is illustrated below.

It is easy to see that the rank of the pivot x is q − p + 1 in A[p..r]. Let xRank = q − p + 1. If k = xRank, then the pivot is the kth smallest, and we may just return it. If k < xRank, then we need to recursively search in A[p..q−1], and if k > xRank then we need to recursively search in A[q+1..r]. In this latter case we have eliminated the pivot and the xRank − 1 smaller elements, so we want to find the element of rank k − xRank in the right subarray. Notice that this algorithm satisfies the basic form of a sieve algorithm: it analyzes the data (by choosing the pivot element and partitioning), eliminates some part of the data set, and recurses on the rest. When k = xRank we get lucky and eliminate everything; otherwise we eliminate either the pivot and the right subarray or the pivot and the left subarray. Here is the complete pseudocode.

[Fig. 70: Selection Algorithm. Before partitioning, A[p..r] = 5 9 2 6 4 1 3 7; after partitioning about the pivot x = 4, the array is 3 1 2 4 6 9 5 7, with A[p..q−1] < x, A[q] = x, and A[q+1..r] > x. The figure also traces the full algorithm with k = 6: partition with pivot 4 gives xRank = 4; recurse on 6 9 5 7 with k = 6 − 4 = 2; partition with pivot 7 gives xRank = 3; recurse on 6 5 with k = 2; partition with pivot 6 gives xRank = 2 (done).]

Selection by the Sieve Technique
    Select(array A, int p, int r, int k) {        // return kth smallest of A[p..r]
        if (p == r) return A[p]                   // only 1 item left, return it
        else {
            x = ChoosePivot(A, p, r)              // choose the pivot element
            q = Partition(A, p, r, x)             // partition <A[p..q-1], x, A[q+1..r]>
            xRank = q - p + 1                     // rank of the pivot
            if (k == xRank) return x              // the pivot is the kth smallest
            else if (k < xRank)
                return Select(A, p, q-1, k)       // select from left subarray
            else
                return Select(A, q+1, r, k-xRank) // select from right subarray
        }
    }
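The pseudocode above can be transcribed directly into Python. This is a sketch: the notes leave ChoosePivot abstract at this point, so this version simply takes the first element as the pivot, and the partition is done with list comprehensions rather than in place:

```python
def select(A, k):
    """Return the k-th smallest (1-indexed) element of a list of distinct numbers."""
    if len(A) == 1:
        return A[0]
    x = A[0]                          # stand-in pivot; the notes refine this choice later
    less = [a for a in A if a < x]    # elements with rank < xRank
    greater = [a for a in A if a > x] # elements with rank > xRank
    x_rank = len(less) + 1            # rank of the pivot
    if k == x_rank:
        return x                      # the pivot is the kth smallest
    elif k < x_rank:
        return select(less, k)        # select from left part
    else:
        return select(greater, k - x_rank)  # select from right part

assert select([5, 9, 2, 6, 4, 1, 3, 7], 6) == 6   # the figure's example
```

With this naive pivot the worst case is quadratic; the point here is only the sieve structure of the recursion.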

The question that remains is how many elements did we succeed in eliminating? Suppose that the pivot x turns out to be of rank q in the array. If k = q, then we are done. Otherwise, we need to search one of the two subarrays. They are of sizes q − 1 and n − q, respectively. The subarray that contains the kth smallest element will generally depend on what k is, so in the worst case, k will be chosen so that we have to recurse on the larger of the two subarrays. Thus if q > n/2, then we may have to recurse on the left subarray of size q − 1, and if q < n/2, then we may have to recurse on the right subarray of size n − q. In either case, we are in trouble if q is very small or very large. If x is the largest or smallest element in the array, then we may only succeed in eliminating one element with each phase.

Let us suppose for now (optimistically) that we are able to design the procedure ChoosePivot in such a way that it eliminates exactly half the array with each phase, meaning that we recurse on the remaining n/2 elements. This would lead to the following recurrence:

    T(n) = 1                 if n = 1,
    T(n) = T(n/2) + n        otherwise.

We can solve this either by expansion (iteration) or the Master Theorem. If we expand this recurrence level by level, we see that we get the summation

    T(n) = n + n/2 + n/4 + · · · ≤ Σ_{i=0}^∞ n/2^i = n Σ_{i=0}^∞ 1/2^i.

Recall the formula for the infinite geometric series: for any c such that |c| < 1, Σ_{i=0}^∞ c^i = 1/(1 − c). Using this we have T(n) ≤ 2n ∈ O(n). (This only proves the upper bound on the running time, but it is easy to see that it takes at least Ω(n) time, so the total running time is Θ(n).)

This is a bit counterintuitive. Normally you would think that in order to design a Θ(n) time algorithm you could only make a single, or perhaps a constant number of, passes over the data set. In this algorithm we make many passes (it could be as many as lg n). The key is that the procedure eliminates at least some constant fraction of the array after each partitioning step, so we get this convergent geometric series in the analysis, which shows that the total running time is indeed linear in n. This lesson is well worth remembering. It is often possible to achieve running times in ways that you would not expect. Note that the assumption of eliminating half was not critical. If we eliminated even one per cent, then the recurrence would have been T(n) = T(99n/100) + n, and we would have gotten a geometric series involving 99/100, which is still less than 1, implying a convergent series. Eliminating any constant fraction would have been good enough.

Choosing the Pivot: There are two issues that we have left unresolved. The first is how to choose the pivot element, and the second is how to partition the array. Both need to be solved in Θ(n) time. The second problem is a rather easy programming exercise; the partitioning algorithm will split the array into A[1..q−1] < x, A[q] = x and A[q+1..n] > x. Later, when we discuss QuickSort, we will discuss partitioning in detail. For the rest of the lecture, let's concentrate on how to choose the pivot. Ideally x should have a rank that is neither too large nor too small. If x is one of the smallest elements of A or one of the largest, then we get into trouble, because we may only eliminate it and the few smaller or larger elements of A. If we could select a pivot whose rank q is roughly in the middle, then we will be in good shape. In fact, we do not need to be this precise: if n/4 ≤ q ≤ 3n/4, then the larger subarray will never be larger than 3n/4.

Earlier we said that we might think of the pivot as a random element of the array A. Actually this is not such a bad idea, and it works pretty well in practice. The reason is that roughly half of the elements lie between ranks n/4 and 3n/4, so picking a random element as the pivot will succeed about half the time in eliminating at least n/4 of the elements.
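The geometric-series bound T(n) ≤ 2n can be spot-checked numerically. This is a sketch, assuming n is a power of 2 so that the halving is exact:

```python
def T(n):
    # Idealized sieve recurrence: T(1) = 1, T(n) = T(n // 2) + n.
    return 1 if n == 1 else T(n // 2) + n

# The expansion n + n/2 + n/4 + ... gives T(2^k) = 2^(k+1) - 1 <= 2n.
assert all(T(2**i) <= 2 * 2**i for i in range(20))
```

For example T(1024) = 2047, just under the bound 2 · 1024 = 2048.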

Of course, we might be continuously unlucky, but a careful analysis will show that the expected running time is still Θ(n). Since we haven't done any probabilistic analyses yet, we will not attempt this. (We will return to this later.) Instead, we will describe a rather complicated method for computing a pivot element that achieves the desired properties deterministically. Recall that we are given an array A[1..n], and we want to compute an element x whose rank is (roughly) between n/4 and 3n/4. We will describe this algorithm at a fairly high level, since the details are rather involved, and the floor and ceiling arithmetic is a bit complicated, so to simplify things we will assume that n is evenly divisible by 5. Here is the description for ChoosePivot:

Groups of 5: Partition A into groups of 5 elements, e.g. A[1..5], A[6..10], A[11..15], etc. There will be exactly m = n/5 such groups (in general, the last one might have fewer than 5 elements).

Group medians: Compute the median of each group of 5, and copy the group medians to a new array B. There will be m group medians. We do not need an intelligent algorithm to do this, since each group has only a constant number of elements; for example, we could just BubbleSort each group and take the middle element. Each group takes Θ(1) time, and repeating this n/5 times gives a total running time of Θ(n).

Median of medians: Compute the median of the group medians. For this, we will have to call the selection algorithm recursively on B, as Select(B, 1, m, k), where k = ⌊(m + 1)/2⌋. Let x be this median of medians, and return x as the desired pivot.

The algorithm is illustrated in the figure below.

[Fig. 71: Choosing the Pivot. The array (14 57 24 6 37 | 32 2 43 30 25 | 23 52 12 63 3 | 5 44 17 34 64 | 10 27 48 8 19 | 60 21 1 55 41 | 29 11 58 39 ...) is split into groups of 5; the group medians are computed, and the median of the group medians, 30, is the final pivot. (The sorting of the group medians is not really performed.)]

To establish the correctness of this procedure, we need to argue that x satisfies the desired rank properties.

Lemma: The element x is of rank at least n/4 and at most 3n/4 in A.

Proof: We will show that x is of rank at least n/4; the other part of the proof is essentially symmetrical. To do this, we need to show that there are at least n/4 elements that are less than or equal to x. Consider the groups shown in the tabular form above. Observe that at least half of the group medians are less than or equal to x (because x is their median). And for each such group median, there are three elements that are less than or equal to this median within its group (because it is the median of its group). Therefore, there are at least 3(n/5)/2 = 3n/10 ≥ n/4 elements that are less than or equal to x in the entire array.

Analysis: The last order of business is to analyze the running time of the overall algorithm. We achieved the main goal, namely eliminating a constant fraction (at least 1/4) of the remaining list at each stage of the algorithm. However, in order to achieve this, within ChoosePivot we needed to make a recursive call to Select on an array B consisting of n/5 elements. Everything else took only Θ(n) time. The recursive call in Select will be made to a list no larger than 3n/4. Thus the running time is

    T(n) = 1                            if n = 1,
    T(n) = T(n/5) + T(3n/4) + n         otherwise.

This is a very strange recurrence because it involves a mixture of different fractions (n/5 and 3n/4). This mixture makes it impossible to use the Master Theorem, and difficult to apply iteration. However, this is a good place to apply constructive induction. We know we want an algorithm that runs in Θ(n) time, so we will try to establish the following.

Theorem: There is a constant c such that T(n) ≤ cn.

Proof: (by strong induction on n)

Basis: (n = 1) In this case we have T(n) = 1, and so T(n) ≤ cn as long as c ≥ 1.

Step: We assume that T(n') ≤ cn' for all n' < n. As usual, we will ignore floors and ceilings, and write the Θ(n) term as n for concreteness. By definition we have T(n) = T(n/5) + T(3n/4) + n. Since n/5 and 3n/4 are both less than n, we can apply the induction hypothesis, giving

    T(n) ≤ c(n/5) + c(3n/4) + n = cn(1/5 + 3/4) + n = cn(19/20) + n = n((19c/20) + 1).

This last expression will be ≤ cn provided that we select c such that c ≥ (19c/20) + 1. Solving for c, we see that this is true provided that c ≥ 20. Combining the constraints c ≥ 1 and c ≥ 20, we see that by letting c = 20 we are done.

A natural question is why we picked groups of 5. If you look at the proof above, you will see that it works for any group size that is strictly greater than 4. (You might try replacing the 5 with 3 or 6 and see what happens.)

Supplemental Lecture 5: Analysis of BucketSort

Probabilistic Analysis of BucketSort: We begin with a quick-and-dirty analysis of BucketSort. Since there are n buckets, and the items fall uniformly among them, we would expect a constant number of items per bucket. Since we are using a quadratic time algorithm to sort the elements of each bucket, the expected insertion time for each bucket is only a constant. Therefore the expected running time of the algorithm is Θ(n). This quick-and-dirty analysis is probably good enough to convince yourself of this algorithm's basic efficiency. A careful analysis involves understanding a bit about probabilistic analyses of algorithms. Since we haven't done any probabilistic analyses yet, let's try doing this one. (This one is rather typical.)

The first thing to do in a probabilistic analysis is to define a random variable that describes the essential quantity that determines the execution time. A discrete random variable can be thought of as a variable that takes on some set of discrete values with certain probabilities.
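The bound T(n) ≤ 20n obtained by constructive induction can be spot-checked numerically. This is a sketch: the arguments are truncated to integers with floors standing in for the ignored floor/ceiling arithmetic, which only makes the subproblems (and hence T) smaller:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(n/5) + T(3n/4) + n, with T at the bottom equal to 1.
    if n <= 1:
        return 1
    return T(n // 5) + T(3 * n // 4) + n

# The theorem says T(n) <= 20n for all n.
assert all(T(n) <= 20 * n for n in range(1, 5000))
```

In practice the constant is far smaller than 20; the induction simply needs some c satisfying c ≥ 19c/20 + 1.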

More formally, a discrete random variable is a function that maps some discrete sample space (the set of possible values) onto the reals (the probabilities). For 0 ≤ i ≤ n − 1, let X_i denote the random variable that indicates the number of elements assigned to the i-th bucket. Since we are using a quadratic time algorithm to sort the elements of each bucket, we are interested in the expected sorting time for bucket i, which is Θ(X_i²). So this leads to the key question: what is the expected value of X_i², denoted E[X_i²]?

Because the elements are assumed to be uniformly distributed, each element has an equal probability of going into any bucket; in particular, it has probability p = 1/n of going into the ith bucket. So how many items do we expect to wind up in bucket i? We can analyze this by thinking of each element of A as being represented by a coin flip (with a biased coin, which has different probabilities of heads and tails). With probability p = 1/n the number goes into bucket i, which we will interpret as the coin coming up heads. With probability 1 − 1/n the item goes into some other bucket, which we will interpret as the coin coming up tails. Since we assume that the elements of A are independent of each other, all of the random variables X_i have the same probability distribution, so we may as well talk about a single random variable X, which will work for any bucket. X is just the total number of heads we see after making n tosses with this (biased) coin.

The number of times that a heads event occurs, given n independent trials in which each trial has two possible outcomes, is a well-studied problem in probability theory. Such trials are called Bernoulli trials (named after the Swiss mathematician James Bernoulli). If p is the probability of getting a head, then the probability of getting k heads in n tosses is given by the following important formula:

    P(X = k) = C(n, k) p^k (1 − p)^(n−k),    where C(n, k) = n! / (k! (n − k)!).

Although this looks messy, it is not too hard to see where it comes from. Basically, p^k is the probability of tossing k heads, (1 − p)^(n−k) is the probability of tossing n − k tails, and C(n, k) is the total number of different ways that the k heads could be distributed among the n tosses. This probability distribution (as a function of k, for a given n and p) is called the binomial distribution, and is denoted b(k; n, p).

If you consult a standard textbook on probability and statistics, you will see the two important facts that we need to know about the binomial distribution, namely that its mean E[X] and its variance Var[X] are

    E[X] = np    and    Var[X] = E[X²] − E²[X] = np(1 − p).

We want to determine E[X²]. By the above formulas and the fact that p = 1/n, we can derive this as

    E[X²] = Var[X] + E²[X] = np(1 − p) + (np)² = (1 − 1/n) + 1 = 2 − 1/n.

Thus, for large n, the time to insert the items into any one of the linked lists is just a shade less than 2. Summing up over all n buckets gives a total running time of Θ(2n) = Θ(n). This is exactly what our quick-and-dirty analysis gave us, but now we know it is true with confidence.

Supplemental Lecture 6: Long Integer Multiplication

Read: This material on integer multiplication is not covered in CLRS.

Long Integer Multiplication: The following little algorithm shows a bit more about the surprising applications of divide-and-conquer. The problem that we want to consider is how to perform arithmetic on long integers, and multiplication in particular. The reason for doing arithmetic on long numbers stems from cryptography. Most techniques for encryption are based on number-theoretic techniques. For example, the character string to be encrypted is converted into a sequence of numbers, and encryption keys are stored as long integers, typically containing hundreds of digits. Efficient encryption and decryption depend on being able to perform arithmetic quickly on long numbers.
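As an aside, the BucketSort moment E[X²] = 2 − 1/n derived above can be spot-checked by simulation. This is a sketch (not from the notes), seeded for reproducibility, with a loose tolerance since it is a Monte Carlo estimate:

```python
import random

random.seed(1)

def mean_sq_occupancy(n, trials=4000):
    # Throw n uniform items into n buckets; average the squared count of bucket 0.
    total = 0
    for _ in range(trials):
        count = sum(1 for _ in range(n) if random.randrange(n) == 0)
        total += count * count
    return total / trials

n = 20
est = mean_sq_occupancy(n)
assert abs(est - (2 - 1 / n)) < 0.4   # E[X^2] = 2 - 1/n = 1.95
```

The estimate hovers near 2 regardless of n, matching the claim that each bucket costs a constant in expectation.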

Addition and subtraction on large numbers are relatively easy. If n is the number of digits, then these algorithms run in Θ(n) time. (Go back and analyze your solution to the problem on Homework 1.) But the standard algorithm for multiplication runs in Θ(n²) time, which can be quite costly when lots of long multiplications are needed. This raises the question of whether there is a more efficient way to multiply two very large numbers. It would seem surprising if there were, since for centuries people have used the same algorithm that we all learn in grade school. In fact, we will see that it is possible.

Divide-and-Conquer Algorithm: We know the basic grade-school algorithm for multiplication. We normally think of this algorithm as applying on a digit-by-digit basis, but if we partition an n digit number into two "super digits" with roughly n/2 digits each, the same multiplication rule still applies.

To avoid complicating things with floors and ceilings, let's just assume that the number of digits n is a power of 2. Let A and B be the two numbers to multiply. Let A[0] denote the least significant digit and A[n−1] the most significant digit of A. Because of the way we write numbers, it is more natural to think of the elements of A as being indexed in decreasing order from left to right, as A[n−1..0], rather than the usual A[0..n−1]. Let m = n/2, and let

    w = A[n−1..m]    x = A[m−1..0]
    y = B[n−1..m]    z = B[m−1..0].

If we think of w, x, y and z as n/2 digit numbers, we can express A and B as

    A = w · 10^m + x
    B = y · 10^m + z,

and their product is

    mult(A, B) = mult(w, y) 10^(2m) + (mult(w, z) + mult(x, y)) 10^m + mult(x, z).

The operation of multiplying by 10^m should be thought of as simply shifting the number over by m positions (appending m zeros), and so is not really a multiplication. Observe that all the additions involve numbers with roughly n/2 digits, and so they take Θ(n) time each. Thus, we can express the multiplication of two long integers as the result of four products on integers of roughly half the length of the original, plus a constant number of additions and shifts, each taking Θ(n) time.

[Fig. 72: Long integer multiplication. A = (w, x) and B = (y, z), each with n/2-digit halves; the product is wy · 10^(2m) + (wz + xy) · 10^m + xz.]

This suggests that if we were to implement this algorithm, its running time would be given by the following recurrence:

    T(n) = 1                   if n = 1,
    T(n) = 4T(n/2) + n         otherwise.
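The four-product identity can be checked directly in Python. This is a sketch using Python's built-in integers to stand in for digit arrays, with divmod by 10^m playing the role of the split and 10^m multiplication playing the role of the shift:

```python
def mult4(A, B, n):
    """Multiply two n-digit numbers (n a power of 2) via the four-product split."""
    if n == 1:
        return A * B              # single-digit base case
    m = n // 2
    w, x = divmod(A, 10**m)       # A = w * 10^m + x
    y, z = divmod(B, 10**m)       # B = y * 10^m + z
    wy = mult4(w, y, m)
    wz = mult4(w, z, m)
    xy = mult4(x, y, m)
    xz = mult4(x, z, m)
    return wy * 10**(2*m) + (wz + xy) * 10**m + xz

assert mult4(1234, 5678, 4) == 1234 * 5678
```

Correct, but no faster than grade school: four half-size multiplications per level is exactly the recurrence analyzed next.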

If we apply the Master Theorem to this recurrence, we see that a = 4, b = 2, k = 1, and a > b^k, implying that Case 1 holds and the running time is Θ(n^(lg 4)) = Θ(n²). Unfortunately, this is no better than the standard algorithm.

Faster Divide-and-Conquer Algorithm: Even though the above exercise appears to have gotten us nowhere, it actually has given us an important insight. It shows that the critical element is the number of multiplications on numbers of size n/2. The number of additions (as long as it is a constant) does not affect the running time. So, if we could find a way to arrive at the same result algebraically, but by trading off multiplications in favor of additions, then we would have a more efficient algorithm. (Of course, we cannot simulate multiplication through repeated additions, since the number of additions must be a constant, independent of n.)

The key turns out to be an algebraic "trick". The quantities that we need to compute are C = wy, D = xz, and E = (wz + xy). Above, it took us four multiplications to compute these. However, observe that if instead we compute the following quantities, we can get everything we want using only three multiplications (but with more additions and subtractions):

    C = mult(w, y)
    D = mult(x, z)
    E = mult((w + x), (y + z)) − C − D = (wy + wz + xy + xz) − wy − xz = (wz + xy).

Finally we have

    mult(A, B) = C · 10^(2m) + E · 10^m + D.

Altogether we perform 3 multiplications, 4 additions, and 2 subtractions, all on numbers with n/2 digits. We still need to shift the terms into their proper final positions. The additions, subtractions, and shifts take Θ(n) time in total. So the total running time is given by the recurrence:

    T(n) = 1                   if n = 1,
    T(n) = 3T(n/2) + n         otherwise.

Now when we apply the Master Theorem, we have a = 3, b = 2 and k = 1, yielding T(n) ∈ Θ(n^(lg 3)) ≈ Θ(n^1.585).

Is this really an improvement? This algorithm carries a larger constant factor because of the overhead of recursion and the additional arithmetic operations. But asymptotics tells us that if n is large enough, then this algorithm will be superior. For example, if we assume that the clever algorithm has overheads that are 5 times greater than the simple algorithm (e.g. 5n^1.585 versus n²), then it beats the simple algorithm for n ≥ 50. If the overhead were 10 times larger, then the crossover would occur for n ≥ 260. Although this may seem like a very large number, recall that in cryptography applications, encryption keys of this length and longer are quite reasonable.

Supplemental Lecture 7: Dynamic Programming: 0–1 Knapsack Problem

Read: The introduction to Chapter 16 in CLR. The material on the Knapsack Problem is not presented in our text, but is briefly discussed in Section 17.2.

0-1 Knapsack Problem: Imagine that a burglar breaks into a museum and finds n items. Let v_i denote the value and w_i the weight of the i-th item. The burglar carries a knapsack capable of holding total weight W, and wishes to carry away the most valuable subset of items subject to the weight constraint. For example, a burglar would rather steal diamonds before gold because the value per pound is better; but he would rather steal gold before lead for the same reason. We assume that the burglar cannot take a fraction of an object, so he/she must make a decision to take the object entirely or leave it behind. (There is a version of the problem where the burglar can take a fraction of an object for a fraction of the value and weight; that version is much easier to solve.)
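The three-multiplication scheme (commonly credited to Karatsuba, though the notes do not name it) can be sketched in Python on built-in integers. The base case handles any small operand directly, so the exact digit bookkeeping is not critical to correctness:

```python
def karatsuba(A, B):
    """Multiply nonnegative integers using 3 recursive multiplications per level."""
    if A < 10 or B < 10:
        return A * B                       # small base case
    m = max(len(str(A)), len(str(B))) // 2
    w, x = divmod(A, 10**m)                # A = w * 10^m + x
    y, z = divmod(B, 10**m)                # B = y * 10^m + z
    C = karatsuba(w, y)                    # C = wy
    D = karatsuba(x, z)                    # D = xz
    E = karatsuba(w + x, y + z) - C - D    # E = wz + xy
    return C * 10**(2*m) + E * 10**m + D

assert karatsuba(1234, 5678) == 1234 * 5678
```

The single product (w + x)(y + z) replaces the pair wz and xy, which is exactly where the fourth multiplication is saved.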

More formally, given v_1, v_2, ..., v_n and w_1, w_2, ..., w_n, and W > 0, we wish to determine the subset T ⊆ {1, 2, ..., n} (of objects to "take") that maximizes Σ_{i∈T} v_i, subject to Σ_{i∈T} w_i ≤ W.

Let us assume that the v_i's, w_i's and W are all positive integers. It turns out that this problem is NP-complete, so we cannot really hope to find an efficient solution in general. However, if we make the same sort of assumption that we made in counting sort, we can come up with an efficient solution: we assume that the w_i's are small integers, and that W itself is a small integer. We show that this problem can be solved in O(nW) time. (Note that this is not very good if W is a large integer. But if we truncate our numbers to lower precision, this gives a reasonable approximation algorithm.)

Here is how we solve the problem. We construct an array V[0..n, 0..W]. For 1 ≤ i ≤ n and 0 ≤ j ≤ W, the entry V[i, j] will store the maximum value of any subset of objects {1, 2, ..., i} that can fit into a knapsack of weight j. If we can compute all the entries of this array, then the array entry V[n, W] will contain the maximum value of all n objects that can fit into the entire knapsack of weight W.

To compute the entries of the array V we will use an inductive approach. As a basis, observe that V[0, j] = 0 for 0 ≤ j ≤ W, since if we have no items then we have no value. For the inductive step, we consider two cases:

Leave object i: If we choose not to take object i, then the optimal value comes from filling a knapsack of size j with the remaining objects {1, 2, ..., i − 1}. This is just V[i − 1, j].

Take object i: If we take object i, then we gain a value of v_i but have used up w_i of our capacity. With the remaining j − w_i capacity in the knapsack, we fill it in the best possible way with objects {1, 2, ..., i − 1}. This is v_i + V[i − 1, j − w_i], and is only possible if w_i ≤ j.

Since these are the only two possibilities, we have the following rule for constructing the array V, where the ranges on i and j are i ∈ [0..n] and j ∈ [0..W]:

    V[0, j] = 0
    V[i, j] = V[i − 1, j]                                       if w_i > j
    V[i, j] = max(V[i − 1, j], v_i + V[i − 1, j − w_i])         if w_i ≤ j.

The first line states that if there are no objects, then there is no value, irrespective of j. The second line says that if object i does not fit, we must leave it. The third line implements the two cases above.

It is very easy to take these rules and produce an algorithm that computes the maximum value for the knapsack in time proportional to the size of the array, which is O((n + 1)(W + 1)) = O(nW). The algorithm is given below, and an example is shown in the figure below. The final output is V[n, W]; in the example, V[4, 10] = 90, which reflects the selection of items 2 and 4, of values $40 and $50, respectively, and weights 4 + 3 ≤ 10.

The only missing detail is which items we should select to achieve the maximum. We will leave this as an exercise. The key is to record, for each entry V[i, j] in the matrix, whether we got this entry by taking the ith item or leaving it. With this information, it is possible to reconstruct the optimum knapsack contents.

0-1 Knapsack Problem
    KnapSack(v[1..n], w[1..n], n, W) {
        allocate V[0..n][0..W];
        for j = 0 to W do V[0, j] = 0;                   // initialization
        for i = 1 to n do {
            for j = 0 to W do {
                leave_val = V[i-1, j];                   // total value if we leave i
                if (j >= w[i])                           // enough capacity to take i
                    take_val = v[i] + V[i-1, j - w[i]];  // total value if we take i
                else
                    take_val = -INFINITY;                // cannot take i
                V[i, j] = max(leave_val, take_val);
            }
        }
        return V[n, W];                                  // final value is max
    }

Fig. 73: 0–1 Knapsack Example. Values of the objects are 10, 40, 30, 50, and weights are 5, 4, 6, 3, with capacity W = 10.

    Item  Value  Weight | j=0   1   2   3   4   5   6   7   8   9  10
     (0)    —      —    |   0   0   0   0   0   0   0   0   0   0   0
      1    10      5    |   0   0   0   0   0  10  10  10  10  10  10
      2    40      4    |   0   0   0   0  40  40  40  40  40  50  50
      3    30      6    |   0   0   0   0  40  40  40  40  40  50  70
      4    50      3    |   0   0   0  50  50  50  50  90  90  90  90

The final result is V[4, 10] = 90 (for taking items 2 and 4).
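The pseudocode translates almost line for line into Python. This is a sketch using 0-indexed lists, so item i's value and weight are v[i-1] and w[i-1]; leaving an unaffordable item is expressed by simply not taking the max, rather than by a −INFINITY sentinel:

```python
def knapsack(v, w, W):
    """0-1 knapsack: maximum total value of a subset with total weight <= W."""
    n = len(v)
    # V[i][j] = best value using items 1..i with capacity j; row 0 is all zeros.
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            V[i][j] = V[i-1][j]                    # leave item i
            if w[i-1] <= j:                        # take item i, if it fits
                V[i][j] = max(V[i][j], v[i-1] + V[i-1][j - w[i-1]])
    return V[n][W]

# The example from Fig. 73: values 10,40,30,50; weights 5,4,6,3; W = 10.
assert knapsack([10, 40, 30, 50], [5, 4, 6, 3], 10) == 90
```

The double loop touches each of the (n + 1)(W + 1) entries once, matching the O(nW) bound.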

Supplemental Lecture 8: Dynamic Programming: Memoization

Read: Section 15.3 of CLRS.

Recursive Implementation: We have described dynamic programming as a method that involves the "bottom-up" computation of a table. However, the recursive formulations that we have derived have been set up in a "top-down" manner. Must the computation proceed bottom-up? Consider the following recursive implementation of the chain-matrix multiplication algorithm. The call Rec-Matrix-Chain(p, i, j) computes and returns the value of m[i, j], and the initial call is Rec-Matrix-Chain(p, 1, n). We only consider the cost here.

Recursive Chain Matrix Multiplication
    Rec-Matrix-Chain(array p, int i, int j) {
        if (i == j) m[i,j] = 0;                      // basis case
        else {
            m[i,j] = INFINITY;                       // initialize
            for k = i to j-1 do {                    // try all splits
                cost = Rec-Matrix-Chain(p, i, k)
                     + Rec-Matrix-Chain(p, k+1, j)
                     + p[i-1]*p[k]*p[j];
                if (cost < m[i,j]) m[i,j] = cost;    // update if better
            }
        }
        return m[i,j];                               // return final cost
    }

(Note that the table m[1..n, 1..n] is not really needed; we show it just to make the connection with the earlier version clearer.) This version of the procedure certainly looks much simpler, and more closely resembles the recursive formulation that we gave previously for this problem. So, what is wrong with this? The answer is that the running time is much higher than the Θ(n³) algorithm that we gave before. In fact, we will see that its running time is exponential in n, which is unacceptably slow.

Let T(n) denote the running time of this algorithm on a sequence of matrices of length n, where n = j − i + 1. If i = j then we have a sequence of length 1, and the time is Θ(1). Otherwise, if n ≥ 2, we do Θ(1) work and then consider all possible ways of splitting the sequence of length n into two sequences, one of length k and the other of length n − k, and invoke the procedure recursively on each one. So we get the following recurrence, defined for n ≥ 1. (We have replaced the Θ(1)'s with the constant 1.)

    T(n) = 1                                          if n = 1,
    T(n) = 1 + Σ_{k=1}^{n−1} (T(k) + T(n − k))        if n ≥ 2.

Claim: T(n) ≥ 2^(n−1).

Proof: The proof is by induction on n. Clearly this is true for n = 1, since T(1) = 1 = 2^0. In general, for n ≥ 2, the induction hypothesis is that T(m) ≥ 2^(m−1) for all m < n. Using this we have

    T(n) = 1 + Σ_{k=1}^{n−1} (T(k) + T(n − k)) ≥ 1 + Σ_{k=1}^{n−1} T(k) ≥ 1 + Σ_{k=1}^{n−1} 2^(k−1) = 1 + (2^(n−1) − 1) = 2^(n−1).

In the first step we simply ignored the T(n − k) term, in the second we applied the induction hypothesis, and in the last we applied the formula for the geometric series.
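The recurrence itself can be tabulated to confirm the exponential lower bound. This is a sketch; it evaluates the recurrence T(n) directly (it is not the matrix-chain algorithm, just its cost function):

```python
def T(n):
    # Running-time recurrence of the naive recursive chain-matrix algorithm.
    if n == 1:
        return 1
    return 1 + sum(T(k) + T(n - k) for k in range(1, n))

# The claim: T(n) >= 2^(n-1), i.e., exponential blow-up from recomputation.
assert all(T(n) >= 2**(n - 1) for n in range(1, 12))
```

In fact the values grow as 3^(n−1) (T(1) = 1, T(2) = 3, T(3) = 9, ...), even faster than the 2^(n−1) bound the claim proves.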

Why is this so much worse than the dynamic programming version? If you "unravel" the recursive calls on a reasonably long example, you will see that the procedure is called repeatedly with the same arguments. Intuitively, the main problem is that it recomputes the same entries over and over, whereas the bottom-up version evaluates each entry exactly once.

Memoization: Is it possible to retain the nice top-down structure of the recursive solution, while keeping the same O(n³) efficiency of the bottom-up version? The answer is yes, through a technique called memoization. Here is the idea. Reconsider the function Rec-Matrix-Chain() given above; its job is to compute m[i, j] and return its value. We fix the problem by allowing the procedure to compute each entry exactly once. One way to do this is to initialize every entry to some special value (e.g. UNDEFINED). Once an entry's value has been computed, it is never recomputed.

Memoized Chain Matrix Multiplication
    Mem-Matrix-Chain(array p, int i, int j) {
        if (m[i,j] != UNDEFINED) return m[i,j];      // already defined
        else if (i == j) m[i,j] = 0;                 // basis case
        else {
            m[i,j] = INFINITY;                       // initialize
            for k = i to j-1 do {                    // try all splits
                cost = Mem-Matrix-Chain(p, i, k)
                     + Mem-Matrix-Chain(p, k+1, j)
                     + p[i-1]*p[k]*p[j];
                if (cost < m[i,j]) m[i,j] = cost;    // update if better
            }
        }
        return m[i,j];                               // return final cost
    }

This version runs in O(n³) time. Intuitively, this is because each of the O(n²) table entries is computed only once, and the work needed to compute one table entry (most of it in the for-loop) is at most O(n).

Memoization is not usually used in practice, since it is generally slower than the bottom-up method. However, in some DP problems many of the table entries are simply not needed, and bottom-up computation may compute entries that are never used. In these cases memoization may be a good idea. If you know that most of the table will not be needed, here is a way to save space: rather than storing the whole table explicitly as an array, you can store the "defined" entries of the table in a hash table, using the index pair (i, j) as the hash key. (See Chapter 11 in CLRS for more information on hashing.)

Supplemental Lecture 9: Articulation Points and Biconnectivity

Read: This material is not covered in CLR (except as Problem 23–2).

Articulation Points and Biconnected Graphs: Today we discuss another application of DFS, this time to a problem on undirected graphs. Let G = (V, E) be a connected undirected graph. Consider the following definitions.

Articulation Point (or Cut Vertex): Any vertex whose removal (together with the removal of any incident edges) results in a disconnected graph.

Bridge: An edge whose removal results in a disconnected graph.

Biconnected: A graph is biconnected if it contains no articulation points. (In general, a graph is k-connected if k vertices must be removed to disconnect the graph.)

Biconnected graphs and articulation points are of great interest in the design of network algorithms, because these are the "critical" points whose failure will result in the network becoming disconnected.

Biconnected components: We say that two edges e1 and e2 are cocyclic if either e1 = e2 or if there is a simple cycle that contains both edges. It is not too hard to verify that this defines an equivalence relation on the edges of a graph. Notice that if two edges are cocyclic, then there are essentially two different ways of getting from one edge to the other (by going around the cycle each way). The biconnected components of a graph are the equivalence classes of the cocyclicity relation. Notice that unlike strongly connected components of a digraph (which form a partition of the vertex set), the biconnected components of a graph form a partition of the edge set.

[Fig. 74: Articulation Points and Bridges (a sample graph with its articulation points, a bridge, and its biconnected components).]

Articulation Points and DFS: Last time we observed that the notion of mutual reachability partitioned the vertices of a digraph into equivalence classes. We would like to do the same thing here. In order to determine the articulation points of an undirected graph, we will call depth-first search, and use the tree structure provided by the search to aid us. We assume that G is connected (if not, we can apply this algorithm to each individual connected component). So we assume there is only one tree in the DFS forest. First off, because G is undirected, the DFS tree has a simpler structure. In particular, we cannot distinguish between forward edges and back edges, and we just call them back edges. Also, there are no cross edges. (You should take a moment to convince yourself why this is true.)

For now, let us ask ourselves: if a vertex u is an articulation point, how would we know it by its structure in the DFS tree? What about the leaves? If u is a leaf, can it be an articulation point? Answer: no, because when you delete a leaf from a tree, the rest of the tree remains connected. Thus, even ignoring the back edges, the graph is connected after the deletion of a leaf from the DFS tree.

Let us consider the typical case of a vertex u, where u is not a leaf and u is not the root. Let v1, v2, ..., vk be the children of u. For each child there is a subtree of the DFS tree rooted at this child. If for some child there is no back edge going to a proper ancestor of u, then if we were to remove u, this subtree would become disconnected from the rest of the graph, and hence u is an articulation point. On the other hand, if every one of the subtrees rooted at the children of u has a back edge to a proper ancestor of u, then if u is removed, the graph remains connected (the back edges hold everything together). (You might think for a while why this is so.) This leads to the following.

Observation 1: An internal vertex u of the DFS tree (other than the root) is an articulation point if and only if there exists a subtree rooted at a child of u such that there is no back edge from any vertex in this subtree to a proper ancestor of u.

Please check this condition carefully to see that you understand it. In particular, notice that the condition for whether u is an articulation point depends on a test applied to its children. This is the most common source of confusion for this algorithm. We give an algorithm for computing articulation points below. An algorithm for computing bridges is a simple modification to this procedure.

The basic thing we need to check for is whether there is a back edge from some subtree to an ancestor of a given vertex.

[Fig. 75: Articulation Points and DFS (a back edge from u to v gives Low[u] = d[v]).]

Observation 2: A leaf of the DFS tree is never an articulation point. Note that this is completely consistent with Observation 1, since a leaf will not have any subtrees in the DFS tree, so we can delete the word "internal" from Observation 1.

What about the root? Since there are no cross edges between the subtrees of the root, if the root has two or more children then it is an articulation point (since its removal separates these two subtrees). On the other hand, if the root has only a single child, then (as in the case of leaves) its removal does not disconnect the DFS tree, and hence cannot disconnect the graph in general.

Observation 3: The root of the DFS tree is an articulation point if and only if it has two or more children.

Articulation Points by DFS: Observations 1, 2, and 3 provide us with a structural characterization of which vertices in the DFS tree are articulation points. How can we design an algorithm which tests these conditions? Checking that the root has multiple children is an easy exercise. Checking Observation 1 is the hardest. How can we do this? It would be too expensive to keep track of all the back edges from each subtree (because there may be Θ(e) back edges), but we will exploit the structure of the DFS tree to help us. A simpler scheme is to keep track of the back edge that goes highest in the tree (in the sense of going closest to the root). If any back edge goes to an ancestor of u, this one will. How do we know how close a back edge goes to the root? As we travel from u towards the root, observe that the discovery times of these ancestors of u get smaller and smaller (the root having the smallest discovery time of 1). So we keep track of the back edge (v, w) that has the smallest value of d[w].

Low: Define Low[u] to be the minimum of d[u] and {d[w] | (v, w) is a back edge and v is a descendent of u}. The term "descendent" is used in the nonstrict sense; that is, v may be equal to u.

Intuitively, Low[u] is the highest (closest to the root) that you can get in the tree by taking any one back edge from either u or any of its descendents. (Beware of this notation: "Low" means low discovery time, not low in the tree. In fact Low[u] tends to be "high" in the tree, in the sense of being close to the root.)

To compute Low[u] we use the following simple rules. Suppose that we are performing DFS on the vertex u.

Initialization: Low[u] = d[u].

Back edge (u, v): Low[u] = min(Low[u], d[v]). Explanation: We have detected a new back edge coming out of u. If this goes to a lower d value than the previous back edge then make this the new low.

Tree edge (u, v): Low[u] = min(Low[u], Low[v]). Explanation: Since v is in the subtree rooted at u, any single back edge leaving the tree rooted at v is a back edge for the tree rooted at u.

Observe that once Low[u] is computed for all vertices u, we can test whether a given nonroot vertex u is an articulation point by Observation 1 as follows: u is an articulation point if and only if it has a child v in the DFS tree for which Low[v] ≥ d[u] (since if there were a back edge from either v or one of its descendents to a proper ancestor of u then we would have Low[v] < d[u]).

The Final Algorithm: There is one subtlety that we must watch for in designing the algorithm (in particular this is true for any DFS on undirected graphs). When processing a vertex u, we need to know when a given edge (u, v) is a back edge. How do we do this? An almost correct answer is to test whether v is colored gray (since all gray vertices are ancestors of the current vertex). This is not quite correct, because v may be the parent of u in the DFS tree and we are just seeing the "other side" of the tree edge between u and v (recalling that in constructing the adjacency list of an undirected graph we create two directed edges for each undirected edge). To test correctly for a back edge we use the predecessor pointer to check that v is not the parent of u in the DFS tree.

The complete algorithm for computing articulation points is given below. The main procedure for DFS is the same as before, except that it calls the following routine rather than DFSvisit().

Articulation Points
    ArtPt(u) {
        color[u] = gray
        Low[u] = d[u] = ++time
        for each (v in Adj(u)) {
            if (color[v] == white) {            // (u,v) is a tree edge
                pred[v] = u
                ArtPt(v)
                Low[u] = min(Low[u], Low[v])    // update Low[u]
                if (pred[u] == NULL) {          // root: apply Observation 3
                    if (this is u's second child)
                        Add u to set of articulation points
                }
                else if (Low[v] >= d[u]) {      // internal node: apply Observation 1
                    Add u to set of articulation points
                }
            }
            else if (v != pred[u]) {            // (u,v) is a back edge
                Low[u] = min(Low[u], d[v])      // update Low[u]
            }
        }
    }

An example is shown in the following figure. As with all DFS-based algorithms, the running time is Θ(n + e).

There are some interesting problems that we still have not discussed. We did not discuss how to compute the bridges of a graph. This can be done by a small modification of the algorithm above; we'll leave it as an exercise. (Notice that if {u, v} is a bridge then it does not follow that u and v are both articulation points.) Another question is how to determine which edges are in the biconnected components. A hint here is to store the edges in a stack as you go through the DFS search. When you come to an articulation point, you can show that all the edges in the biconnected component will be consecutive in the stack.

Supplemental Lecture 10: Bellman-Ford Shortest Paths

Read: Section 24.1 in CLRS.

Bellman-Ford Algorithm: We saw that Dijkstra's algorithm can solve the single-source shortest path problem, under the assumption that the edge weights are nonnegative. We also saw that shortest paths are undefined if you
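For concreteness, the procedure above can be sketched as runnable Python (my own sketch, not from the notes); the graph is an adjacency list, recursion depth is assumed to be manageable, and the d/Low/pred arrays become dictionaries:

```python
def articulation_points(adj):
    """Articulation points of a connected undirected graph.
    adj maps each vertex to a list of its neighbors."""
    d, low, pred = {}, {}, {}
    art = set()
    time = 0

    def art_pt(u):
        nonlocal time
        time += 1
        d[u] = low[u] = time               # discovery time doubles as "gray" marker
        children = 0
        for v in adj[u]:
            if v not in d:                 # (u,v) is a tree edge
                pred[v] = u
                children += 1
                art_pt(v)
                low[u] = min(low[u], low[v])   # update Low[u]
                if pred.get(u) is None:        # root: apply Observation 3
                    if children == 2:
                        art.add(u)
                elif low[v] >= d[u]:           # internal node: apply Observation 1
                    art.add(u)
            elif v != pred.get(u):             # (u,v) is a back edge, not the parent
                low[u] = min(low[u], d[v])     # update Low[u]

    art_pt(next(iter(adj)))                # G is assumed connected
    return art
```

On the small graph with edges a-b and the cycle b-c-d, only b is an articulation point, and articulation_points({'a': ['b'], 'b': ['a', 'c', 'd'], 'c': ['b', 'd'], 'd': ['c', 'b']}) returns {'b'}.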

have cycles of total negative cost. What if you have negative edge weights, but no negative cost cycles? We shall present the Bellman-Ford algorithm, which solves this problem. This algorithm is slower than Dijkstra's algorithm, running in Θ(V E) time. In our version we will assume that there are no negative cost cycles. (The one presented in CLRS actually contains a bit of code that checks for this.) Recall that we are given a graph G = (V, E) with numeric edge weights.

[Fig. 76: Articulation Points (an example DFS tree showing the d and Low values and the resulting articulation points).]

Like Dijkstra's algorithm, the Bellman-Ford algorithm is based on performing repeated relaxations. (Recall that relaxation updates shortest path information along a single edge. It was described in our discussion of Dijkstra's algorithm.) Dijkstra's algorithm was based on the idea of organizing the relaxations in the best possible manner, namely in increasing order of distance. Once relaxation is applied to an edge, it need never be relaxed again. This trick doesn't seem to work when dealing with graphs with negative edge weights. Instead, the Bellman-Ford algorithm simply applies a relaxation to every edge in the graph, and repeats this V − 1 times.

Bellman-Ford Algorithm
    BellmanFord(G, w, s) {
        for each (u in V) {         // standard initialization
            d[u] = +infinity
            pred[u] = null
        }
        d[s] = 0
        for i = 1 to V-1 {          // repeat V-1 times
            for each (u,v) in E {
                Relax(u,v)          // relax along each edge
            }
        }
    }

The Θ(V E) running time is pretty obvious, since there are two main nested loops, one iterated V − 1 times and the other iterated E times. The interesting question is how and why it works.

Correctness of Bellman-Ford: I like to think of the Bellman-Ford as a sort of "BubbleSort analogue" for shortest paths, in the sense that shortest path information is propagated sequentially along each shortest path in the graph. Consider any shortest path from s to some other vertex u: v0, v1, ..., vk, where v0 = s and vk = u. Since a shortest path will never visit the same vertex twice, we know that k ≤ V − 1, and hence the path consists of at most V − 1 edges. Since this is a shortest path, δ(s, vi) (the true shortest path cost from s to vi) satisfies δ(s, vi) = δ(s, vi−1) + w(vi−1, vi). (Check it out.)
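The pseudocode translates almost line for line into Python. This sketch (mine, not from the notes) represents the graph as a list of weighted edges and, as in the version above, assumes there are no negative-cost cycles:

```python
import math

def bellman_ford(vertices, edges, s):
    """Single-source shortest paths with possibly negative edge weights
    (but no negative-cost cycles). edges is a list of (u, v, w) triples."""
    d = {u: math.inf for u in vertices}      # standard initialization
    pred = {u: None for u in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):       # repeat V-1 times
        for (u, v, w) in edges:              # relax along each edge
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                pred[v] = u
    return d, pred
```

With edges [('s','a',6), ('s','b',7), ('a','c',5), ('b','c',-4)], the distance to c comes out as 3 (via the negative edge from b), even though the route through a costs 11.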

We assert that after the ith pass of the "for-i" loop, d[vi] = δ(s, vi). The proof is by induction on i. Observe that after the initialization (pass 0) we have d[v0] = d[s] = 0. In general, prior to the ith pass through the loop, the induction hypothesis tells us that d[vi−1] = δ(s, vi−1). After the ith pass through the loop, we have done a relaxation on the edge (vi−1, vi) (since we do relaxations along all the edges). Thus after the ith pass we have d[vi] ≤ d[vi−1] + w(vi−1, vi) = δ(s, vi−1) + w(vi−1, vi) = δ(s, vi). Recall from Dijkstra's algorithm that d[vi] is never less than δ(s, vi) (since each time we do a relaxation there exists a path that witnesses its value). Thus, d[vi] is in fact equal to δ(s, vi), completing the induction proof. In summary, after i passes through the for loop, all vertices that are i edges away (along the shortest path tree) from the source have the correct distance values stored in d[u]. Thus, after the (V − 1)st iteration of the for loop, all vertices u have the correct distance values stored in d[u].

[Fig. 77: Bellman-Ford Algorithm (the d values of a four-vertex example: the initial configuration and the configurations after the 1st, 2nd, and 3rd relaxation phases).]

Supplemental Lecture 11: Network Flows and Matching

Read: Chapt 27 in CLR.

Maximum Flow: The Max Flow problem is one of the basic problems of algorithm design. Intuitively we can think of a flow network as a directed graph in which fluid is flowing along the edges of the graph. Each edge has a certain maximum capacity that it can carry. The idea is to find out how much flow we can push from one point to another. The max flow problem has applications in areas like transportation, routing in networks, and operations research. It is the simplest problem in a line of many important problems having to do with the movement of commodities through a network. These are often studied in business schools.

Flow Networks: A flow network G = (V, E) is a directed graph in which each edge (u, v) ∈ E has a nonnegative capacity c(u, v) ≥ 0. If (u, v) is not in E we model this by setting c(u, v) = 0. There are two special vertices: a source s and a sink t. We assume that every vertex lies on some path from the source to the sink (for otherwise the vertex is of no use to us). (This implies that the digraph is connected, and hence e ≥ n − 1.)

A flow is a real valued function on pairs of vertices, f : V × V → R, which satisfies the following three properties:

Capacity Constraint: For all u, v ∈ V, f(u, v) ≤ c(u, v).

Skew Symmetry: For all u, v ∈ V, f(u, v) = −f(v, u). (In other words, we can think of backwards flow as negative flow. This is primarily for making algebraic analysis easier.)

Flow conservation: For all u ∈ V − {s, t}, ∑_{v∈V} f(u, v) = 0.

(Given skew symmetry, this is equivalent to saying flow-in = flow-out.) Note that flow conservation does NOT apply to the source and sink, since we think of ourselves as pumping flow from s to t. Flow conservation means that no flow is lost anywhere else in the network; thus the flow out of s will equal the flow into t.

The quantity f(u, v) is called the net flow from u to v. The total value of the flow f is defined as |f| = ∑_{v∈V} f(s, v), i.e., the flow out of s. It turns out that this is also equal to ∑_{v∈V} f(v, t), i.e., the flow into t. We will show this later. The maximum-flow problem is, given a flow network, and source and sink vertices s and t, find the flow of maximum value from s to t. Example: Page 581 of CLR.

Multi-source, multi-sink flow problems: It may seem overly restrictive to require that there is only a single source and a single sink vertex. Many flow problems have situations in which there are many source vertices s1, s2, ..., sk and many sink vertices t1, t2, ..., tl. This can easily be modelled by just adding a special supersource s′ and a supersink t′, attaching s′ to all the si, and attaching all the tj to t′. We let these edges have infinite capacity. Now by pushing the maximum flow from s′ to t′ we are effectively producing the maximum flow from all the si to all the tj's. Note that we don't care which flow from one source goes to which sink. If you require that the flow from source i goes ONLY to sink i, then you have a tougher problem called the multi-commodity flow problem.

Set Notation: Sometimes rather than talking about the flow from a vertex u to a vertex v, we want to talk about the flow from a SET of vertices X to another SET of vertices Y. To do this we extend the definition of f to sets by defining f(X, Y) = ∑_{x∈X} ∑_{y∈Y} f(x, y). Using this notation we can define flow balance for a vertex u more succinctly by just writing f(u, V) = 0. One important special case of this concept is when X and Y define a cut (i.e., a partition of the vertex set into two disjoint subsets X ⊆ V and Y = V − X). In this case f(X, Y) can be thought of as the net amount of flow crossing over the cut.

From simple manipulations of the definition of flow we can prove the following facts.

Lemma:
(i) f(X, X) = 0.
(ii) f(X, Y) = −f(Y, X).
(iii) If X ∩ Y = ∅ then f(X ∪ Y, Z) = f(X, Z) + f(Y, Z) and f(Z, X ∪ Y) = f(Z, X) + f(Z, Y).

Ford-Fulkerson Method: The most basic concept on which all network-flow algorithms work is the notion of augmenting flows. The idea is to start with a flow of size zero, and then incrementally make the flow larger and larger by finding a path along which we can push more flow. A path in the network from s to t along which more flow can be pushed is called an augmenting path. This idea is given by the most simple method for computing network flows, called the Ford-Fulkerson method. We will prove that when it is impossible to "push" any more flow through the network, we have reached the maximum possible flow (i.e., a locally maximum flow is globally maximum). Almost all network flow algorithms are based on this simple idea. They only differ in how they decide which path or paths along which to push flow.

Ford-Fulkerson Network Flow
    FordFulkerson(G, s, t) {
        initialize flow f to 0
        while (there exists an augmenting path p) {
            augment the flow along p
        }
        output the final flow f
    }

Residual Network: To define the notion of an augmenting path, we first define the notion of a residual network. Given a flow network G and a flow f, define the residual capacity of a pair u, v ∈ V to be cf(u, v) = c(u, v) − f(u, v). Because of the capacity constraint, cf(u, v) ≥ 0. Observe that if cf(u, v) > 0 then it is possible to push more flow through the edge (u, v). Otherwise we say that the edge is saturated. The residual network is the directed graph Gf with the same vertex set as G but whose edges are the pairs (u, v) such that cf(u, v) > 0. Each edge in the residual network is weighted with its residual capacity. Example: Page 589 of CLR.

Lemma: Let f be a flow in G and let f′ be a flow in Gf. Then (f + f′) (defined (f + f′)(u, v) = f(u, v) + f′(u, v)) is a flow in G. The value of the flow is |f| + |f′|.

Proof: Basically the residual network tells us how much additional flow we can push through G. This implies that f + f′ never exceeds the overall edge capacities of G. The other rules for flows are easy to verify.

Augmenting Paths: An augmenting path is a simple path from s to t in Gf. The residual capacity of the path is the MINIMUM capacity of any edge on the path. It is denoted cf(p). Observe that by pushing cf(p) units of flow along each edge of the path, we get a flow in Gf, and hence we can use this to augment the flow in G. (Remember that when defining this flow, whenever we push cf(p) units of flow along any edge (u, v) of p, we have to push −cf(p) units of flow along the reverse edge (v, u) to maintain skew-symmetry.) Since every edge of the residual network has a strictly positive weight, the resulting flow is strictly larger than the current flow for G.

Determining whether there exists an augmenting path from s to t is an easy problem. First we construct the residual network, and then we run DFS or BFS on the residual network starting at s. If the search reaches t then we know that a path exists (and we can follow the predecessor pointers backwards to reconstruct it). Since DFS and BFS take Θ(n + e) time, and it can be shown that the residual network has Θ(n + e) size, the running time of Ford-Fulkerson is basically Θ((n + e) · (number of augmenting stages)). Later we will analyze the latter quantity.

Correctness: To establish the correctness of the Ford-Fulkerson algorithm we need to delve more deeply into the theory of flows and cuts in networks. A cut, (S, T), in a flow network is a partition of the vertex set into two disjoint subsets S and T such that s ∈ S and t ∈ T. We define the flow across the cut as f(S, T), and we define the capacity of the cut as c(S, T). (Note that in computing f(S, T) flows from T to S are counted negatively (by skew-symmetry), and in computing c(S, T) we ONLY count constraints on edges leading from S to T, ignoring those from T to S.)

Lemma: The amount of flow across any cut in the network is equal to |f|.
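To make the method concrete, here is a small Python sketch (my own, not from the notes) that maintains the residual capacities cf directly and uses DFS to search for an augmenting path, exactly as described above:

```python
from collections import defaultdict

def ford_fulkerson(cap, s, t):
    """Max-flow value for cap = {(u, v): capacity}."""
    cf = defaultdict(int)          # residual capacities; reverse edges start at 0
    adj = defaultdict(set)
    for (u, v), c in cap.items():
        cf[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)              # the reverse edge lives in the residual network

    def find_path():
        # DFS for an augmenting path; pred doubles as the visited set
        stack, pred = [s], {s: None}
        while stack:
            u = stack.pop()
            if u == t:
                break
            for v in adj[u]:
                if v not in pred and cf[(u, v)] > 0:
                    pred[v] = u
                    stack.append(v)
        return pred if t in pred else None

    flow = 0
    while True:
        pred = find_path()
        if pred is None:           # no augmenting path: flow is maximum
            return flow
        path, v = [], t
        while v != s:              # walk predecessor pointers back to s
            path.append((pred[v], v))
            v = pred[v]
        b = min(cf[e] for e in path)   # bottleneck = residual capacity of the path
        for (u, v) in path:
            cf[(u, v)] -= b
            cf[(v, u)] += b        # skew symmetry: add capacity on the reverse edge
        flow += b
```

With integer capacities each augmentation increases the flow by at least one unit, so the loop terminates; the Θ((n + e) · stages) bound above is visible directly in the structure.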

Proof:
    f(S, T) = f(S, V) − f(S, S) = f(S, V)
            = f(s, V) + f(S − s, V) = f(s, V) = |f|.
(The fact that f(S − s, V) = 0 comes from flow conservation: f(u, V) = 0 for all u other than s and t, and since S − s is formed of such vertices the sum of their flows will be zero also.)

Corollary: The value of any flow is bounded from above by the capacity of any cut (i.e., Maximum flow ≤ Minimum cut).

Proof: You cannot push any more flow through a cut than its capacity.

The correctness of the Ford-Fulkerson method is based on the following theorem, called the Max-Flow, Min-Cut Theorem. It basically states that in any flow network the minimum capacity cut acts like a bottleneck to limit the maximum amount of flow. The Ford-Fulkerson algorithm terminates when it finds this bottleneck, and hence it finds the minimum cut and maximum flow.

Max-Flow Min-Cut Theorem: The following three conditions are equivalent.
(i) f is a maximum flow in G,
(ii) the residual network Gf contains no augmenting paths,
(iii) |f| = c(S, T) for some cut (S, T) of G.

Proof:
(i) ⇒ (ii): If f is a max flow and there were an augmenting path in Gf, then by pushing flow along this path we would have a larger flow, a contradiction.
(ii) ⇒ (iii): If there are no augmenting paths then s and t are not connected in the residual network. Let S be those vertices reachable from s in the residual network and let T be the rest. (S, T) forms a cut. Because each edge crossing the cut must be saturated with flow, it follows that the flow across the cut equals the capacity of the cut; thus |f| = c(S, T).
(iii) ⇒ (i): Since the flow is never bigger than the capacity of any cut, if the flow equals the capacity of some cut, then it must be maximum (and this cut must be minimum).

Analysis of the Ford-Fulkerson method: The problem with the Ford-Fulkerson algorithm is that depending on how it picks augmenting paths, it may spend an inordinate amount of time arriving at the final maximum flow. Consider the following example (from page 596 in CLR). If the algorithm were smart enough to send flow along the edges of weight 1,000,000, the algorithm would terminate in two augmenting steps. However, if the algorithm were to try to augment using the middle edge, it will continuously improve the flow by only a single unit; 2,000,000 augmenting steps will be needed before we get the final flow. In general, Ford-Fulkerson can take time Θ((n + e)|f*|) where f* is the maximum flow.

An Improvement: We have shown that if the augmenting path is chosen in a bad way the algorithm could run for a very long time before converging on the final flow. It seems (from the example we showed) that a more logical way to push flow is to select the augmenting path which holds the maximum amount of flow. Computing this path is equivalent to determining the path of maximum capacity from s to t in the residual network. (This is exactly the same as the beer transport problem given on the last exam.) It is not known how fast this method works in the worst case, but there is another simple strategy that is guaranteed to give good bounds (in terms of n and e).

Edmonds-Karp Algorithm: The Edmonds-Karp algorithm is Ford-Fulkerson, with one little change. When finding the augmenting path, we use Breadth-First search in the residual network, starting at the source s, and thus we find the shortest augmenting path (where the length of the path is the number of edges on the path). We claim that this choice is particularly nice in that, if we do so, the number of flow augmentations needed will be at most O(e · n). Since each augmentation takes O(n + e) time to compute using BFS, the overall running time will be O((n + e)e · n) = O(n²e + e²n) ∈ O(e²n) (under the reasonable assumption that e ≥ n). (The best known algorithm is essentially O(e · n log n).)

The fact that Edmonds-Karp uses O(en) augmentations is based on the following observations.

Observation: If the edge (u, v) is an edge on the minimum length augmenting path from s to t in Gf, then δf(s, v) = δf(s, u) + 1.

Proof: This is a simple property of shortest paths. Since there is an edge from u to v, we have δf(s, v) ≤ δf(s, u) + 1, and if δf(s, v) < δf(s, u) + 1 then u would not be on the shortest path from s to v, and hence (u, v) is not on any shortest path.

Lemma: For each vertex u ∈ V − {s, t}, let δf(s, u) be the distance function from s to u in the residual network Gf. Then as we perform augmentations by the Edmonds-Karp algorithm, the value of δf(s, u) increases monotonically with each flow augmentation.

Proof: (Messy, but not too complicated. See the text.)

Theorem: The Edmonds-Karp algorithm makes at most O(n · e) augmentations.

Proof: An edge in the augmenting path is critical if the residual capacity of the path equals the residual capacity of this edge. In other words, after augmentation the critical edge becomes saturated, and disappears from the residual graph. How many times can an edge become critical before the algorithm terminates? Observe that when the edge (u, v) is critical it lies on the shortest augmenting path, implying that δf(s, v) = δf(s, u) + 1. After this it disappears from the residual graph. In order to reappear, it must be that we reduce flow on this edge, that is, we push flow along the reverse edge (v, u). For this to be the case we have (at some later flow f′) δf′(s, u) = δf′(s, v) + 1. Thus we have:

    δf′(s, u) = δf′(s, v) + 1
              ≥ δf(s, v) + 1              (since distances increase with time)
              = (δf(s, u) + 1) + 1 = δf(s, u) + 2.

Thus, between the times that an edge becomes critical, its tail vertex increases in distance from the source by two. This can only happen n/2 times, since no vertex can be further than n from the source. Thus, each edge can become critical at most O(n) times, there are O(e) edges, and hence after O(ne) augmentations the algorithm must terminate.

In summary, the Edmonds-Karp algorithm makes at most O(ne) augmentations and runs in O(ne²) time.

Maximum Matching: One of the important elements of network flow is that it is a very general algorithm which is capable of solving many problems. (An example is problem 3 in the homework.) We will give another example here. Consider the following problem: you are running a dating service and there are a set of men L and a set of women R. Using a questionnaire you establish which men are compatible with which women. Your task is to pair up as many compatible pairs of men and women as possible, subject to the constraint that each man is paired with at most one woman, and vice versa. (It may be that some men are not paired with any woman.)

This problem is modelled by giving an undirected graph whose vertex set is V = L ∪ R and whose edge set consists of pairs (u, v), u ∈ L, v ∈ R, such that u and v are compatible. The problem is to find a matching,
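Before going further, the Edmonds-Karp rule (BFS for a shortest augmenting path) can be sketched in Python as follows. This is my own sketch, not from the notes; a counter is included just to observe the number of augmenting stages:

```python
from collections import defaultdict, deque

def edmonds_karp(cap, s, t):
    """Ford-Fulkerson with BFS (shortest) augmenting paths.
    Returns (max_flow_value, number_of_augmentations)."""
    cf = defaultdict(int)              # residual capacities
    adj = defaultdict(set)
    for (u, v), c in cap.items():
        cf[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)                  # reverse edge in the residual network

    flow = augmentations = 0
    while True:
        pred = {s: None}               # BFS from s in the residual network
        q = deque([s])
        while q and t not in pred:
            u = q.popleft()
            for v in adj[u]:
                if v not in pred and cf[(u, v)] > 0:
                    pred[v] = u
                    q.append(v)
        if t not in pred:              # no augmenting path remains
            return flow, augmentations
        path, v = [], t
        while v != s:
            path.append((pred[v], v))
            v = pred[v]
        b = min(cf[e] for e in path)   # bottleneck along the shortest path
        for (u, v) in path:
            cf[(u, v)] -= b
            cf[(v, u)] += b
        flow += b
        augmentations += 1
```

On a scaled-down version of the bad example above (two wide paths joined by a unit-capacity middle edge), BFS never prefers the longer middle route, so the maximum flow is reached in just two augmentations.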

that is, a subset of edges M such that for each v ∈ V there is at most one edge of M incident to v. The desired matching is the one that has the maximum number of edges, and is called a maximum matching. This problem is called the maximum bipartite matching problem. The resulting undirected graph has the property that its vertex set can be divided into two groups such that all its edges go from one group to the other (never within a group, unless the dating service is located on Dupont Circle). Example: See page 601 in CLR.

Reduction to Network Flow: We claim that if you have an algorithm for solving the network flow problem, then you can use this algorithm to solve the maximum bipartite matching problem. (Note that this idea does not work for general undirected graphs.) Construct a flow network G′ = (V′, E′) as follows. Let s and t be two new vertices, and let V′ = V ∪ {s, t} and E′ = {(s, u) | u ∈ L} ∪ {(v, t) | v ∈ R} ∪ {(u, v) | (u, v) ∈ E}. Set the capacity of all edges in this network to 1. Example: See page 602 in CLR.

Now, compute the maximum flow in G′. Although in general it can be that flows are real numbers, observe that the Ford-Fulkerson algorithm will only assign integer value flows to the edges (and this is true of all existing network flow algorithms). Thus, letting f denote the maximum flow, we can define a matching M = {(u, v) | u ∈ L, v ∈ R, f(u, v) > 0}. Since each vertex in L has exactly 1 incoming edge, it can have flow along at most 1 outgoing edge, and since each vertex in R has exactly 1 outgoing edge, it can have flow along at most 1 incoming edge. We claim that this matching is maximum because for every matching there is a corresponding flow of equal value, and for every (integer) flow there is a matching of equal value. Thus by maximizing one we maximize the other.

Supplemental Lecture 12: Hamiltonian Path

Read: The reduction we present for Hamiltonian Path is completely different from the one in Chapt 36.5.4 of CLR.

Hamiltonian Cycle: Today we consider a collection of problems related to finding paths in graphs and digraphs. Recall that given a graph (or digraph), a Hamiltonian cycle is a simple cycle that visits every vertex in the graph (exactly once). A Hamiltonian path is a simple path that visits every vertex in the graph (exactly once). The Hamiltonian cycle (HC) and Hamiltonian path (HP) problems ask whether a given graph (or digraph) has such a cycle or path, respectively. There are four variations of these problems depending on whether the graph is directed or undirected, and depending on whether you want a path or a cycle, but all of these problems are NP-complete.

An important related problem is the traveling salesman problem (TSP). Given a complete graph (or digraph) with integer edge weights, determine the cycle of minimum weight that visits all the vertices. Since the graph is complete, such a cycle will always exist. The decision problem formulation is: given a complete weighted graph G and an integer X, does there exist a Hamiltonian cycle of total weight at most X? Today we will prove that Hamiltonian Cycle is NP-complete. We will leave TSP as an easy exercise. (It is done in Section 36.5.5 in CLR.)
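The reduction can be exercised end to end with a short Python sketch (mine, not from the notes): build G′ with a supersource and supersink, run augmenting-path max flow with unit capacities, and read the matching off the saturated L-to-R edges. The integrality observation above is exactly what makes the last step valid.

```python
from collections import defaultdict, deque

def max_bipartite_matching(L, R, compatible):
    """Maximum bipartite matching via the network-flow reduction:
    source -> L -> R -> sink, all capacities 1."""
    s, t = object(), object()          # the two new vertices s and t
    cf = defaultdict(int)              # residual capacities
    adj = defaultdict(set)

    def add_edge(u, v):
        cf[(u, v)] += 1                # every capacity is 1
        adj[u].add(v)
        adj[v].add(u)

    for u in L:
        add_edge(s, u)
    for v in R:
        add_edge(v, t)
    for (u, v) in compatible:          # u in L, v in R
        add_edge(u, v)

    while True:                        # Ford-Fulkerson with BFS paths
        pred = {s: None}
        q = deque([s])
        while q and t not in pred:
            u = q.popleft()
            for v in adj[u]:
                if v not in pred and cf[(u, v)] > 0:
                    pred[v] = u
                    q.append(v)
        if t not in pred:
            break
        v = t
        while v is not s:              # unit capacities: the bottleneck is 1
            u = pred[v]
            cf[(u, v)] -= 1
            cf[(v, u)] += 1
            v = u

    # an L-R edge is in the matching iff it carries one unit of flow
    return {(u, v) for (u, v) in compatible if cf[(v, u)] > 0}
```

For instance, with L = {m1, m2, m3}, R = {w1, w2}, and compatible pairs (m1, w1), (m2, w1), (m3, w2), the maximum matching has size 2: one of m1/m2 is paired with w1 and m3 with w2.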

Component Design: Up to now, most of the reductions that we have seen (for Clique, VC, and DS in particular) are of a relatively simple variety. They are sometimes called local replacement reductions, because they operate by making some local change throughout the graph.

We will present a much more complex style of reduction for the Hamiltonian path problem on directed graphs. This type of reduction is called a component design reduction, because it involves designing special subgraphs, sometimes called components or gadgets (also called widgets), whose job it is to enforce a particular constraint. Very complex reductions may involve the creation of many gadgets. This one involves the construction of only one. (See CLR's presentation of HP for other examples of gadgets.)

The gadget that we will use in the directed Hamiltonian path reduction, called a DHP-gadget, is shown in the figure below. It consists of three incoming edges labeled i1, i2, i3 and three outgoing edges, labeled o1, o2, o3. It was designed so it satisfies the following property, which you can verify. Intuitively it says that if you enter the gadget on any subset of 1, 2 or 3 input edges, then there is a way to get through the gadget and hit every vertex exactly once, and in doing so each path must end on the corresponding output edge.

Claim: Given the DHP-gadget:

• For any subset of input edges, there exists a set of paths which join each input edge i1, i2, or i3 to its respective output edge o1, o2, or o3 such that together these paths visit every vertex in the gadget exactly once.

• Any subset of paths that start on the input edges and end on the output edges, and visit all the vertices of the gadget exactly once, must join corresponding inputs to corresponding outputs. (In other words, a path that starts on input i1 must exit on output o1.)

The proof is not hard, but involves a careful inspection of the gadget. It is probably easiest to see this on your own, by starting with one, two, or three input paths, and attempting to get through the gadget without skipping a vertex and without visiting any vertex twice. To see whether you really understand the gadget, answer the question of why there are 6 groups of triples. Would some other number work?

DHP is NP-complete: This gadget is an essential part of our proof that the directed Hamiltonian path problem is NP-complete.

Theorem: The directed Hamiltonian Path problem is NP-complete.

Proof: DHP ∈ NP: The certificate consists of the sequence of vertices (or edges) in the path. It is an easy matter to check that the path visits every vertex exactly once.

3SAT ≤P DHP: This will be the subject of the rest of this section. Let us consider the similar elements between the two problems. In 3SAT we are selecting a truth assignment for the variables of the formula. In DHP, we are deciding which edges will be a part of the path. In 3SAT there must be at least one true literal for each clause. In DHP, each vertex must be visited exactly once.

We are given a boolean formula F in 3-CNF form (three literals per clause). We will convert this formula into a digraph. Let x1, x2, . . . , xm denote the variables appearing in F. We will construct one DHP-gadget for each clause in the formula. The inputs and outputs of each gadget correspond to the literals appearing in this clause. For example, the clause (x2 ∨ x5 ∨ x8) would generate a clause gadget with inputs labeled x2, x5, and x8.

The general structure of the digraph will consist of a series of vertices, one for each variable. Each of these vertices will have two outgoing paths, one taken if xi is set to true and one if xi is set to false. Each of these paths will then pass through some number of DHP-gadgets. The true path for xi will pass through all the clause gadgets for clauses in which xi appears, and the false path will pass through all the gadgets for clauses in which x̄i appears. (The order in which the path passes through the gadgets is unimportant.) When the paths for xi have passed through their last gadgets, then they are joined to the next variable vertex, xi+1. This is illustrated in the following figure. (The figure only shows a portion of the construction. There will be paths coming into

Fig. 78: DHP-Gadget and examples of path traversals (what it looks like inside; paths with 1, 2, and 3 entries).

these same gadgets from other variables as well.) We add one ﬁnal vertex xe , and the last variable’s paths are connected to xe . (If we wanted to reduce to Hamiltonian cycle, rather than Hamiltonian path, we could join xe back to x1 .)


Fig. 79: General structure of reduction from 3SAT to DHP.

Note that for each variable, the Hamiltonian path must either use the true path or the false path, but it cannot use both. If we choose the true path for xi to be in the Hamiltonian path, then we will have at least one path passing through each of the gadgets whose corresponding clause contains xi, and if we choose the false path, then we will have at least one path passing through each gadget for x̄i. For example, consider the following boolean formula in 3-CNF. The construction yields the digraph shown in the following figure.

(x̄1 ∨ x2 ∨ x3) ∧ (x1 ∨ x̄2 ∨ x̄3) ∧ (x2 ∨ x̄1 ∨ x3) ∧ (x1 ∨ x3 ∨ x̄2).
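As a sanity check on the example, the formula above (with literal signs read off the gadget labels in the figures) can be tested by brute force. A small sketch, with my own encoding of literals as (sign, variable) pairs; this is illustrative code, not part of the notes:

```python
from itertools import product

# The example formula, signs read off the gadget labels in the figure:
# (~x1 v x2 v x3) ^ (x1 v ~x2 v ~x3) ^ (x2 v ~x1 v x3) ^ (x1 v x3 v ~x2)
clauses = [[(-1, 1), (1, 2), (1, 3)],
           [(1, 1), (-1, 2), (-1, 3)],
           [(1, 2), (-1, 1), (1, 3)],
           [(1, 1), (1, 3), (-1, 2)]]

def satisfies(assign, clauses):
    """A clause holds if some literal (sign, v) matches assign[v]."""
    return all(any((assign[v] == 1) == (s == 1) for (s, v) in c)
               for c in clauses)

# Enumerate all eight truth assignments (x1, x2, x3).
sat = [a for a in product([0, 1], repeat=3)
       if satisfies({i + 1: b for i, b in enumerate(a)}, clauses)]
```

In particular, the assignment x1 = 1, x2 = 1, x3 = 0 satisfies the formula, while x1 = 0, x2 = 1, x3 = 0 does not; these are the two assignments used in Fig. 81 below.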

Fig. 80: Example of the 3SAT to DHP reduction.

The Reduction: Let us give a more formal description of the reduction. Recall that we are given a boolean formula F in 3-CNF. We create a digraph G as follows. For each variable xi appearing in F, we create a variable vertex, named xi. We also create a vertex named xe (the ending vertex). For each clause c, we create a DHP-gadget whose inputs and outputs are labeled with the three literals of c. (The order is unimportant, as long as each input and its corresponding output are labeled the same.)

We join these vertices with the gadgets as follows. For each variable xi, consider all the clauses c1, c2, . . . , ck in which xi appears as a literal (uncomplemented). Join xi by an edge to the input labeled with xi in the gadget for c1, and in general join the output of gadget cj labeled xi with the input of gadget cj+1 with this same label. Finally, join the output of the last gadget ck to the next variable vertex, xi+1. (If this is the last variable, then join it to xe instead.) The resulting chain of edges is called the true path for variable xi. Form a second chain in exactly the same way, but this time joining the gadgets for the clauses in which x̄i appears. This is called the false path for xi. The resulting digraph is the output of the reduction. Observe that the entire construction can be performed in polynomial time, by simply inspecting the formula, creating the appropriate vertices, and adding the appropriate edges to the digraph. The following lemma establishes the correctness of this reduction.

Lemma: The boolean formula F is satisfiable if and only if the digraph G produced by the above reduction has a Hamiltonian path.
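The wiring step of the reduction can be sketched in code. This is a hypothetical sketch (not from the notes): the gadget internals are omitted, so each gadget is treated as an opaque box with labeled input/output ports, and the port-naming convention is my own:

```python
def build_dhp_skeleton(num_vars, clauses):
    """Sketch of the 3SAT -> DHP wiring.  A literal is (sign, var) with
    sign +1 (uncomplemented) or -1 (complemented); gadget internals are
    omitted, so gadget j is an opaque box with ports ('in', j, lit)
    and ('out', j, lit)."""
    edges = []
    for i in range(1, num_vars + 1):
        nxt = f"x{i+1}" if i < num_vars else "xe"
        for sign in (+1, -1):              # true path, then false path
            lit = (sign, i)
            prev = f"x{i}"                 # start at the variable vertex
            for j, c in enumerate(clauses):
                if lit in c:               # chain through this clause gadget
                    edges.append((prev, ("in", j, lit)))
                    prev = ("out", j, lit)
            edges.append((prev, nxt))      # join to next variable (or xe)
    return edges

# The example formula from Fig. 80:
clauses = [[(-1, 1), (1, 2), (1, 3)],
           [(1, 1), (-1, 2), (-1, 3)],
           [(1, 2), (-1, 1), (1, 3)],
           [(1, 1), (1, 3), (-1, 2)]]
edges = build_dhp_skeleton(3, clauses)
```

Each variable vertex gets exactly two outgoing edges (the starts of its true and false paths), and the last variable's two chains both terminate at xe, matching the description above.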

(Upper panel: a satisfying assignment hits all gadgets. Lower panel: a nonsatisfying assignment misses some gadgets.)
Fig. 81: Correctness of the 3SAT to DHP reduction. The upper figure shows the Hamiltonian path resulting from the satisfying assignment, x1 = 1, x2 = 1, x3 = 0, and the lower figure shows the non-Hamiltonian path resulting from the nonsatisfying assignment x1 = 0, x2 = 1, x3 = 0.

Proof: We need to prove both the "only if" and the "if".

⇒: Suppose that F has a satisfying assignment. We claim that G has a Hamiltonian path. This path will start at the variable vertex x1, then will travel along either the true path or false path for x1, depending on whether it is 1 or 0, respectively, in the assignment, and then it will continue with x2, then x3, and so on, until reaching xe. Such a path will visit each variable vertex exactly once. Because this is a satisfying assignment, we know that for each clause, either 1, 2, or 3 of its literals will be true. This means that for each clause, either 1, 2, or 3 paths will attempt to travel through the corresponding gadget. However, we have argued in the above claim that in this case it is possible to visit every vertex in the gadget exactly once. Thus every vertex in the graph is visited exactly once, implying that G has a Hamiltonian path.

⇐: Suppose that G has a Hamiltonian path. We assert that the form of the path must be essentially the same as the one described in the previous part of this proof. In particular, the path must visit the variable vertices in increasing order from x1 until xe, because of the way in which these vertices are joined together. Also observe that for each variable vertex, the path will proceed along either the true path or the false path. If it proceeds along the true path, set the corresponding variable to 1 and otherwise set it to 0. We will show that the resulting assignment is a satisfying assignment for F. Any Hamiltonian path must visit all the vertices in every gadget.
By the above claim about DHP-gadgets, if a path visits all the vertices and enters along an input edge, then it must exit along the corresponding output edge. Therefore, once the Hamiltonian path starts along the true or false path for some variable, it must remain on edges with the same label. That is, if the path starts along the true path for xi, it must travel through all the gadgets with the label xi until arriving at the variable vertex for xi+1. If it starts along the false path, then it must travel through all gadgets with the label x̄i. Since all the gadgets are visited and the paths must remain true to their initial assignments, it follows that for each clause, at least one (and possibly two or three) of its literals must be true. Therefore, this is a satisfying assignment.


**Supplemental Lecture 13: Subset Sum Approximation**

Read: Section 37.4 in CLR.

Polynomial Approximation Schemes: Last time we saw that for some NP-complete problems, it is possible to approximate the problem to within a fixed constant ratio bound. For example, the approximation algorithm produces an answer that is within a factor of 2 of the optimal solution. However, in practice, people would like to control the precision of the approximation. This is done by specifying a parameter ε > 0 as part of the input to the approximation algorithm, and requiring that the algorithm produce an answer that is within a relative error of ε of the optimal solution. It is understood that as ε tends to 0, the running time of the algorithm will increase. Such an algorithm is called a polynomial approximation scheme. For example, the running time of the algorithm might be O(2^(1/ε) n^2). It is easy to see that in such cases the user pays a big penalty in running time as a function of ε. (For example, to produce a 1% error, the "constant" factor would be 2^100, which would be around 4 quadrillion centuries on your 100 MHz Pentium.)

A fully polynomial approximation scheme is one in which the running time is polynomial in both n and 1/ε. For example, a running time of O((n/ε)^2) would satisfy this condition. In such cases, reasonably accurate approximations are computationally feasible. Unfortunately, there are very few NP-complete problems with fully polynomial approximation schemes. In fact, recently there has been strong evidence that many NP-complete problems do not have polynomial approximation schemes (fully or otherwise). Today we will study one that does.

Subset Sum: Recall that in the subset sum problem we are given a set S of positive integers {x1, x2, . . . , xn} and a target value t, and we are asked whether there exists a subset S' ⊆ S that sums exactly to t. The optimization problem is to determine the subset whose sum is as large as possible but not larger than t.
This problem is basic to many packing problems, and is indirectly related to processor scheduling problems that arise in operating systems as well. Suppose we are also given 0 < ε < 1. Let z* ≤ t denote the optimum sum. The approximation problem is to return a value z ≤ t such that z ≥ z*(1 − ε). If we think of this as a knapsack problem, we want our knapsack to be within a factor of (1 − ε) of being as full as possible. So, if ε = 0.1, then the knapsack should be at least 90% as full as the best possible.

What do we mean by polynomial time here? Recall that the running time should be polynomial in the size of the input. Obviously n is part of the input length. But t and the numbers xi could also be huge binary numbers. Normally we just assume that a binary number can fit into a word of our computer, and do not count their length. In this case we will count it, to be on the safe side. Clearly t requires O(log t) digits to be stored in the input. We will take the input size to be n + log t.

Intuitively it is not hard to believe that it should be possible to determine whether we can fill the knapsack to within 90% of optimal. After all, we are used to solving similar sorts of packing problems all the time in real life. But the mental heuristics that we apply to these problems are not necessarily easy to convert into efficient algorithms. Our intuition tells us that we can afford to be a little "sloppy" in keeping track of exactly how full the knapsack is at any point. The value of ε tells us just how sloppy we can be. Our approximation will do something similar. First we consider an exponential time algorithm, and then convert it into an approximation algorithm.

Exponential Time Algorithm: This algorithm is a variation of the dynamic programming solution we gave for the knapsack problem. Recall that there we used a 2-dimensional array to keep track of whether we could fill a knapsack of a given capacity with the first i objects. We will do something similar here.
As before, we will concentrate on the question of which sums are possible, but determining the subsets that give these sums will not be hard. Let Li denote a list of integers that contains the sums of all 2^i subsets of {x1, x2, . . . , xi} (including the empty set whose sum is 0). For example, for the set {1, 4, 6} the corresponding list of sums contains <0, 1, 4, 5(= 1 + 4), 6, 7(= 1 + 6), 10(= 4 + 6), 11(= 1 + 4 + 6)>.

Note that Li can have as many as 2^i elements, but may have fewer, since some subsets may have the same sum. There are two things we will want to do for efficiency: (1) remove any duplicates from Li, and (2) only keep sums that are less than or equal to t. Let us suppose that we have a procedure MergeLists(L1, L2) which merges two sorted lists, and returns a sorted list with all duplicates removed. This is essentially the procedure used in MergeSort, but with the added duplicate element test. As a bit of notation, let L + x denote the list resulting by adding the number x to every element of list L. Thus <1, 4, 6> + 3 = <4, 7, 9>. This gives the following procedure for the subset sum problem.

Exact Subset Sum

    Exact_SS(x[1..n], t) {
        L = <0>;
        for i = 1 to n do {
            L = MergeLists(L, L + x[i]);
            remove from L all elements greater than t;
        }
        return largest element in L;
    }

For example, if S = {1, 4, 6} and t = 8 then the successive lists would be

    L0 = <0>
    L1 = <0> ∪ <0 + 1> = <0, 1>
    L2 = <0, 1> ∪ <0 + 4, 1 + 4> = <0, 1, 4, 5>
    L3 = <0, 1, 4, 5> ∪ <0 + 6, 1 + 6, 4 + 6, 5 + 6> = <0, 1, 4, 5, 6, 7, 10, 11>

The last list would have the elements 10 and 11 removed, and the final answer would be 7. The algorithm runs in Ω(2^n) time in the worst case, because this is the number of sums that are generated if there are no duplicates.

Approximation Algorithm: To convert this into an approximation algorithm, we will introduce a "trim" operation on the lists to decrease their sizes. The idea is that if the list L contains two numbers that are very close to one another, e.g. 91,048 and 91,050, then we should not need to keep both of these numbers in the list. One of them is good enough for future approximations. This will reduce the size of the lists that the algorithm needs to maintain. But how much trimming can we allow and still keep our approximation bound? Furthermore, will we be able to reduce the list sizes from exponential to polynomial? The answer to both these questions is yes, provided you apply a proper way of trimming the lists.

We will trim elements whose values are sufficiently close to each other. But we should define "close" in a manner that is relative to the sizes of the numbers involved. The trimming must also depend on ε. We select δ = ε/n. (Why? We will see later that this is the value that makes everything work out in the end.) Note that 0 < δ < 1. Assume that the elements of L are sorted. We walk through the list. Let z denote the last untrimmed element in L, and let y ≥ z be the next element to be considered. If (y − z)/y ≤ δ then we trim y from the list. Equivalently, this means that the final trimmed list cannot contain two values y and z such that (1 − δ)y ≤ z ≤ y. We can think of z as representing y in the list.

For example, given δ = 0.1 and given the list L = <10, 11, 12, 15, 20, 21, 22, 23, 24, 29>, the trimmed list L' will consist of <10, 12, 15, 20, 23, 29>.
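The Exact_SS pseudocode translates directly into a short program. A minimal sketch (my own translation, not from the notes), where a set union plays the role of MergeLists (merge plus duplicate removal):

```python
def exact_subset_sum(xs, t):
    """Exact_SS: maintain the sorted list of all achievable subset sums <= t."""
    L = [0]
    for x in xs:
        # MergeLists(L, L + x): union the two sorted lists, drop duplicates
        L = sorted(set(L) | {s + x for s in L})
        L = [s for s in L if s <= t]      # keep only sums not exceeding t
    return max(L)
```

On the worked example, `exact_subset_sum([1, 4, 6], 8)` returns 7, matching the trace above; the list L can still double in size at every step, which is the Ω(2^n) worst case.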

Claim: The number of distinct items in a trimmed list is O((n log t)/ε), which is polynomial in the input size and 1/ε.

Proof: We know that each pair of consecutive elements in a trimmed list differ by a ratio of at least d = 1/(1 − δ) > 1. Let k denote the number of elements in the trimmed list, ignoring the element of value 0. Then the smallest nonzero value and the maximum value in the trimmed list differ by a ratio of at least d^(k−1). Since the smallest (nonzero) element is at least as large as 1, and the largest is no larger than t, it follows that d^(k−1) ≤ t/1 = t. Taking the natural log of both sides we have (k − 1) ln d ≤ ln t. Using the facts that δ = ε/n and the log identity ln(1 + x) ≤ x, we have

    k − 1 ≤ ln t / ln d = ln t / (−ln(1 − δ)) ≤ ln t / δ = (n ln t)/ε,

and so k = O((n log t)/ε). Observe that the input size is at least as large as n (since there are n numbers) and at least as large as log t (since it takes log t digits to write down t on the input). Thus, this function is polynomial in the input size and 1/ε.

Another way to visualize trimming is to break the interval [1, t] into a set of buckets of exponentially increasing size. Let d = 1/(1 − δ). Note that d > 1. Consider the intervals [1, d], [d, d^2], [d^2, d^3], . . . , [d^(k−1), d^k], where d^k ≥ t. If z ≤ y are in the same interval [d^(i−1), d^i], then

    (y − z)/y ≤ (d^i − d^(i−1))/d^i = 1 − 1/d = δ.

Thus, we cannot have more than one item within each bucket. We can think of trimming as a way of enforcing the condition that items in our lists are not relatively too close to one another, by enforcing the condition that no bucket has more than one item.

Fig. 82: Trimming Lists for Approximate Subset Sum.
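The size claim can be checked empirically. A quick sketch (my own test, not from the notes): trim the full integer range [1, t] and verify that the kept list respects the spacing rule and stays within the 1 + log_d(t) bucket bound:

```python
import math

def trim(L, delta):
    """Walk a sorted list; keep y only when it differs from the last
    kept element z by a factor of more than (1 - delta)."""
    out, last = [L[0]], L[0]
    for y in L[1:]:
        if last < (1 - delta) * y:        # (y - z)/y > delta: keep y
            out.append(y)
            last = y
    return out

# Trimming any sorted list of values in [1, t] leaves at most
# 1 + log_d(t) items, d = 1/(1 - delta): one per exponential bucket.
t, delta = 10**6, 0.05
L = trim(sorted(range(1, t + 1)), delta)
bound = 1 + math.log(t) / -math.log(1 - delta)
```

Here `len(L)` comes out just under the bound, since every kept value is forced into a distinct bucket [d^(i−1), d^i].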

The approximation algorithm operates as before, but in addition we call the procedure Trim given below.

Approximate Subset Sum

    Trim(L, delta) {
        let the elements of L be denoted y[1..m];
        L' = <y[1]>;                        // start with first item
        last = y[1];                        // last item to be added
        for i = 2 to m do {
            if (last < (1-delta) y[i]) {    // different enough?
                append y[i] to end of L';
                last = y[i];
            }
        }
        return L';
    }

    Approx_SS(x[1..n], t, eps) {
        delta = eps/n;                      // approx factor
        L = <0>;                            // empty sum = 0
        for i = 1 to n do {
            L = MergeLists(L, L + x[i]);    // add in next item
            L = Trim(L, delta);             // trim away "near" duplicates
            remove from L all elements greater than t;
        }
        return largest element in L;
    }

For example, consider the set S = {104, 102, 201, 101}, t = 308, and ε = 0.20, so that δ = ε/4 = 0.05. Here is a summary of the algorithm's execution.

    init:    L0 = <0>
    merge:   L1 = <0, 104>
    trim:    L1 = <0, 104>
    remove:  L1 = <0, 104>

    merge:   L2 = <0, 102, 104, 206>
    trim:    L2 = <0, 102, 206>
    remove:  L2 = <0, 102, 206>

    merge:   L3 = <0, 102, 201, 206, 303, 407>
    trim:    L3 = <0, 102, 201, 303, 407>
    remove:  L3 = <0, 102, 201, 303>

    merge:   L4 = <0, 101, 102, 201, 203, 302, 303, 404>
    trim:    L4 = <0, 101, 201, 302, 404>
    remove:  L4 = <0, 101, 201, 302>

The final output is 302. The optimum is 307 = 104 + 102 + 101, so our actual relative error in this case is within 2%. The running time of the procedure is O(n|L|), which is O((n^2 ln t)/ε) by the earlier claim.
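Putting MergeLists, Trim, and the removal step together gives a runnable sketch of Approx_SS (again my own translation of the pseudocode). On the worked example it reproduces the traced lists and returns 302:

```python
def approx_subset_sum(xs, t, eps):
    """Approx_SS: like Exact_SS, but trim 'near duplicates' each round."""
    n = len(xs)
    delta = eps / n                               # per-stage trim factor
    L = [0]                                       # empty sum
    for x in xs:
        L = sorted(set(L) | {s + x for s in L})   # MergeLists(L, L + x)
        kept = [L[0]]                             # Trim(L, delta)
        for y in L[1:]:
            if kept[-1] < (1 - delta) * y:        # different enough?
                kept.append(y)
        L = [s for s in kept if s <= t]           # drop sums exceeding t
    return max(L)
```

For instance, `approx_subset_sum([104, 102, 201, 101], 308, 0.20)` returns 302, within the promised factor (1 − ε) of the optimum 307.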

Approximation Analysis: The final question is why the algorithm achieves a relative error of at most ε over the optimum solution. Let Y* denote the optimum (largest) subset sum and let Y denote the value returned by the algorithm. We want to show that Y is not too much smaller than Y*, that is, Y ≥ Y*(1 − ε).

Our proof will make use of an important inequality from real analysis.

Lemma: For n > 0 and a a real number,

    (1 + a) ≤ (1 + a/n)^n ≤ e^a.

Recall that our intuition was that we would allow a relative error of ε/n at each stage of the algorithm. Since the algorithm has n stages, the total relative error should be (obviously?) n(ε/n) = ε. The catch is that these are relative, not absolute, errors. These errors do not accumulate additively, but rather by multiplication. So we need to be more careful.

Let L*_i denote the i-th list in the exponential time (optimal) solution and let L_i denote the i-th list in the approximate algorithm. We claim that for each y ∈ L*_i there exists a representative item z ∈ L_i whose relative error from y satisfies

    (1 − ε/n)^i y ≤ z ≤ y.

The proof of the claim is by induction on i. Initially L_0 = L*_0 = <0>, and so there is no error. Suppose by induction that the above equation holds for each item in L*_{i−1}. Consider an element y ∈ L*_{i−1}. We know that y will generate two elements in L*_i: y and y + x_i. We want to argue that there will be a representative that is "close" to each of these items. By our induction hypothesis, there is a representative element z in L_{i−1} such that

    (1 − ε/n)^{i−1} y ≤ z ≤ y.

When we apply our algorithm, we will form two new items to add (initially) to L_i: z and z + x_i. Observe that by adding x_i to the inequality above and a little simplification we get

    (1 − ε/n)^{i−1} (y + x_i) ≤ z + x_i ≤ y + x_i.

Fig. 83: Subset sum approximation analysis.

The items z and z + x_i might not appear in L_i because they may be trimmed. Let z' and z'' be their respective representatives. Thus, z' and z'' are elements of L_i. We have

    (1 − ε/n) z ≤ z' ≤ z
    (1 − ε/n)(z + x_i) ≤ z'' ≤ z + x_i.

Combining these with the inequalities above we have

    (1 − ε/n)^{i−1} (1 − ε/n) y ≤ (1 − ε/n)^i y ≤ z' ≤ y
    (1 − ε/n)^{i−1} (1 − ε/n)(y + x_i) ≤ (1 − ε/n)^i (y + x_i) ≤ z'' ≤ y + x_i.

Since z' and z'' are in L_i, this is the desired result. This ends the proof of the claim.

Using our claim, and the fact that Y* (the optimum answer) is the largest element of L*_n and Y (the approximate answer) is the largest element of L_n, we have

    (1 − ε/n)^n Y* ≤ Y ≤ Y*.

This is not quite what we wanted. We wanted to show that (1 − ε) Y* ≤ Y. To complete the proof, we observe from the lemma above (setting a = −ε) that

    (1 − ε) ≤ (1 − ε/n)^n.

This completes the approximation analysis.
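The lemma and the final step can be sanity-checked numerically. A quick empirical check (not part of the notes) that 1 + a ≤ (1 + a/n)^n ≤ e^a over random n and a > −1, which with a = −ε gives exactly the bound (1 − ε) ≤ (1 − ε/n)^n used above:

```python
import math
import random

# Spot-check the lemma (1 + a) <= (1 + a/n)^n <= e^a on random inputs.
random.seed(1)
for _ in range(1000):
    n = random.randint(1, 100)
    a = random.uniform(-0.99, 5.0)     # need a/n > -1 for the bound
    mid = (1 + a / n) ** n
    assert 1 + a <= mid + 1e-9         # Bernoulli's inequality
    assert mid <= math.exp(a) + 1e-9   # from ln(1 + x) <= x
```

The small 1e-9 tolerances only guard against floating-point rounding; the inequalities themselves are exact.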
