
Johan Håstad

Department of Numerical Analysis and Computing Science

Royal Institute of Technology

S-100 44 Stockholm

SWEDEN

johanh@nada.kth.se

May 13, 2009


Contents

1 Preface

2 Recursive Functions
 2.1 Primitive Recursive Functions
 2.2 Partial recursive functions
 2.3 Turing Machines
 2.4 Church's thesis
 2.5 Functions, sets and languages
 2.6 Recursively enumerable sets
 2.7 Some facts about recursively enumerable sets
 2.8 Gödel's incompleteness theorem
 2.9 Exercises
 2.10 Answers to exercises

3 Efficient computation, hierarchy theorems.
 3.1 Basic Definitions
 3.2 Hierarchy theorems

4 The complexity classes L, P and PSPACE.
 4.1 Is the definition of P model dependent?
 4.2 Examples of members in the complexity classes.

5 Nondeterministic computation
 5.1 Nondeterministic Turing machines

6 Relations among complexity classes
 6.1 Nondeterministic space vs. deterministic time
 6.2 Nondeterministic time vs. deterministic space
 6.3 Deterministic space vs. nondeterministic space

7 Complete problems
 7.1 NP-complete problems
 7.2 PSPACE-complete problems
 7.3 P-complete problems
 7.4 NL-complete problems

8 Constructing more complexity-classes

9 Probabilistic computation
 9.1 Relations to other complexity classes

10 Pseudorandom number generators

11 Parallel computation
 11.1 The circuit model of computation
 11.2 NC
 11.3 Parallel time vs sequential space

12 Relativized computation

13 Interactive proofs

1 Preface

The present set of notes has grown out of a set of courses I have given at the Royal Institute of Technology. The courses have been given at an introductory graduate level, but interested undergraduates have also followed them.

The main idea of the course has been to give the broad picture of modern complexity theory: to define the basic complexity classes, give some examples of each complexity class and to prove the most standard relations. The set of notes does not contain the amount of detail wanted from a textbook. I have taken the liberty of skipping many boring details and tried to emphasize the ideas involved in the proofs. Probably in many places more details would be helpful and I would be grateful for hints on where this is the case.

Most of the notes are at a fairly introductory level but some of the sections contain more advanced material. This is in particular true for the section on pseudorandom number generators and the proof that IP = PSPACE. Anyone getting stuck in these parts of the notes should not be disappointed.

These notes have benefited from feedback from colleagues who have taught courses based on this material. In particular I am grateful to Jens Lagergren and Ingrid Lindström. The students who have taken the courses, together with other people, have also helped me correct many errors. Sincere thanks to Jerker Andersson, Per Andersson, Lars Arvestad, Jörgen Backelin, Christer Berg, Christer Carlsson, Jan Frelin, Mikael Goldmann, Pelle Grape, Joachim Hollman, Andreas Jakobik, Wojtek Janczewski, Kai-Mikael Jää-Aro, Viggo Kann, Mats Näslund, and Peter Rosengren.

Finally, let me just note that there are probably many errors and inaccuracies remaining and for those I must take full responsibility.


2 Recursive Functions

One central question in computer science is the basic question:

What functions are computable by a computer?

Oddly enough, this question preceded the invention of the modern com-

puter and thus it was originally phrased: “What functions are mechanically

computable?” The word “mechanically” should here be interpreted as “by

hand without really thinking”. Several independent attempts to answer this

question were made in the mid-1930’s. One possible reason that several

researchers independently came to consider this question is its close connections to the proof of Gödel's incompleteness theorem (Theorem 2.32) which was published in 1931.

Before we try to formalize the concept of a computable function, let

us be precise about what we mean by a function. We will be considering

functions from natural numbers (N = {0, 1, 2, . . .}) to natural numbers. This

might seem restrictive, but in fact it is not since we can code almost any

type of object as a natural number. As an example, suppose that we are

given a function from words of the English alphabet to graphs. Then we can

think of a word in the English alphabet as a number written in base 27 with

a = 1, b = 2 and so on. A graph on n nodes can be thought of as a sequence of (n choose 2) binary symbols where each symbol corresponds to a potential edge and it is 1 iff the edge actually is there. For instance suppose that we are looking at graphs with 3 nodes, and hence the possible edges are (1, 2), (1, 3) and (2, 3). If the graph only contains the edges (1, 3) and (2, 3) we code it as 011. Add a leading 1 and consider the result as a number written in binary notation (our example corresponds to (1011)_2 = 11). It is easy to see that the mapping from graphs to numbers is easy to compute and easy to invert and thus we can use this representation of graphs as well as any other. Thus a function from words over the English alphabet to graphs can be represented as a function from natural numbers to natural numbers.

In a similar way one can see that most objects that have any reasonable

formal representation can be represented as natural numbers. This fact will

be used constantly throughout these notes.
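As a concrete sketch of the graph coding just described (the function name and details are mine, not part of the notes), the graph-to-number map might look as follows in Python:

```python
# A sketch of the graph-to-number coding described above; names mine.

from itertools import combinations

def graph_to_number(n, edges):
    """Code a graph on nodes 1..n as a natural number: one bit per
    potential edge, in a fixed order, with a leading 1 prepended."""
    edges = {tuple(sorted(e)) for e in edges}
    bits = "1"                                   # the leading 1
    for e in combinations(range(1, n + 1), 2):   # (1,2), (1,3), (2,3), ...
        bits += "1" if e in edges else "0"
    return int(bits, 2)

# The example from the text: edges (1,3) and (2,3) on 3 nodes give the
# bit string 011 and hence the number (1011)_2 = 11.
assert graph_to_number(3, [(1, 3), (2, 3)]) == 11
```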

After this detour let us return to the question of which functions are me-

chanically computable. Mechanically computable functions are often called

recursive functions. The reason for this will soon be obvious.


2.1 Primitive Recursive Functions

The name “recursive” comes from the use of recursion, i.e. when a function

value f(x +1) is deﬁned in terms of previous values f(0), f(1) . . . f(x). The

primitive recursive functions deﬁne a large class of computable functions

which contains most natural functions. It contains some basic functions and

then new primitive recursive functions can be built from previously deﬁned

primitive recursive functions either by composition or primitive recursion.

Let us give a formal deﬁnition.

Definition 2.1 The following functions are primitive recursive

1. The successor function, σ(x) = x + 1.

2. Constants, m(x) = m for any constant m.

3. The projections, π^n_i(x_1, x_2, . . . , x_n) = x_i for 1 ≤ i ≤ n and any n.

The primitive recursive functions are also closed under the following two operations. Assume that g, h, g_1, g_2, . . . , g_m are known to be primitive recursive functions; then we can form new primitive recursive functions in the following ways.

4. Composition, f(x_1, x_2, . . . , x_n) = h(g_1(x_1, . . . , x_n), g_2(x_1, . . . , x_n), . . . , g_m(x_1, . . . , x_n)).

5. Primitive recursion. The function defined by

• f(0, x_2, x_3, . . . , x_n) = g(x_2, x_3, . . . , x_n)

• f(x_1 + 1, x_2, x_3, . . . , x_n) = h(x_1, f(x_1, . . . , x_n), x_2, . . . , x_n)

To get a feeling for this deﬁnition let us prove that some common functions

are primitive recursive.

Example 2.2 Addition is defined as

Add(0, x_2) = π^1_1(x_2)
Add(x_1 + 1, x_2) = σ(π^3_2(x_1, Add(x_1, x_2), x_2)) = σ(Add(x_1, x_2))

It will be very cumbersome to follow the notation of the definition of the primitive recursive functions strictly. Thus instead of the above, not very transparent (but formally correct) definition we will use the equivalent, more transparent (but formally incorrect) version stated below.

Add(0, x_2) = x_2
Add(x_1 + 1, x_2) = Add(x_1, x_2) + 1

Example 2.3 Multiplication can be defined as

Mult(0, x_2) = 0
Mult(x_1 + 1, x_2) = Add(x_2, Mult(x_1, x_2))
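To see the recursion scheme in action, here is a small Python sketch (the combinator names are mine, not from the notes) building Add and Mult exactly as in Examples 2.2 and 2.3; rule 5 becomes a loop since f(x_1 + 1, . . .) depends only on f(x_1, . . .):

```python
# A sketch of rules 1 and 5 as Python combinators; the names are mine.

def successor(x):
    return x + 1

def prim_rec(g, h):
    """Return f with f(0, *xs) = g(*xs) and
    f(y + 1, *xs) = h(y, f(y, *xs), *xs)."""
    def f(y, *xs):
        acc = g(*xs)
        for i in range(y):
            acc = h(i, acc, *xs)
        return acc
    return f

# Add and Mult as in Examples 2.2 and 2.3.
Add = prim_rec(lambda x2: x2,
               lambda x1, rec, x2: successor(rec))
Mult = prim_rec(lambda x2: 0,
                lambda x1, rec, x2: Add(x2, rec))

assert Add(3, 4) == 7 and Mult(3, 4) == 12
```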

Example 2.4 We cannot define subtraction as usual since we require the answer to be nonnegative. (This is due to the fact that we have decided to work with natural numbers; if we instead were working with integers the situation would be different.) However, we can define a function which takes the same value as subtraction whenever it is positive and otherwise takes the value 0. First define a function of one variable which is basically subtraction by 1.

Sub1(0) = 0
Sub1(x + 1) = x

and now we can let

Sub(x_1, 0) = x_1
Sub(x_1, x_2 + 1) = Sub1(Sub(x_1, x_2)).

Here for convenience we have interchanged the order of the arguments in the definition of the recursion but this can be justified by the composition rule.

Example 2.5 If f(x, y) = ∏_{i=0}^{y−1} g(x, i), where we let f(x, 0) = 1, and g is primitive recursive then so is f since it can be defined by

f(x, 0) = 1
f(x, y + 1) = Mult(f(x, y), g(x, y)).

Example 2.6 We can define a miniature version of the signum function by

Sg(0) = 0
Sg(x + 1) = 1

and this allows us to define equality by

Eq(m, n) = Sub(1, Add(Sg(Sub(n, m)), Sg(Sub(m, n))))

since Sub(n, m) and Sub(m, n) are both zero iff n = m. Equality is here defined by Eq(m, n) = 1 if m and n are equal and Eq(m, n) = 0 otherwise.

Equality is not really a function but a predicate of pairs of numbers, i.e. a property of pairs of numbers. However, as we did above, it is convenient to identify predicates with functions that take the values 0 and 1, letting the value of the function be 1 exactly when the predicate is true. With this convention we define a predicate to be primitive recursive exactly when the corresponding function is primitive recursive. This naturally leads to an efficient way to prove that more functions are primitive recursive. Namely, let g and h be primitive recursive functions and let P be a primitive recursive predicate. Then the function f(x) defined by g(x) if P(x) and h(x) otherwise will be primitive recursive since it can be written as

Add(Mult(g(x), P(x)), Mult(h(x), Sub(1, P(x))))

(which in ordinary notation is P ∗ g + (1 − P) ∗ h).
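Continuing the earlier sketch (still with names of my own choosing), Examples 2.4-2.6 and the predicate trick above become:

```python
# Continuing the earlier sketch: Sub1, Sg, Sub, Eq and the predicate
# combination P*g + (1 - P)*h; assumes prim_rec, Add, Mult from above.

Sub1 = prim_rec(lambda: 0, lambda x, rec: x)   # Sub1(0) = 0, Sub1(x+1) = x
Sg = prim_rec(lambda: 0, lambda x, rec: 1)     # Sg(0) = 0, Sg(x+1) = 1

def Sub(x1, x2):
    # The recursion runs on the second argument, as in Example 2.4.
    return prim_rec(lambda a: a, lambda _, rec, a: Sub1(rec))(x2, x1)

def Eq(m, n):
    return Sub(1, Add(Sg(Sub(n, m)), Sg(Sub(m, n))))

def cond(P, g, h):
    # f(x) = g(x) if P(x) else h(x).
    return lambda x: Add(Mult(g(x), P(x)), Mult(h(x), Sub(1, P(x))))

assert Sub(5, 2) == 3 and Sub(2, 5) == 0 and Eq(4, 4) == 1 and Eq(4, 5) == 0
```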

Continuing along these lines it is not diﬃcult (but tedious) to prove that

most simple functions are primitive recursive. Let us now argue that all

primitive recursive functions are mechanically computable. Of course this

can only be an informal argument since “mechanically computable” is only

an intuitive notion.

Each primitive recursive function is deﬁned as a sequence of statements

starting with basic functions of the types 1-3 and then using rules 4-5. We

will call this a derivation of the function. We will argue that primitive recur-

sive functions are mechanically computable by induction over the complexity

of the derivation (i.e. the number of steps in the derivation).

The simplest functions are the basic functions 1-3 and, arguing infor-

mally, are easy to compute. In general a primitive recursive function f will

be obtained using the rules 4 and 5 from functions deﬁned previously. Since

the derivations of these functions are subderivations of the given derivation,

we can conclude that the functions used in the deﬁnition are mechanically

computable. Suppose the new function is constructed by composition, then

we can compute f by ﬁrst computing the g

i

and then computing h of the

results. On the other hand if we use primitive recursion then we can com-

pute f when the ﬁrst argument is 0 since it then agrees with g which is


computable by induction and then we can see that we can compute f in

general by induction over the size of the ﬁrst argument. This ﬁnishes the

informal argument that all primitive recursive functions are mechanically

computable.

Before we continue, let us note the following: If we look at the proof

in the case of multiplication it shows that multiplication is mechanically

computable but it gives an extremely ineﬃcient algorithm. Thus the present

argument has nothing to do with computing eﬃciently.

Although we have seen that most simple functions are primitive recursive

there are in fact functions which are mechanically computable but are not

primitive recursive. We will give one such function which, we have to admit,

would not be the first function one would like to compute but which certainly is very

important from a theoretical point of view.

A derivation of a primitive recursive function is just a ﬁnite number of

symbols and thus we can code it as a number. If the coding is reasonable it

is mechanically computable to decide, given a number, whether the number

corresponds to a correct derivation of a primitive recursive function in one

variable. Now let f_1 be the primitive recursive function in one variable which corresponds to the smallest number giving such a legal derivation, let f_2 be the function which corresponds to the second smallest number, and so on. Observe that given x it is possible to mechanically find the derivation of f_x by the following mechanical but inefficient procedure. Start with 0 and check the numbers in increasing order whether they correspond to correct derivations of a function in one variable. The x'th legal derivation found is the derivation of f_x. Now let

V(x) = f_x(x) + 1.

By the above discussion V is mechanically computable, since once we have found the derivation of f_x we can compute it on any input. On the other hand we claim that V does not agree with any primitive recursive function. If V was primitive recursive then V = f_y for some number y. Now look at the value of V at the point y. By the definition of V the value should be f_y(y) + 1. On the other hand if V = f_y then it is f_y(y). We have reached a contradiction and we have thus proved:

Theorem 2.7 There are mechanically computable functions which are not

primitive recursive.


The method of proof used to prove this theorem is called diagonalization. To see the reason for this name think of an infinite two-dimensional array with natural numbers along one axis and the primitive recursive functions on the other. At position (i, j) we write the number f_j(i). We then construct a function which is not primitive recursive by going down the diagonal and making sure that our function disagrees with f_i on input i. The idea is similar to the proof that Cantor used to prove that the real numbers are not denumerable.

The above proof demonstrates something very important. If we want

to have a characterization of all mechanically computable functions the de-

scription cannot be mechanically computable by itself. By this we mean that

given x we should not be able to find f_x in a mechanical way. If we could find f_x then the above defined function V would be mechanically computable and we would get a function which was not in our list.

2.2 Partial recursive functions

The way around the problem mentioned at the end of the last section is to allow a derivation to define a function which is only partial, i.e. is not defined for all inputs. We will do this by giving another way of forming new functions. This modification will give a new class of functions called the partial recursive functions.

Definition 2.8 The partial recursive functions contain the basic functions defined by 1-3 for primitive recursive functions and are closed under the operations 4 and 5. There is an extra way of forming new functions:

6. Unbounded search. Assume that g is a partial recursive function and let f(x_1, . . . , x_n) be the least m such that g(m, x_1, . . . , x_n) = 0 and such that g(y, x_1, . . . , x_n) is defined for all y < m. If no such m exists then f(x_1, . . . , x_n) is undefined. Then f is partial recursive.
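As a Python sketch (the name mu_search is mine, not the notes'), rule 6 is simply a search loop with no a priori bound, which is exactly why the resulting function may be partial:

```python
# A sketch of the unbounded search rule (6); mu_search is my own name.

def mu_search(g):
    """Return f where f(*xs) is the least m with g(m, *xs) == 0 and
    g(y, *xs) defined for all y < m; f diverges if no such m exists."""
    def f(*xs):
        m = 0
        while g(m, *xs) != 0:   # may loop for ever, or diverge inside g
            m += 1
        return m
    return f

# Example: integer square root as the least m with (m + 1)^2 > x.
isqrt = mu_search(lambda m, x: 0 if (m + 1) ** 2 > x else 1)
assert isqrt(10) == 3
```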

Our ﬁrst candidate for the class of mechanically computable functions

will be a subclass of the partial recursive functions.

Deﬁnition 2.9 A function is recursive (or total recursive) if it is a partial

recursive function which is total, i.e. which is deﬁned for all inputs.

Observe that a recursive function is in an intuitive sense mechanically

computable. To see this we just have to check that the property of mechan-

ical computability is closed under the rule 6, given that f is deﬁned. But


this follows since we just have to keep computing g until we ﬁnd a value for

which it takes the value 0. The key point here is that since f is total we

know that eventually there is going to be such a value.

Also observe that there is no obvious way to determine whether a given

derivation deﬁnes a total function and thus deﬁnes a recursive function.

The problem is that it is difficult to decide whether the defined function is total (i.e. if for each value of x_1, x_2, . . . , x_n there is an m such that g(m, x_1, x_2, . . . , x_n) = 0). This implies that we will not be able to imitate the proof of Theorem 2.7 and thus there is some hope that this definition will give all mechanically computable functions. Let us next describe another approach to define mechanically computable functions.

2.3 Turing Machines

The deﬁnition of mechanically computable functions as recursive functions

given in the last section is due to Kleene. Other deﬁnitions of mechanically

computable were given by Church (eﬀective calculability, also by equations),

Post (canonical systems, as rewriting systems) and Turing (Turing machines,

a type of primitive computer). Of these we will only look closer at Turing

machines. This is probably the deﬁnition which to most of us today, after

the invention of the modern computer, seems most natural.

A Turing machine is a very primitive computer. A simple picture of one is given in Figure 1.

Figure 1: A Turing machine

The infinite tape serves as memory and input and output

device. Each square can contain one symbol from a ﬁnite alphabet which

we will denote by Σ. It is not important which alphabet the machine uses

and thus let us think of it as {0, 1, B} where B symbolizes the blank square.

The input is initially given on the tape. At each point in time the head is

located at one of the tape squares and is in one of a ﬁnite number of states.

The machine reads the content of the square the head is located at, and


State  Symbol  New State  New Symbol  Move
q_0    0       q_1        B           R
q_0    1       q_0        B           R
q_0    B       q_h        1
q_1    0, 1    q_1        B           R
q_1    B       q_h        0

Table 1: The next step function of a simple Turing machine

based on this value and its state, it writes something into the square, enters

a potentially new state and moves left or right. Formally this is described

by the next-move function

f : Q × Σ → Q × Σ × {R, L}

where Q is the set of possible states and R(L) symbolizes moving right (left).

From an intuitive point of view the next-move function is the program of

the machine.

Initially the machine is in a special start-state, q_0, and the head is located on the leftmost square of the input. The tape squares that do not contain any part of the input contain the symbol B. There is a special halt-state, q_h, and when the machine reaches this state it halts. The output is now defined by the non-blank symbols on the tape.

It is possible to make the Turing machine more eﬃcient by allowing more

than one tape. In such a case there is one head on each tape. If there are k

tapes then the next-step function depends on the contents of all k squares

where the heads are located, it describes the movements of all k heads and

what new symbols to write into the k squares. If we have several tapes then

it is common to have one tape on which the input is located, and not to allow

the machine to write on this tape. In a similar spirit there is one output-tape

which the machine cannot read. This convention separates out the tasks of

reading the input and writing the output and thus we can concentrate on

the heart of the matter, the computation.

However, most of the time we will assume that we have a one-tape Turing

machine. When we are discussing computability this will not matter, but

later when considering eﬃciency of computation results will change slightly.

Example 2.10 Let us deﬁne a Turing Machine which checks if the input

contains only ones and no zeros. It is given in Table 1.


Thus the machine starts in state q_0 and remains in this state until it has seen a "0". If it sees a "B" before it sees a "0" it accepts. If it ever sees a "0" it erases the rest of the input, prints the answer 0 and then halts.
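As an aside, the machine in Table 1 is small enough to simulate directly. The Python sketch below is mine (in particular the dict encoding of the table); it runs the machine on a one-way infinite tape:

```python
# A sketch simulating the Table 1 machine; the encoding is my own.
# (state, symbol) -> (new state, new symbol, move); +1 means move R.
# The two halting transitions only write, so they get move 0.
delta = {
    ("q0", "0"): ("q1", "B", +1),
    ("q0", "1"): ("q0", "B", +1),
    ("q0", "B"): ("qh", "1", 0),
    ("q1", "0"): ("q1", "B", +1),
    ("q1", "1"): ("q1", "B", +1),
    ("q1", "B"): ("qh", "0", 0),
}

def run(tape_input):
    tape, pos, state = list(tape_input), 0, "q0"
    while state != "qh":
        if pos == len(tape):
            tape.append("B")              # blank tape extends on demand
        state, new_symbol, move = delta[(state, tape[pos])]
        tape[pos] = new_symbol
        pos += move
    return "".join(tape).strip("B")       # output: the non-blank symbols

assert run("111") == "1" and run("101") == "0"
```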

Example 2.11 Programming Turing machines gets slightly cumbersome

and as an example let us give a Turing machine which computes the sum of

two binary numbers. We assume that we are given two numbers with least

signiﬁcant bit ﬁrst and that there is a B between the two numbers. To make

things simpler we also assume that we have a special output-tape on which

we print the answer, also here beginning with the least signiﬁcant bit.

To make the representation compact we will let the states have two indices. The first index is just a string of letters while the other is a number, which in general will be in the range 0 to 3. Let division be integer division and let lsb(i) be the least significant bit of i. The program is given in Table 2, where we assume for notational convenience that the machine starts in state q_{0,0}:

It will be quite time-consuming to explicitly give Turing machines which

compute more complicated functions. For this reason this will be the last

Turing machine that we specify explicitly. To be honest there are more

economical ways to specify Turing machines. One can build up an arsenal

of small machines doing basic operations and then deﬁne composition of

Turing machines. However, since programming Turing machines is not our

main task we will not pursue this direction either.

A Turing machine deﬁnes only a partial function since it is not clear

that the machine will halt for all inputs. But whenever a Turing machine

halts for all inputs it corresponds to a total function and we will call such a

function Turing computable.

The “Turing computable functions” is a reasonable deﬁnition of the me-

chanically computable functions and thus the ﬁrst interesting question is

how this new class of functions relates to the recursive functions. We have

the following theorem.

Theorem 2.12 A function is Turing computable iﬀ it is recursive.

We will not give the proof of this theorem. The proof is rather tedious,

and hence we will only give an outline of the general approach. The easier

part of the theorem is to prove that if a function is recursive then it is

Turing computable. Before, when we argued that recursive functions were

mechanically computable, most people who have programmed a modern


State      Symbol      New State       New Symbol  Move  Output
q_{0,i}    0, 1 (= j)  q_{x,i+j}       B           R
q_{x,i}    0, 1        q_{xm,i}        same        R
q_{x,i}    B           q_{xo,i}        B           R
q_{xo,i}   B           q_{xo,i}        B           R
q_{xo,i}   0, 1 (= j)  q_{yc,(i+j)/2}  B           R     lsb(i+j)
q_{yc,i}   0, 1 (= j)  q_{yc,(i+j)/2}  B           R     lsb(i+j)
q_{yc,i}   B           q_h             B                 i
q_{xm,i}   0, 1        q_{xm,i}        same        R
q_{xm,i}   B           q_{sy,i}        B           R
q_{sy,i}   B           q_{sy,i}        B           R
q_{sy,i}   0, 1 (= j)  q_{y,(i+j)/2}   B           R     lsb(i+j)
q_{y,i}    0, 1        q_{sx,i}        same        L
q_{y,i}    B           q_{yo,i}        B           L
q_{sx,i}   B           q_{sx,i}        B           L
q_{sx,i}   0, 1        q_{fx,i}        same        L
q_{fx,i}   0, 1        q_{fx,i}        same        L
q_{fx,i}   B           q_{0,i}         B           R
q_{yo,i}   B           q_{yo,i}        B           L
q_{yo,i}   0, 1        q_{xf,i}        same        L
q_{xf,i}   0, 1        q_{xf,i}        same        L
q_{xf,i}   B           q_{cx,i}        B           R
q_{cx,i}   0, 1 (= j)  q_{cx,(i+j)/2}  B           R     lsb(i+j)
q_{cx,i}   B           q_h             B                 i

Table 2: A Turing machine for addition


computer probably felt that without too much trouble one could write a

program that would compute a recursive function. It is harder to program

Turing machines, but still feasible.

For the other implication one has to show that any Turing computable

function is recursive. The way to do this is to mimic the behavior of the

Turing machine by equations. This gets fairly involved and we will not

describe this procedure here.

2.4 Church’s thesis

In the last section we stated the theorem that recursive functions are iden-

tical to the Turing computable functions. It turns out that all the other at-

tempts to formalize mechanically computable functions give the same class

of functions. This leads one to believe that we have captured the right no-

tion of computability and this belief is usually referred to as Church’s thesis.

Let us state it for future reference.

Church’s thesis: The class of recursive functions is the class of mechan-

ically computable functions, and any reasonable deﬁnition of mechanically

computable will give the same class of functions.

Observe that Church’s thesis is not a mathematical theorem but a state-

ment of experience. Thus we can use such imprecise words as “reasonable”.

Church’s thesis is very convenient to use when arguing about computabil-

ity. Since any high level computer language describes a reasonable model of

computation the class of functions computable by high level programs is in-

cluded in the class of recursive functions. Thus as long as our descriptions of

procedures are detailed enough so that we feel certain that we could write a

high level program to do the computation, we can draw the conclusion that

we can do the computation on a Turing machine or by a recursive function.

In this way we do not have to worry about actually programming the Turing

machine.

For the remainder of these notes we will use the term “recursive func-

tions” for the class of functions described by Church’s thesis. Sometimes,

instead of saying that a given function, f, is a recursive function we will

phrase this as “f is computable”. When we argue about such functions

we will usually argue in terms of Turing machines but the algorithms we

describe will only be speciﬁed quite informally.


2.5 Functions, sets and languages

If a function f only takes two values (which we assume without loss of

generality to be 0 and 1) then we can identify f with the set, A, of inputs

for which the function takes the value 1. In formulas

x ∈ A ⇔ f(x) = 1.

In this connection sets are also called languages, e.g. the set of prime num-

bers could be called the language of prime numbers. The reason for this is

historical and comes from the theory of formal languages. The function f is

called the characteristic function of A. Sometimes the characteristic function of A will be denoted by χ_A. A set is called recursive iff its characteristic function is recursive. Thus A is recursive iff given x one can mechanically decide whether x ∈ A.

2.6 Recursively enumerable sets

We have deﬁned recursive sets to be the sets for which membership can be

tested mechanically i.e. a set A is recursive if given x it is computable to test

whether x ∈ A. Another interesting class of sets is the class of sets which

can be listed mechanically.

Definition 2.13 A set A is recursively enumerable iff there is a Turing machine M_A which, when started on the empty input tape, lists the members of A on its output tape.

It is important to remember that, while any member of A will eventually be listed, the members of A are not necessarily listed in order and that M will probably never halt, since most of the time A is infinite. Thus if we want to know whether x ∈ A it is not clear how to use M for this purpose. We can watch the output of M and if x appears we know that x ∈ A, but as long as we have not seen x we do not know whether x ∉ A or we have simply not waited long enough. If we required that A be listed in order we could check whether x ∈ A, since we would only have to wait until we had seen x or a number greater than x. (There is a slightly subtle point here since it might be the case that M never outputs such a number, which would happen in the case when A is finite and does not contain x or any larger number. However, also in this case A is recursive since any finite set is recursive. It is interesting to note that given the machine M it is not clear which alternative should be used to recognize A, but one of them will work and that is all we care about.) Thus in this case we can conclude that A is recursive, but in general this is not true.

Theorem 2.14 If a set is recursive then it is recursively enumerable. How-

ever there are sets that are recursively enumerable that are not recursive.

Proof: That recursive implies recursively enumerable is not too hard; the

procedure below will even print the members of A in order.

For i = 0, 1 . . . ∞

If i ∈ A print i.

Since it is computable to determine whether i ∈ A this will give a correct

enumeration of A.

The other part of the theorem is harder and requires some more notation.

A Turing machine is essentially deﬁned by the next-step function which can

be described by a number of symbols and thus can be coded as an integer.

Let us outline in more detail how this is done. We have described a Turing

machine by a number of lines where each line contains the following items:

State, Symbol, New state, New Symbol, Move and Output. Let us make

precise how to code this information. A state should be written as q_x where x is a natural number written in binary. A symbol is from the set {0, 1, B},

while a move is either R or L and the output is either 0, 1 or B. Each item

is separated from the next by the special symbol &, the end of a line is

marked as & & and the end of the speciﬁcation is marked as & & &. We

assume that the start state is always q_0 and the halt state q_1. With these

conventions a Turing machine is completely speciﬁed by a ﬁnite string over

the alphabet {0, 1, B, &, R, L, q}. This coding is also eﬃcient in the sense

that given a string over this alphabet it is possible to mechanically decide

whether it is a correct description of a Turing machine (think about this for

a while). By standard coding we can think of this ﬁnite string as a number

written in base 8. Thus we can uniquely code a Turing machine as a natural

number.

For technical reasons we allow the end of the specification not to be the

last symbols in the coding. If we encounter the end of the speciﬁcation we

will just discard the rest of the description. This deﬁnition implies that each

Turing machine occurs inﬁnitely many times in any natural enumeration.

We will denote the Turing machine which is given by the description corresponding to y by M_y. We again emphasize that given y it is possible to mechanically determine whether it corresponds to a Turing machine and in such a case find that Turing machine. Furthermore we claim that once we have the description of the Turing machine we can run it on any input (simulate M_y on a given input). We make this explicit by stating a theorem we will not prove.

Theorem 2.15 There is a universal Turing machine which on input (x, y, z) simulates z computational steps of M_y on input x. By this we mean that if M_y halts with output w on input x within z steps then also the universal machine outputs w. If M_y does not halt within z steps then the universal machine gives output "not halted". If y is not the description of a legal Turing machine, the universal Turing machine enters a special state q_ill, where it usually would halt, but this can be modified at will.

We will sometimes allow z to take the value ∞. In such a case the universal machine will simulate M_y until it halts, or go on for ever without halting if M_y does not halt on input x. The output will again agree with that of M_y.

In a more modern language, the universal Turing machine is more or less

an interpreter since it takes as input a Turing machine program together with

an input and then runs the program. We encourage the interested reader

to at least make a rough sketch of a program in his favorite programming

language which does the same thing as the universal Turing machine.
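Here is one such rough sketch in Python, reusing the transition-table representation from the Table 1 example above; the step bound z mirrors Theorem 2.15, but every representation choice here is mine:

```python
# A rough interpreter in the spirit of the universal machine: it takes
# a next-move table (the "program"), an input and a step bound z.
# Representation choices are mine; z = ∞ would become a while loop.

def universal(delta, x, z, start="q0", halt="qh"):
    tape, pos, state = (list(x) or ["B"]), 0, start
    for _ in range(z):
        if state == halt:
            break
        if pos == len(tape):
            tape.append("B")          # blank tape extends on demand
        state, new_symbol, move = delta[(state, tape[pos])]
        tape[pos] = new_symbol
        pos += move
    if state != halt:
        return "not halted"
    return "".join(tape).strip("B")

# With delta as in the Table 1 sketch: universal(delta, "11", 100) == "1"
```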

We now define a function which is in the same spirit as the function V which we proved not to be primitive recursive. To distinguish it we call it V_T.

V_T(x) = 1, if M_x halts on input x with output 0;
V_T(x) = 0, otherwise.

V_T is the characteristic function of a set which we will denote by K_D. We call this set "the diagonal halting set" since it is the set of Turing machines which halt with output 0 when given their own encoding as input. We claim that K_D is recursively enumerable but not recursive. To prove the first claim observe that K_D can be enumerated by the following procedure:

For i = 1, 2 . . . ∞
For j = 1, 2, . . . i: If M_j is legal, run M_j for i steps on input j; if it halts within these i steps and gives output 0 and we have not listed j before, print j.

Observe that this is a recursive procedure using the universal Turing machine. The only detail to check is that we can decide whether j has been listed before. The easiest way to do this is to observe that j has not been listed before precisely if j = i or M_j halted in exactly i steps. The procedure lists K_D since all numbers ever printed are by definition members of K_D, and if x ∈ K_D and M_x halts in T steps on input x then x will be listed for i = max(x, T) and j = x.
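In Python the dovetailing might be sketched as follows, assuming a hypothetical helper halts_with_0(j, steps) that uses the universal machine to check, within a step bound, whether M_j is legal and halts with output 0 on input j:

```python
# A sketch of the dovetailing enumeration of K_D. The helper
# halts_with_0(j, steps) is hypothetical: using the universal machine,
# it should report whether M_j is legal and halts with output 0 on
# input j within the given number of steps.

from itertools import count

def enumerate_KD(halts_with_0):
    for i in count(1):                    # i = 1, 2, 3, ...
        for j in range(1, i + 1):
            # "not listed before": j = i or M_j halts in exactly i steps
            new = (j == i) or not halts_with_0(j, i - 1)
            if halts_with_0(j, i) and new:
                print(j)
```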

To see that K_D is not recursive, suppose that V_T can be computed by a Turing machine M. We know that M = M_y for some y. Consider what happens when M is fed input y. If it halts with output 0 then V_T(y) = 1. On the other hand if M does not halt with output 0 then V_T(y) = 0. In either case M_y makes an error and hence we have reached a contradiction. This finishes the proof of Theorem 2.14.

We have proved slightly more than was required by the theorem. We have given an explicit function which cannot be computed by a Turing machine. Let us state this as a separate theorem.

Theorem 2.16 The function V_T cannot be computed by a Turing machine, and hence is not recursive.

2.7 Some facts about recursively enumerable sets

Recursion theory is really the predecessor of complexity theory and let us

therefore prove some of the standard theorems to give us something to com-

pare with later. In this section we will abbreviate recursively enumerable as

“r.e.”.

Theorem 2.17 A is recursive if and only if both A and the complement of A, Ā, are r.e.

Proof: If A is recursive then also Ā is recursive (we get a machine recognizing Ā from a machine recognizing A by changing the output). Since any recursive set is r.e. we have proved one direction of the theorem. For the converse, to decide whether x ∈ A we just enumerate A and Ā in parallel, and when x appears in one of the lists, which we know it will, we can give the answer and halt.
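The parallel enumeration can be sketched in Python with the two enumerations modelled as generators (assumed infinite here, for simplicity; the modelling is mine):

```python
# A sketch of deciding x ∈ A by enumerating A and its complement in
# parallel; both enumerations are modelled as infinite generators.

from itertools import count

def decide(x, enum_A, enum_A_bar):
    a, b = enum_A(), enum_A_bar()
    while True:                 # x must eventually show up in one list
        if next(a) == x:
            return True
        if next(b) == x:
            return False

# Toy example: the even numbers and their complement.
evens = lambda: (2 * i for i in count())
odds = lambda: (2 * i + 1 for i in count())
assert decide(10, evens, odds) and not decide(7, evens, odds)
```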

From Theorem 2.16 we have the following immediate corollary.

Corollary 2.18 The complement of K_D is not r.e.


For the next theorem we need the fact that we can code pairs of natural

numbers as natural numbers. For instance one such coding is given by

f(x, y) = (x + y)(x + y + 1)/2 + x.
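As a sketch, this coding and an inverse for it can be written as follows in Python (the inverse computation is my own addition, not part of the notes):

```python
# The pairing function above, together with an inverse; the inverse
# computation is mine. pair walks the diagonals of the (x, y) grid.

from math import isqrt

def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + x

def unpair(z):
    s = (isqrt(8 * z + 1) - 1) // 2   # the diagonal s = x + y
    x = z - s * (s + 1) // 2          # the offset within the diagonal
    return x, s - x

assert all(unpair(pair(x, y)) == (x, y) for x in range(50) for y in range(50))
```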

Theorem 2.19 A is r.e. iﬀ there is a recursive set B such that x ∈ A ⇔

∃y (x, y) ∈ B.

Proof: If there is such a B then A can be enumerated by the following program:

For z = 0, 1, 2, . . . ∞
For x = 0, 1, 2, . . . z: If for some y ≤ z we have (x, y) ∈ B and (x, y′) ∉ B for all y′ < y, and x has not been printed before, then print x.

First observe that x has not been printed before if either x or y is equal to z. By the relation between A and B this program will list only members of A, and if x ∈ A and y is the smallest number such that (x, y) ∈ B then x is listed for z = max(x, y).
is listed for z = max(x, y).

To see the converse, let M

A

be the Turing machine which enumerates

A. Deﬁne B to be the set of pairs (x, y) such that x is output by M

A

in at

most y steps. By the existence of the universal Turing machine it follows

that B is recursive and by deﬁnition ∃y(x, y) ∈ B precisely when x appears

in the output of M

A

, i.e. when x ∈ A. This ﬁnishes the proof of Theorem

2.19.

The last theorem says that r.e. sets are just recursive sets plus an existen-

tial quantiﬁer. We will later see that there is a similar relationship between

the complexity classes P and NP.

Let the halting set, K, be deﬁned by

K = {(x, y)|M

y

is legal and halts on input x}.

To determine whether a given pair (x, y) ∈ K is for natural reasons called the

halting problem. This is closely related to the diagonal halting problem which

we have already proved not to be recursive in the last section. Intuitively

this should imply that the halting problem also is not recursive and in fact

this is the case.

Theorem 2.20 The halting problem is not recursive.


Proof: Suppose K is recursive, i.e. that there is a Turing machine M which on input (x, y) gives output 1 precisely when M_y is legal and halts on input x. We will use this machine to construct a machine that computes V_T using M as a subroutine. Since we have already proved that no machine can compute V_T this will prove the theorem.

Now suppose we are given an input x and want to compute V_T(x). First decide whether M_x is a legal Turing machine. If it is not we output 0 and halt. If M_x is a legal machine we feed the pair (x, x) to M. If M outputs 0 we can safely output 0 since we know that M_x does not halt on input x. On the other hand if M outputs 1 we use the universal machine on input (x, x, ∞) to determine the output of M_x on input x. If the output is 0 we give the answer 1 and otherwise we answer 0. This gives a mechanical procedure that computes V_T and we have reached the desired contradiction.

It is now clear that other problems can be proved to be non-recursive by

a similar technique. Namely we assume that the given problem is recursive

and we then make an algorithm for computing something that we already

know is not recursive. One general such method is by a standard type of

reduction and let us next deﬁne this concept.

Definition 2.21 For sets A and B let the notation A ≤_m B mean that there is a recursive function f such that x ∈ A ⇔ f(x) ∈ B.

The reason for the letter m on the less than sign is that one usually

deﬁnes several diﬀerent reductions. This particular reduction is usually re-

ferred to as a many-one reduction. We will not study other deﬁnitions in

detail, but since the only reduction we have done so far was not a many-one

reduction but a more general notion called Turing reduction, we will deﬁne

also this reduction.

Definition 2.22 For sets A and B let the notation A ≤_T B mean that, given a Turing machine that recognizes B, we can construct a Turing machine that recognizes A using the given machine as a subroutine.

The intuition for either of the above deﬁnitions is that A is not harder

to recognize than B. This is formalized as follows:

Theorem 2.23 If A ≤_m B and B is recursive then A is recursive.

Proof: To decide whether x ∈ A, first compute f(x) and then check whether f(x) ∈ B. Since both f and B are recursive this is a recursive procedure and it gives the correct answer by the definition of A ≤_m B.


Clearly the similar theorem with Turing reducibility rather than many-

one reducibility is also true (prove it). However in the future we will only

reason about many-one reducibility. Next let us deﬁne the hardest problem

within a given class.

Definition 2.24 A set A is r.e.-complete iff

1. A is r.e.

2. If B is r.e. then B ≤_m A.

We have

Theorem 2.25 The halting set is r.e.-complete.

Proof: The fact that the halting problem is r.e. can be seen in a similar way to how the diagonal halting problem K_D was seen to be r.e.: just run more and more machines for more and more steps, and output all pairs of machines and inputs that lead to halting.

To see that it is complete we have to prove that any other r.e. set B can be reduced to K. Let M be the Turing machine that enumerates B. Define M′ to be the Turing machine which on input x runs M until it outputs x (if ever) and then halts with output 0. Then M′ halts on input x precisely when x ∈ B. Thus if M′ = M_y we can let f(x) = (x, y) and this will give a reduction from B to K. The proof is complete.

It is also true that the diagonal halting problem is r.e.-complete, but we

omit the proof. There are many other (often more natural) problems that

can be proved r.e.-complete (or to be even harder) and let us deﬁne two such

problems.

The first problem is called tiling and can be thought of as a two-dimensional domino game. We are given a finite set of squares (which will be called tiles), each with a marking on all four sides, and one tile placed at the origin in the plane. The question is whether it is possible to cover the entire positive quadrant with tiles such that on any two neighboring tiles the markings agree on their common side and such that each tile is equal to one of the given tiles.

Theorem 2.26 The complement of the tiling problem is r.e.-complete.


Proof: (Outline) Given a Turing machine M_x we will construct a set of tiles and a tile at the origin such that the entire positive quadrant can be tiled iff M_x does not halt on the empty input. The problem of whether a Turing machine halts on the empty input is not recursive (this is one of the exercises at the end of this chapter). We will construct the tiles in such a way that the only way to put down tiles correctly will be to make them describe a computation of M_x. The tile at the origin will make sure that the machine starts correctly (with some more complications this tile could have been eliminated as well).

Let the state of a tape cell be the content of the cell with the additional

information whether the head is there and in such a case which state the

machine is in. Now each tile will describe the state of three adjacent cells.

The tile to be placed at position (i, j) will describe the state of cells j, j + 1 and j + 2 at time i of the computation. Observe that this implies that tiles

which are to the left and right of each other will describe overlapping parts

of the tape. However, we will make sure that the descriptions do not conﬂict.

A tile will thus be partly specified by three cell-states s_1, s_2 and s_3 (we call this the signature of the tile) and we need to specify how to mark its four sides. The left hand side will be marked by (s_1, s_2) and the right hand side by (s_2, s_3). Observe that this makes sure that there is no conflict in the descriptions of a cell by different tiles. The markings on the top and the bottom will make sure that the computation proceeds correctly.

Suppose that the states of cells j, j + 1, and j + 2 are s_1, s_2, and s_3 at time t. Consider the states of these cells at time t + 1. If one of the s_i tells us that the head is present we know exactly what states the cells will be in. On the other hand if the head is not present in any of the three cells there might be several possibilities since the head could be in cells j − 1 or j + 3 and move into one of our positions. In a similar way there might be one or many (or even none) possible states for the three cells at time t − 1. For each possibility (s_1^{-1}, s_2^{-1}, s_3^{-1}) and (s_1^{+1}, s_2^{+1}, s_3^{+1}) of states in the previous and next step we make a tile. The marking on the lower side is (s_1^{-1}, s_2^{-1}, s_3^{-1}), (s_1, s_2, s_3) while the marking on the top side is (s_1, s_2, s_3), (s_1^{+1}, s_2^{+1}, s_3^{+1}). This completes the description of the tiles.

Finally at the origin we place a tile which describes that the machine starts in the first cell in state q_0 with blank tape. Now it is easy to see that a valid tiling describes a computation of M_x and the entire quadrant can be tiled iff M_x goes on for ever, i.e. it does not halt.

There are a couple of details to take care of. Namely that new heads

don’t enter from the left and that the entire tape is blank from the beginning.


A couple of special markings will take care of this. We leave the details to

the reader.

The second problem we will consider is number theoretic statements,

i.e. given a number theoretic statement is it false or true? One particular

statement people have been interested in for a long time (which supposedly

was proved true in 1993) is Fermat’s last theorem, which can be written as

follows

∀n > 2 ∀x, y, z  x^n + y^n = z^n → xyz = 0.

In general a number theoretic statement involves the quantiﬁers ∀ and ∃,

variables and usual arithmetical operations. Quantiﬁers range over natural

numbers.

Theorem 2.27 The set of true number theoretic statements is not recur-

sive.

Remark 2.28 In fact the set of true number theoretic statements is not even r.e. but has a much more complicated structure. To prove this would lead us too far into recursion theory. The interested reader can consult any standard text in recursion theory.

Proof: (Outline) Again we will prove that we can reduce the halting problem to the given problem. This time we will let an enormous integer z code the computation. Thus assume we are given a Turing machine M_x and that we want to decide whether it halts on the empty input.

The state of each cell will be given by a certain number of bits in the binary expansion of z. Suppose that each cell has at most S ≤ 2^r states. A computation of M_x that runs in time t never uses more than t tape cells and thus such a computation can be described by the content of t^2 cells (i.e. t cells each at t different points in time). This can now be coded as rt^2 bits and these bits concatenated will be the integer z. Now let A_x be an arithmetic formula such that A_x(z, t) is true iff z is a rt^2 bit integer which describes a correct computation of M_x which has halted. To check that such a formula exists requires a fair amount of detailed reasoning and let us just sketch how to construct it. First one makes a predicate Cell(i, j, z, t, p) which is true iff p is the integer that describes the content of cell i at time j. This amounts to extracting the r bits of z which are in positions starting at (it + j)r. Next one makes a predicate Move(p_1, p_2, p_3, q) which says that if p_1, p_2 and p_3 are the states of squares i − 1, i and i + 1 at time j then q is the resulting state of square i at time j + 1. The Cell predicate is from an

intuitive point of view very arithmetic (and thus we hope the reader feels

that it can be constructed). Move on the other hand is of constant size

(there are only 2^{4r} inputs, which is a constant depending only on x and independent of t) and thus can be coded by brute force. The predicate

A_x(z, t) is now equivalent to the conjunction of

∀i, j, p_1, p_2, p_3, q  Cell(i − 1, j, z, t, p_1) ∧ Cell(i, j, z, t, p_2) ∧ Cell(i + 1, j, z, t, p_3) ∧ Cell(i, j + 1, z, t, q) ⇒ Move(p_1, p_2, p_3, q)

and

∀q′  Cell(1, t, z, t, q′) ⇒ Stop(q′)

where Stop(p) is true if p is a halt state. Now we are almost done since M_x halts iff

∃z, t A_x(z, t)

and thus if we can decide the truth of arithmetic formulae with quantiﬁers

we can decide if a given Turing machine halts. Since we know that this is

not possible we have ﬁnished the outline of the proof.

Remark 2.29 It is interesting to note that (at least to me) the proofs of

the last two theorems are in some sense counter intuitive. It seems like the

hard part of the tiling problem is what to do at points where we can put down

many diﬀerent tiles (we never know if we made the correct decision). This

is not utilized in the proof. Rather at each point we have only one choice

and the hard part is to decide whether we can continue for ever. A similar

statement is true about the other proof.

Let us explicitly state a theorem we have used a couple of times.

Theorem 2.30 If A is r.e.-complete then A is not recursive.

Proof: Let B be a set that is r.e. but not recursive (e.g. the halting problem); then by the second property of being r.e.-complete, B ≤_m A. Now if A was recursive then by Theorem 2.23 we could conclude that B is recursive, contradicting the initial assumption that B is not recursive.


Before we end this section let us make an informal remark. What does

it mean that the halting problem is not recursive? Experience shows that

for most programs that do not halt there is a simple reason that they do not

halt. They often tend to go into an inﬁnite loop and of course such things can

be detected. We have only proved that there is not a single program which

when given as input the description of a Turing machine and an input to that

machine, the program will always give the correct answer to the question

whether the machine halts or not. One ﬁnal deﬁnition: A problem that is

not recursive is called undecidable. Thus the halting problem is undecidable.

2.8 Gödel's incompleteness theorem

Since we have done many of the pieces let us briefly outline a proof of Gödel's incompleteness theorem. This theorem basically says that there are statements in arithmetic which have neither a proof nor a disproof. We want to avoid too elaborate machinery and hence we will be rather informal and give an argument in the simplest case. However, before we state the theorem we need to address what we mean by "statement in arithmetic" and "proof".

Statements in arithmetic will simply be the formulas considered in the last examples, i.e. quantified formulas where the variables take values which are natural numbers. We encourage the reader to write common theorems and conjectures in number theory in this form to check its power.

The notion of a proof is more complicated. One starts with a set of

axioms and then one is allowed to combine axioms (according to some rules)

to derive new theorems. A proof is then just such a derivation which ends

with the desired statement.

First note that most proofs used in modern mathematics are much more informal and given in a natural language. However, proofs can be formalized (although most humans prefer informal proofs).

The most common set of axioms for number theory was proposed by

Peano, but one could think of other sets of axioms. We call a set of axioms

together with the rules for how they can be combined a proofsystem. There are

two crucial properties to look for in a proofsystem. We want to be able to prove all true theorems (this is called completeness) and we do not want to be able to prove any false theorems (the system is then called consistent).

In particular, for each statement A we want to be able to prove exactly one

of A and ¬A.

Our goal is to prove that there is no proof system that is both consistent


and complete. Unfortunately, this is not true since we can as axioms take all

true statements and then we need no rules for deriving new theorems. This

is not a very practical proofsystem since there is no way to tell whether a

given statement is indeed an axiom. Clearly the axioms need to be speciﬁed

in a more eﬃcient manner. We take the following deﬁnition.

Deﬁnition 2.31 A proofsystem is recursive iﬀ the set of proofs (and hence

the set of axioms) forms a recursive set.

We can now state the theorem.

Theorem 2.32 (Gödel) There is no recursive proofsystem which is both

consistent and complete.

Proof: Assume that there was indeed such a proofsystem. Then we claim

that also the set of all theorems would be recursive. Namely to decide

whether a statement A is true we could proceed as follows:

For z = 0, 1, 2, . . . ∞

If z is a correct proof of A output “true” and halt.

If z is a correct proof of ¬A output “false” and halt.

To check whether a given string is a correct proof is recursive by as-

sumption and since the proofsystem is consistent and complete sooner or

later there will be a proof of either A or ¬A. Thus this procedure always

halts with the correct answer. However, by Theorem 2.27 the set of true

statements is not recursive and hence we have reached a contradiction.

2.9 Exercises

Let us end this section with a couple of exercises (with answers). The reader

is encouraged to solve the exercises without looking too much at the answers.

II.1: Given x is it recursive to decide whether M_x halts on an empty input?

II.2: Is there any ﬁxed machine M, such that given y, deciding whether M

halts on input y is recursive?

II.3: Is there any ﬁxed machine M, such that given y, deciding whether M

halts on input y is not recursive?


II.4: Is it true that for each machine M, given y, it is recursive to decide whether M halts on input y in y^2 steps?

II.5: Given x is it recursive to decide whether there exists a y such that M_x halts on y?

II.6: Given x is it recursive to decide whether for all y, M_x halts on y?

II.7: If M_x halts on empty input let f(x) be the number of steps it needs before it halts and otherwise set f(x) = 0. Define the maximum time function by MT(y) = max_{x≤y} f(x). Is the maximum time function computable?

II.8 Prove that the maximum time function (cf. ex. II.7) grows at least as

fast as any recursive function. To be more precise let g be any recursive

function, then there is an x such that MT(x) > g(x).

II.9 Given a set of rewriting rules over a ﬁnite alphabet and a starting

string and a target string, is it decidable whether we, using the rewriting

rules, can transform the starting string to the target string? An example

of this instance is: Rewriting rules ab → ba, aa → bab and bb → a. Is it

possible to transform ababba to aaaabbb?

II.10 Given a set of rewriting rules over a ﬁnite alphabet and a starting

string. Is it decidable whether we, using the rewriting rules, can transform

the starting string to an arbitrarily long string?

II.11 Given a set of rewriting rules over a ﬁnite alphabet and a starting

string. Is it decidable whether we, using the rewriting rules, can transform

the starting string to an arbitrarily long string, if we restrict the left hand

side of each rewriting rule to be of length 1?

2.10 Answers to exercises

II.1 The problem is undecidable. We will prove that if we could decide whether M_x halts on the empty input, then we could decide whether M_z halts on input y for an arbitrary pair z, y. Namely, given z and y we make a machine M_x which basically looks like M_z but has a few special states. We have one special state for each symbol of y. On empty input M_x first goes through all its special states, which write y on the tape. The machine then returns to the beginning of the tape and from this point on it behaves as M_z. This new machine halts on empty input-tape iff M_z halted on input y and thus if we could decide the former we could decide the latter, which is known to be undecidable. To conclude the proof we only have to observe that it is recursive to compute the number x from the pair y and z.


II.2 There are plenty of machines of this type. For instance let M be the

machine that halts without looking at the input (or any machine deﬁning a

total function). In any of these cases the set of y's for which the machine halts is everything, which certainly is a decidable set.

II.3 Let M be the universal machine. Then M halts on input (x, y) iff M_x halts on input y. Since the latter problem is undecidable so is the former.

II.4 This problem is decidable by the existence of the universal machine. If

we are less formal we could just say that running a machine for a given number

of steps is easy. What makes halting problems diﬃcult is that we do not

know for how many steps to run the machine.

II.5 Undecidable. Suppose we could decide this problem; then we show that we could determine whether a machine M_x halts on empty input. Given M_x we create a machine M_z which first erases the input and then behaves as M_x. We claim that M_z halts on some input iff M_x halts on empty input. Also it is true that we can compute z from x. Thus if we could decide whether M_z halts on some input then we could decide whether M_x halts on empty input, but this is undecidable by exercise II.1.

II.6 Undecidable. The argument is the same as in the previous exercise. The constructed machine M_z halts on all inputs iff it halts on some input.

II.7 MT is not computable. Suppose it were; then we could decide whether M_x halts on empty input as follows: first compute MT(x) and then run M_x for MT(x) steps on the empty input. If it halts within this number of steps, we know the answer, and if it did not halt, we know by the definition of MT that it will never halt. Thus we always give the correct answer. However, we know by exercise II.1 that the halting problem on empty input is undecidable. The contradiction must come from our assumption that MT is computable.

II.8 Suppose we had a recursive function g such that g(x) ≥ MT(x) for all x. Then g(x) would work in place of MT(x) in the proof of exercise II.7 (we would run more steps than we needed to, but we would always get the correct answer). Thus there can be no such function.

II.9 The problem is undecidable; let us give an outline of why this is true. We will prove that if we could decide this problem then we could decide whether a given Turing machine halts on the empty input. The letters in our finite alphabet will be the nonblank symbols that can appear on the tape of the Turing machine, plus a symbol for each state of the machine. A string in this alphabet containing exactly one letter corresponding to a state of the machine can be viewed as coding the Turing machine at one instant in time by the following convention. The nonblank part of the tape is written from left to right and next to the letter corresponding to the square where the head is, we write the letter corresponding to the state the machine is in. For instance, suppose the Turing machine has symbols 0 and 1 and 4 states. We choose a, b, c and d to code these states. If, at an instant in time, the content of the tape is 0110000BBBBBBBBBBBB..., the head is in square 3 and the machine is in state 3, we could code this as 011c000. Now it is easy to make rewriting rules corresponding to the moves of the machine. For instance, if the machine would write 0, go into state 2 and move left when it is in state 3 and sees a 1, this would correspond to the rewriting rule 1c → b0. Now the question whether a machine halts on the empty input corresponds to the question whether we can rewrite a to a description of a halted Turing machine. To make this description unique we add a special state to the Turing machine such that instead of just halting, it erases the tape, returns to the beginning of the tape and then halts. In this case we get a unique halting configuration, which is used as the target string.

It is very interesting to note that although one would expect that the complexity of this problem comes from the fact that we do not know which rewriting rule to apply when there is a choice, this is not used in the proof. In fact, in the special cases we get from the reduction from Turing machines, at each point there is only one rule to apply (corresponding to the move of the Turing machine).

In the example given in the exercise there is no way to transform the start string to the target string. This may be seen by letting a have weight 2 and b have weight 1. Then the rewriting rules preserve weight, while the two given words are of different weight.

II.10 Undecidable. Do the same reduction as in exercise II.9 to get a rewriting system and a start string corresponding to a Turing machine M_x working on empty input. If this system produces arbitrarily long words then the machine does not halt. On the other hand, if we knew that the system did not produce arbitrarily long words then we could simulate the machine until it either halts or enters the same configuration twice (we know one of these two cases will happen). In the first case the machine halted and in the second it will loop forever. Thus if we could decide whether a rewriting system produces arbitrarily long strings we could decide whether a Turing machine halts on empty input.

II.11 This problem is decidable. Make a directed graph G whose nodes correspond to the letters in the alphabet. There is an edge from v to w if there is a rewriting rule which rewrites v into a string that contains w. Let the weight of this edge be 1 if the rewriting rule replaces v by a longer string and 0 otherwise. Now we claim that the rewriting rules can produce arbitrarily long strings iff there is a circuit of positive weight that can be reached from one of the letters contained in the starting word. The decidability now follows from standard graph algorithms.
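To make the last step concrete, here is a sketch in Python of the whole test; the encoding of the rules (a dict from each length-1 left hand side to the list of its right hand sides) and the helper names are our own choices. The claim it implements is the one above: arbitrarily long strings can be produced iff some weight-1 edge (v, w) with v reachable from the start word lies on a circuit, i.e. v is reachable again from w.

    def reachable_from(adj, sources):
        # plain depth-first reachability; adj maps a letter to a set of letters
        seen, stack = set(sources), list(sources)
        while stack:
            v = stack.pop()
            for w in adj.get(v, ()):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    def produces_arbitrarily_long(rules, start):
        adj = {}      # edge v -> w if some rule rewrites v to a string containing w
        heavy = []    # the weight-1 edges: rules whose right hand side is longer
        for v, rhss in rules.items():
            for rhs in rhss:
                for w in rhs:
                    adj.setdefault(v, set()).add(w)
                    if len(rhs) > 1:
                        heavy.append((v, w))
        ok = reachable_from(adj, set(start))
        return any(v in ok and v in reachable_from(adj, {w}) for v, w in heavy)

    # a -> bb doubles the string and b gets back to a, so strings grow forever
    assert produces_arbitrarily_long({"a": ["bb"], "b": ["a"]}, "a")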


3 Efficient computation, hierarchy theorems.

To decide what is mechanically computable is of course interesting, but what we really care about is what we can compute in practice, i.e. by using an ordinary computer for a reasonable amount of time. For the remainder of these notes all functions that we consider will be recursive, and we will concentrate on what resources are needed to compute the function. The first two such resources we will be interested in are computing time and space.

3.1 Basic Definitions

Let us start by defining what we mean by the running time and space usage of a Turing machine. The running time is a function of the input, and experience has shown that it is convenient to treat inputs of the same length together.

Definition 3.1 A Turing machine M runs in time T(n) if for every input of length n, M halts within T(n) steps.

Definition 3.2 The length of a string x is denoted by |x|.

The natural definition for space would be to say that a Turing machine uses space S(n) if its head visits at most S(n) squares on any input of length n. This definition is not quite suitable under all circumstances. In particular, the definition would imply that if the Turing machine looks at the entire input then S(n) ≥ n. We will, however, also be interested in machines which use less than linear space, and to make sense of this we have to modify the model slightly. We will assume that there is a special input-tape which is read-only and a special output-tape which is write-only. Apart from these two tapes the machine has one or more work-tapes which it can use in the old-fashioned way. We will then only count the number of squares visited on the work-tapes.

Definition 3.3 Assume that a Turing machine M has a read-only input-tape, a write-only output-tape and one or more work-tapes. Then we will say that M uses space S(n) if for every input of length n, M visits at most S(n) tape squares on its work-tapes before it halts.


When we are discussing running times we will most of the time not be worried about constants, i.e. we will not really care whether a machine runs in time n^2 or 10n^2. Thus the following definition is useful:

Definition 3.4 O(f(n)) is the set of functions which are bounded by cf(n) for some positive constant c.

Having made these definitions we can go on to see whether more time (space) actually enables us to compute more functions.

3.2 Hierarchy theorems

Before we start studying the hierarchy theorems (i.e. theorems of the type "more time helps") let us just prove that there are arbitrarily complex functions.

Theorem 3.5 For any recursive function f(n) there is a function V_f which is recursive but cannot be computed in time f(n).

Proof: Define V_f by letting V_f(x) be 1 if M_x is a legal Turing machine which halts with output 0 within f(|x|) steps on input x, and let V_f(x) take the value 0 otherwise.

We claim that V_f cannot be computed within time f(n) on any Turing machine. Suppose for contradiction that M_y computes V_f and halts within time f(|x|) for every input x. Consider what happens on input y. Since we have assumed that M_y halts within time f(|y|) we see that V_f(y) = 1 iff M_y gives output 0, and thus we have reached a contradiction.

To finish the proof of the theorem we need to check that V_f is recursive, but this is fairly straightforward. We need to do two things on input x.

1. Compute f(|x|).

2. Check if M_x is a legal Turing machine and in such a case simulate M_x for f(|x|) steps and check whether the output is 0.

The first of these two operations is recursive by assumption while the second can be done using the universal Turing machine as a subroutine. This completes the proof of Theorem 3.5.
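In code the diagonal function is short. The sketch below is ours and leans on two hypothetical helpers that the proof justifies but does not spell out: is_legal_machine(x), which checks that the string x codes a Turing machine, and simulate(x, inp, steps), a universal machine returning the output of M_x on input inp if it halts within the given number of steps, and None otherwise.

    def V(f, x):
        # The diagonal function of Theorem 3.5 (sketch; helpers are hypothetical).
        bound = f(len(x))                 # step 1: compute f(|x|)
        if is_legal_machine(x):           # step 2: simulate M_x for f(|x|) steps
            out = simulate(x, x, bound)   # None means: did not halt in time
            if out == 0:
                return 1
        return 0

Feeding any machine that allegedly computes V within time f(n) its own description forces it to disagree with itself, which is exactly the contradiction in the proof.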


Up to this point we have not assumed anything about the alphabet of our Turing machines. Implicitly we have thought of it as {0, 1, B}, but let us now highlight the role of the alphabet in two theorems.

Theorem 3.6 If a Turing machine M computes a {0, 1}-valued function f in time T(n) then there is a Turing machine M′ which computes f in time 2n + T(n)/2.

Proof: (Outline) Suppose that the alphabet of M is {0, 1, B}; then the alphabet of M′ will be 5-tuples of these symbols. Then we can code every five adjacent squares on the tape of M into a single square of M′. This will enable M′ to take several steps of M in one step, provided that the head stays within the same block of 5 symbols coded in the same square of M′. However, it is not clear that this will help, since it might be the case that many of M's steps will cross a boundary of 5-blocks. One can avoid this by having the 5-tuples of M′ be overlapping, and we leave this construction to the reader.

The reason for requiring that f only takes the values 0 and 1 is to make sure that M does not spend most of its time printing the output, and the reason for adding 2n to the running time of M′ is that M′ has to read the input in the old format before it can be written down more succinctly and then return to the initial configuration.

The previous theorem tells us that we can gain any constant factor in running time provided we are willing to work with a larger alphabet. The next theorem tells us that this is all we can gain.

Theorem 3.7 If a Turing machine M computes a {0, 1}-valued function f on inputs that are binary strings in time T(n), then there is a Turing machine M′ which uses the alphabet {0, 1, B} and which computes f in time cT(n) for some constant c.

Proof: (Outline) Each symbol of M is now coded as a finite binary string (assume for notational convenience that the length of these strings is 3 for every symbol of M's alphabet). To each square on the tape of M there will be associated 3 tape squares on the tape of M′ which will contain the code of the corresponding symbol of M. Each step of M will be simulated by a sequence of steps of M′ which reads the corresponding squares. We need to introduce some intermediate states to remember the last few symbols read, and there are some other details to take care of. However, we leave these details to the reader.


The last two theorems tell us that there is no point in keeping track of constants when analyzing computing times. The same is of course true when analyzing space, since the proofs extend naturally. The theorems also say that it is sufficient to work with Turing machines that have the alphabet {0, 1, B} as long as we remember that constants have no significance. For definiteness we will state results for Turing machines with 3 tapes.

It will be important to have efficient simulations and we have the following theorem.

Theorem 3.8 The number of operations for a universal two-tape Turing machine needed to simulate T(n) operations of a Turing machine M is at most αT(n) log T(n), where α is a constant dependent on M, but independent of n. If the original machine runs in space S(n) ≥ log n, the simulation also runs in space αS(n), where α again is a constant dependent on M, but independent of n.

We skip the complicated proof.

Now consider the function V_f defined in the proof of Theorem 3.5 and let us investigate how much is needed to compute it. Of the two steps of the algorithm, the second step can be analyzed using the above result, and thus the unknown part is how long it takes to compute f(|x|). As many times in mathematics, we define away this problem.

Definition 3.9 A function f is time constructible if there is a Turing machine that on input 1^n computes f(n) in time f(n).

It is easy to see that most natural functions like n^2, 2^n and n log n are time constructible. More or less just collecting all the pieces of the work already done, we have the following theorem.

Theorem 3.10 If T_2(n) is time constructible, T_1(n) > n, and

  lim_{n→∞} T_2(n) / (T_1(n) log T_1(n)) = ∞,

then there is a function computable in time O(T_2(n)) but not in time T_1(n). Both time bounds refer to Turing machines with three tapes.

Proof: The intuition would be to use the function V_{T_1} defined previously. To avoid some technical obstacles we work with a slightly modified function. When simulating M_x we count the steps of the simulating machine rather than of M_x. I.e., we first compute T_2(n) and then run the simulation for that many steps. We use two of the tapes for the simulation and the third tape to keep a clock. If we get an answer within this simulation we output 1 if the answer was 0 and output 0 otherwise. If we do not get an answer we simply answer 0. This defines a function V_{T_2} and we need to check that it cannot be computed by any M_y in time T_1.

Remember that there are infinitely many y_i such that M_{y_i} codes M_y (we allowed an end marker in the middle of the description). Now note that the constant α in Theorem 3.8 only depends on the machine M_y to be simulated, and thus there is a y_i which codes M_y such that

  T_2(|y_i|) ≥ αT_1(|y_i|) log T_1(|y_i|).

By the standard argument M_y will make an error for this input.

It is clear that we will be able to get the same result for space-complexity, even though there are some minor problems to take care of. Let us first prove that there are functions which require arbitrarily large amounts of space.

Theorem 3.11 If f(n) is a recursive function then there is a recursive function which cannot be computed in space f(n).

Proof: Define U_f by letting U_f(x) be 1 if M_x is a legal Turing machine which halts with output 0 without visiting more than f(|x|) tape squares on input x, and let U_f(x) take the value 0 otherwise.

We claim that U_f cannot be computed in space f(n). Given a Turing machine M_y which never uses more than f(n) space, then as in all previous arguments M_y will output 0 on input y iff U_f(y) = 1, and otherwise U_f(y) = 0.

To finish the theorem we need to prove that U_f is recursive. This might seem obvious at first since we can just use the universal machine to simulate M_x, and all we have to keep track of is whether M_x uses more than the allowed amount of space. This is not quite sufficient, since M_x might run forever and never use more than f(|x|) space. We need the following important but not very difficult lemma.

Lemma 3.12 Let M be a Turing machine which has a work-tape alphabet of size c, Q states and k work-tapes, and which uses space at most S(n). Then on inputs of length n, M either halts within time nQS(n)^k c^{kS(n)} or it never halts.


Proof: Let a configuration of M be a complete description of the machine at an instant in time. Thus, the configuration consists of the contents of the tapes of M, the positions of all its heads and its state.

Let us calculate the number of different configurations of M given a fixed input of length n. Since it uses at most space S(n) there are at most c^{kS(n)} possible contents of its work-tapes and at most S(n)^k possible positions of the heads on the work-tapes. The number of possible locations of the head on the input-tape is at most n and there are Q possible states. Thus we have a total of nQS(n)^k c^{kS(n)} possible configurations. If the machine does not halt within this many timesteps it will be in the same configuration twice. But since the future actions of the machine are completely determined by the present configuration, whenever it returns to a configuration where it has been previously it will return infinitely many times and thus never halt. The proof of Lemma 3.12 is complete.

Returning to the proof of Theorem 3.11 we can now prove that U_f is computable. We just simulate M_x for at most |x|Qf(|x|)^k c^{kf(|x|)} steps or until it has halted or used more than f(|x|) space. We use a counter to count the number of steps used. This finishes the proof of Theorem 3.11.

To prove that more space actually enables us to compute more functions we need the appropriate definition.

Definition 3.13 A function f is space constructible if there is a Turing machine that on input 1^n computes f(n) in space f(n).

We can now state the space-hierarchy theorem.

Theorem 3.14 If S_2(n) is space constructible, S(n) ≥ log n and

  lim_{n→∞} S_2(n) / S(n) = ∞,

then there is a function computable in space O(S_2(n)) but not in space S(n). These space bounds refer to machines with 3 tapes.

Proof: The function achieving the separation is basically U_S with the same twist as in Theorem 3.10. In other words, define a function essentially as U_S but restrict the computation to using space S_2 of the simulating machine. The rest of the proof is now more or less identical. The only detail to take care of is that if S(n) ≥ log n then a counter counting up to |x|QS(|x|)^k c^{kS(|x|)} can be implemented in space S(n).


The reason that we get a tighter separation between space-complexity classes than between time-complexity classes is the fact that the universal machine uses only a constant factor more space than the original machine.

This completes our treatment of the hierarchy theorems. These results are due to Hartmanis and Stearns and are from the 1960's. Next we will continue into the 1970's and move further away from recursion theory and into the realm of more modern complexity theory.


4 The complexity classes L, P and PSPACE.

We can now start our main topic, namely the study of complexity classes. We will in this section define the basic deterministic complexity classes L, P and PSPACE.

Definition 4.1 Given a set A, we say that A ∈ L iff there is a Turing machine which computes the characteristic function of A in space O(log n).

Definition 4.2 Given a set A, we say that A ∈ P iff there is a Turing machine which for some constant k computes the characteristic function of A in time O(n^k).

Definition 4.3 Given a set A, we say that A ∈ PSPACE iff there is a Turing machine which for some constant k computes the characteristic function of A in space O(n^k).

There are some relations between the given complexity classes.

Theorem 4.4 L ⊂ PSPACE.

Proof: The inclusion is obvious. That it is strict follows from Theorem

3.14.

Theorem 4.5 P ⊆ PSPACE.

This is also obvious since a Turing machine cannot use more space than

time.

Theorem 4.6 L ⊆ P.

Proof: This follows from Lemma 3.12, since if S(n) ≤ c log n and we assume that the machine uses a three-letter alphabet, has k work-tapes and Q states, and always halts, then we know it runs in time at most

  nQ(c log n)^k 3^{c log n} ∈ O(n^{2 + c log 3}),

where we used that (log n)^k ∈ O(n) for any constant k. We can conclude that a machine which runs in logarithmic space also runs in polynomial time.

The inclusions given in Theorems 4.5 and 4.6 are believed to be strict but this is not known. Of course, it follows from Theorem 4.4 that at least one of the inclusions is strict, but it gives no hint as to which one it is.


Figure 2: A Random Access Machine

4.1 Is the definition of P model dependent?

When studying mechanically computable functions we had several definitions which turned out to be equivalent. This fact convinced us that we had found the right notion, i.e. that we had defined a class of functions which captured a property of the functions rather than a property of the model. The same argument applies here. We have to investigate whether the defined complexity classes are artifacts of the particulars of Turing machines as a computational model or if they are genuine classes of functions which are more or less independent of the model of computation. The reader who is not worried about such questions is advised to skip this section.

The Turing machine seems incredibly inefficient, and thus we will compare it to a model of computation which is more or less a normal computer (programmed in assembly language). This type of computer is called a Random Access Machine (RAM) and a picture is given in Figure 2. A RAM has a finite control, an infinite number of registers and two accumulators. Both the registers and the accumulators can hold arbitrarily large integers. We will let r(i) be the content of register i and ac_1 and ac_2 the contents of the accumulators. The finite control can read a program and has a read-only input-tape and a write-only output-tape. In one step a RAM can carry out the following instructions.

1. Add, subtract, divide (integer division) or multiply the two numbers in ac_1 and ac_2; the result ends up in ac_1.

2. Make conditional and unconditional jumps (condition ac_1 > 0 or ac_1 = 0).

3. Load something into an accumulator, e.g. ac_1 = r(k) for constant k or ac_1 = r(ac_1); similarly for ac_2.

4. Store the content of an accumulator, e.g. r(k) = ac_1 for constant k or r(ac_2) = ac_1; similarly for ac_2.

5. Read input: ac_1 = input(ac_2).

6. Write an output.

7. Use constants in the program.

8. Halt.

One might be tempted to let the time used by a RAM be the number of operations it does (the unit-cost RAM). This turns out to give a quite unrealistic measure of complexity, and instead we will use the logarithmic cost model.

Definition 4.7 The time to do a particular instruction on a RAM is 1 + log(k + 1) where k is the least upper bound on the integers involved in the instruction. The time for a computation on a RAM is the sum of the times for the individual instructions.

This actually agrees quite well with our everyday computers. The size of a computer word is bounded by a constant, and operations on larger numbers require us to access a number of memory cells which is proportional to the logarithm of the number used.


To define the amount of memory used by a RAM during a particular computation, let us assume that the initial contents of all the registers are 0. Then we have:

Definition 4.8 The space used by a RAM during a computation is the maximum of

  log(ac_1 + 1) + log(ac_2 + 1) + Σ_{i : r(i)≠0} log(i + r(i))

during the computation.

Intuitively the RAM seems more powerful than a Turing machine. We will not try to prove exactly this, but only to establish strong enough results to show that the class P is well defined.

Theorem 4.9 If a Turing machine can compute a function in time T(n) and space S(n), for T(n) ≥ n and S(n) ≥ log n, then the same function can be computed in time O(T^2(n)) and space O(S(n)) on a RAM.

Proof: (Outline) Assume for simplicity that the Turing machine has just one work-tape and that it uses the alphabet {0, 1, B}. The RAM will simulate the computation of the Turing machine step by step. It will code the content of the work-tape as an integer and store this integer in register 1, the position of the head on the input-tape in accumulator 2, the position of the head on the work-tape(s) in register 2 and the current state of the Turing machine in register 3. To simulate a step of the Turing machine the RAM gets the appropriate information from the work-tape by an integer division, and then it follows the transition described by the next-step function. The cost of the simulation of an individual step is the size of the integers involved, and this is bounded by O(S(n)). Since we have at most T(n) steps and S(n) ≤ T(n), the bound for the running time follows. The bound for the space used is obvious.

Observe that we need to store the entire contents of the work-tape in one register to conserve space. If we instead stored the content of square i in register i, the total space used would be O(S(n) log S(n)). The running time would be improved to O(T(n) log T(n)), but for the present purposes it is more important to keep the space small.

Next let us see that in fact a Turing machine is not that much less powerful

than a RAM.


Theorem 4.10 If a function f can be computed by a RAM in time T(n) and space S(n) then f can be computed in time O(T^2(n)) and space O(S(n)) on a Turing machine.

Proof: (Outline) Like many other proofs, this is a not very thrilling simulation argument of the kind we usually tend to omit. However, since the result is central in that it proves that P is invariant under change of model, we will at least give a reasonable outline of the proof.

The way to proceed is of course to simulate the RAM step by step. Assume for simplicity that we do the simulation on a Turing machine which apart from its input-tape and output-tape has 4 work-tapes. Three of the four work-tapes will correspond to ac_1, ac_2 and the registers, respectively, while the fourth tape is used as a scratch pad. A schematic picture is given in Figure 3. The register tape will contain pairs (i, r(i)) where the two numbers are separated by a B. Two different pairs are separated by BB. If some i does not appear on the register tape this means that r(i) = 0.

The RAM-program is now translated into a next-step function of the Turing machine. Each line is translated into a set of states and transitions between the states as indicated in Figure 4. Let us give a few examples of how to simulate some particular instructions. We will define the Turing machine pictorially by having circles indicate states. Inside the circle we write the tape(s) we are currently interested in, and the labeled arrows going out of the circle indicate which states to proceed to, where the label indicates the current symbol(s). Rectangular boxes indicate subroutines; a special subroutine is "Rew", which rewinds the register tape, i.e. moves the head to the beginning; the same operation also applies to other tapes.

1. If the instruction is an arithmetical step, we just replace it by a Turing machine which computes the arithmetical step using the ac_1 and ac_2 tapes as inputs and the scratch pad tape as work-tape.

2. If the instruction is a jump-instruction we just make the next-step function take the next state which is the first state of the set of states corresponding to that line. (See Figure 5.)

3. If the jump is conditional on the content of ac_1 being 0, then we just search the ac_1-tape for the symbol 1. If we do not find any 1 before we find B, the next-step function directs us to the given line and otherwise we proceed with the next line. (See Figure 6.)


Figure 3: A TM simulating a RAM


Figure 4: Basic picture

Figure 5: The jump instruction


Figure 6: Conditional jump

4. Let us just give an outline of how to load r(ac_2) into ac_1. Clearly, what we want to do is to look for the content of ac_2 as the first part of any pair on the register tape. If we find that no such pair exists then we should load 0 into ac_1. A description of this is given in Figure 7.

5. Finally let us indicate how to store ac_1 into register ac_2. To do this we scan the register-tape to find out the present value of r(ac_2). If r(ac_2) = 0 previously, this is easy: if ac_1 ≠ 0 we store the pair (ac_2, ac_1) at the end of the register-tape, and otherwise we do nothing. If r(ac_2) ≠ 0 we erase the old copy (ac_2, r(ac_2)) and then move the rest of the content of the register-tape left to avoid empty space. After we have moved the information we write (ac_2, ac_1) at the end (provided ac_1 ≠ 0).

Figure 7: Loading instruction

Let us analyze the efficiency of the simulation. The space used by the Turing machine is easily seen to be bounded by

  O(log(ac_1 + 1) + log(ac_2 + 1) + Σ_{i : r(i)≠0} (log(i + 1) + log(r(i) + 1) + 3)),

and thus the simulation works in O(S(n)) space. To analyze the time needed for the simulation, we claim that one can do multiplication and integer division of two m-digit numbers in time O(m^2) on a Turing machine. This implies that any arithmetical operation can be done in a factor O(S(n)) longer time on the Turing machine than on the RAM. The storing and retrieving of information can also be done in time O(S(n)), and using S(n) ≤ T(n), Theorem 4.10 follows.

Using Theorems 4.9 and 4.10 we see that P, L and PSPACE are the same whether we use Turing machines or RAMs in the definitions. This turns out to be true in general, and this gives us a very important principle which we can formalize as a complexity-theoretic version of Church's thesis.

Complexity theoretic version of Church's thesis: The complexity classes L, P and PSPACE remain the same under any reasonable computational model.

The above statement also remains true for all other complexity classes that we will define throughout these notes, and we will frequently apply the above thesis implicitly. This works as follows. When designing algorithms it is much easier to describe and analyze the algorithm if we use a high level description. On the other hand, when we argue about computation it is much easier to work with Turing machines, since their local behavior is so easy to describe. By virtue of the above thesis we can take the easy road to both things and still be correct.

4.2 Examples of members in the complexity classes.

We have defined L, P and PSPACE as families of sets. We will every now and then abuse this notation and say that a function (not necessarily {0, 1}-valued) lies in one of these complexity classes. This will just mean that the function can be computed within the implied resource bounds.

Example 4.11 Given two n-digit numbers x and y written in binary, compute their sum.

This can clearly be done in time O(n) as we all learned in first grade. It is also quite easy to see that it can be done in logarithmic space. If we have x = Σ_{i=0}^{n−1} x_i 2^i and y = Σ_{i=0}^{n−1} y_i 2^i then x + y is computed by the following program:

carry = 0
For i = 0 to n − 1
    bit = x_i + y_i + carry
    carry = 0
    If bit ≥ 2 then carry = 1, bit = bit − 2
    write bit
next i
write carry

The only things that need to be remembered are the counter i and the values of bit and carry. This can clearly be done in O(log n) space and thus addition belongs to L.
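For concreteness, here is the same program in Python (our transcription, not part of the notes). The bits are supplied least significant first, so each output bit can be written as soon as it is produced, just as the log-space machine would do.

    def add_binary(x_bits, y_bits):
        # x_bits, y_bits: lists of 0/1 of equal length, least significant bit first
        out = []
        carry = 0
        for xi, yi in zip(x_bits, y_bits):
            bit = xi + yi + carry                      # between 0 and 3
            carry = 1 if bit >= 2 else 0
            out.append(bit - 2 if bit >= 2 else bit)   # "write bit"
        out.append(carry)                              # "write carry"
        return out

    # 3 + 6 = 9: [1,1,0] is 3, [0,1,1] is 6, [1,0,0,1] is 9
    assert add_binary([1, 1, 0], [0, 1, 1]) == [1, 0, 0, 1]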

Example 4.12 Given two n-digit numbers x and y written in binary, compute their product.

This can again be done in P by first grade methods, and if we do it as taught, it will take us O(n^2) (this can be improved by more elaborate methods). In fact we can also do it in L.

carry = 0
For i = 0 to 2n − 2
    low = max(0, i − (n − 1))
    high = min(n − 1, i)
    For j = low to high, carry = carry + x_j * y_{i−j}
    write lsb(carry)
    carry = carry/2
next i
write carry with least significant bit first

If one looks more closely at the algorithm one discovers that it is the ordinary multiplication algorithm, where one saves space by computing a number only when it is needed. The only slightly nontrivial thing to check in order to verify that the algorithm does not use more than O(log n) space is to verify that carry always stays less than 2n. We leave this easy detail to the reader.
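Again a Python transcription of the program above (ours); both factors are given least significant bit first and are padded to a common length:

    def mul_binary(x_bits, y_bits):
        # bits least significant first; pad to a common length n
        n = max(len(x_bits), len(y_bits))
        x = x_bits + [0] * (n - len(x_bits))
        y = y_bits + [0] * (n - len(y_bits))
        out = []
        carry = 0
        for i in range(2 * n - 1):
            for j in range(max(0, i - (n - 1)), min(n - 1, i) + 1):
                carry += x[j] * y[i - j]   # column i of the school method
            out.append(carry & 1)          # write lsb(carry)
            carry >>= 1                    # carry = carry / 2 (integer division)
        while carry:                       # finally write carry, lsb first
            out.append(carry & 1)
            carry >>= 1
        return out

    # 5 * 6 = 30: 101 -> [1,0,1], 110 -> [0,1,1], 30 = 11110
    assert mul_binary([1, 0, 1], [0, 1, 1]) == [0, 1, 1, 1, 1]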

One might be tempted to think that division could also be done in L. However, it is not known whether this is the case. Another very easy problem that it is not known how to do in L: given an integer in base 2, convert it to base 3.

Example 4.13 Given two n-bit integers x and y, compute their greatest common divisor.

We will show that this problem is in P and in fact give two different algorithms to show this. First the old and basic algorithm: Euclid's algorithm. Assume for simplicity that x > y.

a = x
b = y
While b ≠ 0 do
    find q and r such that a = bq + r, 0 ≤ r < b
    a = b
    b = r
od
write a

The algorithm is correct since if d divides x and y then clearly d divides all a and b. On the other hand, if d divides any pair a and b then it also divides x and y.

To analyze the algorithm we have to focus on two things, namely the number of iterations and the cost of each iteration. First observe that in each iteration the numbers get smaller, and thus we will always be working with numbers with at most n digits. The work in each iteration is essentially a division and this can be done in O(n^2) bit operations.

The fact that the numbers get smaller in each iteration implies that there are at most 2^n iterations. This is not sufficient to get a polynomial running time and we need the following lemma.

Lemma 4.14 Let a and b have the values a_0 and b_0 at one point in time in Euclid's algorithm and let a_2 and b_2 be their values two iterations later; then a_2 ≤ a_0/2.

Proof: Let a_1 and b_1 be the values of a and b after one iteration. If b_0 < a_0/2 we have a_2 < a_1 = b_0 < a_0/2 and the conclusion of the lemma is true. On the other hand, if b_0 ≥ a_0/2 then we will have a_2 = b_1 = a_0 − b_0 ≤ a_0/2, and thus we have proved the lemma.

The lemma implies that there are at most 2n iterations and thus the total complexity is bounded by O(n^3). If you are careful, however, it is possible to do better (without applying any fancy techniques) by the following observation. If you use standard long division (with remainder) to find q, then the complexity is actually O(ns) where s is the number of bits in q. Thus if q is small we can do each iteration significantly faster. On the other hand, if q is large then it is easy to see that the numbers decrease more rapidly than given by the above lemma. If one analyzes this carefully one actually gets complexity O(n^2).
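Euclid's algorithm in Python (our transcription), with an iteration counter so that the bound of at most 2n iterations promised by Lemma 4.14 can be checked experimentally:

    def euclid_gcd(x, y):
        # assumes x > y >= 0
        a, b = x, y
        iterations = 0
        while b != 0:
            q, r = divmod(a, b)        # a = bq + r with 0 <= r < b
            a, b = b, r
            iterations += 1
        return a, iterations

    g, it = euclid_gcd(91, 35)
    assert g == 7 and it <= 2 * (91).bit_length()   # here 5 iterations, n = 7 bits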

Let us give another algorithm for the same problem. This algorithm is called "binary GCD".

Let 2^{d_x} be the highest power of 2 that divides x and define d_y similarly. Set a = x2^{−d_x} and b = y2^{−d_y}. If a < b, interchange a and b.

While b > 1 do
    Either a + b or a − b is divisible by 4. Set r to the one that is
    divisible by 4 and set a = max(b, r2^{−d_r}) and b = min(b, r2^{−d_r}),
    where 2^{d_r} is the highest power of 2 that divides r.
od
write a2^{min(d_x,d_y)}

The algorithm is correct by an argument similar to that for the previous algorithm. To analyze the complexity of the algorithm we again have to study the number of iterations and the cost of each iteration. Again it is clear that the numbers decrease in size, and thus we will never work with numbers with more than n digits. Each iteration consists only of a few comparisons and shifts if the numbers are coded in binary, and thus it can be implemented in time O(n). To analyze the number of iterations we have:

Lemma 4.15 Let a and b have the values a_0 and b_0 at one point in time in the binary GCD algorithm and let a_2 and b_2 be their values two iterations later; then a_2 ≤ a_0/2.

Proof: If a_1 and b_1 are the numbers after one iteration, then b_1 ≤ (a_0 + b_0)/4 and a_1 ≤ a_0. Since b_0 ≤ a_0 this implies that a_2 ≤ max(b_1, (a_1 + b_1)/4) ≤ a_0/2.

Thus again we can conclude that we have at most 2n iterations, and hence the total work is bounded by O(n^2). This implies that binary GCD is a competitive algorithm, in particular since the individual operations can be implemented very efficiently when the binary representation of integers is used.
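A Python sketch of binary GCD (ours, not a literal transcription of the pseudocode above): it differs in that the degenerate exits — the odd parts becoming equal, or becoming coprime — are handled explicitly.

    def strip2(m):
        # return (m / 2^d, d) where 2^d is the highest power of 2 dividing m
        d = 0
        while m % 2 == 0:
            m //= 2
            d += 1
        return m, d

    def binary_gcd(x, y):
        a, da = strip2(x)
        b, db = strip2(y)
        if a < b:
            a, b = b, a                    # keep a >= b, both odd
        while b > 1:
            r = a - b if (a - b) % 4 == 0 else a + b
            if r == 0:                     # a == b: gcd of the odd parts is a
                b = 0
                break
            rp, _ = strip2(r)              # r * 2^(-d_r)
            a, b = max(b, rp), min(b, rp)
        odd = a if b == 0 else 1           # b == 1 means the odd parts are coprime
        return odd * 2 ** min(da, db)

    assert binary_gcd(91, 35) == 7
    assert binary_gcd(12, 18) == 6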

Let us just remark that the best known greatest common divisor algorithm for integers runs in time O(n(log n)^2 log log n) and is based on the Euclidean algorithm. It is unknown whether the integer greatest common divisor can be computed in small space.

Example 4.16 Given a nonsingular integer matrix M with entries which are n-bit numbers, solve Mx = b for some vector b of n-bit numbers.

It might seem like this problem obviously is in P, since Gaussian elimination is well known to be doable in O(n^3) steps. However, there is something to check. We need to verify that the numbers do not get too large during the computation, i.e. that the rational numbers that appear can be represented. To analyze what happens to the numbers, assume for notational simplicity that the upper left i × i matrix is nonsingular for every i, and thus we will be able to perform Gaussian elimination without pivoting. Let us investigate what the matrix looks like after we have eliminated the i'th variable. Suppose the original matrix looks like

    ( A  B )
    ( C  D )

where A is the upper i × i matrix. After the i'th variable has been eliminated the matrix will be

    ( A^{-1}    0 ) ( A  B )   ( I  A^{-1}B         )
    ( -CA^{-1}  I ) ( C  D ) = ( 0  -CA^{-1}B + D   )

where I is the i × i identity matrix. Thus using the following lemma we can bound the rational numbers involved in the computation.

Lemma 4.17 If A is a nonsingular n × n integer matrix with entries bounded in size by m, then A^{-1} has rational entries with numerators and denominators bounded by m^n n^{n/2}.

Proof: Any entry of A^{-1} is an (n − 1) × (n − 1) subdeterminant of A divided by the determinant of A. Thus we just need to bound the size of determinants of integer matrices. A determinant can be interpreted as the volume of the parallelepiped spanned by the rows. This volume is bounded by the product of the lengths of the row vectors^3, which in its turn is bounded by (m√n)^n.

^3 This is not a formal proof, and the inequality indicated in this sentence is known as Hadamard's inequality.


It follows from the lemma that the rational numbers involved in Gaussian elimination can be represented by O(n^2) binary digits. Since Gaussian elimination can be done in O(n^3) operations and each operation can be performed in time O(n^4) (if we use classical arithmetic), we get total complexity O(n^7).
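The point that exact rational arithmetic suffices is easy to demonstrate with Python's fractions.Fraction, which keeps every entry as an exact ratio of big integers. The sketch below is ours and assumes, as in the text, that no pivoting is needed.

    from fractions import Fraction

    def solve(M, b):
        # Gaussian elimination over exact rationals; assumes all leading
        # principal minors are nonzero, so no pivoting is required.
        n = len(M)
        A = [[Fraction(v) for v in row] + [Fraction(w)] for row, w in zip(M, b)]
        for i in range(n):                          # eliminate variable i
            for j in range(i + 1, n):
                f = A[j][i] / A[i][i]
                for k in range(i, n + 1):
                    A[j][k] -= f * A[i][k]
        x = [Fraction(0)] * n
        for i in reversed(range(n)):                # back substitution
            s = sum(A[i][k] * x[k] for k in range(i + 1, n))
            x[i] = (A[i][n] - s) / A[i][i]
        return x

    assert solve([[2, 1], [1, 3]], [3, 4]) == [Fraction(1), Fraction(1)]

By Lemma 4.17 the numerators and denominators that arise are ratios of minors and hence have O(n^2) bits, which is what makes the O(n^7) bound work.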

Example 4.18 The determinant of an n × n matrix can be written as

  Σ_{π∈S_n} sg(π) Π_{i=1}^{n} x_{i,π(i)}

where the sum is over all permutations of the numbers 1 through n and sg(π) is the signum^4 of the permutation. The determinant can be computed by Gaussian elimination, and thus by the previous example it is in P. The permanent is a very related number which is defined as

  Σ_{π∈S_n} Π_{i=1}^{n} x_{i,π(i)}.

^4 If you do not know the signum function, just forget this definition of the determinant.

Thus we have just removed the signum part of the definition. The definition looks simpler, but it removes the nice invariance under the row operations of Gaussian elimination. There is no known polynomial time algorithm for computing the permanent, and there is good reason to believe that there is no such algorithm (the problem is #P-complete; we will get to this complexity class later). It is not hard to see that the problem is in PSPACE, and we will not give the most efficient algorithm but rather the easiest to understand.

per = 0
For 1 ≤ π(1), π(2), ..., π(n) ≤ n
    If π(i) ≠ π(j) for all i ≠ j, per = per + Π_{i=1}^{n} x_{i,π(i)}

Thus we just generate all n-tuples of numbers between 1 and n, check whether the tuple is a permutation and, if it is, add the corresponding term to the sum. All the space required is to store the variables π(i) and per. The space needed for the former is bounded by O(n log n) while the latter is bounded by the size of the answer; if we assume that all entries in the original matrix are bounded by 2^n then per is bounded by 2^{n^2} n! and thus can be stored in space O(n^2).
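A direct Python transcription (ours); iterating over permutations instead of over all n-tuples skips the non-permutations but is otherwise the same exponential-time, small-space computation:

    from itertools import permutations

    def permanent(x):
        # x: n-by-n matrix (list of lists); exponential time, modest space
        n = len(x)
        per = 0
        for pi in permutations(range(n)):
            term = 1
            for i in range(n):
                term *= x[i][pi[i]]
            per += term
        return per

    # permanent of [[1, 2], [3, 4]] is 1*4 + 2*3 = 10 (the determinant is -2)
    assert permanent([[1, 2], [3, 4]]) == 10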


It is interesting to note that there is a polynomial time algorithm to decide whether the permanent of a 0,1 matrix is nonzero (this is the question of whether a bipartite graph has a perfect matching), but that it seems hard to compute it.

Example 4.19 Given a prime number p and a number a, find x (if one exists) such that x^2 ≡ a (mod p).

Let us first recall some basic facts from number theory. Assume that we have an odd prime p (the case p = 2 being easy); then a can be written as a square mod p iff a^{(p−1)/2} ≡ 1 (mod p) (i.e. we have a solution iff this condition holds). Remember also that, by Fermat's little theorem, x^{p−1} ≡ 1 (mod p) for any number x not divisible by p.

Now if p ≡ 3 (mod 4) and a^{(p−1)/2} ≡ 1 (mod p), then if we set x = a^{(p+1)/4} (note that p ≡ 3 (mod 4) ensures that (p+1)/4 is an integer) we have

  x^2 ≡ a^{(p+1)/2} ≡ a · a^{(p−1)/2} ≡ a (mod p).

Thus taking square roots when p ≡ 3 (mod 4) is just computing a power. Let us investigate how many resources are needed to compute a^{(p+1)/4} (mod p). Assume that p and a are at most n-digit numbers. Then computing a^{(p+1)/4} by successive multiplications would require on the order of 2^n multiplications. It is more efficient to first compute a^{2^i} (mod p) for 0 ≤ i ≤ n in n squarings. Observe here that since we are only interested in the result (mod p), we can reduce mod p after each squaring, and thus we will never need to work with numbers with more than 2n digits. Now we write (p+1)/4 in binary and we compute a^{(p+1)/4} by multiplying together the powers a^{2^i} with the i's corresponding to 1's in the binary expansion of (p+1)/4. Hence we get O(n) multiplications of O(n)-bit numbers and this can be done in total time O(n^3).

Thus we have proved that taking square roots modulo primes p with p ≡ 3 (mod 4) can be done in polynomial time. It is not known if this is true in general for primes p ≡ 1 (mod 4) or when p is a composite number. We will return to these questions later in these notes.
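In Python the whole example collapses to a few lines, because the built-in three-argument pow performs exactly the repeated-squaring computation described above. The sketch is ours:

    def sqrt_mod(a, p):
        # p an odd prime with p % 4 == 3; returns x with x^2 ≡ a (mod p), or None
        if pow(a, (p - 1) // 2, p) != 1:      # the criterion quoted above fails:
            return None                       # a is not a square mod p
        return pow(a, (p + 1) // 4, p)        # O(n) modular multiplications

    p = 23                                    # 23 ≡ 3 (mod 4)
    x = sqrt_mod(13, p)
    assert x == 6 and (x * x) % p == 13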

Example 4.20 Given a directed graph G with n nodes and two distinguished nodes s and t in G, is it possible to find a directed path from s to t?

This problem is in P by the following straightforward algorithm.

Set R = {s}.
Set R_new to the set of nodes reachable from s in one step,
    i.e. the set of v such that there is an edge (s, v).
While R_new is not empty do
    Take an element w in R_new and move it into R. Also take
    any nodes reachable from w in one step which do not belong
    to either R or R_new and put them into R_new.
od
If t ∈ R say yes, otherwise say no.

We claim that when the algorithm ends, all the nodes reachable from s are in R. We leave the verification of this to the reader. It is important to note that R_new contains the set of nodes known to be reachable from s but whose neighbors have not yet been put into R or R_new.

Remark 4.21 Observe that in fact R is the set of nodes reachable from s, and thus we have really solved a more general problem.

To see that the problem is in P, let us analyze the time needed for the algorithm. Since each time the loop is executed we put one node into R, and we never remove anything from R, the loop is executed at most n times. Each execution of the loop can be done in time n, since we just have to investigate the neighbors of w. Thus the complexity is bounded by O(n^2).
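The R/R_new bookkeeping is just graph search; here is a Python transcription (ours), initialized slightly differently (R_new starts as {s}) but otherwise following the pseudocode, with the graph given as a dict from node to its list of successors:

    def reachable(adj, s):
        R = set()
        R_new = {s}
        while R_new:
            w = R_new.pop()                  # move some w from R_new into R
            R.add(w)
            for v in adj.get(w, ()):         # nodes one step from w
                if v not in R and v not in R_new:
                    R_new.add(v)
        return R                             # all nodes reachable from s

    adj = {"s": ["a", "b"], "a": ["t"], "c": ["s"]}
    assert "t" in reachable(adj, "s")        # answer yes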

Next we turn to the definition of nondeterministic computation. The obvious goal in mind is to formally define NP.


5 Nondeterministic computation

The two most famous complexity classes are probably P and NP. We have already defined P, and to define NP we need the concept of a nondeterministic Turing machine. The formal definition might make nondeterminism seem like a paper tiger which has nothing to do with reality, but it will soon be clear that this is not the case.

5.1 Nondeterministic Turing machines

The heart of a normal, deterministic Turing machine is the next-step function, which tells the machine what to do in a given situation. A nondeterministic Turing machine also has a next-step function, but it is multivalued. By this we mean that in a given situation the machine might do several different things. This implies that on a given input there are several possible computations and, in particular, there might be several different possible outputs. This calls for a definition.

Definition 5.1 A nondeterministic Turing machine can only compute functions which take the values 0 and 1. The machine takes the value 1 on (or accepts) an input x iff there is some possible computation on input x which gives output 1. If there is no computation that gives output 1, the machine takes the value 0 (or rejects the input).

Since we will only be working with {0, 1}-valued functions we will think of nondeterministic machines as recognizing sets, i.e. the set of inputs for which there is an accepting computation.

Example 5.2 Suppose we want to recognize composite numbers, i.e. numbers which are not prime and hence can be written as the product of two numbers both greater than or equal to 2. This can be done by a nondeterministic machine as follows:

On input x, write y_1 and y_2 nondeterministically with |y_i| ≤ |x| for i = 1, 2. Writing down y_1 is done by allowing the machine to move left for |x| steps while at each step either writing 0, 1 or an endmarker. The machine constructs y_2 in the same way. Now the machine gives output 1 iff y_1 y_2 = x and y_i > 1 for i = 1, 2.

Let us see that the algorithm is correct. If x is composite then there is some computation that outputs 1: namely, if x = ab then when y_1 = a and y_2 = b we will get the output 1. On the other hand, if x is prime there is no possible computation that gives output 1, since if y_1 y_2 = x then by the definition of a prime one of the y_i is 1.

Observe that when we are considering deterministic computation, recognizing primes and recognizing composite numbers are very similar, since one just changes the output routine to reverse the meaning of 0 and 1. When it comes to nondeterministic computation there is a tremendous difference. If, for instance, you reverse the output of the machine recognizing composite numbers defined above, then you get a machine that accepts everything. It is important to keep this non-symmetry in mind.

The definitions of space and time need to be slightly modified, since there is no unique computation given the input.

Definition 5.3 A nondeterministic Turing machine M runs in time T(n) if for every input of length n, every computation of M halts within T(n) steps.

Definition 5.4 A nondeterministic Turing machine M runs in space S(n) if for every input of length n, every computation of M visits at most S(n) squares on the work-tape.

Since nondeterministic Turing machines can always be made to have output 1 or 0, the size of the answer will always be small. This implies that we do not need an output-tape. Some proofs will be formally easier if we assume that the output is written on the work-tape, and therefore we will assume this.

With these basic definitions done we can proceed to define some complexity classes.

Definition 5.5 Given a set A, we say that A ∈ NL iff there is a nondeterministic Turing machine which accepts A and runs in space O(log n).

Definition 5.6 Given a set A, we say that A ∈ NP iff there is a nondeterministic Turing machine which accepts A and runs in time O(n^k) for some constant k.

Definition 5.7 Given a set A, we say that A ∈ NPSPACE iff there is a nondeterministic Turing machine which accepts A and runs in space O(n^k) for some constant k.


We have theorems similar to Theorems 4.4, 4.5 and 4.6.

Theorem 5.8 NL ⊂ NPSPACE.

Proof: The inclusion is obvious. It is at this point not clear that it is strict. This will follow from results later on, and we leave it for the time being.

Theorem 5.9 NP ⊆ NPSPACE.

Proof: This follows since nondeterministic Turing machines also cannot use more space than time.

Theorem 5.10 NL ⊆ NP.

Proof: The proof is quite close to the proof of the corresponding deterministic statement, but we need an extra observation. The time bound given in Lemma 3.12 is no longer true for nondeterministic computation. The reason for this is that even if a nondeterministic machine is in the same configuration twice, it need not loop forever, since it can make different nondeterministic choices the second time around. However, it is easy to see that if a nondeterministic machine has an accepting computation then it has an accepting computation which visits each configuration at most once. This implies that we can impose the time-restriction given by Lemma 3.12 without changing the set of inputs accepted. This proves Theorem 5.10.

Let us now proceed to some examples of members in the newly defined complexity classes.

Example 5.11 Composite numbers are in NP, since the nondeterministic algorithm given previously is easily seen to run in time O(n^2).

It might be tempting to guess that composite numbers are in NL, since the essential part of the algorithm is a multiplication and we know from before that multiplication can be done in L. This is not known, however, and the reason that the given algorithm does not work is that multiplication is in L only when the input is on a separate input-tape where we can access any part of the input when it is needed. In the present situation we would have to write down the two guessed factors on the work-tape, and there is no room to do this.


Example 5.12 Traveling Sales Person (TSP): Given n cities, a symmetric integer n × n matrix (m_{ij})_{i,j=1}^{n} where m_{ij} denotes the distance between cities i and j, and an integer K, is there a tour which visits all cities exactly once and is of total length ≤ K? TSP is in NP, as can be seen from the following nondeterministic algorithm.

1. Nondeterministically write numbers b_i, i = 1, 2, ..., n, each with at most log n + 1 digits.

2. If 1 ≤ b_i ≤ n for all i and b_i ≠ b_j for i ≠ j, then compute Σ_{i=1}^{n−1} m_{b_i,b_{i+1}} + m_{b_n,b_1}. If this number is at most K output 1, and in all other cases output 0.

Observe that the conditions 1 ≤ b_i ≤ n and b_i ≠ b_j for i ≠ j imply that the b_i define a tour starting in b_1, tracing through the b_i for increasing i and then returning to b_1. If this tour is short enough the machine accepts the input. It is easy to check that the algorithm runs in polynomial time, and thus we have proved that TSP ∈ NP.
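The deterministic part of the algorithm — checking a guessed tour — is easy to spell out; in the language of Theorem 5.15 below this checker is the polynomial-time language B. The Python sketch is ours, with cities numbered from 1:

    def check_tour(m, K, b):
        # m: n-by-n symmetric distance matrix; b: the guessed numbers b_1..b_n
        n = len(m)
        if sorted(b) != list(range(1, n + 1)):   # 1 <= b_i <= n and all distinct
            return 0
        length = sum(m[b[i] - 1][b[i + 1] - 1] for i in range(n - 1))
        length += m[b[n - 1] - 1][b[0] - 1]      # close the tour: b_n back to b_1
        return 1 if length <= K else 0

    m = [[0, 2, 9],
         [2, 0, 4],
         [9, 4, 0]]
    assert check_tour(m, 15, [1, 2, 3]) == 1      # 2 + 4 + 9 = 15 <= K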

Example 5.13 Boolean formula satisfiability: Given a Boolean formula consisting of Boolean variables x_i, i = 1, 2, ..., n, ∧-gates (logical conjunction), ∨-gates (logical disjunction) and negation-gates, is there a setting of the variables that satisfies the formula?

This problem is in NP by the obvious procedure. Namely, nondeterministically write down the value of every variable and then write 1 iff the guessed assignment satisfies the formula. To check that this procedure runs in polynomial time, one has to observe that, given a formula and an assignment of all the variables, one can check whether the assignment satisfies the formula in polynomial time. This is easy and we leave it as an exercise.
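The verification step — evaluating a formula under a guessed assignment — takes one pass over the formula. A Python sketch (ours), using a nested-tuple encoding of the gates that is only one of many reasonable choices:

    def evaluate(f, assignment):
        # assignment: dict mapping variable index i to True/False
        op = f[0]
        if op == "var":
            return assignment[f[1]]
        if op == "not":
            return not evaluate(f[1], assignment)
        if op == "and":
            return evaluate(f[1], assignment) and evaluate(f[2], assignment)
        if op == "or":
            return evaluate(f[1], assignment) or evaluate(f[2], assignment)
        raise ValueError("unknown gate " + op)

    # (x1 or not x2) and x2 is satisfied by x1 = x2 = True
    f = ("and", ("or", ("var", 1), ("not", ("var", 2))), ("var", 2))
    assert evaluate(f, {1: True, 2: True})

Each gate is visited once, so the check is linear in the size of the formula.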

Let us return to the problem of graph reachability (previously considered in Section 4.2):

Example 5.14 Directed graph reachability: Given a directed graph G and two nodes s and t of G, is there a directed path from s to t?

We present an algorithm that uses only logarithmic space, and hence we need to be slightly careful about how the input is presented. We assume that the graph is given as a list of the edges. Now we have the following algorithm. Suppose the graph has n nodes.

Set H = s.
For i = 1, 2, ..., n
    If H = t print 1 and halt.
    If there is no edge out of H print 0 and halt.
    Choose nondeterministically one of the edges leaving H and set H to
    the endpoint of this edge.
Next i
Print 0.

This procedure uses only logarithmic space, since all we need to remember is the counter i and the value of H. The conditions given in the algorithm are easily checked given the assumed encoding of G.

To verify that the algorithm is correct, first observe that by construction H is always a node that can be reached from s. Thus, since the machine outputs 1 only when H = t, we know that when the machine takes the value 1 then t is reachable from s. On the other hand, suppose that t is reachable from s. Then there is a path v_1, v_2, ..., v_k where v_1 = s, v_k = t and there is an edge from v_i to v_{i+1} for every i. We can assume that k ≤ n since if v_i = v_j for i < j then we can eliminate v_{i+1} through v_j and still maintain a path. Then there is a possibility that H = v_i for every i, and thus there is a possibility that the machine outputs 1.

The argument implies that the algorithm recognizes exactly the graphs that have a path from s to t, and therefore directed graph reachability is in NL.
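A deterministic way to reason about this machine is to track every value the pair (i, H) could take over all nondeterministic choices — a first taste of the configuration-graph technique of Section 6.1. A Python sketch (ours):

    def nl_reachability(edges, s, t, n):
        # simulate all runs of the guessing machine on a graph given as an edge list
        succ = {}
        for u, v in edges:
            succ.setdefault(u, []).append(v)
        possible = {s}                      # every value H can hold after i steps
        for _ in range(n):
            if t in possible:
                return 1                    # some sequence of guesses prints 1
            possible = {v for h in possible for v in succ.get(h, [])}
        return 1 if t in possible else 0

    assert nl_reachability([("s", "a"), ("a", "t")], "s", "t", 3) == 1

Of course this simulation stores a whole set of nodes and hence uses linear space; only the nondeterministic machine gets away with O(log n).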

We will not give any example of a language in NPSPACE and in the

next section it will be clear why.

Before we continue to establish some of the more formal properties of

NP, let us be informal for a while.

The class P is intuitively thought of as the class of functions which are computable in practice, i.e. within moderate amounts of computation we can solve reasonably large problems. That this is the case is not clear from the definition, and one could object that although n^100 is polynomial, it grows too quickly. In practice, however, this anomaly does not seem to appear, and thus if a problem has a polynomial time solution then the exponent tends to be small and the algorithm is usually efficient in practice.

In a similar way NP can be thought of as the class of problems where, if you knew the solution, it could be verified efficiently. In an abstract meaning, "the solution" must here be interpreted as the set of nondeterministic choices that make the machine accept. As we have seen, in practice "the solution" is much more concrete. Thus the nondeterministic choices have in our examples corresponded to the factors, a short tour, and a satisfying assignment, respectively.

The recursive sets corresponded to functions that could be computed, while the recursively enumerable sets corresponded to statements that could be verified. The latter statement follows from the fact that if A is r.e. and x ∈ A then this can be verified, since we just wait until x is listed. On the other hand, if x ∉ A this cannot be verified, since we never know whether we just haven't waited long enough to see it listed. In view of this one can say that recursive and r.e. have the same relation as P and NP, and thus it is not surprising that we can prove some similar theorems.

Theorem 5.15 Given a set A, then A ∈ NP iff there is a language B ∈ P and a constant k such that

  x ∈ A ⇔ ∃y, |y| ≤ |x|^k : (x, y) ∈ B.

Proof: Let us first prove that if there is such a B then A ∈ NP. In fact, a nondeterministic algorithm for membership in A just consists of guessing a y of the desired length and then accepting iff (x, y) ∈ B. If B can be recognized in time O(n^c), this procedure runs in time O(n^{(1+k)c}), which is polynomial.

To see the converse, we will need the concept of a computation tableau.

Definition 5.16 A computation tableau is a complete description of a computation of a Turing machine. It consists of all configurations of the Turing machine on a specific input (i.e. one configuration for every time step), starting with the input configuration and ending with the halting configuration.

The reason for the name is that we will think of it in the following way. Assume that the Turing machine has only one tape. Then we can think of its computation tableau as a two-dimensional array with time on one axis and the tape squares on the other. Position (i, j) of this tableau thus contains the symbol that is in the j'th square at time i. It also contains information about whether the head is there and, in such a case, which state it is in. A computation which starts with input x_1, x_2, ..., x_n on the input-tape and ends with only a 1 on the tape is given in Table 3.


x_1,q_0    x_2        x_3        x_4    ...
0          x_2,q_3    x_3        x_4    ...
0          1          x_3,q_1    x_4    ...
...        ...        ...        ...
1,q_h      B          B          B      B

Table 3: A computation tableau

Now, we can return to the converse of Theorem 5.15. Suppose A is recognized by a one-tape Turing machine M in nondeterministic time n^c. Define B to be the set of pairs (x, y) such that y describes an n^c × n^c computation tableau of M on input x which ends in an accepting state. Then B satisfies the condition with respect to A of the theorem with k = 2c. We claim that B is in P. To see this observe that to check whether a pair (x, y) is in B we basically have to check three things.

1. That the computation described by y starts with x on the input tape.
2. That the computation is legal for M.
3. That the computation accepts.

The first and the last conditions are easy to check since they just talk about the contents of particular squares. Also to check 2 is straightforward since we have to check that the only square that changed value between two timesteps is the square where the head was located, and also that the transition by the head was a possible transition given the next-step function of M. This finishes the proof.
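To make the verifier view concrete, here is a minimal Python sketch of the characterization in Theorem 5.15. The predicate in_B is a hypothetical stand-in for the polynomial-time test for B, and the exhaustive search over certificates is of course exponential time; it only illustrates the logical form of the definition, not an efficient algorithm.

from itertools import product

def member_of_A(x, in_B, k):
    # Decide x in A by exhaustive search over all certificates y
    # with |y| <= |x|**k, given the polynomial-time test in_B(x, y).
    bound = len(x) ** k
    for length in range(bound + 1):
        for bits in product("01", repeat=length):
            if in_B(x, "".join(bits)):
                return True
    return False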

Remark 5.17 One might be tempted to think that the relation between NP and P given in Theorem 5.15 would hold also for NL and L. As the interested reader can convince himself, this is probably not the case: even if we restrict B to belong to L, the set of all A definable in this way is still all of NP.


Thus we have given the theorem about NP and P corresponding to Theorem 2.19. Of the other theorems in Section 2.7, it is not known whether the analogue of 2.17 is true. (The general belief is that it is not.) There is a nice reduction theory and also a notion of complete sets, and we will return to these questions in Chapter 7.


6 Relations among complexity classes

Up to this point we have defined six complexity classes (L, P, PSPACE, NL, NP, and NPSPACE) and we have observed some relations. In this section we will establish some more relations, some obvious and some not obvious. Let us first observe that the option of non-determinism will never hurt, and thus any deterministic complexity class is contained in the corresponding nondeterministic complexity class. This gives us three immediate theorems.

Theorem 6.1 L ⊆ NL.

Theorem 6.2 P ⊆ NP.

Theorem 6.3 PSPACE ⊆ NPSPACE.

In the next subsection we will prove the ﬁrst nontrivial complexity result.

For notational convenience let TIME(T(n)) denote the class of languages

that can be recognized in deterministic time T(n) and let NTIME(T(n)) be

the class of languages that can be recognized in the same nondeterministic

time. Similarly we deﬁne SPACE(S(n)) and NSPACE(S(n)).

6.1 Nondeterministic space vs. deterministic time

The aim is to establish the following theorem.

Theorem 6.4 Suppose S(n) > log n and that S(n) is space constructible; then NSPACE(S(n)) ⊆ TIME(2^{O(S(n))}).

Proof: Let A be a language that can be recognized by a nondeterministic Turing machine N which uses space at most S(n) on inputs of length n. We have to design a deterministic Turing machine which recognizes A and runs in time 2^{O(S(n))}.

Assume for simplicity that N has only one worktape, a three letter alphabet, and Q states. Consider the set of configurations of N. Remember that a configuration consists of the state of N, the positions of all its heads and the contents of the worktape. By the argument in the proof of Lemma 3.12 there are at most |x| Q S(|x|) 3^{S(|x|)} possible configurations that N may visit on input x. Let G_{x,N} be the following directed graph: the nodes of G_{x,N} are the configurations of N, and there is an edge from configuration C_1 to configuration C_2 iff it is possible to go from C_1 to C_2 in one step on input x.

G_{x,N} has one node C_st which corresponds to the initial configuration, and one or more configurations where N halts with output 1. We now claim that the machine takes value 1 on a given input exactly when there is a path from C_st to any of the configurations that end with output 1. This is fairly obvious and the verification is left to the reader. By the above bound, G_{x,N} has at most 2^{O(S(|x|))} nodes, and using the fact that S is space constructible we see that G_{x,N} can be constructed in 2^{O(S(|x|))} time. Now it follows from the example in Section 4.2 that in time 2^{O(S(|x|))} it is checkable whether any configuration that outputs 1 can be reached from the initial configuration. Since this is equivalent to N accepting x, we have proved Theorem 6.4.

We have the following corollary.

Corollary 6.5 NL ⊆ P.

Proof: Just insert S(n) = O(log n) in Theorem 6.4.
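To make the time bound concrete, here is a minimal Python sketch of the search this proof performs. Here successors is a hypothetical function giving the one-step relation of N on the fixed input, and is_accepting tests for a configuration with output 1; breadth-first search visits each node and edge once, so the running time is linear in the size of the configuration graph.

from collections import deque

def accepts(start, successors, is_accepting):
    # Deterministic simulation of a nondeterministic machine:
    # search the configuration graph for an accepting
    # configuration reachable from the start configuration.
    seen = {start}
    queue = deque([start])
    while queue:
        conf = queue.popleft()
        if is_accepting(conf):
            return True
        for nxt in successors(conf):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False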

6.2 Nondeterministic time vs. deterministic space

This section has only one basic theorem.

Theorem 6.6 NP ⊆ PSPACE.

Proof: Remember the characterization of NP given in Theorem 5.15, i.e. given A ∈ NP there is a B ∈ P and a k such that

x ∈ A ⇔ ∃y, |y| ≤ |x|^k : (x, y) ∈ B.

This gives the following algorithm to determine whether x ∈ A:

found = 0
For y = 0, 1, . . . , 2^{|x|^k} do
    If (x, y) ∈ B then found = 1
od
Write found


The algorithm is correct since found will be 1 exactly when there is a

short y such that (x, y) ∈ B. To see that the algorithm runs in polynomial

space observe that all we need to do is to keep track of y and to do the

computation to check whether (x, y) ∈ B. Since this latter computation is

polynomial time, we can do it in polynomial space and once we have checked

a given y we can erase the computation and use the same space for the next

y.

6.3 Deterministic space vs. nondeterministic space

Nondeterministic computation seems very powerful, and it seems for the

moment that complexity theory supports this intuition at least in the case

when we are focusing on time as the main resource. If, on the other hand,

we focus on space it turns out that nondeterminism only helps marginally.

This fact is usually referred to as Savitch’s theorem and was ﬁrst proved by

W.J. Savitch in 1970.

Theorem 6.7 If S(n) is space-constructible and S(n) ≥ log n, then NSPACE(S(n)) ⊆ SPACE(O(S^2(n))).

Proof: Assume that A is accepted by the nondeterministic machine N in

space S(n). We will again work with the conﬁgurations of N and in fact if

you look closely, we solve the same graph problem as we did in the proof

of Theorem 6.4. This time however we will be concerned with saving space

and thus we will never write down the graph explicitly.

Assume for notational simplicity that N has a unique configuration where it halts with output 1. Let us call this configuration C_acc. Let C_1 and C_2 be any two configurations of N and let k be an integer. Then we will be interested in the predicate GET(C_1, C_2, k, x), which we will interpret as "On input x it is possible to get from configuration C_1 to configuration C_2 in time ≤ 2^k and without being in a configuration which uses more than S(|x|) space." (If we think about the graph in the proof of Theorem 6.4 this can be interpreted as "There is a path of length at most 2^k from node C_1 to node C_2".)

Let C_st denote the start configuration of N and recall the argument in the proof of Theorem 5.10 that if a machine has an accepting computation then there is an accepting computation which visits each configuration at most once and, in particular, the running time is bounded by the number of configurations. This implies that there is a constant c such that N accepts an input x iff GET(C_st, C_acc, cS(n), x) is true. Thus all we have to do is to evaluate this predicate in small space, and to achieve this the following observation will be crucial.

GET(C_1, C_2, k, x) = ∨_C (GET(C_1, C, k−1, x) ∧ GET(C, C_2, k−1, x))

The ∨ is here taken over all possible configurations C of N which use space less than S(|x|). The reason for the above relation is that if there exists a computational path from C_1 to C_2 of length at most 2^k which never uses more than S(|x|) space, then there is a midpoint on this path and the configuration at this midpoint can be used as C. Conversely, if there is a C that fulfills the right hand side of the above equation, then the two computations from C_1 to C and from C to C_2 can be concatenated to a computation from C_1 to C_2.

The above equation gives the following recursive algorithm to evaluate the predicate GET.

GET(C_1, C_2, k, x)
If k = 0 then
    Check whether the next-step function of N allows a transition from C_1 to C_2 on input x in one step and set GET accordingly.
else
    For all configurations C which use space at most S(n):
        Evaluate GET(C_1, C, k−1, x) and GET(C, C_2, k−1, x).
    If for some C both are true, set GET to true and otherwise to false.
endif

By the above argument x ∈ A iff GET(C_st, C_acc, cS(|x|), x), and thus to prove the theorem we need only calculate the amount of space needed to evaluate GET.

We prove by induction that GET(C_1, C_2, k, x) can be evaluated in space D(k + 1)S(|x|) for some constant D. This is clearly true for k = 0 since all that needs to be done is to check if one of the constantly many possible next steps that N can do from C_1 will take it into C_2.

To do the induction step let us specify more closely how the above procedure works. We loop over all possible C, and to remember which C we are currently working on requires space dS(n) for some constant d. For each C we do two evaluations of GET with the parameter k − 1. These two evaluations are done sequentially and thus we can first do one of the evaluations, remember the result, and then do the other evaluation in the same space. By the induction hypothesis this implies that the computation for a fixed C can be done in space DkS(n) + dS(n). Provided that D ≥ d the induction step is complete and thus we have completed the proof of Theorem 6.7.
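A small Python sketch of this divide-and-conquer recursion on an explicit graph; edge is a hypothetical one-step relation and nodes the set of configurations. Only one candidate midpoint is stored per recursion level, which is exactly the source of the quadratic space bound (the sketch illustrates the recursion, not a Turing-machine implementation).

def get(c1, c2, k, nodes, edge):
    # Is there a path from c1 to c2 of length at most 2**k?
    # The recursion has depth k and stores one midpoint per
    # level; the two subcalls reuse the same space sequentially.
    if k == 0:
        return c1 == c2 or edge(c1, c2)
    return any(get(c1, c, k - 1, nodes, edge) and
               get(c, c2, k - 1, nodes, edge) for c in nodes)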

We have two obvious corollaries of the above theorem.

Corollary 6.8 NPSPACE = PSPACE.

This explains why NPSPACE is not a very famous complexity class. We introduced it for symmetry purposes, and now that we have proved that we do not need it, we will forget it.

Corollary 6.9 NL ⊂ PSPACE.

Proof: By Theorem 6.7 everything in NL can be done in space O(log^2 n), and thus we get a strict inclusion by Theorem 3.14.

Observe that Corollary 6.9 ﬁnishes the proof of Theorem 5.8 as promised

before.

By now we have gathered some information about the relations between

the complexity classes we have deﬁned. Let us sum up the information in a

theorem.

Theorem 6.10 L ⊆ NL ⊆ P ⊆ NP ⊆ PSPACE. The inclusion of NL in

PSPACE is strict.

It is a sad fact for complexity theory that Theorem 6.10 reﬂects our total

knowledge of the relation between the given complexity-classes.


7 Complete problems

Even though Theorem 6.10 gives the present state of knowledge about the defined complexity classes, there are some important things to be said. The common belief today is that all the given inclusions are strict, but unfortunately we have not yet developed the machinery to prove this. One step on the way is to identify the hardest problems within each complexity class. This serves two purposes. Firstly, they will serve as candidates that can be used to prove strict inclusions. Secondly, proving a problem complete gives a good hint that it can probably not be placed in a lower complexity class, and thus is a good way to classify a problem. We will start by considering a very famous class of problems: the NP-complete problems.

7.1 NP-complete problems

To identify the hardest problems we first need to define the concept of "not harder than". There are a couple of different ways to do this but we will only consider one.

Definition 7.1 Let A and B be two sets. Then A ≤_p B (read as "A is polynomial time reducible to B") iff there is a polynomial time computable function f such that x ∈ A ⇔ f(x) ∈ B.

Clearly this definition is very close to Definition 2.21. The only difference is that we require the function f to be computable in polynomial time.

We can now proceed to develop a reduction theory similar to the one described at the end of Section 2.7. Instead of talking about recursive and recursively enumerable sets we will talk about P and NP. Many proofs and theorems are similar.

Theorem 7.2 If A ≤_p B and B ∈ P, then A ∈ P.

Proof: Suppose the function f in the definition of ≤_p can be computed in time O(n^c) and that B can be recognized in time O(n^k). Then to check whether a given input x belongs to A, just compute f(x) and then check whether f(x) ∈ B. Computing f(x) is done in time O(|x|^c), and from this it also follows that |f(x)| ≤ O(|x|^c), which in its turn implies that f(x) ∈ B can be checked in time O(|x|^{ck}). Thus the procedure works in polynomial time and we can conclude that A ∈ P.
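The proof is nothing more than composing the reduction with the decision procedure; as a short Python sketch, with f and decide_B hypothetical polynomial-time routines:

def decide_A(x, f, decide_B):
    # A <=p B via f: x is in A exactly when f(x) is in B, and
    # both steps run in polynomial time, so the composition does too.
    return decide_B(f(x))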


The definition of NP-complete is now very natural having seen the definition of r.e.-complete before.

Definition 7.3 A set A is NP-complete iff

1. A ∈ NP.
2. If B ∈ NP then B ≤_p A.

By dropping the first condition we get another known concept.

Definition 7.4 A set A is NP-hard iff for all B ∈ NP, B ≤_p A.

Before we continue to prove some problems to be NP-complete, let us prove a simple theorem.

Theorem 7.5 If A is NP-complete then

P = NP ⇔ A ∈ P.

Proof: Clearly if NP = P then A ∈ P, since A by the definition of NP-completeness belongs to NP.

To see the converse assume that A ∈ P and take any B ∈ NP. Then by property 2 of being NP-complete, B ≤_p A and hence by Theorem 7.2 B ∈ P. But since B was an arbitrary language in NP we can conclude that NP = P.

With this motivation we are ready to study our first NP-complete problem. Let SAT be the set of satisfiable Boolean formulas (as introduced in the example in Section 5.1).

Theorem 7.6 (Cook, 1971) SAT is NP-complete.

Proof: We have already established that SAT ∈ NP (see the example in Section 5.1), and thus we need to establish that B ∈ NP implies that B ≤_p SAT.

Assume that B is recognized by a non-deterministic Turing machine N which has one tape, Q states, runs in time n^c and uses the alphabet {0, 1, B}. Remember that the computation tableau is a complete description of a computation. We will now construct a Boolean formula such that if it is satisfiable then its satisfying assignment will describe a computation tableau of an accepting computation of N on input x.

The formula has two types of variables:

y_{ijk}, 1 ≤ i, j ≤ n^c, k ∈ {0, 1, B} and
z_{ijl}, 1 ≤ i, j ≤ n^c, 1 ≤ l ≤ Q.

The intuitive meaning of the variables will be that y_{ijk} = 1 iff the symbol k appears in square j at time i (and it will take the value 0 otherwise), while z_{ijl} = 1 iff the head is in square j at time i and the machine at this time is in state q_l. Let us denote the length of x by n.

Clearly the y and z variables code a computation completely, and thus all that needs to be done is to make a Boolean formula which is true iff the y and z variables code an accepting computation of N on input x. There are three conditions to take care of.

1. The computation starts with x.
2. It is a valid computation.
3. The computation accepts.

Of these three conditions, 1 and 3 are very easy to handle. The condition 1 is equivalent to the following conditions:

• For 1 ≤ j ≤ n we have y_{1jk} = 1 iff k = x_j.

• For n + 1 ≤ j ≤ n^c we have y_{1jk} = 1 iff k = B.

• z_{1jl} = 0 except when j = l = 1 (assuming that q_1 is the start-state).

The condition 3 is equivalent to y_{n^c,1,1} = 1 and z_{n^c,1,l} = 1, i.e. at time n^c we have written a 1 in square 1, the head is located in square 1, and we have halted (assuming that q_l is the halting state).

To see how to translate condition 2 into a formula we will need some more information.

Definition 7.7 A computational tableau C is locally correct if for every i and j there is some correct computation which has the same contents as C in squares (i′, j′) for i ≤ i′ ≤ i + 1 and j ≤ j′ ≤ j + 2.

That computation is a local phenomenon is now formalized as follows:

Lemma 7.8 A computational tableau describes a legal computation iff it is locally correct.

We leave the easy verification to the reader.

Armed with this lemma we can now express condition 2 in a suitable way. To determine whether the variables y_{ijk} and z_{ijl} describe a legal computation we only have to check all the local correctness conditions. Whether a given local area is correct is described by a condition on 6Q + 18 variables, and since any condition on K variables can be expressed as a formula of size 2^K, we can express each local correctness condition in constant size. The conjunction of all these correctness formulas now takes care of condition 2. The size of the formula is O(n^{2c}).

We now claim that the conjunction of the formulas taking care of the conditions 1–3 is satisfiable iff x ∈ B. This is fairly obvious, since there is a satisfying assignment iff there is an accepting computation of N on input x which uses at most space n^c and time n^c, which by the definition of N is equivalent to x ∈ B. To conclude the proof of the theorem we need just observe that constructing the formula clearly takes polynomial time.

Let us make a couple of observations about the above proof. Firstly, the final formula is the conjunction of a number of subformulas where each subformula is of constant size. Without increasing the size of the entire formula by more than a constant factor we can write each of the subformulas in conjunctive normal form (i.e. as a conjunction of disjunctions). This puts the entire formula in conjunctive normal form. This implies that satisfiability of formulas in conjunctive normal form is NP-complete. Let us call this problem CNF-SAT; we have the following theorem.

Theorem 7.9 CNF-SAT is NP-complete.

The second observation is that the given proof is almost identical to

the proof of Theorem 5.15. If one thinks about this, Theorem 5.15 can

be used to give another NP-complete problem, namely the existence of a

computational tableau with certain conditions. However, we do not feel

that this is a natural problem and hence we will not make that argument.

There are also striking similarities with the proof of Theorem 2.26. It is just

a question of coding a computation in a suitable way.


Having obtained one NP-complete problem, it turns out to be easy to construct more NP-complete problems. The main tool for this is given below.

Theorem 7.10 If A is NP-complete and B satisfies B ∈ NP and A ≤_p B, then B is NP-complete.

Proof: We have only to check that for any C in NP it is true that C ≤_p B. Since A is NP-complete we know that C ≤_p A, and hence there is a polynomial-time computable function f such that

x ∈ C ⇔ f(x) ∈ A.

By the hypothesis of the theorem there is a polynomial time computable g such that

y ∈ A ⇔ g(y) ∈ B.

Now it clearly follows that

x ∈ C ⇔ g(f(x)) ∈ B

and since the composition of two polynomial-time computable functions is polynomial-time computable, we have proved C ≤_p B and thus the proof of the theorem is complete.

To put the proof in other words: polynomial-time reductions are transitive, i.e. if we can reduce C to A and A to B then we reduce C to B by composing the reductions.

Clearly Theorem 7.10 is much more useful for proving problems NP-complete than the original definition. The reason is that to use Theorem 7.10 we only have to make one reduction, while to use the definition we have to make a reduction from every problem in NP.

Let 3-SAT be the problem of checking whether a restricted Boolean formula given in conjunctive normal form is satisfiable. The restriction is that there are exactly 3 literals (i.e. variables or negated variables) in each disjunction. Such a formula is called a 3-CNF formula and an example is:

(x_1 ∨ ¬x_2 ∨ x_3) ∧ (¬x_1 ∨ x_2 ∨ x_4) ∧ (¬x_2 ∨ x_3 ∨ x_4)

This formula is satisfiable, as can be seen from the assignment x_1 = 1, x_2 = 1, x_3 = 1 and x_4 = 0. We have


Theorem 7.11 3-SAT is NP-complete.

Proof: We will use Theorem 7.10, and since 3-SAT is clearly in NP all that we need to do is to find a polynomial-time reduction from CNF-SAT to 3-SAT.

Thus, given a CNF-SAT formula φ, we need to construct in polynomial time a 3-SAT formula f(φ) such that φ is satisfiable iff f(φ) is satisfiable.

Suppose φ = ∧_{i=1}^{m} C_i where the C_i are disjunctions containing an arbitrary number of literals. We will call C_i a clause and let |C_i| denote the number of literals in C_i. We will replace each clause by one or more clauses each containing exactly 3 literals. We have the following cases.

1. |C_i| = 1.
2. |C_i| = 2.
3. |C_i| = 3.
4. |C_i| > 3.

Let us take care of the cases one by one. Let x_i, i = 1, 2 . . . n be the variables that appear in φ and let y_{ij} denote new variables.

(1.) Suppose C_i = x_j; then we replace it by

(x_j ∨ y_{i1} ∨ y_{i2}) ∧ (x_j ∨ ¬y_{i1} ∨ y_{i2}) ∧ (x_j ∨ y_{i1} ∨ ¬y_{i2}) ∧ (x_j ∨ ¬y_{i1} ∨ ¬y_{i2}).

(2.) Suppose C_i = (x_j ∨ x_k); then we replace it by

(x_j ∨ x_k ∨ y_{i1}) ∧ (x_j ∨ x_k ∨ ¬y_{i1}).

(3.) We keep C_i as it is.

(4.) Suppose C_i = ∨_{j=1}^{k} u_j for some literals u_j; we then replace C_i by

(u_1 ∨ u_2 ∨ y_{i1}) ∧ (∧_{j=1}^{k−4} (¬y_{ij} ∨ u_{j+2} ∨ y_{i(j+1)})) ∧ (¬y_{i(k−3)} ∨ u_{k−1} ∨ u_k).

The formula we obtain by these substitutions is clearly a 3-CNF formula and

it is also obvious that given the original formula it can be constructed in

polynomial time. Thus all we need to check is that φ is satisﬁable precisely

when f(φ) is satisﬁable.


First assume that φ is satisfiable. We must now find a satisfying assignment for f(φ). We will give the same values to the x_i and must find values for the y_{ij} to satisfy the formula. The clauses constructed according to rules 1–3 are already satisfied and thus will cause no problem. Look at the clauses constructed under rule 4. Since the corresponding clause C_i in φ is satisfied, one of the u_j is true; suppose this is u_{j_0}. Now set y_{ij} = 1 for j ≤ j_0 − 2 and y_{ij} = 0 for j > j_0 − 2; then it is easy to verify that this assignment satisfies f(φ).

To prove the converse, suppose that f(φ) is satisfiable and let x_i = α_i be the assignment to the x variables in this satisfying assignment. We claim that this part of the assignment will satisfy φ. For clauses that fall under rules 1–3 this is not too hard to see. Let us consider case 1. If C_i = x_j and α_j = 0 then, no matter what the values of y_{i1} and y_{i2} are, at least one of the four clauses replacing C_i is not satisfied; hence we must have α_j = 1, so C_i is satisfied.

Now consider case 4. If C_i was not satisfied then all the literals u_j would be false, but this implies that

y_{i1} ∧ (∧_{j=1}^{k−4} (¬y_{ij} ∨ y_{i(j+1)})) ∧ ¬y_{i(k−3)}

would have to be satisfied, but this is clearly not possible. Thus the reduction is correct and the proof is complete.
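A short Python sketch of this clause-by-clause transformation; literals are encoded as integers in DIMACS style (variable v, negation −v), and next_var hands out the fresh y-variables. This is only a sketch of the reduction above under that assumed encoding.

def to_3cnf(clauses, num_vars):
    # Convert a CNF formula (list of clauses, each a list of nonzero
    # ints; -v means the negation of variable v) into an
    # equisatisfiable 3-CNF formula, following cases 1-4 above.
    fresh = num_vars          # counter for the new y-variables
    def next_var():
        nonlocal fresh
        fresh += 1
        return fresh
    out = []
    for c in clauses:
        if len(c) == 1:       # case 1: four clauses, two new variables
            u, y1, y2 = c[0], next_var(), next_var()
            out += [[u, y1, y2], [u, -y1, y2], [u, y1, -y2], [u, -y1, -y2]]
        elif len(c) == 2:     # case 2: two clauses, one new variable
            y = next_var()
            out += [list(c) + [y], list(c) + [-y]]
        elif len(c) == 3:     # case 3: keep the clause as it is
            out.append(list(c))
        else:                 # case 4: chain the literals together
            ys = [next_var() for _ in range(len(c) - 3)]
            out.append([c[0], c[1], ys[0]])
            for j in range(len(c) - 4):
                out.append([-ys[j], c[j + 2], ys[j + 1]])
            out.append([-ys[-1], c[-2], c[-1]])
    return out, fresh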

Proving problems NP-complete is not the main purpose of these notes, but let us at least give one more NP-completeness proof. Let 3-dimensional matching (3DM) be the following problem:

Given a set of triplets (x_i, y_i, z_i), i = 1, 2 . . . m, where x_i ∈ X, y_i ∈ Y and z_i ∈ Z, and where X, Y and Z are sets of cardinality q: is there a subset S of q of the triplets such that each element in X, Y and Z appears in exactly one of the triplets in S?

Theorem 7.12 3DM is NP-complete.

Proof: 3DM is clearly in NP, since a nondeterministic machine can just nondeterministically pick q of the triplets and then check that each element appears exactly once. To prove 3DM NP-complete we will reduce 3-SAT to it. Thus given a 3-CNF formula φ we must construct an instance f(φ) of 3DM such that φ is satisfiable iff f(φ) contains a matching.

Suppose φ has n variables and m clauses. We will construct an instance of 3DM with three types of triplets: "variable triplets", "clause triplets" and "garbage collecting triplets". The elements of the sets X, Y and Z will be defined as we go along. Let us start by defining the variable triplets.

Figure 8: The variable triplets

Suppose variable x_i appears (with or without negation) in m_i clauses; then we will associate with it the following 2m_i triplets.

T_i^t = {(ū_i[j], a_i[j], b_i[j]) : 1 ≤ j ≤ m_i}
T_i^f = {(u_i[j], a_i[j+1], b_i[j]) : 1 ≤ j < m_i} ∪ {(u_i[m_i], a_i[1], b_i[m_i])}

The elements a_i[j] and b_i[j] will not appear in any other triplets. As can be seen from Figure 8 this implies that any matching M must contain either all triplets from T_i^f or all triplets from T_i^t, for any i. We will let the choice of which of the two sets to pick correspond to whether the variable x_i is true or false.

Each clause C_i will have two special values and three triplets. Suppose C_i = u_{i_1} ∨ u_{i_2} ∨ u_{i_3} and it is the j_k'th time the variable corresponding to the literal u_{i_k} appears. Then we include the triplets

(u_{i_k}[j_k], s[i], t[i]), k = 1, 2, 3.

Observe that the u_{i_k} should here be interpreted as literals and thus correspond to either u_l or ū_l, i.e. these are the same elements as in the variable triplets. The elements s[i] and t[i] will not appear in any other triplets, and this implies that in any matching precisely one of the triplets corresponding to each clause will be included. Observe that we can include a triplet precisely when one of the corresponding literals is true.

We have done the essential part of the construction, and all that remains is to specify the garbage collecting triplets which will match up the u_i[j] and ū_i[j] that have not been used. This is done by the following triplets:

(u_i[j], g_1[k], g_2[k]), 1 ≤ i ≤ n, 1 ≤ j ≤ m_i, 1 ≤ k ≤ 2m
(ū_i[j], g_1[k], g_2[k]), 1 ≤ i ≤ n, 1 ≤ j ≤ m_i, 1 ≤ k ≤ 2m

This enables us to cover any 2m literal-elements which have not been matched by previous triplets.

It is clear from the above description that the set of triplets can contain a matching only if the formula is satisfiable. Suppose on the other hand that the formula is satisfiable. Then make the choice of which T sets to pick based on the satisfying assignment. Then for each clause pick a variable that satisfies it and the corresponding clause triplet. This will cover m of the 3m literal-elements. The last 2m elements can be covered together with the g elements by the garbage collecting triplets.

Thus there is a matching iff there is a satisfying assignment, and since the reduction is straightforward the only thing needed to check that it is polynomial time is to check that we do not have to construct too many triplets. However, it is easy to check that there are 6m + 3m + 6m^2 triplets. This concludes the proof.

There are hundreds of known NP-complete problems and many appear in the listing in the final part of the excellent book by Garey and Johnson. It turns out that most problems in NP that are not known to be in P are NP-complete. One notable exception is factoring; another one is graph-isomorphism. Let us however move on and consider problems complete for other classes.


7.2 PSPACE-complete problems

The theory of PSPACE-complete problems is very similar to that of NP-complete problems. The concept of reduction is the same and the basic properties are the same. Of course the problems are different.

Definition 7.13 A set A is PSPACE-complete iff

1. A ∈ PSPACE.
2. If B ∈ PSPACE then B ≤_p A.

We have an immediate equivalent of Theorem 7.5.

Theorem 7.14 If A is PSPACE-complete then

P = PSPACE ⇔ A ∈ P.

Proof: If you substitute PSPACE for NP in the proof of Theorem 7.5 you

get a proof of Theorem 7.14.

By a similar argument we get:

Theorem 7.15 If A is PSPACE-complete then

NP = PSPACE ⇔ A ∈ NP.

One last definition for completeness before we get down to business.

Definition 7.16 A set A is PSPACE-hard iff for any B ∈ PSPACE, B ≤_p A.

Now let us encounter our first PSPACE-complete problem. When dealing with NP-complete problems we came across the satisfiability of Boolean formulas. Now we will consider quantified Boolean formulas, which look like:

∀x_1 ∃x_2 . . . Qx_n φ(x)

where each x_i can take the value 0 or 1, φ is a normal quantifier free formula, and Q is either ∃ or ∀ depending on whether n is even or odd. Let TQBF be the set of True Quantified Boolean Formulas. We have:


Theorem 7.17 TQBF is PSPACE-complete.

Proof: Let us first check that TQBF can be recognized in polynomial space. We claim that if the formula has n variables and the size of the description of φ is bounded by S, then to check whether

∀x_1 ∃x_2 . . . Qx_n φ(x)

is true can be done in space O((n + 1)S). We prove this by induction and first observe that it is certainly true for n = 0. For the induction step we use the observation that the given formula is true iff both

∃x_2 . . . Qx_n φ(x)|_{x_1=0}

and

∃x_2 . . . Qx_n φ(x)|_{x_1=1}

are true. These two formulas can be evaluated by induction in space O(nS), and since we can evaluate one and then evaluate the other in the same space, while only remembering the value of the first evaluation and which formula to evaluate, the claim follows. Of course if the first quantifier is ∃ we just need to check that one of the two values is true. From this the claim follows and thus TQBF ∈ PSPACE.

Remark 7.18 By being more careful it is not too hard to see that the evaluation actually can be done in space O(n + S).
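A minimal Python sketch of this recursive evaluation. The formula is represented as a string of quantifiers plus a hypothetical predicate phi taking the full assignment; the space usage is proportional to the recursion depth, mirroring the proof.

def eval_qbf(quantifiers, phi, assignment=()):
    # Evaluate Q1 x1 Q2 x2 ... phi(x); quantifiers is a string over
    # 'A' (forall) and 'E' (exists). Recursion depth is n, so the
    # space used is O(n) plus the space needed to evaluate phi.
    if not quantifiers:
        return phi(assignment)
    q, rest = quantifiers[0], quantifiers[1:]
    branch = (eval_qbf(rest, phi, assignment + (b,)) for b in (0, 1))
    return all(branch) if q == 'A' else any(branch)

# Example: forall x1 exists x2 (x1 != x2) is true.
print(eval_qbf("AE", lambda x: x[0] != x[1]))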

Next we need to take care of the slightly more difficult part: proving that if B ∈ PSPACE then B ≤_p TQBF. Suppose that B is recognized by a Turing machine M_B which never uses more space than |x|^c on input x, for a given constant c.

We will again use the predicate GET(C_1, C_2, k, x), which means that on input x, M_B will get from configuration C_1 to configuration C_2 in at most 2^k steps and never use more space than |x|^c. As before we have

GET(C_1, C_2, k, x) = ∨_C (GET(C_1, C, k−1, x) ∧ GET(C, C_2, k−1, x)).

With the present formalism it is more convenient to think of the ∨ as an existential quantifier, and we get

GET(C_1, C_2, k, x) = ∃C (GET(C_1, C, k−1, x) ∧ GET(C, C_2, k−1, x)).


Now we could write the two GETs on the right in the same way, but this would mean trouble since we would then get a formula of exponential size. However, there is a way around this: replacing the ∧ by a universal quantifier we obtain

GET(C_1, C_2, k, x) = ∃C ∀(A, B) ∈ {(C_1, C), (C, C_2)} : GET(A, B, k−1, x).

Now we only get one copy of GET to expand further, and if we continue recursively we get 2k quantifiers and a final formula GET(X, Y, 0, x). All that remains to do is to check that it is sufficient to quantify over Boolean variables, rather than the more complicated objects we are currently quantifying over, and that the final application of GET can be written as a Boolean formula.

Both these points are easy and let us just give a rough outline. It is straightforward to encode a configuration as a set of Boolean variables. The ∀ quantification is just a binary choice and thus can be represented by a Boolean variable which will take the value 0 if we make the first choice and 1 if we make the other. Finally, to check whether we can get from one configuration to another in one step is just a simple formula where we list all possible transitions of the Turing machine. We leave the details to the interested reader.

Now since x ∈ B iff GET(C_st, C_acc, d|x|^c, x) is true for the appropriate constant d, and since we know how to write the latter condition as a quantified Boolean formula, we have completed the reduction.

In fact if one writes down the ﬁnal formula carefully one can write it in

CNF, i.e. if we restrict the formula φ in TQBF to be a CNF-formula we still

obtain a PSPACE-complete problem. We call this problem TQBF-CNF.

Theorem 7.19 TQBF-CNF is PSPACE-complete.

To get other PSPACE-complete problems we first state an obvious theorem.

Theorem 7.20 If A is PSPACE-complete and B satisfies B ∈ PSPACE and A ≤_p B, then B is PSPACE-complete.

PSPACE-complete problems are not as abundant as NP-complete problems and do not come up in as varied contexts. The main source of PSPACE-complete problems outside logic is games. It is only a slight exaggeration to say that determining who is the winner in most games is PSPACE-complete.


The reason that games are this hard is that already quantiﬁed Boolean

formulas can be viewed as a game between two players, “Exists” and “Forall”

in the following way. Given a formula “Exists” chooses the values of all

variables which correspond to existential quantiﬁers and “Forall” chooses

the values of all variables which correspond to universal quantiﬁers. “Exists”

wins the game iﬀ the ﬁnal total assignment satisﬁes the formula. It is not

hard to see that the formula is true iﬀ “Exists” wins the game when both

players play optimally.

Of course the PSPACE-completeness cannot apply to any usual game like

chess, since chess is of a given constant size and hence not very interesting

from our point of view. But games that can be generalized to arbitrary size

are often PSPACE-complete (or hard). Thus for instance to determine who

is the winner in a given position of generalized checkers or generalized go

is PSPACE-hard. We will not get into those games but instead consider a

more childish game.

“Geography” is a two-person game where one person starts by giving the

name of a geographical place and then the two people alternatingly name

geographic places subject to the two conditions that no place is named twice

and that each name starts with the same letter that the previous name ended

by. The ﬁrst person not being able to name a place with these two conditions

loses.

To get a computational problem out of this game let us generalize.

“Generalized Geography” (GG) is a graph game where two people alter-

natingly choose nodes in a directed graph. Each node must be a successor of

the previous node and no node can be chosen twice. The ﬁrst person having

no choice loses the game. Initially the game starts with a given node.

The computational problem is now: Given a graph, which of the two

players has a winning strategy?

Let us ﬁrst observe that clearly this is a generalization of the geography

game where the nodes corresponds to places and there is an edge from A

to B if A ends with the same letter B starts with. (On the other hand it

is a slightly cheating generalization since the skill in the normal game is to

know as many geographic names as possible.)

Theorem 7.21 Generalized geography is PSPACE-complete.

Proof: It is not hard to verify by normal procedures that GG is in PSPACE, and thus by Theorem 7.20 we need only prove that TQBF-CNF can be reduced to GG. We will call the players in the game ∃ and ∀. Given the formula

∃x_1 ∀x_2 ∃x_3 [(x_1 ∨ ¬x_2 ∨ x_3) ∧ · · ·]

with clauses c_1, . . . , c_l (the first clause, c_1, is written out), we construct the graph given in Figure 9.

Figure 9: Generalized geography graph

There is a diamond for each variable

of the formula, with the last diamond pointing to nodes representing all the

clauses of the formula and each clause node pointing to nodes representing

the literals in the clause. Finally these nodes are hooked back to the top

or the bottom of the diamond for the corresponding variable according to

whether the literal is positive or negative. The game starts at the node named s, and the ∃ and ∀ labels in the diagram show whose turn it is to move at each stage.

We can think of ∃’s and ∀’s choices of how to move through the diamonds

as setting the variables (true if the high road is taken and false if the low

road is taken). Then ∀ gets to pick any clause that he claims to be false, and

∃ must pick a literal in that clause which he will claim is true. If ∃’s claim is

valid, ∀ will not be able to move without reusing a node, while if the claim is

not true, ∀ will be able to move and then ∃ will be stuck. Thus we see that

∃ has a winning strategy iﬀ the formula is true. Since the reduction clearly

is polynomial time we have proved that GG is PSPACE-complete.

7.3 P-complete problems

The question P = NP? is of real practical importance since it is a question whether many natural problems can be solved efficiently. The question whether P is equal to L is not of the same practical importance (although it has a nice connection with parallel computation that we have not seen yet), but from a theoretical point of view it is of course of major importance.

Up to this point we have allowed polynomial time for free when we have compared problems. This is clearly not possible when we are considering the question P = L?, and thus we need a finer reduction concept. The modification is very slight. We just require the reduction function to be computable in logarithmic space.

Definition 7.22 Let A and B be two sets. Then A ≤_L B (read as "A is logarithmic space reducible to B") iff there is a function f, computable in logarithmic space, such that x ∈ A ⇔ f(x) ∈ B.

Using this we can now define P-completeness.

Definition 7.23 A set A is P-complete iff

1. A ∈ P.
2. If B ∈ P then B ≤_L A.

We get the usual theorem.

Theorem 7.24 If A is P-complete then

P = L ⇔ A ∈ L.

The proof is identical to the other proofs. One small lemma is needed, namely that the composition of two functions in L is in L. We leave this as an exercise. We are now ready to encounter our first P-complete problem.

Define a Boolean circuit to be a directed acyclic graph where each node is labeled by either ∧, ∨ or ¬, and the number of incoming edges is at least two in the first two cases and one in the last. The graph contains sources which are labelled by input variables x_i, and one sink which is called the output node. Given values of the inputs to the circuit one can evaluate the circuit in the natural way. An example is given in Figure 10; in this circuit all edges are directed upwards. Let CVAL be the following problem: given a circuit and values of the inputs of the circuit, what is the output of the circuit? We have:

Theorem 7.25 CVAL is P-complete.


Figure 10: A circuit

Proof: First observe that CVAL belongs to P, since it is straightforward to evaluate a circuit once the inputs are given.

Now take any B ∈ P. We need to reduce B to CVAL. Assume that B is recognized by a Turing machine M_B that runs in time at most n^c for inputs of length n. We will again use the concept of a computation tableau. Since we are considering deterministic computation there is a unique computation tableau given the input. The content of each square of the tableau is easily coded by a constant number of Boolean values. We construct a circuit which successively computes these descriptions. The output of the circuit will correspond to the output of the machine, i.e. be the content of the first square at the final timestep.

The content of a given square of the tableau only depends on the contents of the square itself and its two neighboring squares at the previous time step. This means that we can build a constant piece of circuitry that computes the Boolean variables corresponding to the square (i, j) in the computation tableau from the variables corresponding to (i−1, j−1), (i−1, j), and (i−1, j+1). Thus to construct a circuit that, given the correct input, simulates the computation tableau of M_B we just have to copy this piece of circuitry everywhere. To print the description of this circuit on the output tape all we need to remember is the identities of the nodes of the circuit. This can be done in O(log n) space. Thus in logarithmic space we can construct a circuit and an input to this circuit such that the circuit outputs 1 iff M_B outputs 1 on input x. Thus we have a correct reduction and the proof is complete.
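To make the easy direction (CVAL ∈ P) concrete, here is a small Python sketch of circuit evaluation in topological order; the dict encoding of gates is a hypothetical representation chosen for the example.

def eval_circuit(gates, order, inputs):
    # Evaluate a Boolean circuit given as a DAG. gates maps a node
    # name to ('and'|'or', [children]), ('not', [child]) or
    # ('in', var); order is a topological ordering whose last
    # element is the output node.
    val = {}
    for node in order:
        op, args = gates[node]
        if op == 'in':
            val[node] = inputs[args]
        elif op == 'not':
            val[node] = 1 - val[args[0]]
        elif op == 'and':
            val[node] = int(all(val[a] for a in args))
        else:  # 'or'
            val[node] = int(any(val[a] for a in args))
    return val[order[-1]]

# (x1 AND x2) OR (NOT x3)
gates = {'x1': ('in', 'x1'), 'x2': ('in', 'x2'), 'x3': ('in', 'x3'),
         'g1': ('and', ['x1', 'x2']), 'g2': ('not', ['x3']),
         'out': ('or', ['g1', 'g2'])}
print(eval_circuit(gates, ['x1', 'x2', 'x3', 'g1', 'g2', 'out'],
                   {'x1': 1, 'x2': 0, 'x3': 0}))  # prints 1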


Several other P-complete problems can be constructed by making logarithmic space reductions from CVAL. We will however not present any more P-complete problems in this section.

7.4 NL-complete problems

The ﬁnal question we will consider is the NL = L? question. Again we have

complete problems under L-reductions.

Definition 7.26 A set A is NL-complete iff

1. A ∈ NL.
2. If B ∈ NL then B ≤_L A.

As before we get:

Theorem 7.27 If A is NL-complete then

NL = L ⇔ A ∈ L.

We have already encountered the standard NL-complete problem, namely graph-reachability (GR): given a directed graph G and two nodes s and t of G, is there a directed path from s to t?

Theorem 7.28 Graph-reachability is NL-complete.

Proof: We have more or less already proved the theorem. The fact that

GR ∈ NL was established in Section 5.1.

That the problem is NL-complete was implicitly used in the proof of

Theorem 6.4. Let us recall this proof. We started with an arbitrary nonde-

terministic machine M and an input x to M. We then constructed a graph

(of conﬁgurations of M) with two special nodes s and t (corresponding to

the start conﬁguration and the accepting conﬁguration, respectively) where

x was accepted by M iﬀ we could reach t from s. We then observed that

graph-reachability could be done in polynomial time and hence NL ⊆ P.

The ﬁrst part of this proof is clearly the desired reduction. All we need do

is to prove that the reduction can be done in logarithmic space. This is not

hard and we leave this to the reader.


8 Constructing more complexity-classes

Let us just briefly mention some more complexity-classes which are closely related to the classes already given. We have pointed out before that P is symmetric with respect to complementation, i.e. if a set A belongs to P then so does its complement Ā. We have also pointed out that this is not true for NP. Thus it is natural to talk about the set of languages whose complement belongs to NP.

Definition 8.1 A set A belongs to co-NP iff its complement Ā belongs to NP.

It is in general believed that co-NP is not equal to NP. In general, for any complexity-class C that is not closed under taking complements, we can define a corresponding complexity-class co-C. The only other such class we have encountered is NL.

Definition 8.2 A set A belongs to co-NL iff its complement Ā belongs to NL.

It was generally believed that co-NL is not equal to NL. Thus it came as a surprise when the following theorem was proved independently by Immerman and Szelepcsényi in 1988.

Theorem 8.3 If S(n) is space constructible, S(n) ≥ log n, and A can be recognized in nondeterministic space S(n), then the complement of A can be recognized in nondeterministic space O(S(n)).

We get the following immediate corollary:

Corollary 8.4 NL = co-NL.

Remark 8.5 Although this theorem was a surprise, one already knew that nondeterminism was not that helpful with regard to space. In particular, by Savitch's theorem (Theorem 6.7) we know that whatever can be done in nondeterministic space S(n) can be done in deterministic space O(S^2(n)). On the other hand, the smallest deterministic time-class that is known to include everything that can be done in nondeterministic time T(n) is essentially 2^{T(n)}. Thus in spite of the given collapse it is still believed that NP ≠ co-NP.


Proof: For notational convenience we will only prove the corollary. The

general case will follow from just substituting S(n) for log n. We will prove

that co-NL ⊆ NL. By symmetry this will imply the equality of the two

classes.

Since graph-reachability is complete for NL, its complement is complete for co-NL. To prove that co-NL ⊆ NL we need only prove that graph-non-reachability is in NL. In particular we need only describe a nondeterministic algorithm which works in logarithmic space and, given a graph G and two vertices s and t, accepts if there is no path from s to t. The idea behind the algorithm is to compute the number of nodes reachable from s. Once we know this number we can verify that t is not reachable by just guessing (and checking) all reachable vertices. Since we cannot remember them all individually, we need to guess them in increasing order. This way we need only remember the number of vertices seen so far and the last one seen.

The number of reachable vertices is computed iteratively. In stage k we compute the number of vertices which are reachable with at most k edges. This is done by at each stage nondeterministically generating all vertices that can be reached in k − 1 steps. Since we know their number, we know when we have generated all of them, and thus we can without error decide whether a given vertex is reachable in k steps. The complete algorithm now works as follows:

N_k = 1
for k = 1 to n do
    newN_k = 0
    for l = 1 to n do
        check = 0
        for m = 1 to n do
            Nondeterministically try to generate a path from s to v_m of length at most k − 1.
            If this is successful then
                check = check + 1
                If v_m is connected to v_l (or equal to v_l) then
                    set newN_k = newN_k + 1
                    goto next l
                endif
            endif
        next m
        if check ≠ N_k reject and stop
    next l
    N_k = newN_k
next k
check = 0
for m = 1 to n do
    Nondeterministically try to generate a path from s to v_m of length at most n − 1.
    If this is successful then
        check = check + 1
        If v_m is t reject and stop
    endif
next m
if check = N_k accept, otherwise reject

We need to prove that the algorithm is correct and that it only uses logarithmic space. Let us start with the latter part. The variables used by the program are k, l, m, N_k, newN_k and check. It is easy to see that each of them is a nonnegative integer which is at most n, and thus we can store these values in space O(log n). On top of this we need to nondeterministically guess a path of at most a certain length at certain parts of the program. This can be done in logarithmic space by the example in Section 5.1 augmented with a simple counter.

Now let us consider correctness. We claim that, unless the algorithm has already halted and rejected, the counter N_k will at stage k give the number of vertices reachable by a path of length at most k from s. We prove this by induction, and the base case k = 0 is trivial since only s can be reached with 0 edges and N_k is initially 1. For the induction step observe that since the algorithm does not halt, by the induction hypothesis, for each l the algorithm generates all v_m which can be reached in at most k − 1 steps. Thus it is easy to see that the algorithm decides correctly whether v_l is reachable in at most k steps, and thus the new value of N_k will be correct and the induction step is complete.

Finally, for the final loop observe that if in the end check = N_k then we have generated all vertices that are reachable from s with at most n − 1 steps (and hence all reachable vertices), and if t was not one of them we accept correctly. The argument is complete and we have proved Corollary 8.4.
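To see the counting idea in isolation, here is a hedged Python sketch in which the nondeterministic guessing of the (k−1)-reachable set is simulated by recomputing it explicitly; this of course gives up the logarithmic space bound and is only meant to show why the inductive counting is sound. Here adj is an adjacency-list representation with vertices 0, . . . , n−1.

def not_reachable(adj, n, s, t):
    # Inductive counting: N holds N_k, the number of vertices
    # reachable from s in at most k steps; at the end we check
    # that t is not among the N_{n-1} reachable vertices.
    def within(k):                      # vertices reachable in <= k steps
        r = {s}
        for _ in range(k):
            r |= {w for v in r for w in adj[v]}
        return r
    N = 1                               # N_0 = |{s}|
    for k in range(1, n):
        prev = within(k - 1)            # the NTM re-guesses this set...
        assert len(prev) == N           # ...and checks it against N_{k-1}
        # v is reachable within k steps iff it is in prev
        # or is a successor of some vertex in prev
        N = sum(1 for v in range(n)
                if v in prev or any(v in adj[u] for u in prev))
    return t not in within(n - 1)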


9 Probabilistic computation

From a practical point of view it is sufficient if an algorithm is fast most of the time. One could even relax the conditions further and just ask that the algorithm is correct most of the time.

A key point when reasoning about such algorithms is to make precise

what is meant by “most of the time” i.e. we need to introduce some proba-

bilistic assumptions. There are two basic ways to do this:

1. To consider a random input, i.e. to take a probability distribution over

the inputs and ask that the algorithm performs well for most inputs.

2. To allow the algorithm to make random choices, and require that the

algorithm is fast (correct) for every input.

Of course one could also combine the two ways of introducing randomness.

Both approaches give many interesting results, but here we will only

study the second approach.

Deﬁnition 9.1 A probabilistic Turing machine is a normal deterministic

Turing machine equipped with a special coinﬂipping state. When the machine

enters this state it receives a bit which is 0 with probability 1/2 and 1 with

probability 1/2.

As with nondeterministic Turing machines, a probabilistic Turing ma-

chine can do many diﬀerent computations on a given input. Thus for in-

stance, the output is not uniquely determined, but rather is given by a

probability distribution. Also the running time is a random variable and we

will say that a probabilistic Turing machine runs in time S if it always halts

in time S(n) on every input of length n. Another interesting running time

characteristic is the expected running time.

We can now define a new complexity class.

Definition 9.2 A set A belongs to BPP iff there is a polynomial time probabilistic Turing machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 2/3
x ∉ A ⇒ Pr[M(x) = 1] ≤ 1/3.

BPP is an abbreviation for Bounded-error Probabilistic Polynomial time.


Thus the machine M gives at least a reasonable guess of whether an input x belongs to A (we will later see that this guess can be improved). To get the ideas behind these definitions, let us next give an example of a language in BPP not known to be in P.

Example 9.3 Checking polynomial identities: given two polynomials P_1 and P_2 in several variables represented in some convenient way (e.g. as determinants, products or something similar), do P_1 and P_2 represent the same polynomial? We require that the representation is such that if we are given values of the variables then we can evaluate the polynomials in polynomial time. A typical example would be to investigate whether the equality

| 1          1          1          · · ·  1          |
| x_1        x_2        x_3        · · ·  x_n        |
| x_1^2      x_2^2      x_3^2     · · ·  x_n^2       |
| .          .          .                .           |
| x_1^{n−1}  x_2^{n−1}  x_3^{n−1}  · · ·  x_n^{n−1}  |   =   ∏_{i>j} (x_i − x_j)

is a true identity.

The obvious approach to this problem is to expand the polynomials into a sum of monomials and then compare the expansions term by term. This procedure will in general be quite inefficient, since there might be exponentially many monomials (as in the example given). Our probabilistic algorithm will instead evaluate the two polynomials at randomly chosen points. If the polynomials disagree on one of these points they are different, and we will prove that if they agree on all points then they are probably the same polynomial. The algorithm will depend on two extra parameters, d and k. The first parameter is a known upper bound for the degrees of the polynomials in question (in our example we could take d = n(n−1)/2) and the second is related to the error probability.

Input P_1 and P_2.
For i = 1, 2, . . . , k:
    Pick random integer values independently for x_1 through x_n in the range [1, 2nd]. If P_1(x) ≠ P_2(x), conclude that P_1 ≠ P_2 (answer 0) and stop.
Next i.
Conclude that P_1 = P_2 (answer 1).


Clearly, if we answer 0 we are always correct, and to see that the algorithm is useful we have to prove that most of the time we are correct even when we answer 1. The key lemma is the following.

Lemma 9.4 Given a nonzero polynomial P in n variables and of degree ≤ d, the set

Z = {x | 1 ≤ x_i ≤ R for 1 ≤ i ≤ n, and P(x) = 0}

has cardinality at most dnR^{n−1}.

Proof: We prove the lemma by induction over n. For n = 1 the lemma follows from the fact that a polynomial of degree d has at most d zeroes.

For the induction step, let us consider the polynomials Q_j in the variables x_1 . . . x_{n−1} obtained by substituting j for the variable x_n. Q_j is a polynomial of degree ≤ d in n − 1 variables, and thus we could use the induction hypothesis if we knew that Q_j was nonzero. We claim that there are at most d different j such that Q_j is identically zero. To see this, take any monomial in P which appears with a nonzero coefficient (assume for the sake of the argument that it is x_1 x_2 x_n). Now look at the coefficient of x_1 x_2 in Q_j. It is the value at j of a nonzero polynomial of degree ≤ d − 2. Thus there are at most d − 2 values of j such that this coefficient is 0, and in general at most d values of j such that Q_j is identically zero.

The set Z splits into the union of sets obtained by fixing the last coordinate to any value in the range 1 to R. When the corresponding polynomial is nonzero, then by the induction hypothesis the cardinality of the set is bounded by (n − 1)dR^{n−2}, and when the polynomial is zero the cardinality is R^{n−1}. Since there are at most R sets of the first kind and d of the second, we get the total estimate

R(n − 1)dR^{n−2} + dR^{n−1} = ndR^{n−1}

and the induction is complete.

Using this lemma we can analyze the algorithm. If P_1 and P_2 represent the same polynomial then we will always answer 1, and we always get the correct answer. When P_1 and P_2 do not represent the same polynomial, call an x such that P_1(x) = P_2(x) an unlucky x. Thus the algorithm gives the correct answer unless we happen to pick k unlucky x's. By applying the above lemma to P_1 − P_2 we see that there are at most (2dn)^n/2 unlucky x's, and thus the probability that we pick one unlucky x is bounded by 1/2. Since the k x's are independent, the probability of them all being unlucky is at most 2^{−k}. Thus if k is reasonably large we get the correct answer with high probability.

All that remains to see that the problem lies in BPP is to observe that the algorithm is polynomial time, but this is obvious since the essential step of the algorithm is to evaluate the polynomials, and this is polynomial time by assumption.
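A compact Python sketch of this randomized identity test; p1 and p2 are hypothetical black-box evaluators (callables on an integer point), and the parameters d and k play the roles above.

import random

def same_polynomial(p1, p2, n, d, k):
    # Randomized identity test: evaluate both polynomials at k
    # independent random points in [1, 2nd]^n. Answer 0 (different)
    # is always correct; answer 1 (equal) errs with prob <= 2**-k.
    for _ in range(k):
        point = [random.randint(1, 2 * n * d) for _ in range(n)]
        if p1(point) != p2(point):
            return 0          # certainly different
    return 1                  # probably identical

# Example: (x+y)^2 versus x^2 + 2xy + y^2 (n = 2, degree d = 2).
p1 = lambda v: (v[0] + v[1]) ** 2
p2 = lambda v: v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2
print(same_polynomial(p1, p2, n=2, d=2, k=20))   # prints 1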

In the example we saw that if we were willing to run the algorithm longer

(i.e. try more random points) then we could make the probability of error

arbitrarily small. It is not hard to see that this is true in general.

Theorem 9.5 A set A belongs to BPP iff there is a polynomial time probabilistic Turing machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 1 − 2^{−|x|−2}
x ∉ A ⇒ Pr[M(x) = 1] ≤ 2^{−|x|−2}.

Proof: Clearly the above conditions are stronger than our original definition, and thus if A satisfies the above condition then it belongs to BPP.

We need to prove the converse, i.e. that if A ∈ BPP we can find a machine M which satisfies the above condition. We know by the definition of BPP that there is a machine M′ such that

x ∈ A ⇒ Pr[M′(x) = 1] ≥ 2/3
x ∉ A ⇒ Pr[M′(x) = 1] ≤ 1/3.

Now let M be defined by running M′ a total of C = 2(|x| + 3)/log(9/8) times with independent random choices and outputting 1 iff M′ outputs 1 at least C/2 times. We need to verify the claim that this M satisfies the condition in the theorem.

Assume that x ∈ A and that M′ outputs 1 with probability p on input x (we know that p ≥ 2/3). Then the probability that M does not output 1 is bounded by

∑_{i=0}^{C/2} (C choose i) p^i (1 − p)^{C−i}.

The ratio of two consecutive terms in this sum is at least p/(1 − p) ≥ (2/3)/(1/3) = 2, and thus if the last term is T then the sum is bounded by

∑_{i=0}^{C/2} 2^{i−C/2} T ≤ 2T.

This last term is bounded by

2^C (2/3)^{C/2} (1/3)^{C/2} ≤ (8/9)^{C/2} ≤ 2^{−|x|−3}

and thus the first condition of the theorem follows. The second condition is proved in a similar way.
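A short Python sketch of this amplification; M_prime is a hypothetical randomized decision procedure returning 0 or 1 with fresh randomness on every call, and we simply take a majority vote over C independent runs.

import math

def amplified(M_prime, x):
    # Majority vote over C independent runs of a BPP machine, with
    # C chosen as in the proof of Theorem 9.5 so that the error
    # probability drops below 2**(-len(x)-2).
    C = math.ceil(2 * (len(x) + 3) / math.log2(9 / 8))
    ones = sum(M_prime(x) for _ in range(C))
    return 1 if ones > C / 2 else 0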

In our example we proved more than needed to establish that the problem in question was in BPP. In particular, we proved that if the input was in the language then the answer was always correct. With this additional restriction we get a new complexity class.

Definition 9.6 A set A belongs to R iff there is a polynomial time probabilistic Turing machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 2/3
x ∉ A ⇒ Pr[M(x) = 1] = 0.

Remark 9.7 I believe that R is short for Random polynomial time. Hence

this class is sometimes also called RP.

While BPP is closed under complement, this is not obvious (or known)

for R and thus we also have a third probabilistic complexity class, co-R, the

set of languages whose complement lies in R. Observe that both R and co-R

are subsets of BPP. Our example “Polynomial identities” is a member of

co-R.

There are not many known examples of problems not known to be in P that lie in BPP. The main other example is recognizing primes. We will not discuss that algorithm here. However, by quite elaborate methods it is possible to prove that primes belong to R ∩ co-R, and for this class we can make a very strong statement.

Theorem 9.8 A set A belongs to R ∩ co-R iff there is a probabilistic machine M which runs in expected polynomial time and always decides A correctly.

Proof: By assumption there is a machine M_1 that outputs 1 with probability at least 2/3 when the input x is in A and with probability 0 when x is not in A (since A ∈ R). Similarly, since A ∈ co-R there is a machine M_2 that outputs 1 with probability at least 2/3 when x is not in A and never when x is in A. Both M_1 and M_2 run in polynomial time. Now, on input x, alternate in running M_1 and M_2 until one of them answers 1. When this happens we know that x ∈ A if the 1-answer was given by M_1 and we know that x ∉ A if it was given by M_2. Each time we run both machines we have probability at least 2/3 of getting a decisive answer and hence it follows that the procedure is expected polynomial time.
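The proof is in effect a Las Vegas algorithm, and it is short enough to write down. A minimal Python sketch, where run_M1 and run_M2 are assumed black boxes for the one-sided machines M_1 and M_2:

    import random

    def decide(run_M1, run_M2, x):
        # Alternate the R-machine and the co-R-machine until one says 1;
        # each round is decisive with probability at least 2/3.
        while True:
            if run_M1(x):   # answers 1 only when x is in A
                return True
            if run_M2(x):   # answers 1 only when x is not in A
                return False

    # Toy stand-ins: pretend A consists of the strings of even length.
    in_A = lambda x: len(x) % 2 == 0
    M1 = lambda x: in_A(x) and random.random() < 2 / 3
    M2 = lambda x: (not in_A(x)) and random.random() < 2 / 3
    print(decide(M1, M2, "0101"))  # True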


9.1 Relations to other complexity classes

Let us relate the newly defined complexity classes to our old classes. Clearly any of the defined classes contains P, since we can always ignore the possibility of using randomness. We have some non-obvious relations.

Theorem 9.9 R ⊆ NP.

Proof: We know by the definition of R that if A ∈ R then there is a machine M such that when x ∈ A, M accepts x with probability ≥ 2/3, and when x ∉ A there are no accepting computations. But this implies that if we replace the probabilistic choices by non-deterministic choices, M accepts x precisely when x ∈ A.

The above theorem immediately yields:

Theorem 9.10 co-R ⊆ co-NP.

Our next theorem is also not very surprising.

Theorem 9.11 Suppose A ∈ BPP and the machine M that recognizes A runs in time T(n) and uses at most p(n) coins; then A can be recognized by a deterministic machine that runs in time O(2^{p(n)} T(n)) and space O(T(n) + p(n)).

Proof: Just run M for all possible 2^{p(n)} sets of coin flips and calculate the probability that M accepts. A straightforward implementation gives the given resource bounds.
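The simulation is just an exhaustive enumeration of coin sequences. A minimal Python sketch, where run_M(x, coins) is an assumed deterministic black box for M run with a fixed coin sequence:

    from itertools import product

    def deterministic_decision(run_M, x, p):
        # Try all 2^p coin sequences and accept iff a majority accepts.
        accepts = sum(run_M(x, coins) for coins in product((0, 1), repeat=p))
        return 2 * accepts > 2 ** p

    # Toy machine with 3 coins: accepts iff at least two coins are 1.
    toy = lambda x, coins: int(sum(coins) >= 2)
    print(deterministic_decision(toy, "anything", 3))  # False (4 of 8 accept)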

We have an immediate corollary:

Corollary 9.12 BPP ⊆ PSPACE.

Apart from these theorems, nothing is known about the relations between our probabilistic classes and our old classes. There is no great consensus on what the true relations are, but many people think it is possible that P = BPP.


10 Pseudorandom number generators

In the last section we used random numbers. Without discussing the matter,

we assumed that we had access to an unlimited number of perfectly random

coins. In practice this might not be the case. One could indeed question

whether there are any random phenomena in nature, and thus whether ran-

domness in computation at all makes sense. This is a valid question, but

it is mostly philosophical in nature and we will not discuss it. Instead we

will take the optimistic attitude that there is randomness, but there is a

problem getting enough random numbers into the computer. For the sake

of this section we will assume that we only need random bits, where each

bit is 0 and 1 with probability 1/2. This is not a severe restriction since

random bits can be turned into random numbers in many ways.

The common solution to the problem of not having enough truly random numbers is to use what is generally called a pseudorandom number generator (we will in the future call them pseudorandom bit generators since we will be generating bits). This is a function which takes a short truly random string and produces a longer “random looking” string. How the short truly random string (which is called the seed) is produced is clearly a problem (it is generally supplied by the user), but we will not concern ourselves with this problem; we just assume that somehow we can get a few random bits into the computer.

The main question we will deal with in this section is how to deﬁne

what we want from a pseudorandom generator and how to construct such

a generator. One obvious property is that it should be easy to run and

produce something useful, i.e. it should be computable in polynomial time

and the output should be longer than the input. Something that has only

these two properties is a bit generator.

Definition 10.1 A bit generator is a polynomial time computable function that takes a binary string as input and on an input of length n produces an output of length p(n), where p is a polynomial such that p(n) > n for all n. For technical reasons we assume that p(n) is strictly increasing in n.

Note that the deﬁnition allows for the output to be of only length n +

1 and this does not seem to be much of a generator. We will see later

(Theorem 10.8) that this is not a real problem.

The more interesting aspect of pseudorandom bit generators is to try to formalize the “random looking” requirement on the output. Traditionally, this was interpreted as meaning that the output bits passed a small set of standard statistical tests. This is the germ of what today is believed to be the correct definition.

Deﬁnition 10.2 A statistical test is a function from binary strings to {0, 1}.

Intuitively the output 1 can be interpreted as the string passes the test and

the output 0 as failing.

Note, however, that not even a truly random string will always pass a given statistical test.

Deﬁnition 10.3 (First attempt) A bit generator passes a statistical test S

if the probability that S outputs 1 on a random output of the generator is

equal to the probability that S outputs 1 on a truly random string.

Here a random output of the generator is deﬁned as the output on a

truly random seed. The tempting deﬁnition of pseudorandom generator is

now:

Deﬁnition 10.4 (First attempt) A bit generator is pseudorandom if it passes

all statistical tests.

A bit generator that passes all statistical tests produces a very random looking output. However, the definition is too restrictive and there is no such generator. Take any bit generator G and consider the following statistical test:

    S_G(x) = 1 if x can be output by G, and 0 otherwise.

First observe that if G stretches strings of length n to strings of length p(n) in time T(n), then S_G can be implemented on strings of length p(n) to run in time 2^n T(n), since we just run G on all possible strings of length n and check if one of them equals x.

When we run S_G on the output of G then the result will always be 1. On the other hand, when we feed S_G a truly random string then the probability that we get output 1 is at most 1/2. This follows since there is one output for each seed, which implies that there are at most 2^n possible outputs of G of length p(n) (here we use that p is strictly increasing), and since there are 2^{p(n)} possible strings and p(n) ≥ n + 1, at most half of the strings are possible outputs of G.

In practice, if n is large, it is not feasible to compute S_G as described above, since the exponential time needed to try all the seeds is usually too much. Thus somehow this test is “cheating” and we change the definition to take care of this.

Deﬁnition 10.5 (Final attempt) A bit generator is pseudorandom if it

passes all statistical tests that run in probabilistic polynomial time.

Remark 10.6 From the development up to this point polynomial time is

the natural requirement on eﬃcient statistical tests. The choice to allow

statistical tests to be probabilistic is not clear, but for many reasons (we will

not go into them here) it is the better choice. Allowing randomness makes

the deﬁnition stronger since anything that passes all probabilistic polynomial

time statistical tests also passes all deterministic polynomial time statistical

tests.

We have still not overcome all problems with the definitions, as can be seen from the following miniature version of S_G.

Test s_G: On input x of length p(n), guess n^2 random seeds of length n, run G on these seeds, and output 1 if one of the outputs seen from G is equal to x. Otherwise output 0.

Since G is assumed to be polynomial time, s_G can be implemented in polynomial time. Furthermore, if x is a string that could have been generated by G then there is some small but positive probability that s_G will output 1, while if x cannot be output by G then this probability is 0. By the analysis of S_G this implies that the probability that s_G outputs 1 on a random output of G is different from the probability that it outputs 1 on a random input. As we have defined passing statistical tests this means that G fails the test s_G. This is counterintuitive since for large n the test s_G is very weak. We change the definition to take care of this anomaly.

Definition 10.7 (Final attempt) Let S be a statistical test and let G be a bit generator. Let a_n be the probability that S outputs 1 on a random output of G of length n and let b_n be the probability that it outputs 1 on a truly random input of length n. G passes the statistical test S if for any k there is an N_k such that for all n > N_k it is true that |a_n − b_n| < n^{−k}. The probability is taken over the random output of G and the random choices of S.


In other words the diﬀerence of the behavior of the test on the outputs

of the generator and random strings goes to 0 faster than the inverse of any

polynomial.

Let us first prove that once we have a pseudorandom generator which extends the seed slightly, we can get an arbitrary extension.

Theorem 10.8 If there is a pseudorandom bit generator G, then for any strictly increasing polynomial p there is a pseudorandom bit generator G′ that extends n bits to p(n) bits.

Proof: The only problem is that G might not extend the seed sufficiently. By definition G maps n bits to more than n bits. We will assume that G outputs n + 1 bits, since if it outputs more bits we can just ignore them. Note that G remains a pseudorandom bit generator (prove this!). Now define G′ to be G iterated p(n) − n times, i.e. on an input of length n we first compute G to get a string of length n + 1, then compute G on this string to get a string of length n + 2, etc., until we have a string of length p(n). This generator produces a string of the wanted length and it is easy to see that it works in polynomial time. We prove that it is pseudorandom by converting a hypothetical statistical test S which distinguishes the output of G′ from random strings into a test which distinguishes the output of G from random strings.

Let a_n be the probability that S outputs 1 on random outputs from G′ of length p(n) and let b_n be the corresponding probability when the input is truly random. By assumption, for some k and infinitely many n (for notational convenience we assume this is true for all n) we have |a_n − b_n| ≥ n^{−k}. Consider the following probability distributions R_i, 0 ≤ i ≤ p(n) − n, on strings of length p(n): start with a truly random string of length n + i and iterate G p(n) − i − n times. Note that R_0 gives random outputs of G′ while R_{p(n)−n} gives truly random strings. Let q_i be the probability that S outputs 1 on distribution R_i. Since q_0 = a_n, q_{p(n)−n} = b_n and |a_n − b_n| ≥ n^{−k}, there is some i such that |q_i − q_{i+1}| ≥ 1/(n^k p(n)). Let us fix this i.

Now consider the following statistical test on strings of length n + i + 1: given a string x, iterate G p(n) − n − i − 1 times and run S. If the initial string was random we have produced an element according to R_{i+1} and the probability of getting output 1 is q_{i+1}. On the other hand, if the initial string was the output of G on a random string of length n + i, then we have produced a string according to R_i and the probability of getting a 1 is q_i. This implies that we have found a way of distinguishing the output of G from random strings and hence we have reached a contradiction, since G was supposed to be pseudorandom. Note that the test obviously runs in polynomial time.

This should finish the proof, but the very careful reader will see that there are some minor problems. The proposed test uses two auxiliary parameters, p(n) and i. The value p(n) causes no problems since it is the value of a fixed polynomial. However, it is not clear how to find i. We sketch how to get around this problem: Let c be a constant. On a given input of length n consider the tests given by different values of i. For each test, evaluate the test by picking n^c random inputs according to both distributions. Let i_0 be the value that gives the biggest difference between the two distributions. Now run the test with i = i_0 on the given input. It is a tedious (and not that easy) exercise to check that for some c this “universal” test will distinguish the random strings from outputs of G.
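The construction of G′ itself is completely mechanical. A minimal Python sketch, where G is an assumed black box stretching m bits to m + 1 bits, and toy_G is of course not a pseudorandom generator:

    def stretch(G, seed, target_length):
        # Iterate a one-bit-stretching generator until the string has the
        # desired length, as in the proof of Theorem 10.8.
        s = list(seed)
        while len(s) < target_length:
            s = G(s)
        return s

    # Toy (completely insecure) stand-in for G: append the parity bit.
    toy_G = lambda s: s + [sum(s) % 2]
    print(stretch(toy_G, [1, 0, 1], 8))  # [1, 0, 1, 0, 0, 0, 0, 0]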

Let us next investigate the existence of pseudorandom bit generators.

Theorem 10.9 If NP ⊆ BPP then there are no pseudorandom generators.

Proof: Just observe that the test S_G can be computed in NP (guess the seed and verify). Thus, if NP ⊆ BPP, then S_G can be run in probabilistic polynomial time, and since it distinguishes the output of G from random bits, G cannot be pseudorandom.

In particular, if P = NP there are no pseudorandom generators, and thus proving the existence of such generators would prove P ≠ NP, which we cannot do for the moment. Thus the best we could hope for is to prove that if P ≠ NP then there are pseudorandom generators. Also this is probably too much to hope for. The reason is that P vs NP is a question about the worst case behavior of algorithms, while the existence of pseudorandom generators is an average case question. This forces us to base the construction of pseudorandom generators on even stronger assumptions.

Deﬁnition 10.10 A function f is a one-way function if it is computable in

polynomial time and for any probabilistic polynomial time algorithm A the

following holds. Choose a random input x of length n and compute y = f(x).

If A is given y as input, then the probability that it outputs a z such that

f(z) = y goes to 0 faster than the inverse of any polynomial.

Remark 10.11 Note that we cannot ask A to actually ﬁnd the initial x,

since in such a case the constant function would be one-way.


We have:

Theorem 10.12 If there is a pseudorandom bit generator then there is a

one-way function.

Proof: We claim that the function given by the generator (i.e. from the seed to the output) is one-way. By Theorem 10.8 we can assume the generator expands n bits to 2n bits. Assume that the function given by this generator (let us by abuse of notation denote the generator as well as the function it computes by G) is not one-way, in other words that there is a k and an A such that A finds an inverse image of a given function value with probability at least n^{−k} (for infinitely many n). Then the following test, S, will distinguish outputs of G from random bits.

On input x run A. Suppose A outputs y; then if G(y) = x output 1, otherwise output 0.

If x is a truly random string of length 2n then the probability that the test S outputs 1 is bounded by the probability that x can be output from G. Since there are 2^{2n} possible strings and at most 2^n outputs from G, this probability is bounded by 2^{−n}. On the other hand, if x is the output of G then the probability of output 1 is exactly the success probability of A, which by assumption is at least n^{−k} (for infinitely many n). Thus this test distinguishes the output of G from random strings, contradicting that G is pseudorandom (the test is polynomial time since both A and G are polynomial time). This proves that G is a one-way function.

polynomial time). This proves that G is a one-way function.

It was a long-standing open question whether the converse of Theorem 10.12 is also true, i.e. whether starting from any one-way function it is possible to construct a pseudorandom bit generator. In 1990 it was proved by Håstad, Impagliazzo, Levin and Luby that this is indeed the case, but their proof is much too complicated for the present set of notes. Instead we prove the following theorem, which is due to Yao (the present proof is due to Goldreich and Levin). Let a one-way length-preserving permutation be a one-way function which for each n is a 1-1 mapping on strings of length n.

Theorem 10.13 If there is a one-way length-preserving permutation then there is a pseudorandom bit generator.


Proof: Let f be the one-way length-preserving permutation. Let x and r be random strings of length n, and let (x, y) denote the inner product modulo 2 of the strings x and y (i.e. the parity of ∑_{i=1}^n x_i y_i). Then we claim that the function g(x, r) = f(x), r, (r, x) is a pseudorandom bit generator. It is a bit generator since it expands 2n bits to 2n + 1 bits, and it is polynomial time computable since f is polynomial time computable. The hard part is to prove that it is pseudorandom. The following lemma of Goldreich and Levin will be crucial.

Lemma 10.14 Suppose we have a probabilistic polynomial time algorithm A that on input f(x), r computes (x, r) with probability greater than 1/2 + 1/Q(n), where Q is a polynomial. (Here the probability is taken over a random choice of x and r and the random choices of A.) Then there is a probabilistic polynomial time algorithm B that inverts f with probability of success at least 1/(2Q(n)).

In other words, if f is a one-way function then (x, r) looks random to any probabilistic polynomial time machine which only has the information f(x), r.

Let us first see how Theorem 10.13 follows from Lemma 10.14. Suppose g is not pseudorandom and that S is a statistical test which outputs 1 with probability a_n on random bits and with probability b_n on random outputs of g. Suppose without loss of generality that a_n ≥ b_n + n^{−k}. Now consider the following algorithm for predicting (x, r):

On input f(x), r run S: let b_0 = S(f(x), r, 0) and b_1 = S(f(x), r, 1). If b_0 = b_1 output a random bit, and otherwise output the i such that b_i = 1.

Let p(x, r, i) be the probability that S outputs 1 on (f(x), r, i). Then

a_n = 2^{−2n−1} ∑_{x,r,i} p(x, r, i)

and

b_n = 2^{−2n} ∑_{x,r} p(x, r, (r, x)).

Consider the above algorithm on input f(x), r. Write c for (r, x) and let c̄ denote the complement of (r, x); then the probability that the algorithm outputs the correct value for f(x), r is

p(x, r, c)(1 − p(x, r, c̄)) + (1/2)(p(x, r, c)p(x, r, c̄) + (1 − p(x, r, c))(1 − p(x, r, c̄))),

which equals (1/2)(1 + p(x, r, c) − p(x, r, c̄)). Hence the total probability of it being correct is (1/2)(1 + a_n − b_n) and now Theorem 10.13 follows from Lemma 10.14.
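The prediction step can be written out directly. A minimal Python sketch, where S is the assumed distinguisher and fx, r are the values from the argument above:

    import random

    def predict_inner_product(S, fx, r):
        # Query the distinguisher on both possible last bits and output
        # the bit on which it answers 1; on a tie, guess randomly.
        b0, b1 = S(fx, r, 0), S(fx, r, 1)
        if b0 == b1:
            return random.randint(0, 1)
        return 1 if b1 == 1 else 0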

Next let us prove Lemma 10.14.

Proof: (Lemma 10.14) We give a proof due to Rackoff.

First observe that for at least a fraction 1/(2Q(n)) of the x's, A predicts (r, x) with probability (now only over r) at least 1/2 + 1/(2Q(n)). We will describe a procedure that will be successful with high probability for each such x, and this is clearly sufficient.

We compute each bit of x individually. Let e_i be the unit vector in the i'th dimension. We could ask A about f(x), e_i, but there is no reason it will be correct on these particular inputs. We need to ask about many points, and we will use a small random subspace shifted by e_i. The set of r's asked about will only be pairwise independent, but we can guess the answers on the entire subspace by guessing the answers on the basis vectors. Let k be a parameter and let ⊕ denote exclusive-or; then the algorithm on input y works as follows:

Pick k random vectors r_1, r_2, ..., r_k of length n.
For each value of the k bits b_1, b_2, ..., b_k do
    For i = 1 to n do
        count = 0
        For all non-empty subsets S of {1, 2, ..., k} do
            Ask A about y, e_i ⊕ (⊕_{j∈S} r_j); suppose the answer is b.
            Compute b′ = b ⊕ (⊕_{j∈S} b_j) and set count = count + 1 − 2b′.
        Next S
        Set x_i = 0 if count > 0 and 1 otherwise.
    Next i
    If f(x) = y output x and stop.
od
Report ‘failure’.

Just to avoid confusion, observe that count is the number of 0-guesses minus the number of 1-guesses and hence we are taking a majority decision. If A runs in time T(n) and f in time T_1(n) then the algorithm runs in time 2^{2k}nT(n) + T_1(n) and thus the algorithm is polynomial time if k is O(log n).
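To see the algorithm in one piece, here is a runnable Python sketch. The predictor A, the toy permutation f (a rotation, certainly not one-way) and the perfect oracle in the demo are all illustrative assumptions; in the real proof A is only correct slightly more than half the time.

    import random
    from itertools import product

    def dot(u, v):
        return sum(a & b for a, b in zip(u, v)) % 2

    def xor(u, v):
        return [a ^ b for a, b in zip(u, v)]

    def goldreich_levin(y, A, f, n, k):
        # For each guess of the bits (x, r_j), reconstruct every x_i by a
        # majority vote over all non-empty subsets S, as in the proof.
        rs = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]
        subsets = [[j for j in range(k) if (m >> j) & 1]
                   for m in range(1, 2 ** k)]
        e = lambda i: [1 if t == i else 0 for t in range(n)]
        for bs in product((0, 1), repeat=k):
            x = []
            for i in range(n):
                count = 0
                for S in subsets:
                    q = e(i)
                    for j in S:
                        q = xor(q, rs[j])
                    guess = A(y, q) ^ (sum(bs[j] for j in S) % 2)
                    count += 1 - 2 * guess
                x.append(0 if count > 0 else 1)
            if f(x) == y:
                return x
        return None

    # Demo with a toy permutation (rotate left) and a perfect oracle.
    n, k = 6, 2
    f = lambda x: x[1:] + x[:1]
    A = lambda y, r: dot(y[-1:] + y[:-1], r)   # cheats: inverts f itself
    x0 = [random.randint(0, 1) for _ in range(n)]
    print(goldreich_levin(f(x0), A, f, n, k) == x0)  # True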


We need to analyze the probability that we find the correct x. We claim that this happens with good probability when b_i = (r_i, x). Let r^i_S denote e_i ⊕ (⊕_{j∈S} r_j) and let b^i_S denote the corresponding guess b ⊕ (⊕_{j∈S} b_j). If A gives the correct answer (i.e. (x, r^i_S)) to the question y, r^i_S then b^i_S = x_i. This implies that we are in pretty good shape, since we know that A gives a majority of correct answers and the r^i_S are fairly random.

Lemma 10.15 For S_1 ≠ S_2, r^i_{S_1} and r^i_{S_2} are independent and uniformly distributed on {0, 1}^n.

Proof: Suppose j ∈ S_1 but j ∉ S_2 (if there is no such j we can interchange S_1 and S_2). Now it is easy to see that r^i_{S_2} is uniformly distributed (its definition is an exclusive-or of several things, at least one of which is uniformly random) and that for any fixed value of r^i_{S_2} the presence of r_j in the exclusive-or defining r^i_{S_1} makes sure that it is still uniformly distributed.

Now it follows by Lemma 10.15 that the b^i_S are pairwise independent. Suppose for notational convenience that x_i = 0; then we know that count is a random variable with expected value at least (2^k − 1)/Q(n) and variance at most 2^k − 1. Now remember Tchebychev's inequality:

Theorem 10.16 Let X be a random variable with expected value µ and variance v; then the probability that |X − µ| ≥ λ is bounded by v/λ^2.

Using this with λ = (2^k − 1)/Q(n) and v = 2^k − 1 we see that x_i takes the incorrect value with probability at most Q(n)^2/(2^k − 1). Now, if 2^k − 1 ≥ 10nQ(n)^2 then the probability that x_i does not take the correct value is bounded by 1/(10n). Thus the probability that some x_i is incorrect is bounded by 1/10. This concludes the proof of Lemma 10.14.

Remark 10.17 We have now given a generator that extends the input by one bit, and we know by Theorem 10.8 that we can get a generator which extends the output arbitrarily. We can take this to be the following very natural generator: Pick x and r randomly and let b_i = (f^i(x), r), where f^i is f iterated i times, for i = 1, 2, ..., p(n).
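A minimal Python sketch of this natural generator, where f is an assumed one-way length-preserving permutation on bit lists (the rotation used below is of course only a toy stand-in):

    def natural_generator(f, x, r, m):
        # Output the bits (f^i(x), r) for i = 1, ..., m as in Remark 10.17.
        bits, z = [], x
        for _ in range(m):
            z = f(z)  # z is now f iterated i times on x
            bits.append(sum(a & b for a, b in zip(z, r)) % 2)
        return bits

    rot = lambda z: z[1:] + z[:1]  # toy stand-in, certainly not one-way
    print(natural_generator(rot, [1, 0, 1, 1], [0, 1, 1, 0], 6))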

Now that we have studied good generators it is natural to ask what happens if we use these generators to produce the random bits needed by a probabilistic algorithm. Suppose we have a probabilistic machine M which recognizes a BPP-language B and let G be a pseudorandom generator. Suppose M uses p(n) random bits and that for some small constant ε, G extends n^ε bits to p(n) bits. The latter can be assumed by Theorem 10.8. Now consider the following statistical test S_{M,x} of a random string r of length p(n):

Given x, run M on input x with random coins r. Answer with the output of M.

We know that when x ∈ B and r is random then the probability that this test outputs 1 is at least 2/3, while otherwise it is at most 1/3. Since G by assumption passes all statistical tests it is tempting to think that the same is true for outputs of G. This would imply a theorem similar to Theorem 9.11 saying that B could be recognized in time close to 2^{n^ε}, since we would only have to try all seeds of G rather than all sets of p(n) coins.

The reason this is not true is that the test has a parameter x which might be hard to find (the parameter M is not a problem since it is of constant size). All is not lost, since we could change the statistical test to choose x randomly and then study the behavior of M. Then we could prove that we had a deterministic algorithm that ran in time close to 2^{n^ε} and was correct for most inputs. However, since we have not studied the concept of being correct for most inputs we will not pursue this approach. Instead we have:

Definition 10.18 A non-uniform statistical test is a probabilistic polynomial time algorithm that on inputs of length n gets an advice string a_n which is of polynomial length.

Remark 10.19 Note that the advice is the same for all strings of length n.

The interested reader might want to prove that the given deﬁnition corre-

sponds to polynomial size circuits without any uniformity constraints.

Deﬁnition 10.20 A pseudorandom generator is non-uniformly strong if it

passes all non-uniform statistical tests.

This deﬁnition is stronger than the previous deﬁnition since we are al-

lowing stronger statistical tests. We will not do so here, but it turns out

that the existence of such generators is equivalent to the existence of one-

way functions where we allow the inverting function to have an advice. In

general all proofs for the uniform case translate to the non-uniform case. In

In particular Theorem 10.8 remains true. We now ﬁnish the discussion with

a theorem of Yao.


Theorem 10.21 If there is a pseudorandom generator which is non-uniformly strong then

BPP ⊆ ∩_{ε>0} DTIME(2^{n^ε}).

Proof: The proof is as outlined above. Suppose B ∈ BPP and that it is recognized by M which uses p(n) coins and runs in time T_1(n) (both of these bounds are polynomials). Let δ < ε and let G be a non-uniformly strong generator which extends n^δ bits to p(n) bits and runs in time T_2(n) (which is also a polynomial). Now let x be an arbitrary input of length n and consider the above test S_{M,x}. This test uses the advice x, but since G is non-uniformly strong, it passes this test. This implies that if we replace the coins by a random output of G then we still have essentially the same probability of acceptance. We now just try all the 2^{n^δ} possible seeds for G and take a majority decision. This can be done in time

2^{n^δ}(T_1(n) + T_2(n))

and this is O(2^{n^ε}). Since both B and ε were arbitrary we have proved the theorem.

Thus we have proved that if there are one-way functions in the non-uniform setting then BPP can be simulated in time which is significantly cheaper than exponential time. If one is willing to make stronger assumptions then one can draw stronger conclusions. In particular, if there is a polynomial time computable function such that inverting it (in the non-uniform setting, with non-negligible success ratio) on inputs of length n requires time 2^{cn} for some constant c > 0, then BPP = P.


11 Parallel computation

The price of processors has dropped remarkably in the last decade and it is now feasible to build computers that have a large number of processors. The most famous multi-processor computer might be the Connection Machine, which has 2^{16} = 65536 processors.

The concept of having many processors working in parallel leads to many interesting theoretical problems. One could phrase the main question as a variant of a traditional math problem. Suppose one computer can compute a given function in one million seconds; how long would it take a million computers to compute the same function?

The answer to this question is not known, but it seems like the answer

could be anywhere from one second to a million seconds depending on the

function. It is an important theoretical problem to identify the computa-

tional tasks that can be parallelized in an eﬃcient manner. In this section

we will just give the ﬁrst deﬁnitions and show some basic properties.

When many processors cooperate to solve a problem it is of crucial importance how they communicate. In fact, in practice this seems to be the dominating obstacle to making large scale parallel computation efficient. It is hard to get this fairly practical consideration into the theoretical models in a suitable manner, and this complication will usually get lost.

choose here to study the circuit model of computation and as we will see,

communication between processors will be ignored. We do not want to ar-

gue that the model does not reﬂect reality, we only want to point out that

there is one important aspect missing.

11.1 The circuit model of computation

We have previously briefly discussed the concept of a Boolean circuit. It is a directed acyclic graph with three types of nodes: input nodes, operation nodes and output nodes. The input nodes are labeled by variable names x_i and the operation nodes are labeled by logical operators. The inputs to a node v are the nodes w for which (w, v) is an edge.

We will here only allow the operators ∧, ∨ and ¬. The circuit computes a function {0, 1}^n → {0, 1} in the natural way. (Substitute the value of the i'th coordinate for x_i and then evaluate the nodes by letting each operation node take the value which corresponds to the corresponding operator applied to the inputs of that node.) We will be interested in two parameters of the circuit: its size and its depth. The size of a circuit C_n will be denoted by |C_n| and is equal to the number of nodes it contains, while the depth will be denoted by d(C_n) and is the length of the longest directed path from the input to the output. If there is a processor at each node of the circuit then the number of processors is equal to the size of the circuit and the time needed to evaluate the circuit is equal to the depth of the circuit. Thus if we are interested in fast parallel computation it is interesting to construct small circuits with small depth.

The functions we have been considering so far take inputs that are of arbitrary length, while a circuit can only take inputs of a given length. The way to resolve this is to let a function be computed by a sequence of circuits (C_n)_{n=1}^∞ where C_n computes f on inputs of length n. We will then be interested in the growth rate of the size and depth of C_n as a function of n. In particular, we will say that a sequence of circuits is of polynomial size if the growth rate of |C_n| is not more than polynomial in n. Let us now state a theorem that was implicitly proved in Section 7.3.

Theorem 11.1 If B ∈ P then B can be recognized by polynomial size circuits.

Proof: (Outline) In the proof of Theorem 7.25 we saw that given a Turing machine M and an input x we could construct a circuit such that the output of the circuit was equal to the output of M on input x. The circuit constructed the computation tableau of M row by row. If one looks closely at that proof, one discovers that the structure of the circuit only depends on M, while x enters as the input of the circuit. In particular, given a language B ∈ P we take the corresponding Turing machine M_B, and given n we can now construct a circuit C_n which will give the same output as M_B on all inputs of length n. The size of this circuit will only be a constant factor greater than the size of the computation tableau of M_B on inputs of length n. If M_B runs in time O(n^c) then this size will be O(n^{2c}) and thus we have constructed circuits for B of polynomial size.

Remark 11.2 By more efficient constructions it is possible to give a better simulation of Turing machines and decrease the size of the above circuit to O(n^c log n).

One immediate question is whether the converse of the above theorem is true, i.e. if a function can be computed by polynomial size circuits, does the function in fact lie in P? With the current definitions this is not true. The reason for this is that we have not put any conditions on how to obtain the circuits C_n. To see the problem, consider the following language:

B = {x | M_{|x|} halts on blank input}.

As we have seen earlier this language is not even recursive. However, it has very small circuits, since for each length n either all strings of length n are in B or no string of that length is a member of B. Thus C_n could just be a trivial circuit which always outputs 0 or always outputs 1, depending on whether M_n halts on blank input. How to decide which one to choose is non-recursive, but this is of no concern in the old definition, and the following definition is called for.

Definition 11.3 A sequence of circuits (C_n)_{n=1}^∞ is P-uniform (L-uniform) iff there is a Turing machine M, which works in polynomial time (logarithmic space), that on input 1^n prints a description of C_n on its output tape.

Using this definition we get:

Theorem 11.4 B can be computed by polynomial size P-uniform circuits iff B ∈ P.

Proof: (Outline) First just observe that the circuits described in the above proof are P-uniform. They are in fact L-uniform by the proof of Theorem 7.25. This proves one of the implications in the theorem.

To see the reverse implication, suppose that B is recognized by polynomial size P-uniform circuits. Then on input x a Turing machine can first construct the circuit C_{|x|} and then compute its value on input x. The first part is polynomial time by the definition of P-uniform and the second part is easily seen to be polynomial time.

11.2 NC

We can now deﬁne our main complexity class of parallel computation.

Definition 11.5 A set B is in NC^k iff it can be recognized by a family of L-uniform circuits (C_n)_{n=1}^∞ where C_n is of polynomial size and d(C_n) ≤ O((log n)^k). Furthermore NC = ∪_{k=1}^∞ NC^k.


Remark 11.6 The name NC is short for Nick's Class. It is named after Nick Pippenger, who was one of the first researchers to study this class.

Remark 11.7 Normally one requires even stricter uniformity constraints for NC^1 than L-uniformity. For reasons that go beyond the scope of these notes, this gives a better definition. However, to make life easier we will stick with the above definition.

We can now make an obvious observation.

Theorem 11.8 NC ⊆ P.

Proof: This follows immediately from the deﬁnition of NC and Theorem

11.4.

From a theoretical standpoint NC is considered to be the subset of P which admits ultrafast parallel algorithms (time O((log n)^k)). Some of the algorithms we present will also be efficient in practice and some will not. When we describe how to construct circuits, we will be quite informal and talk in terms of processors doing simple operations. Formally this should of course be replaced by nodes in circuits, but somehow processors seem to go better with the intuition.

Example 11.9 Given two n-bit numbers, compute their sum. This might

look straightforward since we can have one processor which takes care of each

digit. This will be the basic idea, but we have to do something intelligent

with the carries, since if we treat them without thinking, we will need circuits

of linear depth. You see the reason for this if you try to add the binary

numbers 01111111 and 00000001. The critical point is to discover quickly if

you have a carry coming from your right. The process to do this is called

Carry-look-ahead.

We use one processor for each digit of the two numbers. This processor

checks whether that position Generates, Propagates or Stops a carry and

marks the position G, P and S accordingly. We can combine this informa-

tion in a binary tree to see how longer blocks will behave with respect to

carries. For instance a block of length two will generate a carry if it looks

like GG, GP, GS or PG, it will propagate a carry if it looks like PP and it

will stop a carry if it looks like PS, SG, SP or SS. Continuing in this way

we can quickly compute whether certain intervals propagate or stop a carry.

How to do this might best be seen by an example. Suppose the numbers

are 01111011 and 01001010. We get the representation SGPPGSGP and

[Figure 11: Carry look ahead tree, going up]

[Figure 12: Carry look ahead tree, going down]

we build a binary tree (see Figure 11) to find out how longer blocks behave. Now, to see if we have a carry in a given position, we just have to figure out whether the corresponding suffix of the string SGPPGSGP generates a carry. It is quite easy to see how this is done. One way to phrase it formally is the following: suppose you want to know if there is a carry in a given position; start at that position and walk down the tree. Whenever you go right, write down what you see coming in from the left to that same node. Finally, evaluate the string you get. For instance, if you start in position 6 in the given tree, you get the string PG, which evaluates to G, and thus there is a carry in position 6. One can also view this last step as sending the appropriate values down the tree as indicated in Figure 12. By actually building this tree in the circuit we see that we get a circuit of depth O(log n) which computes all the carries, and since once we know the carries the rest is simple, we can conclude that addition belongs to NC^1.
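The carry computation can be phrased with the three-letter alphabet directly. A minimal Python sketch; the combine operation below is associative, so in a circuit the suffix evaluations would be arranged as the balanced tree of depth O(log n) described above, while this sketch evaluates them sequentially for clarity (and drops a possible overflow carry):

    def add_with_carry_lookahead(a, b):
        # a, b are equal-length bit lists, most significant bit first.
        def symbol(x, y):          # classify one position
            return "G" if x + y == 2 else ("P" if x + y == 1 else "S")
        def combine(left, right):  # behavior of two adjacent blocks
            return left if left != "P" else right
        syms = [symbol(x, y) for x, y in zip(a, b)]
        carry_in = []
        for i in range(len(a)):    # evaluate the suffix right of position i
            c = "S"
            for s in reversed(syms[i + 1:]):
                c = combine(s, c)
            carry_in.append(1 if c == "G" else 0)
        return [(x + y + c) % 2 for x, y, c in zip(a, b, carry_in)]

    print(add_with_carry_lookahead([0, 1, 1, 1], [0, 0, 0, 1]))  # [1, 0, 0, 0]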

Example 11.10 Given two n-bit numbers, we want to multiply them. It is not hard to see that this can be reduced to adding together n n-digit numbers (just do the ordinary multiplication algorithm we learned in first grade). Now by the previous example we can add these numbers pairwise in depth O(log n) to obtain n/2 numbers whose sum we want to compute. Adding the numbers pairwise for log n rounds gives us the answer. This gives a circuit of polynomial size and depth O((log n)^2). In fact, multiplication and addition of n numbers can both be done in depth O(log n). We leave this as an exercise.

Example 11.11 Given two n × n matrices, multiply them. Let us suppose the entries are m-bit integers. Suppose the given matrices are A = (a_{ij}) and B = (b_{ij}). Then we want to compute ∑_{j=1}^n a_{ij}b_{jk} for all i and k. We have the following algorithm:

1. Compute all the products a_{ij}b_{jk} for all i, j and k.

2. Compute the sums ∑_{j=1}^n a_{ij}b_{jk} for all i and k.

If we have O(m^2 n^3) processors we can do the first operation in depth O(log m) (by the exercise extending the multiplication example), while the second can be done with O(n^3 m) processors in depth O(log nm) (using the same exercise). Thus the entire computation uses a polynomial number of processors and O(log nm) depth.

The problems that seem to be hardest to give a parallel solution to are problems where the natural sequential algorithms are iterative in nature. Examples of such problems are computing integer GCDs, solving linear equations and computing the depth-first search tree of a graph. Of these, the linear equation problem can be solved in NC, and finding a depth-first search tree is known to be in RNC (Random NC, i.e. circuits of small depth where you allow random inputs and only require that you have a good probability of finding a depth-first search tree), while for integer GCDs no circuits of sublinear depth are known. Just to give an example of something nontrivial, let us give as a last example an algorithm to compute the determinant of a matrix which runs in O((log n)^2) time and uses a polynomial number of processors. We have to assume some facts from linear algebra.

Example 11.12 Given a matrix M, compute its determinant. Let us recall some facts. If λ_i denote the eigenvalues of M, then it is well known that ∏_{i=1}^n λ_i = det(M). The trace of a matrix M (denoted by Tr(M)) is the sum of its diagonal elements, i.e. Tr(M) = ∑_{i=1}^n m_{ii}, and it is well known that Tr(M) = ∑_{i=1}^n λ_i. Let s_k = Tr(M^k), which equals ∑_{i=1}^n λ_i^k since the eigenvalues of M^k are λ_i^k. The s_k are easy to compute in parallel since we have already shown how to compute matrix products, and M^k can be computed by O(log k) matrix products done in sequence. The characteristic polynomial of M is det(λI − M) = λ^n + ∑_{i=1}^n λ^{n−i}c_i = c(λ). It is standard that c_n = det(−M) and c(λ) = ∏_{i=1}^n (λ − λ_i). From this it follows that c_i = ∑_{S:|S|=i} (−1)^i ∏_{j∈S} λ_j, where S ranges over the subsets of {1, 2, ..., n} and |S| is the cardinality of S. Using this one can prove that

    | 1        0        0        0        ...  0 | | c_1 |      | s_1 |
    | s_1      2        0        0        ...  0 | | c_2 |      | s_2 |
    | s_2      s_1      3        0        ...  0 | | c_3 |      | s_3 |
    | s_3      s_2      s_1      4        ...  0 | | c_4 |  = − | s_4 |
    | ...      ...      ...      ...      ...  0 | | ... |      | ... |
    | s_{n−1}  s_{n−2}  s_{n−3}  s_{n−4}  ...  n | | c_n |      | s_n |

Thus all that remains is to prove that we can solve Ax = b where A is a lower-triangular matrix. If we multiply each row by a suitable number we can assume that all the entries on the diagonal of A are unity. Then A can be written as I − B where B is strictly lower-triangular. Now it is easy to check that A^{−1} = ∑_{i=0}^n B^i and thus by some additional matrix multiplications we can compute the inverse of A; hence we can solve for the c_i and find c_n = det(−M). The number of processors is quite bad but still polynomial, and the depth is O((log n)^2).
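The whole determinant algorithm fits in a few lines of sequential Python. This is only a sketch of the arithmetic, of course; the point of the example is that each step (the matrix powers and the triangular solve) was shown above to be computable in depth O((log n)^2).

    from fractions import Fraction

    def determinant_by_traces(M):
        # Compute s_k = Tr(M^k), solve the lower-triangular system above
        # for the coefficients c_i, and read off c_n = det(-M).
        n = len(M)
        mul = lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(n))
                             for j in range(n)] for i in range(n)]
        P, s = M, []
        for _ in range(n):
            s.append(sum(P[i][i] for i in range(n)))  # trace of M^k
            P = mul(P, M)
        c = []
        for i in range(n):  # row i of the system: i*c_i = -s_i - sum(...)
            rhs = -Fraction(s[i]) - sum(Fraction(s[i - 1 - j]) * c[j]
                                        for j in range(i))
            c.append(rhs / (i + 1))
        return c[-1] * (-1) ** n  # det(M) = (-1)^n det(-M)

    print(determinant_by_traces([[2, 1], [1, 2]]))  # 3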

Once we can compute determinants we can do almost all operations in

linear algebra. The drawback in practice is that we get fairly large circuits.

11.3 Parallel time vs sequential space

A couple of the examples of problems that we could do in NC also appeared as problems doable in small space. This is no coincidence; in fact, sequential space and parallel time are quite closely related as soon as one does not put any other restrictions on the computation.

Theorem 11.13 Suppose S(n) ≥ log n for all n. If B can be recognized in space O(S(n)), then it can be done by circuits of depth O(S(n)^2).

Proof: Suppose B is recognized by M_B which runs in space O(S(n)). We will use one processor p_C for each possible configuration C of M_B. There are 2^{O(S(n))} configurations and thus we will use many processors, but this is of no concern to us for the moment.

At stage i of the algorithm, p_C finds out which configuration C would change to in 2^i computation steps. This is easy for i = 0 and in general it is done as follows. After stage i − 1, p_C already knows which configuration C′, C transforms to in 2^{i−1} steps. On the other hand, p_{C′} knows which configuration C′ transforms to in 2^{i−1} steps, and this is the desired answer.

Since M_B runs in time 2^{O(S(n))}, in O(S(n)) stages the processor corresponding to the initial configuration will know the result of the computation. Thus the critical parameter is what depth is required to do one stage.

A single stage can be done by having a binary tree of depth O(S(n)) which connects each processor to each other processor and selects the processor corresponding to the current information. We leave the details to the reader.

To sum up: we have O(S(n)) stages where each stage can be done in depth O(S(n)). This gives total depth O(S(n)^2) and thus we have proved the theorem.
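The squaring idea in the proof can be illustrated on an explicit configuration graph. A minimal Python sketch, with succ an assumed successor table of a (deterministic) small-space machine:

    def reachability_by_squaring(succ, start, accept, stages):
        # After stage i, jump[C] is the configuration reached from C in
        # 2^i steps; each stage composes the table with itself.
        jump = dict(succ)                       # stage 0: 2^0 = 1 step
        for _ in range(stages):
            jump = {C: jump[jump[C]] for C in jump}
        return jump[start] == accept

    # Toy machine with four configurations: 0 -> 1 -> 2 -> 3 -> 3.
    succ = {0: 1, 1: 2, 2: 3, 3: 3}
    print(reachability_by_squaring(succ, 0, 3, 2))  # True: 3 reached in 4 steps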

Corollary 11.14 L ⊆ NC^2.

Proof: By Theorem 11.13 we know that L can be done by circuits of depth O((log n)^2). By inspection of the proof we conclude that the circuits are of polynomial size.

There is also a near converse of Theorem 11.13. Let S-uniform denote a family of circuits that can be constructed by a Turing machine that runs in space S.

Theorem 11.15 Suppose S(n) ≥ log n for all n. If B can be recognized by S-uniform circuits of depth O(S(n)), then B can be recognized in space O(S(n)).

Proof: The idea of the proof is to do a depth first search of the circuit for B.

By duplicating nodes we can assume that the circuit is actually a tree. (One has to check that this does not change the condition of S-uniformity, but it does not.) We evaluate the circuit in a depth first search manner. At each point in time we maintain a path in the circuit from the output to an input which has the following property: whenever the path goes to the left we require nothing extra, while when it turns right we require that we have marked the value of the left input to that node. Also, we keep track of what kind of operation we have at each node of the path. We start with the path always going to the left, and it is now easy to see that if we always move to the next input to the right it is easy to update the path. This might best be seen by an example. Suppose our path at one point in time is given by Figure 13.

[Figure 13: The path at one point in time]

[Figure 14: The path at the next point in time]

The active path consists of the shaded nodes. Assuming that x_1 = 0, at the next time-step a possible path is given by Figure 14. The path is of length O(S(n)) and thus can be represented in this space. To update the path we need to be able to find out what the circuit looks like locally, but this can be done in space O(S(n)) by the uniformity condition. Thus we have completed the proof.

Using S(n) = log n we get the following immediate corollary:

Corollary 11.16 NC^1 ⊆ L.

With this close connection between L and NC the following theorem is not surprising:

Theorem 11.17 If A is P-complete then

P = NC ⇔ A ∈ NC.

Proof: The proof is more or less the same as the proofs of other theorems of this type, but let us give it anyway. If P = NC then clearly A ∈ NC. On the other hand, if A ∈ NC then we have to construct NC-circuits for any language in P. Given any B ∈ P we know by the definition of P-complete that there is a function f computable in L such that

x ∈ B ⇔ f(x) ∈ A.

However, we know by Corollary 11.14 that f can also be computed in NC^2. Combining this circuit with the NC-circuit for A gives an NC-circuit for B.

As a final comment let us note that for one of the most famous problems that seem hard to do in parallel, namely integer GCDs, it is not known whether the problem is P-complete.


12 Relativized computation

As a tool in understanding computation, one particular way of augmenting the power of a computation has been studied extensively. For definiteness assume that we use the Turing machine model of computation. Let A be a fixed set and give the machine an extra tape, called the query tape. On this tape the machine can write a string x and then enter a special state called the query state. In one time-step the query tape now changes content. The new value will be 1 if x ∈ A and 0 otherwise. Thus the machine is allowed to ask questions about the set A and very inexpensively obtain correct answers. The set A, which is called the oracle set, should be thought of as a difficult set, since otherwise the machine could have answered the questions itself at only a slightly higher cost. The computation is said to take place relative to the oracle A (and hence the title relativized computation). A Turing machine M with an oracle A is usually denoted M^A to avoid confusion.

Now it is natural to define P^A as the set of languages that can be recognized in polynomial time by Turing machines with oracle A. In a similar way all the other complexity classes can be defined. One word of caution: we will count the part of the query tape used as part of the work-tape of the machine, and hence this should be bounded when we are looking at space bounded classes. This definition is not standard when dealing with L and NL, but we will not consider those classes here. Instead we will only consider P^A, NP^A, BPP^A and PSPACE^A.

The reason this concept is interesting is that almost all known proofs remain true if we allow all machines involved in the proof to have access to the same oracle. In particular this is the case for all proofs given in these notes up to this point. Let us state some theorems that follow (the reader is encouraged to go back and check the proofs).

Theorem 12.1 For all oracles A,

P^A ⊆ NP^A ⊆ PSPACE^A.

Theorem 12.2 For all oracles A,

P^A ⊆ BPP^A ⊆ PSPACE^A.

The idea is that if P ⊂ NP (i.e. if the inclusion is strict) had an “easy” proof then P^A ⊂ NP^A would be true for all oracles A. However, this is not the case:

Theorem 12.3 If A is a PSPACE-complete set then

P^A = NP^A = BPP^A = PSPACE^A = PSPACE.

Proof: It is sufficient to prove that PSPACE ⊆ P^A and that PSPACE^A ⊆ PSPACE.

For the first part, let B be any language in PSPACE. Since A is PSPACE-complete we have B ≤_p A, i.e. there is a polynomial time computable function f such that x ∈ B ⇔ f(x) ∈ A. But this makes B easy to recognize for a machine with oracle A. On input x it just computes f(x), writes this on the oracle tape, reads the answer from the oracle and outputs this as its own answer. Thus B ∈ P^A and we conclude that PSPACE ⊆ P^A.

For the second part, suppose we are given a machine M^A that recognizes some language in PSPACE^A. We have to convert this into an ordinary PSPACE-machine which recognizes the same language. Essentially we have to get rid of A. But since A is in PSPACE this is not too difficult. Build a subroutine S which takes an input x and outputs 1 if x ∈ A and 0 otherwise. This subroutine can be made to run in polynomial space. Now modify M^A such that instead of entering the query-state it runs S. By definition the result is the same, and it is easy to see that this modified machine also runs in polynomial space.

Theorem 12.3 rules out the possibility of an easy proof that P ≠ NP. This might raise in a more serious way (at least so it seems) the possibility that P = NP. However, oracles will not support this either:

Theorem 12.4 There is an oracle B such that P^B ≠ NP^B.

Proof: The oracle B will not be as natural as the oracle A given above and we will construct it piece by piece. Together with B we will also define a language L(B) which for all B will be in NP^B, but we will cleverly construct B such that it is not in P^B.

Definition 12.5 Let L(B) be a language which only contains strings consisting solely of 1's (such a language is called a unary language). The string of n 1's is in L(B) if and only if there is at least one string x of length n such that x ∈ B.

First observe that for any oracle B, L(B) is in NP^B. Formally L(B) is recognized by the following algorithm.

1. If there is a '0' in the input, reject and stop.

2. Nondeterministically write down a query to the oracle of the same length as the input. If the oracle answers 1, accept; otherwise reject.

Next we will have to deﬁne B such that L(B) is not in P

B

. Let M

B

i

be an enumeration of all oracle machines that run in polynomial time. This

is a slightly subtle point since whether an oracle Turing machine runs in

polynomial time depends on the oracle and we have not yet decided what

the oracle should be. This is no real problem and we get around it as follows:

Assume that M

B

i

is an enumeration of all Turing machines which has the

property that each machine appears an inﬁnite number of times . Equip M

B

i

with a stop-watch such that if it has not halted in i|x|

i

steps on input x, it

automatically halts and outputs 1. Now all sets recognized by a polynomial

time machine is recognized by some M

B

i

(we need to repeat each machine

inﬁnitely many times since we do not know for which i it is true that it runs

in time in

i

. We will now go through an inﬁnite number of stages. In stage

i we determine a little bit more of the oracle B to make sure that M

B

i

does

not recognize L(B). Let a string be undetermined if we have not yet decided

whether it will be in B.

n_0 = 1
for i = 1 to ∞ do
    Make n_i the smallest number bigger than n_{i−1} such that 2^{n_i} > i·n_i^i
    and such that no string of length n_i has been determined.
    Run M^B_i on input 1^{n_i}. Whenever the machine asks about an
    undetermined string, fix that string not to be in B.
    If M^B_i accepts the input then
        Make sure that no string of length n_i is in the oracle set.
    else
        Put one undetermined string of length n_i in the oracle set.
    endif
next i
Fix all undetermined strings not to be in B.

For the constructed B, M^B_i will not accept L(B), since it makes an error on 1^{n_i}. Hence we need only check that the construction is not contradictory. The only nonobvious point is that, when needed, there exists an undetermined string of length n_i. However, since M^B_i on input 1^{n_i} only runs for time i·n_i^i it can only ask this many questions. Thus only this many new strings can be determined during stage i, and since there were no determined strings of length n_i when stage i started and 2^{n_i} > i·n_i^i, there is an undetermined string that can be put into B.

It turns out that all the other open questions can also be relativized in either direction. Let us next take NP versus PSPACE.

Theorem 12.6 There is an oracle C such that NP^C ≠ PSPACE^C.

Proof: This proof will very much follow the same lines as the last proof. Let us start by defining the language.

Definition 12.7 Let L^⊕(C) be a unary language such that 1^n ∈ L^⊕(C) iff there is an odd number of strings of length n in C.

First observe that for any oracle C, L^⊕(C) is in PSPACE^C. The algorithm just asks all questions of length n and keeps a counter to compute the parity of the number of strings in the oracle. We will now construct C so as to make sure L^⊕(C) is not in NP^C.

Using the same argument as in the last proof there is an enumeration N^C_1, N^C_2, ... of all polynomial time nondeterministic oracle machines, where N^C_i runs in time at most in^i. We now construct C in stages:

n_0 = 1
for i = 1 to ∞ do
    Make n_i the smallest number bigger than n_{i−1} such that 2^{n_i} > i·n_i^i
    and such that no string of length n_i has been determined.
    Consider N^C_i on input 1^{n_i}. If there is some setting of the
    undetermined strings that makes N^C_i accept then
        Make such a setting, by fixing at most i·n_i^i strings, and fix the
        remaining strings of length n_i to make sure that an even number
        of strings of length n_i are in C.
    else
        Fix strings to make sure that an odd number of strings of length
        n_i are in C.
    endif
next i
Fix all undetermined strings not to be in C.


Again, by construction, for this oracle L^⊕(C) is not in NP^C. The construction can be seen to be correct by more or less the same reasoning as in the last construction. Please observe that if N^C_i accepts an input then it is sufficient to fix the answers to the questions on one accepting computation path, and hence it is sufficient to fix i·n_i^i strings in the first case.

Next we have:

Theorem 12.8 There is an oracle D such that BPP^D ⊄ NP^D.

Proof: We proceed as usual.

Definition 12.9 Let L^maj(D) be a unary language such that 1^n ∈ L^maj(D) iff a majority of the strings of length n are in D.

This language is not always in BPP^D. However, if we make sure that for each n at least 60% or at most 40% of the strings are in the oracle set, then a simple sampling algorithm will work. This extra condition means that we have to be slightly careful in the oracle construction, but there is no real problem. We again give an algorithm to determine the oracle:

n_0 = 1
for i = 1 to ∞ do
    Make n_i the smallest number bigger than n_{i−1} such that
    2^{n_i} > 10·i·n_i^i and such that no string of length n_i has been
    determined.
    Fix all undetermined strings of length less than n_i not to be in D.
    Consider N^D_i on input 1^{n_i}. If there is some setting of the
    undetermined strings that makes N^D_i accept then
        Make such a setting, by fixing at most i·n_i^i strings, and fix the
        remaining strings of length n_i not to be in D.
    else
        Put all undetermined strings of length n_i into D.
    endif
next i

The verification that this construction is correct is similar to the previous verifications. The reason for putting all undetermined strings of length less than n_i outside the oracle is to make sure that also for n's which are not chosen to be one of the n_i's it is true that the number of strings of length n in the oracle is not close to half of all strings of length n. The condition that 2^{n_i} > 10·i·n_i^i makes sure that this is true for all n with n_i ≤ n < n_{i+1}.


Our last oracle construction will be:

Theorem 12.10 There is an oracle E such that NP^E ⊄ BPP^E.

Proof: We will use the same language as we used in the proof that there is an oracle B such that NP^B ≠ P^B. Remember that L(E) is a unary language such that 1^n ∈ L(E) iff there is some string of length n in E. We now construct E to make sure L(E) is not in BPP^E. This time let M^E_i be an enumeration of probabilistic Turing machines. Here there is a slight problem in that M^E_i might not define a correct machine, in that the probability of acceptance is not bounded away from 1/2 for some inputs. However, this is only to our advantage, since it means that this machine will not accept any BPP-language, and we do not have to worry that it might accept L(E). We now construct E in stages as follows:

n_0 = 1
for i = 1 to ∞ do
    Make n_i the smallest number bigger than n_{i−1} such that
    2^{n_i} > 10·i·n_i^i and such that no string of length n_i has been
    determined.
    Run M^E_i on input 1^{n_i}. Whenever the machine asks about a string
    which is not determined, pretend that this string is not in E. Let p
    be the probability that M^E_i accepts under these conditions.
    If p ≥ 1/2 then
        Fix all strings M^E_i could possibly ask about not to be in E. Also
        fix all other strings of length n_i not to be in E.
    else
        Find one string of length n_i such that the probability that this
        string is asked about by M^E_i is at most 1/10 and put it into E.
        Fix all other strings M^E_i might possibly look at not to be in E.
    endif
next i
Fix all undetermined strings not to be in E.

Here there are some details to check. If p ≥ 1/2 then p is actually the correct probability of acceptance, since we eventually fix all the strings in question not to appear in E. In this case 1^{n_i} ∉ L(E) while the probability that M_i^E accepts 1^{n_i} is at least 1/2, and thus M_i^E does not recognize L(E) in the BPP sense. On the other hand, if p < 1/2 then the final oracle does not agree with the simulation. However, since the probability of finding out the difference is bounded by 1/10, the acceptance probability remains below 0.6. Since in this case 1^{n_i} ∈ L(E), also in this case M_i^E fails to recognize L(E).

We also need to check that there is a suitable string which is asked with probability at most 1/10. Since the running time of M_i^E on input 1^{n_i} is bounded by in_i^i, it does not ask more than this number of questions. If PR(x) is the probability that the string x is asked, then

   Σ_{|x|=n_i} PR(x) ≤ in_i^i

and since 2^{n_i} > 10 · in_i^i there is some x with PR(x) < 1/10. The proof is complete.

We have now established that all the unknown inclusion properties of our main complexity classes can be relativized in different directions. The only information this gives is that the true inclusions cannot be proved with methods that relativize. In principle, methods that do not look very closely at the details of the computation will relativize; this is in particular the case when the computation is treated as a black box which just takes an input and then produces an output (after a certain number of steps). Thus, the main lesson to learn from this section is that to establish the true relations among our main complexity classes, we have to look in a very detailed way at computation.

There are a few results in complexity theory which do not relativize. One of them (IP = PSPACE) is given in Chapter 13.


13 Interactive proofs

One motivation for NP is to capture the notion of “efficient provability”. If A ∈ NP and x ∈ A then there is a short proof of this fact (the nondeterministic choices of the algorithm which recognizes A) which can be verified efficiently. By the definition of NP all proofs are correct, and an all powerful prover can always convince a polynomial time bounded verifier of a correct NP-statement. As we did for ordinary computation, we can introduce randomness and relax the requirements. A proof will be a discussion (interaction) between an all powerful prover and a probabilistic polynomial time verifier. Before we make a formal definition let us give an example.

Example 13.1 We are given two graphs G_1 and G_2, both on n vertices. G_1 and G_2 are said to be isomorphic iff there is a permutation π of the vertices such that (i, j) is an edge in G_1 iff (π(i), π(j)) is an edge in G_2. In other words, there is a relabeling of the vertices that makes the two graphs identical. This problem is in NP since one can just guess the permutation. On the other hand it is not known to be in P (or co-NP), nor is it known to be NP-complete. Now consider the following protocol for proving that two graphs are not isomorphic.

For m = 1 to k:
   The verifier chooses a random i (1 or 2) and sends a graph H, which is
   a random permutation of G_i, to the prover.
   The prover responds j.
   The verifier rejects and halts if i ≠ j.
next m
The verifier accepts.

In other words, the prover tries to guess which graph the verifier started with, and the verifier accepts if he always guesses correctly. Now suppose that G_1 and G_2 are not isomorphic. Then H is isomorphic only to G_i and the all powerful prover can tell the value of i and always answer correctly. On the other hand, if G_1 and G_2 are isomorphic then, independently of the value of i, the graph H is a random graph isomorphic to both G_1 and G_2. Thus there is no way the prover can distinguish the two cases, and if he tries to answer he will fail each time with probability 1/2. Thus the probability that he can incorrectly make the verifier accept is 2^{−k}, which is very small if k is large. Thus, for all practical purposes, if k = 100 and the prover always answers correctly, the graphs are non-isomorphic.
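The protocol of Example 13.1 is simple enough to simulate. The sketch below (for tiny graphs only) models the all powerful prover by a brute-force isomorphism test; this is of course just an illustration, since in the real protocol the prover's power is unbounded.

   import random
   from itertools import permutations

   def permute(graph, pi):
       # Apply the vertex relabeling pi to an edge set, normalizing edges.
       return frozenset((min(pi[u], pi[v]), max(pi[u], pi[v]))
                        for u, v in graph)

   def prover(g1, g2, n, h):
       # The all powerful prover: guess which graph h was generated from.
       for pi in permutations(range(n)):
           if permute(g1, pi) == h:
               return 1
       return 2

   def not_isomorphic_protocol(g1, g2, n, k=20):
       # True iff the verifier accepts "G_1 and G_2 are not isomorphic".
       for _ in range(k):
           i = random.choice([1, 2])
           pi = list(range(n))
           random.shuffle(pi)
           h = permute(g1 if i == 1 else g2, pi)
           if prover(g1, g2, n, h) != i:
               return False             # reject and halt
       return True

   # A path and a star on 4 vertices are not isomorphic, so the honest
   # prover always answers correctly and the verifier accepts.
   path = frozenset({(0, 1), (1, 2), (2, 3)})
   star = frozenset({(0, 1), (0, 2), (0, 3)})
   print(not_isomorphic_protocol(path, star, 4))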

A discussion (or interaction) of the type described in the example will be called an interactive proof. Let us formalize the desired properties.

Deﬁnition 13.2 A language A admits an interactive proof iﬀ there is an

interaction between a probabilistic polynomial time veriﬁer V and an all

powerful prover P such that:

1. (Completeness) If x ∈ A then the probability (over V ’s random choices)

that V accepts is at least 2/3.

2. (Soundness) If x ∉ A then no matter what the prover does, the probability (over V ’s random choices) that V accepts is at most 1/3.

Deﬁnition 13.3 The complexity class IP is the set of languages that admit

an interactive proof.

The number of exchanges of messages might depend on the length of the input, but since we want the entire process to run in polynomial time, we limit the number of exchanges to be polynomial in the length of the input.

Interactive proofs were defined by Goldwasser, Micali and Rackoff in 1985. A different definition, later proved to give the same class of languages, was given independently by Babai around the same time. Interactive proofs attracted a lot of attention at the end of the 1980s and we will only touch on the highlights of this theory. Let us first state an analogue of Theorem 9.5.

Theorem 13.4 If A ∈ IP then there is an interaction between a probabilistic polynomial time verifier V and an all powerful prover P such that:

1. If x ∈ A then the probability (over V ’s random choices) that V accepts is at least 1 − 2^{−|x|}.

2. If x ∉ A then no matter what the prover does, the probability (over V ’s random choices) that V accepts is at most 2^{−|x|}.

Proof: (Outline) The proof is very similar to the proof of Theorem 9.5. We just run the protocol many times and make a majority decision in the end. We leave the details to the reader.
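A minimal sketch of this amplification, treating one execution of the protocol as a biased coin; the completeness bound 2/3 is the only assumption used:

   import random

   def amplify(run_protocol_once, trials):
       # Run the interactive proof `trials` times independently and
       # accept iff a majority of the runs accepted.
       accepts = sum(1 for _ in range(trials) if run_protocol_once())
       return 2 * accepts > trials

   # Model a protocol run on x ∈ A which accepts with probability 2/3;
   # the majority vote then errs with probability 2^{-Ω(trials)}.
   one_run = lambda: random.random() < 2 / 3
   print(amplify(one_run, 301))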


A far less obvious fact is that one can in fact obtain perfect completeness (i.e. when x ∈ A then the probability that V accepts is 1). Proving this would take us too far afield and we omit this theorem.

For the first couple of years, one of the main drawbacks of the theory of interactive proofs was the small number of languages not in NP that were known to admit interactive proofs. This changed dramatically in December 1989 when work of Nisan, Fortnow, Karloff, Lund and finally Shamir led to the following remarkable theorem:

Theorem 13.5 IP = PSPACE.

Proof: (Outline) The fact that IP ⊆ PSPACE was established quite early in the theory of interactive proofs. A formal proof is slightly cumbersome (but not really hard) and hence we only give an outline. Suppose A ∈ IP and the interaction that recognizes A contains k pairs of messages. We denote the ith prover message by p_i and the ith verifier message by v_i, and assume that the prover sends the first message in each round. Now let α be any partial conversation consisting of the first s messages for some s, and let Pr(x, α) be the probability that V accepts, given that the initial conversation is α, that P plays optimally in the future, and that V follows his protocol. Our goal is to compute Pr(x, e) where e is the empty string, since this number is at least 2/3 when x ∈ A and at most 1/3 otherwise. Now if the next message is a verifier message then

   Pr(x, α) = E(Pr(x, αv_i))

where E is the expected value over the verifier message v_i. On the other hand, if the next message is a prover message then

   Pr(x, α) = max Pr(x, αp_i)

where the maximum is taken over all messages p_i. Finally, when α is a full conversation then Pr(x, α) is 1 iff the verifier would have accepted after the conversation α, and 0 otherwise. By assumption this can be computed in polynomial time. Using these equations it is easy to give an algorithm that proceeds in a depth first search fashion and evaluates Pr(x, e) in polynomial space.
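The depth first evaluation can be sketched as the following recursion. This is only a schematic sketch: messages range over an abstract alphabet, and the verifier's next message is modeled as uniformly random, which hides its dependence on the verifier's internal coin flips.

   from fractions import Fraction

   def pr_accept(conv, remaining, alphabet, prover_turn, accepts):
       # Pr(x, conv): the optimal prover maximizes, the honest verifier's
       # message is averaged.  The recursion depth equals the number of
       # remaining messages, so the evaluation uses small space.
       if remaining == 0:
           return Fraction(int(accepts(conv)))
       vals = [pr_accept(conv + (m,), remaining - 1, alphabet,
                         not prover_turn, accepts) for m in alphabet]
       return max(vals) if prover_turn else sum(vals) / len(vals)

   # Toy interaction: the prover sends one bit, the verifier then sends
   # one random bit, and accepts iff the bits agree; the prover can only
   # guess, so the acceptance probability is 1/2.
   accepts = lambda conv: conv[0] == conv[1]
   print(pr_accept((), 2, [0, 1], True, accepts))   # 1/2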

This inclusion was no surprise since PSPACE is a big complexity class. It was the reverse inclusion that was the big surprise.


To prove that PSPACE ⊆ IP we need “only” give an interactive proof recognizing TQBF, which was proved PSPACE-complete in Theorem 7.17. We only give an outline of the argument.

In fact, we will use that determining the truth of the special type of quantified Boolean formulas constructed in the proof of Theorem 7.17 is PSPACE-complete. Let us recall part of this proof. We wanted to construct a formula GET(C_1, C_2, k) that said that the Turing machine could get from configuration C_1 to configuration C_2 in 2^k steps. This formula was constructed recursively using:

   GET(C_1, C_2, k, x) = ∃C ∀(A, B) ∈ {(C_1, C), (C, C_2)} GET(A, B, k − 1, x).

Now encode the ∀ quantifier as a Boolean variable x_1 and rewrite the formula to the following:

   GET(C_1, C_2, k, x) = ∃C ∀x_1 ∃(A, B)
      ((x_1 → ((A = C_1) ∧ (B = C))) ∧ (x̄_1 → ((A = C) ∧ (B = C_2))))
      ∧ GET(A, B, k − 1, x).

Now assume that each configuration consists of n Boolean variables and that initially k = n. In reality they are both polynomial in n, but this is of no importance. It is not difficult to write (x_1 → ((A = C_1) ∧ (B = C))) ∧ (x̄_1 → ((A = C) ∧ (B = C_2))) as a CNF-formula with O(n) clauses of constant size. Furthermore, note that the variables describing C_1 and C_2 do not appear in GET(A, B, k − 1). When we iterate the above construction it will be true that no variable in any quantifier is used inside more than 3 other quantifiers. Let us also note that GET(Y, Z, 0) can be expressed by a CNF-formula with O(n) clauses of constant size. To summarize the discussion, the formula has the following properties:

• It has 3n quantifiers which appear in blocks of the form ∃∀∃, where the ∃ quantifiers quantify over n variables and 2n variables respectively and the ∀ quantifies over one variable.

• Each variable is used only inside at most one block of following quantifiers.

• All formulas between quantifiers and after the last quantifier are CNF-formulas with O(n) clauses of constant size.

Now take this formula and replace every ∃ by Σ and every ∀ by Π. Here the sums and products extend over all values of the variables that were originally in the scope of the quantifier. Also replace ∧ by × and ∨ by +. Finally, for a variable x, replace x̄ by 1 − x. With these replacements the formula is turned into an expression which evaluates to an integer. It is not difficult to see that this integer is 0 iff the original formula was false (prove this by induction). We will show how the prover can convince the verifier with high probability that this integer I is not 0.

First observe that I is bounded by 2^{O(n2^n)}. This is true since the value of the final CNF-formula is at most c^n, and each Σ only multiplies the value by 2^n while each Π only squares the value (remember that there is only one variable in each Π). The following lemma follows from the prime number theorem (the reader is asked to take it on faith).

Lemma 13.6 For c < 1 and x > X_c, the product of all primes less than x is ≥ e^{cx}, where e ≈ 2.718 is the base of the natural logarithm.

This lemma implies that there is some prime p, n^4 ≤ p ≤ O(2^n), such that I ≢ 0 modulo p. To see this, observe that if I > 0 is divisible by a set of primes then it is at least the product of those primes. The prover starts by giving this p together with I (mod p) (which is not 0).
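As a toy illustration of the prover's first move (using the value I = 20 from Example 13.8 below, and ignoring the lower bound n^4 ≤ p), finding a suitable prime is just a search:

   def first_nondividing_prime(I, lo=2):
       # Find the smallest prime p >= lo with I not congruent to 0 mod p.
       def is_prime(m):
           return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))
       p = lo
       while not (is_prime(p) and I % p != 0):
           p += 1
       return p, I % p

   print(first_nondividing_prime(20))   # (3, 2): the prover could send p = 3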

Remark 13.7 In fact if one is more careful one can make I = 1 when the

formula is true. This implies that one can use a small prime. This will

make the proof slightly more eﬃcient, but this is of no major concern for

the moment.

Now consider the outermost quantified variable. Let us call it x_1 and suppose it is part of an ∃ quantifier (i.e. we are now summing over its two values). Keep this variable free and evaluate the entire expression mod p with its sums and products. Naturally the result is a polynomial P(x_1), and by the conditions on the formula it is of degree O(n). Here we need both that the intermediate pieces of the formula are simple CNF-formulas and that the usage of each variable is very limited. The prover now gives this formal polynomial (mod p) to V. This can be done since there are O(n) coefficients, each of which can be specified with O(n) bits. The verifier verifies that P(0) + P(1) ≡ I (mod p), and responds with a random integer n_1 chosen among 1, 2, . . . , p − 1. The task for the prover is now to prove that P(n_1) is the value of the algebraic expression when n_1 is substituted for x_1. The resulting algebraic expression has one quantified variable less, and we can now attack the next variable. Once all the variables have been eliminated, the verifier can evaluate the remaining polynomial himself; if it equals the value claimed by the prover he accepts, and otherwise he rejects.

Let us sketch why this protocol is correct. When the formula is true there are really no complications, since the prover is all the time claiming correct statements, and thus the verifier will accept with probability 1. Note that there is really no difference between the ∀-variables and the ∃-variables; we only need the assumption on the structure of the formula to make the degree of the polynomial P small.

Suppose on the other hand that the formula is false. In particular I = 0, so the first value claimed by the prover for I (mod p) is incorrect, and hence also the first polynomial P is not correct (since it takes an incorrect value for either 0 or 1). Suppose the true polynomial is Q. Let us say that n_1 is lucky for the prover if P(n_1) ≡ Q(n_1) (mod p). If the prover is lucky once then from that point on he is claiming correct statements, and thus he will be able to convince the verifier. On the other hand, if he is never lucky then he will be forced to continue lying and the verifier will expose him in the end. Since P − Q is a nonzero polynomial of degree O(n), it has at most O(n) zeroes. This implies that the probability that the prover is lucky at a single point is O(n/p) ≤ O(n^{−3}). Since there are only O(n^2) variables, the probability that he is ever lucky is O(n^{−1}). Thus with probability 1 − O(n^{−1}) the verifier will reject, and the protocol is correct.

To give a little perspective on this proof, let us give an example to show how it works.

Example 13.8 For simplicity let us work with a formula on normal TQBF-CNF form and in particular, consider

   ∃x_1 ∀x_2 ∃x_3 ∀x_4 (x_1 ∨ x_2 ∨ x_3) ∧ (x̄_1 ∨ x̄_4).

This formula is true since if we put x_1 = 0 and x_3 = 1 both clauses are satisfied; it does not matter what happens with the other variables. The formula is turned into the following arithmetical expression:

   Σ_{x_1=0}^{1} Π_{x_2=0}^{1} Σ_{x_3=0}^{1} Π_{x_4=0}^{1} (x_1 + x_2 + x_3)(2 − x_1 − x_4)

This is just an integer (in fact 20). A proof would go like the following.


1. The prover chooses the prime 7 (in reality it should be larger, but we are only trying to illustrate the procedure). He claims that the expression is 6 modulo 7 and in fact that

      Π_{x_2=0}^{1} Σ_{x_3=0}^{1} Π_{x_4=0}^{1} (x_1 + x_2 + x_3)(2 − x_1 − x_4)

   as a function of x_1 is

      P_1(x_1) = (2x_1^2 + 2x_1 + 1)(2x_1^2 + 6x_1 + 5)(1 − x_1)^2(2 − x_1)^2.

   (Normally the prover would give these polynomials in a dense representation, but this form is more convenient for hand calculation.)

2. The verifier checks that P_1(0) + P_1(1) ≡ 6 modulo 7 (in the future we reduce everything modulo 7 without saying so). Indeed P_1(0) = 20 ≡ 6 while P_1(1) = 0. He now chooses a random value for x_1 (in our case x_1 = 3) and wants to be convinced that

      Π_{x_2=0}^{1} Σ_{x_3=0}^{1} Π_{x_4=0}^{1} (3 + x_2 + x_3)(6 − x_4) ≡ P_1(3) = 25 × 41 × 4 × 1 ≡ 5.

3. The prover now claims that

      Σ_{x_3=0}^{1} Π_{x_4=0}^{1} (3 + x_2 + x_3)(6 − x_4)

   as a function of x_2 is P_2(x_2) = 1 + 4x_2^2.

4. The verifier checks that P_2(0)P_2(1) ≡ 5 and randomly chooses x_2 = 5, asking to be convinced that

      Σ_{x_3=0}^{1} Π_{x_4=0}^{1} (1 + x_3)(6 − x_4) ≡ P_2(5) ≡ 3.

5. The prover claims that

      Π_{x_4=0}^{1} (1 + x_3)(6 − x_4)

   as a function of x_3 is P_3(x_3) = 2 + 4x_3 + 2x_3^2.


6. The verifier checks that P_3(0) + P_3(1) ≡ 3 and then randomly chooses x_3 = 2 and wants to be convinced that

      Π_{x_4=0}^{1} 3(6 − x_4) ≡ P_3(2) ≡ 4.

   This he can do by himself, and he accepts the input since 18 × 15 is indeed 4 modulo 7.
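All the arithmetic in this example can be checked mechanically. The short script below evaluates the arithmetized expression by brute force and replays the verifier's checks modulo 7; the polynomials P_1, P_2 and P_3 are exactly the ones claimed above.

   p = 7

   def term(x1, x2, x3, x4):
       # The arithmetization of (x1 ∨ x2 ∨ x3) ∧ (x̄1 ∨ x̄4).
       return (x1 + x2 + x3) * (2 - x1 - x4)

   def prod_x4(x1, x2, x3): return term(x1, x2, x3, 0) * term(x1, x2, x3, 1)
   def sum_x3(x1, x2):      return prod_x4(x1, x2, 0) + prod_x4(x1, x2, 1)
   def prod_x2(x1):         return sum_x3(x1, 0) * sum_x3(x1, 1)

   I = prod_x2(0) + prod_x2(1)
   assert I == 20 and I % p == 6            # the value of the expression

   P1 = lambda x: ((2*x*x + 2*x + 1) * (2*x*x + 6*x + 5)
                   * (1 - x)**2 * (2 - x)**2)
   P2 = lambda x: 1 + 4*x*x
   P3 = lambda x: 2 + 4*x + 2*x*x

   assert (P1(0) + P1(1)) % p == 6          # step 2: sum check for x1
   assert prod_x2(3) % p == P1(3) % p == 5  # consistency at the random x1 = 3
   assert (P2(0) * P2(1)) % p == P1(3) % p  # step 4: product check for x2
   assert P2(5) % p == 3                    # claimed value at x2 = 5
   assert (P3(0) + P3(1)) % p == P2(5) % p  # step 6: sum check for x3
   assert P3(2) % p == 4                    # claimed value at x3 = 2
   assert (3 * 6 * 3 * 5) % p == 4          # final evaluation: 18 × 15 ≡ 4
   print("all checks pass")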

As mentioned before, the above proof does not relativize (the inclusion IP ⊆ PSPACE does relativize, but not the second part). It is not difficult to construct an oracle A such that IP^A ⊊ PSPACE^A. The reason that the proof does not relativize is that if we allow oracle questions, then the condition “C_1 is the configuration that follows C_2” cannot be described by a low degree polynomial.

This proof which does not relativize gives some hope of attacking the NP vs P question. However, it is still true that no strict inclusion that does not relativize has been proved for any complexity class that includes NC^1.


p) which is true iﬀ p is the integer that describes the content of cell i at time j. j. t. A computation of Mx that runs in time t never uses more than t tape cells and thus such a computation can be described by the content of t2 cells (i. The interested reader can consult any standard text in recursion theory. The second problem we will consider is number theoretic statements. y. p3 .e. i and i + 1 at time j then q is 24 . Theorem 2.e. We leave the details to the reader. This time we will let an enourmous integer z code the computation. but have a much more complicated structure. t cells each at t diﬀerent points in time). Suppose that each cell has at most S ≤ 2r states. The state of each cell will be given by a certain number of bits in the binary expansion of z. This amounts to extracting the r bits of z which are in position starting at (it + j)r. To check that such a formula exists requires a fair amount of detailed reasoning and let us just sketch how to construct it. Remark 2. p2 . variables and usual arithmetical operations. given a number theoretic statement is it false or true? One particular statement people have been interested in for a long time (which supposedly was proved true in 1993) is Fermat’s last theorem. which can be written as follows ∀n > 2 ∀x. Proof: (Outline) Again we will prove that we can reduce the halting problem to the given problem. In general a number theoretic statement involves the quantiﬁers ∀ and ∃. To prove this would lead us to far into recursion theory. zxn + y n = z n ← xyz = 0. i. First one makes a predicate Cell(i. t) is true iﬀ z is a rt2 bit integer which describes a correct computation for Mx which have halted. Quantiﬁers range over natural numbers. z. This can now be coded as rt2 bits and these bits concatenated will be the integer z. Next one makes a predicate M ove(p1 .A couple of special markings will take care of this.27 The set of true number theoretic statements is not recursive.e. Now let Ax be an arithmetic formula such that Ax (z. p2 and p3 are the states of squares i − 1.28 In fact the set of true number theoretic statements is not even r. q) which says that if p1 . Thus assume we are given a Turing machine Mx and that we want to decide whether it halts on the empty input.

t. j. j.-complete B ≤m A.but not recursive (e. This is not utilized in the proof. t) is now equivalent to the conjunction of ∀i. z. t) and thus if we can decide the truth of arithmetic formulae with quantiﬁers we can decide if a given Turing machine halts. z. 25 . which is a constant depending only on x and independent of t ) and thus can be coded by brute force.the halting problem) then by the second property of being r. t.e. pq ) ⇒ M ove(p1 .the resulting state of square i at time j + 1. Remark 2. Let us explicitly state a theorem we have used a couple of times. It seems like the hard part of the tiling problem is what to do at points where we can put down many diﬀerent tiles (we never know if we made the correct decision). p2 . t. p1 )∧ Cell(i. M ove on the other hand is of constant size (there are only 24r inputs. contradicting the initial assumption that B is not recursive.e. p1 . t. Since we know that this is not possible we have ﬁnished the outline of the proof. z. t. A similar statement is true about the other proof.e. j. j + 1.29 It is interesting to note that (at least to me) the proofs of the last two theorems are in some sense counter intuitive. q Cell(i − 1.6 we could conclude that B is recursive. q) and ∀q Cell(1. Proof: Let B be a set that is r. The predicate Ax (z. Now if A was recursive then by Theorem 2. Now we are almost done since Mx halts iﬀ ∃z. p2 ) ∧ Cell(i + 1. The Cell predicate is from an intuitive point of view very arithmetic (and thus we hope the reader feels that it can be constructed). p2 . z. t. Theorem 2.7. j. p3 )∧ Cell(i.g.-complete then A is not recursive. p3 . Rather at each point we have only one choice and the hard part is to decide whether we can continue for ever. z. q ) ⇒ Stop(q ) where Stop(p) is true if p is a haltstate. p3 . tAx (z.30 If A is r.

We want to be able to prove all true theorem (this is called completeness) and we do not want to be able to prove any false theorems (this is called that the system is consistent). Our goal is to prove that there is no proof system that is both consistent 26 . The most common set of axioms for number theory was proposed by Peano. but one could think of other sets of axioms. Statements in arithmetic will simply be the formulas considered in the last examples. The notion of a proof is more complicated. What does it mean that the halting problem is not recursive? Experience shows that for most programs that do not halt there is a simple reason that they do not halt.e. quantiﬁed formulas where the variables values which are natural numbers. We want to avoid a too elaborate machinery and hence we will be rather informal and give an argument in the simplest case. Thus the halting problem is undecidable. before we state the theorem we need to address what we mean by “statement in arithmetic” and “proof”. the program will always give the correct answer to the question whether the machine halts or not. We have only proved that there is not a single program which when given as input the description of a Turing machine and an input to that machine. 2. First note that most proofs used in modern mathematics is much more informal and given in a natural language. We call a set of axioms together with the rules how they can be combined a proofsystem. There are two crucial properties to look for in a proofsystem. for each statement A we want to be able to prove exactly one of A and ¬A. i. We encourage the reader to write common theorems and conjectures in number theory in this form to check its power.Before we end this section let us make an informal remark. However. proof can be formalized (although most humans prefer informal proofs). However. One starts with a set of axioms and then one is allowed to combine axioms (according to some rules) to derive new theorems. In particular. A proof is then just such a derivation which ends with the desired statement. They often tend to go into an inﬁnite loop and of course such things can be detected. One ﬁnal deﬁnition: A problem that is not recursive is called undecidable.8 G¨del’s incompleteness theorem o Since we have done many of the pieces let us brieﬂy outline a proof of G¨del’s incompleteness theorem. This theorem basically says that there are o statements in arithmetic which neither have proof or a disproof.

However. this is not true since we can as axioms take all true statements and then we need no rules for deriving new theorems. Then we claim that also the set of all theorems would be recursive. This is not a very practical proofsystem since there is no way to tell whether a given statement is indeed an axiom. . Clearly the axioms need to be speciﬁed in a more eﬃcient manner. If z is a correct proof of ¬A output “false” and halt. Proof: Assume that there was indeed such a proofsystem. We take the following deﬁnition. 1. 2.2: Is there any ﬁxed machine M . Thus this procedure always halts with the correct answer. 2. We can now state the theorem. Namely to decide whether a statement A is true we could proceed as follows: For z = 0. . II. such that given y. deciding whether M halts on input y is recursive? II. To check whether a given string is a correct proof is recursive by assumption and since the proofsystem is consistent and complete sooner or later there will be a proof of either A or ¬A. by Theorem 2. Theorem 2. Unfortunately. Deﬁnition 2.and complete.32 (G¨del) There is no recursive proofsystem which is both o consistent and complete. deciding whether M halts on input y is not recursive? 27 .1: Given x is it recursive to decide whether Mx halts on an empty input? II.27 the set of true statements is not recursive and hence we have reached a contradiction.3: Is there any ﬁxed machine M .31 A proofsystem is recursive iﬀ the set of proofs (and hence the set of axioms) form a recursive set. ∞ If z is a correct proof of A output “true” and halt.9 Exercises Let us end this section with a couple of exercises (with answers). . The reader is encouraged to solve the exercises without looking too much at the answers. such that given y.

Mx halts on y? II. is it decidable whether we. Namely given z and y we make a machine Mx which basically looks like Mz but has a few special states. II. it is recursive to decide whether M halts on input y in y 2 steps? II.8 Prove that the maximum time function (cf ex. if we restrict the left hand side of each rewriting rule to be of length 1? 2. using the rewriting rules. Deﬁne the maximum time function by M T (y) = maxx≤y f (x). y.11 Given a set of rewriting rules over a ﬁnite alphabet and a starting string. then there is an x such that M T (x) > g(x).1 The problem is undecidable. On empty input Mx ﬁrst goes trough all its special states which writes y on the tape. can transform the starting string to an arbitrarily long string? II. To conclude the proof we only have to observe that it is recursive to compute the number x from the pair y and z. The machine then returns to the beginning of the tape and from this point on it behaves as Mz .10 Given a set of rewriting rules over a ﬁnite alphabet and a starting string. Is it decidable whether we. To be more precise let g be any recursive function. We have one special state for each symbol of y. We will prove that if we could decide whether Mx halts on the empty input. Is the maximum time function computable? II. This new machine halts on empty input-tape iﬀ Mz halted on input y and thus if we could decide the former we could decide the latter which is known undecidable. Is it possible to transform ababba to aaaabbb? II.10 Answers to exercises II.7: If Mx halts on empty input let f (x) be the number of steps it needs before it halts and otherwise set f (x) = 0. using the rewriting rules.9 Given a set of rewriting rules over a ﬁnite alphabet and a starting string and a target string. then we could decide whether Mz halts on input y for an arbitrary pair z. that given y. using the rewriting rules. can transform the starting string to the target string? An example of this instance is: Rewriting rules ab → ba. aa → bab and bb → a.II. 28 . can transform the starting string to an arbitrarily long string.6: Given x is it recursive to decide whether for all y. Is it decidable whether we.4: Is it true that for each machine M .5: Given x is it recursive to decide whether there exists a y such that Mx halts on y? II.7) grows at least as fast as any recursive function. II.

II.2 There are plenty of machines of this type. For instance let M be the machine that halts without looking at the input (or any machine deﬁning a total function). In one of these cases the set of y’s for which the machine halts is everything which certainly is a decidable set. II.3 Let M be the universal machine. Then M halts on input (x, y) iﬀ Mx halts on input y. Since the latter problem is undecidable so is the former. II.4 This problem is decidable by the existence of the universal machine. If we are less formal we could just say that running a machine a given number of steps is easy. What makes halting problems diﬃcult is that we do not know for how many steps to run the machine. II.5 Undecidable. Suppose we could decide this problem, then we show that we could determine whether a machine Mx halts on empty input. Given Mx we create a machine Mz which ﬁrst erases the input and then behaves as Mx . We claim that Mz halts on some input iﬀ Mx halts on empty input. Also it is true that we can compute z from x. Thus if we could decide whether Mz halts on some input then we could decide whether Mx halts on empty input, but this is undecidable by exercise II.1. II.6 Undecidable. The argument is the same as in the previous exercise. The constructed machine Mz halts on all inputs iﬀ it halts on some input. II.7 M T is not computable. Suppose it was, then we could decide whether Mx halts on empty input as follows: First compute M T (x) and then run Mx for M T (x) steps on the empty input. If it halts in this number of steps, we know the answer and if it did not halt, we know by the deﬁnition of M T that it will never halt. Thus we always give the correct answer. However we know by exercise II.1 that the halting problem on empty input is undecidable. The contradiction must come from our assumption that M T is computable. II.8 Suppose we had a recursive function g such that g(x) ≥ M T (x) for all x. Then g(x) would work in the place of M T (x) in the proof of exercise II.7 (we would run more steps than we needed to, but we would always get the correct answer). Thus there can be no such function. II.9 The problem is undecidable, let us give an outline why this is true. We will prove that if we could decide this problem then we could decide whether a given Turing machine halts on the empty input. The letters in our ﬁnite alphabet will be the nonblank symbols that can appear on the tape of the Turing machine, plus a symbol for each state of the machine. A string in this alphabet containing exactly one letter corresponding to a state of the machine can be viewed as coding the Turing machine at one instant in time 29

by the following convention. The nonblank part of the tape is written from left to write and next to the letter corresponding to the square where the head is, we write the letter corresponding to the state the machine is in. For instance suppose the Turing machine has symbols 0 and 1 and 4 states. We choose a, b, c and d to code these states. If, at an instant in time, the content of the tape is 0110000BBBBBBBBBBBB . . . and the head is in square 3 and is in state 3, we could code this as: 011c000. Now it is easy to make rewriting rules corresponding to the moves of the machine. For instance if the machine would write 0, go into state 2 and move left when it is in state 3 and sees a 1 this would correspond to the rewriting rule 1c → b0. Now the question whether a machine halts on the empty input corresponds to the question whether we can rewrite a to a description of a halted Turing machine. To make this description unique we add a special state to the Turing machine such that instead of just halting, it erases the tape and returns to the beginning of the tape and then halts. In this case we get a unique halting conﬁguration, which is used as the target string. It is very interesting to note that although one would expect that the complexity of this problem comes from the fact that we do not know which rewriting rule to apply when there is a choice, this is not used in the proof. In fact in the special cases we get from the reduction from Turing machines, at each point there is only one rule to apply (corresponding to the move of the Turing machine). In the example given in the exercise there is no way to transform the start string to the target string. This might be seen by letting a have weight 2 and b have weight 1. Then the rewriting rules preserve weight while the two given words are of diﬀerent weight. II.10 Undecidable. Do the same reduction as in exercise II.9 to get a rewriting system and a start string corresponding to a Turing machine Mx working on empty input. If this system produces arbitrarily long words then the machine does not halt. On the other hand if we knew that the system did not produce arbitrarily long words then we could simulate the machine until it either halts or enters the same state twice (we know one of these two cases will happen). In the ﬁrst case the machine halted and in the second it will loop forever. Thus if we could decide if a rewriting system produced arbitrarily long strings we can decide if a Turing machine halts on empty input. II.11 This problem is decidable. Make a directed graph G whose nodes correspond to the letters in the alphabet. There is an edge from v to w if there 30

is a rewriting rule which rewrites v into a string that contains w. Let the weight of this string be 1 if the rewriting rule replaces v by a longer string and 0 otherwise. Now we claim that the rewriting rules can produce arbitrarily long strings iﬀ there is a circuit of positive weight that can be reached from one of the letters contained in the starting word. The decidability now follows from standard graph algorithms.

31

In particular. To decide what is mechanically computable is of course interesting. We will then only count the number of squares visited on the work-tapes. M halts within T (n) steps. We will. For the remainder of these notes all functions that we will be considering will be recursive and we will concentrate on what resources are needed to compute the function.3 Assume that a Turing machine M has a read-only inputtape. 3. The natural deﬁnition for space would be to say that a Turing machine uses space S(n) if its head visits at most S(n) squares on any input of length n. Deﬁnition 3. 32 . a write-only output-tape and one or more work-tapes. but what we really care about is what we can compute in practice.1 A Turing machine M runs in time T (n) if for every input of length n.1 Basic Deﬁnitions Let us start by deﬁning what we mean by the running time and space usage of a Turing machine. Then we will say that M uses space S(n) if for every input of length n. Deﬁnition 3. This deﬁnition is not quite suitable under all circumstances. also be interested in machines which use less than linear space and to make sense of this we have to modify the model slightly. We will assume that there is a special input-tape which is read-only and a special output-tape which is write-only.e by using an ordinary computer for a reasonable amount of time. Apart from these two tapes the machine has one or more work-tapes which it can use in the oldfashioned way. the deﬁnition would imply that if the Turing machine looks at the entire input then S(n) ≥ n. M visits at most S(n) tape squares on its work-tapes before it halts.3 Eﬃcient computation. i. Deﬁnition 3. The two ﬁrst such resources we will be interested in are computing time and space.2 The length of string x is denoted by |x|. hierarchy theorems. however. The running time is a function of the input and experience has showed that it is convenient to treat inputs of the same length together.

When we are discussing running times we will most of the time not be worried about constants i.e. we will not really care if a machine runs in time n2 or 10n2 . Thus the following deﬁnition is useful: Deﬁnition 3.4 O(f (n)) is the set of functions which is bounded by cf (n) for some positive constant c. Having done the deﬁnitions we can go on to see whether more time (space) actually enables us to compute more functions.

3.2

Hierarchy theorems

Before we start studying the hierarchy theorems (i.e. theorems of the type “more time helps”) let us just prove that there are arbitrarily complex functions. Theorem 3.5 For any recursive function f (n) there is a function Vf which is recursive but cannot be computed in time f (n). Proof: Deﬁne Vf by letting Vf (x) be 1 if Mx is a legal Turing machine which halts with output 0 within f (|x|) steps on input x and let Vf (x) take the value 0 otherwise. We claim that Vf cannot be computed within time f (n) on any Turing machine. Suppose for contradiction that My computes Vf and halts within time f (|x|) for every input x. Consider what happens on input y. Since we have assumed that My halts within time f (|y|) we see that Vf (y) = 1 iﬀ My gives output 0, and thus we have reached a contradiction. To ﬁnish the proof of the theorem we need to check that Vf is recursive, but this is fairly straightforward. We need to do two things on input x. 1. Compute f (|x|). 2. Check if Mx is a legal Turing machine and in such a case simulate Mx for f (|x|) steps and check whether the output is 0. The ﬁrst of these two operations is recursive by assumption while the second can be done using the universal Turing machine as a subroutine. This completes the proof of Theorem 3.5

33

Up to this point we have not assumed anything about the alphabet of our Turing machines. Implicitly we have thought of it as {0, 1, B} but let us now highlight the role of the alphabet in two theorems. Theorem 3.6 If a Turing machine M computes a {0, 1} valued function f in time T (n) then there is a Turing machine M which computes f in time 2n + T (n) . 2 Proof: (Outline) Suppose that the alphabet of M is {0, 1, B} then the alphabet of M will be 5-tuples of these symbols. Then we can code every ﬁve adjacent squares on the tape of M into a single square of M . This will enable M to take several steps of M in one step provided that the head stays within the same block of 5 symbols coded in the same square of M . However, it is not clear that this will help since it might be the case that many of M ’s steps will cross a boundary of 5-blocks. One can avoid this by having the 5-tuples of M be overlapping, and we leave this construction to the reader. The reason for requiring that f only takes the values 0 and 1 is to make sure that M does not spend most of its time printing the output and the reason for adding 2n in the running time of M is that M has to read the input in the old format before it can be written down more succinctly and then return to the intitial conﬁguration. The previous theorem tells us that we can gain any constant factor in running time provided we are willing to work with a larger alphabet. The next theorem tells us that this is all we can gain. Theorem 3.7 If a Turing machine M computes a {0, 1} valued function f on inputs that are binary strings in time T (n), then there is a Turing machine M which uses the alphabet {0, 1, B} which computes f in time cT (n) for some constant c. Proof: (Outline) Each symbol of M is now coded as a ﬁnite binary string (assume for notational convenience that the length of these strings is 3 for any symbol of M ’s alphabet). To each square on the tape of M there will be associated 3 tape squares on the tape of M which will contain the code of the corresponding symbol of M . Each step of M will be a sequence of steps of M which reads the corresponding squares. We need to introduce some intermediate states to remember the last few symbols read and there are some other details to take care of. However, we leave these details to the reader. 34

The last two theorems tell us that there is no point in keeping track of constants when analyzing computing times. The same is of course true when analyzing space since the proofs naturally extend. The theorems also say that it is suﬃcient to work with Turing machines that have the alphabet {0, 1, B} as long as we remember that constants have no signiﬁcance. For deﬁniteness we will state results for Turing machines with 3 tapes. It will be important to have eﬃcient simulations and we have the following theorem. Theorem 3.8 The number of operations for a universal two-tape Turing machine needed to simulate T (n) operations of a Turing machine M is at most αT (n) log T (n), where α is a constant dependent on M , but independent of n. If the original machine runs in space S(n) ≥ log n, the simulation also runs in space αS(n), where α again is a constant dependent on M , but independent of n. We skip the complicated proof. Now consider the function Vf deﬁned in the proof of Theorem 3.5 and let us investigate how much is needed to compute it. Of the two steps of the algorithm, the second step can be analyzed using the above result and thus the unknown part is how long it takes to compute f (|x|). As many times in mathematics we deﬁne away this problem. Deﬁnition 3.9 A function f is time constructible if there is a Turing machine that on input 1n computes f (n) in time f (n). It is easy to see that most natural functions like n2 , 2n and n log n are time constructible. More or less just collecting all the pieces of the work already done we have the following theorem. Theorem 3.10 If T2 (n) is time constructible, T1 (n) > n, and T2 (n) =∞ n→∞ T1 (n) log T1 (n) lim then there is a function computable in time O(T2 (n)) but not in T1 (n). Both time bounds refer to Turing machines with three tapes. Proof: The intuition would be to use the function VT1 deﬁned previously. To avoid some technical obstacles we work with a slightly modiﬁed function. 35

Remember that there are inﬁnitely many yi such that Myi codes My (we allowed an end marker in the middle of the description). Then on inputs of length n. we ﬁrst compute T2 (n) and then run the simulation for that many steps. Q states and k work-tapes and which uses space at most S(n). If we get an answer within this simulation we output 1 if the answer was 0 and output 0 otherwise. This deﬁnes a function VT2 and we need to check that it cannot be computed by any My in time T1 .e. I. Now note that the constant α in Theorem 3. We claim that Uf cannot be computed in space f (n). It is clear that we will be able to get the same result for space-complexity even though there is some minor problems to take care of. This might seem obvious at ﬁrst since we can just use the universal machine to simulate Mx and all we have to keep track of is whether Mx uses more than the allowed amount of space. To ﬁnish the theorem we need to prove that Uf is recursive. 36 .11 If f (n) is a recursive function then there is a recursive function which cannot be computed in space f (n). We need the following important but not very diﬃcult lemma. Let us ﬁrst prove that there are functions which require arbitrarily large amounts of space.8 only depends on the machine My to be simulated and thus there is a yi which codes My such that T2 (|yi |) ≥ αT1 (|yi |) log T1 (|yi |).12 Let M be a Turing machine which has a work tape alphabet of size c. M either halts within time nQS(n)k ckS(n) or it never halts. then as in all previous arguments My will output 0 on input y iﬀ Uf (y) = 1 and otherwise Uf (y) = 0. Lemma 3. Theorem 3. This is not quite suﬃcient since Mx might run forever and never use more than f (|x|) space. If we do not get an answer we simply answer 0. We use two of the tapes for the simulation and the third tape to keep a clock. Proof: Deﬁne Uf by letting Uf (x) be 1 if Mx is a legal Turing machine which halts with output 0 without visiting more than f (|x|) tape squares on input x and let Uf (x) take the value 0 otherwise.When simulating Mx we count the steps of the simulating machine rather than of Mx . Given a Turing machine My which never uses more than f (n) space. By the standard argument My will make an error for this input.

S(n) ≥ log n and n→∞ lim S2 (n) =∞ S(n) then there is a function computable in space O(S2 (n)) but not in space S(n). These space bounds refer to machines with 3 tapes. In other words deﬁne a function essentially as US but restrict the computation to using space S2 of the simulating machine. Returning to the proof of Theorem 3. Let us calculate the number of diﬀerent conﬁgurations of M given a ﬁxed input of length n. The rest of the proof is now more or less identical. Since it uses at most space S(n) there at most ckS(n) possible contents of it work-tapes and at most S(n)k possible positions of the heads on the worktapes. Theorem 3. The number of possible locations of the head on the input-tape is at most n and there are Q possible states.10.11 we can now prove that Uf is computable. Deﬁnition 3. 37 . The only detail to take care of is that if S(n) ≥ log n then a counter counting up to |x|QS(|x|)k ckS(|x|) can be implemented in space S(n). the conﬁguration consists of the contents of the tapes of M . This ﬁnishes the proof of Theorem 3.13 A function f is space constructible if there is a Turing machine that on input 1n computes f (n) in space f (n).Proof: Let a conﬁguration of M be a complete description of the machine at an instant in time. The proof of Lemma 3. If the machine does not halt within this many timesteps the machine will be in the same conﬁguration twice. whenever it returns to a conﬁguration where it has been previously it will return inﬁnitely many times and thus never halt. We now can state the space-hierarchy theorem. But since the future actions of the machine is completely determined by the present conﬁguration. Proof: The function achieving the separation is basically US with the same twist as in Theorem 3. Thus.11 To prove that more space actually enables us to compute more functions we need the appropriate deﬁnition. the positions of all its heads and its state. We just simulate Mx for at most |x|Qf (|x|)k ckf (|x|) steps or until it has halted or used more than f (|x|) space.12 is complete.14 If S2 (n) is space constructible. We use a counter to count the number of steps used. Thus we have a total of nQS(n)k ckS(n) possible conﬁgurations.

The reason that we get a tighter separation between space-complexity classes than time-complexity classes is the fact that the universal machine just uses constant more space than the original machine. These results are due to Hartmanis and Stearns and are from the 1960’s. 38 . This completes our treatment of the hierarchy theorems. Next we will continue into the 1970’s and move further away from recursion theory and into the realm of more modern complexity theory.

There are some relations between the given complexity classes. Deﬁnition 4. The inclusions given in Theorems 4.5 P ⊆ P SP ACE. Proof: 3. 39 . and Q states and always halts. We can now start our main topic.4 The complexity classes L. We will in this section deﬁne the basic deterministic complexity classes. P and P SP ACE.3 Given a set A.14. Proof: This follows from Lemma 3. we say that A ∈ L iﬀ there is a Turing machine which computes the characteristic function of A in space O(log n). we say that A ∈ P SP ACE iﬀ there is a Turing machine which for some constant k computes the characteristic function of A in space O(nk ). we say that A ∈ P iﬀ there is a Turing machine which for some constant k computes the characteristic function of A in time O(nk ). That it is strict follows from Theorem Theorem 4. L. it follows from Theorem 4.5 and 4. but it gives no idea to which one it is. Of course. namely the study of complexity classes. Theorem 4. This is also obvious since a Turing machine cannot use more space than time.4 L ⊂ P SP ACE. then we know it runs in time at most nQ(c log n)k 3c log n ∈ O(n2+c log 3 ) where we used that (log n)k ∈ O(n) for any constant k. We can conclude that a machine which runs in logarithmic space also runs in polynomial time.2 Given a set A.1 Given a set A. Theorem 4.12 since if S(n) ≤ c log n and we assume that the machine uses a three letter alphabet. Deﬁnition 4. Deﬁnition 4. P and PSPACE. The inclusion is obvious.6 are believed to be strict but this is not known. has k work-tapes.4 that at least one of the inclusions is strict.6 L ⊆ P .

A RAM 40 . We have to investigate whether the deﬁned complexity classes are artifacts of the particulars of Turing machines as a computational model or if they are genuine classes of functions which are more or less independent of the model of computation. Turing machine seems incredibly ineﬃcient and thus we will compare it to a model of computation which is more or less a normal computer (programmed in assembly language).e. The same argument applies here. This type of computer is called a Random Access Machine (RAM) and a pictured is given i Figure 2. The reader who is not worried about such questions is adviced to skip this section.1 Is the deﬁnition of P model dependent? When studying mechanically computable functions we had several deﬁnitions which turned out to be equivalent.Figure 2: A Random Access Machine 4. This fact convinced us that we had found the right notion i. that we had deﬁned a class of functions which captured a property of the functions rather than a property of the model.

In one step a RAM can carry out the following instructions. similarly for ac2 . ac1 = r(k) for constant k or ac1 = r(ac1 ). (Condition ac1 > 0 or ac1 = 0). Use constants in the program.g. 8.g. Store the content of an accumulator. 7. 41 . the result ends up in ac1 . Add. Make conditional and unconditional jumps. 4. e. We will let r(i) be the content of register i and ac1 and ac2 the contents of the accumulators. 3. Halt One might be tempted to let the time used by a RAM be the number of operations it does (the unit-cost RAM). Load something into an accumulator. 2. Both the registers and the accumulators can hold arbitrarily large integers. 1. The size of a computer word is bounded by a constant and operations on larger numbers require us to access a number of memory cells which is proportional to logarithm of the number used. Deﬁnition 4.7 The time to do a particular instruction on a RAM is 1 + log(k + 1) where k is the least upper bound on the integers involved in the instruction. r(k) = ac1 for constant k or r(ac2 ) = ac1 . This turns out to give a quite unrealistic measure of complexity and instead we will use the logarithmic cost model. e. divide (integer division) or multiply the two numbers in ac1 and ac2 . Write an output. This actually agrees quite well with our everyday computers. Read input ac1 = input(ac2 ). 5. and inﬁnite number of registers and two accumulators. similarly for ac2 . The time for a computation on a RAM is the sum of the times for the individual instructions.has a ﬁnite control. 6. The ﬁnite control can read a program and has a read-only input-tape and a write-only output tape. subtract.

Proof: (Outline) Assume for simplicity that the Turing machine just has one work-tape and that it uses the alphabet {0. Then we have: Deﬁnition 4. The bound for the space used is obvious. Observe that we need to store the entire contents of the work-tape in one register to conserve space. It will code the content of the work-tape as an integer and store this integer in register 1. Next let us see that in fact a Turing machine is not that much less powerful than a RAM. The RAM will simulate the computation of the Turing machine step by step. If we instead stored the content of square i in register i the total space used would be O(S(n) log S(n)). 1. The running time would be improved to O(T (n) log T (n)) but for the present purposes it is more important to keep the space small.8 The space used by a RAM under a computation is the maximum of log (i + r(i)) log(ac1 + 1) + log(ac2 + 1) + r(i)=0 during the computation.To deﬁne the amount of memory used by a RAM on a particular operation let us assume that the initial contents of all the registers are 0. Intuitively the RAM seems more powerful than a Turing machine. The cost of the simulation of an individual step is the size of the integers involved and this is bounded by O(S(n)). Theorem 4. We will not try to prove exactly this. B}. To simulate a step of the Turing machine the RAM gets the appropriate information from the work-tape by an integer division and then it follows the transition described by the next-step function. 42 . the position of the head on the input-tape in accumulator 2. for T (n) ≥ n and S(n) ≥ log n then the same function can be computed in time O(T 2 (n)) and space O(S(n)) on a RAM.9 If a Turing machine can compute a function in time T (n) and space S(n). but only to establish strong enough results to show that the class P is well deﬁned. the position of the head on the work-tape(s) in register 2 and the current state of the Turing machine in register 3. Since we have at most T (n) steps and S(n) ≤ T (n) the bound for the running time follows.

which we usually tend to omit. If the instruction is an arithmetical step. then we just search the ac1 -tape for the symbol 1. Two diﬀerent pairs are separated by BB. If the instruction is jump-instruction we just make the next-step function take the next state which is the ﬁrst state of the set of states corresponding to that line. ac2 .10 If a function f can be computed by a RAM in time T (n) and space S(n) then f can be computed in time O(T 2 (n)) and space S(n) on a Turing machine.) 3. Proof: (Outline) As many other proofs this is not a very thrilling simulation argument. We will deﬁne the Turing machine pictorially by having circles indicate states. the same operation also applies to other tapes. 2. If the jump is conditional on the content of ac1 being 0. Let us give a few examples how to simulate some particular instructions. Each line is translated into a set of states and transitions between the states as indicated by Figure 4. since the result is central in that it proves that P is invariant under change of model. Inside the circle we write the tape(s) we are currently interested in. However.e. 1. Rectangular boxes indicate subroutines. while the forth tape is used as a scratch pad.) 43 . (See Figure 5. Three of the four work-tapes will correspond to ac1 . The way to proceed is of course to simulate the RAM step by step. A schematic picture is given in Figure 3 The register tape will contain pairs (i. respectively. r(i)) where the two numbers are separated with a B. (See Figure 6. we will at least give a reasonable outline of the proof. we just replace it by a Turing machine which computes the arithmetical step using the ac1 and ac2 tapes as inputs and the scratch pad tape as work-tape.Theorem 4. the registers. Assume for simplicity that we do the simulation on a Turing machine which apart from its input-tape and output-tape has 4 work-tapes. a special subroutine is “Rew” which is rewinding the register tape. If some i does not appear on the register tape this means that r(i) = 0. i. and the labeled arrows going out of the circle indicate which states to proceed to where the label indicates the current symbol(s). The RAM-program is now translated into a next-step function of the Turing machine. If we do not ﬁnd any 1 before we ﬁnd B the next step function directs us to the given line and otherwise we proceed with the next line. moving the head to the beginning.

Figure 3: A TM simulating a RAM 44 .

Figure 4: Basic picture Figure 5: The jump instruction 45 .

what we want to do is to look for the content of ac2 as the ﬁrst part of any pair on the register tape. ac1 ) at the end of the register-tape and otherwise we do nothing. r(ac2 )) and then move the rest of the content of the register-tape left to avoid empty space. If we ﬁnd that no such pair exists then we should load 0 into ac1 . To analyze the time needed for the simulation we claim that you can do multiplication and integer division of two m-digit numbers in time O(m2 ) on a Turing machine. If ac1 = 0 we store the pair (ac2 . Clearly. This implies that any arithmetical operation can be done in a factor O(S(n)) longer 46 . Let us just give an outline of how to load r(ac2 ) into ac1 . Let us analyze the eﬃciency of the simulation. If r(ac2 ) = 0 previously this is easy. After we have moved the information we write (ac2 . 5. The space used by the Turing machine is easily seen to be bounded by O(log (ac1 + 1) + log (ac2 + 1) + (log (i + 1) + log (r(i) + 1) + 3)) r(i)=0 and thus the simulation works in O(S(n)) space. A description of this is given in Figure 7.Figure 6: Conditional jump 4. To do this we scan the register-tape to ﬁnd out the present value of r(ac2 ). If r(ac2 ) = 0 we erase the old copy (ac2 . Finally let us indicate how to store ac1 into register ac2 . ac1 ) at the end (provided ac1 = 0).

Figure 7: Loading instruction 47 .

4. On the other hand when we argue about computation it is much easier to work with Turing machines since their local behavior is so easy to describe.10 follows. This works as follows.2 Examples of members in the complexity classes. We will every now and then abuse this notation and say that a function (not necessarily {0. compute their sum. This turns out to be true in general and this gives us a very important principle which we can formalize as a complexity theoretic version of Church’s thesis.11 Given two n-digit numbers x and y written in binary. By virtue of the above thesis we can take the easy road to both things and still be correct.9 and 4. L and PSPACE are the same whether we use Turing machines or RAMs in the deﬁnitions. This can clearly be done in time O(n) as we all learned in ﬁrst grade. The storing and retrieving of information can also be done in time O(S(n)) and using S(n) ≤ T (n) Theorem 4. It is also quite easy to see that it can be done in logarithmic space. If we have x = n−1 xi 2i and y = n−1 yi 2i then x + y is computed by the following i=0 i=0 program: carry= 0 For i = 0 to n − 1 bit = xi + yi + carry carry = 0 If bit ≥ 2 then carry = 1. 48 .time on the Turing machine than on the RAM.10 we see that P. P and PSPACE as families of sets. Using Theorems 4. Complexity theoretic version of Church’s thesis: The complexity classes L. P and PSPACE remain the same under any reasonable computational model. The above statement also remains true for all other complexity classes that we will deﬁne throughout these notes and we will frequently implicitly apply the above thesis. When designing algorithms it is much easier to describe and analyze the algorithm if we use a high level description. bit = bit − 2. We have deﬁned L. This will just mean that the function can be computed within the implied resource bounds. Example 4. 1}valued) lies in one of these complexity classes.

Example 4. compute their product. 49 . carry= 0 For i = 0 to 2n − 2 low = max(0. The only slightly nontrivial thing to check in order to verify that the algorithm does not use more that O(log n) space is to verify that carry always stays less than 2n. and if we do it as taught. In fact we can also do it in L. One might be tempted to think that also division could be done in L. The only things that need to be remembered is the counter i and the values of bit and carry. This can clearly be done in O(log n) space and thus addition belongs to L. carry = carry + xj ∗ yi−j write lsb(carry) carry = carry/2 next i write carry with least signiﬁcant bit ﬁrst If one looks more closely at the algorithm one discovers that it is the ordinary multiplication algorithm where one saves space by computing a number only when it is needed. it is not known whether this is the case. it will take us O(n2 ) (this can be improved by more elaborate methods).write bit next i write carry. convert it to base 3.12 Given two n-digit numbers x and y written in binary. Another very easy problem that is not known how to do in L: Given an integer in base 2. Example 4. However. We leave this easy detail to the reader. i) For j = low to high.13 Given two n-bit integers x and y compute their greatest common divisor. This can again be done in P by ﬁrst grade methods. i − (n − 1)) high = min(n − 1.

0 ≤ r < b a=b b=r od write a The algorithm is correct since if d divides x and y then clearly d divides all a and b. it is possible to do better (without applying any fancy techniques) by the following observation. The lemma implies that there are at most 2n iterations and thus the total complexity is bounded by O(n3 ). The fact that numbers get smaller at each iteration implies that there are at most 2n iterations. Proof: Let a1 and b1 be the values of a and b after one iteration. To analyze the algorithm we have to focus on two things.14 Let a and b have the values a0 and b0 at one point in time in Euclid’s algorithm and let a2 and b2 be their values two iterations later. On the other hand if d divides any pair a and b then it also divides x and y. This is not suﬃcient to get a polynomial time running time and we need the following lemma. First observe that for each iteration the numbers get smaller and thus we will always be working with numbers with at most n digits. 50 . The work in each iteration is essentially a division and this can be done in O(n2 ) bit operations. Lemma 4. then a2 ≤ a0 /2.We will show that this problem is in P and in fact give two diﬀerent algorithms to show this. a=x b=y While b = 0 do ﬁnd q and r such that a = bq + r. however. Then if b0 < a0 /2 we have a2 < a1 = b0 < a0 /2 and the conclusion of the lemma is true. If you use standard long division (with remainder) to ﬁnd q then the complexity is actually O(ns) where s is the number of bits in q. Assume for simplicity that x > y. If you are careful. namely the number of iterations and the cost of each iteration. On the other hand if b0 ≥ a0 /2 then we will have a2 = b1 = a0 − b0 ≤ a0 /2 and thus we have proved the lemma. First the old and basic algorithm: Euclid’s algorithm.

od write a2min(dx . Since b0 ≤ a0 this implies that a2 ≤ max(b1 . then a2 ≤ a0 /2. Proof: If a1 and b1 are the numbers after one iteration then b1 ≤ (a0 +b0 )/4 and a1 ≤ a0 . If one analyzes this carefully we actually get complexity O(n2 ). Again it is clear that the numbers decrease in size and thus we will never work with numbers with more than n digits. and hence the total work is bounded by O(n2 ). Thu again we can conclude that we have at most 2n iterations. While b > 1 do Either a + b or a − b is divisible by 4. To analyze the number of iterations we have: Lemma 4. On the other hand if q is large then it is easy to see that the numbers decrease more rapidly than given by the above lemma.dy ) The algorithm is correct by a similar argument as the previous algorithm. Let us just remark that the best known greatest common divisor algorithm for integers runs in time O(n(log n)2 log log n) and is based on the 51 .Thus if q is small we can do each iteration signiﬁcantly faster. To analyze the complexity of the algorithm we again have to study the number of iterations and the cost of each iteration. Let us give another algorithm for the same problem. r2−dr ) where 2dr is the highest power of 2 that divides r. Set a = x2−dx and b = y2−dy . If b < a interchange a and b. Each iteration only consists of a few comparisons and shifts if the numbers are coded in binary and thus it can be implemented in time O(n). Set r to the number that is divisible by 4 and set a = max(b. (a1 + b1 )/4) ≤ a0 /2. r2−dr ) and b = min(b. This algorithm is called “Binary GCD”. This implies that binary GCD is a competitive algorithm in particular since the individual operations can be implemented very eﬃciently when the binary representation of integers is used.15 Let a and b have the values a0 and b0 at one point in time in the binary GCD algorithm and let a2 and b2 be their values two iterations later. Let 2dx be the highest power of 2 that divides x and deﬁne dy similarly.

We need to verify that the numbers do not get too large during the computation i.17 If A is a nonsingular n×n integer matrix with entries bounded in size by m then A−1 has rational entries with numerator and denominator n bounded by mn n 2 . To analyze what happens to the numbers assume for notational simplicity that the upper left i × i matrix is non-singular for any i and thus we will be able to perform Gaussian elimination without pivoting. This volume is bounded by the product of the lengths of the row vectors3 which in its turn is bounded √ by (m n)n . A determinant can be interpreted as the volume of the parallelipiped spanned by the rows.16 Given a nonsingular integer matrix M with entries which are n-bit numbers. Lemma 4.Euclidean algorithm. It might seem like this problem obviously is in P since Gaussian elimination is well known to be doable in O(n3 ) steps. Thus using the following lemma we can bound the rational numbers involved in the computation. solve M x = b for some vector of n-bit numbers. After the i’th variable has been eliminated the matrix will be A−1 0 −1 I −CA A B C D = I A−1 B 0 −CA−1 B + D where I is the i × i identity matrix. there is something to check. It is unknown if integer greatest common divisor can be solved in small space. Thus we just need to bound the size of determinants of integer matrices. This is not a formal proof and the inequality indicated in this sentence is known as Hadamard’s inequality 3 52 . Suppose the original matrix looks like A B C D where A is the upper i×i matrix. Let us investigate what the matrix looks like after we have eliminated the i’th variable. Example 4. However. that the rational numbers that appear can be represented.e. Proof: Any entry of A−1 is an (n − 1) × (n − 1) subdeterminant of A divided by the determinant of A.

There is no known polynomial time algorithm for computing the permanent and there is good reason to believe that there is no such algorithm (the problem is #P -complete. The space needed for the former is bounded by (n log n) while the second is bounded by the size of the answer and if we assume that all entries in the original matrix 2 are bounded by 2n the per is bounded by 2n n! and thus can be stored in space O(n2 ). Example 4. if it is. Since Gaussian elimination can be done in O(n3 ) operations and each operation can be performed in time O(n4 ) (if we use classical arithmetic) then we get total complexity O(n7 ).It follows from the lemma that the rational numbers involved in Gaussian elimination can be represented by O(n2 ) binary digits.18 The determinant of an n × n matrix can be written as n sg(π) π∈Sn i=1 xi. per = 0 For 1 ≤ π(1).π(i) where the sum is over all permutations of the numbers 1 through n and sg(π) is the signum4 of the permutation. . check if it is a permutation and. . 4 If you do not know the signum function just forget this deﬁnition of the determinant 53 .π(i) . It is not hard to see that the problem is in PSPACE and we will not give the most eﬃcient algorithm but rather the easiest to understand. The determinant can be computed by Gaussian elimination and thus by the previous example it is in P. per = per + n i=1 xi. The deﬁnition looks simpler but it removes the nice invariance under the row operations of Gaussain elimination. π(2). we will get to this complexity class later). All the space required is to store the variables π(i) and per. add the corresponding term to the sum. The permanent is a very related number which is deﬁned as n xi. .π(i) Thus we just generate all n-tuples of numbers between 1 and n. π(n) ≤ n If π(i) = π(j) for i = j. π∈Sn i=1 Thus we have just removed the signum part of the deﬁnition.

we get 4 O(n) multiplications of O(n) bit numbers and this can be done in total time O(n3 ). p−1 p+1 Now if p ≡ 3 (mod 4) and if a 2 ≡ 1 (mod p) then if we set x = a 4 we have p+1 p−1 x2 ≡ a 2 ≡ aa 2 ≡ a.19 Given a prime number p and a number a. It is more eﬃcient to ﬁrst compute a2 (mod p) for 0 ≤ i ≤ n in n squarings.e. Example 4. Set Rnew to the set of nodes reachable from s in one step. i.20 Given a directed graph G with n nodes and two distinguished nodes s and t in G. Let us ﬁrst recall some basic facts from number theory. We will return to these questions later in these notes. it is true that xp−1 ≡ 1 (mod p) for any number not divisible by p. Assume that p and a are at most n digit numbers. by Fermat’s little theorem. Now we write p+1 in 4 p+1 i binary and we compute a 4 by multiplying together the powers a2 with the i’s corresponding to 1’s in the binary expansion of p+1 . ﬁnd x (if one exists) such that x2 ≡ a (mod p). Hence. v). Assume that we have an odd prime p (the case p = 2 being easy) then a can be written p−1 as a square mod p iﬀ a 2 ≡ 1 (mod p) (i. Is it possible to ﬁnd a directed path from s to t? This problem is in P by the following straightforward algorithm. Thus we have proved that taking square-roots modulo primes p with p ≡ 3 (mod 4) can be done in polynomial time. Remember also that. 1 matrix is nonzero but that it seems hard to compute it. 54 . p+1 Let us investigate how much resources are needed to compute a 4 (mod p). (mod p) Thus taking square roots when p ≡ 3 (mod 4) is just computing a power. Set R ={s}.It is interesting to note that there is a polynomial time algorithm to decide whether a permanent of a 0. we have a solution iﬀ this condition holds).e. It is not known if this is true in general for primes p ≡ 1 (mod 4) or when p is a composite number. we can reduce mod p after each squaring and thus we will never need to work with numbers with more than 2n digits. the set of v such that there is an edge (s. Observe here that since we are only interested in the result (mod p). Example 4. Then computing p+1 a 4 by successive multiplications would require on the order of 2n multii plications.

Also take any nodes reachable from w in one step which do not belong to either R or Rnew and put them into Rnew . We leave the veriﬁcation of this to the reader. Remark 4.21 Observe that in fact R is the set of nodes reachable from s and thus we have really solved a more general problem.While Rnew is not empty do Take an element w in Rnew and move it into R. od If t ∈ R say yes. Next we turn to the deﬁnition of non-deterministic computation. otherwise say no. 55 . To see that the problem is in P let us analyze the time needed for the algorithm. Each execution in the loop can be done in time n since we just have to investigate the neighbors of w. The obvious goal in mind is to formally deﬁne N P . Thus the complexity is bounded by O(n2 ). It is important to know that Rnew contains the set of nodes known to be reachable from s but whose neighbors have not yet been put into R or Rnew . Since we each time the loop is executed put one node into R and we never remove anything from R the loop is only executed n times. We claim that when the algorithm ends all the nodes reachable from s are in R.

there might be several diﬀerent possible outputs. the machine takes value 0 (or rejects the input). write y1 and y2 nondeterministically with |yi | ≤ |x| for i = 1. Let us see that the algorithm is correct. 2. namely if x = ab then when y1 = a and y2 = b we will get the output 1. 2.2 Suppose we want to recognize composite numbers i. By this we mean that in a given situation the machine might do several diﬀerent things. 5. Now the machine gives output 1 iﬀ y1 y2 = x and yi > 1 for i = 1. This calls for a deﬁnition.1 or an endmarker. deterministic Turing machine is the next-step function. but it is multivalued. If there is no computation that gives the output 1. The formal deﬁnition might make nondeterminism seem like a paper-tiger which has nothing to do with reality. 1} functions we will think of nondeterministic machines as recognizing sets i. This implies that on a given input there are several possible computations and in particular. If x is composite then there is some computation that outputs 1. which tells the machine what to do in a given situation. numbers which are not prime and hence can be written as the product of two numbers both greater than or equal to 2.1 Nondeterministic Turing machines The heart of a normal. The machine takes the value 1 on (or accepts) an input x iﬀ there is some possible computation on input x which gives output 1. On the other hand if x is prime there is 56 .5 Nondeterministic computation The two most famous complexity classes are probably P and NP.e. The machine constructs y2 in the same way. Writing down y1 is done by allowing the machine move left for |x| steps while at each step either writing down 0. Example 5. This can be done by a nondeterministic machine as follows: On input x. A nondeterministic Turing machine also has a next-step function.e.1 A nondeterministic Turing machine can only compute functions which takes the values 0 and 1. We have already deﬁned P and to deﬁne NP we need the concept of a nondeterministic Turing machine. but it will soon be clear that this is not the case. Since we will only be working with {0. Deﬁnition 5. the set of inputs for which there is an accepting computation.

Observe that when we are considering deterministic computation, recognizing primes and recognizing composite numbers are very similar, since one just changes the output routine to reverse the meaning of 0 and 1. When it comes to nondeterministic computation there is a tremendous difference. If, for instance, you change the output of the machine recognizing composite numbers defined above then you get a machine that accepts everything. It is important to keep this non-symmetry in mind.

The definitions of space and time need to be slightly modified since there is no unique computation given the input.

Definition 5.3 A nondeterministic Turing machine M runs in time T(n) if for every input of length n, every computation of M halts within T(n) steps.

Definition 5.4 A nondeterministic Turing machine M runs in space S(n) if for every input of length n, every computation of M visits at most S(n) squares on the work-tape.

Since nondeterministic Turing machines can always be made to have output 1 or 0, the size of the answer will always be small. This implies that we do not need an output-tape. Some proofs will be formally easier if we assume that the output is written on the worktape and therefore we will assume this.

With these basic definitions done we can proceed to define some complexity classes.

Definition 5.5 Given a set A, we say that A ∈ NL iff there is a nondeterministic Turing machine which accepts A and runs in space O(log n).

Definition 5.6 Given a set A, we say that A ∈ NP iff there is a nondeterministic Turing machine which accepts A and runs in time O(n^k) for some constant k.

Definition 5.7 Given a set A, we say that A ∈ NPSPACE iff there is a nondeterministic Turing machine which accepts A and runs in space O(n^k) for some constant k.

We have similar theorems to 4.5 and 4.6.

Theorem 5.8 NL ⊂ NPSPACE.

Proof: The inclusion is obvious. It is at this point not clear that it is strict. This will follow from results later on and we leave it for the time being.

Theorem 5.9 NP ⊆ NPSPACE.

Proof: This follows since also nondeterministic Turing machines cannot use more space than time.

Theorem 5.10 NL ⊆ NP.

Proof: The proof is quite close to the proof of the corresponding deterministic statement, but we need an extra observation. The time bound given in Lemma 3.12 is no longer true for nondeterministic computation. The reason for this is that even if a nondeterministic machine is in the same configuration twice it need not loop forever, since it can make different non-deterministic choices the second time around. However, it is easy to see that if a nondeterministic machine has an accepting computation then it has an accepting computation which visits each configuration at most once. This implies that we can impose the time-restriction given by Lemma 3.12 without changing the set of inputs accepted. This proves Theorem 5.10.

Let us now proceed to some examples of members in the newly defined complexity classes.

Theorem 5.11 Composite numbers are in NP.

This follows since the nondeterministic algorithm given previously is easily seen to run in time O(n^2). It might be tempting to guess that composite numbers are in NL, since the essential part of the algorithm is a multiplication and we know from before that multiplication can be done in L. This is not known however, and the reason that the given algorithm does not work is that multiplication is in L only when the input is on a separate input-tape where we can access any part of the input when it is needed. In the present situation we have to write down the two factors on the work-tape and there is no room to do this.

Example 5.12 Traveling Sales Person (TSP): Given n cities, a symmetric integer n × n matrix (mij), where mij denotes the distance between cities i and j, and an integer K. Is there a tour which visits all cities exactly once and is of total length ≤ K?

TSP is in NP as can be seen from the following non-deterministic algorithm. Nondeterministically write numbers bi, i = 1, 2, ..., n, each with at most log n + 1 digits. If 1 ≤ bi ≤ n for all i and bi ≠ bj for i ≠ j, then compute, by the obvious procedure,

Σ_{i=1}^{n-1} m_{bi, bi+1} + m_{bn, b1}.

If this number is at most K output 1 and in all other cases output 0. Observe that the conditions 1 ≤ bi ≤ n and bi ≠ bj for i ≠ j imply that the bi define a tour starting in b1, tracing through the bi for increasing i and then returning to b1; if this tour is short enough the machine accepts the input. It is easy to check that the algorithm runs in polynomial time and thus we have proved that TSP ∈ NP.

Example 5.13 Boolean formula satisfiability: Given a Boolean formula, consisting of Boolean variables xi, ∧-gates (logical conjunction), ∨-gates (logical disjunction) and negation-gates, is there a setting of the variables that satisfies the formula?

This problem is in NP. Namely, nondeterministically write down the value of every variable and then write 1 iff the guessed assignment satisfies the formula. To check that this procedure runs in polynomial time, one has to observe that given a formula and an assignment of all the variables, one can check whether the assignment satisfies the formula in polynomial time. This is easy and we leave it as an exercise.
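The polynomial-time check is worth making concrete. In the following sketch the formula is assumed to be in conjunctive normal form and encoded as a list of clauses, each clause a list of integers where v stands for the variable xv and -v for its negation; this encoding is our own choice for the illustration, and the evaluation of a general formula is equally straightforward.

def satisfies(clauses, assignment):
    # assignment maps each variable index to 0 or 1; the formula holds
    # iff every clause contains at least one true literal.
    return all(any(assignment[abs(l)] == (1 if l > 0 else 0) for l in c)
               for c in clauses)

f = [[1, 2, 3], [-1, 2, 4], [-2, 3, 4]]
print(satisfies(f, {1: 1, 2: 1, 3: 1, 4: 0}))   # True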

Let us return to the problem of graph-reachability (previously considered in Section 4.2):

Example 5.14 Directed graph reachability: Given a directed graph G and two nodes s and t of G, is there a directed path from s to t?

We present an algorithm that uses only logarithmic space and hence we need to be slightly careful about how the input is presented. We assume that the graph is given as a list of the edges. Now we have the following algorithm. Suppose the graph has n nodes.

Set H = s
For i = 1, 2, ..., n
  If H = t print 1 and halt.
  If there is no edge out of H print 0 and halt.
  Choose nondeterministically one of the edges leaving H and set H to the endpoint of this edge.
Next i
Print 0.

The conditions given in the algorithm are easily checked given the assumed encoding of G. This procedure uses only logarithmic space since all we need to remember is the counter i and the value of H. To verify that the algorithm is correct, first observe that by construction H is always a node that can be reached from s. Thus, since the machine outputs 1 only when H = t, we know that when the machine takes the value 1 then t is reachable from s. On the other hand, suppose that t is reachable from s. Then there is a path v1, v2, ..., vk where v1 = s, vk = t and there is an edge from vi to vi+1 for any i. We can assume that k ≤ n since if vi = vj for i < j then we can eliminate vi+1 through vj and still maintain a path. Then there is a possibility that H = vi for every i and thus there is a possibility that the machine outputs 1. The argument implies that the algorithm recognizes exactly the graphs that have a path from s to t and therefore directed graph reachability is in NL.

We will not give any example of a language in NPSPACE and in the next section it will be clear why.

Before we continue to establish some of the more formal properties of NP, let us be informal for a while. The class P is intuitively thought of as the class of problems which are computable in practice, i.e. within moderate amounts of computation we can solve reasonably large problems. That this is the case is not clear from the definition, and one could object that although n^100 is polynomial, it grows too quickly. In practice, however, this anomaly does not seem to appear, and thus if a problem has a polynomial time solution then the exponent tends to be small and the algorithm is usually efficient in practice. In a similar way NP can be thought of as the class of problems where, if you knew the solution, it could be verified efficiently. In an abstract meaning "the solution" must here be interpreted as the set of nondeterministic choices that makes the machine accept.
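To make this interpretation concrete, the following sketch treats the sequence of nondeterministic choices in Example 5.14 as exactly such a solution: the machine accepts iff some choice sequence makes the function return True. The encoding of the graph as a list of edges follows the example; the rest of the setup is our own illustration.

def walk_accepts(edges, n, s, t, choices):
    # choices[i] selects one of the edges leaving the current node H at step i.
    H = s
    for i in range(n):
        if H == t:
            return True                       # print 1 and halt
        out = [v for (u, v) in edges if u == H]
        if not out:
            return False                      # print 0 and halt
        H = out[choices[i] % len(out)]
    return H == t

edges = [('s', 'a'), ('s', 'b'), ('a', 't')]
print(walk_accepts(edges, 3, 's', 't', [0, 0, 0]))  # True: these choices are a certificate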

As we have seen, in practice "the solution" is much more concrete. Thus the nondeterministic choices have in our examples corresponded to the factors, a short tour, and a satisfying assignment.

The recursive sets corresponded to functions that could be computed, while the recursively enumerable sets corresponded to statements that could be verified. The latter statement follows from the fact that if A is r.e. and x ∈ A then this can be verified, since we just wait until x is listed. On the other hand, if x ∉ A this cannot be verified, since we never know if we just haven't waited long enough to see it listed. In view of this one can say that recursive and r.e. have the same relation as P and NP, and thus it is not surprising that we can prove some similar theorems.

Theorem 5.15 Given a set A, then A ∈ NP iff there is a language B ∈ P and a constant k such that

x ∈ A ⇔ ∃y, |y| ≤ |x|^k, (x, y) ∈ B.

Proof: Let us first prove that if there is such a B then A ∈ NP. A nondeterministic algorithm for membership in A just consists of guessing a y of the desired length and then accepting iff (x, y) ∈ B. If B can be recognized in time O(n^c) this procedure runs in time O(n^((1+k)c)) which is polynomial.

To see the converse, we will need the concept of a computation tableau.

Definition 5.16 A computation tableau is a complete description of a computation of a Turing machine. It consists of all configurations of the Turing machine on a specific input (i.e. one configuration for every time step) starting with the input configuration and ending with the halting configuration.

The reason for the name is that we will think of it in the following way. Assume that the Turing machine has only one tape. Then we can think of its computation tableau as a two-dimensional array with time on one axis and the tape squares on the other. The position (i, j) of this tableau thus contains the symbol that is in the j'th square at time i. It also contains information about whether the head is there and in such a case which state it is in. A computation which starts with input x1, x2, ..., xn on the input-tape and ends with only a 1 on the tape is given in Table 3.

Table 3: A computation tableau. (The first row holds the input x1 x2 x3 x4 followed by blanks, with the head in the start state q0 on the first square; each subsequent row gives the contents of the tape one time step later; the final row holds a single 1 followed by blanks, with the head in the halting state qh on the first square.)

Now we can return to the converse of Theorem 5.15. Suppose A is recognized by a one-tape Turing machine M in nondeterministic time n^c. Define B to be the set of pairs (x, y) such that y describes an n^c × n^c computation tableau of M on input x which ends in an accepting state. Then B satisfies the condition with respect to A of the theorem with k = 2c. We claim that B is in P. To see this, observe that to check whether a pair (x, y) is in B we basically have to check three things:

1. That the computation described by y starts with x on the input tape.
2. That the computation is legal for M.
3. That the computation accepts.

The first and the last conditions are easy to check since they just talk about the contents of particular squares. Also to check 2 is straightforward, since we have to check that the only square that changed value between two timesteps is the square where the head was located, and also that the transition by the head was a possible transition given the next-step function of M. This finishes the proof.

Remark 5.17 One might be tempted to think that the relation given between NP and P in Theorem 5.15 would be true also for NL and L. As the interested reader can convince himself, this is probably not the case, as even if we restrict B to belong to L then the set of all A definable in this way is still all of NP.

Thus we have given the theorem about NP and P corresponding to Theorem 2.19. Of the other theorems in Section 2.7, it is not known whether the analogue of 2.17 is true. (The general belief is that it is not.) There is a nice reduction theory and also a notion of complete sets, and we will return to these questions in Chapter 7.

6 Relations among complexity classes

Up to this point we have defined six complexity classes (L, NL, P, NP, PSPACE, and NPSPACE) and we have observed some relations. In this section we will establish some more relations, some obvious and some not obvious. Let us first observe that the option of non-determinism will never hurt, and thus any deterministic complexity class is contained in the corresponding nondeterministic complexity class. This gives us three immediate theorems.

Theorem 6.1 L ⊆ NL.

Theorem 6.2 P ⊆ NP.

Theorem 6.3 PSPACE ⊆ NPSPACE.

In the next subsection we will prove the first nontrivial complexity result.

6.1 Nondeterministic space vs. deterministic time

For notational convenience let TIME(T(n)) denote the class of languages that can be recognized in deterministic time T(n) and let NTIME(T(n)) be the class of languages that can be recognized in the same nondeterministic time. Similarly we define SPACE(S(n)) and NSPACE(S(n)). The aim is to establish the following theorem.

Theorem 6.4 Suppose S(n) > log n and that S(n) is space constructible; then NSPACE(S(n)) ⊆ TIME(2^(O(S(n)))).

Proof: Let A be a language that can be recognized by a nondeterministic Turing machine N which uses space at most S(n) on inputs of length n. Assume for simplicity that N has only one worktape, a three letter alphabet, and Q states. We have to design a deterministic Turing machine that runs in time 2^(O(S(n))) which recognizes A. Remember that a configuration consists of the state of N, the positions of all its heads and the contents of the worktape. Consider the set of configurations of N. By the argument in the proof of Lemma 3.12 there are at most |x|QS(|x|)3^(S(|x|)) possible configurations that N may visit on input x. Let Gx,N be the following directed graph:

The nodes of Gx,N are the configurations of N and there is an edge from configuration C1 to configuration C2 iff it is possible to go from C1 to C2 in one step on input x. Gx,N has one node Cst which corresponds to the initial configuration, and one or more configurations where N halts with output 1. We now claim that the machine takes value 1 on a given input exactly when there is a path from Cst to any of the configurations that end with output 1. This is fairly obvious and the verification is left to the reader.

Gx,N has at most 2^(O(S(|x|))) nodes and, using the fact that S is space constructible, we see that Gx,N can be constructed in 2^(O(S(|x|))) time. Now it follows from the example in Section 4.2 that in time 2^(O(S(|x|))) it is checkable whether any configuration that outputs 1 can be reached from the initial configuration. Since this is equivalent to N accepting x we have proved Theorem 6.4.

We have the following corollary.

Corollary 6.5 NL ⊆ P.

Proof: Just insert S(n) = O(log n) in Theorem 6.4.

6.2 Nondeterministic time vs. deterministic space

This section has only one basic theorem.

Theorem 6.6 NP ⊆ PSPACE.

Proof: Remember the characterization of NP given in Theorem 5.15, i.e. given A ∈ NP there is a B ∈ P and a k such that

x ∈ A ⇔ ∃y, |y| ≤ |x|^k, (x, y) ∈ B.

This gives the following algorithm to determine whether x ∈ A:

found = 0
For y = 0, 1, ..., 2^(|x|^k) do
  If (x, y) ∈ B then found = 1
od
Write found
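Written out as a program, the search looks as follows; in_A and the predicate B are our own names, and the point is that each candidate y is checked and then forgotten, so the space is reused:

from itertools import product

def in_A(x, B, k):
    for length in range(len(x) ** k + 1):
        for y in (''.join(bits) for bits in product('01', repeat=length)):
            if B(x, y):
                return 1
    return 0

# Example: A = composite numbers written in binary, y a guessed factor.
B = lambda x, y: (len(y) > 0 and 2 <= int(y, 2) < int(x, 2)
                  and int(x, 2) % int(y, 2) == 0)
print(in_A('1001', B, 1))   # 1, since 9 = 3 * 3 and y = '11' is found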

The algorithm is correct since found will be 1 exactly when there is a short y such that (x, y) ∈ B. To see that the algorithm runs in polynomial space, observe that all we need to do is to keep track of y and to do the computation to check whether (x, y) ∈ B. Since this latter computation is polynomial time, we can do it in polynomial space, and once we have checked a given y we can erase the computation and use the same space for the next y.

6.3 Deterministic space vs. nondeterministic space

Nondeterministic computation seems very powerful, and it seems for the moment that complexity theory supports this intuition, at least in the case when we are focusing on time as the main resource. If, on the other hand, we focus on space it turns out that nondeterminism only helps marginally. This fact is usually referred to as Savitch's theorem and was first proved by W.J. Savitch in 1970.

Theorem 6.7 If S(n) is space-constructible and S(n) ≥ log n, then NSPACE(S(n)) ⊆ SPACE(O(S^2(n))).

Proof: Assume that A is accepted by the nondeterministic machine N in space S(n). We will again work with the configurations of N and in fact, if you look closely, we solve the same graph problem as we did in the proof of Theorem 6.4. This time, however, we will be concerned with saving space and thus we will never write down the graph explicitly. Assume for notational simplicity that N has a unique configuration where it halts with output 1. Let us call this configuration Cacc, and let Cst denote the start configuration of N.

Let C1 and C2 be any two configurations of N and let k be an integer. Then we will be interested in the predicate GET(C1, C2, k, x), which we will interpret as "On input x it is possible to get from configuration C1 to configuration C2 in time ≤ 2^k and without being in a configuration which uses more than S(|x|) space." (If we think about the graph in the proof of Theorem 6.4 this can be interpreted as "There is a path of length at most 2^k from node C1 to node C2.")

Recall the argument in the proof of Theorem 5.10 that if a machine has an accepting computation then there is an accepting computation which visits each configuration at most once and, in particular, the running time is bounded by the number of configurations. This implies that there is a constant c such that N accepts an input x iff GET(Cst, Cacc, cS(|x|), x) is true. Thus all we have to do is to evaluate this predicate in small space, and to achieve this the following observation will be crucial:

GET(C1, C2, k, x) = ∨_C (GET(C1, C, k-1, x) ∧ GET(C, C2, k-1, x)).

The ∨ is here taken over all possible configurations C of N which use space at most S(|x|). The reason for the above relation is that if there exists a computational path from C1 to C2 of length at most 2^k which never uses more than S(|x|) space, then there is a midpoint on this path and the configuration at this midpoint can be used as C. Conversely, if there is a C that fulfills the right hand side of the above equation, then the two computations from C1 to C and from C to C2 can be concatenated to a computation from C1 to C2.

The above equation gives the following recursive algorithm to evaluate the predicate GET.

GET(C1, C2, k, x)
  If k = 0 then
    Check whether the next-step function of N allows a transition from C1 to C2 on input x in one step and set GET accordingly.
  else
    For all configurations C which use space at most S(n):
      Evaluate GET(C1, C, k-1, x) and GET(C, C2, k-1, x).
    If for some C both are true, set GET to true and otherwise to false.
  endif

By the above argument x ∈ A iff GET(Cst, Cacc, cS(n), x), and thus to prove the theorem we need only calculate the amount of space needed to evaluate GET. We prove by induction that GET(C1, C2, k, x) can be evaluated in space D(k+1)S(|x|) for some constant D. This is clearly true for k = 0, since all that needs to be done is to check if one of the constantly many possible next steps that N can do from C1 will take it into C2.
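Before counting the space, it may help to see the recursion run on a small explicit graph. In the sketch below, configs plays the role of the set of configurations and succ(c) lists the one-step successors of c (both names are ours); at level k = 0 we also allow the empty path, a harmless convention. The two recursive calls are made one after the other, which is exactly what gives the small space bound in the Turing machine setting.

def GET(configs, succ, c1, c2, k):
    # Is there a path from c1 to c2 of length at most 2^k?
    if k == 0:
        return c1 == c2 or c2 in succ(c1)
    return any(GET(configs, succ, c1, c, k - 1) and
               GET(configs, succ, c, c2, k - 1)
               for c in configs)                 # c is the guessed midpoint

adj = {'A': ['B'], 'B': ['C'], 'C': ['D'], 'D': []}
print(GET(list(adj), lambda c: adj[c], 'A', 'D', 2))  # True: length 3 ≤ 2^2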

To do the induction step, let us specify more closely how the above procedure works. We loop over all possible C, and to remember which C we are currently working on requires space dS(n) for some constant d. For each C we do two evaluations of GET with the parameter k - 1. These two evaluations are done sequentially and thus we can first do one of the evaluations, remember the result and then do the other evaluation in the same space. By the induction hypothesis this implies that the computation for a fixed C can be done in space DkS(n) + 1. Provided that D > d the induction step is complete and thus we have completed the proof of Theorem 6.7.

We have two obvious corollaries of the above theorem.

Corollary 6.8 NPSPACE = PSPACE.

This explains that NPSPACE is not a very famous complexity class. We introduced it for symmetry purposes, and now that we have proved that we do not need it, we will forget it.

Corollary 6.9 NL ⊂ PSPACE.

Proof: By Theorem 6.7 everything in NL can be done in space O(log^2 n) and thus we get a strict inclusion by Theorem 3.14. Observe that Corollary 6.9 finishes the proof of Theorem 5.8 as promised before.

By now we have gathered some information about the relations between the complexity classes we have defined. Let us sum up the information in a theorem.

Theorem 6.10 L ⊆ NL ⊆ P ⊆ NP ⊆ PSPACE. The inclusion of NL in PSPACE is strict.

It is a sad fact for complexity theory that Theorem 6.10 reflects our total knowledge of the relation between the given complexity-classes.

7 Complete problems

Even though Theorem 6.10 gives the present state of knowledge about the defined complexity classes, there are some important things to be said. The common belief today is that all the given inclusions are strict, but unfortunately we have not yet developed the machinery to prove this. One step on the way is to identify the hardest problems within each complexity class. This serves two purposes. Firstly, they will serve as candidates that can be used to prove strict inclusions. Secondly, proving a problem complete will give a good hint that it can probably not be placed in a lower complexity class and thus is a good way to classify a problem. We will start by considering a very famous class of problems, the NP-complete problems.

7.1 NP-complete problems

To identify the hardest problems we need first define the concept of "not harder than". There are a couple of different ways to do this but we will only consider one.

Definition 7.1 Let A and B be two sets. Then A ≤p B (read as "A is polynomial time reducible to B") iff there is a polynomial time computable function f such that x ∈ A ⇔ f(x) ∈ B.

Clearly this definition is very close to Definition 2.21. The only difference is that we require the function f to be computable in polynomial time. We can now proceed to develop a reduction theory similar to the one described in the end of Section 2. Many proofs and theorems are similar. Instead of talking about recursive and recursively enumerable sets we will talk about P and NP.

Theorem 7.2 If A ≤p B and B ∈ P, then A ∈ P.

Proof: Suppose the function f in the definition of ≤p can be computed in time O(n^c) and that B can be recognized in time O(n^k). Then to check whether a given input x belongs to A, just compute f(x) and then check whether f(x) ∈ B. To compute f(x) is done in time O(|x|^c), and from this it also follows that |f(x)| ≤ O(|x|^c), which in its turn implies that f(x) ∈ B can be checked in time O(|x|^(ck)). Thus the procedure works in polynomial time and we can conclude that A ∈ P.

The definition of NP-complete is now very natural, having seen the definition of r.e.-complete before.

Definition 7.3 A set A is NP-complete iff

1. A ∈ NP
2. If B ∈ NP then B ≤p A.

By dropping the first condition we get another known concept.

Definition 7.4 A set A is NP-hard iff for all B ∈ NP, B ≤p A.

Before we continue to prove some problems to be NP-complete, let us prove a simple theorem.

Theorem 7.5 If A is NP-complete then P = NP ⇔ A ∈ P.

Proof: Clearly if NP = P then A ∈ P, since A by the definition of NP-completeness belongs to NP. To see the converse, assume that A ∈ P and take any B ∈ NP. Then by property 2 of being NP-complete, B ≤p A, and hence by Theorem 7.2 B ∈ P. But since B was an arbitrary language in NP we can conclude that NP = P.

With this motivation we are ready to study our first NP-complete problem. Let SAT be the set of satisfiable Boolean formulas (as introduced in the example in Section 5.1).

Theorem 7.6 (Cook, 1971) SAT is NP-complete.

Proof: We have already established that SAT ∈ NP (see the example in Section 5.1) and thus we need to establish that B ∈ NP implies that B ≤p SAT. Assume that B is recognized by a non-deterministic Turing machine N which has one tape, Q states, runs in time n^c and uses the alphabet {0, 1, B}. Remember that the computation tableau is a complete description of a computation. We will now construct a Boolean formula such that if it is satisfiable then its satisfying assignment will describe a computation tableau of an accepting computation of N on input x.

Let us denote the length of x by n. The formula has two types of variables: yijk, where k ∈ {0, 1, B} and 1 ≤ i, j ≤ n^c, and zijl, where 1 ≤ i, j ≤ n^c and 1 ≤ l ≤ Q. The intuitive meaning of the variables will be that yijk = 1 iff the symbol k appears in square j at time i (and 0 otherwise), while zijl = 1 iff the head is in square j at time i and the machine at this time is in state ql.

Clearly the y and z variables code a computation completely, and thus all that needs to be done is to make a Boolean formula which is true iff the y and z variables code an accepting computation of N on input x. There are three conditions to take care of:

1. The computation starts with x.
2. It is a valid computation.
3. The computation accepts.

Of these three conditions, 1 and 3 are very easy to handle. The condition 1 is equivalent to the following conditions:

• For 1 ≤ j ≤ n we have y1jk = 1 iff k = xj.
• For n + 1 ≤ j ≤ n^c we have y1jk = 1 iff k = B.
• z1jl = 0 except when j = l = 1 (assuming that q1 is the start-state).

The condition 3 is equivalent to y(n^c)11 = 1 and z(n^c)1l = 1, i.e. at time n^c we have written a 1 in square 1, the head is located in square 1, and we have halted (assuming that ql is the halting state).

To see how to translate condition 2 into a formula we will need some more information.

Definition 7.7 A computational tableau C is locally correct if for every i and j there is some correct computation which has the same contents as C in squares (i', j') for i ≤ i' ≤ i + 1 and j ≤ j' ≤ j + 2.

That computation is a local phenomenon is now formalized as follows:

Lemma 7.8 A computational tableau describes a legal computation iff it is locally correct.

We leave the easy verification to the reader. Armed with this lemma we can now express condition 2 in a suitable way. To determine whether the variables yijk and zijl describe a legal computation, we only have to check all the local correctness conditions. Whether a given local area is correct is described as a condition on 6Q + 18 variables, and since any condition on K variables can be expressed as a formula of size 2^K, we can express each local correctness condition in constant size. The conjunction of all these correctness formulas now takes care of condition 2. The size of the formula is O(n^(2c)).

We now claim that the conjunction of the formulas taking care of the conditions 1-3 is satisfiable iff x ∈ B. This is fairly obvious, since there is a satisfying assignment iff there is an accepting computation of N on input x which uses at most space n^c and time n^c, which by the definition of N is equivalent to x ∈ B. To conclude the proof of the theorem we need just observe that to construct the formula is clearly polynomial time.

Let us make a couple of observations about the above proof. Firstly, the final formula is the conjunction of a number of subformulas where each subformula is of constant size. Without increasing the size of the entire formula by more than a constant, we can write each of the subformulas in conjunctive normal form (i.e. as a conjunction of disjunctions). This puts the entire formula on conjunctive normal form. This implies that satisfiability of formulas on conjunctive normal form is NP-complete. Let us call this problem CNF-SAT and we have the following theorem.

Theorem 7.9 CNF-SAT is NP-complete.

The second observation is that the given proof is almost identical to the proof of Theorem 5.15. It is just a question of coding a computation in a suitable way. There are also striking similarities with the proof of Theorem 2.26. Theorem 5.15 can be used to give another NP-complete problem, namely the existence of a computational tableau with certain conditions. However, if one thinks about this, we do not feel that this is a natural problem and hence we will not make that argument.

Having obtained one NP-complete problem, it turns out to be easy to construct more NP-complete problems. The main tool for this is given below.

Theorem 7.10 If A is NP-complete and B satisfies B ∈ NP and A ≤p B, then B is NP-complete.

Proof: We have only to check that for any C in NP it is true that C ≤p B. Since A is NP-complete we know that C ≤p A, and hence there is a polynomial-time computable function f such that x ∈ C ⇔ f(x) ∈ A. By the hypothesis of the theorem there is a polynomial time computable g such that y ∈ A ⇔ g(y) ∈ B. Now it clearly follows that x ∈ C ⇔ g(f(x)) ∈ B, and since the composition of two polynomial-time computable functions is polynomial-time computable we have proved C ≤p B, and thus the proof of the theorem is complete.

To put the proof in other words: polynomial-time reductions are transitive, i.e. if we can reduce C to A and A to B then we reduce C to B by composing the reductions. Clearly Theorem 7.10 is much more useful for proving problems NP-complete than the original definition. The reason is that to use Theorem 7.10 we only have to make one reduction, while to use the definition we have to make a reduction from any problem in NP.

Let 3-SAT be the problem of checking whether a restricted Boolean formula given on conjunctive normal form is satisfiable. The restriction is that there are exactly 3 literals (i.e. variables or negated variables) in each disjunction. Such a formula is called a 3-CNF formula and an example is:

(x1 ∨ x2 ∨ x3) ∧ (x̄1 ∨ x2 ∨ x4) ∧ (x̄2 ∨ x3 ∨ x4)

This formula is satisfiable as can be seen from the assignment x1 = 1, x2 = 1, x3 = 1 and x4 = 0. We have:

Theorem 7.11 3-SAT is NP-complete.

Proof: We will use Theorem 7.10, and since 3-SAT is clearly in NP, all that we need to do is to find a polynomial-time reduction from CNF-SAT to 3-SAT. Thus, given a CNF-SAT formula φ, we need to construct in polynomial time a 3-SAT formula f(φ) such that φ is satisfiable iff f(φ) is satisfiable.

Suppose φ = ∧_{i=1}^m Ci where the Ci are disjunctions containing an arbitrary number of literals. We will call Ci a clause and let |Ci| denote the number of literals in Ci. Let xi, i = 1, 2, ..., n, be the variables that appear in φ and let yij denote new variables. We will replace each clause by one or more clauses each containing exactly 3 literals. We have the following cases:

1. |Ci| = 1.
2. |Ci| = 2.
3. |Ci| = 3.
4. |Ci| > 3.

Let us take care of the cases one by one.

(1.) Suppose Ci = xj; then we replace it by

(xj ∨ yi1 ∨ yi2) ∧ (xj ∨ ȳi1 ∨ yi2) ∧ (xj ∨ yi1 ∨ ȳi2) ∧ (xj ∨ ȳi1 ∨ ȳi2).

(2.) Suppose Ci = (xj ∨ xk); then we replace it by

(xj ∨ xk ∨ yi1) ∧ (xj ∨ xk ∨ ȳi1).

(3.) We keep Ci as it is.

(4.) Suppose Ci = ∨_{j=1}^k uj for some literals uj; we then replace Ci by

(u1 ∨ u2 ∨ yi1) ∧ (∧_{j=1}^{k-4} (ȳij ∨ uj+2 ∨ yi(j+1))) ∧ (ȳi(k-3) ∨ uk-1 ∨ uk).

The formula we obtain by these substitutions is clearly a 3-CNF formula, and it is also obvious that given the original formula it can be constructed in polynomial time. Thus all we need to check is that φ is satisfiable precisely when f(φ) is satisfiable.
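The four rules translate directly into a program. In the following sketch a clause is a list of nonzero integers (v for the variable xv, -v for its negation) and next_var is the index of the first unused variable, from which the new y-variables are taken; the encoding is our own choice.

def to_3cnf(clauses, next_var):
    out = []
    for c in clauses:
        k = len(c)
        if k == 1:                      # rule (1): four clauses over y1, y2
            y1, y2 = next_var, next_var + 1
            next_var += 2
            out += [[c[0], y1, y2], [c[0], -y1, y2],
                    [c[0], y1, -y2], [c[0], -y1, -y2]]
        elif k == 2:                    # rule (2): two clauses over y1
            y1 = next_var
            next_var += 1
            out += [c + [y1], c + [-y1]]
        elif k == 3:                    # rule (3): keep the clause
            out.append(c)
        else:                           # rule (4): a chain of k - 2 clauses
            ys = list(range(next_var, next_var + k - 3))
            next_var += k - 3
            out.append([c[0], c[1], ys[0]])
            for j in range(k - 4):
                out.append([-ys[j], c[j + 2], ys[j + 1]])
            out.append([-ys[-1], c[-2], c[-1]])
    return out

print(to_3cnf([[1, -2, 3, 4, 5]], 6))
# [[1, -2, 6], [-6, 3, 7], [-7, 4, 5]]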

First assume that φ is satisfiable. We will give the same values to the xi and must find values for the yij to satisfy the formula. The clauses constructed according to rules 1-3 are already satisfied and thus will cause no problem. Now consider the case 4. Since the corresponding clause Ci in φ is satisfied, one of the uj is true, and suppose this is uj0. Now set yij = 1 for j ≤ j0 - 2 and yij = 0 for j > j0 - 2; then it is easy to verify that this assignment satisfies f(φ).

To prove the converse, suppose that f(φ) is satisfiable and let xi = αi be the assignment to the x variables in this satisfying assignment. We claim that this part of the assignment will satisfy φ. For clauses that fall under the rules 1-3 this is not too hard to see. Let us consider case 1. If Ci = xj and αj = 0 then, no matter what the values of yi1 and yi2 are, at least one of the clauses is not satisfied. Look at the clauses constructed under rule 4. If Ci was not satisfied then all the literals uj would be false, but this implies that

yi1 ∧ (∧_{j=1}^{k-4} (ȳij ∨ yi(j+1))) ∧ ȳi(k-3)

would be satisfied, but this is clearly not possible. Thus the reduction is correct and the proof is complete.

Proving problems NP-complete is not the main purpose of these notes, but let us at least give one more NP-completeness proof. Let 3-dimensional matching (3DM) be the following problem: Given a set of triplets (xi, yi, zi), i = 1, 2, ..., m, where xi ∈ X, yi ∈ Y and zi ∈ Z, and where X, Y and Z are sets of cardinality q. Is there a subset S of q of the triplets such that each element in X, Y and Z appears in exactly one of the triplets in S?

Theorem 7.12 3DM is NP-complete.

Proof: 3DM is clearly in NP, since a nondeterministic machine can just nondeterministically pick q of the triplets and then check if each element appears exactly once. To prove 3DM NP-complete we will reduce 3-SAT to it. Thus, given a 3-CNF formula φ, we must construct an instance f(φ) of 3DM such that φ is satisfiable iff f(φ) contains a matching. Suppose φ has n variables and m clauses. We will construct an instance of 3DM with three types of triplets: "variable triplets", "clause triplets" and "garbage collecting triplets".

Figure 8: The variable triplets.

Let us start by defining the variable triplets. The elements of the sets X, Y and Z will be defined as we go along. Suppose variable xi appears (with or without negation) in mi clauses; then we will associate with it the following 2mi triplets:

Tit = {(ūi[j], ai[j], bi[j]) : 1 ≤ j ≤ mi}
Tif = {(ui[j], ai[j+1], bi[j]) : 1 ≤ j < mi} ∪ {(ui[mi], ai[1], bi[mi])}

The elements ai[j] and bi[j] will not appear in any other triplets. As can be seen from Figure 8, this implies that any matching M must contain either all triplets from Tit or all triplets from Tif for any i. We will let the choice of which of the two sets to pick correspond to whether the variable xi is true or false.

Each clause Ci will have two special elements and three triplets. Suppose Ci = ui1 ∨ ui2 ∨ ui3 and it is the jk'th time the variable corresponding to the literal uik appears.

Then we include the triplets

(uik[jk], s[i], t[i]), k = 1, 2, 3.

Observe that the uik should here be interpreted as literals, and thus correspond to either ul or ūl; these are the same elements as in the variable triplets. The elements s[i] and t[i] will not appear in any other triplets, and this implies that in any matching precisely one of the triplets corresponding to each clause will be included. Observe that we can include a triplet precisely when one of the corresponding literals is true.

We have done the essential part of the construction, and all that remains is to specify the garbage collecting triplets which will match up the ui[j] and ūi[j] that have not been used. This is done by the following triplets:

(ui[j], g1[k], g2[k]), 1 ≤ i ≤ n, 1 ≤ j ≤ mi, 1 ≤ k ≤ 2m
(ūi[j], g1[k], g2[k]), 1 ≤ i ≤ n, 1 ≤ j ≤ mi, 1 ≤ k ≤ 2m

This enables us to cover any 2m literal-elements which have not been matched by previous triplets.

It is clear from the above description that the set of triplets can contain a matching only if the formula is satisfiable. Suppose on the other hand that the formula is satisfiable. Then make the choice of which T sets to pick based on the satisfying assignment. Then for each clause pick a variable that satisfies it and the corresponding clause triplet. This will cover m of the 3m literal-elements. The last 2m elements can be covered together with the g elements by the garbage collecting triplets. Thus there is a matching iff there is a satisfying assignment, and since the reduction is straightforward, the only thing needed to check that it is polynomial time is to check that we do not have to construct too many triplets. However, it is easy to check that there are 6m + 3m + 6m^2 triplets. This concludes the proof.

It turns out that most problems in NP that are not known to be in P are NP-complete. One notable exception is factoring; another one is graph-isomorphism. There are hundreds of known NP-complete problems and many appear in the listing in the final part of the excellent book by Garey and Johnson. Let us, however, move on and consider problems complete for other classes.

7.2 PSPACE-complete problems

The theory of PSPACE-complete problems is very similar to that of NP-complete problems. Of course the problems are different, but the concept of reduction is the same and the basic properties are the same.

Definition 7.13 A set A is PSPACE-complete iff

1. A ∈ PSPACE.
2. If B ∈ PSPACE then B ≤p A.

We have an immediate equivalent of Theorem 7.5.

Theorem 7.14 If A is PSPACE-complete then P = PSPACE ⇔ A ∈ P.

Proof: If you substitute PSPACE for NP in the proof of Theorem 7.5 you get a proof of Theorem 7.14.

By a similar argument we get:

Theorem 7.15 If A is PSPACE-complete then NP = PSPACE ⇔ A ∈ NP.

One last definition for completeness before we go to business.

Definition 7.16 A set A is PSPACE-hard if for any B ∈ PSPACE, B ≤p A.

Now let us encounter our first PSPACE-complete problem. When dealing with NP-complete problems we came across the satisfiability of Boolean formulas. Now we will consider quantified Boolean formulas, which look like

∀x1 ∃x2 ... Qxn φ(x),

where each xi can take the value 0 or 1, φ is a normal quantifier-free formula, and Q is either ∃ or ∀ depending on whether n is even or odd. Let TQBF be the set of True Quantified Boolean Formulas. We have:

Theorem 7. MB will get from conﬁguration C1 to conﬁguration C2 in at most 2k steps and never use more space than |x|c . k − 1. k. As before we have GET (C1 . With the present formalism it is more convenient to think of the ∨ as an existential quantiﬁer and we get GET (C1 . Qxn φ(x)|x1 =1 are true. . . x) = ∃C (GET (C1 . 79 . k − 1. These two formulas can be evaluated by induction in space O(nS) and since we can evaluate one and then evaluate the other in the same space while only remembering the value of the ﬁrst evaluation and which formula to evaluate the claim follows. x) ∧ GET (C. C. Proof: Let us ﬁrst check that TQBF can be recognized in polynomial space. x) which means that on input x.17 TQBF is PSPACE-complete. Of course if the ﬁrst quantiﬁer is ∃ we just need to check that one of the values is true. x)) . C2 . k. x) ∧ GET (C. k − 1. C2 . Next we need to take care of the slightly more diﬃcult part of proving that if B ∈ P SP ACE then B ≤p T QBF . To the induction step we use the observation that the given formula is true iﬀ both ∃x2 . k − 1. C. Qxn φ(x)|x1 =0 and ∃x2 . . From this the claim follows and thus T QBF ∈ P SP ACE. We claim that if the formula has n variables and the size of the description of φ is bounded by S then to check whether: ∀x1 ∃x2 . Suppose that B is recognized by Turing machine MB which never uses more space than |x|c on input x for a given constant c. . C2 . x)) . C2 . k. . We will again use the predicate GET (C1 . Remark 7.18 By being more careful it is not to hard to see that the evaluation actually can be done in space O(n + S). Qxn φ(x) is true can be done in space O((n + 1)S). C2 . . We prove this by induction and ﬁrst observe that it is certainly true for n = 0. x) = C (GET (C1 .

Next we need to take care of the slightly more difficult part, proving that if B ∈ PSPACE then B ≤p TQBF. Suppose that B is recognized by a Turing machine MB which never uses more space than |x|^c on input x, for a given constant c. We will again use the predicate GET(C1, C2, k, x), which means that on input x, MB will get from configuration C1 to configuration C2 in at most 2^k steps and never use more space than |x|^c. As before we have

GET(C1, C2, k, x) = ∨_C (GET(C1, C, k-1, x) ∧ GET(C, C2, k-1, x)).

With the present formalism it is more convenient to think of the ∨ as an existential quantifier, and we get

GET(C1, C2, k, x) = ∃C (GET(C1, C, k-1, x) ∧ GET(C, C2, k-1, x)).

Now we could write the two GETs to the right in the same way, but this would mean trouble since we would then get a formula of exponential size. However, there is a way around this by replacing the ∧ by a universal quantifier, obtaining

GET(C1, C2, k, x) = ∃C ∀(A, B) ∈ {(C1, C), (C, C2)} GET(A, B, k-1, x).

Now we only get one copy of GET to expand further, and if we continue recursively we get 2k quantifiers and a final formula GET(X, Y, 0, x). All that remains to do is to check that it is sufficient to quantify over Boolean variables, rather than the more complicated objects we are currently quantifying over, and that the final application of GET can be written as a Boolean formula. Both these points are easy and let us just give a rough outline. It is straightforward to encode a configuration as a set of Boolean variables. The ∀ quantification is just a binary choice and thus can be represented by a Boolean variable which will take the value 0 if we make the first choice and 1 if we make the other. Finally, to check whether we can get from one configuration to another in one step is just a simple formula where we list all possible transitions of the Turing machine. We leave the details to the interested reader. Now since x ∈ B iff GET(Cst, Cacc, d|x|^c, x) is true for the appropriate constant d, and since we know how to write the latter condition as a quantified Boolean formula, we have completed the reduction.

In fact, if one writes down the final formula carefully one can write it in CNF, i.e. if we restrict the formula φ in TQBF to be a CNF-formula we still obtain a PSPACE-complete problem. We call this problem TQBF-CNF.

Theorem 7.19 TQBF-CNF is PSPACE-complete.

To get other PSPACE-complete problems we first state an obvious theorem.

Theorem 7.20 If A is PSPACE-complete and B satisfies B ∈ PSPACE and A ≤p B, then B is PSPACE-complete.

The main source of PSPACE-complete problems outside logic is games. It is only a slight exaggeration to say that to determine who is the winner in most games is PSPACE-complete. The reason that games are this hard is that already quantified Boolean formulas can be viewed as a game between two players, "Exists" and "Forall", in the following way. Given a formula, "Exists" chooses the values of all variables which correspond to existential quantifiers and "Forall" chooses the values of all variables which correspond to universal quantifiers. "Exists" wins the game iff the final total assignment satisfies the formula. It is not hard to see that the formula is true iff "Exists" wins the game when both players play optimally. We will call the players in the game ∃ and ∀.

Of course the PSPACE-completeness cannot apply to any usual game like chess, since chess is of a given constant size and hence not very interesting from our point of view. But games that can be generalized to arbitrary size are often PSPACE-complete (or hard). Thus, for instance, to determine who is the winner in a given position of generalized checkers or generalized go is PSPACE-hard. We will not get into those games but instead consider a more childish game.

"Geography" is a two-person game where one person starts by giving the name of a geographical place, and then the two people alternatingly name geographical places, subject to the two conditions that no place is named twice and that each name starts with the same letter that the previous name ended with. The first person not being able to name a place satisfying these two conditions loses.

To get a computational problem out of this game let us generalize. "Generalized Geography" (GG) is a graph game where two people alternatingly choose nodes in a directed graph. Each node must be a successor of the previous node and no node can be chosen twice. Initially the game starts with a given node. The first person having no choice loses the game. The computational problem is now: Given a graph, which of the two players has a winning strategy?

Let us first observe that this is clearly a generalization of the geography game where the nodes correspond to places and there is an edge from A to B if A ends with the same letter B starts with. (On the other hand it is a slightly cheating generalization, since the skill in the normal game is to know as many geographical names as possible.)

Theorem 7.21 Generalized geography is PSPACE-complete.

Proof: It is not hard to verify by normal procedures that GG is in PSPACE, and thus by Theorem 7.20 we need only prove that TQBF-CNF can be reduced to GG. Given the formula

∃x1 ∀x2 ∃x3 [(x1 ∨ x̄2 ∨ x3) ∧ · · · ]

with clauses c1, ..., cl, we construct the graph given in Figure 9.

Figure 9: Generalized geography graph (start node s, a diamond for each variable x1, x2, ..., and clause nodes c1, c2, ..., cl).

There is a diamond for each variable of the formula, with the last diamond pointing to nodes representing all the clauses of the formula, and each clause node pointing to nodes representing the literals in the clause. Finally, these nodes are hooked back to the top or the bottom of the diamond for the corresponding variable, according to whether the literal is positive or negative. The game starts at the node named s, and the ∃ and ∀ labels in the diagram show whose turn it is to move at each stage.

We can think of ∃'s and ∀'s choices of how to move through the diamonds as setting the variables (true if the high road is taken and false if the low road is taken). Then ∀ gets to pick any clause that he claims to be false, and ∃ must pick a literal in that clause which he will claim is true. If ∃'s claim is valid, ∀ will not be able to move without reusing a node, while if the claim is not true, ∀ will be able to move and then ∃ will be stuck. Thus we see that ∃ has a winning strategy iff the formula is true. Since the reduction clearly is polynomial time, we have proved that GG is PSPACE-complete.

7.3 P-complete problems

The question P = NP? is of real practical importance, since it is a question whether many natural problems can be solved efficiently. The question whether P is equal to L is not of the same practical importance (although it has a nice connection with parallel computation which we have not seen yet), but from a theoretical point of view it is of course of major importance.

Up to this point we have allowed polynomial time for free when we have compared problems. This is clearly not possible when we are considering the question P = L?, and thus we need a finer reduction concept. The modification is very slight. We just require the reduction-function to be computable in logarithmic space.

Definition 7.22 Let A and B be two sets. Then A ≤L B (read as "A is logarithmic space reducible to B") iff there is a function f, computable in logarithmic space, such that x ∈ A ⇔ f(x) ∈ B.

One small lemma is needed, namely that the composition of two functions computable in logarithmic space is computable in logarithmic space. We leave this as an exercise. Using this we can now define P-completeness.

Definition 7.23 A set A is P-complete iff

1. A ∈ P.
2. If B ∈ P then B ≤L A.

We get the usual theorem.

Theorem 7.24 If A is P-complete then P = L ⇔ A ∈ L.

The proof is identical to the other proofs. We are now ready to encounter our first P-complete problem. Define a Boolean circuit to be a directed acyclic graph where each node is labeled by either ∧, ∨ or ¬, and the number of incoming edges is at least two in the first two cases and one in the last. The graph contains sources which are labelled by input variables xi, and one sink which is called the output node. An example is given in Figure 10; in this circuit all edges are directed upwards. Given values of the inputs to the circuit one can evaluate the circuit in the natural way. Let CVAL be the following problem: Given a circuit and values of the inputs of the circuit, what is the output of the circuit? We have:

Theorem 7.25 CVAL is P-complete.
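Before the proof, let us make the first claim, that circuits are easy to evaluate, concrete. In the sketch below the gates are encoded as (name, operation, argument list) triples, listed in an order where every gate appears after its inputs; the representation is our own choice.

def eval_circuit(gates, inputs):
    val = dict(inputs)                        # values of the source nodes
    for name, op, args in gates:
        a = [val[g] for g in args]
        if op == 'and':
            val[name] = all(a)
        elif op == 'or':
            val[name] = any(a)
        elif op == 'not':
            val[name] = not a[0]
    return val[gates[-1][0]]                  # the last gate is the output node

gates = [('g1', 'or', ['x1', 'x2']), ('g2', 'and', ['g1', 'x3'])]
print(eval_circuit(gates, {'x1': False, 'x2': True, 'x3': True}))   # True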

Figure 10: A circuit (with inputs x1, x2, x3).

Proof: First observe that CVAL belongs to P, since it is straightforward to evaluate a circuit once the inputs are given. Now take any B ∈ P. We need to reduce B to CVAL. Assume that B is recognized by a Turing machine MB that runs in time at most n^c for inputs of length n. We will again use the concept of a computation tableau. Since we are considering deterministic computation, there is a unique computation tableau given the input. The content of each square of the tableau is easily coded by a constant number of Boolean values. The content of a given square of the tableau only depends on the contents of the square itself and its two neighboring squares at the previous time step. This means that we can build a constant piece of circuitry that computes the Boolean variables corresponding to the square (i, j) in the computation tableau from the variables corresponding to (i-1, j-1), (i-1, j) and (i-1, j+1). We construct a circuit which successively computes these descriptions. Thus, to construct a circuit that given the correct input simulates the computation tableau of MB, we just have to copy this piece of circuitry everywhere. The output of the circuit will correspond to the output of the machine, i.e. the content of the first square at the final timestep. Thus in logarithmic space we can construct a circuit and an input to this circuit such that the circuit outputs 1 iff MB outputs 1 on input x. To print the description of this circuit on the output tape, all we need to remember is the identities of the nodes of the circuit. This can be done in O(log n) space. Thus we have a correct reduction and the proof is complete.

Several other P-complete problems can be constructed by making logarithmic space reductions from CVAL. We will however not present any more P-complete problems in this section.

7.4 NL-complete problems

The final question we will consider is the NL = L? question. Again we have complete problems under L-reductions.

Definition 7.26 A set A is NL-complete iff

1. A ∈ NL.
2. If B ∈ NL then B ≤L A.

As before we get:

Theorem 7.27 If A is NL-complete then NL = L ⇔ A ∈ L.

We have already encountered the standard NL-complete problem, namely graph-reachability (GR), i.e. given a directed graph G and two nodes s and t of G, is it possible to find a directed path from s to t.

Theorem 7.28 Graph-reachability is NL-complete.

Proof: We have more or less already proved the theorem. The fact that GR ∈ NL was established in Section 5. That the problem is NL-complete was implicitly used in the proof of Theorem 6.4. Let us recall this proof. We started with an arbitrary nondeterministic machine M and an input x to M. We then constructed a graph (of configurations of M) with two special nodes s and t (corresponding to the start configuration and the accepting configuration, respectively) where x was accepted by M iff we could reach t from s. We then observed that graph-reachability could be done in polynomial time and hence NL ⊆ P. The first part of this proof is clearly the desired reduction. All we need do is to prove that the reduction can be done in logarithmic space. This is not hard and we leave this to the reader.

8 Constructing more complexity-classes

Let us just briefly mention some more complexity-classes which are very related to the given classes. Before, we have pointed out that P is symmetric with respect to complementation, i.e. if a set A belongs to P then so does its complement Ā. We have also pointed out that this is not true for NP. Thus it is natural to talk about the set of languages whose complement belongs to NP.

Definition 8.1 A set A belongs to co-NP iff its complement Ā belongs to NP.

It is in general believed that co-NP is not equal to NP. In general, for any complexity-class C that is not closed under taking complements, we can define a corresponding complexity-class co-C. The only other such class we have encountered is NL.

Definition 8.2 A set A belongs to co-NL iff its complement Ā belongs to NL.

It was generally believed that co-NL is not equal to NL. Thus it came as a surprise when the following theorem was proved independently by Immerman and Szelepcsényi in 1988.

Theorem 8.3 Suppose S(n) is space constructible, S(n) ≥ log n, and A can be recognized in nondeterministic space S(n). Then the complement of A can be recognized in nondeterministic space O(S(n)).

We get the following immediate corollary:

Corollary 8.4 NL = co-NL.

Remark 8.5 Although this theorem was a surprise, one already knew that nondeterminism was not that helpful with regard to space. In particular, by Savitch's theorem (Theorem 6.7) we know that whatever can be done in nondeterministic space S(n) can be done in deterministic space O(S^2(n)). On the other hand, the smallest deterministic time-class that is known to include all things that can be done in nondeterministic time T(n) is essentially 2^(T(n)). Thus, in spite of the given collapse, it is still believed that NP ≠ co-NP.

Proof: For notational convenience we will only prove the corollary; the general case will follow from just substituting S(n) for log n. We will prove that co-NL ⊆ NL. By symmetry this will imply the equality of the two classes. Since graph-reachability is complete for NL, its complement is complete for co-NL. Thus, to prove that co-NL ⊆ NL we need only prove that graph-nonreachability is in NL. In particular, we need only describe a nondeterministic algorithm which works in logarithmic space and, given a graph G and two vertices s and t, accepts iff there is no path from s to t.

The idea behind the algorithm is to compute the number of nodes reachable from s. Once we know this number we can verify that t is not reachable by just guessing (and checking) all reachable vertices. Since we know their number, we know when we have generated all. Since we cannot remember them all individually, we need to guess them in increasing order. This way we need only remember the number of vertices seen this far and the last one seen.

The number of reachable vertices is computed iteratively. In stage k we compute the number of vertices which are reachable with at most k edges. This is done by at each stage nondeterministically generating all vertices that can be reached in k - 1 steps, and thus we can without error decide if a given vertex is reachable in k steps. The complete algorithm now works as follows:

Nk = 1
for k = 1 to n do
  newNk = 0
  for l = 1 to n do
    check = 0
    for m = 1 to n do
      Nondeterministically try to generate a path from s to vm of length at most k - 1.
      If this is successful then
        check = check + 1
        If vm is connected to vl (or equal to vl) then
          newNk = newNk + 1
          goto next l
        endif
      endif
    next m
    if check < Nk reject and stop
  next l

  Nk = newNk
next k
check = 0
for m = 1 to n do
  Nondeterministically try to generate a path from s to vm of length at most n - 1.
  If this is successful then
    check = check + 1
    If vm is t reject and stop
  endif
next m
if check = Nk accept otherwise reject

We need to prove that it is correct and that it only uses logarithmic space. Let us start with the latter part. The variables used by the program are k, l, m, Nk, newNk and check. It is easy to see that each of them is a nonnegative integer which is at most n, and thus we can store these values in space O(log n). On top of this we need to nondeterministically guess a path of at most a certain length at certain parts of the program. This can be done in logarithmic space by the example in Section 5.1 augmented with a simple counter.

Now let us consider correctness. We claim that, unless the algorithm has already halted and rejected, the counter Nk will at stage k give the number of vertices reachable by a path of length at most k from s. We prove this by induction, and the base case k = 0 is trivial since only s can be reached with 0 edges and Nk is initially 1. For the induction step, observe that since the algorithm does not halt, and by the induction hypothesis, for each l the algorithm generates all vm which can be reached in at most k - 1 steps. Thus it is easy to see that the algorithm decides correctly whether vl is reachable in at most k steps, and thus the new value of Nk will be correct and the induction step is complete. Finally, for the final loop, observe that if in the end check = Nk then we have generated all vertices that are reachable from s with at most n - 1 steps (and hence reachable at all), and if t was not one of them we accept correctly. The argument is complete and we have proved Corollary 8.4.
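The nondeterministic guessing cannot, of course, be executed directly, but the counting skeleton of the proof can. The following deterministic sketch computes the numbers Nk; it stores the whole set of reachable vertices, so it does not respect the logarithmic space bound, which in the algorithm above is achieved by guessing the set in increasing order instead of storing it.

def reachable_counts(adj, s):
    n = len(adj)
    reach = {s}                           # vertices within distance k; here k = 0
    counts = [1]
    for k in range(1, n):
        reach = reach | {l for l in adj
                         if any(l == m or l in adj[m] for m in reach)}
        counts.append(len(reach))         # this is Nk
    return counts

adj = {'s': ['a'], 'a': ['t'], 't': [], 'b': []}
print(reachable_counts(adj, 's'))         # [1, 2, 3, 3]; t is counted, b is not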

9 Probabilistic computation

From a practical point of view it is sufficient if an algorithm is fast most of the time. One could relax conditions even further and just ask that the algorithm is correct most of the time. A key point when reasoning about such algorithms is to make precise what is meant by "most of the time", i.e. we need to introduce some probabilistic assumptions. There are two basic ways to do this:

1. To consider a random input, i.e. to take a probability distribution over the inputs and ask that the algorithm performs well for most inputs.
2. To allow the algorithm to make random choices, and require that the algorithm is fast (correct) for every input.

Of course one could also combine the two ways of introducing randomness. Both approaches give many interesting results, but here we will only study the second approach.

Definition 9.1 A probabilistic Turing machine is a normal deterministic Turing machine equipped with a special coinflipping state. When the machine enters this state it receives a bit which is 0 with probability 1/2 and 1 with probability 1/2.

As with nondeterministic Turing machines, a probabilistic Turing machine can do many different computations on a given input. Thus, for instance, the output is not uniquely determined, but rather is given by a probability distribution. Also the running time is a random variable, and we will say that a probabilistic Turing machine runs in time S if it always halts in time S(n) on every input of length n. Another interesting running time characteristic is the expected running time. We can now define a new complexity class.

Definition 9.2 A set A belongs to BPP iff there is a polynomial time probabilistic Turing machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 2/3
x ∉ A ⇒ Pr[M(x) = 1] ≤ 1/3

BPP is an abbreviation for Bounded Probabilistic Polynomial time. Thus the machine M gives at least a reasonable guess of whether an input x belongs to A (we will later see that this guess can be improved).
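One standard way the guess is improved, sketched here and not the notes' later argument, is to run M several times and take a majority vote; the error probability then drops exponentially in the number of runs. M below is any randomized 0/1 function with the stated error bounds.

import random

def amplified(M, x, k):
    ones = sum(M(x) for _ in range(k))   # k independent runs of M on x
    return 1 if 2 * ones > k else 0      # majority vote

# A toy machine that answers correctly with probability 3/4:
M = lambda x: (x % 2) if random.random() < 0.75 else 1 - (x % 2)
print(amplified(M, 5, 101))              # 1 with overwhelming probability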

Thus the machine M gives at least a reasonable guess of whether an input x belongs to A (we will later see that this guess can be improved). To get the ideas behind these definitions, let us next give an example of a language in BPP not known to be in P.

Example 9.3 Checking polynomial identities: Given two polynomials P1 and P2 in several variables represented in some convenient way (e.g. as determinants, products or something similar), do P1 and P2 represent the same polynomial? We require that the representation is such that if we are given values of the variables then we can evaluate the polynomials in polynomial time. A typical example would be to investigate whether the equality

$$\det\begin{pmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{n-1}\\
1 & x_2 & x_2^2 & \cdots & x_2^{n-1}\\
1 & x_3 & x_3^2 & \cdots & x_3^{n-1}\\
\vdots & \vdots & \vdots & & \vdots\\
1 & x_n & x_n^2 & \cdots & x_n^{n-1}
\end{pmatrix} = \prod_{i>j} (x_i - x_j)$$

is a true identity. The obvious approach to this problem is to expand the polynomials into a sum of monomials and then compare the expansions term by term. This procedure will in general be quite inefficient since there might be exponentially many monomials (as in the example given). Our probabilistic algorithm will instead evaluate the two polynomials at randomly chosen points. If the polynomials disagree on one of these points they are different, and we will prove that if they agree on all points then they are probably the same polynomial. The algorithm will depend on two extra parameters, d and k. The first parameter is a known upper bound for the degrees of the polynomials in question (in our example we could take d = n(n − 1)/2) and the second is related to the error probability.

Input P1 and P2.
For i = 1, 2, ..., k do
   Pick random integer values independently for x1 through xn in the range [1, 2nd].
   If P1(x) ≠ P2(x) conclude that P1 ≠ P2 (answer 0) and stop.
Next i.
Conclude that P1 = P2 (answer 1).
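As a concrete illustration, here is a small Python sketch of the test for polynomials given as black-box evaluation procedures. The function names and the n = 3 instance of the Vandermonde identity are our own, not the notes'.

    import random

    def identical(p1, p2, n, d, k):
        # p1, p2: functions taking a list of n integer values;
        # d: degree bound; k: number of trials.
        for _ in range(k):
            x = [random.randint(1, 2 * n * d) for _ in range(n)]
            if p1(x) != p2(x):
                return 0            # surely different
        return 1                    # probably equal (error probability <= 2**-k)

    # The Vandermonde identity for n = 3, degree bound d = n(n-1)/2 = 3.
    def det3(x):
        a, b, c = x
        # cofactor expansion of det [[1,a,a^2],[1,b,b^2],[1,c,c^2]]
        return (b*c*c - c*b*b) - (a*c*c - c*a*a) + (a*b*b - b*a*a)

    def prod3(x):
        a, b, c = x
        return (b - a) * (c - a) * (c - b)

    print(identical(det3, prod3, 3, 3, 20))   # 1: the identity holds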

Clearly if we answer 0 we are always correct, and to see that the algorithm is useful we have to prove that most of the time we are correct even when we answer 1. If P1 and P2 represent the same polynomial then we will always answer 1 and we always get the correct answer. When P1 and P2 do not represent the same polynomial, call an x such that P1(x) = P2(x) an unlucky x. Thus the algorithm gives the correct answer unless we happen to pick k unlucky x's. The key lemma is the following.

Lemma 9.4 Given a nonzero polynomial P in n variables and of degree ≤ d, the set Z = {x | 1 ≤ xi ≤ R, 1 ≤ i ≤ n ∧ P(x) = 0} has cardinality at most dnR^{n−1}.

Proof: We prove the lemma by induction over n. For n = 1 the lemma follows from the fact that a polynomial of degree d has at most d zeroes.

For the induction step, let us consider the polynomials Qj in the variables x1, ..., x_{n−1} obtained by substituting j for the variable xn. Qj is a polynomial of degree ≤ d in n − 1 variables, and thus we could use the induction hypothesis if we knew that Qj was nonzero. We claim that there are at most d different j such that Qj is identically zero. To see this take any monomial in P which appears with a nonzero coefficient (assume for the sake of the argument that it is x1x2xn). Now look at the coefficient of x1x2 in Qj. It is the value at j of a nonzero polynomial of degree ≤ d − 2. Thus there are at most d − 2 values of j such that this coefficient is 0, and in general at most d values of j such that Qj is identically zero.

The set Z splits into the union of sets obtained by fixing the last coordinate to any value in the range 1 to R. When the corresponding polynomial Qj is nonzero, then by the induction hypothesis the cardinality of the set is bounded by (n − 1)dR^{n−2}, and when the polynomial is zero the cardinality is R^{n−1}. Since there are at most R sets of the first kind and d of the second we get the total estimate R(n − 1)dR^{n−2} + dR^{n−1} = ndR^{n−1} and the induction is complete.

Using this lemma we can analyze the algorithm. By applying the above lemma to P1 − P2 we see that there are at most (2dn)^n / 2 unlucky x, and thus the probability that we pick one unlucky x is bounded by 1/2.
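Spelled out for the parameter choice R = 2nd used in the algorithm, the lemma gives exactly the claimed bound:

$$|Z| \le dnR^{n-1} = dn(2nd)^{n-1} = \frac{(2nd)^n}{2},
\qquad
\Pr[x \text{ unlucky}] \le \frac{|Z|}{R^n} = \frac{1}{2}.$$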

Since the k x's are independent, the probability of them all being unlucky is at most 2^{−k}. Thus if k is reasonably large we get the correct answer with high probability. All that remains to see that the problem lies in BPP is to observe that the algorithm is polynomial time, but this is obvious since the essential step of the algorithm is to evaluate the polynomials and this is polynomial time by assumption.

In the example we saw that if we were willing to run the algorithm longer (i.e. try more random points) then we could make the probability of error arbitrarily small. It is not hard to see that this is true in general.

Theorem 9.5 A set A belongs to BPP iff there is a polynomial time probabilistic Turing machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 1 − 2^{−|x|−2}
x ∉ A ⇒ Pr[M(x) = 1] ≤ 2^{−|x|−2}

Proof: Clearly the above conditions are stronger than our original definition, and thus if A satisfies the above condition then it belongs to BPP. We need to prove the converse, i.e. that if A ∈ BPP we can find a machine M′ which satisfies the above condition. We know by the definition of BPP that there is a machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 2/3
x ∉ A ⇒ Pr[M(x) = 1] ≤ 1/3.

Now let M′ be defined by running M C = 2(|x| + 3)/log(9/8) times with independent random choices and outputting 1 iff M outputs 1 at least C/2 times. We need to verify the claim that this M′ satisfies the condition in the theorem. Assume that x ∈ A and that M outputs 1 with probability p on input x (we know that p ≥ 2/3). Then the probability that M′ does not output 1 is bounded by

$$\sum_{i=0}^{C/2} \binom{C}{i} p^i (1-p)^{C-i}.$$

The ratio of two consecutive terms in this sum is at least p/(1 − p) ≥ 2, and thus if the last term is T then the sum is bounded by \sum_{i=0}^{C/2} 2^{i−C/2} T ≤ 2T. The last term is bounded by

$$T \le \binom{C}{C/2} p^{C/2} (1-p)^{C/2} \le 2^C (2/3)^{C/2} (1/3)^{C/2} \le (8/9)^{C/2} \le 2^{-|x|-3}$$

and thus the first condition of the theorem follows. The second condition is proved in a similar way.
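A minimal Python sketch of this amplification, with the base machine modeled as a randomized function returning 0 or 1; the names and the toy base machine are our own assumptions, not the notes'.

    import math, random

    def amplify(M, x, n):
        # Run M on x C times and take a majority vote, C as in the proof above.
        C = math.ceil(2 * (n + 3) / math.log2(9 / 8))
        ones = sum(M(x) for _ in range(C))
        return 1 if ones >= C / 2 else 0

    def toy_M(x):
        # Stand-in for a BPP machine: gives the correct answer 1 w.p. 2/3.
        return 1 if random.random() < 2 / 3 else 0

    print(amplify(toy_M, "input", 8))   # 1 except with exponentially small probability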

In our example we proved more than needed to establish that the problem in question was in BPP. In particular we proved that if the input was in the language the answer was always correct. With this additional restriction we get a new complexity class.

Definition 9.6 A set A belongs to R iff there is a polynomial time probabilistic Turing machine M such that

x ∈ A ⇒ Pr[M(x) = 1] ≥ 2/3
x ∉ A ⇒ Pr[M(x) = 1] = 0.

Remark 9.7 I believe that R is short for Random polynomial time. Hence this class is sometimes also called RP.

While BPP is closed under complement, this is not obvious (or known) for R, and thus we also have a third probabilistic complexity class, co-R, the set of languages whose complement lies in R. Our example "Polynomial identities" is a member of co-R. Observe that both R and co-R are subsets of BPP.

There are not many known examples of problems not known to be in P that lie in BPP. The main other example is to recognize primes. However, by quite elaborate methods it is possible to prove that primes belong to R ∩ co-R, and for this class we can make a very strong statement. We will not discuss that algorithm here.

Theorem 9.8 A set A belongs to R ∩ co-R iff there is a probabilistic machine M which runs in expected polynomial time and always decides A correctly.

Proof: By assumption there is a machine M1 that outputs 1 with probability at least 2/3 when the input x is in A and with probability 0 when x is not in A (since A ∈ R). Similarly, since A ∈ co-R there is a machine M2 that outputs 1 with probability at least 2/3 when x is not in A and never when x is in A. Both M1 and M2 run in polynomial time. Now on input x alternate in running M1 and M2 until one of them answers 1. When this happens we know that x ∈ A if the 1-answer was given by M1, and we know that x ∉ A if it was given by M2. Each time we run both machines we have probability 2/3 of getting a decisive answer, and hence it follows that the procedure runs in expected polynomial time.
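The construction in the proof, as a Python sketch; M1 and M2 are toy stand-ins for the two one-sided machines and all names are our own.

    import random

    def decide(M1, M2, x):
        # Alternate the R-machine for A and the R-machine for the complement
        # until one of them produces the (always correct) answer 1.
        while True:
            if M1(x):
                return True      # x is in A for sure
            if M2(x):
                return False     # x is not in A for sure

    # Toy stand-ins: pretend x is in A iff x is even; each machine is
    # one-sided and succeeds with probability 2/3 per run.
    M1 = lambda x: x % 2 == 0 and random.random() < 2 / 3
    M2 = lambda x: x % 2 == 1 and random.random() < 2 / 3
    print(decide(M1, M2, 4), decide(M1, M2, 7))    # True False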

9.1 Relations to other complexity classes

Let us relate the newly defined complexity classes to our old classes. Clearly any of the defined classes contains P since we can always ignore our possibility to use randomness. We have some non-obvious relations.

Theorem 9.9 R ⊆ NP.

Proof: We know by the definition of R that if A ∈ R then there is a machine M such that when x ∈ A then with probability ≥ 2/3 M accepts x, and when x ∉ A there are no accepting computations. But this implies that if we replace the probabilistic choices by non-deterministic choices, M accepts x precisely when x ∈ A.

We immediately get:

Theorem 9.10 co-R ⊆ co-NP.

Our next theorem is also not very surprising.

Theorem 9.11 Suppose A ∈ BPP and the machine M that recognizes A runs in time T(n) and uses at most p(n) coins. Then A can be recognized by a deterministic machine that runs in time O(2^{p(n)} T(n)) and space O(T(n) + p(n)).

Proof: Just run M for all possible 2^{p(n)} sets of coinflips and calculate the probability that M accepts. A straightforward implementation gives the given resource bounds.

The above theorem immediately yields:

Theorem 9.12 BPP ⊆ PSPACE.

Apart from these theorems, nothing is known about the relation between our probabilistic classes and our old classes. There is not a great consensus what the true relations are, but many people think it is possible that P = BPP.
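The simulation of Theorem 9.11 in Python, with the machine modeled as a deterministic function of the input and an explicit coin string; the modeling and all names are our own.

    from itertools import product

    def accept_probability(M, x, p):
        # Enumerate all 2^p coin sequences and count accepting runs.
        accepting = sum(M(x, coins) for coins in product((0, 1), repeat=p))
        return accepting / 2 ** p

    # Toy BPP machine with 3 coins: correct answer is "x odd",
    # and it errs exactly when all coins come up 0.
    M = lambda x, coins: (x % 2 == 1) != all(c == 0 for c in coins)
    print(accept_probability(M, 5, 3))   # 0.875 >= 2/3, so accept
    print(accept_probability(M, 4, 3))   # 0.125 <= 1/3, so reject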

10 Pseudorandom number generators

In the last section we used random numbers, i.e. we assumed that we had access to an unlimited number of perfectly random coins. In practice this might not be the case. One could indeed question whether there are any random phenomena in nature, and thus whether randomness in computation makes sense at all. This is a valid question, but it is mostly philosophical in nature and we will not discuss it. Instead we will take the optimistic attitude that there is randomness, but that there is a problem getting enough random numbers into the computer. Without discussing the matter, just assume that somehow we can get a few random bits into the computer.

For the sake of this section we will assume that we only need random bits, where each bit is 0 and 1 with probability 1/2. This is not a severe restriction since random bits can be turned into random numbers in many ways.

The common solution to the problem of not having enough truly random numbers is to have what is generally called a pseudorandom number generator (we will in the future call them pseudorandom bit generators since we will be generating bits). This is a function which takes a short truly random string and produces a longer "random looking" string. How the short truly random string (which is called the seed) is produced is clearly a problem (it is generally supplied by the user), but we will not concern ourselves with this problem.

The main question we will deal with in this section is how to define what we want from a pseudorandom generator and how to construct such a generator. One obvious property is that it should be easy to run and produce something useful, i.e. it should be computable in polynomial time and the output should be longer than the input. Something that has only these two properties is a bit generator.

Definition 10.1 A bit generator is a polynomial time computable function that takes a binary string as input and on an input of length n produces an output of length p(n), where p is a polynomial such that p(n) > n for all n. For technical reasons we assume that p(n) is strictly increasing with n.

Note that the definition allows for the output to be of only length n + 1, and this does not seem to be much of a generator. We will see later (Theorem 10.8) that this is not a real problem. The more interesting aspect of pseudorandom bit generators is to try to formalize the "random looking" requirement of the output.

Traditionally this was interpreted as that the output bits passed a small set of standard statistical tests.

Definition 10.2 A statistical test is a function from binary strings to {0, 1}.

Intuitively the output 1 can be interpreted as the string passing the test and the output 0 as failing. Note, however, that not even all strings produced truly at random will pass a statistical test. The tempting definition of pseudorandom generator is now:

Definition 10.3 (First attempt) A bit generator passes a statistical test S if the probability that S outputs 1 on a random output of the generator is equal to the probability that S outputs 1 on a truly random string.

Here a random output of the generator is defined as the output on a truly random seed.

Definition 10.4 (First attempt) A bit generator is pseudorandom if it passes all statistical tests.

A bit generator that passes all statistical tests produces a very random looking output. However the definition is too restrictive and there is no such generator. Take any bit generator G and consider the following statistical test:

SG(x) = 1 if x can be output by G, 0 otherwise.

First observe that if G stretches strings of length n to strings of length p(n) in time T(n) then SG can be implemented on strings of length p(n) to run in time 2^n T(n), since we just run G on all possible strings of length n and check if one of them equals x. When we run SG on the output of G then the result will always be 1. On the other hand when we feed SG a truly random string then the probability that we get output 1 is at most 1/2. This follows since there is one output for each seed, which implies that there are at most 2^n possible outputs of G of length p(n) (here we use that p is strictly increasing), and since there are 2^{p(n)} possible strings and p(n) ≥ n + 1, at most half of the strings are possible outputs of G. This is the germ of what today is believed to be the correct definition.
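The exponential-time test SG, written out in Python for a toy generator; the generator shown is just an arbitrary example of a function stretching n to n + 1 bits (and is obviously not pseudorandom, which is exactly what SG detects). All names are our own.

    from itertools import product

    def G(seed):
        # Toy bit generator: append the parity of the seed (stretches n to n+1 bits).
        return seed + (sum(seed) % 2,)

    def S_G(x):
        # Brute force over all 2^n seeds: is x a possible output of G?
        n = len(x) - 1
        return int(any(G(seed) == x for seed in product((0, 1), repeat=n)))

    print(S_G(G((1, 0, 1))))    # 1: outputs of G always pass
    print(S_G((1, 0, 1, 1)))    # 0: parity bit is wrong, not an output of G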

97 . By the analysis of SG this implies that the probability that sG outputs 1 on a random output of G is diﬀerent from the probability that it outputs 1 on a random input. Otherwise output 0. Since G is assumed to be polynomial time.6 From the development up to this point polynomial time is the natural requirement on eﬃcient statistical tests. This is counterintuitive since for large n the test sG is very weak. G passes the statistical test S if for any k there is a Nk such that for all n > Nk it is true that |an − bn | < n−k .5 (Final attempt) A bit generator is pseudorandom if it passes all statistical tests that run in probabilistic polynomial time. if n is large. We have still not overcome all problems with the deﬁnitions as can be seen from following miniature version of SG . Allowing randomness makes the deﬁnition stronger since anything that passes all probabilistic polynomial time statistical tests also pass all deterministic polynomial time statistical tests. The choice to allow statistical tests to be probabilistic is not clear. Furthermore. The probability is taken over the random output of G and the random choices of S. Deﬁnition 10. Deﬁnition 10.7 (Final attempt) Let S be a statistical test and let G be a bit generator. if x is a string that could have been generated by G then there is some small but positive probability that sg will output 1 while if x cannot be output by G then this probability is 0. Thus somehow this test is “cheating” and we change the deﬁnition to take care of this. since the exponential time needed to try all the seeds is usually too much. sG can be implemented in polynomial time. Test sG On input x of length p(n) guess n2 random seeds of length n and run G on these seeds and output 1 if one of the outputs seen from G is equal to x. Remark 10.In practice. As we have deﬁned passing statistical tests this means that G fails the test sG . Let an be the probability that S outputs 1 on a random output of G of length n and let bn be the probability that it outputs 1 on a truly random input of length n. but for many reasons (we will not go into them here) it is the better choice. it is not feasible to compute SG as described above. We change the deﬁnition to take care of this anomaly.

In other words, the difference of the behavior of the test on the outputs of the generator and on random strings goes to 0 faster than the inverse of any polynomial.

Let us first prove that once you have a pseudorandom generator which extends the seed slightly, then we get an arbitrary extension.

Theorem 10.8 If there is a pseudorandom bit generator G, then for any strictly increasing polynomial p there is a pseudorandom bit generator G′ that extends from n bits to p(n) bits.

Proof: The only problem is that G might not extend the seed sufficiently. By definition G maps n bits to more than n bits. We will assume that G outputs n + 1 bits, since if it outputs more bits we can just ignore them. Note that G remains a pseudorandom bit generator (prove this!). Now define G′ to be G iterated p(n) − n times, i.e. on an input of length n we first compute G to get a string of length n + 1, then compute G on this string to get a string of length n + 2 etc., until we have a string of length p(n). This generator produces a string of the wanted length, and it is easy to see that it works in polynomial time.

We prove that it is pseudorandom by converting a hypothetical statistical test S which distinguishes the output of G′ from random strings into a test which distinguishes the output of G from random strings. Let an be the probability that S outputs 1 on random outputs from G′ of length p(n), and let bn be the corresponding probability when the input is truly random. By assumption, for some k and infinitely many n (for notational convenience we assume this is true for all n) we have |an − bn| ≥ n^{−k}.

Consider the following probability distributions Ri, 0 ≤ i ≤ p(n) − n, on strings of length p(n): start with a truly random string of length n + i and iterate G p(n) − i − n times. Note that R0 consists of random outputs of G′ while R_{p(n)−n} consists of truly random strings. Let qi be the probability that S outputs 1 on distribution Ri. Since q0 = an and q_{p(n)−n} = bn and |an − bn| ≥ n^{−k}, there is some i such that |qi − qi+1| ≥ 1/(n^k p(n)). Let us fix this i.

Now consider the following statistical test on strings of length n + i + 1: Given a string x, iterate G p(n) − n − i − 1 times and run S. If the initial string was random we have produced an element according to Ri+1, and the probability of getting output 1 is qi+1. On the other hand, if the initial string was the output of G on a random string of length n + i, then we have produced a string according to Ri and the probability of getting a 1 is qi. This implies that we have found a way of distinguishing the output of G from random strings, and hence we have reached a contradiction since G was supposed to be pseudorandom.
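The iteration at the heart of the construction, in Python, on top of the toy one-bit-stretch generator from before. This is an illustration of the shape of G′ only; a real construction of course needs a G that actually is pseudorandom.

    def G(bits):
        # Toy generator stretching n bits to n+1 bits (not actually pseudorandom).
        return bits + (sum(bits) % 2,)

    def G_prime(seed, p_n):
        # Iterate G until the string has length p(n), as in the proof of Theorem 10.8.
        bits = tuple(seed)
        while len(bits) < p_n:
            bits = G(bits)
        return bits

    print(G_prime((1, 0, 1), 8))   # 3 bits stretched to 8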

This should finish the proof, but the very careful reader will see that there are some minor problems. The proposed test uses two auxiliary parameters, p(n) and i. The value p(n) causes no problems since it is the value of a fixed polynomial. However, it is not clear how to find i. We sketch how to get around this problem: Let c be a constant. On a given input of length n consider the tests given by different values of i. For each such test, evaluate the test by picking n^c random inputs according to both distributions. Let i0 be the value that gives the biggest difference between the two distributions. Now run the test with i = i0 on the given input. It is a tedious (and not that easy) exercise to check that for some c this "universal" test will distinguish the random strings from outputs of G′. Note that the test obviously runs in polynomial time.

Let us next investigate the existence of pseudorandom bit generators.

Theorem 10.9 If NP ⊆ BPP then there are no pseudorandom generators.

Proof: Just observe that the test SG is in NP. Since this test distinguishes the output of G from random bits, it should not run in probabilistic polynomial time, which it would if NP ⊆ BPP.

In particular, if P = NP there are no pseudorandom generators, and thus proving the existence of such generators would prove P ≠ NP. Thus the best we could hope for is to prove that if P ≠ NP then there are pseudorandom generators. Also this is probably too much to hope for, since we cannot prove it for the moment. The reason is that P vs NP is a question about the worst case behavior of algorithms, while the existence of pseudorandom generators is an average case question. This forces us to base the construction of pseudorandom generators on even stronger assumptions.

Definition 10.10 A function f is a one-way function if it is computable in polynomial time and for any probabilistic polynomial time algorithm A the following holds. Choose a random input x of length n and compute y = f(x). If A is given y as input, then the probability that it outputs a z such that f(z) = y goes to 0 faster than the inverse of any polynomial.

Remark 10.11 Note that we cannot ask A to actually find the initial x, since in such a case the constant function would be one-way.

We have:

Theorem 10.12 If there is a pseudorandom bit generator then there is a one-way function.

Proof: We claim that the function given by the generator (i.e. from the seed to the output) is one-way. By Theorem 10.8 we can assume the generator expands n bits to 2n bits. Assume that the function given by this generator (let us by abuse of notation call the generator, as well as the function it computes, G) is not one-way, in other words that there is a k and an A such that A finds an inverse image of a given function value with probability at least n^{−k} (for infinitely many n). Then the following test S will distinguish outputs of G from random bits: On input x run A; suppose A outputs y; then if G(y) = x output 1, otherwise output 0.

If x is a truly random string of length 2n then the probability that the test S outputs 1 is bounded by the probability that x can be output from G. Since there are 2^{2n} possible strings and at most 2^n outputs from G, this probability is bounded by 2^{−n}. On the other hand, if x is the output of G then the probability of output 1 is exactly the success probability of A, which by assumption is at least n^{−k} (for infinitely many n). Thus this test distinguishes the output of G from random strings, contradicting that G is pseudorandom (the test is polynomial time since both A and G are polynomial time). This proves that G is a one-way function.

It was a long standing open question whether the converse of Theorem 10.12 would also be true, i.e. whether starting from any one-way function it would be possible to construct a pseudorandom bit generator. In 1990 it was proved by Håstad, Impagliazzo, Levin and Luby that this is indeed the case, but their proof is much too complicated for the present set of notes. Instead we prove the following theorem, which is due to Yao (the present proof is due to Goldreich and Levin). Let a one-way length-preserving permutation be a one-way function which for each n is a 1-1 mapping on strings of length n.

Theorem 10.13 If there is a one-way length-preserving permutation then there is a pseudorandom bit generator.
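The generator built in the proof below has a very simple shape, previewed here as a Python sketch. The permutation f shown is a toy stand-in (a fixed bit rotation, which is certainly not one-way) used only to show the structure of the construction; all names are ours.

    def inner(x, r):
        # Inner product modulo 2 of two bit tuples.
        return sum(a * b for a, b in zip(x, r)) % 2

    def f(x):
        # Toy length-preserving permutation: rotate the bits (NOT one-way!).
        return x[1:] + x[:1]

    def g(x, r):
        # Stretch 2n input bits to 2n + 1 output bits: f(x), r, (x, r).
        return f(x) + r + (inner(x, r),)

    print(g((1, 0, 1, 1), (0, 1, 1, 0)))   # 9 bits from 8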

Proof: Let f be the one-way length-preserving permutation, and let (x, y) denote the inner product modulo 2 of the strings x and y (i.e. it is the parity of \sum_{i=1}^n x_i y_i). Let x and r be random strings of length n. Then we claim that the function

g(x, r) = f(x), r, (x, r)

is a pseudorandom bit generator. It is a bit generator since it expands 2n bits to 2n + 1 bits and is polynomial time computable since f is polynomial time computable. The hard part is to prove that it is pseudorandom. In other words, if f is a one-way function then (x, r) looks random to any probabilistic polynomial time machine which only has the information f(x), r. The following lemma of Goldreich and Levin will be crucial.

Lemma 10.14 Suppose we have a probabilistic polynomial time algorithm A that on input f(x), r computes (x, r) with probability greater than 1/2 + 1/Q(n), where Q is a polynomial. (Here the probability is taken over a random choice of x and r and the random choices of A.) Then there is a probabilistic polynomial time algorithm B that inverts f with probability of success at least 1/(2Q(n)).

Let us first see how Theorem 10.13 follows from Lemma 10.14. Suppose g is not pseudorandom and that S is a statistical test which outputs 1 with probability an on random bits and bn on random outputs of g. Suppose without loss of generality that an ≥ bn + n^{−k}. Let p(x, r, i) be the probability that S outputs 1 on (f(x), r, i). Then

$$a_n = 2^{-2n-1}\sum_{x,r,i} p(x,r,i) \qquad \text{and} \qquad b_n = 2^{-2n}\sum_{x,r} p(x,r,(r,x)).$$

Let \overline{(r,x)} denote the complement of (r, x). Now consider the following algorithm for predicting (x, r). On input f(x), r, run S twice; let b0 = S(f(x), r, 0) and b1 = S(f(x), r, 1). If b0 = b1 output a random bit, and otherwise output the i such that bi = 1. Consider this algorithm on input f(x), r. The probability that it outputs the correct value for (x, r) is

$$p(x,r,(r,x))\left(1 - p(x,r,\overline{(r,x)})\right) + \frac{1}{2}\left( p(x,r,(r,x))\, p(x,r,\overline{(r,x)}) + \left(1 - p(x,r,(r,x))\right)\left(1 - p(x,r,\overline{(r,x)})\right)\right)$$

which equals \frac{1}{2}\left(1 + p(x,r,(r,x)) - p(x,r,\overline{(r,x)})\right). Hence the total probability of it being correct is \frac{1}{2}(1 + a_n - b_n), and now Theorem 10.13 follows from Lemma 10.14.

Next let us prove Lemma 10.14.

Proof: (Lemma 10.14) We give a proof due to Rackoff. First observe that for at least a fraction 1/(2Q(n)) of the x's, A predicts (r, x) with probability (only over r) at least 1/2 + 1/(2Q(n)). We will describe a procedure that will be successful with high probability for each such x, and this is clearly sufficient.

We compute each bit of x individually. Let ei be the unit vector in the i'th dimension. We could ask A about f(x), ei, but there is no reason it will be correct for these inputs. We need to ask about many points, and we will use a small random subspace shifted by ei. The set of r's asked about will be pairwise independent, but we can guess the answers for the entire subspace by guessing the answers on the basis vectors. Let k be a parameter and let ⊕ denote exclusive-or. The algorithm on input y now works as follows:

Pick k random vectors r1, r2, ..., rk of length n.
For each value of the k bits b1, b2, ..., bk do
   For i = 1 to n do
      count = 0
      For all non-empty subsets S of {1, 2, ..., k} do
         Ask A about y, ei ⊕_{j∈S} rj; suppose the answer is b.
         Compute b′ = b ⊕_{j∈S} bj and set count = count + 1 − 2b′.
      Next S
      Set xi = 0 if count > 0 and 1 otherwise.
   Next i
   If f(x) = y output x and stop.
od
Report 'failure'.

Just to avoid confusion, observe that count is the number of 0-guesses minus the number of 1-guesses, and hence we are doing a majority decision. If A runs in time T(n) and f in time T1(n) then the algorithm runs in time 2^{2k} nT(n) + T1(n), and thus the algorithm is polynomial time if k is O(log n).
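Here is a runnable Python sketch of this decoding procedure, using the toy rotation permutation from before so that the predictor A can simply answer correctly; with a real one-way f and a noisy A, the majority vote is what absorbs the prediction errors. All names and parameter choices are our own.

    import random
    from itertools import combinations, product

    n, k = 6, 4

    def inner(x, r):
        return sum(a * b for a, b in zip(x, r)) % 2

    def xor(u, v):
        return tuple(a ^ b for a, b in zip(u, v))

    def f(x):                          # toy permutation (NOT one-way)
        return x[1:] + x[:1]

    def invert_via_A(y, A):
        rs = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(k)]
        subsets = [s for m in range(1, k + 1) for s in combinations(range(k), m)]
        for guess in product((0, 1), repeat=k):    # guess (x, r_j) for each r_j
            x = []
            for i in range(n):
                e_i = tuple(int(j == i) for j in range(n))
                count = 0
                for S in subsets:
                    r_S = e_i
                    for j in S:
                        r_S = xor(r_S, rs[j])
                    b = A(y, r_S)                  # predictor's guess for (x, r_S)
                    for j in S:
                        b ^= guess[j]
                    count += 1 - 2 * b
                x.append(0 if count > 0 else 1)    # majority decision
            x = tuple(x)
            if f(x) == y:
                return x
        return None

    secret = (1, 0, 1, 1, 0, 0)
    y = f(secret)
    A = lambda y, r: inner(y[-1:] + y[:-1], r)     # perfect predictor for the toy f
    print(invert_via_A(y, A) == secret)            # True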

We need to analyze the probability that we find the correct x. Let r_S^i = ei ⊕_{j∈S} rj and let b_S^i = b ⊕_{j∈S} bj, where b is A's answer on y, r_S^i. Now remember, consider the iteration of the outer loop in which every guess is correct, i.e. bj = (rj, x) for all j. If A gives the correct answer (i.e. (x, r_S^i)) to the question y, r_S^i, then xi = b_S^i. This implies that we are in pretty good shape, since we know that A gives a majority of correct answers and the r_S^i are fairly random.

Lemma 10.15 For S1 ≠ S2, r_{S1} and r_{S2} are independent and uniformly distributed on {0, 1}^n.

Proof: Suppose j ∈ S1 but j ∉ S2 (if there is no such j we can interchange S1 and S2). Now it is easy to see that r_{S2} is uniformly distributed (its definition is an exclusive-or of several things, at least one of which is uniformly random), and that for any fixed value of r_{S2} the occurrence of rj in the exclusive-or defining r_{S1} makes sure that it is still uniformly distributed.

It follows by Lemma 10.15 that the b_S^i are pairwise independent. Suppose for notational convenience that xi = 0. Then we know that count is a random variable with expected value at least (2^k − 1)/Q(n) and variance at most 2^k − 1. We can use Tchebychev's inequality:

Theorem 10.16 Let X be a random variable with expected value µ and variance v; then the probability that |X − µ| ≥ λ is bounded by v/λ².

Using this with λ = (2^k − 1)/Q(n) and v = 2^k − 1, we see that xi takes the incorrect value with probability at most Q(n)²/(2^k − 1). Now if 2^k − 1 ≥ 10nQ(n)², then the probability that xi does not take the correct value is bounded by 1/(10n). Thus the probability that some xi is incorrect is bounded by 1/10. This concludes the proof of Lemma 10.14.

Remark 10.17 We have now given a generator that extends the input by one bit, and we know by Theorem 10.8 that we can get a generator which extends the output arbitrarily. We can take this to be the following very natural generator: Pick x and r randomly and let bi = (f^i(x), r), for i = 1, 2, ..., p(n), where f^i is f iterated i times.
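The generator of Remark 10.17 in Python, again with the toy rotation standing in for a genuine one-way permutation; with this f the output is of course not pseudorandom, the shape of the construction is the point.

    def inner(x, r):
        return sum(a * b for a, b in zip(x, r)) % 2

    def f(x):                       # toy stand-in for a one-way permutation
        return x[1:] + x[:1]

    def generator(x, r, p_n):
        # b_i = (f^i(x), r) for i = 1, ..., p(n).
        out = []
        for _ in range(p_n):
            x = f(x)
            out.append(inner(x, r))
        return tuple(out)

    print(generator((1, 0, 1, 1), (0, 1, 1, 0), 10))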

Now that we have studied good generators, it is natural to ask what happens if we use these generators to produce the random bits needed by a probabilistic algorithm. Suppose we have a probabilistic machine M which recognizes a BPP-language B, and let G be a pseudorandom generator. Suppose M uses p(n) random bits and that for some small constant δ, G extends n^δ bits to p(n) bits. The latter can be assumed by Theorem 10.8. Now consider the following statistical test SM,x of a random string r of length p(n): Given x, run M on input x with random coins r. Answer with the output of M.

We know that when x ∈ B and r is random, then the probability that this test outputs 1 is at least 2/3, while otherwise it is at most 1/3. Since G by assumption passes all statistical tests, it is tempting to think that the same is true for outputs of G. This would imply that we would get a theorem similar to Theorem 9.11 saying that B could be recognized in time close to 2^{n^δ}, since we would only have to try all seeds of G rather than all sets of p(n) coins. The reason this is not true is that the test has a parameter x which might be hard to find (the parameter M is not a problem since it is of constant size).

All is not lost, since we could change the statistical test to choose x randomly and then study the behavior of M. Then we could prove that we had a deterministic algorithm that ran in time close to 2^{n^δ} and was correct for most inputs. However, since we have not studied the concept of being correct for most inputs we will not pursue this approach. Instead we have:

Definition 10.18 A non-uniform statistical test is a probabilistic polynomial time algorithm that on inputs of length n gets an advice string an which is of polynomial length.

Remark 10.19 Note that the advice is the same for all strings of length n. The interested reader might want to prove that the given definition corresponds to polynomial size circuits without any uniformity constraints.

Definition 10.20 A pseudorandom generator is non-uniformly strong if it passes all non-uniform statistical tests.

This definition is stronger than the previous definition since we are allowing stronger statistical tests. In general all proofs for the uniform case translate to the non-uniform case; we will not do so here. In particular Theorem 10.8 remains true. It turns out that the existence of such generators is equivalent to the existence of one-way functions where we allow the inverting algorithm to have an advice. We now finish the discussion with a theorem of Yao.

If one is willing to make stronger assumptions, then one can make stronger conclusions.

Theorem 10.21 If there is a pseudorandom generator which is non-uniformly strong, then BPP ⊆ ∩_{ε>0} DTIME(2^{n^ε}).

Proof: The proof is as outlined above. Suppose B ∈ BPP and that it is recognized by M which uses p(n) coins and runs in time T1(n) (both these bounds are some polynomials). Let δ < ε and let G be a non-uniformly strong generator which extends n^δ bits to p(n) bits and runs in time T2(n) (which also is a polynomial).

Now let x be an arbitrary input of length n and consider the above test SM,x. This test uses the advice x, but since G is non-uniformly strong, it passes this test. This implies that if we replace the coins by a random output of G, then we still have essentially the same probability of acceptance. We now just try all the 2^{n^δ} possible seeds for G and take a majority decision. This can be done in time 2^{n^δ}(T1(n) + T2(n)) and this is O(2^{n^ε}). Since both B and ε were arbitrary we have proved the theorem.

Thus we have proved that if there are one-way functions in the non-uniform setting, then BPP can be simulated in time which is significantly cheaper than exponential time. In particular, if there is a polynomial time computable function such that inverting this function (in the non-uniform setting, with non-negligible success ratio) on inputs of length n requires time 2^{cn} for some constant c > 0, then BPP = P.
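The simulation in the proof, sketched in Python: enumerate all seeds of the generator, expand each to a full coin string, and take a majority vote. The generator and machine here are toy stand-ins and all names are our own; a real proof needs a non-uniformly strong pseudorandom generator.

    from itertools import product

    def expand(seed, p_n):
        # Toy "generator": repeat the seed cyclically (NOT pseudorandom).
        return tuple(seed[i % len(seed)] for i in range(p_n))

    def derandomized(M, x, seed_len, p_n):
        votes = sum(M(x, expand(seed, p_n))
                    for seed in product((0, 1), repeat=seed_len))
        return votes > 2 ** seed_len / 2       # majority decision over all seeds

    # Toy BPP machine from before, reading its first 3 coins.
    M = lambda x, coins: (x % 2 == 1) != all(c == 0 for c in coins[:3])
    print(derandomized(M, 5, 4, 8), derandomized(M, 4, 4, 8))   # True False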

11 Parallel computation

The price of processors has dropped remarkably in the last decade, and it is now feasible to make computers that have a large number of processors. The most famous multi-processor computer might be the Connection Machine, which has 2^16 = 65536 processors. The concept of having many processors working in parallel leads to many interesting theoretical problems. In this section we will just give the first definitions and show some basic properties.

One could phrase the main question as a variant of a traditional math problem: Suppose one computer can compute a given function in one million seconds; how long would it take a million computers to compute the same function? The answer to this question is not known, but it seems like the answer could be anywhere from one second to a million seconds depending on the function. It is an important theoretical problem to identify the computational tasks that can be parallelized in an efficient manner.

We choose here to study the circuit model of computation and, as we will see, communication between processors will be ignored. When many processors cooperate to solve a problem it is of crucial importance how they communicate. In fact it seems like in practice this is the overshadowing problem in making large scale parallel computation efficient. It is hard to get this fairly practical consideration into the theoretical models in a suitable manner, and this complication will usually get lost. We do not want to argue that the model does not reflect reality; we only want to point out that there is one important aspect missing.

11.1 The circuit model of computation

We have previously briefly discussed the concept of a Boolean circuit. It is a directed acyclic graph with three types of nodes: input nodes, operation nodes and output nodes. The input nodes are labeled by variable names xi and the operation nodes are labeled by logical operators. We will here only allow the operators ∧, ∨ and ¬. The inputs to a node v is the set of nodes w for which (w, v) is an edge. The circuit computes a function {0, 1}^n → {0, 1} in the natural way. (Substitute the value of the i'th coordinate for xi and then evaluate the nodes by letting each operation node take the value which corresponds to the corresponding operator applied to the inputs of that node.)

We will be interested in two parameters of the circuit, its size and depth. The size of a circuit Cn will be denoted by

i. given a language B ∈ P we take the corresponding Turing machine MB and given n we can now construct a circuit Cn which will give the same output as MB on all inputs of length n. Theorem 11. The circuit constructed the computation tableau of M row by row. Thus if we are interested in fast parallel computation it is interesting to construct small circuits with small depth. The size of this circuit will only be a constant greater than the size of the computation tableau of MB on inputs of length n.|Cn | and is equal to the number of nodes it contains while the depth will be denoted by d(Cn ) and is the longest directed path from the input to the output. One immediate question is whether the converse of the above theorem is true. The way to resolve this is to let a function be computed by a sequence of circuits (Cn )∞ where Cn computes f on inputs of length n. We will then be n=1 interested in the growthrate of the size and depth of Cn as a function of n. In particular.25 we saw that given a Turing machine M and an input x we could construct a circuit such that the output of the circuits was equal to the output of M on input x. Let us now state a theorem that was implicitly proved in Section 7.1 If B ∈ P then B can be recognized by polynomial size circuits. If one looks closely at that proof. one discovers that the structure of the circuit only depends on M while x enters as the input of the circuit.3. Proof: (Outline) In the proof of Theorem 7. If there is a processor at each node of the circuit then the number of processors is equal to the size of the circuit and the time needed to evaluate the circuit is equal to the depth of the circuit. The functions we have been considering so far take inputs that are of arbitrary length while a circuit can only take inputs of a given length. Remark 11. that if a function can be computed by polynomial size circuits then is it in fact true that the function lies in P ? With the current deﬁnitions 107 .2 By more eﬃcient constructions it is possible to to give a better simulation of Turing machines and decrease the size of the above circuit to O(nc log n).e. If MB runs in time O(nc ) then this size will be O(n2c ) and thus we have constructed circuits for B of polynomial size. In particular we will say that a sequence of circuits is of polynomial size if the growthrate of |Cn | is not more than polynomial in n.

this is not true. The reason for this is that we have not put any conditions on how to obtain the circuits Cn. To see the problem consider the following language:

B = {x | M_{|x|} halts on blank input}

As we have seen earlier, this language is not even recursive. However it has very small circuits since for each length n, either all strings of length n are in B or no string of that length is a member of B. Thus Cn could just be a trivial circuit which either always outputs 0 or always outputs 1, depending on whether Mn halts on blank input. How to decide which one to choose is non-recursive, but this is of no concern in the old definition, and the following definition is called for.

Definition 11.3 A sequence of circuits (Cn)_{n=1}^∞ is P(L)-uniform iff there is a Turing machine M, which works in polynomial time (logarithmic space), that on input 1^n prints a description of Cn on its output tape.

Using this definition we get:

Theorem 11.4 B can be computed by polynomial size P-uniform circuits iff B ∈ P.

Proof: (Outline) First just observe that the circuits described in the above proof are P-uniform. They are in fact L-uniform by the proof of Theorem 7.25. This proves one of the implications in the theorem. To see the reverse implication, suppose that B is recognized by polynomial size P-uniform circuits. Then on input x a Turing machine can first construct the circuit C_{|x|} and then compute its value on input x. The first part is polynomial time by the definition of P-uniform, and the second part is easily seen to be polynomial time.

11.2 NC

We can now define our main complexity class of parallel computation.

Definition 11.5 A set B is in NC^k iff it can be recognized by a family of L-uniform circuits (Cn)_{n=1}^∞ where Cn is of polynomial size and d(Cn) ≤ O((log n)^k). Furthermore NC = ∪_{k=1}^∞ NC^k.

Remark 11.6 The name NC is short for Nick's Class. It is named after Nick Pippenger, who was one of the first researchers to study this class.

Remark 11.7 Normally one requires even stricter uniformity constraints for NC^1 than L-uniformity. For reasons that go beyond the scope of these notes, this gives a better definition. However, to make life easier we will stick with the above definition.

From a theoretical standpoint, NC is considered as the subset of P which admits ultrafast parallel algorithms (time O((log n)^k)). Some of the algorithms we present will also be efficient in practice and some will not. When we describe how to construct circuits, we will be quite informal and talk in terms of processors doing simple operations. Formally this should of course be replaced by nodes in circuits, but somehow processors go better with the intuition. We can now make an obvious observation.

Theorem 11.8 NC ⊆ P.

Proof: This follows immediately from the definition of NC and Theorem 11.4.

Example 11.9 Given two n-bit numbers, compute their sum. This might look straightforward since we can have one processor which takes care of each digit, but we have to do something intelligent with the carries, since if we treat them without thinking, we will need circuits of linear depth. You see the reason for this if you try to add the binary numbers 01111111 and 00000001. The critical point is to discover quickly if you have a carry coming from your right. The process to do this is called carry-look-ahead.

We use one processor for each digit of the two numbers. This processor checks whether that position Generates, Propagates or Stops a carry and marks the position G, P or S accordingly. We can combine this information in a binary tree to see how longer blocks will behave with respect to carries. For instance, a block of length two will generate a carry if it looks like GG, GP, GS or PG, it will propagate a carry if it looks like PP, and it will stop a carry if it looks like PS, SG, SP or SS. Continuing in this way we can quickly compute whether certain intervals generate, propagate or stop a carry. This will be the basic idea. How to do this might best be seen by an example. Suppose the numbers are 01111011 and 01001010. We get the representation SGPPGSGP and

we build a binary tree (see Figure 11) to find out how longer blocks behave.

[Figure 11: Carry look-ahead tree; the G/P/S values are combined going up the tree.]

Now to see if we have a carry in a given position, we just have to figure out whether the corresponding suffix of the string SGPPGSGP generates a carry. It is quite easy to see how this is done. One way to phrase it formally is the following: Suppose you want to know if there is a carry in a given position; start at that position and walk down the tree. Whenever you go right, write down what you see coming in from the left to that same node. Finally evaluate the string you get. For instance, if you start in position 6 in the given tree, you get the string PG, which evaluates to G, and thus there is a carry in position 6. One can also view this last step as sending the appropriate values down the tree as indicated in Figure 12.

[Figure 12: Carry look-ahead tree; the appropriate values are sent down the tree.]

By actually building this tree in the circuit, we see that we get a circuit of depth O(log n) which computes all the carries, and since once we know the carries the rest is simple, we can conclude that addition belongs to NC^1.
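A Python sketch of the carry computation as a prefix computation over the G/P/S alphabet; `combine` is the two-block rule from the text, and a real circuit would evaluate it as a tree of depth O(log n) where this sketch simply scans. All names are our own.

    def classify(a, b):
        # One position: Generate, Propagate or Stop a carry.
        return "G" if a & b else ("P" if a | b else "S")

    def combine(left, right):
        # Behavior of a block given its halves (left = more significant):
        # G* -> G, S* -> S, P* -> behavior of the right half.
        return right if left == "P" else left

    def combine_all(block):
        acc = "P"                  # an empty block just propagates
        for s in block:
            acc = combine(acc, s)
        return acc

    def carries(x, y):
        # x, y: bit tuples, most significant first; returns carry-into flags.
        syms = [classify(a, b) for a, b in zip(x, y)]
        return [combine_all(syms[i + 1:]) == "G" for i in range(len(syms))]

    x = (0, 1, 1, 1, 1, 0, 1, 1)   # 01111011
    y = (0, 1, 0, 0, 1, 0, 1, 0)   # 01001010
    print("".join(classify(a, b) for a, b in zip(x, y)))   # SGPPGSGP
    print(carries(x, y))           # True at index 5, i.e. position 6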

Example 11.10 Given two n-bit numbers, we want to multiply the numbers. It is not hard to see that this can be reduced to adding together n n-digit numbers (just do the ordinary multiplication algorithm we learned in first grade). Now by the previous example we can add these numbers pairwise in depth O(log n) to obtain n/2 numbers whose sum we want to compute. Adding the numbers pairwise for log n rounds gives us the answer. This gives a circuit of polynomial size and depth O((log n)^2). In fact, multiplication and addition of n numbers can both be done in depth O(log n). We leave this as an exercise.

Example 11.11 Given two n × n matrices, multiply them. Let us suppose the entries are m-bit integers. Suppose the given matrices are A = (aij) and B = (bij). Then we want to compute \sum_{j=1}^n a_{ij} b_{jk} for all i and k. We have the following algorithm:

1. Compute all the products a_{ij} b_{jk} for all i, j and k.
2. Compute the sums \sum_{j=1}^n a_{ij} b_{jk} for all i and k.

If we have O(m^2 n^3) processors we can do the first operation in depth O(log m) (by the exercise extending the multiplication example), while the second can be done with O(n^3 m) processors in depth O(log nm) (using the same exercise). Thus the entire computation uses a polynomial number of processors and O(log nm) depth.

The problems that seem to be hardest to give a parallel solution to are problems where the natural sequential algorithms are iterative in nature. Examples of such problems are computing integer GCDs, solving linear equations and computing the depth-first search tree of a graph. Of these, the linear equation problem can be solved in NC, and finding a depth-first search tree is known to be in RNC (Random NC, i.e. circuits of small depth where you allow random inputs and only require that you have a good probability of finding a depth-first search tree), while for integer GCDs there are not known to be any circuits of sublinear depth.

Just to give an example of something nontrivial, let us give as a last example an algorithm to compute the determinant of a matrix which runs in O((log n)^2) time and uses a polynomial number of processors.

Example 11.12 Given a matrix M, compute its determinant. We have to assume some facts from linear algebra. The trace of a matrix M (denoted by Tr(M)) is the sum of its diagonal elements, i.e. Tr(M) = \sum_{i=1}^n m_{ii}. If λi denote the eigenvalues of M, then it is well known that \prod_{i=1}^n λ_i = det(M) and that Tr(M) = \sum_{i=1}^n λ_i. Let s_k = Tr(M^k), which equals \sum_{i=1}^n λ_i^k since the

. . This is no coincidence and in fact sequential space and parallel time are quite related as soon as one does not put any other restrictions on the computation.13 Suppose S(n) ≥ log n for all n. . The number of processors is quite bad but still polynomial. Now it is easy to check that A−1 = n B i and thus by some additional matrix-multiplications we i=0 can compute the inverse of A and hence we can solve for the ci and ﬁnd cn = det(−M ). . . ... If B can be recognized in space O(S(n)). cn = − s1 s2 s3 s4 . . n} and |S| is its cardinality. . It is standard i=1 that cn = det(−M ) and c(λ) = n (λ − λi ).. and the depth is O((log n)2 ).. 11. ... Then A can be written as I − B where B is strictly lower-triangular. We will use one processor pC for each possible conﬁguration C of MB . 0 0 0 4 . . . . . From this it follows that i=1 ci = S:|S|=i(−1)i j∈S λj where S is a subset of {1.. . . 2. 0 2 s1 s2 . Once we can compute determinants we can do almost all operations in linear algebra. The drawback in practice is that we get fairly large circuits. . sn sn−1 sn−2 sn−3 sn−4 .3 Parallel time vs sequential space A couple of the examples of problems that we could do in NC also appeared as problems doable in small space. . . If we multiply each row by a suitable number we can assume that all the entries on the diagonal of A is unity.. Proof: Suppose B is recognized by MB which runs in space O(S(n)). .eigenvalues of M k are λk . n Thus all that remains is to prove that we can solve Ax = b where A is a lower-triangular matrix. Using this one can prove that 1 s1 s2 s3 . Theorem 11. 0 0 0 0 0 c1 c2 c3 c4 . There 112 . The sk are easy to compute in parallel since i we have already shown how to compute matrix-products and M k can be computed by O(log k) matrix-products done in sequence. 0 0 3 s1 . . . . . then it can be done by circuits of depth O(S 2 (n)).. The characteristic polynomial of M is det(λI − M ) = λn + n λn−i ci = c(λ). .

11.3 Parallel time vs sequential space

A couple of the examples of problems that we could do in NC also appeared as problems doable in small space. This is no coincidence, and in fact sequential space and parallel time are quite related as soon as one does not put any other restrictions on the computation.

Theorem 11.13 Suppose S(n) ≥ log n for all n. If B can be recognized in space O(S(n)), then it can be done by circuits of depth O(S^2(n)).

Proof: Suppose B is recognized by MB which runs in space O(S(n)). We will use one processor pC for each possible configuration C of MB. There are 2^{O(S(n))} configurations, and thus we will use many processors, but this is of no concern for us for the moment.

At stage i of the algorithm, pC finds out which configuration C would change to in 2^i computation steps. This is easy for i = 0, and in general it is done as follows. After stage i − 1, pC already knows what configuration, C′ say, C transforms to in 2^{i−1} steps. On the other hand pC′ knows which configuration C′ transforms to in 2^{i−1} steps, and this is the desired answer. Since MB runs in time 2^{O(S(n))}, in O(S(n)) stages the processor corresponding to the initial configuration will know the result of the computation.

Thus the critical parameter is what depth is required to do one stage. A single stage can be done by having a binary tree of depth O(S(n)) which connects each processor to each other processor and selects the processor corresponding to the current information. To sum up: we have O(S(n)) stages where each stage can be done in depth O(S(n)). This gives total depth O(S^2(n)), and thus we have proved the theorem.

Corollary 11.14 L ⊆ NC^2.

Proof: By Theorem 11.13 we know L can be done by circuits of depth O((log n)^2). By inspection of the proof we conclude that the circuits are of polynomial size. We leave the details to the reader.
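The repeated-squaring idea in the proof of Theorem 11.13, in Python: each "processor" stores where its configuration goes in 2^i steps, and one stage composes this map with itself. Configurations are just numbered here, and the toy transition map is our own.

    def simulate(step, start, accept, num_configs, stages):
        # next2i[c] = configuration reached from c in 2^i steps; i = 0 initially.
        next2i = [step(c) for c in range(num_configs)]
        for _ in range(stages):          # O(S(n)) stages
            next2i = [next2i[next2i[c]] for c in range(num_configs)]  # double
        return next2i[start] == accept

    # Toy machine: 8 configurations, each step moves c -> min(c + 1, 7).
    print(simulate(lambda c: min(c + 1, 7), 0, 7, 8, 3))   # True after 2^3 steps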

Thus. This might best be seen by an example. we require nothing extra while when it turns right we require that we have marked the value of the left input to that node. To update the path we need to be able to ﬁnd out what the circuit looks like locally.V 1 1 Figure 13: The path at one point in time V V 1 0 Figure 14: The path at next point in time left. Also we keep track of what kind of operation we have at each node of the path. but this can be done in space O(S(n)) by the uniformity condition. The active path are the shaded nodes. The path is of length O(S(n)) and thus can be represented in this space. 114 V 0 V 0 V V V x 1 V V x 2 . Suppose our path at one point is given by Figure 13. We start with the path always going to the left and it is now easy to see that if we always move to the next input to the right it is easy to update the tree. Assuming that x1 = 0 then at the next time-step a possible path is given by Figure 14.

Thus we have completed the proof.

Using S(n) = log n we immediately get:

Theorem 11.16 NC^1 ⊆ L.

With this close connection between L and NC, the following theorem is not surprising:

Theorem 11.17 If A is P-complete then P = NC ⇔ A ∈ NC.

Proof: The proof is more or less the same as the proofs of other theorems of this type, but let us give it anyway. If P = NC then clearly A ∈ NC. On the other hand, if A ∈ NC then we have to construct NC-circuits for any function in P. Given any B ∈ P we know by the definition of P-completeness that there is a function f computable in L such that x ∈ B ⇔ f(x) ∈ A. However, we know by Corollary 11.14 that f can be computed also in NC^2. Combining this circuit with the NC-circuit for A gives an NC-circuit for B.

As a final comment, let us note that for one of the most famous problems that seem hard to do in parallel, namely integer GCDs, it is not known that this problem is P-complete.

12 Relativized computation

As a tool in understanding computation, one particular way of augmenting the power of a computation has been studied extensively. For definiteness, assume that we use the Turing machine model of computation. Let A be a fixed set and give the machine an extra tape, called the query tape. On this tape the machine can write a string x and then enter a special state called the query state. In one time-step the query tape now changes content. The new value will be 1 if x ∈ A and 0 otherwise. Thus the machine is allowed to ask questions about the set A and very inexpensively obtain correct answers. The set A, which is called the oracle set, should be thought of as a difficult set, since otherwise the machine could have answered the questions itself at only a slightly higher cost. The computation is said to take place relative to the oracle A (and hence the title relativized computation). A Turing machine M with an oracle A is usually denoted M^A to avoid confusion.

Now it is natural to define P^A as the set of languages that can be recognized in polynomial time by Turing machines with oracle A. In a similar way all the other complexity classes can be defined. This definition is not standard when dealing with L and NL. We will count the part of the query tape used as part of the work-tape of the machine, and hence this should be bounded when we are looking at space bounded classes, but we will not consider those classes here. Instead we will only consider P^A, NP^A, BPP^A and PSPACE^A.

The reason this concept is interesting is that almost all proofs that are known remain true if we allow all machines involved in the proof to have access to the same oracle. In particular this is the case for all proofs given in these notes up to this point. Let us state some theorems that follow (the reader is encouraged to go back and check the proofs).

Theorem 12.1 For all oracles A, P^A ⊆ NP^A ⊆ PSPACE^A.

Theorem 12.2 For all oracles A, P^A ⊆ BPP^A ⊆ PSPACE^A.

The idea is that if P ⊊ NP (i.e. that the inclusion would be strict) had an "easy" proof, then P^A ⊊ NP^A would be true for all oracles A. However this is not the case:

Theorem 12.3 If A is a PSPACE-complete set then P^A = NP^A = BPP^A = PSPACE^A = PSPACE.

Proof: It is sufficient to prove that PSPACE ⊆ P^A and that PSPACE^A ⊆ PSPACE. For the first part, let B be anything in PSPACE. Since A is PSPACE-complete we have B ≤_p A, i.e. there is a polynomial time computable function f such that x ∈ B ⇔ f(x) ∈ A. But this makes B easy to recognize for a machine with oracle A: on input x it just computes f(x), writes this on the oracle tape, reads the answer from the oracle and outputs this as its own answer. Thus B ∈ P^A and we conclude that PSPACE ⊆ P^A.

For the second part, suppose we are given a machine M^A that recognizes some language in PSPACE^A. We have to convert this into an ordinary PSPACE-machine which recognizes the same language. Essentially we have to get rid of A, but since A is in PSPACE this is not too difficult. Build a subroutine S which takes an input x and outputs 1 if x ∈ A and 0 otherwise. This subroutine can be made to run in polynomial space. Now modify M^A such that instead of entering the query-state it runs S. By definition the result is the same, and it is easy to see that this modified machine also runs in polynomial space.

Theorem 12.3 rules out the possibility of an easy proof that P ≠ NP. This might raise, in a more serious way (at least it seems), the possibility that P = NP. However, oracles will not support this either:

Theorem 12.4 There is an oracle B such that P^B ≠ NP^B.

Proof: The oracle B will not be as natural as the oracle A given above, and we will construct it piece by piece. Together with B we will also define a language L(B) which for all B will be in NP^B, but we will cleverly construct B such that L(B) is not in P^B.

Definition 12.5 Let L(B) be a language which only contains strings which solely consist of 1's (such a language is called a unary language). The string of n 1's is in L(B) if and only if there is at least one string x of length n such that x ∈ B.

First observe that for any oracle B, L(B) is in NP^B. Formally, L(B) is recognized by the following algorithm:

1. If there is a '0' in the input, reject and stop.
2. Nondeterministically write down a query to the oracle of the same length as the input. If the oracle answers 1 accept, otherwise reject.

To verify that this algorithm is correct is left to the reader. Next we have to define B such that L(B) is not in P^B. Let M_i^B be an enumeration of all oracle machines that run in polynomial time. This is a slightly subtle point, since whether an oracle Turing machine runs in polynomial time depends on the oracle, and we have not yet decided what the oracle should be. This is no real problem, and we get around it as follows: Assume that M_i^B is an enumeration of all Turing machines which has the property that each machine appears an infinite number of times. Equip M_i^B with a stop-watch such that if it has not halted in i|x|^i steps on input x, it automatically halts and outputs 1. Now all sets recognized by a polynomial time oracle machine are recognized by some M_i^B (we need to repeat each machine infinitely many times since we do not know for which i it is true that it runs in time i n^i).

We will now go through an infinite number of stages. In stage i we determine a little bit more of the oracle B to make sure that M_i^B does not recognize L(B). Let a string be undetermined if we have not yet decided whether it will be in B.

n0 = 1
for i = 1 to ∞ do
   Make ni the smallest number bigger than n_{i−1} such that 2^{ni} > i·ni^i and such that no string of length ni has been determined.
   Run M_i^B on input 1^{ni}. Whenever the machine asks about an undetermined string, fix that string not to be in B.
   If M_i^B accepts the input then
      Make sure that no string of length ni is in the oracle set.
   else
      Put one undetermined string of length ni in the oracle set.
   endif
next i
Fix all undetermined strings not to be in B.

For the constructed B, M_i^B will not accept L(B), since it will make an error on 1^{ni}. Hence we need only check that the construction is not contradictory. The only nonobvious point is that when needed there exists

It turns out that also all the other open questions can be relativized in both possible ways. Let us next take NP versus PSPACE.

Theorem 12.6 There is an oracle C such that NP^C ≠ PSPACE^C.

Proof: This proof will very much follow the same lines as the last proof. Let us start by defining the language.

Definition 12.7 Let L⊕(C) be a unary language such that 1^n ∈ L⊕(C) iff there is an odd number of strings of length n in C.

First observe that for any oracle C, L⊕(C) is in PSPACE^C. The algorithm just asks all questions of length n and keeps a counter to compute the parity of the number of strings in the oracle.
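As a concrete illustration, here is a minimal sketch of that oracle algorithm. It runs for exponential time but stores essentially only one parity bit, which is what places L⊕(C) in PSPACE^C; the oracle is a stand-in function from strings to booleans.

def in_L_parity(n, oracle):
    parity = 0
    for k in range(2 ** n):                    # ask about every string of length n
        if oracle(format(k, "0{}b".format(n))):
            parity ^= 1                        # constant-size state suffices
    return parity == 1

print(in_L_parity(3, lambda s: s in {"010", "111"}))   # False: two strings, even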

We will now construct C to make sure that L⊕(C) is not in NP^C. Using the same argument as in the last proof there is an enumeration N_1^C, N_2^C, ... of all polynomial time nondeterministic oracle machines where N_i^C runs in time at most i·n^i. We now construct C in stages:

n_0 = 1
for i = 1 to ∞ do
   Make n_i the smallest number bigger than n_{i-1} such that 2^{n_i} > i·n_i^i
      and such that no string of length n_i has been determined.
   Consider N_i^C on input 1^{n_i}.
   If there is some setting of undetermined strings to make N_i^C accept then
      Make such a setting, by fixing at most i·n_i^i strings, and fix the
      remaining strings of length n_i to make sure that an even number of
      strings of length n_i are in C.
   else
      Fix strings to make sure that an odd number of strings of length n_i
      are in C.
   endif
next i
Fix all undetermined strings not to be in C.

Again by construction, for this oracle L⊕(C) is not in NP^C. The construction can be seen to be correct by more or less the same reasoning as the last construction. Please observe that if N_i^C accepts an input then it is sufficient to fix the answers of the questions on one accepting computation path, and hence it is sufficient to fix i·n_i^i strings in the first case.

Next we have:

Theorem 12.8 There is an oracle D such that BPP^D ⊄ NP^D.

Proof: We proceed as usual.

Definition 12.9 Let Lmaj(D) be a unary language such that 1^n ∈ Lmaj(D) if a majority of the strings of length n is in D.

This language is not always in BPP^D. However, if we make sure that for each n, at least 60% or at most 40% of the strings are in the oracle set, then a simple sampling algorithm will work. This extra condition means that we have to be slightly careful in the oracle construction, but there is no real problem. We again give an algorithm to determine the oracle:

n_0 = 1
for i = 1 to ∞ do
   Make n_i the smallest number bigger than n_{i-1} such that 2^{n_i} > 10·i·n_i^i
      and such that no string of length n_i has been determined.
   Consider N_i^D on input 1^{n_i}.
   If there is some setting of undetermined strings to make N_i^D accept then
      Make such a setting, by fixing at most i·n_i^i strings, and fix the
      remaining strings of length n_i not to be in D.
   else
      Put all undetermined strings of length n_i into D.
   endif
   Fix all undetermined strings of length less than n_i not to be in D.
next i

The verification that this construction is correct is similar to the previous verifications. The reason to put all undetermined strings of length at most n_i out of the oracle is to make sure that for n's which are not chosen to be one of the n_i's it is also true that the number of strings of length n in the oracle is not close to half of all strings of length n. The condition that 2^{n_i} > 10·i·n_i^i makes sure that this is true for all n with n_i ≤ n < n_{i+1}.
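To see why the 60%/40% promise puts Lmaj(D) in BPP^D, here is a minimal sketch of the sampling algorithm; the oracle is again a stand-in function, and the sample size 600 is an arbitrary illustrative choice (by a Chernoff bound the error probability is then tiny).

import random

def in_L_maj(n, oracle, samples=600):
    # estimate the fraction of length-n strings in D by random sampling
    hits = sum(oracle(format(random.randrange(2 ** n), "0{}b".format(n)))
               for _ in range(samples))
    return hits > samples // 2

# a stand-in D where 75% of the length-4 strings are in the oracle
print(in_L_maj(4, lambda s: not s.startswith("00")))   # almost surely True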

Our last oracle construction will be:

Theorem 12.10 There is an oracle E such that NP^E ⊄ BPP^E.

Proof: We will use the same language as we used in the proof that there was an oracle B such that P^B ≠ NP^B. Remember that L(E) is a unary language such that 1^n ∈ L(E) iff there is some string of length n in E. We now construct E to make sure it is not in BPP^E. This time let M_i^E be an enumeration of probabilistic Turing machines. Here there is a slight problem in that M_i^E might not define a correct machine, in that the probability of acceptance is not bounded away from 1/2 for some inputs. However, this is only to our advantage since this means the machine will not accept any BPP-language, and we do not have to worry that it might accept L(E). We now construct E in stages as follows:

n_0 = 1
for i = 1 to ∞ do
   Make n_i the smallest number bigger than n_{i-1} such that 2^{n_i} > 10·i·n_i^i
      and such that no string of length n_i has been determined.
   Run M_i^E on input 1^{n_i}. Whenever the machine asks about a string which
      is not determined, pretend that this string is not in E. Let p be the
      probability that M_i^E accepts under these conditions.
   If p ≥ 1/2 then
      Fix all strings M_i^E could possibly ask about not to be in E. Also fix
      all other strings of length n_i not to be in E.
   else
      Find one string of length n_i such that the probability that this string
      is asked by M_i^E is at most 1/10 and put this into E. Fix all other
      strings M_i^E might possibly look at not to be in E.
   endif
next i
Fix all undetermined strings not to be in E.

Here there are some details to check. If p ≥ 1/2 then this is actually the correct probability of acceptance, since we eventually fix all the strings not to appear in E. In this case 1^{n_i} ∉ L(E) while the probability that M_i^E accepts 1^{n_i} is at least 1/2, and thus M_i^E does not recognize L(E) in the BPP sense. On the other hand, if p < 1/2 then the final oracle does not agree with the simulation.

However, since the probability of finding out the difference is bounded by 1/10, the acceptance probability remains below 0.6. Since in this case 1^{n_i} ∈ L(E), also in this case M_i^E fails to recognize L(E). We also need to check that there is a suitable string which is asked with probability at most 1/10. Since the running time of M_i^E on input 1^{n_i} is bounded by i·n_i^i, it does not ask more than this number of questions. If PR(x) is the probability that string x is asked, then

   Σ_{|x|=n_i} PR(x) ≤ i·n_i^i,

and since 2^{n_i} > 10·i·n_i^i there is some x with PR(x) < 1/10. The proof is complete.

We have now established that all the unknown inclusion properties of our main complexity classes can be relativized in different directions. The only information this gives is that the true inclusions cannot be proved with methods that relativize. In principle, methods that do not look very closely at the computation will relativize; in particular this is the case when you treat the computation as a black box which just takes an input and then produces an output (after a certain number of steps). Thus, the main lesson to learn from this section is that to establish the true relations of our main complexity classes, we have to look in a very detailed way at computation. There are a few results in complexity theory which do not relativize. One of them (IP = PSPACE) is given in Chapter 13.


13 Interactive proofs

One motivation for NP is to capture the notion of "efficient provability". If A ∈ NP and x ∈ A then there is a short proof of this fact (the nondeterministic choices of the algorithm which recognizes A) which can be verified efficiently. By the definition of NP all proofs are correct, and an all powerful prover can always convince a polynomial time bounded verifier of a correct NP-statement. As we did with regard to ordinary computation, we can introduce randomness and decrease the requirements. A proof will be a discussion (interaction) between an all powerful prover and a probabilistic polynomial time verifier. Before we make a formal definition let us give an example.

Example 13.1 Given two graphs G1 and G2, both on n vertices. G1 and G2 are said to be isomorphic iff there is a permutation π of the vertices such that (i, j) is an edge in G1 iff (π(i), π(j)) is an edge in G2. In other words, there is a relabeling of the vertices that makes the two graphs identical. This problem is in NP since one can just guess the permutation. On the other hand it is not known to be in P (or co-NP), nor known to be NP-complete. Now consider the following protocol for proving that two graphs are not isomorphic.

For m = 1 to k:
   The verifier chooses a random i (1 or 2) and sends a graph H which is a
      random permutation of G_i to the prover.
   The prover responds j.
   The verifier rejects and halts if i ≠ j.
next m
The verifier accepts.

In other words, the prover tries to guess which graph the verifier started with, and the verifier accepts if he always guesses correctly. Now suppose that G1 and G2 are not isomorphic. Then H is isomorphic only to G_i and the all powerful prover can tell the value of i and always answer correctly. On the other hand, if G1 and G2 are isomorphic then, independent of the value of i, the graph H is a random graph isomorphic to both G1 and G2. Thus there is no way the prover can distinguish the two cases, and if he tries to answer he will each time fail with probability 1/2. Thus the probability that he can incorrectly make the verifier accept is 2^{-k}, which is very small if k is large.
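Here is a minimal runnable sketch of this protocol. Graphs are sets of edges, and the all powerful prover is simulated by a brute-force isomorphism test; that test is of course not efficient, which is exactly what the definition of an interactive proof allows.

import random
from itertools import permutations

def permute(graph, pi):
    # relabel the vertices of an edge set according to the permutation pi
    return {tuple(sorted((pi[u], pi[v]))) for (u, v) in graph}

def isomorphic(g, h, n):
    return any(permute(g, pi) == h for pi in permutations(range(n)))

def protocol(g1, g2, n, k=10):
    for _ in range(k):
        i = random.choice((1, 2))
        pi = list(range(n))
        random.shuffle(pi)
        h = permute((g1, g2)[i - 1], pi)       # a random permutation of G_i
        j = 1 if isomorphic(h, g1, n) else 2   # the prover's best guess
        if i != j:
            return False                       # the verifier rejects
    return True                                # the verifier accepts

path = {(0, 1), (1, 2)}
triangle = {(0, 1), (1, 2), (0, 2)}
print(protocol(path, triangle, 3))   # True: the graphs are not isomorphic

If the two graphs are isomorphic, the guess j is right only with probability 1/2 per round, so a run like this one would accept with probability 2^{-10}.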


Thus, for all practical purposes, if k = 100 and the prover always answers correctly, the graphs will be non-isomorphic.

A discussion (or interaction) of the type described in the example will be called an interactive proof. Interactive proofs were defined by Goldwasser, Micali and Rackoff in 1985. A different definition, later proved to give the same class of languages, was given independently by Babai around the same time. Interactive proofs attracted a lot of attention at the end of the 1980's and we will only touch on the highlights of this theory. Let us formalize the properties wanted. The number of exchanges of messages might depend on the length of the input, but since we want the entire process to be polynomial time, we limit it to be a polynomial number in the length of the input.

Definition 13.2 A language A admits an interactive proof iff there is an interaction between a probabilistic polynomial time verifier V and an all powerful prover P such that:

1. (Completeness) If x ∈ A then the probability (over V's random choices) that V accepts is at least 2/3.
2. (Soundness) If x ∉ A then no matter what the prover does, the probability (over V's random choices) that V accepts is at most 1/3.

Definition 13.3 The complexity class IP is the set of languages that admit an interactive proof.

Let us first state an equivalent of Theorem 9.5.

Theorem 13.4 If A ∈ IP then there is an interaction between a probabilistic polynomial time verifier V and an all powerful prover P such that:

1. If x ∈ A then the probability (over V's random choices) that V accepts is at least 1 − 2^{−|x|}.
2. If x ∉ A then no matter what the prover does, the probability (over V's random choices) that V accepts is at most 2^{−|x|}.

Proof: (Outline) The proof is very similar to the proof of Theorem 9.5. We just run the protocol many times and make a majority decision in the end. We leave the details to the reader.
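A minimal numerical sketch of this amplification, with one execution of the interaction abstracted as a stand-in function that returns accept or reject:

import random

def amplified(run_once, t):
    # repeat the interaction t times and take the majority decision
    return sum(run_once() for _ in range(t)) > t // 2

# a stand-in protocol accepting a true statement with probability 2/3; with
# t = 101 repetitions the majority is wrong only with exponentially small
# probability
print(amplified(lambda: random.random() < 2 / 3, 101))   # almost surely True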

A far less obvious fact is that one can in fact obtain perfect completeness (i.e. when x ∈ A then the probability that V accepts is 1). Proving this would take us too far and we omit this theorem.

For the first couple of years, one of the main drawbacks of the theory of interactive proofs was the small number of languages not in NP that admitted interactive proofs. This was dramatically changed in December 1989, when work of Lund, Fortnow, Karloff and Nisan, and finally Shamir, led to the following remarkable theorem:

Theorem 13.5 IP = PSPACE.

Proof: (Outline) The fact that IP ⊆ PSPACE was established quite early in the theory of interactive proofs. This inclusion was no surprise since PSPACE is a big complexity class; it was the reverse inclusion that was the big surprise. A formal proof is slightly cumbersome (but not really hard) and hence let us only give an outline.

Suppose A ∈ IP and the interaction that recognizes A contains k pairs of messages. We denote the i-th prover message by p_i and the i-th verifier message by v_i, and assume that the prover sends the first message in each round. Now let α be any partial conversation consisting of the first s messages for some s, and let Pr(x, α) be the probability that V accepts given that the initial conversation is α, that P plays optimally in the future and that V follows his protocol. Our goal is to compute Pr(x, e), where e is the empty string, since this number is at least 2/3 when x ∈ A and less than 1/3 otherwise.

Now if the next message is by the verifier then Pr(x, α) = E(Pr(x, αv_i)), where E is the expected value over the verifier message v_i. On the other hand, if the next message is by the prover then Pr(x, α) = max Pr(x, αp_i), where the maximum is taken over all messages p_i. Finally, when α is a full conversation then Pr(x, α) is 1 iff the verifier would have accepted after the conversation α and 0 otherwise; by assumption this can be computed in polynomial time. Using these equations it is easy to give an algorithm that proceeds in a depth first search fashion and evaluates Pr(x, e) in polynomial space.
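As an illustration, here is a minimal sketch of that depth first evaluation for a toy setting: a two-letter message alphabet and 2k alternating messages with the prover moving first. The functions prob (the verifier's message distribution) and accepts (its final decision) are hypothetical stand-ins for what the real verifier computes in polynomial time.

MSGS = (0, 1)

def accept_prob(alpha, k, prob, accepts):
    if len(alpha) == 2 * k:                      # full conversation: decide
        return 1.0 if accepts(alpha) else 0.0
    if len(alpha) % 2 == 0:                      # prover's turn: maximize
        return max(accept_prob(alpha + (m,), k, prob, accepts) for m in MSGS)
    return sum(prob(m, alpha) *                  # verifier's turn: average
               accept_prob(alpha + (m,), k, prob, accepts) for m in MSGS)

# toy protocol: the verifier flips a fair coin and accepts iff the prover's
# message predicted it; the optimal prover succeeds with probability 1/2
print(accept_prob((), 1, lambda m, a: 0.5, lambda a: a[0] == a[1]))   # 0.5

Only the current conversation lives on the recursion stack, which is the polynomial space bound of the proof.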

To prove that PSPACE ⊆ IP we need "only" give an interactive proof which recognizes TQBF, which was proved PSPACE-complete in Theorem 7.17. In fact we will use that determining the truth of the special type of quantified Boolean formulas constructed in the proof of Theorem 7.17 is PSPACE-complete. Let us recall part of this proof. We only give an outline of the argument.

We wanted to construct a formula GET(C1, C2, k) that said that the Turing machine could get from configuration C1 to configuration C2 in 2^k steps. This formula was constructed recursively using:

   GET(C1, C2, k) = ∃C ∀(A, B) ∈ {(C1, C), (C, C2)} GET(A, B, k − 1).

Now encode the ∀ quantifier as a Boolean variable x1 and rewrite the formula to the following:

   GET(C1, C2, k) = ∃C ∀x1 ∃A, B ((x1 → ((A = C1) ∧ (B = C))) ∧ (x̄1 → ((A = C) ∧ (B = C2))) ∧ GET(A, B, k − 1)).

It is not difficult to write (x1 ⇒ ((A = C1) ∧ (B = C))) ∧ (x̄1 ⇒ ((A = C) ∧ (B = C2))) as a CNF-formula with O(n) clauses, each of constant size. Let us also note that GET(Y, Z, 0) can be done by a CNF-formula with O(n) clauses of constant size. Furthermore note that the variables describing C1 and C2 do not appear in GET(A, B, k − 1). When we iterate the above construction it will be true that no variable in any quantifier will be used inside more than 3 other quantifiers.

Now assume that each configuration consists of n Boolean variables and that initially k = n. In reality they are both polynomial in the length of the input x, but this is of no importance. To summarize the discussion, the formula has the following properties:

• It has 3n quantifiers which appear in blocks of the form ∃∀∃, where the ∃ quantifiers quantify over n variables and 2n variables respectively, and the ∀ quantifiers quantify over one variable.
• All formulas between quantifiers and after the last quantifier are CNF-formulas with O(n) clauses of constant size.
• Each variable is used only inside at most one block of following quantifiers.

Now take this formula and replace all ∃ by sums (Σ) and all ∀ by products (Π). Here the sums and products extend over all variables that were originally in the scope of the quantifier.

Also replace ∧ by × and ∨ by +. Finally, for a negated variable x̄, replace it by 1 − x. Using this replacement the formula is now turned into an expression which evaluates to an integer. It is not difficult to see that this integer is 0 iff the original formula was false (prove this by induction). We will show how the prover can convince the verifier with high probability that this integer I is not 0.

First observe that I is bounded by 2^{O(n2^n)}. This is true since the value of the final CNF-formula is at most c^n, and each Σ only multiplies the value by 2^n while each Π only squares the value (remember that there is only one variable in each Π). The following lemma follows from the prime number theorem (the reader is asked to take it on faith).

Lemma 13.6 For c < 1 and x > X_c, the product of all primes less than x is at least e^{cx}, where e is the base of the natural logarithm, ≈ 2.718.

This lemma implies that there is some prime p, n^4 ≤ p ≤ O(n2^n), such that I ≢ 0 modulo p. To see this, observe that if I > 0 and it is divisible by a set of primes then it is at least the product of those primes. This implies that one can use a small prime, but this is of no major concern for the moment.

The prover starts by giving this p together with I (mod p) (which is not 0). Now consider the outermost quantified variable. Let us call it x1 and suppose it is part of an ∃ quantifier (i.e. now we are summing over its two values). Keep this variable free and evaluate the entire expression mod p with its sums and products. Naturally the result is a polynomial P(x1), and by the conditions on the formula it is of degree O(n). Here we need both that intermediate pieces of the formula are simple CNF-formulas and that the usage of each variable is very limited. The prover now gives this formal polynomial (mod p) to V. This can be done since there are O(n) coefficients, each of which can be specified with O(n) bits. The verifier verifies that P(0) + P(1) ≡ I (mod p), and responds with a random integer n1, chosen randomly among 1, 2, . . . , p − 1. The task for the prover is now to prove that P(n1) is the value of the algebraic expression when n1 is substituted for x1. The resulting algebraic expression has one quantified variable less and we can now attack the next variable. Once all the variables have been eliminated, the verifier can himself evaluate the remaining expression, and if it equals the value claimed by the prover he accepts and otherwise rejects.

Remark 13.7 In fact, if one is more careful, one can make I = 1 when the formula is true. This will make the proof slightly more efficient.
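To make the verifier's side concrete, here is a minimal sketch of one round of checking, assuming the prover's polynomial arrives as a list of coefficients modulo p. Whether the two endpoint values are added or multiplied depends on whether the variable just eliminated came from a Σ or a Π; all names here are illustrative.

import random

def eval_poly(coeffs, x, p):
    # Horner evaluation of sum(coeffs[i] * x**i) modulo p
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def round_check(coeffs, claimed, p, is_sum=True):
    p0, p1 = eval_poly(coeffs, 0, p), eval_poly(coeffs, 1, p)
    combined = (p0 + p1) % p if is_sum else (p0 * p1) % p
    if combined != claimed % p:
        return None                     # inconsistent: the prover is caught
    r = random.randrange(1, p)          # the random challenge n_1
    return r, eval_poly(coeffs, r, p)   # the prover must now defend this value

# toy round: P(x) = 2 + 3x modulo 7 with claimed value P(0) + P(1) = 0 mod 7
print(round_check([2, 3], 0, 7))        # e.g. (4, 0): continue at x1 = 4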

Let us sketch why this protocol is correct. When the formula is true there are really no complications, since the prover is all the time claiming correct statements and thus the verifier will accept with probability 1.

Suppose on the other hand that the formula is false. In particular I = 0, so the first value claimed by the prover for I (mod p) is incorrect, and hence also the first polynomial P is not correct (since it takes an incorrect value for either 0 or 1). Suppose the true polynomial is Q. Let us say that n1 is lucky for the prover if P(n1) ≡ Q(n1) (mod p). Since P − Q is a nonzero polynomial of degree O(n), it has at most O(n) zeroes. This implies that the probability that the prover is lucky at a single point is O(n/p) ≤ O(n^{−3}). If the prover is lucky once then he starts claiming correct statements and thus he will be able to convince the verifier. On the other hand, if he is never lucky then he will be forced to continue lying and the verifier will expose him in the end. Since there are only O(n^2) variables, the probability that he is ever lucky is O(n^{−1}). Thus with probability 1 − O(n^{−1}) the verifier will reject, and the protocol is correct. Note that there is really no difference between the ∀-variables and the ∃-variables; we only need the assumption on the structure of the formula to make the degree of the polynomial P small.

To give a little bit of perspective on this proof, let us give an example to show how it works.

Example 13.8 For simplicity let us work with a formula on the normal TQBF-CNF form, and in particular consider

   ∃x1 ∀x2 ∃x3 ∀x4 (x1 ∨ x2 ∨ x3) ∧ (x̄1 ∨ x̄4).

This formula is true, since if we put x1 = 0 and x3 = 1 both clauses are satisfied. It does not matter what happens with the other variables. The formula is turned into the following arithmetical expression:

   Σ_{x1=0}^{1} Π_{x2=0}^{1} Σ_{x3=0}^{1} Π_{x4=0}^{1} (x1 + x2 + x3)(2 − x1 − x4)

This is just an integer (in fact 20). A proof would go like the following.
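The claimed value is easy to verify mechanically; a short check with the existential blocks turned into sums and the universal ones into products:

def value():
    total = 0
    for x1 in (0, 1):                   # exists x1 -> sum
        prod2 = 1
        for x2 in (0, 1):               # forall x2 -> product
            inner = 0
            for x3 in (0, 1):           # exists x3 -> sum
                prod4 = 1
                for x4 in (0, 1):       # forall x4 -> product
                    prod4 *= (x1 + x2 + x3) * (2 - x1 - x4)
                inner += prod4
            prod2 *= inner
        total += prod2
    return total

print(value())   # 20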

1. The prover chooses the prime 7 (in reality it should be larger, but we are only trying to illustrate the procedure). He claims that the expression is 6 modulo 7 and in fact that

   Π_{x2=0}^{1} Σ_{x3=0}^{1} Π_{x4=0}^{1} (x1 + x2 + x3)(2 − x1 − x4)

as a function of x1 is P1(x1) = (2x1² + 2x1 + 1)(2x1² + 6x1 + 5)(1 − x1)²(2 − x1)². (Normally the prover represents these polynomials in a dense representation, but this is more convenient for hand calculation.)

2. The verifier checks that P1(0) + P1(1) ≡ 6 modulo 7 (in the future we reduce everything modulo 7 without saying). Indeed P1(0) = 20 ≡ 6 while P1(1) = 0. He now chooses a random value for x1 (in our case x1 = 3) and wants to be convinced that

   Π_{x2=0}^{1} Σ_{x3=0}^{1} Π_{x4=0}^{1} (3 + x2 + x3)(6 − x4) = P1(3) = 25 × 41 × 4 × 1 ≡ 5.

3. The prover now claims that

   Σ_{x3=0}^{1} Π_{x4=0}^{1} (3 + x2 + x3)(6 − x4)

as a function of x2 is P2(x2) = 1 + 4x2².

4. The verifier checks that P2(0)P2(1) ≡ 5 and randomly chooses x2 = 5 and asks to be convinced that

   Σ_{x3=0}^{1} Π_{x4=0}^{1} (1 + x3)(6 − x4) ≡ P2(5) ≡ 3.

5. The prover claims that

   Π_{x4=0}^{1} (1 + x3)(6 − x4)

as a function of x3 is P3(x3) = 2 + 4x3 + 2x3².

6. The verifier checks that P3(0) + P3(1) ≡ 3 and then randomly chooses x3 = 2 and wants to be convinced that

   Π_{x4=0}^{1} 3(6 − x4) ≡ P3(2) ≡ 4.

This he can do by himself, and he accepts the input since 18 × 15 is indeed 4 modulo 7.
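All the modular checks in the example can be replayed mechanically. Using the polynomials P1, P2, P3 as reconstructed in the steps above, each line below prints a computed value followed by the value asserted in the text:

def P1(x): return (2*x*x + 2*x + 1) * (2*x*x + 6*x + 5) * (1 - x)**2 * (2 - x)**2
def P2(x): return 1 + 4*x*x
def P3(x): return 2 + 4*x + 2*x*x

print((P1(0) + P1(1)) % 7, 6)          # step 2: sum check for the outer Σ
print(P1(3) % 7, 5)                    # the value the prover must now defend
print((P2(0) * P2(1)) % 7, 5)          # step 4: product check (x2 is a ∀)
print(P2(5) % 7, 3)
print((P3(0) + P3(1)) % 7, 3)          # step 6: sum check (x3 is an ∃)
print(P3(2) % 7, 4)
print((3 * (6 - 0) * 3 * (6 - 1)) % 7, 4)   # final evaluation: 18 × 15 mod 7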

As mentioned before, the above proof does not relativize (the IP ⊆ PSPACE part does relativize, but not the second part). It is not difficult to construct an oracle A such that IP^A ≠ PSPACE^A. The reason that the proof does not relativize is that if we allow oracle questions, then the condition "C1 is the configuration that follows C2" cannot be described by a low degree polynomial. This proof which does not relativize gives some hope to attack the NP vs P question. However, it is still true that no strict inclusion that does not relativize has been proved for any complexity class that includes NC^1.
